Search Results

Search found 29201 results on 1169 pages for 'game development'.


  • Triangulating a partially triangulated mesh (2D)

    - by teodron
    Referring to the above exhibits, this is the scenario I am working with: starting with a planar graph (in my case, a 2D mesh) with a given triangulation, the graph nodes are labeled RED and BLACK based on a certain criterion. (A) A subgraph containing all the RED nodes (with edges only between directly connected neighbours) is formed (note: although this figure shows a tree forming, the subgraph may well contain loops). (B) Problem: I need to quickly build a triangulation around the subgraph (e.g. as shown in figure C), but under the constraint that the already present edges must be kept in the final result. Question: is there a fast way of achieving this, given a partially triangulated mesh? Ideally, the complexity should be in the O(n) class. Some side remarks: it would be nice for the triangulation algorithm to take a certain vertex priority into account when adding edges (e.g. it should always try to build a "1-ring" structure around the most important nodes first; I can implement such a routine iteratively, but it is O(n^2)). It would also be nice to somehow reflect the "hop distance" when adding edges: add edges first between nodes that were "closer" to each other in the starting topology. Nevertheless, disregarding the remarks: is there an already known scenario similar to this one, where a triangulation is built upon a partially given set of triangles/edges?
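
    What is being asked for here is a constrained triangulation: triangulate a point set while keeping a prescribed edge set intact. Libraries such as Triangle and CGAL implement constrained Delaunay triangulation directly. For illustration, here is a minimal greedy sketch in C# (all types hypothetical, degenerate and collinear cases ignored). It runs in roughly O(n^2 log n), not the O(n) asked for, but it honours the fixed edges by seeding the accepted set with them, and the shortest-first ordering is a cheap stand-in for the "hop distance" priority:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        struct Pt { public double X, Y; public Pt(double x, double y) { X = x; Y = y; } }

        static class GreedyConstrainedTriangulation
        {
            static double Cross(Pt o, Pt a, Pt b) =>
                (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);

            // True if segments (a,b) and (c,d) cross at an interior point.
            static bool ProperlyIntersect(Pt a, Pt b, Pt c, Pt d) =>
                (Cross(c, d, a) > 0) != (Cross(c, d, b) > 0) &&
                (Cross(a, b, c) > 0) != (Cross(a, b, d) > 0);

            // points: all mesh vertices; fixedEdges: index pairs that must survive.
            public static List<(int A, int B)> Run(Pt[] points, IEnumerable<(int A, int B)> fixedEdges)
            {
                var accepted = new List<(int A, int B)>(fixedEdges);
                var candidates =
                    from i in Enumerable.Range(0, points.Length)
                    from j in Enumerable.Range(i + 1, points.Length - i - 1)
                    let dx = points[i].X - points[j].X
                    let dy = points[i].Y - points[j].Y
                    orderby dx * dx + dy * dy            // shortest edges first
                    select (A: i, B: j);
                foreach (var e in candidates)
                {
                    if (accepted.Any(f => (f.A == e.A && f.B == e.B) || (f.A == e.B && f.B == e.A)))
                        continue;                        // already present
                    if (accepted.Any(f => ProperlyIntersect(points[e.A], points[e.B],
                                                            points[f.A], points[f.B])))
                        continue;                        // would cross an accepted edge
                    accepted.Add(e);
                }
                return accepted; // maximal non-crossing edge set containing the fixed edges
            }
        }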

    Read the article

  • exact point on a rotating sphere

    - by nkint
    I have a sphere that represents the Earth, textured with real pictures. It rotates around the x axis, and when the user clicks, the application has to show the exact place he clicked on. For example, if he clicked on Singapore, the system should be able to: understand that the user clicked on the sphere (OK, I'll do it with gluUnProject); work out where on the sphere the user clicked (ray-sphere collision?), taking the rotation transform into account; convert the sphere coordinate to some coordinate system suitable for a web-API service; and ask the API (OK, this is the simplest part for me ;-). Any advice?
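
    The middle steps are standard: build a pick ray in world space, intersect it with the sphere, undo the sphere's current rotation, then convert the local surface point to latitude/longitude for the web API. A hedged sketch (C#, XNA-style Vector3/Matrix assumed; any vector math library works the same way):

        // Intersect the ray origin + t*dir (dir normalised) with a sphere at 'center', radius r.
        static bool RaySphere(Vector3 origin, Vector3 dir, Vector3 center, float r, out Vector3 hit)
        {
            Vector3 oc = origin - center;
            float b = Vector3.Dot(oc, dir);
            float disc = b * b - Vector3.Dot(oc, oc) + r * r;
            hit = Vector3.Zero;
            if (disc < 0f) return false;              // ray misses the sphere
            float t = -b - (float)Math.Sqrt(disc);    // nearest of the two intersections
            if (t < 0f) return false;                 // sphere lies behind the ray origin
            hit = origin + dir * t;
            return true;
        }

        // Undo the Earth's rotation, then convert the local point to lat/lon in degrees.
        static void ToLatLon(Vector3 hit, Matrix sphereWorld, out double lat, out double lon)
        {
            Vector3 local = Vector3.Transform(hit, Matrix.Invert(sphereWorld));
            local.Normalize();
            lat = Math.Asin(local.Y) * 180.0 / Math.PI;
            lon = Math.Atan2(local.Z, local.X) * 180.0 / Math.PI;
        }

    The lat/lon result still depends on how the texture is mapped onto the sphere, so an offset to match the texture's prime meridian is usually needed.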

    Read the article

  • OpenGL ES picking object

    - by lacas
    I saw a lot of OpenGL ES picking code, but nothing worked. Can someone tell me what I am missing? My code (from tutorials/forums):

        Vec3 far = Camera.getPosition();
        Vec3 near = Shared.opengl().getPickingRay(ev.getX(), ev.getY(), 0);
        Vec3 direction = far.sub(near);
        direction.normalize();
        Log.e("direction", direction.x+" "+direction.y+" "+direction.z);

        Ray mouseRay = new Ray(near, direction);
        for (int n=0; n<ObjectFactory.objects.size(); n++) {
            if (ObjectFactory.objects.get(n)!=null) {
                IObject obj = ObjectFactory.objects.get(n);
                float discriminant, b;
                float radius=0.1f;
                b = -mouseRay.getOrigin().dot(mouseRay.getDirection());
                discriminant = b * b - mouseRay.getOrigin().dot(mouseRay.getOrigin()) + radius*radius;
                discriminant = FloatMath.sqrt(discriminant);
                double x1 = b - discriminant;
                double x2 = b + discriminant;
                Log.e("asd", obj.getName() + " "+discriminant+" "+x1+" "+x2);
            }
        }

    My camera vectors:

        //cam
        Vec3 position = new Vec3(-obj.getPosX()+x, obj.getPosZ()-0.3f, obj.getPosY()+z);
        Vec3 direction = new Vec3(-obj.getPosX(), obj.getPosZ(), obj.getPosY());
        Vec3 up = new Vec3(0.0f, -1.0f, 0.0f);
        Camera.set(position, direction, up);

    And my picking code:

        public Vec3 getPickingRay(float mouseX, float mouseY, float mouseZ) {
            int[] viewport = getViewport();
            float[] modelview = getModelView();
            float[] projection = getProjection();
            float winX, winY;
            float[] position = new float[4];
            winX = (float)mouseX;
            winY = (float)Shared.screen.width - (float)mouseY;
            GLU.gluUnProject(winX, winY, mouseZ, modelview, 0, projection, 0, viewport, 0, position, 0);
            return new Vec3(position[0], position[1], position[2]);
        }

    My camera moves all the time in 3D space, and my actors/models move too. The camera follows one actor/model, and the user can move the camera on a circle around this model. How can I change the above code to make it work?
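
    Several things look suspect here; these are hedged guesses based only on the snippet above. Unprojecting with winZ = 0 yields a point on the near plane, so "near" is fine, but "far" is actually the camera position, and the resulting ray direction points backwards (unproject at winZ = 1 for the far point instead). The flip uses Shared.screen.width where the window height is expected. The sphere test never subtracts the object's position, so every object is tested against a sphere at the world origin. And the discriminant is square-rooted before its sign is checked, so misses produce NaN instead of "no hit". A sketch of the guarded, object-centred test (C#, with Unproject a hypothetical helper):

        // Build the ray from the near-plane point toward the far-plane point.
        Vector3 nearPt = Unproject(x, y, 0f);   // winZ = 0 -> near plane
        Vector3 farPt  = Unproject(x, y, 1f);   // winZ = 1 -> far plane
        Vector3 dir = Vector3.Normalize(farPt - nearPt);

        // Sphere centred on the object, not on the origin.
        Vector3 oc = nearPt - objPosition;
        float b = Vector3.Dot(oc, dir);
        float disc = b * b - Vector3.Dot(oc, oc) + radius * radius;
        if (disc >= 0f)                          // only take the square root on a hit
        {
            float t = -b - (float)Math.Sqrt(disc);
            bool hit = t >= 0f;                  // sphere lies in front of the ray origin
        }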

    Read the article

  • How can I mark and/or group a specific set of vertices in a 3D file container like OBJ? (in Blender)

    - by user827992
    I would like to export a 3D model with each part carrying a name, or a label if you will. For example, I would like to export a model of a human body and name each part as a specific vertex group: left hand, right hand, right foot, head, ears... you get the idea. That way I have a single 3D model that I can explode into various parts if needed. If there is a better technique for marking vertex groups in a 3D file, please share your solution. As a 3D editor I use Blender.
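
    For what it's worth, the OBJ format supports this natively: an `o` line starts a named object and a `g` line starts a named face group, and Blender's OBJ exporter can emit them (via its options for writing objects as OBJ objects or OBJ groups, and a "Polygroups" option for vertex groups, at least in the Blender versions current at the time). A hand-written fragment to illustrate the idea (coordinates are placeholders):

        # two named parts in a single OBJ file (face indices are global)
        o LeftHand
        v 0.0 0.0 0.0
        v 1.0 0.0 0.0
        v 0.0 1.0 0.0
        f 1 2 3

        o RightHand
        v 2.0 0.0 0.0
        v 3.0 0.0 0.0
        v 2.0 1.0 0.0
        f 4 5 6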

    Read the article

  • A* pathfinding endless loop

    - by PoeHaH
    I have implemented the A* algorithm. Sometimes it works, sometimes it doesn't and it goes into an endless loop. After days of debugging and googling, I hope you can come to the rescue. This is my code, the algorithm:

        public ArrayList<Coordinate> findClosestPathTo(Coordinate start, Coordinate goal) {
            ArrayList<Coordinate> closed = new ArrayList<Coordinate>();
            ArrayList<Coordinate> open = new ArrayList<Coordinate>();
            ArrayList<Coordinate> travelpath = new ArrayList<Coordinate>();
            open.add(start);
            while (open.size() > 0) {
                Coordinate current = searchCoordinateWithLowestF(open);
                if (current.equals(goal)) {
                    return travelpath;
                }
                travelpath.add(current);
                open.remove(current);
                closed.add(current);
                ArrayList<Coordinate> neighbors = current.calculateCoordAdjacencies(true, rowbound, colbound);
                for (Coordinate n : neighbors) {
                    if (closed.contains(n) || map.isWalkeable(n)) {
                        continue;
                    }
                    int gScore = current.getGvalue() + 1;
                    boolean gScoreIsBest = false;
                    if (!open.contains(n)) {
                        gScoreIsBest = true;
                        n.setHvalue(manhattanHeuristic(n, goal));
                        open.add(n);
                    } else {
                        if (gScore < n.getGvalue()) {
                            gScoreIsBest = true;
                        }
                    }
                    if (gScoreIsBest) {
                        n.setGvalue(gScore);
                        n.setFvalue(n.getGvalue() + n.getHvalue());
                    }
                }
            }
            return null;
        }

    What I have found out is that it always fails whenever there is an obstacle in the path. If I run it on 'open terrain', it seems to work. It seems to be affected by this part:

        || map.isWalkeable(n)

    The isWalkeable function itself seems to work fine, though. If additional code is needed, I will provide it. Your help is greatly appreciated. Thanks :)
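
    Two hedged observations on the snippet. First, `closed.contains(n) || map.isWalkeable(n)` skips exactly the neighbours that ARE walkable and expands the blocked ones; the walkability check is almost certainly meant to be negated. Second, `travelpath` collects every node that is ever expanded rather than the route: A* normally stores a parent link whenever a node's best gScore is set, then walks that chain back from the goal. Sketched in C# against hypothetical Coordinate and map types (fragments to splice into the method above):

        // Neighbour filter: skip nodes that are closed OR not walkable (note the '!').
        if (closed.Contains(n) || !map.IsWalkable(n))
            continue;

        // Wherever gScoreIsBest is set, also remember the predecessor:
        n.Parent = current;

        // When current equals the goal, rebuild the route from the parent chain:
        static List<Coordinate> Reconstruct(Coordinate goal)
        {
            var path = new List<Coordinate>();
            for (Coordinate c = goal; c != null; c = c.Parent)
                path.Add(c);
            path.Reverse();                // start -> goal order
            return path;
        }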

    Read the article

  • Adjust sprite bounds of the visible part of texture

    - by Crazy D0G
    Is there any way to adjust the boundaries of the visible part of a sprite? To make it easier to understand: I have a texture, such as the one shown in figure 1. Then I break it into pieces and fill the resulting fragments using PRKit (wood texture in figures 2 and 3). But the resulting fragments keep the transparent area (green in figures 2 and 3), and sprites created from the fragments have the size of the initial texture. Is there a way to get rid of this transparency and adjust the size of the visible part (the wood texture) by OpenGL or cocos2d-x means? Maybe this helps: the draw() method from PRKit:

        void PRFilledPolygon::draw() {
            //CCNode::draw();
            glDisableClientState(GL_COLOR_ARRAY);
            // we have a pointer to vertex points so enable client state
            glBindTexture(GL_TEXTURE_2D, texture->getName());
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ONE_MINUS_SRC_ALPHA);
            glVertexPointer(2, GL_FLOAT, 0, areaTrianglePoints);
            glTexCoordPointer(2, GL_FLOAT, 0, textureCoordinates);
            glDrawArrays(GL_TRIANGLES, 0, areaTrianglePointCount);
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
            // Restore texture matrix and switch back to modelview matrix
            glEnableClientState(GL_COLOR_ARRAY);
        }

    Read the article

  • Forward motion car physics - gradual slowdown

    - by spartan2417
    I'm having trouble creating realistic car movement in XNA 4. Right now I have a car going forward and hitting a terminal velocity, which is fine, but when I release the Up key I need the car to slow down gradually and then come to a stop. I'm pretty sure this is easy code, but I can't seem to get it to work. The code, in Update:

        if (Keyboard.GetState().IsKeyDown(Keys.Up))
        {
            double elapsedTime = gameTime.ElapsedGameTime.Milliseconds;
            CalcTotalForce();
            Acceleration = Vector2.Divide(CalcTotalForce(), MASS);
            Velocity = Vector2.Add(Velocity, Vector2.Multiply(Acceleration, (float)(elapsedTime)));
            Position = Vector2.Add(Position, Vector2.Multiply(Velocity, (float)(elapsedTime)));
        }

    The added functions:

        public Vector2 CalcTraction()
        {
            //Traction force = vector direction * engine force
            return Vector2.Multiply(forwardDirection, ENGINE_FORCE);
        }

        public Vector2 CalcDrag()
        {
            //Drag force = constdrag * velocity * speed
            return Vector2.Multiply(Vector2.Multiply(Velocity, DRAG_CONST), Velocity.Y);
        }

        public Vector2 CalcRoll()
        {
            //roll force = const roll * velocity
            return Vector2.Multiply(Velocity, ROLL_CONST);
        }

        public Vector2 CalcTotalForce()
        {
            //total force = traction + (-drag) + (-rolling)
            return Vector2.Add(CalcTraction(), Vector2.Add(-CalcDrag(), -CalcRoll()));
        }

    Anyone have any ideas?
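
    The likely cause: the whole integration sits inside the IsKeyDown branch, so drag and rolling resistance only act while the throttle is held; the moment the key is released, nothing updates the velocity at all. Integrating every frame and gating only the engine force on the key state gives the gradual slowdown. A hedged sketch reusing the names above (note it integrates in seconds, so the force constants may need rescaling):

        // Runs every Update, whether or not a key is held.
        float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
        bool throttle = Keyboard.GetState().IsKeyDown(Keys.Up);

        // Engine force only while Up is held; resistive forces always apply.
        Vector2 traction = throttle ? CalcTraction() : Vector2.Zero;
        Vector2 total = traction - CalcDrag() - CalcRoll();

        Acceleration = total / MASS;
        Velocity += Acceleration * dt;
        Position += Velocity * dt;

        // Snap tiny residual speeds to zero so the car actually comes to rest.
        if (!throttle && Velocity.LengthSquared() < 0.01f)
            Velocity = Vector2.Zero;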

    Read the article

  • Architecture for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main approaches: having each object render itself, or having a single renderer that renders everything. I'm currently aiming for the second, for the following reasons:

    - The list can be sorted to bind each shader only once. Otherwise each object would have to bind its shader, because it cannot know whether it is already active.
    - The objects can be sorted and grouped.
    - Easier to swap APIs. With a few macro lines, it can be easy to swap between a DirectX renderer and an OpenGL renderer (not a reason for my project, but still a good point).
    - Easier to manage rendering code.

    Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work.

    First idea: the renderer holds a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I worry that this may end up as a lot of extra pointer weight. I could sort the list of pointers every so often, though.

    Second idea: the entire list of entities is passed to the renderer on each render call. The renderer then sorts the list (each call, or maybe once?) and takes what it wants. That's a lot of passing and/or sorting, however.

    Other ideas: ??? PROFIT. Anyone got ideas? Thank you.
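
    One common middle ground between the two ideas: components register once with the renderer (idea one), but each frame they submit lightweight draw commands into a queue that is sorted by a packed key (shader in the high bits, then material, then depth), so each shader is bound once per group. A minimal sketch, all names hypothetical and the backend calls left as comments:

        using System;
        using System.Collections.Generic;

        struct DrawCommand
        {
            public ulong SortKey;          // shader | material | quantised depth
            public int MeshId, MaterialId; // plus whatever else the draw needs
        }

        class RenderQueue
        {
            readonly List<DrawCommand> commands = new List<DrawCommand>();

            // depth is assumed normalised to [0, 1].
            public void Submit(ushort shaderId, ushort materialId, int meshId, float depth)
            {
                // Pack so sorting groups by shader first, then material, then depth.
                uint depthBits = (uint)(Math.Clamp(depth, 0f, 1f) * 4294967000f);
                ulong key = ((ulong)shaderId << 48) | ((ulong)materialId << 32) | depthBits;
                commands.Add(new DrawCommand { SortKey = key, MeshId = meshId, MaterialId = materialId });
            }

            public void Flush(/* IRenderer backend */)
            {
                commands.Sort((a, b) => a.SortKey.CompareTo(b.SortKey));
                int lastShader = -1;
                foreach (DrawCommand cmd in commands)
                {
                    int shader = (int)(cmd.SortKey >> 48);
                    if (shader != lastShader)
                    {
                        // backend.BindShader(shader);  // bound once per run, not per object
                        lastShader = shader;
                    }
                    // backend.Draw(cmd.MeshId, cmd.MaterialId);
                }
                commands.Clear();
            }
        }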

    Read the article

  • libgdx collision detection / bounding the object

    - by johnny-b
    I am trying to get collision detection working, so I am drawing a red rectangle to see where the bounds are. When I run the code below in the update method, the position is not in the right place: the red rectangle starts from the middle, not at the x and y point, so it draws wrong. (I also have a getter method, so nothing is wrong there.)

        bullet.set(getX(), getY(), getOriginX(), getOriginY());

    This is the render code:

        shapeRenderer.begin(ShapeType.Filled);
        shapeRenderer.setColor(Color.RED);
        shapeRenderer.rect(bullet.getX(), bullet.getY(),
                           bullet.getOriginX(), bullet.getOriginY(),
                           15, 5, bullet.getRotation());
        shapeRenderer.end();

    I have also tried it with a circle, but the circle draws in the middle, and I want it to be at the tip of the bullet, at the front, the x and y point:

        boundingCircle.set(getX() + getOriginX(), getY() + getOriginY(), 4.0f);

        shapeRenderer.begin(ShapeType.Filled);
        shapeRenderer.setColor(Color.RED);
        shapeRenderer.circle(bullet.getBoundingCircle().x,
                             bullet.getBoundingCircle().y,
                             bullet.getBoundingCircle().radius);
        shapeRenderer.end();

    Thank you. I need it to be at the x and y point, as the bullet is in the middle of the sprite when drawn originally via paint.

    Read the article

  • XNA model drawing problem

    - by user1990950
    When using this code:

        public static void DrawModel(Model model, Vector3 position, Vector3 offset,
            float xRotation, float yRotation, float zRotation, float allrot,
            float xScale, float yScale, float zScale)
        {
            position.Y *= -1;
            offset.Y *= -1;
            Matrix worldMatrix =
                ((Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation))
                * Matrix.CreateRotationX(MathHelper.ToRadians(xRotation)))
                * Matrix.CreateRotationY(MathHelper.ToRadians(yRotation)))
                * (Matrix.CreateTranslation(offset)
                * Matrix.CreateRotationY(MathHelper.ToRadians(allrot)))
                * Matrix.CreateScale(xScale, yScale, zScale);
            worldMatrix *= Matrix.CreateTranslation(position)
                * theCamera.GetTransformation()
                * Matrix.CreateTranslation(new Vector3(
                    -(graphics.GraphicsDevice.Viewport.Width / 2),
                    graphics.GraphicsDevice.Viewport.Height / 2, 0));
            foreach (ModelMesh mesh in model.Meshes)
            {
                for (int i = 0; i < mesh.Effects.Count; i++)
                {
                    ((BasicEffect)mesh.Effects[i]).EnableDefaultLighting();
                    ((BasicEffect)mesh.Effects[i]).World = worldMatrix;
                    ((BasicEffect)mesh.Effects[i]).View = viewMatrix;
                    ((BasicEffect)mesh.Effects[i]).Projection = projectionMatrix;
                }
                mesh.Draw();
            }
        }

    the model rotates and then scales. It should scale and then rotate, but whenever I try to change it, it won't work.
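
    XNA uses the row-vector convention, so in a matrix product the leftmost factor is applied to the vertex first. To scale and then rotate, the scale matrix has to be the first factor rather than the last. A hedged reordering of the world-matrix construction above, keeping everything else the same:

        // Row-vector convention: v' = v * S * Rz * Rx * Ry * ..., so scale is applied first.
        Matrix worldMatrix =
              Matrix.CreateScale(xScale, yScale, zScale)               // 1. scale
            * Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation))  // 2. rotate
            * Matrix.CreateRotationX(MathHelper.ToRadians(xRotation))
            * Matrix.CreateRotationY(MathHelper.ToRadians(yRotation))
            * Matrix.CreateTranslation(offset)                         // 3. offset and orbit
            * Matrix.CreateRotationY(MathHelper.ToRadians(allrot));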

    Read the article

  • How do I implement Unreal-like object serialization?

    - by MrWiggels
    Recently, I've been working on the core of my engine, and as I move forward I find myself writing throwaway code to read files and simple data into the engine. This got me thinking about how I should implement a file management system. After a bit of googling I came across the Unreal package format, and it looks like the perfect fit. I think it's good because of the way it allows you to separate different assets into different packages, and lets something like a level reference those packages. I was just wondering: is this possible in C#? The built-in serialization API in .NET does not seem to support any form of this, only reading and writing to a single file.
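
    Nothing in .NET forbids this; the framework just doesn't ship a package format, so it has to be built on top of BinaryWriter/BinaryReader (or a serializer of choice). The usual shape is a table of contents with named entries, offsets and lengths, followed by the asset blobs. A toy sketch of the writing side, with the layout invented for illustration:

        using System.Collections.Generic;
        using System.IO;
        using System.Text;

        // Toy package layout: [count] { [name][offset][length] }* then the raw blobs.
        static class Package
        {
            public static void Write(string path, IDictionary<string, byte[]> assets)
            {
                using (var w = new BinaryWriter(File.Create(path), Encoding.UTF8))
                {
                    w.Write(assets.Count);

                    // Write the table with placeholder offsets, remembering where they live.
                    var fixups = new List<KeyValuePair<long, byte[]>>();
                    foreach (KeyValuePair<string, byte[]> asset in assets)
                    {
                        w.Write(asset.Key);            // length-prefixed name
                        fixups.Add(new KeyValuePair<long, byte[]>(w.BaseStream.Position, asset.Value));
                        w.Write(0L);                   // offset, patched below
                        w.Write(asset.Value.Length);
                    }

                    // Append each blob, then go back and patch its table entry.
                    foreach (KeyValuePair<long, byte[]> fix in fixups)
                    {
                        long offset = w.BaseStream.Position;
                        w.Write(fix.Value);
                        long end = w.BaseStream.Position;
                        w.BaseStream.Seek(fix.Key, SeekOrigin.Begin);
                        w.Write(offset);
                        w.BaseStream.Seek(end, SeekOrigin.Begin);
                    }
                }
            }
        }

    A reader does the inverse: read the count and the table, then seek to an entry's offset on demand, which is what makes lazy per-asset loading (and cross-package references by name) possible.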

    Read the article

  • XNA hlsl tex2D() only reads 3 channels from normal maps and specular maps

    - by cubrman
    Our engine uses deferred rendering, and during the main draw phase it gathers plenty of data from the objects it draws. In order to save on tex2D calls, we packed our objects' specular maps with all sorts of data, so three out of four channels are already taken. To be clear: I am talking about the assets that come with the models and are stored in their material's Specular Level channel, not about the RenderTarget. Now I need another piece of information stored in the alpha channel, but I cannot make the shader read it properly! No matter what I write into alpha, it ends up being 1 (255)! I tried saving the textures in PNG/TGA formats, and turning off pre-computed alpha in the model's properties. Out of every texture available to me (we use a diffuse map, a normal map and a specular map), I was only able to read alpha successfully from the diffuse map! Here is how I add specular and normal maps to my model's material in the content processor:

        if (geometry.Material.Textures.ContainsKey(normalMapKey))
        {
            ExternalReference<TextureContent> texRef = geometry.Material.Textures[normalMapKey];
            geometry.Material.Textures.Remove("NormalMap");
            geometry.Material.Textures.Add("NormalMap", texRef);
        }
        ...
        foreach (KeyValuePair<String, ExternalReference<TextureContent>> texture in material.Textures)
        {
            if ((texture.Key == "Texture") || (texture.Key == "NormalMap") || (texture.Key == "SpecularMap"))
                mat.Textures.Add(texture.Key, texture.Value);
        }

    In the shader I obviously use:

        float4 data = tex2D(specularMapSampler, TexCoords);

    So data.a is always 1 in my case. Could you suggest a reason?
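
    A hedged guess based on the symptoms: the XNA model processor DXT-compresses textures by default, and the DXT formats keep little alpha precision (DXT1 carries only a single bit), which would flatten a carefully authored alpha channel to fully opaque; it would also explain why the file format and pre-computed alpha settings made no difference. If that is the cause, forcing the texture format to Color in a ModelProcessor subclass should let the alpha survive the build; a sketch:

        using Microsoft.Xna.Framework.Content.Pipeline;
        using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
        using Microsoft.Xna.Framework.Content.Pipeline.Processors;

        // Keep model textures uncompressed so all four channels survive the build.
        [ContentProcessor(DisplayName = "Model - Uncompressed Textures")]
        public class UncompressedModelProcessor : ModelProcessor
        {
            protected override MaterialContent ConvertMaterial(
                MaterialContent material, ContentProcessorContext context)
            {
                var parameters = new OpaqueDataDictionary();
                parameters.Add("TextureFormat", TextureProcessorOutputFormat.Color);
                return context.Convert<MaterialContent, MaterialContent>(
                    material, "MaterialProcessor", parameters);
            }
        }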

    Read the article

  • backface culling error

    - by acrilige
    I am writing a simple software renderer. In my pipeline I have a backface culling stage, but it looks like it has some error (see picture). I perform culling right after the world transformation. (I can't insert the picture into the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;
        for (Triangle &t : m_triangles)
        {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }
        for (Triangle& t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (RH). What is the reason?
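
    A fixed view direction is only valid for an orthographic projection (or after the view transform, in view space). With a perspective camera, a face near the edge of the view can point toward the camera even though its normal has a positive z component, which produces exactly this kind of patchy culling. The usual fix is to test the normal against the direction from the camera to the triangle. A hedged sketch (C#, hypothetical Vector3 type; flip the comparison if the winding convention differs):

        // Perspective-correct backface test in world space.
        static bool IsBackFacing(Vector3 v1, Vector3 v2, Vector3 v3, Vector3 cameraPos)
        {
            Vector3 normal = Vector3.Normalize(Vector3.Cross(v2 - v1, v3 - v1));
            Vector3 toTriangle = Vector3.Normalize(v1 - cameraPos);
            return Vector3.Dot(toTriangle, normal) >= 0f;  // facing away from the camera
        }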

    Read the article

  • Algorithm to map an area [on hold]

    - by user37843
    I want to create a crawler that starts in a room and moves North, East, West and South from room to room until there aren't any new rooms to visit. I don't want duplicates, and the output format per line should be something like this: current room, neighbour 1, neighbour 2, ... In the end I want to apply the BFS algorithm to find the shortest path between 2 rooms. Can anyone offer me a suggestion on what to use? Thanks
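
    A plain breadth-first search covers both requirements: it visits every reachable room exactly once (producing the room-plus-neighbours lines without duplicates), and on an unweighted graph it also yields shortest paths via parent links. A sketch under the assumption that each room knows its N/E/S/W neighbours (C#, names hypothetical):

        using System;
        using System.Collections.Generic;

        class Room
        {
            public string Name;
            public List<Room> Neighbours = new List<Room>(); // N, E, S, W where present
        }

        static class Mapper
        {
            // Explore from 'start', printing "current, neighbour 1, neighbour 2..." once per room.
            public static void Crawl(Room start)
            {
                var visited = new HashSet<Room> { start };
                var queue = new Queue<Room>();
                queue.Enqueue(start);
                while (queue.Count > 0)
                {
                    Room room = queue.Dequeue();
                    Console.WriteLine(room.Name + ", " +
                        string.Join(", ", room.Neighbours.ConvertAll(n => n.Name)));
                    foreach (Room n in room.Neighbours)
                        if (visited.Add(n))        // Add returns false for duplicates
                            queue.Enqueue(n);
                }
            }

            // Shortest path on an unweighted graph: BFS with parent links.
            public static List<Room> ShortestPath(Room from, Room to)
            {
                var parent = new Dictionary<Room, Room> { [from] = null };
                var queue = new Queue<Room>();
                queue.Enqueue(from);
                while (queue.Count > 0)
                {
                    Room room = queue.Dequeue();
                    if (room == to) break;
                    foreach (Room n in room.Neighbours)
                        if (!parent.ContainsKey(n)) { parent[n] = room; queue.Enqueue(n); }
                }
                if (!parent.ContainsKey(to)) return null;   // unreachable
                var path = new List<Room>();
                for (Room r = to; r != null; r = parent[r]) path.Add(r);
                path.Reverse();
                return path;
            }
        }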

    Read the article

  • relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and ran into these problems: I have a list of models that I need to draw, and an object (renderer) that implements an IRenderer interface with the method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library used (OpenGL/DirectX).

    First question: the model should know nothing about the renderer implementation, but in that case, where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:

        class Model
        {
        public:
            Model();
            Vertex* GetVertices() const;
        private:
            Vertex* m_vertices;
        };

    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object?

    Second question: what is the best way for the model to tell the renderer HOW it must be rendered (for example with a texture, with bump mapping, or maybe just in one color)? I thought it could be done with flags, like this:

        model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING);

    and then checking each flag in the Renderer::DrawModel method. But it looks like this will become unwieldy as the number of options grows...
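
    For the first question, a common answer is to keep the API-specific data on the renderer's side, in a cache keyed by the model (or by an id the model carries), created lazily on first draw; the model stays API-agnostic. For the second, most engines eventually replace the flag enum with a material object that the model references and the renderer interprets. A small sketch of the cache idea (C# stand-in for the C++ in the question, names hypothetical):

        using System.Collections.Generic;

        // API-agnostic side: the model knows only its own data.
        class Model
        {
            public float[] Vertices;
        }

        // Renderer-owned, API-specific side: GPU resources created lazily per model.
        class Dx11Renderer /* : IRenderer */
        {
            class GpuData { /* the ID3D11Buffer equivalent lives here */ }

            readonly Dictionary<Model, GpuData> cache = new Dictionary<Model, GpuData>();

            public void DrawModel(Model model)
            {
                GpuData gpu;
                if (!cache.TryGetValue(model, out gpu))
                {
                    gpu = new GpuData();   // upload model.Vertices into a vertex buffer here
                    cache[model] = gpu;
                }
                // bind gpu's buffers, apply the model's material, issue the draw call...
            }
        }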

    Read the article

  • GLSL Normals not transforming properly

    - by instancedName
    I've been stuck on this problem for two days. I've read many articles about transforming normals, but I'm just totally stuck. I understand chopping off the W component to "turn off" translation, and doing the inverse/transpose transformation for the non-uniform scaling problem, but my bug seems to come from a different source. I've imported a simple ball into OpenGL. The only transformation I apply is a rotation over time. But when my ball rotates, the illuminated part of the ball moves around, just as it would if the light direction were changing. I just can't figure out what the problem is. Can anyone help me with this? Here's the GLSL code.

    Vertex shader:

        #version 440 core

        uniform mat4 World, View, Projection;

        layout(location = 0) in vec3 VertexPosition;
        layout(location = 1) in vec3 VertexColor;
        layout(location = 2) in vec3 VertexNormal;

        out vec4 Color;
        out vec3 Normal;

        void main()
        {
            Color = vec4(VertexColor, 1.0);
            vec4 n = World * vec4(VertexNormal, 0.0f);
            Normal = n.xyz;
            gl_Position = Projection * View * World * vec4(VertexPosition, 1.0);
        }

    Fragment shader:

        #version 440 core

        uniform vec3 LightDirection = vec3(0.0, 0.0, -1.0);
        uniform vec3 LightColor = vec3(1f);

        in vec4 Color;
        in vec3 Normal;

        out vec4 FragColor;

        void main()
        {
            float diffuse = max(0.0, dot(normalize(-LightDirection), normalize(Normal)));
            vec4 scatteredLight = vec4(LightColor * diffuse, 1.0f);
            FragColor = min(Color * scatteredLight, vec4(1.0));
        }

    Read the article

  • 2D scene graph not transforming relative to parent

    - by Dr.Denis McCracleJizz
    I am currently coding my own 2D scene graph, which is basically a port of Flash's render engine. The problem is that my rendering doesn't seem to work properly. This code creates the localTransform property for each DisplayObject:

        Matrix m_transform = Matrix.CreateRotationZ(rotation)
            * Matrix.CreateScale(scaleX, scaleY, 1)
            * Matrix.CreateTranslation(new Vector3(x, y, z));

    This is my render code:

        float dRotation;
        Vector2 dPosition, dScale;
        Matrix transform;

        transform = this.localTransform;
        if (parent != null)
            transform = localTransform * parent.localTransform;

        DecomposeMatrix(ref transform, out dPosition, out dRotation, out dScale);
        spriteBatch.Draw(this.texture, dPosition, null, Color.White, dRotation,
            new Vector2(originX, originY), dScale, SpriteEffects.None, 0.0f);

    Here is the result when I add the Stage, then a first DisplayObjectContainer to the stage, and then a second one inside it. It may look fine, but the problem is that I add the first DisplayObjectContainer at (400, 400) and the second one within it (the smallest one) at position (0, 0). So it should be rendered right over its parent, but for some reason it gets rendered within the parent at the same position the parent has, (400, 400). It's just as if I doubled the parent's localMatrix and then rendered the second cat there. This is the code I use to loop through the children:

        base.Draw(spriteBatch);
        foreach (DisplayObject childs in _childs)
        {
            childs.Draw(spriteBatch);
        }
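
    A hedged observation: the composition only reaches one level up, using the parent's LOCAL transform, so anything deeper than one level (Stage, then container, then container) inherits the wrong matrix; the child needs the parent's fully accumulated world transform. It also helps to build the local matrix as scale, then rotation, then translation under XNA's row-vector convention. A sketch:

        // World transform accumulated through the whole ancestor chain.
        // Row-vector math: the node's own matrix is applied first, then each ancestor's.
        public Matrix GetWorldTransform()
        {
            Matrix local = Matrix.CreateScale(scaleX, scaleY, 1f)          // scale first
                         * Matrix.CreateRotationZ(rotation)                // then rotate
                         * Matrix.CreateTranslation(new Vector3(x, y, z)); // then place
            return parent == null ? local : local * parent.GetWorldTransform();
        }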

    Read the article

  • Android: how to get OpenGL 3D coordinates in an onTouch event

    - by Sandy
    I created a cube in OpenGL, and it rotates in the onTouch event. To do this I created a CustomSurfaceView, as follows:

        public class CustomSurfaceView extends GLSurfaceView {
            @Override
            public boolean onTouchEvent(MotionEvent e) {
                float x = e.getX();
                float y = e.getY();
            }
        }

    Here x and y are screen coordinates. How can I get 3D coordinates from this? I have already looked at gluProject and NeHe, but I don't know how to implement this in my project; it complains that there are no GLdouble and GLfloat classes.

    Read the article

  • Isometric algorithm [fixed]

    - by David
    So I've been toying with isometric, and I just can't get the tiles to be in the right order. I'm probably missing something obvious and I just can't see it... but even at the risk of looking stupid, here's my code:

        for (int i = 0; i < Tile.MapSize; i++)
        {
            for (int j = 0; j < Tile.MapSize; j++)
            {
                spriteBatch.Draw(
                    Tile.TileSetTexture,
                    new Rectangle(
                        (-j * Tile.TileWidth / 2) + (i * Tile.TileWidth / 2),
                        (i * (Tile.TileHeight - 9) / 2) - (-j * (Tile.TileHeight - 9) / 2),
                        Tile.TileWidth,
                        Tile.TileHeight),
                    Tile.GetSourceRectangle(tileID),
                    Color.White,
                    0.0f,
                    new Vector2(-350, -60),
                    SpriteEffects.None,
                    1.0f);
            }
        }

    And here's what I end up with (image: "delicious messed up map"). Yep, bit of an issue. So if anyone could help, I'd appreciate it.

    Edit: works now.

    Read the article

  • How to capture the screen in DirectX 9 to a raw bitmap in memory without using D3DXSaveSurfaceToFile

    - by cloudraven
    I know that in OpenGL I can do something like this:

        glReadBuffer( GL_FRONT );
        glReadPixels( 0, 0, _width, _height, GL_RGB, GL_UNSIGNED_BYTE, _buffer );

    And it's pretty fast; I get the raw bitmap in _buffer. When I try to do this in DirectX, assuming that I have a D3DDevice object, I can do something like this:

        if (SUCCEEDED(D3DDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackbuffer)))
        {
            HRESULT hr = D3DXSaveSurfaceToFileA(filename, D3DXIFF_BMP, pBackbuffer, NULL, NULL);
        }

    But D3DXSaveSurfaceToFile is pretty slow, and I don't need to write the capture to disk anyway, so I was wondering if there is a faster way to do this.

    Read the article

  • Move the location of the XYZ pivot point on a mesh in UDK

    - by WebDevHobo
    When working with any mesh, you get an XYZ pivot point somewhere on it. If you just want to move the mesh in any direction, it doesn't matter where this point is located. However, I want to rotate a door, and this requires the point of rotation to be very specific. I can't find anywhere how to change the location of the point. Can anyone help?

    EDIT: solved. To change the pivot point, right-click on the mesh, go to "Pivot" and move it. Then right-click again and this time select "Save PrePivot to Pivot".

    Read the article

  • Why does the clip space in OpenGL have 4 dimensions?

    - by user827992
    I will use this as a generic reference, but the more I browse online docs and books, the less I understand about this.

        const float vertexPositions[] = {
            0.75f, 0.75f, 0.0f, 1.0f,
            0.75f, -0.75f, 0.0f, 1.0f,
            -0.75f, -0.75f, 0.0f, 1.0f,
        };

    In this online book there is an example of how to draw the first and classic "hello world" for OpenGL: a triangle. The vertex structure for the triangle is declared as stated in the code above. The book, like all the other sources about this, stresses the point that clip space is a 4D structure that is used to decide what will be rasterized and rendered to the screen. Here are my questions:

    - I can't imagine something in 4D; I don't think any human can. What is the 4D of this clip space?
    - The most human-readable doc that I have read speaks about a camera, which is just an abstraction over the clipping concept, and I get that. The problem is: why not use the concept of a camera in the first place, which is a more familiar 3D structure? The only drawback of the camera concept is that you need to define the perspective in another way, so you basically have to add another statement about what kind of camera you wish to have.
    - How am I supposed to read 0.75f, 0.75f, 0.0f, 1.0f? All I get is that they are all float values, and I get the meaning of the first 3 values. What does the last one mean?
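
    For what it's worth, the fourth value is the homogeneous coordinate w, not a spatial fourth dimension: clip space carries 3D positions in a form where the perspective division has not happened yet. After clipping, the GPU divides by w to get normalized device coordinates, which is exactly what lets a perspective projection be expressed as a plain matrix multiply. A worked example with the first vertex above:

        (x, y, z, w) = (0.75, 0.75, 0.0, 1.0)
        divide by w:
        (x/w, y/w, z/w) = (0.75, 0.75, 0.0)

    Positions coming straight from the application use w = 1, so the division changes nothing; a perspective projection matrix writes the view-space depth into w, so distant points are divided by a larger w and shrink toward the centre of the screen.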

    Read the article

  • OpenGL ES 2. How do I Create a Basic Fading Streak Effect?

    - by dugla
    For the iPad app I am writing using OpenGL ES 2, I have a single quad, shaded using GLSL, that is dragged around the screen. Very basic. This works fine, but it is rather boring. I want to increase the coolness a bit in the following way: when the user drags the quad, it leaves a streak behind that fades over time. Continuous dragging would look a bit like a comet streaking across the night sky. What is the simplest way to implement this? Thanks.
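
    The usual trick is an accumulation texture with ping-pong render targets: each frame, draw the previous accumulation slightly faded, stamp the quad at its current position, then present the result and swap. In GLES 2 that means two FBOs and a full-screen pass whose shader multiplies the history by a decay factor; the same idea is sketched here in XNA-style C# for brevity (all names are assumptions):

        // accumA/accumB are screen-sized render targets created at startup.
        RenderTarget2D accumA, accumB;

        void DrawStreak(GraphicsDevice device, SpriteBatch batch, Texture2D quad, Vector2 pos)
        {
            device.SetRenderTarget(accumB);
            device.Clear(Color.Transparent);
            batch.Begin();
            batch.Draw(accumA, Vector2.Zero, Color.White * 0.92f); // fade history ~8% per frame
            batch.Draw(quad, pos, Color.White);                    // stamp the current quad
            batch.End();

            device.SetRenderTarget(null);                          // back to the screen
            batch.Begin();
            batch.Draw(accumB, Vector2.Zero, Color.White);
            batch.End();

            RenderTarget2D tmp = accumA; accumA = accumB; accumB = tmp; // swap for next frame
        }

    The decay factor controls how long the comet tail lasts: the closer to 1, the slower the fade.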

    Read the article

  • OpenGL error LNK2019

    - by Ghilliedrone
    I'm trying to compile a basic OpenGL program. I linked opengl32.lib and glu32.lib, but I'm getting errors. The errors I get are:

        error LNK1120: 7 unresolved externals
        error LNK2019: unresolved external symbol _main referenced in function ___tmainCRTStartup
        error LNK2019: unresolved external symbol "public: float __thiscall GLWindow::getElapsedSeconds(void)" (?getElapsedSeconds@GLWindow@@QAEMXZ) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "public: bool __thiscall GLWindow::isRunning(void)" (?isRunning@GLWindow@@QAE_NXZ) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "public: void __thiscall GLWindow::attachExample(class Example *)" (?attachExample@GLWindow@@QAEXPAVExample@@@Z) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "public: void __thiscall GLWindow::destroy(void)" (?destroy@GLWindow@@QAEXXZ) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "public: __thiscall GLWindow::GLWindow(struct HINSTANCE__ *)" (??0GLWindow@@QAE@PAUHINSTANCE__@@@Z) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "private: void __thiscall GLWindow::setupPixelFormat(void)" (?setupPixelFormat@GLWindow@@AAEXXZ) referenced in function "public: long __stdcall GLWindow::WndProc(struct HWND__ *,unsigned int,unsigned int,long)" (?WndProc@GLWindow@@QAGJPAUHWND__@@IIJ@Z)

    Read the article

  • When mapping the surface of a sphere with tiles, how might you deal with polar distortion?

    - by clweeks
    It's easy to deal with the way locations interact on a clean Cartesian grid; it's just vanilla math, and you can ignore the geometry of the sphere's surface for much of it if you just truncate the poles or something. But I keep coming up with ideas for games where the polar space matters: geo-coded ARGs, global roguelikes and the like. I want square(ish?) locations, reasonably representable by square tiles of the same size across the globe. This has to be a solved problem, right? What are the solutions?

    ETA: At the equator, assuming your square locations are reasonably small, it's close enough to true that you can get away with having one square in the rows north and south of the most equatorial row. And you could probably get away with hand-waving the difference up to 45 degrees or so. But eventually you need fewer squares in a pole-ward circumferential row. If I reduce the length of the row by one and offset the squares by 1/2, then they're just like hexes, and it's relatively easy to code the bookkeeping for the connections. But it gets more and more extreme as you approach the poles. Projecting the surface of the world onto the surface of a cube is tempting, but I figured there must be more elegant solutions already in use. If I did the cube thing (without dissecting it further through geodesy), are there any pros and cons to placing the pole at the center of a face versus at the vertex where three sides meet?

    Read the article
