Search Results

Search found 25377 results on 1016 pages for 'development'.

Page 399 of 1016

  • What is the best way to check for overlap between the player and static, non-collidable items in the Bullet physics engine?

    - by tigrou
    I'd like to add non-collidable objects (e.g. power-ups, items, ...) to a game world using the Bullet Physics Engine and to know when the player collides with them. Some info: there are a lot of items (~1000), all are box shapes and they don't overlap. Here is what I have tried: btDbvt* bvtItems = new btDbvt(); //btDbvt is a hierarchical AABB tree, used by Bullet foreach(var item ...) { btDbvtVolume volume = ... //compute item AABB; bvtItems->insert(volume, (void*)someExtraData); } Then, to find collisions between items and the player: playerRigidBody->getAabb(min, max); btDbvtVolume playervolume = ... //compute player AABB bvtItems->collideTV(bvtItems->m_root, playervolume, *someCollisionHandler); This works fairly well (and it's very fast); however, there is a problem: it only checks the items' AABBs against the player's AABB. That loss of precision is acceptable for the items but not for the player, which is not a box. It would need another check to make sure the player really collides with the item, but I don't know how to do this in Bullet. It would have been nice to have a function like this: playerRigidBody->checkCollisionWithAABB(); After trying that, I discovered that a btGhostObject exists and seems to have been made for this. I changed my code like this: foreach(var item...) { btCollisionObject* ghostObject = new btGhostObject(); ghostObject->setCollisionShape(boxShape); ghostObject->setCollisionFlags(ghostObject->getCollisionFlags() | btCollisionObject::CF_NO_CONTACT_RESPONSE); startTransform.setOrigin(...); //item position ghostObject->setWorldTransform(startTransform); dynamicsWorld->addCollisionObject(ghostObject, btBroadphaseProxy::SensorTrigger, btBroadphaseProxy::CharacterFilter); } It also works OK, but there is a huge FPS drop (almost ten times slower), which is not acceptable. Maybe something is missing (I forgot to set a flag) and Bullet is doing extra work for nothing, or maybe all those ghostObjects are polluting the broad phase and btGhostObject is not the right tool for this. Any help would be appreciated.
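    One way to keep the fast btDbvt broad phase and still get an exact answer is to follow each AABB hit with a narrow-phase pair test. Below is a minimal sketch, assuming Bullet 2.x: btCollisionWorld::contactPairTest does an exact shape-vs-shape check between two objects, and the ExactCheck/PlayerReallyTouchesItem names are made up for the example (the callback signature varies slightly between Bullet versions).

        #include <btBulletCollisionCommon.h>

        struct ExactCheck : public btCollisionWorld::ContactResultCallback
        {
            bool hit = false;
            btScalar addSingleResult(btManifoldPoint&,
                                     const btCollisionObjectWrapper*, int, int,
                                     const btCollisionObjectWrapper*, int, int) override
            {
                hit = true;   // any contact point means a real overlap
                return 0;
            }
        };

        // Call for every item whose AABB the btDbvt reported as touching the player.
        bool PlayerReallyTouchesItem(btCollisionWorld* world,
                                     btCollisionObject* player,
                                     btCollisionObject* item)   // has a box shape and a transform
        {
            ExactCheck cb;
            world->contactPairTest(player, item, cb);   // exact shape-vs-shape test, no broad phase
            return cb.hit;
        }

    The item objects only need a collision shape and a world transform; they do not have to be added to the dynamics world, so the broad phase stays untouched.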

    Read the article

  • Which data structure would you use for a witness list?

    - by mateen
    I'm making a game where the plot is a bank robbery. Lots of people witness the robbery. The game will load a list of suspects, and the players (witnesses) will have to identify the suspects of the robbery. The game should load the list of suspects and let the player identify one as quickly as possible. An admin can add/remove suspects in the lists, and two or more lists of suspects can also be merged into one (to show to the player). The question is: which data structure would be suitable for these lists?
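    A minimal sketch of one common choice: a sorted set gives cheap add/remove and makes merging two witness lists trivial. SuspectId and the helper names are assumptions, not anything from the question.

        #include <set>

        using SuspectId   = int;
        using SuspectList = std::set<SuspectId>;   // sorted, no duplicates

        void AddSuspect(SuspectList& list, SuspectId id)    { list.insert(id); }
        void RemoveSuspect(SuspectList& list, SuspectId id) { list.erase(id); }

        // Merge two witness lists into one for display; duplicates collapse automatically.
        SuspectList Merge(const SuspectList& a, const SuspectList& b)
        {
            SuspectList merged = a;
            merged.insert(b.begin(), b.end());
            return merged;
        }

    A plain sorted vector plus std::set_union works just as well if the lists are built once and then only read.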

    Read the article

  • Data Structures for Logic Games / Deduction Rules / Sufficient Set of Clues?

    - by taserian
    I've been cogitating about developing a logic game similar to Einstein's Puzzle, which would have different sets of clues for every new game replay. What data structures would you use to handle the different entities (pets, colors of houses, nationalities, etc.), deduction rules, etc., to guarantee that the clues you provide point to a unique solution? I'm having a hard time thinking about how to get the deduction rules to play along with the possible clues; any insight would be appreciated.

    Read the article

  • Updating physics for animated models

    - by Mathias Hölzl
    For a new game (a shooter) we have to set up a scene with a minimum of 30 bone-animated models. The problem is that the update process for the animated models takes too long. This is what I do: each character has ~30 bones, and for every update tick the animation gets calculated and every bone fires an event with the new matrix. The physics receives the event with the new matrix and updates the collision shape for that bone. The time it takes to build the animation isn't that bad (0.2ms for 30 bones - 6ms for 30 models). But the main problem is that the physics engine (Bullet) uses a different matrix for transformation, so it's necessary to convert it. Code for matrix conversion: (~0.005ms) btTransform CLEAR_PHYSICS_API Mat_to_btTransform( Mat mat ) { btMatrix3x3 bulletRotation; btVector3 bulletPosition; XMFLOAT4X4 matData = mat.GetStorage(); // copy rotation matrix for ( int row=0; row<3; ++row ) for ( int column=0; column<3; ++column ) bulletRotation[row][column] = matData.m[column][row]; for ( int column=0; column<3; ++column ) bulletPosition[column] = matData.m[3][column]; return btTransform( bulletRotation, bulletPosition ); } The function for updating the transform (physics): void CLEAR_PHYSICS_API BulletPhysics::VKinematicMove(Mat mat, ActorId aid) { if ( btRigidBody * const body = FindActorBody( aid ) ) { btTransform tmp = Mat_to_btTransform( mat ); body->setWorldTransform( tmp ); } } The real problem is the function FindActorBody(id): ActorIDToBulletActorMap::const_iterator found = m_actorBodies.find( id ); if ( found != m_actorBodies.end() ) return found->second; All physics actors are stored in m_actorBodies, and that's why the updating process takes too long. But I have no idea how I could avoid this. Friendly greetings, Mathias
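    A minimal sketch of one way around the per-bone lookup: resolve the btRigidBody once when the model is created and keep the pointer next to the bone, so the per-tick update becomes a direct call. BoneBinding and UpdateBones are assumed names, not the engine's own types; switching m_actorBodies to a hash map (std::unordered_map) is the lighter alternative if the lookup has to stay.

        #include <btBulletDynamicsCommon.h>
        #include <vector>

        struct BoneBinding
        {
            btRigidBody* body = nullptr;   // resolved once at load time, no map lookup per tick
        };

        void UpdateBones(std::vector<BoneBinding>& bones,
                         const std::vector<btTransform>& boneTransforms)
        {
            for (size_t i = 0; i < bones.size(); ++i)
                if (bones[i].body)
                    bones[i].body->setWorldTransform(boneTransforms[i]);   // kinematic update
        }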

    Read the article

  • How is an HTML5 game sold?

    - by Bane
    (I know this site doesn't give legal advice, but what I'm dealing with here isn't anything serious at all. Also, I apologize to JP for being annoying over this.) Someone found a game I made on the Internet and expressed interest in buying it. We agreed upon a price, and, in the meantime, I removed the game's source from the Internet, just to be sure. Now I'm wondering what to do next. These are the terms: he gets the game's source code, and only that, without the graphics (which weren't made by me). He gets the right to develop and sell the game. I get to keep ownership of the original game, meaning that I can use it in my portfolio when applying for jobs, for example. The game gets to stay on its original site. But I am not sure how I can legally realize this. Which license can I use?

    Read the article

  • SpriteFont Exception, no such character?

    - by Michal Bozydar Pawlowski
    I have such spriteFont: <?xml version="1.0" encoding="utf-8"?> <!-- This file contains an xml description of a font, and will be read by the XNA Framework Content Pipeline. Follow the comments to customize the appearance of the font in your game, and to change the characters which are available to draw with. --> <XnaContent xmlns:Graphics="Microsoft.Xna.Framework.Content.Pipeline.Graphics"> <Asset Type="Graphics:FontDescription"> <!-- Modify this string to change the font that will be imported. --> <FontName>Segoe UI</FontName> <!-- Size is a float value, measured in points. Modify this value to change the size of the font. --> <Size>20</Size> <!-- Spacing is a float value, measured in pixels. Modify this value to change the amount of spacing in between characters. --> <Spacing>0</Spacing> <!-- UseKerning controls the layout of the font. If this value is true, kerning information will be used when placing characters. --> <UseKerning>true</UseKerning> <!-- Style controls the style of the font. Valid entries are "Regular", "Bold", "Italic", and "Bold, Italic", and are case sensitive. --> <Style>Regular</Style> <!-- If you uncomment this line, the default character will be substituted if you draw or measure text that contains characters which were not included in the font. --> <!-- <DefaultCharacter>*</DefaultCharacter> --> <!-- CharacterRegions control what letters are available in the font. Every character from Start to End will be built and made available for drawing. The default range is from 32, (ASCII space), to 126, ('~'), covering the basic Latin character set. The characters are ordered according to the Unicode standard. See the documentation for more information. --> <CharacterRegions> <CharacterRegion> <Start>&#09;</Start> <End>&#09;</End> </CharacterRegion> <CharacterRegion> <Start>&#32;</Start> <End>&#1200;</End> </CharacterRegion> </CharacterRegions> </Asset> </XnaContent> It has the character regions (32-1200) And I get this exception: A first chance exception of type 'System.ArgumentException' occurred in Microsoft.Xna.Framework.Graphics.ni.dll The character '?' (0x0441) is not available in this SpriteFont. If applicable, adjust the font's start and end CharacterRegions to include this character. Parameter name: character Why? I'm drawing the string like this: spriteBatch.DrawString(font24, zasadyText, zasadyTextPos, kolorCzcionki1, -0.05f, Vector2.Zero, 1.0f, SpriteEffects.None, 0.5f) I even changed the spriteFont to cyrillic: <CharacterRegions> <CharacterRegion> <Start>&#09;</Start> <End>&#09;</End> </CharacterRegion> <CharacterRegion> <Start>&#0032;</Start> <End>&#0383;</End> </CharacterRegion> <CharacterRegion> <Start>&#1040;</Start> <End>&#1111;</End> </CharacterRegion> </CharacterRegions> </Asset> </XnaContent> and it still doesn't work. I got the (0x441 = char) exception -- EDIT -- Ok, I got the solution. It was a letter mistake in language. 
    I had this: if (jezyk == "ru_RU") { font14 = Content.Load<SpriteFont>("ru_font14"); font24 = Content.Load<SpriteFont>("ru_font24"); font12 = Content.Load<SpriteFont>("ru_czcionkaFloty"); font10 = Content.Load<SpriteFont>("ru_font10"); font28 = Content.Load<SpriteFont>("ru_font28"); font20 = Content.Load<SpriteFont>("ru_font20"); } else { font14 = Content.Load<SpriteFont>("font14"); font24 = Content.Load<SpriteFont>("font24"); font12 = Content.Load<SpriteFont>("czcionkaFloty"); font10 = Content.Load<SpriteFont>("font10"); font28 = Content.Load<SpriteFont>("font28"); font20 = Content.Load<SpriteFont>("font20"); } and the culture name should be "ru-RU", not "ru_RU", so the Cyrillic fonts were never loaded.

    Read the article

  • Combine 3D objects in XNA 4

    - by Christoph
    Currently I am writing on my thesis for university, the theme I am working on is 3D Visualization of hierarchical structures using cone trees. I want to do is to draw a cone and arrange a number of spheres at the bottom of the cone. The spheres should be arranged according to the radius and the number of spheres correctly. As you can imagine I need a lot of these cone/sphere combinations. First Attempt I was able to find some tutorials that helped with drawing cones and spheres. Cone public Cone(GraphicsDevice device, float height, int tessellation, string name, List<Sphere> children) { //prepare children and calculate the children spacing and radius of the cone if (children == null || children.Count == 0) { throw new ArgumentNullException("children"); } this.Height = height; this.Name = name; this.Children = children; //create the cone if (tessellation < 3) { throw new ArgumentOutOfRangeException("tessellation"); } //Create a ring of triangels around the outside of the cones bottom for (int i = 0; i < tessellation; i++) { Vector3 normal = this.GetCircleVector(i, tessellation); // add the vertices for the top of the cone base.AddVertex(Vector3.Up * height, normal); //add the bottom circle base.AddVertex(normal * this.radius + Vector3.Down * height, normal); //Add indices base.AddIndex(i * 2); base.AddIndex(i * 2 + 1); base.AddIndex((i * 2 + 2) % (tessellation * 2)); base.AddIndex(i * 2 + 1); base.AddIndex((i * 2 + 3) % (tessellation * 2)); base.AddIndex((i * 2 + 2) % (tessellation * 2)); } //create flate triangle to seal the bottom this.CreateCap(tessellation, height, this.Radius, Vector3.Down); base.InitializePrimitive(device); } Sphere public void Initialize(GraphicsDevice device, Vector3 qi) { int verticalSegments = this.Tesselation; int horizontalSegments = this.Tesselation * 2; //single vertex on the bottom base.AddVertex((qi * this.Radius) + this.lowering, Vector3.Down); for (int i = 0; i < verticalSegments; i++) { float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2; float dy = (float)Math.Sin(latitude); float dxz = (float)Math.Cos(latitude); //Create a singe ring of latitudes for (int j = 0; j < horizontalSegments; j++) { float longitude = j * MathHelper.TwoPi / horizontalSegments; float dx = (float)Math.Cos(longitude) * dxz; float dz = (float)Math.Sin(longitude) * dxz; Vector3 normal = new Vector3(dx, dy, dz); base.AddVertex(normal * this.Radius, normal); } } // Finish with a single vertex at the top of the sphere. AddVertex((qi * this.Radius) + this.lowering, Vector3.Up); // Create a fan connecting the bottom vertex to the bottom latitude ring. for (int i = 0; i < horizontalSegments; i++) { AddIndex(0); AddIndex(1 + (i + 1) % horizontalSegments); AddIndex(1 + i); } // Fill the sphere body with triangles joining each pair of latitude rings. for (int i = 0; i < verticalSegments - 2; i++) { for (int j = 0; j < horizontalSegments; j++) { int nextI = i + 1; int nextJ = (j + 1) % horizontalSegments; base.AddIndex(1 + i * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); } } // Create a fan connecting the top vertex to the top latitude ring. 
for (int i = 0; i < horizontalSegments; i++) { base.AddIndex(CurrentVertex - 1); base.AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments); base.AddIndex(CurrentVertex - 2 - i); } base.InitializePrimitive(device); } The tricky part now is to arrange the spheres at the bottom of the cone. I tried is to draw just the cone and then draw the spheres. I need a lot of these cones, so it would be pretty hard to calculate all the positions correctly. Second Attempt So the second try was to generate a object that builds all vertices of the cone and all of the spheres at once. So I was hoping to render a cone with all its spheres arranged correctly. After a short debug I found out that the cone is created and the first sphere, when it turn of the second sphere I am running into an OutOfBoundsException of ushort.MaxValue. Cone and Spheres public ConeWithSpheres(GraphicsDevice device, float height, float coneDiameter, float sphereDiameter, int coneTessellation, int sphereTessellation, int numberOfSpheres) { if (coneTessellation < 3) { throw new ArgumentException(string.Format("{0} is to small for the tessellation of the cone. The number must be greater or equal to 3", coneTessellation)); } if (sphereTessellation < 3) { throw new ArgumentException(string.Format("{0} is to small for the tessellation of the sphere. The number must be greater or equal to 3", sphereTessellation)); } //set properties this.Height = height; this.ConeDiameter = coneDiameter; this.SphereDiameter = sphereDiameter; this.NumberOfChildren = numberOfSpheres; //end set properties //generate the cone this.GenerateCone(device, coneTessellation); //generate the spheres //vector that defines the Y position of the sphere on the cones bottom Vector3 lowering = new Vector3(0, 0.888f, 0); this.GenerateSpheres(device, sphereTessellation, numberOfSpheres, lowering); } // ------ GENERATE CONE ------ private void GenerateCone(GraphicsDevice device, int coneTessellation) { int doubleTessellation = coneTessellation * 2; //Create a ring of triangels around the outside of the cones bottom for (int index = 0; index < coneTessellation; index++) { Vector3 normal = this.GetCircleVector(index, coneTessellation); //add the vertices for the top of the cone base.AddVertex(Vector3.Up * this.Height, normal); //add the bottom of the cone base.AddVertex(normal * this.ConeRadius + Vector3.Down * this.Height, normal); //add indices base.AddIndex(index * 2); base.AddIndex(index * 2 + 1); base.AddIndex((index * 2 + 2) % doubleTessellation); base.AddIndex(index * 2 + 1); base.AddIndex((index * 2 + 3) % doubleTessellation); base.AddIndex((index * 2 + 2) % doubleTessellation); } //create flate triangle to seal the bottom this.CreateCap(coneTessellation, this.Height, this.ConeRadius, Vector3.Down); base.InitializePrimitive(device); } // ------ GENERATE SPHERES ------ private void GenerateSpheres(GraphicsDevice device, int sphereTessellation, int numberOfSpheres, Vector3 lowering) { int verticalSegments = sphereTessellation; int horizontalSegments = sphereTessellation * 2; for (int childCount = 1; childCount < numberOfSpheres; childCount++) { //single vertex at the bottom of the sphere base.AddVertex((this.GetCircleVector(childCount, this.NumberOfChildren) * this.SphereRadius) + lowering, Vector3.Down); for (int verticalSegmentsCount = 0; verticalSegmentsCount < verticalSegments; verticalSegmentsCount++) { float latitude = ((verticalSegmentsCount + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2; float dy = (float)Math.Sin(latitude); float dxz = 
(float)Math.Cos(latitude); //create a single ring of latitudes for (int horizontalSegmentsCount = 0; horizontalSegmentsCount < horizontalSegments; horizontalSegmentsCount++) { float longitude = horizontalSegmentsCount * MathHelper.TwoPi / horizontalSegments; float dx = (float)Math.Cos(longitude) * dxz; float dz = (float)Math.Sin(longitude) * dxz; Vector3 normal = new Vector3(dx, dy, dz); base.AddVertex((normal * this.SphereRadius) + lowering, normal); } } //finish with a single vertex at the top of the sphere base.AddVertex((this.GetCircleVector(childCount, this.NumberOfChildren) * this.SphereRadius) + lowering, Vector3.Up); //create a fan connecting the bottom vertex to the bottom latitude ring for (int i = 0; i < horizontalSegments; i++) { base.AddIndex(0); base.AddIndex(1 + (i + 1) % horizontalSegments); base.AddIndex(1 + i); } //Fill the sphere body with triangles joining each pair of latitude rings for (int i = 0; i < verticalSegments - 2; i++) { for (int j = 0; j < horizontalSegments; j++) { int nextI = i + 1; int nextJ = (j + 1) % horizontalSegments; base.AddIndex(1 + i * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); } } //create a fan connecting the top vertiex to the top latitude for (int i = 0; i < horizontalSegments; i++) { base.AddIndex(this.CurrentVertex - 1); base.AddIndex(this.CurrentVertex - 2 - (i + 1) % horizontalSegments); base.AddIndex(this.CurrentVertex - 2 - i); } base.InitializePrimitive(device); } } Any ideas how I could fix this?
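    The ushort.MaxValue overflow suggests the combined mesh has passed the 65535-vertex limit of a 16-bit index buffer, which the second sphere is the first to cross. A minimal sketch of the batching arithmetic, using assumed Batch bookkeeping rather than the original ConeWithSpheres fields:

        #include <cstddef>
        #include <vector>

        struct Batch { std::size_t vertexCount = 0; };

        std::size_t SphereVertexCount(int tessellation)
        {
            const int vertical   = tessellation;
            const int horizontal = tessellation * 2;
            return 2 + std::size_t(vertical) * horizontal;   // two poles plus the latitude rings
        }

        // Returns the batch the next sphere should go into, opening a new one on overflow.
        Batch& BatchForNextSphere(std::vector<Batch>& batches, int tessellation)
        {
            const std::size_t limit  = 65535;                     // largest 16-bit index
            const std::size_t needed = SphereVertexCount(tessellation);
            if (batches.empty() || batches.back().vertexCount + needed > limit)
                batches.emplace_back();
            batches.back().vertexCount += needed;
            return batches.back();
        }

    The alternative is a 32-bit index buffer (IndexElementSize.ThirtyTwoBits in XNA), if the target graphics profile allows it; otherwise each batch has to be drawn as its own primitive, and the indices inside a batch must be offset relative to that batch's first vertex.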

    Read the article

  • How to move objects smoothly, like swimming around

    - by philipp
    I have a Box2D project that creates a view where the user looks down from the sky onto water, or perhaps onto a bathtub filled with water or something like that. The object which holds the fluid doesn't actually matter; what matters is the movement of the bodies, because they should move like drops of grease on a soup, or wood on water. I can even imagine the fluid being like mercury: extremely heavy and "lazy". How can I manipulate the bodies (every frame, or from time to time) to make them move like this? I started by randomly manipulating their linear velocity, but it turned out that this is not very smooth and looks quite harsh. Is it a better idea to check their velocity and apply impulses? Is there any example? Greetings, philipp
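    A minimal sketch of the usual approach, assuming plain Box2D: give the bodies strong linear damping and nudge them with small forces each step instead of overwriting their velocity, so the damping smooths the motion out. The drift values are made-up placeholders.

        #include <Box2D/Box2D.h>
        #include <cstdlib>

        void SetupFloatingBody(b2Body* body)
        {
            body->SetLinearDamping(2.0f);    // resists motion, like thick water
            body->SetAngularDamping(1.0f);
        }

        // Call once per step: a small random force blended with damping stays smooth.
        void ApplyDrift(b2Body* body, float strength)
        {
            float fx = strength * (float(std::rand()) / RAND_MAX - 0.5f);
            float fy = strength * (float(std::rand()) / RAND_MAX - 0.5f);
            body->ApplyForceToCenter(b2Vec2(fx, fy));   // newer Box2D versions take an extra 'wake' bool
        }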

    Read the article

  • SEHException thrown using the Microsoft XACT Audio Framework (XACT3)

    - by Sweta Dwivedi
    I have been developing a game using Kinect + XNA, using the Microsoft Audio Creation Tool (XACT3) for managing my sound files and music. However, an SEHException is thrown whenever the code tries to get the wave file from the wave bank. Sometimes the code works magically, and then all of a sudden it will start throwing this exception randomly. I need help solving this exception. /*Declaring Audio Engine for music*/ AudioEngine engine; SoundBank soundBank; WaveBank waveBank; Cue cue; /*Declaring Audio engine for sound effects*/ AudioEngine engine1; SoundBank soundbank; WaveBank wavebank; Cue effect; engine = new AudioEngine(@"Content\therapy.xgs"); soundBank = new SoundBank(engine, @"Content\Sound Bank.xsb"); **waveBank = new WaveBank(engine, @"Content\Wave Bank.xwb");** cue = null; engine1 = new AudioEngine(@"Content\Music_Manager\Sound_effects.xgs"); soundbank = new SoundBank(engine1, @"Content\Music_Manager\Sound1.xsb"); **wavebank = new WaveBank(engine1, @"Content\Music_Manager\Wave1.xwb");** effect = null; cue = soundBank.GetCue("hypnotizing"); cue.Play();

    Read the article

  • Draw multiple objects with textures

    - by Simplex
    I want to draw cubes using textures. void OperateWithMainMatrix(ESContext* esContext, GLfloat offsetX, GLfloat offsetY, GLfloat offsetZ) { UserData *userData = (UserData*) esContext->userData; ESMatrix modelview; ESMatrix perspective; //Manipulation with matrix ... glVertexAttribPointer(userData->positionLoc, 3, GL_FLOAT, GL_FALSE, 0, cubeFaces); //cubeFaces holds the cube's vertex coordinates glVertexAttribPointer(userData->normalLoc, 3, GL_FLOAT, GL_FALSE, 0, cubeFaces); //for normals (used in the fragment shader for texturing) glEnableVertexAttribArray(userData->positionLoc); glEnableVertexAttribArray(userData->normalLoc); // Load the MVP matrix glUniformMatrix4fv(userData->mvpLoc, 1, GL_FALSE, (GLfloat*)&userData->mvpMatrix.m[0][0]); //Bind base map glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_CUBE_MAP, userData->baseMapTexId); //Set the base map sampler to texture unit 0 glUniform1i(userData->baseMapLoc, 0); // Draw the cube glDrawArrays(GL_TRIANGLES, 0, 36); } (the coordinate transformation is done in OperateWithMainMatrix()) Then the Draw() function is called: void Draw(ESContext *esContext) { UserData *userData = esContext->userData; // Set the viewport glViewport(0, 0, esContext->width, esContext->height); // Clear the color buffer glClear(GL_COLOR_BUFFER_BIT); // Use the program object glUseProgram(userData->programObject); OperateWithMainMatrix(esContext, 0.0f, 0.0f, 0.0f); eglSwapBuffers(esContext->eglDisplay, esContext->eglSurface); } This works fine, but if I try to draw multiple cubes (for example with the following code): void Draw(ESContext *esContext) { ... // Use the program object glUseProgram(userData->programObject); OperateWithMainMatrix(esContext, 2.0f, 0.0f, 0.0f); OperateWithMainMatrix(esContext, 1.0f, 0.0f, 0.0f); OperateWithMainMatrix(esContext, 0.0f, 0.0f, 0.0f); OperateWithMainMatrix(esContext, -1.0f, 0.0f, 0.0f); OperateWithMainMatrix(esContext, -2.0f, 0.0f, 0.0f); eglSwapBuffers(esContext->eglDisplay, esContext->eglSurface); } the side faces overlap the front faces: the side face of the right cube overlaps the front face of the center cube. How can I remove this effect and display multiple cubes correctly?
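    A likely cause is a missing depth test: without one, faces are drawn in submission order regardless of distance. A minimal sketch, assuming OpenGL ES 2.0 and an EGL config that actually requests a depth buffer (EGL_DEPTH_SIZE of 16 or more):

        #include <GLES2/gl2.h>

        // Call once after the context is created.
        void EnableDepthTesting(void)
        {
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LEQUAL);
        }

        // At the start of each frame, clear depth together with color.
        void BeginFrame(void)
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        }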

    Read the article

  • How can I calculate a vertex normal for a hard edge?

    - by K.G.
    Here is a picture of a lovely polygon: Circled is a vertex, and numbered are its adjacent faces. I have calculated the normals of those faces as such (not yet normalized, 0-indexed): Vertex 1 normal 0: 0.000000 0.000000 -0.250000 Vertex 1 normal 1: 0.000000 0.000000 -0.250000 Vertex 1 normal 2: -0.250000 0.000000 0.000000 Vertex 1 normal 3: -0.250000 0.000000 0.000000 Vertex 1 normal 4: 0.250000 0.000000 0.000000 What I'm wondering is, how can I determine, taken as given that I want this vertex to represent a hard edge, whether its normal should be the normal of 1/2 or 3/4? My plan after I glanced at the sketch I used to put this together was "Ha! I'll just use whichever two faces have the same normal!" and now I see that there are two sets of two faces for which this is true. Is there a rule I can apply based on the face winding, angle of the adjacent edges, moon phase, coin flip, to consistently choose a normal direction for this box? For the record, all of the other polygons I plan to use will have their normals dictated in Maya, but after encountering this problem, it made me really curious.

    Read the article

  • Light following me around the room. Something is wrong with my shader!

    - by Robinson
    I'm trying to do a spot (Blinn) light, with falloff and attenuation. It seems to be working OK except I have a bit of a space problem. That is, whenever I move the camera the light moves to maintain the same relative position, rather than changing with the camera. This results in the light moving around, i.e. not always falling on the same surfaces. It's as if there's a flashlight attached to the camera. I'm transforming the lights beforehand into view space, so Light_Position and Light_Direction are already in eye space (I hope!). I made a little movie of what it looks like here: My camera rotating around a point inside a box. The light is fixed in the centre up and its "look at" point in a fixed position in front of it. As you can see, as the camera rotates around the origin (always looking at the centre), so don't think the box is rotating (!). The lighting follows it around. To start, some code. This is how I'm transforming the light into view space (it gets passed into the shader already in view space): // Compute eye-space light position. Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition; MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition); // Compute eye-space light direction vector. Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition); MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection); MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection); Can anyone give me a clue as to what I'm doing wrong here? I think the light should remain looking at a fixed point on the box, regardless of the camera orientation. Here are the vertex and pixel shaders: /////////////////////////////////////////////////// // Vertex Shader /////////////////////////////////////////////////// #version 420 /////////////////////////////////////////////////// // Uniform Buffer Structures /////////////////////////////////////////////////// // Camera. layout (std140) uniform Camera { mat4 Camera_View; mat4 Camera_ViewInverseTranspose; mat4 Camera_Projection; }; // Matrices per model. layout (std140) uniform Model { mat4 Model_World; mat4 Model_WorldView; mat4 Model_WorldViewInverseTranspose; mat4 Model_WorldViewProjection; }; // Spotlight. 
layout (std140) uniform OmniLight { float Light_Intensity; vec3 Light_Position; vec3 Light_Direction; vec4 Light_Ambient_Colour; vec4 Light_Diffuse_Colour; vec4 Light_Specular_Colour; float Light_Attenuation_Min; float Light_Attenuation_Max; float Light_Cone_Min; float Light_Cone_Max; }; /////////////////////////////////////////////////// // Streams (per vertex) /////////////////////////////////////////////////// layout(location = 0) in vec3 attrib_Position; layout(location = 1) in vec3 attrib_Normal; layout(location = 2) in vec3 attrib_Tangent; layout(location = 3) in vec3 attrib_BiNormal; layout(location = 4) in vec2 attrib_Texture; /////////////////////////////////////////////////// // Output streams (per vertex) /////////////////////////////////////////////////// out vec3 attrib_Fragment_Normal; out vec4 attrib_Fragment_Position; out vec2 attrib_Fragment_Texture; out vec3 attrib_Fragment_Light; out vec3 attrib_Fragment_Eye; /////////////////////////////////////////////////// // Main /////////////////////////////////////////////////// void main() { // Transform normal into eye space attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz; // Transform vertex into eye space (world * view * vertex = eye) vec4 position = Model_WorldView * vec4(attrib_Position, 1.0); // Compute vector from eye space vertex to light (light is in eye space already) attrib_Fragment_Light = Light_Position - position.xyz; // Compute vector from the vertex to the eye (which is now at the origin). attrib_Fragment_Eye = -position.xyz; // Output texture coord. attrib_Fragment_Texture = attrib_Texture; // Compute vertex position by applying camera projection. gl_Position = Camera_Projection * position; } and the pixel shader: /////////////////////////////////////////////////// // Pixel Shader /////////////////////////////////////////////////// #version 420 /////////////////////////////////////////////////// // Samplers /////////////////////////////////////////////////// uniform sampler2D Map_Diffuse; /////////////////////////////////////////////////// // Global Uniforms /////////////////////////////////////////////////// // Material. layout (std140) uniform Material { vec4 Material_Ambient_Colour; vec4 Material_Diffuse_Colour; vec4 Material_Specular_Colour; vec4 Material_Emissive_Colour; float Material_Shininess; float Material_Strength; }; // Spotlight. layout (std140) uniform OmniLight { float Light_Intensity; vec3 Light_Position; vec3 Light_Direction; vec4 Light_Ambient_Colour; vec4 Light_Diffuse_Colour; vec4 Light_Specular_Colour; float Light_Attenuation_Min; float Light_Attenuation_Max; float Light_Cone_Min; float Light_Cone_Max; }; /////////////////////////////////////////////////// // Input streams (per vertex) /////////////////////////////////////////////////// in vec3 attrib_Fragment_Normal; in vec3 attrib_Fragment_Position; in vec2 attrib_Fragment_Texture; in vec3 attrib_Fragment_Light; in vec3 attrib_Fragment_Eye; /////////////////////////////////////////////////// // Result /////////////////////////////////////////////////// out vec4 Out_Colour; /////////////////////////////////////////////////// // Main /////////////////////////////////////////////////// void main(void) { // Compute N dot L. vec3 N = normalize(attrib_Fragment_Normal); vec3 L = normalize(attrib_Fragment_Light); vec3 E = normalize(attrib_Fragment_Eye); vec3 H = normalize(L + E); float NdotL = clamp(dot(L,N), 0.0, 1.0); float NdotH = clamp(dot(N,H), 0.0, 1.0); // Compute ambient term. 
vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour; // Diffuse. vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL; // Specular. float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength; vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity; // Light attenuation (so we don't have to use 1 - x, we step between Max and Min). float d = length(-attrib_Fragment_Light); float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d); // Adjust attenuation based on light cone. float LdotS = dot(-L, Light_Direction), CosI = Light_Cone_Min - Light_Cone_Max; attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0); // Final colour. Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation; }
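    A minimal sketch of one way to keep the light fixed in the world, reusing the names from the C++ snippet in the question rather than a self-contained program: transform both the light position and its look-at point by the current view matrix each frame, then build the direction from the two view-space points, so no inverse-transpose transform of a world-space direction is involved.

        // Transform both points with the same view matrix...
        Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
        Math::Vector3d eyeSpaceLookAt   = MyCamera->ViewMatrix() * MyLightLookAt;

        // ...and derive the spot direction from the two view-space points.
        Math::Vector3d eyeSpaceDirection = Math::Unit(eyeSpaceLookAt - eyeSpacePosition);

        MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);
        MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);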

    Read the article

  • Scaling Down Pixel Art?

    - by Michael Stum
    There are plenty of algorithms for scaling up pixel art (I prefer hqx personally), but are there any notable algorithms for scaling it down? In my case, the game is designed to run at 1280x720, but if someone plays at a lower resolution I want it to still look good. Most pixel-art discussions center around 320x200 or 640x480 and upscaling for use in console emulators, but I wonder how modern 2D games like the Monkey Island Remake manage to look good at lower resolutions? (Ignoring the option of having multiple versions of the assets (essentially, mipmapping).)

    Read the article

  • Pong Collision Help in C# w/ XNA

    - by Ramses Brown
    Edit: My goal is to have it function like this: ball hits the 1st quarter = rebounds higher (aka Y++); ball hits the 2nd quarter = rebounds higher (using a random value); ball hits the 3rd quarter = rebounds lower (using a random value); ball hits the 4th quarter = rebounds lower (aka Y--). I'm currently using rectangle collision for my collision detection, and it's worked. Now I wish to expand it. Instead of it simply detecting whether or not the paddle and ball intersect, I want to make it so that it can determine which section of the paddle gets hit. I want it in 4 parts, with each having a different reaction to impact. My first thought is to base it on the ball's Y position compared to the paddle's Y position, but since I want it in 4 parts, I don't know how to do that. So it would essentially be if (ball.Y > Paddle.Y) { PaddleSection1 = true; } except modified so that instead of being top half/bottom half, it's 1st quarter, etc.
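    A minimal sketch of the quarter lookup, written as a small C++ helper (the same arithmetic ports directly to the XNA code): normalise where the ball hit along the paddle's height and map that to one of four sections. All names are assumptions.

        #include <algorithm>

        // Returns 0..3: 0 = top quarter of the paddle, 3 = bottom quarter.
        int PaddleSection(float ballCenterY, float paddleTopY, float paddleHeight)
        {
            float t = (ballCenterY - paddleTopY) / paddleHeight;   // 0 at the top, 1 at the bottom
            t = std::min(std::max(t, 0.0f), 1.0f);
            return std::min(int(t * 4.0f), 3);                     // four equal sections
        }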

    Read the article

  • Distributed C++ game server which uses a database

    - by Slav
    Hello. My C++ turn-based game server (which uses a database) can no longer stand up to the current average number of clients (players), so I want to expand it to multiple (more than one) computers and databases, where all clients will still remain within a single game world (the servers must communicate with each other and use multiple databases). Are there tutorials/books/common standards which explain how to do this in the best way?

    Read the article

  • 2D Platformer Collision Handling

    - by defender-zone
    Hello, everyone! I am trying to create a 2D platformer (Mario-type) game and I am some having some issues with handling collisions properly. I am writing this game in C++, using SDL for input, image loading, font loading, etcetera. I am also using OpenGL via the FreeGLUT library in conjunction with SDL to display graphics. My method of collision detection is AABB (Axis-Aligned Bounding Box), which is really all I need to start with. What I need is an easy way to both detect which side the collision occurred on and handle the collisions properly. So, basically, if the player collides with the top of the platform, reposition him to the top; if there is a collision to the sides, reposition the player back to the side of the object; if there is a collision to the bottom, reposition the player under the platform. I have tried many different ways of doing this, such as trying to find the penetration depth and repositioning the player backwards by the penetration depth. Sadly, nothing I've tried seems to work correctly. Player movement ends up being very glitchy and repositions the player when I don't want it to. Part of the reason is probably because I feel like this is something so simple but I'm over-thinking it. If anyone thinks they can help, please take a look at the code below and help me try to improve on this if you can. I would like to refrain from using a library to handle this (as I want to learn on my own) or the something like the SAT (Separating Axis Theorem) if at all possible. Thank you in advance for your help! void world1Level1CollisionDetection() { for(int i; i < blocks; i++) { if (de2dCheckCollision(ball,block[i],0.0f,0.0f)==true) { int up = 0; int left = 0; int right = 0; int down = 0; if(ball.coords[0] < block[i].coords[0] && block[i].coords[0] < ball.coords[2] && ball.coords[2] < block[i].coords[2]) { left = 1; } if(block[i].coords[0] < ball.coords[0] && ball.coords[0] < block[i].coords[2] && block[i].coords[2] < ball.coords[2]) { right = 1; } if(ball.coords[1] < block[i].coords[1] && block[i].coords[1] < ball.coords[3] && ball.coords[3] < block[i].coords[3]) { up = 1; } if(block[i].coords[1] < ball.coords[1] && ball.coords[1] < block[i].coords[3] && block[i].coords[3] < ball.coords[3]) { down = 1; } cout << left << ", " << right << ", " << up << ", " << down << ", " << endl; if (left == 1) { ball.coords[0] = block[i].coords[0] - 16.0f; ball.coords[2] = block[i].coords[0] - 0.0f; } if (right == 1) { ball.coords[0] = block[i].coords[2] + 0.0f; ball.coords[2] = block[i].coords[2] + 16.0f; } if (down == 1) { ball.coords[1] = block[i].coords[3] + 0.0f; ball.coords[3] = block[i].coords[3] + 16.0f; } if (up == 1) { ball.yspeed = 0.0f; ball.gravity = 0.0f; ball.coords[1] = block[i].coords[1] - 16.0f; ball.coords[3] = block[i].coords[1] - 0.0f; } } if (de2dCheckCollision(ball,block[i],0.0f,0.0f)==false) { ball.gravity = -0.5f; } } } To explain what some of this code means: The blocks variable is basically an integer that is storing the amount of blocks, or platforms. I am checking all of the blocks using a for loop, and the number that the loop is currently on is represented by integer i. The coordinate system might seem a little weird, so that's worth explaining. coords[0] represents the x position (left) of the object (where it starts on the x axis). coords[1] represents the y position (top) of the object (where it starts on the y axis). coords[2] represents the width of the object plus coords[0] (right). coords[3] represents the height of the object plus coords[1] (bottom). 
de2dCheckCollision performs an AABB collision detection. Up is negative y and down is positive y, as it is in most games. Hopefully I have provided enough information for someone to help me successfully. If there is something I left out that might be crucial, let me know and I'll provide the necessary information. Finally, for anyone who can help, providing code would be very helpful and much appreciated. Thank you again for your help!
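    A minimal sketch of minimum-penetration resolution for two AABBs, which also tells you which side was hit: compute the overlap on each axis and push the player out along the smaller one. The Box struct and field names are assumptions, not the original ball/block types.

        #include <algorithm>

        struct Box { float left, top, right, bottom; };   // y grows downward, as in the question

        // Pushes 'player' out of 'solid' along the axis of least penetration.
        // Returns true on contact; hitFromAbove is set when the player lands on top.
        bool ResolveAABB(Box& player, const Box& solid, bool& hitFromAbove)
        {
            float overlapX = std::min(player.right,  solid.right)  - std::max(player.left, solid.left);
            float overlapY = std::min(player.bottom, solid.bottom) - std::max(player.top,  solid.top);
            if (overlapX <= 0.0f || overlapY <= 0.0f) return false;   // no collision

            if (overlapX < overlapY) {                                // horizontal push-out
                float dir = (player.left + player.right < solid.left + solid.right) ? -1.0f : 1.0f;
                player.left += dir * overlapX;  player.right += dir * overlapX;
                hitFromAbove = false;
            } else {                                                  // vertical push-out
                float dir = (player.top + player.bottom < solid.top + solid.bottom) ? -1.0f : 1.0f;
                player.top += dir * overlapY;   player.bottom += dir * overlapY;
                hitFromAbove = (dir < 0.0f);                          // pushed upward => landed on top
            }
            return true;
        }

    Resolving the shallower axis first is what stops the "glitchy" corrections: the player standing on a platform is pushed up by a tiny amount instead of being snapped sideways.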

    Read the article

  • Grid pathfinding with a lot of entities

    - by Vee
    I'd like to explain this problem with a screenshot from a released game, DROD: Gunthro's Epic Blunder, by Caravel Games. The game is turn-based and tile-based. I'm trying to create something very similar (a clone of the game), and I've got most of the fundamentals done, but I'm having trouble implementing pathfinding. Look at the screenshot. The guys in yellow are friendly, and want to kill the roaches. Every turn, every guy in yellow pathfinds to the closest roach, and every roach pathfinds to the closest guy in yellow. By closest I mean the target with the shortest path, not a simple distance calculation. All of this without any kind of slowdown when loading the level or when passing turns. And all of the entities change position every turn. Also (not shown in screenshot), there can be doors that open and close and change the level's layout. Impressive. I've tried implementing pathfinding in my clone. First attempt was making every roach find a path to a yellow guy every turn, using a breadth-first search algorithm. Obviously incredibly slow with more than a single roach, and would get exponentially slower with more than a single yellow guy. Second attempt was mas making every yellow guy generate a pathmap (still breadth-first search) every time he moved. Worked perfectly with multiple roaches and a single yellow guy, but adding more yellow guys made the game slow and unplayable. Last attempt was implementing JPS (jump point search). Every entity would individually calculate a path to its target. Fast, but with a limited number of entities. Having less than half the entities in the screenshot would make the game slow. And also, I had to get the "closest" enemy by calculating distance, not shortest path. I've asked on the DROD forums how they did it, and a user replied that it was breadth-first search. The game is open source, and I took a look at the source code, but it's C++ (I'm using C#) and I found it confusing. I don't know how to do it. Every approach I tried isn't good enough. And I believe that DROD generates global pathmaps, somehow, but I can't understand how every entity find the best individual path to other entities that move every turn. What's the trick? This is a reply I just got on the DROD forums: Without having looked at the code I'd wager it's two (or so) pathmaps for the whole room: One to the nearest enemy, and one to the nearest friendly for every tile. There's no need to make a separate pathmap for every entity when the overall goal is "move towards nearest enemy/friendly"... just mark every tile with the number of moves it takes to the nearest target and have the entity chose the move that takes it to the tile with the lowest number. To be honest, I don't understand it that well.
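    A minimal sketch of the shared-pathmap idea described in the quoted reply: one breadth-first flood fill per target type marks every walkable tile with its distance to the nearest target, and each entity then just steps onto the lowest-numbered neighbouring tile. Grid layout and walkability are assumptions.

        #include <limits>
        #include <queue>
        #include <utility>
        #include <vector>

        using Grid    = std::vector<std::vector<bool>>;   // true = walkable
        using DistMap = std::vector<std::vector<int>>;    // moves to the nearest target

        DistMap BuildPathmap(const Grid& walkable,
                             const std::vector<std::pair<int,int>>& targets)
        {
            const int unreached = std::numeric_limits<int>::max();
            const int h = int(walkable.size()), w = int(walkable[0].size());
            DistMap dist(h, std::vector<int>(w, unreached));
            std::queue<std::pair<int,int>> open;

            for (const auto& t : targets) {               // every target starts at distance 0
                dist[t.second][t.first] = 0;
                open.push(t);
            }
            const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
            while (!open.empty()) {
                auto [x, y] = open.front(); open.pop();
                for (int d = 0; d < 4; ++d) {
                    int nx = x + dx[d], ny = y + dy[d];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (!walkable[ny][nx] || dist[ny][nx] != unreached) continue;
                    dist[ny][nx] = dist[y][x] + 1;
                    open.push({ nx, ny });
                }
            }
            return dist;   // each entity moves to the neighbouring tile with the lowest value
        }

    One map per target type (enemies, friendlies) serves every entity in the room, which is why the cost stays flat no matter how many roaches are chasing.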

    Read the article

  • MonoTouch: 2D pixel-perfect rendering at the correct resolution

    - by acidzombie24
    I am writing a game that is size sensitive. It needs to be pixel perfect. I believe the resolution is 480x320 pixels, with the iPhone being twice the width and height. My code is grid-based, with images of exactly 16x16 pixels. I found OpenGL samples in the past, but I never found a good tutorial that put 0,0 at the top left and rendered at the correct resolution (which otherwise made images look terrible). What can I use? I'd like to write the code in C# (or C++, but C# is preferred) and use MonoTouch. I don't know any libraries for 2D graphics. I'll figure out sound and such afterwards, and I've seen documentation on MonoTouch for input.
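    A minimal sketch of a pixel-for-pixel setup, assuming OpenGL ES 1.1 on iOS: an orthographic projection with the top/bottom arguments flipped puts 0,0 at the top left and maps one unit to one pixel; combine it with GL_NEAREST texture filtering to keep the 16x16 tiles crisp.

        #include <OpenGLES/ES1/gl.h>

        // Sets up a 2D projection where one unit equals one pixel and 0,0 is the top-left.
        void SetupPixelPerfect2D(int widthInPixels, int heightInPixels)
        {
            glViewport(0, 0, widthInPixels, heightInPixels);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            // left, right, bottom, top: swapping bottom/top makes the y axis point downward.
            glOrthof(0.0f, (float)widthInPixels, (float)heightInPixels, 0.0f, -1.0f, 1.0f);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }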

    Read the article

  • How do I add collision detection to my JavaScript game code below?

    - by Henry
    I'm trying to figure out how would I add collision detection to my code so that when the "Man" character touches the "RedHouse" the RedHouse disappears? Thanks. By the way, I'm new to how things are done on this site, so thus, if there is anything else needed or so, let me know. <title>HMan</title> <body style="background:#808080;"> <br> <canvas id="canvasBg" width="800px" height="500px"style="display:block;background:#ffffff;margin:100px auto 0px;"></canvas> <canvas id="canvasRedHouse" width="800px" height="500px" style="display:block;margin:-500px auto 0px;"></canvas> <canvas id="canvasEnemy" width="800px" height="500px" style="display:block;margin:-500px auto 0px;"></canvas> <canvas id="canvasEnemy2" width="800px" height="500px" style="display:block;margin:-500px auto 0px;"></canvas> <canvas id="canvasMan" width="800px" height="500px" style="display:block;margin:-500px auto 0px;"></canvas> <script> var isPlaying = false; var requestAnimframe = window.requestAnimationFrame || window.webkitRequestAnimationFrame || window.mozRequestAnimationFrame || window.msRequestAnimationFrame || window.oRequestAnimationFrame; var canvasBg = document.getElementById('canvasBg'); var ctxBg = canvasBg.getContext('2d'); var canvasRedHouse = document.getElementById('canvasRedHouse'); var ctxRedHouse = canvasRedHouse.getContext('2d'); var House1; House1 = new RedHouse(); var canvasMan = document.getElementById('canvasMan'); var ctxMan = canvasMan.getContext('2d'); var Man1; Man1 = new Man(); var imgSprite = new Image(); imgSprite.src = 'SpritesI.png'; imgSprite.addEventListener('load',init,false); function init() { drawBg(); startLoop(); document.addEventListener('keydown',checkKeyDown,false); document.addEventListener('keyup',checkKeyUp,false); } function drawBg() { var SpriteSourceX = 0; var SpriteSourceY = 0; var drawManOnScreenX = 0; var drawManOnScreenY = 0; ctxBg.drawImage(imgSprite,SpriteSourceX,SpriteSourceY,800,500,drawManOnScreenX, drawManOnScreenY,800,500); } function clearctxBg() { ctxBg.clearRect(0,0,800,500); } function Man() { this.SpriteSourceX = 10; this.SpriteSourceY = 540; this.width = 40; this.height = 115; this.DrawManOnScreenX = 100; this.DrawManOnScreenY = 260; this.speed = 10; this.actualFrame = 1; this.speed = 2; this.isUpKey = false; this.isRightKey = false; this.isDownKey = false; this.isLeftKey = false; } Man.prototype.draw = function () { clearCtxMan(); this.updateCoors(); this.checkDirection(); ctxMan.drawImage(imgSprite,this.SpriteSourceX,this.SpriteSourceY+this.height* this.actualFrame, this.width,this.height,this.DrawManOnScreenX,this.DrawManOnScreenY, this.width,this.height); } Man.prototype.updateCoors = function(){ this.leftX = this.DrawManOnScreenX; this.rightX = this.DrawManOnScreenX + this.width; this.topY = this.DrawManOnScreenY; this.bottomY = this.DrawManOnScreenY + this.height; } Man.prototype.checkDirection = function () { if (this.isUpKey && this.topY > 240) { this.DrawManOnScreenY -= this.speed; } if (this.isRightKey && this.rightX < 800) { this.DrawManOnScreenX += this.speed; } if (this.isDownKey && this.bottomY < 500) { this.DrawManOnScreenY += this.speed; } if (this.isLeftKey && this.leftX > 0) { this.DrawManOnScreenX -= this.speed; } if (this.isRightKey && this.rightX < 800) { if (this.actualFrame > 0) { this.actualFrame = 0; } else { this.actualFrame++; } } if (this.isLeftKey) { if (this.actualFrame > 2) { this.actualFrame = 2; } function checkKeyDown(var keyID = e.keyCode || e.which; if (keyID === 38) { Man1.isUpKey = true; e.preventDefault(); } if 
(keyID === 39 ) { Man1.isRightKey = true; e.preventDefault(); } if (keyID === 40 ) { Man1.isDownKey = true; e.preventDefault(); } if (keyID === 37 ) { Man1.isLeftKey = true; e.preventDefault(); } } function checkKeyUp(e) { var keyID = e.keyCode || e.which; if (keyID === 38 || keyID === 87) { Man1.isUpKey = false; e.preventDefault(); } if (keyID === 39 || keyID === 68) { Man1.isRightKey = false; e.preventDefault(); } if (keyID === 40 || keyID === 83) { Man1.isDownKey = false; e.preventDefault(); } if (keyID === 37 || keyID === 65) { Man1.isLeftKey = false; e.preventDefault(); } } function clearCtxMan() { ctxMan.clearRect(0,0,800,500); } function RedHouse() { this.srcX = 135; this.srcY = 525; this.width = 265; this.height = 245; this.drawX = 480; this.drawY = 85; } RedHouse.prototype.draw = function () { clearCtxRedHouse(); ctxRedHouse.drawImage(imgSprite,this.srcX,this.srcY, this.width,this.height,this.drawX,this.drawY,this.width,this.height); }; function clearCtxRedHouse() { ctxRedHouse.clearRect(0,0,800,500); } function loop() { if (isPlaying === true){ Man1.draw(); House1.draw(); requestAnimframe(loop); } } function startLoop(){ isPlaying = true; loop(); } function stopLoop(){ isPlaying = false; } </script> <style> .top{ position: absolute; top: 4px; left: 10px; color:black; } .top2{ position: absolute; top: 60px; left: 10px; color:black; } </style> <div class="top"> <p><font face="arial" color="black" size="4"><b>HGame</b><font/><p/> <p><font face="arial" color="black" size="3"> My Game Here <font/><p/> </div> <div class="top2"> <p><font face="arial" color="black" size="3"> It will start now <font/><p/> </div>

    Read the article

  • GLSL Shader Texture Performance

    - by Austin
    I currently have a project that renders OpenGL video using a vertex and fragment shader. The shaders work fine as-is, but in trying to add in texturing, I am running into performance issues and can't figure out why. Before adding texturing, my program ran just fine and loaded my CPU between 0%-4%. When adding texturing (specifically textures AND color -- noted by comment below), my CPU is 100% loaded. The only code I have added is the relevant texturing code to the shader, and the "glBindTexture()" calls to the rendering code. Here are my shaders and relevant rending code. Vertex Shader: #version 150 uniform mat4 mvMatrix; uniform mat4 mvpMatrix; uniform mat3 normalMatrix; uniform vec4 lightPosition; uniform float diffuseValue; layout(location = 0) in vec3 vertex; layout(location = 1) in vec3 color; layout(location = 2) in vec3 normal; layout(location = 3) in vec2 texCoord; smooth out VertData { vec3 color; vec3 normal; vec3 toLight; float diffuseValue; vec2 texCoord; } VertOut; void main(void) { gl_Position = mvpMatrix * vec4(vertex, 1.0); VertOut.normal = normalize(normalMatrix * normal); VertOut.toLight = normalize(vec3(mvMatrix * lightPosition - gl_Position)); VertOut.color = color; VertOut.diffuseValue = diffuseValue; VertOut.texCoord = texCoord; } Fragment Shader: #version 150 smooth in VertData { vec3 color; vec3 normal; vec3 toLight; float diffuseValue; vec2 texCoord; } VertIn; uniform sampler2D tex; layout(location = 0) out vec3 colorOut; void main(void) { float diffuseComp = max( dot(normalize(VertIn.normal), normalize(VertIn.toLight)) ), 0.0); vec4 color = texture2D(tex, VertIn.texCoord); colorOut = color.rgb * diffuseComp * VertIn.diffuseValue + color.rgb * (1 - VertIn.diffuseValue); // FOLLOWING LINE CAUSES PERFORMANCE ISSUES colorOut *= VertIn.color; } Relevant Rendering Code: // 3 textures have been successfully pre-loaded, and can be used // texture[0] is a 1x1 white texture to effectively turn off texturing glUseProgram(program); // Draw squares glBindTexture(GL_TEXTURE_2D, texture[1]); // Set attributes, uniforms, etc glDrawArrays(GL_QUADS, 0, 6*4); // Draw triangles glBindTexture(GL_TEXTURE_2D, texture[0]); // Set attributes, uniforms, etc glDrawArrays(GL_TRIANGLES, 0, 3*4); // Draw reference planes glBindTexture(GL_TEXTURE_2D, texture[0]); // Set attributes, uniforms, etc glDrawArrays(GL_LINES, 0, 4*81*2); // Draw terrain glBindTexture(GL_TEXTURE_2D, texture[2]); // Set attributes, uniforms, etc glDrawArrays(GL_TRIANGLES, 0, 501*501*6); // Release glBindTexture(GL_TEXTURE_2D, 0); glUseProgram(0); Any help is greatly appreciated!

    Read the article

  • Triangle Strips and Tangent Space Normal Mapping

    - by Koarl
    Short: Do triangle strips and Tangent Space Normal mapping go together? According to quite a lot of tutorials on bump mapping, it seems common practice to derive tangent space matrices in a vertex program and transform the light direction vector(s) to tangent space and then pass them on to a fragment program. However, if one was using triangle strips or index buffers, it is a given that the vertex buffer contains vertices that sit at border edges and would thus require more than one normal to derive tangent space matrices to interpolate between in fragment programs. Is there any reasonable way to not have duplicate vertices in your buffer and still use tangent space normal mapping? Which one do you think is better: Having normal and tangent encoded in the assets and just optimize the geometry handling to alleviate the cost of duplicate vertices or using triangle strips and computing normals/tangents completely at run time? Thinking about it, the more reasonable answer seems to be the first one, but why might my professor still be fussing about triangle strips when it seems so obvious?

    Read the article

  • XNA: Auto-populate content within the content project based on current folder/file structure and content management for large games

    - by Joe
    1) Is it possible to implement a system where I can simply drop a new image into my content project's folder and VS will automatically see that and bring it into the project for compiling? 2) Similarly, if I wanted a specific texture I could state something like var texture = Game.Assets.Image["backgrounds/sky_02"]; (where Game is the standard XNA Game class and Assets is some kind of content manager statically defined within Game). I know this is fairly simple to implement manually and have done such things in the past (static Dictionary defined within Game) except this only works for relatively small games where you can have all assets loaded at the start without much issue. How would you go about making this work for games where content is loaded and unloaded based on level / area? I'm not asking for the solution, just how you would go about this and what things you would have to be aware of. Thanks.

    Read the article

  • Question about mipmaps + anisotropic filtering

    - by Telanor
    I'm a bit confused here and maybe someone can explain this to me. I created a simple test texture for my terrain which is nothing more than a solid green color with a black grid overlayed on top of it. If I look at the terrain in the distance with mipmapping on and linear filtering, the grid lines become blurry fairly quickly and further back the grid is pretty much invisible. With these settings, I don't get any moire patterns at all. If I turn on anisotropic filtering, however, the higher the anisotropic level, the more the terrain looks like it did with without mipmapping. The lines are much crisper nearby but in the distance I start to see terrible moire patterns. My understanding was that mipmapping is supposed to get rid of moire patterns. I've always had anisotropic filtering on in every game I play and I've never noticed any moire patterns as a result, so I don't understand why it's happening in my game. I am using logarithmic depth however, could that be causing any problems? And if it is, how do I resolve it? I've created my sampler state like so (I'm using slimdx): ssa = SamplerState.FromDescription(Engine.Device, new SamplerDescription { AddressU = TextureAddressMode.Clamp, AddressV = TextureAddressMode.Clamp, AddressW = TextureAddressMode.Clamp, Filter = Filter.Anisotropic, MaximumAnisotropy = anisotropicLevel, MinimumLod = 0, MaximumLod = float.MaxValue });

    Read the article

  • What are the possible options for AI path-finding etc. when the world is "partitioned"?

    - by Sebastien Diot
    If you anticipate a large persistent game world, and you don't want to end up with some game server crashing due to overload, then you have to design from the ground up a game world that is partitioned into chunks. This is particularly true if you want to run your game servers in the cloud, where each individual VM is relatively weak, and memory and CPU are at a premium. I think the biggest challenge here is that the player receives all the parts around the location of the avatar, but mobs/monsters are normally located in the server itself, and can only directly access the data about the part of the world that the server owns. So how can we make the AI behave realistically in that context? It could send queries to the other servers that own the neighboring parts, but that sounds rather network-intensive and latency-prone. It would probably be more performant for each mob's AI to be spread over the neighboring parts, and to proactively send the relevant info to the part that contains the actual mob at the moment. That would also reduce the stress when a mob crosses a border between two parts, and therefore "switches server". Have you heard of any AI design that solves these issues? Some kind of distributed AI brain? Maybe some kind of "agent" community working together through message passing?

    Read the article

  • SDL_image & OpenGL Problem

    - by Dylan
    i've been following tutorials online to load textures using SDL and display them on a opengl quad. but ive been getting weird results that no one else on the internet seems to be getting... so when i render the texture in opengl i get something like this. http://www.kiddiescissors.com/after.png when the original .bmp file is this: http://www.kiddiescissors.com/before.bmp ive tried other images too, so its not that this particular image is corrupt. it seems like my rgb channels are all jumbled or something. im pulling my hair out at this point. heres the relevant code from my init() function if ( SDL_Init(SDL_INIT_VIDEO) != 0 ) { return 1; } SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1); SDL_GL_SetAttribute( SDL_GL_RED_SIZE, 8 ); SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE, 8 ); SDL_GL_SetAttribute( SDL_GL_BLUE_SIZE, 8 ); SDL_GL_SetAttribute( SDL_GL_ALPHA_SIZE, 8 ); SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1); SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4); SDL_SetVideoMode(WINDOW_WIDTH, WINDOW_HEIGHT, 32, SDL_HWSURFACE | SDL_GL_DOUBLEBUFFER | SDL_OPENGL); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(50, (GLfloat)WINDOW_WIDTH/WINDOW_HEIGHT, 1, 50); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glEnable(GL_DEPTH_TEST); glEnable(GL_MULTISAMPLE); glEnable(GL_TEXTURE_2D); glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); glEnable(GL_BLEND); heres the code that is called when my main player object (the one with which this sprite is associated) is initialized texture = 0; SDL_Surface* surface = IMG_Load("i.bmp"); glGenTextures(1, &texture); glBindTexture(GL_TEXTURE_2D, texture); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, surface->w, surface->h, 0, GL_RGB, GL_UNSIGNED_BYTE, surface->pixels); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); SDL_FreeSurface(surface); and then heres the relevant code from my display function glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); glColor4f(1, 1, 1, 1); glPushMatrix(); glBindTexture(GL_TEXTURE_2D, texture); glTranslatef(getCenter().x, getCenter().y, 0); glRotatef(getAngle()*(180/M_PI), 0, 0, 1); glTranslatef(-getCenter().x, -getCenter().y, 0); glBegin(GL_QUADS); glTexCoord2f(0, 0); glVertex3f(getTopLeft().x, getTopLeft().y, 0); glTexCoord2f(0, 1); glVertex3f(getTopLeft().x, getTopLeft().y + size.y, 0); glTexCoord2f(1, 1); glVertex3f(getTopLeft().x + size.x, getTopLeft().y + size.y, 0); glTexCoord2f(1, 0); glVertex3f(getTopLeft().x + size.x, getTopLeft().y, 0); glEnd(); glPopMatrix(); let me know if i left out anything important... or if you need more info from me. thanks a ton, -Dylan
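    The before/after images look like swapped red and blue channels, which is what happens when a BGR-ordered SDL surface (the usual layout for BMPs) is uploaded as GL_RGB. A minimal sketch of picking the upload format from the surface's masks, assuming desktop OpenGL (GL_BGR/GL_BGRA need GL 1.2+ headers or the _EXT variants):

        #include <SDL/SDL.h>
        #include <SDL/SDL_opengl.h>

        // Chooses the pixel format to hand to glTexImage2D based on the surface's channel masks.
        GLenum SurfaceFormat(const SDL_Surface* s)
        {
            const bool hasAlpha = (s->format->BytesPerPixel == 4);
            const bool redFirst = (s->format->Rmask == 0x000000ff);   // little-endian RGB(A) layout
            if (hasAlpha) return redFirst ? GL_RGBA : GL_BGRA;
            return redFirst ? GL_RGB : GL_BGR;
        }

        void UploadTexture(const SDL_Surface* surface)
        {
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0,
                         SurfaceFormat(surface), GL_UNSIGNED_BYTE, surface->pixels);
        }

    Converting the surface to a known 32-bit RGBA format before upload is the other common fix and avoids the mask check entirely.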

    Read the article
