Search Results

Search found 25518 results on 1021 pages for 'iterative development'.


  • How do I efficiently generate chunks to fill the entire screen when my player moves?

    - by Trixmix
    In my game I generate chunks as the player moves. The chunks are all generated on the fly; currently each one is just a simple flat 8x8 floor. What happens is that when the player moves to a new chunk, the chunk in the direction of travel gets generated, along with its neighboring chunks. This doesn't work well because the generator does not fill the entire screen. I did try recursion, but it's not as fast as I would like. My question is: what would be an efficient way of doing this? How does Minecraft do it? By that I mean only the way it PICKS which chunks to generate and in what order, not how the chunks are generated or how they are saved in regions. I just want to know a good way to load chunks around the player.
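    A minimal sketch of one common ordering (my assumption, not Minecraft's confirmed algorithm): walk outward from the player's chunk in square rings, so the nearest missing chunks are generated first and the whole view radius is eventually covered. The generateChunk hook and the chunk-key packing are hypothetical.

        import java.util.*;

        public class ChunkLoader {
            private final Set<Long> generated = new HashSet<>();

            // Pack a chunk coordinate pair into one map key.
            private static long key(int cx, int cz) {
                return ((long) cx << 32) ^ (cz & 0xffffffffL);
            }

            // Call whenever the player crosses a chunk boundary.
            public void updateAround(int playerCx, int playerCz, int viewRadius) {
                for (int r = 0; r <= viewRadius; r++) {          // ring by ring, nearest first
                    for (int dx = -r; dx <= r; dx++) {
                        for (int dz = -r; dz <= r; dz++) {
                            if (Math.max(Math.abs(dx), Math.abs(dz)) != r) continue; // ring border only
                            int cx = playerCx + dx, cz = playerCz + dz;
                            if (generated.add(key(cx, cz))) {    // true only if not seen before
                                generateChunk(cx, cz);           // hypothetical generator hook
                            }
                        }
                    }
                }
            }

            private void generateChunk(int cx, int cz) { /* fill 8x8 floor, etc. */ }
        }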

    Read the article

  • Vector.Unproject - Checking if a model intersects a large sprite

    - by Fibericon
    Let's say I have a sprite, drawn like this: spriteBatch.Draw(levelCannons[i].texture, levelCannons[i].position, null, alpha, levelCannons[i].rotation, Vector2.Zero, scale, SpriteEffects.None, 0); Picture levelCannons[i] as a laser beam that goes across the entire screen. I need to see if my 3D model intersects with the screen space inhabited by the sprite. I managed to dig up Vector.Unproject, but that seems only useful when dealing with a single point in 2D space, rather than an area. What can I do in my case?
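    A sketch of the reverse approach (an assumption, not XNA's API, written in Java for illustration): instead of unprojecting the 2D sprite, project the model's bounding-sphere centre through view*projection into pixel coordinates, then do a circle-versus-rectangle test against the sprite's screen rectangle. The screen-space radius here is a deliberately crude estimate, which is usually fine for a "does the laser hit me" check.

        public final class SpriteModelTest {

            // Row-vector convention: v * M, with M given in row-major order.
            static float[] transform(float[] v, float[] m) {
                float x = v[0]*m[0] + v[1]*m[4] + v[2]*m[8]  + m[12];
                float y = v[0]*m[1] + v[1]*m[5] + v[2]*m[9]  + m[13];
                float w = v[0]*m[3] + v[1]*m[7] + v[2]*m[11] + m[15];
                return new float[] { x / w, y / w, w };
            }

            static boolean intersects(float[] center, float radius, float[] viewProj,
                                      int screenW, int screenH,
                                      float rx, float ry, float rw, float rh) {
                float[] p = transform(center, viewProj);
                float sx = (p[0] * 0.5f + 0.5f) * screenW;       // NDC -> pixels
                float sy = (1f - (p[1] * 0.5f + 0.5f)) * screenH;
                float sr = radius / p[2] * (screenH * 0.5f);     // crude perspective scale
                // Circle vs axis-aligned rectangle: clamp the centre into the rect.
                float cx = Math.max(rx, Math.min(sx, rx + rw));
                float cy = Math.max(ry, Math.min(sy, ry + rh));
                float dx = sx - cx, dy = sy - cy;
                return dx * dx + dy * dy <= sr * sr;
            }
        }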

    Read the article

  • What are the common character animation techniques used in tile based hack&slash games?

    - by Gorky
    I wonder what kinds of animation techniques are used for creature and character animation in modern hack&slash-type tile-based games? Keyframing for different actions may be one option. Skeletal animation may be another. But what about physics? Or do they use a totally hybrid system of inverse kinematics supported by a skeleton and physics, mixed with interpolated keyframing for more realistic animations? If so, how and for what reasons? I can think of many different solutions for the issues below, but I wonder what's actually used and best suited for issues like: walking or moving on uneven terrain; combat interaction, combat physics and collisions; attaching rigid items to a character and their interactions in the physics world; and soft-body dynamics like hair, vegetation, clothes and fabric, in line with animations and interactions.

    Read the article

  • Movement in RPG

    - by user1264811
    I want to make an RPG in which I move tile by tile, so when I hit up, the tile row I am on decreases by one, for example. Also, the movement is supposed to be slow enough that I can see the change in tile, i.e. I can watch my sprite move from tile to tile. Currently, with the code I have, when I hit a direction key I move several tiles within seconds, and by the time I release the button I have already gotten a NullPointerException because I have left the map. How can I slow down the movement?
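    A sketch of tile-to-tile movement, assuming a simple update(dt) loop (names and tile size are made up): the key press only starts a move; while a move is in progress, input is ignored and the sprite is interpolated between the two tiles. Rejecting off-map targets up front also removes the NullPointerException.

        public class Player {
            int tileX, tileY;           // logical position on the grid
            float drawX, drawY;         // pixel position actually rendered
            static final int TILE = 32; // tile size in pixels, assumed
            static final float MOVE_TIME = 0.25f; // seconds per tile
            private float t = -1f;      // progress of current move, < 0 = idle
            private int fromX, fromY, toX, toY;

            Player(int startX, int startY) {
                tileX = startX; tileY = startY;
                drawX = startX * TILE; drawY = startY * TILE;
            }

            public void tryMove(int dx, int dy, boolean[][] walkable) {
                if (t >= 0) return;                       // already moving: ignore input
                int nx = tileX + dx, ny = tileY + dy;
                if (ny < 0 || nx < 0 || ny >= walkable.length || nx >= walkable[0].length
                        || !walkable[ny][nx]) return;     // reject off-map / blocked tiles
                fromX = tileX; fromY = tileY; toX = nx; toY = ny;
                tileX = nx; tileY = ny;                   // commit the logical move at once
                t = 0f;
            }

            public void update(float dt) {
                if (t < 0) return;
                t = Math.min(t + dt / MOVE_TIME, 1f);
                drawX = (fromX + (toX - fromX) * t) * TILE;   // lerp the drawn position
                drawY = (fromY + (toY - fromY) * t) * TILE;
                if (t >= 1f) t = -1f;                          // move finished
            }
        }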

    Read the article

  • Falling CCSprites

    - by Coder404
    I'm trying to make CCSprites fall from the top of the screen, and I'm planning to use a touch delegate to determine when they fall. How could I make CCSprites fall from the screen, starting from code like this:

        -(void)addTarget {
            Monster *target = nil;
            if ((arc4random() % 2) == 0) {
                target = [WeakAndFastMonster monster];
            } else {
                target = [StrongAndSlowMonster monster];
            }
            // Determine where to spawn the target along the Y axis
            CGSize winSize = [[CCDirector sharedDirector] winSize];
            int minY = target.contentSize.height/2;
            int maxY = winSize.height - target.contentSize.height/2;
            int rangeY = maxY - minY;
            int actualY = (arc4random() % rangeY) + minY;
            // Create the target slightly off-screen along the right edge,
            // and along a random position along the Y axis as calculated above
            target.position = ccp(winSize.width + (target.contentSize.width/2), actualY);
            [self addChild:target z:1];
            // Determine speed of the target
            int minDuration = target.minMoveDuration; //2.0;
            int maxDuration = target.maxMoveDuration; //4.0;
            int rangeDuration = maxDuration - minDuration;
            int actualDuration = (arc4random() % rangeDuration) + minDuration;
            // Create the actions
            id actionMove = [CCMoveTo actionWithDuration:actualDuration position:ccp(-target.contentSize.width/2, actualY)];
            id actionMoveDone = [CCCallFuncN actionWithTarget:self selector:@selector(spriteMoveFinished:)];
            [target runAction:[CCSequence actions:actionMove, actionMoveDone, nil]];
            // Add to targets array
            target.tag = 1;
            [_targets addObject:target];
        }

    This code makes CCSprites move from the right side of the screen to the left. How could I change it to make the CCSprites move from the top of the screen to the bottom?

    Read the article

  • Simple rendering produces minor stutter

    - by Ben
    For some reason, this game loop renders the movement of a simple rectangle with no stuttering:

        double currTime;
        double prevTime = System.nanoTime() / NANO_TO_SEC;
        double FPSTIMER = System.nanoTime();
        double maxTimeDiff = 100.0 / 1000.0;
        double delta = 1.0 / 60.0;
        int processes = 0, frames = 0;
        while(true){
            currTime = System.nanoTime() / NANO_TO_SEC;
            if(currTime - prevTime > maxTimeDiff)
                prevTime = currTime;
            if(currTime >= prevTime){
                process();
                processes++;
                prevTime += delta;
                if(currTime < prevTime){
                    render();
                    frames++;
                }
            }
            else{
                try{
                    Thread.sleep((long) (1000 * (prevTime - currTime)));
                } catch(Exception e){}
            }
            if(System.nanoTime() - FPSTIMER > 1000000000.0){
                System.out.println("Process: " + (1000 / processes) + "ms FPS: " + (1000 / frames) + "ms");
                processes = frames = 0;
                FPSTIMER += 1000000000.0;
            }
        }

    But with this game loop I get really minor stuttering, where the movement does not look smooth:

        long prevTime = System.currentTimeMillis();
        long prevRenderTime = 0;
        long currRenderTime = 0;
        long delta = 0;
        long msPerTick = 1000 / 60;
        int frames = 0;
        int ticks = 0;
        double FPSTIMER = System.currentTimeMillis();
        while (true){
            long currTime = System.currentTimeMillis();
            delta += (currTime - prevTime) / msPerTick;
            prevTime = currTime;
            while (delta >= 1){
                ticks++;
                process();
                delta -= 1;
            }
            prevRenderTime = System.currentTimeMillis();
            render();
            frames++;
            currRenderTime = System.currentTimeMillis();
            try{
                Thread.sleep((long) ((1000 / FPS) - (currRenderTime - prevRenderTime)));
            } catch(Exception e){}
            if(System.currentTimeMillis() - FPSTIMER > 1000.0){
                System.out.println("Process: " + (1000.0 / ticks) + "ms FPS: " + (1000.0 / frames) + "ms");
                ticks = frames = 0;
                FPSTIMER += 1000.0;
            }
        }

    Is there any critical difference that I'm missing here? The one thing I noticed is that if I uncap the FPS for the second game loop, the stuttering goes away. It doesn't make sense to me. Also, the second game loop came from Notch's Minicraft code, with just my thread-sleeping code added in.
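    For comparison, a sketch of a fixed-timestep loop that uses the high-resolution clock throughout. System.currentTimeMillis() has coarse granularity (often 1-15 ms) on many platforms, which is one plausible source of visible stutter; System.nanoTime() is the safer time base. render(alpha) is assumed to interpolate between the last two simulation states.

        public class GameLoop {
            static final double STEP = 1.0 / 60.0;   // fixed simulation step, in seconds
            volatile boolean running = true;

            void run() {
                long prev = System.nanoTime();
                double acc = 0.0;
                while (running) {
                    long now = System.nanoTime();
                    acc += (now - prev) / 1_000_000_000.0;
                    prev = now;
                    if (acc > 0.25) acc = 0.25;      // clamp after a long hitch
                    while (acc >= STEP) {            // catch the simulation up
                        process();                   // advance state by exactly STEP
                        acc -= STEP;
                    }
                    render(acc / STEP);              // draw with an interpolation factor
                }
            }

            void process() { /* game logic */ }
            void render(double alpha) { /* blend previous/current state by alpha */ }
        }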

    Read the article

  • How do I make a Minecraft server mod? [closed]

    - by Simon
    Possible Duplicate: Mods for Minecraft Server - how does it work? I have made some Minecraft client mods, but I started a server a month ago and I want to make a mod for it, and I can't find any tutorial on the internet. How do the people making mods for Minecraft servers know how to do it? Did they feel their way forward, as I have tried to, or are they doing something else? I would be glad if someone could tell me how to do it or point me to tutorials, because I have been trying to find them for nearly a week of searching. I guess I'm searching in the wrong corner of the internet, but what do I know :o

    Read the article

  • Car brands and models licensing

    - by Ju-v
    We are a small team working on a car racing game, but we don't know about the licensing process for branded cars like Nissan, Lamborghini, Chevrolet, etc. Do we need to buy a licence to use real car brand names, models, and logos, or can we use them for free? A second option we're considering is using the real models without the real brand names; is that possible? If someone has experience with this, feel free to share it. Any information about this is welcome.

    Read the article

  • One-way platforms in UDK

    - by Jordaan Mylonas
    I'm looking to make a multi-player platforming game using UDK. I'm currently doing feasibility research to make sure I will reasonably be able to do all of the technical things I want to do. The first major hurdle I've come across without being able to find an answer is one-way platforms, that is to say, platforms through which a player can jump up but not fall through (unless they choose to). These are commonly seen in games like Mario, Kirby and Smash Bros. Does anyone know how such a system would work within UDK? I can think of solutions that might work for single-player, but not multi.
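    For reference, a sketch of the usual engine-agnostic rule (not UDK API; names are made up): the platform only collides when the player is at or above its surface and moving downward, so jumps from below pass through, and holding "down" lets the player drop. Applied per player, this works the same in multiplayer.

        public class OneWayPlatform {
            final float top;            // world-space Y of the platform surface

            OneWayPlatform(float top) { this.top = top; }

            // feetYLastFrame: player's feet height on the previous frame (Y-up world).
            boolean shouldCollide(float feetYLastFrame, float playerVelY, boolean dropRequested) {
                boolean falling = playerVelY <= 0f;            // moving down
                boolean wasAbove = feetYLastFrame >= top - 0.01f; // small tolerance
                return falling && wasAbove && !dropRequested;
            }
        }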

    Read the article

  • OpenGL lighting with dynamic geometry

    - by Tank
    I'm currently thinking hard about how to implement lighting in my game. The geometry is quite dynamic (a fixed 3D grid with custom geometry in each cell) and needs some light to get more depth and generally look nicer. A scene in my game always contains sunlight and local light sources like lamps (point lights). One can move underground, so sunlight must be able to illuminate as far as it can reach. Here's a render of a typical situation: the lamp is positioned behind the wall at the top, and the hollow cube has a hole in the back so that light can shine through. (I don't want soft shadows; this is just for illustration.) While spending the whole day searching through Google, I stumbled on keywords like deferred rendering, forward rendering, ambient occlusion, screen-space ambient occlusion, etc. Some articles/tutorials even refer to "normal shading", but to be honest I don't really have an idea of how to do even simple shading. OpenGL of course has a fixed-function lighting pipeline with 8 possible light sources; however, it just illuminates all vertices without checking for occluding geometry. I'd be very thankful if someone could give me some pointers in the right direction. I don't need complete solutions or similar, just good sources with information understandable for someone with nearly no lighting experience (preferably with OpenGL).
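    As a starting point, a minimal sketch (in Java, purely for illustration) of the standard Lambert diffuse term that "simple shading" usually means: brightness depends on the angle between the surface normal and the light direction. Shadows (occlusion) are a separate problem; this is only the local lighting model.

        public final class Lambert {
            // All vectors assumed normalized; returns RGB in [0,1].
            static float[] shade(float[] n, float[] lightDir, float[] lightColor,
                                 float[] albedo, float ambient) {
                float ndotl = Math.max(0f,
                        n[0]*lightDir[0] + n[1]*lightDir[1] + n[2]*lightDir[2]);
                float[] c = new float[3];
                for (int i = 0; i < 3; i++)
                    c[i] = albedo[i] * (ambient + lightColor[i] * ndotl); // ambient + diffuse
                return c;
            }
        }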

    Read the article

  • Help understanding GLSL directional light on iOS (left-handed coord system)

    - by Robse
    I have now changed from GLKBaseEffect to my own shader implementation. I have a shader manager, which compiles and applies a shader at the right time and does some shader setup such as lights. Please have a look at my vertex shader code. Now, light direction should be provided in eye space, but I think there is something I don't get right. After I set up my view with the camera, I save a lightMatrix to transform the light from global space to eye space. My modelview and projection setup:

        - (void)setupViewWithWidth:(int)width height:(int)height camera:(N3DCamera *)aCamera {
            aCamera.aspect = (float)width / (float)height;
            float aspect = aCamera.aspect;
            float far = aCamera.far;
            float near = aCamera.near;
            float vFOV = aCamera.fieldOfView;
            float top = near * tanf(M_PI * vFOV / 360.0f);
            float bottom = -top;
            float right = aspect * top;
            float left = -right;
            // projection
            GLKMatrixStackLoadMatrix4(projectionStack, GLKMatrix4MakeFrustum(left, right, bottom, top, near, far));
            // identity modelview
            GLKMatrixStackLoadMatrix4(modelviewStack, GLKMatrix4Identity);
            // switch to left handed coord system (forward = z+)
            GLKMatrixStackMultiplyMatrix4(modelviewStack, GLKMatrix4MakeScale(1, 1, -1));
            // transform camera
            GLKMatrixStackMultiplyMatrix4(modelviewStack, GLKMatrix4MakeWithMatrix3(GLKMatrix3Transpose(aCamera.orientation)));
            GLKMatrixStackTranslate(modelviewStack, -aCamera.position.x, -aCamera.position.y, -aCamera.position.z);
        }

        - (GLKMatrix4)modelviewMatrix {
            return GLKMatrixStackGetMatrix4(modelviewStack);
        }

        - (GLKMatrix4)projectionMatrix {
            return GLKMatrixStackGetMatrix4(projectionStack);
        }

        - (GLKMatrix4)modelviewProjectionMatrix {
            return GLKMatrix4Multiply([self projectionMatrix], [self modelviewMatrix]);
        }

        - (GLKMatrix3)normalMatrix {
            return GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3([self modelviewProjectionMatrix]), NULL);
        }

    After that, I save the lightMatrix like this:

        [self.renderer setupViewWithWidth:view.drawableWidth height:view.drawableHeight camera:self.camera];
        self.lightMatrix = [self.renderer modelviewProjectionMatrix];

    And just before I render a 3D entity of the scene graph, I set up the light config for its shader with the lightMatrix, like this:

        - (N3DLight)transformedLight:(N3DLight)light transformation:(GLKMatrix4)matrix {
            N3DLight transformedLight = N3DLightMakeDisabled();
            if (N3DLightIsDirectional(light)) {
                GLKVector3 direction = GLKVector3MakeWithArray(GLKMatrix4MultiplyVector4(matrix, light.position).v);
                direction = GLKVector3Negate(direction); // HACK -> TODO: get lightMatrix right!
                transformedLight = N3DLightMakeDirectional(direction, light.diffuse, light.specular);
            } else {
                ...
            }
            return transformedLight;
        }

    You see the line where I negate the direction!? I can't explain why I need to do that, but if I do, the lights are correct as far as I can tell. Please help me to get rid of the hack. I'm scared that this has something to do with my switch to the left-handed coordinate system. My vertex shader looks like this:

        attribute highp vec4 inPosition;
        attribute lowp vec4 inNormal;
        ...
        uniform highp mat4 MVP;
        uniform highp mat4 MV;
        uniform lowp mat3 N;
        uniform lowp vec4 constantColor;
        uniform lowp vec4 ambient;
        uniform lowp vec4 light0Position;
        uniform lowp vec4 light0Diffuse;
        uniform lowp vec4 light0Specular;
        varying lowp vec4 vColor;
        varying lowp vec3 vTexCoord0;

        vec4 calcDirectional(vec3 dir, vec4 diffuse, vec4 specular, vec3 normal) {
            float NdotL = max(dot(normal, dir), 0.0);
            return NdotL * diffuse;
        }
        ...
        vec4 calcLight(vec4 pos, vec4 diffuse, vec4 specular, vec3 normal) {
            if (pos.w == 0.0) {
                // Directional Light
                return calcDirectional(normalize(pos.xyz), diffuse, specular, normal);
            } else {
                ...
            }
        }

        void main(void) {
            // position
            highp vec4 position = MVP * inPosition;
            gl_Position = position;
            // normal
            lowp vec3 normal = inNormal.xyz / inNormal.w;
            normal = N * normal;
            normal = normalize(normal);
            // colors
            vColor = constantColor * ambient;
            // add lights
            vColor += calcLight(light0Position, light0Diffuse, light0Specular, normal);
            ...
        }

    Read the article

  • Jitter during wall collisions with Bullet Physics: contact/penetration tolerance?

    - by Niriel
    I use the Bullet physics engine through Panda3D. My scene is still very simple, think Wolfenstein 3D: tile-based, walls are solid cubes. I expect walls to block the player, and I expect the player to slide along the walls in cases of non-normal incidence. What I get is what I expect, with one difference: there is some jitter. If I try to force myself into the wall, I see the frames blinking quickly between two positions. These differ by about 0.04 units of distance, which corresponds to 4 cm in my game. I noticed 4 cm elsewhere too: the bottom of my player capsule is 4 cm below ground when at rest. Does that mean that somewhere in the Bullet engine there is a default 0.04-unit tolerance to differentiate contact from collision? If so, what should I do? Should I change the scale of my game so that these 0.04 units correspond to 0.4 cm, making the jitter ten times smaller? Or can I ask Bullet to change its tolerance to a smaller value?

    Edit: This is the jitter I get (6.155 - 6.118 = 0.036):

        LPoint3f(0, 6.11694, 0.835)
        LPoint3f(0, 6.15499, 0.835)
        LPoint3f(0, 6.11802, 0.835)
        LPoint3f(0, 6.15545, 0.835)
        LPoint3f(0, 6.11817, 0.835)
        LPoint3f(0, 6.15726, 0.835)
        LPoint3f(0, 6.11876, 0.835)
        LPoint3f(0, 6.15911, 0.835)
        LPoint3f(0, 6.11937, 0.835)

    I found a setMargin method. I set it to 5 mm, both on the BoxShape for the walls and on the capsule shape for the player. It still jitters by about 35 mm, as illustrated by this log (11.117 - 11.082 = 0.035):

        LPoint3f(0, 11.0821, 0.905)
        LPoint3f(0, 11.1169, 0.905)
        LPoint3f(0, 11.082, 0.905)
        LPoint3f(0, 11.117, 0.905)
        LPoint3f(0, 11.082, 0.905)
        LPoint3f(0, 11.117, 0.905)
        LPoint3f(0, 11.0821, 0.905)
        LPoint3f(0, 11.1175, 0.905)
        LPoint3f(0, 11.0822, 0.905)
        LPoint3f(0, 11.1178, 0.905)
        LPoint3f(0, 11.0823, 0.905)
        LPoint3f(0, 11.1183, 0.905)

    The margin on the capsule did change my penetration with the floor, though; I'm a bit higher (0.905 instead of 0.835). However, it did not change anything when colliding with the walls. How can I make the collisions against the walls less jittery?

    Edit, the day after: After more investigation, it appears that dynamic objects behave well. My problem comes from the btKinematicCharacterController that I use for moving my character; that thing is totally bugged, according to the whole Internet :/

    Read the article

  • Converting a 2D grid of squares to a polygon nav mesh

    - by Roflha
    I haven't actually started programming this one yet, but I wanted to see how I would go about doing it anyway. Say I have a 2D matrix of squares, all of the same size, some traversable and some not. How would I go about creating a navigation mesh of polygons from this grid? Is there any reading I can look at until I get a chance to get to my computer, or should I just give it a go? My idea was to take the non-traversable squares out and extend lines from their edges to make polygons... that's all I have so far. Any advice?
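    A sketch of one common approach (greedy rectangle merging, my assumption, not the only way): grow maximal rectangles of walkable cells and treat each rectangle as a convex nav-mesh polygon; adjacent rectangles sharing an edge then become linked nodes for pathfinding.

        // Greedy merge of walkable grid cells into axis-aligned rectangles.
        import java.util.*;

        public class GridToNavMesh {
            public record Rect(int x, int y, int w, int h) {}

            public static List<Rect> merge(boolean[][] walkable) {
                int rows = walkable.length, cols = walkable[0].length;
                boolean[][] used = new boolean[rows][cols];
                List<Rect> rects = new ArrayList<>();
                for (int y = 0; y < rows; y++) {
                    for (int x = 0; x < cols; x++) {
                        if (!walkable[y][x] || used[y][x]) continue;
                        int w = 1;                               // grow right
                        while (x + w < cols && walkable[y][x + w] && !used[y][x + w]) w++;
                        int h = 1;                               // grow down at full width
                        outer:
                        while (y + h < rows) {
                            for (int i = 0; i < w; i++)
                                if (!walkable[y + h][x + i] || used[y + h][x + i]) break outer;
                            h++;
                        }
                        for (int dy = 0; dy < h; dy++)           // mark cells consumed
                            for (int dx = 0; dx < w; dx++) used[y + dy][x + dx] = true;
                        rects.add(new Rect(x, y, w, h));
                    }
                }
                return rects;
            }
        }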

    Read the article

  • What common interface would be appropriate for these game object classes?

    - by Jefffrey
    Question: A component-based system's goal is to solve the problems that derive from inheritance: for example, the fact that some parts of the code (called components) are reused by very different classes that, hypothetically, would lie in very different branches of the inheritance tree. That's a very nice concept, but I've found that CBS is often hard to accomplish without using ugly hacks. Implementations of this system are often far from clean. But I don't want to discuss this any further. My question is: how can I solve the same problems a CBS tries to solve with a very clean interface? (Possibly with examples; there are a lot of abstract talks about the "perfect" design already.)

    Context: Here's an example I was going for before realizing I was just reinventing inheritance again:

        class Human {
        public:
            Position position;
            Movement movement;
            Sprite sprite;
            // other human specific components
        };

        class Zombie {
            Position position;
            Movement movement;
            Sprite sprite;
            // other zombie specific components
        };

    After writing that, I realized I needed an interface; otherwise I would have needed N containers for N different types of objects (or boost::variant to gather them all together). So I thought of polymorphism (moving what systems do in a CBS design into class-specific functions):

        class Entity {
        public:
            virtual void on_event(Event) {} // not pure virtual on purpose
            virtual void on_update(World) {}
            virtual void on_draw(Window) {}
        };

        class Human : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

        class Zombie : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

    Which was nice, except for the fact that now the outside world cannot even know where a Human is positioned (it does not have access to its position member). That would be useful for tracking the player position for collision detection, or if, on_update, the Zombie wanted to track down its nearest human and move towards him. So I added const Position& get_position() const; to both the Zombie and Human classes. And then I realized that this functionality was shared, so it should have gone to the common base class: Entity. Do you notice anything? Yes, with that methodology I would end up with a god Entity class full of common functionality, which is the thing I was trying to avoid in the first place.

    The meaning of "hacks" in the implementations I'm referring to: I'm talking about implementations that define entities as simple IDs to which components are dynamically attached. Their implementation can vary from C-style:

        int last_id;
        Position* positions[MAX_ENTITIES];
        Movement* movements[MAX_ENTITIES];

    where positions[i], movements[i], component[i], ... make up the entity, to more C++-style:

        int last_id;
        std::map<int, Position> positions;
        std::map<int, Movement> movements;

    from which systems can detect whether an entity/id has attached components.
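    For comparison, a minimal sketch of the map-based composition the question ends with, written in Java purely for illustration (all names are made up): entities are plain ids, components live in per-type stores, and a "system" is just a function over the stores it needs, so no god base class accumulates shared accessors.

        import java.util.*;

        public class World {
            private int nextId = 0;
            public final Map<Integer, float[]> positions = new HashMap<>();  // {x, y}
            public final Map<Integer, float[]> velocities = new HashMap<>(); // {vx, vy}

            public int createEntity() { return nextId++; }

            // An entity participates by having the right components, not by its type.
            public void movementSystem(float dt) {
                for (Map.Entry<Integer, float[]> e : velocities.entrySet()) {
                    float[] pos = positions.get(e.getKey());
                    if (pos == null) continue;          // needs both components
                    pos[0] += e.getValue()[0] * dt;
                    pos[1] += e.getValue()[1] * dt;
                }
            }
        }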

    Read the article

  • Arranging Gizmos in Unity 3D [on hold]

    - by Simran kaur
    I have this arrangement of Gizmos, which was handed over to me. How do I achieve it? I have read the documentation, but I couldn't get it as shown. I basically have a track, or lane, that comes towards the camera by moving towards negative z. I move the lanes so that it appears as if the cars are moving. The roads need to be rotated by 90 degrees, otherwise they appear to move towards the upper end of the screen, and in parallel at that. Why exactly is that?

    Read the article

  • How can I make permanent death in a MUD seem acceptable and fair to players?

    - by Luke Laupheimer
    I have considered writing a MUD for years, and I have a lot of ideas my friends think are really cool (and that's how I'd hope to get anywhere -- word of mouth). The thing is, there's one feature I have always wanted that my friends and strangers alike have hated: permanent death. Now, the emotional response I get to this is visceral revulsion, every time. I'm pretty sure I am the only person who wants this, or if I'm not, I'm in a tiny minority. The reason I want it is that I want the actions of the players to matter. Unlike a lot of other MUDs, which have a set of static city-states, social institutions, etc., I want the things my players do, should I get any, to actually change the situation. And that includes killing people. If you kill someone, you didn't send them to time-out, you killed them. What happens when you kill people? They go away. They don't come back in half an hour to smack-talk you some more. They're gone. Forever. By making death non-permanent, you make death not matter. It would be like making the climax of a character's arc a speeding ticket: it cheapens it. Non-permanent death cheapens death. How can I: 1) convince my players (and random people!) that this is actually a good idea, or 2) find some other way to make death and violence matter as much as they do in real life (within the game, of course) sans character deletion? What alternatives are out there?

    Read the article

  • Animate multiple entities

    - by Robert
    I'm trying to animate multiple (3) entities using one model (IQM format). It works, but performance is really bad because I'm calling the animate function for each entity in my game loop (I think the problem is there). What's the best way to animate multiple entities (with different animations, of course) in OpenGL? I think I could try building one VBO per entity for better performance, but I don't think that's the best way to do it.

    Read the article

  • Flickering problem with world matrix

    - by gnomgrol
    I have a pretty weird problem today. As soon as I try to change the translation or rotation matrix for an object to something other than (0,0,0), the object starts to flicker (scaling works fine). It rapidly and randomly switches between the spot it should be in and a crippled something. I first thought the problem would be z-fighting, but now I'm pretty sure it isn't. I have no clue at all what it could be; here are two screenshots of the two states the plant is switching between. I already used PIX but couldn't find anything of use (I'm not a very good debugger anyway). I would appreciate any help, thanks a lot! Important code:

        D3DXMatrixIdentity(&World);
        D3DXVECTOR3 rotaxisX = D3DXVECTOR3(1.0f, 0.0f, 0.0f);
        D3DXVECTOR3 rotaxisY = D3DXVECTOR3(0.0f, 1.0f, 0.0f);
        D3DXVECTOR3 rotaxisZ = D3DXVECTOR3(0.0f, 0.0f, 1.0f);
        D3DXMATRIX temprot1, temprot2, temprot3;
        D3DXMatrixRotationAxis(&temprot1, &rotaxisX, 0);
        D3DXMatrixRotationAxis(&temprot2, &rotaxisY, 0);
        D3DXMatrixRotationAxis(&temprot3, &rotaxisZ, 0);
        Rotation = temprot1 * temprot2 * temprot3;
        D3DXMatrixTranslation(&Translation, 0.0f, 10.0f, 0.0f);
        D3DXMatrixScaling(&Scale, 0.02f, 0.02f, 0.02f);
        // Set the object's world matrix using the transformations
        World = Translation * Rotation * Scale;

    Shader:

        cbuffer cbPerObject {
            matrix worldMatrix;
            matrix viewMatrix;
            matrix projectionMatrix;
        };

        // Change the position vector to be 4 units for proper matrix calculations.
        input.position.w = 1.0f;

        // Calculate the position of the vertex against the world, view, and projection matrices.
        output.position = mul(input.position, worldMatrix);
        output.position = mul(output.position, viewMatrix);
        output.position = mul(output.position, projectionMatrix);

    Read the article

  • Dynamic model interactions

    - by Richard
    I am just curious how many games (namely games like Arkham Asylum/City, Manhunt, Hitman) make it so that your character can "grab" a character in front of you and do things to them. I know this may sound very confusing, but for an example go to YouTube and search "hitman executions"; the first video is an example of what I'm asking about. Basically, I'm wondering how they make your model dynamically interact with whatever other model you come across. So in Hitman, when you come up behind someone with the fibre wire, you strangle the other character, or if you have the anesthetic, you come up behind someone and put your hand over their mouth while they struggle and slowly sink to the floor, where you lay them down. I am confused as to whether it is animated using two models with specific bone/skeletal identifiers, whether it is just two completely separate animations played at the correct time to make it look like the characters are actually interacting, or something else altogether. I am not an animator, so I assume most of what I just said is not right, but I hope someone can understand what I mean and provide an answer. PS: I am a programmer, and I am in the process of building a Hitman-esque game, just because I love that style of game and I want to improve my skills on something fun. So if you do know what I'm talking about, examples involving both models and programming (I use C++ and mainly Ogre3D at the moment, but I am getting into Unity and XNA) would be greatly appreciated. Thanks.

    Read the article

  • X Error of failed request: BadMatch [migrated]

    - by Andrew Grabko
    I'm trying to execute some "hello world" OpenGL code:

        #include <GL/freeglut.h>

        void displayCall() {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glEnable(GL_DEPTH_TEST);
            ... Some more code here
            glutSwapBuffers();
        }

        int main(int argc, char *argv[]) {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
            glutInitWindowSize(500, 500);
            glutInitWindowPosition(300, 200);
            glutInitContextVersion(4, 2);
            glutInitContextFlags(GLUT_FORWARD_COMPATIBLE);
            glutCreateWindow("Hello World!");
            glutDisplayFunc(displayCall);
            glutMainLoop();
            return 0;
        }

    As a result I get:

        X Error of failed request: BadMatch (invalid parameter attributes)
        Major opcode of failed request: 128 (GLX)
        Minor opcode of failed request: 34 ()
        Serial number of failed request: 39
        Current serial number in output stream: 40

    Here is the stack trace:

        fghCreateNewContext() at freeglut_window.c:737 0x7ffff7bbaa81
        fgOpenWindow() at freeglut_window.c:878 0x7ffff7bbb2fb
        fgCreateWindow() at freeglut_structure.c:106 0x7ffff7bb9d86
        glutCreateWindow() at freeglut_window.c:1,183 0x7ffff7bbb4f2
        main() at AlphaTest.cpp:51 0x4007df

    Here is the last piece of code, after which the program crashes:

        createContextAttribs = (CreateContextAttribsProc) fghGetProcAddress("glXCreateContextAttribsARB");
        if (createContextAttribs == NULL) {
            fgError("glXCreateContextAttribsARB not found");
        }
        context = createContextAttribs(dpy, config, share_list, direct, attributes);

    The "glXCreateContextAttribsARB" address is obtained successfully, but the program crashes on its invocation. If I specify an OpenGL version lower than 4.2 in glutInitContextVersion(), the program runs without errors. Here is my glxinfo's OpenGL version:

        OpenGL version string: 4.2.0 NVIDIA 285.05.09

    I would very much appreciate any further ideas.

    Read the article

  • Projected trajectory of a vehicle?

    - by mac
    In the game I am developing, I have to calculate whether my vehicle (1), which in the example is travelling north with a speed V, can reach its target (2). The example depicts the problem from above. There are actually two possible scenarios: V is constant (resulting in trajectory 4, an arc of a circle), or the vehicle has the capacity to accelerate/decelerate (trajectory 3, an arc of a spiral). I would like to know if there is a straightforward way to verify whether the vehicle is able to reach its target (as opposed to overshooting it). I'm particularly interested in trajectory #3, as the only thing I could think of is integrating the position of the vehicle over time. EDIT: of course the vehicle always has the capacity to steer, but the steering radius varies with its speed (think of a maximum lateral g-force). EDIT 2: also note that (as for most vehicles in real life) there is a minimum steering radius for the in-game ones too.
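    For the constant-speed case (trajectory 4), a sketch of the usual closed-form check (my assumption of the standard approach, not taken from the question): with speed v and maximum lateral acceleration aMax, the tightest turn has radius r = v^2 / aMax, and the target is reachable without overshooting exactly when it lies outside both minimum-radius turning circles tangent to the vehicle's heading. The accelerating case (trajectory 3) generally does need numeric integration.

        public final class Reachability {
            // Vehicle at (px, py) with unit heading (hx, hy); target at (tx, ty).
            static boolean canReach(double px, double py, double hx, double hy,
                                    double v, double aMax, double tx, double ty) {
                double r = v * v / aMax;                       // minimum turning radius
                double lx = -hy, ly = hx;                      // left normal to heading
                double clx = px + lx * r, cly = py + ly * r;   // left turning-circle centre
                double crx = px - lx * r, cry = py - ly * r;   // right turning-circle centre
                return dist2(tx, ty, clx, cly) >= r * r        // target outside both circles
                    && dist2(tx, ty, crx, cry) >= r * r;
            }

            static double dist2(double ax, double ay, double bx, double by) {
                double dx = ax - bx, dy = ay - by;
                return dx * dx + dy * dy;
            }
        }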

    Read the article

  • Shader inputs in a general purpose engine

    - by dreta
    I'm not that familiar with SDKs like Unity or UDK, so I can't check this offhand. Do general-purpose engines allow users to create custom uniform variables? The way I see it, and the way I have implemented it in an engine I'm writing to learn 3D, is that there is a "set" of uniforms provided by the engine, and if you want to write a custom shader you utilize the uniforms you need to create the desired effect. Now, the thing is, first of all I'm not an artist, and second of all, I haven't had a chance to create complex scenes yet. So my question is: is it common practice to define the variables that the engine provides and only allow users to work with what they're given? Allowing users to add custom programs and use them where they want is not hard, but I have trouble imagining how you'd go about doing the same for uniforms.

    Read the article

  • How to add an isometric (RTS-like) perspective and scrolling in Unity?

    - by keinabel
    I want to develop an RTS/simulation game, so I need a camera perspective like the one known from Anno 1602-1404, as well as the corresponding camera scrolling. I think this is called an isometric perspective (and scrolling), but I honestly have no clue how to set it up. I tried some things I found on Google, but they were not satisfying. Can you give me some good tutorials or advice for this? Thanks in advance.

    Read the article

  • Understanding dot notation

    - by Starkers
    Here's my interpretation of dot notation:

        a = [2,6]
        b = [1,4]
        c = [0,8]

        a . b . c = (2*6) + (1*4) + (0*8) = 12 + 4 + 0 = 16

    What is the significance of 16? Apparently it's a scalar. Am I right in thinking that a scalar is the number we multiply a unit vector by to get a vector that has a scaled-up magnitude but the same direction as the unit vector? So again, what is the relevance of 16? When is it used? It's not the magnitude of all the vectors added up. The magnitude of all of them is calculated as follows:

        sqrt( ax * ax + ay * ay ) + sqrt( bx * bx + by * by ) + sqrt( cx * cx + cy * cy )
        sqrt( 2 * 2 + 6 * 6 ) + sqrt( 1 * 1 + 4 * 4 ) + sqrt( 0 * 0 + 8 * 8 )
        sqrt( 4 + 36 ) + sqrt( 1 + 16 ) + sqrt( 0 + 64 )
        sqrt( 40 ) + sqrt( 17 ) + sqrt( 64 )
        6.3 + 4.1 + 8
        10.4 + 8
        18.4

    So I don't really get this diagram. Attempting with sensible numbers:

        a = [1,0]
        b = [4,3]

        a . b = (1*0) + (4*3) = 0 + 12 = 12

    So what exactly is a . b describing here? The magnitude of that vector? Because that isn't right; the 'a.b' vector = [4,0]:

        sqrt( x*x + y*y )
        sqrt( 4*4 + 0*0 )
        sqrt( 16 + 0 )
        4

    So what is 12 describing?
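    For reference, a worked version of the standard definition; note that it pairs components across the two vectors, unlike the pairing used above:

        \mathbf{a}\cdot\mathbf{b} = a_x b_x + a_y b_y = \lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert\cos\theta

    With a = [1,0] and b = [4,3]:

        \mathbf{a}\cdot\mathbf{b} = 1\cdot 4 + 0\cdot 3 = 4,\qquad \lVert\mathbf{a}\rVert = 1,\quad \lVert\mathbf{b}\rVert = 5,\qquad \cos\theta = \tfrac{4}{5}

    which matches the diagram: the dot product equals the length of b's projection onto a (here 4) times the length of a.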

    Read the article

  • Sampling Heightmap Edges for Normal map

    - by pl12
    I use a Sobel filter to generate normal maps from procedural height maps. The heightmaps are 258x258 pixels. I scale my texture coordinates like so:

        texCoord = (texCoord * (256/258)) + (1/258)

    Yet even with this I am left with the following problem: as you can see, the edges of the normal map still prove to be problematic. Setting the texture wrap mode to "clamp" was also no help. EDIT: The Sobel filter works by sampling the 8 pixels surrounding a given pixel so that a derivative can be calculated in order to find the "normal" of that pixel. The texture coordinates are instanced once per quad (for the quadtree that makes up the world) and are created as follows (it is quite possible that the problem results from the way I scale and offset the texCoords, as seen above). Java:

        for(int i = 0; i < vertices.length; i++){
            Vector2f coord = new Vector2f((vertices[i].x)/(worldSize), (vertices[i].z)/(worldSize));
            texCoords[i] = coord;
        }

    The quad used for input here rests on the X0Z plane. 'worldSize' is the diameter of the planet. No negative texCoords occur, as the quad used as input for this method is not centered around the origin. Is there something I am missing here? Thanks.
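    A sketch of one common edge treatment (an alternative to scaling the coordinates inward, and an assumption on my part): clamp each Sobel tap to the heightmap border so the kernel never reads across the edge. The row-major height[y][x] layout is assumed.

        public final class SobelNormals {
            static float sample(float[][] h, int x, int y) {
                x = Math.max(0, Math.min(x, h[0].length - 1)); // clamp to edge texel
                y = Math.max(0, Math.min(y, h.length - 1));
                return h[y][x];
            }

            // Returns an un-normalized normal for texel (x, y); strength scales the slope.
            static float[] normalAt(float[][] h, int x, int y, float strength) {
                float tl = sample(h, x-1, y-1), t = sample(h, x, y-1), tr = sample(h, x+1, y-1);
                float l  = sample(h, x-1, y  ),                        r  = sample(h, x+1, y  );
                float bl = sample(h, x-1, y+1), b = sample(h, x, y+1), br = sample(h, x+1, y+1);
                float dx = (tr + 2*r + br) - (tl + 2*l + bl);   // Sobel X kernel
                float dy = (bl + 2*b + br) - (tl + 2*t + tr);   // Sobel Y kernel
                return new float[] { -dx * strength, -dy * strength, 1f };
            }
        }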

    Read the article
