Search Results

Search found 16410 results on 657 pages for 'game component'.

Page 343/657 | < Previous Page | 339 340 341 342 343 344 345 346 347 348 349 350  | Next Page >

  • Java : 2D Collision Detection

    - by neko
    I've been working on 2D rectangle collision for weeks and still cannot get this problem fixed. The problem is how to adjust the player against obstacles when a collision occurs. I'm referencing this link. The player sometimes does not get adjusted to the obstacle, and sometimes gets stuck inside the obstacle guy after colliding. Here, the player and the obstacle both inherit from the superclass Sprite. I can detect a collision between the two rectangles, and get the collision point, with:

        public Point getSpriteCollision(Sprite sprite, double newX, double newY) {
            // Set up a rectangle for each sprite
            Rectangle spriteRectA = new Rectangle(
                (int) getPosX(), (int) getPosY(), getWidth(), getHeight());
            Rectangle spriteRectB = new Rectangle(
                (int) sprite.getPosX(), (int) sprite.getPosY(), sprite.getWidth(), sprite.getHeight());
            // If this sprite is colliding with the other sprite
            if (spriteRectA.intersects(spriteRectB)) {
                System.out.println("Colliding");
                return new Point((int) getPosX(), (int) getPosY());
            }
            return null;
        }

    and to adjust sprites after a collision:

        // Update the sprite's conditions
        public void update() {
            // Only the player moves, for simplicity; collision detection is
            // on the x-axis only at this point.
            double newX = x + vx; // the x-coordinate the sprite would move to
            Point sprite = getSpriteCollision(map.getSprite().get(1), newX, y); // collision coordinates (x, y)
            if (sprite == null) { // the player is not colliding with the obstacle guy
                x = newX; // move
            } else { // collided
                if (vx > 0) { // the player was moving left to right
                    x = (sprite.x - vx); // this works, but looks a bit strange
                } else if (vx < 0) {
                    x = (sprite.x + vx); // there's something wrong with this too
                }
            }
            vx = 0;
            y += vy;
            vy = 0;
        }

    I think there is something wrong in update() but I cannot fix it. For now there is only a collision between the player and one obstacle guy, but in the future I'm planning to have more of them, all colliding with each other. What would be a good way to do that? Thanks in advance.
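    A common fix for the sticking is to snap the player flush against the obstacle's edge instead of stepping back by the velocity. A minimal sketch; it assumes a setPosX() setter on Sprite (not shown in the post), while the getters are the ones used above:

        // Resolve a horizontal overlap by snapping the player flush against the
        // obstacle instead of backing off by the velocity. Assumes Sprite has a
        // setPosX() setter; the getters are the ones used in the post.
        public void resolveHorizontal(Sprite player, Sprite obstacle, double vx) {
            if (vx > 0) {
                // Moving right: put the player's right edge on the obstacle's left edge.
                player.setPosX(obstacle.getPosX() - player.getWidth());
            } else if (vx < 0) {
                // Moving left: put the player's left edge on the obstacle's right edge.
                player.setPosX(obstacle.getPosX() + obstacle.getWidth());
            }
        }

    For many sprites, the same axis-by-axis test can be run against each obstacle (or a spatial grid of them) before committing the move.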

    Read the article

  • Level design - Are games development degrees worth it? [on hold]

    - by Tristan
    I want to go into level design or environment design, and I'm wondering whether degrees are at all worth it for this area, as long as you have a good portfolio. I'm currently on a "Computing and Games Development" course and I feel like dropping out because I am not enjoying it. It's mostly general computing, which I'm not doing that great at, and very little games development. But I don't know what to do... I do already have some higher-level education with a "Level 3 Extended Diploma". Thanks.

    Read the article

  • OpenGL camera setup for zoom in/out centered at the point under the cursor

    - by user3228921
    I am trying to implement a zoom in/out navigation mode in an OpenGL 3D viewer. I was able to implement zoom centered on the screen center just by moving the eye towards the center in perspective mode. Now I am trying to zoom centered at an arbitrary position under the cursor. I am unable to figure out how I should move my camera forward and backward so that the point under the cursor remains at the same screen coordinates after the zoom in/out. Any help would be appreciated. (The original post included images showing the desired effect.) Just to mention, I am working in perspective mode with eye, target and up vectors to control the camera. The same effect can be found in Google SketchUp and in Blender's 'zoom to mouse position' setting.
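    The key observation: if the eye moves along the line from the eye to the (unprojected) point under the cursor while the view direction stays fixed, that point keeps the same screen coordinates. A minimal sketch with plain double[3] vectors; the pivot is an assumption (the unprojected cursor hit on the model or on a reference plane):

        // Dolly the camera toward the world-space point under the cursor. Because
        // the eye moves along the eye->pivot line while the view direction stays
        // fixed (eye and target shift by the same offset), the pivot keeps the
        // same screen coordinates.
        public final class ZoomToCursor {
            public static void zoom(double[] eye, double[] target, double[] pivot, double amount) {
                for (int i = 0; i < 3; i++) {
                    double step = (pivot[i] - eye[i]) * amount; // e.g. amount = 0.1 per wheel tick
                    eye[i] += step;    // move the eye toward (or away from) the pivot
                    target[i] += step; // same offset, so the view direction is unchanged
                }
            }
        }

    Zooming out is the same call with a negative amount.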

    Read the article

  • Adjust sprite bounds to the visible part of a texture

    - by Crazy D0G
    Is there any way to adjust the boundaries of the visible part of a sprite? To make it easier to understand: I have a texture, as shown in figure 1. I break it into pieces and fill the resulting fragments using PRKit (the wood texture in figures 2 and 3). But the resulting fragments keep the transparent area (the green colour in figures 2 and 3), so sprites created from the fragments have the size of the initial texture. Is there a way to get rid of this transparency and adjust the size of the visible part (the wood texture), using OpenGL or cocos2d-x? Maybe this helps - the draw() method from PRKit:

        void PRFilledPolygon::draw() {
            //CCNode::draw();
            glDisableClientState(GL_COLOR_ARRAY);
            // We have a pointer to vertex points, so enable client state
            glBindTexture(GL_TEXTURE_2D, texture->getName());
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ONE_MINUS_SRC_ALPHA);
            glVertexPointer(2, GL_FLOAT, 0, areaTrianglePoints);
            glTexCoordPointer(2, GL_FLOAT, 0, textureCoordinates);
            glDrawArrays(GL_TRIANGLES, 0, areaTrianglePointCount);
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
            // Restore texture matrix and switch back to modelview matrix
            glEnableClientState(GL_COLOR_ARRAY);
        }

    Read the article

  • How can I name and/or group a specific set of vertices in a 3D file format like OBJ? (in Blender)

    - by user827992
    I would like to export a 3D model with each part having a name, or a label if you will. For example, I would like to export a model of a human body and name each part as a specific vertex group: left hand, right hand, right foot, head, ears... you get the idea. That way I have a single 3D model that I can explode into various parts if needed. If there is a better technique for marking vertex groups in a 3D file, please share your solution. As my 3D editor I use Blender.
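    For OBJ specifically, the format already supports this: an "o" statement names an object, a "g" statement starts a named group, and every face ("f") line that follows belongs to the most recent group. A hypothetical excerpt:

        # Hypothetical OBJ excerpt: faces belong to the most recent 'g' group
        o HumanBody
        v 0.0 1.0 0.0
        v 1.0 1.0 0.0
        v 1.0 0.0 0.0
        v 0.0 0.0 0.0
        g LeftHand
        f 1 2 3
        g RightHand
        f 1 3 4

    Blender's vertex groups and the OBJ exporter's grouping options vary by version, so check which export checkbox maps groups/objects to "g"/"o" lines in your build.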

    Read the article

  • Creating my own kill cam

    - by DalexL
    I plan on creating my own kill cam system for a sandbox tool set. After thinking about the mechanics of the kill cam itself, however, I'm quite lost. I'm trying to recreate the ones commonly seen in Call of Duty games, which show the actual killing scene from the view of the killer.

    My thoughts:
    - I can't just keep a recording for when people kill others, because I wouldn't know when to start the 'recording process'. There is no way for me to accurately determine when somebody is 'about' to kill someone.
    - My only real idea so far is to have a complete duplicate of everything loaded off to the side, copying all the movement from the original world but with a 10-second delay. That way all the kill cams would be 10 seconds long, and the dead person's camera would just be moved to the second world's copy of their killer.

    My questions: Is there already an accepted way to do this? Does anybody have any good ideas for something like this? Thanks if you can!
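    One sketch of the delayed-replay idea that avoids simulating a whole second world: record lightweight snapshots into a ring buffer and, on a kill, play the last ten seconds back from the killer's point of view. All types and fields here are illustrative, not from an existing engine:

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Keep the last 10 seconds of lightweight world snapshots; on a kill,
        // replay them from the killer's point of view. Snapshot contents are
        // illustrative -- record only what the camera needs (transforms, anims).
        public final class KillCamRecorder {
            public record Snapshot(double time, float[] playerPositions) {}

            private static final double WINDOW_SECONDS = 10.0;
            private final Deque<Snapshot> buffer = new ArrayDeque<>();

            public void record(double now, float[] playerPositions) {
                buffer.addLast(new Snapshot(now, playerPositions.clone()));
                // Drop snapshots older than the replay window.
                while (!buffer.isEmpty() && now - buffer.peekFirst().time() > WINDOW_SECONDS) {
                    buffer.removeFirst();
                }
            }

            /** Snapshots covering the last 10 seconds, oldest first, ready to replay. */
            public Iterable<Snapshot> replay() {
                return buffer;
            }
        }

    Replaying snapshots through the normal renderer, with the camera attached to the killer's recorded transform, gives the same effect as the shadow world at a fraction of the cost.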

    Read the article

  • What's a good setup/toolchain for a project?

    - by acidzombie24
    I was thinking: what is needed for a good setup, and what are good (free) tools to use? Some of what I came up with:
    - Bug tracking
    - Some good (distributed :P) source control (which means no SVN, fellas)
    - Automated nightly builds or continuous integration (or anything that automates builds and possibly sends emails when there are build errors)
    - A wiki to document decisions, the road map or milestones
    - Something to back up assets (art, sound, etc.)
    What else? And do you have suggestions for any of the above? I'm pretty much clueless about all of these except for source control.

    Read the article

  • Why is my model's scale changing after rotating it?

    - by justnS
    I have just started a simple flight simulator and have implemented roll and pitch. In the beginning testing went very well; however, after about 15-20 seconds of constantly moving the thumbsticks in a random or circular motion, my model's scale begins to grow. At first I thought the model was moving closer to the camera, but I set breakpoints while it was happening and can confirm that the translation of my orientation matrix remains 0,0,0. Is this a result of gimbal lock? Does anyone see an obvious error in my code below?

        public override void Draw( Matrix view, Matrix projection )
        {
            Matrix[] transforms = new Matrix[Model.Bones.Count];
            Model.CopyAbsoluteBoneTransformsTo( transforms );

            Matrix rotationMatrix = Matrix.Identity
                * Matrix.CreateFromAxisAngle( _orientation.Right, MathHelper.ToRadians( pitch ) )
                * Matrix.CreateFromAxisAngle( _orientation.Down, MathHelper.ToRadians( roll ) );
            _orientation *= rotationMatrix;

            foreach ( ModelMesh mesh in Model.Meshes )
            {
                foreach ( BasicEffect effect in mesh.Effects )
                {
                    effect.World = _orientation * transforms[mesh.ParentBone.Index];
                    effect.View = view;
                    effect.Projection = projection;
                    effect.EnableDefaultLighting();
                }
                mesh.Draw();
            }
        }

        public void Update( GamePadState gpState )
        {
            roll = 5 * gpState.ThumbSticks.Left.X;
            pitch = 5 * gpState.ThumbSticks.Left.Y;
        }
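    This is usually not gimbal lock but floating-point drift: multiplying many small incremental rotations into _orientation lets scale and shear creep into the basis vectors. A common fix is to re-orthonormalize the basis every frame. A language-agnostic sketch in Java (in XNA you would apply the same steps to the matrix's Right/Up/Forward vectors):

        // Re-orthonormalize a 3x3 rotation basis (right, up, forward) with
        // Gram-Schmidt, removing the scale/shear that accumulates when many
        // small incremental rotations are multiplied together.
        public final class Orthonormalize {
            public static void fix(double[] right, double[] up, double[] forward) {
                normalize(forward);
                // Make 'right' perpendicular to 'forward', then renormalize.
                double d = dot(right, forward);
                for (int i = 0; i < 3; i++) right[i] -= d * forward[i];
                normalize(right);
                // Rebuild 'up' as the cross product so the basis stays right-handed.
                up[0] = forward[1] * right[2] - forward[2] * right[1];
                up[1] = forward[2] * right[0] - forward[0] * right[2];
                up[2] = forward[0] * right[1] - forward[1] * right[0];
            }

            private static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
            private static void normalize(double[] v) {
                double len = Math.sqrt(dot(v, v));
                for (int i = 0; i < 3; i++) v[i] /= len;
            }
        }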

    Read the article

  • PNG file loading error in ImageMagick

    - by khanhhh89
    I'm trying to understand tutorial 16 at http://ogldev.atspace.co.uk, which requires the image processing library ImageMagick. But when I run the tutorial, I encounter the following error:

        freeglut: failed to change screen settings
        Error loading textures 'test.png': no decode delegate for this image format
        'C:/../appdata/magick-6024a_cIJcw90t-j'@error/constitute.c/ReadImage/552

    I searched Google and found that this happens when ImageMagick has no PNG delegate, but when I check my ImageMagick configuration, PNG appears in its delegate list.

    Command line:

        convert -configure

    Result:

        LIB_VERSION 0x687
        DELEGATES: bzlib, freetype, jpeg, jp2, lcms, png, tiff, x11, xml, wmf, zlib

    Could you explain this error to me? Thanks so much!

    Read the article

  • write to depth buffer while using multiple render targets

    - by DocSeuss
    Presently my engine is set up to use deferred shading. My pixel shader output struct is as follows:

        struct GBuffer
        {
            float4 Depth    : DEPTH0;  // depth render target
            float4 Normal   : COLOR0;  // normal render target
            float4 Diffuse  : COLOR1;  // diffuse render target
            float4 Specular : COLOR2;  // specular render target
        };

    This works fine for flat surfaces, but I'm trying to implement relief mapping, which requires me to manually write to the depth buffer to get correct silhouettes. MSDN suggests doing what I'm already doing to output to my depth render target - however, this has no impact on z culling. I think it might be because XNA uses a different depth buffer for every RenderTarget2D. How can I address these depth buffers from the pixel shader?

    Read the article

  • Movement of body after applying weld joint

    - by ved
    I have two rectangular bodies and I've successfully applied a WeldJoint to them. I want to move the joined body by applying a linear impulse. After the weld joint, do these two bodies become a single body? How do I apply a force/impulse to the joined body? I am using Box2D with LibGDX. I've tried this:

        polygon1.applyLinearImpulse(new Vector2(-5, 0), polygon1.getWorldCenter(), true);

    I thought that if I moved polygon1 then polygon2 would also move due to the weld joint, but it is not working properly. Why don't they move together after being welded?
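    Worth noting: a WeldJoint does not merge the two bodies; they remain separate bodies coupled by a constraint, so an impulse applied to polygon1 is only transmitted to polygon2 through the joint, which can lag or oscillate. If the two rectangles never need to separate, a single body with two fixtures behaves as truly one object. A sketch with illustrative sizes and offsets:

        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.physics.box2d.*;

        // If the two rectangles never need to separate, attach both shapes to one
        // body instead of welding two bodies together; one impulse then moves the
        // whole object rigidly. Sizes and local offsets are illustrative.
        public final class CompoundBody {
            public static Body create(World world) {
                BodyDef def = new BodyDef();
                def.type = BodyDef.BodyType.DynamicBody;
                Body body = world.createBody(def);

                PolygonShape left = new PolygonShape();
                left.setAsBox(0.5f, 0.5f, new Vector2(-0.5f, 0f), 0f); // half-extents, local offset, angle
                body.createFixture(left, 1f);
                left.dispose();

                PolygonShape right = new PolygonShape();
                right.setAsBox(0.5f, 0.5f, new Vector2(0.5f, 0f), 0f);
                body.createFixture(right, 1f);
                right.dispose();

                return body;
            }
        }

    A single body.applyLinearImpulse(new Vector2(-5, 0), body.getWorldCenter(), true) then moves both shapes together.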

    Read the article

  • Make objects slide across the screen in random positions

    - by user3475907
    I want to make an object appear randomly at the right-hand side of the screen, slide across the screen, and disappear at the left-hand side. I am working with libGDX. I have this bit of code, but it makes items fall from the top down. Please help.

        public EntityManager(int amount, OrthoCamera camera) {
            player = new Player(new Vector2(15, 230), new Vector2(0, 0), this, camera);
            for (int i = 0; i < amount; i++) {
                float x = MathUtils.random(0, MainGame.HEIGHT - TextureManager.ENEMY.getHeight());
                float y = MathUtils.random(MainGame.WIDTH, MainGame.WIDTH * 10);
                float speed = MathUtils.random(2, 10);
                addEntity(new Enemy(new Vector2(x, y), new Vector2(-0, -speed)));
            }
        }
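    The loop above has its axes swapped: x is randomized over the screen height, y over the width, and the speed is applied to y. To slide right to left, spawn past the right edge at a random height and give the velocity a negative x component. A sketch of the corrected loop, reusing the post's names (MainGame, TextureManager, Enemy, addEntity are assumptions about the rest of the code):

        // Spawn enemies off the right edge at a random height and move them left.
        for (int i = 0; i < amount; i++) {
            float x = MathUtils.random(MainGame.WIDTH, MainGame.WIDTH * 10);  // start off-screen right, staggered
            float y = MathUtils.random(0, MainGame.HEIGHT - TextureManager.ENEMY.getHeight()); // random height
            float speed = MathUtils.random(2, 10);
            addEntity(new Enemy(new Vector2(x, y), new Vector2(-speed, 0))); // velocity points left along x
        }

    Enemies can then be removed (or respawned on the right) once their x drops below zero.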

    Read the article

  • relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and faced these problems: I have a list of models that I need to draw, and an object (renderer) that implements an IRenderer interface with the method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library used (OpenGL/DirectX).

    First question: the model should know nothing about the renderer implementation, but in that case where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:

        class Model {
        public:
            Model();
            Vertex* GetVertices() const;
        private:
            Vertex* m_vertices;
        };

    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object?

    Second question: what is the best way for the model to tell the renderer HOW it must be rendered (for example with a texture, with bump mapping, or maybe just in one colour)? I thought it could be done with flags, like this:

        model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING);

    with the Renderer::DrawModel method checking each flag. But it looks like this will become unwieldy as the number of options grows...

    Read the article

  • fragment shader directional light positioning with camera

    - by meWantToLearn
    I'm trying to set up directional lighting in the fragment shader, so that the direction of my light moves with the camera position.

        #version 150 core

        uniform sampler2D diffuseTex;
        uniform vec4 lightColour;
        uniform vec3 lightDirection;

        in vec3 normal;    // reconstructed: the original post lost its in/out
        in vec2 texCoord;  // declarations when pasted, so these are assumptions

        void main() {
            vec3 LNorm = normalize(lightDirection);
            vec3 N = normalize(normal);
            float intensity = max(dot(N, LNorm), 0.0);
            vec4 diffuse = texture(diffuseTex, texCoord);
            vec3 calColour = lightColour.rgb * intensity;
            gl_FragColor = vec4(diffuse.rgb * calColour, diffuse.a);
        }

    It lights the entire scene.

    Read the article

  • Subdividing a polygon into boxes of varying size

    - by Michael Trouw
    I would like to be pointed to information/resources for creating algorithms like the one illustrated on this blog, which subdivides a polygon (in my case a Voronoi cell) into several boxes of varying size: http://procworld.blogspot.nl/2011/07/city-lots.html

    In the comments there is a link to a paper co-authored by the blog's author, but the only formula listed there is about candidate location suitability: http://www.groenewegen.de/delft/thesis-final/ProceduralCityLayoutGeneration-Preprint.pdf

    Any language will do, but JavaScript examples are preferred (it is the language I am currently working with). A similar question is this one: What is an efficient packing algorithm for packing rectangles into a polygon?
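    As a nudge: one common approach to this family of algorithms is recursive subdivision, splitting the lot roughly perpendicular to its longest side with a jittered split point until every parcel is under a target area. A simplified, axis-aligned sketch in Java (the full oriented-polygon version splits along the longest polygon edge instead):

        import java.util.ArrayList;
        import java.util.List;

        // Recursively split a rectangle perpendicular to its longest side until
        // every parcel is below a target area. A simplified, axis-aligned stand-in
        // for oriented polygon subdivision.
        public final class LotSubdivision {
            public record Rect(double x, double y, double w, double h) {}

            public static List<Rect> subdivide(Rect lot, double maxArea) {
                List<Rect> out = new ArrayList<>();
                split(lot, maxArea, out);
                return out;
            }

            private static void split(Rect r, double maxArea, List<Rect> out) {
                if (r.w() * r.h() <= maxArea) {
                    out.add(r);
                    return;
                }
                // Jitter the split point a little so parcels vary in size.
                double t = 0.4 + Math.random() * 0.2;
                if (r.w() >= r.h()) { // split across the longer axis
                    double w1 = r.w() * t;
                    split(new Rect(r.x(), r.y(), w1, r.h()), maxArea, out);
                    split(new Rect(r.x() + w1, r.y(), r.w() - w1, r.h()), maxArea, out);
                } else {
                    double h1 = r.h() * t;
                    split(new Rect(r.x(), r.y(), r.w(), h1), maxArea, out);
                    split(new Rect(r.x(), r.y() + h1, r.w(), r.h() - h1), maxArea, out);
                }
            }
        }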

    Read the article

  • Architecture for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main approaches: having each object render itself, or having a single renderer which renders everything. I'm currently aiming for the second idea, for the following reasons:
    - The list can be sorted so each shader is only used once. Otherwise each object would have to bind its shader, because it can't be sure the shader is already active.
    - The objects can be sorted and grouped.
    - Easier to swap APIs. With a few macro lines, it can be easy to swap between a DirectX renderer and an OpenGL renderer (not a reason for my project, but still a good point).
    - Easier to manage rendering code.
    Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work.

    First idea: the renderer has a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I'm worried that this may end up as a lot of extra pointer weight; I could sort the list of pointers every so often.

    Second idea: the entire list of entities is passed to the renderer each render call. The renderer then sorts the list (each call, or maybe once?) and gets what it wants. That's a lot of passing and/or sorting, however.

    Other ideas: ??? PROFIT. Anyone got ideas? Thank you.
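    A minimal sketch of the first idea, with all names illustrative: components register themselves with the renderer, and the list is re-sorted by a shader key only when it changes, so each shader is bound once per frame:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        // Central renderer: RenderComponents register themselves on creation and
        // the list is kept sorted by shader id so each shader is bound once per
        // frame. All names here are illustrative, not from a particular engine.
        public final class Renderer {
            public interface RenderComponent {
                int shaderId();  // sort key: group draws that share a shader
                void draw();     // issue the actual draw call
            }

            private final List<RenderComponent> components = new ArrayList<>();
            private boolean dirty = false;

            public void register(RenderComponent c)   { components.add(c); dirty = true; }
            public void unregister(RenderComponent c) { components.remove(c); }

            public void renderFrame() {
                if (dirty) { // re-sort only when the set of components changed
                    components.sort(Comparator.comparingInt(RenderComponent::shaderId));
                    dirty = false;
                }
                int bound = -1;
                for (RenderComponent c : components) {
                    if (c.shaderId() != bound) {
                        bound = c.shaderId();
                        // bindShader(bound) would go here
                    }
                    c.draw();
                }
            }
        }

    The re-sort-on-change flag addresses the "sort every call?" worry: registration is rare compared to rendering, so the sorted order is almost always reused.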

    Read the article

  • exact point on a rotating sphere

    - by nkint
    I have a sphere that represents the Earth, textured with real pictures. It's rotating around the x axis, and when the user clicks, it has to show the exact place they clicked on. For example, if they clicked on Singapore, the system should be able to:
    - understand that the user clicked on the sphere (OK, I'll do it with unProject)
    - understand where on the sphere the user clicked (ray-sphere collision?), taking the rotation transform into account
    - convert the sphere coordinates to some coordinate system suitable for a web API service
    - ask the API (OK, this is the simplest part for me ;-)
    Any advice?
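    A sketch of the middle two steps, assuming a unit sphere at the origin and rotation about the x axis as described; the latitude/longitude conventions at the end depend on how the texture is mapped:

        // Click -> lat/long on a rotating unit sphere at the origin.
        // 'origin' and 'dir' describe the picking ray (dir normalized); 'angle'
        // is the sphere's current rotation about the x axis.
        public final class GlobePicker {
            /** Returns {latitudeDeg, longitudeDeg}, or null if the ray misses. */
            public static double[] pick(double[] origin, double[] dir, double angle) {
                // Ray-sphere: |origin + t*dir| = 1  =>  t^2 + 2t(o.d) + (o.o - 1) = 0
                double od = dot(origin, dir);
                double disc = od * od - (dot(origin, origin) - 1.0);
                if (disc < 0) return null;        // missed the sphere
                double t = -od - Math.sqrt(disc); // nearer intersection
                double[] p = { origin[0] + t * dir[0], origin[1] + t * dir[1], origin[2] + t * dir[2] };

                // Undo the sphere's spin: rotate the hit point by -angle about x.
                double c = Math.cos(-angle), s = Math.sin(-angle);
                double y = c * p[1] - s * p[2];
                double z = s * p[1] + c * p[2];

                double lat = Math.toDegrees(Math.asin(y));        // assumes y is "up"
                double lon = Math.toDegrees(Math.atan2(p[0], z)); // convention-dependent
                return new double[] { lat, lon };
            }

            private static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
        }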

    Read the article

  • 3D trajectory - calculate initial velocity

    - by Skoder
    Hey, I've got a 2D projectile code sample working, but would like to extend it to 3D. How would I calculate the initial velocity on the z axis? At the moment I've got:

        initVel.X = (float)Math.Cos(45.0);
        initVel.Y = (float)Math.Sin(45.0);

    How would I convert this to work in 3D (and add the initial velocity for the z axis)? In my example, X is across, Y is up/down and Z goes into the screen. I also normalize the vector and multiply it by the speed. Thanks
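    A sketch of the spherical-coordinate version, which adds a yaw (heading) angle for the z axis. Note that .NET's Math.Cos/Math.Sin take radians, so Math.Cos(45.0) above is not 45 degrees:

        // 3D launch direction from a pitch (elevation) and yaw (heading) angle.
        // X is right, Y is up, Z goes into the screen, matching the post.
        public final class Launch {
            public static float[] initialVelocity(double pitchDeg, double yawDeg, float speed) {
                double pitch = Math.toRadians(pitchDeg);
                double yaw = Math.toRadians(yawDeg);
                float x = (float) (Math.cos(pitch) * Math.cos(yaw)) * speed;
                float y = (float) Math.sin(pitch) * speed;                   // vertical component
                float z = (float) (Math.cos(pitch) * Math.sin(yaw)) * speed; // depth component
                // Already unit length before scaling:
                // cos^2(p)cos^2(y) + sin^2(p) + cos^2(p)sin^2(y) = 1
                return new float[] { x, y, z };
            }
        }

    With yaw = 0 this reduces to the 2D case above, so no separate normalization step is needed.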

    Read the article

  • Projecting onto different-sized screens by cropping

    - by Jason
    Hi, I am building a phone application which will display a shape on screen. The shape should look the same on different screen sizes. I decided the best way to do this is to show more of the background on larger screens, keeping the shape's proportions the same on all screens. My problem is that I am not sure how to achieve this. I can query the screen size at runtime and calculate how different it is from the size it was designed for, but I am not sure what to do with this value. What kind of projection should I use for my orthographic matrix, and how will I display more on larger screens without losing information on smaller screens? Thanks, Jason.
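    One way that works, sketched with an illustrative design-time constant: fix the world-space height the shape occupies and derive the width from the screen's aspect ratio, so different screens reveal more or less background but never rescale the shape:

        // Orthographic bounds that keep a fixed world height and widen with the
        // screen: the shape's proportions never change, larger screens just see
        // more background. Returned values are {left, right, bottom, top}.
        public final class OrthoFit {
            private static final float WORLD_HEIGHT = 10f; // design-time constant (illustrative)

            public static float[] bounds(int screenWidthPx, int screenHeightPx) {
                float aspect = (float) screenWidthPx / screenHeightPx;
                float halfH = WORLD_HEIGHT / 2f;
                float halfW = halfH * aspect; // width follows the aspect ratio
                return new float[] { -halfW, halfW, -halfH, halfH };
            }
        }

    Feed these four values into your orthographic projection; to guarantee the shape always fits, size it against the smaller of the two world dimensions.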

    Read the article

  • Learning OpenGL GLSL - VAO buffer problems?

    - by Bleary
    I've just started digging through OpenGL and GLSL, and now I've stumbled on something I can't get my head around! I've stepped back to loading a simple cube and using a simple shader on it, but the result is triangles drawn incorrectly and/or missing. This code worked perfectly when I drew meshes directly; I am attempting to move to VAOs, and none of the code for storing the vertices and indices has changed.

    http://i.stack.imgur.com/RxxZ5.jpg
    http://i.stack.imgur.com/zSU50.jpg

    What I have for creating the VAO and buffers is this:

        // Create the vertex array object
        glGenVertexArrays(1, &vaoID);
        // Finally create our vertex buffer objects
        glGenBuffers(VBO_COUNT, mVBONames);
        glBindVertexArray(vaoID);

        // Save vertex attributes into GPU
        glBindBuffer(GL_ARRAY_BUFFER, mVBONames[VERTEX_VBO]);
        // Copy data into the buffer object
        glBufferData(GL_ARRAY_BUFFER, lPolygonVertexCount*VERTEX_STRIDE*sizeof(GLfloat), lVertices, GL_STATIC_DRAW);
        glEnableVertexAttribArray(pos);
        glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, VERTEX_STRIDE*sizeof(GLfloat), 0);

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mVBONames[INDEX_VBO]);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, lPolygonCount*sizeof(unsigned int), lIndices, GL_STATIC_DRAW);
        glBindVertexArray(0);

    And the code for drawing the mesh:

        glBindVertexArray(vaoID);
        glUseProgram(shader->programID);
        GLsizei lOffset = mSubMeshes[pMaterialIndex]->IndexOffset*sizeof(unsigned int);
        const GLsizei lElementCount = mSubMeshes[pMaterialIndex]->TriangleCount*TRIANGLE_VERTEX_COUNT;
        glDrawElements(GL_TRIANGLES, lElementCount, GL_UNSIGNED_SHORT, reinterpret_cast<const GLvoid*>(lOffset));
        // All the points are indeed in the correct place!?
        //glPointSize(10.0f);
        //glDrawElements(GL_POINTS, lElementCount, GL_UNSIGNED_SHORT, 0);
        glUseProgram(0);
        glBindVertexArray(0);

    My eyes have become bleary looking at this today, so any thoughts or a fresh set of eyes would be greatly appreciated.

    Read the article

  • Intersection points of plane set forming convex hull

    - by Toji
    Mostly looking for a nudge in the right direction here. Given a set of planes (defined as a normal and a distance from the origin) that form a convex hull, I would like to find the intersection points that form the corners of that hull. More directly, I'm looking for a way to generate a point cloud appropriate to provide to Bullet. Bonus points if someone knows of a way I could give Bullet the plane list directly, since I somewhat suspect that's what it's building on the back end anyway.
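    A brute-force sketch of the corner extraction: intersect every triple of planes with the standard three-plane formula and keep the points that are not outside any plane. O(n^3) in the plane count, which is fine for typical hulls:

        import java.util.ArrayList;
        import java.util.List;

        // Corners of a convex hull given as bounding planes (n.p = d, normals
        // pointing outward): intersect every triple of planes, keep the points
        // inside all planes.
        public final class HullCorners {
            public record Plane(double[] n, double d) {}

            public static List<double[]> corners(List<Plane> planes) {
                List<double[]> out = new ArrayList<>();
                int k = planes.size();
                for (int i = 0; i < k; i++)
                    for (int j = i + 1; j < k; j++)
                        for (int m = j + 1; m < k; m++) {
                            double[] p = intersect(planes.get(i), planes.get(j), planes.get(m));
                            if (p != null && inside(p, planes)) out.add(p);
                        }
                return out; // may contain duplicates if >3 planes meet at a corner
            }

            // p = (d1 (n2 x n3) + d2 (n3 x n1) + d3 (n1 x n2)) / (n1 . (n2 x n3))
            private static double[] intersect(Plane a, Plane b, Plane c) {
                double[] bc = cross(b.n(), c.n()), ca = cross(c.n(), a.n()), ab = cross(a.n(), b.n());
                double det = dot(a.n(), bc);
                if (Math.abs(det) < 1e-9) return null; // two planes (nearly) parallel
                return new double[] {
                    (a.d() * bc[0] + b.d() * ca[0] + c.d() * ab[0]) / det,
                    (a.d() * bc[1] + b.d() * ca[1] + c.d() * ab[1]) / det,
                    (a.d() * bc[2] + b.d() * ca[2] + c.d() * ab[2]) / det };
            }

            private static boolean inside(double[] p, List<Plane> planes) {
                for (Plane pl : planes)
                    if (dot(pl.n(), p) - pl.d() > 1e-6) return false; // outside this plane
                return true;
            }

            private static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
            private static double[] cross(double[] a, double[] b) {
                return new double[] { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
            }
        }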

    Read the article

  • DevExpress Wins #1 Publisher Award

    Thanks to loyal and awesome customers like you, DevExpress recently won several bestseller awards from ComponentSource. This morning at TechEd 2010, Chris Brooke from ComponentSource dropped by to present the #1 Publisher award. Thanks for your support! DXperience? What's that? DXperience is the .NET developer's secret weapon: full access to a complete suite of professional components that let you instantly drop in new features, designer styles and fast performance.

    Read the article

  • Moving objects colliding when using unalligned collision avoidance (steering)

    - by James Bedford
    I'm having trouble with unaligned collision avoidance for what I think is a rare case. I have set two objects to move towards each other but with a slight offset, so one object is moving slightly upwards and the other slightly downwards. In my unaligned collision avoidance steering algorithm, I find the points on this object's forward line and the other object's forward line where the two lines are closest. If these closest points are within the collision avoidance distance, and the distance between them is smaller than the sum of the two objects' bounding-sphere radii, the objects should steer away in the appropriate direction. The problem is that in my case the closest points on the lines are calculated to be really far away from the actual collision point, because the two forward lines move away from each other as the objects pass. As a result, no steering takes place and the two objects partially collide. Does anyone have any suggestions as to how I can correctly calculate the point of collision? Perhaps by somehow taking into account the sizes of the two objects?
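    One suggestion: rather than the closest points between the two infinite forward lines, compute the time of closest approach from the relative position and velocity; this handles diverging paths correctly and naturally accounts for the objects' radii. A sketch:

        // Time of closest approach of two moving circles/spheres. Work in relative
        // terms: p = pB - pA, v = vB - vA; distance^2(t) = |p + t v|^2 is minimized
        // at t* = -(p.v)/(v.v). Steer away only if they actually get too close.
        public final class ClosestApproach {
            /** Returns t >= 0 of closest approach, or 0 if already at it. */
            public static double timeOfClosestApproach(double[] p, double[] v) {
                double vv = dot(v, v);
                if (vv < 1e-12) return 0.0; // same velocity: distance never changes
                double t = -dot(p, v) / vv;
                return Math.max(t, 0.0);    // negative t means the closest point is in the past
            }

            /** True if the spheres (radii rA, rB) come within touching distance. */
            public static boolean willCollide(double[] p, double[] v, double rA, double rB) {
                double t = timeOfClosestApproach(p, v);
                double[] q = { p[0] + t * v[0], p[1] + t * v[1], p[2] + t * v[2] };
                double r = rA + rB;
                return dot(q, q) <= r * r;
            }

            private static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
        }

    The predicted positions at t* also give a sensible point to steer away from, instead of the far-off line-line closest points.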

    Read the article

  • Algorithms for rainfall + river creation in procedurally generated terrain

    - by Peck
    I've recently become fascinated by the things that can be done with procedurally generated terrain and have started experimenting with world building a bit. I'd like to be able to make worlds something like Dwarf Fortress, with biomes created by meshing together various maps. The first step is done: using the diamond-square algorithm I've created some nice heightmaps. Next, I would like to add some water features, generated somewhat realistically from rainfall. I've read about a few different approaches, such as starting at the high points of the map and "stepping" down to the lowest neighbouring point, pooling/eroding as the water works its way down to sea level. Are there any documented algorithms for this, or are they more off the cuff? Would love any advice/thoughts.
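    A minimal sketch of the "step downhill" idea, with no pooling or erosion: walk from every cell to its lowest neighbour and count the visits; cells with high counts are river candidates:

        // Droplet-style flow accumulation on a heightmap: from every cell, walk to
        // the lowest neighbour until no neighbour is lower, incrementing a flow
        // counter along the way. Cells with high flow counts become rivers.
        public final class Rainfall {
            public static int[][] flowMap(double[][] height) {
                int h = height.length, w = height[0].length;
                int[][] flow = new int[h][w];
                for (int sy = 0; sy < h; sy++)
                    for (int sx = 0; sx < w; sx++) {
                        int x = sx, y = sy;
                        for (int step = 0; step < w * h; step++) { // guard against cycles
                            flow[y][x]++;
                            int bx = x, by = y;
                            for (int dy = -1; dy <= 1; dy++)
                                for (int dx = -1; dx <= 1; dx++) {
                                    int nx = x + dx, ny = y + dy;
                                    if (nx >= 0 && ny >= 0 && nx < w && ny < h
                                            && height[ny][nx] < height[by][bx]) {
                                        bx = nx; by = ny; // lowest neighbour so far
                                    }
                                }
                            if (bx == x && by == y) break; // local minimum: water pools here
                            x = bx; y = by;
                        }
                    }
                return flow;
            }
        }

    Thresholding the flow map gives river courses; the local minima where walks stop are natural lake sites, which a fuller model would fill and spill.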

    Read the article

  • Need help drawing planets in Java

    - by d33j
    I am looking for help/links/notes/algorithms/URLs/examples on drawing/rendering spheres in pure Java (so that I can, hopefully, one day generate/render planets with various surfaces and atmospheres). For the moment, I'd be pretty happy to start off with just drawing a wireframed sphere or two. PS: I don't want to use external libraries like Java3D or JOGL, or aftermarket engines like jMonkeyEngine; I would rather keep it straight Java.
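    A minimal pure-Java starting point: generate latitude rings with plain trig, tilt them, and project orthographically by dropping z. Longitude lines follow the same pattern with latitude as the inner loop variable:

        import java.awt.Graphics;
        import javax.swing.JFrame;
        import javax.swing.JPanel;
        import javax.swing.WindowConstants;

        // Pure-Java wireframe sphere: latitude rings generated with trig, tilted
        // slightly and drawn with an orthographic projection (drop z). No
        // external libraries -- just java.awt/javax.swing.
        public final class WireSphere extends JPanel {
            private static final int RINGS = 12, SEGS = 32, R = 150;
            private static final double TILT = Math.toRadians(25); // tilt so rings read as 3D

            @Override protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                int cx = getWidth() / 2, cy = getHeight() / 2;
                for (int i = 1; i < RINGS; i++) {
                    double lat = Math.PI * i / RINGS - Math.PI / 2; // -90..+90 degrees
                    int prevX = 0, prevY = 0;
                    for (int j = 0; j <= SEGS; j++) {
                        double lon = 2 * Math.PI * j / SEGS;
                        double px = R * Math.cos(lat) * Math.cos(lon);
                        double py = R * Math.sin(lat);
                        double pz = R * Math.cos(lat) * Math.sin(lon);
                        // Rotate around the x axis, then drop z (orthographic).
                        int x = cx + (int) px;
                        int y = cy - (int) (py * Math.cos(TILT) - pz * Math.sin(TILT));
                        if (j > 0) g.drawLine(prevX, prevY, x, y);
                        prevX = x; prevY = y;
                    }
                }
                g.drawOval(cx - R, cy - R, 2 * R, 2 * R); // silhouette circle
            }

            public static void main(String[] args) {
                JFrame f = new JFrame("Wireframe sphere");
                f.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
                f.add(new WireSphere());
                f.setSize(400, 400);
                f.setVisible(true);
            }
        }

    From there, replacing the fixed tilt with user-controlled rotation angles gives a spinnable globe, and per-vertex colouring by a noise function is the usual next step toward planet surfaces.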

    Read the article
