Search Results

Search found 44141 results on 1766 pages for 'unix development support'.


  • Shadow mapping: what is the light looking at?

    - by PgrAm
    I'm all set to implement shadow mapping in my 3D engine, but there is one thing I am struggling to understand. The scene needs to be rendered from the light's point of view, so I simply move my camera to the light's position first, but then I need to find out which direction the light is looking. Since it's a point light, it isn't shining in any particular direction. How do I figure out what the orientation for the light's point of view should be?
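
    A point light has no single view direction, so the usual technique is an omnidirectional shadow map: render the scene six times from the light's position, once along each principal axis, into the faces of a cube map (a spot light, by contrast, has a direction and needs only one pass). A minimal sketch of the six view matrices, assuming GLM since the question does not name a math library:

        #include <array>
        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // One view matrix per cube-map face; each is paired with a square 90-degree projection,
        // e.g. glm::perspective(glm::radians(90.0f), 1.0f, zNear, zFar).
        std::array<glm::mat4, 6> pointLightViews(const glm::vec3& lightPos)
        {
            return {
                glm::lookAt(lightPos, lightPos + glm::vec3( 1,  0,  0), glm::vec3(0, -1,  0)), // +X
                glm::lookAt(lightPos, lightPos + glm::vec3(-1,  0,  0), glm::vec3(0, -1,  0)), // -X
                glm::lookAt(lightPos, lightPos + glm::vec3( 0,  1,  0), glm::vec3(0,  0,  1)), // +Y
                glm::lookAt(lightPos, lightPos + glm::vec3( 0, -1,  0), glm::vec3(0,  0, -1)), // -Y
                glm::lookAt(lightPos, lightPos + glm::vec3( 0,  0,  1), glm::vec3(0, -1,  0)), // +Z
                glm::lookAt(lightPos, lightPos + glm::vec3( 0,  0, -1), glm::vec3(0, -1,  0))  // -Z
            };
        }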

    Read the article

  • Why does this game loop stop my process from responding?

    - by Ben
    I implemented a fixed time step loop for my C# game. All it does at the moment is make a square bounce around the screen. The problem I'm having is that when I execute the program, I can't close it from the window's close button and the cursor is stuck on the "busy" icon. I have to go into Visual Studio and stop the program manually. Here's the loop at the moment: public void run() { int updates = 0; int frames = 0; double msPerTick = 1000.0 / 60.0; double threshhold = 0; long lastTime = getCurrentTime(); long lastTimer = getCurrentTime(); while (true) { long currTime = getCurrentTime(); threshhold += (currTime - lastTime) / msPerTick; lastTime = currTime; while (threshhold >= 1) { update(); updates++; threshhold -= 1; } this.Refresh(); frames++; if ((getCurrentTime() - lastTimer) >= 1000) { this.Text = updates + " updates and " + frames + " frames per second"; updates = 0; frames = 0; lastTimer += 1000; } } } Why is this happening?
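
    The symptom described above (unresponsive close button, busy cursor) is the classic sign of a loop that never returns control to the UI thread, so the window's message queue never gets serviced; in the WinForms code above the loop would need to process pending events each iteration or run on a separate thread. As a language-neutral illustration, here is the same fixed-step structure written against the raw Win32 message pump; the Game type and its methods are hypothetical stand-ins for the question's update()/Refresh() work:

        #include <windows.h>

        struct Game {                              // hypothetical stand-in for the form/game class
            void fixedStepUpdate() { /* the threshold/update() logic from the question */ }
            void render()          { /* the Refresh()/draw call */ }
        };

        void runGameLoop(Game& game)
        {
            MSG msg = {};
            bool running = true;
            while (running)
            {
                // Drain pending window messages so the close button and cursor stay responsive.
                while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
                {
                    if (msg.message == WM_QUIT)
                        running = false;
                    TranslateMessage(&msg);
                    DispatchMessage(&msg);
                }
                game.fixedStepUpdate();
                game.render();
            }
        }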

    Read the article

  • Modify game using external file

    - by Veehmot
    In Flash, for example, I can place an XML file alongside the binary; then if I modify some variable, the game will change for everyone. How can I achieve something like that on Android? I know that for every change I make to the game, the player would need to download a new update. But the main goal I'm after is modifying the game's stats without having to recompile the entire APK. I'm working with Haxe+OpenFL.

    Read the article

  • matrix 4x4 position data

    - by freefallr
    I understand that a 4x4 matrix holds rotation and position data. The rotation data is held in the 3x3 sub-matrix at the top left of the matrix, and the position data is held in the last column. e.g. glm::vec3 vParentPos( mParent[3][0], mParent[3][1], mParent[3][2] ); My question is: am I accessing the parent matrix correctly in the example above? I know that OpenGL uses a different matrix ordering than DirectX (row order instead of column order, or something like that), so should mParent be accessed as follows instead? glm::vec3 vParentPos( mParent[0][3], mParent[1][3], mParent[2][3] ); Thanks!
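
    Assuming mParent is a glm::mat4: GLM stores matrices in column-major order and operator[] selects a column, so the translation of a standard transform sits in mParent[3] and the first form in the question is the one that matches GLM. A small sketch:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        glm::mat4 mParent = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 20.0f, 30.0f));

        // operator[] picks a column, and the last column of an affine transform is the translation.
        glm::vec3 vParentPos(mParent[3][0], mParent[3][1], mParent[3][2]); // (10, 20, 30)
        glm::vec3 sameThing  = glm::vec3(mParent[3]);                      // equivalent shorthand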

    Read the article

  • Using 2D sprites and 3D models together

    - by Sweta Dwivedi
    I have gone through a few posts that talk about changing the GraphicsDevice.BlendState and GraphicsDevice.DepthStencilState (SpriteBatch & Render states). However, even after changing the states I can't see my 3D model on the screen. I see the model for a second before I draw my video in the background. Here is the code: case GameState.InGame: GraphicsDevice.Clear(Color.AliceBlue); spriteBatch.Begin(); if (player.State != MediaState.Stopped) { videoTexture = player.GetTexture(); } Rectangle screen = new Rectangle(GraphicsDevice.Viewport.X, GraphicsDevice.Viewport.Y, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height); // Draw the video, if we have a texture to draw. if (videoTexture != null) { spriteBatch.Draw(videoTexture, screen, Color.White); if (Selected_underwater == true) { spriteBatch.DrawString(font, "MaxX , MaxY" + maxWidth + "," + maxHeight, new Vector2(400, 10), Color.Red); spriteBatch.Draw(kinectRGBVideo, new Rectangle(0, 0, 100, 100), Color.White); spriteBatch.Draw(butterfly, handPosition, Color.White); foreach (AnimatedSprite a in aSprites) { a.Draw(spriteBatch); } } if(Selected_planet == true) { spriteBatch.Draw(kinectRGBVideo, new Rectangle(0, 0, 100, 100), Color.White); spriteBatch.Draw(butterfly, handPosition, Color.White); spriteBatch.Draw(videoTexture,screen,Color.White); GraphicsDevice.BlendState = BlendState.Opaque; GraphicsDevice.DepthStencilState = DepthStencilState.Default; GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap; foreach (_3DModel m in Solar) { m.DrawModel(); } } spriteBatch.End(); break;

    Read the article

  • My vertex shader doesn't affect texture coords or diffuse info but works for position

    - by tina nyaa
    I am new to 3D and DirectX - in the past I have only used abstractions for 2D drawing. Over the past month I've been studying really hard and I'm trying to modify and adapt some of the shaders as part of my personal 'study project'. Below I have a shader, modified from one of the Microsoft samples. I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? // // Skinned Mesh Effect file // Copyright (c) 2000-2002 Microsoft Corporation. All rights reserved. // float4 lhtDir = {0.0f, 0.0f, -1.0f, 1.0f}; //light Direction float4 lightDiffuse = {0.6f, 0.6f, 0.6f, 1.0f}; // Light Diffuse float4 MaterialAmbient : MATERIALAMBIENT = {0.1f, 0.1f, 0.1f, 1.0f}; float4 MaterialDiffuse : MATERIALDIFFUSE = {0.8f, 0.8f, 0.8f, 1.0f}; // Matrix Pallette static const int MAX_MATRICES = 100; float4x3 mWorldMatrixArray[MAX_MATRICES] : WORLDMATRIXARRAY; float4x4 mViewProj : VIEWPROJECTION; /////////////////////////////////////////////////////// struct VS_INPUT { float4 Pos : POSITION; float4 BlendWeights : BLENDWEIGHT; float4 BlendIndices : BLENDINDICES; float3 Normal : NORMAL; float3 Tex0 : TEXCOORD0; }; struct VS_OUTPUT { float4 Pos : POSITION; float4 Diffuse : COLOR; float2 Tex0 : TEXCOORD0; }; float3 Diffuse(float3 Normal) { float CosTheta; // N.L Clamped CosTheta = max(0.0f, dot(Normal, lhtDir.xyz)); // propogate scalar result to vector return (CosTheta); } VS_OUTPUT VShade(VS_INPUT i, uniform int NumBones) { VS_OUTPUT o; float3 Pos = 0.0f; float3 Normal = 0.0f; float LastWeight = 0.0f; // Compensate for lack of UBYTE4 on Geforce3 int4 IndexVector = D3DCOLORtoUBYTE4(i.BlendIndices); // cast the vectors to arrays for use in the for loop below float BlendWeightsArray[4] = (float[4])i.BlendWeights; int IndexArray[4] = (int[4])IndexVector; // calculate the pos/normal using the "normal" weights // and accumulate the weights to calculate the last weight for (int iBone = 0; iBone < NumBones-1; iBone++) { LastWeight = LastWeight + BlendWeightsArray[iBone]; Pos += mul(i.Pos, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; Normal += mul(i.Normal, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; } LastWeight = 1.0f - LastWeight; // Now that we have the calculated weight, add in the final influence Pos += (mul(i.Pos, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); Normal += (mul(i.Normal, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); // transform position from world space into view and then projection space //o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Diffuse.x = 0.0f; o.Diffuse.y = 0.0f; o.Diffuse.z = 0.0f; o.Diffuse.w = 0.0f; o.Tex0 = float2(0,0); return o; } technique t0 { pass p0 { VertexShader = compile vs_3_0 VShade(4); } } I am currently using the SlimDX .NET wrapper around DirectX, but the API is extremely similar: public void Draw() { var device = vertexBuffer.Device; device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.White, 1.0f, 0); device.SetRenderState(RenderState.Lighting, true); device.SetRenderState(RenderState.DitherEnable, true); device.SetRenderState(RenderState.ZEnable, true); device.SetRenderState(RenderState.CullMode, Cull.Counterclockwise); device.SetRenderState(RenderState.NormalizeNormals, true); device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.Anisotropic); 
device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.Anisotropic); device.SetTransform(TransformState.World, Matrix.Identity * Matrix.Translation(0, -50, 0)); device.SetTransform(TransformState.View, Matrix.LookAtLH(new Vector3(-200, 0, 0), Vector3.Zero, Vector3.UnitY)); device.SetTransform(TransformState.Projection, Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); var material = new Material(); material.Ambient = material.Diffuse = material.Emissive = material.Specular = new Color4(Color.White); material.Power = 1f; device.SetStreamSource(0, vertexBuffer, 0, vertexSize); device.VertexDeclaration = vertexDeclaration; device.Indices = indexBuffer; device.Material = material; device.SetTexture(0, texture); var param = effect.GetParameter(null, "mWorldMatrixArray"); var boneWorldTransforms = bones.OrderedBones.OrderBy(x => x.Id).Select(x => x.CombinedTransformation).ToArray(); effect.SetValue(param, boneWorldTransforms); effect.SetValue(effect.GetParameter(null, "mViewProj"), Matrix.Identity);// Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); effect.SetValue(effect.GetParameter(null, "MaterialDiffuse"), material.Diffuse); effect.SetValue(effect.GetParameter(null, "MaterialAmbient"), material.Ambient); effect.Technique = effect.GetTechnique(0); var passes = effect.Begin(FX.DoNotSaveState); for (var i = 0; i < passes; i++) { effect.BeginPass(i); device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, skin.Vertices.Length, 0, skin.Indicies.Length / 3); effect.EndPass(); } effect.End(); } Again, I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? Also, whatever I set in the bone transformation matrices doesn't seem to have an effect on my model. If I set every bone transformation to a zero matrix, the model still shows up as if nothing had happened, but changing the Pos field in shader output makes the model disappear. I don't understand why I'm getting this kind of behaviour. Thank you!

    Read the article

  • Understanding how to create/use textures for games when limited by power of two sizes

    - by Matthias Reisner
    I have some questions about creating graphics for a game. As an example, I want to create a motorbike (1 pixel = 1 centimeter), so my motorbike will be 200 wide and 150 high (200x150). But libgdx only allows loading textures whose sizes are powers of 2 (2, 4, 8, 16, ...)?! First I thought about doing it this way: I create my bike at 200x150 and save it as a PNG. Then I open it again (e.g. with GIMP) and resize the image to a size that uses power-of-2 values (128x128). I load that as a texture in the program and set the width to 200 and the height to 150. But wouldn't that be a problem? Because I will lose some pixel information when I make that first conversion, won't I?
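
    Rather than scaling the 200x150 image down to 128x128 (which discards pixels), the usual practice is to pad it onto the next power-of-two canvas, 256x256 in this case, and draw only the original 200x150 region, so no pixel information is lost; tools such as libgdx's TexturePacker do this padding automatically. A sketch of the rounding step (shown in C++ purely for illustration):

        #include <cstdint>
        #include <iostream>

        // Round v up to the next power of two (v must be greater than 0).
        uint32_t nextPowerOfTwo(uint32_t v)
        {
            v--;
            v |= v >> 1;  v |= v >> 2;  v |= v >> 4;
            v |= v >> 8;  v |= v >> 16;
            return v + 1;
        }

        int main()
        {
            std::cout << nextPowerOfTwo(200) << "x" << nextPowerOfTwo(150) << "\n"; // 256x256 canvas
        }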

    Read the article

  • Double-sided face with two normals

    - by Marnix
    I think this isn't possible, but I just want to check: is it possible to create a face in OpenGL that has two normals? I want both the inside and the outside of a cylinder to be drawn, but I want the lighting to behave as expected on each side rather than always being calculated from the single normal given. I was trying to do this with backface culling off, so I would have both faces, but the lighting was of course calculated wrongly on the back side. Is this possible, or do I have to draw an inside and an outside, i.e. draw everything twice?
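
    In the fixed-function pipeline this question appears to target, the case is covered by two-sided lighting: OpenGL negates the normal for back-facing polygons, so a single face can be lit correctly from both sides without being drawn twice. A minimal sketch:

        // Draw both sides of the cylinder and let GL light the back side with the flipped normal.
        glDisable(GL_CULL_FACE);
        glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

        // In a shader-based pipeline the equivalent is done manually in the fragment shader:
        //     if (!gl_FrontFacing) normal = -normal;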

    Read the article

  • How do I run my XBOX XNA game without a network connection?

    - by Hugh
    I need to demo my Xbox XNA game in college. The college doesn't allow this type of device to connect to the network. I deployed my game to the Xbox and it is sitting in the games list along with my other games. It runs fine with a network connection, but when it's offline it comes up with an error message saying it needs a connection to run the game. This makes no sense; the game is deployed in the Xbox's memory, so it must be some security policy or something! Is there any way around this? The demo is on Monday!

    Read the article

  • OpenGL Tessellation makes point

    - by urza57
    A little problem with my tessellation shader. I try to implement a simple tessellation shader but it only makes points. Here's my vertex shader : out vec4 ecPosition; out vec3 ecNormal; void main( void ) { vec4 position = gl_Vertex; gl_Position = gl_ModelViewProjectionMatrix * position; ecPosition = gl_ModelViewMatrix * position; ecNormal = normalize(gl_NormalMatrix * gl_Normal); } My tessellation control shader : layout(vertices = 3) out; out vec4 ecPosition3[]; in vec3 ecNormal[]; in vec4 ecPosition[]; out vec3 myNormal[]; void main() { gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position; myNormal[gl_InvocationID] = ecNormal[gl_InvocationID]; ecPosition3[gl_InvocationID] = ecPosition[gl_InvocationID]; gl_TessLevelOuter[0] = float(4.0); gl_TessLevelOuter[1] = float(4.0); gl_TessLevelOuter[2] = float(4.0); gl_TessLevelInner[0] = float(4.0); } And my Tessellation Evaluation shader: layout(triangles, equal_spacing, ccw) in; in vec3 myNormal[]; in vec4 ecPosition3[]; out vec3 ecNormal; out vec4 ecPosition; void main() { float u = gl_TessCoord.x; float v = gl_TessCoord.y; float w = gl_TessCoord.z; vec3 position = vec4(gl_in[0].gl_Position.xyz * u + gl_in[1].gl_Position.xyz * v + gl_in[2].gl_Position.xyz * w ); vec3 position2 = vec4(ecPosition3[0].xyz * u + ecPosition3[1].xyz * v + ecPosition3[2].xyz * w ); vec3 normal = myNormal[0] * u + myNormal[1] * v + myNormal[2] * w ); ecNormal = normal; gl_Position = vec4(position, 1.0); ecPosition = vec4(position2, 1.0); } Thank you !
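
    Nothing in the shaders above forces point output, so one thing worth checking on the application side is the draw call: with a tessellation control/evaluation pair active, geometry must be submitted as patches whose size matches layout(vertices = 3), not as triangles. A sketch of the draw call, with hypothetical handles:

        // tessProgram: linked program containing the VS, TCS and TES above; meshVao: the mesh's VAO.
        void drawTessellatedMesh(GLuint tessProgram, GLuint meshVao, GLsizei vertexCount)
        {
            glPatchParameteri(GL_PATCH_VERTICES, 3);   // one patch = 3 control points, matching layout(vertices = 3)
            glUseProgram(tessProgram);
            glBindVertexArray(meshVao);
            glDrawArrays(GL_PATCHES, 0, vertexCount);  // must be GL_PATCHES, not GL_TRIANGLES
        }

    It is also worth reading the shader info logs: as pasted, the evaluation shader has mismatched parentheses and vec4 constructors assigned to vec3 variables, which would fail compilation.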

    Read the article

  • How can I do Mouse Selection In OpenGL 3.0?

    - by NoobScratcher
    Hello, I'm a pretty good programmer: I've made my own 2D games in SDL and built a GUI in 3D using both old and modern OpenGL, but I'm having problems trying to click 3D models in OpenGL and, to be honest, I have no idea what to do. Do I read back the area that I've clicked, or what do I do? I'm 100% sure this has been asked before, but I just don't know where to start. Using: OpenGL 3.0, Win32 API, C++.
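
    The old GL_SELECT mechanism is deprecated in modern OpenGL, so the two common approaches are colour picking (render each object in a unique flat colour to an off-screen buffer and read back the pixel under the cursor) and ray picking (unproject the mouse position and intersect the resulting ray with each model's bounding volume). A sketch of the ray-picking setup, assuming GLM since the question does not name a math library:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        struct Ray { glm::vec3 origin; glm::vec3 dir; };

        // viewport = (x, y, width, height). Window mouse coordinates usually have their origin at
        // the top-left, so the y axis is flipped before unprojecting.
        Ray mouseRay(float mouseX, float mouseY,
                     const glm::mat4& view, const glm::mat4& proj, const glm::vec4& viewport)
        {
            float winY = viewport.w - mouseY;   // viewport.w holds the viewport height
            glm::vec3 nearPoint = glm::unProject(glm::vec3(mouseX, winY, 0.0f), view, proj, viewport);
            glm::vec3 farPoint  = glm::unProject(glm::vec3(mouseX, winY, 1.0f), view, proj, viewport);
            return { nearPoint, glm::normalize(farPoint - nearPoint) };
        }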

    Read the article

  • AndEngine doesn't fill correctly an image on my device

    - by Guille
    I'm learning a little bit about AndEngine. I'm trying to follow a tutorial, but I can't get the background image to fill the screen correctly; it just appears on one side of my screen. My device is a Galaxy Nexus (1270x768, I think...). The image is 800x480. The code is: public EngineOptions onCreateEngineOptions() { camera = new Camera(0, 0, 800, 480); EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_FIXED, new FillResolutionPolicy(), this.camera); engineOptions.getAudioOptions().setNeedsMusic(true).setNeedsSound(true); engineOptions.getRenderOptions().setMultiSampling(true);//.getConfigChooserOptions().setRequestedMultiSampling(true); engineOptions.setWakeLockOptions(WakeLockOptions.SCREEN_ON); return engineOptions; } I have been trying several values for the camera, but it doesn't fill the whole screen. Why?

    Read the article

  • Should NPC dialog be stored in XML or in a script?

    - by Andrea Tucci
    I'm developing an action RPG with some friends. I would like to know the differences and pros/cons of storing NPC dialogue in an XML file instead of using a script. I see that the script method is often used by game developers for NPC text, but is it better than an XML file? We thought that an XML file with tags like <FirstText>[text1]<SecondText>[text2] et cetera would be perfect for NPC text and also for possible quests to give the player. So what are the differences between these two methods? Is a script suitable for this purpose?

    Read the article

  • Switching my collision detection to array lists caused it to stop working

    - by Charlton Santana
    I have made a collision detection system which worked before I switched to array lists and block generation. It's weird that it's not working now, but here's the code; if anyone could help I would be very grateful :) The first snippet is the block generation. private static final List<Block> BLOCKS = new ArrayList<Block>(); Random rnd = new Random(System.currentTimeMillis()); int randomx = 400; int randomy = 400; int blocknum = 100; String Title = "blocktitle" + blocknum; private Block block; public void generateBlocks(){ if(blocknum > 0){ int offset = rnd.nextInt(250) + 100; //500 is the maximum offset, this is a constant randomx += offset;//ofset will be between 100 and 400 int randomyoff = rnd.nextInt(80); //500 is the maximum offset, this is a constant randomy = platformheighttwo - 6 - randomyoff;//ofset will be between 100 and 400 block = new Block(BitmapFactory.decodeResource(getResources(), R.drawable.block2), randomx, randomy); BLOCKS.add(block); blocknum -= 1; } The second snippet is where the collision detection takes place. Note: the block.draw(canvas); works perfectly; it's the collision checks against the blocks that don't work. for(Block block : BLOCKS) { block.draw(canvas); if (sprite.bottomrx < block.bottomrx && sprite.bottomrx > block.bottomlx && sprite.bottomry < block.bottommy && sprite.bottomry > block.topry ){ Log.d(TAG, "Collided!!!!!!!!!!!!1"); } // bottom left touching block? if (sprite.bottomlx < block.bottomrx && sprite.bottomlx > block.bottomlx && sprite.bottomly < block.bottommy && sprite.bottomly > block.topry ){ Log.d(TAG, "Collided!!!!!!!!!!!!1"); } // top right touching block? if (sprite.toprx < block.bottomrx && sprite.toprx > block.bottomlx && sprite.topry < block.bottommy && sprite.topry > block.topry ){ Log.d(TAG, "Collided!!!!!!!!!!!!1"); } //top left touching block? if (sprite.toprx < block.bottomrx && sprite.toprx > block.bottomlx && sprite.topry < block.bottommy && sprite.topry > block.topry ){ Log.d(TAG, "Collided!!!!!!!!!!!!1"); } } The values, e.g. bottomrx, are in the Block.java file.
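
    Independent of the switch to array lists, the checks above only ask whether individual corner points of the sprite fall inside a block, which misses overlaps where neither rectangle contains a corner of the other. A plain axis-aligned overlap test between the two rectangles is simpler and covers every case; sketched here in C++ for illustration, with the field naming mirroring the question:

        // top < bottom because y grows downward on an Android canvas.
        struct AABB { float left, top, right, bottom; };

        // True when the two rectangles overlap; edges that merely touch do not count.
        bool overlaps(const AABB& a, const AABB& b)
        {
            return a.left < b.right && a.right > b.left &&
                   a.top  < b.bottom && a.bottom > b.top;
        }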

    Read the article

  • 3D Ball Physics Theory: collision response on ground and against walls?

    - by David
    I'm really struggling to get a strong grasp on how I should be handling collision response in a game engine I'm building around a 3D ball physics concept. Think Monkey Ball as an example of the type of gameplay. I am currently using sphere-to-sphere broad phase, then AABB to OBB testing (the final test I am using right now is one that checks if one of the 8 OBB points crosses the planes of the object it is testing against). This seems to work pretty well, and I am getting back: Plane that object is colliding against (with a point on the plane, the plane's normal, and the exact point of intersection. I've tried what feels like dozens of different high-level strategies for handling these collisions, without any real success. I think my biggest problem is understanding how to handle collisions against walls in the x-y axes (left/right, front/back), which I want to have elasticity, and the ground (z-axis) where I want an elastic reaction if the ball drops down, but then for it to eventually normalize and be kept "on the ground" (not go into the ground, but also not continue bouncing). Without kluging something together, I'm positive there is a good way to handle this, my theories just aren't getting me all the way there. For physics modeling and movement, I am trying to use a Euler based setup with each object maintaining a position (and destination position prior to collision detection), a velocity (which is added onto the position to determine the destination position), and an acceleration (which I use to store any player input being put on the ball, as well as gravity in the z coord). Starting from when I detect a collision, what is a good way to approach the response to get the expected behavior in all cases? Thanks in advance to anyone taking the time to assist... I am grateful for any pointers, and happy to post any additional info or code if it is useful. UPDATE Based on Steve H's and eBusiness' responses below, I have adapted my collision response to what makes a lot more sense now. It was close to right before, but I didn't have all the right pieces together at the right time! I have one problem left to solve, and that is what is causing the floor collision to hit every frame. Here's the collision response code I have now for the ball, then I'll describe the last bit I'm still struggling to understand. // if we are moving in the direction of the plane (against the normal)... if (m_velocity.dot(intersection.plane.normal) <= 0.0f) { float dampeningForce = 1.8f; // eventually create this value based on mass and acceleration // Calculate the projection velocity PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal); m_velocity -= actingVelocity * dampeningForce; } // Clamp z-velocity to zero if we are within a certain threshold // -- NOTE: this was an experimental idea I had to solve the "jitter" bug I'll describe below float diff = 0.2f - abs(m_velocity.z); if (diff > 0.0f && diff <= 0.2f) { m_velocity.z = 0.0f; } // Take this object to its new destination position based on... // -- our pre-collision position + vector to the collision point + our new velocity after collision * time // -- remaining after the collision to finish the movement m_destPosition = m_position + intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt); The above snippet is run after a collision is detected on the ball (collider) with a collidee (floor in this case). 
With a dampening force of 1.8f, the ball's reflected "upward" velocity will eventually be overcome by gravity, so the ball will essentially be stuck on the floor. THIS is the problem I have now... the collision code is running every frame (since the ball's z-velocity is constantly pushing it a collision with the floor below it). The ball is not technically stuck, I can move it around still, but the movement is really goofy because the velocity and position keep getting affected adversely by the above snippet. I was experimenting with an idea to clamp the z-velocity to zero if it was "close to zero", but this didn't do what I think... probably because the very next frame the ball gets a new gravity acceleration applied to its velocity regardless (which I think is good, right?). Collisions with walls are as they used to be and work very well. It's just this last bit of "stickiness" to deal with. The camera is constantly jittering up and down by extremely small fractions too when the ball is "at rest". I'll keep playing with it... I like puzzles like this, especially when I think I'm close. Any final ideas on what I could be doing wrong here? UPDATE 2 Good news - I discovered I should be subtracting the intersection.diff from the m_position (position prior to collision). The intersection.diff is my calculation of the difference in the vector of position to destPosition from the intersection point to the position. In this case, adding it was causing my ball to always go "up" just a little bit, causing the jitter. By subtracting it, and moving that clamper for the velocity.z when close to zero to being above the dot product (and changing the test from <= 0 to < 0), I now have the following: // Clamp z-velocity to zero if we are within a certain threshold float diff = 0.2f - abs(m_velocity.z); if (diff > 0.0f && diff <= 0.2f) { m_velocity.z = 0.0f; } // if we are moving in the direction of the plane (against the normal)... float dotprod = m_velocity.dot(intersection.plane.normal); if (dotprod < 0.0f) { float dampeningForce = 1.8f; // eventually create this value based on mass and acceleration? // Calculate the projection velocity PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal); m_velocity -= actingVelocity * dampeningForce; } // Take this object to its new destination position based on... // -- our pre-collision position + vector to the collision point + our new velocity after collision * time // -- remaining after the collision to finish the movement m_destPosition = m_position - intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt); UpdateWorldMatrix(m_destWorldMatrix, m_destOBB, m_destPosition, false); This is MUCH better. No jitter, and the ball now "rests" at the floor, while still bouncing off the floor and walls. The ONLY thing left is that the ball is now virtually "stuck". He can move but at a much slower rate, likely because the else of my dot product test is only letting the ball move at a rate multiplied against the tRemaining... I think this is a better solution than I had previously, but still somehow not the right idea. BTW, I'm trying to journal my progress through this problem for anyone else with a similar situation - hopefully it will serve as some help, as many similar posts have for me over the years.
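
    For reference, the dampening step above is a variant of the standard impulse response against a static plane: only the velocity component along the plane normal is reflected, scaled by a restitution factor e in [0, 1], so e close to 1 gives the elastic wall bounce while a smaller e lets gravity win and settle the ball on the floor. A sketch with GLM; the restitution value is a tuning parameter, not something prescribed by the question:

        #include <glm/glm.hpp>

        // Reflect the normal component of the velocity and leave the tangential part untouched.
        glm::vec3 resolvePlaneCollision(const glm::vec3& velocity, const glm::vec3& planeNormal,
                                        float restitution /* e.g. 0.8f for walls, lower for the floor */)
        {
            float vn = glm::dot(velocity, planeNormal);
            if (vn >= 0.0f)
                return velocity;                                    // already separating: no response
            return velocity - (1.0f + restitution) * vn * planeNormal;
        }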

    Read the article

  • cross resolution level design advice [on hold]

    - by Mike
    I was looking for some beginner advice regarding level design across multiple resolutions. I believe the answer is likely "it depends", but any input from anyone with real experience is very appreciated. Basically, I am building a 2D Super Metroid type game. If rooms/levels are to be a tiled grid, what are some general best practices for designing rooms when taking into account different resolutions? Since more or less tiles could fit vertically on a single screen depending on the resolution, is it better to design towards possibly having more of the room visible depending on the screen (with a bare minimum needed for gameplay), or should I fix the design at a certain tile height and scale the graphics?

    Read the article

  • Seeking a C/C++ OBJ geometry reader/writer that does not modify the representation

    - by Blake Senftner
    I am seeking a means to read and write OBJ geometry files with logic that does not modify the geometry representation. i.e. read geometry, immediately write it, and a diff of the source OBJ and the one just written will be identical. Every OBJ writing utility I've been able to find online fails this test. I am writing small command line tools to modify my OBJ geometries, and I need to write my results, not just read the geometry for rendering purposes. Simply needing to write the geometry knocks out 95% of the OBJ libraries on the web. Also, many of the popular libraries modify the geometry representation. For example, Nat Robbin's GLUT library includes the GLM library, which both converts quads to triangles, as well as reverses the topology (face ordering) of the geometry. It's still the same geometry, but if your tool chain expects a given topology, such as for rigging or morph targets, then GLM is useless. I'm not rendering in these tools, so dependencies like OpenGL or GLUT make no sense. And god forbid, do not "optimize" the geometry! Redundant vertices are on purpose for maintaining oneself on cache with our weird little low memory mobile devices.

    Read the article

  • Better way to do AI Behavior in AS3/Flixel

    - by joon
    I'm making a game in Flixel and I need to program an NPC. It's rapidly becoming more complex than I expected. I was wondering if there are any best practices, tutorials or examples you can refer me to, to see how this is done. I can probably hack it together, which is what I always do, but it would be nice if I could make it maintainable and add stuff later on. Here's a screenshot to give you an idea: the butler will be an NPC that follows you, or guides you, and talks to you the whole time. EDIT: More specifically, what I have now is a long list of IF statements in the update loop of the butler (about 8 different cases), and all I have covered is his walking behavior. I want him to comment on things and sometimes switch his main behavior to be more aggressive or distant, ... Is there any way to keep track of this, or is complex code with many nested if statements the way to go?
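
    One widely used answer to the nested-if problem is an explicit state machine: each behaviour (follow, guide, comment, and so on) gets its own update branch, and the long if-chain shrinks to deciding transitions between states. A language-neutral sketch in C++; the behaviours and timer are hypothetical and only illustrate the shape:

        enum class ButlerState { Follow, Guide, Comment, Aggressive };

        struct Butler {
            ButlerState state = ButlerState::Follow;
            float commentTimer = 0.0f;

            void update(float dt) {
                switch (state) {
                case ButlerState::Follow:
                    // walk toward the player; switch to Comment when something interesting is nearby
                    break;
                case ButlerState::Guide:
                    // walk ahead of the player along a scripted path
                    break;
                case ButlerState::Comment:
                    commentTimer -= dt;            // say a line, then fall back to following
                    if (commentTimer <= 0.0f) state = ButlerState::Follow;
                    break;
                case ButlerState::Aggressive:
                    // temporary override, switched back by whatever triggered it
                    break;
                }
            }
        };

    In AS3 the same shape is just a state field plus a switch in the butler's update, with each behaviour kept in its own method.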

    Read the article

  • Generating geometry when using VBO

    - by onedayitwillmake
    Currently I am working on a project in which I generate geometry based on the player's movement: a glorified, very long trail composed of quads. I am doing this by storing the vertices in a std::vector, removing the oldest vertices once enough exist, and then calling glDrawArrays. I am interested in switching to a shader-based model, but in the examples I usually see the VBO is generated at start-up and then that's basically it. What is the best route to go about creating geometry in real time using a shader/VBO approach?
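
    For geometry that changes every frame, a common shader-era pattern is to allocate one dynamic buffer at its maximum size up front and re-upload the live vertices each frame, either with glBufferSubData alone or after orphaning the buffer. A sketch; the vertex layout, capacity and GL loader are assumptions, and vertex-attribute/VAO setup is omitted:

        #include <vector>
        #include <GL/glew.h>                       // or whichever GL loader the project already uses

        struct Vertex { float x, y, z; };          // hypothetical trail vertex layout

        const size_t kMaxTrailVerts = 4096;        // hypothetical upper bound on the trail length
        std::vector<Vertex> trail;                 // the vertex list the question already maintains
        GLuint trailVbo = 0;

        void initTrailBuffer()
        {
            glGenBuffers(1, &trailVbo);
            glBindBuffer(GL_ARRAY_BUFFER, trailVbo);
            // Allocate once at full size; the contents are streamed in every frame.
            glBufferData(GL_ARRAY_BUFFER, kMaxTrailVerts * sizeof(Vertex), nullptr, GL_STREAM_DRAW);
        }

        void drawTrail()
        {
            glBindBuffer(GL_ARRAY_BUFFER, trailVbo);
            // Orphan the old storage so the driver need not stall on the previous frame,
            // then upload only the vertices that are currently alive.
            glBufferData(GL_ARRAY_BUFFER, kMaxTrailVerts * sizeof(Vertex), nullptr, GL_STREAM_DRAW);
            glBufferSubData(GL_ARRAY_BUFFER, 0, trail.size() * sizeof(Vertex), trail.data());
            glDrawArrays(GL_TRIANGLE_STRIP, 0, static_cast<GLsizei>(trail.size()));
        }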

    Read the article

  • OpenGL and gluUnProject, 3d object following mouse

    - by Robert
    I have a 3D object and I want it to "follow" my mouse position, so I use the gluUnProject function to convert screen coordinates to 3D world coordinates and translate the object to the new coordinates. It's working, but I have a problem: the object follows my mouse, but it moves extremely fast. When I move my mouse a little bit (something like 2 pixels), it moves extremely far in the 3D world. I want something like this: http://www.youtube.com/watch?v=90zS8SVUAIY (red circle following the mouse). Thanks for your help.

    Read the article

  • How should I do 3D games through Java on a mac?

    - by Steven Rogers
    I have been teaching myself Java on the Mac, mostly because the language is cross-platform. So far I have only been able to develop 2D games using the Graphics2D class. Now I want to learn how to make 3D games in Java. I used to model and animate stuff in 3D, so my knowledge of three-dimensional concepts is okay. I have spent the last 3 hours using Google to look up ways of making 3D games in Java. Apparently the best option is OpenGL, so I looked up a tutorial on it, but I cannot find a tutorial that shows how to (if there is a way) install JOGL on the Mac platform. Should I continue to use Java? How can I make 3D games using Java? What is the best way to make 3D games on a Mac?

    Read the article

  • Issues with touch buttons in XNA (Release state to be precise)

    - by Aditya
    I am trying to make touch buttons in WP8 with all the states (Pressed, Released, Moved), but the TouchLocationState.Released is not working. Here's my code: Class variables: bool touching = false; int touchID; Button tempButton; Button is a separate class with a method to switch states when touched. The Update method contains the following code: TouchCollection touchCollection = TouchPanel.GetState(); if (!touching && touchCollection.Count > 0) { touching = true; foreach (TouchLocation location in touchCollection) { for (int i = 0; i < menuButtons.Count; i++) { touchID = location.Id; // store the ID of current touch Point touchLocation = new Point((int)location.Position.X, (int)location.Position.Y); // create a point Button button = menuButtons[i]; if (GetMenuEntryHitBounds(button).Contains(touchLocation)) // a method which returns a rectangle. { button.SwitchState(true); // change the button state tempButton = button; // store the pressed button for accessing later } } } } else if (touchCollection.Count == 0) // clears the state of all buttons if no touch is detected { touching = false; for (int i = 0; i < menuButtons.Count; i++) { Button button = menuButtons[i]; button.SwitchState(false); } } menuButtons is a list of buttons on the menu. A separate loop (within the Update method) after the touched variable is true if (touching) { TouchLocation location; TouchLocation prevLocation; if (touchCollection.FindById(touchID, out location)) { if (location.TryGetPreviousLocation(out prevLocation)) { Point point = new Point((int)location.Position.X, (int)location.Position.Y); if (prevLocation.State == TouchLocationState.Pressed && location.State == TouchLocationState.Released) { if (GetMenuEntryHitBounds(tempButton).Contains(point)) // Execute the button action. I removed the excess } } } } The code for switching the button state is working fine but the code where I want to trigger the action is not. location.State == TouchLocationState.Released mostly ends up being false. (Even after I release the touch, it has a value of TouchLocationState.Moved) And what is more irritating is that it sometimes works! I am really confused and stuck for days now. Is this the right way? If yes then where am I going wrong? Or is there some other more effective way to do this? PS: I also posted this question on stack overflow then realized this question is more appropriate in gamedev. Sorry if it counts as being redundant.

    Read the article

  • What is the best way to "carve" a terrain created from a heightmap?

    - by tigrou
    I have a 3d landscape created from a heightmap. I'd like to "carve" some holes in that terrain. That will allow me to create bridges, caverns and tunnels inside it. That operation will be done in the game editor so it doesn't need to be realtime. In the end, rendering is done using traditional polygons. What would be the best/easiest way to do that ? I already think about several solutions : Solution 1 1) Create voxels from the heightmap (very easy). In other words, fill a 3D array like this : voxels[32][32][32] from the heightmap values. 2) Carve holes in the voxels as i want (easy too). 3) Convert voxels to polygons using some iso-surface extraction technique (like marching cubes). 4) Reduce (decimate) polygons created in 3). This technique seems to be the most promising for giving good results (untested). However the problem with marching cubes is that they tends to produce lots of polygons thus reducing them is mandatory. Implementing 4) also seems not trivial, i have read several papers on the web and it seems pretty complex. I was also unable to find an example, code snippet or something to start writing an algorithm for triangle mesh decimation. Maybe there is a special decimation algorithm (simpler) for meshes created from marching cubes ? Solution 2 1) Create some triangle mesh from the heighmap (easy). 2) Apply severals 3D boolean operation (eg: subtraction with a sphere) to carve the mesh. 3) apply some procedure to reduce polygons (optional). Operation 2) seems to be very complex and to be honest i have no idea how to do that. Also applying many boolean operation seems to be slow and will maybe degrade the triangle mesh every time a boolean operation is applied.
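
    For Solution 1, the "carve holes in the voxels" step is just an edit of the density grid before the iso-surface extraction runs: every voxel inside the carving shape is marked (or blended) as empty, and marching cubes then produces the tunnel walls on its own. A small sketch for a spherical carve; the grid size and the solid/empty sign convention are assumptions:

        #include <algorithm>
        #include <cmath>

        const int N = 32;
        float density[N][N][N];   // filled from the heightmap; > 0 means solid, <= 0 means empty

        // Carve a spherical hole centred at (cx, cy, cz), in voxel coordinates.
        void carveSphere(float cx, float cy, float cz, float radius)
        {
            for (int x = 0; x < N; ++x)
                for (int y = 0; y < N; ++y)
                    for (int z = 0; z < N; ++z)
                    {
                        float d = std::sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy) + (z - cz) * (z - cz));
                        // Inside the sphere, d - radius is negative (empty); taking the minimum
                        // blends the hole smoothly into the existing density field.
                        density[x][y][z] = std::min(density[x][y][z], d - radius);
                    }
        }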

    Read the article

  • Jumping with Mecanim synchronization

    - by Abhishek Deb
    I am using Unity3D 4.1 for one of my projects. I have a robot character which is always running. I am using mecanim animation system. What I really want:When I press Space bar, the character should jump up in the air, triggering an animation clip and then by the time it reaches the ground, the animation clip should also end. What actually is happening:When I press Space bar, the character jumps in the air. Animation clip plays as it should, but ends way before it reaches the ground. So, it looks like he is running in the mid air. What have I done: I have this humanoid robot setup with a jump animation bounded with the space bar key. Also, instead of using root motion, I am directly moving the robot from code. //Jumping if(Input.GetKeyDown(KeyCode.Space)){ rigidbody.AddForce(Vector3.up*jumpVelocty); anim.SetBool("Jump",true); } else anim.SetBool("Jump",false); Character's Details: Rigidbody = Mass:30, Freeze rotaion:x,y,z Capsule Collider = Material: metal, center(0,4.5,0), radius:1, height:11 Script = jumpVelocity:20000 Jump Animation Clip: ~ 2 seconds. I am really out of ideas how to synchronize everything. Should I make the character jump in some other way so that it quickly comes down and touches the ground to match the animation clip? If yes, please provide a direction.
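
    One way to keep the jump and the clip in sync is to derive the launch velocity from the clip length rather than hand-tuning a large force: for a vertical launch under gravity g, the total air time is 2*v/g, so choosing v = g*t/2 makes the character land exactly when a t-second clip ends. A worked sketch of the arithmetic; the numbers are illustrative, not taken from the project:

        #include <iostream>

        int main()
        {
            const float gravity        = 9.81f;                       // default gravity magnitude in Unity
            const float clipTime       = 2.0f;                        // length of the jump clip in seconds
            const float launchVelocity = gravity * clipTime / 2.0f;   // about 9.81 m/s upward

            std::cout << "Launch at " << launchVelocity << " m/s to land after "
                      << 2.0f * launchVelocity / gravity << " s\n";   // prints 2 s
        }

    Applying that as a direct change to the rigidbody's vertical velocity, rather than one very large force impulse, makes the air time predictable and equal to the clip length, and the Jump flag can then be cleared by a ground check instead of on the next frame.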

    Read the article

  • Sounds to describe the weather?

    - by Matthew
    I'm trying to think of sounds that will help convey the time of day and weather condition. I'm not even sure of all the weather conditions I would consider, and some are obvious. Like if it's raining, the sound of rain. But then I'm thinking, what about for a calm day? If it's morning time, I could do birds chirping or something. Night time could be an owl or something. What are some good combinations of sounds/weather/time to have a good effect?

    Read the article
