Search Results


  • Inputting cheat codes - hidden keyboard input

    - by Fibericon
    Okay, here's what I want to do: when the player is at the main menu, I want them to be able to type in cheat codes. That's the only place I want it to work. I don't want to give them a text box to type into. Rather, I want them to simply type in a word (let's say "cheat", just for simplicity's sake) that activates the cheat code. I only need to capture keyboard input when the window is in focus. What can I do to accomplish this?
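
    One common approach in XNA is to compare the current and previous KeyboardState each frame and append newly pressed letters to a small buffer. A minimal sketch, assuming this runs from the menu's Update; ActivateCheat() is a hypothetical hook:

        using Microsoft.Xna.Framework.Input;

        private KeyboardState previousKeyboard;
        private string typedBuffer = "";
        private const string CheatWord = "cheat"; // illustrative code word

        private void UpdateCheatInput(KeyboardState current)
        {
            for (Keys key = Keys.A; key <= Keys.Z; key++)
            {
                // Register a letter only on the frame it is first pressed.
                if (current.IsKeyDown(key) && previousKeyboard.IsKeyUp(key))
                    typedBuffer += key.ToString().ToLower();
            }

            // Keep only as many letters as the code word needs.
            if (typedBuffer.Length > CheatWord.Length)
                typedBuffer = typedBuffer.Substring(typedBuffer.Length - CheatWord.Length);

            if (typedBuffer == CheatWord)
            {
                ActivateCheat(); // hypothetical hook
                typedBuffer = "";
            }

            previousKeyboard = current;
        }

    Guarding the call with Game.IsActive limits input capture to when the window has focus.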

    Read the article

  • DirectX 10 Instancing Problem (objects cannot be seen)

    - by Riffraff
    Right now I'm trying to implement an area that is filled with vegetation. I have tried the mesh version, and right now I'm trying to implement the instancing version, but I cannot manage to make it work: I can't see any of the objects. I searched for buffer problems with FAILED() and D3D10_CREATE_DEVICE_DEBUG, but they didn't help me either. Right now I don't even know which part of my code to share to explain my problem.

    Read the article

  • Problem with Update(GameTime) Methods and Pause implementation

    - by Adam
    I have the pause function implemented, and it works correctly in that it dims the player screen and stops updating the gameplay. The problem is that GameTime continues to increase while it is paused, so my method that checks gameTime versus previousSpawnTime before spawning another enemy gets messed up, and if the game is paused too long it is noticeable that the next enemy draws far too early. Here is my code for the enemy update:

        private void UpdateEnemies(GameTime gameTime)
        {
            // Spawn a new enemy every 1.5 seconds
            if (gameTime.TotalGameTime - previousSpawnTime > enemySpawnTime)
            {
                previousSpawnTime = gameTime.TotalGameTime;

                // Add an Enemy
                AddEnemy();
            }
            ...

    I also have other methods that depend on gameTime. I've tried getting the total pause time and subtracting that from the total game time, but I can't seem to get it to work correctly, if that is the way I should go about solving this. If you need to see any other code, let me know. Thank you.
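
    One common fix is to accumulate how long the game has spent paused and subtract that from TotalGameTime before any timing checks. A sketch, assuming you already know when the pause starts and ends (totalPausedTime and pauseStartTime are illustrative fields):

        private TimeSpan totalPausedTime = TimeSpan.Zero;
        private TimeSpan pauseStartTime;

        private void OnPause(GameTime gameTime)
        {
            pauseStartTime = gameTime.TotalGameTime;
        }

        private void OnResume(GameTime gameTime)
        {
            // Everything spent paused is added to the running total.
            totalPausedTime += gameTime.TotalGameTime - pauseStartTime;
        }

        private void UpdateEnemies(GameTime gameTime)
        {
            // "Gameplay time" excludes time spent paused, so spawn spacing
            // survives a pause of any length.
            TimeSpan gameplayTime = gameTime.TotalGameTime - totalPausedTime;
            if (gameplayTime - previousSpawnTime > enemySpawnTime)
            {
                previousSpawnTime = gameplayTime;
                AddEnemy();
            }
        }

    Every other method that compares against TotalGameTime would use the same adjusted gameplayTime.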

    Read the article

  • How does a collision engine work?

    - by JXPheonix
    Original question: Click me How exactly does a collision engine work? This is an extremely broad question. What code keeps things bouncing against each other? What code makes the player walk into a wall instead of through the wall? How does the code constantly refresh the player's position and the objects' positions to keep gravity and collision working as they should? If you don't know what a collision engine is, basically it's generally used in platformer games to make the player actually hit walls and the like. There's the 2D type and the 3D type, but they all accomplish the same thing: collision. So, what keeps a collision engine ticking?
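
    At its core, a collision engine runs a loop each frame: move everything, test simple bounding volumes against each other, and push objects back out of anything they overlap before drawing, which is what makes walls feel solid. The classic 2D building block is the axis-aligned bounding-box (AABB) overlap test; a minimal sketch in C# (the Rect type is illustrative):

        // Two axis-aligned rectangles overlap only if they overlap on
        // both the x axis and the y axis.
        struct Rect
        {
            public float X, Y, Width, Height;
        }

        static bool Intersects(Rect a, Rect b)
        {
            return a.X < b.X + b.Width && b.X < a.X + a.Width
                && a.Y < b.Y + b.Height && b.Y < a.Y + a.Height;
        }

    Gravity is handled by the movement step (velocity integration), and the resolution step (moving the player back to the wall's surface) is what turns "detected a collision" into "can't walk through the wall".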

    Read the article

  • Create Adventure Game Scene/Room/Backdrop from Real Photo

    - by Lyuben
    Is there suitable software or a good tutorial for creating 2D rooms/scenery for adventure games from real photos? Is it possible to achieve good results by using photos, or will the hand-drawn style always be the best choice? Thank you!

    --- EDIT ---

    I want to clarify that I'm particularly interested in the art creation process, not in the environment in which to build games. I'm writing the game in Java for Android, but I don't think it matters. Also, I'm not trying to decide whether the game will have photo-realistic rooms or not; I want to achieve 2D pixelated, old-school style background scenes, and I wonder if this can be done from photos, because I cannot draw them myself. For example, can I shoot a scene with my camera and then make it look something like the image in the following link: PIXEL ART FOREST. I know that I cannot get the same quality as an absolutely hand-drawn pixel scene, but I'm looking for some decent technology/tutorial/software to make them somewhat similar.

    Read the article

  • First frame has a much longer delta time than other frames

    - by Kipras
    I had a problem where my AI moved an extreme distance on the first frame and then moved normally after that. I then figured out it was my delta. It's about 0.016 seconds (60 fps), but on the first frame it was about 19000 seconds, which is obviously impossible. Does anybody know what might be happening? Also, the delta later on likes to oscillate from 0.01 to 0.03, which is, again, crazy.

        long time = Sys.getTime() * 1000 / Sys.getTimerResolution();
        float delta = (time - lastFrame) / 1000f;
        lastFrame = time;
        return delta;

    That's the delta code.

    Read the article

  • Multi Threading - How to split the tasks

    - by Motig
    If I have a game engine with the basic 'game engine' components, what is the best way to 'split' the tasks with a multi-threaded approach? Assume I have the standard components of:

    - Rendering
    - Physics
    - Scripts
    - Networking

    and a quad-core CPU. I see two ways of multi-threading:

    Option A ('Vertical'): Using this approach I can allow one core for each component of the engine; e.g. one core for the rendering task, one for the physics, etc. Advantages:

    - I do not need to worry about thread-safety within each component
    - I can take advantage of special optimizations provided for single-threaded access (e.g. DirectX offers a flag that can be set to tell it that you will only use single-threading)

    Option B ('Horizontal'): Using this approach, each task may be split up into 1 <= n <= numCores threads and executed simultaneously, one after the other. Advantages:

    - Allows for work-sharing, i.e. each thread can take over work still remaining while the others are still processing
    - I can take advantage of libraries that are designed for multi-threading (i.e. ... DirectX)

    I think, in retrospect, I would pick Option B, but I wanted to hear you guys' thoughts on the matter.
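
    A minimal sketch of the 'horizontal' option in C#, using the Task Parallel Library to fan one stage's work out across all cores before the next stage begins (the per-body work here is a stand-in):

        using System.Threading.Tasks;

        class PhysicsStage
        {
            // Fan this stage's work across every available core, then block
            // until it finishes, so the next stage starts from a consistent state.
            public void Run(float[] bodyHeights, float dt)
            {
                Parallel.For(0, bodyHeights.Length, i =>
                {
                    bodyHeights[i] -= 9.81f * dt; // stand-in for real integration
                });
            }
        }

    Running the stages one after another like this also avoids most cross-component locking, which is the usual argument for the horizontal layout.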

    Read the article

  • DirectCompute information

    - by N0xus
    I've been trying to make use of the GPU as part of a project of mine. I've looked into both CUDA and OpenCL, but the lack of information showing you how to introduce these into a project is shocking. Even their dedicated forum groups are dead. So now I'm looking into DirectCompute. From what I can tell, it's simply a new type of shader file that makes use of HLSL. My question is this: does my program (aside from already being DirectX 10 / 11) need its structure changed? I mean, is it simply a case of creating the CS file, setting it up in the project like I would any other shader, and watching the magic happen? Any information on this would be appreciated.

    Read the article

  • Sub-systems in game engines

    - by Hillel
    So here's the problem: I'm writing my own engine library, and it works fine with stuff like menus and the actual game screen. The thing is, I can't really figure out how to integrate something like an intro or dialogue preceding certain levels into this system.

    Let's look at another example: say I have a game-specific engine which gets a Level object and runs it. Engine would have its own collision and physics system, all hard coded. Now, what if at some point in a level, I want the player to enter a mini-game with different rules? How do I morph the Engine class to support these sub-systems without having to deal with their code all the time (as in: if(regular game) ... else if(mini game) ...)?

    And what if I want an intro animation at the start of a level, and I want the player to be able to assume control of his character once the animation ends? Do I implement the animation into the Engine class itself? Or maybe I need to run another class, CutScene, and when it ends, it calls Engine and starts the level? What if I want to add a dialogue system, where at the start of each level there's a short dialogue and the player can't control his character, and once it ends, he can? Would I then run the dialogue code inside the Engine code?

    Maybe these sub-systems should all be scripted? I don't know anything about scripting; is it necessary for this kind of situation? Any help would be appreciated.
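
    One pattern that covers all of these cases is a stack of game states (often called screens), where the engine updates only whatever is on top: a level pushes a CutScene or Dialogue state, and gameplay resumes when that state pops itself. A minimal sketch; names like IGameState and StateStack are illustrative, not from any particular engine:

        using System.Collections.Generic;

        // Each self-contained mode (gameplay, cut-scene, dialogue, mini-game)
        // implements one interface, so the engine needs no mode-specific branches.
        interface IGameState
        {
            void Update(float dt);
            void Draw();
        }

        class StateStack
        {
            private readonly Stack<IGameState> states = new Stack<IGameState>();

            public void Push(IGameState state) { states.Push(state); }
            public void Pop() { states.Pop(); }

            // Only the top state updates, so a dialogue or cut-scene
            // automatically suspends the level underneath it.
            public void Update(float dt)
            {
                if (states.Count > 0)
                    states.Peek().Update(dt);
            }

            public void Draw()
            {
                if (states.Count > 0)
                    states.Peek().Draw();
            }
        }

    Scripting is optional here: it makes authoring cut-scenes and dialogues more convenient, but the state separation works just as well with plain compiled code.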

    Read the article

  • How do I interpolate air drag with a variable time step?

    - by Valentin Krummenacher
    So I have a little game which works with small steps; however, those steps vary in time, so for example I sometimes have 10 steps/second and then 20 steps/second. This changes automatically depending on how many steps the user's computer can take. To avoid inaccurate positioning of the game's player object I use y = v0*dt + g*dt^2/2 to determine my object's y-position, where dt is the time since the last step, v0 is the velocity of my object at the beginning of the step and g is the gravity. To calculate the velocity at the end of a step I use v = v0 + g*dt, which also gives me correct results, independent of whether I use 2 steps with a dt of, for example, 20ms or one step with a dt of 40ms.

    Now I would like to introduce air drag. For simplicity's sake I use a = k*v^2, where a is the air drag's acceleration (I am aware that it would usually result in a force, but since I assume 1kg for my object's mass the force is the same as the resulting acceleration), k is a constant (in this case I'm using 0.001) and v is the speed. Now, in an infinitely small time interval, a is k multiplied by the velocity in that interval squared. The problem is that v in the next time interval depends on the drag of the last, which again depends on the v of the last interval, and so on. In other words: if I use a = k*v^2 I get different results for my position/velocity when I use 2 steps of 20ms than when I use one step of 40ms.

    I used to have this problem for my position too, but adding +g*dt^2/2 to the formula for my position fixed the problem, since it takes into account that the position depends on the velocity, which changes slightly in every infinitely small time interval. Does something like that exist for air drag too? And no, I don't mean anything like "Adding air drag to a golf ball trajectory equation" or similar, for that kind of method only gives correct results when all my steps are the same.

    (I hope you can understand my intermediate English; it's not my main language, so I would like to apologize for all the silly mistakes I might have made in my question.)
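
    For the drag-only case along one axis there is an exact closed-form answer, so the result becomes independent of the step size, just like your g*dt^2/2 term. Integrating dv/dt = -k*v^2 gives v(t) = v0 / (1 + v0*k*t), and integrating once more gives the position change ln(1 + v0*k*t) / k. A sketch, assuming v0 >= 0 (the helper names are illustrative):

        using System;

        static class Drag
        {
            // Exact velocity after dt under dv/dt = -k*v^2.
            // Two 20ms steps give exactly the same result as one 40ms step.
            public static float VelocityAfter(float v0, float k, float dt)
            {
                return v0 / (1f + v0 * k * dt);
            }

            // Exact distance covered during the same step.
            public static float DistanceDuring(float v0, float k, float dt)
            {
                return (float)Math.Log(1f + v0 * k * dt) / k;
            }
        }

    When gravity and drag act at the same time, the exact solution gets considerably messier (hyperbolic functions for vertical motion), which is why many games instead run physics on a fixed internal timestep and interpolate rendering between steps.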

    Read the article

  • Trouble with Collada bones

    - by KyleT
    I have a Collada file with a rigged mesh. I've read the node tags in the library_visual_scenes tag, extracted the matrix for each node, and stored everything in a hierarchical bone structure. My Matrix container is "row major", so I'd store the first float of a matrix tag in the 1st row, 1st column, the second in the 1st row, 2nd column, etc. From what I gather this is the Bind Pose Matrix.

    After that I went through the tag and extracted the float array in the source tag of the skin tag of the controller for the mesh. I stored each matrix from this float array in its corresponding Bone as the Inverse Bind Matrix. I also extracted the bind-shape-matrix and stored it.

    Now I'd like to draw the skeleton with OpenGL to see if everything is working correctly before I go about skinning. I iterate once over my bones and multiply a bone's Bind Pose Matrix by its parent's and store that. After that I iterate again over the bones and multiply the result of the previous matrix multiplication by the Inverse Bind Matrix and then by the Bind Shape Matrix. The results look something like this:

        [ 0.2,  9.2,  5.8,  1.2]
        [ 4.6, -3.3, -0.2, -0.1]
        [-1.8,  0.2, -4.2, -3.9]
        [ 0,    0,    0,    1  ]

    I've had to go to various sources to get the little understanding of Collada I have, and books about 3D transform matrices can get pretty intense. I've hit a brick wall, and if you could please read through this and see if there is something I'm doing wrong, and how I'd go about getting an X,Y,Z to draw a point for each of these joints once I've calculated the final transform, I'd really appreciate it.
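
    For drawing the joints: with a row-major layout like yours (translation in the fourth column, bottom row [0, 0, 0, 1]), the joint's position is simply the fourth column of its final world transform. A small sketch, assuming a 4x4 array holds the computed transform (Vector3 stands in for whatever vector type you use):

        // For a row-major transform whose bottom row is [0,0,0,1], the
        // world-space joint position is the translation column.
        static Vector3 JointPosition(float[,] m)
        {
            return new Vector3(m[0, 3], m[1, 3], m[2, 3]);
        }

    In your example matrix that would be (1.2, -0.1, -3.9). If the skeleton then draws in a clearly wrong pose, the usual suspects are multiplication order (parent * child versus child * parent) and a transposed matrix convention somewhere in the import.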

    Read the article

  • Problem with SAT collision detection overlap checking code

    - by handyface
    I'm trying to implement a script that detects whether two rotated rectangles collide for my game, using SAT (Separating Axis Theorem). I used the method explained in the following article for my implementation in Google Dart: 2D Rotated Rectangle Collision.

    I tried to implement this code into my game. Basically, what I understood was that I have two rectangles, and these two rectangles can produce four axes (two per rectangle) by subtracting adjacent corner coordinates. Then all the corners from both rectangles need to be projected onto each axis, multiplying the coordinates of the projection by the axis coordinates (point.x*axis.x + point.y*axis.y) to make a scalar value, and checking whether the ranges of both rectangles' projections overlap. When all the axes have overlapping projections, there's a collision.

    First of all, I'm wondering whether my comprehension of this algorithm is correct. If so, I'd like to get some pointers on where my implementation (written in Dart, which is very readable for people comfortable with C-syntax) goes wrong. Thanks!

    EDIT: The question has been solved. For those interested in the working implementation: Click here
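
    That summary matches the standard SAT test for rectangles (two unique edge normals per rectangle, four axes total). For comparison with the Dart version, here is a sketch of the two core steps in C# (Vector2 just holds X/Y; the names are illustrative):

        // Project every corner onto the axis, keeping the min/max scalar.
        static void Project(Vector2[] corners, Vector2 axis,
                            out float min, out float max)
        {
            min = max = corners[0].X * axis.X + corners[0].Y * axis.Y;
            for (int i = 1; i < corners.Length; i++)
            {
                float p = corners[i].X * axis.X + corners[i].Y * axis.Y;
                if (p < min) min = p;
                if (p > max) max = p;
            }
        }

        // Ranges overlap unless one ends before the other begins; a single
        // non-overlapping axis means the rectangles do not collide.
        static bool Overlaps(float minA, float maxA, float minB, float maxB)
        {
            return minA <= maxB && minB <= maxA;
        }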

    Read the article

  • How to handle jumping up a slope in a runner game?

    - by you786
    In a 2D endless runner, what should happen when the player is running "too fast" up a slope and jumps? For example, in the "normal" case, if he is moving to the right slowly enough, he will jump upwards and land on the flat part of the surface at the top of the slope. [ASCII diagram: a slow jump clearing the top of the slope.] However, if he is moving too fast, the jump will have no effect, as his forward motion will bring him back in contact with the slope before he can get high enough to pass over it. When the speed is sufficiently high, there will effectively be no jump. [ASCII diagram: a fast jump landing back on the slope.] Are there any known ways to solve this issue? I know it's physically correct*, but are there techniques that other games use to overcome this in a reasonable manner? As a last resort I'll have to just remove all slopes that are too slanted.

    *If you constrain the player to never jumping backwards.

    Read the article

  • Unity: Render 2D textures on a 3D object's face

    - by www.Sillitoy.com
    I am not familiar with 3D graphics and I'd like to know the right way to render some 2D figures at different points on a wider face of a 3D object. My 3D object is just a cube representing a poker table. I have 2D PNGs for player placeholders and I'd like to render these figures on the 3D object where needed. An alternative solution would be to render the whole face with a big picture containing all the placeholder figures, but that would be a waste of memory and thus less efficient. What do you suggest?
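
    One common approach in Unity is to leave the table texture alone and render each placeholder as its own small textured quad, lifted slightly above the table's top face to avoid z-fighting. A sketch, assuming the seat positions are known in world space (all field names are illustrative):

        using UnityEngine;

        public class PlaceholderSpawner : MonoBehaviour
        {
            public Texture placeholderTexture; // the 2D PNG
            public Vector3[] seatPositions;    // points on the table's top face

            void Start()
            {
                foreach (Vector3 pos in seatPositions)
                {
                    GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
                    // Lift a little above the face so it never fights the table.
                    quad.transform.position = pos + Vector3.up * 0.01f;
                    // Rotate the quad to lie flat on the horizontal surface.
                    quad.transform.rotation = Quaternion.Euler(90f, 0f, 0f);
                    quad.GetComponent<Renderer>().material.mainTexture = placeholderTexture;
                }
            }
        }

    This also keeps memory proportional to one small placeholder texture instead of one large pre-composited face texture.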

    Read the article

  • Video playback in games - formats & decoding

    - by snake5
    What free / non-restrictive open-source solutions (not GPL) are available for decoding game videos? The requirements are simple:

    - a relatively easy to use C API
    - encoded files must be quite small
    - there must be an application that converts videos from any format (whatever codec is installed on Windows, or an equivalent amount of internally decoded formats)
    - decoding has to happen fairly quickly
    - bonus points go to file formats that are popular / actively supported and developed

    Read the article

  • Developing an AI opponent for Monopoly

    - by Bernhard Zürn
    I want to develop an AI opponent for the board game Monopoly, and I want to implement the whole game with Prolog (XPCE). The probability of a field on the board being hit can be computed with Markov chains. I already know some "best practices", like "after 50% of the playing time it does not make sense to buy your way out of jail, because in jail you collect rent on your fields but you don't have to pay rent on other fields as long as you stay in prison". The interesting question always is: buy a street field? Buy houses / hotels? How much? So I think I would have to compute some kind of future liquidity. Does anyone know how to pack that into an algorithm, or how to translate it to Prolog?
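
    One way to turn "future liquidity" into an algorithm is a plain expected-value estimate on top of the Markov-chain hit probabilities: a property's expected income per opponent turn is its hit probability times its rent, and a liquidity forecast is current cash plus net expected flow over some horizon. A sketch of the arithmetic in C# (the Prolog version would follow the same structure; every name here is illustrative):

        // Expected rent collected per opponent turn from owned fields.
        static double ExpectedIncomePerTurn(double[] hitProbability,
                                            double[] rent, bool[] owned)
        {
            double total = 0.0;
            for (int i = 0; i < hitProbability.Length; i++)
                if (owned[i])
                    total += hitProbability[i] * rent[i];
            return total;
        }

        // Rough liquidity forecast over the next 'turns' turns; expenses
        // are estimated the same way from opponents' properties.
        static double ForecastLiquidity(double cash, double incomePerTurn,
                                        double expensesPerTurn, int turns)
        {
            return cash + (incomePerTurn - expensesPerTurn) * turns;
        }

    A buy decision then becomes: does buying (while keeping enough cash to survive expected expenses) raise the forecast more than passing does?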

    Read the article

  • Having the same texture data in different ID3D11Texture2D

    - by bdmnd
    Sorry if this has been answered elsewhere - I'm rather new to DX. My question concerns conservation of resources, specifically textures in VRAM. I assume that upon returning from a call to CreateTexture2D, any texture data supplied has been copied elsewhere, likely to VRAM. Does DX11 have any facility for having multiple ID3D11Texture2D objects which point to the same data? This might at first seem silly, but imagine an ID3D11Texture2D which is an array of textures. In one material, an artist has chosen to blend three identically sized maps, saved on disk as A.dds, B.dds, and C.dds. Then imagine they have another material which also uses three maps, but this time A.dds, B.dds, and D.dds. The shader code knows the diffuse texture is a texture array, and also has the number of layers baked in (three in each case). I would essentially like to set up just two ID3D11Texture2D objects, one for each material, but I don't want to waste VRAM on two identical copies of A.dds and B.dds. I could use explicit texture arrays, of course, but this reduces the number of resources available to the shader and can complicate code somewhat more than would otherwise be needed.

    Read the article

  • Cloning a game and releasing the source

    - by Manux
    I'm not really aware of the legal issues surrounding game clones. I'm around halfway done making a clone, but it's not just the same gaming concepts: I'm literally using the original game's files (which I do not intend to distribute in any way) in my clone. My original intention was to add features to the game (Firefly Studios's first Stronghold) while still using the same art. Is it OK to distribute the source of my clone?

    Read the article

  • Why is specular light not working?

    - by nkint
    Hi, I'm on JOGL. This is my method for lighting:

        private void lights(GL gl) {
            float[] LightPos = {0.0f, 0.0f, -1.0f, 1.0f};
            float[] LightAmb = {0.2f, 0.2f, 0.2f, 1.0f};
            float[] LightDif = {0.6f, 0.6f, 0.6f, 1.0f};
            float[] LightSpc = {0.9f, 0.9f, 0.9f, 1.0f};
            gl.glLightfv(GL.GL_LIGHT1, GL.GL_POSITION, LightPos, 0);
            gl.glLightfv(GL.GL_LIGHT1, GL.GL_AMBIENT, LightAmb, 0);
            gl.glLightfv(GL.GL_LIGHT1, GL.GL_DIFFUSE, LightDif, 0);
            gl.glLightfv(GL.GL_LIGHT1, GL.GL_SPECULAR, LightSpc, 0);
            gl.glLightfv(GL.GL_LIGHT0, GL.GL_SPECULAR, LightSpc, 0);
            gl.glEnable(GL.GL_LIGHT0);
            gl.glEnable(GL.GL_LIGHT1);
            gl.glShadeModel(GL.GL_SMOOTH);
            gl.glEnable(GL.GL_LIGHTING);
        }

    I see my objects flat, with no specular light. Any ideas?

    P.S. To render my objects:

        gl.glColor3f(1f, 0f, 0f);
        gl.glBegin(GL.GL_TRIANGLES);
        for (Triangle t : tubeModel.getTriangles()) {
            gl.glVertex3f(t.v1.x, t.v1.y, t.v1.z);
            gl.glVertex3f(t.v2.x, t.v2.y, t.v2.z);
            gl.glVertex3f(t.v3.x, t.v3.y, t.v3.z);
        }
        gl.glEnd();

    Read the article

  • Setting uniform value of a vertex shader for different sprites in a SpriteBatch

    - by midasmax
    I'm using libGDX and currently have a simple shader that does a passthrough, except for randomly shifting the vertex positions. This shift is a vec2 uniform that I set within my code's render() loop. It's declared in my vertex shader as uniform vec2 u_random.

    I have two different kinds of Sprites -- let's call them SpriteA and SpriteB. Both are drawn within the same SpriteBatch's begin()/end() calls. Prior to drawing each sprite in my scene, I check the type of the sprite. If the sprite is an instance of SpriteA, I set the uniform u_random value to Vector2.Zero, meaning that I don't want any vertex changes for it. If the sprite is an instance of SpriteB, I set the uniform u_random to Vector2(MathUtils.random(), MathUtils.random()).

    The expected behavior was that all the SpriteA objects in my scene won't experience any jittering, while all SpriteB objects would be jittering about their positions. However, what I'm experiencing is that both SpriteA and SpriteB are jittering, leading me to believe that the u_random uniform is not actually being set per sprite, but is being applied to all sprites. What is the reason for this? And how can I fix this such that the vertex shader correctly accepts the uniform value set to affect each sprite individually?

    passthrough.vsh:

        attribute vec4 a_color;
        attribute vec3 a_position;
        attribute vec2 a_texCoord0;

        uniform mat4 u_projTrans;
        uniform vec2 u_random;

        varying vec4 v_color;
        varying vec2 v_texCoord;

        void main()
        {
            v_color = a_color;
            v_texCoord = a_texCoord0;
            vec3 temp_position = vec3(
                a_position.x + u_random.x,
                a_position.y + u_random.y,
                a_position.z);
            gl_Position = u_projTrans * vec4(temp_position, 1.0);
        }

    Java code:

        this.batch.begin();
        this.batch.setShader(shader);
        for (Sprite sprite : sprites) {
            Vector2 v = Vector2.Zero;
            if (sprite instanceof SpriteB) {
                v.x = MathUtils.random(-1, 1);
                v.y = MathUtils.random(-1, 1);
            }
            shader.setUniformf("u_random", v);
            sprite.draw(this.batch);
        }
        this.batch.end();

    Read the article

  • What are some ways to texture map a terrain?

    - by ApocKalipsS
    I'm working with XNA on a 3D game, and I'm trying to have a proper and nice environment. I followed a tutorial to create a terrain from a heightmap; to texture it, I just apply a grass texture and tile it a number of times. What I want is really realistic texturing that is also generated automatically (for example, if I want to use Perlin noise to generate a terrain and then texture it). I have already learned about multi-texturing and loading a map file with different colors for different textures, but I don't think this is very efficient; for instance, for cliffs or very steep areas it will tile a texture badly, as it's a view from the top. (Also, I don't know how I'd draw roads or dirt paths with that.) I'm looking for an efficient solution to realistically texture-map procedurally generated terrain.
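
    One common automatic scheme is slope- and height-based splatting: derive per-vertex blend weights from the terrain itself (steep areas get rock, high areas get snow, the rest grass) and blend the texture layers by those weights in the shader. A sketch of the weight computation in C# (all thresholds here are illustrative and would be tuned per terrain):

        // Derive blend weights from geometry: the steeper the surface,
        // the more rock shows through; high flat areas fade to snow.
        static void SplatWeights(Vector3 normal, float height,
                                 out float grass, out float rock, out float snow)
        {
            float slope = 1f - normal.Y; // 0 = flat, 1 = vertical
            rock = MathHelper.Clamp(slope * 4f, 0f, 1f);
            snow = MathHelper.Clamp((height - 200f) / 50f, 0f, 1f) * (1f - rock);
            grass = MathHelper.Clamp(1f - rock - snow, 0f, 1f);
        }

    For the cliffs themselves, where top-down UVs stretch badly, triplanar mapping (sampling the texture along all three axes and blending by the normal) is the usual fix, and roads or dirt paths are typically painted into an extra splat channel rather than derived automatically.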

    Read the article

  • Help with calculation to steer ship in 3d space

    - by Aaron Anodide
    I'm a beginner using XNA to try and make a 3D Asteroids game. I'm really close to having my space ship drive around as if it had thrusters for pitch and yaw. The problem is I can't quite figure out how to translate the rotations; for instance, when I pitch forward 45 degrees and then start to turn, there should be rotation applied to all three directions to get the "diagonal yaw" - right? I thought I had it right with the calculations below, but they cause a partly pitched-forward ship to wobble instead of turn. Here are my current (almost working) calculations for the rotation acceleration:

        float accel = .75f;

        // Thrust +Y / Forward
        if (currentKeyboardState.IsKeyDown(Keys.I))
        {
            this.ship.AccelerationY += (float)Math.Cos(this.ship.RotationZ) * accel;
            this.ship.AccelerationX += (float)Math.Sin(this.ship.RotationZ) * -accel;
            this.ship.AccelerationZ += (float)Math.Sin(this.ship.RotationX) * accel;
        }

        // Rotation +Z / Yaw
        if (currentKeyboardState.IsKeyDown(Keys.J))
        {
            this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * accel;
            this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * accel;
            this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * accel;
        }

        // Rotation -Z / Yaw
        if (currentKeyboardState.IsKeyDown(Keys.K))
        {
            this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * -accel;
            this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * -accel;
            this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * -accel;
        }

        // Rotation +X / Pitch
        if (currentKeyboardState.IsKeyDown(Keys.F))
        {
            this.ship.RotationAccelerationX += accel;
        }

        // Rotation -X / Pitch
        if (currentKeyboardState.IsKeyDown(Keys.D))
        {
            this.ship.RotationAccelerationX -= accel;
        }

    I'm combining that with drawing code that applies the rotation to the model:

        public void Draw(Matrix world, Matrix view, Matrix projection, TimeSpan elsapsedTime)
        {
            float seconds = (float)elsapsedTime.TotalSeconds;

            // update velocity based on acceleration
            this.VelocityX += this.AccelerationX * seconds;
            this.VelocityY += this.AccelerationY * seconds;
            this.VelocityZ += this.AccelerationZ * seconds;

            // update position based on velocity
            this.PositionX += this.VelocityX * seconds;
            this.PositionY += this.VelocityY * seconds;
            this.PositionZ += this.VelocityZ * seconds;

            // update rotational velocity based on rotational acceleration
            this.RotationVelocityX += this.RotationAccelerationX * seconds;
            this.RotationVelocityY += this.RotationAccelerationY * seconds;
            this.RotationVelocityZ += this.RotationAccelerationZ * seconds;

            // update rotation based on rotational velocity
            this.RotationX += this.RotationVelocityX * seconds;
            this.RotationY += this.RotationVelocityY * seconds;
            this.RotationZ += this.RotationVelocityZ * seconds;

            Matrix translation = Matrix.CreateTranslation(PositionX, PositionY, PositionZ);
            Matrix rotation = Matrix.CreateRotationX(RotationX)
                * Matrix.CreateRotationY(RotationY)
                * Matrix.CreateRotationZ(RotationZ);

            model.Root.Transform = rotation * translation * world;
            model.CopyAbsoluteBoneTransformsTo(boneTransforms);

            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.World = boneTransforms[mesh.ParentBone.Index];
                    effect.View = view;
                    effect.Projection = projection;
                    effect.EnableDefaultLighting();
                }
                mesh.Draw();
            }
        }
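
    The wobble is characteristic of driving pitch and yaw through world-axis Euler angles. A common fix is to accumulate the ship's orientation in a single matrix and apply each frame's pitch and yaw about the ship's own current axes. A sketch in XNA terms, assuming an orientation matrix replaces the RotationX/Y/Z fields (names and rates are illustrative):

        // Accumulated orientation; starts as Matrix.Identity.
        Matrix orientation = Matrix.Identity;

        void UpdateOrientation(float yawInput, float pitchInput, float seconds)
        {
            float turnRate = 1.5f; // radians per second

            // Rotate about the ship's *current* up and right vectors, so a
            // pitched-forward ship yaws around its own axis, not the world's.
            orientation *= Matrix.CreateFromAxisAngle(orientation.Up,
                yawInput * turnRate * seconds);
            orientation *= Matrix.CreateFromAxisAngle(orientation.Right,
                pitchInput * turnRate * seconds);
        }

        // In Draw, the three CreateRotationX/Y/Z calls collapse to:
        // Matrix rotation = orientation;

    Because floating-point error accumulates, it's worth re-orthonormalizing the matrix occasionally (or storing the orientation as a quaternion and normalizing it), but the local-axis idea is the same.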

    Read the article

  • Problems loading Hilva tutorials

    - by Beska
    I'm a newcomer to XNA, and I'm evaluating some libraries. The Hilva Graphics Engine looks interesting, and I'm trying to run their tutorials. However, all of them give me errors. For example, if I download the ParallaxMappingSample demo and try to build it, I get:

        Error 1  Error loading pipeline assembly "C:\Users\Me\Desktop\ParallaxMappingSample\Hilva.Content.dll".  ParallaxMappingSample

    I get similar errors for all of the samples. Unfortunately, this error isn't very enlightening. I can see Hilva.Content.dll in the appropriate directory. I tried removing and re-adding the reference from the content project, but I get the same error. I'm not sure it's relevant, but I'm on Windows 7, using Microsoft Visual Studio 2010 and XNA 4.0. Is there an easy (or difficult) solution?

    EDIT: If you happen to try this, even if you don't have a solution, let me know about it in a comment. Whether it works for you, or if you get the same problem... either result would be something that might let me know if it's just a problem with the tutorial, or if it's on my end.

    Read the article

  • How to get a collision callback between two specific objects using Bullet physics?

    - by sebap123
    I have got a problem implementing a collision callback into my project. I would like to have detection between two specific objects. I have got normal collision, but I want one object to stop, change color, or whatever when it collides with another. I wrote this code from the Bullet wiki:

        int numManifolds = dynamicsWorld->getDispatcher()->getNumManifolds();
        for (int i = 0; i < numManifolds; i++)
        {
            btPersistentManifold* contactManifold = dynamicsWorld->getDispatcher()->getManifoldByIndexInternal(i);
            btCollisionObject* obA = static_cast<btCollisionObject*>(contactManifold->getBody0());
            btCollisionObject* obB = static_cast<btCollisionObject*>(contactManifold->getBody1());

            int numContacts = contactManifold->getNumContacts();
            for (int j = 0; j < numContacts; j++)
            {
                btManifoldPoint& pt = contactManifold->getContactPoint(j);
                if (pt.getDistance() < 0.f)
                {
                    const btVector3& ptA = pt.getPositionWorldOnA();
                    const btVector3& ptB = pt.getPositionWorldOnB();
                    const btVector3& normalOnB = pt.m_normalWorldOnB;
                    bool x = (ContactProcessedCallback)(pt, fallRigidBody, earthRigidBody);
                    if (x)
                        printf("collision\n");
                }
            }
        }

    where fallRigidBody is a dynamic body (a sphere) and earthRigidBody is a static body (a StaticPlaneShape), and the sphere isn't touching earthRigidBody all the time. I have got other objects that are colliding with the sphere and they work fine. But the program detects a collision all the time, whether or not the objects are actually colliding. I have also added, after the rigid body declarations:

        earthRigidBody->setCollisionFlags(earthRigidBody->getCollisionFlags() | btCollisionObject::CF_CUSTOM_MATERIAL_CALLBACK);
        fallRigidBody->setCollisionFlags(fallRigidBody->getCollisionFlags() | btCollisionObject::CF_CUSTOM_MATERIAL_CALLBACK);

    So can someone tell me what I am doing wrong? Maybe it is something simple?

    Read the article

  • Cheap ways to do scaling ops in shader?

    - by Nick Wiggill
    I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete / grid-based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per-vertex position attribute cost from 12 bytes to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain.

    Currently in my code, one unit in GLSL shaders is equal to 1m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre. So I see these potential solutions to scale the positional data correctly once it arrives at the vertex shader stage:

    - Use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU-side code, and do a division by 10 in the vertex shader to scale incoming decimetre values back to metres. Arbitrary (non-PoT) divisions tend to be slow, however.
    - Use (some-power-of-two):1 scaling (e.g. 8:1), which enables the use of a bitshift (e.g. val >> 3) to do the division... not sure how performant this is in shaders, though. Not as intuitive to read values, but possibly quite a bit faster than dividing by a non-PoT value.
    - Use a texture as a lookup table. I've heard that this is really fast.

    Or whatever solutions others can offer to achieve the same results -- minimal vertex data with sensible scaling.

    Read the article
