Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.


  • Particle trajectory smoothing: where to do the simulation?

    - by nkint
    I have a particle system in which particles move toward a target, and the new targets are received via the network. The list of new targets consists of noisy coordinates of a moving target stored on the server, which I want to smooth on the client. For the smoothing and the particles I wrote a simple particle engine with a standard Euler integration model. So, my pseudocode is something like this:

        # pseudo code
        class Particle:
            def update():
                # do euler motion model integration:
                # if the distance to the target is more than a limit
                # add a new force to the acceleration, seeking the target,
                # then add the acceleration to the velocity
                # and the velocity to the position
                positionHistory.push_back(position)
                if history.length > historySize:
                    history.pop_front()

        class ParticleEngine:
            particleById = dict()  # an associative array where the keys are the ids
                                   # and particle instances are stored as values

            # this method is called each time a new tcp packet is received and parsed
            def setNetTarget(int id, Vec2D new_target):
                particleById[id].setNewTarget(new_target)

            # this method is called each new frame
            def draw():
                for p in particleById.values:
                    p.update()
                    beginVertex(LINE_STRIP)
                    for v in p.positionHistory:
                        vertex(v.x, v.y)
                    endVertex()

    The new targets that arrive are noisy, but setting some acceleration/velocity parameters lets the particles have smoothed trajectories. However, if a particle's trajectory is a circle, after a while the particle position converges to the center (a normal behaviour of the Euler integration model). So I decided to change the simulation and use some other interpolation (spline?) or smoothing method (Kalman filter?) between the targets. Something like:

        switch(INTERPOLATION_MODEL):
            case EULER_MOTION: ...
            case HERMITE_INTERPOLATION: ...
            case SPLINE_INTERPOLATION: ...
            case KALMAN_FILTER_SMOOTHING: ...

    Now my question: where should I write the motion simulation / trajectory interpolation? In the Particle, so that I end up with Particle subclasses like ParticleEuler, ParticleSpline, ParticleKalman, etc.? Or in the particle engine?
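
    A minimal C++ sketch (my own, not from the original post) of a third option: factor the motion/smoothing model out into a strategy object that each particle owns, so switching from Euler to a spline or a Kalman smoother swaps a component rather than the Particle class. The names (MotionModel, EulerMotion, advance) are hypothetical.

        #include <deque>
        #include <memory>

        struct Vec2D { float x, y; };

        // Hypothetical strategy interface: each motion/smoothing model implements it.
        struct MotionModel {
            virtual ~MotionModel() = default;
            virtual Vec2D advance(const Vec2D& position, const Vec2D& target, float dt) = 0;
        };

        struct EulerMotion : MotionModel {
            Vec2D velocity{0.0f, 0.0f};
            Vec2D advance(const Vec2D& position, const Vec2D& target, float dt) override {
                // seek the target: accelerate toward it, integrate velocity then position
                Vec2D accel{target.x - position.x, target.y - position.y};
                velocity.x += accel.x * dt;
                velocity.y += accel.y * dt;
                return {position.x + velocity.x * dt, position.y + velocity.y * dt};
            }
        };

        // The particle stays a plain data holder; the engine never needs to know
        // which motion model is in use.
        struct Particle {
            Vec2D position{0.0f, 0.0f};
            Vec2D target{0.0f, 0.0f};
            std::deque<Vec2D> positionHistory;
            std::unique_ptr<MotionModel> model = std::make_unique<EulerMotion>();

            void update(float dt) {
                position = model->advance(position, target, dt);
                positionHistory.push_back(position);
            }
        };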

    Read the article

  • Get collision details from Rectangle.Intersects()

    - by Daniel Ribeiro
    I have a Breakout game in which, at some point, I detect the collision between the ball and the paddle with something like this:

        // Ball class
        rectangle.Intersects(paddle.Rectangle);

    Is there any way I can get the exact coordinates of the collision, or any details about it, with the current XNA API? I thought of doing some basic calculations, such as comparing the exact coordinates of each object at the moment of the collision. It would look something like this:

        // Ball class
        if ((rectangle.X - paddle.Rectangle.X) < (paddle.Rectangle.Width / 2))
            // Collision happened on the left side
        else
            // Collision happened on the right side

    But I'm not sure this is the correct way to do it. Do you guys have any tips, maybe an engine I could use to achieve that? Or good coding practices using this method?
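
    One low-tech option (a sketch of my own, not the XNA API) is to compute the overlap rectangle of the two axis-aligned rectangles yourself; its position and size describe where the contact happened, and comparing centres tells you which side was hit. The Rect struct and helper names below are assumptions for illustration.

        #include <algorithm>

        // Hypothetical axis-aligned rectangle standing in for XNA's Rectangle.
        struct Rect { float x, y, width, height; };

        // Overlap region of two rectangles; width/height <= 0 means no intersection.
        Rect Intersection(const Rect& a, const Rect& b) {
            float left   = std::max(a.x, b.x);
            float top    = std::max(a.y, b.y);
            float right  = std::min(a.x + a.width,  b.x + b.width);
            float bottom = std::min(a.y + a.height, b.y + b.height);
            return { left, top, right - left, bottom - top };
        }

        // Comparing centres is a bit more robust than comparing the raw X values.
        bool HitLeftHalf(const Rect& ball, const Rect& paddle) {
            float ballCentre   = ball.x + ball.width * 0.5f;
            float paddleCentre = paddle.x + paddle.width * 0.5f;
            return ballCentre < paddleCentre;
        }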

    Read the article

  • Good practices in screen states management?

    - by DevilWithin
    I wonder what the best ways are to organize different screens in a game. I am thinking of it like this: inherit from a base State class and override its update and render methods to handle the current screen. Then, on certain events, a StateManager can activate another screen State, and the game screen changes since only the current State is rendered. On the activation of a new screen, effects like fading could be added, and the same goes for its deactivation. This way a flow of screens could be made by saying "when A ends, B starts", allowing for complex animations etc. Thoughts?
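
    A minimal C++ sketch of the scheme described above (class and method names are illustrative, not from the original post): a base state with enter/exit hooks where fade effects can live, and a manager that only updates and renders the active state.

        #include <memory>

        class ScreenState {
        public:
            virtual ~ScreenState() = default;
            virtual void onEnter() {}              // start fade-in effects here
            virtual void onExit()  {}              // start fade-out effects here
            virtual void update(float dt) = 0;
            virtual void render() = 0;
        };

        class StateManager {
        public:
            void setState(std::unique_ptr<ScreenState> next) {
                if (current) current->onExit();
                current = std::move(next);
                if (current) current->onEnter();
            }
            void update(float dt) { if (current) current->update(dt); }
            void render()         { if (current) current->render(); }
        private:
            std::unique_ptr<ScreenState> current;  // only the active screen runs
        };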

    Read the article

  • Bubble shooter search algorithm

    - by Fofole
    So I have an NxM matrix. At a given position (for example [2][5]) I have a value which represents a color. If there is nothing at that point the value is -1. What I need to do, after I add a new point, is to check all of its neighbours with the same color value and, if there are more than 2, set them all to -1. If that doesn't make sense: what I'm trying to do is an algorithm to destroy all the same-colored bubbles on my screen, where the bubbles are stored in a matrix in which -1 means no bubble and {0,1,2,...} means there is a bubble of a specific color. This is what I tried, and it fails:

        public class Testing {
            static private int[][] gameMatrix = {
                {3, 3, 4, 1, 1, 2, 2, 2, 0, 0},
                {1, 4, 1, 4, 2, 2, 1, 3, 0, 0},
                {2, 2, 4, 4, 3, 1, 2, 4, 0, 0},
                {0, 1, 2, 3, 4, 1, 0, 0, 0, 0},
                {0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
                {0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
            };
            static int Rows = 6;
            static int Cols = 10;
            static int count;
            static boolean[][] visited = new boolean[15][15];
            static int NOCOLOR = -1;
            static int color = 1;

            public static void dfs(int r, int c, int color, boolean set) {
                for (int dr = -1; dr <= 1; dr++)
                    for (int dc = -1; dc <= 1; dc++)
                        if (!(dr == 0 && dc == 0) && ok(r + dr, c + dc)) {
                            int nr = r + dr;
                            int nc = c + dc;
                            // if it is the same color and we haven't visited this location before
                            if (gameMatrix[nr][nc] == color && !visited[nr][nc]) {
                                visited[nr][nc] = true;
                                count++;
                                dfs(nr, nc, color, set);
                                if (set) {
                                    gameMatrix[nr][nc] = NOCOLOR;
                                }
                            }
                        }
            }

            static boolean ok(int r, int c) {
                return r >= 0 && r < Rows && c >= 0 && c < Cols;
            }

            static void showMatrix() {
                for (int i = 0; i < gameMatrix.length; i++) {
                    System.out.print("[");
                    for (int j = 0; j < gameMatrix[0].length; j++) {
                        System.out.print(" " + gameMatrix[i][j]);
                    }
                    System.out.println(" ]");
                }
                System.out.println();
            }

            static void putValue(int value, int row, int col) {
                gameMatrix[row][col] = value;
            }

            public static void main(String[] args) {
                System.out.println("Initial Matrix:");
                putValue(1, 4, 1);
                putValue(1, 5, 1);
                showMatrix();
                for (int n = 0; n < 15; n++)
                    for (int m = 0; m < 15; m++)
                        visited[n][m] = false;
                // reset count
                count = 0;
                // dfs(bubbles.get(i).getRow(), bubbles.get(i).getCol(), color, false);
                // get the contiguous count
                dfs(5, 1, color, false);
                // if there are more than 2 set the color to NOCOLOR
                for (int n = 0; n < 15; n++)
                    for (int m = 0; m < 15; m++)
                        visited[n][m] = false;
                if (count > 2) {
                    // dfs(bubbles.get(i).getRow(), bubbles.get(i).getCol(), color, true);
                    dfs(5, 1, color, true);
                }
                System.out.println("Matrix after dfs:");
                showMatrix();
            }
        }

    Read the article

  • How to choose how to store data?

    - by Eldros
    Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime. - Chinese proverb. I could ask what kind of data storage I should use for my current project, but I want to learn to fish, so I don't need to ask for a fish each time I begin a new project. Until now I have used two methods to store data in my non-game projects: XML files and relational databases. I know there are also other kinds of databases, of the NoSQL kind. However, I wouldn't know whether there are more choices available to me, or how to choose in the first place, aside from arbitrarily picking one. So the question is the following: how should I choose the kind of data storage for a game project? I would be interested in the following criteria when choosing:
    - The size of the project.
    - The platform targeted by the game.
    - The complexity of the data structure.
    - (Added) Portability of data among many projects.
    - (Added) How often the data has to be accessed.
    - (Added) Multiple types of data in the same application.
    - Any other point you think is of interest when deciding what to use.
    EDIT: I know about "Would it be better to use XML/JSON/Text or a database to store game content?", but I thought it didn't address exactly my point. If I am wrong, I would gladly be shown the error in my ways.

    Read the article

  • How to get this wavefront .obj data onto the frustum?

    - by NoobScratcher
    I've finally figured out how to get the data from a .obj file and store the vertex positions x, y, z in a structure called Point, whose members x, y, z are of type float. I want to know how to get this data onto the screen. Here is my attempt at doing so:

        // make a file object and store list and the index of that list in a C string
        ifstream file (list[index].c_str());
        std::vector<int> faces;
        std::vector<Point> points;
        points.push_back(Point());
        Point p;
        int face[4];
        while (!file.eof())
        {
            char modelbuffer[10000];
            // Get lines and store it in line string
            file.getline(modelbuffer, 10000);
            switch (modelbuffer[0])
            {
                case 'v':
                    sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                    points.push_back(p);
                    cout << "Getting Vertex Positions" << endl;
                    cout << "v" << p.x << endl;
                    cout << "v" << p.y << endl;
                    cout << "v" << p.z << endl;
                    break;
                case 'f':
                    sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3);
                    cout << face[0] << endl;
                    cout << face[1] << endl;
                    cout << face[2] << endl;
                    cout << face[3] << endl;
                    faces.push_back(face[0]);
                    faces.push_back(face[1]);
                    faces.push_back(face[2]);
                    faces.push_back(face[3]);
            }
            GLuint vertexbuffer;
            glGenBuffers(1, &vertexbuffer);
            glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
            glBufferData(GL_ARRAY_BUFFER, points.size(), points.data(), GL_STATIC_DRAW);
            //glBufferData(GL_ARRAY_BUFFER, sizeof(points), &(points[0]), GL_STATIC_DRAW);
            glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
            glVertexPointer(3, GL_FLOAT, sizeof(points), points.data());
            glIndexPointer(GL_DOUBLE, 0, faces.data());
            glDrawArrays(GL_QUADS, 0, points.size());
            glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());
        }

    As you can see I've clearly failed the end part, but I really don't know why it's not rendering the data into the frustum. Does anyone have a solution for this?
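
    A few things stand out (my observations, not the original author's): the buffer setup and draw calls run inside the parsing loop, and glBufferData is given a size in elements rather than bytes (it should be points.size() * sizeof(Point)). Below is a minimal sketch of one plausible way to draw the parsed data after the whole file has been read, using client-side vertex arrays; because the loader pushes a dummy Point() at index 0, the 1-based .obj face indices already line up with the vector.

        // Sketch only: parse the whole file first, then draw once per frame.
        // Assumes Point is { float x, y, z; } and that 'faces' holds the 1-based
        // quad indices parsed above.
        void drawModel(const std::vector<Point>& points, const std::vector<int>& faces)
        {
            std::vector<unsigned int> indices(faces.begin(), faces.end());

            glEnableClientState(GL_VERTEX_ARRAY);
            // Stride is the size of one vertex (sizeof(Point)), not sizeof(points).
            glVertexPointer(3, GL_FLOAT, sizeof(Point), points.data());
            glDrawElements(GL_QUADS, (GLsizei)indices.size(), GL_UNSIGNED_INT, indices.data());
            glDisableClientState(GL_VERTEX_ARRAY);
        }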

    Read the article

  • Permanently sync a Wiimote with a computer

    - by Adam Geisweit
    I have tried to look up many ways to sync my Wiimotes with my computer so that I can program games with them, but every time it only syncs them temporarily, or if it says it can permanently sync them, it doesn't actually do it. It gets tiresome having to reconnect them every time I want to save battery life. How can I sync my Wiimote with my computer so that if I turn it off, I can just hit any button and it will automatically sync up again?

    Read the article

  • How can data-oriented programming be applied to a GUI system?

    - by Miro
    I've just learned the basics of data-oriented design, but I'm not very familiar with it yet. I've also read Pitfalls of Object Oriented Programming (GCAP 09). It seems that data-oriented programming is a much better idea for games than OOP. I'm currently creating my own GUI system and it's completely OOP. I'm wondering whether data-oriented design is applicable to structured things like a GUI. The main problem I see is that every widget type has different data, so I can hardly group them into arrays. Also, every widget type renders differently, so I would still need to call virtual functions.
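
    One hypothetical middle ground (a sketch of my own, not from the post) is to group widgets by type rather than by object: each widget type keeps its instances in a contiguous array and is updated in one batch, so the per-widget virtual call disappears and only one dispatch per type remains. All names below are illustrative.

        #include <vector>

        struct ButtonData { float x, y, w, h; bool pressed; };
        struct LabelData  { float x, y; const char* text; };

        struct GuiData {
            std::vector<ButtonData> buttons;   // all buttons, laid out contiguously
            std::vector<LabelData>  labels;    // all labels, likewise
        };

        void updateButtons(std::vector<ButtonData>& buttons, float mouseX, float mouseY) {
            for (ButtonData& b : buttons) {    // one tight loop over homogeneous data
                b.pressed = mouseX >= b.x && mouseX <= b.x + b.w &&
                            mouseY >= b.y && mouseY <= b.y + b.h;
            }
        }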

    Read the article

  • Geometry shader: points + triangles

    - by CmasterG
    I have different shaders, and for each shader an instance of a ShaderClass class which initializes the shader, renders with it, etc. Most of the shader classes have no geometry shader, but in one shader class I also use a geometry shader. The problem is that when I render one object with the shader class that uses the geometry shader, all other objects are rendered with the same geometry that I create in the geometry shader. Can you help me? Is it possible that I have to use a geometry shader for every object once I use one for one object? I use DirectX 11 with C++.
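
    One thing worth checking (a general D3D11 point, not something stated in the post): shader stages stay bound on the device context until they are replaced, so a shader class that has no geometry shader should explicitly clear that stage before drawing. A minimal sketch, where "context" is assumed to be the ID3D11DeviceContext* used for rendering:

        // Bind the stages this shader class actually uses...
        context->VSSetShader(vertexShader, nullptr, 0);
        context->PSSetShader(pixelShader, nullptr, 0);
        // ...and clear the geometry-shader stage so a GS bound by another
        // shader class does not leak into this draw call.
        context->GSSetShader(nullptr, nullptr, 0);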

    Read the article

  • Rendering transparent textures in DirectX

    - by Vibhore Tanwer
    I am working on a DirectX application with WPF, and I am facing a problem with videos and images that contain transparent pixels. I have to draw a color in the background and then a video/image over it. What I expect is that the background color should be visible while the video plays, with only the non-transparent pixels drawn over it, but what I get is a black background behind the video. I am using the following settings on the device to achieve alpha blending:

        device.RenderState.SourceBlend = Blend.SourceAlpha;
        device.RenderState.DestinationBlend = Blend.InvSourceAlpha;
        device.RenderState.AlphaBlendEnable = true;

    What am I missing here? What is the best approach to handling transparent videos? Any help will be of great value to me.

    Read the article

  • How can I solve this SAT direct corner intersection edge case?

    - by ssb
    I have a working SAT implementation, but I am running into a problem where direct collisions at a corner do not work for tiled surfaces. That is, it clips on the surface when going in a certain direction because it gets hung up on one of the tiles, and so, for example, if I walk across a floor while holding both down and left, the player will stop when meeting the next shape because the player will be colliding with the right side rather than with the top of the floor tile. This illustration shows what I mean: The top block will translate right first and then up. I have checked here and here which are helpful, but this does not address what I should do in a situation where I don't have a tile-based world. My usage of the term "tile" before isn't really accurate since what I'm doing here is manually placing square obstacles next to each other, not assigning them spots on a grid. What can I do to fix this?

    Read the article

  • OpenGL depth texture wrong

    - by CoffeeandCode
    I have been writing a game engine for a while now and have decided to reconstruct my positions from depth... but how I read the depth seems to be wrong :/ What is wrong in my rendering? How I init my depth texture in the FBO:

        gl::BindTexture(gl::TEXTURE_2D, this->textures[0]); // Depth
        gl::TexImage2D(
            gl::TEXTURE_2D,
            0,
            gl::DEPTH32F_STENCIL8,
            width,
            height,
            0,
            gl::DEPTH_STENCIL,
            gl::FLOAT_32_UNSIGNED_INT_24_8_REV,
            nullptr
        );
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MAG_FILTER, gl::NEAREST);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MIN_FILTER, gl::NEAREST);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_S, gl::CLAMP_TO_EDGE);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_T, gl::CLAMP_TO_EDGE);
        gl::FramebufferTexture2D(
            gl::FRAMEBUFFER,
            gl::DEPTH_STENCIL_ATTACHMENT,
            gl::TEXTURE_2D,
            this->textures[0],
            0
        );

    Linear depth readings in my shaders. Vertex:

        #version 150

        layout(location = 0) in vec3 position;
        layout(location = 1) in vec2 uv;

        out vec2 uv_f;

        void main(){
            uv_f = uv;
            gl_Position = vec4(position, 1.0);
        }

    Fragment (where the issue probably is):

        #version 150

        uniform sampler2D depth_texture;

        in vec2 uv_f;
        out vec4 Screen;

        void main(){
            float n = 0.00001;
            float f = 100.0;
            float z = texture(depth_texture, uv_f).x;
            float linear_depth = (n * z) / (f - z * (f - n));
            Screen = vec4(linear_depth); // It ISN'T because I don't separate alpha
        }

    So gamedev.stackexchange, what's wrong with my rendering/GLSL?
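
    For reference (my addition, not from the post): with a standard perspective projection, a depth-buffer value z in [0, 1] is usually linearized as

        z_eye = 2 * n * f / (f + n - z_ndc * (f - n)),   with z_ndc = 2 * z - 1,

    which simplifies, for z in [0, 1], to

        z_eye = n * f / (f - z * (f - n)).

    The fragment shader above has n * z in the numerator instead of n * f, which is one place to double-check; a near plane of 0.00001 is also worth revisiting, since such a tiny near value concentrates almost all depth precision right at the camera.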

    Read the article

  • Terrain square loading

    - by AndroidXTr3meN
    Games like Skyrim, Morrowind, and more use quads or squares to divide the terrain, if I'm correct. The player is always at #5:

        1 | 2 | 3
        4 | 5 | 6
        7 | 8 | 9

    So whenever you cross a border you unload and load the new "areas". But if the user goes just over the edge and then a second later goes back to the previous area, a lot of unnecessary loading and unloading is done. Is there a general approach to this? I don't think games like Skyrim have this issue. Cheers!
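
    One common way to avoid that thrashing (my sketch, not from the post) is hysteresis: only re-centre the loaded 3x3 block once the player has moved some margin past the border, so stepping back and forth across the exact edge does not trigger reloads. All names and values below are illustrative.

        // Sketch only: re-centre the loaded 3x3 block of terrain squares once the
        // player has moved 'margin' units past the centre square's edge.
        struct ChunkGrid {
            float squareSize = 256.0f;   // world units per terrain square (assumption)
            float margin     = 16.0f;    // hysteresis band around each border
            int centreX = 0, centreY = 0;

            void update(float playerX, float playerY) {
                float localX = playerX - centreX * squareSize;   // position inside centre square
                float localY = playerY - centreY * squareSize;
                if      (localX > squareSize + margin) { ++centreX; reload(); }
                else if (localX < -margin)             { --centreX; reload(); }
                if      (localY > squareSize + margin) { ++centreY; reload(); }
                else if (localY < -margin)             { --centreY; reload(); }
            }

            void reload() { /* load the new 3x3 neighbourhood, unload squares that left it */ }
        };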

    Read the article

  • Any learning/studying material for C/C++ that uses game programming as its learning context out there?

    - by mac
    As most game programming is done in C/C++ - or so I read on this very site - I was wondering whether there is any learning/studying material for C/C++ that targets game programming specifically. I am not looking for material about "developing games" or "software architecture for games", but rather for material that uses "game programming" as the CONTEXT for introducing and illustrating C/C++ features, idioms, programming techniques, etc. With a simile: think of the GoF book on design patterns. There, they used "developing a text editor" as the context for introducing design patterns, but the book is most definitely not a book about "developing text editors". Thanks in advance for your time and advice! PS: My background: I am a programmer with solid experience in OO scripting languages and only some experience in C and assembler (on AVR microcontrollers), so I am thinking of mid-to-advanced level material rather than tutorials for beginners, although it might be interesting to take a look at the latter if nothing else is available.

    Read the article

  • What is the easiest way to make a hitbox that rotates with its texture?

    - by Matthew Optional Meehan
    In XNA, when you have a sprite that doesn't rotate, it's very easy to get the four corners of the sprite to make a hitbox, but when you apply a rotation the points get moved, and I assume there is some kind of math I can use to acquire them. I am using the four points to draw a rectangle that visually represents the hitbox. I have seen some per-pixel collision examples, but I can foresee that they would be hard to draw a box / convex hull around. I have also seen physics engines like Farseer, but I'm not sure there is a quick tutorial for what I want. What do you guys think is the best approach? I am looking to complete this work by the end of the week.
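
    For the math part, the usual approach is to rotate each corner of the unrotated rectangle around the sprite's centre (or its rotation origin) with a 2D rotation; the rotated points can then be drawn as the hitbox or fed to a polygon-vs-polygon (SAT) test. A minimal C++ sketch, not tied to the XNA types:

        #include <array>
        #include <cmath>

        struct Vec2 { float x, y; };

        // Rotate the four corners of a rectangle (given by its top-left corner and
        // size) around its centre by 'angle' radians.
        std::array<Vec2, 4> rotatedCorners(Vec2 topLeft, float width, float height, float angle)
        {
            Vec2 centre{ topLeft.x + width * 0.5f, topLeft.y + height * 0.5f };
            std::array<Vec2, 4> corners{{
                { topLeft.x,         topLeft.y          },
                { topLeft.x + width, topLeft.y          },
                { topLeft.x + width, topLeft.y + height },
                { topLeft.x,         topLeft.y + height }
            }};
            float c = std::cos(angle), s = std::sin(angle);
            for (Vec2& p : corners) {
                float dx = p.x - centre.x, dy = p.y - centre.y;
                p = { centre.x + dx * c - dy * s, centre.y + dx * s + dy * c };
            }
            return corners;
        }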

    Read the article

  • OpenGL Vertex Attributes - Normalisation

    - by Daniel
    Alas, I have searched, and have found no definitive answer. When would you normalize the vertex data in OpenGL using the following command:

        glVertexAttribPointer(index, size, type, normalize, stride, pointer);

    That is, when would normalize == GL_TRUE? In what situations, and why would you choose to let the GPU do the calculations instead of preprocessing the data? All the examples I have ever seen have this set to GL_FALSE, and I cannot personally see a use for it. But Khronos aren't stupid, so it must be there for something useful (and probably common).
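
    For what it's worth (my example, not from the question): the flag only matters for integer types; with GL_TRUE the fixed-point values are remapped to [0, 1] (unsigned) or [-1, 1] (signed) when fetched, which is what you want when attributes are stored compactly. A typical case is per-vertex colours stored as 4 unsigned bytes; the attribute locations, stride and offsets below are illustrative.

        /* Per-vertex colours stored as 4 unsigned bytes (0..255). With GL_TRUE the
         * shader receives them as floats already mapped to [0, 1]. */
        glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 16, (const void*)12);

        /* Positions stay as plain floats; the flag has no effect on GL_FLOAT data. */
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 16, (const void*)0);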

    Read the article

  • Why is my collision detection not accurate?

    - by optimisez
    After trying and trying, I still cannot understand why the character's leg goes through the wall, while there is no clipping issue when I hit the wall from below. How should I fix it to make him stand still on the wall?

        void initPlayer()
        {
            // Create texture.
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44, D3DX_DEFAULT,
                NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
                D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &player);

            playerRect.left = playerRect.top = 0;
            playerRect.right = 29;
            playerRect.bottom = 36;

            playerDest.X = 0;
            playerDest.Y = 564;
            playerDest.length = playerRect.right - playerRect.left;
            playerDest.height = playerRect.bottom - playerRect.top;
        }

        void initBox()
        {
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "brock.png", 330, 132, D3DX_DEFAULT,
                NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
                D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &box);

            boxRect.left = 33;
            boxRect.top = 0;
            boxRect.right = 63;
            boxRect.bottom = 30;

            boxDest.X = boxDest.Y = 300;
            boxDest.length = boxRect.right - boxRect.left;
            boxDest.height = boxRect.bottom - boxRect.top;
        }

        bool spriteCollide(Entity player, Entity target)
        {
            float left1, left2;
            float right1, right2;
            float top1, top2;
            float bottom1, bottom2;

            left1 = player.X;
            left2 = target.X;
            right1 = player.X + player.length;
            right2 = target.X + target.length;
            top1 = player.Y;
            top2 = target.Y;
            bottom1 = player.Y + player.height;
            bottom2 = target.Y + target.height;

            if (bottom1 < top2) return false;
            if (top1 > bottom2) return false;
            if (right1 < left2) return false;
            if (left1 > right2) return false;
            return true;
        }

        void collideWithBox()
        {
            if (spriteCollide(playerDest, boxDest) && keyArr[VK_UP])
                //playerDest.Y += 50;
                playerDest.Y = boxDest.Y + boxDest.height;
            else if (spriteCollide(playerDest, boxDest) && !keyArr[VK_UP])
                playerDest.Y = boxDest.Y - boxDest.height;
        }

    Read the article

  • How should I determine direction from a phone's orientation & accelerometer?

    - by Manoj Kumar
    I have an Android application which moves a ball based on the orientation of the phone. I've been using the following code to extract the data - but how do I use it to determine what direction the ball should actually travel in?

        public void onSensorChanged(int sensor, float[] values) {
            // TODO Auto-generated method stub
            synchronized (this) {
                Log.d("HIIIII :- ", "onSensorChanged: " + sensor + ", x: " + values[0]
                        + ", y: " + values[1] + ", z: " + values[2]);
                if (sensor == SensorManager.SENSOR_ORIENTATION) {
                    System.out.println("Orientation X: " + values[0]);
                    System.out.println("Orientation Y: " + values[1]);
                    System.out.println("Orientation Z: " + values[2]);
                }
                if (sensor == SensorManager.SENSOR_ACCELEROMETER) {
                    System.out.println("Accel X: " + values[0]);
                    System.out.println("Accel Y: " + values[1]);
                    System.out.println("Accel Z: " + values[2]);
                }
            }
        }

    Read the article

  • How do I scroll in the physical world?

    - by Esteban Quintero
    I am using AndEngine to make a game where a sprite (the player) moves up across the stage. This is my world:

        final Rectangle ground = new Rectangle(0, CAMERA_HEIGHT - 2, CAMERA_WIDTH, 2, vertexBufferObjectManager);
        final Rectangle roof = new Rectangle(0, 0, CAMERA_WIDTH, 2, vertexBufferObjectManager);
        final Rectangle left = new Rectangle(0, 0, 2, CAMERA_HEIGHT, vertexBufferObjectManager);
        final Rectangle right = new Rectangle(CAMERA_WIDTH - 2, 0, 2, CAMERA_HEIGHT, vertexBufferObjectManager);

        final FixtureDef wallFixtureDef = PhysicsFactory.createFixtureDef(0, 0.5f, 0.5f);
        PhysicsFactory.createBoxBody(this.mPhysicsWorld, ground, BodyType.StaticBody, wallFixtureDef);
        PhysicsFactory.createBoxBody(this.mPhysicsWorld, roof, BodyType.StaticBody, wallFixtureDef);
        PhysicsFactory.createBoxBody(this.mPhysicsWorld, left, BodyType.StaticBody, wallFixtureDef);
        PhysicsFactory.createBoxBody(this.mPhysicsWorld, right, BodyType.StaticBody, wallFixtureDef);

        /* Create two sprites and add them to the scene. */
        this.mScene.setBackground(autoParallaxBackground);
        this.mScene.attachChild(ground);
        this.mScene.attachChild(roof);
        this.mScene.attachChild(left);
        this.mScene.attachChild(right);
        this.mScene.registerUpdateHandler(this.mPhysicsWorld);

    The problem is that the sprite eventually moves up and hits the roof. How do I scroll the world here?

    Read the article

  • How to control an actor movement in UDK

    - by Mikalichov
    This might be very basic, but I couldn't find something relevant to what I need (see below). I am working on a very basic thing: a 3D environment with some buildings, and actors walking inside it. It looks like the following: I mainly want to have one actor standing around, idling, and another walking around the area. Right now, this is done through Matinee + skeletal mesh groups, forcing a looped animation on the actors. But I realize this is super caveman-level. So I've built an AnimTree, linking the idling and directional animations to the corresponding nodes. But then I'm stuck. I added the AnimTree in the actor's properties, but nothing happens. I've tried MoveToActor, but no success - is there a thing to set to allow an actor to move? Also, I place the actors on the map manually (they are supposed to be unique); should I spawn them instead? Every tutorial I find explains how to use an AnimTree for the player character, which is not what I want. I need a way to move the actors. I tried to look for AI tutorials, but only found UT3 bot modifications, which is not what I need either. Since I have so much trouble finding how to do this through Kismet, I'm starting to suspect it has to be done through scripting/coding, but I would like to be sure there is no way to do it through Kismet before going that route. Any bit of an answer about how to tell an actor something along the lines of "go in that direction as much as you can, then when you hit a wall turn 45° and continue" would be awesome. I'll be happy to move/edit the question if there is any problem with it.

    Read the article

  • Game maps with "counties" that are split along lines that aren't necessarily straight

    - by pm_2
    I want to create a game environment that supports a 2D map. This is a really basic map, but it must be split along lines that are not necessarily straight. So imagine a country with county boundaries. I then want to be able to detect drag / drop events within these counties. What I'm really looking for here is a pointer to where to start on this (how it has been done before - any existing libraries out there), as I'm sure that what I'm trying to do is not new - although I can't find a beginner's guide for it anywhere.
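
    If each county is stored as a polygon (an ordered list of boundary vertices), hit-testing a drag/drop position comes down to a point-in-polygon test; most geometry/map libraries provide one, and the classic even-odd (ray casting) version is short enough to write directly. A hedged C++ sketch:

        #include <vector>

        struct Vec2 { float x, y; };

        // Even-odd (ray casting) point-in-polygon test: cast a ray to the right of
        // 'p' and count how many polygon edges it crosses; an odd count means the
        // point is inside. 'county' is the list of boundary vertices in order.
        bool pointInCounty(const std::vector<Vec2>& county, Vec2 p)
        {
            bool inside = false;
            for (size_t i = 0, j = county.size() - 1; i < county.size(); j = i++) {
                const Vec2& a = county[i];
                const Vec2& b = county[j];
                bool crosses = (a.y > p.y) != (b.y > p.y) &&
                               p.x < (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x;
                if (crosses) inside = !inside;
            }
            return inside;
        }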

    Read the article

  • Box2D platformer movement. Are joints a good idea?

    - by Romeo
    So I smashed my brains trying to make my character move. Since I want to add explosions and bullets later in the game, it wasn't a good idea to mess with the velocity directly, and the forces/impulses didn't work as I expected, so something stuck in my mind: is it a good idea to put a wheel (circle) at the character's bottom, invisible to the player, that will do the movement by rotation? I would attach it to my main body with a revolute joint, but I don't really know how to make the main body and the wheel body not collide with each other, since funny things can happen. What is your opinion?
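
    On the "don't collide with each other" part: Box2D's fixture filtering covers this; fixtures that share the same negative groupIndex never collide with each other. A minimal sketch (shape setup omitted, variable names are mine):

        #include <Box2D/Box2D.h>

        b2PolygonShape bodyShape;   // the character's main box (vertices set elsewhere)
        b2CircleShape  wheelShape;  // the invisible wheel

        b2FixtureDef bodyFixture;
        bodyFixture.shape = &bodyShape;
        bodyFixture.filter.groupIndex = -1;   // same negative group index...

        b2FixtureDef wheelFixture;
        wheelFixture.shape = &wheelShape;
        wheelFixture.filter.groupIndex = -1;  // ...means these fixtures never collide

        // Note: b2RevoluteJointDef::collideConnected also defaults to false, so two
        // bodies connected by the joint do not collide unless it is explicitly enabled.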

    Read the article

  • Bouncing ball issue

    - by user
    I am currently working on 2D bouncing-ball physics that bounces the ball up and down. The physics behaviour works fine, but at the end the velocity keeps alternating between +3 and 0 non-stop, even though the ball has stopped bouncing. How should I modify the code to fix this issue?

        ballPos = D3DXVECTOR2( 50, 100 );
        velocity = 0;
        accelaration = 3.0f;
        isBallUp = false;

        void GameClass::Update()
        {
            velocity += accelaration;
            ballPos.y += velocity;

            if ( ballPos.y >= 590 )
                isBallUp = true;
            else
                isBallUp = false;

            if ( isBallUp )
            {
                ballPos.y = 590;
                velocity *= -1;
            }

            // Graphics Rendering
            m_Graphics.BeginFrame();
            ComposeFrame();
            m_Graphics.EndFrame();
        }
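
    Not the original author's fix, but one common pattern is to damp each bounce and freeze the ball once the rebound speed drops below a small threshold. A sketch under that assumption; restitution, restThreshold and atRest are illustrative additions, and std::fabs needs <cmath>.

        const float restitution   = 0.8f;   // fraction of speed kept after each bounce
        const float restThreshold = 3.0f;   // below this rebound speed the ball stops
        bool atRest = false;

        void GameClass::Update()
        {
            if (!atRest)
            {
                velocity += accelaration;            // gravity
                ballPos.y += velocity;

                if (ballPos.y >= 590)                // floor contact
                {
                    ballPos.y = 590;
                    velocity = -velocity * restitution;  // bounce with energy loss
                    if (std::fabs(velocity) < restThreshold)
                    {
                        velocity = 0.0f;             // ball has settled: stop updating it
                        atRest = true;
                    }
                }
            }

            // Graphics Rendering
            m_Graphics.BeginFrame();
            ComposeFrame();
            m_Graphics.EndFrame();
        }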

    Read the article

  • What kind of physics to choose for our arcade 3D MMO?

    - by Nick
    We're creating an action MMO using Three.js (WebGL) with an arcade-ish feel, and implementing physics for it has been a pain in the butt. Our game has terrain that the character will walk on, and in the future 3D objects (a house, a tree, etc.) that will have collisions. In terms of complexity, the physics engine should be like World of Warcraft's. We don't need friction, bouncing behaviour or anything more complex like joints, etc. Just gravity. I have managed to implement terrain physics so far by casting a ray downwards, but it does not take possible 3D objects into account. Note that these 3D objects need to have convex collisions, so our artists create a 3D house and the player can walk inside but can't walk through the walls. How do I implement proper collision detection with 3D objects like in World of Warcraft? Do I need an advanced physics engine? I read about Physijs, which looks cool, but I fear it may be overkill for our game. Also, how does WoW do it? Do they have a separate raycasting system for the terrain? Or do they treat the terrain like any other convex mesh? A screenshot of our game so far:

    Read the article

  • Rain drops on screen

    - by user1075940
    I am trying to make a simple raindrop effect on screen, something like this: http://fc00.deviantart.net/fs20/f/2007/302/5/6/Rain_drops_by_rockraikar.png My idea is to create small drop-shaped normal textures, randomly put a few on screen, apply texture perturbation and mix with the current frame's pixels. Here are my questions:
    - Does this idea even make sense? How do professionals do this effect? Everything from text to code will be appreciated.
    - How do I pass the pixels of the already rendered frame to a shader?
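
    For the second question, the usual answer is render-to-texture: draw the frame into an off-screen framebuffer object, then bind that texture while drawing a full-screen quad with the rain shader, which can also sample the drop normal textures to perturb the lookup. A hedged OpenGL/C++ sketch; names and sizes are illustrative.

        // Sketch only: create a colour texture, attach it to a framebuffer object,
        // render the scene into it, then sample it in the rain-drop shader.
        GLuint sceneTex, fbo;

        glGenTextures(1, &sceneTex);
        glBindTexture(GL_TEXTURE_2D, sceneTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, sceneTex, 0);

        // Each frame: render the scene into the FBO...
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        // ... draw the scene ...

        // ...then switch back to the default framebuffer and draw a full-screen
        // quad with the rain shader, sampling sceneTex (plus the drop normal map).
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glBindTexture(GL_TEXTURE_2D, sceneTex);
        // ... draw full-screen quad with the rain shader ...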

    Read the article
