Search Results

Search found 29201 results on 1169 pages for 'game development'.


  • Wall avoidance steering

    - by Vodemki
    I'm making a small steering simulator using the Reynolds boid algorithm. Now I want to add a wall avoidance feature. My walls are in 3D and defined using two points, P1 and P2, one at each end of the wall segment. My agents have a velocity, a position, etc. Could you tell me how to implement wall avoidance for my agents?

        Vector2D ReynoldsSteeringModel::repulsionFromWalls()
        {
            Vector2D force;
            vector<Wall *> wallsList = walls();
            Point2D pos = self()->position();
            Vector2D velocity = self()->velocity();
            for (unsigned i = 0; i < wallsList.size(); i++)
            {
                //TODO
            }
            return force;
        }

    I then take all the forces returned by my boid functions and apply them to my agent. I just need to know how to do the same with my walls. Thanks for your help.
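
    For reference, one common approach is sketched below. This is not Vodemki's code: it assumes Vector2D and Point2D expose public x/y members and a two-argument constructor, that each Wall offers hypothetical point1()/point2() accessors for its endpoints, and that <cmath> and <algorithm> are included. The idea is to find the closest point on each wall segment to the agent and push the agent away from it, with a force that ramps up as the agent gets closer than a safety distance.

        Vector2D ReynoldsSteeringModel::repulsionFromWalls()
        {
            const double safeDistance = 50.0;   // tune to the scale of your world
            Vector2D force;
            vector<Wall *> wallsList = walls();
            Point2D pos = self()->position();

            for (unsigned i = 0; i < wallsList.size(); i++)
            {
                Point2D a = wallsList[i]->point1();   // hypothetical endpoint accessors
                Point2D b = wallsList[i]->point2();

                // Project the agent's position onto the segment [a, b].
                Vector2D ab(b.x - a.x, b.y - a.y);
                Vector2D ap(pos.x - a.x, pos.y - a.y);
                double t = (ap.x * ab.x + ap.y * ab.y) / (ab.x * ab.x + ab.y * ab.y);
                t = std::max(0.0, std::min(1.0, t));
                Point2D closest(a.x + t * ab.x, a.y + t * ab.y);

                // Direction from the wall to the agent, and the distance to it.
                Vector2D away(pos.x - closest.x, pos.y - closest.y);
                double dist = std::sqrt(away.x * away.x + away.y * away.y);

                // Repulsion ramps from 0 to 1 as the agent approaches the wall.
                if (dist > 0.0 && dist < safeDistance)
                {
                    double strength = (safeDistance - dist) / safeDistance;
                    force.x += (away.x / dist) * strength;
                    force.y += (away.y / dist) * strength;
                }
            }
            return force;
        }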

    Read the article

  • Material tiling and offset in Unity

    - by Simran kaur
    Ambiguity: What exactly is the difference between tiling the material and offsetting the material? What I need to do: I need the material to be repeated n times on the object, where the value of n is set via script. How do I do it? It seems to happen through tiling (tried via the inspector), but again, what is the difference between mainTextureOffset and setTextureOffset? Tried: the following is the line of code I tried in order to repeat the texture n times on an object (repeat across the width of the object), but it does nothing significant that I can see.

    Read the article

  • Rendering 2D Tile World (With Player In The Middle)

    - by Mick
    What I have at the moment is a series of data structures I'm using, and I would like to render the world onto the screen (just the visible parts). I've actually already done this several times (lots of rewrites), but it's a bit buggy (rounding seems to make the screen jump ever so slightly every x tiles the player walks past). Basically I've been confusing myself heavily on what I feel should be a pretty simple problem... so here I am asking for some help! OK! So I have a 50x50 array holding the tiles of the world. I have the player position as 2 floats, x ([0, 49]) and y ([0, 49]), in that array. I have the application size exactly in pixels (x and y). I have an arbitrary TILE_SIZE static int (based on screen pixels). What I think is heavily confusing me is using a 2D orthogonal projection in OpenGL which maps (0,0) to the top left of the screen and (SCREEN_SIZE_X, SCREEN_SIZE_Y) to the bottom right of the screen.

        gl.glMatrixMode(GL.GL_PROJECTION);
        gl.glLoadIdentity();
        glu.gluOrtho2D(0, getActualWidth(), getActualHeight(), 0);
        gl.glMatrixMode(GL.GL_MODELVIEW);
        gl.glLoadIdentity();

    The map tiles are set so that (0,0) in the array is the bottom left. And the player has to be in the middle of the screen (SCREEN_SIZE_X/2, SCREEN_SIZE_Y/2). What I've been doing so far is trying to render 1-2 tiles more all around what would be displayed on the screen, so that I don't have to worry about rendering half a tile from the top left, depending on where the player is. It seems like such an easy problem, but after spending about 40+ hours on it and rewriting it many times, I think I'm at a point where I just can't think clearly anymore... Any help would be appreciated. It would be great if someone could provide some very basic pseudocode on keeping the player in the middle when your projection is mapped to screen coordinates and only rendering the tiles that would actually be visible. Thanks!
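
    For reference, one way to avoid the accumulating rounding jumps is sketched below in C++-style pseudocode (not Mick's code). playerX/playerY, TILE_SIZE and SCREEN_SIZE_X/Y are the names from the question; the 50x50 world array and drawTile() are hypothetical. The key point is that every tile's screen position is computed in one step from the floating-point player position, so no per-tile error accumulates, and the player stays pinned to the screen centre.

        void renderVisibleTiles(float playerX, float playerY)
        {
            const float centerX = SCREEN_SIZE_X * 0.5f;
            const float centerY = SCREEN_SIZE_Y * 0.5f;
            const float tilesAcross = centerX / TILE_SIZE;   // half-screen width in tiles
            const float tilesDown   = centerY / TILE_SIZE;   // half-screen height in tiles

            int firstX = (int)std::floor(playerX - tilesAcross) - 1;
            int lastX  = (int)std::ceil(playerX + tilesAcross) + 1;
            int firstY = (int)std::floor(playerY - tilesDown) - 1;
            int lastY  = (int)std::ceil(playerY + tilesDown) + 1;

            for (int ty = firstY; ty <= lastY; ++ty) {
                for (int tx = firstX; tx <= lastX; ++tx) {
                    if (tx < 0 || tx > 49 || ty < 0 || ty > 49) continue;  // outside the 50x50 map

                    // Screen position derived directly from the float player position.
                    float screenX = centerX + (tx - playerX) * TILE_SIZE;
                    float screenY = centerY - (ty - playerY) * TILE_SIZE;  // array Y is up, screen Y is down

                    drawTile(tx, ty, screenX, screenY);                    // hypothetical draw call
                }
            }
        }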

    Read the article

  • Graphics hardware

    - by Vanangamudi
    Which vendor provides better GPGPU? My requirements are confined to rendering, utilising the GPU for BSDF building, for example. Intel has started providing the Ivy Bridge chipset GPUs, which are comparably fast to HD5960 cards. I'm not against NVIDIA or AMD, but I'm a fan of Intel. How does it compare to NVIDIA in price and performance? If possible, may I also know how all of them perform with OpenCL? I'm not sure if it is right to ask this here, but I don't know where else to ask.

    Read the article

  • Java 2D World question

    - by Munkybunky
    I have a 2D world background made up of a grid of graphics, which I display on screen with a viewport (800x600), and it all works. My question: I have the following code to convert the mouse co-ordinates to world co-ordinates, then world co-ordinates to grid co-ordinates, then grid co-ordinates to screen co-ordinates.

        // Add camerax to the mouse screen co-ords to convert to world co-ords.
        int cursorx_world = (int)camerax + (int)GameInput.mousex;
        // World co-ords / gridsize give grid co-ords.
        int cursorx_grid = (int)cursorx_world / blocksize;
        int cursorx_screen = -(int)camerax + (cursorx_grid * blocksize);

    So is there any way I can convert straight from mouse screen co-ords to screen co-ordinates?
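
    For reference, the three steps above compose into a single expression (a sketch using the same variables as the question, not tested against Munkybunky's code); the integer division is what snaps the position to the grid, so nothing is lost by collapsing it:

        // Snap the mouse position to the tile grid, directly in screen space.
        int cursorx_screen = (((int)camerax + (int)GameInput.mousex) / blocksize) * blocksize - (int)camerax;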

    Read the article

  • Climbing boxes in box2D

    - by Rothens
    I've just stepped into the world of Box2D with libgdx. I've already made a stack of boxes: they are dropped randomly on top of each other. What I'd like to achieve is a character that can freely climb on the boxes (he can grip the boxes anywhere, not just on the side or top of a box), but whose weight affects the stack as well, so the boxes can fall down. My google-fu failed me... Is there any way to make this possible?
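
    One common way to model a grip in Box2D is to create a joint between the character body and the box at the contact point when the player grabs, and destroy it on release; because it is an ordinary joint, the character's weight keeps pulling on the box, so the stack can still topple. A minimal sketch with the C++ Box2D API is below (libgdx exposes equivalent Java wrappers); characterBody, boxBody, gripPoint and world are placeholders for whatever your game tracks, not names from the question.

        // Attach the character to a box at the grip point with a revolute joint.
        // gripPoint is a b2Vec2 in world coordinates; world is your b2World.
        b2RevoluteJointDef jointDef;
        jointDef.Initialize(characterBody, boxBody, gripPoint);
        jointDef.collideConnected = true;              // keep colliding with the box we hang from
        b2Joint* grip = world->CreateJoint(&jointDef);

        // ... later, when the player lets go:
        world->DestroyJoint(grip);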

    Read the article

  • In 3D camera math, calculate at what Z depth pixels are 1:1 (pixel unity) for a given FOV

    - by badweasel
    I am working in iOS and OpenGL ES 2.0. Through trial and error I've figured out a frustum where, at a specific Z depth, pixels drawn are 1:1 with my source textures. So 1 pixel in my texture is 1 pixel on the screen. For 2D games this is good. Of course it means that I also factor in things like the size of the quad and the size of the texture. For example, if my sprite is a quad of 32x32 pixels, the quad size is 3.2 units wide and tall, and the texcoords are 32 divided by the size of the texture, wide and tall. Then the frustum is:

        matrixFrustum(-(float)backingWidth/frustumScale, (float)backingWidth/frustumScale,
                      -(float)backingHeight/frustumScale, (float)backingHeight/frustumScale,
                      40, 1000, mProjection);

    where frustumScale is 800 for a retina screen. Then at a distance of 800 from the camera the sprite is pixel for pixel the same as Photoshop. For 3D games I sometimes still want to be able to do this, but depending on the scene I sometimes need the FOV to be different. I'm looking for a way to figure out what Z depth will achieve this same pixel unity for a given FOV. For this my mProjection is set using:

        matrixPerspective(cameraFOV, near, far, (float)backingWidth / (float)backingHeight, mProjection);

    With testing I found that at an FOV of 45.0 a Z of 38.5 is very close to pixel unity, and at an FOV of 30.0 a Z of 59.5 is about right. But how can I calculate a value that is spot on? Here's my matrixPerspective code:

        void matrixPerspective(float angle, float near, float far, float aspect, mat4 m)
        {
            //float size = near * tanf(angle / 360.0 * M_PI);
            float size = near * tanf(degreesToRadians(angle) / 2.0);
            float left = -size, right = size, bottom = -size / aspect, top = size / aspect;

            // Unused values in perspective formula.
            m[1] = m[2] = m[3] = m[4] = 0;
            m[6] = m[7] = m[12] = m[13] = m[15] = 0;

            // Perspective formula.
            m[0] = 2 * near / (right - left);
            m[5] = 2 * near / (top - bottom);
            m[8] = (right + left) / (right - left);
            m[9] = (top + bottom) / (top - bottom);
            m[10] = -(far + near) / (far - near);
            m[11] = -1;
            m[14] = -(2 * far * near) / (far - near);
        }

    And my mView is set using:

        lookAtMatrix(cameraPos, camLookAt, camUpVector, mView);

    UPDATE: I'm going to leave this here in case anyone has a different solution, can explain how they do it, or can explain why this works. This is what I figured out. In my system I use a 10th-scale of units to pixels on non-retina displays and a 20th-scale on retina displays. The iPhone is 640 pixels wide on retina and 320 pixels wide on non-retina (obsolete). So if I want something to be the full screen width, I divide by 20 to get the OpenGL unit width, then divide that by 2 to get the left and right unit positions. Something 32 units wide centered on the screen goes from -16 to +16. Believe it or not, I have an Excel spreadsheet do all this math for me and output all the vertex data for my sprite sheet. It's an arbitrary convention I made up: 0.1 units = 1 non-retina pixel or 2 retina pixels. I could have made it 0.01 units = 2 pixels, and someday I might switch to that, but for now it's the other. So the width of the screen in units is 32.0, which means the left-most pixel is at -16.0 and the right-most is at +16.0. After messing around a bit, I figured out that if I take the [0] value of an identity modelViewProjection matrix and multiply it by 16, I get the depth required to get 1:1 pixels. I don't know why, and I don't know if the 16 is related to the screen size or is just a lucky guess. But I did a test where I placed a sprite at that calculated depth and varied the FOV through all the valid values, and the object stays steady on screen with 1:1 pixels. So now I'm just calculating the unityDepth that way. If someone gives me a better answer I'll checkmark it.
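
    For reference, the empirical rule in the update can be derived from matrixPerspective itself (a sketch of the math, not from the original post; W is the screen width in world units, 32 in the convention described above):

        \[
        m[0] = \frac{2\,\mathrm{near}}{\mathrm{right}-\mathrm{left}} = \frac{1}{\tan(\mathrm{fov}/2)},
        \qquad
        \text{visible width at depth } z = 2\,z\,\tan(\mathrm{fov}/2).
        \]

        \[
        2\,z_{\text{unity}}\tan(\mathrm{fov}/2) = W
        \;\Longrightarrow\;
        z_{\text{unity}} = \frac{W}{2\,\tan(\mathrm{fov}/2)} = \frac{W}{2}\,m[0].
        \]

    With W = 32 this gives about 38.6 at a 45-degree FOV and about 59.7 at 30 degrees, matching the measured 38.5 and 59.5, and it explains the "multiply m[0] by 16" observation, since 16 = W/2.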

    Read the article

  • Transparent parts of texture are opaque black instead

    - by Aaron
    I render a sprite twice, one on top of the other. The sprites have transparent parts, so I should be able to see the bottom sprite under the top sprite. Instead, the transparent parts are opaque black (the clear colour), and the topmost sprite blocks the bottom sprite. My fragment shader is trivial:

        uniform sampler2D texture;
        varying vec2 f_texcoord;

        void main()
        {
            gl_FragColor = texture2D(texture, f_texcoord);
        }

    I have glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in my initialization code. My texture comes from a PNG file that I load with libpng. I'm sure to use GL_RGBA when initializing the texture with glTexImage2D (otherwise the sprites look like noise).
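
    A couple of the usual suspects, sketched below as plain GL calls (an assumption about the surrounding render loop, not a diagnosis of Aaron's code): blending has to be enabled in the state that is current when the sprites are actually drawn, and if depth testing is on, the nearer sprite writes depth and can cull the sprite behind it even where its alpha is zero. Transparent sprites are therefore typically drawn back-to-front with depth writes disabled.

        // Typical per-frame state for alpha-blended sprites.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);      // don't let transparent texels occlude what's behind
        // ... draw sprites back-to-front here ...
        glDepthMask(GL_TRUE);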

    Read the article

  • Spherical harmonics lighting interpolation

    - by TravisG
    I want to use hardware filtering to smooth out colors in the texels of a texture when I'm accessing texels at coordinates that are not directly at the center of a texel, the catch being that the texels store 2 bands of spherical harmonics coefficients (= 4 coefficients), not RGBA intensity values. Can I just use hardware filtering like that (GL_LINEAR, with and without mipmapping) without any considerations? In other words: if I were to first convert the coefficients back to intensity representations, then manually interpolate between two intensities, would the resulting intensity be the same as if I interpolated between the coefficient vectors directly and then converted the interpolated result to intensities?
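
    For reference (a sketch of the standard argument, not from the original post): reconstructing intensity from SH coefficients is linear in the coefficients, a weighted sum of the basis functions evaluated in the given direction, so linear filtering of the coefficients commutes with the reconstruction:

        \[
        I(\omega) = \sum_i c_i\,Y_i(\omega)
        \;\Longrightarrow\;
        \sum_i \big[(1-t)\,c_i^{(1)} + t\,c_i^{(2)}\big]\,Y_i(\omega)
        = (1-t)\,I_1(\omega) + t\,I_2(\omega).
        \]

    The same holds for bilinear and trilinear weights, so filtering the coefficient texture gives the same result as filtering the reconstructed intensities, provided the conversion really is linear (no clamping or non-linear encoding in between).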

    Read the article

  • World to Pixel Transformation

    - by D00d
    My objects have a location in world coordinates (basically 1.0f is a meter). If I simply draw my objects using their world coordinates, each meter will correspond to a pixel. Obviously that's not what I want. Now, I don't want to have to apply a transformation to each and every object's position when I draw them. As I happen to be using XNA, and SpriteBatch allows a Matrix to be passed as an argument to its Begin method, I was wondering if there is a way to pass the world-to-pixel transformation in there. Any suggestions? So far Matrix.CreateScale(new Vector3(zoom, zoom, 1)) puts the objects in their proper spots, but it also scales up the sprites. Is there a way to transform the position without enlarging the sprite? Thanks

    Read the article

  • Drawing Grid in 3D view - Mathematically calculate points and draw line between them (Not working)

    - by Deukalion
    I'm trying to draw a simple grid from a starting point and expand it to a size. I'm doing this mathematically and drawing lines between each point, but DrawPrimitives(LineList) doesn't work the way I expect it to: it can't even draw lines between four points to create a simple rectangle. So how does this method work exactly? Some sort of coordinate system:

        [ ][ ][ ][ ][ ][ ][ ]
        [ ][2.2][ ][0.2][ ][2.2][ ]
        [ ][2.1][1.1][ ][1.1][2.1][ ]
        [ ][2.0][ ][0.0][ ][2.0][ ]
        [ ][2.1][1.1][ ][1.1][2.1][ ]
        [ ][2.2][ ][0.2][ ][2.2][ ]
        [ ][ ][ ][ ][ ][ ][ ]

    I've checked my method and it's working as it should: it calculates all the points needed to form a grid. This way I should be able to create points to draw lines between, right? If I supply the method with Size = 2, it starts at 0,0 and works its way through all the corners (2,2) on each side. So, I have the positions of each point. How do I draw lines between these? VerticeCount must be the number of points in this case, right? So, tell me: if I can't supply this method with Point A, Point B, Point C, Point D to draw a four-vertex rectangle (Point A - B - C - D), how do I do it? How do I even begin to understand it? As far as I'm concerned, that's a "line list", or a list of points to draw lines between. Can anyone explain what I'm missing? I wish to do this mathematically so I can create a custom grid that can be altered.
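
    For reference: a line list treats the vertex stream as independent pairs, so vertices 0-1 form the first line, 2-3 the second, and the primitive count is the number of pairs. Four points A, B, C, D therefore only give the two segments A-B and C-D; a rectangle needs its four edges spelled out as eight vertices (A,B, B,C, C,D, D,A), or a line strip instead. Below is a small sketch (plain C++, not XNA code; Vec2 and the parameters are assumptions) of building line-list vertices for a grid.

        #include <vector>

        struct Vec2 { float x, y; };

        // Build a line list for a grid of `cells` x `cells` squares of size `cell`,
        // starting at `origin`. Every line contributes TWO vertices, so the
        // primitive count passed to the draw call is vertices.size() / 2.
        std::vector<Vec2> buildGridLines(Vec2 origin, int cells, float cell)
        {
            std::vector<Vec2> v;
            float extent = cells * cell;
            for (int i = 0; i <= cells; ++i) {
                float o = i * cell;
                // vertical line at x = origin.x + o
                v.push_back({origin.x + o, origin.y});
                v.push_back({origin.x + o, origin.y + extent});
                // horizontal line at y = origin.y + o
                v.push_back({origin.x,          origin.y + o});
                v.push_back({origin.x + extent, origin.y + o});
            }
            return v;
        }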

    Read the article

  • Modular building technique with angles? (A roof)

    - by Mungoid
    I've been spending a bit of time lately studying the modular buildings of many games and reading/viewing several tutorials about it as well, but almost every example I see uses a plain square building that does not have an angled roof or anything similar. In all my applications (CS6, Blender/Max, UDK) I adhere to the same grid spacing and I get pretty good results, but trying to make modular angled pieces is confusing me, as I'm not sure of the best way to approach it. Below are some shots of my template sheet and the workflow I have been following. Should I do the roof separately, or is it possible for me to keep it in the same texture sheet? The main issue is below. I have made a couple of modular roof pieces, but when I try to use them I end up needing to model multiple other parts to fill gaps based on what roof shape I want. I then model those 'filler' pieces, and now I have that much less space left in my texture sheet, and those pieces are usually not very reusable for anything else. This is where I'm not sure how to proceed. If anyone has any links to documents or papers talking about this, or any advice, I would greatly appreciate it! =-) My main roof pieces with the gaps. My power-of-2 texture sheet, with 16x16 grid squares. The texture sheet loaded into Blender on a 16x16 plane, starting to separate and extrude.

    Read the article

  • Are there any reasons to use Legacy (2.X) OpenGL?

    - by user27886
    The benefits of the modern OpenGL 3.X and 4.X APIs are well documented, but I'm wondering if there are ANY benefits to sticking with the old OpenGL, or if learning OpenGL 2.X is a complete waste of time now no matter what. In particular, I've wondered whether using the OpenGL 2.X API is appropriate if the target platform has graphics hardware capable of only up to OpenGL 2.X. Would a driver update on said target platform allow programs compiled using the modern OpenGL APIs to be released on this old platform? If they both work, which would be faster? Thanks

    Read the article

  • Possible to pass pygame data to memory map block?

    - by toozie21
    I am building a matrix out of addressable pixels, and it will be run by a Pi (over the Ethernet bus). The matrix will be 75 pixels wide and 20 pixels tall. As a side project, I thought it would be neat to run Pong on it. I've seen some Python-based Pong tutorials for the Pi, but the problem is that they want to pass the data out to a screen via the pygame.display function. I have access to pass pixel information using a memory map block, so is there any way to do that with pygame instead of passing it out the video port? In case anyone is curious, this was the Pong tutorial I was looking at: Pong Tutorial

    Read the article

  • How to offset particles from point of origin

    - by Sun
    Hi, I'm having trouble offsetting particles from a point of origin. I want my particles to spread out after a certain radius from the point of origin. For example, this is what I have right now: all particles are emitted from the point of origin. What I want is this: particles are offset from the point of origin by some amount, i.e. starting after the circle. What is the best way to achieve this? At the moment, I have the point of origin, the position of each particle and its rotation angle. Sorry for the poor illustrations. Edit: I was mistaken; when a particle is created, I have only the point of origin. I am able to calculate the rotation of the particle in the update method, after it has moved to a new location, using the atan2() method. This is how I create/manage particles: a new particle is created at the enemy ship's death location, and for every new particle added to the list, Update and Draw are called to update its position, calculate the new angle and draw it.
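
    One way to do it is sketched below (plain C++, not tied to Sun's particle class; Vec2 and the function are placeholders): pick the particle's direction first, then spawn it already pushed out along that direction by the desired radius, so emission starts on the circle instead of at the centre. The same angle can then be stored as the particle's rotation instead of recomputing it later with atan2().

        #include <cmath>
        #include <cstdlib>

        struct Vec2 { float x, y; };

        // Spawn a particle on a circle of `radius` around `origin`, heading outward.
        // The angle here is random; reuse the particle's own direction if it has one.
        Vec2 spawnOnCircle(Vec2 origin, float radius, float& outAngle)
        {
            outAngle = (float)std::rand() / RAND_MAX * 6.2831853f;   // random angle in [0, 2*pi)
            return { origin.x + std::cos(outAngle) * radius,
                     origin.y + std::sin(outAngle) * radius };
        }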

    Read the article

  • My sprite goes off the screen

    - by IlNero
    If an action moves the sprite, how can I keep the CCSprite on the screen? This is my code:

        [enemy runAction:[CCSequence actions:
            [CCMoveBy actionWithDuration:2.0 position:ccp(-winSize.width*0.4, 0)],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(winSize.width*0.2, -winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(winSize.width*0.2, -winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(winSize.width*0.2, -winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(winSize.width*0.2, -winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(winSize.width*0.2, -winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(-winSize.width*0.3, winSize.width*0.3), randomValueBetween(winSize.height*0.3, -winSize.height*0.3))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(-winSize.width*0.2, winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(-winSize.width*0.3, winSize.width*0.3), randomValueBetween(winSize.height*0.3, -winSize.height*0.3))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(-winSize.width*0.2, winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(-winSize.width*0.3, winSize.width*0.3), randomValueBetween(winSize.height*0.3, -winSize.height*0.3))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(-winSize.width*0.2, winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:2.0 position:ccp(-winSize.width*1.5, 0)],
            [CCCallFuncN actionWithTarget:self selector:@selector(invisNode:)],
            nil]];

    But with this code the sprite sometimes goes off the screen. I need the sprite to move randomly within the screen without leaving it.

    Read the article

  • How do I create a big multiplayer world in UDK?

    - by Dorpe
    I want to create a big multiplayer world in UDK and I'm having a few difficulties. I created the biggest terrain possible, but then any terrain-related action I take takes forever. However, I've seen videos of people making terrain of the same size and working without a problem. My PC is strong enough, so maybe someone can tell me what I'm doing wrong. I want to make it even bigger than the biggest terrain size, so I was thinking of doing level streaming, but then I read that streaming works server-side, which means that if I have a player on every terrain, all terrains will still be loaded, and I want to save as much memory as possible so it will work well online. Thanks for any help you can give.

    Read the article

  • StringBuffer behavior in LWJGL

    - by Michael Oberlin
    Okay, I've been programming in Java for about ten years, but am entirely new to LWJGL. I have a specific problem whilst attempting to create a text console. I have built a class meant to abstract input polling to it, which (in theory) captures key presses from the Keyboard object and appends them to a StringBuilder/StringBuffer, then retrieves the completed string after receiving the ENTER key. The problem is, after I trigger the String return (currently with ESCAPE), and attempt to print it to System.out, I consistently get a blank line. I can get an appropriate string length, and I can even sample a single character out of it and get complete accuracy, but it never prints the actual string. I could swear that LWJGL slipped some kind of thread-safety trick in while I wasn't looking. Here's my code:

        static volatile StringBuffer command = new StringBuffer();

        @Override
        public void chain(InputPoller poller) {
            this.chain = poller;
        }

        @Override
        public synchronized void poll() {
            // basic testing for modifier keys, to be used later on
            boolean shift = false, alt = false, control = false, superkey = false;

            if(Keyboard.isKeyDown(Keyboard.KEY_LSHIFT) || Keyboard.isKeyDown(Keyboard.KEY_RSHIFT))
                shift = true;
            if(Keyboard.isKeyDown(Keyboard.KEY_LMENU) || Keyboard.isKeyDown(Keyboard.KEY_RMENU))
                alt = true;
            if(Keyboard.isKeyDown(Keyboard.KEY_LCONTROL) || Keyboard.isKeyDown(Keyboard.KEY_RCONTROL))
                control = true;
            if(Keyboard.isKeyDown(Keyboard.KEY_LMETA) || Keyboard.isKeyDown(Keyboard.KEY_RMETA))
                superkey = true;

            while(Keyboard.next())
                if(Keyboard.getEventKeyState()) {
                    command.append(Keyboard.getEventCharacter());
                }

            if (Framework.isConsoleEnabled() && Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) {
                System.out.println("Escape down");
                System.out.println(command.length() + " characters polled"); // works
                System.out.println(command.toString().length());            // works
                System.out.println(command.toString().charAt(4));           // works
                System.out.println(command.toString().toCharArray());       // blank line!
                System.out.println(command.toString());                     // blank line!
                Framework.disableConsole();
            }

            // TODO: Add command construction and console management after that
        }
        }

    Maybe the answer's obvious and I'm just feeling tired, but I need to walk away from this for a while. If anyone sees the issue, please let me know. This machine is running the latest release of Java 7 on Ubuntu 12.04, Mate desktop environment. Many thanks.

    Read the article

  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty much green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBOs and their associated targets (textures in my case). I have a simple (I think :P) geometry+fragment shader program, and I'd like to write its fragment shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so:

        #version 330

        in vec3 WorldPos0;
        in vec2 TexCoord0;
        in vec3 Normal0;
        in vec3 Tangent0;

        layout(location = 0) out vec3 WorldPos;
        layout(location = 1) out vec3 Diffuse;
        layout(location = 2) out vec3 Normal;

        uniform sampler2D gColorMap;
        uniform sampler2D gNormalMap;

        vec3 CalcBumpedNormal()
        {
            vec3 Normal = normalize(Normal0);
            vec3 Tangent = normalize(Tangent0);
            Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
            vec3 Bitangent = cross(Tangent, Normal);
            vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
            BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0);
            vec3 NewNormal;
            mat3 TBN = mat3(Tangent, Bitangent, Normal);
            NewNormal = TBN * BumpMapNormal;
            NewNormal = normalize(NewNormal);
            return NewNormal;
        }

        void main()
        {
            WorldPos = WorldPos0;
            Diffuse = texture(gColorMap, TexCoord0).xyz;
            Normal = CalcBumpedNormal();
        }

    If my render target textures are configured as:

        RT1: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0)
        RT2: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1)
        RT3: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2)

    and assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are multiple render targets? From some Googling, I think there are two other ways to output to MRTs: 1: Output each component to gl_FragData[n]. Some forum posts say this method is deprecated. However, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach. 2: Use a typed output array variable for the expected type. In this case, I think it would be something like this:

        out vec3 [3] output;

        void main()
        {
            output[0] = WorldPos0;
            output[1] = texture(gColorMap, TexCoord0).xyz;
            output[2] = CalcBumpedNormal();
        }

    So which is then the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!
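
    For reference, a sketch of the C-side setup that the layout(location = n) outputs rely on is below (standard GL 3.3 calls, not taken from the question; gbufferFbo, worldPosTex, diffuseTex and normalTex are hypothetical names). The draw-buffer array maps fragment output location 0 to GL_COLOR_ATTACHMENT0, 1 to ATTACHMENT1, and so on. While being rendered to, the textures only need to be attached to the currently bound framebuffer; they do not also need to be bound to texture units (and must not be bound for sampling in the same draw).

        glBindFramebuffer(GL_FRAMEBUFFER, gbufferFbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, worldPosTex, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, diffuseTex,  0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, normalTex,   0);

        // Route fragment output locations 0..2 to the three colour attachments.
        GLenum buffers[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
        glDrawBuffers(3, buffers);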

    Read the article

  • Client Side Prediction for a Look Vector

    - by Mike Sawayda
    So I am making a first-person networked shooter. I am working on client-side prediction, where I am predicting player position and look vectors client-side based on input messages received from the server. Right now I am only worried about the look vectors, though. I am receiving the correct look vector from the server about 20 times per second, and I am checking that against the look vector that I have client-side. I want to interpolate the client's look vector towards the correct one from the server over a period of time, so that no matter how far you are away from the server's look vector, you will interpolate to it over the same amount of time. For example, if you were 10 degrees off, it would take the same amount of time as if you were 2 degrees off to be correctly lined up with the server copy. My code looks something like the snippet below, but the problem is that the amount by which you change the client's copy gets infinitesimally small, so you never actually reach the server's copy. This is because I am always calculating the difference and only moving by a percentage of it every frame. Does anyone have any suggestions on how to interpolate towards the server's copy correctly?

        if(rotationDiffY > ClientSideAttributes::minRotation)
        {
            if(serverRotY > clientRotY)
            {
                playerObjects[i]->collisionObject->rotation.y += (rotationDiffY * deltaTime);
            }
            else
            {
                playerObjects[i]->collisionObject->rotation.y -= (rotationDiffY * deltaTime);
            }
        }
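
    For reference, a constant-duration correction is sketched below (plain C++; the struct and names are placeholders, not the poster's API). Instead of moving by a fraction of the remaining difference each frame, record the client rotation when a server snapshot arrives and blend towards the server value by elapsed/duration, so the correction always completes in the same time regardless of how large the error was. If the angles are in degrees, wrap the difference into [-180, 180] before blending so the correction takes the short way around.

        struct LookCorrection {
            float startRotY  = 0.0f;   // client rotation when the snapshot arrived
            float targetRotY = 0.0f;   // rotation reported by the server
            float elapsed    = 0.0f;   // seconds since the snapshot arrived
            float blendTime  = 0.1f;   // correction duration; ~2 server updates at 20 Hz
        };

        float updateLook(LookCorrection& c, float deltaTime)
        {
            c.elapsed += deltaTime;
            float t = c.elapsed / c.blendTime;
            if (t > 1.0f) t = 1.0f;    // clamp: correction finished, hold the server value
            return c.startRotY + (c.targetRotY - c.startRotY) * t;
        }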

    Read the article

  • How are effects like those in "Autodesk Fluid FX" implemented using OpenGL ES?

    - by afds
    How are these kinds of effects technically implemented using OpenGL ES? Are they performing the simulation on the GPU (using shaders), or on the CPU while using some smart vertex positioning and texturing? Why does it appear so fast (in terms of performance)? You can check a video of the app here: http://www.youtube.com/watch?v=F4KOk6QP6kQ Edit: here is the presentation for the app: http://www.futuregameon.com/FGO2010_JosStam.pdf

    Read the article

  • infer half vector length in BRDF

    - by cician
    it's my first question on stack. Is it possible to infer length of the half angle vector for specular lighting from N·L and N·V without the whole view and light vectors? I may be completely off-track, but I have this gut feeling it's possible... Why? I'm working on a skin shader and I'm already doing one texture lookup with N·L+N·E and one texture lookup for specular with N·H+N·V. The latter one can be transformed into N·L+N·E lookup if only I had the half vector length. Doing so could simplify the shader a bit and move some operations into the pre-computed lookup texture. It would make a huge difference since I'm trying to squeeze as much functionality as possible to a single pass mobile version so instruction count matters. Thanks.
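
    For reference, the relevant identities are sketched below (standard vector algebra, not from the original post; L, V and N are the unit light, view and normal vectors, and H = (L+V)/||L+V||):

        \[
        \|L + V\|^2 = \|L\|^2 + \|V\|^2 + 2\,(L\cdot V) = 2 + 2\,(L\cdot V),
        \qquad
        N\cdot H = \frac{N\cdot L + N\cdot V}{\|L + V\|}.
        \]

    So the half-vector length is fixed by L·V, and N·L together with N·V do not determine L·V in general: L and V can each rotate about N without changing either dot product while L·V changes. Some additional scalar (L·V itself, or ||L+V||) would have to be computed or stored alongside the two dot products.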

    Read the article

  • Do I need the 'w' component in my Vector class?

    - by bobobobo
    Assume you're writing matrix code that handles rotation, translation, etc. for 3D space. The transformation matrices have to be 4x4 to fit the translation component in. However, you don't actually need to store a w component in the vector, do you? Even in perspective division, you can simply compute and store w outside of the vector, and perspective-divide before returning from the method. For example:

        // post multiply vec2 = matrix * vector
        Vector operator*( const Matrix & a, const Vector& v )
        {
            Vector r;
            // do matrix mult
            r.x = a._11*v.x + a._12*v.y ...
            real w = a._41*v.x + a._42*v.y ...
            // perspective divide
            r /= w;
            return r;
        }

    Is there a point in storing w in the Vector class?

    Read the article

  • How do I make my character slide down high-angled slopes?

    - by keinabel
    I am currently working on my character's movement in Unity3D. I managed to make him move relative to the mouse cursor. I set a slope limit of 45°, which does not allow the character to walk up mountains with higher angles, but he can still jump up them. How do I make him slide down again when he has jumped onto a place with too steep a slope? Thanks in advance. Edit: code snippet of my basic movement.

        using UnityEngine;
        using System.Collections;

        public class BasicMovement : MonoBehaviour
        {
            private float speed;
            private float jumpSpeed;
            private float gravity;
            private float slopeLimit;
            private Vector3 moveDirection = Vector3.zero;

            void Start()
            {
                PlayerSettings settings = GetComponent<PlayerSettings>();
                speed = settings.GetSpeed();
                jumpSpeed = settings.GetJumpSpeed();
                gravity = settings.GetGravity();
                slopeLimit = settings.GetSlopeLimit();
            }

            void Update()
            {
                CharacterController controller = GetComponent<CharacterController>();
                controller.slopeLimit = slopeLimit;
                if (controller.isGrounded)
                {
                    moveDirection = new Vector3(Input.GetAxis("Horizontal"), 0, Input.GetAxis("Vertical"));
                    moveDirection = transform.TransformDirection(moveDirection);
                    moveDirection *= speed;
                    if (Input.GetButton("Jump"))
                    {
                        moveDirection.y = jumpSpeed;
                    }
                }
                moveDirection.y -= gravity * Time.deltaTime;
                controller.Move(moveDirection * Time.deltaTime);
            }
        }

    Read the article

  • Procedural Planets, Heightmaps and Textures

    - by henryprescott
    I am currently working on an OpenGL procedural planet generator. I hope to use it for a space RPG that will not allow players to go down to the surface of a planet, so I have ignored anything ROAM-related. At the moment I am drawing a cube with VBOs and mapping it onto a sphere. I am familiar with most fractal heightmap-generating techniques and have already implemented my own version of midpoint displacement (not that useful in this case, I know). My question is: what is the best way to procedurally generate the heightmap? I have looked at libnoise, which allows me to make tileable heightmaps/textures, but as far as I can see I would need to generate a net like this, leaving the tiling obvious. Could anyone advise me on the best route to take? Any input would be much appreciated. Thanks, Henry.

    Read the article
