Search Results

Search found 19281 results on 772 pages for 'blender game engine'.


  • Collision detection of player larger than clipping tile

    - by user1306322
    I want to know how to check for collisions efficiently in the case where the player's box is larger than a map tile. On the left is my usual case, where I make 8 checks, one against each surrounding tile; but with the right one that approach would be much less efficient. (Picture of the two cases: on the left is the simple case, on the right is the one I need help with.) http://i.stack.imgur.com/k7q0l.png How should I handle the right case?
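
    One way to handle the right case (a sketch, not from the original post; the names and the tile size T are illustrative): instead of enumerating a fixed set of neighbours, derive the range of tile columns and rows that the player's box overlaps and test exactly those tiles, however large the box is.

        // Assumes an axis-aligned grid of square tiles of size T.
        int minTileX = (int)(player.x / T);                    // leftmost column touched
        int maxTileX = (int)((player.x + player.width) / T);   // rightmost column touched
        int minTileY = (int)(player.y / T);                    // topmost row touched
        int maxTileY = (int)((player.y + player.height) / T);  // bottommost row touched

        for (int ty = minTileY; ty <= maxTileY; ++ty)
            for (int tx = minTileX; tx <= maxTileX; ++tx)
                if (map.isSolid(tx, ty))
                    resolveCollision(player, tx, ty);  // same response as the small case

    For a player smaller than a tile this degenerates to at most four checks, so it replaces the 8-check special case rather than adding to it.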

    Read the article

  • Problem with alleg42.dll / program crashes / Allegro & Codeblocks

    - by user24152
    I'm having a serious problem with Allegro. The program should display random pixels on the screen, but when I build and run it I get the error message quoted below. Here is the full code of my program:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>
        #include "allegro.h"

        #define Text_Color_Red makecol(255,0,0)

        int main()
        {
            int ret;
            int color_depth = 32;
            int x;
            int y;
            int red;
            int green;
            int blue;
            int color;

            // init allegro
            allegro_init();

            // install keyboard
            install_keyboard();

            // set color depth to 32 bits
            set_color_depth(color_depth);

            // init random seed
            srand(time(NULL));

            // init video mode to 640 x 480
            ret = set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0);
            if (ret != 0)
            {
                allegro_message(allegro_error);
                return 1;
            }

            // Display string
            textprintf(screen, font, 0, 0, 10, 0, Text_Color_Red,
                       "Screen Resolution is: %dx%d -- Press ESC to quit !", SCREEN_W, SCREEN_H);

            // display pixels until ESC key is pressed
            while (!key[KEY_ESC])
            {
                // set a random location
                x = 10 + rand() % (SCREEN_W - 20);
                y = 10 + rand() % (SCREEN_H - 20);

                // set a random color
                red = rand() % 255;
                green = rand() % 255;
                blue = rand() % 255;
                color = makecol(red, green, blue);

                // draw the pixel
                putpixel(screen, x, y, color);
            }

            // quit allegro
            allegro_exit();
        }
        END_OF_MAIN()

    Error message: AllegroPixels1.exe has encountered a problem and needs to close. We are sorry for the inconvenience. Error signature: AppName: allegropixels1.exe AppVer: 0.0.0.0 ModName: alleg42.dll ModVer: 4.2.3.0 Offset: 0006c05c

    I am using Windows XP inside a virtual machine under Parallels 7.0.
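
    One thing worth checking (an educated guess, assuming Allegro 4.2's text API): textprintf() takes five fixed arguments before the format string (bitmap, font, x, y, color), but the call above passes six (0, 0, 10, 0, Text_Color_Red). Everything shifts one slot, the literal 0 is read as the format-string pointer, and a NULL format string crashing inside alleg42.dll would match the report exactly. The six-argument form matches textprintf_ex(), which takes an extra background color:

        // Hypothetical fix using the _ex variant; -1 means a transparent background.
        textprintf_ex(screen, font, 0, 0, Text_Color_Red, -1,
                      "Screen Resolution is: %dx%d -- Press ESC to quit !", SCREEN_W, SCREEN_H);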

    Read the article

  • Writing a basic shader for large input files

    - by Zoltan Varadi
    I started writing a shader for my iOS app, and instead of starting from scratch I used the tutorial here: http://www.raywenderlich.com/3664/opengl-es-2-0-for-iphone-tutorial I wrote an import function, first to import Wavefront .obj models. My problem is that I can't handle larger inputs (with a simple cube it was working). I realized that the indices array is an array of GLubyte values, which is an unsigned char, so as a result I can't have more than 256 indices. I modified it to GLuint, but then I only get a blank screen. What else needs to be modified? P.S.: the source can be downloaded from here: http://d1xzuxjlafny7l.cloudfront.net/downloads/HelloOpenGL.zip
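
    A likely missing piece (a guess, assuming the tutorial's draw call): the type argument of glDrawElements has to match the new element type, and OpenGL ES 2.0 only guarantees 8- and 16-bit indices; 32-bit GLuint indices need the GL_OES_element_index_uint extension, which is why switching the array to GLuint alone can produce a blank screen. GLushort is the portable upgrade and already allows 65536 indices:

        // Sketch: indices stored as GLushort, drawn with the matching type constant.
        const GLushort indices[] = { 0, 1, 2 /* ... */ };
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);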

    Read the article

  • Spherical to Cartesian Coordinates

    - by user1258455
    Well, I'm reading Frank Luna's DirectX 10 book and, while trying to understand the first demo, I found something that's not very clear, at least for me. In the updateScene method, when I press A, S, W or D, the angles mTheta and mPhi change, but after that there are three lines of code that I don't understand exactly:

        // Convert Spherical to Cartesian coordinates: mPhi measured from +y
        // and mTheta measured counterclockwise from -z.
        float x = 5.0f*sinf(mPhi)*sinf(mTheta);
        float z = -5.0f*sinf(mPhi)*cosf(mTheta);
        float y = 5.0f*cosf(mPhi);

    I mean, the comment says what they do, converting spherical coordinates to Cartesian coordinates, but mathematically, why? Why is the x value calculated from the product of the sines of both angles, z from the product of a sine and a cosine, and y from just a cosine? After that, those values (x, y and z) are used to build the view matrix. The book doesn't explain mathematically why those values are calculated like that (and I didn't find anything to help me understand it in the first part of the book, "Mathematical Prerequisites"), so it would be good if someone could explain what exactly happens in those lines, or just give me a link that helps me understand the math part. Thanks in advance!
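
    A sketch of the derivation, using only the conventions stated in the comment (radius r = 5, phi measured down from the +y axis, theta measured counterclockwise from -z). Split the position vector into its height and its shadow on the xz-plane:

        y   = r*cos(phi)     (adjacent side of the angle phi at the +y axis)
        rho = r*sin(phi)     (length of the projection onto the xz-plane)

    The projection of length rho starts on the -z axis when theta = 0 and swings toward +x as theta grows, so within the xz-plane:

        z = -rho*cos(theta) = -r*sin(phi)*cos(theta)
        x =  rho*sin(theta) =  r*sin(phi)*sin(theta)

    Sanity checks: theta = 0 gives (x, z) = (0, -r*sin(phi)), a point on -z as promised; phi = 0 gives (0, r, 0), straight up, with both sines zeroing out x and z. That is exactly the code, with r hard-coded to 5.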

    Read the article

  • Draw contour around object in OpenGL

    - by Maciekp
    I need to draw a contour around 2D objects in 3D space. I tried drawing lines around the object (plus points to fill the gaps), but due to the line width some part of it (~50%) was covering the object. I tried to use the stencil buffer to eliminate this problem, but I got something like this (the contour is green): http://goo.gl/OI5uc (sorry, I can't post images due to my reputation). You can see (where the arrow points) that some parts of the line are behind the object and some are above it. This changes when I move the camera, but there is always some part that covers the object. Here is the code I use for drawing the objects:

        glColorMask(1,1,1,1);
        std::list<CObjectOnScene*>::iterator objIter = ptr->objects.begin(),
                                             objEnd  = ptr->objects.end();
        int countStencilBit = 1;
        while (objIter != objEnd)
        {
            glColorMask(1,1,1,1);
            glStencilFunc(GL_ALWAYS, countStencilBit, countStencilBit);
            glStencilOp(GL_REPLACE, GL_KEEP, GL_REPLACE);
            (*objIter)->DrawYourVertices();

            glStencilFunc(GL_NOTEQUAL, countStencilBit, countStencilBit);
            glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
            (*objIter)->DrawYourBorder();

            ++objIter;
            ++countStencilBit;
        }

    I've tried different settings of the stencil buffer, but I always get something like that. So my questions: 1. Am I setting the stencil buffer wrong? 2. Are there other simple ways to create a contour on such objects? Thanks in advance. EDIT: 1. I don't have normals of the objects. 2. Objects can be concave. 3. I can't use shaders (see below why).
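
    Two hedged observations, based only on the code above. First, the parts of the border dipping behind the object look like a depth-test problem rather than a stencil problem: the border is drawn at the same depth as the object's edge, so it z-fights. Taking depth out of the border pass and letting the stencil alone do the clipping is one common fix:

        // Sketch: same draw helpers as above, one object, stencil ref value 1.
        glStencilFunc(GL_ALWAYS, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        obj->DrawYourVertices();            // object stamps 1s into the stencil

        glDisable(GL_DEPTH_TEST);           // border can no longer z-fight
        glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        obj->DrawYourBorder();              // kept only outside the object's pixels
        glEnable(GL_DEPTH_TEST);

    The trade-off is that the outline also shows through objects in front; if that matters, the depth test can stay on with glDepthFunc(GL_LEQUAL) plus a small polygon offset instead. Second, countStencilBit is passed as both reference and mask, but 1, 2, 3, ... are not disjoint bit patterns (3 shares bits with 1 and 2), so with more than two objects the NOTEQUAL test can pass or fail in surprising ways; distinct reference values with a full mask (0xFF) avoid that.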

    Read the article

  • Cutting out smaller rectangles from a larger rectangle

    - by Mauro Destro
    The world is initially a rectangle. The player can move on the world border and then "cut" the world via orthogonal paths (not oblique). When the player reaches the border again, I have a list of the path segments they just made. I'm trying to calculate and compare the two areas created by the cut and select the smaller one to remove it from the world. After the first iteration the world is no longer a rectangle, and the player must move on the border of this new shape. How can I do this? Is it possible to have a non-rectangular path? How can I move the player character only on the path? EDIT: Here you see an example of what I'm trying to achieve: initial screen layout; the character moves inside the world and then reaches the border again; the segment of the border contained in the smaller area is deleted, and the last path becomes part of the world border; the character moves again inside the world; segments of the border contained in the smaller area are deleted; and so on.
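
    For the "compare the two areas" step, a sketch (names are illustrative): split the old border at the cut's two endpoints, append the cut path to each half so you get two closed polygons, and keep whichever reports the larger area. The shoelace formula handles any simple rectilinear polygon, so nothing here assumes the world stays rectangular:

        #include <vector>
        #include <cmath>

        struct Point { float x, y; };

        // Unsigned area of a simple polygon given as an ordered vertex loop.
        float polygonArea(const std::vector<Point>& poly)
        {
            float sum = 0.0f;
            for (size_t i = 0; i < poly.size(); ++i)
            {
                const Point& a = poly[i];
                const Point& b = poly[(i + 1) % poly.size()];  // wraps to close the loop
                sum += a.x * b.y - b.x * a.y;
            }
            return std::fabs(sum) * 0.5f;
        }

    Storing the world border itself as that same ordered vertex loop also answers the movement question: the character's position can be kept as (segment index, offset along segment), which confines them to the path by construction.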

    Read the article

  • Using multiplication and division with delta time

    - by tesselode
    Using delta time with addition and subtraction is easy:

        player.x += 100 * dt

    However, multiplication and division complicate things a bit. For example, let's say I want the player to double his speed every second:

        player.x = player.x * 2 * dt

    I can't do this, because it'll slow the player down (unless delta time is really high). Division is the same way, except it'll speed things way up. How can I handle multiplication and division with delta time?
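
    A sketch of the usual answer: additive rates scale linearly with time, but multiplicative growth compounds, so the per-second factor has to be raised to the power dt. Multiplying by 2^dt each frame compounds to 2^(total seconds), which is exactly "doubles every second" at any frame rate:

        #include <cmath>

        const double growthPerSecond = 2.0;          // doubles every second
        speed *= std::pow(growthPerSecond, dt);      // frame-rate independent

        // Division is the same with a factor below 1: halving every second is
        // speed *= std::pow(0.5, dt);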

    Read the article

  • Most efficient 3d depth sorting for isometric 3d in AS3?

    - by AttackingHobo
    I am not using the built-in 3D MovieClips, and I am storing the 3D location my own way. I have read a few different articles on sorting depths, but most of them seem inefficient. I had a really efficient way to do it in AS2, but it was really hacky, and I am guessing there are more efficient ways that do not rely on possibly unreliable hacks. What is the most efficient way to sort display depths using AS3 with the Z depths I already have?
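
    One common pattern (a sketch, from memory of the AS3 display-list API): keep the objects in an Array alongside their stored Z, sort it once per frame, for example with Array.sortOn("z", Array.NUMERIC | Array.DESCENDING), then walk the sorted array and call setChildIndex(child, i) on the container for each index i. Since the list is nearly sorted between frames, an insertion sort over the previous frame's order can beat a general-purpose sort, and skipping setChildIndex calls when the index is already correct avoids needless display-list churn.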

    Read the article

  • How can I create a sprite sheet from a 3D model (3D Studio Max)?

    - by OopsUser
    I built a simple 3D model of a car, with a simple animation in which its wheels are turning. Now I want to create a sprite sheet. The only way I know how to do it is to render 20 frames manually from the front, combine them into a strip manually, then rotate the car by 10 degrees, render 20 frames of the animation again, combine them into a strip... Is there a way to do it automatically? Without rotating the scene manually, rendering it and combining the images. It's a lot of work and takes more time than the modelling itself... Thanks

    Read the article

  • Fast lighting with multiple lights

    - by codymanix
    How can I implement fast lighting with multiple lights? I don't want to restrain the player; he can place an unlimited number of possibly overlapping (point) lights into the level. The problem is that shaders containing the dynamic loops needed to calculate the lighting tend to be very slow. I had the idea that it might be possible to compile a shader n times, where n is the number of lights: if n is known at compile time, the loops can be unrolled automatically. Is it possible to generate n versions of the same shader with just a different number of lights? At runtime I could then decide which shader to use for which part of the level.
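
    Yes, this is the standard "shader permutation" trick. A sketch in OpenGL/GLSL terms (illustrative names; HLSL offers the same via compile-time macros): prepend a #define to the source before compiling, loop over NUM_LIGHTS in the shader, and cache one compiled program per light count.

        #include <string>
        #include <cstdio>

        // Assumes `source` contains no #version line of its own and loops with
        // `for (int i = 0; i < NUM_LIGHTS; ++i) { ... }`.
        GLuint compileLightingShader(const std::string& source, int numLights)
        {
            char header[64];
            std::snprintf(header, sizeof(header),
                          "#version 120\n#define NUM_LIGHTS %d\n", numLights);
            std::string full = header + source;

            GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
            const char* src = full.c_str();
            glShaderSource(shader, 1, &src, nullptr);
            glCompileShader(shader);
            return shader;   // cache per numLights, e.g. in a std::map
        }

    In practice the permutation count is capped (say 1 to 8 lights per batch) and geometry is bucketed by how many lights actually touch it, which also bounds compile time.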

    Read the article

  • Nice function for "rolling score up"?

    - by bobobobo
    I'm adding to the player's score, and I'm using a per-frame formula like:

        int score, displayedScore;  // score is the ACTUAL score the player has;
                                    // displayedScore is what is shown this frame
                                    // (the creeping/"rolling" number)

        float disparity = score - displayedScore;
        int d = disparity * .1f;               // add 1/10 of the difference
        if (!d) d = signum(disparity);         // last 10 go by 1's
        displayedScore += d;

    where

        inline int signum(float val)
        {
            if (val > 0) return 1;
            else if (val < 0) return -1;
            else return 0;
        }

    So it kind of works: it makes big changes rapidly, then creeps in the last few one at a time. But I'm looking for better (or possibly well-known?) score-creeping functions. Any one?
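
    A frame-rate-independent variant of the same idea (a sketch; the half-life constant is illustrative): treat the gap as decaying exponentially toward zero, so the creep looks identical at 30 and 144 FPS, and keep the "last few tick by 1" rule:

        #include <cmath>

        void updateDisplayedScore(float dt)
        {
            const float halfLife = 0.25f;                  // seconds for the gap to halve
            float t = 1.0f - std::pow(0.5f, dt / halfLife);
            float gap = (float)(score - displayedScore);
            int d = (int)(gap * t);
            if (d == 0) d = signum(gap);                   // final creep, one point at a time
            displayedScore += d;
        }

    The per-frame "move 1/10 of the gap" rule is the special case of this with a fixed timestep, so tuning halfLife can reproduce the current feel exactly at your target frame rate.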

    Read the article

  • Storing a Hex Grid

    - by Pedro Caetano
    I've been creating a small hex-grid framework for Unity3D and have come to the following dilemma. This is my coordinate system (taken from here; posted as a plain link because I'm a new user). It all works pretty nicely, except for the fact that I have no idea how to store it. I originally intended to store this in a 2D array and use images to generate my maps. One problem was that it had negative values (this was easily fixed by offsetting the coordinates a bit). However, due to this coordinate system, such an image or bitmap would have to be diamond shaped, and since these structures are square shaped, this would cause a lot of headaches even if I hack something together. Is there anything I'm missing that could fix this? I recall seeing a forum post regarding this in the Unity forums, but I can no longer find the link. Is writing a set of coordinate translators the best solution here? If you guys think it would be helpful, I can post code and images of my problem.
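
    Writing a small pair of coordinate translators is indeed the usual fix; a sketch, assuming an axial (q, r) hex system like the one linked: keep axial coordinates for all game logic, and convert to "offset" coordinates only when indexing the square storage array, so the diamond folds onto an ordinary rectangle that maps 1:1 to image pixels.

        struct Hex { int q, r; };   // axial coordinates (illustrative names)

        // Axial -> "odd-r" offset: every other row shifts half a cell.
        int colOf(Hex h) { return h.q + (h.r - (h.r & 1)) / 2; }
        int rowOf(Hex h) { return h.r; }

        // tile = map[rowOf(h) + rowOffset][colOf(h) + colOffset];

    The inverse (offset back to axial) is the same arithmetic rearranged, and the rowOffset/colOffset constants absorb the negative coordinates you are already compensating for.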

    Read the article

  • Very basic OpenGL ES 2 error

    - by user16547
    This is an incredibly simple shader, yet I'm having a lot of trouble understanding what's wrong with it. I'm trying to send a float to my fragment shader. Its purpose is to adjust the alpha of the fragment colour. Here is my fragment shader:

        precision mediump float;

        uniform sampler2D u_Texture;
        uniform float u_Alpha;

        varying vec2 v_TexCoordinate;

        void main()
        {
            gl_FragColor = texture2D(u_Texture, v_TexCoordinate);
            gl_FragColor.a *= u_Alpha;
        }

    and below is my rendering method. I get a 1282 (invalid operation) on the GLES20.glUniform1f(u_Alpha, alpha); line. alpha is 1 (but I tried other values as well) and transparent is true:

        public void render() {
            GLES20.glUseProgram(mProgram);

            if (transparent) {
                GLES20.glEnable(GLES20.GL_BLEND);
                GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
                GLES20.glUniform1f(u_Alpha, alpha);
            }

            Matrix.setIdentityM(mModelMatrix, 0);
            Matrix.rotateM(mModelMatrix, 0, angle, 0, 0, 1);
            Matrix.translateM(mModelMatrix, 0, x, y, z);
            Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
            Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
            GLES20.glUniformMatrix4fv(u_MVPMatrix, 1, false, mMVPMatrix, 0);

            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
            GLES20.glVertexAttribPointer(a_Position, 3, GLES20.GL_FLOAT, false, 12, 0);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[1]);
            GLES20.glVertexAttribPointer(a_TexCoordinate, 2, GLES20.GL_FLOAT, false, 8, 0);

            // snowTexture start
            GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
            GLES20.glUniform1i(u_Texture, 0);

            GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ibo[0]);
            GLES20.glDrawElements(GLES20.GL_TRIANGLE_STRIP, indices.capacity(),
                                  GLES20.GL_UNSIGNED_BYTE, 0);

            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
            GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);

            if (transparent) {
                GLES20.glDisable(GLES20.GL_BLEND);
            }

            GLES20.glUseProgram(0);
        }
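
    A hedged diagnosis, based on what GL_INVALID_OPERATION means for glUniform*: it is raised when no program is in use or when the location does not belong to the program currently in use; a location of -1 is silently ignored rather than an error. Since glUseProgram(mProgram) is clearly called first, the usual suspect is that u_Alpha was fetched with glGetUniformLocation from a different program object than mProgram (easy to do with several shaders around), or was fetched before the program was linked. Logging the value of u_Alpha and re-fetching it right after linking mProgram should narrow it down quickly.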

    Read the article

  • Why aren't my other two constant buffers being updated in the shader?

    - by Paul Ske
    I posted previously about my two dynamic buffers not updating the constant buffers in the shader. The tessellation buffer isn't working, because I have to manually update the tessellation factor inside the hull shader. I believe the camera position isn't updating either, because when I perform distance adaptation the far edges are more tessellated than what's actually in front of the camera. I have all the buffers set to dynamic. Inside the render loop I have them set as:

        ID3D11Buffer *multiBuffers[3];
        devcon->VSSetConstantBuffers(0, 3, multiBuffers);
        ...
        devcon->DSSetConstantBuffers(0, 3, multiBuffers);

    I only got that from a DirectX sample. Inside the shader file I have the three cbuffer structs:

        cbuffer ConstantBuffer
        {
            float4x4 WorldMatrix;
            float4x4 viewMatrix;
            float4x4 projectionMatrix;
            float4x4 modelWorldMatrix;  // the rotation matrix
            float3 lightvec;            // the light's vector
            float4 lightcol;            // the light's color
            float4 ambientcol;          // the ambient light's color
            bool isSelected;
        }

        cbuffer cameraBuffer
        {
            float3 cameraDirection;
            float padding;
        }

        cbuffer TessellationBuffer
        {
            float tessellationAmount;
            float3 padding2;
        }

    Am I missing something, or would anyone know why my buffers aren't updating in the shader?
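
    Two things to verify, sketched with illustrative names since the surrounding code isn't shown. First, as written above, multiBuffers is declared but never filled; the three ID3D11Buffer pointers must be assigned into it before the Set call. Second, a dynamic buffer's contents only change when you rewrite them each frame with Map/WRITE_DISCARD (or UpdateSubresource for non-dynamic buffers), and explicit register bindings in HLSL remove any doubt about which slot each cbuffer occupies:

        ID3D11Buffer* multiBuffers[3] = { constantBuffer, cameraBuffer, tessBuffer };

        D3D11_MAPPED_SUBRESOURCE mapped;
        devcon->Map(tessBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
        memcpy(mapped.pData, &tessData, sizeof(tessData));  // CPU-side mirror struct
        devcon->Unmap(tessBuffer, 0);

        devcon->DSSetConstantBuffers(0, 3, multiBuffers);

        // In the shader file, pin the slots to match the 0..2 range above:
        //   cbuffer ConstantBuffer     : register(b0) { ... }
        //   cbuffer cameraBuffer       : register(b1) { ... }
        //   cbuffer TessellationBuffer : register(b2) { ... }

    Also note that the hull and domain shaders have their own bind points (HSSetConstantBuffers/DSSetConstantBuffers), so a buffer bound for the VS is not automatically visible to the tessellation stages.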

    Read the article

  • Bump mapping with 2 normal maps

    - by DorkMonstuh
    I was wondering if it's actually possible to do bump mapping with 2 normal maps. I have tried doing it this way, however I get a function-overload error on max and dot:

        uniform sampler2D n_mapTex;
        uniform sampler2D n_mapTex2;
        uniform sampler2D refTex;

        varying mediump vec2 TexCoord;
        varying mediump float vTime;

        void main()
        {
            mediump vec4 wave  = texture2D(n_mapTex,  TexCoord - vTime);
            mediump vec4 wave2 = texture2D(n_mapTex2, TexCoord + vTime);
            mediump vec4 bump  = mix(wave2, wave, 0.5);

            // this extracts the normals from the combined normal maps
            mediump vec4 normal = normalize(bump.xyzw * 2.0 - 1.0);

            // determines light position
            mediump vec3 lightPos = normalize(vec3(0.0, 1.0, 3.0));
            mediump float diffuse = max(dot(normal, lightPos), 0.0);

            gl_FragColor = mix(texture2D(refTex, TexCoord), bump, 0.5);
        }
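
    The overload error is most likely the vec4/vec3 mismatch (a reading of the code above, not a tested fix): normal is a vec4 while lightPos is a vec3, and GLSL has no dot(vec4, vec3) overload, so dot fails to resolve and max fails right after it. Extracting only three components, mediump vec3 normal = normalize(bump.xyz * 2.0 - 1.0);, lets both calls resolve. Two smaller observations: diffuse is computed but never used in gl_FragColor, so the lighting currently has no visible effect, and averaging two normal maps with mix and renormalizing is only an approximation of combining them, though a common and usually acceptable one for water-style effects.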

    Read the article

  • Switching between Discrete and Integrated GPUs

    - by void-pointer
    Hello everyone, I develop CUDA applications on my Alienware M17x portable back-breaker, which has two discrete GTX 285M GPUs and one integrated GeForce 9400M GPU. I can currently switch between them using NVIDIA's software, but I would like the ability to do so within my applications for purposes of benchmarking and general convenience. Apparently this requires the "NDA version" of NVIDIA's Driver API, which I know not how to obtain. Would using this API be the only way to accomplish what I seek, and if so, how would I obtain it? A solution using Windows APIs would also be acceptable, though less preferable to one which would leverage a cross-platform API. I have created a similar thread concerning the matter on NVIDIA's forum, which is down at the time of this writing. Thanks for reading my question; it is much appreciated!

    Read the article

  • Box2D Difference Between WorldCenter and Position

    - by Free Lancer
    So this problem has been bothering me for a couple of days now. First off, what is the difference between, say, Body.getWorldCenter() and Body.getPosition()? I heard that WorldCenter might have to do with the center of gravity or something. Second, when I create a Box2D body for a sprite, the body is always at the lower left corner. I check it by printing a rectangle of 1 pixel around box.getWorldCenter(). From what I understand, the body should be in the center of the sprite and its bounding box should wrap around the sprite, correct? Here's an image of what I mean (the sprite is red, the body blue). Here's some code:

    Body creator:

        public static Body createBoxBody(final World pPhysicsWorld, final BodyType pBodyType,
                                         final FixtureDef pFixtureDef, Sprite pSprite) {
            float pRotation = 0;
            float pCenterX = pSprite.getX() + pSprite.getWidth() / 2;
            float pCenterY = pSprite.getY() + pSprite.getHeight() / 2;
            float pWidth = pSprite.getWidth();
            float pHeight = pSprite.getHeight();

            final BodyDef boxBodyDef = new BodyDef();
            boxBodyDef.type = pBodyType;
            //boxBodyDef.position.x = pCenterX / Constants.PIXEL_METER_RATIO;
            //boxBodyDef.position.y = pCenterY / Constants.PIXEL_METER_RATIO;
            boxBodyDef.position.x = pSprite.getX() / Constants.PIXEL_METER_RATIO;
            boxBodyDef.position.y = pSprite.getY() / Constants.PIXEL_METER_RATIO;

            Vector2 v = new Vector2(boxBodyDef.position.x * Constants.PIXEL_METER_RATIO,
                                    boxBodyDef.position.y * Constants.PIXEL_METER_RATIO);
            Gdx.app.log("@Physics", "createBoxBody():: Box Position: " + v);

            // Temporary Box shape of the Body
            final PolygonShape boxPoly = new PolygonShape();
            final float halfWidth = pWidth * 0.5f / Constants.PIXEL_METER_RATIO;
            final float halfHeight = pHeight * 0.5f / Constants.PIXEL_METER_RATIO;
            boxPoly.setAsBox(halfWidth, halfHeight);  // set the anchor point to be the center of the sprite
            pFixtureDef.shape = boxPoly;

            final Body boxBody = pPhysicsWorld.createBody(boxBodyDef);
            Gdx.app.log("@Physics", "createBoxBody():: Box Center: "
                        + boxBody.getPosition().mul(Constants.PIXEL_METER_RATIO));
            boxBody.createFixture(pFixtureDef);
            boxBody.setTransform(boxBody.getWorldCenter(),
                                 MathUtils.degreesToRadians * pRotation);
            boxPoly.dispose();
            return boxBody;
        }

    Making the sprite:

        public Car(Texture texture, float pX, float pY, World world) {
            super("Car");
            mSprite = new Sprite(texture);
            mSprite.setSize(mSprite.getWidth() / 6, mSprite.getHeight() / 6);
            mSprite.setPosition(pX, pY);
            mSprite.setOrigin(mSprite.getWidth() / 2, mSprite.getHeight() / 2);

            FixtureDef carFixtureDef = new FixtureDef();
            // Set the Fixture's properties, like friction, using the car's shape
            carFixtureDef.restitution = 1f;
            carFixtureDef.friction = 1f;
            carFixtureDef.density = 1f;  // needed to rotate body using applyTorque
            mBody = Physics.createBoxBody(world, BodyDef.BodyType.DynamicBody,
                                          carFixtureDef, mSprite);
        }
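
    On the terminology (standard Box2D semantics, stated from memory): getPosition() returns the body's origin, the point set via BodyDef.position and moved with setTransform(); getWorldCenter() returns the body's center of mass, which Box2D computes from the fixtures. For a single symmetric box created with setAsBox() the two coincide, because setAsBox() centers the shape on the body origin. That is also why the body appears at the sprite's lower-left corner here: the code sets boxBodyDef.position to pSprite.getX()/getY(), the corner, while the box extends half a width and half a height around that point. The commented-out pCenterX/pCenterY lines look like the correct version; placing the body origin at the sprite's center should make the debug rectangle land in the middle of the sprite.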

    Read the article

  • Problems with texture mapping in modern OpenGL 3.3 using GLSL #version 150

    - by RubyKing
    Hi all, I'm trying to do texture mapping using modern OpenGL and GLSL 150. The problem is that the texture shows, but has a weird flicker; I can show a video here: http://www.youtube.com/watch?v=xbzw_LMxlHw I have everything set up as best I can: my texcoords are in my vertex array and sent up to OpenGL, my fragment color is set from the texture and texel values, my vertex shader passes the texture coordinates on to the fragment shader, and my ins and outs are set up. I still don't know what I'm missing that could be causing that flicker. Here is my code.

    FRAGMENT SHADER:

        #version 150

        uniform sampler2D texture;

        in vec2 texture_coord;
        varying vec3 texture_coordinate;

        void main(void)
        {
            gl_FragColor = texture(texture, texture_coord);
        }

    VERTEX SHADER:

        #version 150

        in vec4 position;

        out vec2 texture_coordinate;
        out vec2 texture_coord;

        uniform vec3 translations;

        void main()
        {
            texture_coord = (texture_coordinate);
            gl_Position = vec4(position.xyz + translations.xyz, 1.0);
        }

    The last bit here is my vertex array with texture coordinates:

        GLfloat vVerts[] = { 0.5f, 0.5f, 0.0f, 0.0f, 1.0f,    // x, y, z, tex x, tex y
                             0.0f, 0.5f, 0.0f, 1.0f, 1.0f,
                             0.0f, 0.0f, 0.0f, 0.0f, 0.0f,
                             0.5f, 0.0f, 0.0f, 1.0f, 0.0f };

    If you need to see all the code in its fullest glory, here is a link to every file: http://ideone.com/7kQN3 Thank you for your help.
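
    A hedged reading of the shaders above: in the vertex shader, texture_coordinate is declared as an out but never written, and texture_coord is copied from it, so the fragment shader samples at undefined, frame-to-frame garbage coordinates, which would look exactly like flicker. Declaring the per-vertex attribute as in vec2 texture_coordinate; (and binding its attribute location to the two texture floats in vVerts) should be the first fix. Three smaller #version 150 issues that may also bite, depending on how strict the driver is: varying is not valid in 150 (use in/out pairs), gl_FragColor is deprecated in favour of a user-declared out vec4 variable, and naming a sampler texture collides with the built-in texture() function being called on the same line.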

    Read the article

  • Can I change the order of these OpenGL / Win32 calls?

    - by Adam Naylor
    I've been adapting the NeHe OpenGL/Win32 code to be more object oriented, and I don't like the way some of the calls are structured. The example has the following pseudo-structure:

        Register window class
        Change display settings with a DEVMODE
        Adjust window rect
        Create window
        Get DC
        Find closest matching pixel format
        Set the pixel format to closest match
        Create rendering context
        Make that context current
        Show the window
        Set it to foreground
        Set it to having focus
        Resize the GL scene
        Init GL

    The points in bold are what I want to move into a rendering class (the rest are what I see as pure Win32 calls), but I'm not sure if I can call them after the Win32 calls. Essentially what I'm aiming for is to encapsulate the Win32 calls into a Platform::Initiate() type method and the rest into a sort of Renderer::Initiate() method. So my question essentially boils down to: "Would OpenGL allow these methods to be called in this order?"

        Register window class
        Adjust window rect
        Create window
        Get DC
        Show the window
        Set it to foreground
        Set it to having focus
        Change display settings with a DEVMODE
        Find closest matching pixel format
        Set the pixel format to closest match
        Create rendering context
        Make that context current
        Resize the GL scene
        Init GL

    (obviously passing through the appropriate window handles and device contexts.) Thanks in advance.
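
    For what it's worth (WGL rules from memory, worth verifying against MSDN): the proposed order should work. The hard constraints are only that SetPixelFormat is called on the DC before wglCreateContext, that the context exists before wglMakeCurrent, and that every GL call (the resize and init steps) happens after the context is current. The ShowWindow/foreground/focus ordering is cosmetic, and ChangeDisplaySettings is independent of the GL setup, although calling it after the window is visible produces an on-screen mode switch rather than one hidden behind window creation.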

    Read the article

  • OpenGL - Rendering from part of an index and vertex array depending on an element count

    - by user1423893
    I'm currently drawing my shapes as lines by using a VAO and then assigning the dynamic vertices and indices each frame:

        // Bind VAO
        glBindVertexArray(m_vao);

        // Update the vertex buffer with the new data (copy data into the vertex buffer object)
        glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition),
                     m_vertices.data(), GL_DYNAMIC_DRAW);

        // Update the index buffer with the new data (copy data into the index buffer object)
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(unsigned short),
                     indices.data(), GL_DYNAMIC_DRAW);

        glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));

        // Unbind VAO
        glBindVertexArray(0);

    What I would like to do is draw the lines using only part of the data stored in the index and vertex buffer objects. The vertex buffer has its vertices set from an array of defined maximum size:

        std::array<VertexPosition, maxVertices> m_vertices;

    The index buffer has its elements set from an array of defined maximum size:

        std::array<unsigned short, maxIndices> indices = { 0 };

    A running total is kept of the number of vertices and indices needed for each draw call: numVertices and numIndices. Can I not specify that the buffer data contain the entire array and only read from part of it when drawing? For example, using the vertex buffer object:

        glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition),
                     m_vertices.data(), GL_DYNAMIC_DRAW);

    m_vertices.data() = the entire array is stored; numVertices * sizeof(VertexPosition) = the amount of data to read from the entire array. Is this not the correct way to approach this? I do not wish to use std::vector if possible.
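
    The intent is right; the sketch below shows the usual allocate-once, update-a-slice pattern (same names as above). glBufferData with a null pointer reserves storage for the worst case a single time, glBufferSubData rewrites only the used prefix each frame, and the count passed to glDrawElements already limits how many indices are read:

        // Once, at startup: reserve storage for the maximum sizes.
        glBufferData(GL_ARRAY_BUFFER, maxVertices * sizeof(VertexPosition),
                     nullptr, GL_DYNAMIC_DRAW);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, maxIndices * sizeof(unsigned short),
                     nullptr, GL_DYNAMIC_DRAW);

        // Each frame: overwrite only the part in use, then draw that part.
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        numVertices * sizeof(VertexPosition), m_vertices.data());
        glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0,
                        numIndices * sizeof(unsigned short), indices.data());
        glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));

    Re-calling glBufferData with the running total, as in the original code, also "works" but reallocates the buffer every frame; uploading the whole std::array while drawing only numIndices of it is equally valid, just more bandwidth than the SubData slice.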

    Read the article

  • How can I read a portion of one Minecraft world file and write it into another?

    - by RapierMother
    I'm looking to read block data from one Minecraft world and write the data into certain places in another. I have a Minecraft world, let's say "TemplateWorld", and a 2D list of Point objects. I'm developing an application that should use the x and y values of these Points as x and z reference coordinates from which to read constant-sized areas of blocks from the TemplateWorld. It should then write these blocks into another Minecraft world at constant y coordinates, with x & z coordinates determined based on each Point's index in the 2D list. The issue is that, while I've found a decent amount of information online regarding Minecraft world formats, I haven't found what I really need: more of a breakdown by hex address of where/what everything is. For example, I could have the TemplateWorld actually be a .schematic file rather than a world; I just need to be able to read the bytes of the file, know that the actual block data starts always at a certain address (or after a certain instance of FF, etc.), and how it's stored. Once I know that, it's easy as pie to just read the bytes and store them.
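
    A pointer that may save a lot of hex-diving (from general knowledge of the formats, worth double-checking against the format docs): modern Minecraft worlds use the Anvil format, region files named r.x.z.mca, each holding up to 32x32 chunks stored as zlib-compressed NBT (Named Binary Tag) structures, and .schematic files are plain NBT too, with the blocks kept in byte arrays named "Blocks" and "Data". Because everything is compressed and tagged, block data has no fixed byte offset; an NBT library for your language (several exist for Java and Python) is the reliable way to get at the arrays, after which the read-region/write-region logic you describe becomes straightforward array slicing.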

    Read the article

  • How to do pixel-per-pixel modeling in Unity3D?

    - by Kabumbus
    So generally I want to have an API like:

        pixels.addPixel3D(new Pixel3D(0xFF0000, 100, 100, 100));  // (color, position)

    where pixels is some abstraction over a 3D scene object, a point cloud, so to say. It would be of great use in deep-space/star modeling... I want to set each pixel by hand (having no image base or any automatic thing)... The point is to model something like [the reference image] or the live Flash analog [linked in the original post]. How do I do such a thing in Unity?
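
    A sketch of the usual Unity approach (API names from memory of the Unity of that era): build a procedural Mesh whose vertices are the points, give it per-vertex colors, and submit it with point topology so the GPU rasterizes one point per vertex. Fill mesh.vertices and mesh.colors from your Pixel3D list, call mesh.SetIndices(indices, MeshTopology.Points, 0) with indices simply 0..n-1, and render with an unlit vertex-color shader. Meshes cap out around 65,000 vertices (16-bit indices), so a large star field becomes a list of such meshes, which is still only a handful of draw calls for hundreds of thousands of points.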

    Read the article

  • Transform 3d viewport vector to 2d vector

    - by learning_sam
    I am playing around with 3D transformations and came across an issue. I have a 3D vector already within the viewport and need to transform it to a 2D vector (let's say my screen is 10x10). Does that just work like a regular transformation, or is something different here? I.e.: I have the vector a = (2, 1, 0) within the viewport and want the 2D vector. Does that work like this, and if yes, how do I handle the "0" in the 3rd component?
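
    A sketch of the final pipeline step, assuming "within the viewport" means the vector is already in normalized device coordinates (x and y in [-1, 1] after the perspective divide): the z component is not part of the 2D result at all, it only feeds the depth buffer, so it is simply dropped, and the viewport transform maps the remaining x and y to pixels.

        struct Vec3 { float x, y, z; };
        struct Vec2 { float x, y; };

        Vec2 toScreen(Vec3 ndc, float width, float height)
        {
            return { (ndc.x * 0.5f + 0.5f) * width,
                     (1.0f - (ndc.y * 0.5f + 0.5f)) * height };  // screen y grows downward
        }

    On a 10x10 screen, toScreen({2, 1, 0}, 10, 10) yields (15, 0): the y of 1 maps to the top edge, and the x of 2 lands off screen, since anything outside [-1, 1] lies beyond the viewport; the z of 0 changes nothing about either. If instead the vector is still in view or world space, it first needs the projection transform and the divide by w before this step applies.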

    Read the article

  • How are trajectories calculated and transmitted to other players in multiplayer?

    - by giulio
    I play a lot of COD4 and can see tracers from gunfire, missiles, care packages falling from helicopters, etc. There is a lot of activity. I am curious to know the algorithm (at a high level) that manages all this action when you have 20 people on a map shooting each other to death. This question touches on the subject, but doesn't ask for a more in-depth answer as to how developers go about calculating and transmitting movement and collision detection for projectiles, be they missiles, bullets or any other object flying through the air in real time.
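
    The common pattern, sketched generically rather than as COD4's actual protocol: trajectories are not streamed point by point. A projectile's flight is deterministic given its launch state, so the shooter's fire event is sent once and every client re-simulates the same arc locally; only the authoritative hit result comes back from the server.

        #include <cstdint>

        struct FireEvent {             // illustrative wire format, not a real one
            std::uint32_t shooterId;
            std::uint32_t weaponId;    // implies projectile speed, gravity, drag
            float origin[3];
            float direction[3];
            float serverTime;          // lets late receivers fast-forward the arc
        };

    Instant-hit ("hitscan") weapons skip the simulation entirely: the server ray-casts the shot against player positions rewound to the shooter's latency (lag compensation) and broadcasts only the outcome, while the tracer you see is a purely cosmetic local effect. That is how 20 players' worth of fire stays within a few dozen small events per second.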

    Read the article

  • Dynamic Jump spot

    - by Pasquale Sada
    I have an initial velocity V(Vx, Vy, Vz) and a spot where the character stands still at S(Sx, Sy, Sz). What I'm trying to achieve is a jump onto a spot E(Ex, Ey, Ez) that you have clicked on (only a lower or higher spot, because I have a simple steering behavior in place for even terrain). There are no obstacles around. I've implemented a formula that can make him jump in a precise way onto a spot, but you need to declare an angle: the problem arises when the selected spot is straight above your head. It's pretty lame that the character hangs there and can't reach a thing that is 1 cm above his head. I'll share the code I'm using:

        Vector3 dir = target - transform.position;  // get target direction
        float h = dir.y;                            // get height difference
        dir.y = 0;                                  // retain only the horizontal direction
        float dist = dir.magnitude;                 // get horizontal distance
        float a = angle * Mathf.Deg2Rad;            // convert angle to radians
        dir.y = dist * Mathf.Tan(a);                // set dir to the elevation angle
        dist += h / Mathf.Tan(a);                   // correct for small height differences

        // calculate the velocity magnitude
        float vel = Mathf.Sqrt(dist * Physics.gravity.magnitude / Mathf.Sin(2 * a));
        return vel * dir.normalized;
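
    An angle-free alternative that stays well defined when the target is straight overhead (a sketch in plain ballistics, not engine-specific): pick how high the apex of the jump should rise above both endpoints and solve for the velocity components from that, so dist = 0 simply yields zero horizontal velocity instead of a degenerate tangent.

        #include <cmath>

        // dist: horizontal distance, h: height difference to the target,
        // g: gravity magnitude, apex: jump height above the start, apex > max(0, h).
        void jumpVelocity(float dist, float h, float g, float apex,
                          float& vHorizontal, float& vVertical)
        {
            vVertical = std::sqrt(2.0f * g * apex);          // speed needed to reach the apex
            float tUp   = vVertical / g;                     // time from launch to apex
            float tDown = std::sqrt(2.0f * (apex - h) / g);  // time from apex down to target
            vHorizontal = dist / (tUp + tDown);              // covers dist in the total airtime
        }

    The horizontal direction is still the normalized flat dir from the code above; scaling it by vHorizontal and adding vVertical on y gives the launch velocity, and clamping apex to something like max(0, h) plus a minimum hop height keeps tiny 1 cm jumps from looking flat.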

    Read the article
