Search Results

Search found 25496 results on 1020 pages for 'development fabric'.

  • AndEngine player, background and camera

    - by valdemar593
    I'm developing a 2D shooter using AndEngine. At the moment I'm trying to make the camera follow the player. As I understand it, the common approach is to use a SmoothCamera, zoom it, and set the chased entity. The problem is that the camera follows the player but the background (a RepeatingSpriteBackground) moves along with it, so it looks like the player doesn't move at all even though its actual position changes. I don't see how to make the camera follow the player while keeping the background still. Thanks in advance.

  • glTexImage2D not loading my data

    - by Clyde
    Can anyone suggest why this code doesn't work? When I draw using this texture all I get is black. If I use GLUtils.texImage2D() to load a PNG file, it works correctly.

        ByteBuffer bb = ByteBuffer.allocateDirect(128 * 128 * 4).order(ByteOrder.nativeOrder());
        bb.position(0);
        for (int row = 0; row != 128; row++) {
            for (int i = 0; i != 128; i++) {
                bb.put((byte) 0x80);
                bb.put((byte) 0xFF);
                bb.put((byte) 0xFF);
                bb.put((byte) i);
            }
        }

        int[] handle = new int[1];
        GLES20.glEnable(GLES20.GL_TEXTURE_2D);
        GLES20.glGenTextures(1, handle, 0);
        DrawAdapter.checkGlError("Gen textures");
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, handle[0]);
        DrawAdapter.checkGlError("Bind textures");
        bb.position(0);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 128, 128, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
        DrawAdapter.checkGlError("glTexImage2D");
        return handle[0];
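
    One thing worth noting alongside this (a general GL ES 2.0 fact, not necessarily the cause here): a texture whose minification filter is left at the default mipmapping mode is incomplete unless mipmaps are supplied, and sampling an incomplete texture returns black. Setting the filters explicitly looks roughly like the following sketch (C-style GL ES calls; the Java GLES20 equivalents take the same arguments):

        // Assumes the texture is currently bound to GL_TEXTURE_2D.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   // no mipmaps required
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);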

  • Assigning a colour to imported .obj files that are being used as the default material

    - by Salino
    I am having a problem assigning a colour to the different meshes that I have on one object. The technique I used is the first approach on this site: Is it possible to export a simulation (animation) from Blender to Unity? So what I would like to do is the following. I have about 107 meshes that are different frames from my shape key animation of my Blender model. What I would like is for the first mesh to be bright green, with the colour turning white/greyish by the 40th mesh... The best would be if I could assign each mesh a colour by hand, but they all use the default material, and if I assign the object a colour, the whole "animation" ends up in that colour.

  • Client/Server game even in solo: any big problem?

    - by Klaim
    I'm making a game whose basic design is strongly built around multiplayer, but which should also provide a really interesting and self-sufficient solo game, a bit like a real-time strategy game. The events and actions taken aren't as massive and immediate as in an FPS, so you can think of the networking as being like an RTS. It's a PC game targeting Windows, Mac OS X and Linux (Ubuntu and Fedora). It's programmed in C++, using a variety of open source libraries, so I have great (potential) control over performance. So far I always assumed that making the game run as two applications, client and server, even in solo mode, was fine. However, as I'm about to start the network code, I'm having doubts about whether it's a good idea. I'm not a specialist, so I might be missing something in my analysis. I see these pros and cons:

    Pros:
    - The game works only one way, so if I fix a bug it should apply to all game modes, whatever the distance to the server is.
    - Basic networking issues would be detected early, including behaviour with protection software (firewalls) installed (I am not a specialist, so this might be wrong).

    Cons:
    - I suppose that even if it should be fast enough, networking the client and server on the same computer would still be slower than no networking and message passing within (one) process's memory.
    - Maybe debugging would be more difficult? I don't have experience with this case, but so far I assume that Visual Studio lets me debug multiple processes, so it shouldn't be really different. Also, remote debugging.

    My question is: is there a big disadvantage that I missed? Or maybe there are advantages that I missed that should encourage me to just continue with client-server-only game sessions?
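
    For what it's worth, a minimal sketch (in C++, with purely illustrative names, not from the project) of one way the single-code-path idea is often kept while avoiding sockets in solo mode: hide the transport behind an interface, and give solo mode a loopback implementation that passes messages in process memory:

        #include <deque>
        #include <string>

        // Illustrative transport interface: the game logic only ever sees this.
        struct IServerLink {
            virtual ~IServerLink() = default;
            virtual void send(const std::string& msg) = 0;
            virtual bool poll(std::string& out) = 0;   // fetch the next server message, if any
        };

        // Solo mode: client and server live in the same process, no sockets involved.
        class LoopbackLink : public IServerLink {
            std::deque<std::string> toServer, toClient;
        public:
            void send(const std::string& msg) override { toServer.push_back(msg); }
            bool poll(std::string& out) override {
                if (toClient.empty()) return false;
                out = toClient.front();
                toClient.pop_front();
                return true;
            }
            // The embedded server pumps these queues once per tick.
            std::deque<std::string>& serverInbox()  { return toServer; }
            std::deque<std::string>& serverOutbox() { return toClient; }
        };

        // Multiplayer mode would provide a SocketLink implementing the same interface.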

  • Crash when trying to detect touch

    - by iQue
    I've got a character in a 2D game using SurfaceView that I want to be able to move using a button (eventually a joystick), but my game crashes as soon as I try to move my sprite. This is my onTouch method for my steering button:

        public void handleActionDown(int eventX, int eventY) {
            if (eventX >= (x - bitmap.getWidth() / 2) && (eventX <= (x + bitmap.getWidth() / 2))) {
                if (eventY >= (y - bitmap.getHeight() / 2) && (y <= (y + bitmap.getHeight() / 2))) {
                    setTouched(true);
                } else {
                    setTouched(false);
                }
            } else {
                setTouched(false);
            }
        }

    If I put this in my update method:

        public void update() {
            x += (speed.getXv() * speed.getxDirection());
            y += (speed.getYv() * speed.getyDirection());
        }

    the sprite moves on its own just fine, but as soon as I add:

        public void update() {
            if (steering.isTouched()) {
                x += (speed.getXv() * speed.getxDirection());
                y += (speed.getYv() * speed.getyDirection());
            }
        }

    the game crashes. Does anyone know why this is or how to fix it? I cannot figure it out. I'm using MotionEvent.ACTION_DOWN to check whether the user is pressing the screen.

  • How can I keep the correct alpha during rendering particles?

    - by April
    Recently, I was trying to save textures of 3D particles so that I can reuse them in 2D rendering. Now I have a problem with the alpha channel. An artist told me that my textures should have an unpremultiplied alpha channel. When I try to get the RGB values back, I get strange results: some areas turn lighter and even totally white. I mainly focus on the additive and blend modes, that is: ADDITIVE: srcAlpha vs 1; BLEND: srcAlpha vs 1-srcAlpha. I tried a technique called premultiplied alpha. It gives you the right RGB values, which is all you need on screen. As for the alpha value, it works well with BLEND mode, but not ADDITIVE mode. As you can see from the parameters, BLEND mode always keeps its value within 1, while ADDITIVE mode cannot guarantee that. I want a proper alpha, but it comes out too big or too small relative to the RGB. What can I do? Any help would be greatly appreciated. PS: If you don't understand what I am trying to do, there is a commercial software package called "Particle Illusion". You can create various particles and then save the scene to a texture, where you can choose to remove the background of the particles.
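
    For reference, the two modes written out with straight (unpremultiplied) colours, and their premultiplied forms, using the standard blend definitions (nothing engine-specific is assumed here):

        \text{Blend: } C_{out} = \alpha_s C_s + (1 - \alpha_s)\, C_d
        \text{Additive: } C_{out} = \alpha_s C_s + C_d
        \text{Premultiplied source: } C'_s = \alpha_s C_s
        \text{Blend (premultiplied): } C_{out} = C'_s + (1 - \alpha_s)\, C_d
        \text{Additive (premultiplied): } C_{out} = C'_s + C_d

    Recovering a straight colour afterwards means dividing, C_s = C'_s / \alpha_s, which blows up toward white wherever \alpha_s is small; that would be consistent with the lighter and white areas described above.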

  • Unit turning in navmesh-based pathfinding

    - by Haddayn
    I'm working on an RTS game, and I'm using navmeshes for unit pathfinding. I know how to find a general path within a navmesh, but how do you determine whether a unit has enough space to turn? I have units of different shapes (mostly rectangles with different dimensions) and with different turning radii. Additionally, some units can turn in place, and some can move in reverse. So, how do I find a path the unit can actually follow, considering that it cannot rotate easily?

  • 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing?

    - by Arthur Wulf White
    I wish to add the ability to zoom in, zoom out, rotate and move the view in a top-down view over a collection of points and lines in a large 2D map. I split the map into a grid so I only need to render the points that are 'near' the camera. My question is, how do I render a point A(Xp, Yp) given the following details: the offset of the camera from the origin of the map is (Xc, Yc), meaning the camera centre is positioned on top of that point (if there's a point at (Xc, Yc), it appears in the centre of the screen); the rotation angle is alpha; the scale is S. Read my answer first; I am thinking there is a more optimized solution, thanks. My question is how to include the following improvement: I read in the ActionScript 3.0 Bible that, in regard to ShaderInput, "You can use these methods to coerce Pixel Bender to crunch huge sets of data masquerading as images, without doing too much work on the ActionScript side to make them look like images." Meaning, if I am performing the same linear function on a lot of items, I can do it all at once if I use shaders correctly and save processing time. Does anyone know how that is accomplished? Here is a sample of what I mean: http://wonderfl.net/c/eFp0/
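
    For the first half of the question, here is a minimal sketch of the world-to-screen transform under the stated set-up (camera at (Xc, Yc) mapped to the screen centre, rotation alpha, uniform scale S), written in C++ for brevity with illustrative names:

        #include <cmath>

        // Illustrative helper: maps a world-space point to screen space for a camera
        // centred on (camX, camY), rotated by alpha (radians) and uniformly scaled by S.
        void worldToScreen(float px, float py,
                           float camX, float camY, float alpha, float S,
                           float screenCX, float screenCY,
                           float& outX, float& outY)
        {
            // 1. Translate so the camera sits at the origin.
            float dx = px - camX;
            float dy = py - camY;
            // 2. Rotate by -alpha (the view rotates opposite to the camera).
            float ca = std::cos(-alpha), sa = std::sin(-alpha);
            float rx = dx * ca - dy * sa;
            float ry = dx * sa + dy * ca;
            // 3. Scale and re-centre on the screen.
            outX = rx * S + screenCX;
            outY = ry * S + screenCY;
        }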

  • What are the app file size limitations for different smartphone OSes & carriers?

    - by Nick Gotch
    I know the iPhone App Store limits how large an app can be in general, and there are also limitations with AT&T on how large an app can be to download over a data plan versus WiFi. I have no idea what, if any, these limits are for Android apps, and what I'm finding online is a mix of different numbers. Does anyone know these numbers definitively? The Android game I'm porting is in the 20-30 MB range, and we'd like to know if we need to further reduce its size.

  • How can I keep track of a battle log on a web game?

    - by Jay W
    Recently I started working on a web-based, turn-based PvP RPG. Now I'm working on the battle system, but I've run into some issues: how can I keep track of everything that happens in the battle? It should keep track of the characters on the field, inventory, the damage done, etc. I first thought I would simply put it in the (MySQL) database, but I think that will be too much, especially if several people are in a battle. I thought of putting this in sessions or cookies, but I don't think that's reliable. Does anyone have an idea how I can do this?

  • Assigning a different texture based on picking (XNA)

    - by Thomas Carmichael
    I'm making a game using XNA. I have some simple objects like a cube and a sphere, and I would like to change the texture of one face of these objects based on picking; that is, when the cursor is over a face, it turns red. The only way I've seen to do this is to overload the content processor, as here: http://xbox.create.msdn.com/en-US/education/catalog/sample/picking_triangle but it seems like it shouldn't be this complicated. I'm using .x models, and I would like to be able to implement this for more complex models in the future, beyond cubes/spheres/etc. Is this the best/only way to go about it? I'll figure it out if that's what's necessary, but it seems there should be a simpler way to load a different texture onto a face than what I've seen; I just don't know what it is.

  • CUDA 4.1 Particle Update

    - by N0xus
    I'm using CUDA 4.1 to perform the update of my particle system, which I've made with DirectX 10. So far, my update method for the particle system is one line of code within a for loop that makes each particle fall down the y axis to simulate a waterfall:

        m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f);

    In my .cu file I've created a struct which I copied from my particle class:

        struct ParticleType
        {
            float positionX, positionY, positionZ;
            float red, green, blue;
            float velocity;
            bool active;
        };

    Then I have an UpdateParticle method in the .cu file as well. It takes the three main parameters my particles need to update themselves, based on the initial line of code:

        __global__ void UpdateParticle(float* position, float* velocity, float frameTime)
        {
        }

    This is my first CUDA program and I'm at a loss as to what to do next. I've tried simply putting the particleList line in the UpdateParticle method, but then the particles don't fall down as they should. I believe it is because I am not calling something that I need to in the class where the particle-fall code used to be. Could someone please tell me what I am missing to get it working as it should? If I am doing this completely wrong in general, please tell me that as well.

  • How to make a stack stable? Need help for an explicit resting contact scheme (2-dimensional)

    - by Register Sole
    Previously, I struggled with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the code and implement the simultaneous impulse-based method with a Projected Gauss-Seidel (PGS) iterative solver, as described by Erin Catto (mentioned in the reference of the paper as [Catt05]). So here's how it currently works:

    - The simulation handles 2-dimensional rotating convex polygons.
    - Detection uses the separating-axis test, with a SKIN: the closest points between two polygons are found, and a collision is reported if their distance is less than SKIN.
    - To resolve a collision, the simultaneous impulse-based method is used, solved with the iterative PGS solver as in Erin Catto's paper.
    - Error correction is implemented using Baumgarte stabilization (you can refer to either paper for this), using J V = (beta / dt) * overlap, where J is the Jacobian of the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that should be < 1, dt the time step taken by the engine, and overlap the true overlap between the bodies (so SKIN is ignored).

    However, it is still less stable than I expected. I tried to stack hexagons (or squares, it doesn't really matter), and even with only 4 to 5 of them, they swing! Also note that I am not looking for a sleeping scheme, but I would settle for any explicit scheme you have to handle resting contacts. That said, I would be more than happy if you have a way of treating it generally (as continuous collision, instead of explicitly as a special state).

    Ideas I have tried: using simultaneous position-based error correction as described in the paper in section 5.3.2; it turned out to be worse than the current scheme.

    If you want to know the parameters I used:
    - Hexagons, side 50 (pixels)
    - gravity 2400 (pixels/sec^2)
    - time step 1/60 (sec)
    - beta 0.1
    - restitution 0 to 0.2
    - coefficient of friction 0.2
    - PGS iterations 10
    - initial separation 10 (pixels)
    - mass 1 (the unit is irrelevant for now; I modified velocity directly <- impulse method)
    - inertia 1/1000

    Thanks in advance! I really appreciate any help from you guys! :)

    EDIT: In response to Cholesky's comment about warm starting the solver and Baumgarte: Oh right, I forgot to mention! I do save the contact history and the impulse determined in this time step, to be used as an initial guess in the next time step. As for Baumgarte, here's what actually happens in the code. A collision is detected when the bodies' closest distance is less than SKIN, meaning they are actually still separated. If, at this moment, I used the PGS solver without Baumgarte, a restitution of 0 alone would be able to stop the bodies, separated by a distance of ~SKIN, in mid-air! So this isn't right; I want the bodies to touch each other. So I turn on Baumgarte, whose role here is actually to pull the bodies together! Weird, I know: a scheme intended to push bodies apart becomes useful for the reverse. Also, I found that if I increase the number of iterations to 100, stacks become much more stable, though the program becomes very slow.

    UPDATE: Since the stack swings left and right, could something be wrong with my friction model? Current friction constraint: relative_tangential_velocity = 0

  • Process of getting DEJUS rating (Brazil)?

    - by feklee
    I would like to get a DEJUS rating for my HTML5 game on the Firefox Marketplace, so that I can tell Mozilla to make the game available to users in Brazil. I want the game to be rated as: Livre (general). Can non-Brazilian citizens request ratings from DEJUS? If so, what documents need to be provided, and in which language? What I have found so far:
    - Submission form in English (note that there is no country field in the address form, and it's necessary to specify CPF/CNPJ)
    - Description of the procedure in Portuguese
    - Process flow chart in Portuguese
    - Practical guide to the rating system in English

  • How does HDR work?

    - by dotminic
    I'm trying to understand what HDR is and how it works. I understand the basic concepts and have a slight idea of how it is implemented with D3D/HLSL, but it's still pretty foggy. Say I'm rendering a sphere with a texture of the earth, plus a small point list of vertices to act as stars; how would I render this in HDR? Here are a few things I'm confused about: I'm guessing I can't use just any basic image format for the texture, as the values would be limited to [0, 255] and clamped to [0, 1] in a shader. The same goes for the back buffer; I take it the format needs to be a floating-point format? What are the other steps involved? Surely there has to be more than just rendering to a floating-point render target and then applying some bloom as a post process (considering the output will be 8 bpp anyway)? Basically, what are the steps for HDR? How does it work? I can't seem to find any good papers or articles that describe the process, other than this one, but it seems to skim over the basics a little, so it's confusing.

  • Threads slowing down application and not working properly

    - by Belgin
    I'm making a software renderer which does per-polygon rasterization using a floating-point digital differential analyzer algorithm. My idea was to create two threads for rasterization and have them work like so: one thread draws each even scanline in a polygon and the other thread draws each odd scanline. They both start working at the same time, but the main application waits for both of them to finish and then pauses them before continuing with other computations. As this is the first time I'm making a threaded application, I'm not sure whether the following method for thread synchronization is correct.

    First of all, I use two global variables to control the two threads: if a global variable is set to 1, that means the thread can start working; otherwise it must not work. Each thread checks this in an infinite loop, and if it detects that the global variable has changed its value, it does its job and then sets the variable back to 0 again. The main program also uses an empty while loop to check when both variables become 0 after setting them to 1. Second, each thread is assigned a global structure which contains information about the triangle that is about to be rasterized. The structures are filled in by the main program before it sets the global variables to 1.

    My dilemma is that, while this process works under some conditions, it slows down the program considerably, and it also fails to run properly when compiled for Release in Visual Studio, or when compiled with any sort of -O optimization with gcc (i.e. nothing on screen, even SEGFAULTs). The program isn't much faster by default without threads, which you can see for yourself by commenting out the #define THREADS directive, but if I apply optimizations, it becomes much faster (especially with gcc -Ofast -march=native). N.B. It might not compile with gcc because of the fscanf_s calls, but you can replace those with the usual fscanf if you wish to use gcc. Because there is a lot of code, too much for here or pastebin, I created a git repository where you can view it.

    My questions are: Why does adding these two threads slow down my application? Why doesn't it work when compiled for Release or with optimizations? Can I speed up the application with threads? If so, how? Thanks in advance.
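
    For reference, a minimal sketch of the signalling scheme described above, written with std::atomic flags (the description mentions plain global variables, which the optimizer is free to cache in registers, so this is an illustration of the scheme rather than the project's actual code):

        #include <atomic>

        std::atomic<int> evenReady{0}, oddReady{0};   // 1 = "go", 0 = "done"

        void evenScanlineWorker() {
            for (;;) {
                while (evenReady.load() == 0) { /* spin until the main thread signals */ }
                // ... rasterize the even scanlines of the current triangle ...
                evenReady.store(0);                   // report completion
            }
        }
        // The odd-scanline worker is identical, using oddReady.

        // Main thread, per triangle: fill the shared triangle structures, then
        //     evenReady.store(1); oddReady.store(1);
        //     while (evenReady.load() != 0 || oddReady.load() != 0) { /* spin */ }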

  • How to display a projectile trajectory in C++?

    - by sana
    I am trying to make a Gorillas-style game in C++ whose specification is roughly this: "In this game both players select their position on a level, scaled ground. The scale of the ground should be from 0 to 20 divisions, each division corresponding to 10 meters. Each player will enter an angle and an initial velocity (limits for both should be defined), and the player will hurl a stone with this velocity at the given angle. The stone will follow a projectile trajectory, and if it hits the other player, the shooting player wins. A random effect of air should also be incorporated: air will support one player and resist the other. The velocity of the air should be generated randomly, within some limits, and subtracted from or added to the horizontal velocity of the stone. An arrow of suitable length shall represent the air direction and velocity. The player who hits first wins." How do I display the trajectory of the stone?
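
    As a point of reference, the stone's path is just standard projectile kinematics sampled over time; a minimal C++ sketch follows (the names, step size and wind term are illustrative, not taken from the assignment):

        #include <cmath>
        #include <cstdio>

        // Sample the stone's path at fixed time steps until it returns to the ground.
        void printTrajectory(double x0, double v, double angleDeg, double windVx)
        {
            const double g = 9.81, dt = 0.05;
            double rad = angleDeg * 3.14159265358979 / 180.0;
            double vx = v * std::cos(rad) + windVx;   // air adds to or subtracts from vx
            double vy = v * std::sin(rad);
            for (double t = 0.0; ; t += dt) {
                double x = x0 + vx * t;
                double y = vy * t - 0.5 * g * t * t;
                if (y < 0.0) break;                   // the stone has hit the ground
                std::printf("t=%.2f  x=%.1f  y=%.1f\n", t, x, y);
            }
        }

    Each sampled (x, y) pair can then be mapped to the 0-20 division scale and drawn as one point of the arc.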

  • Fastest approach to 3D animation

    - by HappyFerret
    I'm currently tasked with designing a small HTML5 game. Having done everything by myself so far (3D models, codebase, game design, etc.), I'm now at a point where I'm running out of time: I have less than a day to animate and bind everything together. However, that's exactly my problem. I was under the naive impression that everything would be easier if I went with pre-rendered 3D models, but I didn't consider the most difficult part: animation. After having spent over an hour trying to figure out messiahStudio, I figured it's time to ask for outside help. Is there any easier approach to 3D animation than rigging? What I'm basically looking for is some sort of tool that lets me simply grab and move/deform selected polygons. It doesn't have to be as life-like and accurate as rigging, just efficient enough. Were the circumstances any different, I might just learn how to rig, but that's out of scope right now. PS: The models were created in Sculptris but are fairly low-poly.

  • 3D Collision Handling

    - by TobSpr
    I'm having trouble handling collisions in my 3D game. I have set up rays to detect collisions (screenshot), and my main routine already analyzes them. But now the question is what to do with that information. One possibility would be to move the player back to the last position, but that's dirty, and it does not work if the player can walk in multiple directions (e.g. if the player runs along a wall). My question is: what do I do with the collision data, or in which direction and by what amount do I move the player? I'm sure there is an algorithm for that (as there is for almost everything).
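
    For reference, one common way of using such contact data (a generic sketch, not tied to the ray set-up above) is to take the contact normal and remove the into-the-surface component of the movement, so the player slides along the wall instead of being pushed back:

        struct Vec3 { float x, y, z; };

        float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // Given the intended movement and the unit contact normal, return the movement
        // with its penetrating component removed, so the player slides along the wall.
        Vec3 slide(const Vec3& move, const Vec3& normal)
        {
            float d = dot(move, normal);
            if (d >= 0.0f) return move;          // already moving away from the surface
            return { move.x - normal.x * d,
                     move.y - normal.y * d,
                     move.z - normal.z * d };
        }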

  • What is the purpose of the canonical view volume?

    - by breadjesus
    I'm currently learning OpenGL and haven't been able to find an answer to this question. After the projection matrix is applied to the view space, the view space is "normalized" so that all the points lie within the range [-1, 1]. This is generally referred to as the "canonical view volume" or "normalized device coordinates". While I've found plenty of resources telling me about how this happens, I haven't seen anything about why it happens. What is the purpose of this step?

  • Collision rectangle response

    - by dotty
    I'm having difficulty getting a movable rectangle to collide with more than one rectangle. I'm using SFML, and it has a handy function called Intersect() which takes two rectangles and returns the intersection. I have a vector full of rectangles which I want my movable rectangle to collide with. I'm looping through it using the following code (p is the movable rectangle; isCollidingWith returns a bool, but also uses SFML's Intersect to work out the intersection):

        for (unsigned i = 0; i != testRects.size(); i++) {
            if (p.isCollidingWith(testRects[i])) {
                p.collide(testRects[i]);
            }
        }

    and the actual collide() code:

        void gameObj::collide(gameObj collidingObject) {
            printf("%f %f\n", this->colliderResult.width, this->colliderResult.height);
            if (this->colliderResult.width < this->colliderResult.height) {
                // collided on X
                if (this->getCollider().left < collidingObject.getCollider().left) {
                    this->move(-this->colliderResult.width, 0);
                } else {
                    this->move(this->colliderResult.width, 0);
                }
            }
            if (this->colliderResult.width > this->colliderResult.height) {
                if (this->getCollider().top < collidingObject.getCollider().top) {
                    this->move(0, -this->colliderResult.height);
                } else {
                    this->move(0, this->colliderResult.height);
                }
            }
        }

    and the isCollidingWith() code is:

        bool gameObj::isCollidingWith(gameObj testObject) {
            if (this->getCollider().intersects(testObject.getCollider(), this->colliderResult)) {
                return true;
            } else {
                return false;
            }
        }

    This works fine when there's only one rect in the scene. However, when there's more than one rect it causes issues when working out two collisions at once. Any idea how to deal with this correctly? I have uploaded a video to YouTube to show my problem. The console on the far right shows the width and height of the intersections; you can see that it's trying to calculate two collisions at once, and I think this is where the problem is being caused. The YouTube video is at http://www.youtube.com/watch?v=fA2gflOMcAk and this image also seems to illustrate the problem nicely. Can someone please help? I've been stuck on this all weekend!

  • Will C++ remain viable for game engines in the somewhat distant future?

    - by samual
    C++11 has opened up ways that C++ programmers could previously only dream of. It has been three years since I started learning C++, and it is going well. Now I want to get into video games. Every game engine core I have seen was a monster written in C++. My question is: if I get into serious game engine development, and perfecting it takes, say, 10 years, will we still be writing game engines in C++ (in a newer standard)? Or will John Carmack write id Tech 7 in C++? Note: I am strictly talking about game engines.

  • FBO rendering different result between Galaxy S2 and S3

    - by BruceJones
    I'm working on a pong game and have recently set up FBO rendering so that I can apply some post-processing shaders. This proceeds as follows:

    - Bind texture A to the framebuffer
    - Draw balls
    - Bind texture B to the framebuffer
    - Draw texture A using the fade shader on a fullscreen quad
    - Bind the screen to the framebuffer
    - Draw texture B using the normal textured-quad shader

    Neither texture A nor B is cleared at any point; this way the balls leave trails on screen. See below for the fade shader.

    Fade shader:

        private final String fragmentShaderCode =
            "precision highp float;" +
            "uniform sampler2D u_Texture;" +
            "varying vec2 v_TexCoordinate;" +
            "vec4 color;" +
            "void main(void)" +
            "{" +
            "    color = texture2D(u_Texture, v_TexCoordinate);" +
            "    color.a *= 0.8;" +
            "    gl_FragColor = color;" +
            "}";

    This works fine on the Samsung Galaxy S3/Note 2, but causes a strange effect (it doesn't work) on the Galaxy S2 or Note. See the pictures comparing the Galaxy S3/Note 2 output with the Galaxy S2/Note output. Can anyone explain the difference?

  • Keeping the meshes "thickness" the same when scaling an object

    - by user1806687
    I've been bashing my head for the past couple of weeks trying to find a way to accomplish what at first looks like a very easy task. I have an object currently made of 5 cuboids (2 sides, 1 top, 1 bottom, 1 back). This is just an example; later on there will be a whole range of different set-ups. Now, when the user chooses to scale the whole object, this is what should happen:

    - X scale: the top and bottom cuboids should be scaled by the scale factor; the sides should be moved so they are positioned just like they were before (in this case at both ends of the top and bottom cuboids); the back should be scaled so it fits like before (if I simply scale it by the scale factor it will leave gaps on each side).
    - Y scale: the sides should be scaled by the scale factor; the top and bottom cuboids should be moved; the back should also be scaled.
    - Z scale: the sides, top and bottom cuboids should be scaled; the back should be moved.

    Hope you can help. A sketch of the X-scale case appears after this description.

    EDIT: I've decided to explain the situation once more, this time in more detail (hopefully), and I've also made some pictures of how the scaling should look, where the problem is, and the wrong way of scaling. In this example I will use a thick-walled box with one face missing, where each wall is a cuboid (but later on there will be different shapes of objects, where one of the faces might be roundish, or a triangle, or even at an angle). The scaling will be 2x on the X axis.

    1. This is how the default object, without any scaling applied, looks: http://img856.imageshack.us/img856/4293/defaulttz.png
    2. If I scale the whole object (all of the meshes) by some scale factor, the problem is that the "thickness" of the object's walls also changes (which I do not want): http://img822.imageshack.us/img822/9073/wrongwaytoscale.png
    3. This is how the correct scaling should look. The appropriate faces get scaled, in this case where the scale is on the X axis (top, bottom, back): http://imageshack.us/photo/my-images/163/rightwayxscale1.png/
    4. But the scale factor might not be the same for all objects all of the time. In this case the back has to be scaled a bit more or it leaves gaps: http://imageshack.us/photo/my-images/9/problemwhenscaling.png/
    5. If everything goes well, this is how the final object should look: http://imageshack.us/photo/my-images/856/rightwayxscale2.png/

    So, as you may have noticed, there are quite a few things to look out for when scaling. I am asking whether any of you have an idea of how to accomplish this scaling. I have tried a whole bunch of things, from scaling all of the meshes by the same scale factor to subtracting and adding sizes to get the right dimensions, but nothing I tried worked: if one mesh got scaled correctly, then the others didn't. Download the example object. English is not my first language, so I am really sorry if it's hard to understand what I am saying.
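
    To make the intent concrete, here is a rough sketch of the X-scale case, under the assumption of an axis-aligned box whose top/bottom panels span the full outer width and whose back panel spans only the inner width; every name here is illustrative and not from the actual project:

        // Illustrative only: scale a thick-walled box along X while keeping wall thickness.
        struct Part { float posX; float scaleX; };   // local centre and length scale along X

        void scaleBoxX(float factor, float outerWidth, float thickness,
                       Part& top, Part& bottom, Part& back,
                       Part& leftSide, Part& rightSide)
        {
            float newWidth = outerWidth * factor;
            // Long panels simply stretch along X.
            top.scaleX    *= factor;
            bottom.scaleX *= factor;
            // The back spans the inner width, so it must grow a bit more than 'factor'
            // or it leaves gaps next to the side walls.
            back.scaleX *= (newWidth - 2.0f * thickness) / (outerWidth - 2.0f * thickness);
            // The side walls keep their thickness and only move outwards.
            leftSide.posX  = -(newWidth - thickness) * 0.5f;
            rightSide.posX =  (newWidth - thickness) * 0.5f;
        }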

  • Vector reflect problem

    - by xdevel2000
    I'm testing some vector reflection and I want to check what happens when a ball collides with a paddle. So if I have:

        Vector2 velocity = new Vector2(-5, 2);
        position_ball += velocity;
        if (position_ball.X < 10)
        {
            Vector2 v = new Vector2(1, 0); // or Vector2.UnitX
            velocity = Vector2.Reflect(velocity, v);
        }

    then velocity is correctly (5, 2) after Reflect. But if I do:

        if (position_ball.X < 10)
        {
            Vector2 v = new Vector2(1, 1);
            velocity = Vector2.Reflect(velocity, v);
        }

    then velocity is (1, 8) and not (5, -2), which is what I expect from the reflection equation R = V - 2 * (V · N) * N. Why is that?
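
    For reference, expanding the full reflection formula with the values above, taking N exactly as passed in (XNA's Reflect expects a unit-length normal and does not normalize it for you):

        R = V - 2\,(V \cdot N)\,N
          = (-5, 2) - 2\big((-5)(1) + (2)(1)\big)\,(1, 1)
          = (-5, 2) + 6\,(1, 1)
          = (1, 8)

    which matches the observed output; a unit-length normal gives a different result.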
