Search Results

Search found 19855 results on 795 pages for 'game console'.

Page 386 of 795

  • Calculating the force of an impact?

    - by meds
    I'm trying to figure out a way to determine the force with which two objects collide. I have two vectors defining their linear velocities at the time of impact, plus their masses and angular velocities. Keep in mind this is all for a 2D physics engine. I don't think it's as simple as adding these values up and checking whether the total is large enough to count as a big impact, since that doesn't take into account whether the two objects are travelling in the same direction (as one example). Any ideas?
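
    A common way to quantify an impact in 2D is to look at the relative velocity along the contact normal and turn it into a scalar impulse. A minimal C# sketch, assuming a known unit-length contact normal and a restitution value, and ignoring the angular terms (Vector2 is any 2D vector type with a Dot product, e.g. XNA's):

        // Scalar impulse magnitude along the contact normal (2D, no rotation terms).
        // Assumes velA/velB are linear velocities, massA/massB > 0, normal is unit length,
        // restitution is in [0, 1]. This is a sketch, not any engine's actual solver.
        static float ImpulseMagnitude(Vector2 velA, Vector2 velB, float massA, float massB,
                                      Vector2 normal, float restitution)
        {
            Vector2 relative = velB - velA;
            float closingSpeed = Vector2.Dot(relative, normal); // negative when approaching
            if (closingSpeed >= 0f) return 0f;                  // already separating: no impact
            return -(1f + restitution) * closingSpeed / (1f / massA + 1f / massB);
        }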

    Read the article

  • Windows API Programming

    - by vs4vijay
    Hello there, it's Vijay. I'm trying to draw a crosshair (a kind of cursor) on the screen while a game (Counter-Strike) is running, so I wrote this:

        #include <iostream.h>
        #include <windows.h>
        #include <conio.h>
        #include <dos.h>
        #include <stdlib.h>
        #include <process.h>
        #include <time.h>

        int main()
        {
            HANDLE hl = OpenProcess(PROCESS_ALL_ACCESS, TRUE, pid); // pid is the process ID of the game
            HDC hDC = GetDC(NULL);                                  // NULL gives a DC for the entire screen
            HBRUSH hb = CreateSolidBrush(RGB(0, 255, 255));
            SelectObject(hDC, hb);
            POINT p;

            while (!kbhit())
            {
                int x = 1360 / 2, y = 768 / 2;
                MoveToEx(hDC, x - 20, y, &p);
                LineTo(hDC, x + 20, y);
                SetPixel(hDC, x, y, RGB(255, 0, 0));
                SetPixel(hDC, x - 1, y - 1, RGB(255, 0, 0));
                SetPixel(hDC, x - 1, y + 1, RGB(255, 0, 0));
                SetPixel(hDC, x + 1, y + 1, RGB(255, 0, 0));
                SetPixel(hDC, x + 1, y - 1, RGB(255, 0, 0));
                MoveToEx(hDC, x, y - 20, &p);
                LineTo(hDC, x, y + 20);
            }

            cin.get();
            return 0;
        }

    It works fine on the desktop (I can see the crosshair), but when I run the game the crosshair disappears, so I think I haven't handled the game's process correctly. I tried passing the handle to GetDC(hl), but GetDC only takes an HWND (a handle to a window), so I cast it like this:

        HWND hl = (HWND)OpenProcess(PROCESS_ALL_ACCESS, TRUE, pid);

    and passed hl to GetDC(hl), but it doesn't work. What's wrong with the code? How do I draw a simple shape on the screen on top of another process or game? (My compiler is Dev-C++ and my OS is Windows XP SP3.)

    Read the article

  • Why am I seeing streak artifacts on the cube map I'm rendering?

    - by BobDole
    I'm getting strange streaks on my cube map when rendering to it. Here is the code that is called each frame:

        void drawCubeMap(void)
        {
            int face;
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            //glBindTexture(GL_TEXTURE_CUBE_MAP, cubeMapTexture);
            //glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glViewport(0, 0, sizeT, sizeT);

            for (face = 0; face < 6; face++)
            {
                glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                       GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, cubeMapTexture, 0);
                drawSpheres();
            }

            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            glBindTexture(GL_TEXTURE_2D, 0);
            glViewport(0, 0, 900, 900);
        }

    Any idea what it might be? The streaking occurs when I'm rotating the spheres around the main sphere.

    Read the article

  • Returning the correct multiTouch id

    - by Max
    I've spent countless hours reading tutorials and looking at every question related to multi-touch here and on Stack Overflow, but I just cannot figure out how to do this correctly. I use a loop to get my pointer id; I don't see a lot of people doing this, but it's the only way I've managed to get it somewhat working. I have two joysticks on my screen, one for moving and one for controlling my sprite's rotation and the angle he shoots, like in Monster Shooter. Both of these work fine. My problem is that when I move my sprite at the same time as I'm shooting, the touching point for my movement gets set to the touching point of my shooting. Since the x and y are higher on the shooting touch point (the moving stick is on the left side of the screen, the shooting stick on the right), my sprite speeds up; this creates an unwanted change in speed for my sprite. I will post my entire onTouch method here, with some variable names changed to make it more understandable, since I do not know where I'm going wrong.

        public void update(MotionEvent event) {
            if (event == null && lastEvent == null) {
                return;
            } else if (event == null && lastEvent != null) {
                event = lastEvent;
            } else {
                lastEvent = event;
            }

            int pointerCount = event.getPointerCount();
            for (int i = 0; i < pointerCount; i++) {
                int x = (int) event.getX(i);
                int y = (int) event.getY(i);
                int id = event.getPointerId(i);
                int action = event.getActionMasked();
                int actionIndex = event.getActionIndex();
                String actionString;

                switch (action) {
                case MotionEvent.ACTION_DOWN:
                    actionString = "DOWN";
                    break;
                case MotionEvent.ACTION_UP:
                    shooting = false; // when shooting is true, it shoots
                    dragging = false; // when dragging is true, it moves
                    actionString = "UP";
                    break;
                case MotionEvent.ACTION_POINTER_DOWN:
                    actionString = "PNTR DOWN";
                    break;
                case MotionEvent.ACTION_POINTER_UP:
                    shooting = false;
                    dragging = false;
                    actionString = "PNTR UP";
                    break;
                case MotionEvent.ACTION_CANCEL:
                    shooting = false;
                    dragging = false;
                    actionString = "CANCEL";
                    break;
                case MotionEvent.ACTION_MOVE:
                    try {
                        if ((int) event.getX(id) > 0 && (int) event.getX(id) < touchingBox
                                && (int) event.getY(id) > touchingBox && (int) event.getY(id) < view.getHeight()) {
                            movingPoint.x = (int) event.getX(id);
                            movingPoint.y = (int) event.getY(id);
                            dragging = true;
                        } else if ((int) event.getX(id) > touchingBox && (int) event.getX(id) < view.getWidth()
                                && (int) event.getY(id) > touchingBox && (int) event.getY(id) < view.getHeight()) {
                            shootingPoint.x = (int) event.getX(id);
                            shootingPoint.y = (int) event.getY(id);
                            shooting = true;
                        } else {
                            shooting = false;
                            dragging = false;
                        }
                    } catch (Exception e) {
                    }
                    actionString = "MOVE";
                    break;
                default:
                    actionString = "";
                }

    I wouldn't post this much code if I weren't at an absolute loss as to what I'm doing wrong. I simply cannot get a good understanding of how multi-touch works. Basically, movingPoint changes for both my first and my second finger. I bind it to a box, but as long as I hold one finger within this box, it changes its value based on where my second finger touches. It moves in the right direction and nothing gives an error; the problem is the speed change. It's almost like it adds up the two touching points.

    Read the article

  • SDL - Getting a single keypress event instead of a keystate?

    - by MrKatSwordfish
    Right now I'm working on a simple SDL project, but I've hit an issue when trying to use a single keypress event to skip past a splash screen. There are four start-up splash screens that I would like to be able to skip with a single press of any key. My issue is that, as of now, if I hold down a key, it skips through every splash screen to the very last one immediately. The splash screens are stored as an array of SDL surfaces which are all loaded when the state is initialized. I have a variable called currentSplashImage that controls which element of the array is rendered to the screen. I've set it up so that whenever there's an SDL_KEYDOWN event, it increments the currentSplashImage variable once. So I'm really not sure why my code isn't working correctly. For some reason, when I hold down a button, it seems to treat the held button as a new key press event every time it ticks through the code. Does anyone know how I can go about fixing this issue? Here's a snippet of the code I've been using:

        void SplashScreenState::handleEvents()
        {
            SDL_PollEvent( &localEvent );

            if ( localEvent.type == SDL_KEYDOWN )
            {
                if ( currentSplashImage < 3 && currentSplashImage >= 0 )
                {
                    currentSplashImage++;
                }
            }
            else if ( localEvent.type == SDL_QUIT )
            {
                smgaEngine.setRunning(false);
            }
        }

    I should also mention that the SDL_Event 'localEvent' is part of the GameState parent class, while this event handling code is part of a SplashScreenState subclass. If anyone knows why this is happening, or if there is any way to improve my code, it would be helpful. I'm still a very new programmer trying to learn.

    UPDATE: I added a std::cout line to confirm that the code runs multiple times for a single KEYDOWN event. I also tried disabling SDL_EnableKeyRepeat, but it didn't fix the issue.

        void SplashScreenState::handleEvents()
        {
            SDL_PollEvent( &localEvent );

            if ( localEvent.type == SDL_KEYDOWN )
            {
                if ( currentSplashImage < 3 && currentSplashImage >= 0 )
                {
                    currentSplashImage++;
                    std::cout << "KEYDOWN.."; // <---- test cout line
                }
            }
            else if ( localEvent.type == SDL_QUIT )
            {
                smgaEngine.setRunning(false);
            }
        }

    This prints "KEYDOWN..KEYDOWN..KEYDOWN.." to the cout stream while a button is held.

    Read the article

  • What is the purpose of the canonical view volume?

    - by breadjesus
    I'm currently learning OpenGL and haven't been able to find an answer to this question. After the projection matrix is applied to the view space, the view space is "normalized" so that all the points lie within the range [-1, 1]. This is generally referred to as the "canonical view volume" or "normalized device coordinates". While I've found plenty of resources telling me about how this happens, I haven't seen anything about why it happens. What is the purpose of this step?

    Read the article

  • Why does RenderTarget2D overwrite other objects when I try to put text on a model?

    - by cad
    I am trying to draw an object composed of two cubes (A and B), one on top of the other (for now I have them a little further apart). I am able to do it, and this is the result: cube A is the blue one and cube B is the one with brown text that comes from a PNG texture. But I want to pass arbitrary text as a parameter for cube B. I have tried what @alecnash suggested in his question, but for some reason when I try to draw cube B, cube A disappears and everything turns purple. This is my draw code:

        public void Draw(GraphicsDevice graphicsDevice, SpriteBatch spriteBatch, Matrix viewMatrix, Matrix projectionMatrix)
        {
            graphicsDevice.BlendState = BlendState.Opaque;
            graphicsDevice.DepthStencilState = DepthStencilState.Default;
            graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
            graphicsDevice.SamplerStates[0] = SamplerState.LinearClamp;

            // CUBE A
            basicEffect.View = viewMatrix;
            basicEffect.Projection = projectionMatrix;
            basicEffect.World = Matrix.CreateTranslation(ModelPosition);
            basicEffect.VertexColorEnabled = true;

            foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                drawCUBE_TOP(graphicsDevice);
                drawCUBE_Floor(graphicsDevice);
                DrawFullSquareStripesFront(graphicsDevice, _numStrips, Color.Red, Color.Blue, _levelPercentage);
                DrawFullSquareStripesLeft(graphicsDevice, _numStrips, Color.Red, Color.Blue, _levelPercentage);
                DrawFullSquareStripesRight(graphicsDevice, _numStrips, Color.Red, Color.Blue, _levelPercentage);
                DrawFullSquareStripesBack(graphicsDevice, _numStrips, Color.Red, Color.Blue, _levelPercentage);
            }

            // CUBE B
            // Set the World matrix which defines the position of the cube
            texturedCubeEffect.World = Matrix.CreateTranslation(ModelPosition);
            // Set the View matrix which defines the camera and what it's looking at
            texturedCubeEffect.View = viewMatrix;
            // Set the Projection matrix which defines how we see the scene (field of view)
            texturedCubeEffect.Projection = projectionMatrix;
            // Enable textures on the cube effect. This is necessary to texture the model.
            texturedCubeEffect.TextureEnabled = true;

            Texture2D a = SpriteFontTextToTexture(graphicsDevice, spriteBatch, arialFont, "TEST ", Color.Black, Color.GhostWhite);
            texturedCubeEffect.Texture = a;
            //texturedCubeEffect.Texture = cubeTexture;

            // Enable some pretty lights
            texturedCubeEffect.EnableDefaultLighting();

            // Apply the effect and render the cube
            foreach (EffectPass pass in texturedCubeEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                cubeToDraw.RenderToDevice(graphicsDevice);
            }
        }

        private Texture2D SpriteFontTextToTexture(GraphicsDevice graphicsDevice, SpriteBatch spriteBatch, SpriteFont font, string text, Color backgroundColor, Color textColor)
        {
            Vector2 Size = font.MeasureString(text);
            RenderTarget2D renderTarget = new RenderTarget2D(graphicsDevice, (int)Size.X, (int)Size.Y);
            graphicsDevice.SetRenderTarget(renderTarget);
            graphicsDevice.Clear(Color.Transparent);

            spriteBatch.Begin();
            //have to redo the ColorTexture
            //spriteBatch.Draw(ColorTexture.Create(graphicsDevice, 1024, 1024, backgroundColor), Vector2.Zero, Color.White);
            spriteBatch.DrawString(font, text, Vector2.Zero, textColor);
            spriteBatch.End();

            graphicsDevice.SetRenderTarget(null);
            return renderTarget;
        }

    The way I generate the texture with dynamic text is:

        Texture2D a = SpriteFontTextToTexture(graphicsDevice, spriteBatch, arialFont, "TEST ", Color.Black, Color.GhostWhite);

    After commenting out several parts to see what caused the problem, it seems to come from this line:

        graphicsDevice.SetRenderTarget(renderTarget);
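
    For context on the purple result: XNA 4.0 creates the back buffer with RenderTargetUsage.DiscardContents, so switching to a render target in the middle of Draw and back again throws away whatever was already drawn, and the discarded area shows up as the purple purge colour. A hedged sketch of one common workaround is to build (and cache) the text texture before any 3D drawing touches the back buffer; the PreDraw name and the _textTexture/_textDirty fields below are illustrative, not from the original code. Creating the render target with RenderTargetUsage.PreserveContents is another option, at some performance cost.

        // Sketch: generate the text texture once, outside the main 3D pass, and reuse it.
        // _textTexture and _textDirty are hypothetical fields added for illustration.
        private Texture2D _textTexture;
        private bool _textDirty = true;

        public void PreDraw(GraphicsDevice graphicsDevice, SpriteBatch spriteBatch)
        {
            if (_textDirty)
            {
                _textTexture = SpriteFontTextToTexture(graphicsDevice, spriteBatch, arialFont,
                                                       "TEST ", Color.Black, Color.GhostWhite);
                _textDirty = false;
            }
            // Draw() can then set texturedCubeEffect.Texture = _textTexture without
            // switching render targets mid-frame.
        }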

    Read the article

  • Shader compile log depending on hardware

    - by dreta
    I'm done with the core of my graphics engine and I'm testing it on every platform I can get my hands on. What I've noticed is that different drivers return different content in the shader and program compile logs. For example, on my friend's laptop, if you successfully compile a shader then the log is simply empty. On my PC, however, I get some useful information along with it. So if I compile a vertex shader, I'll get: "Vertex shader was successfully compiled to run on hardware." That isn't that impressive, but here is what happens when I link a program. On my friend's computer the log is empty, since the program links. On my own computer I get: "Vertex shader(s) linked, fragment shader(s) linked." Which is great, because I'm attaching 0 as the geometry shader (I have a geometry shader file with junk in it, so it doesn't compile and the pointer is set to 0), and the log tells me exactly which shaders linked. Now it got me thinking: if I were going to buy a graphics card, is there a way to find out in advance whether I'll get this "extended" compile information? Maybe it's vendor specific? I don't really expect an answer, to be honest, since this seems a bit obscure, but maybe somebody has experience with this and could share it.

    Read the article

  • AndEngine: putting a bullet back in the pool when it leaves the screen

    - by Ashot
    I'm creating a bullet with a physics body. The Bullet class (which extends the Sprite class) has a die() method, which unregisters the physics connector, hides the sprite, and puts it in a pool:

        public void die() {
            Log.d("bulletDie", "See you in hell!");
            if (this.isVisible()) {
                this.setVisible(false);
                mPhysicsWorld.unregisterPhysicsConnector(physicsConnector);
                physicsConnector.setUpdatePosition(false);
                body.setActive(false);
                this.setIgnoreUpdate(true);
                bulletsPool.recyclePoolItem(this);
            }
        }

    In the onUpdate method of the PhysicsConnector I execute the die method when the sprite leaves the screen:

        physicsConnector = new PhysicsConnector(this, body, true, false) {
            @Override
            public void onUpdate(final float pSecondsElapsed) {
                super.onUpdate(pSecondsElapsed);
                if (!camera.isRectangularShapeVisible(_bullet)) {
                    Log.d("bulletDie", "Dead?");
                    _bullet.die();
                }
            }
        };

    It works as I expected, but _bullet.die() executes TWICE. What am I doing wrong, and is this the right way to hide sprites? Here is the full code of the Bullet class (it is an inner class of the class that represents the player):

        private class Bullet extends Sprite implements PhysicsConstants {
            private final Body body;
            private final PhysicsConnector physicsConnector;
            private final Bullet _bullet;
            private int id;

            public Bullet(float x, float y, ITextureRegion texture, VertexBufferObjectManager vertexBufferObjectManager) {
                super(x, y, texture, vertexBufferObjectManager);
                _bullet = this;
                id = bulletId++;
                body = PhysicsFactory.createCircleBody(mPhysicsWorld, this, BodyDef.BodyType.DynamicBody, bulletFixture);
                physicsConnector = new PhysicsConnector(this, body, true, false) {
                    @Override
                    public void onUpdate(final float pSecondsElapsed) {
                        super.onUpdate(pSecondsElapsed);
                        if (!camera.isRectangularShapeVisible(_bullet)) {
                            Log.d("bulletDie", "Dead?");
                            Log.d("bulletDie", id + "");
                            _bullet.die();
                        }
                    }
                };
                mPhysicsWorld.registerPhysicsConnector(physicsConnector);
                $this.getParent().attachChild(this);
            }

            public void reset() {
                final float angle = canon.getRotation();
                final float x = (float) ((Math.cos(MathUtils.degToRad(angle)) * radius) + centerX) / PIXEL_TO_METER_RATIO_DEFAULT;
                final float y = (float) ((Math.sin(MathUtils.degToRad(angle)) * radius) + centerY) / PIXEL_TO_METER_RATIO_DEFAULT;
                this.setVisible(true);
                this.setIgnoreUpdate(false);
                body.setActive(true);
                mPhysicsWorld.registerPhysicsConnector(physicsConnector);
                body.setTransform(new Vector2(x, y), 0);
            }

            public Body getBody() {
                return body;
            }

            public void setLinearVelocity(Vector2 velocity) {
                body.setLinearVelocity(velocity);
            }

            public void die() {
                Log.d("bulletDie", "See you in hell!");
                if (this.isVisible()) {
                    this.setVisible(false);
                    mPhysicsWorld.unregisterPhysicsConnector(physicsConnector);
                    physicsConnector.setUpdatePosition(false);
                    body.setActive(false);
                    this.setIgnoreUpdate(true);
                    bulletsPool.recyclePoolItem(this);
                }
            }
        }

    Read the article

  • Android Dynamic 2D Map

    - by Deltharis
    My problem is that I want to create a 2D tiled map. Yes, I know it's been asked a lot. I've seen answers that propose the use of Tiled, but it only seems to let me generate static maps that do not change once generated. What I need is a large, uniform space of empty tiles upon which players may place various buildings (some spanning more than one tile while logically being the same building). How should I approach this in Android? Do I make some kind of TableLayout, use an arbitrarily large number of rows and ImageViews (with my emptyTile), and then somehow change image IDs from there in an event-driven way? I'd think that only a portion of the map should be visible at a time, but I don't see how scrolling around could be part of that structure.
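
    Independent of the Android view layer, the usual data structure here is a plain 2D array of tile ids, with a multi-tile building writing the same id into every cell it covers; the renderer then draws only the cells inside the camera window. An engine-agnostic C# sketch (all names are illustrative assumptions):

        // Minimal tile grid: 0 = empty, any other value = the id of the building occupying that cell.
        // A building spanning several cells stores the same id in each of them.
        class TileMap
        {
            private readonly int[,] cells;
            public TileMap(int width, int height) { cells = new int[width, height]; }

            public bool CanPlace(int x, int y, int w, int h)
            {
                for (int i = x; i < x + w; i++)
                    for (int j = y; j < y + h; j++)
                        if (cells[i, j] != 0) return false;
                return true;
            }

            public void Place(int buildingId, int x, int y, int w, int h)
            {
                for (int i = x; i < x + w; i++)
                    for (int j = y; j < y + h; j++)
                        cells[i, j] = buildingId;
            }

            // The renderer asks only for the cells currently under the camera window.
            public int GetCell(int x, int y) { return cells[x, y]; }
        }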

    Read the article

  • Rotating an object on an arc

    - by gardian06
    I am trying to get a turret to rotate on an arc, and have hit a wall. I have 8 possible starting orientations for the turrets, and want them to rotate over a 90 degree arc. I currently take the starting rotation of the turret and from that derive the positive and negative boundaries of the arc. Because of engine restrictions (Unity) I have to do all of my tests against a value in [0, 360], and due to numerical precision issues I cannot test against specific values. I would like to write a general test without having to go in and jury-rig cases. My current test is:

        // member variables
        public float negBound;
        public float posBound;

        // set in Start() (called immediately after construction)
        // eulerAngles.y is the degree measure of the starting y rotation
        negBound = transform.eulerAngles.y - 45;
        posBound = transform.eulerAngles.y + 45;

        // ensure that values are within bounds
        if (negBound < 0) {
            negBound += 360;
        } else if (posBound > 360) {
            posBound -= 360;
        }

        // called from Update() when the target is not in the firing line
        void Rotate() {
            // controls what direction
            if (transform.eulerAngles.y > posBound) {
                dir = -1;
            } else if (transform.eulerAngles.y < negBound) {
                dir = 1;
            }
            // rotate object
        }

    The following table lists the values for my different cases. Read it as: base is the starting rotation of the turret, neg is the negative boundary, pos is the positive boundary, range is the acceptable range of values, and works is whether it performs as expected with the current code.

        base | neg | pos | range             | works
        -----+-----+-----+-------------------+------
           0 | 315 |  45 | 315-0, 0-45       | no
          45 |   0 |  90 | 0-45, 45-90       | yes
         135 |  90 | 180 | 90-135, 135-180   | yes
         180 | 135 | 225 | 135-180, 180-225  | yes
         225 | 180 | 270 | 180-225, 225-270  | yes
         270 | 225 | 315 | 225-270, 270-315  | no
         315 | 270 |   0 | 270-315, 315-0    | no

    I will need to do all tests from derived or stored values, but I cannot figure out how to get all of my cases to work simultaneously. I attempted to concatenate the two tests:

        if ((transform.eulerAngles.y > posBound) && (transform.eulerAngles.y < negBound)) {
            dir *= -1;
        }

    but this caused only the first case to be successful. I also attempted to store an opposite value and test against it:

        void Rotate() {
            // controls what direction
            if ((transform.eulerAngles.y > posBound) && (transform.eulerAngles.y < oposite)) {
                dir = -1;
            } else if ((transform.eulerAngles.y < negBound) && (transform.eulerAngles.y > oposite)) {
                dir = 1;
            }
            // rotate object
        }

    but this causes the opposite situation to the one indicated in the table. What am I missing here?
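
    One way to avoid the wrap-around cases entirely is to compare the signed angular offset from the starting rotation instead of raw values in [0, 360]. Unity's Mathf.DeltaAngle returns the shortest signed difference between two angles. A hedged sketch, with baseYaw standing in for the turret's starting y rotation cached in Start():

        // Sketch: steer back toward the arc using the signed offset from the starting yaw.
        // baseYaw is assumed to be cached in Start(); 45 is the half-arc from the question.
        void Rotate() {
            float offset = Mathf.DeltaAngle(baseYaw, transform.eulerAngles.y); // in (-180, 180]
            if (offset > 45f) {
                dir = -1;   // past the positive boundary, turn back
            } else if (offset < -45f) {
                dir = 1;    // past the negative boundary, turn back
            }
            // rotate object
        }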

    Read the article

  • How to display consistent background image

    - by Tofu_Craving_Redish_BlueDragon
    Drawing a large background is relatively slow in PyGame. To avoid drawing the background every frame, you could draw it once and then do nothing. However, if something is drawn on top of the surface and keeps moving, you will need to redraw the background in order to "erase" the pixels left behind by the moving object; otherwise you get "trails" behind it. I have a moving object in my PyGame project, but I do not want to clear the colour buffer by redrawing the whole background image every frame, because that is slow. My solution: I will "clear" only the required portions of the buffer (where the trails of the moving object are left) by redrawing just those portions of the background. Is there a better way to keep the background consistent?

    Read the article

  • String on a model

    - by alecnash
    I am trying to put a string on a model and I want it to be dynamic. I did some research and came up with drawing the text onto a texture and then setting that texture on the model. I use something like this:

        public static Texture2D SpriteFontTextToTexture(SpriteFont font, string text, Color backgroundColor, Color textColor)
        {
            Size = font.MeasureString(text);
            RenderTarget2D renderTarget = new RenderTarget2D(GraphicsDevice, (int)Size.X, (int)Size.Y);
            GraphicsDevice.SetRenderTarget(renderTarget);
            GraphicsDevice.Clear(Color.Transparent);

            Spritbatch.Begin();
            //have to redo the ColorTexture
            Spritbatch.Draw(ColorTexture.Create(GraphicsDevice, 1024, 1024, backgroundColor), Vector2.Zero, Color.White);
            Spritbatch.DrawString(font, text, Vector2.Zero, textColor);
            Spritbatch.End();

            GraphicsDevice.SetRenderTarget(null);
            return renderTarget;
        }

    When I was working with primitives rather than models, everything worked fine because I set the texture exactly where I wanted it, but with the model (a RoundedRect 3D button) it looks like that. Is there a way to have the text centered on only one side?
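
    Where the text lands on the model is ultimately decided by the model's UV coordinates, but the string can at least be centred within the generated texture by rendering to a fixed-size target and offsetting the draw position. A hedged sketch (the 256x256 target size is an arbitrary assumption):

        // Sketch: render the string centred inside a square render target so that,
        // on a face mapped with full 0..1 UVs, the text appears in the middle.
        public static Texture2D CenteredTextTexture(GraphicsDevice device, SpriteBatch batch,
                                                    SpriteFont font, string text, Color textColor)
        {
            const int size = 256; // assumed texture size
            Vector2 textSize = font.MeasureString(text);
            RenderTarget2D target = new RenderTarget2D(device, size, size);

            device.SetRenderTarget(target);
            device.Clear(Color.Transparent);

            batch.Begin();
            batch.DrawString(font, text,
                new Vector2((size - textSize.X) / 2f, (size - textSize.Y) / 2f), textColor);
            batch.End();

            device.SetRenderTarget(null);
            return target;
        }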

    Read the article

  • How to use double buffering in AWT? [on hold]

    - by Ishanth
        import java.awt.event.*;
        import java.awt.*;

        class circle1 extends Frame implements KeyListener {
            public int a = 300;
            public int b = 70;
            public int pacx = 360;
            public int pacy = 270;

            public circle1() {
                setTitle("circle");
                addKeyListener(this);
                repaint();
            }

            public void paint(Graphics g) {
                g.fillArc(a, b, 60, 60, pacx, pacy);
            }

            public void keyPressed(KeyEvent e) {
                int key = e.getKeyCode();
                System.out.println(key);
                if (key == 38) {
                    b = b - 5;                  // move pacman up
                    pacx = 135; pacy = 270;     // pacman mouth facing up
                    if (b == 75 && a >= 20 || b == 75 && a <= 945) {
                        b = b + 5;
                    } else {
                        repaint();
                    }
                } else if (key == 40) {
                    b = b + 5;                  // move pacman down
                    pacx = 315; pacy = 270;     // pacman mouth facing down
                    if (b == 645 && a >= 20 || b == 645 && a <= 940) {
                        b = b - 5;
                    } else {
                        repaint();
                    }
                } else if (key == 37) {
                    a = a - 5;                  // move pacman left
                    pacx = 227; pacy = 270;     // pacman mouth facing left
                    if (a == 15 && b >= 75 || a == 15 && b <= 640) {
                        a = a + 5;
                    } else {
                        repaint();
                    }
                } else if (key == 39) {
                    a = a + 5;                  // move pacman right
                    pacx = 42; pacy = 270;      // pacman mouth facing right
                    if (a == 945 && a >= 80 || a == 945 && b <= 640) {
                        a = a - 5;
                    } else {
                        repaint();
                    }
                }
            }

            public void keyReleased(KeyEvent e) {}
            public void keyTyped(KeyEvent e) {}

            public static void main(String args[]) {
                circle1 c = new circle1();
                c.setVisible(true);
                c.setSize(400, 400);
            }
        }

    Read the article

  • Making video from 3D graphics in OpenGL

    - by MVTC
    What are some of the preferred methods or libraries for creating video from an OpenGL graphics simulation? For example, I want to create a visualization (video) of an N-body gravity simulation by rendering non-real-time OpenGL frames. The simulation is already coded; I just don't know how to convert it to video. EDIT: I am also interested in providing the following functionality: the user can adjust parameters, including the time step between captured frames, and then initiate the simulation. The user waits for the simulation to complete and can then watch the results. The user is able to increase or decrease the playback speed of the simulation, where in slow motion more frames are used (i.e. you see higher-resolution time steps), and when the speed is increased you see lower-resolution time steps at a higher rate, but the number of frames per second shown on the screen stays constant.
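
    One common approach is to read each rendered frame back (for example with glReadPixels) and pipe the raw pixels into an external encoder such as ffmpeg, which also decouples the capture rate from the playback rate. A rough C# sketch of the piping side, assuming 24-bit RGB frames of a known size; the ffmpeg arguments shown are typical but should be checked against the installed ffmpeg build:

        using System.Diagnostics;
        using System.IO;

        // Sketch: start ffmpeg reading raw RGB frames from stdin and writing an MP4.
        // Width, height, fps, and the output name are assumptions for illustration.
        class FrameEncoder
        {
            private readonly Process ffmpeg;
            private readonly Stream stdin;

            public FrameEncoder(int width, int height, int fps, string outputFile)
            {
                ffmpeg = new Process();
                ffmpeg.StartInfo.FileName = "ffmpeg";
                ffmpeg.StartInfo.Arguments =
                    $"-f rawvideo -pixel_format rgb24 -video_size {width}x{height} " +
                    $"-framerate {fps} -i - -y {outputFile}";
                ffmpeg.StartInfo.UseShellExecute = false;
                ffmpeg.StartInfo.RedirectStandardInput = true;
                ffmpeg.Start();
                stdin = ffmpeg.StandardInput.BaseStream;
            }

            // Call once per simulation step with the pixels read back from the frame buffer.
            public void WriteFrame(byte[] rgbPixels) => stdin.Write(rgbPixels, 0, rgbPixels.Length);

            public void Finish()
            {
                stdin.Close();
                ffmpeg.WaitForExit();
            }
        }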

    Read the article

  • IDirect3DDevice9::GetRenderTargetData() returns no data

    - by P. Avery
    I've got a simple function to get the render-target data of an RT (with the default pool). This particular RT has a resolution of 1x1 (it's the 10th and final mip of a texture). Here is my code to get the data for IDirect3DSurface9 *pTargetSurface:

        IDirect3DSurface9 *pSOS = NULL;
        pd3dDevice->CreateOffscreenPlainSurface( 1, 1, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &pSOS, NULL );

        // get residual energy
        if( FAILED( hr = pd3dDevice->GetRenderTargetData( pTargetSurface, pSOS ) ) )
        {
            DebugStringDX( ClassName, "Failed to IDirect3DDevice9::GetRenderTargetData() at DownsampleArea()", __LINE__, hr );
            goto Exit;
        }

        // lock surface
        if( FAILED( hr = pSOS->LockRect( &rct, NULL, D3DLOCK_READONLY ) ) )
        {
            DebugStringDX( ClassName, "Failed to IDirect3DSurface9::LockRect() at DownsampleArea()", __LINE__, hr );
            goto Exit;
        }

        // get residual energy from downsampled texture
        pByte = ( BYTE* )rct.pBits;
        D3DXVECTOR4 vEnergy;
        vEnergy.z = ( float )pByte[ 0 ] / 255.0f;
        vEnergy.y = ( float )pByte[ 1 ] / 255.0f;
        vEnergy.x = ( float )pByte[ 2 ] / 255.0f;
        vEnergy.w = ( float )pByte[ 3 ] / 255.0f;

        V( pSOS->UnlockRect() );

    All formatting and settings are correct, and DirectX in debug mode shows no errors. The problem is that the 4 bytes above are 0. I know this is incorrect from debugging with PIX: PIX shows that the RGB values are 0.078 and alpha is 1. These values are not less than what can be represented by a single byte (1/255). Any ideas? Am I copying the render-target data correctly?

    Read the article

  • How can I achieve this lighting with OpenGL?

    - by Smallbro
    I'm currently trying to implement a type of "smooth" lighting. How can I achieve lighting which looks like this, using OpenGL: http://dl.dropbox.com/u/1668516/concept/warp3.png ? I've attempted to use blending modes and have come very close to making it work, but it came out like this: https://pbs.twimg.com/media/A1071viCEAAlFmJ.png, and I also wasn't able to change the alpha of the black background, which I want to be able to do. Could I get a few pointers in the right direction?

    Read the article

  • Playing a death anim on an enemy that I want to remove

    - by Max
    I've been trying to find a tutorial on how best to make animations in Android. I already have some animations for my enemies and my character that are controlled by rectangles, changing the rectangle frame between updates using a picture like this. When I shoot my enemies they lose HP, and when their HP == 0 they get removed. As long as I'm using an ArrayList (which I do for all enemies and bullets) I'm fine, since I can just call list.remove(i). But on a boss level, when the boss's HP == 0, I want to remove him and play an animation of an explosion of stars before the end screen. Is there a preferred way to do temporary animations like this? If you can give me an example or point me to a tutorial, I'd be really grateful!
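
    One common pattern is to keep short-lived effects in their own list of one-shot animations that know when they have finished, and sweep that list every update. An engine-agnostic C# sketch (all names are illustrative):

        // Sketch: a temporary animation that plays through its frames once and then
        // reports itself finished, so the update loop can remove it.
        class OneShotAnimation
        {
            private readonly int frameCount;
            private readonly float frameDuration; // seconds per frame
            private float elapsed;

            public OneShotAnimation(int frameCount, float frameDuration)
            {
                this.frameCount = frameCount;
                this.frameDuration = frameDuration;
            }

            public int CurrentFrame => (int)(elapsed / frameDuration);
            public bool IsFinished => CurrentFrame >= frameCount;

            public void Update(float deltaSeconds) => elapsed += deltaSeconds;
        }

        // In the game's update loop, with effects being a List<OneShotAnimation>:
        //     foreach (var fx in effects) fx.Update(dt);
        //     effects.RemoveAll(fx => fx.IsFinished);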

    Read the article

  • Certain grid lines not rendering as expected

    - by row1
    I am drawing a simple quad (a triangle strip with 4 vertices) as the floor, and then drawing an 8x8 grid on top of it (a collection of vertex pairs for a line list). The vertical grid lines work fine (apart from being very aliased), but some of the horizontal lines do not get rendered. The grid renders fine if I do not draw the quad.

        foreach (EffectPass pass in _Effect.CurrentTechnique.Passes)
        {
            pass.Apply();

            CurrentGraphicsDevice.SetVertexBuffer(_VertexFloorBuffer);
            _Engine.CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);

            // Some of the horizontal lines seem to disappear if we draw the above quad.
            CurrentGraphicsDevice.SetVertexBuffer(_VertexGridBuffer);
            CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.LineList, 0, _VertexGridBuffer.VertexCount / 2);
        }

    What could be causing these lines not to be rendered?

    Update: I added the code below after drawing my quad and grid, and it started working. But I am not sure why that works, since I thought this code was only there to draw the WPF controls.

        elementRenderer.Render();
        spriteBatch.Begin();
        spriteBatch.Draw(elementRenderer.Texture, Vector2.Zero, Color.White);
        spriteBatch.End();

    Read the article

  • Working with lots of cubes. Improving performance?

    - by Randomman159
    Edit: To sum the question up, I have a voxel-based world (Minecraft style; thanks, Communist Duck) which is suffering from poor performance. I am not sure of the exact source, but I would like any possible advice on how to get rid of it. I am working on a project where a world consists of a large quantity of cubes (I would give you a number, but world sizes are user-defined). My test world is around 48 x 32 x 48 blocks. Basically these blocks don't do anything by themselves; they just sit there. They come into play during player interaction: I need to check which cubes the user's mouse interacts with (mouse over, clicking, etc.), and do collision detection as the player moves. At first I had a massive amount of lag from looping through every block. I have managed to decrease that lag by looping through all the blocks, finding which blocks are within a particular range of the character, and then only looping through those blocks for the collision detection and so on. However, I am still running at a depressing 2 fps. Does anyone have any other ideas on how I could decrease this lag? By the way, I am using XNA (C#) and yes, it is 3D.
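
    Because the blocks sit on a regular grid, the usual fix is to never scan all of them: store them in a 3D array indexed by grid coordinates, convert the player's position to grid coordinates, and only look at the few cells in the immediate neighbourhood. A hedged C# sketch (the Block type, world dimensions, and blockSize are assumptions):

        // Sketch: O(1) neighbourhood lookup instead of scanning every block.
        Block[,,] blocks = new Block[48, 32, 48]; // indexed by grid coordinates
        float blockSize = 1f;

        IEnumerable<Block> BlocksNear(Vector3 position, int radius)
        {
            int cx = (int)(position.X / blockSize);
            int cy = (int)(position.Y / blockSize);
            int cz = (int)(position.Z / blockSize);

            for (int x = Math.Max(cx - radius, 0); x <= Math.Min(cx + radius, 47); x++)
                for (int y = Math.Max(cy - radius, 0); y <= Math.Min(cy + radius, 31); y++)
                    for (int z = Math.Max(cz - radius, 0); z <= Math.Min(cz + radius, 47); z++)
                        if (blocks[x, y, z] != null)
                            yield return blocks[x, y, z];
        }

    If the frame rate stays low once collision is cheap, the draw side (one draw call per cube, no culling of faces hidden between neighbouring cubes) is the usual next suspect.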

    Read the article

  • Using Bullet physics engine to find the moment of object contact before penetration

    - by MooMoo
    I would like to use the Bullet physics engine to simulate objects in a 3D world. One of the objects will be moved using the position from a 3D mouse control. I will call it the "mouse object", and any other object in the world "object A". I define the time before the mouse object and object A collide as t-1, and the time when the mouse object penetrates object A as t. Now there is a problem with rendering the scene: when I move the mouse very fast, the mouse object ends up inside object A before object A starts to move. I would like the mouse object to stop immediately, attached to object A. Also, if object A moves, the mouse object should follow (stay attached to) object A rather than stopping at the first collision point. This is what I did: I find the position of the mouse object at time t-1 and time t, which I name pos(t-1) and pos(t). The contact time is somewhere between t-1 and t; I name it t_contact, so the contact position (without penetration) between the mouse object and object A is pos(t_contact). Then I create multiple candidate positions for the mouse object using this equation:

        pos[n] = pos(t-1) + C * ( pos(t) - pos(t-1) )    where 0 <= C <= 1

    If I choose C = 0.1, 0.2, 0.3, 0.4, ..., 1.0, I get pos[n] for 10 values. Then I test collision for all 10 of these candidate positions and choose the one that separates "no collision" from "collision". This method feels very inefficient. I am not sure how other people find the time of contact or the position of contact when object A can also move.
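
    The even sampling of C can be replaced by a bisection: repeatedly test the midpoint of the interval that still brackets the contact, which converges much faster for the same number of collision queries. Bullet also exposes swept queries (for example btCollisionWorld::convexSweepTest) that report a time of impact directly, which avoids the search entirely. A hedged, engine-agnostic C# sketch, where Collides() stands in for whatever overlap test is available:

        // Sketch: bisect the motion interval [0, 1] to find the last parameter
        // at which the mouse object does not yet overlap object A.
        static float FindContactParameter(Vector3 posPrev, Vector3 posNow,
                                          Func<Vector3, bool> Collides, int iterations = 16)
        {
            float lo = 0f;   // known non-colliding position (pos at t-1)
            float hi = 1f;   // known colliding position (the current, penetrating pos at t)
            for (int i = 0; i < iterations; i++)
            {
                float mid = 0.5f * (lo + hi);
                Vector3 candidate = Vector3.Lerp(posPrev, posNow, mid);
                if (Collides(candidate)) hi = mid; else lo = mid;
            }
            return lo; // pos(t_contact) is approximately Lerp(posPrev, posNow, lo)
        }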

    Read the article

  • Matrix to transform a unit cube to the space defined by 8 arbitrary points

    - by aadster
    I asked a similar question to this already, but I think this is a clearer statement of what I'm trying to achieve, or of whether it's possible at all. I'm trying to find a transformation (ideally a matrix) which would map the 8 corners of a 3D unit cube onto 8 arbitrary points in space. The 8 target points have no known structure. My gut feeling is that a single matrix cannot provide this transform, since the deformed cube's faces can become concave, but are there any other methods of transformation? Thanks!
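
    A single linear (or even affine or projective) matrix cannot hit 8 arbitrary targets in general, but trilinear interpolation of the 8 corner points can: every point of the unit cube, written with parameters (u, v, w) in [0, 1], is blended from the corners. A hedged C# sketch; the corner ordering is an assumption and must match however the 8 points are supplied:

        // Sketch: map a point (u, v, w) of the unit cube onto the cell defined by
        // 8 arbitrary corners via trilinear interpolation. Assumed corner order:
        // c000, c100, c010, c110, c001, c101, c011, c111 (x varies fastest, then y, then z).
        static Vector3 TrilinearMap(Vector3[] c, float u, float v, float w)
        {
            Vector3 x00 = Vector3.Lerp(c[0], c[1], u);
            Vector3 x10 = Vector3.Lerp(c[2], c[3], u);
            Vector3 x01 = Vector3.Lerp(c[4], c[5], u);
            Vector3 x11 = Vector3.Lerp(c[6], c[7], u);

            Vector3 y0 = Vector3.Lerp(x00, x10, v);
            Vector3 y1 = Vector3.Lerp(x01, x11, v);

            return Vector3.Lerp(y0, y1, w);
        }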

    Read the article

  • Forward rendering and multiple shadow maps

    - by Irbis
    I have two light sources in my scene. I created two FBOs which store depth textures for these lights. A render loop looks like this:

        bind fbo1
        save depth values for first light
        unbind fbo1
        bind fbo2
        save depth values for second light
        unbind fbo2
        enable additive blending
        bind first depth texture
        render scene
        bind second depth texture
        render scene
        disable additive blending

    For one light source the program works fine. For multiple light sources I use additive blending to accumulate the lighting results, but then some objects become transparent (for example, when an object which is further from the camera is drawn before an object which is closer to the camera). How do I resolve that problem? How should I accumulate the lighting for many light sources (many shadow maps)? P.S. I use OpenGL/GLSL 3.3+.

    Read the article

  • Circular movement: eliminating speed-ups near Y = 0

    - by Fibericon
    I have a basic algorithm to rotate an enemy around a circle of radius 200 units centred at 0. This is how I'm achieving that:

        if (position.Y <= 0 && position.X > -200)
        {
            position.X -= 2;
            position.Y = 0 - (float)Math.Sqrt((200 * 200) - (position.X * position.X));
        }
        else
        {
            position.X += 2;
            position.Y = (float)Math.Sqrt((200 * 200) - (position.X * position.X));
        }

    It does work, and I've ensured that at no point does either X or Y equal NaN. However, when Y approaches 0, the movement seems to go significantly faster. This surprises me, because the Y values are derived from X, which is being incremented by a steady amount. What can I do to smooth out the speed?
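
    Stepping X by a constant amount gives non-uniform speed along the arc: near Y = 0 the circle is nearly vertical, so a 2-unit change in X forces a large change in Y. Stepping an angle instead keeps the arc speed constant. A hedged C# sketch (angularSpeed is an assumed tuning value):

        // Sketch: parametrise the orbit by angle so speed along the arc is constant.
        float angle = 0f;                 // current angle in radians
        const float radius = 200f;
        const float angularSpeed = 0.01f; // radians per update, assumed value

        void UpdateOrbit()
        {
            angle += angularSpeed;
            position.X = radius * (float)Math.Cos(angle);
            position.Y = radius * (float)Math.Sin(angle);
        }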

    Read the article

  • Calculate initial velocity of a 3d vector-based projectile

    - by Frotty
    Okay, so I have a projectile with two vectors, position and velocity. I now want to calculate the initial velocity it needs in order to reach a specific point on the map. Or, more precisely: how high does the starting z-velocity have to be (x and y are already defined by a speed variable) for the projectile to hit the marked position? The projectile is influenced by a constant gravity vector. All calculations are done 32 times per second. I want this because I don't want to use a parabola function, so the projectile can still be influenced by other sources that simply add some velocity. I didn't really find anything on this topic and would be glad for any helping answer. Thanks.
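
    Under constant gravity there is a closed form: if the fixed horizontal speed gives a flight time T = horizontalDistance / speed, then dz = vz*T - 0.5*g*T^2, so vz = dz/T + 0.5*g*T. A hedged C# sketch; this is the continuous-time formula, so with 32 discrete integration steps per second the result will be close but not exact, depending on the integrator:

        // Sketch: starting z-velocity needed to cover the height difference dz while the
        // horizontal motion takes flightTime seconds, under gravity g (> 0, pulling down).
        static float InitialZVelocity(float dz, float flightTime, float gravity)
        {
            // z(T) = vz * T - 0.5 * g * T^2   =>   vz = dz / T + 0.5 * g * T
            return dz / flightTime + 0.5f * gravity * flightTime;
        }

        // Usage: flightTime = horizontalDistance / horizontalSpeed.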

    Read the article
