Search Results

Search found 19855 results on 795 pages for 'game console'.


  • How to use double buffering in AWT? [on hold]

    - by Ishanth
      import java.awt.event.*;
      import java.awt.*;

      class circle1 extends Frame implements KeyListener {
          public int a = 300;
          public int b = 70;
          public int pacx = 360;
          public int pacy = 270;

          public circle1() {
              setTitle("circle");
              addKeyListener(this);
              repaint();
          }

          public void paint(Graphics g) {
              g.fillArc(a, b, 60, 60, pacx, pacy);
          }

          public void keyPressed(KeyEvent e) {
              int key = e.getKeyCode();
              System.out.println(key);
              if (key == 38) {
                  b = b - 5;                // move pacman up
                  pacx = 135; pacy = 270;   // pacman mouth faces up
                  if (b == 75 && a >= 20 || b == 75 && a <= 945) {
                      b = b + 5;
                  } else {
                      repaint();
                  }
              } else if (key == 40) {
                  b = b + 5;                // move pacman down
                  pacx = 315; pacy = 270;   // pacman mouth faces down
                  if (b == 645 && a >= 20 || b == 645 && a <= 940) {
                      b = b - 5;
                  } else {
                      repaint();
                  }
              } else if (key == 37) {
                  a = a - 5;                // move pacman left
                  pacx = 227; pacy = 270;   // pacman mouth faces left
                  if (a == 15 && b >= 75 || a == 15 && b <= 640) {
                      a = a + 5;
                  } else {
                      repaint();
                  }
              } else if (key == 39) {
                  a = a + 5;                // move pacman right
                  pacx = 42; pacy = 270;    // pacman mouth faces right
                  if (a == 945 && a >= 80 || a == 945 && b <= 640) {
                      a = a - 5;
                  } else {
                      repaint();
                  }
              }
          }

          public void keyReleased(KeyEvent e) {}
          public void keyTyped(KeyEvent e) {}

          public static void main(String args[]) {
              circle1 c = new circle1();
              c.setVisible(true);
              c.setSize(400, 400);
          }
      }
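
    The standard AWT answer is to render into an offscreen Image and override update() so the frame is not cleared on screen before each paint. A minimal sketch of the technique, assuming plain AWT (names are illustrative):

      import java.awt.*;

      class BufferedFrame extends Frame {
          private Image backBuffer;  // offscreen image all drawing goes into

          @Override
          public void update(Graphics g) {
              paint(g);  // skip AWT's default on-screen background clear (the flicker source)
          }

          @Override
          public void paint(Graphics g) {
              if (backBuffer == null) {
                  backBuffer = createImage(getWidth(), getHeight());
              }
              Graphics bg = backBuffer.getGraphics();
              bg.setColor(getBackground());
              bg.fillRect(0, 0, getWidth(), getHeight()); // clear offscreen, not on screen
              bg.setColor(Color.YELLOW);
              bg.fillArc(300, 70, 60, 60, 360, 270);      // draw the scene offscreen
              bg.dispose();
              g.drawImage(backBuffer, 0, 0, this);        // blit the finished frame in one call
          }
      }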

    Read the article

  • How to move a rectangle properly?

    - by bodycountPP
    I recently started to learn OpenGL. Right now I finished the first chapter of the "OpenGL SuperBible". There were two examples. The first had the complete code and showed how to draw a simple triangle. The second example is supposed to show how to move a rectangle using SpecialKeys. The only code provided for this example was the SpecialKeys method. I still tried to implement it, but I had two problems. In the previous example I declared and instantiated vVerts in the SetupRC() method. Now, as it is also used in the SpecialKeys() method, I moved the declaration and instantiation to the top of the code. Is this proper C++ practice? I copied the part where vertex positions are recalculated from the book, but I had to pick the vertices for the rectangle on my own. So now, every time I press a key for the first time, the rectangle's upper-left vertex is moved to (-0.5; -0.5). This is caused by

      GLfloat blockX = vVerts[0]; // Upper left X
      GLfloat blockY = vVerts[7]; // Upper left Y

    But I also think this is the reason why my rectangle is shifted in the beginning. After the first time a key was pressed, everything works just fine. Here is my complete code; I hope you can help me on those two points.

      GLBatch squareBatch;
      GLShaderManager shaderManager;

      // Load up a quad
      GLfloat vVerts[] = { -0.5f,  0.5f, 0.0f,
                            0.5f,  0.5f, 0.0f,
                            0.5f, -0.5f, 0.0f,
                           -0.5f, -0.5f, 0.0f };

      // Window has changed size, or has just been created. We need to use the
      // window dimensions to set the viewport and the projection matrix.
      void ChangeSize(int w, int h)
      {
          glViewport(0, 0, w, h);
      }

      // Called to draw the scene.
      void RenderScene(void)
      {
          // Clear the window with the current clearing color
          glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

          GLfloat vRed[] = { 1.0f, 0.0f, 0.0f, 1.0f };
          shaderManager.UseStockShader(GLT_SHADER_IDENTITY, vRed);
          squareBatch.Draw();

          // Perform the buffer swap to display the back buffer
          glutSwapBuffers();
      }

      // This function does any needed initialization on the rendering context.
      // This is the first opportunity to do any OpenGL related tasks.
      void SetupRC()
      {
          // Blue background
          glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
          shaderManager.InitializeStockShaders();
          squareBatch.Begin(GL_QUADS, 4);
          squareBatch.CopyVertexData3f(vVerts);
          squareBatch.End();
      }

      // Respond to arrow keys by moving the camera frame of reference
      void SpecialKeys(int key, int x, int y)
      {
          GLfloat stepSize = 0.025f;
          GLfloat blockSize = 0.5f;
          GLfloat blockX = vVerts[0]; // Upper left X
          GLfloat blockY = vVerts[7]; // Upper left Y

          if (key == GLUT_KEY_UP)    { blockY += stepSize; }
          if (key == GLUT_KEY_DOWN)  { blockY -= stepSize; }
          if (key == GLUT_KEY_LEFT)  { blockX -= stepSize; }
          if (key == GLUT_KEY_RIGHT) { blockX += stepSize; }

          // Recalculate vertex positions
          vVerts[0]  = blockX;
          vVerts[1]  = blockY - blockSize * 2;
          vVerts[3]  = blockX + blockSize * 2;
          vVerts[4]  = blockY - blockSize * 2;
          vVerts[6]  = blockX + blockSize * 2;
          vVerts[7]  = blockY;
          vVerts[9]  = blockX;
          vVerts[10] = blockY;

          squareBatch.CopyVertexData3f(vVerts);
          glutPostRedisplay();
      }

      // Main entry point for GLUT based programs
      int main(int argc, char** argv)
      {
          // Sets the working directory. Not really needed
          gltSetWorkingDirectory(argv[0]);

          // Passes along the command-line parameters and initializes the GLUT library.
          glutInit(&argc, argv);

          // Tells the GLUT library what type of display mode to use when creating the
          // window: double buffered, RGBA color mode, depth buffer and stencil buffer.
          glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL);

          // Window size
          glutInitWindowSize(800, 600);
          glutCreateWindow("MoveRect");
          glutReshapeFunc(ChangeSize);
          glutDisplayFunc(RenderScene);
          glutSpecialFunc(SpecialKeys);

          // Initialize the GLEW library
          GLenum err = glewInit();

          // Check that nothing went wrong with driver initialization before rendering.
          if (GLEW_OK != err)
          {
              fprintf(stderr, "Glew Error: %s\n", glewGetErrorString(err));
              return 1;
          }

          SetupRC();
          glutMainLoop();
          return 0;
      }
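
    One way to get rid of the initial shift is to make a single helper own the vertex layout, so SetupRC() and SpecialKeys() can never disagree about which indices hold the upper-left corner. A Java-flavored sketch of the idea, using the same 12-float layout as above (UL, UR, LR, LL; edge is the full side length):

      // One function writes all four corners from the stored upper-left position,
      // so the initial data and the per-key recalculation always match.
      // The key handler should then read the corner back from v[0] / v[1].
      static void writeQuad(float[] v, float ulX, float ulY, float edge) {
          v[0] = ulX;        v[1]  = ulY;        // upper left
          v[3] = ulX + edge; v[4]  = ulY;        // upper right
          v[6] = ulX + edge; v[7]  = ulY - edge; // lower right
          v[9] = ulX;        v[10] = ulY - edge; // lower left
      }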

    Read the article

  • matrix to transform unit cube to space defined by 8 arbitrary points

    - by aadster
    I asked a question similar to this already, but I think this is a clearer statement of what I'm trying to achieve, or of whether it's possible at all! I'm trying to find a transformation (ideally a matrix) which would transform the 8 points of a 3d unit cube to 8 arbitrary points in space. The 8 target points have no known structure. My gut feeling is that a matrix is unable to provide this xform, since the transformed cube's faces can be concave, but are there any other methods of transformation? Thanks!
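
    A single 4x4 matrix cannot do this in general: even a projective transform has only 15 degrees of freedom, while 8 arbitrary target points impose 24 constraints. Trilinear interpolation over the 8 target corners, however, maps the unit cube exactly (faces become bilinear patches, so they may be non-planar). A sketch, with corners indexed by bit pattern (bit 0 = x, bit 1 = y, bit 2 = z):

      // Maps (u, v, w) in [0,1]^3 to the volume spanned by the 8 target corners c[0..7].
      static float[] mapUnitCube(float u, float v, float w, float[][] c) {
          float[] p = new float[3];
          for (int i = 0; i < 3; i++) {
              p[i] = (1-u)*(1-v)*(1-w)*c[0][i] + u*(1-v)*(1-w)*c[1][i]
                   + (1-u)*v*(1-w)*c[2][i]     + u*v*(1-w)*c[3][i]
                   + (1-u)*(1-v)*w*c[4][i]     + u*(1-v)*w*c[5][i]
                   + (1-u)*v*w*c[6][i]         + u*v*w*c[7][i];
          }
          return p;  // the 8 cube corners land exactly on c[0..7]
      }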

    Read the article

  • Prediction happening on (sending) client side

    - by Daniel
    This seems like a simple enough concept, but I haven't seen it implemented anywhere yet. Assuming that the server just forwards and verifies data... I'm using mouse-based movement, so it's not too difficult to predict the location of the player 150ms after the event is sent. I'm thinking this is more accurate than using old data and older data on the receiving clients' side. The question I have is: why can I not find any examples of this? Is there something fundamentally wrong with it, given that I cannot find anyone implementing, or even talking about implementing, this?
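
    The usual name for this on the sender's side is dead reckoning: extrapolate the last simulated state forward by the one-way latency before sending. A minimal sketch (all names assumed; latency in seconds):

      // Project the player forward by the outgoing latency,
      // e.g. predict(px, py, vx, vy, 0.150f).
      static float[] predict(float posX, float posY, float velX, float velY, float latency) {
          return new float[] { posX + velX * latency, posY + velY * latency };
      }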

    Read the article

  • Is there a standard way to track 2d tile positions both locally and on screen?

    - by Magicked
    I'm building a 2D engine based on 32x32 tiles with OpenGL. OpenGL draws from the top left, so Y coordinates go down the screen as they increase. Obviously this is different than a standard graph where Y coordinates move up as they increase. I'm having trouble determining how I want to track positions for both sprites and tile objects (objects that are collections of tiles). My brain wants to set the world position as the bottom left of the object and track every object this way. The problem with this is I would have to translate it to an on screen position on rendering. The positive with this is I could easily visualize (especially in the case of objects made of multiple tiles) how something is structured and needs to be built. Are there standard ways for doing this? Should I just suck it up and get used to positions beginning in the top left? Here are the OpenGL calls to start rendering:

      // enable textures since we're going to use these for our sprites
      glEnable(GL_TEXTURE_2D);
      glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

      // enable alpha blending
      glEnable(GL_BLEND);
      glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

      // disable the OpenGL depth test since we're rendering 2D graphics
      glDisable(GL_DEPTH_TEST);

      glMatrixMode(GL_PROJECTION);
      glLoadIdentity();
      glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
      glMatrixMode(GL_MODELVIEW);

    I assume I need to change

      glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);

    to:

      glOrtho(0, WIDTH, 0, HEIGHT, 1, -1);
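
    If you do keep world coordinates bottom-left and convert only at draw time, the translation is a single subtraction on the one axis that flips. A sketch, assuming 32-pixel tiles and a known screen height in pixels:

      // Convert a bottom-left-origin world y to a top-left-origin screen y.
      // x passes through unchanged; subtracting the tile size makes the tile's
      // top edge land where the renderer expects to start drawing.
      static int worldToScreenY(int worldY, int screenHeightPx, int tileSize) {
          return screenHeightPx - worldY - tileSize;
      }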

    Read the article

  • Exporting spritesheet for Cocos2d

    - by Terko
    I would like to know how people usually save animations in order to load them easily in Cocos2d, with as little hard-coding as possible. E.g. the solution I thought of is to have one plist file containing information about each frame, and a second plist containing information about each animation (the name of the animation, which frames to play, and probably the delay). If this is the correct solution, how can I generate such plist files for a spritesheet automatically?
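
    If you go with the two-plist split, the animation file is simple enough to emit from any build script that walks your frame names. A hedged sketch of a generator; the schema here is an assumption modeled on the animation plists CCAnimationCache reads, so verify it against your Cocos2d version before relying on it:

      // Emits a minimal animations plist: one entry with a frame-name array and a delay.
      // The layout is assumed (CCAnimationCache-style, format 1); check your version.
      static String animationPlist(String name, String[] frames, float delay) {
          StringBuilder sb = new StringBuilder("<plist version=\"1.0\"><dict><key>animations</key><dict>");
          sb.append("<key>").append(name).append("</key><dict>");
          sb.append("<key>delay</key><real>").append(delay).append("</real>");
          sb.append("<key>frames</key><array>");
          for (String f : frames) sb.append("<string>").append(f).append("</string>");
          sb.append("</array></dict></dict></dict></plist>");
          return sb.toString();
      }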

    Read the article

  • Making video from 3D graphics in OpenGL

    - by MVTC
    What are some of the preferred methods or libraries for creating video from an OpenGL graphics simulation? For example, I want to create a visualization (video) of an N-body gravity simulation by rendering non-real-time OpenGL frames. The simulation is already coded; I just don't know how to convert it to video. EDIT: I am also interested in providing the following functionality: The user can adjust parameters, including the time step between captured frames, and then initiate the simulation. The user waits for the simulation to complete, and then can watch the results. The user is able to increase or decrease the playback speed of the simulation, where in slow motion more frames are used (i.e., you see higher-resolution time steps), and when the speed is increased you see lower-resolution time steps at a higher rate, but the number of frames flashing on the screen per second is constant.
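
    The common offline approach is to read each finished frame back (glReadPixels in OpenGL) and pipe the raw pixels into an encoder such as ffmpeg; because rendering is non-real-time, you can take as long as you like per frame and still get a fixed-rate video, and the capture time step becomes just another simulation parameter. A sketch of the encoder side, assuming ffmpeg is on the PATH (the GL readback itself is framework-specific):

      // Pipe raw RGB frames into ffmpeg's stdin. Each frame is width*height*3 bytes;
      // glReadPixels returns rows bottom-up, so flip them before writing.
      static Process startEncoder(int width, int height, int fps) throws java.io.IOException {
          return new ProcessBuilder(
                  "ffmpeg", "-y",
                  "-f", "rawvideo", "-pixel_format", "rgb24",
                  "-video_size", width + "x" + height,
                  "-framerate", Integer.toString(fps),
                  "-i", "-",          // frames arrive on stdin
                  "out.mp4")
              .redirectErrorStream(true)
              .start();
      }
      // Per frame: encoder.getOutputStream().write(frameBytes);
      // When done:  encoder.getOutputStream().close(); encoder.waitFor();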

    Read the article

  • Specifying force and angle in ApplyImpulse in box2d

    - by Deepak Mahalingam
    I need to apply an impulse to an object with a particular force and at a particular angle in Box2D. If I am right, the syntax would be the following:

      body.GetBody().ApplyImpulse(new b2Vec2(direction, power), body.GetBody().GetWorldCenter());

    The problem is that my direction is in angles. I found a discussion where it was said that the way to convert an angle into a vector would be:

      new b2Vec2(Math.cos(angle*Math.PI/180), Math.sin(angle*Math.PI/180));

    Now I am not sure how to combine these two. In other words, if I wish to apply a force of 30 units at an angle of 30 degrees at the center of the object, how should I do it?
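
    Combining the two is just scaling the unit direction vector by the impulse magnitude before applying it at the world center. A sketch in JBox2D terms (a hedged translation; method names differ slightly between Box2D ports and versions):

      // 30 units of impulse at 30 degrees, applied at the center of mass.
      // 'body' is an existing org.jbox2d.dynamics.Body.
      float angleRad = (float) Math.toRadians(30.0);
      float power = 30.0f;
      Vec2 impulse = new Vec2((float) Math.cos(angleRad) * power,
                              (float) Math.sin(angleRad) * power);
      body.applyLinearImpulse(impulse, body.getWorldCenter());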

    Read the article

  • How can I achieve this lighting with OpenGL?

    - by Smallbro
    I'm currently trying to implement a type of "smooth" lighting. How can I achieve lighting which looks like this: http://dl.dropbox.com/u/1668516/concept/warp3.png using OpenGL? I've attempted to use blending modes and have come very close to making it work, but it came out like this: https://pbs.twimg.com/media/A1071viCEAAlFmJ.png and I also wasn't able to change the alpha of the black background, which I want to be able to do. Could I get a few pointers in the right direction?
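
    The usual way to get that look, and to keep control of the darkness level, is to render all lights additively into an offscreen lightmap and then multiply the scene by that texture, instead of blending lights straight into the scene. A sketch of the blend states in LWJGL-style GL calls (lightmapFbo and the quad drawing are assumed to exist already):

      import static org.lwjgl.opengl.GL11.*;
      import static org.lwjgl.opengl.GL30.*;

      // Pass 1: build the lightmap. The clear color is the ambient darkness level,
      // which is exactly the "alpha of the black background" knob.
      glBindFramebuffer(GL_FRAMEBUFFER, lightmapFbo);
      glClearColor(0.1f, 0.1f, 0.1f, 1f);
      glClear(GL_COLOR_BUFFER_BIT);
      glBlendFunc(GL_SRC_ALPHA, GL_ONE);      // additive: overlapping lights accumulate
      // ... draw a radial-gradient quad per light ...

      // Pass 2: multiply the scene by the lightmap.
      glBindFramebuffer(GL_FRAMEBUFFER, 0);
      glBlendFunc(GL_DST_COLOR, GL_ZERO);     // framebuffer * lightmap
      // ... draw the lightmap texture as a fullscreen quad over the scene ...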

    Read the article

  • OGRE 3D: How to create a very basic game world [on hold]

    - by skiwi
    I'm considering trying to create an FPS (first-person shooter) using the Ogre 3D engine. I have done the Basic Tutorials (except CEGUI) and have read through the Intermediate Tutorial; I understand some of the more advanced concepts, but I'm stuck on very simple ones. First of all: I would want to use some tiles (square ones, with relatively little height) as the floor. I guess I need to set up a loop to place those tiles, but how would I go about creating the tiles themselves? Making each one its own mesh? And then I would need to find some texture. Secondly: I guess I can derive the camera and movement functions from the basic tutorial, but I'll be needing a "soldier" (anything will do for now). What is the best way to create a moderately decent looking soldier? (Or obtain a decent one from an open library?) And thirdly: How can I ensure that the soldier is actually walking on the ground, instead of in mid air? Will raycasting into the ground and adjusting the position based on the result suffice?

    Read the article

  • Working with lots of cubes. Improving performance?

    - by Randomman159
    Edit: To sum the question up, I have a voxel-based world (Minecraft style, thanks Communist Duck) which is suffering from poor performance. I am not positive about the source, but would like any possible advice on how to get rid of it. I am working on a project where a world consists of a large quantity of cubes (I would give you a number, but worlds are user-defined). My test world is around 48 x 32 x 48 blocks. Basically, these blocks don't do anything in themselves; they just sit there. They start being used when it comes to player interaction: I need to check which cubes the user's mouse interacts with (mouse over, clicking, etc.), and do collision detection as the player moves. At first I had a massive amount of lag, looping through every block. I have managed to decrease that lag by looping through all the blocks once to find which are within a particular range of the character, and then only looping through those blocks for the collision detection, etc. However, I am still going at a depressing 2fps. Does anyone have any other ideas on how I could decrease this lag? Btw, I am using XNA (C#) and yes, it is 3D.
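
    With a fixed grid there is no need to loop over blocks at all to find the ones near the player: the cell containing any world position can be computed directly from the coordinates, so mouse picking and collision only ever touch a handful of cells. A sketch:

      // Direct O(1) indexing into a flat w*h*d block array, instead of a scan.
      static int blockIndex(float wx, float wy, float wz, float blockSize,
                            int w, int h, int d) {
          int x = (int) Math.floor(wx / blockSize);
          int y = (int) Math.floor(wy / blockSize);
          int z = (int) Math.floor(wz / blockSize);
          if (x < 0 || y < 0 || z < 0 || x >= w || y >= h || z >= d) return -1;
          return x + w * (y + h * z);
      }
      // Collision then tests at most the 3x3x3 cells around the player,
      // never all 48 * 32 * 48 of them.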

    Read the article

  • Why does the Ternary/Conditional operator seem significantly faster

    - by Jodrell
    Following on from this question, which I have partially answered: I compile this console app in x64 Release mode, with optimizations on, and run it from the command line without a debugger attached.

      using System;
      using System.Diagnostics;

      class Program
      {
          static void Main()
          {
              var stopwatch = new Stopwatch();
              var ternary = Looper(10, Ternary);
              var normal = Looper(10, Normal);
              if (ternary != normal)
              {
                  throw new Exception();
              }

              stopwatch.Start();
              ternary = Looper(10000000, Ternary);
              stopwatch.Stop();
              Console.WriteLine("Ternary took {0}ms", stopwatch.ElapsedMilliseconds);

              stopwatch.Start();
              normal = Looper(10000000, Normal);
              stopwatch.Stop();
              Console.WriteLine("Normal took {0}ms", stopwatch.ElapsedMilliseconds);

              if (ternary != normal)
              {
                  throw new Exception();
              }
              Console.ReadKey();
          }

          static int Looper(int iterations, Func<bool, int, int> operation)
          {
              var result = 0;
              for (int i = 0; i < iterations; i++)
              {
                  var condition = result % 11 == 4;
                  var value = ((i * 11) / 3) % 5;
                  result = operation(condition, value);
              }
              return result;
          }

          static int Ternary(bool condition, int value)
          {
              return value + (condition ? 2 : 1);
          }

          static int Normal(bool condition, int value)
          {
              if (condition)
              {
                  return 2 + value;
              }
              return 1 + value;
          }
      }

    I don't get any exceptions, and the output to the console is something close to

      Ternary took 107ms
      Normal took 230ms

    When I break down the CIL for the two logical functions I get this:

      ... Ternary ...
      {
       : ldarg.1     // push second arg
       : ldarg.0     // push first arg
       : brtrue.s T  // if first arg is true jump to T
       : ldc.i4.1    // push int32(1)
       : br.s F      // jump to F
      T: ldc.i4.2    // push int32(2)
      F: add         // add either 1 or 2 to second arg
       : ret         // return result
      }

      ... Normal ...
      {
       : ldarg.0     // push first arg
       : brfalse.s F // if first arg is false jump to F
       : ldc.i4.2    // push int32(2)
       : ldarg.1     // push second arg
       : add         // add second arg to 2
       : ret         // return result
      F: ldc.i4.1    // push int32(1)
       : ldarg.1     // push second arg
       : add         // add second arg to 1
       : ret         // return result
      }

    Whilst the Ternary CIL is a little shorter, it seems to me that the execution path through the CIL for either function takes 3 loads, 1 or 2 jumps, and a return. Why does the Ternary function appear to be twice as fast? I understand that, in practice, they are both very quick, and indeed quick enough, but I would like to understand the discrepancy.

    Read the article

  • AndEngine: put bullet into pool when it leaves the screen

    - by Ashot
    I'm creating a bullet with a physics body. The Bullet class (which extends Sprite) has a die() method, which unregisters the physics connector, hides the sprite and puts it into a pool:

      public void die() {
          Log.d("bulletDie", "See you in hell!");
          if (this.isVisible()) {
              this.setVisible(false);
              mPhysicsWorld.unregisterPhysicsConnector(physicsConnector);
              physicsConnector.setUpdatePosition(false);
              body.setActive(false);
              this.setIgnoreUpdate(true);
              bulletsPool.recyclePoolItem(this);
          }
      }

    In the onUpdate method of the PhysicsConnector I execute the die() method when the sprite leaves the screen:

      physicsConnector = new PhysicsConnector(this, body, true, false) {
          @Override
          public void onUpdate(final float pSecondsElapsed) {
              super.onUpdate(pSecondsElapsed);
              if (!camera.isRectangularShapeVisible(_bullet)) {
                  Log.d("bulletDie", "Dead?");
                  _bullet.die();
              }
          }
      };

    It works as I expected, but _bullet.die() executes TWICE. What am I doing wrong, and is this the right way to hide sprites? Here is the full code of the Bullet class (it is an inner class of the class that represents the player):

      private class Bullet extends Sprite implements PhysicsConstants {
          private final Body body;
          private final PhysicsConnector physicsConnector;
          private final Bullet _bullet;
          private int id;

          public Bullet(float x, float y, ITextureRegion texture,
                        VertexBufferObjectManager vertexBufferObjectManager) {
              super(x, y, texture, vertexBufferObjectManager);
              _bullet = this;
              id = bulletId++;
              body = PhysicsFactory.createCircleBody(mPhysicsWorld, this,
                      BodyDef.BodyType.DynamicBody, bulletFixture);
              physicsConnector = new PhysicsConnector(this, body, true, false) {
                  @Override
                  public void onUpdate(final float pSecondsElapsed) {
                      super.onUpdate(pSecondsElapsed);
                      if (!camera.isRectangularShapeVisible(_bullet)) {
                          Log.d("bulletDie", "Dead?");
                          Log.d("bulletDie", id + "");
                          _bullet.die();
                      }
                  }
              };
              mPhysicsWorld.registerPhysicsConnector(physicsConnector);
              $this.getParent().attachChild(this);
          }

          public void reset() {
              final float angle = canon.getRotation();
              final float x = (float) ((Math.cos(MathUtils.degToRad(angle)) * radius) + centerX)
                      / PIXEL_TO_METER_RATIO_DEFAULT;
              final float y = (float) ((Math.sin(MathUtils.degToRad(angle)) * radius) + centerY)
                      / PIXEL_TO_METER_RATIO_DEFAULT;
              this.setVisible(true);
              this.setIgnoreUpdate(false);
              body.setActive(true);
              mPhysicsWorld.registerPhysicsConnector(physicsConnector);
              body.setTransform(new Vector2(x, y), 0);
          }

          public Body getBody() {
              return body;
          }

          public void setLinearVelocity(Vector2 velocity) {
              body.setLinearVelocity(velocity);
          }

          public void die() {
              Log.d("bulletDie", "See you in hell!");
              if (this.isVisible()) {
                  this.setVisible(false);
                  mPhysicsWorld.unregisterPhysicsConnector(physicsConnector);
                  physicsConnector.setUpdatePosition(false);
                  body.setActive(false);
                  this.setIgnoreUpdate(true);
                  bulletsPool.recyclePoolItem(this);
              }
          }
      }

    Read the article

  • How can I extract a list of Minecraft items and recipes?

    - by Sean
    I'm designing a robust system for resolving item dependencies in Minecraft and to do so, I need to maintain a database of items and recipes. Right now, this database has to be hand-crafted (no pun intended); I would like to know if it is possible to somehow query the Minecraft jars (or perhaps more realistically, grep through them) to extract this data automatically. How can this be done? The project is currently in Python, but it can still be ported to Java without much fuss at this stage. (For the curious.)
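
    How far you get depends on the Minecraft version: older clients hard-code recipes in obfuscated classes, so grepping the jar yields little, while later versions ship recipes as JSON files inside the jar (under a recipes directory in the data or assets tree, depending on version), where they can be read like any zip entry. A sketch of the zip-walking half, assuming a recipe-bearing version:

      // List recipe JSON entries inside the client jar; parse them with any JSON library.
      static void listRecipes(String jarPath) throws java.io.IOException {
          try (java.util.zip.ZipFile jar = new java.util.zip.ZipFile(jarPath)) {
              jar.stream()
                 .filter(e -> e.getName().contains("/recipes/"))
                 .forEach(e -> System.out.println(e.getName()));
          }
      }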

    Read the article

  • What is the point in using real time?

    - by bobobobo
    I understand that real frame-elapsed times (which should average around 16-17ms) are provided by a lot of frameworks; GetTimeElapsedSinceLastFrame gives you the wall-clock time. But should we use this information in basic physics simulation? It looks to me to be a bad idea. Say there is a slight lag on the machine, for whatever reason (say a virus scanner starts up). The calculations all jump, and there is no need for this. Why not use a virtual second and ignore wall-clock time? For gameplay on the level of Commander Keen, shouldn't you always use the virtual second and not real time? (Besides stopwatch timing for racing games.) I don't see a need to use real time rather than a fixed 16ms time step.
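
    The usual middle ground is exactly this virtual second, driven by an accumulator: the simulation always advances in fixed virtual ticks, and wall-clock time only decides how many ticks to run, so a virus-scanner stall costs a few extra ticks instead of one distorted giant step. A sketch of the pattern (running, update and render are assumed to exist):

      static final double DT = 1.0 / 60.0;   // one fixed virtual tick, in seconds

      void gameLoop() {
          double accumulator = 0;
          double previous = System.nanoTime() / 1e9;
          while (running) {
              double now = System.nanoTime() / 1e9;
              accumulator += Math.min(now - previous, 0.25); // clamp huge stalls
              previous = now;
              while (accumulator >= DT) {
                  update(DT);        // physics only ever sees the fixed step
                  accumulator -= DT;
              }
              render();
          }
      }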

    Read the article

  • Making a Living Developing Games

    - by cable729
    I'm in my last year of high school, and I've been looking at colleges. I'm taking a C++ class at a local community college and I don't feel that it's worth it. I could have learned everything in that class in a week. This had me thinking, would a CS degree even be worth it? How much can it teach me if I can learn everything on my own? Even if I do need to learn more advanced subjects, many colleges put their material online AND I can buy a book. Will companies hire me if I don't have a CS degree? If I have a portfolio will I stand a chance? What kind of things are needed in the portfolio? I want to live doing what I love - programming. So I will do it. I'm just not sure that a CS degree will do anything to me. In addition, if there is a benefit to getting a CS degree, what places are the best?

    Read the article

  • Cube rotation DX10

    - by German
    Well, I'm reading Frank Luna's DirectX 10 book and, while I'm trying to understand the first demo, I found something that's not very clear, at least for me. In the updateScene method, when I press A, S, W or D, the angles mTheta and mPhi change, but after that there are three lines of code that I don't understand exactly:

      // Convert Spherical to Cartesian coordinates: mPhi measured from +y
      // and mTheta measured counterclockwise from -z.
      float x =  5.0f*sinf(mPhi)*sinf(mTheta);
      float z = -5.0f*sinf(mPhi)*cosf(mTheta);
      float y =  5.0f*cosf(mPhi);

    I mean, the comment explains what they do: it says they convert the spherical coordinates to Cartesian coordinates. But, mathematically, why? Why is the x value calculated from the product of the sines of both angles? Why is z the product of a sine and a cosine, and why does y use just the cosine? After that, those values (x, y and z) are used to build the view matrix. The book doesn't explain (mathematically) why those values are calculated like that (and I didn't find anything to help me understand it in the first part of the book, "Mathematical prerequisites"), so it would be good if someone could explain what exactly happens in those lines of code, or just give me a link that helps me understand the math. Thanks in advance!
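
    The derivation is two right triangles. With mPhi measured from the +y axis, the vertical component of a radius-5 vector is 5*cos(mPhi), and its projection onto the xz-plane has length 5*sin(mPhi). That planar length is then split by mTheta, measured counterclockwise from -z, which contributes sin(mTheta) along x and -cos(mTheta) along z. Sanity check: mTheta = 0 lands on the -z axis and mPhi = 0 on +y, exactly as the comment states. The same thing in code form:

      // Spherical (r, phi from +y, theta CCW from -z) -> Cartesian.
      static float[] sphericalToCartesian(float r, float phi, float theta) {
          float planar = r * (float) Math.sin(phi);  // radius projected onto the xz-plane
          float x =  planar * (float) Math.sin(theta);
          float z = -planar * (float) Math.cos(theta);
          float y = r * (float) Math.cos(phi);       // vertical component
          return new float[] { x, y, z };
      }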

    Read the article

  • How to display a consistent background image

    - by Tofu_Craving_Redish_BlueDragon
    Drawing a large background is relatively slow in PyGame. In order to avoid drawing the BG every frame, you could draw it once and then do nothing. However, if something is drawn on top of the surface and keeps moving, you will need to redraw the background in order to "erase" the pixels left by the moving object; otherwise, you will have "traces" of it. I have a moving object in my PyGame program, but I do not want to clear the color buffer by redrawing the whole background image, because redrawing it every frame is slow. My solution: I will clear only the required portions of the buffer (where the traces of the moving object are left) by redrawing just those portions of the background. Is there any other, better way to keep a consistent background?

    Read the article

  • How to manage enemy movement and shooting in a shmup?

    - by whatever
    I'm wondering what is the best (or at least a good) way of managing enemies in a shoot-em-up. Basically, I would write a class that manages displaying and updating the positions of all the enemies. But how do you create good movement patterns for enemies? A list of where-to-go points? Gravitating around some fixed points (with weighting, distance evaluation, etc.)? Same question for the shot patterns. Can you please put me on the right track?
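
    A common lightweight pattern is to give each enemy a scripted list of waypoints (plus a parallel list of timed shot events) and let the manager advance both every frame; gravitating toward weighted points can be layered on later. A sketch of the waypoint half (pos is the enemy's {x, y}; targetIndex is a one-element array so the method can advance it):

      // Move an enemy along a fixed waypoint list at constant speed.
      static void follow(float[] pos, float[][] waypoints, int[] targetIndex,
                         float speed, float dt) {
          if (targetIndex[0] >= waypoints.length) return;      // path finished
          float dx = waypoints[targetIndex[0]][0] - pos[0];
          float dy = waypoints[targetIndex[0]][1] - pos[1];
          float dist = (float) Math.sqrt(dx * dx + dy * dy);
          if (dist < speed * dt) { targetIndex[0]++; return; } // reached: aim at next point
          pos[0] += dx / dist * speed * dt;                    // step toward current waypoint
          pos[1] += dy / dist * speed * dt;
      }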

    Read the article

  • TGA loader: reverse y-axis

    - by aVoX
    I've written a TGA image loader in Java which is working perfectly for files created with GIMP as long as they are saved with the option origin set to Top Left (Note: Actually TGA files are meant to be stored upside down - Bottom Left in GIMP). My problem is that I want my image loader to be capable of reading all different kinds of TGA, so my question is: How do I flip the image upside down? Note that I store all image data inside a one-dimensional byte array, because OpenGL (glTexImage2D to be specific) requires it that way. Thanks in advance.
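
    Per the TGA spec, bit 5 of the image-descriptor byte (offset 17 in the header) tells you the origin: set means top-left, clear means bottom-left, so the loader can decide at read time whether to flip. The flip itself is a row swap inside the flat array, each row being width * bytesPerPixel long. A sketch:

      // Flip an image stored as one flat byte array: top row <-> bottom row, in place.
      static void flipVertically(byte[] pixels, int width, int height, int bytesPerPixel) {
          int stride = width * bytesPerPixel;
          byte[] tmp = new byte[stride];
          for (int top = 0, bottom = height - 1; top < bottom; top++, bottom--) {
              System.arraycopy(pixels, top * stride, tmp, 0, stride);
              System.arraycopy(pixels, bottom * stride, pixels, top * stride, stride);
              System.arraycopy(tmp, 0, pixels, bottom * stride, stride);
          }
      }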

    Read the article

  • IDirect3DDevice9::GetRenderTargetData() returns no data

    - by P. Avery
    I've got a simple function to get the render-target data of an RT (with the default pool). This particular RT has a resolution of 1x1 (it's the 10th and final mip of a texture). Here is my code to get the data for IDirect3DSurface9 *pTargetSurface:

      IDirect3DSurface9 *pSOS = NULL;
      pd3dDevice->CreateOffscreenPlainSurface( 1, 1, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &pSOS, NULL );

      // get residual energy
      if( FAILED( hr = pd3dDevice->GetRenderTargetData( pTargetSurface, pSOS ) ) )
      {
          DebugStringDX( ClassName, "Failed to IDirect3DDevice9::GetRenderTargetData() at DownsampleArea()", __LINE__, hr );
          goto Exit;
      }

      // lock surface
      if( FAILED( hr = pSOS->LockRect( &rct, NULL, D3DLOCK_READONLY ) ) )
      {
          DebugStringDX( ClassName, "Failed to IDirect3DSurface9::LockRect() at DownsampleArea()", __LINE__, hr );
          goto Exit;
      }

      // get residual energy from downsampled texture
      pByte = ( BYTE* )rct.pBits;
      D3DXVECTOR4 vEnergy;
      vEnergy.z = ( float )pByte[ 0 ] / 255.0f;
      vEnergy.y = ( float )pByte[ 1 ] / 255.0f;
      vEnergy.x = ( float )pByte[ 2 ] / 255.0f;
      vEnergy.w = ( float )pByte[ 3 ] / 255.0f;

      V( pSOS->UnlockRect() );

    All formatting and settings are correct, and DirectX in debug mode shows no errors. The problem is that the 4 bytes above are 0. I know this to be incorrect by debugging with PIX: PIX shows that the RGB values are 0.078 and alpha is 1. These values are not less than what can be represented by a single byte (1/255). Any ideas? Am I copying the render-target data correctly?

    Read the article

  • UDK - How to make sure a PhysicalMaterial mask actually works?

    - by tomacmuni
    Hello, I have been reading the UDK documentation about physical materials and masks. I have my 1-bit BMP mask, and the two physical material assets I want to fire off in the black and white channels. I have applied my material to both a rigid body and to a skeletal mesh, and neither apparently uses the mask. If I assign a regular physical material (one that doesn't use a mask) then it works fine, but this defeats the point, because it gives only one hit reaction. The documentation states that it is possible to extend a class on which we want to use a physical material, based on the KActor class's usage. How do I do that? Here is the quote: "The following properties [i.e., ImpactEffect - Particle system to spawn at the point of impact + ImpactSound - Sound to play when an impact occurs] allow you to attach sounds and effects to physical collisions. These only work on classes which support them, which at the moment is only KActor. By looking at the implementation in KActor though, you can add this functionality to other classes (or you can subclass KActor)." Essentially: how do I make sure a PhysicalMaterial mask actually works? What code could be added to a skeletal mesh class, perhaps, to get it going? Any help appreciated.

    Read the article

  • Certain grid lines not rendering as expected

    - by row1
    I am drawing a simple quad (a triangle strip with 4 vertices) as the floor and then drawing an 8x8 grid on top (a collection of vertex pairs for a line list). The vertical grid lines work fine (apart from being very aliased), but some of the horizontal lines do not get rendered. The grid renders fine if I do not draw the quad.

      foreach (EffectPass pass in _Effect.CurrentTechnique.Passes)
      {
          pass.Apply();
          CurrentGraphicsDevice.SetVertexBuffer(_VertexFloorBuffer);
          _Engine.CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);

          // Some of the horizontal lines seem to disappear if we draw the above quad.
          CurrentGraphicsDevice.SetVertexBuffer(_VertexGridBuffer);
          CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.LineList, 0,
              _VertexGridBuffer.VertexCount / 2);
      }

    What could be causing these lines not to be rendered?

    Update: I added the code below after I draw my quad and grid, and it started working. But I am not sure why that works, as I thought this code was only for drawing the WPF controls:

      elementRenderer.Render();
      spriteBatch.Begin();
      spriteBatch.Draw(elementRenderer.Texture, Vector2.Zero, Color.White);
      spriteBatch.End();

    Read the article

  • How would I use JBox2d in Java?

    - by BluFire
    So I did some research and found Box2D. I then proceeded to download it and the testbed. Now that I have it, I don't know how to use it properly. I'm looking for a clear, simple answer on how to use the engine. What I did was put it into a lib folder and reference the JBox2D jar file. After that I got stuck. How can I use this to program games for Android? I'm very confused, since Box2D was originally intended for C++.
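
    A minimal JBox2D program is small: create a World, add bodies through BodyDef/FixtureDef, then call step() from your game loop (on Android, e.g. once per frame) and read positions back. A sketch against the jbox2d-library artifact; constructor signatures vary slightly between JBox2D versions:

      import org.jbox2d.collision.shapes.PolygonShape;
      import org.jbox2d.common.Vec2;
      import org.jbox2d.dynamics.*;

      World world = new World(new Vec2(0f, -10f)); // gravity; older versions take (gravity, doSleep)

      BodyDef def = new BodyDef();
      def.type = BodyType.DYNAMIC;
      def.position.set(0f, 10f);
      Body box = world.createBody(def);

      PolygonShape shape = new PolygonShape();
      shape.setAsBox(0.5f, 0.5f);                  // half-extents, in meters
      box.createFixture(shape, 1.0f);              // shape + density

      // Each frame:
      world.step(1f / 60f, 6, 2);                  // dt, velocity iters, position iters
      Vec2 p = box.getPosition();                  // meters; scale to pixels yourself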

    Read the article

  • How would I create alternating players (turn-based event)?

    - by Blue
    The picture above shows 2 players, each containing 3 characters. I want to know how to make a turn-based event starting with player 1 and alternating with player 2, where in every alternation each character gets a turn. If a character dies, the next character on the same team goes, and so on. How would I create this? Is there a tutorial? I haven't made any turn-based games, so I don't know how to program this kind of thing.
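
    One simple structure is a single rotation queue seeded alternately from the two teams: pop a character, let it act, and re-queue it only if it is still alive, so dead characters drop out of the order naturally. A sketch inside an assumed battle class (Unit stands for whatever your character type is, with isAlive() and takeTurn()):

      import java.util.ArrayDeque;
      import java.util.Queue;

      // Seed: P1's 1st, P2's 1st, P1's 2nd, P2's 2nd, ...
      Queue<Unit> turns = new ArrayDeque<>();
      for (int i = 0; i < 3; i++) {
          turns.add(player1.get(i));
          turns.add(player2.get(i));
      }

      void nextTurn() {
          Unit u = turns.poll();
          while (u != null && !u.isAlive()) u = turns.poll(); // skip the dead
          if (u == null) return;                              // battle over
          u.takeTurn();
          turns.add(u);                                       // back of the line
      }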

    Read the article
