Search Results

Search found 26043 results on 1042 pages for 'development trunk'.


  • What are some of the more commonly used projectile rendering techniques?

    - by KlashnikovKid
    I couldn't find a duplicate question (a bit surprising to me), but anyhow, I'm getting close to implementing the rendering of projectiles for my game. My question is: what are some good techniques for efficiently rendering projectiles? I'd like emphasis on techniques that leave room for the projectiles to be "rich" and dynamic (cool to look at!). I'm also using DX11 for my rendering engine, so bleeding-edge techniques that can take advantage of it would be much appreciated too. Thanks!

    Read the article

  • SDL Bullet Movement

    - by Code Assasssin
    I'm currently working on my first space shooter, and I'm in the process of making my ship shoot some bullets/lasers. Unfortunately, I'm having a hard time getting the bullets to fly vertically. I'm a total noob when it comes to this so you might have a hard time understanding my code :/ // Position Bullet Function projectilex = x + 17; projectiley = y + -20; if(keystates[SDLK_SPACE]) { alive = true; } And here's my show function if(alive) { if(frame == 2) { frame = 0; } apply_surface(projectilex,projectiley,ShootStuff,screen,&lazers[frame]); frame++; projectiley + 1; } I'm trying to get the bullet to fly vertically... and I have no clue how to do that. I've tried messing with the y coordinate but that makes things worse. The laser/bullet just follows the ship :( How would I get it to fire at the starting position and keep going in a vertical line without it following the ship? int main( int argc, char* args[] ) { Player p; Timer fps; bool quit = false; if( init() == false ) { return 1; } //Load the files if( load_files() == false ) { return 1; } clip[ 0 ].x = 0; clip[ 0 ].y = 0; clip[ 0 ].w = 30; clip[ 0 ].h = 36; clip[ 1 ].x = 31; clip[ 1 ].y = 0; clip[ 1 ].w = 39; clip[ 1 ].h = 36; clip[ 2 ].x = 71; clip[ 2 ].y = 0; clip[ 2 ].w = 29; clip[ 2 ].h = 36; lazers [ 0 ].x = 0; lazers [ 0 ].y = 0; lazers [ 0 ].w = 3; lazers [ 0 ].h = 9; lazers [ 1 ].x = 5; lazers [ 1 ].y = 0; lazers [ 1 ].w = 3; lazers [ 1 ].h = 7; while( quit == false ) { fps.start(); //While there's an event to handle while( SDL_PollEvent( &event ) ) { p.handle_input(); //If a key was pressed //If the user has Xed out the window if( event.type == SDL_QUIT ) { //Quit the program quit = true; } } //Scroll background bgX -= 8; //If the background has gone too far if( bgX <= -GameBackground->w ) { //Reset the offset bgX = 0; } p.move(); apply_surface( bgX, bgY,GameBackground, screen ); apply_surface( bgX + GameBackground->w, bgY, GameBackground, screen ); apply_surface(0,0, FullHealthBar,screen); p.shoot(); p.show(); //Apply the message //Update the screen if( SDL_Flip( screen ) == -1 ) { return 1; } SDL_Flip(GameBackground); if( fps.get_ticks() < 1000 / FRAMES_PER_SECOND ) { SDL_Delay( ( 1000 / FRAMES_PER_SECOND ) - fps.get_ticks() ); } } //Clean up clean_up(); return 0; }
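
    One possible direction, sketched below in C++ (this is not the poster's code; names like fireBullet and updateBullets are illustrative): copy the ship's position into the bullet once, at the moment of firing, and from then on update only the bullet's own coordinates each frame. Note also that the posted statement projectiley + 1; has no effect, since it is not an assignment.

        #include <vector>

        // Each bullet owns its position; it is seeded from the ship only at fire time.
        struct Bullet { float x, y; };

        std::vector<Bullet> bullets;

        void fireBullet(float shipX, float shipY)
        {
            // copy the ship position at the moment of firing
            bullets.push_back(Bullet{ shipX + 17.0f, shipY - 20.0f });
        }

        void updateBullets(float speed)
        {
            for (Bullet& b : bullets)
                b.y -= speed;   // move up the screen every frame, independent of the ship
            // erase bullets that have left the screen here
        }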

    Read the article

  • (Android) How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture with a framebuffer, using OpenGL ES 1.1 on Android (Java). Afterwards I want to overlay this texture full-screen over my game. In theory, this works like a charm, but somehow the coordinates are off. For testing I drew something at (0,0) with width and height 200, and it is partly off-screen. This is how I create the framebuffer: fb = new int[1]; depthRb = new int[1]; renderTex = new int[1]; gl11ep.glGenFramebuffersOES(1, fb, 0); gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer gl.glGenTextures(1, renderTex, 0);// generate texture gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); texBuffer = ByteBuffer.allocateDirect(buf.length*4).order(ByteOrder.nativeOrder()).asIntBuffer(); gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0, GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer); gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]); gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH); Before I draw, I do this: gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]); gl.glClearColor(0f, 0f, 0f, 0f); // specify texture as color attachment gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0); // attach render buffer as depth buffer gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES, GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]); I set texW = 1024 and texH = 512. When rendering this texture full-screen, with a lightmask (size 200x200) placed at (0, 0) and (texW/2, texH/2), you can see that the coordinate system doesn't seem to start at (0,0): that light overlaps the screen edge, and the images are not drawn as squares (my lightcone texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks
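
    One detail that often explains "off" coordinates when drawing into a framebuffer object is the viewport: while the FBO is bound, drawing coordinates span the attached texture (texW x texH), not the screen, so the viewport has to match the texture size and then be restored afterwards. Below is only a sketch of that idea in C-style OpenGL ES 1.x code (the question's code is Java; screenW, screenH and the draw calls are placeholders), not a confirmed fix for this particular case.

        #include <GLES/gl.h>
        #include <GLES/glext.h>

        void renderLightMask(GLuint fbo, GLsizei texW, GLsizei texH,
                             GLsizei screenW, GLsizei screenH)
        {
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
            glViewport(0, 0, texW, texH);          // coordinates now cover the texture
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... issue the light-mask draw calls here ...
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
            glViewport(0, 0, screenW, screenH);    // restore the on-screen coordinate system
        }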

    Read the article

  • How to draw a spotlight in 3D

    - by RecursiveCall
    To be clear, I am not talking about the light's result (the lit area) but the spotlight cone itself, like this. The two common suggestions that I tried are a 2D image and a 3D cone. The problem with the pre-generated 2D image is that it always looks 2D and flat no matter how it is rotated in world space. The cone, on the other hand, is next to impossible to control when it comes to fade distance, it doesn't look soft (smooth), and it is expensive to compute.

    Read the article

  • Pixel Shader Issues

    - by Morphex
    I have issues with a pixel shader; the main problem is that I get nothing drawn on the screen. float4x4 MVP; // TODO: add effect parameters here. struct VertexShaderInput { float4 Position : POSITION; float4 normal : NORMAL; float2 TEXCOORD : TEXCOORD; }; struct VertexShaderOutput { float4 Position : POSITION; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { input.Position.w = 0; VertexShaderOutput output; output.Position = mul(input.Position, MVP); // TODO: add your vertex shader code here. return output; } float4 PixelShaderFunction(VertexShaderOutput input) : SV_TARGET { return float4(1, 0, 0, 1); } technique { pass { Profile = 11.0; VertexShader = VertexShaderFunction; PixelShader = PixelShaderFunction; } } My matrix is calculated like this: Matrix MVP = Matrix.Multiply(Matrix.Multiply(Matrix.Identity, Matrix.LookAtLH(new Vector3(-10, 10, -10), new Vector3(0), new Vector3(0, 1, -0))), Camera.Projection); VoxelEffect.Parameters["MVP"].SetValue(MVP); The Visual Studio graphics debugger shows me that my vertex shader is actually working, but not the pixel shader. I stripped the shader down to the bare minimum so that I was sure it was correct. But why is my screen still black?

    Read the article

  • Is an extra collision-mesh for level-data worth the hassle?

    - by Serthy
    What is the optimal approach for collision detection with the environment in a 3D engine (with triangle-mesh-based geometry, no BSP)? A) Use the render mesh [+] no additional work for artists to fiddle with collision detection [-] high detail is harder on the physics calculations [+/-] maybe use collidable flags on materials [+/-] compute the collision mesh from the render mesh B) Use an additional collision mesh [+] faster/more optimal collision detection [-] additional work (either by the artist, or by the programmer who has to develop an algorithm to compute it from the render mesh) [-] more memory usage How do AAA titles handle this? And what approaches do indie devs take?

    Read the article

  • Helping my artists share their work

    - by Andy M
    As a developer I'm used to Subversion for source control, and I think it's great for sharing source code between developers. Now, thinking about my artists and game designers, I think they need a slightly different approach: They need to share binary files They need to be able to have a thumbnail and preview of their work I don't want to include their binaries in my game repository (it would be much too heavy for developers when updating) I've seen that some artists use personally created websites to share their work, but I was wondering if some "standard" application exists to give my artists a good way of working together. Is there a common way of dealing with this? Is the approach I want to take (only final sprites in my game repo) correct? How do you guys handle this as game developers?

    Read the article

  • Proper way to do texture mapping in modern OpenGL?

    - by RubyKing
    I'm trying to do texture mapping using OpenGL 3.3 and GLSL 150. The problem is that the texture shows but has this weird flicker; I can show a video here. My texcoords are in a vertex array. I have my fragment color set to the texture and texel values, and my vertex shader sends the texture coordinates on to the fragment shader. I have my ins and outs set up, and I still don't know what I'm missing that could be causing that flicker. Here is my code: Fragment shader #version 150 uniform sampler2D texture; in vec2 texture_coord; varying vec3 texture_coordinate; void main(void) { gl_FragColor = texture(texture, texture_coord); } Vertex shader #version 150 in vec4 position; out vec2 texture_coordinate; out vec2 texture_coord; uniform vec3 translations; void main() { texture_coord = (texture_coordinate); gl_Position = vec4(position.xyz + translations.xyz, 1.0); } Last bit: here is my vertex array with texture coordinates: GLfloat vVerts[] = { 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f}; //tex x and y If you need to see all the code, here is a link to every file. Thank you for your help.
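
    One thing worth noting in the vertex shader above: texture_coordinate is declared as an out and is never fed by a vertex attribute, so texture_coord ends up with undefined values, which could plausibly produce artifacts like this. Below is a hedged C++ sketch of how the interleaved array (3 position floats followed by 2 texture-coordinate floats per vertex) might be wired to two shader inputs; the attribute locations 0 and 1 and the GLEW-based setup are assumptions, not taken from the linked code.

        #include <GL/glew.h>

        void setupVertexAttributes(GLuint vbo)
        {
            const GLsizei stride = 5 * sizeof(GLfloat);   // 3 position + 2 texcoord floats
            glBindBuffer(GL_ARRAY_BUFFER, vbo);

            // position: first 3 floats of each vertex
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);

            // texture coordinate: last 2 floats of each vertex
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride,
                                  (void*)(3 * sizeof(GLfloat)));
        }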

    Read the article

  • OpenGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so: glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0); The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised: 1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-wide spaces between them. However, it appears blank. 2) Next, I write with 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect that this should produce 8-wide bars with 24-wide spaces between them. Instead, it produces a white 1-px-wide bar. 3) 0x55555555 = 1010101010101010101010101010101, therefore writing this value ought to create 1-wide vertical stripes with 1 pixel spacing. However, it creates a solid gray color. 4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation. I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8-bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (while I was expecting that I was working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, support this conclusion. Questions: 1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8-elements, as it suggests in the spec? 2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3, I am however forcing a 3.0 context.) 3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for but I suppose there is no such thing as a free lunch.
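
    Since GL_BITMAP uploads are unreliable on modern drivers (and gone from core profiles), one practical fallback is to expand the packed bits to one byte per pixel on the CPU before calling glTexImage2D. The following is a minimal C++ sketch of that idea, assuming the bitmap is packed most-significant-bit first and W is a multiple of 8; it replaces the question's buffer-object path with a plain client-side upload.

        #include <vector>
        #include <cstdint>
        #include <GL/gl.h>

        void uploadBitmapAsLuminance(const std::vector<uint8_t>& bits, int W, int H)
        {
            std::vector<uint8_t> pixels(W * H);
            for (int i = 0; i < W * H; ++i) {
                uint8_t byte = bits[i / 8];
                pixels[i] = (byte & (0x80 >> (i % 8))) ? 255 : 0;   // each bit becomes a pixel
            }
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                  // rows are tightly packed
            glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, W, H, 0,
                         GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels.data());
        }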

    Read the article

  • Touch event not working with pinch zoom

    - by Siddharth
    I have implemented pinch-zoom functionality for my tower defense game. I use a separate entity to display all the towers; from that entity, the player selects a tower and drags it to the position where they want to place it. I also put the entity in the HUD, so that as the user scrolls and zooms the region, the towers stay visible the whole time. Basically, I created the separate entity only to show and hide towers, so I can manage it easily. My problem is that when I have not scrolled or zoomed, touching and dragging a tower works fine, but once I zoom and scroll the scene, the tower's touch event is not called, so the player cannot drag and drop it to the intended position. Can anybody please help me work out what's going wrong?

    Read the article

  • Player position triggering teleports

    - by jSherz
    I'm developing a Minecraft plugin (Bukkit) in which a server admin can create 'portals' - small regions that teleport any player who enters them. I have the teleportation sorted, and I know how I could define areas that the player's position could be tested against. This would involve an ArrayList containing the zones, and hooking the PlayerMoveEvent so that the ArrayList is searched on every move for a matching portal region. Although this method would work, I doubt it would be very efficient when 100+ players are all moving around at the same time. Is there a better way of checking a player's position against a set of 'zones' / regions?
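
    One common answer to this kind of question is spatial hashing: bucket the portal regions by the 16x16 chunk (or any fixed cell size) they occupy, so each PlayerMoveEvent only scans the handful of portals near the player instead of the whole list. The sketch below illustrates the idea in C++; a Bukkit plugin would of course be written in Java, and the Portal fields here are made up.

        #include <unordered_map>
        #include <vector>
        #include <cstdint>

        struct Portal { int minX, minY, minZ, maxX, maxY, maxZ; /* destination, etc. */ };

        // key built from chunk coordinates (block coordinate >> 4)
        static uint64_t chunkKey(int chunkX, int chunkZ)
        {
            return (static_cast<uint64_t>(static_cast<uint32_t>(chunkX)) << 32)
                 |  static_cast<uint32_t>(chunkZ);
        }

        std::unordered_map<uint64_t, std::vector<Portal>> portalsByChunk;

        void addPortal(const Portal& p)
        {
            for (int cx = p.minX >> 4; cx <= p.maxX >> 4; ++cx)
                for (int cz = p.minZ >> 4; cz <= p.maxZ >> 4; ++cz)
                    portalsByChunk[chunkKey(cx, cz)].push_back(p);
        }

        const Portal* findPortalAt(int x, int y, int z)
        {
            auto it = portalsByChunk.find(chunkKey(x >> 4, z >> 4));
            if (it == portalsByChunk.end()) return nullptr;   // common case: nothing nearby
            for (const Portal& p : it->second)
                if (x >= p.minX && x <= p.maxX && y >= p.minY && y <= p.maxY &&
                    z >= p.minZ && z <= p.maxZ)
                    return &p;
            return nullptr;
        }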

    Read the article

  • Can I Base My Game on Another Game and Earn Money? [closed]

    - by Neb
    Possible Duplicate: How closely can a game resemble another game without legal problems I want to make a game similar to Pocket Tanks, but for Android, and then sell it. Since I am not directly copying anything from Pocket Tanks, but simply using it to give me ideas, I should be allowed to make it. I don't want to finish making my game and then get into legal trouble, so I wanted to ask here if it's allowed. If this is the wrong place to ask, can you tell me where I could ask this question?

    Read the article

  • Creating a game engine in C++ and Python - Where do I start? [closed]

    - by Peter
    Yes, you read correctly: how does humble old Peter here make an engine using the two languages he's proficient in, to an extent? I have more than enough time and don't wish to use any third-party "stuff" (engine parts like methods, classes, etc.); I want to build it fully from scratch. If anyone could PLEASE explain how this is done, I will love you forever. Thanks for reading; hoping for some productive answers. Thank you very much. EDIT: Reread what I've said for the 4th time and forgot to mention: 2D sprite-based, with voxels and physics. :D
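
    A common starting point for a C++/Python engine is embedding the CPython interpreter: the core loop, rendering and resource management live in C++, and gameplay logic runs as Python scripts. A minimal sketch of the embedding step is below; the script text is only an illustration, and exposing real engine functions to Python is the much larger job.

        #include <Python.h>   // CPython embedding API

        int main()
        {
            Py_Initialize();

            // In a real engine you would expose C++ functions to Python and
            // load .py gameplay scripts from disk instead of a literal string.
            PyRun_SimpleString("print('gameplay script running inside the C++ engine')\n");

            Py_Finalize();
            return 0;
        }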

    Read the article

  • Simple (and fast) dice physics

    - by Markus von Broady
    I'm programming a throw of 5 dice in ActionScript 3 + AwayPhysics (a Bullet physics port). I had a lot of fun tweaking frictions, masses etc., and in the end I found the best results with more physics ticks per frame. Currently I use 10 ticks per frame (1/60 s) and it's OK, though I see an improvement with 20 ticks. Even though it's only 5 cubes (dice) in a box (really a floor with 3 walls), I can't simulate 20 ticks per frame and keep the FPS at 60 on a middle-aged PC. That's why I decided to precompute the frames of the animation, finishing around 1700 ticks in about 2 seconds. The Flash player freezes for those 2 seconds, and I'm afraid this will grow to 5 seconds or more once I simulate multi-threading and compute the frames in the background of other heavy processing and CPU drawing (the dice are only one part of this game). Because I want both players to see the dice roll in the same way, I can't just compute the physics whenever resources are free and build a buffer of at least one throw of each type (where the type is the number of dice thrown). I'm afraid players will see a "preparing dices........." message too often and for too long. I think the only solution to this problem is replacing the physics engine with something simpler, or writing my own. Do you have any formulas for cube-cube and cube-wall collision detection, and for calculating how the angular and linear velocities should change after a collision occurs?
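
    For the last question, the usual starting point is the rigid-body contact impulse: find the contact point, take the relative velocity along the contact normal, and apply an impulse that changes both the linear and the angular velocity. Below is a hedged C++ sketch for a single die vertex hitting the floor, using the scalar inertia of a solid cube (I = m*s^2/6) to keep it short; a real replacement engine would use the full inertia tensor and resolve several simultaneous contacts.

        struct Vec3 { float x, y, z; };
        static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
        static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
        static Vec3  add(Vec3 a, Vec3 b)   { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
        static Vec3  scale(Vec3 a, float s){ return { a.x*s, a.y*s, a.z*s }; }

        struct Die {
            float mass, size;    // cube edge length
            Vec3  vel, angVel;   // linear and angular velocity
        };

        // r = contact point relative to the die's centre, n = floor normal, e = restitution
        void resolveFloorContact(Die& d, Vec3 r, Vec3 n, float e)
        {
            float invMass    = 1.0f / d.mass;
            float invInertia = 6.0f / (d.mass * d.size * d.size);   // solid cube, scalar approximation

            Vec3  velAtContact = add(d.vel, cross(d.angVel, r));
            float vn = dot(velAtContact, n);
            if (vn >= 0.0f) return;                                 // already separating

            Vec3  rxn   = cross(r, n);
            float denom = invMass + invInertia * dot(rxn, rxn);
            float j     = -(1.0f + e) * vn / denom;                 // impulse magnitude

            d.vel    = add(d.vel,    scale(n,   j * invMass));
            d.angVel = add(d.angVel, scale(rxn, j * invInertia));
        }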

    Read the article

  • System Requirement Checking

    - by gl3829
    I am working on a game and want to strengthen its requirement checking to ensure that it can run successfully. Therefore, I am looking for information on what is useful to check before starting the game. As a simple example: why check for a specific amount of memory? Should I, as a game developer, require a minimum amount of memory? I feel this information is usually skipped in many books and resources, but it is critical for delivering a game that will run on many machines. I would appreciate answers that explain what you check in the system, why you check it, and, if you have a good resource about it, please include it. Just to be a bit more specific, I'm developing on Windows.
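
    As an illustration of one such check, physical memory on Windows can be queried with GlobalMemoryStatusEx before the game starts. The sketch below only warns rather than refusing to run, and the 2 GB threshold is an example value, not a recommendation.

        #define WIN32_LEAN_AND_MEAN
        #include <windows.h>
        #include <iostream>

        bool hasEnoughMemory(unsigned long long requiredBytes)
        {
            MEMORYSTATUSEX status{};
            status.dwLength = sizeof(status);
            if (!GlobalMemoryStatusEx(&status))
                return true;                      // if the query fails, don't block the player
            return status.ullTotalPhys >= requiredBytes;
        }

        int main()
        {
            const unsigned long long twoGiB = 2ULL * 1024 * 1024 * 1024;
            if (!hasEnoughMemory(twoGiB))
                std::cout << "Warning: less RAM than the recommended minimum.\n";
        }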

    Read the article

  • Checking collisions between bullets and asteroids

    - by Moaz ELdeen
    I'm trying to detect collisions between two lists, of bullets and asteroids. The code works fine, but when a bullet intersects an asteroid and then passes through another asteroid, the code triggers an assertion saying it can't increment the iterator. I'm sure there is a small bug in that code, but I can't find it. for (list<Bullet>::iterator itr_bullet = ship.m_Bullets.begin(); itr_bullet!=ship.m_Bullets.end();) { for (list<Asteroid>::iterator itr_astroid = asteroids.begin(); itr_astroid!=asteroids.end(); itr_astroid++) { if(checkCollision(itr_bullet->getCenter(),itr_astroid->getCenter(), itr_bullet->getRadius(), itr_astroid->getRadius())) { itr_astroid = asteroids.erase(itr_astroid); } } itr_bullet++; }
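
    For reference, here is a sketch of one way to restructure that loop (keeping the question's names) so the iterators stay valid: erase() already returns the next element, so the loop must not also advance the iterator after an erase, and once a bullet hits something it is typically removed as well and the inner loop exited.

        for (auto itr_bullet = ship.m_Bullets.begin(); itr_bullet != ship.m_Bullets.end(); )
        {
            bool bulletHit = false;
            for (auto itr_astroid = asteroids.begin(); itr_astroid != asteroids.end(); )
            {
                if (checkCollision(itr_bullet->getCenter(), itr_astroid->getCenter(),
                                   itr_bullet->getRadius(), itr_astroid->getRadius()))
                {
                    itr_astroid = asteroids.erase(itr_astroid);
                    bulletHit = true;
                    break;                     // this bullet is spent
                }
                ++itr_astroid;                 // only advance when nothing was erased
            }
            if (bulletHit)
                itr_bullet = ship.m_Bullets.erase(itr_bullet);
            else
                ++itr_bullet;
        }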

    Read the article

  • Position sprite at center of screen

    - by Wellie
    I am trying to get a sprite to position itself at the center of the screen but nothing seems to be working for me. I'm trying Viewport viewport = graphics.GraphicsDevice.Viewport; logoPosition = new Vector2((viewport.Width - towerImage.Width) / 2, (viewport.Height - towerImage.Height) / 2); and spriteBatch.Draw(towerImage, centre, null, Color.White, 0, baseOrigin, 1.0f, SpriteEffects.None, 0); This is my first time using XNA and I don't really have a clue what I'm doing.

    Read the article

  • How do I create an efficient long, pannable, sprite-animated scene in a Windows Store game?

    - by Groo
    I am creating my first Windows Store application in XAML, and I cannot seem to find a proper example for the requirements I have. The basic idea of the app is a large scrollable canvas which would lazily start animating the visible parts of the view as soon as the user stops panning over certain content (with some audio played as well): My original idea was to use a StackPanel to add a bunch of custom controls, each of which would then animate itself once visible (with a short delay), but I have a couple of concerns: If the entire canvas is ~50 screen widths wide, is it feasible to load all the content at the beginning, or do I need to plan on doing some lazy loading during scrolling? For example, when I select a certain region in the Bing Travel app, it seems to lazily load tiles as I scroll towards the end. Since the content is stretched 100% vertically, and these animations are vectorized to be resolution independent, I am not sure whether XAML (CompositionTarget) will be able to handle this, or whether I have to go for DirectX (MonoGame or C++) to get rid of flicker. Even better, is there an example for Windows 8 which uses a 100% vertically sized GridView with custom animated controls inside?

    Read the article

  • How do I calculate the motion of 2 massive bodies in space?

    - by 1224
    I'm writing code simulating the 2-dimensional motion of two massive bodies with gravitational fields. The bodies' masses are known, and I have the gravitational force equation. I know that from that force I can get a differential equation for the coordinates, and that once I solve this equation I will get the coordinates. I will need to make up some initial position and some initial velocity. I'd like to end up with a numeric solver for the ordinary differential equation for the coordinates, so I get formulas that I can write in code. Could someone break down how, from the laws and initial conditions, we get to the formulas that calculate x and y at time t?
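
    A closed-form x(t), y(t) is awkward even for two bodies (it leads to Kepler's equation), so most simulations integrate numerically instead: compute the gravitational force each step, update the velocities, then the positions. Below is a hedged C++ sketch using semi-implicit Euler; G, the masses and the initial conditions are made-up values purely for illustration.

        #include <cmath>
        #include <cstdio>

        struct Body { double m, x, y, vx, vy; };

        void step(Body& a, Body& b, double G, double dt)
        {
            double dx = b.x - a.x, dy = b.y - a.y;
            double r2 = dx * dx + dy * dy;
            double r  = std::sqrt(r2);
            double f  = G * a.m * b.m / r2;            // magnitude of the attraction
            double fx = f * dx / r, fy = f * dy / r;   // force on 'a', pointing toward 'b'

            // update velocities first (semi-implicit Euler), then positions
            a.vx += fx / a.m * dt;  a.vy += fy / a.m * dt;
            b.vx -= fx / b.m * dt;  b.vy -= fy / b.m * dt;
            a.x  += a.vx * dt;      a.y  += a.vy * dt;
            b.x  += b.vx * dt;      b.y  += b.vy * dt;
        }

        int main()
        {
            Body sun   { 1000.0, 0.0, 0.0, 0.0, 0.0 };
            Body planet{ 1.0, 100.0, 0.0, 0.0, 3.0 };   // made-up initial conditions
            for (int i = 0; i < 1000; ++i) {
                step(sun, planet, 1.0, 0.01);
                if (i % 100 == 0) std::printf("%f %f\n", planet.x, planet.y);
            }
        }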

    Read the article

  • How many threads should an Android game use?

    - by kvance
    At minimum, an OpenGL Android game has a UI thread and a Renderer thread created by GLSurfaceView. Renderer.onDrawFrame() should be doing a minimum of work to get the highest FPS. The physics, AI, etc. don't need to run every frame, so we can put those in another thread. Now we have: Renderer thread - Update animations and draw polys Game thread - Logic & periodic physics, AI, etc. updates UI thread - Android UI interaction only Since you don't ever want to block the UI thread, I run one more thread for the game logic. Maybe that's not necessary though? Is there ever a reason to run game logic in the renderer thread?

    Read the article

  • A* PathFinding Not Consistent

    - by RedShft
    I just started trying to implement a basic A* algorithm in my 2D tile-based game. All of the nodes are tiles on the map, represented by a struct. I believe I understand A* on paper, as I've gone through some pseudocode, but I'm running into problems with the actual implementation. I've double- and triple-checked my node graph, and it is correct, so I believe the issue is with my algorithm. The issue is that, with the enemy standing still and the player moving around, the pathfinding function writes "No Path" an astounding number of times and only every so often writes "Path Found", which seems inconsistent. This is the node struct for reference: struct Node { bool walkable; //Whether this node is blocked or open vect2 position; //The tile's position on the map in pixels int xIndex, yIndex; //The index values of the tile in the array Node*[4] connections; //An array of pointers to nodes this current node connects to Node* parent; int gScore; int hScore; int fScore; } Here is the rest: http://pastebin.com/cCHfqKTY This is my first attempt at A*, so any help would be greatly appreciated.
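
    One frequent cause of intermittent "No Path" results with a node struct like this is per-search state (parent, gScore, fScore, open/closed membership) surviving from the previous call, so later searches start from stale data. Whether or not that is the bug here, below is a hedged C++ sketch of a grid A* that resets that state up front; the Node layout only mirrors the question's struct and is not the poster's code.

        #include <vector>
        #include <queue>
        #include <algorithm>
        #include <cstdlib>

        struct Node {
            bool walkable = true;
            int x = 0, y = 0;
            std::vector<Node*> connections;   // up to 4 neighbours
            Node* parent = nullptr;
            int g = 0, f = 0;
            bool open = false, closed = false;
        };

        static int heuristic(const Node* a, const Node* b)
        {
            return std::abs(a->x - b->x) + std::abs(a->y - b->y);   // Manhattan distance
        }

        std::vector<Node*> findPath(std::vector<Node>& graph, Node* start, Node* goal)
        {
            // reset every node's per-search state before starting
            for (Node& n : graph) { n.parent = nullptr; n.g = 0; n.f = 0; n.open = n.closed = false; }

            auto cmp = [](const Node* a, const Node* b) { return a->f > b->f; };
            std::priority_queue<Node*, std::vector<Node*>, decltype(cmp)> openSet(cmp);

            start->g = 0;
            start->f = heuristic(start, goal);
            start->open = true;
            openSet.push(start);

            while (!openSet.empty()) {
                Node* current = openSet.top(); openSet.pop();
                if (current->closed) continue;                 // stale duplicate entry
                if (current == goal) {
                    std::vector<Node*> path;
                    for (Node* n = goal; n != nullptr; n = n->parent) path.push_back(n);
                    std::reverse(path.begin(), path.end());
                    return path;
                }
                current->closed = true;
                for (Node* next : current->connections) {
                    if (!next || !next->walkable || next->closed) continue;
                    int g = current->g + 1;                    // uniform step cost
                    if (!next->open || g < next->g) {
                        next->g = g;
                        next->f = g + heuristic(next, goal);
                        next->parent = current;
                        next->open = true;
                        openSet.push(next);                    // duplicates skipped via 'closed'
                    }
                }
            }
            return {};                                         // no path
        }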

    Read the article

  • boolean operations on meshes

    - by lathomas64
    Given a set of vertices and triangles for each mesh, does anyone know of an algorithm, or a place to start looking (I tried Google first but haven't found a good starting point), for performing boolean operations on said meshes and getting a set of vertices and triangles for the resulting mesh? Of particular interest are subtraction and union. Example pictures: http://www.rhino3d.com/4/help/Commands/Booleans.htm

    Read the article

  • Is it possible to use a spherical collision component in UDK?

    - by Almo
    I have an object in UDK, which has a SkeletalMesh. At certain times in the game, I want this object to continue rendering the SkeletalMesh, but I'd like it to use spherical collision temporarily. After reading a bunch about PrimitiveComponents, my understanding is that UDK supports cylindrical and box-like collision, but not spherical without using a static mesh. But it seems an attached static mesh will render, since it has no bHidden attribute. There must be a way to do this, but I don't know UDK well enough yet to understand all the pitfalls.

    Read the article

  • Reading the memory of a N64

    - by toazron1
    I'm looking for a way to read the memory of an N64 while a game is running, in real time. I have a C# program which hooks into the memory of a running emulator and tracks SSB64 stats. I want to do the same thing with a physical N64. Currently it is possible to read the memory with a GameShark Pro; however, it's extremely slow, buggy, and not practical for what I am trying to accomplish. Would it be possible to tap into the GameShark, or the N64 directly, to access the memory in real time? Thanks!

    Read the article
