Search Results

Search found 43935 results on 1758 pages for 'development process'.


  • Illumination and Shading for computer graphics class

    - by Sam I Am
    I am preparing for my test tomorrow and this is one of the practice questions. I have solved it partially but I am confused by the rest. Here is the problem: Consider a gray world with no ambient and no specular lighting (only diffuse lighting). The screen coordinates of a triangle P1, P2, P3 are P1 = (100, 100), P2 = (300, 150), P3 = (200, 200). The gray values at P1, P2, P3 are 1/2, 3/4 and 1/4 respectively. The light is at infinity, and its direction and gray color are (1,1,1) and 1.0 respectively. The coefficient of diffuse reflection is 1/2. The normals at P1, P2, P3 are N1 = (0,0,1), N2 = (1,0,0) and N3 = (0,1,0) respectively. Consider the coordinates of the three points P1, P2, P3 to be 0. Do not normalize the normals. I have computed that the illumination at the three vertices P1, P2, P3 is (1/4, 3/8, 1/8). I have also computed that the interpolation coefficients of a point P inside the triangle, whose coordinates are (220, 160), are (1/5, 2/5, 2/5). Now I have four more questions about this problem.
    1) The illumination at P using Gouraud shading is: i) 1/2. The answer is 1/2, but I have no idea how to compute it.
    2) The interpolated normal at P is given by: i) (2/5, 2/5, 1/5) ii) (1/2, 1/4, 1/4) iii) (3/5, 1/5, 1/5)
    3) The interpolated color at P is given by: i) 1/2. Again, I know the correct answer but have no idea how to arrive at it.
    4) The illumination at P using Phong shading is: i) 1/4 ii) 9/40 iii) 1/2
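
    For illustration (not part of the original post), the barycentric bookkeeping behind questions 2 and 3: with interpolation coefficients (λ1, λ2, λ3) = (1/5, 2/5, 2/5), Gouraud-style interpolation of any per-vertex quantity is just a weighted sum,

        c(P) = \lambda_1 c_1 + \lambda_2 c_2 + \lambda_3 c_3
             = (1/5)(1/2) + (2/5)(3/4) + (2/5)(1/4) = 1/2

        N(P) = \lambda_1 N_1 + \lambda_2 N_2 + \lambda_3 N_3
             = (1/5)(0,0,1) + (2/5)(1,0,0) + (2/5)(0,1,0) = (2/5, 2/5, 1/5)

    Phong shading differs in that it interpolates the normal as above and then re-evaluates the diffuse lighting equation at P with that (here deliberately unnormalized) normal, rather than interpolating the vertex illumination values directly.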

    Read the article

  • 2D Planet Gravity

    - by baked
    I'm trying to make a simple game where a spaceship is launched and its path is then affected by the gravity of planets, similar to this game: http://sciencenetlinks.com/interactives/gravity.html I want to know how to replicate the effect the planets have on the spaceship in that game, so the spaceship can 'loop' around a planet to change direction. Using vectors I have managed to achieve some bogus results, where the spaceship either loops in a huge ellipse around the planet or is only slightly affected by its gravity. Thanks in advance. P.S. I have plenty of coding experience, just none to do with game dev.
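
    For illustration, a minimal per-frame integration sketch in Python (names are made up): the key idea is to accumulate an acceleration of G*m/r^2 toward each planet every frame and integrate it into the ship's velocity, rather than steering the ship directly.

        import math

        G = 6.674e-2  # tuned gameplay constant, not the real-world value

        def apply_gravity(ship, planets, dt):
            """Accumulate gravitational acceleration from every planet, then integrate."""
            ax = ay = 0.0
            for p in planets:
                dx, dy = p["x"] - ship["x"], p["y"] - ship["y"]
                r2 = dx * dx + dy * dy
                r = math.sqrt(r2)
                if r < p["radius"]:          # crude collision guard
                    continue
                a = G * p["mass"] / r2       # magnitude of the acceleration
                ax += a * dx / r             # direction: unit vector toward the planet
                ay += a * dy / r
            ship["vx"] += ax * dt            # semi-implicit Euler keeps orbits reasonably stable
            ship["vy"] += ay * dt
            ship["x"] += ship["vx"] * dt
            ship["y"] += ship["vy"] * dt

    Whether the ship loops tightly or in a huge ellipse comes down to the ratio of G*mass to the launch speed, so those are the numbers to tune.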

    Read the article

  • spinning a 2d Cube

    - by Rahul Verma
    I know that a cube is actually a 3D shape, but I have a different problem here. I have been doing 2D game dev using libgdx but have never touched 3D rendering. What I want in my 2D game is that, instead of coins, my player collects magical cubes. Those cubes need to be spinning on one diagonal; the same effect can be seen in the popular game Vector. Here is a screenshot. Can someone explain the mathematics of such an animation?
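
    For illustration, a hedged sketch of the underlying math (plain Python, no libgdx): rotate the eight cube corners around the main diagonal axis (1,1,1)/sqrt(3) with Rodrigues' rotation formula, then drop the depth coordinate for an orthographic 2D projection.

        import math

        def rotate_about_axis(p, axis, angle):
            """Rodrigues' rotation of point p around a unit axis by angle (radians)."""
            px, py, pz = p
            ux, uy, uz = axis
            c, s = math.cos(angle), math.sin(angle)
            dot = px * ux + py * uy + pz * uz
            cx, cy, cz = (uy * pz - uz * py, uz * px - ux * pz, ux * py - uy * px)  # axis x p
            return (px * c + cx * s + ux * dot * (1 - c),
                    py * c + cy * s + uy * dot * (1 - c),
                    pz * c + cz * s + uz * dot * (1 - c))

        def cube_outline_2d(angle, size=1.0):
            """Rotate a cube's corners about its main diagonal and project to 2D."""
            n = 1.0 / math.sqrt(3.0)
            axis = (n, n, n)                       # the cube's main diagonal
            corners = [(x, y, z) for x in (-size, size)
                                  for y in (-size, size)
                                  for z in (-size, size)]
            # Orthographic projection: keep x and y after rotating, discard z.
            return [rotate_about_axis(c, axis, angle)[:2] for c in corners]

    In a 2D engine you would redraw the projected corners (or a textured quad per visible face) each frame with a steadily increasing angle.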

    Read the article

  • Keeping game model and graphics/animation separate but in sync

    - by AJM
    Suppose I'm building a chess game where I want to have animations. Pieces glide to their new squares when moved. Pieces perform attack animations when capturing other pieces. I'm not sure how to effectively separate the data and logic needed for these animations from the actual game model (in the MVC sense). The pieces themselves should ideally not have to worry about their pixel coordinates or current animation frame. At the same time, many changes to the model are effectively driven by animations. A moved piece changes its position after (before?) its sprite is done gliding. A piece is removed from the board after the capturing piece has finished its attack animation. How would you suggest I manage the game model, the graphics and animations, and their relationships? For example, where would the animations "live"? How would animations be created and managed in response to player moves? How would animations drive updates to the game model, or how would the game model drive animations?
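
    For illustration, one common arrangement, sketched in Python with hypothetical names (not the only answer): the model applies a move immediately and notifies the view; the view owns sprites and queues animations, and anything that must wait for an animation (such as hiding the captured piece) hangs off its completion callback, so gameplay state never depends on frame timing.

        class GlideAnimation:
            """Slides a sprite (a dict with x/y) toward a target over `duration` seconds."""
            def __init__(self, sprite, target, duration=0.3, on_complete=None):
                self.sprite, self.target = sprite, target
                self.duration, self.elapsed = duration, 0.0
                self.start = (sprite["x"], sprite["y"])
                self.on_complete = on_complete

            def update(self, dt):
                self.elapsed = min(self.elapsed + dt, self.duration)
                t = self.elapsed / self.duration
                self.sprite["x"] = self.start[0] + (self.target[0] - self.start[0]) * t
                self.sprite["y"] = self.start[1] + (self.target[1] - self.start[1]) * t
                done = self.elapsed >= self.duration
                if done and self.on_complete:
                    self.on_complete()
                return done

        class Board:
            """Pure game model: squares and piece ids, no pixels, no frames."""
            def __init__(self, view):
                self.view = view
                self.pieces = {"e2": "white_pawn", "d3": "black_pawn"}

            def move(self, src, dst):
                captured = self.pieces.pop(dst, None)   # the model changes immediately
                self.pieces[dst] = self.pieces.pop(src)
                self.view.on_move(src, dst, captured)   # the view decides how to show it

        class BoardView:
            """Owns sprites and running animations; reacts to model events."""
            TILE = 64

            def __init__(self):
                self.sprites = {"e2": {"x": 4 * 64, "y": 6 * 64}, "d3": {"x": 3 * 64, "y": 5 * 64}}
                self.dying = []        # captured sprites still drawn while the glide plays
                self.animations = []

            def on_move(self, src, dst, captured):
                captured_sprite = self.sprites.get(dst) if captured else None
                sprite = self.sprites.pop(src)
                self.sprites[dst] = sprite
                target = ((ord(dst[0]) - ord("a")) * self.TILE, (8 - int(dst[1])) * self.TILE)
                on_done = None
                if captured_sprite is not None:
                    self.dying.append(captured_sprite)
                    on_done = lambda: self.dying.remove(captured_sprite)   # visual removal deferred
                self.animations.append(GlideAnimation(sprite, target, on_complete=on_done))

            def update(self, dt):
                self.animations = [a for a in self.animations if not a.update(dt)]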

    Read the article

  • How do I use setFilmSize in panda3d to achieve the correct view?

    - by lhk
    I'm working with Panda3d and recently switched my game to isometric rendering. I moved the virtual camera accordingly and set an orthographic lens. Then I implemented the classes "Map" and "Canvas". A canvas is a dynamically generated mesh: a flat quad. I'm using it to render the ingame graphics. Since the game itself is still set in a 3d coordinate system I'm planning to rely on these canvases to draw sprites. I could have named this class "Tile" but as I'd like to use it for non-tile sketches (enemies, environment) as well I thought canvas would describe it's function better. Map does exactly what it's name suggests. Its constructor receives the number of rows and columns and then creates a standard isometric map. It uses the canvas class for tiles. I'm planning to write a map importer that reads a file to create maps on the fly. Here's the canvas implementation: class Canvas: def __init__(self, texture, vertical=False, width=1,height=1): # create the mesh format=GeomVertexFormat.getV3t2() format = GeomVertexFormat.registerFormat(format) vdata=GeomVertexData("node-vertices", format, Geom.UHStatic) vertex = GeomVertexWriter(vdata, 'vertex') texcoord = GeomVertexWriter(vdata, 'texcoord') # add the vertices for a flat quad vertex.addData3f(1, 0, 0) texcoord.addData2f(1, 0) vertex.addData3f(1, 1, 0) texcoord.addData2f(1, 1) vertex.addData3f(0, 1, 0) texcoord.addData2f(0, 1) vertex.addData3f(0, 0, 0) texcoord.addData2f(0, 0) prim = GeomTriangles(Geom.UHStatic) prim.addVertices(0, 1, 2) prim.addVertices(2, 3, 0) self.geom = Geom(vdata) self.geom.addPrimitive(prim) self.node = GeomNode('node') self.node.addGeom(self.geom) # this is the handle for the canvas self.nodePath=NodePath(self.node) self.nodePath.setSx(width) self.nodePath.setSy(height) if vertical: self.nodePath.setP(90) # the most important part: "Drawing" the image self.texture=loader.loadTexture(""+texture+".png") self.nodePath.setTexture(self.texture) Now the code for the Map class class Map: def __init__(self,rows,columns,size): self.grid=[] for i in range(rows): self.grid.append([]) for j in range(columns): # create a canvas for the tile. For testing the texture is preset tile=Canvas(texture="../assets/textures/flat_concrete",width=size,height=size) x=(i-1)*size y=(j-1)*size # set the tile up for rendering tile.nodePath.reparentTo(render) tile.nodePath.setX(x) tile.nodePath.setY(y) # and store it for later access self.grid[i].append(tile) And finally the usage def loadMap(self): self.map=Map(10, 10, 1) this function is called within the constructor of the World class. The instantiation of world is the entry point to the execution. The code is pretty straightforward and runs good. Sadly the output is not as expected: Please note: The problem is not the white rectangle, it's my player object. The problem is that although the map should have equal width and height it's stretched weirdly. With orthographic rendering I expected the map to be a perfect square. What did I do wrong ? UPDATE: I've changed the viewport. This is how I set up the orthographic camera: lens = OrthographicLens() lens.setFilmSize(40, 20) base.cam.node().setLens(lens) You can change the "aspect" by modifying the parameters of setFilmSize. I don't know exactly how they are related to window size and screen resolution but after testing a little the values above seem to work for me. Now everything is rendered correctly as long as I don't resize the window. Every change of the window's size as well as switching to fullscreen destroys the correct rendering. 
    I know that implementing a listener for resize events is beyond the scope of this question. However, I wonder why I need to make the film's width twice its height. My window is square! Can you tell me how to work out the correct setting for the film size? UPDATE 2: I can imagine that it's hard to envision the behaviour of the game. At first glance the obvious solution is to pass the window's width and height in pixels to setFilmSize. There are two problems with that approach: the parameters of setFilmSize are in-game units, so you get a way too big view if you pass the pixel size, and for some strange reason the image is distorted if you pass equal values for width and height. Here's the output for setFilmSize(800, 800). You'll have to strain your eyes, but you'll see what I mean.
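
    For illustration, a hedged sketch (assuming a stock ShowBase setup; I can't say why 40 x 20 happened to work for a square window without seeing the window configuration): one common approach is to recompute the film size from the reported window size whenever a window event fires, keeping the film's aspect ratio equal to the window's so squares stay square across resizes and fullscreen switches.

        from direct.showbase.ShowBase import ShowBase
        from panda3d.core import OrthographicLens

        class Game(ShowBase):
            VIEW_HEIGHT = 20.0                     # world units visible vertically; pick to taste

            def __init__(self):
                ShowBase.__init__(self)
                self.lens = OrthographicLens()
                self.cam.node().setLens(self.lens)
                self.accept("window-event", self.on_window_event)
                self.fit_lens()

            def on_window_event(self, window):
                self.fit_lens()                    # fires on resize and fullscreen toggles

            def fit_lens(self):
                # Keep film aspect equal to window aspect so one world unit is square on screen.
                aspect = self.win.getXSize() / float(self.win.getYSize())
                self.lens.setFilmSize(self.VIEW_HEIGHT * aspect, self.VIEW_HEIGHT)

        # Game().run()   # commented out: needs a display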

    Read the article

  • Problem with an array of shooter sprites containing different colour bubbles

    - by prakash s
    everyone i am developing bubble shooter game in cocos2d I have placed shooter array which contain different color bubbles like this 00000000 it is 8 bubbles array if i tap the screen, first bubbles should move for shooting the target .png .And if i again tap the screen again 2nd position bubble should move for shooting the target.png bubbles,how it will possible for me because i have already created the array of target which contain different color bubbles, here i write the code : - (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { // Choose one of the touches to work with UITouch *touch = [touches anyObject]; CGPoint location = [touch locationInView:[touch view]]; location = [[CCDirector sharedDirector] convertToGL:location]; // Set up initial location of projectile CGSize winSize = [[CCDirector sharedDirector] winSize]; NSMutableArray * movableSprites = [[NSMutableArray alloc] init]; NSArray *images = [NSArray arrayWithObjects:@"1.png", @"2.png", @"3.png", @"4.png",@"5.png",@"6.png",@"7.png", @"8.png", nil]; for(int i = 0; i < images.count; ++i) { int index = (arc4random() % 8)+1; NSString *image = [NSString stringWithFormat:@"%d.png", index]; CCSprite*projectile = [CCSprite spriteWithFile:image]; //CCSprite *projectile = [CCSprite spriteWithFile:@"3.png" rect:CGRectMake(0, 0,256,256)]; [self addChild:projectile]; [movableSprites addObject:projectile]; float offsetFraction = ((float)(i+1))/(images.count+1); //projectile.position = ccp(20, winSize.height/2); //projectile.position = ccp(18,0 ); //projectile.position = ccp(350*offsetFraction, 20.0f); projectile.position = ccp(10/offsetFraction, 20.0f); // projectile.position = ccp(projectile.position.x,projectile.position.y); // Determine offset of location to projectile int offX = location.x - projectile.position.x; int offY = location.y - projectile.position.y; // Bail out if we are shooting down or backwards if (offX <= 0) return; // Ok to add now - we've double checked position //[self addChild:projectile]; // Determine where we wish to shoot the projectile to int realX = winSize.width + (projectile.contentSize.width/2); float ratio = (float) offY / (float) offX; int realY = (realX * ratio) + projectile.position.y; CGPoint realDest = ccp(realX, realY); // Determine the length of how far we're shooting int offRealX = realX - projectile.position.x; int offRealY = realY - projectile.position.y; float length = sqrtf((offRealX*offRealX)+(offRealY*offRealY)); float velocity = 480/1; // 480pixels/1sec float realMoveDuration = length/velocity; // Move projectile to actual endpoint [projectile runAction:[CCSequence actions: [CCMoveTo actionWithDuration:realMoveDuration position:realDest], [CCCallFuncN actionWithTarget:self selector:@selector(spriteMoveFinished:)], nil]]; // Add to projectiles array projectile.tag = 1; [_projectiles addObject:projectile]; } }
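
    For illustration, an engine-agnostic sketch of the intended behaviour (Python, hypothetical names; the original is cocos2d/Objective-C): the loop above launches every bubble in the rack on each tap, whereas the idea is to build the rack once and, on each tap, launch only the next bubble toward the tap point.

        import math, random

        def build_rack(num=8):
            """Create the row of shooter bubbles once, at game start."""
            return [{"color": random.randint(1, 8), "x": 20 + i * 24, "y": 20,
                     "vx": 0.0, "vy": 0.0}
                    for i in range(num)]

        def on_tap(rack, tap_x, tap_y, speed=480.0):
            """Launch only the next bubble in the rack toward the tap point."""
            if not rack:
                return None
            bubble = rack.pop(0)                   # first un-shot bubble
            dx, dy = tap_x - bubble["x"], tap_y - bubble["y"]
            length = math.hypot(dx, dy) or 1.0
            bubble["vx"] = speed * dx / length
            bubble["vy"] = speed * dy / length
            return bubble                          # the caller adds it to the live projectiles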

    Read the article

  • How to build a 4x game?

    - by Marco
    I'm trying to study how to successfully implement a 4X game. Areas of interest:
    1) Map data: how to store stellar systems (graphs?), how to generate them, and so on.
    2) Multiplayer: how to organize the code into a non-graphical server and a client that displays it.
    3) Command system: what patterns are there to capture user and AI decisions and handle them, adding at first "explore" and "colonize", then "combat", "research", "spy" and so on (commands can affect ships, planets, research, etc.).
    4) AI system: the AI can use commands to expand and to upgrade planets and ships.
    I know it is a big question, so help is appreciated :D
    1) Map data. The best choice is a graph to model the galaxy. A node is a stellar system, and every system has a list of planets. Ships cannot travel outside of predefined paths, as in Ascendancy: http://www.abandonia.com/files/games/221/Ascendancy_2.png Every connection between two stellar systems has a cost, in turns. Generating a galaxy is then only a matter of: dimension (number of stellar systems), variety (randomized number of planets and their types: desert, earth-like, etc.), the position of each stellar system in game space, and connections (ensuring that a path exists between every pair of nodes, so the graph is "connected" - not sure if that is the mathematically correct term).
    2) Multiplayer. The game is organized in turns: player 1, player 2, AI 1, AI 2. The server takes care of all data, and clients just display it and collect data changes. Because it is a turn-based game, latency is not a problem :D
    3) Command system. I would like to design a hierarchy of commands to take care of this aspect: an abstract GenericCommand(target), an ExploreCommand(ship) extending GenericCommand, a ColonizeCommand(ship), a BuildCommand(planet, object), and so on. In my head all these commands are stored in a queue for every planet, ship, research centre or spy, and each turn the commands are sent to the server, which applies them and changes the data state.
    4) AI system. I don't have any idea about this. It is a big topic, and what I want is a simple AI, something like "expand and fight against everyone". I am thinking of a behaviour tree to control the AI's moves, so I can develop an AI that tries to build ships to expand, then colonizes planets, upgrades them through science, and fights enemies. Could it be done with a finite state machine too? Any ideas, resources or articles are welcome!
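
    For point 1, a small illustrative sketch (plain Python, no game framework) of one way to get a guaranteed-connected galaxy: place systems at random, connect each new system to a randomly chosen earlier one so the graph is connected by construction, then sprinkle in a few extra edges so travel routes form loops.

        import random

        def generate_galaxy(num_systems=30, extra_links=10, size=1000):
            systems = []
            edges = set()
            for i in range(num_systems):
                systems.append({
                    "pos": (random.uniform(0, size), random.uniform(0, size)),
                    "planets": random.randint(1, 6),
                })
                if i > 0:
                    # Linking every new system to an existing one keeps the graph connected.
                    edges.add((random.randrange(i), i))
            for _ in range(extra_links):           # optional loops so travel isn't a pure tree
                a, b = random.sample(range(num_systems), 2)
                edges.add((min(a, b), max(a, b)))
            return systems, edges

    The travel cost (in turns) for each edge can then be derived from the distance between its two endpoints.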

    Read the article

  • OpenGL: Filtering/antialiasing textures in a 2D game

    - by futlib
    I'm working on a 2D game using OpenGL 1.5 that uses rather large textures. I'm seeing aliasing effects and am wondering how to tackle them. I'm finding lots of material about antialiasing in 3D games, but I don't see how most of it applies to 2D games - e.g. anisotropic filtering seems to make no sense, and FSAA doesn't sound like the best bet either. I suppose this means texture filtering is my best option? Right now I'm using bilinear filtering, I think:
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    From what I've read, I'd have to use mipmaps to get trilinear filtering, which would drive memory usage up, so I'd rather not. I know the final sizes of all the textures when they are loaded, so can't I somehow size them correctly at that point (using some form of texture filtering)?
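
    For illustration, if mipmaps do turn out to be acceptable (the overhead is roughly 33% extra texture memory, not a doubling), GL 1.4+ can generate them at upload time; a hedged sketch using PyOpenGL-style calls (adjust to your binding, and set the parameter before the glTexImage2D upload):

        from OpenGL.GL import (glTexParameteri, GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE,
                               GL_TEXTURE_MIN_FILTER, GL_TEXTURE_MAG_FILTER,
                               GL_LINEAR_MIPMAP_LINEAR, GL_LINEAR)

        def setup_trilinear_filtering():
            """Ask the driver to build mipmaps on upload and sample them trilinearly."""
            glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE)   # before glTexImage2D
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)

    The alternative, since the final on-screen sizes are known at load time, is to downscale the images once on the CPU to those sizes and keep plain bilinear filtering; minification aliasing largely disappears because the textures are no longer being shrunk at draw time.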

    Read the article

  • GLSL: is it possible to offset vertices based on heightmap colour?

    - by Rob
    I am attempting to generate some terrain based upon a heightmap. I have generated a 32 x 32 grid and a corresponding heightmap. In my vertex shader I am trying to offset the position on the Y axis based upon the colour of the heightmap, white vertices being higher than black ones.
        //Vertex Shader Code
        #version 330
        uniform mat4 modelMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;
        uniform sampler2D heightmap;
        layout (location=0) in vec4 vertexPos;
        layout (location=1) in vec4 vertexColour;
        layout (location=3) in vec2 vertexTextureCoord;
        layout (location=4) in float offset;
        out vec4 fragCol;
        out vec4 fragPos;
        out vec2 fragTex;

        void main()
        {
            // Retrieve the current pixel's colour
            vec4 hmColour = texture(heightmap, vertexTextureCoord);
            // Offset the y position by the value of the current texel's colour value ?
            vec4 offset = vec4(vertexPos.x, vertexPos.y + hmColour.r, vertexPos.z, 1.0);
            // Final position
            gl_Position = projectionMatrix * viewMatrix * modelMatrix * offset;
            // Data sent to the fragment shader
            fragCol = vertexColour;
            fragPos = vertexPos;
            fragTex = vertexTextureCoord;
        }
    However, the code I have produced only creates a grid with none of the Y vertices higher than any others.
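
    The shader itself looks plausible; a frequent cause of a heightmap reading as flat is that the sampler uniform was never pointed at the texture unit the heightmap is bound to. For illustration, a hedged sketch of that binding step with PyOpenGL-style calls (your loader code will differ):

        from OpenGL.GL import (glUseProgram, glActiveTexture, glBindTexture, glUniform1i,
                               glGetUniformLocation, GL_TEXTURE0, GL_TEXTURE_2D)

        def bind_heightmap(program, heightmap_texture_id):
            """Bind the heightmap to texture unit 0 and tell the shader which unit to sample."""
            glUseProgram(program)
            glActiveTexture(GL_TEXTURE0)
            glBindTexture(GL_TEXTURE_2D, heightmap_texture_id)
            glUniform1i(glGetUniformLocation(program, "heightmap"), 0)   # sampler -> unit 0

    It is also worth confirming that the grid really has distinct texture coordinates per vertex; if every vertex samples the same texel, the offset is uniform and the mesh stays flat.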

    Read the article

  • What are the most common AI systems implemented in Tower Defense Games

    - by the_Dan
    I'm currently in the middle of researching the various types of AI techniques used in tower defense games. Could someone help me understand the different types of techniques and their associated advantages? Using Google I have already found several techniques: random map traversal, and pathfinding, e.g. cost-based traversal algorithms such as A*. I have already found a great answer to this type of question at the link below, but I feel that answer is tailored to FPS games; if anyone could add to it and make it specific to tower defense games, I would be truly grateful. How is AI most commonly implemented in popular games? Examples of such games would be: Radiant Defense, Plants vs. Zombies (not truly intelligent, but there must be an AI system used, right?), Fieldrunners. Edit: After further research I found an interesting book that may be useful: http://www.amazon.com/dp/0123747317/?tag=stackoverfl08-20

    Read the article

  • Making a surface transparent from blackness of texture

    - by Dan the Man
    I am making a "halo" shader in Unity using GLSL, and I've come to a roadblock. What I need to do is take a texture, like the following, and make it transparent according to how dark it is. I don't want a cutout, because that cuts it off at a hard edge. This line of code doesn't seem to work:
        gl_FragColor = texture2D( vec4( _MainTex.r, _MainTex.g, _MainTex.b, _MainTex.a), vec2(textureCoordinates));

    Read the article

  • What is the benefit of triple buffering?

    - by user782220
    I read everything written in a previous question. From what I understand, in double buffering the program must wait until the finished drawing is copied or swapped before starting the next drawing. In triple buffering the program has two back buffers and can immediately start drawing in the one that is not involved in such copying. But with triple buffering, if you're in a situation where you can take advantage of the third buffer, doesn't that suggest that you are drawing frames faster than the monitor can refresh? In that case you don't actually get a higher frame rate, so what is the benefit of triple buffering?

    Read the article

  • How do I make time?

    - by SystemNetworks
    I want to display some text for a certain amount of time. One way is to use threads - are there any other ways? I can't get threads to work with Slick2D. This is my code when I use threads with Slick:
        package javagame;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.Image;
        import java.util.Random;
        import org.newdawn.slick.Input;
        import org.newdawn.slick.*;
        import org.newdawn.slick.state.*;
        import org.lwjgl.input.Mouse;

        public class thread1 implements Runnable {
            String showUp;
            int timeLeft;

            public thread1(String s) {
                s = showUp;
            }

            public void run(Graphics g) {
                try {
                    g.drawString("%s is sleeping %d", 500, 500);
                    Thread.sleep(timeLeft);
                    g.drawString("%s is awake", 600, 600);
                } catch(Exception e) {
                }
            }

            @Override
            public void run() {
                // TODO Auto-generated method stub
                run();
            }
        }
    It auto-generates a new run(), and when I call it from my main class I get a stack overflow!
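
    For illustration, in a game loop you normally don't need a thread at all: keep a countdown in the game state, decrease it by the frame delta in update(), and only draw the string while the countdown is positive. A framework-neutral sketch in Python (Slick2D's update(GameContainer, int delta) already hands you the delta in milliseconds):

        class TimedMessage:
            """Shows a message for `duration` seconds, driven by the game loop's delta time."""
            def __init__(self):
                self.text = ""
                self.time_left = 0.0

            def show(self, text, duration=2.0):
                self.text = text
                self.time_left = duration

            def update(self, dt):
                if self.time_left > 0.0:
                    self.time_left -= dt

            def render(self, draw_string):
                if self.time_left > 0.0:
                    draw_string(self.text, 500, 500)

        # Usage inside the loop (hypothetical adapter around Graphics.drawString):
        # message.show("You found a sword!", 3.0)
        # message.update(delta_ms / 1000.0)
        # message.render(draw_string)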

    Read the article

  • Realistic planetary terrain generation with weights

    - by Programmdude
    I need terrain generation for a planet. The planet will be divided up into several hundred hexes, and I need it to be realistic and based on weights. I have dabbled in terrain generation before, but nothing like this. So I figure it would be a good idea to ask the community for answers, recommended articles or the like. By realistic, I mean not just random hexes, but continent shaped things with a few islands. More desert around the equator and more ice around the poles. I also have two weights I need to base it around: ice percentage and water percentage. That means that around XX% of the planet will need to be water. Does anyone have any advice or places to start? Generating arbitrary terrain is easy, but something a bit more "organic" like this seems rather difficult. It also needs to be seamless. Should be obvious since it's a planet, but no harm in pointing it out.
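
    For illustration, one hedged way to hit an exact water percentage: generate a continuous "elevation" value per hex with whatever noise you like, then choose the sea level as the matching quantile of all elevations instead of a fixed threshold; latitude can drive crude ice and desert bands the same way. A Python sketch with uniform random noise standing in for proper coherent (Perlin/simplex) noise:

        import random

        def classify_hexes(hexes, water_pct=0.6, ice_pct=0.1):
            """hexes: list of dicts with 'lat' in [-90, 90]. Adds a 'terrain' key to each."""
            for h in hexes:
                h["elevation"] = random.random()          # stand-in for coherent noise
            # Sea level chosen so water_pct of hexes end up at or below it.
            elevations = sorted(h["elevation"] for h in hexes)
            sea_level = elevations[int(water_pct * (len(elevations) - 1))]
            for h in hexes:
                polar = abs(h["lat"]) / 90.0              # 0 at the equator, 1 at the poles
                if polar > 1.0 - ice_pct:                 # very rough polar cap band
                    h["terrain"] = "ice"
                elif h["elevation"] <= sea_level:
                    h["terrain"] = "water"
                elif polar < 0.2:
                    h["terrain"] = "desert"               # hot band near the equator
                else:
                    h["terrain"] = "land"
            return hexes

    Swapping the random values for low-frequency coherent noise is what turns scattered hexes into continent-shaped blobs; the quantile trick still guarantees the requested water fraction.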

    Read the article

  • What could cause a pixel shader to paint outside the lines of the vertex shader output?

    - by Rei Miyasaka
    From what I understand, the pixels that a pixel shader operates on are specified implicitly by the SV_POSITION output (in DirectX) of the vertex shader. What then could cause a pixel shader to render in the middle of nowhere? I used the new Visual Studio 2012 graphics debugger to visualize my vertex and pixel shader output. This is the output from a DrawIndexed() call that draws a cube: the pink part is the rendered output of the pixel shader, which takes the cube on its left as its input. The vertex shader code:
        cbuffer Buf
        {
            float4x4 final;
        };

        struct In
        {
            float4 pos:POSITION;
            float3 norm:NORMAL;
            float2 texuv:TEXCOORD;
        };

        struct Out
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        Out main(In input)
        {
            Out output;
            output.pos = mul(input.pos, final);
            output.col = float4(1.0f, 0.5f, 0.5f, 1.0f);
            output.tex = input.texuv;
            return output;
        }
    And the pixel shader:
        struct In
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        float4 main(In input) : SV_TARGET
        {
            return input.col;
        }
    The raster stage is the only thing between the vertex shader and the pixel shader, so my suspicion is that it's some raster stage setting. But the raster stage shouldn't change the shape of the vertex shader output so drastically, should it?

    Read the article

  • My first animation - Using SDL.NET C#

    - by Mark
    Hi all! I'm trying to animate a player object in my 2D grid when the user clicks somewhere on the screen. I have the following four variables: oX (current player position X), oY (current player position Y), dX (destination X) and dY (destination Y). How can I make sure the player moves in a straight line to the new X/Y coordinates? The way I'm doing it now is really awful and causes the player to move first along the X axis and only then along the Y axis. Can someone give me some guidance on the math involved, because I'm really not sure how to accomplish this? Thank you for your time. Kind regards, Mark
    Update: It's working now, but what is the right way to check whether the current position is equal to the target position?
        private static void MovePlayer(double x2, double y2, int duration)
        {
            double hX = x2 - m_PlayerPosition.X;
            double hY = y2 - m_PlayerPosition.Y;
            double Length = Math.Sqrt(Math.Pow(hX, 2) + Math.Pow(hY, 2));
            hX = hX / Length;
            hY = hY / Length;
            while (m_PlayerPosition.X != Convert.ToInt32(x2) || m_PlayerPosition.Y != Convert.ToInt32(y2))
            {
                m_PlayerPosition.X += Convert.ToInt32(hX * 1);
                m_PlayerPosition.Y += Convert.ToInt32(hY * 1);
                UpdatePlayerLocation();
            }
        }
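
    For illustration, the usual fix is to stop testing positions for exact equality and instead step by at most the remaining distance each frame, keeping the position in floats and only rounding when drawing; the movement also belongs in the per-frame update rather than a while loop that spins until arrival. A language-neutral sketch in Python:

        import math

        def step_towards(pos, target, speed, dt):
            """Move pos toward target by at most speed*dt; returns True once arrived."""
            dx, dy = target[0] - pos[0], target[1] - pos[1]
            dist = math.hypot(dx, dy)
            step = speed * dt
            if dist <= step or dist == 0.0:
                pos[0], pos[1] = target          # snap the final fraction of a pixel
                return True
            pos[0] += dx / dist * step
            pos[1] += dy / dist * step
            return False

        # once per frame:
        # arrived = step_towards(player_pos, [dest_x, dest_y], speed=120.0, dt=frame_seconds)

    Rounding to integers only at draw time (not when storing the position) is what avoids both the x-then-y staircase and the "never exactly equal" stopping problem.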

    Read the article

  • OpenGL stars are not rendering

    - by Darestium
    I doing Nehe's Open GL Lesson 9. I'm using SFML for windowing, the strange thing is no stars are rendering. #include <SFML/System.hpp> #include <SFML/Window.hpp> #include <SFML/Graphics.hpp> #include <iostream> void processEvents(sf::Window *app); void processInput(sf::Window *app); void renderGlScene(sf::Window *app); void init(); int loadResources(); const int NUM_OF_STARS = 50; float triRot = 0.0f; float quadRot = 0.0f; bool twinkle = false; bool tKey = false; float zoom = 15.0f; float tilt = 90.0f; float spin = 0.0f; unsigned int loop; unsigned int texture_handle[1]; typedef struct { int r, g, b; float distance; float angle; } stars; stars star[NUM_OF_STARS]; int main() { sf::Window app(sf::VideoMode(800, 600, 32), "Nehe Lesson 9"); app.UseVerticalSync(false); init(); if (loadResources() == -1) { return EXIT_FAILURE; } while (app.IsOpened()) { processEvents(&app); processInput(&app); renderGlScene(&app); app.Display(); } return EXIT_SUCCESS; } int loadResources() { sf::Image img_data; // Load Texture if (!img_data.LoadFromFile("data/images/star.bmp")) { std::cout << "Could not load data/images/star.bmp"; return -1; } // Generate 1 texture glGenTextures(1, &texture_handle[0]); // Linear filtering glBindTexture(GL_TEXTURE_2D, texture_handle[0]); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img_data.GetWidth(), img_data.GetHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img_data.GetPixelsPtr()); return 0; } void processInput(sf::Window *app) { const sf::Input& input = app->GetInput(); if (input.IsKeyDown(sf::Key::T) && !tKey) { tKey = true; twinkle = !twinkle; } if (!input.IsKeyDown(sf::Key::T)) { tKey = false; } if (input.IsKeyDown(sf::Key::Up)) { tilt -= 0.05f; } if (input.IsKeyDown(sf::Key::Down)) { tilt += 0.05f; } if (input.IsKeyDown(sf::Key::PageUp)) { zoom -= 0.02f; } if (input.IsKeyDown(sf::Key::Up)) { zoom += 0.02f; } } void init() { glClearDepth(1.f); glClearColor(0.f, 0.f, 0.f, 0.f); // Enable texturing glEnable(GL_TEXTURE_2D); //glDepthMask(GL_TRUE); // Setup a perpective projection glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.f, 1.f, 1.f, 500.f); glShadeModel(GL_SMOOTH); glBlendFunc(GL_SRC_ALPHA, GL_ONE); glEnable(GL_BLEND); for (loop = 0; loop < NUM_OF_STARS; loop++) { star[loop].distance = (float)loop / NUM_OF_STARS * 5.0f; // Calculate distance from the centre // Give stars random rgb value star[loop].r = rand() % 256; star[loop].g = rand() % 256; star[loop].b = rand() % 256; } } void processEvents(sf::Window *app) { sf::Event event; while (app->GetEvent(event)) { if (event.Type == sf::Event::Closed) { app->Close(); } if (event.Type == sf::Event::KeyPressed && event.Key.Code == sf::Key::Escape) { app->Close(); } } } void renderGlScene(sf::Window *app) { app->SetActive(); // Clear color depth buffer glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Apply some transformations glMatrixMode(GL_MODELVIEW); glLoadIdentity(); // Select texture glBindTexture(GL_TEXTURE_2D, texture_handle[0]); for (loop = 0; loop < NUM_OF_STARS; loop++) { glLoadIdentity(); // Reset The View Before We Draw Each Star glTranslatef(0.0f, 0.0f, zoom); // Zoom Into The Screen (Using The Value In 'zoom') glRotatef(tilt, 1.0f, 0.0f, 0.0f); // Tilt The View (Using The Value In 'tilt') glRotatef(star[loop].angle, 0.0f, 1.0f, 0.0f); // Rotate To The Current Stars Angle glTranslatef(star[loop].distance, 0.0f, 0.0f); // Move Forward On The X Plane 
glRotatef(-star[loop].angle,0.0f,1.0f,0.0f); // Cancel The Current Stars Angle glRotatef(-tilt,1.0f,0.0f,0.0f); // Cancel The Screen Tilt if (twinkle) { glColor4ub(star[(NUM_OF_STARS - loop) - 1].r, star[(NUM_OF_STARS - loop)-1].g, star[(NUM_OF_STARS - loop) - 1].b, 255); glBegin(GL_QUADS); // Begin Drawing The Textured Quad glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f); glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f); glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, 1.0f, 0.0f); glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, 1.0f, 0.0f); glEnd(); // Done Drawing The Textured Quad } glRotatef(spin,0.0f,0.0f,1.0f); // Rotate The Star On The Z Axis // Assign A Color Using Bytes glColor4ub(star[loop].r, star[loop].g, star[loop].b, 255); glBegin(GL_QUADS); // Begin Drawing The Textured Quad glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f,-1.0f, 0.0f); glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f,-1.0f, 0.0f); glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, 1.0f, 0.0f); glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, 1.0f, 0.0f); glEnd(); // Done Drawing The Textured Quad spin += 0.01f; // Used To Spin The Stars star[loop].angle += (float)loop / NUM_OF_STARS; // Changes The Angle Of A Star star[loop].distance -= 0.01f; // Changes The Distance Of A Star if (star[loop].distance < 0.0f) { star[loop].distance += 5.0f; // Move The Star 5 Units From The Center star[loop].r = rand() % 256; // Give It A New Red Value star[loop].g = rand() % 256; // Give It A New Green Value star[loop].b = rand() % 256; // Give It A New Blue Value } } } I've looked over the code atleast 10 times now and I can't figure out the problem. Any help would be much appreciated.

    Read the article

  • How should a "Collision System" be implemented?

    - by nathan
    My game is written using an entity system approach, using the Artemis framework. Right now my collision detection is called from the movement system, but I'm wondering whether that is a proper way to do collision detection with such an approach. I'm now thinking of a new system dedicated to collision detection that would process all the solid entities and check whether they are colliding with one another. Is that a correct way to handle collision detection with an entity system approach? Also, how should I implement this collision system? I thought of an IntervalEntitySystem that would check every 200 ms (a value chosen based on the Artemis documentation) whether some entities are colliding.
        protected void processEntities(ImmutableBag<Entity> ib) {
            for (int i = 0; i < ib.size(); i++) {
                Entity e = ib.get(i);
                // check for collision with other entities here
            }
        }
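
    For illustration, a dedicated collision system is a common choice; the sketch below (plain Python, Artemis-like shape but hypothetical names) runs every frame rather than on a 200 ms interval, since an interval poll can easily miss fast contacts, and uses a brute-force pair test that can later be swapped for a spatial grid or quadtree.

        class CollisionSystem:
            """Checks every solid entity pair once per frame and emits collision events."""
            def __init__(self, world):
                self.world = world

            def process(self):
                solids = [e for e in self.world.entities if "bounds" in e]   # entities as dicts here
                for i in range(len(solids)):
                    for j in range(i + 1, len(solids)):
                        a, b = solids[i], solids[j]
                        if overlaps(a["bounds"], b["bounds"]):
                            self.world.events.append(("collision", a["id"], b["id"]))

        def overlaps(r1, r2):
            """Axis-aligned rectangle overlap test; rects are (x, y, w, h)."""
            return (r1[0] < r2[0] + r2[2] and r2[0] < r1[0] + r1[2] and
                    r1[1] < r2[1] + r2[3] and r2[1] < r1[1] + r1[3])

    Emitting events (rather than resolving collisions inside the loop) keeps the response logic - damage, bouncing, pickups - in whatever system cares about it, which fits the entity-system separation the question is after.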

    Read the article

  • Logarithmic spacing of FFT bins

    - by Mykel Stone
    I'm trying to work through the examples in the GameDev.net beat detection article ( http://archive.gamedev.net/archive/reference/programming/features/beatdetection/index.html ). I have no issue with performing an FFT, getting the frequency data and doing most of the article, but I'm running into trouble in section 2.B, "Enhancements and beat decision factors". In this section the author gives three conditions, numbered R10-R12, to be used to determine how many bins go into each subband:
    R10 - linear increase of the width of the subband with its index;
    R11 - we can choose, for example, the width of the first subband;
    R12 - the sum of all the widths must not exceed 1024.
    He says the following in the article: "Once you have equations (R11) and (R12) it is fairly easy to extract 'a' and 'b', and thus to find the law of the 'wi'. This calculus of 'a' and 'b' must be made manually and 'a' and 'b' defined as constants in the source; indeed they do not vary during the song." However, I cannot seem to understand how these values are calculated. I'm probably missing something simple, but learning Fourier analysis in a couple of weeks has left me decimated-in-mind and I cannot seem to see it.
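
    For illustration, the algebra the article leaves out: with a linear law w_i = a*i + b for i = 1..N, conditions (R11) and (R12) are two linear equations in a and b, so they pin both constants down. A Python sketch, assuming N = 32 subbands, a chosen first width w_1, and 1024 bins in total:

        def linear_subband_widths(total_bins=1024, num_bands=32, first_width=2.0):
            """Solve w_i = a*i + b given w_1 = first_width and sum(w_i) = total_bins."""
            n = num_bands
            # sum_{i=1..n} (a*i + b) = a*n*(n+1)/2 + b*n = total_bins,   with   a + b = first_width
            a = (total_bins - n * first_width) / (n * (n - 1) / 2.0)
            b = first_width - a
            widths = [a * i + b for i in range(1, n + 1)]
            return a, b, widths

        a, b, widths = linear_subband_widths()
        assert abs(sum(widths) - 1024) < 1e-6     # (R12) holds

    In practice the widths are rounded to whole bins, with the last subband adjusted so the total still comes to exactly 1024.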

    Read the article

  • Identify which CCSprite is touched in Cocos2d

    - by PeterK
    I am trying to learn Cocos2d and am experimenting with Ray Wenderlich's whack-a-mole tutorial: www.raywenderlich.com/2560/how-to-create-a-mole-whacking-game-with-cocos2d-part-1 In this tutorial three CCSprites pop up and you should click on them. However, I am trying to identify which mole (a rat in my case) is popping up and place a CCSprite above it. Initially this looked like an easy task, but I am failing. I am trying to NSLog "LEFT HIT". I would guess the problem is in the if statement and the last "227" height parameter. The left rat boundingBox = {{99.5, 146.5}, {165, 227}} (from NSLog). The key code is in the ccTouchBegan function:
        -(BOOL) ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
            CGPoint touchLocation = [self convertTouchToNodeSpace:touch];
            for (CCSprite *rat in rats) {
                if (rat.userData == FALSE) continue;
                if (CGRectContainsPoint(rat.boundingBox, touchLocation)) {
                    //left:  rat boundingBox = {{99.5, 146.5}, {165, 227}}
                    //mid:   rat boundingBox = {{349.5, 146.5}, {165, 227}}
                    //right: rat boundingBox = {{599.5, 146.5}, {165, 227}}
                    //>>>>Here is where i try to get a hit<<<<
                    if (CGRectContainsPoint(CGRectMake(99.5, 146.55, 165, 227), touchLocation)) {
                        NSLog(@">>>>HIT LEFT<<<<<");
                    }
    I would really appreciate a few ideas on how to get this to work.

    Read the article

  • Cropping a line (laser beam) in XNA

    - by electroflame
    I have a laser sprite that I wish to crop. I want to crop it so that when it collides with an item, I can calculate the distance between the starting point and the ending point and only draw that. This eliminates the "overdraw" of a laser drawing past an item. Essentially, I'm trying to crop a line, but also keep that line "attached" to the nose of my ship. The line should not be drawn past the nose of my ship; that should be the starting point. There is no rotation to worry about. I thought that doing this through SpriteBatch would be best. This is my current SpriteBatch code:
        spriteBatch.Draw(Laser.sprite,
            new Rectangle((int)Laser.position.X, (int)Laser.position.Y, Laser.sprite.Width, LaserHeight),
            new Rectangle(0, 0, (int)(Laser.sprite.Width), LaserHeight),
            new Color(255, 255, 255, (byte)MathHelper.Clamp(Laser.Alpha, 0, 255)),
            Laser.rotation,
            new Vector2(Laser.sprite.Width/2, LaserHeight/2),
            SpriteEffects.None, 0);
    But this doesn't quite work. It does only draw part of the sprite, but when LaserHeight is incremented, the line lengthens in both directions! I believe this is due to some stupid error on my part with the origin of the draw. Quick recap: I need my laser sprite drawn with its bottom at the nose of my ship, and then to use LaserHeight to crop the image so only part of it is drawn. I have a feeling my explanation is a bit... lacking, so if you require more information, please say so and I will try to provide it. Thanks in advance!
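
    For illustration, the two-way growth comes from the origin being the middle of the beam (Laser.sprite.Width/2, LaserHeight/2), so growing LaserHeight extends it in both directions. A hedged, engine-neutral sketch of the geometry in Python (the exact origin/rotation convention depends on which way the un-rotated laser art points, so treat the rotation offset as an assumption): anchor the origin at the beam's base, crop the source rectangle to the hit distance, and give the destination rectangle the same height.

        import math

        def laser_draw_params(ship_nose, hit_point, texture_w, texture_h):
            """Return (source_rect, dest_rect, origin, rotation) so the beam starts at the
            ship's nose and stops at the hit point instead of overdrawing past it."""
            dx = hit_point[0] - ship_nose[0]
            dy = hit_point[1] - ship_nose[1]
            length = min(int(math.hypot(dx, dy)), texture_h)   # crop to the collision distance
            source_rect = (0, 0, texture_w, length)             # only `length` pixels of the beam art
            dest_rect = (ship_nose[0], ship_nose[1], texture_w, length)
            origin = (texture_w / 2.0, length)                  # bottom-centre of the cropped region
            rotation = math.atan2(dy, dx)                       # adjust by +/- pi/2 for your art's facing
            return source_rect, dest_rect, origin, rotation

    The essential point is that the source rectangle's height, the destination rectangle's height, and the origin's Y all use the same cropped length, so the base stays pinned to the nose while only the far end shortens.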

    Read the article

  • How to account for speed of the vehicle when shooting shells from it?

    - by John Murdoch
    I'm developing a simple 3D ship game using libgdx and bullet. When the user taps the mouse I create a new shell object and send it in the direction of the mouse click. However, if the user has tapped the mouse in the direction the ship is currently moving, the ship catches up to the shells very quickly and can sometimes even get hit by them - simply because the speeds of the shells and the ship are quite comparable. I think I need to account for ship speed when generating the initial impulse for the shells, and I tried doing that (see "new line added"), but I cannot figure out whether what I'm doing is the proper way and, if so, how to calculate the correct coefficient.
        public void createShell(Vector3 origin, Vector3 direction, Vector3 platformVelocity, float velocity) {
            long shellId = System.currentTimeMillis(); // hack
            ShellState state = getState().createShellState(shellId, origin.x, origin.y, origin.z);
            ShellEntity entity = EntityFactory.getInstance().createShellEntity(shellId, state);
            add(entity);
            entity.getBody().applyCentralImpulse(platformVelocity.mul(velocity * 0.02f)); // new line added, to compensate for the moving platform, no idea how to calculate proper coefficient
            entity.getBody().applyCentralImpulse(direction.nor().mul(velocity));
        }

        private final Vector3 v3 = new Vector3();

        public void shootGun(Vector3 direction) {
            Vector3 shipVelocity = world.getShipEntities().get(id).getBody().getLinearVelocity();
            world.getState().getShipStates().get(id).transform.getTranslation(v3); // current location of our ship
            v3.add(direction.nor().mul(10.0f)); // hack; this is to avoid shell immediately impacting the ship that it got shot out from
            world.createShell(v3, direction, shipVelocity, 500);
        }
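
    For illustration, the standard trick is to give the shell the ship's full velocity plus a fixed muzzle speed along the aim direction, rather than scaling the platform velocity by a magic coefficient; when working with impulses, the impulse is simply that launch velocity times the shell's mass. A plain-vector sketch in Python:

        import math

        def shell_launch_velocity(ship_velocity, aim_direction, muzzle_speed):
            """Muzzle velocity along the (normalised) aim direction, plus the ship's own velocity."""
            dx, dy, dz = aim_direction
            length = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
            return (ship_velocity[0] + muzzle_speed * dx / length,
                    ship_velocity[1] + muzzle_speed * dy / length,
                    ship_velocity[2] + muzzle_speed * dz / length)

        # With a physics engine, apply impulse = launch_velocity * shell_mass
        # instead of a hand-tuned 0.02 factor on the platform velocity.

    Because the shell then always moves at muzzle_speed relative to the ship, the ship can never catch up to its own shells no matter which way it is travelling.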

    Read the article

  • Client side latency when using prediction

    - by Tips48
    I've implemented client-side prediction in my game: when input is received by the client, the client first sends it to the server and then acts upon it just as the server will, to reduce the appearance of lag. The problem is that the server is authoritative, so when the server sends back the position of the entity to the client, it undoes the effect of the prediction and creates a rubber-banding effect. For example: client sends input to server - client reacts to input - server receives and reacts to input - server sends back response - client reaction is undone due to the latency between server and client. To solve this, I've decided to store the game state and input every tick on the client, and then, when I receive a packet from the server, get the game state from when the packet was sent and simulate the game up to the current point. My questions: Won't this cause lag? If I'm receiving 20-30 EntityPositionPackets a second, that means I have to run 20-30 simulations of the game state. How do I sync the client and server tick? Currently I'm sending the millisecond the packet was sent by the server, but I think that adds too much complexity compared with just sending the tick. The problem with sending the tick is that I have no guarantee that the client and server are ticking at the same rate, for example if the client is a low-end PC.
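
    For illustration, a hedged sketch of the usual reconciliation loop: number every input, keep the unacknowledged ones, and when a state packet arrives, snap to the server state and re-apply only the inputs the server has not processed yet. That replay is a handful of cheap per-entity steps, not 20-30 whole-game simulations.

        class PredictedEntity:
            def __init__(self):
                self.x = 0.0
                self.pending = []                  # (sequence_number, input) not yet acknowledged
                self.next_seq = 0

            def apply_input(self, inp, dt=1 / 60.0):
                self.x += inp * 100.0 * dt         # the same rule the server uses

            def local_input(self, inp):
                self.pending.append((self.next_seq, inp))
                self.next_seq += 1
                self.apply_input(inp)              # predict immediately
                return self.next_seq - 1           # send this sequence number with the packet

            def on_server_state(self, server_x, last_processed_seq):
                self.x = server_x                  # authoritative snap...
                self.pending = [(s, i) for s, i in self.pending if s > last_processed_seq]
                for _, inp in self.pending:        # ...then re-predict the unacknowledged tail
                    self.apply_input(inp)

    This also sidesteps the clock-sync question: the server simply echoes back the last input sequence number it processed, so neither wall-clock times nor tick rates have to match between client and server.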

    Read the article

  • How does interpolation actually work to smooth out an object's movement?

    - by user22241
    I've asked a few similar questions over the past 8 months or so with no real joy, so I am going to make the question more general. I have an Android game using OpenGL ES 2.0. Within it I have the following game loop, which works on a fixed-time-step principle (dt = 1 / ticksPerSecond):
        loops = 0;
        while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
            updateLogic(dt);
            nextGameTick += skipTicks;
            timeCorrection += (1000d / ticksPerSecond) % 1;
            nextGameTick += timeCorrection;
            timeCorrection %= 1;
            loops++;
        }
        render();
    My integration works like this:
        sprite.posX += sprite.xVel * dt;
        sprite.posXDrawAt = sprite.posX * width;
    Now, everything works pretty much as I would like. I can specify that I would like an object to move across a certain distance (say the screen width) in 2.5 seconds and it will do just that. Also, because of the frame skipping that I allow in my game loop, I can do this on pretty much any device and it will always take 2.5 seconds. The problem, however, is that when a render frame skips, the graphics stutter, which is extremely annoying. If I remove the ability to skip frames, then everything is as smooth as you like, but it runs at different speeds on different devices, so that's not an option. I'm still not sure why the frames skip, but I would like to point out that this is nothing to do with poor performance: I've taken the code right back to one tiny sprite and no logic (apart from the logic required to move the sprite) and I still get skipped frames. And this is on a Google Nexus 10 tablet (and, as mentioned above, I need frame skipping to keep the speed consistent across devices anyway). So the only other option I have is to use interpolation (or extrapolation). I've read every article out there, but none has really helped me understand how it works, and all of my attempted implementations have failed. Using one method I was able to get things moving smoothly, but it was unworkable because it messed up my collision. I can foresee the same issue with any similar method, because the interpolation is passed to (and acted upon within) the rendering method - at render time. So if collision corrects the position (character now standing right next to a wall), the renderer can alter its position and draw it in the wall. People have said that you should never alter an object's position from within the rendering method, but all of the examples online do exactly that. So I'm asking for a push in the right direction; please do not link to the popular game-loop articles (deWitters, Fix Your Timestep, etc.) as I've read them multiple times. I'm not asking anyone to write my code for me. Just explain, in simple terms, how interpolation actually works, with some examples. I will then go and try to integrate the ideas into my code and will ask more specific questions further down the line if need be. (I'm sure this is a problem many people struggle with.)
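
    For illustration, a minimal sketch of how render-time interpolation is usually wired up (Python, framework-neutral): the logic keeps the previous and current fixed-step state, the renderer blends between them with alpha = accumulator / dt, and the blended value is never written back into the simulation, so collision only ever sees the fixed-step positions.

        import time

        class Sprite:
            def __init__(self):
                self.prev_x = self.x = 0.0
                self.vel = 120.0                     # pixels per second

            def update(self, dt):
                self.prev_x = self.x                 # remember where we were last tick
                self.x += self.vel * dt              # fixed-step integration; collision runs on self.x

            def draw_x(self, alpha):
                # Purely visual: a position between the last two ticks, never stored back.
                return self.prev_x + (self.x - self.prev_x) * alpha

        def run(sprite, ticks_per_second=25, frames=200):
            dt = 1.0 / ticks_per_second
            accumulator, previous = 0.0, time.time()
            for _ in range(frames):
                now = time.time()
                accumulator += now - previous
                previous = now
                while accumulator >= dt:
                    sprite.update(dt)                # zero or more logic ticks this frame
                    accumulator -= dt
                render_x = sprite.draw_x(accumulator / dt)   # alpha in [0, 1)
                # draw the sprite at render_x here
                time.sleep(1 / 60.0)                # stand-in for vsync / rendering work

    Because draw_x never feeds back into update, a collision response that snaps the simulated position is honoured on the very next tick; at worst the sprite is drawn up to one tick behind where the simulation has it, never inside the wall.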

    Read the article

  • obj-c classes and sub classes (Cocos2d) conversion

    - by Lewis
    Hi I'm using this version of cocos2d: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers Which supports the UIGestureRecognizer within a CCLayer in a cocos2d scene like so: @interface HelloWorldLayer : CCLayer <UIGestureRecognizerDelegate> { } Now I want to make this custom gesture work within the scene, attaching it to a sprite in cocos2d: #import <Foundation/Foundation.h> #import <UIKit/UIGestureRecognizerSubclass.h> @protocol OneFingerRotationGestureRecognizerDelegate <NSObject> @optional - (void) rotation: (CGFloat) angle; - (void) finalAngle: (CGFloat) angle; @end @interface OneFingerRotationGestureRecognizer : UIGestureRecognizer { CGPoint midPoint; CGFloat innerRadius; CGFloat outerRadius; CGFloat cumulatedAngle; id <OneFingerRotationGestureRecognizerDelegate> target; } - (id) initWithMidPoint: (CGPoint) midPoint innerRadius: (CGFloat) innerRadius outerRadius: (CGFloat) outerRadius target: (id) target; - (void)reset; - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event; @end #include <math.h> #import "OneFingerRotationGestureRecognizer.h" @implementation OneFingerRotationGestureRecognizer // private helper functions CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2); CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB); - (id) initWithMidPoint: (CGPoint) _midPoint innerRadius: (CGFloat) _innerRadius outerRadius: (CGFloat) _outerRadius target: (id <OneFingerRotationGestureRecognizerDelegate>) _target { if ((self = [super initWithTarget: _target action: nil])) { midPoint = _midPoint; innerRadius = _innerRadius; outerRadius = _outerRadius; target = _target; } return self; } /** Calculates the distance between point1 and point 2. 
*/ CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2) { CGFloat dx = point1.x - point2.x; CGFloat dy = point1.y - point2.y; return sqrt(dx*dx + dy*dy); } CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB) { CGFloat a = endLineA.x - beginLineA.x; CGFloat b = endLineA.y - beginLineA.y; CGFloat c = endLineB.x - beginLineB.x; CGFloat d = endLineB.y - beginLineB.y; CGFloat atanA = atan2(a, b); CGFloat atanB = atan2(c, d); // convert radiants to degrees return (atanA - atanB) * 180 / M_PI; } #pragma mark - UIGestureRecognizer implementation - (void)reset { [super reset]; cumulatedAngle = 0; } - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesBegan:touches withEvent:event]; if ([touches count] != 1) { self.state = UIGestureRecognizerStateFailed; return; } } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesMoved:touches withEvent:event]; if (self.state == UIGestureRecognizerStateFailed) return; CGPoint nowPoint = [[touches anyObject] locationInView: self.view]; CGPoint prevPoint = [[touches anyObject] previousLocationInView: self.view]; // make sure the new point is within the area CGFloat distance = distanceBetweenPoints(midPoint, nowPoint); if ( innerRadius <= distance && distance <= outerRadius) { // calculate rotation angle between two points CGFloat angle = angleBetweenLinesInDegrees(midPoint, prevPoint, midPoint, nowPoint); // fix value, if the 12 o'clock position is between prevPoint and nowPoint if (angle > 180) { angle -= 360; } else if (angle < -180) { angle += 360; } // sum up single steps cumulatedAngle += angle; // call delegate if ([target respondsToSelector: @selector(rotation:)]) { [target rotation:angle]; } } else { // finger moved outside the area self.state = UIGestureRecognizerStateFailed; } } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesEnded:touches withEvent:event]; if (self.state == UIGestureRecognizerStatePossible) { self.state = UIGestureRecognizerStateRecognized; if ([target respondsToSelector: @selector(finalAngle:)]) { [target finalAngle:cumulatedAngle]; } } else { self.state = UIGestureRecognizerStateFailed; } cumulatedAngle = 0; } - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesCancelled:touches withEvent:event]; self.state = UIGestureRecognizerStateFailed; cumulatedAngle = 0; } @end Header file for view controller: #import "OneFingerRotationGestureRecognizer.h" @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate> @property (nonatomic, strong) IBOutlet UIImageView *image; @property (nonatomic, strong) IBOutlet UITextField *textDisplay; @end then this is in the .m file: gestureRecognizer = [[OneFingerRotationGestureRecognizer alloc] initWithMidPoint: midPoint innerRadius: outRadius / 3 outerRadius: outRadius target: self]; [self.view addGestureRecognizer: gestureRecognizer]; Now my question is, is it possible to add this custom gesture into the cocos2d project found on that github, and if so, what do I need to change in the OneFingerRotationGestureRecognizerDelegate to get it to work within cocos2d. Because at the minute it is setup in a standard iOS project and not a cocos2d project and I do not know enough about UIViews and classing/ sub classing in obj-c to get this to work. Also it seems to inherit from a UIView where cocos2d uses CCLayer. Kind regards, Lewis. 
    I also realise I may not have included enough code from the custom gesture project for readers to interpret it fully, so the full project can be found here: https://github.com/melle/OneFingerRotationGestureDemo

    Read the article
