Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • How to scroll hex tiles?

    - by Chris Evans
    I can't seem to find an answer to this one. I have a map of hex tiles and I wish to implement scrolling. Code at present:

        drawTilemap = function() {
            actualX = Math.floor(viewportX / hexWidth);
            actualY = Math.floor(viewportY / hexHeight);
            offsetX = -(viewportX - (actualX * hexWidth));
            offsetY = -(viewportY - (actualY * hexHeight));
            for (i = 0; i < 10; i++) {
                for (j = 0; j < 10; j++) {
                    if (i % 2 == 0) {
                        x = (hexOffsetX * i) + offsetX;
                        y = j * sourceHeight;
                    } else {
                        x = (hexOffsetX * i) + offsetX;
                        y = hexOffsetY + (j * sourceHeight);
                    }
                    var tileselected = mapone[actualX + i][j];
                    drawTile(x, y, tileselected);
                }
            }
        }

    The code I've written so far only handles X movement, and it doesn't yet work the way it should. If you look at my example on jsfiddle.net below, you will see that when you move to the right and reach the next hex tile along, there is a problem with the X position calculations. It seems a simple bit of maths is missing, but unfortunately I've been unable to find an example that includes scrolling. http://jsfiddle.net/hd87E/1/ Make sure there is no horizontal scroll bar, then try moving right using the right arrow key on the keyboard. You will see the problem as you reach the end of the first tile. Apologies for the horrid code, I'm learning! Cheers
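    One plausible cause (a sketch only, since only part of the fiddle is reproduced here): adjacent hex columns sit hexOffsetX apart, yet the scroll maths divides by hexWidth, and the row stagger tests the screen column i rather than the world column actualX + i, so the pattern jumps each time actualX changes. A corrected-bookkeeping sketch, names taken from the question:

        // Sketch: hex scroll offsets keyed to the column spacing, not the
        // full hex width, with the stagger based on the world column.
        function scrollOffsets(viewportX: number, hexOffsetX: number) {
            // Index of the first visible column, in column units.
            const firstCol = Math.floor(viewportX / hexOffsetX);
            // Pixel remainder by which all drawn columns shift left.
            const offsetX = -(viewportX % hexOffsetX);
            return { firstCol, offsetX };
        }
        // In the draw loop, stagger by world column parity:
        //   if ((firstCol + i) % 2 == 0) { ... } else { ... }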

    Read the article

  • Java Slick2d - Mouse picking how to take into account camera

    - by Corey
    When I move the camera it obviously changes the viewport, so my mouse picking is off. My camera is just a float x and y, and I use g.translate(-cam.cameraX+400, -cam.cameraY+300); to translate the graphics. The numbers are hard-coded just for testing purposes. How would I take the camera into account so that my mouse picking works correctly?

        double mousetileX = Math.floor((double)mouseX / tiles.tileWidth);
        double mousetileY = Math.floor((double)mouseY / tiles.tileHeight);
        double playertileX = Math.floor(playerX / tiles.tileWidth);
        double playertileY = Math.floor(playerY / tiles.tileHeight);
        double lengthX = Math.abs((float)playertileX - mousetileX);
        double lengthY = Math.abs((float)playertileY - mousetileY);
        double distance = Math.sqrt((lengthX * lengthX) + (lengthY * lengthY));
        if (input.isMousePressed(Input.MOUSE_LEFT_BUTTON) && distance < 4) {
            if (tiles.map[(int)mousetileX][(int)mousetileY] == 1) {
                tiles.map[(int)mousetileX][(int)mousetileY] = 0;
            }
        }

    That is my mouse picking code.
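    Since the scene is drawn with g.translate(-cameraX + 400, -cameraY + 300), screen coordinates are world coordinates minus the camera plus the half-viewport constants; inverting that before the tile maths should line the picking back up. A sketch (400 and 300 are the question's hard-coded values):

        // Sketch: undo the camera translation before tile picking.
        // screen = world - camera + half  =>  world = screen + camera - half
        function screenToWorld(mouseX: number, mouseY: number,
                               cameraX: number, cameraY: number) {
            const worldX = mouseX + cameraX - 400; // 400 = hard-coded half-width
            const worldY = mouseY + cameraY - 300; // 300 = hard-coded half-height
            return { worldX, worldY };
        }
        // Then compute mousetileX = floor(worldX / tileWidth), and so on.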

    Read the article

  • What data should be cached in a multiplayer server, relative to AI and players?

    - by DevilWithin
    In a virtual place, fully network driven, with an arbitrary number of players and an arbitrary number of enemies, what data should be cached in the server memory in order to optimize smooth AI simulation? To explain: let's say player A sees players B to E and enemies A to G. Each of those players sees player A, but not necessarily each other. The same applies to enemies. Please think of this question from a top-down perspective. In many cases, for example, when a player shoots his gun, the server handles the sound as a radial "signal" that every other entity within reach "hears" and reacts upon. Doing these searches all the time for a whole area, containing possibly a lot of unrelated players and enemies, seems to be an issue when the budget for each AI agent is so small. Should every entity cache whatever enters and exits its radius of awareness? Is there a good way to trace nearby entities without flooding the memory with such caches? And what other AI-related problems may arise once the previous one works well? We're talking about environments with possibly hundreds of enemies: a swarm.
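    One common alternative to per-entity caches (a sketch, not the only answer): keep a single spatial hash on the server and let each radial "signal" query only the cells it overlaps, so memory stays proportional to the entity count rather than to every pairwise awareness relation. All names below are invented for illustration:

        // Sketch: server-side spatial hash for radial "who hears this" queries.
        class SpatialHash<T extends { x: number; y: number }> {
            private cells = new Map<string, Set<T>>();
            constructor(private cellSize: number) {}

            private key(x: number, y: number): string {
                return `${Math.floor(x / this.cellSize)},${Math.floor(y / this.cellSize)}`;
            }

            insert(e: T): void {
                const k = this.key(e.x, e.y);
                if (!this.cells.has(k)) this.cells.set(k, new Set());
                this.cells.get(k)!.add(e);
            }

            remove(e: T): void {
                this.cells.get(this.key(e.x, e.y))?.delete(e);
            }

            // Entities within `radius` of (x, y): scan only overlapping cells.
            query(x: number, y: number, radius: number): T[] {
                const out: T[] = [];
                const span = Math.ceil(radius / this.cellSize);
                const cx = Math.floor(x / this.cellSize);
                const cy = Math.floor(y / this.cellSize);
                for (let gx = cx - span; gx <= cx + span; gx++)
                    for (let gy = cy - span; gy <= cy + span; gy++)
                        for (const e of this.cells.get(`${gx},${gy}`) ?? [])
                            if ((e.x - x) ** 2 + (e.y - y) ** 2 <= radius * radius)
                                out.push(e);
                return out;
            }
        }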

    Read the article

  • Control convention for circular movement?

    - by Christian
    I'm currently doing a kind of training project in Unity (still a beginner). It's supposed to be somewhat like Breakout, but instead of just going left and right, I want the paddle to circle around the center point. This is all fine and dandy, but the problem I have is: how do you control this with a keyboard or gamepad? For touch and mouse control I could work around the problem by letting the paddle follow the cursor/finger, but with the other control methods I'm a bit stumped. With a keyboard, for example, I could either make it so that the Left arrow always moves the paddle clockwise (it starts at the bottom of the circle), or I could link it to the actual direction - meaning that if the paddle is at the bottom, it goes left and up along the circle or, if it's in the upper hemisphere, it moves left and down, both times toward the leftmost point of the circle. Both feel kind of weird. With the first one, it can be counterintuitive to press Left to move the paddle right when it's in the upper area, while with the second method you'd need to constantly switch buttons to keep moving. So, long story short: is there any kind of existing standard, convention or accepted example for this type of movement and the corresponding controls? I didn't really know what to google for ("control conventions for circular movement" was one of the searches I tried, but it didn't give me much), and I also didn't really find anything about this on here. If there is a question that I simply didn't see, please excuse the duplicate.
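    Whichever convention is chosen, the mechanics reduce to a single angular velocity, which makes both schemes cheap to prototype and play-test side by side. A minimal sketch, assuming the paddle orbits a fixed center with screen-space y pointing down (all values invented):

        // Sketch: paddle orbiting a fixed center under keyboard control.
        const center = { x: 400, y: 300 };   // arbitrary screen center
        const radius = 200;                  // orbit radius
        const paddle = { x: 0, y: 0 };
        let angle = Math.PI / 2;             // start at the bottom of the circle
        const angularSpeed = 2.5;            // radians per second (tuning value)

        function update(dt: number, leftHeld: boolean, rightHeld: boolean) {
            // Scheme 1: Left always circles one way, regardless of position.
            if (leftHeld)  angle += angularSpeed * dt;
            if (rightHeld) angle -= angularSpeed * dt;
            paddle.x = center.x + radius * Math.cos(angle);
            paddle.y = center.y + radius * Math.sin(angle);
        }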

    Read the article

  • std::vector::size with glDrawElements crashes?

    - by NoobScratcher
    (Win32 / OpenGL 3.3 / GLSL 330) After a long time of trying to build a graphical user interface from raw OpenGL, I decided to go back to a GUI toolkit, and in the process have had to port a lot of my code to Win32. But I have a problem with my glDrawElements call: the program compiles and runs fine until it reaches glDrawElements, then crashes, which is rather annoying. While trying to figure out why, I found that std::vector::size is not returning the correct number of face indices from my unsigned integer vector, e.g. "vector<unsigned int> faces;". When I use cout << faces.size() << endl; I get 68 elements instead of 24, as you can count in this .obj file:

        # Blender v2.61 (sub 0) OBJ File: ''
        # www.blender.org
        v 1.000000 -1.000000 -1.000000
        v 1.000000 -1.000000 1.000000
        v -1.000000 -1.000000 1.000000
        v -1.000000 -1.000000 -1.000000
        v 1.000000 1.000000 -0.999999
        v 0.999999 1.000000 1.000001
        v -1.000000 1.000000 1.000000
        v -1.000000 1.000000 -1.000000
        s off
        f 1 2 3 4
        f 5 8 7 6
        f 1 5 6 2
        f 2 6 7 3   <--- 24 face indices, not 68?
        f 3 7 8 4
        f 5 1 4 8

    I'm using a parser I created to get the faces/vertices from my .obj file:

        char modelbuffer[20000];
        int MAX_BUFF = 20000;
        unsigned int face[3];
        FILE* pfile;
        pfile = fopen(szFileName, "rw");
        while (fgets(modelbuffer, MAX_BUFF, pfile) != NULL)
        {
            if ('v')
            {
                Point p;
                sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                points.push_back(p);
                cout << " p.x = " << p.x << " p.y = " << p.y << " p.z = " << p.x << endl;
            }
            if ('f')
            {
                sscanf(modelbuffer, "f %d %d %d %d", &face[0], &face[1], &face[2], &face[3]);
                cout << "face[0] = " << face[0] << " face[1] = " << face[1]
                     << " face[2] = " << face[2] << " face[3] = " << face[3] << "\n";
                faces.push_back(face[0] - 1);
                faces.push_back(face[1] - 1);
                faces.push_back(face[2] - 1);
                faces.push_back(face[3] - 1);
                cout << face[0] - 1 << face[1] - 1 << face[2] - 1 << face[3] - 1 << endl;
            }
        }

    I use this struct to store the x, y, z positions, together with a vector of Point:

        struct Point { float x, y, z; };
        vector<Point> points;

    If someone could tell me why it's not working and how to fix it, that would be awesome. I also provide a pastebin of the full source code if you want a closer look: http://pastebin.com/gznYLVw7
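    A note on the likely bug: if ('v') and if ('f') test constant non-zero characters, so both conditions are always true and the face branch runs for every one of the file's 17 lines; 17 lines x 4 pushes is exactly the 68 elements observed. (Separately, face is declared unsigned int face[3], so sscanf writing &face[3] stores one element past the end.) The fix is to test the line's actual first characters; a sketch of the dispatch, transliterated into TypeScript for brevity:

        // Sketch: dispatch on the real line prefix instead of if('v')/if('f').
        function parseObjLine(line: string, points: number[][], faces: number[]) {
            if (line.startsWith("v ")) {                 // vertex line
                const [, x, y, z] = line.trim().split(/\s+/);
                points.push([+x, +y, +z]);
            } else if (line.startsWith("f ")) {          // quad face line
                const idx = line.trim().split(/\s+/).slice(1).map(Number);
                for (const i of idx) faces.push(i - 1);  // .obj indices are 1-based
            }
        }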

    Read the article

  • Change collision action

    - by PatrickR
    I have collision detection and it's working fine. The problem is that whenever my "bird" hits a "cloud", the cloud disappears and I get some points. The same happens for the "sol" (sun), which it should, but it shouldn't happen with the clouds. How can this be changed? I've tried a lot, but can't seem to figure it out. Collision code:

        - (void)update:(ccTime)dt
        {
            bird.position = ccpAdd(bird.position, skyVelocity);
            NSMutableArray *projectilesToDelete = [[NSMutableArray alloc] init];
            for (CCSprite *bird in _projectiles) {
                bird.anchorPoint = ccp(0, 0);
                CGRect absoluteBox = CGRectMake(bird.position.x, bird.position.y,
                                                [bird boundingBox].size.width,
                                                [bird boundingBox].size.height);
                NSMutableArray *targetsToDelete = [[NSMutableArray alloc] init];
                for (CCSprite *cloudSprite in _targets) {
                    cloudSprite.anchorPoint = ccp(0, 0);
                    CGRect absoluteBox = CGRectMake(cloudSprite.position.x, cloudSprite.position.y,
                                                    [cloudSprite boundingBox].size.width,
                                                    [cloudSprite boundingBox].size.height);
                    if (CGRectIntersectsRect([bird boundingBox], [cloudSprite boundingBox])) {
                        [targetsToDelete addObject:cloudSprite];
                    }
                }
                for (CCSprite *solSprite in _targets) {
                    solSprite.anchorPoint = ccp(0, 0);
                    CGRect absoluteBox = CGRectMake(solSprite.position.x, solSprite.position.y,
                                                    [solSprite boundingBox].size.width,
                                                    [solSprite boundingBox].size.height);
                    if (CGRectIntersectsRect([bird boundingBox], [solSprite boundingBox])) {
                        [targetsToDelete addObject:solSprite];
                        score += 50 / 2;
                        [scoreLabel setString:[NSString stringWithFormat:@"%d", score]];
                    }
                }
                // WHEN THE CLOUD IS HIT BY THE BIRD
                for (CCSprite *cloudSprite in targetsToDelete) {
                    //[_targets removeObject:cloudSprite];
                    //[self removeChild:cloudSprite cleanup:YES];
                }
                // WHEN THE SUN IS HIT BY THE BIRD
                for (CCSprite *solSprite in targetsToDelete) {
                    [_targets removeObject:solSprite];
                    [self removeChild:solSprite cleanup:YES];
                }
                if (targetsToDelete.count > 0) {
                    [projectilesToDelete addObject:bird];
                }
                [targetsToDelete release];
            }
            // WHEN THE BIRD IS HIT BY ANYTHING ELSE
            for (CCSprite *bird in projectilesToDelete) {
                //[_projectiles removeObject:bird];
                //[self removeChild:bird cleanup:YES];
            }
            [projectilesToDelete release];
        }
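    The root of the problem is visible in the code: clouds and suns both live in _targets, and both intersection loops push into the same targetsToDelete array, so the "sun" removal loop removes the clouds too (and every sun also passes through the cloud loop). Keeping the two kinds apart fixes it; a sketch of the shape in TypeScript pseudocode, with assumed helpers intersects and addScore and invented score values:

        // Sketch: separate the sprite kinds so only suns are removed on hit.
        interface Sprite { kind: "cloud" | "sun"; /* position, bounds, ... */ }
        declare function intersects(a: Sprite, b: Sprite): boolean; // assumed helper
        declare function addScore(points: number): void;            // assumed helper

        function handleCollisions(bird: Sprite, targets: Sprite[]): Sprite[] {
            const toRemove: Sprite[] = [];
            for (const t of targets) {
                if (!intersects(bird, t)) continue;
                if (t.kind === "sun") {
                    addScore(25);         // suns score and disappear
                    toRemove.push(t);
                } else {
                    addScore(10);         // clouds score but stay on screen
                }
            }
            return toRemove;              // the caller removes only these sprites
        }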

    Read the article

  • How to handle animations?

    - by Bane
    I am coding a simple 2D engine to be used with HTML5. I already have classes such as Picture, Scene, Camera and Renderer, but now I need to work on animations. Picture is basically a wrapper for a normal image object with its own draw method, but that's unrelated; I'm interested in how animation in 2D games is usually done. What I planned to do is have the Animation class also act as a wrapper for a few image objects, and then have methods such as getCurrentImage, next and animate (which would use intervals to quickly change the current image). I meant to feed the animation a couple of PNGs at initialisation. Is quickly swapping PNG images acceptable for 2D animation? Are there standard ways of doing this, or are there flaws in my approach?
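    Swapping pre-loaded frames is indeed the standard approach for 2D sprite animation (often with all frames packed into one sprite sheet to save image decodes). The usual refinement is to advance frames from the game loop's delta time rather than with a separate interval timer, so animation stays in step with rendering. A minimal sketch, names invented:

        // Sketch: frame-swapping animation driven by the render loop's dt.
        class Animation {
            private elapsed = 0;
            private frame = 0;
            constructor(private frames: HTMLImageElement[],
                        private frameDuration: number) {}  // seconds per frame

            update(dt: number): void {
                this.elapsed += dt;
                while (this.elapsed >= this.frameDuration) {
                    this.elapsed -= this.frameDuration;
                    this.frame = (this.frame + 1) % this.frames.length;
                }
            }

            getCurrentImage(): HTMLImageElement {
                return this.frames[this.frame];
            }
        }
        // Render loop: anim.update(dt); ctx.drawImage(anim.getCurrentImage(), x, y);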

    Read the article

  • Pattern for performing game actions

    - by Arkiliknam
    Is there a generally accepted pattern for performing various actions within a game - a way a player can perform actions, and also that an AI might perform actions, such as move, attack, self-destruct, etc.? I currently have an abstract BaseAction which uses .NET generics to specify the different objects that get returned by the various actions. This is all implemented in a pattern similar to Command, where each action is responsible for itself and does all that it needs. My reasoning for the abstract base is that I may have a single ActionHandler, and the AI can just queue up different actions implementing BaseAction. The reason it is generic is that the different actions can return result information relevant to the action (as different actions can have totally different outcomes in the game), along with some common beforeAction and afterAction implementations. So... is there a more accepted way of doing this, or does this sound alright?
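    For comparison, the shape described is the classic Command pattern with a template method wrapped around it; a compact sketch of that structure (invented names, TypeScript generics standing in for the .NET ones):

        // Sketch: Command-style game actions with typed results and a queue.
        interface ActionResult { succeeded: boolean; }

        abstract class BaseAction<R extends ActionResult> {
            protected beforeAction(): void { /* shared setup, e.g. validation */ }
            protected afterAction(result: R): void { /* shared bookkeeping */ }
            protected abstract perform(): R;

            execute(): R {
                this.beforeAction();
                const result = this.perform();
                this.afterAction(result);
                return result;
            }
        }

        class ActionHandler {
            private queue: BaseAction<ActionResult>[] = [];
            enqueue(action: BaseAction<ActionResult>): void { this.queue.push(action); }
            processAll(): void {
                while (this.queue.length > 0) this.queue.shift()!.execute();
            }
        }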

    Read the article

  • Checking for alternate keys with XNA IsKeyDown

    - by jocull
    I'm working on picking up XNA, and this was a confusing point for me.

        KeyboardState keyState = Keyboard.GetState();
        if (keyState.IsKeyDown(Keys.Left) || keyState.IsKeyDown(Keys.A))
        {
            // Do stuff...
        }

    The book I'm using (Learning XNA 4.0, O'Reilly) says that this method accepts a bitwise OR series of keys, which I think should look like this:

        KeyboardState keyState = Keyboard.GetState();
        if (keyState.IsKeyDown(Keys.Left | Keys.A))
        {
            // Do stuff...
        }

    But I can't get it to work. I also tried using !IsKeyUp(... | ...), since it said that all keys had to be down for it to be true, but had no luck with that either. Ideas? Thanks.
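    The first form (two IsKeyDown calls joined with ||) is the reliable one: XNA's Keys is a plain enum rather than a [Flags] bitmask, so OR-ing two members just yields a third, unrelated value. A sketch of the arithmetic, with the enum values assumed (they mirror Windows virtual-key codes):

        // Sketch: why bitwise OR of non-flags enum members misfires.
        const KeysLeft = 37; // assumed value of Keys.Left (VK_LEFT)
        const KeysA    = 65; // assumed value of Keys.A ('A')
        const combined = KeysLeft | KeysA;   // 0b0100101 | 0b1000001 = 0b1100101
        console.log(combined);               // 101: some entirely different key
        console.log(combined === KeysLeft || combined === KeysA); // false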

    Read the article

  • Would like some help in understanding rendering geometry vs textures

    - by Anon
    So I was just pondering whether it is more taxing on the GPU to render geometry or a texture. What I'm trying to see is whether there is a huge difference in rendering two scenes with the same setup:

    Scene 1:
        Example object: a dirt road (nothing else)
        Geometry: a detailed road, with all the bumps, cracks and so forth done in the mesh

    Scene 2:
        Example object: a dirt road (nothing else)
        Geometry: a simple mesh in the form of a road, where maps and textures simulate the cracks, bumps, etc.

    So of these two, which one is likely to tax the hardware more? Or is it not a like-for-like comparison? What would be the best way of doing something like this: go heavy on the textures, or use a blend of both?

    Read the article

  • How do games make money? What models do they use?

    - by cable729
    I'm trying to research the ways in which games make money. I want to know more about the models they use (free/premium, trial/subscription, free-to-play with micro-transactions, etc.). In addition, I want information on which models work for which games, what models are best for which age groups, etc. I've tried my best to find information, and Google hasn't turned anything up at all. I think I'll stop by my University's library and see if there's anything there. This may seem like a broad question, but I'm looking for links and titles of books, not typed-out answers.

    Read the article

  • Retrieving snapshots of game statistics

    - by SatheeshJM
    What is a good architecture for storing game statistics so that I can retrieve snapshots of them at various moments? Say I have a game, and the user's statistics initially are:

        { hours_played = 0, games_played = 0, no_of_times_killed = 0 }

    When the user purchases something for the first time from within the game, the stats are:

        { hours_played = 10, games_played = 2, no_of_times_killed = 5 }

    And when he purchases something for the second time, the stats are:

        { hours_played = 20, games_played = 4, no_of_times_killed = 10 }

    Let me name these events "purchase1" and "purchase2". How do I model my statistics so that at any point in the future I will be able to retrieve the snapshot of the statistics at the time "purchase1" was fired? Similarly for "purchase2" and any other event I add. Hope that made sense.
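    One architecture that fits this exactly is event sourcing: persist each change as a timestamped event, and a snapshot at any named moment is just a replay of the log up to that event. A sketch with invented names:

        // Sketch: event-sourced stats; a snapshot is a fold over the log.
        interface StatsDelta { hoursPlayed?: number; gamesPlayed?: number; timesKilled?: number; }
        interface StatsEvent { name: string; at: number; delta: StatsDelta; }

        const log: StatsEvent[] = [];

        function record(name: string, delta: StatsDelta): void {
            log.push({ name, at: Date.now(), delta });
        }

        // Snapshot as of the event with the given name, e.g. "purchase1".
        function snapshotAt(eventName: string) {
            const stats = { hoursPlayed: 0, gamesPlayed: 0, timesKilled: 0 };
            for (const e of log) {
                stats.hoursPlayed += e.delta.hoursPlayed ?? 0;
                stats.gamesPlayed += e.delta.gamesPlayed ?? 0;
                stats.timesKilled += e.delta.timesKilled ?? 0;
                if (e.name === eventName) break; // stop replaying at the named event
            }
            return stats;
        }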

    Read the article

  • Splitting a texture atlas into separate images

    - by bigtunacan
    I'm doing a port of an existing game, and the designer no longer has all of the original art; he only has the resulting texture atlases he used when developing for iPad. The tool I'm using won't support these files, so I need to break them back out into separate PNG files. I'm hoping someone knows of a software tool that does this. PC software would be preferred in this case, but Mac would suffice.

    Read the article

  • Making a Camera look at a target Vector

    - by Peteyslatts
    I have a camera that works as long as it's stationary. Now I'm trying to create a child class of that camera class that will look at its target. The new addition to the class is a method called SetTarget(), which takes in a Vector3 target. The camera won't move, but I need it to rotate to look at the target. If I just set the target and then call CreateLookAt() (which takes in position, target, and up), then when the object gets far enough away and underneath the camera, it suddenly flips right side up. So I need to transform the up vector, which currently always stays at Vector3.Up. I feel like this has something to do with taking the angle between the old direction vector and the new one (which I know can be expressed by target - position). I feel like this is all really vague, so here's the code for my base camera class:

        public class BasicCamera : Microsoft.Xna.Framework.GameComponent
        {
            public Matrix view { get; protected set; }
            public Matrix projection { get; protected set; }
            public Vector3 position { get; protected set; }
            public Vector3 direction { get; protected set; }
            public Vector3 up { get; protected set; }
            public Vector3 side
            {
                get { return Vector3.Cross(up, direction); }
                protected set { }
            }

            public BasicCamera(Game game, Vector3 position, Vector3 target, Vector3 up)
                : base(game)
            {
                this.position = position;
                this.direction = target - position;
                this.up = up;
                CreateLookAt();
                projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.PiOver4,
                    (float)Game.Window.ClientBounds.Width / (float)Game.Window.ClientBounds.Height,
                    1, 500);
            }

            public override void Update(GameTime gameTime)
            {
                // TODO: Add your update code here
                CreateLookAt();
                base.Update(gameTime);
            }
        }

    And this is the code for the class that extends the above class to look at its target:

        class TargetedCamera : BasicCamera
        {
            public Vector3 target { get; protected set; }

            public TargetedCamera(Game game, Vector3 position, Vector3 target, Vector3 up)
                : base(game, position, target, up)
            {
                this.target = target;
            }

            public void SetTarget(Vector3 target)
            {
                direction = target - position;
            }

            protected override void CreateLookAt()
            {
                view = Matrix.CreateLookAt(position, target, up);
            }
        }
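    The flip happens because the fixed Vector3.Up becomes (nearly) parallel to the view direction as the target passes underneath. One common fix is to rebuild an orthonormal basis from the new direction whenever the target moves, and feed the derived up vector into CreateLookAt. A sketch of the vector maths in TypeScript (helpers invented; the basis degenerates when the direction is exactly vertical, which callers should guard against):

        // Sketch: derive a fresh up vector from the view direction.
        type Vec3 = { x: number; y: number; z: number };
        const sub = (a: Vec3, b: Vec3): Vec3 =>
            ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
        const cross = (a: Vec3, b: Vec3): Vec3 => ({
            x: a.y * b.z - a.z * b.y,
            y: a.z * b.x - a.x * b.z,
            z: a.x * b.y - a.y * b.x,
        });
        const normalize = (v: Vec3): Vec3 => {
            const len = Math.hypot(v.x, v.y, v.z);
            return { x: v.x / len, y: v.y / len, z: v.z / len };
        };

        const WORLD_UP: Vec3 = { x: 0, y: 1, z: 0 };

        function lookAtBasis(position: Vec3, target: Vec3) {
            const direction = normalize(sub(target, position));
            // side is perpendicular to world-up and the direction...
            const side = normalize(cross(WORLD_UP, direction));
            // ...and the camera's true up is perpendicular to both.
            const up = cross(direction, side);
            return { direction, side, up }; // pass `up` into CreateLookAt
        }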

    Read the article

  • How can I use WebGL to create a tile-based multi-layer scrolling platform game?

    - by Nicholas Hill
    I've found WebGL (based on OpenGL) to be a fiendish and unforgiving framework for those learning to write HTML5-based games. Despite the many examples on how to get started, I'm really struggling to understand how I could simply load a bunch of images and render them to a canvas quickly using WebGL. My specific scenario involves rendering a map using a bespoke but simple multi-layered tile engine, where each value in a three-dimensional array points to the image to use for that location in the rendered image. Think "Sonic the Hedgehog" via tilesets, tiles, maps, layers, sprites, etc. Can anyone enlighten me:

    1) How can I load an image that I can use as a texture in WebGL?
    2) How can I dynamically select an image at run time and draw it at any co-ordinate that I also select at run time?
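    For (1), the standard recipe is to create a texture, upload a one-pixel placeholder, and replace it with the image once its asynchronous load completes; for (2), drawing at a runtime coordinate then reduces to setting a position uniform (or rewriting a small quad's vertex buffer) before the draw call. A sketch of the loading half:

        // Sketch: load an image as a WebGL texture (gl is the rendering context).
        function loadTexture(gl: WebGLRenderingContext, url: string): WebGLTexture {
            const texture = gl.createTexture()!;
            gl.bindTexture(gl.TEXTURE_2D, texture);
            // 1x1 placeholder pixel until the image arrives (loading is async).
            gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA,
                          gl.UNSIGNED_BYTE, new Uint8Array([0, 0, 0, 255]));
            const image = new Image();
            image.onload = () => {
                gl.bindTexture(gl.TEXTURE_2D, texture);
                gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
                // NEAREST + CLAMP suit pixel-art tiles and non-power-of-two sheets.
                gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
                gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
                gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
                gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
            };
            image.src = url;
            return texture;
        }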

    Read the article

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect the albedo). I have this projector view and projection matrix:

        Matrix projection = Matrix.CreateOrthographicOffCenter(-halfWidth * Scale, halfWidth * Scale,
                                                               -halfHeight * Scale, halfHeight * Scale,
                                                               1, 100000);
        Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up);

    where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position, and Target is the projector's target. This seems to be OK. I am drawing a full-screen quad with this shader:

        float4x4 InvViewProjection;

        texture2D DepthTexture;
        texture2D NormalTexture;
        texture2D ProjectorTexture;

        float4x4 ProjectorViewProjection;

        sampler2D depthSampler = sampler_state
        {
            texture = <DepthTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D normalSampler = sampler_state
        {
            texture = <NormalTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D projectorSampler = sampler_state
        {
            texture = <ProjectorTexture>;
            AddressU = Clamp;
            AddressV = Clamp;
        };

        float viewportWidth;
        float viewportHeight;

        // Calculate the 2D screen position of a 3D position
        float2 postProjToScreen(float4 position)
        {
            float2 screenPos = position.xy / position.w;
            return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
        }

        // Calculate the size of one half of a pixel, to convert
        // between texels and pixels
        float2 halfPixel()
        {
            return 0.5f / float2(viewportWidth, viewportHeight);
        }

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float4 PositionCopy : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            output.Position = input.Position;
            output.PositionCopy = output.Position;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();

            // Extract the depth for this pixel from the depth map
            float4 depth = tex2D(depthSampler, texCoord);
            //return float4(depth.r, 0, 0, 1);

            // Recreate the position with the UV coordinates and depth value
            float4 position;
            position.x = texCoord.x * 2 - 1;
            position.y = (1 - texCoord.y) * 2 - 1;
            position.z = depth.r;
            position.w = 1.0f;

            // Transform position from screen space to world space
            position = mul(position, InvViewProjection);
            position.xyz /= position.w;

            // Compute the projection
            float3 projection = tex2D(projectorSampler,
                postProjToScreen(mul(position, ProjectorViewProjection)) + halfPixel());
            return float4(projection, 1);
        }

    In the first part of the pixel shader, the position is recovered from the G-buffer (I use this code in other shaders without any problem) and then transformed into the projector's view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation: the green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for advice, and sorry for my English. EDIT: The shader above works, but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.

    Read the article

  • How to change LOD in geometry?

    - by ChaosDev
    I'm looking for a simple LOD algorithm that changes the geometry's vertices to decrease frame time. I've created an octree, but now I want a vertex-modification algorithm for models or terrain - not to increase detail (I'll look at tessellation later) but to decrease it. I want something like this. Questions:

    - Can the same algorithm be applied correctly to both models and terrain?
    - Do the indices need to be modified?
    - Must I use the octree, or is a simple distance check between the camera and the object enough for the desired effect?
    - Is a new indexcount value needed for the DrawIndexed function?

    Code:

        // m_LOD == 10 in the beginning
        // m_RawVerts - array of 3D vectors filled with values from the vertex buffer
        void DecreaseLOD()
        {
            m_LOD--;
            if (m_LOD < 1) m_LOD = 1;
            RebuildGeometry();
        }

        void IncreaseLOD()
        {
            m_LOD++;
            if (m_LOD > 10) m_LOD = 10;
            RebuildGeometry();
        }

        void RebuildGeometry()
        {
            void* vertexRawData = new byte[m_VertexBufferSize];
            void* indexRawData = new DWORD[m_IndexCount];
            auto context = mp_D3D->mp_Context;

            D3D11_MAPPED_SUBRESOURCE data;
            ZeroMemory(&data, sizeof(D3D11_MAPPED_SUBRESOURCE));

            context->Map(mp_VertexBuffer->mp_buffer, 0, D3D11_MAP_READ, 0, &data);
            memcpy(vertexRawData, data.pData, m_VertexBufferSize);
            context->Unmap(mp_VertexBuffer->mp_buffer, 0);

            context->Map(mp_IndexBuffer->mp_buffer, 0, D3D11_MAP_READ, 0, &data);
            memcpy(indexRawData, data.pData, m_IndexBufferSize);
            context->Unmap(mp_IndexBuffer->mp_buffer, 0);

            DWORD* dwI = (DWORD*)indexRawData;
            int sz = (m_VertexStride / sizeof(float)); // size of vertex element

            // algorithm must be here
            std::vector<Vector3d> vertices;
            int i = 0;
            for (int j = 0; j < m_VertexCount; j++)
            {
                float x1 = (((float*)vertexRawData)[0 + i]);
                float y1 = (((float*)vertexRawData)[1 + i]);
                float z1 = (((float*)vertexRawData)[2 + i]);
                Vector3d lv = Vector3d(x1, y1, z1);

                // my useless attempts
                if (j + m_LOD + 1 < m_RawVerts.size())
                {
                    float v1 = VECTORHELPER::Distance(m_RawVerts[dwI[j]], m_RawVerts[dwI[j + m_LOD]]);
                    float v2 = VECTORHELPER::Distance(m_RawVerts[dwI[j]], m_RawVerts[dwI[j + m_LOD + 1]]);
                    if (v1 > v2)
                        lv = m_RawVerts[dwI[j + 1]];
                    else if (v2 < v1)
                        lv = m_RawVerts[dwI[j + 2]];
                }

                (((float*)vertexRawData)[0 + i]) = lv.x;
                (((float*)vertexRawData)[1 + i]) = lv.y;
                (((float*)vertexRawData)[2 + i]) = lv.z;
                i += sz; // pass the other vertex format values through unchanged
            }

            for (int j = 0; j < m_IndexCount; j++)
            {
                // indices?
            }

            // send the vertices to the device
            UpdateVertexes(vertexRawData, mp_VertexBuffer->getSize());
            delete[] vertexRawData;
            delete[] indexRawData;
        }
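    A simpler route than rewriting vertices in place (a sketch of the idea, names invented): precompute one index buffer per LOD level offline (e.g. by edge-collapse simplification), keep the vertex buffer untouched, and at draw time pick a level from the camera-to-object distance. That also answers two of the questions above: only the indices change between levels, and yes, each level carries its own index count to pass to DrawIndexed.

        // Sketch: per-level index buffers selected by camera distance.
        interface LodLevel {
            indexBuffer: unknown;   // stands in for an ID3D11Buffer
            indexCount: number;     // value to pass to DrawIndexed
            maxDistance: number;    // use this level up to this distance
        }

        function pickLod(levels: LodLevel[], cameraDistance: number): LodLevel {
            // `levels` is sorted from most detailed (smallest maxDistance) up.
            for (const level of levels) {
                if (cameraDistance <= level.maxDistance) return level;
            }
            return levels[levels.length - 1]; // beyond every threshold: coarsest
        }
        // Draw: bind picked.indexBuffer, then DrawIndexed(picked.indexCount, 0, 0).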

    Read the article

  • Pokemon Yellow warp transitions

    - by Alex Koukoulas
    So I've been trying to make a reasonably accurate clone of the good old Pokemon Yellow for quite some time now, and one subtle mechanic has puzzled me. As you can see in the uploaded image, a certain colour manipulation is done in two stages after entering a warp to another game location (such as stairs or entering a building). One easy (and sloppy) way of achieving this, and the one I have been using so far, is to make three copies of each image rendered on the screen, each with its colours adjusted to match one stage of the transition. Of course, after a while this becomes tremendously time consuming. So my question is: does anyone know a better way of achieving this colour manipulation effect in Java? Thanks in advance, Alex
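    Since the original games draw everything from a four-shade palette, the effect can be reproduced without extra image copies by remapping the palette at render time: each transition stage shifts every shade one step toward black (or toward white when fading back in). A sketch of the per-pixel remap, written here in TypeScript against canvas ImageData for brevity; the same loop translates directly to Java's BufferedImage (the shade values are assumed):

        // Sketch: darken a 4-shade frame by shifting shades toward black.
        // stage 0 = normal, 1 = darker, 2 = darkest.
        const SHADES = [255, 170, 85, 0]; // the four grey levels, lightest first

        function applyStage(img: ImageData, stage: number): void {
            const d = img.data;
            for (let i = 0; i < d.length; i += 4) {
                // Identify which of the four shades this pixel uses...
                const idx = SHADES.indexOf(d[i]);
                if (idx === -1) continue;
                // ...and shift it `stage` steps darker, clamped at black.
                const v = SHADES[Math.min(idx + stage, SHADES.length - 1)];
                d[i] = d[i + 1] = d[i + 2] = v;
            }
        }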

    Read the article

  • Maya Animated Character export for XNA 4.0 problem

    - by FahidK
    To begin with, I'm trying to export an animated character in .fbx format from Maya 2013 to XNA 4.0. In Maya, the model has a basic rig, and the animations are in clips made in the Trax editor. The issue I'm having is that after selecting the model and the root joint and then exporting in .fbx format, for some reason when I open the exported .fbx file, the joint system is detached from the model and there is no animation. By the way, I have the animations in clips so that they can be called in code, for example "run", "walk", "attack". So, what can I do to solve this problem? Thank you.

    Read the article

  • Too much delay while sending object over UDP to server

    - by RomZes
    I'm getting a 4-second delay when sending objects over UDP. I'm working on a small game and trying to implement multiplayer; for now I'm just trying to synchronize the movements of two balls on the screen. StartingPoint.java is my server (the first player), which receives serialized objects (coordinates). SecondPlayer.java is the client that sends serialized objects to the server. When I move my first object, it appears 4 seconds later on the other screen.

    StartingPoint.java:

        @Override
        public void run() {
            byte[] receiveData = new byte[256];
            byte[] sendData = new byte[256];
            try {
                socket = new DatagramSocket(5000);
                System.out.println("Socket created on " + port + " port");
            } catch (SocketException e1) {
                e1.printStackTrace();
            }
            while (true) {
                b1.update(this);
                b3.update();
                System.out.println("Starting server...");

                // Receiving and deserializing the object
                try {
                    //socket.setSoTimeout(1000);
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    byte[] data = packet.getData();
                    ByteArrayInputStream in = new ByteArrayInputStream(data);
                    ObjectInputStream is = new ObjectInputStream(in);
                    //socket.setSoTimeout(300);
                    b1 = (Ball) is.readObject();
                } catch (IOException | ClassNotFoundException e) {
                    e.printStackTrace();
                }
                repaint();
                try {
                    Thread.sleep(17);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }

    SecondPlayer.java:

        @Override
        public void run() {
            while (true) {
                b.update();
                networkSend();
                repaint();
                try {
                    Thread.sleep(17);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }

        public void networkSend() {
            // Serialize to a byte array
            try {
                ByteArrayOutputStream bStream = new ByteArrayOutputStream();
                ObjectOutputStream oo;
                oo = new ObjectOutputStream(bStream);
                oo.writeObject(b);
                oo.flush();
                oo.close();
                byte[] bufCar = bStream.toByteArray();
                //socket = new DatagramSocket();
                //socket.setSoTimeout(1000);
                InetAddress address = InetAddress.getByName("localhost");
                DatagramPacket packet = new DatagramPacket(bufCar, bufCar.length, address, port);
                socket.send(packet);
            } catch (IOException e) {
                e.printStackTrace();
            }
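    A likely cause (hedged, since the full loop isn't shown): the server consumes exactly one datagram per ~17 ms frame, while the client produces one per frame, so unread packets pile up in the OS socket buffer and the displayed state falls steadily further behind; Java object serialization also makes each packet far bigger than two coordinates need. The usual shape of the fix is to drain everything pending and keep only the newest state each tick, sketched here in TypeScript with Node's dgram module (applyState is an assumed game-side helper):

        // Sketch: keep only the newest datagram per tick instead of one per frame.
        import { createSocket } from "node:dgram";

        declare function applyState(buf: Buffer): void; // assumed game-side helper

        let latestState: Buffer | null = null;
        const socket = createSocket("udp4");
        socket.on("message", (msg) => {
            latestState = msg;            // overwrite: older positions are stale anyway
        });
        socket.bind(5000);

        setInterval(() => {               // the 17 ms game tick
            if (latestState) {
                applyState(latestState);  // render only the freshest state
                latestState = null;
            }
        }, 17);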

    Read the article

  • Which game logic should run when doing prediction for PNP state updates

    - by spaceOwl
    We are writing a multiplayer game where each game client (player) is responsible for sending state updates regarding its "owned" objects to the other players. Each message that arrives at the other (remote) clients is processed as follows:

    1. Figure out when the message was sent.
    2. Compute the difference between now and that time.
    3. Run game-specific logic to bring the received state up to the current time.

    I am wondering which sort of logic should execute as part of step 3. Our game is composed of a physical update (position, speed, acceleration, etc.) and many other components that can update an object's state and occur regularly (locally). There's a trade-off here between getting the new state quickly and remaining "faithful" to the true state representation by executing the whole thing to predict the "true" state when receiving state updates from remote clients. Which one is recommended, and why?
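    The common compromise is to run only the cheap, deterministic part in step 3 - the physics integration, i.e. dead reckoning - and let later updates from the owning client correct any drift; replaying the full game logic per message is both expensive and prone to divergence. A sketch of the extrapolation, names invented:

        // Sketch: dead-reckon a remote entity's state forward by dt seconds.
        interface Kinematics { pos: number[]; vel: number[]; acc: number[]; }

        function extrapolate(s: Kinematics, dt: number): Kinematics {
            const pos = s.pos.map((p, i) => p + s.vel[i] * dt + 0.5 * s.acc[i] * dt * dt);
            const vel = s.vel.map((v, i) => v + s.acc[i] * dt);
            return { pos, vel, acc: s.acc };
        }
        // On receipt: entity.state = extrapolate(received, now - sentAt).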

    Read the article

  • OpenGL VertexBuffer won't render in GLFW3

    - by sm81095
    So I have started trying to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is that since GLFW3 is so new, there are no tutorials yet on using it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses old-style OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f() and such), I can get a triangle to render to the screen. With the new OpenGL, I can't get the same triangle to render. I am new to OpenGL, and GLFW3 is new to most people, so I may be missing something obvious, but here is my code:

        static const GLuint g_vertex_buffer_data[] = {
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
             0.0f,  1.0f, 0.0f
        };

        int main(void)
        {
            GLFWwindow* window;

            if (!glfwInit()) {
                fprintf(stderr, "Failed to initialize GLFW.");
                return -1;
            }

            glfwWindowHint(GLFW_SAMPLES, 4);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
            glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
            glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

            window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL);
            if (!window) {
                glfwTerminate();
                fprintf(stderr, "Failed to create a GLFW window");
                return -1;
            }
            glfwMakeContextCurrent(window);

            glewExperimental = GL_TRUE;
            GLenum err = glewInit();
            if (err != GLEW_OK) {
                glfwTerminate();
                fprintf(stderr, "Failed to initialize GLEW");
                fprintf(stderr, (char*)glewGetErrorString(err));
                return -1;
            }

            GLuint VertexArrayID;
            glGenVertexArrays(1, &VertexArrayID);
            glBindVertexArray(VertexArrayID);

            GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl");

            GLuint vertexBuffer;
            glGenBuffers(1, &vertexBuffer);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
            glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);

            while (!glfwWindowShouldClose(window)) {
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

                glUseProgram(programID);
                glEnableVertexAttribArray(0);
                glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
                glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
                glDrawArrays(GL_TRIANGLES, 0, 3);
                glDisableVertexAttribArray(0);

                glfwSwapBuffers(window);
                glfwPollEvents();
            }

            glDeleteBuffers(1, &vertexBuffer);
            glDeleteProgram(programID);
            glfwDestroyWindow(window);
            glfwTerminate();
            exit(EXIT_SUCCESS);
        }

    I know it is not my shaders; they are super simple, and I've checked them against GLFW 2.7, so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.

    Read the article

  • cocos2d event handler not fired when reentering scene

    - by Adam Freund
    I am encountering a very strange problem with my cocos2d app. I add a sprite to the page with an event handler linked to it which replaces the scene with another scene. On that page I have another button to take me back to the original scene. When I am back on the original scene, the event handler doesn't get fired when I click on the sprite. Below is the relevant code. Thanks for any help!

        CCMenuItemImage *backBtnImg = [CCMenuItemImage itemWithNormalImage:@"btn_back.png"
                                                             selectedImage:@"btn_back_pressed.png"
                                                                    target:self
                                                                  selector:@selector(backButtonTapped:)];
        backBtnImg.position = ccp(45, 286);
        CCMenu *backBtn = [CCMenu menuWithItems:backBtnImg, nil];
        backBtn.position = CGPointZero;
        [self addChild:backBtn];

    The event handler method (which doesn't get called when the scene is re-entered):

        - (void)backButtonTapped:(id)sender
        {
            NSLog(@"backButtonTapped\n");
            CCMenuItemImage *backButton = (CCMenuItemImage *)sender;
            [backButton setNormalImage:[CCSprite spriteWithFile:@"btn_back_pressed.png"]];
            [[CCDirector sharedDirector] replaceScene:[CCTransitionFade transitionWithDuration:.25
                                                                                         scene:[MenuView scene]
                                                                                     withColor:ccBLACK]];
        }

    Read the article

  • OpenGL quake 3 shader file for objects (for trees)

    - by mlodziaszka
    I decided to add a few trees to my game. I already have a Quake 3 model loader (MD3) for characters, and the texture-drawing method is stored in an *.ini file. I found a package of trees in MD3 format, and I have no problem loading a model alone, but each comes with a *.shader file, and I have no idea how to load that so the textures draw properly. Tree pack: http://www.custommapmakers.org/wiki/index.php/Models:GR_Trees_set I do not have to use exactly this format - I can write another loader - but trees in *.obj or *.3ds would be even harder.

    Read the article

  • Procedural object generation and unique identification

    - by 2080
    My question relates to procedural content generation and the management of the resulting objects in a database. Assume a networked game with a server-client model. Unspecified objects in the game world are generated while the game is running by procedural algorithms (for example, Perlin noise). The players (clients) can modify the properties of these objects, but have to notify the server of these changes. How can this communication address unique objects, so that both the server and the client know which object they are speaking of? Not only the inner properties of the objects can differ, but also visible ones, such as the position. When the player wants to select one of these objects, the game has to find out its ID - does anyone know which methods or algorithms can accomplish that?
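    Because the objects are generated deterministically, they don't need server-assigned IDs at all: both sides can derive the same ID from the generation inputs (the world seed plus the cell or noise coordinates the object was spawned from). A sketch with an invented hash function:

        // Sketch: a deterministic 32-bit object ID from seed + spawn cell.
        // Client and server compute the same ID independently, so a
        // "modify object N" message is unambiguous without any handshake.
        function objectId(worldSeed: number, cellX: number, cellY: number): number {
            let h = worldSeed >>> 0;
            for (const v of [cellX, cellY]) {
                h ^= v >>> 0;
                h = Math.imul(h, 0x9e3779b1);   // multiplicative mixing step
                h ^= h >>> 15;
            }
            return h >>> 0;
        }
        // Mutable per-object state (positions changed by players, etc.) is then
        // keyed by this ID in a server-side table of overrides.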

    Read the article
