Search Results

Search found 37616 results on 1505 pages for 'model driven development'.


  • How to solve problems with movement in a simple tile-based multiplayer game?

    - by Murlo
    I'm making a simple tile-based 2D multiplayer game in JavaScript using socket.io where you can move one tile every 200 ms. The two solutions I've tried are as follows:

    1. The client sends "walk one tile north" every 200 ms. Problem: people can easily hack the client to send the action more often.
    2. The client sends "walking north" and "stopped walking". Problem: sometimes the player moves extra steps when "stopped walking" doesn't arrive in time.

    Do you know a way around these problems, or is there a better way to do it? EDIT: Regarding the first solution, I've tried adding validation on the server to check that 200 ms have passed since the last movement. The problem is that latency still encourages people to spam the action as much as possible, giving them an unfair advantage.
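    A server-authoritative rate limiter is the usual way around both problems. Below is a minimal sketch (class and method names are hypothetical, not from the question): the server owns the position and clients only send intents, while a small grace window absorbs network jitter so honest clients aren't penalized, yet sustained spam can never exceed one move per 200 ms.

        import java.util.HashMap;
        import java.util.Map;

        class MovementValidator {
            private static final long MOVE_INTERVAL_MS = 200;
            private static final long GRACE_MS = 50; // tolerate a little network jitter
            private final Map<String, Long> nextMoveAt = new HashMap<>();

            /** Returns true if the move is accepted; the server then updates the
             *  authoritative position and broadcasts it to all clients. */
            boolean tryMove(String playerId, long nowMs) {
                long allowedAt = nextMoveAt.getOrDefault(playerId, 0L);
                if (nowMs < allowedAt - GRACE_MS) {
                    return false; // too soon: ignore rather than punish, could be jitter
                }
                // Schedule the next slot from the previous slot, not from 'now', so
                // sending packets early can never raise the long-run rate.
                nextMoveAt.put(playerId, Math.max(allowedAt, nowMs) + MOVE_INTERVAL_MS);
                return true;
            }
        }

    Because the server decides where the player is, a hacked client that spams moves just gets its extra packets dropped.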


  • Knowing state of game in real time

    - by evthim
    I'm trying to code a tic-tac-toe game in Java, and I need help figuring out how to check efficiently, and without freezing the program, whether someone has won the game. I'm only in the design stages now; I haven't started programming anything, but I'm wondering how I would know, at all times, the state of the game and exactly when someone wins. Response to MarkR (note: had to place this here, it was too long for the comment section): It's not a homework problem. I'm trying to get more practice programming GUIs, which I've only done once as a freshman in my second introductory programming course. I understand I'll have a 2D array. I plan to have a 2D integer array where x would equal 1 and o would equal 0. However, won't it take too much time if I check after every move whether someone won the game? Is there a way, or a data structure or algorithm I can use, so that the program will know the state of the game at all times (by state I mean not just knowing every position on the board, which the int array takes care of, but knowing, say, that user 1 will win if he places an x on this square) and thus can know automatically when someone has won?
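    For what it's worth, checking after every move is not expensive: only the row, column, and (at most two) diagonals through the last move can have changed, so the check is constant time on a 3x3 board. A sketch of that idea (names are illustrative; note that using 0 for "o" would collide with 0 meaning "empty", so 1/2 is used here):

        class TicTacToe {
            private final int[][] board = new int[3][3]; // 0 = empty, 1 = X, 2 = O

            /** Place a mark and report whether that move wins the game. */
            boolean moveWins(int row, int col, int player) {
                board[row][col] = player;
                boolean rowWin = board[row][0] == player && board[row][1] == player && board[row][2] == player;
                boolean colWin = board[0][col] == player && board[1][col] == player && board[2][col] == player;
                boolean diagWin = row == col
                        && board[0][0] == player && board[1][1] == player && board[2][2] == player;
                boolean antiWin = row + col == 2
                        && board[0][2] == player && board[1][1] == player && board[2][0] == player;
                return rowWin || colWin || diagWin || antiWin;
            }
        }

    The same per-move check can also answer "will user 1 win if he plays here?" by trying the move, testing, and undoing it.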


  • How to Effectively Create Bullet Patterns

    - by SoulBeaver
    I'm currently creating a top-down shooter like Touhou. The most important factor of the game is that there are many diverse patterns and ways in which bullets are generated and shot at the player; see this video: http://www.youtube.com/watch?v=4Nb5Ohbt1Sg#start=0:60;end=9:53. At the moment, I'm using a class "Pattern" which has a series of steps for moving and shooting. However, I feel this method is quite laborious, as I have to create a new Pattern for each attack, and perhaps new Bullet classes that will implement a certain behavior. This question received a comment suggesting I should look into BulletML for easy creation and storage of bullets with a specific pattern. It looks decent, but it led me to wonder: what other solutions should I take into consideration? Update: My current design is as follows (diagram omitted). An example of an implemented pattern: my GigasPattern first executes a teleport, which moves Alice to a certain point (X, Y) on the screen. After this is completed, the pattern starts using the Mover to move the sprite around (whereas teleporting has separate effects and animation). These are of no concern, really, as they are quite simple. The Shooter also creates various Attacks, which are again classes that the Shooter can use to create various patterns of bullets, much like the one in the question I posted. Once the Mover has reached its destination, both it and the Shooter stop and return to an inactive state. The pattern completes, is removed by the AI, and a new one gets chosen.
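    The core idea behind BulletML, sketched below in plain Java with hypothetical names, is to make a pattern data interpreted by one generic emitter rather than a class per attack; new attacks then become new parameter sets (or data files), not new code:

        class RingPattern {
            final int bulletsPerWave;   // e.g. 24 bullets in a ring
            final double spinPerWave;   // radians the ring rotates each wave
            final double bulletSpeed;
            final double waveInterval;  // seconds between waves
            private double angleOffset = 0, cooldown = 0;

            RingPattern(int n, double spin, double speed, double interval) {
                bulletsPerWave = n; spinPerWave = spin; bulletSpeed = speed; waveInterval = interval;
            }

            /** Called every frame; emits velocity vectors for new bullets when due. */
            void update(double dt, java.util.List<double[]> outVelocities) {
                cooldown -= dt;
                if (cooldown > 0) return;
                cooldown = waveInterval;
                for (int i = 0; i < bulletsPerWave; i++) {
                    double a = angleOffset + i * 2 * Math.PI / bulletsPerWave;
                    outVelocities.add(new double[] { Math.cos(a) * bulletSpeed,
                                                     Math.sin(a) * bulletSpeed });
                }
                angleOffset += spinPerWave; // the ring slowly rotates, Touhou-style
            }
        }

    Composing a few such parameterized emitters (rings, arcs, aimed shots) covers a surprising share of classic bullet-hell patterns.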


  • Multiplayer tile based movement synchronization

    - by Mars
    I have to synchronize the movement of multiple players over the Internet, and I'm trying to figure out the safest way to do that. The game is tile-based; you can only move in 4 directions, and every move moves the sprite 32 px (over time, of course). Now, if I were simply to send this move action to the server, which would broadcast it to all players, then while the walk key is held down I'd have to take the next command, send it to the server, and get it to all clients in time, or the movement won't be smooth anymore. I saw this in other games, and it can get ugly pretty quickly, even without lag, so I'm wondering if this is even a viable option. It does seem like a very good method for single player, since it's easy and straightforward (just take the next movement action in time and add it to a list), and you can easily add mouse movement (clicking on some tile) by adding a path to a queue that is walked along. The other thing that came to my mind was sending the information that someone started moving in some direction, and again once they stopped or changed direction, together with the position, so that the sprite appears at the correct position, or rather so that the position can be fixed if it's wrong. This should (hopefully) only cause problems if someone really is lagging, in which case it's to be expected. For this to work out I'd need some kind of queue, though, where incoming direction changes and so on are saved, so the sprite knows where to go after the current movement to the next tile is finished. This could actually work, but it sounds overcomplicated, although it might be the only way to do it without risk of stuttering. If a stop or direction change is received on the client side, it's saved in a queue and the character keeps moving to the specified coordinates before stopping or changing direction. If the new command comes in too late, there'll be stuttering as well, of course... I'm having a hard time deciding on a method, and I couldn't really find any examples of this yet. My main problem is keeping the tile movement smooth, which is why other topics on synchronizing pixel-based movement aren't helping much. What is the "standard" way to do this?
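    The queue described in the second approach is less code than it sounds. A rough sketch for a remote player's sprite (all names illustrative; each network message carries the direction plus the authoritative tile it leads to, so a late "stop" only causes a short catch-up rather than permanent drift):

        import java.util.ArrayDeque;

        class RemoteSprite {
            enum Dir { NORTH, SOUTH, EAST, WEST, STOP }
            static final class Command {
                final Dir dir; final int tileX, tileY;
                Command(Dir d, int x, int y) { dir = d; tileX = x; tileY = y; }
            }

            private final ArrayDeque<Command> queue = new ArrayDeque<>();
            private float px, py;            // interpolated pixel position
            private int targetX, targetY;    // authoritative tile we're heading to
            private boolean moving = false;

            void onNetworkCommand(Command c) { queue.add(c); }

            void update(float dt, float pixelsPerSecond) {
                if (!moving && !queue.isEmpty()) {
                    Command c = queue.poll();
                    targetX = c.tileX;       // head toward the server's tile,
                    targetY = c.tileY;       // which silently corrects drift
                    moving = c.dir != Dir.STOP;
                }
                if (moving) {
                    float goalX = targetX * 32f, goalY = targetY * 32f;
                    float step = pixelsPerSecond * dt;
                    px += Math.signum(goalX - px) * Math.min(step, Math.abs(goalX - px));
                    py += Math.signum(goalY - py) * Math.min(step, Math.abs(goalY - py));
                    if (px == goalX && py == goalY) moving = false; // tile reached
                }
            }
        }

    Because each queued move finishes its full 32 px before the next command is consumed, the tile motion stays smooth even when messages arrive in bursts.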


  • Alternatives to the GPL

    - by Bane
    I made a game, and I am currently making a game engine. I want them both to be completely free and open source. What license should I choose? I was reading a bit about the GPL, but that seems to be more suited for system code and libraries, AFAIK, as it doesn't permit the use of the code in proprietary software - which, in turn, implies that the code can be used in the first place. I can see that game engines can obviously be considered libraries, and can therefore be used, but what about game code? Is there an alternative to the GPL?


  • Frame Buffer Objects vs calling TexCoord2f?

    - by sensae
    I'm currently learning the basics of OpenGL with LWJGL, and following a guide I've got textured quads that can move around a scene. I've been reading about Frame Buffer Objects, and I'm not really clear on their purpose and their benefit. My understanding is that I'll create an FBO with the texture I'd like, load the FBO, draw a quad, then unload the FBO. What would the technique I'm currently doing for texture management be called, and how does it differ from using FBOs? What are the benefits of using FBOs? How do they fit into the grand rendering scheme of things?
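    One point worth separating out: an FBO is a render *destination*, not a way to draw textures. Plain textured quads don't need one; FBOs are for rendering a whole scene (or effect pass) *into* a texture. A minimal render-to-texture sketch, assuming LWJGL 2-style static GL bindings (names and structure are illustrative):

        import static org.lwjgl.opengl.GL11.*;
        import static org.lwjgl.opengl.GL30.*;

        final class RenderTarget {
            final int fbo, texture, width, height;

            RenderTarget(int width, int height) {
                this.width = width; this.height = height;
                texture = glGenTextures();                       // empty texture to render into
                glBindTexture(GL_TEXTURE_2D, texture);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                        GL_RGBA, GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                fbo = glGenFramebuffers();
                glBindFramebuffer(GL_FRAMEBUFFER, fbo);
                glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                        GL_TEXTURE_2D, texture, 0);
                if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
                    throw new IllegalStateException("FBO incomplete");
                glBindFramebuffer(GL_FRAMEBUFFER, 0);            // back to the backbuffer
            }

            /** Everything drawn between begin() and end() lands in 'texture'. */
            void begin() { glBindFramebuffer(GL_FRAMEBUFFER, fbo); glViewport(0, 0, width, height); }
            void end()   { glBindFramebuffer(GL_FRAMEBUFFER, 0); }
        }

    Typical uses are post-processing, minimaps, mirrors, and shadow maps; for ordinary sprites the current quad-with-texture approach is already the right tool.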


  • How can I get into the educational market?

    - by mmyers
    I believe that my current game project is very well-suited for educational gaming; so well-suited, in fact, that I know of several different schools (one community college and at least one or two high schools) that have used versions of it at some time or another. And that's without any such marketing on my part. I'd like to expand on this part of the potential user base, but I have absolutely no experience dealing with school administrations. How can I break into this market enough to be noticed? And on a side note, could marketing the game as educational kill the market among gamers?


  • Confusion with floats converted into ints during collision detection

    - by TheBroodian
    So, in designing a 2D platformer, I decided that I should use a Vector2 to track the world location of my world objects, to retain some sub-pixel precision for slow-moving objects and other subtle nuances, while representing their bodies with Rectangles, because as far as collision detection and resolution are concerned, I don't need sub-pixel precision. I thought that the following line of thought would work smoothly:

        Vector2 wrldLocation;
        Point WorldLocation;
        Rectangle collisionRectangle;

        public void Update(GameTime gameTime)
        {
            Vector2 moveAmount = velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
            wrldLocation += moveAmount;
            WorldLocation = new Point((int)wrldLocation.X, (int)wrldLocation.Y);
            collisionRectangle = new Rectangle(WorldLocation.X, WorldLocation.Y,
                                               genericWidth, genericHeight);
        }

    and I guess in theory it sort of works, until I try to use it in conjunction with my collision detection, which works by using Rectangle.Offset() to project where collisionRectangle would supposedly end up after applying moveAmount to it; if a collision is found, it finds the intersection and subtracts the difference between the two intersecting sides from the given moveAmount, which would theoretically give a corrected moveAmount to apply to the object's world location, preventing it from passing through walls and such. The issue here is that Rectangle.Offset() only accepts ints, and so I'm not really receiving an accurate adjustment to moveAmount for a Vector2. If I leave wrldLocation out of my previous example and just use WorldLocation to keep track of my object's location, everything works smoothly, but then obviously if my object is given velocities of less than 1 pixel per update, the velocity value may as well be 0, which I feel I may regret further down the line. Does anybody have any suggestions about how I might go about resolving this?
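    One way out is to do the sweep entirely in floats and round only when producing the pixel rectangle, so the float position stays authoritative and sub-pixel velocities accumulate instead of truncating to zero. A sketch in Java (the types are illustrative stand-ins for XNA's Vector2/Rectangle, not the XNA API itself):

        class MovingBody {
            float x, y;                 // authoritative world position
            final int width, height;

            MovingBody(float x, float y, int w, int h) { this.x = x; this.y = y; width = w; height = h; }

            /** Clip a proposed horizontal move against one solid rectangle, in float space. */
            float clipMoveX(float moveX, float solidLeft, float solidRight,
                            float solidTop, float solidBottom) {
                boolean verticalOverlap = y < solidBottom && y + height > solidTop;
                if (!verticalOverlap) return moveX;
                if (moveX > 0 && x + width <= solidLeft)        // moving right into the wall
                    moveX = Math.min(moveX, solidLeft - (x + width));
                else if (moveX < 0 && x >= solidRight)          // moving left into the wall
                    moveX = Math.max(moveX, solidRight - x);
                return moveX;
            }

            /** Round once, only when building the on-screen/collision rectangle. */
            int[] pixelRect() { return new int[] { Math.round(x), Math.round(y), width, height }; }
        }

    The integer Rectangle then becomes a derived view for rendering, while all movement corrections keep their fractional parts.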


  • OpenGL directional light creating black spots

    - by AnonymousDeveloper
    I probably ought to start by saying that I suspect the problem is that one of my vectors is not in the correct "space", but I don't know for sure. I am having a strange problem with a directional light. When I move the camera away from (0.0, 0.0, 0.0) it creates tiny black spots that grow larger as the distance increases. I apologize ahead of time for the length of the code.

    Vertex shader:

        #version 410 core

        in vec3 vf_normal;
        in vec3 vf_bitangent;
        in vec3 vf_tangent;
        in vec2 vf_textureCoordinates;
        in vec3 vf_vertex;

        out vec3 tc_normal;
        out vec3 tc_bitangent;
        out vec3 tc_tangent;
        out vec2 tc_textureCoordinates;
        out vec3 tc_vertex;

        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform float vf_te_inner;
        uniform float vf_te_outer;

        void main()
        {
            tc_normal = vf_normal;
            tc_bitangent = vf_bitangent;
            tc_tangent = vf_tangent;
            tc_textureCoordinates = vf_textureCoordinates;
            tc_vertex = vf_vertex;
            gl_Position = vf_m_mvp * vec4(vf_vertex, 1.0);
        }

    Tessellation Control shader:

        #version 410 core

        layout (vertices = 3) out;

        in vec3 tc_normal[];
        in vec3 tc_bitangent[];
        in vec3 tc_tangent[];
        in vec2 tc_textureCoordinates[];
        in vec3 tc_vertex[];

        out vec3 te_normal[];
        out vec3 te_bitangent[];
        out vec3 te_tangent[];
        out vec2 te_textureCoordinates[];
        out vec3 te_vertex[];

        uniform float vf_te_inner;
        uniform float vf_te_outer;
        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        #define ID gl_InvocationID

        float getTessLevelInner(float distance0, float distance1)
        {
            float avgDistance = (distance0 + distance1) / 2.0;
            return clamp((vf_te_inner - avgDistance), 1.0, vf_te_inner);
        }

        float getTessLevelOuter(float distance0, float distance1)
        {
            float avgDistance = (distance0 + distance1) / 2.0;
            return clamp((vf_te_outer - avgDistance), 1.0, vf_te_outer);
        }

        void main()
        {
            te_normal[gl_InvocationID] = tc_normal[gl_InvocationID];
            te_bitangent[gl_InvocationID] = tc_bitangent[gl_InvocationID];
            te_tangent[gl_InvocationID] = tc_tangent[gl_InvocationID];
            te_textureCoordinates[gl_InvocationID] = tc_textureCoordinates[gl_InvocationID];
            te_vertex[gl_InvocationID] = tc_vertex[gl_InvocationID];

            float eyeToVertexDistance0 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[0], 1.0)).xyz);
            float eyeToVertexDistance1 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[1], 1.0)).xyz);
            float eyeToVertexDistance2 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[2], 1.0)).xyz);

            gl_TessLevelOuter[0] = getTessLevelOuter(eyeToVertexDistance1, eyeToVertexDistance2);
            gl_TessLevelOuter[1] = getTessLevelOuter(eyeToVertexDistance2, eyeToVertexDistance0);
            gl_TessLevelOuter[2] = getTessLevelOuter(eyeToVertexDistance0, eyeToVertexDistance1);
            gl_TessLevelInner[0] = getTessLevelInner(eyeToVertexDistance2, eyeToVertexDistance0);
        }

    Tessellation Evaluation shader:

        #version 410 core

        layout (triangles, equal_spacing, cw) in;

        in vec3 te_normal[];
        in vec3 te_bitangent[];
        in vec3 te_tangent[];
        in vec2 te_textureCoordinates[];
        in vec3 te_vertex[];

        out vec3 g_normal;
        out vec3 g_bitangent;
        out vec4 g_patchDistance;
        out vec3 g_tangent;
        out vec2 g_textureCoordinates;
        out vec3 g_vertex;

        uniform float vf_te_inner;
        uniform float vf_te_outer;
        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_displace;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2)
        {
            return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2;
        }

        vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2)
        {
            return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2;
        }

        float amplify(float d, float scale, float offset)
        {
            d = scale * d + offset;
            d = clamp(d, 0, 1);
            d = 1 - exp2(-2 * d * d);
            return d;
        }

        float getDisplacement(vec2 t0, vec2 t1, vec2 t2)
        {
            float displacement = 0.0;
            vec2 textureCoordinates = interpolate2D(t0, t1, t2);
            vec2 vector = ((t0 + t1 + t2) / 3.0);
            float sampleDistance = sqrt((vector.x * vector.x) + (vector.y * vector.y));
            sampleDistance /= ((vf_te_inner + vf_te_outer) / 2.0);
            displacement += texture(vf_t_displace, textureCoordinates).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, -sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance,  sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance,  sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, -sampleDistance)).x;
            return (displacement / 5.0);
        }

        void main()
        {
            g_normal = normalize(interpolate3D(te_normal[0], te_normal[1], te_normal[2]));
            g_bitangent = normalize(interpolate3D(te_bitangent[0], te_bitangent[1], te_bitangent[2]));
            g_patchDistance = vec4(gl_TessCoord, (1.0 - gl_TessCoord.y));
            g_tangent = normalize(interpolate3D(te_tangent[0], te_tangent[1], te_tangent[2]));
            g_textureCoordinates = interpolate2D(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]);
            g_vertex = interpolate3D(te_vertex[0], te_vertex[1], te_vertex[2]);

            float displacement = getDisplacement(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]);
            float d2 = min(min(min(g_patchDistance.x, g_patchDistance.y), g_patchDistance.z), g_patchDistance.w);
            d2 = amplify(d2, 50, -0.5);
            g_vertex += g_normal * displacement * 0.1 * d2;
            gl_Position = vf_m_mvp * vec4(g_vertex, 1.0);
        }

    Geometry shader:

        #version 410 core

        layout (triangles) in;
        layout (triangle_strip, max_vertices = 3) out;

        in vec3 g_normal[3];
        in vec3 g_bitangent[3];
        in vec4 g_patchDistance[3];
        in vec3 g_tangent[3];
        in vec2 g_textureCoordinates[3];
        in vec3 g_vertex[3];

        out vec3 f_tangent;
        out vec3 f_bitangent;
        out vec3 f_eyeDirection;
        out vec3 f_lightDirection;
        out vec3 f_normal;
        out vec4 f_patchDistance;
        out vec4 f_shadowCoordinates;
        out vec2 f_textureCoordinates;
        out vec3 f_vertex;

        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        void main()
        {
            int index = 0;
            while (index < 3)
            {
                vec3 vertexNormal_cameraspace = vf_m_normal * normalize(g_normal[index]);
                vec3 vertexTangent_cameraspace = vf_m_normal * normalize(f_tangent);
                vec3 vertexBitangent_cameraspace = vf_m_normal * normalize(f_bitangent);
                mat3 TBN = transpose(mat3(
                    vertexTangent_cameraspace,
                    vertexBitangent_cameraspace,
                    vertexNormal_cameraspace
                ));
                vec3 eyeDirection = -(vf_m_view * vf_m_model * vec4(g_vertex[index], 1.0)).xyz;
                vec3 lightDirection = normalize(-(vf_m_view * vec4(vf_l_position, 1.0)).xyz);
                f_eyeDirection = TBN * eyeDirection;
                f_lightDirection = TBN * lightDirection;
                f_normal = normalize(g_normal[index]);
                f_patchDistance = g_patchDistance[index];
                f_shadowCoordinates = vf_m_depthBias * vec4(g_vertex[index], 1.0);
                f_textureCoordinates = g_textureCoordinates[index];
                f_vertex = (vf_m_model * vec4(g_vertex[index], 1.0)).xyz;
                gl_Position = gl_in[index].gl_Position;
                EmitVertex();
                index++;
            }
            EndPrimitive();
        }

    Fragment shader:

        #version 410 core

        in vec3 f_bitangent;
        in vec3 f_eyeDirection;
        in vec3 f_lightDirection;
        in vec3 f_normal;
        in vec4 f_patchDistance;
        in vec4 f_shadowCoordinates;
        in vec3 f_tangent;
        in vec2 f_textureCoordinates;
        in vec3 f_vertex;

        out vec4 fragColor;

        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        vec2 poissonDisk[16] = vec2[](
            vec2(-0.94201624, -0.39906216), vec2( 0.94558609, -0.76890725),
            vec2(-0.09418410, -0.92938870), vec2( 0.34495938,  0.29387760),
            vec2(-0.91588581,  0.45771432), vec2(-0.81544232, -0.87912464),
            vec2(-0.38277543,  0.27676845), vec2( 0.97484398,  0.75648379),
            vec2( 0.44323325, -0.97511554), vec2( 0.53742981, -0.47373420),
            vec2(-0.26496911, -0.41893023), vec2( 0.79197514,  0.19090188),
            vec2(-0.24188840,  0.99706507), vec2(-0.81409955,  0.91437590),
            vec2( 0.19984126,  0.78641367), vec2( 0.14383161, -0.14100790)
        );

        float random(vec3 seed, int i)
        {
            vec4 seed4 = vec4(seed, i);
            float dot_product = dot(seed4, vec4(12.9898, 78.233, 45.164, 94.673));
            return fract(sin(dot_product) * 43758.5453);
        }

        float amplify(float d, float scale, float offset)
        {
            d = scale * d + offset;
            d = clamp(d, 0, 1);
            d = 1 - exp2(-2.0 * d * d);
            return d;
        }

        void main()
        {
            vec3 lightColor = vf_l_color.xyz;
            float lightPower = vf_l_color.w;
            vec3 materialDiffuseColor = texture(vf_t_diffuse, f_textureCoordinates).xyz;
            vec3 materialAmbientColor = vec3(0.1, 0.1, 0.1) * materialDiffuseColor;
            vec3 materialSpecularColor = texture(vf_t_specular, f_textureCoordinates).xyz;
            vec3 n = normalize(texture(vf_t_normal, f_textureCoordinates).rgb * 2.0 - 1.0);
            vec3 l = normalize(f_lightDirection);
            float cosTheta = clamp(dot(n, l), 0.0, 1.0);
            vec3 E = normalize(f_eyeDirection);
            vec3 R = reflect(-l, n);
            float cosAlpha = clamp(dot(E, R), 0.0, 1.0);
            float visibility = 1.0;
            float bias = 0.005 * tan(acos(cosTheta));
            bias = clamp(bias, 0.0, 0.01);
            for (int i = 0; i < 4; i++)
            {
                float shading = (0.5 / 4.0);
                int index = i;
                visibility -= shading * (1.0 - texture(vf_t_shadow,
                    vec3(f_shadowCoordinates.xy + poissonDisk[index] / 3000.0,
                         (f_shadowCoordinates.z - bias) / f_shadowCoordinates.w)));
            }
            fragColor.xyz = materialAmbientColor
                          + visibility * materialDiffuseColor * lightColor * lightPower * cosTheta
                          + visibility * materialSpecularColor * lightColor * lightPower * pow(cosAlpha, 5);
            fragColor.w = texture(vf_t_diffuse, f_textureCoordinates).w;
        }

    The following images should be enough to give you an idea of the problem (screenshots omitted): before moving the camera; after moving the camera just a little; after moving it to the center of the scene.


  • Box2D blocky map. Body, Fixtures a huge map and performance

    - by Solom
    Right now I'm still in the planning phase of my very first game. I'm creating a "Minecraft"-like game in 2D that features blocks that can be destroyed, as well as players moving around the map. For creating the map I chose a 2D array of integers that represent the block ID. For testing purposes I created a huge map (16348 * 256), and in my prototype that didn't use Box2D everything worked like a charm. I only rendered those blocks that were within the bounds of my camera and got 60 fps straight. The problem started when I decided to use an existing physics solution rather than implementing my own. What I had was basically simple hitboxes around the blocks, and then I had to manually check whether the player collided with any of those in his neighborhood. For more advanced physics as well as the collision detection, I want to switch over to Box2D. The problem I have right now is... how to go about the bodies? I mean, the blocks are of a static body type. They don't move on their own; they are just there to be collided with. But as far as I can see it, every block needs its own body with a rectangular fixture attached to it, so as to be destroyable. For a huge map such as mine, this turns out to be a real performance bottleneck. (In fact, even a rather small map [compared to the other] of 1024*256 is unplayable.) I mean, I create thousands and thousands of blocks. Even if I just render those that are in my immediate neighborhood, there are hundreds of them, and (at least with the debugRenderer) I drop to 1 fps really quickly (on my own "monster machine"). I thought about strategies like creating just one body, attaching multiple fixtures, and only if a fixture got hit, separating it from the body, creating a new one and destroying it, but this didn't turn out quite as successfully as hoped. (In fact it just core dumps. Ah, hello C! I really missed you :X) Here is the code:

        public class Box2DGameScreen implements Screen {
            private World world;
            private Box2DDebugRenderer debugRenderer;
            private OrthographicCamera camera;

            private final float TIMESTEP = 1 / 60f; // fixed step of 1/60 of a second
            private final int VELOCITYITERATIONS = 8;
            private final int POSITIONITERATIONS = 3;

            private Map map;
            private BodyDef blockBodyDef;
            private FixtureDef blockFixtureDef;
            private BodyDef groundDef;
            private Body ground;
            private PolygonShape rectangleShape;

            @Override
            public void show() {
                world = new World(new Vector2(0, -9.81f), true);
                debugRenderer = new Box2DDebugRenderer();
                camera = new OrthographicCamera();
                // Pixel:Meter = 16:1

                // Body definition
                BodyDef ballDef = new BodyDef();
                ballDef.type = BodyDef.BodyType.DynamicBody;
                ballDef.position.set(0, 1);

                // Fixture definition
                FixtureDef ballFixtureDef = new FixtureDef();
                ballFixtureDef.shape = new CircleShape();
                ballFixtureDef.shape.setRadius(.5f); // 0.5 meter
                ballFixtureDef.restitution = 0.75f;  // between 0 (no bounce) and 1 (bounces back up the full distance it fell)
                ballFixtureDef.density = 2.5f;       // kg / m^2
                ballFixtureDef.friction = 0.25f;     // between 0 (sliding like ice) and 1 (not sliding)
                // world.createBody(ballDef).createFixture(ballFixtureDef);

                groundDef = new BodyDef();
                groundDef.type = BodyDef.BodyType.StaticBody;
                groundDef.position.set(0, 0);
                ground = world.createBody(groundDef);

                this.map = new Map(20, 20);
                rectangleShape = new PolygonShape();
                // rectangleShape.setAsBox(1, 1);

                blockFixtureDef = new FixtureDef();
                // blockFixtureDef.shape = rectangleShape;
                blockFixtureDef.restitution = 0.1f;
                blockFixtureDef.density = 10f;
                blockFixtureDef.friction = 0.9f;
            }

            @Override
            public void render(float delta) {
                Gdx.gl.glClearColor(1, 1, 1, 1);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

                debugRenderer.render(world, camera.combined);
                drawMap();

                world.step(TIMESTEP, VELOCITYITERATIONS, POSITIONITERATIONS);
            }

            private void drawMap() {
                for (int a = 0; a < map.getHeight(); a++) {
                    /*
                    if (camera.position.y - (camera.viewportHeight / 2) > a) continue;
                    if (camera.position.y - (camera.viewportHeight / 2) < a) break;
                    */
                    for (int b = 0; b < map.getWidth(); b++) {
                        /*
                        if (camera.position.x - (camera.viewportWidth / 2) > b) continue;
                        if (camera.position.x - (camera.viewportWidth / 2) < b) break;
                        */
                        /*
                        blockBodyDef = new BodyDef();
                        blockBodyDef.type = BodyDef.BodyType.StaticBody;
                        blockBodyDef.position.set(b, a);
                        world.createBody(blockBodyDef).createFixture(blockFixtureDef);
                        */
                        PolygonShape rectangleShape = new PolygonShape();
                        // note: setAsBox takes half-extents, so (1, 1) makes a 2x2 m box
                        rectangleShape.setAsBox(1, 1, new Vector2(b, a), 0);
                        blockFixtureDef.shape = rectangleShape;
                        ground.createFixture(blockFixtureDef);
                        rectangleShape.dispose();
                    }
                }
            }

            @Override
            public void resize(int width, int height) {
                camera.viewportWidth = width / 16;
                camera.viewportHeight = height / 16;
                camera.update();
            }

            @Override
            public void hide() { dispose(); }

            @Override
            public void pause() { }

            @Override
            public void resume() { }

            @Override
            public void dispose() {
                world.dispose();
                debugRenderer.dispose();
            }
        }

    As you can see, I'm facing multiple problems here. I'm not quite sure how to check for the bounds, but also, if the map is bigger than 24*24 (like 1024*256), Java just crashes -.-. And with 24*24 I get like 9 fps. So it seems I'm doing something really terrible here, and I assume there must be a (much more performant) way, even with Box2D's awesome physics. Any other ideas? Thanks in advance!
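    One pattern that scales to maps this size is to keep fixtures only for blocks inside an "active window" around the camera, creating them as the view scrolls in and destroying them as it scrolls out, so the Box2D world never holds more than a few hundred fixtures regardless of map size. A sketch assuming libGDX Box2D (Map-lookup details and names are stand-ins):

        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.physics.box2d.*;
        import java.util.HashMap;
        import java.util.Map;

        class ActiveBlockWindow {
            private final Body ground;                        // one shared static body
            private final FixtureDef def = new FixtureDef();
            private final Map<Long, Fixture> live = new HashMap<>();

            ActiveBlockWindow(World world) {
                BodyDef bd = new BodyDef();
                bd.type = BodyDef.BodyType.StaticBody;
                ground = world.createBody(bd);
                def.friction = 0.9f;
            }

            /** Call once per frame with the camera's tile rectangle. */
            void update(int left, int bottom, int right, int top, int[][] blocks) {
                // Drop fixtures that scrolled out of range.
                live.entrySet().removeIf(e -> {
                    long key = e.getKey();
                    int x = (int) (key >> 32), y = (int) key;
                    boolean out = x < left || x > right || y < bottom || y > top;
                    if (out) ground.destroyFixture(e.getValue());
                    return out;
                });
                // Create fixtures for solid blocks that scrolled into range.
                for (int y = bottom; y <= top; y++)
                    for (int x = left; x <= right; x++) {
                        long key = ((long) x << 32) | (y & 0xffffffffL);
                        if (blocks[y][x] == 0 || live.containsKey(key)) continue;
                        PolygonShape box = new PolygonShape();
                        box.setAsBox(0.5f, 0.5f, new Vector2(x + 0.5f, y + 0.5f), 0); // 1x1 m block
                        def.shape = box;
                        live.put(key, ground.createFixture(def));
                        box.dispose();
                    }
            }
        }

    This also sidesteps the per-frame fixture creation in drawMap() above, which adds a fresh fixture for every tile on every render pass and is likely what exhausts memory on large maps.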


  • Fast determination of whether objects are onscreen in 2D

    - by Ben Ezard
    So currently, I have this in each object's renderer's update method:

        float a = transform.position.x * Main.scale;
        float b = transform.position.y * Main.scale;
        float c = Camera.main.transform.position.x * Main.scale;
        float d = Camera.main.transform.position.y * Main.scale;
        onscreen = a + width - c > 0 && a - c < GameView.width &&
                   b + height - d > 0 && b - d < GameView.height;

    transform.position is a 2D vector containing the game engine's definition of where the object is; this is then multiplied by Main.scale to translate that coordinate into actual screen space. Similarly, Camera.main.transform.position is the in-engine representation of where the main camera is, and this is also multiplied by Main.scale. The problem is, as my game is tile-based, thousands of these updates get called every frame, just to determine whether or not each object should be drawn. How can I improve this, please?
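    For a tile-based game, the usual fix is to invert the test: instead of asking every object "are you on screen?", compute the visible tile range from the camera once per frame and touch only the objects in those tiles. A sketch with illustrative names (the fields loosely mirror the question's):

        class TileCuller {
            final float scale;   // world units -> pixels, like Main.scale
            final int tileSize;  // tile size in pixels

            TileCuller(float scale, int tileSize) { this.scale = scale; this.tileSize = tileSize; }

            /** Returns {firstCol, firstRow, lastCol, lastRow}, clamped to the map. */
            int[] visibleRange(float camX, float camY, int viewW, int viewH,
                               int mapCols, int mapRows) {
                int firstCol = Math.max(0, (int) Math.floor(camX * scale / tileSize));
                int firstRow = Math.max(0, (int) Math.floor(camY * scale / tileSize));
                int lastCol  = Math.min(mapCols - 1, (int) ((camX * scale + viewW) / tileSize));
                int lastRow  = Math.min(mapRows - 1, (int) ((camY * scale + viewH) / tileSize));
                return new int[] { firstCol, firstRow, lastCol, lastRow };
            }
        }
        // Usage: loop r from firstRow..lastRow and c from firstCol..lastCol and draw
        // map[r][c] -- cost is O(visible tiles) per frame instead of O(all objects).

    Objects that aren't bound to a fixed tile can live in a coarse spatial hash (bucket by tile) so the same range query finds them.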


  • How to make my sprite jump properly?

    - by Matthew Morgan
    I'm currently working on a 2D platformer in XNA. I have, however, been having some trouble creating a fully functional jumping algorithm. This is what I have so far:

        if (keystate.IsKeyDown(Keys.W))
            if (onGround) // "onGround" is true when a collision between the main sprite and the ground is detected
            {
                spritePosition.Y = velocity.Y = -5;
            }

    So, the problem I am now having is that as soon as the jump starts, the variable "onGround" becomes false and the sprite is brought back to the ground by the simple gravity I have implemented. The other problem I have is creating a limit to the height, after which the sprite should automatically return to the ground. Any advice or suggestions would be greatly appreciated.
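    A common structure, sketched here in Java (the XNA version is analogous; names and constants are illustrative): apply an upward impulse once on the jump press, let gravity act every frame, and let the impulse value itself determine the maximum height instead of enforcing a manual ceiling.

        class Player {
            float y, velocityY;
            boolean onGround;
            static final float GRAVITY = 25f;      // units/s^2, tune to taste
            static final float JUMP_SPEED = -10f;  // negative = up in screen coordinates

            void update(float dt, boolean jumpPressed) {
                if (jumpPressed && onGround) {
                    velocityY = JUMP_SPEED; // one impulse; jump height follows from this value
                    onGround = false;       // leaving the ground is expected, not a bug
                }
                velocityY += GRAVITY * dt;  // gravity acts every frame, grounded or not
                y += velocityY * dt;
                // Ground collision should set onGround = true and velocityY = 0.
            }
        }

    The key point is that "onGround becoming false" is correct; the jump works because the initial upward velocity takes several frames of gravity to decay, giving a natural arc.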


  • Is it possible to fulfill the social needs of a human being through a 3D social game like IMVU?

    - by Totty
    (I'm not advertising or promoting this game; it's just an example of my experience, and I would like to have your opinion about the matter if possible.) I've started researching "things" about games, and I decided to begin playing IMVU because a friend of mine said it's cool. At first it seemed like just another 3D social game, not so cool... But I "tried to like it", and after one day I can say I'm addicted to it! Yes; I will explain better. About the game: You can go into chat-rooms and move to positions. Some positions are like sitting on a sofa or the floor, dancing alone or with a partner, kissing, and more in this way. In the free version of the game there is no nudity. You can even listen to music, view YouTube... The 3D graphics are quite low-end, so it's not as real as the paid PC games of today. About my experience: At first I was going into chat-rooms with my friend; they seemed very nice. There were people talking about general stuff, quite like in real life. Well, I began to know some girls (yes, virtual girls commanded by real girls, I hope!). Things happened: Some girls are just crazy, not like in real life; they make out before even talking. With other girls you can speak a little bit, then they add you to their friend-list. Sometimes they invite you to their virtual places. Some girls have IMVU boyfriends only (but not in reality), and most of them don't even make up in the game, so there's really a level of commitment involved here! (From what my friend told me, though, his lasted about 3 days...) Some others have real boyfriends who are also their IMVU boyfriends. Until now I haven't found a girl whose IMVU boyfriend is different from her real one, nor one with multiple boyfriends. There are rooms where the same people find each other every day and speak about general stuff, relationships and so on... They are nice to you, they "feel" you and show caring. This is what amazes me: they treat you like a real human being, as if you were their friend in the real world. (Of course it's not always like this.) There are jealous girls too, and competitiveness between females, lol. I know you loled! This is kind of social. So today I closed the door of my room and played it all day long, and guess what: I didn't feel the need to be with a real person at all. Normally, if I stayed alone for a full day, I would go quite crazy... So the question is: is it just me who seems able to fulfill my social needs this way, or is there something more? Thanks for your precious time reading my full question.


  • Most suited technology for browser games?

    - by Tingle
    I was thinking about making a 2D MMO which I would, in the long run, support on various platforms: desktop, Mac, browser, Android and iOS. The server will be C++/Linux based, and the first client would run in the browser. I have done some research and found that WebGL and Flash 11 support hardware-accelerated rendering, and I saw some other options like plain HTML5 canvas drawing. So my question is: which technology should I use for such a project? My main goal is for users to have a hassle-free experience, using what their hardware can give them via hardware acceleration, and the client should work on the most basic out-of-the-box PC that any casual PC or Mac user has. Another criterion is that it should be developer-friendly. I've messed with WebGL a bit, for example, and that would require writing an engine from scratch, which is acceptable but not preferred. Also, in the non-ActionScript case, which language is most preferred in terms of speed and flexibility? I'm not too fond of JavaScript due to the garbage collector, but I have learned to work around it. Thank you for your time.


  • How to shade a texture two different colors?

    - by Venesectrix
    To give an example of what I'm asking about, I'll use Saints Row 3 since I've been playing that lately. In that game you can customize your looks and your car's appearance a lot. Your coat can have a primary color and a trim color. Your car can have a primary color and a stripe color, etc. Is there just a single coat texture that is being shaded two different colors somehow or are they overlaying a transparent second texture for the trim/stripes that gets shaded differently? If it's just one texture I'd like to know how it's done. If it's two different textures it seems like it's a waste of space. The second texture would be the same size as the first one but mostly transparent if you just wanted to lay it on top of the first one. Or are they just carefully positioning a second, smaller texture so that it aligns properly with the first one?
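    A common single-texture answer (I can't say it's what Saints Row 3 does specifically) is a mask channel: a grayscale map, often stored in the texture's alpha or a spare channel, says how much each texel belongs to the primary color versus the trim color, and the renderer lerps between the two tints. Sketched on the CPU for clarity; in a shader it's a one-line mix():

        final class TwoToneTint {
            /** Color arrays are {r,g,b} in 0..1; mask is 0 = primary, 1 = trim. */
            static float[] tint(float[] base, float mask, float[] primary, float[] trim) {
                float[] out = new float[3];
                for (int i = 0; i < 3; i++) {
                    float t = primary[i] + (trim[i] - primary[i]) * mask; // lerp by mask
                    out[i] = base[i] * t; // modulate the grayscale detail texture
                }
                return out;
            }
        }

    This costs one extra channel rather than a whole duplicate texture, which is why it's popular for customizable clothing and car paint.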


  • Why do I have an error when adding states in slick?

    - by SystemNetworks
    When I was going to create another state, I got an error. This is my code:

        public static final int play2 = 3;

    and

        public Game(String gamename) {
            this.addState(new mission(play2));
        }

    and

        public void initStatesList(GameContainer gc) throws SlickException {
            this.getState(play2).init(gc, this);
        }

    I get the error on the addState line above. I don't know where the problem is, but if you want the whole code, it is here:

        package javagame;

        import org.newdawn.slick.*;
        import org.newdawn.slick.state.*;

        public class Game extends StateBasedGame {
            public static final String gamename = "NET FRONT";
            public static final int menu = 0;
            public static final int play = 1;
            public static final int train = 2;
            public static final int play2 = 3;

            public Game(String gamename) {
                super(gamename);
                this.addState(new Menu(menu));
                this.addState(new Play(play));
                this.addState(new train(train));
                this.addState(new mission(play2));
            }

            public void initStatesList(GameContainer gc) throws SlickException {
                this.getState(menu).init(gc, this);
                this.getState(play).init(gc, this);
                this.getState(train).init(gc, this);
                this.enterState(menu);
                this.getState(play2).init(gc, this);
            }

            public static void main(String[] args) {
                try {
                    AppGameContainer app = new AppGameContainer(new Game(gamename));
                    app.setDisplayMode(1500, 1000, false);
                    app.start();
                } catch (SlickException e) {
                    e.printStackTrace();
                }
            }
        }
        // SYSTEM NETWORKS (C) 2012 NET FRONT


  • Given a start and end point, how can I constrain the end point so the resulting line segment is horizontal, vertical, or 45 degrees?

    - by GloryFish
    I have a grid of letters. The player clicks on a letter and drags out a selection. Using Bresenham's Algorithm I can create a line of highlighted letters representing the player's selection. However, what I really want is to have the line segment be constrained to 45 degree angles (as is common for crossword-style games). So, given a start point and an end point, how can I find the line that passes through the start point and is closest to the end point? Bonus: To make things super sweet I'd like to get a list of points in the grid that the line passes through, and for super MEGA bonus points, I'd like to get them in order of selection (i.e. from start point to end point).
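    A sketch of one way to do both parts at once (names and the 2:1 snap threshold are my own choices, roughly approximating the 22.5-degree boundaries between the 8 compass directions): snap the drag vector to the nearest of the 8 directions, project its length onto that direction, and walk cell by cell from the start, which yields the grid points in selection order for free.

        import java.util.ArrayList;
        import java.util.List;

        final class SelectionSnap {
            /** Cells from (x0,y0) toward (x1,y1), snapped to the nearest of the
             *  8 compass directions, listed in start-to-end order. */
            static List<int[]> snappedLine(int x0, int y0, int x1, int y1) {
                int dx = x1 - x0, dy = y1 - y0;
                int sx = Integer.signum(dx), sy = Integer.signum(dy);
                // Crude angle snap: if one axis clearly dominates, go axis-aligned.
                if (Math.abs(dx) > 2 * Math.abs(dy)) sy = 0;
                else if (Math.abs(dy) > 2 * Math.abs(dx)) sx = 0;
                int len;
                if (sy == 0) len = Math.abs(dx);
                else if (sx == 0) len = Math.abs(dy);
                else len = Math.round((Math.abs(dx) + Math.abs(dy)) / 2f); // project onto the diagonal
                List<int[]> cells = new ArrayList<>();
                for (int i = 0; i <= len; i++)
                    cells.add(new int[] { x0 + i * sx, y0 + i * sy });
                return cells;
            }
        }

    Because the snapped directions all have integer steps, no Bresenham pass is needed; every cell on the line falls exactly on the grid.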


  • AABB - AABB Collision, which face do I hit?

    - by PeeS
    To allow my objects to slide when they collide, I need to:

    1. Know which face of the AABB they collide with.
    2. Calculate the normal to that face.
    3. Return the normal and calculate the impulse to apply to the player's velocity.

    Question: How can I calculate which face of the AABB I collided with, knowing that I have two AABBs colliding? One is the player and the other is a world object. Here's what that looks like (problem collision circled in white; screenshot omitted). Thank you for your help.
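    The standard answer is the minimum-penetration axis: measure how deeply the boxes overlap on each axis, and the axis (and sign) with the smaller overlap names the face that was hit. A self-contained sketch with illustrative names:

        final class AabbFaceNormal {
            /** Boxes given by center (cx, cy) and half extents (hx, hy).
             *  Returns {nx, ny}, the outward normal of the face of B that A hit,
             *  or null if the boxes don't overlap. */
            static float[] normal(float acx, float acy, float ahx, float ahy,
                                  float bcx, float bcy, float bhx, float bhy) {
                float dx = acx - bcx, dy = acy - bcy;
                float overlapX = (ahx + bhx) - Math.abs(dx);
                float overlapY = (ahy + bhy) - Math.abs(dy);
                if (overlapX <= 0 || overlapY <= 0) return null;   // no collision
                if (overlapX < overlapY)                            // hit a left/right face
                    return new float[] { Math.signum(dx), 0 };
                return new float[] { 0, Math.signum(dy) };          // hit a top/bottom face
            }
        }
        // Sliding: when dot(v, n) < 0, remove the component along the normal:
        // v -= dot(v, n) * n, which leaves only the tangential motion.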


  • OpenGL + Allegro. Moving from software drawing X Y to openGL is confusing

    - by Aaron
    Having a fair bit of trouble. I'm used to Allegro and drawing sprites on a bitmap buffer at X/Y coords. Now I've started a test project with OpenGL, and it's weird. Basically, as far as I know, there are many ways to draw stuff in OpenGL. At the moment, I think I'm creating a quad (whatever that is), and I think I've given it a texture made from a bitmap, and I'm drawing that:

        GLuint gl_image;
        bitmap = load_bitmap("cat.bmp", NULL);
        gl_image = allegro_gl_make_texture_ex(AGL_TEXTURE_MASKED, bitmap, GL_RGBA);
        glBindTexture(GL_TEXTURE_2D, gl_image);
        glBegin(GL_QUADS);
            glColor4ub(255, 255, 255, 255);
            glTexCoord2f(0, 0); glVertex3f(-0.5, 0.5, 0);
            glTexCoord2f(1, 0); glVertex3f(0.5, 0.5, 0);
            glTexCoord2f(1, 1); glVertex3f(0.5, -0.5, 0);
            glTexCoord2f(0, 1); glVertex3f(-0.5, -0.5, 0);
        glEnd();

    So yeah. I have a few questions: Is this the best way of drawing a sprite? Is it suitable? The big question: can anyone help, or does anyone know any tutorials on this weird coordinate thing, if that's even what it is? It's vastly different from X/Y, but I want to learn it. I was thinking maybe I could learn how this weird positioning stuff works, and then write a function to try to translate it to X and Y coords. That's about it. I'm still trying to figure it all out on my own, but any contributions you guys can make would be greatly appreciated =D Thanks!
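    The "weird coordinates" are OpenGL's default clip space, which runs from -1 to 1 across the screen. You don't have to translate manually: an orthographic projection makes GL accept Allegro-style pixel coordinates directly. A sketch of the setup (shown with LWJGL's legacy Java bindings here for consistency with the rest of this page; the C calls in an Allegro program are the same GL functions):

        import static org.lwjgl.opengl.GL11.*;

        final class PixelProjection {
            static void apply(int screenWidth, int screenHeight) {
                glMatrixMode(GL_PROJECTION);
                glLoadIdentity();
                // left, right, bottom, top: swapping bottom/top puts (0,0) at the
                // top-left corner, matching a software blitting setup.
                glOrtho(0, screenWidth, screenHeight, 0, -1, 1);
                glMatrixMode(GL_MODELVIEW);
                glLoadIdentity();
                // From here on, glVertex2f(x, y) is in pixels, like drawing to an
                // Allegro bitmap buffer.
            }
        }

    With that projection in place, the quad's corners become the sprite's pixel corners, and the immediate-mode draw above keeps working unchanged.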


  • Deferred contexts and inheriting state from the immediate context

    - by dreijer
    I took my first stab at using deferred contexts in DirectX 11 today. Basically, I created my deferred context using CreateDeferredContext() and then drew a simple triangle strip with it. Early on in my test application, I call OMSetRenderTargets() on the immediate context in order to render to the swap chain's back buffer. Now, after having read the documentation on MSDN about deferred contexts, I assumed that calling ExecuteCommandList() on the immediate context would execute all of the deferred commands as "an extension" to the commands that had already been executed on the immediate context, i.e. the triangle strip I rendered in the deferred context would be rendered to the swap chain's back buffer. That didn't seem to be the case, however. Instead, I had to manually pull out the immediate context's render target (using OMGetRenderTargets()) and then set it on the deferred context with OMSetRenderTargets(). Am I doing something wrong or is that the way deferred contexts work?


  • Using Box2D DrawDebugData with a multi-layer scene?

    - by Mr.Gando
    In my game, a scene is composed of several layers. Each layer has different camera transformations. This way I can have a layer at z=3 (GUI), z=2 (monsters), z=1 (scrolling background), and these 3 layers compose my whole scene. My render loop looks something like:

        renderLayer()
            applyTransformations()
            renderVisibleEntities()
            renderChildLayers()
        end

    If I call DrawDebugData() in the render loop, the whole b2World debug data will be rendered once for each layer in my scene. This creates a mess: the "debug boxes" get duplicated, some of them get the camera transformations applied and some of them don't, etc. What I would like to do is make DrawDebugData draw only certain debug boxes. That way, I could call something like b2world->DrawDebugDataForLayer(int layer_id) on each layer, like:

        renderLayer()
            applyTransformations()
            renderVisibleEntities()
            // Only render the debug data for the corresponding layer
            b2world->DrawDebugDataForLayer(layer_id)
            renderChildLayers()
        end

    Is there a way to subclass b2World so I could add this functionality (specific to my game)? If not, what would be the best way to achieve this? (Cocos2d uses a similar scene graph approach and Box2D, but I'm not sure if debugDraw works in Cocos2d...) Thanks
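    An alternative to subclassing b2World is to skip DrawDebugData entirely and walk the body list yourself, tagging each body's userData with its layer id and drawing only matching fixtures. Sketched with libGDX-style Box2D bindings below (in C++ the equivalent walks world->GetBodyList(); ShapeRenderer stands in for whatever line renderer the engine provides):

        import com.badlogic.gdx.graphics.glutils.ShapeRenderer;
        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.physics.box2d.*;
        import com.badlogic.gdx.utils.Array;

        final class LayerDebugDraw {
            static void draw(World world, int layerId, ShapeRenderer shapes) {
                Array<Body> bodies = new Array<>();
                world.getBodies(bodies);
                for (Body body : bodies) {
                    if (!(body.getUserData() instanceof Integer)) continue;
                    if ((Integer) body.getUserData() != layerId) continue; // other layer
                    Vector2 p = body.getPosition();
                    for (Fixture f : body.getFixtureList()) {
                        if (f.getShape() instanceof CircleShape) {
                            CircleShape c = (CircleShape) f.getShape();
                            shapes.circle(p.x + c.getPosition().x,
                                          p.y + c.getPosition().y,
                                          c.getRadius(), 16);
                        }
                        // Polygon shapes are handled the same way, with their
                        // vertices transformed by the body's transform.
                    }
                }
            }
        }

    Because this runs inside each layer's render pass, the draw automatically inherits that layer's camera transformations, which solves the duplication problem as well.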


  • Game maps with "counties" that are split along lines that aren't necessarily straight

    - by pm_2
    I want to create a game environment that supports a 2D map. This is a really basic map, but must be split along lines that are not necessarily straight. So imagine a country with county boundaries. I then want to be able to detect drag / drop events within these counties. What I'm really looking for here is a pointer to where to start on this (how it has been done before - any existing libraries out there), as I'm sure that what I'm trying to do is not new - although I can't find a beginners guide for this anywhere.
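    One widely used starting point for irregular regions (I can't point to a single standard library for it) is a color-keyed lookup image: alongside the visible map, keep a same-sized hidden image where each county is flood-filled with a unique id color. Hit-testing a drag or drop is then a single pixel read, with no polygon math for the wiggly boundaries. A minimal sketch:

        final class CountyLookup {
            private final int[] idPixels;  // one region id per pixel, e.g. decoded from a PNG
            private final int width, height;

            CountyLookup(int[] idPixels, int width, int height) {
                this.idPixels = idPixels; this.width = width; this.height = height;
            }

            /** Returns the county id under (x, y), or -1 outside the map. */
            int countyAt(int x, int y) {
                if (x < 0 || y < 0 || x >= width || y >= height) return -1;
                return idPixels[y * width + x];
            }
        }
        // Drag/drop: record countyAt(pressX, pressY) on press and
        // countyAt(releaseX, releaseY) on release; the irregular county shapes
        // come for free from whatever was painted into the lookup image.

    The lookup image can be authored in any paint program, which also makes the county shapes easy to edit without touching code.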


  • Understanding dot notation

    - by Starkers
    Here's my interpretation of dot notation:

        a = [2,6]
        b = [1,4]
        c = [0,8]

        a . b . c = (2*6) + (1*4) + (0*8) = 12 + 4 + 0 = 16

    What is the significance of 16? Apparently it's a scalar. Am I right in thinking that a scalar is the number we multiply a unit vector by to get a vector that has a scaled-up magnitude but the same direction as the unit vector? So again, what is the relevance of 16? When is it used? It's not the magnitude of all the vectors added up. The magnitude of all of them is calculated as follows:

        sqrt( ax * ax + ay * ay ) + sqrt( bx * bx + by * by ) + sqrt( cx * cx + cy * cy )
        sqrt( 2 * 2 + 6 * 6 ) + sqrt( 1 * 1 + 4 * 4 ) + sqrt( 0 * 0 + 8 * 8 )
        sqrt( 4 + 36 ) + sqrt( 1 + 16 ) + sqrt( 0 + 64 )
        sqrt( 40 ) + sqrt( 17 ) + sqrt( 64 )
        6.3 + 4.1 + 8
        18.4

    So I don't really get this diagram (image omitted). Attempting with sensible numbers:

        a = [1,0]
        b = [4,3]

        a . b = (1*0) + (4*3) = 0 + 12 = 12

    So what exactly is a . b describing here? The magnitude of that vector? Because that isn't right: the 'a.b' vector = [4,0]

        sqrt( x*x + y*y )
        sqrt( 4*4 + 0*0 )
        sqrt( 16 + 0 )
        4

    So what is 12 describing?
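    For reference, the standard definition pairs components *across* two vectors (not within one), and it's defined for exactly two vectors, so "a . b . c" isn't a dot product at all. A small sketch of both the definition and the projection reading that diagrams like the one above usually illustrate:

        final class Dot {
            // a . b = ax*bx + ay*by. For a=[2,6], b=[1,4]: 2*1 + 6*4 = 26,
            // and for a=[1,0], b=[4,3]: 1*4 + 0*3 = 4.
            static double dot(double ax, double ay, double bx, double by) {
                return ax * bx + ay * by;
            }

            // The dot product also equals |a| |b| cos(theta), so dividing by |b|
            // gives the length of a's shadow (projection) onto b -- the scalar
            // such diagrams are depicting.
            static double projectionLength(double ax, double ay, double bx, double by) {
                return dot(ax, ay, bx, by) / Math.hypot(bx, by);
            }
        }

    Note that for a=[1,0], b=[4,3], the true dot product is 4, which matches the projected vector [4,0]'s length exactly because a is a unit vector along x.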


  • Working out of a vertex array for destructible objects

    - by bobobobo
    I have diamond-shaped polygonal bullets. There are lots of them on the screen. I did not want to create a vertex array for each, so I packed them into a single vertex array and they're all drawn at once:

        | bullet1.xyz | bullet1.rgb | bullet2.xyz | bullet2.rgb |

    This is great for performance. Then there is:

        struct Bullet
        {
            vector<Vector3f*> verts; // pointers into the vertex buffer
        };

    This works fine; the bullets can move and do collision detection, all while having their data in one place. Except when a bullet "dies": then you have to clear a slot and pack all the bullets towards the beginning of the array. Is this a good approach to handling lots of low-poly objects? How else would you do it?
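    The usual trick for holes is swap-with-last: copy the last live bullet's vertex data into the dead slot and shrink the live count, so there is never a compaction pass. A sketch in Java (the stride of 6 floats per bullet is illustrative; a diamond with xyz+rgb per vertex would just be a larger constant):

        final class BulletBuffer {
            static final int FLOATS_PER_BULLET = 6; // illustrative stride
            final float[] data;
            int liveCount;

            BulletBuffer(int capacity) { data = new float[capacity * FLOATS_PER_BULLET]; }

            /** Kill bullet i in O(stride): overwrite it with the last live bullet. */
            void kill(int i) {
                int last = --liveCount;
                System.arraycopy(data, last * FLOATS_PER_BULLET,
                                 data, i * FLOATS_PER_BULLET, FLOATS_PER_BULLET);
                // Any Bullet object that referenced slot 'last' must be retargeted
                // to slot 'i' -- which is why stable indices beat raw pointers here.
            }
        }
        // Draw with a vertex count derived from liveCount, so dead slots past the
        // end of the live range are never submitted to the GPU.

    The one cost is that bullet order isn't preserved, which rarely matters for bullets; it also suggests storing indices into the buffer rather than raw pointers, since slots get reused.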


  • HUD layer not being added on my scene

    - by Shailesh_ios
    I have a CCScene which already holds my gameLayer, and I am trying to add a HUD layer on top of that. But the HUD layer is not getting added to my scene. I can say that because I have set up a CCLabel on the HUD layer, and when I run my project, I cannot see that label. Here's what I am doing, in my gameLayer:

        +(id) scene
        {
            CCScene *scene = [CCScene node];
            GameScreen *layer = [GameScreen node];
            [scene addChild: layer];

            HUDclass *otherLayer = [HUDclass node];
            [scene addChild: otherLayer];
            layer.HC = otherLayer; // HC is a reference to my HUD layer in the @interface of gameLayer

            return scene;
        }

    And then in my HUD layer I have just added a CCLabelTTF in its init method, like this:

        -(id) init
        {
            if ((self = [super init]))
            {
                CCLabelTTF *label = [CCLabelTTF labelWithString:@"IN WEAPON CLASS" fontName:@"Arial" fontSize:15];
                label.position = ccp(240, 160);
                [self addChild:label];
            }
            return self;
        }

    But now when I run my project, I don't see that label. What am I doing wrong here? Any ideas? Thanks in advance for your time.

