Search Results

Search found 25496 results on 1020 pages for 'development fabric'.

  • How to solve problems with movement in a simple tile-based multiplayer game?

    - by Murlo
    I'm making a simple tile-based 2D multiplayer game in JavaScript, using socket.io, where you can move one tile every 200 ms. The two solutions I've tried are as follows:

    1. The client sends "walk one tile north" every 200 ms. Problem: people can easily hack the client to send the action more often.
    2. The client sends "walking north" and "stopped walking". Problem: sometimes the player moves extra steps when "stopped walking" doesn't arrive in time.

    Do you know a way around these problems, or is there a better way to do it? EDIT: Regarding the first solution, I've tried adding validation on the server to check whether 200 ms have passed since the last movement. The problem is that latency still encourages people to spam the action as often as possible, giving them an unfair advantage.
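
    For the EDIT's validation idea, the usual fix is to make the server fully authoritative, so spamming buys nothing: early requests are dropped outright rather than banked, and clients only render the positions the server broadcasts. A minimal Python sketch of that check (handle_move, the players dict, and the constants are illustrative, not from the question):

        import time

        MOVE_COOLDOWN = 0.2  # one tile per 200 ms, enforced where hackers can't reach
        players = {}         # player_id -> {"x": int, "y": int, "last_move": float}

        DIRECTIONS = {"north": (0, -1), "south": (0, 1),
                      "west": (-1, 0), "east": (1, 0)}

        def handle_move(player_id, direction):
            """Server-authoritative step: the client only *requests* a move."""
            player = players[player_id]
            now = time.monotonic()
            if now - player["last_move"] < MOVE_COOLDOWN:
                return False  # too early: drop it, so spamming gains nothing
            dx, dy = DIRECTIONS[direction]
            player["x"] += dx
            player["y"] += dy
            player["last_move"] = now
            return True       # broadcast the authoritative position to everyone

    A client that spams simply has its extra requests ignored instead of queued, which removes the unfair advantage the EDIT describes.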

  • Rendering transparent textures in DirectX

    - by Vibhore Tanwer
    I am working on a DirectX application with WPF, and I am facing a problem with videos and images that contain transparent pixels: I have to draw a color in the background and then a video/image over it. What I expect is that the background color should be visible while the video plays, with only the non-transparent pixels drawn over it; what I get instead is a black background behind the video. I am using the following settings on the device to achieve alpha blending:

        device.RenderState.SourceBlend = Blend.SourceAlpha;
        device.RenderState.DestinationBlend = Blend.InvSourceAlpha;
        device.RenderState.AlphaBlendEnable = true;

    What am I missing here? What is the best approach to handling transparent videos? Any help will be of great value to me.

  • Multiplayer tile-based movement synchronization

    - by Mars
    I have to synchronize the movement of multiple players over the Internet, and I'm trying to figure out the safest way to do that. The game is tile-based; you can only move in 4 directions, and every move shifts the sprite 32 px (over time, of course).

    Now, if I simply send this move action to the server, which broadcasts it to all players while the walk key is held down, then to keep the walking smooth I have to get the next command to the server, and on to all clients, in time, or the movement stutters. I've seen this in other games, and it can get ugly pretty quickly, even without lag, so I'm wondering whether this is even a viable option. It does seem like a very good method for single player, though, since it's easy and straightforward (just take the next movement action in time and add it to a list), and you can easily add mouse movement (clicking on some tile) by adding a path to a queue that is walked along.

    The other idea that came to mind was sending the information that someone started moving in some direction, and again once they stop or change direction, together with the position, so that the sprite appears at the correct position, or rather so that the position can be corrected if it's wrong. This should (hopefully) only cause problems if someone is really lagging, in which case it's to be expected. For this to work I'd need some kind of queue, where incoming direction changes and so on are saved, so the sprite knows where to go after the current movement to the next tile is finished. This could actually work, but it sounds overcomplicated, although it might be the only way to do it without risking stutter: if a stop or direction change is received on the client side, it's saved in a queue and the character keeps moving to the specified coordinates before stopping or changing direction. If the new command comes in too late, there will be stuttering as well, of course...

    I'm having a hard time deciding on a method, and I couldn't really find any examples of this yet. My main problem is keeping the tile movement smooth, which is why other topics about synchronizing pixel-based movement aren't helping much. What is the "standard" way to do this?
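
    A minimal sketch of the queue idea from the second approach (Python; the class and the command layout are my own): each remote sprite finishes its current 32 px step before consuming the next buffered command, so a late packet causes a brief pause at a tile boundary rather than a teleport.

        from collections import deque

        TILE = 32  # pixels per tile step

        class RemoteSprite:
            def __init__(self, x, y):
                self.x, self.y = x, y       # current pixel position
                self.target = (x, y)        # end of the step in progress
                self.commands = deque()     # buffered (dx, dy) or ("sync", x, y)

            def push(self, command):
                self.commands.append(command)

            def update(self, speed):
                tx, ty = self.target
                if (self.x, self.y) != (tx, ty):
                    # Keep walking toward the current tile target.
                    self.x += max(-speed, min(speed, tx - self.x))
                    self.y += max(-speed, min(speed, ty - self.y))
                    return
                if self.commands:
                    cmd = self.commands.popleft()
                    if cmd[0] == "sync":    # authoritative position correction
                        _, self.x, self.y = cmd
                        self.target = (self.x, self.y)
                    else:                   # start the next tile step
                        dx, dy = cmd
                        self.target = (self.x + dx * TILE, self.y + dy * TILE)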

  • OpenGL directional light creating black spots

    - by AnonymousDeveloper
    I probably ought to start by saying that I suspect the problem is that one of my vectors is not in the correct "space", but I don't know for sure. I am having a strange problem with a directional light: when I move the camera away from (0.0, 0.0, 0.0), it creates tiny black spots that grow larger as the distance increases. I apologize ahead of time for the length of the code.

    Vertex shader:

        #version 410 core

        in vec3 vf_normal;
        in vec3 vf_bitangent;
        in vec3 vf_tangent;
        in vec2 vf_textureCoordinates;
        in vec3 vf_vertex;

        out vec3 tc_normal;
        out vec3 tc_bitangent;
        out vec3 tc_tangent;
        out vec2 tc_textureCoordinates;
        out vec3 tc_vertex;

        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform float vf_te_inner;
        uniform float vf_te_outer;

        void main()
        {
            tc_normal = vf_normal;
            tc_bitangent = vf_bitangent;
            tc_tangent = vf_tangent;
            tc_textureCoordinates = vf_textureCoordinates;
            tc_vertex = vf_vertex;
            gl_Position = vf_m_mvp * vec4(vf_vertex, 1.0);
        }

    Tessellation Control shader:

        #version 410 core

        layout (vertices = 3) out;

        in vec3 tc_normal[];
        in vec3 tc_bitangent[];
        in vec3 tc_tangent[];
        in vec2 tc_textureCoordinates[];
        in vec3 tc_vertex[];

        out vec3 te_normal[];
        out vec3 te_bitangent[];
        out vec3 te_tangent[];
        out vec2 te_textureCoordinates[];
        out vec3 te_vertex[];

        uniform float vf_te_inner;
        uniform float vf_te_outer;
        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        #define ID gl_InvocationID

        float getTessLevelInner(float distance0, float distance1)
        {
            float avgDistance = (distance0 + distance1) / 2.0;
            return clamp((vf_te_inner - avgDistance), 1.0, vf_te_inner);
        }

        float getTessLevelOuter(float distance0, float distance1)
        {
            float avgDistance = (distance0 + distance1) / 2.0;
            return clamp((vf_te_outer - avgDistance), 1.0, vf_te_outer);
        }

        void main()
        {
            te_normal[gl_InvocationID] = tc_normal[gl_InvocationID];
            te_bitangent[gl_InvocationID] = tc_bitangent[gl_InvocationID];
            te_tangent[gl_InvocationID] = tc_tangent[gl_InvocationID];
            te_textureCoordinates[gl_InvocationID] = tc_textureCoordinates[gl_InvocationID];
            te_vertex[gl_InvocationID] = tc_vertex[gl_InvocationID];

            float eyeToVertexDistance0 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[0], 1.0)).xyz);
            float eyeToVertexDistance1 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[1], 1.0)).xyz);
            float eyeToVertexDistance2 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[2], 1.0)).xyz);

            gl_TessLevelOuter[0] = getTessLevelOuter(eyeToVertexDistance1, eyeToVertexDistance2);
            gl_TessLevelOuter[1] = getTessLevelOuter(eyeToVertexDistance2, eyeToVertexDistance0);
            gl_TessLevelOuter[2] = getTessLevelOuter(eyeToVertexDistance0, eyeToVertexDistance1);
            gl_TessLevelInner[0] = getTessLevelInner(eyeToVertexDistance2, eyeToVertexDistance0);
        }

    Tessellation Evaluation shader:

        #version 410 core

        layout (triangles, equal_spacing, cw) in;

        in vec3 te_normal[];
        in vec3 te_bitangent[];
        in vec3 te_tangent[];
        in vec2 te_textureCoordinates[];
        in vec3 te_vertex[];

        out vec3 g_normal;
        out vec3 g_bitangent;
        out vec4 g_patchDistance;
        out vec3 g_tangent;
        out vec2 g_textureCoordinates;
        out vec3 g_vertex;

        uniform float vf_te_inner;
        uniform float vf_te_outer;
        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_displace;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2)
        {
            return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2;
        }

        vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2)
        {
            return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2;
        }

        float amplify(float d, float scale, float offset)
        {
            d = scale * d + offset;
            d = clamp(d, 0, 1);
            d = 1 - exp2(-2*d*d);
            return d;
        }

        float getDisplacement(vec2 t0, vec2 t1, vec2 t2)
        {
            float displacement = 0.0;
            vec2 textureCoordinates = interpolate2D(t0, t1, t2);
            vec2 vector = ((t0 + t1 + t2) / 3.0);
            float sampleDistance = sqrt((vector.x * vector.x) + (vector.y * vector.y));
            sampleDistance /= ((vf_te_inner + vf_te_outer) / 2.0);
            displacement += texture(vf_t_displace, textureCoordinates).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, -sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance,  sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance,  sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, -sampleDistance)).x;
            return (displacement / 5.0);
        }

        void main()
        {
            g_normal = normalize(interpolate3D(te_normal[0], te_normal[1], te_normal[2]));
            g_bitangent = normalize(interpolate3D(te_bitangent[0], te_bitangent[1], te_bitangent[2]));
            g_patchDistance = vec4(gl_TessCoord, (1.0 - gl_TessCoord.y));
            g_tangent = normalize(interpolate3D(te_tangent[0], te_tangent[1], te_tangent[2]));
            g_textureCoordinates = interpolate2D(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]);
            g_vertex = interpolate3D(te_vertex[0], te_vertex[1], te_vertex[2]);

            float displacement = getDisplacement(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]);
            float d2 = min(min(min(g_patchDistance.x, g_patchDistance.y), g_patchDistance.z), g_patchDistance.w);
            d2 = amplify(d2, 50, -0.5);
            g_vertex += g_normal * displacement * 0.1 * d2;
            gl_Position = vf_m_mvp * vec4(g_vertex, 1.0);
        }

    Geometry shader:

        #version 410 core

        layout (triangles) in;
        layout (triangle_strip, max_vertices = 3) out;

        in vec3 g_normal[3];
        in vec3 g_bitangent[3];
        in vec4 g_patchDistance[3];
        in vec3 g_tangent[3];
        in vec2 g_textureCoordinates[3];
        in vec3 g_vertex[3];

        out vec3 f_tangent;
        out vec3 f_bitangent;
        out vec3 f_eyeDirection;
        out vec3 f_lightDirection;
        out vec3 f_normal;
        out vec4 f_patchDistance;
        out vec4 f_shadowCoordinates;
        out vec2 f_textureCoordinates;
        out vec3 f_vertex;

        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        void main()
        {
            int index = 0;
            while (index < 3)
            {
                vec3 vertexNormal_cameraspace = vf_m_normal * normalize(g_normal[index]);
                vec3 vertexTangent_cameraspace = vf_m_normal * normalize(f_tangent);
                vec3 vertexBitangent_cameraspace = vf_m_normal * normalize(f_bitangent);
                mat3 TBN = transpose(mat3(
                    vertexTangent_cameraspace,
                    vertexBitangent_cameraspace,
                    vertexNormal_cameraspace
                ));
                vec3 eyeDirection = -(vf_m_view * vf_m_model * vec4(g_vertex[index], 1.0)).xyz;
                vec3 lightDirection = normalize(-(vf_m_view * vec4(vf_l_position, 1.0)).xyz);
                f_eyeDirection = TBN * eyeDirection;
                f_lightDirection = TBN * lightDirection;
                f_normal = normalize(g_normal[index]);
                f_patchDistance = g_patchDistance[index];
                f_shadowCoordinates = vf_m_depthBias * vec4(g_vertex[index], 1.0);
                f_textureCoordinates = g_textureCoordinates[index];
                f_vertex = (vf_m_model * vec4(g_vertex[index], 1.0)).xyz;
                gl_Position = gl_in[index].gl_Position;
                EmitVertex();
                index++;
            }
            EndPrimitive();
        }

    Fragment shader:

        #version 410 core

        in vec3 f_bitangent;
        in vec3 f_eyeDirection;
        in vec3 f_lightDirection;
        in vec3 f_normal;
        in vec4 f_patchDistance;
        in vec4 f_shadowCoordinates;
        in vec3 f_tangent;
        in vec2 f_textureCoordinates;
        in vec3 f_vertex;

        out vec4 fragColor;

        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        vec2 poissonDisk[16] = vec2[](
            vec2(-0.94201624, -0.39906216), vec2( 0.94558609, -0.76890725),
            vec2(-0.09418410, -0.92938870), vec2( 0.34495938,  0.29387760),
            vec2(-0.91588581,  0.45771432), vec2(-0.81544232, -0.87912464),
            vec2(-0.38277543,  0.27676845), vec2( 0.97484398,  0.75648379),
            vec2( 0.44323325, -0.97511554), vec2( 0.53742981, -0.47373420),
            vec2(-0.26496911, -0.41893023), vec2( 0.79197514,  0.19090188),
            vec2(-0.24188840,  0.99706507), vec2(-0.81409955,  0.91437590),
            vec2( 0.19984126,  0.78641367), vec2( 0.14383161, -0.14100790)
        );

        float random(vec3 seed, int i)
        {
            vec4 seed4 = vec4(seed, i);
            float dot_product = dot(seed4, vec4(12.9898, 78.233, 45.164, 94.673));
            return fract(sin(dot_product) * 43758.5453);
        }

        float amplify(float d, float scale, float offset)
        {
            d = scale * d + offset;
            d = clamp(d, 0, 1);
            d = 1 - exp2(-2.0 * d * d);
            return d;
        }

        void main()
        {
            vec3 lightColor = vf_l_color.xyz;
            float lightPower = vf_l_color.w;
            vec3 materialDiffuseColor = texture(vf_t_diffuse, f_textureCoordinates).xyz;
            vec3 materialAmbientColor = vec3(0.1, 0.1, 0.1) * materialDiffuseColor;
            vec3 materialSpecularColor = texture(vf_t_specular, f_textureCoordinates).xyz;
            vec3 n = normalize(texture(vf_t_normal, f_textureCoordinates).rgb * 2.0 - 1.0);
            vec3 l = normalize(f_lightDirection);
            float cosTheta = clamp(dot(n, l), 0.0, 1.0);
            vec3 E = normalize(f_eyeDirection);
            vec3 R = reflect(-l, n);
            float cosAlpha = clamp(dot(E, R), 0.0, 1.0);
            float visibility = 1.0;
            float bias = 0.005 * tan(acos(cosTheta));
            bias = clamp(bias, 0.0, 0.01);
            for (int i = 0; i < 4; i++)
            {
                float shading = (0.5 / 4.0);
                int index = i;
                visibility -= shading * (1.0 - texture(vf_t_shadow,
                    vec3(f_shadowCoordinates.xy + poissonDisk[index] / 3000.0,
                         (f_shadowCoordinates.z - bias) / f_shadowCoordinates.w)));
            }
            fragColor.xyz = materialAmbientColor +
                            visibility * materialDiffuseColor * lightColor * lightPower * cosTheta +
                            visibility * materialSpecularColor * lightColor * lightPower * pow(cosAlpha, 5);
            fragColor.w = texture(vf_t_diffuse, f_textureCoordinates).w;
        }

    The following three screenshots should be enough to give you an idea of the problem: before moving the camera, after moving the camera just a little, and after moving it to the center of the scene.

  • How to Effectively Create Bullet Patterns

    - by SoulBeaver
    I'm currently creating a top-down shooter like Touhou. The most important aspect of the game is that there are many diverse patterns and ways in which bullets are generated and shot at the player; see this video: http://www.youtube.com/watch?v=4Nb5Ohbt1Sg#start=0:60;end=9:53

    At the moment, I'm using a class Pattern, which has a series of steps for moving and shooting. However, I feel this method is quite laborious, as I have to create a new Pattern for each attack, and perhaps new Bullet classes that implement a certain behavior. This question received a comment suggesting I should look into BulletML for easy creation and storage of bullets with a specific pattern. It looks decent, but it led me to wonder: what other solutions should I take into consideration?

    Update: My current design is as follows. An example of an implemented pattern: my GigasPattern first executes a teleport, which moves Alice to a certain point (X, Y) on the screen. After this is completed, the pattern starts using the Mover to move the sprite around (whereas teleporting has separate effects and animation). These are of no concern, really, as they are quite simple. The Shooter also creates various Attacks, which are again classes that the Shooter can use to create various patterns of bullets, much like the one in the question I posted. Once the Mover has reached its destination, both it and the Shooter stop and return to an inactive state. The pattern completes, is removed by the AI, and a new one gets chosen.
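
    One alternative worth weighing alongside BulletML is to keep a single pattern interpreter and describe each attack as plain data, which is the same economy BulletML gets from XML. A rough Python sketch (the step names, the emitter object, and its methods are hypothetical):

        import math

        # A pattern is just data: (frame_to_fire, action, parameters).
        RING_BURST = [
            (0,  "aim",  {}),                           # snapshot angle to player
            (10, "ring", {"count": 16, "speed": 3.0}),
            (30, "ring", {"count": 16, "speed": 4.0}),
        ]

        def run_pattern(pattern, emitter, frame):
            """Fire any steps whose frame has arrived; call once per frame."""
            for when, action, params in pattern:
                if when == frame:
                    if action == "aim":
                        emitter.aim_at_player()
                    elif action == "ring":
                        for i in range(params["count"]):
                            angle = emitter.angle + 2 * math.pi * i / params["count"]
                            emitter.spawn_bullet(angle, params["speed"])

    New attacks then become new tables rather than new classes.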

  • Estimate angle to launch missile, maths question

    - by Jonathan
    I've been working on this for an hour or two now, and maths really isn't my strong suit, which is definitely not a good thing for a game programmer, but that shouldn't stop me enjoying a hobby, surely? After a few failed attempts I was hoping someone out there could help, so here's the situation.

    I'm trying to implement a bit of faked intelligence when the AI fires its missiles at a target in a 2D game world, by predicting the likely position the target will be in, given its current velocity and the time it will take the missile to reach it. I created an image to demonstrate my thinking: http://i.imgur.com/SFmU3.png which also contains the logic I use for accelerating the missile after launch. The ship that fires the missile can fire within a total 40-degree arc, 20 degrees to either side of itself, but this could well become variable.

    My current attempt was to break the space between the two lines into segments that match the target's width, then calculate the time it would take the missile to reach each segment. For each iteration we total up the per-frame values, which tells us the distance travelled, and that then just needs comparing to the distance to the segment:

        velocityAtFrame(n) = startVelocity * ((startVelocity + acceleration)^(n-1))

    So, for example, if we start at a velocity of 1 per frame with an acceleration of 0.1, then at frame 4 the formula gives 1 * (1.1^3) = 1.331.

    But I quickly realized I was getting lost when trying to put this into practice. Does this seem like a correct starting point, or am I going completely the wrong way about it? Any pointers would help me greatly. Maths really isn't my strong suit, so I get easily lost in these matters and don't even really know a good phrase to search for. So I guess in summary my question is more about the correct way to approach this problem; any additional code samples on top of that would be great, but I'm not averse to working out the complete code from helpful pointers.
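
    For the prediction itself, a common shortcut is fixed-point iteration rather than a segment-by-segment search: guess the flight time, look where the target will be then, recompute the flight time to that point, and repeat until it settles. A Python sketch under the question's assumptions (constant target velocity, per-frame multiplicative missile speed-up; all names are mine):

        def predict_intercept(shooter, target_pos, target_vel,
                              start_speed=1.0, accel=0.1, iterations=5):
            """Estimate where to aim: fixed-point iteration on time-to-target."""
            aim = target_pos
            for _ in range(iterations):
                distance = ((aim[0] - shooter[0]) ** 2 +
                            (aim[1] - shooter[1]) ** 2) ** 0.5
                # Frames needed at speeds v, v*1.1, v*1.21, ... (geometric growth,
                # matching the formula in the question).
                travelled, frames, speed = 0.0, 0, start_speed
                while travelled < distance and frames < 600:
                    travelled += speed
                    speed *= (1 + accel)
                    frames += 1
                # Where will the target be after that many frames?
                aim = (target_pos[0] + target_vel[0] * frames,
                       target_pos[1] + target_vel[1] * frames)
            return aim  # then clamp the firing angle to the 40-degree arc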

  • Infinite terrain shadows

    - by user35399
    I'm creating an infinite terrain engine, which generates the terrain with either fractals or noise. How can I make dynamic shadows for the sun on this terrain if I don't know in advance what will be rendered in front of the sun?

    My terrain: the sun is the only light, and it is directional. The terrain is generated on a plane which is positioned before the camera, frustum-culled, and fits the size of the viewing frustum. It is height-mapped with a generated noise texture and uses tessellation shaders. Video of the infinite terrain (to which I want to add dynamic shadows): http://www.youtube.com/watch?v=tk6yFwYusOs
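
    Because the terrain is unbounded, the usual approach is to shadow only what the camera can currently see: each frame, fit the shadow map's orthographic volume around the view frustum in light space, which is the core idea of cascaded shadow maps. A NumPy sketch of the fitting step (the function name and the caster margin are illustrative, not a definitive implementation):

        import numpy as np

        def light_space_bounds(frustum_corners_world, light_view):
            """Fit an ortho box around the visible region, in light space.

            frustum_corners_world: (8, 3) corners of the camera frustum.
            light_view: 4x4 view matrix looking along the sun's direction.
            """
            corners = np.hstack([frustum_corners_world, np.ones((8, 1))])
            in_light = (light_view @ corners.T).T[:, :3]
            lo, hi = in_light.min(axis=0), in_light.max(axis=0)
            lo[2] -= 500.0   # margin so terrain outside the box still casts in
            return lo, hi    # rebuild the ortho projection from these each frame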

  • Algorithm to simplify building/structural meshes

    - by morpheus
    I am looking for an algorithm to simplify the meshes of buildings or similar structures. EDIT: I had made a comment that Hoppe's algorithm tends to make meshes more and more spherical with simplification, but I am not sure about it, so I am deleting the comment. Buildings, in contrast, should tend to become more and more rectangular with increasing simplification. The D3DX extensions for D3D in version 9.0 (d3dx9.lib) used to have classes for progressive mesh simplification. See: http://doc.51windows.net/Directx9_SDK/?url=/directx9_sdk/graphics/reference/d3dx/functions/mesh/d3dxgeneratepmesh.htm and http://msdn.microsoft.com/en-us/library/windows/desktop/bb281243(v=vs.85).aspx

  • Stop animation playing automatically

    - by Starkers
    I've created an animation to animate a swinging mace. To do this I select the mace object in the scene pane, open the animation pane, and key it at a certain position at 0:00. I'm prompted to save this animation in my assets folder, which I do, as maceswing. I then rotate the mace, move the slider through time, and key it in a different position. I move the slider through time again, move the object to the original position, and key it. There are now three things in my assets folder: maceswing appears to be my animation, but I have no idea what Mace, Mace 1, and Mace 2 are. (I've been mucking around trying to get this working, so it's possible Mace 1 and Mace 2 are just duplicates of Mace. I still want to know what they are, though.)

    When I play my game, the mace is constantly swinging, even though I didn't apply maceswing to it. I can't stop it. People say there's some kind of tick box to stop it constantly animating, but I can't find it. My mace object only has an Animator component, and unticking that component doesn't stop the animation playing, so I have no idea where the animation is coming from, or what the Animator component actually does. I don't want this animation constantly playing; I only want it to play once when someone clicks a certain button:

        var Mace : Transform;

        if(Input.GetButtonDown('Fire1')){
            Mace.animation.Play('maceswing');
        };

    Upon clicking the 'Fire1' button, I get this error:

        MissingComponentException: There is no 'Animation' attached to the "Mace" game object, but a script is trying to access it. You probably need to add a Animation to the game object "Mace". Or your script needs to check if the component is attached before using it.

    There is no 'Animation' attached to the "Mace" game object, and yet I can see it swinging away constantly. In fact, I can't stop it! So what's causing the animation, if the game object doesn't have an 'Animation' attached to it?

  • Why do we use the Pythagorean theorem in game physics?

    - by Starkers
    I've recently learned that we use the Pythagorean theorem a lot in our physics calculations, and I'm afraid I don't really get the point. Here's an example from a book, used to make sure an object doesn't travel faster than a MAXIMUM_VELOCITY constant in the horizontal plane:

        MAXIMUM_VELOCITY = <any number>;
        SQUARED_MAXIMUM_VELOCITY = MAXIMUM_VELOCITY * MAXIMUM_VELOCITY;

        function animate(){
            var squared_horizontal_velocity = (x_velocity * x_velocity) + (z_velocity * z_velocity);
            if( squared_horizontal_velocity >= SQUARED_MAXIMUM_VELOCITY ){
                scalar = squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY;
                x_velocity = x_velocity / scalar;
                z_velocity = z_velocity / scalar;
            }
        }

    Let's try this with some numbers. An object is attempting to move 5 units in x and 5 units in z; it should only be able to move 5 units horizontally in total!

        MAXIMUM_VELOCITY = 5;
        SQUARED_MAXIMUM_VELOCITY = 5 * 5 = 25;

        function animate(){
            var x_velocity = 5;
            var z_velocity = 5;
            var squared_horizontal_velocity = (x_velocity * x_velocity) + (z_velocity * z_velocity);
            // = 5 * 5 + 5 * 5 = 25 + 25 = 50

            // if( squared_horizontal_velocity >= SQUARED_MAXIMUM_VELOCITY ){
            if( 50 >= 25 ){
                scalar = squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY;
                // scalar = 50 / 25 = 2.0
                x_velocity = x_velocity / scalar;   // 5 / 2.0 = 2.5
                z_velocity = z_velocity / scalar;   // 5 / 2.0 = 2.5
                // new_horizontal_velocity = x_velocity + z_velocity = 2.5 + 2.5 = 5
            }
        }

    Now this works well, but we can do the same thing without Pythagoras:

        MAXIMUM_VELOCITY = 5;

        function animate(){
            var x_velocity = 5;
            var z_velocity = 5;
            var horizontal_velocity = x_velocity + z_velocity;
            // = 5 + 5 = 10

            // if( horizontal_velocity >= MAXIMUM_VELOCITY ){
            if( 10 >= 5 ){
                scalar = horizontal_velocity / MAXIMUM_VELOCITY;
                // scalar = 10 / 5 = 2.0
                x_velocity = x_velocity / scalar;   // 5 / 2.0 = 2.5
                z_velocity = z_velocity / scalar;   // 5 / 2.0 = 2.5
                // new_horizontal_velocity = x_velocity + z_velocity = 2.5 + 2.5 = 5
            }
        }

    Benefits of doing it without Pythagoras:

    - Fewer lines.
    - Within those lines, it's easier to read what's going on.
    - ...and it takes less time to compute, as there are fewer multiplications.

    It seems to me like computers and humans get a better deal without the Pythagorean theorem! However, I'm sure I'm wrong, as I've seen Pythagoras' theorem in a number of reputable places, so I'd like someone to explain the benefit of using the Pythagorean theorem to a maths newbie.

    Does this have anything to do with unit vectors? To me, a unit vector is when we normalize a vector and turn it into a fraction. We do this by dividing the vector by a larger constant; I'm not sure what constant it is. The total size of the graph? Anyway, because it's a fraction, I take it a unit vector is basically a graph that can fit inside a 3D grid with the x-axis running from -1 to 1, the z-axis running from -1 to 1, and the y-axis running from -1 to 1. That's literally everything I know about unit vectors... not much :P And I fail to see their usefulness. Also, we're not really creating a unit vector in the above examples. Should I be determining the scalar like this?

        // a mathematical work-around of my own invention.
        // There may be a cleverer way to do this! I've also made up my own
        // terms, such as 'divisive_scalar', so don't bother googling
        var divisive_scalar = (squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY);
        // = 50 / 25 = 2

        var multiplicative_scalar = (divisive_scalar / (2*divisive_scalar));
        // = (2 / (2*2)) = (2 / 4) = 0.5

        x_velocity = x_velocity * multiplicative_scalar
        // = 5 * 0.5 = 2.5

    Again, I can't see why this is better, but it's more "unit-vector-y" because the multiplicative_scalar is a unit_vector? As you can see, I use words such as "unit-vector-y", so I'm really not a maths whiz! I'm also aware that unit vectors might have nothing to do with the Pythagorean theorem, so ignore all of this if I'm barking up the wrong tree. I'm a very visual person (3D modeller and concept artist by trade!) and I find diagrams and graphs really, really helpful, so as many as humanely possible, please!
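
    For contrast, here is a small Python check (my addition, not from the book) of why the two versions disagree: x + z is the "taxicab" length, not the straight-line distance the object actually covers, and the true clamp needs a square root rather than the raw squared ratio.

        import math

        MAX_V = 5.0

        def clamp_euclidean(x, z):
            speed = math.sqrt(x * x + z * z)   # Pythagoras: true straight-line speed
            if speed > MAX_V:
                x, z = x * MAX_V / speed, z * MAX_V / speed
            return x, z

        def clamp_taxicab(x, z):
            total = x + z                      # the questioner's alternative
            if total > MAX_V:
                x, z = x * MAX_V / total, z * MAX_V / total
            return x, z

        print(clamp_euclidean(5, 5))  # (3.54, 3.54): speed is exactly 5 diagonally
        print(clamp_taxicab(5, 5))    # (2.5, 2.5): actual speed only ~3.54, so
                                      # diagonal movement is unfairly punished

    Moving 5 in x and 5 in z really covers about 7.07 units of ground, and Pythagoras is what measures that.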

  • Understanding dot notation

    - by Starkers
    Here's my interpretation of dot notation:

        a = [2,6]
        b = [1,4]
        c = [0,8]

        a . b . c = (2*6) + (1*4) + (0*8)
                  = 12 + 4 + 0
                  = 16

    What is the significance of 16? Apparently it's a scalar. Am I right in thinking that a scalar is the number we times a unit vector by, to get a vector that has a scaled-up magnitude but the same direction as the unit vector? So again, what is the relevance of 16? When is it used? It's not the magnitude of all the vectors added up. The magnitude of all of them is calculated as follows:

        sqrt( ax * ax + ay * ay ) + sqrt( bx * bx + by * by ) + sqrt( cx * cx + cy * cy )
        sqrt( 2 * 2 + 6 * 6 ) + sqrt( 1 * 1 + 4 * 4 ) + sqrt( 0 * 0 + 8 * 8 )
        sqrt( 4 + 36 ) + sqrt( 1 + 16 ) + sqrt( 0 + 64 )
        sqrt( 40 ) + sqrt( 17 ) + sqrt( 64 )
        6.3 + 4.1 + 8
        10.4 + 8
        18.4

    So I don't really get the diagram that accompanies this. Attempting with sensible numbers:

        a = [1,0]
        b = [4,3]

        a . b = (1*0) + (4*3)
              = 0 + 12
              = 12

    So what exactly is a . b describing here? The magnitude of that vector? Because that isn't right: the 'a.b' vector = [4,0]

        sqrt( x*x + y*y )
        sqrt( 4*4 + 0*0 )
        sqrt( 16 + 0 )
        4

    So what is 12 describing?
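
    For reference alongside the question: the standard dot product pairs components across the two vectors (ax*bx + ay*by), and its geometric meaning is |a||b|cos θ. A short Python sketch of both facts (my addition, not the original poster's notation):

        import math

        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))  # pairs ACROSS vectors: ax*bx + ay*by

        a, b = (1, 0), (4, 3)
        print(dot(a, b))                             # 4, i.e. 1*4 + 0*3

        # Geometric meaning: dot(a, b) = |a| * |b| * cos(theta).
        # Here |a| = 1, |b| = 5, so cos(theta) = 4/5 = 0.8 -- the dot product is
        # the length of b's projection onto a (since a is a unit vector).
        theta = math.acos(dot(a, b) / (math.hypot(*a) * math.hypot(*b)))
        print(math.degrees(theta))                   # ~36.87 degrees between a and b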

  • How can I get into the educational market?

    - by mmyers
    I believe that my current game project is very well-suited for educational gaming; so well-suited, in fact, that I know of several different schools (one community college and at least one or two high schools) that have used versions of it at some time or another. And that's without any such marketing on my part. I'd like to expand on this part of the potential user base, but I have absolutely no experience in dealing with school administrations. How can I break into this market enough to be noticed? And on a side note, could marketing the game as educational kill the gamer market?

  • Image first loaded, then it isn't? (XNA)

    - by M0rgenstern
    I am very confused at the moment. I have the following class (just a part of it):

        public class GUIWindow
        {
            #region Static Fields
            // The standard image for windows.
            public static IngameImage StandardBackgroundImage;
            #endregion
        }

    IngameImage is just one of my own classes, but it contains a Texture2D (and some other things). In another class I load a list of GUIWindows by deserializing an XML file:

        public static GUI Initializazion(string pXMLPath, ContentManager pConMan)
        {
            GUI myGUI = pConMan.Load<GUI>(pXMLPath);
            GUIWindow.StandardBackgroundImage = new IngameImage(
                pConMan.Load<Texture2D>(myGUI.WindowStandardBackgroundImagePath),
                Vector2.Zero, 1024, 600, 1, 0, Color.White, 1.0f, true, false, false);
            System.Console.WriteLine("Image loaded? " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));
            myGUI.Windows = pConMan.Load<List<GUIWindow>>(myGUI.GUIFormatXMLPath);
            System.Console.WriteLine("Windows loaded");
            return myGUI;
        }

    Here this line:

        System.Console.WriteLine("Image loaded? " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));

    prints "true". To load the GUIWindows I need an "empty" constructor, which looks like this:

        public GUIWindow()
        {
            Name = "";
            Buttons = new List<Button>();
            ImagePath = "";
            System.Console.WriteLine("Image loaded? (In win) " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));
            //Image = new IngameImage(StandardBackgroundImage);
            //System.Console.WriteLine(
            //Image.IsActive = false;
            SelectedButton = null;
            IsActive = false;
        }

    As you can see, I commented lines out in the constructor, because otherwise this would crash. Here the line

        System.Console.WriteLine("Image loaded? (In win) " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));

    doesn't print anything; it just crashes with the following error message:

        Building content threw NullReferenceException: Object reference not set to an object instance.

    Why does this happen? Before the program wants to load the list, it prints "true". But in the constructor, that is, during the loading of the list, it prints "false". Can anybody please tell me why this happens and how to fix it?

  • Most suitable technology for browser games?

    - by Tingle
    I am thinking about making a 2D MMO which I would, in the long run, support on various platforms: desktop, Mac, browser, Android, and iOS. The server will be C++/Linux based, and the first client will run in the browser. I have done some research and found that WebGL and Flash 11 support hardware-accelerated rendering, and I saw some other options like plain HTML5 canvas drawing.

    So my question is: which technology should I use for such a project? My main goal is for users to have a hassle-free experience that uses whatever hardware acceleration their machine can give them, and the client should work on the most basic out-of-the-box PC that any casual PC or Mac user has. Another criterion is that it should be developer-friendly. I've messed with WebGL a bit, for example, and that would require writing an engine from scratch, which is acceptable but not preferred. Also, in the case of non-ActionScript options, which language is most preferred in terms of speed and flexibility? I'm not too fond of JavaScript, due to the garbage collector, but I have learned to work around it. Thank you for your time.

  • How can I customize an FPS game?

    - by monoceres
    I want to create a customized (modded) FPS game where I can change the look and feel of the game to match my intended theme. Some of the things I would like to do:

    - Create a custom map (terrain).
    - Add custom sound effects.
    - Change the AI (for example, running away instead of actively looking for combat).
    - Change menus and add some storyboard.
    - Script events in game (like a countdown until game over).
    - Change the models of the NPCs.

    What options do I have? Is there any platform/game/engine/whatever that allows one to do the things above in a reasonable way? I work as a programmer, so I'm not afraid of coding some parts of the project, but to save time it would be nice to work in some high-level way (like scripting or configuration files).

  • iPhone 3d Model format: .h file, .obj, or some other?

    - by T Reddy
    I'm beginning to write an iPhone game using OpenGL ES, and I've come across a problem when deciding what format my 3D models should be in. I've read (the link escapes me at the moment) that some developers prefer the models compiled into Objective-C .h files. Still, others prefer .obj, as it is more portable (i.e., for deployment on non-iPhone platforms). Various 3D game engines seem to support many(?) formats, but I'm not going to use any of these engines, as I would like to actually learn OpenGL ES. Am I putting myself at a disadvantage here by not using a packaged engine? Thanks!
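
    For a sense of why .obj stays popular, here is a minimal Python sketch of a loader (my own illustration; it handles only 'v' and 'f' records and assumes triangulated faces, with no error handling). The format is plain text, so it is easy to parse on any platform and convert into whatever vertex layout OpenGL ES needs:

        def load_obj(path):
            """Very small .obj reader: positions and triangle indices only."""
            vertices, faces = [], []
            with open(path) as f:
                for line in f:
                    parts = line.split()
                    if not parts:
                        continue
                    if parts[0] == "v":                   # vertex position: v x y z
                        vertices.append(tuple(float(x) for x in parts[1:4]))
                    elif parts[0] == "f":                 # face: 1-based indices,
                        idx = [int(p.split("/")[0]) - 1   #  possibly v/vt/vn form
                               for p in parts[1:4]]
                        faces.append(tuple(idx))
            return vertices, faces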

  • Previewing a Demo Level in Mobile for UDK?

    - by Reno Yeo
    I've already clicked on "Emulate Mobile Features" and everything has been compiled. I've also set the mobile previewer settings to the iPhone 4's dimensions and features. However, when I click on the mobile previewer, a new window pops up but goes into a "Not Responding" state after a while. Is there anything I'm doing wrong? To be honest, I'm afraid of the difficulty curve involved in learning UDK, but I am interested in developing a game with it.

  • Must all AI states be able to react to any event?

    - by Prog
    FSMs implemented with the State design pattern are a common way to design AI agents. I am familiar with the State design pattern and know how to implement it. How is this used in games to design AI agents? Consider a simplified class Monster, representing an AI agent:

        class Monster {
            State state; // other fields omitted

            public void update(){ // called every game-loop cycle
                state.execute(this);
            }

            public void setState(State state){
                this.state = state;
            }

            // irrelevant stuff omitted
        }

    There are several State subclasses implementing execute() differently. So far, classic State pattern. AI agents are subject to environmental effects and other objects communicating with them. For example, an AI agent might tell another AI agent to attack (i.e. agent.attack()). Or a fireball might tell an AI agent to fall down. This means that the agent must have methods such as attack() and fallDown(), or commonly some message-receiving mechanism to understand such messages. With an FSM, the current State of the agent should be the one taking care of such method calls, i.e. the agent delegates to the current state upon every event. Is this correct? If correct, how is this done? Are all states obligated by their superclass to implement methods such as attack(), fallDown(), etc., so the agent can always delegate to them on almost every event? Or is it done in some other way?
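
    One common resolution, sketched below in Python (my own code, with hypothetical names such as move_toward): give the State base class a do-nothing event hook, so concrete states override only the events they care about and the agent can always delegate safely, without every state having to implement attack(), fallDown(), and so on.

        class State:
            def execute(self, agent): pass
            def on_event(self, agent, event, **data): pass   # default: ignore

        class IdleState(State):
            def on_event(self, agent, event, **data):
                if event == "attack_ordered":
                    agent.set_state(AttackState(data["target"]))
                elif event == "hit_by_fireball":
                    agent.set_state(FallenState())

        class AttackState(State):
            def __init__(self, target): self.target = target
            def execute(self, agent): agent.move_toward(self.target)

        class FallenState(State):
            def execute(self, agent): pass                   # lie there until a timer ends it

        class Monster:
            def __init__(self): self.state = IdleState()
            def set_state(self, s): self.state = s
            def update(self): self.state.execute(self)       # called every game loop
            def notify(self, event, **data):                 # fireball.hit(monster) etc.
                self.state.on_event(self, event, **data)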

  • Am I missing something in these considerations about a leaderboard's database schema?

    - by misiMe
    I just finished developing a mobile game; now I want to implement an online leaderboard using MySQL. I'm wondering about the database's schema, and I thought about some possibilities. (I didn't go into detail with syntax, because my question is just about the logic of it.) In each case the table stores Name: string and Score: integer; I thought to ask for the name just the first time, and if you modify it in the future, it just calls an update to the name associated with your id.

    - Leaderboard(ID, Name, Score), where ID is an autoincrement integer and the primary key. With this kind of idea, maybe the DB will grow fast, because if you choose a different name for every score, it will add a new entry.
    - Leaderboard(PhoneId, Name, Score), where PhoneId is the unique identifier of the phone and the primary key. A con of this choice is that if you want to play on your friend's phone, you can't put a different name on the score.
    - Leaderboard(Name, Score), where Name is the primary key. With that, if you enter a name that already exists, you will be prompted to choose another one.

    Do you agree with these considerations? What would you do? Am I missing something?
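
    A sketch of the PhoneId option in code, using Python's built-in sqlite3 for brevity (MySQL syntax differs slightly; the column names and upsert policy are my own, and the ON CONFLICT form needs SQLite 3.24+): one row per device, updated in place, so neither renames nor repeated scores grow the table.

        import sqlite3

        db = sqlite3.connect("leaderboard.db")
        db.execute("""CREATE TABLE IF NOT EXISTS leaderboard (
                          device_id TEXT PRIMARY KEY,
                          name      TEXT NOT NULL,
                          score     INTEGER NOT NULL DEFAULT 0)""")

        def submit(device_id, name, score):
            # One row per device; rename freely, keep only the best score.
            db.execute("""INSERT INTO leaderboard (device_id, name, score)
                          VALUES (?, ?, ?)
                          ON CONFLICT(device_id) DO UPDATE SET
                              name  = excluded.name,
                              score = MAX(score, excluded.score)""",
                       (device_id, name, score))
            db.commit()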

  • Connecting 2 Vertices in 3DS Max?

    - by Reanimation
    How do you connect two vertices in 3ds Max 2013? I have two vertices which I wish to connect with a line to create an edge (actually several). I have tried everything I can think of and done several Google searches, but they only turn up the method for older versions, which says to use the "connect" button; I can't find the connect button in my version (see below). This is what my menu looks like, and these are the vertices I'm trying to connect. Basically, I've edited an STL file and deleted some edges and vertices; now I want to fill the gaps and triangulate what's left. Thanks.

  • Using NumPy arrays as 2D mathematical vectors?

    - by CorundumGames
    Right now I'm using lists as position, velocity, and acceleration vectors in my game. Is that a better option than using NumPy's arrays (not the standard library's array module) as vectors with float data types? I'm frequently adding vectors and changing their values directly, then placing the values from these vectors into a Pygame Rect. The vector is used for position (because Rects can't hold floats, so we can't go "between" pixels), and the Rect is used for rendering (because Pygame will only take Rects as rendering positions).
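
    A minimal sketch of the float-vector-plus-Rect split described in the question (Pygame's Rect and its topleft attribute are real API; the update function and constants are mine). NumPy arrays keep the math terse because += and scalar multiplication work element-wise:

        import numpy as np
        import pygame

        position = np.array([100.0, 200.0])   # float-precision state
        velocity = np.array([30.0, -12.5])    # pixels per second
        rect = pygame.Rect(0, 0, 16, 16)      # integer rect used only for drawing

        def update(dt):
            global position
            position += velocity * dt          # vector math stays in floats
            rect.topleft = (int(position[0]), int(position[1]))  # snap to render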

  • Get collision details from Rectangle.Intersects()

    - by Daniel Ribeiro
    I have a Breakout game in which, at some point, I detect the collision between the ball and the paddle with something like this:

        // Ball class
        rectangle.Intersects(paddle.Rectangle);

    Is there any way I can get the exact coordinates of the collision, or any details about it, with the current XNA API? I thought of doing some basic calculations, such as comparing the exact coordinates of each object at the moment of the collision. It would look something like this:

        // Ball class
        if((rectangle.X - paddle.Rectangle.X) < (paddle.Rectangle.Width / 2))
            // Collision happened on the left side
        else
            // Collision happened on the right side

    But I'm not sure this is the correct way to do it. Do you have any tips, or maybe an engine I could use to achieve that? Or even good coding practices using this method?
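
    Rectangle.Intersects only returns a bool, and XNA's static Rectangle.Intersect gives the overlapping rectangle; the side of impact is easy to derive yourself. A language-agnostic Python sketch (the rect layout and helper names are mine, not XNA API):

        def intersection_details(ball, paddle):
            """Each rect is (x, y, w, h). Returns the overlap box and the hit side."""
            bx, by, bw, bh = ball
            px, py, pw, ph = paddle
            left,  right  = max(bx, px), min(bx + bw, px + pw)
            top,   bottom = max(by, py), min(by + bh, py + ph)
            if left >= right or top >= bottom:
                return None                      # no collision
            overlap = (left, top, right - left, bottom - top)
            # The shallower axis of penetration tells you which face was struck.
            if right - left < bottom - top:
                side = "left" if bx < px else "right"
            else:
                side = "top" if by < py else "bottom"
            return overlap, side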

  • Game maps with "counties" that are split along lines that aren't necessarily straight

    - by pm_2
    I want to create a game environment that supports a 2D map. This is a really basic map, but it must be split along lines that are not necessarily straight; imagine a country with county boundaries. I then want to be able to detect drag/drop events within these counties. What I'm really looking for here is a pointer to where to start on this (how it has been done before, and whether any existing libraries are out there), as I'm sure that what I'm trying to do is not new, although I can't find a beginner's guide for it anywhere.
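
    The usual building block for hit-testing irregular regions like counties is a point-in-polygon test over each region's border. A standard ray-casting sketch in Python (the county data layout is my own illustration):

        def point_in_polygon(x, y, polygon):
            """Ray casting: count edge crossings of a horizontal ray from (x, y)."""
            inside = False
            n = len(polygon)
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                crosses = (y1 > y) != (y2 > y)
                if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                    inside = not inside
            return inside

        counties = {"Wessex": [(0, 0), (120, 10), (90, 80), (10, 60)]}

        def county_at(x, y):
            for name, border in counties.items():
                if point_in_polygon(x, y, border):
                    return name
            return None   # dropped outside every county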

  • Working out of a vertex array for destructible objects

    - by bobobobo
    I have diamond-shaped polygonal bullets, and there are lots of them on the screen. I did not want to create a vertex array for each, so I packed them all into a single vertex array, and they're all drawn at once:

        | bullet1.xyz | bullet1.rgb | bullet2.xyz | bullet2.rgb |

    This is great for performance. Each bullet keeps pointers into the shared buffer:

        struct Bullet
        {
            vector<Vector3f*> verts; // pointers into the vertex buffer
        };

    This works fine; the bullets can move and do collision detection, all while having their data in one place. Except when a bullet "dies": then you have to clear a slot and pack all the bullets towards the beginning of the array. Is this a good approach to handling lots of low-poly objects? How else would you do it?
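
    A common answer is swap-with-last removal: copy the final live bullet's data into the dead slot and shrink the live count, so the array stays packed without shifting everything. A Python sketch of the idea (the real code would do the same slice copy on the C++ vertex buffer; names are mine):

        class BulletBatch:
            FLOATS_PER_BULLET = 4 * 6            # 4 verts * (xyz + rgb) per diamond

            def __init__(self, capacity):
                self.data = [0.0] * (capacity * self.FLOATS_PER_BULLET)
                self.count = 0                   # live bullets; draw only this many

            def kill(self, index):
                """O(1) removal: overwrite the dead slot with the last live bullet."""
                last = self.count - 1
                stride = self.FLOATS_PER_BULLET
                if index != last:
                    self.data[index * stride:(index + 1) * stride] = \
                        self.data[last * stride:(last + 1) * stride]
                self.count -= 1                  # the tail slot is now free for reuse

    The caveat with the Bullet-holds-pointers design above: after the swap, the moved bullet's pointers must be re-pointed at its new slot.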

  • Problem creating levels using inherited classes/polymorphism

    - by Adam
    I'm trying to write my level classes by having a base class that each level class inherits from... The base class uses pure virtual functions. My base class is only going to be used as a vector that'll have the inherited level classes pushed onto it. This is what my code looks like at the moment; I've tried various things and get the same result (segmentation fault).

        //level.h
        class Level
        {
        protected:
            Mix_Music *music;
            SDL_Surface *background;
            SDL_Surface *background2;
            vector<Enemy> enemy;
            bool loaded;
            int time;
        public:
            Level();
            virtual ~Level();
            int bgX, bgY;
            int bg2X, bg2Y;
            int width, height;
            virtual void load();
            virtual void unload();
            virtual void update();
            virtual void draw();
        };

        //level.cpp
        Level::Level()
        {
            bgX = 0;
            bgY = 0;
            bg2X = 0;
            bg2Y = 0;
            width = 2048;
            height = 480;
            loaded = false;
            time = 0;
        }

        Level::~Level()
        {
        }

        //virtual functions are empty...

    I'm not sure exactly what I'm supposed to include in the inherited class structure, but this is what I have at the moment:

        //level1.h
        class Level1: public Level
        {
        public:
            Level1();
            ~Level1();
            void load();
            void unload();
            void update();
            void draw();
        };

        //level1.cpp
        Level1::Level1()
        {
        }

        Level1::~Level1()
        {
            enemy.clear();
            Mix_FreeMusic(music);
            SDL_FreeSurface(background);
            SDL_FreeSurface(background2);
            music = NULL;
            background = NULL;
            background2 = NULL;
            Mix_CloseAudio();
        }

        void Level1::load()
        {
            music = Mix_LoadMUS("music/song1.xm");
            background = loadImage("image/background.png");
            background2 = loadImage("image/background2.png");
            Mix_OpenAudio(48000, MIX_DEFAULT_FORMAT, 2, 4096);
            Mix_PlayMusic(music, -1);
        }

        void Level1::unload()
        {
        }

        //functions have level-specific code in them...

    Right now, for testing purposes, I just have the main loop declare Level1 level1; and use its functions, but when I run the game I get a segmentation fault. This is the first time I've tried writing inherited classes, so I know I'm doing something wrong, but I can't seem to figure out what exactly.
