Search Results

Search found 25550 results on 1022 pages for 'mere development'.


  • How was 20Q made?

    - by Dan the Man
    Ever since I was a kid, I've wondered how the 20Q electronic game was made. In this game, which is its own handheld device, you think of an object, thing, or animal (e.g. a potato or a donkey). Once you've mentally chosen your thing, the device asks a series of questions, such as: Is it larger than a loaf of bread? Is it found outdoors? Is it used for recreation? You can answer each question with yes, no, maybe, or unknown. I always imagined it working with immense nested conditionals (if statements), but that seems unlikely, since such code would be terribly difficult to write and maintain. I'm not looking for a discussion, as SE doesn't allow that; I'm looking for concrete knowledge or solutions.
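
    A common way to build this kind of guesser is a binary decision tree that learns a new object every time it loses (the commercial 20Q device is reported to use a trained neural network instead). A minimal sketch, only handling yes/no; every name here is illustrative, not from the actual product:

        using System;

        class Node
        {
            public string Text;   // a question at internal nodes, an object name at leaves
            public Node Yes, No;
            public bool IsLeaf => Yes == null && No == null;
        }

        class TwentyQuestions
        {
            static bool Ask(string q)
            {
                Console.Write(q + " (y/n) ");
                return Console.ReadLine().Trim().ToLower().StartsWith("y");
            }

            static void Main()
            {
                // Seed tree: one question, two guesses. It grows every time it loses.
                var root = new Node
                {
                    Text = "Is it found outdoors?",
                    Yes = new Node { Text = "a donkey" },
                    No  = new Node { Text = "a potato" }
                };

                var node = root;
                while (!node.IsLeaf)
                    node = Ask(node.Text) ? node.Yes : node.No;

                if (Ask("Is it " + node.Text + "?")) { Console.WriteLine("Got it!"); return; }

                // Learn: turn the wrong leaf into a question that separates the
                // player's object from the old guess.
                Console.Write("What was it? ");
                string answer = Console.ReadLine();
                Console.Write("Give a yes/no question that is true for " + answer + ": ");
                node.Yes = new Node { Text = answer };
                node.No  = new Node { Text = node.Text };
                node.Text = Console.ReadLine();
            }
        }

    Supporting "maybe" and "unknown" would mean descending several branches and scoring candidates, which is where a statistical model starts to beat a plain tree.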

    Read the article

  • Most efficient AABB vs ray collision algorithms

    - by Asher Einhorn
    Is there a known 'most efficient' algorithm for AABB vs ray collision detection? I recently stumbled across Arvo's AABB vs sphere collision algorithm, and I am wondering if there is a similarly noteworthy algorithm for this. One must-have condition: I need the option of querying the result for the distance from the ray's origin to the point of collision. Having said that, if there is another, faster algorithm that does not return distance, then posting it in addition to one that does would be very helpful indeed. Please also state what the function's return argument is and how you use it to report the distance or a 'no-collision' case. For example, does it have an out parameter for the distance as well as a bool return value? Or does it simply return a float for the distance, with a value of -1 for no collision? (For those that don't know: AABB = axis-aligned bounding box.)
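
    The usual answer is the "slab" test: clip the ray against the three pairs of axis-aligned planes and keep the running parametric overlap. A minimal C# sketch with the distance as an out parameter (one stated caveat: when a direction component is exactly zero and the origin lies exactly on a slab boundary, the 0 times infinity case yields NaN, which tuned versions guard against):

        using System;
        using System.Numerics;

        static class RayAabb
        {
            // Returns true on hit; 'distance' is the parametric distance from the
            // ray origin to the entry point (or to the exit point if the origin is
            // inside the box). 'dir' is assumed normalized, so the parameter equals
            // world-space distance.
            public static bool Intersect(Vector3 origin, Vector3 dir,
                                         Vector3 boxMin, Vector3 boxMax,
                                         out float distance)
            {
                float tMin = float.NegativeInfinity, tMax = float.PositiveInfinity;
                float[] o  = { origin.X, origin.Y, origin.Z };
                float[] d  = { dir.X, dir.Y, dir.Z };
                float[] lo = { boxMin.X, boxMin.Y, boxMin.Z };
                float[] hi = { boxMax.X, boxMax.Y, boxMax.Z };

                for (int i = 0; i < 3; i++)
                {
                    float inv = 1.0f / d[i];            // +-infinity when d[i] == 0
                    float t1 = (lo[i] - o[i]) * inv;
                    float t2 = (hi[i] - o[i]) * inv;
                    if (t1 > t2) (t1, t2) = (t2, t1);   // order the slab entry/exit
                    tMin = Math.Max(tMin, t1);
                    tMax = Math.Min(tMax, t2);
                }

                distance = tMin >= 0 ? tMin : tMax;     // tMin < 0: origin inside the box
                return tMax >= Math.Max(tMin, 0.0f);    // overlap exists and is in front
            }
        }

    Skipping the distance buys little here: the same tMin/tMax bookkeeping is what decides the hit in the first place.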

    Read the article

  • How can I simulate a rigid body bounced from a wall in 3D world?

    - by HyperGroups
    How can I simulate a rigid sword bouncing off a wall and hitting the ground, as in the physical world? I want to use this for a simple animation. I can detect the shape and size of the sword (which may be needed for the bounce). Rotation can be controlled by quaternions/matrices/Euler angles. The sword should flip end over end, rotating as it flies to the ground. How can I simulate this physical process? Maybe what I need is an equation and some parameters? I need the data so I can combine it into my movie file; I use Mathematica to generate the movie file (and given the data, I could also export it to a 3ds Max script, for example).
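
    The standard rigid-body treatment splits this into free flight plus an impulsive bounce at each contact. A hedged summary of the usual equations (restitution $e$, contact normal $\mathbf{n}$, offset $\mathbf{r}$ from the center of mass to the contact point, mass $m$, inertia tensor $I$, orientation quaternion $q$ with $\omega$ treated as a pure quaternion):

        \begin{aligned}
        &\text{Free flight:} && \dot{\mathbf{x}} = \mathbf{v}, \quad \dot{\mathbf{v}} = \mathbf{g}, \quad \dot{q} = \tfrac{1}{2}\,\omega\, q \\
        &\text{Contact-point velocity:} && \mathbf{v}_p = \mathbf{v} + \omega \times \mathbf{r} \\
        &\text{Impulse magnitude:} && j = \frac{-(1+e)\,(\mathbf{v}_p \cdot \mathbf{n})}
            {\tfrac{1}{m} + \mathbf{n} \cdot \big( (I^{-1}(\mathbf{r} \times \mathbf{n})) \times \mathbf{r} \big)} \\
        &\text{Response:} && \mathbf{v} \mathrel{+}= \tfrac{j}{m}\,\mathbf{n}, \qquad
            \omega \mathrel{+}= I^{-1}(\mathbf{r} \times j\,\mathbf{n})
        \end{aligned}

    Integrating these with a small fixed time step in Mathematica gives the position and orientation samples that can be fed straight into the movie export.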

    Read the article

  • Drag camera/view in a 3D world

    - by Dono
    I'm trying to make a draggable view in a 3D world. Currently I've made it using the mouse position on the screen, but then the distance traveled by the mouse is not equal to the distance traveled in the 3D world. So I've tried this instead: compute a ray from the mouse position into the 3D world, calculate its intersection with the ground, take the difference between the old and new intersection points, and translate the camera by that difference. The problem with this method: the ray is computed from the current camera position; I move the camera; I compute the new ray from the new camera position; and the difference between the old and new rays is now invalid. Graphically, my camera never stops jumping between the previous and new positions. How can I make a draggable camera with another solution? Thanks!
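
    The usual fix for this feedback loop is to anchor the drag: store the ground point grabbed on mouse-down, and each frame translate the camera so that the point currently under the cursor coincides with that anchor. After the translation, the next frame's ray through the same cursor position hits the anchor again, so the motion is stable. A sketch; the Ray struct and the y = 0 ground plane are illustrative assumptions:

        using System;
        using System.Numerics;

        struct Ray { public Vector3 Origin, Direction; }

        class DragCamera
        {
            public Vector3 Position;
            private Vector3 anchor;             // ground point grabbed on mouse-down

            static Vector3? HitGround(Ray ray)  // intersect with the plane y = 0
            {
                if (Math.Abs(ray.Direction.Y) < 1e-6f) return null;   // parallel ray
                float t = -ray.Origin.Y / ray.Direction.Y;
                return t > 0 ? ray.Origin + t * ray.Direction : (Vector3?)null;
            }

            public void OnMouseDown(Ray mouseRay) =>
                anchor = HitGround(mouseRay) ?? anchor;

            public void OnMouseDrag(Ray mouseRay)
            {
                var hit = HitGround(mouseRay);
                if (hit.HasValue)
                    Position += anchor - hit.Value;   // keep the grabbed point under the cursor
            }
        }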

    Read the article

  • Rendering Texture Quad to Screen or FBO (OpenGL ES)

    - by Usman.3D
    I need to render a texture to the iOS device's screen or to a render-to-texture frame buffer object, but it does not show any texture; it's all black. (I am loading the texture from an image myself for testing purposes.)

        //Load texture data
        UIImage *image = [UIImage imageNamed:@"textureImage.png"];
        GLuint width = FRAME_WIDTH;
        GLuint height = FRAME_HEIGHT;

        //Create context
        void *imageData = malloc(height * width * 4);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace,
                                                     kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(colorSpace);

        //Prepare image
        CGContextClearRect(context, CGRectMake(0, 0, width, height));
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);

        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    The simple textured-quad drawing code is mentioned here:

        //Bind texture, bind render-to-texture FBO, and then draw the quad
        const float quadPositions[] = {  1.0,  1.0, 0.0,
                                        -1.0,  1.0, 0.0,
                                        -1.0, -1.0, 0.0,
                                        -1.0, -1.0, 0.0,
                                         1.0, -1.0, 0.0,
                                         1.0,  1.0, 0.0 };
        const float quadTexcoords[] = { 1.0, 1.0,
                                        0.0, 1.0,
                                        0.0, 0.0,
                                        0.0, 0.0,
                                        1.0, 0.0,
                                        1.0, 1.0 };

        // stop using VBO
        glBindBuffer(GL_ARRAY_BUFFER, 0);

        // setup buffer offsets
        glVertexAttribPointer(ATTRIB_VERTEX, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), quadPositions);
        glVertexAttribPointer(ATTRIB_TEXCOORD0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), quadTexcoords);

        // ensure the proper arrays are enabled
        glEnableVertexAttribArray(ATTRIB_VERTEX);
        glEnableVertexAttribArray(ATTRIB_TEXCOORD0);

        //Bind texture and render-to-texture FBO.
        glBindTexture(GL_TEXTURE_2D, GLid);
        //Actually wanted to render it to the render-to-texture FBO, but now testing directly on the default FBO.
        //glBindFramebuffer(GL_FRAMEBUFFER, textureFBO[pixelBuffernum]);

        // draw
        glDrawArrays(GL_TRIANGLES, 0, 2 * 3);

    What am I doing wrong in this code? P.S. I'm not familiar with shaders yet, so it is difficult for me to make use of them right now.

    Read the article

  • I Don't Understand Anything About Randomly Generated Worlds [closed]

    - by Alex Larsen
    What tools do I need to make a Minecraft-like generated world? I've heard about Perlin noise and Simplex noise, but I don't understand anything about them. So far all I've found on the internet is a Simplex port for C#, and all it exposes is functions, so this is all I get:

        Console.WriteLine(Noise.Generate(SomeNumber, SomeNumber, SomeNumber));

    which outputs random floats. I'm really lost; I don't understand the whole randomly generated worlds concept. Can someone help me? And even if I use the noise function, I don't understand how to use it.
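
    The core idea: noise is not random per call; it is a smooth function of its inputs, so nearby coordinates give nearby values. Sample it over a grid and treat each value as a terrain height. A hedged sketch against the same kind of C# Simplex port, assuming Noise.Generate(x, y) returns roughly -1..1:

        // Build a 64x64 heightmap from 2D simplex noise.
        // Scale and MaxHeight are arbitrary tuning knobs, not magic values.
        const int Size = 64, MaxHeight = 32;
        const float Scale = 0.03f;                     // smaller => smoother hills

        int[,] height = new int[Size, Size];
        for (int x = 0; x < Size; x++)
            for (int z = 0; z < Size; z++)
            {
                float n = Noise.Generate(x * Scale, z * Scale);    // ~ -1..1
                height[x, z] = (int)((n + 1f) * 0.5f * MaxHeight); // remap to 0..MaxHeight
            }

        // A Minecraft-like world then fills each column with blocks up to its height:
        //   for (int y = 0; y <= height[x, z]; y++) world[x, y, z] = Block.Dirt;

    Layering several noise samples at different scales ("octaves") is what produces the mix of large hills and small bumps most of these worlds have.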

    Read the article

  • Coordinate and positioning problem on iOS with cocos2d-x

    - by Vexille
    I'm using cocos2d-x together with Marmalade, and running some tests and tutorials before starting an actual project with them. So far things are working reasonably well on the Windows simulator, Android, and even BlackBerry's PlayBook, but on iOS devices (iPhone and iPad) the positioning seems to be off. To make things clearer, I put together a scene that just draws an image in the middle of the screen. It worked as expected everywhere else, but not on an iPhone. To get the coordinates for the center of the screen I'm using the VisibleRect class from the TestCpp sample. It just uses sharedOpenGLView to get the visible size and visible origin, and calculates the center from that.

        CCSprite* test = CCSprite::create("Ball.png", CCRectMake(0, 0, 80, 80));
        test->setPosition(ccp(VisibleRect::center().x, VisibleRect::center().y));
        this->addChild(test);

    I also have a noBorder policy set in AppDelegate:

        CCEGLView::sharedOpenGLView()->setDesignResolutionSize(designSize.width, designSize.height, kResolutionNoBorder);

    One funny thing: I tried deploying the TestCpp sample project to some iOS devices, and it worked reasonably well on the iPhone, but on the iPad the application was drawn on only a small portion of the screen - just like what happened on the iPhone when I tried the ShowAll policy.

    Read the article

  • Best way to go for simple online multi-player games?

    - by Mr_CryptoPrime
    I want to create a trivia game for my website. The graphic design does not have to be too fancy, probably no more advanced than a typical Flash game. It needs to be secure, because I want users to be able to play for real money. It also needs to run fast, so users don't spend their time frustrated with the game freezing. Compatibility, as with almost all online products, is key because of the large target market. I am most acquainted with Java, but I don't want to do it in Java if there is something much better. I am assuming I will have to use a variety of languages for everything to come together. If someone could point out the main structure of everything so I can get a good start, that would be great!

    1) Language choice for simple, secure online multiplayer games?
    2) Perhaps a database like MySQL, stored on a secure server, for the trivia questions?
    3) Free educational resources, and even simpler projects to practice on?

    Any ideas or suggestions would be helpful... Thanks!

    Read the article

  • What are some good resources for creating a game engine in XNA?

    - by Glasser
    I'm currently a student game programmer working on an indie project. We have a team of eleven people (five programmers, four artists, and two audio designers) aboard, all working hard to help design this game. We've been meeting for months, and so far we have a pretty fleshed-out game design document as well as a lot of audio/visual concept art. Our programmers are itching to make progress on our own end. Each person on our programming team is well versed in C++ and very familiar with C#. We have enough experience and skill that we're confident we will be successful with our game, and we're looking to build our own game engine in XNA, as it seems like it would be worth our time and effort in the end. The game itself will be a 2D beat 'em up to be released on Xbox Live and the PC. Its play style will be similar to that of Castle Crashers or Scott Pilgrim vs. The World. We want to design the game engine to let us better integrate our assets into the game and to simplify the creation of design elements/mechanics. Between our programmers we currently have books such as "XNA 4.0" and "Game Coding Complete, Third Edition," but we'd still like more information on both XNA and (especially) building a game engine from scratch. What other good books, websites, or resources could we use to further map out and program our game engine?

    Read the article

  • Infinite terrain shadows

    - by user35399
    I'm creating an infinite terrain engine that generates the terrain with either fractals or noise. How can I make dynamic shadows from the sun on this terrain if I don't know in advance what will be rendered in front of the sun? About my terrain: the sun is the only light and it is directional; the terrain is generated on a plane that is positioned in front of the camera, frustum culled, and fitted to the size of the viewing frustum. It is height-mapped with a generated noise texture and uses tessellation shaders. Video (dynamic shadows with the infinite terrain): http://www.youtube.com/watch?v=tk6yFwYusOs
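
    A common answer for a directional light over unbounded terrain is cascaded shadow maps: you never shadow the whole world, only the visible view frustum, split into a few depth ranges with one shadow map fitted to each slice. A hedged C# sketch of the usual "practical" split scheme (the lambda blend between logarithmic and uniform splits is a tuning knob):

        using System;

        static class Csm
        {
            // Practical split scheme (Zhang et al.): blend logarithmic and uniform
            // partitions of [near, far]. lambda = 1 is fully logarithmic.
            public static float[] SplitDistances(int cascades, float near, float far, float lambda = 0.75f)
            {
                var splits = new float[cascades + 1];
                for (int i = 0; i <= cascades; i++)
                {
                    float p = i / (float)cascades;
                    float log = near * MathF.Pow(far / near, p);   // logarithmic split
                    float uni = near + (far - near) * p;           // uniform split
                    splits[i] = lambda * log + (1 - lambda) * uni;
                }
                return splits;
            }
        }

    Each frame, every cascade's light-space projection is refitted to its frustum slice, so whatever terrain happens to be in view is exactly what casts and receives shadow.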

    Read the article

  • How to implement Fog of War with a shader?

    - by Cambrano
    Okay, I'm creating an RTS game and want to implement an Age of Empires-like fog of war (FOW). That means a tile (or pixel) can be:

    0% transparent (unexplored)
    50% transparent black (explored but not in view range)
    100% transparent (explored and in view range)

    RTS means I'll have many explorers (NPCs, buildings, ...). So I have a 2D array of bytes, byte[,] explored, where the byte value corresponds to the transparency. The question is: how do I pass this array to my shader? I think it is not possible to pass an entire array, so what technique should I use to let my shader know whether a pixel/tile is visible or not?
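
    The standard technique is to upload the array as a one-channel texture and sample it in the pixel shader, multiplying the sampled value into the fragment's color or alpha. A sketch assuming XNA; the effect parameter name "FogMap" is illustrative:

        // Flatten the byte[,] into the row-major byte[] that SetData expects.
        byte[] flat = new byte[width * height];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                flat[y * width + x] = explored[x, y];   // 0 = unexplored ... 255 = in view range

        // One byte per tile; sampled in HLSL with tex2D(FogMapSampler, uv).
        Texture2D fogTexture = new Texture2D(GraphicsDevice, width, height, false, SurfaceFormat.Alpha8);
        fogTexture.SetData(flat);
        fogEffect.Parameters["FogMap"].SetValue(fogTexture);

    Updating visibility then just means rewriting the byte array and calling SetData again; at one byte per tile this is a small per-frame upload, and bilinear sampling gives you soft fog edges for free.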

    Read the article

  • OpenGL directional light creating black spots

    - by AnonymousDeveloper
    I probably ought to start by saying that I suspect the problem is that one of my vectors is not in the correct "space", but I don't know for sure. I am having a strange problem with a directional light. When I move the camera away from (0.0, 0.0, 0.0) it creates tiny black spots that grow larger as the distance increases. I apologize ahead of time for the length of the code.

    Vertex shader:

        #version 410 core
        in vec3 vf_normal;
        in vec3 vf_bitangent;
        in vec3 vf_tangent;
        in vec2 vf_textureCoordinates;
        in vec3 vf_vertex;
        out vec3 tc_normal;
        out vec3 tc_bitangent;
        out vec3 tc_tangent;
        out vec2 tc_textureCoordinates;
        out vec3 tc_vertex;
        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform float vf_te_inner;
        uniform float vf_te_outer;

        void main()
        {
            tc_normal = vf_normal;
            tc_bitangent = vf_bitangent;
            tc_tangent = vf_tangent;
            tc_textureCoordinates = vf_textureCoordinates;
            tc_vertex = vf_vertex;
            gl_Position = vf_m_mvp * vec4(vf_vertex, 1.0);
        }

    Tessellation control shader:

        #version 410 core
        layout (vertices = 3) out;
        in vec3 tc_normal[];
        in vec3 tc_bitangent[];
        in vec3 tc_tangent[];
        in vec2 tc_textureCoordinates[];
        in vec3 tc_vertex[];
        out vec3 te_normal[];
        out vec3 te_bitangent[];
        out vec3 te_tangent[];
        out vec2 te_textureCoordinates[];
        out vec3 te_vertex[];
        uniform float vf_te_inner;
        uniform float vf_te_outer;
        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        #define ID gl_InvocationID

        float getTessLevelInner(float distance0, float distance1)
        {
            float avgDistance = (distance0 + distance1) / 2.0;
            return clamp((vf_te_inner - avgDistance), 1.0, vf_te_inner);
        }

        float getTessLevelOuter(float distance0, float distance1)
        {
            float avgDistance = (distance0 + distance1) / 2.0;
            return clamp((vf_te_outer - avgDistance), 1.0, vf_te_outer);
        }

        void main()
        {
            te_normal[gl_InvocationID] = tc_normal[gl_InvocationID];
            te_bitangent[gl_InvocationID] = tc_bitangent[gl_InvocationID];
            te_tangent[gl_InvocationID] = tc_tangent[gl_InvocationID];
            te_textureCoordinates[gl_InvocationID] = tc_textureCoordinates[gl_InvocationID];
            te_vertex[gl_InvocationID] = tc_vertex[gl_InvocationID];
            float eyeToVertexDistance0 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[0], 1.0)).xyz);
            float eyeToVertexDistance1 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[1], 1.0)).xyz);
            float eyeToVertexDistance2 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[2], 1.0)).xyz);
            gl_TessLevelOuter[0] = getTessLevelOuter(eyeToVertexDistance1, eyeToVertexDistance2);
            gl_TessLevelOuter[1] = getTessLevelOuter(eyeToVertexDistance2, eyeToVertexDistance0);
            gl_TessLevelOuter[2] = getTessLevelOuter(eyeToVertexDistance0, eyeToVertexDistance1);
            gl_TessLevelInner[0] = getTessLevelInner(eyeToVertexDistance2, eyeToVertexDistance0);
        }

    Tessellation evaluation shader:

        #version 410 core
        layout (triangles, equal_spacing, cw) in;
        in vec3 te_normal[];
        in vec3 te_bitangent[];
        in vec3 te_tangent[];
        in vec2 te_textureCoordinates[];
        in vec3 te_vertex[];
        out vec3 g_normal;
        out vec3 g_bitangent;
        out vec4 g_patchDistance;
        out vec3 g_tangent;
        out vec2 g_textureCoordinates;
        out vec3 g_vertex;
        uniform float vf_te_inner;
        uniform float vf_te_outer;
        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_displace;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2)
        {
            return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2;
        }

        vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2)
        {
            return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2;
        }

        float amplify(float d, float scale, float offset)
        {
            d = scale * d + offset;
            d = clamp(d, 0, 1);
            d = 1 - exp2(-2*d*d);
            return d;
        }

        float getDisplacement(vec2 t0, vec2 t1, vec2 t2)
        {
            float displacement = 0.0;
            vec2 textureCoordinates = interpolate2D(t0, t1, t2);
            vec2 vector = ((t0 + t1 + t2) / 3.0);
            float sampleDistance = sqrt((vector.x * vector.x) + (vector.y * vector.y));
            sampleDistance /= ((vf_te_inner + vf_te_outer) / 2.0);
            displacement += texture(vf_t_displace, textureCoordinates).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, -sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance,  sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance,  sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, -sampleDistance)).x;
            return (displacement / 5.0);
        }

        void main()
        {
            g_normal = normalize(interpolate3D(te_normal[0], te_normal[1], te_normal[2]));
            g_bitangent = normalize(interpolate3D(te_bitangent[0], te_bitangent[1], te_bitangent[2]));
            g_patchDistance = vec4(gl_TessCoord, (1.0 - gl_TessCoord.y));
            g_tangent = normalize(interpolate3D(te_tangent[0], te_tangent[1], te_tangent[2]));
            g_textureCoordinates = interpolate2D(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]);
            g_vertex = interpolate3D(te_vertex[0], te_vertex[1], te_vertex[2]);
            float displacement = getDisplacement(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]);
            float d2 = min(min(min(g_patchDistance.x, g_patchDistance.y), g_patchDistance.z), g_patchDistance.w);
            d2 = amplify(d2, 50, -0.5);
            g_vertex += g_normal * displacement * 0.1 * d2;
            gl_Position = vf_m_mvp * vec4(g_vertex, 1.0);
        }

    Geometry shader:

        #version 410 core
        layout (triangles) in;
        layout (triangle_strip, max_vertices = 3) out;
        in vec3 g_normal[3];
        in vec3 g_bitangent[3];
        in vec4 g_patchDistance[3];
        in vec3 g_tangent[3];
        in vec2 g_textureCoordinates[3];
        in vec3 g_vertex[3];
        out vec3 f_tangent;
        out vec3 f_bitangent;
        out vec3 f_eyeDirection;
        out vec3 f_lightDirection;
        out vec3 f_normal;
        out vec4 f_patchDistance;
        out vec4 f_shadowCoordinates;
        out vec2 f_textureCoordinates;
        out vec3 f_vertex;
        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        void main()
        {
            int index = 0;
            while (index < 3)
            {
                vec3 vertexNormal_cameraspace = vf_m_normal * normalize(g_normal[index]);
                vec3 vertexTangent_cameraspace = vf_m_normal * normalize(f_tangent);
                vec3 vertexBitangent_cameraspace = vf_m_normal * normalize(f_bitangent);
                mat3 TBN = transpose(mat3(
                    vertexTangent_cameraspace,
                    vertexBitangent_cameraspace,
                    vertexNormal_cameraspace
                ));
                vec3 eyeDirection = -(vf_m_view * vf_m_model * vec4(g_vertex[index], 1.0)).xyz;
                vec3 lightDirection = normalize(-(vf_m_view * vec4(vf_l_position, 1.0)).xyz);
                f_eyeDirection = TBN * eyeDirection;
                f_lightDirection = TBN * lightDirection;
                f_normal = normalize(g_normal[index]);
                f_patchDistance = g_patchDistance[index];
                f_shadowCoordinates = vf_m_depthBias * vec4(g_vertex[index], 1.0);
                f_textureCoordinates = g_textureCoordinates[index];
                f_vertex = (vf_m_model * vec4(g_vertex[index], 1.0)).xyz;
                gl_Position = gl_in[index].gl_Position;
                EmitVertex();
                index ++;
            }
            EndPrimitive();
        }

    Fragment shader:

        #version 410 core
        in vec3 f_bitangent;
        in vec3 f_eyeDirection;
        in vec3 f_lightDirection;
        in vec3 f_normal;
        in vec4 f_patchDistance;
        in vec4 f_shadowCoordinates;
        in vec3 f_tangent;
        in vec2 f_textureCoordinates;
        in vec3 f_vertex;
        out vec4 fragColor;
        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        vec2 poissonDisk[16] = vec2[](
            vec2(-0.94201624, -0.39906216),
            vec2( 0.94558609, -0.76890725),
            vec2(-0.09418410, -0.92938870),
            vec2( 0.34495938,  0.29387760),
            vec2(-0.91588581,  0.45771432),
            vec2(-0.81544232, -0.87912464),
            vec2(-0.38277543,  0.27676845),
            vec2( 0.97484398,  0.75648379),
            vec2( 0.44323325, -0.97511554),
            vec2( 0.53742981, -0.47373420),
            vec2(-0.26496911, -0.41893023),
            vec2( 0.79197514,  0.19090188),
            vec2(-0.24188840,  0.99706507),
            vec2(-0.81409955,  0.91437590),
            vec2( 0.19984126,  0.78641367),
            vec2( 0.14383161, -0.14100790)
        );

        float random(vec3 seed, int i)
        {
            vec4 seed4 = vec4(seed, i);
            float dot_product = dot(seed4, vec4(12.9898, 78.233, 45.164, 94.673));
            return fract(sin(dot_product) * 43758.5453);
        }

        float amplify(float d, float scale, float offset)
        {
            d = scale * d + offset;
            d = clamp(d, 0, 1);
            d = 1 - exp2(-2.0 * d * d);
            return d;
        }

        void main()
        {
            vec3 lightColor = vf_l_color.xyz;
            float lightPower = vf_l_color.w;
            vec3 materialDiffuseColor = texture(vf_t_diffuse, f_textureCoordinates).xyz;
            vec3 materialAmbientColor = vec3(0.1, 0.1, 0.1) * materialDiffuseColor;
            vec3 materialSpecularColor = texture(vf_t_specular, f_textureCoordinates).xyz;
            vec3 n = normalize(texture(vf_t_normal, f_textureCoordinates).rgb * 2.0 - 1.0);
            vec3 l = normalize(f_lightDirection);
            float cosTheta = clamp(dot(n, l), 0.0, 1.0);
            vec3 E = normalize(f_eyeDirection);
            vec3 R = reflect(-l, n);
            float cosAlpha = clamp(dot(E, R), 0.0, 1.0);
            float visibility = 1.0;
            float bias = 0.005 * tan(acos(cosTheta));
            bias = clamp(bias, 0.0, 0.01);
            for (int i = 0; i < 4; i ++)
            {
                float shading = (0.5 / 4.0);
                int index = i;
                visibility -= shading * (1.0 - texture(vf_t_shadow,
                    vec3(f_shadowCoordinates.xy + poissonDisk[index] / 3000.0,
                         (f_shadowCoordinates.z - bias) / f_shadowCoordinates.w)));
            }
            fragColor.xyz = materialAmbientColor +
                            visibility * materialDiffuseColor * lightColor * lightPower * cosTheta +
                            visibility * materialSpecularColor * lightColor * lightPower * pow(cosAlpha, 5);
            fragColor.w = texture(vf_t_diffuse, f_textureCoordinates).w;
        }

    The following images should be enough to give you an idea of the problem: before moving the camera; after moving the camera just a little; and after moving it to the center of the scene.

    Read the article

  • Accessing managers from game entities/components

    - by Boreal
    I'm designing an entity-component engine in C# right now, and all components need to have access to the global event manager, which sends off inter-entity events (every entity also has a local event manager). I'd like to be able to simply call functions like this:

        GlobalEventManager.Publish("Foo", new EventData());
        GlobalEventManager.Subscribe("Bar", OnBarEvent);

    without having to do this:

        class HealthComponent
        {
            private EventManager globalEventManager;
            public HealthComponent(EventManager gEM)
            {
                globalEventManager = gEM;
            }
        }

        // later on...
        EventManager globalEventManager = new EventManager();
        Entity playerEntity = new Entity();
        playerEntity.AddComponent(new HealthComponent(globalEventManager));

    How can I accomplish this? EDIT: I solved it by creating a singleton called GlobalEventManager. It derives from the local EventManager class, and I use it like this:

        GlobalEventManager.Instance.Publish("Foo", new EventData());
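
    For completeness, a minimal sketch of the singleton the EDIT describes; EventManager and EventData are assumed from the question, not shown in the post:

        // Inherits the local EventManager's behavior and exposes one shared
        // instance; the private constructor forces everyone through Instance.
        class GlobalEventManager : EventManager
        {
            public static GlobalEventManager Instance { get; } = new GlobalEventManager();
            private GlobalEventManager() { }
        }

    If the terser GlobalEventManager.Publish(...) call style is wanted instead, a separate static facade class that forwards to a shared EventManager instance avoids mixing static members with inherited instance methods of the same name.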

    Read the article

  • What causes the iOS OpenGLES driver to allocate extra memory?

    - by Martin Linklater
    I'm trying to optimize the memory usage of our iOS game, and I'm puzzled about when and why the iOS GLES driver allocates extra memory at runtime. When I run our game through Instruments with the OpenGL ES Driver instrument, the gartUsedBytes value can fluctuate quite wildly. We preload all our textures and build the buffer objects up front, so it's not the game engine requesting extra memory from GL. Currently we manually request around 50MB of GL memory, yet the gartUsedBytes value sits at around 90MB most of the time, peaking at 125MB from time to time. It seems to be linked to what you render that frame - our PVS only submits VBOs for visible meshes. Can anyone shed some light on what the driver is doing in the background? Like I said earlier, all our game engine allocations are done at level load, so in theory there shouldn't be any fluctuation in GL memory usage while the level is running. Thanks.

    Read the article

  • Multiple objects listening for the same key press

    - by xiaohouzi79
    I want to learn the best way to implement this: I have a hero and an enemy on the screen. Say the hero presses "k" to get out a knife; I want the enemy to react in a certain way. Now, if in my game loop I have a listener for the key press event and I identify that "k" was pressed, the quick and easy way would be:

        // If K pressed
        // hero.getOutKnife()
        // enemy.getAngry()

    But what is commonly done in more complex games where, say, I have 10 types of character on screen and they all need to react in a unique way when the letter "k" is pressed? I can think of a bunch of hacky ways to do this, but would love to know how it should be done properly. I am using C++, but I'm not looking for a code implementation, just some ideas on how it should be done the right way.
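
    The idea usually reached for here is the observer pattern: characters subscribe to input events, and the input loop publishes a key press without knowing who reacts. A minimal sketch of the shape, in the C# used by other posts on this page (the same structure translates directly to C++ with std::function):

        using System;
        using System.Collections.Generic;

        class InputEvents
        {
            private readonly Dictionary<char, List<Action>> handlers = new Dictionary<char, List<Action>>();

            public void Subscribe(char key, Action handler)
            {
                if (!handlers.TryGetValue(key, out var list))
                    handlers[key] = list = new List<Action>();
                list.Add(handler);
            }

            public void KeyPressed(char key)   // called once by the game loop
            {
                if (handlers.TryGetValue(key, out var list))
                    foreach (var h in list) h();
            }
        }

        // Each character registers its own reaction; the loop stays generic:
        //   input.Subscribe('k', hero.GetOutKnife);
        //   input.Subscribe('k', enemy.GetAngry);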

    Read the article

  • Handling player/background movements in 2D games

    - by lukeluke
    Suppose you have an animated character controlled by the player and a 2D world (like the old 2D side-scrolling games). When the user presses right on the keyboard, the background is moved to the right. If the path is always horizontal, this is simple to do (increment/decrement the x-coordinate). But suppose the path is instead a polygonal chain. My questions are: how do you move the background then? And how do you move the background if the game objects are managed by a physics engine like Box2D?
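
    For the polygonal chain, one workable scheme (a sketch, not the only way): track the character's progress as a distance along the chain, advance that distance on input, and convert it back to a point by walking the segments; the background offset is then derived from that point.

        using System.Collections.Generic;
        using System.Numerics;

        static class PathWalk
        {
            // Convert "distance travelled along the chain" into a world position.
            public static Vector2 PointAt(List<Vector2> path, float distance)
            {
                for (int i = 0; i + 1 < path.Count; i++)
                {
                    float segLen = Vector2.Distance(path[i], path[i + 1]);
                    if (distance <= segLen)
                        return Vector2.Lerp(path[i], path[i + 1], distance / segLen);
                    distance -= segLen;   // consume this segment and move on
                }
                return path[path.Count - 1];  // past the end: clamp to the last vertex
            }
        }

        // Per frame: progress += speed * dt (right) or -= (left);
        // background offset = screenCenter - PathWalk.PointAt(path, progress).

    With Box2D in the mix the usual approach flips: the body moves through the world under the engine's control, and the camera follows the body rather than the background being moved by hand.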

    Read the article

  • GestureListener's fling method doesn't get called

    - by nosferat
    I'm using the SimpleDirectionGestureDetector from the libgdx-users wiki as my InputProcessor. I set it in the create() method:

        Gdx.input.setInputProcessor(new SimpleDirectionGestureDetector(charController));

    charController is my class which implements the DirectionListener interface defined in the SimpleDirectionGestureDetector class, and it is responsible for moving the player character. However, the character doesn't change direction when I perform a fling in any direction. I've checked that the fling() method in the SimpleDirectionGestureDetector class doesn't get called, and I have no idea why, since everything seems right. What am I doing wrong?

    Read the article

  • Calculating the rotational force of a 2D sprite

    - by Jon
    I am wondering if someone has an elegant way of calculating the following scenario. I have an object made of n squares - random shapes, really, but we will pretend they are all rectangles. We are dealing with no gravity, so consider the object in space, viewed top-down. I am applying a force to the object at a specific square (as illustrated below). How do I calculate the rotation, based on the force being applied at the location it is applied? If the force is applied at the center square, the object moves straight. How should it behave the further I move from the center? And how do I calculate the rotational velocity?
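
    The classical answer: the same force produces both a linear and an angular effect, coupled through the offset from the center of mass. In 2D the cross product collapses to a scalar torque, so per simulation step (with $\mathbf{r}$ the vector from the center of mass to the application point and $I$ the moment of inertia):

        \begin{aligned}
        \mathbf{a} &= \mathbf{F} / m, &\qquad \mathbf{v} &\mathrel{+}= \mathbf{a}\,\Delta t, \\
        \tau &= r_x F_y - r_y F_x, &\qquad \alpha &= \tau / I, \\
        \omega &\mathrel{+}= \alpha\,\Delta t, &\qquad \theta &\mathrel{+}= \omega\,\Delta t.
        \end{aligned}

    A force through the center of mass gives $\tau = 0$, hence pure translation; the farther the application point is from the center, the larger the torque for the same force. For a uniform rectangle of mass $m$ and size $w \times h$, the standard moment of inertia is $I = m\,(w^2 + h^2)/12$.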

    Read the article

  • Remove enemy when bullet hits enemy

    - by jordi12100
    For my education I have to make a basic game in HTML5 canvas. The game is a shooter: you can move left and right, and space shoots. When I shoot, the bullets move up and the enemy moves down. When a bullet hits the enemy, the enemy has to disappear and I gain +1 score. But right now the enemy disappears as soon as it comes onto the screen. Demo: http://jordikroon.nl/test.html (space = shoot + enemy shows up). This is my code:

        for (i = 0; i < enemyX.length; i++) {
            if (enemyX[i] > canvas.height) {
                enemyY.splice(i, 1);
                enemyX.splice(i, 1);
            } else {
                enemyY[i] += 5;
                moveEnemy(enemyX[i], enemyY[i]);
            }
        }

        for (i = 0; i < bulletX.length; i++) {
            if (bulletY[i] < 0) {
                bulletY.splice(i, 1);
                bulletX.splice(i, 1);
            } else {
                bulletY[i] -= 5;
                moveBullet(bulletX[i], bulletY[i]);
                for (ib = 0; ib < enemyX.length; ib++) {
                    if (bulletX[i] + 50 < enemyX[ib] || enemyX[ib] + 50 < bulletX[i] ||
                        bulletY[i] + 50 < enemyY[ib] || enemyY[ib] + 50 < bulletY[i]) {
                        ++score;
                        enemyY.splice(i, 1);
                        enemyX.splice(i, 1);
                    }
                }
            }
        }

    Objects:

        function moveBullet(posX, posY) {
            //console.log(posY);
            ctx.arc(posX, (posY - 150), 10, 0, 2 * Math.PI, false);
        }

        function moveEnemy(posX, posY) {
            ctx.rect(posX, posY, 50, 50);
            ctx.fillStyle = '#ffffff';
            ctx.fill();
        }
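
    Worth noting for readers: the comparison chain in the inner loop above is the standard "separated on some axis" test, which is true exactly when two boxes do NOT overlap, so a hit test needs its negation. In the C# style used for the other sketches on this page:

        static class Collision
        {
            // True when two size x size axis-aligned boxes overlap, i.e. when
            // they are not separated on either axis.
            public static bool Overlaps(float ax, float ay, float bx, float by, float size)
            {
                bool separated = ax + size < bx || bx + size < ax ||
                                 ay + size < by || by + size < ay;
                return !separated;
            }
        }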

    Read the article

  • How to blend the sprite into the background?

    - by optimisez
    I'm trying to blend a character sprite into the game, but I still cannot remove the blue color from the sprite sheet, and I've discovered that the white area of the sprite is semi-transparent. At first, the color key D3DCOLOR_XRGB(255, 255, 255) was set in D3DXCreateTextureFromFileEx, and you could see the fireball through the sprite. After I change the color to D3DCOLOR_XRGB(0, 255, 255), the white area becomes opaque, but the blue background of the sprite sheet remains; my expected result is the sprite with that background fully transparent. So far I cannot figure out how to do that. Any ideas?

        void initPlayer() {
            // Create texture.
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44, D3DX_DEFAULT, NULL,
                                             D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
                                             D3DCOLOR_XRGB(0, 255, 255), NULL, NULL, &player);
        }

        void renderPlayer() {
            sprite->Draw(player, &playerRect, NULL,
                         &D3DXVECTOR3(playerDest.X, playerDest.Y, 0), D3DCOLOR_XRGB(255, 255, 255));
        }

        void initFireball() {
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "fireball.png", 512, 512, D3DX_DEFAULT, NULL,
                                             D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
                                             D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &fireball);
        }

        void renderFireball() {
            sprite->Draw(fireball, &fireballRect, NULL,
                         &D3DXVECTOR3(fireballDest.X, fireballDest.Y, 0), D3DCOLOR_XRGB(255, 255, 255));
        }

    Read the article

  • Could someone explain why my world reconstructed from depth position is incorrect?

    - by yuumei
    I am attempting to reconstruct the world position in the fragment shader from a depth texture. I pass in the 8 frustum corner points in world space, interpolate them across fragments, and then interpolate from near to far by the depth:

        highp float depth = (2.0 * CameraPlanes.x) /
            (CameraPlanes.y + CameraPlanes.x - texture(depthTexture, textureCoord).x * (CameraPlanes.y - CameraPlanes.x));

        // Reconstruct the world position from the linear depth
        highp vec3 world = mix(nearWorldPos, farWorldPos, depth);

    CameraPlanes.x is the near plane and CameraPlanes.y the far plane. Assuming that my frustum positions are correct and my depth looks correct, why is my world position wrong? (My depth texture is of format GL_DEPTH_COMPONENT32F, if that matters.) Thanks! :D

    Update: screenshot of the world position: http://imgur.com/sSlHd - so you can see it looks nearly correct. However, as the camera moves, the colours (positions) change, which they shouldn't. I can get this to work if I write this into the depth attachment in the previous pass:

        gl_FragDepth = gl_FragCoord.z / gl_FragCoord.w / CameraPlanes.y;

    and then read the depth texture like so:

        depth = texture(depthTexture, textureCoord).x;

    However, this kills the hardware z-buffer optimizations.

    Read the article

  • Preventing item duplication?

    - by PuppyKevin
    For my game there are two types of items - stackable and non-stackable. Non-stackable items get assigned a unique ID that stays with them forever. A character ID is associated with the item, as is a state (CHANGED, UNCHANGED, NEW, REMOVED); the character ID and state are used for item-saving purposes. Stackable items have one unique ID for the entire stack: for example, 5 potions (stacked on top of each other) share one unique ID. When a non-stackable item is dropped, its state gets set to REMOVED while the unique ID and character ID don't change. If it is picked up by another player, the state gets set to NEW and the character ID is changed to the new character's ID. Dropping all items in a stack (for example, 5 potions out of 5) behaves just like dropping a non-stackable item. But when dropping only some of a stack (for example, 3 potions out of 5), I really have no clue what to do. The 3 dropped potions have the state REMOVED but the same unique ID and character ID. If another player picks them up, they have no choice but to obtain a new unique ID, with the state changed to NEW and the character ID changed to the new one. If the dropping player picks them back up, they'd just be re-added to the stack. There are two issues with that:

    1. If the player who dropped the 3 potions picks them back up, there's no way to tell whether they legitimately dropped the items or whether they're duped.
    2. If another player picks up the 3 potions (assuming they're duped), there's no way to know whether they're duped or not.

    My question is: how can I create a system that detects duplicated items, for both non-stackable and stackable items?

    Read the article

  • OpenGL Get Rotated X and Y of quad

    - by matejkramny
    I am developing a 2D game using the LWJGL library. So far I have a rotating box. I have done basic rectangle collision, but it doesn't work for rotated rectangles. Does OpenGL have a function that returns the vertices of a rotated rectangle? Or is there another way of doing this using trigonometry? Everything I found while researching this used some matrix that I don't understand, so I am asking if there is another way. For clarification, I am trying to find the true (rotated) x, y of each point of the rectangle. Let's say the first point of a rectangle (top left) has x=10, y=10, and the width and height are 100 pixels. When I rotate the rectangle using glRotatef(), the stored x and y stay the same; the rotation happens inside OpenGL. I need to extract the rotated x, y of each corner so I can detect collisions properly.
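
    glRotatef only changes how the quad is drawn; it never writes back to your coordinates. The trigonometric route is to rotate each corner yourself about the rectangle's center, using the same angle you hand to glRotatef. A sketch in the C# style of the other examples here (the math is identical in Java):

        using System;

        static class RotatedRect
        {
            // Rotate each corner (xs[i], ys[i]) about the center (cx, cy)
            // by 'degrees', counter-clockwise, in place.
            public static void RotateCorners(float[] xs, float[] ys, float cx, float cy, float degrees)
            {
                float rad = degrees * MathF.PI / 180f;
                float cos = MathF.Cos(rad), sin = MathF.Sin(rad);
                for (int i = 0; i < xs.Length; i++)
                {
                    float dx = xs[i] - cx, dy = ys[i] - cy;
                    xs[i] = cx + dx * cos - dy * sin;
                    ys[i] = cy + dx * sin + dy * cos;
                }
            }
        }

        // For the example above (top left 10,10, size 100x100) the center is (60, 60).

    With the four rotated corners in hand, collision between rotated rectangles is usually then done with the separating axis theorem rather than the plain min/max overlap test.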

    Read the article

  • Working out of a vertex array for destructible objects

    - by bobobobo
    I have diamond-shaped polygonal bullets, and there are lots of them on the screen. I did not want to create a vertex array for each one, so I packed them into a single vertex array, and they're all drawn at once:

        | bullet1.xyz | bullet1.rgb | bullet2.xyz | bullet2.rgb |

    This is great for performance. There is also:

        struct Bullet
        {
            vector<Vector3f*> verts; // pointers into the vertex buffer
        };

    This works fine: the bullets can move and do collision detection, all while having their data in one place. Except when a bullet "dies" - then you have to clear its slot and pack all the bullets towards the beginning of the array. Is this a good approach to handling lots of low-poly objects? How else would you do it?
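
    One common refinement, sketched below: instead of shifting everything down, move only the last live bullet's data into the dead slot ("swap with last"), which makes removal O(1) at the cost of ordering. Note that with this layout any stored pointers into the buffer (like the verts vector above) must be refreshed for the bullet that moved. The sketch is in C#, with a flat float array standing in for the vertex buffer:

        using System;

        class BulletPool
        {
            const int FloatsPerBullet = 4 * 6;   // e.g. 4 diamond verts * (xyz + rgb); adjust to your layout

            float[] vertexData;
            int liveCount;

            public BulletPool(int capacity)
            {
                vertexData = new float[capacity * FloatsPerBullet];
            }

            // O(1) removal: overwrite the dead bullet's slot with the last live
            // bullet's vertex data, then shrink the live count. Draw calls then
            // submit only liveCount * 4 vertices.
            public void Kill(int deadIndex)
            {
                int last = liveCount - 1;
                if (deadIndex != last)
                    Array.Copy(vertexData, last * FloatsPerBullet,
                               vertexData, deadIndex * FloatsPerBullet, FloatsPerBullet);
                liveCount--;
            }
        }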

    Read the article

  • Box2D platformer movement. Are joints a good idea?

    - by Romeo
    So I smashed my brains trying to make my character move. Since I want to add explosions and bullets later in the game, it isn't a good idea to mess with the velocity directly, and forces/impulses didn't work as I expected. So something stuck in my mind: is it a good idea to put a wheel (circle) at his bottom, invisible to the player, that will do the movement by rotation? I would attach it to my main body with a revolute joint, but I don't really know how to make the main body and the wheel body not collide with each other, since funny things can happen. What is your opinion?

    Read the article
