Search Results

Search found 38203 results on 1529 pages for 'library development'.


  • Loadbalancing Questions

    - by Van Holtz
    I have been learning networking for about four months. I wrote a single standalone multiplayer server and succeeded with an authoritative approach. Now I want to extend it by splitting the single server into clusters, to allow even more players to log in and to avoid latency issues. I have prototyped the load-balancing server and it is running pretty well so far. This is my architecture: I have a master server which acts as a proxy; every sub server (chat, login, game) connects to the master server, as do all the clients. When a client connects, a request flows like this: Client sends request -> MS (Master) decides which SS (SubServer) to forward to -> forwards request to SS -> SS analyzes the message and sends a response to MS -> MS decides which client to forward to -> forwards the response to that client. It goes through a lot of stages, and it takes about double the time to process a message compared with the single-server approach. I feel like my model isn't the best, or I may be doing something wrong. Is there a better model, or the one professional games use? I still want a Master-SubServer approach. I just want to clarify that I'm going in the right direction before writing all my code. Thanks for any answer :)
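    If it helps to see the idea concretely, here is a minimal C++ sketch of the master as a pure routing table on both legs of that flow; the Header and MasterRouter types are illustrative assumptions, not code from the question:

        #include <cstdint>
        #include <map>

        // Hypothetical message header: the master only needs enough information to
        // pick a sub server on the way in and the original client on the way back.
        enum class Service : uint8_t { Login, Chat, Game };

        struct Header {
            uint32_t clientId;  // stamped by the master so the reply can be routed back
            Service  service;   // which sub server should handle the payload
        };

        struct MasterRouter {
            std::map<Service, int>  serviceSocket;  // service -> sub-server socket
            std::map<uint32_t, int> clientSocket;   // clientId -> client socket

            int routeRequest(const Header& h) const  { return serviceSocket.at(h.service); }
            int routeResponse(const Header& h) const { return clientSocket.at(h.clientId); }
        };

    Kept this thin, the extra hop mostly adds latency rather than CPU work; the usual alternative is to have the master only hand out sub-server addresses at login so clients talk to the sub servers directly afterwards.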

    Read the article

  • Why doesn't my texture display with this GLSL shader?

    - by Chewy Gumball
    I am trying to display a DXT1 compressed texture on a quad using a VBO and shaders, but I have been unable to get it working. All I get is a black square. I know my texture is uploaded properly because when I use immediate mode without shaders the texture displays fine, but I will include that part just in case. Also, when I change the gl_FragColor to something like vec4(0.0, 1.0, 1.0, 1.0) then I get a nice blue quad, so I know that my shader is able to set the colour. It appears to be either the texture is not being bound correctly in the shader or the texture coordinates are not being picked up. However, I can't find the error! What am I doing wrong? I am using OpenTK in C# (not XNA).

    Vertex Shader:

        void main()
        {
            gl_TexCoord[0] = gl_MultiTexCoord0;
            // Set the position of the current vertex
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    Fragment Shader:

        uniform sampler2D diffuseTexture;

        void main()
        {
            // Set the output color of our current pixel
            gl_FragColor = texture2D(diffuseTexture, gl_TexCoord[0].st);
            //gl_FragColor = vec4 (0.0,1.0,1.0,1.0);
        }

    Drawing Code:

        int vb, eb;
        GL.GenBuffers(1, out vb);
        GL.GenBuffers(1, out eb);

        // Position          Texture
        float[] verts = {
            0.1f, 0.1f, 0.0f, 0.0f, 0.0f,
            1.9f, 0.1f, 0.0f, 1.0f, 0.0f,
            1.9f, 1.9f, 0.0f, 1.0f, 1.0f,
            0.1f, 1.9f, 0.0f, 0.0f, 1.0f
        };
        uint[] indices = { 0, 1, 2, 0, 2, 3 };

        //upload data to the VBO
        GL.BindBuffer(BufferTarget.ArrayBuffer, vb);
        GL.BindBuffer(BufferTarget.ElementArrayBuffer, eb);
        GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(verts.Length * sizeof(float)), verts, BufferUsageHint.StaticDraw);
        GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(indices.Length * sizeof(uint)), indices, BufferUsageHint.StaticDraw);

        //Upload texture
        int buffer = GL.GenTexture();
        GL.BindTexture(TextureTarget.Texture2D, buffer);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (float)TextureWrapMode.Repeat);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (float)TextureWrapMode.Repeat);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (float)TextureMagFilter.Linear);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (float)TextureMinFilter.Linear);
        GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, (float)TextureEnvMode.Modulate);
        GL.CompressedTexImage2D(TextureTarget.Texture2D, 0, texture.format, texture.width, texture.height, 0, texture.data.Length, texture.data);

        //Draw
        GL.UseProgram(shaderProgram);
        GL.EnableClientState(ArrayCap.VertexArray);
        GL.EnableClientState(ArrayCap.TextureCoordArray);
        GL.VertexPointer(3, VertexPointerType.Float, 5 * sizeof(float), 0);
        GL.TexCoordPointer(2, TexCoordPointerType.Float, 5 * sizeof(float), 3);
        GL.ActiveTexture(TextureUnit.Texture0);
        GL.Uniform1(GL.GetUniformLocation(shaderProgram, "diffuseTexture"), 0);
        GL.DrawElements(BeginMode.Triangles, indices.Length, DrawElementsType.UnsignedInt, 0);
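    One thing worth double checking in a setup like this is that the sampler uniform, the active texture unit, and the bound texture all refer to the same unit at draw time. A minimal C/C++ OpenGL sketch of that wiring, assuming a GLEW-style loader and illustrative names:

        #include <GL/glew.h>

        // Sketch only: bind 'textureId' to texture unit 0 and point the program's
        // "diffuseTexture" sampler at that unit before issuing the draw call.
        void bindDiffuseTexture(GLuint shaderProgram, GLuint textureId)
        {
            const GLint unit = 0;
            glUseProgram(shaderProgram);                 // uniforms apply to the current program
            GLint loc = glGetUniformLocation(shaderProgram, "diffuseTexture");
            glUniform1i(loc, unit);                      // sampler uniforms take the unit index as an int
            glActiveTexture(GL_TEXTURE0 + unit);         // select texture unit 0...
            glBindTexture(GL_TEXTURE_2D, textureId);     // ...and bind the texture to it
        }

    In OpenTK the same calls exist on the GL class, so the order (UseProgram, Uniform1, ActiveTexture, BindTexture) carries over directly.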

    Read the article

  • Any significant performance cost to using BlendState.Premultiplied?

    - by Donutz
    Normally I guess you'd use BlendState.AlphaBlend, because when you load your textures through the content pipeline they're already premultiplied. However, if you're loading textures at runtime from PNGs or some such, you have to loop through the pixels and premultiply them, which can take a long time if you've got a lot of textures to load. So it looks (I haven't tried it) like using BlendState.Premultiplied instead of BlendState.AlphaBlend should handle non-premultiplied textures and produce the same visual result, without all the startup cost. I have to wonder if there's a non-obvious cost to doing this, like a huge drop in performance or something. Anyone know?
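    For reference, the per-pixel work being avoided is just this (a sketch in C++ of premultiplying an RGBA8 buffer; the function name and layout are assumptions):

        #include <cstdint>
        #include <vector>

        // Sketch: premultiply an RGBA8 pixel buffer in place (the slow startup step the
        // question is trying to avoid). Assumes tightly packed RGBA bytes.
        void premultiplyAlpha(std::vector<uint8_t>& rgba)
        {
            for (size_t i = 0; i + 3 < rgba.size(); i += 4) {
                const uint32_t a = rgba[i + 3];
                rgba[i + 0] = static_cast<uint8_t>(rgba[i + 0] * a / 255); // r *= a
                rgba[i + 1] = static_cast<uint8_t>(rgba[i + 1] * a / 255); // g *= a
                rgba[i + 2] = static_cast<uint8_t>(rgba[i + 2] * a / 255); // b *= a
            }
        }

    In general, the two blend states differ only in which source blend factor the GPU uses (One versus SourceAlpha), which costs nothing measurable at draw time; visible differences tend to appear only where translucent texels get filtered or scaled, since filtering non-premultiplied data can produce colour fringes.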

    Read the article

  • Pathfinding for fleeing

    - by Philipp
    As you know, there are plenty of solutions when you want to find the best path in a 2-dimensional environment which leads from point A to point B. But how do I calculate a path when an object is at point A and wants to get away from point B, as fast and far as possible?

    A bit of background information: my game uses a 2D environment which isn't tile-based but has floating point accuracy. The movement is vector-based. The pathfinding is done by partitioning the game world into rectangles which are walkable or non-walkable and building a graph out of their corners. I already have pathfinding between points working by using Dijkstra's algorithm. The use case for the fleeing algorithm is that in certain situations, actors in my game should perceive another actor as a danger and flee from it.

    The trivial solution would be to just move the actor along a vector in the direction opposite from the threat, until a "safe" distance is reached or the actor reaches a wall, where it then cowers in fear. The problem with this approach is that actors will be blocked by small obstacles they could easily get around. As long as moving along the wall wouldn't bring them closer to the threat they could do that, but it would look smarter if they avoided obstacles in the first place.

    Another problem I see is with dead ends in the map geometry. In some situations a being must choose between a path which gets it away faster now but ends in a dead end where it would be trapped, or another path which means it wouldn't get that far away from the danger at first (or would even get a bit closer), but on the other hand would have a much greater long-term reward, in that it would eventually get it much further away. So the short-term reward of getting away fast must somehow be weighed against the long-term reward of getting away far.

    There is also another rating problem for situations where an actor should accept moving closer to a minor threat in order to get away from a much larger threat. But completely ignoring all minor threats would be foolish, too (that's why the actor in this graphic goes out of its way to avoid the minor threat in the upper right area). Are there any standard solutions for this problem?
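    One standard trick that fits the graph setup described above is a "flee map": run Dijkstra from the threat instead of from the actor, so every node knows its distance to danger, and then score candidate destinations by distance-gained minus path-cost. The sketch below is only an illustration of that idea in C++; the Edge/Graph types and the weighting factor are assumptions, not part of the question.

        #include <functional>
        #include <limits>
        #include <queue>
        #include <utility>
        #include <vector>

        struct Edge { int to; float cost; };
        using Graph = std::vector<std::vector<Edge>>;

        // Standard Dijkstra: distance from 'start' to every node in the graph.
        std::vector<float> dijkstra(const Graph& g, int start)
        {
            const float INF = std::numeric_limits<float>::infinity();
            std::vector<float> dist(g.size(), INF);
            using QItem = std::pair<float, int>;  // (distance, node)
            std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> q;
            dist[start] = 0.0f;
            q.push({0.0f, start});
            while (!q.empty()) {
                auto [d, n] = q.top();
                q.pop();
                if (d > dist[n]) continue;
                for (const Edge& e : g[n]) {
                    if (d + e.cost < dist[e.to]) {
                        dist[e.to] = d + e.cost;
                        q.push({dist[e.to], e.to});
                    }
                }
            }
            return dist;
        }

        // Pick the node to flee to: far from the threat, but not too expensive to reach.
        // Dead ends score badly automatically, because trapped nodes never get far away.
        int chooseFleeTarget(const Graph& g, int actorNode, int threatNode, float lambda = 0.5f)
        {
            const float INF = std::numeric_limits<float>::infinity();
            std::vector<float> fromThreat = dijkstra(g, threatNode);
            std::vector<float> fromActor  = dijkstra(g, actorNode);
            int best = actorNode;
            float bestScore = -INF;
            for (int n = 0; n < static_cast<int>(g.size()); ++n) {
                if (fromActor[n] == INF) continue;                     // unreachable
                float score = fromThreat[n] - lambda * fromActor[n];   // long-term gain vs. short-term cost
                if (score > bestScore) { bestScore = score; best = n; }
            }
            return best;
        }

    The lambda factor is exactly the short-term versus long-term trade-off described above, and multiple threats can be handled by combining several distance fields (for example a weighted sum or minimum) before scoring.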

    Read the article

  • XNA 4.0, Combining model draw calls

    - by MayContainNuts
    I have the following problem: the levels in my game are made up of a large quantity of small models, and because of that I am experiencing frame rate problems. I already did some research and came to the conclusion that the number of draw calls I am making must be the root of my problems. I've looked around for a while now and couldn't quite find a satisfying solution. I can't cull any of those models; in a worst case scenario there could be 1000 of them visible at the same time. I also looked at hardware geometry instancing, but I don't think that's quite what I'm looking for, because the level consists of a lot of different parts. So what I'd like to do is combine 100 or 200 of these models into a single large one and draw it as a whole 'chunk'. The whole geometry is static, so it wouldn't have to be changed after combining, but different parts of it would have to use different textures (I think I can accomplish that with a texture atlas). But I have no idea how to do that, so does anybody have any suggestions?
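    The usual approach is to bake each model's world transform into its vertices once at load time and append everything into one big vertex/index buffer per chunk. A rough C++ sketch of that baking step, with simplified placeholder Vertex/Matrix types rather than the XNA ones:

        #include <cstdint>
        #include <vector>

        struct Vertex { float px, py, pz; float u, v; };
        struct Matrix { float m[16]; };  // row-major, translation in the fourth column (assumed)

        // Multiply the vertex position by the instance's world matrix.
        Vertex transform(const Vertex& v, const Matrix& w)
        {
            Vertex out = v;
            out.px = w.m[0]*v.px + w.m[1]*v.py + w.m[2]*v.pz  + w.m[3];
            out.py = w.m[4]*v.px + w.m[5]*v.py + w.m[6]*v.pz  + w.m[7];
            out.pz = w.m[8]*v.px + w.m[9]*v.py + w.m[10]*v.pz + w.m[11];
            return out;
        }

        struct Chunk { std::vector<Vertex> vertices; std::vector<uint32_t> indices; };

        // Append one model instance to the chunk, baking its transform into the vertices.
        void appendMesh(Chunk& chunk,
                        const std::vector<Vertex>& meshVerts,
                        const std::vector<uint32_t>& meshIndices,
                        const Matrix& world)
        {
            const uint32_t base = static_cast<uint32_t>(chunk.vertices.size());
            for (const Vertex& v : meshVerts)
                chunk.vertices.push_back(transform(v, world));  // bake the instance transform
            for (uint32_t i : meshIndices)
                chunk.indices.push_back(base + i);              // re-base indices into the big buffer
        }

    After a chunk is filled, its two arrays are uploaded once and drawn with a single call; with a texture atlas the per-part texture switches disappear as well.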

    Read the article

  • How to use OpenGL blend modes/functions to brighten/darken a texture

    - by Jigar
    Tried this code, but the texture did not get any lighter.

        try {
            texture = TextureLoader.getTexture("png", Game.class.getResourceAsStream("/brick.png"), true, GL_NEAREST);
        } catch (IOException e) {
            e.printStackTrace();
        }

        GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture.getTextureID());

        glEnable(GL_BLEND);
        glBlendFunc(GL_CONSTANT_ALPHA, GL_CONSTANT_ALPHA);
        GL14.glBlendColor(1.0f, 1.0f, 1.0f, 0.5f);
        glColor4f(1, 1, 1, 0.5f);

        GL11.glBegin(GL11.GL_QUADS); // Start Drawing Quads
        // Front Face
        GL11.glNormal3f(0.0f, 0.0f, 1.0f); // Normal Pointing Towards Viewer
        GL11.glTexCoord2f(0.0f, 0.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f); // Point 1 (Front)
        GL11.glTexCoord2f(1.0f, 0.0f); GL11.glVertex3f(1.0f, -1.0f, 1.0f);  // Point 2 (Front)
        GL11.glTexCoord2f(1.0f, 1.0f); GL11.glVertex3f(1.0f, 1.0f, 1.0f);   // Point 3 (Front)
        GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(-1.0f, 1.0f, 1.0f);  // Point 4 (Front)
        glEnd();
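    For reference, two common fixed-function ways to change brightness without blending against a constant colour: darkening by modulating the texture with a grey vertex colour, and brightening with an additive second pass. This is a C-style OpenGL sketch under those assumptions, not a fix for the exact code above:

        #include <GL/gl.h>

        // Placeholder for the textured glBegin/glEnd quad drawn in the question.
        void drawTexturedQuad()
        {
            glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(-1, -1);
            glTexCoord2f(1, 0); glVertex2f( 1, -1);
            glTexCoord2f(1, 1); glVertex2f( 1,  1);
            glTexCoord2f(0, 1); glVertex2f(-1,  1);
            glEnd();
        }

        void drawDarkened(float brightness /* 0..1 */)
        {
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); // texel * vertex colour
            glColor4f(brightness, brightness, brightness, 1.0f);         // < 1.0 darkens
            drawTexturedQuad();
        }

        void drawBrightened(float extra /* 0..1 */)
        {
            drawTexturedQuad();                   // normal pass first
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE);    // additive second pass brightens
            glColor4f(1.0f, 1.0f, 1.0f, extra);
            drawTexturedQuad();
            glDisable(GL_BLEND);
        }

    The same calls exist in LWJGL under GL11, so the pattern translates directly to the Java code above.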

    Read the article

  • Best practice for designing a risk-style board game

    - by jyanks
    I'm just trying to figure out how to set up the code for a game like Risk. I would like it to be extensible, so that I can have multiple maps (i.e. World, North America, Eurasia, Africa), so hardcoding the map doesn't seem to make a whole lot of sense. I'm a bit confused about how and where items should be stored and accessed. Here are the objects I see the game theoretically using:

    - Countries/Territories
    - Cities (can be contained within territories)
    - Capitals
    - Connections
    - Continents
    - Map
    - Troops

    At the moment, I feel like:

    - A map should have a list of continents and countries. The continents would be more of a 'logical' thing, where each continent is just a list of countries that is checked for bonuses at the start of a turn.
    - Countries should have a list of the countries they're connected to, for the connections.

    What I can't figure out is: where do I store the troops? Do I have an object for every single troop, or do I just store the number of troops on a country object as an integer? What about capitals and cities? Do those just have a reference to the country they reside in? Is there anything I'm not seeing here that's going to screw me over in the long run with the way that I'm thinking about things now? Any advice would be appreciated.
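    Here is one possible shape for that data model, sketched in C++ purely as an illustration (all names are assumptions): troops as a plain integer per territory, and cities/capitals holding an index back to the territory they sit in.

        #include <string>
        #include <vector>

        struct Territory {
            std::string name;
            std::vector<int> neighbors;   // indices of connected territories
            int ownerId = -1;
            int troops  = 0;              // no per-troop objects needed for Risk-style rules
        };

        struct City    { std::string name; int territory; };
        struct Capital { int playerId;     int territory; };

        struct Continent {
            std::string name;
            std::vector<int> territories; // owning all of these grants the bonus
            int bonus = 0;
        };

        struct Map {
            std::vector<Territory> territories;
            std::vector<Continent> continents;
            std::vector<City>      cities;
            std::vector<Capital>   capitals;
        };

    An integer per territory is usually enough, because individual Risk-style troops have no identity of their own; per-troop objects only pay off if units can carry per-unit state such as experience or equipment.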

    Read the article

  • Normal map lighting bug in bottom right quadrant

    - by Ryan Capote
    I am currently working on getting normal maps working in my project, and have run into a problem with lighting. As you can see, the normals in the bottom right quadrant of the lighting aren't calculating the correct direction to the light or something. Best seen by the red light. If I use flat normals (z normal = 1.0), it seems to be working fine. Normals for the tile sheet:

    Shader:

        #version 330

        uniform sampler2D uDiffuseTexture;
        uniform sampler2D uNormalsTexture;
        uniform sampler2D uSpecularTexture;
        uniform sampler2D uEmissiveTexture;
        uniform sampler2D uWorldNormals;
        uniform sampler2D uShadowMap;

        uniform vec4 uLightColor;
        uniform float uConstAtten;
        uniform float uLinearAtten;
        uniform float uQuadradicAtten;
        uniform float uColorIntensity;

        in vec2 TexCoords;
        in vec2 GeomSize;
        out vec4 FragColor;

        float sample(vec2 coord, float r)
        {
            return step(r, texture2D(uShadowMap, coord).r);
        }

        float occluded()
        {
            float PI = 3.14;
            vec2 normalized = TexCoords.st * 2.0 - 1.0;
            float theta = atan(normalized.y, normalized.x);
            float r = length(normalized);
            float coord = (theta + PI) / (2.0 * PI);
            vec2 tc = vec2(coord, 0.0);
            float center = sample(tc, r);
            float sum = 0.0;
            float blur = (1.0 / GeomSize.x) * smoothstep(0.0, 1.0, r);
            sum += sample(vec2(tc.x - 4.0*blur, tc.y), r) * 0.05;
            sum += sample(vec2(tc.x - 3.0*blur, tc.y), r) * 0.09;
            sum += sample(vec2(tc.x - 2.0*blur, tc.y), r) * 0.12;
            sum += sample(vec2(tc.x - 1.0*blur, tc.y), r) * 0.15;
            sum += center * 0.16;
            sum += sample(vec2(tc.x + 1.0*blur, tc.y), r) * 0.15;
            sum += sample(vec2(tc.x + 2.0*blur, tc.y), r) * 0.12;
            sum += sample(vec2(tc.x + 3.0*blur, tc.y), r) * 0.09;
            sum += sample(vec2(tc.x + 4.0*blur, tc.y), r) * 0.05;
            return sum * smoothstep(1.0, 0.0, r);
        }

        float calcAttenuation(float distance)
        {
            float linearAtten = uLinearAtten * distance;
            float quadAtten = uQuadradicAtten * distance * distance;
            float attenuation = 1.0 / (uConstAtten + linearAtten + quadAtten);
            return attenuation;
        }

        vec3 calcFragPosition(void)
        {
            return vec3(TexCoords*GeomSize, 0.0);
        }

        vec3 calcLightPosition(void)
        {
            return vec3(GeomSize/2.0, 0.0);
        }

        float calcDistance(vec3 fragPos, vec3 lightPos)
        {
            return length(fragPos - lightPos);
        }

        vec3 calcLightDirection(vec3 fragPos, vec3 lightPos)
        {
            return normalize(lightPos - fragPos);
        }

        vec4 calcFinalLight(vec2 worldUV, vec3 lightDir, float attenuation)
        {
            float diffuseFactor = dot(normalize(texture2D(uNormalsTexture, worldUV).rgb), lightDir);
            vec4 diffuse = vec4(0.0);
            vec4 lightColor = uLightColor * uColorIntensity;
            if(diffuseFactor > 0.0)
            {
                diffuse = vec4(texture2D(uDiffuseTexture, worldUV.xy).rgb, 1.0);
                diffuse *= diffuseFactor;
                lightColor *= diffuseFactor;
            }
            else
            {
                discard;
            }
            vec4 final = (diffuse + lightColor);
            if(texture2D(uWorldNormals, worldUV).g > 0.0)
            {
                return final * attenuation;
            }
            else
            {
                return final * occluded();
            }
        }

        void main(void)
        {
            vec3 fragPosition = calcFragPosition();
            vec3 lightPosition = calcLightPosition();
            float distance = calcDistance(fragPosition, lightPosition);
            float attenuation = calcAttenuation(distance);
            vec2 worldPos = gl_FragCoord.xy / vec2(1024, 768);
            vec3 lightDir = calcLightDirection(fragPosition, lightPosition);
            lightDir = (lightDir*0.5)+0.5;
            float atten = calcAttenuation(distance);
            vec4 emissive = texture2D(uEmissiveTexture, worldPos);
            FragColor = calcFinalLight(worldPos, lightDir, atten) + emissive;
        }

    Read the article

  • Sprite transparency not affected (libGDX)

    - by Aon GoltzCrank
    I am making a game using libGDX and the Universal Tween Engine. My problem is as follows: I have 2 screens so far, a splash screen with the logo, and a second one which is the main menu. In the splash screen I use a SpriteBatch and a Sprite with the texture of the image I want (which goes through some scaling). Now I use the tween engine, along with a SpriteAccessor I created, to control the alpha of the sprite. I fade the picture in, then fade it out, then change to the next screen. In the next screen I have a single sprite and a single 3-slot sprite array. In this screen I also use the tween engine: I fade the single sprite into the screen (it's the background image), then I try, using the same method (Tween.to), to change the alpha of the sprite array (each sprite by itself). I first set it to 0 using Tween.set, then change it using that method. This didn't work, so after some tests I tried setting the alpha of a single sprite from the array to 0, and that didn't work either. It's like the program is ignoring the alpha value; I even printed out the alpha value, and it says 0, but the sprite is still visible. How can I fix this, or what might be causing it?

    Read the article

  • CCUserDefault, iOS/Android and game updates

    - by Luke
    My game uses cocos2d-x and will be published on iOS first, later on Android. I save a lot of things with CCUserDefault (scores, which levels were completed, number of coins collected, etc.). But now I have a big doubt: what will happen when the game receives its first update? CCUserDefault uses an XML file stored somewhere in the app's storage space. This file is created and retained until the app is uninstalled. I am wondering what happens when the app is updated. Will the old XML file be kept? Because if not, how should I handle app updates (updates in the sense that 2, 3 or more new level packages will be added, but the information about the old ones, like scores, which levels were finished and which were not, number of coins, etc., absolutely must not be lost)?

    Read the article

  • PHP Battle System for RPG game

    - by Jay
    I posted this a while ago on Stack Overflow; they thought it would be better placed here, and I agree. Essentially I know what I want to accomplish, and I have something to the effect of what I want, but I am not satisfied with it. Here's the problem. Each user has some stats: STR (how hard they hit), DEF (dodging/blocking attacks), SPD (when they can strike), and STAMINA (basically their endurance in game; if this runs out they can no longer fight and lose). What I need is something like this:

    UserA stats: STR: 1000, DEF: 2500, SPD: 2000 (HP: 1000/1000)
    UserB stats: STR: 1500, DEF: 500, SPD: 4000 (HP: 1000/1000)

    Because the second user has double the speed, he lands twice the number of hits on the first user before he gets hit. Because he has less strength than the first user's defence, he will do no damage, or very little. This is how the battle would theoretically go:

    UserB strikes UserA for 0 damage
    UserB strikes UserA for 0 damage
    UserA strikes UserB for 500 damage
    UserB strikes UserA for 0 damage
    UserB strikes UserA for 0 damage
    UserA strikes UserB for 500 damage, and sends him to the hospital!

    I was using this code, which is buggy and not efficient; I just need a better way to do this: http://pastebin.com/15LiQQuJ

    Oh, and if anyone has some good ideas on how to improve the concept, that would be cool too! It's not that elaborate, so I'll be thinking of all sorts of things to make it more dynamic. Thanks.
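    One straightforward way to get exactly that behaviour is a speed-accumulator loop: each tick both fighters gain SPD worth of "action points" and strike whenever the pool crosses a threshold, so double the speed means twice the strikes. The sketch below is in C++ rather than PHP just to show the idea; the threshold and the damage formula (STR minus DEF, clamped at zero) are assumptions, and stamina is left out.

        #include <algorithm>
        #include <cstdio>
        #include <utility>

        struct Fighter {
            const char* name;
            int str, def, spd, hp;
            int actionPoints = 0;
        };

        int damage(const Fighter& attacker, const Fighter& defender)
        {
            return std::max(0, attacker.str - defender.def);  // assumed formula
        }

        void battle(Fighter& a, Fighter& b)
        {
            const int threshold = 4000;                       // one strike per 'threshold' speed points
            while (a.hp > 0 && b.hp > 0) {
                a.actionPoints += a.spd;
                b.actionPoints += b.spd;
                // Faster fighter acts first when both are ready on the same tick.
                Fighter* order[2] = { &a, &b };
                if (b.spd > a.spd) std::swap(order[0], order[1]);
                for (Fighter* attacker : order) {
                    Fighter* defender = (attacker == &a) ? &b : &a;
                    while (attacker->actionPoints >= threshold && defender->hp > 0) {
                        attacker->actionPoints -= threshold;
                        int dmg = damage(*attacker, *defender);
                        defender->hp -= dmg;
                        std::printf("%s strikes %s for %d damage\n", attacker->name, defender->name, dmg);
                    }
                }
            }
            std::printf("%s wins!\n", a.hp > 0 ? a.name : b.name);
        }

        int main()
        {
            Fighter a{"UserA", 1000, 2500, 2000, 1000};
            Fighter b{"UserB", 1500,  500, 4000, 1000};
            battle(a, b);
        }

    Running it with the two example users reproduces the B, B, A, B, B, A sequence above; porting the loop to PHP is a direct translation.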

    Read the article

  • Material, Pass, Technique and shaders

    - by Papi75
    I'm trying to make a clean and advanced Material class for the rendering of my game. Here is my architecture:

        class Material
        {
            void sendToShader()
            {
                program->sendUniform( nameInShader, valueInMaterialOrOther );
            }

        private:
            Blend blendmode;         ///< Alpha, Add, Multiply, ...
            Color ambient;
            Color diffuse;
            Color specular;
            DrawingMode drawingMode; // Lines, Triangles, ...
            Program* program;
            std::map<string, TexturePacket> textures; // List of textures, with TexturePacket = { Texture*, vec2 offset, vec2 scale }
        };

    How can I handle the link between the Shader and the Material (the sendToShader method)? And if the user wants to send additional information to the shader (like the elapsed time), how can I allow that, given that the user can't edit the Material class? Thanks!
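    One pattern that answers the "additional information" part without touching the Material class is to let callers register uniform callbacks that the material invokes right after sending its own state. A minimal C++ sketch of that idea; the Program type here is a stand-in for the asker's shader wrapper, and only float uniforms are shown:

        #include <functional>
        #include <map>
        #include <string>
        #include <utility>
        #include <vector>

        struct Program {
            void sendUniform(const std::string& name, float value) { /* glUniform1f(...) etc. */ }
        };

        class Material {
        public:
            void setUniform(const std::string& name, float value) { floatUniforms[name] = value; }

            // Extra per-frame data the material itself doesn't know about.
            void addUniformCallback(std::function<void(Program&)> cb) { callbacks.push_back(std::move(cb)); }

            void sendToShader(Program& program) const {
                for (const auto& [name, value] : floatUniforms)
                    program.sendUniform(name, value);   // material-owned state
                for (const auto& cb : callbacks)
                    cb(program);                        // user-supplied state
            }

        private:
            std::map<std::string, float> floatUniforms;
            std::vector<std::function<void(Program&)>> callbacks;
        };

        // Usage sketch ('timer' is hypothetical):
        //   material.addUniformCallback([&](Program& p) { p.sendUniform("uTime", timer.elapsed()); });

    The material stays closed for modification, while each system (or the user) can still push whatever extra uniforms its shaders expect.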

    Read the article

  • Elliptical orbit modeling

    - by Nathon
    I'm playing with orbits in a simple 2-d game where a ship flies around in space and is attracted to massive things. The ship's velocity is stored in a vector and acceleration is applied to it every frame as appropriate given Newton's law of universal gravitation. The point masses don't move (there's only 1 right now) so I would expect an elliptical orbit. Instead, I see this: I've tried with nearly circular orbits, and I've tried making the masses vastly different (a factor of a million) but I always get this rotated orbit. Here's some (D) code, for context:

        void accelerate(Vector delta)
        {
            velocity = velocity + delta; // Velocity is a member of the ship class.
        }

        // This function is called every frame with the fixed mass. It's a
        // method of the ship's.
        void fall(Well well)
        {
            // f = (m1 * m2) / (r**2)
            // a = f / m
            // Ship mass is 1, so a = f.
            float mass = 1;
            Vector delta = well.position - loc;
            float rSquared = delta.magSquared;
            float force = well.mass / rSquared;
            accelerate(delta * force * mass);
        }
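    For comparison, here is a small self-contained C++ reference of the same force law with the direction vector explicitly normalized and a semi-implicit Euler step with a timestep; it is not a diagnosis of the code above, just the combination that tends to keep bound orbits close to ellipses instead of precessing. All types and constants are illustrative.

        #include <cmath>
        #include <cstdio>

        struct Vec2 { float x, y; };

        Vec2 operator+(Vec2 a, Vec2 b)  { return {a.x + b.x, a.y + b.y}; }
        Vec2 operator-(Vec2 a, Vec2 b)  { return {a.x - b.x, a.y - b.y}; }
        Vec2 operator*(Vec2 a, float s) { return {a.x * s, a.y * s}; }

        struct Ship { Vec2 pos, vel; };

        void step(Ship& ship, Vec2 wellPos, float wellMass, float dt)
        {
            Vec2 delta = wellPos - ship.pos;
            float r2 = delta.x * delta.x + delta.y * delta.y;
            float r = std::sqrt(r2);
            Vec2 dir = delta * (1.0f / r);          // normalize: acceleration ~ 1/r^2, not 1/r
            Vec2 accel = dir * (wellMass / r2);     // G folded into wellMass
            ship.vel = ship.vel + accel * dt;       // update velocity first...
            ship.pos = ship.pos + ship.vel * dt;    // ...then position (semi-implicit Euler)
        }

        int main()
        {
            Ship ship{{100.0f, 0.0f}, {0.0f, 3.0f}}; // illustrative starting conditions
            for (int i = 0; i < 1000; ++i)
                step(ship, {0.0f, 0.0f}, 1000.0f, 0.016f);
            std::printf("pos = (%f, %f)\n", ship.pos.x, ship.pos.y);
        }

    Note that multiplying an unnormalized delta by mass/r² gives an acceleration proportional to 1/r rather than 1/r², which is one common source of exactly this kind of rotating (precessing) orbit.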

    Read the article

  • 2D SAT Collision Detection not working when using certain polygons (With example)

    - by sFuller
    My SAT algorithm falsely reports that collision is occurring when using certain polygons. I believe this happens when using a polygon that does not contain a right angle. Here is a simple diagram of what is going wrong:

    Here is the problematic code:

        std::vector<vec2> axesB = polygonB->GetAxes();
        //loop over axes B
        for(int i = 0; i < axesB.size(); i++)
        {
            float minA, minB, maxA, maxB;
            polygonA->Project(axesB[i], &minA, &maxA);
            polygonB->Project(axesB[i], &minB, &maxB);
            float intervalDistance = polygonA->GetIntervalDistance(minA, maxA, minB, maxB);
            if(intervalDistance >= 0)
                return false; //Collision not occurring
        }

    This function retrieves axes from the polygon:

        std::vector<vec2> Polygon::GetAxes()
        {
            std::vector<vec2> axes;
            for(int i = 0; i < verts.size(); i++)
            {
                vec2 a = verts[i];
                vec2 b = verts[(i+1)%verts.size()];
                vec2 edge = b - a;
                axes.push_back(vec2(-edge.y, edge.x).GetNormailzed());
            }
            return axes;
        }

    This function returns the normalized vector:

        vec2 vec2::GetNormailzed()
        {
            float mag = sqrt( x*x + y*y );
            return *this/mag;
        }

    This function projects a polygon onto an axis:

        void Polygon::Project(vec2* axis, float* min, float* max)
        {
            float d = axis->DotProduct(&verts[0]);
            float _min = d;
            float _max = d;
            for(int i = 1; i < verts.size(); i++)
            {
                d = axis->DotProduct(&verts[i]);
                _min = std::min(_min, d);
                _max = std::max(_max, d);
            }
            *min = _min;
            *max = _max;
        }

    This function returns the dot product of the vector with another vector:

        float vec2::DotProduct(vec2* other)
        {
            return (x*other->x + y*other->y);
        }

    Could anyone give me a pointer in the right direction to what could be causing this bug?

    Edit: I forgot this function, which gives me the interval distance:

        float Polygon::GetIntervalDistance(float minA, float maxA, float minB, float maxB)
        {
            float intervalDistance;
            if (minA < minB) {
                intervalDistance = minB - maxA;
            } else {
                intervalDistance = minA - maxB;
            }
            return intervalDistance; //A positive value indicates this axis can be separated.
        }

    Edit 2: I have recreated the problem in HTML5/Javascript: Demo
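    For reference, a full SAT test gathers candidate axes from the edge normals of both polygons; testing only one polygon's axes can miss the separating axis for shapes without right angles and report a false overlap. A compact C++ sketch of the combined test, using its own small vector type rather than the classes above:

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct V2 { float x, y; };

        // Perpendicular of edge a->b, normalized (a candidate separating axis).
        static V2 edgeNormal(V2 a, V2 b)
        {
            V2 e{b.x - a.x, b.y - a.y};
            float len = std::sqrt(e.x * e.x + e.y * e.y);
            return {-e.y / len, e.x / len};
        }

        // Project every vertex onto 'axis' and keep the min/max of the projections.
        static void project(const std::vector<V2>& poly, V2 axis, float& mn, float& mx)
        {
            mn = mx = poly[0].x * axis.x + poly[0].y * axis.y;
            for (const V2& p : poly) {
                float d = p.x * axis.x + p.y * axis.y;
                mn = std::min(mn, d);
                mx = std::max(mx, d);
            }
        }

        bool polygonsOverlap(const std::vector<V2>& a, const std::vector<V2>& b)
        {
            const std::vector<V2>* polys[2] = { &a, &b };
            for (const std::vector<V2>* poly : polys) {           // axes from BOTH polygons
                for (size_t i = 0; i < poly->size(); ++i) {
                    V2 axis = edgeNormal((*poly)[i], (*poly)[(i + 1) % poly->size()]);
                    float minA, maxA, minB, maxB;
                    project(a, axis, minA, maxA);
                    project(b, axis, minB, maxB);
                    if (maxA < minB || maxB < minA)               // gap found -> separated
                        return false;
                }
            }
            return true;                                          // no separating axis found
        }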

    Read the article

  • Issues with glVertexAttribPointer's last 2 parameters?

    - by NoobScratcher
    Introduction: Hello, I will start out by explaining my setup, showing samples as I go along explaining the situation. I'm using these tools:

    - OpenGL 3.3
    - GLSL 330
    - C++

    Problem: The problem is when I render the wavefront .obj 3D model it gives a very weird visual glitch; the model was supposed to be a square, but instead it's a triangulated mess, with parts of the vertices stretched in massive amounts towards the bottom left side of the frustum.

    Explanation: I'm using std::vectors to store my wavefront .obj model data, using sscanf to get the floating point values into the structure members x, y, z and store them into the Points structure variable p:

        int index = IndexAssigner(1, 1);
        ifstream file (list[index].c_str() );
        points.push_back(Point());
        Point p;
        int face[4];
        while (!file.eof() )
        {
            char modelbuffer[10000];
            file.getline(modelbuffer, 10000);
            switch(modelbuffer[0])
            {
                case 'v':
                    sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                    points.push_back(p);
                    break;
                case 'f':
                    sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3 );
                    faces.push_back(face[0]);
                    faces.push_back(face[1]);
                    faces.push_back(face[2]);
                    faces.push_back(face[3]);
            }
            //Turn on FileReader aka "RENDER CODE"
            FileReader = true;
        }

    Then I render the Points vector, using the .data() member of std::vector, to the frustum.

    Other declarations:

        int numfloats = 4;
        float* point = reinterpret_cast<float*>(&points[0]);
        int num_bytes = numfloats * sizeof(float);

    Vector declarations:

        struct Point { float x, y, z; };
        std::vector<int> faces;
        std::vector<Point> points;

    Render code:

        glGenBuffers(1, &vertexbuffer);
        glGenTextures(1, &ModelTexture);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBindTexture(GL_TEXTURE_3D, ModelTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ModelSurface->w, ModelSurface->h, 0, GL_BGR, GL_UNSIGNED_BYTE, ModelSurface->pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glBufferData(GL_ARRAY_BUFFER, sizeof(points), points.data(), GL_STATIC_DRAW);
        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, num_bytes, points.data());
        glEnableVertexAttribArray(3);

        //Translation Process
        GLfloat TranslationMatrix[] = {
            1.0, 0.0, 0.0, 0.0,
            0.0, 1.0, 0.0, 0.0,
            0.0, 0.0, 1.0, 1.0,
            0.0, 0.0, 0.0, 1.0
        };
        //Send Translation Matrix up to the vertex shader
        glUniformMatrix4fv(translation, 1, TRUE, TranslationMatrix);
        glDrawElements( GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());

    I tried looking at what was causing this and went through every function, every parameter, etc., and looked at the man pages. Then I found out that it could be my glVertexAttribPointer. Here is the man page for glVertexAttribPointer: http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml The last 2 parameters are my problem. How do I write those last 2 parameters? Do I try putting the data from Points into it?

        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, num_bytes, points.data());

    How does it work with vectors? Is it fast? If you cannot be bothered to look at the man pages, here is the text coming from the man pages directly:

    Stride: Specifies the byte offset between consecutive generic vertex attributes. If stride is 0, the generic vertex attributes are understood to be tightly packed in the array. The initial value is 0.

    Pointer: Specifies a pointer to the first component of the first generic vertex attribute in the array. The initial value is 0.

    If you want my full source: http://ideone.com/fPfkg Thanks again if you do read this.
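    For the two parameters in question, the key point is that while a buffer is bound to GL_ARRAY_BUFFER, the final argument is interpreted as a byte offset into that buffer rather than a CPU pointer. A short sketch under that assumption (GLEW-style loader, position-only vertices):

        #include <GL/glew.h>
        #include <cstddef>

        struct Point { float x, y, z; };

        void setupAttrib(GLuint vbo, GLuint attribIndex)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            // With a VBO bound, the last parameter is a *byte offset into the buffer*,
            // so it is passed as a casted offset, not as points.data().
            glVertexAttribPointer(attribIndex,
                                  3,                 // 3 floats per position
                                  GL_FLOAT, GL_FALSE,
                                  sizeof(Point),     // stride: bytes from one vertex to the next (or 0 if tightly packed)
                                  (const void*)0);   // offset of this attribute inside a vertex
            glEnableVertexAttribArray(attribIndex);
        }

        // Note: glBufferData should be given points.size() * sizeof(Point) bytes;
        // sizeof(points) on a std::vector is only the size of the vector object itself.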

    Read the article

  • Object pools for efficient resource management

    - by GameDevEnthusiast
    How can I avoid using default new() to create each object? My previous demo had very unpleasant framerate hiccups during dynamic memory allocations (usually when arrays are resized), and creating lots of small objects which often contain one pointer to some DirectX resource seems like an awful lot of waste. I'm thinking about:

    - Creating a master look-up table to refer to objects by handles (for safety & ease of serialization), much like EntityList in the Source engine.
    - Creating a templated object pool, which will store items contiguously (more cache-friendly, fast iteration, etc.); the stored elements will be accessed (by external systems) via the global lookup table. The object pool will use the swap-with-last trick for fast removal (it will invoke the object's destructor first) and will update the corresponding indices in the global table accordingly (when growing/shrinking/moving elements). The elements will be copied via plain memcpy().

    Is it a good idea? Will it be safe to store objects of non-POD types (e.g. pointers, vtable) in such containers? Related post: Dynamic Memory Allocation and Memory Management
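    A minimal sketch of that pool-plus-handle-table combination (names and details are assumptions, not from the post); the main difference from the description above is that it moves elements with their move-assignment operator instead of memcpy(), which is what makes non-POD members (pointers, vtables, RAII handles) safe to store:

        #include <cstddef>
        #include <cstdint>
        #include <utility>
        #include <vector>

        template <typename T>
        class ObjectPool {
        public:
            using Handle = uint32_t;

            template <typename... Args>
            Handle create(Args&&... args) {
                Handle h = static_cast<Handle>(slotOfHandle.size());
                slotOfHandle.push_back(items.size());
                handleOfSlot.push_back(h);
                items.emplace_back(std::forward<Args>(args)...);
                return h;
            }

            T&       get(Handle h)       { return items[slotOfHandle[h]]; }
            const T& get(Handle h) const { return items[slotOfHandle[h]]; }

            void destroy(Handle h) {
                std::size_t slot = slotOfHandle[h];
                std::size_t last = items.size() - 1;
                if (slot != last) {
                    items[slot] = std::move(items[last]);    // swap-with-last keeps storage dense
                    handleOfSlot[slot] = handleOfSlot[last]; // fix up the moved element's handle
                    slotOfHandle[handleOfSlot[slot]] = slot;
                }
                items.pop_back();
                handleOfSlot.pop_back();
                slotOfHandle[h] = SIZE_MAX;                  // mark handle as dead (no reuse in this sketch)
            }

            std::size_t size() const { return items.size(); }
            T* data() { return items.data(); }               // contiguous, cache-friendly iteration

        private:
            std::vector<T>           items;        // dense storage
            std::vector<std::size_t> slotOfHandle; // handle -> slot
            std::vector<Handle>      handleOfSlot; // slot -> handle (for the swap fix-up)
        };

    Handles stay valid across the swap because the table is patched whenever an element changes slots; a production version would also want generation counters so stale handles can be detected, and handle recycling so the table doesn't grow forever.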

    Read the article

  • Drawing large 2D sidescroller level terrain

    - by Yar
    I'm a relatively good programmer, but now that it comes to adding some basic levels to my 2D game, I'm kinda stuck. What I want to do: an acceptable, large (8000 * 1000 pixels) "green hills" test level for my game. What is the best way for me to do this? It doesn't have to look great; it just shouldn't look like it was made in MS Paint with the line and paint bucket tools. Basically it should just be mud with grass on top of it, shaped into some form of hills. But how should I draw it? I can't just take out the pencil tool and start drawing it pixel by pixel, can I?

    Read the article

  • Any good web frameworks for asynchronous multiplayer games?

    - by Steven Stadnicki
    I'm trying to craft a site for web-based (original) board games, and my client (currently written in Actionscript, but that's highly fungible) works fine - I can play solitaire games in the client - but it has nothing to connect to. What I'm looking for is a server framework for handling accounts/authentication and game tracking: something that would let players log in, show them a list of their current games, let them invite friends to new games, let them make moves in the games they have open, etc. I'm flexible on language; obviously I'm going to have to write a lot of server code to handle the actual game logic, but that should be straightforward enough. I'm more concerned with how to handle the user (and game) DBs, though suggestions for a good server framework for communicating with the DBs (and serving up, most likely, JSON for client communications) are also welcome. Right now my leaning is towards Ruby (probably with Rails) but as far as I can determine it would be a pretty good chunk of effort to set up the necessary databases, so having something even higher-level would be really useful to me.

    Read the article

  • Reacting to rectangle on rectangle collisions

    - by mcjohnalds45
    I don't know how to react to collisions between two axis-aligned rectangles that have x, y, width and height values (x and y are from the centre of the box), so that they simply stop overlapping. I figured I'd just make them move away from each other depending on how far they intersect, in the direction (left, right, up or down) opposite to where they collided. If I check for collisions only on the x axis or only on the y axis it works fine, but when checking for both collisions crazy stuff happens. This code executes when the first box collides with the second. It's in Lua, but feel free to answer in anything that isn't too counter-intuitive.

        if box1.x < box2.x then
            box1.x = box1.x + box2.x - box1.x - (box1.width / 2) - (box2.width / 2)
        end
        if box1.x > box2.x then
            box1.x = box1.x - (box1.x - box2.x - (box1.width / 2) - (box2.width / 2))
        end
        if box1.y < box2.y then
            box1.y = box1.y + box2.y - box1.y - (box1.height / 2) - (box2.height / 2)
        end
        if box1.y > box2.y then
            box1.y = box1.y - (box1.y - box2.y - (box1.height / 2) - (box2.height / 2))
        end
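    The usual fix for the "crazy stuff" is to resolve along one axis only: compute the overlap on x and on y, and push the box out along whichever overlap is smaller. A small C++ sketch of that minimum-penetration resolution, using an illustrative Box struct with centre coordinates like the one above:

        #include <cmath>

        struct Box { float x, y, width, height; };   // x, y = centre of the box

        void resolveOverlap(Box& a, const Box& b)
        {
            float dx = b.x - a.x;
            float dy = b.y - a.y;
            float overlapX = (a.width  + b.width)  / 2.0f - std::fabs(dx);
            float overlapY = (a.height + b.height) / 2.0f - std::fabs(dy);
            if (overlapX <= 0.0f || overlapY <= 0.0f)
                return;                                        // not actually intersecting

            if (overlapX < overlapY)
                a.x -= (dx > 0.0f ? overlapX : -overlapX);     // push out horizontally only
            else
                a.y -= (dy > 0.0f ? overlapY : -overlapY);     // push out vertically only
        }

    Correcting both axes at once is what makes boxes jump diagonally out of shallow contacts; picking the smaller penetration gives the familiar "slide along the surface" behaviour.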

    Read the article

  • Different ways to pass Textures into HLSL shaders

    - by codymanix
    The GraphicsDevice class in XNA 4 has the properties Textures and VertexTextures. What is the exact difference? I don't really understand what MSDN tells me about this. I usually use Effect parameters to pass textures to my HLSL shaders. What are the differences between these methods, and which is faster? My scenario: I am working on a Minecraft-like game, which means lots of separate DrawPrimitives calls, and the current texture changes often since I have lots of different block types. Since I use an octree to organize the world, I cannot easily sort by texture.

    Read the article

  • Minecraft style XNA game collision?

    - by Levi
    I've been trying to get this working for ages now. I can detect if there's a solid block at any place on the map, and I can check how far something is inside of it, but I don't understand how to fix the collision. I've tried loads of ways, and all of them end up with the player getting stuck, glitching around, or responding incorrectly, and I really have no idea how to go about this :/

        int Chnk = Utility.GetChunkFromPosition(origin);
        if (Chnk == -1)
            return;

        Vector3 Pos = Utility.GetCubeVectorFromPosition(origin);

        if (GlobalWorld.LoadedChunks[Chnk].Blocks[(byte)Pos.X, (byte)Pos.Y, (byte)Pos.Z] != 0)
        {
            isInIllegalState = true;
            if (velocity.Y < 0f)
                velocity.Y = 0f;
        }

        while (isInIllegalState)
        {
            if (GlobalWorld.LoadedChunks[Chnk].Blocks[(byte)Pos.X, (byte)origin.Y, (byte)Pos.Z] != 0)
                origin.Y = (int)(origin.Y + 1);
            else
                isInIllegalState = false;
        }

        if (origin.Y < Chunk.YSize - 2 && GlobalWorld.LoadedChunks[Chnk].Blocks[(byte)Pos.X, (byte)(origin.Y + playerHeight.Y), (byte)Pos.Z] != 0)
        {
            velocity.Y = 0f;
            //Acceleration.Y = 0f;
            origin.Y = (int)origin.Y; // -0.5f;
        }

        for (int x = -1; x <= 1; x += 2)
        {
            for (int z = -1; z <= 1; z += 2)
            {
                Vector3 CornerPosition = new Vector3(boundingSize * x, 0, boundingSize * z);
                bool CorrectX = false;
                bool CorrectZ = false;
                Vector3 RoundedOrigin = Utility.RoundVector(origin);
                Vector3 RoundedCorner = Utility.RoundVector(origin + CornerPosition);
                byte BlockAdjacent = Utility.GetCubeFromPosition(origin + CornerPosition);
                if (BlockAdjacent == 0)
                    continue;

                if (RoundedCorner.X != RoundedOrigin.X && RoundedCorner.Z != RoundedOrigin.Z)
                {
                    CorrectX = true;
                    CorrectZ = true;
                }
                if (RoundedCorner.Z != RoundedOrigin.Z && RoundedCorner.X == RoundedOrigin.X)
                    CorrectZ = true;
                if (RoundedCorner.X != RoundedOrigin.X && RoundedCorner.Z == RoundedOrigin.Z)
                    CorrectX = true;

                if (CorrectX && CornerPosition.X > 0)
                {
                    if (origin.X > 0f)
                        origin.X = (int)(origin.X + 1) - boundingSize;
                    else
                        origin.X = (int)origin.X - boundingSize;
                }
                else if (CorrectX && CornerPosition.X < 0)
                {
                    if (origin.X > 0f)
                        origin.X = (int)(origin.X) + boundingSize;
                    else
                        origin.X = (int)(origin.X - 1) + boundingSize;
                }

                if (CorrectZ && CornerPosition.Z > 0)
                {
                    if (origin.Z > 0f)
                        origin.Z = (int)(origin.Z + 1) - boundingSize;
                    else
                        origin.Z = (int)origin.Z - boundingSize;
                }
                else if (CorrectZ && CornerPosition.Z < 0)
                {
                    if (origin.Z > 0f)
                        origin.Z = (int)(origin.Z) + boundingSize;
                    else
                        origin.Z = (int)(origin.Z - 1) + boundingSize;
                }
            }
        }

    Read the article

  • Dynamic body implementation

    - by ArturoVM
    I am writing a 2D game where one of the characters has some very particular requirements. This character is a body with no particular shape (similar to a fluid, but not so much), it has to be able to grow and shrink (as in actually growing, not just scaling), and it has to have collision detection (even if it's basic). Because of this requirements, it obviously can't be based on a sprite, so direct rendering of the shape should be the logical thing to do. I assume this is no easy task, but I just couldn't find a good physics engine that covers these requirements (or at least no tutorial on how to do it; I particularly searched for Box2D tutorials). Is there a way of doing this with Box2D, SDL, or any other physics or game engine out there? If not, what's a good place to start? I am really clueless as far as soft-body physics are concerned.

    Read the article

  • Smooth pixels while rotating sprite

    - by goodm
    I just started with AndEngine, so this may be a silly question. How do I make my sprites smoother while I rotate them? Or maybe it's because this is a screenshot from a tablet? Thanks JohnEye, it works: I just needed to change my BitmapTextureAtlas from:

        BitmapTextureAtlas carAtlas = new BitmapTextureAtlas(this.getTextureManager(), 100, 63);

    to:

        BitmapTextureAtlas carAtlas = new BitmapTextureAtlas(this.getTextureManager(), 100, 63, TextureOptions.BILINEAR);

    Read the article

  • Making AI jump on a spot effectively

    - by Pasquale Sada
    How do I calculate, in a 3D environment, the closest point from which an AI character can jump onto a platform? Setup: I have an initial velocity V(Vx,Vy,Vz) and a spot where the character stands still at S(Sx,Sy,Sz). What I'm trying to achieve is a successful jump onto a spot E(Ex,Ey,Ez) that you have clicked on (only a lower or higher spot, because I already have a simple steering behavior in place for even terrain). There are no obstacles around. I've implemented a formula that can make him jump precisely onto a spot, but you need to declare an angle: the problem arises when the selected spot is straight above his head. It's pretty lame that the character just hangs there and can't reach a thing that is 1 cm above his head. I'll share the code I'm using:

        Vector3 dir = target - transform.position;  // get target direction
        float h = dir.y;                            // get height difference
        dir.y = 0;                                  // retain only the horizontal direction
        float dist = dir.magnitude;                 // get horizontal distance
        float a = angle * Mathf.Deg2Rad;            // convert angle to radians
        dir.y = dist * Mathf.Tan(a);                // set dir to the elevation angle
        dist += h / Mathf.Tan(a);                   // correct for small height differences
        // calculate the velocity magnitude
        float vel = Mathf.Sqrt(dist * Physics.gravity.magnitude / Mathf.Sin(2 * a));
        return vel * dir.normalized;

    I ended up using the lowest angle (20 degrees) and checking for collisions along the trajectory; if any are found, increase the angle. Here is some code (to improve it, maybe the check should stop at the highest point of the curve):

        Vector3 BallisticVel(Vector3 target, float angle)
        {
            Vector3 dir = target - transform.position; // get target direction
            float h = dir.y;                           // get height difference
            dir.y = 0;                                 // retain only the horizontal direction
            float dist = dir.magnitude;                // get horizontal distance
            float a = angle * Mathf.Deg2Rad;           // convert angle to radians
            dir.y = dist * Mathf.Tan(a);               // set dir to the elevation angle
            dist += h / Mathf.Tan(a);                  // correct for small height differences
            // calculate the velocity magnitude
            float vel = Mathf.Sqrt(dist * Physics.gravity.magnitude / Mathf.Sin(2 * a));
            return vel * dir.normalized;
        }

        Vector3 TrajectoryPoint(Vector3 startingPosition, Vector3 startingVelocity, float n)
        {
            float t = 1/60;                                 // seconds per time step
            Vector3 stepVelocity = t * startingVelocity;    // m/s
            Vector3 stepGravity = t * t * Physics.gravity;  // m/s/s
            return startingPosition + n * stepVelocity + 0.5f * (n*n+n) * stepGravity;
        }

        bool CheckTrajectory(Vector3 startingPosition, Vector3 target, float angle_jump)
        {
            Debug.Log("checking");
            if(angle_jump < 80f)
            {
                Debug.Log("if");
                Vector3 startingVelocity = BallisticVel(target, angle_jump);
                for (int i = 0; i < 180; i++)
                {
                    //Debug.Log(i);
                    Vector3 trajectoryPosition = TrajectoryPoint(startingPosition, startingVelocity, i);
                    if(Physics.Raycast(trajectoryPosition, Vector3.forward, safeDistance))
                    {
                        angle_jump += 10;
                        break; // restart loop with the new angle
                    }
                    else
                        continue;
                }
                return true;
                JumpVelocity = BallisticVel(target, angle_jump);
            }
            return false;
        }

    Read the article

  • Circle-Rectangle collision in a tile map game

    - by furiousd
    I am making a 2D tile-map based putt-putt game. I have collision detection working between the ball and the walls of the map, although when the ball collides at the meeting point between 2 tiles I offset it by 0.5 so that it doesn't get stuck in the wall. This ain't a huge issue though.

        if(y % 20 == 0) { y += 0.5; }
        if(x % 20 == 0) { x += 0.5; }

    Collisions work as follows:
    - Find the closest point between each tile and the center of the ball.
    - If distance(ball_x, ball_y, close_x, close_y) <= ball_radius and the closest point belongs to a solid object, a collision has occurred.
    - Invert the X/Y speed according to the side of the object collided with.

    The next thing I tried to do was implement floating blocks in the middle of the map for the ball to bounce off of. When a ball collides with a corner of the block, it gets stuck in it. So I changed my determineRebound() function to treat corners as if they were circles. Here's that function (i and j are indexes of the solid object in the 2D map array; x and y are the centre point of the ball):

        void determineRebound(int _i, int _j)
        {
            if(y > _i*tile_w && y < _i*tile_w + tile_w)
            {
                // Not a corner
                xs *= -1;
            }
            else if(x > _j*tile_w && x < _j*tile_w + tile_w)
            {
                // Not a corner
                ys *= -1;
            }
            else
            {
                // Corner
                float nx = x - close_x;
                float ny = y - close_y;
                float len = sqrt(nx * nx + ny * ny);
                nx /= len;
                ny /= len;
                float projection = xs * nx + ys * ny;
                xs -= 2 * projection * nx;
                ys -= 2 * projection * ny;
            }
        }

    This is where things have gotten messy. Collisions with 'floating' corners work fine, but now when the ball collides near the meeting point of 2 tiles, it detects a corner collision and does not rebound as expected. I'm a bit in over my head at this point. I guess I'm wondering if I'm going about making this sort of game in the right way. Is a 2D tile map the way to go? If so, is there a problem with my collision logic, and where am I going wrong? Any advice/feedback would be great.
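    For comparison, here is a compact C++ sketch (illustrative struct and names, not the asker's variables) of the standard circle-vs-AABB response: clamp the ball centre to the tile to find the closest point, then reflect the velocity about the contact normal. When the closest point lies on a face the normal comes out axis-aligned (a plain bounce); only at a true exterior corner is it diagonal.

        #include <cmath>

        struct Ball { float x, y, vx, vy, radius; };

        inline float clampf(float v, float lo, float hi) { return v < lo ? lo : (v > hi ? hi : v); }

        bool bounceOffTile(Ball& b, float tileX, float tileY, float tileSize)
        {
            float closestX = clampf(b.x, tileX, tileX + tileSize);
            float closestY = clampf(b.y, tileY, tileY + tileSize);
            float nx = b.x - closestX;
            float ny = b.y - closestY;
            float distSq = nx * nx + ny * ny;
            if (distSq > b.radius * b.radius || distSq == 0.0f)
                return false;                              // no contact (or centre inside the tile)

            float dist = std::sqrt(distSq);
            nx /= dist;                                    // contact normal
            ny /= dist;
            float vDotN = b.vx * nx + b.vy * ny;
            if (vDotN < 0.0f) {                            // only reflect if moving into the tile
                b.vx -= 2.0f * vDotN * nx;
                b.vy -= 2.0f * vDotN * ny;
            }
            b.x = closestX + nx * b.radius;                // push the ball out of the tile
            b.y = closestY + ny * b.radius;
            return true;
        }

    For walls built from several adjacent tiles, resolving the deepest contact first (or merging neighbouring solid tiles into larger rectangles) helps avoid spurious corner hits at the seams.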

    Read the article
