Search Results

Search found 38203 results on 1529 pages for 'library development'.


  • Is 2 lines of push/pop code for each pre-draw-state too many?

    - by Griffin
    I'm trying to simplify vector graphics management in XNA, currently by incorporating state preservation. 2X lines of push/pop code for X states feels like too many, and it just feels wrong to have two lines of code that look identical except for one being push() and the other being pop(). The goal is to eliminate this repetitiveness, and I hoped to do so by creating an interface through which a client can pass references to the classes/structs they want restored after the rendering calls. Also note that many beginner programmers will be using this, so forcing lambda expressions or other advanced C# features into client code is not a good idea. I attempted to accomplish my goal by using Daniel Earwicker's Ptr class: public class Ptr<T> { Func<T> getter; Action<T> setter; public Ptr(Func<T> g, Action<T> s) { getter = g; setter = s; } public T Deref { get { return getter(); } set { setter(value); } } } an extension method: //doesn't work for structs since this is just syntactic sugar public static Ptr<T> GetPtr <T> (this T obj) { return new Ptr<T>( ()=> obj, v=> obj=v ); } and a Push function: //returns a Pop Action for later calling public static Action Push <T> (ref T structure) where T: struct { T pushedValue = structure; //copies the struct data Ptr<T> p = structure.GetPtr(); return new Action( ()=> {p.Deref = pushedValue;} ); } However, this doesn't work, as noted in the code comments. How might I accomplish my goal? Example of code to be refactored: protected override void RenderLocally (GraphicsDevice device) { if (!(bool)isCompiled) {Compile();} //TODO: make sure state settings don't implicitly delete any buffers/resources RasterizerState oldRasterState = device.RasterizerState; DepthFormat oldFormat = device.PresentationParameters.DepthStencilFormat; DepthStencilState oldBufferState = device.DepthStencilState; { //Rendering code } device.RasterizerState = oldRasterState; device.DepthStencilState = oldBufferState; device.PresentationParameters.DepthStencilFormat = oldFormat; }
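
    One way to collapse each save/restore pair into a single statement, without exposing lambdas to client code, is a small IDisposable scope that snapshots the device state when it is created and restores it when it is disposed. Below is a minimal C# sketch of that idea; DeviceStateScope is a hypothetical helper (not part of XNA) and only covers the commonly toggled render states, but the DepthStencilFormat line from the example above could be captured the same way.

        using System;
        using Microsoft.Xna.Framework.Graphics;

        // Hypothetical helper: snapshots a few GraphicsDevice states on
        // construction and restores them on Dispose.
        public sealed class DeviceStateScope : IDisposable
        {
            private readonly GraphicsDevice device;
            private readonly RasterizerState rasterizer;
            private readonly DepthStencilState depthStencil;
            private readonly BlendState blend;

            public DeviceStateScope(GraphicsDevice device)
            {
                this.device = device;
                rasterizer = device.RasterizerState;
                depthStencil = device.DepthStencilState;
                blend = device.BlendState;
            }

            public void Dispose()
            {
                device.RasterizerState = rasterizer;
                device.DepthStencilState = depthStencil;
                device.BlendState = blend;
            }
        }

    Client code then becomes a single using block, e.g. using (new DeviceStateScope(device)) { /* rendering code */ }, so the restore can never be forgotten or mismatched.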


  • Viewport / Camera Calculation in 2D Game

    - by Dave
    We have a 2D game with some sprites and tiles and a camera/viewport that "moves" around the scene. So far so good, except that we want some special behaviour for the camera/viewport translation. Normally you could stick the camera to your player figure and center it, resulting in a very cheap translation equation like: vec_translation -/+= speed (depending on which keys are pressed; WASD by default). But we want the player figure to be able to actually reach the scene bounds once the viewport/camera has reached its maximum translation. We came up with the following solution (only keys A and D are shown here; the rest is just an adaptation of the calculation, or maybe your more elegant solution): if(keys[A]) { playerX -= speed; if(playerScreenX <= width / 2 && tx < 0) { playerScreenX = width / 2; tx += speed; } else if(playerScreenX <= width / 2 && (tx) >= 0) { playerScreenX -= speed; tx = 0; if(playerScreenX < 0) playerScreenX = 0; } else if(playerScreenX >= width / 2 && (tx) < 0) { playerScreenX -= speed; } } if(keys[D]) { playerX += speed; if(playerScreenX >= width / 2 && (-tx + width) < sceneWidth) { playerScreenX = width / 2; tx -= speed; } if(playerScreenX >= width / 2 && (-tx + width) >= sceneWidth) { playerScreenX += speed; tx = -(sceneWidth - width); if(playerScreenX >= width - player.width) playerScreenX = width - player.width; } if(playerScreenX <= width / 2 && (-tx + width) < sceneWidth) { playerScreenX += speed; } } I think the code is rather self-explanatory: keys is a flag container for the currently active keys, playerX/-Y is the position of the player relative to the world origin, tx/ty are the translation components used for background/NPC/item offset calculation, playerScreenX/-Y is the actual position of the player figure (sprite) on screen, and width/height are the dimensions of the camera/viewport. This all looks quite nice and works well, but there is a very small calculation error which sums up to a visible effect. Consider the following piece of code: if(playerScreenX <= width / 2 && tx < 0) { playerScreenX = width / 2; tx += speed; } It can be translated into plain English as: if the x position of the player figure on screen is less than or equal to half the display/camera/viewport width AND there is enough space left LEFT of the viewport/camera, then set the player's x position on screen to half the width and increase the translation (because we subtract the translation from anything we want to move). Easy, right? Doing this creates a small delta between playerX and playerScreenX. After so much talking, here is my question: how do I tie the calculation of the player-on-screen position to the actual position of the player while having a viewport that is not always centered around the player figure? Here is a small test case in Processing: http://pastebin.com/bFaTauaa Thank you for reading, and thank you in advance for answering my question.
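
    The drift usually disappears if the on-screen position is never integrated separately but derived each frame from a single source of truth, the player's world position, with the camera clamped to the scene. A minimal C# sketch of that idea follows (names are illustrative, not taken from the code above; tx in the code above corresponds to -cameraX here).

        using System;

        static class CameraMath
        {
            // Given the player's world position and the view/scene sizes, derive
            // the camera offset and the player's on-screen position in one step.
            public static (float cameraX, float playerScreenX) Solve(
                float playerX, float viewWidth, float sceneWidth)
            {
                // Center the camera on the player, then clamp it to the scene bounds.
                float cameraX = playerX - viewWidth / 2f;
                cameraX = Math.Max(0f, Math.Min(cameraX, sceneWidth - viewWidth));

                // The screen position is always world minus camera, so no separate
                // playerScreenX bookkeeping exists that could drift.
                float playerScreenX = playerX - cameraX;
                return (cameraX, playerScreenX);
            }
        }

    Because playerScreenX is recomputed from playerX every frame, the two values can never accumulate a delta, and the camera naturally stops at the edges while the player keeps moving.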


  • Is AGS outdated for Point & Click Adventures?

    - by Aidan Moore
    Is Adventure Game Studio (AGS) outdated? I am working on a point-and-click adventure game built on the AGS engine, and just recently the question of 'is this outdated?' has come up. I'll admit AGS is rather old and kind of went out of style along with the P&C genre itself, but I have not found anything quite like it that specializes in this specific format of games. So my big question is not only 'is this outdated?' but also 'is there a better alternative?'


  • Linking one uniform variable to many shaders

    - by Winged
    Let's say, that I have 3 programs, and in each of those programs there is a view matrix uniform, which should be the same in all those programs. Right now, when my camera moves, I need to re-upload the modified matrix to every program separately. Is it possible to create some kind of global uniforms which are constant for all programs linked to it, so I could just upload the matrix once? I tried creating a globalUniforms object which looked kinda like this: var globalUniforms = { program: {}, // (...) vMatrixUniform: null, // (...) initialize: function() { vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix'); } }; So I could just link it to proper programs like this: program.vMatrixUniform = globalUniforms.vMatrixUniform;, and then pass the matrix like this: if (camera.isDirty.viewMatrix !== false) { camera.isDirty.viewMatrix = false; gl.uniformMatrix4fv(globalUniforms.vMatrixUniform, false, camera.viewMatrix.element); } but unfortunately it throws an error: Uncaught exception: gl.INVALID_VALUE was caused by call to: getUniformLocation called from line 272, column 2 in () in mysite/js/mesh.js: vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix'); Summing up: is there a more efficient way of managing shaders which follows my logic?


  • How to do Cross Platform in own Engine? [on hold]

    - by Mineorbit
    At the moment I have finished my first game with my game engine (if I can call it that), which is based on LWJGL. Now I'm wondering whether I could make my engine cross-platform. I built a tool with a batch file that compiles my project directory into an .exe. As a first step I'm looking to do the same for Android with a comparable batch file; a link to a tutorial would be awesome! Next up would be the renderer and the audio system. I have read that there is an OpenGL ES renderer, and I have already played around a bit with the Android SDK. But I currently use the Texture and Audio classes from slick-util, so I thought about creating OOP classes that carry the data around and load it into a platform-specific buffer. A link to an equally easy-to-use Texture or Audio class would be awesome! That's all for now. Thanks, Mineorbit!


  • Cocos2d Tiled Dynamic Object Layer

    - by Rodrigo Camargo
    I'm trying to develop a cocos2d tile-based game using a sort of 'dynamic' object layer. What I want is that, after the tiled map is loaded, the user can drag something onto the map, and that becomes an event when the 'hero' passes over it. I know how to build an object layer in Tiled, but it seems that it is for fixed positions, and what I want is a dynamic action position based on what the user selects. For instance, the user can drag a rock onto a tile, and when the character hits that rock he may die, or something. I'm a little lost about how to make it work. Do you have any idea of what I should use or what I should look for? Thanks in advance!


  • Rotating wheel with touch adding velocity

    - by Lewis
    I have a wheel control in a game which is set up like so: - (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; CGPoint location = [touch locationInView:[touch view]]; location = [[CCDirector sharedDirector] convertToGL:location]; if (CGRectContainsPoint(wheel.boundingBox, location)) { CGPoint firstLocation = [touch previousLocationInView:[touch view]]; CGPoint location = [touch locationInView:[touch view]]; CGPoint touchingPoint = [[CCDirector sharedDirector] convertToGL:location]; CGPoint firstTouchingPoint = [[CCDirector sharedDirector] convertToGL:firstLocation]; CGPoint firstVector = ccpSub(firstTouchingPoint, wheel.position); CGFloat firstRotateAngle = -ccpToAngle(firstVector); CGFloat previousTouch = CC_RADIANS_TO_DEGREES(firstRotateAngle); CGPoint vector = ccpSub(touchingPoint, wheel.position); CGFloat rotateAngle = -ccpToAngle(vector); CGFloat currentTouch = CC_RADIANS_TO_DEGREES(rotateAngle); wheelRotation += (currentTouch - previousTouch) * 0.6; //limit speed 0.6 } } I update the rotation of the wheel in the update method with: wheel.rotation = wheelRotation; Now, once the user lets go of the wheel, I want it to rotate back to where it was before, but not without taking into account the velocity of the swipe the user made. This is the bit I really can't get my head around: if the swipe generates a lot of velocity, the wheel should carry on moving slightly in that direction until the force that pulls the wheel back to the starting position takes over. Any ideas/code snippets?
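
    One common approach is to measure an angular velocity while the finger is down and then, after release, integrate a damped spring toward the rest angle every frame: the fling carries the wheel past its target at first, then the spring brings it home. A rough C# sketch of the math follows; the class name and tuning constants are made up, and the same arithmetic drops straight into the cocos2d update method.

        // Hypothetical wheel controller; the touch handler above would feed it.
        class WheelSpring
        {
            public float Rotation;        // degrees, mirrors wheelRotation
            public float RestAngle;       // angle the wheel should settle at
            public float AngularVelocity; // degrees/second, measured while dragging

            // Call from the touch handler with the same angle delta it already computes.
            public void OnDrag(float previousTouch, float currentTouch, float dt)
            {
                float delta = (currentTouch - previousTouch) * 0.6f; // same speed limit
                Rotation += delta;
                AngularVelocity = delta / dt;
            }

            // Call every frame once the finger is up: a damped spring pulls the wheel
            // back while the fling velocity carries it a little further first.
            public void OnReleaseUpdate(float dt)
            {
                const float stiffness = 30f; // spring strength (tune to taste)
                const float damping = 8f;    // kills the fling over time
                float displacement = Rotation - RestAngle;
                AngularVelocity += (-stiffness * displacement - damping * AngularVelocity) * dt;
                Rotation += AngularVelocity * dt;
            }
        }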


  • Why does my 3D model not translate the way I expect? [closed]

    - by ChocoMan
    In my first image, my model displays correctly: But when I move the model's position along the Z-axis (forward) I get this, yet the Y-axis doesn't change. And if I keep going, the model disappears into the ground: Any suggestions as to how I can get the model to translate properly visually? Here is how I'm calling the model and the terrain in draw(): cameraPosition = new Vector3(camX, camY, camZ); // Copy any parent transforms. Matrix[] transforms = new Matrix[mShockwave.Bones.Count]; mShockwave.CopyAbsoluteBoneTransformsTo(transforms); Matrix[] ttransforms = new Matrix[terrain.Bones.Count]; terrain.CopyAbsoluteBoneTransformsTo(ttransforms); // Draw the model. A model can have multiple meshes, so loop. foreach (ModelMesh mesh in mShockwave.Meshes) { // This is where the mesh orientation is set, as well // as our camera and projection. foreach (BasicEffect effect in mesh.Effects) { effect.EnableDefaultLighting(); effect.PreferPerPixelLighting = true; effect.World = transforms[mesh.ParentBone.Index] * Matrix.CreateRotationY(modelRotation) * Matrix.CreateTranslation(modelPosition); // Looking at the model (picture shouldn't change other than rotation) effect.View = Matrix.CreateLookAt(cameraPosition, modelPosition, Vector3.Up); effect.Projection = Matrix.CreatePerspectiveFieldOfView( MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f); effect.TextureEnabled = true; } // Draw the mesh, using the effects set above. prepare3d(); mesh.Draw(); } //Terrain test foreach (ModelMesh meshT in terrain.Meshes) { foreach (BasicEffect effect in meshT.Effects) { effect.EnableDefaultLighting(); effect.PreferPerPixelLighting = true; effect.World = ttransforms[meshT.ParentBone.Index] * Matrix.CreateRotationY(0) * Matrix.CreateTranslation(terrainPosition); // Looking at the model (picture shouldn't change other than rotation) effect.View = Matrix.CreateLookAt(cameraPosition, terrainPosition, Vector3.Up); effect.Projection = Matrix.CreatePerspectiveFieldOfView( MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f); effect.TextureEnabled = true; } // Draw the mesh, using the effects set above. prepare3d(); meshT.Draw(); DrawText(); } base.Draw(gameTime); } I'm suspecting that there may be something wrong with how I'm handling my camera. The model rotates fine on its Y-axis.
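
    One likely culprit in the code above: the model is drawn with a view matrix aimed at modelPosition while the terrain is drawn with one aimed at terrainPosition, so the two meshes are effectively rendered from two different cameras in the same frame and their apparent positions drift apart as the model moves. A sketch of the usual fix is below: build a single view/projection per frame from one target and reuse it for every mesh. cameraTarget is an assumed new field (a fixed look-at point or the player), not something from the code above.

        using Microsoft.Xna.Framework;

        // Assumed helper: compute one view/projection pair per frame and assign it
        // to every effect's View/Projection -- for the model and the terrain alike.
        static void BuildCamera(Vector3 cameraPosition, Vector3 cameraTarget,
                                float aspectRatio, out Matrix view, out Matrix projection)
        {
            view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
            projection = Matrix.CreatePerspectiveFieldOfView(
                MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
        }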


  • Slick 2d scrolling off screen

    - by Peter
    I have something scrolling in and out of the screen. When it goes off screen, I want it to scroll into the screen at another location. What I do is grab the last row of pixels at the screen's edge using g.copyArea and then g.drawImage it at the other edge of the screen. Then I do a g.translate to create room for the next row in the next render cycle. My problem is that I get a single pixel row, which is not copied onto the canvas. Whereas what I want is for each row to be added and then translated, so that the image that scrolled off screen is recreated on the other side of the screen. Here is my code; maybe there is a better way of doing this, I'm open to any suggestions, because I'm totally stuck: @Override public void render(GameContainer gc, Graphics g) throws SlickException { //g.setClip(0, 0, 300, gc.getHeight()); g.translate(0, y); g.drawImage(image,0,200); g.resetTransform(); //g.clearClip(); g.copyArea(rightImage, 0, gc.getHeight() - 1); g.drawImage(rightImage, 300, 0); g.translate(0, y); y=y+3; }


  • Server side random selection of players

    - by Ron
    Assuming I have a simple client-server game, where the server picks random players on a very frequent basis, I was wondering what the best way is to select a random player, according to the following constraints: (1) the solution must be high performance and highly scalable; (2) the random spread should be relatively even (meaning if I have 3 players and pick 99 times, they will each be picked 33 times, more or less); (3) it should only pick players who were active in the past X days (optional, but a big bonus). The actual DB or data model used to store players isn't an issue here, as we'll select the technology in accordance with our needs. However, high performance and scalability are an issue (at the moment we have over 60,000 unique daily active players, and we plan on growing even more). Thanks!
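
    If the set of recently active player ids fits in memory (tens of thousands easily does), the simplest uniform pick is an in-memory index plus a single random draw. A minimal C# sketch of that idea follows; the activity window, id type, and class name are assumptions, and the active list can be cached if picks happen far more often than activity updates.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class RandomPlayerPicker
        {
            static readonly TimeSpan ActivityWindow = TimeSpan.FromDays(7);
            readonly Dictionary<long, DateTime> lastSeen = new Dictionary<long, DateTime>();
            readonly Random rng = new Random();

            // Call whenever a player does anything.
            public void Touch(long playerId) => lastSeen[playerId] = DateTime.UtcNow;

            public long PickRandomActive()
            {
                DateTime cutoff = DateTime.UtcNow - ActivityWindow;
                // For ~60k players a linear filter is cheap; cache this list if the
                // pick rate greatly exceeds the activity-update rate.
                var active = lastSeen.Where(p => p.Value >= cutoff).Select(p => p.Key).ToList();
                if (active.Count == 0) throw new InvalidOperationException("no active players");
                // rng.Next(n) is uniform, so each active player is equally likely.
                return active[rng.Next(active.Count)];
            }
        }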


  • 2D basic map system

    - by Cyril
    I'm currently coding a 2D game in Java, and I would like some clues on how to build this system: the screen moves over a larger map; for instance, the screen represents 800*600 units of a 100K*100K map. When you command your unit to go to another position, the screen moves over the map, AND when you move your mouse to one side of the screen or another, you also move the screen over the map. Not sure I'm being clear, but you can find this system in most RTS games (Warcraft/StarCraft for example). I'm currently using Slick 2D. Any ideas? Thanks.


  • Creating a frozen bubble clone

    - by Vaughan Hilts
    This photo illustrates the environment: http://i.imgur.com/V4wbp.png I'll shoot the cannon, it'll bounce off the wall and it's SUPPOSED to stick to the bubble. It does at pretty much every other angle. The problem is always reproduced here, when the ball is hit off the wall into those bubbles. It also exists in other cases, but I'm not sure what triggers it. What actually happens: the ball will sometimes snap to the wrong cell, and my "dropping" code will detect it as a loner and drop it off the stage. *There are many implementations of "Frozen Bubble" on the web, but I can't for the life of me find a good explanation as to how the algorithm for the "bubble sticking" works.* I see this: http://www.wikiflashed.com/wiki/BubbleBobble https://frozenbubblexna.svn.codeplex.com/svn/FrozenBubble/ But I can't figure out the algorithms... could anyone possibly explain the general idea behind getting the balls to stick? Code in question: //Construct our bounding rectangle for use var nX = currentBall.x + ballvX * gameTime; var nY = currentBall.y - ballvY * gameTime; var movingRect = new BoundingRectangle(nX, nY, 32, 32); var able = false; //Iterate over the cells and draw our bubbles for (var x = 0; x < 8; x++) { for (var y = 0; y < 12; y++) { //Get the bubble at this layout var bubble = bubbleLayout[x][y]; var rowHeight = 27; //If this slot isn't empty, draw if (bubble != null) { var bx = 0, by = 0; if (y % 2 == 0) { bx = x * 32 + 270; by = y * 32 + 45; } else { bx = x * 32 + 270 + 16; by = y * 32 + 45; } //Check var targetBox = new BoundingRectangle(bx, by, 32, 32); if (targetBox.intersects(movingRect)) { able = true; } } } } cellY = Math.round((currentBall.y - 45) / 32); if (cellY % 2 == 0) cellX = Math.round((currentBall.x - 270) / 32); else cellX = Math.round((currentBall.x - 270 - 16) / 32); Any ideas are very much welcome. Things I've tried: flooring and ceiling the values, changing the wall bounce to a lower value, slowing down the ball. None of these seem to affect it. Is there something in my math I'm not getting?
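
    For the snapping step, the usual approach is to mirror the layout math exactly (decide the row first, then use that row's parity for the column) and clamp the result to the grid: a wall bounce can leave the ball half a cell outside the playfield, and an unclamped Math.round then yields a column of -1 or 8, exactly the kind of stray cell that later gets dropped as a loner. Below is a small C# sketch of that cell snapping, reusing the constants from the code above (32 px cells, 270/45 origin, 16 px odd-row shift); whether the clamp alone fixes the reported case is an assumption, and it is also worth verifying afterwards that the chosen cell is empty and adjacent to the bubble that was hit, falling back to the nearest free neighbour if not.

        using System;

        static void SnapToCell(float ballX, float ballY, out int cellX, out int cellY)
        {
            const float originX = 270f, originY = 45f, cell = 32f, oddShift = 16f;

            // Row first, because the row's parity decides the column offset.
            cellY = (int)Math.Round((ballY - originY) / cell);
            float shift = (cellY % 2 == 0) ? 0f : oddShift;
            cellX = (int)Math.Round((ballX - originX - shift) / cell);

            // Clamp to the 8x12 grid so a wall hit can't produce an off-grid cell.
            cellX = Math.Max(0, Math.Min(7, cellX));
            cellY = Math.Max(0, Math.Min(11, cellY));
        }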


  • How to transform mesh components?

    - by Lea Hayes
    I am attempting to transform the components of a mesh directly using a 4x4 matrix. This is working for the vertex positions, but it is not working for the normals (and probably not the tangents either). Here is what I have: // Transform vertex positions - Works like a charm! vertices = mesh.vertices; for (int i = 0; i < vertices.Length; ++i) vertices[i] = transform.MultiplyPoint(vertices[i]); // Does not work, lighting is messed up on mesh normals = mesh.normals; for (int i = 0; i < normals.Length; ++i) normals[i] = transform.MultiplyVector(normals[i]); Note: The input matrix converts from local to world space and is needed to combine multiple meshes together.
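
    Two things commonly bite here. Normals should be transformed by the inverse-transpose of the matrix and then re-normalized, otherwise any non-uniform scale in the local-to-world matrix skews them and the lighting breaks; and, if the arrays are meant to go back into the same mesh, note that mesh.normals returns a copy, so the transformed array has to be assigned back. A short sketch of both, assuming the same Unity-style Matrix4x4/Mesh types as the code above:

        using UnityEngine;

        static void TransformNormals(Mesh mesh, Matrix4x4 transform)
        {
            Vector3[] normals = mesh.normals;
            // Positions use the matrix directly; directions need its inverse-transpose.
            Matrix4x4 normalMatrix = transform.inverse.transpose;
            for (int i = 0; i < normals.Length; ++i)
                normals[i] = normalMatrix.MultiplyVector(normals[i]).normalized;
            mesh.normals = normals; // write back: mesh.normals is a copy
        }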


  • Keeping rotation between two objects

    - by user99
    In my XNA game I have two objects that collide. When the first object collides with the other, it is able to latch on to it and move it about the world. I am having a problem with the math here (math isn't my strong point). I currently have the second object latch on to the first and move around with it, but I cannot get it to keep its original direction. So, if the object is facing up, it should keep that facing relative to the rotation of the object it is attached to. Any tips on how I could best achieve this?


  • Generated 3d tree meshes

    - by Jari Komppa
    I did not find a question on these lines yet, correct me if I'm wrong. Trees (and fauna in general) are common in games. Due to their nature, they are a good candidate for procedural generation. There's SpeedTree, of course, if you can afford it; as far as I can tell, it doesn't provide the possibility of generating your tree meshes at runtime. Then there's SnappyTree, an online webgl based tree generator based on the proctree.js which is some ~500 lines of javascript. One could use either of above (or some other tree generator I haven't stumbled upon) to create a few dozen tree meshes beforehand - or model them from scratch in a 3d modeller - and then randomly mirror/scale them for a few more variants.. But I'd rather have a free, linkable tree mesh generator. Possible solutions: Port proctree.js to c++ and deal with the open source license (doesn't seem to be gpl, so could be doable; the author may also be willing to co-operate to make the license even more free). Roll my own based on L-systems. Don't bother, just use offline generated trees. Use some other method I haven't found yet.


  • Making a game with responsive resolution

    - by alexandervrs
    I am making a game; however, I wish for it to be resolution agnostic. My target resolution, i.e. where things look as intended, is 1600 x 900. My ideas are: (1) make the HUD stay fixed to the sides no matter the resolution, using one size of HUD graphics below a certain resolution and another above a certain larger one; (2) use large HD sprites/backgrounds whose dimensions are a power of 2, so they scale nicely; (3) use the player's native resolution; (4) scale the game area (not the HUD) to fit, zooming in a little and cropping the sides of the game area if necessary for widescreen, with no stretching, but always filling the screen; (5) have minimum and maximum resolution limits for very small and very large displays, where you just change the resolution(?) or scale up/down to fit. What I am a bit confused about, though, is what math formula I would use to scale the game area correctly for any resolution and aspect ratio, so that it fully fits a squarer screen and clips a little at the sides on widescreen. Pseudocode would help as well. :)
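
    For the "no stretch, always fill, crop the sides if needed" option, the scale factor is just the larger of the two axis ratios. A small C# sketch under the 1600 x 900 design resolution from the question follows (all names are illustrative); everything in the game area is then drawn at worldPosition * scale + offset, while the HUD keeps using raw screen coordinates so it stays anchored to the edges.

        using System;

        static void FitGameArea(int screenW, int screenH,
                                out float scale, out float offsetX, out float offsetY)
        {
            const float designW = 1600f, designH = 900f;

            // max() fills the screen completely; the overflowing axis gets cropped.
            // (min() would letterbox instead of cropping.)
            scale = Math.Max(screenW / designW, screenH / designH);

            // Center the scaled game area; negative offsets are the cropped margins.
            offsetX = (screenW - designW * scale) / 2f;
            offsetY = (screenH - designH * scale) / 2f;
        }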


  • Getting FEATURE_LEVEL_9_3 to work in DX11

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11 compatible computer) by means of setting the D3D_FEATURE_LEVEL_ setting to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or something and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, it always fails when I try to call D3DX11CompileFromFile to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the the vertex/pixel shader files themselves being incompatible for the lower vertex/pixel shader versions, but I'm not expert enough to say. My shader files are listed below: Vertex shader: cbuffer MatrixBuffer { matrix worldMatrix; matrix viewMatrix; matrix projectionMatrix; }; struct VertexInputType { float4 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; PixelInputType LightVertexShader(VertexInputType input) { PixelInputType output; // Change the position vector to be 4 units for proper matrix calculations. input.position.w = 1.0f; // Calculate the position of the vertex against the world, view, and projection matrices. output.position = mul(input.position, worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); // Store the texture coordinates for the pixel shader. output.tex = input.tex; // Calculate the normal vector against the world matrix only. output.normal = mul(input.normal, (float3x3)worldMatrix); // Normalize the normal vector. output.normal = normalize(output.normal); return output; } Pixel Shader: Texture2D shaderTexture; SamplerState SampleType; cbuffer LightBuffer { float4 ambientColor; float4 diffuseColor; float3 lightDirection; float padding; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; float4 LightPixelShader(PixelInputType input) : SV_TARGET { float4 textureColor; float3 lightDir; float lightIntensity; float4 color; // Sample the pixel color from the texture using the sampler at this texture coordinate location. textureColor = shaderTexture.Sample(SampleType, input.tex); // Set the default output color to the ambient light value for all pixels. color = ambientColor; // Invert the light direction for calculations. lightDir = -lightDirection; // Calculate the amount of light on this pixel. lightIntensity = saturate(dot(input.normal, lightDir)); if(lightIntensity > 0.0f) { // Determine the final diffuse color based on the diffuse color and the amount of light intensity. color += (diffuseColor * lightIntensity); } // Saturate the final light color. color = saturate(color); // Multiply the texture pixel and the final diffuse color to get the final pixel color result. color = color * textureColor; return color; }


  • Automatically triggering standard spaceship controls to stop its motion

    - by Garan
    I have been working on a 2D top-down space strategy/shooting game. Right now it is only in the prototyping stage (I have gotten basic movement), but now I am trying to write a function that will stop the ship based on its velocity. This is being written in Lua, using the Love2D engine. My code is as follows (note: object.dx is the x-velocity, object.dy is the y-velocity, object.acc is the acceleration, and object.r is the rotation in radians): function stopMoving(object, dt) local targetr = math.atan2(object.dy, object.dx) if targetr == object.r + math.pi then local currentspeed = math.sqrt(object.dx*object.dx+object.dy*object.dy) if currentspeed ~= 0 then object.dx = object.dx + object.acc*dt*math.cos(object.r) object.dy = object.dy + object.acc*dt*math.sin(object.r) end else if (targetr - object.r) >= math.pi then object.r = object.r - object.turnspeed*dt else object.r = object.r + object.turnspeed*dt end end end It is implemented in the update function as: if love.keyboard.isDown("backspace") then stopMoving(player, dt) end The problem is that when I am holding down backspace, it spins the player clockwise (though I want it to turn in whichever direction reaches the target angle most efficiently), and it never starts to accelerate the player in the direction opposite to its velocity. What should I change in this code to get that to work? EDIT: I'm not trying to just stop the player in place, I'm trying to get it to use its normal commands to neutralize its existing velocity. I also changed math.atan to math.atan2; apparently it's better. I noticed no difference when running it, though.
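
    The exact-equality test (targetr == object.r + math.pi) will essentially never be true for floating-point angles, and the unwrapped difference is what sends the turn the long way around. The usual pattern is: aim at the retrograde direction atan2(-dy, -dx), wrap the signed angle difference into (-pi, pi] with atan2(sin, cos), turn while the difference is large, and thrust only once roughly aligned. Below is a sketch of that logic in C# (the tolerances are guesses); the same arithmetic drops straight back into the Lua function.

        using System;

        static void StopShip(ref double rotation, ref double dx, ref double dy,
                             double acc, double turnSpeed, double dt)
        {
            double speed = Math.Sqrt(dx * dx + dy * dy);
            if (speed < 0.01) { dx = 0; dy = 0; return; }        // close enough: stop

            // Face opposite the velocity vector (retrograde).
            double target = Math.Atan2(-dy, -dx);

            // Shortest signed angle difference, wrapped into (-pi, pi].
            double diff = Math.Atan2(Math.Sin(target - rotation), Math.Cos(target - rotation));

            if (Math.Abs(diff) > 0.05)
            {
                // Turn toward the target by at most turnSpeed*dt, in the right direction.
                rotation += Math.Sign(diff) * Math.Min(Math.Abs(diff), turnSpeed * dt);
            }
            else
            {
                // Roughly aligned: thrust along the facing, which now opposes velocity.
                dx += acc * dt * Math.Cos(rotation);
                dy += acc * dt * Math.Sin(rotation);
            }
        }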


  • How to do directional per fragment lighting in world space?

    - by user
    I am attempting to create a GLSL shader for a simple, per-fragment directional light. So far, after following many tutorials, I have continually run into this issue: my light is specified in world coordinates; however, the shader treats the light's position as being in eye space, so the light direction changes when I move the camera. My question is, how do I transform a directional light position such as (50, 50, 50, 0) into eye space? Or would doing things this way be the wrong approach to the problem?
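
    A direction with w = 0 is unaffected by translation, so one common fix is to multiply the world-space light direction by the view (camera) matrix once per frame on the CPU and upload the result as the uniform; the fragment shader then keeps working entirely in eye space and the lighting no longer follows the camera. A sketch using XNA-style math types purely for illustration (with any other math library it is the same rotation-only multiply):

        using Microsoft.Xna.Framework;

        // world -> eye for a direction: apply the view matrix without translation,
        // then renormalize before uploading to the shader.
        static Vector3 LightDirectionInEyeSpace(Vector3 worldLightDir, Matrix view)
        {
            return Vector3.Normalize(Vector3.TransformNormal(worldLightDir, view));
        }

    The alternative is to do the lighting in world space instead, in which case only the normals need transforming (by the model matrix) and the light direction is uploaded untouched.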


  • Beginner question about vertex arrays in OpenGL

    - by MrDatabase
    Is there a special order in which vertices are entered into a vertex array? Currently I'm drawing single textures like this: glBindTexture(GL_TEXTURE_2D, texName); glVertexPointer(2, GL_FLOAT, 0, vertices); glTexCoordPointer(2, GL_FLOAT, 0, coordinates); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); where vertices has four "xy pairs". This is working fine. As a test I doubled the sizes of the vertices and coordinates arrays and changed the last line above to: glDrawArrays(GL_TRIANGLE_STRIP, 0, 8); since vertices now contains eight "xy pairs". I do see two textures (the second is intentionally offset from the first). However the textures are now distorted. I've tried passing GL_TRIANGLES to glDrawArrays instead of GL_TRIANGLE_STRIP but this doesn't work either. I'm so new to OpenGL that I thought it's best to just ask here :-) Cheers!
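
    Two separate things are going on here. First, a triangle strip expects its vertices in zigzag order (e.g. bottom-left, bottom-right, top-left, top-right), not in order around the quad's perimeter. Second, a single 8-vertex strip keeps stitching each new vertex to the previous two, so extra triangles bridge the gap between the two quads and show up as the distortion described; the simplest fix is one draw call per quad (degenerate vertices or indexed GL_TRIANGLES are the usual alternatives). Below is a sketch of the split draw call, written against OpenTK-style C# bindings purely for illustration; the same two calls apply verbatim to the ES calls in the question.

        using OpenTK.Graphics.OpenGL;

        static void DrawTwoQuads()
        {
            // With the 8-vertex array still bound, draw each quad as its own
            // 4-vertex strip so no triangle bridges the two quads.
            GL.DrawArrays(PrimitiveType.TriangleStrip, 0, 4); // first quad: vertices 0-3
            GL.DrawArrays(PrimitiveType.TriangleStrip, 4, 4); // second quad: vertices 4-7
        }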


  • Ideas for time-keeping in a webbased RPG?

    - by ashy_32bit
    I've been assigned the task of doing preliminary research for a web-based MMORPG. My biggest problem here is "web based" vs "MMO RPG". I did some research about timekeeping systems, and I'm totally confused as to how exactly something as real-time as an MMORPG can work on a pull-only (unidirectional) platform like HTTP. I know there is also a turn-based alternative to timekeeping, but can it work in an MMO setting? EDIT: Take a battle, for example: player A (human) wants to attack player B (also human) in the open. How does it work when player A issues the "attack" command on player B? How do I inform player B that he is being attacked? And how exactly does the battle then play out between the two over an HTTP-based communication channel? To my knowledge this is impossible unless you resort to another technology (HTTP is one-way: you can ask the server and get a response, but the server can't update you unless asked to; this is well known and simply explained). So I thought maybe I could change the whole timekeeping model from real-time to a more non-real-time model (toward a turn-based RPG, for example) and somehow work around the whole problem of "interactivity". EDIT2: It is not that I don't want to use any server-side technologies. It certainly won't work client-side-only even for the most trivial of multiplayer games, let alone an RPG, so there would surely be a (probably complex) server-side component (the so-called game engine, I suppose). The problem is not the technology that implements the logic (game mechanics) but the communication technology and how it limits what the game mechanics can do (like how real-time or turn-based the game can be). HTTP is a request-response protocol, meaning you get served only if you ask for it (explicitly send a GET or POST request to the server). An HTTP server cannot inform you when anything of interest happens in the game world unless you refresh the page (as some suggested) or you use some bidirectional technology (a totally different animal) like Flash sockets, WebSockets, HTML5, etc. So maybe the question is: is it possible to implement an MMORPG using only HTML5/PHP with no periodic page refreshes? If so, what would be the rules to make it an MMORPG? Can't explain it any clearer. Sorry :D


  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should be able to perform well on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting I found myself in trouble: sprite size is limited (usually to 64 pixels). I'm calculating the size with the formula gl_PointSize = in_point_size * some_factor / distance_to_camera to make particle sizes proportional to the distance to the camera. But at some point, when the camera is close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle. Now, I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from these vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and a texture coordinate. I would put all that in an interleaved vertex array. But as you can see, there is much redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are the same for all particles. Because of how glDrawArrays/Elements works, I see no way to optimise this. Do you know of a better approach to organising per-particle data? Should I use buffers or vertex arrays, or is there no difference because I have to update all the particles' data each frame anyway? About the particle simulation: where should I do it, on the CPU or in the vertex processor? Something tells me that a mobile CPU would do it faster than its vertex unit (at least today, in 2012 :). So, any advice on how to make a simple and efficient particle system without a particle size limitation, for mobile devices, would be appreciated. (The animation of the camera passing through the particles should look realistic.)


  • 2D camera perspective projection from 3D coordinates -- HOW?

    - by Jack
    I am developing a camera for a 2D game with a top-down view that has depth. It's almost a 3D camera. Basically, every object has a Z even though it is in 2D, and similarly to parallax layers their position, scale and rotation speed vary based on their Z. I guess this would be a perspective projection. But I am having trouble converting the objects' 3D coordinates into the 2D space of the screen so that everything has correct perspective and scale. I never learned matrices though I did dig the topic a bit today. I tried without using matrices thanks to this article but every attempt gave awkward results. I'm using ActionScript 3 and Flash 11+ (Starling), where the screen coordinates work like this: Left-handed coordinates system illustration I can explain further what I did if you want to help me sort out what's wrong, or you can directly tell me how you would do it properly. In case you prefer the former, read on. These are images showing the formulas I used: upload.wikimedia.org/math/1/c/8/1c89722619b756d05adb4ea38ee6f62b.png upload.wikimedia.org/math/d/4/0/d4069770c68cb8f1aa4b5cfc57e81bc3.png (Sorry new users can't post images, but both are from the wikipedia article linked above, section "Perspective projection". That's where you'll find what all variables mean, too) The long formula is greatly simplified because I believe a normal top-down 2D camera has no X/Y/Z rotation values (correct ?). Then it becomes d = a - c. Still, I can't get it to work. Maybe you could explain what numbers I should put in a(xyz), c(xyz), theta(xyz), and particularly, e(xyz) ? I don't quite get how e is different than c in my case. c.z is also an issue to me. If the Z of the camera's target object is 0, should the camera's Z be something like -600 ? ( = focal length of 600) Whatever I do, it's wrong. I only got it to work when I used arbitrary calculations that "looked" right, like most cameras with parallax layers seem to do, but that's fake! ;) If I want objects to travel between Z layers I might as well do it right. :) Thanks a lot for your help!
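
    For a top-down camera with no rotation, the cited formula does collapse to d = a - c, and e_z acts as a focal length, so each object is just scaled and offset by its depth relative to the camera. Below is a small C# sketch of that reduced projection; all names are illustrative, and the camera's z must stay in front of (smaller than) every object's z.

        using System;

        static void ProjectToScreen(
            float worldX, float worldY, float worldZ,          // object position
            float camX, float camY, float camZ, float focal,   // camera; requires camZ < worldZ
            float screenCenterX, float screenCenterY,
            out float screenX, out float screenY, out float scale)
        {
            float depth = worldZ - camZ;   // distance along the view axis
            scale = focal / depth;         // farther objects get smaller
            screenX = screenCenterX + (worldX - camX) * scale;
            screenY = screenCenterY + (worldY - camY) * scale;
        }

    This also answers the c.z question from above: with camZ = -600 and focal = 600, an object at z = 0 has depth 600, so it renders at exactly 1:1 scale, and objects with positive z shrink toward the screen center as expected.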


  • Super-quick MIDI generator with nonrestrictive license?

    - by Ricket
    I'm working on my Ludum Dare entry and trying to figure out how in the world I'm ever going to get background music. I found WolframTones, but the license is too restrictive: Unless otherwise specified, this Site and content presented on this Site are for your personal and noncommercial use. You may not modify, copy, distribute, transmit, display, perform, reproduce, publish, license, create derivative works from, transfer, or sell any information or content obtained from this Site. For commercial and other uses, contact us. But I really like the interface! It's a lot like sfxr - click a genre and download a song. That's so cool. Is there another program that does this same sort of thing but without a restrictive license, so that I can generate a bgm and use it in my game?


  • High CPU usage on Pong clone

    - by max
    I just made my first game, a clone of Pong, using OpenGL and C++. But it's using ~50% of the CPU, which I guess is very high for a game like this. How can I improve that? Could you please look over my code and tell me what I am doing wrong? Any feedback is welcome. http://pastebin.com/L5zE3axh It would also be extremely helpful if you could give some general pointers on how to develop OpenGL games efficiently. Thanks in advance!

