Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

Page 455/1039 | < Previous Page | 451 452 453 454 455 456 457 458 459 460 461 462  | Next Page >

  • Pixmaps, ByteBuffers, and Textures....Oh my

    - by odaymichael
    My ultimate goal is to take a specific region of the screen and redraw it somewhere else. For example, take a square from the upper left-hand corner of the screen and redraw it in the lower right-hand corner, so that it is basically a copy of that screen section; kind of like a minimap, but at the same scale as the original. I have looked into Pixmaps and ByteBuffers, and also at copying that region from the back buffer somehow, and I'm wondering about the best way to go about this. Any help is appreciated. I am using OpenGL ES and libGDX, for what it's worth.
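
    A minimal sketch of the back-buffer idea, written as plain OpenGL ES 1.x calls in C++ rather than libGDX Java; the texture object and the quad-drawing helper are assumptions, and the texture must already be allocated (power-of-two sized) before the copy:

    // Hedged sketch: copy a screen region from the framebuffer into a texture,
    // then draw that texture as a quad somewhere else. No CPU round trip.
    GLuint regionTex;   // assumption: created once with glGenTextures + glTexImage2D

    void copyRegionToTexture(int srcX, int srcY, int w, int h)
    {
        glBindTexture(GL_TEXTURE_2D, regionTex);
        // Reads w*h pixels from the currently bound framebuffer (the back buffer,
        // before swapping) into the already-allocated texture.
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, srcX, srcY, w, h);
    }

    // Later in the frame: bind regionTex and draw a textured quad at the destination,
    // e.g. drawTexturedQuad(screenW - w, 0, w, h);   // hypothetical helper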

    Read the article

  • Drawing a sprite or text causes the OpenGL rendering to 'disappear' in SFML

    - by Ken
    I'm using some SFML built-in functions to draw sprites and text as an overlay on top of some OpenGL rendering in an SFML RenderWindow. The OpenGL rendering appears fine until I add the code to draw the sprites or text; drawing either one causes the OpenGL output to disappear. The following code shows what I'm trying to do:

    sf::RenderWindow window(sf::VideoMode(viewport.width, viewport.height, 32), "SFML Window");
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, viewport.width, 0, viewport.height, 0, 1);

    while (window.pollEvent(Event))
    {
        // event handling...

        // begin drawing
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
        glColor3f(col.x, col.y, col.z);
        for (int i = 0; i < 3; i++)
            glVertex2f(pos.x + verts[i].x, pos.y + verts[i].y);
        glEnd();

        // adding this line causes all the previous opengl triangles not to appear
        window.draw("Sometext");
        window.display();
    }
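
    A minimal sketch of the usual remedy, assuming SFML 2.x: SFML's own draw calls change OpenGL state behind your back, so bracket them with pushGLStates()/popGLStates(), and pass a real sf::Drawable rather than a raw string. The font and text objects below are assumptions, not taken from the question:

    // Hedged sketch (SFML 2.x assumed): isolate SFML's state changes from your own GL drawing.
    sf::Font font;                          // assumption: loaded earlier with font.loadFromFile(...)
    sf::Text overlay("Some text", font);    // RenderWindow::draw() needs a Drawable, not a string literal

    // ... custom OpenGL drawing of the triangles, as in the question ...

    window.pushGLStates();                  // save the GL state SFML is about to modify
    window.draw(overlay);
    window.popGLStates();                   // restore it so later GL calls still behave
    window.display();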

    Read the article

  • How do they keep track of the NPCs in Left 4 Dead?

    - by f20k
    How do they keep track of the NPC zombies in Left 4 Dead? I am talking about the NPCs that just walk into walls or wander around aimlessly. Even though the players cannot see them, they are there (say, inside rooms or behind doors). Let's say there are about 10 or so zombies in a hallway and inside rooms. Does the game keep all of those zombies in a list and iterate through it, giving them commands? Do they just spawn when the user is within a certain radius or has reached a special location? Say you placed the 4 units (controlled by players) in completely different places throughout the map. Let's assume you aren't being swarmed and that you have not killed any of these aimless NPCs. Would the game be keeping track of 10 x 4 = 40 zombies in total? Or is my understanding completely off? The reason I ask is that if I were to implement something similar on a mobile device, keeping track of 40 or more NPCs might not be such a great idea.
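
    Not a statement of how Left 4 Dead actually does it, but a minimal sketch of the common pattern the question is circling: only wanderers near some player exist at all, and they are dropped again once every player is far away, so the active list stays small. The types, radius and the commented spawn helper are assumptions.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Player { float x, y; };
    struct Zombie { float x, y; };

    std::vector<Zombie> active;                      // only the wanderers that currently exist

    bool anyPlayerNear(const Zombie& z, const std::vector<Player>& players, float radius)
    {
        for (const Player& p : players)
            if (std::hypot(p.x - z.x, p.y - z.y) < radius) return true;
        return false;
    }

    void updateWanderers(const std::vector<Player>& players, float dt)
    {
        const float keepRadius = 60.0f;              // assumption: tune per game

        // Drop wanderers once no player is near them any more.
        active.erase(std::remove_if(active.begin(), active.end(),
                         [&](const Zombie& z) { return !anyPlayerNear(z, players, keepRadius); }),
                     active.end());

        // spawnAroundPlayers(players, active);      // hypothetical helper: top up wanderers near players

        // Only this short list is iterated and given wander commands each tick.
        for (Zombie& z : active)
        {
            // wander(z, dt);                        // hypothetical: the aimless walk from the question
        }
    }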

    Read the article

  • Finding shapes in 2D Array, then optimising

    - by assemblism
    I'm new so I can't post an image, but below is a diagram for a game I am working on, moving bricks into patterns. I currently have my code checking for rotated instances of a "T" shape of any colour. The X and O blocks would be the same colour, and my last batch of code would find the "T" shape where the X's are, but what I wanted was more like the second diagram, with two "T"s:

    Current result      Desired result
    [X][O][O]           [1][1][1]
    [X][X][_]           [2][1][_]
    [X][O][_]           [2][2][_]
    [O][_][_]           [2][_][_]

    My code loops through x/y, marks blocks as used, rotates the shape, repeats, changes colour, repeats. I have started trying to fix this checking with great trepidation. The current idea is to:

    - loop through the grid and note all pattern occurrences (NOT marking blocks as used), putting these in an array
    - loop through the grid again, this time noting which blocks are occupied by which patterns, and therefore which are occupied by multiple patterns
    - loop through the grid again, this time noting which patterns obstruct which patterns

    That much feels right... What do I do now? I think I would have to try various combinations of conflicting shapes, starting with those that obstruct the most other patterns first. How do I approach this? Use the rationale that says: I have 3 conflicting shapes occupying 8 blocks, and the shapes are 4 blocks each, therefore I can only have a maximum of two shapes. (I also intend to incorporate other shapes, and there will probably be score weighting which will need to be considered when going through the conflicting shapes, but that can be another day.) I don't think it's a bin packing problem, but I'm not sure what to look for. Hope that makes sense, thanks for your help.
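
    A minimal sketch of one way to finish the last step, assuming the occurrences have already been collected as sets of cell indices: backtrack over mutually non-overlapping occurrences and keep the best selection found (for score weighting, compare total score instead of count). The names and types here are illustrative, not from the question.

    #include <set>
    #include <vector>

    struct Occurrence { std::set<int> cells; };      // cell index = y * gridWidth + x

    bool overlaps(const Occurrence& a, const Occurrence& b)
    {
        for (int c : a.cells)
            if (b.cells.count(c)) return true;
        return false;
    }

    std::vector<int> best;                           // indices of the best selection found so far

    void search(const std::vector<Occurrence>& all, std::vector<int>& chosen, size_t next)
    {
        if (chosen.size() > best.size()) best = chosen;
        for (size_t i = next; i < all.size(); ++i)
        {
            bool compatible = true;
            for (int j : chosen)
                if (overlaps(all[i], all[j])) { compatible = false; break; }
            if (!compatible) continue;

            chosen.push_back(static_cast<int>(i));   // try keeping this occurrence
            search(all, chosen, i + 1);
            chosen.pop_back();                       // and also try skipping it
        }
    }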

    Read the article

  • Given a RGB color x, how to find the most contrasting color y?

    - by Arthur Wulf White
    I have to mark a certain item in a way that will make it stand out against the background. I need it to be surrounded with the color that contrasts with the background as much as possible, so that it pops out and is easily noticed by the player. Let's say I know the background is color 'x'; how do I find 'y' such that it contrasts strongly with 'x' and is easy to notice on a background where 'x' is the dominant color? I first thought about inverting color 'x', but then I noticed that when 'x' is a medium shade of gray, if I invert 'x' to get 'y', then 'y' is also a medium shade of gray, which does not work.
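
    A minimal sketch of two common fixes for the mid-gray problem, with standard Rec. 601 luma weights assumed: either push each channel to whichever end of its range is farther away, or threshold perceived brightness and pick plain black or white.

    struct RGB { unsigned char r, g, b; };

    // Per-channel "far end" rule: medium gray (128,128,128) maps to white, not gray.
    RGB contrasting(RGB x)
    {
        return { static_cast<unsigned char>(x.r < 128 ? 255 : 0),
                 static_cast<unsigned char>(x.g < 128 ? 255 : 0),
                 static_cast<unsigned char>(x.b < 128 ? 255 : 0) };
    }

    // Black-or-white rule based on perceived brightness (Rec. 601 luma weights).
    RGB blackOrWhite(RGB x)
    {
        float luma = 0.299f * x.r + 0.587f * x.g + 0.114f * x.b;
        return luma < 128.0f ? RGB{255, 255, 255} : RGB{0, 0, 0};
    }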

    Read the article

  • Break the object body

    - by Siddharth
    In my game, I want to break an object's body, creating a slicing effect. After some research I found that I have to use ray casting, but I don't know how to use it. If someone knows how to break a physics body, please point me to some information. EDIT: I don't have any logic for how to do this in AndEngine; I only have a link about slicing: http://www.emanueleferonato.com/2012/03/05/breaking-objects-with-box2d-the-realistic-way/ Yes, I have to slice a physics body into two parts. My physics bodies are 2D objects.
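
    A minimal sketch of just the ray-cast step from the linked Box2D technique, written against the C++ Box2D API (AndEngine exposes the same calls through its Java binding). The world pointer and laser endpoints are assumptions, and actually splitting each hit polygon into two bodies is the part the linked article covers.

    #include <Box2D/Box2D.h>
    #include <utility>
    #include <vector>

    // Collects every fixture the "laser" segment crosses, with its world-space entry point.
    class SliceRayCallback : public b2RayCastCallback
    {
    public:
        std::vector<std::pair<b2Fixture*, b2Vec2>> hits;

        float32 ReportFixture(b2Fixture* fixture, const b2Vec2& point,
                              const b2Vec2& normal, float32 fraction) override
        {
            hits.push_back({fixture, point});
            return 1.0f;                    // keep reporting fixtures along the whole ray
        }
    };

    // Cast the segment in both directions so each body gets an entry point on both sides,
    // then split that body's polygon along the two points (see the linked article).
    SliceRayCallback forward, backward;
    world->RayCast(&forward, laserStart, laserEnd);     // world, laserStart, laserEnd: assumptions
    world->RayCast(&backward, laserEnd, laserStart);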

    Read the article

  • How to transform a child element's position into a world position

    - by MrGreg
    So I'm making a 2D space game and I have a bunch of spaceships that have turrets. Objects have a position and orientation; the ships are in world coordinates, while the turrets are children whose coordinates are relative to their parents. How do I efficiently calculate the position of a turret in world coordinates (i.e. when it fires and I need to know where to place a bullet in the world)? Calculating the turret's orientation is trivial: I just add the turret's relative angle to its parent's. For position, though, I guess I could do a bunch of trigonometry, but this MUST be a common problem with a good/fast general solution? Should I be relearning how to do matrix math again? :) By the way, I'm creating the game in JavaScript + canvas, but it's the math/algorithm I'm interested in here. Cheers, Greg
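
    A minimal sketch of the standard parent-to-world transform, shown in C++ for concreteness though the math is identical in JavaScript: rotate the turret's local offset by the ship's orientation, then add the ship's position. This is exactly what a 3x3/4x4 matrix multiply would do, so full matrix math only starts to pay off with deeper hierarchies.

    #include <cmath>

    struct Vec2 { float x, y; };

    Vec2 localToWorld(Vec2 parentPos, float parentAngle, Vec2 localPos)
    {
        float c = std::cos(parentAngle);
        float s = std::sin(parentAngle);
        return { parentPos.x + c * localPos.x - s * localPos.y,
                 parentPos.y + s * localPos.x + c * localPos.y };
    }

    // Orientation composes by addition, as the question already does:
    // worldAngle = parentAngle + localAngle;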

    Read the article

  • Texture displays on Android emulator but not on device

    - by Rob
    I have written a simple UI which takes an image (256x256) and maps it to a rectangle. This works perfectly on the emulator; however, on the phone the texture does not show, and I see only a white rectangle. This is my code:

    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
        byteBuffer.order(ByteOrder.nativeOrder());
        vertexBuffer = byteBuffer.asFloatBuffer();
        vertexBuffer.put(cardshape);
        vertexBuffer.position(0);

        byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
        byteBuffer.order(ByteOrder.nativeOrder());
        textureBuffer = byteBuffer.asFloatBuffer();
        textureBuffer.put(textureshape);
        textureBuffer.position(0);

        // Set the background color to black ( rgba ).
        gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
        // Enable Smooth Shading, default not really needed.
        gl.glShadeModel(GL10.GL_SMOOTH);
        // Depth buffer setup.
        gl.glClearDepthf(1.0f);
        // Enables depth testing.
        gl.glEnable(GL10.GL_DEPTH_TEST);
        // The type of depth testing to do.
        gl.glDepthFunc(GL10.GL_LEQUAL);
        // Really nice perspective calculations.
        gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);

        gl.glEnable(GL10.GL_TEXTURE_2D);
        loadGLTexture(gl);
    }

    public void onDrawFrame(GL10 gl) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        gl.glDisable(GL10.GL_DEPTH_TEST);

        gl.glMatrixMode(GL10.GL_PROJECTION);    // Select Projection
        gl.glPushMatrix();                      // Push The Matrix
        gl.glLoadIdentity();                    // Reset The Matrix
        gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);

        gl.glMatrixMode(GL10.GL_MODELVIEW);     // Select Modelview Matrix
        gl.glPushMatrix();                      // Push The Matrix
        gl.glLoadIdentity();                    // Reset The Matrix

        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

        gl.glLoadIdentity();
        gl.glTranslatef(card.x, card.y, 0.0f);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]);   // activates texture to be used now
        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
        gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);

        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    }

    public void onSurfaceChanged(GL10 gl, int width, int height) {
        // Sets the current view port to the new size.
        gl.glViewport(0, 0, width, height);
        // Select the projection matrix
        gl.glMatrixMode(GL10.GL_PROJECTION);
        // Reset the projection matrix
        gl.glLoadIdentity();
        // Calculate the aspect ratio of the window
        GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);
        // Select the modelview matrix
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        // Reset the modelview matrix
        gl.glLoadIdentity();
    }

    public int[] texture = new int[1];

    public void loadGLTexture(GL10 gl) {
        // loading texture
        Bitmap bitmap;
        bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.image);

        // generate one texture pointer
        gl.glGenTextures(0, texture, 0);    // adds texture id to texture array
        // ...and bind it to our array
        gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]);   // activates texture to be used now

        // create nearest filtered texture
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);

        // Use Android GLUtils to specify a two-dimensional texture image from our bitmap
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);

        // Clean up
        bitmap.recycle();
    }

    As per many other similar issues and resolutions on the web, I have tried setting minSdkVersion to 3, loading the bitmap via an input stream (bitmap = BitmapFactory.decodeStream(is)), setting BitmapFactory.Options.inScaled to false, putting the images in the nodpi folder, and putting them in the raw folder, all of which didn't help. I'm not really sure what else to try.
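
    One detail worth flagging as an observation rather than a confirmed fix: the first argument of glGenTextures is the number of texture names to generate, so the call in loadGLTexture asks for zero names. A minimal sketch of the usual call, shown with the C-style GL signature (the Android GL10 overload adds an offset parameter):

    GLuint texture = 0;
    glGenTextures(1, &texture);              // generate ONE texture name
    glBindTexture(GL_TEXTURE_2D, texture);   // then bind it before uploading pixels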

    Read the article

  • OpenGL ES 1 Pixel Error?

    - by Beginner001
    I am developing a game on android using OpenGL ES 1.0 for Android OS. It is a 2d game using a simple Orthographic projection and textures for the sprites. One of these textures has a small line (it looks like 1 pixel) all the way across the top that has the same colors as the bottom 1-pixel line of the texture. It is almost as if the bottom line of the image raster was copied and pasted as the top line as well. Is anyone familiar with this type of error? What could the problem be?
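
    A hedged guess at one common cause rather than a diagnosis: with the default GL_REPEAT wrap mode and linear filtering, texels sampled at the top edge blend with the bottom row of the texture, which produces exactly this kind of one-pixel copy of the opposite edge. A minimal snippet of the clamp setting, as plain GL calls (the Android GL10 constants are the same); textureId is an assumption:

    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);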

    Read the article

  • Making a game with responsive resolution

    - by alexandervrs
    I am making a game, but I wish for it to be resolution agnostic. My target resolution, i.e. where things look as intended, is 1600 x 900. My ideas are:

    - Make the HUD stay fixed to the sides no matter what the resolution is; use one size of HUD graphics below a certain resolution and another above a certain large one.
    - Use large HD sprites/backgrounds whose dimensions are powers of 2, so they scale nicely.
    - Use the player's native resolution. Scale the game area (not the HUD) to fit (zooming in somewhat and cropping the sides of the game area if necessary for widescreen, with no stretching), but always fill the screen.
    - Have a minimum and maximum resolution limit for small and very large displays, where you just change the resolution(?) or scale up/down to fit.

    What I am a bit confused about, though, is what math formula I would use to scale the game area correctly based on the resolution, regardless of the aspect ratio, so that it fully fits a square screen and is clipped at the sides on widescreen. Pseudocode would help as well. :)
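
    A minimal sketch of the "fill and crop" scaling described above, with the 1600 x 900 design resolution assumed: using max() always fills the screen and crops the excess, whereas min() would letterbox instead. The HUD is laid out separately, anchored to the real screen edges.

    #include <algorithm>

    struct Viewport { float scale, offsetX, offsetY; };

    Viewport fitGameArea(float screenW, float screenH)
    {
        const float targetW = 1600.0f, targetH = 900.0f;    // design resolution
        float scale = std::max(screenW / targetW, screenH / targetH);

        // Centre the scaled game area; offsets go negative where cropping occurs.
        float offsetX = (screenW - targetW * scale) * 0.5f;
        float offsetY = (screenH - targetH * scale) * 0.5f;
        return { scale, offsetX, offsetY };
    }

    // Usage: draw the 1600 x 900 game area scaled by `scale` at (offsetX, offsetY).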

    Read the article

  • Importing 3d model with multiple skeletons

    - by Sweta Dwivedi
    I have created an animated butterfly in 3ds Max and tried to export it in ".fbx" format to use in XNA; however, as soon as I compile, I get the following errors:

    Warning 1: Multiple skeletons were found in the file. The first skeleton, named "Left.Wing", has been moved to be a child of the scene root. The other, "Right.Wing", will be ignored. Fragment identifier "Right.Wing".
    Error 2: Vertex is bound to bone "Right.Wing", but this bone is not present in the skeleton.

    This is confusing since I do have the bone Right.Wing, and I use it to animate the butterfly. I have seen a few possible solutions for Blender but none for 3ds Max; it would be really helpful if someone could help me out with this.

    Read the article

  • Is it legal to develop a game using D&D rules?

    - by Max
    For a while now I've been thinking about trying my hand at creating a game similar in spirit and execution to Baldur's Gate, Icewind Dale and their offshoots. I'd rather not face the full bulk of work in implementing my own RPG system; I'd like to use D&D rules. Now, reading about the subject, it seems there is something called "The License" which allows a company to brand a game as D&D. This license seems to be exclusive, and let's just say I don't have the money to buy it :p. Is it still legal for me to implement and release such a game, commercially or open-source? I'm not sure exactly which edition would fit best, but since Baldur's Gate is based on 2nd edition, could I go ahead and implement that? In short: what are the issues concerning licensing and publishing when it comes to D&D? Also: I didn't see any similar question...

    Read the article

  • HLSL: Pack 4 values into 32 bit float.

    - by TheBigO
    I can't find any useful information on packing 4 values into a 32 bit float in HLSL. Ideally, what I want to be able to do in HLSL is:

    float4 values = ...                         // Some values where each component is between 0 and 1.
    float packedValues = pack32R(values);
    float4 values2 = unpack32R(packedValues);

    I realize that there will be precision limitations, and performance tradeoffs between different precisions in different methods. I'm just wondering what ideas are out there.
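
    A minimal sketch of the arithmetic, shown in C++ for clarity; the same quantise-and-sum math can be written with floor/frac in HLSL, since shader models before SM4 have no integer bit operations. A 32-bit float has a 24-bit mantissa, so four components survive exactly at 6 bits each, whereas 4 x 8 bits will not round-trip.

    float pack4(float a, float b, float c, float d)          // inputs assumed in [0,1]
    {
        unsigned ia = (unsigned)(a * 63.0f + 0.5f);          // quantise to 6 bits each
        unsigned ib = (unsigned)(b * 63.0f + 0.5f);
        unsigned ic = (unsigned)(c * 63.0f + 0.5f);
        unsigned id = (unsigned)(d * 63.0f + 0.5f);
        unsigned bits = ia | (ib << 6) | (ic << 12) | (id << 18);
        return (float)bits;                                  // < 2^24, so exactly representable
    }

    void unpack4(float packed, float out[4])
    {
        unsigned bits = (unsigned)packed;
        out[0] = (bits         & 63u) / 63.0f;
        out[1] = ((bits >> 6)  & 63u) / 63.0f;
        out[2] = ((bits >> 12) & 63u) / 63.0f;
        out[3] = ((bits >> 18) & 63u) / 63.0f;
    }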

    Read the article

  • Server side random selection of players

    - by Ron
    Assuming I have a simple client-server game where the server picks random players very frequently, I was wondering what the best way is to select a random player, given the following constraints:

    - The solution must be high performance and highly scalable.
    - The random spread should be relatively even (meaning if I have 3 players and pick 99 times, they will each be picked roughly 33 times).
    - It should only pick players who were active in the past X days (optional, but a big bonus).

    The actual DB or data model used to store players isn't an issue here, as we'll select the technology according to our needs. However, high performance and scalability are (at the moment we have over 60,000 unique daily active players, and we plan on growing even more). Thanks!
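
    A minimal sketch of one in-memory structure that gives O(1) uniform picks at this scale: a dense array for picking plus an index map for constant-time removal via swap-with-last. The names are assumptions, and the "active in the last X days" rule is handled by only inserting and keeping recently-seen players in the set.

    #include <cstdint>
    #include <random>
    #include <unordered_map>
    #include <vector>

    struct ActivePlayers {
        std::vector<uint64_t> ids;                     // dense array of active player ids
        std::unordered_map<uint64_t, size_t> index;    // id -> position in `ids`
        std::mt19937_64 rng{std::random_device{}()};

        void add(uint64_t id) {                        // call when a player becomes active
            if (index.count(id)) return;
            index[id] = ids.size();
            ids.push_back(id);
        }
        void remove(uint64_t id) {                     // call when a player falls outside the X-day window
            auto it = index.find(id);
            if (it == index.end()) return;
            size_t pos = it->second;
            ids[pos] = ids.back();                     // move the last id into the hole
            index[ids[pos]] = pos;
            ids.pop_back();
            index.erase(id);
        }
        uint64_t pickRandom() {                        // assumes at least one active player
            std::uniform_int_distribution<size_t> d(0, ids.size() - 1);
            return ids[d(rng)];
        }
    };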

    Read the article

  • Velocity collision detection (2D)

    - by ultifinitus
    Alright, so I have made a simple game engine (see YouTube), and my current implementation of collision resolution has a slight problem involving the velocity of a platform. Basically, I run through all of the objects necessary to detect collisions and resolve those collisions as I find them. Part of that resolution is setting the player's velocity to the platform's velocity, which works great! Unless I have a row of platforms moving at different velocities or a platform between a stack of tiles.... (current system)

    bool player::handle_collisions()
    {
        collisions tcol;
        bool did_handle = false;
        bool thisObjectHandle = false;

        for (int temp = 0; temp < collideQueue.size(); temp++)
        {
            thisObjectHandle = false;
            tcol = get_collision(prevPos.x, y, get_img()->get_width(), get_img()->get_height(),
                                 collideQueue[temp]->get_position().x, collideQueue[temp]->get_position().y,
                                 collideQueue[temp]->get_img()->get_width(), collideQueue[temp]->get_img()->get_height());

            if (prevPos.y >= collideQueue[temp]->get_prev_pos().y + collideQueue[temp]->get_img()->get_height())
                if (tcol.top > 0)
                {
                    add_pos(0, tcol.top);
                    set_vel(get_vel().x, collideQueue[temp]->get_vel().y);
                    thisObjectHandle = did_handle = true;
                }

            if (prevPos.y + get_img()->get_height() <= collideQueue[temp]->get_prev_pos().y)
                if (tcol.bottom > 0)
                {
                    add_pos(collideQueue[temp]->get_vel().x, -tcol.bottom);
                    set_vel(get_vel().x /*collideQueue[temp]->get_vel().x*/, collideQueue[temp]->get_vel().y);
                    ableToJump = true;
                    jumpTimes = maxjumpable;
                    thisObjectHandle = did_handle = true;
                }

            ///
            /// ADD CODE FROM NEXT CODE BLOCK HERE (on forum, not in code)
            ///
        }

        for (int temp = 0; temp < collideQueue.size(); temp++)
        {
            thisObjectHandle = false;
            tcol = get_collision(x, y, get_img()->get_width(), get_img()->get_height(),
                                 collideQueue[temp]->get_position().x, collideQueue[temp]->get_position().y,
                                 collideQueue[temp]->get_img()->get_width(), collideQueue[temp]->get_img()->get_height());

            if (prevPos.x + get_img()->get_width() <= collideQueue[temp]->get_prev_pos().x)
                if (tcol.left > 0)
                {
                    add_pos(-tcol.left, 0);
                    set_vel(collideQueue[temp]->get_vel().x, get_vel().y);
                    thisObjectHandle = did_handle = true;
                }

            if (prevPos.x >= collideQueue[temp]->get_prev_pos().x + collideQueue[temp]->get_img()->get_width())
                if (tcol.right > 0)
                {
                    add_pos(tcol.right, 0);
                    set_vel(collideQueue[temp]->get_vel().x, get_vel().y);
                    thisObjectHandle = did_handle = true;
                }
        }

        return did_handle;
    }

    (If I add the following code where the comment says to, which is glitchy, the above problem doesn't happen, though it brings others.)

    if (!thisObjectHandle)
    {
        if (tcol.bottom > tcol.top)
        {
            add_pos(collideQueue[temp]->get_vel().x, -tcol.bottom);
            set_vel(get_vel().x, collideQueue[temp]->get_vel().y);
        }
        else if (tcol.top > tcol.bottom)
        {
            add_pos(0, tcol.top);
            set_vel(get_vel().x, collideQueue[temp]->get_vel().y);
        }
    }

    How would you change my system to prevent this?

    Read the article

  • iOS persistent storage with update function

    - by jernej
    I'm developing a game which has different levels, and I need to store all levels and their elements (position, image, sounds, ...) in a file/database. The levels will be updated, so I need a function that checks online for an update and downloads a database dump and additional files. I was planning to store all the persistent data in a SQLite database, but I'm not quite sure how to do the update part: how to pack the database dump and the files together (in a .zip or with an XML manifest). Can this be done any other way (as secure as possible)? Thanks!

    Read the article

  • Is 2 lines of push/pop code for each pre-draw-state too many?

    - by Griffin
    I'm trying to simplify vector graphics management in XNA, currently by incorporating state preservation. 2X lines of push/pop code for X states feels like too many, and it just feels wrong to have 2 lines of code that look identical except for one being push() and the other being pop(). The goal is to eradicate this repetitiveness, and I hoped to do so by creating an interface through which a client can pass refs to the classes/structs he wants restored after the rendering calls. Also note that many beginner programmers will be using this, so forcing lambda expressions or other advanced C# features to be used in client code is not a good idea. I attempted to accomplish my goal by using Daniel Earwicker's Ptr class:

    public class Ptr<T>
    {
        Func<T> getter;
        Action<T> setter;

        public Ptr(Func<T> g, Action<T> s) { getter = g; setter = s; }

        public T Deref
        {
            get { return getter(); }
            set { setter(value); }
        }
    }

    an extension method:

    // doesn't work for structs since this is just syntactic sugar
    public static Ptr<T> GetPtr<T>(this T obj)
    {
        return new Ptr<T>(() => obj, v => obj = v);
    }

    and a Push function:

    // returns a Pop Action for later calling
    public static Action Push<T>(ref T structure) where T : struct
    {
        T pushedValue = structure;            // copies the struct data
        Ptr<T> p = structure.GetPtr();
        return new Action(() => { p.Deref = pushedValue; });
    }

    However, this doesn't work, as stated in the code. How might I accomplish my goal? Example of code to be refactored:

    protected override void RenderLocally(GraphicsDevice device)
    {
        if (!(bool)isCompiled) { Compile(); }

        // TODO: make sure state settings don't implicitly delete any buffers/resources
        RasterizerState oldRasterState = device.RasterizerState;
        DepthFormat oldFormat = device.PresentationParameters.DepthStencilFormat;
        DepthStencilState oldBufferState = device.DepthStencilState;
        {
            // Rendering code
        }
        device.RasterizerState = oldRasterState;
        device.DepthStencilState = oldBufferState;
        device.PresentationParameters.DepthStencilFormat = oldFormat;
    }

    Read the article

  • AI to move custom-shaped spaceships (shape affecting movement behaviour)

    - by kaoD
    I'm designing a networked, turn-based, 3D-6DOF space fleet combat strategy game which relies heavily on ship customization. Let me explain the game a bit, since you need to know a bit about it to frame the question. What I aim for is the ability to create your own fleet of ships with custom shapes and attached modules (propellers, tractor beams...) which give advantages and disadvantages to each ship, so you have lots of different fleet distributions. E.g., a long ship with two propellers at the side would let the ship spin around that plane easily, while bigger ships would move slowly unless you place lots of propellers at the back (therefore spending more "construction" points and energy when moving, and only moving fast in that direction). I plan to balance the whole game around this feature.

    The game would revolve around two phases: an orders phase and a combat phase. During the orders phase, you command the different ships. When all players finish the orders phase, the combat phase begins and the ship orders get resolved in real time for some time; then the action pauses and there's a new orders phase.

    The problem comes when I think about player input. To move a ship, you need to turn different propellers on or off if you want to steer, travel forward, brake, rotate in place... These propellers don't have to work at full power, so you can achieve more movement combinations with fewer propellers. I think this approach is a bit boring: the player doesn't want to fiddle with motors or anything, you just want to MOVE and KILL. The way I intend the player to give orders to these ships is by a destination and a rotation, and then the AI would calculate the correct propeller power to achieve that movement and rotation. Propulsion doesn't have to stay the same throughout the entire turn calculation (after the orders have been given), so it would be cool if the ships reacted as they move, adjusting the power of the propellers dynamically for their needs, but that may be too hard to implement and it's not really needed for the game to work. In either case, how would that AI decide which propellers to activate for the best (or at least not worst) trajectory to be achieved? I thought about some approaches:

    - Learning AI: the ship types would learn about their movement by trial and error, adjusting their behaviour with use, and finally becoming "smart". I don't want to get involved THAT far in AI coding, and I think it could be frustrating for the player (even if you can let it learn without playing).
    - Pre-calculated timestep movement: upon ship creation, ALL possible movements are calculated for each propeller configuration and power for a given delta-time. Memory intensive, ugly, bad.
    - Pre-calculated trajectories: the same as above, but not for each delta-time but for the whole trajectory, which would then be fitted as closely as possible. Requires a fixed propeller configuration for the whole combat phase and is still memory intensive, ugly and bad.
    - Continuous brute forcing: the AI continuously checks ALL possible propeller configurations throughout the entire combat phase, pre-calculates a few time steps and decides which is the best one based on that. Con: what's good now might not be so good later, and it's too CPU intensive, ugly, and bad too.
    - Single brute forcing: same as above, but only brute forcing at the beginning of the simulation, so it needs a constant propeller configuration throughout the entire combat phase.
    - Continuous angle check: this is not a full movement method, but maybe a way to discard "stupid" propeller configurations. Given a propeller's current normal vector and the desired final one, you can approximate the power needed for the propeller based on the angle. You must do this continuously throughout the whole combat phase. I figured this one out recently, so I haven't put much thought into it. A priori, it has the "what's good now might not be so good later" drawback too, and it doesn't consider other propellers which may act together to make a better propelling configuration.

    I'm really stuck here. Any ideas?

    Read the article

  • Per-vertex animation with VBOs: Stream each frame or use index offset per frame?

    - by charstar
    Scenario: Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). In either case, the animations are known in full at load time; that is, there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will be at different poses/keyframes at the same time). Assume color and texture coordinate buffers are static.

    Goal: To leverage the richness of well-vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to spend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a key-frame + interpolation-curve solver (reinventing Blender vertex groups and IPOs).

    Current considerations:

    1. Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING).
    2. Create one VBO for each animation where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Each character then simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices + numNormals) * frameNumber). (VBO in STATIC)

    Known trade-offs: In 1 above, each VBO would be small, but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame: both client and pipeline intensive. In 2 above, there would be few VBOs and therefore insignificant buffer binding, with no vertex data getting jammed down the pipe each frame, but each VBO would be quite large. Are there any pitfalls to number 2 (aside from finite memory)? Are there other methods that I am missing?
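
    A minimal sketch of option 2 as a fixed-function GL draw (C++ for concreteness): all frames of one animation live back to back in a single static VBO, and each character draws from an offset based on its current frame. The Vertex layout, animationVBO, character and vertsPerFrame names are assumptions.

    #include <cstddef>

    struct Vertex { float pos[3]; float normal[3]; };    // assumed interleaved layout

    // Setup assumed done once: glEnableClientState(GL_VERTEX_ARRAY / GL_NORMAL_ARRAY),
    // and a GL_STATIC_DRAW buffer filled with every frame of the animation back to back.
    glBindBuffer(GL_ARRAY_BUFFER, animationVBO);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, pos));
    glNormalPointer(GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, normal));

    // One draw per character: no per-frame upload, just a different first vertex.
    int first = character.frameNumber * vertsPerFrame;
    glDrawArrays(GL_TRIANGLES, first, vertsPerFrame);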

    Read the article

  • Frame timing for GLFW versus GLUT

    - by linello
    I need a library which ensures that the timing between frames is as constant as possible during a visual psychophysics experiment. This is usually done by synchronizing the refresh rate of the screen with the main loop. For example, if my monitor runs at 60 Hz, I would like to specify that frequency to my framework. For example, if my game loop is the following:

    void gameloop()
    {
        // do some computation
        printDeltaT();
        // flip buffers
    }

    I would like a constant time interval to be printed. Is it possible with GLFW?
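
    A minimal sketch, assuming GLFW 3 and a window created elsewhere: request vsync so the buffer swap waits for the display's refresh, then measure the interval between swaps. Note the driver's control panel can override the swap interval, so the delta should be verified rather than assumed.

    #include <GLFW/glfw3.h>
    #include <cstdio>

    // after glfwCreateWindow(...) and glfwMakeContextCurrent(window):
    glfwSwapInterval(1);                         // 1 = wait for one vertical refresh per swap

    double last = glfwGetTime();
    while (!glfwWindowShouldClose(window))
    {
        // ... render the stimulus ...

        glfwSwapBuffers(window);                 // blocks until the next refresh when vsync is honoured
        glfwPollEvents();

        double now = glfwGetTime();
        std::printf("delta = %.3f ms\n", (now - last) * 1000.0);   // ~16.67 ms at 60 Hz
        last = now;
    }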

    Read the article

  • How to transform mesh components?

    - by Lea Hayes
    I am attempting to transform the components of a mesh directly using a 4x4 matrix. This is working for the vertex positions, but it is not working for the normals (and probably not the tangents either). Here is what I have:

    // Transform vertex positions - Works like a charm!
    vertices = mesh.vertices;
    for (int i = 0; i < vertices.Length; ++i)
        vertices[i] = transform.MultiplyPoint(vertices[i]);

    // Does not work, lighting is messed up on mesh
    normals = mesh.normals;
    for (int i = 0; i < normals.Length; ++i)
        normals[i] = transform.MultiplyVector(normals[i]);

    Note: The input matrix converts from local to world space and is needed to combine multiple meshes together.

    Read the article

  • Game state management (Game, Menu, Titlescreen, etc)

    - by munchor
    Basically, in every single game I've made so far, I always have a variable like current_state, which can be "game", "titlescreen", "gameoverscreen", etc. Then in my Update function I have a huge:

    if current_state == "game"
        // game stuff ...
    else if current_state == "titlescreen"
        ...

    However, I don't feel like this is a professional/clean way of handling states. Any ideas on how to do this in a better way? Or is this the standard way?
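
    A minimal sketch of the usual alternative (shown in C++; the pattern carries over to most languages): each state is an object with its own update/draw, and the game holds the current state, or a stack of states for pause/overlay screens, instead of branching on a string.

    #include <memory>

    struct GameState {
        virtual ~GameState() = default;
        virtual void update(float dt) = 0;
        virtual void draw() = 0;
    };

    struct TitleScreen : GameState {
        void update(float dt) override { /* ... */ }
        void draw() override { /* ... */ }
    };

    struct Playing : GameState {
        void update(float dt) override { /* ... */ }
        void draw() override { /* ... */ }
    };

    std::unique_ptr<GameState> current = std::make_unique<TitleScreen>();

    void gameUpdate(float dt)
    {
        current->update(dt);    // no if/else chain; switching states just replaces `current`
        current->draw();
    }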

    Read the article

  • How is the terrain generated in Commandos and Commandos game clones/look-alikes?

    - by teodron
    The Commandos series of games and its similar western counterpart, Desperados, use a mix of 2D and 3D elements to achieve a very pleasing and immersive atmosphere. Apart from the concept, which alone made the series a best-seller, the graphical eye-candy was also a much-appreciated asset of those games. I am very curious about the technique used to model and adorn the realistic terrains in those titles. Below are some screenshots that could be relevant as a reference for whoever has a candidate answer. The tiny details and patternless distribution of ornamental textures make me think that these terrains were not generated using a standard heightmap-blendmap method.

    Read the article

  • Strange and erratic transformations when using OpenGL VBOs to render scene

    - by janoside
    I have an existing iOS game with fairly simple scenes (all textured quads), and I'm using Apple's "Texture2D" class. I'm trying to convert this class to use VBOs, since the vertices of my objects basically never change, so I may as well not re-create them for every object every frame. I have the scene rendering using VBOs, but the sizes and orientations of all rendered objects are strange and erratic, though locations seem generally correct. I've been toying with this code for a few days now, and I've found something odd: if I re-create all of my VBOs each frame, everything looks correct, even though I'm almost certain my vertices are not changing.

    Other notes:
    - I'm basing my work on this tutorial, and therefore am also using "IBOs"
    - I create my buffers before rendering begins
    - My buffers include vertex and texture data
    - I'm using OpenGL ES 1.1

    Fearing some strange effect of the current GL matrix state at the time of buffer creation, I've also tried wrapping my buffer-setup code in a "pushMatrix-loadIdentity-popMatrix" block, which (as expected) had no effect. I'm aware that various articles have been published demonstrating that VBOs may not help performance, but I want to understand this problem and at least have the option to use them. I realize this is a shot in the dark, but has anyone else experienced this type of strange behavior? What might I be doing to cause it? It's rather difficult for me to isolate the problem since I'm working in an existing, moderately complex project, so suggestions about how to approach the problem are also quite welcome.

    Read the article

  • Moving a Cube from a GUI texture on iOS [on hold]

    - by London2423
    I really hope someone can help me with this, since I have already been working on it for two days without any result. What I'm trying to achieve is to move a GameObject when a GUITexture is touched on an iPhone. The GameObject to be moved is named Cube. The Cube has a script named "Left"; supposedly, when it is called from the GUITexture, the Cube should move left. I hope that is clear: I want to activate the script in the GameObject from the GUITexture. I tried to use SendMessage but without any joy, so I am using GetComponent. This is the script inside the GUITexture, using Unity and C#:

    // script inside the GameObject Cube so it can move left when called from the GUITexture
    void Awake()
    {
        left = Cube.GetComponent<Left>().enable = true;
    }

    void Start()
    {
        Cube = GameObject.Find("Cube");
    }

    void Update()
    {
        // loop through all the touches on the screen
        for (int i = 0; i < Input.touchCount; i++)
        {
            // execute this code for the current touch (i) on the screen
            if (this.guiTexture.HitTest(Input.GetTouch(i).position))
            {
                // if the current touch hits our guiTexture, run this code
                if (Input.GetTouch(i).phase == TouchPhase.Began)
                    // move the cube object
                    Cube.GetComponent<Left>();
            }

            if (Input.GetTouch(i).phase == TouchPhase.Ended)
            {
                return;
            }

            if (Input.GetTouch(i).phase == TouchPhase.Stationary);
            // if the current finger is stationary, run this code
            {
                Cube.GetComponent<Left>();
            }
        }
    }

    This is the script inside the GameObject named "Cube" that is activated from the GUITexture; when it is activated, it should allow the Cube to move left:

    public class Left : MonoBehaviour
    {
        // Use this for initialization
        void Start()
        {
        }

        // Update is called once per frame
        void OnMousedown()
        {
            transform.position += Vector3.left * Time.deltaTime;
        }
    }

    Before writing here I searched all the documentation, tutorial videos and forums, but I still don't understand where my mistake is. Could someone please help me? Thanks, CL

    Read the article

< Previous Page | 451 452 453 454 455 456 457 458 459 460 461 462  | Next Page >