Search Results

Search found 37616 results on 1505 pages for 'model driven development'.


  • Finding furthermost point in game world

    - by user13414
    I am attempting to find the furthermost point in my game world given the player's current location and a normalized direction vector in screen space. My current algorithm is:

    1. Convert the player's world location to screen space.
    2. Multiply the direction vector by a large number (2000) and add it to the player's screen location to get the distant screen location.
    3. Convert the distant screen location to world space.
    4. Create a line running from the player's world location to the distant world location.
    5. Loop over the bounding "walls" (of which there are always 4) of my game world.
    6. Check whether the wall and the line intersect. If so, where they intersect is the furthermost point of my game world in the direction of the vector.

    Here it is, more or less, in code:

        public Vector2 GetFurthermostWorldPoint(Vector2 directionVector)
        {
            var screenLocation = entity.WorldPointToScreen(entity.Location);
            var distantScreenLocation = screenLocation + (directionVector * 2000);
            var distantWorldLocation = entity.ScreenPointToWorld(distantScreenLocation);
            var line = new Line(entity.Center, distantWorldLocation);

            float intersectionDistance;
            Vector2 intersectionPoint;

            foreach (var boundingWall in entity.Level.BoundingWalls)
            {
                if (boundingWall.Intersects(line, out intersectionDistance, out intersectionPoint))
                {
                    return intersectionPoint;
                }
            }

            Debug.Assert(false, "No intersection found!");
            return Vector2.Zero;
        }

    Now this works, for some definition of "works". I've found that the further out my distant screen location is, the less chance it has of working. When digging into the reasons why, I noticed that calls to Viewport.Unproject could result in wildly varying return values for points that are "far away". I wrote this stupid little "test" to try and understand what was going on:

        [Fact]
        public void wtf()
        {
            var screenPositions = new Vector2[]
            {
                new Vector2(400, 240),
                new Vector2(400, -2000),
            };

            var viewport = new Viewport(0, 0, 800, 480);
            var projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, viewport.Width / viewport.Height, 1, 200000);
            var viewMatrix = Matrix.CreateLookAt(new Vector3(400, 630, 600), new Vector3(400, 345, 0), new Vector3(0, 0, 1));
            var worldMatrix = Matrix.Identity;

            foreach (var screenPosition in screenPositions)
            {
                var nearPoint = viewport.Unproject(new Vector3(screenPosition, 0), projectionMatrix, viewMatrix, worldMatrix);
                var farPoint = viewport.Unproject(new Vector3(screenPosition, 1), projectionMatrix, viewMatrix, worldMatrix);

                Console.WriteLine("For screen position {0}:", screenPosition);
                Console.WriteLine("    Projected Near Point = {0}", nearPoint.TruncateZ());
                Console.WriteLine("    Projected Far Point  = {0}", farPoint.TruncateZ());
                Console.WriteLine();
            }
        }

    The output I get on the console is:

        For screen position {X:400 Y:240}:
            Projected Near Point = {X:400 Y:629.571 Z:599.0967}
            Projected Far Point  = {X:392.9302 Y:-83074.98 Z:-175627.9}

        For screen position {X:400 Y:-2000}:
            Projected Near Point = {X:400 Y:626.079 Z:600.7554}
            Projected Far Point  = {X:390.2068 Y:-767438.6 Z:148564.2}

    My question is really twofold:

    1. What am I doing wrong with the unprojection such that it varies so wildly and, thus, does not allow me to determine the corresponding world point for my distant screen point?
    2. Is there a better way altogether to determine the furthermost point in world space, given a current world space location and a directional vector in screen space?
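    On the second question, here is a hedged sketch of one alternative that avoids unprojecting far-away screen points altogether. It reuses the helpers from the code above (WorldPointToScreen, ScreenPointToWorld, Line, BoundingWalls); the method name, the 10-pixel offset, and the ray length are invented for illustration. The idea: fix the direction with a small screen-space offset, where Unproject is still well conditioned, then extend the ray in world space, where precision is uniform.

        public Vector2 GetFurthermostWorldPointStable(Vector2 directionVector)
        {
            var screenLocation = entity.WorldPointToScreen(entity.Location);

            // Stay close to the player on screen; a few pixels is enough to
            // fix a direction, and keeps the unprojection well conditioned.
            var nearbyScreenLocation = screenLocation + (directionVector * 10);
            var nearbyWorldLocation = entity.ScreenPointToWorld(nearbyScreenLocation);

            var worldDirection = nearbyWorldLocation - entity.Center;
            worldDirection.Normalize();

            // Extend the ray in world space, far past any bounding wall.
            var line = new Line(entity.Center, entity.Center + worldDirection * 1000000f);

            float intersectionDistance;
            Vector2 intersectionPoint;
            foreach (var boundingWall in entity.Level.BoundingWalls)
            {
                if (boundingWall.Intersects(line, out intersectionDistance, out intersectionPoint))
                {
                    return intersectionPoint;
                }
            }

            Debug.Assert(false, "No intersection found!");
            return Vector2.Zero;
        }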


  • HLSL: Pack 4 values into a 32-bit float

    - by TheBigO
    I can't find any useful information on packing 4 values into a 32 bit float in HLSL. Ideally, what I want to be able to do in HLSL is:

        float4 values = ...; // some values where each component is between 0 and 1
        float packedValues = pack32R(values);
        float4 values2 = unpack32R(packedValues);

    I realize that there will be precision limitations, and performance tradeoffs between different precisions in different methods. I'm just wondering what ideas are out there.
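    One idea, as a hedged sketch: the classic base-256 packing, written here in C# so the arithmetic is easy to verify offline (it ports to HLSL with floor() and division; the function names are invented). A 32-bit float carries a 24-bit mantissa, so three 8-bit channels survive exactly, and a fourth channel is exactly where the precision loss the question anticipates shows up:

        // Pack three [0,1] values as base-256 digits of one float.
        // 255 + 255*256 + 255*65536 = 2^24 - 1, still exactly representable.
        static float Pack(float x, float y, float z)
        {
            return (float)(Math.Floor(x * 255.0) +
                           Math.Floor(y * 255.0) * 256.0 +
                           Math.Floor(z * 255.0) * 65536.0);
        }

        static void Unpack(float packed, out float x, out float y, out float z)
        {
            float bz = (float)Math.Floor(packed / 65536.0);
            float by = (float)Math.Floor((packed - bz * 65536.0f) / 256.0);
            float bx = packed - bz * 65536.0f - by * 256.0f;
            x = bx / 255.0f;
            y = by / 255.0f;
            z = bz / 255.0f;
        }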


  • Square game map rendered as sphere

    - by Roflha
    For a hobby project of mine I have created a finite voxel world (similar to Minecraft), but as I said, mine is finite. When you reach the edge of it, you are sent to the other side. That is all working fine, along with rendering the far side of the map, but I want to be able to render this grid as a sphere. Looking down from above, the world is a square. I basically want to be able to represent a portion of that square as a sphere, as if you were looking at a planet. Right now I am experimenting with taking a circular section of the map and rendering that, but it looks too flat (no curvature around the edges). My question, then, is what would be the best way to add some curvature to the edges of a 2D circle to make it look like a hemisphere. However, I am not overly attached to this implementation, so if somebody has some other idea for representing the square as a planet, I am all ears.
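    One hedged sketch of the curvature fake (XNA-style Vector2 and MathHelper assumed; the function name is invented): treat the visible disc as a hemisphere seen from above, reinterpret a point's distance from the center as an arc angle, and draw it at that arc's straight-down projection. Spacing then compresses toward the rim, which reads as a planet's limb:

        // Map a flat offset from the disc's center onto a hemisphere viewed
        // from directly above. Points near the rim bunch together.
        static Vector2 CurveToHemisphere(Vector2 offsetFromCenter, float radius)
        {
            float r = offsetFromCenter.Length();
            if (r < 1e-6f || r > radius)
                return offsetFromCenter;

            // Distance from center, reinterpreted as an arc angle
            // (rim = 90 degrees), then projected back onto the view plane.
            float angle = (r / radius) * MathHelper.PiOver2;
            float projected = radius * (float)Math.Sin(angle);
            return offsetFromCenter * (projected / r);
        }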


  • How do I produce a "mucus spreading" effect in a 2D environment?

    - by nathan
    Here is an example of such mucus spreading. The substance is spread around the source (in this example, the source would be the main alien building). The game is StarCraft; the purple substance is called creep. How would this kind of substance spreading be achieved in a top-down 2D environment? By recalculating the substance progression and regenerating the effect on the fly each frame, by using a large collection of tiles, or something else?
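    A hedged sketch of the tile-based option (all names and the tick cadence are invented for illustration; XNA-style Point assumed): keep a boolean creep grid, advance it on a slow timer by flagging empty tiles that touch creep within a source's radius, and let rendering simply draw the flagged tiles with a transition tileset. Nothing is recomputed per frame:

        bool[,] creep;          // one flag per map tile
        int mapW, mapH;         // creep[source.X, source.Y] is seeded true
                                // when the building is placed

        // Called on a slow timer (a few times per second), not per frame.
        void SpreadCreep(Point source, int maxRadius)
        {
            var grown = new List<Point>();
            for (int x = 0; x < mapW; x++)
            {
                for (int y = 0; y < mapH; y++)
                {
                    if (creep[x, y]) continue;

                    // In range of the source, and touching existing creep?
                    int dx = x - source.X, dy = y - source.Y;
                    bool inRange = dx * dx + dy * dy <= maxRadius * maxRadius;
                    bool touching =
                        (x > 0 && creep[x - 1, y]) || (x < mapW - 1 && creep[x + 1, y]) ||
                        (y > 0 && creep[x, y - 1]) || (y < mapH - 1 && creep[x, y + 1]);

                    if (inRange && touching)
                        grown.Add(new Point(x, y));
                }
            }
            // Apply after scanning, so growth advances one ring per tick.
            foreach (var p in grown)
                creep[p.X, p.Y] = true;
        }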


  • Which purpose do armor points serve?

    - by Bane
    I have seen a mechanic which I call "armor points" in many games: Quake, Counter-Strike, etc. Generally, while the player has these armor points, he takes less damage. However, they act in a similar fashion to health points: you lose them by taking said damage. Why would you design such a feature? Is this just health 2.0, or am I missing something? To me, armor only makes sense in, for example, RPGs, where it is a constant that determines your resistance. But I don't see why it would need to be reducible during combat.


  • What should I worry about when changing OpenGL origin to upper left of screen?

    - by derivative
    For self-education, I'm writing a 2D platformer engine in C++ using SDL/OpenGL. I initially began with pure SDL using the tutorials on sdltutorials.com and lazyfoo.net, but I'm now rendering in an OpenGL context (specifically immediate mode, but I'm learning about VAOs/VBOs) and using SDL for interface, audio, etc. SDL uses a coordinate system with the origin in the upper left of the screen and the positive y-axis pointing down. It's easy to set up my orthographic projection in OpenGL to mirror this. I know that texture coordinates are a right-handed system with values from 0 to 1; flipping the texture vertically before rendering (well, flipping the file before loading) yields textures that render correctly... which is fine if I'm drawing the entire texture, but ultimately I'll be using tilesets and can imagine problems. What should I be concerned about in terms of rendering when I do this? If anybody has any advice, or has done this themselves and can point out future pitfalls, that would be great, but really any thoughts would be appreciated.
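    On the tileset worry specifically, a hedged sketch in plain C# (the math is engine-agnostic; the helper and its parameters are invented for illustration): instead of flipping image files, flip the V coordinate when building a tile's UVs, so a top-left tile index still lands on the right pixels in GL's bottom-left texture space:

        // UV rectangle for tile (col, row) in a tileset whose rows are counted
        // from the image's top, targeting GL's bottom-left texture origin.
        static void TileUV(int col, int row, int tileSize, int texW, int texH,
                           out float u0, out float v0, out float u1, out float v1)
        {
            u0 = (col * tileSize) / (float)texW;
            u1 = ((col + 1) * tileSize) / (float)texW;

            // Row 0 is the top of the image, i.e. near v = 1 in GL.
            v0 = 1.0f - (row * tileSize) / (float)texH;        // tile's top edge
            v1 = 1.0f - ((row + 1) * tileSize) / (float)texH;  // tile's bottom edge
        }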


  • Best GUI toolkit for creating a 3D board game

    - by UserInteractive
    I have created a board game using Java and Swing, using GridLayout and various other APIs. It works properly, but the UI looks very, very simple. I want a couple of animations, like tilting the GridLayout at an angle. There are pawns on boxes of the GridLayout that I want to be animated when somebody clicks on them. I'm not sure of the right GUI toolkit to use for this. Swing repaint is possible to a limit and cannot be used for a lot of animation and graphics. And I realized after creating the game that Swing is probably not a good tool for creating games. Could anybody suggest a better framework that I can use in Eclipse with Java? I was thinking of JavaFX, or tools like Adobe Flash or Adobe AIR. Any suggestions please?


  • Change the alpha of a frame in libgdx

    - by Rudy_TM
    I have this:

        batch.draw(currentFrame, x, y, this.parent.originX, this.parent.originY,
                   this.parent.width, this.parent.height, this.scaleX, this.scaleY,
                   this.rotation);

    I want to apply the alpha that it gets from the method, but there is no overload in the SpriteBatch class that takes an alpha value. Is there some way to apply it? (I did it this way because these are animations, and I wanted to control them.) On my static ones I apply:

        sprite.draw(spriteBatch, alpha);

    Thanks


  • AS3 Calculating Delta Time In Seconds

    - by user1133079
    Here is how I've been trying to implement delta time, based on different internet resources:

        var startTime:Number = getTimer();
        game.Update(deltaTime);
        deltaTime = Number(getTimer() - startTime) * 0.001;

    My issue with this is that it doesn't seem to be giving me accurate timing. The main update shows the frame time at 0.001, and when reinitializing the level it goes to 0.002. I'm using dt elsewhere for a timer, and later on for time-based physics, so I would like it to work as expected. I must be missing something silly.
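    The likely culprit is what gets measured: the snippet times only how long game.Update takes, not the gap between frames, so rendering and frame waits never enter deltaTime (hence the tiny constant values). A hedged sketch of the usual pattern, in C# for illustration (the AS3 version is identical with getTimer()): carry one timestamp across frames.

        // Carried across frames: dt must be frame-to-frame, so it includes
        // rendering and any waiting, not just Update's own cost.
        System.Diagnostics.Stopwatch clock = System.Diagnostics.Stopwatch.StartNew();
        double lastTime;

        void Tick()
        {
            double now = clock.Elapsed.TotalSeconds;
            double deltaTime = now - lastTime;   // seconds since the last frame
            lastTime = now;
            game.Update(deltaTime);
        }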


  • GUI for DirectX

    - by DeadMG
    I'm looking for a GUI library built on top of DirectX, preferably 9, but I can also do 11. I've looked at things like DXUT, but it's way too much for me; I only need some UI controls which I would rather not write (and debug) myself, and their need to keep a C-compatible API is definitely a big downside. I'd rather look at UI libs that are designed to be integrated into an existing DirectX-based system than ones that form the basis of a system. Any recommendations?


  • Implementing invisible bones

    - by DeadMG
    I suddenly have the feeling that I have absolutely no idea how to implement invisible objects/bones. Right now, I use hardware instancing to store the world matrix of every bone in a vertex buffer, and then send them all to the pipeline. But frustum culling, or the simulation setting bones invisible for other reasons, means that some of them will be invisible at any given time. Does this mean I effectively need to re-fill the buffer from scratch every frame with only the visible units' matrices? This seems to me like it would involve a lot of wasted bandwidth.
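    Refilling is the standard answer, and it is cheaper than it sounds if the CPU-side array is reused and only the visible prefix is uploaded. A hedged sketch, assuming an XNA-style DynamicVertexBuffer; the Bone type and field names are invented:

        // Reused scratch array; no per-frame allocation.
        Matrix[] instanceData = new Matrix[maxBones];

        void UploadVisibleInstances(Bone[] bones, DynamicVertexBuffer instanceBuffer)
        {
            int count = 0;
            for (int i = 0; i < bones.Length; i++)
            {
                if (bones[i].Visible)
                    instanceData[count++] = bones[i].WorldMatrix;
            }

            // Upload only the first 'count' matrices; the tail of the buffer
            // is never read because only 'count' instances get drawn.
            instanceBuffer.SetData(instanceData, 0, count, SetDataOptions.Discard);
        }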


  • Procedural terrains in 3D: what has been done? Are there common algorithms and/or theories about it?

    - by jokoon
    Besides programming, modeling an environment takes a great deal of time. I don't know the work time involved in, for example, a WoW dungeon level, or other beautiful environments (city-like, futuristic, jungles, fantasy, etc.), but this kind of work is made from scratch by artists. What are the techniques involved in the Torchlight level randomizer, and do other titles have similarities with this? Is there a family name for such techniques?
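    The umbrella term usually searched for is procedural content generation; for levels specifically, procedural dungeon/level generation. As a hedged illustration of that family's simplest member (rooms and corridors; not a claim about how Torchlight itself works, XNA-style Rectangle assumed, and CarveCorridor is a hypothetical stand-in):

        // Scatter non-overlapping rooms, then chain their centers with
        // L-shaped corridors (horizontal leg, then vertical leg).
        var rand = new Random(seed);
        var rooms = new List<Rectangle>();
        int attempts = 0;
        while (rooms.Count < 8 && attempts++ < 200)
        {
            var room = new Rectangle(rand.Next(1, 50), rand.Next(1, 50),
                                     rand.Next(4, 10), rand.Next(4, 10));
            if (!rooms.Any(r => r.Intersects(room)))
                rooms.Add(room);
        }
        for (int i = 1; i < rooms.Count; i++)
        {
            CarveCorridor(rooms[i - 1].Center, rooms[i].Center);
        }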


  • Problem animating in Unity/Orthello 2D. Can't move gameObject

    - by Nelson Gregório
    I have an enemy NPC that moves left and right in a corridor. It's animated with 2 sprites using the Orthello 2D framework. If I untick the animation's "play on start" and "looping", the NPC moves correctly. If I turn them on, the NPC tries to move but is pulled back to its starting position again and again because of the animation loop. If I turn looping off during runtime, the NPC moves correctly again. What did I do wrong? Here's the NPC code if needed:

        using UnityEngine;
        using System.Collections;

        public class Enemies : MonoBehaviour
        {
            private Vector2 movement;
            public float moveSpeed = 200;
            public bool started = true;
            public bool blockedRight = false;
            public bool blockedLeft = false;
            public GameObject BorderL;
            public GameObject BorderR;

            void Update()
            {
                if (gameObject.transform.position.x < BorderL.transform.position.x)
                {
                    started = false;
                    blockedRight = false;
                    blockedLeft = true;
                }
                if (gameObject.transform.position.x > BorderR.transform.position.x)
                {
                    started = false;
                    blockedLeft = false;
                    blockedRight = true;
                }
                if (started)
                {
                    movement = new Vector2(1, 0f);
                    movement *= Time.deltaTime * moveSpeed;
                    gameObject.transform.Translate(movement.x, movement.y, 0f);
                }
                if (!blockedRight && !started && blockedLeft)
                {
                    movement = new Vector2(1, 0f);
                    movement *= Time.deltaTime * moveSpeed;
                    gameObject.transform.Translate(movement.x, movement.y, 0f);
                }
                if (!blockedLeft && !started && blockedRight)
                {
                    movement = new Vector2(-1, 0f);
                    movement *= Time.deltaTime * moveSpeed;
                    gameObject.transform.Translate(movement.x, movement.y, 0f);
                }
            }
        }
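    One hedged workaround sketch in plain Unity C# (whether it fits depends on how Orthello drives the transform): recompute the patrol position absolutely from a clock each frame, so nothing a restarting animation resets can accumulate. Mathf.PingPong bounces a value between 0 and a length:

        void Update()
        {
            float left = BorderL.transform.position.x;
            float right = BorderR.transform.position.x;

            // 0 -> (right - left) -> 0 -> ..., driven only by time, so a
            // restarted animation cannot drag the NPC back to its start.
            float t = Mathf.PingPong(Time.time * moveSpeed, right - left);
            transform.position = new Vector3(left + t,
                                             transform.position.y,
                                             transform.position.z);
        }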


  • Depth buffer values reset on shader change?

    - by bobobobo
    I have 2 different shaders, and when I change the shader (glUseProgram), it seems that the depth information is lost: everything drawn with the second shader appears completely on top of anything drawn by the first shader. If I switch the order of shader use/drawing, it's the same; the last-drawn object always appears on top of the first-drawn object if there is a shader change between the two objects, even if the last-drawn object is further away.
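    One thing worth ruling out first: glUseProgram itself never touches the depth buffer or depth state, so when draw order alone decides what is on top, the usual culprit is that depth testing was never enabled, or the context has no depth buffer. A hedged sketch of the relevant state, in OpenTK-style C# (the raw GL calls have the same names without the wrapper):

        // Once at startup: verify the context has a depth buffer, then enable
        // the test and allow depth writes.
        GL.Enable(EnableCap.DepthTest);
        GL.DepthFunc(DepthFunction.Lequal);
        GL.DepthMask(true);

        // Every frame: clear depth along with color before either shader draws.
        GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);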


  • Parenting OpenGL with Groups in LibGDX

    - by Rudy_TM
    I am trying to make an object a child of a Group, but this object has a draw method that calls OpenGL to draw on the screen. Its class is this:

        public class OpenGLSquare extends Actor {
            private static final ImmediateModeRenderer renderer = new ImmediateModeRenderer10();
            private static Matrix4 matrix = null;
            private static Vector2 temp = new Vector2();

            public static void setMatrix4(Matrix4 mat) {
                matrix = mat;
            }

            @Override
            public void draw(SpriteBatch batch, float arg1) {
                renderer.begin(matrix, GL10.GL_TRIANGLES);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y0, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y0, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y0, 0f);
                renderer.end();
            }
        }

    In my screen class I have this; I call it in the constructor:

        MyGroupClass spriteLab = new MyGroupClass(spriteSheetLab);
        OpenGLSquare square = new OpenGLSquare();
        square.setX0(100);
        square.setY0(200);
        square.setX1(400);
        square.setY1(280);
        square.color.set(Color.BLUE);
        square.setSize();
        //spriteLab.addActorAt(0, clock);
        spriteLab.addActor(square);
        stage.addActor(spriteLab);

    And in the screen's render method I have:

        @Override
        public void render(float arg0) {
            this.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            stage.draw();
            stage.act(Gdx.graphics.getDeltaTime());
        }

    The problem is that when I use OpenGL with a parent, it resets all the other children to position 0,0, and the OpenGL renderer paints the square at its exact position on the screen, not relative to the parent. I tried using batch.enableBlending() and batch.disableBlending(), which fixes the position problem of the other children, but not the relative position of the OpenGL drawing, and it also applies alpha to the GL drawing. What am I doing wrong? :/


  • About online game servers and how to handle data

    - by TreantBG
    So my question isn't about what technology to use or how to do this or that; it's a more general question. I'm currently developing an action third-person shooter with RPG elements: weapon and armor upgrades, and items. Players will be able to create new games or join old ones. So my question is how to create the game server that players will play on. I have two ideas in mind. Option 1: the player who made the game is the server. All data passes through him, and he sends this data to my server, updating the database of the players with their XP points, kill/death scores, and so on. Option 2: my host machine is the server; the player who made the game just opens a new instance on my host and acts as a client. All players send their input data to the host; the host updates the game and sends responses back to the clients about any changes, like where the enemy is. If I choose option 1, is there a chance for the host to change the game content and manipulate the game results? (I think there is, but I'm not sure.) And if I choose option 2, doesn't that raise the response time and potentially the game lag? Or maybe there is another option?


  • Drawing a sprite or text causes the OpenGL rendering to 'disappear' in SFML

    - by Ken
    I'm using some SFML built-in functions to draw sprites and text as an overlay on top of some OpenGL rendering in an SFML RenderWindow. The OpenGL rendering appears fine until I add the code to draw the sprites or text; the sprite or text drawing causes the OpenGL output to disappear. The following code shows what I'm trying to do:

        sf::RenderWindow window(sf::VideoMode(viewport.width, viewport.height, 32), "SFML Window");

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, viewport.width, 0, viewport.height, 0, 1);

        while (window.pollEvent(Event))
        {
            // event handling...

            // begin drawing
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            glBegin(GL_TRIANGLES);
            glColor3f(col.x, col.y, col.z);
            for (int i = 0; i < 3; i++)
                glVertex2f(pos.x + verts[i].x, pos.y + verts[i].y);
            glEnd();

            // adding this line causes all the previous OpenGL triangles not to appear
            window.draw("Sometext");
            window.display();
        }
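    For reference, a hedged sketch of the usual fix, shown with SFML.Net-style C# names (in C++ the same calls are window.pushGLStates() and window.popGLStates(); overlayText and overlaySprite are placeholders): SFML's draw path changes OpenGL state behind your back, so bracket the 2D overlay pass with a state save/restore.

        // ... your own raw OpenGL pass here ...

        window.PushGLStates();      // save GL state before SFML's 2D drawing
        window.Draw(overlayText);
        window.Draw(overlaySprite);
        window.PopGLStates();       // restore it for the next GL pass

        window.Display();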


  • OpenGL vertex buffer won't render in GLFW3

    - by sm81095
    So I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials yet on how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f(), and such), I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code:

        static const GLuint g_vertex_buffer_data[] = {
            -1.0f, -1.0f, 0.0f,
            1.0f, -1.0f, 0.0f,
            0.0f, 1.0f, 0.0f
        };

        int main(void)
        {
            GLFWwindow* window;

            if (!glfwInit()) {
                fprintf(stderr, "Failed to initialize GLFW.");
                return -1;
            }

            glfwWindowHint(GLFW_SAMPLES, 4);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
            glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
            glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

            window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL);
            if (!window) {
                glfwTerminate();
                fprintf(stderr, "Failed to create a GLFW window");
                return -1;
            }
            glfwMakeContextCurrent(window);

            glewExperimental = GL_TRUE;
            GLenum err = glewInit();
            if (err != GLEW_OK) {
                glfwTerminate();
                fprintf(stderr, "Failed to initialize GLEW");
                fprintf(stderr, (char*)glewGetErrorString(err));
                return -1;
            }

            GLuint VertexArrayID;
            glGenVertexArrays(1, &VertexArrayID);
            glBindVertexArray(VertexArrayID);

            GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl");

            GLuint vertexBuffer;
            glGenBuffers(1, &vertexBuffer);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
            glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);

            while (!glfwWindowShouldClose(window)) {
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

                glUseProgram(programID);
                glEnableVertexAttribArray(0);
                glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
                glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
                glDrawArrays(GL_TRIANGLES, 0, 3);
                glDisableVertexAttribArray(0);

                glfwSwapBuffers(window);
                glfwPollEvents();
            }

            glDeleteBuffers(1, &vertexBuffer);
            glDeleteProgram(programID);
            glfwDestroyWindow(window);
            glfwTerminate();
            exit(EXIT_SUCCESS);
        }

    I know it is not my shaders; they are super simple, and I've checked them against GLFW 2.7, so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.


  • The purpose of using invert and transpose

    - by user699215
    In OpenGL ES and the world of 3D, why use the inverse matrix? The thing is that I don't have any intuition for why it is used, so please correct me: As far as I understand, it is used in shaders, and can help you figure out the opposite direction of the normals? The inverse in ordinary numbers works like this: the product of a number and its multiplicative inverse is 1. Observe that 3/5 * 5/3 = 1. In a matrix, this will give you the identity matrix, which is the base coordinate system, or the origin of the world space, right? But the inverse is some other coordinate system? You can use the transpose (row-major order to column-major order) of a square matrix to find the inverted matrix, as calculating the inverse is process-heavy, and the transpose gives you the inverted matrix as a by-product? Again, I am looking to get some intuition for this, and therefore to be able to use it as intended. Thank you for any reply that will guide me in the right direction. Regards
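    For intuition, the one place shaders routinely need the inverse and the transpose together is the normal matrix, and the reason is a short derivation (plain linear algebra, independent of any API):

        Let M be the model matrix. A surface tangent t transforms as t' = M t.
        A normal n must stay perpendicular to every tangent, so with n' = G n:

            n'^T t' = (G n)^T (M t) = n^T (G^T M) t = 0   for all t with n^T t = 0,

        which forces G^T M = I, i.e. G = (M^{-1})^T, the inverse transpose.

    So the transpose is not a general shortcut for the inverse; that only holds when M is orthogonal (pure rotation, no scale), where M^{-1} = M^T and the normal matrix collapses back to M. The inverse transpose exists so normals stay perpendicular to surfaces under non-uniform scaling.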


  • Platform game collisions with Block

    - by Sri Harsha Chilakapati
    I am trying to create a platform game, and I'm getting wrong collision detection with the blocks. Here's my code:

        // Variables
        GTimer jump = new GTimer(1000);
        boolean onground = true;

        // The update method
        public void update(long elapsedTime) {
            MapView.follow(this);

            // Add the gravity
            if (!onground && !jump.active) {
                setVelocityY(4);
            }

            // Jumping
            if (isPressed(VK_SPACE) && onground) {
                jump.start();
                setVelocityY(-4);
                onground = false;
            }
            if (jump.action(elapsedTime)) {
                // jump expired
                jump.stop();
            }

            // Horizontal movement
            setVelocityX(0);
            if (isPressed(VK_LEFT)) {
                setVelocityX(-4);
            }
            if (isPressed(VK_RIGHT)) {
                setVelocityX(4);
            }
        }

        // The collision method
        public void collision(GObject other) {
            if (other instanceof Block) {
                // Determine the horizontal distance between centers
                float h_dist = Math.abs((other.getX() + other.getWidth() / 2) - (getX() + getWidth() / 2));
                // Now the vertical distance
                float v_dist = Math.abs((other.getY() + other.getHeight() / 2) - (getY() + getHeight() / 2));

                // If h_dist > v_dist, horizontal collision; else vertical collision
                if (h_dist > v_dist) {
                    // Are we moving right?
                    if (getX() < other.getX()) {
                        setX(other.getX() - getWidth());
                    }
                    // Are we moving left?
                    else if (getX() > other.getX()) {
                        setX(other.getX() + other.getWidth());
                    }
                } else {
                    // Are we moving up?
                    if (jump.active) {
                        jump.stop();
                    }
                    // We are moving down
                    else {
                        setY(other.getY() - getHeight());
                        setVelocityY(0);
                        onground = true;
                    }
                }
            }
        }

    The problem is that the object jumps well, but does not fall when it moves off the platform. Here's an image describing the problem. I know I'm not checking underneath the object, but I don't know how. The map is a list of objects; do I have to iterate over all the objects? Thanks


  • Blending transition in cocos2d

    - by fiddler
    In my cocos2d-iphone game, I have 2 backgrounds (CCNodes), each containing a quite complex hierarchy of sprites. I would like to make a smooth transition between them: initially, only the first background is visible; at the end, only the second one is visible. Is there a good way to set the opacity of a full hierarchy of sprites? I tried to recursively set the opacity of all the contained sprites. It kinda works, except that: I guess it's not very efficient; and I would like the opacity of overlapping sprites to be 'merged' (as if the background were one single big sprite).


  • My rhythm game runs choppy even with high frame rate

    - by felipedrl
    I'm coding a rhythm game, and the game runs smoothly with uncapped fps. But when I try to cap it around 60, the game updates in little chunks, like hiccups, as if it were skipping frames or running at a very low frame rate. The reason I need to cap the frame rate is that on some computers I tested, the fps varies a lot (from ~80 to ~250), and those drops are noticeable and degrade response time. Since this is a rhythm game, this is very important. This issue is driving me crazy. I've spent a few weeks on it already and still can't figure out the problem, so I hope someone more experienced than me can shed some light on it. I'll put here all the hints I've tried, along with pseudocode for the two game loops I tried, so I apologize if this post gets too lengthy.

    First game loop:

        const uint UPDATE_SKIP = 1000 / 60;
        uint nextGameTick = SDL_GetTicks();
        while (isNotDone) {
            // only false when a QUIT event is generated!
            if (processEvents()) {
                if (SDL_GetTicks() > nextGameTick) {
                    update(UPDATE_SKIP);
                    render();
                    nextGameTick += UPDATE_SKIP;
                }
            }
        }

    Second game loop:

        const uint UPDATE_SKIP = 1000 / 60;
        while (isNotDone) {
            LARGE_INTEGER startTime;
            QueryPerformanceCounter(&startTime);
            // processEvents will return false in case of a QUIT event
            if (processEvents()) {
                update(frameTime);
                render();
            }
            LARGE_INTEGER endTime;
            do {
                QueryPerformanceCounter(&endTime);
                frameTime = static_cast<uint>((endTime.QuadPart - startTime.QuadPart) * 1000.0 / frequency.QuadPart);
            } while (frameTime < UPDATE_SKIP);
        }

    [1] At first I thought it was a timer resolution problem. I was using SDL_GetTicks, but even when I switched to QueryPerformanceCounter, supposedly less granular, I saw no difference.

    [2] Then I thought it could be due to a rounding error in my position computation, and since game updates are smaller at high FPS, such an error would be less noticeable there. Indeed there is a small error, but from my tests I realized that it is not enough to produce the position jumps I'm getting. Also, another intriguing factor is that if I enable vsync, I get smooth updates at 60fps regardless of the frame-cap code. So why not rely on vsync? Because some computers' graphics card settings can force it off.

    [3] I started printing the maximum and minimum frame times measured over a 1-second span, in the hope that every few frames one would take a long time, but still not enough to drop my fps computation. It turns out that, with the frame-cap code, I always get frame times in the range of [16, 18] ms, and still the game "does not move like Jagger".

    [4] My process's priority is set to HIGH (Windows doesn't allow me to set REALTIME for some reason). As far as I know, there is only one other thread running along with the game (a sound callback, which I really don't have access to). I'm using AudiereLib. I then disabled Audiere by removing it from the project and still got the issue. Maybe there are some other threads running, and one of them takes too long to come back right in between my frame time measurements; I don't know. Is there a way to know which threads are attached to my process?

    [5] There is some dynamic data created during the game run. It is a little bit hard to remove it to test, though. Maybe I'll have to try harder on that one.

    Well, as I told you, I really don't know what to try next. Anything, I mean anything, would be of great help. What bugs me most is why I get smooth updates at 60fps with vsync enabled, but not at 60fps without vsync. Is there a way to implement software vsync? I mean, query display sync info? Thanks in advance.
    I appreciate those who got this far, and again I apologize for the long post. Best regards from a fellow coder.
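    For comparison, a hedged sketch (C# for brevity; the structure is identical in C++) of the standard fix-your-timestep accumulator loop. The first loop above can slip whole frames when the tick test barely misses, and the second busy-waits yet still quantizes to whole milliseconds; here rendering runs free while the simulation advances in exact fixed steps, and the leftover fraction is available for interpolation:

        const double Step = 1.0 / 60.0;   // fixed simulation step, in seconds
        var clock = System.Diagnostics.Stopwatch.StartNew();
        double last = clock.Elapsed.TotalSeconds;
        double accumulator = 0.0;

        while (isNotDone)
        {
            processEvents();

            double now = clock.Elapsed.TotalSeconds;
            accumulator += now - last;    // real elapsed time, fractions kept
            last = now;

            while (accumulator >= Step)
            {
                update(Step);             // always exactly Step; no jitter enters
                accumulator -= Step;
            }

            // Render between states using the leftover fraction (0..1).
            render(accumulator / Step);
        }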


  • Selling your iPhone games

    - by Artemix
    Hi. So, long story short, some days ago I published an iPhone game. I think the game wasn't that bad, tbh, and still I got only 10 sales at $0.99. Are there any publishers, sponsors, or distributors who can make your game "visible" on the App Store market? Or is the only thing you need an amazing game, and that's all? Somehow I think that even if you have an awesome game, if you don't do the "marketing magic" correctly you will not exist in the store. Now I'm making a second game, completely different, and I want to know how to do things right. If anyone knows something about this topic, let me know. Thanks in advance.


  • How can I use WebGL to create a tile-based multi-layer scrolling platform game?

    - by Nicholas Hill
    I've found WebGL (based on OpenGL) to be a fiendish and unforgiving framework for those learning to write HTML5-based games. Despite the presence of many examples of how to get started, I'm really struggling to understand how I could simply load a bunch of images and render them to a canvas quickly using WebGL. My specific scenario involves trying to render a map using a bespoke but simple multi-layered tile engine, where each value in a three-dimensional array points to the image to use for that location in the rendered image. Think "Sonic the Hedgehog" via tilesets, tiles, maps, layers, sprites, etc. Can anyone enlighten me: 1) How can I load an image that I can use as a texture in WebGL? 2) How can I dynamically select an image at run time and draw it at any coordinate, which I also select at run time?


  • Distributed Rendering in the UDK and Unity

    - by N0xus
    At the moment I'm looking at getting a game engine to run in a CAVE environment. So far during my research I've seen a lot of people get both Unity and the Unreal Engine up and running in a CAVE (someone did get CryEngine to work in one, but there is little research data about it). As of yet, I have not cemented my final choice of engine for the next stage of my project. I have experience in both, so the learning curve will be gentle either way. Both engines offer stereoscopic rendering, either already built in with RealD (Unreal) or by doing it yourself (Unity). Both can also make use of other input devices, such as the Kinect. So again, both engines are still on the table. For the last bit of my preliminary research, I was advised to see if either, or both, engines could do distributed rendering. I was advised this because the final game we make could go into a variety of differently sized CAVEs. The one I have access to is roughly 2.4m x 3m cubed, and I have been duly informed that it is a "baby" compared to others. So, finally, on to my question: can either the Unreal Engine or the Unity engine allow developers to do distributed rendering? Either through built-in features, or by creating my own plugin/script?

