Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • What is the purpose of the canonical view volume?

    - by breadjesus
    I'm currently learning OpenGL and haven't been able to find an answer to this question. After the projection matrix is applied to the view space, the view space is "normalized" so that all the points lie within the range [-1, 1]. This is generally referred to as the "canonical view volume" or "normalized device coordinates". While I've found plenty of resources telling me about how this happens, I haven't seen anything about why it happens. What is the purpose of this step?
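
    For reference, the mapping the question describes is the perspective divide followed by the viewport transform; in standard OpenGL conventions:

        \[
        (x_{ndc},\, y_{ndc},\, z_{ndc}) = \left(\frac{x_{clip}}{w_{clip}},\ \frac{y_{clip}}{w_{clip}},\ \frac{z_{clip}}{w_{clip}}\right) \in [-1,1]^3,
        \qquad
        x_{win} = \frac{w_{vp}}{2}\,(x_{ndc}+1) + x_{vp}, \quad
        y_{win} = \frac{h_{vp}}{2}\,(y_{ndc}+1) + y_{vp}
        \]

    where (x_vp, y_vp, w_vp, h_vp) are the glViewport parameters. Because every projection, perspective or orthographic, ends up in this same fixed cube, the hardware can clip against the six constant planes x, y, z = ±1 and apply a single projection-independent viewport (and depth-range) mapping; that uniformity is essentially the purpose of the canonical view volume.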

    Read the article

  • How to mask in Cocos2d

    - by alex
    Hi, I'm an iOS developer but I'm new to cocos2d. I'm working on a new game using Kobold2D (cocos2d is installed too) and I want to achieve this effect: http://postimage.org/image/ngj399ibn/ I know how it's done in Flash, but I can't reproduce it in Kobold. There are two images of the same size: one is a low-res image used as the background, and the second is a hi-res image drawn over the first. When the "reticle" mask moves, it should reveal the second image inside the circle, while outside it only the background is visible. I've been googling with no success and have looked at some Ray Wenderlich projects, but they weren't helpful. Any help would be appreciated.

    Read the article

  • How to handle a Tile Map Scrolling [duplicate]

    - by DGomez
    This question already has an answer here: Implementing a camera / viewport to a 2D game (1 answer). I'm making a video game, and I think I have a conceptual problem. The game will be a platformer that uses tile maps, so to start I will create a mask matrix indicating which tiles to load. My problem is: how should I handle the scrolling? Should I create a giant mask matrix indicating, for each position of the whole level, what is supposed to be loaded, and then, according to the position of the player, change the section to be drawn? Is this a correct approach to this situation?
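
    One common answer, sketched below in C++ (the tile size, camera position and drawTile call are assumed names, not anything from a specific engine): keep the whole level in one tile array, but each frame compute just the range of tile columns and rows the camera overlaps and draw only those.

        #include <algorithm>
        #include <vector>

        // Hypothetical level data: one row-major array of tile indices for the whole map.
        struct TileMap {
            int widthInTiles;
            int heightInTiles;
            std::vector<int> tiles;          // tiles[y * widthInTiles + x]
        };

        // Draw only the tiles overlapped by the camera rectangle (camera given in pixels).
        void drawVisibleTiles(const TileMap& map, int cameraX, int cameraY,
                              int screenW, int screenH, int tileSize)
        {
            int firstX = std::max(0, cameraX / tileSize);
            int firstY = std::max(0, cameraY / tileSize);
            int lastX  = std::min(map.widthInTiles  - 1, (cameraX + screenW) / tileSize);
            int lastY  = std::min(map.heightInTiles - 1, (cameraY + screenH) / tileSize);

            for (int y = firstY; y <= lastY; ++y) {
                for (int x = firstX; x <= lastX; ++x) {
                    int tile = map.tiles[y * map.widthInTiles + x];
                    (void)tile; // hand it to the renderer, e.g. drawTile(tile, x * tileSize - cameraX, y * tileSize - cameraY)
                }
            }
        }

    The full matrix stays in memory (usually a few bytes per tile, which is negligible even for large levels), but only the visible window is touched each frame, which is the part that matters for drawing; the same indexing also works for collision queries around the player.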

    Read the article

  • Client/Server game even in solo: any big problem?

    - by Klaim
    I'm making a game whose core design is built around multiplayer, but it should also provide a really interesting, self-sufficient solo game, a bit like a real-time strategy game. The events and actions taken shouldn't be as massive and immediate as in an FPS, so you can think of the networking as similar to an RTS. It's a PC game targeting Windows, Mac OS X and Linux (Ubuntu & Fedora). It's programmed in C++, using a variety of open source libraries, so I have great (potential) control over performance. So far I always considered it fine for the game to run as two applications, client & server, even in solo mode. However, as I'm in the process of starting the network code, I'm having doubts about whether it's a good idea. I'm not a specialist, so I might be missing something in my analysis. I see these pros and cons:

    Pros: the game works only one way, so if I fix a bug it should apply to all game modes, whatever the distance to the server is; basic networking issues would be detected early, including behaviour with installed protection software (firewalls) (I am not a specialist, so this might be wrong).

    Cons: I suppose that even if it should really be fast enough, networking client and server on the same computer would still be slower than no networking and passing messages within one process's memory; maybe debugging would be more difficult? I don't have experience with this case, but so far I assume Visual Studio lets me debug multiple processes, so it shouldn't be really different. Also, remote debugging.

    My question is: is there a big disadvantage that I missed? Or maybe there are advantages I missed that should encourage me to just continue with client-server-only game sessions?

    Read the article

  • XNA: draw a sprite in 3d, is that possible?

    - by Heisenbug
    Until now I have always used sprites to draw in 2D: spriteBatch.Draw(myTexture, rectangle, color); (I suppose the texture is bound internally to 2 triangles and then scaled.) Now I'm porting my game to 3D and I have to draw several planes (walls, floor, roof, ...). Do I need to manually bind a texture to geometry (for example using VertexPositionColorTexture with a VertexBuffer and IndexBuffer), or is there a simpler way to do that? I'm looking for something like spriteBatch.Draw with the rectangle clip specified in 3D space: spriteBatch.Draw(myTexture, rectangleIn3D, color);

    Read the article

  • Making video from 3D graphics in OpenGL

    - by MVTC
    What are some of the preferred methods or libraries for creating video from an OpenGL graphics simulation? For example, I want to create a visualization (video) of an N-body gravity simulation by rendering non-real-time OpenGL frames. The simulation is already coded; I just don't know how to convert it to video. EDIT: I am also interested in providing the following functionality: the user can adjust parameters, including the time step between captured frames, and then initiate the simulation. The user waits for the simulation to complete and can then watch the results. The user is able to increase or decrease the playback speed of the simulation: in slow motion, more frames are used (you see higher-resolution time steps), and when the speed is increased, you see lower-resolution time steps at a higher rate, but the frames per second shown on the screen stays constant.
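
    For the capture part, one low-friction route (a sketch assuming a POSIX popen and an ffmpeg binary on the PATH; nothing here is tied to a particular engine) is to read each rendered frame back with glReadPixels and pipe the raw pixels straight into ffmpeg, which handles the encoding:

        #include <cstdio>
        #include <vector>
        #include <GL/gl.h>

        // Captures the current framebuffer each step and streams it to ffmpeg (POSIX popen).
        class FrameRecorder {
        public:
            FrameRecorder(int width, int height, int fps)
                : w(width), h(height), pixels(static_cast<size_t>(width) * height * 3)
            {
                char cmd[512];
                std::snprintf(cmd, sizeof(cmd),
                    "ffmpeg -y -f rawvideo -pixel_format rgb24 -video_size %dx%d "
                    "-framerate %d -i - -vf vflip -c:v libx264 -pix_fmt yuv420p out.mp4",
                    w, h, fps);
                pipe = popen(cmd, "w");               // ffmpeg reads raw frames from stdin
            }

            void captureFrame() {
                if (!pipe) return;
                glPixelStorei(GL_PACK_ALIGNMENT, 1);  // rows are tightly packed RGB
                glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
                std::fwrite(pixels.data(), 1, pixels.size(), pipe);
            }

            ~FrameRecorder() { if (pipe) pclose(pipe); }

        private:
            int w, h;
            std::vector<unsigned char> pixels;
            std::FILE* pipe = nullptr;
        };

    Because frames are written at whatever rate the simulation produces them, the playback-speed requirement largely reduces to choosing the time step between captured frames and the framerate passed to ffmpeg (or simply changing the playback speed in the player).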

    Read the article

  • forward rendering and multiple shadow maps

    - by Irbis
    I have two light sources in my scene. I created two FBOs which store depth textures for these lights. A render loop looks like this:

        bind fbo1
        save depth values for first light
        unbind fbo1
        bind fbo2
        save depth values for second light
        unbind fbo2
        enable additive blending
        bind first depth texture
        render scene
        bind second depth texture
        render scene
        disable additive blending

    For one light source the program works fine. For many light sources I use additive blending to accumulate the lighting results, but then some objects become transparent (for example, when an object which is further away from the camera is drawn before an object which is closer to the camera). How can I resolve that problem? How should I accumulate lighting effects for many light sources (many shadow maps)? P.S. I use OpenGL/GLSL 3.3+.
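
    A standard fix for this multi-pass setup is to resolve visibility once with a depth pre-pass and then let every additive lighting pass test against that depth with an equality test and depth writes disabled, so fragments of occluded geometry are rejected instead of being added on top of nearer surfaces. A sketch of the state changes (the scene/shader functions are placeholders; only the GL calls are the actual API):

        #include <GL/glew.h>                 // or whatever loader the project already uses

        // Assumed scene hooks, defined elsewhere.
        void renderSceneGeometry();          // draws all opaque geometry, no shading needed
        void bindShadowMap(int lightIndex);  // binds the depth texture of light i
        void renderSceneLit(int lightIndex); // shades the scene with light i only
        extern int lightCount;

        void renderFrame()
        {
            // Pass 0: depth pre-pass. Writes depth only, no colour.
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LESS);
            glDepthMask(GL_TRUE);
            glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
            renderSceneGeometry();

            // Lighting passes: additive, and only where the fragment is the visible surface.
            glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
            glDepthFunc(GL_EQUAL);           // occluded fragments fail the test
            glDepthMask(GL_FALSE);           // keep the pre-pass depth intact
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_ONE);     // accumulate light contributions

            for (int i = 0; i < lightCount; ++i) {
                bindShadowMap(i);
                renderSceneLit(i);
            }

            glDisable(GL_BLEND);
            glDepthMask(GL_TRUE);
            glDepthFunc(GL_LESS);
        }

    An equivalent variant is to render the first light normally (no blending, depth writes on, GL_LESS), then switch to additive blending with GL_EQUAL and depth writes off for the remaining lights.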

    Read the article

  • How to texture voxel terrain without triplanar texturing?

    - by Thelvyn
    How can a voxel terrain (marching cubes) be textured without triplanar mapping? The goal is to have more artistic freedom. I think I could unwrap the mesh while extracting the isosurface and then use projective painting, but I do not know how to handle terrain modifications without breaking the texture. I also guess that virtual texturing could help here. Links on these matters would be appreciated.

    Read the article

  • How to reduce the time it takes to load my web game? [closed]

    - by Danial
    I created a puzzle game with Unity and uploaded it to one server. This works fine, but I bought a new server and uploaded my game to it as well. There, the loading time is much longer. These are the servers: http://pinheadsinteractive.com/Mozzie/ (fast) http://operation-mozzie-free.com/ (slow) The Unity files are exactly the same from one server to the next. My client is dissatisfied with the new, slow loading time. So, how can I reduce the time my Unity game takes to load? In some cases they could not even load the game at all. For the moment, I'm using an iframe on the new server as a workaround, but the issue remains unsolved.

    Read the article

  • How can I keep track of a battle log on a web game?

    - by Jay W
    Recently I started working on a web turn-based PvP RPG game. Now I'm working on the battle system, but I've encountered some issues: how can I keep track of everything that happens in the battle? It should keep track of the characters on the field, the inventory, the damage done, etc. I first thought I would simply put it in the (MySQL) database, but I think that would be too much, especially if several people are in a battle. I thought of putting this in sessions or cookies, but I don't think that's reliable. Does anyone have an idea how I can do this?

    Read the article

  • Will C++ remain viable for game engines in somewhat distant future?

    - by samual
    C++11 has opened up possibilities that C++ programmers could previously only dream of. I have been learning C++ for three years now, and it is going well. Now I want to get into video games. Every game engine core I have looked at was a monster written in C++. My question is: if I get into serious game engine development, and perfecting it takes, say, 10 years, will we still be writing game engines in C++ (in a newer standard)? Or will John Carmack write id Tech 7 in C++? Note: I am strictly talking about game engines.

    Read the article

  • How to display a consistent background image

    - by Tofu_Craving_Redish_BlueDragon
    Drawing a large background is relatively slow in PyGame. In order to avoid drawing the background every frame, you could draw it once and then do nothing. However, if something is drawn on top of the surface and keeps moving, you will need to redraw the background in order to "erase" the pixels left behind by the moving object; otherwise, you will have "traces" of it. I have a moving object in my PyGame game, but I do not want to "clear the color buffer" by redrawing the background image, because redrawing it every frame is slow. My solution: "clear" only the required portions of the buffer (where the "traces" of the moving object are left) by redrawing just those portions of the background. Is there a better way to maintain a consistent background?

    Read the article

  • Change the shape of a body dynamically

    - by user45491
    I have a balloon that I need to continuously inflate and deflate in the update method. I have tried to use setScaleCenter but it is not giving the desired result. Below is the code I am trying to make work:

        scale += (float) ((dist - lastDist) / lastDist);
        Log.d("pinch", "scale is " + scale);
        Log.d("pinch", "change in scale is " + (float) ((lastDist - dist) / lastDist));
        player.setScaleCenter(scale, scale);
        player.setScale(scale);

    Read the article

  • How can I keep the correct alpha during rendering particles?

    - by April
    Recently I was trying to save textures of 3D particles so that I can reuse them in 2D rendering, and now I have a problem with the alpha channel. An artist told me that my textures should have an unpremultiplied alpha channel. When I try to get the RGB values back, I get strange results: some areas become lighter or even totally white. I mainly focus on the additive and blend modes, that is: ADDITIVE: srcAlpha vs 1; BLEND: srcAlpha vs 1-srcAlpha. I tried a technique called premultiplied alpha. It gives you the right RGB values, which is all you need on screen. As for the alpha value, it worked well with BLEND mode but not ADDITIVE mode: as you can see from the parameters, BLEND mode always keeps the value within 1, while ADDITIVE mode cannot guarantee that. I want a proper alpha, but it ends up too big or too small relative to the RGB. What can I do? Any help would be greatly appreciated. PS: If you don't understand what I am trying to do, there is a commercial program called "Particle Illusion" where you can create various particles and then save the scene to a texture, optionally removing the particles' background.
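
    For what it's worth, here is a sketch of how the two modes are usually unified once the textures are premultiplied (plain OpenGL state calls, nothing engine-specific): with premultiplied RGB, "normal" blending and additive blending become the same blend function, and the alpha that accumulates in the render target stays within [0, 1], which is the value worth keeping when the scene is saved to a texture.

        // Premultiplied "over" blending:
        //   dst.rgb = src.rgb + dst.rgb * (1 - src.a)
        //   dst.a   = src.a   + dst.a   * (1 - src.a)   // never leaves [0, 1]
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        // Additive particles are the special case src.a == 0 of the same equation:
        //   dst.rgb = src.rgb + dst.rgb,   dst.a unchanged
        // so additive sprites can be baked into a premultiplied texture by authoring
        // their colour with zero alpha, and the single blend mode above covers both.

    If the alpha channel ever needs different factors from the colour channels, glBlendFuncSeparate() exists for exactly that.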

    Read the article

  • How can I determine the first visible tile in an isometric perspective?

    - by alekop
    I am trying to render the visible portion of a diamond-shaped isometric map. The "world" coordinate system is a 2D Cartesian system, with the coordinates increasing diagonally (in terms of the view coordinate system) along the axes. The "view" coordinates are simply mouse offsets relative to the upper left corner of the view. My rendering algorithm works by drawing diagonal spans, starting from the upper right corner of the view and moving diagonally to the right and down, advancing to the next row when it reaches the right view edge. When the rendering loop reaches the lower left corner, it stops. There are functions to convert a point from view coordinates to world coordinates and then to map coordinates. Everything works when rendering from tile 0,0, but as the view scrolls around, the rendering needs to start from a different tile. I can't figure out how to determine which tile is closest to the upper right corner. At the moment I am simply converting the coordinates of the upper right corner to map coordinates. This works as long as the view origin (upper right corner) is inside the world, but when approaching the edges of the map the starting tile coordinate obviously becomes invalid. I guess this boils down to asking: how can I find the intersection between the world X axis and the view X axis?
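
    For reference, here is a hedged C++ sketch of the clamping approach, assuming the common diamond layout where map-to-view is viewX = (mapX - mapY) * tileW/2 and viewY = (mapX + mapY) * tileH/2 with the map origin at view point (originX, originY); if your axes differ, the two conversion formulas need to be adjusted accordingly:

        #include <algorithm>
        #include <cmath>

        struct MapCoord { int x, y; };

        // Inverse of the assumed map-to-view transform.
        MapCoord viewToMap(int viewX, int viewY, int originX, int originY,
                           int tileW, int tileH)
        {
            float dx = float(viewX - originX);
            float dy = float(viewY - originY);
            float mapX = dy / tileH + dx / tileW;
            float mapY = dy / tileH - dx / tileW;
            return { int(std::floor(mapX)), int(std::floor(mapY)) };
        }

        // Starting tile for the span renderer: convert the view corner, then clamp it
        // into the map so the result stays valid even when the corner is outside the world.
        MapCoord startTile(MapCoord corner, int mapWidth, int mapHeight)
        {
            return { std::clamp(corner.x, 0, mapWidth - 1),
                     std::clamp(corner.y, 0, mapHeight - 1) };
        }

    Clamping plays the role of the axis intersection asked about at the end: when the corner leaves the map, the nearest valid tile along the offending axis is where the spans should start, and per-tile bounds checks inside the span loop handle the rest.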

    Read the article

  • Crash when trying to detect touch

    - by iQue
    I've got a character in a 2D game using SurfaceView that I want to move using a button (eventually a joystick), but my game crashes as soon as I try to move my sprite. This is my onTouch method for my steering button:

        public void handleActionDown(int eventX, int eventY) {
            if (eventX >= (x - bitmap.getWidth() / 2) && (eventX <= (x + bitmap.getWidth() / 2))) {
                if (eventY >= (y - bitmap.getHeight() / 2) && (y <= (y + bitmap.getHeight() / 2))) {
                    setTouched(true);
                } else {
                    setTouched(false);
                }
            } else {
                setTouched(false);
            }
        }

    And if I put this in my update method:

        public void update() {
            x += (speed.getXv() * speed.getxDirection());
            y += (speed.getYv() * speed.getyDirection());
        }

    the sprite moves on its own just fine, but as soon as I add:

        public void update() {
            if (steering.isTouched()) {
                x += (speed.getXv() * speed.getxDirection());
                y += (speed.getYv() * speed.getyDirection());
            }
        }

    the game crashes. Does anyone know why this happens or how to fix it? I cannot figure it out. I'm using MotionEvent.ACTION_DOWN to check whether the user is pressing the screen.

    Read the article

  • Unit turning in navmesh-based pathfinding

    - by Haddayn
    I'm working on an RTS game, and I'm using navmeshes for unit pathfinding. I know how to find a general path within a navmesh, but how do you determine whether a unit has enough space to turn? I have units of different shapes (mostly rectangles with different dimensions) and with different turn radii. Additionally, some units can turn in place and some can move in reverse. So, how do I find a path that a unit can follow, considering that it cannot rotate easily?

    Read the article

  • What is the point in using real time?

    - by bobobobo
    I understand that a lot of frameworks provide the real elapsed time per frame (which should average around 16-17 ms), e.g. GetTimeElapsedSinceLastFrame, which gives you wall-clock time. But should we use this information in a basic physics simulation? It looks to me like a bad idea. Say there is a slight lag on the machine, for whatever reason (say a virus scanner starts up). The calculations all jump, and there is no need for this. Why not use a virtual second and ignore wall-clock time? For gameplay on the level of Commander Keen, shouldn't you always use the virtual second and not real time (besides stopwatch timing for racing games)? I don't see a need to use real time rather than a fixed 16 ms time step.
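
    The usual middle ground, for reference, is a fixed simulation step fed by an accumulator of wall-clock time: the physics always advances in identical ticks, but the real elapsed time decides how many ticks run each frame, so the simulation is deterministic per step yet does not drift against the clock. A minimal C++ sketch (the three game hooks are assumed names):

        #include <chrono>

        // Assumed game hooks, provided elsewhere in the engine.
        bool gameIsRunning();
        void updatePhysics(double dtSeconds);
        void render();

        void runGameLoop()
        {
            using Clock = std::chrono::steady_clock;
            const std::chrono::duration<double> dt(1.0 / 60.0);   // fixed ~16.7 ms step
            std::chrono::duration<double> accumulator(0.0);
            auto previous = Clock::now();

            while (gameIsRunning()) {
                auto now = Clock::now();
                auto frameTime = now - previous;
                previous = now;

                // Clamp big hitches (virus scan, breakpoint, window drag) so the simulation
                // never tries to catch up with a long burst of steps.
                if (frameTime > std::chrono::milliseconds(250))
                    frameTime = std::chrono::milliseconds(250);

                accumulator += frameTime;
                while (accumulator >= dt) {
                    updatePhysics(dt.count());   // physics always sees the same fixed step
                    accumulator -= dt;
                }
                render();
            }
        }

    Clamping the frame time is what neutralises the virus-scanner scenario: the game slows down momentarily instead of simulating one huge jump.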

    Read the article

  • 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing?

    - by Arthur Wulf White
    I wish to add the ability to zoom in, zoom out, rotate and move the view in a top-down view over a collection of points and lines in a large 2D map. I split the map into a grid so I only need to render the points that are 'near' the camera. My question is, how do I render a point A(Xp, Yp) given the following details: the offset of the camera from the origin of the map is Xc, Yc, meaning the camera center is positioned on top of that point (if there's a point at Xc, Yc it is positioned in the center of the screen); the rotation angle is alpha; the scale is S. Read my answer first; I am thinking there is a more optimized solution, thanks. My question is how to include the following improvement: I read in the AS3 Bible book that, in regards to ShaderInput, "You can use these methods to coerce Pixel Bender to crunch huge sets of data masquerading as images, without doing too much work on the ActionScript side to make them look like images." Meaning that if I am performing the same linear function on a lot of items, I can do it all at once if I use shaders correctly and save processing time. Does anyone know how that is accomplished? Here is a sample of what I mean: http://wonderfl.net/c/eFp0/
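
    For the first half of the question, the per-point transform itself is the usual translate-rotate-scale chain (W and H below are the viewport width and height, which the question does not name, and the sign of alpha depends on whether it describes the camera's rotation or the view's):

        \[
        \begin{pmatrix} x_{view} \\ y_{view} \end{pmatrix}
        = S \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix}
        \begin{pmatrix} X_p - X_c \\ Y_p - Y_c \end{pmatrix}
        + \begin{pmatrix} W/2 \\ H/2 \end{pmatrix}
        \]

    Because this is the same affine function applied to every point, it is the kind of batch operation the ShaderInput quote is about: presumably the point coordinates can be packed into a ShaderInput, a Pixel Bender kernel can evaluate the transform for the whole set in one ShaderJob, and the transformed coordinates read back, rather than looping in ActionScript.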

    Read the article

  • Collision rectangle response

    - by dotty
    I'm having difficulties getting a movable rectangle to collide with more than one rectangle. I'm using SFML, which has a handy intersects() function that takes two rectangles and returns the intersection. I have a vector full of rectangles which I want my movable rectangle to collide with. I'm looping through it using the following code (p is the movable rectangle; isCollidingWith returns a bool but also uses SFML's intersects() to work out the intersection):

        for (unsigned i = 0; i != testRects.size(); i++) {
            if (p.isCollidingWith(testRects[i])) {
                p.collide(testRects[i]);
            }
        }

    This is the actual collide() code:

        void gameObj::collide(gameObj collidingObject) {
            printf("%f %f\n", this->colliderResult.width, this->colliderResult.height);
            if (this->colliderResult.width < this->colliderResult.height) {
                // collided on X
                if (this->getCollider().left < collidingObject.getCollider().left) {
                    this->move(-this->colliderResult.width, 0);
                } else {
                    this->move(this->colliderResult.width, 0);
                }
            }
            if (this->colliderResult.width > this->colliderResult.height) {
                if (this->getCollider().top < collidingObject.getCollider().top) {
                    this->move(0, -this->colliderResult.height);
                } else {
                    this->move(0, this->colliderResult.height);
                }
            }
        }

    and the isCollidingWith() code is:

        bool gameObj::isCollidingWith(gameObj testObject) {
            if (this->getCollider().intersects(testObject.getCollider(), this->colliderResult)) {
                return true;
            } else {
                return false;
            }
        }

    This works fine when there's only one rect in the scene. However, when there's more than one rect it causes issues when working out two collisions at once. Any idea how to deal with this correctly? I have uploaded a video to YouTube to show my problem. The console on the far right shows the width and height of the intersections. You can see on the console that it's trying to calculate two collisions at once; I think this is where the problem is being caused. The YouTube video is at http://www.youtube.com/watch?v=fA2gflOMcAk and this image also seems to illustrate the problem nicely. Can someone please help? I've been stuck on this all weekend!
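
    A common way out, sketched below with the same class and method names used in the question (so this is an assumption about the surrounding code, not a drop-in fix): resolve one overlap at a time and re-test everything after each correction, because pushing the player out of the first rectangle changes, or removes, its overlap with the second.

        #include <vector>

        // Resolve at most one collision per pass, then start over, so every resolution
        // works from a freshly computed intersection. The pass cap guards against
        // degenerate cases where the player is wedged between rectangles.
        void resolveCollisions(gameObj& p, std::vector<gameObj>& testRects)
        {
            const int maxPasses = 8;
            for (int pass = 0; pass < maxPasses; ++pass) {
                bool resolvedSomething = false;
                for (unsigned i = 0; i != testRects.size(); i++) {
                    if (p.isCollidingWith(testRects[i])) {   // refreshes colliderResult
                        p.collide(testRects[i]);             // push out along the shallow axis
                        resolvedSomething = true;
                        break;                               // positions changed: re-test from scratch
                    }
                }
                if (!resolvedSomething)
                    break;                                   // no overlaps left
            }
        }

    The key difference from the original loop is the break after each collide() call: the stale intersection stored in colliderResult is never reused once the player has been moved.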

    Read the article

  • Do unused vertices in a 3D object affect performance?

    - by Gajet
    For my game I need to generate a mesh dynamically. Now I'm wondering: does it have a noticeable effect on FPS if I allocate more vertices than I'm actually using? And does it matter whether I'm using DirectX or OpenGL? Edit: the final output will be a w*h cell grid, but for technical reasons it's much easier for me to allocate (w+1)*(h+1) vertices. I'll only use w*h vertices in indexing, and I know there is some memory wasted there, but I want to know whether it also affects FPS. (Note that the mesh is only generated once each time you play the game.)

    Read the article

  • Model format for small game

    - by DeadMG
    I'm writing my own small-time game from scratch, and now I'm looking to start creating models. I've been wondering: what is the best model format to use, given that I will be writing the model loading code myself and can use whatever program generates them? Ideally, I'd look for a format that has fairly wide support among modelling programs, so I can pick the one I like most to actually do the modelling, and that is itself relatively simple to load, rather than having all of the latest features.

    Read the article

  • Is IDirectInput8::FindDevice totally broken on Windows 7?

    - by Noora
    I'm developing on Windows 7, and using DirectInput8 for my input needs. I'm tracking gamepad additions and removals (that is, GUID_DEVINTERFACE_HID devices) using the DBT_DEVICEARRIVAL and DBT_DEVICEREMOVECOMPLETE messages, which works fine. However, what I've come to find out is that no matter what I do, passing the received values from DBT_DEVICEARRIVAL to IDirectInput8's FindDevice method, it will always fail to identify the device, returning DIERR_DEVICENOTREG. DirectInput still clearly knows about the device, because I can enumerate and create it just fine. I've tried with three different gamepads, and the error persists, so it's not about that either. I also tried passing some alternative interface GUIDs for the RegisterDeviceNotification call, didn't help. So, has anyone else faced the same problem, and have you found a usable workaround? I'm afraid I'll soon have to stoop down to re-enumerating all devices when something is added or removed, but I'll first give this question one last shot here. EDIT: For the record, I've also tried pretty much every single HID API & SetupAPI function for alternative ways of figuring out the needed GUIDs, with zero success. So if you're facing the same problem as me, don't bother with that route. I'm pretty sure those GUIDs are made up by DirectInput itself somehow. Short of reverse engineering dinput8.dll, I'm truly out of ideas now.
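
    As a reference point for the re-enumeration fallback mentioned at the end, here is a hedged sketch (the diffing hooks are assumptions; only the DirectInput calls themselves are the real API): on each WM_DEVICECHANGE, ask DirectInput for the currently attached game controllers and compare their instance GUIDs with the ones already created.

        #define DIRECTINPUT_VERSION 0x0800
        #include <windows.h>
        #include <dinput.h>
        #include <vector>

        static BOOL CALLBACK collectDevices(const DIDEVICEINSTANCE* inst, void* ctx)
        {
            // Gather the instance GUID of every attached controller.
            static_cast<std::vector<GUID>*>(ctx)->push_back(inst->guidInstance);
            return DIENUM_CONTINUE;
        }

        // Call on DBT_DEVICEARRIVAL / DBT_DEVICEREMOVECOMPLETE. 'knownGuids' is the set of
        // pads already created; what to do with added/removed GUIDs is left as a hook.
        void refreshGamepads(IDirectInput8* dinput, std::vector<GUID>& knownGuids)
        {
            std::vector<GUID> current;
            dinput->EnumDevices(DI8DEVCLASS_GAMECTRL, collectDevices,
                                &current, DIEDFL_ATTACHEDONLY);

            for (const GUID& guid : current) {
                bool alreadyKnown = false;
                for (const GUID& k : knownGuids)
                    if (IsEqualGUID(k, guid)) { alreadyKnown = true; break; }
                if (!alreadyKnown) {
                    // Newly attached pad: create it here, e.g. via dinput->CreateDevice(guid, ...).
                }
            }
            // The symmetric pass (entries of knownGuids missing from 'current') detects removals.
            knownGuids = current;
        }

    It is heavier than FindDevice should have been, but it sidesteps the GUID mismatch entirely.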

    Read the article

  • Restricted pathfinding Area

    - by SubZeron
    I'm trying to create a little "XCOM: Enemy Unknown"-like game, and I'm using Aron Granberg's Pathfinding Tool (free version) to handle the "click to move" part. I want to add a little trap system where the hero gets stuck inside an area, so he can only move within this trapped area. So far everything is fine; however, when I click outside the trapped area, the hero tries to reach the destination even though the wall prevents him from reaching it. So my question is: is there any way to dynamically restrict the area in which the pathfinding system works to the trapped area? And which graph type is recommended for this situation or this kind of game (Grid Graph / Navmesh Graph / Point Graph)? Thank you. Image link for explanation: https://dl.dropbox.com/u/77993668/exemple.jpg

    Read the article

  • How can I achieve this lighting with OpenGL?

    - by Smallbro
    I'm currently trying to implement a type of "smooth" lighting in OpenGL. How can I achieve lighting that looks like this: http://dl.dropbox.com/u/1668516/concept/warp3.png ? I've attempted to use blending modes and have come very close to making it work, but it came out like this: https://pbs.twimg.com/media/A1071viCEAAlFmJ.png and I also wasn't able to change the alpha of the black background, which I want to be able to do. Could I get a few pointers in the right direction?
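
    One widely used approach that matches that kind of look (a sketch only; the FBO/texture handles and the draw helpers are assumed to exist elsewhere, and an extension loader such as GLEW or GLAD is assumed for the FBO calls): draw all the lights into an off-screen light map with additive blending, then multiply that map over the rendered scene. The clear colour of the light map is exactly the "alpha of the black background" knob: the brighter the clear colour, the less the unlit areas are darkened.

        #include <GL/glew.h>                             // or any loader providing the FBO entry points
        #include <vector>

        struct Light { float x, y, radius; };            // placeholder light description
        extern GLuint lightMapFBO, lightMapTexture;      // assumed: FBO with a colour texture attached
        extern std::vector<Light> lights;
        void drawLightSprite(const Light& light);        // assumed: soft radial-gradient quad
        void drawScene();                                // assumed
        void drawFullscreenQuad();                       // assumed

        void renderFrameWithLightMap()
        {
            // 1. Build the light map: dark ambient base plus additively blended lights.
            glBindFramebuffer(GL_FRAMEBUFFER, lightMapFBO);
            glClearColor(0.15f, 0.15f, 0.2f, 1.0f);      // ambient darkness level
            glClear(GL_COLOR_BUFFER_BIT);
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_ONE);                 // lights accumulate and overlap smoothly
            for (const Light& light : lights)
                drawLightSprite(light);
            glBindFramebuffer(GL_FRAMEBUFFER, 0);

            // 2. Draw the scene, then modulate it by the light map.
            drawScene();
            glBindTexture(GL_TEXTURE_2D, lightMapTexture);
            glBlendFunc(GL_DST_COLOR, GL_ZERO);          // framebuffer colour *= light map
            drawFullscreenQuad();
            glDisable(GL_BLEND);
        }

    Getting the gradient texture right (a smooth falloff to zero at the edge of each light sprite) is what produces the soft blended look in the reference image.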

    Read the article
