Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

Page 523/1039

  • Android Loading Screen: How do I use a stack to load elements and increment a size counter as I add them?

    - by tom_mai78101
    I have some problems figuring out what value I should put in this call: int value_needed_to_figure_out = X; ProgressBar.incrementProgressBy(value_needed_to_figure_out); I've been researching loading screens and how to use them. Some examples I've seen implement Thread.sleep() inside a Handler.post(new Runnable()) call. I understand most of that concept: the Handler updates the ProgressBar while pretending to do some heavy crunching work. So I kept looking. I read this thread: How do I load chunks of data from an asset manager during a loading screen? It said that I can use a stack of elements that need to be loaded, adding to a size counter as I push elements onto the stack. What does that mean? This is the part where I'm totally stumped. If anyone could provide some hints, I'd greatly appreciate it. Thanks in advance.
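    The idea from the linked thread is simply to queue up every asset that still has to be loaded, remember how many there were, and advance the progress bar by a fixed amount each time one is popped and loaded. The bookkeeping is language-agnostic; here is a minimal sketch in C++ (the loadOne tasks and the 0..100 progress range are illustrative assumptions, not part of the question):

        #include <functional>
        #include <stack>

        struct LoadingScreen {
            std::stack<std::function<void()>> pending; // one entry per asset to load
            int total = 0;                             // the "size counter" from the thread
            int progress = 0;                          // 0..100, mirrors the ProgressBar value

            void add(std::function<void()> task) {
                pending.push(std::move(task));
                ++total;                               // counter grows as elements are pushed
            }

            // Call once per frame (or per Handler.post) until it returns false.
            bool step() {
                if (pending.empty()) return false;
                pending.top()();                       // load one asset
                pending.pop();
                progress = 100 * (total - static_cast<int>(pending.size())) / total;
                return !pending.empty();
            }
        };

    Each call to step() then maps directly onto one ProgressBar.incrementProgressBy(100 / total) style update.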

    Read the article

  • Story and news-feed ideas for social network games

    - by arpine
    I am currently working on an educational and fun 2-in-1 game. As I am not a professional, I need advice on the story and news feed. The goal is simple: get richer. The story is about a worker who is trying to get over his/her financial problems and become rich. Throughout the game there is a news feed (every day there are a couple of fresh news items about what is going on). The news items are fresh and individual, so I need to write about 2000 of them for two years of play, maybe more. The problem is that I am not sure whether repetitive news can keep players interested. What can be done to make the news-writing process easier, but not boring from the player's point of view?

    Read the article

  • How do I get a collision callback for two specific objects using Bullet Physics?

    - by sebap123
    I have a problem implementing a collision callback in my project. I would like to detect collisions between two specific objects. I have normal collisions working, but I want one object to stop, change color, or do whatever when it collides with another. I wrote this code from the Bullet wiki: int numManifolds = dynamicsWorld->getDispatcher()->getNumManifolds(); for (int i=0;i<numManifolds;i++) { btPersistentManifold* contactManifold = dynamicsWorld->getDispatcher()->getManifoldByIndexInternal(i); btCollisionObject* obA = static_cast<btCollisionObject*>(contactManifold->getBody0()); btCollisionObject* obB = static_cast<btCollisionObject*>(contactManifold->getBody1()); int numContacts = contactManifold->getNumContacts(); for (int j=0;j<numContacts;j++) { btManifoldPoint& pt = contactManifold->getContactPoint(j); if (pt.getDistance()<0.f) { const btVector3& ptA = pt.getPositionWorldOnA(); const btVector3& ptB = pt.getPositionWorldOnB(); const btVector3& normalOnB = pt.m_normalWorldOnB; bool x = (ContactProcessedCallback)(pt,fallRigidBody,earthRigidBody); if(x) printf("collision\n"); } } } where fallRigidBody is a dynamic body (a sphere), earthRigidBody is a static body (a StaticPlaneShape), and the sphere isn't touching earthRigidBody all the time. I also have other objects colliding with the sphere, and that works fine. But the program detects a collision all the time, whether or not the objects are actually colliding. I have also added, after the rigid body declarations: earthRigidBody->setCollisionFlags(earthRigidBody->getCollisionFlags() | btCollisionObject::CF_CUSTOM_MATERIAL_CALLBACK); fallRigidBody->setCollisionFlags(fallRigidBody->getCollisionFlags() | btCollisionObject::CF_CUSTOM_MATERIAL_CALLBACK); So can someone tell me what I am doing wrong? Maybe it is something simple?
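    A likely culprit is the line bool x = (ContactProcessedCallback)(pt,fallRigidBody,earthRigidBody); — that is a cast to the ContactProcessedCallback function-pointer type rather than a call to a registered callback, so it is truthy for every contact point. A hedged alternative sketch (not the poster's code) that instead filters the manifolds by the pair of bodies of interest:

        #include <btBulletDynamicsCommon.h>

        // Sketch: returns true when bodyA and bodyB currently share at least one
        // real contact point. Pass the same dynamicsWorld, fallRigidBody and
        // earthRigidBody that appear in the question.
        bool areTouching(btDynamicsWorld* world,
                         const btCollisionObject* bodyA,
                         const btCollisionObject* bodyB)
        {
            btDispatcher* dispatcher = world->getDispatcher();
            for (int i = 0; i < dispatcher->getNumManifolds(); ++i) {
                btPersistentManifold* m = dispatcher->getManifoldByIndexInternal(i);
                const btCollisionObject* obA =
                    static_cast<const btCollisionObject*>(m->getBody0());
                const btCollisionObject* obB =
                    static_cast<const btCollisionObject*>(m->getBody1());
                // Only inspect the manifold that pairs exactly these two objects.
                if (!((obA == bodyA && obB == bodyB) || (obA == bodyB && obB == bodyA)))
                    continue;
                for (int j = 0; j < m->getNumContacts(); ++j)
                    if (m->getContactPoint(j).getDistance() < 0.f)
                        return true;   // genuinely overlapping
            }
            return false;
        }

    Usage after stepping the simulation would then be: if (areTouching(dynamicsWorld, fallRigidBody, earthRigidBody)) printf("collision\n");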

    Read the article

  • Is there an AIR native extension to use GameCenter APIs for turn-based games?

    - by Phil
    I'm planning a turn-based game using the iOS 5 Game Center (GameKit) turn-based functions. Ideally I would program the game with AIR (I'm a Flash dev), but so far I can't find any existing native extension that offers this (only basic Game Center functions), so my questions are: does anyone know if one already exists? And secondly, how complex a task would it be to create an extension that does this? Are there any pitfalls I should be aware of? ** UPDATE ** There does not seem to be a solution to the above from Adobe. For anyone who is interested, check out the Adobe Gaming SDK. It contains a Game Center ANE which, I've read, offers multiplayer but not turn-based multiplayer; at least it's a start. It comes a bit late for me, as I've already learned Obj-C!

    Read the article

  • Architectural advice - websockets javascript/php integration

    - by Ewan Vaentine
    A friend and I have started making a game. He's likely to be using impact.js for the user interaction, but we need multiplayer functionality, so some form of websockets for the connections. So we were thinking impact.js talking to socket.io and node.js. However, user accounts, ecommerce, session handling and social media integration will all be handled with CodeIgniter (PHP). My question is: is it wise to have node.js running in parallel with CodeIgniter, and is this even possible? If not, if you were to create a multiplayer online game using ecommerce to buy credits, plus user accounts, how would you go about it structurally, and what engines/frameworks would you recommend? I'm new to this site so I apologise in advance if I'm posting something inappropriate. Cheers, Ewan

    Read the article

  • Multi Threading - How to split the tasks

    - by Motig
    If I have a game engine with the basic 'game engine' components, what is the best way to 'split' the tasks with a multi-threaded approach? Assuming I have the standard components of Rendering, Physics, Scripts and Networking, and a quad-core CPU, I see two ways of multi-threading. Option A ('Vertical'): dedicate one core to each component of the engine, e.g. one core for the Rendering task, one for Physics, etc. Advantages: I do not need to worry about thread-safety within each component, and I can take advantage of special optimizations provided for single-threaded access (e.g. DirectX offers a flag that tells it you will only use single-threading). Option B ('Horizontal'): each task is split up into 1 <= n <= numCores threads and executed in parallel, with the tasks themselves run one after the other. Advantages: it allows for work-sharing, i.e. each thread can take over work still remaining while the others are still processing, and I can take advantage of libraries that are designed for multi-threading (i.e. ... DirectX). I think, in retrospect, I would pick Option B, but I wanted to hear your thoughts on the matter.
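    For reference, the 'Horizontal' option usually boils down to a parallel-for style helper that carves one task's workload into per-core chunks. A minimal sketch using standard C++ threads (the entity-update body is a placeholder, not part of the question):

        #include <algorithm>
        #include <cstddef>
        #include <functional>
        #include <thread>
        #include <vector>

        // Split `count` items across the available hardware threads and run
        // body(i) for every index. The caller blocks until all chunks finish.
        void parallelFor(std::size_t count, const std::function<void(std::size_t)>& body)
        {
            unsigned numThreads = std::max(1u, std::thread::hardware_concurrency());
            std::size_t chunk = (count + numThreads - 1) / numThreads;
            std::vector<std::thread> workers;

            for (unsigned t = 0; t < numThreads; ++t) {
                std::size_t begin = t * chunk;
                std::size_t end = std::min(count, begin + chunk);
                if (begin >= end) break;
                workers.emplace_back([begin, end, &body] {
                    for (std::size_t i = begin; i < end; ++i)
                        body(i);                 // e.g. update one entity's physics
                });
            }
            for (std::thread& w : workers)
                w.join();
        }

        // Usage: parallelFor(entities.size(), [&](std::size_t i) { entities[i].update(dt); });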

    Read the article

  • Create Adventure Game Scene/Room/Backdrop from Real Photo

    - by Lyuben
    Is there suitable software or a good tutorial for creating 2D rooms/scenery for adventure games from real photos? Is it possible to achieve good results using photos, or will the hand-drawn style always be the best choice? Thank you! --- EDIT --- I want to clarify that I'm particularly interested in the art creation process, not in the environment in which to build games. I'm writing the game in Java for Android, but I don't think it matters. Also, I'm not trying to decide whether the game will have photorealistic rooms or not - I want to achieve 2D pixelated, old-school style background scenes, and I wonder if these can be made from photos, because I cannot draw them myself. For example, can I shoot a scene with my camera and then make it look something like the image in the following link: PIXEL ART FOREST I know that I cannot get the same quality as fully hand-drawn pixel art, but I'm looking for some decent technique/tutorial/software to get somewhat close.

    Read the article

  • FBO rendering different results between Galaxy S2 and S3

    - by BruceJones
    I'm working on a Pong game and have recently set up FBO rendering so that I can apply some post-processing shaders. It proceeds like so: bind texture A to the framebuffer, draw the balls, bind texture B to the framebuffer, draw texture A using the fade shader on a fullscreen quad, bind the screen as the framebuffer, then draw texture B using a normal textured-quad shader. Neither texture A nor B is cleared at any point; this way the balls leave trails on screen. See below for the fade shader. Fade Shader: private final String fragmentShaderCode = "precision highp float;" + "uniform sampler2D u_Texture;" + "varying vec2 v_TexCoordinate;" + "vec4 color;" + "void main(void)" + "{" + " color = texture2D(u_Texture, v_TexCoordinate);" + " color.a *= 0.8;" + " gl_FragColor = color;" + "}"; This works fine on the Samsung Galaxy S3/Note 2, but doesn't work on the Galaxy S2 or Note 1, where it causes a strange effect. See pictures: Galaxy S3/Note2 Galaxy S3/Note2 Galaxy S2/Note Galaxy S2/Note Can anyone explain the difference?

    Read the article

  • How do I interpolate air drag with a variable time step?

    - by Valentin Krummenacher
    So I have a little game which works in small steps, but those steps vary in time: for example, I sometimes have 10 steps/second and then 20 steps/second. This changes automatically depending on how many steps the user's computer can take. To avoid inaccurate positioning of the game's player object I use y=v0*dt+g*dt^2/2 to determine my object's y-position, where dt is the time since the last step, v0 is the velocity of my object at the beginning of the step and g is the gravity. To calculate the velocity at the end of a step I use v=v0+g*dt, which also gives me correct results, independent of whether I use 2 steps with a dt of, say, 20ms or one step with a dt of 40ms. Now I would like to introduce air drag. For simplicity's sake I use a=k*v^2, where a is the air drag's acceleration (I am aware that it would usually result in a force, but since I assume 1kg for my object's mass the force is the same as the resulting acceleration), k is a constant (in this case I'm using 0.001) and v is the speed. In an infinitely small time interval, a is k multiplied by the velocity in that interval squared. The problem is that v in the next time interval depends on the drag of the last, which again depends on the v of the last interval, and so on... In other words: if I use a=k*v^2 I get different results for my position/velocity when I use 2 steps of 20ms than when I use one step of 40ms. I used to have this problem for my position too, but adding +g*dt^2/2 to the formula for my position fixed it, since it takes into account that the position depends on the velocity, which changes slightly in every infinitely small time interval. Does something like that exist for air drag too? And no, I don't mean anything like 'Adding air drag to a golf ball trajectory equation' or similar, since that kind of method only gives correct results when all my steps are the same. (I hope you can understand my intermediate English; it's not my main language, so I apologise for any silly mistakes in my question.)
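    For the drag-only case (ignoring gravity for the moment), the step-size-independent update the question asks for does exist: the ODE can be integrated exactly, in the same spirit as the y=v0*dt+g*dt^2/2 formula above. A sketch, with k and v0 as defined in the question:

        \frac{dv}{dt} = -k\,v^2
        \quad\Rightarrow\quad
        v(t) = \frac{v_0}{1 + k\,v_0\,t},
        \qquad
        y(t) = y_0 + \frac{1}{k}\,\ln\!\left(1 + k\,v_0\,t\right)

    Evaluating these with t = dt each step gives identical results for one 40ms step or two 20ms steps. Combining drag and gravity in a single closed form is also possible (it leads to a tanh-based expression for a falling body); a common approximation is to apply the gravity update and the drag update in separate sub-steps.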

    Read the article

  • Questions before I revamp my rendering engine to use shaders (GLSL)

    - by stephelton
    I've written a fairly robust rendering engine using OpenGL ES 1.1 (fixed-function.) I've been looking into revamping the engine to use OpenGL ES 2.0, which necessitates that I use shaders. I've been absorbing information all day long and still have some questions. Firstly, lighting. The fixed-function pipeline is guaranteed to have at least 8 lights available. My current engine finds lights that are "close" to the primitives being drawn and enables them; I don't know how many lights are going to be enabled until I draw a given model. Nothing is dynamically allocated in GLSL, so I have to define in a shader some number of lights to be used, right? So if I want to stick with 8, should I write my general purpose shader to have 8 lights and then use uniforms to tell it how many / which lights to use? Which brings me to another question: should I be concerned with the amount of data I'm allocating in a shader? Recent video cards have hundreds of "stream processors." If I've got a fragment shader being used on some number of fragments in a given triangle, I assume they must each have their own stack to work on. Are read-only variables copied here, or read when needed? My initial goal is to rework my code so that it is virtually identical to the current implementation. What I have in mind is to create my own matrix stack so that I can implement something along the lines of push/popMatrix and apply all my translations, rotations, and scales to this matrix, then provide the matrix to the vertex shader so that it can make very quick vertex translations. Is this approach sound? Edit: My original intention was to ask if there was a tutorial that would explain the bare minimum necessary to jump from fixed-function to using shaders. Thanks!

    Read the article

  • Textures selectively not applying in Unity

    - by user46790
    On certain imported objects (FBX) in Unity, upon applying a material, only the base colour of the material is applied, with none of the tiled texture showing. This isn't universal; on a test model only some submeshes didn't show the texture, while others did. I have tried every combination of import/calculate normals/tangents to no avail. FYI, I'm not exactly experienced with the software or gamedev in general; this is to make a small static scene with 3-4 objects max. One model tested was created in 3DSMax, the other in Blender. I've had this happen on every export from Blender, but only on some submeshes of the 3DSMax model (sourced from the internet to test the problem).

    Read the article

  • Constrained A* problem

    - by Ragekit
    I've got a little problem with an A* algorithm that I need to Constrained a little bit. Basically : I use an A* to find the shortest path between 2 randomly placed room in 3D space, and then build a corridor between them. The problem I found is that sometimes it makes chimney like corridors that are not ideal, so I constrict the A* so that if the last movement was up or down, you go sideways. Everything is fine, but in some corner cases, it fails to find a path (when there is obviously one). Like here between the blue and red dot : (i'm in unity btw, but i don't think it matters) Here is the code of the actual A* (a bit long, and some redundency) while(current != goal) { //add stair up / stair down foreach(Node<GridUnit> test in current.Neighbors) { if(!test.Data.empty && test != goal) continue; //bug at arrival; if(test == goal && penul !=null) { Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center; if(!Mathf.Approximately(currentDiff.y,0)) { //wanna drop on the last if(!coplanar(test.Data.bounds.center,current.Data.bounds.center,current.Data.parentUnit.bounds.center,to.Data.bounds.center)) { continue; } else { if(Mathf.Approximately(to.Data.bounds.center.x, current.Data.parentUnit.bounds.center.x) && Mathf.Approximately(to.Data.bounds.center.z, current.Data.parentUnit.bounds.center.z)) { continue; } } } } if(current.Data.parentUnit != null) { Vector3 previousDiff = current.Data.parentUnit.bounds.center - current.Data.bounds.center; Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center; if(!Mathf.Approximately(previousDiff.y,0)) { if(!Mathf.Approximately(currentDiff.y,0)) { //you wanna drop now : continue; } if(current.Data.parentUnit.parentUnit != null) { if(!coplanar(test.Data.bounds.center,current.Data.bounds.center,current.Data.parentUnit.bounds.center,current.Data.parentUnit.parentUnit.bounds.center)) { continue; }else { if(Mathf.Approximately(test.Data.bounds.center.x, current.Data.parentUnit.parentUnit.bounds.center.x) && Mathf.Approximately(test.Data.bounds.center.z, current.Data.parentUnit.parentUnit.bounds.center.z)) { continue; } } } } } g = current.Data.g + HEURISTIC(current.Data,test.Data); h = HEURISTIC(test.Data,goal.Data); f = g + h; if(open.Contains(test) || closed.Contains(test)) { if(test.Data.f > f) { //found a shorter path going passing through that point test.Data.f = f; test.Data.g = g; test.Data.h = h; test.Data.parentUnit = current.Data; } } else { //jamais rencontré test.Data.f = f; test.Data.h = h; test.Data.g = g; test.Data.parentUnit = current.Data; open.Add(test); } } closed.Add (current); if(open.Count == 0) { Debug.Log("nothingfound"); //nothing more to test no path found, stay to from; List<GridUnit> r = new List<GridUnit>(); r.Add(from.Data); return r; } //sort open from small to biggest travel cost open.Sort(delegate(Node<GridUnit> x, Node<GridUnit> y) { return (int)(x.Data.f-y.Data.f); }); //get the smallest travel cost node; Node<GridUnit> smallest = open[0]; current = smallest; open.RemoveAt(0); } //build the path going backward; List<GridUnit> ret = new List<GridUnit>(); if(penul != null) { ret.Insert(0,to.Data); } GridUnit cur = goal.Data; ret.Insert(0,cur); do{ cur = cur.parentUnit; ret.Insert(0,cur); } while(cur != from.Data); return ret; You see at the start of the foreach i constrict the A* like i said. If you have any insight it would be cool. Thanks

    Read the article

  • How can I change this isometric engine to make it so that you could distinguish between blocks that are on different planes?

    - by l5p4ngl312
    I have been working on an isometric Minecraft-esque game engine for a strategy game I plan on making. As you can see, it really needs some sort of shading. It is difficult to distinguish between separate elevations when the camera is facing away from the slope, because everything is the same shade. So my question is: can I shade just a specific section of a sprite? All of those blocks are just sprites, so if I shaded the entire image, it would shade the whole block. I am using LWJGL. Are there any other approaches to take? Here's a link to a screenshot from the engine: http://i44.tinypic.com/qxqlix.jpg

    Read the article

  • Collision with half of a semi-circle

    - by heitortsergent
    I am trying to port a game I made using Flash/AS3 to Windows Phone using C#/XNA 4.0. You can see it here: http://goo.gl/gzFiE In the Flash version I used pixel-perfect collision between meteors (each is a rectangle, usually rotated) that spawn outside the screen and move towards the center, and a shield in the center of the screen (which is half of a semi-circle, also rotated by the player); on collision, the meteor bounced back in the opposite direction it came from. My goal now is to make the meteors bounce at different angles depending on where they hit the shield (much like Pong, where hitting the borders changes the ball's angle). These are the 3 options I thought of: pixel-perfect collision (Microsoft has a sample), but then I wouldn't know how to change the meteor's angle after the collision; 3 BoundingCircles to represent the half semi-circle shield, but then I would have to somehow move them as I rotate the shield; or Farseer Physics, where I could make a shape composed of 3 lines and use that as the collision object for the shield. Is there any other way besides those? Which would be the best way to do it (it's aimed at mobile devices, so pixel-perfect is probably not a good choice)? Most of the time there's an easier/better way than what we first think of...
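    Whichever collision test is used, the Pong-style angle change itself is usually just a reflection of the meteor's velocity about the surface normal at the contact point; for a circular-arc shield that normal is simply the direction from the shield's center to the hit point. A hedged sketch of that piece in isolation (plain C++ vector math, names are placeholders):

        #include <cmath>

        struct Vec2 { float x, y; };

        static Vec2 sub(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
        static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        // Reflect the meteor's velocity about the arc's outward normal at the
        // contact point. shieldCenter is the center of the semi-circle the shield
        // belongs to; hitPoint is where the meteor touched it.
        Vec2 bounce(Vec2 velocity, Vec2 shieldCenter, Vec2 hitPoint)
        {
            Vec2 n = sub(hitPoint, shieldCenter);
            float len = std::sqrt(dot(n, n));
            if (len == 0.0f) return {-velocity.x, -velocity.y};  // degenerate case
            n = {n.x / len, n.y / len};                          // unit normal
            float d = dot(velocity, n);
            return {velocity.x - 2.0f * d * n.x,                 // v' = v - 2(v.n)n
                    velocity.y - 2.0f * d * n.y};
        }

    Because the normal varies along the arc, meteors hitting near the shield's edges automatically bounce off at different angles than those hitting the middle.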

    Read the article

  • Converting a Windows Store App to Android

    - by pm_2
    I cross-posted this from SO. I'm very new to Xamarin. I have a few published Windows Store apps and want to convert them to Android, and I'm attempting to use Xamarin for this (just the free version). Here's where I am so far: I am trying two apps - one was built with MonoGame and one is just built on the WinRT framework. I have managed to get them both into Xamarin Studio, basically by hacking the csproj files. I'm getting build errors because of missing references. There do appear to be some equivalent Mono / .NET 4 libraries, but things like Microsoft.Xna.Framework seem to be missing. So, my question is: am I going about this the right way and, if so, am I missing a step ("convert dependencies" or something)? If I'm not going about this the right way, how should I be doing it? (I found very few online resources on this subject.)

    Read the article

  • Re-sizing the form without scaling the GUI

    - by Bmoore
    I am writing a turn-based strategy game in C#. My GUI implementation consists of a class that extends Form containing a class that extends Panel. When I render the GUI, I draw in the panel's Paint handler. I am trying to figure out the best way to handle form resize events. I know I want a minimum window size, but I would prefer not to have a maximum or a fixed size. Ideally the GUI would reveal more/less of the map as the user changes the window size. I would like to avoid scaling the graphics if at all possible. What is the best way to handle resize events?

    Read the article

  • How to enable jBullet DebugMode

    - by Kenneth Bray
    I would like to render the physics world of jBullet to debug some issues in my game, and I am not finding too much on enabling the debugDraw method of jBullet. Do I need to write my own debugDraw method, or is there an easier way to draw the physics models to the screen? If there is already a built in method I would prefer to use that, otherwise I guess I will start making my own functions to handle this.

    Read the article

  • XNA 4: RenderTarget2D textures getting transparent on fullscreen

    - by Shashwat
    I'm generating a Texture2D object using RenderTarget2D as in the following code public static Texture2D GetTextTexture(string text, Vector2 position, SpriteFont font, Color foreColor, Color backColor, Texture2D background=null) { int width = (int)font.MeasureString(text).X; int height = (int)font.MeasureString(text).Y; GraphicsDevice device = Settings.game.GraphicsDevice; SpriteBatch spriteBatch = Settings.game.spriteBatch; RenderTarget2D renderTarget = new RenderTarget2D(device, width, height, false, SurfaceFormat.Color, DepthFormat.Depth24Stencil8, device.PresentationParameters.MultiSampleCount, RenderTargetUsage.DiscardContents); device.SetRenderTarget(renderTarget); device.Clear(backColor); spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque); if (background != null) spriteBatch.Draw(background, new Rectangle(0, 0, 70, 70), Color.White); spriteBatch.End(); spriteBatch.Begin(); spriteBatch.DrawString(font, text, position, foreColor, 0, new Vector2(0), 0.8f, SpriteEffects.None, 0); spriteBatch.End(); device.SetRenderTarget(null); ResetGraphicsDeviceSettings(); return (Texture2D)renderTarget; } It's working all fine. But when I ToggleFullScreen() (and vice-versa), the previous textures are getting transparent. However, the new textures after that are being generated correctly. What can be the reason for this?

    Read the article

  • FPS camera specification

    - by user1095108
    I remember I once composed an FPS viewing transformation as a composition of 3 rotations, each with an angle as a parameter. The first angle specified the left/right rotation around the y-axis, the second the up/down rotation around the x-axis, and the third the rotation around the z-axis. The viewing transformation was therefore specified by 3 angles. Naturally, this transformation suffered from gimbal lock, depending on the order in which the rotations were performed. What should I look at to derive my viewing transformation without gimbal lock? I already know the "lookAt" method, but I consider it cumbersome. EDIT: My first guess is to do the first 2 rotations to get a viewing direction and then apply the axis-angle rotation about that axis.
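    One common answer is to drop the third angle entirely and build the orientation from yaw and pitch only, composed in a fixed order (yaw about world Y, then pitch about the camera's local X). With just two angles and a clamped pitch there is no gimbal lock to run into in practice. A sketch of the idea using GLM quaternions (the library choice and function names are an assumption for illustration, not from the question):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <glm/gtc/quaternion.hpp>

        // Build a view matrix from position + yaw/pitch (radians). Yaw rotates
        // about the world Y axis, pitch about the camera's local X axis; clamp
        // pitch to roughly (-pi/2, pi/2) in the input-handling code.
        glm::mat4 fpsViewMatrix(const glm::vec3& eye, float yaw, float pitch)
        {
            glm::quat qYaw   = glm::angleAxis(yaw,   glm::vec3(0.0f, 1.0f, 0.0f));
            glm::quat qPitch = glm::angleAxis(pitch, glm::vec3(1.0f, 0.0f, 0.0f));
            glm::quat orientation = qYaw * qPitch;   // world yaw first, then local pitch

            // The view matrix is the inverse of the camera's world transform:
            // inverse rotation (conjugate of a unit quaternion), then inverse translation.
            glm::mat4 rotation = glm::mat4_cast(glm::conjugate(orientation));
            return rotation * glm::translate(glm::mat4(1.0f), -eye);
        }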

    Read the article

  • DirectX 9.0c and C++ GUI

    - by SullY
    Well, I'm trying to code a GUI for my engine, but I've got some problems. I know how to make a UI overlay, but buttons are still black magic to me. Everything I tried was too complicated once it got big. For example, I tried checking whether the mouse position matches a pixel that is showing the button, but with bigger areas that gets too complicated. Now I'm searching for a tutorial on how to implement your own GUI; I'm really confused about it. I hope you have, or know of, some good tutorials. By the way, I took a look at the DXUTSample, but it's too big to get an overview of.
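    For what it's worth, GUI buttons are usually not tested pixel by pixel: each button stores its screen-space rectangle, and a click is resolved with a simple point-in-rectangle test against every button. A minimal sketch of that idea (plain C++, independent of DirectX; the names are illustrative only):

        #include <functional>
        #include <string>
        #include <vector>

        struct Button {
            int x, y, width, height;            // screen-space rectangle of the button
            std::string label;
            std::function<void()> onClick;      // action to run when the button is hit

            bool contains(int px, int py) const {
                return px >= x && px < x + width && py >= y && py < y + height;
            }
        };

        // Call this from the engine's mouse-up handler with the cursor position.
        void handleClick(std::vector<Button>& buttons, int mouseX, int mouseY)
        {
            for (Button& b : buttons)
                if (b.contains(mouseX, mouseY) && b.onClick)
                    b.onClick();                // overlap/z-order handling omitted for brevity
        }

    The rectangle can be the same one used to draw the button's quad in the overlay, so hit testing stays in sync with rendering regardless of how big the button is.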

    Read the article

  • Bullet Physics implementing custom MotionState class

    - by Arosboro
    I'm trying to make my engine's camera a kinematic rigid body that can collide into other rigid bodies. I've overridden the btMotionState class and implemented setKinematicPos which updates the motion state's tranform. I use the overridden class when creating my kinematic body, but the collision detection fails. I'm doing this for fun trying to add collision detection and physics to Sean O' Neil's Procedural Universe I referred to the bullet wiki on MotionStates for my CPhysicsMotionState class. If it helps I can add the code for the Planetary rigid bodies, but I didn't want to clutter the post. Here is my motion state class: class CPhysicsMotionState: public btMotionState { protected: // This is the transform with position and rotation of the camera CSRTTransform* m_srtTransform; btTransform m_btPos1; public: CPhysicsMotionState(const btTransform &initialpos, CSRTTransform* srtTransform) { m_srtTransform = srtTransform; m_btPos1 = initialpos; } virtual ~CPhysicsMotionState() { // TODO Auto-generated destructor stub } virtual void getWorldTransform(btTransform &worldTrans) const { worldTrans = m_btPos1; } void setKinematicPos(btQuaternion &rot, btVector3 &pos) { m_btPos1.setRotation(rot); m_btPos1.setOrigin(pos); } virtual void setWorldTransform(const btTransform &worldTrans) { btQuaternion rot = worldTrans.getRotation(); btVector3 pos = worldTrans.getOrigin(); m_srtTransform->m_qRotate = CQuaternion(rot.x(), rot.y(), rot.z(), rot.w()); m_srtTransform->SetPosition(CVector(pos.x(), pos.y(), pos.z())); m_btPos1 = worldTrans; } }; I add a rigid body for the camera: // Create rigid body for camera btCollisionShape* cameraShape = new btSphereShape(btScalar(5.0f)); btTransform startTransform; startTransform.setIdentity(); // forgot to add this line CVector vCamera = m_srtCamera.GetPosition(); startTransform.setOrigin(btVector3(vCamera.x, vCamera.y, vCamera.z)); m_msCamera = new CPhysicsMotionState(startTransform, &m_srtCamera); btScalar tMass(80.7f); bool isDynamic = (tMass != 0.f); btVector3 localInertia(0,0,0); if (isDynamic) cameraShape->calculateLocalInertia(tMass,localInertia); btRigidBody::btRigidBodyConstructionInfo rbInfo(tMass, m_msCamera, cameraShape, localInertia); m_rigidBody = new btRigidBody(rbInfo); m_rigidBody->setCollisionFlags(m_rigidBody->getCollisionFlags() | btCollisionObject::CF_KINEMATIC_OBJECT); m_rigidBody->setActivationState(DISABLE_DEACTIVATION); This is the code in Update() that runs each frame: CSRTTransform srtCamera = CCameraTask::GetPtr()->GetCamera(); Quaternion qRotate = srtCamera.m_qRotate; btQuaternion rot = btQuaternion(qRotate.x, qRotate.y, qRotate.z, qRotate.w); CVector vCamera = CCameraTask::GetPtr()->GetPosition(); btVector3 pos = btVector3(vCamera.x, vCamera.y, vCamera.z); CPhysicsMotionState* cameraMotionState = CCameraTask::GetPtr()->GetMotionState(); cameraMotionState->setKinematicPos(rot, pos);

    Read the article

  • How can I imitate interaction and movement in Diablo II?

    - by user422318
    I'm prototyping a simple browser-based game. It's played from a top-down perspective on a 2D canvas. You left-click on a point on the map, and your character begins walking to it. If you click on a different point on the map, your character begins walking to the new point instead. It's similar to Diablo II: http://www.youtube.com/watch?v=EvDKt-To6K0&feature=related How can I best imitate this movement system for a player? My ideas so far: track the current coords and the target coords; if the target is exactly up, left, right, or down, increment the coordinates in that direction until you get there; otherwise the target lies in a quadrant, and to make the movement look natural the character will have to move diagonally. For example, if the target is to the northeast, then on each game frame alternate incrementing the current coordinates in the north and then the east direction.
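    A common alternative to alternating axis increments is to move along the normalized direction vector toward the clicked target each frame, which handles any angle (not just 45 degrees) and stops cleanly at the destination. A small sketch of that approach (written in C++ here although the question targets a browser canvas; the speed constant is illustrative):

        #include <cmath>

        struct Point { float x, y; };

        // Advance `pos` toward `target` by at most speed * dt units this frame.
        // Returns true once the target has been reached.
        bool moveToward(Point& pos, Point target, float speed, float dt)
        {
            float dx = target.x - pos.x;
            float dy = target.y - pos.y;
            float dist = std::sqrt(dx * dx + dy * dy);
            float step = speed * dt;

            if (dist <= step) {          // close enough: snap to the target and stop
                pos = target;
                return true;
            }
            pos.x += dx / dist * step;   // move along the unit direction vector
            pos.y += dy / dist * step;
            return false;
        }

        // Each frame: moveToward(player, clickedPoint, 120.0f /* units per second */, dt);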

    Read the article

  • What's the best way to add some particle or laser effects to an already animated character?

    - by Scott
    I just purchased some rigged and animated robot characters from 3drt for a game I'm making in Unity. I would like to be able to add some weapon effects to the characters. For example, I would like the robots to be able to shoot lasers out of their hands at enemies. I have no idea where to even start with this task, as I'm more of a programmer than a graphics guy. Can some experienced developers/designers please point me in a good direction? Thanks. Note: as of right now I have Maya and Blender installed on my computer.

    Read the article

  • Record and replay DirectInput events

    - by cloudraven
    I am trying to build a record-and-replay system for a couple of games. I was wondering if I can make a general replay engine using DirectInput rather than doing a specific implementation for each game. Recording DirectInput events doesn't seem to be much of a problem, but I don't know if there is a way to play them back. My question is: is there a way to feed DirectInput events from a log and make DirectInput believe that they came from the mouse/joystick/keyboard? I assume it is unlikely, but if there is a way, I would be interested in learning about it.

    Read the article

  • Marching squares: Finding multiple contours within one source field?

    - by TravisG
    Principally, this is a follow-up question to a problem from a few weeks ago, though it is about the algorithm in general, without application to my actual problem. The algorithm basically searches through all lines in the picture, starting from the top left, until it finds a pixel that is a border. In pseudo-C++: int start = 0; for(int i=0; i<amount_of_pixels; ++i) { if(pixels[i] == border) { start = i; break; } } When it finds one, it starts the marching squares algorithm and traces the contour of whatever object the pixel belongs to. Let's say I have something like this (where everything except the color white is a border), and I have found the contour points of the first blob. For the general algorithm it's over: it found a contour and has done its job. How can I move on to the other two blobs to find their contours as well?
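    The usual extension is to keep a mask of pixels that already belong to a traced contour: continue the left-to-right, top-to-bottom scan, and only start marching squares when the border pixel found is not yet marked. A rough sketch of that outer loop (the traceContour call stands in for the existing marching-squares step and its signature is assumed for illustration):

        #include <cstddef>
        #include <vector>

        // Hypothetical signature for the existing marching-squares trace:
        // returns the indices of all pixels on the contour started at `start`.
        std::vector<std::size_t> traceContour(const std::vector<int>& pixels,
                                              int width, std::size_t start);

        std::vector<std::vector<std::size_t>> findAllContours(const std::vector<int>& pixels,
                                                               int width, int border)
        {
            std::vector<bool> visited(pixels.size(), false);
            std::vector<std::vector<std::size_t>> contours;

            for (std::size_t i = 0; i < pixels.size(); ++i) {
                if (pixels[i] != border || visited[i])
                    continue;                              // not a border, or already traced
                std::vector<std::size_t> contour = traceContour(pixels, width, i);
                for (std::size_t p : contour)
                    visited[p] = true;                     // never start a trace here again
                contours.push_back(std::move(contour));
            }
            return contours;
        }

    If the blobs are filled (every non-white pixel counts as border, as in the example), it is also worth marking or skipping pixels that fall inside an already-found contour, for instance by flood-filling the component into the visited mask, so the same blob is not traced twice.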

    Read the article
