Search Results

Search found 37616 results on 1505 pages for 'model driven development'.

  • Easiest Way To Implement "Slow Motion" and variable game speed in XNA?

    - by TerryB
    I have an XNA 4.0 game that I want to be able to switch into slow motion and back again to full speed every now and then. So if you kill an enemy, the game switches into slow motion as it explodes and then goes back to normal. What is the easiest way to do this in XNA 4.0 without having to alter all my existing code that relies on GameTime? I have some code that relies on the TotalGameTime, which will be wrong unless I get XNA to slow down. Is there any way to avoid refactoring that code? Thanks!
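
    One way to approach it (a sketch of my own, not from the question; ScaledClock and its members are invented names) is to stop passing GameTime to gameplay code directly and instead drive everything from a wrapper clock whose elapsed time is multiplied by a scale factor:

        // Minimal sketch, assuming XNA 4.0. TimeScale = 1 is normal speed,
        // 0.25f is slow motion; TotalSeconds stands in for TotalGameTime.
        public class ScaledClock
        {
            public float TimeScale = 1.0f;
            public float TotalSeconds { get; private set; }
            public float ElapsedSeconds { get; private set; }

            public void Update(Microsoft.Xna.Framework.GameTime gameTime)
            {
                ElapsedSeconds = (float)gameTime.ElapsedGameTime.TotalSeconds * TimeScale;
                TotalSeconds += ElapsedSeconds;
            }
        }

    Code that currently reads gameTime.TotalGameTime would have to read the wrapper's accumulated time instead, so some refactoring of that code is hard to avoid with this approach.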

  • Designing generic render/graphics component in C++?

    - by s73v3r
    I'm trying to learn more about component-entity systems, so I decided to write a Tetris clone. I'm using the "style" of component-entity system where the Entity is just a bag of Components, the Components are just data, a Node is a set of Components needed to accomplish something, and a System is a set of methods that operates on a Node. All of my components inherit from a basic IComponent interface. I'm trying to figure out how to design the Render/Graphics/Drawable components. Originally, I was going to use SFML, and everything was going to be good. However, as this is an experimental system, I got the idea of being able to change out the render library at will. I thought that since the rendering would be fairly componentized, this should be doable. However, I'm having problems figuring out how I would design a common interface for the different types of render components. Should I be using C++ template types? It seems that having the RenderComponent somehow return its own mesh/sprite/whatever to the RenderSystem would be the simplest, but it would be difficult to generalize. On the other hand, letting the RenderComponent just hold data about what it would render would make it hard to re-use this component for different renderable objects (background, falling piece, field of already-fallen blocks, etc.). I realize this is fairly over-engineered for a regular Tetris clone, but I'm trying to learn about component-entity systems and making interchangeable components. It's just that rendering seems to be the hardest part to split out for me.
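
    One possible shape (sketched in C# only to keep the example compact; the question's context is C++, and every type name below is invented, not part of any library): keep the component as pure data holding a backend-agnostic handle, and let the RenderSystem own the mapping from that handle to whatever SFML (or another library) actually draws.

        // Illustrative sketch only -- the RenderComponent stores what/where to
        // draw, nothing backend-specific; the RenderSystem resolves the handle.
        public interface IComponent { }

        public struct RenderHandle
        {
            public int AtlasId;   // which texture/mesh bank
            public int FrameId;   // which entry in that bank
        }

        public class RenderComponent : IComponent
        {
            public RenderHandle Handle;   // what to draw
            public float X, Y, Rotation;  // where to draw it
            public int Layer;             // draw order
        }

    Swapping the render library then only means rewriting the RenderSystem's handle-to-drawable lookup, and the same RenderComponent can describe the background, the falling piece, or the settled blocks.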

  • Game Patching Mac/PC

    - by Centurion Games
    Just wondering what types of solutions are available to handle patching of PC/Mac games that don't have any sort of auto-updater built into them. On Windows, do you just spin off some sort of new InstallShield package for the game that includes the updated files, hope you can read a valid registry key to point to the right directory, and overwrite files? If so, how does that translate over to Mac, where the game is normally just distributed as a straight-up .app file? Is there a better approach than the above for an already-released product? (Assuming direct sales, and not through a marketplace that features auto-updating like Steam.) Are there any off-the-shelf auto-updater libraries that could easily be integrated with a C/C++ code base even after a game has shipped, to make this a lot simpler, and that are cross-platform? Also, how do auto-updaters work with new OSs that want applications and files digitally signed?

  • Implementing invisible bones

    - by DeadMG
    I suddenly have the feeling that I have absolutely no idea how to implement invisible objects/bones. Right now, I use hardware instancing to store the world matrix of every bone in a vertex buffer, and then send them all to the pipeline. But frustum culling, or having bones set to invisible by my simulation for other reasons, means that some of them will be invisible at any given time. Does this mean I effectively need to re-fill the buffer from scratch every frame with only the visible units' matrices? This seems to me like it would involve a lot of wasted bandwidth.
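
    For what it's worth, a common pattern (a sketch under my own assumptions, not from the question; bones, visibleScratch and instanceBuffer are invented names) is to keep the world matrices on the CPU, copy just the visible ones into a scratch array each frame, and upload that prefix into a DynamicVertexBuffer with Discard, so only the bytes actually drawn are sent:

        // Hedged sketch, assuming XNA 4.0 hardware instancing.
        int visibleCount = 0;
        foreach (Bone bone in bones)
            if (bone.Visible)
                visibleScratch[visibleCount++] = bone.World;

        if (visibleCount > 0)
        {
            // Discard orphans the old buffer contents and avoids a GPU stall.
            instanceBuffer.SetData(visibleScratch, 0, visibleCount,
                                   SetDataOptions.Discard);
            // ...then issue the instanced draw with visibleCount instances.
        }

    The per-frame upload is bounded by the number of visible instances rather than the total, which in practice tends to be cheap compared with drawing them.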

  • Does anybody know of any resources to achieve this particular "2.5D" isometric engine effect?

    - by Craig Whitley
    I understand this is a little vague, but I was hoping somebody might be able to describe a high-level workflow, or link to a resource, for achieving a specific isometric "2.5D" tile engine effect. I fell in love with this engine: http://www.youtube.com/watch?v=-Q6ISVaM5Ww - especially the lighting and the shaders! He has a brief description of how he achieved what he did, but I could really use a brief outline of where you would start, what you would read up on and learn, and the logical order to implement these things. A few specific questions: 1) Is there a heightmap on the ground texture that lets the light reflect more brightly on certain parts of it? 2) "..using a special material which calculates the world-space normal vectors of every pixel.." - is this some "magic" special material he has created himself, or can you hazard a guess at what he means? 3) With relation to the above quote - what does he mean by 'world-space normal vectors of every pixel'? 4) I'm guessing I'm being a little optimistic when I ask if there's any 'all-in-one' tutorial out there? :)

  • Create a thread in the XNA Update method to find a path?

    - by Dan
    I am trying to create a separate thread for my enemy's A* pathfinder, which will give me a list of points to get to the player. I have placed the thread in the update method of my enemy. However, this seems to cause jittering in the game every time the thread is called. I have tried calling just the method and this works fine. Is there any way I can sort this out so that I can have the pathfinder on its own thread? Do I need to remove the thread start from the update and start it in the constructor? Is there any way this can work? Here is the code at the moment:

        bool running = false;
        bool threadstarted;
        System.Threading.Thread thread;

        public void update()
        {
            if (running == false && threadstarted == false)
            {
                thread = new System.Threading.Thread(PathThread);
                //thread.Priority = System.Threading.ThreadPriority.Lowest;
                thread.IsBackground = true;
                thread.Start(startandendobj);
                //PathThread(startandendobj);
                threadstarted = true;
            }
        }

        public void PathThread(object Startandend)
        {
            object[] Startandendarray = (object[])Startandend;
            Point startpoint = (Point)Startandendarray[0];
            Point endpoint = (Point)Startandendarray[1];
            bool runnable = true;

            // Path find from 255, 255 to 0,0 on the map
            foreach (Tile tile in Map)
            {
                if (tile.Color == Color.Red)
                {
                    if (tile.Position.Contains(endpoint))
                    {
                        runnable = false;
                    }
                }
            }

            if (runnable == true)
            {
                running = true;
                Pathfinder p = new Pathfinder(Map);
                pathway = p.FindPath(startpoint, endpoint);
                running = false;
                threadstarted = false;
            }
        }
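
    As an aside (my own sketch, not the poster's code): if the hitch comes from constructing a new Thread object for every request, one lighter-weight option is to hand the same PathThread method to the thread pool, which reuses existing worker threads; the method's object parameter already matches the WaitCallback signature:

        // Hedged sketch -- same flags and PathThread as above, only the
        // thread creation changes.
        if (running == false && threadstarted == false)
        {
            threadstarted = true;
            System.Threading.ThreadPool.QueueUserWorkItem(PathThread, startandendobj);
        }

    Whether that removes the jitter depends on where the cost actually is; the pathfinding work itself runs off the update thread either way.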

  • Get analogue stick XY position using JInput in LWJGL

    - by oIrC
    I want to capture the movement of the analogue stick of the gamepad. Is there any equivalent in lwjgl.input.Controllers to this:

        public void mouseMoved(MouseEvent mouseEvent) {
            mouseEvent.getX(); // returns the X coordinate of the cursor inside a component
            mouseEvent.getY(); // returns the Y coordinate of the cursor inside a component
        }

    I found controller.getAxisValue(), but it doesn't work like the function above. Any help? Thanks.

  • Implementing movement on a grid

    - by Dvole
    I have a simple snake game in which I have other NPC snakes on the field. How do I calculate the movement of those other snakes so that they do not hit walls or each other? So far I have it like this: I check the current coordinates, and when there is a wall nearby I change direction to some other one. This way the snakes never collide with the walls. But how do I keep them from colliding with other snakes? I figured I could probe the direction I'm heading and change direction if there is anything there too, but there are situations where this won't work - for example, if another snake blocks off all exits later.
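
    A minimal sketch of that look-ahead test (my own illustration, not from the question; Snake, Segments, snakes and the grid bounds are assumed names), written so the same check covers walls and every snake body segment:

        // Returns true if an NPC snake may move its head into 'cell'.
        bool IsCellFree(Point cell)
        {
            if (cell.X < 0 || cell.Y < 0 || cell.X >= gridWidth || cell.Y >= gridHeight)
                return false;                    // would hit a wall
            foreach (Snake s in snakes)
                foreach (Point segment in s.Segments)
                    if (segment == cell)
                        return false;            // would hit a snake body
            return true;
        }

    A one-cell look-ahead still allows a snake to wander into a pocket that closes behind it; if that matters, a flood fill from each candidate cell can estimate how much free space remains before the direction is chosen.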

  • Collision planes confusion

    - by Jeffrey
    I'm following this tutorial by thecplusplusguy, and in the linked video he explains that, for the world basement and walls, we need to create the actual rendered walls (shown to the player) and then duplicate them, place the copies at the same coordinates as the rendered walls, and mark them as collision geometry (by setting their material to "collision"). The object loader then treats objects whose material == collision as collision planes: they are not rendered but are only used to check collisions. Now I'm pretty confused. Why would we add this kind of complexity to a problem that can easily be solved by a simple loadObject(string plane_object, bool check_collision): create only the walls object (by loading the .obj file given in plane_object) and treat it as a collision plane whenever check_collision is set to true. In this case we have lowered the complexity of his method and made it more flexible and faster to develop (faster because we don't always have to make a copy of each plane, and flexible because we don't hardcode the object loader). The only case in which this method would not work is when we need hidden collision planes, and for that we could modify the loadObject() function like this: loadObject(string plane_object, bool check_collision = true, bool hide_object = false) - create only the walls object, treat it as a collision plane whenever check_collision is set to true, and show or hide the object based on hide_object. The final question is: am I right? What problems might I run into with my solution compared to his?

  • How do I get my polygons to be lighted by either side?

    - by Molmasepic
    Okay, I am using Ogre3D and Gorilla (a 2D library for Ogre3D), and I am making Gorilla::ScreenRenderables in the open scene. The problem that I am having is that when I create a light and have my SR (ScreenRenderable) near it, it does not light up unless the face of the SR is facing the light. I am wondering if there is a way, maybe by setting the material or through code (which would be harder), to have the SR lit whether or not the polygon's vertices are facing the light. I feel it is possible, but the main obstacle is how I would go about doing it.

  • How can I link to callback functions in Lua such that the callbacks will be updated when the scripts are reloaded?

    - by Raptormeat
    I'm implementing Lua scripting in my game using LuaBind, and one of the things I'm not clear on is the logistics of reloading the scripts live in-game. Currently, using the LuaBind C++ class luabind::object, I save references to Lua callbacks directly in the classes that use them. Then I can use luabind::call_function with that object in order to call the Lua code from the C++ code. I haven't tested this yet, but my assumption is that if I reload the scripts, all the functions will be redefined, BUT the references to the OLD functions will still exist in the form of the luabind::object held by the C++ code. I would like to be able to swap out the old for the new without manually having to manage this for every script hook in the game. What is the best way to change this so that the process works? My first thought is to not save a reference to the function directly, but maybe save the function name instead, and grab the function by name every time we want to call it. I'm looking for better ideas!

  • Fragment shader directional light positioning with camera

    - by meWantToLearn
    I'm trying to set up directional lighting in the fragment shader, so that the direction of my light moves with the camera position. Here is the relevant part of the shader:

        #version 150 core

        uniform sampler2D diffuseTex;
        uniform vec4 lightColour;
        uniform vec3 lightDirection;

        vec3 LNorm = normalize(lightDirection);
        vec3 normal = normalize(IN.normal);
        vec3 calColour = lightColour[i].rgb * intensity;
        gl_FragColor = vec4(diffuse.rbg * calColour, diffuse.a);

    It lights the entire scene.

  • Strange behavior of RigidBody with gravity and impulse applied

    - by Heisenbug
    I'm doing some experiments trying to figure out how physics works in Unity. I created a cube mesh with a BoxCollider and a Rigidbody. The cube is lying on a mesh plane with a BoxCollider. I'm trying to update the object's position by applying a force to its Rigidbody. Inside the script's FixedUpdate function I'm doing the following:

        public void FixedUpdate()
        {
            if (leftButtonPressed())
                this.rigidbody.AddForce(this.transform.forward * this.forceStrength,
                                        ForceMode.Impulse);
        }

    Although the object is aligned with the world axes and the force is applied along the Z axis, it performs a fairly large rotation around its Y axis. Since I didn't modify the center of mass or the BoxCollider's position and dimensions, all values should be fine. If I remove gravity and let the object fly without touching the plane, the problem doesn't show up, so I suppose it's related to the friction between the objects, but I can't understand exactly what the problem is. Why does this happen? What's my mistake? How can I fix it, or what's the right way to move an object on a plane with a force impulse?
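
    One quick experiment (my own sketch, not from the question) that can narrow this down: Rigidbody.freezeRotation stops the physics engine from rotating the body at all, so if the yaw drift disappears with it enabled, the torque really is coming from the contact/friction between the box and the plane rather than from the force itself:

        // Illustrative diagnostic only -- attach to the cube.
        void Start()
        {
            rigidbody.freezeRotation = true;   // physics can move, but not rotate, the body
        }

    From there you can decide whether to keep rotation frozen, lower the friction on the physic materials, or apply the force at the center of mass explicitly.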

  • What's an easily extensible technique to store game data?

    - by Miro
    I'm looking for a library/technique for storing my game resources - levels, objects (effects, world info), items (price, effects, ...), NPCs (visual info, behavior) - everything except graphics/audio stuff. I've seen Lua used for Awesome WM configuration. protobuf looks good, but it seems to be designed for network communication. I've tried to write my own parser, but as the project grows it's harder and harder to manage and to catch all the bugs. My requirements: stability; easy extension of the data without needing to convert older versions to newer ones; good (doesn't have to be the best) loading performance; not much coding; and not XML!

  • How do I create a bounding frustum from a view & projection matrix?

    - by Narf the Mouse
    Given a left-handed Projection matrix, a left-handed View matrix, and a ViewProj matrix of View * Projection - how do I create a bounding frustum composed of near, far, left, right, top and bottom planes? The only example I could find on Google (Tutorial 16: Frustum Culling) doesn't seem to work; for example, if the math is used as given, the near plane's distance comes out negative, which places the near plane behind the camera...
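
    If this is an XNA/MonoGame project (an assumption on my part; the sketch below uses XNA 4.0 types), the framework already provides this - BoundingFrustum is built straight from View * Projection and exposes the six planes:

        // Hedged sketch: view and projection are the matrices described above;
        // someBoundingSphere stands in for whatever volume you want to cull.
        BoundingFrustum frustum = new BoundingFrustum(view * projection);

        Plane nearPlane   = frustum.Near;
        Plane farPlane    = frustum.Far;
        Plane leftPlane   = frustum.Left;
        Plane rightPlane  = frustum.Right;
        Plane topPlane    = frustum.Top;
        Plane bottomPlane = frustum.Bottom;

        // Typical culling test:
        bool visible = frustum.Intersects(someBoundingSphere);

    Also worth noting: a plane's D value can legitimately be negative - its sign depends on which way the plane's normal points and on which side of the origin the plane lies - so a negative near-plane distance is not, by itself, proof that the extraction is wrong.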

  • Bullet pattern isn't behaving as expected

    - by Fibericon
    I have a boss that's supposed to continuously shoot five streams of bullets, each at a different angle. It starts off just fine, but it doesn't seem to want to use its entire array of bullets. No matter how large I set the length of bulletList, the boss simply stops shooting after a couple of seconds, then picks up again shortly. Here's what I'm using to generate the pattern:

        Vector3 direction = new Vector3(0.5f, -1, 0);
        for (int r = 0; r < boss.gun.bulletList.Length; r++)
        {
            if (!boss.gun.bulletList[r].isActive)
            {
                boss.gun.bulletList[r].direction = direction;
                boss.gun.bulletList[r].speed = boss.gun.BulletSpeedAdjustment;
                boss.gun.bulletList[r].position = boss.position;
                boss.gun.bulletList[r].isActive = true;
                break;
            }
        }
        direction = new Vector3(-0.5f, -1, 0);
        // Repeat with four similar for loops, to place a bullet in each direction

    It doesn't seem to matter whether the bulletList length is 1000 or 100000. What could be the issue here?

  • Basic collision direction detection on 2d objects

    - by Osso Buko
    I am trying to develop a platform game for Android using the Android GL Engine (ANGLE), and I am having trouble with collision detection. I have two objects, both shaped as rectangles, with no change in rotation. Here is a scheme of the objects' attributes. What I am trying to do is: when the objects collide, they block each other's movement in that direction. Every object has four booleans (bTop, bBottom, bRight, bLeft); for example, when bBottom is true the object can't advance in that direction. I came up with a solution, but it seems to only work in one dimension at a time - bottom and top, or right and left:

        public void collisionPlatform(MyObject a, MyObject b) {
            // first obj is player and second is a wall or a platform
            Vector p1 = a.mPosition;        // p1 = middle point of first object
            Vector d1 = a.mPosition2;       // width (mX) and height of first object
            Vector mSpeed1 = a.mSpeed;      // speed vector of first object
            Vector p2 = b.mPosition;        // p2 = middle point of second object
            Vector d2 = b.mPosition2;       // width (mX) and height of second object
            Vector mSpeed2 = b.mSpeed;      // speed vector of second object

            float xDist, yDist;   // distance between the middles of the two objects
            float width, height;  // averages of the two objects' measurements, e.g. width = (width1 + width2) / 2

            xDist = (p1.mX - p2.mX); // calculate distance; if positive, first object is at the right
            yDist = (p1.mY - p2.mY); // if positive, first object is below

            width  = d1.mX + d2.mX;  // calculate average measurements
            height = d1.mY + d2.mY;
            width  /= 2;
            height /= 2;

            if (Math.abs(xDist) < width && Math.abs(yDist) < height) {
                // the two objects have collided
                if (p1.mY > p2.mY) {
                    // first object is below the second one
                    a.bTop = true;
                    if (a.mSpeed.mY < 0) a.mSpeed.mY = 0;
                    b.bBottom = true;
                    if (b.mSpeed.mY > 0) b.mSpeed.mY = 0;
                } else {
                    a.bBottom = true;
                    if (a.mSpeed.mY > 0) a.mSpeed.mY = 0;
                    b.bTop = true;
                    if (b.mSpeed.mY < 0) b.mSpeed.mY = 0;
                }
            }
        }

    As you can see, this simply will not work when an object comes from the right or the left. I tried a couple of other approaches, but none of them worked. I am guessing the right method will involve the mSpeed vector, but I have no idea how to do it. I would really appreciate your help. Sorry for my bad English.

  • XNA 4.0 Refresh AudioEngine, WaveBank and Others Not Found

    - by Peteyslatts
    I'm going through the Learning XNA 4.0 book, and unfortunately I installed XNA 4.0 Refresh. All the code up until now has worked, with the exception of me needing to remove the Framework.Net and Framework.Storage references. (As a side question, will this be problematic later?) The problem I'm having now is that in my Game1.cs file I have imported all of the XNA.Framework libraries, but when I try to create instances of any of the following classes, an error pops up saying Visual Studio can't find them: AudioEngine, WaveBank, SoundBank, and Cue. I have googled around for a while, and the only solution I saw was to import Microsoft.Xna.Framework.Xact, but that doesn't seem to exist for me. Any help is much appreciated. Thanks, Peter.

  • Rendering output to an arbitrary quadrilateral

    - by Trainee4Life
    I want to render output on a device to an arbitrary quadrilateral, i.e. project a texture onto a quad. What are the possible ways I could implement it? So far, I have investigated: drawing a textured quadrilateral - the quads look odd because they are composed of triangles, and the distortion looks wrong (the issue I'm facing has been discussed here and here as well); setting a transformation on the device - I need help getting this implemented; and pixel shaders - I have not been able to achieve the desired effect. Any help would be much appreciated.

  • MCP 1.7.10 Java class navigation

    - by Elias Benevedes
    So, I'm new to the Minecraft modding community and trying to understand where to start. I've attempted this before, but dropped it because of the complexity of starting out and the lack of a site like this to help. (Mind that I'm also semi-new to Java, but have worked extensively in JavaScript and Python, and I understand how Java differs from the two.) I have downloaded MCP 9.08 (which decompiles 1.7.10) and decompiled Minecraft. I'm looking to mod the client, so I didn't supply it with a server jar. Everything seemed to work fine during decompilation (the only error was that it couldn't find the server jar). I can find my files in /mcp908/src/minecraft/net/minecraft. However, if I open up one of the classes in, say, block, I see a bunch of variables starting with p_ and ending with _. Is there any way to make these variables more decipherable, so I can understand what's going on and learn by example? Thank you.

  • 3D Texture Mapping (Atlas)

    - by Tim Hatch
    This is a pretty simple question. If I were to use multiple images in a single texture for a 3D cube, how would I go about re-using each vertex (having 8 total vs 24)? With a single buffer of 8 vertices, I don't see how I'd properly reuse the UV values. Any help with that? I know it's not terribly clear, but I figured it was a simple question. The 2D method is pretty easy: the next coordinates would be the same as the first (0,0 and 0,1 respectively). However, the 3D version above has me quite befuddled.

  • Using a Vertex Buffer and DrawUserIndexedPrimitives?

    - by MattMcg
    Let's say I have a large but static world and only a single moving object in said world. To increase performance, I wish to use a vertex and index buffer for the static part of the world. I set them up and they work fine. However, if I add another draw call to DrawUserIndexedPrimitives (to draw my one moving object) after the call to DrawIndexedPrimitives, it errors out saying a valid vertex buffer must be set. I can only assume the DrawUserIndexedPrimitives call destroyed/replaced the vertex buffer I set. In order to get around this I must call device.SetVertexBuffer(vertexBuffer) every frame. Something tells me that isn't correct, as it kind of defeats the point of a buffer? To shed some light: the large vertex buffer is the final merged mesh of many repeated cubes (think Minecraft), which I build manually to reduce the number of vertices/indices needed (for example, two connected cubes become one cuboid, with the connecting faces cut out) and also the number of matrix translations (as it would suck to do one per cube). The moving objects would be other items in the world that are dynamic and not fixed to the block grid, such as NPCs who move constantly. How do I go about handling the large static world while also allowing objects to move about freely?
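
    For reference, a sketch of the per-frame pattern I would expect here (my own illustration, assuming XNA 4.0; worldVertexBuffer, movingVertices and the counts are invented names). Re-issuing SetVertexBuffer only rebinds the buffer that already lives on the GPU - the vertex data is not uploaded again - so doing it every frame does not defeat the purpose of the buffer:

        // Static world: bind once per frame and draw from the GPU-resident buffers.
        GraphicsDevice.SetVertexBuffer(worldVertexBuffer);
        GraphicsDevice.Indices = worldIndexBuffer;
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
            0, 0, worldVertexCount, 0, worldTriangleCount);

        // Moving object: user-pointer draw. This appears to replace the bound
        // vertex buffer internally, which is why the static buffer has to be
        // re-set on the next frame.
        GraphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList,
            movingVertices, 0, movingVertices.Length,
            movingIndices, 0, movingIndices.Length / 3);

    An alternative is to give the moving object its own small DynamicVertexBuffer and draw both with DrawIndexedPrimitives, switching buffers between the two calls.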

  • Collision in PyGame for a spinning rectangular object touching circles

    - by OverAchiever
    I'm creating a variation of Pong. One of the differences is that I use a rectangular structure as the object being bounced around, and I use circles as paddles. So far, all the collision handling I've done has used simple math (I wasn't using the collision "feature" in PyGame). The game is contained within a 2-dimensional continuous space. The idea is that the rectangular structure will spin at different speeds depending on how far from the center you touch it with the circle. Also, any extremity of the rectangular structure should be able to touch any extremity of the circle, so I need to keep track of where it has been touched on both the circle and the rectangle to figure out the direction it will be bounced in. I intend to have basically 8 possible directions (up, down, left, right, and the half points between each of those). I can work out how the object will be displaced once I know the direction it will be deflected in, based on where it has been touched. I also need to keep track of where it has been touched to decide whether the rectangular structure will spin clockwise or counter-clockwise after it collides. Before I started coding, I read the resources available on the PyGame website about the collision class they have (and its respective functions). I tried to work out the logic of what I was trying to achieve based on those resources and how the game will function. The only thing I could figure out was to make each of these objects a group of rectangular objects, and depending on which rectangle was touched the others would behave accordingly, giving the illusion of a single object. However, not only do I not know whether this will work, I also don't know whether it is going to look convincing, based on how PyGame redraws the objects. Is there a way I can use PyGame to handle these collision detections while still having a single object? Can I figure out the point of collision on both objects using functions within PyGame precisely enough to achieve what I'm looking for? P.S.: I hope the question was specific and clear enough. I apologize for any grammar mistakes; English is not my native language.

  • Changing DisplayMode seems not to update Input&Graphic Dimension

    - by coding.mof
    I'm writing a small game using Slick and Nifty-GUI. At program startup I set the DisplayMode using the following lines:

        AppGameContainer app = new ...
        app.setDisplayMode( 800, 600, false );
        app.start();

    I wrote a Nifty ScreenController for my settings dialog, in which the user can select the desired DisplayMode. When I try to set the new DisplayMode within this controller class, the game window gets resized correctly, but the Graphics and Input objects aren't updated accordingly; therefore my rendering code only uses a part of the new window. I tried setting different DisplayModes in the main method to test whether it's generally possible to invoke this method multiple times. It seems that changing the DisplayMode only works before I call app.start(). Furthermore, I tried to update the Graphics & Input objects manually, but the init and setDimensions methods are package private. :( Does someone know what I'm doing wrong and how to change the DisplayMode correctly?

  • SDL mouse wheel not picking up

    - by Chris
    Running Ubuntu 11.04 with SDL 1.2, I'm trying to pick up mouse wheel up/down movement with this (stripped down) code:

        int main( int argc, char **argv )
        {
            SDL_MouseButtonEvent *mousebutton = NULL;

            while ( !done )
            {
                if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_LEFT)
                    yrot += 0.75f;
                else if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_RIGHT)
                    yrot -= 0.75f;
                else if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_WHEELUP) {
                    xrot += 0.75f;
                } else if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_WHEELDOWN) {
                    xrot -= 0.75f;
                }

                while ( SDL_PollEvent( &event ) )
                {
                    switch( event.type )
                    {
                        case SDL_MOUSEBUTTONDOWN:
                            mousebutton = &event.button;
                            break;
                        case SDL_MOUSEBUTTONUP:
                            mousebutton = NULL;
                            break;
                        default:
                            break;
                    }
                }
            }
            return 0;
        }

    The strange thing is, scrolling the mouse wheel does nothing, but if I hold down a mouse button or two and then move the mouse, it hits the SDL_BUTTON_WHEEL code occasionally. This honestly reeks of a pointer issue, which would make sense since I've been spoiled by C# for the past couple of years, but I am just not seeing it. How do I correctly detect mouse scroll events in SDL?
