Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.


  • How to pause and unpause the animation of a sprite?

    - by user1609578
    My game has a sprite representing a character. When the character picks up an item, the sprite should stop moving for a period of time. I use a CCBezier action to make the sprite move, like this: sprite->runAction(x). Now I want the sprite to stop its current action (the movement) and resume it later. I can make the sprite stop with sprite->stopAction(x), but if I do that, I can't resume the movement. How can I do that?

    Read the article

  • Complex, yet simple crafting system model

    - by KatShot
    I'm working on an arcade shooter/slasher, and the main logline is "Kick 'em with everything you want". There are not many enemy types in the GDD; the main focus is on tons of weapons and gadgets to cause mayhem. To get a weapon, you need to craft it, and right now the crafting system looks simple: 1) You have three slots for weapon parts (A, B, C). 2) You collect miscellaneous weapon parts, and once you have at least one for every slot, you can craft a weapon (for example, if you have A1, B1, B2, B3 and C1, you can craft these models: A1B1C1, A1B2C1, A1B3C1). To me, this crafting system is too simple, because weapon parts will just fall from the top of the screen often enough. That's why I'm thinking about adding some more levels to the crafting system, like resources (collect 10 scrap pieces to make part A1 or C3), etc. My question is: how can I add more complex, yet still simple and transparent, levels to the crafting system? Update: for example, in Minecraft or Terraria, the first 5-10 crafting recipes are quite transparent and simple, IMHO. But then it turns into a huge mess to understand how to craft this or that (a fishing rod, for instance).

    Read the article

  • What's the difference between Pygame's Sound and Music classes?

    - by Southpaw Hare
    What are the key differences between the Sound and Music classes in Pygame? What are the limitations of each? In what situations would you use one or the other? Is there a benefit to using them in an unintuitive way, such as using Sound objects to play music files or vice versa? Are there specific issues with channel limitations, and can one or both be dropped from their channel unreliably? What are the risks of playing music as a Sound?

    Read the article

  • Collider2D and Rigidbody2D, how do they work?

    - by user42646
    I have been learning JavaScript and Unity for a week now. I learned how to make a cube as the ground and another cube as the player, and I used this code to make the player cube move back and forth and jump:

        var walkspeed: float = 5.0;
        var jumpheight: float = 250.0;
        var grounded = false;

        function Update() {
            rigidbody.freezeRotation = true;
            if (Input.GetKey("a"))
                transform.Translate(Vector3(-1, 0, 0) * Time.deltaTime * walkspeed);
            if (Input.GetKey("d"))
                transform.Translate(Vector3(1, 0, 0) * Time.deltaTime * walkspeed);
            if (Input.GetButton("Jump")) {
                Jump();
            }
        }

        function OnCollisionEnter(hit: Collision) {
            grounded = true;
        }

        function Jump() {
            if (grounded == true) {
                rigidbody.AddForce(Vector3.up * jumpheight);
                grounded = false;
            }
        }

    I also learned how to make a character hit box and how to make a sprite and an animation: pretty much the basic stuff. A couple of days ago I created a simple ground and a simple character in Photoshop and imported them into Unity. Whenever I use the code above, the character keeps falling out of the scene, as if it has nothing to stand on. That makes sense, because I never gave the character anything that tells it to stand on something. So I started reading about this and found out that there are components called Collider2D and Rigidbody2D. Now I'm really stuck; I just don't know what to do. I applied a Rigidbody2D to my character image and a Collider2D to the ground image, but whenever I play the project, gravity makes my character fall straight through. This is my question: how can I make the Rigidbody2D object realize that it shouldn't fall when there is a Collider2D object under it? So that when I jump, it jumps, and gravity brings it back down to the ground.
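
    A minimal C# sketch of the usual fix, offered here as an assumption since the actual scene setup isn't shown: a Rigidbody2D on its own only makes an object obey gravity; the character also needs its own Collider2D so the physics engine has a shape that can rest on the ground's collider. The class name below is illustrative.

        using UnityEngine;

        // Sketch: the character needs BOTH a Rigidbody2D (so gravity and AddForce act on it)
        // and a Collider2D (so it has a shape that can rest on the ground's collider).
        // The ground only needs a Collider2D, with no Rigidbody2D, so it stays static.
        public class CharacterPhysicsSetup : MonoBehaviour
        {
            void Awake()
            {
                if (GetComponent<Collider2D>() == null)
                    gameObject.AddComponent<BoxCollider2D>(); // shape used for collisions
                if (GetComponent<Rigidbody2D>() == null)
                    gameObject.AddComponent<Rigidbody2D>();   // makes gravity and forces work
            }
        }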

    Read the article

  • C++ Parallel Asynchronous task

    - by Doodlemeat
    I am currently building a randomly generated terrain game, where terrain is created automatically around the player. I am experiencing lag while generation is active, because I run quite heavy tasks for post-processing and for creating physics bodies. It occurred to me to use a parallel asynchronous task to do the post-processing for me, but I have no idea how to do that. I have looked at C++ std::async, but I believe that is not what I want: in the examples I found, a task returned something, whereas I want the task to change objects in the main program. This is what I want:

        // Main program
        // Chunks that need to be processed.
        // NOTE! These chunks are already generated; they only need post-processing!
        std::vector<Chunk*> unprocessedChunks;

    And then my task could look something like this, running as a loop that constantly checks whether there are chunks to process:

        // Async task
        if (unprocessedChunks.size() > 0) {
            processChunk(unprocessedChunks.pop());
        }

    I know it's not as easy as I've written it, but it would be a huge help if you could push me in the right direction. In Java, I could write something like this:

        asynced_task = startAsyncTask(new PostProcessTask());

    And that task would run until I do this:

        asynced_task.cancel();

    Read the article

  • What is the primary use of Vertex Buffer Objects?

    - by sensae
    From what I've read, it seems VBOs are purely for performance. I'm working on a very rudimentary learning project in LWJGL and I'm just trying to figure out which more advanced features of the library I should be delving into, and what they are used for. My understanding is that VBOs allow you to keep vertices in VRAM while they aren't currently being drawn in a scene. In my case, I'm just drawing quads and performance probably isn't a concern at all, but I'm trying to piece together what's happening under the hood. If I'm drawing quads directly, I'm drawing from CPU memory, correct? Also, if I'm not doing any checks for visibility, does that mean I'm rendering absolutely everything in the "scene", regardless of whether it's in view? Are VBOs a way to store objects and only render what's needed?

    Read the article

  • Moving two objects proportionally

    - by SSL
    I'm trying to move two objects away from each other at proportional distances, but on different scales, and I'm not quite sure how to do it. Object A can go from position 0.1 to 1. Object B has no limits. If Object B is decreasing, then Object A should be decreasing at rate R. Likewise, if Object B is increasing, then Object A increases at rate R. How can I tie the two objects' positions together so that in an update loop they automatically update their positions? I tried using:

        ObjA.Pos += 0.001f * ObjB.VelocityY; // 0.001f is the rate

    This works, but an error accumulates each time it runs. ObjA starts off at its maximum position of 1, but the next time it will stop at 0.97, then 0.94, 0.91, etc., due to the 0.001f rate I put in. Is there a way to control the rate and not end up with the rounding error?
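
    One hedged sketch of an alternative, assuming ObjB exposes its position as well as its velocity: instead of incrementing ObjA.Pos every update, which lets small errors pile up, derive it from ObjB's absolute position through a fixed linear mapping and clamp it to the 0.1-1 range. baseA and baseB below are illustrative anchor values (the positions the two objects had when they were last "in sync"), not names from the original code; MathHelper.Clamp is XNA's clamp, and any equivalent works.

        const float rate = 0.001f;
        // Derive A from B's absolute position so no error accumulates frame to frame.
        ObjA.Pos = MathHelper.Clamp(baseA + rate * (ObjB.Pos - baseB), 0.1f, 1f);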

    Read the article

  • How do you blend multiple colors in HSV (polar) color-space?

    - by Toxikman
    In RGB color space, you can do a weighted multiple-color blend by just doing: start with R = G = B = 0, then perform a blend at index i using a set of colors C and a set of normalized weights w like so:

        R += w[i] * C[i].r
        G += w[i] * C[i].g
        B += w[i] * C[i].b

    But I'd like to interpolate the colors in the HSV color space instead, so that saturation and brightness are uniform across the interpolation. I know I can blend saturation and brightness in the same way as above, but the hue component is an angle around a continuous circle, since HSV is essentially a polar coordinate system. Blending only two HSV colors makes sense to me: you just find the shortest arc around the circle and interpolate between the two hues. But when you attempt to blend more than two colors, it becomes a bit of a puzzle. You have to handle anomalous cases, like four equally-weighted colors with hues at 0, 90, 180, and 270 degrees. They basically cancel each other out, so any hue will do. Any ideas would be greatly appreciated.
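
    For the general case, a common approach (shown here as a C# sketch) is to treat each hue as a point on the unit circle, take the weighted average of those points, and convert back with atan2; saturation and value still blend linearly as above. When the weighted sum lands at or very near the origin, as in the 0/90/180/270 example, the hue is genuinely undefined and any value can be chosen. hues[] and w[] below stand in for the C[i] hues and weights from the question.

        // Weighted circular mean of hues, in degrees. Assumes w[] is normalized.
        double x = 0.0, y = 0.0;
        for (int i = 0; i < hues.Length; i++)
        {
            double rad = hues[i] * Math.PI / 180.0;
            x += w[i] * Math.Cos(rad);
            y += w[i] * Math.Sin(rad);
        }
        // If x and y are both ~0 the hues cancel out; fall back to any hue (here: 0).
        double blendedHue = (x * x + y * y < 1e-12)
            ? 0.0
            : (Math.Atan2(y, x) * 180.0 / Math.PI + 360.0) % 360.0;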

    Read the article

  • How does the process of updating code with Continuous Integration work?

    - by BleakCabalist
    I want to draw a model of the process of updating source code with the use of Continuous Integration. The main issue is that I don't really understand how it works when there are several programmers working on various aspects of the code at the same time; I can't visualize it in my mind. Here's what I know, but I might be wrong: New code is sent to the repository. The Continuous Integration server asks the Version Control System whether there is new code in the repository. If there is, the CI server executes tests on the code. If the tests show problems, the CI server orders the VCS to revert back to a working version of the code and reports it to the programmer. If the tests pass, it compiles the repository code and makes a new build of the game? A new build is made not after every single change, but at the end of the day, I believe? Are my assumptions above correct? If yes, does it also work when there are several programmers updating the repository at once? Is this enough to draw a model of the process, in your opinion, or did I miss something? Also, what software would I need for the above process? Can you give examples of CI server software and VCS software and whatever else I need? Does CI server software perform the code tests, or do I need another tool for that and integrate it with the CI server? Is there repository software?

    Read the article

  • Rotating object around moving object/player in 2D

    - by Boston
    I am trying to implement a camera which rotates the world around the player. I have found many solutions online to the task of rotating an object about the origin, or about an arbitrary point. The procedure seems to be to translate the point to be rotated about to the origin, perform the rotation, translate back, then draw. I have gotten this working for rotation around the origin as well as around a fixed point. Rotation of objects around the player works as well, provided the player does not move. However, if the objects are rotated around the player by some non-zero angle and the player then moves, the rotated objects move as well. I have probably done a poor job explaining this, so here's an image: http://i.imgur.com/1n63iWR.gif And here's the code for the behavior:

        renderx = (Ox - Px)*cos(camAngle) - (Oy - Py)*sin(camAngle) + Px;
        rendery = (Ox - Px)*sin(camAngle) + (Oy - Py)*cos(camAngle) + Py;

    Where (Ox, Oy) is the actual position of the object to be rotated and (Px, Py) is the actual position of the player. Any ideas? I am using C++ with SDL 2.0.
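
    One common camera formulation, sketched here in C# under the assumption that the intent is for the player to stay pinned at the centre of the screen while the world rotates around them: rotate the object's offset from the player and then add the screen centre, rather than adding the player's world position back in. screenCenterX and screenCenterY are illustrative names for half the window size, not values from the original code.

        float dx = Ox - Px;                         // object position relative to the player
        float dy = Oy - Py;
        float cosA = (float)Math.Cos(camAngle);
        float sinA = (float)Math.Sin(camAngle);
        float renderx = dx * cosA - dy * sinA + screenCenterX;
        float rendery = dx * sinA + dy * cosA + screenCenterY;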

    Read the article

  • XNA - Detect clicks on a triangle- or circle-shaped texture

    - by chr1s89
    How can I detect clicks on a texture (it will be a button in my game) that has the form of a triangle or circle? I only know the rectangle solution, where you can use the position plus the width/height, but that doesn't work here because clicks would also be detected on the transparent pixels. I've heard of pixel-perfect collision; is that the right way to do this? It would be great if someone could give me an example of such a solution, or another approach.
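
    Pixel-perfect tests work, but for plain triangle or circle buttons the geometry alone is enough. A sketch using XNA's Vector2 (the method names are illustrative): a circle hit is a distance check, and a triangle hit checks that the click lies on the same side of all three edges.

        using Microsoft.Xna.Framework;

        // Circle-shaped button: the click counts if it lies within the radius.
        static bool HitCircle(Vector2 click, Vector2 center, float radius)
        {
            return Vector2.DistanceSquared(click, center) <= radius * radius;
        }

        // Triangle-shaped button: inside when the click is on the same side of all three edges.
        static bool HitTriangle(Vector2 p, Vector2 a, Vector2 b, Vector2 c)
        {
            float d1 = Cross(b - a, p - a);
            float d2 = Cross(c - b, p - b);
            float d3 = Cross(a - c, p - c);
            bool hasNegative = (d1 < 0) || (d2 < 0) || (d3 < 0);
            bool hasPositive = (d1 > 0) || (d2 > 0) || (d3 > 0);
            return !(hasNegative && hasPositive);
        }

        // 2D cross product (z component of the 3D cross product).
        static float Cross(Vector2 u, Vector2 v)
        {
            return u.X * v.Y - u.Y * v.X;
        }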

    Read the article

  • 3D Mesh Collision Help

    - by BlackAfricano
    I am new to XNA (I have only been working with it for a few weeks) as well as to these forums (I have only made one or two other posts), so this may seem like a strange request, but I am wondering whether anybody knows about more advanced collision in XNA. So far I have only been able to figure out BoundingSpheres, which seem to be the simplest of the methods, and I was thinking of looking into BoundingBoxes because the game I have is a 2-3D platformer. The problem I realized is that if I wanted to get any more advanced than stages in the shape of a box, I would face some immediate dilemmas. I was hoping somebody here is knowledgeable on the subject and could tell me where to start learning how to do something like this: https://www.youtube.com/watch?v=ekMD_Gtt8d4 Although the game in this video isn't very pretty, the mesh collision looks like what I'm looking for. I am hoping for the most complete solution possible.

    Read the article

  • How do I render an entire frame to a Texture2D?

    - by redcodefinal
    I asked a question here: C# XNA Make rendered screen a texture2d. But I ended up not getting the exact result I was looking for, since I didn't ask the question right. In the game I am writing, I render an extremely large city out of objects; this can cause lag when moving the camera to view things that were off screen. I need a way to render the ENTIRE city, even the parts that are off screen, and turn it into a Texture2D. The answer I accepted for the last question didn't work entirely right, because it only captures what is on screen, not what is off.
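
    A sketch of the usual XNA 4.0 approach, assuming the whole city fits within the GPU's maximum texture size (cityWidth, cityHeight, and DrawCity are placeholders for your own values and draw code): render once into a RenderTarget2D, which in XNA 4 is itself a Texture2D, and then draw that texture from then on.

        RenderTarget2D cityTarget = new RenderTarget2D(GraphicsDevice, cityWidth, cityHeight);

        GraphicsDevice.SetRenderTarget(cityTarget);   // draw into the texture, not the back buffer
        GraphicsDevice.Clear(Color.Transparent);
        DrawCity();                                   // draw everything, on screen or not
        GraphicsDevice.SetRenderTarget(null);         // switch back to the screen

        Texture2D cityTexture = cityTarget;           // RenderTarget2D derives from Texture2D

    Note that a very large city may exceed the device's texture size limit (often 4096 or 8192 pixels per side), in which case the render would have to be split across several targets.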

    Read the article

  • Breathing for game/movie characters

    - by dtldarek
    Breathing (the movement of the chest and face features): I'd like to ask whether it is hard to model and whether it is computationally expensive. I recently noticed the great effect it has in the Madagascar 3 movie, but (please correct me if I am wrong) I don't remember seeing it in any games (except maybe steam clouds in cold/winter settings), and very few animated movies do it to a noticeable degree (e.g., when it is required by the plot or situation). I'd greatly appreciate answers from both the movie-graphics and game-graphics perspectives.

    Read the article

  • XNA - How do I change the texture of a 2D object?

    - by Adorjan
    I am making a table game. I successfully figured out how to build the tile array and move the cursor over it (tile by tile). Now I want the tile's texture to change to another one when I hit the Enter key. I tried it like this:

        if (input.KeyPressed(Keys.Enter))
        {
            cell[X, Y].Cell_texture = tile_texture;
        }

    but it doesn't really work. Hope you can help. :) Thanks!
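
    One frequent cause of this in XNA, offered only as a guess since the input wrapper isn't shown: Keyboard.GetState reports Enter as held for many consecutive frames, so the assignment should fire only on the frame the key first goes down. A sketch using the raw keyboard state, where previousKeyboard is a KeyboardState field kept between frames:

        KeyboardState current = Keyboard.GetState();
        if (current.IsKeyDown(Keys.Enter) && previousKeyboard.IsKeyUp(Keys.Enter))
        {
            cell[X, Y].Cell_texture = tile_texture;   // swap the texture once per key press
        }
        previousKeyboard = current;                   // remember the state for the next frame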

    Read the article

  • GPU optimization question: pre-computed or procedural?

    - by Jay
    Good morning. I'm learning shader programming and need some general direction. I want to add noise to my laser beam (like this). Which is the best way to handle it? I could pre-compute an image and pass it to the shader; I could then use the image to change the opacity, and easily animate the smoke by changing the offset of the texture lookup. I could also generate the noise in the shader and use it the same way the texture would have been used. Is it generally better to avoid I/O to the graphics card, or the opposite? Thanks!

    Read the article

  • Zelda-style top-down RPG. How to store tile and collision data?

    - by Delerat
    I'm looking to build a Zelda: LTTP style top-down RPG. I've read a lot on the subject and am currently going back and forth between a few solutions. I'm using C#, MonoGame, and Tiled. For my tile maps, these are the choices I can see in front of me: (1) Store each tile as its own entry, each one having 3-4 layers: texture/animation, depth, flags, and maybe collision (depending on how I do it). I've read warnings about memory issues going this route, and my biggest map will probably be 160x120 tiles. My average map, however, will be about 40x30. The number of tiles might be cut in half if I decide to double my tile size, which is currently 16x16. This is the most appealing approach for me, as I feel like I would know how to save maps, make changes, and separate the map into chunks for collision checks. (2) Store the static parts of my tile map in multiple arrays acting as the different layers, and then use entities for anything that isn't static. All of the other tile data such as collision, depth, etc., would be stored in their own layers as well, I guess? This way just seems messy to me, though. Regardless of which one I choose, I'm also unsure how to plan all of that other tile data. I could write a bunch of code that knows which integer represents which tile and its data, but if I changed a tileset in Tiled and exported it again, all of those integers could potentially change and I'd have to adjust a whole bunch of code. My other issue is how to do collision. I want to at least support angled collision that slides you around the corners of objects like LTTP does, if not more oddball shapes as well. So do I: store collision as a flag for binary collision (could I get this to support angles, or would it be fine to store collision as an integer, with each number representing a certain angle of collision?), or store a list of rectangles or other shapes and do collision that way? Sorry for the large two-part (three-part?) question. I felt like these needed to be asked together, as I believe each choice influences the others.
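
    As a point of reference for the memory worry around option (1), a sketch of one possible per-tile layout (field names, sizes, and meanings are illustrative only, not a recommendation from the question): even the 160x120 map stays well under a megabyte when each tile is a small struct, so memory is unlikely to be the deciding factor.

        // One possible per-tile record; a 160 x 120 map of these is only a few hundred KB.
        struct Tile
        {
            public ushort TextureId;     // index into the tileset
            public byte Depth;           // draw layer
            public byte Flags;           // bit flags: blocks movement, animated, ...
            public byte CollisionShape;  // 0 = none, 1 = full square, 2+ = angled shapes
        }

        Tile[,] map = new Tile[160, 120];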

    Read the article

  • Java - Finding distance between player and tile in array

    - by Corey
    What is the best way to do this, performance-wise? When I click a tile, I want to get the distance to it, and if I am close enough, I can interact with the tile. One way would be to find the tile by doing mouse / tile width when I click, correct? But then how would I get that tile's position? I know how to find the distance; I just don't know how to get a certain tile's position from the array when I click it.
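
    The arithmetic is small either way. A sketch (written in C# here, but it is the same in Java, and it assumes the map starts at the origin with no camera offset): integer-divide the click by the tile size to get the tile's grid index, multiply back to get its world position, then measure the distance to its centre.

        int tileCol = mouseX / tileWidth;             // grid index of the clicked tile
        int tileRow = mouseY / tileHeight;
        float tileCenterX = tileCol * tileWidth + tileWidth / 2f;
        float tileCenterY = tileRow * tileHeight + tileHeight / 2f;
        double distance = Math.Sqrt(
            (tileCenterX - playerX) * (tileCenterX - playerX) +
            (tileCenterY - playerY) * (tileCenterY - playerY));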

    Read the article

  • Can XNA Content Pipeline split one content file into several .xnb?

    - by Zeta Two
    Let's say I have an XML file which looks like this:

        <Weapons>
          <Weapon>
            <Name>Pistol</Name>
            ...
          </Weapon>
          <Weapon>
            <Name>MachineGun</Name>
            ...
          </Weapon>
        </Weapons>

    Would it be possible to use a custom importer/writer/reader to create two files, Pistol.xnb and MachineGun.xnb, which I can load individually with Content.Load()? While writing this I realized I could just import a Weapon[] list and split it up with a helper, but I'm still wondering whether this is possible.

    Read the article

  • How do I convert mouse coordinates to game coordinates in Slick2D (Java)?

    - by Trycon
    I'm really new to Java and I want to know how to convert mouse clicks to in-game coordinates. My game moves its images so that the camera stays with the character (I followed thenewboston's tutorials and have been modifying the code for smoother gameplay). I have been searching the web for tutorials, and this is one of the snippets I found:

        PosGameX = MouseX + 0;
        PosGameY = MouseY + 0;

    I have not tried this code, but I really don't think it would work, and the website I found it on doesn't seem reliable for coding. The idea is that when the mouse clicks on a position, the game should take the mouse coordinates and convert them to game coordinates. So: how do I convert my mouse clicks to game coordinates? Related searches I've tried: "How do I translate game coordinates?", "How do I translate mouse to game coordinates?". And please don't answer with just algebra; I have really forgotten it.

    Read the article

  • How to determine which thrusters to turn on to rotate the ship?

    - by migimunz
    The configuration of the ship changes dynamically, so I have to determine which thrusters to turn on when I want to rotate the ship clockwise or counter-clockwise. The thrusters are always axis-aligned with the ship (never at an angle) and are either on or off. Here's one of the possible setups: What I've tried so far is to visualize the firing vector and the direction vector to the center of mass of the ship. Unfortunately, I didn't get very far with that.
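
    A sketch of one standard way to decide, assuming each thruster's position and thrust direction are known in ship-local space with the usual y-up convention (flip the signs if your y axis points down): the sign of the 2D cross product of the lever arm and the thrust direction tells you which way that thruster turns the ship, so to spin counter-clockwise you fire every thruster whose torque comes out positive, and the negative ones for clockwise. Vector2 here is any 2D vector type (XNA's is shown).

        // Positive result = this thruster turns the ship counter-clockwise,
        // negative = clockwise, (near) zero = it produces no useful rotation.
        static float Torque(Vector2 thrusterPos, Vector2 thrustDir, Vector2 centerOfMass)
        {
            Vector2 r = thrusterPos - centerOfMass;   // lever arm from the centre of mass
            return r.X * thrustDir.Y - r.Y * thrustDir.X;
        }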

    Read the article

  • Inversion of control in Unity?

    - by user3206275
    I am a semi-experienced .NET developer who has just begun working with Unity. I am trying to decide how to make IoC work in Unity 4.x (I have not yet tested anything), and I wonder what good ways there are of achieving it. This post and its answers state that Ninject won't work with Unity; however, it is old. Is it still true? If yes, what are other means of achieving IoC in Unity? Edit 1: I am targeting mainly the Windows platform, so I don't need platform interoperability; I just need it to work.

    Read the article

  • How do I convert screen coordinates to between -1 and 1?

    - by bbdude95
    I'm writing a function that lets me click on my tiles. The origin for my tiles is the center of the screen, but the mouse's origin is the top left, so I need a way to transform my mouse coordinates into my tile coordinates. Here is what I already have (but it is not working):

        void mouseClick(int button, int state, int x, int y)
        {
            x -= 400;
            y -= 300;
            float xx = x / 100; // This gets me close, but the number is still too high.
            float yy = y / 100; // It needs to be between -1 and 1
        }
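
    A sketch of the arithmetic, assuming an 800x600 window (so the half-sizes are the 400 and 300 already subtracted above): divide by the half-size rather than 100, and make sure the division happens in floating point, since integer division truncates everything between -1 and 1 down to 0.

        float xx = (x - 400) / 400f;      // -1 at the left edge, +1 at the right edge
        float yy = -((y - 300) / 300f);   // flip Y so +1 is the top, since window Y grows downward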

    Read the article

  • Rotate an object given only by its points?

    - by d33tah
    I was recently writing a simple 3D maze FPP game. Once I was done fiddling with planes in OpenGL, I wanted to add support for importing Blender objects. The approach I used was triangulation of the object, then using Three.js to export the points to plain text, and then parsing the resulting JSON in my app. An example file can be seen here: https://github.com/d33tah/tinyfpp/blob/master/Data/Models/cross.txt The numbers represent the x, y, z, u, v of a single vertex; three of them combined make a triangle. Then I rendered such an object triangle-by-triangle and played with it. I can move it back and forth and sideways, but I still have no idea how to rotate it around an axis. Let's say I'd like to rotate all the points by five degrees to the left; what would code that does that look like?
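
    A C# sketch of rotating every vertex about the vertical (Y) axis, which is what "turning the object to the left" usually means in an FPP maze; the same idea with the other two rotation matrices covers the remaining axes. It assumes the parsed x and z coordinates are held in arrays and that the object should spin around its own origin; the method name and array layout are illustrative.

        static void RotateAboutY(float[] xs, float[] zs, float degrees)
        {
            double rad = degrees * Math.PI / 180.0;
            float cos = (float)Math.Cos(rad);
            float sin = (float)Math.Sin(rad);
            for (int i = 0; i < xs.Length; i++)
            {
                float x = xs[i];
                float z = zs[i];
                xs[i] = x * cos + z * sin;    // y is untouched by a rotation about the Y axis
                zs[i] = -x * sin + z * cos;
            }
        }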

    Read the article

  • How do I pass parameters to a DrawableGameComponent in XNA 4.0?

    - by cad
    I have a small demo and I want to create a class that draws messages on the screen, like the FPS rate. I am reading an XNA book and it talks about GameComponents, so I created a class that inherits DrawableGameComponent:

        public class ScreenMessagesComponent : Microsoft.Xna.Framework.DrawableGameComponent

    I override some methods like Initialize or LoadContent, but when I want to override Draw I have a problem: I would like to pass some parameters to it, and the overridden method does not allow me to.

        public override void Draw(GameTime gameTime)
        {
            StringBuilder buffer = new StringBuilder();
            buffer.AppendFormat("FPS: {0}\n", framesPerSecond); // Where does framesPerSecond come from???
            spriteBatch.DrawString(spriteFont, buffer.ToString(), fontPos, Color.Yellow);
            base.Draw(gameTime);
        }

    If I create a method with parameters, then I cannot override it and it will not be called automatically:

        public void Draw(SpriteBatch spriteBatch, int framesPerSecond)
        {
            StringBuilder buffer = new StringBuilder();
            buffer.AppendFormat("FPS: {0}\n", framesPerSecond);
            spriteBatch.DrawString(spriteFont, buffer.ToString(), fontPos, Color.Yellow);
            base.Draw(gameTime);
        }

    So my questions are: Is there a mechanism to pass parameters to a DrawableGameComponent? What is the best practice? In general, is it good practice to use GameComponents?
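
    One common pattern, sketched below with illustrative names (the "DefaultFont" asset name and the FramesPerSecond property are assumptions, not from the question): keep the fixed Draw(GameTime) signature, expose the changing value as a public property on the component, and set that property from the game's Update; the component then reads its own field when it draws.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class ScreenMessagesComponent : DrawableGameComponent
        {
            public int FramesPerSecond { get; set; }    // set this from Game.Update each frame

            private SpriteBatch spriteBatch;
            private SpriteFont spriteFont;
            private Vector2 fontPos = new Vector2(10, 10);

            public ScreenMessagesComponent(Game game) : base(game) { }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                spriteFont = Game.Content.Load<SpriteFont>("DefaultFont"); // hypothetical asset name
                base.LoadContent();
            }

            public override void Draw(GameTime gameTime)
            {
                spriteBatch.Begin();
                spriteBatch.DrawString(spriteFont, "FPS: " + FramesPerSecond, fontPos, Color.Yellow);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }

    From the game you would then do something like fpsComponent.FramesPerSecond = framesPerSecond; once per update, and the component remains an ordinary DrawableGameComponent that the framework calls on its own.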

    Read the article
