Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.

Page 557/1016 | < Previous Page | 553 554 555 556 557 558 559 560 561 562 563 564  | Next Page >

  • Triangles in a C++ STL Vector as an Objective-C member sometimes draws incorrectly in OpenGL ES

    - by Rahil627
    The polygons draw correctly about 80% of the time. When one fails, a vertex is dislocated, and the polygon is consistently drawn with that wrong vertex. I checked that the vector is correct during initialization, even when the polygon is wrongly drawn. I'm using Cocos2d. The class member:

        @interface Polygon : CCSprite {
            std::vector<float> triangleVertices;
        }

    The draw function called in [Polygon draw]:

        + (void)drawTrianglesWithVertices:(const std::vector<float> &)v {
            //glEnableClientState(GL_VERTEX_ARRAY);
            glDisable(GL_TEXTURE_2D);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_COLOR_ARRAY);

            glVertexPointer(2, GL_FLOAT, 0, &v[0]);
            glDrawArrays(GL_TRIANGLES, 0, v.size());

            //glDisableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnable(GL_TEXTURE_2D);
        }

    Any ideas?

    Read the article

  • I finished "Beginning Android Games", should I use its framework?

    - by orod
    I've worked through Mario Zechner's "Beginning Android Games" and have made my own Pong and Asteroids games using the framework from the book. I have also downloaded the source code for Replica Island and am able to run that. I prefer Replica Island's framework to the one I made from reading the book. Some differences are that Replica Island uses a different activity for each screen instead of Zechner's Screen class, and that Replica Island can use a lot of textures and isn't limited to textures whose dimensions are powers of 2. If I'm serious about writing games and apps for Android, should I learn Replica Island's framework and use that instead of the one I made while reading Zechner's book?

    Read the article

  • What do you use to create sprite graphics? [closed]

    - by SimpleRookie
    Possible Duplicate: What tools do you use for 2D art/sprite creation? What do you folks suggest for creating sprite graphics and sprite sheets? I fiddle with Pixelformer and Tile Studio. Pixelformer has a great interface and makes it quick and easy to produce graphics, but it is a bit cumbersome if you want to make a sprite map. Tile Studio is an interesting mix of tiles and maps, but it is a bit buggy and basic. The Adobe products just don't really seem to handle tiny graphics well. (There is a previous posting of this question, but it is a year old and I was hoping for further/updated input from the community.)

    Read the article

  • Farseer circle hangs where it's spawned

    - by necrosmash
    I'm currently trying to simply spawn a circle in Farseer. However, it's stuck wherever I spawn it! The game is updating fine, as I can see the circle spinning in place when I spawn it because of how I currently have gravity set up (following code from Game1.cs):

        // Initialise the screen center for use with
        // the Level class
        screenCenter = new Vector2(graphics.GraphicsDevice.Viewport.Width / 2f,
                                   graphics.GraphicsDevice.Viewport.Height / 2f);

        world = new World(new Vector2(20, 20));
        currentLevel = new Level1(screenCenter, circleSprite, groundSprite, ref world);

    Level1 constructor:

        public Level1(Vector2 screenCenter, Texture2D circleSprite, Texture2D groundSprite, ref World world)
        {
            player = new Player(ref world, screenCenter, circleSprite);
            ground = new Ground(ref world, screenCenter, groundSprite);

            listLevelItems = new List<LevelItem>();
            listLevelItems.Add(player);
            listLevelItems.Add(ground);
        }

    Player constructor:

        public Player(ref World world, Vector2 screenCenter, Texture2D sprite)
        {
            setSprite(sprite);
            setPosition((screenCenter / MeterInPixels) + new Vector2(0f, 0f));

            playerBody = BodyFactory.CreateCircle(world, 96f / (2f * MeterInPixels), 1f, playerPosition);
            getBody().BodyType = BodyType.Dynamic;

            // Ball bounce and friction
            getBody().Restitution = 0.3f;
            getBody().Friction = 0.5f;
        }

    If I use a breakpoint and change the playerBody position while the game is halted, the ball does move, but stays fixed in its new location. Any help would be greatly appreciated.
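
    One thing worth checking (a hedged sketch, not a confirmed fix; the field names are assumptions): if the sprite is drawn from a position that is only set in the constructor, it will stay at its spawn point even though the body is simulating. Copying the body's position back each frame, e.g. in a hypothetical Player.Update, keeps the drawn circle in sync:

        public void Update()
        {
            // Farseer simulates in meters; convert back to pixels so the sprite
            // follows playerBody instead of the position cached at spawn time.
            playerPosition = playerBody.Position * MeterInPixels;
            playerRotation = playerBody.Rotation;
        }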

    Read the article

  • Why do I have to divide the origin of a quad by 4 instead of 2?

    - by vinzBad
    I'm currently transitioning from C#/XNA to C#/OpenTK, but I'm getting stuck at the basics. So I have this Sprite class:

        public static bool EnableDebugDraw = true;

        public float X;
        public float Y;
        public float OriginX = 0;
        public float OriginY = 0;
        public float Width = 0.1f;
        public float Height = 0.1f;
        public Color TintColor = Color.Red;
        float _layerDepth = 0f;

        public void Render()
        {
            Vector2[] corners =
            {
                new Vector2(X - OriginX, Y - OriginY),                  // top left
                new Vector2(X + Width - OriginX, Y - OriginY),          // top right
                new Vector2(X + Width - OriginX, Y + Height - OriginY), // bottom right
                new Vector2(X - OriginX, Y + Height - OriginY)          // bottom left
            };

            GL.Color3(TintColor);
            GL.Begin(BeginMode.Quads);
            {
                for (int i = 0; i < 4; i++)
                    GL.Vertex3(corners[i].X, corners[i].Y, _layerDepth);
            }
            GL.End();

            if (EnableDebugDraw)
            {
                GL.Color3(Color.Violet);
                GL.PointSize(3);
                GL.Begin(BeginMode.Points);
                {
                    for (int i = 0; i < 4; i++)
                        GL.Vertex2(corners[i]);
                }
                GL.End();

                GL.Color3(Color.Green);
                GL.Begin(BeginMode.Points);
                GL.Vertex2(X + OriginX, Y + OriginY);
                GL.End();
            }
        }

    With the following setup I try to set the origin of the quad to the middle of the quad:

        _sprite.OriginX = _sprite.Width / 2;
        _sprite.OriginY = _sprite.Height / 2;

    but this sets the origin to the upper right corner of the quad, so I have to use

        _sprite.OriginX = _sprite.Width / 4;
        _sprite.OriginY = _sprite.Height / 4;

    However, this is not the intended behaviour. Could you advise me on how to fix this?

    Read the article

  • Is it safe to run multiple XNA ContentManager instances on multiple threads?

    - by Boinst
    My XNA project currently uses one ContentManager instance, and one dedicated background thread for loading all content. I wonder, would it be safe to have multiple ContentManager instances, each in its own dedicated thread, loading different content at the same time? I'm prompted to ask this question because this article makes the following statement: "If there are two textures created at the same time on different threads, they will clobber the other and you will end up with some garbage in the textures." I think what the author is saying here is that if I access one ContentManager simultaneously on two threads, I'll get garbage. But what if I have separate ContentManager instances for each thread? If no one knows the answer already from experience, I'll go ahead and try it and see what happens.
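
    (Not an answer from the article, just a hedged sketch of a defensive pattern; the method and lock names are hypothetical.) One cautious approach is to give each loading thread its own ContentManager but still serialize the actual Load calls with a shared lock, so the graphics device never creates two resources at exactly the same time:

        static readonly object LoadLock = new object();

        Texture2D LoadTextureSafely(ContentManager content, string assetName)
        {
            // Each thread uses its own ContentManager instance, but the Load
            // call itself is serialized to avoid concurrent device access.
            lock (LoadLock)
            {
                return content.Load<Texture2D>(assetName);
            }
        }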

    Read the article

  • Multiple ( V- / I- ) Buffers, is it sane?

    - by Techie
    Currently I am developing an RTS game using XNA (/ANX.Framework). There is one thing that bothers me: I am not sure how to organise my buffers. Should I use a new VertexBuffer for every object (e.g. a character, a table, a model), or is it better to use one big buffer to store all the geometry? I am still new to 3D programming, though I have already finished a couple of games using DirectX 9. I hope this question isn't a duplicate, and I appreciate any answer pointing me in the right direction.
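
    A hedged sketch of the second option in XNA terms (the vertex type, the CombineStaticGeometry helper, and the offset variables are assumptions): many static objects can share one large vertex buffer, with each object remembering only its own range within it:

        // Build one shared buffer from all static meshes up front...
        VertexPositionNormalTexture[] allVertices = CombineStaticGeometry(meshes);
        var sharedBuffer = new VertexBuffer(GraphicsDevice,
                                            typeof(VertexPositionNormalTexture),
                                            allVertices.Length,
                                            BufferUsage.WriteOnly);
        sharedBuffer.SetData(allVertices);

        // ...then draw each object from its own range of that buffer
        // (inside an effect pass, e.g. BasicEffect).
        GraphicsDevice.SetVertexBuffer(sharedBuffer);
        GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList,
                                      objectStartVertex,
                                      objectTriangleCount);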

    Read the article

  • engine for responsive gameplay

    - by zaftcoAgeiha
    Many games have been praised for their responsive gameplay, where each user input corresponds to a quick and precise character movement (e.g. Super Meat Boy, Shank...). What makes those games responsive, and what prevents other games from achieving the same? How much of it is due to the game framework used to queue mouse/keyboard events and render/update the game, and how much is attributed to better coding?

    Read the article

  • Game 30% done on HTML5. Maybe it was a bad idea. Should I change to Unity3d? [on hold]

    - by Dokkat
    I'm creating a 3d game in HTML5. It's 30% complete and the hard part is already coded. The server runs on node.js. Now I'm realizing that maybe it was not a wise choice, because:

    - Three.js still has many bugs. I don't see the same thing on every machine; each browser and OS can give different results. I'm afraid my players will have a lot of trouble getting my game to run properly.
    - I have tons of sprites and models in my game, and I wonder if players will have to load all of them again every time they want to play.
    - I wonder if a Node.js server will be fast enough to handle it, and I'm afraid it won't be scalable.

    What would you advise? Should I continue and finish the game in HTML5, or is it better to remake it with something else, like Unity3d for the client and (what?) for the server?

    Read the article

  • Collisions and Lists

    - by user50635
    I've run into an issue that breaks my collisions. Here's my method: gather input, project the rectangle, check for intersection and IsPassable, then update. The update step is built on object_position * seconds_passed * velocity * speed. Input changes the velocity, which is normalized if it is 1. This method works well when comparing against just one object; however, when I pass a list to the collision detector in a for loop, the velocity gets changed back to non-zero as soon as the list hits an object that passes the test, and the object can pass through. Any solutions would be much appreciated. Side note: is there a more proper way to simulate movement?
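
    A hedged sketch of one likely culprit (the helper and field names here are hypothetical): if the velocity is set inside the loop, a later object that does not collide can overwrite the result from one that does, so the blocked state should be accumulated and applied once after the loop:

        bool blocked = false;
        Rectangle projected = ProjectRectangle(playerBounds, velocity, secondsPassed);

        foreach (var obj in levelObjects)
        {
            // Remember any blocking hit; never un-set the flag for later objects.
            if (!obj.IsPassable && projected.Intersects(obj.Bounds))
            {
                blocked = true;
                break;
            }
        }

        if (blocked)
            velocity = Vector2.Zero;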

    Read the article

  • Moving two objects proportionally

    - by SSL
    I'm trying to move two objects away from each other at a proportional distance, but on different scales, and I'm not quite sure how to do this. Object A can go from position 0.1 to 1. Object B has no limits. If Object B is decreasing, then Object A should decrease at rate R. Likewise, if Object B is increasing, then Object A increases at rate R. How can I tie these two object positions together so that in an update loop they automatically update their positions? I tried using:

        ObjA.Pos += 0.001f * ObjB.VelocityY; // 0.001f is the rate

    This works, but there's an error each time it runs. ObjA starts off at its max position 1, but then the next time it will stop at 0.97, 0.94, 0.91, etc. This is due to the 0.001f rate I put in. Is there a way to control the rate, yet not end up with the rounding error?
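
    A hedged sketch of one way around the drift (PosY and the BasePos fields are hypothetical reference values captured once at the start): derive Object A's position from Object B's absolute position instead of accumulating scaled velocity each frame, then clamp it to A's allowed range:

        const float Rate = 0.001f; // assumed rate R

        // Recompute A from B's total displacement rather than integrating velocity,
        // so floating-point error cannot accumulate frame after frame.
        ObjA.Pos = MathHelper.Clamp(ObjA.BasePos + Rate * (ObjB.PosY - ObjB.BasePosY), 0.1f, 1f);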

    Read the article

  • How can a pygame image be colored?

    - by Juicy
    I'm writing a 2d particle system for a game in Pygame[1]. For the particles, I have an image surface loaded from a file -- basically a white primitive drawn over a transparent background. I'd like the particle engine to emit variously colored particles, but I'm not sure how to tell Pygame to color the surface. I've looked through what passes for documentation, but I'm having trouble finding anything. [1] Yeah, I don't really like Pygame, but my course insists I write this project in Python.

    Read the article

  • moving in the wrong direction

    - by Will
    Solution: To move a unit forward:

        forward = Quaternion(0, 0, 0, 1)
        rotation.normalize()  # occasionally
        ...
        pos += ((rotation * forward) * rotation.conjugated()).xyz().normalized() * speed

    I think the trouble stemmed from how the Euclid math library was doing Quaternion*Vector3 multiplication, although I can't see it. I have a vec3 position, a quaternion for rotation and a speed. I compute the player position like this:

        rot *= Quaternion().rotate_euler(0., roll_speed, pitch_speed)
        rot.normalize()
        pos += rot.conjugated() * Vector3(0., 0., -speed)

    However, printing the pos to the console, I can see that I only ever seem to travel on the x-axis. When I draw the scene using the rot quaternion to rotate my camera, it shows a proper orientation. What am I doing wrong? Here's an example:

    - You start off with rotation being an identity quaternion: w=1, x=0, y=0, z=0
    - You move forward; the code correctly decrements the Z
    - You then pitch right over to face the other way; if you spin only 175deg it'll go in the right direction; you have to spin past 180deg. It doesn't matter which direction you spin in, up or down, though
    - Your quaternion can then be something like: w=0.1, x=0.1, y=0, z=0
    - And moving forward, you actually move backward?!

    (I am using the euclid Python module, but it's the same as every other conjugate.) The code can be tried online at http://williame.github.com/ludum_dare_24_evolution/ The only keys that adjust the speed are W and S. The arrow keys only adjust the pitch/roll. At first you can fly OK, but after a bit of weaving around you end up getting sucked towards one of the sides. The code is https://github.com/williame/ludum_dare_24_evolution/blob/cbacf61a7159d2c83a2187af5f2015b2dde28687/tiny1web.py#L102

    Read the article

  • Application window as polygon texture?

    - by nekome
    Is there a way, or method, to have some application rendered as a texture on a polygon in a 3D scene, and also have full interactivity with it? I'm talking about the Windows platform, and maybe OpenGL, but I guess it doesn't matter whether it's OGL or DX. For example: I run Calculator using WINAPI functions (preferably hidden, not showing on the desktop) and I want to render it inside a 3D scene on some polygon, but still be able to type or click buttons and have it respond. My idea to realize this is to have WINAPI take a screenshot (or render it to memory if possible) of that Calculator and pass it to OpenGL as a texture each frame (I'm experimenting with SDL through pygame), and for mouse interactivity to use coordinate translation to calculate where on the application window the action would land, and then use WINAPI functions such as SetCursorPos to set the cursor and others to simulate a click or something else. I haven't found any tutorials with a topic similar to this one. Am I on the right track? Is there a better way to do this, if it is possible at all?

    Read the article

  • background animation algorithm for single screen

    - by becool_max
    I'm writing a simple strategy game (in XNA) and would like to have an animated background. In my game all the action happens on one screen, and thus the standard parallax effect does not look appropriate. However, I found a video of a game with a suitable background animation for my game: http://www.youtube.com/watch?v=Vcxdbjulf90&feature=share&list=PLEEF9ABAB913946E6 (from 3 to 6 s, while the main character stays in the same place). What is the algorithm for doing this? It would be nice if someone could provide a reference to a similar example (the language is not important).
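
    I can't tell exactly what that game does from the video, so the following is only a hedged sketch of one common XNA technique (scrollOffset and scrollSpeed are assumed Vector2 fields): draw the background with a wrapping sampler state and slowly slide the source rectangle, which gives a continuously drifting backdrop on a single static screen:

        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                          SamplerState.LinearWrap, null, null);
        spriteBatch.Draw(backgroundTexture,
                         Vector2.Zero,
                         // Offsetting the source rectangle wraps the texture,
                         // so the image appears to drift without ever ending.
                         new Rectangle((int)scrollOffset.X, (int)scrollOffset.Y,
                                       backgroundTexture.Width, backgroundTexture.Height),
                         Color.White);
        spriteBatch.End();

        scrollOffset += scrollSpeed * (float)gameTime.ElapsedGameTime.TotalSeconds;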

    Read the article

  • Multiply mode in SpriteBatch

    - by ashes999
    I have a "lighting" texture (black background with white or colours for lights) that I want to draw as a multiplcation operation. SpriteBatch.Begin can specify BlendState.Additive, but there's no BlendState.Multiplicative. I also tried the solution in this answer, but it didn't work -- even when I (incorrectly?) changed the code to work with XNA 4 style ColorDestinationBlend, I ended up with the final solution being inverted (black area where the light is, everything else is visible). I initially thought of a shader, but I couldn't get shaders to work with MonoGame, so I'm falling back to SpriteBatch.

    Read the article

  • How do I get mouse x / y of the world plane in Unity?

    - by Discipol
    I am trying to make a tiled landscape. The terrain itself is not made from tiles, but the world has a grid which I define. I would like to place boxes/rectangles which snap to this grid at runtime, but to do that I must project from the user's screen to real-world coordinates. I have tried various examples using the Ray class, but nothing worked: it compiles and outputs a constant value no matter where I put the mouse. I have tried adding some tiles and detecting them, but no luck. I also tried one plane as big as my world, but still no luck. I am using C#, but even a JS version would be helpful. This technique involves calculating which tile the mouse is over from the x and y positions. Perhaps detecting which tile itself is being pointed at would be a simpler task, at which point I can just retrieve its i/j properties. Update: I got it working thanks to some answers, but the ball freaks out towards the far end of the plane. Why is this? https://www.dropbox.com/sh/9pqnl30lm6uwm6h/AACc2JcbW16z6PuHFLLfCAX6a
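
    A hedged sketch of the mathematical-plane approach in Unity C# (tileSize is an assumed field, and the ground is assumed to lie on y = 0): cast the mouse ray against a Plane and convert the hit point to grid indices:

        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        Plane ground = new Plane(Vector3.up, Vector3.zero);

        float distance;
        if (ground.Raycast(ray, out distance))
        {
            Vector3 hit = ray.GetPoint(distance);

            // Snap the world-space hit point to the grid.
            int i = Mathf.FloorToInt(hit.x / tileSize);
            int j = Mathf.FloorToInt(hit.z / tileSize);
        }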

    Read the article

  • Why doesn't this ContentLoader project (VS2010 e) recognize Microsoft.Build.dll?

    - by IAbstract
    I am working with an XNA content loader sample. The references for the project (VS 2010 Express) include Microsoft.Build and Microsoft.Build.Framework, as well as the standard XNA framework and graphics references. To emulate this project, I am trying to first add a reference to Microsoft.Build.dll, but Visual Studio warns me that it cannot load the .dll. I looked at MSDN, and the document referenced Microsoft.Build.Evaluation. This is supposed to be available in Microsoft.Build.dll, and then I'll have access to the Project class. Has anyone had any experience with this?

    Read the article

  • Using 2d collision with 3d objects

    - by Lyise
    I'm planning to write a fairly basic scrolling shoot 'em up; however, I have run into a question with regard to checking for collisions. I plan to have a fixed top-down view, where the player and enemies are all 3d objects on a fixed plane, and when the enemy or player fires at the other, their shots will also be along this fixed plane. In order to handle the collision, I have read up a bit on collision detection in 3d, as it is not something I have looked into previously, but I'm not sure what would be ideal for this situation. My options appear to be:

    - Sphere collision; however, this lacks the pixel precision I would like.
    - Detection using all vertices and planes of each object, but this seems overly convoluted for a fixed plane of play.
    - Rendering the play screen in black and white (where white is an object, black is empty space), once for enemies and once for the player, and checking for collisions that way (if a pixel is white in both, there is a collision).

    Which of these would be the best approach, or is there another option that I am missing? I have done this previously using 2d sprites, but I can't use the same thinking here as I don't have the image to refer to.
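
    Since play is locked to one plane, a useful baseline (a hedged sketch, not a recommendation over the options above; the method is hypothetical) is to collapse each object to its X/Z coordinates and do ordinary 2D tests, such as circle versus circle, before worrying about per-pixel precision:

        static bool Collides(Vector3 a, float radiusA, Vector3 b, float radiusB)
        {
            // Ignore the axis perpendicular to the play plane and test in 2D.
            Vector2 pa = new Vector2(a.X, a.Z);
            Vector2 pb = new Vector2(b.X, b.Z);

            float combined = radiusA + radiusB;
            return Vector2.DistanceSquared(pa, pb) <= combined * combined;
        }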

    Read the article

  • What is the correct install process to setup Node.js with Windows Azure Emulator

    - by PazoozaTest Pazman
    This question is related to this question: Node.js running under IIS Express Keeps Crashing, for which I need help reinstalling and getting node.js up and running in the Windows Azure emulator. Hello, I am reinstalling my machine: a Toshiba laptop, 2 GB RAM, 32-bit processor. What is the correct procedure, from start to finish, to get node.js development working? So far nothing has worked, and the emulator (IIS Express) worker process keeps crashing; no matter how many instances I run, they all end up crashing. Up until two weeks ago my node development was working fine, but I had to do a reinstall, and since then I haven't been doing any node.js development in the Windows emulator because the latest June 2012 Azure SDK for Node.js is buggy. These are the steps I have taken:

    1) Reformat HD
    2) Insert Windows 7 N SP1 CD
    3) Reboot machine into CD installation
    4) Follow and wait until Windows 7 is installed
    5) Run Add/Remove Programs + enable IIS + IIS management tools
    6) Run Windows Update (installed about 53 updates)
    7) Go here: http://www.windowsazure.com/en-us/develop/nodejs/
    8) Click Windows Installer June 2012 and install Windows Azure SDK for Node.js - June 2012
    9) Run Azure PowerShell
    10) Navigate to c:\node\testSite\webrole1
    11) Launch site: start-azureemulator -launch
    12) Play around on website (then crash!)

    Problem signature:
        Problem Event Name: APPCRASH
        Application Name: iisexpress.exe
        Application Version: 8.0.8298.0
        Application Timestamp: 4f620349
        Fault Module Name: iiscore.dll
        Fault Module Version: 8.0.8298.0
        Fault Module Timestamp: 4f63b65c
        Exception Code: c0000005
        Exception Offset: 00021767
        OS Version: 6.1.7601.2.1.0.256.28
        Locale ID: 1033
        Additional Information 1: f66d
        Additional Information 2: f66d807b515d6b2dc6f28f66db769a01
        Additional Information 3: 7b2f
        Additional Information 4: 7b2f6797d07ebc2c23f2b227e779722e

    Am I missing a step in my reinstall process? Do I have all the required files to do node.js Windows Azure emulator development? Why is IIS Express crashing all the time? Can I still do node.js Windows Azure emulator development without using IIS Express, and instead use the local IIS 7.x that ships with my Windows 7 N (SP1)?

    Read the article

  • How do I connect the seams between my terrain?

    - by gnomgrol
    I'm using C++ and D3D11, and I'm trying to create a (pretty) large terrain, let's say 4096x4096, maybe larger. I've got the basics of terrain creation working and have already split the terrain into chunks. But when I render them (every chunk has its own vertex and index buffer, as well as its own heightmap), there are still little pieces missing between them. I have read a lot about LOD (Level of Detail) and GMM (Geometry Mipmaps), but I can't really implement the theory I read. At the moment, it looks like this: I could really use some help; everything is welcome. If you have some good tutorials on any of this, please share them.

    Read the article

  • Android Java: Way to effectively pause system time while debugging?

    - by TheMaster42
    In my project, I call nanoTime and use that to get a deltaTime which I pass to my entities and animations. However, while debugging (for example, stepping through my code), the system time on my phone is happily chugging along, so it's impossible to look at, say, two sequential frames of data in the debugger (since by the time I'm done looking at the first frame, the system time has continued to move ahead by seconds or even minutes). Is there a programming practice or method to pause the system clock (or a way for my code to intercept and fake my deltaTime) whenever I pause execution from the debugger? Additional Information: I'm using Eclipse Classic with the ADT plugin and a Samsung SII, coding in Java. My code invoking nanoTime: http://pastebin.com/0ZciyBtN I do all display via a Canvas object (2D sprites and animations).

    Read the article

  • OpenGL : Keeping alpha in a render buffer

    - by Cyan
    In my current task, I need to render a texture into a render buffer in order to work on it (apply special filters) there. The result is then considered a "new texture", which is later displayed. This works fine, except when the texture contains some transparent/semi-transparent parts. My current guess is that, within the render buffer, the texture is "merged" with a kind of "grey background". In this case, it obviously impacts the R, G, B colour components of transparent pixels. I've yet to find a way around this. Even manually assigning alpha after the rendering process doesn't save the day for semi-transparent pixels, whose RGB values are "tainted" by the grey background.
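
    A hedged sketch of one thing to try (shown with OpenTK's C# bindings, since the calls map one-to-one onto plain OpenGL; whether this matches the actual filter pipeline here is an assumption): clear the off-screen buffer with alpha 0 and use a separate blend function for the alpha channel, so destination alpha accumulates instead of being blended against the clear colour. The more complete fix in many pipelines is premultiplied alpha end to end.

        GL.ClearColor(0f, 0f, 0f, 0f);            // fully transparent clear, no grey
        GL.Clear(ClearBufferMask.ColorBufferBit);

        GL.Enable(EnableCap.Blend);
        // RGB blended as usual, but alpha written as srcA + (1 - srcA) * dstA,
        // so transparent texels don't drag the clear colour into the result.
        GL.BlendFuncSeparate(BlendingFactorSrc.SrcAlpha,
                             BlendingFactorDest.OneMinusSrcAlpha,
                             BlendingFactorSrc.One,
                             BlendingFactorDest.OneMinusSrcAlpha);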

    Read the article

  • Modular spaceship control

    - by SSS
    I am developing a physics-based game with spaceships. A spaceship is constructed from circles connected by joints. Some of the circles have engines attached. Engines can rotate around the center of their circle and create thrust. I want to be able to move the ship in a direction, or rotate it around a point, by setting the rotation and thrust of each of the ship's engines. How can I find the rotation and thrust needed for each engine to achieve this?

    Read the article

  • Zelda-style top-down RPG. How to store tile and collision data?

    - by Delerat
    I'm looking to build a Zelda: LTTP style top-down RPG. I've read a lot on the subject and am currently going back and forth between a few solutions. I'm using C#, MonoGame, and Tiled. For my tile maps, these are the choices I can see in front of me:

    - Store each tile as its own array, each one having 3-4 layers, texture/animation, depth, flags, and maybe collision (depending on how I do it). I've read warnings about memory issues going this route, and my biggest map will probably be 160x120 tiles; my average map, however, will be about 40x30. The number of tiles might be cut in half if I decide to double my tile size, which is currently 16x16. This is the most appealing approach for me, as I feel like I would know how to save maps, make changes, and separate the map into chunks for collision checks.
    - Store the static parts of my tile map in multiple arrays acting as the different layers, and then use entities for anything that isn't static. All of the other tile data, such as collisions, depth, etc., would be stored in their own layers as well, I guess. This way just seems messy to me, though.

    Regardless of which one I choose, I'm also unsure how to plan all of that other tile data. I could write a bunch of code that knows which integer represents which tile and its data, but if I changed a tileset in Tiled and exported it again, all of those integers could potentially change and I'd have to adjust a whole bunch of code.

    My other issue is how to do collision. I want to at least support angled collision that slides you around the corners of objects like LTTP does, if not more oddball shapes as well. So do I:

    - Store collision as a flag for binary collision? Could I get this to support angles? Would it be fine to store collision as an integer and have each number represent a certain angle of collision?
    - Store a list of rectangles or other shapes and do collision that way?

    Sorry for the large two-part (three-part?) question. I felt like these needed to be asked together, as I believe each choice influences the others.
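
    A hedged sketch of what the first option might look like as a compact value type (all field and enum names here are hypothetical, not from Tiled's format); even at 160x120 tiles, a small per-tile struct stays in the range of a few hundred kilobytes:

        [Flags]
        enum TileFlags : byte { None = 0, Animated = 1, Passable = 2, Trigger = 4 }

        struct Tile
        {
            public int TextureIndex;    // which tileset region to draw
            public int Depth;           // draw order / layer depth
            public TileFlags Flags;     // animated, passable, trigger, ...
            public byte CollisionType;  // 0 = none, otherwise an index into a table of shapes/angles
        }

        // 160 x 120 tiles of roughly 12 bytes each is well under a megabyte.
        Tile[,] map = new Tile[160, 120];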

    Read the article
