Search Results

Search found 658 results on 27 pages for 'sprites'.

Page 24/27 | < Previous Page | 20 21 22 23 24 25 26 27  | Next Page >

  • Which techniques to study?

    - by Djentleman
    Just to give you some background info, I'm studying a programming major at a tertiary level and am in my third year, so I'm not a newbie off the street. However, I am still quite new to game programming as a subset of programming. One of my personal projects for next semester is to design and create a 2D platformer game with an emphasis on procedural generation and "neato" effects (think metroidvania). I've written up a list of some techniques to help me improve my personal skills (using XNA for the time being). The list is as follows:

    QuadTrees: Build a basic program in XNA that moves basic 2D sprites (circles and squares) around a set path at a set speed and changes their colour when they collide. Add functionality to add and delete objects of different sizes (select a direction and speed when adding, and just drag and drop them in).

    Particles: Build a basic program in XNA in which you can select different colours and create particle effects of those colours on screen by clicking and dragging the mouse around (simple particles emerging from where the mouse is clicked). Add functionality where you can change the number of particles to be drawn, the speed at which they travel, and when they expire. Possibly implement gravity and wind after part 3 is complete.

    Physics: Build a basic program in XNA where you have a ball in a set 2D environment, a wind slider, and a gravity slider (which can go negative for reverse gravity). You can click to drag the ball around and release to throw it and, depending on what you do, the ball interacts with the environment. Implement other shapes afterwards.

    Random 2D terrain generation: Build a basic program in XNA that randomly generates terrain (including hills, caves, etc.) created from 2D tiles. Add functionality that draws the tiles from a tileset and places different tiles depending on where they lie on the y-axis (dirt on top, then rock, then lava, etc.).

    Randomised objects: Build a basic program in XNA that, when a button is clicked, displays a randomised item sprite based on parameters (type, colour, etc.) with the images pulled from tilesets. Add the ability to save the item as an object, which stores it in a side pane where it can be selected for viewing.

    Movement: Build a basic program in XNA where you can move an object around in a tile-based environment with a camera that pans with it. No gravity. Then implement gravity and wind, and allow the character to jump and fall onto some basic platforms.

    So my question is this: are there any other commonly used techniques I should research, and can I get some suggestions as to the effectiveness of the techniques I've chosen to work on (e.g., don't do QuadTree stuff because [insert reason here], or do [insert technique here] before you start working on particles because [insert reason here])? I hope this is clear enough, and please let me know if I can further clarify anything!
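    For the QuadTree exercise in particular, a minimal sketch of the data structure it is aiming at might look like the following. This is only an illustration: the class shape, the capacity of four, and the "insert into every overlapping child" strategy are assumptions, not anything from the post.

        // Minimal quadtree sketch (assumed names). Nodes hold items until a capacity
        // is exceeded, then split into four children; queries return collision candidates.
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        public class QuadTree
        {
            private const int Capacity = 4;
            private readonly Rectangle bounds;
            private readonly List<Rectangle> objects = new List<Rectangle>();
            private QuadTree[] children;

            public QuadTree(Rectangle bounds) { this.bounds = bounds; }

            public void Insert(Rectangle item)
            {
                if (!bounds.Intersects(item)) return;
                if (children == null && objects.Count < Capacity) { objects.Add(item); return; }
                if (children == null) Split();
                foreach (var child in children) child.Insert(item); // may land in several children
            }

            private void Split()
            {
                int w = bounds.Width / 2, h = bounds.Height / 2;
                children = new[]
                {
                    new QuadTree(new Rectangle(bounds.X,     bounds.Y,     w, h)),
                    new QuadTree(new Rectangle(bounds.X + w, bounds.Y,     w, h)),
                    new QuadTree(new Rectangle(bounds.X,     bounds.Y + h, w, h)),
                    new QuadTree(new Rectangle(bounds.X + w, bounds.Y + h, w, h)),
                };
                foreach (var item in objects) Insert(item); // redistribute existing items
                objects.Clear();
            }

            // Collects candidates overlapping 'query'; duplicates are possible and the
            // precise collision test happens afterwards on the caller's side.
            public void Query(Rectangle query, List<Rectangle> results)
            {
                if (!bounds.Intersects(query)) return;
                foreach (var item in objects)
                    if (item.Intersects(query)) results.Add(item);
                if (children != null)
                    foreach (var child in children) child.Query(query, results);
            }
        }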

    Read the article

  • What is the start point in game development? Where to start?

    - by Dragon
    I understand I'm not unique with such a question; there are a lot of questions like this one. But I hope you'll take a minute and maybe give me a piece of advice. I have an idea to develop games, but I don't know where the start point in game development is. The learning curve isn't as straight as in learning a programming language, but I want to give it a try. I have some experience with OOP and programming in general. I know (not too deeply) the C# and Java programming languages.

    I searched for info on where to start and read a lot of blogs, forums and so on. At one point I decided, "stop wandering around, just start developing a game", and I started. At the moment I have a console version of a very simple game (RPS - rock-paper-scissors) developed in C#. It has different modes: "player vs cpu" and "player vs player". Some time later I looked at the code and decided that it should be refactored or even redeveloped from scratch, and I thought at the time, "A GUI is what I need. I can add logic later." And now I'm here. I've already decided to make RPS with a GUI, then make it multiplayer, and so on. I'm not thinking about 3D now; 2D is enough. It doesn't matter which language I use, C# or Java - I found frameworks for both, XNA (C#) and Slick (Java), and both are good for 2D game development. But I know nothing about sprites, how to place objects on the screen, and so on. You can say "you don't need that for such a simple game as RPS", but RPS is just the beginning; I have other ideas, like a "Tower Defense" game... you know, everybody has ideas and wishes... and this knowledge is useful and in some ways obligatory.

    So what is the start point for achieving my plans, ideas and wishes? Where do I start? Is it possible to make the game development learning curve a little straighter? Or are there approaches that amateurs and game development beginners have been using for years? Thank you for your answers and advice in advance.

    P.S. Sorry that this post turned out to be an essay, but I tried to express my wish to start acting. I hope I managed to do it.

    Read the article

  • Draw a never-ending line in XNA

    - by user2236165
    I am drawing a line in XNA which I want to never end. I also have a tool that moves forward in the X direction and a camera which is centered on this tool. However, when I reach the end of the viewport the lines are not drawn anymore. Some pictures illustrate my problem: at the start the line goes across the whole screen, but as my tool moves forward, we reach the end of the line. Here is the method that draws the lines:

        private void DrawEvenlySpacedSprites(Texture2D texture, Vector2 point1, Vector2 point2, float increment)
        {
            var distance = Vector2.Distance(point1, point2);  // the distance between two points
            var iterations = (int)(distance / increment);     // how many sprites will be drawn
            var normalizedIncrement = 1.0f / iterations;      // the Lerp method needs values between 0.0 and 1.0
            var amount = 0.0f;

            if (iterations == 0)
                iterations = 1;

            for (int i = 0; i < iterations; i++)
            {
                var drawPoint = Vector2.Lerp(point1, point2, amount);
                spriteBatch.Draw(texture, drawPoint, Color.White);
                amount += normalizedIncrement;
            }
        }

    Here is the Draw method in my Game class. The dots are my lines:

        protected override void Draw(GameTime gameTime)
        {
            graphics.GraphicsDevice.Clear(Color.Black);

            nyVector = nextVector(gammelVector);

            GraphicsDevice.SetRenderTarget(renderTarget);
            spriteBatch.Begin();
            DrawEvenlySpacedSprites(dot, gammelVector, nyVector, 0.9F);
            spriteBatch.End();
            GraphicsDevice.SetRenderTarget(null);

            spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, null, null, null, null, camera.transform);
            spriteBatch.Draw(renderTarget, new Vector2(), Color.White);
            spriteBatch.Draw(tool, new Vector2(toolPos.X - (tool.Width / 2), toolPos.Y - (tool.Height / 2)), Color.White);
            spriteBatch.End();

            gammelVector = new Vector2(nyVector.X, nyVector.Y);

            base.Draw(gameTime);
        }

    Here is the nextVector method. It just finds a new point where the line should be drawn, with a new X coordinate 100 to 200 pixels further along and a random Y coordinate between the old vector's Y coordinate and the height of the viewport:

        Vector2 nextVector(Vector2 vector)
        {
            return new Vector2(vector.X + r.Next(100, 200), r.Next((int)(vector.Y - 100), viewport.Height));
        }

    Can anyone point me in the right direction here? I'm guessing it has to do with viewport.Width, but I'm not quite sure how to solve it. Thank you for reading!
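    One possible direction, sketched under assumptions of my own (the segment list, the Update hook, and the "one viewport ahead" threshold are not from the post): keep a list of generated segments, extend it whenever the tool gets close to the last generated point, and draw the segments inside the camera-transformed SpriteBatch pass instead of baking them into a fixed-size render target that the line eventually runs off.

        // Sketch only. Assumes 'using System;' and 'using System.Collections.Generic;'
        // at the top of the Game class file, plus the fields from the post.
        private readonly List<Tuple<Vector2, Vector2>> segments = new List<Tuple<Vector2, Vector2>>();

        private void ExtendLine()
        {
            // Call from Update(): keep generating points until the line reaches
            // well past the right edge of the camera's view of the tool.
            while (nyVector.X < toolPos.X + viewport.Width)
            {
                gammelVector = nyVector;
                nyVector = nextVector(gammelVector);
                segments.Add(Tuple.Create(gammelVector, nyVector));
            }
        }

        private void DrawLine()
        {
            // Call between the camera-transformed spriteBatch.Begin()/End() in Draw(),
            // so segments are drawn in world space rather than into the render target.
            foreach (var segment in segments)
                DrawEvenlySpacedSprites(dot, segment.Item1, segment.Item2, 0.9f);
        }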

    Read the article

  • Why is Spritebatch drawing my Textures out of order?

    - by Andrew
    I just started working with XNA Studio after programming 2D games in Java. Because of this, I have absolutely no experience with SpriteBatch and sprite sorting. In Java, I could just layer the images by calling the draw methods in order. For a while, my SpriteBatch was working fine in deferred sorting mode, but when I made a change to one of my textures, it suddenly started drawing them out of order. I have searched for a solution to this problem, but nothing seems to work. I have tried adding layer depths to the sprites and changing the sort mode to BackToFront, FrontToBack or even Immediate, but nothing seems to work. Here is my drawing code:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Gray);
            Game1.spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.PointClamp, null, null);

            for (int x = 0; x < 5; x++)
            {
                for (int y = 0; y < 5; y++)
                {
                    region[x, y].draw(((float)w / aw)); // Draws the tile-based background
                }
            }

            player.draw(spriteBatch, ((float)w / aw)); // Draws the character (this method is where the problem occurs)
            enemy.draw(spriteBatch, (float)w / aw);    // Draws a basic enemy

            Game1.spriteBatch.End();
            base.Draw(gameTime);
        }

    The player.draw method:

        public void draw(SpriteBatch sb, float ratio)
        {
            // draws the player base (the character without hair or equipment)
            sb.Draw(playerbase[0], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)),
                    new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0);
            // draws the player's hair
            sb.Draw(playerbase[3], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)),
                    new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0);
            // draws the player's shirt
            sb.Draw(equipment[0], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)),
                    new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0);
            // draws the player's pants
            sb.Draw(equipment[1], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)),
                    new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0);
            // draws the player's shoes
            sb.Draw(equipment[2], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)),
                    new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0);
        }

    The game has a top-down perspective much like the early Legend of Zelda games. It draws sections of the texture depending on which direction the character is facing and the animation frame. However, instead of drawing the character in the order the draw methods are called, it ends up drawing the character out of order. Please help me with this problem.
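    One thing worth trying, sketched as an assumption rather than a confirmed fix for this code: instead of relying on call order, give each layer of the character an explicit layerDepth and let SpriteBatch sort them. With SpriteSortMode.BackToFront, a higher layerDepth is drawn behind a lower one. The depth constants below are made up for illustration.

        // Sketch only: explicit layer depths instead of call-order layering.
        const float BodyDepth  = 0.9f;  // drawn furthest back
        const float HairDepth  = 0.8f;
        const float ShirtDepth = 0.7f;
        const float PantsDepth = 0.6f;
        const float ShoesDepth = 0.5f;  // drawn in front

        // In Draw(), sort by depth rather than deferring to call order:
        // Game1.spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, SamplerState.PointClamp, null, null);

        // In player.draw(), pass the constant as the final layerDepth argument instead of 0:
        // sb.Draw(playerbase[0], destRect, srcRect, Color.White, 0, Vector2.Zero, SpriteEffects.None, BodyDepth);
        // sb.Draw(playerbase[3], destRect, srcRect, Color.White, 0, Vector2.Zero, SpriteEffects.None, HairDepth);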

    Read the article

  • Why does my ID3DXSprite appear to be incorrectly scaled?

    - by Bjoern
    I am using D3D9 for rendering some simple things (a movie) as the backmost layer, then on top of that some text messages, and now I wanted to add some buttons to that. Before adding the buttons everything seemed to work fine, and I was using an ID3DXSprite for the text as well (ID3DXFont). Now I am loading some graphics for the buttons, but they seem to be scaled to something like 1.2 times their original size. In my test window I centered the graphic, but since it is too big it just doesn't fit well; for example, the client area is 640x360 and the graphic is 440 wide, so I expect 100 pixels on the left and right. The left side is fine (I took a screenshot and "counted" the pixels in Photoshop), but on the right there are only about 20 pixels. My rendering code is very simple (I am omitting error checks, et cetera, for brevity):

        // initially the viewport was set to the width/height of the client area

        // clear device
        m_d3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET|D3DCLEAR_STENCIL|D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB(0,0,0,0), 1.0f, 0 );

        // begin scene
        m_d3dDevice->BeginScene();

        // render movie surface (just two triangles to which the movie is rendered)
        m_d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, false);
        m_d3dDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR ); // bilinear filtering
        m_d3dDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR ); // bilinear filtering
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE ); // ignored
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_SELECTARG1 );
        m_d3dDevice->SetTexture( 0, m_movieTexture );
        m_d3dDevice->SetStreamSource(0, m_displayPlaneVertexBuffer, 0, sizeof(Vertex));
        m_d3dDevice->SetFVF(Vertex::FVF_Flags);
        m_d3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);

        // render sprites
        m_sprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DO_NOT_ADDREF_TEXTURE);

        // text drop shadow
        m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                          &m_playerFontRectDropShadow, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorDropShadow );

        // text
        m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                          &m_playerFontRect, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorMessage );

        // control object
        m_sprite->Draw( m_texture, 0, 0, &m_vecPos, 0xFFFFFFFF ); // draws a few objects like this

        m_sprite->End();

        // end scene
        m_d3dDevice->EndScene();

    What did I forget to do here? Except for the control objects (the play button, pause button, etc., which are placed on a "panel" about 440 pixels wide) everything seems fine; the objects are positioned where I expect them, just too big. By the way, I loaded the images using D3DXCreateTextureFromFileEx (resizing the window, and reacting to a lost device, etc., works fine too). For experimenting, I added some code to take an identity matrix and scale it down on the x/y axes to 0.75f, which then gave me the expected result for the controls (but also made the text smaller and out of position), but I don't know why I would need to scale anything. My rendering code is so simple; I just wanted to draw my 2D objects 1:1 at the size they came from the file... I am really very inexperienced in D3D, so the answer might be very simple...

    Read the article

  • Problem animating in Unity/Orthello 2D. Can't move gameObject

    - by Nelson Gregório
    I have an enemy NPC that moves left and right in a corridor. It's animated with 2 sprites using the Orthello 2D framework. If I untick the animation's "play on start" and "looping", the NPC moves correctly. If I turn them on, the NPC tries to move but is pulled back to its starting position again and again because of the animation loop. If I turn looping off during runtime, the NPC moves correctly again. What did I do wrong? Here's the NPC code if needed:

        using UnityEngine;
        using System.Collections;

        public class Enemies : MonoBehaviour
        {
            private Vector2 movement;
            public float moveSpeed = 200;
            public bool started = true;
            public bool blockedRight = false;
            public bool blockedLeft = false;
            public GameObject BorderL;
            public GameObject BorderR;

            void Update()
            {
                if (gameObject.transform.position.x < BorderL.transform.position.x)
                {
                    started = false;
                    blockedRight = false;
                    blockedLeft = true;
                }
                if (gameObject.transform.position.x > BorderR.transform.position.x)
                {
                    started = false;
                    blockedLeft = false;
                    blockedRight = true;
                }
                if (started)
                {
                    movement = new Vector2(1, 0f);
                    movement *= Time.deltaTime * moveSpeed;
                    gameObject.transform.Translate(movement.x, movement.y, 0f);
                }
                if (!blockedRight && !started && blockedLeft)
                {
                    movement = new Vector2(1, 0f);
                    movement *= Time.deltaTime * moveSpeed;
                    gameObject.transform.Translate(movement.x, movement.y, 0f);
                }
                if (!blockedLeft && !started && blockedRight)
                {
                    movement = new Vector2(-1, 0f);
                    movement *= Time.deltaTime * moveSpeed;
                    gameObject.transform.Translate(movement.x, movement.y, 0f);
                }
            }
        }
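    A workaround that is sometimes used when a looping animation keeps overwriting an object's transform, offered here only as an assumption rather than a confirmed Orthello fix: parent the animated sprite to an empty GameObject and apply the movement to the parent, so the animation can only ever reset the child's local position. "EnemyMover" and the empty parent object are hypothetical names.

        // Sketch only: script attached to an empty parent GameObject whose child is the animated sprite.
        using UnityEngine;

        public class EnemyMover : MonoBehaviour
        {
            public float moveSpeed = 200f;
            private int direction = 1; // flip this at the corridor borders as in the post's logic

            void Update()
            {
                // The looping animation plays on the child sprite and never touches this transform.
                transform.Translate(direction * moveSpeed * Time.deltaTime, 0f, 0f);
            }
        }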

    Read the article

  • Best way to determine surface normal for a group of pixels?

    - by Paul Renton
    One of my current endeavors is creating a 2D destructible terrain engine for iOS Cocos2D (see https://github.com/crebstar/PWNDestructibleTerrain ). It is in its infancy, no doubt, but I have made significant progress since starting a couple of weeks ago. However, I have run into a bit of a performance roadblock with calculating surface normals. Note: for my destructible terrain engine, an alpha of 0 is considered not to be solid ground. The method posted below works just great given small rectangles, such as n < 30. Anything above 30 causes a dip in the frame rate. If you approach 100x100 then you might as well read a book while the sprite attempts to traverse the terrain. At the moment this is the best I can come up with for altering the angle of a sprite as it roams across terrain (to get the angle for a sprite's orientation, just take the dot product of 100 * normal with the (1,0) vector):

        -(CGPoint) getAverageSurfaceNormalAt:(CGPoint)pt withRect:(CGRect)area
        {
            float avgX = 0;
            float avgY = 0;
            ccColor4B color = ccc4(0, 0, 0, 0);
            CGPoint normal;
            float len;

            for (int w = area.size.width; w >= -area.size.width; w--) {
                for (int h = area.size.height; h >= -area.size.height; h--) {
                    CGPoint pixPt = ccp(w + pt.x, h + pt.y);
                    if ([self pixelAt:pixPt colorCache:&color]) {
                        if (color.a != 0) {
                            avgX -= w;
                            avgY -= h;
                        } // end inner if
                    } // end outer if
                } // end inner for
            } // end outer for

            len = sqrtf(avgX * avgX + avgY * avgY);

            if (len == 0) {
                normal = ccp(avgX, avgY);
            } else {
                normal = ccp(avgX/len, avgY/len);
            } // end if

            return normal;
        } // end get

    My problem is that I have sprites that require larger rectangles in order for their movement to look realistic. I considered caching all surface normals, but this led to the issues of knowing when to recalculate the surface normals and of those calculations also being quite expensive (also, how large should the blocks be?). Another smaller issue is that I don't know how to properly treat the case when the length is 0. So I am stuck... Any advice from the community would be greatly appreciated! Is my method the best possible one? Or should I rethink the algorithm? I am new to game development and always looking to learn new tips and tricks.

    Read the article

  • Could I be going crazy with Event Handlers? Am I going the "wrong way" with my design?

    - by sensae
    I guess I've decided that I really like event handlers. I may be suffering a bit from analysis paralysis, but I'm concerned about making my design unwieldy or running into some other unforeseen consequence of my design decisions. My game engine currently does basic sprite-based rendering with a panning overhead camera. My design looks a bit like this:

    SceneHandler: Contains a list of classes that implement the SceneListener interface (currently only Sprites). Calls render() once per tick, and sends onCameraUpdate() messages to SceneListeners.

    InputHandler: Polls the input once per tick, and sends a simple onKeyPressed message to InputListeners. I have a Camera InputListener which holds a SceneHandler instance and triggers updateCamera() events based on what the input is.

    AgentHandler: Calls default actions on any Agents (AI) once per tick, and will check a stack for any new events that are registered, dispatching them to specific Agents as needed.

    So I have basic sprite objects that can move around a scene and use rudimentary steering behaviors to travel. I've gotten onto collision detection, and this is where I'm not sure the direction my design is going is good. Is it good practice to have many small event handlers? I imagine, going the way I am, that I'd have to implement some kind of CollisionHandler. Would I be better off with a more consolidated EntityHandler which handles AI, collision updates, and other entity interactions in one class? Or will I be fine just implementing many different event-handling subsystems which pass messages to each other based on what kind of event it is? Should I write an EntityHandler which is simply responsible for coordinating all these sub event handlers?

    I realize in some cases, such as my InputHandler and SceneHandler, those are very specific types of events. A large portion of my game code won't care about input, and a large portion won't care about updates that happen purely in the rendering of the scene. Thus I feel my isolation of those systems is justified. However, I'm asking this question specifically about game-logic-type events.

    Read the article

  • Variable number of GUI Buttons

    - by Wakaka
    I have a generic HTML5 Canvas GUI Button class and a Scene class. The Scene class has a method called createButton(), which will create a new Button with an onclick parameter and store it in a list of buttons. I call createButton() for all UI buttons when initializing the Scene. Because buttons can appear and disappear very often during rendering, the Scene first deactivates all buttons (temporarily removing their onclick, onmouseover, etc. properties) before each render frame. During rendering, the renderer then activates the required buttons for that frame.

    The problem is that part of the UI requires a variable number of buttons, and their onclick, onmouseover, etc. properties change frequently. An example is a buffs system. The UI will list all buffs as square sprites for the currently selected unit, and mousing over each square will bring up a tooltip with some information on the buff. But the number of buffs is variable, so I won't know how many buttons to create at the start. What's the best way to solve this problem? (P.S. My game is in JavaScript, and I know I can use HTML buttons, but I would like to make my game purely Canvas-based.)

    1. Create buttons on the fly during rendering. Thus I will only have buttons when I require them. After the render frame these buttons would be useless and removed.
    2. Create a fixed set of buttons, assuming the number of buffs per unit won't exceed it. During each render frame, activate the buttons accordingly and set their onmouseover property.
    3. Assign a button to each Buff instance. This sounds wrong, as the buff button is part of the GUI, which can only have one unit selected. Assigning a button to every single Buff in the game seems to be overkill. Also, I would need to change the button's position every render frame, since its order in the unit's list of buffs matters.

    Any other solutions? I'm actually quite for idea (1) but am worried about the memory/time issues of creating a new Button() object every render frame. But this is in JavaScript, where object creation is oh-so-common ({} mainly) due to automatic garbage collection. What is your take on this? Thanks!

    Read the article

  • Andengine. Put bullet to pool, when it leaves screen

    - by Ashot
    I'm creating a bullet with a physics body. The Bullet class (which extends the Sprite class) has a die() method that unregisters the physics connector, hides the sprite and puts it in a pool:

        public void die() {
            Log.d("bulletDie", "See you in hell!");
            if (this.isVisible()) {
                this.setVisible(false);
                mPhysicsWorld.unregisterPhysicsConnector(physicsConnector);
                physicsConnector.setUpdatePosition(false);
                body.setActive(false);
                this.setIgnoreUpdate(true);
                bulletsPool.recyclePoolItem(this);
            }
        }

    In the onUpdate method of the PhysicsConnector I execute the die() method when the sprite leaves the screen:

        physicsConnector = new PhysicsConnector(this, body, true, false) {
            @Override
            public void onUpdate(final float pSecondsElapsed) {
                super.onUpdate(pSecondsElapsed);
                if (!camera.isRectangularShapeVisible(_bullet)) {
                    Log.d("bulletDie", "Dead?");
                    _bullet.die();
                }
            }
        };

    It works as I expected, but _bullet.die() executes TWICE. What am I doing wrong, and is this the right way to hide sprites? Here is the full code of the Bullet class (it is an inner class of the class that represents the player):

        private class Bullet extends Sprite implements PhysicsConstants {

            private final Body body;
            private final PhysicsConnector physicsConnector;
            private final Bullet _bullet;
            private int id;

            public Bullet(float x, float y, ITextureRegion texture, VertexBufferObjectManager vertexBufferObjectManager) {
                super(x, y, texture, vertexBufferObjectManager);
                _bullet = this;
                id = bulletId++;

                body = PhysicsFactory.createCircleBody(mPhysicsWorld, this, BodyDef.BodyType.DynamicBody, bulletFixture);

                physicsConnector = new PhysicsConnector(this, body, true, false) {
                    @Override
                    public void onUpdate(final float pSecondsElapsed) {
                        super.onUpdate(pSecondsElapsed);
                        if (!camera.isRectangularShapeVisible(_bullet)) {
                            Log.d("bulletDie", "Dead?");
                            Log.d("bulletDie", id + "");
                            _bullet.die();
                        }
                    }
                };

                mPhysicsWorld.registerPhysicsConnector(physicsConnector);
                $this.getParent().attachChild(this);
            }

            public void reset() {
                final float angle = canon.getRotation();
                final float x = (float) ((Math.cos(MathUtils.degToRad(angle)) * radius) + centerX) / PIXEL_TO_METER_RATIO_DEFAULT;
                final float y = (float) ((Math.sin(MathUtils.degToRad(angle)) * radius) + centerY) / PIXEL_TO_METER_RATIO_DEFAULT;

                this.setVisible(true);
                this.setIgnoreUpdate(false);
                body.setActive(true);
                mPhysicsWorld.registerPhysicsConnector(physicsConnector);
                body.setTransform(new Vector2(x, y), 0);
            }

            public Body getBody() {
                return body;
            }

            public void setLinearVelocity(Vector2 velocity) {
                body.setLinearVelocity(velocity);
            }

            public void die() {
                Log.d("bulletDie", "See you in hell!");
                if (this.isVisible()) {
                    this.setVisible(false);
                    mPhysicsWorld.unregisterPhysicsConnector(physicsConnector);
                    physicsConnector.setUpdatePosition(false);
                    body.setActive(false);
                    this.setIgnoreUpdate(true);
                    bulletsPool.recyclePoolItem(this);
                }
            }
        }

    Read the article

  • How to chain actions/animations together and delay their execution?

    - by codinghands
    I'm trying to build a simple game with a number of screens - 'TitleScreen', 'LoadingScreen', 'PlayScreen', 'HighScoreScreen' - each of which has its own draw and update logic methods, sprites, other useful fields, etc. This works well for me as a game dev beginner, and it runs. However, on my 'PlayScreen' I want to run some animations before the player gets control - dropping in some artwork, playing some sound effects, generally prettifying things a little. However, I'm not sure what the best way is to chain animations / sound effects / other timed general events. I could make an intermediary screen, 'PrePlayScreen', which simply has all of this hardcoded like so:

        Update() {
            Animation anim1 = new Animation(.....);
            Animation anim2 = new Animation(.....);
            anim1.Run();
            if (anim1.State == AnimationState.Complete)
                anim2.Run();
            if (anim2.State == AnimationState.Complete)
                // Load 'PlayScreen' screen
        }

    But this doesn't seem so great - surely there must be a better way? I then thought, 'Hey - an AnimationManager! That'd be awesome!'. But then that creeping OOP panic set in as I thought about it some more. If I create the Animation in my Screen, then add it to the AnimationManager (which may or may not be a GameComponent hooked up to Update/Draw), how can I get 'back' to it? To signal commands like start / end / repeat? I'd still need to keep a reference to the object in my Screen so that I could still communicate with it once it's buried in the bosom of a List in my AnimationManager. This seems bad.

    I've also tried using events - calling 'Update' on all the animations in the PlayScreen update loop, where crucially all of the animations have a bool flag ('Active') which determines whether they should begin. The first animation has this set to 'true', all others 'false'. On completion the first animation raises an event, which sets animation 2's bool flag to true (so it then runs). Once animation 2 is complete another 'anim complete' event is raised, and the screen state changes. Considering the game I'm making is basically as simple as it gets, I know I'm overthinking this... it's just that the paradigm shift from web to game development is making me break out in a serious case of the stupids.
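    One pattern often used for this is a small sequence object that owns an ordered list of timed steps and advances itself from a single Update call, so the screen only keeps one reference and fires callbacks at each step. The sketch below is an illustration under assumptions of my own (the Sequence/Step names and the usage at the bottom are invented, not from the post):

        // Minimal sequential action queue sketch in C#.
        using System;
        using System.Collections.Generic;

        public class Sequence
        {
            private readonly Queue<Step> steps = new Queue<Step>();
            public bool IsComplete => steps.Count == 0;

            // Each step lasts 'duration' seconds and fires onStart once when it begins.
            public Sequence Then(float duration, Action onStart = null)
            {
                steps.Enqueue(new Step { Remaining = duration, OnStart = onStart });
                return this; // allows chaining: seq.Then(...).Then(...)
            }

            public void Update(float elapsedSeconds)
            {
                if (IsComplete) return;
                var current = steps.Peek();
                if (!current.Started) { current.Started = true; current.OnStart?.Invoke(); }
                current.Remaining -= elapsedSeconds;
                if (current.Remaining <= 0) steps.Dequeue();
            }

            private class Step
            {
                public float Remaining;
                public bool Started;
                public Action OnStart;
            }
        }

        // Hypothetical usage inside a PlayScreen:
        //   introSequence = new Sequence()
        //       .Then(1.0f, () => dropInArtwork.Start())
        //       .Then(0.5f, () => whooshSound.Play())
        //       .Then(0.0f, () => playerHasControl = true);
        //   ...
        //   introSequence.Update((float)gameTime.ElapsedGameTime.TotalSeconds);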

    Read the article

  • Problem texturing with opengl

    - by Killrazor
    Hello! I'm having problems making a simple sprite renderer. I load 2 different textures, then bind these textures and draw 2 squares, one with each texture. But only the texture of the first rendered object is drawn on both squares. It's as if I'd only used one texture, or as if glBindTexture didn't work properly. I know that GL is a state machine, but I think that you only need to change the active texture with glBindTexture. I load the texture with this method:

        bool CTexture::generate( utils::CImageBuff* img )
        {
            assert(img);
            m_image = img;

            CHECKGL(glGenTextures(1, &m_textureID));
            CHECKGL(glBindTexture(GL_TEXTURE_2D, m_textureID));
            CHECKGL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
            CHECKGL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
            //CHECKGL(glTexImage2D(GL_TEXTURE_2D, 0, img->getBpp(), img->getWitdh(), img->getHeight(), 0, img->getFormat(), GL_UNSIGNED_BYTE, img->getImgData()));
            CHECKGL(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->getWitdh(), img->getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img->getImgData()));

            return true;
        }

    And I bind textures with this function:

        void CTexture::bind()
        {
            CHECKGL(glBindTexture(GL_TEXTURE_2D, m_textureID));
        }

    Also, I draw sprites with this method:

        void CSprite2D::render()
        {
            CHECKGL(glLoadIdentity());
            CHECKGL(glEnable(GL_TEXTURE_2D));
            CHECKGL(glEnable(GL_BLEND));
            CHECKGL(glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA));

            m_texture->bind();

            CHECKGL(glPushMatrix());
            CHECKGL(glBegin(GL_QUADS));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaStart.t));  // 0,0 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaStart.t));    // 1,0 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaEnd.t));      // 1,1 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y + m_dimensions.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaEnd.t));    // 0,1 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y + m_dimensions.y, 0));
            CHECKGL(glPopMatrix());

            CHECKGL(glDisable(GL_BLEND));
        }

    Could you help me? All help is welcome. Thanks!

    Read the article

  • Possible to change the alpha value of certain pixels on iPhone?

    - by emi1faber
    Is it possible to change just a portion of a Sprite's alpha in response to user interaction? A good example of what I mean is iFog or iSteam, where the user can wipe "steam" off the iPhone's screen. Swapping images out wouldn't be feasible due to the sheer number of possibilities where the user could touch and move... For example, say you have a simple app that has a brick wall in the background that has graffiti on it, so there'd be two sprites, one of the brick wall, then one of the graffiti that has a higher z value than the brick wall. Then, based upon where the user touches (assuming their touch controls a sandblaster), some of the graffiti should be removed, but not all of it, which could be accomplished by changing the alpha value on a portion of the graffiti sprite. Is there any way to do this in cocos2d-iphone? Or, do I need to drop down into openGL, and if so, where would be a good place to start my search on how to accomplish this? Ideally, I'd like to accomplish this on a cocos2d-iphone Sprite, but if it's not possible, where's the best place to start looking? Thanks in advance, Ben

    Read the article

  • Nunit Relative Path failing

    - by levi.siebens
    I'm having an issue with NUnit where it cannot find an image file when I run my tests; each time it looks for images, it looks in the NUnit folder instead of the folder where the binary resides. Below is a detailed description of what's happening. I'm building a binary that is under test, which contains the definitions for some game elements and PNG files that define the sprites I'm using (for sanity's sake, call it Binary1). NUnit runs tests from a separate binary (Binary1Test), executing test methods against the first binary (Binary1). All tests pass, unless the test executes code in Binary1 which then requires Binary1 to use one of the image files (which are referenced via a relative path). When that method is called, NUnit throws a file-not-found exception stating that it cannot find the file and that it is looking inside the Program Files\Nunit.net 2.0 folder. So I have no idea why the code is doing this, and to make matters more confusing, when I pull up Environment.CurrentDirectory it gives me the correct path (the path to my debug folder) and not the path to NUnit. Also, if I use this instead of the relative path, my tests run without issue. So my question is: does anyone know why, when loading relative paths from within my binary, NUnit decides to use its directory instead of the directory where the binary is located and where the images are stored? Thanks.

    Read the article

  • How do I check to see if my subview is being touched?

    - by Amy
    I went through this tutorial about how to animate sprites: http://icodeblog.com/2009/07/24/iphone-programming-tutorial-animating-a-game-sprite/ I've been attempting to expand on the tutorial by trying to make Ryu animate only when he is touched. However, the touch is not even being registered, and I believe it has something to do with it being a subview. Here is my code:

        -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            if ([touch view] == ryuView) {
                NSLog(@"Touch");
            } else {
                NSLog(@"No touch");
            }
        }

        -(void) ryuAnims {
            NSArray *imageArray = [[NSArray alloc] initWithObjects:
                [UIImage imageNamed:@"1.png"],
                [UIImage imageNamed:@"2.png"],
                [UIImage imageNamed:@"3.png"],
                [UIImage imageNamed:@"4.png"],
                [UIImage imageNamed:@"5.png"],
                [UIImage imageNamed:@"6.png"],
                [UIImage imageNamed:@"7.png"],
                [UIImage imageNamed:@"8.png"],
                [UIImage imageNamed:@"9.png"],
                [UIImage imageNamed:@"10.png"],
                [UIImage imageNamed:@"11.png"],
                [UIImage imageNamed:@"12.png"],
                nil];
            ryuView.animationImages = imageArray;
            ryuView.animationDuration = 1.1;
            [ryuView startAnimating];
        }

        -(void)viewDidLoad {
            [super viewDidLoad];
            UIImageView *image = [[UIImageView alloc] initWithFrame: CGRectMake(100, 125, 150, 130)];
            ryuView = image;
            ryuView.image = [UIImage imageNamed:@"1.png"];
            ryuView.contentMode = UIViewContentModeBottomLeft;
            [self.view addSubview:ryuView];
            [image release];
        }

    This code compiles fine; however, when touching or clicking on Ryu, nothing happens. I've also tried if([touch view] == ryuView.image), but that gives me this error: "Comparison of distinct Objective-C type 'struct UIImage *' and 'struct UIView *' lacks a cast." What am I doing wrong?

    Read the article

  • XNA Reach profile with VMWare - Vertex Buffers not working?

    - by Nektarios
    I'm running an XNA app, using the Reach profile, in VMware Fusion; the host OS is Mac OS X and the VM is Windows XP SP3 (my dual-boot OS), on a MacBook Pro with an NVidia 320M graphics card. When I am booted into XP natively, my code works. The code draws cubes that are set up using vertex buffers. When a friend runs this same code on Windows 7, it also works for him just fine. When I am running my code in the VM, it doesn't work: I have billboarding sprites running in a shader program and that part displays fine, but the cube geometry just doesn't appear, with no crashes or errors. I tried Debug and Release. This is a very basic operation, so I'm thinking VMware isn't the problem, but my code...

    My init code:

        var vertexArray = verts.ToArray();
        var indexArray = indices.ToArray();

        indexBuffer = new IndexBuffer(GraphicsDevice, typeof(Int16), indexArray.Length, BufferUsage.WriteOnly);
        indexBuffer.SetData(indexArray);

        vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionColor), vertexArray.Length, BufferUsage.WriteOnly);
        vertexBuffer.SetData(vertexArray);

    My Draw code:

        // problem isn't here, tried no cull
        GraphicsDevice.RasterizerState = RasterizerState.CullClockwise;
        GraphicsDevice.BlendState = BlendState.AlphaBlend;
        GraphicsDevice.DepthStencilState = new DepthStencilState() { DepthBufferEnable = true };

        // Update View and Projection
        TileEffect.View = ((Game1)Game).Camera.View;
        TileEffect.Projection = ((Game1)Game).Camera.Projection;
        TileEffect.CurrentTechnique.Passes[0].Apply();

        GraphicsDevice.SetVertexBuffer(vertexBuffer);
        GraphicsDevice.Indices = indexBuffer;
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, indices.Count, 0, indices.Count / 3);

    For LoadContent:

        TileEffect = new BasicEffect(GraphicsDevice)
        {
            World = Matrix.Identity,
            View = ((Game1)Game).Camera.View,
            Projection = ((Game1)Game).Camera.Projection,
            VertexColorEnabled = true
        };

    Read the article

  • How to scale a sprite image without losing color key information?

    - by Michael P
    Hello everyone, I'm currently developing a simple application that displays a map and draws some markers on it. I'm developing for Windows Mobile, so I decided to use the DirectDraw and Imaging interfaces to make the application fast and pretty. The map moves when the user moves a finger on the touchscreen, so the whole map moving/scrolling animation has to be fast, but it is not.

    On every map update I have to draw a portion of the map, the control buttons, and the markers - the buttons and markers are preloaded on a DirectDraw surface as a mipmap. So the only thing I do is BitBlt from the mipmap to a back buffer, and from the back buffer to the primary surface (I can't use page flipping due to the windowed mode of my application). Previously I used a premultiplied-alpha surface with a 32-bit ARGB pixel format for the image mipmap; everything looked good, but drawing the entire "scene" was horribly slow - I could forget about smooth map scrolling. Now I'm using a mipmap with the native (RGB565) pixel format and a fuchsia (0xFF00FF) color key, and drawing is much better.

    My mipmap surface is generated when the program loads - images are loaded from files, scaled (with filtering) and drawn onto the mipmap. The problem is that the image scaling process blends pixel colors, and the pixels on the border of a sprite region are blended with the surrounding fuchsia pixels, resulting in semi-fuchsia colors that are not treated as the color key. When I blit with the color key option, the sprites have thin fuchsia-like borders, and it looks really bad.

    How do I solve this problem? I could use alpha blitting, but it is too slow - even in ARGB 1555 format.

    Read the article

  • JQUERY: Setting Active state on animated menu tabs

    - by Tony
    I have image sprites that use jQuery to set the background position on mouseover and mouseout events. When I set the active-state background position using jQuery it works fine until I move my cursor away from the active 'tab', which then fires the mouseout animation. What I want is for the click event to stop the animation on the active tab but still allow the animation effect to work on the other tabs, and, when another tab is clicked, for the active state to be removed from the current tab and applied to the new 'active' tab.

    jQuery:

        $(function(){
            /* This script changes the main nav links' hover state */
            $('.navigation a')
                .css( {backgroundPosition: "-1px -120px"} )
                .mouseover(function(){
                    $(this).stop().animate({backgroundPosition:"(-1px -240px)"}, {duration:400})
                })
                .mouseout(function(){
                    $(this).stop().animate({backgroundPosition:"(-1px -120px)"}, {duration:400, complete:function (){
                        $(this).css({backgroundPosition: "-1px -120px"})
                    }})
                })
        });

        $(document).ready(function(){
            $("a").click(function(){
                $(this).css({backgroundPosition: "-1px -235px"});
            });
        });

    HTML:

        <ul class="navigation">
            <li><a href="#index" tabindex="10" title="Home" id="homeButton"></a></li>
            <li><a href="#about" tabindex="20" title="About us" id="aboutButton"></a></li>
            <li><a href="#facilities" tabindex="30" title="Our facilities and opening Times" id="facilitiesButton"></a></li>
            <li><a href="#advice" tabindex="40" title="Advice and useful links" c id="adviceButton"></a></li>
            <li><a href="#find" tabindex="50" title="How to find Us" id="findButton"></a></li>
            <li><a href="#contact" tabindex="60" title="Get in touch with us" id="contactButton"></a></li>
        </ul>

    You can see what I've got so far here. Thanks in advance for any help.

    Read the article

  • CCSprite with actions crossing the screen boundaries (copy sprite problem)

    - by iostriz
    Let's say we have a CCSprite object that has actions tied to it:

        -(void) moveJack
        {
            CCSpriteSheet *sheet = (CCSpriteSheet*)[self getChildByTag:kSheet];
            CCSprite *jack = (CCSprite*)[sheet getChildByTag:kJack];
            ...
            CCSequence *seq = [CCSequence actions: jump1, [jump1 reverse], jump2, nil];
            [jack runAction:seq];
        }

    If the sprite crosses the screen boundary, I would like to display it on the opposite side. So the original sprite is half displayed on the right side (for example) and half on the left side, because it has not fully crossed yet. Obviously (or is it?), I need 2 sprites to achieve this: one on the right side (the original) and one on the left side (a copy). The problem is that I don't know how to create an exact copy of the original sprite, because the tied actions apply scaling and blending transformations (the sprite is a bit distorted). I would like to have something like:

        CCSprite *copy = [[jack copy] autorelease];

    so that I can add the copy to display it on the correct side (and kill it after the transition is over). It should have all the actions tied to it... Any ideas?

    Read the article

  • OpenGL - drawing 2D polygons shapes with texture

    - by plonkplonk
    I am trying to make a few effects in a C+GL game. So far I draw all my sprites as quads, and it works. However, I am trying to make a large ring appear at times, with a texture following that ring, as it takes less memory than a quad with the ring texture inside. The type of ring I want to make is not a round GL mesh ring (the "tube" type) but a "paper" 2D ring. That way I can modify the "width" of the ring, getting more of an effect than a simple quad + ring texture. So far all my attempts have been... kind of ridiculous, as I don't understand GL's coordinates too well (and I can't really understand the available documentation... I am just a designer with no coder help or background. A n00b, basically).

        glBegin(GL_POLYGON);
        for (i = 0; i < 360; i += 10) {
            glTexCoord2f(0, 0);   glVertex2f(Cos(i)*(H-10), Sin(i)*H);
            glTexCoord2f(0, HP);  glVertex2f(Sin(i)*(H-10), Cos(i)*(H-10));
            glTexCoord2f(WP, HP); glVertex2f(Cos(i)*H,      Sin(i)*(H-10));
            glTexCoord2f(WP, 0);  glVertex2f(Sin(i)*H,      Cos(i)*H);
        }
        glEnd();

    This is my last attempt, and it seems to generate a "sunburst" from the right edge of the circle instead of a ring. It's an amusing effect, but definitely not what I want. Other results, working along this train of thought, included the circle looking exactly the same as the textured quad (i.e., literally drawing the sprite) or something that looked like a pop-art filter. It seems my logic here is entirely flawed, so what would be the easiest way to obtain such a ring? No need to reply in code, just some guidance for a non-math-skilled user...

    Read the article

  • Cannot show scene with Cocos2D when using UITabBarController

    - by gw1921
    Hi, I'm new to Cocos2D but am having serious issues when trying to load a Cocos2D scene in one of my UIViewControllers, mixed with other normal UIKit view controllers. My project uses a UITabBarController to manage four view controllers. Three are normal UIKit view controllers, while for one of them I want to use Cocos2D (to draw some sprites and animate them). The following is what I've done so far. In the applicationDidFinishLaunching method, I initialize Cocos2D to use the window and take the first tab bar controller view:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            if( ! [CCDirector setDirectorType:CCDirectorTypeDisplayLink] ) {
                [CCDirector setDirectorType:CCDirectorTypeDefault];
            }

            [[CCDirector sharedDirector] setPixelFormat:kPixelFormatRGBA8888];
            [CCTexture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA8888];
            [[CCDirector sharedDirector] setDisplayFPS:YES];
            [[CCDirector sharedDirector] attachInView: window];
            [[[CCDirector sharedDirector] openGLView] addSubview: tabBarController.view];

            [window makeKeyAndVisible];
        }

    Then in the third UIViewController's code (where I want to use Cocos2D) I do this (note: the view is loaded from a NIB file which has a blank UIView and nothing else):

        - (void)viewDidLoad {
            [super viewDidLoad];

            // 'scene' is an autorelease object.
            CCScene *myScene = [CCScene node];

            // 'layer' is an autorelease object.
            MyLayer *myLayer = [MyLayer node];

            // add layer as a child to scene
            [myScene addChild: myLayer];

            [[CCDirector sharedDirector] runWithScene: myScene];
        }

    However, all I see is a blank white view and nothing else. If I call the following in viewDidLoad:

        [[CCDirector sharedDirector] attachInView: self.view];

    the app crashes, complaining that CCDirector is already attached. Please help!

    Read the article

  • jQuery change CSS background position and mouseover/mouseout

    - by steelfrog
    I have a menu composed of a single image (i.e., sprites). In order to display the hover image, I simply move the background image down. I'd like this effect to be controlled by jQuery and animated. This is a sample menu item:

        <ul>
            <li id="home"><a href="#"></a></li>
        </ul>

    This is what I'm toying with. I'm very new to jQuery and am having problems getting the proper selector going:

        $(document).ready(function(){
            $('#home a');
            // Set the 'normal' state background position
            .css( {backgroundPosition: "0 0"} )
            // On mouse over, move the background
            .mouseover(function(){
                $(this).stop().animate({backgroundPosition:"(0 -54px)"}, {duration:500})
            })
            // On mouse out, move the background back
            .mouseout(function(){
                $(this).stop().animate({backgroundPosition:"(0 0)"}, {duration:500})
            })
        });

    Could you point out what I'm doing wrong? I know the selector is probably incorrect, but I'm having a hard time figuring out how to manipulate the anchor.

    Read the article

  • Flash "visible" issue

    - by justkevin
    I'm writing a tool in Flex that lets me design composite sprites using layered bitmaps and then "bake" them into a low-overhead single BitmapData. I've discovered a strange behavior I can't explain: toggling the "visible" property of my layers works twice for each layer (i.e., I can turn it off, then on again) and then never again for that layer - the layer stays visible from that point on. If I override "set visible" on the layer as such:

        override public function set visible(value:Boolean):void {
            if (value == false) this.alpha = 0;
            else { this.alpha = 1; }
        }

    the problem goes away and I can toggle "visibility" as much as I want. Any ideas what might be causing this?

    Edit: Here is the code that makes the call:

        private function onVisibleChange():void {
            _layer.visible = layerVisible.selected;
            changed();
        }

    The changed() method "bakes" the bitmap:

        public function getBaked():BitmapData {
            var w:int = _composite.width + (_atmosphereOuterBlur * 2);
            var h:int = _composite.height + (_atmosphereOuterBlur * 2);
            var bmpData:BitmapData = new BitmapData(w, h, true, 0x00000000);
            var matrix:Matrix = new Matrix();
            var bounds:Rectangle = this.getBounds(this);
            matrix.translate(w/2, h/2);
            bmpData.draw(this, matrix, null, null, new Rectangle(0, 0, w, h), true);
            return bmpData;
        }

    Incidentally, while the layer is still visible, I can verify in the Flex debugger that the layer's visible value is "false".

    Read the article

  • Linking AS code to symbols defined in an external SWC?

    - by Ender
    (Apologies ahead of time - I only really know Flash; my Flex experience is basically nil. There may be a very standard and obvious workflow solution that Flex people know about.) I have a number of UI elements that are graphically quite complex (they're not components, they're just Sprites). Since it takes a long time to compile them, I've been trying to move them into an external .swc. However, I want to associate some code with these classes, but I don't want to have to recompile the graphical assets every time I make a code change. At the moment I have it set up like this: UI elements are created in a separate FLA and exported to a SWC. In my primary FLA, I have ActionScript classes that extend each of the graphical assets in the SWC. For example:

        external.swc:
            (some symbol defined in the Library and exported for ActionScript in frame 1)
            class: com.foo.WidgetGraphic
            base:  flash.display.Sprite

        main.fla:
            Widget.as:
                package com.foo {
                    public class Widget extends WidgetGraphic {
                        ...
                    }
                }

    This works, but is time-consuming and prone to error. I'd rather be able to avoid having to inherit from each graphical asset, and just define them directly. Is there a better way to do what I'm trying to accomplish? Note: the main concern here is compile time. I don't have any movies or audio or fonts, just a lot of vector art assets that appear to be slowing down my compilation time significantly. When I'm debugging I'm only making code changes, and would rather not have to keep recompiling the art...

    Read the article

  • How to make a small flash swf with ComboBox in Actionscript 3?

    - by Sint
    I have a pure ActionScript 3 project, using the flash.* libraries, that compiles down to about 6k (using mxmlc). The program handles about 1k shapes, a few sprites, and a socket connection, and works great (tastes less filling). Now, how would I add a ComboBox control without incurring excessive bloat? More specifically, I would like to keep the size under 100k. So far I have tried:

    Adobe mx.controls ComboBox example - a simple MXML example compiles to 200+k, both on my main Linux box using mxmlc and on Windows using Flash Builder 4.

    Yahoo Astra - uses the mx libraries underneath (so is it as bloated as Adobe's?), plus it does not contain an exact ComboBox.

    Keith Peters' MinimalComps - seems small, but far from providing ComboBox functionality.

    SPAS (Swing Package for ActionScript) - compiles to 130k, but the alpha version of its ComboBox does not let me adjust the height...

    asuilib - compiles to 40k; unfortunately this ComboBox does not provide scrolling of items... if the list does not fit on screen there is no way to scroll to an item.

    Now my questions: Is there a way to lower the size for projects importing mx.controls? Maybe there is a way to fix the SPAS or asuilib ComboBoxes? Perhaps there are some other libraries which provide a ComboBox (or DropList)?

    Read the article
