Search Results

Search found 32277 results on 1292 pages for 'module development'.


  • Gap in parallaxing background loop

    - by CinetiK
    The bug here is that my background kind of offset a bit itself from where it should draw and so I have this line. I have some troubles understanding why I get this bug when I set a speed that is different then 1,2,4,8,16,... In main class I set the speed depending on the player speed bgSpeed = -(int)playerMoveSpeed.X / 10; and here's my background class class ParallaxingBackground { Texture2D texture; Vector2[] positions; public int Speed { get; set;} public void Initialize(ContentManager content, String texturePath, int screenWidth, int speed) { texture = content.Load<Texture2D>(texturePath); this.Speed = speed; positions = new Vector2[screenWidth / texture.Width + 2]; for (int i = 0; i < positions.Length; i++) { positions[i] = new Vector2(i * texture.Width, 0); } } public void Update() { for (int i = 0; i < positions.Length; i++) { positions[i].X += Speed; if (Speed <= 0) { if (positions[i].X <= -texture.Width) { positions[i].X = texture.Width * (positions.Length - 1); } } else { if (positions[i].X >= texture.Width*(positions.Length - 1)) { positions[i].X = -texture.Width; } } } } public void Draw(SpriteBatch spriteBatch) { for (int i = 0; i < positions.Length; i++) { spriteBatch.Draw(texture, positions[i], Color.White); } } }
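
    A likely cause, inferred from the code above rather than confirmed: the wrap-around snaps a tile to a fixed position and throws away the distance by which it overshot the boundary, so whenever Speed does not divide texture.Width evenly the tiles drift apart (power-of-two speeds usually divide common texture widths exactly, so the discarded overshoot is zero and no gap shows). A minimal C++ sketch of the wrap logic only, with invented names (Tile, scroll, strip), that shifts each tile by the full strip width so the overshoot is preserved:

        // 'strip' is the total width covered by all tiles laid side by side.
        struct Tile { float x; };

        void scroll(Tile* tiles, int numTiles, float texWidth, float speed) {
            const float strip = texWidth * numTiles;
            for (int i = 0; i < numTiles; ++i) {
                tiles[i].x += speed;
                if (speed <= 0 && tiles[i].x <= -texWidth)
                    tiles[i].x += strip;                     // keep the overshoot
                else if (speed > 0 && tiles[i].x >= strip - texWidth)
                    tiles[i].x -= strip;                     // instead of snapping
            }
        }

    With this wrap the relative spacing of the tiles never changes, so no seam appears for speeds such as 3 or 5.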

    Read the article

  • Which are the cons of using only non-member functions and POD?

    - by Miro
    I'm creating my own game engine. I've read these articles and this question about DOD, and they advised against using member functions and classes. I've also heard some criticism of this idea. I could write the engine with member functions or with non-member functions and the result would be similar, so what are the benefits and drawbacks of each approach, and as the project grows, does either one give clearer, more maintainable code? With POD and non-member functions I don't have to make struct members public; I can still use an object id outside the engine, like OpenGL does with all its objects, so it's not about encapsulation. (POD = plain old data, DOD = data-oriented design.)
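
    For readers unfamiliar with the two styles, a tiny illustrative C++ sketch (names invented, not from the poster's engine) of the same operation written both ways; the data layout and generated code end up essentially the same, so the choice is mostly about organisation:

        // POD + free function: data is plain, behaviour lives outside.
        struct Ship { float x, y; };
        void move(Ship& s, float dx, float dy) { s.x += dx; s.y += dy; }

        // Member-function style: same data, behaviour attached to the type.
        class ShipObj {
        public:
            void move(float dx, float dy) { x += dx; y += dy; }
        private:
            float x = 0, y = 0;
        };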

    Read the article

  • Correct Rotation and Translation with a 4x4 matrix

    - by sFuller
    I am using a 4x4 matrix to transform verts in a shader. I multiply an identity matrix by a rotation matrix by a translation matrix. I am trying to first rotate the verts and then translate them; however, in my result it appears that the verts are being translated first and then rotated. My matrix looks something like this:

        m00 m01 m02 tx
        m10 m11 m12 ty
        m20 m21 m22 tz
        --- --- --- 1

    I am not using OpenGL's fixed-function pipeline; I am multiplying the matrices on the client side and uploading the result to a GLSL shader. If it helps, I am using my own matrix multiplication code, but I have recreated this problem using matrices on my graphing calculator, so I don't believe my matrix code has errors. I'll include a visual aid if needed.
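
    Assuming the column-vector convention implied by the layout above (translation in the last column, with the shader computing v' = M v), the order of the two factors is what decides which operation happens first - a minimal worked example:

        M = T * R   =>   v' = T (R v)   (rotate about the origin, then translate)
        M = R * T   =>   v' = R (T v)   (translate first; the offset itself then gets rotated)

    So building the matrix as identity * rotation * translation gives the second case; building it as translation * rotation gives rotate-then-translate. (With row vectors and v' = v M, the roles reverse.)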

    Read the article

  • How do I use setFilmSize in panda3d to achieve the correct view?

    - by lhk
    I'm working with Panda3d and recently switched my game to isometric rendering. I moved the virtual camera accordingly and set an orthographic lens. Then I implemented the classes "Map" and "Canvas". A canvas is a dynamically generated mesh: a flat quad. I'm using it to render the ingame graphics. Since the game itself is still set in a 3d coordinate system I'm planning to rely on these canvases to draw sprites. I could have named this class "Tile" but as I'd like to use it for non-tile sketches (enemies, environment) as well I thought canvas would describe it's function better. Map does exactly what it's name suggests. Its constructor receives the number of rows and columns and then creates a standard isometric map. It uses the canvas class for tiles. I'm planning to write a map importer that reads a file to create maps on the fly. Here's the canvas implementation: class Canvas: def __init__(self, texture, vertical=False, width=1,height=1): # create the mesh format=GeomVertexFormat.getV3t2() format = GeomVertexFormat.registerFormat(format) vdata=GeomVertexData("node-vertices", format, Geom.UHStatic) vertex = GeomVertexWriter(vdata, 'vertex') texcoord = GeomVertexWriter(vdata, 'texcoord') # add the vertices for a flat quad vertex.addData3f(1, 0, 0) texcoord.addData2f(1, 0) vertex.addData3f(1, 1, 0) texcoord.addData2f(1, 1) vertex.addData3f(0, 1, 0) texcoord.addData2f(0, 1) vertex.addData3f(0, 0, 0) texcoord.addData2f(0, 0) prim = GeomTriangles(Geom.UHStatic) prim.addVertices(0, 1, 2) prim.addVertices(2, 3, 0) self.geom = Geom(vdata) self.geom.addPrimitive(prim) self.node = GeomNode('node') self.node.addGeom(self.geom) # this is the handle for the canvas self.nodePath=NodePath(self.node) self.nodePath.setSx(width) self.nodePath.setSy(height) if vertical: self.nodePath.setP(90) # the most important part: "Drawing" the image self.texture=loader.loadTexture(""+texture+".png") self.nodePath.setTexture(self.texture) Now the code for the Map class class Map: def __init__(self,rows,columns,size): self.grid=[] for i in range(rows): self.grid.append([]) for j in range(columns): # create a canvas for the tile. For testing the texture is preset tile=Canvas(texture="../assets/textures/flat_concrete",width=size,height=size) x=(i-1)*size y=(j-1)*size # set the tile up for rendering tile.nodePath.reparentTo(render) tile.nodePath.setX(x) tile.nodePath.setY(y) # and store it for later access self.grid[i].append(tile) And finally the usage def loadMap(self): self.map=Map(10, 10, 1) this function is called within the constructor of the World class. The instantiation of world is the entry point to the execution. The code is pretty straightforward and runs good. Sadly the output is not as expected: Please note: The problem is not the white rectangle, it's my player object. The problem is that although the map should have equal width and height it's stretched weirdly. With orthographic rendering I expected the map to be a perfect square. What did I do wrong ? UPDATE: I've changed the viewport. This is how I set up the orthographic camera: lens = OrthographicLens() lens.setFilmSize(40, 20) base.cam.node().setLens(lens) You can change the "aspect" by modifying the parameters of setFilmSize. I don't know exactly how they are related to window size and screen resolution but after testing a little the values above seem to work for me. Now everything is rendered correctly as long as I don't resize the window. Every change of the window's size as well as switching to fullscreen destroys the correct rendering. 
I know that implementing a listener for resize events is not in the scope of this question. However I wonder why I need to make the Film's height two times bigger than its width. My window is quadratic ! Can you tell me how to find out correct setting for the FilmSize ? UPDATE 2: I can imagine that it's hard to envision the behaviour of the game. At first glance the obvious solution is to pass the window's width and height in pixels to setFilmSize. There are two problems with that approach. The parameters for setFilmSize are ingame units. You'll get a way to big view if you pass the pixel size For some strange reason the image is distorted if you pass equal values for width and height. Here's the output for setFilmSize(800,800) You'll have to stress your eyes but you'll see what I mean
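
    A hedged rule of thumb, assuming the orthographic lens maps the film rectangle straight onto the viewport and that nothing else (such as Panda3D's own aspect-ratio handling on the default camera) is rescaling it: one world unit stays square on screen only when the film size has the same aspect ratio as the window, i.e.

        filmWidth / filmHeight = windowWidth / windowHeight

    For example, an 800 x 600 window with filmHeight = 20 would need filmWidth = 20 * 800 / 600 ≈ 26.7. Recomputing the film size from the current window dimensions in a window-event handler is one way to keep the map square after a resize or a switch to fullscreen; if a genuinely square window really needs a 2:1 film size, that would suggest an extra factor of two is being applied somewhere else in the lens setup.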

    Read the article

  • Unexpected behaviour with glFramebufferTexture1D

    - by Roshan
    I am using render to texture concept with glFramebufferTexture1D. I am drawing a cube on non-default FBO with all the vertices as -1,1 (maximum) in X Y Z direction. Now i am setting viewport to X while rendering on non default FBO. My background is blue with white color of cube. For default FBO, i have created 1-D texture and attached this texture to above FBO with color attachment. I am setting width of texture equal to width*height of above FBO view-port. Now, when i render this texture to on another cube, i can see continuous white color on start or end of each face of the cube. That means part of the face is white and rest is blue. I am not sure whether this behavior is correct or not. I expect all the texels should be white as i am using -1 and 1 coordinates for cube rendered on non-default FBO. code: #define WIDTH 3 #define HEIGHT 3 GLfloat vertices8[]={ 1.0f,1.0f,1.0f, -1.0f,1.0f,1.0f, -1.0f,-1.0f,1.0f, 1.0f,-1.0f,1.0f,//face 1 1.0f,-1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,1.0f,-1.0f, 1.0f,1.0f,-1.0f,//face 2 1.0f,1.0f,1.0f, 1.0f,-1.0f,1.0f, 1.0f,-1.0f,-1.0f, 1.0f,1.0f,-1.0f,//face 3 -1.0f,1.0f,1.0f, -1.0f,1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,-1.0f,1.0f,//face 4 1.0f,1.0f,1.0f, 1.0f,1.0f,-1.0f, -1.0f,1.0f,-1.0f, -1.0f,1.0f,1.0f,//face 5 -1.0f,-1.0f,1.0f, -1.0f,-1.0f,-1.0f, 1.0f,-1.0f,-1.0f, 1.0f,-1.0f,1.0f//face 6 }; GLfloat vertices[]= { 0.5f,0.5f,0.5f, -0.5f,0.5f,0.5f, -0.5f,-0.5f,0.5f, 0.5f,-0.5f,0.5f,//face 1 0.5f,-0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,0.5f,-0.5f, 0.5f,0.5f,-0.5f,//face 2 0.5f,0.5f,0.5f, 0.5f,-0.5f,0.5f, 0.5f,-0.5f,-0.5f, 0.5f,0.5f,-0.5f,//face 3 -0.5f,0.5f,0.5f, -0.5f,0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,-0.5f,0.5f,//face 4 0.5f,0.5f,0.5f, 0.5f,0.5f,-0.5f, -0.5f,0.5f,-0.5f, -0.5f,0.5f,0.5f,//face 5 -0.5f,-0.5f,0.5f, -0.5f,-0.5f,-0.5f, 0.5f,-0.5f,-0.5f, 0.5f,-0.5f,0.5f//face 6 }; GLuint indices[] = { 0, 2, 1, 0, 3, 2, 4, 5, 6, 4, 6, 7, 8, 9, 10, 8, 10, 11, 12, 15, 14, 12, 14, 13, 16, 17, 18, 16, 18, 19, 20, 23, 22, 20, 22, 21 }; GLfloat texcoord[] = { 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0 }; glGenTextures(1, &id1); glBindTexture(GL_TEXTURE_1D, id1); glGenFramebuffers(1, &Fboid); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, WIDTH*HEIGHT , 0, GL_RGBA, GL_UNSIGNED_BYTE,0); glBindFramebuffer(GL_FRAMEBUFFER, Fboid); glFramebufferTexture1D(GL_DRAW_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_1D,id1,0); draw_cube(); glBindFramebuffer(GL_FRAMEBUFFER, 0); draw(); } draw_cube() { glViewport(0, 0, WIDTH, HEIGHT); glClearColor(0.0f, 0.0f, 0.5f, 1.0f); glClear(GL_COLOR_BUFFER_BIT); glEnableVertexAttribArray(glGetAttribLocation(temp.psId,"position")); glVertexAttribPointer(glGetAttribLocation(temp.psId,"position"), 3, GL_FLOAT, GL_FALSE, 0,vertices8); glDrawArrays (GL_TRIANGLE_FAN, 0, 24); } draw() { glClearColor(1.0f, 0.0f, 0.0f, 1.0f); glClearDepth(1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId,"tk_position")); glVertexAttribPointer(glGetAttribLocation(shader_data.psId,"tk_position"), 3, GL_FLOAT, GL_FALSE, 0,vertices); nResult = GL_ERROR_CHECK((GL_NO_ERROR, "glVertexAttribPointer(position, 3, GL_FLOAT, 
GL_FALSE, 0,vertices);")); glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId,"inputtexcoord")); glVertexAttribPointer(glGetAttribLocation(shader_data.psId,"inputtexcoord"), 2, GL_FLOAT, GL_FALSE, 0,texcoord); glBindTexture(*target11, id1); glDrawElements ( GL_TRIANGLES, 36,GL_UNSIGNED_INT, indices ); when i change WIDTH=HEIGHT=2, and call a glreadpixels with height, width equal to 4 in draw_cube() i can see first 2 pixels with white color, next two with blue(glclearcolor), next two white and then blue and so on.. Now when i change width parameter in glTeximage1D to 16 then ideally i should see alternate patches of white and blue right? But its not the case here. why so?

    Read the article

  • libgdx ActorGestureListener.pan() parameters not moving actor in smooth line

    - by Roar Skullestad
    I override the pan method in ActorGestureListener to implement dragging actors in libgdx (scene2d). When I move individual pieces on a board they move smoothly, but when moving the whole board, the x and y coordinates that is sent to pan is "jumping", and in an increasingly amount the longer it is dragged. These are an example of the deltaY coordinates sent to pan when dragging smoothly downwards: 1.1156368 -0.13125038 -1.0500145 0.98439217 -1.0500202 0.91877174 -0.984396 0.9187679 -0.98439026 0.9187641 -0.13125038 This is how I move the camera: public void pan (InputEvent event, float x, float y, float deltaX, float deltaY) { cam.translate(-deltaX, -deltaY); I have been using both the delta values sent to pan and the real position values, but similar results. And since it is the coordinates that are wrong, it doesn't matter whether I move the board itself or the camera. What could the cause be for this and what is the solution? When I move camera only half the delta-values, it moves smoothly but only at half the speed of the mouse pointer: cam.translate(-deltaX / 2, -deltaY / 2); It seems like the moving of camera or board affects the mouse input coordinates. How can I drag at "mouse speed" and still get smooth movements? (This question was also posted on stackoverflow: http://stackoverflow.com/questions/20693020/libgdx-actorgesturelistener-pan-parameters-not-moving-actor-in-smooth-line)

    Read the article

  • Platformer Enemy AI

    - by hayer
    I'm currently developing a platformer shooter. The game is multiplayer and while my net code could use some real work I have put that off for the time, so currently I'm trying to implement the AI. The game is pretty simple; Players run around on a map filled with a X amount of zombies that try to eat their brains, classic and overused I know. Weapons spawn at random intervals around the map. The problem is that the zombies, when they find their pray the have to follow it for some while.. And here is the problem, running the AI navcode seems to take for ever. So here is the ideas I have come up with so far Have the AI update at different intervals with a maximum of Y ms with no updates. Have the zombies assigned to groups of zombies. One is appointed the leader of the group who finds the way to the player - the rest just follows the leader. If the leader dies another one of the zombies in the group is appointed president of the zombie swarm. If there is less than five zombies in a group they try to meet up with other zombies.(Aka they are assigned to a different group and therefor a new leader) Multi-threading option one or two? For navigation I have some kinda navmesh(since the game is not tile-based) that tells the zombies where they can walk etc. If anyone else got some ideas on how to do navigation I would love some input. For LoS(zombie - player) I have split the map into grids. If the players grid is connected to the zombies grid(if I go with option two I would only need to check if leader zombies grid is connected to player, aka less checks) - if they are connected and there is more than 250ms since last check do a raytrace.. This is my first time programming AI so input on any field is appreciated.
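
    Idea one (staggered updates) is cheap to implement. A rough C++ sketch with invented names (Zombie, updateZombiePaths, kBuckets), assuming a fixed-timestep update loop, that spreads the expensive navmesh queries across frames so no single frame pays for every zombie:

        #include <cstdint>
        #include <vector>

        struct Zombie { /* position, cached path, target, ... */ };

        // Recompute at most one "bucket" of zombies per tick; every zombie is
        // refreshed once every kBuckets ticks, which bounds the per-frame cost.
        void updateZombiePaths(std::vector<Zombie>& zombies, std::uint64_t tick) {
            const std::size_t kBuckets = 10;            // max path staleness in ticks
            for (std::size_t i = tick % kBuckets; i < zombies.size(); i += kBuckets) {
                // expensive navmesh query only for this slice
                // recomputePath(zombies[i]);
            }
            // cheap steering along the cached path still runs for everyone each tick
        }

    The leader/group idea composes with this: only leaders fall into the expensive slice, while followers just steer toward their leader every tick.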

    Read the article

  • Does use of simple shaders improve performance/battery life?

    - by Miro
    I'm making an OpenGL game for Android. So far I've used only the fixed-function pipeline, but I'm rendering simple things. The fixed-function pipeline includes a lot of stuff I don't need, so I'm thinking about implementing shaders in my game to simplify the OpenGL pipeline if that gives better performance. Better performance should mean better battery life, unless the frame rate is capped by a software limit rather than by hardware power.

    Read the article

  • How to Hide the code of HTML5 games [closed]

    - by jeyanthinath
    Possible Duplicate: HTML5 game obfuscation. I am beginning to develop games in HTML5, and I have a doubt: when the game is used online, its source is visible to others even if we use complex code and reference external JavaScript files. So what is the use of HTML5 if everyone can download the code and use their own updated version? Is it possible to hide the code of HTML5 web-page games, or is there some other way to make it invisible to users? If not, what is the use of HTML5, as it is open to the user as well?

    Read the article

  • XNA - Finding boundaries in isometric tilemap

    - by Yheeky
    I have an issue with my 2D isometric engine. I'm using my own 2D camera class, which works with matrices, and I need to find the tilemap's boundaries so the user always sees the map. Currently my map size is 100x100 (with 128x128 tiles), so the calculation (e.g. for the right boundary) is: var maxX = (TileMap.MapWidth + 1) * (TileMap.TileWidth / 2) - ViewSize.X; var maxX = (100 + 1) * (128 / 2) - 1360; // = 5104 pixels. This works fine with a scale factor of 1.0f, but not for any other zoom factor. When I zoom out to 0.9f the right border should be at approx. 4954. I'm using the following code for the transformation but I always get a wrong value: var maxXVector = new Vector2(maxX, 0); var maxXTransformed = Vector2.Transform(maxXVector, tempTransform).X; The result is 4593. Does anyone have an idea what I'm doing wrong? Thanks for your help! Yheeky
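
    A hedged observation based only on the numbers quoted above: when the camera is zoomed out to 0.9, the viewport covers 1 / 0.9 times more world pixels, so the clamp should subtract the viewport size divided by the zoom rather than pushing maxX through the view matrix:

        maxX = (TileMap.MapWidth + 1) * (TileMap.TileWidth / 2) - ViewSize.X / Scale
             = 6464 - 1360 / 0.9 ≈ 4953

    which matches the expected ~4954. The Vector2.Transform approach fails because it scales the boundary point itself instead of the visible extent.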

    Read the article

  • Dealing with 2D pixel shaders and SpriteBatches in XNA 4.0 component-object game engine?

    - by DaveStance
    I've got a bit of experience with shaders in general, having implemented a couple, very simple, 3D fragment and vertex shaders in OpenGL/WebGL in the past. Currently, I'm working on a 2D game engine in XNA 4.0 and I'm struggling with the process of integrating per-object and full-scene shaders in my current architecture. I'm using a component-entity design, wherein my "Entities" are merely collections of components that are acted upon by discreet system managers (SpatialProvider, SceneProvider, etc). In the context of this question, my draw call looks something like this: SceneProvider::Draw(GameTime) calls... ComponentManager::Draw(GameTime, SpriteBatch) which calls (on each drawable component) DrawnComponent::Draw(GameTime, SpriteBatch) The SpriteBatch is set up, with the default SpriteBatch shader, in the SceneProvider class just before it tells the ComponentManager to start rendering the scene. From my understanding, if a component needs to use a special shader to draw itself, it must do the following when it's Draw(GameTime, SpriteBatch) method is invoked: public void Draw(GameTime gameTime, SpriteBatch spriteBatch) { spriteBatch.End(); spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, EffectShader, ViewMatrix); // Draw things here that are shaded by the "EffectShader." spriteBatch.End(); spriteBatch.Begin(/* same settings that were set by SceneProvider to ensure the rest of the scene is rendered normally */); } My question is, having been told that numerous calls to SpriteBatch.Begin() and SpriteBatch.End() within a single frame was terrible for performance, is there a better way to do this? Is there a way to instruct the currently running SpriteBatch to simply change the Effect shader it is using for this particular draw call and then switch it back before the function ends?

    Read the article

  • How to account for speed of the vehicle when shooting shells from it?

    - by John Murdoch
    I'm developing a simple 3D ship game using libgdx and bullet. When a user taps the mouse I create a new shell object and send it in the direction of the mouse click. However, if the user has tapped the mouse in the direction where the ship is currently moving, the ship catches up to the shells very quickly and can sometimes even get hit by them - simply because the speed of shells and the ship are quite comparable. I think I need to account for ship speed when generating the initial impulse for the shells, and I tried doing that (see "new line added"), but I cannot figure out if what I'm doing is the proper way and if yes, how to calculate the correct coefficient. public void createShell(Vector3 origin, Vector3 direction, Vector3 platformVelocity, float velocity) { long shellId = System.currentTimeMillis(); // hack ShellState state = getState().createShellState(shellId, origin.x, origin.y, origin.z); ShellEntity entity = EntityFactory.getInstance().createShellEntity(shellId, state); add(entity); entity.getBody().applyCentralImpulse(platformVelocity.mul(velocity * 0.02f)); // new line added, to compensate for the moving platform, no idea how to calculate proper coefficient entity.getBody().applyCentralImpulse(direction.nor().mul(velocity)); } private final Vector3 v3 = new Vector3(); public void shootGun(Vector3 direction) { Vector3 shipVelocity = world.getShipEntities().get(id).getBody().getLinearVelocity(); world.getState().getShipStates().get(id).transform.getTranslation(v3); // current location of our ship v3.add(direction.nor().mul(10.0f)); // hack; this is to avoid shell immediately impacting the ship that it got shot out from world.createShell(v3, direction, shipVelocity, 500); }
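
    A hedged alternative to tuning an impulse coefficient: treat it as plain velocity addition. In the ship's frame the shell leaves the muzzle at a fixed speed, so in world coordinates its initial velocity is the ship's velocity plus the muzzle velocity along the firing direction, and the impulse that produces it is just mass times that velocity (assuming the shell body starts at rest; shellMass and nor() are notation, not the poster's API):

        v_shell = platformVelocity + velocity * nor(direction)
        impulse = shellMass * v_shell

    Setting the body's linear velocity to v_shell directly, instead of applying an impulse, avoids needing the shell's mass at all; either way the 0.02f coefficient stops being a magic number.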

    Read the article

  • obj-c classes and sub classes (Cocos2d) conversion

    - by Lewis
    Hi I'm using this version of cocos2d: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers Which supports the UIGestureRecognizer within a CCLayer in a cocos2d scene like so: @interface HelloWorldLayer : CCLayer <UIGestureRecognizerDelegate> { } Now I want to make this custom gesture work within the scene, attaching it to a sprite in cocos2d: #import <Foundation/Foundation.h> #import <UIKit/UIGestureRecognizerSubclass.h> @protocol OneFingerRotationGestureRecognizerDelegate <NSObject> @optional - (void) rotation: (CGFloat) angle; - (void) finalAngle: (CGFloat) angle; @end @interface OneFingerRotationGestureRecognizer : UIGestureRecognizer { CGPoint midPoint; CGFloat innerRadius; CGFloat outerRadius; CGFloat cumulatedAngle; id <OneFingerRotationGestureRecognizerDelegate> target; } - (id) initWithMidPoint: (CGPoint) midPoint innerRadius: (CGFloat) innerRadius outerRadius: (CGFloat) outerRadius target: (id) target; - (void)reset; - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event; @end #include <math.h> #import "OneFingerRotationGestureRecognizer.h" @implementation OneFingerRotationGestureRecognizer // private helper functions CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2); CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB); - (id) initWithMidPoint: (CGPoint) _midPoint innerRadius: (CGFloat) _innerRadius outerRadius: (CGFloat) _outerRadius target: (id <OneFingerRotationGestureRecognizerDelegate>) _target { if ((self = [super initWithTarget: _target action: nil])) { midPoint = _midPoint; innerRadius = _innerRadius; outerRadius = _outerRadius; target = _target; } return self; } /** Calculates the distance between point1 and point 2. 
*/ CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2) { CGFloat dx = point1.x - point2.x; CGFloat dy = point1.y - point2.y; return sqrt(dx*dx + dy*dy); } CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB) { CGFloat a = endLineA.x - beginLineA.x; CGFloat b = endLineA.y - beginLineA.y; CGFloat c = endLineB.x - beginLineB.x; CGFloat d = endLineB.y - beginLineB.y; CGFloat atanA = atan2(a, b); CGFloat atanB = atan2(c, d); // convert radiants to degrees return (atanA - atanB) * 180 / M_PI; } #pragma mark - UIGestureRecognizer implementation - (void)reset { [super reset]; cumulatedAngle = 0; } - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesBegan:touches withEvent:event]; if ([touches count] != 1) { self.state = UIGestureRecognizerStateFailed; return; } } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesMoved:touches withEvent:event]; if (self.state == UIGestureRecognizerStateFailed) return; CGPoint nowPoint = [[touches anyObject] locationInView: self.view]; CGPoint prevPoint = [[touches anyObject] previousLocationInView: self.view]; // make sure the new point is within the area CGFloat distance = distanceBetweenPoints(midPoint, nowPoint); if ( innerRadius <= distance && distance <= outerRadius) { // calculate rotation angle between two points CGFloat angle = angleBetweenLinesInDegrees(midPoint, prevPoint, midPoint, nowPoint); // fix value, if the 12 o'clock position is between prevPoint and nowPoint if (angle > 180) { angle -= 360; } else if (angle < -180) { angle += 360; } // sum up single steps cumulatedAngle += angle; // call delegate if ([target respondsToSelector: @selector(rotation:)]) { [target rotation:angle]; } } else { // finger moved outside the area self.state = UIGestureRecognizerStateFailed; } } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesEnded:touches withEvent:event]; if (self.state == UIGestureRecognizerStatePossible) { self.state = UIGestureRecognizerStateRecognized; if ([target respondsToSelector: @selector(finalAngle:)]) { [target finalAngle:cumulatedAngle]; } } else { self.state = UIGestureRecognizerStateFailed; } cumulatedAngle = 0; } - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesCancelled:touches withEvent:event]; self.state = UIGestureRecognizerStateFailed; cumulatedAngle = 0; } @end Header file for view controller: #import "OneFingerRotationGestureRecognizer.h" @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate> @property (nonatomic, strong) IBOutlet UIImageView *image; @property (nonatomic, strong) IBOutlet UITextField *textDisplay; @end then this is in the .m file: gestureRecognizer = [[OneFingerRotationGestureRecognizer alloc] initWithMidPoint: midPoint innerRadius: outRadius / 3 outerRadius: outRadius target: self]; [self.view addGestureRecognizer: gestureRecognizer]; Now my question is, is it possible to add this custom gesture into the cocos2d project found on that github, and if so, what do I need to change in the OneFingerRotationGestureRecognizerDelegate to get it to work within cocos2d. Because at the minute it is setup in a standard iOS project and not a cocos2d project and I do not know enough about UIViews and classing/ sub classing in obj-c to get this to work. Also it seems to inherit from a UIView where cocos2d uses CCLayer. Kind regards, Lewis. 
I also realise I may have not included enough code from the custom gesture project for readers to interpret it fully, so the full project can be found here: https://github.com/melle/OneFingerRotationGestureDemo

    Read the article

  • Technique to have screen independent grid based puzzle with sprite animation

    - by Yan Cheng CHEOK
    Hello all, let's say I have a fixed size grid puzzle game (8 x 10). I will be using sprites animation, when the "pieces" in the puzzle is moving from one grid to another grid. I was wondering, what is the technique to have this game being implemented as screen resolution independent. Here is what I plan to do. 1) The data structure coordinate will be represented using double, with 1.0 as max value. // Puzzle grid of 8 x 10 Environment { double width = 0.8; double height = 1.0; } // Location of Sprite at coordinate (1, 1) Sprite { double posX = 0.1; double posY = 0.1; double width = 0.1; double height = 0.1; } // scale = PYSICAL_SCREEN_SIZE drawBitmap ( sprite_image, sprite_image_rect, new Rect(sprite.posX * Scale, sprite.posY * Scale, (sprite.posX + sprite.width) * Scale, (sprite.posY + sprite.Height) * Scale), paint ); 2) A large size sprite image will be used (128x128). As sprite image shall look fine if we scale from large size down to small size, but not vice versa. Besides the above mentioned technique, is there any other consideration I had missed out?

    Read the article

  • How do I calculate the boundary of the game window after transforming the view?

    - by Cypher
    My Camera class handles zoom, rotation, and of course panning. It's invoked through SpriteBatch.Begin, like so many other XNA 2D camera classes. It calculates the view Matrix like so: public Matrix GetViewMatrix() { return Matrix.Identity * Matrix.CreateTranslation(new Vector3(-this.Spatial.Position, 0.0f)) * Matrix.CreateTranslation(-( this.viewport.Width / 2 ), -( this.viewport.Height / 2 ), 0.0f) * Matrix.CreateRotationZ(this.Rotation) * Matrix.CreateScale(this.Scale, this.Scale, 1.0f) * Matrix.CreateTranslation(this.viewport.Width * 0.5f, this.viewport.Height * 0.5f, 0.0f); } I was having a minor issue with performance, which after doing some profiling, led me to apply a culling feature to my rendering system. It used to, before I implemented the camera's zoom feature, simply grab the camera's boundaries and cull any game objects that did not intersect with the camera. However, after giving the camera the ability to zoom, that no longer works. The reason why is visible in the screenshot below. The navy blue rectangle represents the camera's boundaries when zoomed out all the way (Camera.Scale = 0.5f). So, when zoomed out, game objects are culled before they reach the boundaries of the window. The camera's width and height are determined by the Viewport properties of the same name (maybe this is my mistake? I wasn't expecting the camera to "resize" like this). What I'm trying to calculate is a Rectangle that defines the boundaries of the screen, as indicated by my awesome blue arrows, even after the camera is rotated, scaled, or panned. Here is how I've more recently found out how not to do it: public Rectangle CullingRegion { get { Rectangle region = Rectangle.Empty; Vector2 size = this.Spatial.Size; size *= 1 / this.Scale; Vector2 position = this.Spatial.Position; position = Vector2.Transform(position, this.Inverse); region.X = (int)position.X; region.Y = (int)position.Y; region.Width = (int)size.X; region.Height = (int)size.Y; return region; } } It seems to calculate the right size, but when I render this region, it moves around which will obviously cause problems. It needs to be "static", so to speak. It's also obscenely slow, which causes more of a problem than it solves. What am I missing?
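
    One common way to get a stable culling rectangle, sketched under the assumption that GetViewMatrix() above maps world space to screen space: invert the view matrix, push the four screen corners through it, and take the axis-aligned bounds of the results. That stays correct under pan, zoom and rotation and costs only four transforms per frame. A C++ sketch with invented helper types (Vec2, RectF) and a caller-supplied screen-to-world function standing in for the inverse view transform:

        #include <algorithm>

        struct Vec2  { float x, y; };
        struct RectF { float x, y, w, h; };

        // 'viewToWorld' applies the inverse of the camera's view matrix
        // (viewport coordinates -> world coordinates).
        template <typename ViewToWorld>
        RectF cullingRegion(float viewW, float viewH, ViewToWorld viewToWorld) {
            const Vec2 corners[4] = { {0, 0}, {viewW, 0}, {0, viewH}, {viewW, viewH} };
            Vec2 w0 = viewToWorld(corners[0]);
            float minX = w0.x, maxX = w0.x, minY = w0.y, maxY = w0.y;
            for (int i = 1; i < 4; ++i) {
                Vec2 w = viewToWorld(corners[i]);      // screen corner -> world
                minX = std::min(minX, w.x);  maxX = std::max(maxX, w.x);
                minY = std::min(minY, w.y);  maxY = std::max(maxY, w.y);
            }
            return RectF{ minX, minY, maxX - minX, maxY - minY };
        }

    The returned rectangle is in world space, so it can be intersected directly with object bounds, and it no longer "moves around" because it is recomputed from the current inverse view every frame rather than transformed incrementally.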

    Read the article

  • What could cause a pixel shader to paint outside the lines of the vertex shader output?

    - by Rei Miyasaka
    From what I understand, the pixels that a pixel shader operates on are specified implicitly by the SV_POSITION output (in DirectX) of the vertex shader. What then could cause a pixel shader to render in the middle of nowhere? I used the new Visual Studio 2012 graphics debugger to visualize my vertex and pixel shader output. This is the output from a DrawIndexed() call that draws a cube: The pink part is the rendered output of the pixel shader, which takes the cube on its left as its input. The vertex shader code: cbuffer Buf { float4x4 final; }; struct In { float4 pos:POSITION; float3 norm:NORMAL; float2 texuv:TEXCOORD; }; struct Out { float4 col:COLOR; float2 tex:TEXCOORD; float4 pos:SV_POSITION; }; Out main(In input) { Out output; output.pos = mul(input.pos, final); output.col = float4(1.0f, 0.5f, 0.5f, 1.0f); output.tex = input.texuv; return output; } And the pixel shader: struct In { float4 col:COLOR; float2 tex:TEXCOORD; float4 pos:SV_POSITION; }; float4 main(In input) : SV_TARGET { return input.col; } The raster stage is the only thing between the vertex shader and the pixel shader, so my suspicion is that it's some raster stage settings. But the raster stage shouldn't change the shape of the vertex shader output so drastically, should it?

    Read the article

  • Reasoner Conversion Problems:

    - by Annalyne
    I have this code right here in Java and I wanted to translate it in C++, but I had some problems going: this is the java code: import java.io.*; import java.util.*; public class ClueReasoner { private int numPlayers; private int playerNum; private int numCards; private SATSolver solver; private String caseFile = "cf"; private String[] players = {"sc", "mu", "wh", "gr", "pe", "pl"}; private String[] suspects = {"mu", "pl", "gr", "pe", "sc", "wh"}; private String[] weapons = {"kn", "ca", "re", "ro", "pi", "wr"}; private String[] rooms = {"ha", "lo", "di", "ki", "ba", "co", "bi", "li", "st"}; private String[] cards; public ClueReasoner() { numPlayers = players.length; // Initialize card info cards = new String[suspects.length + weapons.length + rooms.length]; int i = 0; for (String card : suspects) cards[i++] = card; for (String card : weapons) cards[i++] = card; for (String card : rooms) cards[i++] = card; numCards = i; // Initialize solver solver = new SATSolver(); addInitialClauses(); } private int getPlayerNum(String player) { if (player.equals(caseFile)) return numPlayers; for (int i = 0; i < numPlayers; i++) if (player.equals(players[i])) return i; System.out.println("Illegal player: " + player); return -1; } private int getCardNum(String card) { for (int i = 0; i < numCards; i++) if (card.equals(cards[i])) return i; System.out.println("Illegal card: " + card); return -1; } private int getPairNum(String player, String card) { return getPairNum(getPlayerNum(player), getCardNum(card)); } private int getPairNum(int playerNum, int cardNum) { return playerNum * numCards + cardNum + 1; } public void addInitialClauses() { // TO BE IMPLEMENTED AS AN EXERCISE // Each card is in at least one place (including case file). for (int c = 0; c < numCards; c++) { int[] clause = new int[numPlayers + 1]; for (int p = 0; p <= numPlayers; p++) clause[p] = getPairNum(p, c); solver.addClause(clause); } // If a card is one place, it cannot be in another place. // At least one card of each category is in the case file. // No two cards in each category can both be in the case file. 
} public void hand(String player, String[] cards) { playerNum = getPlayerNum(player); // TO BE IMPLEMENTED AS AN EXERCISE } public void suggest(String suggester, String card1, String card2, String card3, String refuter, String cardShown) { // TO BE IMPLEMENTED AS AN EXERCISE } public void accuse(String accuser, String card1, String card2, String card3, boolean isCorrect) { // TO BE IMPLEMENTED AS AN EXERCISE } public int query(String player, String card) { return solver.testLiteral(getPairNum(player, card)); } public String queryString(int returnCode) { if (returnCode == SATSolver.TRUE) return "Y"; else if (returnCode == SATSolver.FALSE) return "n"; else return "-"; } public void printNotepad() { PrintStream out = System.out; for (String player : players) out.print("\t" + player); out.println("\t" + caseFile); for (String card : cards) { out.print(card + "\t"); for (String player : players) out.print(queryString(query(player, card)) + "\t"); out.println(queryString(query(caseFile, card))); } } public static void main(String[] args) { ClueReasoner cr = new ClueReasoner(); String[] myCards = {"wh", "li", "st"}; cr.hand("sc", myCards); cr.suggest("sc", "sc", "ro", "lo", "mu", "sc"); cr.suggest("mu", "pe", "pi", "di", "pe", null); cr.suggest("wh", "mu", "re", "ba", "pe", null); cr.suggest("gr", "wh", "kn", "ba", "pl", null); cr.suggest("pe", "gr", "ca", "di", "wh", null); cr.suggest("pl", "wh", "wr", "st", "sc", "wh"); cr.suggest("sc", "pl", "ro", "co", "mu", "pl"); cr.suggest("mu", "pe", "ro", "ba", "wh", null); cr.suggest("wh", "mu", "ca", "st", "gr", null); cr.suggest("gr", "pe", "kn", "di", "pe", null); cr.suggest("pe", "mu", "pi", "di", "pl", null); cr.suggest("pl", "gr", "kn", "co", "wh", null); cr.suggest("sc", "pe", "kn", "lo", "mu", "lo"); cr.suggest("mu", "pe", "kn", "di", "wh", null); cr.suggest("wh", "pe", "wr", "ha", "gr", null); cr.suggest("gr", "wh", "pi", "co", "pl", null); cr.suggest("pe", "sc", "pi", "ha", "mu", null); cr.suggest("pl", "pe", "pi", "ba", null, null); cr.suggest("sc", "wh", "pi", "ha", "pe", "ha"); cr.suggest("wh", "pe", "pi", "ha", "pe", null); cr.suggest("pe", "pe", "pi", "ha", null, null); cr.suggest("sc", "gr", "pi", "st", "wh", "gr"); cr.suggest("mu", "pe", "pi", "ba", "pl", null); cr.suggest("wh", "pe", "pi", "st", "sc", "st"); cr.suggest("gr", "wh", "pi", "st", "sc", "wh"); cr.suggest("pe", "wh", "pi", "st", "sc", "wh"); cr.suggest("pl", "pe", "pi", "ki", "gr", null); cr.printNotepad(); cr.accuse("sc", "pe", "pi", "bi", true); } } how can I convert this? there are too many errors I get. 
for my C++ code (as a commentor asked for) #include <iostream> #include <cstdlib> #include <string> using namespace std; void Scene_Reasoner() { int numPlayer; int playerNum; int cardNum; string filecase = "Case: "; string players [] = {"sc", "mu", "wh", "gr", "pe", "pl"}; string suspects [] = {"mu", "pl", "gr", "pe", "sc", "wh"}; string weapons [] = {"kn", "ca", "re", "ro", "pi", "wr"}; string rooms[] = {"ha", "lo", "di", "ki", "ba", "co", "bi", "li", "st"}; string cards [0]; }; void Scene_Reason_Base () { numPlayer = players.length; // Initialize card info cards = new String[suspects.length + weapons.length + rooms.length]; int i = 0; for (String card : suspects) cards[i++] = card; for (String card : weapons) cards[i++] = card; for (String card : rooms) cards[i++] = card; cardNum = i; }; private int getCardNum (string card) { for (int i = 0; i < numCards; i++) if (card.equals(cards[i])) return i; cout << "Illegal card: " + card <<endl; return -1; }; private int getPairNum(String player, String card) { return getPairNum(getPlayerNum(player), getCardNum(card)); }; private int getPairNum(int playerNum, int cardNum) { return playerNum * numCards + cardNum + 1; }; int main () { return 0; }
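
    The snippet above mixes Java syntax into C++ (e.g. private outside a class, String, .length, new String[...]), which is where most of the errors come from. A hedged sketch, not a full port, of how the same skeleton usually looks in idiomatic C++: std::vector<std::string> instead of raw arrays, the members and methods inside one class, and range-for instead of Java's enhanced for:

        #include <iostream>
        #include <string>
        #include <vector>

        class ClueReasoner {
        public:
            ClueReasoner() {
                // concatenate the three categories into one card list
                for (const std::string& c : suspects) cards.push_back(c);
                for (const std::string& c : weapons)  cards.push_back(c);
                for (const std::string& c : rooms)    cards.push_back(c);
            }

            int getCardNum(const std::string& card) const {
                for (std::size_t i = 0; i < cards.size(); ++i)
                    if (card == cards[i]) return static_cast<int>(i);
                std::cout << "Illegal card: " << card << '\n';
                return -1;
            }

        private:
            std::vector<std::string> suspects{"mu", "pl", "gr", "pe", "sc", "wh"};
            std::vector<std::string> weapons {"kn", "ca", "re", "ro", "pi", "wr"};
            std::vector<std::string> rooms   {"ha", "lo", "di", "ki", "ba", "co", "bi", "li", "st"};
            std::vector<std::string> cards;
        };

    The remaining Java methods (getPairNum, addInitialClauses, the SATSolver calls) translate the same way once they are member functions of this class rather than free-floating "private" functions.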

    Read the article

  • How do I do JavaScript Array Animation

    - by Henry
    I'm making a game but don't know how to do Array Animation with the png Array and game Surface that I made below. I'm trying to make it so that when the Right arrow key is pressed, the character animates as if it is walking to the right and when the Left arrow key is pressed it animates as if it is walking to the left (kind of like Mario). I put everything on a surface instead of the canvas. Everything is explained in the code below. I couldn't find help on this anywhere. I hope what I got below makes sense. I'm basically a beginner with JavaScript. I'll be back if more is needed: <!doctype html5> <html> <head></head> <script src="graphics.js"></script> <script src="object.js"></script> <body onkeydown ="keyDown(event)" onkeyup ="keyUp(event)" ></body> <script> //"Surface" is where I want to display my animation. It's like the HTML // canvas but it's not that. It's just the surface to where everything in the //game and the game itself will be displayed. var Surface = new Graphics(600, 400, "skyblue"); //here's the array that I want to use for animation var player = new Array("StandsRight.png", "WalksRight.png", "StandsLeft.png","WalksLeft.png" ); //Here is the X coordinate, Y coordinate, the beginning png for the animation, //and the object's name "player." I also turned the array into an object (but //I don't know if I was supposed to do that or not). var player = new Object(50, 100, 40, 115, "StandsRight.png","player"); //When doing animation I know that it requires a "loop", but I don't // know how to connect it so that it works with the arrays so that //it could animate. var loop = 0; //this actually puts "player" on screen. It makes player visible and //it is where I would like the animation to occur. Surface.drawObject(player); //this would be the key that makes "player" animation in the righward direction function keyDown(e) { if (e.keyCode == 39); } //this would be the key that makes "player" animation in the leftward direction function keyUp(e){ if (e.keyCode == 39); } //this is the Mainloop where the game will function MainLoop(); //the mainloop functionized function MainLoop(){ //this is how fast or slow I could want the entire game to go setTimeout(MainLoop, 10); } </script> </html> From here, are the "graphic.js" and the "object.js" files below. In this section is the graphics.js file. This graphics.js part below is linked to the: script src="graphics.js" html script section that I wrote above. 
Basically, below is a seperate file that I used for Graphics, and to run the code above, make this graphics.js code that I post below here, a separate filed called: graphics.js function Graphics(w,h,c) { document.body.innerHTML += "<table style='position:absolute;font- size:0;top:0;left:0;border-spacing:0;border- width:0;width:"+w+";height:"+h+";background-color:"+c+";' border=1><tr><td> </table>\n"; this.drawRectangle = function(x,y,w,h,c,n) { document.body.innerHTML += "<div style='position:absolute;font-size:0;left:" + x + ";top:" + y + ";width:" + w + ";height:" + h + ";background-color:" + c + ";' id='" + n + "'></div>\n"; } this.drawTexture = function(x,y,w,h,t,n) { document.body.innerHTML += "<img style='position:absolute;font-size:0;left:" + x + ";top:" + y + ";width:" + w + ";height:" + h + ";' id='" + n + "' src='" + t + "'> </img>\n"; } this.drawObject = function(o) { document.body.innerHTML += "<img style='position:absolute;font-size:0;left:" + o.X + ";top:" + o.Y + ";width:" + o.Width + ";height:" + o.Height + ";' id='" + o.Name + "' src='" + o.Sprite + "'></img>\n"; } this.moveGraphic = function(x,y,n) { document.getElementById(n).style.left = x; document.getElementById(n).style.top = y; } this.removeGraphic = function(n){ document.getElementById(n).parentNode.removeChild(document.getElementById(n)); } } Finally, is the object.js file linked to the script src="object.js"" in the html game file above the graphics.js part I just wrote. Basically, this is a separate file too, so thus, in order to run or test the html game code in the very first section I wrote, a person has to also make this code below a separate file called: object.js I hope this helps: function Object(x,y,w,h,t,n) { this.X = x; this.Y = y; this.Velocity_X = 0; this.Velocity_Y = 0; this.Previous_X = 0; this.Previous_Y = 0; this.Width = w; this.Height = h; this.Sprite = t; this.Name = n; this.Exists = true; } In all, this game is made based on a tutorial on youtube at: http://www.youtube.com/watch?v=t2kUzgFM4lY&feature=relmfu I'm just trying to learn how to add animations with it now. I hope the above helps. If not, let me know. Thanks

    Read the article

  • Sprite not rotating around its centre after Scaling at its centre

    - by asma.farhat
    If I scale a sprite at its centre and then try to rotate it around its centre as well, the rotation does not occur around its centre. The way that currently works, e.g. for rotating a scaled ball, is to set the scale centre at the top left (0,0), set the scale you want, then set the rotation centre to the middle of the scaled sprite and apply the rotation modifier: blaBloBliSprite.setScaleCenter(0, 0); blaBloBliSprite.setScale(0.667f); blaBloBliSprite.setPosition(557, CAMERA_HEIGHT / 2 - blaBloBliSprite.getHeightScaled() / 2); blaBloBliSprite.setRotationCenter(blaBloBliSprite.getWidthScaled() / 2, blaBloBliSprite.getHeightScaled() / 2); But I want to scale the sprite at its centre as well. Is there any way of doing that?

    Read the article

  • Transparency in XNA-4 primitives

    - by Shashwat
    I'm using XNA 4 with Visual Studio 2010. I'm trying to create a simple 3D world with walls and doors in which the user is free to roam around. A wall is just a rectangle, currently rendered from four vertices as a triangle strip. But to create a door I'd have to split the wall into three rectangles, as shown in the figure - four quadrilaterals if I want the following door style. It becomes more complex if I have multiple doors on the same wall, or windows. Is there any simpler way to handle this? I am looking for something that will just make the wall transparent wherever I want. I found a solution but am facing a problem with it here.

    Read the article

  • Data-driven animation states

    - by user8363
    I'm trying to handle animations in a 2D game engine hobby project, without hard-coding them. Hard coding animation states seems like a common but very strange phenomenon, to me. A little background: I'm working with an entity system where components are bags of data and subsystems act upon them. I chose to use a polling system to update animation states. With animation states I mean: "walking_left", "running_left", "walking_right", "shooting", ... My idea to handle animations was to design it as a data driven model. Data could be stored in an xml file, a rdbms, ... And could be loaded at the start of a game / level/ ... This way you can easily edit animations and transitions without having to go change the code everywhere in your game. As an example I made an xml draft of the data definitions I had in mind. One very important piece of data would simply be the description of an animation. An animation would have a unique id (a descriptive name). It would hold a reference id to an image (the sprite sheet it uses, because different animations may use different sprite sheets). The frames per second to run the animation on. The "replay" here defines if an animation should be run once or infinitely. Then I defined a list of rectangles as frames. <animation id='WIZARD_WALK_LEFT'> <image id='WIZARD_WALKING' /> <fps>50</fps> <replay>true</replay> <frames> <rectangle> <x>0</x> <y>0</y> <width>45</width> <height>45</height> </rectangle> <rectangle> <x>45</x> <y>0</y> <width>45</width> <height>45</height> </rectangle> </frames> </animation> Animation data would be loaded and held in an animation resource pool and referenced by game entities that are using it. It would be treated as a resource like an image, a sound, a texture, ... The second piece of data to define would be a state machine to handle animation states and transitions. This defines each state a game entity can be in, which states it can transition to and what triggers that state change. This state machine would differ from entity to entity. Because a bird might have states "walking" and "flying" while a human would only have the state "walking". However it could be shared by different entities because multiple humans will probably have the same states (especially when you define some common NPCs like monsters, etc). Additionally an orc might have the same states as a human. Just to demonstrate that this state definition might be shared but only by a select group of game entities. <state id='IDLE'> <event trigger='LEFT_DOWN' goto='MOVING_LEFT' /> <event trigger='RIGHT_DOWN' goto='MOVING_RIGHT' /> </state> <state id='MOVING_LEFT'> <event trigger='LEFT_UP' goto='IDLE' /> <event trigger='RIGHT_DOWN' goto='MOVING_RIGHT' /> </state> <state id='MOVING_RIGHT'> <event trigger='RIGHT_UP' goto='IDLE' /> <event trigger='LEFT_DOWN' goto='MOVING_LEFT' /> </state> These states can be handled by a polling system. Each game tick it grabs the current state of a game entity and checks all triggers. If a condition is met it changes the entity's state to the "goto" state. The last part I was struggling with was how to bind animation data and animation states to an entity. The most logical approach seemed to me to add a pointer to the state machine data an entity uses and to define for each state in that machine what animation it uses. Here is an xml example how I would define the animation behavior and graphical representation of some common entities in a game, by addressing animation state and animation data id. 
Note that both "wizard" and "orc" have the same animation states but a different animation. Also, a different animation could mean a different sprite sheet, or even a different sequence of animations (an animation could be longer or shorter). <entity name="wizard"> <state id="IDLE" animation="WIZARD_IDLE" /> <state id="MOVING_LEFT" animation="WIZARD_WALK_LEFT" /> </entity> <entity name="orc"> <state id="IDLE" animation="ORC_IDLE" /> <state id="MOVING_LEFT" animation="ORC_WALK_LEFT" /> </entity> When the entity is being created it would add a list of states with state machine data and an animation data reference. In the future I would use the entity system to build whole entities by defining components in a similar xml format. -- This is what I have come up with after some research. However I had some trouble getting my head around it, so I was hoping op some feedback. Is there something here what doesn't make sense, or is there a better way to handle these things? I grasped the idea of iterating through frames but I'm having trouble to take it a step further and this is my attempt to do that.
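
    As a concrete illustration of the polling idea described above, a small C++ sketch (types and names invented for the example) of the state machine stored purely as data - a map from (state, trigger) to the next state - so adding or editing transitions only touches the loaded table, never game code:

        #include <map>
        #include <string>
        #include <utility>

        using State   = std::string;   // e.g. "IDLE", "MOVING_LEFT"
        using Trigger = std::string;   // e.g. "LEFT_DOWN", "LEFT_UP"

        struct StateMachine {
            // filled from the XML / database at load time
            std::map<std::pair<State, Trigger>, State> transitions;

            // polled each tick: returns the (possibly unchanged) state
            State step(const State& current, const Trigger& trigger) const {
                auto it = transitions.find({current, trigger});
                return it != transitions.end() ? it->second : current;
            }
        };

    An entity component would then hold its current State plus a reference to one shared StateMachine and one table mapping State to an animation id, mirroring the <entity> definitions above.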

    Read the article

  • Collision Resolution

    - by ultifinitus
    Hey all, I'm making a simple side-scrolling game, and I would appreciate some input! My collision detection system is a simple bounding box detection, so it's really easy to implement. However my collision resolution is ridiculous! Currently I have a little formula like this: if (colliding(firstObject,secondObject)) firstObject.resolve_collision(yAxisOffset); if (colliding(firstObject,secondObject)) firstObject.resolve_collision(xAxisOffset); where yAxisOffset is only set if the first object's previous y position was outside the second object's collision frame, respectively xAxisOffset as well. Now this is working great, in general. However there is a single problem. When I have a stack of objects and I push the first object against that stack, the first object get's "stuck," on the stack. What's I think is happening is the object's collision system checks and resolves for collisions based on creation time, so If I check one axis, then the other, the object will "sink" object directly along the checking axis. This sinking action causes the collision detection routine to think there's a gap between our position and the other object's position, and when I finally check the object that I've already sunk into, my object's position is resolved to it's original position... All this is great, and I'm sure if I bang my head against a wall long enough i'll come up with a working algorithm, but I'd rather not =). So what in the heck do you think I should do? How could I change my collision resolution system to fix this? Here's the program (temporary link, not sure how long it'll last) (notes: arrow keys to navigate, click to drop block, x to jump) I'd appreciate any help you can offer!
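
    A common alternative to the fixed axis-order resolution described above is to resolve along the axis of least penetration, which avoids the "sinking along the checked axis" feedback that makes stacked objects sticky. A rough C++ sketch with an invented AABB struct (not the poster's engine API):

        #include <cmath>

        struct AABB { float x, y, w, h; };   // top-left corner plus size

        // Pushes 'a' out of 'b' along whichever axis overlaps the least.
        void resolve(AABB& a, const AABB& b) {
            float dx = (a.x + a.w / 2) - (b.x + b.w / 2);
            float dy = (a.y + a.h / 2) - (b.y + b.h / 2);
            float overlapX = (a.w + b.w) / 2 - std::fabs(dx);
            float overlapY = (a.h + b.h) / 2 - std::fabs(dy);
            if (overlapX <= 0 || overlapY <= 0) return;          // not actually colliding
            if (overlapX < overlapY)
                a.x += (dx < 0 ? -overlapX : overlapX);          // push out horizontally
            else
                a.y += (dy < 0 ? -overlapY : overlapY);          // push out vertically
        }

    Because the correction is the smallest one that separates the boxes, pushing sideways against a stack no longer makes the object tunnel down into a neighbour and snap back.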

    Read the article

  • Turning on collision crashes game

    - by MomentumGaming
    I am getting a null pointer excecption to both my sprite and level. I am working on my mob class, and when I try to move him and the move function is called, the game crashes after checking collision with a null pointer excecption. Taking out the one line that actually checks if the tile located in front of it fixes the problem. Also, if i keep collision ON but don't move the position of the mob (the spider) the game works fine. I will have collision, and the spider appears on the screen, only problem is, getting it to move causes this nasty error that i just can't fix. true Exception in thread "Display" java.lang.NullPointerException at com.apcompsci.game.entity.mob.Mob.collision(Mob.java:67) at com.apcompsci.game.entity.mob.Mob.move(Mob.java:38) at com.apcompsci.game.entity.mob.spider.update(spider.java:58) at com.apcompsci.game.level.Level.update(Level.java:55) at com.apcompsci.game.Game.update(Game.java:128) at com.apcompsci.game.Game.run(Game.java:106) at java.lang.Thread.run(Unknown Source) Here is my renderMob mehtod: public void renderMob(int xp,int yp,Sprite sprite,int flip) { xp -= xOffset; yp-=yOffset; for(int y = 0; y<32; y++) { int ya = y + yp; int ys = y; if(flip == 2||flip == 3)ys = 31-y; for(int x = 0; x<32; x++) { int xa = x + xp; int xs = x; if(flip == 1||flip == 3)xs = 31-x; if(xa < -32 || xa >=width || ya<0||ya>=height) break; if(xa<0) xa =0; int col = sprite.pixels[xs+ys*32]; if(col!= 0x000000) pixels[xa+ya*width] = col; } } } My spider class which determines the sprite and where I control movement, also rendering the spider onto the screen, when I increment ya to move the sprite, I get the crash, but without ya++, it runs flawlessly with a spider sprite on screen: package com.apcompsci.game.entity.mob; import com.apcompsci.game.entity.mob.Mob.Direction; import com.apcompsci.game.graphics.Screen; import com.apcompsci.game.graphics.Sprite; import com.apcompsci.game.level.Level; public class spider extends Mob{ Direction dir; private Sprite sprite; private boolean walking; public spider(int x, int y) { this.x = x <<4; this.y = y <<4; sprite = sprite.spider_forward; } public void update() { int xa = 0, ya = 0; ya++; if(ya<0) { sprite = sprite.spider_forward; dir = Direction.UP; } if(ya>0) { sprite = sprite.spider_back; dir = Direction.DOWN; } if(xa<0) { sprite = sprite.spider_side; dir = Direction.LEFT; } if(xa>0) { sprite = sprite.spider_side; dir = Direction.LEFT; } if(xa!= 0 || ya!= 0) { System.out.println("true"); move(xa,ya); walking = true; } else{ walking = false; } } public void render(Screen screen) { screen.renderMob(x, y, sprite, 0); } } This is th mob class that contains the move() method that is called in the spider class above. This move method calls the collision method. 
tile and sprite comes up null in the debugger: package com.apcompsci.game.entity.mob; import java.util.ArrayList; import java.util.List; import com.apcompsci.game.entity.Entity; import com.apcompsci.game.entity.projectile.DemiGodProjectile; import com.apcompsci.game.entity.projectile.Projectile; import com.apcompsci.game.graphics.Sprite; public class Mob extends Entity{ protected Sprite sprite; protected boolean moving = false; protected enum Direction { UP,DOWN,LEFT,RIGHT } protected Direction dir; public void move(int xa,int ya) { if(xa != 0 && ya != 0) { move(xa,0); move(0,ya); return; } if(xa>0) dir = Direction.RIGHT; if(xa<0) dir = Direction.LEFT; if(ya>0)dir = Direction.DOWN; if(ya<0)dir = Direction.UP; if(!collision(xa,ya)){ x+= xa; y+=ya; } } public void update() { } public void shoot(int x, int y, double dir) { //dir = Math.toDegrees(dir); Projectile p = new DemiGodProjectile(x, y,dir); level.addProjectile(p); } public boolean collision(int xa,int ya) { boolean solid = false; for(int c = 0; c<4; c++) { int xt = ((x+xa) + c % 2 * 14 - 8 )/16; int yt = ((y+ya) + c / 2 * 12 +3 )/16; if(level.getTile(xt, yt).solid()) solid = true; } return solid; } public void render() { } } Finally, here is the method in which i call the add() method for the spider to add it to the level: protected void loadLevel(String path) { try{ BufferedImage image = ImageIO.read(SpawnLevel.class.getResource(path)); int w = width =image.getWidth(); int h = height = image.getHeight(); tiles = new int[w*h]; image.getRGB(0, 0, w,h, tiles,0, w); } catch(IOException e){ e.printStackTrace(); System.out.println("Exception! Could not load level file!"); } add(new spider(20,45)); } I don't think i need to include the level class but just in case, I have provided a gistHub link for better context. It contains all of the full classes listed above , plus my entity class and maybe another. Thanks for the help if you decide to do so, much appreciated! Also, please tell me if i'm in the wrong section of stackeoverflow, i figured that since this is the gamign section that it belonged but debugging code normally goes into the general section.

    Read the article

  • SlimDX Texture2D from DataRectangle array

    - by Rebekah Bryant
    I'm totally new to DirectX. I'm using SlimDX to create a texture consisting of 13046 DataRectangles. Here's my code. It's breaking on the Texture2D constructor with "E_INVALIDARG: An invalid parameter was passed to the returning function (-2147024809)." inParms is just a struct containing handle to a Panel. public Renderer(Parameters inParms, ref DataRectangle[] inShapes) { Texture2DDescription description = new Texture2DDescription() { Width = 500, Height = 500, MipLevels = 1, ArraySize = inShapes.Length, Format = Format.R32G32B32_Float, SampleDescription = new SampleDescription(1, 0), Usage = ResourceUsage.Default, BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource, CpuAccessFlags = CpuAccessFlags.None, OptionFlags = ResourceOptionFlags.None }; SwapChainDescription chainDescription = new SwapChainDescription() { BufferCount = 1, IsWindowed = true, Usage = Usage.RenderTargetOutput, ModeDescription = new ModeDescription(0, 0, new Rational(60, 1), Format.R8G8B8A8_UNorm), SampleDescription = new SampleDescription(1, 0), Flags = SwapChainFlags.None, OutputHandle = inParms.Handle, SwapEffect = SwapEffect.Discard }; Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.None, chainDescription, out mDevice, out mSwapChain); Texture2D texture = new Texture2D(Device, description, inShapes); } EDIT: Running with the Debug flag set, I got: D3D11 ERROR: ID3D11Device::CreateTexture2D: The format (0x6, R32G32B32_FLOAT) cannot be bound as a RenderTarget, or cast to a format that could be bound as a RenderTarget. This is because the current graphics implementation does not even support this Format. Therefore this format does not support D3D11_BIND_RENDER_TARGET. Use CheckFormatSupport to check Format support. [ STATE_CREATION ERROR #92: CREATETEXTURE2D_UNSUPPORTEDFORMAT] D3D11 ERROR: ID3D11Device::CreateTexture2D: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #104: CREATETEXTURE2D_INVALIDARG_RETURN]

    Read the article

  • OpenGL: Filtering/antialising textures in a 2D game

    - by futlib
    I'm working on a 2D game using OpenGL 1.5 that uses rather large textures. I'm seeing aliasing effects and am wondering how to tackle them. I'm finding lots of material about antialiasing in 3D games, but I don't see how most of that applies to 2D games - e.g. anisotropic filtering seems to make no sense, and FSAA doesn't sound like the best bet either. I suppose this means texture filtering is my best option? Right now I'm using bilinear filtering, I think: glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); From what I've read, I'd have to use mipmaps to get trilinear filtering, which would drive memory usage up, so I'd rather not. I know the final sizes of all the textures when they are loaded, so can't I somehow size them correctly at that point (using some form of texture filtering)?
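
    For what it's worth, the memory cost of mipmaps is bounded: the full chain adds only about one third on top of the base level (1/4 + 1/16 + ... < 1/3), and minification aliasing on scaled-down large textures is exactly what they address. A hedged C++ sketch for the OpenGL 1.5-era API mentioned above, using GLU to build the chain and trilinear minification (textureId, width, height and pixels are assumed to exist):

        // Assumes an RGBA image already loaded into 'pixels' (width x height).
        glBindTexture(GL_TEXTURE_2D, textureId);
        gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, width, height,
                          GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    Pre-scaling each texture to its final on-screen size at load time is the zero-overhead alternative when the size really is fixed, since then the minification filter barely matters.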

    Read the article
