Search Results

Search found 44501 results on 1781 pages for 'software development life'.


  • Adding Vertices to a dynamic mesh via Method Call

    - by Raven Dreamer
    I have a C# Struct with a static method, "Get Shape" which populates a List with the vertices of a polyhedron. Method Signature: public static void GetShape(Block b, int x, int y, int z, List<Vector3> vertices, List<int> triangles, List<Vector2> uvs, List<Vector2> uv2s) Adding directly to the vertices list (via vertices.Add(vector3) ), the code works as expected, and the new polyhedron appears when I trigger the method. However, I want to do some processing on the vertices I'm adding (a rotation), and the most sensible way I can think to do that is by creating a separate list of Vector3s, and then combining the lists when I'm done. However, vertices.AddRange(newVerts) does not add the shape to the mesh, nor does a foreach loop with verts.Add(vertices[i]). And this is before I've added in any of the processing! I have a feeling this might stem from passing the list of vertices in as a parameter, rather than returning a list and then adding to the vertices in the calling object, but since I'm filling 4 lists, I was trying to avoid having to create a data struct to return all four at once. Any ideas? The working version of the method is reprinted below, in full: public static void GetShape(Block b, int x, int y, int z, List<Vector3> vertices, List<int> triangles, List<Vector2> uvs, List<Vector2> uv2s) { //List<Vector3> vertices = new List<Vector3>(); int l_blockShape = b.blockShape; int l_blockType = b.blockType; //CheckFace checks if the block is empty //if this block is empty, don't draw anything. int vertexIndex; //only y faces need to be hidden. //if((l_blockShape & BlockShape.NegZFace) == BlockShape.NegZFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.2f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.8f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.8f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.2f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //XY Z+1 face //if((l_blockShape & BlockShape.PosZFace) == BlockShape.PosZFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.2f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.2f, y , z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.8f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) 
uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZY face //if((l_blockShape & BlockShape.NegXFace) == BlockShape.NegXFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.2f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.2f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.8f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZY X+1 face // if((l_blockShape & BlockShape.PosXFace) == BlockShape.PosXFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.8f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.2f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZX face if((l_blockShape & BlockShape.NegYFace) == BlockShape.NegYFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y , z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.8f)); // first triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex); // second triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+1); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZX + 1 face if((l_blockShape & BlockShape.PosYFace) == BlockShape.PosYFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y+1 , z+.2f)); vertices.Add(new Vector3(x+.8f, y+1 , z+.8f)); vertices.Add(new Vector3(x+.2f, y+1 , z+.8f)); vertices.Add(new Vector3(x+.2f, y+1 , z+.2f)); // first triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex); // second triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+1); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } }
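    For what it's worth, List<Vector3> is a reference type in C#, so vertices.AddRange(newVerts) does mutate the caller's list; the append itself is unlikely to be the problem. A common gotcha with the two-list approach: if vertexIndex is still read from vertices.Count while the new points go into the temporary list, vertices.Count never changes, so every face's triangles reference the same stale base index and the mesh degenerates. A minimal sketch of rotating at insertion time instead, which avoids the temporary list entirely (Unity-style API assumed; AddRotatedFace and the pivot parameter are illustrative, not part of the original code):

        using System.Collections.Generic;
        using UnityEngine;

        public static class ShapeBuilder
        {
            // Rotates each corner about a pivot before appending, so the shared
            // vertex list and the triangle indices stay in sync.
            public static void AddRotatedFace(List<Vector3> vertices, List<int> triangles,
                                              Vector3[] faceCorners, Quaternion rotation, Vector3 pivot)
            {
                int vertexIndex = vertices.Count; // base offset read from the *shared* list
                foreach (Vector3 corner in faceCorners)
                    vertices.Add(pivot + rotation * (corner - pivot));

                // same two-triangles-per-quad winding as the method above
                triangles.Add(vertexIndex);     triangles.Add(vertexIndex + 1); triangles.Add(vertexIndex + 3);
                triangles.Add(vertexIndex + 1); triangles.Add(vertexIndex + 2); triangles.Add(vertexIndex + 3);
            }
        }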

    Read the article

  • Per-vertex animation with VBOs: VBO per character or VBO per animation?

    - by charstar
    Goal
    To leverage the richness of well-vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a key-frame + interpolation curve solver (reinventing Blender vertex groups and IPOs), if possible.
    Scenario
    Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). However, in either case, the animations are known in full at load-time, that is, there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will likely be at different frames of the same animation at the same time). Assume color and texture coordinate buffers are static.
    Current Considerations
    1. Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING).
    2. Create one VBO for each animation where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Then each character simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices+numNormals)*frameNumber). (VBO in STATIC)
    Known Trade-Offs
    In 1 above: each VBO would be small, but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame. Both client and pipeline intensive.
    In 2 above: there would be few VBOs, therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large. Are there any pitfalls to number 2 (aside from finite memory)? I've found a lot of information on what you can do, but no real best practices. Are there other considerations or methods that I am missing?
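    For what it's worth, option 2 is the usual choice for this scenario, and large-but-static VBOs are generally friendlier to drivers than many streaming ones. Beyond memory, the main pitfalls are losing interpolation between key frames (animation snaps unless you bind two frames and blend in a vertex shader) and the per-frame offset arithmetic being easy to get wrong. A sketch of that arithmetic (C# with OpenTK-style names and classic fixed-function pointers for brevity; the class and its fields are illustrative):

        using System;
        using OpenTK.Graphics.OpenGL;

        class AnimationVbo
        {
            readonly int vboId;       // filled once at load with every frame's data
            readonly int vertexCount; // vertices per frame
            const int FloatsPerVertex = 6; // 3 position + 3 normal, interleaved

            public AnimationVbo(int vboId, int vertexCount)
            {
                this.vboId = vboId;
                this.vertexCount = vertexCount;
            }

            int BytesPerFrame => vertexCount * FloatsPerVertex * sizeof(float);

            // A character is nothing but a frame number into the shared static VBO.
            public void DrawFrame(int frameNumber)
            {
                GL.BindBuffer(BufferTarget.ArrayBuffer, vboId);
                GL.EnableClientState(ArrayCap.VertexArray);
                GL.EnableClientState(ArrayCap.NormalArray);

                IntPtr frameOffset = (IntPtr)(frameNumber * BytesPerFrame);
                int stride = FloatsPerVertex * sizeof(float);
                GL.VertexPointer(3, VertexPointerType.Float, stride, frameOffset);
                GL.NormalPointer(NormalPointerType.Float, stride, IntPtr.Add(frameOffset, 3 * sizeof(float)));
                GL.DrawArrays(PrimitiveType.Triangles, 0, vertexCount);
            }
        }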

    Read the article

  • 2D OBB collision detection, resolving collisions?

    - by Milo
    I currently use OBBs and I have a vehicle that is a rigid body and some buildings. Here is my update() private void update() { camera.setPosition((vehicle.getPosition().x * camera.getScale()) - ((getWidth() ) / 2.0f), (vehicle.getPosition().y * camera.getScale()) - ((getHeight() ) / 2.0f)); //camera.move(input.getAnalogStick().getStickValueX() * 15.0f, input.getAnalogStick().getStickValueY() * 15.0f); if(input.isPressed(ControlButton.BUTTON_GAS)) { vehicle.setThrottle(1.0f, false); } if(input.isPressed(ControlButton.BUTTON_BRAKE)) { vehicle.setBrakes(1.0f); } vehicle.setSteering(input.getAnalogStick().getStickValueX()); vehicle.update(16.6666f / 1000.0f); ArrayList<Building> buildings = city.getBuildings(); for(Building b : buildings) { if(vehicle.getRect().overlaps(b.getRect())) { vehicle.update(-17.0f / 1000.0f); break; } } } The collision detection works well. What doesn't is how they are dealt with. My goal is simple. If the vehicle hits a building, it should stop, and never go into the building. When I apply negative torque to reverse the car should not feel buggy and move away from the building. I don't want this to look buggy. This is my rigid body class: class RigidBody extends Entity { //linear private Vector2D velocity = new Vector2D(); private Vector2D forces = new Vector2D(); private float mass; //angular private float angularVelocity; private float torque; private float inertia; //graphical private Vector2D halfSize = new Vector2D(); private Bitmap image; public RigidBody() { //set these defaults so we don't get divide by zeros mass = 1.0f; inertia = 1.0f; } //intialize out parameters public void initialize(Vector2D halfSize, float mass, Bitmap bitmap) { //store physical parameters this.halfSize = halfSize; this.mass = mass; image = bitmap; inertia = (1.0f / 20.0f) * (halfSize.x * halfSize.x) * (halfSize.y * halfSize.y) * mass; RectF rect = new RectF(); float scalar = 10.0f; rect.left = (int)-halfSize.x * scalar; rect.top = (int)-halfSize.y * scalar; rect.right = rect.left + (int)(halfSize.x * 2.0f * scalar); rect.bottom = rect.top + (int)(halfSize.y * 2.0f * scalar); setRect(rect); } public void setLocation(Vector2D position, float angle) { getRect().set(position, getWidth(), getHeight(), angle); } public Vector2D getPosition() { return getRect().getCenter(); } @Override public void update(float timeStep) { //integrate physics //linear Vector2D acceleration = Vector2D.scalarDivide(forces, mass); velocity = Vector2D.add(velocity, Vector2D.scalarMultiply(acceleration, timeStep)); Vector2D c = getRect().getCenter(); c = Vector2D.add(getRect().getCenter(), Vector2D.scalarMultiply(velocity , timeStep)); setCenter(c.x, c.y); forces = new Vector2D(0,0); //clear forces //angular float angAcc = torque / inertia; angularVelocity += angAcc * timeStep; setAngle(getAngle() + angularVelocity * timeStep); torque = 0; //clear torque } //take a relative Vector2D and make it a world Vector2D public Vector2D relativeToWorld(Vector2D relative) { Matrix mat = new Matrix(); float[] Vector2Ds = new float[2]; Vector2Ds[0] = relative.x; Vector2Ds[1] = relative.y; mat.postRotate(JMath.radToDeg(getAngle())); mat.mapVectors(Vector2Ds); return new Vector2D(Vector2Ds[0], Vector2Ds[1]); } //take a world Vector2D and make it a relative Vector2D public Vector2D worldToRelative(Vector2D world) { Matrix mat = new Matrix(); float[] Vectors = new float[2]; Vectors[0] = world.x; Vectors[1] = world.y; mat.postRotate(JMath.radToDeg(-getAngle())); mat.mapVectors(Vectors); return new Vector2D(Vectors[0], 
Vectors[1]); } //velocity of a point on body public Vector2D pointVelocity(Vector2D worldOffset) { Vector2D tangent = new Vector2D(-worldOffset.y, worldOffset.x); return Vector2D.add( Vector2D.scalarMultiply(tangent, angularVelocity) , velocity); } public void applyForce(Vector2D worldForce, Vector2D worldOffset) { //add linear force forces = Vector2D.add(forces ,worldForce); //add associated torque torque += Vector2D.cross(worldOffset, worldForce); } @Override public void draw( GraphicsContext c) { c.drawRotatedScaledBitmap(image, getPosition().x, getPosition().y, getWidth(), getHeight(), getAngle()); } } Essentially, when any rigid body hits a building it should exhibit the same behavior. How is collision solving usually done? Thanks
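    The buggy feel likely comes from rolling back the whole integration step (vehicle.update(-17.0f / 1000.0f)): that rewinds rotation too and leaves the car overlapping again on the next frame. The usual minimal fix is to resolve the overlap instead of rewinding time: take the minimum translation vector from the same separating-axis test the OBB overlap check already performs, push the body out along it, and remove the velocity component pointing into the wall. A sketch of that resolution step (C# with System.Numerics for brevity; the logic ports directly to the Java above, and the contact normal and penetration depth are assumed to come from your overlap test):

        using System.Numerics;

        static class CollisionResolver
        {
            // Resolves a rigid body against a static obstacle (a building).
            public static void ResolveAgainstStatic(ref Vector2 position, ref Vector2 velocity,
                                                    Vector2 contactNormal, float penetrationDepth,
                                                    float restitution = 0f)
            {
                // 1) positional correction: move the body out so it never rests inside the wall
                position += contactNormal * penetrationDepth;

                // 2) impulse: remove (and optionally bounce) the velocity component into the wall,
                //    keeping the tangential component so the car slides instead of sticking
                float vn = Vector2.Dot(velocity, contactNormal);
                if (vn < 0f)
                    velocity -= (1f + restitution) * vn * contactNormal;
            }
        }

    Applying the impulse at the contact point (feeding it back through your applyForce-style torque) adds the expected rotation when the car clips a corner; the two-step version above is the stable core.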

    Read the article

  • Sprite sheets, Clamp or Wrap?

    - by David
    I'm using a combination of sprite sheets for, well, sprites, and individual textures for infinite tiling. For the tiling textures I'm obviously using Wrap to draw the entire surface in one call, but up until now I've been making a separate batch using Clamp for drawing sprites from the sprite sheets. The sprite sheets include a border (repeating the edge pixels of each sprite) and my code uses the correct source coordinates for sprites. But since I'm never giving coordinates outside of the texture when drawing sprites (and indeed the border exists to prevent bleed-over when filtering), it's struck me that I'd be better off just using Wrap so that I can combine everything into one batch. I just want to be sure that I haven't overlooked something obvious. Is there any reason that Wrap would be harmful when used with a sprite sheet?
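    For what it's worth, the address mode only matters for coordinates outside [0,1], and atlas source rectangles never leave the texture, so Wrap should be harmless here. The usual caveat is mipmapping, where bleed comes from downsampling rather than addressing, and the edge padding is still what protects you. A sketch of the combined batch (XNA 4 style; the texture and rectangle names are illustrative):

        // One batch for both the tiling surface and the atlas sprites.
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                          SamplerState.LinearWrap, null, null);
        spriteBatch.Draw(tilingTexture, destinationRect, repeatingSourceRect, Color.White); // wraps past [0,1]
        spriteBatch.Draw(atlasTexture, spritePosition, spriteSourceRect, Color.White);      // stays inside [0,1]
        spriteBatch.End();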

    Read the article

  • Why do I get this file loading exception when trying to draw sprites with libgdx?

    - by BluFire
    I'm having trouble with the "Drawing Images" section of the libgdx tutorial. I set up the projects completely and typed the code as follows: public class Game implements ApplicationListener { public static final String LOG = Game.class.getSimpleName(); private FPSLogger fpsLogger; private SpriteBatch batch; private Texture texture; private Sprite sprite; private TextureRegion region; //removed irrelevant code for this question... @Override public void render() { texture = new Texture(Gdx.files.internal("android.png")); region = new TextureRegion(texture, 20, 20, 50, 50); sprite = new Sprite(texture, 20, 20, 50, 50); sprite.setPosition(10, 10); sprite.setRotation(45); Gdx.gl.glClearColor(0f, 1f, 0f, 1f); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); batch.begin(); batch.draw(texture,10,10); batch.draw(region,10,10); sprite.draw(batch); batch.end(); // output the current FPS fpsLogger.log(); } } I went through the tutorial on the website, but when I run the code I get errors: Exception in thread "LWJGL Application" com.badlogic.gdx.utils.GdxRuntimeException: Couldn't load file: android.png at com.badlogic.gdx.graphics.Pixmap.<init>(Pixmap.java:137) at com.badlogic.gdx.graphics.glutils.FileTextureData.prepare(FileTextureData.java:55) at com.badlogic.gdx.graphics.Texture.load(Texture.java:175) at com.badlogic.gdx.graphics.Texture.create(Texture.java:159) at com.badlogic.gdx.graphics.Texture.<init>(Texture.java:133) at com.badlogic.gdx.graphics.Texture.<init>(Texture.java:122) at com.game.Game.render(Game.java:46) at com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop (LwjglApplication.java:163) at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:113) Caused by: com.badlogic.gdx.utils.GdxRuntimeException: File not found: android.png (Internal) at com.badlogic.gdx.files.FileHandle.read(FileHandle.java:108) at com.badlogic.gdx.files.FileHandle.length(FileHandle.java:364) at com.badlogic.gdx.files.FileHandle.readBytes(FileHandle.java:156) at com.badlogic.gdx.graphics.Pixmap.<init>(Pixmap.java:134) ... 8 more I put android.png in the assets folder of my Android project and linked it to the desktop one; I don't understand what I'm doing wrong. What is causing these errors? EDIT: Fixed, but with a weird ending. The sprite (a plus shape) now draws, but it comes out warped: the top-right corner of the image shows what the sprite is supposed to look like, while the bottom-left shows what the code actually produced. I think it was because of the texture region, but I'm not 100% sure. Can somebody explain why it is so warped? I thought the changes I made in the code would only change position/rotation, rather than the image itself.

    Read the article

  • How do I move the camera sideways in Libgdx?

    - by Bubblewrap
    I want to move the camera sideways (strafe). I had the following in mind, but it doesn't look like there are standard methods to achieve this in Libgdx. If I want to move the camera sideways by x, I think I need to do the following:
    1. Create a Matrix4 mat
    2. Determine the orthogonal vector v between camera.direction and camera.up
    3. Translate mat by v*x
    4. Multiply camera.position by mat
    Will this approach do what I think it does, and is it a good way to do it? And how can I do this in libgdx? I get "stuck" at step 2, as I have not found any standard method in Libgdx to calculate an orthogonal vector. EDIT: I think I can use camera.direction.crs(camera.up) to find v. I'll try this approach tonight and see if it works. EDIT2: I got it working and didn't need the matrix after all: Vector3 right = camera.direction.cpy().crs(camera.up).nor(); camera.position.add(right.mul(x));

    Read the article

  • Custom Gesture in cocos2d

    - by Lewis
    I've found a little tutorial that would be useful for my game: http://blog.mellenthin.de/archives/2012/02/13/an-one-finger-rotation-gesture-recognizer/ But I can't work out how to convert that gesture to work with cocos2d, I have found examples of pre made gestures in cocos2d, but no custom ones, is it possible? EDIT STILL HAVING PROBLEMS WITH THIS: I've added the code from Sentinel below (from SO), the Gesture and RotateGesture have both been added to my solution and are compiling. Although In the rotation class now I only see selectors, how do I set those up? As the custom gesture found in that project above looks like: header file for custom gesture: #import <Foundation/Foundation.h> #import <UIKit/UIGestureRecognizerSubclass.h> @protocol OneFingerRotationGestureRecognizerDelegate <NSObject> @optional - (void) rotation: (CGFloat) angle; - (void) finalAngle: (CGFloat) angle; @end @interface OneFingerRotationGestureRecognizer : UIGestureRecognizer { CGPoint midPoint; CGFloat innerRadius; CGFloat outerRadius; CGFloat cumulatedAngle; id <OneFingerRotationGestureRecognizerDelegate> target; } - (id) initWithMidPoint: (CGPoint) midPoint innerRadius: (CGFloat) innerRadius outerRadius: (CGFloat) outerRadius target: (id) target; - (void)reset; - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event; @end .m for custom gesture file: #include <math.h> #import "OneFingerRotationGestureRecognizer.h" @implementation OneFingerRotationGestureRecognizer // private helper functions CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2); CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB); - (id) initWithMidPoint: (CGPoint) _midPoint innerRadius: (CGFloat) _innerRadius outerRadius: (CGFloat) _outerRadius target: (id <OneFingerRotationGestureRecognizerDelegate>) _target { if ((self = [super initWithTarget: _target action: nil])) { midPoint = _midPoint; innerRadius = _innerRadius; outerRadius = _outerRadius; target = _target; } return self; } /** Calculates the distance between point1 and point 2. 
*/ CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2) { CGFloat dx = point1.x - point2.x; CGFloat dy = point1.y - point2.y; return sqrt(dx*dx + dy*dy); } CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB) { CGFloat a = endLineA.x - beginLineA.x; CGFloat b = endLineA.y - beginLineA.y; CGFloat c = endLineB.x - beginLineB.x; CGFloat d = endLineB.y - beginLineB.y; CGFloat atanA = atan2(a, b); CGFloat atanB = atan2(c, d); // convert radiants to degrees return (atanA - atanB) * 180 / M_PI; } #pragma mark - UIGestureRecognizer implementation - (void)reset { [super reset]; cumulatedAngle = 0; } - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesBegan:touches withEvent:event]; if ([touches count] != 1) { self.state = UIGestureRecognizerStateFailed; return; } } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesMoved:touches withEvent:event]; if (self.state == UIGestureRecognizerStateFailed) return; CGPoint nowPoint = [[touches anyObject] locationInView: self.view]; CGPoint prevPoint = [[touches anyObject] previousLocationInView: self.view]; // make sure the new point is within the area CGFloat distance = distanceBetweenPoints(midPoint, nowPoint); if ( innerRadius <= distance && distance <= outerRadius) { // calculate rotation angle between two points CGFloat angle = angleBetweenLinesInDegrees(midPoint, prevPoint, midPoint, nowPoint); // fix value, if the 12 o'clock position is between prevPoint and nowPoint if (angle > 180) { angle -= 360; } else if (angle < -180) { angle += 360; } // sum up single steps cumulatedAngle += angle; // call delegate if ([target respondsToSelector: @selector(rotation:)]) { [target rotation:angle]; } } else { // finger moved outside the area self.state = UIGestureRecognizerStateFailed; } } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesEnded:touches withEvent:event]; if (self.state == UIGestureRecognizerStatePossible) { self.state = UIGestureRecognizerStateRecognized; if ([target respondsToSelector: @selector(finalAngle:)]) { [target finalAngle:cumulatedAngle]; } } else { self.state = UIGestureRecognizerStateFailed; } cumulatedAngle = 0; } - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesCancelled:touches withEvent:event]; self.state = UIGestureRecognizerStateFailed; cumulatedAngle = 0; } @end Then its initialised like this: // calculate center and radius of the control CGPoint midPoint = CGPointMake(image.frame.origin.x + image.frame.size.width / 2, image.frame.origin.y + image.frame.size.height / 2); CGFloat outRadius = image.frame.size.width / 2; // outRadius / 3 is arbitrary, just choose something >> 0 to avoid strange // effects when touching the control near of it's center gestureRecognizer = [[OneFingerRotationGestureRecognizer alloc] initWithMidPoint: midPoint innerRadius: outRadius / 3 outerRadius: outRadius target: self]; [self.view addGestureRecognizer: gestureRecognizer]; The selector below is also in the same file where the initialisation of the gestureRecogonizer: - (void) rotation: (CGFloat) angle { // calculate rotation angle imageAngle += angle; if (imageAngle > 360) imageAngle -= 360; else if (imageAngle < -360) imageAngle += 360; // rotate image and update text field image.transform = CGAffineTransformMakeRotation(imageAngle * M_PI / 180); [self updateTextDisplay]; } I can't seem to get this working in the RotateGesture class can anyone help me please 
I've been stuck on this for days now. SECOND EDIT: Here is the users code from SO that was suggested to me: Here is projec on GitHub: SFGestureRecognizers It uses builded in iOS UIGestureRecognizer, and don't needs to be integrated into cocos2d sources. Using it, You can make any gestures, just like you could, if you whould work with UIGestureRecognizer. For example: I made a base class Gesture, and subclassed it for any new gesture: //Gesture.h @interface Gesture : NSObject <UIGestureRecognizerDelegate> { UIGestureRecognizer *gestureRecognizer; id delegate; SEL preSolveSelector; SEL possibleSelector; SEL beganSelector; SEL changedSelector; SEL endedSelector; SEL cancelledSelector; SEL failedSelector; BOOL preSolveAvailable; CCNode *owner; } - (id)init; - (void)addGestureRecognizerToNode:(CCNode*)node; - (void)removeGestureRecognizerFromNode:(CCNode*)node; -(void)recognizer:(UIGestureRecognizer*)recognizer; @end //Gesture.m #import "Gesture.h" @implementation Gesture - (id)init { if (!(self = [super init])) return self; preSolveAvailable = YES; return self; } - (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer { return YES; } - (BOOL)gestureRecognizer:(UIGestureRecognizer *)recognizer shouldReceiveTouch:(UITouch *)touch { //! For swipe gesture recognizer we want it to be executed only if it occurs on the main layer, not any of the subnodes ( main layer is higher in hierarchy than children so it will be receiving touch by default ) if ([recognizer class] == [UISwipeGestureRecognizer class]) { CGPoint pt = [touch locationInView:touch.view]; pt = [[CCDirector sharedDirector] convertToGL:pt]; for (CCNode *child in owner.children) { if ([child isNodeInTreeTouched:pt]) { return NO; } } } return YES; } - (void)addGestureRecognizerToNode:(CCNode*)node { [node addGestureRecognizer:gestureRecognizer]; owner = node; } - (void)removeGestureRecognizerFromNode:(CCNode*)node { [node removeGestureRecognizer:gestureRecognizer]; } #pragma mark - Private methods -(void)recognizer:(UIGestureRecognizer*)recognizer { CCNode *node = recognizer.node; if (preSolveSelector && preSolveAvailable) { preSolveAvailable = NO; [delegate performSelector:preSolveSelector withObject:recognizer withObject:node]; } UIGestureRecognizerState state = [recognizer state]; if (state == UIGestureRecognizerStatePossible && possibleSelector) { [delegate performSelector:possibleSelector withObject:recognizer withObject:node]; } else if (state == UIGestureRecognizerStateBegan && beganSelector) [delegate performSelector:beganSelector withObject:recognizer withObject:node]; else if (state == UIGestureRecognizerStateChanged && changedSelector) [delegate performSelector:changedSelector withObject:recognizer withObject:node]; else if (state == UIGestureRecognizerStateEnded && endedSelector) { preSolveAvailable = YES; [delegate performSelector:endedSelector withObject:recognizer withObject:node]; } else if (state == UIGestureRecognizerStateCancelled && cancelledSelector) { preSolveAvailable = YES; [delegate performSelector:cancelledSelector withObject:recognizer withObject:node]; } else if (state == UIGestureRecognizerStateFailed && failedSelector) { preSolveAvailable = YES; [delegate performSelector:failedSelector withObject:recognizer withObject:node]; } } @end Subclass example: //RotateGesture.h #import "Gesture.h" @interface RotateGesture : Gesture - (id)initWithTarget:(id)target preSolveSelector:(SEL)preSolve 
possibleSelector:(SEL)possible beganSelector:(SEL)began changedSelector:(SEL)changed endedSelector:(SEL)ended cancelledSelector:(SEL)cancelled failedSelector:(SEL)failed; @end //RotateGesture.m #import "RotateGesture.h" @implementation RotateGesture - (id)initWithTarget:(id)target preSolveSelector:(SEL)preSolve possibleSelector:(SEL)possible beganSelector:(SEL)began changedSelector:(SEL)changed endedSelector:(SEL)ended cancelledSelector:(SEL)cancelled failedSelector:(SEL)failed { if (!(self = [super init])) return self; preSolveSelector = preSolve; delegate = target; possibleSelector = possible; beganSelector = began; changedSelector = changed; endedSelector = ended; cancelledSelector = cancelled; failedSelector = failed; gestureRecognizer = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(recognizer:)]; gestureRecognizer.delegate = self; return self; } @end Use example: - (void)addRotateGesture { RotateGesture *rotateRecognizer = [[RotateGesture alloc] initWithTarget:self preSolveSelector:@selector(rotateGesturePreSolveWithRecognizer:node:) possibleSelector:nil beganSelector:@selector(rotateGestureStateBeganWithRecognizer:node:) changedSelector:@selector(rotateGestureStateChangedWithRecognizer:node:) endedSelector:@selector(rotateGestureStateEndedWithRecognizer:node:) cancelledSelector:@selector(rotateGestureStateCancelledWithRecognizer:node:) failedSelector:@selector(rotateGestureStateFailedWithRecognizer:node:)]; [rotateRecognizer addGestureRecognizerToNode:movableAreaSprite]; } I dont understand how to implement the custom gesture code at the start of this post into the rotateGesture class which is a subclass of the gesture class written by the SO user. Any ideas please? When I get 6 more rep I'll add a bounty to this.

    Read the article

  • Clientside anticheating in multiplayer game 1vs1

    - by garnav
    I'm developing a simple card game with a matchmaking system that will put you against another human player. This will be the only game mode available: 1vs1 against another human, no AI. I want to prevent cheating as much as possible. I have already read a lot of similar questions here, and I already know that I cannot trust the client and have to make all verifications server side. I intend to have a server (I need one for the matchmaking anyway) and to make some verifications server side, but checking everything server side means my server has to keep track of the state of all current games and check every action, and I don't have the money/infrastructure to support that server. My idea is to have clients check and verify some of the actions made by their opponent*, and if they find an illegal action, report the possible cheating to the server and have the server verify it. This will still require my server to keep track of all current games, but it will save resources by only checking things that cannot be checked client side (like card order in the deck) and only checking other things when they are actually flagged as wrong. *(Only those they can check without enabling cheating themselves! For example, they can't check whether a played card was in the opponent's hand, because that would require knowing all the cards in that hand.) Summing up, my questions are: Is this a viable approach? Will I actually save resources doing this, or is the extra complexity in the server and client for exchanging these messages not worth it? Do you know any game that has successfully or unsuccessfully tried a similar approach? Thanks all for reading and answering.
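    A hedged sketch of one way to keep the server thin: make games cheap to audit rather than fully tracked. If the server shuffles each deck once at match start and stores only a salted hash commitment of the order, it can replay and verify a game lazily, only when a client flags a suspicious action, and either client can verify the revealed deck afterwards without ever learning it mid-game. The names and protocol details below are illustrative, not a complete design:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class DeckCommitment
        {
            // Server computes this at match start and hands it to both clients.
            public static string Commit(string[] deckOrder, string salt)
            {
                string payload = salt + ":" + string.Join(",", deckOrder);
                using (var sha = SHA256.Create())
                    return Convert.ToHexString(sha.ComputeHash(Encoding.UTF8.GetBytes(payload))); // .NET 5+; BitConverter on older runtimes
            }

            // After the match (or on a cheating report) the salt and order are revealed,
            // and any party can check that no one deviated from the committed deck.
            public static bool Verify(string[] revealedOrder, string salt, string commitment)
                => Commit(revealedOrder, salt) == commitment;
        }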

    Read the article

  • Algorithm to find all tiles within a given radius on staggered isometric map

    - by kasztelan
    Given a staggered isometric map and a start tile, what would be the best way to get all surrounding tiles within a given radius (middle to middle)? I can get all neighbours of a given tile and the distance between each of them without any problems, but I'm not sure what path to take after that. This feature will be used quite often (along with A*), so I'd like to avoid unnecessary calculations. If it makes any difference, I'm using XNA and each tile is 64x32 pixels.
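    Two hedged options, depending on what the radius means. If it's straight-line middle-to-middle distance, skip graph traversal entirely: loop over the tiles in the circle's bounding box and keep those whose centre passes a squared-distance test. If it's walking distance through neighbours, a flood fill from the start tile using the neighbour/distance functions you already have visits each tile at most a handful of times and never expands past the radius, so nothing outside the circle is touched. A sketch of the flood (C#; TTile and the two delegates stand in for whatever you already have):

        using System;
        using System.Collections.Generic;

        static class TileSearch
        {
            public static HashSet<TTile> WithinRadius<TTile>(TTile start, float radius,
                Func<TTile, IEnumerable<TTile>> getNeighbours,
                Func<TTile, TTile, float> stepDistance)
            {
                var best = new Dictionary<TTile, float> { [start] = 0f }; // best known distance per tile
                var frontier = new Queue<TTile>();
                frontier.Enqueue(start);

                while (frontier.Count > 0)
                {
                    TTile tile = frontier.Dequeue();
                    foreach (TTile next in getNeighbours(tile))
                    {
                        float d = best[tile] + stepDistance(tile, next);
                        // re-enqueue on improvement so uneven step costs still settle correctly
                        if (d <= radius && (!best.TryGetValue(next, out float old) || d < old))
                        {
                            best[next] = d;
                            frontier.Enqueue(next);
                        }
                    }
                }
                return new HashSet<TTile>(best.Keys);
            }
        }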

    Read the article

  • Detect two specific objects collision with bullet physics

    - by sebap123
    I have got a problem with detecting collisions between specific objects in my game using Bullet physics. I know that objects collide with each other automatically and I don't have to do anything more. However, I need to be notified when one object collides with one of the rest. That's awkward to put into words, so here is what I want to achieve: I have a ball which hits a wall made from tubes. Everything is on the floor. When the ball hits the wall, some fragments fall down to infinity, so below the floor I have a btStaticPlaneShape. This is where most of the objects come to rest, and then I can start another action. But not all of them. So I've been trying to use the function checkCollideWith, but it isn't a good method, as was said in the reference and the wiki. So I checked the method described in the wiki http://bulletphysics.org/mediawiki-1.5.8/index.php/Collision_Callbacks_and_Triggers called contact information. This isn't a good method either, because it is extremely hard to identify what is what when colliding. You also have to remember that the ball is colliding with something almost all the time - the floor, the wall or ground level. So is there any other method to check what is colliding with what?

    Read the article

  • glm quaternion camera rotating on wrong axis

    - by Jarrett
    I'm trying to get my camera implemented with a glm::quat used to store the rotation. However, whenever I do circles with the mouse, the camera rotates around the axis I am viewing along (i.e. I think it's called the target axis). For example, if I rotate the mouse in a clockwise fashion, the camera rotates clockwise around that axis. I initialize my quaternion like so: void Camera::initialize() { orientationQuaternion_ = glm::quat(); orientationQuaternion_ = glm::normalize(orientationQuaternion_); } I rotate like so: void Camera::rotate(const glm::detail::float32& degrees, const glm::vec3& axis) { orientationQuaternion_ = orientationQuaternion_ * glm::normalize(glm::angleAxis(degrees, axis)); } and I set the viewMatrix like so: void Camera::render() { glm::quat temp = glm::conjugate(orientationQuaternion_); viewMatrix_ = glm::mat4_cast(temp); viewMatrix_ = glm::translate(viewMatrix_, glm::vec3(-pos_.x, -pos_.y, -pos_.z)); } The only axes I actually try to rotate around are the X and Y axes (i.e. (1,0,0) and (0,1,0)). Anyone have any idea why I see my camera rotating around the target axis?
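    That roll creep is the classic symptom of composing pitch and yaw on the same side of the orientation: each circle of the mouse leaves a net rotation around the view axis. The common fixes are either to apply yaw in world space and pitch in local space (in glm terms, something like orientationQuaternion_ = yawQuat * orientationQuaternion_ * pitchQuat), or to sidestep multiplication order entirely by accumulating yaw/pitch angles and rebuilding the quaternion each time, which also makes clamping pitch trivial. A sketch of the second, convention-proof option (C#/System.Numerics for brevity; the idea maps one-to-one to glm):

        using System;
        using System.Numerics;

        sealed class FpsCamera
        {
            float yaw, pitch; // radians

            public Quaternion Orientation { get; private set; } = Quaternion.Identity;

            public void Rotate(float yawDelta, float pitchDelta)
            {
                yaw   += yawDelta;
                pitch  = Math.Clamp(pitch + pitchDelta, -1.55f, 1.55f); // roughly +/-89 degrees
                // Rebuilt from the angles every time, so roll can never accumulate.
                Orientation = Quaternion.CreateFromYawPitchRoll(yaw, pitch, 0f);
            }
        }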

    Read the article

  • Making an AI walk on a NavigationMesh (2D/Top-Down game)

    - by Lennard Fonteijn
    For some time I have been working on a framework which should make it possible to generate 2D levels based on a set of rules specified by level designers. You can read more about it here, as I won't go into details: http://www.jorisdormans.nl/article.php?ref=engineering_emergence Anyway, I'm now at the point of putting the framework to use and am having trouble coming up with a solution for AI. I decided to implement a NavigationMesh in the generated levels, as I already have that information to start with. Consider the following image (borrowed from http://www.david-gouveia.com/pathfinding-on-a-2d-polygonal-map/): When I run A* on the NavigationMesh, the red path is suggested when I want to go from point A to B (either direction). However, I don't want my AI to walk that path directly, clipping corners; I'd rather have them follow the more logical black path. How would I go about going from the red path to the black path? Are there any algorithms for this? Which steps do I take? Is A* the proper solution for this at all? For some additional information: the proof-of-concept game is a 2D top-down game written in C#, but examples/references in any language are welcome!
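    The usual recipe is two separate fixes. First, shrink the walkable polygons by the agent's radius (equivalently, inflate the obstacles) so no legal path can hug a wall; that is what turns the corner-clipping red path into something like the black one. Second, run A* on the shrunken mesh and then "string-pull" the waypoint list so the agent walks straight wherever it has a clear line, instead of visiting every polygon edge midpoint. A* itself is still the right tool; it just needs better input and a smoothing pass. A sketch of the smoothing pass (C#; hasLineOfSight stands in for whatever visibility test the framework provides):

        using System;
        using System.Collections.Generic;

        static class PathSmoother
        {
            // Keeps only the corners that are actually needed, given a line-of-sight test.
            public static List<TPoint> Smooth<TPoint>(IReadOnlyList<TPoint> waypoints,
                                                      Func<TPoint, TPoint, bool> hasLineOfSight)
            {
                var result = new List<TPoint>();
                if (waypoints.Count == 0) return result;

                int anchor = 0;
                result.Add(waypoints[0]);
                while (anchor < waypoints.Count - 1)
                {
                    // greedily jump to the furthest waypoint still visible from the anchor
                    int furthest = anchor + 1;
                    for (int i = waypoints.Count - 1; i > anchor + 1; i--)
                        if (hasLineOfSight(waypoints[anchor], waypoints[i])) { furthest = i; break; }
                    result.Add(waypoints[furthest]);
                    anchor = furthest;
                }
                return result;
            }
        }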

    Read the article

  • XNA Masking Mayhem

    - by TropicalFlesh
    I'd like to start by mentioning that I'm just an amateur programmer of the past 2 years with no formal training, and I know very little about maximizing the potential of graphics hardware. I can write shaders and manipulate a multi-layered drawing environment, but I've basically stuck to minimalist pixel shaders. I'm working on putting dynamic point-light shadows in my 2D sidescroller, and have had it working to a reasonable degree. Just chucking it in without working on serious optimizations outside of basic culling, I can get 50 lights or so onscreen at once and still hover around 100 fps. The only issue is that I'm on a very high-end machine and would like to target the game at as many platforms as I can, low and high end. The way I'm doing shadows involves a lot of masking before I can finally draw the light to my light layer. Basically, my technique for achieving such shadows is as follows. See pics in this album http://imgur.com/a/m2fWw#0 The dark gray represents the background tiles, the light gray represents the foreground tiles, and the yellow represents the shadow-emitting foreground tile. I'll draw the light using a radial gradient and a color of choice. I'll then exclude light from the mask by drawing some geometry extending through the tile from my point light. I actually don't mask the light yet at this point, but I'm just illustrating the technique in this image. Finally, I'll re-include the foreground layer in my mask, as I only want shadows to collect on the background layer, and finally multiply the light with its mask to the light layer. My question is simple - how can I go about reducing the number of render-target switches I need to do to achieve the following:
    a. Draw the mask that excludes shadows from the foreground to its own target once per frame.
    b. For each light that emits shadows:
       - Begin the light mask as full white.
       - Render shadow geometry as transparent with an opaque blend mode to eliminate shadowed areas from the mask.
       - Render the foreground mask back over the light mask to reintroduce light to the foreground.
    c. Multiply each light texture with its individual mask to the main light layer.

    Read the article

  • Can I use GLFW and GLEW together in the same code

    - by Brendan Webster
    I use the g++ compiler, which could be causing the main problem, but I'm using GLFW for window and input management, and I am using GLEW so that I can use OpenGL 3.x functionality. I loaded in models and then tried to make Vertex and Index buffers for the data, but it turned out that I kept getting segmentation faults in the program. I finally figured out that GLEW just wasn't working with GLFW included. Do they not work together? Also I've done the context creation through GLFW so that may be another factor in the problem.

    Read the article

  • HTML5 game engine for a 2D or 2.5D RPG style "map walk"

    - by stargazer
    Please help me to choose an HTML5 game engine or JavaScript libraries. I want to do the following in the game:
    - When the game starts, a part of the huge map (full size of the map: about 7 screens) is shown. The map itself is completely designed in the editor mapeditor.org (or in some comparable editor - if you know a good alternative to mapeditor.org, let me know) and loaded at runtime or at design time.
    - The game engine should support loading of isometric maps (well, in the worst case orthogonal maps will be sufficient); both the "tile layer" and "object layer" from mapeditor.org should be supported.
    - Scrolling/performance of this map should be fast enough. The map and the game should be either in 2D (orthogonal map) or in 2.5D (isometric map).
    - The game engine should support movement of sprites with animation. Let's say I have a sprite for "human" with animation sequences showing "walking" in 8 directions - it should be imported into the game engine and should "walk" on the map without writing a lot of JavaScript code.
    - Automatic scrolling of the map when the "human" nears the screen border.
    - Collision detection, "solid" objects. mapeditor.org supports properties on tiles. Let's say I assign a "solid" property to some tiles in the editor; it should be easy to check this "solid" property in the game engine and implement a kind of "solid" behavior, so the animated sprites do not walk through walls.
    - Collision detection - it should be easy to implement some custom functionality like "when sprite A is close to sprite B, call this function".
    - Showing "dialogs" or popup windows on top of the map should be easy to implement.
    - Cross-browser audio support (it is implemented quite well in Construct 2 from Scirra, so I'm looking for comparable audio quality).
    The game itself is a kind of RPG, but without fighting scenes and without a huge "inventory". The main character just walks on the map, discovers some things, and there are dialogs and sounds. The functionality of this example from sprite.js http://batiste.dosimple.ch/sprite.js/tests/mapeditor/map_reader.html is very close to what I'm developing. But I'm not a JavaScript guru (and a very lazy guy) and would like to write even less JavaScript code than in the example...

    Read the article

  • How to fix OpenGL Co-ordinate System in SFML?

    - by Marc Alexander Reed
    My OpenGL setup is somehow configured to work like so:
    (-1, 1)  (0, 1)  (1, 1)
    (-1, 0)  (0, 0)  (1, 0)
    (-1, -1) (0, -1) (1, -1)
    How do I configure it so that it works like so:
    (0, 0)    (SW/2, 0)    (SW, 0)
    (0, SH/2) (SW/2, SH/2) (SW, SH/2)
    (0, SH)   (SW/2, SH)   (SW, SH)
    SW is the screen width and SH is the screen height. The solution would also have to fix the problem that I can't translate significantly on the Z axis; depth doesn't seem to be working either. The perspective code I'm using is that of my working GLUT OpenGL code, which has a cool 3D grid and camera system etc. But my OpenGL setup doesn't seem to work with SFML. Help me guys. :( Thanks in advance. :) #include <SFML/Window.hpp> #include <SFML/Graphics.hpp> #include <SFML/Audio.hpp> #include <SFML/Network.hpp> #include <SFML/OpenGL.hpp> #include "ResourcePath.hpp" //Mac-only #define _USE_MATH_DEFINES #include <cmath> double screen_width = 640.f; double screen_height = 480.f; int main (int argc, const char **argv) { sf::ContextSettings settings; settings.depthBits = 24; settings.stencilBits = 8; settings.antialiasingLevel = 2; sf::Window window(sf::VideoMode(screen_width, screen_height, 32), "SFML OpenGL", sf::Style::Close, settings); window.setActive(); glEnable(GL_DEPTH_TEST); glEnable(GL_LIGHTING); glEnable(GL_LIGHT0); glEnable(GL_NORMALIZE); glEnable(GL_COLOR_MATERIAL); glShadeModel(GL_SMOOTH); glViewport(0, 0, screen_width, screen_height); glMatrixMode(GL_PROJECTION); glLoadIdentity(); //glOrtho(0.0f, screen_width, screen_height, 0.0f, -100.0f, 100.0f); gluPerspective(45.0f, (double) screen_width / (double) screen_height , 0.f, 100.f); glClearColor(0.f, 0.f, 1.f, 0.f); //blue while (window.isOpen()) { sf::Event event; while (window.pollEvent(event)) { switch (event.type) { case sf::Event::Closed: window.close(); break; } switch (event.key.code) { case sf::Keyboard::Escape: window.close(); break; case 'W': break; case 'S': break; case 'A': break; case 'D': break; } } glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glTranslatef(0.f, 0.f, 0.f); glPushMatrix(); glBegin(GL_QUADS); glColor3f(1.f, 0.f, 0.f); glVertex3f(-1.f, 1.f, 0.f); glColor3f(0.f, 1.f, 0.f); glVertex3f(1.f, 1.f, 0.f); glColor3f(1.f, 0.f, 1.f); glVertex3f(1.f, -1.f, 0.f); glColor3f(0.f, 0.f, 1.f); glVertex3f(-1.f, -1.f, 0.f); glEnd(); glPopMatrix(); window.display(); } return EXIT_SUCCESS; }

    Read the article

  • Checkers AI Algorithm

    - by John
    I am making an AI for my checkers game and I'm trying to make it as hard as possible. Here are the current criteria for a move on the hardest difficulty: 1: Look for a block: this is when a piece is being threatened and another piece can be moved in behind it to protect it. Here is an example:
    Black Moves
    |W| |W| |W| |W| | | |W| |W| |W| |W| |W| | | |W| |W| | | | | |W| | | | | | | | | |B| | | | | |B| | | |B| |B| |B| |B| |B| |B| | | |B| |B| |B| |B|
    White Blocks
    |W| |W| |W| |W| | | |W| | | |W| |W| |W| |W| |W| |W| | | | | |W| | | | | | | | | |B| | | | | |B| | | |B| |B| |B| |B| |B| |B| | | |B| |B| |B| |B|
    2: Move pieces out of danger: if any piece is being threatened, and a piece cannot block for that piece, then it will attempt to move out of the way. If the piece cannot move out of the way without still being in danger, the computer ignores the piece. 3: If the computer player owns any kings, it will attempt to 'hunt down' enemy pieces on the board; if no moves can be made that won't endanger the king or any other pieces, the computer ignores this rule. 4: Any piece owned by the computer that is in column 1 or 6 will attempt to go to a side. When a piece is in column 0 or 7, it is in a very strategic position because it cannot get captured while it is in either of those columns. 5: It makes an educated random move; the move will not endanger the piece that is moving or any piece on the board. 6: If none of the above are possible, it makes a random move. This question is not really specific to any language, but if all examples could be in Java that would be great, considering this app is written for Android. Does anyone see any room for improvement in this algorithm? Anything that would make it better at playing checkers?
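    A hedged suggestion: hand-written rule lists like this tend to play whack-a-mole, because each rule fires in isolation and never weighs one candidate move against another. The standard improvement is minimax search with alpha-beta pruning: generate every legal move, look a few plies ahead, and fold rules 1-5 into a single evaluation function as weighted terms (material, kings, threatened pieces, the safe 0/7 columns). The sketch below is C# purely for consistency with the rest of this page - it ports almost line-for-line to Java - and Board, GenerateMoves, Apply and Evaluate are hypothetical stand-ins for your own types:

        using System;

        static class CheckersAi
        {
            // Returns the score of the position assuming both sides play best moves
            // for `depth` plies; call it once per legal root move and pick the max.
            static int Minimax(Board board, int depth, int alpha, int beta, bool maximizing)
            {
                if (depth == 0) return Evaluate(board);
                var moves = GenerateMoves(board, maximizing);
                if (moves.Count == 0) return Evaluate(board); // side to move is stuck: terminal

                if (maximizing)
                {
                    int best = int.MinValue;
                    foreach (var move in moves)
                    {
                        best  = Math.Max(best, Minimax(Apply(board, move), depth - 1, alpha, beta, false));
                        alpha = Math.Max(alpha, best);
                        if (beta <= alpha) break; // prune: opponent will never allow this line
                    }
                    return best;
                }
                else
                {
                    int best = int.MaxValue;
                    foreach (var move in moves)
                    {
                        best = Math.Min(best, Minimax(Apply(board, move), depth - 1, alpha, beta, true));
                        beta = Math.Min(beta, best);
                        if (beta <= alpha) break;
                    }
                    return best;
                }
            }
        }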

    Read the article

  • Something other than Vertex Welding with Texture Atlas?

    - by Tim Winter
    What options (in C# with XNA) would there be for texture usage in a procedurally generated 3D world made of cubes to increase performance? Yes, it's like Minecraft. I've been doing a texture atlas and rendering faces individually (4 vertices per face), but I've also read in a couple of places about using texture wrapping with two 1D atlases to merge adjacent faces with the same texture. If two or more adjacent faces share the same image, it'd be quite easy to wrap in this way, reducing vertices by a large amount. My problem with this is having too many textures, swapping too often, and many image-related things like non-power-of-2 images. Is there a middle-ground option between the 1D texture atlas trick and rendering 4 vertices per cube face? This is a picture of what I have currently (in wireframe). 4 vertices per face seems extremely inefficient to me.

    Read the article

  • Weird y offset when using custom frag shader (Cocos2d-x)

    - by Mister Guacamole
    I'm trying to mask a sprite so I wrote a simple fragment shader that renders only the pixels that are not hidden under another texture (the mask). The problem is that it seems my texture has its y-coordinate offset after passing through the shader. This is the init method of the sprite (GroundZone) I want to mask: bool GroundZone::initWithSize(Size size) { // [...] // Setup the mask of the sprite m_mask = RenderTexture::create(textureWidth, textureHeight); m_mask->retain(); m_mask->setKeepMatrix(true); Texture2D *maskTexture = m_mask->getSprite()->getTexture(); maskTexture->setAliasTexParameters(); // Disable linear interpolation on the mask // Load the custom frag shader with a default vert shader as the sprite’s program FileUtils *fileUtils = FileUtils::getInstance(); string vertexSource = ccPositionTextureA8Color_vert; string fragmentSource = fileUtils->getStringFromFile( fileUtils->fullPathForFilename("CustomShader_AlphaMask_frag.fsh")); GLProgram *shader = new GLProgram; shader->initWithByteArrays(vertexSource.c_str(), fragmentSource.c_str()); shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_POSITION); shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORDS); shader->link(); CHECK_GL_ERROR_DEBUG(); shader->updateUniforms(); CHECK_GL_ERROR_DEBUG(); int maskTexUniformLoc = shader->getUniformLocationForName("u_alphaMaskTexture"); shader->setUniformLocationWith1i(maskTexUniformLoc, 1); this->setShaderProgram(shader); shader->release(); // [...] } These are the custom drawing methods for actually drawing the mask over the sprite: You need to know that m_mask is modified externally by another class, the onDraw() method only render it. void GroundZone::draw(Renderer *renderer, const kmMat4 &transform, bool transformUpdated) { m_renderCommand.init(_globalZOrder); m_renderCommand.func = CC_CALLBACK_0(GroundZone::onDraw, this, transform, transformUpdated); renderer->addCommand(&m_renderCommand); Sprite::draw(renderer, transform, transformUpdated); } void GroundZone::onDraw(const kmMat4 &transform, bool transformUpdated) { GLProgram *shader = this->getShaderProgram(); shader->use(); glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, m_mask->getSprite()->getTexture()->getName()); glActiveTexture(GL_TEXTURE0); } Below is the method (located in another class, GroundLayer) that modify the mask by drawing a line from point start to point end. Both points are in Cocos2d coordinates (Point (0,0) is down-left). void GroundLayer::drawTunnel(Point start, Point end) { // To dig a line, we need first to get the texture of the zone we will be digging into. Then we get the // relative position of the start and end point in the zone's node space. Finally we use the custom shader to // draw a mask over the existing texture. for (auto it = _children.begin(); it != _children.end(); it++) { GroundZone *zone = static_cast<GroundZone *>(*it); Point nodeStart = zone->convertToNodeSpace(start); Point nodeEnd = zone->convertToNodeSpace(end); // Now that we have our two points converted to node space, it's easy to draw a mask that contains a line // going from the start point to the end point and that is then applied over the current texture. 
Size groundZoneSize = zone->getContentSize(); RenderTexture *rt = zone->getMask(); rt->begin(); { // Draw a line going from start and going to end in the texture, the line will act as a mask over the // existing texture DrawNode *line = DrawNode::create(); line->retain(); line->drawSegment(nodeStart, nodeEnd, 20, Color4F::RED); line->visit(); } rt->end(); } } Finally, here's the custom shader I wrote. #ifdef GL_ES precision mediump float; #endif varying vec2 v_texCoord; uniform sampler2D u_texture; uniform sampler2D u_alphaMaskTexture; void main() { float maskAlpha = texture2D(u_alphaMaskTexture, v_texCoord).a; float texAlpha = texture2D(u_texture, v_texCoord).a; float blendAlpha = (1.0 - maskAlpha) * texAlpha; // Show only where mask is invisible vec3 texColor = texture2D(u_texture, v_texCoord).rgb; gl_FragColor = vec4(texColor, blendAlpha); return; } I got a problem with the y coordinates. Indeed, it seems that once it has passed through my custom shader, the sprite's texture is not at the right place: Without custom shader (the sprite is the brown thing): With custom shader: What's going on here? Thanks :) EDIT It looks like after passing through the shader when I set the position of the sprite I set it in points, with (0,0) being in the top-right. Indeed, when I do sprite->setPosition(320, 480), the sprite is perfectly placed at the top of the screen.

    Read the article

  • Scene Graph Traversing Techniques

    - by Bunkai.Satori
    A scene graph seems to be the most effective way of representing the game world. The game world usually tends to be as large as the memory and device can handle. In contrast, the screen of the device captures only a fraction of the game world/scene graph. Ideally, I wish to process (update and render) only the visible game objects/nodes on a per-frame basis. My question therefore is: how do I traverse the scene graph so that I focus only on the game nodes that are in the camera frustum? How do I organize data so that I can easily focus only on the scene graph nodes visible to me? What are techniques to minimize scene graph traversal time? Is there such a thing as more effective traversal, or do I have to traverse the whole scene graph on a per-frame basis?
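    The common answer is that you do start from the root every frame, but give every node a bounding volume that encloses the node and all of its descendants; then a single failed frustum test discards an entire subtree, so the number of nodes actually visited stays close to the number on screen. For very large worlds the upper levels of the graph are often spatial (quadtree/octree or grid cells as interior nodes) so those early-outs are geometrically tight, and a node found fully inside the frustum can skip further tests for its whole subtree. A sketch of the traversal (C#; BoundingFrustum/ContainmentType mirror the XNA types, SceneNode is illustrative):

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        static class SceneGraphCulling
        {
            public static void CullAndCollect(SceneNode node, BoundingFrustum frustum,
                                              List<SceneNode> visible)
            {
                // One failed test prunes the entire subtree.
                if (frustum.Contains(node.WorldBounds) == ContainmentType.Disjoint) return;

                if (node.HasRenderable) visible.Add(node);

                foreach (SceneNode child in node.Children)
                    CullAndCollect(child, frustum, visible);
            }
        }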

    Read the article

  • Best way to go about sorting 2D sprites in a "RPG Maker" styled RPG

    - by Aaron Stewart
    I am trying to come up with the best way to draw overlapping sprites without any issues. I was thinking of having a SortedDictionary and setting the Entity's key to its Y position relative to the max bound of the simulation, aka the Z value. I'd update the "Z" value in the update method each frame, if the entity's position has changed at all. For those who don't know what I mean: I want characters standing closer in front of another character to be drawn on top, and if they are behind that character, drawn behind it. I'm leery of using SpriteBatch back-to-front or front-to-back; I've been doing some searching and am under the impression they are a bad idea, and I want to know exactly how other people are dealing with their depth sorting. Ultimately I'm just trying to come up with the best sorting method, for good practice, before I get too far in to refactor the system effectively.
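    For what it's worth, back-to-front sorting in SpriteBatch isn't inherently bad; the usual warning is about batching and fill rate in particle-heavy scenes, not a screenful of characters. A SortedDictionary will actually fight you here, since keys must be unique and every entity's key changes every frame. Two lower-ceremony options: sort a plain List<Entity> by feet Y right before drawing (entities.Sort((a, b) => a.FeetY.CompareTo(b.FeetY))), or derive layerDepth from the feet position and let SpriteSortMode.BackToFront do the ordering. A sketch of the second (XNA; the entity fields are illustrative):

        void DrawSorted(SpriteBatch spriteBatch, IEnumerable<Entity> entities, float maxWorldY)
        {
            spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
            foreach (var e in entities)
            {
                // Feet lower on the map => smaller depth => drawn later, i.e. in front.
                float depth = MathHelper.Clamp(1f - e.FeetY / maxWorldY, 0f, 1f);
                spriteBatch.Draw(e.Texture, e.Position, null, Color.White, 0f,
                                 Vector2.Zero, 1f, SpriteEffects.None, depth);
            }
            spriteBatch.End();
        }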

    Read the article

  • Where do you search/look for game developers for an indie game startup?

    - by G.Campos
    Hey there! I just recently saw that Stack Overflow had a game dev sister site, so here I am, wondering if you experienced fellows know where one can search/look for game developers for an indie game startup. In other words: I have a game idea which I've written down with as much detail as possible (so anyone else can understand how it works), and now I'm looking for a heavy PHP programmer with whom to pair up in order to go from idea to reality. I'm a front-end/interface designer and an intermediate programmer. I recognize my project requires heavy programming skills which I do not have as of today =) So, what websites, communities or places do you recommend I go look into? Where do good programmers interested in indie games go look for projects if they don't have their own? Thanks in advance, G.Campos

    Read the article

  • Create bullet physics rigid body along the vertices of a blender model

    - by Krishnabhadra
    I am working on my first 3D game, for iPhone, and I am using Blender to create models, the Cocos3D game engine, and Bullet for physics simulation. I am trying to learn the use of a physics engine. What I have done: I have created a small model in Blender which contains a Cube (the default Blender cube) at the origin and a UVSphere hovering exactly on top of this cube (without touching the cube). I saved the file to get MyModel.blend. Then I used File -> Export -> PVRGeoPOD (.pod/.h/.cpp) in Blender to export the model to .pod format to use along with Cocos3D. On the coding side, I added the necessary Bullet files to my Cocos3D template project in Xcode. I am also using a Bullet Objective-C wrapper. -(void) initializeScene { _physicsWorld = [[CC3PhysicsWorld alloc] init]; [_physicsWorld setGravity:0 y:-9.8 z:0]; /*Setup camera, lamp etc.*/ .......... ........... /*Add models created in blender to scene*/ [self addContentFromPODFile: @"MyModel.pod"]; /*Create OpenGL ES buffers*/ [self createGLBuffers]; /*get models*/ CC3MeshNode* cubeNode = (CC3MeshNode*)[self getNodeNamed:@"Cube"]; CC3MeshNode* sphereNode = (CC3MeshNode*)[self getNodeNamed:@"Sphere"]; /*Those boring grey colors..*/ [cubeNode setColor:ccc3(255, 255, 0)]; [sphereNode setColor:ccc3(255, 0, 0)]; float *cVertexData = (float*)((CC3VertexArrayMesh*)cubeNode.mesh).vertexLocations.vertices; int cVertexCount = (CC3VertexArrayMesh*)cubeNode.mesh).vertexLocations.vertexCount; btTriangleMesh* cTriangleMesh = new btTriangleMesh(); // for (int i = 0; i < cVertexCount * 3; i+=3) { // printf("\n%f", cVertexData[i]); // printf("\n%f", cVertexData[i+1]); // printf("\n%f", cVertexData[i+2]); // } /*Trying to create a triangle mesh that corresponds to the cube in 3D space.*/ int offset = 0; for (int i = 0; i < (cVertexCount / 3); i++){ unsigned int index1 = offset; unsigned int index2 = offset+6; unsigned int index3 = offset+12; cTriangleMesh->addTriangle( btVector3(cVertexData[index1], cVertexData[index1+1], cVertexData[index1+2] ), btVector3(cVertexData[index2], cVertexData[index2+1], cVertexData[index2+2] ), btVector3(cVertexData[index3], cVertexData[index3+1], cVertexData[index3+2] )); offset += 18; } [self releaseRedundantData]; /*Create a collision shape from triangle mesh*/ btBvhTriangleMeshShape* cTriMeshShape = new btBvhTriangleMeshShape(cTriangleMesh,true); btCollisionShape *sphereShape = new btSphereShape(1); /*Create physics objects*/ gTriMeshObject = [_physicsWorld createPhysicsObjectTrimesh:cubeNode shape:cTriMeshShape mass:0 restitution:1.0 position:cubeNode.location]; sphereObject = [_physicsWorld createPhysicsObject:sphereNode shape:sphereShape mass:1 restitution:0.1 position:sphereNode.location]; sphereObject.rigidBody->setDamping(0.1,0.8); } When I run it, the sphere and cube show up fine. I expect the sphere to fall directly onto the cube, since I have given it a mass of 1 and the physics world gravity is set to -9.8 in the y direction. But what actually happens is the sphere rotates around the cube three or four times and then just jumps out of the scene. So I know I have some basic misunderstanding about the whole process. My question is: how can I create a physics collision shape which corresponds to the shape of a particular mesh model? I may need more complex shapes than cubes and spheres, but before going into them I want to understand the concepts.

    Read the article

  • how to create 2D collision detection

    - by Aidan Mueller
    I would like to know the best or most effective way to test for 2D collisions. I can do AABBs, but when you have a line, for example, that is rotated 45° and is really long, it will be hitting things when it shouldn't. I might be able to go through the pixels to see if they are touching others, but that might be slow if I had a big picture, and it might add some complications if I had a movie clip made of several images. How do I check collision between two images? How would I do circle to box? Please help : ) PS: I do know Java, so you can write with Java syntax and then use a made-up GL.
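    The standard answer for the circle-versus-box case: clamp the circle's centre to the box to find the box's closest point, then compare squared distances, with no per-pixel work needed. For the rotated long line, represent it as a thin rotated box (an OBB), transform the circle's centre into the box's local space, and run the exact same test; that removes the false hits an axis-aligned bound gives you at 45°. A sketch of the axis-aligned core (C#, but it reads the same in Java):

        using System;

        static class Collision2D
        {
            public static bool CircleIntersectsBox(float cx, float cy, float radius,
                                                   float boxX, float boxY, float boxW, float boxH)
            {
                // closest point on the box to the circle centre
                float closestX = Math.Clamp(cx, boxX, boxX + boxW);
                float closestY = Math.Clamp(cy, boxY, boxY + boxH);
                float dx = cx - closestX, dy = cy - closestY;
                return dx * dx + dy * dy <= radius * radius; // squared distances: no sqrt needed
            }
        }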

    Read the article

  • XNA 2D line-of-sight check

    - by bionicOnion
    I'm working on a top-down shooter in XNA, and I need to implement line-of-sight checking. I've come up with a solution that seems to work, but I get the nagging feeling that it won't be efficient enough to do every frame for multiple calls (the game already hiccups slightly at about 10 calls per frame). The code is below, but my general plan was to create a series of rectangles with a width and height of zero to act as points along the sight line, and then check to see if any of these rectangles intersects a ClutterObject (an interface I defined for things like walls or other obstacles), after first screening out any that can't possibly be in the line of sight (i.e. behind the viewer) or are too far away (a concession I made for efficiency). public static bool LOSCheck(Vector2 pos1, Vector2 pos2) { Vector2 currentPos = pos1; Vector2 perMove = (pos2 - pos1); perMove.Normalize(); HashSet<ClutterObject> clutter = new HashSet<ClutterObject>(); foreach (Room r in map.GetRooms()) { if (r != null) { foreach (ClutterObject c in r.GetClutter()) { if (c != null &&!(c.GetRectangle().X * perMove.X < 0) && !(c.GetRectangle().Y * perMove.Y < 0)) { Vector2 cVector = new Vector2(c.GetRectangle().X, c.GetRectangle().Y); if ((cVector - pos1).Length() < 1500) clutter.Add(c); } } } } while (currentPos != pos2 && ((currentPos - pos1).Length() < 1500)) { Rectangle position = new Rectangle((int)currentPos.X, (int)currentPos.Y, 0, 0); foreach (ClutterObject c in clutter) { if (position.Intersects(c.GetRectangle())) return false; } currentPos += perMove; } return true; } I'm sure that there's a better way to do this (or at least a way to make this method more efficient), but I'm not too used to XNA yet, so I figured it couldn't hurt to bring it here. At the very least, is there an efficient way to determine which objects may be in front of the viewer with greater precision than the rather broad 90 degree window I've given myself?
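    One hedged improvement: replace the point-stepping with an exact segment-versus-rectangle "slab" test per clutter object. There is no step size to tune, nothing slips between samples, and it's cheap enough that the existing distance pre-filter plus this test should comfortably handle many calls per frame (a broad-phase grid over the clutter is the next step if not). A sketch (XNA types; line of sight holds when no rectangle reports a hit):

        using System;
        using Microsoft.Xna.Framework;

        static class LineOfSight
        {
            // Segment p1->p2 against an axis-aligned rectangle, via slab clipping.
            public static bool SegmentHitsRect(Vector2 p1, Vector2 p2, Rectangle r)
            {
                Vector2 d = p2 - p1;
                float tMin = 0f, tMax = 1f;
                return Slab(p1.X, d.X, r.Left, r.Right, ref tMin, ref tMax)
                    && Slab(p1.Y, d.Y, r.Top, r.Bottom, ref tMin, ref tMax);
            }

            static bool Slab(float origin, float dir, float min, float max,
                             ref float tMin, ref float tMax)
            {
                if (Math.Abs(dir) < 1e-6f) return origin >= min && origin <= max; // parallel: inside slab?
                float t1 = (min - origin) / dir, t2 = (max - origin) / dir;
                if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
                tMin = Math.Max(tMin, t1);
                tMax = Math.Min(tMax, t2);
                return tMin <= tMax;
            }
        }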

    Read the article
