Search Results

Search found 26124 results on 1045 pages for 'unreal development kit'.

  • How should I generate and store the boundaries of a cave?

    - by Bob Roberts
    I am making a small cave copter game (seriously, where did this type of game come from anyway?) and I am trying to figure out how to make and store the procedurally generated walls. I am thinking about creating the walls by randomly picking two points away from the center of the screen, at set intervals of distance. They will be no closer than the height of the helicopter and no further than the edge of the screen, weighted to prefer going in the same direction as the prior point, so I end up with stalactites and stalagmites and not just noise. To store them, perhaps parallel arrays/lists, one for the distance from the center to the top of the screen and one for the distance from the center to the bottom. Am I way off base with my thinking? I just want the cave to be varied and challenging; I have just never worked with generating data like this. Edit: Whoa, I just realized that my idea would let a player stay in the middle of the screen and win. That isn't right at all. So the very basis of how I was going to generate is wrong. Edit 2: I also realized I left out a very crucial point. Part of the mechanics of the game will let the player go backwards, therefore the data structure should be continuous.
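
    A minimal sketch of one way to do this in Java (class and field names are illustrative, not from the question): keep the ceiling and the floor as two parallel lists of heights, one entry per fixed horizontal step, and extend them with a clamped random walk so the walls wander instead of producing noise. Because the lists are only ever appended to, columns the player has already passed can be re-read when flying backwards.

      import java.util.ArrayList;
      import java.util.List;
      import java.util.Random;

      // One ceiling/floor pair per fixed horizontal step of the cave.
      class CaveStrip {
          final List<Float> ceiling = new ArrayList<>();   // distance from the top of the screen
          final List<Float> floor   = new ArrayList<>();   // distance from the bottom of the screen
          private final Random rng;
          private final float screenHeight, minGap, maxStep;

          CaveStrip(long seed, float screenHeight, float minGap, float maxStep) {
              this.rng = new Random(seed);     // fixed seed: regeneration is reproducible if ever needed
              this.screenHeight = screenHeight;
              this.minGap = minGap;            // at least the helicopter's height
              this.maxStep = maxStep;          // how far a wall may move per column
              ceiling.add(screenHeight * 0.2f);
              floor.add(screenHeight * 0.2f);
          }

          // Append one more column; call whenever the camera approaches the generated edge.
          void extend() {
              float c = ceiling.get(ceiling.size() - 1) + (rng.nextFloat() * 2 - 1) * maxStep;
              float f = floor.get(floor.size() - 1)     + (rng.nextFloat() * 2 - 1) * maxStep;
              c = Math.max(0, c);
              f = Math.max(0, f);
              // Keep the passage at least minGap tall by pushing both walls back symmetrically.
              float gap = screenHeight - c - f;
              if (gap < minGap) {
                  float squeeze = (minGap - gap) / 2;
                  c -= squeeze;
                  f -= squeeze;
              }
              ceiling.add(c);
              floor.add(f);
          }
      }

    Because both walls drift independently, the corridor's centre line also drifts, so simply hovering in the middle of the screen stops being a winning strategy (the concern raised in the first edit).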

  • Trouble exporting Maya models to Panda3D?

    - by Aerovistae
    Having issues here. I added the Panda3D exporter script thing to Maya. 2 things: When I go to export to a .egg, no .egg is formed. Instead a fileToBeExported_temp.mb appears next to the original fileToBeExported.ma. My models use curving meshes with many subdivisions, easily in the thousands, like on the smoothed tentacles of an octopus. Will Panda be able to handle this in the first place? I can't find out on my own since it won't export.

  • Particle and physics problems

    - by Quincy
    This was originally a forum post so I hope you guys don't mind it being 2 questions in one. I am making a game and I got some basic physics implemented. I have 2 problems, 1 with particles being drawn in the wrong place and one with going through walls while jumping in corners. Skip over to about 15 sec video showing the 2 problems : http://youtube.com/watch?v=Tm9nfWsWfiM So the problem with the particles seems to be coming from the removal, as soon as I remove that piece of code it instantly works, but there shouldn't be a problem since they shouldn't even draw when their energy gets to 0 (and then they get removed) So my first question is, how are these particles getting warped all over the screen ? Relevant code : Particle class : class Particle { //Physics public Vector2 position = new Vector2(0,0); public float direction = 180; public float speed = 100; public float energy = 1; protected float startEnergy = 1; //Visual public Sprite sprite; public float rotation = 0; public float scale = 1; public byte alpha = 255; public BlendMode blendMode { get { return sprite.BlendMode; } set { sprite.BlendMode = value; } } public Particle() { } public virtual void Think(float frameTime) { if (energy - frameTime < 0) energy = 0; else energy -= frameTime; position += new Vector2((float)Math.Cos(MathHelper.DegToRad(direction)), (float)Math.Sin(MathHelper.DegToRad(direction))) * speed * frameTime; alpha = (byte)(255 * energy / startEnergy); sprite.Rotation = rotation; sprite.Position = position; sprite.Color = new Color(sprite.Color.R, sprite.Color.G, sprite.Color.B, alpha); } public virtual void Draw(float frameTime) { if (energy > 0) { World.camera.DrawSprite(sprite); } } // Basic particle implementation class BasicSprite : Particle { public BasicSprite(Sprite _sprite) { sprite = _sprite; } } Emitter : class Emitter { protected static Random rand = new Random(); protected List<Particle> particles = new List<Particle>(); public BaseEntity target = null; public Vector2 position = new Vector2(0, 0); public bool Active = true; public float timeAlive = 0; public int particleCount = 0; public int ParticlesPerSeccond { get { return (int)(1 / particleSpawnTime); } set { particleSpawnTime = 1 / (float)value; } } public float dieTime = float.MaxValue; float particleSpawnTime = 0.05f; float spawnTime = 0; public Emitter() { } public virtual void Think(float frametime) { spawnTime += frametime; if (dieTime != float.MaxValue) { timeAlive += frametime; if (timeAlive >= dieTime) Active = false; } if (Active) { if (target != null) position = target.Position; while (spawnTime > particleSpawnTime) { spawnTime -= particleSpawnTime; AddParticle(); particleCount++; } } for (int i = 0; i < particles.Count; i++) { particles[i].Think(frametime); if (particles[i].energy <= 0) { particles.Remove(particles[i]); // As soon as this is removed, it works particleCount--; } } } public virtual void AddParticle() { } public virtual void Draw(float frametime) { foreach (Particle particle in particles) { particle.Draw(frametime); } } } class BloodEmitter : Emitter { Image image; public BloodEmitter() { image = new Image(@"Content/Particles/TinyCircle.png"); image.CreateMaskFromColor(new Color(255, 0, 255, 255)); this.dieTime = 0.5f; this.ParticlesPerSeccond = 100; } public override void AddParticle() { Sprite sprite = new Sprite(image); sprite.Color = new Color((byte)(rand.NextDouble() * 255), (byte)(rand.NextDouble() * 255), (byte)(rand.NextDouble() * 255)); BasicSprite particle = new BasicSprite(sprite); particle.direction = 
(float)rand.NextDouble() * 360; particle.position = position; particle.blendMode = BlendMode.Alpha; particles.Add(particle); } } The seccond problem is the physics problem, for some reason I can get through the right bottom corner while jumping. I think this is coming from me switching animations but I thought I made it compensate for that. Relevant code : PhysicsEntity : class PhysicsEntity : BaseEntity { // Horizontal movement constants protected const float maxHorizontalSpeed = 1000; protected const float horizontalAcceleration = 15; protected const float horizontalDragAir = 0.95f; protected const float horizontalDragGround = 0.95f; // Vertical movement constants protected const float maxVerticalSpeed = 1000; protected const float verticalAcceleration = 20; // Everything needed for movement and correct animations protected float movement = 0; protected bool onGround = false; protected Vector2 Velocity = new Vector2(0, 0); protected float maxSpeed = 0; float lastThink = 0; float thinkTime = 1f/60f; public PhysicsEntity(Vector2 position, Sprite sprite) : base(position, sprite) { } public override void Draw(float frameTime) { base.Draw(frameTime); } public override void Think(float frameTime) { CalculateMovement(frameTime); base.Think(frameTime); } protected void CalculateMovement(float frameTime) { lastThink += frameTime; while (lastThink > thinkTime) { onGround = false; Velocity.X = MathHelper.Clamp(Velocity.X + horizontalAcceleration * movement, -maxHorizontalSpeed, maxHorizontalSpeed); if (onGround) Velocity.X *= horizontalDragGround; else Velocity.X *= horizontalDragAir; if (maxSpeed < Velocity.X) maxSpeed = Velocity.X; Velocity.Y = MathHelper.Clamp(Velocity.Y + verticalAcceleration, -maxVerticalSpeed, maxVerticalSpeed); lastThink -= thinkTime; DoCollisions(thinkTime); DoAnimations(thinkTime); } } public virtual void DoAnimations(float frameTime) { } public void DoCollisions(float frameTime) { Position.Y += Velocity.Y * frameTime; Vector2 tileCollision = GetTileCollision(); if (tileCollision.X != -1 || tileCollision.Y != -1) { Vector2 collisionDepth = CollisionRectangle.DepthIntersection( new Rectangle( tileCollision.X * World.tileEngine.TileWidth, tileCollision.Y * World.tileEngine.TileHeight, World.tileEngine.TileWidth, World.tileEngine.TileHeight ) ); Position.Y += collisionDepth.Y; if (collisionDepth.Y < 0) onGround = true; Velocity.Y = 0; } Position.X += Velocity.X * frameTime; tileCollision = GetTileCollision(); if (tileCollision.X != -1 || tileCollision.Y != -1) { Vector2 collisionDepth = CollisionRectangle.DepthIntersection( new Rectangle( tileCollision.X * World.tileEngine.TileWidth, tileCollision.Y * World.tileEngine.TileHeight, World.tileEngine.TileWidth, World.tileEngine.TileHeight ) ); Position.X += collisionDepth.X; Velocity.X = 0; } } public void DoCollisions(Vector2 difference) { CollisionRectangle.Y = Position.Y - difference.Y; CollisionRectangle.Height += difference.Y; Vector2 tileCollision = GetTileCollision(); if (tileCollision.X != -1 || tileCollision.Y != -1) { Vector2 collisionDepth = CollisionRectangle.DepthIntersection( new Rectangle( tileCollision.X * World.tileEngine.TileWidth, tileCollision.Y * World.tileEngine.TileHeight, World.tileEngine.TileWidth, World.tileEngine.TileHeight ) ); Position.Y += collisionDepth.Y; if (collisionDepth.Y < 0) onGround = true; Velocity.Y = 0; } CollisionRectangle.X = Position.X - difference.X; CollisionRectangle.Width += difference.X; tileCollision = GetTileCollision(); if (tileCollision.X != -1 || tileCollision.Y != -1) { 
Vector2 collisionDepth = CollisionRectangle.DepthIntersection( new Rectangle( tileCollision.X * World.tileEngine.TileWidth, tileCollision.Y * World.tileEngine.TileHeight, World.tileEngine.TileWidth, World.tileEngine.TileHeight ) ); Position.X += collisionDepth.X; Velocity.X = 0; } } Vector2 GetTileCollision() { int topLeftTileX = (int)(CollisionRectangle.TopLeft.X / World.tileEngine.TileWidth); int topLeftTileY = (int)(CollisionRectangle.TopLeft.Y / World.tileEngine.TileHeight); int BottomRightTileX = (int)(CollisionRectangle.DownRight.X / World.tileEngine.TileWidth); int BottomRightTileY = (int)(CollisionRectangle.DownRight.Y / World.tileEngine.TileHeight); if (CollisionRectangle.DownRight.Y % World.tileEngine.TileHeight == 0) // If your exactly against the tile don't count that as being inside the tile BottomRightTileY -= 1; if (CollisionRectangle.DownRight.X % World.tileEngine.TileWidth == 0) // If your exactly against the tile don't count that as being inside the tile BottomRightTileX -= 1; for (int i = topLeftTileX; i <= BottomRightTileX; i++) { for (int j = topLeftTileY; j <= BottomRightTileY; j++) { if (World.tileEngine.TileIsSolid(i, j)) { return new Vector2(i, j); } } } return new Vector2(-1, -1); } } Player : enum State { Standing, Running, Jumping, Falling, Sliding, WallSlide } class Player : PhysicsEntity { private State state { get { return currentState; } set { if (currentState != value) { currentState = value; animationChanged = true; } } } private State currentState = State.Standing; private BasicEmitter basicEmitter = new BasicEmitter(); public bool flipped; public bool animationChanged = false; protected const float jumpPower = 600; AnimationManager animationManager; Rectangle DrawRectangle; public override Rectangle CollisionRectangle { get { return new Rectangle( Position.X - DrawRectangle.Width / 2f, Position.Y - DrawRectangle.Height / 2f, DrawRectangle.Width, DrawRectangle.Height ); } } public Player(Vector2 position, Sprite sprite) : base(position, sprite) { // Only posted the relevant bit DrawRectangle = animationManager.currentAnimation.drawingRectangle; } public override void Draw(float frameTime) { World.camera.DrawSprite( Sprite, Position + new Vector2(DrawRectangle.X, DrawRectangle.Y), animationManager.currentAnimation.drawingRectangle ); } public override void Think(float frameTime) { //I only posted the relevant stuff if (animationChanged) { // if the animation has changed make sure we compensate for the change in with and height animationChanged = false; DoCollisions(animationManager.getSizeDifference()); } DoCustomMovement(); base.Think(frameTime); if (!onGround && Velocity.Y > 0) { state = State.Falling; } } void DoCustomMovement() { if (onGround) { if (World.renderWindow.Input.IsKeyDown(KeyCode.W)) { Velocity.Y = -jumpPower; state = State.Jumping; } } } public override void DoAnimations(float frameTime) { string stateName = Enum.GetName(typeof(State), state); if (!animationManager.currentAnimationIs(stateName)) { animationManager.PlayAnimation(stateName); } animationManager.Think(frameTime); DrawRectangle = animationManager.currentAnimation.drawingRectangle; Sprite.Center = new Vector2( DrawRectangle.X + DrawRectangle.Width / 2, DrawRectangle.Y + DrawRectangle.Height / 2 ); Sprite.FlipX(flipped); } So why am I warping through walls ? I have given this some thought but I just can't seem to find out why this is happening. Full source if needed : source : http://www.mediafire.com/?rc7ddo09gnr68zd (download link)
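
    This is not a full diagnosis, but one thing in the emitter's update loop is worth isolating first: removing an element from a list while iterating it forwards by index skips the element that slides into the freed slot, so that particle is neither updated nor checked on that frame. A small Java sketch of two safe patterns (the class names here are placeholders, not the C# classes above):

      import java.util.ArrayList;
      import java.util.Iterator;
      import java.util.List;

      class EmitterSketch {
          final List<ParticleSketch> particles = new ArrayList<>();

          // Pattern 1: iterate backwards so a removal never shifts an unvisited element.
          void updateBackwards(float frameTime) {
              for (int i = particles.size() - 1; i >= 0; i--) {
                  particles.get(i).think(frameTime);
                  if (particles.get(i).energy <= 0) {
                      particles.remove(i);
                  }
              }
          }

          // Pattern 2: use an explicit iterator, which supports removal mid-loop.
          void updateWithIterator(float frameTime) {
              for (Iterator<ParticleSketch> it = particles.iterator(); it.hasNext(); ) {
                  ParticleSketch p = it.next();
                  p.think(frameTime);
                  if (p.energy <= 0) {
                      it.remove();
                  }
              }
          }
      }

      class ParticleSketch {
          float energy = 1;
          void think(float frameTime) { energy -= frameTime; }
      }

    If the warping persists once the traversal is safe, the bug is likely elsewhere (for example in how sprites are pooled or reused), but fixing the removal pattern takes one confounding factor out of the picture.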

  • How to update entity states and animations in a component-based game

    - by mivic
    I'm trying to design a component-based entity system for learning purposes (and later use on some games) and I'm having some troubles when it comes to updating entity states. I don't want to have an update() method inside the Component to prevent dependencies between Components. What I currently have in mind is that components hold data and systems update components. So, if I have a simple 2D game with some entities (e.g. player, enemy1, enemy 2) that have Transform, Movement, State, Animation and Rendering components I think I should have: A MovementSystem that moves all the Movement components and updates the State components And a RenderSystem that updates the Animation components (the animation component should have one animation (i.e. a set of frames/textures) for each state and updating it means selecting the animation corresponding to the current state (e.g. jumping, moving_left, etc), and updating the frame index). Then, the RenderSystem updates the Render components with the texture corresponding to the current frame of each entity's Animation and renders everything on screen. I've seen some implementations like Artemis framework, but I don't know how to solve this situation: Let's say that my game has the following entities. Each entity have a set of states and one animation for each state: player: "idle", "moving_right", "jumping" enemy1: "moving_up", "moving_down" enemy2: "moving_left", "moving_right" What are the most accepted approaches in order to update the current state of each entity? The only thing that I can think of is having separate systems for each group of entities and separate State and Animation components so I would have PlayerState, PlayerAnimation, Enemy1State, Enemy1Animation... PlayerMovementSystem, PlayerRenderingSystem... but I think this is a bad solution and breaks the purpose of having a component-based system. As you can see, I'm quite lost here, so I'd very much appreciate any help.
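
    A minimal Java sketch of the approach described above, with generic State and Animation components shared by every entity (all names illustrative): each entity's Animation component carries its own map from state name to clip, so a single AnimationSystem can drive the player, enemy1 and enemy2 without per-entity-type components or systems. The Object[] pairing merely stands in for however the framework hands matching components to a system.

      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;

      // Plain data components: no update() methods, no dependencies between them.
      class StateComponent {
          String current = "idle";                       // e.g. "idle", "moving_right", "jumping"
      }

      class AnimationClip {
          final int frameCount;
          final float frameDuration;
          AnimationClip(int frameCount, float frameDuration) {
              this.frameCount = frameCount;
              this.frameDuration = frameDuration;
          }
      }

      class AnimationComponent {
          final Map<String, AnimationClip> clipsByState = new HashMap<>();  // one clip per state
          AnimationClip playing;
          int frameIndex;
          float time;
      }

      // One system drives every entity's animation from its state, whatever states it happens to have.
      class AnimationSystem {
          void update(float dt, List<Object[]> entities) {   // each entry: {StateComponent, AnimationComponent}
              for (Object[] e : entities) {
                  StateComponent state = (StateComponent) e[0];
                  AnimationComponent anim = (AnimationComponent) e[1];
                  AnimationClip wanted = anim.clipsByState.get(state.current);
                  if (wanted != anim.playing) {              // state changed: switch clip and restart it
                      anim.playing = wanted;
                      anim.frameIndex = 0;
                      anim.time = 0;
                  }
                  if (anim.playing == null) continue;
                  anim.time += dt;
                  while (anim.time >= anim.playing.frameDuration) {
                      anim.time -= anim.playing.frameDuration;
                      anim.frameIndex = (anim.frameIndex + 1) % anim.playing.frameCount;
                  }
              }
          }
      }

    Whatever decides behaviour (input for the player, AI for the enemies) only ever writes state.current; since the differences between entities live in the data (which state names and clips they register), no PlayerAnimation/Enemy1Animation split is needed.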

  • Doubling pixels with LWJGL/OpenGL

    - by Philippe Paré
    I'm working in Java with LWJGL and trying to double all my pixels. I want to draw into an 800x450 area and then stretch the whole frame image to the full 1600x900 pixels without it getting blurred. I can't figure out how to do that in Java; everything I find is in C++... A hint would be great! Thanks a lot. EDIT: I've tried drawing to a texture created in OpenGL by attaching it to a framebuffer, but I can't find a way to use glGenTextures() in Java, so this is not working... I also thought about using a shader, but then I would not be able to draw only in the smaller region...
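
    A sketch of the usual approach with LWJGL's static GL classes (exact class and enum names may vary slightly between LWJGL versions): render into an 800x450 FBO texture, then blit it to the default framebuffer at 1600x900 with GL_NEAREST filtering, which is what keeps the pixels crisp instead of blurred. glGenTextures() with no arguments is the LWJGL convenience overload of GL11.glGenTextures() and returns a single texture id.

      import static org.lwjgl.opengl.GL11.*;
      import static org.lwjgl.opengl.GL30.*;
      import java.nio.ByteBuffer;

      public class PixelDoubler {
          int fbo, tex;
          final int smallW = 800, smallH = 450, screenW = 1600, screenH = 900;

          void init() {
              tex = glGenTextures();                      // LWJGL's no-arg form returns one id
              glBindTexture(GL_TEXTURE_2D, tex);
              glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, smallW, smallH, 0,
                           GL_RGBA, GL_UNSIGNED_BYTE, (ByteBuffer) null);
              glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);   // no smoothing
              glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

              fbo = glGenFramebuffers();
              glBindFramebuffer(GL_FRAMEBUFFER, fbo);
              glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
              glBindFramebuffer(GL_FRAMEBUFFER, 0);
          }

          void frame() {
              // 1. Draw the scene at 800x450 into the FBO.
              glBindFramebuffer(GL_FRAMEBUFFER, fbo);
              glViewport(0, 0, smallW, smallH);
              // ... draw the game here ...

              // 2. Stretch it to 1600x900; GL_NEAREST keeps each small pixel a solid 2x2 block.
              glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
              glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
              glBlitFramebuffer(0, 0, smallW, smallH, 0, 0, screenW, screenH,
                                GL_COLOR_BUFFER_BIT, GL_NEAREST);
              glBindFramebuffer(GL_FRAMEBUFFER, 0);
              glViewport(0, 0, screenW, screenH);
          }
      }

    Drawing a fullscreen textured quad instead of blitting works just as well; the important part is the GL_NEAREST filter on the small texture.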

  • Ray Tracing concerns: Efficient Data Structure and Photon Mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes. An example of such a scene can be seen below. I'm unsure which acceleration data structure would be most efficient in this case, since all objects are touching, but on the other hand the scene is uniform. The objects in my ray tracer are stored as a collection of triangles, so I also have access to individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box? A few other questions I have are about photon mapping. I've read the original paper by Jensen and a lot of other material. In the compact data structure for the photon they introduce, they store photon power as 4 chars, which from my understanding is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux of the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs. power). The other question about photon mapping is whether it would be more efficient in my case to store photons per object (or even per object triangle) instead of using a balanced kd-tree. Also, the same bounding-box question applies to photon mapping: how should one find a bounding box from the point of view of the light when infinite planes are involved?
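
    On the "4 chars" point: as far as I understand Jensen's compact photon, the four bytes are Ward's shared-exponent RGBE encoding, i.e. three 8-bit mantissas for the colour channels plus one common 8-bit exponent, rather than three bytes of colour and one byte of flux; it is the shared exponent that lets byte-sized values cover fluxes on the order of 1/n. A small Java sketch of that encoding, for illustration only:

      // Ward's RGBE: store (r, g, b) as three 8-bit mantissas sharing one 8-bit exponent.
      public final class Rgbe {
          public static byte[] encode(float r, float g, float b) {
              float max = Math.max(r, Math.max(g, b));
              if (max < 1e-32f) return new byte[] {0, 0, 0, 0};
              int e = Math.getExponent(max) + 1;               // smallest e with max / 2^e < 1
              float scale = (float) (256.0 / Math.pow(2, e));  // maps [0, 2^e) onto [0, 256)
              return new byte[] {
                  (byte) (r * scale), (byte) (g * scale), (byte) (b * scale),
                  (byte) (e + 128)                             // bias so negative exponents fit a byte
              };
          }

          public static float[] decode(byte[] rgbe) {
              if (rgbe[3] == 0 && rgbe[0] == 0) return new float[] {0, 0, 0};
              int e = (rgbe[3] & 0xFF) - 128;
              float scale = (float) (Math.pow(2, e) / 256.0);
              return new float[] {
                  (rgbe[0] & 0xFF) * scale, (rgbe[1] & 0xFF) * scale, (rgbe[2] & 0xFF) * scale
              };
          }
      }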

  • Wheel rotation, to change velocity of vehicle

    - by Lewis
    I update the velocity of my vehicle like so: [v setVelocity: ((2 * 3.14 * 100 * (wheel.getRotationValue / 360) / 30)) * gameSpeed]; // update on 60 fps this gets velocity on all frames divide by 60 for 1 frame. This is done in my update method in my world class. Now wheel.getRotationValue returns the rotation value which is worked out like this: - (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; CGPoint location = [touch locationInView:[touch view]]; location = [[CCDirector sharedDirector] convertToGL:location]; if (CGRectContainsPoint(wheel.boundingBox, location)) { CGPoint firstLocation = [touch previousLocationInView:[touch view]]; CGPoint location = [touch locationInView:[touch view]]; CGPoint touchingPoint = [[CCDirector sharedDirector] convertToGL:location]; CGPoint firstTouchingPoint = [[CCDirector sharedDirector] convertToGL:firstLocation]; CGPoint firstVector = ccpSub(firstTouchingPoint, wheel.position); CGFloat firstRotateAngle = -ccpToAngle(firstVector); CGFloat previousTouch = CC_RADIANS_TO_DEGREES(firstRotateAngle); CGPoint vector = ccpSub(touchingPoint, wheel.position); CGFloat rotateAngle = -ccpToAngle(vector); CGFloat currentTouch = CC_RADIANS_TO_DEGREES(rotateAngle); float limit = 0.5; rotationValue += (currentTouch - previousTouch) * limit; } touching = YES; } Say I steer the vehicle to the far right of the screen, and want to move it to the far left, It wont start moving to the left of the screen until the rotationValue is past 0 degrees again (the wheel is in its center posistion) and is dragged past this value. Is there anyway to change the code I have above, so that movement on the wheel is recognised instantly and updates the velocity of v instantly too?

  • How can I downsample a texture using FBOs?

    - by snape
    I am rendering a scene in OpenGL to an FBO render target whose size is 8 times the size of the original screen. Now I want to downsample the texture generated by the FBO to the size of the screen so as to achieve spatial anti-aliasing. How do I achieve the downsampling? Please provide implementation details. Note: if there is a better way of doing anti-aliasing with FBOs, please mention that too. I am trying to remove the aliasing in the image attached below.
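
    A sketch of one way to do the resolve with LWJGL-style Java calls (the intermediate FBOs are assumed to be created by the caller; adjust the number of halvings if "8 times" means something other than 8x per dimension). A single bilinear blit from the 8x target to the screen only averages a 2x2 neighbourhood per output pixel, so it would still alias; halving three times in succession approximates a proper box filter over the full 8x8 footprint.

      import static org.lwjgl.opengl.GL11.*;
      import static org.lwjgl.opengl.GL30.*;

      // Downsamples an oversized render target to the default framebuffer by repeated halving.
      public class Downsampler {
          // fbos[1..3] hold colour textures of size (w << i, h << i); the scene is rendered into fbos[3].
          void resolve(int[] fbos, int w, int h) {
              for (int level = 3; level >= 1; level--) {
                  glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[level]);
                  glBindFramebuffer(GL_DRAW_FRAMEBUFFER, level == 1 ? 0 : fbos[level - 1]);
                  glBlitFramebuffer(0, 0, w << level, h << level,
                                    0, 0, w << (level - 1), h << (level - 1),
                                    GL_COLOR_BUFFER_BIT, GL_LINEAR);   // each 2:1 step averages 2x2 texels
              }
              glBindFramebuffer(GL_FRAMEBUFFER, 0);
          }
      }

    If the goal is simply anti-aliasing rather than an exact 8x supersample, a multisampled renderbuffer (glRenderbufferStorageMultisample plus one resolve blit) usually gives comparable quality at a fraction of the fill-rate cost.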

  • Unexpected results for projection onto a plane

    - by ravenspoint
    I want to use this projection matrix: GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; It should cast object shadows onto the y = 0 plane from a point light at 1,1,-1. I create a rectangle in the x = 0.5 plane glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); Now if I manually multiply these vertices with the matrix, I get. glBegin( GL_QUADS ); glVertex3f( 0.375,0,-0.375); glVertex3f( 0.375,0,-1.625); glVertex3f( 0,0,-2); glVertex3f( 0,0,0); glEnd(); Which produces a reasonable display ( camera at 0,5,0 looking down y axis ) So rather than do the calculation manually, I should be able to use the opengl model transormation. I write this code: glMatrixMode (GL_MODELVIEW); GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; glLoadMatrixf( shadow ); glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); But this produces a blank screen! What am I doing wrong? Is there some debug mode where I can print out the transformed vertices, so I can see where they are ending up? Note: People have suggested that using glMultMatrixf() might make a difference. It doesn't. Replacing glLoadMatrixf( shadow ); with glLoadIdentity(); glMultMatrixf( shadow ); gives the identical result ( of course! )
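
    There is no built-in debug mode that prints transformed vertices, but the fixed-function transform is easy to reproduce on the CPU and print. A small Java helper for that (the same arithmetic works in C); note that glLoadMatrixf reads the 16 floats in column-major order, i.e. element i of the array is column i / 4, row i % 4:

      // Reproduces what the pipeline does with a matrix handed to glLoadMatrixf (column-major).
      public final class GlDebug {
          public static float[] transform(float[] m, float x, float y, float z) {
              float[] v = {x, y, z, 1f};
              float[] out = new float[4];
              for (int row = 0; row < 4; row++) {
                  out[row] = m[0 * 4 + row] * v[0] + m[1 * 4 + row] * v[1]
                           + m[2 * 4 + row] * v[2] + m[3 * 4 + row] * v[3];
              }
              // Homogeneous divide, as happens with the resulting clip coordinates.
              return new float[] {out[0] / out[3], out[1] / out[3], out[2] / out[3]};
          }

          public static void main(String[] args) {
              float[] shadow = {-1, 0, 0, 0,  1, 0, -1, 1,  0, 0, -1, 0,  0, 0, 0, -1};
              float[] p = transform(shadow, 0.5f, 0.2f, -0.5f);
              System.out.printf("%.3f %.3f %.3f%n", p[0], p[1], p[2]);  // matches the hand-calculated (0.375, 0, -0.375)
          }
      }

    Since this reproduces the hand calculation, the column-major layout is not the problem; one thing worth checking instead is that glLoadMatrixf also throws away the viewing transform that put the camera at (0,5,0), so the projected quad may simply fall outside the current view. Setting up the camera first and then calling glMultMatrixf(shadow) keeps the view and applies the shadow matrix on top of it.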

  • OpenGL problem with FBO integer texture and color attachment

    - by Grieverheart
    In my simple renderer, I have 2 FBOs one that contains diffuse, normals, instance ID and depth in that order and one that I use store the ssao result. The textures I use for the first FBO are RGB8, RGBA16F, R32I and GL_DEPTH_COMPONENT32F for the depth. For the second FBO I use an R16F texture. My rendering process is to first render to everything I mentioned in the first FBO, then bind depth and normals textures for reading for the ssao pass and write to the second FBO. After that I bind the second FBO's texture for reading in my blur shader and bind the first FBO for writing. What I intend to do is to write the blurred ssao value to the alpha component of the Normals texture. Here are where the problems start. First of all, I use shading language 3.3, which my graphics card does support. I manage ouputs in my shaders using layout(location = #). Now, the normals texture should be bound to color attachment 1, but when I use 1, it seems to write to my diffuse texture which should be in color attachment 0. When I instead use layout(location = 0), it gets correctly written to my normals texture. Besides this, my instance ID texture also gets resets after running the blur shader which is weird because if I use a float texture and write to it instanceID / nInstances, the texture doesn't get reset after the blur shader has ran. Here is how I prepare my first FBO: bool CGBuffer::Init(unsigned int WindowWidth, unsigned int WindowHeight){ //Create FBO glGenFramebuffers(1, &m_fbo); glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo); //Create gbuffer and Depth Buffer Textures glGenTextures(GBUFF_NUM_TEXTURES, &m_textures[0]); glGenTextures(1, &m_depthTexture); //prepare gbuffer for(unsigned int i = 0; i < GBUFF_NUM_TEXTURES; i++){ glBindTexture(GL_TEXTURE_2D, m_textures[i]); if(i == GBUFF_TEXTURE_TYPE_NORMAL) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_DIFFUSE) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_ID) glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, WindowWidth, WindowHeight, 0, GL_RED_INTEGER, GL_INT, NULL); else{ std::cout << "Error in FBO initialization" << std::endl; return false; } glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); } //prepare depth buffer glBindTexture(GL_TEXTURE_2D, m_depthTexture); glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); GLenum DrawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2}; glDrawBuffers(GBUFF_NUM_TEXTURES, DrawBuffers); GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER); if(Status != GL_FRAMEBUFFER_COMPLETE){ std::cout << "FB error, status 0x" << std::hex << Status << 
std::endl; return false; } //Restore default framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); return true; } where I use an enum defined as, enum GBUFF_TEXTURE_TYPE{ GBUFF_TEXTURE_TYPE_DIFFUSE, GBUFF_TEXTURE_TYPE_NORMAL, GBUFF_TEXTURE_TYPE_ID, GBUFF_NUM_TEXTURES }; Am I missing some kind of restriction? Does the color attachment of the FBO's textures somehow gets reset i.e. I'm using a re-size function which re-sizes the textures of the FBO but should I perhaps call glFramebufferTexture2D again too? EDIT: Here is the shader in question: #version 330 core uniform sampler2D aoSampler; uniform vec2 TEXEL_SIZE; // x = 1/res x, y = 1/res y uniform bool use_blur; noperspective in vec2 TexCoord; layout(location = 0) out vec4 out_AO; void main(void){ if(use_blur){ float result = 0.0; for(int i = -1; i < 2; i++){ for(int j = -1; j < 2; j++){ vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j); result += texture(aoSampler, TexCoord + offset).r; // -0.004 because the texture seems to be a bit displaced } } out_AO = vec4(vec3(0.0), result / 9); } else out_AO = vec4(vec3(0.0), texture(aoSampler, TexCoord).r); }
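
    One detail that may explain both the swapped attachment and the reset ID texture: layout(location = i) selects the i-th entry of whatever array was last passed to glDrawBuffers for the bound FBO, not GL_COLOR_ATTACHMENTi directly, and any attachment left in that array without a matching shader output receives undefined values. A sketch of narrowing the draw-buffer list for the blur pass (LWJGL-style Java purely for illustration, since the point is API-level rather than language-level):

      import static org.lwjgl.opengl.GL20.glDrawBuffers;
      import static org.lwjgl.opengl.GL30.*;
      import java.nio.IntBuffer;
      import org.lwjgl.BufferUtils;

      public class BlurPassSetup {
          // Before the blur draw into the gbuffer FBO, list only the attachment the shader writes.
          // With this list, layout(location = 0) lands in attachment 1 (the normals texture),
          // and attachments 0 and 2 are left alone instead of receiving undefined values.
          void bindForBlur(int gbufferFbo) {
              glBindFramebuffer(GL_DRAW_FRAMEBUFFER, gbufferFbo);
              IntBuffer drawBuffers = BufferUtils.createIntBuffer(1);
              drawBuffers.put(GL_COLOR_ATTACHMENT1).flip();
              glDrawBuffers(drawBuffers);
          }

          // Restore the full list (attachments 0..2) before the next geometry pass.
          void bindForGeometry(int gbufferFbo) {
              glBindFramebuffer(GL_DRAW_FRAMEBUFFER, gbufferFbo);
              IntBuffer drawBuffers = BufferUtils.createIntBuffer(3);
              drawBuffers.put(GL_COLOR_ATTACHMENT0).put(GL_COLOR_ATTACHMENT1)
                         .put(GL_COLOR_ATTACHMENT2).flip();
              glDrawBuffers(drawBuffers);
          }
      }

    To write only the alpha of the normals texture while leaving its RGB intact, a glColorMask(false, false, false, true) around the blur draw (restored afterwards) is also needed.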

  • Everything turning black when pitching down

    - by Gordon
    Just a quick question about something that's occurring in my world. Every time I pitch my camera downward, everything starts turning black, and if I pitch upward, everything sort of intensifies. I'm multiplying my normals by the normal matrix in the shader, and I'm multiplying my light's direction by the model-view matrix. If I leave the normal and light direction in world space, everything ends up fine. I thought putting them both in view space would not cause those weird things to happen?

  • How can I resolve component types in a way that supports adding new types relatively easily?

    - by John
    I am trying to build an Entity Component System for an interactive application developed using C++ and OpenGL. My question is quite simple. In my GameObject class I have a collection of Components. I can add and retrieve components. class GameObject: public Object { public: GameObject(std::string objectName); ~GameObject(void); Component * AddComponent(std::string name); Component * AddComponent(Component componentType); Component * GetComponent (std::string TypeName); Component * GetComponent (<Component Type Here>); private: std::map<std::string,Component*> m_components; }; I will have a collection of components that inherit from the base Components class. So if I have a meshRenderer component and would like to do the following GameObject * warship = new GameObject("myLovelyWarship"); MeshRenderer * meshRenderer = warship->AddComponent(MeshRenderer); or possibly MeshRenderer * meshRenderer = warship->AddComponent("MeshRenderer"); I could be make a Component Factory like this: class ComponentFactory { public: static Component * CreateComponent(const std::string &compTyp) { if(compTyp == "MeshRenderer") return new MeshRenderer; if(compTyp == "Collider") return new Collider; return NULL; } }; However, I feel like I should not have to keep updating the Component Factory every time I want to create a new custom Component but it is an option. Is there a more proper way to add and retrieve these components? Is standard templates another solution?
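
    The question is C++, but the idea it is reaching for — resolving components by type rather than by string, with no central factory to keep updating — is easy to show with Java generics (a minimal sketch, names illustrative):

      import java.util.HashMap;
      import java.util.Map;

      // Components are stored in a map keyed by their class, so adding a new component type
      // requires no change to GameObject and no factory to update.
      class GameObject {
          private final Map<Class<? extends Component>, Component> components = new HashMap<>();

          <T extends Component> T addComponent(T component) {
              components.put(component.getClass(), component);
              return component;
          }

          <T extends Component> T getComponent(Class<T> type) {
              return type.cast(components.get(type));
          }
      }

      abstract class Component {}
      class MeshRenderer extends Component {}
      class Collider extends Component {}

      class Demo {
          public static void main(String[] args) {
              GameObject warship = new GameObject();
              MeshRenderer renderer = warship.addComponent(new MeshRenderer());
              Collider collider = warship.getComponent(Collider.class);   // null until one is added
          }
      }

    In C++ the same shape is typically a member function template, e.g. template<typename T> T* AddComponent() / T* GetComponent(), over a std::unordered_map<std::type_index, Component*>; that removes the string-to-type switch in ComponentFactory entirely.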

  • How can I measure latency when using Game Center?

    - by Freddy
    I'm pretty new to network programming. Basically I'm making a relatively simple iPhone game using Game Center p2p. I'm now working on an algorithm to improve the multiplayer performance, but for the algorithm to work well I need to know how long it takes for a packet to travel from one device to the other (the latency). For now, I have solved the problem by sending a double with the time interval since 1970 in the packet and then comparing it with the time on the other device. However, I have heard that the NSDate methods are connected to the internet, which will also cause latency, so the time interval would not be perfectly correct. What is the ideal way to check how long it takes for a packet to be sent?
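
    The usual way to avoid trusting either device's clock is to measure the round trip: device A timestamps a ping with its own clock, device B echoes it back untouched, and A halves the elapsed time when the echo arrives, so only A's clock is ever consulted. A small transport-agnostic Java sketch of the bookkeeping (Game Center's send/receive calls would sit where the comments are):

      import java.util.HashMap;
      import java.util.Map;

      // Round-trip latency estimation: only the sender's clock is used, so the two devices'
      // clocks never need to agree.
      class LatencyEstimator {
          private final Map<Integer, Long> pendingPings = new HashMap<>(); // ping id -> send time (ns)
          private double smoothedRttMs = -1;
          private int nextId = 0;

          // Call when sending a ping message; put the returned id inside the packet.
          int onPingSent() {
              int id = nextId++;
              pendingPings.put(id, System.nanoTime());
              return id;
          }

          // Call when the peer's echo of ping `id` arrives back.
          void onPongReceived(int id) {
              Long sentAt = pendingPings.remove(id);
              if (sentAt == null) return;
              double rttMs = (System.nanoTime() - sentAt) / 1_000_000.0;
              // Exponential smoothing keeps one-off spikes from jerking the estimate around.
              smoothedRttMs = smoothedRttMs < 0 ? rttMs : 0.875 * smoothedRttMs + 0.125 * rttMs;
          }

          // One-way latency is commonly approximated as half the smoothed round trip.
          double oneWayLatencyMs() {
              return smoothedRttMs < 0 ? 0 : smoothedRttMs / 2;
          }
      }

    This sidesteps the NSDate concern entirely, because no absolute wall-clock time ever crosses the wire.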

  • Converting a GameObject method call from UnityScript to C#

    - by Crims0n_
    Here is the UnityScript implementation of the method i use to generate a randomly tiled background, the problem i'm having relates to how to translate the call to the newTile method in c#, so far i've had no luck fiddling... can anyone point me in the correct direction? Thanks #pragma strict import System.Collections.Generic; var mapSizeX : int; var mapSizeY : int; var xOffset : float; var yOffset : float; var tilePrefab : GameObject; var tilePrefab2 : GameObject; var tiles : List.<Transform> = new List.<Transform>(); function Start () { var i : int = 0; var xIndex : int = 0; var yIndex : int = 0; xOffset = 2.69; yOffset = -1.97; while(yIndex < mapSizeY){ xIndex = 0; while(xIndex < mapSizeX){ var z = Random.Range(0, 5); if (z > 2) { var newTile : GameObject = Instantiate (tilePrefab, Vector3(xIndex*0.64 - (xOffset * (mapSizeX/10)), yIndex*-0.64 - (yOffset * (mapSizeY/10)), 0), Quaternion.identity); tiles.Add(newTile.transform); newTile.transform.parent = transform; newTile.transform.name = "tile_"+i; i++; xIndex++; } if (z < 2) { var newTile2 : GameObject = Instantiate (tilePrefab2, Vector3(xIndex*0.64 - (xOffset * (mapSizeX/10)), yIndex*-0.64 - (yOffset * (mapSizeY/10)), 0), Quaternion.identity); tiles.Add(newTile2.transform); newTile2.transform.parent = transform; newTile2.transform.name = "Ztile_"+i; i++; xIndex++; } } yIndex++; } } C# Version [Fixed] using UnityEngine; using System.Collections; public class LevelGen : MonoBehaviour { public int mapSizeX; public int mapSizeY; public float xOffset; public float yOffset; public GameObject tilePrefab; public GameObject tilePrefab2; int i; public System.Collections.Generic.List<Transform> tiles = new System.Collections.Generic.List<Transform>(); // Use this for initialization void Start () { int i = 0; int xIndex = 0; int yIndex = 0; xOffset = 1.58f; yOffset = -1.156f; while (yIndex < mapSizeY) { xIndex = 0; while(xIndex < mapSizeX) { int z = Random.Range(0, 5); if (z > 5) { GameObject newTile = (GameObject)Instantiate(tilePrefab, new Vector3(xIndex*0.64f - (xOffset * (mapSizeX/10.0f)), yIndex*-0.64f - (yOffset * (mapSizeY/10.0f)), 0), Quaternion.identity); tiles.Add(newTile.transform); newTile.transform.parent = transform; newTile.transform.name = "tile_"+i; i++; xIndex++; } if (z < 5) { GameObject newTile2 = (GameObject)Instantiate(tilePrefab, new Vector3(xIndex*0.64f - (xOffset * (mapSizeX/10.0f)), yIndex*-0.64f - (yOffset * (mapSizeY/10.0f)), 0), Quaternion.identity); tiles.Add(newTile2.transform); newTile2.transform.parent = transform; newTile2.transform.name = "tile2_"+i; i++; xIndex++; } } yIndex++; } } // Update is called once per frame void Update () { } }

  • Dynamic Terrain Texture

    - by lgrevenl
    I've been looking at a 2D physics game called 'Hill Climb Racing' (Android and iOS) and was wondering how they went about texturing the terrain. I've had a think about it and come up with nothing, and finding a resource on the web has proved impossible. Please help. The game mentioned uses Cocos2d; would it be just as doable in a different environment? EDIT: I was looking at another question, "Drawing large 2D sidescroller level terrain". The end result is what I'm looking for, but in my mind there should be some way to add this effect (using small textures) to terrain specified by vertices, rather than making a very large image to match whatever is seen in the level.
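
    A common way to get this effect without one huge image is to build a mesh from the terrain vertices and derive texture coordinates from world position, so a small repeating texture tiles over any hill shape; a thin decorative strip (grass or a dirt edge) can then be drawn along the surface curve on top. A Java sketch of the UV part (engine-agnostic, vertex layout illustrative):

      // Builds a triangle strip between the terrain surface and a fixed bottom, with UVs taken
      // straight from world position so a small repeating texture tiles seamlessly over any shape.
      class TerrainMesh {
          // surfaceX/surfaceY: the polyline of the terrain surface, left to right.
          static float[] build(float[] surfaceX, float[] surfaceY, float bottomY, float texSize) {
              int n = surfaceX.length;
              float[] verts = new float[n * 2 * 4];           // per vertex: x, y, u, v
              for (int i = 0; i < n; i++) {
                  float x = surfaceX[i];
                  // Top vertex on the surface, bottom vertex straight below it.
                  putVertex(verts, i * 2,     x, surfaceY[i], texSize);
                  putVertex(verts, i * 2 + 1, x, bottomY,     texSize);
              }
              return verts;                                   // render as a triangle strip
          }

          private static void putVertex(float[] verts, int index, float x, float y, float texSize) {
              int o = index * 4;
              verts[o] = x;
              verts[o + 1] = y;
              verts[o + 2] = x / texSize;                     // u repeats every texSize world units
              verts[o + 3] = y / texSize;                     // v likewise; set texture wrap to repeat
          }
      }

    With the sampler's wrap mode set to repeat, the seams line up no matter how the terrain is shaped.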

  • Setting the values of a struct array from JS to GLSL

    - by mikidelux
    I've been trying to make a structure that will contain all the lights of my WebGL app, and I'm having troubles setting up it's values from JS. The structure is as follows: struct Light { vec4 position; vec4 ambient; vec4 diffuse; vec4 specular; vec3 spotDirection; float spotCutOff; float constantAttenuation; float linearAttenuation; float quadraticAttenuation; float spotExponent; float spotLightCosCutOff; }; uniform Light lights[numLights]; After testing LOTS of things I made it work but I'm not happy with the code I wrote: program.uniform.lights = []; program.uniform.lights.push({ position: "", diffuse: "", specular: "", ambient: "", spotDirection: "", spotCutOff: "", constantAttenuation: "", linearAttenuation: "", quadraticAttenuation: "", spotExponent: "", spotLightCosCutOff: "" }); program.uniform.lights[0].position = gl.getUniformLocation(program, "lights[0].position"); program.uniform.lights[0].diffuse = gl.getUniformLocation(program, "lights[0].diffuse"); program.uniform.lights[0].specular = gl.getUniformLocation(program, "lights[0].specular"); program.uniform.lights[0].ambient = gl.getUniformLocation(program, "lights[0].ambient"); ... and so on I'm sorry for making you look at this code, I know it's horrible but I can't find a better way. Is there a standard or recommended way of doing this properly? Can anyone enlighten me?

  • Error in my Separating Axis Theorem collision code

    - by Holly
    The only collision experience i've had was with simple rectangles, i wanted to find something that would allow me to define polygonal areas for collision and have been trying to make sense of SAT using these two links Though i'm a bit iffy with the math for the most part i feel like i understand the theory! Except my implementation somewhere down the line must be off as: (excuse the hideous font) As mentioned above i have defined a CollisionPolygon class where most of my theory is implemented and then have a helper class called Vect which was meant to be for Vectors but has also been used to contain a vertex given that both just have two float values. I've tried stepping through the function and inspecting the values to solve things but given so many axes and vectors and new math to work out as i go i'm struggling to find the erroneous calculation(s) and would really appreciate any help. Apologies if this is not suitable as a question! CollisionPolygon.java: package biz.hireholly.gameplay; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.Paint; import biz.hireholly.gameplay.Types.Vect; public class CollisionPolygon { Paint paint; private Vect[] vertices; private Vect[] separationAxes; int x; int y; CollisionPolygon(Vect[] vertices){ this.vertices = vertices; //compute edges and separations axes separationAxes = new Vect[vertices.length]; for (int i = 0; i < vertices.length; i++) { // get the current vertex Vect p1 = vertices[i]; // get the next vertex Vect p2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; // subtract the two to get the edge vector Vect edge = p1.subtract(p2); // get either perpendicular vector Vect normal = edge.perp(); // the perp method is just (x, y) => (-y, x) or (y, -x) separationAxes[i] = normal; } paint = new Paint(); paint.setColor(Color.RED); } public void draw(Canvas c, int xPos, int yPos){ for (int i = 0; i < vertices.length; i++) { Vect v1 = vertices[i]; Vect v2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; c.drawLine( xPos + v1.x, yPos + v1.y, xPos + v2.x, yPos + v2.y, paint); } } public void update(int xPos, int yPos){ x = xPos; y = yPos; } /* consider changing to a static function */ public boolean intersects(CollisionPolygon p){ // loop over this polygons separation exes for (Vect axis : separationAxes) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // loop over the other polygons separation axes Vect[] sepAxesOther = p.getSeparationAxes(); for (Vect axis : sepAxesOther) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // if we get here then we know that every axis had overlap on it // so we can guarantee an intersection return true; } /* Note projections wont actually be acurate if the axes aren't normalised * but that's not necessary since we just need a boolean return from our * intersects not a Minimum Translation Vector. 
*/ private Vect minMaxProjection(Vect axis) { float min = axis.dot(new Vect(vertices[0].x+x, vertices[0].y+y)); float max = min; for (int i = 1; i < vertices.length; i++) { float p = axis.dot(new Vect(vertices[i].x+x, vertices[i].y+y)); if (p < min) { min = p; } else if (p > max) { max = p; } } Vect minMaxProj = new Vect(min, max); return minMaxProj; } public Vect[] getSeparationAxes() { return separationAxes; } public Vect[] getVertices() { return vertices; } } Vect.java: package biz.hireholly.gameplay.Types; /* NOTE: Can also be used to hold vertices! Projections, coordinates ect */ public class Vect{ public float x; public float y; public Vect(float x, float y){ this.x = x; this.y = y; } public Vect perp() { return new Vect(-y, x); } public Vect subtract(Vect other) { return new Vect(x - other.x, y - other.y); } public boolean overlap(Vect other) { if(y > other.x && other.y > x){ return true; } return false; } /* used specifically for my SAT implementation which i'm figuring out as i go, * references for later.. * http://www.gamedev.net/page/resources/_/technical/game-programming/2d-rotated-rectangle-collision-r2604 * http://www.codezealot.org/archives/55 */ public float scalarDotProjection(Vect other) { //multiplier = dot product / length^2 float multiplier = dot(other) / (x*x + y*y); //to get the x/y of the projection vector multiply by x/y of axis float projX = multiplier * x; float projY = multiplier * y; //we want to return the dot product of the projection, it's meaningless but useful in our SAT case return dot(new Vect(projX,projY)); } public float dot(Vect other){ return (other.x*x + other.y*y); } }
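
    One way to localize the erroneous calculation is to feed the classes above a case whose answer is known by inspection and log what comes back: two axis-aligned squares first, rotated shapes only once those pass. A small harness using the classes as posted (it lives in the same package because the CollisionPolygon constructor is package-private):

      package biz.hireholly.gameplay;   // same package as CollisionPolygon

      import android.util.Log;
      import biz.hireholly.gameplay.Types.Vect;

      // Call SatSanityCheck.run() once (e.g. when the world is created) and check logcat:
      // axis-aligned squares make it easy to see which axis or projection goes wrong before
      // moving on to rotated shapes.
      class SatSanityCheck {
          static void run() {
              Vect[] square = {
                  new Vect(0, 0), new Vect(10, 0), new Vect(10, 10), new Vect(0, 10)
              };
              CollisionPolygon a = new CollisionPolygon(square);
              CollisionPolygon b = new CollisionPolygon(square);

              a.update(0, 0);
              b.update(5, 5);      // overlapping by 5 units on each axis
              Log.d("SAT", "expected true:  " + a.intersects(b));

              b.update(20, 20);    // well separated
              Log.d("SAT", "expected false: " + a.intersects(b));
          }
      }

    If the axis-aligned cases pass, the next things worth logging are the separation axes and the min/max projections for the actual rotated polygons, since the overlap test itself (max1 > min2 && max2 > min1) matches the standard SAT condition.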

  • XNA Transparency depending on drawing order?

    - by DarthRoman
    I am drawing two 3D objects; both of them can fade from opaque to transparent independently, and they can intersect each other (so you cannot say which one is in front of the other). Look at the image for a better understanding (one of the objects is a terrain and the other one an area). Now, if I apply transparency to both of them and draw the terrain before the area, the terrain is not transparent with respect to the area, but the area is. And finally, if I draw the area before the terrain, then the area is not transparent with respect to the terrain. QUESTION: How can I make all the objects transparent to the rest of the objects without depending on the drawing order?

  • Regulating how much to draw based on how much was drawn last frame

    - by Mike Howard
    I have a 3D game world on an iPhone (limited graphics speed), and I'm already regulating whether I draw each shape on the screen based on its size and distance from the camera. Something like... if (how_big_it_looks_from_the_camera > constant) then draw. What I want to do now is also take into account how many shapes are being drawn, so that in busier areas of the game world I can draw less than I otherwise would. I tried to do this by dividing how_big_it_looks by the number of shapes that were drawn last frame (well, the square root of this, but I'm simplifying - the problem is the same). if (how_big_it_looks / shapes_drawn > constant2) then draw. But the check happens at the level of objects which represent many drawn shapes, and if an object containing many shapes is switched on, it increases shapes_drawn a lot and switches itself back off the next frame. It flickers on and off. I tried keeping a kind of weighted average of previous values, by each frame doing something like shapes_drawn_recently = 0.9 * shapes_drawn_recently + 0.1 * shapes_just_drawn, but of course it only slows the flickering down because of the nature of the feedback loop. Is there a good way of solving this? My project is in Objective-C, but a general algorithm or pseudo-code is good too. Thanks.
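
    The flicker is the classic symptom of feeding a single threshold back into itself; the usual fix is hysteresis: require a higher score to switch an object on than to keep it on, so the decision cannot oscillate around one value even though drawing the object changes shapes_drawn. A small Java sketch:

      // Hysteresis: an object must look clearly big enough to switch on, but once visible it is
      // only switched off when it falls well below that level. The gap between the two thresholds
      // absorbs the feedback from shapes_drawn changing, so nothing flickers frame to frame.
      class DrawCuller {
          private final float onThreshold;    // e.g. constant2
          private final float offThreshold;   // e.g. constant2 * 0.7f
          private boolean visible = false;

          DrawCuller(float onThreshold, float offThreshold) {
              this.onThreshold = onThreshold;
              this.offThreshold = offThreshold;
          }

          boolean shouldDraw(float apparentSize, float shapesDrawnRecently) {
              float score = apparentSize / shapesDrawnRecently;
              if (visible) {
                  visible = score > offThreshold;   // easy to stay on
              } else {
                  visible = score > onThreshold;    // harder to turn on
              }
              return visible;
          }
      }

    The smoothed shapes_drawn_recently average can stay exactly as it is; it just feeds the two-threshold decision instead of a single cutoff.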

  • How to create an extensible rope in Box2D?

    - by Thomas
    Let's say I'm trying to create a ninja lowering himself down a rope, or pulling himself back up, all whilst he might be swinging from side to side or hit by objects. Basically like http://ninja.frozenfractal.com/ but with Box2D instead of hacky JavaScript. Ideally I would like to use a rope joint in Box2D that allows me to change the length after construction. The standard Box2D RopeJoint doesn't offer that functionality. I've considered a PulleyJoint, connecting the other end of the "pulley" to an invisible kinematic body that I can control to change the length, but PulleyJoint is more like a rod than a rope: it constrains maximum length, but unlike RopeJoint it constrains the minimum as well. Re-creating a RopeJoint every frame using a new length is rather inefficient, and I'm not even sure it would work properly in the simulation. I could create a "chain" of bodies connected by RotationJoints but that is also less efficient, and less robust. I also wouldn't be able to change the length arbitrarily, but only by adding and removing a whole number of links, and it's not obvious how I would connect the remainder without violating existing joints. This sounds like something that should be straightforward to do. Am I overlooking something?

  • Android Loading Screen: How do I use a stack to load elements?

    - by tom_mai78101
    I have some problems figuring out what value I should put in this call: int value_needed_to_figure_out = X; ProgressBar.incrementProgressBy(value_needed_to_figure_out); I've been researching loading screens and how to use them. Some examples I've seen implement Thread.sleep() inside a Handler.post(new Runnable()) call. I got most of that concept: use the Handler to update the ProgressBar while pretending to do some heavy crunching work. So I kept looking. I have read this thread: "How do I load chunks of data from an asset manager during a loading screen?" It said that I could push the things that need loading onto a stack and keep a size counter as I add elements to the stack. What does that mean? This is the part where I'm totally stumped. If anyone could provide some hints, I'd greatly appreciate it. Thanks in advance.
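
    One way to make the increment concrete: queue up each real loading step, set the ProgressBar's max (here 100), and let a background thread pop one task at a time, posting max / taskCount to the UI thread after each one finishes; that quotient is the value_needed_to_figure_out. A minimal sketch (the layout and the actual loading work are placeholders):

      import android.app.Activity;
      import android.os.Bundle;
      import android.widget.ProgressBar;
      import java.util.ArrayDeque;
      import java.util.Deque;

      public class LoadingActivity extends Activity {
          private ProgressBar progressBar;

          @Override
          protected void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              progressBar = new ProgressBar(this, null, android.R.attr.progressBarStyleHorizontal);
              setContentView(progressBar);

              // Each Runnable is one real loading step (decode a texture, parse a level, ...).
              final Deque<Runnable> tasks = new ArrayDeque<Runnable>();
              tasks.add(new Runnable() { public void run() { /* load textures */ } });
              tasks.add(new Runnable() { public void run() { /* load sounds   */ } });
              tasks.add(new Runnable() { public void run() { /* parse level   */ } });

              // value_needed_to_figure_out: the bar's max divided by the number of queued steps.
              progressBar.setMax(100);
              final int increment = progressBar.getMax() / tasks.size();

              new Thread(new Runnable() {
                  public void run() {
                      while (!tasks.isEmpty()) {
                          tasks.pop().run();                    // do one unit of real work
                          runOnUiThread(new Runnable() {
                              public void run() {
                                  progressBar.incrementProgressBy(increment);
                              }
                          });
                      }
                      // ... runOnUiThread(...) here to leave the loading screen ...
                  }
              }).start();
          }
      }

    Using the number of queued tasks as the denominator is essentially the "size counter" idea from the linked thread: the counter is how many elements went onto the stack, and each pop advances the bar by one share.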

  • Making large scale changes to an economy in a social game

    - by Zach
    Are there any examples or case studies of social games, specifically on Facebook, where the developer has made drastic changes to the economy? I'm specifically interested in examples where the old economy was based off of purchasing items with Facebook credits then moving to a new model where the same inventory or similar inventory is sold with a soft currency. The closest comparisons I've been able to find so far are looking at iOS games that have gone from purchase models to freemium models, but haven't found a comparable scenario in a social game besides larger scale MMO's.

  • How to design a leaderboard?

    - by PeterK
    This sounds like an easy thing, but consider the following: there are many players, some have played many games and some have just started, and there are different types of statistics. On what information should the actual ranking be based? I am planning to display the board in a UITableView, so there is limited space available per player. However, I am not bound to the UITableView if there is a better solution. This is a quiz game, and the information I am currently capturing per player is: games played in total, games played per game type (the current version has only one game type), questions answered, and correct answers. Maybe I should include additional information. I have been thinking about having a leaderboard property page where the player can decide on what basis the leaderboard should display information, but I would like to avoid that complexity. However, if it is needed I will do it. Any advice on how to design the presentation of this would be highly appreciated.
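
    For the "veterans vs. newcomers" part, one common approach is to rank by a smoothed accuracy rather than a raw percentage: pretend every player has already answered a fixed number of extra questions at the global average rate, so a 10-for-10 newcomer does not leap over a 950-for-1000 veteran. A small Java sketch:

      // Bayesian-average style ranking: accuracy smoothed toward the global mean by a prior of
      // `priorWeight` pretend questions, so tiny sample sizes can't dominate the board.
      class LeaderboardScore {
          static double score(int correct, int answered, double globalAccuracy, int priorWeight) {
              return (correct + priorWeight * globalAccuracy) / (double) (answered + priorWeight);
          }
      }

    For example, with a global accuracy of 0.6 and a prior weight of 20, 10 correct out of 10 scores about 0.73 while 950 out of 1000 scores about 0.94. The single smoothed number also fits a one-line UITableView cell next to the player's name and games played.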

  • How do you properly organize a commercial game?

    - by Reactorcore
    For the past months I've been studying programming and I've finally learned how to code, but one thing that is confusing me is how to properly organize the design of a game project - code wise. The game I'm building is a pretty standard commercial game. It has the basic components of a normal game: A world, characters and items interacting with each other and all of this is run by game manager. Basically you play as a hero in a world and do stuff. Fight, explore and interact. Think of your standard adventure game that starts off with an intro, goes to the menu system, then gets into the game and back to the menu. Pretty much like 99% of any commercial game or otherwise serious game projects. Thats what I'm aiming at. The problem is: How do you properly code a commercial game architecture? How do you organize it? How do you make it not become unmaintainable spaghetti code? What specific things to keep in mind when building this, codewise? How you can help me: a) Please tell how do you code your own game projects. What is your thought-process when designing the architecture? b) Recommend books, blogs, tutorials, videos or anything else on how to organize a commercial video game. c) Give hints and tips on do's/don'ts when building a game, codewise. Please help!

  • What exactly can shaders be used for?

    - by Bane
    I'm not really a 3D person, and I've only used shaders a little in some Three.js examples, and so far I've got an impression that they are only being used for the graphical part of the equation. Although, the (quite cryptic) Wikipedia article and some other sources lead me to believe that they can be used for more than just graphical effects, ie, to program the GPU (Wikipedia). So, the GPU is still a processor, right? With a larger and a different instruction set for easier and faster vector manipulation, but still a processor. Can I use shaders to make regular programs (provided I've got access to the video memory, which is probable)? Edit: regular programs == "Applications", ie create windows/console programs, or at least have some way of drawing things on the screen, maybe even taking user input.
