Search Results

Search found 37616 results on 1505 pages for 'model driven development'.


  • How can I selectively update XNA GameComponents?

    - by Bill
    I have a small 2D game I'm working on in XNA. So far, I have a player-controlled ship that operates on vector thrust and is terribly fun to spin around in circles. I've implemented this as a DrawableGameComponent and registered it with the game using game.Components.Add(this) in the Ship object constructor. How can I implement features like pausing and a menu system with my current implementation? Is it possible to set certain GameComponents to not update? Is this something for which I should even be using a DrawableGameComponent? If not, what are more appropriate uses for this?
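
    For what it's worth, XNA's GameComponent already has an Enabled property (Update is skipped while it is false) and DrawableGameComponent adds Visible for Draw, so pausing usually amounts to toggling those on the gameplay components while leaving the menu components enabled. A language-agnostic sketch of that pattern, with hypothetical names:

        // Hypothetical component manager: components with enabled == false are skipped,
        // so "pause" means disabling the gameplay components while menus stay enabled.
        interface GameComponent {
            boolean isEnabled();
            void update(float dt);
        }

        class ComponentManager {
            private final java.util.List<GameComponent> components = new java.util.ArrayList<>();

            void add(GameComponent c) { components.add(c); }

            void update(float dt) {
                for (GameComponent c : components) {
                    if (c.isEnabled()) {
                        c.update(dt);
                    }
                }
            }
        }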

    Read the article

  • Radiosity using a hemisphere

    - by P. Avery
    I'm working on a radiosity processor. During the visibility pass I project the scene geometry onto a hemisphere at a high order of tessellation and render it to a 1024x1024 render target. The problem is that the edges of certain triangles are not rendered to the item buffer (the render target), so when I test those edges (or pixels, in the pixel shader) for visibility during the reconstruction pass, visible edges are not identified and as a result the pixel for that edge is discarded. One solution was to increase the resolution of the item buffer (up to 4096x4096); this helped and more edges became visible, but it was not foolproof. How do I increase visibility? Here is a screenshot of a scene after radiosity is applied: the seams are edges along a triangle face that were not visible due to the resolution of the item buffer. I eventually fixed the problem by sampling the item buffer with 8 points:
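
    For anyone hitting the same thing: the multi-tap fix can be as simple as treating a patch as visible if any tap in a small neighbourhood of the projected pixel returns its ID, which compensates for edge pixels lost to rasterization. A rough sketch of that test, assuming the item buffer has been read back into an integer array (hypothetical layout):

        // Returns true if the patch ID shows up at the pixel or any of its 8 neighbours.
        static boolean isVisible(int[][] itemBuffer, int x, int y, int patchId) {
            int[] dx = { 0, -1, 0, 1, -1, 1, -1, 0, 1 };
            int[] dy = { 0, -1, -1, -1, 0, 0, 1, 1, 1 };
            for (int i = 0; i < dx.length; i++) {
                int sx = x + dx[i], sy = y + dy[i];
                if (sx < 0 || sy < 0 || sx >= itemBuffer.length || sy >= itemBuffer[0].length) continue;
                if (itemBuffer[sx][sy] == patchId) return true;
            }
            return false;
        }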

    Read the article

  • Opposite Force to Apply to a Collided Rigid Body?

    - by Milo
    I'm working on the physics for my GTA2-like game so I can learn more about game physics. The collision detection and resolution are working great. I'm now just unsure how to compute the force to apply to a body after it collides with a wall. My rigid body looks like this: /our simulation object class RigidBody extends Entity { //linear private Vector2D velocity = new Vector2D(); private Vector2D forces = new Vector2D(); private float mass; private Vector2D v = new Vector2D(); //angular private float angularVelocity; private float torque; private float inertia; //graphical private Vector2D halfSize = new Vector2D(); private Bitmap image; private Matrix mat = new Matrix(); private float[] Vector2Ds = new float[2]; private Vector2D tangent = new Vector2D(); private static Vector2D worldRelVec = new Vector2D(); private static Vector2D relWorldVec = new Vector2D(); private static Vector2D pointVelVec = new Vector2D(); private static Vector2D acceleration = new Vector2D(); public RigidBody() { //set these defaults so we don't get divide by zeros mass = 1.0f; inertia = 1.0f; setLayer(LAYER_OBJECTS); } protected void rectChanged() { if(getWorld() != null) { getWorld().updateDynamic(this); } } //intialize out parameters public void initialize(Vector2D halfSize, float mass, Bitmap bitmap) { //store physical parameters this.halfSize = halfSize; this.mass = mass; image = bitmap; inertia = (1.0f / 20.0f) * (halfSize.x * halfSize.x) * (halfSize.y * halfSize.y) * mass; RectF rect = new RectF(); float scalar = 10.0f; rect.left = (int)-halfSize.x * scalar; rect.top = (int)-halfSize.y * scalar; rect.right = rect.left + (int)(halfSize.x * 2.0f * scalar); rect.bottom = rect.top + (int)(halfSize.y * 2.0f * scalar); setRect(rect); } public void setLocation(Vector2D position, float angle) { getRect().set(position.x,position.y, getWidth(), getHeight(), angle); rectChanged(); } public Vector2D getPosition() { return getRect().getCenter(); } @Override public void update(float timeStep) { doUpdate(timeStep); } public void doUpdate(float timeStep) { //integrate physics //linear acceleration.x = forces.x / mass; acceleration.y = forces.y / mass; velocity.x += (acceleration.x * timeStep); velocity.y += (acceleration.y * timeStep); //velocity = Vector2D.add(velocity, Vector2D.scalarMultiply(acceleration, timeStep)); Vector2D c = getRect().getCenter(); v.x = getRect().getCenter().getX() + (velocity.x * timeStep); v.y = getRect().getCenter().getY() + (velocity.y * timeStep); setCenter(v.x, v.y); forces.x = 0; //clear forces forces.y = 0; //angular float angAcc = torque / inertia; angularVelocity += angAcc * timeStep; setAngle(getAngle() + angularVelocity * timeStep); torque = 0; //clear torque } //take a relative Vector2D and make it a world Vector2D public Vector2D relativeToWorld(Vector2D relative) { mat.reset(); Vector2Ds[0] = relative.x; Vector2Ds[1] = relative.y; mat.postRotate(JMath.radToDeg(getAngle())); mat.mapVectors(Vector2Ds); relWorldVec.x = Vector2Ds[0]; relWorldVec.y = Vector2Ds[1]; return relWorldVec; } //take a world Vector2D and make it a relative Vector2D public Vector2D worldToRelative(Vector2D world) { mat.reset(); Vector2Ds[0] = world.x; Vector2Ds[1] = world.y; mat.postRotate(JMath.radToDeg(-getAngle())); mat.mapVectors(Vector2Ds); worldRelVec.x = Vector2Ds[0]; worldRelVec.y = Vector2Ds[1]; return worldRelVec; } //velocity of a point on body public Vector2D pointVelocity(Vector2D worldOffset) { tangent.x = -worldOffset.y; tangent.y = worldOffset.x; pointVelVec.x = (tangent.x * angularVelocity) + 
velocity.x; pointVelVec.y = (tangent.y * angularVelocity) + velocity.y; return pointVelVec; } public void applyForce(Vector2D worldForce, Vector2D worldOffset) { //add linear force forces.x += worldForce.x; forces.y += worldForce.y; //add associated torque torque += Vector2D.cross(worldOffset, worldForce); } @Override public void draw( GraphicsContext c) { c.drawRotatedScaledBitmap(image, getPosition().x, getPosition().y, getWidth(), getHeight(), getAngle()); } public Vector2D getVelocity() { return velocity; } public void setVelocity(Vector2D velocity) { this.velocity = velocity; } } Force is applied through the applyForce method, which also accumulates the associated torque. I'm just not sure how to come up with the vectors in two cases: when the RigidBody hits a static entity, and when it hits another RigidBody that may or may not be in motion. Would anyone know a way (without overly complex math) to figure out the opposite force I need to apply to the car? I know the normal it is colliding with and how deeply it penetrated. My main goal is that if I hit a building from the side, the car should not just stay there; it should slowly rotate out of the collision if I hit at more than 45 degrees. Right now, when I hit a wall I only change the velocity directly, which does not account for the angular part. Thanks!
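
    The standard answer for this setup is an impulse (an instantaneous change in velocity) rather than a force: for a contact normal n, contact offset r from the centre of mass, restitution e and relative normal velocity vn, the impulse magnitude against static geometry is j = -(1 + e) * vn / (1/m + (r x n)^2 / I), applied to both the linear and the angular velocity. Below is a sketch of a method that could sit on the RigidBody class above, written against its existing fields (friction would add a similar impulse along the tangent):

        // Sketch of an impulse response against a static wall, using the fields above.
        // 'normal' points from the wall toward the body; 'contactWorld' is the contact
        // point in world space; 'penetration' is the overlap depth along the normal.
        public void resolveStaticCollision(Vector2D normal, Vector2D contactWorld,
                                           float penetration, float restitution) {
            // offset from the centre of mass to the contact point
            Vector2D r = new Vector2D(contactWorld.x - getPosition().x,
                                      contactWorld.y - getPosition().y);

            // velocity of the contact point (linear + angular contribution)
            Vector2D vp = pointVelocity(r);
            float vn = vp.x * normal.x + vp.y * normal.y;   // relative normal velocity
            if (vn > 0) return;                             // already separating

            float rCrossN = r.x * normal.y - r.y * normal.x;
            float invMass = 1.0f / mass;
            float denom = invMass + (rCrossN * rCrossN) / inertia;
            float j = -(1.0f + restitution) * vn / denom;   // impulse magnitude

            // apply the impulse to linear and angular velocity
            velocity.x += (j * normal.x) * invMass;
            velocity.y += (j * normal.y) * invMass;
            angularVelocity += (rCrossN * j) / inertia;

            // push the body out of the wall along the normal to remove the overlap
            setCenter(getPosition().x + normal.x * penetration,
                      getPosition().y + normal.y * penetration);
        }

    With a hit away from the centre of mass, the (r x n) term is what makes the car rotate off the wall instead of stopping dead.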

    Read the article

  • How can I resolve component types in a way that supports adding new types relatively easily?

    - by John
    I am trying to build an Entity Component System for an interactive application developed using C++ and OpenGL. My question is quite simple. In my GameObject class I have a collection of Components. I can add and retrieve components. class GameObject: public Object { public: GameObject(std::string objectName); ~GameObject(void); Component * AddComponent(std::string name); Component * AddComponent(Component componentType); Component * GetComponent (std::string TypeName); Component * GetComponent (<Component Type Here>); private: std::map<std::string,Component*> m_components; }; I will have a collection of components that inherit from the base Component class. So if I have a MeshRenderer component, I would like to do the following: GameObject * warship = new GameObject("myLovelyWarship"); MeshRenderer * meshRenderer = warship->AddComponent(MeshRenderer); or possibly MeshRenderer * meshRenderer = warship->AddComponent("MeshRenderer"); I could make a Component Factory like this: class ComponentFactory { public: static Component * CreateComponent(const std::string &compTyp) { if(compTyp == "MeshRenderer") return new MeshRenderer; if(compTyp == "Collider") return new Collider; return NULL; } }; However, I feel like I should not have to keep updating the Component Factory every time I want to create a new custom Component, but it is an option. Is there a more proper way to add and retrieve these components? Could standard templates be another solution?
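
    The usual way to drop the string factory is to key the component map on the component's type itself; in C++ that is a templated AddComponent<T>()/GetComponent<T>() over a std::unordered_map keyed by std::type_index, so yes, templates are the conventional solution. The same idea expressed with Java generics, purely to show the shape of the API:

        import java.util.HashMap;
        import java.util.Map;

        interface Component {}
        class MeshRenderer implements Component {}

        class GameObject {
            private final Map<Class<? extends Component>, Component> components = new HashMap<>();

            <T extends Component> T addComponent(T component) {
                components.put(component.getClass(), component);
                return component;
            }

            <T extends Component> T getComponent(Class<T> type) {
                return type.cast(components.get(type));
            }
        }

        // usage:
        //   GameObject warship = new GameObject();
        //   MeshRenderer mr = warship.addComponent(new MeshRenderer());
        //   MeshRenderer same = warship.getComponent(MeshRenderer.class);

    New component types then register themselves simply by existing; nothing central has to be updated.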

    Read the article

  • Slick2d Spritesheet showing whole image

    - by BotskoNet
    I'm trying to show a single subimage from a sprite sheet. Using slick2d SpriteSheet class, all it's doing is showing me the entire image, but scaled down to fit the cell dimensions. The image is 96x192 and should have cells of 32x32. The code: SpriteSheet spriteSheet = new SpriteSheet("images/"+file, 32, 32 ); System.out.println("Horiz Count: " + spriteSheet.getHorizontalCount()); System.out.println("Vert Count: " + spriteSheet.getVerticalCount()); System.out.println("Height: " + spriteSheet.getHeight()); System.out.println("Width: " + spriteSheet.getWidth()); System.out.println("Texture Width: " + spriteSheet.getTextureWidth()); System.out.println("Texture Height: " + spriteSheet.getTextureHeight()); Prints: Horiz Count: 3 Vert Count: 6 Height: 192 Width: 96 Texture Width: 0.75 Texture Height: 0.75 Not sure what the texture dimensions refer to, but the rest is entirely accurate. However, when I draw the icon, the entire sprite image shows scaled down to 32x32: Image image = spriteSheet.getSprite(1, 0); // a test image.bind(); GL11.glEnable(GL11.GL_BLEND); GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA); GL11.glBegin(GL11.GL_QUADS); GL11.glTexCoord2f(0,0); GL11.glVertex2f(x,y); GL11.glTexCoord2f(1,0); GL11.glVertex2f(x+image.getWidth(),y); GL11.glTexCoord2f(1,1); GL11.glVertex2f(x+image.getWidth(),y+image.getHeight()); GL11.glTexCoord2f(0,1); GL11.glVertex2f(x,y+image.getHeight()); GL11.glEnd(); GL11.glDisable(GL11.GL_BLEND);
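
    One likely culprit, hedged since it depends on the Slick2D version: the Image returned by getSprite() shares the whole sheet's OpenGL texture, so binding it and drawing a quad with texture coordinates 0..1 maps the entire sheet; the sub-region only lives in the Image's texture offset/size values (the 0.75 "texture width" printout is the sheet's used fraction of its padded backing texture). Either let the sub-image draw itself, or feed its texture window into the quad:

        // Simplest: let Slick2D apply the sub-region itself.
        Image sprite = spriteSheet.getSprite(1, 0);
        sprite.draw(x, y);

        // Or, if the raw quad is needed, use the sub-image's texture window instead of 0..1.
        float tx = sprite.getTextureOffsetX();
        float ty = sprite.getTextureOffsetY();
        float tw = sprite.getTextureWidth();
        float th = sprite.getTextureHeight();

        sprite.bind();
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glTexCoord2f(tx,      ty);      GL11.glVertex2f(x, y);
        GL11.glTexCoord2f(tx + tw, ty);      GL11.glVertex2f(x + sprite.getWidth(), y);
        GL11.glTexCoord2f(tx + tw, ty + th); GL11.glVertex2f(x + sprite.getWidth(), y + sprite.getHeight());
        GL11.glTexCoord2f(tx,      ty + th); GL11.glVertex2f(x, y + sprite.getHeight());
        GL11.glEnd();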

    Read the article

  • Automatically zoom out the camera to show all players

    - by user36159
    I am building a game in XNA that takes place in a rectangular arena. The game is multiplayer and each player may go where they like within the arena. The camera is a perspective camera that looks directly downwards. The camera should be automatically repositioned based on the game state. Currently, the xy position is a weighted sum of the xy positions of important entities. I would like the camera's z position to be calculated from the xy coordinates so that it zooms out to the point where all important entities are visible. My current approach is to: hw = the greatest x distance from the camera to an important entity hh = the greatest y distance from the camera to an important entity Calculate z = max(hw / tan(FoVx), hh / tan(FoVy)) My code seems to almost work as it should, but the resulting z values are always too low by a factor of about 4. Any ideas?
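
    One thing to check: a frustum with full field of view theta admits offsets up to z * tan(theta / 2), so the distance needs the half-angle, and in XNA the perspective projection's FieldOfView is the vertical angle, with the horizontal half-angle derived from the aspect ratio. Dividing by tan(theta) instead of tan(theta / 2) gives a z that is too small, which matches the symptom. A sketch of the corrected calculation, written as plain engine-agnostic math:

        // fovY is the (vertical) field of view in radians, aspect = viewport width / height;
        // hw and hh are the largest |dx| and |dy| from the camera axis to any important entity.
        static float cameraDistance(float hw, float hh, float fovY, float aspect) {
            double tanHalfY = Math.tan(fovY * 0.5);
            double tanHalfX = tanHalfY * aspect;   // horizontal half-angle follows from the aspect ratio
            // add a small margin on top of this so entities are not right at the screen edge
            return (float) Math.max(hw / tanHalfX, hh / tanHalfY);
        }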

    Read the article

  • Alternatives to NSMutableArray for storing 2D grid - iOS Cocos2d

    - by SundayMonday
    I'm creating a grid-based iOS game using Cocos2d. Currently the grid is stored in an NSMutableArray that contains other NSMutableArrays (the latter are rows in the grid). This works OK and performance so far is pretty good. However, the syntax feels bulky and the indexing isn't very elegant (it uses CGPoints; I would prefer integer indices). I'm looking for an alternative. What are some alternative data structures for 2D arrays in this situation? In my game it's very common to add and remove rows from the bottom of the grid. So the grid might start off 10x10, grow to 17x10, shrink to 8x10 and then finally end with 2x10. Note the column count is constant. I've considered using a vector<vector<Object*>>. Also I'm vaguely aware of some type of "fast array" or similar offered by Cocos2d. I'd just like to learn about best practices from other developers!
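
    Since the column count is fixed and rows only come and go at the bottom, a plain list of fixed-length row arrays (std::vector<std::vector<Object*>> would work the same way) already gives cheap row insertion/removal at the end, and integer indexing falls out of a tiny wrapper class. A sketch of that wrapper, shown in Java purely to illustrate the shape:

        import java.util.ArrayList;
        import java.util.List;

        class Grid<T> {
            private final int columns;
            private final List<T[]> rows = new ArrayList<>();

            Grid(int columns) { this.columns = columns; }

            @SuppressWarnings("unchecked")
            void addRow()        { rows.add((T[]) new Object[columns]); }   // grow at the bottom
            void removeLastRow() { rows.remove(rows.size() - 1); }          // shrink at the bottom

            T get(int row, int col)           { return rows.get(row)[col]; }
            void set(int row, int col, T val) { rows.get(row)[col] = val; }

            int rowCount() { return rows.size(); }
        }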

    Read the article

  • 1136: Incorrect number of arguments. Expected 0 AS3 Flash CS5.5 [on hold]

    - by Erick
    how do I solve this error? I've been trying to get the answer online but have not been successful. I'm trying to learn As3 for flash so I decided to try making a preloader for a game. Preloader.as package com.game.moran { import flash.display.LoaderInfo; import flash.display.MovieClip; import flash.events.*; public class ThePreloader extends MovieClip { private var fullWidth:Number; public var ldrInfo:LoaderInfo; public function ThePreloader (fullWidth:Number = 0, ldrInfo:LoaderInfo = null) { this.fullWidth = fullWidth; this.ldrInfo = ldrInfo; addEventListener(Event.ENTER_FRAME, checkLoad); } private function checkLoad (e:Event) : void { if (ldrInfo.bytesLoaded == ldrInfo.bytesTotal && ldrInfo.bytesTotal !=0) { dispatchEvent (new Event ("loadComplete")); phaseOut(); } updateLoader (ldfInfo.bytesLoaded / ldrInfo.bytesTotal); } private function updateLoader(num:Number) : void { mcPreloaderBar.Width = num * fullWidth; } private function phaseOut() : void { removeEventListener (Event.ENTER_FRAME, checkLoad); phaseComplete(); } private function phaseComplete() : void { dispatchEvent (new Event ("preloaderFinished")); } } } Engine.as package com.game.moran { import flash.display.MovieClip; import flash.display.Sprite; import flash.events.Event; public class Engine extends MovieClip { private var preloader:ThePreloader; public function Engine() { preloader = new ThePreloader(732, this.loaderInfo); stage.addChild(preloader); preloader.addEventListener("loadComplete", loadAssets); preloader.addEventListener("preloaderFinished", showSponsors); } private function loadAssets (e:Event) : void { this.play(); } private function showSponsors(e:Event) : void { stage.removeChild(preloader); trace("show sponsors") } } } The line being flagged as an error is line 13 in Engine.as.

    Read the article

  • How can I dynamically load the correct sprite from a sprite sheet?

    - by Leonard Challis
    I am making a simple card game in unity. The game is based on a standard 52-card pack, with identical backs for unique faces. In my particular game different cards are worth different values and have various special abilities. The game will have 52 cards on the table (on the draw position or in the face-down deck or in someone's hand) at all times, so this number won't change. I thought that making a Card prefab and instantiating 52 of these manually would be a bad idea. Even doing it in code, I thought, would be a bit OTT, and that I should just instantiate visual cards when they are face-up to the player. I have a sprite sheet of the 52 cards and the back, which is imported as a Sprite in multiple mode, sliced in to a grid containing all the cards needed to play the game. The problem I now face is that, through my GameController script I want to generate a shuffled pack of cards, deal some to each player and then show those cards to a player. However, I am not sure of the best way, or even if it's possible, to do this dynamically with the sprite sheets as they are. For instance if I have the following: private CardRank rank; private CardSuite suite; private void Start() { this.rank = CardRank.Ace; this.suite = CardSuite.Spades; } This class would be instantiated by the game manager. I would have 52 of these in code. Whenever I have to visually show a card in the scene, I would use a card prefab, which is essentially a game object with a SpriteRenderer on it. I would need to dynamically load the correct sprite for this object from the spritesheet. The sliced sprites from the sprite sheet actually have names in the format AS (Ace of Spades), 7H (Seven of Hearts), etc - though this was a manual thing I did myself of course. I have also tried various alternative solutions, including creating animations, having separate sprites not in a spritesheet and having an array of available sprites in an array with a specific index for each card, but none seem as elegant as trying to load the correct sprite at runtime, as I'm trying to. So, how do I load a specific sprite from a spritesheet at runtime? I'm open to suggestions, even those that make me think differently about how to approach the problem.
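
    In Unity specifically, one common route is to load all sub-sprites of the sliced sheet at once (for example with Resources.LoadAll using the Sprite type, if the sheet lives under a Resources folder) and drop them into a dictionary keyed by the names already assigned when slicing ("AS", "7H", ...); the card object then only needs to build that key from its rank and suit. A sketch of the key-building and lookup, shown as engine-agnostic Java with hypothetical types:

        import java.util.HashMap;
        import java.util.Map;

        enum Suit { SPADES, HEARTS, DIAMONDS, CLUBS }
        enum Rank { ACE, TWO, THREE, FOUR, FIVE, SIX, SEVEN, EIGHT, NINE, TEN, JACK, QUEEN, KING }

        class CardSprites<S> {                       // S stands in for the engine's sprite type
            private final Map<String, S> byName = new HashMap<>();

            void register(String name, S sprite) { byName.put(name, sprite); }

            // Builds keys in the same "AS", "7H" style used when the sheet was sliced.
            static String key(Rank rank, Suit suit) {
                String r;
                switch (rank) {
                    case ACE:   r = "A"; break;
                    case JACK:  r = "J"; break;
                    case QUEEN: r = "Q"; break;
                    case KING:  r = "K"; break;
                    case TEN:   r = "10"; break;
                    default:    r = String.valueOf(rank.ordinal() + 1); break;
                }
                return r + suit.name().charAt(0);
            }

            S lookup(Rank rank, Suit suit) { return byName.get(key(rank, suit)); }
        }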

    Read the article

  • What is the best type of C# timer to use with a Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3D game in Unity that will have anywhere from 1 to 200 timers running simultaneously. For this game, timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers. An object will be selected, a menu choice will then be selected, and the timer will start. Several events will occur at different intervals during the duration of the timer. The events will be confined to changing the material of the selected object and playing a 1-second sound effect like a chime or a bell. If the user wants to save or end the game before all the timers are done, the start times of the still-running timers should be saved to an XML file so that, when the game is started again, each still-running timer can be checked to see whether it has since finished and the game can change the materials appropriately. I am still trying to figure out what type of timer to use, and would also welcome any suggestions for saving and calculating times over several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?
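
    With durations of minutes to days and no countdown UI, it is usually simpler not to keep 200 live timer objects at all: store each event's absolute end time and compare against the clock once per frame or on load, which also makes the save/restore case trivial because only timestamps go into the XML. A rough engine-agnostic sketch of that bookkeeping:

        import java.util.List;

        class GameTimer {
            final String targetObjectId;       // which object's material/sound to change
            final long[] eventTimesMillis;     // absolute times of the intermediate events
            int nextEvent = 0;

            GameTimer(String targetObjectId, long startMillis, long[] offsetsMillis) {
                this.targetObjectId = targetObjectId;
                this.eventTimesMillis = new long[offsetsMillis.length];
                for (int i = 0; i < offsetsMillis.length; i++) {
                    eventTimesMillis[i] = startMillis + offsetsMillis[i];
                }
            }

            // Called once per frame (or once on game load): fires every event whose time has passed.
            void poll(long nowMillis, List<String> firedEvents) {
                while (nextEvent < eventTimesMillis.length && nowMillis >= eventTimesMillis[nextEvent]) {
                    firedEvents.add(targetObjectId + ":event" + nextEvent);
                    nextEvent++;
                }
            }

            boolean isFinished() { return nextEvent >= eventTimesMillis.length; }
        }

    On load, a single poll with the current time catches up any timers that finished while the game was closed.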

    Read the article

  • How can I cull non-visible isometric tiles?

    - by james
    I have a problem which I am struggling to solve. I have a large map of around 100x100 tiles which form an isometric map. The user is able to move the map around by dragging the mouse. I am trying to optimize my game only to draw the visible tiles. So far my code is like this. It appears to be ok in the x direction, but as soon as one tile goes completely above the top of the screen, the entire column disappears. I am not sure how to detect that all of the tiles in a particular column are outside the visible region. double maxTilesX = widthOfScreen/ halfTileWidth + 4; double maxTilesY = heightOfScreen/ halfTileHeight + 4; int rowStart = Math.max(0,( -xOffset / halfTileWidth)) ; int colStart = Math.max(0,( -yOffset / halfTileHeight)); rowEnd = (int) Math.min(mapSize, rowStart + maxTilesX); colEnd = (int) Math.min(mapSize, colStart + maxTilesY); EDIT - I think I have solved my problem, but perhaps not in a very efficient way. I have taken the center of the screen coordinates, determined which tile this corresponds to by converting the coordinates into cartesian format. I then update the entire box around the screen.
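
    A more robust version of the corner idea is to convert all four screen corners into tile coordinates, take the min/max on each axis, and pad the range by a couple of tiles so partially visible rows and columns stay in; because the range is derived from the corners rather than from a single start tile, a tile leaving the top of the screen no longer drops its whole column. A sketch, with screenToTileX/screenToTileY standing in for the inverse of the drawing transform (hypothetical here):

        void computeVisibleRange(int screenWidth, int screenHeight, int xOffset, int yOffset, int mapSize) {
            int margin = 2;
            int[][] corners = { { 0, 0 }, { screenWidth, 0 }, { 0, screenHeight }, { screenWidth, screenHeight } };
            int minX = Integer.MAX_VALUE, maxX = Integer.MIN_VALUE;
            int minY = Integer.MAX_VALUE, maxY = Integer.MIN_VALUE;
            for (int[] c : corners) {
                int tx = screenToTileX(c[0] - xOffset, c[1] - yOffset);
                int ty = screenToTileY(c[0] - xOffset, c[1] - yOffset);
                minX = Math.min(minX, tx); maxX = Math.max(maxX, tx);
                minY = Math.min(minY, ty); maxY = Math.max(maxY, ty);
            }
            rowStart = Math.max(0, minX - margin); rowEnd = Math.min(mapSize, maxX + margin);
            colStart = Math.max(0, minY - margin); colEnd = Math.min(mapSize, maxY + margin);
        }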

    Read the article

  • Slick2D Rendering Lots of Polygons

    - by Hazzard
    I'm writing an little isometric game using Slick. The world terrain is made up of lots of quadrilaterals. In a small world that is 128 by 128 squares, over 16,000 quadrilaterals need to be rendered. This puts my pretty powerful computer down to 30 fps. I've though about caching "chunks" of the world so only single chunks would ever need updating at a time, but I don't know how to do this, and I am sure there are other ways to optimize it besides that. Maybe I'm doing the whole thing wrong, surely fancy 3D games that run fine on my machine are more intensive than this. My question is how can I improve the FPS and am I doing something wrong? Or does it actually take that much power to render those polygons? -- Here is the source code for the render method in my game state. It iterates through a 2d array or heights and draws polygons based on the height. public void render(GameContainer container, StateBasedGame game, Graphics gfx) throws SlickException { gfx.translate(offsetX * d + container.getWidth() / 2, offsetY * d + container.getHeight() / 2); gfx.scale(d, d); for (int y = 0; y < placeholder.length; y++) {// x & y are isometric // diag for (int x = 0; x < placeholder[0].length; x++) { Polygon poly; int hor = TestState.TILE_WIDTH * (x - y);// hor and ver are orthagonal int W = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y + 1][x];//points to go off of int S = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y + 1][x + 1]; int E = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y][x + 1]; int N = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y][x]; if (placeholder[y][x] == null) { poly = new Polygon();//Create actual surface polygon poly.addPoint(-TestState.TILE_WIDTH + hor, W); poly.addPoint(hor, S + TestState.TILE_HEIGHT); poly.addPoint(TestState.TILE_WIDTH + hor, E); poly.addPoint(hor, N - TestState.TILE_HEIGHT); float z = ((float) heights[y][x + 1] - heights[y + 1][x]) / 32 + 0.5f; placeholder[y][x] = new Tile(poly, new Color(z, z, z)); //ShapeRenderer.fill(placeholder[y][x]); } if (true) {//ONLY draw tile if it's on screen gfx.setColor(placeholder[y][x].getColor()); ShapeRenderer.fill(placeholder[y][x]); //gfx.fill(placeholder[y][x]); //placeholder[y][x]. //DRAW EDGES if (y + 1 == placeholder.length) {//draw South foundation edges gfx.setColor(Color.gray); Polygon found = new Polygon(); found.addPoint(-TestState.TILE_WIDTH + hor, W); found.addPoint(hor, S + TestState.TILE_HEIGHT); found.addPoint(hor, TestState.TILE_HEIGHT * (x + y + 1)); found.addPoint(-TestState.TILE_WIDTH + hor, TestState.TILE_HEIGHT * (x + y)); gfx.fill(found); } if (x + 1 == placeholder[0].length) {//north gfx.setColor(Color.darkGray); Polygon found = new Polygon(); found.addPoint(TestState.TILE_WIDTH + hor, E); found.addPoint(hor, S + TestState.TILE_HEIGHT); found.addPoint(hor, TestState.TILE_HEIGHT * (x + y + 1)); found.addPoint(TestState.TILE_WIDTH + hor, TestState.TILE_HEIGHT * (x + y)); gfx.fill(found); }//*/ } } } }
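
    Chunk caching is usually the first big win here: render a block of tiles (say 16x16) to an off-screen image once, draw that single image per frame, and only re-render a chunk when one of its heights changes, which turns ~16,000 polygon draws into a handful of image draws. In Slick2D the chunk surface could be an off-screen Image; the sketch below uses BufferedImage purely as a stand-in to show the bookkeeping (the dirty flags are the important part):

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        class ChunkCache {
            static final int CHUNK_TILES = 16;
            private final BufferedImage[][] chunks;
            private final boolean[][] dirty;

            ChunkCache(int chunksX, int chunksY, int chunkPixelW, int chunkPixelH) {
                chunks = new BufferedImage[chunksY][chunksX];
                dirty = new boolean[chunksY][chunksX];
                for (int y = 0; y < chunksY; y++)
                    for (int x = 0; x < chunksX; x++) {
                        chunks[y][x] = new BufferedImage(chunkPixelW, chunkPixelH, BufferedImage.TYPE_INT_ARGB);
                        dirty[y][x] = true;                 // render everything once up front
                    }
            }

            void markDirty(int tileX, int tileY) {          // call when a height changes
                dirty[tileY / CHUNK_TILES][tileX / CHUNK_TILES] = true;
            }

            void drawChunk(int cx, int cy, Graphics2D screen, int screenX, int screenY, TileRenderer renderer) {
                if (dirty[cy][cx]) {                        // rebuild only when needed
                    Graphics2D g = chunks[cy][cx].createGraphics();
                    renderer.renderChunk(g, cx, cy);        // draws the chunk's block of polygons once
                    g.dispose();
                    dirty[cy][cx] = false;
                }
                screen.drawImage(chunks[cy][cx], screenX, screenY, null);
            }

            interface TileRenderer { void renderChunk(Graphics2D g, int chunkX, int chunkY); }
        }

    Moving the Polygon and Tile construction out of the render loop (building them once at load time) would also help, since the current code allocates new geometry every frame.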

    Read the article

  • In an Entity-Component-System Engine, How do I deal with groups of dependent entities?

    - by John Daniels
    After going over a few game design patterns, I have settled on Entity-Component-System (ES System) for my game engine. I've been reading articles (mainly T-Machine) and reviewing some source code, and I think I have enough to get started. There is just one basic idea I am struggling with: how do I deal with groups of entities that are dependent on each other? Let me use an example. Assume I am making a standard overhead shooter (think Jamestown) and I want to construct a "boss entity" with multiple distinct but connected parts. The breakdown might look something like this: Ship body: Movement, Rendering; Cannon: Position (locked relative to the ship body), Tracking/Fire at hero, Taking damage until disabled; Core: Position (locked relative to the ship body), Tracking/Fire at hero, Taking damage until disabled, Disabling (er... destroying) all other entities in the ship group. My goal is something that can be identified (and manipulated) as a distinct game element without having to rewrite subsystems from the ground up every time I want to build a new aggregate element. How do I implement this kind of design in an ES System? Do I implement some kind of parent-child entity relationship (entities can have children)? This seems to contradict the methodology that entities are just empty containers, and makes it feel more like OOP. Do I implement them as separate entities, with some kind of connecting component (BossComponent) and related system (BossSubSystem)? I can't help but think this will be hard to implement, since how components communicate seems to be a big bear trap. Do I implement them as one entity with a collection of components (ShipComponent, CannonComponent, CoreComponent)? This one seems to veer away from the ES System intent (the components here seem too much like heavyweight entities), but I'm new to this so I figured I would put it out there. Or do I implement them as something else I haven't mentioned? I know this can be implemented very easily in OOP, but my choice of ES over OOP is one I will stick with. If I need to break with pure ES theory to implement this design I will (it's not as if I haven't had to compromise pure design before), but I would prefer to do that for performance reasons rather than start with a bad design. For extra credit, consider the same design but where each of the "boss entities" is connected to a larger "BigBoss entity" made of a main body, a main core and 3 "boss entities". That would let me see a solution for at least 3 levels (grandparent-parent-child), which should be more than enough for me. Links to articles or example code would be appreciated. Thanks for your time.
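
    A common ES-friendly answer is the second option, kept deliberately small: leave the parts as separate entities and express the relationship as data, e.g. an Attachment component holding the parent entity's id plus a local offset, with one system that copies parent transforms to children each frame and a destruction system that cascades along the same links. Entities stay empty containers, and grandparent chains (BigBoss -> boss -> cannon) fall out of the same component. A minimal sketch with hypothetical names:

        // Hypothetical data-only component plus the system that interprets it.
        class Attachment {
            int parentId;            // entity this part is locked to
            float localX, localY;    // offset relative to the parent
            boolean dieWithParent;   // lets a destroy system cascade along the link
        }

        class AttachmentSystem {
            // positions: entityId -> {x, y};  attachments: entityId -> Attachment
            void update(java.util.Map<Integer, float[]> positions,
                        java.util.Map<Integer, Attachment> attachments) {
                for (java.util.Map.Entry<Integer, Attachment> e : attachments.entrySet()) {
                    float[] parent = positions.get(e.getValue().parentId);
                    float[] self = positions.get(e.getKey());
                    if (parent == null || self == null) continue;   // cleanup is another system's job
                    self[0] = parent[0] + e.getValue().localX;
                    self[1] = parent[1] + e.getValue().localY;
                }
            }
        }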

    Read the article

  • Implementing a FSM with ActionScript 2 without using classes?

    - by Up2u
    I have seen several references about A.I. and FSMs, but sadly I still can't understand the point of an FSM in AS 2.0. Is it a must to create a class for each state? I have a game project with an A.I. that has 3 states: distanceCheck, ChaseTarget, and Hit the target. It's an FPS-style game played via the mouse. I have created the A.I. successfully, but I want to convert it to the FSM approach. My first state is CheckDistanceState(), in which I look for the nearest target and trigger the function ChaseState(); there I call the Hit() function to destroy the enemy. The 3 functions I created are called in AI_cursor.onEnterFrame. Is there any way to implement an FSM without having to create a class? From what I've read before, you have to create a class. I prefer to write the code on frames in Flash, and I still don't understand how to use external classes.
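
    A class per state is not required: the lightweight version keeps the current state in a variable and dispatches on it in the per-frame handler, and since ActionScript treats functions as values the state variable can even hold the state function itself (currentState = checkDistance; then onEnterFrame calls currentState()). A sketch of the switch-style version, shown in Java but mapping one-to-one onto frame-script code:

        enum AIState { CHECK_DISTANCE, CHASE, HIT }

        class EnemyAI {
            AIState state = AIState.CHECK_DISTANCE;

            // Called once per frame (the equivalent of onEnterFrame).
            void update() {
                switch (state) {
                    case CHECK_DISTANCE:
                        if (nearestTargetInRange()) state = AIState.CHASE;
                        break;
                    case CHASE:
                        moveTowardTarget();
                        if (closeEnoughToHit()) state = AIState.HIT;
                        break;
                    case HIT:
                        damageTarget();
                        state = AIState.CHECK_DISTANCE;   // go back to scanning
                        break;
                }
            }

            // placeholders standing in for the existing game logic
            boolean nearestTargetInRange() { return false; }
            void moveTowardTarget()        {}
            boolean closeEnoughToHit()     { return false; }
            void damageTarget()            {}
        }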

    Read the article

  • OpenGL problem with FBO integer texture and color attachment

    - by Grieverheart
    In my simple renderer, I have 2 FBOs one that contains diffuse, normals, instance ID and depth in that order and one that I use store the ssao result. The textures I use for the first FBO are RGB8, RGBA16F, R32I and GL_DEPTH_COMPONENT32F for the depth. For the second FBO I use an R16F texture. My rendering process is to first render to everything I mentioned in the first FBO, then bind depth and normals textures for reading for the ssao pass and write to the second FBO. After that I bind the second FBO's texture for reading in my blur shader and bind the first FBO for writing. What I intend to do is to write the blurred ssao value to the alpha component of the Normals texture. Here are where the problems start. First of all, I use shading language 3.3, which my graphics card does support. I manage ouputs in my shaders using layout(location = #). Now, the normals texture should be bound to color attachment 1, but when I use 1, it seems to write to my diffuse texture which should be in color attachment 0. When I instead use layout(location = 0), it gets correctly written to my normals texture. Besides this, my instance ID texture also gets resets after running the blur shader which is weird because if I use a float texture and write to it instanceID / nInstances, the texture doesn't get reset after the blur shader has ran. Here is how I prepare my first FBO: bool CGBuffer::Init(unsigned int WindowWidth, unsigned int WindowHeight){ //Create FBO glGenFramebuffers(1, &m_fbo); glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo); //Create gbuffer and Depth Buffer Textures glGenTextures(GBUFF_NUM_TEXTURES, &m_textures[0]); glGenTextures(1, &m_depthTexture); //prepare gbuffer for(unsigned int i = 0; i < GBUFF_NUM_TEXTURES; i++){ glBindTexture(GL_TEXTURE_2D, m_textures[i]); if(i == GBUFF_TEXTURE_TYPE_NORMAL) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_DIFFUSE) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_ID) glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, WindowWidth, WindowHeight, 0, GL_RED_INTEGER, GL_INT, NULL); else{ std::cout << "Error in FBO initialization" << std::endl; return false; } glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); } //prepare depth buffer glBindTexture(GL_TEXTURE_2D, m_depthTexture); glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); GLenum DrawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2}; glDrawBuffers(GBUFF_NUM_TEXTURES, DrawBuffers); GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER); if(Status != GL_FRAMEBUFFER_COMPLETE){ std::cout << "FB error, status 0x" << std::hex << Status << 
std::endl; return false; } //Restore default framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); return true; } where I use an enum defined as, enum GBUFF_TEXTURE_TYPE{ GBUFF_TEXTURE_TYPE_DIFFUSE, GBUFF_TEXTURE_TYPE_NORMAL, GBUFF_TEXTURE_TYPE_ID, GBUFF_NUM_TEXTURES }; Am I missing some kind of restriction? Does the color attachment of the FBO's textures somehow gets reset i.e. I'm using a re-size function which re-sizes the textures of the FBO but should I perhaps call glFramebufferTexture2D again too? EDIT: Here is the shader in question: #version 330 core uniform sampler2D aoSampler; uniform vec2 TEXEL_SIZE; // x = 1/res x, y = 1/res y uniform bool use_blur; noperspective in vec2 TexCoord; layout(location = 0) out vec4 out_AO; void main(void){ if(use_blur){ float result = 0.0; for(int i = -1; i < 2; i++){ for(int j = -1; j < 2; j++){ vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j); result += texture(aoSampler, TexCoord + offset).r; // -0.004 because the texture seems to be a bit displaced } } out_AO = vec4(vec3(0.0), result / 9); } else out_AO = vec4(vec3(0.0), texture(aoSampler, TexCoord).r); }
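
    One thing worth checking, offered as a guess from the symptoms: layout(location = n) indexes into the array most recently passed to glDrawBuffers for the bound framebuffer rather than naming GL_COLOR_ATTACHMENTn directly, and any active draw buffer that the fragment shader does not write leaves that attachment's contents undefined, which would explain both the confusing location mapping and the instance ID texture being clobbered by the blur pass. A pass that should only touch the normals texture's alpha can set its own draw buffer and a colour mask; LWJGL-style calls are used in this sketch since the GL entry points are the same as in the C++:

        import static org.lwjgl.opengl.GL11.*;
        import static org.lwjgl.opengl.GL20.glDrawBuffers;
        import static org.lwjgl.opengl.GL30.*;

        class GBufferPasses {
            // Route fragment output 0 to the normals attachment and write only its alpha.
            static void beginBlurToNormalsAlpha(int gbufferFbo) {
                glBindFramebuffer(GL_DRAW_FRAMEBUFFER, gbufferFbo);
                glDrawBuffers(GL_COLOR_ATTACHMENT1);        // location 0 now maps to the normals texture
                glColorMask(false, false, false, true);     // preserve the stored normals, write alpha only
            }

            // Restore the three-buffer layout used by the geometry pass.
            static void endBlurToNormalsAlpha() {
                glColorMask(true, true, true, true);
                java.nio.IntBuffer bufs = org.lwjgl.BufferUtils.createIntBuffer(3);
                bufs.put(GL_COLOR_ATTACHMENT0).put(GL_COLOR_ATTACHMENT1).put(GL_COLOR_ATTACHMENT2).flip();
                glDrawBuffers(bufs);
            }
        }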

    Read the article

  • Music Rhythm Game Difficulty Question

    - by David Dimalanta
    I have a question about the music rhythm genre that came up while writing code for my game. Is it better to generate a random note pattern for every song played, or should there be a specific pattern depending on the song and the difficulty? I have observed that in Guitar Hero 3 on consoles, the difficulty is based on the number of strings used and the possible combos (e.g. a two-string combo). By comparison, in Tap Tap Revenge on Android and iPhone, the difficulty is based on the BPM (beats per minute), i.e. the number of targets that spawn and must be hit.

    Read the article

  • Level and Player objects - which should contain which?

    - by Thane Brimhall
    I've been working on several simple games, and I've always come to a decision point where I have to choose whether to have the Level object as an attribute of the Player class or the Player as an attribute of the Level class. I can see arguments for both: The Level should contain the player because it also contains every other entity. In fact it just makes sense this way: "John is in the room." It makes it a bit more difficult to move the player to a new level, however, because then each level has to pass its player object to an upcoming level. On the other hand, it makes programming sense to me to leave the player as the top-level object that is persistent between levels, and the environment changes because the player decides to change his level and location. It becomes very easy to change levels, because all I have to do is replace the level variable on the player. What's the most common practice here? Or better yet, is there a "right" way to architect this relationship?
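
    A common compromise is to let neither own the other outright: the Level owns everything that lives in it (holding the player only while they are inside), while a longer-lived session object owns the player and decides which level it is currently attached to, so changing levels touches exactly one place. A small sketch:

        class Player { /* persistent stats, inventory, ... */ }

        class Level {
            private Player player;                   // present only while the player is in this level
            void enter(Player p) { this.player = p; }
            Player leave()       { Player p = player; player = null; return p; }
        }

        class GameSession {                          // persists across levels
            private final Player player = new Player();
            private Level current;

            void changeLevel(Level next) {
                if (current != null) current.leave();
                next.enter(player);
                current = next;
            }
        }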

    Read the article

  • Unity 5.1 audio issues (no sound in back channels)

    - by N0xus
    I'm trying to bring surround sound audio into my project. I've set my computer up to run in 5.1, and when I play a 6-channel audio file through Windows Media Player (a test file that announces left speaker, right speaker, etc.) it works fine. However, when I run it through Unity, all I get is the front 3 channels. I've set the audio to 5.1 under Edit - Project Settings - Audio. I even set it in code with the following: void Start() { AudioSettings.speakerMode = AudioSpeakerMode.Mode5point1; } However, when I run a debug line of print(AudioSettings.driverCaps); it tells me that Unity is only playing in stereo. Is there something I'm still not doing? I should also add that I've run 10 different tests using the 3D audio pan and spread options, setting each to fully off, halfway, and fully on. Still the same results.

    Read the article

  • Map and fill texture using PBO (OpenGL 3.3)

    - by NtscCobalt
    I'm learning OpenGL 3.3 trying to do the following (as it is done in D3D)... Create Texture of Width, Height, Pixel Format Map texture memory Loop write pixels Unmap texture memory Set Texture Render Right now though it renders as if the entire texture is black. I can't find a reliable source for information on how to do this though. Almost every tutorial I've found just uses glTexSubImage2D and passes a pointer to memory. Here is basically what my code does... (In this case it is generating an 1-byte Alpha Only texture but it is rendering it as the red channel for debugging) GLuint pixelBufferID; glGenBuffers(1, &pixelBufferID); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID); glBufferData(GL_PIXEL_UNPACK_BUFFER, 512 * 512 * 1, nullptr, GL_STREAM_DRAW); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); GLuint textureID; glGenTextures(1, &textureID); glBindTexture(GL_TEXTURE_2D, textureID); glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0, GL_RED, GL_UNSIGNED_BYTE, nullptr); glBindTexture(GL_TEXTURE_2D, 0); glBindTexture(GL_TEXTURE_2D, textureID); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID); void *Memory = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY); // Memory copied here, I know this is valid because it is the same loop as in my working D3D version glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); And then here is the render loop. // This chunk left in for completeness glUseProgram(glProgramId); glBindVertexArray(glVertexArrayId); glBindBuffer(GL_ARRAY_BUFFER, glVertexBufferId); glEnableVertexAttribArray(0); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 20, 0); glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 20, 12); GLuint transformLocationID = glGetUniformLocation(3, 'transform'); glUniformMatrix4fv(transformLocationID , 1, true, somematrix) // Not sure if this is all I need to do glBindTexture(GL_TEXTURE_2D, pTex->glTextureId); GLuint textureLocationID = glGetUniformLocation(glProgramId, "texture"); glUniform1i(textureLocationID, 0); glDrawArrays(GL_TRIANGLES, Offset*3, Triangles*3); Vertex Shader #version 330 core in vec3 Position; in vec2 TexCoords; out vec2 TexOut; uniform mat4 transform; void main() { TexOut = TexCoords; gl_Position = vec4(Position, 1.0) * transform; } Pixel Shader #version 330 core uniform sampler2D texture; in vec2 TexCoords; out vec4 fragColor; void main() { // Output color fragColor.r = texture2D(texture, TexCoords).r; fragColor.g = 0.0f; fragColor.b = 0.0f; fragColor.a = 1.0; }
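
    The step that looks missing from the sequence above, hedged as a guess: mapping and unmapping GL_PIXEL_UNPACK_BUFFER only fills the buffer object; the texture is not updated until glTexSubImage2D (or glTexImage2D) is called while the PBO is still bound, at which point the data argument is interpreted as a byte offset into the PBO rather than a client pointer. Without that call the texture keeps the null data it was created with, which renders black. A sketch of the transfer, using LWJGL-style bindings since the GL entry points match the C++:

        import static org.lwjgl.opengl.GL11.*;
        import static org.lwjgl.opengl.GL15.glBindBuffer;
        import static org.lwjgl.opengl.GL21.GL_PIXEL_UNPACK_BUFFER;

        class PboUpload {
            // With GL_PIXEL_UNPACK_BUFFER bound, the last argument is a byte offset into the PBO.
            static void uploadFromPbo(int textureId, int pixelBufferId, int width, int height) {
                glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferId);
                glBindTexture(GL_TEXTURE_2D, textureId);
                glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, 0L);
                glBindTexture(GL_TEXTURE_2D, 0);
                glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
            }
        }

    For 1-byte formats whose row width is not a multiple of 4, glPixelStorei(GL_UNPACK_ALIGNMENT, 1) is also needed; 512 happens to be fine.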

    Read the article

  • How to proceed on the waypoint path?

    - by Alpha Carinae
    I'm using Dijkstra's algorithm to find the shortest path and I'm drawing this path on the screen. As the character object moves, the path updates itself (it shortens as the object approaches the target and gets longer as the object moves away from it). I tried to visualize my problem. In the beginning state, node 'A' is the target, the path is blue and the object is green. I draw the path from the object to the closest node, and this is where my problem occurs: because node 'D' is closer to the object than node 'C', something like this happens. So, how can I decide that the object has passed node 'D'? The path should look like this instead. One thing that comes to mind is to use distance variables between the two closest nodes on the route path (in this example, nodes 'C' and 'D'). If the object is approaching 'C' and moving away from 'D' at the same time, that means the character has passed 'D'. However, I think there are standardized and easier ways to solve this. What approach should I take?
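
    The usual trick is to steer toward (and measure against) only the current waypoint, not "the closest node on the path", and to advance to the next waypoint once the object is within a small radius of it or has crossed the plane through it perpendicular to the outgoing segment; that turns "has the object passed 'D'?" into a purely local test. A sketch:

        // Advances along the waypoint list: the current waypoint counts as reached when the
        // object is within a small radius of it, or is already beyond the plane through the
        // waypoint perpendicular to the segment leaving it.
        static int advanceWaypoint(float px, float py, float[][] path, int current, float reachRadius) {
            if (current >= path.length - 1) return current;
            float wx = path[current][0], wy = path[current][1];
            float nx = path[current + 1][0], ny = path[current + 1][1];

            float dx = px - wx, dy = py - wy;
            boolean withinRadius = dx * dx + dy * dy <= reachRadius * reachRadius;

            float sx = nx - wx, sy = ny - wy;              // direction of the outgoing segment
            boolean passedPlane = dx * sx + dy * sy > 0;   // object is already beyond the waypoint

            return (withinRadius || passedPlane) ? current + 1 : current;
        }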

    Read the article

  • Isometric Screen View to World View

    - by Sleepy Rhino
    I am having trouble working out the math to transform screen coordinates into grid coordinates. The code below is as far as I have got, but it is totally wrong; any help or resources to fix this issue would be great. I've had a complete mental block with this for some reason. private Point ScreenToIso(int mouseX, int mouseY) { int offsetX = WorldBuilder.STARTX; int offsetY = WorldBuilder.STARTY; Vector2 startV = new Vector2(offsetX, offsetY); int mapX = offsetX - mouseX; int mapY = offsetY - mouseY + (WorldBuilder.tileHeight / 2); mapY = -1 * (mapY / WorldBuilder.tileHeight); mapX = (mapX / WorldBuilder.tileHeight) + mapY; return new Point(mapX, mapY); }
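
    Assuming the map is drawn with the usual projection screenX = originX + (mapX - mapY) * tileWidth / 2 and screenY = originY + (mapX + mapY) * tileHeight / 2, the inverse is just that pair of equations solved for mapX and mapY. A sketch in the same style as the method above (WorldBuilder.tileWidth is assumed to exist alongside tileHeight):

        // Inverse of:  screenX = originX + (mapX - mapY) * tileWidth / 2
        //              screenY = originY + (mapX + mapY) * tileHeight / 2
        private Point screenToIso(int mouseX, int mouseY) {
            float dx = mouseX - WorldBuilder.STARTX;           // offsets relative to the map origin
            float dy = mouseY - WorldBuilder.STARTY;

            float a = dx / (WorldBuilder.tileWidth / 2.0f);    // = mapX - mapY
            float b = dy / (WorldBuilder.tileHeight / 2.0f);   // = mapX + mapY

            int mapX = (int) Math.floor((a + b) / 2.0f);
            int mapY = (int) Math.floor((b - a) / 2.0f);
            return new Point(mapX, mapY);
        }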

    Read the article

  • 2d shapes in XNA 4.0?

    - by Lautaro
    I have some experience of XNA but none of 3D programming. I have an idea I want to realize, but I have not decided whether to do it in 3D or 2D, and I'm not sure which will work best in XNA. I want a shape like a blob that can reshape depending on input. The morphing does not need to be very advanced: it could be a circle (2D) or globe (3D) that just has one point which moves slightly in a random direction. In ASP.NET I have done this through the 2D drawing classes, where I can make lines, circles, squares, etc. and then modify the points that make them up. But it seems to me that XNA does not have classes for making 2D shapes (can I get this confirmed?). If it did, that would be the quickest solution for me.

    Read the article

  • Permanent death in a MUD (think command line MMORPG)

    - by Luke Laupheimer
    I have considered writing a MUD for years, and I have a lot of ideas my friends think are really cool (and that's how I'd hope to get anywhere -- word of mouth). Thing is, there's one thing I have always wanted, that my friends and strangers hated: permanent death. Now, the emotional response I get to this is visceral revulsion, every time. I'm pretty sure I am the only person that wants this, or if I'm not, I'm a tiny minority. Now, the reason I want it is because I want the actions of the players to matter. Unlike a lot of other MUDs, which have a set of static city-states and social institutions etc, I want the things my players do, should I get any, to actually change the situation. And that includes killing people. If you kill someone, you didn't send them to time out, you killed them. What happens when you kill people? They go away. They don't come back in half an hour to smack talk you some more. They're gone. Forever. By making death non-permanent, you make death not matter. It would be similar if a climax to a character's arc is getting a speeding ticket. It cheapens it. Non-permanent death cheapens death. How can I: 1) Convince my players (and random people!) that this is actually a good idea?, or 2) Find some other way to make death and violence matter as much as it does in real life (except within the game, of course) sans character deletion? What alternatives are there out there?

    Read the article

  • Rendering oily/polluted water?

    - by Fraser
    Do any shader wizards out there have an idea of how to achieve an oily/polluted water effect, similar to this: Ideally, the water would not be uniformly oily; instead the oil could be generated from some source (such as a polluting drain from a chemical plant) and then diffuse throughout the water body. My thought for this part would be to keep an "oil map" as a 2D texture that determines the density of oil at each point on the water surface. It would diffuse and move naturally with the water velocity at that point (I have a wave-particle simulation for dynamic waves, and am already doing something similar for foam on the water surface). However, I'm not sure how physically correct that would be, since oil might not move at the same velocity as the water. And I have no idea how to make all those trippy colors :-). Thoughts?
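
    For the distribution part, the oil-map idea above is sound: each frame the density texture is fed by the sources, advected along the water velocity field, and given a little diffusion/decay, and the water shader then blends an oily look in proportion to the sampled density (the trippy thin-film colours are often faked by indexing a rainbow gradient by a fresnel-style view-angle term scaled by oil thickness). A minimal CPU-side sketch of one advection/decay step, with hypothetical grid names:

        // One explicit update step of a scalar oil-density grid: look backwards along the
        // velocity field to find where each cell's oil came from, then apply a slight decay.
        static float[][] stepOilMap(float[][] oil, float[][] velX, float[][] velY,
                                    float dt, float decay) {
            int w = oil.length, h = oil[0].length;
            float[][] next = new float[w][h];
            for (int x = 0; x < w; x++) {
                for (int y = 0; y < h; y++) {
                    int sx = Math.max(0, Math.min(w - 1, Math.round(x - velX[x][y] * dt)));
                    int sy = Math.max(0, Math.min(h - 1, Math.round(y - velY[x][y] * dt)));
                    next[x][y] = oil[sx][sy] * decay;
                }
            }
            return next;
        }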

    Read the article

  • In a 2D platform game, how to ensure the player moves smoothly over sloping ground?

    - by Kovsa
    See image: http://i41.tinypic.com/huis13.jpg I'm developing a physics engine for a 2D platform game. I'm using the separating axis theorem for collision detection. The ground surface is constructed from oriented bounding boxes, with the player as an axis aligned bounding box. (Specifically, I'm using the algorithm from the book "Realtime Collision Detection" which performs swept collision detection for OBBs using SAT). I'm using a fairly small (close to zero) restitution coefficient in the collision response, to ensure that the dynamic objects don't penetrate the environment. The engine mostly works fine, it's just that I'm concerned about some edge cases that could possibly occur. For example, in the diagram, A, B and C are the ground surface. The player is heading left along B towards A. It seems to me that due to inaccuracy, the player box could be slightly below the box B as it continues up and left. When it reaches A, therefore, the bottom left corner of the player might then collide with the right side of A, which would be undesirable (as the intention is for the player to move smoothly over the top of A). It seems like a similar problem could happen when the player is on top of box C, moving left towards B - the most extreme point of B could collide with the left side of the player, instead of the player's bottom left corner sliding up and left above B. Box2D seems to handle this problem by storing connectivity information for its edge shapes, but I'm not really sure how it uses this information to solve the problem, and after looking at the code I don't really grasp what it's doing. Any suggestions would be greatly appreciated.

    Read the article
