Search Results

  • Input Handling and Game loop

    - by Bob Coder
    So, I intercept the WM_KEYDOWN and other messages. Thing is, my game can't/shouldn't react to these messages just yet, since it might currently be drawing to the screen or in the middle of updating my game entities. So the idea is to keep a keyboard state and a mouse state, which are updated by the part of my code that intercepts the Windows messages. These states just keep track of which keys/buttons are currently pressed. Then, at the start of my game's update function, I access these keyboard and mouse states and my game reacts to the user input. Now, what is the best way to access these states? I assume that Windows messages can be sent at any time, so the keyboard/mouse states are constantly being edited. Accessing, say, a list of currently pressed keys in the keyboard state at the same time another part of the code is editing it would cause problems. Should I make a deep copy of a state and act on that? And how would I deal with the garbage generated, since this would happen every frame?
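
    A minimal Java sketch (hypothetical names, not tied to any engine) of the double-buffered state the question above is circling around: the message handler writes into one pre-allocated buffer, and at the start of update the game copies it into a second pre-allocated buffer under a lock, so the update loop reads a stable snapshot and no per-frame garbage is created.

        public class KeyboardState {
            private final boolean[] pending = new boolean[256]; // written by the message handler
            private final boolean[] current = new boolean[256]; // read by the game update
            private final Object lock = new Object();

            // Called from the window-message side (key down / key up).
            public void setKey(int keyCode, boolean down) {
                synchronized (lock) {
                    pending[keyCode] = down;
                }
            }

            // Called once at the start of the game's update: copy into the
            // pre-allocated buffer instead of building a new object each frame.
            public void snapshot() {
                synchronized (lock) {
                    System.arraycopy(pending, 0, current, 0, pending.length);
                }
            }

            public boolean isDown(int keyCode) {
                return current[keyCode];
            }
        }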

    Read the article

  • OpenGL Learning Material (that's up to date)

    - by Sauron
    So I'm sure there are topics on this, but a lot of them list older material. And the last book, http://www.amazon.com/OpenGL-SuperBible-Comprehensive-Tutorial-Reference/dp/0321712617/ref=sr_1_1?ie=UTF8&qid=1346116133&sr=8-1&keywords=opengl, REALLY REALLY disappointed me. I DO NOT want to use someone else's library to learn this stuff; that bothers me SOOO much. So I was hoping there was a newer book that goes into detail and doesn't use some sort of library "hiding" everything from you. Or should I just look at older material? If so... anything that's not "too" out of date. Terrain tutorials are a plus (that's kinda my "goal"). Thanks

    Read the article

  • Processing component pools problem - Entity Subsystem

    - by mani3xis
    Architecture description: I'm creating (designing) an entity system and I have run into many problems. I'm trying to keep it data-oriented and as efficient as possible. My components are POD structures (arrays of bytes, to be precise) allocated in homogeneous pools. Each pool has a ComponentDescriptor - it just contains the component name, field types and field names. An Entity is just a pointer to an array of components (where the address acts as the entity ID). An EntityPrototype contains an entity name and an array of component names. Finally, a Subsystem (System or Processor) works on the component pools.

    Actual problem: The problem is that some components depend on others (Model, Sprite, PhysicalBody and Animation depend on the Transform component), which causes a lot of problems when it comes to processing them. For example, let's define some entities using [T]ransform, [S]prite, [P]hysicalBody and [H]ealth:

        Tank:   Transform, Sprite, PhysicalBody
        BgTree: Transform, Sprite
        House:  Transform, Sprite, Health

    If I create 4 Tanks, 5 BgTrees and 2 Houses, my pools will look like:

        TTTTTTTTTTT // Transform pool
        SSSSSSSSSSS // Sprite pool
        PPPP        // PhysicalBody pool
        HH          // Health pool

    There is no way to process them using indices. I spent 3 days working on it and I still don't have any ideas. In previous designs the TransformComponent was bound to the entity - but it wasn't a good idea. Can you give me some advice on how to process them? Or maybe I should change the overall design? Maybe I should create pools of entities (pools of component pools) - but I guess that would be a nightmare for CPU caches. Thanks
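
    A hypothetical Java sketch of one common way to bridge this (not from the post): keep one pool per component type as described, give each entity a small table mapping component type to its index in that type's pool, and give dense pools a back-reference to the owning entity, so a system can walk one pool and still reach the sibling components it depends on. All names are illustrative.

        import java.util.HashMap;
        import java.util.Map;

        class Entity {
            // component type name -> index into that component type's pool
            final Map<String, Integer> componentIndex = new HashMap<>();
        }

        class World {
            // Transform pool, structure-of-arrays style
            float[] transformX = new float[1024];
            float[] transformY = new float[1024];

            // PhysicalBody pool plus a back-reference from pool slot to owner
            float[] bodyVelocityX = new float[256];
            Entity[] bodyOwner = new Entity[256];
            int bodyCount = 0;

            // Physics system: iterate the dense PhysicalBody pool and fetch the
            // matching Transform through the owner's index table.
            void stepPhysics(float dt) {
                for (int i = 0; i < bodyCount; i++) {
                    Entity owner = bodyOwner[i];
                    int t = owner.componentIndex.get("Transform");
                    transformX[t] += bodyVelocityX[i] * dt;
                }
            }
        }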

    Read the article

  • What causes Box2D revolute joints to separate?

    - by nbolton
    I have created a rag doll using dynamic bodies (rectangles) and simple revolute joints (with lower and upper angles). When my rag doll hits the ground (which is a static body) the bodies seem to fidget and the joints separate. It looks like the bodies are sticking to the ground, and the momentum of the rag doll pulls the joint apart (see screenshot below). I'm not sure if it's related, but I'm using the Badlogic GDX Java wrapper for Box2D. Here are some snippets of what I think is the most relevant code:

        private RevoluteJoint joinBodyParts(
            Body a, Body b, Vector2 anchor, float lowerAngle, float upperAngle) {
            RevoluteJointDef jointDef = new RevoluteJointDef();
            jointDef.initialize(a, b, a.getWorldPoint(anchor));
            jointDef.enableLimit = true;
            jointDef.lowerAngle = lowerAngle;
            jointDef.upperAngle = upperAngle;
            return (RevoluteJoint)world.createJoint(jointDef);
        }

        private Body createRectangleBodyPart(
            float x, float y, float width, float height) {
            PolygonShape shape = new PolygonShape();
            shape.setAsBox(width, height);
            BodyDef bodyDef = new BodyDef();
            bodyDef.type = BodyType.DynamicBody;
            bodyDef.position.y = y;
            bodyDef.position.x = x;
            Body body = world.createBody(bodyDef);
            FixtureDef fixtureDef = new FixtureDef();
            fixtureDef.shape = shape;
            fixtureDef.density = 10;
            fixtureDef.filter.groupIndex = -1;
            fixtureDef.filter.categoryBits = FILTER_BOY;
            fixtureDef.filter.maskBits = FILTER_STUFF | FILTER_WALL;
            body.createFixture(fixtureDef);
            shape.dispose();
            return body;
        }

    I've skipped the method for creating the head, as it's pretty much the same as the rectangle method (just using a circle shape). Those methods are used like so:

        torso = createRectangleBodyPart(x, y + 5, 0.25f, 1.5f);
        Body head = createRoundBodyPart(x, y + 7.4f, 1);
        Body leftLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1);
        Body rightLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1);
        Body leftLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1);
        Body rightLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1);
        Body leftArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f);
        Body rightArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f);
        joinBodyParts(torso, head, new Vector2(0, 1.6f), headAngle);
        leftLegTopJoint = joinBodyParts(torso, leftLegTop, new Vector2(0, -1.2f), 0.1f, legAngle);
        rightLegTopJoint = joinBodyParts(torso, rightLegTop, new Vector2(0, -1.2f), 0.1f, legAngle);
        leftLegBottomJoint = joinBodyParts(leftLegTop, leftLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0);
        rightLegBottomJoint = joinBodyParts(rightLegTop, rightLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0);
        leftArmJoint = joinBodyParts(torso, leftArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);
        rightArmJoint = joinBodyParts(torso, rightArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);

    Read the article

  • getting bone base and tip positions from a transform matrix?

    - by ddos
    I need this for a Blender3D script, but you don't really need to know Blender to answer this. I need to get bone base and tip positions from a transform matrix read from a file. The position of the base is the location part of the matrix; the length of the bone (the distance from base to tip) is the scale; the position of the tip is calculated from the scale (the distance from the bone base) and the rotation part of the matrix. So how do I calculate these? bone.base([x,y,z]) # x,y,z - floats bone.tip([x,y,z])
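
    A plain-Java sketch (not the Blender API) of one way to read those values off such a matrix, under two assumptions that are not stated in the post: the matrix is indexed as m[row][column] with the translation in the last column, and the bone points along its local Y axis with its length baked into the Y scale.

        class BoneEnds {
            static float[] boneBase(float[][] m) {
                // Translation (location) part of the matrix.
                return new float[] { m[0][3], m[1][3], m[2][3] };
            }

            static float[] boneTip(float[][] m) {
                // The matrix's Y column is the bone's local Y axis, already
                // rotated and scaled, so adding it to the base lands on the tip.
                return new float[] {
                    m[0][3] + m[0][1],
                    m[1][3] + m[1][1],
                    m[2][3] + m[2][1]
                };
            }
        }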

    Read the article

  • Character creation using spritesheets

    - by Patrick Developer
    I am currently creating a 2D fighting game and have implemented a system where, upon starting a new game, the player is presented with the option to create a custom character. I have a set of string arrays with values that correspond to hair, headgear, chest, lower body and shoes. When the player is done selecting items from the lists, a code is generated based off the index of each item (e.g. 01123), which is then used to assign the correct spritesheet to the player character. This has already been a lot of work, as I have had to create quite a few spritesheets based on the possible combinations, but I am now looking at a massive amount of work to implement each variation. I have started to look into setting layers for each item to reduce the workload, but I am also looking at having different stances for the character - depending on the currently equipped weapon - so this may present a lot of work either way. My question is: do I have any alternatives, or am I stuck creating masses of spritesheets to cover all combinations? As a side note, how much impact will assigning layered items have on overall performance?
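
    A hedged Java sketch of the layering alternative mentioned above: keep one sheet per item instead of one pre-combined sheet per code, and draw the selected items back to front each frame. The Sprite interface and frame parameter are stand-ins for whatever the engine actually provides.

        import java.util.List;

        interface Sprite {
            void drawFrame(float x, float y, int frame);
        }

        class LayeredCharacter {
            // Ordered back to front: lower body, chest, shoes, hair, headgear...
            private final List<Sprite> layers;

            LayeredCharacter(List<Sprite> selectedItems) {
                this.layers = selectedItems;
            }

            void draw(float x, float y, int frame) {
                // A handful of extra draw calls per character per frame, instead
                // of authoring a spritesheet for every possible combination.
                for (Sprite layer : layers) {
                    layer.drawFrame(x, y, frame);
                }
            }
        }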

    Read the article

  • Unexpected results for projection on to plane

    - by ravenspoint
    I want to use this projection matrix:

        GLfloat shadow[] = {
            -1, 0,  0,  0,
             1, 0, -1,  1,
             0, 0, -1,  0,
             0, 0,  0, -1
        };

    It should cast object shadows onto the y = 0 plane from a point light at 1,1,-1. I create a rectangle in the x = 0.5 plane:

        glBegin( GL_QUADS );
        glVertex3f( 0.5,0.2,-0.5);
        glVertex3f( 0.5,0.2,-1.5);
        glVertex3f( 0.5,0.5,-1.5);
        glVertex3f( 0.5,0.5,-0.5);
        glEnd();

    Now if I manually multiply these vertices with the matrix, I get:

        glBegin( GL_QUADS );
        glVertex3f( 0.375,0,-0.375);
        glVertex3f( 0.375,0,-1.625);
        glVertex3f( 0,0,-2);
        glVertex3f( 0,0,0);
        glEnd();

    Which produces a reasonable display (camera at 0,5,0 looking down the y axis). So rather than do the calculation manually, I should be able to use the OpenGL model transformation. I write this code:

        glMatrixMode (GL_MODELVIEW);
        GLfloat shadow[] = {
            -1, 0,  0,  0,
             1, 0, -1,  1,
             0, 0, -1,  0,
             0, 0,  0, -1
        };
        glLoadMatrixf( shadow );
        glBegin( GL_QUADS );
        glVertex3f( 0.5,0.2,-0.5);
        glVertex3f( 0.5,0.2,-1.5);
        glVertex3f( 0.5,0.5,-1.5);
        glVertex3f( 0.5,0.5,-0.5);
        glEnd();

    But this produces a blank screen! What am I doing wrong? Is there some debug mode where I can print out the transformed vertices, so I can see where they are ending up? Note: people have suggested that using glMultMatrixf() might make a difference. It doesn't. Replacing glLoadMatrixf( shadow ); with glLoadIdentity(); glMultMatrixf( shadow ); gives the identical result (of course!)
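
    As a quick sanity check of the manual multiplication above, here is a small stand-alone Java snippet (not part of the original code) that pushes the first quad vertex through the same 4x4 array, treating the vertex as a row vector and dividing by w; it reproduces the quoted (0.375, 0, -0.375) result.

        public class ShadowCheck {
            public static void main(String[] args) {
                float[] m = { -1,0,0,0,  1,0,-1,1,  0,0,-1,0,  0,0,0,-1 };
                float[] v = { 0.5f, 0.2f, -0.5f, 1f };
                float[] out = new float[4];
                for (int col = 0; col < 4; col++) {
                    for (int row = 0; row < 4; row++) {
                        out[col] += v[row] * m[row * 4 + col];
                    }
                }
                // Homogeneous divide by w; prints 0.375 0 -0.375 (up to the sign of zero).
                System.out.printf("%f %f %f%n",
                        out[0] / out[3], out[1] / out[3], out[2] / out[3]);
            }
        }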

    Read the article

  • How should I generate and store the boundaries of a cave?

    - by Bob Roberts
    I am making a small cave copter game (seriously, where did this type of game come from anyway?) and I am trying to figure out how to make and store the procedurally generated walls. I am thinking about creating the walls by randomly picking two points away from the center of the screen, at set intervals of distance. They will be no closer than the height of the helicopter and no further than the edge of the screen, weighted to prefer the same direction as the prior point so I end up with stalactites and stalagmites and not just noise. To store them, perhaps parallel arrays/lists: one for the distance from the center to the top of the screen and one for the distance from the center to the bottom. Am I way off base with my thinking? I just want the cave to be varied and challenging; I have just never worked with generating data like this. Edit: Woah, I just realized that my idea would let a player stay in the middle of the screen and win. That isn't right at all. So the very basis of how I was going to generate is wrong. Edit 2: I also realized I left out a very crucial point. Part of the mechanics of the game will let the player go backwards, therefore the data structure should be continuous.
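
    A minimal Java sketch of the parallel-list idea described above, filled by a bounded random walk so adjacent columns stay close together (constants and names are illustrative, not from the post):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        class CaveStrip {
            final List<Float> ceiling = new ArrayList<>();
            final List<Float> floor = new ArrayList<>();
            private final Random rnd = new Random();

            // Append 'count' new columns on the right. Keeping the old columns
            // around means the player can fly backwards over terrain that has
            // already been generated.
            void extend(int count, float screenHeight, float minGap) {
                float c = ceiling.isEmpty() ? screenHeight * 0.2f : ceiling.get(ceiling.size() - 1);
                float f = floor.isEmpty() ? screenHeight * 0.8f : floor.get(floor.size() - 1);
                for (int i = 0; i < count; i++) {
                    c += (rnd.nextFloat() - 0.5f) * 20f;        // drift the ceiling a little
                    f += (rnd.nextFloat() - 0.5f) * 20f;        // drift the floor a little
                    c = Math.max(0f, Math.min(c, f - minGap));  // keep a flyable gap open
                    f = Math.min(screenHeight, Math.max(f, c + minGap));
                    ceiling.add(c);
                    floor.add(f);
                }
            }
        }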

    Read the article

  • 3D Model not translating correctly (visually)

    - by ChocoMan
    In my first image, my model displays correctly: But when I move the model's position along the Z-axis (forward) I get this, yet the Y-axis doesn't change. And if I keep going, the model disappears into the ground: Any suggestions as to how I can get the model to translate properly visually? Here is how I'm calling the model and the terrain in draw():

        cameraPosition = new Vector3(camX, camY, camZ);

        // Copy any parent transforms.
        Matrix[] transforms = new Matrix[mShockwave.Bones.Count];
        mShockwave.CopyAbsoluteBoneTransformsTo(transforms);
        Matrix[] ttransforms = new Matrix[terrain.Bones.Count];
        terrain.CopyAbsoluteBoneTransformsTo(ttransforms);

        // Draw the model. A model can have multiple meshes, so loop.
        foreach (ModelMesh mesh in mShockwave.Meshes)
        {
            // This is where the mesh orientation is set, as well
            // as our camera and projection.
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = transforms[mesh.ParentBone.Index] *
                    Matrix.CreateRotationY(modelRotation) *
                    Matrix.CreateTranslation(modelPosition);
                // Looking at the model (picture shouldnt change other than rotation)
                effect.View = Matrix.CreateLookAt(cameraPosition, modelPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }

            // Draw the mesh, using the effects set above.
            prepare3d();
            mesh.Draw();
        }

        //Terrain test
        foreach (ModelMesh meshT in terrain.Meshes)
        {
            foreach (BasicEffect effect in meshT.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = ttransforms[meshT.ParentBone.Index] *
                    Matrix.CreateRotationY(0) *
                    Matrix.CreateTranslation(terrainPosition);
                // Looking at the model (picture shouldnt change other than rotation)
                effect.View = Matrix.CreateLookAt(cameraPosition, terrainPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }

            // Draw the mesh, using the effects set above.
            prepare3d();
            meshT.Draw();
            DrawText();
        }

        base.Draw(gameTime);
        }

    I'm suspecting that there may be something wrong with how I'm handling my camera. The model rotates fine on its Y-axis.

    Read the article

  • How to manage my model

    - by Christophe Debove
    I have in my model a list of classes: Player, NonPlayerCharacter, Monster, Item, NonMovableItem, etc. With AndEngine I have a list of sprites for each piece of my model. How can I manage the relationship between my model's classes and the graphical elements, and what degree of abstraction is recommended for my problem? One sprite for one model, one model for one sprite, or n to n? For example, if I do drag & drop, do I have to abstract away the Sprite class? Another example: is a map a list of sprites, or a list of elements of my model?
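
    One common arrangement, sketched here in Java purely as an illustration (the Sprite interface is a stand-in, not the AndEngine class): the model classes know nothing about rendering, and a thin view layer keeps a one-to-one map from model object to sprite and syncs positions once per frame.

        import java.util.HashMap;
        import java.util.Map;

        interface Sprite { void setPosition(float x, float y); }

        class Player { float x, y; /* game logic only, no rendering code */ }

        class View {
            private final Map<Player, Sprite> sprites = new HashMap<>();

            void bind(Player p, Sprite s) { sprites.put(p, s); }

            // Called once per frame, after the model has been updated.
            void sync() {
                for (Map.Entry<Player, Sprite> e : sprites.entrySet()) {
                    e.getValue().setPosition(e.getKey().x, e.getKey().y);
                }
            }
        }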

    Read the article

  • How to cull liquids

    - by Cyral
    I use culling on my tiles in my 2D tile-based platformer, so only the ones needed are drawn on screen. That's easy to do. However, my liquid tiles (water, lava, etc.) require an update method as well as the normal draw, which does checks against tiles, makes the liquid flow, etc. So how should I cull liquid updates in my game? Not culling is too slow, and culling only what is on screen looks awkward when you move. What do you think would be best for the player? Maybe some way of culling the visible tiles PLUS adding the width/height of the viewport, so tiles start updating far enough in front of the player that it doesn't look awkward when moving? (Not sure how to do this though; something with the max speed of the player and the width of the screen.)
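
    A hedged Java sketch of the padded-viewport idea floated above: update liquid tiles inside the visible area expanded by a margin, so the flow has already settled by the time those tiles scroll into view. Here the margin is one full viewport per side; tying it to the player's max speed is the other option the post mentions. All types and units (tile coordinates) are illustrative.

        class LiquidCuller {
            // camX/camY: top-left of the viewport in tile coordinates; viewW/viewH in tiles.
            void updateLiquids(Tile[][] tiles, int camX, int camY, int viewW, int viewH) {
                int x0 = Math.max(0, camX - viewW);                        // pad left
                int y0 = Math.max(0, camY - viewH);                        // pad top
                int x1 = Math.min(tiles.length - 1, camX + 2 * viewW);     // pad right
                int y1 = Math.min(tiles[0].length - 1, camY + 2 * viewH);  // pad bottom
                for (int x = x0; x <= x1; x++) {
                    for (int y = y0; y <= y1; y++) {
                        if (tiles[x][y].isLiquid()) {
                            tiles[x][y].updateFlow();
                        }
                    }
                }
            }
        }

        // Stand-in tile type so the sketch is self-contained.
        class Tile {
            boolean liquid;
            boolean isLiquid() { return liquid; }
            void updateFlow() { /* flow rules against neighbouring tiles go here */ }
        }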

    Read the article

  • Newton Game Dynamics: Making an object not affect another object

    - by Boreal
    I'm going to be using Newton in my networked action game with Mogre. There will be two "types" of physics object: global and local. Global objects will be kept in sync for everybody; these include the players, projectiles, and other gameplay-related objects. Local objects are purely for effect, like ragdolls, debris, and particles. Is there a way to make the global objects affect the local objects without actually getting affected themselves? I'd like debris to bounce off of a tank, but I don't want the tank to respond in any way.

    Read the article

  • How do I tackle top down RPG movement?

    - by WarmWaffles
    I have a game that I am writing in Java. It is a top-down RPG and I am trying to handle movement in the world. The world is largely procedural and I am having a difficult time working out how to handle character movement around the world and render the changes to the screen. I have the world loaded in blocks, which contain all the tiles. How do I tackle the character movement? I am stumped and can't figure out where I should go with it. EDIT: Well, I was being abstract about the issue at hand. Right now I can only think of either rigidly sticking everything into a 2D array and saving the block ID and the player offset in the Block, or just "floating" everything and moving between tiles, so to speak.
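
    A small Java sketch of the second option mentioned above ("float" everything): keep a continuous world position for the player and derive tile and block indices from it only when the world has to be queried or rendered. Tile and block sizes are illustrative.

        class PlayerMovement {
            static final int TILE_SIZE = 32;    // pixels per tile (illustrative)
            static final int BLOCK_TILES = 16;  // tiles per block (illustrative)

            float worldX, worldY;               // continuous position in pixels

            void move(float dx, float dy, float speed, float dt) {
                worldX += dx * speed * dt;
                worldY += dy * speed * dt;
            }

            int tileX()  { return (int) Math.floor(worldX / TILE_SIZE); }
            int tileY()  { return (int) Math.floor(worldY / TILE_SIZE); }
            int blockX() { return Math.floorDiv(tileX(), BLOCK_TILES); }
            int blockY() { return Math.floorDiv(tileY(), BLOCK_TILES); }
        }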

    Read the article

  • 2D Rectangle Collision Response with Multiple Rectangles

    - by Justin Skiles
    Similar to: Collision rectangle response I have a level made up of tiles where the edges of the level are made up of collidable rectangles. The player's collision box is represented by a rectangle as well. The player can move in 8 directions. The player's velocity is equal in X and Y directions and constant. Each update, I am checking the player's collision against all tiles that are a certain distance away. When the player collides with a rectangle, I am finding the intersection depth and resolving along the most shallow axis followed by the other axis. This resolution happens for both axes simultaneously. See below for two examples of situations where I am having trouble. Moving up-left against the left wall In the scenario below, the player is colliding with two tiles. The tile intersection depth is equal on both axes for the top tile and more shallow in the X axis for the middle tile. Because the player is moving up the wall, the player should slide in an upward direction along the wall. This works properly as long as the rectangle with the more shallow depth is evaluated first. If the equal intersection depth rectangle is evaluated first, there is a chance the player becomes stuck. Moving up-left against the top wall Here is an identical scenario with the exception that the collision is with the top wall. The same problem occurs at the corners when intersection depth is equal for both axes. I guess my overall question is: How can I ensure that collision response occurs on tiles that have non-equal intersection depth before tiles that have equal intersection depth in order to get around the weirdness that occurs at these corners. Sean's answer in the linked question was good, but his solution required having different velocity components in a certain direction. My situation has equal velocities, so there's no good way to tell which direction to resolve at corners. I hope I have made my explanation clear.
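
    A hedged Java sketch of the ordering idea raised in the question above (simplified rectangles, illustrative names): collect all overlapping tiles first, then resolve the ones whose X and Y intersection depths differ the most before the ambiguous equal-depth corner cases, re-checking the overlap because earlier pushes may already have separated the player.

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        class Rect {
            float x, y, w, h;

            Rect(float x, float y, float w, float h) {
                this.x = x; this.y = y; this.w = w; this.h = h;
            }

            // Overlap depth with another rectangle on each axis (0 when separated).
            float depthX(Rect o) { return Math.max(0f, Math.min(x + w, o.x + o.w) - Math.max(x, o.x)); }
            float depthY(Rect o) { return Math.max(0f, Math.min(y + h, o.y + o.h) - Math.max(y, o.y)); }
        }

        class CollisionResolver {
            void resolve(Rect player, List<Rect> tiles) {
                List<Rect> hits = new ArrayList<>();
                for (Rect t : tiles) {
                    if (player.depthX(t) > 0 && player.depthY(t) > 0) hits.add(t);
                }
                // Most decisive overlaps first: the larger the difference between
                // the axis depths, the clearer the push direction, so equal-depth
                // corner tiles are handled last.
                hits.sort(Comparator.comparingDouble(
                        (Rect t) -> -Math.abs(player.depthX(t) - player.depthY(t))));
                for (Rect t : hits) {
                    float dx = player.depthX(t), dy = player.depthY(t);
                    if (dx == 0 || dy == 0) continue; // an earlier push already separated us
                    if (dx < dy) player.x += (player.x < t.x) ? -dx : dx; // shallow axis is X
                    else         player.y += (player.y < t.y) ? -dy : dy; // shallow axis is Y
                }
            }
        }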

    Read the article

  • Large resolution differences

    - by Robin Betka
    I want to develop a game for multiple devices such as PC, Android or iOS. I want it to be in 1080p, but that means a massive scale-down for the smartphones. I know how to do that: just render everything to a 1080p render target and then render it on the screen smaller. But what should I do so that the scaling down doesn't look bad and blurry? I can't do it vector-based or anything, because the sprites simply need a specific size. Should I make the sprites power-of-two sized to get some nice mipmapping? And which other settings can I use? Or should I rather go with a lower resolution, but then have a slightly worse-looking PC version? Performance doesn't seem to be a problem for me, so it would be sad not to use 1080p because of other problems.

    Read the article

  • How can I solve the same problems a CB-architecture is trying to solve without using hacks? [on hold]

    - by Jefffrey
    A component based system's goal is to solve the problems that derives from inheritance: for example the fact that some parts of the code (that are called components) are reused by very different classes that, hypothetically, would lie in a very different branch of the inheritance tree. That's a very nice concept, but I've found out that CBS is often hard to accomplish without using ugly hacks. Implementations of this system are often far from clean. But I don't want to discuss this any further. My question is: how can I solve the same problems a CBS try to solve with a very clean interface? (possibly with examples, there are a lot of abstract talks about the "perfect" design already). Here's an example I was going for before realizing I was just reinventing inheritance again: class Human { public: Position position; Movement movement; Sprite sprite; // other human specific components }; class Zombie { Position position; Movement movement; Sprite sprite; // other zombie specific components }; After writing that I realized I needed an interface, otherwise I would have needed N containers for N different types of objects (or to use boost::variant to gather them all together). So I've thought of polymorphism (move what systems do in a CBS design into class specific functions): class Entity { public: virtual void on_event(Event) {} // not pure virtual on purpose virtual void on_update(World) {} virtual void on_draw(Window) {} }; class Human { private: Position position; Movement movement; Sprite sprite; public: virtual void on_event(Event) { ... } virtual void on_update(World) { ... } virtual void on_draw(Window) { ... } }; class Zombie { private: Position position; Movement movement; Sprite sprite; public: virtual void on_event(Event) { ... } virtual void on_update(World) { ... } virtual void on_draw(Window) { ... } }; Which was nice, except for the fact that now the outside world would not even be able to know where a Human is positioned (it does not have access to its position member). That would be useful to track the player position for collision detection or if on_update the Zombie would want to track down its nearest human to move towards him. So I added const Position& get_position() const; to both the Zombie and Human classes. And then I realized that both functionality were shared, so it should have gone to the common base class: Entity. Do you notice anything? Yes, with that methodology I would have a god Entity class full of common functionality (which is the thing I was trying to avoid in the first place).

    Read the article

  • RPG level-experience formula [closed]

    - by Comy
    I want to make an RPG and I would like advice on how to create my level-experience formula. I saw this formula http://rsdo.net/rsdonline/guides/Experience%20formula.html#PHP and I also created a formula myself, and I want to ask you which would be better.

        Level     RuneScape xp    My xp
        2         83              35
        3         174             84
        4         276             150
        5         388             238
        10        1,154           1,087
        100       14,391,160      311,017

    As you can see, at level 100 RuneScape's xp is very big, and my level 100 is equal to RuneScape's level 61. Is it better if the xp grows very fast at some point, or does it depend on how I make my game?
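
    For reference, a small Java sketch of the kind of curve the linked RuneScape page describes - total xp for a level as a running sum of floor(l + 300 * 2^(l/7)) divided by 4 - which reproduces the values quoted above (83, 174, 276, 388, 1,154, ...):

        public class XpCurve {
            // Total experience required to reach the given level.
            static long xpForLevel(int level) {
                double points = 0;
                for (int l = 1; l < level; l++) {
                    points += Math.floor(l + 300.0 * Math.pow(2.0, l / 7.0));
                }
                return (long) Math.floor(points / 4.0);
            }

            public static void main(String[] args) {
                for (int lvl : new int[] { 2, 3, 4, 5, 10, 100 }) {
                    System.out.println("Level " + lvl + " - " + xpForLevel(lvl) + " xp");
                }
            }
        }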

    Read the article

  • Using copyrighted sprites

    - by Zertalx
    I was thinking about making a Pac-Man clone. I know there is a similar question here, Using Copyrighted Images, but I know I can't use the original art from the game because it belongs to Namco. So if I design a character that has the shape of a sliced circle, it will look exactly like Pac-Man - maybe if I use green instead of yellow? Also, if the game plays like the original Pac-Man, is that wrong? I just want to make the game as a personal project and publish it on my site without getting in trouble.

    Read the article

  • Virtual screen size with libgdx and GLES 2

    - by David Saltares Márquez
    I've been trying to use a virtual screen size for my libgdx desktop/Android game. I'd like to always use a 16:9 aspect ratio, but with a virtual screen size so everything adapts automatically depending on the device size. This post illustrates the process pretty well, but my game crashes when camera.apply(Gdx.Gl10) is called. This is because I'm using GLES 2.0 (so that I don't have to use multiple-of-2 texture sizes). As stated in the OrthographicCamera docs, the apply method only works with GLES 1.0 and GLES 1.1. Is there another way of applying my GL transformation to the camera so I can use a virtual screen resolution? Having to resize everything manually is a total pain. Thanks a lot.
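
    A hedged libgdx sketch (Java) of one GLES2-friendly alternative: instead of camera.apply(), which is a fixed-function call, give an OrthographicCamera a fixed virtual size and feed its combined matrix to the SpriteBatch. The virtual width/height values here are illustrative, and this simple form stretches rather than letterboxes.

        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;

        public class VirtualScreen {
            static final float VIRTUAL_WIDTH = 1280f;
            static final float VIRTUAL_HEIGHT = 720f;

            OrthographicCamera camera;
            SpriteBatch batch;

            public void create() {
                // Everything drawn through this camera is positioned in virtual
                // units, regardless of the physical resolution of the device.
                camera = new OrthographicCamera();
                camera.setToOrtho(false, VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
                batch = new SpriteBatch();
            }

            public void render() {
                camera.update();
                batch.setProjectionMatrix(camera.combined);
                batch.begin();
                // draw sprites here using virtual coordinates
                batch.end();
            }
        }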

    Read the article

  • Keypress detection won't work after seemingly unrelated code change

    - by LukeZaz
    I'm trying to have the Enter key cause a new 'map' to generate for my game, but for whatever reason after implementing full-screen in it the input check won't work anymore. I tried removing the new code and only pressing one key at a time, but it still won't work. Here's the check code and the method it uses, along with the newMap method: public class Game1 : Microsoft.Xna.Framework.Game { // ... protected override void Update(GameTime gameTime) { // ... // Check if Enter was pressed - if so, generate a new map if (CheckInput(Keys.Enter, 1)) { blocks = newMap(map, blocks, console); } // ... } // Method: Checks if a key is/was pressed public bool CheckInput(Keys key, int checkType) { // Get current keyboard state KeyboardState newState = Keyboard.GetState(); bool retType = false; // Return type if (checkType == 0) { // Check Type: Is key currently down? if (newState.IsKeyDown(key)) { retType = true; } else { retType = false; } } else if (checkType == 1) { // Check Type: Was the key pressed? if (newState.IsKeyDown(key)) { if (!oldState.IsKeyDown(key)) { // Key was just pressed retType = true; } else { // Key was already pressed, return false retType = false; } } } // Save keyboard state oldState = newState; // Return result if (retType == true) { return true; } else { return false; } } // Method: Generate a new map public List<Block> newMap(Map map, List<Block> blockList, Console console) { // Create new map block coordinates List<Vector2> positions = new List<Vector2>(); positions = map.generateMap(console); // Clear list and reallocate memory previously used up by it blockList.Clear(); blockList.TrimExcess(); // Add new blocks to the list using positions created by generateMap() foreach (Vector2 pos in positions) { blockList.Add(new Block() { Position = pos, Texture = dirtTex }); } // Return modified list return blockList; } // ... 
} and the generateMap code: // Generate a list of Vector2 positions for blocks public List<Vector2> generateMap(Console console, int method = 0) { ScreenTileWidth = gDevice.Viewport.Width / 16; ScreenTileHeight = gDevice.Viewport.Height / 16; maxHeight = gDevice.Viewport.Height; List<Vector2> blockLocations = new List<Vector2>(); if (useScreenSize == true) { Width = ScreenTileWidth; Height = ScreenTileHeight; } else { maxHeight = Height; } int startHeight = -500; // For debugging purposes, the startHeight is set to an // hopefully-unreachable value - if it returns this, something is wrong // Methods of land generation /// <summary> /// Third version land generation /// Generates a base land height as the second version does /// but also generates a 'max change' value which determines how much /// the land can raise or lower by which it now does by a random amount /// during generation /// </summary> if (method == 0) { // Get the land height startHeight = rnd.Next(1, maxHeight); int maxChange = rnd.Next(1, 5); // Amount ground will raise/lower by int curHeight = startHeight; for (int w = 0; w < Width; w++) { // Run a chance to lower/raise ground level int changeBy = rnd.Next(1, maxChange); int doChange = rnd.Next(0, 3); if (doChange == 1 && !(curHeight <= (1 + maxChange))) { curHeight = curHeight - changeBy; } else if (doChange == 2 && !(curHeight >= (29 - maxChange))) { curHeight = curHeight + changeBy; } for (int h = curHeight; h < Height; h++) { // Location variables float x = w * 16; float y = h * 16; blockLocations.Add(new Vector2(x, y)); } } console.newMsg("[INFO] Cur, height change maximum: " + maxChange.ToString()); } /// <summary> /// Second version land generator /// Generates a solid mass of land starting at a random height /// derived from either screen height or provided height value /// </summary> else if (method == 1) { // Get the land height startHeight = rnd.Next(0, 30); for (int w = 0; w < Width; w++) { for (int h = startHeight; h < ScreenTileHeight; h++) { // Location variables float x = w * 16; float y = h * 16; // Add a tile at set location blockLocations.Add(new Vector2(x, y)); } } } /// <summary> /// First version land generator /// Generates land completely randomly either across screen or /// in a box set by Width and Height values /// </summary> else { // For each tile in the map... for (int w = 0; w < Width; w++) { for (int h = 0; h < Height; h++) { // Location variables float x = w * 16; float y = h * 16; // ...decide whether or not to place a tile... if (rnd.Next(0, 2) == 1) { // ...and if so, add a tile at that location. blockLocations.Add(new Vector2(x, y)); } } } } console.newMsg("[INFO] Cur, base height: " + startHeight.ToString()); return blockLocations; } I never touched any of the above code for this when it broke - changing keys won't seem to fix it. Despite this, I have camera movement set inside another Game1 method that uses WASD and works perfectly. All I did was add a few lines of code here: private int BackBufferWidth = 1280; // Added these variables private int BackBufferHeight = 800; public Game1() { graphics = new GraphicsDeviceManager(this); graphics.PreferredBackBufferWidth = BackBufferWidth; // and this graphics.PreferredBackBufferHeight = BackBufferHeight; // this Content.RootDirectory = "Content"; this.graphics.IsFullScreen = true; // and this } When I try adding a console line to be printed in the event the key is pressed, it seems that the If is never even triggered despite the correct key being pressed.

    Read the article

  • isometric background that covers the viewport [on hold]

    - by Richard
    The background image should cover the viewport. The technique I use now is a loop with an inner loop that draws diamond-shaped images on a canvas element, but the result looks like a rotated square. This is a nice example that covers the whole viewport. I have heard something about click-through maps, but what other approaches are there that are efficient on mobile devices with JavaScript? Any advice on grid design out there?
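
    One common way to fill a rectangular viewport with diamond tiles (sketched in Java rather than canvas code, constants illustrative) is a staggered layout: every other row is shifted by half a tile and rows overlap by half the tile height, so looping over enough rows and columns fills the whole rectangle instead of producing one big rotated square.

        class StaggeredIsoBackground {
            static final int TILE_W = 64, TILE_H = 32; // diamond footprint in pixels

            // Calls drawTile(x, y) once for every diamond needed to cover viewW x viewH.
            static void cover(int viewW, int viewH, TileDrawer drawer) {
                int cols = viewW / TILE_W + 2;        // +2 so the edges are overdrawn, not bare
                int rows = viewH / (TILE_H / 2) + 2;
                for (int row = 0; row < rows; row++) {
                    int offsetX = (row % 2) * (TILE_W / 2); // stagger odd rows
                    for (int col = 0; col < cols; col++) {
                        int x = col * TILE_W + offsetX - TILE_W / 2;
                        int y = row * (TILE_H / 2) - TILE_H / 2;
                        drawer.drawTile(x, y);
                    }
                }
            }

            interface TileDrawer { void drawTile(int x, int y); }
        }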

    Read the article

  • OpenGL profiling with AMD PerfStudio 2

    - by Aurus
    I'm rendering just a really small number of polygons for my UI, but I still tried to increase the FPS. In the end I removed redundant calls, which increased the FPS. I really don't want to lose FPS for nothing, so I keep looking for more improvements. The first thing I noticed is the "huge" time where no calls are made before SwapBuffer (the black one). Well, I know that OpenGL works asynchronously, so SwapBuffer has to wait until everything is done. But shouldn't PerfStudio mark this time as black too? Correct me if I am wrong. The second thing I noticed is that some glUniform2f calls just take longer (the brown ones). I mean, they should all upload two floats to the GPU; how can the time be so different from call to call? The program isn't even changed or anything like that. I also tried to look at other programs like gDebugger or CodeXL, but they often crashed and they show fewer statistics (only the number of calls, redundant calls, etc.). EDIT: I also realized that the draw calls have different durations too, which was obvious to me, but sometimes drawing more vertices is faster than drawing fewer vertices.

    Read the article

  • How do I render only part of a texture to a point sprite in OpenGL ES for Android?

    - by nbolton
    Using the libgdx framework, I've figured out how to render a texture to a point sprite. The problem is, it renders the entire texture to the point sprite, where I only want a small part of it (since it's an isometric tile image). Here's a snippet from some demo code I wrote...

        create() {
            renderer = new ImmediateModeRenderer();
            tiles = Gdx.graphics.newTexture(
                Gdx.files.internal("data/tiles2.png"),
                TextureFilter.MipMap, TextureFilter.Linear,
                TextureWrap.ClampToEdge, TextureWrap.ClampToEdge);
            Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1);
            Gdx.gl.glEnable(GL10.GL_TEXTURE_2D);
            Gdx.gl.glEnable(GL11.GL_POINT_SPRITE_OES);
            Gdx.gl11.glTexEnvi(
                GL11.GL_POINT_SPRITE_OES,
                GL11.GL_COORD_REPLACE_OES,
                GL11.GL_TRUE);
            Gdx.gl10.glPointSize(s);
            tiles.bind();
        }

        render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            renderer.begin(GL10.GL_POINTS);
            // render 3 point sprites at various 3d points
            renderer.vertex(-.1f, 0, -.1f);
            renderer.vertex(0, 0, 0);
            renderer.vertex(.1f, 0, .1f);
            // ... more vertices here if needed (one for each sprite) ...
            renderer.end();
        }

    Read the article

  • How to move a directional light according to the camera movement?

    - by Andrea Benedetti
    Given a light direction, how can I move it according to the camera movement, in a shader? Suppose an artist has set up a scene (e.g., in 3DSMax) with a mesh in the center of it and a directional light with a position and a target. From this position and target I've calculated the light direction. Now I want to use the same direction in my lighting equation but, obviously, I want this light to move correctly with the camera. Thanks.
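
    A hedged sketch of the usual approach when the lighting equation runs in view (camera) space: rotate the world-space light direction by the upper 3x3 of the view matrix every frame on the CPU and pass the result to the shader as a uniform. Plain Java with a row-major 4x4 array, purely illustrative.

        class LightTransform {
            // view: 4x4 row-major view matrix; dir: normalized world-space light direction.
            static float[] toViewSpace(float[][] view, float[] dir) {
                float[] out = new float[3];
                for (int row = 0; row < 3; row++) {
                    out[row] = view[row][0] * dir[0]
                             + view[row][1] * dir[1]
                             + view[row][2] * dir[2]; // rotation only, no translation
                }
                return out;
            }
        }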

    Read the article

  • What Type of Options should be on the Game Settings Menu?

    - by A13X
    I have seen a post about the main menu options here: UI: Main Menu options for mobile games. What options should be listed? What do users want to see? But I want to know what kind of options should/need to be available on the settings screen. I am making a rather simple 2D game for Android, but really I haven't found many aspects that warrant an options button or a check box besides turning the sound and music on/off. I was thinking graphics settings but then again, how many apps really need graphics settings besides immersive 3D ones?

    Read the article
