Search Results

Search found 25550 results on 1022 pages for 'mere development'.

Page 518/1022

  • Panning a 3d viewport in 2d direction with rotated camera

    - by Noob Game Developer
    I am using the code below to pan the viewport (ActionScript 3 using the Flare3D framework):

        _mainCamera.x -= Input3D.mouseXSpeed;
        _mainCamera.z += Input3D.mouseYSpeed;

    Input3D.mouse[X|Y]Speed gives the displacement of the mouse on the X/Y axis since the last frame. This works perfectly if my camera is not rotated. However, if I rotate the camera (x by 30, y by 60) and then pan, it goes wrong; or rather, it pans exactly as the code says, but not in the direction I want. I know I need to do some math to get the correct values, but I don't know what it is. Can someone help me achieve this?

    Update: I have an idea, but I am not sure how to do it:

    1. Get the mouse X/Y deltas (xd, yd).
    2. Get the current camera coordinates (pos3d).
    3. Convert them to screen coordinates (pos2d).
    4. Add the deltas to the screen coordinates (pos2d + (xd, yd)).
    5. Convert the result back to 3D coordinates.
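
    One way to think about the problem (a minimal sketch, not Flare3D code): rotate the screen-space mouse delta by the camera's yaw before applying it, so the pan always follows the screen axes instead of the world axes. The sketch below is Python rather than ActionScript 3; the function name, parameter names and sign conventions are assumptions for illustration, and it assumes only the rotation about the vertical axis matters for panning.

        import math

        def pan_camera(cam_x, cam_z, yaw_degrees, mouse_dx, mouse_dy):
            # Rotate the 2D mouse delta by the camera's yaw so that dragging right
            # always pans along the screen's right axis, whatever the camera faces.
            yaw = math.radians(yaw_degrees)
            right = (math.cos(yaw), -math.sin(yaw))   # screen-right projected onto the ground plane
            forward = (math.sin(yaw), math.cos(yaw))  # screen-up projected onto the ground plane
            cam_x -= mouse_dx * right[0] + mouse_dy * forward[0]
            cam_z -= mouse_dx * right[1] + mouse_dy * forward[1]
            return cam_x, cam_z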

    Read the article

  • How can I get textures on edge of walls like in Super Metroid and Aquaria?

    - by meds
    Games like Super Metroid and Aquaria render terrain so that the outward-facing edges show rock detail and such, while deeper behind them (i.e. underground) there is different detail or just black. I would like to do something similar using polygons. Terrain in my current level is created as a set of overlapping square boxes. I'm not sure whether this rendering approach will work with such a system for creating terrain, but if anyone has ideas I'd love to hear them. Otherwise, I'd like to know how I should rewrite the terrain rendering system so that it can actually draw terrain in this manner...

    Read the article

  • Implementing my Entity System. Questions about some problems I have found.

    - by Notbad
    Hi! During this week I have been deciding on the implementation of my entity system. It is a big topic, so it has been difficult to pick one option out of them all. This is my decision:

    1) I don't have an entity class; an entity is just an id.
    2) I have systems that contain a list of components (the list is homogeneous; I mean, the RenderSystem will just have RenderComponents).
    3) Components will be just data.
    4) There will be some kind of "entity prototypes" in a manager or something, from which we will create entity instances. Ideally they will define the types of components an entity has and its initialization data.
    5) Prototype code to create an entity (this is off the top of my head): int id = World::getInstance()->createEntity("entity template");
    6) This will notify all systems that a new entity has been created, and if the entity needs a component that a system handles, that system will add it to the entity.

    OK, those are the ideas. Let's see if someone can help with the problems:

    1) The main problem is the templates that are sent to the systems during creation to populate the entity with the needed components. What would you use: an OR'd int, or a list of strings?
    2) How do I initialize components once the entity has been created, and how do I store that in the template? I have thought about giving the template a virtual function that, after the entity is created and populated, gets the components and sets their initialization values.
    3) Don't you think this is a lot of work just for entity creation?

    Sorry for the long post; I have tried to lay out my ideas and findings so that others have a starting point, besides exposing my problems. Thanks in advance, Notbad.
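
    As one point of reference for problems 1 and 2 (a sketch only, in Python rather than C++, with every name made up for illustration): a template can describe its components with an OR'd bitmask and carry a dictionary of per-component initialization data, so the creation step both selects and initializes components in one pass.

        # Component type flags that can be OR'd together in a template.
        RENDER  = 1 << 0
        PHYSICS = 1 << 1
        AI      = 1 << 2

        class EntityTemplate:
            def __init__(self, component_mask, init_data):
                self.component_mask = component_mask  # which components the entity needs
                self.init_data = init_data            # per-component starting values, keyed by flag

        class World:
            def __init__(self, systems):
                self.systems = systems                # each system exposes component_flag and add_component
                self.templates = {}
                self.next_id = 0

            def create_entity(self, template_name):
                template = self.templates[template_name]
                entity_id = self.next_id
                self.next_id += 1
                # Each system adds (and initializes) its component if the template asks for it.
                for system in self.systems:
                    if template.component_mask & system.component_flag:
                        system.add_component(entity_id, template.init_data.get(system.component_flag, {}))
                return entity_id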

    Read the article

  • Information about rendering, batches, the graphical card, performance etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague, but it's hard to describe what I'm really looking for, so here goes. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU I'm clueless, due to my lack of technical background. I'm using XNA, so it'd be nice if the theory could be related to that. What I actually want to know is: what happens, and where (CPU or GPU), when you perform specific draw actions? What is a batch? What influence do effects, projections, etc. have? Is data persisted on the graphics card, or is it transferred over every step? When there's talk about bandwidth, does that mean the graphics card's internal bandwidth, or the pipeline from CPU to GPU? Note: I'm not actually looking for information on how the drawing process itself happens; that's the GPU's business. I'm interested in all the overhead that precedes it. I'd like to understand what's going on when I do action X, so I can adapt my architecture and practices to it. Any articles (possibly with code examples), information, links, or tutorials that give more insight into how to write better games are very much appreciated. Thanks :)

    Read the article

  • Lag compensation with networked 2D games

    - by Milo
    I want to make a 2D game that is basically a physics-driven sandbox / activity game. There is something I really do not understand, though. From research, it seems like updates from the server should only arrive about every 100 ms. I can see how this works for the local player, since the client can simulate physics concurrently and handle lag compensation through interpolation. What I do not understand is how this works for updates about other players. If clients are only notified of player positions every 100 ms, I do not see how that can work, because a lot can happen in 100 ms; a player could have changed direction twice in that time. I was wondering if anyone has some insight into this issue. Basically, how does this work for shooting and things like that? Thanks
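
    A common answer for the other players is entity interpolation: each client buffers the last few snapshots it received and renders remote players a fixed delay in the past, interpolating between the two snapshots that bracket that render time, so direction changes between updates still look smooth. Below is a rough Python sketch of the idea; the snapshot format and the 100 ms delay are assumptions for illustration.

        INTERP_DELAY = 0.1  # render remote players 100 ms in the past

        def interpolated_position(snapshots, now):
            """snapshots: list of (timestamp, (x, y)) pairs, sorted by time."""
            render_time = now - INTERP_DELAY
            # Find the two snapshots that bracket render_time and blend between them.
            for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
                if t0 <= render_time <= t1 and t1 > t0:
                    alpha = (render_time - t0) / (t1 - t0)
                    return (p0[0] + (p1[0] - p0[0]) * alpha,
                            p0[1] + (p1[1] - p0[1]) * alpha)
            # Not enough data yet (or we fell behind): use the latest known position.
            return snapshots[-1][1]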

    Read the article

  • Collision detection between a sprite and rectangle in canvas

    - by Andy
    I'm building a JavaScript + canvas game which is essentially a platformer. I have the player all set up and he's running, jumping and falling, but I'm having trouble with the collision detection between the player and the blocks (the blocks will essentially be the platforms that the player moves on). The blocks are stored in an array like this:

        var blockList = [[50, 400, 100, 100]];

    And drawn to the canvas using this:

        this.draw = function() {
            c.fillRect(blockList[0][0], blockList[0][1], 100, 100);
        }

    I'm checking for collisions using something along these lines in the player object:

        this.update = function() {
            // Check for collisions with blocks
            for (var i = 0; i < blockList.length; i++) {
                if ((player.xpos + 34) > blockList[i][0] && player.ypos > blockList[i][1]) {
                    player.xpos = blockList[i][0] - 28;
                    return false;
                }
            }
            // Other code to move the player based on keyboard input etc.
        }

    The idea is that if the player would collide with a block in the next game update (the game uses a main loop running at 60 Hz), the function returns false and exits, so the player won't move. Unfortunately, that only works when the player hits the left side of the block, and I can't work out how to make the player stop when it hits any side of the block. I have the properties player.xpos and player.ypos to help here.
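
    For reference, a full axis-aligned bounding box (AABB) test checks for overlap on both axes rather than just one edge; the collision is then usually resolved on the axis with the smaller overlap. A minimal Python sketch of the overlap test, assuming the player is also described by x, y, width and height:

        def aabb_overlap(ax, ay, aw, ah, bx, by, bw, bh):
            # True if rectangle A (x, y, width, height) overlaps rectangle B on both axes.
            return (ax < bx + bw and ax + aw > bx and
                    ay < by + bh and ay + ah > by)

        def hits_any_block(px, py, pw, ph, block_list):
            # block_list entries are [x, y, width, height], as in the question.
            return any(aabb_overlap(px, py, pw, ph, bx, by, bw, bh)
                       for bx, by, bw, bh in block_list)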

    Read the article

  • How can I get a 2D texture to rotate like a compass in XNA?

    - by IronGiraffe
    I'm working on a small maze puzzle game and I'm trying to add a compass to make it somewhat easier for the player to find their way around the maze. The problem is that I'm using XNA's Draw method to rotate the arrow, and I don't really know how to get it to rotate properly. What I need it to do is point towards the exit from the player's position, but I'm not sure how I can do that. Does anyone know how I can do this, or is there a better way to do it?
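
    One common approach is to take the vector from the player to the exit, turn it into an angle with atan2, and pass that angle as the rotation argument of the sprite draw call (the SpriteBatch.Draw overloads take a rotation in radians). A minimal Python sketch of the angle calculation; any extra offset to account for which way the arrow texture points by default is an assumption you would tune.

        import math

        def compass_rotation(player_x, player_y, exit_x, exit_y):
            # Angle (in radians) from the player to the exit, measured from the +X axis.
            return math.atan2(exit_y - player_y, exit_x - player_x)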

    Read the article

  • Is it possible to pass pygame data to a memory-mapped block?

    - by toozie21
    I am building a matrix out of addressable pixels, and it will be run by a Pi (over the Ethernet bus). The matrix will be 75 pixels wide and 20 pixels tall. As a side project, I thought it would be neat to run Pong on it. I've seen some Python-based Pong tutorials for the Pi, but the problem is that they want to pass the data out to a screen via the pygame.display functions. I have access to push pixel information through a memory-mapped block, so is there any way to do that with pygame instead of sending it out the video port? In case anyone is curious, this was the Pong tutorial I was looking at: Pong Tutorial
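
    One possible route (a sketch, with the path and layout of the shared block purely assumed): draw the game onto an off-screen pygame.Surface instead of the display, then copy the raw RGB bytes of each frame into the memory map with pygame.image.tostring, in place of the usual display flip.

        import mmap
        import pygame

        WIDTH, HEIGHT = 75, 20
        FRAME_BYTES = WIDTH * HEIGHT * 3          # RGB, one byte per channel

        pygame.init()
        surface = pygame.Surface((WIDTH, HEIGHT)) # draw the game here instead of a display window

        # The file path and frame layout of the shared block are assumptions for illustration.
        f = open("/dev/shm/led_matrix", "r+b")
        shared = mmap.mmap(f.fileno(), FRAME_BYTES)

        def push_frame():
            # Copy the off-screen surface's raw RGB bytes into the mapping,
            # taking the place of pygame.display.flip().
            shared.seek(0)
            shared.write(pygame.image.tostring(surface, "RGB"))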

    Read the article

  • Porting OpenGL 2.x to OpenGL 3.x

    - by user46759
    I'm trying to port the OpenCloth example to OpenGL 3.x. I've mostly done it on the shader side, but I'm not sure about this part:

        glEnableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboID);
        glVertexPointer(4, GL_FLOAT, 0, 0);

        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboTexID);
        glTexCoordPointer(2, GL_FLOAT, 0, 0);

        glEnableClientState(GL_NORMAL_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboNormID);
        glNormalPointer(GL_FLOAT, sizeof(float)*4, 0);

    Maybe glEnableVertexAttribArray somewhere? Any clue? Thanks.

    Edit: maybe something like this?

        glEnableVertexAttribArray(2); // or glEnableVertexAttribArray(positionIndex);
        glBindBuffer(GL_ARRAY_BUFFER, vboTexID);
        glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, 0);

        glEnableVertexAttribArray(3); // or glEnableVertexAttribArray(positionIndex);
        glBindBuffer(GL_ARRAY_BUFFER, vboNormID);
        glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 4, 0);
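
    For comparison, here is the general shape of the core-profile replacement, sketched with Python and PyOpenGL rather than C++: the fixed-function client states become one generic vertex attribute per buffer, bound inside a vertex array object. The attribute locations 0, 1 and 2 are assumptions and must match the layout qualifiers (or glBindAttribLocation calls) of the ported shaders.

        import ctypes
        from OpenGL.GL import *

        def setup_vertex_attribs(vbo_pos, vbo_tex, vbo_norm):
            vao = glGenVertexArrays(1)
            glBindVertexArray(vao)

            glBindBuffer(GL_ARRAY_BUFFER, vbo_pos)
            glEnableVertexAttribArray(0)                  # positions: vec4, tightly packed
            glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, ctypes.c_void_p(0))

            glBindBuffer(GL_ARRAY_BUFFER, vbo_tex)
            glEnableVertexAttribArray(1)                  # texture coordinates: vec2, tightly packed
            glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, ctypes.c_void_p(0))

            glBindBuffer(GL_ARRAY_BUFFER, vbo_norm)
            glEnableVertexAttribArray(2)                  # normals: vec3 with the original 16-byte stride
            glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 16, ctypes.c_void_p(0))

            return vao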

    Read the article

  • Where must I put .xnb files in a MonoGame project using VS2010?

    - by user23899
    Hello, my problem is described below. In the "The Content Pipeline" paragraph of http://blogs.msdn.com/b/bobfamiliar/archive/2012/08/07/windows-8-xna-and-monogame-part-3-code-migration-and-windows-8-feature-support.aspx#comments the author describes how to fix this in VS2012 by putting the .xnb files into the \AppX\Content folder. But I use VS2010 and the MonoGame templates for it, and there are no folders like this, so where must I put these assets to run the game correctly?

    Read the article

  • How to divide hex grid evenly among n players?

    - by manabreak
    I'm making a simple hex-based game, and I want the map to be divided evenly among the players. The map is created randomly, and I want the players to have about equal numbers of cells, in relatively small areas. For example, if there are four players and 80 cells on the map, each of the players would have about 20 cells (it doesn't have to be spot-on accurate). Also, each player should have no more than four adjacent cells; that is to say, when the map is generated, the biggest "chunks" cannot be more than four cells each. I know this is not always possible for two or three players (as this resembles the map-coloring problem), and I'm OK with other solutions for those cases (like generating maps that avoid the problem instead). But for four to eight players, how could I approach this problem? As always, any and all help is appreciated. :)
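
    One possible starting point (a sketch only, in Python, with a made-up neighbors() helper standing in for the hex adjacency): repeatedly give the player who currently owns the fewest cells a fresh seed and grow it into a chunk of at most four adjacent cells. This keeps the counts roughly balanced and caps chunk size by construction, although it does not stop two chunks of the same player from ending up adjacent, so a post-pass or smarter seed selection may still be needed.

        import random

        def divide_map(cells, neighbors, num_players, max_chunk=4):
            """cells: iterable of hex coordinates; neighbors(cell) -> adjacent cells."""
            owner = {}
            counts = [0] * num_players
            unassigned = set(cells)
            while unassigned:
                player = counts.index(min(counts))        # fewest cells gets the next chunk
                seed = random.choice(list(unassigned))
                chunk = [seed]
                unassigned.discard(seed)
                frontier = [seed]
                # Grow the chunk up to max_chunk cells using unassigned neighbours.
                while frontier and len(chunk) < max_chunk:
                    cell = frontier.pop()
                    for n in neighbors(cell):
                        if n in unassigned and len(chunk) < max_chunk:
                            unassigned.discard(n)
                            chunk.append(n)
                            frontier.append(n)
                for cell in chunk:
                    owner[cell] = player
                counts[player] += len(chunk)
            return owner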

    Read the article

  • Generating and rendering non-point-like particles on the GPU

    - by TravisG
    Specifically, I'm talking about particles as seen (for example) in the UE4 dev video here. They're not just points; they seem to have a nice shape that follows their movement. Is it possible to create these kinds of particles (efficiently) completely on the GPU (perhaps through something like motion?), or is the only (or most efficient) way to create a small particle texture and render small quads for each particle?

    Read the article

  • Java chunk negative number problem

    - by user1990950
    I've got a tile-based map which is divided into chunks. I have a method that puts tiles into this map, and with positive numbers it's working, but with negative numbers it won't. This is my setTile method:

        public static void setTile(int x, int y, Tile tile) {
            int chunkX = x / Chunk.CHUNK_SIZE, chunkY = y / Chunk.CHUNK_SIZE;
            IntPair intPair = new IntPair(chunkX, chunkY);
            world.put(intPair, new Chunk(chunkX, chunkY));
            world.get(intPair).setTile(x - chunkX * Chunk.CHUNK_SIZE, y - chunkY * Chunk.CHUNK_SIZE, tile);
        }

    This is the setTile method in the Chunk class (CHUNK_SIZE is a constant with the value 64):

        public void setTile(int x, int y, Tile t) {
            if (x >= 0 && x < CHUNK_SIZE && y >= 0 && y < CHUNK_SIZE)
                tiles[x][y] = t;
        }

    What's wrong with my code?
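
    A likely culprit: Java's / and the "x - chunkX * CHUNK_SIZE" remainder both truncate toward zero, so for x = -1 you get chunk 0 and local index -1 (which fails the bounds check) instead of chunk -1 and local index 63. Floor division gives the intended mapping (Math.floorDiv and Math.floorMod in Java 8+). The small Python sketch below shows that mapping, since Python's // and % already floor:

        CHUNK_SIZE = 64

        def chunk_and_local(x):
            # Floor division keeps negative world coordinates in the right chunk.
            chunk = x // CHUNK_SIZE      # Java equivalent: Math.floorDiv(x, CHUNK_SIZE)
            local = x % CHUNK_SIZE       # Java equivalent: Math.floorMod(x, CHUNK_SIZE)
            return chunk, local

        print(chunk_and_local(-1))   # (-1, 63)
        print(chunk_and_local(70))   # (1, 6)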

    Read the article

  • Soccer game with only national team names (country names): what about player names? [duplicate]

    - by nightkarnation
    This question already has an answer here: "Legal issues around using real players names and team emblems in an open source game" (2 answers). OK, this question hasn't been asked before; it's very similar to some, but here's the difference: I am making a soccer/football simulator game that only has national teams (with no official logos), just the country names and flags. Now, my doubt is the following: can I use real player names (of players who play or played for that national team)? From what I understand, if I use a player name linked to a club like Barcelona FC (not a national team), I need the rights from the club and the association that club is linked to, right? But if I am only linking the name to a country, I might just need the permission of the actual player (whose name I am using) and not any other associations, correct? Thanks a lot in advance! Cheers, Diego.

    Read the article

  • In what kind of variable type is the player position stored in an MMORPG such as WoW?

    - by jokoon
    I even heard J. Carmack talk briefly about it... How can software track a player's position so accurately in such a huge world, without loading screens between zones, and at multiplayer scale? How is the data formatted when it passes through the netcode? I can understand how vertices are stored in the graphics card's memory, but when it comes to synchronizing the multiplayer state, I can't imagine what works best.

    Read the article

  • Separating logic and data in browser game

    - by Tesserex
    I've been thinking this over for days and I'm still not sure what to do. I'm trying to refactor a combat system in PHP (...sorry.) Here's what exists so far: there are two (so far) types of entities that can participate in combat; let's just call them players and NPCs. Their data is already written pretty well. When involved in combat, these entities are wrapped with another object in the DB called a Combatant, which gives them information about the particular fight. They can be involved in multiple combats at once.

    I'm trying to write the logic engine for combat by having combatants injected into it, and I want to be able to mock everything for testing. In order to separate logic and data, I want to have two interfaces / base classes, one being ICombatantData and the other ICombatantLogic. The two implementers of data will be one for the real objects stored in the database and one for my mock objects. I'm now running into uncertainties with designing the logic side of things. I can have one implementer for each of players and NPCs, but then I have an issue: a combatant needs to be able to return the entity that it wraps. Should this getter method be part of logic or data? I feel strongly that it should be in data, because the logic part is used for executing combat and won't be available if someone is just looking up information about an upcoming fight. But the data classes only separate mock from DB, not player from NPC. If I try having two child classes of the DB data implementer, one for each entity type, then how do I architect that while keeping my mocks in the loop? Do I need some third interface like IEntityProvider that I inject into the data classes?

    Also, with some of the ideas I've been considering, I feel like I'll have to put checks in place to make sure you don't mismatch things, like making the logic for an NPC accidentally wrap the data for a player. Does that make any sense? Is that a situation that would even be possible if the architecture is correct, or would the right design prohibit it completely so I don't need to check for it? If someone could help me just lay out a class diagram or something for this, it would help me a lot. Thanks.

    Edit: Also useful to note, the mock data class doesn't really need the Entity, since I'll just be specifying all the parameters like combat stats directly instead. So maybe that will affect the correct design.
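
    For what it's worth, here is one possible arrangement of the pieces named in the question, sketched in Python instead of PHP (all class bodies are illustrative assumptions, not a prescribed design): the data interface owns the entity getter and the stats, the logic class only ever talks to the data interface, and DB-vs-mock (and player-vs-NPC, if their differences live in data) never leak into the logic side.

        from abc import ABC, abstractmethod

        class ICombatantData(ABC):
            """Data side: where a combatant's numbers come from (DB row or mock)."""
            @abstractmethod
            def get_entity(self):        # the wrapped player/NPC (None for mocks)
                ...
            @abstractmethod
            def get_stat(self, name):    # e.g. "attack", "defense"
                ...

        class DbCombatantData(ICombatantData):
            def __init__(self, entity, row):
                self._entity, self._row = entity, row
            def get_entity(self):
                return self._entity
            def get_stat(self, name):
                return self._row[name]

        class MockCombatantData(ICombatantData):
            def __init__(self, stats):
                self._stats = stats
            def get_entity(self):
                return None
            def get_stat(self, name):
                return self._stats[name]

        class CombatantLogic:
            """Logic side: behaviour during a fight, written only against ICombatantData."""
            def __init__(self, data: ICombatantData):
                self.data = data
            def attack(self, opponent: "CombatantLogic"):
                return max(0, self.data.get_stat("attack") - opponent.data.get_stat("defense"))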

    Read the article

  • Tunnels in pseudo 3D racing game

    - by Nicholas
    How would one go about doing tunnels in a pseudo-3D racing game? The main problem I have at the moment is perspective: I can't think of a way, beyond Z-sorting the sprites and tunnel coordinates, to display vehicles in front of the tunnel entrance and somehow block the display when they are out of sight. I would like my tunnels to be usable on flat roads, curves, hills and slopes. The tunnel entrance/exit is made up of three separate graphics (left, right and top), while inside the tunnel there is just one line graphic along the top (the idea being that it sits a set distance above the current vertical road position). As you can see from the picture, the vehicles are still being rendered while inside the tunnel. I've converted the Code Incomplete road system to GLBasic.

    Read the article

  • Bomb timer for an adventure game (Win32 C++) [on hold]

    - by user3491746
    I'm working on an adventure game in Win32 and OpenGL for my 2nd-year university project. I am pretty much finished with the game, but I'm stuck on how to program a timer which outputs hh:mm:ss but counts down, not up. I've made a clock which counts up using vector matrices and the segxseg matrix algorithm, but I cannot figure out how to make a clock (it can be simple, even text using wsprintf) that counts down in that format. Can anyone give me an example or some literature that I can read on how to do this? Please don't suggest that I use another environment; I've already been working on this game for 2 months and I'm pretty much done, so I'm in no position to switch over. Can anyone show me how I can take a shot at this component of my project? Thanks a bunch! Anything that I can get is appreciated.
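
    The usual recipe for the countdown itself is to store a fixed deadline, compute the remaining whole seconds each frame, and split them into hours, minutes and seconds with two divisions; the same arithmetic carries straight over to C++ (e.g. with std::chrono or GetTickCount, then wsprintf for the "%02d:%02d:%02d" formatting). A small Python sketch of the idea:

        import time

        def countdown_string(deadline):
            # Remaining whole seconds until the deadline, clamped at zero.
            remaining = max(0, int(deadline - time.time()))
            hours, rest = divmod(remaining, 3600)
            minutes, seconds = divmod(rest, 60)
            return "%02d:%02d:%02d" % (hours, minutes, seconds)

        # Example: a bomb timer set 90 minutes from now.
        bomb_deadline = time.time() + 90 * 60
        print(countdown_string(bomb_deadline))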

    Read the article

  • GLSL per pixel lighting with custom light type

    - by Justin
    OK, I am having a big problem here. I just got into GLSL yesterday, so the code will be terrible, I'm sure. Basically, I am attempting to make a light that can be passed into the fragment shader (for learning purposes). I have four input values: one for the position of the light, one for the color, one for the distance it can travel, and one for the intensity. I want to find the distance between the light and the fragment, then calculate the color from there. The code I have gives me a simply gorgeous ring of light that gets twisted and widened as the matrix is modified. I love the results, but it is not even close to what I am after. I want the light to move with all of the vertices, so it is always in the same place in relation to the objects. I can easily take it from there, but getting that to work seems to be impossible with my current structure. Can somebody give me a few pointers (pun not intended)?

    Vertex shader:

        attribute vec4 position;
        attribute vec4 color;
        attribute vec2 textureCoordinates;

        varying vec4 colorVarying;
        varying vec2 texturePosition;
        varying vec4 fposition;
        varying vec4 lightPosition;
        varying float lightDistance;
        varying float lightIntensity;
        varying vec4 lightColor;

        void main()
        {
            vec4 ECposition = gl_ModelViewMatrix * gl_Vertex;
            vec3 tnorm = normalize(vec3(gl_NormalMatrix * gl_Normal));

            fposition = ftransform();
            gl_Position = fposition;
            gl_TexCoord[0] = gl_MultiTexCoord0;
            fposition = ECposition;

            lightPosition = vec4(0.0, 0.0, 5.0, 0.0) * gl_ModelViewMatrix * gl_Vertex;
            lightDistance = 5.0;
            lightIntensity = 1.0;
            lightColor = vec4(0.2, 0.2, 0.2, 1.0);
        }

    Fragment shader:

        varying vec4 colorVarying;
        varying vec2 texturePosition;
        varying vec4 fposition;
        varying vec4 lightPosition;
        varying float lightDistance;
        varying float lightIntensity;
        varying vec4 lightColor;

        uniform sampler2D texture;

        void main()
        {
            float l_distance = sqrt((gl_FragCoord.x * lightPosition.x) +
                                    (gl_FragCoord.y * lightPosition.y) +
                                    (gl_FragCoord.z * lightPosition.z));
            float l_value = lightIntensity / (l_distance / lightDistance);
            vec4 l_color = vec4(l_value * lightColor.r, l_value * lightColor.g,
                                l_value * lightColor.b, l_value * lightColor.a);

            vec4 color = texture2D(texture, gl_TexCoord[0].st);

            gl_FragColor = l_color * color;
            //gl_FragColor = fposition;
        }
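
    One observation that may help, offered as a sketch rather than a fix for this exact shader: the l_distance expression above is a sum of products (a dot-product-like value), not a Euclidean distance, and gl_FragCoord is in window space while lightPosition is not, so the two are not directly comparable. The conventional point-light falloff measures the length of the vector from the fragment position to the light position, both expressed in the same space (for example eye space, i.e. the ECposition already computed in the vertex shader). The Python below shows that calculation with made-up values and a simple linear falloff.

        import math

        def point_light_value(frag_pos, light_pos, light_range, intensity):
            # Euclidean distance between fragment and light, same coordinate space.
            dx, dy, dz = (light_pos[i] - frag_pos[i] for i in range(3))
            distance = math.sqrt(dx * dx + dy * dy + dz * dz)
            # Linear falloff: full brightness at the light, zero at light_range.
            return intensity * max(0.0, 1.0 - distance / light_range)

        print(point_light_value((1.0, 0.0, 2.0), (0.0, 0.0, 5.0), 5.0, 1.0))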

    Read the article

  • Scene transitions

    - by Mars
    It's my first time working with actual scenes/states, aka DrawableGameComponents, which work separately from one another. I'm now wondering what the best way is to make transitions between them, and how to affect them from other scenes. Let's say I wanted to "push" one screen to the right, with another one coming in at the same time. Naturally I'd have to keep drawing both until the transition is complete, and I'd have to adjust the coordinates I'm drawing at while doing it. Is there a way around specifically handling this special case in every single scene? Or if I wanted to fade one into the other, the question basically stays the same: how would you do that without having to handle it in every single scene? While writing this I'm realizing it will be the same for all kinds of transitions.

    Maybe a central Draw method in the manager could be a solution, where parameters and effects are applied when necessary. But this wouldn't work if drawn objects have their own method and aren't drawn within the scene, or if an effect has to be applied to the whole scene. That means maybe scenes have to be drawn to their own render target? That way one call to the base class after the normal drawing could be enough to apply the effects while drawing it to the main render target. But I once heard there are problems when switching from target to target, back and forth. So is that even a viable option? As you can see, I have some basic ideas of how it might work... but nothing specific. I'd like to learn what the common way is to achieve such things: a general way to apply all kinds of transitions.
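
    To make the render-target idea from the last paragraph concrete, here is a small sketch of a manager that owns the transition while the scenes stay oblivious to it. It is Python-flavoured pseudocode: draw_to_buffer() and screen.blit() are stand-ins for whatever the framework provides (in XNA that would be drawing each scene into its own RenderTarget2D and then compositing the targets with SpriteBatch.Draw), and the "push" offset logic is just one example transition.

        class SceneManager:
            def __init__(self, current, incoming=None):
                self.current = current
                self.incoming = incoming
                self.progress = 0.0                  # 0.0 = transition start, 1.0 = done

            def update(self, dt, transition_time=0.5):
                if self.incoming is not None:
                    self.progress = min(1.0, self.progress + dt / transition_time)
                    if self.progress >= 1.0:
                        self.current, self.incoming = self.incoming, None

            def draw(self, screen, width):
                buf_a = self.current.draw_to_buffer()      # scene renders itself, unaware of transitions
                if self.incoming is None:
                    screen.blit(buf_a, (0, 0))
                    return
                buf_b = self.incoming.draw_to_buffer()
                offset = int(self.progress * width)        # "push": slide the old scene out, the new one in
                screen.blit(buf_a, (-offset, 0))
                screen.blit(buf_b, (width - offset, 0))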

    Read the article

  • Collisions on a complex 2D map

    - by waxx
    I'm currently thinking about the collision and map system that I want to use in my next game, and I'm kind of puzzled. Maps are going to be somewhat complex, with lots of irregularities, and thus tiling is out of the question. I thought about an editor where you'd draw rectangles on the map representing the areas that can be collided with, and then save such a "collision map" as black/white graphics only. Or maybe I should save the exact rectangle data, with their x/y/width/height, into some text file and go from there? What would you recommend? Thanks.

    Read the article

  • Material tiling and offset in Unity

    - by Simran kaur
    Ambiguity: What exactly is the difference between tiling a material and offsetting a material? What I need to do: I need the material to be repeated n times on the object, where I set the value of n via script. How do I do it? It seems to happen through tiling (tried via the Inspector), but then what is the difference between mainTextureOffset and setTextureOffset? What I tried: the following is the line of code I used to try to repeat the texture n times across the width of the object, but it does nothing significant that I can see.

    Read the article

  • SharedObject not saving the level progress

    - by user3536228
    I am making a Flash game in which I have a variable levelState that describes the current level the user has reached. I am using a SharedObject to save the progress, but it does not do so. First I declared the class-level variables:

        private var levelState:Number = 1;
        private var mySaveData:SharedObject = SharedObject.getLocal("levelSave");

    In the main function I check whether this is the first run of the game, like below:

        if (mySaveData.data.levelsComplete == null)
        {
            mySaveData.data.levelsComplete = 1;
        }

    And in the function where the winning condition is checked (so that levelState can be increased), I am using this SharedObject to hold the value of levelState:

        if (/*winning condition*/)
        {
            levelState++;
            mySaveData.data.levelsComplete = levelState;
            mySaveData.flush();
            setNewLevel(levelState);
        }

    But when I play the game, clear a level, and run the game again, it does not start from that level; it starts from the beginning.

    Read the article

  • Is white the best base color to start with when planning to shade sprites within Unity?

    - by SpartanDonut
    I'm looking into prototyping a game in Unity which will consist of solid square sprites / tiles. I figure I can represent different types of objects with different colors for each of the tiles in the game, and that I can import a single square sprite and shade it appropriately in Unity, as opposed to importing squares in many different colors. My experience with adjusting hue and saturation in Photoshop shows that white is not an easy color to change, as things that are white often stay white. My testing in Unity shows that I can change the "color" of a sprite to anything other than white and the sprite is seemingly shaded appropriately, despite what I would have expected given my Photoshop experience. Since white objects do seem to take on the appropriate color shading when changed within Unity, my gut tells me that this is the best base color to begin with, meaning that I can import a single white square sprite and simply adjust the color to represent different objects and object states. Is a white sprite actually the best base color to begin with, and why does something like this work in Unity when adjusting hue and saturation in Photoshop does not?
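
    The likely explanation is that a sprite tint is typically applied as a per-channel multiply of the texture color and the tint color, whereas Photoshop's hue/saturation adjustment is a different operation that leaves desaturated white pixels essentially untouched. Under a multiply, a white texel reproduces the tint exactly, which is why white is a convenient base. A tiny Python illustration of the multiply:

        def tint(texel, tint_color):
            # Per-channel multiply: white (1, 1, 1) becomes exactly the tint color,
            # darker texels become proportionally darker versions of it.
            return tuple(t * c for t, c in zip(texel, tint_color))

        print(tint((1.0, 1.0, 1.0), (0.2, 0.6, 1.0)))  # white -> (0.2, 0.6, 1.0)
        print(tint((0.5, 0.5, 0.5), (0.2, 0.6, 1.0)))  # grey  -> (0.1, 0.3, 0.5)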

    Read the article

  • Snapping an angle to the closest cardinal direction

    - by Josh E
    I'm developing a 2D sprite-based game, and I'm finding that I'm having trouble with making the sprites rotate correctly. In a nutshell, I've got sprite sheets for each of 5 directions (the other 3 come from just flipping the sprite horizontally), and I need to clamp the velocity/rotation of the sprite to one of those directions. My sprite class has a pre-computed list of radians corresponding to the cardinal directions, like this:

        protected readonly List<float> CardinalDirections = new List<float>
        {
            MathHelper.PiOver4,
            MathHelper.PiOver2,
            MathHelper.PiOver2 + MathHelper.PiOver4,
            MathHelper.Pi,
            -MathHelper.PiOver4,
            -MathHelper.PiOver2,
            -MathHelper.PiOver2 + -MathHelper.PiOver4,
            -MathHelper.Pi,
        };

    Here's the positional update code:

        if (velocity == Vector2.Zero)
            return;

        var rot = ((float)Math.Atan2(velocity.Y, velocity.X));
        TurretRotation = SnapPositionToGrid(rot);

        var snappedX = (float)Math.Cos(TurretRotation);
        var snappedY = (float)Math.Sin(TurretRotation);
        var rotVector = new Vector2(snappedX, snappedY);
        velocity *= rotVector;

        //...snip

        private float SnapPositionToGrid(float rotationToSnap)
        {
            if (rotationToSnap == 0)
                return 0.0f;

            var targetRotation = CardinalDirections.First(x => (x - rotationToSnap >= -0.01 && x - rotationToSnap <= 0.01));
            return (float)Math.Round(targetRotation, 3);
        }

    What am I doing wrong here? I know that the SnapPositionToGrid method is far from what it needs to be (the .First(..) call is on purpose, so that it throws when there is no match), but I have no idea how I would go about accomplishing this, and unfortunately Google hasn't helped too much either. Am I thinking about this the wrong way, or is the answer staring me in the face?
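
    For reference, the usual way to snap an arbitrary angle to the nearest of eight directions is to divide by the 45-degree step, round to the nearest integer, and multiply back, instead of searching a list with a tolerance (which fails whenever the angle is more than 0.01 radians away from every entry). A small Python sketch; the velocity values are just example numbers.

        import math

        STEP = math.pi / 4   # eight compass directions, 45 degrees apart

        def snap_to_cardinal(angle):
            # Round the angle to the nearest multiple of 45 degrees.
            return round(angle / STEP) * STEP

        vx, vy = 3.0, 1.0                              # example velocity
        snapped = snap_to_cardinal(math.atan2(vy, vx))
        direction = (math.cos(snapped), math.sin(snapped))
        print(math.degrees(snapped), direction)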

    Read the article
