Search Results

Search found 41789 results on 1672 pages for 'software development'.


  • Advice for programming a lobby for a network multiplayer game?

    - by Milo
    I'm working on learning network programming, using a simple card game as the project. The basic idea is:

    Players enter the lobby.
    Players see tables.
    Players sit at an empty seat.

    Once they sit, they do not need any information from the lobby; they see the card table, the data about the other players, and so forth. I've programmed the server portion for the game itself: the clients connect to my server object and the server then receives and sends messages; quite simple. The tricky concepts for me are:

    What's a good way to run many tables at the same time?
    What's a good way to keep the lobby consistently updated for each person in it (e.g. MSG_TABLE_FILLED, 22)?

    Ideally I'd like to have one server exe for all of this and to deal with multithreading as little as possible. I'm going to use the enet library. I was thinking that each time a game session starts, I push a new Game and map the client IPs to that table; then I just route messages from those clients to that Game. Since enet supports channels, I was thinking of using two channels per table: one for the game messages and one for in-game chat. Would something like this work? Does anyone have any advice or design ideas for a game with a lobby and many tables? Is there a usual way this is done that I'm overlooking? Any conceptual ideas or even C/C++ code examples would be very helpful. Thanks
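
    The routing idea described in the question can be sketched compactly. Below is a minimal C# sketch (C# rather than enet's C API, purely for illustration; LobbyServer, GameTable and the message handling are hypothetical names): one single-threaded dispatcher owns all tables and forwards each packet either to the sender's table or to the lobby logic.

        using System.Collections.Generic;

        // Hypothetical types: an int client id stands in for an enet peer,
        // and a string stands in for a decoded packet.
        class GameTable
        {
            public readonly HashSet<int> Seats = new HashSet<int>();
            public void Handle(int clientId, string message) { /* game rules */ }
        }

        class LobbyServer
        {
            readonly List<GameTable> tables = new List<GameTable>();
            readonly Dictionary<int, GameTable> tableByClient = new Dictionary<int, GameTable>();

            // One entry point for every packet; no extra threads needed.
            public void OnMessage(int clientId, string message)
            {
                GameTable table;
                if (tableByClient.TryGetValue(clientId, out table))
                    table.Handle(clientId, message);   // seated: table traffic only
                else
                    HandleLobby(clientId, message);    // in lobby: sit/list/chat
            }

            void HandleLobby(int clientId, string message)
            {
                // e.g. on a "sit" message: pick a table, seat the client, then
                // broadcast MSG_TABLE_FILLED to everyone still in the lobby.
            }
        }

    Keeping everything on one thread and treating each table as a plain object the dispatcher forwards to is usually enough for card-game traffic; channels then separate message kinds (game vs. chat), not tables.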


  • How can I implement collision detection for these tiles?

    - by Fiona
    I am wondering how this would be possible, if at all. In the image below:

    http://i.stack.imgur.com/d8cO3.png

    the light brown tiles are ground, while the dark brown is background, so the player can pass over those tiles. Here are the for loops that draw the level:

        float scale = 1f;
        for (row = 0; row < currentLevel.Rows; row++)
        {
            for (column = 0; column < currentLevel.Columns; column++)
            {
                Tile tile = (Tile)currentLevel.GetTile(row, column);
                if (tile == null)
                {
                    continue;
                }
                Texture2D texture = tile.Texture;
                spriteBatch.Draw(texture,
                    new Microsoft.Xna.Framework.Rectangle(
                        (int)(column * currentLevel.CellSize.X * scale),
                        (int)(row * currentLevel.CellSize.Y * scale),
                        (int)(currentLevel.CellSize.X * scale),
                        (int)(currentLevel.CellSize.Y * scale)),
                    Microsoft.Xna.Framework.Color.White);
            }
        }

    Here's what I have so far to determine where to create a Rectangle:

        Microsoft.Xna.Framework.Rectangle[,,,] groundBounds =
            new Microsoft.Xna.Framework.Rectangle[?, ?, ?, ?];
        int tileSize = 20;
        int screenSizeInTiles = 30;
        var tilePositions = new System.Drawing.Point[screenSizeInTiles, screenSizeInTiles];
        for (int x = 0; x < screenSizeInTiles; x++)
        {
            for (int y = 0; y < screenSizeInTiles; y++)
            {
                tilePositions[x, y] = new System.Drawing.Point(x * tileSize, y * tileSize);
                groundBounds[x, y, tileSize, tileSize] =
                    new Microsoft.Xna.Framework.Rectangle(x, y, 20, 20);
            }
        }

    First off, I'm not sure how to initialize the array groundBounds (I don't know how big to make it). Also, I'm not entirely sure how to go about adding information to groundBounds. I want to add a Rectangle for each tile in the level. Preferably I'd only make a Rectangle for those tiles accessible by the player, and not background tiles, but that's for a different day. FYI, the map was made with a freeware program called Realm Factory.
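
    A minimal sketch of one way through the two sticking points (assuming the grid is addressed by column and row, as in the drawing loop): since every cell has the same size, the bounds array only needs the two grid coordinates, and its dimensions fall out of the level size.

        // Sketch: one bounds rectangle per grid cell, indexed [column, row].
        int tileSize = 20;
        int screenSizeInTiles = 30;
        var groundBounds =
            new Microsoft.Xna.Framework.Rectangle[screenSizeInTiles, screenSizeInTiles];

        for (int x = 0; x < screenSizeInTiles; x++)
        {
            for (int y = 0; y < screenSizeInTiles; y++)
            {
                // The rectangle lives at the tile's pixel position, and the
                // size belongs in the width/height arguments, not the indices.
                groundBounds[x, y] = new Microsoft.Xna.Framework.Rectangle(
                    x * tileSize, y * tileSize, tileSize, tileSize);
            }
        }

    Collision checks then become groundBounds[column, row].Intersects(playerBounds) for the handful of cells around the player.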


  • Python velocity control of the player: why doesn't this work?

    - by Dominic Grenier
    I have the following code inside a while True loop:

        if abs(playerx) < MAXSPEED:
            if moveLeft:
                playerx -= 1
            if moveRight:
                playerx += 1
        if abs(playery) < MAXSPEED:
            if moveDown:
                playery += 1
            if moveUp:
                playery -= 1
        if moveLeft == False and abs(playerx) > 0:
            playerx += 1
        if moveRight == False and abs(playerx) > 0:
            playerx -= 1
        if moveUp == False and abs(playery) > 0:
            playery += 1
        if moveDown == False and abs(playery) > 0:
            playery -= 1
        player.x += playerx
        player.y += playery
        if player.left < 0 or player.right > 1000:
            player.x -= playerx
        if player.top < 0 or player.bottom > 600:
            player.y -= playery

    The intended result is that while an arrow key is pressed, playerx or playery increments by one on every loop until it reaches MAXSPEED, and then stays at MAXSPEED; and that when the player stops pressing that arrow key, his speed decreases until it reaches 0. To me, this code explicitly says that... But what actually happens is that playerx or playery keeps incrementing regardless of MAXSPEED, and the player continues moving even after the arrow key is released. I keep rereading but I'm completely baffled by this weird behavior. Any insights? Thanks.
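
    One reading of the code above (offered as a hypothesis, not a verdict): the last four "friction" ifs run every frame. While the left key is held, the moveRight == False branch also fires, so playerx changes by 2 per tick and sails past the MAXSPEED gate, which only guards the input branch; once the key is released, the moveLeft and moveRight branches both fire and cancel each other, so the speed never decays. A sketch of a symmetric fix, written in C# only for illustration:

        // Sketch: per-axis speed update. Friction pushes toward zero exactly
        // once per tick, and the cap is a clamp rather than a pre-check.
        static int ApplyAxis(int speed, bool negHeld, bool posHeld, int maxSpeed)
        {
            if (negHeld) speed -= 1;
            if (posHeld) speed += 1;

            if (!negHeld && !posHeld && speed != 0)
                speed -= System.Math.Sign(speed);   // decay toward zero

            return System.Math.Max(-maxSpeed, System.Math.Min(maxSpeed, speed));
        }

    The same shape works verbatim in Python; the point is that acceleration, friction, and clamping each happen in exactly one place.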


  • Multiple render targets and gamma correctness in Direct3D9

    - by Mario
    Let's say in a deferred renderer, when building your G-Buffer, you're going to render texture color, normals, depth and whatever else to your multiple render targets at once. Now if you want to have a gamma-correct rendering pipeline and you use regular sRGB textures as well as rendertargets, you'll need to apply some conversions along the way, because your filtering, sampling and calculations should happen in linear space, not sRGB space. Of course, you could store linear color in your textures and rendertargets, but this might very well introduce bad precision and banding issues.

    Reading from sRGB textures is easy: just set SRGBTexture = true; in your texture sampler in your HLSL effect code and the hardware does the sRGB-to-linear conversion for you. Writing to an sRGB rendertarget is theoretically easy, too: just set SRGBWriteEnable = true; in your effect pass in HLSL and your linear colors will be converted to sRGB space automatically.

    But how does this work with multiple rendertargets? I only want to apply these corrections to the color textures and rendertarget, not to the normals, depth, specularity or whatever else I'll be rendering to my G-Buffer. OK, so I just don't apply SRGBTexture = true; to my non-color textures, but with SRGBWriteEnable = true; I'll do a gamma correction to all the values I write out to my rendertargets, no matter what I actually store there. I found some info on gamma over at Microsoft:

    http://msdn.microsoft.com/en-us/library/windows/desktop/bb173460%28v=vs.85%29.aspx

    "For hardware that supports Multiple Render Targets (Direct3D 9) or Multiple-element Textures (Direct3D 9), only the first render target or element is written."

    If I understand correctly, SRGBWriteEnable should only be applied to the first rendertarget, but according to my tests it isn't, and is applied to all rendertargets instead. Now the only alternative seems to be to handle these corrections manually in my shader and only correct the actual color output, but I'm not totally sure that this won't have any negative impact on color correctness, e.g. if the GPU does any blending or filtering or multisampling after the linear-to-sRGB conversion...

    Do I even need gamma correction in this case, if I'm just writing texture color without lighting to my rendertarget? As far as I know, I do need it, because the texture filtering and mip sampling would otherwise happen in sRGB space, if I don't correct for it. Anyway, it'd be interesting to hear other people's solutions or thoughts about this.
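
    For the manual route the question ends on, the conversion itself is just the standard sRGB transfer pair, applied per channel to the color output only. A sketch (C# for illustration; shaders often substitute the cheaper pow(c, 1/2.2) approximation):

        using System;

        // Standard sRGB <-> linear transfer functions, c in [0, 1] per channel.
        static float LinearToSrgb(float c) =>
            c <= 0.0031308f ? 12.92f * c
                            : 1.055f * (float)Math.Pow(c, 1.0 / 2.4) - 0.055f;

        static float SrgbToLinear(float c) =>
            c <= 0.04045f ? c / 12.92f
                          : (float)Math.Pow((c + 0.055f) / 1.055f, 2.4);

    Encoding manually in the G-Buffer pass does mean any fixed-function blending on that target would then operate on sRGB values, which is one reason engines often keep the G-Buffer color linear (in a wider format) and defer the encode to the final resolve.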


  • Entity Component System, weapon

    - by Heorhiy
    I'm new to game programming and currently trying to understand the Entity Component System design by implementing a simple 2D game. (By ECS I mean the design described here, for example.) In my game I have different kinds of weapons: automatic, gun, grenade, etc. Each type of weapon has its own area of effect (a gun shoots along a straight line; a grenade explodes and covers some spherical area), damage impact, visual effect, bullet amount and delay between shots. So I don't completely understand how to implement weapons. Should a weapon be an entity, or should it be a component? And how should the player pick up a weapon, switch between different types of weapons, and so on?
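
    One common answer (a sketch of one design, not the only one): make the weapon's stats pure data in a component on the wielding entity, and let a weapon system interpret them. All names below are made up for illustration:

        // Sketch: the weapon is data; a WeaponSystem reads the active one,
        // applies the cooldown, and spawns bullet/explosion entities.
        enum AreaShape { Line, Sphere }

        class WeaponComponent // a Component in the question's sense
        {
            public AreaShape Shape;          // gun = Line, grenade = Sphere
            public float Damage;
            public float Radius;             // only used when Shape == Sphere
            public int Ammo;
            public float SecondsBetweenShots;
            public float Cooldown;           // counted down each frame
            public string FireEffect;        // id of the visual effect to play
        }

    Picking up a weapon is then adding (or enabling) one of these components from the pickup's data, and switching weapons is changing which of the carried WeaponComponents is active; nothing about the player entity itself has to change.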


  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: a box-like prefab which is imported from Blender automatically (I have the .blend file in the Assets folder), and a script with two public fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view):

        public Collider terrain;
        public GameObject aStarCellHighlightPrefab;

    This script is attached to the camera. The idea is to have the Blender prefab instantiated, have the terrain set as its parent, and then scale said prefab instance up. I first did it like this, in the Start() method:

        void Start()
        {
            cursorPositionOnTerrain = new RaycastHit();
            aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab,
                new Vector3(300, 300, 300), terrain.transform.rotation);
            aStarCellHighlight.name = "cellHighlight";
            aStarCellHighlight.transform.parent = terrain.transform;
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
        }

    and first thought it didn't work. However, later I noticed that it did in fact work, in the sense that the scale was applied right at the start, but then right after that the prefab instance came back to its initial scale. Putting the scale code in the Update() method fixes it, in the sense that now it stays scaled all the time:

        void Update()
        {
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
            //...
        }

    However, I've noticed that when I run this code, the object is first displayed without the scale being applied, and it takes about 5-10 seconds for the scale to happen. During this time everything works fine (like input and logging, etc.). The scene is very simple; it's not like it has a lot of stuff to load or anything (there's a ray cast from the camera onto the terrain, but that seems to happen without such delays). My (two-part) question is:

    Why doesn't it take the scale transform when I do it at the beginning, in the Start() method? Why do I have to keep scaling it in the Update() method?

    Why does it take so long for the scale to "apply/show up"?
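
    A hypothesis worth testing (an assumption, not a confirmed diagnosis): a .blend import often brings an Animation component along, and a playing animation re-applies its keyed transform every frame, which matches "scales in Start(), then snaps back, and Update() wins only by fighting it every frame". A C# sketch of a Start() that rules this out:

        using UnityEngine;

        public class CellHighlightSpawner : MonoBehaviour // hypothetical name
        {
            public Collider terrain;
            public GameObject aStarCellHighlightPrefab;

            void Start()
            {
                var go = (GameObject)Instantiate(aStarCellHighlightPrefab,
                    new Vector3(300, 300, 300), terrain.transform.rotation);
                go.name = "cellHighlight";

                // If the import carries animation, stop it from stomping the
                // transform; then a one-time scale in Start() can stick.
                foreach (var anim in go.GetComponentsInChildren<Animation>())
                    anim.enabled = false;

                go.transform.parent = terrain.transform;
                go.transform.localScale = new Vector3(100, 100, 100);
            }
        }

    If disabling the animation makes the Start() version behave, the clean fix is stripping the animation in the model's import settings rather than disabling it in code.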


  • Drawing multiple textures as a tilemap

    - by DocJones
    I am trying to draw a 2D game map and the objects on the map in a single pass. Here is my OpenGL initialization code:

        // Turn off unnecessary operations
        glDisable(GL_DEPTH_TEST);
        glDisable(GL_LIGHTING);
        glDisable(GL_CULL_FACE);
        glDisable(GL_STENCIL_TEST);
        glDisable(GL_DITHER);
        glEnable(GL_BLEND);
        glEnable(GL_TEXTURE_2D);
        // activate pointer to vertex & texture array
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    My drawing code is being called by an NSTimer every 1/60 s. Here is the drawing code of my world object:

        - (void) draw:(NSRect)rect withTimedDelta:(double)d {
            GLint *t;
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:@"blocks"]);
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
            for (int x=0; x<[_map getWidth]; x++) {
                for (int y=0; y<[_map getHeight]; y++) {
                    GLint v[] = {
                        16*x   , 16*y,
                        16*x+16, 16*y,
                        16*x+16, 16*y+16,
                        16*x   , 16*y+16
                    };
                    t=[_textureManager getBlockWithNumber:[_map getBlockAtX:x andY:y]];
                    glVertexPointer(2, GL_INT, 0, v);
                    glTexCoordPointer(2, GL_INT, 0, t);
                    glDrawArrays(GL_QUADS, 0, 4);
                }
            }
        }

    (_textureManager is a singleton, only loading a texture once!) The object drawing code is identical (except for the nested loops) in terms of OpenGL calls:

        - (void) drawWithTimedDelta:(double)d {
            GLint *t;
            GLint v[] = {
                16*xpos   , 16*ypos,
                16*xpos+16, 16*ypos,
                16*xpos+16, 16*ypos+16,
                16*xpos   , 16*ypos+16
            };
            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:_textureName]);
            t=[_textureManager getBlockWithNumber:12];
            glVertexPointer(2, GL_INT, 0, v);
            glTexCoordPointer(2, GL_INT, 0, t);
            glDrawArrays(GL_QUADS, 0, 4);
        }

    As soon as my central drawing routine calls the two drawing methods, the second call overlays the first one. I would expect the call to world.draw to draw the map and "stamp" the objects upon it. Debugging shows me that the first call is performed correctly (the world is being drawn), but the following call to all objects ONLY draws the objects; the rest of the scene is getting black. I think I need to blend the drawn textures, but I can't seem to figure out how. Any help is appreciated. Thanks.

    PS: Here is the github link to the project. It may not be in sync with my post here, but for some more in-depth analysis it may help.
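
    One detail stands out (a hypothesis to verify): glEnable(GL_BLEND) is called, but no blend function is ever chosen, and GL's default factors (GL_ONE, GL_ZERO) simply overwrite the destination, so the transparent texels of the second pass can stamp black over the already-drawn map. The usual one-line fix at init time, shown through OpenTK-style C# bindings purely for illustration (the raw C call is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)):

        using OpenTK.Graphics.OpenGL;

        // Sketch: standard alpha blending for sprites with transparency.
        GL.Enable(EnableCap.Blend);
        GL.BlendFunc(BlendingFactor.SrcAlpha, BlendingFactor.OneMinusSrcAlpha);

    This assumes the sprite textures actually carry an alpha channel; GL_REPLACE as the texture-env mode still takes the incoming alpha from the texture, so the blend factors above will respect it.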


  • How to make this game loop deterministic

    - by Lanaru
    I am using the following game loop for my Pacman clone:

        long prevTime = System.currentTimeMillis();
        while (running) {
            long curTime = System.currentTimeMillis();
            float frameTime = (curTime - prevTime) / 1000f;
            prevTime = curTime;

            while (frameTime > 0.0f) {
                final float deltaTime = Math.min(frameTime, TIME_STEP);
                update(deltaTime);
                frameTime -= deltaTime;
            }
            repaint();
        }

    The thing is, I don't always get the same ghost movement every time I run the game (their logic is deterministic), so it must be the game loop. I imagine it's due to the final float deltaTime = Math.min(frameTime, TIME_STEP); line. What's the best way of modifying this to perform the exact same way every time I run it? Also, are there any further improvements I can make?
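
    The suspicion is right: the Math.min line makes the last sub-step a variable-sized remainder, so the step sizes (and their floating-point rounding) depend on frame timing. The standard cure is a fixed-timestep accumulator, where update() only ever runs with the constant step and the remainder is carried into the next frame. A C# sketch of the pattern (Update/Repaint stand in for the question's methods):

        // Sketch: "fix your timestep". Every update uses the same dt, so runs
        // are reproducible; leftover time waits in the accumulator.
        const float TIME_STEP = 1f / 60f;
        float accumulator = 0f;
        long prevTime = Environment.TickCount64;

        while (running)
        {
            long curTime = Environment.TickCount64;
            accumulator += (curTime - prevTime) / 1000f;
            prevTime = curTime;

            while (accumulator >= TIME_STEP)
            {
                Update(TIME_STEP);          // never a partial step
                accumulator -= TIME_STEP;
            }
            Repaint();  // optionally interpolate by accumulator / TIME_STEP
        }

    For strict determinism the remaining requirements are the usual ones: a seeded random number generator and no game logic reading the wall clock directly.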


  • Compute directional light frustum from view frustum points and light direction

    - by Fabian
    I'm working on a friend's engine project, and my task is to construct a new frustum from the light direction that overlaps the view frustum and the possible shadow casters. The project already has a function that creates a frustum for this, but it is way too big and includes way too many casters whose shadows can't be seen in the view frustum. The only parameters of this function are the normalized light direction vector and a view class which lets me extract the 8 view frustum points in world space. I don't have any additional info about the scene. I have read some of the related questions here, but none seem to fit my problem very well, as they often just point to cascaded shadow maps. Sadly, I can't use DX or OpenGL functions directly, because this engine has a dedicated math library.

    From what I've read so far, the steps are: transform the view frustum points into light space, find the min/max x and y values (or sometimes the minima and maxima of all three axes), and create an AABB using the min/max vectors. But what comes after this step? How do I transform this new AABB back to world space? What I've done so far:

        CVector3 Points[8], MinLight = CVector3(FLT_MAX), MaxLight = CVector3(FLT_MAX);
        for(int i = 0; i < 8; ++i){
            Points[i] = Points[i] * WorldToShadowMapMatrix;
            MinLight = Math::Min(Points[i], MinLight);
            MaxLight = Math::Max(Points[i], MaxLight);
        }
        AABox box(MinLight, MaxLight);

    I don't think this is the right way to do it. The near plane probably has to extend in the direction of the light source to include potential shadow casters. I've read the Microsoft article about cascaded shadow maps, http://msdn.microsoft.com/en-us/library/windows/desktop/ee416307%28v=vs.85%29.aspx, which also includes some sample code, but they seem to use the scene's AABB to determine the near and far planes, which I can't, since I can't access this information from the function I'm working in. Could you guys please link some example code which shows the calculation of such a frustum? Thanks in advance!

    Additional question: is there a way to construct a WorldToFrustum matrix that represents the above transformation?
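
    A sketch of the usual construction, using System.Numerics purely for illustration. Two notes on the snippet above: MaxLight must start at -FLT_MAX (not FLT_MAX), and nothing needs to be transformed back to world space; the light view matrix plus an off-center orthographic projection built from the light-space AABB already is the light frustum. Extending the near side by some margin (a stand-in for the missing scene bounds) pulls in casters behind the view frustum:

        using System.Numerics;

        // Sketch: orthographic frustum for a directional light.
        // corners = the 8 world-space view-frustum points; lightDir normalized.
        static Matrix4x4 LightFrustum(Vector3[] corners, Vector3 lightDir,
                                      float casterMargin)
        {
            // Any stable basis looking along the light serves as "light view".
            Vector3 up = MathF.Abs(lightDir.Y) > 0.99f ? Vector3.UnitZ
                                                       : Vector3.UnitY;
            Matrix4x4 view = Matrix4x4.CreateLookAt(Vector3.Zero, lightDir, up);

            Vector3 min = new Vector3(float.MaxValue);
            Vector3 max = new Vector3(float.MinValue);    // not +FLT_MAX!
            foreach (Vector3 c in corners)
            {
                Vector3 p = Vector3.Transform(c, view);   // to light space
                min = Vector3.Min(min, p);
                max = Vector3.Max(max, p);
            }

            // Right-handed: visible z is negative; widen near toward the light
            // so off-screen casters still land in the shadow map.
            Matrix4x4 proj = Matrix4x4.CreateOrthographicOffCenter(
                min.X, max.X, min.Y, max.Y, -max.Z - casterMargin, -min.Z);
            return view * proj;   // this is the WorldToFrustum matrix asked for
        }

    Handedness and matrix order follow System.Numerics conventions (row vectors); the engine's math library may flip both, but the recipe is the same.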


  • Simultaneous Animation for a GameObject - Unity3D

    - by Fahim Ali Zain
    It's my second week with Unity. I am doing a 2D game and I have a small GameObject which should change its sprite while also following a definite path defined in animation curves. I did both of them in separate .anim files: since the transform animation had many keyframes, I thought it wouldn't be good to put the two sprite keyframes repeatedly alongside the transform keyframes. But the problem is, I can't get both of them working together at the same time. I don't want any blending, because the animation is timed well already. Also, I tried deleting the sprite-change animation and changing the SpriteRenderer.Sprite property from a script, under Update(), but it works only when the Animator component is disabled in the GameObject. Any solutions? :)


  • How to make a ball fall faster on a ramp? Unity3D/C#

    - by Timothy Williams
    So, I'm making a ball game where you pick up the ball, drop it on a ramp, and it flies off into blocks. The only problem right now is that the ball falls at a normal speed, then lightly falls off, not nearly fast enough to get over the wall and hit the blocks. Is there any way to make the ball go faster down the ramp? Maybe even make it go faster depending on what height you dropped it from (e.g. if you hold it way above the ramp and drop it, it will drop faster than if you dropped it right above the ramp). Thanks.
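
    Assuming Unity's standard physics here (a sketch, with made-up tuning numbers): the usual levers are raising gravity globally or adding extra downward acceleration to the ball itself. Drop height already affects speed on its own, since impact speed grows with the square root of the fall height.

        using UnityEngine;

        public class FastFallBall : MonoBehaviour // hypothetical helper script
        {
            public float extraGravity = 20f; // tune to taste

            void FixedUpdate()
            {
                // Constant extra pull; ForceMode.Acceleration ignores mass,
                // so it behaves exactly like stronger gravity for this ball.
                GetComponent<Rigidbody>().AddForce(
                    Vector3.down * extraGravity, ForceMode.Acceleration);
            }
        }

    The global alternative is a one-liner, Physics.gravity = new Vector3(0, -20f, 0), at the cost of affecting every rigidbody in the scene.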


  • How to configure a Firebird Database to run in memory

    - by Robert
    I'm running a piece of software called Fishbowl Inventory, which runs on a Firebird database (Windows Server 2003). At this time the Fishbowl software runs extremely slowly when more than one user accesses it. I'm thinking I may be able to speed up the application by forcing the database to run "in memory". However, I cannot find documentation on how to do this. Any help would be greatly appreciated. Thank you in advance. Robert


  • Circular Bullet Spread not Even

    - by SoulBeaver
    I'm creating a bullet shooter much in the style of Touhou. Right now I want to have a very simple circular shot fired from the enemy. See this picture:

    As you can see, the spacing is very uneven, which isn't very good if you want to survive. The code I'm using is this:

        private function shoot() : void
        {
            const BULLETS_PER_WAVE : int = 72;
            var interval : Number = BULLETS_PER_WAVE / 360;

            for (var i : int = 0; i < BULLETS_PER_WAVE; ++i)
            {
                var xSpeed : Number = GameConstants.BULLET_NORMAL_SPEED_X * Math.sin(i * interval);
                var ySpeed : Number = GameConstants.BULLET_NORMAL_SPEED_Y * Math.cos(i * interval);
                BulletFactory.createNormalBullet(bulletColor_, alice_.center, xSpeed, ySpeed);
            }
            canShoot_ = false;
            cooldownTimer_.start();
        }

    I imagine my mistake is in the sin/cos functions, but I'm not entirely sure what's wrong.
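
    The hunch about sin/cos is close (a reading, not a certainty): Math.sin and Math.cos take radians, and BULLETS_PER_WAVE / 360 = 0.2 is neither a degree step nor a fraction of a full circle, so 72 bullets spaced 0.2 rad apart sweep about 2.3 full circles and land unevenly. The step should be 2*PI / BULLETS_PER_WAVE. A C# sketch of the corrected spread (SpawnBullet is a hypothetical stand-in for BulletFactory):

        // Sketch: evenly spaced circular volley.
        const int bulletsPerWave = 72;
        const float speed = 5f;   // placeholder for the game's bullet speed

        for (int i = 0; i < bulletsPerWave; ++i)
        {
            float angle = i * (2f * MathF.PI / bulletsPerWave);
            SpawnBullet(speed * MathF.Sin(angle), speed * MathF.Cos(angle));
        }

    One more subtlety: using different X and Y speed constants, as in the original, stretches the circle into an ellipse; for a true circle both axes need the same magnitude.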


  • How to optimise AndEngine's PathModifier (with singleton or pool)?

    - by Casla
    I am trying to build a game where the character finds and follows a new path when a new destination is issued by the player, kind of like how units in RTS games work. This is done on a TMX map, and I am using the A* path finder utilities in AndEngine to do it. David helped me with that in "How can I change the path a sprite is following in real time?".

    At the moment, every time a new path is issued, I have to abandon the existing PathModifier and Path instances and create new ones, and from what I have read so far, creating new objects when you could re-use existing ones is a big waste for mobile applications. This is how I have coded it at the moment:

        private void loadPathFound() {
            if (mAStarPath != null) {
                modifierPath = new org.andengine.entity.modifier.PathModifier.Path(mAStarPath.getLength());
                /* replace the first node in the path with the player's current position */
                modifierPath.to(player.convertLocalToSceneCoordinates(12, 31)[Constants.VERTEX_INDEX_X] - 12,
                                player.convertLocalToSceneCoordinates(12, 31)[Constants.VERTEX_INDEX_Y] - 31);
                for (int i = 1; i < mAStarPath.getLength(); i++) {
                    modifierPath.to(mAStarPath.getX(i) * TILE_WIDTH, mAStarPath.getY(i) * TILE_HEIGHT);
                    /* passing in a duration that depends on the length of the path, so that the
                       animation has a constant duration for every step */
                    player.registerEntityModifier(new PathModifier(modifierPath.getLength() / 100,
                            modifierPath, null, mIPathModifierListener));
                }
            }
        }

    The ideal implementation would be to always have just one PathModifier object and just reset the destination of the path. But I don't know how you can apply the singleton pattern to AndEngine's PathModifier; there is no method to reset the attributes of the Path or the PathModifier. So without rewriting the PathModifier and the Path class, or using reflection, is there any other way to implement a singleton PathModifier? Thanks for your help.


  • Impulse-based jumping

    - by Mutoh
    There's one thing that has been puzzling me, and that is how to implement a 'faux-impulsed' jump in a platformer. If you don't know what I'm talking about, then think of the jumps of Mario, Kirby, and Quote from Cave Story. What do they have in common? Well, the height of your jump is determined by how long you keep the jump button pressed. Knowing that these characters' 'impulses' are built not before their jump, as in actual physics, but rather while in mid-air - that is, you can very well lift your finger midway to the max height and the jump will stop, even if with deceleration between that and the full stop; which is why you can simply tap for a hop and hold it for a long jump - I am mesmerized by how they keep their trajectories as arcs.

    My current implementation works as follows: while the jump button is pressed, gravity is turned off and the avatar's Y coordinate is decremented by the constant value of the gravity. For example, if things fall at Z units per tick, it will rise Z units per tick. Once the button is released or the limit is reached, the avatar decelerates by an amount that would make it cover X units until its speed reaches 0; once it does, it accelerates up until its speed matches gravity - sticking to the example, I could say it accelerates from 0 to Z units/tick while still covering X units.

    This implementation, however, makes jumps too diagonal, and unless the avatar's speed is faster than the gravity - which would make it way too fast in my current project (it moves at about 4 pixels per tick and gravity is 10 pixels per tick, at a framerate of 40 FPS) - it also makes them more vertical than horizontal. Those familiar with platformers will notice that a character's arced jump almost always allows it to jump further even when it isn't as fast as the game's gravity, and when it doesn't, if not played right, it proves to be very counter-intuitive. I know this because I can attest that my implementation is very annoying.

    Has anyone ever attempted similar mechanics, and maybe even succeeded? I'd like to know what's behind this kind of platformer jumping. If you haven't had any experience with this beforehand and want to give it a go, then please don't try to correct or enhance my implementation as explained, unless I was on the right track - try to build your solution from scratch. I don't care if you use gravity, physics or whatnot; as long as it shows how these pseudo-impulses work, it does the job. Also, I'd like its presentation to avoid language-specific code. As much as I'm using the XNA framework for my project and wouldn't mind C# stuff, I don't have much patience for reading others' code, and I'm certain game developers working in other languages would be interested in what we achieve here, so don't mind sticking to pseudo-code. Thank you beforehand.
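
    Since the question invites solutions built from scratch, here is the scheme most such platformers use, sketched in C# but simple enough to read as pseudo-code: gravity is never turned off, so the path is always a parabola; the button only modulates the rise, by cutting the remaining upward velocity when it is released early. The numbers are placeholders.

        class JumpController // minimal sketch
        {
            const float Gravity   = 30f;  // units per second^2
            const float JumpSpeed = 12f;  // initial upward speed
            const float CutFactor = 0.5f; // braking applied on early release

            public float Y;               // vertical position (up positive)
            public bool OnGround = true;
            float vy;

            public void Tick(float dt, bool jumpPressed, bool jumpReleased)
            {
                if (OnGround && jumpPressed) { vy = JumpSpeed; OnGround = false; }

                // Early release damps only the remaining rise, never the fall:
                // a tap gives a hop, holding gives the full arc.
                if (jumpReleased && vy > 0f) vy *= CutFactor;

                vy -= Gravity * dt;   // gravity stays on the whole time
                Y  += vy * dt;        // horizontal speed is integrated separately
            }
        }

    Because vertical motion is ballistic from the first frame and horizontal speed is untouched, the arc shape no longer depends on how the avatar's speed compares to gravity. An alternative with the same feel is to apply reduced gravity while the button is held and the character is still rising.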


  • WoW lua: Getting quest attributes before the QUEST_DETAIL event

    - by Matt DiTrolio
    I'd like to determine the attributes of a quest (i.e., information provided by functions such as QuestIsDaily and IsQuestCompletable) before the player clicks on the quest detail. I'm trying to write an add-on that handles accepting and completing daily quests with a single click on the NPC, but I'm running into a problem whereby I can't find out anything about a given quest unless its quest text is currently being displayed, defeating the purpose of the add-on. Other add-ons of this nature seem to get around this limitation by hard-coding information about quests, an approach I don't much like, as it requires constant maintenance. It seems to me that this information must be available somehow, as the game itself can properly figure out which icon to display over the head of the NPC without player interaction. The only question is: are add-on authors allowed access to this information? If so, how?

    EDIT: What I originally left out was that the situations I'm trying to address are when:

    An NPC has multiple quests

    The quest detail is not the first thing that shows up upon right-click

    Otherwise, the situation is much simpler, as I have the information I need provided immediately.


  • Physics/Graphics Components

    - by Brett Powell
    I have spent the last 48 hours reading up on object component systems, and feel I am ready enough to start implementing them. I have the base Object and Component classes created, but now that I need to start creating the actual components I am a bit confused. When I think of them in terms of a HealthComponent, or something that would basically just be a property, it makes perfect sense. When it is something more general, like a physics or graphics component, I get confused. My Object class looks like this so far (if you notice any changes I should make, please let me know; I'm still new to this):

        typedef unsigned int ID;

        class GameObject
        {
        public:
            GameObject(ID id, Ogre::String name = "");
            ~GameObject();

            ID &getID();
            Ogre::String &getName();
            virtual void update() = 0;

            // Component Functions
            void addComponent(Component *component);
            void removeComponent(Ogre::String familyName);

            template<typename T>
            T* getComponent(Ogre::String familyName)
            {
                return dynamic_cast<T*>(m_components[familyName]);
            }

        protected:
            // Properties
            ID m_ID;
            Ogre::String m_Name;
            float m_flVelocity;
            Ogre::Vector3 m_vecPosition;

            // Components
            std::map<std::string, Component*> m_components;
            std::map<std::string, Component*>::iterator m_componentItr;
        };

    Now the problem I am running into is what the general population would put into components such as physics or graphics. For Ogre (my rendering engine), a visible object will consist of one or more Ogre::SceneNode to attach it to the scene, one or more Ogre::Entity to show the visible meshes, and so on. Would it be best to just add multiple GraphicComponents to the Object and let each GraphicComponent handle one SceneNode/Entity, or is the idea to have one of each component needed? For physics I am even more confused. I suppose creating a RigidBody and keeping track of mass/inertia/etc. would make sense. But I am having trouble thinking of how to actually put specifics into a component. Once I get a couple of these "required" components done, I think it will make a lot more sense. As of right now, though, I am still a bit stumped.
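
    One workable split, sketched opinionatedly in C# with stand-in types (Ogre::SceneNode and Ogre::Entity in the question's terms): a single graphics component that owns however many node/mesh pairs the object needs, and a physics component that owns the body state. Neither knows about the other; a small sync step copies the physics transform into the scene nodes once per frame.

        using System.Collections.Generic;

        class SceneNode { public float X, Y, Z; }   // stand-in for Ogre::SceneNode
        class MeshInstance { }                      // stand-in for Ogre::Entity

        // One graphics component per object; it may own several parts, so
        // there is no need for one component per SceneNode.
        class GraphicsComponent
        {
            public readonly List<(SceneNode Node, MeshInstance Mesh)> Parts
                = new List<(SceneNode, MeshInstance)>();

            public void SyncFrom(PhysicsComponent body)
            {
                foreach (var part in Parts)
                { part.Node.X = body.X; part.Node.Y = body.Y; part.Node.Z = body.Z; }
            }
        }

        class PhysicsComponent
        {
            public float X, Y, Z;            // position is owned by physics
            public float VX, VY, VZ, Mass;   // rigid-body state lives here too
            public void Integrate(float dt)
            { X += VX * dt; Y += VY * dt; Z += VZ * dt; }
        }

    This also answers the "property" confusion by contrast: a HealthComponent is pure data, while physics/graphics components are data plus the engine handles they wrap; the behavior lives in the systems that iterate over them.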


  • Tile-based maps in AS3

    - by Ashley
    I want to make a tile-based platformer in AS3. I want my game to read an external map file (in XML or JSON or something similar) to draw a tile-based map. I've seen loads of tutorials for this in AS2 and other languages, and the few I've found for AS3 are either incomplete or filled with extra unnecessary features. I just want to be able to draw a basic map from sprites in Flash. Any links or information to point me in the right direction would be appreciated.


  • Per-vertex position/normal and per-index texture coordinate

    - by Boreal
    In my game, I have a mesh with a vertex buffer and index buffer up and running. The vertex buffer stores a Vector3 for the position and a Vector2 for the UV coordinate of each vertex. The index buffer is a list of ushorts. It works well, but I want to be able to use 3 discrete texture coordinates per triangle. I assume I have to create another vertex buffer, but how do I even use it? Here is my vertex/index buffer creation code:

        // vertices is a Vertex[]
        // indices is a ushort[]
        // VertexDefs stores the vertex size (sizeof(float) * 5)

        // vertex data
        numVertices = vertices.Length;
        DataStream data = new DataStream(VertexDefs.size * numVertices, true, true);
        data.WriteRange<Vertex>(vertices);
        data.Position = 0;

        // vertex buffer parameters
        BufferDescription vbDesc = new BufferDescription()
        {
            BindFlags = BindFlags.VertexBuffer,
            CpuAccessFlags = CpuAccessFlags.None,
            OptionFlags = ResourceOptionFlags.None,
            SizeInBytes = VertexDefs.size * numVertices,
            StructureByteStride = VertexDefs.size,
            Usage = ResourceUsage.Default
        };

        // create vertex buffer
        vertexBuffer = new Buffer(Graphics.device, data, vbDesc);
        vertexBufferBinding = new VertexBufferBinding(vertexBuffer, VertexDefs.size, 0);
        data.Dispose();

        // index data
        numIndices = indices.Length;
        data = new DataStream(sizeof(ushort) * numIndices, true, true);
        data.WriteRange<ushort>(indices);
        data.Position = 0;

        // index buffer parameters
        BufferDescription ibDesc = new BufferDescription()
        {
            BindFlags = BindFlags.IndexBuffer,
            CpuAccessFlags = CpuAccessFlags.None,
            OptionFlags = ResourceOptionFlags.None,
            SizeInBytes = sizeof(ushort) * numIndices,
            StructureByteStride = sizeof(ushort),
            Usage = ResourceUsage.Default
        };

        // create index buffer
        indexBuffer = new Buffer(Graphics.device, data, ibDesc);
        data.Dispose();

        Engine.Log(MessageType.Success,
            string.Format("Mesh created with {0} vertices and {1} indices", numVertices, numIndices));

    And my drawing code:

        // ShaderEffect, ShaderTechnique, and ShaderPass all store effect data
        // e is of type ShaderEffect

        // get the technique
        ShaderTechnique t;
        if(!e.techniques.TryGetValue(techniqueName, out t))
            return;

        // effect variables
        e.SetMatrix("worldView", worldView);
        e.SetMatrix("projection", projection);
        e.SetResource("diffuseMap", texture);
        e.SetSampler("textureSampler", sampler);

        // set per-mesh/technique settings
        Graphics.context.InputAssembler.SetVertexBuffers(0, vertexBufferBinding);
        Graphics.context.InputAssembler.SetIndexBuffer(indexBuffer, SlimDX.DXGI.Format.R16_UInt, 0);
        Graphics.context.PixelShader.SetSampler(sampler, 0);

        // render for each pass
        foreach(ShaderPass p in t.passes)
        {
            Graphics.context.InputAssembler.InputLayout = p.layout;
            p.pass.Apply(Graphics.context);
            Graphics.context.DrawIndexed(numIndices, 0, 0);
        }

    How can I do this?
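
    The usual approach (a sketch, not SlimDX-specific): the GPU indexes positions and UVs with a single index, so a corner that needs a different UV simply becomes another vertex. You expand separate position/UV index lists into one combined vertex list, reusing combinations you have already emitted; no second vertex buffer is needed.

        using System.Collections.Generic;
        using System.Numerics;

        // Sketch: turn "position index + uv index per corner" into a single
        // indexed vertex list, duplicating a position only when it appears
        // with a UV it has not been paired with before.
        static void BuildBuffers(
            Vector3[] positions, Vector2[] uvs,
            (ushort pos, ushort uv)[] corners,   // 3 entries per triangle
            out List<(Vector3, Vector2)> vertices, out List<ushort> indices)
        {
            vertices = new List<(Vector3, Vector2)>();
            indices = new List<ushort>();
            var cache = new Dictionary<(ushort, ushort), ushort>();

            foreach (var corner in corners)
            {
                if (!cache.TryGetValue(corner, out ushort index))
                {
                    index = (ushort)vertices.Count;
                    vertices.Add((positions[corner.pos], uvs[corner.uv]));
                    cache[corner] = index;   // reuse this combination later
                }
                indices.Add(index);
            }
        }

    The resulting lists feed the existing buffer-creation code unchanged; a flat-shaded cube, for example, ends up with 24 vertices instead of 8.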


  • PyGame QIX clone, filling areas

    - by astropanic
    I'm playing around with PyGame, and now I'm trying to implement a QIX clone. I have my game loop, and I can move the player (cursor) on the screen. In QIX, the movement of the player leaves a trace (tail) on the screen, creating a polyline. If the polyline together with the screen boundaries creates a polygon, the area is filled. How can I accomplish this behaviour? How do I store the tail in memory? How do I detect when it builds a closed shape that should be filled? I don't need an exact working solution; some pointers or algorithm names would be cool.


  • Algorithm to generate multifaced cube?

    - by OnePie
    Is there an elegant solution for generating a simple six-sided cube, where each side is made out of more than one face? The method I have used ended up a horrible and complicated mess of logic that is impossible to follow and, most likely, to maintain. The algorithm should not generate redundant vertices, and should output the index list for the mesh as well. The reason I need this is that the cube's vertices will be deformed depending on various factors, meaning that a simple six-faced cube will not do.
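
    A sketch of one tidy construction (names made up for illustration): describe each face by a corner point and its two edge vectors, generate an (n+1) x (n+1) vertex grid per face, and emit two triangles per grid cell. Within a face no vertex is duplicated; if the shared cube edges must also be welded, that can be done afterwards with a position-keyed lookup.

        using System.Collections.Generic;
        using System.Numerics;

        // Sketch: one subdivided cube face. origin is the face's corner; du and
        // dv are the face's full edge vectors; n is the number of quads per side.
        static void AddFace(Vector3 origin, Vector3 du, Vector3 dv, int n,
                            List<Vector3> verts, List<ushort> indices)
        {
            int baseIndex = verts.Count;

            for (int j = 0; j <= n; j++)          // (n+1)^2 vertices, none
                for (int i = 0; i <= n; i++)      // repeated within this face
                    verts.Add(origin + du * (i / (float)n) + dv * (j / (float)n));

            for (int j = 0; j < n; j++)
                for (int i = 0; i < n; i++)
                {
                    ushort a = (ushort)(baseIndex + j * (n + 1) + i);
                    ushort b = (ushort)(a + 1);
                    ushort c = (ushort)(a + n + 1);
                    ushort d = (ushort)(c + 1);
                    indices.AddRange(new[] { a, c, b, b, c, d }); // two tris; wind to taste
                }
        }
        // Call six times with the cube's six (origin, du, dv) triples.

    Deforming the cube then means deforming only the verts list; the index list never changes.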


  • Need to combine a color, mask, and sprite layer in a shader

    - by Donutz
    My task: to display a sprite using different team colors. I have a sprite graphic, part of which has to be displayed in a team color. The color isn't 'flat', i.e. it shades from brighter to darker. I can't "pre-build" the graphics because there are just too many, so I have to generate them at runtime. I've decided to use a shader and supply it with a texture consisting of the team color, a texture consisting of a mask (black = no color, white = full color, gray = progressively dimmed color), and the sprite graphic, with the areas where the team color shows being transparent. So here's my shader code:

        // Effect attempts to merge a color layer, a mask layer, and a sprite layer
        // to produce a complete sprite
        sampler UnitSampler  : register(s0); // the unit sampler
        sampler MaskSampler  : register(s1); // the mask sampler
        sampler ColorSampler : register(s2); // the color

        float4 main(float4 color : COLOR0, float2 texCoord : TEXCOORD0) : COLOR0
        {
            float4 tex1 = tex2D(ColorSampler, texCoord); // get the color
            float4 tex2 = tex2D(MaskSampler, texCoord);  // get the mask
            float4 tex3 = tex2D(UnitSampler, texCoord);  // get the unit
            float4 tex4 = tex1 * tex2.r * tex3;          // color * mask * unit
            return tex4;
        }

    My problem is the calculations involving tex1 through tex4. I don't really understand how the manipulations work, so I'm just thrashing around, producing lots of different incorrect effects. So given tex1 through tex3, what calculations do I do in order to take the color (tex1), mask it (tex2), and apply the result to the unit if it's not zero? And would I be better off making the mask just on/off (white/black) and putting the color shading in the unit graphic?


  • Libgdx Palette Swap

    - by Haedrian
    I'm developing a game using the libgdx library, and I'm trying to implement a very simple palette-swap functionality (basically just complete recolouring of some areas; I don't even need to have various shades), but I don't have any idea where to begin. The closest I've come is trying to draw the picture myself using a Pixmap, but that appears to be horribly unmaintainable and produces oodles of code.


  • Diamond-square terrain generation problem

    - by kafka
    I've implemented a diamond-square algorithm according to this article: http://www.lighthouse3d.com/opengl/terrain/index.php?mpd2

    The problem is that I get these steep cliffs all over the map. It happens on the edges, when the terrain is recursively subdivided. Here is the source:

        void DiamondSquare(unsigned x1, unsigned y1, unsigned x2, unsigned y2, float range)
        {
            int c1 = (int)x2 - (int)x1;
            int c2 = (int)y2 - (int)y1;
            unsigned hx = (x2 - x1) / 2;
            unsigned hy = (y2 - y1) / 2;
            if ((c1 <= 1) || (c2 <= 1))
                return;

            // Diamond stage
            float a = m_heightmap[x1][y1];
            float b = m_heightmap[x2][y1];
            float c = m_heightmap[x1][y2];
            float d = m_heightmap[x2][y2];
            float e = (a + b + c + d) / 4 + GetRnd() * range;
            m_heightmap[x1 + hx][y1 + hy] = e;

            // Square stage
            float f = (a + c + e + e) / 4 + GetRnd() * range;
            m_heightmap[x1][y1 + hy] = f;
            float g = (a + b + e + e) / 4 + GetRnd() * range;
            m_heightmap[x1 + hx][y1] = g;
            float h = (b + d + e + e) / 4 + GetRnd() * range;
            m_heightmap[x2][y1 + hy] = h;
            float i = (c + d + e + e) / 4 + GetRnd() * range;
            m_heightmap[x1 + hx][y2] = i;

            DiamondSquare(x1, y1, x1 + hx, y1 + hy, range / 2.0);      // Upper left
            DiamondSquare(x1 + hx, y1, x2, y1 + hy, range / 2.0);      // Upper right
            DiamondSquare(x1, y1 + hy, x1 + hx, y2, range / 2.0);      // Lower left
            DiamondSquare(x1 + hx, y1 + hy, x2, y2, range / 2.0);      // Lower right
        }

    Parameters: (x1,y1), (x2,y2) are coordinates that define a region on the heightmap (default (0,0)-(128,128)); range is basically the max height (default 32). Help would be greatly appreciated.


  • ScreenManagement best practices?! TextBox not focusing

    - by xykudyax
    I saw a question here about using DataTemplates with WPF for screen management; I was curious, so I gave it a try. I think the idea is amazing and very clean. Though I'm new to WPF, I have read many times that almost everything should be done in XAML and very little should be "coded behind". My question revolves around the DataTemplate idea: WHERE should the code that calls the transitions be? Where should I define which commands are available on which screens? For example:

        [ScreenA] Commands:
            Pressing B     - Goes to state B
            Pressing ESC   - Exits
        [ScreenB] Commands:
            Pressing A     - Goes to state A
            Pressing SPACE - Exits

    Where do I define the key event handlers, and where do I call the next screen? I'm doing this as a hobby, for learning, and "if you are learning, better learn it right" :) Thank you for your time. Yes, the Q/A I was talking about is: What's a good way to handle game screen management in WPF?

    What I've done so far was to create a Screen class (derived from UserControl) and create some virtual methods: one for initializing stuff (like focusing a given component by default); another for input handling (I handle it by using a switch-case and by listening to the PreviewKeyDown event of the parent container, MainWindow - I'm not able to do it another way! Help?!); and finally one that removes the key event handler when the screen is terminated (Parent.PreviewKeyDown -= OnKeyDown;). Am I doing OK?

    I face a problem: when I add a new screen (UserControl) containing a TextBox, I'm not able to give it autofocus :/ The caret is there, but it is not blinking, and I have to hit TAB before being able to input anything at all :/
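
    For the focus symptom specifically, a sketch of the standard WPF idiom (ScreenB and NameBox are hypothetical names): calling Focus() before the control is loaded and rendered has no visible effect, so defer it to the Loaded event, and set keyboard focus explicitly, not just logical focus.

        using System.Windows.Controls;
        using System.Windows.Input;

        public partial class ScreenB : UserControl // hypothetical screen with a TextBox "NameBox"
        {
            public ScreenB()
            {
                InitializeComponent();

                // Focus only works once the control is in the visual tree and
                // loaded; Keyboard.Focus grants keyboard (caret) focus.
                Loaded += (s, e) =>
                {
                    NameBox.Focus();
                    Keyboard.Focus(NameBox);
                };
            }
        }

    If the screen is swapped in via a DataTemplate, the same two lines can live in a Loaded handler on the template's root element; the key point is deferring the call until after load.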

