Search Results


  • Procedural Planets, Heightmaps and Textures

    - by henryprescott
    I am currently working on an OpenGL procedural planet generator. I hope to use it for a space RPG that will not allow players to go down to the surface of a planet, so I have ignored anything ROAM-related. At the moment I am drawing a cube with VBOs and mapping it onto a sphere. I am familiar with most fractal heightmap generation techniques and have already implemented my own version of midpoint displacement (not that useful in this case, I know). My question is: what is the best way to procedurally generate the heightmap? I have looked at libnoise, which allows me to make tileable heightmaps/textures, but as far as I can see I would need to generate a net like this, leaving the tiling obvious. Could anyone advise me on the best route to take? Any input would be much appreciated. Thanks, Henry.
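
    A common way to sidestep the seam problem entirely is to sample a 3D noise function at each vertex's unit-sphere position, so the heightmap is continuous everywhere and no 2D net is needed. A minimal C# sketch of the idea; Noise3 below is a crude placeholder hash standing in for real simplex/Perlin noise:

        using System;
        using System.Numerics;

        static class PlanetNoise
        {
            public static float Height(Vector3 cubeVertex)
            {
                Vector3 p = Vector3.Normalize(cubeVertex); // project cube vertex onto the unit sphere
                float height = 0f, amplitude = 1f, frequency = 1f;
                for (int octave = 0; octave < 6; octave++) // fractal Brownian motion: sum of octaves
                {
                    height += amplitude * Noise3(p * frequency);
                    amplitude *= 0.5f;
                    frequency *= 2f;
                }
                return height;
            }

            // Placeholder only: substitute real simplex/Perlin noise for smooth terrain.
            static float Noise3(Vector3 v)
            {
                double n = Math.Sin(v.X * 12.9898 + v.Y * 78.233 + v.Z * 37.719) * 43758.5453;
                return (float)(n - Math.Floor(n));
            }
        }

    Because the noise is sampled in 3D space rather than on an unwrapped 2D map, the six cube faces agree along their shared edges for free.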

    Read the article

  • How to prevent "underwater sight" in games

    - by CPP_Person
    In many games where the player can go underwater, when the top half of the screen is in the air and the bottom half of the screen is in the water, it's almost as if the water doesn't exist and the player is... flying slowly with water sounds. Is there a logical way to solve this? An algorithm? It doesn't seem like a standard solution has emerged yet, since many games still have this problem. I don't want to make the same mistake.
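
    One common approach, sketched here under the assumption of a flat horizontal water plane: track which medium the eye is in with a little hysteresis so the underwater effects (fog, tint, muffled audio) don't flicker while the camera bobs at the surface, and handle the half-in/half-out frame by drawing a waterline strip where the water plane crosses the near clip plane.

        // Sketch: per-frame medium state for the camera eye point, with a
        // small slack band around the surface to avoid flickering.
        enum CameraMedium { Air, Water }

        static CameraMedium UpdateMedium(CameraMedium current, float eyeY, float waterY)
        {
            const float band = 0.05f; // world units of hysteresis around the surface
            if (current == CameraMedium.Air && eyeY < waterY - band) return CameraMedium.Water;
            if (current == CameraMedium.Water && eyeY > waterY + band) return CameraMedium.Air;
            return current;
        }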

    Read the article

  • How are effects like those in "Autodesk Fluid FX" implemented using OpenGL ES?

    - by afds
    How are these kinds of effects technically implemented using OpenGL ES? Are they performing the simulation on the GPU (using shaders), or on the CPU while using some smart vertex positioning and texturing? Why does it appear so fast (in terms of performance)? You can check out a video of the app here: http://www.youtube.com/watch?v=F4KOk6QP6kQ Edit: Here is the presentation for the app: http://www.futuregameon.com/FGO2010_JosStam.pdf
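
    The linked presentation is by Jos Stam, whose "stable fluids" solver is the core technique. Its key step is semi-Lagrangian advection: each cell traces backwards along the velocity field and samples the old field there, which makes the solver unconditionally stable and cheap enough to run per-pixel in a fragment shader. A CPU-side C# sketch of that one step, for illustration only:

        using System;

        // Advect a scalar field (e.g. dye density) through a velocity field
        // on an n x n grid. dst/src/velX/velY are cell-centred arrays.
        static void Advect(float[,] dst, float[,] src, float[,] velX, float[,] velY, float dt, int n)
        {
            for (int j = 1; j < n - 1; j++)
            for (int i = 1; i < n - 1; i++)
            {
                // Trace the cell centre backwards through the velocity field.
                float x = Math.Clamp(i - dt * velX[i, j], 0.5f, n - 1.5f);
                float y = Math.Clamp(j - dt * velY[i, j], 0.5f, n - 1.5f);
                int i0 = (int)x, j0 = (int)y;
                float s = x - i0, t = y - j0;
                // Bilinear sample of the source field at the traced position.
                dst[i, j] = (1 - s) * ((1 - t) * src[i0, j0] + t * src[i0, j0 + 1])
                          + s * ((1 - t) * src[i0 + 1, j0] + t * src[i0 + 1, j0 + 1]);
            }
        }

    On a GPU the same loop becomes a full-screen pass over textures, which is why the app runs so fast.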

    Read the article

  • How to detect GLSL warnings?

    - by msell
    After compiling a shader with glCompileShader, I can call glGetShaderiv with GL_COMPILE_STATUS to check whether the shader compiled successfully. I can also call glGetShaderInfoLog to get information about possible errors, warnings or other info. The contents of the info log returned by this function are unspecified. In a tool where users can write their own shaders, I would like to print all errors and warnings from the compilation, but nothing if no warnings or errors were found. The problem is that GL_COMPILE_STATUS is false only if the compilation failed, and true otherwise. If no problems were found, some drivers return an empty info log from glGetShaderInfoLog, but other drivers can return something else, such as "No errors.", which I do not want to print to the user. How is this problem generally solved?
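
    A common, admittedly pragmatic pattern is to whitelist the known "all clear" strings. A C# sketch using the OpenTK bindings (assumed here); the benign list is a heuristic to extend per driver, since the log contents are unspecified:

        using System;
        using OpenTK.Graphics.OpenGL;

        static string CompileAndReport(int shader, string source)
        {
            GL.ShaderSource(shader, source);
            GL.CompileShader(shader);
            GL.GetShader(shader, ShaderParameter.CompileStatus, out int status);

            string log = (GL.GetShaderInfoLog(shader) ?? "").Trim();

            if (status == 0) return log;                        // failed: always show the log
            string[] benign = { "", "No errors.", "Success." }; // driver-specific, extend as found
            return Array.IndexOf(benign, log) >= 0 ? "" : log;  // succeeded: show only real warnings
        }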

    Read the article

  • How do I deal with the problems of a fast side-scroller?

    - by Ska
    I'm making a side-scrolling airplane game, and when it gets very fast I begin to experience some problems as a player:
    - Elements are not distinguishable, like power-ups from bullets, etc.
    - I start to feel dizzy and uncomfortable.
    - There isn't enough time to see what's coming.
    How can I sort this out? Should I use less detail in all the graphics? Tiny Wings has the same horizontal movement speed as my game, but it doesn't suffer from these problems. Are there any other really fast side-scrollers I could take as a reference?

    Read the article

  • What library should I use for 2D Geometry? [closed]

    - by Luka
    I've been working on a 2D game in Java, but found that Java just didn't cut it for me and had forced me into a lot of bad design choices, so I've decided to port all my work to C++. The main reason I've decided to change to C++ is that I had reached a point where I had 3 geometry libraries (the native one, one from the game engine, and one to handle "complex" polygons), none of which worked very well together, and I couldn't keep track of them. I'm new to C++, but I know all the basics. My question is: what would be a good geometry library to use? Ideally it should be able to handle integer and decimal data types, and have point, line, and polygon classes which are able to check for intersection and containment. Thanks in advance, Luka

    Read the article

  • Animations not accepted in Animator

    - by Lautaro
    In the official Unity Animator State Machine tutorial video, animation clips are dragged out of the assets folder into the Animator and dropped. I have a 3D model that I bought online to experiment with that comes with animations. I added a custom-made animation as well. These all work well in my demo project. But when I add an Animator to the assets and try to drag and drop animations onto it, it doesn't work. I get a forbidden sign as a mouse pointer. I tried to add animations through the Inspector, but that does not work either. The tutorials make it seem so easy and don't say anything about which animations can be used. What am I doing wrong?

    Read the article

  • Sub-systems in game engines

    - by Hillel
    So here's the problem: I'm writing my own engine library, and it works fine with stuff like menus and the actual game screen. The thing is, I can't really figure out how to integrate something like an intro or dialogue preceding certain levels into this system. Let's look at another example: say I have a game-specific engine which gets a Level object and runs it. The Engine would have its own collision and physics systems, all hard-coded. Now, what if at some point in a level I want the player to enter a mini-game with different rules? How do I morph the Engine class to support these sub-systems without having to deal with their code all the time (as in: if (regular game) ... else if (mini game) ...)? And what if I want an intro animation at the start of a level, with the player able to assume control of his character once the animation ends; do I implement the animation in the Engine class itself? Or maybe I need to run another class, CutScene, and when it ends, it calls Engine and starts the level? What if I want to add a dialogue system, where at the start of each level there's a short dialogue during which the player can't control his character, and once it ends, he can? Would I then run the dialogue code inside the Engine code? Maybe these sub-systems should all be scripted? I don't know anything about scripting; is it necessary for this kind of situation? Any help would be appreciated.
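
    One structure this question keeps circling is a stack of self-contained states (cut-scene, dialogue, mini-game, level), where the engine only ever updates whatever is on top. A minimal C# sketch of the idea, not a prescription:

        using System.Collections.Generic;

        // Each state owns its own rules (input, physics, rendering), so the
        // engine needs no if(regular game)/else if(mini game) branching.
        interface IGameState
        {
            void Update(float dt);
            bool IsFinished { get; } // e.g. the cut-scene reached its last frame
        }

        class Engine
        {
            private readonly Stack<IGameState> states = new Stack<IGameState>();

            public void Push(IGameState state) => states.Push(state);

            public void Update(float dt)
            {
                if (states.Count == 0) return;
                IGameState top = states.Peek();
                top.Update(dt);
                if (top.IsFinished) states.Pop(); // control falls back to the state beneath
            }
        }

    An intro then becomes a CutScene state pushed on top of the Level state, and a mini-game is just another state with its own collision rules.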

    Read the article

  • Player Movement DirectX

    - by SullY
    I'm reading a book about game development with C++ and DirectX 9. There is something that interests me: it says that player movement speed increases with the power of the CPU, because a faster CPU will move the player every frame (better CPU = higher FPS). To bypass this, it says you just have to multiply time * movementfactor. I'd like to know: is there another way to bypass it?
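
    The usual alternative to scaling by frame time is a fixed-timestep loop: the simulation always advances in constant steps, so game speed is identical on fast and slow machines while rendering runs as fast as it can. A C# sketch; UpdateSimulation and Render are placeholders for your own game's methods:

        const float Step = 1f / 60f;   // simulate at a fixed 60 Hz
        float accumulator = 0f;

        void Frame(float elapsedSeconds)
        {
            accumulator += elapsedSeconds;
            while (accumulator >= Step)
            {
                UpdateSimulation(Step);  // e.g. position += velocity * Step
                accumulator -= Step;
            }
            Render();                    // can interpolate between the last two states
        }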

    Read the article

  • Increasing efficiency of N-Body gravity simulation

    - by Postman
    I'm making a space exploration type game; it will have many planets and other objects that all have realistic gravity. I currently have a system in place that works, but if the number of planets goes above 70, the FPS decreases at practically exponential rates. I'm making it in C# and XNA. My guess is that I should be able to do gravity calculations between 100 objects without this kind of strain, so clearly my method is not as efficient as it should be. I have two files, Gravity.cs and EntityEngine.cs. Gravity manages JUST the gravity calculations; EntityEngine creates an instance of Gravity and runs it, along with other entity-related methods.

    EntityEngine.cs (only the relevant piece of code; when the instance of Gravity is made in EntityEngine, it passes itself (this) in, so that Gravity has access to entityEngine.Entities, a dictionary of all planet objects):

        public void Update()
        {
            foreach (KeyValuePair<string, Entity> e in Entities)
            {
                e.Value.Update();
            }
            gravity.Update();
        }

    Gravity.cs:

        namespace ExplorationEngine
        {
            public class Gravity
            {
                private EntityEngine entityEngine;
                private Vector2 Force;
                private Vector2 VecForce;
                private float distance;
                private float mult;

                public Gravity(EntityEngine e)
                {
                    entityEngine = e;
                }

                public void Update()
                {
                    // First loop
                    foreach (KeyValuePair<string, Entity> e in entityEngine.Entities)
                    {
                        // Reset the force vector
                        Force = new Vector2();

                        // Second loop
                        foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities)
                        {
                            // Make sure the second value is not the current value from the first loop
                            if (e2.Value != e.Value)
                            {
                                // Find the distance between the two objects. Because Fg = G * ((M1 * M2) / r^2),
                                // using Vector2.Distance() and then squaring it is pointless and inefficient:
                                // Distance uses a sqrt, and squaring the result simply cancels that sqrt.
                                distance = Vector2.DistanceSquared(e2.Value.Position, e.Value.Position);

                                // This makes sure that two planets do not attract each other if they are touching;
                                // it will be unnecessary once I add collision. For now it just keeps the planets
                                // from being glitchy; removing this if does not significantly improve performance.
                                if (Math.Sqrt(distance) > (e.Value.Texture.Width / 2 + e2.Value.Texture.Width / 2))
                                {
                                    // Calculate the magnitude of Fg (I'm using my own gravitational constant (G)
                                    // for the sake of time; I know it's 1 at the moment, but I've been changing it).
                                    mult = 1.0f * ((e.Value.Mass * e2.Value.Mass) / distance);

                                    // Calculate the direction of the force. Subtracting the positions and
                                    // normalizing works; this fixes diagonal vectors having a larger value
                                    // and basically makes VecForce a direction.
                                    VecForce = e2.Value.Position - e.Value.Position;
                                    VecForce.Normalize();

                                    // Add the vector for each planet in the second loop to a force var.
                                    // (I have tried Force += VecForce * mult and have not noticed much of a speedup.)
                                    Force = Vector2.Add(Force, VecForce * mult);
                                }
                            }
                        }

                        // Add that force to the first loop's planet's position
                        // (later on I'll add to acceleration instead, to account for inertia).
                        e.Value.Position += Force;
                    }
                }
            }
        }

    I have used various tips (about gravity optimizing, not threading) from THIS question (that I made yesterday). I've made this gravity method (Gravity.Update) as efficient as I know how to make it, but this O(N^2) algorithm still seems to be eating up all of my CPU power. Here is a LINK (Google Drive; go to File > Download, keep the .exe with the Content folder; you will need the XNA Framework 4.0 Redistributable if you don't already have it) to the current version of my game. Left click makes a planet, right click removes the last planet. The mouse moves the camera, and the scroll wheel zooms in and out. Watch the FPS and planet count to see what I mean about performance issues past 70 planets. (ALL 70 planets must be moving; I've had 100 stationary planets and only 5 or so moving ones while still getting 300 FPS. The issue arises when 70+ are moving around.) After 70 planets are made, performance tanks exponentially: with fewer than 70 planets I get 330 FPS (I have it capped at 300), at 90 planets the FPS is about 2, and beyond that it sticks around 0 FPS. Strangely enough, when all planets are stationary the FPS climbs back up to around 300, but as soon as something moves it goes right back down to what it was; I have no systems in place to make this happen, it just does. I considered multithreading, but that previous question I asked taught me a thing or two, and I see now that it's not a viable option. I've also thought maybe I could do the calculations on my GPU instead, though I don't think it should be necessary. I also do not know how to do this; it is not a simple concept, and I want to avoid it unless someone knows a really noob-friendly, simple way that will work for an n-body gravity calculation. (I have an NVidia GTX 660.) Lastly, I've considered using a quadtree-type system (Barnes-Hut simulation). I've been told (in the previous question) that this is a good method that is commonly used, and it seems logical and straightforward; however, the implementation is way over my head, and I haven't found a good tutorial for C# yet that explains it in a way I can understand or uses code I can eventually figure out. So my question is this: how can I make my gravity method more efficient, allowing me to use more than 100 objects (I can render 1000 planets at a constant 300+ FPS without gravity calculations), and if I can't do much to improve performance (including some kind of quadtree system), could I use my GPU to do the calculations?
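
    A hedged sketch of one immediate improvement: gravity is symmetric (Newton's third law), so each pair only needs to be visited once, halving the inner-loop work, and copying the entities into a list avoids walking the dictionary N^2 times. Entity is assumed here to expose Position, Mass and a Force accumulator cleared each frame:

        // Sketch: pairwise accumulation, visiting each (i, j) pair once and
        // applying equal and opposite forces.
        var list = new List<Entity>(entityEngine.Entities.Values);
        foreach (Entity e in list) e.Force = Vector2.Zero;

        for (int i = 0; i < list.Count; i++)
        {
            for (int j = i + 1; j < list.Count; j++)
            {
                Vector2 delta = list[j].Position - list[i].Position;
                float distSq = delta.LengthSquared();
                if (distSq < 1e-6f) continue;            // avoid division by zero

                float invDist = 1f / (float)Math.Sqrt(distSq);
                // G * m1 * m2 / r^2, with the direction folded in via delta * invDist
                Vector2 force = delta * (invDist * (list[i].Mass * list[j].Mass / distSq));
                list[i].Force += force;                  // pull i towards j
                list[j].Force -= force;                  // and j towards i, for free
            }
        }

    This is still O(N^2), so past a few hundred moving bodies the Barnes-Hut tree mentioned above remains the real fix.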

    Read the article

  • How to organize rendering

    - by Irbis
    I use deferred rendering. During the g-buffer stage, my rendering loop for a Sponza model (OBJ format) looks like this:

        int i = 0;
        int sum = 0;
        map<string, mtlItem *>::const_iterator itrEnd = mtl.getIteratorEnd();
        for (map<string, mtlItem *>::const_iterator itr = mtl.getIteratorBegin(); itr != itrEnd; ++itr)
        {
            glActiveTexture(GL_TEXTURE0 + 0);
            glBindTexture(GL_TEXTURE_2D, itr->second->map_KdId);
            glDrawElements(GL_TRIANGLES, indicesCount[i], GL_UNSIGNED_INT, (GLvoid*)(sum * 4));
            sum += indicesCount[i];
            ++i;
            glBindTexture(GL_TEXTURE_2D, 0);
        }

    I sorted the faces based on materials. I switch only the diffuse texture, but I could put more material properties there. Is this a good approach? I also wonder how to handle different kinds of materials; for example, some materials use a normal map and others don't. Should I have different shaders for them?
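
    A common pattern that covers both questions, sketched in C# under the assumption of simple integer shader/texture ids: give every draw a sort key of (shader, texture), sort once, and only change GL state when the key changes. Materials that need a normal map simply reference a different shader permutation in the key. UseProgram, BindTexture and DrawIndexed stand in for your own GL wrappers:

        using System.Collections.Generic;

        struct DrawCall { public int Shader, Texture, IndexCount, IndexOffset; }

        void Submit(List<DrawCall> draws)
        {
            // Order by shader first, then texture, so state changes are minimized.
            draws.Sort((a, b) => a.Shader != b.Shader
                ? a.Shader.CompareTo(b.Shader)
                : a.Texture.CompareTo(b.Texture));

            int boundShader = -1, boundTexture = -1;
            foreach (DrawCall d in draws)
            {
                if (d.Shader != boundShader) { UseProgram(d.Shader); boundShader = d.Shader; }
                if (d.Texture != boundTexture) { BindTexture(d.Texture); boundTexture = d.Texture; }
                DrawIndexed(d.IndexCount, d.IndexOffset);
            }
        }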

    Read the article

  • Particle Effect Completion

    - by Siddharth
    In my game I use particle effects for various purposes, and I want to detect the completion of a particle effect. Basically, I want to do something after the particle effect completes, but the problem is that I wasn't able to detect its completion. Would any community member please help me? EDIT: I was creating the particle effect using the following code:

        pointParticleEmtitter = new PointParticleEmitter(pX, pY);
        particleSystem = new ParticleSystem(pointParticleEmtitter, maxRate, minRate, maxParticles,
                mParticleTextureRegion.deepCopy());
        particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
        particleSystem.addParticleInitializer(new ColorInitializer(0f, 0f, 1f));
        particleSystem.addParticleModifier(new AlphaModifier(1, 0, 0, 0.5f));
        particleSystem.addParticleModifier(new ExpireModifier(0.5f));
        gameObject.getScene().attachChild(particleSystem);

    Using the above code the particle effect starts, but I want to detect when it finishes. After the effect finishes, I want to remove the object from the scene.
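
    An engine-agnostic fallback, sketched in C#: because every particle here expires after a fixed 0.5s (the ExpireModifier), the whole effect is guaranteed over at emitDuration + maxParticleLife, so a simple timer polled from the update loop can trigger the detach. This is a workaround sketch, not an AndEngine API:

        class TimedEffect
        {
            private float elapsed;
            private readonly float lifetime;

            // e.g. new TimedEffect(emitDuration, 0.5f) for the code above
            public TimedEffect(float emitDuration, float maxParticleLife)
            {
                lifetime = emitDuration + maxParticleLife;
            }

            // Call once per frame; returns true when it is safe to detach.
            public bool Update(float dt)
            {
                elapsed += dt;
                return elapsed >= lifetime;
            }
        }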

    Read the article

  • Managing many draw calls for dynamic objects

    - by codetiger
    We are developing a cross-platform game using Irrlicht. The game has many (around 200-500) dynamic objects flying around during the game. Most of these objects are static meshes built from 20-50 unique meshes. We created separate scene nodes for each object, each referring to its mesh instance, but the output was very much unexpected.

    Menu screen (150 tris, just to show the full-speed rendering performance of the 2 test computers):
    a) NVidia Quadro FX 3800 with 1GB: 1600 FPS DirectX and 2600 FPS on OpenGL
    b) Mac Mini with GeForce 9400M 256MB: 260 FPS in OpenGL

    Now inside the game, in a test level (160 dynamic objects, around 10K tris):
    a) NVidia Quadro FX 3800 with 1GB: 45 FPS DirectX and 50 FPS on OpenGL
    b) Mac Mini with GeForce 9400M 256MB: 45 FPS in OpenGL

    Obviously we don't have the option of mesh batch rendering, as most of the objects are dynamic, and the one big static terrain is already in a single mesh buffer. To add more information, we use one 2048 PNG texture for most of the dynamic objects, and our collision detection and other calculations hardly make any impact on FPS. So we understand it's the draw calls we make that eat up all the FPS. Is there a way we can optimize the rendering, or are we missing something?
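
    One engine-agnostic direction, as a hedged C# sketch: because the 200-500 objects share only 20-50 unique meshes, they can be grouped per mesh each frame and submitted as one hardware-instanced draw per mesh, cutting draw calls by roughly an order of magnitude. Mesh, dynamicObjects and DrawInstanced stand in for the engine's own types and calls:

        using System.Collections.Generic;
        using System.Numerics;

        var byMesh = new Dictionary<Mesh, List<Matrix4x4>>();
        foreach (var obj in dynamicObjects)
        {
            if (!byMesh.TryGetValue(obj.Mesh, out var transforms))
                byMesh[obj.Mesh] = transforms = new List<Matrix4x4>();
            transforms.Add(obj.WorldTransform);   // dynamic: rebuilt every frame
        }
        foreach (var pair in byMesh)
            DrawInstanced(pair.Key, pair.Value);  // 20-50 draws instead of 200-500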

    Read the article

  • How can I imitate interaction and movement in Diablo II?

    - by user422318
    I'm prototyping a simple browser-based game. It's played from a top-down perspective on a 2D canvas. You left-click on a point on the map, and your character begins walking to it. If you click on a different point on the map, your character begins walking to the new point. It's similar to Diablo II: http://www.youtube.com/watch?v=EvDKt-To6K0&feature=related How can I best imitate this movement system for a player? Ideas so far:
    - Track current coords and target coords.
    - If the target coords are exactly up, left, right, or down, increment the appropriate direction until you get there.
    - Implied else: the target coords are in a quadrant. To make this movement look natural, the character will have to move diagonally. For example, pretend the target is to the northeast. For each game frame, alternate incrementing the current coordinates in the north and then east directions.
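
    A simpler alternative to axis-by-axis stepping, sketched in C#: move along the normalized direction to the click target each frame, which yields straight, constant-speed paths in any direction rather than staircase diagonals. position and target are Vector2 (System.Numerics here); speed and dt are assumed variables:

        // Sketch: constant-speed movement towards the last clicked point.
        Vector2 toTarget = target - position;
        float dist = toTarget.Length();
        float step = speed * dt;                  // speed in units/second, dt in seconds
        if (dist > step) position += toTarget * (step / dist);
        else position = target;                   // close enough: snap and stop

    Re-clicking just replaces target; the same code then steers towards the new point.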

    Read the article

  • How to make an iOS plugin for Unity3d

    - by DannoEterno
    I've spent the last 2 days reading articles and books to understand how to make a plugin for iOS in Unity. Basically I just need a demo to understand how it works. For now I've tried the following process (with really poor luck). I started a new project in Unity and wrote a simple script:

        using UnityEngine;
        using System.Collections;
        using System;
        using System.Runtime.InteropServices;

        public class CallPlugin : MonoBehaviour
        {
            [DllImport ("__Internal")]
            private static extern int test();

            void Start()
            {
                Debug.Log(test());
            }
        }

    Then I created a project in Xcode with this simple code:

        extern "C"
        {
            int test()
            {
                int che = 5;
                return che;
            }
        }

    Then I tried:
    - putting the .mm and .h in Assets/Plugins/iOS = nothing
    - building the Unity project and then adding the .h and .mm to the Xcode project = nothing
    In Unity I always get an EntryPointNotFoundException, so Unity sees the file but is unable to reach the method. The problem is... how?! :) Maybe I missed something or did something wrong? Thanks a lot for any help you can give me :)
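
    One detail worth noting here, as a hedged observation rather than a certainty for every Unity version: "__Internal" P/Invokes are only resolved in an actual iOS device build, so calling test() inside the editor throws EntryPointNotFoundException even when the plugin files are placed correctly. A common guard, sketched in C#:

        // Sketch: only call the native function where it can exist; in the
        // editor the "__Internal" entry point is never linked in.
        #if UNITY_IPHONE && !UNITY_EDITOR
            Debug.Log(test());
        #else
            Debug.Log("native test() is only available in an iOS device build");
        #endif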

    Read the article

  • libgdx texture loading fails [duplicate]

    - by Chris
    This question already has an answer here: Why do I get this file loading exception when trying to draw sprites with libgdx? (4 answers)

    I'm trying to load my texture with

        playerTex = new Texture(Gdx.files.internal("player.jpg"));

    player.jpg is located under my-gdx-game-android/assets/data/player.jpg. I get an exception like this:

    Full code:

        @Override
        public void create() {
            camera = new OrthographicCamera();
            camera.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
            batch = new SpriteBatch();

            FileHandle file = Gdx.files.internal("player.jpg");
            playerTex = new Texture(file);

            player = new Rectangle();
            player.x = 800 - 20;
            player.y = 250;
            player.width = 20;
            player.height = 80;
        }

        @Override
        public void dispose() {
            // dispose of all the native resources
            playerTex.dispose();
            batch.dispose();
        }

        @Override
        public void render() {
            Gdx.gl.glClearColor(1, 1, 1, 1);
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

            camera.update();
            batch.setProjectionMatrix(camera.combined);
            batch.begin();
            batch.draw(playerTex, player.x, player.y);
            batch.end();

            if (Gdx.input.isKeyPressed(Keys.DOWN))
                player.y -= 50 * Gdx.graphics.getDeltaTime();
            if (Gdx.input.isKeyPressed(Keys.UP))
                player.y += 50 * Gdx.graphics.getDeltaTime();
        }

    Read the article

  • How can I write data to a file that users can't easily edit?

    - by ThePlan
    While working on game saving and loading, I figured I could just write all the variable values to a file and then load that file from its default location at any time. However, from the very beginning it sounded like an odd job. I know about serialization and Boost, but that seems complicated, so I figured I'd keep it simple. But I've run across this huge issue: no matter what file I write with C++, users can get their hands on it; they can edit their position, remove a boss, or add new weapons for themselves. My question is: how can I create a file in C++ which cannot be edited or opened with a text editor such as Notepad? (I'm not trying to make a file which is impossible to open, but a file which will give the user a headache if he tries to edit it through the usual methods.)
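
    The usual answer is obfuscation plus a tamper check; that is not security, but it is exactly the "headache for Notepad users" being asked for. A language-agnostic sketch of the idea, shown in C#: XOR the save bytes with a repeating key and append a simple checksum.

        using System;

        static byte[] Encode(byte[] data, byte[] key)
        {
            var outBytes = new byte[data.Length + 4];
            uint checksum = 0;
            for (int i = 0; i < data.Length; i++)
            {
                checksum += data[i];                                // running tamper check
                outBytes[i] = (byte)(data[i] ^ key[i % key.Length]); // garble the plain text
            }
            BitConverter.GetBytes(checksum).CopyTo(outBytes, data.Length);
            return outBytes;
        }

    Decoding is the same XOR in reverse; if the recomputed checksum disagrees with the stored one, the save was edited by hand and can be rejected.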

    Read the article

  • Why are MVC & TDD not employed more in game architecture?

    - by secoif
    I will preface this by saying I haven't looked at a huge amount of game source, nor built much in the way of games. But coming from trying to employ 'enterprise' coding practices in web apps, looking at game source code seriously hurts my head: "What is this view logic doing in with business logic? This needs refactoring... so does this. Refactor, refactorrr." This worries me as I'm about to start a game project, and I'm not sure whether trying to MVC/TDD the dev process is going to hinder us or help us, as I don't see many game examples that use this, or much push for better architectural practices in the community. The following is an extract from a great article on prototyping games, though to me it seemed exactly the attitude many game devs use when writing production game code: "Mistake #4: Building a system, not a game ... if you ever find yourself working on something that isn't directly moving you forward, stop right there. As programmers, we have a tendency to try to generalize our code, and make it elegant and able to handle every situation. We find that itch terribly hard not to scratch, but we need to learn how. It took me many years to realize that it's not about the code, it's about the game you ship in the end. Don't write an elegant game component system, skip the editor completely and hardwire the state in code, avoid the data-driven, self-parsing, XML craziness, and just code the damned thing. ... Just get stuff on the screen as quickly as you can. And don't ever, ever, use the argument 'if we take some extra time and do this the right way, we can reuse it in the game'. EVER." Is it because games are (mostly) visually oriented, so it makes sense that the code will be weighted heavily in the view, and thus any benefit from moving stuff out to models/controllers is fairly minimal, so why bother? I've heard the argument that MVC introduces a performance overhead, but this seems to me to be a premature optimisation; there'd be more important performance issues to tackle before you worry about MVC overheads (e.g. render pipeline, AI algorithms, data structure traversal, etc.). Same thing regarding TDD. It's not often I see games employing test cases, but perhaps this is due to the design issues above (mixed view/business logic) and the fact that it's difficult to test visual components, or components that rely on probabilistic results (e.g. operating within physics simulations). Perhaps I'm just looking at the wrong source code, but why do we not see more of these 'enterprise' practices employed in game design? Are games really so different in their requirements, or is it a people/culture issue (i.e. game devs come from a different background and thus have different coding habits)?

    Read the article

  • Wall jumping collision detection anomaly

    - by Nanor
    I'm creating a game where the player ascends a tower by wall jumping his way to the top. When the player has collided with the right wall they can only jump left, and vice versa. Here is my current implementation:

        if (wallCollision() == "left") {
            player.setPosX(0);
            player.setVelX(0);
            ignoreCollisions = true;
            player.setCanJump(true);
            player.setFacingLeft(false);
        } else if (wallCollision() == "right") {
            player.setPosX(screenWidth - playerWidth * 2);
            player.setVelX(0);
            ignoreCollisions = true;
            player.setCanJump(true);
            player.setFacingLeft(true);
        } else {
            player.setVelY(player.getVelY() + gravity);
        }

    and

        private String wallCollision() {
            if (player.getPosX() < playerWidth && !ignoreCollisions)
                return "left";
            else if (player.getPosX() > screenWidth - playerWidth * 2 && !ignoreCollisions)
                return "right";
            else {
                timeToJump += Gdx.graphics.getDeltaTime();
                if (timeToJump > 0.50f) {
                    timeToJump = 0;
                    ignoreCollisions = false;
                }
                return "jumping";
            }
        }

    If the player is colliding with the left wall, it will switch between the states "left" and "jumping" repeatedly, because the variable ignoreCollisions is toggled repeatedly during collision checks. This gives a chance of either jumping as intended or simply ascending vertically instead of diagonally. I can't figure out an implementation that will reliably make the player jump as intended. Does anyone have any pointers?
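
    One hedged alternative to the timer: make the attachment state explicit, so the wall only "releases" the player when a jump actually happens, rather than after an arbitrary delay. A C# sketch; canJump, velX, velY and the jump speeds are assumed player fields:

        // Sketch: track which wall the player is stuck to; no timer needed.
        enum Wall { None, Left, Right }
        Wall attachedTo = Wall.None;

        void OnWallContact(Wall wall)
        {
            if (attachedTo == Wall.None) { attachedTo = wall; canJump = true; }
        }

        void OnJumpPressed()
        {
            if (attachedTo == Wall.None) return;
            velX = attachedTo == Wall.Left ? +jumpSpeedX : -jumpSpeedX; // away from the wall
            velY = jumpSpeedY;
            attachedTo = Wall.None; // detached until the opposite wall is hit
        }

    Because the state only changes on contact and on jump, it cannot oscillate the way the ignoreCollisions flag does.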

    Read the article

  • Unity3D : Pause Menu - Android

    - by user3666251
    I'm making a 2D game for Android. I've almost completed the game, but now I need a pause option. I added a pause icon on the top right side of the screen; the icon is a GUI texture. Here is what I did so far: I made a script which should bring up some buttons (which is not working) and attached it to the GUITexture. This is the script:

        #pragma strict

        function OnMouseDown()
        {
            Debug.Log("*Pause Menu Opens*");
            Time.timeScale = 0;

            if (GUI.Button(Rect(10, 10, 100, 50), "Restart"));
                Application.LoadLevel(Application.loadedLevel);

            if (GUI.Button(Rect(10, 60, 100, 50), "MainMenu"));
                Application.LoadLevel("MainMenu");
        }

    Now, the problem is that the buttons won't show up; the game freezes at the first frame, but the buttons won't show up. Please, if you can help, I would be really thankful. Thank you.

    Edit #1: I just noticed that when I click "Pause" the game freezes and it takes me to the MainMenu. That's because I added the GUI button which takes you to the main menu. I think the whole script structure is wrong. I also forgot to mention that I'm new to scripting/Unity. Thank you.
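
    Two things are visible in the script above: GUI.Button only draws and responds inside OnGUI, never inside OnMouseDown, and the stray semicolons after each if (...) terminate the condition, so both LoadLevel lines run unconditionally (which is exactly the jump to MainMenu described in the edit). A C# sketch of the intended structure, using the legacy GUI API that matches this era of Unity:

        using UnityEngine;

        public class PauseMenu : MonoBehaviour
        {
            private bool paused;

            void OnMouseDown()             // attached to the pause GUITexture
            {
                paused = true;
                Time.timeScale = 0;        // freeze the game
            }

            void OnGUI()                   // GUI.Button only works here
            {
                if (!paused) return;
                if (GUI.Button(new Rect(10, 10, 100, 50), "Restart"))
                    Application.LoadLevel(Application.loadedLevel);
                if (GUI.Button(new Rect(10, 60, 100, 50), "MainMenu"))
                    Application.LoadLevel("MainMenu");
            }
        }

    (In newer Unity versions Application.LoadLevel is replaced by SceneManager.LoadScene.)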

    Read the article

  • Having the same texture data in different ID3D11Texture2D

    - by bdmnd
    Sorry if this has been answered elsewhere; I'm rather new to DX. My question concerns conservation of resources, specifically textures in VRAM. I assume that upon returning from a call to CreateTexture2D, a copy of any texture data supplied has been copied elsewhere, likely VRAM. Does DX11 have any facility for having multiple ID3D11Texture2D objects which point to the same data? This might at first seem silly, but imagine an ID3D11Texture2D which is an array of textures. In one material, an artist has chosen to blend three identically sized maps, saved on disk as A.dds, B.dds, and C.dds. Then imagine they have another material which also uses three maps, but this time A.dds, B.dds, and D.dds. The shader code knows the diffuse texture is a texture array, and also has the number of layers baked in (three in each case). I would essentially like to set up just two ID3D11Texture2D objects, one for each material, but I don't want to waste VRAM on two identical copies of A.dds and B.dds. I could use explicit texture arrays, of course, but this reduces the number of resources available to the shader and can complicate code somewhat more than would otherwise be needed.

    Read the article

  • Is it better to cut and store all sprites needed from a spritesheet in memory, or cut them out just-in-time?

    - by xLite
    I'm not sure what best practice is here, as I have little experience with this. Essentially what I am asking is: is it better to take your single PNG with all your different sprites on it, cut out every sprite on startup, and store them in memory, then quickly access the already-cut-out sprites from memory; or to keep only the single PNG with all the different sprites in memory, and when you need, for example, a tree, cut the tree out of the PNG and then continue to use it as normal? I imagine the former is more CPU-friendly than the latter but less memory-friendly, and vice versa. I want to know what the norm is for game dev. This is a pixel-based game using 2D art. Each PNG is actually an avatar's sprite sheet, with each body part separated and then later joined to form the full body of the avatar.
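
    For what it's worth, the usual third option is to cut nothing at all: keep the sheet as a single texture and draw each sprite through a source rectangle, so there is no cutting cost and no duplicated pixel data. An XNA-style C# sketch; the rectangle coordinates are made up for illustration:

        // Sketch: a "sprite" is just a Rectangle into the shared sheet texture.
        Rectangle treeSource = new Rectangle(64, 0, 32, 48);   // where the tree lives on the sheet
        spriteBatch.Draw(sheetTexture, position, treeSource, Color.White);

    This also keeps all of an avatar's body parts on one texture, which is good for draw-call batching when the parts are composited.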

    Read the article

  • Problems loading Hilva tutorials

    - by Beska
    I'm a newcomer to XNA, and I'm evaluating some libraries. The Hilva Graphics Engine looks interesting, and I'm trying to run its tutorials. However, all of them give me errors. For example, if I download the ParallaxMappingSample demo and try to build it, I get:

        Error 1: Error loading pipeline assembly "C:\Users\Me\Desktop\ParallaxMappingSample\Hilva.Content.dll". ParallaxMappingSample

    I get similar errors for all of the samples. Unfortunately, this error isn't very enlightening. I can see Hilva.Content.dll in the appropriate directory. I tried removing and re-adding the reference from the content project, but I get the same error. I'm not sure it's relevant, but I'm on Windows 7, using Microsoft Visual Studio 2010 and XNA 4.0. Is there an easy (or difficult) solution? EDIT: If you happen to try this, even if you don't have a solution, let me know about it in a comment. Whether it works for you, or you get the same problem, either result would be something that might let me know if it's just a problem with the tutorial or if it's on my end.

    Read the article

  • How do I randomly generate an infinite top-down 2D level with separate sections?

    - by Bagofsheep
    I've read many other questions/answers about random level generation, but most of them deal with either randomly/procedurally generating 2D levels viewed from the side, or 3D levels. What I'm trying to achieve is sort of like looking straight down on a Minecraft map: there is no height, but the borders of each "biome" or "section" of the map are random and varied. I already have basic code that can generate a perfectly square level with the same tileset (randomly picking segments from the tileset image), but I've encountered a major issue with wanting the level to be infinite: beyond a certain point, the tiles' positions become negative on one or both of the axes. The code I use to draw only the tiles the player can see relies on taking the tile's position and converting it to the index number that represents it in the array, and as you well know, arrays cannot have a negative index. Here is some of my code.

    This generates the square (or rectangle) of tiles:

        // Scale is in tiles
        public void Generate(int sX, int sY)
        {
            scaleX = sX;
            scaleY = sY;

            for (int y = 0; y <= scaleY; y++)
            {
                tiles.Add(new List<Tile>());
                for (int x = 0; x <= scaleX; x++)
                {
                    tiles[tiles.Count - 1].Add(tileset.randomTile(x * tileset.TileSize, y * tileset.TileSize));
                }
            }
        }

    Before I changed the code (after realizing an array index couldn't be negative), my for loops looked something like this, to center the map around (0, 0):

        for (int y = -scaleY / 2; y <= scaleY / 2; y++)
            for (int x = -scaleX / 2; x <= scaleX / 2; x++)

    Here is the code that draws the tiles:

        int startX = (int)Math.Floor((player.Position.X - (graphics.Viewport.Width) - tileset.TileSize) / tileset.TileSize);
        int endX = (int)Math.Ceiling((player.Position.X + (graphics.Viewport.Width) + tileset.TileSize) / tileset.TileSize);
        int startY = (int)Math.Floor((player.Position.Y - (graphics.Viewport.Height) - tileset.TileSize) / tileset.TileSize);
        int endY = (int)Math.Ceiling((player.Position.Y + (graphics.Viewport.Height) + tileset.TileSize) / tileset.TileSize);

        for (int y = startY; y < endY; y++)
        {
            for (int x = startX; x < endX; x++)
            {
                if (x >= 0 && y >= 0 && x <= scaleX && y <= scaleY)
                    tiles[y][x].Draw(spriteBatch);
            }
        }

    So to summarize what I'm asking: first, how do I randomly generate a top-down 2D map with different sections (not chunks per se, but areas with different tile sets), and second, how do I get past this negative array index issue?
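
    A sketch of one way past the negative-index problem: store tiles in a dictionary keyed by their integer coordinates instead of a nested list. Negative coordinates become legal, and tiles can be generated lazily the first time they are looked at, which is what makes the map effectively unbounded. Tile and tileset are the question's own types, assumed unchanged:

        using System.Collections.Generic;

        Dictionary<(int x, int y), Tile> tiles = new Dictionary<(int x, int y), Tile>();

        Tile GetTile(int x, int y)
        {
            if (!tiles.TryGetValue((x, y), out Tile tile))
            {
                tile = tileset.randomTile(x * tileset.TileSize, y * tileset.TileSize);
                tiles[(x, y)] = tile; // generated on demand, remembered afterwards
            }
            return tile;
        }

    The draw loop then calls GetTile(x, y) for every visible coordinate with no bounds check at all. For the biome borders, the standard trick is to choose the tileset from a second, low-frequency noise value at (x, y), so neighbouring tiles usually agree and the section borders come out random and varied.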

    Read the article

  • Developing GLSL Shaders?

    - by skln
    I want to create shaders, but I need a tool to create them and see the visual result before I put them into my game, so I can determine whether a problem is with my game or with the shader I created. I've looked at some tools, like RenderMonkey and OpenGL Shader Designer. From what I recall of RenderMonkey, it had a fairly easy way to define your own attributes (now "in" variables in vertex shaders, as of GLSL 330), though I can't remember to what extent. Shader Designer requires a plugin that I didn't even bother looking at creating, because it's an external process and plugin. Are there any tools out there that support a scripting language where I could easily provide specific input, such as float movement = sin(elapsedTime()); and then define in float movement; in the vertex shader? It'd be cool if anyone could share how they develop shaders, or whether they just code away and then plug the shader into their game, hoping to get the result they wanted.
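
    A common lightweight workflow instead of an external tool is hot reloading: watch the shader source files and recompile them inside the running game, so edits show up in the real renderer, with the real inputs, within seconds. A C# sketch; pendingReloads is an assumed thread-safe queue (e.g. ConcurrentQueue<string>) drained on the render thread:

        using System.IO;

        // Sketch: queue a recompile whenever a shader file changes on disk.
        var watcher = new FileSystemWatcher("shaders") { Filter = "*.glsl" };
        watcher.Changed += (sender, e) => pendingReloads.Enqueue(e.FullPath);
        watcher.EnableRaisingEvents = true;

    Because the shader is compiled by the game itself, this also removes the "is it my game or my shader?" ambiguity that standalone editors leave open.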

    Read the article
