Search Results

Search found 12087 results on 484 pages for 'game mechanics'.


  • Why do consoles have so little memory compared to classic computers?

    - by jokoon
    I remember the PlayStation having 2MB of RAM and 1MB of graphics memory. The PlayStation 3 has only 256MB of RAM and 256MB of graphics memory, and I'm sure that on the day the console was released, even a laptop's "standard" capacity was at least 1GB. So why do they put so little memory in their machines, when developers would benefit a lot from having more? Is console memory so much faster than desktop memory, and therefore more expensive? Or is extra memory not worth that much to developers? What are the Sony/XBox/Nintendo engineers thinking that leads them all to the same decision?

    Read the article

  • a flexible data structure for geometries

    - by AkiRoss
    What data structure would you use to represent meshes that are to be altered (e.g. adding or removing faces, vertices and edges) and that have to be queried in different ways (e.g. finding all the triangles intersecting a certain ray, or finding all the triangles "visible" from a given point in space)? I need to consider multiple aspects of the mesh: its geometry, its topology and its spatial information. The meshes are rather big, say 500k triangles, so I am going to use the GPU when computations are heavy. I tried using arrays of vertices and arrays of indices, but I do not like adding and removing vertices from them. Also, using arrays totally ignores spatial and topological information, which I may need when studying the mesh. So I thought about using custom doubly linked list data structures, but I believe doing so will require me to copy the data to array buffers before going to the GPU. I also thought about using a BST, but I am not sure it fits. Any help is appreciated. If I have been too fuzzy and you require other information, feel free to ask.
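    One common answer to the "editable but queryable" requirement is a half-edge structure stored in flat vectors, paired with a separate spatial index (a BVH or octree over the faces) that is rebuilt or refitted after edits. A minimal C++ sketch under those assumptions, index-based rather than pointer-based precisely so the arrays can still be copied straight into GPU buffers:

        #include <cstdint>
        #include <vector>

        // Indices into the vectors below; ~0u can mark "no twin" on border edges.
        struct Vertex   { float x, y, z; uint32_t halfEdge; };
        struct HalfEdge { uint32_t origin, twin, next, face; };
        struct Face     { uint32_t halfEdge; };

        struct Mesh {
            std::vector<Vertex>   vertices;   // geometry
            std::vector<HalfEdge> halfEdges;  // topology (adjacency walks)
            std::vector<Face>     faces;
            // spatial queries (ray casts, visibility) go through a separate
            // BVH/octree over the faces, refitted after each batch of edits
        };

    Deletions can be handled with free lists or swap-and-pop so the vectors stay dense for uploading.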

    Read the article

  • Changes to the myApp.js file are reverted when the project is built - Cocos2d-x

    - by Mansoor
    I am trying to make some changes to the myApp.js file of a cocos2d-x project for Android in Eclipse, but I am not able to. I am actually trying to change the default background image of my app, but when I run the project all the changes revert to their previous values. For example, this is the default line where the background image is set:

        this.sprite = cc.Sprite.create("res/HelloWorld.png");

    I change it to the following line:

        this.sprite = cc.Sprite.create("res/CloseNormal.png");

    But when I run the project, CloseNormal.png goes back to HelloWorld.png. I am using Windows 7 and cocos2d-x 2.2.2. Why is this happening? Can anybody help me?

    Read the article

  • What is the right data type in C++ for OpenGL scene representation with GLSL?

    - by Rarach
    I am programming in C++ and OpenGL with GLSL. Until now I have been using a data structure composed of a std::vector filled with structures of vertices and their parameters (position, normal, color, ...), as a global variable for all the code. My question is: as I am using VBOs for drawing, is this a good approach to the problem? I am asking because I happen to have a lot of memory-related trouble with this structure. I am trying to generate a terrain with a lot of vertices (more than 1 million). This seems to work, but as I refill the buffer I get memory-related issues (crashes that occur more or less randomly). So again, the question is: is this a good data structure to use (in which case I should look for the faults in my code), or should I change to something else? What data structure would be advisable?
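    For what it's worth, a minimal sketch of one common layout, assuming GLEW for the buffer entry points (an illustration, not the poster's code): keep an interleaved vertex struct in a locally owned std::vector, reserve capacity up front, and orphan the VBO storage when refilling so the driver never reads a half-updated buffer.

        #include <GL/glew.h>
        #include <vector>

        struct TerrainVertex {
            float position[3];
            float normal[3];
            float color[4];
        };

        void refill(GLuint vbo, const std::vector<TerrainVertex>& verts) {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            // Orphan the old storage, then upload; this avoids stalls when
            // the terrain is regenerated while the GPU still reads old data.
            glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(TerrainVertex),
                         nullptr, GL_DYNAMIC_DRAW);
            glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(TerrainVertex),
                         verts.data(), GL_DYNAMIC_DRAW);
        }

    Random crashes on refill are often an out-of-range write into the CPU-side vector or a size mismatch between the vector and the glBufferData call, so asserting verts.size() against what the index buffer expects is a cheap first check.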

    Read the article

  • ScissorStack libGDX example?

    - by user36531
    I can't find a good resource or tutorial on how to do this. I would appreciate it if someone could provide a ScissorStack example used from an entity class, i.e. using ScissorStack on a PlayerClass so that the map renders only within, say, 5 tiles around the player sprite. That would then allow me to create a Pawn class and apply the same methodology to give a pawn sprite a lower number, like only rendering 1 tile around the pawn's location.
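    A minimal sketch of the usual libGDX pattern, assuming a SpriteBatch and an OrthographicCamera; TILE_SIZE, clipRadius and drawMap are illustrative names, not libGDX API:

        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;
        import com.badlogic.gdx.math.Rectangle;
        import com.badlogic.gdx.scenes.scene2d.utils.ScissorStack;

        // Clip map rendering to a square of (2 * clipRadius + 1) tiles
        // centred on an entity (player: radius 5, pawn: radius 1).
        void renderClipped(SpriteBatch batch, OrthographicCamera camera,
                           float entityX, float entityY, int clipRadius) {
            float size = (2 * clipRadius + 1) * TILE_SIZE;
            Rectangle clipBounds =
                new Rectangle(entityX - size / 2, entityY - size / 2, size, size);
            Rectangle scissors = new Rectangle();
            ScissorStack.calculateScissors(
                camera, batch.getTransformMatrix(), clipBounds, scissors);
            if (ScissorStack.pushScissors(scissors)) {
                drawMap(batch);  // whatever draws the tile map
                batch.flush();   // must flush before popping or the clip is lost
                ScissorStack.popScissors();
            }
        }

    The flush before popScissors is the step most examples omit; without it the batched draws are issued after the scissor rectangle has already been removed.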

    Read the article

  • Searching for a fast skeletal animation algorithm

    - by igf
    Is it theoretically possible to dynamically animate a scene with 100-150 character meshes of about 400 polys each on high-end GL ES 2.0 mobile devices, or should I definitely use prerendered keyframe animation? The scene has only one light source and precalculated shadow maps, with a view from the top like in Warcraft 3. There are no other meshes or objects. 2D collision detection between objects is calculated via spatial hashing. It can be any algorithm (ragdoll aside), but it must provide fast and simple skeletal animation for a frame with 100+ low-poly meshes. Any ideas?
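    At that vertex count, GPU matrix-palette skinning is usually the first thing to try. A minimal GL ES 2.0 vertex-shader sketch, assuming four bone influences per vertex; the attribute and uniform names are illustrative:

        // Linear blend skinning against a palette of bone matrices.
        attribute vec3 a_position;
        attribute vec4 a_boneWeights;
        attribute vec4 a_boneIndices;   // integers passed as floats on ES 2.0
        uniform mat4 u_bones[32];       // palette size bounded by uniform space
        uniform mat4 u_mvp;

        void main() {
            mat4 skin = u_bones[int(a_boneIndices.x)] * a_boneWeights.x
                      + u_bones[int(a_boneIndices.y)] * a_boneWeights.y
                      + u_bones[int(a_boneIndices.z)] * a_boneWeights.z
                      + u_bones[int(a_boneIndices.w)] * a_boneWeights.w;
            gl_Position = u_mvp * (skin * vec4(a_position, 1.0));
        }

    150 meshes at 400 polys is only about 60k skinned triangles per frame, which is within reach of ES 2.0 class hardware if the characters share one shader and state changes per frame are kept down.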

    Read the article

  • Implementing my Entity System. Questions about some problems I have found.

    - by Notbad
    Hi! During this week I have been deciding on the implementation of my entity system. It is a big topic, so it has been difficult to choose one option from the whole. This is my decision:

    1) I don't have an entity class; an entity is just an id.
    2) I have systems that contain a list of components (the list is homogeneous; I mean, RenderSystem will just have RenderComponents).
    3) Components will be just data.
    4) There will be some kind of "entity prototypes" in a manager or something, from which we will create entity instances. Ideally they will define the type of components an entity has and its initialization data.
    5) Prototype code to create an entity (this is from the top of my head):

        int id = World::getInstance()->createEntity("entity template");

    6) This will notify all systems that a new entity has been created, and if the entity needs a component that the system handles, the system will add it to the entity.

    OK, those are the ideas. Let's see if someone can help with the problems:

    1) The main problem is the templates that are sent to the systems in the creation process to populate the entity with the needed components. What would you use: an OR(ed) int? A list of strings?
    2) How do I do initialization for components when the entity has been created? How do I store this in the template? I have thought about having a virtual function in the template that, after the entity is created and populated, gets the components and sets their initialization values.
    3) Don't you think this is a lot of work for just entity creation? Sorry for the long post; I have tried to lay out my ideas and findings so that others have a starting point besides my problems. Thanks in advance, Notbad.
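    On question 1, one common choice is a component bitmask, which makes the "does this system care about this entity?" test a single AND. A C++ sketch with made-up component ids, as an illustration of the idea rather than a full framework:

        #include <cstdint>

        enum ComponentBit : uint32_t {
            TransformBit = 1u << 0,
            RenderBit    = 1u << 1,
            PhysicsBit   = 1u << 2,
        };

        struct EntityTemplate {
            uint32_t componentMask;  // which components the entity gets
            // per-component initialization data can live alongside this
        };

        struct RenderSystem {
            static const uint32_t required = TransformBit | RenderBit;
            void onEntityCreated(int id, const EntityTemplate& t) {
                if ((t.componentMask & required) == required) {
                    // create and attach this system's component for id,
                    // then apply the template's initialization data
                }
            }
        };

    Strings are friendlier in data files; many engines use strings in the template files and hash them to bits or ids at load time, which gives both.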

    Read the article

  • XNA - Obtaining depth from the scene's render target?

    - by user1423893
    I'm currently rendering my scene to a render target so it can be used for rendering methods such as post processing and order-independent transparency:

        rtScene = new RenderTarget2D(
            GraphicsDevice,
            GraphicsDevice.PresentationParameters.BackBufferWidth,
            GraphicsDevice.PresentationParameters.BackBufferHeight,
            false,
            SurfaceFormat.Rgba64,
            DepthFormat.Depth24Stencil8, // a depth format is required for objects to draw correctly (e.g. a wireframe model surrounding a model)
            0,
            RenderTargetUsage.PreserveContents);

    I am required to use RenderTargetUsage.PreserveContents so that the same render target can be rendered to multiple times, once for each of the draw methods below:

        DrawBackground
        DrawDeferred
        DrawForward
        DrawTransparent

    The problem is that DrawTransparent requires a copy of the scene's depth as a texture. Is there any way to obtain this from the scene render target above (rtScene)? I can't have more than one render target with RenderTargetUsage.PreserveContents, as this causes problems on hardware such as the Xbox 360, so rendering the depth to a separate render target at the same time as I render the scene isn't possible as far as I can tell. Would I be able to get around this problem by "ping-ponging" two render targets (using the more compatible RenderTargetUsage.DiscardContents) and using the result for the depth texture?
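    In XNA 4 the depth buffer attached to a render target cannot be read back as a texture, so a common workaround (a suggestion, not from the question) is a separate depth-only pass into a single-channel target that is written once and only read afterwards, which is fine with DiscardContents. A sketch, where DrawDepth stands in for drawing the scene with an effect that outputs linear depth:

        // One-channel linear-depth target; no PreserveContents needed because
        // it is filled in a single pass.
        RenderTarget2D rtDepth = new RenderTarget2D(
            GraphicsDevice,
            GraphicsDevice.PresentationParameters.BackBufferWidth,
            GraphicsDevice.PresentationParameters.BackBufferHeight,
            false,
            SurfaceFormat.Single,
            DepthFormat.Depth24Stencil8);

        GraphicsDevice.SetRenderTarget(rtDepth);
        GraphicsDevice.Clear(Color.White);   // far plane
        DrawDepth();                         // hypothetical depth-output pass
        GraphicsDevice.SetRenderTarget(rtScene);
        // DrawBackground, DrawDeferred, DrawForward, then
        // DrawTransparent can sample rtDepth as a texture.

    It costs an extra geometry pass, but it behaves predictably on the Xbox 360, where the resolve rules are exactly what make sharing depth between targets awkward.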

    Read the article

  • Why are my scene's depth values not being written to my DepthStencilView?

    - by dotminic
    I'm rendering to a depth map in order to use it as a shader resource view, but when I sample the depth map in my shader, the red component has a value of 1 while all other channels have a value of 0. The Texture2D I use to create the DepthStencilView is bound with the D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE flags, the DepthStencilView has the DXGI_FORMAT_D32_FLOAT format, and the ShaderResourceView has the DXGI_FORMAT_R32_FLOAT format with dimension D3D11_SRV_DIMENSION_TEXTURE2D. I set the depth map render target and draw my scene; once that is done, the back buffer render target and depth stencil are set on the output merger, and I use the depth map shader resource view as a texture in my shader, but the depth value in the red channel is constantly 1. I'm not getting any runtime errors from D3D, and no compile-time warnings or anything. I'm not sure what I'm missing here at all. I have the impression the depth value is always being set to 1. I have not set any depth/stencil states, and AFAICT depth writing is enabled by default. The geometry is being rendered correctly, so I'm pretty sure depth writing is enabled. The device is created with the appropriate debug flags:

        #if defined(DEBUG) || defined(_DEBUG)
            deviceFlags |= D3D11_CREATE_DEVICE_DEBUG | D3D11_RLDO_DETAIL;
        #endif

    This is how I create my depth map (I've omitted error checking for the sake of brevity):

        D3D11_TEXTURE2D_DESC td;
        td.Width = width;
        td.Height = height;
        td.MipLevels = 1;
        td.ArraySize = 1;
        td.Format = DXGI_FORMAT_R32_TYPELESS;
        td.SampleDesc.Count = 1;
        td.SampleDesc.Quality = 0;
        td.Usage = D3D11_USAGE_DEFAULT;
        td.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
        td.CPUAccessFlags = 0;
        td.MiscFlags = 0;
        _device->CreateTexture2D(&td, 0, &this->_depthMap);

        D3D11_DEPTH_STENCIL_VIEW_DESC dsvd;
        ZeroMemory(&dsvd, sizeof(dsvd));
        dsvd.Format = DXGI_FORMAT_D32_FLOAT;
        dsvd.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
        dsvd.Texture2D.MipSlice = 0;
        _device->CreateDepthStencilView(this->_depthMap, &dsvd, &this->_dmapDSV);

        D3D11_SHADER_RESOURCE_VIEW_DESC srvd;
        srvd.Format = DXGI_FORMAT_R32_FLOAT;
        srvd.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        srvd.Texture2D.MipLevels = td.MipLevels;
        srvd.Texture2D.MostDetailedMip = 0;
        _device->CreateShaderResourceView(this->_depthMap, &srvd, &this->_dmapSRV);
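    One diagnostic worth trying (a suggestion, not a diagnosis): perspective depth stored in a D32 buffer is heavily skewed toward 1, so geometry even moderately far from the near plane reads as approximately 1. Linearizing the sampled value in the shader shows whether the map actually contains data:

        // HLSL: convert a sampled hyperbolic depth d in [0,1] back to linear
        // view-space Z, given the projection's near and far plane distances.
        float linearZ(float d, float zNear, float zFar)
        {
            return zNear * zFar / (zFar - d * (zFar - zNear));
        }

    If linearZ produces sensible distances, the depth map is fine and only the interpretation of the raw values was misleading.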

    Read the article

  • Double Buffering in Panda3D (C++)

    - by jsvcycling
    How would I go about using double buffering (to create a loading screen) in Panda3D with C++? I've searched Google and found some forums that discuss the concept of swapping buffers, but I haven't seen any that show source code, specifically for Panda3D/C++. I'd like to stay away from pure OpenGL code and work through Panda3D, but if I have no other choice, then I'll have to go with OpenGL.
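    Panda3D windows are double-buffered already; the usual loading-screen trick is to show the splash, force a couple of frames through the pipeline so it reaches the front buffer, and only then start the heavy loads. A rough sketch of that idea, assuming the standard PandaFramework setup; the model paths are placeholders and the exact calls should be checked against the Panda3D manual:

        #include "pandaFramework.h"
        #include "graphicsEngine.h"

        int main(int argc, char* argv[]) {
            PandaFramework framework;
            framework.open_framework(argc, argv);
            WindowFramework* window = framework.open_window();

            // Put a splash card on screen first.
            NodePath splash = window->load_model(framework.get_models(), "splash.egg");
            splash.reparent_to(window->get_render_2d());

            // Render twice so both buffers of the swap chain hold the splash.
            GraphicsEngine* engine = GraphicsEngine::get_global_ptr();
            engine->render_frame();
            engine->render_frame();

            // Heavy loading happens while the splash stays visible.
            window->load_model(framework.get_models(), "big_level.egg");

            splash.remove_node();
            framework.main_loop();
            framework.close_framework();
            return 0;
        }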

    Read the article

  • Need the co-ordinates of the inner polygon

    - by user960567
    Let's say I have this diagram: I have all the co-ordinates of the outer polygon, and the distance d between the inner and outer polygon is also given. How do I calculate the inner polygon's co-ordinates? Edit: I was able to solve the issue by getting the mid-points of all the lines. From each mid-point I can move a distance d, so I get three points. Now I have 3 points and 3 slopes, which give me three new equations. Solving the equations simultaneously gives the 3 points.
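    For reference, the standard inward-offset construction (assuming, as the diagram suggests, a convex polygon offset uniformly by d): write each edge i as a line with unit outward normal, shift it inward by d, and intersect consecutive shifted lines. In LaTeX:

        \[ \hat{n}_i \cdot p = c_i \quad\longrightarrow\quad \hat{n}_i \cdot p = c_i - d \]
        \[ p'_i:\qquad \hat{n}_{i} \cdot p'_i = c_{i} - d, \qquad \hat{n}_{i+1} \cdot p'_i = c_{i+1} - d \]

    Each new vertex p'_i is the solution of a 2x2 linear system, which matches the mid-point-and-slope method from the edit above but works for any number of edges.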

    Read the article

  • What is going on in this SAT/vector projection code?

    - by ssb
    I'm looking at the example XNA SAT collision code presented here: http://www.xnadevelopment.com/tutorials/rotatedrectanglecollisions/rotatedrectanglecollisions.shtml See the following code:

        private int GenerateScalar(Vector2 theRectangleCorner, Vector2 theAxis)
        {
            // Using the formula for vector projection, take the corner being
            // passed in and project it onto the given axis
            float aNumerator = (theRectangleCorner.X * theAxis.X) + (theRectangleCorner.Y * theAxis.Y);
            float aDenominator = (theAxis.X * theAxis.X) + (theAxis.Y * theAxis.Y);
            float aDivisionResult = aNumerator / aDenominator;
            Vector2 aCornerProjected = new Vector2(aDivisionResult * theAxis.X, aDivisionResult * theAxis.Y);

            // Now that we have our projected vector, calculate a scalar of that
            // projection that can be used to more easily do comparisons
            float aScalar = (theAxis.X * aCornerProjected.X) + (theAxis.Y * aCornerProjected.Y);
            return (int)aScalar;
        }

    I think the problems I'm having with this come mostly from translating physics concepts into data structures. For example, earlier in the code there is a calculation of the axes to be used, and these are stored as Vector2s, found by subtracting one point from another; however, those points are also stored as Vector2s. So are the axes being stored as slopes in a single Vector2? Next, what exactly does the Vector2 produced by the vector projection code represent? That is, I know it represents the projected vector, but as a Vector2, what does this represent? A point on a line? Finally, what does the scalar at the end actually represent? It's fine to tell me that you're getting a scalar value of the projected vector, but none of the information I can find online seems to explain a scalar of a vector as it's used in this context. I don't see angles or magnitudes with these vectors, so I'm a little disoriented when it comes to thinking in terms of physics. If this final scalar calculation is just a dot product, how is that directly applicable to SAT from here on? Is this what I use to calculate maximum/minimum values for overlap? I guess I'm just having trouble figuring out exactly what the dot product represents in this particular context. Clearly I'm not quite up to date on my elementary physics, but any explanations would be greatly appreciated.
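    One clarifying observation (not from the tutorial): the axes here are direction vectors, not slopes, and if an axis is normalized the whole routine collapses to a single dot product, because dotting the projected vector with the axis just recovers the signed length of the projection along that axis. In XNA terms:

        // For a unit-length axis this equals the signed extent of the corner
        // along the axis, which is exactly what SAT compares: take the min and
        // max of this value over each rectangle's corners and test the
        // resulting intervals for overlap on every candidate axis.
        float scalar = Vector2.Dot(theRectangleCorner, theAxis);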

    Read the article

  • Movement of sprites with Kinect and XNA

    - by pablopp83
    I'm working on a project with the Kinect SDK and XNA 4.0. I need to take the position of the hands and draw a sprite over it. I'm doing it directly and, because of that, I get a "trembling hands" effect. So I was thinking of making the sprite move from the previous position to the new one, given in every frame by the new hand position. This way, the sprite does not jump from one position to another. This is working just fine, but I'm using a constant value for the velocity, and I would really like to use a variable velocity given by the difference between the previous and the new position; that is, if the hand moves more quickly in reality, the velocity will be higher. I really don't have a clue how to make this work. Can somebody point me in the right direction? Thanks.
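    A common way to get exactly that behaviour is exponential smoothing: move a fixed fraction of the remaining distance each frame, so the speed is automatically proportional to how far the hand jumped. A sketch in XNA-style C#, where smoothing is a made-up tunable constant:

        // Fast when the hand moves far, slow (and therefore steady) when it
        // only trembles.
        const float smoothing = 10f; // higher = snappier, lower = smoother
        float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
        spritePosition += (handPosition - spritePosition)
                          * MathHelper.Clamp(smoothing * dt, 0f, 1f);

    The Kinect SDK also exposes skeleton smoothing parameters (TransformSmoothParameters) that filter the joint data before it reaches the game, which can be combined with the above.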

    Read the article

  • Binding BoundingSpheres to a world matrix in XNA

    - by NDraskovic
    I made a program that loads the locations of items on the scene from a file, like this:

        using (StreamReader sr = new StreamReader(OpenFileDialog1.FileName))
        {
            String line;
            while ((line = sr.ReadLine()) != null)
            {
                row = line.Split(',');
                model = row[0];
                x = row[1];
                y = row[2];
                z = row[3];
                elements.Add(Convert.ToInt32(model));
                data.Add(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)));
                spheres.Add(new BoundingSphere(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)), 1f));
            }
        }

    I also have a list of BoundingSpheres (called spheres) that adds a new bounding sphere for each line from the file. In this program I have one item (a simple box) that moves (it has its own world matrix, called matrixBox), while the other items are static the entire time (there is a world matrix, called simply world, that holds those elements). The problem is that when I move the box, the bounding spheres move with it. So how can I bind all BoundingSpheres (except the one corresponding to the box) to the static world matrix so that they stay in their place when the box moves?
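    A sketch of the usual arrangement (a suggestion, not the poster's code): leave the static spheres exactly as loaded, since their centres are already in world space, and give the box a local-space sphere that is re-derived from matrixBox every frame with BoundingSphere.Transform:

        // Static spheres: never touched after loading.
        // Box sphere: recomputed from the box's world matrix each update.
        BoundingSphere boxLocal = new BoundingSphere(Vector3.Zero, 1f);
        BoundingSphere boxWorld = boxLocal.Transform(matrixBox);

        foreach (BoundingSphere s in spheres)
        {
            if (boxWorld.Intersects(s))
            {
                // react to the collision
            }
        }

    If the spheres currently follow the box, the likely cause is that matrixBox is being applied to everything; keeping it out of the static items' draw and collision code is the whole fix.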

    Read the article

  • Cheap ways to do scaling ops in a shader?

    - by Nick Wiggill
    I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete, grid-based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per-vertex position attribute cost from 12 bytes to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain. Currently in my code, one unit in GLSL shaders is equal to 1m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre. So I see these potential solutions to scale the positional data correctly once it arrives at the vertex shader stage (a sketch of the first option follows below):

    1) Use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU-side code, and do a division by 10 in the vertex shader to scale incoming decimetre values back to metres. Arbitrary (non-PoT) divisions tend to be slow, however.
    2) Use (some power of two):1 scaling (e.g. 8:1), which enables the use of a bitshift (e.g. val >> 3) to do the division... not sure how performant this is in shaders, though. Not as intuitive to read, but possibly quite a bit faster than dividing by a non-PoT value.
    3) Use a texture as a lookup table. I've heard that this is really fast.

    Or whatever solutions others can offer to achieve the same result: minimal vertex data with sensible scaling.
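    For what it's worth, a division by a compile-time constant is normally folded into a multiply by the shader compiler, so option 1 need not involve a real divide at all. A GLSL sketch with illustrative names:

        attribute vec3 a_position;  // decimetre-scaled shorts, seen as floats
        uniform mat4 u_mvp;

        void main() {
            // 0.1 is a literal: the compiler emits one MUL, not a DIV.
            vec3 metres = a_position * 0.1;
            gl_Position = u_mvp * vec4(metres, 1.0);
        }

    Folding the 0.1 into the model-view-projection matrix on the CPU removes even that multiply, at which point the rescaling costs nothing per vertex.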

    Read the article

  • Spherical to Cartesian Coordinates

    - by user1258455
    Well, I'm reading Frank Luna's DirectX 10 book and, while trying to understand the first demo, I found something that's not very clear, at least to me. In the updateScene method, when I press A, S, W or D, the angles mTheta and mPhi change, but after that there are three lines of code that I don't understand exactly:

        // Convert Spherical to Cartesian coordinates: mPhi measured from +y
        // and mTheta measured counterclockwise from -z.
        float x = 5.0f*sinf(mPhi)*sinf(mTheta);
        float z = -5.0f*sinf(mPhi)*cosf(mTheta);
        float y = 5.0f*cosf(mPhi);

    I mean, the comment explains what they do; it says the code converts spherical coordinates to Cartesian coordinates. But mathematically, why? Why is the x value calculated by the product of the sines of both angles, and z by the product of a sine and a cosine? And why does y use just the cosine? After that, those values (x, y and z) are used to build the view matrix. The book doesn't explain mathematically why those values are calculated like that (and I didn't find anything to help me understand it in the first part of the book, "Mathematical prerequisites"), so it would be good if someone could explain what exactly happens in those lines of code, or just give me a link that helps me understand the math. Thanks in advance!
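    The identities behind those three lines, in LaTeX (radius \rho = 5, y-up, \varphi measured from the +y axis, \theta from -z):

        \[ x = \rho \sin\varphi \sin\theta, \qquad y = \rho \cos\varphi, \qquad z = -\rho \sin\varphi \cos\theta \]

    Reading them geometrically: \cos\varphi is the height of the point along the polar (+y) axis, and \sin\varphi is the radius of the horizontal circle the point sits on at that height. That horizontal radius is then split between the x and z axes by \theta, exactly as in ordinary 2-D polar coordinates; the minus sign and the sine/cosine assignment on x and z only encode the book's convention that \theta starts at -z and grows counterclockwise.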

    Read the article

  • LWJGL in Visual Studio (possible)?

    - by Suds
    I switched from XNA and C# to LWJGL and Java about 14 months ago. Inherently, this called for a switch in IDE. I started out using Eclipse, because I have also done some basic Android development in the past, but soon switched to NetBeans; Eclipse is just too primitive. After using NetBeans for about six months, I've started looking over the fence at Visual Studio 11, toying with Metro apps for Windows 8. Now I want to know: is there any known way to use Visual Studio for LWJGL?

    Read the article

  • TGA loader: reverse height

    - by aVoX
    I wrote a TGA image loader in Java which works perfectly for files created with GIMP, as long as they are saved with the "origin" option set to "Top Left" (note: TGA files are actually meant to be stored upside down, i.e. "Bottom Left" in GIMP). My problem is that I want my image loader to be capable of reading all the different kinds of TGAs, so my question is: how do I flip the image upside down? Note that I store all image data inside a one-dimensional byte array, because OpenGL (glTexImage2D, to be specific) requires it that way. Thanks in advance.
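    A minimal sketch of the usual fix: after decoding, swap whole rows in place in the one-dimensional array whenever the header's origin says the file is stored the other way up (bit 5 of the TGA image-descriptor byte indicates a top-left origin). Here width, height and bytesPerPixel come from the TGA header:

        // Flip an image vertically inside a flat byte array by swapping rows.
        static void flipVertically(byte[] pixels, int width, int height,
                                   int bytesPerPixel) {
            int stride = width * bytesPerPixel;
            byte[] row = new byte[stride];
            for (int y = 0; y < height / 2; y++) {
                int top = y * stride;
                int bottom = (height - 1 - y) * stride;
                System.arraycopy(pixels, top, row, 0, stride);
                System.arraycopy(pixels, bottom, pixels, top, stride);
                System.arraycopy(row, 0, pixels, bottom, stride);
            }
        }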

    Read the article

  • Are there any OpenGL ES 2.0 examples for JOGL?

    - by fjdutoit
    I've scoured the internet for the last few hours looking for an example of how to run even the most basic OpenGL ES 2 example using JOGL, but "by Jupiter!" it has been a total fail. I tried converting the Android example from the OpenGL ES 2.0 Programming Guide samples (while also looking at the WebGL example, which worked fine), yet without any success. Are there any examples out there? If anyone else wants some extra help regarding this question, see this thread on the official JogAmp forum.
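    In case it helps someone searching later, a bare-bones skeleton that requests JOGL's ES 2-compatible profile (assuming JOGL 2.x with its javax.media.opengl package layout; this shows only the setup, with the book's shader code left to fill in):

        import javax.media.opengl.GL2ES2;
        import javax.media.opengl.GLAutoDrawable;
        import javax.media.opengl.GLCapabilities;
        import javax.media.opengl.GLEventListener;
        import javax.media.opengl.GLProfile;
        import com.jogamp.newt.opengl.GLWindow;

        public class ES2Skeleton implements GLEventListener {
            public static void main(String[] args) {
                GLProfile profile = GLProfile.get(GLProfile.GL2ES2);
                GLWindow window = GLWindow.create(new GLCapabilities(profile));
                window.setSize(640, 480);
                window.addGLEventListener(new ES2Skeleton());
                window.setVisible(true);
            }
            public void init(GLAutoDrawable d) { /* compile and link shaders here */ }
            public void display(GLAutoDrawable d) {
                GL2ES2 gl = d.getGL().getGL2ES2();
                gl.glClearColor(0f, 0f, 0f, 1f);
                gl.glClear(GL2ES2.GL_COLOR_BUFFER_BIT);
                // bind the program, set attributes, glDrawArrays(...) as in
                // the OpenGL ES 2.0 Programming Guide examples
            }
            public void reshape(GLAutoDrawable d, int x, int y, int w, int h) { }
            public void dispose(GLAutoDrawable d) { }
        }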

    Read the article

  • Maya is lagging in a specific way...?

    - by Aerovistae
    My Maya installation worked perfectly; it is not my computer. Something caused it to stop working overnight, somehow. When I try to drag a vertex or something like that, it moves the vertex, but then I have to click about three times somewhere outside the mesh before the actual mesh catches up and follows the vertex. Until I do that, it just stays as it was, with a floating vertex somewhere inside or outside it. It makes modeling borderline impossible and is completely infuriating. What ought to happen is what we're all used to: as I move the vertex, the mesh follows it actively, so I can see what it looks like at every given moment until I release the vertex in its new position. Other weird thing: this only applies to complex meshes, of a couple thousand faces or so. A simple cube works fine. What gives?? Anybody?

    Read the article

  • Splitting Pygame functionality between classes or modules?

    - by sec_goat
    I am attempting to make my Pygame application more modular, so that different functionalities are split up into different classes and modules. I am having some trouble getting Pygame to allow me to draw or load images in secondary classes when the display has been set and pygame.init() has been done in my main class. I have typically used C# and XNA to accomplish this sort of behavior, but this time I need to use Python. How do I init Pygame in class1, then create an instance of class2 which loads and convert()s images? I have tried pygame.init() in class2, but then it tells me no display mode has been set, when it has been set in class1. I am under the impression I do not want to create multiple Pygame displays, as that gets problematic. I am probably missing something Pythonic and simple, but I am not sure what. How do I create a display class, init Pygame, and then have other modules do my work, like loading images, fonts, etc.? Here is the simplest version of what I am doing:

        import os
        import pygame

        class class1:
            def __init__(self):
                self.screen = pygame.display.set_mode((600, 400))
                self.imageLoader = class2()

        class class2:
            def __init__(self):
                self.images = []  # list of images

            def load_images(self):
                # get all images in the images directory,
                # load them into pygame and scale them
                names = os.listdir('./images/')
                for img in names:
                    new_img = pygame.image.load(os.path.join('images', img)).convert()
                    scale_img = pygame.transform.scale(
                        new_img,
                        (pygame.display.Info().current_w,
                         pygame.display.Info().current_h))
                    self.images.append(scale_img)

        if __name__ == "__main__":
            c1 = class1()
            c1.imageLoader.load_images()

    Of course, when it tries to load and convert the images, it tells me Pygame has not been initialized, so I throw in a pygame.init() in class2 (I have heard it is safe to init multiple times), and then the error changes to: pygame.error: No video mode has been set.

    Read the article

  • Procedural Planets, Heightmaps and Textures

    - by henryprescott
    I am currently working on an OpenGL procedural planet generator. I hope to use it for a space RPG that will not allow players to go down to the surface of a planet, so I have ignored anything ROAM-related. At the moment I am drawing a cube with VBOs and mapping it onto a sphere. I am familiar with most fractal heightmap generation techniques and have already implemented my own version of midpoint displacement (not that useful in this case, I know). My question is: what is the best way to procedurally generate the heightmap? I have looked at libnoise, which allows me to make tileable heightmaps/textures, but as far as I can see I would need to generate a net like this, leaving the tiling obvious. Could anyone advise me on the best route to take? Any input would be much appreciated. Thanks, Henry.
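    One standard route, offered as a suggestion: skip 2-D tiling entirely and sample a 3-D noise function at each cube-sphere vertex. Because the noise is defined everywhere in space, the six cube faces agree along their shared edges by construction, so there are no seams and no pole pinching. A sketch using libnoise, which the question already mentions:

        #include <noise/noise.h>   // libnoise

        // Height for a point on the unit sphere (the normalized cube-sphere
        // vertex). No unwrapping of the sphere, therefore no visible tiling.
        double heightAt(const noise::module::Perlin& perlin,
                        double nx, double ny, double nz)
        {
            return perlin.GetValue(nx, ny, nz);   // roughly in [-1, 1]
        }

    Octave count, frequency and a ridged-multifractal module (also in libnoise) then control the terrain character, exactly as with 2-D heightmaps.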

    Read the article

  • Accounting for waves when doing planar reflections

    - by CloseReflector
    I've been studying Nvidia's examples from the SDK, in particular the Island11 project, and I've found something curious about a piece of HLSL code which corrects the reflections up and down depending on the state of the wave's height. Naturally, I started by examining the brief section of code:

        // calculating correction that shifts reflection up/down according to water wave Y position
        float4 projected_waveheight = mul(float4(input.positionWS.x, input.positionWS.y, input.positionWS.z, 1), g_ModelViewProjectionMatrix);
        float waveheight_correction = -0.5 * projected_waveheight.y / projected_waveheight.w;
        projected_waveheight = mul(float4(input.positionWS.x, -0.8, input.positionWS.z, 1), g_ModelViewProjectionMatrix);
        waveheight_correction += 0.5 * projected_waveheight.y / projected_waveheight.w;
        reflection_disturbance.y = max(-0.15, waveheight_correction + reflection_disturbance.y);

    My first guess was that it compensates the planar reflection when it is subjected to vertical perturbation (the waves), shifting the reflected geometry to a point where there is nothing, so the water is rendered as if nothing were there, or just the sky. Without it, that's the sky reflecting where we should see the terrain's green/grey/yellowish reflection lerped with the water's baseline. My problem is that I cannot really pinpoint the logic behind it: projecting the actual world-space position of a point of the wave/water geometry, multiplying by -0.5, only to take another projection of the same point, this time with its y coordinate changed to -0.8 (why -0.8?). Clues in the code seem to indicate it was derived with trial and error, because there is redundancy. For example, the author takes the negative half of the projected y coordinate (after the w divide):

        float waveheight_correction = -0.5 * projected_waveheight.y / projected_waveheight.w;

    and then does the same for the second point (only positive, to get a difference of some sort, I presume) and combines them:

        waveheight_correction += 0.5 * projected_waveheight.y / projected_waveheight.w;

    By removing the divide by 2, I see no difference in quality improvement (if someone cares to correct me, please do). The crux of it seems to be the difference in the projected y; why is that? This redundancy and the seemingly arbitrary choice of -0.8 and -0.15 lead me to conclude that this might be a combination of heuristics and guesswork. Is there a logical underpinning to this, or is it just a desperate hack? Here is an exaggeration of the initial problem which the code fragment fixes, best observed at the lowest tessellation level. Hopefully it might spark an idea I'm missing. The -0.8 might be a reference height from which to deduce how much to disturb the texture coordinate sampling the planarly reflected geometry render, and -0.15 might be the lower bound, a security measure.
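    Reading the fragment as math supports the reference-height interpretation. With the perspective divide written out, the code computes

        \[ \Delta = \tfrac{1}{2}\left( \frac{y_{\text{ref}}}{w_{\text{ref}}} - \frac{y_{\text{wave}}}{w_{\text{wave}}} \right), \qquad y_{\text{ref}} \text{ projected from } (x,\, -0.8,\, z) \]

    i.e. half the post-projection (NDC) vertical distance between the actual wave vertex and the same vertex pinned to a fixed plane at y = -0.8, presumably the mean water level in world units. Since NDC spans [-1, 1] while reflection texture coordinates span [0, 1], the factor 0.5 is plausibly the NDC-to-texcoord scale rather than redundancy, and the max with -0.15 clamps how far the disturbance may pull the sample. This is one consistent reading of the code, not a statement of the original author's intent.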

    Read the article

  • How do I randomly generate a top-down 2D level with separate sections and is infinite?

    - by Bagofsheep
    I've read many other questions and answers about random level generation, but most of them deal with either randomly/procedurally generating 2D levels viewed from the side, or 3D levels. What I'm trying to achieve is sort of like looking straight down on a Minecraft map: there is no height, but the borders of each "biome" or "section" of the map are random and varied. I already have basic code that can generate a perfectly square level with the same tileset (randomly picking segments from the tileset image), but I've encountered a major issue with wanting the level to be infinite: beyond a certain point, the tiles' positions become negative on one or both of the axes. The code I use to draw only the tiles the player can see relies on taking a tile's position and converting it to the index that represents it in the array, and as you well know, arrays cannot have a negative index. Here is some of my code. This generates the square (or rectangle) of tiles:

        // Scale is in tiles
        public void Generate(int sX, int sY)
        {
            scaleX = sX;
            scaleY = sY;
            for (int y = 0; y <= scaleY; y++)
            {
                tiles.Add(new List<Tile>());
                for (int x = 0; x <= scaleX; x++)
                {
                    tiles[tiles.Count - 1].Add(tileset.randomTile(x * tileset.TileSize, y * tileset.TileSize));
                }
            }
        }

    Before I changed the code (after realizing an array index couldn't be negative), my for loops looked something like this, to center the map around (0, 0):

        for (int y = -scaleY / 2; y <= scaleY / 2; y++)
            for (int x = -scaleX / 2; x <= scaleX / 2; x++)

    Here is the code that draws the tiles:

        int startX = (int)Math.Floor((player.Position.X - (graphics.Viewport.Width) - tileset.TileSize) / tileset.TileSize);
        int endX = (int)Math.Ceiling((player.Position.X + (graphics.Viewport.Width) + tileset.TileSize) / tileset.TileSize);
        int startY = (int)Math.Floor((player.Position.Y - (graphics.Viewport.Height) - tileset.TileSize) / tileset.TileSize);
        int endY = (int)Math.Ceiling((player.Position.Y + (graphics.Viewport.Height) + tileset.TileSize) / tileset.TileSize);

        for (int y = startY; y < endY; y++)
        {
            for (int x = startX; x < endX; x++)
            {
                if (x >= 0 && y >= 0 && x <= scaleX && y <= scaleY)
                    tiles[y][x].Draw(spriteBatch);
            }
        }

    So, to summarize what I'm asking: first, how do I randomly generate a top-down 2D map with different sections (not chunks per se, but areas with different tilesets), and second, how do I get past this negative array index issue?
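    A sketch of the usual answer to the second question (a suggestion, not the poster's code): store fixed-size chunks in a dictionary keyed by chunk coordinates. Dictionary keys can be negative, the map stays sparse, and chunks can be generated on demand; CHUNK_SIZE and GenerateChunk are illustrative names:

        const int CHUNK_SIZE = 32;
        Dictionary<Point, Tile[,]> chunks = new Dictionary<Point, Tile[,]>();

        Tile TileAt(int x, int y)
        {
            // Floor division so negative coordinates land in the right chunk.
            int cx = (int)Math.Floor(x / (double)CHUNK_SIZE);
            int cy = (int)Math.Floor(y / (double)CHUNK_SIZE);
            Point key = new Point(cx, cy);

            Tile[,] chunk;
            if (!chunks.TryGetValue(key, out chunk))
            {
                chunk = GenerateChunk(cx, cy);   // deterministic from a seed + (cx, cy)
                chunks.Add(key, chunk);
            }
            // Positive remainders index into the chunk.
            return chunk[y - cy * CHUNK_SIZE, x - cx * CHUNK_SIZE];
        }

    For the first question, a common trick is to sample a low-frequency 2D noise function (or a Voronoi/cell partition) at each tile's world coordinates inside GenerateChunk and let that value choose the tileset, so biome borders cross chunk boundaries seamlessly.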

    Read the article

  • Why are only some of my objects being rendered?

    - by BleedObsidian
    Every time I create a new asteroid, the previous one is no longer rendered. I did some debugging and printed out the size of the ArrayList 'small', and when a new asteroid is created it doesn't go down, so the object is still there; it's just not being rendered. Why? StatePlay:

        package me.bleedobsidian.astroidjump;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.state.BasicGameState;
        import org.newdawn.slick.state.StateBasedGame;

        public class StatePlay extends BasicGameState {
            int stateID = 10;
            Player player;
            Asteroids asteroids;

            StatePlay(int stateID) {
                this.stateID = stateID;
            }

            @Override
            public int getID() {
                return stateID;
            }

            @Override
            public void init(GameContainer gc, StateBasedGame sbg) throws SlickException {
                ResManager.loadImages();
                player = new Player();
                asteroids = new Asteroids();
            }

            @Override
            public void render(GameContainer gc, StateBasedGame sbg, Graphics g) throws SlickException {
                g.setAntiAlias(true);
                player.render(g);
                asteroids.render(g);
                g.drawString("Asteroids: " + Asteroids.small.size(), 10, 25);
            }

            @Override
            public void update(GameContainer gc, StateBasedGame sbg, int delta) throws SlickException {
                player.update(gc, delta);
                asteroids.update(delta);
            }
        }

    Asteroids:

        package me.bleedobsidian.astroidjump;

        import java.util.ArrayList;
        import java.util.Timer;

        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.Image;
        import org.newdawn.slick.SpriteSheet;

        public class Asteroids {
            public static ArrayList<Asteroid_Small> small = new ArrayList<Asteroid_Small>();

            static SpriteSheet small_sprites = new SpriteSheet(ResManager.asteroids_small_ss, 32, 32);
            static Image small_1 = small_sprites.getSubImage(0, 0);
            static Image small_2 = small_sprites.getSubImage(1, 0);
            static Image small_3 = small_sprites.getSubImage(2, 0);
            static Image small_4 = small_sprites.getSubImage(3, 0);

            static boolean asteroids = true;
            static int diff = 0;

            Asteroids() {
                Task_Asteroids TaskA = new Task_Asteroids();
                Timer timer = new Timer("Asteroids");
                if (diff == 0) {
                    timer.schedule(TaskA, 0, 4000);
                } else if (diff == 1) {
                    timer.schedule(TaskA, 0, 3000);
                }
            }

            public static Image chooseSmallImage(int i) {
                if (i == 0) {
                    return small_1;
                } else if (i == 1) {
                    return small_2;
                } else if (i == 2) {
                    return small_3;
                } else if (i == 3) {
                    return small_4;
                } else {
                    return small_1;
                }
            }

            public static void level_manager(float x) {
                if (x < 1000) {
                    diff = 0;
                } else if (x < 2000) {
                    diff = 1;
                } else if (x < 3000) {
                    diff = 2;
                } else if (x < 5000) {
                    diff = 3;
                } else if (x < 10000) {
                    diff = 4;
                } else {
                    diff = 5;
                }
            }

            public void update(int delta) {
                for (int s = 0; s < small.size(); s++) {
                    Asteroid_Small as = small.get(s);
                    as.update(delta);
                }
            }

            public void render(Graphics g) {
                for (int s = 0; s < small.size(); s++) {
                    Asteroid_Small as = small.get(s);
                    as.render(g);
                }
            }

            public static void setAsteroids(boolean tf) {
                asteroids = tf;
            }
        }

    Asteroid_Small:

        package me.bleedobsidian.astroidjump;

        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.Image;

        public class Asteroid_Small {
            private static Image me;
            private static float x = 0;
            private static float y = 0;
            private static float speed = 0;
            private static float rotation = 0;
            private static float rotation_speed = 0;

            Asteroid_Small(Image i, float x, float y, float rs, float sp) {
                me = i;
                Asteroid_Small.x = x;
                Asteroid_Small.y = y;
                Asteroid_Small.rotation_speed = rs;
                Asteroid_Small.speed = sp;
            }

            public void update(int delta) {
                x -= speed * delta;
                rotation += rotation_speed * delta;
                me.setRotation(rotation);
            }

            public void render(Graphics g) {
                g.drawImage(me, x, y);
            }
        }

    Task_Asteroids:

        package me.bleedobsidian.astroidjump;

        import java.util.TimerTask;

        public class Task_Asteroids extends TimerTask {
            public void run() {
                if (Asteroids.diff == 0) {
                    int randImage = (int) (Math.random() * 4);
                    int randHeight = (int) (Math.random() * 480);
                    Asteroids.small.add(new Asteroid_Small(Asteroids.chooseSmallImage(randImage),
                            Player.x + 960, randHeight, 0.05f, 0.04f));
                }
            }
        }
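    One thing worth noticing when reading Asteroid_Small above: every field is declared static, so all instances share a single x, y, image and speed, and each newly constructed asteroid overwrites the state of all previous ones (the list still grows, which matches the debugging observation). A per-instance version of the fields, shown as a sketch, keeps each asteroid's state separate:

        public class Asteroid_Small {
            private Image me;              // one per asteroid, not shared
            private float x, y;
            private float speed;
            private float rotation;
            private float rotation_speed;

            Asteroid_Small(Image i, float x, float y, float rs, float sp) {
                this.me = i;
                this.x = x;
                this.y = y;
                this.rotation_speed = rs;
                this.speed = sp;
            }
        }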

    Read the article
