Search Results


  • Whole map design vs. tiles array design

    - by Mikalichov
    I am working on a 2D RPG, which will feature the usual pre-generated dungeon/town maps. I am using tiles that I will then combine to make the maps. My original plan was to assemble the tiles in Photoshop, or some other graphics program, in order to have one bigger picture that I could then use as a map. However, I have read in several places about people using arrays to build their map in the engine (so you give an array of x tiles to your engine, and it assembles them into a map). I can understand how it's done, but it seems a lot more complicated to implement, and I can't see obvious advantages. What is the most common method, and what are the advantages/disadvantages of each?
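
    As an illustration of the array approach, here is a minimal sketch in Java; the TileMap and Renderer names, the tile IDs and the drawTile hook are all made up for the example, not taken from any particular engine. The map is stored as a 2D array of tile indices and assembled at draw time.

        // Minimal array-based tile map; names are illustrative only.
        interface Renderer {
            void drawTile(int tileId, int x, int y);   // hypothetical drawing hook
        }

        public class TileMap {
            private final int[][] tiles;   // one tile index per cell, e.g. 0 = grass, 1 = wall
            private final int tileSize;    // tile size in pixels

            public TileMap(int[][] tiles, int tileSize) {
                this.tiles = tiles;
                this.tileSize = tileSize;
            }

            // Draw each cell as a small image instead of blitting one huge pre-baked picture.
            public void draw(Renderer renderer) {
                for (int row = 0; row < tiles.length; row++) {
                    for (int col = 0; col < tiles[row].length; col++) {
                        renderer.drawTile(tiles[row][col], col * tileSize, row * tileSize);
                    }
                }
            }
        }

    The advantage usually cited for this layout is that the map data stays small and editable (one integer per cell instead of a huge bitmap), and collision, pathfinding and a map editor can all read the same array; a pre-rendered picture only gives you the visuals.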


  • HLSL: what do you get when you subtract world position from InvertViewProjection.Transform?

    - by cubrman
    In one of NVIDIA's Vertex shaders (the metal one) I found the following code: // transform object normals, tangents, & binormals to world-space: float4x4 WorldITXf : WorldInverseTranspose < string UIWidget="None"; >; // provide tranform from "view" or "eye" coords back to world-space: float4x4 ViewIXf : ViewInverse < string UIWidget="None"; >; ... float4 Po = float4(IN.Position.xyz,1); // homogeneous location coordinates float4 Pw = mul(Po,WorldXf); // convert to "world" space OUT.WorldView = normalize(ViewIXf[3].xyz - Pw.xyz); The term OUT.WorldView is subsequently used in a Pixel Shader to compute lighting: float3 Ln = normalize(IN.LightVec.xyz); float3 Nn = normalize(IN.WorldNormal); float3 Vn = normalize(IN.WorldView); float3 Hn = normalize(Vn + Ln); float4 litV = lit(dot(Ln,Nn),dot(Hn,Nn),SpecExpon); DiffuseContrib = litV.y * Kd * LightColor + AmbiColor; SpecularContrib = litV.z * LightColor; Can anyone tell me what exactly is WorldView here? And why do they add it to the normal?


  • Circle physics and collision using vectors

    - by Joe Hearty
    This is a problem I've been having: when making a set number of filled circles at random locations on a JPanel and applying gravity (a negative change in the y), the circles collide with each other. I want them to have collision detection and push apart in opposite directions using vectors, but I don't know how to apply that to my scenario. Could someone help?

        public void drawballs(Graphics g){
            g.setColor (Color.white);
            //displays circles
            for(int i = 0; i<xlocationofcircles.length-1; i++){
                g.fillOval( (int) xlocationofcircles[i], (int) (ylocationofcircles[i]) ,16 ,16 );
                ylocationofcircles[i]+=.2; //gravity
                if(ylocationofcircles[i] > 550) //stops gravity at bottom of screen
                    ylocationofcircles[i]-=.2;
                //Check distance between circles(i think..)
                float distance =(xlocationofcircles[i+1]-xlocationofcircles[i]) +
                                (ylocationofcircles[i+1]-xlocationofcircles[i]);
                if( Math.sqrt(distance) <16) ...
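
    For reference, here is a minimal sketch in Java of how circle-circle collision is usually detected and resolved by pushing the circles apart along the line between their centres. The x and y arrays and the shared radius r are stand-ins for the question's arrays, not an exact drop-in.

        // Checks and resolves a collision between circles i and j, each of radius r.
        static void resolveCollision(float[] x, float[] y, int i, int j, float r) {
            float dx = x[j] - x[i];
            float dy = y[j] - y[i];
            float dist = (float) Math.sqrt(dx * dx + dy * dy);   // real distance between centres
            float minDist = 2 * r;                               // touching distance for equal radii
            if (dist > 0 && dist < minDist) {
                float overlap = (minDist - dist) / 2f;           // each circle moves half the overlap
                float nx = dx / dist;                            // unit vector from i towards j
                float ny = dy / dist;
                x[i] -= nx * overlap;  y[i] -= ny * overlap;
                x[j] += nx * overlap;  y[j] += ny * overlap;
            }
        }

    Note that the snippet in the question sums coordinate differences (and mixes an x with a y) before taking the square root, so that value is not the actual distance between the two centres; the dx*dx + dy*dy form above is the usual test.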


  • Tic-Tac-Toe game AI

    - by David Jones
    I'm looking into creating a simple tic tac toe/noughts and crosses game in Actionscript3 and am trying to understand the ideas behind the AI used in a game like this. I've seen some simplistic examples online but from what I've read a game tree or something like minimax is the best way to go about this. Can anyone help explain or reference any good examples of this? I've seen that there is a library called as3ds - data structures for game developers which has a number of classes that might help tie this together? Any info/examples or help is much appreciated.
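
    For reference, a minimal minimax sketch, written in plain Java rather than ActionScript and using a made-up Board interface, to show the game-tree idea usually applied to tic-tac-toe:

        interface Board {
            boolean gameOver();
            int score();                                  // e.g. +1 AI win, -1 AI loss, 0 draw
            java.util.List<Board> nextMoves(boolean aiToMove);
        }

        // Best score reachable from this position if both players play perfectly.
        static int minimax(Board board, boolean aiToMove) {
            if (board.gameOver()) return board.score();
            int best = aiToMove ? Integer.MIN_VALUE : Integer.MAX_VALUE;
            for (Board next : board.nextMoves(aiToMove)) {
                int value = minimax(next, !aiToMove);
                best = aiToMove ? Math.max(best, value) : Math.min(best, value);
            }
            return best;
        }

    The AI then simply plays the move whose resulting position has the best minimax value; tic-tac-toe's full game tree is small enough that no pruning is needed, which is why it is the usual teaching example.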


  • How to balance this Pokémon simulator metagame by feedback?

    - by Dokkat
    This is a Pokémon simulator where you build a team of 6 Pokémon and battle with someone. Unfortunately, some Pokémon are stronger than others, and only a few of the hundreds of species are practical. I'm trying to create a metagame where all of them are competitive. For this, I am tagging each Pokémon with a parameter (level) that changes its strength and scales up/down depending on its performance. That is, if the system detects Mewtwo is overperforming, it should decrease its level tag until Mewtwo is balanced. The question is: how can I identify if a Pokémon is causing an imbalance? The data I have is the history of battles (player 1, player 2, Pokémon list, winner). The most basic solution I can think of is victory/loss counting.
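
    As a sketch of that victory/loss-counting baseline, here is a minimal Java version; the BattleRecord shape, the 0.5 win-rate target and the 0.1 step size are all assumptions for illustration, not part of the question.

        import java.util.*;

        record BattleRecord(Set<String> winningTeam, Set<String> losingTeam) {}

        class Balancer {
            // Nudge each species' level towards balance based on its observed win rate.
            static Map<String, Double> adjustLevels(List<BattleRecord> history,
                                                    Map<String, Double> levels) {
                Map<String, int[]> winsAndGames = new HashMap<>();   // species -> {wins, games}
                for (BattleRecord b : history) {
                    for (String s : b.winningTeam()) {
                        int[] wg = winsAndGames.computeIfAbsent(s, k -> new int[2]);
                        wg[0]++; wg[1]++;
                    }
                    for (String s : b.losingTeam()) {
                        winsAndGames.computeIfAbsent(s, k -> new int[2])[1]++;
                    }
                }
                winsAndGames.forEach((species, wg) -> {
                    double winRate = (double) wg[0] / wg[1];
                    // Overperformers (win rate above 0.5) lose a little level, underperformers gain.
                    levels.put(species, levels.getOrDefault(species, 1.0) + 0.1 * (0.5 - winRate));
                });
                return levels;
            }
        }

    Two caveats worth keeping in mind: only adjust species with enough recorded games, so rarely-used Pokémon aren't swung around by a handful of results, and remember that per-species win rates ignore team-composition effects, which is exactly where this baseline starts to break down.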


  • Recommended 2D Game Engine for prototyping

    - by Thomas Dufour
    What high-level game engine would you recommend to develop a 2D game prototype on Windows (or Mac/Linux if you wish)? The kind of things I mean by "high-level" include (but are definitely not limited to): not having to manage low-level stuff like screen buffers or graphics contexts; having an API to draw geometric shapes; and, well, I was going to omit it, but I guess being based on an actual "high-level" language is a plus (automatic resource management and the existence of a reasonable set of data structures in the standard library come to mind). It seems to me that Flash is the proverbial elephant in the room for this query, but I'd very much like to see different answers based on all kinds of languages or SDKs.


  • What kind of physics to choose for our arcade 3D MMO?

    - by Nick
    We're creating an action MMO using Three.js (WebGL) with an arcadish feel, and implementing physics for it has been a pain in the butt. Our game has a terrain where the character will walk on, and in the future 3D objects (a house, a tree, etc) that will have collisions. In terms of complexity, the physics engine should be like World of Warcraft. We don't need friction, bouncing behaviour or anything more complex like joints, etc. Just gravity. I have managed to implement terrain physics so far by casting a ray downwards, but it does not take into account possible 3D objects. Note that these 3D objects need to have convex collisions, so our artists create a 3D house and the player can walk inside but can't walk through the walls. How do I implement proper collision detection with 3D objects like in World of Warcraft? Do I need an advanced physics engine? I read about Physijs which looks cool, but I fear that it may be overkill to implement that for our game. Also, how does WoW do it? Do they have a separate raycasting system for the terrain? Or do they treat the terrain like any other convex mesh? A screenshot of our game so far:


  • Solution for lightweight LAN peer discovery?

    - by DevilWithin
    I built a library for purely cross-platform programming. My games made with it run fine on Android, PC, Linux, Mac, etc. The networking capabilities are provided by the ENet library, therefore all communication between my apps is not plain TCP or UDP compatible, but only my custom protocol, even though it is ultimately based on UDP. I don't think it's possible to do what I want with ENet, and that's why I ask here for help! Let's say I have the same game running on my Android phone, my laptop and my PC. They are all on the same Wi-Fi network, and therefore on a LAN, whether it's a Wi-Fi hotspot or the household router. I need each of those 3 peers to discover the other two on the network. This is meant only to find the IPs of live apps on the LAN, to be able to host multiplayer games between them. I can only think of one effective way to do this: UDP broadcast, then wait for responses. If that is the solution, I need something small, since this is its only purpose. The other way could be to try to connect to all IPs in the LAN address subrange, but I don't think the OS would be with me on this one :p Sorry for the long question!
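
    For reference, here is a minimal UDP broadcast discovery sketch. It is written in Java purely to illustrate the pattern (the real implementation would be a raw UDP socket next to ENet in C++), and the port number and probe string are made up: one peer broadcasts a probe and collects the addresses of everyone who answers.

        import java.net.*;
        import java.nio.charset.StandardCharsets;

        public class LanDiscovery {
            static final int PORT = 49500;                                   // arbitrary example port
            static final byte[] PROBE = "MYGAME_DISCOVER".getBytes(StandardCharsets.UTF_8);

            // Broadcast a probe and print every peer that answers within one second.
            public static void discover() throws Exception {
                try (DatagramSocket socket = new DatagramSocket()) {
                    socket.setBroadcast(true);
                    socket.setSoTimeout(1000);
                    socket.send(new DatagramPacket(PROBE, PROBE.length,
                            InetAddress.getByName("255.255.255.255"), PORT));
                    byte[] buf = new byte[64];
                    while (true) {
                        DatagramPacket reply = new DatagramPacket(buf, buf.length);
                        socket.receive(reply);           // SocketTimeoutException ends the loop
                        System.out.println("Peer found at " + reply.getAddress());
                    }
                } catch (SocketTimeoutException done) {
                    // collection window elapsed
                }
            }
        }

    Each running game instance would additionally bind a listener socket to the same port and answer any probe with a small reply packet; that listener is what makes the app discoverable.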


  • Best way to mask 2D sprites in XNA?

    - by electroflame
    I am currently trying to mask some sprites. Rather than explaining it in words, I've made up some example pictures: the area to mask (in white), then the red sprite that needs to be cropped, then the final result. Now, I'm aware that in XNA you can do two things to accomplish this: use the stencil buffer, or use a pixel shader. I have tried a pixel shader, which essentially did this:

        float4 main(float2 texCoord : TEXCOORD0) : COLOR0
        {
            float4 tex = tex2D(BaseTexture, texCoord);
            float4 bitMask = tex2D(MaskTexture, texCoord);
            if (bitMask.a > 0)
            {
                return float4(tex.r, tex.g, tex.b, tex.a);
            }
            else
            {
                return float4(0, 0, 0, 0);
            }
        }

    This seems to crop the images (albeit not correctly once the image starts to move), but my problem is that the images are constantly moving (they aren't static), so this cropping needs to be dynamic. Is there a way I could alter the shader code to take the sprite's position into account? Alternatively, I've read about using the stencil buffer, but most of the samples seem to hinge on using a render target, which I really don't want to do. (I'm already using 3 or 4 for the rest of the game, and adding another one on top of that seems overkill.) The only tutorial I've found that doesn't use render targets is one from Shawn Hargreaves' blog over here. The issue with that one, though, is that it's for XNA 3.1 and doesn't seem to translate well to XNA 4.0. It seems to me that the pixel shader is the way to go, but I'm unsure of how to get the positioning correct. I believe I would have to change my on-screen coordinates (something like 500, 500) to be between 0 and 1 for the shader coordinates. My only problem is trying to work out how to correctly use the transformed coordinates. Thanks in advance for any help!


  • Car brands and models licensing

    - by Ju-v
    We are a small team working on a car racing game, but we don't know about the licensing process for branded cars like Nissan, Lamborghini, Chevrolet, etc. Do we need to buy a licence to use real car brand names, models and logos, or can we use them for free? A second option we're thinking about is using the real models with made-up brand names. Is that possible? If someone has experience with this, feel free to share it. Any information about this is welcome.


  • How can I get almost pixel-perfect collision detection in a multiplayer game?

    - by Freddy
    I'm currently working on a multiplayer game for iPhone. The problem I have, as with all multiplayer games, is that the other user will always see everything at a non-constant delay. The game I'm making needs to have almost pixel-perfect collision detection, but being 1 or 2 pixels off is not that big of a deal. How can I possibly get this working? I guess I could just set the local player to also be at X ms delay. However, this will probably just be worse and feel sloppy to user input. I know this problem is probably something network programmers deal with every day, and I would be glad if someone could give me a possible solution for this.


  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept: I'm currently looking into how voxel engines work, with a view to possibly making one myself. I see a lot of stuff like this ... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ... which talks about how to go about reducing the draw calls. What I can't seem to understand is how it actually saves on draw call counts, given that the logic seems to be something like this:

        Without chunks:
            foreach voxel in myvoxels
                DrawIfVisible()

        With chunks:
            foreach chunk in mychunks
                DrawIfVisible()

    which then does:

            foreach voxel in myvoxels
                DrawIfVisible()

    So surely you saved nothing?! You still make a draw call for each visible voxel, do you not? A visible voxel needs a draw call in either scenario. The only real saving I can see is that the logic that evaluates a chunk can determine whether a large number of voxels are visible or not, effectively saving a bit of "is this chunk visible" CPU time. But it's the draw calls that interest me: the fewer of those, the faster the application. EDIT: In case it makes any difference, I will probably be using XNA (DirectX, not OpenGL) for my engine, so don't consider my choice of example in the link above my choice of technology. But this question is such that I doubt it would matter.


  • Understanding dot notation

    - by Starkers
    Here's my interpretation of dot notation: a = [2,6] b = [1,4] c = [0,8] a . b . c = (2*6)+(1*4)+(0*8) = 12 + 4 + 0 = 16 What is the significance of 16? Apparently it's a scalar. Am I right in thinking that a scalar is the number we times a unit vector by to get a vector that has a scaled up magnitude but the same direction as the unit vector? So again, what is the relevance of 16? When is it used? It's not the magnitude of all the vectors added up. The magnitude of all of them is calculated as follows: sqrt( ax * ax + ay * ay ) + sqrt( bx * bx + by * by ) + sqrt( cx * cx + cy * cy) sqrt( 2 * 2 + 6 * 6 ) + sqrt( 1 * 1 + 4 * 4 ) + sqrt( 0 * 0 + 8 * 8) sqrt( 4 + 36 ) + sqrt( 1 + 16 ) + sqrt( 0 + 64) sqrt( 40 ) + sqrt( 17 ) + sqrt( 64) 6.3 + 4.1 + 8 10.4 + 8 18.4 So I don't really get this diagram: Attempting with sensible numbers: a = [1,0] b = [4,3] a . b = (1*0) + (4*3) = 0 + 12 = 12 So what exactly is a . b describing here? The magnitude of that vector? Because that isn't right: the 'a.b' vector = [4,0] sqrt( x*x + y*y ) sqrt( 4*4 + 0*0 ) sqrt( 16 + 0 ) 4 So what is 12 describing?
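
    For reference, the standard definition of the 2D dot product pairs matching components of the two vectors and sums the products; equivalently, it is the product of the magnitudes times the cosine of the angle between them:

        a \cdot b = a_x b_x + a_y b_y = \lVert a \rVert \, \lVert b \rVert \cos\theta

    With a = [1, 0] and b = [4, 3] this gives 1*4 + 0*3 = 4. Since a is a unit vector, that 4 is exactly the length of the projection of b onto a, which matches the magnitude of the [4, 0] vector at the end of the question; that projection length is what the dot product describes there.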


  • Creating a 2D Line Branch (Part 2)

    - by Danran
    Yesterday I asked this question on how to create a 2D line branch: Creating a 2D Line Branch. Thanks to the answers provided, I now have a nice looking main branch (coloured to show the different segments in the final item). Now it is time to branch things off, as discussed in the article http://drilian.com/2009/02/25/lightning-bolts/. Again, however, I am confused as to the meaning of the following pseudo code: splitEnd = Rotate(direction, randomSmallAngle)*lengthScale + midPoint; I'm unsure how to actually do this rotation correctly. In all honesty I'm a bit unsure what to do at this part: "splitEnd" will be a Vector3, so whatever happens in the Rotate function must return some form of rotated direction, which is then multiplied by a scale to create length and then added to the midPoint. I'm not sure. I would be really grateful if someone could explain what I'm meant to be doing in this part.
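
    A minimal sketch of what Rotate(direction, angle) usually means in 2D, written in Java with a made-up Vec2 class: rotate the direction by a small random angle, scale it, and add it to the midpoint to get the split endpoint.

        // Hypothetical 2D vector; only the parts needed for the sketch.
        class Vec2 {
            float x, y;
            Vec2(float x, float y) { this.x = x; this.y = y; }

            // Standard 2D rotation by 'angle' radians (counter-clockwise).
            Vec2 rotated(float angle) {
                float cos = (float) Math.cos(angle), sin = (float) Math.sin(angle);
                return new Vec2(x * cos - y * sin, x * sin + y * cos);
            }

            Vec2 scaled(float s) { return new Vec2(x * s, y * s); }
            Vec2 plus(Vec2 o)    { return new Vec2(x + o.x, y + o.y); }
        }

        // splitEnd = Rotate(direction, randomSmallAngle) * lengthScale + midPoint
        static Vec2 splitEnd(Vec2 direction, Vec2 midPoint, float randomSmallAngle, float lengthScale) {
            return direction.rotated(randomSmallAngle).scaled(lengthScale).plus(midPoint);
        }

    In 2D the "rotation" is just the cos/sin formula above; if the engine works with Vector3 and z = 0, the same formula applies to the x and y components.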


  • Coordinate spaces and transformation matrices

    - by Belgin
    I'm trying to get an object from object space into projected space using these intermediate matrices. The first matrix (I) is the one that transforms from object space into inertial space, but since my object is not rotated or translated in any way inside the object space, this matrix is the 4x4 identity matrix. The second matrix (W) is the one that transforms from inertial space into world space, which is just a scale transform matrix with factor a = 14.1 on all coordinates, since the inertial space origin coincides with the world space origin.

            | a 0 0 0 |
        W = | 0 a 0 0 |
            | 0 0 a 0 |
            | 0 0 0 1 |

    The third matrix (C) is the one that transforms from world space into camera space. This matrix is a translation matrix with a translation of (0, 0, 10), because I want the camera to be located behind the object, so the object must be positioned 10 units along the z axis.

            | 1 0 0 0  |
        C = | 0 1 0 0  |
            | 0 0 1 10 |
            | 0 0 0 1  |

    And finally, the fourth matrix is the projection matrix (P). Bearing in mind that the eye is at the origin of the world space and the projection plane is defined by z = 1, the projection matrix is:

            | 1 0  0  0 |
        P = | 0 1  0  0 |
            | 0 0  1  0 |
            | 0 0 1/d 0 |

    where d is the distance from the eye to the projection plane, so d = 1. I'm multiplying them like this: (((P x C) x W) x I) x V, where V is the vertex's coordinates in column-vector form:

            | x |
        V = | y |
            | z |
            | 1 |

    After I get the result, I divide the x and y coordinates by w to get the actual screen coordinates. Apparently, I'm doing something wrong or missing something completely here, because it's not rendering properly. Here's a picture of what is supposed to be the bottom side of the Stanford Dragon. Also, I should add that this is a software renderer, so no DirectX or OpenGL stuff here.


  • Line Intersection from parametric equation

    - by Sidar
    I'm sure this question has been asked before. However, I'm trying to connect the dots by translating an equation on paper into an actual function. I thought it would be interesting to ask here instead of on the math sites (since it's going to be used for games anyway). Let's say we have our vector equation: x = s + Lr, where x is the resulting vector, s our starting point/vector, L our parameter and r our direction vector. The (not sure it's called like this, please correct me) normal equation is: x.n = c. If we substitute our vector equation we get (s + Lr).n = c. We now need to isolate L, which results in L = (c - s.n) / (r.n). L needs to satisfy 0 < L < 1, meaning it needs to be between 0 and 1. My question: I want to know what L is, so that if I were to substitute L into both vector equations (or two lines), they should give me the same intersection coordinates. That is, if they intersect. But I can't wrap my head around how to use this for two lines and find the parameter that fits the intersection point. Could someone show, with a simple example, how I could translate this to a function/method?
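
    A minimal sketch in Java of turning that formula into a function, using a made-up Vec2 class. The second line is given as a point q and a direction d; its normal n is d rotated 90 degrees and c = q.n, exactly as in the derivation above.

        class Vec2 {
            final double x, y;
            Vec2(double x, double y) { this.x = x; this.y = y; }
            double dot(Vec2 o) { return x * o.x + y * o.y; }
            Vec2 perp()        { return new Vec2(-y, x); }            // 90-degree rotation -> normal
            Vec2 plusScaled(Vec2 dir, double t) { return new Vec2(x + dir.x * t, y + dir.y * t); }
        }

        // Segment: s + L*r for L in [0,1].  Line: all points p with p.n = c, where n = d rotated 90 degrees.
        // Returns the intersection point, or null if the segment misses or is parallel to the line.
        static Vec2 intersect(Vec2 s, Vec2 r, Vec2 q, Vec2 d) {
            Vec2 n = d.perp();
            double c = q.dot(n);
            double denom = r.dot(n);
            if (denom == 0) return null;                 // parallel: no unique intersection
            double L = (c - s.dot(n)) / denom;           // the L from the question
            if (L < 0 || L > 1) return null;             // intersection lies outside the segment
            return s.plusScaled(r, L);
        }

    For two segments rather than a segment and an infinite line, compute the second segment's own parameter the same way (swap the roles of the two lines) and require both parameters to lie between 0 and 1.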


  • How to create games with scrolling?

    - by Chandan Shetty SP
    In games like City Story or We Farm, how do they implement scrolling? To do scrolling using UIScrollView, the EAGLView size has to be bigger. In those games the EAGLView size looks like more than 1024*1024, but there is a limit on viewport size on iPhone devices (on the iPhone 3G the max is 1024). I played those games on an iPhone 3G and they work fine. Any idea how they implemented their scrolling mechanism?
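
    One common pattern is to keep the view at screen size and scroll by subtracting a camera offset when drawing world objects, instead of making the view itself huge. Below is a rough sketch of that idea in Java rather than Objective-C, with all the names made up, just to show the structure.

        interface Renderer { void drawSprite(int spriteId, float x, float y); }   // hypothetical hook
        class GameObject { int sprite; float worldX, worldY, width, height; }

        class Camera { float x, y; }   // world position of the screen's top-left corner

        class World {
            java.util.List<GameObject> objects = new java.util.ArrayList<>();

            // Scroll by drawing everything relative to the camera; the view stays screen-sized.
            void draw(Renderer r, Camera cam, float screenW, float screenH) {
                for (GameObject o : objects) {
                    float sx = o.worldX - cam.x, sy = o.worldY - cam.y;
                    if (sx + o.width > 0 && sx < screenW && sy + o.height > 0 && sy < screenH) {
                        r.drawSprite(o.sprite, sx, sy);    // cull everything off screen
                    }
                }
            }
        }

    Panning a finger then just changes the camera position; only the tiles and objects that intersect the screen are ever drawn, so the viewport never needs to exceed the device limit.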


  • Splitting a tetris game apart - where to put time-management?

    - by nightcracker
    I am creating a Tetris game in C++ & SDL, and I'm trying to do it "good" by making it object-oriented and keeping scopes small. So far I have the following structure:

        - A main with some low-level SDL setup and input handling.
        - A game class that keeps track of the score and provides the interface for main (move block down, etc.).
        - A map class that keeps track of the current game field and which blocks are where. Used by the game class.
        - A block class that represents the current falling block. Used by game.
        - A renderer class abstracting low-level SDL to a format where you render "tetris blocks". Used by map and block.

    Now I'm having a tough time deciding where to place the time management of this game. For example, where should it be decided, when a block bumps the bottom of the screen, how long it takes before the current block locks in place and a new block spawns? I also have another, unrelated question: is there some place where you can find standard data on Tetris, like standard score tables, rulesets, timings, etc.?
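
    One common placement is to keep the lock-delay timing inside the game class, since it already owns the update loop and talks to both map and block. Here is a hedged sketch of that idea, written in Java for brevity rather than the project's C++; the LOCK_DELAY_MS value and the helper names are made up.

        class Game {
            static final long LOCK_DELAY_MS = 500;    // assumed value; tune to taste
            private long groundedSince = -1;          // -1 means the block is not resting on anything

            void update(long nowMs) {
                if (blockIsSupported()) {             // the block would collide if moved one row down
                    if (groundedSince < 0) groundedSince = nowMs;
                    if (nowMs - groundedSince >= LOCK_DELAY_MS) {
                        lockBlockIntoMap();           // hand the block's cells over to the map
                        spawnNewBlock();
                        groundedSince = -1;
                    }
                } else {
                    groundedSince = -1;               // the block slid free again, reset the delay
                }
            }

            private boolean blockIsSupported() { /* ask the map whether the block can move down */ return false; }
            private void lockBlockIntoMap()    { /* copy the block's cells into the map */ }
            private void spawnNewBlock()       { /* create the next falling block */ }
        }

    For the second question, community-maintained Tetris wikis (for example the one hosted at harddrop.com) collect scoring tables, gravity curves and lock-delay conventions from the official guideline and from classic versions.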


  • Scrolling through objects, then creating a new instance of that object

    - by gopgop
    In my game, when pressing the right mouse button you place an object on the ground. All objects have the same superclass (GameObject). I have a field called selected, which will be equal to one particular GameObject at a time. When clicking the right mouse button, it checks what the instance of selected is, and that is how it determines which object to place on the ground. Code example (t is the "slot" the object will go into):

        if (selected instanceof MapleTree) {
            t = new MapleTree(game, highLight);
        } else if (selected instanceof OakTree) {
            t = new OakTree(game, highLight);
        }

    Now, it has to be a "new" instance of the object. Eventually my game will have hundreds of GameObjects, and I don't want to have a huge if-else statement. How would I make it scroll through the possible kinds of objects and, if it's the correct type, create a new instance of it? When pressing E it switches the type of selected, and that is an if-else statement as well. How would I do it for this too? Here is a code example:

        if (selected instanceof MapleTree) {
            selected = new OakTree(game);
        } else if (selected instanceof OakTree) {
            selected = new MapleTree(game);
        }
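
    One common way to avoid the growing if-else chain is to keep the object kinds in an ordered registry of factories: cycle an index for the E key, and ask the current factory for new instances. The sketch below is in Java and assumes the Game, GameObject, MapleTree and OakTree types from the question, along with the single-argument constructors used in the second snippet; everything else is made up.

        import java.util.List;
        import java.util.function.Function;

        class Placement {
            // Ordered list of kinds; each entry knows how to build a fresh instance.
            private final List<Function<Game, GameObject>> kinds =
                    List.of(MapleTree::new, OakTree::new /*, ...hundreds more */);
            private int selectedIndex = 0;

            // Right mouse button: place a fresh copy of the currently selected kind.
            GameObject newSelected(Game game) {
                return kinds.get(selectedIndex).apply(game);
            }

            // E key: move to the next kind, wrapping around at the end of the list.
            void cycleSelection() {
                selectedIndex = (selectedIndex + 1) % kinds.size();
            }
        }

    Because the registry stores factories rather than instances, there is no need to inspect types with instanceof at all; both placing and cycling become simple index operations, and adding a new kind of object means adding one entry to the list.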


  • GestureListener's fling method doesn't get called

    - by nosferat
    I'm using SimpleGestureDetector from the libgdx-users Wiki as my InputProcessor. I set it in the created() method: Gdx.input.setInputProcess(new SimpleDirectionGestureDetector(charController)); charController is my class which implements the DirectionListener interface defined in the SimpleDirectionGestureDetector class and it is responsible for moving the player character. However the character doesn't change direction when I'm performing a fling action in any direction. I've checked and the fling() method in the SimpleDirectionGesture class doesn't get called and I have no idea why, since everything seems good. What am I doing wrong?


  • Why do my pyramids fade to black and then back to colour again?

    - by geminiCoder
    I have the following vertecies and norms GLfloat verts[36] = { -0.5, 0, 0.5, 0, 0, -0.5, 0.5, 0, 0.5, 0, 0, -0.5, 0.5, 0, 0.5, 0, 1, 0, -0.5, 0, 0.5, 0, 0, -0.5, 0, 1, 0, 0.5, 0, 0.5, -0.5, 0, 0.5, 0, 1, 0 }; GLfloat norms[36] = { 0, -1, 0, 0, -1, 0, 0, -1, 0, -1, 0.25, 0.5, -1, 0.25, 0.5, -1, 0.25, 0.5, 1, 0.25, -0.5, 1, 0.25, -0.5, 1, 0.25, -0.5, 0, -0.5, -1, 0, -0.5, -1, 0, -0.5, -1 }; I am writing my fists Open GL game, But I need to know for sure if my Normals are correct as the colours aren't rendering correctly. my Pyramids are coloured then fade to black every half rotation then back again. My app so far is based on the boiler plate code provided by apple. heres my modified setUp Method [EAGLContext setCurrentContext:self.context]; [self loadShaders]; self.effect = [[GLKBaseEffect alloc] init]; self.effect.light0.enabled = GL_TRUE; self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f); glEnable(GL_DEPTH_TEST); glGenVertexArraysOES(1, &_vertexArray); //create vertex array glBindVertexArrayOES(_vertexArray); glGenBuffers(1, &_vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(verts) + sizeof(norms), NULL, GL_STATIC_DRAW); //create vertex buffer big enough for both verts and norms and pass NULL as data.. uint8_t *ptr = (uint8_t *)glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES); //map buffer to pass data to it memcpy(ptr, verts, sizeof(verts)); //copy verts memcpy(ptr+sizeof(verts), norms, sizeof(norms)); //copy norms to position after verts glUnmapBufferOES(GL_ARRAY_BUFFER); glEnableVertexAttribArray(GLKVertexAttribPosition); glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0)); //tell GL where verts are in buffer glEnableVertexAttribArray(GLKVertexAttribNormal); glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(sizeof(verts))); //tell GL where norms are in buffer glBindVertexArrayOES(0); And the update method. - (void)update { float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height); GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f); self.effect.transform.projectionMatrix = projectionMatrix; GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f); baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f); // Compute the model view matrix for the object rendered with GLKit GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); self.effect.transform.modelviewMatrix = modelViewMatrix; // Compute the model view matrix for the object rendered with ES2 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL); _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix); _rotation += self.timeSinceLastUpdate * 0.5f; } But providing I understand this correct one pyramid is using the GLKit base effect shaders and the other the shaders which are included in the project. So for both of them to have the same error, I thought it would be the Norms?


  • Example of DOD design

    - by Jeffrey
    I can't seem to find a nice explanation of the Data Oriented Design for a generic zombie game (it's just an example, pretty common example). Could you make an example of the Data Oriented Design on creating a generic zombie class? Is the following good? Zombie list class: class ZombieList { GLuint vbo; // generic zombie vertex model std::vector<color>; // object default color std::vector<texture>; // objects textures std::vector<vector3D>; // objects positions public: unsigned int create(); // return object id void move(unsigned int objId, vector3D offset); void rotate(unsigned int objId, float angle); void setColor(unsigned int objId, color c); void setPosition(unsigned int objId, color c); void setTexture(unsigned int, unsigned int); ... void update(Player*); // move towards player, attack if near } Example: Player p; Zombielist zl; unsigned int first = zl.create(); zl.setPosition(first, vector3D(50, 50)); zl.setTexture(first, texture("zombie1.png")); ... while (running) { // main loop ... zl.update(&p); zl.draw(); // draw every zombie } Or would creating a generic World container that contains every action from bite(zombieId, playerId) to moveTo(playerId, vector) to createPlayer() to shoot(playerId, vector) to face(radians)/face(vector); and contains: std::vector<zombie> std::vector<player> ... std::vector<mapchunk> ... std::vector<vbobufferid> player_run_animation; ... be a good example? Whats the proper way to organize a game with DOD?


  • Bubble shooter search algorithm

    - by Fofole
    So I have a Matrix of NxM. At a given position (for ex. [2][5]) I have a value which represents a color. If there is nothing at that point the value is -1. What I need to do is after I add a new point, to check all his neighbours with the same color value and if there are more than 2, set them all to -1. If what I said doesn't make sense what I'm trying to do is an alghoritm which I use to destroy all the same color bubbles from my screen, where the bubbles are memorized in a matrix where -1 means no bubble and {0,1,2,...} represent that there is a bubble with a specific color. This is what I tried and failed: public class Testing { static private int[][] gameMatrix= {{3, 3, 4, 1, 1, 2, 2, 2, 0, 0}, {1, 4, 1, 4, 2, 2, 1, 3, 0, 0}, {2, 2, 4, 4, 3, 1, 2, 4, 0, 0}, {0, 1, 2, 3, 4, 1, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, }; static int Rows=6; static int Cols=10; static int count; static boolean[][] visited=new boolean[15][15]; static int NOCOLOR = -1; static int color = 1; public static void dfs(int r, int c, int color, boolean set) { for(int dr = -1; dr <= 1; dr++) for(int dc = -1; dc <= 1; dc++) if(!(dr == 0 && dc == 0) && ok(r+dr, c+dc)) { int nr = r+dr; int nc = c+dc; // if it is the same color and we haven't visited this location before if(gameMatrix[nr][nc] == color && !visited[nr][nc]) { visited[nr][nc] = true; count++; dfs(nr, nc, color, set); if(set) { gameMatrix[nr][nc] = NOCOLOR; } } } } static boolean ok(int r, int c) { return r >= 0 && r < Rows && c >= 0 && c < Cols; } static void showMatrix(){ for(int i = 0; i < gameMatrix.length; i++) { System.out.print("["); for(int j = 0; j < gameMatrix[0].length; j++) { System.out.print(" " + gameMatrix[i][j]); } System.out.println(" ]"); } System.out.println(); } static void putValue(int value,int row,int col){ gameMatrix[row][col]=value; } public static void main(String[] args){ System.out.println("Initial Matrix:"); putValue(1, 4, 1); putValue(1, 5, 1); showMatrix(); for(int n = 0; n < 15; n++) for(int m = 0; m < 15; m++) visited[n][m] = false; //reset count count = 0; //dfs(bubbles.get(i).getRow(), bubbles.get(i).getCol(), color, false); // get the contiguous count dfs(5,1,color,false); //if there are more than 2 set the color to NOCOLOR for(int n = 0; n < 15; n++) for(int m = 0; m < 15; m++) visited[n][m] = false; if(count > 2) { //dfs(bubbles.get(i).getRow(), bubbles.get(i).getCol(), color, true); dfs(5,1,color,true); } System.out.println("Matrix after dfs:"); showMatrix(); } }


  • Remove enemy when bullet hits enemy

    - by jordi12100
    For my education I have to make a basic game in HTML5 canvas. The game is a shooter: you can move left and right, and space is shoot. When I shoot, the bullets move up. The enemy moves down. When a bullet hits an enemy, the enemy has to disappear and I gain +1 score. But the enemy disappears as soon as it comes up the screen. Demo: http://jordikroon.nl/test.html (space = shoot + enemy shows up). This is my code:

        for (i=0;i<enemyX.length;i++) {
            if(enemyX[i] > canvas.height) {
                enemyY.splice(i,1);
                enemyX.splice(i,1);
            } else {
                enemyY[i] += 5;
                moveEnemy(enemyX[i],enemyY[i]);
            }
        }

        for (i=0;i<bulletX.length;i++) {
            if(bulletY[i] < 0) {
                bulletY.splice(i,1);
                bulletX.splice(i,1);
            } else {
                bulletY[i] -= 5;
                moveBullet(bulletX[i],bulletY[i]);

                for (ib=0;ib<enemyX.length;ib++) {
                    if(bulletX[i] + 50 < enemyX[ib] || enemyX[ib] + 50 < bulletX[i] ||
                       bulletY[i] + 50 < enemyY[ib] || enemyY[ib] + 50 < bulletY[i]) {
                        ++score;
                        enemyY.splice(i,1);
                        enemyX.splice(i,1);
                    }
                }
            }
        }

    Objects:

        function moveBullet(posX,posY) {
            //console.log(posY);
            ctx.arc(posX, (posY-150), 10, 0 , 2 * Math.PI, false);
        }

        function moveEnemy(posX,posY) {
            ctx.rect(posX, posY, 50, 50);
            ctx.fillStyle = '#ffffff';
            ctx.fill();
        }
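
    For reference, the condition in the inner loop above is the "separated" test: it is true when the two boxes do not overlap on some axis, so the score is increased and the enemy removed while the bullet is still far away. A minimal sketch of the usual axis-aligned overlap test follows, shown in Java for illustration; the bulletSize name is made up, and the 50 comes from the question's enemy rectangle.

        // True when two axis-aligned boxes overlap on both axes.
        static boolean overlaps(float ax, float ay, float aw, float ah,
                                float bx, float by, float bw, float bh) {
            return ax < bx + bw && bx < ax + aw      // overlap on the x axis
                && ay < by + bh && by < ay + ah;     // overlap on the y axis
        }

        // Usage, mirroring the question's loop:
        // if (overlaps(bulletX[i], bulletY[i], bulletSize, bulletSize,
        //              enemyX[ib], enemyY[ib], 50, 50)) { ++score; /* remove enemy ib */ }

    Note also that the removal inside the inner loop splices enemyX/enemyY at index i (the bullet index) rather than ib, so even with the corrected test it would remove the wrong enemy.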


  • Image loaded from TGA texture isn't displayed correctly

    - by Ramy Al Zuhouri
    I have a TGA texture containing this image: The texture is 256x256. So I'm trying to load it and map it to a cube: #import <OpenGL/OpenGL.h> #import <GLUT/GLUT.h> #import <stdlib.h> #import <stdio.h> #import <assert.h> GLuint width=640, height=480; GLuint texture; const char* const filename= "/Users/ramy/Documents/C/OpenGL/Test/Test/texture.tga"; void init() { // Initialization glEnable(GL_DEPTH_TEST); glViewport(-500, -500, 1000, 1000); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45, width/(float)height, 1, 1000); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); gluLookAt(0, 0, -100, 0, 0, 0, 0, 1, 0); // Texture char bitmap[256][256][3]; FILE* fp=fopen(filename, "r"); assert(fp); assert(fread(bitmap, 3*sizeof(char), 256*256, fp) == 256*256); fclose(fp); glGenTextures(1, &texture); glBindTexture(GL_TEXTURE_2D, texture); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, bitmap); } void display() { glClearColor(0, 0, 0, 0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, texture); glColor3ub(255, 255, 255); glBegin(GL_QUADS); glVertex3f(0, 0, 0); glTexCoord2f(0.0, 0.0); glVertex3f(40, 0, 0); glTexCoord2f(0.0, 1.0); glVertex3f(40, 40, 0); glTexCoord2f(1.0, 1.0); glVertex3f(0, 40, 0); glTexCoord2f(1.0, 0.0); glEnd(); glDisable(GL_TEXTURE_2D); glutSwapBuffers(); } int main(int argc, char** argv) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE); glutInitWindowPosition(100, 100); glutInitWindowSize(width, height); glutCreateWindow(argv[0]); glutDisplayFunc(display); init(); glutMainLoop(); return 0; } But this is what I get when the window loads: So just half of the image is correctly displayed, and also with different colors.Then if I resize the window I get this: Magically the image seems to fix itself, even if the colors are wrong.Why?

