Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • What makes a good soundtrack for a social game?

    - by Maged
    There are many successful social games on Facebook and other social sites, such as Brain Buddies, Who Has the Biggest Brain and Word Challenge. These games have great soundtracks both during play and at the start of the game. My question is: how do I find a good soundtrack, or what should I look for in one, so that it helps attract the user, especially for games that need concentration?

    Read the article

  • Bouncing off a circular Boundary with multiple balls?

    - by Anarkie
    I am making a game like this : Yellow Smiley has to escape from red smileys, when yellow smiley hits the boundary game is over, when red smileys hit the boundary they should bounce back with the same angle they came, like shown below: Every 10 seconds a new red smiley comes in the big circle, when red smiley hits yellow, game is over, speed and starting angle of red smileys should be random. I control the yellow smiley with arrow keys. The biggest problem I have reflecting the red smileys from the boundary with the angle they came. I don't know how I can give a starting angle to a red smiley and bouncing it with the angle it came. I would be glad for any tips! My js source code : var canvas = document.getElementById("mycanvas"); var ctx = canvas.getContext("2d"); // Object containing some global Smiley properties. var SmileyApp = { radius: 15, xspeed: 0, yspeed: 0, xpos:200, // x-position of smiley ypos: 200 // y-position of smiley }; var SmileyRed = { radius: 15, xspeed: 0, yspeed: 0, xpos:350, // x-position of smiley ypos: 65 // y-position of smiley }; var SmileyReds = new Array(); for (var i=0; i<5; i++){ SmileyReds[i] = { radius: 15, xspeed: 0, yspeed: 0, xpos:350, // x-position of smiley ypos: 67 // y-position of smiley }; SmileyReds[i].xspeed = Math.floor((Math.random()*50)+1); SmileyReds[i].yspeed = Math.floor((Math.random()*50)+1); } function drawBigCircle() { var centerX = canvas.width / 2; var centerY = canvas.height / 2; var radiusBig = 300; ctx.beginPath(); ctx.arc(centerX, centerY, radiusBig, 0, 2 * Math.PI, false); // context.fillStyle = 'green'; // context.fill(); ctx.lineWidth = 5; // context.strokeStyle = '#003300'; // green ctx.stroke(); } function lineDistance( positionx, positiony ) { var xs = 0; var ys = 0; xs = positionx - 350; xs = xs * xs; ys = positiony - 350; ys = ys * ys; return Math.sqrt( xs + ys ); } function drawSmiley(x,y,r) { // outer border ctx.lineWidth = 3; ctx.beginPath(); ctx.arc(x,y,r, 0, 2*Math.PI); //red ctx.fillStyle="rgba(255,0,0, 0.5)"; ctx.fillStyle="rgba(255,255,0, 0.5)"; ctx.fill(); ctx.stroke(); // mouth ctx.beginPath(); ctx.moveTo(x+0.7*r, y); ctx.arc(x,y,0.7*r, 0, Math.PI, false); // eyes var reye = r/10; var f = 0.4; ctx.moveTo(x+f*r, y-f*r); ctx.arc(x+f*r-reye, y-f*r, reye, 0, 2*Math.PI); ctx.moveTo(x-f*r, y-f*r); ctx.arc(x-f*r+reye, y-f*r, reye, -Math.PI, Math.PI); // nose ctx.moveTo(x,y); ctx.lineTo(x, y-r/2); ctx.lineWidth = 1; ctx.stroke(); } function drawSmileyRed(x,y,r) { // outer border ctx.lineWidth = 3; ctx.beginPath(); ctx.arc(x,y,r, 0, 2*Math.PI); //red ctx.fillStyle="rgba(255,0,0, 0.5)"; //yellow ctx.fillStyle="rgba(255,255,0, 0.5)"; ctx.fill(); ctx.stroke(); // mouth ctx.beginPath(); ctx.moveTo(x+0.4*r, y+10); ctx.arc(x,y+10,0.4*r, 0, Math.PI, true); // eyes var reye = r/10; var f = 0.4; ctx.moveTo(x+f*r, y-f*r); ctx.arc(x+f*r-reye, y-f*r, reye, 0, 2*Math.PI); ctx.moveTo(x-f*r, y-f*r); ctx.arc(x-f*r+reye, y-f*r, reye, -Math.PI, Math.PI); // nose ctx.moveTo(x,y); ctx.lineTo(x, y-r/2); ctx.lineWidth = 1; ctx.stroke(); } // --- Animation of smiley moving with constant speed and bounce back at edges of canvas --- var tprev = 0; // this is used to calculate the time step between two successive calls of run function run(t) { requestAnimationFrame(run); if (t === undefined) { t=0; } var h = t - tprev; // time step tprev = t; SmileyApp.xpos += SmileyApp.xspeed * h/1000; // update position according to constant speed SmileyApp.ypos += SmileyApp.yspeed * h/1000; // update position according to constant speed for (var i=0; 
i<SmileyReds.length; i++){ SmileyReds[i].xpos += SmileyReds[i].xspeed * h/1000; // update position according to constant speed SmileyReds[i].ypos += SmileyReds[i].yspeed * h/1000; // update position according to constant speed } // change speed direction if smiley hits canvas edges if (lineDistance(SmileyApp.xpos, SmileyApp.ypos) + SmileyApp.radius > 300) { alert("Game Over"); } // redraw smiley at new position ctx.clearRect(0,0,canvas.height, canvas.width); drawBigCircle(); drawSmiley(SmileyApp.xpos, SmileyApp.ypos, SmileyApp.radius); for (var i=0; i<SmileyReds.length; i++){ drawSmileyRed(SmileyReds[i].xpos, SmileyReds[i].ypos, SmileyReds[i].radius); } } // uncomment these two lines to get every going // SmileyApp.speed = 100; run(); // --- Control smiley motion with left/right arrow keys function arrowkeyCB(event) { event.preventDefault(); if (event.keyCode === 37) { // left arrow SmileyApp.xspeed = -100; SmileyApp.yspeed = 0; } else if (event.keyCode === 39) { // right arrow SmileyApp.xspeed = 100; SmileyApp.yspeed = 0; } else if (event.keyCode === 38) { // up arrow SmileyApp.yspeed = -100; SmileyApp.xspeed = 0; } else if (event.keyCode === 40) { // right arrow SmileyApp.yspeed = 100; SmileyApp.xspeed = 0; } } document.addEventListener('keydown', arrowkeyCB, true); JSFiddle : http://jsfiddle.net/gj4Q7/
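
    A common way to get the bounce right is to reflect the velocity about the surface normal at the point of impact; for a circular boundary the normal is simply the direction from the circle's centre to the ball. The sketch below is a minimal, framework-free illustration of that formula (v' = v - 2(v.n)n). The centre and radius follow the poster's lineDistance check, but the function and variable names are placeholders, not taken from the question's code.

    import math
    import random

    CENTER_X, CENTER_Y = 350.0, 350.0   # centre used by the poster's lineDistance()
    BIG_RADIUS = 300.0                  # boundary radius from drawBigCircle()
    BALL_RADIUS = 15.0

    def bounce_off_circle(x, y, vx, vy):
        """Reflect (vx, vy) off the inside of the circular boundary if the ball
        at (x, y) is touching it; returns the (possibly unchanged) velocity."""
        dx, dy = x - CENTER_X, y - CENTER_Y
        dist = math.hypot(dx, dy)
        if dist + BALL_RADIUS < BIG_RADIUS:
            return vx, vy                         # still inside, nothing to do
        nx, ny = dx / dist, dy / dist             # outward unit normal at the hit point
        dot = vx * nx + vy * ny
        return vx - 2 * dot * nx, vy - 2 * dot * ny   # v' = v - 2(v.n)n

    # A random starting direction with a random speed, for newly spawned red smileys:
    angle = random.uniform(0, 2 * math.pi)
    speed = random.uniform(40, 80)
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)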

    Read the article

  • "has no motion" warnings

    - by Adam R. Grey
    When I reimport my project's Library, I get lots of warnings such as State combat.Ghoul Attack has no motion but I have no idea why. In this specific case, I looked up Ghoul Attack. Here's the state in which it appears, in the only animator controller that includes anything called Ghoul Attack: State: m_ObjectHideFlags: 3 m_PrefabParentObject: {fileID: 0} m_PrefabInternal: {fileID: 0} m_Name: Ghoul Attack m_Speed: 1 m_CycleOffset: 0 m_Motions: - {fileID: 7400000, guid: 0db269712a91fd641b6dd5e0e4c6d507, type: 3} - {fileID: 0} m_ParentStateMachine: {fileID: 110708233} m_Position: {x: 492, y: 132, z: 0} m_IKOnFeet: 1 m_Mirror: 0 m_Tag: I thought perhaps that second one - {fileID: 0} was throwing up the warning incorrectly, so I removed it. There was no effect, I still get warnings about Ghoul Attack. So given that the only state I know of with that name does in fact have motion, what is this warning actually trying to tell me?

    Read the article

  • Creating practically solvable 15 puzzle inputs

    - by Ashwin
    I am developing a 15-puzzle game. I know how to detect unsolvable puzzles, but unlike the 8-puzzle, the 15-puzzle takes quite a long time to solve for some input states, while other states can be solved within 5 seconds. The problem is that I cannot give the user (the player) a puzzle whose solution takes more than 10 seconds to compute (if they choose to see the solution). So when I initially shuffle the puzzle, I want to present only those puzzles which can be solved within 10 seconds. There must be some way to determine the hardness of a puzzle; I tried searching the net but could not find one. Does anyone know a way of determining the hardness of a puzzle? NOTE: I am using the A* algorithm to find the solution, on a computer with 3 GB RAM and a 2.27 GHz processor.
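
    One practical proxy for "hardness" is the value of an admissible heuristic on the shuffled state: a board whose Manhattan-distance estimate is low will almost always be solved by A* quickly, so you can keep reshuffling until the estimate falls under a threshold calibrated against your 10-second budget. A minimal sketch of that idea follows; the threshold value is an assumption to be tuned empirically on your hardware, and adding linear-conflict detection would make the estimate tighter.

    def manhattan_distance(board):
        """board: tuple of 16 ints, 0 = blank, goal = (1, 2, ..., 15, 0)."""
        dist = 0
        for index, tile in enumerate(board):
            if tile == 0:
                continue
            goal_index = tile - 1
            dist += abs(index // 4 - goal_index // 4) + abs(index % 4 - goal_index % 4)
        return dist

    def shuffle_until_easy(shuffle_fn, threshold=30):
        """Keep shuffling until the heuristic says the puzzle is 'easy enough'.
        shuffle_fn is assumed to return a random solvable board."""
        board = shuffle_fn()
        while manhattan_distance(board) > threshold:
            board = shuffle_fn()
        return board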

    Read the article

  • Keep 3d model facing the camera at all angles

    - by Sparky41
    I'm trying to keep a 3D plane facing the camera at all angles, and I have had some success with this: Vector3 gunToCam = cam.cameraPosition - getWorld.Translation; Vector3 beamRight = Vector3.Cross(torpDirection, gunToCam); beamRight.Normalize(); Vector3 beamUp = Vector3.Cross(beamRight, torpDirection); shipeffect.beamWorld = Matrix.Identity; shipeffect.beamWorld.Forward = (torpDirection) * 1f; shipeffect.beamWorld.Right = beamRight; shipeffect.beamWorld.Up = beamUp; shipeffect.beamWorld.Translation = shipeffect.beamPosition; (Note: I didn't write this logic myself; I just found it rather useful.) It only seems to face the camera at certain angles. For example, if I place the camera behind the plane, you can see that it only rolls around its axis, like this: http://i.imgur.com/FOKLx.png (imagine you are looking from behind where you have fired from). Any idea what the problem is? Angles are not my specialty. shipeffect is an object that holds these class variables: public Texture2D altBeam; public Model beam; public Matrix beamWorld; public Matrix[] gunTransforms; public Vector3 beamPosition;
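
    For a true billboard that faces the camera from every direction (rather than rolling about a fixed forward axis), the usual trick is to build the basis from the object-to-camera direction itself rather than crossing with a fixed travel direction. A framework-neutral sketch of that construction, assuming you already have the camera and object positions as 3-vectors; the function name and the world-up choice are illustrative, not the poster's API.

    import numpy as np

    def billboard_axes(object_pos, camera_pos, world_up=(0.0, 1.0, 0.0)):
        """Return (right, up, forward) unit vectors for a quad that faces the camera.
        Pick a different world_up if the camera can sit directly above/below the object."""
        forward = np.asarray(camera_pos, dtype=float) - np.asarray(object_pos, dtype=float)
        forward /= np.linalg.norm(forward)
        right = np.cross(world_up, forward)
        right /= np.linalg.norm(right)
        up = np.cross(forward, right)        # already unit length
        return right, up, forward

    # These three vectors become the Right/Up/Forward parts of the world matrix
    # (rows or columns depending on the maths library's convention), with the
    # object position as the translation.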

    Read the article

  • Server for online browser game

    - by Tim Rogers
    I am going to be making an online single-player browser game. The online element is needed so that a player can log in and store the state of their game. This will include things like which buildings have been built and where they have been positioned, as well as the user's personal statistics and achievements. At this point in time, I am expecting all of the game logic to be performed client side. So far, I am thinking I will use Flash for the client side of the game, and I am creating a MySQL database to store all of the users' information. My question is: how do I connect the two? Presumably I will need some sort of server application which will listen for incoming requests from clients, perform the SQL query and then return the data. Does anyone have any recommendations of what technology/language to use?
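
    Since the game logic runs client side and the server only needs to persist state, a small HTTP/JSON service in front of the database is usually enough; the Flash client then talks to it with URLLoader/URLRequest. Below is a minimal sketch of such a save/load service, written in Python with Flask purely as an illustration; the endpoint names and table layout are assumptions, and SQLite stands in for MySQL.

    import json
    import sqlite3                      # stand-in for MySQL; swap in a MySQL driver later
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    with sqlite3.connect("game.db") as db:
        db.execute("CREATE TABLE IF NOT EXISTS game_state "
                   "(player_id INTEGER PRIMARY KEY, state_json TEXT)")

    @app.route("/save/<int:player_id>", methods=["POST"])
    def save_state(player_id):
        state = request.get_json()      # e.g. {"buildings": [...], "stats": {...}}
        with sqlite3.connect("game.db") as db:
            db.execute("REPLACE INTO game_state (player_id, state_json) VALUES (?, ?)",
                       (player_id, json.dumps(state)))
        return jsonify(ok=True)

    @app.route("/load/<int:player_id>")
    def load_state(player_id):
        with sqlite3.connect("game.db") as db:
            row = db.execute("SELECT state_json FROM game_state WHERE player_id = ?",
                             (player_id,)).fetchone()
        return jsonify(json.loads(row[0]) if row else {})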

    Read the article

  • View matrix question (rotate by 180 degrees)

    - by King Snail
    I am using a third-party rendering API on top of OpenGL code and I cannot get my matrices correct. The API states this: "We're right handed by default, and we treat y as up by convention. Since IwGx's coordinate system has (0,0) as the top left, you typically need a 180 degree rotation around Z in your view matrix. I think the viewer does this by default." In my OpenGL app I have access to the view and projection matrices separately. How can I convert them to fit the criteria used by my third-party rendering API? I don't understand what they mean by rotating 180 degrees around Z: is that done in the view matrix itself, or somewhere in the camera before building the view matrix? Any code would be helpful, thanks.
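
    Rotating "180 degrees around Z in the view matrix" just means multiplying the existing view matrix by a rotation that negates the X and Y axes (cos 180 = -1, sin 180 = 0), which compensates for the API treating (0,0) as the top left. A small, library-agnostic sketch of that multiplication; column-major, right-handed layout is assumed here, so adjust the multiplication order to your own convention.

    import numpy as np

    def rotate_z_180(view):
        """Return the view matrix with an extra 180-degree rotation about Z applied."""
        rot_z_180 = np.array([
            [-1.0,  0.0, 0.0, 0.0],
            [ 0.0, -1.0, 0.0, 0.0],
            [ 0.0,  0.0, 1.0, 0.0],
            [ 0.0,  0.0, 0.0, 1.0],
        ])
        return rot_z_180 @ view      # apply the flip after the camera transform

    # The projection matrix stays unchanged.
    adjusted_view = rotate_z_180(np.eye(4))   # replace np.eye(4) with your actual view matrix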

    Read the article

  • How to simulate pressure with particles?

    - by BeachRunnerJoe
    I'm trying to simulate pressure with a collection of spherical particles in a Unity game I'm building. A couple of notes about the problem: The goal is to fill a constantly changing 2D space/void with small, frictionless spheres. The game is trying to simulate the ever-growing pressure of more objects being shoved into this space. The level itself is constantly scrolling from left to right, meaning that if the space's dimensions are not changed by the user it will automatically get smaller (the leftmost part of the space will continually scroll off-screen). I'm wondering what approaches I can take to tackle these problems: knowing when to detect that there is space to fill and then adding spheres to the space; removing spheres from the space when it is shrinking; and strategies to simulate pressure on the spheres such that they "explode outwards" when more space is created. The current approach I am contemplating is a constantly moving wall that sits just off screen and moves with the screen. This moving wall will push and trap the spheres into the space. As for adding new spheres, I was going to either (1) have spheres replicate themselves upon detecting free space, or (2) spawn them at the left side of the space (where the wall is), pushing the rest of the spheres along to fill the space. I foresee problems with idea #1 because it likely wouldn't really create/simulate pressure; idea #2 seems more promising, but raises the question of where to spawn these new sphere particles (and the ramifications of spawning them when there is no space). Thanks so much in advance for your wisdom!
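
    One cheap way to fake pressure with frictionless spheres is a soft penalty force: every overlapping pair pushes apart with a force proportional to how much it overlaps, so when the space shrinks the overlaps grow, and when new space appears the accumulated force makes the packing "explode outwards" on its own. A minimal 2D sketch of that idea follows; the stiffness constant is an assumption to tune, and the O(n^2) pair loop would be replaced by a spatial grid for large counts.

    import math

    STIFFNESS = 200.0          # assumed spring constant; tune until it feels right

    def pressure_forces(particles, radius):
        """particles: list of dicts with 'x' and 'y'; returns one (fx, fy) per particle."""
        forces = [(0.0, 0.0)] * len(particles)
        for i in range(len(particles)):
            for j in range(i + 1, len(particles)):
                dx = particles[j]["x"] - particles[i]["x"]
                dy = particles[j]["y"] - particles[i]["y"]
                dist = math.hypot(dx, dy) or 1e-6
                overlap = 2 * radius - dist
                if overlap > 0:                        # only overlapping pairs push
                    fx = STIFFNESS * overlap * dx / dist
                    fy = STIFFNESS * overlap * dy / dist
                    fi, fj = forces[i], forces[j]
                    forces[i] = (fi[0] - fx, fi[1] - fy)
                    forces[j] = (fj[0] + fx, fj[1] + fy)
        return forces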

    Read the article

  • How do I draw anti-aliased holes in a bitmap

    - by gyozo kudor
    I have an artillery game (a hobby/learning project), and when the projectile hits, it leaves a hole in the ground. I want this hole to have anti-aliased edges. I'm using System.Drawing for this. I've tried clipping paths, and drawing with a transparent color using gfx.CompositingMode = CompositingMode.SourceCopy, but both give me the same result. If I draw a circle with a solid color it works fine, but I need a hole, a circle with 0 alpha values. I have enabled the following, but they only work with solid colors: gfx.CompositingQuality = CompositingQuality.HighQuality; gfx.InterpolationMode = InterpolationMode.HighQualityBicubic; gfx.SmoothingMode = SmoothingMode.AntiAlias; In the two pictures, consider black as transparent. The first shows what I have (zoomed in), and the second shows what I need (made with Photoshop). This will be just a visual effect; in the collision-detection code I still treat everything with alpha 128 as solid. Edit: I'm using OpenTK for this game, but for this question I don't think it matters; it's probably GDI+ related.
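
    Because GDI+ anti-aliases what it draws rather than what it erases, one workaround is to compute the hole's alpha yourself: for each pixel near the circle's rim, derive the alpha from how far the pixel centre is from the edge, which gives a roughly one-pixel soft transition. Below is a library-neutral sketch of that per-pixel coverage idea; it assumes the ground is available as a flat RGBA byte buffer, and the function name is a placeholder.

    import math

    def punch_soft_hole(pixels, width, height, cx, cy, radius):
        """pixels: mutable flat RGBA bytearray; zeroes alpha inside the circle and
        feathers it over roughly one pixel at the rim."""
        for y in range(max(0, int(cy - radius - 2)), min(height, int(cy + radius + 3))):
            for x in range(max(0, int(cx - radius - 2)), min(width, int(cx + radius + 3))):
                dist = math.hypot(x + 0.5 - cx, y + 0.5 - cy)
                coverage = min(1.0, max(0.0, dist - radius + 0.5))  # 0 inside, 1 outside
                index = (y * width + x) * 4 + 3                     # alpha channel
                pixels[index] = int(pixels[index] * coverage)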

    Read the article

  • Improving Click and Drag with C++

    - by Josh
    I'm currently using SFML 2.0 to develop a game in C++. I have a game sprite class that has a click and drag method. The method works, but there is a slight problem. If the mouse moves too fast, the object the user selected can't keep up and is left behind in the spot where the mouse left its bounds. I will share the class definition and the given function implementation. Definition: class codePeg { protected: FloatRect bounds; CircleShape circle; int xPos, yPos, xDiff, yDiff, once; int xBase, yBase; Vector2i mousePos; Vector2f circlePos; public: void init(RenderWindow& Window); void draw(RenderWindow& Window); void drag(RenderWindow& Window); void setPegPosition(int x, int y); void setPegColor(Color pegColor); void mouseOver(RenderWindow& Window); friend int isPegSelected(void); }; Implementation of the "drag" function: void codePeg::drag(RenderWindow& Window) { mousePos = Mouse::getPosition(Window); circlePos = circle.getPosition(); if(Mouse::isButtonPressed(Mouse::Left)) { if(mousePos.x > xPos && mousePos.y > yPos && mousePos.x - bounds.width < xPos && mousePos.y - bounds.height < yPos) { if(once) { xDiff = mousePos.x - circlePos.x; yDiff = mousePos.y - circlePos.y; once = 0; } xPos = mousePos.x - xDiff; yPos = mousePos.y - yDiff; circle.setPosition(xPos, yPos); } } else { once = 1; xPos = xBase; yPos = yBase; xDiff = 0; yDiff = 0; circle.setPosition(xBase, yBase); } Window.draw(circle); } Like I said, the function works, but to me, the code is very ugly and I think it could be improved and could be more efficient. The only thing I can think of as to why the object cannot keep up with the mouse is that there are too many function calls and/or checks. The user does not really have to mouse the mouse "fast" for it to happen, I would say at an average pace the object is left behind. How can I improve the code so that the object remains with the mouse when it is selected? Any help improving this code or giving advice is greatly appreciated.
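
    The usual fix is to separate "the drag has started" from "the cursor is still inside the peg": do the bounds test only on the frame the button goes down, set a dragging flag, and from then on follow the mouse until the button is released, even if the cursor momentarily escapes the peg's rectangle between frames. A language-neutral sketch of that state machine follows; the class and field names are placeholders, not the poster's codePeg API.

    class Draggable:
        def __init__(self, x, y, width, height):
            self.x, self.y = x, y
            self.width, self.height = width, height
            self.dragging = False
            self.grab_dx = 0
            self.grab_dy = 0

        def update(self, mouse_x, mouse_y, button_down):
            if button_down and not self.dragging:
                # Hit-test only once, on the frame the press begins.
                if (self.x <= mouse_x <= self.x + self.width and
                        self.y <= mouse_y <= self.y + self.height):
                    self.dragging = True
                    self.grab_dx = mouse_x - self.x
                    self.grab_dy = mouse_y - self.y
            elif not button_down:
                self.dragging = False

            if self.dragging:
                # Follow the cursor every frame, no matter how fast it moves.
                self.x = mouse_x - self.grab_dx
                self.y = mouse_y - self.grab_dy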

    Read the article

  • How can I Implement KeyListeners/ActionListeners into the JFrame?

    - by A.K.
    I'll get to the point: I have a player in my game that you control with the keyboard yet the key methods in the player class and ActionListener w/ KeyAdapter in the Board class don't seem to fire. So far I've tried adding these key methods into the JFrame, doesn't seem to let me move him even though other objects that I have (enemies) can move fine. Here's part of the JFrame class with the event listeners: frm.addKeyListener(KeyBoardListener); public void mouseClicked(MouseEvent e) { nSound.play(); StartB.setContentAreaFilled(false); cards.remove(StartB); frm.remove(TitleL); frm.remove(cards); frm.setLayout(new GridLayout(1, 1)); frm.add(nBoard); //Add Game "Tiles" Or Content. x = 1200 nBoard.setPreferredSize(new Dimension(1200, 420)); cards.revalidate(); frm.validate(); } public KeyListener KeyBoardListener = new KeyListener() { @Override public void keyPressed(KeyEvent args0) { int key = args0.getKeyCode(); if(key == KeyEvent.VK_LEFT) { nBoard.S.vx = -4; } if(key == KeyEvent.VK_RIGHT) { nBoard.S.vx = 4; } if(key == KeyEvent.VK_UP) { nBoard.S.vy = -4; } if(key == KeyEvent.VK_DOWN) { nBoard.S.vy = 4; } if(key == KeyEvent.VK_SPACE) { nBoard.S.fire(); } } @Override public void keyReleased(KeyEvent args0) { int key = args0.getKeyCode(); if(key == KeyEvent.VK_LEFT) { nBoard.S.vx = 0; } if(key == KeyEvent.VK_RIGHT) { nBoard.S.vx = 0; } if(key == KeyEvent.VK_UP) { nBoard.S.vy = 0; } if(key == KeyEvent.VK_DOWN) { nBoard.S.vy = 0; } } @Override public void keyTyped(KeyEvent args0) { // TODO Auto-generated method stub } };

    Read the article

  • Unity Occlusion Portals: What and How?

    - by Nick Wiggill
    (Here I eat my words on Meta about posting Unity questions on Unity Answers... since that site is less responsive than this one.) Unity provides cell-based Occlusion Culling (via Umbra, I believe). However, a newer feature that it supports is Occlusion Portals. The question is, if BSP-based occlusion culling is already a feature of Unity, what do portals add, and how? PS. This question is not "What are portals?" -- I'm aware of the original Quake BSP-style portals -- which is partly why I find the explicit portal concept in Unity odd, since it uses BSP anyway.

    Read the article

  • Vertex fog producing black artifacts

    - by Nick
    I originally posted this question on the XNA forums but got no replies, so maybe someone here can help: I am rendering a textured model using the XNA BasicEffect. When I enable fog, the model outline is still visible as many small black dots when it should be "in the fog". Why is this happening? Here's what it looks like for me -- http://tinypic.com/r/fnh440/6 Here is a minimal example showing my problem: (the ship model that this example uses is from the chase camera sample on this site -- http://xbox.create.msdn.com/en-US/education/catalog/sample/chasecamera -- in case anyone wants to try it out ;)) public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; Model model; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); // TODO: use this.Content to load your game content here model = Content.Load<Model>("ship"); foreach (ModelMesh mesh in model.Meshes) { foreach (BasicEffect be in mesh.Effects) { be.EnableDefaultLighting(); be.FogEnabled = true; be.FogColor = Color.CornflowerBlue.ToVector3(); be.FogStart = 10; be.FogEnd = 30; } } } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); // TODO: Add your drawing code here model.Draw(Matrix.Identity * Matrix.CreateScale(0.01f) * Matrix.CreateRotationY(3 * MathHelper.PiOver4), Matrix.CreateLookAt(new Vector3(0, 0, 30), Vector3.Zero, Vector3.Up), Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 16f/9f, 1, 100)); base.Draw(gameTime); } }

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based where the world status would need to be updated about once per minute. I am looking for an answer that persuades me to either perform the world update on the google servers OR an authoritative server that syncs with the datastore. The main goal here would be to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require: Reading 5 private entity variables (fetched from datastore) Fetching as many as 20 static variables (from datastore or persisted in server memory) Writing 5 entity variables Clients of the game would authenticate and set state directly against GAE as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute. This would update all of the entities and save the results to the datastore. This would be more CPU intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states and pushing the new state variables back to the datastore. This would be more bandwidth intensive for the datastore.
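
    A quick back-of-the-envelope calculation makes the trade-off concrete. Assuming each of the 10,000 entities is fetched and written back once per update, one update per minute, and the roughly 20 static values cached in memory, the datastore traffic is the same whichever machine does the maths; what differs is whether you also pay GAE CPU quota (cron on GAE) or datastore bandwidth to an external machine (authoritative server). The figures below simply restate the poster's assumptions, they are not measured numbers.

    ENTITIES = 10_000
    UPDATES_PER_DAY = 60 * 24           # one world update per minute

    entity_reads_per_day = ENTITIES * UPDATES_PER_DAY    # 14,400,000 entity gets
    entity_writes_per_day = ENTITIES * UPDATES_PER_DAY   # 14,400,000 entity puts

    print(f"datastore reads/day:  {entity_reads_per_day:,}")
    print(f"datastore writes/day: {entity_writes_per_day:,}")
    # Both architectures pay these datastore operations; the cron-on-GAE version
    # additionally spends CPU quota doing the update, while the external server
    # additionally spends bandwidth shipping entity data in and out every minute.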

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article

  • Rendering different materials in a voxel terrain

    - by MaelmDev
    Each voxel datapoint in my terrain model is made up of two properties: density and material type. Each is stored as an unsigned integer value (but the density is interpreted as a decimal value between 0 and 1). My current idea for rendering these different materials on the terrain mesh is to store eleven extra attributes in each vertex: six material values corresponding to the materials of the voxels that the vertices lie between, three decimal values that correspond to the interpolation each vertex has between each voxel, and two decimal values that are used to determine where the fragment lies on the triangle. The material and interpolation attributes are the exact same for each vertex in the triangle. The fragment shader samples each texture that corresponds to each material and then uses the aforementioned couple of decimal values to interpolate between these samples and obtain the final textured color of the fragment. It should work fine, but it seems like a big memory hog. I won't be able to reuse vertices in the mesh with indexing, and each vertex will have a lot of data associated with it. It also seems pretty slow. What are some ways to improve or replace this technique for drawing materials on a voxel terrain mesh?

    Read the article

  • What features does D3D have that OpenGL does not (and vice versa)?

    - by Tom
    Are there any feature comparisons between Direct3D 11 and the newest OpenGL versions? Simply put, Direct3D 11 introduced these main features (taken from Wikipedia): tessellation, multithreaded rendering, compute shaders, and an increased texture cache. Now I'm wondering: how do the newest versions of OpenGL cope with these features? And since I have a feeling there are also features on OpenGL's side that Direct3D lacks, what are those?

    Read the article

  • How to visually "connect" skybox edges with terrain model

    - by David
    I'm working on a simple airplane game where I use skybox cube rendered using disabled depth test. Very close to the bottom side of the skybox is my terrain model. What bothers me is that the terrain is not connected to the skybox bottom. This is not visible while the plane flies low, but as it gets some altitude, the terrain looks smaller because of the perspective. Since the skybox center is always same as the camera position, the skybox moves with the plane, but the terrain goes into the distance. Ok, I think you understand the problem. My question is how to fix it. It's an airplane game so limiting max altitude is not possible. I thought about some way to stretch terrain to always cover whole bottom side of the skybox cube, but that doesn't feel right and I don't even know how would I calculate new terrain dimensions every frame. Here are some screenshot of games where you can clearly see the problem: (oops, I cannot post images yet) darker brown is the skybox bottom here: http://i.stack.imgur.com/iMsAf.png untextured brown is the skybox bottom here: http://i.stack.imgur.com/9oZr7.png

    Read the article

  • Collision detection via adjacent tiles - sprite too big

    - by BlackMamba
    I have managed to create a collision detection system for my tile-based jump'n'run game (written in C++/SFML), where I check on each update what values the surrounding tiles of the player contain and then I let the player move accordingly (i. e. move left when there is an obstacle on the right side). This works fine when the player sprite is not too big: Given a tile size of 5x5 pixels, my solution worked quite fine with a spritesize of 3x4 and 5x5 pixels. My problem is that I actually need the player to be quite gigantic (34x70 pixels given the same tilesize). When I try this, there seems to be an invisible, notably smaller boundingbox where the player collides with obstacles, the player also seems to shake strongly. Here some images to explain what I mean: Works: http://tinypic.com/r/207lvfr/8 Doesn't work: http://tinypic.com/r/2yuk02q/8 Another example of non-functioning: http://tinypic.com/r/kexbwl/8 (the player isn't falling, he stays there in the corner) My code for getting the surrounding tiles looks like this (I removed some parts to make it better readable): std::vector<std::map<std::string, int> > Game::getSurroundingTiles(sf::Vector2f position) { // converting the pixel coordinates to tilemap coordinates sf::Vector2u pPos(static_cast<int>(position.x/tileSize.x), static_cast<int>(position.y/tileSize.y)); std::vector<std::map<std::string, int> > surroundingTiles; for(int i = 0; i < 9; ++i) { // calculating the relative position of the surrounding tile(s) int c = i % 3; int r = static_cast<int>(i/3); // we subtract 1 to place the player in the middle of the 3x3 grid sf::Vector2u tilePos(pPos.x + (c - 1), pPos.y + (r - 1)); // this tells us what kind of block this tile is int tGid = levelMap[tilePos.y][tilePos.x]; // converts the coords from tile to world coords sf::Vector2u tileRect(tilePos.x*5, tilePos.y*5); // storing all the information std::map<std::string, int> tileDict; tileDict.insert(std::make_pair("gid", tGid)); tileDict.insert(std::make_pair("x", tileRect.x)); tileDict.insert(std::make_pair("y", tileRect.y)); // adding the stored information to our vector surroundingTiles.push_back(tileDict); } // I organise the map so that it is arranged like the following: /* * 4 | 1 | 5 * -- -- -- * 2 | / | 3 * -- -- -- * 6 | 0 | 7 * */ return surroundingTiles; } I then check in a loop through the surrounding tiles, if there is a 1 as gid (indicates obstacle) and then check for intersections with that adjacent tile. The problem I just can't overcome is that I think that I need to store the values of all the adjacent tiles and then check for them. How? And may there be a better solution? Any help is appreciated. P.S.: My implementation derives from this blog entry, I mostly just translated it from Objective-C/Cocos2d.
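
    Once the sprite spans more than one tile in each direction (a 34x70 sprite on 5x5 tiles covers roughly a 7x15 block), a fixed 3x3 neighbourhood is no longer enough: the robust approach is to derive the tile range directly from the sprite's bounding box (left/right/top/bottom divided by the tile size) and test every tile inside that rectangle. A small sketch of that lookup, independent of SFML; the function, map and gid names are assumptions standing in for the poster's levelMap layout.

    def solid_tiles_under(bbox, level_map, tile_size, solid_gid=1):
        """bbox: (x, y, width, height) in pixels; yields the world-space rects of all
        solid tiles the bounding box overlaps, however large the sprite is."""
        x, y, w, h = bbox
        first_col = int(x // tile_size)
        last_col = int((x + w - 1) // tile_size)
        first_row = int(y // tile_size)
        last_row = int((y + h - 1) // tile_size)
        for row in range(first_row, last_row + 1):
            for col in range(first_col, last_col + 1):
                if level_map[row][col] == solid_gid:
                    yield (col * tile_size, row * tile_size, tile_size, tile_size)

    # Usage: resolve collisions against every rect this yields, instead of only
    # the eight neighbours of the tile under the sprite's centre.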

    Read the article

  • Android Live Testing

    - by Matthew Dockerty
    I am making a game for android and in it I am using sensors which are not available in the emulator. At the moment I am connecting my device and transferring the apk, then installing to test but that is a pain to do, and I have gotten to the stage where I need to start logging values for debugging. I have gone into the run configs of my app and set it to prompt me to pick a device, but my device is never in the list when it is connected to my PC and I try to run it. How am I supposed to set it up to work properly? Thanks for the help.

    Read the article

  • How can I implement an Iris Wipe effect?

    - by Vandell
    For those who don't know: an iris wipe is a wipe that takes the shape of a growing or shrinking circle. It has frequently been used in animated short subjects, such as those in the Looney Tunes and Merrie Melodies cartoon series, to signify the end of a story. When used in this manner, the iris wipe may be centered around a certain focal point and may be used as a device for a "parting shot" joke, a fourth-wall-breaking wink by a character, or other purposes. (Example from flasheff.com.) Your answer may or may not include a code sample; a language-agnostic explanation is considered enough.
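
    At its core an iris wipe is just a full-screen opaque overlay with a circular hole whose radius animates from "big enough to cover the screen" down to zero (or the reverse), usually centred on the focal point of the gag. A language-agnostic sketch of the radius animation and the per-pixel mask test follows; the linear easing and the parameter names are arbitrary choices, not part of any particular engine.

    import math

    def iris_radius(t, duration, screen_w, screen_h, closing=True):
        """t: seconds since the wipe started. Returns the hole radius in pixels."""
        max_radius = math.hypot(screen_w, screen_h) / 2    # covers the whole screen
        progress = min(1.0, max(0.0, t / duration))
        return max_radius * (1.0 - progress) if closing else max_radius * progress

    def inside_iris(px, py, cx, cy, radius):
        """True if the pixel should still show the scene (inside the hole)."""
        return (px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2

    # In practice you would not test per pixel on the CPU: draw the opaque overlay
    # and either stencil out a circle of iris_radius(...) or run the same distance
    # test in a fragment shader.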

    Read the article

  • How do I get the correct values from glReadPixels in OpenGL 3.0?

    - by NoobScratcher
    I'm currently trying to Implement mouse selection into my game editor and I ran into a little problem when I look at the values stored in &pixel[0],&pixel[1],&pixel[2],&pixel[3]; I get r: 0 g: 0 b: 0 a: 0 As you can see I'm not able to get the correct values from glReadPixels(); My 3D models are red colored using glColor3f(255,0,0); I was hoping someone could help me figure this out. Here is the source code: case WM_LBUTTONDOWN: { GetCursorPos(&pos); ScreenToClient(hwnd, &pos); GLenum err = glGetError(); while (glGetError() != GL_NO_ERROR) {cerr << err << endl;} glReadPixels(pos.x, SCREEN_HEIGHT - 1 - pos.y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, &pixel[0] ); cerr << "r: "<< (int)pixel[0] << endl; cerr << "g: "<< (int)pixel[1] << endl; cerr << "b: "<< (int)pixel[2] << endl; cerr << "a: "<< (int)pixel[3] << endl; cout << pos.x << endl; cout << pos.y << endl; } break; I use : WIN32 API OPENGL 3.0 C++

    Read the article

  • How can I handle copyrighted music?

    - by David Dimalanta
    I have a question regarding the music used in music rhythm games. In Guitar Hero, for example, they used many different music albums in one program, and each album requires asking permission from the owner, the composer, or the copyright holder of the music. Say you used 15 albums in your music rhythm game: then you have to contact 15 copyright owners, and it may be that part of the profit the game developer earns goes to the copyright owners of the music. For independent game developers, is it okay to use copyrighted music by just mentioning the name of the singer in the credits and on the music select screen, or should they use non-popular/old music from about 50 years ago? And can indie game developers still earn money by making a free downloadable game?

    Read the article

  • Repeat a part of spritesheet as background

    - by Moiblpadde
    So I'm trying to repeat a part of my spritesheet as a background (JS, canvas). My code so far: var canvas = $("#board")[0], ctx = canvas.getContext("2d"), sprite = new Image(); sprite.src = "spritesheet.png"; sprite.onload = function(){ ctx.fillStyle = ctx.createPattern(sprite, "repeat"); ctx.fillRect(0, 25, 500, 500); } This works, but as you can see, it repeats the whole sprite, not just a part of it, and I just can't figure out how to repeat only a part. D:

    Read the article

  • How can I save state from script in a multithreaded engine?

    - by Peter Ren
    We are building a multithreaded game engine and we've encountered the problem described below. The engine has 3 threads in total: script, render, and audio. Each frame, we update these 3 threads simultaneously. As these threads update themselves, they produce tasks and put them into a public storage area. When all the threads finish their update, each thread goes and copies the tasks meant for it, one by one. After all the threads finish copying their tasks, we make the threads process those tasks and then update them simultaneously again, as described before. That is the general idea of the task-scheduling part of our engine. The task scheduling itself works well, but here's the problem. To keep it simple, I'll take the camera as an example: local oldPos = camera:getPosition() -- ( 0, 0, 0 ) camera:setPosition( 1, 1, 1 ) -- Won't take effect yet, because the render thread will process the task at the beginning of the next frame local newPos = camera:getPosition() -- Still ( 0, 0, 0 ) So that's the problem: if you intend to change a property of an object owned by another thread, you have to wait until that thread processes the property-changing message. As a result, what you get from the object is still the information from the last frame. Is there a way to solve this problem, or did we build the task-scheduling part in the wrong way? Thanks for your answers :)
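
    A common fix is to let the script thread keep its own shadow copy of every property it is allowed to write: setPosition updates the shadow value immediately and queues the task for the render thread, while getPosition reads the shadow value, so within one frame the script always sees its own latest writes. A small sketch of such a proxy follows; the class, queue and message names are placeholders, not the engine's real API.

    import collections

    class CameraProxy:
        """Script-thread view of the render-thread camera."""
        def __init__(self, task_queue, position=(0.0, 0.0, 0.0)):
            self._position = position      # shadow copy owned by the script thread
            self._tasks = task_queue       # shared queue drained by the render thread

        def get_position(self):
            return self._position          # always reflects the latest script write

        def set_position(self, x, y, z):
            self._position = (x, y, z)                               # update the shadow now
            self._tasks.append(("camera.set_position", (x, y, z)))   # render thread applies it later

    tasks = collections.deque()
    camera = CameraProxy(tasks)
    old_pos = camera.get_position()        # (0.0, 0.0, 0.0)
    camera.set_position(1.0, 1.0, 1.0)
    new_pos = camera.get_position()        # (1.0, 1.0, 1.0) within the same frame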

    Read the article
