Search Results

Search found 32277 results on 1292 pages for 'module development'.


  • Water Simulation in LIBGDX [on hold]

    - by Noah Huppert
    I am doing some R&D for a game and am now tackling the topic of water.
    The goal: Make water that can flow, i.e. you can have an origin point that water shoots out from, or a downhill slope. Make it so water splashes, so when an object hits the water there is a splash. In other words: an actual physics water sim.
    The current way I know how to do it: I know how to create a shader that makes an object look like it's water by making waves. Combined with that, you can check whether an object is colliding and apply an upward force to simulate buoyancy.
    What is wrong with that way: The water does not flow, and there are no splashes.
    Possible solution: Have particles that are fairly large that interact with each other to simulate water. Possible drawback: performance.
    Question: Is there a better way to do water, or is using particles as described the only way?
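    For reference, a minimal sketch of the buoyancy idea mentioned above (apply an upward force while an object overlaps the water), written in plain Java; every name and constant here is an illustrative assumption, not tied to any particular engine:

        // Push a body upward in proportion to how much of it sits below the water line.
        class Body {
            float y;          // vertical position of the body's bottom-facing centre (y grows downward)
            float height;     // body height
            float velocityY;  // vertical velocity
        }

        class Water {
            float surfaceY;              // y coordinate of the water surface
            float buoyancy = 9.8f * 2f;  // assumed constant: stronger than gravity so bodies float up

            void apply(Body b, float dt) {
                float bottom = b.y + b.height / 2f;
                float submerged = bottom - surfaceY;                     // how far the body reaches below the surface
                if (submerged > 0f) {
                    float fraction = Math.min(submerged / b.height, 1f); // 0..1 of the body under water
                    b.velocityY -= buoyancy * fraction * dt;             // upward force scaled by submerged fraction
                    b.velocityY *= 0.98f;                                // crude drag so it settles instead of oscillating forever
                }
            }
        }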

    Read the article

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst case scenario I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap< Integer, Integer [][]
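    For reference, a minimal sketch of RLE over one row of tile ids, as described above (plain Java, independent of the HashMap layout; names are illustrative):

        // Run-length encode a single row of tile ids. Each run is stored as {tileId, count}.
        // Editing one tile only requires re-encoding the 128-entry row that contains it, which is cheap.
        final class RowRle {
            static java.util.List<int[]> encode(int[] row) {
                java.util.List<int[]> runs = new java.util.ArrayList<>();
                int i = 0;
                while (i < row.length) {
                    int id = row[i];
                    int count = 1;
                    while (i + count < row.length && row[i + count] == id) count++;
                    runs.add(new int[] { id, count });
                    i += count;
                }
                return runs;
            }

            // Decode back into a fixed-width row (width = 128 tiles in the maps described above).
            static int[] decode(java.util.List<int[]> runs, int width) {
                int[] row = new int[width];
                int pos = 0;
                for (int[] run : runs) {
                    for (int k = 0; k < run[1]; k++) row[pos++] = run[0];
                }
                return row;
            }
        }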

    Read the article

  • Cocos2d: Moving background on update: offset issue

    - by mm24
    Working with Objective-C, iOS and Cocos2d, I am developing a vertical scrolling shooter game for iPhone (retina display models with 640 width x 960 height pixel resolution). My basic algorithm works as follows: I create two instances of an image that is exactly 640 x 960 pixels, which we will call imageA and imageB. I then set the two images exactly 480.0f apart from each other, as the screenSize of a CCScene is set by default to 480.0f. At each update method call I move the two images by the same value and make sure that their offset stays at 480.0f. However, when running the game I see a 1 pixel high line between the two images. This literally bugs me and I would like to fix it. What am I doing wrong? This is a zoom in on the background where the "offset line" is visible. The white line you can see divides the two background images and is not meant to exist, as both images are completely black :). If I change the yPositionOfSecondElement value to 479.0f, the two images overlap correctly until the first loop, but as soon as the loop starts the two images end up with an offset of -1.0f. Here is the initialization code:

        -(void) init {
            //...
            screenHeight = 480.0f;
            yPositionOfSecondElement = screenHeight; //I tried subtracting an offset of -1 but eventually the image would go wrong again
            yPositionOfFirstElement = 0.0f;
            loopedBackgroundImageInstanceA = [BackgroundLoopedImage loopImageForLevel:levelName];
            loopedBackgroundImageInstanceA.anchorPoint = CGPointMake(0.5f, 0.0f);
            loopedBackgroundImageInstanceA.position = CGPointMake(160.0f, yPositionOfFirstElement);
            [node addChild:loopedBackgroundImageInstanceA z:zLevelBackground];
            //loopedBackgroundImageInstanceA.color = ccRED;
            loopedBackgroundImageInstanceB = [BackgroundLoopedImage loopImageForLevel:levelName];
            loopedBackgroundImageInstanceB.anchorPoint = CGPointMake(0.5f, 0.0f);
            loopedBackgroundImageInstanceB.position = CGPointMake(160.0f, yPositionOfSecondElement);
            [node addChild:loopedBackgroundImageInstanceB z:zLevelBackground];
            //....
        }

    And here is the move code called at each update:

        -(void) moveBackgroundSprites:(BackgroundLoopedImage*)imageA :(BackgroundLoopedImage*)imageB :(ccTime)delta {
            isEligibleToMove = false;
            //This is done to avoid rounding errors
            float yStep = delta * [GameController sharedGameController].currentBackgroundSpeed;
            NSString* formattedNumber = [NSString stringWithFormat:@"%.02f", yStep];
            yStep = atof([formattedNumber UTF8String]);
            //First should adjust position of images
            [self adjustPosition:imageA :imageB];
            //Then can get the actual image position
            CGPoint posA = imageA.position;
            CGPoint posB = imageB.position;
            //Here could verify if the checksum is equal to the required difference (should be 479.0f)
            if (![self verifyCheckSum:posA :posB]) {
                CCLOG(@"does not comply A");
            }
            //At this stage can compute the hypothetical new position
            CGPoint newPosA = CGPointMake(posA.x, posA.y - yStep);
            CGPoint newPosB = CGPointMake(posB.x, posB.y - yStep);
            // Reposition stripes when they're out of bounds
            if (newPosA.y <= -yPositionOfSecondElement) {
                newPosA.y = yPositionOfSecondElement;
                [imageA shuffle];
                if (timeElapsed >= endTime && hasReachedEndLevel == FALSE) {
                    hasReachedEndLevel = TRUE;
                    shouldMoveImageEnd = TRUE;
                }
            }
            else if (newPosB.y <= -yPositionOfSecondElement) {
                newPosB.y = yPositionOfSecondElement;
                [imageB shuffle];
                if (timeElapsed >= endTime && hasReachedEndLevel == FALSE) {
                    hasReachedEndLevel = TRUE;
                    shouldMoveImageEnd = TRUE;
                }
            }
            //Here should verify that the check sum is equal to 479.0f
            if (![self verifyCheckSum:posA :posB]) {
                CCLOG(@"does not comply B");
            }
            imageA.position = newPosA;
            imageB.position = newPosB;
            //Here could verify that the check sum is equal to 479.0f
            if (![self verifyCheckSum:posA :posB]) {
                CCLOG(@"does not comply C");
            }
            isEligibleToMove = true;
        }

        -(BOOL) verifyCheckSum:(CGPoint)posA :(CGPoint)posB {
            BOOL comply = false;
            float sum = 0.0f;
            if (posA.y > posB.y) {
                sum = posA.y - posB.y;
            } else if (posB.y > posA.y) {
                sum = posB.y - posA.y;
            } else {
                return false;
            }
            if (sum != yPositionOfSecondElement) {
                comply = false;
            } else {
                comply = true;
            }
            return comply;
        }

    And here is what happens on the update:

        if (shouldMoveImageA && shouldMoveImageB) {
            if (isEligibleToMove) {
                [self moveBackgroundSprites:loopedBackgroundImageInstanceA :loopedBackgroundImageInstanceB :delta];
            }
        }

    Forget about shouldMoveImageA and shouldMoveImageB; this is just for when the background reaches the end of the level, and that part works.

    Read the article

  • Find meeting point of 2 objects in 2D, knowing (constant) speed and slope

    - by Axonn
    I have a gun which fires a projectile which has to hit an enemy. The problem is that the gun has to be automatic, i.e. - choose the angle in which it has to shoot so that the projectile hits the enemy dead in the center. It's been a looooong time since school, and my physics skills are a bit rusty, but they're there. I've been thinking to somehow apply the v = d/t formula to find the time needed for the projectile or enemy to reach a certain point. But the problem is that I can't find the common point for both the projectile and enemy. Yes, I can find a certain point for the projectile, and another for the enemy, but I would need lots of tries to find where the point coincides, which is stupid. There has to be a way to link them together but I can't figure it out. I prepared some drawings and samples: A simple version of my Flash game, dumbed down to the basics, just some shapes: http://axonnsd.org/W/P001/MathSandBox.swf - click the mouse anywhere to fire a projectile. Or, here is an image which describes my problem: So... who has any ideas about how to find x3/y3 - thus leading me to find the angle in which the weapon has to tilt in order to fire a projectile to meet the enemy? EDIT I think it would be clearer if I also mention that I know: the speed of both Enemy and Projectile and the Enemy travels on a straight vertical line.
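    Following the v = d/t reasoning above, requiring that the distance from the gun to the enemy's future position equal projectileSpeed * t gives one quadratic in t; a sketch of the solve in plain Java (all names are illustrative assumptions, and the enemy is assumed to move along a vertical line as stated):

        // Interception of a constant-speed projectile with an enemy moving straight down a vertical line.
        final class Intercept {
            // Returns {x3, y3}, or null if the projectile can never reach the enemy.
            static double[] meetingPoint(double gunX, double gunY,
                                         double enemyX, double enemyY,
                                         double enemySpeedY, double projectileSpeed) {
                double dx = enemyX - gunX;
                double dy = enemyY - gunY;
                // Require |gun -> enemy(t)| = projectileSpeed * t with enemy(t) = (enemyX, enemyY + enemySpeedY * t):
                // (enemySpeedY^2 - projectileSpeed^2) t^2 + 2 dy enemySpeedY t + (dx^2 + dy^2) = 0
                double a = enemySpeedY * enemySpeedY - projectileSpeed * projectileSpeed;
                double b = 2.0 * dy * enemySpeedY;
                double c = dx * dx + dy * dy;
                double t;
                if (Math.abs(a) < 1e-9) {                    // equal speeds: the quadratic degenerates to linear
                    if (Math.abs(b) < 1e-9) return null;
                    t = -c / b;
                } else {
                    double disc = b * b - 4.0 * a * c;
                    if (disc < 0.0) return null;             // no real root: the projectile can never catch up
                    double t1 = (-b - Math.sqrt(disc)) / (2.0 * a);
                    double t2 = (-b + Math.sqrt(disc)) / (2.0 * a);
                    t = Math.min(t1, t2) >= 0.0 ? Math.min(t1, t2) : Math.max(t1, t2); // smallest positive root
                }
                if (t < 0.0) return null;
                return new double[] { enemyX, enemyY + enemySpeedY * t };
            }
        }

    The tilt angle of the weapon is then atan2(y3 - gunY, x3 - gunX).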

    Read the article

  • Smooth Camera Zoom Factor Change

    - by Siddharth
    I have a game play scene in which the user can zoom in and out, for which I use a SmoothCamera set up as follows:

        public static final int CAMERA_WIDTH = 1024;
        public static final int CAMERA_HEIGHT = 600;
        public static final float MAXIMUM_VELOCITY_X = 400f;
        public static final float MAXIMUM_VELOCITY_Y = 400f;
        public static final float ZOOM_FACTOR_CHANGE = 1f;

        mSmoothCamera = new SmoothCamera(0, 0, Constants.CAMERA_WIDTH, Constants.CAMERA_HEIGHT,
                Constants.MAXIMUM_VELOCITY_X, Constants.MAXIMUM_VELOCITY_Y, Constants.ZOOM_FACTOR_CHANGE);
        mSmoothCamera.setBounds(0f, 0f, Constants.CAMERA_WIDTH, Constants.CAMERA_HEIGHT);

    This creates a problem for me: when the user zooms in and then leaves the game play scene, the other scenes do not look right. I already reset the zoom factor to 1 for this purpose, but now the camera translation is visible in the other scene. The scene switch is so quick that the player can easily see the camera translating, which I don't want to show. After the camera repositions, everything works perfectly, but how do I put the camera back in its proper position immediately? For example, my loading text moves from bottom to top (or vice versa) based on the camera movement. If you need any more detail, I can provide it.
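    If the goal is simply to snap the camera back before switching scenes, a sketch along these lines may help. Note that setZoomFactorDirect and setCenterDirect are assumptions about the AndEngine SmoothCamera API and should be checked against your engine version:

        // Sketch (assumed AndEngine API): reset zoom and centre instantly, without the smooth
        // interpolation, right before leaving the game play scene.
        private void resetCameraInstantly() {
            // If your SmoothCamera version lacks these "direct" setters, an alternative is to
            // temporarily raise the max velocity / zoom factor change so the transition
            // completes within a single update.
            mSmoothCamera.setZoomFactorDirect(1.0f);
            mSmoothCamera.setCenterDirect(Constants.CAMERA_WIDTH / 2f, Constants.CAMERA_HEIGHT / 2f);
        }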

    Read the article

  • Ray Intersecting Plane Formula in C++/DirectX

    - by user4585
    I'm developing a picking system that will use rays that intersect volumes, and I'm having trouble with ray intersection versus a plane. I was able to figure out spheres fairly easily, but planes are giving me trouble. I've tried to understand various sources and get hung up on some of the variables used within their explanations. Here is a snippet of my code:

        bool Picking()
        {
            D3DXVECTOR3 vec;
            D3DXVECTOR3 vRayDir;
            D3DXVECTOR3 vRayOrig;
            D3DXVECTOR3 vROO, vROD; // vect ray obj orig, vec ray obj dir
            D3DXMATRIX m;
            D3DXMATRIX mInverse;
            D3DXMATRIX worldMat;

            // Obtain projection matrix
            D3DXMATRIX pMatProj = CDirectXRenderer::GetInstance()->Director()->Proj();

            // Obtain mouse position
            D3DXVECTOR3 pos = CGUIManager::GetInstance()->GUIObjectList.front().pos;

            // Get window width & height
            float w = CDirectXRenderer::GetInstance()->GetWidth();
            float h = CDirectXRenderer::GetInstance()->GetHeight();

            // Transform vector from screen to 3D space
            vec.x = (((2.0f * pos.x) / w) - 1.0f) / pMatProj._11;
            vec.y = -(((2.0f * pos.y) / h) - 1.0f) / pMatProj._22;
            vec.z = 1.0f;

            // Create a view inverse matrix
            D3DXMatrixInverse(&m, NULL, &CDirectXRenderer::GetInstance()->Director()->View());

            // Determine our ray's direction
            vRayDir.x = vec.x * m._11 + vec.y * m._21 + vec.z * m._31;
            vRayDir.y = vec.x * m._12 + vec.y * m._22 + vec.z * m._32;
            vRayDir.z = vec.x * m._13 + vec.y * m._23 + vec.z * m._33;

            // Determine our ray's origin
            vRayOrig.x = m._41;
            vRayOrig.y = m._42;
            vRayOrig.z = m._43;

            D3DXMatrixIdentity(&worldMat);
            //worldMat = aliveActors[0]->GetTrans();
            D3DXMatrixInverse(&mInverse, NULL, &worldMat);

            D3DXVec3TransformCoord(&vROO, &vRayOrig, &mInverse);
            D3DXVec3TransformNormal(&vROD, &vRayDir, &mInverse);
            D3DXVec3Normalize(&vROD, &vROD);

    When using this code I'm able to detect a ray intersection via a sphere, but I have questions when determining an intersection via a plane. First off, should I be using my vRayOrig & vRayDir variables for the plane intersection tests, or should I be using the new vectors that are created for use in object space? When looking at a site like this, for example: http://www.tar.hu/gamealgorithms/ch22lev1sec2.html I'm curious as to what D is in the equation AX + BY + CZ + D = 0 and how it factors into determining a plane intersection. Any help will be appreciated, thanks.
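    For reference, in the plane equation AX + BY + CZ + D = 0 the normal is N = (A, B, C) and D is the plane's offset from the origin, D = -dot(N, P0) for any point P0 on the plane. A minimal, language-agnostic sketch of the ray/plane test, written here in plain Java (Vec3 is an illustrative helper type, not part of D3DX):

        final class Vec3 {
            final double x, y, z;
            Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
            double dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
            Vec3 add(Vec3 o, double s) { return new Vec3(x + o.x * s, y + o.y * s, z + o.z * s); }
        }

        final class RayPlane {
            // Returns the hit point, or null if the ray is parallel to the plane or the plane is behind the origin.
            static Vec3 intersect(Vec3 rayOrig, Vec3 rayDir, Vec3 planeNormal, double planeD) {
                double denom = planeNormal.dot(rayDir);
                if (Math.abs(denom) < 1e-8) return null;            // ray parallel to the plane
                double t = -(planeNormal.dot(rayOrig) + planeD) / denom;
                if (t < 0.0) return null;                           // plane is behind the ray origin
                return rayOrig.add(rayDir, t);                      // origin + dir * t
            }
        }

    As for which vectors to feed in: the view-space-derived vRayOrig/vRayDir pair is appropriate if the plane is defined in world space, while the object-space pair (vROO/vROD) is the one to use if the plane is defined in the model's local space.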

    Read the article

  • Using a heightmap to simulate 3D in an isometric 2D game

    - by VaTTeRGeR
    I saw a video of a 2.5D engine that used heightmaps to do z-buffering. Is this hard to do? I have more or less no idea of OpenGL (LWJGL) and that stuff. I could imagine that you compare each pixel and its depth map to the depth map of the already drawn background to determine whether it gets drawn or not. Are there any tutorials on how to do this? Is this a common problem? It would already be awesome if somebody knows the names of the OpenGL commands, so that I can go through some general tutorials on them. Greets! (For a great 2.5D engine with the needed effect, please see the last 30 seconds of the linked video.) Edit - I just realised that my question wasn't expressed clearly: how can I tell OpenGL to compare the existing depth buffer with a grayscale texture, to determine whether a pixel should be drawn or not?

    Read the article

  • Simple collision detection

    - by Rob
    Imagine 2 squares sitting side by side, both level with the ground: http://img19.imageshack.us/img19/8085/sqaures2.jpg A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if ALL of the following are NOT true: The right square's left side is to the right of the left square's right side. The right square's right side is to the left of the left square's left side. The right square's bottom side is above the left square's top side. The right square's top side is below the left square's bottom side. If any of those are true, the squares are not touching. If all of those are false, the squares are touching. But consider a case like this, where one square is at a 45 degree angle: http://img189.imageshack.us/img189/4236/squaresb.jpg Is there an equally simple way to determine if those squares are touching?
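    For reference, the axis-aligned test described above condenses to four comparisons; a sketch in plain Java (Box is an illustrative type holding a square's extents), which only covers the first, un-rotated case:

        final class Box {
            double minX, maxX, minY, maxY;

            // The squares touch unless one lies entirely beyond the other on some axis.
            static boolean touching(Box a, Box b) {
                if (a.minX > b.maxX) return false; // a entirely to the right of b
                if (a.maxX < b.minX) return false; // a entirely to the left of b
                if (a.minY > b.maxY) return false; // a entirely past b on the y axis
                if (a.maxY < b.minY) return false; // a entirely before b on the y axis
                return true;
            }
        }

    Once one of the squares is rotated 45 degrees, as in the second picture, the axis-aligned extents alone no longer answer the question, so this shortcut applies only to the first case.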

    Read the article

  • Physics from other games

    - by Carlosrdz1
    I'm making a platform engine with XNA Game Studio, and I've solved almost everything about colliding stuff. But now I'm searching for good physics for the player: I'm trying to emulate characters from other games, like Mario from Super Mario World, or MegaMan X... Do you know a website or something where the physics of those games are documented? I remember seeing a page with something like that. Or what process do you think is best for emulating physics from other games? Just trial and error? Thank you.

    Read the article

  • How do client-server cooperation based games like Diablo 3 work?

    - by edgar
    Diablo 3 cooperates with Blizzard servers even during single player games. In fact, Blizzard has had problems with the games "melting their servers." I would like to ask: How do the client and the server communicate? What details does the client leave to the server, and vice versa? What details are redundant - both the client and the server know - and how often do they disagree? The previous paragraph contains the important questions, but I have a few more that I must explain my motivation towards. I am interested in the programming of botting. Ethical botting - I don't plan on actually abusing the automation to run 24/7. I just find it to be a great programming challenge to glean information from a game, and then make decisions from that information. I am stuck in the starting gate. The unofficial questions from this post would be: How can I make a bot (language, tools, libraries)? Can I get information through the communication between client and server, rather than the brute force pixel detection easily used in more static games? There probably is a trust issue, and to that all I can say is that I promise not to abuse the answers. But please feel free to answer any of the questions you feel comfortable with. Thank you!

    Read the article

  • Getting FEATURE_LEVEL_9_3 to work in DX11

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11 compatible computer) by means of setting the D3D_FEATURE_LEVEL_ setting to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or something and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, it always fails when I try to call D3DX11CompileFromFile to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the vertex/pixel shader files themselves being incompatible with the lower vertex/pixel shader versions, but I'm not expert enough to say. My shader files are listed below.

    Vertex shader:

        cbuffer MatrixBuffer
        {
            matrix worldMatrix;
            matrix viewMatrix;
            matrix projectionMatrix;
        };

        struct VertexInputType
        {
            float4 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        PixelInputType LightVertexShader(VertexInputType input)
        {
            PixelInputType output;

            // Change the position vector to be 4 units for proper matrix calculations.
            input.position.w = 1.0f;

            // Calculate the position of the vertex against the world, view, and projection matrices.
            output.position = mul(input.position, worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);

            // Store the texture coordinates for the pixel shader.
            output.tex = input.tex;

            // Calculate the normal vector against the world matrix only.
            output.normal = mul(input.normal, (float3x3)worldMatrix);

            // Normalize the normal vector.
            output.normal = normalize(output.normal);

            return output;
        }

    Pixel shader:

        Texture2D shaderTexture;
        SamplerState SampleType;

        cbuffer LightBuffer
        {
            float4 ambientColor;
            float4 diffuseColor;
            float3 lightDirection;
            float padding;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        float4 LightPixelShader(PixelInputType input) : SV_TARGET
        {
            float4 textureColor;
            float3 lightDir;
            float lightIntensity;
            float4 color;

            // Sample the pixel color from the texture using the sampler at this texture coordinate location.
            textureColor = shaderTexture.Sample(SampleType, input.tex);

            // Set the default output color to the ambient light value for all pixels.
            color = ambientColor;

            // Invert the light direction for calculations.
            lightDir = -lightDirection;

            // Calculate the amount of light on this pixel.
            lightIntensity = saturate(dot(input.normal, lightDir));

            if (lightIntensity > 0.0f)
            {
                // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
                color += (diffuseColor * lightIntensity);
            }

            // Saturate the final light color.
            color = saturate(color);

            // Multiply the texture pixel and the final diffuse color to get the final pixel color result.
            color = color * textureColor;

            return color;
        }

    Read the article

  • Setting Krypton Light to Screen Pixels

    - by Adam Jerrett
    So a few days back, I started playing around with Krypton XNA for 2D lighting in my game. I noticed in general, that spawning a light at (0,0) with Krypton causes the light to appear in, pretty much, the centre of the game screen. Is there any way to change this so a Krypton light's "starting point" at [0,0] would spawn at the top left of the screen, and thus follow the standard screen co-ordinates for position? I ask because currently I'm busy working on my game where my spawn point is [512,512]. With hard code, the closest I've got to the light being "central" to this point is the vector position [12,-20], which makes no sense and is impossible to craft, mathematically, if I want the light to move with the camera (the position [480,512] maps roughly to [10,-20]). So, is there any way to "normalise" the krypton lights to use standard screen co-ordinates? If you guys can, play around with the demo from the site and please see if you can find anything out about it. Documentation on the engine is rather scarce, so it's difficult to find anything relevant to my "pixel-perfect" need. It might just also be something in the code with regards to the matrices that I'm not fully understanding. Any help would be useful. Thanks.

    Read the article

  • Issues with glVertexAttribPointer's last 2 parameters?

    - by NoobScratcher
    Introduction: Hello, I will start out by explaining my setup, showing samples as I go along explaining the situation. I'm using these tools: OpenGL 3.3, GLSL 330, C++.

    Problem: The problem is that when I render the Wavefront .obj 3D model it gives a very weird visual glitch. The model was supposed to be a square, but instead it's a triangulated mess, with parts of the vertices stretched in massive amounts towards the bottom left side of the frustum.

    Explanation: I'm using std::vectors to store my Wavefront .obj model data, using sscanf to get the floating point values into the structure members x, y, z and store them into the Points structure variable p:

        int index = IndexAssigner(1, 1);
        ifstream file (list[index].c_str());
        points.push_back(Point());
        Point p;
        int face[4];
        while (!file.eof())
        {
            char modelbuffer[10000];
            file.getline(modelbuffer, 10000);
            switch(modelbuffer[0])
            {
                case 'v':
                    sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                    points.push_back(p);
                    break;
                case 'f':
                    sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3);
                    faces.push_back(face[0]);
                    faces.push_back(face[1]);
                    faces.push_back(face[2]);
                    faces.push_back(face[3]);
            }
            //Turn on FileReader aka "RENDER CODE"
            FileReader = true;
        }

    Then I render the Points vector, using the .data() member of std::vector, to the frustum. Other declarations:

        int numfloats = 4;
        float* point = reinterpret_cast<float*>(&points[0]);
        int num_bytes = numfloats * sizeof(float);

    Vector declarations:

        struct Point { float x, y, z; };
        std::vector<int> faces;
        std::vector<Point> points;

    Render code:

        glGenBuffers(1, &vertexbuffer);
        glGenTextures(1, &ModelTexture);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBindTexture(GL_TEXTURE_3D, ModelTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ModelSurface->w, ModelSurface->h, 0, GL_BGR, GL_UNSIGNED_BYTE, ModelSurface->pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glBufferData(GL_ARRAY_BUFFER, sizeof(points), points.data(), GL_STATIC_DRAW);
        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, num_bytes, points.data());
        glEnableVertexAttribArray(3);

        //Translation Process
        GLfloat TranslationMatrix[] = {
            1.0, 0.0, 0.0, 0.0,
            0.0, 1.0, 0.0, 0.0,
            0.0, 0.0, 1.0, 1.0,
            0.0, 0.0, 0.0, 1.0
        };
        //Send Translation Matrix up to the vertex shader
        glUniformMatrix4fv(translation, 1, TRUE, TranslationMatrix);
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());

    I tried looking at what was causing this and went through every function and every parameter, etc., and looked at the man pages. I then found out that it could be my glVertexAttribPointer. Here are the man pages for glVertexAttribPointer: http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml The last 2 parameters are my problem. How do I write those 2 last parameters? Do I try putting the data from Points into it?

        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, num_bytes, points.data());

    How does it work with vectors? Is it fast? If you cannot be bothered to look at the man pages, here are the relevant descriptions taken from them directly:

    Stride: Specifies the byte offset between consecutive generic vertex attributes. If stride is 0, the generic vertex attributes are understood to be tightly packed in the array. The initial value is 0.

    Pointer: Specifies a pointer to the first component of the first generic vertex attribute in the array. The initial value is 0.

    If you want my full source - http://ideone.com/fPfkg Thanks again if you do read this.

    Read the article

  • Why are my players drawn to the side of my viewport?

    - by Jetbuster
    Following this admittedly brilliant and clean 2D camera class, I have a camera on each player, and it works for multiplayer. I've divided the screen into two sections for split screen by giving each camera a viewport. However, in the game it looks like this; I'm not sure if that's their position relative to the screen or what. The relevant game screen code follows; makePlayers is set up so it could theoretically work for up to 4 players:

        private void makePlayers()
        {
            int rowCount = 1;
            if (NumberOfPlayers > 2)
                rowCount = 2;
            players = new Player[NumberOfPlayers];
            for (int i = 0; i < players.Length; i++)
            {
                int xSize = GameRef.Window.ClientBounds.Width / 2;
                int ySize = GameRef.Window.ClientBounds.Height / rowCount;
                int col = i % rowCount;
                int row = i / rowCount;
                int xPoint = 0 + xSize * row;
                int yPoint = 0 + ySize * col;
                Viewport viewport = new Viewport(xPoint, yPoint, xSize, ySize);
                Vector2 playerPosition = new Vector2(viewport.TitleSafeArea.X + viewport.TitleSafeArea.Width / 2, viewport.TitleSafeArea.Y + viewport.TitleSafeArea.Height / 2);
                players[i] = new Player(playerPosition, playerSprites[i], GameRef, viewport);
            }
            //players[1].Keyboard = true;
        }

        public override void Draw(GameTime gameTime)
        {
            base.Draw(gameTime);
            foreach (Player player in players)
            {
                GraphicsDevice.Viewport = player.PlayerCamera.ViewPort;
                GameRef.spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.PointClamp, null, null, null, player.PlayerCamera.Transform);
                map.Draw(GameRef.spriteBatch);
                // Draw the Player
                player.Draw(GameRef.spriteBatch);
                // Draw UI screen elements
                GraphicsDevice.Viewport = Viewport;
                ControlManager.Draw(GameRef.spriteBatch);
                GameRef.spriteBatch.End();
            }
        }

    The player's Initialize and Draw methods are like so:

        internal void Initialize()
        {
            this.score = 0;
            this.angle = (float)(Math.PI * 0 / 180); //Start sprite at its default rotation
            int width = utils.scaleInt(picture.Width, imageScale);
            int height = utils.scaleInt(picture.Height, imageScale);
            this.hitBox = new HitBox(new Vector2(centerPos.X - width / 2, centerPos.Y - height / 2), width, height, Color.Black, game.Window.ClientBounds);
            playerCamera.Initialize();
        }

        #region Methods
        public void Draw(SpriteBatch spriteBatch)
        {
            //Console.WriteLine("Hitbox: X({0}),Y({1})", hitBox.Points[0].X, hitBox.Points[0].Y);
            //Console.WriteLine("Image: X({0}),Y({1})", centerPos.X, centerPos.Y);
            Vector2 orgin = new Vector2(picture.Width / 2, picture.Height / 2);
            hitBox.Draw(spriteBatch);
            utils.DrawCrosshair(spriteBatch, Position, game.Window.ClientBounds, Color.Red);
            spriteBatch.Draw(picture, Position, null, Color.White, angle, orgin, imageScale, SpriteEffects.None, 0.1f);
        }

    As I said, I think I'm going to need to do something with the render position, but I'm not entirely sure what, or how to do it elegantly, to say the least.

    Read the article

  • Is there an application that converts a PC into a video game kiosk/arcade machine?

    - by Rahil627
    Sorry to make the question so vague. What I ultimately want is software that allows people to play independent video games on a PC and not have to worry about maintaining it. Imagine a game that was made in a few hours that does not have a restart button and crashes often. It should be able to handle these kinds of things and do more! The software should: allow the game to be restarted manually handle game crashes (likely by restarting the game) restrict the user from doing anything crazy later... offer a UI to select the game from a list handle pre-configured key bindings cross-platform (start with windows) I just want to know if this exists already before I start creating one. As of now AutoHotKey is being used to do this sloppily. If such software does not exist then perhaps someone could recommend a general open source Kiosk software? Open Kiosk? I'll take anything. (I also could not find a related tag. Not even sure if this question should be here rather than stackoverflow)

    Read the article

  • Is there a way to use the Facebook SDK with libgdx?

    - by Rudy_TM
    I have tried to use the Facebook SDK in libgdx with callbacks, but it never enters the authentication listeners, so the user is never logged in. It permits the authorization for the Facebook app, but it never calls the authentication interfaces :( Is there a way to use it?

        public MyFbClass()
        {
            facebook = new Facebook(APPID);
            mAsyncRunner = new AsyncFacebookRunner(facebook);
            SessionStore.restore(facebook, this);
            FB.init(this, 0, facebook, this.permissions);
        }

        ///Method to init the permissions and my listener for authentication
        public void init(final Activity activity, final Facebook fb, final String[] permissions)
        {
            mActivity = activity;
            this.fb = fb;
            mPermissions = permissions;
            mHandler = new Handler();
            async = new AsyncFacebookRunner(mFb);
            params = new Bundle();
            SessionEvents.addAuthListener(auth);
        }

        ///I call the authentication process with a callback from libgdx
        public void facebookAction()
        {
            // TODO Auto-generated method stub
            fb.authenticate();
        }

        ///It only allows the app permission; it doesn't register the events
        public void authenticate()
        {
            if (mFb.isSessionValid())
            {
                SessionEvents.onLogoutBegin();
                AsyncFacebookRunner asyncRunner = new AsyncFacebookRunner(mFb);
                asyncRunner.logout(getContext(), new LogoutRequestListener());
                //SessionStore.save(this.mFb, getContext());
            }
            else
            {
                mFb.authorize(mActivity, mPermissions, 0, new DialogListener());
            }
        }

        public class SessionListener implements AuthListener, LogoutListener
        {
            @Override
            public void onAuthSucceed() { SessionStore.save(mFb, getContext()); }

            @Override
            public void onAuthFail(String error) { }

            @Override
            public void onLogoutBegin() { }

            @Override
            public void onLogoutFinish() { SessionStore.clear(getContext()); }
        }

        DialogListener()
        {
            @Override
            public void onComplete(Bundle values) { SessionEvents.onLoginSuccess(); }

            @Override
            public void onFacebookError(FacebookError error) { SessionEvents.onLoginError(error.getMessage()); }

            @Override
            public void onError(DialogError error) { SessionEvents.onLoginError(error.getMessage()); }

            @Override
            public void onCancel() { SessionEvents.onLoginError("Action Canceled"); }
        }

    Read the article

  • How can I achieve strong typing with a component messaging system?

    - by Vaughan Hilts
    I'm looking at implementing a messaging system in my entity component system. I've deduced that I can use an event / queue for passing messages, but right now, I just use a generic object and cast out the data I want. I also considered using a dictionary. I see a lot of information on this, but they all involve a lot of casting and guessing. Is there any way to do this elegantly and keep strong typing on my messages?
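    One way to get the strong typing asked about here is to key handlers by the message's class, so each handler receives the concrete type and the single unchecked cast lives in one place. A minimal sketch in Java (all class and method names are illustrative):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.function.Consumer;

        // Strongly typed message bus: handlers are registered per message class, callers never cast.
        final class MessageBus {
            private final Map<Class<?>, List<Consumer<?>>> handlers = new HashMap<>();

            <T> void subscribe(Class<T> type, Consumer<T> handler) {
                handlers.computeIfAbsent(type, k -> new ArrayList<>()).add(handler);
            }

            @SuppressWarnings("unchecked")
            <T> void publish(T message) {
                List<Consumer<?>> list = handlers.get(message.getClass());
                if (list == null) return;
                for (Consumer<?> h : list) ((Consumer<T>) h).accept(message);
            }
        }

        // Example message type and usage (hypothetical game-side code):
        record DamageMessage(int entityId, int amount) {}
        // bus.subscribe(DamageMessage.class, m -> healthSystem.applyDamage(m.entityId(), m.amount()));
        // bus.publish(new DamageMessage(7, 12));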

    Read the article

  • How to do directional per fragment lighting in world space?

    - by user
    I am attempting to create a GLSL shader for simple, per-fragment directional light. So far, after following many tutorials, I have continually run into the same issue: my light is specified in world coordinates; however, the shader treats the light's position as being in eye space, and thus the light direction changes when I move the camera. My question is, how do I transform a directional light position such as (50, 50, 50, 0) into eye space? Or would doing things this way be the incorrect approach to the problem?
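    For what it's worth, a (50, 50, 50, 0) vector with w = 0 is a direction, and directions are normally moved between spaces with only the rotation part of the view matrix (the translation column is ignored because w = 0). A small sketch of that transform in plain Java, assuming an OpenGL-style column-major float[16] view matrix:

        // Bring a world-space light direction into eye space using the upper 3x3 of the view matrix.
        final class LightSpace {
            static float[] directionToEyeSpace(float[] view, float dx, float dy, float dz) {
                return new float[] {
                    view[0] * dx + view[4] * dy + view[8]  * dz,
                    view[1] * dx + view[5] * dy + view[9]  * dz,
                    view[2] * dx + view[6] * dy + view[10] * dz
                };
            }
        }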

    Read the article

  • Given a RGB color x, how to find the most contrasting color y?

    - by Arthur Wulf White
    I have to mark a certain item in a way that will make it stick out against the background. I need it to be surrounded with the color that contrasts with the background as much as possible, so it will pop out and be easily noticed by the player. Let's say I know the background is color 'x'; how do I find 'y' such that it contrasts strongly with 'x' and is easy to notice against a background where 'x' is the dominant color? I first thought about inverting color 'x', but then I noticed that when 'x' is a medium shade of gray, if I invert 'x' to get 'y', then 'y' is also a medium shade of gray, which does not work.
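    The inversion mentioned above can be kept as the starting point and patched for the grey case; a small sketch in plain Java (the distance threshold and the luminance fallback are assumptions to tune, not the only option):

        final class Contrast {
            // Straight inversion, with a fallback for medium greys: if the inverse lands too close
            // to the original, pick black or white by perceived luminance. The 0.299/0.587/0.114
            // weights are the standard luma coefficients; the threshold of 128 is arbitrary.
            static int contrastingColor(int r, int g, int b) {
                int ir = 255 - r, ig = 255 - g, ib = 255 - b;
                int dist = Math.abs(ir - r) + Math.abs(ig - g) + Math.abs(ib - b);
                if (dist < 128) {
                    double luma = 0.299 * r + 0.587 * g + 0.114 * b;
                    return luma > 127.5 ? 0x000000 : 0xFFFFFF; // light background -> black, dark -> white
                }
                return (ir << 16) | (ig << 8) | ib;
            }
        }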

    Read the article

  • How to use OpenGL blend modes/functions to brighten/darken a texture

    - by Jigar
    Tried this code, but the texture did not get any lighter:

        try {
            texture = TextureLoader.getTexture("png", Game.class.getResourceAsStream("/brick.png"), true, GL_NEAREST);
        } catch (IOException e) {
            e.printStackTrace();
        }

        GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture.getTextureID());
        glEnable(GL_BLEND);
        glBlendFunc(GL_CONSTANT_ALPHA, GL_CONSTANT_ALPHA);
        GL14.glBlendColor(1.0f, 1.0f, 1.0f, 0.5f);
        glColor4f(1, 1, 1, 0.5f);

        GL11.glBegin(GL11.GL_QUADS); // Start Drawing Quads
        // Front Face
        GL11.glNormal3f(0.0f, 0.0f, 1.0f); // Normal Pointing Towards Viewer
        GL11.glTexCoord2f(0.0f, 0.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f); // Point 1 (Front)
        GL11.glTexCoord2f(1.0f, 0.0f); GL11.glVertex3f(1.0f, -1.0f, 1.0f);  // Point 2 (Front)
        GL11.glTexCoord2f(1.0f, 1.0f); GL11.glVertex3f(1.0f, 1.0f, 1.0f);   // Point 3 (Front)
        GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(-1.0f, 1.0f, 1.0f);  // Point 4 (Front)
        glEnd();
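    For comparison, two common fixed-function approaches are additive blending to brighten and colour modulation to darken. A hedged sketch in the same LWJGL style as the snippet above (the constants used here, such as the 0.3f alpha and 50% grey, are arbitrary examples, and the darkening path assumes the default GL_MODULATE texture environment):

        // 1) Brighten: after drawing the textured quad, draw an untextured translucent white quad
        //    over the same area with additive blending; the alpha controls how much brighter it gets.
        GL11.glDisable(GL11.GL_TEXTURE_2D);
        GL11.glEnable(GL11.GL_BLEND);
        GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE);   // source is added on top of what is there
        GL11.glColor4f(1f, 1f, 1f, 0.3f);
        // ... draw the quad covering the texture ...

        // 2) Darken: draw the textured quad with a grey vertex colour, which multiplies the texel colours down.
        GL11.glDisable(GL11.GL_BLEND);
        GL11.glEnable(GL11.GL_TEXTURE_2D);
        GL11.glColor4f(0.5f, 0.5f, 0.5f, 1f);
        // ... draw the textured quad ...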

    Read the article

  • One True Event Loop

    - by CyberShadow
    Simple programs that collect data from only one system need only one event loop. For example, Windows applications have the message loop, POSIX network programs usually have a select/epoll/etc. loop at their core, and pure SDL games use SDL's event loop. But what if you need to collect events from several subsystems? Such as an SDL game which doesn't use SDL_net for networking. I can think of several solutions:
    1. Polling (ugh)
    2. Put each event loop in its own thread, and:
    2.1. Send messages to the main thread, which collects and processes the events, or
    2.2. Place the event-processing code of each thread in a critical section, so that the threads can wait for events asynchronously but process them synchronously
    3. Choose one subsystem for the main event loop, and pass events from other subsystems via that subsystem as custom messages (for example, the Windows message loop and custom messages, or a socket select() loop and passing events via a loopback connection).
    Option 2.1 is more interesting on platforms where message-passing is a well-developed threading primitive (e.g. in the D programming language), but 2.2 looks like the best option to me.
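    Option 2.1 as a minimal sketch in Java: each subsystem thread turns its native events into messages and posts them to one queue that the main loop drains (all names and event types are illustrative assumptions):

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // One queue, many producer threads, one consumer (the main loop).
        final class EventHub {
            interface Event {}                                       // marker for app-level events
            record InputEvent(int key, boolean down) implements Event {}
            record NetworkEvent(byte[] payload) implements Event {}

            private final BlockingQueue<Event> queue = new LinkedBlockingQueue<>();

            void post(Event e) { queue.offer(e); }                   // called from subsystem threads

            // Main loop: block until something arrives, then drain whatever else is pending.
            void runMainLoop() throws InterruptedException {
                while (true) {
                    Event e = queue.take();
                    do { handle(e); } while ((e = queue.poll()) != null);
                }
            }

            private void handle(Event e) {
                if (e instanceof InputEvent in) { /* update game state from input */ }
                else if (e instanceof NetworkEvent net) { /* parse packet */ }
            }
        }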

    Read the article

  • Rope Colliding with a Rectangle

    - by Colton
    I have my rope, and I have my rectangles. The rope is similar to the implementation found here: http://nehe.gamedev.net/tutorial/rope_physics/17006/ Now, I want to make the rope properly collide with the rectangle such that the rope will not pass through a rectangle, and wrap around the rectangle and all that good stuff. Currently, I have it set so no rope node can pass through a rect (successfully), however, this means a rope segment can still pass through a block. Ex: So the question is, what can I do to fix this? What I have tried: I create a rectangle between two nodes of a rope, calculate rotation between the nodes, and get myself a transformed rectangle. I can successfully detect a collision between rope segments and a (non-transformed) rectangle. Create a new node or pivot point around the corner of the block, and rearrange nodes to point to the corner node. Trouble is determining what corner the rope segment is passing through. And then the current rope setup goes wonky (based on verlet integration, so a sudden change in position causes the rope to wiggle like a seismograph during a magnitude 8 earth quake.) Among other issues that might be solvable, but its turning into a case by case thing, which doesn't seem right. I think the best answer here would just be a link to a tutorial (I simply can't find any, most lead to box2D or farseer, but I want to at least learn how it works before I hide behind an engine).

    Read the article

  • Drawing different per-pixel data on the screen

    - by Amir Eldor
    I want to draw different per-pixel data on the screen, where each pixel has a specific value according to my needs. An example may be a random noise pattern where each pixel is randomly generated. I'm not sure what is the correct and fastest way to do this. Locking a texture/surface and manipulating the raw pixel data? How is this done in modern graphics programming? I'm currently trying to do this in Pygame but realized I will face the same problem if I go for C/SDL or OpenGL/DirectX.

    Read the article

  • Behavior Trees and Animations

    - by Tom
    I have started working on the AI for a game, but am confused about how I should handle animations. I will be using a behavior tree for the AI and Cocos2D for my game engine. Should my "PlayAnimationWalk" just be another node in the tree? Something similar to this: [Approach Player] - Play Walk animation - Move towards player - Stop Walk animation. Or should the node just update an AnimationState in the blackboard and have some type of animation handler/component reference this to decide which animation should be playing? This has been driving me nuts :)
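    A sketch of the first option (the animation as just another node in the tree), in plain Java; Node, Status and AnimationPlayer are illustrative assumptions, not Cocos2D or any particular behavior tree library's API:

        // Minimal behaviour tree types: a node ticks and reports its status.
        enum Status { SUCCESS, FAILURE, RUNNING }

        interface Node { Status tick(float dt); }

        // Assumed facade over whatever actually plays sprite animations.
        interface AnimationPlayer {
            void play(String name);
            boolean isPlaying(String name);
        }

        // "PlayAnimationWalk" as a leaf node: starts the clip on first tick, then either returns
        // immediately or reports RUNNING until the clip finishes, depending on waitForEnd.
        final class PlayAnimation implements Node {
            private final AnimationPlayer player;
            private final String clip;
            private final boolean waitForEnd;
            private boolean started;

            PlayAnimation(AnimationPlayer player, String clip, boolean waitForEnd) {
                this.player = player; this.clip = clip; this.waitForEnd = waitForEnd;
            }

            @Override public Status tick(float dt) {
                if (!started) { player.play(clip); started = true; }
                if (!waitForEnd) return Status.SUCCESS;
                return player.isPlaying(clip) ? Status.RUNNING : Status.SUCCESS;
            }
        }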

    Read the article

  • Can I animate render targets or the swap chain?

    - by Eric F.
    I want to animate some synthetic video bits to fullscreen w/o tearing. Can I set up D3D 9/10/11 in exclusive mode, and have it present a series of buffers that I'm writing to? I know how to copy system memory bits into a texture, then draw that texture as a fullscreen quad, but it seems like overkill. Why should I use the triangle rasterizer when I want to do something so simple? All I want to do is set up a long (4-8 buffer) swapchain and set the bits of the back buffer that is about to be displayed. Or, I want to allocate 4-8 RenderTargets, and on each frame, copy the bits from system memory to the RenderTarget, then set it as the next thing to display. I've never seen or heard about anybody doing this, but it seems so dead simple!

    Read the article
