Search Results

Search found 33194 results on 1328 pages for 'development approach'.


  • Where can I get the OpenAL SDK for C++?

    - by Peter Short
    The OpenAL site I'm looking at is a crappy, outdated, and broken SharePoint portal, and the SDK in the downloads section returns an HTTP 500 error when I request it: http://connect.creativelabs.com/openal/Downloads/OpenAL11CoreSDK.zip. I found an OpenAL SDK on Softpedia, and it has headers, but not alu.h or alut.h, which the tutorials I'm following apparently require for loading WAVs and so on. What am I missing? Is OpenAL dead or something?

  • Farseer Physics Xbox samples not working

    - by Hugh
    I have downloaded a few of the sample projects from the official Farseer Physics website and I just can't get them to run on my Xbox.
    - My connection to the Xbox is fine; other Xbox projects debug fine on it.
    - I have tried running both the Xbox versions of the samples (for example, the Farseer hello world sample project) and the Windows versions, by right-clicking the project and making a copy for Xbox.
    I get a bunch of errors, but the one I always get is "unreachable code detected", referring to code in the Farseer library. It seems to be a problem with how the Farseer library is referenced/linked from the main game project. Help please!

  • Colliding a laser beam with a rotated object

    - by Lahiru
    I'm developing a mirror for lazer beam(Ball sprite). There I'm trying to redirect the laze beam according to the ration degree of the mirror(Rectangle). How can I collide the ball to the correct angle if the colliding object is with some angle(45 deg) rather than colliding back. here is an screen shot of my work here is my code using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace collision { /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; Texture2D ballTexture; Rectangle ballBounds; Vector2 ballPosition; Vector2 ballVelocity; float ballSpeed = 30f; Texture2D blockTexture; Rectangle blockBounds; Vector2 blockPosition; private Vector2 origin; KeyboardState keyboardState; //Font SpriteFont Font1; Vector2 FontPos; private String displayText; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } /// <summary> /// Allows the game to perform any initialization it needs to before starting to run. /// This is where it can query for any required services and load any non-graphic /// related content. Calling base.Initialize will enumerate through any components /// and initialize them as well. /// </summary> protected override void Initialize() { // TODO: Add your initialization logic here ballPosition = new Vector2(this.GraphicsDevice.Viewport.Width / 2, this.GraphicsDevice.Viewport.Height * 0.25f); blockPosition = new Vector2(this.GraphicsDevice.Viewport.Width / 2, this.GraphicsDevice.Viewport.Height /2); ballVelocity = new Vector2(0, 1); base.Initialize(); } /// <summary> /// LoadContent will be called once per game and is the place to load /// all of your content. /// </summary> protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); ballTexture = Content.Load<Texture2D>("ball"); blockTexture = Content.Load<Texture2D>("mirror"); //create rectangles based off the size of the textures ballBounds = new Rectangle((int)(ballPosition.X - ballTexture.Width / 2), (int)(ballPosition.Y - ballTexture.Height / 2), ballTexture.Width, ballTexture.Height); blockBounds = new Rectangle((int)(blockPosition.X - blockTexture.Width / 2), (int)(blockPosition.Y - blockTexture.Height / 2), blockTexture.Width, blockTexture.Height); origin.X = blockTexture.Width / 2; origin.Y = blockTexture.Height / 2; // TODO: use this.Content to load your game content here Font1 = Content.Load<SpriteFont>("SpriteFont1"); FontPos = new Vector2(graphics.GraphicsDevice.Viewport.Width - 100, 20); } /// <summary> /// UnloadContent will be called once per game and is the place to unload /// all content. /// </summary> protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } /// <summary> /// Allows the game to run logic such as updating the world, /// checking for collisions, gathering input, and playing audio. 
/// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> /// private float RotationAngle; float circle = MathHelper.Pi * 2; float angle; protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); // TODO: Add your update logic here //check for collision between the ball and the block, or if the ball is outside the bounds of the screen if (ballBounds.Intersects(blockBounds) || !GraphicsDevice.Viewport.Bounds.Contains(ballBounds)) { //we have a simple collision! //if it has hit, swap the direction of the ball, and update it's position ballVelocity = -ballVelocity; ballPosition += ballVelocity * ballSpeed; } else { //move the ball a bit ballPosition += ballVelocity * ballSpeed; } //update bounding boxes ballBounds.X = (int)ballPosition.X; ballBounds.Y = (int)ballPosition.Y; blockBounds.X = (int)blockPosition.X; blockBounds.Y = (int)blockPosition.Y; keyboardState = Keyboard.GetState(); float val = 1.568017f/90; if (keyboardState.IsKeyDown(Keys.Space)) RotationAngle = RotationAngle + (float)Math.PI; if (keyboardState.IsKeyDown(Keys.Left)) RotationAngle = RotationAngle - val; angle = (float)Math.PI / 4.0f; // 90 degrees RotationAngle = angle; // RotationAngle = RotationAngle % circle; displayText = RotationAngle.ToString(); base.Update(gameTime); } /// <summary> /// This is called when the game should draw itself. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); // TODO: Add your drawing code here spriteBatch.Begin(); // Find the center of the string Vector2 FontOrigin = Font1.MeasureString(displayText) / 2; spriteBatch.DrawString(Font1, displayText, FontPos, Color.White, 0, FontOrigin, 1.0f, SpriteEffects.None, 0.5f); spriteBatch.Draw(ballTexture, ballPosition, Color.White); spriteBatch.Draw(blockTexture, blockPosition,null, Color.White, RotationAngle,origin, 1.0f, SpriteEffects.None, 0f); spriteBatch.End(); base.Draw(gameTime); } } }
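
    One way to get the beam to leave at the mirrored angle rather than simply bounce back (a sketch against the code above, not a tested fix): compute the mirror's surface normal from RotationAngle and reflect the velocity with XNA's Vector2.Reflect instead of negating it. The snippet reuses the fields from the Update method above; everything else is an assumption.

        // Surface direction of the mirror rotated by RotationAngle (radians),
        // and its normal (the surface direction rotated by 90 degrees).
        Vector2 surfaceDir = new Vector2((float)Math.Cos(RotationAngle), (float)Math.Sin(RotationAngle));
        Vector2 surfaceNormal = new Vector2(-surfaceDir.Y, surfaceDir.X);

        if (ballBounds.Intersects(blockBounds))
        {
            // v' = v - 2 * dot(v, n) * n  -- exactly what Vector2.Reflect computes
            ballVelocity = Vector2.Reflect(ballVelocity, surfaceNormal);
        }
        ballPosition += ballVelocity * ballSpeed;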

  • Voxel terrain rendering with marching cubes

    - by JavaJosh94
    I was working on making procedurally generated terrain using normal cubish voxels (like minecraft) But then I read about marching cubes and decided to convert to using those. I managed to create a working marching cubes class and cycle through the densities and everything in it seemed to be working so I went on to work on actual terrain generation. I'm using XNA (C#) and a ported libnoise library to generate noise for the terrain generator. But instead of rendering smooth terrain I get a 64x64 chunk (I specified 64 but can change it) of seemingly random marching cubes using different triangles. This is the code I'm using to generate a "chunk": public MarchingCube[, ,] getTerrainChunk(int size, float dMultiplyer, int stepsize) { MarchingCube[, ,] temp = new MarchingCube[size / stepsize, size / stepsize, size / stepsize]; for (int x = 0; x < size; x += stepsize) { for (int y = 0; y <size; y += stepsize) { for (int z = 0; z < size; z += stepsize) { float[] densities = {(float)terrain.GetValue(x, y, z)*dMultiplyer, (float)terrain.GetValue(x, y+stepsize, z)*dMultiplyer, (float)terrain.GetValue(x+stepsize, y+stepsize, z)*dMultiplyer, (float)terrain.GetValue(x+stepsize, y, z)*dMultiplyer, (float)terrain.GetValue(x,y,z+stepsize)*dMultiplyer,(float)terrain.GetValue(x,y+stepsize,z+stepsize)*dMultiplyer,(float)terrain.GetValue(x+stepsize,y+stepsize,z+stepsize)*dMultiplyer,(float)terrain.GetValue(x+stepsize,y,z+stepsize)*dMultiplyer }; Vector3[] corners = { new Vector3(x,y,z), new Vector3(x,y+stepsize,z),new Vector3(x+stepsize,y+stepsize,z),new Vector3(x+stepsize,y,z), new Vector3(x,y,z+stepsize), new Vector3(x,y+stepsize,z+stepsize), new Vector3(x+stepsize,y+stepsize,z+stepsize), new Vector3(x+stepsize,y,z+stepsize)}; if (x == 0 && y == 0 && z == 0) { temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners, device); } temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners); } } } (terrain is a Perlin Noise generated using libnoise) I'm sure there's probably an easy solution to this but I've been drawing a blank for the past hour. I'm just wondering if the problem is how I'm reading in the data from the noise or if I may be generating the noise wrong? Or maybe the noise is just not good for this kind of generation? If I'm reading it wrong does anyone know the right way? the answers on google were somewhat ambiguous but I'm going to keep searching. Thanks in advance!
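
    One common cause of exactly this symptom, offered as an assumption to check rather than a diagnosis: raw 3D Perlin noise crosses zero throughout the volume, so nearly every cell straddles the iso-surface and the mesh looks like random disconnected cubes. Biasing the density by height gives a ground surface that the noise only perturbs. A minimal sketch, with the 0.05 frequency scale and terrainHeight as made-up tuning values:

        // Density for a marching-cubes cell corner: positive below ground, negative above.
        float Density(int x, int y, int z, float dMultiplyer)
        {
            const int terrainHeight = 32;                                        // assumed ground level
            float noise = (float)terrain.GetValue(x * 0.05, y * 0.05, z * 0.05); // roughly -1..1
            return (terrainHeight - y) + noise * dMultiplyer * 10f;              // height bias + noise
        }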

  • Stencil mask with AlphaTestEffect

    - by Brendan Wanlass
    I am trying to pull off the following effect in XNA 4.0: http://eng.utah.edu/~brendanw/question.jpg The purple area has 50% opacity. I have gotten pretty close with the following code: public static DepthStencilState AlwaysStencilState = new DepthStencilState() { StencilEnable = true, StencilFunction = CompareFunction.Always, StencilPass = StencilOperation.Replace, ReferenceStencil = 1, DepthBufferEnable = false, }; public static DepthStencilState EqualStencilState = new DepthStencilState() { StencilEnable = true, StencilFunction = CompareFunction.Equal, StencilPass = StencilOperation.Keep, ReferenceStencil = 1, DepthBufferEnable = false, }; ... if (_alphaEffect == null) { _alphaEffect = new AlphaTestEffect(_spriteBatch.GraphicsDevice); _alphaEffect.AlphaFunction = CompareFunction.LessEqual; _alphaEffect.ReferenceAlpha = 129; Matrix projection = Matrix.CreateOrthographicOffCenter(0, _spriteBatch.GraphicsDevice.PresentationParameters.BackBufferWidth, _spriteBatch.GraphicsDevice.PresentationParameters.BackBufferHeight, 0, 0, 1); _alphaEffect.Projection = world.SystemManager.GetSystem<RenderSystem>().Camera.View * projection; } _mask = new RenderTarget2D(_spriteBatch.GraphicsDevice, _spriteBatch.GraphicsDevice.PresentationParameters.BackBufferWidth, _spriteBatch.GraphicsDevice.PresentationParameters.BackBufferHeight, false, SurfaceFormat.Color, DepthFormat.Depth24Stencil8); _spriteBatch.GraphicsDevice.SetRenderTarget(_mask); _spriteBatch.GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.Stencil, Color.Transparent, 0, 0); _spriteBatch.Begin(SpriteSortMode.Immediate, null, null, AlwaysStencilState, null, _alphaEffect); _spriteBatch.Draw(sprite.Texture, position, sprite.SourceRectangle,Color.White, 0f, sprite.Origin, 1f, SpriteEffects.None, 0); _spriteBatch.End(); _spriteBatch.Begin(SpriteSortMode.Immediate, null, null, EqualStencilState, null, null); _spriteBatch.Draw(_maskTex, new Vector2(x * _maskTex.Width, y * _maskTex.Height), null, Color.White, 0f, Vector2.Zero, 1f, SpriteEffects.None, 0); _spriteBatch.End(); _spriteBatch.GraphicsDevice.SetRenderTarget(null); _spriteBatch.GraphicsDevice.Clear(Color.Black); _spriteBatch.Begin(); _spriteBatch.Draw((Texture2D)_mask, Vector2.Zero, null, Color.White, 0f, Vector2.Zero, 1f, SpriteEffects.None, layer/max_layer); _spriteBatch.End(); My problem is, I can't get the AlphaTestEffect to behave. I can either mask over the semi-transparent purple junk and fill it in with the green design, or I can draw over the completely opaque grassy texture. How can I specify the exact opacity that needs to be replace with the green design?

  • AdMob banner not getting removed from superview

    - by Anil gupta
    I am developing one 2d game using cocos2d framework, in this game i am using admob for advertising, in some classes not in all classes but admob banner is visible in every class and after some time game getting crash also. I am not getting how admob banner is comes in every class in fact i have not declare in Rootviewcontroller class. can any one suggest me how to integrate Admob in cocos2d game, i want Admob banner in particular classes not in every class, I am using latest google admob sdk, my code is below: Thanks in advance ` -(void)AdMob{ NSLog(@"ADMOB"); CGSize winSize = [[CCDirector sharedDirector]winSize]; // Create a view of the standard size at the bottom of the screen. if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad){ bannerView_ = [[GADBannerView alloc] initWithFrame:CGRectMake(size.width/2-364, size.height - GAD_SIZE_728x90.height, GAD_SIZE_728x90.width, GAD_SIZE_728x90.height)]; } else { // It's an iPhone bannerView_ = [[GADBannerView alloc] initWithFrame:CGRectMake(size.width/2-160, size.height - GAD_SIZE_320x50.height, GAD_SIZE_320x50.width, GAD_SIZE_320x50.height)]; } if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) { bannerView_.adUnitID =@"a15062384653c9e"; } else { bannerView_.adUnitID =@"a15062392a0aa0a"; } bannerView_.rootViewController = self; [[[CCDirector sharedDirector]openGLView]addSubview:bannerView_]; [bannerView_ loadRequest:[GADRequest request]]; GADRequest *request = [[GADRequest alloc] init]; request.testing = [NSArray arrayWithObjects: GAD_SIMULATOR_ID, nil]; // Simulator [bannerView_ loadRequest:request]; } //best practice for removing the barnnerView_ -(void)removeSubviews{ NSArray* subviews = [[CCDirector sharedDirector]openGLView].subviews; for (id SUB in subviews){ [(UIView*)SUB removeFromSuperview]; [SUB release]; } NSLog(@"remove from view"); } //this makes the refreshTimer count -(void)targetMethod:(NSTimer *)theTimer{ //INCREASE OF THE TIMER AND SECONDS elapsedTime++; seconds++; //INCREASE OF THE MINUTOS EACH 60 SECONDS if (seconds>=60) { seconds=0; minutes++; [self removeSubviews]; [self AdMob]; } NSLog(@"TIME: %02d:%02d", minutes, seconds); } `

  • Problem loading .FBX meshes in DirectX 10

    - by N0xus
    I'm trying to load in meshes into DirectX 10. I've created a bunch of classes that handle it and allow me to call in a mesh with only a single line of code in my main game class. How ever, when I run the program this is what renders: In the debug output window the following errors keep appearing: D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that Semantic 'TEXCOORD' is defined for mismatched hardware registers between the output stage and input stage. [ EXECUTION ERROR #343: DEVICE_SHADER_LINKAGE_REGISTERINDEX ] D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that the input stage requires Semantic/Index (POSITION,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ] The thing is, I've no idea how to fix this. The code I'm using does work and I've simply brought all of that code into a new project of mine. There are no build errors and this only appears when the game is running The .fx file is as follows: float4x4 matWorld; float4x4 matView; float4x4 matProjection; struct VS_INPUT { float4 Pos:POSITION; float2 TexCoord:TEXCOORD; }; struct PS_INPUT { float4 Pos:SV_POSITION; float2 TexCoord:TEXCOORD; }; Texture2D diffuseTexture; SamplerState diffuseSampler { Filter = MIN_MAG_MIP_POINT; AddressU = WRAP; AddressV = WRAP; }; // // Vertex Shader // PS_INPUT VS( VS_INPUT input ) { PS_INPUT output=(PS_INPUT)0; float4x4 viewProjection=mul(matView,matProjection); float4x4 worldViewProjection=mul(matWorld,viewProjection); output.Pos=mul(input.Pos,worldViewProjection); output.TexCoord=input.TexCoord; return output; } // // Pixel Shader // float4 PS(PS_INPUT input ) : SV_Target { return diffuseTexture.Sample(diffuseSampler,input.TexCoord); //return float4(1.0f,1.0f,1.0f,1.0f); } RasterizerState NoCulling { FILLMODE=SOLID; CULLMODE=NONE; }; technique10 Render { pass P0 { SetVertexShader( CompileShader( vs_4_0, VS() ) ); SetGeometryShader( NULL ); SetPixelShader( CompileShader( ps_4_0, PS() ) ); SetRasterizerState(NoCulling); } } In my game, the .fx file and model are called and set as follows: Loading in shader file //Set the shader flags - BMD DWORD dwShaderFlags = D3D10_SHADER_ENABLE_STRICTNESS; #if defined( DEBUG ) || defined( _DEBUG ) dwShaderFlags |= D3D10_SHADER_DEBUG; #endif ID3D10Blob * pErrorBuffer=NULL; if( FAILED( D3DX10CreateEffectFromFile( TEXT("TransformedTexture.fx" ), NULL, NULL, "fx_4_0", dwShaderFlags, 0, md3dDevice, NULL, NULL, &m_pEffect, &pErrorBuffer, NULL ) ) ) { char * pErrorStr = ( char* )pErrorBuffer->GetBufferPointer(); //If the creation of the Effect fails then a message box will be shown MessageBoxA( NULL, pErrorStr, "Error", MB_OK ); return false; } //Get the technique called Render from the effect, we need this for rendering later on m_pTechnique=m_pEffect->GetTechniqueByName("Render"); //Number of elements in the layout UINT numElements = TexturedLitVertex::layoutSize; //Get the Pass description, we need this to bind the vertex to the pipeline D3D10_PASS_DESC PassDesc; m_pTechnique->GetPassByIndex( 0 )->GetDesc( &PassDesc ); //Create Input layout to describe the incoming buffer to the input assembler if (FAILED(md3dDevice->CreateInputLayout( TexturedLitVertex::layout, numElements,PassDesc.pIAInputSignature, PassDesc.IAInputSignatureSize, &m_pVertexLayout ) ) ) { return false; } model loading: 
m_pTestRenderable=new CRenderable(); //m_pTestRenderable->create<TexturedVertex>(md3dDevice,8,6,vertices,indices); m_pModelLoader = new CModelLoader(); m_pTestRenderable = m_pModelLoader->loadModelFromFile( md3dDevice,"armoredrecon.fbx" ); m_pGameObjectTest = new CGameObject(); m_pGameObjectTest->setRenderable( m_pTestRenderable ); // Set primitive topology, how are we going to interpet the vertices in the vertex buffer md3dDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); if ( FAILED( D3DX10CreateShaderResourceViewFromFile( md3dDevice, TEXT( "armoredrecon_diff.png" ), NULL, NULL, &m_pTextureShaderResource, NULL ) ) ) { MessageBox( NULL, TEXT( "Can't load Texture" ), TEXT( "Error" ), MB_OK ); return false; } m_pDiffuseTextureVariable = m_pEffect->GetVariableByName( "diffuseTexture" )->AsShaderResource(); m_pDiffuseTextureVariable->SetResource( m_pTextureShaderResource ); Finally, the draw function code: //All drawing will occur between the clear and present m_pViewMatrixVariable->SetMatrix( ( float* )m_matView ); m_pWorldMatrixVariable->SetMatrix( ( float* )m_pGameObjectTest->getWorld() ); //Get the stride(size) of the a vertex, we need this to tell the pipeline the size of one vertex UINT stride = m_pTestRenderable->getStride(); //The offset from start of the buffer to where our vertices are located UINT offset = m_pTestRenderable->getOffset(); ID3D10Buffer * pVB=m_pTestRenderable->getVB(); //Bind the vertex buffer to input assembler stage - md3dDevice->IASetVertexBuffers( 0, 1, &pVB, &stride, &offset ); md3dDevice->IASetIndexBuffer( m_pTestRenderable->getIB(), DXGI_FORMAT_R32_UINT, 0 ); //Get the Description of the technique, we need this in order to loop through each pass in the technique D3D10_TECHNIQUE_DESC techDesc; m_pTechnique->GetDesc( &techDesc ); //Loop through the passes in the technique for( UINT p = 0; p < techDesc.Passes; ++p ) { //Get a pass at current index and apply it m_pTechnique->GetPassByIndex( p )->Apply( 0 ); //Draw call md3dDevice->DrawIndexed(m_pTestRenderable->getNumOfIndices(),0,0); //m_pD3D10Device->Draw(m_pTestRenderable->getNumOfVerts(),0); } Is there anything I've clearly done wrong or are missing? Spent 2 weeks trying to workout what on earth I've done wrong to no avail. Any insight a fresh pair eyes could give on this would be great.

  • Vehicle: Boat accelerating and turning in Unity

    - by Emilios S.
    I'm trying to make a player-controllable boat in Unity and I'm running into problems with my code.
    1) I want the boat to accelerate and decelerate steadily instead of simply moving at the speed I tell it to right away.
    2) I want the player to be unable to steer the boat unless it is moving.
    3) If possible, I want to simulate the vertical floating of a boat during its movement (going up and down).
    My current code (C#) is this:

        using UnityEngine;
        using System.Collections;

        public class VehicleScript : MonoBehaviour
        {
            public float speed = 10;
            public float rotationspeed = 50;

            // Update is called once per frame
            void Update()
            {
                // Forward movement
                if (Input.GetKey(KeyCode.I))
                    transform.Translate(Vector3.left * speed * Time.deltaTime);
                // Backward movement
                if (Input.GetKey(KeyCode.K))
                    transform.Translate(Vector3.right * speed * Time.deltaTime);
                // Left movement
                if (Input.GetKey(KeyCode.J))
                    transform.Rotate(Vector3.down * rotationspeed * Time.deltaTime);
                // Right movement
                if (Input.GetKey(KeyCode.L))
                    transform.Rotate(Vector3.up * rotationspeed * Time.deltaTime);
            }
        }

    In the current state of my code, when I press the specified keys, the boat simply moves at 10 units/sec instantly, and also stops instantly. I'm not really sure how to implement the things stated above, so any help would be appreciated. Just to clarify, I don't necessarily need the full code to implement those features; I just want to know what functions to use in order to achieve the desired effects. Thank you very much.
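
    A minimal sketch of points 1 and 2 (plus a cheap stand-in for 3), keeping the same key bindings and Vector3.left forward convention as above. Mathf.MoveTowards ramps the speed up and down, steering is scaled by current speed, and every tuning constant is a placeholder:

        using UnityEngine;

        public class BoatController : MonoBehaviour
        {
            public float maxSpeed = 10f;
            public float acceleration = 4f;      // units/sec^2 while a key is held
            public float deceleration = 2f;      // units/sec^2 of drag when no key is held
            public float rotationSpeed = 50f;
            public float bobAmplitude = 0.2f;
            public float bobFrequency = 1.5f;

            float currentSpeed;                  // signed: negative means reversing

            void Update()
            {
                // 1) accelerate toward a target speed instead of snapping to it
                float target = 0f;
                if (Input.GetKey(KeyCode.I)) target = maxSpeed;
                if (Input.GetKey(KeyCode.K)) target = -maxSpeed;
                float rate = (target == 0f) ? deceleration : acceleration;
                currentSpeed = Mathf.MoveTowards(currentSpeed, target, rate * Time.deltaTime);

                transform.Translate(Vector3.left * currentSpeed * Time.deltaTime);

                // 2) no steering while (nearly) stationary: scale the turn rate by speed
                float steer = Mathf.Clamp01(Mathf.Abs(currentSpeed) / maxSpeed);
                if (Input.GetKey(KeyCode.J)) transform.Rotate(Vector3.down * rotationSpeed * steer * Time.deltaTime);
                if (Input.GetKey(KeyCode.L)) transform.Rotate(Vector3.up * rotationSpeed * steer * Time.deltaTime);

                // 3) cheap vertical bobbing
                Vector3 p = transform.position;
                p.y += Mathf.Sin(Time.time * bobFrequency) * bobAmplitude * Time.deltaTime;
                transform.position = p;
            }
        }

    MoveTowards is used rather than Lerp so the acceleration is a fixed rate per second instead of a percentage per frame.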

  • First time shadow mapping problems

    - by user1294203
    I have implemented basic shadow mapping for the first time in OpenGL using shaders and I'm facing some problems. Below you can see an example of my rendered scene: The process of the shadow mapping I'm following is that I render the scene to the framebuffer using a View Matrix from the light point of view and the projection and model matrices used for normal rendering. In the second pass, I send the above MVP matrix from the light point of view to the vertex shader which transforms the position to light space. The fragment shader does the perspective divide and changes the position to texture coordinates. Here is my vertex shader, #version 150 core uniform mat4 ModelViewMatrix; uniform mat3 NormalMatrix; uniform mat4 MVPMatrix; uniform mat4 lightMVP; uniform float scale; in vec3 in_Position; in vec3 in_Normal; in vec2 in_TexCoord; smooth out vec3 pass_Normal; smooth out vec3 pass_Position; smooth out vec2 TexCoord; smooth out vec4 lightspace_Position; void main(void){ pass_Normal = NormalMatrix * in_Normal; pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz; lightspace_Position = lightMVP * vec4(scale * in_Position, 1.0); TexCoord = in_TexCoord; gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0); } And my fragment shader, #version 150 core struct Light{ vec3 direction; }; uniform Light light; uniform sampler2D inSampler; uniform sampler2D inShadowMap; smooth in vec3 pass_Normal; smooth in vec3 pass_Position; smooth in vec2 TexCoord; smooth in vec4 lightspace_Position; out vec4 out_Color; float CalcShadowFactor(vec4 lightspace_Position){ vec3 ProjectionCoords = lightspace_Position.xyz / lightspace_Position.w; vec2 UVCoords; UVCoords.x = 0.5 * ProjectionCoords.x + 0.5; UVCoords.y = 0.5 * ProjectionCoords.y + 0.5; float Depth = texture(inShadowMap, UVCoords).x; if(Depth < (ProjectionCoords.z + 0.001)) return 0.5; else return 1.0; } void main(void){ vec3 Normal = normalize(pass_Normal); vec3 light_Direction = -normalize(light.direction); vec3 camera_Direction = normalize(-pass_Position); vec3 half_vector = normalize(camera_Direction + light_Direction); float diffuse = max(0.2, dot(Normal, light_Direction)); vec3 temp_Color = diffuse * vec3(1.0); float specular = max( 0.0, dot( Normal, half_vector) ); float shadowFactor = CalcShadowFactor(lightspace_Position); if(diffuse != 0 && shadowFactor > 0.5){ float fspecular = pow(specular, 128.0); temp_Color += fspecular; } out_Color = vec4(shadowFactor * texture(inSampler, TexCoord).xyz * temp_Color, 1.0); } One of the problems is self shadowing as you can see in the picture, the crate has its own shadow cast on itself. What I have tried is enabling polygon offset (i.e. glEnable(POLYGON_OFFSET_FILL), glPolygonOffset(GLfloat, GLfloat) ) but it didn't change much. As you see in the fragment shader, I have put a static offset value of 0.001 but I have to change the value depending on the distance of the light to get more desirable effects , which not very handy. I also tried using front face culling when I render to the framebuffer, that didn't change much too. The other problem is that pixels outside the Light's view frustum get shaded. The only object that is supposed to be able to cast shadows is the crate. I guess I should pick more appropriate projection and view matrices, but I'm not sure how to do that. What are some common practices, should I pick an orthographic projection? From googling around a bit, I understand that these issues are not that trivial. 
Does anyone have any easy-to-implement solutions to these problems? Could you give me some additional tips? Please ask me if you need more information on my code. Here is a comparison with and without shadow mapping of a close-up of the crate; the self-shadowing is more visible.

  • Tiled game: how to correctly load a background image?

    - by stighy
    Hi, I'm a newbie. I'm trying to develop a 2D game (top-down view). I would like to load a standard background, a textured ground... My "world" is big, for example 3000px x 3000px. I think it is not a good idea to load a 3000px x 3000px image and move it around. So what is the best practice? Load a single small image (64x64) and repeat it N times? If yes, OK, but how can I manage the "background" movement? Thanks, bye!
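
    Repeating a small tile only where the camera can see it is the usual approach. A sketch in XNA/C# terms (the platform isn't stated in the question, so the SpriteBatch types, the DrawGround name and the cameraPos parameter are assumptions):

        // Draw a 64x64 ground tile across the visible area only.
        // 'cameraPos' is the top-left world coordinate currently shown on screen.
        void DrawGround(SpriteBatch spriteBatch, Texture2D tile, Vector2 cameraPos, Viewport viewport)
        {
            int tileSize = tile.Width;                      // assume a square 64x64 texture
            int firstX = (int)Math.Floor(cameraPos.X / tileSize);
            int firstY = (int)Math.Floor(cameraPos.Y / tileSize);
            int countX = viewport.Width / tileSize + 2;     // +2 covers partial tiles at the edges
            int countY = viewport.Height / tileSize + 2;

            for (int y = firstY; y < firstY + countY; y++)
                for (int x = firstX; x < firstX + countX; x++)
                {
                    // world position of this tile minus the camera = screen position
                    Vector2 screenPos = new Vector2(x * tileSize, y * tileSize) - cameraPos;
                    spriteBatch.Draw(tile, screenPos, Color.White);
                }
        }

    Moving the "background" is then just a matter of changing cameraPos; the tiles themselves never move, and nothing 3000px wide is ever loaded.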

  • 2D Tile Map files for Platformer, JSON or DB?

    - by Stephen Tierney
    I'm developing a 2D platformer with some uni friends. We've based it on the XNA Platformer Starter Kit, which uses .txt files to store the tile map. While this is simple, it does not give us enough control and flexibility with level design. Some examples: multiple layers of content require multiple files, each object is fixed onto the grid, rotation of objects isn't allowed, the number of characters is limited, etc. So I'm doing some research into how to store the level data and map file.
    Reasoning for DB: from my perspective I see less redundancy of data using a database to store the tile data. Tiles in the same x,y position with the same characteristics can be reused from level to level. It seems like it would be simple enough to write a method to retrieve all the tiles that are used in a particular level from the database.
    Reasoning for JSON: visually editable files, and changes can be tracked via SVN a lot more easily. But there is repeated content.
    Does either have any drawbacks (load times, access times, memory, etc.) compared to the other? And what is commonly used in the industry? Currently the file looks like this:

        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        .........GGG........
        .........###........
        ....................
        ....GGG.......GGG...
        ....###.......###...
        ....................
        .1................X.
        ####################

    1 - Player start point, X - Level Exit, . - Empty space, # - Platform, G - Gem
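
    For a sense of scale on the JSON side, a hypothetical layout and loader (the format, class names and the use of Json.NET are all assumptions, not part of the question): layers, off-grid positions and rotation fit naturally, and the file still diffs reasonably in SVN.

        // level.json (hypothetical format):
        // { "name": "Level1",
        //   "layers": [ { "name": "platforms",
        //                 "tiles": [ { "id": "G", "x": 9.0, "y": 7.0, "rotation": 0.0 } ] } ] }

        // requires System.Collections.Generic, System.IO and Newtonsoft.Json (Json.NET)
        public class TileData  { public string Id; public float X, Y, Rotation; }
        public class LayerData { public string Name; public List<TileData> Tiles; }
        public class LevelData { public string Name; public List<LayerData> Layers; }

        LevelData LoadLevel(string path)
        {
            return JsonConvert.DeserializeObject<LevelData>(File.ReadAllText(path));
        }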

  • A few questions about integrating Audiokinetic Wwise and Unity

    - by SaldaVonSchwartz
    I'm new to Wwise and to using it with Unity, and though I have gotten the integration to work, I'm still dealing with some loose ends and have a few questions. (I'm on Unity 4.3 as of now, but I don't think it should make any difference.)
    The base path: the Wwise documentation implies you set this in the AkGlobalSoundEngineInitializer basePath public ivar, which is exposed to the editor. However, I found that this variable is not really used. Instead, the path is hardcoded to /Audio/GeneratedSoundBanks in AkBankPath. I had to modify both scripts to actually look in the path that I set in the editor property. What's the deal with this? Just sloppiness, or am I missing something?
    Also about paths: since I'm on Mac, I'm using Unity natively under OS X and, in tandem, the Wwise authoring tool via VMware, and I share the OS X Unity project folder so I can generate the soundbanks into the assets folder. However, the authoring tool (I downloaded the latest one for Windows) doesn't automatically generate any "platform-specific" subfolders for my Wwise files. That is, again, the Unity integration scripts assume the path to be /Audio/GeneratedSoundBanks/<my-platform>/, which in my case would be Mac (I set the authoring tool to generate for Mac). The documentation says Wwise will automatically generate the platform-specific folders, but it just dumps all the stuff in GeneratedSoundBanks. Am I missing some setting? Because right now I just manually create the /Mac folder.
    The C# methods AkSoundEngine.PostEvent and AkSoundEngine.LoadBank, for instance, have a few overloads, including ones where I can refer to the soundbanks or events by their ID. However, if I try to use these, for instance AkSoundEngine.LoadBank(, AkSoundEngine.AK_DEFAULT_POOL_ID) where the int I got from the .h header, I get Ak_Fail. If I use the overloads that reference the objects by string name, then it works. What gives?
    Converting the ID header to C#: the integration comes with a C# script that seems to fork a process to call Python, which in turn converts the C++ header into a C# script. This always fails unless I manually execute the Python script myself from outside Unity. Might be a permissions thing, but has anyone experienced this?
    The Profiler: I set up the Unity player to run in the background and am using the "Profile" version of the plugin. However, when I start the Unity OS X standalone app, the profiler in VMware does not see it. I'm thinking this might just be because I'm trying to see a running instance of the sound engine inside an OS X binary from a Windows virtual machine. But I'm just wondering if anyone has gotten the Windows profiler to see an OS X Unity binary.
    Different versions of the integration plugin: it's not clear to me from the documentation whether I have to manually remove the "Profile" version and install the "Release" version (or write a script to do it) when I'm going to do a Release build, or whether I should install both versions in Unity and it'll select the right one. Thanks!

  • Procedural landscape generation but not just fractals

    - by Richard Fabian
    In large procedural landscape games, the land seems dull, but that's probably because the real world is largely dull, with only limited places where the scenery is dramatic or tactical. Looking at world generation from this point of view, a landscape generator for a game needs to follow not the rules of landscaping, but rules married to the expectations of the gamer. For example, there could be a choke-point/route generator that creates hills, ravines, rivers, and mountains between cities, rather than cities plotted on the land based on the resources or conditions generated by the mountains and rainfall patterns. Is there any existing work being done like this? Start with cities or population centres and then add in the terrain afterwards?

  • Voxel Face Crawling (Mesh simplification, possibly using greedy)

    - by Tim Winter
    This is in regards to a Minecraft-like terrain engine. I store blocks in chunks (16x256x16 blocks in a chunk). When I generate a chunk, I use multiple procedural techniques to set the terrain and to place objects. While generating, I keep one 1D array for the full chunk (solid or not) and a separate 1D array of solid blocks. After generation, I iterate through the solid blocks checking their neighbors so I only generate block faces that don't have solid neighbors. I store which faces to generate in their own list (that's 6 lists, one per possible face). When rendering a chunk, I render all lists in the camera's current chunk and only the lists facing the camera in all other chunks. Using a 2D atlas with this little shader trick Andrew Russell suggested, I want to merge similar faces together completely. That is, if they are in the same list (same normal), are adjacent to each other, have the same light level, etc. My assumption would be to have each of the 6 lists sorted by the axis they rest on, then by the other two axes (the list for the top of a block would be sorted by it's Y value, then X, then Z). With this alone, I could quite easily merge strips of faces, but I'm looking to merge more than just strips together when possible. I've read up on this greedy meshing algorithm, but I am having a lot of trouble understanding it. To even use it, I would think I'd need to perform a type of flood-fill per sorted list to get the groups of merge-able faces. Then, per group, perform the greedy algorithm. It all sounds awfully expensive if I would ever want dynamic terrain/lighting after initial generation. So, my question: To perform merging of faces as described (ignoring whether it's a bad idea for dynamic terrain/lighting), is there perhaps an algorithm that is simpler to implement? I would also quite happily accept an answer that walks me through the greedy algorithm in a much simpler way (a link or explanation). I don't mind a slight performance decrease if it's easier to implement or even if it's only a little better than just doing strips. I worry that most algorithms focus on triangles rather than quads and using a 2D atlas the way I am, I don't know that I could implement something triangle based with my current skills. PS: I already frustum cull per chunk and as described, I also cull faces between solid blocks. I don't occlusion cull yet and may never.
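
    A stripped-down sketch of the greedy idea in 2D, since the full algorithm is easier to follow this way: take one of the six face lists flattened into a boolean mask for a single layer (same normal, same light level, and so on), grow each unvisited face rightward as far as possible, then downward as long as every cell in the next row also matches, and emit one quad for the whole rectangle. The mask representation and names below are assumptions.

        // mask[x, y] == true means "a mergeable face exists here".
        List<Rectangle> GreedyMerge(bool[,] mask)
        {
            int w = mask.GetLength(0), h = mask.GetLength(1);
            bool[,] visited = new bool[w, h];
            var quads = new List<Rectangle>();

            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    if (!mask[x, y] || visited[x, y]) continue;

                    // grow the strip to the right
                    int width = 1;
                    while (x + width < w && mask[x + width, y] && !visited[x + width, y]) width++;

                    // grow downward while the entire next row is mergeable
                    int height = 1;
                    while (y + height < h)
                    {
                        bool rowOk = true;
                        for (int k = 0; k < width; k++)
                            if (!mask[x + k, y + height] || visited[x + k, y + height]) { rowOk = false; break; }
                        if (!rowOk) break;
                        height++;
                    }

                    // mark the rectangle as consumed and emit a single quad for it
                    for (int dy = 0; dy < height; dy++)
                        for (int dx = 0; dx < width; dx++)
                            visited[x + dx, y + dy] = true;
                    quads.Add(new Rectangle(x, y, width, height));
                }
            return quads;
        }

    Run once per face list (per normal), with cells that differ in light level or texture excluded from the mask, this produces the merged quads without needing a separate flood fill.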

  • Factors to consider when building an algorithm for gun recoil

    - by Nate Bross
    What would be a good algorithm for calculating the recoil of a gun's crosshairs when shooting? What I've got now is something like this:
    - Define min/max recoil based on weapon size
    - Generate a random "delta" movement
    - Apply the random value to the X and/or Y of the crosshairs (only "up" on the Y axis)
    - Multiply the new delta based on the time since the previous shot (more recoil for full-auto)
    What I'm worried about is that this feels rather predictable; what other factors should one take into account when building recoil? While I'd like it to be somewhat predictable, I'd also like to keep players on their toes. I'm thinking about increasing the min/max recoil values by a large amount (relatively) and adding a weighting so that large recoils are rarer -- it seems like a lot of effort for something I felt would be simple. Maybe this is just something that needs to be fine-tuned with additional playtesting, and more playtesters? I think it's important to note that the recoil will be a large part of the game, and is a key factor in whether the game is fun/challenging or not.
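
    One way to make it less predictable without losing control, sketched with made-up names and constants: accumulate recoil into a single "heat" value that decays over time, and sample the kick from a distribution that is biased small but occasionally large (squaring a uniform sample does this cheaply). Weapon size, stance, and so on would then just scale the constants. ApplyToCrosshair is a hypothetical helper that nudges the aim point.

        Random rng = new Random();
        float heat;
        const float MaxHeat = 10f;
        const float HeatDecayPerSecond = 4f;

        // Called once per shot.
        void AddRecoil(float minKick, float maxKick, float timeSinceLastShot)
        {
            heat = Math.Min(heat + 1f / (0.1f + timeSinceLastShot), MaxHeat);   // full-auto builds heat fast

            float t = (float)rng.NextDouble();
            float kick = minKick + (maxKick - minKick) * t * t;                 // mostly small, occasionally large
            kick *= 1f + heat / MaxHeat;                                        // more heat, more recoil

            float x = ((float)rng.NextDouble() * 2f - 1f) * kick * 0.5f;        // sideways, either direction
            float y = kick;                                                     // always up
            ApplyToCrosshair(x, y);
        }

        // Called every frame so the crosshairs settle between bursts.
        void UpdateRecoil(float dt)
        {
            heat = Math.Max(0f, heat - HeatDecayPerSecond * dt);
        }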

  • How can I improve this collision detection logic?

    - by Dan
    I’m trying to make an android game and I’m having a bit of trouble getting the collision detection to work. It works sometimes but my conditions aren’t specific enough and my program gets it wrong. How could I improve the following if conditions? public boolean checkColisionWithPlayer( Player player ) { // Top Left // Top Right // Bottom Right // Bottom Left // int[][] PP = { { player.x, player.y }, { player.x + player.width, player.y }, {player.x + player.height, player.y + player.width }, { player.x, player.y + player.height } }; // TOP LEFT - PLAYER // if( ( PP[0][0] > x && PP[0][0] < x + width ) && ( PP[0][1] > y && PP[0][1] < y + height ) && ( (x - player.x) < 0 ) ) { player.isColided = true; //player.isSpinning = false; // Collision On Right if( PP[0][0] > ( x + width/2 ) && ( PP[0][1] - y < ( x + width ) - PP[0][0] ) ) { Log.i("Colision", "Top Left - Right Side"); player.x = ( x + width ) + 1; player.Vh = player.phy.getVelsoityWallColision(player.Vh, player.Cr); } // Collision On Bottom else if( PP[0][1] > ( y + height/2 ) ) { Log.i("Colision", "Top Left - Bottom Side"); player.y = ( y + height ) + 1; if( player.Vv > 0 ) player.Vv = 0; } return true; } // TOP RIGHT - PLAYER // else if( ( PP[1][0] > x && PP[1][0] < x + width ) && ( PP[1][1] > y && PP[1][1] < y + height ) && ( (x - player.x) > 0 ) ) { player.isColided = true; //player.isSpinning = false; // Collision On Left if( PP[1][0] < ( x + width/2 ) && ( PP[1][0] - x < PP[1][1] - y ) ) { Log.i("Colision", "Top Right - Left Side"); player.x = ( x ) + 1; player.Vh = player.phy.getVelsoityWallColision(player.Vh, player.Cr); } // Collision On Bottom else if( PP[1][1] > ( y + height/2 ) ) { Log.i("Colision", "Top Right - Bottom Side"); player.y = ( y + height ) + 1; if( player.Vv > 0 ) player.Vv = 0; } return true; } // BOTTOM RIGHT - PLAYER // else if( ( PP[2][0] > x && PP[2][0] < x + width ) && ( PP[2][1] > y && PP[2][1] < y + height ) ) { player.isColided = true; //player.isSpinning = false; // Collision On Left if( PP[2][0] < ( x + width/2 ) && ( PP[2][0] - x < PP[2][1] - y ) ) { Log.i("Colision", "Bottom Right - Left Side"); player.x = ( x ) + 1; player.Vh = player.phy.getVelsoityWallColision(player.Vh, player.Cr); } // Collision On Top else if( PP[2][1] < ( y + height/2 ) ) { Log.i("Colision", "Bottom Right - Top Side"); player.y = y - player.height; player.Vv = player.phy.getVelsoityWallColision(player.Vv, player.Cr); //player.Vh = -1 * ( player.phy.getVelsoityWallColision(player.Vv, player.Cr) ); int rs = x - player.x; Log.i("RS", String.format("%d", rs)); if( rs > 0 ) { player.direction = -1; player.isSpinning = true; player.Vh = -0.5 * ( player.phy.getVelsoityWallColision(player.Vv, player.Cr) ); } if( rs < 0 ) { player.direction = 1; player.isSpinning = true; player.Vh = 0.5 * ( player.phy.getVelsoityWallColision(player.Vv, player.Cr) ); } player.rotateSpeed = 1 * rs; } return true; } // BOTTOM LEFT - PLAYER // else if( ( PP[3][0] > x && PP[3][0] < x + width ) && ( PP[3][1] > y && PP[3][1] < y + height ) )//&& ( (x - player.x) > 0 ) ) { player.isColided = true; //player.isSpinning = false; // Collision On Right if( PP[3][0] > ( x + width/2 ) && ( PP[3][1] - y < ( x + width ) - PP[3][0] ) ) { Log.i("Colision", "Bottom Left - Right Side"); player.x = ( x + width ) + 1; player.Vh = player.phy.getVelsoityWallColision(player.Vh, player.Cr); } // Collision On Top else if( PP[3][1] < ( y + height/2 ) ) { Log.i("Colision", "Bottom Left - Top Side"); player.y = y - player.height; player.Vv = 
player.phy.getVelsoityWallColision(player.Vv, player.Cr); //player.Vh = -1 * ( player.phy.getVelsoityWallColision(player.Vv, player.Cr) ); int rs = x - player.x; //Log.i("RS", String.format("%d", rs)); //player.direction = -1; //player.isSpinning = true; if( rs > 0 ) { player.direction = -1; player.isSpinning = true; player.Vh = -1 * ( player.phy.getVelsoityWallColision(player.Vv, player.Cr) ); } if( rs < 0 ) { player.direction = 1; player.isSpinning = true; player.Vh = 1 * ( player.phy.getVelsoityWallColision(player.Vv, player.Cr) ); } player.rotateSpeed = 1 * rs; } //try { Thread.sleep(1000, 0); } //catch (InterruptedException e) {} return true; } else { player.isColided = false; player.isSpinning = true; } return false; }
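
    A much shorter alternative to testing each corner separately, sketched in C# since that's what most of the other snippets on this page use: compute the overlap of the two rectangles on each axis and resolve along the axis with the smaller penetration. That single test replaces the four corner cases and also tells you which side was hit. All names are placeholders.

        // Axis-aligned boxes: (x, y) is the top-left corner, width/height the size.
        // Returns true on collision and reports the push-out offset in resolveX/resolveY.
        bool Collide(float ax, float ay, float aw, float ah,
                     float bx, float by, float bw, float bh,
                     out float resolveX, out float resolveY)
        {
            resolveX = 0; resolveY = 0;

            float overlapX = Math.Min(ax + aw, bx + bw) - Math.Max(ax, bx);
            float overlapY = Math.Min(ay + ah, by + bh) - Math.Max(ay, by);
            if (overlapX <= 0 || overlapY <= 0) return false;            // no collision

            if (overlapX < overlapY)                                     // hit from the left or right
                resolveX = (ax + aw / 2 < bx + bw / 2) ? -overlapX : overlapX;
            else                                                         // hit from the top or bottom
                resolveY = (ay + ah / 2 < by + bh / 2) ? -overlapY : overlapY;
            return true;
        }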

  • When to detect collisions in the game loop

    - by Ciaran
    My game loop uses a fixed time step to do "physics" updates, say every 20 ms. This is where I move objects. I draw frames as frequently as possible. I work out a value between 0 and 1 to represent the proportion of the physics tick that is complete and interpolate between the previous and current physics state before drawing. It results in a smoother game, assuming the frame rate is higher than the physics update rate. I am currently doing the collision detection in the physics update routine. I was wondering whether it should instead take place in the interpolated draw routine, where the positions match what the user sees? Collisions can result in explosions, by the way.
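
    For reference, the usual shape of such a loop, sketched with invented helper names (WorldState, Integrate, ResolveCollisions, Interpolate): collisions are detected and resolved inside the fixed physics step, so anything gameplay-affecting (like spawning an explosion) stays deterministic and independent of frame rate, while the interpolated state is used only for drawing.

        const float PhysicsStep = 0.02f;            // 20 ms, as in the question
        float accumulator;
        WorldState previousState, currentState;     // 'WorldState' is a stand-in for your game state

        void GameLoop(float frameTime)
        {
            accumulator += frameTime;
            while (accumulator >= PhysicsStep)
            {
                previousState = currentState;
                currentState = Integrate(currentState, PhysicsStep);   // move objects
                ResolveCollisions(currentState);                       // gameplay happens here
                accumulator -= PhysicsStep;
            }
            float alpha = accumulator / PhysicsStep;                   // 0..1 into the current tick
            Draw(Interpolate(previousState, currentState, alpha));     // cosmetic only
        }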

  • How to control in the vertex shader where a pixel ends up in the render target?

    - by cubrman
    What if I have an arbitrary render target that is smaller than the screen (say it is 1x1 pixel) and I want to make sure in the VertexShaderFunction that all my pixels end up exactly in that one-pixel region? No matter what I do, they all seem to get culled at some point, though GraphicsDevice.Clear() works OK. Where is the top-left corner of the render target, vertex-shader-wise? I tried output.Position = (0,0,0,0)/(0,0,0,1)/(1,1,1,1)/(-0.5,0.5,0,1) and NOTHING works! A fullscreen quad is not an option, because I actually need to process geometry in the shaders to get the results I need.

  • Can frequent state changes decrease rendering performance?

    - by Miro
    Can frequent texture and shader binding decrease rendering performance?
    "Frequent" binding example:

        for object
            for material in object
                render part of object using that material

    "Low count" binding example:

        for material
            for object in material
                render part of object using that material

    I'm planning to use an octree later, and with this "low count" method of rendering it can drastically increase memory consumption. So is it a good idea?
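
    For what it's worth, the "low count" ordering usually falls out of sorting queued draw calls by a material key just before submission, so it doesn't have to dictate how the scene data (or the octree) is stored in memory. A rough sketch with invented types and helpers:

        struct DrawCall { public int MaterialId; public Mesh Part; }   // 'Mesh' is a stand-in type

        void Render(List<DrawCall> calls)
        {
            // Sort so that identical materials are adjacent, then bind each material once.
            calls.Sort((a, b) => a.MaterialId.CompareTo(b.MaterialId));

            int boundMaterial = -1;
            foreach (var call in calls)
            {
                if (call.MaterialId != boundMaterial)
                {
                    BindTexturesAndShader(call.MaterialId);   // the expensive state change
                    boundMaterial = call.MaterialId;
                }
                DrawMesh(call.Part);
            }
        }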

  • Is HTML5/WebGL performance bad on low-end Android tablets and phones?

    - by Boris van Schooten
    I've developed a couple of WebGL games, and am trying them out on Android. I found that they run very slowly on my tablet, however. For example, a game with 10 sprites or so runs at 5 fps. I tried Chrome and CocoonJS, but they are comparably slow. I also tried other games, and even games with only 5 or so moving sprites are this slow. This seems inconsistent with reports from others, such as this benchmark. Typically, when people talk about HTML5 game performance, they mention well-known and higher-end phones and tablets. While my 7" tablet is cheap (I believe it's a relabeled Allwinner tablet, apparently with the Mali 400 GPU), I found it generally has good gaming performance. All the games I tried run smoothly. I also developed an OpenGL ES 2 demo with 200 shaded 3D objects, and it ran at 50 fps. My suspicion is that many low-end and white-label devices may have unacceptable HTML5/WebGL support, which means there may be a large section of gamers you will not reach when you choose this as your platform. I've heard rumors about inconsistent performance of HTML5 and WebGL on different devices, but no clear picture emerges. I would like to hear if any of you have had similar experiences with HTML5 or WebGL, or whether I can find information about the percentage of devices I can expect to have decent performance.

  • Thread-safety in Cocos2d-iPhone?

    - by Malax
    After tinkering a bit with cocos2d, I discovered that there is no classic game loop and everything is more or less event driven. I guess I can wrap my head around that, no problem. But I cannot find anything about thread safety. Say I schedule something to occur every two seconds; which thread will run the code? Given that I cannot find anything about that, I guess there is just one cocos2d thread and everything will be fine. Nevertheless, this implicit assumption does not give me a good feeling. Knowing is better than guessing. ;-) Can anyone shed some light on that topic?

  • XNA: Rotating Bones

    - by MLM
    XNA 4.0 I am trying to learn how to rotate bones on a very simple tank model I made in Cinema 4D. It is rigged by 3 bones, Root - Main - Turret - Barrel I have binded all of the objects to the bones so that all translations/rotations work as planned in C4D. I exported it as .fbx I based my test project after: http://create.msdn.com/en-US/education/catalog/sample/simple_animation I can build successfully with no errors but all the rotations I try to do to my bones have no effect. I can transform my Root successfully using below but the bone transforms have no effect: myModel.Root.Transform = world; Matrix turretRotation = Matrix.CreateRotationY(MathHelper.ToRadians(37)); Matrix barrelRotation = Matrix.CreateRotationX(barrelRotationValue); MainBone.Transform = MainTransform; TurretBone.Transform = turretRotation * TurretTransform; BarrelBone.Transform = barrelRotation * BarrelTransform; I am wondering if my model is just not right or something important I am missing in the code. Here is my Game1.cs using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace ModelTesting { /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; float aspectRatio; Tank myModel; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } /// <summary> /// Allows the game to perform any initialization it needs to before starting to run. /// This is where it can query for any required services and load any non-graphic /// related content. Calling base.Initialize will enumerate through any components /// and initialize them as well. /// </summary> protected override void Initialize() { // TODO: Add your initialization logic here myModel = new Tank(); base.Initialize(); } /// <summary> /// LoadContent will be called once per game and is the place to load /// all of your content. /// </summary> protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); // TODO: use this.Content to load your game content here myModel.Load(Content); aspectRatio = graphics.GraphicsDevice.Viewport.AspectRatio; } /// <summary> /// UnloadContent will be called once per game and is the place to unload /// all content. /// </summary> protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } /// <summary> /// Allows the game to run logic such as updating the world, /// checking for collisions, gathering input, and playing audio. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); // TODO: Add your update logic here float time = (float)gameTime.TotalGameTime.TotalSeconds; // Move the pieces /* myModel.TurretRotation = (float)Math.Sin(time * 0.333f) * 1.25f; myModel.BarrelRotation = (float)Math.Sin(time * 0.25f) * 0.333f - 0.333f; */ base.Update(gameTime); } /// <summary> /// This is called when the game should draw itself. 
/// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); // Calculate the camera matrices. float time = (float)gameTime.TotalGameTime.TotalSeconds; Matrix rotation = Matrix.CreateRotationY(MathHelper.ToRadians(45)); Matrix view = Matrix.CreateLookAt(new Vector3(2000, 500, 0), new Vector3(0, 150, 0), Vector3.Up); Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, graphics.GraphicsDevice.Viewport.AspectRatio, 10, 10000); // TODO: Add your drawing code here myModel.Draw(rotation, view, projection); base.Draw(gameTime); } } } And here is my tank class: using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace ModelTesting { public class Tank { Model myModel; // Array holding all the bone transform matrices for the entire model. // We could just allocate this locally inside the Draw method, but it // is more efficient to reuse a single array, as this avoids creating // unnecessary garbage. public Matrix[] boneTransforms; // Shortcut references to the bones that we are going to animate. // We could just look these up inside the Draw method, but it is more // efficient to do the lookups while loading and cache the results. ModelBone MainBone; ModelBone TurretBone; ModelBone BarrelBone; // Store the original transform matrix for each animating bone. Matrix MainTransform; Matrix TurretTransform; Matrix BarrelTransform; // current animation positions float turretRotationValue; float barrelRotationValue; /// <summary> /// Gets or sets the turret rotation amount. /// </summary> public float TurretRotation { get { return turretRotationValue; } set { turretRotationValue = value; } } /// <summary> /// Gets or sets the barrel rotation amount. /// </summary> public float BarrelRotation { get { return barrelRotationValue; } set { barrelRotationValue = value; } } /// <summary> /// Load the model /// </summary> public void Load(ContentManager Content) { // TODO: use this.Content to load your game content here myModel = Content.Load<Model>("Models\\simple_tank02"); MainBone = myModel.Bones["Main"]; TurretBone = myModel.Bones["Turret"]; BarrelBone = myModel.Bones["Barrel"]; MainTransform = MainBone.Transform; TurretTransform = TurretBone.Transform; BarrelTransform = BarrelBone.Transform; // Allocate the transform matrix array. 
boneTransforms = new Matrix[myModel.Bones.Count]; } public void Draw(Matrix world, Matrix view, Matrix projection) { myModel.Root.Transform = world; Matrix turretRotation = Matrix.CreateRotationY(MathHelper.ToRadians(37)); Matrix barrelRotation = Matrix.CreateRotationX(barrelRotationValue); MainBone.Transform = MainTransform; TurretBone.Transform = turretRotation * TurretTransform; BarrelBone.Transform = barrelRotation * BarrelTransform; myModel.CopyAbsoluteBoneTransformsTo(boneTransforms); // Draw the model, a model can have multiple meshes, so loop foreach (ModelMesh mesh in myModel.Meshes) { // This is where the mesh orientation is set foreach (BasicEffect effect in mesh.Effects) { effect.World = boneTransforms[mesh.ParentBone.Index]; effect.View = view; effect.Projection = projection; effect.EnableDefaultLighting(); } // Draw the mesh, will use the effects set above mesh.Draw(); } } } }

  • Making an interface for input in C - How?

    - by tloszk
    I have a big question. I started to develop a simple 3D engine (or should I call it a framework?). I use OpenGL for rendering, it is developed for Windows, and it is all written in C. But I don't know how to write an "interface" for keyboard/mouse input. I would like to keep it as simple and clean as possible, which the "native" Win32 input system is not. If anyone has suggestions on the topic, please tell me. Thanks to everyone who answers my question!

  • Fighting Game and input buffering

    - by Pilispring
    In fighting games, there is an important technique called input buffering. When your character is performing an action, you can input the next action, and it will activate as soon as possible (the buffer is 5-10 frames). Has anyone already done this, or does anyone know the most efficient way to do it? I thought of something like this:

        Enum list moves (a list of all my moves)
        if (moves == fireball) {
            if (Mycharacterisidle) {
                Do the fireball
            } else if (MycharacterisMoving) {
                if (lastspriteisnotfinished) {
                    InputBuffer++;
                } else if (spriteisfinished && InputBuffer < 5) {
                    Do the fireball
                }
            }
        }

    Any better ideas? Thx.
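
    One pattern that tends to be tidier than counting frames inside each move's branch is a small timestamped queue that any move can be pushed into and that is drained whenever the character becomes able to act again. A sketch with assumed names and an assumed 8-frame window (needs System.Collections.Generic):

        enum Move { None, Fireball, Uppercut }

        struct BufferedInput
        {
            public Move Move;
            public int Frame;
        }

        class InputBuffer
        {
            const int WindowFrames = 8;                        // how long a buffered input stays valid
            readonly Queue<BufferedInput> buffer = new Queue<BufferedInput>();

            // Call whenever the player enters a command, even mid-animation.
            public void Push(Move move, int currentFrame)
            {
                buffer.Enqueue(new BufferedInput { Move = move, Frame = currentFrame });
            }

            // Call when the current action finishes and the character can act again.
            public Move PopValid(int currentFrame)
            {
                while (buffer.Count > 0)
                {
                    BufferedInput input = buffer.Dequeue();
                    if (currentFrame - input.Frame <= WindowFrames)
                        return input.Move;                     // still inside the buffer window
                    // expired entries are simply discarded
                }
                return Move.None;
            }
        }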

  • Rendering a skybox in a first-person shooter

    - by brainydexter
    I am trying to get a skybox rendered correctly in my first-person shooter game. I have the skybox cube rendering using GL_TEXTURE_CUBE_MAP, and I author the cube with extents of -1 and 1 along X, Y and Z. However, I can't wrap my head around the camera transformations that I need to apply to get it right. My render loop looks something like this:

        mp_Camera->ApplyTransform()  // takes the current player transformation, inverts it and pushes it onto the modelview stack
        Draw GameObjects
        Draw Skybox

    DrawSkybox does the following:

        glEnable(GL_TEXTURE_CUBE_MAP);
        glDepthMask(GL_FALSE);
        // draw the cube here with extents
        glDisable(GL_TEXTURE_CUBE_MAP);
        glDepthMask(GL_TRUE);

    Do I need to translate the skybox by the translation of the camera? (By the way, that didn't help either.) EDIT: I forgot to mention that it looks like a small cube with unit extents, and I can strafe in/out of the cube. Screenshot:
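
    The usual trick, whatever the API: keep the skybox centered on the camera, either by translating the box to the camera position every frame or by stripping the translation out of the view transform before drawing it, and scale it up (or rely on the disabled depth writes) so the player can never strafe out of it. Sketched below in XNA/C# matrix terms only because most of this page is XNA; the asker's code is fixed-function GL, and viewMatrix/cameraPosition are assumed variables.

        // Option A: rotation-only view, so the box always follows the camera position.
        Matrix skyView = viewMatrix;
        skyView.Translation = Vector3.Zero;

        // Option B: keep the normal view and move/scale the box to the camera instead.
        Matrix skyWorld = Matrix.CreateScale(100f) * Matrix.CreateTranslation(cameraPosition);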
