Search Results

Search found 25496 results on 1020 pages for 'development fabric'.

Page 384/1020 | < Previous Page | 380 381 382 383 384 385 386 387 388 389 390 391  | Next Page >

  • Why doesn't my GameMaker step event work?

    - by exceltior
    So I want to make a kind of floor that reduces the player's movement speed when he walks on it, but I'm having a thousand different issues implementing this, since it doesn't appear to do anything. I have tried different ways: 1 - A Step event with the following script: if keyboard_check(ord('A')) { player.x = -5; } if keyboard_check(ord('D')) { player.x = -5; } if keyboard_check(ord('W')) { player.x = -5; } if keyboard_check(ord('S')) { player.x = -5; } 2 - A Collision event with the same code. 3 - A Step event with collision detection in a script. None of these options seems to work at all. Can you help me?
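
    A likely culprit, sketched below in C# rather than GML (so the method and variable names are illustrative assumptions, not GameMaker built-ins): the script above assigns an absolute position (player.x = -5) on every key press, instead of changing the position by an offset or scaling the movement speed.

        // Hedged sketch (C#, not GML): move by an offset scaled by a slow
        // factor, rather than assigning an absolute position.
        static (float X, float Y) Step(float x, float y, bool left, bool right,
                                       bool up, bool down, bool onSlowFloor)
        {
            float speed = 5f * (onSlowFloor ? 0.5f : 1f); // halved on the slow floor
            if (left)  x -= speed;
            if (right) x += speed;
            if (up)    y -= speed;
            if (down)  y += speed;
            return (x, y);
        }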

    Read the article

  • Making a 2D game with responsive resolution

    - by alexandervrs
    I am making a 2D game, but I wish for it to be resolution agnostic. My target resolution, i.e. where things look as intended, is 1600 x 900. My ideas are: Make the HUD stay fixed to the sides no matter what the resolution, using one size of HUD graphics below a certain resolution and another above a certain large one. Use large HD PNG sprites/backgrounds whose dimensions are a power of 2, so they scale nicely. No vectors. Use the player's native resolution. Scale the game area (not the HUD) to fit, zooming in somewhat and cropping the sides of the game area if necessary for widescreen (no stretching), but always filling the screen. Have minimum and maximum resolution limits for small and very large displays, where you just change the resolution(?) or scale up/down to fit. What I am a bit confused about, though, is what math formula I would use to scale the game area correctly based on the resolution, regardless of aspect ratio: fully fitting on a square-ish screen, with some clipping at the sides on widescreen. Pseudocode would help as well. :)
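
    One way to get the "always fill, crop the sides" behaviour described above is to scale by the larger of the two screen/design ratios. A minimal C# sketch, assuming the 1600 x 900 design resolution from the question (the returned crop values are an illustrative addition):

        using System;

        static (float Scale, float CropX, float CropY) FitGameArea(float screenW, float screenH)
        {
            const float designW = 1600f, designH = 900f;
            // Fill: the LARGER ratio guarantees both axes cover the screen;
            // the overflow on the other axis is what gets cropped.
            float scale = Math.Max(screenW / designW, screenH / designH);
            float cropX = (designW * scale - screenW) / 2f; // pixels clipped per side, horizontally
            float cropY = (designH * scale - screenH) / 2f; // and vertically
            return (scale, cropX, cropY);
        }

    On a square-ish 1000 x 1000 display this gives scale = 1000/900, cropping the game area's left and right sides; on a 16:9 display it degenerates to a plain uniform scale with no crop.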

    Read the article

  • XNA Equivalent of Viewport.Unproject in a draw call as a matrix transformation

    - by Nick Crowther
    I am making a 2D sidescroller and I would like to draw my sprite in world space instead of client space, so I do not have to lock it to the center of the screen; when the camera stops, the sprite can walk off screen instead of being stuck at the center. In order to do this I wanted to make a transformation matrix that goes in my draw call. I have seen something like this: http://stackoverflow.com/questions/3570192/xna-viewport-projection-and-spritebatch I have seen Matrix.CreateOrthographic() used to go from world space to client space, but how would I go about using it to go from client space to world space? I was going to try putting the returns from my Viewport.Unproject method into a scale matrix such as: blah = Matrix.CreateScale(unproject.X, unproject.Y, 0); however, that doesn't seem to work correctly. Here is what I'm calling in my draw method (where X is the coordinate my camera should follow): Vector3 test = screentoworld(X, graphics); var clienttoworld = Matrix.CreateScale(test.X, test.Y, 0); animationPlayer.Draw(theSpriteBatch, new Vector2(X.X, X.Y), false, false, 0, Color.White, new Vector2(1,1), clienttoworld); Here is my code in my unproject method: Vector3 screentoworld(Vector2 some, GraphicsDevice graphics): Vector2 Position = new Vector2(some.X, some.Y); var project = Matrix.CreateOrthographic(5*graphicsdevice.Viewport.Width, graphicsdevice.Viewport.Height, 0, 1); var viewMatrix = Matrix.CreateLookAt( new Vector3(0, 0, -4.3f), new Vector3(X.X, X.Y, 0), Vector3.Up); //I have also tried substituting (cam.Position.X, cam.Position.Y, 0) for the (0, 0, -4.3f) Vector3 nearSource = new Vector3(Position, 0f); Vector3 nearPoint = graphicsdevice.Viewport.Unproject(nearSource, project, viewMatrix, Matrix.Identity); return nearPoint;
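
    For a 2D sidescroller, the usual alternative to unprojecting every sprite is to keep sprites in world coordinates and hand SpriteBatch a camera transform up front, via the XNA 4.0 Begin overload that takes a matrix. A minimal C# sketch (camPos, viewport, and the sprite variables are illustrative assumptions):

        // Hedged sketch: draw sprites in world space by passing a camera
        // matrix to SpriteBatch.Begin. camPos is an assumed camera field.
        Matrix cameraTransform =
            Matrix.CreateTranslation(-camPos.X, -camPos.Y, 0f) *   // world -> camera
            Matrix.CreateTranslation(viewport.Width / 2f,          // center camera on screen
                                     viewport.Height / 2f, 0f);

        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                          null, null, null, null, cameraTransform);
        spriteBatch.Draw(texture, playerWorldPosition, Color.White); // world coordinates
        spriteBatch.End();

    When the camera stops moving (camPos stays fixed), a sprite that keeps walking simply leaves the view, which is the behaviour asked for.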

    Read the article

  • Car crash Android game

    - by Axarydax
    I'd like to make a simple 2D car-crashing game, where the player drives his car into moving traffic and tries to cause as much damage as possible in each level (some Burnout games had a mode like this). The physics part of the game is the most important; I can worry about graphics later. Would an engine like Emini or Box2D work for this kind of game? Would Android devices have enough power to handle it? For example, if there were about 20 cars colliding, along with some buildings, it would be nice if I could get 20 fps.

    Read the article

  • Algorithm to find average position

    - by Simran kaur
    In the given diagram, I have the extreme left and right points, -2 and 4 in this case. So, obviously, I can calculate the width, which is 6 here. What we know: the number of partitions (3 in this case), and the partition number at any point, i.e. which one is the 1st, 2nd or 3rd partition (numbered starting from the left). What I want: the position of the purple line drawn, which is the position of the average of a particular partition. So basically, I just want a generalized formula to calculate the position of the average of any partition.
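
    Reading "average" as the midpoint of a partition: with left edge L, total width W, n equal partitions, and partition number k counted from 1 on the left, the midpoint sits at L + (k - 0.5) * W / n. A one-line C# sketch:

        // Midpoint of partition k (1-based, from the left) when [left, left + width]
        // is split into n equal parts. For left = -2, width = 6, n = 3, k = 2 this gives 1.
        static double PartitionMidpoint(double left, double width, int n, int k)
            => left + (k - 0.5) * width / n;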

    Read the article

  • CSM shadow errors when models are split

    - by KaiserJohaan
    I'm getting closer to fixing CSM, but there seems to be one more issue at hand. At certain angles, a model will be caught/split between two shadow-map cascades, like below: first depth split; second depth split - here you can see the model is caught between the splits. How does one fix this? Increase the overlapping boundaries between the splits? Or is the frustum calculation erroneous? CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix, Mat4& outFrustrumMat) { CameraFrustrum ret = { Vec4(1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f), }; const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist); const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix); outFrustrumMat = invMVP; for (Vec4& corner : ret) { corner = invMVP * corner; corner /= corner.w; } return ret; } Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir) { Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f)); Vec4 transf = lightViewMatrix * cameraFrustrum[0]; float maxZ = transf.z, minZ = transf.z; float maxX = transf.x, minX = transf.x; float maxY = transf.y, minY = transf.y; for (uint32_t i = 1; i < 8; i++) { transf = lightViewMatrix * cameraFrustrum[i]; if (transf.z > maxZ) maxZ = transf.z; if (transf.z < minZ) minZ = transf.z; if (transf.x > maxX) maxX = transf.x; if (transf.x < minX) minX = transf.x; if (transf.y > maxY) maxY = transf.y; if (transf.y < minY) minY = transf.y; } Mat4 viewMatrix(lightViewMatrix); viewMatrix[3][0] = -(minX + maxX) * 0.5f; viewMatrix[3][1] = -(minY + maxY) * 0.5f; viewMatrix[3][2] = -(minZ + maxZ) * 0.5f; viewMatrix[0][3] = 0.0f; viewMatrix[1][3] = 0.0f; viewMatrix[2][3] = 0.0f; viewMatrix[3][3] = 1.0f; Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5); return OrthographicMatrix(-halfExtents.x, halfExtents.x, halfExtents.y, -halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix; }
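
    Increasing the overlap is indeed the common remedy: stretch each cascade's far boundary slightly past the next split so a mesh straddling a boundary is fully covered by at least one cascade. A C# sketch of split distances using the usual log/uniform blend from parallel-split shadow maps, plus an overlap factor (the lambda and overlap values are assumptions to tune):

        using System;

        // Hedged sketch: PSSM-style split distances with a small overlap
        // between cascades. near/far are the camera planes; lambda blends
        // logarithmic vs. uniform spacing.
        static float[] SplitDistances(float near, float far, int cascades,
                                      float lambda = 0.75f, float overlap = 1.05f)
        {
            var splits = new float[cascades + 1];
            splits[0] = near;
            for (int i = 1; i <= cascades; i++)
            {
                float t = (float)i / cascades;
                float log = near * (float)Math.Pow(far / near, t);
                float uni = near + (far - near) * t;
                splits[i] = lambda * log + (1f - lambda) * uni;
            }
            // Stretch each interior boundary slightly so a mesh straddling
            // it is wholly inside the nearer cascade as well.
            for (int i = 1; i < cascades; i++) splits[i] *= overlap;
            return splits;
        }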

    Read the article

  • Problems implementing a screen space shadow ray tracing shader

    - by Grieverheart
    Here I previously asked about the possibility of ray tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important problems is that only visible objects can cast shadows, and objects between the camera and the shadow caster can interfere. Still, I thought it'd be a fun experiment. The idea is to calculate the view coordinates of pixels and cast a ray to the light. The ray is then traced pixel by pixel to the light and its depth is compared with the depth at the pixel. If a pixel is in front of the ray, a shadow is cast at the original pixel. At first I thought that I could use the DDA algorithm in 2D to calculate the distance 't' (in p = o + t d, where o is the origin and d the direction) to the next pixel and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that 't' would be the same in both the 2D and 3D equations. Unfortunately, this is not the case, since the projection matrix is 4D. Thus, some tweak needs to be done to make this work. I would like to ask if someone knows of a way to do what I described above, i.e. from a 2D ray in texture-coordinate space, get the 3D ray in view space. I did implement a simple version of the idea, which you can see in the following video: video here Shadows may seem a bit pixelated, but that's mostly because of the size of the step in 't' I chose. And here is the shader: #version 330 core uniform sampler2D DepthMap; uniform vec2 projAB; uniform mat4 projectionMatrix; const vec3 light_p = vec3(-30.0, 30.0, -10.0); noperspective in vec2 pass_TexCoord; smooth in vec3 viewRay; layout(location = 0) out float out_AO; vec3 CalcPosition(void){ float depth = texture(DepthMap, pass_TexCoord).r; float linearDepth = projAB.y / (depth - projAB.x); vec3 ray = normalize(viewRay); ray = ray / ray.z; return linearDepth * ray; } void main(void){ vec3 origin = CalcPosition(); if(origin.z < -60) discard; vec2 pixOrigin = pass_TexCoord; //tex coords vec3 dir = normalize(light_p - origin); vec2 texel_size = vec2(1.0 / 600.0); float t = 0.1; ivec2 pixIndex = ivec2(pixOrigin / texel_size); out_AO = 1.0; while(true){ vec3 ray = origin + t * dir; vec4 temp = projectionMatrix * vec4(ray, 1.0); vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5; ivec2 newIndex = ivec2(texCoord / texel_size); if(newIndex != pixIndex){ float depth = texture(DepthMap, texCoord).r; float linearDepth = projAB.y / (depth - projAB.x); if(linearDepth > ray.z + 0.1){ out_AO = 0.2; break; } pixIndex = newIndex; } t += 0.5; if(texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0) break; } } As you can see, here I just increment 't' by some arbitrary factor, calculate the 3D ray and project it to get the pixel coordinates, which is not really optimal. I would like to optimize the code as much as possible and compare it with shadow mapping and how it scales with the number of lights. PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full screen quad.
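
    On the 2D-to-3D mapping: after the perspective divide, 1/z (not z) varies linearly along a screen-space segment. So one way to march pixel by pixel is to project both ray endpoints, step the 2D segment with a DDA, and interpolate 1/z linearly to recover the view-space depth at each pixel, with no per-step projection. A hedged C# sketch of just that interpolation (standard rasterization math; the parameterisation is an assumption about how the march is set up):

        // Hedged sketch: perspective-correct depth along a 2D screen ray.
        // z0/z1 are the view-space depths of the projected endpoints, and
        // s in [0,1] is the linear parameter along the 2D screen segment.
        // Because 1/z is affine in screen space, depth at s is:
        static float DepthAlongScreenRay(float z0, float z1, float s)
        {
            float invZ = (1f - s) / z0 + s / z1; // linear in screen space
            return 1f / invZ;
        }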

    Read the article

  • How do multi-platform games usually store save data?

    - by PixelPerfect3
    I realize this is a bit of a broad question, but I was wondering if there is a "standard" in the industry when it comes to storing save data for games (and is it different across platforms: Xbox/PS/PC/Mac/Android/iOS?). Take, for example, a game like Assassin's Creed or The Walking Dead: they are on multiple platforms, and they usually have to save a fair amount of information about the player and their actions. Do they use something like XML files, databases, or just straight binary dumps? How much does it differ from platform to platform? I would appreciate it if someone with experience in the game industry would answer this.
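
    Whatever the engine, portable saves usually come down to a serializer the team controls plus an explicit version field, so old saves stay readable as the format grows; only the storage path tends to be platform-specific. A minimal C# sketch (the SaveData fields are invented for illustration):

        using System.IO;
        using System.Text;

        // Hedged sketch: a tiny versioned binary save. Field names are assumptions.
        class SaveData
        {
            public const int Version = 2;
            public int Level;
            public float Health;

            public void Write(Stream s)
            {
                using var w = new BinaryWriter(s, Encoding.UTF8, leaveOpen: true);
                w.Write(Version);
                w.Write(Level);
                w.Write(Health);
            }

            public static SaveData Read(Stream s)
            {
                using var r = new BinaryReader(s, Encoding.UTF8, leaveOpen: true);
                int version = r.ReadInt32();
                var data = new SaveData { Level = r.ReadInt32(), Health = r.ReadSingle() };
                // if (version < 2) { ...fill defaults for fields added later... }
                return data;
            }
        }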

    Read the article

  • How to make a ball fall faster on a ramp?

    - by Timothy Williams
    So, I'm making a ball game, where you pick up the ball, drop it on a ramp, and it flies off into blocks. The only problem right now is that it falls at a normal speed, then rolls off gently, not nearly fast enough to get over the wall and hit the blocks. Is there any way to make the ball go faster down the ramp? Maybe even make it go faster depending on the height you dropped it from (e.g. if you hold it way above the ramp and drop it, it will drop faster than if you dropped it right above the ramp).
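
    Assuming a physics-engine setup (a Unity-style rigidbody is used for illustration here; the question doesn't name the engine), the usual levers are stronger gravity and less drag. The height-dependent behaviour asked for then comes free, since impact speed grows with drop height as v = sqrt(2gh). A hedged sketch with illustrative tuning values:

        using UnityEngine;

        // Hedged sketch assuming Unity: stronger gravity, no damping.
        public class FasterBall : MonoBehaviour
        {
            void Start()
            {
                Physics.gravity = new Vector3(0f, -20f, 0f); // default is -9.81
                var rb = GetComponent<Rigidbody>();
                rb.drag = 0f;        // no air resistance
                rb.angularDrag = 0f; // roll freely down the ramp
            }
        }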

    Read the article

  • Ways to earn money through Flash games

    - by Maged
    If you like developing flash games just for fun, why not make money through them? There are different ways you can monetize your flash game: In Game Ads: Some common examples: Mochi Ads gamejacket ad4game CPMStar InviziAds You can make money by helping online gaming companies test and evaluate new games. Many of those companies are seeking feedback and reviews of their newest games. Find a sponsor and license your game. One of the quickest yet hardest ways to make money from the flash games you create is to find a website who is willing to sponsor them. With a single sponsorship, an individual can make anywhere from $1000-$7000 for a game. What are the best ads from these sites? If the game will be in social websites like Facebook and MySpace, will it still be useful to try other sites? Are there any other ways to earn money from a Flash game?

    Read the article

  • Rendering large and high poly meshes

    - by Aurus
    Consider a huge terrain that has a lot of polygons. To render this terrain I thought of the following techniques: Using a height-map instead of raw meshes: yes, but I want to create a lot of caves and stuff that simply won't work with height-maps. Using voxels: yes, but I think that would be too much, since I don't even want to support changing the terrain. Splitting into multiple chunks and doing some sort of LOD on the mesh: yes, but how would I do that? Tessellation usually creates more detail, not less. Precomputing the same mesh in lower-poly versions (like Mudbox does) and, depending on the distance, rendering one of those meshes: graphics memory is limited, and uploading only the chunks won't solve that problem, since the traffic would be too high. IMO the last one sounds really good, but imagine the following process: Upload and render the chunks depending on the current player position. [No problem] The player walks straight forward. Now we may have to swap one of the low-poly chunks for the high-poly one. So, remove the low-poly chunk and load the high-poly chunk. [Already too much traffic here, I think] I am not very experienced in graphics programming, and maybe the above process is totally okay, but somehow I think it is too much. And how about the disk space it would require? I think 3 levels would be fine, but isn't that also too much? (I am using OpenGL, but I don't think that this is important.)
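
    For the "which version of the chunk do I draw" step, a common pattern is to pick the level by distance with a bit of hysteresis, so a player hovering near a boundary doesn't trigger an upload every frame. A C# sketch (the three levels and the distance bands are assumptions):

        using System;

        // Hedged sketch: distance-banded LOD choice with hysteresis, to
        // avoid re-uploading meshes when the player sits near a boundary.
        static int ChooseLod(float distance, int currentLod)
        {
            float[] bands = { 50f, 150f };   // assumed: lod0 < 50, lod1 < 150, else lod2
            const float hysteresis = 10f;    // must cross a boundary by this much to switch
            int target = 2;
            if (distance < bands[0]) target = 0;
            else if (distance < bands[1]) target = 1;
            if (target != currentLod)
            {
                float edge = bands[Math.Min(target, currentLod)];
                if (Math.Abs(distance - edge) < hysteresis) return currentLod; // stay put
            }
            return target;
        }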

    Read the article

  • Normal map applied as diffuse texture looks wrong

    - by KaiserJohaan
    Diffuse textures work fine, but I am having problems with normal maps, so I thought I'd try applying the normal map as the diffuse map in my fragment shader, so I could check everything is OK. I comment out my normal-map code and just set the diffuse map to the normal map, and I get this: http://postimg.org/image/j9gudjl7r/ Looks like a smurf! This is the actual normal map of the main body: http://postimg.org/image/sbkyr6fg9/ Here is my fragment shader; notice I commented out the normal-map code so I could debug the normal map as a diffuse texture: "#version 330 \n \ \n \ layout(std140) uniform; \n \ \n \ const int MAX_LIGHTS = 8; \n \ \n \ struct Light \n \ { \n \ vec4 mLightColor; \n \ vec4 mLightPosition; \n \ vec4 mLightDirection; \n \ \n \ int mLightType; \n \ float mLightIntensity; \n \ float mLightRadius; \n \ float mMaxDistance; \n \ }; \n \ \n \ uniform UnifLighting \n \ { \n \ vec4 mGamma; \n \ vec3 mViewDirection; \n \ int mNumLights; \n \ \n \ Light mLights[MAX_LIGHTS]; \n \ } Lighting; \n \ \n \ uniform UnifMaterial \n \ { \n \ vec4 mDiffuseColor; \n \ vec4 mAmbientColor; \n \ vec4 mSpecularColor; \n \ vec4 mEmissiveColor; \n \ \n \ bool mHasDiffuseTexture; \n \ bool mHasNormalTexture; \n \ bool mLightingEnabled; \n \ float mSpecularShininess; \n \ } Material; \n \ \n \ uniform sampler2D unifDiffuseTexture; \n \ uniform sampler2D unifNormalTexture; \n \ \n \ in vec3 frag_position; \n \ in vec3 frag_normal; \n \ in vec2 frag_texcoord; \n \ in vec3 frag_tangent; \n \ in vec3 frag_bitangent; \n \ \n \ out vec4 finalColor; " " \n \ \n \ void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm) \n \ { \n \ vec3 viewDirection = normalize(Lighting.mViewDirection); \n \ vec3 halfAngle = normalize(dirToLight + viewDirection); \n \ \n \ float angleNormalHalf = acos(dot(halfAngle, normalize(normal))); \n \ float exponent = angleNormalHalf / Material.mSpecularShininess; \n \ exponent = -(exponent * exponent); \n \ \n \ gaussianTerm = exp(exponent); \n \ } \n \ \n \ vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal) \n \ { \n \ if (light.mLightType == 1) // point light \n \ { \n \ vec3 positionDiff = light.mLightPosition.xyz - frag_position; \n \ float dist = max(length(positionDiff) - light.mLightRadius, 0); \n \ \n \ float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1)); \n \ attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0); \n \ \n \ vec3 dirToLight = normalize(positionDiff); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if (light.mLightType == 2) // directional light \n \ { \n \ vec3 dirToLight = normalize(light.mLightDirection.xyz); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if (light.mLightType == 4) // ambient light \n \ return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor; \n \ else \n \ return vec4(0.0); \n \ } \n \ \n \ void main() \n \ { \n \ vec4 diffuseTexture = vec4(1.0); \n \ if (Material.mHasDiffuseTexture) \n \ diffuseTexture = texture(unifDiffuseTexture, frag_texcoord); \n \ \n \ vec3 normal = frag_normal; \n \ if (Material.mHasNormalTexture) \n \ { \n \ diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0); \n \ // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0); \n \ //mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal)); \n \ \n \ // normal = tangentToWorldSpace * normalTangentSpace; \n \ } \n \ \n \ if (Material.mLightingEnabled) \n \ { \n \ vec4 accumLighting = vec4(0.0); \n \ \n \ for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++) \n \ accumLighting += Material.mEmissiveColor * diffuseTexture + \n \ CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal); \n \ \n \ finalColor = pow(accumLighting, Lighting.mGamma); \n \ } \n \ else { \n \ finalColor = pow(diffuseTexture, Lighting.mGamma); \n \ } \n \ } \n"; Here is my wrapper around a texture: OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight, TextureFormat textureFormat, TextureType textureType, Logger& logger) : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType) { glGenTextures(1, &mTexture); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, mTexture); CHECK_GL_ERROR(mLogger); GLint glTextureFormat = (textureFormat == TextureFormat::TEXTURE_FORMAT_RGB ? GL_RGB : textureFormat == TextureFormat::TEXTURE_FORMAT_RGBA ? GL_RGBA : GL_RED); glTexImage2D(GL_TEXTURE_2D, 0, glTextureFormat, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]); CHECK_GL_ERROR(mLogger); glGenerateMipmap(GL_TEXTURE_2D); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, 0); CHECK_GL_ERROR(mLogger); } OpenGLTexture::~OpenGLTexture() { glDeleteBuffers(1, &mTexture); CHECK_GL_ERROR(mLogger); } And here is the sampler I create, which is shared between diffuse and normal textures: // texture sampler setup glGenSamplers(1, &mTextureSampler); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler); CHECK_GL_ERROR(mLogger); SetAnisotropicFiltering(mCurrentAnisotropy); The diffuse textures look like they should, but the normal map looks so weird. Why is this?
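
    A side note on this debugging approach: a tangent-space normal map drawn as diffuse is expected to look mostly blue-ish, since flat areas encode the normal (0,0,1) as RGB(128,128,255), so a "smurf" look by itself isn't proof of a broken map. Decoding is just the scale-and-bias the shader above already does; a C# sketch of the same operation for offline inspection:

        using System;

        // Hedged sketch: decode an 8-bit tangent-space normal texel back to
        // a unit vector; (128,128,255) decodes to roughly (0,0,1), i.e. flat.
        static (float X, float Y, float Z) DecodeNormal(byte r, byte g, byte b)
        {
            float x = r / 255f * 2f - 1f;
            float y = g / 255f * 2f - 1f;
            float z = b / 255f * 2f - 1f;
            float len = MathF.Sqrt(x * x + y * y + z * z);
            return (x / len, y / len, z / len);
        }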

    Read the article

  • Importing Models from Maya to OpenGL

    - by Mert Toka
    I am looking for ways to import models into my game project. I am using Maya as my modelling software, and GLUT for the windowing of my game. I found this great parser; it imports all the textures and normal vectors, but it is compatible with the .obj files of 3ds Max. I tried to use it with Maya OBJs, and it turned out that Maya's OBJ files are a bit different from the former, so it cannot parse them. If you know any way to convert Maya OBJ files to 3ds Max OBJ files, that would be acceptable, as would a new parser for Maya OBJ files.
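
    The OBJ format itself is shared; the usual breakage is that exporters vary in face syntax (v, v/vt, v//vn, v/vt/vn), so a parser written against one exporter's output chokes on another's. A hedged C# sketch of a face-token reader that accepts all four forms (not a full loader; negative relative indices, which some exporters emit, are left out):

        // Hedged sketch: parse one OBJ face token ("7", "7/3", "7//2", "7/3/2")
        // into 0-based indices, with -1 where a field is absent.
        static (int V, int Vt, int Vn) ParseFaceToken(string token)
        {
            string[] p = token.Split('/');
            // OBJ indices are 1-based; empty fields (as in "7//2") become -1.
            int Idx(int i) => i < p.Length && p[i].Length > 0 ? int.Parse(p[i]) - 1 : -1;
            return (Idx(0), Idx(1), Idx(2));
        }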

    Read the article

  • Dynamic Environment Creation

    - by Jack
    I was wondering (I'm thinking on a more small-scale, abstracted level) how one creates a dynamic environment a la Minecraft. Specifically, I'm thinking of the world as a 3-dimensional array of block objects; how is it made so that large features such as oceans are created? The language isn't important, since I'm thinking on a conceptual level, but if it helps, I use C# or C++. Thanks for any help!
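
    Large features usually come from layered coherent noise rather than hand placement: a low-frequency heightmap decides the continents, and every column whose height falls below a fixed sea level is filled with water, which is where oceans come from. A self-contained C# sketch using simple value noise (all constants are illustrative assumptions):

        using System;

        // Hedged sketch: value-noise heightmap; columns below seaLevel are ocean.
        static class MiniTerrain
        {
            static float Hash(int x, int z) // deterministic pseudo-random in [0, 1)
            {
                unchecked
                {
                    uint h = (uint)(x * 374761393 + z * 668265263);
                    h = (h ^ (h >> 13)) * 1274126177u;
                    return (h ^ (h >> 16)) / (float)uint.MaxValue;
                }
            }

            static float ValueNoise(float x, float z) // bilinear blend of lattice hashes
            {
                int x0 = (int)MathF.Floor(x), z0 = (int)MathF.Floor(z);
                float tx = x - x0, tz = z - z0;
                float a = Hash(x0, z0),     b = Hash(x0 + 1, z0);
                float c = Hash(x0, z0 + 1), d = Hash(x0 + 1, z0 + 1);
                return (a + (b - a) * tx) * (1 - tz) + (c + (d - c) * tx) * tz;
            }

            public static bool IsOcean(int x, int z, float seaLevel = 0.45f)
            {
                // Low frequency => features span hundreds of blocks (continents, oceans).
                float height = ValueNoise(x / 256f, z / 256f);
                return height < seaLevel;
            }
        }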

    Read the article

  • Making XNA Play Nice With 3DS Max, Bounding Spheres

    - by Jason R. Mick
    I'm using 3DS Max 2010 with the KW x-porter plugin, which outputs a .X file (I just downloaded the very latest version). I've been getting some odd results: http://www.picvalley.net/u/2930/2265240220441812321333990933PAStFeSONWQslOrMQC5q.PNG It looks like the culling is screwed up. Note that models I make in Milkshape don't seem to have these problems. I've also tried to export an FBX file from 3DS Max 2010 and have been getting similar results. What are your suggestions for exporting *.3DS models to a workable XNA form? What tools do you use? To be clear, the model in question has none of these defects when viewed from similar angles in 3DS Max 2010. http://www.picvalley.net/u/2563/151728957814855401111333991302mSvEJ03Zv22GwHFgIhiV.PNG Any ideas on this oddity would also be appreciated! Edit 1 - Additional issue: I forgot to mention that the model otherwise seems alright, but rotation seems to double. In other words, when I scroll my camera view left to right, the model (whose Draw I give the camera's view and perspective matrices with BasicEffect) seems to rotate twice as much as models I draw natively in XNA.
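
    Inside-out-looking models from an exporter are often just flipped triangle winding. Before re-exporting, it can be worth toggling the cull mode at draw time to confirm the diagnosis; a minimal XNA-style C# sketch:

        // Hedged sketch: if the mesh renders inside-out, its winding order is
        // likely flipped; XNA's default state culls counter-clockwise faces.
        graphicsDevice.RasterizerState = RasterizerState.CullClockwise;
        // ...draw the imported model here. If it now looks right, fix the
        // winding (or a negative scale) in the exporter rather than shipping this.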

    Read the article

  • Cocos2d Box2D: how to shoot an object into the screen

    - by Ahoura Ghotbi
    I have the code below: - (id) initWithGame:(mainGame*)game { if ((self = [super init])) { isTouchEnabled_ = YES; self.game = game; CGSize size = [[CCDirector sharedDirector] winSize]; screenH = size.height; screenW = size.width; character = [CCSprite spriteWithFile:@"o.png"]; character.position = ccp( size.width /2 , size.height/2 ); character.contentSize = CGSizeMake(72, 79); [self addChild:character z:10 tag:1]; _body = NULL; _radius = 14.0f; // Create ball body and shape b2BodyDef ballBodyDef; ballBodyDef.type = b2_dynamicBody; ballBodyDef.position.Set(100/PTM_RATIO, 100/PTM_RATIO); ballBodyDef.userData = character; _body = _game.world->CreateBody(&ballBodyDef); b2CircleShape circle; circle.m_radius = 26.0/PTM_RATIO; b2FixtureDef ballShapeDef; ballShapeDef.shape = &circle; ballShapeDef.density = 1.0f; ballShapeDef.friction = 0.2f; ballShapeDef.restitution = 0.8f; _body->CreateFixture(&ballShapeDef); [self schedule:@selector(gameChecker:)]; [self schedule:@selector(tick:)]; } return self; } - (void)gameChecker:(ccTime) dt{ if(character.position.y > 200){ [self unschedule:@selector(tick:)]; [self schedule:@selector(dropObject:)]; } } - (void)tick:(ccTime) dt { b2Vec2 force; force.Set(_body->GetLinearVelocity().x, _body->GetLinearVelocity().y+1.0f); for (b2Body* b = _game.world->GetBodyList(); b; b = b->GetNext()) { if (b->GetUserData() == character) { b->SetLinearVelocity(force); } } _game.world->Step(dt, 10, 10); for(b2Body *b = _game.world->GetBodyList(); b; b=b->GetNext()) { if (b->GetUserData() != NULL) { CCSprite *ballData = (CCSprite *)b->GetUserData(); ballData.position = ccp(b->GetPosition().x * PTM_RATIO, b->GetPosition().y * PTM_RATIO); ballData.rotation = -1 * CC_RADIANS_TO_DEGREES(b->GetAngle()); } } } -(void)dropObject:(ccTime) dt{ b2Vec2 force; force.Set(_body->GetLinearVelocity().x, _body->GetLinearVelocity().y-1.0f); for (b2Body* b = _game.world->GetBodyList(); b; b = b->GetNext()) { if (b->GetUserData() == character) { b->SetLinearVelocity(force); } } _game.world->Step(dt, 10, 10); for(b2Body *b = _game.world->GetBodyList(); b; b=b->GetNext()) { if (b->GetUserData() != NULL) { CCSprite *ballData = (CCSprite *)b->GetUserData(); ballData.position = ccp(b->GetPosition().x * PTM_RATIO, b->GetPosition().y * PTM_RATIO); ballData.rotation = -1 * CC_RADIANS_TO_DEGREES(b->GetAngle()); } } } I have been trying to get the effect that Fruit Ninja has when shooting the fruits, but it seems hard to get such an animation, so I was wondering if anyone can point me in the right direction and/or give me sample code for a single object that gets thrown into the screen with a direction.
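
    The Fruit Ninja effect is usually just giving the body one initial velocity that points up into the screen and then letting the world's gravity pull it back down, rather than nudging the velocity every tick as above. A hedged C# sketch of picking that launch velocity (the Box2D call itself would be SetLinearVelocity or ApplyLinearImpulse on the body; the angle and speed ranges here are tuning assumptions):

        using System;

        // Hedged sketch: launch velocity for a "tossed fruit". Spawn below
        // the screen, aim mostly upward with a little random sideways drift;
        // gravity produces the arc.
        static (float Vx, float Vy) LaunchVelocity(Random rng)
        {
            double angle = (75 + rng.NextDouble() * 30) * Math.PI / 180; // 75-105 degrees
            float speed = 18f + (float)rng.NextDouble() * 4f;            // Box2D units (m/s)
            return (speed * (float)Math.Cos(angle), speed * (float)Math.Sin(angle));
        }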

    Read the article

  • How can I pass an array of floats to the fragment shader using textures?

    - by James
    I want to map out a 2D array of depth elements for the fragment shader to use to check depth against, to create shadows. I want to be able to copy a float array into the GPU, but using large uniform arrays causes segfaults in OpenGL, so that is not an option. I tried texturing, but the best I got was to use GL_DEPTH_COMPONENT: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, smap); This doesn't work because it stores depth components (0.0 - 1.0), which I don't want, because I have no idea how to calculate them using the depth value produced by the light source's MVP matrix multiplied by the coordinate of each vertex. Is there any way to store and access large 2D arrays of floats in OpenGL?
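
    Single-channel float textures are the usual vehicle for this: the GL_R32F internal format stores unclamped 32-bit floats that the shader can read back exactly with texelFetch. A hedged sketch using OpenTK-style C# bindings (the binding library is an assumption; the underlying GL calls are standard):

        using OpenTK.Graphics.OpenGL;

        // Hedged sketch: upload a 512x512 float array as an unclamped R32F texture.
        float[] depths = new float[512 * 512];
        int tex = GL.GenTexture();
        GL.BindTexture(TextureTarget.Texture2D, tex);
        GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.R32f,
                      512, 512, 0, PixelFormat.Red, PixelType.Float, depths);
        // No filtering: the values are data, not colours.
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter,
                        (int)TextureMinFilter.Nearest);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter,
                        (int)TextureMagFilter.Nearest);
        // In GLSL: float d = texelFetch(sampler, ivec2(x, y), 0).r;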

    Read the article

  • How to optimize collision detection

    - by Niklas
    I am developing a 2D Java game with LibGDX. This is what it kind of looks like (simplified): The big black circle is the player, which you can move by tilting the smartphone. The red circles and blue rectangles are enemies, which move from the right of the screen to the left. The player has to avoid crashing into them. Right now I check every enemy against the player in the game loop, to see whether they collide or not. This seems kind of inefficient to me, but I don't know how to improve it. I have tried the quadtree approach, but it did not really work: the player could easily glitch through enemies and the collision was not detected. Unfortunately, I have since deleted the quadtree implementation. I used this tutorial/blog as my quadtree implementation: http://gamedevelopment.tutsplus.com/tutorials/quick-tip-use-quadtrees-to-detect-likely-collisions-in-2d-space--gamedev-374
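
    Worth noting first: one player against N enemies is only N cheap tests per frame, which is usually fine; spatial partitioning starts paying off when everything collides with everything. If a broad phase is still wanted, a uniform grid is simpler to get right than a quadtree for similar-sized movers. A C# sketch (the cell size and int handles are assumptions; the same structure ports directly to Java):

        using System;
        using System.Collections.Generic;

        // Hedged sketch: uniform-grid broad phase. Re-insert enemies each
        // frame, then test the player only against nearby cells.
        class BroadPhaseGrid
        {
            readonly float cell;
            readonly Dictionary<(int, int), List<int>> cells =
                new Dictionary<(int, int), List<int>>();

            public BroadPhaseGrid(float cellSize) => cell = cellSize;

            (int, int) Key(float x, float y) =>
                ((int)MathF.Floor(x / cell), (int)MathF.Floor(y / cell));

            public void Insert(int enemyId, float x, float y)
            {
                var k = Key(x, y);
                if (!cells.TryGetValue(k, out var list)) cells[k] = list = new List<int>();
                list.Add(enemyId);
            }

            public IEnumerable<int> Near(float x, float y) // own cell + 8 neighbours
            {
                var (cx, cy) = Key(x, y);
                for (int dx = -1; dx <= 1; dx++)
                    for (int dy = -1; dy <= 1; dy++)
                        if (cells.TryGetValue((cx + dx, cy + dy), out var list))
                            foreach (int id in list) yield return id;
            }
        }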

    Read the article

  • XNA Quadtree with LOD

    - by Byron Cobb
    I'm looking to create a fairly large environment, and as such would like to implement a quadtree and use LOD on it. I've looked through numerous examples and I get the basic idea of a quadtree: start with a root node whose 4 vertices cover the whole map, and divide into 4 child nodes until I meet some criterion (a maximum number of triangles). I'm looking for a very basic algorithm or explanation with respect to drawing the quadtree. Which vertices need to be stored per iteration? When do I determine which vertices to draw? When do I update indices and vertices? How do I integrate the bounding frustum? Do I include parent and child vertices? I'm looking for very simple instruction on what to do. I've scoured the internet for days now looking, but everyone adds extra code and a different spin without explanation. I understand quadtrees, but not with respect to 3D rendering and LOD. A link to an outside source will probably have been read by myself already and won't help. Regards, Byron.
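
    The usual shape of the draw pass: every node (not just leaves) owns a mesh covering its whole square at a detail matching its depth; recurse while the camera is close, and draw the node's own coarser mesh and stop when it is far (or a leaf). Frustum culling slots in as an early-out at each node. A C# sketch of that selection (the distance threshold is an assumption):

        using System;

        // Hedged sketch: LOD selection over a terrain quadtree.
        class QuadNode
        {
            public float CenterX, CenterZ, Size;   // square footprint
            public QuadNode[] Children;            // null at leaves
            public object Mesh;                    // placeholder for this node's mesh

            public void Draw(float camX, float camZ, Action<object> drawMesh)
            {
                // (A frustum test against this node's bounds would go here.)
                float dx = camX - CenterX, dz = camZ - CenterZ;
                float dist = MathF.Sqrt(dx * dx + dz * dz);
                // Close relative to our size and subdividable => descend for detail.
                if (Children != null && dist < Size * 2f)
                    foreach (var c in Children) c.Draw(camX, camZ, drawMesh);
                else
                    drawMesh(Mesh); // far away (or leaf): the coarser mesh suffices
            }
        }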

    Read the article

  • How to apply Data Oriented Design with Object Oriented Programming?

    - by Pombal
    I've read lots of articles about Data Oriented Design (DOD) and I understand it, but I can't design an Object Oriented Programming (OOP) system with DOD in mind; I think my OOP education is blocking me. How should I think in order to mix the two? The objective is to have a nice OOP interface while using DOD behind the scenes. I saw this too, but it didn't help much: http://stackoverflow.com/questions/3872354/how-to-apply-dop-and-keep-a-nice-user-interface
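
    One common reconciliation: keep the data laid out structure-of-arrays for cache-friendly batch updates, and put an object-style facade on top, where an "object" is just an index into the arrays. A C# sketch:

        // Hedged sketch: OOP interface outside, data-oriented storage inside.
        // Positions/velocities live in parallel arrays so Update() streams
        // memory linearly instead of chasing per-object pointers.
        class ParticleSystem
        {
            float[] posX, posY, velX, velY;
            int count;

            public ParticleSystem(int capacity)
            {
                posX = new float[capacity]; posY = new float[capacity];
                velX = new float[capacity]; velY = new float[capacity];
            }

            public int Spawn(float x, float y, float vx, float vy)
            {
                posX[count] = x; posY[count] = y;
                velX[count] = vx; velY[count] = vy;
                return count++; // the "object" handed to callers is this handle
            }

            public void Update(float dt) // hot loop touches contiguous memory
            {
                for (int i = 0; i < count; i++)
                {
                    posX[i] += velX[i] * dt;
                    posY[i] += velY[i] * dt;
                }
            }

            public (float X, float Y) GetPosition(int id) => (posX[id], posY[id]);
        }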

    Read the article

  • Maintain proper symbol order when applying an armature in flash

    - by Michael Taufen
    I am trying to animate a character's leg in Flash CS5.5 for a game I am working on. I decided to use the bone tool because it's awesome. The problem I am having, however, is that for my character to be animated properly, the symbols that make up his leg (upper leg, lower leg, and shoe) need to be stacked on top of each other in a specific way (otherwise the shoe looks like it's next to the leg, etc.). Applying the bones results in the following problem: the first symbol I apply them to is placed at the rear on the armature layer, the next on top of it, and so on, until the final symbol is on top. I need them to be in the opposite order, but Arrange > Send to Back does nothing on the armature layer. How can I fix this? tl;dr: The bone tool is not maintaining the stacking order of my objects; please help. Thanks for helping :).

    Read the article

  • Rendering performance in FlasCC + UDK when compared to Stage3d and UDK on Windows?

    - by Arthur Wulf White
    Adobe recently released the Flash C++ Compiler (FlasCC), which UDK uses to target Flash Player, so developers can now use UDK for browser applications. Does this mean greater performance than using a Stage3D engine (Away3D 4), and how much of a noticeable difference in rendering speed would it make? Is there any benchmark you could propose that would allow comparing them fairly? I am asking this to help myself understand the performance consequences of deciding to use UDK for a browser-based game. I would also like to know how it compares with UDK running natively on Windows. I am not asking which technology to use or which is better; I am only interested in optimizing rendering speed in a 3D browser game with Flash.

    Read the article

  • Normals vs Normal maps

    - by KaiserJohaan
    I am using the Assimp asset importer (http://assimp.sourceforge.net/lib_html/index.html) to parse 3D models. So far, I've simply pulled out the normal vectors, which are defined for each vertex in my meshes. Yet I have also found various tutorials on normal maps... As I understand it, for normal maps the normal vectors are stored in each texel of a normal map, and you pull these out of the normal texture in the shader. Why are there two ways to get the normals, which one is considered best practice, and why?
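
    The two aren't really alternatives so much as layers: vertex normals give the coarse per-vertex orientation of the surface, and a normal map adds per-texel detail expressed relative to them; at shading time the sampled texel is rotated from tangent space into world space using the vertex normal's TBN basis. A C# sketch of that combination (plain vector math; the inputs are assumed already normalized and decoded):

        using System.Numerics;

        // Hedged sketch: combine a per-vertex TBN basis with a sampled
        // tangent-space normal (the normal map's contribution).
        static Vector3 PerturbNormal(Vector3 tangent, Vector3 bitangent,
                                     Vector3 normal, Vector3 sampled)
        {
            // sampled is the decoded texel, e.g. (0,0,1) where the map is flat,
            // in which case this returns the plain vertex normal.
            Vector3 world = sampled.X * tangent + sampled.Y * bitangent + sampled.Z * normal;
            return Vector3.Normalize(world);
        }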

    Read the article

  • Rotate around the centre of the screen

    - by Dan Scott
    I want my camera to rotate around the centre of the screen, and I'm not sure how to achieve that. I have a rotation in the camera, but I'm not sure what it's rotating around. (I think it might be rotating around the camera's position.X; I'm not sure.) If you look at these two images: http://imgur.com/E9qoAM7,5qzyhGD#0 http://imgur.com/E9qoAM7,5qzyhGD#1 The first one shows how the camera is normally, and the second shows how I want the level to look when I rotate the camera 90 degrees left or right. My camera: public class Camera { private Matrix transform; public Matrix Transform { get { return transform; } } private Vector2 position; public Vector2 Position { get { return position; } set { position = value; } } private float rotation; public float Rotation { get { return rotation; } set { rotation = value; } } private Viewport viewPort; public Camera(Viewport newView) { viewPort = newView; } public void Update(Player player) { position.X = player.PlayerPos.X + (player.PlayerRect.Width / 2) - viewPort.Width / 4; if (position.X < 0) position.X = 0; transform = Matrix.CreateTranslation(new Vector3(-position, 0)) * Matrix.CreateRotationZ(Rotation); if (Keyboard.GetState().IsKeyDown(Keys.D)) { rotation += 0.01f; } if (Keyboard.GetState().IsKeyDown(Keys.A)) { rotation -= 0.01f; } } } (I'm assuming you would need to rotate around the centre of the screen to achieve this)
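
    The transform above rotates about the world origin, because the rotation is applied after only a translation by -position. Rotating about the screen centre is usually done by sandwiching the rotation between two translations: bring the camera position to the origin, rotate there, then move the origin to the middle of the viewport. A hedged sketch reusing the field names from the camera class above:

        // Hedged sketch: rotate the 2D camera about the centre of the screen.
        transform =
            Matrix.CreateTranslation(new Vector3(-position, 0)) *   // camera -> origin
            Matrix.CreateRotationZ(rotation) *                      // spin about the origin
            Matrix.CreateTranslation(viewPort.Width / 2f,           // origin -> screen centre
                                     viewPort.Height / 2f, 0f);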

    Read the article

  • Why aren't tangent space normal maps completely blue?

    - by seahorse
    Why aren't normal maps just blue? I would think that normal maps should be predominantly blue in color because the Z component of the normal is represented by blue. Normals point out of the surface in the Z direction so we should see blue as the predominant colour since the Z component is dominant. By definition tangent space is perpendicular to the surface. At any point we should have the normal always pointing in the Z (blue direction) with no X (red direction) or Y (green direction). Thus the normal map (since it is a "normal map") should have the colour of the normals which is just blue (R = x = 0, G = y = 0, B = z = 1) with no shades in between. But normal maps are not so, and they have gradients of shades in them. Why is this so?

    Read the article
