Search Results



  • Open source level editor for HTML5 platform game?

    - by Lai Yu-Hsuan
    A natty GUI editor is very helpful for creating level maps. I want to use an open-source option rather than build my own from scratch. I found Tiled Map Editor, but it doesn't work for what I want. Though I'm building an HTML5 game, I don't have to use an HTML5 level editor, as long as it can output well-formatted map files which my JavaScript can read. Edit: Sorry for the confusion. Tiled does not work for me because, to make the player perform a 'tricky' jump, I sometimes want to set the distance between two platforms to, say, 7/3 or 8/3 tiles. But in Tiled I can only get 2 or 3. If Tiled can do this, please teach me.

    Read the article

  • How are these bullets done?

    - by Mike
    I really want to know how the bullets in Radiangames Inferno are done. The bullets seem like they are just billboard particles, but I am curious about how their tails are implemented. They can curve, so this means they are not just a billboard. Also, they appear continuous, which implies that the tails are not made of a bunch of smaller particles (I think). Can anyone shed some light on this for me?
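
    One common way to get smooth, curving tails like this is a "ribbon" built from the projectile's recent positions, rather than a chain of separate particles. A rough sketch of the idea (all names hypothetical, and not necessarily how Inferno does it):

        // Record where the bullet has been; each frame, rebuild a thin
        // triangle strip along those points, widest and most opaque at the
        // head, tapering to nothing at the tail. Because the strip follows
        // the recorded path, it curves wherever the bullet curved.
        #include <deque>

        struct Vec2 { float x, y; };

        struct BulletTrail {
            std::deque<Vec2> history;     // newest position at the front
            std::size_t maxPoints = 16;   // tail length in samples

            void update(const Vec2& bulletPos) {
                history.push_front(bulletPos);
                if (history.size() > maxPoints)
                    history.pop_back();
            }
            // Rendering: emit one quad per consecutive pair in 'history',
            // fading alpha and width with the point's index, then draw the
            // whole strip with additive blending for the glow.
        };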

    Read the article

  • What are the benefits of designing a KeyBinding relay?

    - by Adam Naylor
    The input system of Quake3 is handled using a key-binding relay, whereby each keypress is matched against a 'binding', which is then passed to the CLI along with a timestamp of when the keypress (or release) occurred. I just wanted to get an idea from developers of what they consider to be the key benefits of designing an input system around this approach. One thing I don't particularly like is the appending of the timestamp to the bound command. This seems like a bit of a hack to bend the CLI into handling the game's input. Also, I feel that detecting the keypress only to add the command to a stream of text that gets parsed later is a slightly latent way of responding to input (or is this unfounded?). The only real benefit I can see is that it allows you to bind 'complex' commands to keypresses, like 'switch weapon;+fire;' for example. Or maybe for journaling purposes? Thanks for any insights!
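
    For reference, a minimal sketch of such a relay, assuming a text command buffer that the CLI parses later in the frame (names are illustrative, not Quake3's actual code):

        #include <map>
        #include <string>

        std::map<int, std::string> bindings;   // key code -> bound command string
        std::string commandBuffer;             // consumed by the CLI each frame

        void OnKeyEvent(int key, bool down, int timeMs) {
            auto it = bindings.find(key);
            if (it == bindings.end()) return;
            const std::string& cmd = it->second;
            if (!cmd.empty() && cmd[0] == '+' && !down) {
                // "+fire" bound on press is mirrored as "-fire" on release
                commandBuffer += "-" + cmd.substr(1) + " " + std::to_string(timeMs) + "\n";
            } else if (down) {
                // the timestamp rides along so movement code can account for
                // sub-frame press timing instead of quantizing to the frame
                commandBuffer += cmd + " " + std::to_string(timeMs) + "\n";
            }
        }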

    Read the article

  • How to attach turrets to tiles in a tile based game

    - by Joseph St. Pierre
    I am a Flash developer, and I am building a tower defense game. The world is built from tiles, and I have gotten that accomplished easily. I have also gotten level changes and enemy spawning down as well. However, I want the player to be able to spawn turrets, and to have those turrets sit on specific tiles, based on where the player placed them. Here is my code:

        stop();
        colOffset = 50;
        rowOffset = 50;
        guns = [];
        placed = true;
        dead = 0;
        spawned = 0;
        level = 1;
        interval = 350 / level;
        amount = level * 20;
        counter = 0;
        numCol = 14;
        numRow = 10;
        tiles = [];
        k = 0;
        create = false;
        tileName = new Array("road", "grass", "end", "start");
        board = new Array(
            new Array(1,1,1,1,3,1,1,1,1,1,2,1,1,1),
            new Array(1,1,1,0,0,1,1,1,1,1,0,1,1,1),
            new Array(1,1,1,0,1,1,1,1,1,1,0,0,1,1),
            new Array(1,1,1,0,0,0,1,1,1,1,1,0,1,1),
            new Array(1,1,1,0,1,0,0,0,1,1,1,0,0,1),
            new Array(1,1,1,0,1,1,1,0,0,1,1,1,0,1),
            new Array(1,1,0,0,1,1,1,1,0,1,1,0,0,1),
            new Array(1,1,0,1,1,1,1,1,0,1,0,0,1,1),
            new Array(1,1,0,0,0,0,0,0,0,1,0,1,1,1),
            new Array(1,1,1,1,1,1,1,1,0,0,0,1,1,1)
        );

        buildBoard();
        function buildBoard() {
            for (col = 0; col < numCol; col++) {
                for (row = 0; row < numRow; row++) {
                    _root.attachMovie("tile", "tile_" + col + "_" + row, _root.getNextHighestDepth());
                    theTile = eval("tile_" + col + "_" + row);
                    theTile._x = (col * 50);
                    theTile._y = (row * 50);
                    theTile.row = row;
                    theTile.col = col;
                    tileType = board[row][col];
                    theTile.gotoAndStop(tileName[tileType]);
                    tiles.push(theTile);
                }
            }
        }

        init();
        function init() {
            onEnterFrame = function() {
                counter += 1;
                if (spawned < amount && counter > 50) {
                    min = _root.attachMovie("minion", "minion", _root.getNextHighestDepth());
                    min._x = tile_4_0._x + 25;
                    min._y = tile_4_0._y + 25;
                    min.health = 100;
                    choose = Math.round(Math.random());
                    if (choose == 0) {
                        min.waypointX = [tile_4_1._x + 25, tile_3_1._x + 25, tile_3_2._x + 25,
                            tile_3_6._x + 25, tile_2_6._x + 25, tile_2_8._x + 25,
                            tile_8_8._x + 25, tile_8_9._x + 25, tile_10_9._x + 25,
                            tile_10_7._x + 25, tile_11_7._x + 25, tile_11_6._x + 25,
                            tile_12_6._x + 25, tile_12_4._x + 25, tile_11_4._x + 25,
                            tile_11_2._x + 25, tile_10_2._x + 25, tile_10_0._x + 25];
                        min.waypointY = [tile_4_1._y + 25, tile_3_1._y + 25, tile_3_2._y + 25,
                            tile_3_6._y + 25, tile_2_6._y + 25, tile_2_8._y + 25,
                            tile_8_8._y + 25, tile_8_9._y + 25, tile_10_9._y + 25,
                            tile_10_7._y + 25, tile_11_7._y + 25, tile_11_6._y + 25,
                            tile_12_6._y + 25, tile_12_4._y + 25, tile_11_4._y + 25,
                            tile_11_2._y + 25, tile_10_2._y + 25, tile_10_0._y + 25];
                    } else if (choose == 1) {
                        min.waypointX = [tile_4_1._x + 25, tile_3_1._x + 25, tile_3_2._x + 25,
                            tile_3_3._x + 25, tile_5_3._x + 25, tile_5_4._x + 25,
                            tile_7_4._x + 25, tile_7_5._x + 25, tile_8_5._x + 25,
                            tile_8_8._x + 25, tile_8_9._x + 25, tile_10_9._x + 25,
                            tile_10_7._x + 25, tile_11_7._x + 25, tile_11_6._x + 25,
                            tile_12_6._x + 25, tile_12_4._x + 25, tile_11_4._x + 25,
                            tile_11_2._x + 25, tile_10_2._x + 25, tile_10_0._x + 25];
                        min.waypointY = [tile_4_1._y + 25, tile_3_1._y + 25, tile_3_2._y + 25,
                            tile_3_3._y + 25, tile_5_3._y + 25, tile_5_4._y + 25,
                            tile_7_4._y + 25, tile_7_5._y + 25, tile_8_5._y + 25,
                            tile_8_8._y + 25, tile_8_9._y + 25, tile_10_9._y + 25,
                            tile_10_7._y + 25, tile_11_7._y + 25, tile_11_6._y + 25,
                            tile_12_6._y + 25, tile_12_4._y + 25, tile_11_4._y + 25,
                            tile_11_2._y + 25, tile_10_2._y + 25, tile_10_0._y + 25];
                    }
                    min.i = 0;
                    counter = 0;
                    spawned += 1;
                    min.onEnterFrame = function() {
                        dx = this.waypointX[this.i] - this._x;
                        dy = this.waypointY[this.i] - this._y;
                        radians = Math.atan2(dy, dx);
                        degrees = radians * 180 / Math.PI;
                        xspeed = Math.cos(radians);
                        yspeed = Math.sin(radians);
                        this._x += xspeed;
                        this._y += yspeed;
                        if (this._x == this.waypointX[this.i] && this._y == this.waypointY[this.i]) {
                            this.i++;
                        }
                        if (this._x == tile_10_0._x + 25 && this._y == tile_10_0._y + 25) {
                            this.removeMovieClip();
                            dead += 1;
                        }
                    }
                }
                if (dead >= amount) {
                    dead = 0;
                    level += 1;
                    amount = level * 20;
                    spawned = 0;
                }
            }
            btnM.onRelease = function() {
                create = true;
            }
        }

        game.onEnterFrame = function() {
        }

    I can actually accomplish this, but only once: I am able to make a turret, drag it over to a tile, and have it attach itself to the tile. No problem. The issue is that I cannot do this multiple times. Please help.
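
    One likely culprit, hedged since the turret code itself isn't shown: if every placement reuses the same instance name (the way the minion code above reuses "minion"), each new attachMovie replaces the previous clip. The usual fix is a fresh, uniquely named instance per placement, snapped to a tile by integer division. A generic sketch of the idea:

        // On drop: work out which tile was hit and create a brand-new turret
        // object for it, rather than reusing a single named instance.
        #include <vector>

        struct Turret { int col, row; };

        const int TILE = 50;           // matches colOffset/rowOffset above
        std::vector<Turret> turrets;   // one entry per placed turret

        void PlaceTurret(float dropX, float dropY) {
            int col = static_cast<int>(dropX) / TILE;   // tile under the cursor
            int row = static_cast<int>(dropY) / TILE;
            // Flash equivalent: attachMovie("turret", "turret_" + turrets.size(), ...)
            turrets.push_back({col, row});   // keep them all, not just the last
        }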

    Read the article

  • How can I locate empty space next to polygon regions?

    - by Stephen
    Let's say I have the following area in a top-down map: The circle is the player, the black square is an obstacle, and the grey polygons with red borders are walk-able areas that will be used as a navigation mesh for enemies. Obstacles and grey polygons are always convex. The grey regions were defined using an algorithm when the world was generated at runtime. Notice the little white column. I need to figure out where any empty space like this is, if at all, after the algorithm builds the grey regions, so that I can fill the space with another region. Basically what I'm hoping for is an algorithm that can detect empty space next to a polygon.

    Read the article

  • How to read BC4 texture in GLSL?

    - by Question
    I'm supposed to receive a texture in BC4 format. In OpenGL, I guess this format is called GL_COMPRESSED_RED_RGTC1. The texture is not really a "texture", more like data to be handled in the fragment shader. Usually, to get colors from a texture within a fragment shader, I do:

        uniform sampler2D TextureUnit;

        void main()
        {
            vec4 TexColor = texture2D(TextureUnit, vec2(gl_TexCoord[0]));
            (...)

    the result of which is obviously a vec4, for RGBA. But now I'm supposed to receive a single float from the read. I'm struggling to understand how this is achieved. Should I still use a texture sampler and expect the value to be in a specific position (for example, within TexColor.r?), or should I use something else?
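
    For what it's worth, single-channel compressed formats behave like any other texture at sampling time: RGTC1/BC4 decompresses to one red channel, so a normal sampler2D works and the value shows up in TexColor.r (with .g/.b read as 0 and .a as 1). A sketch of the upload side, assuming a GL 3.0+ header/loader and pre-compressed BC4 blocks:

        // width/height/dataSize/bc4Blocks are assumed to come from your loader.
        void UploadBC4(int width, int height, int dataSize, const void* bc4Blocks) {
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RED_RGTC1,
                                   width, height, 0, dataSize, bc4Blocks);
        }
        // In the fragment shader, just read the red channel:
        //     float value = texture2D(TextureUnit, vec2(gl_TexCoord[0])).r;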

    Read the article

  • Making entire scene fade to grayscale

    - by Fibericon
    When the player loses all of their lives, I want the entire game screen to go grayscale, but not stop updating immediately. I'd also prefer it fade to grayscale instead of suddenly losing all color. Everything I've found so far is either about taking a screenshot and making it grayscale, or making a specific texture grayscale. Is there a way to change the entire playing field and all objects within it to grayscale without iterating through everything?
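
    A sketch of the usual approach: keep updating and drawing the game as normal, but draw the final image through a post-process shader that lerps each pixel toward its luminance, driving the lerp factor from 0 to 1 over a second or two. That touches the whole screen in one pass, with no per-object iteration. (GLSL shown for illustration; in XNA the same math would live in an HLSL Effect applied via SpriteBatch.)

        // Hypothetical post-process fragment shader, stored as a C string.
        const char* kGrayFadeShader = R"(
            uniform sampler2D scene;      // the rendered frame
            uniform float fadeAmount;     // 0 = full color, 1 = fully gray
            varying vec2 uv;
            void main() {
                vec4 c = texture2D(scene, uv);
                float gray = dot(c.rgb, vec3(0.299, 0.587, 0.114)); // luminance
                gl_FragColor = vec4(mix(c.rgb, vec3(gray), fadeAmount), c.a);
            }
        )";
        // Each frame after the player dies: fadeAmount += dt / fadeDuration;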

    Read the article

  • How should I unbind and delete OpenAL buffers?

    - by Joe Wreschnig
    I'm using OpenAL to play sounds. I'm trying to implement a fire-and-forget play function that takes a buffer ID, assigns it to a source from a pool I have previously allocated, and plays it. However, there is a problem with object lifetimes. In OpenGL, delete functions either automatically unbind things (e.g. textures) or automatically delete the thing when it is eventually unbound (e.g. shaders), so it's usually easy to manage deletion. However, alDeleteBuffers instead simply fails with AL_INVALID_OPERATION if the buffer is still bound to a source. Is there an idiomatic way to "delete" OpenAL buffers that allows them to finish playing, and then automatically unbinds and really deletes them? Do I need to tie buffer management more deeply into the source pool (e.g. deleting a buffer requires checking all the allocated sources as well)? Similarly, is there an idiomatic way to unbind (but not delete) buffers when they are finished playing? It would be nice if, when I was looking for a free source, I only needed to see whether a buffer was attached at all, and not bother checking the source state. (I'm using C++, although approaches for C are also fine. Approaches assuming a GC'd language and using finalizers are probably not applicable.)
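
    One common pattern, sketched under the assumption of a fixed source pool: never delete buffers directly; each frame, sweep the pool, detach buffers from sources that have stopped, and retry any pending deletes once nothing references them.

        #include <AL/al.h>
        #include <vector>

        std::vector<ALuint> sourcePool;    // pre-allocated sources
        std::vector<ALuint> deadBuffers;   // buffers waiting to be freed

        void SweepSources() {
            for (ALuint src : sourcePool) {
                ALint state = 0;
                alGetSourcei(src, AL_SOURCE_STATE, &state);
                if (state == AL_STOPPED)
                    alSourcei(src, AL_BUFFER, 0);   // unbind; source is free again
            }
            // Retry pending deletes; alDeleteBuffers succeeds once unbound everywhere.
            for (auto it = deadBuffers.begin(); it != deadBuffers.end();) {
                alGetError();                       // clear stale error state
                alDeleteBuffers(1, &*it);
                it = (alGetError() == AL_NO_ERROR) ? deadBuffers.erase(it) : it + 1;
            }
        }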

    Read the article

  • Unity3D - Projection matrix camera frustum

    - by MulletDevil
    I've used off-centre projection to create a custom projection matrix for my camera. When I run the game I can see the scene correctly in the game view, but in the editor view the camera frustum is not correct: it still shows the original frustum shape, not the new one. It also appears that Unity is using the original frustum for frustum culling and not the new one, as I can see objects being culled which are visible to the new frustum but would not be visible in the old one. Am I wrong in thinking that a custom projection matrix would alter the view frustum? Or am I missing something else?

    Read the article

  • converting a mouse click to a ray

    - by Will
    I have a perspective projection. When the user clicks on the screen, I want to compute the ray between the near and far planes that projects from the mouse point, so I can do some ray intersection code with my world. I am using my own matrix, vector and ray classes, and they all work as expected. However, when I try to convert the ray to world coordinates, my far always ends up as 0,0,0, and so my ray goes from the mouse click to the centre of the object space rather than through it. (The x and y coordinates of near and far are identical; they differ only in the z coordinates, where they are negatives of each other.)

        GLint vp[4];
        glGetIntegerv(GL_VIEWPORT, vp);
        matrix_t mv, p;
        glGetFloatv(GL_MODELVIEW_MATRIX, mv.f);
        glGetFloatv(GL_PROJECTION_MATRIX, p.f);
        const matrix_t inv = (mv * p).inverse();
        const float unit_x = (2.0f * ((float)(x - vp[0]) / (vp[2] - vp[0]))) - 1.0f,
                    unit_y = 1.0f - (2.0f * ((float)(y - vp[1]) / (vp[3] - vp[1])));
        const vec_t near(vec_t(unit_x, unit_y, -1) * inv);
        const vec_t far(vec_t(unit_x, unit_y, 1) * inv);
        ray = ray_t(near, far - near);

    What have I got wrong? (How do you unproject the mouse point?)
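
    For comparison, a hedged sketch of a standard unproject: the usual culprit in code shaped like this is skipping the perspective divide; after multiplying by the inverse matrix the result is homogeneous, and x, y, z are only meaningful once divided by w. (Column-major float[16], as returned by glGetFloatv, is assumed here.)

        struct Vec4 { float x, y, z, w; };

        Vec4 Transform(const float m[16], Vec4 v) {   // column-major matrix * column vector
            return { m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w,
                     m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w,
                     m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w,
                     m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w };
        }

        Vec4 Unproject(const float invViewProj[16], float nx, float ny, float nz) {
            Vec4 v = Transform(invViewProj, {nx, ny, nz, 1.0f});
            v.x /= v.w; v.y /= v.w; v.z /= v.w;   // the perspective divide
            return v;
        }
        // near = Unproject(inv, unit_x, unit_y, -1);
        // far  = Unproject(inv, unit_x, unit_y,  1);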

    Read the article

  • When should a bullet texture be loaded in XNA?

    - by Bill
    I'm making a SpaceWar!-esque game using XNA. I want to limit my ships to 5 active bullets at any time. I have a Bullet DrawableGameComponent and a Ship DrawableGameComponent. My Ship has an array of 5 Bullet. What is the best way to manage the Bullet textures? Specifically, when should I be calling LoadTexture? Right now, my solution is to populate the Bullet array in the Ship's constructor, with LoadTexture being called in the Bullet constructor. The Bullet objects will be disabled/not visible except when they are active. Does the texture really need to be loaded once for each individual instance of the bullet object? This seems like a very processor-intensive operation. Note: This is a small-scale project, so I'm OK with not implementing a huge texture-management framework since there won't be more than half a dozen or so in the entire game. I'd still like to hear about scalable solutions for future applications, though.
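
    For what it's worth, XNA's ContentManager already caches by asset name, so repeated Content.Load calls for the same asset return the same Texture2D instance rather than reloading from disk. The more general, scalable pattern is a shared cache so every bullet references one texture; a generic sketch:

        #include <map>
        #include <memory>
        #include <string>

        struct Texture { /* pixel data / GPU handle */ };

        std::map<std::string, std::shared_ptr<Texture>> g_textureCache;

        std::shared_ptr<Texture> LoadTexture(const std::string& name) {
            auto it = g_textureCache.find(name);
            if (it != g_textureCache.end())
                return it->second;                   // already loaded: share it
            auto tex = std::make_shared<Texture>();  // decode from disk here
            g_textureCache[name] = tex;
            return tex;   // all five bullets end up pointing at one texture
        }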

    Read the article

  • How to move the object around the screen

    - by Abhishek
    I am trying to move the object around the screen. I tried this code:

        -(void) move
        {
            CGFloat upperLimit = mWinSize.height - (mGunda.contentSize.height / 2.0);
            CGFloat upperLimit1 = mWinSize.height;
            CGFloat lowerLimit = (mGunda.contentSize.height / 2.0);
            CGFloat RightLimit = mWinSize.width - (mGunda.contentSize.width / 2.0);
            CGFloat Right = (mGunda.contentSize.width / 2.0);

            if (mImageGoingUpward)
            {
                mGunda.position = ccp(mGunda.position.x, mGunda.position.y + 5);
                if (mGunda.position.y >= upperLimit)
                {
                    mImageGoingUpward = NO;
                    mHori = NO;
                }
            }
            else
            {
                mGunda.position = ccp(mGunda.position.x, mGunda.position.y - 5);
                if (mGunda.position.y <= lowerLimit)
                {
                    mGunda.position = ccp(mGunda.position.x + 5, lowerLimit);
                }
                if (mGunda.position.x >= RightLimit)
                {
                    mGunda.position = ccp(mGunda.position.x, mGunda.position.y + 10);
                    mHori = YES;
                }
                if (mHori)
                {
                    if (mGunda.position.y >= upperLimit)
                    {
                        mGunda.position = ccp(mGunda.position.x - 5, mGunda.position.y);
                    }
                }
            }
        }

    It moves the object from bottom to top, top to bottom, bottom to right, and from the right edge up to the top right of the screen. Here is the problem: it does not move from the top right back to the left side of the screen, so the full circuit around the screen never happens. How can I do this?

    Read the article

  • Normal map applied as diffuse textures looks wrong

    - by KaiserJohaan
    Diffuse textures work fine, but I am having problems with normal maps, so I tried applying the normal map as the diffuse map in my fragment shader to check that everything is OK. I commented out my normal map code and just set the diffuse map to the normal map, and I get this: http://postimg.org/image/j9gudjl7r/ Looks like a smurf! This is the actual normal map of the main body: http://postimg.org/image/sbkyr6fg9/ Here is my fragment shader; notice I commented out the normal map code so I could debug the normal map as a diffuse texture:

        #version 330

        layout(std140) uniform;

        const int MAX_LIGHTS = 8;

        struct Light
        {
            vec4 mLightColor;
            vec4 mLightPosition;
            vec4 mLightDirection;

            int mLightType;
            float mLightIntensity;
            float mLightRadius;
            float mMaxDistance;
        };

        uniform UnifLighting
        {
            vec4 mGamma;
            vec3 mViewDirection;
            int mNumLights;

            Light mLights[MAX_LIGHTS];
        } Lighting;

        uniform UnifMaterial
        {
            vec4 mDiffuseColor;
            vec4 mAmbientColor;
            vec4 mSpecularColor;
            vec4 mEmissiveColor;

            bool mHasDiffuseTexture;
            bool mHasNormalTexture;
            bool mLightingEnabled;
            float mSpecularShininess;
        } Material;

        uniform sampler2D unifDiffuseTexture;
        uniform sampler2D unifNormalTexture;

        in vec3 frag_position;
        in vec3 frag_normal;
        in vec2 frag_texcoord;
        in vec3 frag_tangent;
        in vec3 frag_bitangent;

        out vec4 finalColor;

        void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm)
        {
            vec3 viewDirection = normalize(Lighting.mViewDirection);
            vec3 halfAngle = normalize(dirToLight + viewDirection);

            float angleNormalHalf = acos(dot(halfAngle, normalize(normal)));
            float exponent = angleNormalHalf / Material.mSpecularShininess;
            exponent = -(exponent * exponent);

            gaussianTerm = exp(exponent);
        }

        vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal)
        {
            if (light.mLightType == 1)          // point light
            {
                vec3 positionDiff = light.mLightPosition.xyz - frag_position;
                float dist = max(length(positionDiff) - light.mLightRadius, 0);

                float attenuation = 1 / ((dist / light.mLightRadius + 1) * (dist / light.mLightRadius + 1));
                attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0);

                vec3 dirToLight = normalize(positionDiff);
                float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1);

                float gaussianTerm = 0.0;
                if (angleNormal > 0.0)
                    CalcGaussianSpecular(dirToLight, normal, gaussianTerm);

                return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) +
                       (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor);
            }
            else if (light.mLightType == 2)     // directional light
            {
                vec3 dirToLight = normalize(light.mLightDirection.xyz);
                float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1);

                float gaussianTerm = 0.0;
                if (angleNormal > 0.0)
                    CalcGaussianSpecular(dirToLight, normal, gaussianTerm);

                return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) +
                       (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor);
            }
            else if (light.mLightType == 4)     // ambient light
                return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor;
            else
                return vec4(0.0);
        }

        void main()
        {
            vec4 diffuseTexture = vec4(1.0);
            if (Material.mHasDiffuseTexture)
                diffuseTexture = texture(unifDiffuseTexture, frag_texcoord);

            vec3 normal = frag_normal;
            if (Material.mHasNormalTexture)
            {
                diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0);
                // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0);
                // mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal));
                // normal = tangentToWorldSpace * normalTangentSpace;
            }

            if (Material.mLightingEnabled)
            {
                vec4 accumLighting = vec4(0.0);

                for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++)
                    accumLighting += Material.mEmissiveColor * diffuseTexture +
                                     CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal);

                finalColor = pow(accumLighting, Lighting.mGamma);
            }
            else
            {
                finalColor = pow(diffuseTexture, Lighting.mGamma);
            }
        }

    Here is my wrapper around a texture:

        OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight,
                                     TextureFormat textureFormat, TextureType textureType, Logger& logger)
            : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType)
        {
            glGenTextures(1, &mTexture);
            CHECK_GL_ERROR(mLogger);
            glBindTexture(GL_TEXTURE_2D, mTexture);
            CHECK_GL_ERROR(mLogger);

            GLint glTextureFormat = (textureFormat == TextureFormat::TEXTURE_FORMAT_RGB  ? GL_RGB :
                                     textureFormat == TextureFormat::TEXTURE_FORMAT_RGBA ? GL_RGBA :
                                                                                           GL_RED);
            glTexImage2D(GL_TEXTURE_2D, 0, glTextureFormat, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]);
            CHECK_GL_ERROR(mLogger);

            glGenerateMipmap(GL_TEXTURE_2D);
            CHECK_GL_ERROR(mLogger);
            glBindTexture(GL_TEXTURE_2D, 0);
            CHECK_GL_ERROR(mLogger);
        }

        OpenGLTexture::~OpenGLTexture()
        {
            glDeleteBuffers(1, &mTexture);
            CHECK_GL_ERROR(mLogger);
        }

    And here is the sampler I create, which is shared between the diffuse and normal textures:

        // texture sampler setup
        glGenSamplers(1, &mTextureSampler);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy);
        CHECK_GL_ERROR(mLogger);

        glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE);
        CHECK_GL_ERROR(mLogger);
        glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL);
        CHECK_GL_ERROR(mLogger);

        glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler);
        CHECK_GL_ERROR(mLogger);
        glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler);
        CHECK_GL_ERROR(mLogger);

        SetAnisotropicFiltering(mCurrentAnisotropy);

    The diffuse textures look like they should, but the normal map looks weird. Why is this?
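
    Two hedged observations: a raw normal map drawn as a color is expected to look blue-ish, since RGB encodes XYZ packed into [0,1] and most normals point near +Z (heavy blue); and the constructor above falls back to GL_RED for anything that is not RGB/RGBA, which would mangle a multi-channel normal map if the loader reports an unexpected format. For reference, the usual tangent-space decode is exactly what the commented-out shader lines do:

        // Mirrors the commented-out code in the shader above.
        const char* kNormalDecodeSnippet = R"(
            vec3 n = texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0;
            mat3 tbn = mat3(normalize(frag_tangent),
                            normalize(frag_bitangent),
                            normalize(frag_normal));
            vec3 normal = normalize(tbn * n);   // tangent space -> world/view space
        )";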

    Read the article

  • Networking Client Server Packet logic (How they communicate)

    - by Trixmix
    I want to know the logic behind client-server communication through packets for a real-time game. For example: the server sends x packets, then the client receives x packets and processes them. Basically, what is the process for keeping the client and server in sync and able to receive and send packets? A more in-depth example of what I want to know, for the client: step 1, wait for a packet; step 2, read x packets; step 3, process x packets; step 4, send x packets; and so on. I need to know the very basic outline of the communication. The big questions are: 1) Do I send and read packets all at one time, i.e. loop through the incoming packet list and read them all, or one per server loop, or what? 2) In what order should I do things, i.e. first receive, then read, then process, then send, etc.? 3) As asked above, a step-by-step outline of what the server/client should do. Thanks!
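
    A minimal per-tick ordering that many games use, sketched with hypothetical helpers:

        struct Packet { /* serialized commands or state */ };
        bool ReceiveNonBlocking(Packet& out);   // false when the queue is empty
        void ApplyInput(const Packet& p);
        void Simulate(float dt);
        void BroadcastSnapshot();

        void ServerTick(float dt) {
            Packet p;
            while (ReceiveNonBlocking(p))   // 1) drain everything that arrived
                ApplyInput(p);
            Simulate(dt);                   // 2) advance the authoritative world once
            BroadcastSnapshot();            // 3) one state update per client per tick
        }
        // The client mirrors this: drain incoming snapshots, reconcile its local
        // state, predict/simulate, then send its own input for the tick.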

    Read the article

  • What exactly is UV and UVW Mapping?

    - by Michael Stum
    Trying to understand some basic 3D concepts; at the moment I'm trying to figure out how textures actually work. I know that UV and UVW mapping are techniques that map 2D textures onto 3D objects; Wikipedia told me as much. I googled for explanations but only found tutorials that assumed I already know what it is. From my understanding, each 3D model is made out of points, and several points create a face? Does each point or face have a secondary coordinate that maps to an x/y position in the 2D texture? Or how does unwrapping manipulate the model? Also, what does the W in UVW really do? What does it offer over UV? As I understand it, W maps to the Z coordinate, but in what situation would I have different textures for the same X/Y and a different Z? Wouldn't the Z part be invisible? Or am I completely misunderstanding this?
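
    The short version of the usual answer, sketched as a data structure: UV coordinates live on vertices. Each vertex stores, besides its 3D position, a 2D coordinate into the texture, and the rasterizer interpolates those across each face; "unwrapping" is the process of choosing these coordinates without otherwise changing the model. The W coordinate only comes into play for 3D (volume) textures or special projections.

        struct Vertex {
            float x, y, z;   // position in model space
            float u, v;      // where this vertex samples the 2D texture, usually in [0,1]
        };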

    Read the article

  • Implement 2x speed in tower of defense type game

    - by Siddharth
    I am currently developing a tower defense game and I want to implement a 2x speed feature. The game usually runs at 1x, the normal speed of the game. Here is what 1x and 2x mean: 1x is the normal speed of the game; at 2x, the game objects move with double speed, so the user experiences faster gameplay. I want to implement such functionality for my game. The functionality I want is in the game Medieval Castle, available on the market: https://play.google.com/store/apps/details?id=com.nova.root&feature=search_result#?t=W251bGwsMSwxLDEsImNvbS5ub3ZhLnJvb3QiXQ.. The screenshot there also shows the 1x and 2x buttons in that game. I think that for 2x speed I have to increase the speed of each object in the game. Can anyone help with what to do for this implementation? An idea alone would be enough for me.
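
    A minimal sketch of the usual implementation: scale the elapsed time once, and let everything advance in "game seconds", so no individual object needs to know about the button.

        float timeScale = 1.0f;   // 1.0f = normal speed, 2.0f = double speed

        void UpdateGame(float realDtSeconds) {
            float dt = realDtSeconds * timeScale;
            // every mover:  position += velocity * dt;
            // every timer:  cooldown -= dt;  spawnTimer -= dt;  etc.
        }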

    Read the article

  • Mobile 3D engine renders alpha as full-object transparency

    - by Nils Munch
    I am running an iOS project using the isgl3d framework for showing POD files. I have a stylish car with 0.5-alpha windows that I wish to render on a camera background, seeking some augmented reality goodness. The alpha on the windows looks okay, but when I add the object, I notice that it renders the entire object transparently where the windows are, including the interior of the car (in the example, the keyboard can be seen through the dashboard, seats and so on; these should be solid). The car interior is a separate object with alpha 1.0. I would rather not show a "ghost car" in my project, but I haven't found a way around this. Has anyone encountered the same issue, and eventually reached a solution?
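
    A common cause of this, hedged since it depends on how isgl3d schedules draws: transparent surfaces are typically rendered with depth writes disabled, so if the glass is drawn before (or interleaved with) the interior, everything behind it blends with the background. The usual fix, in raw GL terms, is to draw all opaque parts first and the transparent windows last:

        // Hypothetical helpers; the point is the ordering and the depth mask.
        void DrawOpaqueParts();        // body + interior, alpha 1.0
        void DrawTransparentParts();   // windows, alpha 0.5, sorted back-to-front

        void RenderCar() {
            glEnable(GL_DEPTH_TEST);
            glDepthMask(GL_TRUE);
            DrawOpaqueParts();                    // fills the depth buffer

            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDepthMask(GL_FALSE);                // don't write depth for glass
            DrawTransparentParts();               // interior now occludes correctly
            glDepthMask(GL_TRUE);
        }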

    Read the article

  • Beat detection and FFT

    - by Quincy
    So I am working on a platformer game which includes music with beat detection. I am currently using a simple approach: if the energy stored in the history buffer is smaller than the current energy, there is a beat. The problem with this is that, of course, with songs like rock songs where you have a pretty steady amplitude, this isn't going to work. So I looked further and found algorithms that split the sound into multiple bands using an FFT. I then found this: http://en.literateprograms.org/Cooley-Tukey_FFT_algorithm_(C) The only problem I'm having is that I am quite new to audio and I have no idea how to use that to split the signal up into multiple signals. So my question is: how do you use an FFT to split a signal into multiple bands? Also, for those interested, this is my algorithm in C#:

        // C = threshold, N = size of history buffer / 1024
        public void PlaceBeatMarkers(float C, int N)
        {
            List<float> instantEnergyList = new List<float>();

            short[] samples = soundData.Samples;
            float timePerSample = 1 / (float)soundData.SampleRate;
            int sampleIndex = 0;
            int nextSamples = 1024;

            // Calculate instant energy for every 1024 samples.
            while (sampleIndex + nextSamples < samples.Length)
            {
                float instantEnergy = 0;
                for (int i = 0; i < nextSamples; i++)
                {
                    instantEnergy += Math.Abs((float)samples[sampleIndex + i]);
                }
                instantEnergy /= nextSamples;
                instantEnergyList.Add(instantEnergy);

                if (sampleIndex + nextSamples >= samples.Length)
                    nextSamples = samples.Length - sampleIndex - 1;
                sampleIndex += nextSamples;
            }

            int index = N;
            int numInBuffer = index;
            float historyBuffer = 0;

            // Fill the history buffer with n * instant energy
            for (int i = 0; i < index; i++)
            {
                historyBuffer += instantEnergyList[i];
            }

            // If instantEnergy / samples in buffer < instantEnergy for the next sample, add a beat marker.
            while (index + 1 < instantEnergyList.Count)
            {
                if (instantEnergyList[index + 1] > (historyBuffer / numInBuffer) * C)
                    beatMarkers.Add((index + 1) * 1024 * timePerSample);

                historyBuffer -= instantEnergyList[index - numInBuffer];
                historyBuffer += instantEnergyList[index + 1];
                index++;
            }
        }
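
    On the band-splitting question, a hedged sketch of what is usually done once you have FFT output: bin k of an N-point FFT covers frequency k * sampleRate / N, so a "band" is just the average magnitude over a range of bins, and the beat detector above can then run per band (each with its own history buffer). Logarithmically spaced bands tend to match music better than equal-width ones. (C++ for illustration:)

        #include <cmath>
        #include <complex>
        #include <vector>

        std::vector<float> BandEnergies(const std::vector<std::complex<float>>& spectrum,
                                        int numBands) {
            const std::size_t half = spectrum.size() / 2;   // unique bins for a real input
            std::vector<float> bands(numBands, 0.0f);
            for (int b = 0; b < numBands; ++b) {
                // log-spaced band edges across the usable bins
                auto lo = static_cast<std::size_t>(std::pow((double)half, (double)b / numBands));
                auto hi = static_cast<std::size_t>(std::pow((double)half, (double)(b + 1) / numBands));
                if (hi <= lo) hi = lo + 1;
                for (std::size_t k = lo; k < hi && k < half; ++k)
                    bands[b] += std::abs(spectrum[k]);      // magnitude of bin k
                bands[b] /= static_cast<float>(hi - lo);
            }
            return bands;   // feed each band through its own beat detector
        }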

    Read the article

  • Box2D networking

    - by spacevillain
    I am trying to make a simple sync between two Box2D rooms, where you can drag boxes using the mouse. Every time a player clicks (and holds the mouse down on) a box, I try to send the joint parameters to the server, and the server sends them to the other clients. When mouseup occurs, I send a command to delete the joint. The problem is that the sync breaks too often. Is my way radically wrong, or does it just need some tweaks? http://www.youtube.com/watch?v=eTN2Gwj6_Lc Source code: https://github.com/agentcooper/Box2d-networking

    Read the article

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
    I have a hand-rolled camera class that converts yaw, pitch, and roll angles into forward, side, and up vectors suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix to move about the 3D world just fine. However, I am having trouble when using this camera class (and the associated model-view matrix) when trying to perform directional lighting in my vertex shader. The problem is that the light direction, (0, 1, 0) for example, is relative to where the camera is looking, and not to the actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis, the ground is lit correctly. However, if I point the camera straight at the ground, then it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector, which is perpendicular to the ground's normal vector. I tried computing the normal matrix without taking the camera's model-view into account, but then none of my objects were rotated correctly. Sorry if this sounds vague. I suspect there is a straightforward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader. For reference, here is pseudo code for my rendering loop:

        pMatrix = new Matrix();
        pMatrix = makePerspective(...)

        mvMatrix = new Matrix()
        camera.apply(mvMatrix);  // Calls gluLookAt

        // Move the object into position.
        mvMatrix.translatev(position);
        mvMatrix.rotatef(rotation.x, 1, 0, 0);
        mvMatrix.rotatef(rotation.y, 0, 1, 0);
        mvMatrix.rotatef(rotation.z, 0, 0, 1);

        var nMatrix = new Matrix();
        nMatrix.set(mvMatrix.get().getInverse().getTranspose());

        // Set vertex shader uniforms.
        gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, new Float32Array(pMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, new Float32Array(mvMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false, new Float32Array(nMatrix.getFlattened()));

        // ...
        gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

    And the corresponding vertex shader:

        // Attributes
        attribute vec3 aVertexPosition;
        attribute vec4 aVertexColor;
        attribute vec3 aVertexNormal;

        // Uniforms
        uniform mat4 uMVMatrix;
        uniform mat4 uNMatrix;
        uniform mat4 uPMatrix;

        // Varyings
        varying vec4 vColor;

        // Constants
        const vec3 LIGHT_DIRECTION = vec3(0, 1, 0);  // Opposite direction of photons.
        const vec4 AMBIENT_COLOR = vec4(0.2, 0.2, 0.2, 1.0);

        float ComputeLighting()
        {
            vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0);
            transformedNormal = uNMatrix * transformedNormal;
            float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION));
            return max(base, 0.0);
        }

        void main(void)
        {
            gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
            float lightWeight = ComputeLighting();
            vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR;
        }

    Note that I am using WebGL, so if the answer is "use glFixThisProblem(...)", any pointers on how to re-implement that in WebGL would be appreciated.
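
    One hedged observation that may be the whole story: the inverse-transpose of the model-view takes normals into eye space, so a light direction that is meant to be fixed in the world has to be rotated into eye space too (by the view rotation), instead of living as a constant in the shader; otherwise the light effectively travels with the camera. A sketch of the CPU side:

        #include <array>

        using Vec3 = std::array<float, 3>;
        using Mat3 = std::array<std::array<float, 3>, 3>;  // rotation part of the view matrix, row-major

        // Directions ignore translation, so only the 3x3 part is applied. Upload
        // the result as a uniform (e.g. uEyeLightDirection) and use it in place
        // of the LIGHT_DIRECTION constant inside ComputeLighting().
        Vec3 ToEyeSpace(const Mat3& viewRot, const Vec3& worldLightDir) {
            Vec3 out{0.0f, 0.0f, 0.0f};
            for (int r = 0; r < 3; ++r)
                for (int c = 0; c < 3; ++c)
                    out[r] += viewRot[r][c] * worldLightDir[c];
            return out;
        }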

    Read the article

  • Moving around/avoiding obstacles

    - by János Harsányi
    I would like to write a "game" where you can place an obstacle (red), and then the black dot tries to avoid it and get to the green target. I'm using a very easy way to avoid it: if the black dot is close to the red, it changes its direction, moves for a while, and then moves forward to the green point again. How could I create a "smooth" path for the computer-controlled "player"? Edit: Smoothness is not the main point; the point is to avoid the red blocking "wall" rather than crash into it and only then steer around it. How could I implement some pathfinding algorithm if I just have, basically, 3 points? (And would it make things much more complicated if you could place multiple obstacles?)

    Read the article

  • Setting up collision using a tilemap and cocos2d

    - by James
    I'm building my first platformer using cocos2d and a tilemap. I'm having trouble coming up with a decent way of determining if the character is colliding with an object. More specifically, in which direction is the character colliding with an object. Following the tutorial here, I have made a separate "meta" layer of collidable tiles. The problem is that unless the character is in the tile, you can't detect the collision. Also, there's no way of telling WHERE the collision is occurring. The best solution would be one that could tell me if a character is up against a wall, or walking on top of a platform. However, I can't seem to figure out a good technique for this.
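
    A common technique, sketched with hypothetical names: resolve the two axes separately. Move on X, test the overlapped tiles in the meta layer and push back out; then do the same on Y. Whichever axis you had to correct tells you the side of the contact: a Y correction while falling means standing on a platform, while an X correction means pressed against a wall.

        struct AABB { float x, y, w, h; };

        bool OverlapsSolidTile(const AABB& box);   // query the "meta" layer here

        void MoveAndCollide(AABB& player, float dx, float dy) {
            player.x += dx;
            if (OverlapsSolidTile(player))
                player.x -= dx;        // corrected on X: touching a wall
            player.y += dy;
            if (OverlapsSolidTile(player))
                player.y -= dy;        // corrected on Y: floor (or ceiling)
        }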

    Read the article

  • draw fog of war using shaders

    - by lezebulon
    I am making an RTS game, and I'd like some advice on how best to render fog of war, given what I'm already doing. You can imagine this game as a classic RTS like Age of Empires 2, where the fog of war is basically handled by a 2D array telling whether a given "tile" is explored or not. The specific things to consider here are: 1) I'm only doing a few draw calls to draw the whole screen, using shaders, and I'm not drawing "tile by tile" in a 2D loop. 2) The whole map is much bigger than the screen, and the screen can move every frame or so. In that case, how could I draw the fog of war? I have no issue maintaining a CPU-side 2D array giving the fog of war for each tile, but what would be the best way to actually display it dynamically? Thanks!
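
    A hedged sketch of one standard answer that fits the "few draw calls" constraint: mirror the CPU-side explored array into a small one-channel texture (one texel per tile), and have the terrain fragment shader sample it to darken unexplored areas; only changed tiles ever need re-uploading (glTexSubImage2D).

        // Assumes a GL loader; 'explored' holds 0 (hidden) or 255 (explored) per tile.
        #include <GL/gl.h>
        #include <vector>

        GLuint MakeFogTexture(const std::vector<unsigned char>& explored,
                              int tilesW, int tilesH) {
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, tilesW, tilesH, 0,
                         GL_LUMINANCE, GL_UNSIGNED_BYTE, explored.data());
            // GL_LINEAR gives soft fog edges for free (use GL_RED on core profiles).
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            return tex;
        }
        // Fragment shader side (illustrative):
        //     float seen = texture2D(fogTex, worldPos / mapSize).r;
        //     gl_FragColor = sceneColor * mix(0.25, 1.0, seen);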

    Read the article

  • Velocity control of the player, why doesn't this work?

    - by Dominic Grenier
    I have the following code inside a while True loop:

        if abs(playerx) < MAXSPEED:
            if moveLeft:
                playerx -= 1
            if moveRight:
                playerx += 1
        if abs(playery) < MAXSPEED:
            if moveDown:
                playery += 1
            if moveUp:
                playery -= 1

        if moveLeft == False and abs(playerx) > 0:
            playerx += 1
        if moveRight == False and abs(playerx) > 0:
            playerx -= 1
        if moveUp == False and abs(playery) > 0:
            playery += 1
        if moveDown == False and abs(playery) > 0:
            playery -= 1

        player.x += playerx
        player.y += playery

        if player.left < 0 or player.right > 1000:
            player.x -= playerx
        if player.top < 0 or player.bottom > 600:
            player.y -= playery

    The intended result is that while an arrow key is pressed, playerx or playery increments by one at every iteration until it reaches MAXSPEED, and then stays at MAXSPEED; and that when the player stops pressing that arrow key, the speed decreases until it reaches 0. To me, this code explicitly says exactly that. But what actually happens is that playerx or playery keeps incrementing regardless of MAXSPEED, and continues moving even after the player stops pressing the arrow key. I keep rereading, but I'm completely baffled by this weird behavior. Any insights? Thanks.
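
    For contrast, a hedged sketch of a clamp-based pattern that cannot overshoot: compute the new velocity from the input flags, then hard-cap it in both directions, so no interaction between the accelerate and decelerate branches can push past the limit. (C++ for illustration; the original is Python.)

        #include <algorithm>

        const float MAXSPEED = 5.0f;

        float StepVelocity(float v, bool moveNeg, bool movePos) {
            if (moveNeg)       v -= 1.0f;
            else if (movePos)  v += 1.0f;
            else if (v > 0.0f) v -= 1.0f;   // decelerate toward rest...
            else if (v < 0.0f) v += 1.0f;   // ...from either direction
            return std::clamp(v, -MAXSPEED, MAXSPEED);   // can never exceed the cap
        }
        // playerx = StepVelocity(playerx, moveLeft, moveRight);
        // playery = StepVelocity(playery, moveUp, moveDown);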

    Read the article

  • How should I invoke a physics engine?

    - by ymfoi
    I'm new to writing games. I'm planning to write a 2D battle game which may require a physics engine. Suppose I've written one: how can I combine it with the main routine of my game? Should I attach it directly to the graphics render routine, or put it in an individual thread? I've spent much time looking for some common approach, but found nothing. So can you share some basic ideas for me, a newbie? Thanks! P.S. There are many other problems I would have to deal with if I chose to start a separate thread for the physics engine (locking, for example), while from my intuition, I guess I'd better separate the render routine and the physics engine.
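
    The common single-threaded answer, sketched: treat the physics engine as a function called from the main loop at a fixed rate, with rendering free-running; an accumulator keeps the simulation deterministic without any second thread or locking.

        // Names are illustrative.
        void PhysicsStep(float dt);   // advance the physics world by dt
        void Render();                // draw the current state

        const float FIXED_DT = 1.0f / 60.0f;   // physics tick length
        float accumulator = 0.0f;

        void MainLoopIteration(float frameSeconds) {
            accumulator += frameSeconds;
            while (accumulator >= FIXED_DT) {
                PhysicsStep(FIXED_DT);         // fixed-size, deterministic steps
                accumulator -= FIXED_DT;
            }
            Render();                          // render as often as possible
        }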

    Read the article
