Search Results

Search found 32114 results on 1285 pages for 'general development'.


  • What's wrong with this turn to face algorithm?

    - by Chan
    I implemented a torpedo object that chases a rotating planet. Specifically, it turns toward the planet each update. Initially my implementation was: void move() { vector3<float> to_target = target - get_position(); to_target.normalize(); position += (to_target * speed); } which works perfectly for a torpedo that is a solid sphere. Now my torpedo is actually a model, which has a forward vector, so using this method looks odd because it doesn't actually turn toward the target but jumps toward it. So I revised it a bit to get: double get_rotation_angle(vector3<float> u, vector3<float> v) const { u.normalize(); v.normalize(); double cosine_theta = u.dot(v); // domain of arccosine is [-1, 1] if (cosine_theta > 1) { cosine_theta = 1; } if (cosine_theta < -1) { cosine_theta = -1; } return math3d::to_degree(acos(cosine_theta)); } vector3<float> get_rotation_axis(vector3<float> u, vector3<float> v) const { u.normalize(); v.normalize(); // fix linear case if (u == v || u == -v) { v[0] += 0.05; v[1] += 0.0; v[2] += 0.05; v.normalize(); } vector3<float> axis = u.cross(v); return axis.normal(); } void turn_to_face() { vector3<float> to_target = (target - position); vector3<float> axis = get_rotation_axis(get_forward(), to_target); double angle = get_rotation_angle(get_forward(), to_target); double distance = math3d::distance(position, target); gl_matrix_mode(GL_MODELVIEW); gl_push_matrix(); { gl_load_identity(); gl_translate_f(position.get_x(), position.get_y(), position.get_z()); gl_rotate_f(angle, axis.get_x(), axis.get_y(), axis.get_z()); gl_get_float_v(GL_MODELVIEW_MATRIX, OM); } gl_pop_matrix(); move(); } void move() { vector3<float> to_target = target - get_position(); to_target.normalize(); position += (get_forward() * speed); } The logic is simple: I find the rotation axis with a cross product and the angle to rotate with a dot product, then turn toward the target position each update. Unfortunately, it looks extremely odd, since the rotation happens so fast that the torpedo constantly turns back and forth. The forward vector for the torpedo comes from the ModelView matrix, the third column A: MODELVIEW MATRIX -------------------------------------------------- R U A T -------------------------------------------------- 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 -------------------------------------------------- Any suggestion or idea would be greatly appreciated.
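
    A common fix for the back-and-forth snapping described above is to cap how far the torpedo may rotate in a single update (a maximum turn rate), rather than applying the full angle at once. The following is only a rough sketch of that idea, not code from the article; Vec3 and maxTurnDegrees are stand-ins for the poster's vector3<float> and whatever turn rate fits the game.

        #include <algorithm>
        #include <cmath>
        #include <cstdio>

        struct Vec3 {
            float x, y, z;
            Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
            float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
            Vec3 cross(const Vec3& o) const {
                return {y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x};
            }
            Vec3 normalized() const {
                float len = std::sqrt(dot(*this));
                return {x / len, y / len, z / len};
            }
        };

        // Angle (in degrees) to rotate this frame: the full angle to the target,
        // but never more than maxTurnDegrees. Rotating by this clamped amount each
        // update makes the model turn smoothly instead of snapping past the target.
        float clampedTurnAngle(const Vec3& forward, const Vec3& position,
                               const Vec3& target, float maxTurnDegrees)
        {
            Vec3 f = forward.normalized();
            Vec3 toTarget = (target - position).normalized();
            float c = std::max(-1.0f, std::min(1.0f, f.dot(toTarget)));  // keep acos in range
            float fullAngle = std::acos(c) * 180.0f / 3.14159265f;
            return std::min(fullAngle, maxTurnDegrees);
        }

        int main() {
            Vec3 forward{0, 0, 1}, position{0, 0, 0}, target{10, 0, -10};
            // Full angle is 135 degrees, but only 3 degrees are applied this frame.
            std::printf("%.2f\n", clampedTurnAngle(forward, position, target, 3.0f));
        }

    The rotation axis would still come from forward.cross(toTarget), as in the question's get_rotation_axis(); the only change is that the per-frame angle is limited, for example to turnRate * dt.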

    Read the article

  • 2D metaball liquid effect - how to feed output of one rendering pass as input to another shader

    - by Guye Incognito
    I'm attempting to make a shader for a Unity3D web project. I want to implement something like in the great answer by DMGregory in this question, in order to achieve a final look something like this: metaballs with specular and shading. The steps to make this shader are: 1. Convert the feathered blobs into a heightmap. 2. Generate a normalmap from the heightmap. 3. Feed the normal map and height map into a standard Unity shader, for instance Transparent Parallax Specular. I pretty much have all the pieces I need assembled, but I am new to shaders and need help putting them together. I can generate a heightmap from the blobs using some fragment shader code I wrote (I'm just using the red channel here because I don't know if you can access the brightness): half4 frag (v2f i) : COLOR{ half4 texcol,finalColor; texcol = tex2D (_MainTex, i.uv); finalColor=_MyColor; if(texcol.r<_botmcut) { finalColor.r= 0; } else if((texcol.r>_topcut)) { finalColor.r= 0; } else { float r = _topcut-_botmcut; float xpos = _topcut - texcol.r; finalColor.r= (_botmcut + sqrt((xpos*xpos)-(r*r)))/_constant; } return finalColor; } This turns these blobs into this heightmap. Also, I've found some CG code that generates a normal map from a height map. The bit of code that builds the normal map from finite differences is here: void surf (Input IN, inout SurfaceOutput o) { o.Albedo = fixed3(0.5); float3 normal = UnpackNormal(tex2D(_BumpMap, IN.uv_MainTex)); float me = tex2D(_HeightMap,IN.uv_MainTex).x; float n = tex2D(_HeightMap,float2(IN.uv_MainTex.x,IN.uv_MainTex.y+1.0/_HeightmapDimY)).x; float s = tex2D(_HeightMap,float2(IN.uv_MainTex.x,IN.uv_MainTex.y-1.0/_HeightmapDimY)).x; float e = tex2D(_HeightMap,float2(IN.uv_MainTex.x-1.0/_HeightmapDimX,IN.uv_MainTex.y)).x; float w = tex2D(_HeightMap,float2(IN.uv_MainTex.x+1.0/_HeightmapDimX,IN.uv_MainTex.y)).x; float3 norm = normal; float3 temp = norm; //a temporary vector that is not parallel to norm if(norm.x==1) temp.y+=0.5; else temp.x+=0.5; //form a basis with norm being one of the axes: float3 perp1 = normalize(cross(norm,temp)); float3 perp2 = normalize(cross(norm,perp1)); //use the basis to move the normal in its own space by the offset float3 normalOffset = -_HeightmapStrength * ( ( (n-me) - (s-me) ) * perp1 + ( ( e - me ) - ( w - me ) ) * perp2 ); norm += normalOffset; norm = normalize(norm); o.Normal = norm; } Also, here is the built-in Transparent Parallax Specular shader for Unity:
Shader "Transparent/Parallax Specular" { Properties { _Color ("Main Color", Color) = (1,1,1,1) _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 0) _Shininess ("Shininess", Range (0.01, 1)) = 0.078125 _Parallax ("Height", Range (0.005, 0.08)) = 0.02 _MainTex ("Base (RGB) TransGloss (A)", 2D) = "white" {} _BumpMap ("Normalmap", 2D) = "bump" {} _ParallaxMap ("Heightmap (A)", 2D) = "black" {} } SubShader { Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"} LOD 600 CGPROGRAM #pragma surface surf BlinnPhong alpha #pragma exclude_renderers flash sampler2D _MainTex; sampler2D _BumpMap; sampler2D _ParallaxMap; fixed4 _Color; half _Shininess; float _Parallax; struct Input { float2 uv_MainTex; float2 uv_BumpMap; float3 viewDir; }; void surf (Input IN, inout SurfaceOutput o) { half h = tex2D (_ParallaxMap, IN.uv_BumpMap).w; float2 offset = ParallaxOffset (h, _Parallax, IN.viewDir); IN.uv_MainTex += offset; IN.uv_BumpMap += offset; fixed4 tex = tex2D(_MainTex, IN.uv_MainTex); o.Albedo = tex.rgb * _Color.rgb; o.Gloss = tex.a; o.Alpha = tex.a * _Color.a; o.Specular = _Shininess; o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap)); } ENDCG } FallBack "Transparent/Bumped Specular" }

    Read the article

  • Unity - Invert Movement Direction

    - by m41n
    I am currently developing a 2.5D sidescroller in Unity (I'm just starting to get to know it). I added a turn script to have my character face the appropriate direction of movement, but something about the movement itself is now behaving oddly. When I press the right arrow key, the character moves and faces towards the right. If I press the left arrow key, the character faces towards the left, but "moon-walks" to the right. I already had enough trouble getting the turning to work, so I am trying to find a simple solution, if possible without too much reworking of the rest of my project. I was thinking of just inverting the movement direction for a specific input key/facing direction. So if anyone knows how to do something like that, I'd be thankful for the help. If it helps, the following is the current part of my "AnimationChooser" script that handles the turning: Quaternion targetf = Quaternion.Euler(0, 270, 0); // Vector3 Direction when facing frontway Quaternion targetb = Quaternion.Euler(0, 90, 0); // Vector3 Direction when facing opposite way if (Input.GetAxisRaw ("Vertical") < 0.0f) // if input is lower than 0 turn to targetf { transform.rotation = Quaternion.Lerp(transform.rotation, targetf, Time.deltaTime * smooth); } if (Input.GetAxisRaw ("Vertical") > 0.0f) // if input is higher than 0 turn to targetb { transform.rotation = Quaternion.Lerp(transform.rotation, targetb, Time.deltaTime * smooth); } The values (270 and 90) and the axis are there because I had to turn my model itself in the first place to face towards either of the movement directions.
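
    For what it's worth, one way to avoid the moon-walk without reworking much else is to derive the facing from the sign of the input and then move along that facing, so the two can never disagree. A small, language-neutral sketch of the idea in plain C++ (hypothetical names, not Unity API):

        #include <cmath>
        #include <cstdio>

        struct Character {
            float x = 0.0f;      // horizontal position
            int facing = 1;      // +1 = facing right, -1 = facing left
            float speed = 5.0f;

            // input is the horizontal axis value in [-1, 1]
            void update(float input, float dt) {
                if (input > 0.0f) facing = 1;    // turn to match the input...
                if (input < 0.0f) facing = -1;
                // ...and move along the facing, so walking direction and facing
                // always agree; no separate "invert movement" case is needed.
                x += facing * std::fabs(input) * speed * dt;
            }
        };

        int main() {
            Character c;
            c.update(-1.0f, 0.016f);   // press left: faces left and moves left
            std::printf("x=%.3f facing=%d\n", c.x, c.facing);
        }

    In Unity terms this amounts to moving along the character's own forward/right direction after the Lerp, instead of along a fixed world axis, though the exact call depends on the rest of the movement script.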

    Read the article

  • Libnoise producing completely random noise

    - by Doodlemeat
    I am using libnoise in C++ and I have some problems getting coherent noise. I mean, the noise produced now is completely random and it doesn't look like noise at all. Here's a link to the image produced by my game. I am dividing the map into several chunks, but I can't see any problem with doing that, since libnoise supports tileable noise. The code can be found below. Every chunk is 8x8 tiles large. Every tile is 64x64 pixels. I am also providing a link to download the entire project. It was made in Visual Studio 2013. Download link This is the code for generating a chunk: Chunk *World::loadChunk(sf::Vector2i pPosition) { sf::Vector2i chunkPos = pPosition; pPosition.x *= mChunkTileSize.x; pPosition.y *= mChunkTileSize.y; sf::FloatRect bounds(static_cast<sf::Vector2f>(pPosition), sf::Vector2f(static_cast<float>(mChunkTileSize.x), static_cast<float>(mChunkTileSize.y))); utils::NoiseMap heightMap; utils::NoiseMapBuilderPlane heightMapBuilder; heightMapBuilder.SetSourceModule(mNoiseModule); heightMapBuilder.SetDestNoiseMap(heightMap); heightMapBuilder.SetDestSize(mChunkTileSize.x, mChunkTileSize.y); heightMapBuilder.SetBounds(bounds.left, bounds.left + bounds.width - 1, bounds.top, bounds.top + bounds.height - 1); heightMapBuilder.Build(); Chunk *chunk = new Chunk(this); chunk->setPosition(chunkPos); chunk->buildChunk(&heightMap); chunk->setTexture(&mTileset); mChunks.push_back(chunk); return chunk; } This is the code for building the chunk: void Chunk::buildChunk(utils::NoiseMap *pHeightMap) { // Resize the tiles space mTiles.resize(pHeightMap->GetWidth()); for (int x = 0; x < mTiles.size(); x++) { mTiles[x].resize(pHeightMap->GetHeight()); } // Set vertices type and size mVertices.setPrimitiveType(sf::Quads); mVertices.resize(pHeightMap->GetWidth() * pHeightMap->GetWidth() * 4); // Get the offset position of all tiles position sf::Vector2i tileSize = mWorld->getTileSize(); sf::Vector2i chunkSize = mWorld->getChunkSize(); sf::Vector2f offsetPositon = sf::Vector2f(mPosition); offsetPositon.x *= chunkSize.x; offsetPositon.y *= chunkSize.y; // Build tiles for (int x = 0; x < mTiles.size(); x++) { for (int y = 0; y < mTiles[x].size(); y++) { // Sometimes libnoise can return a value over 1.0, better be sure to cap the top and bottom.. float heightValue = pHeightMap->GetValue(x, y); if (heightValue > 1.f) heightValue = 1.f; if (heightValue < -1.f) heightValue = -1.f; // Instantiate a new Tile object with the noise value, this doesn't do anything yet..
mTiles[x][y] = new Tile(this, pHeightMap->GetValue(x, y)); // Get a pointer to the current tile's quad sf::Vertex *quad = &mVertices[(y + x * pHeightMap->GetWidth()) * 4]; quad[0].position = sf::Vector2f(offsetPositon.x + x * tileSize.x, offsetPositon.y + y * tileSize.y); quad[1].position = sf::Vector2f(offsetPositon.x + (x + 1) * tileSize.x, offsetPositon.y + y * tileSize.y); quad[2].position = sf::Vector2f(offsetPositon.x + (x + 1) * tileSize.x, offsetPositon.y + (y + 1) * tileSize.y); quad[3].position = sf::Vector2f(offsetPositon.x + x * tileSize.x, offsetPositon.y + (y + 1) * tileSize.y); // find out which type of tile to render, atm only air or stone TileStop *tilestop = mWorld->getTileStopAt(heightValue); sf::Vector2i texturePos = tilestop->getTexturePosition(); // define its 4 texture coordinates quad[0].texCoords = sf::Vector2f(texturePos.x, texturePos.y); quad[1].texCoords = sf::Vector2f(texturePos.x + 64, texturePos.y); quad[2].texCoords = sf::Vector2f(texturePos.x + 64, texturePos.y + 64); quad[3].texCoords = sf::Vector2f(texturePos.x, texturePos.y + 64); } } } All the code that uses libnoise in some way are World.cpp, World.h and Chunk.cpp, Chunk.h in the project.
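
    For what it's worth, the usual culprit in cases like this (it may or may not be the article's actual bug) is that SetBounds() is given raw tile/world coordinates, so consecutive samples sit roughly a whole noise unit apart and the result looks like static. Scaling the bounds down so that neighbouring tiles sample nearby noise-space points, while keeping each chunk's upper bound equal to the next chunk's lower bound, keeps the noise smooth and the chunks seamless. A self-contained sketch of that scaling (featureSize and the sample values are assumptions, not taken from the project):

        #include <cstdio>

        struct NoiseBounds { double lowerX, upperX, lowerZ, upperZ; };

        // featureSize = roughly how many tiles one noise "feature" should span.
        NoiseBounds boundsForChunk(int chunkX, int chunkY, int tilesPerChunk, double featureSize)
        {
            double step = tilesPerChunk / featureSize;   // noise units covered by one chunk
            NoiseBounds b;
            b.lowerX = chunkX * step;
            b.upperX = (chunkX + 1) * step;   // equals the next chunk's lowerX, so edges line up
            b.lowerZ = chunkY * step;
            b.upperZ = (chunkY + 1) * step;
            return b;
        }

        int main() {
            NoiseBounds b = boundsForChunk(3, 5, 8, 64.0);
            std::printf("x: [%f, %f)  z: [%f, %f)\n", b.lowerX, b.upperX, b.lowerZ, b.upperZ);
        }

    heightMapBuilder.SetBounds(b.lowerX, b.upperX, b.lowerZ, b.upperZ) would then be fed these scaled values instead of the tile-coordinate rectangle.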

    Read the article

  • Incorporating XNA into an existing project

    - by Boreal
    My game as-is is using IrrlichtLime, which I'm beginning to dislike because it hides a lot of implementation and makes adding your own implementation incredibly complex. I don't really need the scene manager for anything and the only animation I need is manual (i.e. transforming the bones programmatically). However, I've only ever used XNA in the past as a starting point with the templates. How would I take my current project and add XNA to it?

    Read the article

  • Most efficient way to handle coordinate maps in Java

    - by glowcoder
    I have a rectangular tile-based layout. It's your typical Cartesian system. I would like to have a single class that handles two lookup styles: (1) get me the set of players at position X,Y, and (2) get me the position of the player with key K. My current implementation is this: class CoordinateMap<V> { Map<Long,Set<V>> coords2value; Map<V,Long> value2coords; // convert (int x, int y) to long key - this is tested, works for all values -1bil to +1bil // My map will NOT require more than 1 bil tiles from the origin :) private Long keyFor(int x, int y) { int kx = x + 1000000000; int ky = y + 1000000000; return (long)kx | (long)ky << 32; } // extract the x and y from the keys private int[] coordsFor(long k) { int x = (int)(k & 0xFFFFFFFF) - 1000000000; int y = (int)((k >>> 32) & 0xFFFFFFFF) - 1000000000; return new int[] { x,y }; } } From there, I proceed to have other methods that manipulate or access the two maps accordingly. My question is... is there a better way to do this? Sure, I've tested my class and it works fine. And sure, something inside tells me that if I want to reference the data by two different keys, I need two different maps. But I can also bet I'm not the first to run into this scenario. Thanks!
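
    As a side note, the packed-key trick itself is sound and ports directly to other languages; a compact C++ sketch of the same encode/decode round trip (my own illustration, not the poster's class):

        #include <cassert>
        #include <cstdint>
        #include <cstdio>

        // Shift both signed coordinates into the positive range, then pack them
        // into one 64-bit key, exactly as the Java version does.
        constexpr std::int64_t OFFSET = 1000000000;

        std::uint64_t keyFor(int x, int y) {
            std::uint64_t kx = static_cast<std::uint64_t>(x + OFFSET);
            std::uint64_t ky = static_cast<std::uint64_t>(y + OFFSET);
            return kx | (ky << 32);
        }

        void coordsFor(std::uint64_t key, int& x, int& y) {
            x = static_cast<int>(key & 0xFFFFFFFFu) - OFFSET;
            y = static_cast<int>((key >> 32) & 0xFFFFFFFFu) - OFFSET;
        }

        int main() {
            int x, y;
            coordsFor(keyFor(-123456, 987654), x, y);
            assert(x == -123456 && y == 987654);   // the round trip is lossless
            std::printf("ok: %d %d\n", x, y);
        }

    Beyond that, keeping two maps really is the standard answer for a bidirectional lookup; the usual refinements are wrapping the pair behind one class (as the poster already does) or switching the position-to-players side to a spatial hash if the world gets sparse.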

    Read the article

  • Most efficient way to implement delta time

    - by Starkers
    Here's one way to implement delta time: /// init /// var duration = 5000, currentTime = Date.now(); // and create cube, scene, camera etc ////// function animate() { /// determine delta /// var now = Date.now(), deltat = now - currentTime, currentTime = now, scalar = deltat / duration, angle = (Math.PI * 2) * scalar; ////// /// animate /// cube.rotation.y += angle; ////// /// update /// requestAnimationFrame(render); ////// } Could someone confirm I know how it works? Here's what I think is going on: Firstly, we set duration to 5000, which is how long the loop should take to complete in an ideal world. With a computer that is slow/busy, let's say the animation loop takes twice as long as it should, so 10000: When this happens, the scalar is set to 2.0: scalar = deltat / duration scalar = 10000 / 5000 scalar = 2.0 We now multiply all animation by twice as much: angle = (Math.PI * 2) * scalar; angle = (Math.PI * 2) * 2.0; angle = (Math.PI * 4) // which is 2 rotations When we do this, the cube rotation will appear to 'jump', but this is good because the animation remains real-time. With a computer that is going too quickly, let's say the animation loop takes half as long as it should, so 2500: When this happens, the scalar is set to 0.5: scalar = deltat / duration scalar = 2500 / 5000 scalar = 0.5 We now multiply all animation by a half: angle = (Math.PI * 2) * scalar; angle = (Math.PI * 2) * 0.5; angle = (Math.PI * 1) // which is half a rotation When we do this, the cube won't jump at all, the animation remains real time, and it doesn't speed up. However, would I be right in thinking this doesn't alter how hard the computer is working? I mean, it still goes through the loop as fast as it can, and it still has to render the whole scene, just with different, smaller angles! So this is a bad way to implement delta time, right? Now let's pretend the computer is taking exactly as long as it should, so 5000: When this happens, the scalar is set to 1.0: angle = (Math.PI * 2) * scalar; angle = (Math.PI * 2) * 1; angle = (Math.PI * 2) // which is 1 rotation When we do this, everything is multiplied by 1, so nothing is changed. We'd get the same result if we weren't using delta time at all! My questions are as follows. Most importantly, have I got the right end of the stick here? How do we know to set the duration to 5000? Or can it be any number? I'm a bit vague about the "computer going too quickly". Is there a way to loop less often rather than reduce the animation steps? That seems like a better idea. Using this method, do all of our animations need to be multiplied by the scalar? Do we have to hunt down every last one and multiply it? Is this the best way to implement delta time? I think not, due to the fact that the computer can go nuts and all we do is divide each animation step, and because we need to hunt down every step and multiply it by the scalar. Not a very nice DSL, as it were. So what is the best way to implement delta time? Below is one way that I do not really get but that may be a better way to implement delta time. Could someone explain please? // Globals INV_MAX_FPS = 1 / 60; frameDelta = 0; clock = new THREE.Clock(); // In the animation loop (the requestAnimationFrame callback)… frameDelta += clock.getDelta(); // API: "Get the seconds passed since the last call to this method." while (frameDelta >= INV_MAX_FPS) { update(INV_MAX_FPS); // calculate physics frameDelta -= INV_MAX_FPS; } How I think this works: Firstly, we set INV_MAX_FPS to 0.01666666666. How we will use this number does not jump out at me.
We then initialize a frameDelta, which stores how long the last loop took to run. Come the first loop, frameDelta is not greater than INV_MAX_FPS, so the while loop is not run (0 < 0.01666666666). So nothing happens. Now I really don't know what would cause this to happen, but let's pretend that the loop we just went through took 2 seconds to complete: We set frameDelta to 2: frameDelta += clock.getDelta(); frameDelta += 2.00 Now we run an animation thanks to update(0.01666666666). Again, what is the relevance of 0.01666666666?? And then we take away 0.01666666666 from the frameDelta: frameDelta -= INV_MAX_FPS; frameDelta = frameDelta - INV_MAX_FPS; frameDelta = 2 - 0.01666666666 frameDelta = 1.98333333334 So let's go into the second loop. Let's say it took 2 (? Why not 2? Or 12? I am a bit confused): frameDelta += clock.getDelta(); frameDelta = frameDelta + clock.getDelta(); frameDelta = 1.98333333334 + 2 frameDelta = 3.98333333334 This time we enter the while loop because 3.98333333334 >= 0.01666666666. We run update. We take away 0.01666666666 from frameDelta again: frameDelta -= INV_MAX_FPS; frameDelta = frameDelta - INV_MAX_FPS; frameDelta = 3.98333333334 - 0.01666666666 frameDelta = 3.96666666668 Now let's pretend the loop is super quick and runs in just 0.1 seconds and continues to do this (because the computer isn't busy any more). Basically, the update function will be run, and every loop we take away 0.01666666666 from the frameDelta until the frameDelta is less than 0.01666666666. And then nothing happens until the computer runs slowly again? Could someone shed some light please? Does update() update the scalar or something like that, and do we still have to multiply everything by the scalar like in the first example?
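
    For what it's worth, the second snippet is the classic fixed-timestep ("accumulator") pattern, and 0.01666... is simply 1/60 of a second, the length of one simulation step; update(INV_MAX_FPS) advances the world by exactly that much, so there is no separate scalar to chase through the code. A self-contained C++ sketch of the same loop, written out with comments (my own illustration, not from the article):

        #include <chrono>
        #include <cstdio>

        // Fixed-timestep ("accumulator") loop: real elapsed time is banked into
        // `accumulator`, and the simulation advances in constant slices of FIXED_DT.
        // A fast machine banks tiny amounts and often skips the inner loop for a
        // frame; a slow machine banks a lot and runs several catch-up steps. Either
        // way every update() sees the same dt, so the physics stays deterministic.
        constexpr double FIXED_DT = 1.0 / 60.0;   // the "INV_MAX_FPS" of the question

        void update(double dt) {
            static double angle = 0.0;
            angle += 3.14159265 * 2.0 * dt / 5.0;   // one full rotation per 5 seconds
        }

        int main() {
            using clock = std::chrono::steady_clock;
            auto previous = clock::now();
            double accumulator = 0.0;

            for (int frame = 0; frame < 300; ++frame) {   // stand-in for the render loop
                auto now = clock::now();
                accumulator += std::chrono::duration<double>(now - previous).count();
                previous = now;

                while (accumulator >= FIXED_DT) {         // run 0..N fixed-size steps
                    update(FIXED_DT);
                    accumulator -= FIXED_DT;
                }
                // render() would go here, once per frame, however many steps ran
            }
            std::puts("done");
        }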

    Read the article

  • Detect collision from a particular side

    - by Fabián
    I'm making a platform sidescrolling game. All I want to do is detect if my character is on the floor: function OnCollisionStay (col : Collision){ if(col.gameObject.tag == "Floor"){ onFloor = true; } else {onFloor = false;} } function OnCollisionExit (col : Collision){ onFloor = false; } But I know this isn't an accurate way. If I hit a cube with a "Floor" tag while in the air (no matter whether with the character's feet or head), I would be able to jump. Is there a way to use the same box collider to detect if I'm touching something from a specific side?
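
    The usual trick for this is to look at the contact normals the collision reports and only count a contact as "floor" when its normal points upward; side or head-on hits are then ignored. A language-neutral sketch of that test in plain C++ (the types are hypothetical; in Unity the equivalent would be iterating col.contacts inside OnCollisionStay):

        #include <cstdio>
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Contact { Vec3 normal; };   // normal of the surface being touched

        // "Standing on the floor" only when at least one contact normal points
        // mostly upward; 0.7 is roughly cos(45 degrees), so anything flatter than
        // a 45-degree slope counts.
        bool isGrounded(const std::vector<Contact>& contacts, float upThreshold = 0.7f) {
            for (const Contact& c : contacts) {
                if (c.normal.y > upThreshold)
                    return true;
            }
            return false;
        }

        int main() {
            std::vector<Contact> contacts{{{0.1f, 0.98f, 0.0f}}, {{1.0f, 0.0f, 0.0f}}};
            std::printf("%d\n", isGrounded(contacts));   // 1: the first contact points up
        }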

    Read the article

  • Math major as a viable degree

    - by Zak O'Keefe
    While I realize there are many topics about CS vs software engineering vs game school programs, I haven't found anything relating to whether a pure math degree (with a CS minor and electives) would also be a viable program. By this I mean: would having a math major and CS minor put one at a competitive disadvantage compared to a pure CS program? This relates specifically to game engine programming, more on the graphics side. Background (for those who care): I'm currently a math major with a CS minor at school, looking to land a career doing graphics engine programming. Admittedly, I love math and, if at all possible, would like to stay in my current program as long as it doesn't put me at a competitive disadvantage trying to land a job post-graduation. That being said, I'm strong in the traditional C/C++ languages, have strong concurrent programming skills, and currently produce self-made games for iOS. In an employer's eyes, how badly is the math major hurting me? Just want to get some advice from people already in the field!

    Read the article

  • How to correctly export UV coordinates from Blender

    - by KlashnikovKid
    Alright, so I'm just now getting around to texturing some assets. After much trial and error I feel I'm pretty good at UV unwrapping now, and my work looks good in Blender. However, either I'm using the UV data incorrectly (I really doubt it) or Blender doesn't seem to export the correct UV coordinates into the obj file, because the texture is mapped differently in my game engine. In Blender I've played with the texture panel and its mapping options, and have noticed it doesn't appear to affect the exported obj file's UV coordinates. So I guess my question is: is there something I need to do prior to exporting in order to bake the correct UV coordinates into the obj file? Or is there something else that needs to be done to massage the texture coordinates for sampling? Or any thoughts at all on what could be going wrong? (Also, here is a screenshot of my diffuse texture in Blender and the game engine. As you can see in the image, I have the same problem with a simple test cube not getting correct UVs either.) http://www.digitalinception.net/blenderSS.png http://www.digitalinception.net/gameSS.png

    Read the article

  • Rubik's cube array rotation

    - by Ace
    I'm about to make a 3D Rubik's-cube-based game in Flash AS3 and Away3D. I don't really know how to manage the 2D arrays of the Rubik's cube. For example, how do I rotate the corresponding arrays if I rotate a side, or just rotate a middle part? At this stage I also don't know how to rotate those smaller cube parts all together when a side is rotating. First I was thinking of "groups" (like in SketchUp, 3ds Max or Blender), but that would be tricky, because the group components would change every time. So I was thinking of just rotating each individual piece around a global axis. However, I only know the Away3D functions to rotate a cube around its local X, Y or Z axis, but how do I rotate around a global axis? Does anyone know of an algorithm for doing these types of rotations?
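
    On the array side (leaving the Away3D part aside), rotating a face a quarter turn is just a square-matrix rotation of that face's stickers, plus cycling the four adjacent edge strips on the neighbouring faces. A small sketch of the matrix part (my own illustration, not AS3):

        #include <array>
        #include <cstdio>

        using Face = std::array<std::array<int, 3>, 3>;   // 3x3 sticker colours

        // Quarter turn clockwise: the element at (row, col) moves to (col, N-1-row).
        // The four edge strips on the neighbouring faces would be cycled in the
        // same step (omitted here).
        Face rotateClockwise(const Face& f) {
            Face out{};
            const int n = 3;
            for (int row = 0; row < n; ++row)
                for (int col = 0; col < n; ++col)
                    out[col][n - 1 - row] = f[row][col];
            return out;
        }

        int main() {
            Face f = {{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}};
            Face r = rotateClockwise(f);
            for (const auto& row : r) {
                for (int v : row) std::printf("%d ", v);
                std::printf("\n");           // prints 7 4 1 / 8 5 2 / 9 6 3
            }
        }

    For the visual rotation around a global axis, the common approach is to temporarily parent the nine affected cubies to a pivot object at the cube's centre, rotate the pivot, then unparent, which avoids fighting each piece's local axes; whether Away3D exposes that directly is something to check in its docs.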

    Read the article

  • Tile map collision is not working properly

    - by Sigh-AniDe
    I am having problems setting collision between my sprite and the tiles. I have only done the code for colision for moving upwards but some places on the map it moves up and some places it doesn't. Here is what I have so far: Vector2 position; private static float scalingFactor = 32; static int tileWidth = 32; static int tileHeight = 32; int[ , ] map = { {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, {0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, {0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, {0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, {0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, }; // This is in turtle.update if ( keyboardState.IsKeyDown( Keys.Up ) ) { if ( position.Y > screenHeight / 4 ) { //// current position of the turtle on the tiles int mapX = ( int )( position.X / scalingFactor ); int mapY = ( int )( position.Y / scalingFactor ) - 1; if ( isMovable( mapX , mapY , map ) ) { position.Y = position.Y - scalingFactor; } } else { MoveUp(); } } private void MoveUp() { motion.Y = -1; } public bool isMovable( int mapX , int mapY , int[ , ] map ) { if ( mapX < 0 || mapX > 19 || mapY < 0 || mapY > 20 ) { return false; } int tile = map[ mapX , mapY ]; if ( tile == 0 ) { return false; } return true; } protected override void Update( GameTime gameTime ) { turtle.Update( screenHeight , scalingFactor , map ); base.Update( gameTime ); }
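
    One thing worth double-checking (it may or may not be the actual bug here) is the index order: a rectangular array initialised row by row, like the map above, is addressed with the row (the y tile) first and the column (the x tile) second, so reading it as map[x, y] mirrors the world across the diagonal and makes collision work only in some places. A tiny C++ sketch of the row-first lookup (the map values are made up):

        #include <cstdio>

        const int MAP_H = 4, MAP_W = 5;
        int map[MAP_H][MAP_W] = {
            {0, 0, 0, 0, 0},
            {0, 1, 1, 0, 0},
            {0, 0, 1, 0, 0},
            {0, 0, 0, 0, 0},
        };

        int tileAt(int tileX, int tileY) {
            if (tileX < 0 || tileX >= MAP_W || tileY < 0 || tileY >= MAP_H)
                return 0;                  // out of bounds counts as empty
            return map[tileY][tileX];      // row (y) first, then column (x)
        }

        int main() {
            // (2, 1) is inside the little block of 1s; (1, 2) is not.
            std::printf("%d %d\n", tileAt(2, 1), tileAt(1, 2));   // prints "1 0"
        }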

    Read the article

  • How can I convert an image from raw data in Android without any munging?

    - by stephelton
    I have raw image data (it may be PNG, JPG, ...) and I want it converted in Android without changing its pixel depth (bpp). In particular, when I load a grayscale (8 bpp) image that I want to use as alpha (glTexImage() with GL_ALPHA), it converts it to 16 bpp (presumably 5_6_5). While I do have a plan B (actually, I'm probably on plan E by now; this is really becoming annoying), I would really like to discover an easy way to do this using what is readily available in the API. So far, I'm using BitmapFactory.decodeByteArray(). While I'm at it: I'm doing this from a native environment via JNI (passing the buffer in from C, and a new buffer back to C from Java). Any portable solution in C/C++ would be preferable, but I don't want to introduce anything that might break in future versions of Android, etc.

    Read the article

  • How to draw texture to screen in Unity?

    - by user1306322
    I'm looking for a way to draw textures to the screen in Unity in a similar fashion to XNA's SpriteBatch.Draw method. Ideally, I'd like to write a few helper methods to make all my XNA code work in Unity. This is the first issue I've faced on this seemingly long journey. I guess I could just use quads, but I'm not so sure that's the least expensive way performance-wise. I could have done that in XNA anyway, but they didn't make SpriteBatch without a reason, I believe.

    Read the article

  • Using multiple indexes with buffer objects in OpenTK

    - by Rushyo
    I've got multiple buffers in OpenGL holding data on positions, normals and texcoords. I also have an equal number of buffers holding distinct index data for each of those buffers. I quite like this format (individual indices for each buffer), which is utilised by COLLADA, since it strikes me as optimally efficient at accessing each buffer. I've set up pointers to the relevant data arrays using VertexPointer, NormalPointer, etc.; however, I have no way to assign pointers to the index buffers, since DrawElements appears to only look at one ElementArrayBuffer. Can I utilise multiple indices in some way, or would I be better off using a different technique which can support this? I'd prefer to keep the distinct indices if at all possible.
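
    For context, core OpenGL's DrawElements really does use a single element array that indexes every enabled attribute in lockstep, so COLLADA-style per-attribute indices are normally merged into unified vertices at load time. A condensed C++ sketch of that re-indexing step (illustrative only; the struct layout is an assumption):

        #include <cstddef>
        #include <map>
        #include <tuple>
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Vec2 { float u, v; };
        struct Vertex { Vec3 position; Vec3 normal; Vec2 texcoord; };

        // Each unique (position, normal, uv) index triple becomes one output vertex;
        // repeated triples reuse the same output index, so shared corners stay shared.
        void reindex(const std::vector<Vec3>& positions, const std::vector<Vec3>& normals,
                     const std::vector<Vec2>& texcoords,
                     const std::vector<int>& posIdx, const std::vector<int>& normIdx,
                     const std::vector<int>& uvIdx,
                     std::vector<Vertex>& outVertices, std::vector<unsigned>& outIndices)
        {
            std::map<std::tuple<int, int, int>, unsigned> seen;
            for (std::size_t i = 0; i < posIdx.size(); ++i) {
                auto key = std::make_tuple(posIdx[i], normIdx[i], uvIdx[i]);
                auto it = seen.find(key);
                if (it == seen.end()) {
                    outVertices.push_back({positions[posIdx[i]], normals[normIdx[i]],
                                           texcoords[uvIdx[i]]});
                    it = seen.emplace(key, static_cast<unsigned>(outVertices.size() - 1)).first;
                }
                outIndices.push_back(it->second);
            }
        }

    The single outIndices stream is what then goes into the one ElementArrayBuffer; the per-attribute pointers stay, they just point at the merged arrays.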

    Read the article

  • Unity Frustum Culling Issue

    - by N0xus
    I'm creating a game that utilizes off-center projection. I've got my game set up in a CAVE being rendered on a cluster of 8 PCs, with 4 of these PCs being used for each eye (this creates a stereoscopic effect). To help with alignment in the CAVE I've implemented an off-center projection class. This class simply tells the camera what its top left, bottom left and bottom right corners are. From here, it creates a new projection matrix showing the player the left and right of their world. However, inside Unity's editor the camera is still facing forwards and, as a result, the culling inside Unity isn't rendering half of the image that appears on the left and right screens. Does anyone know of a way to either turn off the culling in Unity, or a way to fix the projection matrix issue?

    Read the article

  • How to expose game data in the game without a singleton?

    - by zardon
    I'm quite new to cocos2d and games programming, and I am currently writing a game that is in the prototype stage. Everything is going okay, but I've realized a potentially big problem and I am not sure how to solve it. I am using a singleton to store a bunch of arrays for everything: a global list of planets, a global list of troops, a global list of products, etc. And only now I'm realizing that all of this will be in memory, and this is the wrong way to do it. I am not storing files or anything on the disk just yet, with the exception of a save/load state, which is a capture of everything. My game makes use of a map which allows you to select a planet; it will then give you a breakdown of that planet's troops and resources. Let's use this scenario: my game has 20 planets, on each of which you can have 20 troops. Straight away that's an array of 400! And this does not count the NPCs, which are another 10 per planet, so 20x10 = 200. So now we have 600 entries, all in arrays inside a singleton. This is obviously very bad, and very wrong, especially as the game scales in the amount of data. But I need to expose pretty much everything, especially on the map page, and I am not sure how else to do it. I've been told that I can use a controller for the map page which has the information I need for each planet, and other controllers for other items I require global display for. I've also thought about storing each planet's data in a save file, using initWithCoder, but then there could be a boatload of files on the user's device. I really don't want to use a database, mainly because I would need to translate NSObjects and non-NSObjects like CGRects and CGPoints and Colors into/from SQL. I am open to other ideas on how to store and read game data without using a singleton to store everything, everywhere. Thanks for your time.

    Read the article

  • How do I calculate the opposite of a vector, adding some slack?

    - by Jason94
    How can I calculate a valid range (RED) for my object's (BLACK) traveling direction (GREEN)? The green is a Vector2 where the x and y range is -1 to 1. What I'm trying to do here is create a rocket fuel burn effect. So what I've got is the rocket speed (float) and the rocket direction (Vector2 x = [-1, 1], y = [-1, 1]). I think the rocket speed may not matter, as the fuel burn effect (particle) is created at a position with its own speed.
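
    One way to read the question is: take the negated travel direction and jitter it by a random angle within some half-angle cone, so the exhaust fans out a little. A plain C++ sketch of that (Vector2 here is a stand-in, and spreadDegrees is an assumed tuning value):

        #include <cmath>
        #include <cstdio>
        #include <random>

        struct Vector2 { float x, y; };

        // Exhaust direction: the opposite of the travel direction, rotated by a
        // random angle within +/- spreadDegrees.
        Vector2 exhaustDirection(Vector2 travelDir, float spreadDegrees, std::mt19937& rng) {
            std::uniform_real_distribution<float> jitter(-spreadDegrees, spreadDegrees);
            float angle = jitter(rng) * 3.14159265f / 180.0f;
            Vector2 opposite{-travelDir.x, -travelDir.y};
            float c = std::cos(angle), s = std::sin(angle);
            return { opposite.x * c - opposite.y * s,     // standard 2D rotation
                     opposite.x * s + opposite.y * c };
        }

        int main() {
            std::mt19937 rng{42};
            Vector2 dir{0.0f, 1.0f};                      // rocket travelling "up"
            for (int i = 0; i < 3; ++i) {
                Vector2 e = exhaustDirection(dir, 15.0f, rng);
                std::printf("%.3f %.3f\n", e.x, e.y);     // roughly (0, -1), fanned out
            }
        }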

    Read the article

  • Can't spot the error, trying to increment

    - by Kevin Jensen Petersen
    I really can't spot the error, or the misspelling. This script should increase the variable currentTime by 1 every second, as long as I am holding the Space button down. This is Unity C#. using UnityEngine; using System.Collections; public class GameTimer : MonoBehaviour { //Timer private bool isTimeDone; public GUIText counter; public int currentTime; private bool starting; //Each message will be shown random each 20 seconds. public string[] messages; public GUIText msg; //To check if this is the end private bool end; void Update () { counter.guiText.text = currentTime.ToString(); if(Input.GetKey(KeyCode.Space)) { if(starting == false) { starting = true; } if(end == false) { if(isTimeDone) { StartCoroutine(timer()); } } else { msg.guiText.text = "You think you can do better? Press 'R' to Try again!"; if(Input.GetKeyDown(KeyCode.R)) { Application.LoadLevel(Application.loadedLevel); } } } if(!Input.GetKey(KeyCode.Space) & starting) { end = true; } } IEnumerator timer() { isTimeDone = false; yield return new WaitForSeconds(1); currentTime++; isTimeDone = true; } }

    Read the article

  • Distance between two 3D objects' faces

    - by Arthur Gibraltar
    I'm really a newbie at programming and I'm making some tests. I couldn't find anywhere on the Internet how I could calculate the distance between two 3D objects' faces. Is there any way? To give some detail: as an example, I have two 3D cubes. Each one has a vector3 position designating its center in 3D space and an orientation matrix. And each cube has a size (float width, float height and float length). I could get a simple distance between them by calling Vector3.Distance(), but that doesn't consider their sizes, just the positions; the distance would then be between their centers. Is there any way to calculate the distance between the faces? Thanks for any reply.
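
    Not a full answer, but a useful starting point: if the two boxes were axis-aligned, the surface-to-surface distance is just the per-axis gap between their extents combined with Pythagoras; oriented boxes would first need their corners transformed (for example, a separating-axis style treatment). A small sketch of the axis-aligned case (the Box layout is my own):

        #include <algorithm>
        #include <cmath>
        #include <cstdio>

        struct Box { float cx, cy, cz; float width, height, length; };   // centre + size

        // Gap between two axis-aligned boxes along one axis; zero when they overlap.
        float axisGap(float centerA, float sizeA, float centerB, float sizeB) {
            float gap = std::fabs(centerA - centerB) - 0.5f * (sizeA + sizeB);
            return std::max(gap, 0.0f);
        }

        // Shortest distance between the surfaces of two axis-aligned boxes.
        float surfaceDistance(const Box& a, const Box& b) {
            float dx = axisGap(a.cx, a.width,  b.cx, b.width);
            float dy = axisGap(a.cy, a.height, b.cy, b.height);
            float dz = axisGap(a.cz, a.length, b.cz, b.length);
            return std::sqrt(dx * dx + dy * dy + dz * dz);
        }

        int main() {
            Box a{0, 0, 0, 2, 2, 2};
            Box b{5, 0, 0, 2, 2, 2};
            std::printf("%.2f\n", surfaceDistance(a, b));   // 3.00: centres 5 apart, half-sizes 1 + 1
        }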

    Read the article

  • Static "LoD" hack opinions

    - by David Lively
    I've been playing with implementing dynamic level of detail for rendering a very large mesh in XNA. It occurred to me that (duh) the whole point of this is to generate small triangles close to the camera, and larger ones far away. Given that, rather than constantly modifying or swapping index buffers based on a feature's rendered size or distance from the camera, it would be a lot easier (and potentially quite a bit faster), to render a single "fan" or flat wedge/frustum-shaped planar mesh that is tessellated into small triangles close to the near or small end of the frustum and larger ones at the far end, sort of like this (overhead view) (Pardon the gap in the middle - I drew one side and mirrored it) The triangle sizes are chosen so that all are approximately the same size when projected. Then, that mesh would be transformed to track the camera so that the Z axis (center vertical in this image) is always aligned with the view direction projected into the XZ plane. The vertex shader would then read terrain heights from a height texture and adjust the Y coordinate of the mesh to match a height field that defines the terrain. This eliminates the need for culling (since the mesh is generated to match the viewport dimensions) and the need to modify the index and/or vertex buffers when drawing the terrain. Obviously this doesn't address terrain with overhangs, etc, but that could be handled to a certain extent by including a second mesh that defines a sort of "ceiling" via a different texture. The other LoD schemes I've seen aren't particularly difficult to implement and, in some cases, are a lot more flexible, but this seemed like a decent quick-and-dirty way to handle height map-based terrain without getting into geometry manipulation. Has anyone tried this? Opinions?

    Read the article

  • How to define type-specific scripts when using a 'type object' programming pattern?

    - by Erik
    I am in the process of creating a game engine written in C++, using the C/C++ SQLite interface to achieve a 'type object' pattern. The process is largely similar to what is outlined here (thank you Bob Nystrom for the great resource!). I have a generally defined Entity class: when a new object is created, data is taken from a SQLite database and the object is then pushed back into a pointer vector, which is iterated through, calling update() for each object. All the ints, floats and strings are loaded in fine, but the script() member of Entity is proving to be an issue. It's not much fun having a bunch of stationary objects lying around my game world. The only solutions I've come up with so far are: (1) create a monolithic EntityScript class with member functions encompassing all game AI, and then call the corresponding script when iterating through the Entity vector (not ideal); or (2) create bindings between C++ and a scripting language. The latter would seem to get the job done, but it feels like implementing it (given the potential memory overhead) and learning a new language is overkill for a small team (2-3 people) that knows the entirety of the existing game engine. Can you suggest any possible alternatives? My ideal situation would be that, to add content to the game, one would simply add a script file to the appropriate directory and append the SQLite database with all the object data. All that is required is to have a variety of integers and floats passed between the engine and the script file.
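
    One middle ground between a monolithic EntityScript class and full scripting-language bindings is a registry that maps the script name stored in SQLite to a C++ callable; adding a behaviour then means registering one function, and the Entity loader never changes. A compact sketch of the idea (names are illustrative, and it does mean recompiling to add behaviours, so it only half-satisfies the "drop a script file in a directory" ideal):

        #include <cstdio>
        #include <functional>
        #include <string>
        #include <unordered_map>

        struct Entity { float x = 0, y = 0; std::string scriptName; };

        // Behaviours are plain functions keyed by the name stored in the database.
        using Script = std::function<void(Entity&, float)>;

        std::unordered_map<std::string, Script>& scriptRegistry() {
            static std::unordered_map<std::string, Script> registry;
            return registry;
        }

        void registerScript(const std::string& name, Script fn) {
            scriptRegistry()[name] = std::move(fn);
        }

        // Called from the entity's update(); unknown names simply do nothing.
        void runScript(Entity& e, float dt) {
            auto it = scriptRegistry().find(e.scriptName);
            if (it != scriptRegistry().end()) it->second(e, dt);
        }

        int main() {
            registerScript("patrol", [](Entity& e, float dt) { e.x += 2.0f * dt; });
            Entity guard;
            guard.scriptName = "patrol";      // this string would come from the database row
            runScript(guard, 0.5f);
            std::printf("%.1f\n", guard.x);   // 1.0
        }

    If drop-in script files are a hard requirement, bindings to a small embeddable language remain the usual route, with the registry acting as the glue layer either way.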

    Read the article

  • Rendering different materials in a voxel terrain

    - by MaelmDev
    Each voxel datapoint in my terrain model is made up of two properties: density and material type. Each is stored as an unsigned integer value (but the density is interpreted as a decimal value between 0 and 1). My current idea for rendering these different materials on the terrain mesh is to store eleven extra attributes in each vertex: six material values corresponding to the materials of the voxels that the vertices lie between, three decimal values that correspond to the interpolation each vertex has between each voxel, and two decimal values that are used to determine where the fragment lies on the triangle. The material and interpolation attributes are the exact same for each vertex in the triangle. The fragment shader samples each texture that corresponds to each material and then uses the aforementioned couple of decimal values to interpolate between these samples and obtain the final textured color of the fragment. It should work fine, but it seems like a big memory hog. I won't be able to reuse vertices in the mesh with indexing, and each vertex will have a lot of data associated with it. It also seems pretty slow. What are some ways to improve or replace this technique for drawing materials on a voxel terrain mesh?

    Read the article

  • State / Screen management in Entity Component Systems

    - by David Lively
    My entity/component system is happily humming along and, despite some performance concerns I initially had, everything is working fine. However, I've realized that I missed a crucial point when starting this thing: how do you handle different screens? At the moment, I have a GameManager class which owns a component manager and entity manager. When I create an entity, the entity manager assigns it an ID and makes sure it's tracked. When I modify the components that are assigned to an entity. an UpdateEntity method is called, which alerts each of the systems that they may need to add or remove the entity from their respective entity lists. A problem with this is that the collection of entities operated on by each system is determined solely by the individual Systems, typically based on a "required component" filter. (An entity has to have a Renderable component to be rendered, for instance.) In this situation, I can't just keep collections of entities per screen and only Update/Draw those collections. They'd have to either be added and removed depending on their applicability to the current screen, which would cause their associated components to be removed, or enable/disable entities in a group per screen to hide what's not supposed to be visible. These approaches seem like really, really crappy kludges. What's a good way to handle this? A pretty straightforward way that comes to mind is to create a separate GameManager (which in my implementation owns all of the systems, entities, etc.) per screen, which means that everything outside of the device context would be duplicated. That's bothersome because some things are always visible, or I might want to continue to display the game under a translucent menu window. Another option would be to add a "layer" key to the GameManager class, which could be checked against a displayable layer stack held by the game manager. *System.Draw() would be called for each active layer, in the required order as determined by the stack. When the systems request an iterator for their respective entity collections, it would be pre-filtered to a (cached) set of those entities that participate in the active layer. Those collections could be updated from the same UpdateEntity event that's already used to maintain each system's entity collections. Still, kinda feels like a hack. If I've coded myself into a corner, feel free to throw tomatoes as long as they're labeled with a helpful suggestion. Hooray for learning curves.

    Read the article

  • Which game engine should we use for an Angry Birds-style game? [JAVA] [on hold]

    - by Arch1tect
    Our team is building an Angry Birds-style game, and we have only about ten days. The game is a little more complex than Angry Birds because there are two players; they each have a castle with pigs to protect (not destroy :)), and the goal is to destroy the other player's pigs. I wonder what game engine would help us finish this game most efficiently. We at least need a physics engine, but I guess a game engine is more helpful since it usually includes a physics engine. Correct me if I'm wrong. (So I'm wondering which game engine I should use; if it's just a physics engine, I'll use Box2D.) Networking may or may not be added later, depending on the time we have. Thanks in advance for any advice! EDIT: the image looks small, I'll add one:

    Read the article
