Search Results

Search found 19281 results on 772 pages for 'blender game engine'.


  • 2D Side scroller collision detection

    - by Shanon Simmonds
    I am trying to do some collision detection between objects and tiles. The tiles do not have their own x and y positions; they are just rendered at the x and y position given. There is an array of integers holding the ids of the tiles to use (taken from an image, where each distinct color is assigned a different tile):

        int x0 = camera.x / 16;
        int y0 = camera.y / 16;
        int x1 = (camera.x + screen.width) / 16;
        int y1 = (camera.y + screen.height) / 16;
        for(int y = y0; y < y1; y++) {
            if(y < 0 || y >= height) continue; // height is the height of the level
            for(int x = x0; x < x1; x++) {
                if(x < 0 || x >= width) continue; // width is the width of the level
                getTile(x, y).render(screen, x * 16, y * 16);
            }
        }

    I tried using the level's getTile method to check the tile the object was going to advance to, to see if it was a certain tile, but it seems to only work in some directions. Any ideas on what I'm doing wrong, and fixes, would be greatly appreciated. What's wrong is that it doesn't collide properly in every direction. This is how I tested for a collision in the object's class:

        if(!level.getTile((x + xa) / 16, (y + ya) / 16).isSolid()) {
            x += xa;
            y += ya;
        }

    EDIT: xa and ya represent the direction as well as the movement: if xa is negative the object is moving left, if it's positive it is moving right, and the same with ya, except negative for up and positive for down.
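
    A common fix for direction-dependent tile collision (a sketch of the usual approach, not code from the post): the test above samples a single point, which corresponds to one corner of the object, so collisions on the other sides slip through. Checking every corner of the object's bounding box before committing the move covers all four directions. In C#, with a hypothetical Level/Tile API mirroring the one above:

        // Test all four corners of the object's AABB against the tile map.
        // Level, GetTile and IsSolid are assumptions based on the post.
        static bool CanMoveTo(Level level, float x, float y, int w, int h)
        {
            const int tileSize = 16;
            int[] xs = { (int)x, (int)(x + w - 1) };
            int[] ys = { (int)y, (int)(y + h - 1) };
            foreach (int cx in xs)
                foreach (int cy in ys)
                    if (level.GetTile(cx / tileSize, cy / tileSize).IsSolid())
                        return false;
            return true;
        }

    Testing the x and y axes in two separate CanMoveTo calls also lets the object slide along walls instead of stopping dead.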


  • Proper updating of GeoClipMaps

    - by thr
    I have been working on an implementation of GPU-based geo clip maps, but there is a section of the GPU Gems 2 article that I just can't seem to understand, specifically this paragraph and more precisely the bolded part:

        The choice of grid size n = 2^k - 1 has the further advantage that the finer level is never exactly centered with respect to its parent next-coarser level. In other words, it is always offset by 1 grid unit either left or right, as well as either top or bottom (see Figure 2-4), depending on the position of the viewpoint. In fact, it is necessary to allow a finer level to shift while its next-coarser level stays fixed, and therefore the finer level must sometimes be off-center with respect to the next-coarser level. An alternative choice of grid size, such as n = 2^k - 3, would provide the possibility for exact centering.

    Let's take an example image from the article. My "understanding" of the way the clip maps were updated was that you floor the position of the viewpoint to an int and thus get the center vertex point; if this is not the same as the previous center point, you update the entire map. Now, this obviously is not the case. What I am failing to understand is this: if you look at the image above, and the viewpoint were to move one unit to the right, then the inner ring (the one just around the viewpoint plus the white center square) would end up with a 1-unit gap on both its left and right sides. But there is nothing in the paper that deals with this; what I mean is that it would end up looking like this (excuse my crummy cut-and-paste editing of the above image). This is obviously not a valid state of the clip map. So, would the solution be that a clip ring (layer) can only move in increments of the ring/layer it's contained within? Wouldn't this end up being very restrictive? I feel like I am missing some crucial understanding of parts of the algorithm, but I have been over both this paper and the original paper from 2004 and I just can't see what I am not getting.
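
    For what it's worth, the usual reading of the paper (my interpretation, not a quote from it) is exactly the suspected rule, and it is less restrictive than it sounds: each level moves in steps of its own grid unit, but its origin stays snapped to the grid of the next-coarser level, i.e. to even multiples of its own spacing, so the 1-unit offset simply flips from one side to the other as the viewer crosses a boundary. A C# sketch of that snapping, under those assumptions:

        using System;

        static class ClipmapMath
        {
            // Hypothetical rule: level l has spacing 2^l base units; its
            // origin snaps to multiples of 2^(l+1) so it always stays
            // aligned with (and 1 unit off-center inside) level l+1.
            public static int LevelOrigin(float viewerX, int level)
            {
                int step = 1 << (level + 1);
                return (int)Math.Floor(viewerX / step) * step;
            }
        }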


  • Player sprite moving slower on iPhone 4

    - by nvillec
    I just finished getting movement/jump animation for a player sprite in Xcode using Cocos2D. The basic movement algorithm is a timer that updates every 0.01 sec, changing the sprite position to (sprite.position.x + xVel, sprite.position.y + yVel). Each time a movement button is tapped, the appropriate velocity (initialized to 0) is changed to whatever speed I choose; then a stop-movement button returns the velocity to 0. It's not an ideal solution, but I'm very new at this and stoked to at least have that working with little help from the internet. So I may not have explained that perfectly, but it is in fact working to my satisfaction in Xcode's iPhone Simulator. However, when I build it for my device and run it on my phone, the sprite's movement speed is noticeably slower than in Xcode. At first I thought it must have to do with the resolution of the iPhone 4, making the sprite's movement path twice as long, but I found that if I pull up the multitask bar, then return to the app, the speed will sometimes jump back to normal. My second theory was that the code is just inefficient and is bogging the processes down, but wouldn't I see that reflected in the frame rate? It stays at 59-60 the whole time, and the spritesheet animation runs at the correct speed. Has anyone experienced this? Is this a really obvious issue that I'm completely missing? Any help (or tips for optimizing my approach to movement) would be much appreciated!
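
    One likely explanation (a guess based on the description, not a confirmed diagnosis): a timer asked to fire every 0.01 s is not guaranteed to actually fire 100 times per second on the device, and every missed tick silently slows a position-per-tick scheme down. Scaling velocity by the measured elapsed time makes speed independent of how often the update runs. A minimal C# sketch of the idea:

        // Frame-rate independent movement: velocity is in units per second,
        // and each update advances by the real time that has passed.
        void UpdateMovement(float dt) // dt = seconds since the last update
        {
            spriteX += xVel * dt; // spriteX/spriteY/xVel/yVel: hypothetical fields
            spriteY += yVel * dt;
        }

    In Cocos2D specifically, the scheduled update callback already receives such a delta-time parameter, so the same pattern applies directly.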


  • HLSL How to flip geometry horizontally

    - by cubrman
    I want to flip my asymmetric 3D model horizontally in the vertex shader, across an arbitrary plane parallel to the YZ plane. This should switch everything on the model from the left-hand side to the right-hand side (like flipping it in Photoshop). Doing it in the pixel shader would have a huge computational cost (extra RT, more fullscreen samples...), so it must be done in the vertex shader. Once more: this is NOT reflection, I need to flip THE WHOLE MODEL. I thought I could simply do the following:

    1. Turn off culling.
    2. Run the following code in the vertex shader:

        input.Position = mul(input.Position, World);
        // World[3][0] holds the x value of the model's pivot in the World.
        if (input.Position.x <= World[3][0])
            input.Position.x += World[3][0] - input.Position.x;
        else
            input.Position.x -= input.Position.x - World[3][0];
        ...

    The model is never drawn. Where am I wrong? I presume that messes up the index buffer. Can something be done about it? P.S. it's INSANELY HARD to format code here.

    Thanks to Panda I found my problem. SOLUTION:

        // Do this before anything else in the vertex shader.
        Position.x *= -1; // To invert across the object's YZ plane.
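
    A side note on the index-buffer suspicion (general mirroring behavior, not from the post): negating one axis reverses triangle winding, so with default backface culling a mirrored model can vanish entirely; turning culling off, as above, or swapping the cull direction handles it. If the flip were done on the CPU side, e.g. in XNA/C#, it would look like this sketch (baseWorld and device are assumed to exist):

        // Mirror across the model's local YZ plane; flip culling because
        // a negative scale reverses the winding order.
        Matrix world = Matrix.CreateScale(-1f, 1f, 1f) * baseWorld;
        device.RasterizerState = RasterizerState.CullClockwise; // default is CullCounterClockwise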


  • simple collision detection

    - by Rob
    Imagine 2 squares sitting side by side, both level with the ground: http://img19.imageshack.us/img19/8085/sqaures2.jpg

    A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if ALL of the following are NOT true:

    - The right square's left side is to the right of the left square's right side.
    - The right square's right side is to the left of the left square's left side.
    - The right square's bottom side is above the left square's top side.
    - The right square's top side is below the left square's bottom side.

    If any of those are true, the squares are not touching. If all of those are false, the squares are touching. But consider a case like this, where one square is at a 45 degree angle: http://img189.imageshack.us/img189/4236/squaresb.jpg

    Is there an equally simple way to determine if those squares are touching?
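
    The four rules above are the axis-aligned bounding-box (AABB) overlap test; in code they collapse to two comparisons per axis. A C# sketch (positions and sizes are hypothetical parameters):

        // AABB overlap: equivalent to "none of the four separations hold".
        static bool Touching(float ax, float ay, float aw, float ah,
                             float bx, float by, float bw, float bh)
        {
            return ax <= bx + bw && bx <= ax + aw   // overlap on x
                && ay <= by + bh && by <= ay + ah;  // overlap on y
        }

    For the rotated case there is a nearly-as-simple generalization, the separating axis theorem: project both squares onto each edge normal of each square (four axes in total here) and report touching only if the projected intervals overlap on every axis.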


  • SDL2 with OpenGL -- weird results, what's wrong?

    - by ber4444
    I'm porting an app to iOS, and therefore need to upgrade it from SDL 1.2 to SDL2 (so far I'm testing it as an OS X desktop app only). However, when running the code with SDL2, I'm getting weird results, as shown in the second image below (the first image shows how it looks with SDL 1.2, correctly). The single changeset that causes this is this one; do you see something obviously wrong there, or does SDL2 have some OpenGL nuances I'm unaware of? My SDL is based on changeset dd7e57847ea9 from HG (since then there is one "Allow specifying of OpenGL 3.2 Core Profile on Mac OS X" commit; not sure if that would help).


  • Is it good practice to optimize FPS even when it's above the lower limit that gives the illusion of movement?

    - by rraallvv
    I started out over 50 FPS on the iPhone, but now I'm below 30 FPS. I've seen most iPhone games clamped to either 60 or 30 FPS, even when 24 or less would give the illusion of movement. I've considered my limit to be a little bit over 15 FPS; in fact my physics simulation is updated at that rate (15.84 steps/s), as that is the lowest that still gives fluid movement; a bit lower gives jerky motion. Is there a practical reason to clamp FPS way above the lower limit? Update: The following image could help to clarify. I can independently set the physics simulation step, the frame rate, and the simulation update interval. My concern is: why should I clamp any of those to values greater than the minimum? For instance, to conserve battery life I could just choose the lower limits, but it seems that 60 and 30 FPS are the most used values.


  • Numerically stable(ish) method of getting Y-intercept of mouse position?

    - by Fraser
    I'm trying to unproject the mouse position to get the position, on the X-Z plane, of a ray cast from the mouse. The camera is fully controllable by the user. Right now, the algorithm I'm using is... Unproject the mouse into the camera to get the ray:

        Vector3 p1 = Vector3.Unproject(new Vector3(x, y, 0), 0, 0, width, height, nearPlane, farPlane, viewProj);
        Vector3 p2 = Vector3.Unproject(new Vector3(x, y, 1), 0, 0, width, height, nearPlane, farPlane, viewProj);
        Vector3 dir = p2 - p1;
        dir.Normalize();
        Ray ray = new Ray(p1, dir);

    Then get the Y-intercept by using algebra:

        float t = -ray.Position.Y / ray.Direction.Y;
        Vector3 p = ray.Position + t * ray.Direction;

    The problem is that the projected position is "jumpy". As I make small adjustments to the mouse position, the projected point moves in strange ways. For example, if I move the mouse one pixel up, it will sometimes move the projected position down, but when I move it a second pixel, the projected position will jump back to the mouse's location. The projected location is always close to where it should be, but it does not smoothly follow a moving mouse. The problem intensifies as I zoom the camera out. I believe the problem is caused by numeric instability. I can make minor improvements to this by doing some computations at double precision, and possibly by abusing the fact that floating-point calculations are done at 80-bit precision on x86; however, before I start micro-optimizing this and getting deep into how the CLR handles floating point, I was wondering if there's an algorithmic change I can make to improve this?

    EDIT: A little snooping around in .NET Reflector on SlimDX.dll:

        public static Vector3 Unproject(Vector3 vector, float x, float y, float width, float height, float minZ, float maxZ, Matrix worldViewProjection)
        {
            Vector3 coordinate = new Vector3();
            Matrix result = new Matrix();
            Matrix.Invert(ref worldViewProjection, out result);
            coordinate.X = (float) ((((vector.X - x) / ((double) width)) * 2.0) - 1.0);
            coordinate.Y = (float) -((((vector.Y - y) / ((double) height)) * 2.0) - 1.0);
            coordinate.Z = (vector.Z - minZ) / (maxZ - minZ);
            TransformCoordinate(ref coordinate, ref result, out coordinate);
            return coordinate;
        }

        // ...

        public static void TransformCoordinate(ref Vector3 coordinate, ref Matrix transformation, out Vector3 result)
        {
            Vector3 vector;
            Vector4 vector2 = new Vector4 {
                X = (((coordinate.Y * transformation.M21) + (coordinate.X * transformation.M11)) + (coordinate.Z * transformation.M31)) + transformation.M41,
                Y = (((coordinate.Y * transformation.M22) + (coordinate.X * transformation.M12)) + (coordinate.Z * transformation.M32)) + transformation.M42,
                Z = (((coordinate.Y * transformation.M23) + (coordinate.X * transformation.M13)) + (coordinate.Z * transformation.M33)) + transformation.M43
            };
            float num = (float) (1.0 / ((((transformation.M24 * coordinate.Y) + (transformation.M14 * coordinate.X)) + (coordinate.Z * transformation.M34)) + transformation.M44));
            vector2.W = num;
            vector.X = vector2.X * num;
            vector.Y = vector2.Y * num;
            vector.Z = vector2.Z * num;
            result = vector;
        }

    ...which seems to be a pretty standard method of unprojecting a point through an inverted projection matrix; however, this serves to introduce another point of possible instability. Still, I'd like to stick with the SlimDX Unproject routine rather than writing my own unless it's really necessary.
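
    One algorithmic change that tends to help here (a standard trick, offered as a sketch rather than a guaranteed fix): p2 - p1 subtracts two unprojected points that are nearly equal in screen terms but huge in world terms once the far plane is involved, which cancels most of the significant bits. Building the ray from the camera position through the near-plane unprojection avoids the far plane entirely, and doing the final plane intersection in doubles is cheap:

        // Pick ray from the camera through the near-plane point; cameraPosition
        // is assumed available (e.g. from the inverted view matrix).
        Vector3 near = Vector3.Unproject(new Vector3(x, y, 0), 0, 0, width, height, nearPlane, farPlane, viewProj);
        Vector3 dir = near - cameraPosition;
        dir.Normalize();

        // Intersect with the Y = 0 plane in double precision.
        double t = -(double)cameraPosition.Y / (double)dir.Y;
        double px = cameraPosition.X + t * dir.X;
        double pz = cameraPosition.Z + t * dir.Z;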


  • Intersection points of plane set forming convex hull

    - by Toji
    Mostly looking for a nudge in the right direction here. Given a set of planes (defined as a normal and a distance from the origin) that form a convex hull, I would like to find the intersection points that form the corners of that hull. More directly, I'm looking for a way to generate a point cloud appropriate to provide to Bullet. Bonus points if someone knows of a way I could give Bullet the plane list directly, since I somewhat suspect that's what it's building on the backend anyway.
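
    The brute-force route (O(n^3) over planes, fine for typical hull sizes): intersect every triple of planes, then keep only those points that lie on or behind every plane in the set. A C# sketch, assuming each plane stores an outward unit normal n and distance d with the convention n·x = d:

        // Corner of three planes via the scalar triple product:
        // x = (d1(n2×n3) + d2(n3×n1) + d3(n1×n2)) / (n1·(n2×n3)).
        static bool Corner(Vector3 n1, float d1, Vector3 n2, float d2,
                           Vector3 n3, float d3, out Vector3 point)
        {
            Vector3 c23 = Vector3.Cross(n2, n3);
            float det = Vector3.Dot(n1, c23);
            point = Vector3.Zero;
            if (Math.Abs(det) < 1e-6f) return false; // planes (nearly) parallel
            point = (d1 * c23 + d2 * Vector3.Cross(n3, n1) + d3 * Vector3.Cross(n1, n2)) / det;
            return true;
        }

    A candidate point is a real hull corner only if Vector3.Dot(n, point) <= d + epsilon for every plane (n, d) in the set. As for the bonus: Bullet ships plane/vertex conversion helpers in btGeometryUtil, so it is worth checking those before rolling your own.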


  • Java keyboard input [on hold]

    - by dØd
    I'm trying to implement an input system that can detect whether a certain key was held or only pressed briefly. So far I have this:

        KEY_INTERACTION_THRESHOLD = 400; // ms

        // inside a constructor
        shouldMeasure = true;

        @Override
        public void keyPressed(KeyEvent e) {
            if (shouldMeasure) {
                startTime = System.currentTimeMillis();
                shouldMeasure = false;
                return;
            }
            System.out.println("Button is held down");
            e.consume();
        }

        @Override
        public void keyReleased(KeyEvent e) {
            if (System.currentTimeMillis() - startTime < KEY_INTERACTION_THRESHOLD) {
                System.out.println("Button was only pressed briefly");
            }
            startTime = 0;
            shouldMeasure = true;
            e.consume();
        }

    Now this works, but the problem is that there is a delay between when I press a key to hold and when the message 'Button is held down' gets displayed. I understand why this delay occurs (for example, when you press and hold a letter there is a similar delay between the first and the second letter printed out), but I would like to somehow avoid it. I'm using only the Java API.


  • Examples of good Javascript/HTML5 based games

    - by Zuch
    Now that Flash is largely being replaced with HTML5 elements (video, audio, canvas, etc.), are there any good examples of web-based games built completely on open standards (meaning JavaScript, HTML and CSS)? I see a lot of examples of pure HTML5 implementations of what was once only possible in Flash (like the stuff here: http://www.html5rocks.com/), but not many games, a domain which still seems dominated by Flash. I'm curious what's possible and what the limitations are.


  • How do I morph between meshes that have different vertex counts?

    - by elijaheac
    I am using MeshMorpher from the Unify wiki in my Unity project, and I want to be able to transform between arbitrary meshes. This utility works best when there are an equal number of vertices between the two meshes. Is there some way to equalize the vertex count between a set of meshes? I don't mean that this would reduce the vertex count of a mesh, but would rather add redundant vertices to any meshes with smaller counts. However, if there is an alternate method of handling this (other than increasing vertices), I would like to know.
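
    One low-tech way to add those redundant vertices (a sketch of the padding idea, not part of the MeshMorpher API): pad the smaller mesh's attribute arrays with copies of their last element until the counts match. The duplicates are never referenced by the triangle indices, so they cost a little memory but don't change what is rendered.

        // Pad a vertex array with copies of its last entry so two meshes
        // expose the same vertex count for morphing (Unity Vector3).
        static Vector3[] PadVertices(Vector3[] verts, int targetCount)
        {
            if (verts.Length >= targetCount) return verts;
            var padded = new Vector3[targetCount];
            verts.CopyTo(padded, 0);
            for (int i = verts.Length; i < targetCount; i++)
                padded[i] = verts[verts.Length - 1];
            return padded;
        }

    The same padding has to be applied consistently to normals and UVs so the parallel arrays stay the same length.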


  • Java: Reflection Packet Builder using getField()

    - by Matchlighter
    So I just finished writing a packet builder that dynamically loads data into a data stream, which is then sent out. Each builder operates by finding fields in its class (and its superclasses) that are marked with an @data annotation. Upon finishing the builder, I remembered that getFields() does not return fields in "any specific order". I quite like my builder because it allows for quite simple, yet strongly typed packets. Could this implementation be a problem? What would be the best next step to keep the simplicity: alphabetical sorting of the fields?
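
    Sorting is indeed the usual fix: reflection on both the JVM and .NET documents no stable field order, even between runs of the same program, so any wire format driven by reflection needs to impose its own deterministic ordering, and alphabetical by field name is the common choice. The same idea sketched in C# for illustration (MyPacket and DataAttribute are hypothetical stand-ins for the packet class and the @data annotation):

        using System;
        using System.Linq;
        using System.Reflection;

        // Deterministic packet layout: order annotated fields by name.
        var fields = typeof(MyPacket)
            .GetFields(BindingFlags.Public | BindingFlags.Instance)
            .Where(f => f.IsDefined(typeof(DataAttribute), true))
            .OrderBy(f => f.Name, StringComparer.Ordinal)
            .ToArray();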


  • Set a drawing viewport while using camera

    - by Mariano
    I'm working with XNA. I already have a basic world made of tiles and a camera using a transform matrix. I have a character moving around and the camera follows. What I want to do now is draw the map only on a certain part of the screen as shown on the figure below. This way I can move the map to the left of the screen and have the other fixed parts shift to the right. Do I need to modify the camera matrix? Make a new viewport?
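
    Assigning a smaller Viewport before drawing the world is probably the most direct route in XNA; SpriteBatch coordinates then become relative to that rectangle, and the camera matrix keeps working unchanged. A sketch (the panel size and helper names are made up):

        // Draw the scrolling map into the left part of the screen only.
        Viewport full = GraphicsDevice.Viewport;
        GraphicsDevice.Viewport = new Viewport(0, 0, 600, full.Height); // hypothetical map panel
        spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, cameraTransform);
        DrawMap(spriteBatch); // hypothetical helper
        spriteBatch.End();
        GraphicsDevice.Viewport = full; // the fixed UI is drawn full-screen afterwards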


  • Child transforms problem when loading 3DS models using assimp

    - by MhdSyrwan
    I'm trying to load a textured 3D model into my scene using the assimp model loader. The problem is that child meshes are not situated correctly (they don't have the correct transformations). In brief: all the mTransformation matrices are identity matrices. Why would that be? I'm using this code to render the model:

        void recursive_render(const struct aiScene *sc, const struct aiNode* nd, float scale)
        {
            unsigned int i;
            unsigned int n = 0, t;
            aiMatrix4x4 m = nd->mTransformation;
            m.Scaling(aiVector3D(scale, scale, scale), m); // update transform
            m.Transpose();
            glPushMatrix();
            glMultMatrixf((float*)&m);
            // draw all meshes assigned to this node
            for (; n < nd->mNumMeshes; ++n) {
                const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]];
                apply_material(sc->mMaterials[mesh->mMaterialIndex]);
                if (mesh->HasBones()) {
                    printf("model has bones");
                    abort();
                }
                if (mesh->mNormals == NULL) {
                    glDisable(GL_LIGHTING);
                } else {
                    glEnable(GL_LIGHTING);
                }
                if (mesh->mColors[0] != NULL) {
                    glEnable(GL_COLOR_MATERIAL);
                } else {
                    glDisable(GL_COLOR_MATERIAL);
                }
                for (t = 0; t < mesh->mNumFaces; ++t) {
                    const struct aiFace* face = &mesh->mFaces[t];
                    GLenum face_mode;
                    switch (face->mNumIndices) {
                        case 1: face_mode = GL_POINTS; break;
                        case 2: face_mode = GL_LINES; break;
                        case 3: face_mode = GL_TRIANGLES; break;
                        default: face_mode = GL_POLYGON; break;
                    }
                    glBegin(face_mode);
                    for (i = 0; i < face->mNumIndices; i++) { // go through all vertices in the face
                        int vertexIndex = face->mIndices[i];  // get group index for the current index
                        if (mesh->mColors[0] != NULL)
                            Color4f(&mesh->mColors[0][vertexIndex]);
                        if (mesh->mNormals != NULL)
                            if (mesh->HasTextureCoords(0)) { // HasTextureCoords(texture_coordinates_set)
                                glTexCoord2f(mesh->mTextureCoords[0][vertexIndex].x,
                                             1 - mesh->mTextureCoords[0][vertexIndex].y); // mTextureCoords[channel][vertex]
                            }
                        glNormal3fv(&mesh->mNormals[vertexIndex].x);
                        glVertex3fv(&mesh->mVertices[vertexIndex].x);
                    }
                    glEnd();
                }
            }
            // draw all children
            for (n = 0; n < nd->mNumChildren; ++n) {
                recursive_render(sc, nd->mChildren[n], scale);
            }
            glPopMatrix();
        }

    What's the problem in my code? I've also added some code to abort the program if there's any bone in the meshes, but the program doesn't abort, which means there are no bones. Is that normal?

        if (mesh->HasBones()) {
            printf("model has bones");
            abort();
        }

    Note: I am using OpenGL, SFML and assimp.


  • Level design - Are games development degrees worth it? [on hold]

    - by Tristan
    I want to go into level design or environment design, and I'm wondering if degrees are at all worth it for this area, as long as you have a good portfolio. I'm currently on a "Computing and Games Development" course and I feel like dropping out because I am not enjoying it. It's mostly computing-based, which I'm not doing that great at, with little games development. But I don't know what to do... I do already have some higher-level education with a "Level 3 extended diploma". Thanks.


  • How to attach an object to a rotating circle?

    - by armands
    I am trying to make an object attach, at the collision point, to a rotating circle, but the player needs to attach by a fixed point of its own. For example: the player is moving back and forth, and when the user touches the screen the player jumps up; what I need is that when the player collides with the circle, it attaches its legs to it and continues rotating with the circle. So I wanted to know how to make this kind of collision joint in Cocos2D/Box2D.
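
    In Box2D itself, the usual tool is a revolute joint anchored at the contact point (or a weld joint if the player must not swing at all). If the joint route fails, the attachment can also be done kinematically; a sketch of the math in C#, independent of any physics API (contactAngle and circleRotation are assumed to be tracked in radians):

        // Keep the player glued to a spinning circle at the angle of contact.
        float angle = contactAngle + circleRotation;
        playerX = centerX + radius * (float)Math.Cos(angle);
        playerY = centerY + radius * (float)Math.Sin(angle);
        playerFacing = angle + (float)Math.PI / 2f; // e.g. feet pointing at the center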


  • Converting a GameObject method call from UnityScript to C#

    - by Crims0n_
    Here is the UnityScript implementation of the method I use to generate a randomly tiled background. The problem I'm having relates to how to translate the call to the newTile method into C#; so far I've had no luck fiddling... can anyone point me in the correct direction? Thanks

        #pragma strict
        import System.Collections.Generic;

        var mapSizeX : int;
        var mapSizeY : int;
        var xOffset : float;
        var yOffset : float;
        var tilePrefab : GameObject;
        var tilePrefab2 : GameObject;
        var tiles : List.<Transform> = new List.<Transform>();

        function Start () {
            var i : int = 0;
            var xIndex : int = 0;
            var yIndex : int = 0;
            xOffset = 2.69;
            yOffset = -1.97;
            while (yIndex < mapSizeY) {
                xIndex = 0;
                while (xIndex < mapSizeX) {
                    var z = Random.Range(0, 5);
                    if (z > 2) {
                        var newTile : GameObject = Instantiate(tilePrefab, Vector3(xIndex*0.64 - (xOffset * (mapSizeX/10)), yIndex*-0.64 - (yOffset * (mapSizeY/10)), 0), Quaternion.identity);
                        tiles.Add(newTile.transform);
                        newTile.transform.parent = transform;
                        newTile.transform.name = "tile_"+i;
                        i++;
                        xIndex++;
                    }
                    if (z < 2) {
                        var newTile2 : GameObject = Instantiate(tilePrefab2, Vector3(xIndex*0.64 - (xOffset * (mapSizeX/10)), yIndex*-0.64 - (yOffset * (mapSizeY/10)), 0), Quaternion.identity);
                        tiles.Add(newTile2.transform);
                        newTile2.transform.parent = transform;
                        newTile2.transform.name = "Ztile_"+i;
                        i++;
                        xIndex++;
                    }
                }
                yIndex++;
            }
        }

    C# Version [Fixed]

        using UnityEngine;
        using System.Collections;

        public class LevelGen : MonoBehaviour {
            public int mapSizeX;
            public int mapSizeY;
            public float xOffset;
            public float yOffset;
            public GameObject tilePrefab;
            public GameObject tilePrefab2;
            int i;
            public System.Collections.Generic.List<Transform> tiles = new System.Collections.Generic.List<Transform>();

            // Use this for initialization
            void Start () {
                int i = 0;
                int xIndex = 0;
                int yIndex = 0;
                xOffset = 1.58f;
                yOffset = -1.156f;
                while (yIndex < mapSizeY) {
                    xIndex = 0;
                    while (xIndex < mapSizeX) {
                        int z = Random.Range(0, 5);
                        if (z > 5) {
                            GameObject newTile = (GameObject)Instantiate(tilePrefab, new Vector3(xIndex*0.64f - (xOffset * (mapSizeX/10.0f)), yIndex*-0.64f - (yOffset * (mapSizeY/10.0f)), 0), Quaternion.identity);
                            tiles.Add(newTile.transform);
                            newTile.transform.parent = transform;
                            newTile.transform.name = "tile_"+i;
                            i++;
                            xIndex++;
                        }
                        if (z < 5) {
                            GameObject newTile2 = (GameObject)Instantiate(tilePrefab, new Vector3(xIndex*0.64f - (xOffset * (mapSizeX/10.0f)), yIndex*-0.64f - (yOffset * (mapSizeY/10.0f)), 0), Quaternion.identity);
                            tiles.Add(newTile2.transform);
                            newTile2.transform.parent = transform;
                            newTile2.transform.name = "tile2_"+i;
                            i++;
                            xIndex++;
                        }
                    }
                    yIndex++;
                }
            }

            // Update is called once per frame
            void Update () {
            }
        }


  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (flash.shaders.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images: the background surface and the foreground display object's bitmap.

    The problem is that at runtime, the shader doesn't allow you to sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender Toolkit. This limitation makes it impossible to simulate, for example, the Aero Glass effect in Windows Vista/7, because you cannot sample the background properly for blurring.

    I should mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs updating, because Flash does not provide information about when a particular area of the screen or a particular display object needs re-rendering. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient; that is why I am trying to do it through a blendShader, so Flash determines when it needs rendering automatically.

    Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs re-rendering? The limitation is mentioned, with no explanation, in the last note on this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html It says:

        "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts. In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."


  • Finding closest object to a location within a specific perpendicular distance to direction vector

    - by Sniper
    I have a location and a direction vector indicating facing; I want to find the closest object to that location that is within some tolerance distance (perpendicular distance) to the ray formed by the location and direction vector. Basically, I want to get the object that is being aimed at. I have thought about finding all objects within a box and then finding the closest object to my vector from those results, but I am sure that there is a more efficient way. The Z axis is optional; the objects are most likely within a few meters of the search vector.
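
    The direct per-object test (a sketch; rayDir is assumed normalized and GameObj is a hypothetical type): project each candidate's offset from the ray origin onto the direction to get its along-ray distance, subtract that component to get the perpendicular distance, then among candidates within tolerance keep the smallest along-ray distance.

        // Closest object along a ray within a perpendicular tolerance.
        GameObj best = null;
        float bestAlong = float.MaxValue;
        foreach (GameObj obj in objects)
        {
            Vector3 offset = obj.Position - rayOrigin;
            float along = Vector3.Dot(offset, rayDir);
            if (along < 0f) continue;                       // behind the viewer
            float perp = (offset - along * rayDir).Length();
            if (perp <= tolerance && along < bestAlong) { bestAlong = along; best = obj; }
        }

    The box prefilter still helps: running this test only on objects whose bounds intersect a coarse region around the ray keeps the projection math off most of the scene.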


  • "Walking" along a rotating surface in LimeJS

    - by Dave Lancea
    I'm trying to have a character walk along a plank (a long, thin rectangle) that works like a seesaw, being rotated around a central point by Box2D physics (falling objects). I want the left and right arrow keys to move the player up and down the plank regardless of its slope, and I don't want to use real physics for the player movement. My idea for achieving this was to compute the coordinate based on the rotation of the plank and the current location "up" or "down" the board. My math is derived from here: http://math.stackexchange.com/questions/143932/calculate-point-given-x-y-angle-and-distance Here's the code I have so far:

        movement = 0;
        if (keys[37]) { // Left
            movement = -3;
        }
        if (keys[39]) { // Right
            movement = 3;
        }

        // this.plank is a LimeJS sprite.
        // getRotation() should return an angle in degrees
        var rotation = this.plank.getRotation();

        // this.current_plank_location is initialized as 0
        this.current_plank_location += movement;

        var x_difference = this.current_plank_location * Math.cos(rotation);
        var y_difference = this.current_plank_location * Math.sin(rotation);

        this.setPosition(seesaw.PLANK_CENTER_X + x_difference,
                         seesaw.PLANK_CENTER_Y + y_difference);

    This code causes the player to swing around in a circle when they are out of the center of the plank, given a slight change in rotation of the plank. Any ideas on how I can get the player position to follow the board position?
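
    One thing that stands out in the code above (an observation, not a tested fix): Math.cos and Math.sin expect radians, while getRotation() is described as returning degrees, and feeding degrees to trig functions produces exactly this kind of wild swinging for small rotation changes. Converting first is worth trying before anything else; expressed in C# for consistency with the other sketches (variable names are hypothetical):

        // Convert degrees to radians before using trig functions.
        double radians = rotationDegrees * Math.PI / 180.0;
        double dx = plankLocation * Math.Cos(radians);
        double dy = plankLocation * Math.Sin(radians);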


  • Beginner question about vertex arrays in OpenGL

    - by MrDatabase
    Is there a special order in which vertices are entered into a vertex array? Currently I'm drawing single textures like this:

        glBindTexture(GL_TEXTURE_2D, texName);
        glVertexPointer(2, GL_FLOAT, 0, vertices);
        glTexCoordPointer(2, GL_FLOAT, 0, coordinates);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    where vertices has four "xy pairs". This is working fine. As a test I doubled the sizes of the vertices and coordinates arrays and changed the last line above to:

        glDrawArrays(GL_TRIANGLE_STRIP, 0, 8);

    since vertices now contains eight "xy pairs". I do see two textures (the second is intentionally offset from the first); however, the textures are now distorted. I've tried passing GL_TRIANGLES to glDrawArrays instead of GL_TRIANGLE_STRIP, but this doesn't work either. I'm so new to OpenGL that I thought it best to just ask here :-) Cheers!
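
    The order does matter (general GL_TRIANGLE_STRIP behavior, not specific to this code): a strip reuses its previous two vertices for each new triangle, so vertices 4 and 5 of the combined array also form two unwanted "bridge" triangles between the quads, and the texture mapping distorts with them. Two common fixes are inserting degenerate triangles between the quads, or switching to GL_TRIANGLES with six vertices per quad. The data layout for the latter, sketched as C#-style arrays with two concrete unit quads:

        // Two quads as GL_TRIANGLES: two explicit triangles (6 vertices) each.
        float[] vertices = {
            // quad 1: triangle A        triangle B
            0f,0f,  1f,0f,  0f,1f,       1f,0f,  1f,1f,  0f,1f,
            // quad 2, offset by +1.5 on x: same pattern
            1.5f,0f,  2.5f,0f,  1.5f,1f, 2.5f,0f,  2.5f,1f,  1.5f,1f,
        };
        // drawn with: glDrawArrays(GL_TRIANGLES, 0, 12); texcoords follow per vertex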


  • Camera movement and threshold not working

    - by irish guy mcconagheh
    I have a platformer in progress; part of this is a camera which I only want to move when the character moves out of a certain threshold. To try to accomplish this I have the following if statement:

        if (((Mathf.Abs(target.transform.position.x)) - (Mathf.Abs(transform.position.x))) > thres) {
            x = moveTo(transform.position.x, target.position.x, trackSpeed);
        }

    in Unity/C#. In pseudocode it means:

        if ((absolute value of player x) - (absolute value of camera x) is greater than the threshold) {
            move
        }

    However, this does not seem to work correctly. It appears to work for the first couple of times the threshold is reached, but then the distance between the camera and the player has to increase every time for the camera to move. I do not believe the movement of the camera is the problem; its code is as follows:

        private float moveTo(float n, float target, float accel) {
            if (n == target) {
                return n;
            } else {
                float dir = Mathf.Sign(target - n);
                n += accel * Time.deltaTime * dir;
                return (dir == Mathf.Sign(target - n)) ? n : target;
            }
        }
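
    For what it's worth (this is the standard way to write such a check, not a quote from the post): the difference of two absolute values is not the distance between two points. It grows only when the player's |x| outgrows the camera's |x|, which matches the "has to increase every time" symptom, and it also misbehaves whenever the two x values have different signs. Taking the absolute value of the difference handles both directions:

        // Distance between player and camera, not difference of magnitudes.
        if (Mathf.Abs(target.transform.position.x - transform.position.x) > thres) {
            x = moveTo(transform.position.x, target.position.x, trackSpeed);
        }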


  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing Maya animation into Unity. I set up a simple cylinder with 2 bones and an IK handle, and made a simple animation where the cylinder bends and returns to the straight position over 24 frames. Following that, I selected everything and baked it all: the bones, the IK, the animation (by selecting everything in the graph editor), and even the cylinder. I saved the scene, then selected all and exported as FBX with "animation" and "bake" checked. In Unity, I am able to see the animation in the import preview. When I load the model into the scene and play (after assigning the controller), I am able to see the animation too. But when I try to script it and control the animation, nothing happens. Even as a test, I tried the following under the Update method:

        if (animation.isPlaying)
            Debug.Log("Animation Works");
        else
            Debug.Log("Animation not working");

    Neither message gets logged. My animation is called "bend", so just to try, I did the following, and nothing happens:

        animation.Play("bend");

    Can you please advise, based on my steps, whether I am missing something? Do I need to add the controller, or is that an unnecessary step? Did I screw up on the Maya part or the Unity part? Thanks for the help.
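
    One detail that commonly explains this (offered as a guess, since it depends on the import settings): the legacy animation component and its scripting API only see clips when the rig's Animation Type is set to Legacy; if an Animator controller was assigned, the clip lives on the Mecanim side and animation.Play("bend") silently does nothing. A quick diagnostic using the legacy API:

        // Check whether the legacy Animation component actually holds the clip.
        Animation anim = GetComponent<Animation>();
        if (anim == null || anim["bend"] == null)
            Debug.Log("No legacy clip named 'bend'; rig may be imported as Mecanim");
        else
            anim.Play("bend");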


  • Biome Transition in a Grid & Borderless World

    - by API-Beast
    I have a universe: a list of "systems", each with its own center, type and radius. A small part of such a universe could look like this. Systems:

    - Can be very close to a different system, e.g. overlap
    - Can be inside another, much bigger system
    - Can be very far away from any other system
    - Spawn system-specific entities and particles inside the system radius
    - Have some properties, like background color

    So far so good. However, the player can fly around freely, inside and outside of systems, in real time. How do I interpolate and determine things like the background color, depending on camera position? E.g. if you are halfway between a green and a red system you should see a background halfway between red and green; if you are near the center of a lilac system and at the border of a green system, you should get a mostly lilac background; and so on.
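
    A common scheme for this kind of blending (a sketch of distance-weighted mixing, not from the post): give each system an influence of 1 at its center, falling off to 0 at its radius, then mix the colors of all systems in range by their normalized weights, with a default deep-space color covering the far-from-everything case. In C#, with XNA-style vectors standing in for colors (s.Center, s.Radius, s.Color and spaceColor are assumptions):

        // Blend system background colors by radius-based falloff.
        Vector3 color = spaceColor; // hypothetical default background
        Vector3 sum = Vector3.Zero;
        float total = 0f;
        foreach (var s in systems)
        {
            float d = Vector3.Distance(cameraPos, s.Center);
            float w = Math.Max(0f, 1f - d / s.Radius); // 1 at center, 0 at the rim
            sum += w * s.Color;
            total += w;
        }
        if (total > 0f)
            color = Vector3.Lerp(spaceColor, sum / total, Math.Min(1f, total));

    Overlap and containment fall out automatically: inside two systems, both weights are positive and the nearer center dominates the mix.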

