Search Results

Search found 33194 results on 1328 pages for 'development approach'.


  • Best way to go about sorting 2D sprites in a "RPG Maker" styled RPG

    - by Aaron Stewart
    I am trying to come up with the best way to draw overlapping sprites without issues. I was thinking of having a SortedDictionary and setting each Entity's key to its Y position relative to the max bound of the simulation, i.e. the Z value. I'd update the "Z" value in the update method each frame, if the entity's position has changed at all. For those who don't know what I mean: I want a character standing in front of another character to be drawn on top, and a character standing behind to be drawn behind. I'm leery of using SpriteBatch's back-to-front or front-to-back sorting; I've done some searching and am under the impression they are a bad idea, and I want to know exactly how other people deal with their depth sorting. Ultimately I'm just trying to settle on a good sorting method before I get too far in, so I can refactor the system effectively.
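
    A minimal sketch of the sort-by-Y idea in XNA/MonoGame (the Entity type, its fields, and the method names below are placeholders, not from the post): sorting a plain list each frame and drawing in that order sidesteps the duplicate-key problem a SortedDictionary has when two entities share the same Y.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Hypothetical entity type; names are placeholders, not from the original post.
        class Entity
        {
            public Texture2D Texture;
            public Vector2 Position;   // Position.Y is the sort key (the "Z" in the question)
        }

        static class DepthSort
        {
            // Sort back-to-front by Y, then draw in that order.
            // Re-sorting a mostly sorted list each frame is cheap.
            public static void Draw(SpriteBatch spriteBatch, List<Entity> entities)
            {
                entities.Sort((a, b) => a.Position.Y.CompareTo(b.Position.Y));

                spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
                foreach (Entity e in entities)
                    spriteBatch.Draw(e.Texture, e.Position, Color.White);
                spriteBatch.End();
            }
        }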

    Read the article

  • Procedural world generation oriented on gameplay features

    - by Richard Fabian
    In large procedural landscape games, the land seems dull, but that's probably because the real world is largely dull, with only limited places where the scenery is dramatic or tactical. Looking at world generation from this point of view, a landscape generator for a game needs to follow not the rules of landscaping but rules married to the expectations of the gamer. For example, there could be a choke point / route generator that creates hills, ravines, rivers, and mountains between cities, rather than cities plotted on the land based on the resources or conditions generated by the mountains and rainfall patterns. Is there any existing work that does this: starting with cities or population centres and then adding the terrain afterwards?

    Read the article

  • Libgdx Palette Swap

    - by Haedrian
    I'm developing a game using the Libgdx library. I'm trying to implement a very simple palette swap functionality (basically just complete recolouring of some areas; I don't even need to have various shades), but I don't have any idea where to begin. The closest I've come is trying to draw the picture myself using a Pixmap, but that approach appears to be horribly unmaintainable and produces oodles of code.

    Read the article

  • Box2D: How to get the Exact Collision Point and ignore the collision (from 2 "ghost bodies")

    - by Moritz
    I have a very basic problem with Box2D. For an arena-type game where you can throw scriptable "missiles" at other players, I decided to use Box2D for the collision detection between the players and the missiles. Players and missiles have their own circular shapes with specific (varying) sizes. But I don't want to use dynamic bodies, because the missiles need to move themselves in any way they want (defined in the script) and shouldn't be resolved unless the script wants it. The behaviour I am looking for is as follows (for each time step): the velocity of each missile is set by the specific missile script; each missile is moved according to that velocity; if a collision occurs now, I want to get the exact position of impact, and I need a mechanism to decide whether the missile should just ignore the collision (for example a collision between two fireballs which shouldn't interact) or accept it (so they are resolved and don't overlap anymore). So is there a way in Box2D to create ghost bodies and listen to collisions from them, then decide whether to ignore each collision or accept it and resolve their positions? I hope I was clear enough and would be happy about any help!

    Read the article

  • unity4.3 rigidbody2d unexpected force behaviour

    - by Lilz Votca Love
    I've edited the question; here is my problem. I have a player with a Rigidbody2D attached. The player is able to double-jump in the air nicely, stick to walls when colliding with them, and slowly slide down to the ground. All movement is handled through physics, with no transform manipulation. In the player's FixedUpdate I do something similar to this:

        void FixedUpdate()
        {
            if (wall && Input.GetButtonDown("Jump"))
            {
                if (facingright) // player is facing the left side of the wall
                {
                    rigidbody2D.AddForce(new Vector2(-1f, 2f) * jumpforce);
                    // The player should jump backwards along this directional vector and
                    // follow a smooth curve, which in this part works well.
                }
                else
                {
                    rigidbody2D.AddForce(new Vector2(1f, 2f) * jumpforce);
                    // This is where everything gets complicated: as you should have noticed,
                    // this is the same directional vector with only the x axis sign flipped,
                    // and the same amount of force is used, but it behaves like the red curve
                    // in the picture below.
                }
            }
        }

    (Bad behaviour and vector shown in red.) I tested the same thing (both AddForce calls) for a simple jump and they behave exactly as described above in the picture. So here is my problem: jumping diagonally forward with rigidbody2D.AddForce() does not have the same impact and does not follow the same curve as jumping in the opposite direction with exactly the same amount of force. If I could fix this or get past it, I could implement a wall-jump system like a ninja jumping in zigzag between two opposite walls to climb them. Any ideas or alternatives?
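
    One thing worth trying, sketched below under the assumption that the usual Unity 4.x API (rigidbody2D, AddForce) is in play: clear the body's existing velocity before applying the jump force, since leftover horizontal velocity adds to one direction and fights the other, which produces exactly this kind of asymmetric curve. The field names are placeholders for whatever the movement code already tracks.

        using UnityEngine;

        public class WallJump : MonoBehaviour
        {
            public float jumpForce = 400f;   // assumed tuning value
            public bool facingRight;         // set elsewhere by the movement code
            public bool onWall;              // set elsewhere by the wall-collision check

            void FixedUpdate()
            {
                // Note: input is better sampled in Update and latched for FixedUpdate;
                // kept here to mirror the structure in the question.
                if (onWall && Input.GetButtonDown("Jump"))
                {
                    // Clear leftover velocity so both directions start from the same state.
                    rigidbody2D.velocity = Vector2.zero;

                    Vector2 dir = facingRight ? new Vector2(-1f, 2f) : new Vector2(1f, 2f);
                    rigidbody2D.AddForce(dir.normalized * jumpForce);
                }
            }
        }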

    Read the article

  • What math should all game programmers know?

    - by Tetrad
    Simple enough question: What math should all game programmers have a firm grasp of in order to be successful? I'm not specifically talking about rendering math or anything in the niche areas of game programming; I mean the things every game programmer should know about, and will probably find useful if they don't already. Note: as there is no one correct answer, this question (and its answers) is a community wiki. Also, if you would like fancy LaTeX math equations, feel free to use http://mathurl.com/.

    Read the article

  • Octree implementation for frustum culling

    - by Manvis
    I'm learning modern (>=3.1) OpenGL by coding a 3D turn-based strategy game in C++. The maps are composed of 100x90 3D hexagon tiles that range from 50 to 600 tris (20 different types), plus any player units on those tiles. My current rendering technique involves sorting meshes by the shaders they use (minimizing state changes) and then calling glDrawElementsInstanced() for drawing. I still get a solid 16.6 ms/frame on my GTX 560 Ti machine, but the game struggles (45.45 ms/frame) on an old 8600GT card. I'm certain that using an octree and frustum culling will help me here, but I have a few questions before I start implementing it: Is it OK for an octree node to have multiple meshes in it (e.g. can a soldier and the hex tile he's standing on end up in the same octree node)? How is one supposed to treat changes in object position (e.g. several units moving 3 hexes down)? I can't seem to find a good explanation of how to do it. As I've noticed, sorting meshes by shader is a really good way to save GPU time. If I put node contents into, let's say, a std::list and sort it before rendering, do you think I would gain any performance, or would it just create overhead on the CPU's end? I know that this sounds like early optimization and that implementing + testing would be the best way to find out, but perhaps someone knows from experience?
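
    On the first two questions, a minimal sketch (in C#, purely illustrative; the type names are placeholders and not from the post): a node simply keeps a list of objects, so a soldier and his hex tile can share a node, and a moved object is re-inserted from the root only when it has left its current node's bounds.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        // Minimal placeholder for anything the tree stores (a tile, a unit, ...).
        class SceneObject
        {
            public BoundingBox Bounds;
            public OctreeNode Node;          // back-pointer so a move is O(1) to locate
        }

        class OctreeNode
        {
            public BoundingBox Bounds;
            public List<SceneObject> Objects = new List<SceneObject>();  // many objects per node is fine
            public OctreeNode[] Children;    // null for a leaf

            public void Insert(SceneObject obj)
            {
                if (Children != null)
                {
                    foreach (OctreeNode child in Children)
                    {
                        if (child.Bounds.Contains(obj.Bounds) == ContainmentType.Contains)
                        {
                            child.Insert(obj);
                            return;
                        }
                    }
                }
                // Doesn't fully fit any child (or this is a leaf): keep it here.
                Objects.Add(obj);
                obj.Node = this;
            }
        }

        static class OctreeMoves
        {
            // Called when a unit walks to another hex: re-insert only if it left its node.
            public static void OnMoved(OctreeNode root, SceneObject obj)
            {
                if (obj.Node.Bounds.Contains(obj.Bounds) == ContainmentType.Contains)
                    return;                         // still inside the same node, nothing to do
                obj.Node.Objects.Remove(obj);
                root.Insert(obj);
            }
        }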

    Read the article

  • Simple 2d game pathfinding

    - by Kooi Nam Ng
    So I was trying to implement simple pathfinding on iOS, but the outcome seems less satisfactory than what I intended to achieve. The thing is, units in games like Warcraft and Red Alert move in all directions, whereas units in my case move in at most 8 directions, since those 8 directions lead to the next available node. What should I do in order to achieve the result described above? Shrink the tile size? The screenshot is intended as illustration: the rocks are the obstacles, the two ends of the green path are the start and end of the path, and the red line is the path that I want to achieve. http://i.stack.imgur.com/lr19c.jpg
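
    A common alternative to shrinking the tile size is to post-process the 8-direction path with line-of-sight smoothing ("string pulling"): drop every node the unit can skip without crossing an obstacle, then let the unit move along the remaining any-angle segments. A rough sketch, with the walkability test left as a callback (all names here are illustrative):

        using System;
        using System.Collections.Generic;

        struct Cell { public int X, Y; public Cell(int x, int y) { X = x; Y = y; } }

        static class PathSmoothing
        {
            // Keeps a node only when the straight line from the previous kept node
            // to the next candidate would clip an obstacle.
            public static List<Cell> Smooth(List<Cell> path, Func<Cell, bool> walkable)
            {
                if (path.Count < 3) return new List<Cell>(path);

                var result = new List<Cell> { path[0] };
                int anchor = 0;
                for (int i = 2; i < path.Count; i++)
                {
                    if (!HasLineOfSight(path[anchor], path[i], walkable))
                    {
                        anchor = i - 1;
                        result.Add(path[anchor]);
                    }
                }
                result.Add(path[path.Count - 1]);
                return result;
            }

            // Coarse line-of-sight test: sample points along the segment and check the
            // cell under each sample. A grid-accurate version would use a supercover
            // Bresenham walk instead.
            static bool HasLineOfSight(Cell a, Cell b, Func<Cell, bool> walkable)
            {
                int steps = Math.Max(Math.Abs(b.X - a.X), Math.Abs(b.Y - a.Y)) * 2;
                for (int s = 1; s < steps; s++)
                {
                    float t = s / (float)steps;
                    int x = (int)Math.Round(a.X + (b.X - a.X) * t);
                    int y = (int)Math.Round(a.Y + (b.Y - a.Y) * t);
                    if (!walkable(new Cell(x, y))) return false;
                }
                return true;
            }
        }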

    Read the article

  • How closely can a game resemble another game without legal problems

    - by Fuu
    The majority of games build on successes of other games and many are downright clones. So where is the limit of emulating before legal issues come into play? Is it down to literary or graphic work like characters and storyline that cause legal problems, or can someone actually claim to own gameplay mechanics? There are so many similar clone games out there that the rules are probably very slack or nonexistent, but I'd like to hear the views of more experienced developers / designers.

    Read the article

  • How to get quality sprite sheet generation with rotations

    - by BenMaddox
    I'm working on a game that uses sprite sheets with rotation for animations. While the effect is pretty good, the quality of the rotations is somewhat lacking. I exported a Flash animation to a PNG sequence and then used a C# app to do matrix-based rotations (System.Drawing.Drawing2D.Matrix). Unfortunately, there are several places where the image gets clipped. What would you suggest as a way to get high-quality rotations from either Flash or the exported PNGs? A circle should fit within the same image boundaries. I don't mind a new program that I must write or an existing program I must download/buy.
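
    One frequent cause of the clipping is rotating into a destination bitmap of the original size; if the destination is made at least as large as the sprite's diagonal, every rotation fits. A hedged sketch using System.Drawing, as in the post (a real exporter would run this over every frame of the sequence):

        using System;
        using System.Drawing;
        using System.Drawing.Drawing2D;

        static class SpriteRotator
        {
            // Returns a new bitmap large enough that any rotation of 'source' fits,
            // rotated by 'degrees' about the sprite's centre.
            public static Bitmap RotateWithoutClipping(Bitmap source, float degrees)
            {
                // The bounding circle's diameter is the sprite's diagonal.
                int size = (int)Math.Ceiling(Math.Sqrt(
                    source.Width * (double)source.Width + source.Height * (double)source.Height));

                var result = new Bitmap(size, size);
                using (Graphics g = Graphics.FromImage(result))
                {
                    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    g.TranslateTransform(size / 2f, size / 2f);   // move the origin to the centre
                    g.RotateTransform(degrees);                   // rotate about it
                    g.DrawImage(source, -source.Width / 2f, -source.Height / 2f);
                }
                return result;
            }
        }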

    Read the article

  • How to make rigid bodies collide with Apex Clothing in PhysX for Maya

    - by b1nary.atr0phy
    According to the [Apex] Clothing Overview section of the documentation: Colliding with Rigid Bodies Rigid bodies present in your scene will push clothing around roughly as you might expect. Well, I beg to differ. The Apex Cloth collides with the floor just fine, but that's about the only thing it collides with (unless I add ragdoll to the same skeleton that the cloth is attached to.) So for example, if I try to bounce a ball (dynamic rigid body) into the cloth, it simply bounces through it. If I try to walk an actor with ragdoll through it, he simply clips through it as well. Anyone have any insight on this?

    Read the article

  • UIView vs CCLayer and Making gestures work in Cocos2d

    - by Lewis
    Now, I've been searching for an answer to this question for at least 3 days. I've tried many cocos2d forums, including the official one, and have heard nothing back. I've found a project which uses custom gestures: https://github.com/melle/OneFingerRotationGestureDemo Explanation here: http://blog.mellenthin.de/archives/2012/02/13/an-one-finger-rotation-gesture-recognizer/ Now I want to implement that behaviour onto a sprite in a cocos2d application. I've tried to do this but it fails to work. It uses a view controller which inherits like this: @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate> Now my question is how would I implement the OneFingerRotationGesture behaviour onto a CCSprite in cocos2d 2.0? As interfaces in cocos2d look like this: @interface HelloWorldLayer : CCLayer Now I have asked a similar question to this on Stack Overflow and a user directed me to this link: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers Which I believe makes use of gestures (like the first linked GitHub project) but not custom gestures. I lack the Obj-C skills to work out and implement the functionality into my game, so I would appreciate it if someone could explain the differences between CCLayer and UIViewController, and help me implement the OneFingerRotation gesture into a cocos2d 2.0 project. Regards, Lewis.

    Read the article

  • converting a mouse click to a ray

    - by Will
    I have a perspective projection. When the user clicks on the screen, I want to compute the ray between the near and far planes that projects from the mouse point, so I can do some ray intersection code with my world. I am using my own matrix and vector and ray classes and they all work as expected. However, when I try and convert the ray to world coordinates my far always ends up as 0,0,0 and so my ray goes from the mouse click to the centre of the object space, rather than through it. (The x and y coordinates of near and far are identical, they differ only in the z coordinates where they are negatives of each other.)

        GLint vp[4];
        glGetIntegerv(GL_VIEWPORT, vp);

        matrix_t mv, p;
        glGetFloatv(GL_MODELVIEW_MATRIX, mv.f);
        glGetFloatv(GL_PROJECTION_MATRIX, p.f);

        const matrix_t inv = (mv*p).inverse();
        const float unit_x = (2.0f*((float)(x-vp[0])/(vp[2]-vp[0])))-1.0f,
                    unit_y = 1.0f-(2.0f*((float)(y-vp[1])/(vp[3]-vp[1])));

        const vec_t near(vec_t(unit_x,unit_y,-1)*inv);
        const vec_t far(vec_t(unit_x,unit_y,1)*inv);
        ray = ray_t(near,far-near);

    What have I got wrong? (How do you unproject the mouse-point?)
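
    For comparison, and without claiming this is where the bug above lies, the usual unprojection order is: transform the clip-space point by the inverse of (projection * view) and then divide by w; skipping that divide is a common reason both unprojected points collapse toward the origin. A sketch using System.Numerics (row-vector convention, GL-style -1/+1 depth):

        using System.Numerics;

        static class Unproject
        {
            // Converts a mouse position to a world-space ray (origin + direction).
            // viewProj = view * projection (System.Numerics uses row-vector convention).
            public static (Vector3 origin, Vector3 dir) MouseRay(
                float mouseX, float mouseY, float viewportW, float viewportH, Matrix4x4 viewProj)
            {
                // Mouse -> normalised device coordinates in [-1, 1], with Y flipped.
                float ndcX = 2f * mouseX / viewportW - 1f;
                float ndcY = 1f - 2f * mouseY / viewportH;

                Matrix4x4 inv;
                Matrix4x4.Invert(viewProj, out inv);

                // Unproject one point on the near plane and one on the far plane.
                Vector3 near = ToWorld(new Vector4(ndcX, ndcY, -1f, 1f), inv);
                Vector3 far  = ToWorld(new Vector4(ndcX, ndcY,  1f, 1f), inv);

                return (near, Vector3.Normalize(far - near));
            }

            static Vector3 ToWorld(Vector4 ndc, Matrix4x4 inv)
            {
                Vector4 h = Vector4.Transform(ndc, inv);
                return new Vector3(h.X, h.Y, h.Z) / h.W;   // the perspective divide
            }
        }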

    Read the article

  • Open Source AI Bot interfaces

    - by David Young
    What are some open source AI bot interfaces? Similar to Pogamut 3 GameBots2004 for custom Unreal Tournament bots, or the Brood War API for StarCraft bots, etc. If you could, please post one AI bot interface per answer (make sure to provide a link) and give a brief summary of what it offers. Please include what type of bot interface structure it is: client/server, server/server, etc. For example, BWAPI is client/server and emulates a real player.

    Read the article

  • RTS game diplomacy heuristics

    - by kd304
    I'm reimplementing an old 4X space RTS game which has diplomacy options. The original was based on a relation scoring system (0..100) and a set of negotiation options (improve relations, alliance, declare war, etc.). The AI player usually had 3 options: yes, maybe and no, each adding or removing some amount from the relation score. How should the AI choose between the options? How does diplomacy work in other games, and how is it implemented? Any good books/articles on the subject? (Googling the term diplomacy yields the game Diplomacy, which is unhelpful.)

    Read the article

  • Algorithm to generate multifaced cube?

    - by OnePie
    Is there any elegant solution to generate a simple six-sided cube, where each side is made out of more than one face? The method I have used ended up a horrible and complicated mess of logic that is impossible to follow and, most likely, to maintain. The algorithm should not generate redundant vertices, and should output the index list for the mesh as well. The reason I need this is that the cube's vertices will be deformed depending on various factors, meaning that a simple six-faced cube will not do.
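
    One workable scheme, sketched below (illustrative only; names and conventions are not from the post): describe each face by an origin corner plus two edge vectors, walk an n-by-n grid over it, and deduplicate vertices through a dictionary keyed on the rounded position, so shared edge and corner vertices are emitted once and the index list falls out of the same pass.

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        static class CubeBuilder
        {
            // Builds an axis-aligned cube spanning [-1,1]^3 with each face split into
            // n x n quads (two triangles per quad). Vertices shared between faces and
            // edges are emitted only once by deduplicating on position.
            public static void Build(int n, out List<Vector3> vertices, out List<int> indices)
            {
                vertices = new List<Vector3>();
                indices = new List<int>();
                var lookup = new Dictionary<Vector3, int>();

                // Each face: an origin corner and two edge vectors spanning the face.
                var faces = new (Vector3 origin, Vector3 right, Vector3 up)[]
                {
                    (new Vector3(-1, -1,  1),  Vector3.UnitX * 2,  Vector3.UnitY * 2), // +Z
                    (new Vector3( 1, -1, -1), -Vector3.UnitX * 2,  Vector3.UnitY * 2), // -Z
                    (new Vector3( 1, -1,  1), -Vector3.UnitZ * 2,  Vector3.UnitY * 2), // +X
                    (new Vector3(-1, -1, -1),  Vector3.UnitZ * 2,  Vector3.UnitY * 2), // -X
                    (new Vector3(-1,  1,  1),  Vector3.UnitX * 2, -Vector3.UnitZ * 2), // +Y
                    (new Vector3(-1, -1, -1),  Vector3.UnitX * 2,  Vector3.UnitZ * 2), // -Y
                };

                foreach (var f in faces)
                {
                    for (int y = 0; y < n; y++)
                    for (int x = 0; x < n; x++)
                    {
                        int i00 = GetIndex(f, x,     y,     n, vertices, lookup);
                        int i10 = GetIndex(f, x + 1, y,     n, vertices, lookup);
                        int i01 = GetIndex(f, x,     y + 1, n, vertices, lookup);
                        int i11 = GetIndex(f, x + 1, y + 1, n, vertices, lookup);
                        indices.AddRange(new[] { i00, i10, i11, i00, i11, i01 });
                    }
                }
            }

            static int GetIndex((Vector3 origin, Vector3 right, Vector3 up) f,
                                int x, int y, int n,
                                List<Vector3> vertices, Dictionary<Vector3, int> lookup)
            {
                // Round so identical corners coming from different faces hash identically.
                Vector3 p = f.origin + f.right * (x / (float)n) + f.up * (y / (float)n);
                p = new Vector3(MathF.Round(p.X, 5), MathF.Round(p.Y, 5), MathF.Round(p.Z, 5));

                int index;
                if (!lookup.TryGetValue(p, out index))
                {
                    index = vertices.Count;
                    vertices.Add(p);
                    lookup[p] = index;
                }
                return index;
            }
        }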

    Read the article

  • How to load stacking chunks on the fly?

    - by Brettetete
    I'm currently working on an infinite world, mostly inspired by Minecraft. A chunk consists of 16x16x16 blocks; a block (cube) is 1x1x1. This runs very smoothly with a view range of 12 chunks (12x16) on my computer. Fine. When I change the chunk height to 256 this becomes, obviously, incredibly laggy. So what I basically want to do is stack chunks. That means my world could be [8,16,8] chunks large. The question now is how to generate chunks on the fly. At the moment I generate the chunks that don't exist yet in a circle around my position (near to far). Since I don't stack chunks yet, this is not very complex. An important side note: I also want to have biomes, with different min/max heights. So in the Flatlands biome the highest layer with blocks would be 8 (8x16), while in the Mountains biome the highest layer with blocks would be 14 (14x16), just as an example. What I could do would be loading 1 chunk above and below me, for example. But the problem there is that transitions between different biomes could be larger than one chunk on y. (My current chunk loading in action.) For completeness, here is my current chunk loading "algorithm":

        private IEnumerator UpdateChunks()
        {
            for (int i = 1; i < VIEW_RANGE; i += ChunkWidth)
            {
                float vr = i;
                for (float x = transform.position.x - vr; x < transform.position.x + vr; x += ChunkWidth)
                {
                    for (float z = transform.position.z - vr; z < transform.position.z + vr; z += ChunkWidth)
                    {
                        _pos.Set(x, 0, z); // no y, yet
                        _pos.x = Mathf.Floor(_pos.x / ChunkWidth) * ChunkWidth;
                        _pos.z = Mathf.Floor(_pos.z / ChunkWidth) * ChunkWidth;

                        Chunk chunk = Chunk.FindChunk(_pos);
                        // If the chunk is already created, continue
                        if (chunk != null) continue;

                        // Create a new chunk..
                        chunk = (Chunk) Instantiate(ChunkFab, _pos, Quaternion.identity);
                    }
                }
                // Skip to next frame
                yield return 0;
            }
        }
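
    One way to extend this to stacked chunks, sketched as a self-contained helper (the per-column height lookup is a placeholder for whatever the terrain generator exposes): plan the chunk origins column by column and only stack upward as far as that column's biome can reach, so flat biomes never instantiate 256 blocks of empty space. The caller would still skip positions for which Chunk.FindChunk already returns a chunk, exactly as in the coroutine above.

        using System;
        using System.Collections.Generic;
        using UnityEngine;

        static class ChunkPlanner
        {
            // Returns the chunk origins that should exist around 'center', stacking
            // vertically only as high as each column's biome allows.
            // 'maxLayerAt' is a placeholder for the generator's per-column height limit.
            public static List<Vector3> ChunksToLoad(Vector3 center, int viewRange,
                                                     int chunkSize, Func<float, float, int> maxLayerAt)
            {
                var result = new List<Vector3>();
                for (float x = center.x - viewRange; x < center.x + viewRange; x += chunkSize)
                for (float z = center.z - viewRange; z < center.z + viewRange; z += chunkSize)
                {
                    float cx = Mathf.Floor(x / chunkSize) * chunkSize;
                    float cz = Mathf.Floor(z / chunkSize) * chunkSize;

                    // Stack chunks upward, but stop at this column's biome ceiling so
                    // flat biomes only ever pay for a few vertical chunks.
                    int maxLayer = maxLayerAt(cx, cz);
                    for (int layer = 0; layer <= maxLayer; layer++)
                        result.Add(new Vector3(cx, layer * chunkSize, cz));
                }
                return result;
            }
        }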

    Read the article

  • How to attach a sprite to a TMXTiledMap at a particular coordinate, in AndEngine?

    - by shailenTJ
    I am trying to add a sprite at a "grid" location on the tiled map. The TMX tiled Map is like a grid, and you can access the size of the grid by calling mTMXtiledMap.getTileRows() and mTMXtiledMap.getTileColumns(). I want to add an object at grid location, say (2, 5). My tileMap is of size (10,10). How can I do that? There is no function like mTMXTiledMap.addChild(int x, int y, Entity mEntity). I would appreciate any suggestions!

    Read the article

  • If Expression True in Immediate Window But Code In if Block Never Runs

    - by Julian
    I set a breakpoint in my code in MonoDevelop to break whenever I click on a surface. I then enter the immediate window and test whether the if statement will return true when comparing two Vector3's. It does return true. However, when I step over, the code is never run, as though the statement had evaluated to false. Does anyone know how this could be possible? I've attached a screenshot. Here is the picture of my debug window and immediate window. You can see where the immediate window evaluates to true. The second breakpoint is not hit. Here are the details of the two Vector3's I am comparing. Does anyone know why I am experiencing this? It really seems like an anomaly to me :/ Does it have something to do with threading?

    Read the article

  • Dynamic Environment Creation

    - by Jack
    I was wondering, and I'm thinking on a smaller-scale, more abstract level here: how does one create a dynamic environment a la Minecraft? Specifically, thinking of the world as a 3-dimensional array of block objects, how is it made so that large features such as oceans are created? The language isn't important; I'm thinking on a conceptual level, but if it helps, I use C# or C++. Thanks for any help!

    Read the article

  • Algorithmically generating neon layers on pixel grid

    - by user190929
    In an attempt at a screensaver I am making: I am a fan of neon-like graphics, which, of course, look great against a black background. As I understand it, neon, graphically speaking, is essentially a gradient of a color, brightest in the center and getting darker proceeding outward. A more accurate description is similar, but separates it into tubes and glow: the tubes are mostly white, while the glow is where most of the color is seen. Well... the tubes could also be a light variant of the color, you could say; the glow is darker. Anyhow, my question is: how could you generate such a thing given an initial pattern of pixels that would be the tubes? For example, let's say I want to make a neon 'H'. Via the libraries I can obtain the rectangles of pixels which represent it, but I want to make it look neonized. How could I algorithmically achieve such an effect given a base tube shape and base color? EDIT: OK, I misstated that; got a bit distracted. My purpose for this was similar to a neon effect, but not quite. Sorry about that. What I am looking for is something like this. Start with a pattern of pixels:

        [!][!][!][!][!][!][!][!]
        [!][!][O][!][!][!][!][!]
        [!][!][O][O][!][!][!][!]
        [!][!][!][!][O][!][!][!]
        [!][!][!][!][!][!][!][!]

    How do I find the E pixels?

        [!][E][E][E][!][!][!][!]
        [!][E][O][E][E][!][!][!]
        [!][E][O][O][E][E][!][!]
        [!][E][E][E][O][E][!][!]
        [!][!][!][E][E][E][!][!]

    Sorry if that looks bad.
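
    What the second grid shows is a morphological dilation of the tube mask: every empty cell with a filled 8-neighbour becomes an E cell, and running the same pass again over the union of O and E produces the next, darker ring. A small sketch (plain C#, purely illustrative):

        static class Glow
        {
            // Marks every empty cell that touches a filled cell (8-neighbourhood).
            // Run it repeatedly, feeding the result back in, to grow ring after ring.
            public static bool[,] Dilate(bool[,] filled)
            {
                int w = filled.GetLength(0), h = filled.GetLength(1);
                var ring = new bool[w, h];

                for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                {
                    if (filled[x, y]) continue;
                    for (int dx = -1; dx <= 1 && !ring[x, y]; dx++)
                    for (int dy = -1; dy <= 1 && !ring[x, y]; dy++)
                    {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && ny >= 0 && nx < w && ny < h && filled[nx, ny])
                            ring[x, y] = true;
                    }
                }
                return ring;
            }
        }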

    Read the article

  • OpenGL-ES: clearing the alpha of the FrameBufferObject

    - by MrDatabase
    This question is a follow-up to Texture artifacts on iPad. How does one "clear the alpha of the render texture frameBufferObject"? I've searched around here, Stack Overflow and various search engines but no luck. I've tried a few things, for example calling glClear(GL_COLOR_BUFFER_BIT) at the beginning of my render loop, but it doesn't seem to make a difference. Any help is appreciated since I'm still new to OpenGL. Cheers! p.s. I read on SO and in Apple's documentation that glClear should always be called at the beginning of the render loop. Agree? Disagree? Here's where I read this: http://stackoverflow.com/questions/2538662/how-does-glclear-improve-performance

    Read the article

  • Limiting game loop to exactly 60 tics per second (Android / Java)

    - by user22241
    So I'm having terrible problems with stuttering sprites. My rendering and logic take less than a game tick (16.6667 ms). However, although my game loop runs most of the time at 60 ticks per second, it sometimes goes up to 61, and when this happens the sprites stutter. Currently, the variables I use are:

        //Game updates per second
        final int ticksPerSecond = 60;
        //Amount of time each update should take
        final int skipTicks = (1000 / ticksPerSecond);

    This is my current game loop:

        @Override
        public void onDrawFrame(GL10 gl) {
            // TODO Auto-generated method stub
            //This method will run continuously
            //You should call both 'render' and 'update' methods from here
            //Set curTime initial value if '0'
            //Set/Re-set loop back to 0 to start counting again
            loops = 0;
            while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                //Time correction to compensate for the missing .6667ms when using int values
                nextGameTick += skipTicks;
                timeCorrection += (1000d / ticksPerSecond) % 1;
                nextGameTick += timeCorrection;
                timeCorrection %= 1;
                //Increase loops
                loops++;
            }
            render();
        }

    I realise that my skipTicks is an int and will therefore come out as 16 rather than 16.6667. However, I tried changing it (and ticksPerSecond) to longs and got the same problem. I also tried changing the timer to nanoTime and skipTicks to 1000000000 / ticksPerSecond, but then everything just ran at about 300 ticks per second. All I'm attempting to do is to limit my game loop to 60: what is the best way to guarantee that my game updates never happen more than 60 times a second? Please note, I do realise that very old devices might not be able to handle 60, although I really don't expect this to happen; I've tested it on the lowest device I have and it easily achieves 60 ticks. So I'm not worried about a device being unable to handle 60 ticks per second, but rather need to limit it. Any help would be appreciated.
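
    For reference, the underlying pattern is language-neutral even though the loop above is Java; below is a sketch in C# of the usual fixed-timestep accumulator, which keeps the step as a double (1000.0 / 60) and so avoids the 16 ms vs 16.667 ms drift that produces the occasional 61st tick. The names here are placeholders, not the original code.

        using System.Diagnostics;

        class FixedStepLoop
        {
            const double StepMs = 1000.0 / 60.0;      // exactly one tick's worth of time
            const int MaxCatchUpSteps = 5;            // same idea as maxFrameskip in the post

            readonly Stopwatch clock = Stopwatch.StartNew();
            double lastMs;
            double accumulator;

            // Call once per rendered frame (the equivalent of onDrawFrame).
            public void Frame(System.Action updateLogic, System.Action render)
            {
                double now = clock.Elapsed.TotalMilliseconds;
                accumulator += now - lastMs;
                lastMs = now;

                int steps = 0;
                while (accumulator >= StepMs && steps < MaxCatchUpSteps)
                {
                    updateLogic();          // always advances the simulation by exactly 1/60 s
                    accumulator -= StepMs;
                    steps++;
                }
                render();
            }
        }

    Any remaining stutter usually comes from rendering whole ticks; interpolating the rendered state by accumulator / StepMs between the last two updates smooths it out.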

    Read the article

  • projection / view matrix: the object is bigger than it should be and depth does not affect the vertices

    - by Francesco Noferi
    I'm currently trying to write a C 3D software rendering engine from scratch, just for fun and to get an insight into what OpenGL does behind the scenes and what 90's programmers had to do on DOS. I have written my own matrix library and tested it without noticing any issues, but when I try projecting the vertices of a simple 2x2 cube at 0,0 as seen by a basic camera at 0,0,10, the cube appears way bigger than the application's window. If I scale the vertices' coordinates down by 8 times I can see a proper cube centered on the screen. This cube doesn't seem to be in perspective: when seen from the front, the back vertices perfectly overlap with the front ones, so I'm quite sure it's not correct. This is how I create the view and projection matrices (vec4_initd initializes the vectors with w=0, vec4_initw initializes the vectors with w=1):

        void mat4_lookatlh(mat4 *m, const vec4 *pos, const vec4 *target, const vec4 *updirection)
        {
            vec4 fwd, right, up;

            // fwd = norm(pos - target)
            fwd = *target;
            vec4_sub(&fwd, pos);
            vec4_norm(&fwd);

            // right = norm(cross(updirection, fwd))
            vec4_cross(updirection, &fwd, &right);
            vec4_norm(&right);

            // up = cross(right, forward)
            vec4_cross(&fwd, &right, &up);

            // orientation and translation matrices combined
            vec4_initd(&m->a, right.x, up.x, fwd.x);
            vec4_initd(&m->b, right.y, up.y, fwd.y);
            vec4_initd(&m->c, right.z, up.z, fwd.z);
            vec4_initw(&m->d, -vec4_dot(&right, pos), -vec4_dot(&up, pos), -vec4_dot(&fwd, pos));
        }

        void mat4_perspectivefovrh(mat4 *m, float fovdegrees, float aspectratio, float near, float far)
        {
            float h = 1.f / tanf(ftoradians(fovdegrees / 2.f));
            float w = h / aspectratio;

            vec4_initd(&m->a, w, 0.f, 0.f);
            vec4_initd(&m->b, 0.f, h, 0.f);
            vec4_initw(&m->c, 0.f, 0.f, -far / (near - far));
            vec4_initd(&m->d, 0.f, 0.f, (near * far) / (near - far));
        }

    This is how I project my vertices:

        void device_project(device *d, const vec4 *coord, const mat4 *transform, int *projx, int *projy)
        {
            vec4 result;
            mat4_mul(transform, coord, &result);
            *projx = result.x * d->w + d->w / 2;
            *projy = result.y * d->h + d->h / 2;
        }

        void device_rendervertices(device *d, const camera *camera, const mesh meshes[], int nmeshes, const rgba *color)
        {
            int i, j;
            mat4 view, projection, world, transform, projview;
            mat4 translation, rotx, roty, rotz, transrotz, transrotzy;
            int projx, projy;

            // vec4_unity = (0.f, 1.f, 0.f, 0.f)
            mat4_lookatlh(&view, &camera->pos, &camera->target, &vec4_unity);
            mat4_perspectivefovrh(&projection, 45.f, (float)d->w / (float)d->h, 0.1f, 1.f);

            for (i = 0; i < nmeshes; i++) {
                // world matrix = translation * rotz * roty * rotx
                mat4_translatev(&translation, meshes[i].pos);
                mat4_rotatex(&rotx, ftoradians(meshes[i].rotx));
                mat4_rotatey(&roty, ftoradians(meshes[i].roty));
                mat4_rotatez(&rotz, ftoradians(meshes[i].rotz));
                mat4_mulm(&translation, &rotz, &transrotz);  // transrotz = translation * rotz
                mat4_mulm(&transrotz, &roty, &transrotzy);   // transrotzy = transrotz * roty = translation * rotz * roty
                mat4_mulm(&transrotzy, &rotx, &world);       // world = transrotzy * rotx = translation * rotz * roty * rotx

                // transform matrix
                mat4_mulm(&projection, &view, &projview);    // projview = projection * view
                mat4_mulm(&projview, &world, &transform);    // transform = projview * world = projection * view * world

                for (j = 0; j < meshes[i].nvertices; j++) {
                    device_project(d, &meshes[i].vertices[j], &transform, &projx, &projy);
                    device_putpixel(d, projx, projy, color);
                }
            }
        }

    This is how the cube and camera are initialized:

        // test mesh
        cube = &meshlist[0];
        mesh_init(cube, "Cube", 8);
        cube->rotx = 0.f;
        cube->roty = 0.f;
        cube->rotz = 0.f;
        vec4_initw(&cube->pos, 0.f, 0.f, 0.f);
        vec4_initw(&cube->vertices[0], -1.f, 1.f, 1.f);
        vec4_initw(&cube->vertices[1], 1.f, 1.f, 1.f);
        vec4_initw(&cube->vertices[2], -1.f, -1.f, 1.f);
        vec4_initw(&cube->vertices[3], -1.f, -1.f, -1.f);
        vec4_initw(&cube->vertices[4], -1.f, 1.f, -1.f);
        vec4_initw(&cube->vertices[5], 1.f, 1.f, -1.f);
        vec4_initw(&cube->vertices[6], 1.f, -1.f, 1.f);
        vec4_initw(&cube->vertices[7], 1.f, -1.f, -1.f);

        // main camera
        vec4_initw(&maincamera.pos, 0.f, 0.f, 10.f);
        maincamera.target = vec4_zerow;

    And, just to be sure, this is how I compute matrix multiplications:

        void mat4_mul(const mat4 *m, const vec4 *va, vec4 *vb)
        {
            vb->x = m->a.x * va->x + m->b.x * va->y + m->c.x * va->z + m->d.x * va->w;
            vb->y = m->a.y * va->x + m->b.y * va->y + m->c.y * va->z + m->d.y * va->w;
            vb->z = m->a.z * va->x + m->b.z * va->y + m->c.z * va->z + m->d.z * va->w;
            vb->w = m->a.w * va->x + m->b.w * va->y + m->c.w * va->z + m->d.w * va->w;
        }

        void mat4_mulm(const mat4 *ma, const mat4 *mb, mat4 *mc)
        {
            mat4_mul(ma, &mb->a, &mc->a);
            mat4_mul(ma, &mb->b, &mc->b);
            mat4_mul(ma, &mb->c, &mc->c);
            mat4_mul(ma, &mb->d, &mc->d);
        }

    Read the article

  • Limit the amount a camera can pitch

    - by ChocoMan
    I'm having problems trying to limit the range my camera can pitch. Currently my camera can pitch around a model without restriction, but I'm having a hard time finding the degree/radian value the camera is at after pitching. Here is what I have so far:

        // Moves camera with thumbstick
        Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(speedAngleMAX);

        // Pitch camera around model
        public void cameraPitch(float pitch)
        {
            pitchAngle = ModelLoad.camTarget - ModelLoad.CameraPos;
            axisPitch = Vector3.Cross(Vector3.Up, pitchAngle); // pitch constrained to model's orientation
            axisPitch.Normalize();

            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axisPitch, pitch)) + ModelLoad.camTarget;
        }

    I've tried restraining the camera's Y position via ModelLoad.CameraPos.Y, but doing so gave me some unwanted results.
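
    One approach that avoids recovering the angle from the camera position at all, sketched below with assumed limit values: keep a running total of the pitch applied so far and clamp each frame's increment against it, then feed the clamped increment into the existing cameraPitch call.

        using Microsoft.Xna.Framework;

        class PitchLimiter
        {
            // Accumulated pitch, in radians, relative to the camera's starting elevation.
            float currentPitch;
            readonly float maxPitch = MathHelper.ToRadians(60f);   // assumed limit
            readonly float minPitch = MathHelper.ToRadians(-20f);  // assumed limit

            // Returns the portion of the requested pitch that is actually allowed this frame.
            public float Clamp(float requestedPitch)
            {
                float target = MathHelper.Clamp(currentPitch + requestedPitch, minPitch, maxPitch);
                float allowed = target - currentPitch;
                currentPitch = target;
                return allowed;
            }
        }

    Usage would then be something like cameraPitch(limiter.Clamp(Pitch)): the camera never passes the limits, and no angle has to be recovered from the position vector.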

    Read the article
