Search Results

Search found 24037 results on 962 pages for 'game design'.

Page 514 of 962

  • How can I convert an image from raw data in Android without any munging?

    - by stephelton
    I have raw image data (it may be PNG, JPG, ...) that I want decoded on Android without changing its pixel depth (bpp). In particular, when I load a grayscale (8 bpp) image that I want to use as alpha (glTexImage() with GL_ALPHA), it gets converted to 16 bpp (presumably 5_6_5). While I do have a plan B (actually, I'm probably on plan E by now, this is really becoming annoying), I would really like to discover an easy way to do this using what is readily available in the API. So far, I'm using BitmapFactory.decodeByteArray(). While I'm at it: I'm doing this from a native environment via JNI (passing the buffer in from C, and a new buffer back to C from Java). Any portable solution in C/C++ would be preferable, but I don't want to introduce anything that might break in future versions of Android, etc.
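
    Since a portable C/C++ decode is acceptable here, one option is a minimal sketch using the single-header stb_image library (my own suggestion, not part of the Android API); passing 0 as the last argument keeps the file's native channel count, so an 8 bpp grayscale stays single-channel:

        // Hedged sketch: decode in native code with stb_image, preserving channel count.
        #define STB_IMAGE_IMPLEMENTATION
        #include "stb_image.h"

        unsigned char* decodeKeepingDepth(const unsigned char* buf, int len,
                                          int* w, int* h, int* channels) {
            // desired_channels = 0 -> no conversion; a grayscale PNG stays 1 channel,
            // ready for glTexImage2D(..., GL_ALPHA, ..., GL_UNSIGNED_BYTE, pixels).
            return stbi_load_from_memory(buf, len, w, h, channels, 0);
            // Free the returned pixels later with stbi_image_free().
        }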

    Read the article

  • Jumping with Mecanim synchronization

    - by Abhishek Deb
    I am using Unity3D 4.1 for one of my projects. I have a robot character which is always running, animated with the Mecanim animation system. What I really want: when I press the space bar, the character should jump up in the air, triggering an animation clip, and by the time it reaches the ground the animation clip should also end. What actually happens: when I press the space bar, the character jumps in the air and the animation clip plays as it should, but it ends way before the character reaches the ground, so it looks like he is running in mid-air. What I have done: I have this humanoid robot set up with a jump animation bound to the space bar key. Also, instead of using root motion, I am moving the robot directly from code:

        // Jumping
        if (Input.GetKeyDown(KeyCode.Space)) {
            rigidbody.AddForce(Vector3.up * jumpVelocity);
            anim.SetBool("Jump", true);
        }
        else
            anim.SetBool("Jump", false);

    Character details: Rigidbody = mass: 30, freeze rotation: x, y, z. Capsule Collider = material: metal, center: (0, 4.5, 0), radius: 1, height: 11. Script = jumpVelocity: 20000. Jump animation clip: ~2 seconds. I am really out of ideas how to synchronize everything. Should I make the character jump in some other way so that it quickly comes down and touches the ground to match the animation clip? If yes, please provide a direction.
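
    A rough way to size things (my own back-of-envelope arithmetic, not from the question): for a simple ballistic jump the airborne time is t = 2*v0/g, so matching the ~2-second clip means launching at roughly v0 = g*t/2, and the launch impulse (or a directly assigned velocity) can be derived from that:

        // t = 2 * v0 / g   =>   v0 = g * t / 2
        const float g = 9.81f;            // assuming default gravity
        const float clipLength = 2.0f;    // seconds, from the jump clip
        const float v0 = g * clipLength / 2.0f;   // ~9.8 m/s upward launch speed
        const float impulse = 30.0f * v0;         // for a 30 kg body: ~294 N*s of impulse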

    Read the article

  • Stencil buffer appears to not be decrementing values correctly

    - by Alex Ames
    I'm attempting to use the stencil buffer as a clipper for my UI system, but I'm having trouble debugging a problem I'm running into. This is what I'm doing: a widget can pass a rectangle to the stencil clipper functions, which will increment the stencil buffer values that it covers. Then it will draw its children, which will only get drawn in the stencilled area (so that if they extend outside they'll be clipped). After a widget is done drawing its children, it pops that rectangle from the stack and in the process decrements the values in the stencil buffer that it had previously incremented. The slightly simplified code is below:

        static void drawStencil(Rect& rect, unsigned int ref) {
            // Save previous values of the color and depth masks
            GLboolean colorMask[4];
            GLboolean depthMask;
            glGetBooleanv(GL_COLOR_WRITEMASK, colorMask);
            glGetBooleanv(GL_DEPTH_WRITEMASK, &depthMask);

            // Turn off drawing
            glColorMask(0, 0, 0, 0);
            glDepthMask(0);

            // Draw vertices here ...

            // Turn everything back on
            glColorMask(colorMask[0], colorMask[1], colorMask[2], colorMask[3]);
            glDepthMask(depthMask);

            // Only render pixels in areas where the stencil buffer value == ref
            glStencilFunc(GL_EQUAL, ref, 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        }

        void pushScissor(Rect rect) {
            // Increment things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_INCR, GL_INCR);

            s_scissorStack.push_back(rect);
            drawStencil(rect, s_scissorStack.size());
        }

        void popScissor() {
            // Undo what was done in the previous push:
            // decrement things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_DECR, GL_DECR);

            Rect rect = s_scissorStack.back();
            s_scissorStack.pop_back();
            drawStencil(rect, s_scissorStack.size());
        }

    And this is how it's being used by the widgets:

        if (m_clip)
            pushScissor(m_rect);

        drawInternal(target, states);
        for (auto child : m_children)
            target.draw(*child, states);

        if (m_clip)
            popScissor();

    This is the result of the above code: there are two things on the screen, a giant test button, and a window with some buttons and text areas on it. The text area scroll box is set to clip its children (so that the text doesn't extend outside the scroll box). The button is drawn after the window and should be on top of it completely. However, for some reason the text area is appearing on top of the button. The only reason I can think of that this would happen is that the stencil values are not getting decremented in the pop, so when it comes time to render the button, those pixels don't have the right stencil value and it doesn't draw over. But I can't figure out what's wrong with my code that would cause that to happen.
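
    For comparison, here is a minimal sketch of nested stencil clipping (my own illustration, not the asker's engine; drawRect() is a hypothetical helper that just emits the rectangle's quad, and GL_STENCIL_TEST is assumed to be enabled). The invariant is that a pixel's stencil value equals the number of clip rectangles currently covering it, so the pop must write exactly the same rectangle as the matching push:

        static int s_clipDepth = 0;

        static void writeStencil(const Rect& r, GLenum op) {
            glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // stencil-only pass
            glDepthMask(GL_FALSE);
            glStencilFunc(GL_ALWAYS, 0, 0xFF);                     // touch every pixel of the rect
            glStencilOp(GL_KEEP, op, op);                          // GL_INCR on push, GL_DECR on pop
            drawRect(r);                                           // hypothetical helper
            glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
            glDepthMask(GL_TRUE);
        }

        void pushClip(const Rect& r) {
            writeStencil(r, GL_INCR);
            ++s_clipDepth;
            // Subsequent drawing only lands where every active clip rect overlaps.
            glStencilFunc(GL_EQUAL, s_clipDepth, 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        }

        void popClip(const Rect& r) {
            writeStencil(r, GL_DECR);      // must be the same rect that was pushed
            --s_clipDepth;
            glStencilFunc(GL_EQUAL, s_clipDepth, 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        }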

    Read the article

  • Box2D how to implement a camera?

    - by Romeo
    So far I have this Camera class:

        package GameObjects;

        import main.Main;
        import org.jbox2d.common.Vec2;

        public class Camera {
            public int x;
            public int y;
            public int sx;
            public int sy;

            public static final float PIXEL_TO_METER = 50f;
            private float yFlip = -1.0f;

            public Camera() {
                x = 0;
                y = 0;
                sx = x + Main.APPWIDTH;
                sy = y + Main.APPHEIGHT;
            }

            public Camera(int x, int y) {
                this.x = x;
                this.y = y;
                sx = x + Main.APPWIDTH;
                sy = y + Main.APPHEIGHT;
            }

            public void update() {
                sx = x + Main.APPWIDTH;
                sy = y + Main.APPHEIGHT;
            }

            public void moveCam(int mx, int my) {
                if (mx >= 0 && mx <= 80) {
                    this.x -= 2;
                } else if (mx <= Main.APPWIDTH && mx >= Main.APPWIDTH - 80) {
                    this.x += 2;
                }
                if (my >= 0 && my <= 80) {
                    this.y += 2;
                } else if (my <= Main.APPHEIGHT && my >= Main.APPHEIGHT - 80) {
                    this.y -= 2;
                }
                this.update();
            }

            public float meterToPixel(float meter) {
                return meter * PIXEL_TO_METER;
            }

            public float pixelToMeter(float pixel) {
                return pixel / PIXEL_TO_METER;
            }

            public Vec2 screenToWorld(Vec2 screenV) {
                return new Vec2(screenV.x + this.x, yFlip * screenV.y + this.y);
            }

            public Vec2 worldToScreen(Vec2 worldV) {
                return new Vec2(worldV.x - this.x, yFlip * worldV.y - this.y);
            }
        }

    I need to know how to modify the screenToWorld and worldToScreen functions to include the PIXEL_TO_METER scaling.
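
    A minimal sketch of what the two conversions could look like with the scaling folded in (written as C++-style pseudocode since the questions on this page mix languages; the original class is Java, and the exact sign convention for yFlip depends on how the rest of the renderer is set up). The idea is to treat the camera x/y as pixels, divide by PIXEL_TO_METER when going to world space and multiply when coming back:

        // screen (pixels) -> world (meters)
        Vec2 screenToWorld(const Vec2& s) {
            float px = s.x + x;              // pixel position in world space
            float py = yFlip * s.y + y;
            return Vec2(px / PIXEL_TO_METER, py / PIXEL_TO_METER);
        }

        // world (meters) -> screen (pixels)
        Vec2 worldToScreen(const Vec2& w) {
            float px = w.x * PIXEL_TO_METER; // pixel position in world space
            float py = w.y * PIXEL_TO_METER;
            return Vec2(px - x, yFlip * py - y);
        }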

    Read the article

  • Platform jumping problems with AABB collisions

    - by Vee
    See the diagram first: when my AABB physics engine resolves an intersection, it does so by finding the axis where the penetration is smaller, then "pushing out" the entity on that axis. Considering the "jumping moving left" example: if velocityX is bigger than velocityY, AABB pushes the entity out on the Y axis, effectively stopping the jump (result: the player stops in mid-air). If velocityX is smaller than velocityY (not shown in the diagram), the program works as intended, because AABB pushes the entity out on the X axis. How can I solve this problem? Source code:

        public void Update()
        {
            Position += Velocity;
            Velocity += World.Gravity;

            List<SSSPBody> toCheck = World.SpatialHash.GetNearbyItems(this);
            for (int i = 0; i < toCheck.Count; i++)
            {
                SSSPBody body = toCheck[i];
                body.Test.Color = Color.White;

                if (body != this && body.Static)
                {
                    float left = (body.CornerMin.X - CornerMax.X);
                    float right = (body.CornerMax.X - CornerMin.X);
                    float top = (body.CornerMin.Y - CornerMax.Y);
                    float bottom = (body.CornerMax.Y - CornerMin.Y);

                    if (SSSPUtils.AABBIsOverlapping(this, body))
                    {
                        body.Test.Color = Color.Yellow;
                        Vector2 overlapVector = SSSPUtils.AABBGetOverlapVector(left, right, top, bottom);
                        Position += overlapVector;
                    }

                    if (SSSPUtils.AABBIsCollidingTop(this, body))
                    {
                        if ((Position.X >= body.CornerMin.X && Position.X <= body.CornerMax.X) &&
                            (Position.Y + Height/2f == body.Position.Y - body.Height/2f))
                        {
                            body.Test.Color = Color.Red;
                            Velocity = new Vector2(Velocity.X, 0);
                        }
                    }
                }
            }
        }

        public static bool AABBIsOverlapping(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X <= mBody2.CornerMin.X || mBody1.CornerMin.X >= mBody2.CornerMax.X)
                return false;
            if (mBody1.CornerMax.Y <= mBody2.CornerMin.Y || mBody1.CornerMin.Y >= mBody2.CornerMax.Y)
                return false;
            return true;
        }

        public static bool AABBIsColliding(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X)
                return false;
            if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y)
                return false;
            return true;
        }

        public static bool AABBIsCollidingTop(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X)
                return false;
            if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y)
                return false;
            if (mBody1.CornerMax.Y == mBody2.CornerMin.Y)
                return true;
            return false;
        }

        public static Vector2 AABBGetOverlapVector(float mLeft, float mRight, float mTop, float mBottom)
        {
            Vector2 result = new Vector2(0, 0);

            if ((mLeft > 0 || mRight < 0) || (mTop > 0 || mBottom < 0))
                return result;

            if (Math.Abs(mLeft) < mRight)
                result.X = mLeft;
            else
                result.X = mRight;

            if (Math.Abs(mTop) < mBottom)
                result.Y = mTop;
            else
                result.Y = mBottom;

            if (Math.Abs(result.X) < Math.Abs(result.Y))
                result.Y = 0;
            else
                result.X = 0;

            return result;
        }
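
    One common way around this kind of axis-selection problem (a sketch of my own, with hypothetical ResolveCollisions/Axis helpers, not the asker's engine): integrate and resolve one axis at a time, so a large horizontal velocity can never cause the resolver to pick the wrong axis for a shallow vertical overlap:

        // Sketch: per-axis movement and resolution instead of a single combined push-out.
        void Update(float dt) {
            Velocity += World::Gravity * dt;

            Position.X += Velocity.X * dt;
            ResolveCollisions(Axis::X);   // push out along X only

            Position.Y += Velocity.Y * dt;
            ResolveCollisions(Axis::Y);   // push out along Y only; zero Velocity.Y on landing
        }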

    Read the article

  • How is this lighting effect done?

    - by Mike
    This is the most beautiful 2d lighting I have ever seen. Does anyone know how he went about doing it? http://www.youtube.com/watch?v=BIQRhOFkvQY http://www.youtube.com/watch?v=tnTYXPuecMs http://www.youtube.com/watch?v=rhC_jVM8IYU http://www.youtube.com/watch?v=_Aw5BdjWqqU Or download it here: http://grantkot.com/PollutedPlanet/publish.htm edit: I am not asking how the particles are simulated; I don't care about the physics.

    Read the article

  • Animate from end frame of one animation to end frame of another Unity3d/C#

    - by Timothy Williams
    So I have two legacy FBX animations; Animation A and Animation B. What I'm looking to do is to be able to fade from A to B regardless of the current frame A is on. Using animation.CrossFade() will play A in reverse until it reaches frame 0, then play B forward. What I'm looking to do is blend from the current frame of A to the end frame of B. Probably via some sort of lerp between the facial position in A and the facial position in the last frame of B. Does anyone know how I might be able to accomplish this? Either via a built in function or potentially lerping of some sort?

    Read the article

  • What features does D3D have that OpenGL does not (and vice versa)?

    - by Tom
    Are there any feature comparisons of Direct3D 11 and the newest OpenGL versions? Well, simply put, Direct3D 11 introduced these main features (taken from Wikipedia): tessellation, multithreaded rendering, compute shaders, and an increased texture cache. Now I'm wondering, how do the newest versions of OpenGL cope with these features? And since I have this feeling that there are features that Direct3D lacks from OpenGL's side, what are those?

    Read the article

  • Good resources for 2.5D and rendering walls, floors, and sprites

    - by Aidan Mueller
    I'm curious as to how games like Prelude of the Chambered handle graphics. If you play it for a bit you will see what I mean. It made me wonder how it works. (It is open-source, so you can get the source on this page.) I did find a few tutorials, but I couldn't understand some of the stuff, even though it did help with some things. However, I don't like doing things I don't understand. Does anyone know of any good sites for this kind of 2.5D? Any help is appreciated. After all, I've been googling all day. Thanks :)
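
    Games with this look are often built around a per-screen-column wall renderer (Wolfenstein-style raycasting or something close to it). I can't say this is exactly how that particular game does it, but a compact sketch of the general idea (my own illustration; the map size, screen size, and drawColumn() helper are made-up placeholders) looks like this:

        #include <cmath>

        const int W = 320, H = 200;
        const int MAP = 8;
        int map[MAP][MAP];   // 1 = wall, 0 = empty

        void drawColumn(int x, int yTop, int yBottom);   // hypothetical helper

        void renderWalls(float px, float py, float angle, float fov) {
            for (int x = 0; x < W; ++x) {
                float rayAngle = angle - fov / 2 + fov * x / W;
                float dx = std::cos(rayAngle), dy = std::sin(rayAngle);

                float dist = 0.0f;
                while (dist < 20.0f) {                       // march the ray in small steps
                    dist += 0.02f;
                    int cx = int(px + dx * dist), cy = int(py + dy * dist);
                    if (cx < 0 || cy < 0 || cx >= MAP || cy >= MAP) break;   // left the map
                    if (map[cy][cx] != 0) break;                             // hit a wall
                }

                float corrected = dist * std::cos(rayAngle - angle);   // remove fisheye
                int wallHeight = int(H / corrected);
                drawColumn(x, (H - wallHeight) / 2, (H + wallHeight) / 2);
            }
        }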

    Read the article

  • How can I make the camera return to the beginning of the terrain when it reaches the end?

    - by wbaccari
    How can I make the camera return to the beginning of the terrain when it reaches the end? I tried using ICameraSceneNode::setPosition():

        if (camera->getPosition().X > 1200.f)
            camera->setPosition(vector3df(1.f, 1550.f, camera->getPosition().Z));
        if (camera->getPosition().X < 0.f)
            camera->setPosition(vector3df(1199.f, 1550.f, camera->getPosition().Z));
        if (camera->getPosition().Z > 1200.f)
            camera->setPosition(vector3df(camera->getPosition().X, 1550.f, 1.f));
        if (camera->getPosition().Z < 0.f)
            camera->setPosition(vector3df(camera->getPosition().X, 1550.f, 1199.f));

    It seems to work fine with a flat terrain (one shade of grey in the heightmap), but it starts to produce strange behavior as soon as I try to add some hills. Edit: the setPosition() call seems to perform a translation of the camera toward the new position, so the camera stops at the first obstacle it encounters on its way.
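
    One thing that is easy to overlook when teleporting an Irrlicht camera (a sketch of my own, assuming a standard camera node; it does not address any collision-response animator that may also be attached): the camera's target keeps pointing at the old spot unless it is shifted by the same delta, which can make the wrap look like the camera is sliding or snagging on geometry. Something along these lines keeps the view direction intact while wrapping:

        core::vector3df pos = camera->getPosition();
        core::vector3df delta(0.f, 0.f, 0.f);

        if (pos.X > 1200.f) delta.X = -1199.f;   // wrap back to the start of the terrain
        if (pos.X <    0.f) delta.X =  1199.f;
        if (pos.Z > 1200.f) delta.Z = -1199.f;
        if (pos.Z <    0.f) delta.Z =  1199.f;

        if (delta.getLength() > 0.f) {
            camera->setPosition(pos + delta);                 // teleport the camera...
            camera->setTarget(camera->getTarget() + delta);   // ...and its look-at point
        }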

    Read the article

  • Splitting Graph into distinct polygons in O(E) complexity

    - by Arthur Wulf White
    You may have seen my last question: "Trapped inside a graph: find paths along edges that do not cross any edges". How do you split an entire graph into the distinct shapes 'trapped' inside the graph (like the ones described in that question) with good complexity? What I am doing now is iterating over all edges and then traversing while always taking the rightmost turn. This does split the graph into distinct shapes. Then I eliminate all the excess shapes (those that are repeats of previous shapes) and return the result. The complexity of this algorithm is O(E^2). I am wondering if I could do it in O(E) by removing edges I have already traversed. My current implementation of that returns unexpected results.
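
    A sketch of how the O(E) version usually goes (my own illustration with hypothetical names, assuming edges are stored as directed half-edges): each directed edge is consumed exactly once by the rightmost-turn walk, so no face is produced twice and the duplicate-elimination pass disappears:

        #include <vector>

        struct DirectedEdge {
            int from, to;
            bool visited = false;
        };

        // rightmostTurn(e): the outgoing edge at e->to that makes the sharpest clockwise
        // turn relative to e (hypothetical helper, the same rule already used for traversal).
        DirectedEdge* rightmostTurn(DirectedEdge* e);

        std::vector<std::vector<DirectedEdge*>> extractFaces(std::vector<DirectedEdge*>& edges) {
            std::vector<std::vector<DirectedEdge*>> faces;
            for (DirectedEdge* start : edges) {
                if (start->visited) continue;
                std::vector<DirectedEdge*> face;
                DirectedEdge* cur = start;
                do {
                    cur->visited = true;       // each directed edge is walked once -> O(E) total
                    face.push_back(cur);
                    cur = rightmostTurn(cur);
                } while (cur != start);
                faces.push_back(face);
            }
            return faces;
        }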

    Read the article

  • Calculating 3D camera positions from a video

    - by Geotarget
    I need to calculate the 3D camera position and rotation for each frame in a given video. This is typically used for motion tracking and to insert 3D objects into a video. I'm currently using VideoTrace to calculate this for me, and I'm getting the data exported as a 3DS Maxscript file. However, when I try to use the 3D camera rotations, I'm getting strange errors in my 3D calculations, as if there is an error with the 3x3 rotation matrices. Can you spot any error with the data itself? Or is it my other calculations that are erroneous?

        frame 1
        rotation=(matrix3 [-0.011938, 0.756018, -0.654442] [-0.382040, -0.608284, -0.695727] [-0.924068, 0.241718, 0.296091] [0, 0, 0]).rotationpart
        position=[-0.767177, 0.308723, -0.232722]
        fov=57.352135

        frame 2
        rotation=(matrix3 [-0.460922, -0.726580, -0.509541] [-0.200163, 0.644491, -0.737947] [0.864572, -0.238145, -0.442495] [0, 0, 0]).rotationpart
        position=[-0.856630, 0.198654, -0.243853]
        fov=57.352135

    Read the article

  • Pixel Collision - Detecting corners

    - by Milkboat
    How would I go about detecting the corners of a texture when I use pixel collision detection? I read about corner collision with rectangles, but I am unsure how to adapt it to my situation. Right now my map is tile-based and I do rectangular collision until the player intersects a blocked tile, then I switch to pixel collision. The effect I would like to achieve is that when the player hits the corner of an object, he gets pushed around the side so he doesn't just hit the edge and stop. Any ideas?

    Read the article

  • What does Skyrim Creation Kit's NPC class do?

    - by pseudoname
    I'm trying to change it with the setclass console command. Based on the UESP wiki it looks like it just governs stat gain for leveling, but based on the Elder Scrolls wiki it seems to only control their combat AI. Obviously it does at least one or both of those - what does it actually do, and does it do anything else? For example: if I change Lydia from warrior1handed to vigilantcombat1h with the console command 000A2C94.setclass 0010bfef, will it have any unintended side effects that aren't immediately apparent, other than letting her use the alteration and healing spells I just gave her via the console and setting her stats in a way that works for that? Will it do something weird like mess with her factions or her ability to join as my follower? Or mess with her health scaling as she levels? Something hard to notice until a lot of time has gone by? @desaivv: I was trying to do it with 000A2C94.setclass 0010bfef, but wasn't sure if it'd cause hidden issues that only show up after hours of play, or whether I should make a new character with the same bat file. The Creation Kit sounds like an idea too; I'll have to see how complicated it is. It might just show what the command would do, or have some easier way to change her behavior to add spells. I'll try it and see if anything obvious shows up short term - I just wasn't sure if it had known long-term problems.

    Read the article

  • Problems with 3D Array for Voxel Data

    - by Sean M.
    I'm trying to implement a voxel engine in C++ using OpenGL, and I've been working on the rendering of the world. In order to render, I have a 3D array of uint16s that holds the id of the block at each point. I also have a 3D array of uint8s that I am using to store the visibility data for each point, where each bit represents whether a face is visible. I have it so the blocks render and all of the proper faces are hidden if needed, but all of the blocks are offset by a power of 2 from where they are stored in the array. So the block at [0][0][0] is rendered at (0, 0, 0), and the block at [1][1][1] is rendered at (1, 1, 1), but the block at [2][2][2] is rendered at (4, 4, 4) and the block at [3][3][3] is rendered at (8, 8, 8), and so on and so forth. This is the result of drawing the above situation: I'm still a little new to the more advanced concepts of C++, like triple pointers, which I'm using for the 3D array, so I think the error is somewhere in there. This is the code for creating the arrays:

        uint16*** _blockData;      // Contains a 3D array of uint16s that are the ids of the blocks in the region
        uint8*** _visibilityData;  // Contains a 3D array of bytes that hold the visibility data for the faces

        // Allocate memory for the world data
        _blockData = new uint16**[REGION_DIM];
        for (int i = 0; i < REGION_DIM; i++) {
            _blockData[i] = new uint16*[REGION_DIM];
            for (int j = 0; j < REGION_DIM; j++)
                _blockData[i][j] = new uint16[REGION_DIM];
        }

        // Allocate memory for the visibility
        _visibilityData = new uint8**[REGION_DIM];
        for (int i = 0; i < REGION_DIM; i++) {
            _visibilityData[i] = new uint8*[REGION_DIM];
            for (int j = 0; j < REGION_DIM; j++)
                _visibilityData[i][j] = new uint8[REGION_DIM];
        }

    Here is the code used to create the block mesh for the region:

        // Check if the positive x face is visible; this happens for every face.
        // Block::VERT_X_POS is just an array of non-transformed cube verts for one face.
        // These checks are in a triple loop, which goes over every place in the array.
        if (_visibilityData[x][y][z] & 0x01 > 0) {
            _vertexData->AddData(&(translateVertices(Block::VERT_X_POS, x, y, z)[0]), sizeof(Block::VERT_X_POS));
        }

        // This is a separate method, not in the loop
        glm::vec3* translateVertices(const glm::vec3 data[], uint16 x, uint16 y, uint16 z) {
            glm::vec3* copy = new glm::vec3[6];
            memcpy(&copy, &data, sizeof(data));

            for (int i = 0; i < 6; i++)
                copy[i] += glm::vec3(x, -y, z); // Make +y go down instead

            return copy;
        }

    I cannot see where the blocks may be getting offset by more than they should be, and certainly not why the offsets are a power of 2. Any help is greatly appreciated. Thanks.
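
    Since the asker mentions being new to triple pointers: a common alternative (just a sketch with hypothetical names, not a fix for the bug above) is a single flat allocation indexed as x + DIM * (y + DIM * z), which removes all of the nested new[] bookkeeping:

        #include <cstdint>
        #include <vector>

        const int REGION_DIM = 32;   // placeholder size

        std::vector<uint16_t> blockData(REGION_DIM * REGION_DIM * REGION_DIM);
        std::vector<uint8_t>  visibilityData(REGION_DIM * REGION_DIM * REGION_DIM);

        inline uint16_t& blockAt(int x, int y, int z) {
            return blockData[x + REGION_DIM * (y + REGION_DIM * z)];
        }

        inline uint8_t& visibilityAt(int x, int y, int z) {
            return visibilityData[x + REGION_DIM * (y + REGION_DIM * z)];
        }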

    Read the article

  • 2D SAT Collision Detection not working when using certain polygons

    - by sFuller
    My SAT algorithm falsely reports that collision is occurring when using certain polygons. I believe this happens when using a polygon that does not contain a right angle. Here is a simple diagram of what is going wrong: Here is the problematic code:

        std::vector<vec2> axesB = polygonB->GetAxes();

        // loop over axes B
        for (int i = 0; i < axesB.size(); i++) {
            float minA, minB, maxA, maxB;
            polygonA->Project(axesB[i], &minA, &maxA);
            polygonB->Project(axesB[i], &minB, &maxB);

            float intervalDistance = polygonA->GetIntervalDistance(minA, maxA, minB, maxB);
            if (intervalDistance >= 0)
                return false; // Collision not occurring
        }

    This function retrieves axes from the polygon:

        std::vector<vec2> Polygon::GetAxes() {
            std::vector<vec2> axes;
            for (int i = 0; i < verts.size(); i++) {
                vec2 a = verts[i];
                vec2 b = verts[(i+1) % verts.size()];
                vec2 edge = b - a;
                axes.push_back(vec2(-edge.y, edge.x).GetNormailzed());
            }
            return axes;
        }

    This function returns the normalized vector:

        vec2 vec2::GetNormailzed() {
            float mag = sqrt(x*x + y*y);
            return *this / mag;
        }

    This function projects a polygon onto an axis:

        void Polygon::Project(vec2* axis, float* min, float* max) {
            float d = axis->DotProduct(&verts[0]);
            float _min = d;
            float _max = d;
            for (int i = 1; i < verts.size(); i++) {
                d = axis->DotProduct(&verts[i]);
                _min = std::min(_min, d);
                _max = std::max(_max, d);
            }
            *min = _min;
            *max = _max;
        }

    This function returns the dot product of the vector with another vector:

        float vec2::DotProduct(vec2* other) {
            return (x*other->x + y*other->y);
        }

    Could anyone give me a pointer in the right direction to what could be causing this bug?
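
    In case it helps frame the search, here is a hedged sketch of the usual outer structure (my own pseudocode, adapted to the vec2/Polygon names above, not the asker's full code): SAT has to test the face normals of both polygons, and only a gap on some axis from either set proves there is no collision - the snippet above only walks polygonB's axes:

        // Sketch: test every axis from both polygons; bail out on the first gap.
        bool Overlaps(Polygon* A, Polygon* B) {
            for (Polygon* p : { A, B }) {
                for (vec2 axis : p->GetAxes()) {
                    float minA, maxA, minB, maxB;
                    A->Project(&axis, &minA, &maxA);
                    B->Project(&axis, &minB, &maxB);
                    if (maxA < minB || maxB < minA)
                        return false;           // separating axis found -> no collision
                }
            }
            return true;                        // no separating axis on any face normal
        }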

    Read the article

  • effect and model vertex declaration compatibility

    - by Vodácek
    I have normal model drawing code. When I try to draw a model without UV coordinates I get this exception:

        System.InvalidOperationException: The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing.
           at Microsoft.Xna.Framework.Graphics.GraphicsDevice.VerifyCanDraw(Boolean bUserPrimitives, Boolean bIndexedPrimitives)
           at Microsoft.Xna.Framework.Graphics.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType primitiveType, Int32 baseVertex, Int32 minVertexIndex, Int32 numVertices, Int32 startIndex, Int32 primitiveCount)
           at Microsoft.Xna.Framework.Graphics.ModelMeshPart.Draw()
           at Microsoft.Xna.Framework.Graphics.ModelMesh.Draw()
           ...

    I know what causes the exception, but is it possible to avoid it? Is it possible to check a model, before drawing it with the current shader, for vertex declaration compatibility?

    Read the article

  • Map and fill texture using PBO (OpenGL 3.3)

    - by NtscCobalt
    I'm learning OpenGL 3.3, trying to do the following (as it is done in D3D):

        Create a texture of width, height, pixel format
        Map the texture memory
        Loop: write pixels
        Unmap the texture memory
        Set the texture
        Render

    Right now, though, it renders as if the entire texture is black. I can't find a reliable source of information on how to do this; almost every tutorial I've found just uses glTexSubImage2D and passes a pointer to memory. Here is basically what my code does (in this case it is generating a 1-byte alpha-only texture, but it is rendering it as the red channel for debugging):

        GLuint pixelBufferID;
        glGenBuffers(1, &pixelBufferID);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, 512 * 512 * 1, nullptr, GL_STREAM_DRAW);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

        GLuint textureID;
        glGenTextures(1, &textureID);
        glBindTexture(GL_TEXTURE_2D, textureID);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0, GL_RED, GL_UNSIGNED_BYTE, nullptr);
        glBindTexture(GL_TEXTURE_2D, 0);

        glBindTexture(GL_TEXTURE_2D, textureID);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
        void *Memory = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
        // Memory copied here, I know this is valid because it is the same loop as in my working D3D version
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    And then here is the render loop:

        // This chunk left in for completeness
        glUseProgram(glProgramId);
        glBindVertexArray(glVertexArrayId);
        glBindBuffer(GL_ARRAY_BUFFER, glVertexBufferId);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 20, 0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 20, 12);

        GLuint transformLocationID = glGetUniformLocation(3, 'transform');
        glUniformMatrix4fv(transformLocationID, 1, true, somematrix)

        // Not sure if this is all I need to do
        glBindTexture(GL_TEXTURE_2D, pTex->glTextureId);
        GLuint textureLocationID = glGetUniformLocation(glProgramId, "texture");
        glUniform1i(textureLocationID, 0);

        glDrawArrays(GL_TRIANGLES, Offset*3, Triangles*3);

    Vertex shader:

        #version 330 core
        in vec3 Position;
        in vec2 TexCoords;
        out vec2 TexOut;
        uniform mat4 transform;
        void main() {
            TexOut = TexCoords;
            gl_Position = vec4(Position, 1.0) * transform;
        }

    Pixel shader:

        #version 330 core
        uniform sampler2D texture;
        in vec2 TexCoords;
        out vec4 fragColor;
        void main() {
            // Output color
            fragColor.r = texture2D(texture, TexCoords).r;
            fragColor.g = 0.0f;
            fragColor.b = 0.0f;
            fragColor.a = 1.0;
        }
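
    For what it's worth, one step the snippet above never performs is the transfer from the unpack buffer into the texture; mapping and unmapping a PBO only fills the buffer object. A hedged sketch of that step (standard GL behavior: while a buffer is bound to GL_PIXEL_UNPACK_BUFFER, the data argument of glTexSubImage2D is interpreted as an offset into that buffer rather than a client pointer):

        glBindTexture(GL_TEXTURE_2D, textureID);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // tightly packed 1-byte rows
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                        GL_RED, GL_UNSIGNED_BYTE, nullptr);   // nullptr = offset 0 into the PBO
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);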

    Read the article

  • Preferred way to render text in OpenGL

    - by dukeofgaming
    Hi, I'm about to pick up computer graphics once again for a university project. For a previous project I used a library called FTGL that didn't leave me quite satisfied, as it felt kind of heavy (I tried all the rendering techniques; text rendering didn't scale very well). My question is: is there a good and efficient library for this? If not, what would be the way to implement fast but nice-looking text? Some intended uses are: floating object/character labels, dialogues, menus, HUD. Regards and thanks in advance. EDIT: preferably one that can load fonts.
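
    One lightweight route (a sketch under my own assumptions, not something the question prescribes): bake the glyphs into a texture atlas once with the single-header stb_truetype library and then draw each string as textured quads, which tends to scale well because rendering becomes ordinary quad batching:

        #define STB_TRUETYPE_IMPLEMENTATION
        #include "stb_truetype.h"
        #include <cstdio>
        #include <vector>

        stbtt_bakedchar cdata[96];                 // ASCII 32..126
        unsigned char atlas[512 * 512];            // 8-bit coverage map, upload as GL_RED/GL_ALPHA

        void bakeFont(const char* path) {
            std::vector<unsigned char> ttf(1 << 20);
            FILE* f = std::fopen(path, "rb");
            std::fread(ttf.data(), 1, ttf.size(), f);
            std::fclose(f);
            stbtt_BakeFontBitmap(ttf.data(), 0, 32.0f, atlas, 512, 512, 32, 96, cdata);
        }

        // Per character at draw time: stbtt_GetBakedQuad() advances the pen position
        // and fills a quad with the glyph's screen position and UVs inside the atlas.
        void emitText(const char* text, float x, float y) {
            for (; *text; ++text) {
                if (*text < 32 || *text >= 128) continue;
                stbtt_aligned_quad q;
                stbtt_GetBakedQuad(cdata, 512, 512, *text - 32, &x, &y, &q, 1);
                // submit quad (q.x0, q.y0, q.s0, q.t0) .. (q.x1, q.y1, q.s1, q.t1) to the renderer
            }
        }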

    Read the article

  • Farseer: Cutting body from texture

    - by Robin Betka
    Is it possible to cut a body from a texture in Farseer 3.0? I have a texture converted to a body with multiple fixtures (using BayazitDecomposer, the CreatePolygon method, ...) and I can even do it as a BreakableBody. But when I try to cut it with the cutting tool, the fixture itself gets cut but its connections get discarded! So when I have 14 fixtures and cut fixture 3, for example, fixture 3 gets cut but fixtures 1, 2 and 3-14 just go away. Is there a way to do it? It would work already if I could convert the texture into a body with only one fixture, but I haven't figured out if that's possible. BayazitDecomposer creates the multiple vertices, but leaving it out creates something weird and I get assert messages all the time. I know I couldn't break it that way, but I don't need that anyway if I could cut it. The breaking is just the workaround I'm using now. Extending the cutting tool to support multiple fixtures is very hard, especially when you consider that in one cut multiple fixtures could be cut and then connected again.

    Read the article

  • Translate extrinsic rotations to intrinsic rotations ( Euler angles )

    - by MineMan287
    The problem I have is very frustrating: I am using the Jitter Physics library, which gives quaternion rotations. You can extract the extrinsic rotations, but I need intrinsic rotations to rotate in OpenTK (there are other reasons as well, so I don't want to make OpenTK use a Matrix):

        GL.Rotate(xr, 1, 0, 0)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(zr, 0, 0, 1)

    EDIT: Response to the first answer. Like this?

        GL.Rotate(zr, 0, 0, 1)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(xr, 1, 0, 0)

    Or this?

        GL.Rotate(xr, 1, 0, 0)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(zr, 0, 0, 1)

        GL.Rotate(zr, 0, 0, 1)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(xr, 1, 0, 0)

        GL.Rotate(xr, 1, 0, 0)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(zr, 0, 0, 1)

    I'm confused, please give an example.
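
    For reference, the underlying identity (a general rotation fact, not something specific to Jitter or OpenTK): a sequence of extrinsic rotations about the fixed X, then Y, then Z axes produces the same total rotation as the intrinsic sequence about the body's Z, then Y', then X'' axes, because both compose to R = Rz * Ry * Rx. A small sketch of how that maps onto post-multiplied matrix calls, written with GLM since it builds the same matrices as glRotate-style calls:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Extrinsic X -> Y -> Z equals intrinsic Z -> Y' -> X'': both are R = Rz * Ry * Rx.
        glm::mat4 fromExtrinsicXYZ(float xr, float yr, float zr) {   // angles in radians
            glm::mat4 R(1.0f);
            R = glm::rotate(R, zr, glm::vec3(0, 0, 1));   // applied last in fixed-axis terms...
            R = glm::rotate(R, yr, glm::vec3(0, 1, 0));
            R = glm::rotate(R, xr, glm::vec3(1, 0, 0));   // ...applied first in fixed-axis terms
            return R;                                     // R = Rz * Ry * Rx
        }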

    Read the article

  • How do I implement collision detection with a sprite walking up a rocky-terrain hill?

    - by detectivecalcite
    I'm working in SDL and have bounding rectangles for collisions set up for each frame of the sprite's animation. However, I recently stumbled upon the issue of putting together collisions for characters walking up and down hills/slopes with irregularly curved or rocky terrain - what's a good way to do collisions for that type of situation? Per-pixel? Loading up the points of the incline and doing player-line collision checking? Should I use bounding rectangles in general or circle collision detection?

    Read the article

  • Calculating the 2D edge normals of a triangle

    - by Kazade
    What's a reliable way to calculate a 2D normal vector for each edge of a triangle, so that each normal is pointing outwards from the triangle? To clarify, given any triangle - for each edge (e.g p2-p1), I need to calculate a 2D normal vector pointing away from the triangle at right angles to the edge (for simplicity we can assume that the points are being specified in an anti-clockwise direction). I've coded a couple of hacky attempts, but I'm sure I'm overlooking some simple method and Google isn't being that helpful today - that and I haven't had my daily caffeine yet!
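
    Here is a small sketch of the standard construction (my own code, assuming y points up and the vertices really are counter-clockwise): for each edge e = p2 - p1, the outward normal is the edge rotated by -90 degrees, i.e. (e.y, -e.x), normalized:

        #include <cmath>

        struct Vec2 { float x, y; };

        // Outward normal of the edge p1 -> p2 of a counter-clockwise polygon (y up).
        Vec2 outwardEdgeNormal(const Vec2& p1, const Vec2& p2) {
            Vec2 e{ p2.x - p1.x, p2.y - p1.y };   // edge direction
            Vec2 n{ e.y, -e.x };                  // rotate -90 degrees: points away from the interior
            float len = std::sqrt(n.x * n.x + n.y * n.y);
            return Vec2{ n.x / len, n.y / len };
        }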

    Read the article

  • Camera Collision inside the room model

    - by sanddy
    I am having a problem calculating the camera collision for my room model, which consists of a sofa, tables and other models. The user will be moving the camera forward, back, and rotating, so I need to make sure that the camera does not collide with any of the models within the room. I have represented all the models inside the room by a BoundingBox[] and the camera by a BoundingSphere. So far I have implemented collision by following the tutorial at http://www.toymaker.info/Games/XNA/html/xna_model_collisions.html, which was great. But I guess the problem lies in the transformation part. I debugged and found some points to be at Vector(-XXX, -XXX, -XXX) where X is a digit. Also, I found the radius of some models to be too large (in the thousands; I just looked at the radius value before converting to a BoundingBox). Do I need to scale the model for collision? Below is my code. In LoadContent():

        Matrix[] transforms = new Matrix[myModel.Bones.Count];
        myModel.CopyAbsoluteBoneTransformsTo(transforms);

        int index = 0;
        box = new List<BoundingBox>();
        BoundingBox worldModel = Utility.CalculateBoundingBox(myModel);

        foreach (ModelMesh mesh in myModel.Meshes)
        {
            Vector3[] obb = new Vector3[8];
            worldModel.GetCorners(obb);
            Vector3[] asdf = (Vector3[])obb.Clone();

            Vector3.Transform(obb, ref transforms[mesh.ParentBone.Index], obb);
            BoundingBox worldBox = BoundingBox.CreateFromPoints(obb);
            box.Add(worldBox);
            index++;
        }

    On camera position update:

        BoundingSphere bs = new BoundingSphere(this.cameraPos, 5.0f);
        if (RoomWalkthrough.Utility.CheckCollision(bs, bb))
        {
            // Do Something
        }

    Please help.

    Read the article

  • how to add water effect to an image

    - by brainydexter
    This is what I am trying to achieve: a given image would occupy say 3/4 of the height of the screen. The remaining 1/4 area would be a reflection of it with some waves (water effect) on it. I'm not sure how to do this, but here's my approach:

        1. Render the given texture to another texture, called the mirror texture (maybe FBOs can help me?)
        2. Invert the mirror texture (scale it by -1 along Y)
        3. Render the mirror texture at height = 3/4 of the screen
        4. Add some sense of noise to it, OR, using a pixel shader and time, put pixel.z = sin(time) to make it wavy

    (Tech: C++/OpenGL/GLSL) Is my approach correct? Is there a better way to do this? Also, can someone please recommend whether using framebuffer objects would be the right thing here? Thanks
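
    As a concrete illustration of the first step, here is a hedged sketch of the render-to-texture setup with a framebuffer object (plain GL calls; the width/height variables are placeholders of my own). The scene is drawn once into mirrorTex, which can then be drawn flipped (scale -1 on Y) in the bottom quarter with a time-based distortion applied in the fragment shader:

        GLuint fbo, mirrorTex;

        // Color target the reflection will be rendered into.
        glGenTextures(1, &mirrorTex);
        glBindTexture(GL_TEXTURE_2D, mirrorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Framebuffer object wrapping that texture.
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, mirrorTex, 0);
        // glCheckFramebufferStatus(GL_FRAMEBUFFER) should report GL_FRAMEBUFFER_COMPLETE here.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

        // Per frame: bind fbo, draw the image, unbind, then draw mirrorTex flipped in the
        // bottom 1/4 of the screen, offsetting its texture coordinates by something like
        // sin(time + y * frequency) * amplitude for the wave.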

    Read the article
