Search Results

Search found 19338 results on 774 pages for 'game loop'.


  • How can I create a sprite sheet from a 3D model (3D Studio Max)?

    - by OopsUser
    I built a simple 3D model of a car, with a simple animation in which its wheels turn. Now I want to create a sprite sheet. The only way I know is to manually render 20 frames from the front, combine them into a strip by hand, then rotate the model by 10 degrees, render 20 frames of the animation again, combine those into a strip, and so on. Is there a way to do it automatically, without manually rotating the scene, rendering, and combining? It's a lot of work, and it takes more time than the modelling itself... Thanks
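
    One route is to animate the turnaround inside Max itself (keyframe a rotation on the model, or orbit the camera, across the frame range) and batch-render the whole sequence to numbered image files; assembling the strips can then be scripted instead of done by hand. A rough sketch in Python with Pillow, assuming the frames have already been rendered out as PNGs:

        from PIL import Image

        def build_strip(frame_paths, out_path):
            """Paste already-rendered frames side by side into one horizontal strip."""
            frames = [Image.open(p) for p in frame_paths]
            w, h = frames[0].size
            strip = Image.new("RGBA", (w * len(frames), h))
            for i, frame in enumerate(frames):
                strip.paste(frame, (i * w, 0))      # slot i sits at x offset i * w
            strip.save(out_path)

    Running it once per 10-degree angle (36 calls, each with the matching 20 file names) produces the full set of strips with no manual compositing.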

    Read the article

  • Working out of a vertex array for destructible objects

    - by bobobobo
    I have diamond-shaped polygonal bullets, and there are lots of them on the screen. I did not want to create a vertex array for each one, so I packed them all into a single vertex array, and they're all drawn at once:

        | bullet1.xyz | bullet1.rgb | bullet2.xyz | bullet2.rgb |

    This is great for performance. Each bullet keeps pointers into the shared buffer:

        struct Bullet
        {
            vector<Vector3f*> verts ; // pointers into the vertex buffer
        } ;

    This works fine; the bullets can move and do collision detection, all while having their data in one place. Except when a bullet "dies", you have to clear its slot and pack all the remaining bullets towards the beginning of the array. Is this a good approach to handling lots of low-poly objects? How else would you do it?
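
    A common way to avoid repacking the whole array is "swap with last": overwrite the dead bullet's slot with the last live bullet's data, then shrink the array by one slot. A hedged sketch in Python, assuming each bullet owns a fixed-size slot in the packed array (the slot field is a hypothetical bookkeeping member, not from the struct above):

        def kill_bullet(bullets, verts, i, verts_per_bullet=4, floats_per_vert=6):
            """O(1) removal from a packed array: move the last slot into slot i."""
            stride = verts_per_bullet * floats_per_vert    # xyz + rgb per vertex
            last = len(bullets) - 1
            if i != last:
                verts[i * stride:(i + 1) * stride] = verts[last * stride:(last + 1) * stride]
                bullets[i] = bullets[last]
                bullets[i].slot = i                        # re-point the moved bullet
            bullets.pop()
            del verts[last * stride:]                      # shrink the packed data

    The only per-death cost is fixing the moved bullet's pointers and re-uploading the touched range of the buffer, instead of shifting every bullet behind the dead one.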

    Read the article

  • Sensor-based vs. AABB-based collision

    - by Hillel
    I'm trying to write a simple collision system that will probably be used primarily for 2D platformers, and I've been planning out an AABB system for a few weeks now, one that will work seamlessly with my grid data-structure optimization. I picked AABB because I want a simple system, but I also want it to be perfect. Now, I've been hearing a lot lately about a different method of handling collision, using sensors placed at the important parts of the entity. I understand it's a good way to handle slopes, better than AABB collision. The thing is, I can't find a basic explanation of how it works, let alone a comparison with the AABB method. If someone could explain it to me, or point me to a good tutorial, I'd very much appreciate it; a comparison of the advantages and disadvantages of the two techniques would also be nice.
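
    For flavor, here is a hypothetical sketch of the sensor idea in Python: instead of resolving a box against geometry, a few probe points attached to the entity (for example under each foot) sample the ground height, which is what makes slopes nearly free. Everything here (the terrain_height_at helper, the offsets) is illustrative, not a standard API:

        def ground_sensors(entity, terrain_height_at, offsets=(-8, 8)):
            """Snap the entity's feet to the highest ground under its sensors.
            Assumes screen coordinates where smaller y means higher up."""
            ground = min(terrain_height_at(entity.x + ox) for ox in offsets)
            if entity.y + entity.height >= ground:    # feet at or below the surface
                entity.y = ground - entity.height     # stand on it, slope or not
                entity.on_ground = True

    The trade-off usually cited is exactly the one in the question: sensors make slopes and uneven ground easy, while AABBs stay simpler to reason about for box-shaped levels.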

    Read the article

  • 2D mouse coordinates from 3D object projection

    - by user17753
    Not entirely certain of the nomenclature here -- basically, after placing a model in world coordinates and setting up a 3D camera to look at it, the model has been projected onto the screen in 2D. What I'd like to do is determine whether the mouse is inside the projected view of the model. Is there a way to "unproject" in the XNA framework? Or what is this process called, so that I can search for it more effectively?
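
    "Unprojecting" is indeed the term, and XNA exposes it as Viewport.Unproject, which maps a screen point plus a depth back into world space given the projection, view, and world matrices. The underlying math, as a small numpy sketch (assuming a column-vector convention and a combined view-projection matrix):

        import numpy as np

        def unproject(mouse_x, mouse_y, depth_ndc, view_proj, width, height):
            # Pixel -> normalized device coordinates (screen y grows downward).
            ndc = np.array([2.0 * mouse_x / width - 1.0,
                            1.0 - 2.0 * mouse_y / height,
                            depth_ndc,
                            1.0])
            world = np.linalg.inv(view_proj) @ ndc
            return world[:3] / world[3]              # perspective divide

    Unprojecting the mouse at two depths (near and far plane) gives a world-space ray; testing that ray against the model's bounding volume answers "is the mouse over the model".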

    Read the article

  • How do I draw a dotted or dashed line?

    - by Gagege
    I'm trying to draw a dashed or dotted line by placing individual segments (dashes) along a path and separating them. The only algorithm I could come up with gives me a dash length that varies with the angle of the line. Like this:

        private function createDashedLine(fromX:Float, fromY:Float, toX:Float, toY:Float):Sprite
        {
            var line = new Sprite();
            var currentX = fromX;
            var currentY = fromY;
            var addX = (toX - fromX) * 0.0075;
            var addY = (toY - fromY) * 0.0075;
            line.graphics.lineStyle(1, 0xFFFFFF);
            var count = 0;
            // while line is not complete
            while (!lineAtDestination(fromX, fromY, toX, toY, currentX, currentY))
            {
                // move line draw cursor to beginning of next dash
                line.graphics.moveTo(currentX, currentY);
                // if dash is even
                if (count % 2 == 0)
                {
                    // draw the dash
                    line.graphics.lineTo(currentX + addX, currentY + addY);
                }
                // add next dash's length to current cursor position
                currentX += addX;
                currentY += addY;
                count++;
            }
            return line;
        }

    This just happens to be written in Haxe, but the solution should be language-neutral. What I would like is for the dash length to be the same no matter what angle the line is at. As is, the code just adds 0.75% of the line's length to x and y each step, so if the line is at a 45-degree angle you get pretty much a solid line, while at something shallow like 85 degrees you get a nice-looking dashed line. So the dash length is variable, and I don't want that. How would I write a function that takes a "dash length" parameter and produces dashes of that length no matter what the angle is? If you need to completely disregard my code, be my guest. I'm sure there's a better solution.
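
    One way to make the dash length angle-independent is to step along the line's unit direction vector, so each step covers a fixed distance rather than a fixed fraction of the line. A minimal sketch in Python (the function and parameter names are illustrative, not from the code above):

        import math

        def dash_segments(x0, y0, x1, y1, dash_len):
            """Return ((start, end), ...) point pairs for dashes of constant length."""
            dx, dy = x1 - x0, y1 - y0
            dist = math.hypot(dx, dy)
            if dist == 0:
                return []
            ux, uy = dx / dist, dy / dist        # unit direction: step size no longer depends on angle
            segments = []
            t = 0.0
            while t < dist:
                end = min(t + dash_len, dist)    # clamp the final dash to the endpoint
                segments.append(((x0 + ux * t, y0 + uy * t),
                                 (x0 + ux * end, y0 + uy * end)))
                t += 2 * dash_len                # advance one dash plus one gap
            return segments

    In the Haxe code above, the same idea amounts to computing the unit direction once and setting addX/addY to (dashLength * ux, dashLength * uy).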

    Read the article

  • AndEngine GLES2 - getting a black screen on the 4.1 emulator

    - by dizworld.com
    I'm new to AndEngine. I created the following code:

        public class MainActivity extends BaseGameActivity
        {
            static final int CAMERA_WIDTH = 800;
            static final int CAMERA_HEIGHT = 480;
            public Font mFont;
            public Camera mCamera;
            // A reference to the current scene
            public Scene mCurrentScene;
            public static BaseActivity instance;

            public EngineOptions onCreateEngineOptions()
            {
                instance = this;
                mCamera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT);
                return new EngineOptions(true, ScreenOrientation.LANDSCAPE_SENSOR,
                    new RatioResolutionPolicy(CAMERA_WIDTH, CAMERA_HEIGHT), mCamera);
            }

            @Override
            public void onCreateResources(OnCreateResourcesCallback arg0) throws Exception
            {
                mFont = FontFactory.create(this.getFontManager(), this.getTextureManager(),
                    256, 256, Typeface.create(Typeface.DEFAULT, Typeface.BOLD), 32);
                mFont.load();
            }

            @Override
            public void onCreateScene(OnCreateSceneCallback arg0) throws Exception
            {
                mEngine.registerUpdateHandler(new FPSLogger());
                mCurrentScene = new Scene();
                Log.v("Scene", "enter");
                mCurrentScene.setBackground(new Background(0.09804f, 0.7274f, 0.8f));
                // return mCurrentScene;
            }

            @Override
            public void onPopulateScene(Scene arg0, OnPopulateSceneCallback arg1) throws Exception
            {
                // TODO Auto-generated method stub
            }
        }

    The example code I found on other sites returns the scene, but in AndEngine GLES2 the onCreateScene() method has no scene return value, so my first run is a BLACK screen. Any suggestions? :)

    Read the article

  • What is the simplest way to render video into memory (for drawing to a texture) in .NET?

    - by sebf
    In my project I would like to be able to play back video on surfaces in the world. I intend to do this by having the video frames rendered to a block of memory, then using this to update a texture each frame. Everything is in place except for the part that actually gets the video. I have looked on Google and found that the video-library world is very expansive (and geared towards video processing), and I am having trouble finding a suitable one. FFmpeg is very comprehensive, but it is an entire suite and would take a good amount of work to integrate. So far the most promising library I've found is the one based on the VLC player libraries; by virtue of using the same resources as VLC Player it is known to be very capable, and it also renders to blocks of memory, but the API (at least of the one on CodePlex) is more a port of the C++ API than a managed wrapper. The 'solution' can be any wrapper/API/library, but with characteristics that make it suitable for use in a rendering engine, namely:

    - Renders the video frame data to memory, so it can be picked up and passed to a texture on the GPU easily.
    - Super simple: all that is needed is a way to load, jump, and render a frame programmatically. Ideally it would use the system's codecs and not require an assortment of plugins.
    - Permissive license (LGPL or freer).
    - .NET bindings at least; all the better if it is natively managed.

    Can anyone suggest a lightweight .NET library that can take a video file and spit out some frames into a byte[]?

    Read the article

  • How to make natural-looking paths with A* on a grid?

    - by user11177
    I've been reading this: http://theory.stanford.edu/~amitp/GameProgramming/Heuristics.html

    But there are some things I don't understand. For example, the article says to use something like this for pathfinding with diagonal movement:

        function heuristic(node) =
            dx = abs(node.x - goal.x)
            dy = abs(node.y - goal.y)
            return D * max(dx, dy)

    I don't know how to set D to get a natural-looking path like in the article. I set D to the lowest cost between adjacent squares, as it said, and I don't understand the remark that the heuristic should be 4*D; it does not seem to change anything. These are my heuristic function and move function:

        def heuristic(self, node, goal):
            D = 10
            dx = abs(node.x - goal.x)
            dy = abs(node.y - goal.y)
            return D * max(dx, dy)

        def move_cost(self, current, node):
            cross = abs(current.x - node.x) == 1 and abs(current.y - node.y) == 1
            return 19 if cross else 10

    Result: (screenshot in the original post)
    The smooth sailing path we want to happen: (screenshot in the original post)
    The rest of my code: http://pastebin.com/TL2cEkeX
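
    As a point of comparison, the same article's diagonal-distance heuristic keeps a second term that the max()-only form drops. With straight moves costing 10, a sketch matching that form (D2 is the diagonal cost; note that a diagonal cost of 19 prices diagonals well above sqrt(2) * 10, roughly 14, which by itself discourages diagonal stretches):

        def octile_heuristic(self, node, goal):
            """Diagonal-distance heuristic: D * max(dx, dy) + (D2 - D) * min(dx, dy)."""
            D, D2 = 10, 14
            dx = abs(node.x - goal.x)
            dy = abs(node.y - goal.y)
            return D * max(dx, dy) + (D2 - D) * min(dx, dy)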

    Read the article

  • Scan-Line Z-Buffering Dilemma

    - by Belgin
    I have a set of vertices in 3D space, and for each one I retain the following information:

    - Its 3D coordinates (x, y, z).
    - A list of pointers to some of the other vertices, with which it's connected by edges.

    Right now I'm doing a perspective projection, with the projecting plane being XY and the eye placed at (0, 0, d), with d < 0. For Z-buffering, I need to find the depth of the point of a polygon (they're all planar) that corresponds to a certain pixel on the screen, so I can hide the surfaces that are not visible. My questions are the following: How do I determine which polygon a pixel belongs to, so that I can use the equation of the plane containing the polygon to find the Z-coordinate? And are my data structures correct, or do I need to store something else entirely for this to work? I'm just projecting the vertices onto the projection plane and joining them with lines based on the pointer lists.
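
    On the depth question specifically: once a pixel is known to lie inside a polygon's projection, the polygon's plane equation gives the depth directly. A small numpy sketch of that step, treating the vertices as already projected to (x, y) with their z kept (one caveat: under a true perspective projection it is 1/z, not z, that varies linearly in screen space, so this is the classic approximation):

        import numpy as np

        def depth_at_pixel(a, b, c, px, py):
            """Depth on the plane through projected triangle (a, b, c) at pixel (px, py).
            a, b, c are numpy arrays of (projected_x, projected_y, z)."""
            n = np.cross(b - a, c - a)               # plane normal (nx, ny, nz)
            if n[2] == 0:
                return None                          # plane is edge-on: no unique depth
            d = -np.dot(n, a)                        # plane equation: n . p + d = 0
            return -(n[0] * px + n[1] * py + d) / n[2]

    For "which polygon does the pixel belong to", the usual scanline answer is an edge table per polygon: a pixel belongs to the polygons whose active spans cover it on the current scanline, and the z-buffer then keeps the nearest one.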

    Read the article

  • OpenGL Application displays only 1 frame

    - by Avi
    EDIT: I have verified that the problem is not the VBO class or the vertex array class, but rather something else.

    I have a problem where my vertex buffer class works the first time it's called, but displays nothing on any later call. I don't know why this is, and the behaviour is the same in my vertex array class. I'm calling the functions in this order to set up the buffers:

    - enable client states
    - bind buffers
    - set buffer / array data
    - unbind buffers
    - disable client states

    Then in the draw function, which is called every frame:

    - enable client states
    - bind buffers
    - set pointers
    - unbind buffers
    - bind index buffer
    - draw elements
    - unbind index buffer
    - disable client states

    Is there something wrong with the order in which I'm calling the functions, or is it a more specific code error? EDIT: here's the code that sets the pointers:

        // element is the vertex attribute being drawn (e.g. normals, colors, etc.)
        static void makeElementPointer(VertexBufferElements::VBOElement element,
                                       Shader *shade, void *elementLocation)
        {
            // elementLocation is BUFFER_OFFSET(n) if a buffer is bound
            switch (element)
            {
                ....
                glVertexPointer(3, GL_FLOAT, 0, elementLocation); // changes based on element,
                ....                                              // but I'm only dealing with
            }                                                     // vertices for now
        }

    And that's basically all the code that isn't just a straight OpenGL function call.

    Read the article

  • Optimizing hierarchical transform

    - by Geotarget
    I'm transforming objects in 3D space by transforming each vertex with the object's 4x4 transform matrix. To achieve hierarchical transforms, I transform the child by its own matrix, and then by the parent's matrix. This becomes costly because objects deeper in the display tree have to be transformed by all of their ancestors. This is what's happening, in summary:

    Root -- transform its verts by the Root matrix
    Parent -- transform its verts by the Parent, then Root matrix
    Child -- transform its verts by the Child, then Parent, then Root matrix

    Is there a faster way to transform vertices to achieve hierarchical transforms? What if I first concatenated each transform matrix with its parent matrices, and then transformed the verts by that final resulting matrix -- would that work, and wouldn't it be faster?

    Root -- transform its verts by the Root matrix
    Parent -- concat the Parent and Root matrices, transform its verts by the concatenated matrix
    Child -- concat the Child, Parent, and Root matrices, transform its verts by the concatenated matrix
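
    For what it's worth, the concatenation idea is mathematically sound: matrix multiplication is associative, so transforming a vertex by Child, then Parent, then Root equals transforming it once by the concatenated Root·Parent·Child matrix (with column vectors; the order flips for row-vector conventions), and concatenation costs one 4x4 multiply per node instead of one per vertex per ancestor. A minimal numpy sketch, with illustrative node fields:

        import numpy as np

        def update_world(node, parent_world=np.eye(4)):
            # One concatenation per node...
            world = parent_world @ node.local_matrix
            # ...then each vertex is touched exactly once, however deep the node is.
            node.world_verts = [world @ v for v in node.local_verts]   # v is a 4-vector
            for child in node.children:
                update_world(child, world)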

    Read the article

  • How to detect collisions between a sprite and a user-generated shape?

    - by Huwell
    How do I detect a collision between a sprite and a user-generated shape of some sort? For example: there are some objects on the screen, and the user takes their finger and draws a shape around one of them. (The selection rule is painting a circle around the sprite, but the painted shapes may vary.) I need to detect which object was selected, as in this demo image: http://i52.tinypic.com/28h0t1g.png
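
    If the stroke is recorded as a list of touch points, one workable approach is to close it into a polygon and run a point-in-polygon test against each object's center. A sketch of the standard ray-casting test in Python:

        def point_in_polygon(px, py, polygon):
            """polygon is a list of (x, y) stroke points, treated as closed."""
            inside = False
            n = len(polygon)
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                if (y1 > py) != (y2 > py):            # edge straddles the ray's y
                    cross_x = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                    if px < cross_x:                  # crossing lies to the right
                        inside = not inside
            return inside

    An object counts as selected when its center (or all four corners, for a stricter rule) tests inside the drawn shape; this works for any closed scribble, not just circles.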

    Read the article

  • Why is my collision detection not accurate?

    - by optimisez
    After trying and trying, I still cannot understand why the character's legs sink into the wall, while there is no clipping issue when I hit the wall from below. How should I fix it so that he stands still on top of the wall?

        void initPlayer()
        {
            // Create texture.
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44,
                D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT,
                D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &player);
            playerRect.left = playerRect.top = 0;
            playerRect.right = 29;
            playerRect.bottom = 36;
            playerDest.X = 0;
            playerDest.Y = 564;
            playerDest.length = playerRect.right - playerRect.left;
            playerDest.height = playerRect.bottom - playerRect.top;
        }

        void initBox()
        {
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "brock.png", 330, 132,
                D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT,
                D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &box);
            boxRect.left = 33;
            boxRect.top = 0;
            boxRect.right = 63;
            boxRect.bottom = 30;
            boxDest.X = boxDest.Y = 300;
            boxDest.length = boxRect.right - boxRect.left;
            boxDest.height = boxRect.bottom - boxRect.top;
        }

        bool spriteCollide(Entity player, Entity target)
        {
            float left1 = player.X, left2 = target.X;
            float right1 = player.X + player.length, right2 = target.X + target.length;
            float top1 = player.Y, top2 = target.Y;
            float bottom1 = player.Y + player.height, bottom2 = target.Y + target.height;

            if (bottom1 < top2) return false;
            if (top1 > bottom2) return false;
            if (right1 < left2) return false;
            if (left1 > right2) return false;
            return true;
        }

        void collideWithBox()
        {
            if (spriteCollide(playerDest, boxDest) && keyArr[VK_UP])
                //playerDest.Y += 50;
                playerDest.Y = boxDest.Y + boxDest.height;
            else if (spriteCollide(playerDest, boxDest) && !keyArr[VK_UP])
                playerDest.Y = boxDest.Y - boxDest.height;
        }
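
    One thing worth noting about collideWithBox() is that it snaps the player by a fixed box height and keys the direction off VK_UP, rather than off how far and from which side the rectangles actually overlap. A common alternative is minimal-penetration resolution: push the player out along whichever axis overlaps least, toward the side it came from. A hedged sketch in Python, with fields mirroring the Entity struct above and screen coordinates where Y grows downward:

        def resolve(player, box):
            """Push player out of box along the axis of least penetration."""
            overlap_x = min(player.X + player.length, box.X + box.length) - max(player.X, box.X)
            overlap_y = min(player.Y + player.height, box.Y + box.height) - max(player.Y, box.Y)
            if overlap_x <= 0 or overlap_y <= 0:
                return                                    # no collision
            if overlap_x < overlap_y:
                player.X += -overlap_x if player.X < box.X else overlap_x
            else:
                player.Y += -overlap_y if player.Y < box.Y else overlap_y

    Standing on top then falls out naturally: landing from above, the y overlap is the smaller one and the player is above the box, so he gets pushed up to rest on its top edge.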

    Read the article

  • Selection of a mesh with arbitrary region

    - by Tigran
    Consider an example: I have one or more meshes on the OpenGL screen and would like to select part of one (say, for deletion). There are clear ways to do the selection via ray tracing, or via the selection mechanism provided by OpenGL itself. But for my users, considering that meshes can get wired surfaces, I need to implement selection via an arbitrary closed region, so that all triangles that appear inside that region get selected. To be clearer, here is a screenshot: I want all triangles inside the black polygon to be selected, identified, whatever, in some way. How can I achieve that?

    Read the article

  • SDL stencil buffer

    - by noddy
    I am trying to use the stencil buffer for rendering a reflection, working with SDL and OpenGL. When I call SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8), I get a return value of 0, indicating success; but when I try to query the size actually allocated using SDL_GL_GetAttribute(SDL_GL_STENCIL_SIZE, &i), I get a value of 0 for my stencil buffer, because of which I am not getting the desired rendering. Can someone help me correct my mistake? Is there some other initialization also required? Thanks
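
    One ordering detail worth checking: in SDL 1.2, SDL_GL_SetAttribute only takes effect if it is called before SDL_SetVideoMode creates the OpenGL surface; requested afterwards, it will not apply to the already-created context, and the query will report what was originally allocated. The same sequence can be sanity-checked from Python with pygame, which wraps SDL (a sketch, assuming pygame is installed):

        import pygame

        pygame.init()
        # Request stencil bits BEFORE the GL window (and its context) is created.
        pygame.display.gl_set_attribute(pygame.GL_STENCIL_SIZE, 8)
        pygame.display.set_mode((640, 480), pygame.OPENGL | pygame.DOUBLEBUF)
        # Query what the driver actually allocated.
        print(pygame.display.gl_get_attribute(pygame.GL_STENCIL_SIZE))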

    Read the article

  • Octrees as data structure

    - by Christian Frantz
    In my cube world, I want to use octrees to represent my chunks of 20x20x20 cubes, for frustum and occlusion culling. I understand how octrees work; I just don't know if I'm going about this the right way. My base octree class is taken from here: http://www.xnawiki.com/index.php/Octree. What I'm wondering is how to apply occlusion culling using this class. Does it make sense to have one octree for each cube chunk, or should I make the octree bigger? Since I'm using cubes, each cube should fit into a node without overlap, so that won't be an issue.
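
    Whichever granularity is chosen, that no-overlap property makes child lookup a three-bit index, which keeps insertion and traversal trivial. A small illustrative sketch in Python, with coordinates as (x, y, z) tuples:

        def octant_index(center, point):
            """Which of a node's eight children contains point."""
            index = 0
            if point[0] >= center[0]: index |= 1
            if point[1] >= center[1]: index |= 2
            if point[2] >= center[2]: index |= 4
            return index

    With 20x20x20 chunks, a per-chunk octree that bottoms out at single cubes is only about five levels deep, which is one data point in favor of the one-octree-per-chunk option.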

    Read the article

  • Better way to go up/down slope based on yaw?

    - by CyanPrime
    Alright, so I have a bit of movement code, and I'm thinking I'm going to need to manually input when to go up or down a slope. All I have to work with is the slope's normal and vector, my current and previous positions, and my yaw. Is there a better way to decide whether I go up or down the slope based on my yaw?

        Vector3f move = new Vector3f(0, 0, 0);
        move.x = (float) -Math.toDegrees(Math.cos(Math.toRadians(yaw)));
        move.z = (float) -Math.toDegrees(Math.sin(Math.toRadians(yaw)));
        move.normalise();

        if (move.z < 0 && slopeNormal.z > 0 || move.z > 0 && slopeNormal.z < 0) {
            if (move.x < 0 && slopeNormal.x > 0 || move.x > 0 && slopeNormal.x < 0) {
                move.y += slopeVec.y;
            }
        }
        if (move.z > 0 && slopeNormal.z > 0 || move.z < 0 && slopeNormal.z < 0) {
            if (move.x > 0 && slopeNormal.x > 0 || move.x < 0 && slopeNormal.x < 0) {
                move.y -= slopeVec.y;
            }
        }

        move.scale(movementSpeed * delta);
        Vector3f.add(pos, move, pos);
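
    One yaw-independent alternative, assuming the slope normal is unit length: project the flat movement direction onto the slope plane by subtracting its component along the normal. The y adjustment then falls out automatically for any heading, uphill or down, with no sign juggling. A sketch in Python:

        import math

        def move_on_slope(yaw_deg, normal, speed, delta):
            """normal is the slope's unit normal as (nx, ny, nz)."""
            yaw = math.radians(yaw_deg)
            d = (-math.cos(yaw), 0.0, -math.sin(yaw))           # flat heading from yaw
            dot = sum(di * ni for di, ni in zip(d, normal))
            v = [di - dot * ni for di, ni in zip(d, normal)]    # slide along the plane
            length = math.sqrt(sum(c * c for c in v)) or 1.0
            return [c / length * speed * delta for c in v]      # constant ground speed

    Incidentally, the Math.toDegrees calls in the original wrap plain cos/sin values; they scale the components by 180/pi, which the normalise() call then has to undo.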

    Read the article

  • How do I access PhysicalMaterial from the Actor class?

    - by EmAdpres
    I use Projectile for my weapon system, and UDKProjectile has two main functions that handle the hits of projectiles (= the bullets of my weapon):

        simulated function ProcessTouch(Actor Other, Vector HitLocation, Vector HitNormal) // For Actors
        simulated event HitWall(vector HitNormal, actor Wall, PrimitiveComponent WallComp) // Everything except Actors (I guess)

    In the first function, the engine just gives me the actor that was hit. My question is: how can I get that actor's physical material from the first parameter (Other), in order to react appropriately (for example, play the proper collision sound)? A tricky (but hateful) way that I know works is to make a Trace from a little way behind that actor to the actor, and use the HitInfo parameter, which includes the physical material. But there should be a more standard way!

    Read the article

  • Using normals in DirectX 10

    - by Dave
    I've got a working OBJ loader that loads vertices, indices, texture coordinates, and normals. Right now it doesn't process the texture coordinates or normals, but it stores them in arrays and creates a valid mesh from the vertices and indices. Now I am trying to figure out how I can make the shader use the correct normal from the array for the current vertex, given that I can't call setnormals() on my mesh. If I were to just use an index into my array of normals corresponding to the index in the vertices, how would I retrieve the current index the shader is processing? BTW: I am trying to write a Blinn-Phong shader technique. Also, when I create the input layout and I've added the NORMAL semantic to it, how would I list the multiple semantics in that single parameter? Would I just separate them with a space? PS: If you need to see any code, just let me know.

    Read the article

  • Very basic OpenGL ES 2 error

    - by user16547
    This is an incredibly simple shader, yet I'm having a lot of trouble understanding what's wrong with it. I'm trying to send a float to my fragment shader; its purpose is to adjust the alpha of the fragment colour. Here is my fragment shader:

        precision mediump float;

        uniform sampler2D u_Texture;
        uniform float u_Alpha;

        varying vec2 v_TexCoordinate;

        void main()
        {
            gl_FragColor = texture2D(u_Texture, v_TexCoordinate);
            gl_FragColor.a *= u_Alpha;
        }

    and below is my rendering method. I get a 1282 (invalid operation) error on the GLES20.glUniform1f(u_Alpha, alpha); line. alpha is 1 (but I tried other values as well), and transparent is true:

        public void render() {
            GLES20.glUseProgram(mProgram);
            if (transparent) {
                GLES20.glEnable(GLES20.GL_BLEND);
                GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
                GLES20.glUniform1f(u_Alpha, alpha);
            }

            Matrix.setIdentityM(mModelMatrix, 0);
            Matrix.rotateM(mModelMatrix, 0, angle, 0, 0, 1);
            Matrix.translateM(mModelMatrix, 0, x, y, z);
            Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
            Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
            GLES20.glUniformMatrix4fv(u_MVPMatrix, 1, false, mMVPMatrix, 0);

            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
            GLES20.glVertexAttribPointer(a_Position, 3, GLES20.GL_FLOAT, false, 12, 0);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[1]);
            GLES20.glVertexAttribPointer(a_TexCoordinate, 2, GLES20.GL_FLOAT, false, 8, 0);

            //snowTexture start
            GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
            GLES20.glUniform1i(u_Texture, 0);

            GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ibo[0]);
            GLES20.glDrawElements(GLES20.GL_TRIANGLE_STRIP, indices.capacity(), GLES20.GL_UNSIGNED_BYTE, 0);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
            GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);

            if (transparent) {
                GLES20.glDisable(GLES20.GL_BLEND);
            }

            GLES20.glUseProgram(0);
        }

    Read the article

  • Can I change the order of these OpenGL / Win32 calls?

    - by Adam Naylor
    I've been adapting the NeHe OpenGL/Win32 code to be more object-oriented, and I don't like the way some of the calls are structured. The example has the following pseudo-structure:

    Register window class
    Change display settings with a DEVMODE (*)
    Adjust window rect
    Create window
    Get DC
    Find closest matching pixel format (*)
    Set the pixel format to closest match (*)
    Create rendering context (*)
    Make that context current (*)
    Show the window
    Set it to foreground
    Set it to having focus
    Resize the GL scene (*)
    Init GL (*)

    The starred items are what I want to move into a rendering class (the rest are what I see as pure Win32 calls), but I'm not sure whether I can call them after the Win32 calls. Essentially what I'm aiming for is to encapsulate the Win32 calls into a Platform::Initiate() type method and the rest into a sort of Renderer::Initiate() method. So my question essentially boils down to: "Would OpenGL allow these methods to be called in this order?"

    Register window class
    Adjust window rect
    Create window
    Get DC
    Show the window
    Set it to foreground
    Set it to having focus
    Change display settings with a DEVMODE
    Find closest matching pixel format
    Set the pixel format to closest match
    Create rendering context
    Make that context current
    Resize the GL scene
    Init GL

    (obviously passing through the appropriate window handles and device contexts.) Thanks in advance.

    Read the article

  • Avoiding orbiting in pursuit steering behavior

    - by bobobobo
    I have a missile that performs pursuit behavior to track (and try to impact) its (stationary) target. It works fine as long as you are not strafing when you launch the missile. If you are strafing, the missile tends to orbit its target. I fixed this by handling the tangential velocity first: I accelerate along -vT until the tangential component vT is nearly 0, then accelerate along vN, beelining for the target. While that works, I'm looking for a more elegant solution where the missile can impact the target without explicitly killing the tangential component first.
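
    One formulation that avoids the explicit two-phase logic is Reynolds-style seek steering: accelerate toward the difference between the desired velocity (straight at the target, at full speed) and the current velocity. That correction always contains a component opposing vT, so the tangential velocity decays continuously while the missile closes in, instead of being killed in a separate phase. A sketch in Python, using plain 3-element sequences:

        import math

        def seek_acceleration(pos, vel, target, max_speed, max_accel):
            """Steer toward (desired - current) velocity, clamped to max_accel."""
            to_target = [t - p for t, p in zip(target, pos)]
            dist = math.sqrt(sum(c * c for c in to_target)) or 1.0
            desired = [c / dist * max_speed for c in to_target]   # full speed at target
            steer = [d - v for d, v in zip(desired, vel)]         # opposes any vT
            mag = math.sqrt(sum(c * c for c in steer)) or 1.0
            return [c * min(1.0, max_accel / mag) for c in steer]

    Whether orbiting disappears entirely still depends on max_accel relative to launch speed; with too little acceleration, any pursuer will circle a close target.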

    Read the article

  • Adding a root bone in 3DS Max?

    - by carlturtle
    My animation artist has made me a nice first-person pair of arms, animated it, textured it, and given it to me. Then he went on vacation. Now I am programming my animations and trying to test the model he gave me. Building my project gives me a warning:

        Multiple skeletons were found in the file. The first skeleton, named "frame l upperarm" has been moved to be a child of the scene root. The other, "frame r upperarm", will be ignored. Fragment identifier "frame r upperarm".

    Then an error:

        "Vertex is bound to bone "frame l forearm", but this bone is not present in the skeleton."

    I realize this means there are two skeletons, as described in this question: Importing 3d model with multiple skeletons. I have 3DS Max, but I have no idea how to use it, and Google/CGTalk/Polycount turn up nothing relevant on how to add a root bone or combine skeletons. If anyone knows how, it would help me out greatly. Thanks.

    Read the article

  • How can I model a pendulum blade?

    - by Micah Delane Bolen
    Like this one from Saw V (image in the original post): What primitive shape(s) would you start out with? How would you transform the primitive shape(s) to give it a nice, smooth, sharp blade on one side without distorting the entire object in a weird way? I tried starting out with a cylinder and then subtracting the top half using a duplicate cylinder and a difference modifier, but I ended up distorting the entire object when I tried to pull the "blade" edges together. I think I need to add lattices to smoothly "sharpen" the edge of the blade.

    Read the article

  • DirectX 11 Constant Buffers vs Effect Framework

    - by Alex
    I'm having some trouble understanding the difference between using constant buffers directly and using the DirectX 11 effects framework to update shader constants. From what I understand they both do exactly the same thing, although from reading the documentation it appears that using effects is meant to be 'easier'. They still seem the same to me, though; one uses VSSetConstantBuffers and the other GetConstantBufferByName. Is there something I'm missing here?

    Read the article
