Search Results

Search found 29201 results on 1169 pages for 'game development'.


  • Bullet Physics implementing custom MotionState class

    - by Arosboro
    I'm trying to make my engine's camera a kinematic rigid body that can collide with other rigid bodies. I've overridden the btMotionState class and implemented setKinematicPos, which updates the motion state's transform. I use the overridden class when creating my kinematic body, but the collision detection fails. I'm doing this for fun, trying to add collision detection and physics to Sean O'Neil's Procedural Universe. I referred to the Bullet wiki on MotionStates for my CPhysicsMotionState class. If it helps I can add the code for the planetary rigid bodies, but I didn't want to clutter the post. Here is my motion state class: class CPhysicsMotionState: public btMotionState { protected: // This is the transform with position and rotation of the camera CSRTTransform* m_srtTransform; btTransform m_btPos1; public: CPhysicsMotionState(const btTransform &initialpos, CSRTTransform* srtTransform) { m_srtTransform = srtTransform; m_btPos1 = initialpos; } virtual ~CPhysicsMotionState() { // TODO Auto-generated destructor stub } virtual void getWorldTransform(btTransform &worldTrans) const { worldTrans = m_btPos1; } void setKinematicPos(btQuaternion &rot, btVector3 &pos) { m_btPos1.setRotation(rot); m_btPos1.setOrigin(pos); } virtual void setWorldTransform(const btTransform &worldTrans) { btQuaternion rot = worldTrans.getRotation(); btVector3 pos = worldTrans.getOrigin(); m_srtTransform->m_qRotate = CQuaternion(rot.x(), rot.y(), rot.z(), rot.w()); m_srtTransform->SetPosition(CVector(pos.x(), pos.y(), pos.z())); m_btPos1 = worldTrans; } }; I add a rigid body for the camera: // Create rigid body for camera btCollisionShape* cameraShape = new btSphereShape(btScalar(5.0f)); btTransform startTransform; startTransform.setIdentity(); // forgot to add this line CVector vCamera = m_srtCamera.GetPosition(); startTransform.setOrigin(btVector3(vCamera.x, vCamera.y, vCamera.z)); m_msCamera = new CPhysicsMotionState(startTransform, &m_srtCamera); btScalar tMass(80.7f); bool isDynamic = (tMass != 0.f); btVector3 localInertia(0,0,0); if (isDynamic) cameraShape->calculateLocalInertia(tMass,localInertia); btRigidBody::btRigidBodyConstructionInfo rbInfo(tMass, m_msCamera, cameraShape, localInertia); m_rigidBody = new btRigidBody(rbInfo); m_rigidBody->setCollisionFlags(m_rigidBody->getCollisionFlags() | btCollisionObject::CF_KINEMATIC_OBJECT); m_rigidBody->setActivationState(DISABLE_DEACTIVATION); This is the code in Update() that runs each frame: CSRTTransform srtCamera = CCameraTask::GetPtr()->GetCamera(); Quaternion qRotate = srtCamera.m_qRotate; btQuaternion rot = btQuaternion(qRotate.x, qRotate.y, qRotate.z, qRotate.w); CVector vCamera = CCameraTask::GetPtr()->GetPosition(); btVector3 pos = btVector3(vCamera.x, vCamera.y, vCamera.z); CPhysicsMotionState* cameraMotionState = CCameraTask::GetPtr()->GetMotionState(); cameraMotionState->setKinematicPos(rot, pos);
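
    For reference, a minimal sketch of the pattern Bullet expects for kinematic bodies (class and member names here are hypothetical, not from the poster's engine): the dynamics world polls getWorldTransform() every simulation step for bodies flagged CF_KINEMATIC_OBJECT, so that override has to return the transform set from the camera each frame, and the body needs DISABLE_DEACTIVATION (as in the post) so it keeps being polled.

        // Minimal sketch: a motion state whose getWorldTransform() always reports
        // the latest camera transform. Bullet polls this every step for bodies
        // flagged CF_KINEMATIC_OBJECT, which is how the broadphase learns the
        // body has moved and can report collisions against it.
        #include <btBulletDynamicsCommon.h>

        class KinematicCameraMotionState : public btMotionState {
        public:
            explicit KinematicCameraMotionState(const btTransform& start)
                : m_transform(start) {}

            // Called by Bullet each step for kinematic bodies.
            void getWorldTransform(btTransform& worldTrans) const override {
                worldTrans = m_transform;
            }

            // Kinematic bodies are driven by the game, so Bullet never calls this.
            void setWorldTransform(const btTransform&) override {}

            // Called from the game's update with the camera's current pose.
            void setKinematicTransform(const btQuaternion& rot, const btVector3& pos) {
                m_transform.setRotation(rot);
                m_transform.setOrigin(pos);
            }

        private:
            btTransform m_transform;
        };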

    Read the article

  • Why isn't the light continuous on my model?

    - by nosferat
    I created a basic textured cube model with Blender to practice modeling, and then I imported it into Unity. After I put up some lighting it looks pretty ugly: the light is not continuous across a row of textured cubes. What is more odd, the light on the blocks that make up the floor is continuous. What am I doing wrong? UPDATE: This is how it looks without textures: https://dl.dropbox.com/u/45620018/without%20textures.PNG If I didn't know these were perfect cubes, I'd say there is a slight curve on the surface. I also tried lightening the texture, but that didn't help either: https://dl.dropbox.com/u/45620018/lighter%20texture.PNG I simply exported the model from Blender and did not set up any normals or anything like that. However, I also did not do anything special with the floor brick model.

    Read the article

  • A way to store potentially infinite 2D map data?

    - by Blam
    I have a 2D platformer that currently handles chunks of 100 by 100 tiles, with the chunk coordinates stored as longs, so that is the only limit on map size (maxlong*maxlong). All entity positions etc. are chunk-relative, so there is no limit there. The problem I'm having is how to store and access these chunks without ending up with thousands of files. Any ideas for a preferably quick, low-disk-cost archive format that doesn't need to open everything at once?
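
    One common layout, sketched below in C++ rather than tied to any particular library: group many chunks into a single "region" file and keep a small in-memory index of which chunk lives at which byte offset, so only the chunks near the player are ever read. The 100x100 size and uint16 tile ids simply mirror the figures in the question; compressing each chunk before writing (e.g. with zlib) is an optional extra.

        #include <cstdint>
        #include <fstream>
        #include <map>
        #include <utility>
        #include <vector>

        // In-memory index: (chunkX, chunkY) -> byte offset of that chunk in the file.
        using ChunkIndex = std::map<std::pair<int64_t, int64_t>, std::streamoff>;

        // Reads one 100x100 chunk (raw uint16 tile ids) from the region file.
        // Returns false if the chunk was never written.
        bool loadChunk(std::ifstream& file, const ChunkIndex& index,
                       int64_t cx, int64_t cy, std::vector<uint16_t>& tiles)
        {
            auto it = index.find({cx, cy});
            if (it == index.end()) return false;   // chunk not generated yet
            tiles.resize(100 * 100);
            file.seekg(it->second);
            file.read(reinterpret_cast<char*>(tiles.data()),
                      static_cast<std::streamsize>(tiles.size() * sizeof(uint16_t)));
            return file.good();
        }

    Only chunks that have actually been generated need an index entry, which also covers the "potentially infinite" part: absent chunks cost nothing on disk or in memory.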

    Read the article

  • Rotation angle based on touch move

    - by Siddharth
    I want to rotate my stick based on the movement of the touch on the screen. My calculation does not give the correct angle in degrees, so please provide guidance; my code snippet for that is below. if (pSceneTouchEvent.isActionMove()) { pValueX = pSceneTouchEvent.getX(); pValueY = CAMERA_HEIGHT - pSceneTouchEvent.getY(); rotationAngle = (float) Math.atan2(pValueX, pValueY); stick.setRotation((float) MathUtils.radToDeg(rotationAngle)); }
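
    The snippet above is Java (AndEngine), but the underlying math is language independent. A small C++ illustration of the usual atan2 convention, with hypothetical names; the key point is the (y, x) argument order and that the result is measured from the positive x-axis:

        #include <cmath>

        // Angle, in degrees, of the vector from a pivot point to the touch point.
        float angleToTouchDegrees(float pivotX, float pivotY,
                                  float touchX, float touchY) {
            const float kPi = 3.14159265f;
            float dx = touchX - pivotX;
            float dy = touchY - pivotY;
            float radians = std::atan2(dy, dx);   // note the (y, x) order
            return radians * 180.0f / kPi;        // convert to degrees
        }

    If the stick artwork points "up" rather than along the positive x-axis, the result usually needs a 90-degree offset (or swapped/negated arguments) to match that convention, and the engine's rotation direction and y-axis orientation are worth checking as well.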

    Read the article

  • Blending effect on textures

    - by joecks
    Hi, I am trying to build screen animations like flickering, interlacing, and color separation, similar to old-style malfunctioning Amiga screens. The intended effects are shown in this video. I am using libgdx and I have already discovered the Universal Tween Engine, which helps a lot with building transitional animations, but how should I approach those blending effects? Any suggestions? I will make my question more specific once I've learned more about libgdx, but maybe you could give me some hints already. Thanks!

    Read the article

  • Point line collision reaction

    - by user4523
    I am trying to program point-line-segment collision detection and reaction. I am doing this for fun and to learn. The point moves (it has a velocity, and can be controlled by the user), whilst the lines are straight and stationary. The lines are not axis aligned. Everything is in 2D. It is quite straightforward to work out if a collision has occurred. For each frame, the point moves from A to B. AB is a line, and if it crosses the line segment, a collision has occurred (or will occur) and I am able to work out the point of intersection (poi). The problem I am having is with the reaction. Ideally I would like the point to be prevented from moving across the line. In one frame, I can move the point back to the poi (or only allow it to move as far as the poi), and alter the velocity. The problem I am having with this approach (I think) is that next frame the user may try to cross the line again. Although the point is on the poi, the point may not be exactly on the line. Since it is not axis aligned, I think there is always some subtle rounding issue (a float representation of a point on a line might be rounded to a point that is slightly on one side or the other). Because of this, next frame the path might not intersect the line (because it can start on the other side and move away from it) and the point is effectively allowed to cross the line. Firstly, does the analysis sound correct? Having accepted (maybe) that I cannot always exactly position the point on the line, I tried to move the point away from the line slightly (either along the normal to the line, or along the path vector). I then get a problem at edges. Attempting to fix one collision by moving the point away from the line (even slightly) can cause it to cross another line (one shape I am dealing with is a star, with sharp corners). This can mean that the solution to one collision inadvertently creates another collision, which is ignored. Again, does this sound correct? Anyway, whatever I try, I am having difficulty with edges, and the point is occasionally able to penetrate the polygons and cross lines, which is undesirable. Whilst I can find a lot of information about collision detection on the web (and on this site) I can find precious little information on collision reaction. Does anyone know of any good point-line collision reaction tutorials? Or is my approach too flawed/overcomplicated?
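
    A minimal C++ sketch of the "keep an epsilon gap" idea the question is circling around, for a single infinite wall (segment endpoints and multiple walls need extra checks); the names are hypothetical. Instead of stopping exactly at the intersection point, the normal component of the velocity is removed and the point is kept a small epsilon on the original side, so floating-point rounding cannot leave it straddling the line next frame.

        #include <cmath>

        struct Vec2 { float x, y; };

        static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        // Moves 'pos' by 'vel'; if the move would cross the wall through point
        // 'p' with unit normal 'n', the point slides along the wall instead.
        void moveAndSlide(Vec2& pos, Vec2& vel, Vec2 p, Vec2 n, float eps = 0.001f) {
            Vec2 target{pos.x + vel.x, pos.y + vel.y};
            float side   = dot({pos.x - p.x,    pos.y - p.y},    n); // current side
            float sideTo = dot({target.x - p.x, target.y - p.y}, n); // side after move
            if (side >= 0.0f && sideTo < eps) {
                // Would cross (or get too close): push the target back onto the
                // allowed side and cancel the normal component of the velocity.
                float push = eps - sideTo;
                target.x += n.x * push;
                target.y += n.y * push;
                float vn = dot(vel, n);
                vel.x -= n.x * vn;
                vel.y -= n.y * vn;
            }
            pos = target;
        }

    For the sharp-corner problem, the usual fix is to gather every edge the move violates and resolve them together (or iterate the resolution a few times per frame), instead of fixing one edge and ignoring the new collision that fix creates.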

    Read the article

  • Recommended formats to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, and so I need to know how to store bitmaps in memory (24 bpp/32 bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX / OpenGL rendering, so I don't need to use graphics-card-compatible bitmap formats. My questions: What is the "usual" or "normal" way to store bitmaps in memory? (in C++ engines/projects?) How to store bitmaps for high-performance algorithms, such that read/write times are the fastest? (fixed array? with/without padding? 24-bpp or 32-bpp?) How to store bitmaps for applications handling a lot of bitmap data, to minimize memory usage? (JPEG? or a faster [de]compression algorithm?) Some possible methods: Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access, with all pixels allocated in one contiguous memory chunk (could be 1-10 MB). Use a form of "sparse" data storage so each line of the bitmap is allocated separately, allowing memory to be reused and requiring smaller contiguous memory segments. Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack them only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
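
    On the "fixed packed array" option, a minimal C++ sketch of what that usually looks like in practice (names hypothetical): 32 bpp, one contiguous allocation, and the row stride kept separate from the width so rows can be padded for alignment without changing the access code.

        #include <cstdint>
        #include <vector>

        struct Bitmap32 {
            int width  = 0;
            int height = 0;
            int stride = 0;                  // bytes per row, >= width * 4
            std::vector<uint8_t> pixels;     // height * stride bytes

            Bitmap32(int w, int h, int rowAlign = 4)
                : width(w), height(h),
                  stride(((w * 4 + rowAlign - 1) / rowAlign) * rowAlign),
                  pixels(static_cast<size_t>(stride) * h) {}

            // Direct pixel access: one multiply, one add, no per-pixel branching.
            uint32_t* row(int y) {
                return reinterpret_cast<uint32_t*>(
                    pixels.data() + static_cast<size_t>(y) * stride);
            }
            uint32_t& at(int x, int y) { return row(y)[x]; }
        };

    32 bpp is generally the friendlier choice for random access, since every pixel starts on a 4-byte boundary, at the cost of one unused byte per pixel compared with 24 bpp.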

    Read the article

  • Smooth drag scrolling of an Isometric map - Html5

    - by user881920
    I have implemented a basic Isometric tile engine that can be explored by dragging the map with the mouse. Please see the fiddle below and drag away: http://jsfiddle.net/kHydg/14/ The code broken down is (CoffeeScript): The draw function draw = -> requestAnimFrame draw canvas.width = canvas.width for row in [0..width] for col in [0..height] xpos = (row - col) * tileHeight + width xpos += (canvas.width / 2) - (tileWidth / 2) + scrollPosition.x ypos = (row + col) * (tileHeight / 2) + height + scrollPosition.y context.drawImage(defaultTile, Math.round(xpos), Math.round(ypos), tileWidth, tileHeight) Mouse drag-scrolling control scrollPosition = x: 0 y: 0 dragHelper = active: false x: 0 y: 0 window.addEventListener 'mousedown', (e) => handleMouseDown(e) , false window.addEventListener 'mousemove', (e) => handleDrag(e) , false window.addEventListener 'mouseup', (e) => handleMouseUp(e) , false handleDrag = (e) => e.preventDefault() if dragHelper.active x = e.clientX y = e.clientY scrollPosition.x -= Math.round((dragHelper.x - x) / 28) scrollPosition.y -= Math.round((dragHelper.y - y) / 28) handleMouseUp = (e) => e.preventDefault() dragHelper.active = false handleMouseDown = (e) => e.preventDefault() x = e.clientX y = e.clientY dragHelper.active = true dragHelper.x = x dragHelper.y = y The Problem As you can see from the fiddle the dragging action is ok but not perfect. How would I change the code to make the dragging action more smooth? What I would like is the point of the map you click on to stay under the mouse point whilst you drag; the same as they have done here: http://craftyjs.com/demos/isometric/

    Read the article

  • OpenGL Lighting

    - by gopgop
    I have a simple day and night cycle: during the day I disable OpenGL lighting, and at night I enable it. When I enable it, everything appears darker. My question is: how would I make it so that at a specific spot there is a light that only lights up its surrounding area? For example: http://media.giantbomb.com/uploads/0/276/1414275-light_large.png The spot where the light is in that image is where I want to position my light. My application is in 2D.
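
    Since the day/night cycle already relies on fixed-function OpenGL lighting, one sketch of the "torch" idea is a positional light (w = 1) with distance attenuation; the numbers below are only illustrative, not tuned values.

        #include <GL/gl.h>

        // Places a point light at (x, y) that falls off with distance,
        // so only nearby geometry is noticeably lit.
        void placeTorchLight(float x, float y) {
            GLfloat pos[]     = { x, y, 0.0f, 1.0f };   // w = 1 -> positional light
            GLfloat ambient[] = { 0.05f, 0.05f, 0.05f, 1.0f };
            GLfloat diffuse[] = { 1.0f, 0.9f, 0.6f, 1.0f };

            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            // Note: GL_POSITION is transformed by the current modelview matrix
            // at the moment this call is made.
            glLightfv(GL_LIGHT0, GL_POSITION, pos);
            glLightfv(GL_LIGHT0, GL_AMBIENT,  ambient);
            glLightfv(GL_LIGHT0, GL_DIFFUSE,  diffuse);

            // Attenuation makes the light fade out away from (x, y).
            glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION,  1.0f);
            glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION,    0.02f);
            glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.002f);
        }

    One caveat: fixed-function lighting is evaluated per vertex, so a single large 2D quad needs to be subdivided before a smooth pool of light shows up; an alternative for 2D is multiplying a radial "light map" texture over the scene instead.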

    Read the article

  • Marching squares: Finding multiple contours within one source field?

    - by TravisG
    Principally, this is a follow-up question to a problem from a few weeks ago, though it is about the algorithm in general, without application to my actual problem. The algorithm basically searches through all lines of the picture, starting from the top left, until it finds a pixel that is a border. In pseudo-C++: int start = 0; for(int i=0; i<amount_of_pixels; ++i) { if(pixels[i] == border) { start = i; break; } } When it finds one, it starts the marching squares algorithm and finds the contour of whatever object the pixel belongs to. Let's say I have something like this, where everything except the color white is a border, and I have found the contour points of the first blob. For the general algorithm it's over: it found a contour and has done its job. How can I move on to the other two blobs to find their contours as well?
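
    A sketch of the usual extension, in the same pseudo-C++ style (traceContour stands in for the existing marching-squares routine and the names are hypothetical): keep a "claimed" mask, and when the scan hits an unclaimed border pixel, trace that contour, mark every border pixel it walks over, then keep scanning from the next pixel instead of stopping after the first blob.

        #include <vector>

        // Stand-in for the existing marching-squares routine; it should set
        // claimed[p] = true for every border pixel the traced contour visits.
        void traceContour(int start, const std::vector<int>& pixels,
                          std::vector<bool>& claimed, int width);

        void findAllContours(const std::vector<int>& pixels, int width, int border) {
            std::vector<bool> claimed(pixels.size(), false);
            for (size_t i = 0; i < pixels.size(); ++i) {
                if (pixels[i] == border && !claimed[i]) {
                    // A border pixel belonging to a blob we haven't traced yet.
                    traceContour(static_cast<int>(i), pixels, claimed, width);
                }
            }
        }

    Because every blob's outer contour passes through the first border pixel the scanline meets for that blob, marking the traced pixels is enough to avoid tracing the same blob twice.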

    Read the article

  • Corona sdk events dispatched with dispatchEvent() are handled directly upon call. Why so?

    - by Amoxus
    I noticed to my surprise that an event created with dispatchEvent(event) gets handled directly when called, and not together with other events at a specific phase of the frame loop. Two main reasons for having an event system are: so that you can call code B from code A while still prioritizing code A, and to make sure there are no freaky loopedy loops where code A calls code B calls code A ... I wonder what Ansca's rationale is for having events handled directly this way. And does Corona handle loopedy loops and other such pitfalls gracefully? The following code demonstrates dispatchEvent(): T= {} Z = display.newRect(100,100,100,100) function T.doSomething() print("T.doSomething: begun") local event = { name="myEventType", target=T } Z:dispatchEvent( event ) print("T.doSomething: ended") end function Z.sayHello(event) print("Z.sayHello: begun and ended") end Z:addEventListener("myEventType", Z.sayHello) print("Main: begun") T.doSomething() print("Main: ended") However, Ansca claims the contrary at http://developer.coronalabs.com/reference/index/objectdispatchevent Can anyone clear this up a little? (Using Corona simulator V 2012.840)

    Read the article

  • OpenGL render to texture causing edge artifacts

    - by mysticalOso
    This is my first post here so any help would be massively appreciated :) I'm using C++ with SDL and OpenGL 3.3. When rendering directly to the screen I get the following result, and when I render to texture this happens. Anti-aliasing is turned off for both. I'm guessing this has something to do with depth buffer accuracy, but I've tried a lot of different methods to improve the result with no success :( I'm currently using the following code to set up my FBO: GLuint frameBufferID; glGenFramebuffers(1, &frameBufferID); glBindFramebuffer(GL_FRAMEBUFFER, frameBufferID); glGenTextures(1, &coloursTextureID); glBindTexture(GL_TEXTURE_2D, coloursTextureID); glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,SCREEN_WIDTH,SCREEN_HEIGHT,0,GL_RGB,GL_UNSIGNED_BYTE,NULL); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); //Depth buffer setup GLuint depthrenderbuffer; glGenRenderbuffers(1, &depthrenderbuffer); glBindRenderbuffer(GL_RENDERBUFFER, depthrenderbuffer); glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, SCREEN_WIDTH,SCREEN_HEIGHT); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthrenderbuffer); glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, coloursTextureID, 0); GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0}; glDrawBuffers(1, DrawBuffers); // if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) return false; Thank you so much for any help :)

    Read the article

  • I get GL_INVALID_VALUE after calling glTexSubImage2D

    - by user892644
    I am trying to figure out why my texture allocation does not work. Here is the code: glTexStorage2D(GL_TEXTURE_2D, 2, GL_RGBA8, 2048, 2048); glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 2048, 2048, GL_RGB, GL_UNSIGNED_SHORT_5_6_5_REV, &BitMap[0]); glTexSubImage2D returns GL_INVALID_VALUE, but the maximum texture size allowed is 16384x16384 on my card. The source image is 16-bit (Red 5, Green 6, Blue 5).

    Read the article

  • Problem with sprite direction and rotation

    - by user2236165
    I have a sprite called Tool that moves with a speed represented as a float and in a direction represented as a Vector2. When I click the mouse on the screen the sprite change its direction and starts to move towards the mouseclick. In addition to that I rotate the sprite so that it is facing in the direction it is heading. However, when I add a camera that is suppose to follow the sprite so that the sprite is always centered on the screen, the sprite won't move in the given direction and the rotation isn't accurate anymore. This only happens when I add the Camera.View in the spriteBatch.Begin(). I was hoping anyone could maybe shed a light on what I am missing in my code, that would be highly appreciated. Here is the camera class i use: public class Camera { private const float zoomUpperLimit = 1.5f; private const float zoomLowerLimit = 0.1f; private float _zoom; private Vector2 _pos; private int ViewportWidth, ViewportHeight; #region Properties public float Zoom { get { return _zoom; } set { _zoom = value; if (_zoom < zoomLowerLimit) _zoom = zoomLowerLimit; if (_zoom > zoomUpperLimit) _zoom = zoomUpperLimit; } } public Rectangle Viewport { get { int width = (int)((ViewportWidth / _zoom)); int height = (int)((ViewportHeight / _zoom)); return new Rectangle((int)(_pos.X - width / 2), (int)(_pos.Y - height / 2), width, height); } } public void Move(Vector2 amount) { _pos += amount; } public Vector2 Position { get { return _pos; } set { _pos = value; } } public Matrix View { get { return Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) * Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) * Matrix.CreateTranslation(new Vector3(ViewportWidth * 0.5f, ViewportHeight * 0.5f, 0)); } } #endregion public Camera(Viewport viewport, float initialZoom) { _zoom = initialZoom; _pos = Vector2.Zero; ViewportWidth = viewport.Width; ViewportHeight = viewport.Height; } } And here is my Update and Draw-method: protected override void Update (GameTime gameTime) { float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds; TouchCollection touchCollection = TouchPanel.GetState (); foreach (TouchLocation tl in touchCollection) { if (tl.State == TouchLocationState.Pressed || tl.State == TouchLocationState.Moved) { //direction the tool shall move towards direction = touchCollection [0].Position - toolPos; if (direction != Vector2.Zero) { direction.Normalize (); } //change the direction the tool is moving and find the rotationangle the texture must rotate to point in given direction toolPos += (direction * speed * elapsed); RotationAngle = (float)Math.Atan2 (direction.Y, direction.X); } } if (direction != Vector2.Zero) { direction.Normalize (); } //move tool in given direction toolPos += (direction * speed * elapsed); //change cameracentre to the tools position Camera.Position = toolPos; base.Update (gameTime); } protected override void Draw (GameTime gameTime) { graphics.GraphicsDevice.Clear (Color.Blue); spriteBatch.Begin (SpriteSortMode.BackToFront, BlendState.AlphaBlend, null, null, null, null, Camera.View); spriteBatch.Draw (tool, new Vector2 (toolPos.X, toolPos.Y), null, Color.White, RotationAngle, originOfToolTexture, 1, SpriteEffects.None, 1); spriteBatch.End (); base.Draw (gameTime); }

    Read the article

  • OpenGL doesn't draw (3.3+) [on hold]

    - by Dhiego Magalhães
    Brief: I've been following this tutorial about OpenGL for 2 days, and I still can't get a triangle drawn, so I'm asking for help here. The tutorial targets OpenGL 3.3 programming, using vertex arrays, buffers, etc. The libraries are GLFW3 and GLEW, and I set them up myself. The screen stays black all the time. Full code: link here (it's just like a Hello World OpenGL program). Further details: I get no errors at all. I downloaded software to test my video card, and it supports OpenGL 4.1+. Standard OpenGL code for drawing (from earlier versions), such as this one, works normally. I'm using Microsoft Visual Studio 10.0. I presume the whole OpenGL setup was done right: I added Additional Dependencies to the linker for glew32.lib, opengl32.lib, and glfw3.lib. The glew.dll was placed in SysWOW64, because I'm running 64-bit Windows and GLEW is 32-bit. Notes: I've been working hard to find out what this is, but I can't. I would appreciate it if anyone could test this code for me, so I can know whether I implemented something wrong and that it's not my code.

    Read the article

  • Speed up content loading

    - by user1806687
    I am using the WinForms sample downloaded from the Microsoft website. The problem is that the model loading time is quite long when using: contentBuilder.Add(ModelPath, ModelName, null, "ModelProcessor"); contentManager.Load<Model>(ModelName); even a simple model, such as a cube with no textures, takes 4+ seconds to load. Now, I am no expert on this, but is there any way to decrease loading time? EDIT: I've gone through the code and found that calling contentBuilder.Build(), which comes right after the contentBuilder.Add() method, takes up most of the time.

    Read the article

  • DirectX 9.0c and C++ GUI

    - by SullY
    Well, I'm trying to code a GUI for my engine, but I've got some problems. I know how to make a UI overlay, but buttons are still black magic for me. Everything I tried was too complicated (once it gets big). For example, I tried checking whether the mouse position is the same as the pixel that is showing the button, but with bigger areas that gets too complicated. Now I'm searching for a tutorial on how to implement your own GUI. I'm really confused about it. I hope you have or know some good tutorials. By the way, I took a look at the DXUTSample, but it's too big to get an overview of.
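
    For context, a minimal sketch of how most GUIs test buttons: each button is just a screen-space rectangle, and the hit test is four comparisons per button rather than any per-pixel lookup. The names below are hypothetical and not part of DXUT.

        // A button is a rectangle in screen coordinates.
        struct GuiButton {
            int x, y, width, height;

            bool contains(int mouseX, int mouseY) const {
                return mouseX >= x && mouseX < x + width &&
                       mouseY >= y && mouseY < y + height;
            }
        };

        // Typical per-frame usage: highlight on hover, fire on release inside the rect.
        void updateButton(const GuiButton& b, int mouseX, int mouseY,
                          bool mouseReleased, bool& hovered, bool& clicked) {
            hovered = b.contains(mouseX, mouseY);
            clicked = hovered && mouseReleased;
        }

    Non-rectangular or rotated buttons are usually handled by testing against a rectangle first and only then doing a finer check, so the simple case stays cheap.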

    Read the article

  • OutOfBounds Exception when creating a PolygonShape using jbox2d

    - by B3nGr33ni3r
    So here's the deal: I'm parsing a file that contains the vertices for a polygon that I want to create in box2d. I create a new PolygonShape() and then call .set(), giving it a defined array of Vec and that array's .length property. I expected this to work, since the documentation for jbox2d says this method takes a Vec array and the count of Vec objects in that array. However, it errors out, and it seems to be unrelated to my code. The error I get is Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 8 at org.jbox2d.collision.shapes.PolygonShape.set(PolygonShape.java:174) and, upon looking at that line in the jbox2d svn repository, I still cannot figure out the issue. Any help is appreciated!

    Read the article

  • Any way to edit Warcraft MDX or MDL Animated models?

    - by Aralox
    I have been searching for a while for a way to get an animated mdl or mdx model into any 3D animation software (such as Blender), but so far have not had any success. I found a few methods of getting textured static mdx or mdl models into Blender/Milkshape/Hexagon, but no one seems to have written an importer that deals with the mdl/mdx model's keyframe animation. On that note, if anyone knows of a way of importing a keyframe-animated 3DS model into Blender, a lot of people and I would appreciate it if you could let us know. Thanks for any help! :) PS: For anyone curious about static MDL or MDX - Blender, see here: http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Import-Export/WarCraft_MDL

    Read the article

  • Normalizing the direction to check if able to move

    - by spartan2417
    I have a room with 4 walls along the x and z axes respectively. My player, who is in first person (therefore the camera), should have collision detection with these walls. I'm relatively new to this, so please bear with me. I believe the way to do this is to calculate the direction and distance to the wall from the camera and then normalize the directions. However, I can only get this far before I don't know what to do. I think I should work out the angle and the direction I'm facing? Here _dx and _dz are a small buffer in front of the camera. float CalcDirection(float Cam_x, float Cam_z, float Wall_x, float Wall_z) { //Calculate direction and distance to obstacle. float ob_dirx = Cam_x + _dx - Wall_x; float ob_dirz = Cam_z + _dz - Wall_z; float ob_dist = sqrt(ob_dirx*ob_dirx + ob_dirz*ob_dirz); //Normalise directions float ob_norm = sqrt(ob_dirx*ob_dirx + ob_dirz*ob_dirz); ob_dirx = (ob_dirx)/ob_norm; ob_dirz = (ob_dirz)/ob_norm; Can anyone explain in layman's terms how I work out the angle?
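
    Two usual ways to turn a normalised direction like (ob_dirx, ob_dirz) into an angle, sketched in C++ alongside the post's code (the function names are new, not part of it):

        #include <cmath>

        // Absolute heading of the obstacle direction in the XZ plane, in radians,
        // measured from the positive x-axis.
        float headingToObstacle(float dirX, float dirZ) {
            return std::atan2(dirZ, dirX);
        }

        // Angle between the camera's (normalised) facing vector and the
        // (normalised) direction to the wall: 0 means straight ahead, so a small
        // angle combined with a short distance means the camera is about to hit it.
        float angleBetween(float faceX, float faceZ, float dirX, float dirZ) {
            float d = faceX * dirX + faceZ * dirZ;   // dot product of unit vectors
            if (d >  1.0f) d =  1.0f;                // guard against rounding
            if (d < -1.0f) d = -1.0f;
            return std::acos(d);
        }

    For axis-aligned walls there is also a simpler route with no angles at all: clamp the camera's x and z against each wall's plane plus the buffer, which is usually enough for a rectangular room.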

    Read the article

  • LWJGL GL_QUADS texture artifact

    - by Dajgoro Labinac
    I managed to get LWJGL working in Java, and I loaded a test image (a TV test card), but I keep getting weird artifacts outside the image. Image link: http://tinypic.com/r/vhv9g/6 Code: glBegin(GL_QUADS); glTexCoord2f(0, 0); glVertex2i(10, 10); glTexCoord2f(1, 0); glVertex2i(500, 10); glTexCoord2f(1, 1); glVertex2i(500, 500); glTexCoord2f(0, 1); glVertex2i(10, 500); glEnd(); What could be the cause?
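
    Only a guess, since the texture-loading code isn't shown: a common cause of fringes around a textured quad is the default GL_REPEAT wrap mode pulling in the opposite edge of the image when the texture is filtered, or padding added when a non-power-of-two image is uploaded. Clamping is cheap to try; the sketch below uses C/C++ GL calls, and LWJGL exposes the same functions and constants (glTexParameteri in GL11, GL_CLAMP_TO_EDGE in GL12).

        #include <GL/gl.h>

        #ifndef GL_CLAMP_TO_EDGE
        #define GL_CLAMP_TO_EDGE 0x812F   // core since OpenGL 1.2
        #endif

        // Stops sampling from wrapping around at the edges of the texture.
        void clampTextureEdges(GLuint textureId) {
            glBindTexture(GL_TEXTURE_2D, textureId);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        }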

    Read the article

  • View space lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use 3 G-Buffers for storing position (R32F), normal (G16R16F) and albedo (ARGB8). I use the sphere map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value. First, I want to avoid a per-pixel matrix multiplication for calculating the position. Is there another way to store and calculate position in the G-Buffer without the need for a matrix multiplication? Second, I want to store the normal in view space. All lighting in my engine is in world space, and I want to do the lighting in view space to speed up my lighting pass. I want an optimized lighting pass for my deferred engine.
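
    On the first point, a sketch of the common way to drop the full inverse view-projection multiply: store linear view-space depth in the G-buffer and scale a per-pixel frustum ray by it. Written here as plain C++ for one pixel to show the arithmetic (in practice the ray arrives as an interpolated vertex attribute and this runs in the lighting shader); it assumes an OpenGL-style view space with the camera looking down -z and a symmetric perspective projection.

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // ndcX, ndcY are the pixel's normalised device coordinates in [-1, 1],
        // zView is the stored linear view-space depth (positive distance).
        Vec3 viewPositionFromDepth(float ndcX, float ndcY, float zView,
                                   float fovY, float aspect) {
            // Half-extents of the view frustum at unit distance.
            float tanHalfFov = std::tan(fovY * 0.5f);
            Vec3 ray{ ndcX * tanHalfFov * aspect, ndcY * tanHalfFov, -1.0f };
            // Scaling the ray by the stored depth costs two multiplies per pixel
            // instead of a full matrix multiply.
            return Vec3{ ray.x * zView, ray.y * zView, -zView };
        }

    On the second point, storing normals in view space only requires the G-buffer pass to transform them by the rotation part of the view matrix before encoding; the sphere-map encoding works the same either way.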

    Read the article

  • Drawing flaming letters in 3d on OpenGL ES 2.0

    - by Chiquis
    I am a bit confused about how to achieve this. What I want is to "draw with flames". I have achieved this with textures successfully, but now my concern is about doing it with particles to achieve the flaming effect. Am I supposed to have a path and add many particle emitters along it that will "be emitting flames"? I understand the concept for 2D, but for 3D, are the particles (which are quads) always supposed to be facing the user? Edit: Something else I'm worried about is the performance hit from having that many particle emitters, because there can be many letters and drawings at the same time, and each of these elements will have many particle emitters.
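
    On the "always facing the user" part: yes, for this kind of effect the particle quads are normally billboards, rebuilt each frame from the camera's right and up axes. A small C++ sketch of that construction (the vector type and helpers are hypothetical stand-ins):

        struct Vec3 { float x, y, z; };

        static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

        // Fills the four corners of a particle quad centred at 'center', using the
        // camera's right/up axes (the first two rows of the view matrix's rotation),
        // so the quad always faces the viewer.
        void buildBillboard(Vec3 center, Vec3 camRight, Vec3 camUp, float halfSize,
                            Vec3 out[4]) {
            Vec3 r = scale(camRight, halfSize);
            Vec3 u = scale(camUp, halfSize);
            out[0] = add(add(center, scale(r, -1.0f)), scale(u, -1.0f)); // bottom-left
            out[1] = add(add(center, r),               scale(u, -1.0f)); // bottom-right
            out[2] = add(add(center, r),               u);               // top-right
            out[3] = add(add(center, scale(r, -1.0f)), u);               // top-left
        }

    On the performance worry, a common approach is one (or a few) shared particle systems that every letter feeds spawn positions into, rather than a separate emitter object per point along each path, so the per-emitter overhead stays roughly constant.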

    Read the article

  • Textures selectively not applying in Unity

    - by user46790
    On certain imported objects (fbx) in Unity, upon applying a material, only the base colour of the material is applied, with none of the tiled texture showing. This isn't universal; on a test model only some submeshes didn't show the texture, while some did. I have tried every combination of import/calculate normals/tangents to no avail. FYI I'm not exactly experienced with the software or gamedev in general; this is to make a small static scene with 3-4 objects max. One model tested was created in 3DSMax, the other in Blender. I've had this happen on every export from Blender, but only some submeshes from the 3DSMax model (internet sourced to test the problem)

    Read the article

  • Any advice for dynamic music control?

    - by Assembler
    I would like to be able to dynamically progress the score, and affect the volume levels of separate channels within the music. How could I do this? From my experience with mod music (olden days Amiga music, Mod Tracker, Scream Tracker, Fast Tracker II, Impulse Tracker etc etc), I believe this is the best way to tackle the problem, to allow the music to move from one loop to another, without anything mixed down. I want to do this in AS3, and am considering pulling apart Flod to make this happen

    Read the article
