Search Results

Search found 7346 results on 294 pages for 'touch flo 3d'.

Page 49/294 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • Terrain square loading

    - by AndroidXTr3meN
    Games like Skyrim and Morrowind divide the terrain into quads/squares, if I'm correct. The player is always in square #5:

        1 | 2 | 3
        4 | 5 | 6
        7 | 8 | 9

    Whenever you cross a border you unload and load the new "areas". But if the player steps just over the edge and a second later steps back into the previous area, a lot of unnecessary loading and unloading is done. Is there a general approach to this? I don't think games like Skyrim have this issue. Cheers!
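
    A common fix (a minimal sketch, not what Skyrim actually does; the chunk size, margin, and LoadChunk/UnloadChunk helpers are all assumptions) is hysteresis: only re-center the 3x3 window once the player is some margin past the border, and optionally cache unloaded chunks for a while instead of freeing them. Stepping back and forth right on the border then triggers nothing:

        // C# sketch: 3x3 chunk window with a hysteresis margin around the border.
        using System;
        using System.Collections.Generic;

        class ChunkStreamer
        {
            const float ChunkSize = 512f;
            const float Margin = 32f;              // must walk this far past the border
            (int X, int Z) center;                 // chunk the window is centered on
            readonly Dictionary<(int, int), object> loaded = new Dictionary<(int, int), object>();

            public void Update(float playerX, float playerZ)
            {
                // Midpoint of the current center chunk.
                float midX = (center.X + 0.5f) * ChunkSize;
                float midZ = (center.Z + 0.5f) * ChunkSize;
                // Re-center only when the player is Margin beyond the border.
                if (Math.Abs(playerX - midX) > ChunkSize / 2 + Margin ||
                    Math.Abs(playerZ - midZ) > ChunkSize / 2 + Margin)
                {
                    center = ((int)Math.Floor(playerX / ChunkSize),
                              (int)Math.Floor(playerZ / ChunkSize));
                    RefreshWindow();
                }
            }

            void RefreshWindow()
            {
                var wanted = new HashSet<(int, int)>();
                for (int dx = -1; dx <= 1; dx++)
                    for (int dz = -1; dz <= 1; dz++)
                        wanted.Add((center.X + dx, center.Z + dz));

                foreach (var c in wanted)                            // load missing
                    if (!loaded.ContainsKey(c)) loaded[c] = LoadChunk(c);
                foreach (var c in new List<(int, int)>(loaded.Keys)) // drop stale
                    if (!wanted.Contains(c)) { UnloadChunk(c); loaded.Remove(c); }
            }

            object LoadChunk((int X, int Z) c) { return new object(); } // placeholder
            void UnloadChunk((int X, int Z) c) { }                      // placeholder
        }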

    Read the article

  • Using a Vertex Buffer and DrawUserIndexedPrimitives?

    - by MattMcg
    Let's say I have a large but static world and only a single moving object in said world. To increase performance I wish to use a vertex and index buffer for the static part of the world. I set them up and they work fine. However, if I add another draw call to DrawUserIndexedPrimitives (to draw my one moving object) after the call to DrawIndexedPrimitives, it errors out saying a valid vertex buffer must be set. I can only assume the DrawUserIndexedPrimitives call destroyed/replaced the vertex buffer I set. To get around this I must call device.SetVertexBuffer(vertexBuffer) every frame. Something tells me that isn't correct, as it kind of defeats the point of a buffer? To shed some light: the large vertex buffer is the final merged mesh of many repeated cubes (think Minecraft), which I manually create to reduce the number of vertices/indices needed (for example, two connected cubes become one cuboid and the connecting faces are cut out) and the number of matrix translations (as it would suck to do one per cube). The moving objects are other items in the world which are dynamic and not fixed to the block grid, such as NPCs who move constantly. How do I go about handling the large static world while also allowing objects to move about freely?
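
    For what it's worth, re-setting the buffer each frame is the expected pattern in XNA 4.0: DrawUserIndexedPrimitives streams its arrays through an internal buffer and unbinds yours, and SetVertexBuffer only changes device state; the vertex data itself is not re-uploaded. A sketch of the per-frame draw order (staticVb/staticIb and the dynamic arrays are placeholder names):

        // Static world: bind the persistent buffers, then draw.
        GraphicsDevice.SetVertexBuffer(staticVb);
        GraphicsDevice.Indices = staticIb;
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
            0, 0, staticVb.VertexCount, 0, staticPrimitiveCount);

        // Moving object: this call binds its own internal buffers, which is
        // why the static buffers must be re-set on the next frame.
        GraphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList,
            dynamicVertices, 0, dynamicVertices.Length,
            dynamicIndices, 0, dynamicIndices.Length / 3);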

    Read the article

  • Converting a DrawModel() using BasicEffect to one using Effect

    - by Fibericon
    Take this DrawModel() provided by MSDN:

        private void DrawModel(Model m)
        {
            Matrix[] transforms = new Matrix[m.Bones.Count];
            float aspectRatio = graphics.GraphicsDevice.Viewport.Width / graphics.GraphicsDevice.Viewport.Height;
            m.CopyAbsoluteBoneTransformsTo(transforms);
            Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f),
                aspectRatio, 1.0f, 10000.0f);
            Matrix view = Matrix.CreateLookAt(new Vector3(0.0f, 50.0f, Zoom), Vector3.Zero, Vector3.Up);

            foreach (ModelMesh mesh in m.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.View = view;
                    effect.Projection = projection;
                    effect.World = gameWorldRotation * transforms[mesh.ParentBone.Index] * Matrix.CreateTranslation(Position);
                }
                mesh.Draw();
            }
        }

    How would I apply a custom effect to a model with that? Effect doesn't have View, Projection, or World members. This is what they recommend replacing the foreach loop with:

        foreach (ModelMesh mesh in terrain.Meshes)
        {
            foreach (Effect effect in mesh.Effects)
            {
                mesh.Draw();
            }
        }

    Of course, that doesn't really work. What else needs to be done?
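
    With a custom Effect, the matrices are set through the Parameters collection; the parameter names must match whatever the .fx file declares (World/View/Projection below are the usual convention, but an assumption here). A sketch:

        foreach (ModelMesh mesh in m.Meshes)
        {
            foreach (Effect effect in mesh.Effects)
            {
                // Same matrices as the BasicEffect version, set by name.
                effect.Parameters["World"].SetValue(
                    gameWorldRotation * transforms[mesh.ParentBone.Index] * Matrix.CreateTranslation(Position));
                effect.Parameters["View"].SetValue(view);
                effect.Parameters["Projection"].SetValue(projection);
            }
            mesh.Draw(); // ModelMesh.Draw applies each part's CurrentTechnique
        }

    The mesh parts also need the custom effect assigned in the first place (via the content processor or by overwriting each meshPart.Effect); otherwise mesh.Effects still contains BasicEffect instances.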

    Read the article

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. If the scene is managed by an arbitrary structure (an octree, BSP tree, quadtree, kd-tree, etc.), what is the best way to pass it to the render system? The obvious problem is that if the render system were simply given the root node of the structure, it would require intrinsic knowledge of that structure in order to traverse it.

    My solution is to clip all objects outside the frustum in the scene manager, build a list of the objects that remain, and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (a structure the render system requires as a means of knowing which objects should be rendered). The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations.

    I have been thinking a lot about this and searching various terms, but I have not really found a definitive answer. The case may be that there is no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements I could make? Are there any caveats I should know about? Side question: am I right in assuming that octrees, BSP trees, etc. are all forms of BVH?
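
    That design is essentially the common "render queue" pattern. A minimal sketch of the hand-off (all names hypothetical; the scene manager culls and emits items, the renderer sorts by a packed state key to minimise state changes):

        using System.Collections.Generic;

        struct RenderItem
        {
            public uint StateKey;   // packed shader/material/texture ids
            public object Mesh;     // whatever the renderer needs to draw
        }

        class SceneManager
        {
            public List<RenderItem> CollectVisible(/* Frustum frustum */)
            {
                var items = new List<RenderItem>();
                // ... traverse the octree/BSP here, appending only objects
                // that intersect the frustum ...
                return items;
            }
        }

        class Renderer
        {
            public void Draw(List<RenderItem> items)
            {
                items.Sort((a, b) => a.StateKey.CompareTo(b.StateKey));
                foreach (var item in items)
                {
                    // bind state only when StateKey changes, then draw item.Mesh
                }
            }
        }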

    Read the article

  • Calculating a child object's Position, Rotation and Scale values?

    - by Sergio Plascencia
    I am making my own game editor, but have encountered the following problem. I have two objects, A and B:

        A's initial values: Position (3,3,3), Rotation (45,10,0), Scale (1,2,2.5)
        B's initial values: Position (1,1,1), Rotation (10,34,18), Scale (1.5,2,1)

    If I now make B a child of A, I need to re-calculate B's Position, Rotation and Scale relative to A such that it maintains its current position, rotation and scale in world coordinates. So B's position would now be (-2,-2,-2), since A is now its center and (-2,-2,-2) keeps B in the same place. I think I have Position and Scale figured out, but not Rotation.

    So I opened Unity and ran the same example, and I noticed that when making a child object, the child object did not move at all, but had its Position, Rotation and Scale values changed relative to the parent. For example:

        Unity (parent object "A"): Position (0,0,0), Rotation (45,10,0), Scale (1,1,1)
        Unity (child object "B"):  Position (0,0,0), Rotation (0,0,0),  Scale (1,1,1)

    When B becomes a child of A, its rotation values become:

        X: -44.13605
        Y: -14.00195
        Z: 9.851074

    If I plug the same rotation values into the B object in my editor, the object does not move at all. How did Unity arrive at those rotation values for the child? What are the calculations? If you can give the equations for Position, Rotation and Scale, I can double-check that I am doing them correctly, but Rotation is what I really need.
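
    A sketch of the standard re-parenting math (XNA-style row-vector matrices, where world = local * parentWorld; the same idea holds for column-vector conventions with the order flipped):

        // New local transform of the child so its world transform is unchanged:
        Matrix childLocal = childWorld * Matrix.Invert(parentWorld);

        // Pull position/rotation/scale back out. Rotation comes out as a
        // quaternion; Unity converts it to Euler angles for the inspector,
        // which is why the displayed numbers look unrelated to the inputs.
        Vector3 localScale, localPos;
        Quaternion localRot;
        childLocal.Decompose(out localScale, out localRot, out localPos);

    For rotation alone, the equivalent in this convention is localRot = worldRot * Inverse(parentWorldRot). One caveat: Decompose can fail when a non-uniform parent scale combined with rotation introduces shear.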

    Read the article

  • What is the best way to implement collision detection using Bullet physics engine and a track generated from a curve?

    - by tigrou
    I am developing a small racing game where the track is generated from a curve. The track is generated but not infinite; one level's track fits in memory with no problem and contains a reasonably small number of triangles. For collisions I would like to use the Bullet physics engine, and I want to know the best way to handle collisions with the track efficiently. Note: the track will be stored as a static rigid body (mass = 0), and the player will be represented by a sphere shape for collisions. Here are the possibilities I have in mind:

    1. Create one rigid body, then put all the triangles of the track (except non-collidable stuff) into it. Result: 1 body with many triangles (e.g. 30,000).

    2. Split the track into several sections (e.g. 10). For each section, create a rigid body and put the corresponding triangles into it. Result: a small number of bodies with relatively few triangles each (e.g. 1,500 per section).

    3. Split the track into many sub-sections (e.g. 1,200), where one sub-section is a very small step when generating the curve. Again, create a body per sub-section. Result: many bodies with very few triangles each (e.g. 20). Advantage: extra data could be attached to each sub-section and used when handling collisions.

    4. Same as 2, but only keep sections N and N+1 in the physics engine (where N is the section the player is on). When the player reaches section N+1, unload section N and load section N+2, and so on. Issue: harder to implement, and problems arise if the player suddenly "jumps" from one section to another (e.g. the player flies off section N and falls onto section N+4, which was underneath: no collision is handled and the player falls into the void).

    5. Same as 4, but with many sub-sections. Issues: since sub-sections are very small, bodies would constantly be added to and removed from the physics engine at runtime, and the odds of the player accidentally skipping sections and falling into the void are higher than in 4.
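
    Worth noting before splitting anything: Bullet's static triangle-mesh shape (btBvhTriangleMeshShape) builds an AABB hierarchy over the triangles internally, so a single static body with ~30,000 triangles (option 1) is usually cheap to query against a sphere. If streaming sections anyway (option 4), the "jump" problem largely disappears when the active section is derived from the player's position each frame rather than advanced on border crossings. A sketch of that windowing logic (engine-agnostic; all names assumed):

        using System;
        using System.Collections.Generic;

        // Hypothetical section streamer: keeps sections [current-1, current+1]
        // as static bodies in the physics world. Deriving 'current' from the
        // player's position every frame (instead of incrementing it on border
        // crossings) means a player who flies off section N and lands on N+4
        // still gets collision there.
        class TrackStreamer
        {
            readonly float sectionLength;              // curve length per section
            readonly HashSet<int> active = new HashSet<int>();

            public TrackStreamer(float sectionLength) { this.sectionLength = sectionLength; }

            public void Update(float playerDistanceAlongCurve)
            {
                int current = (int)Math.Floor(playerDistanceAlongCurve / sectionLength);
                var wanted = new HashSet<int> { current - 1, current, current + 1 };

                foreach (int s in wanted)
                    if (active.Add(s)) AddSectionToPhysicsWorld(s);

                var stale = new List<int>();
                foreach (int s in active)
                    if (!wanted.Contains(s)) stale.Add(s);
                foreach (int s in stale) { RemoveSectionFromPhysicsWorld(s); active.Remove(s); }
            }

            void AddSectionToPhysicsWorld(int s) { /* create static body for section s */ }
            void RemoveSectionFromPhysicsWorld(int s) { /* remove and free it */ }
        }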

    Read the article

  • XNA ModelMesh.Draw vs GraphicsDevice.DrawIndexedPrimitives

    - by cubrman
    I am using XNA 4.0 and I wonder whether, for models with multiple meshes, it is better to fill the vertex and index buffers first and call GraphicsDevice.DrawIndexedPrimitives(), or to simply use good ol' foreach(...) { ModelMesh.Draw() }. Is it possible to add data to vertex/index buffers so as to pack all the models in the scene into them and then call Draw only once per frame? I would appreciate a link to an in-depth explanation. Thanks.
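
    A sketch of the packing idea (XNA 4.0; sharedVb/sharedIb and the Packed records are hypothetical). Note this removes buffer switches, not draw calls: a single draw per frame would additionally require pre-transforming vertices into world space or using instancing, since one call can apply only one World matrix:

        struct Packed { public int BaseVertex, StartIndex, PrimitiveCount, NumVertices; }

        // At load time: concatenate all vertex/index arrays, remember each
        // mesh's offsets in a Packed record, create one big VertexBuffer and
        // IndexBuffer, and upload with SetData. Per frame:
        GraphicsDevice.SetVertexBuffer(sharedVb);
        GraphicsDevice.Indices = sharedIb;
        foreach (Packed p in packedMeshes)
        {
            // set per-mesh effect parameters (World etc.) and apply a pass here
            GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                p.BaseVertex, 0, p.NumVertices, p.StartIndex, p.PrimitiveCount);
        }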

    Read the article

  • How to generate portal zones?

    - by Meow
    I'm developing a portal-based scene manager. Basically all it does is check the portals against the camera frustum and render their associated portal zones accordingly. Is there any way my editor can generate the portal zones automatically, with the user having to set only the portals themselves? For example, the Max Payne 1/2 engine ("Max-FX") only required setting the portal quads, unlike the C4 engine, where you also have to explicitly set the portal zones.
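
    One way to automate it (a sketch of the general idea, not what Max-FX actually does): discretize the level into cells, mark cells covered by geometry as solid and cells covered by a portal quad as portals, then flood-fill; every connected region of open cells bounded by solids and portals becomes a zone. In 2D for brevity:

        using System.Collections.Generic;

        enum Cell { Open, Solid, Portal }

        static int[,] LabelZones(Cell[,] grid)
        {
            int w = grid.GetLength(0), h = grid.GetLength(1);
            var zone = new int[w, h]; int next = 0;
            var stack = new Stack<(int x, int y)>();
            for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                {
                    if (grid[x, y] != Cell.Open || zone[x, y] != 0) continue;
                    next++; stack.Push((x, y));
                    while (stack.Count > 0)              // 4-neighbour flood fill
                    {
                        var (cx, cy) = stack.Pop();
                        if (cx < 0 || cy < 0 || cx >= w || cy >= h) continue;
                        if (grid[cx, cy] != Cell.Open || zone[cx, cy] != 0) continue;
                        zone[cx, cy] = next;             // portals/solids stop the fill
                        stack.Push((cx + 1, cy)); stack.Push((cx - 1, cy));
                        stack.Push((cx, cy + 1)); stack.Push((cx, cy - 1));
                    }
                }
            return zone; // zone ids 1..next; 0 = solid/portal cells
        }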

    Read the article

  • Better way to go up/down slope based on yaw?

    - by CyanPrime
    Alright, so I have a bit of movement code, and I'm thinking I'm going to need to manually input when to go up/down a slope. All I have to work with is the slope's normal and vector, my current and previous position, and my yaw. Is there a better way to decide whether I go up or down the slope based on my yaw?

        Vector3f move = new Vector3f(0, 0, 0);
        // unit direction on the XZ plane from the yaw angle
        move.x = (float) -Math.cos(Math.toRadians(yaw));
        move.z = (float) -Math.sin(Math.toRadians(yaw));
        move.normalise();

        if (move.z < 0 && slopeNormal.z > 0 || move.z > 0 && slopeNormal.z < 0) {
            if (move.x < 0 && slopeNormal.x > 0 || move.x > 0 && slopeNormal.x < 0) {
                move.y += slopeVec.y;
            }
        }
        if (move.z > 0 && slopeNormal.z > 0 || move.z < 0 && slopeNormal.z < 0) {
            if (move.x > 0 && slopeNormal.x > 0 || move.x < 0 && slopeNormal.x < 0) {
                move.y -= slopeVec.y;
            }
        }
        move.scale(movementSpeed * delta);
        Vector3f.add(pos, move, pos);
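
    A more robust alternative than enumerating sign cases (a sketch in C#; the same two lines work with any vector library): project the flat movement direction onto the slope plane using the slope normal. This yields the correct up/down component for every yaw at once:

        // Project a flat (XZ) move vector onto the plane with unit normal n:
        // remove the component of the move along the normal.
        static Vector3 ProjectOntoSlope(Vector3 move, Vector3 slopeNormal)
        {
            Vector3 n = Vector3.Normalize(slopeNormal);
            Vector3 onPlane = move - n * Vector3.Dot(move, n);
            if (onPlane.LengthSquared() < 1e-6f) return Vector3.Zero; // facing straight into the slope
            return Vector3.Normalize(onPlane) * move.Length();        // keep original speed
        }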

    Read the article

  • GestureListener's fling method doesn't get called

    - by nosferat
    I'm using the SimpleDirectionGestureDetector from the libgdx-users wiki as my InputProcessor. I set it in the create() method:

        Gdx.input.setInputProcessor(new SimpleDirectionGestureDetector(charController));

    charController is my class which implements the DirectionListener interface defined in the SimpleDirectionGestureDetector class, and it is responsible for moving the player character. However, the character doesn't change direction when I perform a fling in any direction. I've checked that the fling() method in the SimpleDirectionGestureDetector never gets called, and I have no idea why, since everything seems fine. What am I doing wrong?

    Read the article

  • Character with several colliders and rigidbodies

    - by Lautaro
    I am making a PvP fighting game. This is the GameObject hierarchy of the player character:

        Player contains:
          - Legs
          - Sword
          - Torso
          - Head

    I want to be able to:

      - Register impacts of the sword on a specific body part
      - Use AddForce on the whole player entity when a body part is struck
      - Change the animation of the player that owns the sword that hit

    Questions:

      - Is it correct that the only rigidbody should be on the root Player GameObject?
      - Is it correct that the body parts should have colliders and be triggers?
      - Is it correct that the swords should have colliders but not be triggers?
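
    A minimal sketch of that setup (Unity; the "Sword" tag, force value, and animator trigger name are assumptions): trigger colliders on the body parts, a non-trigger collider on the sword, one Rigidbody on each player root. A body part's trigger fires when the opposing sword enters it, and the force is applied to the root's Rigidbody:

        using UnityEngine;

        public class BodyPart : MonoBehaviour
        {
            void OnTriggerEnter(Collider other)
            {
                if (!other.CompareTag("Sword")) return;

                // Push the whole character: the force goes to the root Rigidbody.
                Rigidbody root = GetComponentInParent<Rigidbody>();
                Vector3 away = (transform.position - other.transform.position).normalized;
                root.AddForce(away * 250f, ForceMode.Impulse);

                // Let the attacker react (e.g. play a "hit landed" animation).
                var attacker = other.GetComponentInParent<Animator>();
                if (attacker != null) attacker.SetTrigger("HitLanded");
            }
        }

    Note that trigger events only fire when at least one of the two colliders has a Rigidbody in its hierarchy; with one Rigidbody on each player root, both the sword and the body parts satisfy that.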

    Read the article

  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to get my code from calling each individual draw call down to using at least a VertexBuffer, and preferably an IndexBuffer, but now that I'm attempting to test my code, I'm getting the error:

        The current vertex declaration does not include all the elements required by the current vertex shader. Position0 is missing.

    Which makes absolutely no sense to me, as my VertexDeclaration is:

        public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
            new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
            new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
            new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
        );

    which clearly contains the position information. I am attempting to draw with the following lines:

        VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration,
            c.VertexList.Count, BufferUsage.WriteOnly);
        IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly);
        vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray());
        ib.SetData<int>(c.IndexList.ToArray());
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count / 3);

    where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (contains the VertexDeclaration) and Game1.cs (Draw() is in lines 230-250). Not much else of relevance to this problem anywhere else. Note that large commented sections are from an older version of the drawing code.
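
    Judging from the snippet alone (an assumption; the full repository may do this elsewhere), the buffers are created and filled but never bound to the device, so the draw call still sees no vertex stream and the shader reports Position0 as missing. The binding that appears to be absent:

        // The buffers must be bound before the draw call; creating and
        // filling them is not enough.
        GraphicsDevice.SetVertexBuffer(vb);
        GraphicsDevice.Indices = ib;
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
            0, 0, vb.VertexCount, 0, c.IndexList.Count / 3);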

    Read the article

  • techniques for displaying vehicle damage

    - by norca
    I wonder how I can display vehicle damage, i.e. a good way to show damage on screen. Which approaches are common in games, and what are the benefits of each? What is the state of the art?

    One way I can imagine is to save a set of textures (normal/color/light maps, etc.) for each state of the car (normal, damaged, burnt out) and switch or blend between them. But does this really look good without changing the model?

    Another way I was thinking about is preparing animations for different locations on the car, something like damage on the front, on the left/right side, or on the back, and blending in the specific animation. But does this work with good textures? And what about physics engines: is it useful to use one to deform the vertex data? I think losing parts of the car (doors, sirens, weapons) could look really nice.

    My game is a kind of RTS in a top-down view. Vehicles are not the most important units (it's no racing game), but I have quite a lot of them in it. Thanks for any help.

    Read the article

  • Coordinate spaces and transformation matrices

    - by Belgin
    I'm trying to get an object from object space into projected space using these intermediate matrices.

    The first matrix (I) transforms from object space into inertial space, but since my object is not rotated or translated in any way inside object space, this matrix is the 4x4 identity matrix.

    The second matrix (W) transforms from inertial space into world space, which is just a scale transform with factor a = 14.1 on all coordinates, since the inertial-space origin coincides with the world-space origin:

            /a 0 0 0\
        W = |0 a 0 0|
            |0 0 a 0|
            \0 0 0 1/

    The third matrix (C) transforms from world space into camera space. This matrix is a translation matrix with a translation of (0, 0, 10), because I want the camera to be located behind the object, so the object must be positioned 10 units down the z axis:

            /1 0 0  0\
        C = |0 1 0  0|
            |0 0 1 10|
            \0 0 0  1/

    Finally, the fourth matrix is the projection matrix (P). Bearing in mind that the eye is at the origin of world space and the projection plane is defined by z = 1, the projection matrix is:

            /1 0  0  0\
        P = |0 1  0  0|
            |0 0  1  0|
            \0 0 1/d 0/

    where d is the distance from the eye to the projection plane, so d = 1.

    I'm multiplying them like this: (((P x C) x W) x I) x V, where V is the vertex's coordinates in column-vector form:

            /x\
        V = |y|
            |z|
            \1/

    After I get the result, I divide the x and y coordinates by w to get the actual screen coordinates. Apparently I'm doing something wrong or missing something completely here, because it's not rendering properly. Here's a picture of what is supposed to be the bottom side of the Stanford Dragon: [screenshot omitted]

    Also, I should add that this is a software renderer, so no DirectX or OpenGL stuff here.
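
    As a sanity check, the whole chain collapses to closed-form screen coordinates; working the matrices above through for a general vertex (x, y, z):

        \[
        P\,C\,W\,I\,V =
        \begin{pmatrix} a x \\ a y \\ a z + 10 \\ (a z + 10)/d \end{pmatrix},
        \qquad
        x_{screen} = \frac{a d\, x}{a z + 10}, \quad
        y_{screen} = \frac{a d\, y}{a z + 10}
        \]

    with a = 14.1 and d = 1. One step that is easy to miss in a software renderer: any vertex with a·z + 10 ≤ 0 lies at or behind the eye, and dividing by its w produces mirrored or garbage coordinates, so such triangles must be clipped against the near plane before the perspective divide.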

    Read the article

  • Multitouch screen not detected on Asus Taichi 21DH71

    - by geekfreak
    I just bought the Asus Taichi 21 DH71 ultrabook. It has an Intel 3rd-generation i7 processor, 4 GB RAM and a 256 GB SSD. The main feature is that it is a hybrid machine with dual screens: when the lid is closed it can be used as a tablet, and when the lid is open it can be used as a notebook; it can also be used with both screens on at the same time. I used Ubuntu many years ago and loved it, but I haven't tried any Linux since. My questions are:

    1. Does the new version of Ubuntu support the multitouch interface?
    2. Will it work specifically on this machine?
    3. Will Ubuntu support gestures on the multitouch touchpad?

    Update 2/22/2013: I tried the latest 64-bit Ubuntu (12.10) from a live USB and noticed that it couldn't detect the tablet screen. Everything else worked seamlessly. Do you think the tablet screen would be detected if I did a complete installation on the notebook? Please help.

    Read the article

  • Material usage, one per model or per object?

    - by WSkid
    Is it better (for memory, developer time, and disk space) to use a single model that is unwrapped and uses a single material, or to break a model down into appropriate parts, each with its own smaller texture/material? Or does it depend on the target platform, i.e. PC vs. tablet?

    An example: say you have a typical house with a tiled roof.

    1. Model it, make sure everything is attached, and unwrap the walls/roof so that in your UV template the walls and roof share one texture file, side by side in, say, a 512x512 file.
    2. Model the roof and walls as separate objects, unwrap them individually, and have two UV templates. You could then have a 256x256 file for each one.

    Read the article

  • Algorithm to simplify building/structural meshes

    - by morpheus
    I am looking for an algorithm to simplify the meshes of buildings or similar structures. EDIT: I had made a comment that Hoppe's algorithm tends to make meshes more and more spherical with simplification. But, I am not sure about it, so am deleting the comment. Buildings in contrast should tend to become more and more rectangular with increasing simplification. The D3DX extensions for D3D in version 9.0 (d3dx9.lib) used to have classes to do progressive mesh simplification. See: http://doc.51windows.net/Directx9_SDK/?url=/directx9_sdk/graphics/reference/d3dx/functions/mesh/d3dxgeneratepmesh.htm http://msdn.microsoft.com/en-us/library/windows/desktop/bb281243(v=vs.85).aspx

    Read the article

  • Normalizing the direction to check if able to move

    - by spartan2417
    I have a room with 4 walls along the x and z axes respectively. My player, who is in first person (therefore the camera), should have collision detection with these walls. I'm relatively new to this, so please bear with me. I believe the way to do this is to calculate the direction and distance to the wall from the camera and then normalize the direction. However, I can only get this far before I don't know what to do. I think you should work out the angle and the direction you are facing? Here _dx and _dz are a small buffer in front of the camera:

        float CalcDirection(float Cam_x, float Cam_z, float Wall_x, float Wall_z)
        {
            // Calculate direction and distance to the obstacle.
            float ob_dirx = Cam_x + _dx - Wall_x;
            float ob_dirz = Cam_z + _dz - Wall_z;
            float ob_dist = sqrt(ob_dirx * ob_dirx + ob_dirz * ob_dirz);

            // Normalise the direction (ob_dist is already the vector's length).
            ob_dirx = ob_dirx / ob_dist;
            ob_dirz = ob_dirz / ob_dist;

    Can anyone explain in layman's terms how I work out the angle?
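
    A sketch of the usual answer (C# syntax, but atan2 exists in virtually every math library): the angle of a 2D direction comes from atan2, which handles all four quadrants, and the angle between the facing direction and the obstacle direction comes from the dot product. faceX/faceZ are assumed to be the normalized facing vector derived from the camera's yaw:

        // Absolute angle of the direction, measured from the +x axis (radians).
        double angle = Math.Atan2(ob_dirz, ob_dirx);

        // Angle between the normalized facing vector and the obstacle direction.
        double between = Math.Acos(faceX * ob_dirx + faceZ * ob_dirz);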

    Read the article

  • Bending of track in a racing game

    - by caius
    I am trying to create a small racing game in which the track is modeled using a B-spline curve for the path's center line and directional vectors to define the bending (banking) of the track at each point. My problem is that I don't know how to calculate the correct banking/slope of the curve such that it would be optimal, or at least visually nice, for a car to lean into the corner. My idea was to use the direction of the curve's 2nd derivative; while this approach looks fine for most of the track, there are points at which the 2nd derivative makes sharp twists/very quick 180-degree flips. I also read about 'knots' of B-splines, but I don't know whether such a twist in the 2nd derivative is a knot, or whether knots are something else. Can you tell me, using a B-spline:

    1. How could I calculate a visually nice banking of a track for a racing game?
    2. Is it possible to do this with some simple calculation of centripetal force/gravity?
    3. Is it possible to do this using the 1st, 2nd and 3rd derivatives of the B-spline curve?

    I am not looking for the physically correct banking angle for the track; I would just like to create something visually pleasing in a simple game. I am using a framework which has a built-in class for B-splines, including support for the 1st, 2nd and 3rd derivatives of the curve.
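
    On points 2 and 3, there is a standard relation (a sketch of the math, grounded in basic mechanics rather than any specific engine): the curvature of the center line r(t) gives the local turn radius, and the bank angle that balances centripetal acceleration against gravity at a chosen design speed v follows directly:

        \[
        \kappa(t) = \frac{\lVert \mathbf{r}'(t) \times \mathbf{r}''(t) \rVert}{\lVert \mathbf{r}'(t) \rVert^{3}},
        \qquad
        \theta(t) = \arctan\!\left(\frac{v^{2}\,\kappa(t)}{g}\right)
        \]

    Unlike the raw 2nd derivative, the curvature κ is normalized by the cube of the speed along the curve and passes smoothly through zero at inflection points, which is exactly where the 180-degree flips occur; the flips are expected there, since the curve changes its turning side and the bank angle should simply be zero. Taking the bank direction from the sign of the cross product against the up vector, and low-pass filtering θ(t) along the track, gives visually smooth banking.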

    Read the article

  • Restoring two finger middle click again

    - by Thomas A.
    It used to be that tapping two fingers on the touchpad sent a middle mouse click. Now it sends a right click, and a three-finger tap is the middle click. I really can't understand the change and think it is a bug, or badly copied from Apple, or something; the reasoning escapes me totally. I use middle click to open links in new browser tabs all day, and I rarely use right click (and I have a right mouse button below the touchpad, doh). Tapping three fingers on my tiny EeePC touchpad is next to impossible, so I want the old behavior back. I found:

        synclient TapButtons2=2
        synclient TapButtons3=3

    but that did not work on 10.10. Does anyone know how to restore sane behavior?
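
    One likely reason those commands did nothing (an assumption, not verified on 10.10): the synaptics driver options are spelled TapButton2 and TapButton3, without the extra 's', and X button 2 is the middle button. So the intended mapping would be:

        synclient TapButton2=2
        synclient TapButton3=3

    Note that synclient settings reset at the end of the session; to make them permanent they would have to go into a synaptics snippet under xorg.conf.d or an equivalent startup script.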

    Read the article

  • Translate along local axis

    - by Aaron
    I have an object with a position matrix and a rotation matrix (derived from a quaternion, but I digress). I'm able to translate this object along world-relative vectors, but I'm trying to figure out how to translate it along local-relative vectors, so that if the object is tilted 45 degrees around its z-axis, the vector (1, 0, 0) would move it to the upper right. For world-space translations I simply turn the movement vector into a matrix and multiply it into the position matrix: position_mat = translation_mat * position_mat. For local-space translations I'd think I'd have to use the rotation matrix in that formula, but no matter where I multiply the rotation matrix in, I see the object spin around instead when I apply a translation over time.
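
    A sketch of the usual fix (XNA-style API here; the idea is library-agnostic): rotate the movement vector by the object's rotation first, then build the translation matrix from the rotated vector. The rotation matrix itself never gets multiplied into the position matrix, which is what causes the spinning:

        // rotationMatrix contains rotation only, so Transform just rotates the vector.
        Vector3 localMove = new Vector3(1, 0, 0);   // move along the object's local +X
        Vector3 worldMove = Vector3.Transform(localMove, rotationMatrix);

        // Apply it as a plain world-space translation, exactly like before.
        positionMatrix = Matrix.CreateTranslation(worldMove) * positionMatrix;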

    Read the article

  • Connecting 2 Vertices in 3DS Max?

    - by Reanimation
    How do you connect two vertices in 3ds Max 2013? I have two vertices which I wish to connect with a line to create an edge (actually several). I have tried everything I can think of and done several Google searches, but they only turn up the method for older versions, which says to use the "Connect" button, and I can't find the Connect button in my version. [Screenshots of my menu and of the vertices I'm trying to connect omitted.] Basically, I've edited an STL file and deleted some edges and vertices. Now I want to fill the gaps and triangulate what's left. Thanks.

    Read the article

  • Mobile broadband not connect without unplug and plug

    - by Muhammad Zohaib
    I have recently installed Ubuntu 13.10 and I am still very new to this operating system. My problem is that when I start my computer, it detects all the Wi-Fi connections around, but not my Huawei mobile broadband USB modem; no mobile broadband section appears automatically. I have to unplug and re-plug the USB modem to connect and to get the mobile broadband section to show up. I don't like having to unplug and re-plug the device every time, as it will wear out the port, and I want to be able to leave it plugged into the laptop at all times, even through shutdown. I want Ubuntu to auto-detect the USB broadband modem. Please can someone guide me. Thanks in advance.

    Read the article

  • How can I load .obj files in the Soya3D engine?

    - by John Riselvato
    I recently found Soya3D. I want to import .obj files, but it seems to only accept .data files. How can I import .obj files? Importing a .obj file named "house" produces this error:

        Traceback (most recent call last):
          File "introduction.py", line 7, in <module>
            model = soya.Model.get("house")
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 259, in get
            return klass._alls.get(filename) or klass._alls.setdefault(filename, klass.load(filename))
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 268, in load
            dirname = klass._get_directory_for_loading_and_check_export(filename)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 194, in _get_directory_for_loading_and_check_export
            dirname = klass._get_directory_for_loading(filename, ext)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 171, in _get_directory_for_loading
            raise ValueError("Cannot find a %s named %s!" % (klass, filename))
        ValueError: Cannot find a <class 'soya.Model'> named house!
        * Soya3D * Quit...

    Read the article

  • How to prevent "underwater sight" in games

    - by CPP_Person
    In many games where the player can go underwater, when the top half of the screen is in the air and the bottom half is in the water, it seems almost as if the water doesn't exist and the player is... flying slowly, with water sounds. Is there a logical way to solve this? An algorithm? It doesn't seem like a standard solution has emerged yet, since many games still have this. I don't want to make the same mistake.
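
    One cheap mitigation seen in practice (a sketch; a fuller solution renders a proper screen-space water line with distinct fog above and below it): keep the camera out of a thin band around the water surface, so every frame is rendered unambiguously above or below water, with the matching fog, tint, and audio:

        // waterY is the water plane height; band is the half-height of the
        // dead zone the camera is never allowed to rest in (values assumed).
        const float band = 0.15f;
        float d = cameraPosition.Y - waterY;
        if (Math.Abs(d) < band)
            cameraPosition.Y = waterY + (d >= 0 ? band : -band);
        bool underwater = cameraPosition.Y < waterY;   // drives fog/tint/sound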

    Read the article
