Search Results

Search found 2867 results on 115 pages for '3d modeller'.

Page 32/115

  • Terrain square loading

    - by AndroidXTr3meN
    Games like Skyrim and Morrowind divide the terrain into quads or squares, if I'm correct. The player is always in cell #5:

    1 | 2 | 3
    4 | 5 | 6
    7 | 8 | 9

    Whenever you cross a border, you unload and load the new "areas". But if the player steps just over the edge and a second later steps back into the previous area, a lot of unnecessary loading and unloading is done. Is there a general approach to this? I don't think games like Skyrim have this issue. Cheers!
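
    A common fix is hysteresis: only re-centre the grid once the player is clearly inside a new cell, so oscillating back and forth across the border triggers nothing. A minimal sketch of the idea in C# (the cell size, margin and ReloadGridAround helper are placeholders, not from the question):

    const float CellSize = 256f;   // world units per cell (assumption)
    const float Margin = 16f;      // hysteresis band around the border

    int centreX, centreZ;          // cell the 3x3 grid is currently centred on

    void UpdateStreaming(float playerX, float playerZ)
    {
        // Player position relative to the origin of the current centre cell.
        float localX = playerX - centreX * CellSize;
        float localZ = playerZ - centreZ * CellSize;

        bool clearlyOutside =
            localX < -Margin || localX > CellSize + Margin ||
            localZ < -Margin || localZ > CellSize + Margin;

        if (!clearlyOutside)
            return;   // still hovering near the old border: keep everything as is

        centreX = (int)Math.Floor(playerX / CellSize);
        centreZ = (int)Math.Floor(playerZ / CellSize);
        ReloadGridAround(centreX, centreZ);   // hypothetical: load the new 3x3, unload the rest
    }

    Deferring the unload (keeping the old cells cached for a few seconds before freeing them) achieves the same effect and also covers fast back-and-forth travel.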

    Read the article

  • Using a Vertex Buffer and DrawUserIndexedPrimitives?

    - by MattMcg
    Let's say I have a large but static world and only a single moving object in that world. To increase performance I want to use a vertex and index buffer for the static part of the world. I set them up and they work fine; however, if I add another draw call to DrawUserIndexedPrimitives (to draw my one moving object) after the call to DrawIndexedPrimitives, it errors out saying a valid vertex buffer must be set. I can only assume the DrawUserIndexedPrimitives call destroyed/replaced the vertex buffer I set. To get around this I have to call device.SetVertexBuffer(vertexBuffer) every frame, which doesn't feel right, as it seems to defeat the point of a buffer. To shed some light: the large vertex buffer is the final merged mesh of many repeated cubes (think Minecraft), which I build manually to reduce the number of vertices/indices needed (for example, two connected cubes become one cuboid with the connecting faces cut out) and the number of matrix translations (one per cube would hurt). The moving objects are other items in the world which are dynamic and not fixed to the block grid, such as NPCs who move constantly. How do I go about handling the large static world while also allowing objects to move about freely?
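
    As far as I can tell, DrawUserIndexedPrimitives copies your array into an internal dynamic buffer and leaves that buffer bound, which is exactly why the next buffered draw complains. Re-setting the vertex buffer each frame is the normal pattern and is cheap: SetVertexBuffer and Indices only change device state, they do not re-upload the data. A rough per-frame sketch (buffer and effect names are placeholders):

    // Static world: bind once per frame, draw from the GPU-resident buffers.
    GraphicsDevice.SetVertexBuffer(worldVertexBuffer);
    GraphicsDevice.Indices = worldIndexBuffer;
    foreach (EffectPass pass in worldEffect.CurrentTechnique.Passes)
    {
        pass.Apply();
        GraphicsDevice.DrawIndexedPrimitives(
            PrimitiveType.TriangleList, 0, 0, worldVertexCount, 0, worldTriangleCount);
    }

    // Moving object: this call binds an internal buffer behind the scenes...
    foreach (EffectPass pass in objectEffect.CurrentTechnique.Passes)
    {
        pass.Apply();
        GraphicsDevice.DrawUserIndexedPrimitives(
            PrimitiveType.TriangleList,
            objectVertices, 0, objectVertices.Length,
            objectIndices, 0, objectIndices.Length / 3);
    }
    // ...so the world buffers are simply set again at the top of the next frame.

    A tidier long-term option is to give the moving object its own (Dynamic)VertexBuffer and world matrix, so both draws go through DrawIndexedPrimitives.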

    Read the article

  • Converting a DrawModel() using BasicEffect to one using Effect

    - by Fibericon
    Take this DrawModel() provided by MSDN:

    private void DrawModel(Model m)
    {
        Matrix[] transforms = new Matrix[m.Bones.Count];
        float aspectRatio = graphics.GraphicsDevice.Viewport.Width / graphics.GraphicsDevice.Viewport.Height;
        m.CopyAbsoluteBoneTransformsTo(transforms);
        Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
        Matrix view = Matrix.CreateLookAt(new Vector3(0.0f, 50.0f, Zoom), Vector3.Zero, Vector3.Up);

        foreach (ModelMesh mesh in m.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.View = view;
                effect.Projection = projection;
                effect.World = gameWorldRotation * transforms[mesh.ParentBone.Index] * Matrix.CreateTranslation(Position);
            }
            mesh.Draw();
        }
    }

    How would I apply a custom effect to a model with that? Effect doesn't have View, Projection, or World members. This is what they recommend replacing the foreach loop with:

    foreach (ModelMesh mesh in terrain.Meshes)
    {
        foreach (Effect effect in mesh.Effects)
        {
            mesh.Draw();
        }
    }

    Of course, that doesn't really work. What else needs to be done?
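
    With a custom Effect the matrices are set through Effect.Parameters by name, and the model's mesh parts also have to be pointed at the custom effect. A hedged sketch, assuming the .fx file declares parameters called World, View and Projection (rename them to whatever your shader actually uses):

    // Once, e.g. at load time: make every mesh part use the custom effect.
    foreach (ModelMesh mesh in m.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
            part.Effect = customEffect;

    // Every frame: fill in the shader parameters, then draw as before.
    foreach (ModelMesh mesh in m.Meshes)
    {
        foreach (Effect effect in mesh.Effects)
        {
            effect.Parameters["World"].SetValue(
                gameWorldRotation * transforms[mesh.ParentBone.Index] * Matrix.CreateTranslation(Position));
            effect.Parameters["View"].SetValue(view);
            effect.Parameters["Projection"].SetValue(projection);
        }
        mesh.Draw();
    }

    Whatever lighting BasicEffect.EnableDefaultLighting() was providing has to be reproduced in the custom shader as well, otherwise the model will render but come out flat or black.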

    Read the article

  • What is the best way to implement collision detection using Bullet physics engine and a track generated from a curve?

    - by tigrou
    I am developing a small racing game where the track is generated from a curve. As said above, the track is generated, but not infinite: the track for one level fits in memory with no problem and contains a reasonably small number of triangles. For collisions I would like to use the Bullet physics engine, and I want to know the best way to handle collisions with the track efficiently. NOTE: the track will be stored as a static rigid body (mass = 0), and the player will be represented by a sphere shape for collisions. Here are some possibilities I have in mind:

    1. Create one rigid body, then put all triangles of the track (except non-collidable stuff) into it. Result: 1 body with many triangles (e.g. 30000 triangles).
    2. Split the track into several sections (e.g. 10 sections). Then, for each section, create a rigid body and put the corresponding triangles in it. Result: a small number of bodies with a relatively small number of triangles each (e.g. 1500 triangles per section).
    3. Split the track into many sub-sections (e.g. 1200 sections), where one sub-section is a very small step when generating the curve. Again, for each sub-section, create a body and put its triangles in it. Result: many bodies with a very small number of triangles each (e.g. 20 triangles). Advantage: extra data could be attached to each sub-section and used when handling collisions.
    4. Same as 2, but only put sections N and N+1 in the physics engine (where N = the section the player is currently in). When the player reaches section N+1, unload section N and load section N+2, and so on. Issue: harder to implement, and there are problems if the player suddenly "jumps" from one section to another (e.g. the player flies off section N and lands on section N+4, which was underneath: no collision is handled and the player falls into the void).
    5. Same as 4, but with many sub-sections. Issues: since the sub-sections are very small, bodies are constantly added to and removed from the physics engine at runtime, and the chance of the player accidentally skipping some sections and falling into the void is higher than in 4.

    Read the article

  • Calculating a child object's Position, Rotation and Scale values?

    - by Sergio Plascencia
    I am making my own game editor, but have encountered the following problem: I have two objects, A and B.

    A's initial values: Position (3,3,3), Rotation (45,10,0), Scale (1,2,2.5)
    B's initial values: Position (1,1,1), Rotation (10,34,18), Scale (1.5,2,1)

    If I now make B a child of A, I need to recalculate B's position, rotation and scale relative to A such that it maintains its current position, rotation and scale in world coordinates. So B's position would now be (-2,-2,-2), since A is now its origin and (-2,-2,-2) keeps B in the same place. I think I have position and scale figured out, but not rotation. So I opened Unity and ran the same example, and I noticed that when an object becomes a child, it does not move at all, but its position, rotation and scale values change to be relative to the parent. For example:

    Unity (parent object A): Position (0,0,0), Rotation (45,10,0), Scale (1,1,1)
    Unity (child object B): Position (0,0,0), Rotation (0,0,0), Scale (1,1,1)

    When B becomes a child of A, its rotation values become X: -44.13605, Y: -14.00195, Z: 9.851074. If I plug the same rotation values into the B object in my editor, the object does not move at all. How did Unity arrive at those rotation values for the child? What are the calculations? If you can post the equations for position, rotation and scale I can double-check that I am doing it correctly, but rotation is what I really need.
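
    For the rotation part, the usual parent-child relationship (and the one Unity follows) is worldRotation = parentWorldRotation * localRotation, so the local rotation is recovered by multiplying with the inverse of the parent's world rotation; the Euler angles shown in the inspector are just that quaternion converted back to angles. A sketch in Unity-style C#, given Transform parent and Transform child (the scale line is an approximation that ignores shear):

    // Compute local values that keep the child exactly where it is in world space.
    Quaternion localRotation = Quaternion.Inverse(parent.rotation) * child.rotation;

    // Position: bring the world position into the parent's local space
    // (this also accounts for the parent's rotation and scale).
    Vector3 localPosition = parent.InverseTransformPoint(child.position);

    // Scale: componentwise division of the world scales; only exact when
    // the combined transform has no shear.
    Vector3 localScale = new Vector3(
        child.lossyScale.x / parent.lossyScale.x,
        child.lossyScale.y / parent.lossyScale.y,
        child.lossyScale.z / parent.lossyScale.z);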

    Read the article

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. If the scene is managed by an arbitrary structure (an octree, BSP tree, quadtree, kd-tree, etc.), what is the best way to pass it to the render system? The obvious problem is that if the render system is simply given the root node of the structure, it would need intrinsic knowledge of that structure in order to traverse it. My solution is to cull all objects outside the frustum in the scene manager, build a list of the objects which are left, and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (this would be a structure required by the render system as a means of knowing which objects should be rendered). The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations. I have been thinking a lot about this and have searched various terms on here and followed any additional information/links, but I have not really found a definitive answer. It may be that there is no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements I could make? Are there any caveats I should know about? Side question: am I right in assuming that octrees, BSP trees, etc. are all forms of BVH?
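
    For what it's worth, that hand-off is how many engines structure it: the scene manager culls and emits a flat, state-sorted list of draw requests, and the renderer consumes it without knowing anything about octrees or BSP trees. A rough sketch with entirely hypothetical types:

    // Produced by the scene manager after frustum culling.
    struct RenderCommand
    {
        public int SortKey;        // packs shader / material / texture ids so equal state sorts together
        public Renderable Object;  // whatever the render system needs to issue the draw
    }

    // Scene-manager side: knows the spatial structure, not the renderer.
    List<RenderCommand> visible = sceneTree.CollectVisible(cameraFrustum);

    // Render-system side: knows GL state, not the spatial structure.
    visible.Sort((a, b) => a.SortKey.CompareTo(b.SortKey));
    foreach (RenderCommand cmd in visible)
        renderer.Draw(cmd.Object);

    On the side question: octrees, quadtrees, kd-trees and BSP trees partition space, whereas a BVH is a hierarchy of bounding volumes around objects, so they are usually not called BVHs even though they serve the same culling purpose.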

    Read the article

  • XNA ModelMesh.Draw vs GraphicsDevice.DrawIndexedPrimitives

    - by cubrman
    I am using XNA 4.0 and I wonder whether drawing models with multiple meshes is better done by filling the vertex and index buffers first and calling GraphicsDevice.DrawIndexedPrimitives(), or by simply using good ol' foreach(...) { ModelMesh.Draw() }. Is it possible to add data to vertex/index buffers at all, in order to pack all the models in the scene into them and then call Draw only once per frame? I would appreciate a link to an in-depth explanation. Thanks.
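
    For reference, ModelMesh.Draw is essentially just a loop over the mesh parts that binds each part's buffers and calls DrawIndexedPrimitives, so doing it by hand looks roughly like this (XNA 4.0 property names):

    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            GraphicsDevice.SetVertexBuffer(part.VertexBuffer);
            GraphicsDevice.Indices = part.IndexBuffer;

            foreach (EffectPass pass in part.Effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                GraphicsDevice.DrawIndexedPrimitives(
                    PrimitiveType.TriangleList,
                    part.VertexOffset, 0, part.NumVertices,
                    part.StartIndex, part.PrimitiveCount);
            }
        }
    }

    Packing several models into shared buffers is possible (that is what VertexOffset and the baseVertex argument are for), but you still need at least one draw call per distinct effect/texture, so "one Draw per frame" only works when everything shares the same material.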

    Read the article

  • How to generate portal zones?

    - by Meow
    I'm developing a portal-based scene manager. Basically all it does is check the portals against the camera frustum and render their associated portal zones accordingly. Is there any way my editor can generate the portal zones automatically, with the user only having to set the portals themselves? For example, the Max Payne 1/2 engine ("Max-FX") only required you to place the portal quads, unlike the C4 engine, where you also have to explicitly define the portal zones.

    Read the article

  • Better way to go up/down slope based on yaw?

    - by CyanPrime
    Alright, so I have a bit of movement code, and I'm thinking I'm going to need to manually specify when to go up or down a slope. All I have to work with is the slope's normal and vector, my current and previous position, and my yaw. Is there a better way to decide whether I go up or down the slope based on my yaw?

    Vector3f move = new Vector3f(0, 0, 0);
    move.x = (float) -Math.toDegrees(Math.cos(Math.toRadians(yaw)));
    move.z = (float) -Math.toDegrees(Math.sin(Math.toRadians(yaw)));
    move.normalise();

    if (move.z < 0 && slopeNormal.z > 0 || move.z > 0 && slopeNormal.z < 0) {
        if (move.x < 0 && slopeNormal.x > 0 || move.x > 0 && slopeNormal.x < 0) {
            move.y += slopeVec.y;
        }
    }
    if (move.z > 0 && slopeNormal.z > 0 || move.z < 0 && slopeNormal.z < 0) {
        if (move.x > 0 && slopeNormal.x > 0 || move.x < 0 && slopeNormal.x < 0) {
            move.y -= slopeVec.y;
        }
    }

    move.scale(movementSpeed * delta);
    Vector3f.add(pos, move, pos);
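
    A more general approach that avoids the sign checks entirely is to project the flat movement direction onto the slope plane using the slope normal; the projection automatically gains the right upward or downward y component for any yaw. A sketch (XNA-style Vector3 shown here, but the math is library-agnostic; yaw is assumed to be in radians):

    // Flat movement direction from the yaw angle.
    Vector3 move = new Vector3((float)-Math.Cos(yaw), 0f, (float)-Math.Sin(yaw));

    // Remove the component of 'move' that points along the (unit-length) slope normal,
    // which leaves the part of the direction lying in the slope plane.
    Vector3 onSlope = move - slopeNormal * Vector3.Dot(move, slopeNormal);
    onSlope.Normalize();

    position += onSlope * movementSpeed * delta;

    Incidentally, the Math.toDegrees around the cos/sin in the original code only scales both components by the same factor and is cancelled by normalise(), so it can be dropped.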

    Read the article

  • Character with several colliders and rigidbodies

    - by Lautaro
    I am doing a PvP fighting game. This is the GameObject hierarchy of the player character:

    Player contains:
    - Legs
    - Sword
    - Torso
    - Head

    I want to be able to:
    - Register impacts of the sword on a specific body part
    - Use AddForce on the whole player entity when a body part is struck
    - Change the animation of the player that owns the sword that hit

    Questions:
    - Is it correct that the only Rigidbody should be on the root Player GameObject?
    - Is it correct that the body parts should have colliders and be triggers?
    - Is it correct that the swords should have colliders but not be triggers?
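
    For what it's worth, that arrangement (one Rigidbody on the root, trigger colliders on the body parts, a non-trigger collider on the sword) is a common setup; a body part can then forward hits to the root, roughly like this (component and method names here are hypothetical):

    using UnityEngine;

    // Attached to each body-part GameObject (which carries a trigger collider).
    public class BodyPart : MonoBehaviour
    {
        public float hitForce = 10f;

        void OnTriggerEnter(Collider other)
        {
            if (!other.CompareTag("Sword")) return;   // assumes swords are tagged "Sword"

            // Push the whole character via the single Rigidbody on the root.
            Rigidbody body = GetComponentInParent<Rigidbody>();
            Vector3 away = (transform.position - other.transform.position).normalized;
            body.AddForce(away * hitForce, ForceMode.Impulse);

            // Tell the attacker's player script it landed a hit (hypothetical type/method).
            var attacker = other.GetComponentInParent<PlayerCharacter>();
            if (attacker != null) attacker.OnSwordHit(gameObject);
        }
    }

    Note that trigger messages are only generated when at least one of the two colliders involved has a Rigidbody, which the root of each character provides here.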

    Read the article

  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to get my code from calling each individual draw call down to using at least a VertexBuffer, and preferably an IndexBuffer, but now that I'm attempting to test my code, I'm getting the error:

    The current vertex declaration does not include all the elements required by the current vertex shader. Position0 is missing.

    Which makes absolutely no sense to me, as my VertexDeclaration is:

    public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
        new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
    );

    Which clearly contains the information. I am attempting to draw with the following lines:

    VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration, c.VertexList.Count, BufferUsage.WriteOnly);
    IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly);
    vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray());
    ib.SetData<int>(c.IndexList.ToArray());
    GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count / 3);

    Where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (contains the VertexDeclaration) and Game1.cs (Draw() is in lines 230-250). Not much else of relevance to this problem anywhere else. Note that the large commented sections are from an old version of the drawing code.
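
    One thing that stands out in that snippet: the buffers are created and filled, but never bound to the device before DrawIndexedPrimitives, so the draw call is validated against whatever vertex buffer (if any) was left bound by earlier code rather than against VertexPositionColorNormal.VertexDeclaration. A hedged sketch of the draw with the bindings and the effect pass in place:

    // Bind this chunk's buffers so the device sees *this* vertex declaration.
    GraphicsDevice.SetVertexBuffer(vb);
    GraphicsDevice.Indices = ib;

    foreach (EffectPass pass in effect.CurrentTechnique.Passes)   // your custom effect
    {
        pass.Apply();
        GraphicsDevice.DrawIndexedPrimitives(
            PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count / 3);
    }

    (Separately, typeof(int) index buffers require the HiDef profile; the Reach profile only allows 16-bit indices, although that produces a different error message.)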

    Read the article

  • Techniques for displaying vehicle damage

    - by norca
    I wonder how I can display vehicle damage, i.e. what is a good way to show damage on screen. Which kinds of models are common in games, and what are their benefits? What is the state of the art? One way I can imagine is to save a set of textures (normal/color/lightmaps, etc.) for each state of the car (normal, damaged, burnt out) and switch or blend between them. But does this really look good without changing the model? Another way I was thinking about is preparing animations for different locations on the car, something like damage on the front, on the left/right side, or on the back, and blending in the specific animation. But does this work well with good textures? What about physics engines: is it useful to use one for deforming vertex data? I think losing parts of the car (doors, sirens, weapons) could look really nice. My game is a kind of RTS with a top-down view; vehicles are not the most important units (it's not a racing game), but I have quite a lot of them in it. Thanks for the help.

    Read the article

  • Coordinate spaces and transformation matrices

    - by Belgin
    I'm trying to get an object from object space into projected space using these intermediate matrices:

    The first matrix (I) is the one that transforms from object space into inertial space, but since my object is not rotated or translated in any way inside the object space, this matrix is the 4x4 identity matrix.

    The second matrix (W) is the one that transforms from inertial space into world space, which is just a scale transform matrix with factor a = 14.1 on all coordinates, since the inertial space origin coincides with the world space origin.

            / a  0  0  0 \
        W = | 0  a  0  0 |
            | 0  0  a  0 |
            \ 0  0  0  1 /

    The third matrix (C) is the one that transforms from world space into camera space. This matrix is a translation matrix with a translation of (0, 0, 10), because I want the camera to be located behind the object, so the object must be positioned 10 units along the z axis.

            / 1  0  0  0  \
        C = | 0  1  0  0  |
            | 0  0  1  10 |
            \ 0  0  0  1  /

    And finally, the fourth matrix is the projection matrix (P). Bearing in mind that the eye is at the origin of the world space and the projection plane is defined by z = 1, the projection matrix is:

            / 1  0  0    0 \
        P = | 0  1  0    0 |
            | 0  0  1    0 |
            \ 0  0  1/d  0 /

    where d is the distance from the eye to the projection plane, so d = 1. I'm multiplying them like this: (((P x C) x W) x I) x V, where V is the vertex's coordinates in column vector form:

            / x \
        V = | y |
            | z |
            \ 1 /

    After I get the result, I divide the x and y coordinates by w to get the actual screen coordinates. Apparently, I'm doing something wrong or missing something completely here, because it's not rendering properly. Here's a picture of what is supposed to be the bottom side of the Stanford Dragon. Also, I should add that this is a software renderer, so no DirectX or OpenGL stuff here.
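
    As a sanity check, chasing one vertex through that chain gives the expected numbers. The object-space point (0, 0, 0) goes to (0, 0, 0) under W, to (0, 0, 10) under C, and to (0, 0, 10, 10) under P (since w' = z/d = 10), so after the divide by w it lands at (0, 0), straight ahead of the eye. Likewise (1, 0, 0) becomes (14.1, 0, 10, 10) and projects to x = 1.41. The matrices therefore behave as intended for these inputs, which suggests the problem is more likely in how each vertex is multiplied (row- vs column-vector order) or in the w-divide than in the matrices themselves.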

    Read the article

  • Material usage, one per model or per object?

    - by WSkid
    Is it better (memory, developer time, space) to use a single model that is unwrapped and uses a single material, or to break a model down into appropriate pieces, each with its own smaller texture/material? Or does what is acceptable depend on the target platform, i.e. PC vs tablet? An example: say you have a typical house with a tiled roof.

    Option 1: model it, make sure everything is attached, and unwrap the walls/roof so that in your UV template the walls and roof are in one texture file, side by side in, say, a 512x512 file.
    Option 2: model the roof and walls as separate objects, unwrap them individually, and have two UV templates. You could then have a 256x256 file for each one.

    Read the article

  • Algorithm to simplify building/structural meshes

    - by morpheus
    I am looking for an algorithm to simplify the meshes of buildings or similar structures. EDIT: I had made a comment that Hoppe's algorithm tends to make meshes more and more spherical with simplification, but I am not sure about it, so I am deleting the comment. Buildings, in contrast, should tend to become more and more rectangular with increasing simplification. The D3DX extensions for D3D version 9.0 (d3dx9.lib) used to have classes for progressive mesh simplification. See:

    http://doc.51windows.net/Directx9_SDK/?url=/directx9_sdk/graphics/reference/d3dx/functions/mesh/d3dxgeneratepmesh.htm
    http://msdn.microsoft.com/en-us/library/windows/desktop/bb281243(v=vs.85).aspx

    Read the article

  • Normalizing the direction to check if able to move

    - by spartan2417
    I have a room with 4 walls along the x and z axes respectively. My player, who is in first person (and therefore the camera), should have collision detection with these walls. I'm relatively new to this, so please bear with me. I believe the way to do this is to calculate the direction and distance to the wall from the camera and then normalize the direction. However, I can only get this far before I don't know what to do. I think you should work out the angle and the direction you are facing? Here _dx and _dz are a small buffer in front of the camera:

    float CalcDirection(float Cam_x, float Cam_z, float Wall_x, float Wall_z)
    {
        // Calculate direction and distance to obstacle.
        float ob_dirx = Cam_x + _dx - Wall_x;
        float ob_dirz = Cam_z + _dz - Wall_z;
        float ob_dist = sqrt(ob_dirx*ob_dirx + ob_dirz*ob_dirz);

        // Normalise directions
        float ob_norm = sqrt(ob_dirx*ob_dirx + ob_dirz*ob_dirz);
        ob_dirx = (ob_dirx)/ob_norm;
        ob_dirz = (ob_dirz)/ob_norm;

    Can anyone explain in layman's terms how I work out the angle?
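
    If all you need is the angle of that direction in the XZ plane, atan2 gives it directly from the (even unnormalized) components, with no extra trigonometry. A sketch (C# syntax here, but atan2 exists in every math library; the camera yaw is assumed to be in radians):

    // Angle of the obstacle direction in the XZ plane, in radians,
    // measured from the +x axis toward +z.
    double angle = Math.Atan2(ob_dirz, ob_dirx);

    // Compare with the camera's facing angle and wrap the difference into [-pi, pi]
    // to decide whether the wall lies in front of the player.
    double diff = angle - cameraYaw;
    while (diff > Math.PI) diff -= 2 * Math.PI;
    while (diff < -Math.PI) diff += 2 * Math.PI;
    bool inFront = Math.Abs(diff) < Math.PI / 2;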

    Read the article

  • Translate along local axis

    - by Aaron
    I have an object with a position matrix and a rotation matrix (derived from a quaternion, but I digress). I'm able to translate this object along world-relative vectors, but I'm trying to figure out how to translate it along local-relative vectors. So if the object is tilted 45 degrees around its Z axis, the vector (1, 0, 0) should make it move to the upper right. For world-space translations I simply turn the movement vector into a matrix and multiply it by the position matrix: position_mat = translation_mat * position_mat. For local-space translations I figured I'd have to bring the rotation matrix into that formula, but no matter where I multiply the rotation matrix in, I see the object spin around instead when I apply a translation over time.
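
    The usual trick is to rotate the movement vector itself (a direction, so rotation only, no translation) and then apply it as an ordinary world-space translation; the rotation matrix should not be folded into the position update in any other way, which is typically what causes the orbiting you describe. A sketch with XNA-style types (row-vector convention; with column vectors the translation goes on the left, as in your formula):

    // Local movement the user requested, e.g. "move along my local +X".
    Vector3 localMove = new Vector3(1, 0, 0);

    // Rotate the direction into world space using only the rotation matrix
    // (TransformNormal ignores any translation in the matrix).
    Vector3 worldMove = Vector3.TransformNormal(localMove, rotationMatrix);

    // Apply it exactly like a world-space translation.
    positionMatrix *= Matrix.CreateTranslation(worldMove * speed * dt);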

    Read the article

  • Bending of track in a racing game

    - by caius
    I am trying to create a small racing game in which the track is modeled using a BSpline curve for the path's center line and directional vectors to define the 'bending' of the track at each point. My problem is that I don't know how to calculate the correct bending/slope of the curve in such a way that it is optimal, or at least visually nice, for a car to 'bend into the corner'. My idea was to use the direction of the 2nd derivative of the curve; however, while this approach looks fine for most of the track, there are points at which the 2nd derivative makes sharp twists / very quick 180-degree flips. I also read about 'knots' of BSplines, but I don't know whether such a twist in the 2nd derivative is a knot or whether knots are something else. Using a BSpline:

    1. How could I calculate a visually nice bending of the track for a racing game?
    2. Is it possible to do this using some simple calculation of centripetal force / gravity?
    3. Is it possible to do this using the 1st, 2nd and 3rd derivatives of the BSpline curve?

    I am not looking for the 'physically correct' bending angle for the track; I would just like to create something which is visually pleasing in a simple game. I am using a framework which has a built-in class for BSpline, including support for the 1st, 2nd and 3rd derivatives of the curve.
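
    Questions 2 and 3 actually meet in the standard banked-turn relation: the curvature of the centre line (computable from the 1st and 2nd derivatives) gives tan(theta) = v^2 * kappa / g, and smoothing or clamping theta along the track is usually enough to look pleasant. A sketch under those assumptions (the spline accessor names are placeholders for whatever your framework provides):

    // Curvature of the centre line at parameter t: kappa = |r' x r''| / |r'|^3
    Vector3 d1 = spline.FirstDerivative(t);    // placeholder accessor
    Vector3 d2 = spline.SecondDerivative(t);   // placeholder accessor
    float kappa = Vector3.Cross(d1, d2).Length() / (float)Math.Pow(d1.Length(), 3);

    // Banked-turn relation: tan(theta) = v^2 * kappa / g, with a chosen "design speed".
    const float g = 9.81f;
    float designSpeed = 30f;                   // how fast cars are meant to take corners; tune to taste
    float bankAngle = (float)Math.Atan(designSpeed * designSpeed * kappa / g);

    // Lean toward the inside of the corner; flip the sign if it comes out backwards
    // for your coordinate convention.
    float side = Math.Sign(Vector3.Dot(Vector3.Cross(d1, Vector3.Up), d2));
    bankAngle *= side;

    Because kappa is derived from the same 2nd derivative, it can still jump at inflection points; a simple low-pass filter over bankAngle along the track (or sampling kappa slightly ahead and behind and averaging) removes the sudden flips you are seeing.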

    Read the article

  • Connecting 2 Vertices in 3DS Max?

    - by Reanimation
    How do you connect two vertices in 3DS Max 2013? I have two vertices which I wish to connect with a line to create an edge (actually several). I have tried everything I can think of and done several Google searches, but they only turn up the method for older versions, which says to use the "Connect" button... but I can't find the Connect button in my version (see the screenshots below, which show my menu and the vertices I'm trying to connect). Basically, I've edited an STL file and deleted some edges and vertices. Now I want to fill the gaps and triangulate what's left. Thanks.

    Read the article

  • How can I load .obj files in the Soya3D engine?

    - by John Riselvato
    I recently just found soya3d. I want to import .obj files, but it seems to only accept .data files. How can I import .obj files? Importing a .obj file named "house" produces this error:

    Traceback (most recent call last):
      File "introduction.py", line 7, in <module>
        model = soya.Model.get("house")
      File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 259, in get
        return klass._alls.get(filename) or klass._alls.setdefault(filename, klass.load(filename))
      File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 268, in load
        dirname = klass._get_directory_for_loading_and_check_export(filename)
      File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 194, in _get_directory_for_loading_and_check_export
        dirname = klass._get_directory_for_loading(filename, ext)
      File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 171, in _get_directory_for_loading
        raise ValueError("Cannot find a %s named %s!" % (klass, filename))
    ValueError: Cannot find a <class 'soya.Model'> named house!
    * Soya3D * Quit...

    Read the article

  • Why does unity obj import flip my x coordinate?

    - by milkplus
    When I import my Wavefront .obj model into Unity and then draw lines over it using the same coordinates as in the .obj file, the x coordinate is negated. I don't see any option in the importer that might be doing that, and I'm using the same localToWorldMatrix and the same coordinate data as in the .obj file. Hmmm.

    GL.PushMatrix();
    GL.MultMatrix(transform.localToWorldMatrix);
    CreateMaterial();
    lineMaterial.SetPass(0);
    GL.Color(new Color(0, 1, 0));
    GL.Begin(GL.LINES);
    GL.Vertex(p1);
    GL.Vertex(p2);
    GL.Vertex(p2);
    GL.Vertex(p3);
    //...
    GL.End();
    GL.PopMatrix();
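
    As far as I know, Unity's mesh importers flip the X axis (and the winding order) to convert models from the right-handed convention most modelling tools export into Unity's left-handed coordinate system, which shows up as exactly this negated x. If you draw GL lines from the raw .obj numbers, you need to apply the same flip, e.g.:

    // Mirror a raw .obj coordinate into the imported mesh's convention
    // (assumption: the importer's only handedness fix is the X flip).
    Vector3 ToUnity(Vector3 objCoord)
    {
        return new Vector3(-objCoord.x, objCoord.y, objCoord.z);
    }

    GL.Vertex(ToUnity(p1));
    GL.Vertex(ToUnity(p2));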

    Read the article

  • I am new to game development, what do I need to know? [closed]

    - by farmdve
    I am unsure whether this question is a duplicate; I hope it isn't. Are there any resources on the terminology used in game development? Even if you tell me to learn some graphics API, how would I understand what it does if I don't know the terminology (voxel, mesh, polygon, shading)? What about the math involved in a game (geometry), or concepts like gravity and collision detection and their respective maths? I am very bad at math, never was good, because I have ADHD, but I won't give up just yet. I look at a game and I see "textures", but how am I walking on them? How do they take on substance so I don't fall through them? And depth? This is what I need information about, not just a link to a library like SDL (which I have compiled under MinGW and MinGW-w64), being told to learn it, and the cliché answer "start simple/small". I hope the question(s) are not too vague.

    Read the article

  • How to prevent "underwater sight" in games

    - by CPP_Person
    In many games where the player can go underwater, when the camera sits right at the surface so that the top half of the screen is in the air and the bottom half is in the water, it looks almost as if the water doesn't exist and the player is... flying slowly with water sounds. Is there a logical way to solve this? An algorithm? It doesn't seem like any standard solution has emerged, since many games still have this. I don't want to make the same mistake.

    Read the article

  • OpenGL Fast-Object Instancing Error

    - by HJ Media Studios
    I have some code that loops through a set of objects and renders instances of those objects. The list of objects that needs to be rendered is stored as a std::map<MeshResource*, std::vector<MeshRenderer*> >, where an object of class MeshResource contains the vertices and indices with the actual data, and an object of class MeshRenderer defines the point in space the mesh is to be rendered at. My rendering code is as follows:

    glDisable(GL_BLEND);
    glEnable(GL_CULL_FACE);
    glDepthMask(GL_TRUE);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);

    for (std::map<MeshResource*, std::vector<MeshRenderer*> >::iterator it = renderables.begin(); it != renderables.end(); it++)
    {
        it->first->setupBeforeRendering();
        cout << "<";
        for (unsigned long i = 0; i < it->second.size(); i++)
        {
            // Pass in an identity matrix to the vertex shader - used here only for
            // debugging purposes; the real code correctly inputs any matrix.
            uniformizeModelMatrix(Matrix4::IDENTITY);
            /**
             * StartHere fix rendering problem.
             * Ruled out:
             * Vertex buffers correctly.
             * Index buffers correctly.
             * Matrices correct?
             */
            it->first->render();
        }
        it->first->cleanupAfterRendering();
    }
    geometryPassShader->disable();
    glDepthMask(GL_FALSE);
    glDisable(GL_CULL_FACE);
    glDisable(GL_DEPTH_TEST);

    The function in MeshResource that handles setting up the uniforms is as follows:

    void MeshResource::setupBeforeRendering()
    {
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);
        glEnableVertexAttribArray(2);
        glEnableVertexAttribArray(3);
        glEnableVertexAttribArray(4);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID);
        glBindBuffer(GL_ARRAY_BUFFER, vboID);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);                           // Vertex position
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 12);          // Vertex normal
        glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 24);          // UV layer 0
        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 32);          // Vertex color
        glVertexAttribPointer(4, 1, GL_UNSIGNED_SHORT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 44); // Material index
    }

    The code that renders the object is this:

    void MeshResource::render()
    {
        glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0);
    }

    And the code that cleans up is this:

    void MeshResource::cleanupAfterRendering()
    {
        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);
        glDisableVertexAttribArray(2);
        glDisableVertexAttribArray(3);
        glDisableVertexAttribArray(4);
    }

    The end result of this is that I get a black screen, although the end of my rendering pipeline after the rendering code (essentially just drawing axes and lines on the screen) works properly, so I'm fairly sure it's not an issue with the passing of uniforms. If, however, I change the code slightly so that the rendering code calls the setup immediately before rendering, like so:

    void MeshResource::render()
    {
        setupBeforeRendering();
        glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0);
    }

    The program works as desired. I don't want to have to do this, though, as my aim is to set up vertex, material, etc. data once per object type and then render each instance updating only the transformation information. The uniformizeModelMatrix works as follows:

    void RenderManager::uniformizeModelMatrix(Matrix4 matrix)
    {
        glBindBuffer(GL_UNIFORM_BUFFER, globalMatrixUBOID);
        glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(Matrix4), matrix.ptr());
        glBindBuffer(GL_UNIFORM_BUFFER, 0);
    }

    Read the article

  • Cheap ways to do scaling ops in shader?

    - by Nick Wiggill
    I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete / grid-based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per-vertex position attribute cost from 12 bytes each to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain. Currently in my code, one unit in GLSL shaders is equal to 1 m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre. So I see these potential solutions to scale the positional data correctly once it arrives at the vertex shader stage:

    1. Use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU-side code, and do a division by 10 in the vertex shader to scale incoming decimetre values back to metres. Arbitrary (non-PoT) divisions tend to be slow, however.
    2. Use (some-power-of-two):1 scaling (e.g. 8:1), which enables the use of a bitshift (e.g. val >> 3) to do the division... not sure how performant this is in shaders, though. Not as intuitive to read values, but possibly quite a bit faster than dividing by a non-PoT value.
    3. Use a texture as a lookup table. I've heard that this is really fast.

    Or whatever solutions others can offer to achieve the same result -- minimal vertex data with sensible scaling.

    Read the article
