Search Results

Search found 19281 results on 772 pages for 'blender game engine'.

Page 442/772

  • Creating a DrawableGameComponent

    - by Christian Frantz
    If I'm going to draw cubes efficiently, I need to get rid of the large number of draw calls I'm making, and what has been suggested is that I create a "mesh" of my cubes. I already have them stored in a single vertex buffer, but the issue lies in my draw method, where I still loop through every cube in order to draw it. I thought this was necessary since each cube has a set position, but it lowers the frame rate incredibly. What's the easiest way to go about this? I have a class CubeChunk that inherits Microsoft.Stuff.DrawableGameComponent, but I don't know what comes next. I suppose I could just use the chunk of cubes created in my cube class, but that would just keep me going in circles and drawing each cube individually. The goal here is to create a draw method that draws my chunk as a whole, not each individual cube as I've been doing.
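
    A minimal sketch of what a single-call chunk draw could look like in XNA, assuming the chunk's geometry already lives in one vertex buffer (the field and method names here are hypothetical, not taken from the question):

        public class CubeChunk : DrawableGameComponent
        {
            VertexBuffer chunkVertices;   // every cube's vertices, built once up front
            BasicEffect effect;
            int triangleCount;

            public CubeChunk(Game game) : base(game) { }

            public override void Draw(GameTime gameTime)
            {
                GraphicsDevice.SetVertexBuffer(chunkVertices);
                foreach (EffectPass pass in effect.CurrentTechnique.Passes)
                {
                    pass.Apply();
                    // One call submits the whole chunk; no per-cube loop.
                    GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, triangleCount);
                }
                base.Draw(gameTime);
            }
        }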

    Read the article

  • Frame Buffer Objects vs calling TexCoord2f?

    - by sensae
    I'm learning the basics of OpenGL with LWJGL currently, and following a guide I've got textured quads that can move around a scene. I've been reading about Frame Buffer Objects, and I'm not really clear on their purpose and their benefit. My understanding is that I'll create an FBO with the texture I'd like, load the FBO, draw a quad, then unload the FBO. What would the technique I'm currently using for texture management be called, and how does it differ from using FBOs? What are the benefits of using FBOs? How do they fit into the grand rendering scheme of things?

    Read the article

  • Weird appearance for a 3D XNA ground

    - by Belos
    I wanted to add a ground so I can know the position of a helicopter in the world. But the ground appeared in a weird way: http://i.stack.imgur.com/yTSuW.jpg The ground had the following texture: http://i.stack.imgur.com/pdpxB.png EDIT: Sorry, I forgot to post the code: public class ImportModel { public Vector3 Position { get; set; } public Vector3 Rotation { get; set; } public Vector3 Scale { get; set; } Model Model; Matrix[] modeltransforms; GraphicsDevice GraphicDevice; ContentManager Content; BoundingSphere sphere; bool boundingimplemented = false; public ImportModel(string model, GraphicsDevice gd, ContentManager cm, Vector3 position, Vector3 rot, Vector3 sca) { GraphicDevice = gd; Content = cm; Position = position; Rotation = rot; Scale = sca; Model = Content.Load<Model>(model); modeltransforms = new Matrix[Model.Bones.Count]; Model.CopyAbsoluteBoneTransformsTo(modeltransforms); } public void Draw(Camera camera) { Matrix baseworld = Matrix.CreateScale(Scale) * Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z) * Matrix.CreateTranslation(Position); foreach (ModelMesh mesh in Model.Meshes) { Matrix localworld = modeltransforms[mesh.ParentBone.Index] * baseworld; foreach (ModelMeshPart meshpart in mesh.MeshParts) { BasicEffect effect = (BasicEffect)meshpart.Effect; effect.World = localworld; effect.View = camera.View; effect.Projection = camera.Projection; effect.EnableDefaultLighting(); } mesh.Draw(); } } public BoundingSphere BoundingSphere { get { if (!boundingimplemented) { foreach (ModelMesh mesh in Model.Meshes) { BoundingSphere transformed = mesh.BoundingSphere.Transform( modeltransforms[mesh.ParentBone.Index]); sphere = BoundingSphere.CreateMerged(sphere, transformed); } Matrix worldTransform = Matrix.CreateScale(Scale) * Matrix.CreateTranslation(Position); BoundingSphere transforme = sphere; transforme = transforme.Transform(worldTransform); return transforme; } else { Matrix worldTransform = Matrix.CreateScale(Scale) * Matrix.CreateTranslation(Position); BoundingSphere transformed = sphere; transformed = transformed.Transform(worldTransform); return transformed; } } } } Then I call the class from the Game1 class: ImportModel ground = new ImportModel("ground", GraphicsDevice, Content, Vector3.Zero, Vector3.Zero, new Vector3(20f)); EDIT2:This is how the scene looks from top: i.stack.imgur.com/Hs983.jpg

    Read the article

  • Collision detection of player larger than clipping tile

    - by user1306322
    I want to know how to check for collisions efficiently in the case where the player's box is larger than a map tile. On the left is my usual case, where I make 8 checks against the surrounding tiles; with the right one, that approach would be much less efficient. (Picture of the two cases: on the left is the simple case, on the right is the one I need help with.) http://i.stack.imgur.com/k7q0l.png How should I handle the right case?
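
    One common approach, sketched here assuming an axis-aligned player rectangle and a uniform tile size (the map and collision helpers are hypothetical): convert the player's bounding box into a range of tile coordinates and test only the tiles in that range, so the number of checks grows with the player's size instead of being fixed at 8.

        // bounds: the player's AABB in world pixels; tileSize: width/height of one square tile
        Rectangle bounds = player.Bounds;
        int leftTile   = bounds.Left / tileSize;
        int rightTile  = (bounds.Right - 1) / tileSize;
        int topTile    = bounds.Top / tileSize;
        int bottomTile = (bounds.Bottom - 1) / tileSize;

        // One extra ring of tiles around the box so this frame's movement can't skip a wall.
        for (int y = topTile - 1; y <= bottomTile + 1; y++)
        {
            for (int x = leftTile - 1; x <= rightTile + 1; x++)
            {
                Rectangle tile = map.TileBounds(x, y);
                if (map.IsSolid(x, y) && tile.Intersects(bounds))
                    ResolveCollision(player, tile);
            }
        }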

    Read the article

  • Leg animation not working

    - by Monacraft
    I am making a simple animation in XNA C# of a leg moving. This is the logic code for the thigh. It is meant to swing from 25° to 335°. Instead, it hits a point and then keeps spinning in the other direction. Please help; here's the code:

        private void Thigh_method()
        {
            if (Legdata.Left == true)
                signvalue = -0.05f;
            else
                signvalue = 0.05f;

            if (Legdata.ToMid == true)
                Thighturn_ang += signvalue;
            if (Legdata.ToMid == false)
                Thighturn_ang -= signvalue;

            if (Thighturn_ang <= 25 || Thighturn_ang <= 335 && Thighturn_ang <= 180)
                Legdata.Left = true;
            if (Thighturn_ang >= 25 || Thighturn_ang >= 335 && Thighturn_ang >= 180)
                Legdata.Left = false;

            if (Thighturn_ang == 0)
                Legdata.ToMid = false;
            if (Math.Abs(Thighturn_ang) >= 25f)
                Legdata.ToMid = true;
        }

    Thanks in advance, Yours: Mona
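
    If the goal is simply a swing between +25° and -25° (25° through 335°), one alternative worth sketching is to drive the angle from elapsed time with a sine wave instead of toggling direction flags; this is an assumption about the intended motion, not a fix for the code above.

        // A hedged alternative: time is total seconds elapsed, period is seconds per full swing.
        float Thigh_Angle(float time, float period)
        {
            float amplitude = MathHelper.ToRadians(25f);   // swing limit of 25 degrees
            return amplitude * (float)Math.Sin(MathHelper.TwoPi * time / period);
        }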

    Read the article

  • Grid Based Lighting in XNA/Monogame

    - by sm81095
    I know that questions like this have been asked many times, but I have not found one exactly like this yet. I have implemented a top-down grid-based world in Monogame and am starting on the lighting system soon. My plan for lighting is to have a grid that is 4 times wider and higher, basically splitting each world tile into a 4x4 set of "subtiles". I would like to use a flow-like system to spread light across the tiles, reducing the light by a small amount each step. This is roughly the effect I am going for: http://i.imgur.com/rv8LCxZ.png The black grid lines are the light grid, the red lines are the actual tile grid, and the light drop-off is very exaggerated. I plan to render the world by drawing the unlit grid to a separate RenderTarget2D, then rendering the lighting grid to its own target and overlaying the two. Basically, my questions are: What would the algorithm for a flow-style lighting system like this look like? Would there be a more efficient way of rendering this? How would I handle darkening the light with colors: by reducing the RGB values in each cell, or by reducing the alpha in each cell, assuming I render the light map over the grid using blending? And assuming the former is possible, what BlendState would I use for that?
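
    A flow-style spread like the one described is usually a breadth-first flood fill over the fine grid. A minimal sketch under the question's assumptions: a 2D float array of light values for the subtile grid and a constant drop-off per subtile (the array and parameter names are hypothetical, and Point is XNA's integer point type):

        void SpreadLight(float[,] light, int startX, int startY, float intensity, float falloff)
        {
            var queue = new Queue<Point>();
            light[startX, startY] = Math.Max(light[startX, startY], intensity);
            queue.Enqueue(new Point(startX, startY));

            while (queue.Count > 0)
            {
                Point p = queue.Dequeue();
                float next = light[p.X, p.Y] - falloff;   // constant drop-off per subtile
                if (next <= 0f)
                    continue;

                Point[] neighbours =
                {
                    new Point(p.X + 1, p.Y), new Point(p.X - 1, p.Y),
                    new Point(p.X, p.Y + 1), new Point(p.X, p.Y - 1)
                };
                foreach (Point n in neighbours)
                {
                    bool inside = n.X >= 0 && n.Y >= 0 &&
                                  n.X < light.GetLength(0) && n.Y < light.GetLength(1);
                    if (inside && light[n.X, n.Y] < next)   // only ever brighten a cell
                    {
                        light[n.X, n.Y] = next;
                        queue.Enqueue(n);
                    }
                }
            }
        }

    Running this once per light and keeping the per-cell maximum naturally handles overlapping lights; how the resulting light map is blended over the scene (RGB versus alpha, and which BlendState) is a separate rendering choice.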

    Read the article

  • Given a start and end point, how can I constrain the end point so the resulting line segment is horizontal, vertical, or 45 degrees?

    - by GloryFish
    I have a grid of letters. The player clicks on a letter and drags out a selection. Using Bresenham's Algorithm I can create a line of highlighted letters representing the player's selection. However, what I really want is to have the line segment be constrained to 45 degree angles (as is common for crossword-style games). So, given a start point and an end point, how can I find the line that passes through the start point and is closest to the end point? Bonus: To make things super sweet I'd like to get a list of points in the grid that the line passes through, and for super MEGA bonus points, I'd like to get them in order of selection (i.e. from start point to end point).
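
    One way to do the constraint, sketched here without assuming any particular engine: snap the drag vector's angle to the nearest multiple of 45° and rebuild the end point from that direction. The same step vector also gives the selected cells in order, which covers the bonus part.

        // start and end are grid coordinates; returns end snapped to the nearest 45-degree direction
        Point SnapTo45(Point start, Point end)
        {
            int dx = end.X - start.X;
            int dy = end.Y - start.Y;
            double angle = Math.Atan2(dy, dx);
            double snapped = Math.Round(angle / (Math.PI / 4)) * (Math.PI / 4);  // nearest 45 degrees
            int stepX = (int)Math.Round(Math.Cos(snapped));   // each component ends up -1, 0 or 1
            int stepY = (int)Math.Round(Math.Sin(snapped));
            int length = Math.Max(Math.Abs(dx), Math.Abs(dy));
            return new Point(start.X + stepX * length, start.Y + stepY * length);
        }

        // The cells in selection order are start, start + (stepX, stepY), start + 2*(stepX, stepY), ...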

    Read the article

  • HLSL Shader not working right?

    - by dvds414
    Okay so I have this shader for ambient occlusion. It loads to world correctly, but it just shows all the models as being white. I do not know why. I am just running the shader while the model is rendering, is that correct? or do I need to make a render target or something? if so then how? I'm using C++. Here is my shader. float sampleRadius; float distanceScale; float4x4 xProjection; float4x4 xView; float4x4 xWorld; float3 cornerFustrum; struct VS_OUTPUT { float4 pos : POSITION; float2 TexCoord : TEXCOORD0; float3 viewDirection : TEXCOORD1; }; VS_OUTPUT VertexShaderFunction( float4 Position : POSITION, float2 TexCoord : TEXCOORD0) { VS_OUTPUT Out = (VS_OUTPUT)0; float4 WorldPosition = mul(Position, xWorld); float4 ViewPosition = mul(WorldPosition, xView); Out.pos = mul(ViewPosition, xProjection); Position.xy = sign(Position.xy); Out.TexCoord = (float2(Position.x, -Position.y) + float2( 1.0f, 1.0f ) ) * 0.5f; float3 corner = float3(-cornerFustrum.x * Position.x, cornerFustrum.y * Position.y, cornerFustrum.z); Out.viewDirection = corner; return Out; } texture depthTexture; texture randomTexture; sampler2D depthSampler = sampler_state { Texture = <depthTexture>; ADDRESSU = CLAMP; ADDRESSV = CLAMP; MAGFILTER = LINEAR; MINFILTER = LINEAR; }; sampler2D RandNormal = sampler_state { Texture = <randomTexture>; ADDRESSU = WRAP; ADDRESSV = WRAP; MAGFILTER = LINEAR; MINFILTER = LINEAR; }; float4 PixelShaderFunction(VS_OUTPUT IN) : COLOR0 { float4 samples[16] = { float4(0.355512, -0.709318, -0.102371, 0.0 ), float4(0.534186, 0.71511, -0.115167, 0.0 ), float4(-0.87866, 0.157139, -0.115167, 0.0 ), float4(0.140679, -0.475516, -0.0639818, 0.0 ), float4(-0.0796121, 0.158842, -0.677075, 0.0 ), float4(-0.0759516, -0.101676, -0.483625, 0.0 ), float4(0.12493, -0.0223423, -0.483625, 0.0 ), float4(-0.0720074, 0.243395, -0.967251, 0.0 ), float4(-0.207641, 0.414286, 0.187755, 0.0 ), float4(-0.277332, -0.371262, 0.187755, 0.0 ), float4(0.63864, -0.114214, 0.262857, 0.0 ), float4(-0.184051, 0.622119, 0.262857, 0.0 ), float4(0.110007, -0.219486, 0.435574, 0.0 ), float4(0.235085, 0.314707, 0.696918, 0.0 ), float4(-0.290012, 0.0518654, 0.522688, 0.0 ), float4(0.0975089, -0.329594, 0.609803, 0.0 ) }; IN.TexCoord.x += 1.0/1600.0; IN.TexCoord.y += 1.0/1200.0; normalize (IN.viewDirection); float depth = tex2D(depthSampler, IN.TexCoord).a; float3 se = depth * IN.viewDirection; float3 randNormal = tex2D( RandNormal, IN.TexCoord * 200.0 ).rgb; float3 normal = tex2D(depthSampler, IN.TexCoord).rgb; float finalColor = 0.0f; for (int i = 0; i < 16; i++) { float3 ray = reflect(samples[i].xyz,randNormal) * sampleRadius; //if (dot(ray, normal) < 0) // ray += normal * sampleRadius; float4 sample = float4(se + ray, 1.0f); float4 ss = mul(sample, xProjection); float2 sampleTexCoord = 0.5f * ss.xy/ss.w + float2(0.5f, 0.5f); sampleTexCoord.x += 1.0/1600.0; sampleTexCoord.y += 1.0/1200.0; float sampleDepth = tex2D(depthSampler, sampleTexCoord).a; if (sampleDepth == 1.0) { finalColor ++; } else { float occlusion = distanceScale* max(sampleDepth - depth, 0.0f); finalColor += 1.0f / (1.0f + occlusion * occlusion * 0.1); } } return float4(finalColor/16, finalColor/16, finalColor/16, 1.0f); } technique SSAO { pass P0 { VertexShader = compile vs_3_0 VertexShaderFunction(); PixelShader = compile ps_3_0 PixelShaderFunction(); } }

    Read the article

  • 3d Picking under reticle

    - by Wolftousen
    I'm currently trying to work out some 3D picking code that I started years ago but lost interest in once the assignment was completed (this part wasn't actually part of the assignment). I am not using the mouse coords for picking; I'm just using the position in 3D space and a ray directly out from there. A small hitch, though, is that I want to use a cone and not a ray. Here are the variables I'm using:

        float iReticleSlope = 95/3000; // inverse reticle slope
        float baseReticle = 1;         // radius of the reticle at z = 0
        float maxRange = 3000;         // max range to target
        Quaternion orientation;        // the camera's orientation
        Vector3d position;             // the camera's position

    Then I loop through each object in the world:

        Vector3d transformed; // object position after transformations
        float d, r;           // holder variables
        for (i = 0; i < objects.length; i++)
        {
            transformed = position - objects[i].position; // position relative to the camera
            orientation.multiply(transformed);            // orient the object relative to the camera
            if (transformed.z < 0)
            {
                d = sqrt(transformed[0] * transformed[0] + transformed[1] * transformed[1]);
                r = -transformed[2] * iReticleSlope + objects[i].radius;
                if (d < r && -transformed[2] - objects[i].radius <= maxRange)
                {
                    // the object is under the reticle
                }
                else
                {
                    // the object is not under the reticle
                }
            }
            else
            {
                // the object is not under the reticle
            }
        }

    Now this all works fine and dandy until the window ratio doesn't match the resolution ratio. Is there any simple way to account for that?

    Read the article

  • GLSL, all in one or many shader programs?

    - by stjepano
    I am doing some 3D demos using OpenGL and I noticed that GLSL is somewhat "limited" (or is it just me?). Anyway I have many different types of materials. Some materials have ambient and diffuse color, some materials have ambient occlusion map, some have specular map and bump map etc. Is it better to support everything in one vertex/fragment shader pair or is it better to create many vertex/fragment shaders and select them based on currently selected material? What is the usual shader strategy in OpenGL or D3D?

    Read the article

  • problems texture mapping in modern OpenGL 3.3 using GLSL #version 150

    - by RubyKing
    Hi all, I'm trying to do texture mapping using modern OpenGL and GLSL 150. The problem is that the texture shows but has a weird flicker; there is a video of it here: http://www.youtube.com/watch?v=xbzw_LMxlHw I have everything set up as best I can: my texcoords are in my vertex array and sent up to OpenGL, my fragment color is set from the texture and texel values, my vertex shader passes the texture coordinates on to the fragment shader, and my ins and outs are set up, yet I still don't know what I'm missing that could be causing that flicker. Here is my code.

    Fragment shader:

        #version 150
        uniform sampler2D texture;
        in vec2 texture_coord;
        varying vec3 texture_coordinate;

        void main(void)
        {
            gl_FragColor = texture(texture, texture_coord);
        }

    Vertex shader:

        #version 150
        in vec4 position;
        out vec2 texture_coordinate;
        out vec2 texture_coord;
        uniform vec3 translations;

        void main()
        {
            texture_coord = (texture_coordinate);
            gl_Position = vec4(position.xyz + translations.xyz, 1.0);
        }

    The last bit here is my vertex array with texture coordinates:

        GLfloat vVerts[] = { 0.5f, 0.5f, 0.0f, 0.0f, 1.0f,
                             0.0f, 0.5f, 0.0f, 1.0f, 1.0f,
                             0.0f, 0.0f, 0.0f, 0.0f, 0.0f,
                             0.5f, 0.0f, 0.0f, 1.0f, 0.0f }; // tex x and y

    If you need to see all the code in its fullest glory, here is a link to every file of the actual full source: http://ideone.com/7kQN3 Thank you for your help.

    Read the article

  • Extracting Frustum Planes (Hartmann & Gribbs method)

    - by DAVco
    I have been grappling with the Hartmann/Gribbs method of extracting the frustum planes for some time now, with little success. There doesn't appear to be a "definitive" topic or tutorial which combines all the necessary information, so perhaps this can be it. First of all, I am attempting to do this in C# (for PlayStation Mobile), using OpenGL-style column-major matrices in a right-handed coordinate system, but obviously the math will work in any language. My projection matrix has a near plane at 1.0, far plane at 1000, FOV of 45.0 and aspect of 1.7647. I want to get my planes in world space, so I build my frustum from the view-projection matrix (that's projectionMatrix * viewMatrix). The view matrix is the inverse of the camera's world transform. The problem is that regardless of what I tweak, I can't seem to get a correct frustum. I think I may be missing something obvious. Focusing on the near and far planes for the moment (since they have the most obvious normals when correct): when my camera is positioned looking down the negative z-axis, I get two planes facing in the same direction rather than opposite directions. If I strafe my camera left and right (while still looking along the z-axis), the x value of the normal vector changes. Obviously, something is fundamentally wrong here; I just can't figure out what. Maybe someone here can?
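
    For reference, the plane-extraction step itself is short. A hedged sketch of the Gribb/Hartmann idea in C#, assuming a combined matrix that transforms column vectors (clip = M * worldPoint), m[row, column] indexing, and planes stored as (a, b, c, d) with ax + by + cz + d = 0 and normals pointing into the frustum; the row/column access has to match the library's actual storage convention, which is the usual place this method goes wrong:

        // comboMatrix = projection * view, laid out so that clip = comboMatrix * worldPoint
        Vector4 Row(float[,] m, int r)
        {
            return new Vector4(m[r, 0], m[r, 1], m[r, 2], m[r, 3]);
        }

        Vector4[] ExtractFrustumPlanes(float[,] m)
        {
            var planes = new Vector4[6];
            planes[0] = Row(m, 3) + Row(m, 0);  // left
            planes[1] = Row(m, 3) - Row(m, 0);  // right
            planes[2] = Row(m, 3) + Row(m, 1);  // bottom
            planes[3] = Row(m, 3) - Row(m, 1);  // top
            planes[4] = Row(m, 3) + Row(m, 2);  // near (OpenGL-style clip z in [-1, 1])
            planes[5] = Row(m, 3) - Row(m, 2);  // far

            for (int i = 0; i < 6; i++)
            {
                // Normalize so the d component becomes a true signed distance.
                float length = new Vector3(planes[i].X, planes[i].Y, planes[i].Z).Length();
                planes[i] = planes[i] / length;
            }
            return planes;
        }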

    Read the article

  • Where does the light come from, using Maya/Panda3D?

    - by Aerovistae
    Total noob to Maya. Total noob to Panda3D. Planning on becoming really good at both as soon as I have free time to do so, but right now I have an assignment due in a few hours which requires this: (The part which confuses me is bolded.) Model and texture a vehicle and two different obstacles Build a scene graph in Panda with a plane, the vehicle, several copies of each of the obstacles, and (at least) a direction light Program vehicle movement, constrained to a plane (no terrain) Working headlights Vehicle collides with obstacles How do I attach a light source to a model? I'm assuming this is done in Panda3D but I'm sufficiently new to this that I wouldn't be astonished to hear it's part of the model.

    Read the article

  • Switching between Discrete and Integrated GPUs

    - by void-pointer
    Hello everyone, I develop CUDA applications on my Alienware M17x portable back-breaker, which has two discrete GTX 285M GPUs and one integrated GeForce 9400M GPU. I can currently switch between them using NVIDIA's software, but I would like the ability to do so within my applications for purposes of benchmarking and general convenience. Apparently this requires the "NDA version" of NVIDIA's Driver API, which I know not how to obtain. Would using this API be the only way to accomplish what I seek, and if so, how would I obtain it? A solution using Windows APIs would also be acceptable, though less preferable to one which would leverage a cross-platform API. I have created a similar thread concerning the matter on NVIDIA's forum, which is down at the time of this writing. Thanks for reading my question; it is much appreciated!

    Read the article

  • Transform 3d viewport vector to 2d vector

    - by learning_sam
    I am playing around with 3D transformations and came across an issue. I have a 3D vector already within the viewport and need to transform it to a 2D vector. (Let's say my screen is 10x10.) Does that work just like a regular transformation, or is something different here? I.e.: I have the vector a = (2, 1, 0) within the viewport and want the 2D vector. Does it work like this, and if so, how do I handle the "0" in the 3rd component?
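
    The usual pipeline, sketched here in C# under the assumption that the point still has to go through projection: transform by the view-projection matrix, divide by w, then map the resulting normalized device coordinates to pixels. The third component only matters for depth testing, so it simply drops out of the 2D result.

        Vector2 ProjectToScreen(Vector3 point, Matrix viewProjection, float screenWidth, float screenHeight)
        {
            // Into homogeneous clip space.
            Vector4 clip = Vector4.Transform(new Vector4(point, 1f), viewProjection);

            // Perspective divide gives normalized device coordinates in [-1, 1].
            float ndcX = clip.X / clip.W;
            float ndcY = clip.Y / clip.W;

            // Map NDC to the screen (10x10 in the question's example).
            float screenX = (ndcX + 1f) * 0.5f * screenWidth;
            float screenY = (1f - ndcY) * 0.5f * screenHeight;   // flip Y: screen origin is top-left
            return new Vector2(screenX, screenY);
        }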

    Read the article

  • OpenGL - Rendering from part of an index and vertex array depending on an element count

    - by user1423893
    I'm currently drawing my shapes as lines by using a VAO and then assigning the dynamic vertices and indices each frame. // Bind VAO glBindVertexArray(m_vao); // Update the vertex buffer with the new data (Copy data into the vertex buffer object) glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW); // Update the index buffer with the new data (Copy data into the index buffer object) glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(unsigned short), indices.data(), GL_DYNAMIC_DRAW); glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0)); // Unbind VAO glBindVertexArray(0); What I would like to do is draw the lines using only part of the data stored in the index and vertex buffer objects. The vertex buffer has its vertices set from an array of defined maximum size: std::array<VertexPosition, maxVertices> m_vertices; The index buffer has its elements set from an array of defined maximum size: std::array<unsigned short, maxIndices> indices = { 0 }; A running total is kept of the number of vertices and indices needed for each draw call numVertices numIndices Can I not specify that the buffer data contain the entire array and only read from part of it when drawing? For example using the vertex buffer object glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW); m_vertices.data() = Entire array is stored numVertices * sizeof(VertexPosition) = Amount of data to read from the entire array Is this not the correct way to approach this? I do not wish to use std::vector if possible.

    Read the article

  • Normal vector of a face loaded from an FBX model during collision?

    - by Corey Ogburn
    I'm loading a simple 6 sided cube from a UV-mapped FBX model and I'm using a BoundingBox to test for collisions. Once I determine there's a collision, I want to use the normal vector of the collided surface to correct the movement of whatever collided with the cube. I suppose this is a two-part question: 1) How can I determine which face of the cube was collided with in a collision? 2) How can I get the normal vector of that surface?
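
    For the second part, if the three vertices of the collided triangle are known, the face normal is just a normalized cross product of two edges. A small XNA-style sketch, assuming counter-clockwise winding:

        Vector3 FaceNormal(Vector3 a, Vector3 b, Vector3 c)
        {
            // Two edges of the face; their cross product is perpendicular to it.
            return Vector3.Normalize(Vector3.Cross(b - a, c - a));
        }

    For an axis-aligned cube tested with a BoundingBox, a common shortcut (an assumption here, not something from the question) is to compare the contact point against the box's Min and Max on each axis and treat the axis with the smallest penetration as the hit face, whose normal is then one of the six unit axes.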

    Read the article

  • How do I pass vertex and color positions to OpenGL shaders?

    - by smoth190
    I've been trying to get this to work for the past two days, telling myself I wouldn't ask for help. I think you can see where that got me... I thought I'd try my hand at a little OpenGL, because DirectX is complex and depressing. I picked OpenGL 3.x, because even with my OpenGL 4 graphics card, all my friends don't have that, and I like to let them use my programs. There aren't really any great tutorials for OpenGL 3, most are just "type this and this will happen--the end". I'm trying to just draw a simple triangle, and so far, all I have is a blank screen with my clear color (when I set the draw type to GL_POINTS I just get a black dot). I have no idea what the problem is, so I'll just slap down the code: Here is the function that creates the triangle: void CEntityRenderable::CreateBuffers() { m_vertices = new Vertex3D[3]; m_vertexCount = 3; m_vertices[0].x = -1.0f; m_vertices[0].y = -1.0f; m_vertices[0].z = -5.0f; m_vertices[0].r = 1.0f; m_vertices[0].g = 0.0f; m_vertices[0].b = 0.0f; m_vertices[0].a = 1.0f; m_vertices[1].x = 1.0f; m_vertices[1].y = -1.0f; m_vertices[1].z = -5.0f; m_vertices[1].r = 1.0f; m_vertices[1].g = 0.0f; m_vertices[1].b = 0.0f; m_vertices[1].a = 1.0f; m_vertices[2].x = 0.0f; m_vertices[2].y = 1.0f; m_vertices[2].z = -5.0f; m_vertices[2].r = 1.0f; m_vertices[2].g = 0.0f; m_vertices[2].b = 0.0f; m_vertices[2].a = 1.0f; //Create the VAO glGenVertexArrays(1, &m_vaoID); //Bind the VAO glBindVertexArray(m_vaoID); //Create a vertex buffer glGenBuffers(1, &m_vboID); //Bind the buffer glBindBuffer(GL_ARRAY_BUFFER, m_vboID); //Set the buffers data glBufferData(GL_ARRAY_BUFFER, sizeof(m_vertices), m_vertices, GL_STATIC_DRAW); //Set its usage glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3D), 0); glVertexAttribPointer(1, 4, GL_FLOAT, GL_TRUE, sizeof(Vertex3D), (void*)(3*sizeof(float))); //Enable glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); //Check for errors if(glGetError() != GL_NO_ERROR) { Error("Failed to create VBO: %s", gluErrorString(glGetError())); } //Unbind... glBindVertexArray(0); } The Vertex3D struct is as such... struct Vertex3D { Vertex3D() : x(0), y(0), z(0), r(0), g(0), b(0), a(1) {} float x, y, z; float r, g, b, a; }; And finally the render function: void CEntityRenderable::RenderEntity() { //Render... glBindVertexArray(m_vaoID); //Use our attribs glDrawArrays(GL_POINTS, 0, m_vertexCount); glBindVertexArray(0); //unbind OnRender(); } (And yes, I am binding and unbinding the shader. That is just in a different place) I think my problem is that I haven't fully wrapped my mind around this whole VertexAttribArray thing (the only thing I like better in DirectX was input layouts D:). This is my vertex shader: #version 330 //Matrices uniform mat4 projectionMatrix; uniform mat4 viewMatrix; uniform mat4 modelMatrix; //In values layout(location = 0) in vec3 position; layout(location = 1) in vec3 color; //Out values out vec3 frag_color; //Main shader void main(void) { //Position in world gl_Position = vec4(position, 1.0); //gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0); //No color changes frag_color = color; } As you can see, I've disable the matrices, because that just makes debugging this thing so much harder. I tried to debug using glslDevil, but my program just crashes right before the shaders are created... so I gave up with that. This is my first shot at OpenGL since the good old days of LWJGL, but that was when I didn't even know what a shader was. Thanks for your help :)

    Read the article

  • Safe Zone Implementation in Asteroids

    - by Moaz
    I would like to implement a safe zone for asteroids, so that when the ship gets destroyed it doesn't respawn unless the spawn point is safe from other asteroids. I tried checking the distance between each asteroid and the ship and, if it is above a threshold, setting a flag on the ship marking the zone as safe, but sometimes it works and sometimes it doesn't:

        for (list<Asteroid>::iterator itr_astroid = asteroids.begin(); itr_astroid != asteroids.end(); )
        {
            if (currentShip.m_state == Ship::Ship_Dead)
            {
                float distance = itr_astroid->getCenter().distance(Vec2f(getWindowWidth()/2, getWindowHeight()/2));
                if (distance > 200)
                {
                    currentShip.m_saveField = true;
                    break;
                }
                else
                {
                    currentShip.m_saveField = false;
                    itr_astroid++;
                }
            }
            else
            {
                itr_astroid++;
            }
        }
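
    A hedged restatement of the idea, written in C# here rather than the question's C++: the spawn point is only safe when every asteroid is far enough away, so any early exit has to trigger on the first nearby asteroid rather than the first distant one.

        bool IsSpawnSafe(IEnumerable<Asteroid> asteroids, Vector2 spawnPoint, float safeRadius)
        {
            foreach (Asteroid asteroid in asteroids)
            {
                if (Vector2.Distance(asteroid.Center, spawnPoint) < safeRadius)
                    return false;   // one nearby asteroid makes the zone unsafe
            }
            return true;            // every asteroid checked, none within the radius
        }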

    Read the article

  • Material usage, one per model or per object?

    - by WSkid
    Is it better (for memory, developer time, and space) to use a single model that is unwrapped and uses a single material, or to break a model down into appropriate bits, each with its own smaller texture/material? Or does what is acceptable depend on the target platform, i.e. PC vs. tablet? An example: say you have a typical house with a tiled roof. Option one: model it, make sure everything is attached, and unwrap the walls/roof so that in your UV template the walls and roof sit side by side in one texture file, say a 512x512 file. Option two: model the roof and walls as separate objects, unwrap them individually, and have two UV templates; you could then have a 256x256 file for each one.

    Read the article

  • C# XNA Make rendered screen a texture2d

    - by redcodefinal
    I am working on a cool little city generator which makes cities in an isometric perspective. However, a problem arose: if the grid size was over a certain limit, it would lag badly. I found the main problem to be in the draw method, so I took the precautionary step of rendering only items that were on screen. This helped with the lag, but not by much. My idea is to render the frame once, take a snapshot, and then display that as a Texture2D on screen. This way I don't have to render 1,000,000 objects every frame, since they don't change anyway. TL;DR - I want to: take a snapshot of an already rendered frame; turn it into a Texture2D; render that to the screen instead of all the objects. Any help appreciated.
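
    In XNA this is exactly what RenderTarget2D is for, and a RenderTarget2D can be drawn directly because it derives from Texture2D. A minimal sketch, assuming the existing per-object rendering is wrapped in a hypothetical DrawCity() call:

        RenderTarget2D cityCache;

        void BuildCityCache()
        {
            cityCache = new RenderTarget2D(GraphicsDevice,
                GraphicsDevice.PresentationParameters.BackBufferWidth,
                GraphicsDevice.PresentationParameters.BackBufferHeight);

            GraphicsDevice.SetRenderTarget(cityCache);   // draw into the texture instead of the screen
            GraphicsDevice.Clear(Color.Transparent);
            DrawCity();                                  // the existing expensive per-object drawing
            GraphicsDevice.SetRenderTarget(null);        // back to the back buffer
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin();
            spriteBatch.Draw(cityCache, Vector2.Zero, Color.White);  // one textured quad, not a million objects
            spriteBatch.End();
            base.Draw(gameTime);
        }

    The cache only needs rebuilding when the city actually changes (or when the camera moves, if the snapshot is taken in screen space), which is the main design trade-off of this approach.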

    Read the article

  • Calculating distance from viewer to object in a shader

    - by Jay
    Good morning, I'm working through creating the spherical billboards technique outlined in this paper. I'm trying to create a shader that calculates the distance from the camera to all objects in the scene and stores the results in a texture. I keep getting either a completely black or white texture. Here are my questions: I assume the position that's automatically sent to the vertex shader from ogre is in object space? The gpu interpolates the output position from the vertex shader when it sends it to the fragment shader. Does it do the same for my depth calculation or do I need to move that calculation to the fragment shader? Is there a way to debug shaders? I have no errors but I'm not sure I'm getting my parameters passed into the shaders correctly. Here's my shader code: void DepthVertexShader( float4 position : POSITION, uniform float4x4 worldViewProjMatrix, uniform float3 eyePosition, out float4 outPosition : POSITION, out float Depth ) { // position is in object space // outPosition is in camera space outPosition = mul( worldViewProjMatrix, position ); // calculate distance from camera to vertex Depth = length( eyePosition - position ); } void DepthFragmentShader( float Depth : TEXCOORD0, uniform float fNear, uniform float fFar, out float4 outColor : COLOR ) { // clamp output using clip planes float fColor = 1.0 - smoothstep( fNear, fFar, Depth ); outColor = float4( fColor, fColor, fColor, 1.0 ); } fNear is the near clip plane for the scene fFar is the far clip plane for the scene

    Read the article

  • Fast lighting with multiple lights

    - by codymanix
    How can I implement fast lighting with multiple lights? I don't want to restrain the player: he can place an unlimited number of possibly overlapping (point) lights into the level. The problem is that the dynamic loops a shader would need to calculate this lighting tend to be very slow. My idea was to compile the shader n times at compile time, where n is the number of lights; if n is known at compile time, the loops can be unrolled automatically. Is it possible to generate n versions of the same shader that differ only in the number of lights? At runtime I could then decide which shader to use for which part of the level.
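
    The CPU-side selection between precompiled variants is the easy half. A hedged C# sketch, assuming the variants have already been built (one effect per light count up to some maximum) and that the level is split into parts with bounds; every name here is hypothetical:

        Effect[] lightingVariants;   // lightingVariants[n] compiled with its light loop unrolled n times

        void DrawPart(LevelPart part, List<PointLight> allLights)
        {
            // Keep only the lights whose radius actually reaches this part of the level.
            var affecting = new List<PointLight>();
            foreach (PointLight light in allLights)
            {
                if (light.Bounds.Intersects(part.Bounds))
                    affecting.Add(light);
            }

            int n = Math.Min(affecting.Count, lightingVariants.Length - 1);
            Effect variant = lightingVariants[n];

            // Upload the n light positions/colors as parameters of 'variant', then draw the part with it.
            // (The parameter names depend on how the shader variants were generated.)
        }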

    Read the article

  • Lerping to a center point while in motion

    - by Fibericon
    I have an enemy that initially flies in a circular motion, while facing away from the center point. This is how I achieve that: position.Y = (float)(Math.Cos(timeAlive * MathHelper.PiOver4) * radius + origin.Y); position.X = (float)(Math.Sin(timeAlive * MathHelper.PiOver4) * radius + origin.X); if (timeAlive < 5) { angle = (float)Math.Atan((0 - position.X) / (0 - position.Y)); if (0 < position.Y) RotationMatrix = Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(-1 * angle); else RotationMatrix = Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(MathHelper.Pi - angle); } That part works just fine. After five seconds of this, I want the enemy to turn inward, facing the center point. However, I've been trying to lerp to that point, since I don't want it to simply jump to the new rotation. Here's my code for trying to do that: else { float newAngle = -1 * (float)Math.Atan((0 - position.X) / (0 - position.Y)); angle = MathHelper.Lerp(angle, newAngle, (float)gameTime.ElapsedGameTime.Milliseconds / 1000); if (0 < position.Y) RotationMatrix = Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(MathHelper.Pi - angle); else RotationMatrix = Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(-1 * angle); } That doesn't work so fine. It seems like it's going to at first, but then it just sort of skips around. How can I achieve what I want here?

    Read the article

  • Maya is lagging in a specific way...?

    - by Aerovistae
    My Maya installation worked perfectly. It is not my computer. Something caused it to stop working overnight, somehow. When I try to drag a vertex or something like that, it moves the vertex, but then I have to click like 3 times somewhere outside the mesh before the actual mesh will catch up and follow the vertex. Until I do that, it just stays as it was, with a floating vertex somewhere inside it or outside it. It makes modeling borderline impossible and completely infuriating. What ought to be happening is what we're all used to-- as I move the vertex, the mesh follows it actively, so I can see what it looks like at every given moment until I release the vertex in its new position. Other weird thing: this only applies to complex meshes, like a couple thousand faces. A simple cube works fine. What gives?? Anybody?

    Read the article
