Search Results

Search found 16473 results on 659 pages for 'game logic'.


  • How do I generate a level randomly?

    - by Charlton Santana
    I am currently hard coding 10 different instances like the code below, but I'd like to create many more. Instead of having the same layout for each new level, I was wondering if there is any way to generate a random X value for each block (this will be how far into the level it is). A level 100,000 pixels wide would be good enough, but if anyone knows a system to make the level go on and on, I'd like to know that too. This is basically how I define a block now (with irrelevant code removed): block = new Block(R.drawable.block, 400, platformheight); block2 = new Block(R.drawable.block, 600, platformheight); block3 = new Block(R.drawable.block, 750, platformheight); The 400 is the X position, which I'd like to place randomly throughout the level; the platformheight variable defines the Y position, which I don't want to change.

    Read the article

  • Where can I find free or buy "next-gen" 3D Assets?

    - by Valmond
    Usually I buy 3D Assets from sites like turbosquid.com or similar. My problem is that I have lately implemented glow, normal maps, specular (and specular power) maps and reflection maps and I can't find any models that use those techniques. So where can I find / buy "next gen" assets (at least models/items with a normal map)? I have checked for similar posts but those I found are about either free only or 2D or 'ordinary' 3D so I hope this is not a duplicate.

    Read the article

  • Correct use of VAOs in OpenGL ES 2 for iOS?

    - by sak
    I'm migrating to OpenGL ES2 for one of my iOS projects, and I'm having trouble to get any geometry to render successfully. Here's where I'm setting up the VAO rendering: void bindVAO(int vertexCount, struct Vertex* vertexData, GLushort* indexData, GLuint* vaoId, GLuint* indexId){ //generate the VAO & bind glGenVertexArraysOES(1, vaoId); glBindVertexArrayOES(*vaoId); GLuint positionBufferId; //generate the VBO & bind glGenBuffers(1, &positionBufferId); glBindBuffer(GL_ARRAY_BUFFER, positionBufferId); //populate the buffer data glBufferData(GL_ARRAY_BUFFER, vertexCount, vertexData, GL_STATIC_DRAW); //size of verte position GLsizei posTypeSize = sizeof(kPositionVertexType); glVertexAttribPointer(kVertexPositionAttributeLocation, kVertexSize, kPositionVertexTypeEnum, GL_FALSE, sizeof(struct Vertex), (void*)offsetof(struct Vertex, position)); glEnableVertexAttribArray(kVertexPositionAttributeLocation); //create & bind index information glGenBuffers(1, indexId); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, *indexId); glBufferData(GL_ELEMENT_ARRAY_BUFFER, vertexCount, indexData, GL_STATIC_DRAW); //restore default state glBindVertexArrayOES(0); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); glBindBuffer(GL_ARRAY_BUFFER, 0); } And here's the rendering step: //bind the frame buffer for drawing glBindFramebuffer(GL_FRAMEBUFFER, outputFrameBuffer); glClear(GL_COLOR_BUFFER_BIT); //use the shader program glUseProgram(program); glClearColor(0.4, 0.5, 0.6, 0.5); float aspect = fabsf(320.0 / 480.0); GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f); GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.0f); GLKMatrix4 mvpMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix); //glUniformMatrix4fv(projectionMatrixUniformLocation, 1, GL_FALSE, projectionMatrix.m); glUniformMatrix4fv(modelViewMatrixUniformLocation, 1, GL_FALSE, mvpMatrix.m); glBindVertexArrayOES(vaoId); glDrawElements(GL_TRIANGLE_FAN, kVertexCount, GL_FLOAT, &indexId); //bind the color buffer glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer); [context presentRenderbuffer:GL_RENDERBUFFER]; The screen is rendering the color passed to glClearColor correctly, but not the shape passed into bindVAO. Is my VAO being built correctly? Thanks!
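
    A hedged sketch of the two calls that most often trip this up, assuming the Vertex struct and vertexCount from the question (the indexCount name here is illustrative, since the original reuses vertexCount for the index buffer): glBufferData takes a size in bytes rather than an element count, and glDrawElements takes the index type plus a byte offset into the bound element array buffer, not a pointer to the buffer handle.

      // Sketch only; not verified against the rest of the project.
      glBufferData(GL_ARRAY_BUFFER,
                   vertexCount * sizeof(struct Vertex), vertexData, GL_STATIC_DRAW);
      glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                   indexCount * sizeof(GLushort), indexData, GL_STATIC_DRAW);

      // At draw time, with the VAO bound, the type must match the GLushort
      // indices and the final argument is a byte offset into that buffer.
      glBindVertexArrayOES(vaoId);
      glDrawElements(GL_TRIANGLE_FAN, indexCount, GL_UNSIGNED_SHORT, (void*)0);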

    Read the article

  • Hardware instancing for voxel engine

    - by Menno Gouw
    I just did the tutorial on Hardware Instancing from this source: http://www.float4x4.net/index.php/2011/07/hardware-instancing-for-pc-in-xna-4-with-textures/. Somewhere between 900,000 and 1,000,000 draw calls for the cube I get the error "XNA Framework HiDef profile supports a maximum VertexBuffer size of 67108863", while it still runs smoothly at 900k. That is slightly less than 100x100x100, which is exactly a million. Now, I have seen voxel engines with very "tiny" voxels; you easily get to 1,000,000 cubes in view with rough terrain and a decent far plane. Obviously I can optimize a lot in the geometry buffer method, like rendering only the visible faces of a cube or using larger faces covering multiple cubes if the area is flat. But is a vertex buffer of roughly 67 MB the maximum I can work with, or can I create multiple?

    Read the article

  • Isometric Screen View to World View

    - by Sleepy Rhino
    I am having trouble working out the math to transform screen coordinates to grid coordinates. The code below is how far I have got, but it is totally wrong; any help or resources to fix this issue would be great, as I've had a complete mental block with this for some reason. private Point ScreenToIso(int mouseX, int mouseY) { int offsetX = WorldBuilder.STARTX; int offsetY = WorldBuilder.STARTY; Vector2 startV = new Vector2(offsetX, offsetY); int mapX = offsetX - mouseX; int mapY = offsetY - mouseY + (WorldBuilder.tileHeight / 2); mapY = -1 * (mapY / WorldBuilder.tileHeight); mapX = (mapX / WorldBuilder.tileHeight) + mapY; return new Point(mapX, mapY); }
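
    For reference, a hedged sketch of the usual diamond-map inverse, assuming 2:1 tiles of dimensions tileWidth x tileHeight and mouse coordinates already made relative to the map origin (sx = mouseX - offsetX, sy = mouseY - offsetY); axis conventions vary between engines, so the signs may need flipping:

      mapX = (sx / (tileWidth / 2) + sy / (tileHeight / 2)) / 2
      mapY = (sy / (tileHeight / 2) - sx / (tileWidth / 2)) / 2

    This is just the inversion of the forward mapping screenX = (mapX - mapY) * tileWidth / 2, screenY = (mapX + mapY) * tileHeight / 2.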

    Read the article

  • How can I downsample a texture using FBOs?

    - by snape
    I am rendering a scene in OpenGL to an FBO render target whose size is 8 times the size of the original screen. Now I want to downsample the texture generated by the FBO to the size of the screen so as to achieve spatial anti-aliasing. How do I achieve the downsampling? Please provide implementation details. Note: if there is a better way of doing anti-aliasing with FBOs, please mention that too. I am trying to remove the aliasing in the image attached below.
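
    A hedged sketch of one common approach on desktop GL 3.0+ (the bigFbo, bigWidth and screenWidth names are illustrative): bind the supersampled FBO for reading and the default framebuffer for drawing, then let glBlitFramebuffer filter it down. Note that a single GL_LINEAR blit only samples a 2x2 footprint, so for an 8x supersample it is common to blit down in halving steps (or generate mipmaps and draw a textured quad) so that every source texel is averaged.

      // Sketch: downsample the supersampled colour buffer to the screen.
      glBindFramebuffer(GL_READ_FRAMEBUFFER, bigFbo);
      glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
      glBlitFramebuffer(0, 0, bigWidth, bigHeight,        // source rectangle
                        0, 0, screenWidth, screenHeight,  // destination rectangle
                        GL_COLOR_BUFFER_BIT, GL_LINEAR);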

    Read the article

  • Understanding how OpenGL blending works

    - by yuumei
    I am attempting to understand how OpenGL (ES) blending works. I am finding it difficult to understand the documentation and how the results of glBlendFunc and glBlendEquation effect the final pixel that is written. Do the source and destination out of glBlendFunc get added together with GL_FUNC_ADD by default? This seems wrong because "basic" blending of GL_ONE, GL_ONE would output 2,2,2,2 then (Source giving 1,1,1,1 and dest giving 1,1,1,1). I have written the following pseudo-code, what have I got wrong? struct colour { float r, g, b, a; }; colour blend_factor( GLenum factor, colour source, colour destination, colour blend_colour ) { colour colour_factor; float i = min( source.a, 1 - destination.a ); // From http://www.khronos.org/opengles/sdk/docs/man/xhtml/glBlendFunc.xml switch( factor ) { case GL_ZERO: colour_factor = { 0, 0, 0, 0 }; break; case GL_ONE: colour_factor = { 1, 1, 1, 1 }; break; case GL_SRC_COLOR: colour_factor = source; break; case GL_ONE_MINUS_SRC_COLOR: colour_factor = { 1 - source.r, 1 - source.g, 1 - source.b, 1 - source.a }; break; // ... } return colour_factor; } colour blend( colour & source, colour destination, GLenum source_factor, // from glBlendFunc GLenum destination_factor, // from glBlendFunc colour blend_colour, // from glBlendColor GLenum blend_equation // from glBlendEquation ) { colour source_colour = blend_factor( source_factor, source, destination, blend_colour ); colour destination_colour = blend_factor( destination_factor, source, destination, blend_colour ); colour output; // From http://www.khronos.org/opengles/sdk/docs/man/xhtml/glBlendEquation.xml switch( blend_equation ) { case GL_FUNC_ADD: output = add( source_colour, destination_colour ); case GL_FUNC_SUBTRACT: output = sub( source_colour, destination_colour ); case GL_FUNC_REVERSE_SUBTRACT: output = sub( destination_colour, source_colour ); } return output; } void do_pixel() { colour final_colour; // Blending if( enable_blending ) { final_colour = blend( current_colour_output, framebuffer[ pixel ], ... ); } else { final_colour = current_colour_output; } } Thanks!
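
    For reference (independent of the pseudo-code above): with the default glBlendEquation(GL_FUNC_ADD), the value written is, component-wise,

      C_final = C_src * F_src + C_dst * F_dst

    where F_src and F_dst are the factors selected by glBlendFunc, and the result is clamped to [0, 1] for fixed-point framebuffers. So GL_ONE, GL_ONE really does add source and destination (2,2,2,2 before clamping), and the step the sketch appears to drop is multiplying each factor by its own colour before the blend equation is applied.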

    Read the article

  • DrawIndexedPrimitives overdraws data in previous buffer if called in loop

    - by Daniel Excinsky
    I doubled the question from stackoverflow here, and will delete the opposite of a question that gave me the answer. I have the Draw method in one of my renderers, that loops through the dictionary and gets precollected and preinitialized buffers. When dictionary has only one element, everything is just fine. But with more elements what I get on the screen is only the data from the last buffer (I suppose, not sure) My Draw method: public void Draw(GameTime gameTime) { if (!_areStaticEffectsSet) { // blockEffect.Parameters["TextureAtlas"].SetValue(textureAtlas); blockEffect.Parameters["HorizonColor"].SetValue(World.HORIZONCOLOR); blockEffect.Parameters["NightColor"].SetValue(World.NIGHTCOLOR); blockEffect.Parameters["MorningTint"].SetValue(World.MORNINGTINT); blockEffect.Parameters["EveningTint"].SetValue(World.EVENINGTINT); blockEffect.Parameters["SunColor"].SetValue(World.SUNCOLOR); _areStaticEffectsSet = true; } blockEffect.Parameters["World"].SetValue(Matrix.Identity); blockEffect.Parameters["View"].SetValue(_player.CameraView); blockEffect.Parameters["Projection"].SetValue(_player.CameraProjection); blockEffect.Parameters["CameraPosition"].SetValue(_player.CameraPosition); blockEffect.Parameters["timeOfDay"].SetValue(_world.TimeOfDay); var viewFrustum = new BoundingFrustum(_player.CameraView * _player.CameraProjection); _graphicsDevice.BlendState = BlendState.Opaque; _graphicsDevice.DepthStencilState = DepthStencilState.Default; foreach (KeyValuePair<int, Texture2D> textureAtlas in textureAtlases) { blockEffect.Parameters["TextureAtlas"].SetValue(textureAtlas.Value); foreach (EffectPass pass in blockEffect.CurrentTechnique.Passes) { pass.Apply(); //TODO: ?????????? ??????????????? ?? ?????? ?? ??????? ??????? VertexBuffer ? IndexBuffer foreach (Chunk chunk in _world.Chunks.Values) { if (chunk == null || chunk.IsDisposed) { continue; } if (chunk.BoundingBox.Intersects(viewFrustum) && chunk.GetBlockIndexBuffer(textureAtlas.Key) != null) { lock (chunk) { if (chunk.GetBlockIndexBuffer(textureAtlas.Key).IndexCount > 0) { VertexBuffer vertexBuffer = chunk.GetBlockVertexBuffer(textureAtlas.Key); IndexBuffer indexBuffer = chunk.GetBlockIndexBuffer(textureAtlas.Key); //if (chunk.DrawIndex == new Vector3i(0, 0, 0)) //{ //if (textureAtlas.Key == -1) //{ //var varray = new [] //{ //new VertexPositionTextureLight(new Vector3(0,68,0), new Vector2(0,1),1,new Vector3(0,0,0), new Vector3(1,1,1)), //new VertexPositionTextureLight(new Vector3(0,68,1), new Vector2(0,1),1,new Vector3(0,0,0), new Vector3(1,1,1)), //new VertexPositionTextureLight(new Vector3(1,68,0), new Vector2(0,1),1,new Vector3(0,0,0), new Vector3(1,1,1)) //}; //var iarray = new short[] {0, 1, 2}; //vertexBuffer = new VertexBuffer(_graphicsDevice, typeof(VertexPositionTextureLight), varray.Length, BufferUsage.WriteOnly); //indexBuffer = new IndexBuffer(_graphicsDevice, IndexElementSize.SixteenBits, iarray.Length, BufferUsage.WriteOnly); //vertexBuffer.SetData(varray); //indexBuffer.SetData(iarray); } } _graphicsDevice.SetVertexBuffer(vertexBuffer); _graphicsDevice.Indices = indexBuffer; _graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertexBuffer.VertexCount, 0, indexBuffer.IndexCount / 3); } } } } } } } Noteworthy things about the code: XNA version is 4.0. I've commented the debugging code in the loop, but left it for it may bring some insight. I try not only to change vertices/indices in the loop, but textureAtlas also. 
Code in the shader relating to textureAtlas: Texture TextureAtlas; sampler TextureAtlasSampler = sampler_state { texture = <TextureAtlas>; magfilter = POINT; minfilter = POINT; mipfilter = POINT; AddressU = WRAP; AddressV = WRAP; }; struct VSInput { float4 Position : POSITION0; float2 TexCoords1 : TEXCOORD0; float SunLight : COLOR0; float3 LocalLight : COLOR1; float3 Normal : NORMAL0; }; VertexPositionTextureLight is my own implementation of IVertexType. So, does anybody know about this problem, or see what is wrong in my code (that's far more likely)?

    Read the article

  • Map and fill texture using PBO (OpenGL 3.3)

    - by NtscCobalt
    I'm learning OpenGL 3.3 trying to do the following (as it is done in D3D)... Create Texture of Width, Height, Pixel Format Map texture memory Loop write pixels Unmap texture memory Set Texture Render Right now though it renders as if the entire texture is black. I can't find a reliable source for information on how to do this though. Almost every tutorial I've found just uses glTexSubImage2D and passes a pointer to memory. Here is basically what my code does... (In this case it is generating an 1-byte Alpha Only texture but it is rendering it as the red channel for debugging) GLuint pixelBufferID; glGenBuffers(1, &pixelBufferID); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID); glBufferData(GL_PIXEL_UNPACK_BUFFER, 512 * 512 * 1, nullptr, GL_STREAM_DRAW); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); GLuint textureID; glGenTextures(1, &textureID); glBindTexture(GL_TEXTURE_2D, textureID); glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0, GL_RED, GL_UNSIGNED_BYTE, nullptr); glBindTexture(GL_TEXTURE_2D, 0); glBindTexture(GL_TEXTURE_2D, textureID); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID); void *Memory = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY); // Memory copied here, I know this is valid because it is the same loop as in my working D3D version glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); And then here is the render loop. // This chunk left in for completeness glUseProgram(glProgramId); glBindVertexArray(glVertexArrayId); glBindBuffer(GL_ARRAY_BUFFER, glVertexBufferId); glEnableVertexAttribArray(0); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 20, 0); glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 20, 12); GLuint transformLocationID = glGetUniformLocation(3, 'transform'); glUniformMatrix4fv(transformLocationID , 1, true, somematrix) // Not sure if this is all I need to do glBindTexture(GL_TEXTURE_2D, pTex->glTextureId); GLuint textureLocationID = glGetUniformLocation(glProgramId, "texture"); glUniform1i(textureLocationID, 0); glDrawArrays(GL_TRIANGLES, Offset*3, Triangles*3); Vertex Shader #version 330 core in vec3 Position; in vec2 TexCoords; out vec2 TexOut; uniform mat4 transform; void main() { TexOut = TexCoords; gl_Position = vec4(Position, 1.0) * transform; } Pixel Shader #version 330 core uniform sampler2D texture; in vec2 TexCoords; out vec4 fragColor; void main() { // Output color fragColor.r = texture2D(texture, TexCoords).r; fragColor.g = 0.0f; fragColor.b = 0.0f; fragColor.a = 1.0; }
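
    One step that appears to be missing, offered as a hedged guess since the copy loop isn't shown: unmapping the PBO only finalises the buffer's contents; nothing is transferred to the texture until a glTexSubImage2D (or glTexImage2D) call is issued while the PBO is still bound, with the data argument interpreted as a byte offset into the buffer.

      // Sketch: after glUnmapBuffer, still with pixelBufferID bound to
      // GL_PIXEL_UNPACK_BUFFER and textureID bound to GL_TEXTURE_2D.
      glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                      GL_RED, GL_UNSIGNED_BYTE, (void*)0); // offset 0 into the PBO
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
      glBindTexture(GL_TEXTURE_2D, 0);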

    Read the article

  • Suggestions needed on an architecture for a multi-client, customisable web application

    - by ValidfroM
    Our product is a web-based course management system. We have 10+ clients and in future we may get more. (ASP.NET, SQL Server) Currently, if one of our customers needs extra functionality or customised business logic, we change the db schema and code to meet the needs (we only have one code base branch and one database schema). To make sure the change won't affect the other clients' routes, we use a client flag, defined in a web config file, so that those extra fields and business logic only apply to that particular customer's system: if(ClientId == 'ABC') { //DO ABC Stuff } else { //Normal Route } One of our senior colleagues said that, in this way, a small company like us can save the resources it would take to support multiple code bases. But what I feel is that this strategy makes our code and database even harder to maintain. Has anyone here been in a similar situation? How do you handle it?

    Read the article

  • Implementing RLE into a tilemap, or how to create a large 3D array?

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world, but the 3D side comes in when moving down into caves and whatnot. This is not memory efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that it seems my tiles cannot occupy the same space as a tile on the same z level. My current structure means that each block has its own z variable. This is what it used to look like: map.blockData[x][y][z] = new Block(); however now it works like this: map.blockData[x][y] = new Block(z); I'm not sure why, but if I decide to use the same space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across different languages. Edit: As Will posted, RLE sounds like the best method for achieving a fast 3D array. However, I'm struggling to understand how I would even start to implement it. Would I create a 4D array, the 4th dimension being something which controls how many to skip? Or would the x-axis simply change altogether and have large gaps in between - for example, [5][y][z] would skip 5 tiles? Is there something really obvious here which I am missing? The number of z levels I'm trying to have is around 66, and it would be preferable to have up to or more than 1000 in x and y.
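
    To make the RLE idea concrete, a hedged, language-agnostic sketch (written here in C++, with illustrative names): instead of a full z column per (x, y) cell, each column stores runs of identical block ids, so 66 z levels of mostly-identical blocks usually collapse to a handful of entries.

      #include <cstdint>
      #include <vector>

      // One run of 'count' consecutive, identical blocks along the z axis.
      struct Run { uint8_t blockId; uint8_t count; };

      // A run-length-encoded column for a single (x, y) cell.
      struct Column {
          std::vector<Run> runs; // runs stacked from z = 0 upward

          uint8_t blockAt(int z) const {
              int base = 0;
              for (const Run& r : runs) {
                  if (z < base + r.count) return r.blockId;
                  base += r.count;
              }
              return 0; // illustrative "air" id above the last run
          }
      };

      // The map then becomes a 2D grid of columns: columns[x][y].blockAt(z).

    Writing a block means splitting or merging runs in that column, which stays cheap for columns this short.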

    Read the article

  • OpenGL problem with FBO integer texture and color attachment

    - by Grieverheart
    In my simple renderer, I have 2 FBOs one that contains diffuse, normals, instance ID and depth in that order and one that I use store the ssao result. The textures I use for the first FBO are RGB8, RGBA16F, R32I and GL_DEPTH_COMPONENT32F for the depth. For the second FBO I use an R16F texture. My rendering process is to first render to everything I mentioned in the first FBO, then bind depth and normals textures for reading for the ssao pass and write to the second FBO. After that I bind the second FBO's texture for reading in my blur shader and bind the first FBO for writing. What I intend to do is to write the blurred ssao value to the alpha component of the Normals texture. Here are where the problems start. First of all, I use shading language 3.3, which my graphics card does support. I manage ouputs in my shaders using layout(location = #). Now, the normals texture should be bound to color attachment 1, but when I use 1, it seems to write to my diffuse texture which should be in color attachment 0. When I instead use layout(location = 0), it gets correctly written to my normals texture. Besides this, my instance ID texture also gets resets after running the blur shader which is weird because if I use a float texture and write to it instanceID / nInstances, the texture doesn't get reset after the blur shader has ran. Here is how I prepare my first FBO: bool CGBuffer::Init(unsigned int WindowWidth, unsigned int WindowHeight){ //Create FBO glGenFramebuffers(1, &m_fbo); glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo); //Create gbuffer and Depth Buffer Textures glGenTextures(GBUFF_NUM_TEXTURES, &m_textures[0]); glGenTextures(1, &m_depthTexture); //prepare gbuffer for(unsigned int i = 0; i < GBUFF_NUM_TEXTURES; i++){ glBindTexture(GL_TEXTURE_2D, m_textures[i]); if(i == GBUFF_TEXTURE_TYPE_NORMAL) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_DIFFUSE) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_ID) glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, WindowWidth, WindowHeight, 0, GL_RED_INTEGER, GL_INT, NULL); else{ std::cout << "Error in FBO initialization" << std::endl; return false; } glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); } //prepare depth buffer glBindTexture(GL_TEXTURE_2D, m_depthTexture); glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); GLenum DrawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2}; glDrawBuffers(GBUFF_NUM_TEXTURES, DrawBuffers); GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER); if(Status != GL_FRAMEBUFFER_COMPLETE){ std::cout << "FB error, status 0x" << std::hex << Status << 
std::endl; return false; } //Restore default framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); return true; } where I use an enum defined as, enum GBUFF_TEXTURE_TYPE{ GBUFF_TEXTURE_TYPE_DIFFUSE, GBUFF_TEXTURE_TYPE_NORMAL, GBUFF_TEXTURE_TYPE_ID, GBUFF_NUM_TEXTURES }; Am I missing some kind of restriction? Does the color attachment of the FBO's textures somehow gets reset i.e. I'm using a re-size function which re-sizes the textures of the FBO but should I perhaps call glFramebufferTexture2D again too? EDIT: Here is the shader in question: #version 330 core uniform sampler2D aoSampler; uniform vec2 TEXEL_SIZE; // x = 1/res x, y = 1/res y uniform bool use_blur; noperspective in vec2 TexCoord; layout(location = 0) out vec4 out_AO; void main(void){ if(use_blur){ float result = 0.0; for(int i = -1; i < 2; i++){ for(int j = -1; j < 2; j++){ vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j); result += texture(aoSampler, TexCoord + offset).r; // -0.004 because the texture seems to be a bit displaced } } out_AO = vec4(vec3(0.0), result / 9); } else out_AO = vec4(vec3(0.0), texture(aoSampler, TexCoord).r); }

    Read the article

  • How do I draw a single Triangle with XNA and fill it with a Texture?

    - by Deukalion
    I'm trying to wrap my head around: http://msdn.microsoft.com/en-us/library/bb196409.aspx I'm trying to create a method in XNA that renders a single Triangle, then later make a method that takes a list of Triangles and renders them also. But it isn't working. I'm not understanding what all the things does and there's not enough information. My methods: // Triangle is a struct with A, B, C (didn't include) A, B, C = Vector3 public static void Render(GraphicsDevice device, List<Triangle> triangles, Texture2D texture) { foreach (Triangle triangle in triangles) { Render(device, triangle, texture); } } public static void Render(GraphicsDevice device, Triangle triangle, Texture2D texture) { BasicEffect _effect = new BasicEffect(device); _effect.Texture = texture; _effect.VertexColorEnabled = true; VertexPositionColor[] _vertices = new VertexPositionColor[3]; _vertices[0].Position = triangle.A; _vertices[1].Position = triangle.B; _vertices[2].Position = triangle.B; foreach (var pass in _effect.CurrentTechnique.Passes) { pass.Apply(); device.DrawUserIndexedPrimitives<VertexPositionColor> ( PrimitiveType.TriangleList, _vertices, 0, _vertices.Length, new int[] { 0, 1, 2 }, // example has something similiar, no idea what this is 0, 3 // 3 = gives me an error, 1 = works but no results ); } }

    Read the article

  • How To Smoothly Animate From One Camera Position To Another

    - by www.Sillitoy.com
    The question is basically self-explanatory. I have a scene with many cameras and I'd like to smoothly switch from one to another. I am not looking for a cross-fade effect, but rather for the camera to move and rotate the view in order to reach the next camera's point of view, and so on. To this end I have tried the following code: firstCamera.transform.position.x = Mathf.Lerp(firstCamera.transform.position.x, nextCamera.transform.position.x,Time.deltaTime*smooth); firstCamera.transform.position.y = Mathf.Lerp(firstCamera.transform.position.y, nextCamera.transform.position.y,Time.deltaTime*smooth); firstCamera.transform.position.z = Mathf.Lerp(firstCamera.transform.position.z, nextCamera.transform.position.z,Time.deltaTime*smooth); firstCamera.transform.rotation.x = Mathf.Lerp(firstCamera.transform.rotation.x, nextCamera.transform.rotation.x,Time.deltaTime*smooth); firstCamera.transform.rotation.z = Mathf.Lerp(firstCamera.transform.rotation.z, nextCamera.transform.rotation.z,Time.deltaTime*smooth); firstCamera.transform.rotation.y = Mathf.Lerp(firstCamera.transform.rotation.y, nextCamera.transform.rotation.y,Time.deltaTime*smooth); But the result is actually not that good.
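
    One likely reason the rotation part misbehaves, offered as a hedged note: transform.rotation is a quaternion, and interpolating its x, y and z components independently (leaving w untouched) does not produce a valid intermediate rotation. The usual remedy is to interpolate the whole rotation spherically,

      slerp(q0, q1, t) = ( sin((1 - t) * theta) * q0 + sin(t * theta) * q1 ) / sin(theta),   where cos(theta) = q0 . q1

    with the position interpolated as a whole vector, and t advanced explicitly from 0 to 1 over the transition rather than recomputed from Time.deltaTime each frame.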

    Read the article

  • OpenGL fovx question

    - by Nick
    To boil my question down to the simplest form, I fear I am oversimplifying how mat4 perspective works. I am using mat4.perspective(45, 2, 0.1, 1000.0) (the binding is WebGL, fwiw). With a fovy of 45 and an aspect ratio of 2, I expect to have a fovx of 90. Thus, if I position my camera at (0, 0, 50), looking towards the origin, I expect to see a cube positioned at (50, 0, 0) (45 degrees) right at the very periphery of my screen, half on, half off. Instead, a cube at (50, 0, 0) is totally off screen, and my actual periphery occurs at about (41.1, 0, 0). What am I missing here? Thanks, nick
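
    A short worked note on why this happens: the horizontal field of view scales with the tangent of the half-angle, not with the angle itself, so

      tan(fovx / 2) = aspect * tan(fovy / 2)

    With fovy = 45 and aspect = 2 this gives fovx = 2 * atan(2 * tan(22.5 deg)) which is roughly 79.3 degrees, and at a distance of 50 the visible half-width is 50 * 2 * tan(22.5 deg), roughly 41.4 - consistent with the observed periphery near (41.1, 0, 0).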

    Read the article

  • Speed up lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use 3 G-Buffers for storing position (R32F), normal (G16R16F) and albedo (ARGB8). I use a sphere map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value. First, I want to avoid the per-pixel matrix multiplication for calculating the position: is there another way to store and calculate position in the G-Buffer without the need for a matrix multiplication? Second, I want to store the normal in view space: all lighting in my engine is in world space, and I want to do the lighting in view space to speed up my lighting pass. I want an optimized lighting pass for my deferred engine.
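
    On the first point, a hedged note describing one common alternative (not necessarily what this engine needs): store a linear view-space depth instead of a full position, and have the full-screen lighting pass interpolate a ray from the camera through the far-plane corners. If F is the interpolated view-space position on the far plane and d = z_view / z_far is the stored linear depth in [0, 1], the view-space position is recovered with a single multiply,

      P_view = F * d

    which drops both the R32F position buffer and the per-pixel inverse view-projection multiply.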

    Read the article

  • Everything turning black when pitching down

    - by Gordon
    Just a quick question about something that's occurring in my world. Every time I pitch my camera downward, everything starts turning black, and if I pitch upward, everything sort of intensifies. I'm multiplying my normals by the normal matrix in the shader, and I'm multiplying my light's direction by the model-view matrix. If I leave the normal and light direction in world space, everything ends up fine. I thought putting them both in view space would not cause those weird things to happen?
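
    A hedged note on one common cause of this symptom (a guess, since the shader isn't shown): a direction multiplied by the full model-view matrix also picks up the matrix's translation, so light directions are normally transformed with only the upper 3x3 part (or as a vec4 with w = 0). If the camera's translation is being folded into the view-space light direction, the lighting will change as the camera pitches, in a way much like the one described.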

    Read the article

  • What exactly can shaders be used for?

    - by Bane
    I'm not really a 3D person, and I've only used shaders a little in some Three.js examples, and so far I've got an impression that they are only being used for the graphical part of the equation. Although, the (quite cryptic) Wikipedia article and some other sources lead me to believe that they can be used for more than just graphical effects, ie, to program the GPU (Wikipedia). So, the GPU is still a processor, right? With a larger and a different instruction set for easier and faster vector manipulation, but still a processor. Can I use shaders to make regular programs (provided I've got access to the video memory, which is probable)? Edit: regular programs == "Applications", ie create windows/console programs, or at least have some way of drawing things on the screen, maybe even taking user input.

    Read the article

  • Why is this Rails association loading individually after an eager load?

    - by codeman73
    I'm trying to avoid the N+1 queries problem with eager loading, but it's not working. The associated models are still being loaded individually. Here are the relevant ActiveRecords and their relationships: class Player < ActiveRecord::Base has_one :tableau end Class Tableau < ActiveRecord::Base belongs_to :player has_many :tableau_cards has_many :deck_cards, :through => :tableau_cards end Class TableauCard < ActiveRecord::Base belongs_to :tableau belongs_to :deck_card, :include => :card end class DeckCard < ActiveRecord::Base belongs_to :card has_many :tableaus, :through => :tableau_cards end class Card < ActiveRecord::Base has_many :deck_cards end and the query I'm using is inside this method of Player: def tableau_contains(card_id) self.tableau.tableau_cards = TableauCard.find :all, :include => [ {:deck_card => (:card)}], :conditions => ['tableau_cards.tableau_id = ?', self.tableau.id] contains = false for tableau_card in self.tableau.tableau_cards # my logic here, looking at attributes of the Card model, with # tableau_card.deck_card.card; # individual loads of related Card models related to tableau_card are done here end return contains end Does it have to do with scope? This tableau_contains method is down a few method calls in a larger loop, where I originally tried doing the eager loading because there are several places where these same objects are looped through and examined. Then I eventually tried the code as it is above, with the load just before the loop, and I'm still seeing the individual SELECT queries for Card inside the tableau_cards loop in the log. I can see the eager-loading query with the IN clause just before the tableau_cards loop as well. EDIT: additional info below with the larger, outer loop Here's the larger loop. It is inside an observer on after_save def after_save(pa) @game = Game.find(turn.game_id, :include => :goals) @game.players = Player.find :all, :include => [ {:tableau => (:tableau_cards)}, :player_goals ], :conditions => ['players.game_id =?', @game.id] for player in @game.players player.tableau.tableau_cards = TableauCard.find :all, :include => [ {:deck_card => (:card)}], :conditions => ['tableau_cards.tableau_id = ?', player.tableau.id] if(player.tableau_contains(card)) ... end end end

    Read the article

  • Rendering oily/polluted water?

    - by Fraser
    Do any shader wizards out there have an idea of how to achieve an oily/polluted water effect, similar to this: Ideally, the water would not be uniformly oily; instead, the oil could be generated from some source (such as a polluting drain from a chemical plant) and then diffuse throughout the water body. My thought for this part would be to keep an "oil map" as a 2D texture that determines the density of oil at each point on the water surface. It would diffuse and move naturally with the water velocity at that point (I have a wave-particle simulation for dynamic waves, and am already doing something similar for foam on the water surface). However, I'm not sure how physically correct that would be, since oil might not move at the same velocity as the water. And I have no idea how to make all those trippy colors :-). Thoughts?

    Read the article

  • XNA Transparency depending on drawing order?

    - by DarthRoman
    I am drawing two 3D objects; both of them can fade from opaque to transparent independently, and they can intersect each other (so you cannot say when one of them is in front of the other). Look at the image for a better understanding (one of the objects is a terrain and the other one an area): Now, if I apply transparency to both of them and draw the terrain before the area, the terrain is not transparent with respect to the area, but the area is: And finally, if I draw the area before the terrain, then the area is not transparent with respect to the terrain: QUESTION: How can I make all the objects transparent to the rest of the objects without depending on the drawing order?

    Read the article

  • Setting the values of a struct array from JS to GLSL

    - by mikidelux
    I've been trying to make a structure that will contain all the lights of my WebGL app, and I'm having trouble setting its values from JS. The structure is as follows: struct Light { vec4 position; vec4 ambient; vec4 diffuse; vec4 specular; vec3 spotDirection; float spotCutOff; float constantAttenuation; float linearAttenuation; float quadraticAttenuation; float spotExponent; float spotLightCosCutOff; }; uniform Light lights[numLights]; After testing LOTS of things I made it work, but I'm not happy with the code I wrote: program.uniform.lights = []; program.uniform.lights.push({ position: "", diffuse: "", specular: "", ambient: "", spotDirection: "", spotCutOff: "", constantAttenuation: "", linearAttenuation: "", quadraticAttenuation: "", spotExponent: "", spotLightCosCutOff: "" }); program.uniform.lights[0].position = gl.getUniformLocation(program, "lights[0].position"); program.uniform.lights[0].diffuse = gl.getUniformLocation(program, "lights[0].diffuse"); program.uniform.lights[0].specular = gl.getUniformLocation(program, "lights[0].specular"); program.uniform.lights[0].ambient = gl.getUniformLocation(program, "lights[0].ambient"); ... and so on. I'm sorry for making you look at this code, I know it's horrible, but I can't find a better way. Is there a standard or recommended way of doing this properly? Can anyone enlighten me?

    Read the article

  • Android Loading Screen: How do I use a stack to load elements?

    - by tom_mai78101
    I have some problems figuring out what value I should put in the function: int value_needed_to_figure_out = X; ProgressBar.incrementProgressBy(value_needed_to_figure_out); I've been researching loading screens and how to use them. Some examples I've seen implement Thread.sleep() in a Handler.post(new Runnable()) function. I got most of the concept of using the Handler to update the ProgressBar while pretending to do some heavy crunching work, so I kept looking. I have read this thread here: How do I load chunks of data from an asset manager during a loading screen? It said that I can try using a stack of the elements it needs to load, and adding a size counter as I add elements to the stack. What does that mean? This is the part where I'm totally stumped. If anyone could provide some hints, I would gladly appreciate it. Thanks in advance.

    Read the article

  • How to create an extensible rope in Box2D?

    - by Thomas
    Let's say I'm trying to create a ninja lowering himself down a rope, or pulling himself back up, all whilst he might be swinging from side to side or hit by objects. Basically like http://ninja.frozenfractal.com/ but with Box2D instead of hacky JavaScript. Ideally I would like to use a rope joint in Box2D that allows me to change the length after construction. The standard Box2D RopeJoint doesn't offer that functionality. I've considered a PulleyJoint, connecting the other end of the "pulley" to an invisible kinematic body that I can control to change the length, but PulleyJoint is more like a rod than a rope: it constrains maximum length, but unlike RopeJoint it constrains the minimum as well. Re-creating a RopeJoint every frame using a new length is rather inefficient, and I'm not even sure it would work properly in the simulation. I could create a "chain" of bodies connected by RotationJoints but that is also less efficient, and less robust. I also wouldn't be able to change the length arbitrarily, but only by adding and removing a whole number of links, and it's not obvious how I would connect the remainder without violating existing joints. This sounds like something that should be straightforward to do. Am I overlooking something?

    Read the article

  • Moving a body in a specific direction using XNA with Farseer Physics

    - by Code Assasssin
    I have a custom polygon attached to a body, which looks like this: What I am trying to accomplish is getting the body to move according to wherever the tip of the body is. So far this is what I've tried: if (ks.IsKeyDown(Keys.Up)) { body.ApplyForce(new Vector2(0, -20),body.GetLocalPoint(new Vector2(0,0))); } if (ks.IsKeyDown(Keys.Left)) { body.ApplyTorque(-500); } if (ks.IsKeyDown(Keys.Right)) { body.ApplyTorque(500); } The body rotates fine - but when I try making the body accelerate according to the tip of the body - assuming I have specified the tip correctly (I am pretty sure I haven't) - it just spins around, as if I had applied torque to it. Can anyone point me in the right direction on how to fix this problem?

    Read the article
