Search Results

Search found 19338 results on 774 pages for 'game loop'.

Page 306/774

  • Why does setting a geometry shader cause my sprites to vanish?

    - by ChaosDev
    My application has multiple screens with different tasks. Once I set a geometry shader on the device context for my custom terrain, it works and I get the desired results. But then when I get back to the main menu, all sprites and text disappear. These sprites don't disappear when I use only pixel and vertex shaders. The sprites are being drawn through D3D11, of course, with specified view and projection matrices as well as an input layout, vertex shader, and pixel shader. I'm trying DeviceContext->ClearState() but it does not help. Any ideas?

        void gGeometry::DrawIndexedWithCustomEffect(gVertexShader* vs, gPixelShader* ps, gGeometryShader* gs = nullptr)
        {
            unsigned int offset = 0;
            auto context = mp_D3D->mp_Context;
            //set topology
            context->IASetPrimitiveTopology(m_Topology);
            //set input layout
            context->IASetInputLayout(mp_inputLayout);
            //set vertex and index buffers
            context->IASetVertexBuffers(0, 1, &mp_VertexBuffer->mp_Buffer, &m_VertexStride, &offset);
            context->IASetIndexBuffer(mp_IndexBuffer->mp_Buffer, mp_IndexBuffer->m_DXGIFormat, 0);
            //send constant buffers to shaders
            context->VSSetConstantBuffers(0, vs->m_CBufferCount, vs->m_CRawBuffers.data());
            context->PSSetConstantBuffers(0, ps->m_CBufferCount, ps->m_CRawBuffers.data());
            if(gs != nullptr)
            {
                context->GSSetConstantBuffers(0, gs->m_CBufferCount, gs->m_CRawBuffers.data());
                context->GSSetShader(gs->mp_D3DGeomShader, 0, 0); //after this call all sprites disappear
            }
            //set shaders
            context->VSSetShader(vs->mp_D3DVertexShader, 0, 0);
            context->PSSetShader(ps->mp_D3DPixelShader, 0, 0);
            //draw
            context->DrawIndexed(m_indexCount, 0, 0);
        }

        //sprites
        void gSpriteDrawer::Draw(gTexture2D* texture, const RECT& dest, const RECT& source, const Matrix& spriteMatrix, const float& rotation, Vector2d& position, const Vector2d& origin, const Color& color)
        {
            VertexPositionColorTexture* verticesPtr;
            D3D11_MAPPED_SUBRESOURCE mappedResource;
            unsigned int TriangleVertexStride = sizeof(VertexPositionColorTexture);
            unsigned int offset = 0;
            float halfWidth = (float)dest.right / 2.0f;
            float halfHeight = (float)dest.bottom / 2.0f;
            float z = 0.1f;
            int w = texture->Width();
            int h = texture->Height();
            float tu = (float)source.right / (w);
            float tv = (float)source.bottom / (h);
            float hu = (float)source.left / (w);
            float hv = (float)source.top / (h);
            Vector2d t0 = Vector2d(hu + tu, hv);
            Vector2d t1 = Vector2d(hu + tu, hv + tv);
            Vector2d t2 = Vector2d(hu, hv + tv);
            Vector2d t3 = Vector2d(hu, hv + tv);
            Vector2d t4 = Vector2d(hu, hv);
            Vector2d t5 = Vector2d(hu + tu, hv);
            float ex = (dest.right / 2) + (origin.x);
            float ey = (dest.bottom / 2) + (origin.y);
            Vector4d v4Color = Vector4d(color.r, color.g, color.b, color.a);
            VertexPositionColorTexture vertices[] =
            {
                { Vector3d(dest.right - ex, -ey, z), v4Color, t0 },
                { Vector3d(dest.right - ex, dest.bottom - ey, z), v4Color, t1 },
                { Vector3d(-ex, dest.bottom - ey, z), v4Color, t2 },
                { Vector3d(-ex, dest.bottom - ey, z), v4Color, t3 },
                { Vector3d(-ex, -ey, z), v4Color, t4 },
                { Vector3d(dest.right - ex, -ey, z), v4Color, t5 },
            };
            auto mp_context = mp_D3D->mp_Context;
            // Lock the vertex buffer so it can be written to.
            mp_context->Map(mp_vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
            // Get a pointer to the data in the vertex buffer.
            verticesPtr = (VertexPositionColorTexture*)mappedResource.pData;
            // Copy the data into the vertex buffer.
            memcpy(verticesPtr, (void*)vertices, (sizeof(VertexPositionColorTexture) * 6));
            // Unlock the vertex buffer.
            mp_context->Unmap(mp_vertexBuffer, 0);
            //set vertex buffer
            mp_context->IASetVertexBuffers(0, 1, &mp_vertexBuffer, &TriangleVertexStride, &offset);
            //set texture
            mp_context->PSSetShaderResources(0, 1, &texture->mp_SRV);
            //set matrix to shader
            mp_context->UpdateSubresource(mp_matrixBuffer, 0, 0, &spriteMatrix, 0, 0);
            mp_context->VSSetConstantBuffers(0, 1, &mp_matrixBuffer);
            //draw sprite
            mp_context->Draw(6, 0);
        }
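    Note: the usual culprit in this situation is that the geometry shader stays bound to the pipeline after the terrain draw, so every later sprite and text draw runs through it too; the fix is to clear the GS slot once the geometry-shaded draw is done (in the native API, GSSetShader(nullptr, nullptr, 0) right after DrawIndexed). A minimal sketch of the same idea through SharpDX's D3D11 wrapper (the parameter names here are illustrative, not from the gGeometry classes above):

        using SharpDX.Direct3D11;

        // Sketch only: after any draw that used a geometry shader, clear the GS stage
        // so later sprite/text draws do not run through it.
        static void DrawWithGeometryShader(DeviceContext context, GeometryShader gs, int indexCount)
        {
            context.GeometryShader.Set(gs);        // bind the terrain geometry shader
            context.DrawIndexed(indexCount, 0, 0); // draw the geometry-shaded mesh

            context.GeometryShader.Set(null);      // unbind it again; sprites keep their plain VS/PS pipeline
        }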

    Read the article

  • Billboard shader without distortion

    - by Nick Wiggill
    I use the standard approach to billboarding within Unity, which is OK but not ideal: transform.LookAt(camera). The problem is that this introduces distortion toward the edges of the viewport, especially as the field of view angle grows larger. This is unlike the perfect billboarding you'd see in e.g. Doom when seeing an enemy from any angle, irrespective of where they are located in screen space. Obviously, there are ways to blit an image directly to the viewport, centred around a single vertex, but I'm not hot on shaders. Does anyone have any samples of this approach (GLSL if possible), or any suggestions as to why it isn't typically done this way (vs. the aforementioned quad transformation method)? EDIT: I was confused, thanks Nathan for the heads up. Of course, causing the quads to look at the camera does not make them parallel to the view plane -- which is what I need.
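    Note: the view-plane-aligned behaviour described in the edit doesn't necessarily need a shader. A minimal Unity C# sketch (assuming the billboard is a simple quad and Camera.main is the rendering camera) is to copy the camera's rotation instead of calling LookAt, which keeps every quad parallel to the view plane regardless of where it sits on screen:

        using UnityEngine;

        // Sketch: attach to each billboard quad. Copying the camera rotation aligns the quad
        // with the view plane, unlike LookAt, which aims each quad at the camera position.
        public class ViewPlaneBillboard : MonoBehaviour
        {
            void LateUpdate()
            {
                transform.rotation = Camera.main.transform.rotation;
            }
        }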

    Read the article

  • Best way of storing voxels

    - by opiop65
    This should be a pretty easy question to answer, and I'm not looking for how to store the voxels data-wise; I'm looking for the theory. Currently, my voxel engine has no global list of tiles. Each chunk has its own list, and it's hard to do things like collision detection or anything that may use tiles outside of its own chunk. I recently worked on procedural terrain with a non-voxel engine. Now, I want to start using heightmaps in my voxel engine. But I have a problem: if each chunk has its own list, then it will be pretty difficult to implement the heightmaps. My real question is: should I store a global list of voxels independent of the chunks? Then I could easily perform collision detection and terrain generation. However, it would probably be harder to do things such as frustum culling. So how should I store them?
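    Note: one common middle ground is to keep the chunks as the only storage and put a thin world-level accessor in front of them, so collision detection and heightmap-driven generation can address voxels by global coordinates without a second, duplicated list. A rough C# sketch of that idea (the Chunk and Voxel types here are placeholders, not from any particular engine):

        using System.Collections.Generic;

        public enum Voxel { Air, Dirt, Stone }

        public class Chunk
        {
            public const int Size = 16;
            private readonly Voxel[,] tiles = new Voxel[Size, Size];
            public Voxel Get(int x, int y) => tiles[x, y];
            public void Set(int x, int y, Voxel v) => tiles[x, y] = v;
        }

        public class World
        {
            // Chunks keyed by chunk coordinates; only this class knows about chunk boundaries.
            private readonly Dictionary<(int cx, int cy), Chunk> chunks = new Dictionary<(int cx, int cy), Chunk>();

            public Voxel GetVoxel(int worldX, int worldY)
            {
                // Floor-divide so negative coordinates map to the right chunk as well.
                int cx = (int)System.Math.Floor((double)worldX / Chunk.Size);
                int cy = (int)System.Math.Floor((double)worldY / Chunk.Size);
                if (!chunks.TryGetValue((cx, cy), out var chunk))
                    return Voxel.Air; // unloaded chunks read as empty

                return chunk.Get(worldX - cx * Chunk.Size, worldY - cy * Chunk.Size);
            }
        }

    Frustum culling can still run per chunk, since the chunks remain the unit of storage and rendering; the accessor only hides them from gameplay code.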

    Read the article

  • What are the reasons for Unity3D's owners to force rich guys to buy the Pro version?

    - by mhambra
    Well, I have to say that Unity is a really nice thing that can save one dozens of hours of coding (letting you work on gameplay instantly). But what's the idea of forcing (via the EULA) any party that made over 100k last fiscal year to purchase Pro instead of using the normal edition!? It feels like this kind of licensing provides hidden benefits to rich guys over me, a poor sloven, who can afford buying the $3.5k license but obviously will not receive any additional cookies from it. And, by the way, has anyone estimated how much Unity's source + PlayStation + Xbox licenses will cost?

    Read the article

  • Animations in FBX exported from Maya are anchored in the wrong place

    - by Simon P Stevens
    We are trying to export a model and animation from Maya into Unity3d. In Maya, the model is anchored (pivot point) at the feet (and the body moves up and down). However, after we perform the FBX export and import the file into Unity, the model now appears to be anchored by the waist/head and the feet move. These example videos probably help explain the problem more clearly: Example video - Maya - Correct Example video - Unity - Wrong We have also noticed that if we take the FBX file and import it back into Maya, we have exactly the same problem. It seems that the constraints no longer work after the FBX is reimported back into Maya, which just kills the connection between the joints and the control objects. When exporting the FBX we tried checking the 'bake animations' check box. The fact that the same problem exists when importing the FBX back into both Maya and Unity suggests that the source of the problem is most likely the Maya FBX export. Has anyone encountered this problem before, and do you have any ideas how to fix it?

    Read the article

  • Entity/Component based engine rendering separation from logic

    - by Denis Narushevich
    I noticed in Unity3D that each gameObject (entity) has its own renderer component; as far as I understand, such a component handles rendering logic. I wonder if it is common practice in entity/component based engines for a single entity to have renderer components and logic components (such as position and behavior) together in one box. Such an approach sounds odd to me; in my understanding the entity itself belongs to the logic part and shouldn't contain any render-specific things. With such an approach it is impossible to swap renderers; it would require rewriting all those customized renderers. The way I would do it, the entity would contain only logic-specific components, like AI, transform, and scripts, plus a reference to a mesh or sprite. Then some entity with a Camera component would store references to all objects visible to the camera. In order to render all that stuff I would pass the Camera reference to a Renderer class and render all sprites and meshes of the visible entities. Is such an approach somehow wrong?
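    Note: the arrangement described in the last paragraph is essentially a render system in a classic entity/component design: entities carry only data (a transform plus a handle to a mesh or sprite), and a renderer-owned system walks the visible set each frame. A small C# sketch of that separation (all type names here are illustrative, not from Unity or any specific engine):

        using System.Collections.Generic;
        using System.Numerics;

        public struct TransformComponent { public Vector3 Position; public Quaternion Rotation; }
        public struct MeshComponent { public int MeshHandle; } // just an id; the renderer owns the actual GPU resources

        public class Entity
        {
            public TransformComponent Transform;
            public MeshComponent Mesh;
        }

        public interface IRenderer
        {
            void DrawMesh(int meshHandle, Vector3 position, Quaternion rotation);
        }

        // Logic systems never touch the renderer; they only fill component data.
        // The render system is the single place that talks to IRenderer, so backends can be swapped.
        public class RenderSystem
        {
            private readonly IRenderer renderer;
            public RenderSystem(IRenderer renderer) { this.renderer = renderer; }

            public void Render(IEnumerable<Entity> visibleEntities)
            {
                foreach (var e in visibleEntities)
                    renderer.DrawMesh(e.Mesh.MeshHandle, e.Transform.Position, e.Transform.Rotation);
            }
        }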

    Read the article

  • How do I make a Rect for an irregular-shaped sprite?

    - by Anil gupta
    I used masking to break an image into the pattern mentioned below, and it now breaks into different pieces. But I have one issue: I need to make a Rect for each piece, because I need to drag the broken pieces and adjust them into the correct positions so that I can reassemble the actual image. To drag a piece and put it in the right position I need a Rect, but I have no idea how to make a Rect for an irregular shape. Any idea or code for making such a Rect would be appreciated. My previous question is: How do I break an image into 6 or 8 pieces of different shapes? Thanks.

    Read the article

  • Sprite Kit - containsPoint for SKPhysicsBody?

    - by gj15987
    I have a ball bouncing around the screen. I can pick it up and drag it onto a "bucket". When my touches finish, I use the containsPoint function to check and see if I have dropped the ball onto the bucket. This works fine, however, I actually want to check whether the ball is dropped onto the bucket node's physics body because my "bucket" is actually just an oval, and so I've applied a physics body which is the same shape as the oval, so that the white space around the oval isn't included in the physics simulation. I can't seem to find a "containsPoint" function for physics bodies. Can anyone advise on how I'd check for this? To summarise, I want to drop a node, onto a specific part of another node (or its physics body) and trigger an event. Thanks in advance.

    Read the article

  • Help implementing virtual d-pad

    - by Moshe
    Short Version: I am trying to move a player around on a tilemap, keeping it centered on its tile, while smoothly controlling it with a SneakyInput virtual joystick. My movement is jumpy and hard to control. What's a good way to implement this? Long Version: I'm trying to get a tilemap-based RPG "layer" working on top of cocos2d-iphone. I'm using SneakyInput as the input right now, but I've run into a bit of a snag. Initially, I followed Steffen Itterheim's book and Ray Wenderlich's tutorial, and I got jumpy movement working. My player now moves from tile to tile, without any animation whatsoever. So, I took it a step further. I changed my player.position to a CCMoveTo action. Combined with CCFollow, my player moves pretty smoothly. Here's the problem, though: between each CCMoveTo, the movement stops, so there's a bit of jumpiness introduced between movements. To deal with that, I changed my CCMoveTo into a CCMoveBy, and instead of running it once, I decided to have it repeat forever with CCRepeatForever. My plan was to stop the repeating action whenever the player changed directions or released the d-pad. However, when the movement stops, the player is not necessarily centered along the tiles, as it should be. To correctly position the player, I use a CCMoveTo and get the closest position that would put the player back into the proper position. This reintroduces the earlier problem of jumpiness between actions. What is the correct way to implement a smooth joystick while smoothly animating the player and keeping it on the "grid" of tiles? Edit: It turns out that this was caused by a "bug fix" in the cocos2d engine.

    Read the article

  • Playing NSF music in FMOD.net

    - by Tesserex
    So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting it. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for. MyNes plays from a ROM, not NSF. So, I have to rip out the APU and get it to play NSF files. The MyNes APU uses SlimDX, so I have to convert that to FMOD.NET. I am really stuck about how to go about either of these, because I'm not that familiar with audio formats and it's hard finding resources online. So here are a few questions: From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, it's just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this that would be great. The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it? Joining #1 and #2, does the header data from the NSF substitute for some of the ROM data in the emulator code? Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? Specifically MyNes says PCM8. Any tips on loading / playing the stream in FMOD are appreciated. As an aside, I don't really understand the loading / playing sections of the spec I linked at all. It seems to apply to 6502 systems / emulators only and not to my situation. I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.

    Read the article

  • What is the Xbox360's D3DRS_VIEWPORTENABLE equivalent on WinXP D3D9?

    - by Jim Buck
    I posted this on Stack Overflow, but of course it should be posted here. I am maintaining a multiplatform codebase for Xbox360 and WinXP. I am seeing an issue on the XP side that appears to be related to D3DRS_VIEWPORTENABLE on the Xbox360 version not having an equivalent on WinXP D3D9. This article had an interesting idea, but the only way to construct an identity matrix would be to supply negative numbers to D3DVIEWPORT9::X and D3DVIEWPORT9::Height, and those are unsigned. (I tried to put in negative numbers anyway, but nothing interesting happened.) So, how does one emulate the behavior of D3DRS_VIEWPORTENABLE under WinXP/D3D9? (For clarity, the result I'm seeing is that a 2D screen-aligned quad works fine on Xbox360 but is offset/stretched on WinXP. In fact, (0, 0) starts in the center of the screen on WinXP instead of in the lower-left corner like on the Xbox360, as a result of applying the viewport transform.) Update: I didn't have an Xbox360 devkit at the time I wrote up this question, but I've since gotten one. I commented out the disabling of the D3DRS_VIEWPORTENABLE state, and the exact same behavior resulted on the Xbox360 as on the WinXP build. So, there must be some DirectX magic to bridge the gap here for emulating D3DRS_VIEWPORTENABLE being turned off on WinXP.

    Read the article

  • OpenGL 3.0+ framebuffer to texture/images

    - by user827992
    I need a way to capture what is rendered on screen. I have read about glReadPixels, but it looks really slow. Can you suggest a more efficient, or just an alternative, way of copying what is rendered by OpenGL 3.0+ into local RAM, and in general of outputting it to an image or a data stream? How can I achieve the same goal with OpenGL ES 2.0? EDIT: I just forgot: with these OpenGL functions, how can I be sure that I'm actually reading a complete frame, meaning that there is no overlap between two frames or any nasty side effect, and that I'm actually reading the frame that comes right after the previous one, so I do not lose frames?
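    Note: a common way to take the sting out of glReadPixels on desktop GL is to read into a pixel buffer object and map it a frame or two later, so the copy overlaps with rendering instead of stalling it. A rough sketch of that pattern using OpenTK's C# bindings (the buffer size and the one-frame delay are illustrative assumptions):

        using System;
        using OpenTK.Graphics.OpenGL;

        public class FrameReader
        {
            private int pbo;
            private int width, height;

            public void Init(int w, int h)
            {
                width = w; height = h;
                GL.GenBuffers(1, out pbo);
                GL.BindBuffer(BufferTarget.PixelPackBuffer, pbo);
                GL.BufferData(BufferTarget.PixelPackBuffer, (IntPtr)(w * h * 4), IntPtr.Zero, BufferUsageHint.StreamRead);
                GL.BindBuffer(BufferTarget.PixelPackBuffer, 0);
            }

            // Call at a fixed point right after the frame has been drawn.
            public void QueueReadback()
            {
                GL.BindBuffer(BufferTarget.PixelPackBuffer, pbo);
                // With a PBO bound, the data argument is an offset into the buffer, so pass IntPtr.Zero.
                GL.ReadPixels(0, 0, width, height, PixelFormat.Bgra, PixelType.UnsignedByte, IntPtr.Zero);
                GL.BindBuffer(BufferTarget.PixelPackBuffer, 0);
            }

            // Call on a later frame; mapping immediately would still stall the pipeline.
            public byte[] Collect()
            {
                var pixels = new byte[width * height * 4];
                GL.BindBuffer(BufferTarget.PixelPackBuffer, pbo);
                IntPtr ptr = GL.MapBuffer(BufferTarget.PixelPackBuffer, BufferAccess.ReadOnly);
                System.Runtime.InteropServices.Marshal.Copy(ptr, pixels, 0, pixels.Length);
                GL.UnmapBuffer(BufferTarget.PixelPackBuffer);
                GL.BindBuffer(BufferTarget.PixelPackBuffer, 0);
                return pixels;
            }
        }

    On OpenGL ES 2.0 there are no pixel buffer objects in core (they arrive with ES 3.0), so a plain glReadPixels into client memory is the portable route there. Reading right after the frame's draw calls (and before starting the next frame) is what guarantees you get exactly that frame's contents.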

    Read the article

  • libgdx sprite position relative to body

    - by While-E
    Apologies if this is a reiteration, as I couldn't find another discussion of this over the past couple days. Issue: I'm using libgdx and box2d, and I'm currently updating the sprite's position to the body's current position every render call. Using a debugRenderer to see the bodies, I see that there is fairly noticeable lag between the movement/position of the body and the sprite that is being moved relative to it. Question: Is this lag normal, possibly to perform collisions ahead of time? If not, should I be manipulating/relating the positions differently? Thanks in advance! [Solution] This was a coding error on my part. Pointed out by a good reply below, I was updating the position of the sprite relative to the body and then stepping the physics. Thus never actually setting the sprite to the body's CURRENT position. Thanks!

    Read the article

  • Collision filtering by object, team

    - by Bill Zimmerman
    I am looking for a good method to determine which objects will be considered for collision with other objects. My current idea is that each object has the following properties:

        alwaysCollidesWith = [list of objects that will always trigger a collision check]
        neverCollidesWith = [list of objects that will never be considered]
        teamCollidesWith = [list of objects that will be checked, provided they belong to a different team]

    For example:

        - projectiles never have to be checked for collisions with other projectiles
        - players are always checked for collisions with players, regardless of team
        - projectiles are only considered for collisions with another team's players

    Does anyone see any weaknesses with this approach? Can anyone recommend a better approach?
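    Note: the three lists reduce to a single predicate evaluated per pair, with "never" taking priority over "always" and the team list only firing across teams. A small C# sketch of that filter (the category enum and names are made up for illustration):

        using System.Collections.Generic;

        public enum Category { Player, Projectile, Pickup }

        public class CollisionProfile
        {
            public Category Category;
            public int Team;
            public HashSet<Category> AlwaysCollidesWith = new HashSet<Category>();
            public HashSet<Category> NeverCollidesWith = new HashSet<Category>();
            public HashSet<Category> TeamCollidesWith = new HashSet<Category>();
        }

        public static class CollisionFilter
        {
            // Returns true if the pair should go on to the narrow-phase collision check.
            public static bool ShouldCheck(CollisionProfile a, CollisionProfile b)
            {
                return Allows(a, b) && Allows(b, a); // require both sides to agree
            }

            private static bool Allows(CollisionProfile self, CollisionProfile other)
            {
                if (self.NeverCollidesWith.Contains(other.Category)) return false; // hard exclusion wins
                if (self.AlwaysCollidesWith.Contains(other.Category)) return true;
                if (self.TeamCollidesWith.Contains(other.Category)) return self.Team != other.Team;
                return false;
            }
        }

    Requiring both sides to agree avoids asymmetric surprises, such as one object opting in while the other has opted out, which is the main weakness of evaluating only one side of the pair.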

    Read the article

  • Is the new Windows 8 SDK usable with Visual C++ Express 2010 on Windows 7?

    - by JohnB
    This is inspired by and related to "Is the June 2010 DX SDK really the latest?" asked recently, but it's a different question. I likely won't be purchasing the full Visual Studio 2012 for C++; I intend to use the free Visual C++ Express 2012 that targets desktop applications when it is released, so for now I'm using Visual C++ Express 2010 running on Windows 7. The latest DirectX 11 SDK is the one included in the Windows 8 SDK now; it's not a separate release any more. So my question is: can I use the Windows 8 SDK to build DirectX 11 programs that work on Windows 7, using Visual C++ Express 2010 running on Windows 7? Or do I need to stick to the final DirectX SDK release for now?

    Read the article

  • How can I get the palette of an 8-bit surface in SDL.NET/Tao.SDL?

    - by lolmaster
    I'm looking to get the palette of an 8-bit surface in SDL.NET if possible, or (more than likely) using Tao.SDL. This is because I want to do palette swapping with the palette directly, instead of blitting surfaces together to replace colours like how you would do it with a 32-bit surface. I've gotten the SDL_Surface and the SDL_PixelFormat, however when I go to get the palette in the same way, I get a System.ExecutionEngineException:

        private Tao.Sdl.Sdl.SDL_Palette GetPalette(Surface surf)
        {
            // Get surface.
            Tao.Sdl.Sdl.SDL_Surface sdlSurface = (Tao.Sdl.Sdl.SDL_Surface)System.Runtime.InteropServices.Marshal.PtrToStructure(surf.Handle, typeof(Tao.Sdl.Sdl.SDL_Surface));

            // Get pixel format.
            Tao.Sdl.Sdl.SDL_PixelFormat pixelFormat = (Tao.Sdl.Sdl.SDL_PixelFormat)System.Runtime.InteropServices.Marshal.PtrToStructure(sdlSurface.format, typeof(Tao.Sdl.Sdl.SDL_PixelFormat));

            // Execution exception here.
            Tao.Sdl.Sdl.SDL_Palette palette = (Tao.Sdl.Sdl.SDL_Palette)System.Runtime.InteropServices.Marshal.PtrToStructure(pixelFormat.palette, typeof(Tao.Sdl.Sdl.SDL_Palette));

            return palette;
        }

    When I used unsafe code to get the palette, I got a compile-time error: "Cannot take the address of, get the size of, or declare a pointer to a managed type ('Tao.Sdl.Sdl.SDL_Palette')". My unsafe code to get the palette was this:

        unsafe
        {
            Tao.Sdl.Sdl.SDL_Palette* pal = (Tao.Sdl.Sdl.SDL_Palette*)pixelFormat.palette;
        }

    From what I've read, a managed type in this case is when a structure has some sort of reference inside it as a field. The SDL_Palette structure happens to have an array of SDL_Color's, so I'm assuming that's the reference type that is causing issues. However I'm still not sure how to work around that to get the underlying palette. So if anyone knows how to get the palette from an 8-bit surface, whether it's through safe or unsafe code, the help would be greatly appreciated.
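    Note: one workaround is not to marshal Tao's managed SDL_Palette at all, but to walk the native SDL 1.2 layout by hand: SDL_Palette is an int ncolors followed by a pointer to an array of 4-byte SDL_Color entries. A hedged sketch of reading it that way, using locally defined blittable structs instead of the Tao.Sdl types (the field order mirrors the SDL 1.2 headers; treat the exact offsets as an assumption to verify):

        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        struct NativeSdlColor { public byte r, g, b, unused; }

        static class PaletteReader
        {
            // palettePtr is the IntPtr taken from pixelFormat.palette in the code above.
            public static NativeSdlColor[] ReadPalette(IntPtr palettePtr)
            {
                if (palettePtr == IntPtr.Zero)
                    return new NativeSdlColor[0]; // non-palettized surfaces have no palette

                int ncolors = Marshal.ReadInt32(palettePtr);                      // first field: int ncolors
                IntPtr colorsPtr = Marshal.ReadIntPtr(palettePtr, IntPtr.Size);   // second field: SDL_Color* colors

                var colors = new NativeSdlColor[ncolors];
                int stride = Marshal.SizeOf(typeof(NativeSdlColor)); // 4 bytes per entry
                for (int i = 0; i < ncolors; i++)
                {
                    colors[i] = (NativeSdlColor)Marshal.PtrToStructure(
                        new IntPtr(colorsPtr.ToInt64() + (long)i * stride), typeof(NativeSdlColor));
                }
                return colors;
            }
        }

    Writing a modified palette back to the surface would then go through SDL's own palette-setting calls (SDL_SetColors / SDL_SetPalette in SDL 1.2) rather than poking that memory directly.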

    Read the article

  • DirectX9 dynamic rendering

    - by gardian06
    What I am planning to do is have the models (or maybe just an identifier for the model to be used) stored outside of the DirectX 9 framework, and so by nature have completely dynamic rendering. All of the information that I have found covers static rendering (rendering models that are stored in memory at specific positions). I would like information on how to take a model (or an identifier for a model type) that is stored outside of the framework and render it to the screen. I am expected to take a container that holds all the relevant data to be rendered. The information outside would hold the position, orientation (a quaternion, though I am told that I can also get a rotation matrix if I prefer), and dimensions (scale).
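    Note: dynamic rendering in this sense mostly comes down to building the world transform per draw call from whatever the container holds, instead of baking positions into the vertex data. A rough C# sketch using SlimDX's Direct3D9 types (the RenderItem container and the mesh handling are assumptions, not part of any framework):

        using SlimDX;
        using SlimDX.Direct3D9;

        // Illustrative container: whatever the outside system stores per object.
        public class RenderItem
        {
            public Mesh Mesh;            // or just an identifier you map to a loaded mesh
            public Vector3 Position;
            public Quaternion Orientation;
            public Vector3 Scale;
        }

        public static class DynamicRenderer
        {
            public static void Draw(Device device, RenderItem item)
            {
                // Compose scale * rotation * translation each frame from the container's data.
                Matrix world = Matrix.Scaling(item.Scale.X, item.Scale.Y, item.Scale.Z)
                             * Matrix.RotationQuaternion(item.Orientation)
                             * Matrix.Translation(item.Position.X, item.Position.Y, item.Position.Z);

                device.SetTransform(TransformState.World, world);
                item.Mesh.DrawSubset(0);
            }
        }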

    Read the article

  • Simplex Noise Help

    - by Alex Larsen
    Im Making A Minecraft Like Gae In XNA C# And I Need To Generate Land With Caves This Is The Code For Simplex I Have /// <summary> /// 1D simplex noise /// </summary> /// <param name="x"></param> /// <returns></returns> public static float Generate(float x) { int i0 = FastFloor(x); int i1 = i0 + 1; float x0 = x - i0; float x1 = x0 - 1.0f; float n0, n1; float t0 = 1.0f - x0 * x0; t0 *= t0; n0 = t0 * t0 * grad(perm[i0 & 0xff], x0); float t1 = 1.0f - x1 * x1; t1 *= t1; n1 = t1 * t1 * grad(perm[i1 & 0xff], x1); // The maximum value of this noise is 8*(3/4)^4 = 2.53125 // A factor of 0.395 scales to fit exactly within [-1,1] return 0.395f * (n0 + n1); } /// <summary> /// 2D simplex noise /// </summary> /// <param name="x"></param> /// <param name="y"></param> /// <returns></returns> public static float Generate(float x, float y) { const float F2 = 0.366025403f; // F2 = 0.5*(sqrt(3.0)-1.0) const float G2 = 0.211324865f; // G2 = (3.0-Math.sqrt(3.0))/6.0 float n0, n1, n2; // Noise contributions from the three corners // Skew the input space to determine which simplex cell we're in float s = (x + y) * F2; // Hairy factor for 2D float xs = x + s; float ys = y + s; int i = FastFloor(xs); int j = FastFloor(ys); float t = (float)(i + j) * G2; float X0 = i - t; // Unskew the cell origin back to (x,y) space float Y0 = j - t; float x0 = x - X0; // The x,y distances from the cell origin float y0 = y - Y0; // For the 2D case, the simplex shape is an equilateral triangle. // Determine which simplex we are in. int i1, j1; // Offsets for second (middle) corner of simplex in (i,j) coords if (x0 > y0) { i1 = 1; j1 = 0; } // lower triangle, XY order: (0,0)->(1,0)->(1,1) else { i1 = 0; j1 = 1; } // upper triangle, YX order: (0,0)->(0,1)->(1,1) // A step of (1,0) in (i,j) means a step of (1-c,-c) in (x,y), and // a step of (0,1) in (i,j) means a step of (-c,1-c) in (x,y), where // c = (3-sqrt(3))/6 float x1 = x0 - i1 + G2; // Offsets for middle corner in (x,y) unskewed coords float y1 = y0 - j1 + G2; float x2 = x0 - 1.0f + 2.0f * G2; // Offsets for last corner in (x,y) unskewed coords float y2 = y0 - 1.0f + 2.0f * G2; // Wrap the integer indices at 256, to avoid indexing perm[] out of bounds int ii = i % 256; int jj = j % 256; // Calculate the contribution from the three corners float t0 = 0.5f - x0 * x0 - y0 * y0; if (t0 < 0.0f) n0 = 0.0f; else { t0 *= t0; n0 = t0 * t0 * grad(perm[ii + perm[jj]], x0, y0); } float t1 = 0.5f - x1 * x1 - y1 * y1; if (t1 < 0.0f) n1 = 0.0f; else { t1 *= t1; n1 = t1 * t1 * grad(perm[ii + i1 + perm[jj + j1]], x1, y1); } float t2 = 0.5f - x2 * x2 - y2 * y2; if (t2 < 0.0f) n2 = 0.0f; else { t2 *= t2; n2 = t2 * t2 * grad(perm[ii + 1 + perm[jj + 1]], x2, y2); } // Add contributions from each corner to get the final noise value. // The result is scaled to return values in the interval [-1,1]. return 40.0f * (n0 + n1 + n2); // TODO: The scale factor is preliminary! 
} public static float Generate(float x, float y, float z) { // Simple skewing factors for the 3D case const float F3 = 0.333333333f; const float G3 = 0.166666667f; float n0, n1, n2, n3; // Noise contributions from the four corners // Skew the input space to determine which simplex cell we're in float s = (x + y + z) * F3; // Very nice and simple skew factor for 3D float xs = x + s; float ys = y + s; float zs = z + s; int i = FastFloor(xs); int j = FastFloor(ys); int k = FastFloor(zs); float t = (float)(i + j + k) * G3; float X0 = i - t; // Unskew the cell origin back to (x,y,z) space float Y0 = j - t; float Z0 = k - t; float x0 = x - X0; // The x,y,z distances from the cell origin float y0 = y - Y0; float z0 = z - Z0; // For the 3D case, the simplex shape is a slightly irregular tetrahedron. // Determine which simplex we are in. int i1, j1, k1; // Offsets for second corner of simplex in (i,j,k) coords int i2, j2, k2; // Offsets for third corner of simplex in (i,j,k) coords /* This code would benefit from a backport from the GLSL version! */ if (x0 >= y0) { if (y0 >= z0) { i1 = 1; j1 = 0; k1 = 0; i2 = 1; j2 = 1; k2 = 0; } // X Y Z order else if (x0 >= z0) { i1 = 1; j1 = 0; k1 = 0; i2 = 1; j2 = 0; k2 = 1; } // X Z Y order else { i1 = 0; j1 = 0; k1 = 1; i2 = 1; j2 = 0; k2 = 1; } // Z X Y order } else { // x0<y0 if (y0 < z0) { i1 = 0; j1 = 0; k1 = 1; i2 = 0; j2 = 1; k2 = 1; } // Z Y X order else if (x0 < z0) { i1 = 0; j1 = 1; k1 = 0; i2 = 0; j2 = 1; k2 = 1; } // Y Z X order else { i1 = 0; j1 = 1; k1 = 0; i2 = 1; j2 = 1; k2 = 0; } // Y X Z order } // A step of (1,0,0) in (i,j,k) means a step of (1-c,-c,-c) in (x,y,z), // a step of (0,1,0) in (i,j,k) means a step of (-c,1-c,-c) in (x,y,z), and // a step of (0,0,1) in (i,j,k) means a step of (-c,-c,1-c) in (x,y,z), where // c = 1/6. float x1 = x0 - i1 + G3; // Offsets for second corner in (x,y,z) coords float y1 = y0 - j1 + G3; float z1 = z0 - k1 + G3; float x2 = x0 - i2 + 2.0f * G3; // Offsets for third corner in (x,y,z) coords float y2 = y0 - j2 + 2.0f * G3; float z2 = z0 - k2 + 2.0f * G3; float x3 = x0 - 1.0f + 3.0f * G3; // Offsets for last corner in (x,y,z) coords float y3 = y0 - 1.0f + 3.0f * G3; float z3 = z0 - 1.0f + 3.0f * G3; // Wrap the integer indices at 256, to avoid indexing perm[] out of bounds int ii = i % 256; int jj = j % 256; int kk = k % 256; // Calculate the contribution from the four corners float t0 = 0.6f - x0 * x0 - y0 * y0 - z0 * z0; if (t0 < 0.0f) n0 = 0.0f; else { t0 *= t0; n0 = t0 * t0 * grad(perm[ii + perm[jj + perm[kk]]], x0, y0, z0); } float t1 = 0.6f - x1 * x1 - y1 * y1 - z1 * z1; if (t1 < 0.0f) n1 = 0.0f; else { t1 *= t1; n1 = t1 * t1 * grad(perm[ii + i1 + perm[jj + j1 + perm[kk + k1]]], x1, y1, z1); } float t2 = 0.6f - x2 * x2 - y2 * y2 - z2 * z2; if (t2 < 0.0f) n2 = 0.0f; else { t2 *= t2; n2 = t2 * t2 * grad(perm[ii + i2 + perm[jj + j2 + perm[kk + k2]]], x2, y2, z2); } float t3 = 0.6f - x3 * x3 - y3 * y3 - z3 * z3; if (t3 < 0.0f) n3 = 0.0f; else { t3 *= t3; n3 = t3 * t3 * grad(perm[ii + 1 + perm[jj + 1 + perm[kk + 1]]], x3, y3, z3); } // Add contributions from each corner to get the final noise value. // The result is scaled to stay just inside [-1,1] return 32.0f * (n0 + n1 + n2 + n3); // TODO: The scale factor is preliminary! 
} private static byte[] perm = new byte[512] { 151,160,137,91,90,15, 131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23, 190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33, 88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166, 77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244, 102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196, 135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123, 5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42, 223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9, 129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228, 251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107, 49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254, 138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180, 151,160,137,91,90,15, 131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23, 190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33, 88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166, 77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244, 102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196, 135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123, 5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42, 223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9, 129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228, 251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107, 49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254, 138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180 }; private static int FastFloor(float x) { return (x > 0) ? ((int)x) : (((int)x) - 1); } private static float grad(int hash, float x) { int h = hash & 15; float grad = 1.0f + (h & 7); // Gradient value 1.0, 2.0, ..., 8.0 if ((h & 8) != 0) grad = -grad; // Set a random sign for the gradient return (grad * x); // Multiply the gradient with the distance } private static float grad(int hash, float x, float y) { int h = hash & 7; // Convert low 3 bits of hash code float u = h < 4 ? x : y; // into 8 simple gradient directions, float v = h < 4 ? y : x; // and compute the dot product with (x,y). return ((h & 1) != 0 ? -u : u) + ((h & 2) != 0 ? -2.0f * v : 2.0f * v); } private static float grad(int hash, float x, float y, float z) { int h = hash & 15; // Convert low 4 bits of hash code into 12 simple float u = h < 8 ? x : y; // gradient directions, and compute dot product. float v = h < 4 ? y : h == 12 || h == 14 ? x : z; // Fix repeats at h = 12 to 15 return ((h & 1) != 0 ? -u : u) + ((h & 2) != 0 ? -v : v); } private static float grad(int hash, float x, float y, float z, float t) { int h = hash & 31; // Convert low 5 bits of hash code into 32 simple float u = h < 24 ? x : y; // gradient directions, and compute dot product. float v = h < 16 ? y : z; float w = h < 8 ? z : t; return ((h & 1) != 0 ? -u : u) + ((h & 2) != 0 ? -v : v) + ((h & 4) != 0 ? 
    -w : w); }

    This is my world generation code:

        Block[,] BlocksInMap = new Block[1024, 256];
        public bool IsWorldGenerated = false;
        Random r = new Random();

        private void RunThread()
        {
            // Indices must stay inside the 1024 x 256 array, so use < rather than <=.
            for (int BH = 0; BH < 256; BH++)
            {
                for (int BW = 0; BW < 1024; BW++)
                {
                    Block b = new Block();
                    if (BH >= 192)
                    {
                    }
                    BlocksInMap[BW, BH] = b;
                }
            }
            IsWorldGenerated = true;
        }

        public void GenWorld()
        {
            new Thread(new ThreadStart(RunThread)).Start();
        }

    And this is an example of how I set blocks:

        Block b = new Block();
        b.BlockType = Block.BlockTypes.Air;

    This is an example of how I set models:

        foreach (Block b in MyWorld)
        {
            switch (b.BlockType)
            {
                case Block.BlockTypes.Dirt:
                    b.Model = DirtModel;
                    break;
                // etc.
            }
        }

    How would I use these to generate the world (the Block array), and if possible thread it more? By the way, it's 1024 wide and 256 tall.
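    Note: one minimal way to tie the noise functions above into the 1024 x 256 block array is to use the 1D Generate(x) overload as a heightmap for the surface and the 2D Generate(x, y) overload as a cave mask below it. A hedged sketch of a RunThread body along those lines (SimplexNoise stands for wherever the posted Generate overloads live; the scales, thresholds and the height of the surface band are guesses to tune, not canonical values):

        private void RunThread()
        {
            const float heightScale = 0.01f;    // lower = smoother hills
            const float caveScale = 0.05f;      // frequency of the cave noise
            const float caveThreshold = 0.35f;  // higher = fewer caves

            for (int BW = 0; BW < 1024; BW++)
            {
                // 1D heightmap: map noise from [-1, 1] to a surface height around row 192.
                float h = SimplexNoise.Generate(BW * heightScale);
                int surface = 192 + (int)(h * 32.0f);

                for (int BH = 0; BH < 256; BH++)
                {
                    Block b = new Block();

                    if (BH < surface)
                    {
                        b.BlockType = Block.BlockTypes.Air;  // above ground
                    }
                    else
                    {
                        // 2D noise as a cave mask: carve air where the noise is strong enough.
                        float cave = SimplexNoise.Generate(BW * caveScale, BH * caveScale);
                        b.BlockType = cave > caveThreshold ? Block.BlockTypes.Air
                                                           : Block.BlockTypes.Dirt;
                    }

                    BlocksInMap[BW, BH] = b;
                }
            }
            IsWorldGenerated = true;
        }

    To thread it further, the simplest split is to divide the outer BW loop into column ranges and run one thread per range, since each column writes to a disjoint slice of BlocksInMap.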

    Read the article

  • How to code UI / HUD in Entity System?

    - by Sylpheed
    I think I already got the idea of the Entity System inspired by Adam Martin (t-machine). I want to start using this for my next project. I already know the basics of Entities, Components, and Systems. My problem is how to handle the UI / HUD: for example, a quest window, skill window, character info window, etc. How do you handle UI events (e.g. pressing a button)? This is stuff that doesn't need to be processed every frame. Currently, I'm using MVC to code the UI, but I don't think that'll be compatible with an Entity System. I've read that an Entity System is embedded in a larger OOP structure. I don't know if the UI sits outside of the ES or not. How do I approach this one?
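    Note: a common pattern is to keep the MVC-style UI outside the entity system and let the two talk through messages: systems publish events when game state changes, and UI widgets push commands back in, so nothing UI-related needs per-frame processing inside the ES. A small C# sketch of that bridge (the event bus and event types here are illustrative, not from t-machine's write-ups):

        using System;
        using System.Collections.Generic;

        // A tiny publish/subscribe bus shared by game systems and UI code.
        public class EventBus
        {
            private readonly Dictionary<Type, List<Action<object>>> handlers = new Dictionary<Type, List<Action<object>>>();

            public void Subscribe<T>(Action<T> handler)
            {
                if (!handlers.TryGetValue(typeof(T), out var list))
                    handlers[typeof(T)] = list = new List<Action<object>>();
                list.Add(o => handler((T)o));
            }

            public void Publish<T>(T evt)
            {
                if (handlers.TryGetValue(typeof(T), out var list))
                    foreach (var h in list) h(evt);
            }
        }

        // Published by a quest system whenever a quest's state changes.
        public class QuestUpdatedEvent { public int QuestId; public string NewStatus; }

        // UI side: a plain MVC-style view, not an entity; it only reacts to events.
        public class QuestWindow
        {
            public QuestWindow(EventBus bus)
            {
                bus.Subscribe<QuestUpdatedEvent>(e => Refresh(e.QuestId, e.NewStatus));
            }

            private void Refresh(int questId, string status)
            {
                // update the widget tree here; nothing runs unless an event arrives
            }
        }

    Button presses flow the other way: the UI publishes a command event that a gameplay system subscribes to, so the ES never needs to know about widgets at all.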

    Read the article

  • Reading from a staging 2D texture array in DirectX10

    - by Don Reba
    I have a DX10 program, where I create an array of 3 16x16 textures, then map, read, and unmap each subresource in turn. I use a single mip level, set resource usage to staging and CPU access to read. Now, here is the problem: Subresource 0 contains 1024 bytes, pitch 64, as expected. Subresource 1 contains 512 bytes, pitch 64. Subresource 2 contains 256 bytes, pitch 64. I expect all three to be the same size. Debugging output is enabled, but not reporting any warnings or errors. Am I missing something, or might this be some sort of driver issue? Here is the code. The language is Nemerle, but C# and C++ would look almost the same. I have looked through the generated code, and am fairly confident the problem is not language-related. def cpuTexture = Texture2D ( device , Texture2DDescription() <- { Width = 16; Height = 16; MipLevels = 1; ArraySize = 3; Format = Format.R32_Float; Usage = ResourceUsage.Staging; CpuAccessFlags = CpuAccessFlags.Read; SampleDescription = SampleDescription(count = 1, quality = 0); } ); foreach (subresource in [0 .. 2]) { def data = cpuTexture.Map(subresource, MapMode.Read, MapFlags.None); Console.WriteLine($"subresource $subresource"); Console.WriteLine($"length = $(data.Data.Length)"); Console.WriteLine($"pitch = $(data.Pitch)"); cpuTexture.Unmap(subresource); }

    Read the article

  • Why is my Tiled map distorted when rendered with LibGDX?

    - by Sean
    I have a Tiled map that looks like this in the editor: But when I load it using an AssetManager (full static source available on GitHub) it appears completely askew. I believe the relevant portion of the code is below. This is the entire method; the others are either empty or might as well be. private OrthographicCamera camera; private AssetManager assetManager; private BitmapFont font; private SpriteBatch batch; private TiledMap map; private TiledMapRenderer renderer; @Override public void create() { float w = Gdx.graphics.getWidth(); float h = Gdx.graphics.getHeight(); camera = new OrthographicCamera(); assetManager = new AssetManager(); batch = new SpriteBatch(); font = new BitmapFont(); camera.setToOrtho(false, (w / h) * 10, 10); camera.update(); assetManager.setLoader(TiledMap.class, new TmxMapLoader( new InternalFileHandleResolver())); assetManager.load(AssetInfo.ICE_CAVE.assetPath, TiledMap.class); assetManager.finishLoading(); map = assetManager.get(AssetInfo.ICE_CAVE.assetPath); renderer = new IsometricTiledMapRenderer(map, 1f/64f); }

    Read the article

  • Disabling depth write trashes the frame buffer on some GPUs

    - by EboMike
    I sometimes disable depth buffer writing via glDepthMask(GL_FALSE) during the alpha rendering of a frame. That works perfectly fine on some GPUs (like the Motorola Droid's PowerVR), but on the HTC EVO with the Adreno GPU for example, I end up with the frame buffer being complete garbage (I see traces of the meshes I rendered somewhere, but the entire screen is mostly trashed). If I force glDepthMask to be true the entire time, everything works fine. I need glDepthMask to be off during parts of the alpha rendering. What can cause the framebuffer to get destroyed by turning the depth writing off? I do clear the depth buffer initially, and the majority of the screen has pixels rendered with depth writing turned on first before I do additional drawing with it turned off.

    Read the article

  • FBX 3ds max export, bad vertices

    - by instancedName
    I need to import a model into OpenGL via the FBX SDK, and for testing purposes I created a simple box centered at (0, 0, 0), with edge length 3, in 3ds Max. Here's the image: But when I exported it and imported it into OpenGL, it wasn't in the center. Then I exported it in ASCII format, opened the file in Notepad, and indeed the Z coordinates were 0 and 3. When I converted the model to an editable mesh and checked every vertex in 3ds Max, it had the expected (±1.5, ±1.5, ±1.5) coordinates. Can anyone help me with this one? I'm really stuck. I tried changing a whole bunch of parameters in the 3ds Max exporter, but every time it changes the Z coordinate.

    Read the article

  • Depth interpolation for z-buffer, with scanline

    - by Twodordan
    I have to write my own software 3d rasterizer, and so far I am able to project my 3d model made of triangles into 2d space: I rotate, translate and project my points to get a 2d space representation of each triangle. Then, I take the 3 triangle points and I implement the scanline algorithm (using linear interpolation) to find all points[x][y] along the edges(left and right) of the triangles, so that I can scan the triangle horizontally, row by row, and fill it with pixels. This works. Except I have to also implement z-buffering. This means that knowing the rotated&translated z coordinates of the 3 vertices of the triangle, I must interpolate the z coordinate for all other points I find with my scanline algorithm. The concept seems clear enough, I first find Za and Zb with these calculations: var Z_Slope = (bottom_point_z - top_point_z) / (bottom_point_y - top_point_y); var Za = top_point_z + ((current_point_y - top_point_y) * Z_Slope); Then for each Zp I do the same interpolation horizontally: var Z_Slope = (right_z - left_z) / (right_x - left_x); var Zp = left_z + ((current_point_x - left_x) * Z_Slope); And of course I add to the zBuffer, if current z is closer to the viewer than the previous value at that index. (my coordinate system is x: left - right; y: top - bottom; z: your face - computer screen;) The problem is, it goes haywire. The project is here and if you select the "Z-Buffered" radio button, you'll see the results... (note that the rest of the options before "Z-Buffered" use the Painter's algorithm to correctly order the triangles. I also use the painter's algorithm -only- to draw the wireframe in "Z-Buffered" mode for debugging purposes) PS: I've read here that you must turn the z's into their reciprocals (meaning z = 1/z) before you interpolate. I tried that, and it appears that there's no change. What am I missing? (could anyone clarify, precisely where you must turn z into 1/z and where to turn it back?)
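    Note on the reciprocal question: the quantity that is linear in screen space is 1/z, not z, so the interpolation slopes should be computed from the reciprocals and only inverted when a pixel's actual depth is needed (for a pure depth test you can even compare 1/z directly without inverting). A small C# sketch of one horizontal span under that rule (variable names mirror the ones in the question; the edge interpolation that produces left_z and right_z should also be done in 1/z):

        // Sketch: perspective-correct depth across one scanline span.
        void FillSpan(int y, int left_x, int right_x, float left_z, float right_z, float[,] zBuffer)
        {
            if (right_x == left_x) return; // degenerate span

            float leftInvZ = 1.0f / left_z;
            float rightInvZ = 1.0f / right_z;
            float invZSlope = (rightInvZ - leftInvZ) / (right_x - left_x);

            for (int x = left_x; x <= right_x; x++)
            {
                float invZ = leftInvZ + (x - left_x) * invZSlope; // linear in screen space
                float z = 1.0f / invZ;                            // back to actual depth only when needed

                if (z < zBuffer[x, y]) // smaller z is nearer in the question's coordinate system
                {
                    zBuffer[x, y] = z;
                    // write the pixel here
                }
            }
        }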

    Read the article

  • How do I implement a quaternion based camera?

    - by kudor gyozo
    I looked at several tutorials about this and when I thought I understood I tried to implement a quaternion based camera. The problem is it doesn't work correctly, after rotating for approx. 10 degrees it jumps back to -10 degrees. I have no idea what's wrong. I'm using openTK and it already has a quaternion class. I'm a noob at opengl, I'm doing this just for fun, and don't really understand quaternions, so probably I'm doing something stupid here. Here is some code: (Actually almost all the code except the methods that load and draw a vbo (it is taken from an OpenTK sample that demonstrates vbo-s)) I load a cube into a vbo and initialize the quaternion for the camera protected override void OnLoad(EventArgs e) { base.OnLoad(e); cameraPos = new Vector3(0, 0, 7); cameraRot = Quaternion.FromAxisAngle(new Vector3(0,0,-1), 0); GL.ClearColor(System.Drawing.Color.MidnightBlue); GL.Enable(EnableCap.DepthTest); vbo = LoadVBO(CubeVertices, CubeElements); } I load a perspective projection here. This is loaded at the beginning and every time I resize the window. protected override void OnResize(EventArgs e) { base.OnResize(e); GL.Viewport(0, 0, Width, Height); float aspect_ratio = Width / (float)Height; Matrix4 perpective = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspect_ratio, 1, 64); GL.MatrixMode(MatrixMode.Projection); GL.LoadMatrix(ref perpective); } Here I get the last rotation value and create a new quaternion that represents only the last rotation and multiply it with the camera quaternion. After this I transform this into axis-angle so that opengl can use it. (This is how I understood it from several online quaternion tutorials) protected override void OnRenderFrame(FrameEventArgs e) { base.OnRenderFrame(e); GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit); double speed = 1; double rx = 0, ry = 0; if (Keyboard[Key.A]) { ry = -speed * e.Time; } if (Keyboard[Key.D]) { ry = +speed * e.Time; } if (Keyboard[Key.W]) { rx = +speed * e.Time; } if (Keyboard[Key.S]) { rx = -speed * e.Time; } Quaternion tmpQuat = Quaternion.FromAxisAngle(new Vector3(0,1,0), (float)ry); cameraRot = tmpQuat * cameraRot; cameraRot.Normalize(); GL.MatrixMode(MatrixMode.Modelview); GL.LoadIdentity(); Vector3 axis; float angle; cameraRot.ToAxisAngle(out axis, out angle); GL.Rotate(angle, axis); GL.Translate(-cameraPos); Draw(vbo); SwapBuffers(); } Here are 2 images to explain better: I rotate a while and from this: it jumps into this Any help is appreciated. Update1: I add these to a streamwriter that writes into a file: sw.WriteLine("camerarot: X:{0} Y:{1} Z:{2} W:{3} L:{4}", cameraRot.X, cameraRot.Y, cameraRot.Z, cameraRot.W, cameraRot.Length); sw.WriteLine("ry: {0}", ry); The log is available here: http://www.pasteall.org/26133/text. At line 770 the cube jumps from right to left, when camerarot.Y changes signs. I don't know if this is normal. Update2 Here is the complete project.
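    Note: one concrete thing worth checking in the render code is units. OpenTK's Quaternion.ToAxisAngle returns the angle in radians, while GL.Rotate expects degrees, so feeding the raw angle in will distort the applied rotation; whether or not it is the whole story here, the conversion is needed. A small sketch of the relevant lines with the conversion added (everything else as in the posted OnRenderFrame):

        // Inside OnRenderFrame, after updating cameraRot:
        GL.MatrixMode(MatrixMode.Modelview);
        GL.LoadIdentity();

        Vector3 axis;
        float angle;
        cameraRot.ToAxisAngle(out axis, out angle);           // angle comes back in radians

        // GL.Rotate takes degrees, so convert before applying the camera rotation.
        GL.Rotate(MathHelper.RadiansToDegrees(angle), axis);
        GL.Translate(-cameraPos);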

    Read the article
