Search Results

Search found 19182 results on 768 pages for 'game engine'.

  • Loading Wavefront Data into a VAO and Rendering It

    - by Jordan LaPrise
    I have successfully loaded a triangulated Wavefront (.obj) file into six vectors: the first three contain the positions, UV coordinates, and normals, and the last three hold the indices for each of the faces. I have been looking into using VAOs and VBOs to render, but I'm not quite sure how to load and render the data. My biggest concern is that indexed rendering only allows one array of indices, meaning I somehow have to make the first three vectors the same size. The only way I can think of doing this is to build three new arrays of equal size and copy in the data for each face, but that would completely defeat the purpose of indexing. Any help would be appreciated. Thanks in advance, Jordan
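
    A common answer to this kind of question is to weld each unique (position, uv, normal) index triple from the OBJ faces into a single vertex, building one unified index buffer as you go; duplicates only arise where attributes actually differ, so most of the benefit of indexing is kept. A minimal sketch under that assumption (type and function names here are hypothetical, not from the question):

        #include <array>
        #include <map>
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Vec2 { float u, v; };
        struct Vertex { Vec3 pos; Vec2 uv; Vec3 normal; };

        // One welded vertex per unique (posIdx, uvIdx, normIdx) triple.
        void weld(const std::vector<Vec3>& positions,
                  const std::vector<Vec2>& uvs,
                  const std::vector<Vec3>& normals,
                  const std::vector<std::array<unsigned, 3>>& faceTriples, // one per corner
                  std::vector<Vertex>& outVertices,
                  std::vector<unsigned>& outIndices)
        {
            std::map<std::array<unsigned, 3>, unsigned> cache;
            for (const auto& t : faceTriples) {
                auto it = cache.find(t);
                if (it == cache.end()) {
                    outVertices.push_back({ positions[t[0]], uvs[t[1]], normals[t[2]] });
                    it = cache.emplace(t, unsigned(outVertices.size() - 1)).first;
                }
                outIndices.push_back(it->second);
            }
            // outVertices goes into one interleaved VBO, outIndices into the
            // element array buffer; both are bound once inside the VAO.
        }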

  • XNA 4: GetData from Texture2D and Set it into Texture3D with specific order

    - by cubrman
    I am trying to convert my 2D color-grading lookup texture into a 3D LUT. When I simply use:

        ColorAtlas.GetData(data);
        ColorAtlas3D.SetData(data);

    the result comes out scrambled (screenshot omitted). I tried building my 2D atlas horizontally, but it did not help; the data was just messed up in a different way. So my question is: how can I influence the order of the data I get from the 2D atlas, and how can I properly pass it into my 3D atlas? Update: I know that I can GetData from a specific rectangular area and put it into several arrays, but the result is still the same. This is what I tried:

        Color[] data2D = new Color[0];
        for (int i = 0; i < 32; i++)
        {
            Color[] data = new Color[32 * 32];
            GraphicsDevice.SetRenderTarget(null);
            ColorAtlas.GetData(0, new Rectangle(0, i * 32, 32, 32), data, 0, data.Length);
            int oldLength = data2D.Length;
            Array.Resize<Color>(ref data2D, oldLength + data.Length);
            Array.Copy(data, 0, data2D, oldLength, data.Length);
        }
        ColorAtlas3D.SetData(data2D);
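
    The underlying problem is pure index math: a 3D texture expects its texels ordered x first, then row by row, then slice by slice, while reading a 2D atlas delivers rows of the whole atlas. A hedged sketch of the remapping, assuming a horizontal layout (32 tiles of 32x32 laid side by side in a 1024x32 atlas, one 32-bit value per texel; shown in C++, but the same loop works in C#):

        #include <cstdint>
        #include <vector>

        // Reorder a 32x32x32 LUT stored as a horizontal 2D atlas into 3D
        // texel order: index = x + y*32 + z*32*32, where z is the tile index.
        std::vector<uint32_t> atlasTo3D(const std::vector<uint32_t>& atlas)
        {
            const int N = 32;
            std::vector<uint32_t> lut(N * N * N);
            for (int z = 0; z < N; ++z)        // tile index == LUT slice
                for (int y = 0; y < N; ++y)    // row inside the tile
                    for (int x = 0; x < N; ++x)
                        lut[x + y * N + z * N * N] =
                            atlas[(z * N + x) + y * N * N]; // atlas width is N*N
            return lut;
        }

    Whatever the actual tile layout is, the fix is the same shape: compute where each (x, y, z) texel sits in the atlas and copy it to x + y*32 + z*1024 before calling SetData on the 3D texture.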

  • What is the format of DXGI_FORMAT_D24_UNORM_S8_UINT?

    - by bobobobo
    I'm trying to read the values in a depth texture of type DXGI_FORMAT_D24_UNORM_S8_UINT. I know this means "a 32-bit z-buffer format that supports 24 bits for depth and 8 bits for stencil", but how do you interpret those 24 bits? It's clearly not going to be a 32-bit int, and it's not going to be a 32-bit float. If it is an integer value, how "far away" is a value of "1" in the depth texture?
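
    UNORM stands for unsigned normalized: a stored 24-bit integer d is interpreted as the real number d / (2^24 - 1), so the [0, 1] depth range is divided into 16,777,215 equal steps and a stored value of 1 is 1/16777215, about 6e-8, of the post-projection depth range (not a linear eye-space distance). A sketch of decoding one texel, assuming the common packing with depth in the low 24 bits and stencil in the high 8:

        #include <cstdint>

        // Decode one D24_UNORM_S8_UINT texel read back as a raw 32-bit word.
        // Assumption: depth occupies the low 24 bits, stencil the high 8.
        void decodeD24S8(uint32_t texel, float& depth, uint8_t& stencil)
        {
            uint32_t d = texel & 0x00FFFFFFu;   // 24-bit unsigned integer
            depth   = d / 16777215.0f;          // UNORM: d / (2^24 - 1)
            stencil = uint8_t(texel >> 24);
        }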

  • What light attenuation function does UDK use?

    - by ananamas
    I'm a big fan of the light attenuation in UDK. Traditionally I've always used the constant-linear-quadratic falloff function to control how "soft" the falloff is, which gives three values to play with. In UDK you can get similar results, but you only need to tweak one value: FalloffExponent. I'm interested in what the actual mathematical function here is. The UDK lighting reference describes it as follows: FalloffExponent: This allows you to modify the falloff of a light. The default falloff is 2. The smaller the number, the sharper the falloff and the more the brightness is maintained until the radius is reached. Does anyone know what it's doing behind the scenes?
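
    UDK's documentation does not give the formula, but a simple curve consistent with that description (smaller exponents hold full brightness longer and then cut off harder, reaching zero exactly at the radius) is a powered linear falloff. The following is a hypothetical reconstruction, not confirmed Epic source:

        #include <algorithm>
        #include <cmath>

        // Hypothetical reconstruction of UDK's FalloffExponent behaviour:
        // full brightness at the light, zero at Radius, with the exponent
        // shaping how late the drop-off happens (default 2).
        float falloff(float distance, float radius, float falloffExponent)
        {
            float t = std::min(distance / radius, 1.0f);
            return std::pow(1.0f - t, falloffExponent);
        }

    With an exponent of 0.5, brightness stays near full for most of the radius and drops sharply at the end; with 2 it fades earlier and more gradually, which matches the documented behaviour.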

  • Sprite not rotating around its centre after Scaling at its centre

    - by asma.farhat
    If I scale a sprite at its centre and then try to rotate it around its centre as well, the rotation does not happen around the centre. The workaround that does work (for example, for a scaled ball) is to set the scale centre to the top left (0, 0), apply the desired scale, then set the rotation centre to the middle of the scaled sprite and apply the rotation modifier:

        blaBloBliSprite.setScaleCenter(0, 0);
        blaBloBliSprite.setScale(0.667f);
        blaBloBliSprite.setPosition(557, CAMERA_HEIGHT / 2 - blaBloBliSprite.getHeightScaled() / 2);
        blaBloBliSprite.setRotationCenter(blaBloBliSprite.getWidthScaled() / 2,
                blaBloBliSprite.getHeightScaled() / 2);

    But I want to scale the sprite at its centre as well. Is there any way of doing that?

  • Restrict Tile Map to its boundaries

    - by Farooq Arshed
    I have loaded a TMX file in cocos2d-x and I am now trying to implement panning. The first part, where the map moves, works. Now I want to restrict the panning so the map never scrolls past its boundary, where the black background shows through. I am confused about how to implement this. Below is my code; any help would be appreciated.

        bool HelloWorld::init()
        {
            if (!CCLayer::init())
            {
                return false;
            }
            const char* tmx = "isometric_grass_and_water.tmx";
            _tileMap = new CCTMXTiledMap();
            _tileMap->initWithTMXFile(tmx);
            this->addChild(_tileMap);
            this->setTouchEnabled(true);
            return true;
        }

        void HelloWorld::ccTouchesBegan(CCSet *touches, CCEvent *event)
        {
            CCSetIterator it;
            for (it = touches->begin(); it != touches->end(); ++it)
            {
                CCTouch* touch = (CCTouch*)it.operator*();
                CCLog("touches id: %d", touch->getID());
                oldLoc = touch->getLocationInView();
                oldLoc = CCDirector::sharedDirector()->convertToGL(oldLoc);
            }
        }

        void HelloWorld::ccTouchesMoved(CCSet *touches, CCEvent *event)
        {
            if (touches->count() == 1)
            {
                CCTouch* touch = (CCTouch*)(touches->anyObject());
                this->moveScreen(touch);
            }
            else if (touches->count() == 2)
            {
                this->scaleScreen(touches);
            }
        }

        void HelloWorld::moveScreen(CCTouch* touch)
        {
            CCPoint currentLoc = touch->getLocationInView();
            currentLoc = CCDirector::sharedDirector()->convertToGL(currentLoc);
            CCPoint moveTo = ccpSub(oldLoc, currentLoc);
            moveTo = ccpMult(moveTo, -1);
            oldLoc = currentLoc;
            this->setPosition(ccpAdd(this->getPosition(), ccp(moveTo.x, moveTo.y)));
        }
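
    One common way to restrict panning like this is to clamp the layer position after every move so the map's edges never cross the screen's edges. A sketch under the assumptions that the layer is unscaled, the map sits at the layer origin, and the map is larger than the screen (clampPosition is a hypothetical helper, called at the end of moveScreen after setPosition):

        // Keeps the visible window inside the map:
        // x stays in [winW - mapW, 0], y in [winH - mapH, 0].
        void HelloWorld::clampPosition()
        {
            CCSize win = CCDirector::sharedDirector()->getWinSize();
            CCSize map = _tileMap->getContentSize();
            CCPoint pos = this->getPosition();
            pos.x = MAX(win.width  - map.width,  MIN(0.0f, pos.x));
            pos.y = MAX(win.height - map.height, MIN(0.0f, pos.y));
            this->setPosition(pos);
        }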

  • tunnel effect cocos2d

    - by samfisher
    I am looking to create a tunnel effect like the ones referenced below in cocos2d (iOS). Could anyone suggest any pointers? ref Video 1, ref Video 2. So far I have tried several ring-shaped sprites with decreasing scale, all centred on the same point, with Z decreasing for each smaller sprite. I then animate each ring with CCScaleTo, growing it to a scale of 2.0 over the animation duration, but it does not come anywhere near the tunnel effect shown in the references. Thanks, sam

  • Unity Particle System collision detection problem

    - by Krav
    I'm using Unity 3.5.5f3, which has the Shuriken particle system. I've made a blood particle system based on Unity's demos (Exploding Paint [Blood]). The blood flows, and when it collides with a Plane transform that I've created, a small pool of blood spawns as a collision sub-emitter. My main problem is that when I want to add another object to collide with, it just doesn't work. When I create a cube and set it as a collision plane, the collision only occurs halfway up the cube. What I want to happen: when a particle reaches the cube's surface, the sub-emitter activates; when the surface is horizontal the pool appears horizontally, and when it is vertical, vertically. Right now it just appears horizontally every time, as in the picture. How can I solve this?

  • What could cause a pixel shader to paint outside the lines of the vertex shader output?

    - by Rei Miyasaka
    From what I understand, the pixels that a pixel shader operates on are specified implicitly by the SV_POSITION output (in DirectX) of the vertex shader. What then could cause a pixel shader to render in the middle of nowhere? I used the new Visual Studio 2012 graphics debugger to visualize my vertex and pixel shader output. This is the output from a DrawIndexed() call that draws a cube (screenshot): the pink part is the rendered output of the pixel shader, which takes the cube on its left as its input. The vertex shader code:

        cbuffer Buf
        {
            float4x4 final;
        };

        struct In
        {
            float4 pos:POSITION;
            float3 norm:NORMAL;
            float2 texuv:TEXCOORD;
        };

        struct Out
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        Out main(In input)
        {
            Out output;
            output.pos = mul(input.pos, final);
            output.col = float4(1.0f, 0.5f, 0.5f, 1.0f);
            output.tex = input.texuv;
            return output;
        }

    And the pixel shader:

        struct In
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        float4 main(In input) : SV_TARGET
        {
            return input.col;
        }

    The raster stage is the only thing between the vertex shader and the pixel shader, so my suspicion is that it's some raster stage setting. But the raster stage shouldn't change the shape of the vertex shader output so drastically, should it?

  • How can I import models from Blender into jMonkeyEngine?

    - by Nathan Sabruka
    I have some Blender model files (Blender version 2.6) which I would like to use with the jMonkeyEngine SDK. However, when I use Blender's native .obj exporter, I can't import the result in jMonkeyEngine (the model simply fails to import or looks messed up). I've tried importing .obj files and .blend files directly into the jMonkeyEngine SDK to no avail. I've also tried to use various OGRE exporters to export .scene and .material files, but only the .scene file is created. Is there a simple way to export files from Blender into the jMonkeyEngine SDK? EDIT: I seem to have found something in Blender. Under add-ons, there's a warning in the OGRE exporter: "'.mesh' output requires OgreCommandLineTools". However, I have already installed those tools under the C drive. Has anyone else encountered this issue?

  • SlimDX Texture2D from DataRectangle array

    - by Rebekah Bryant
    I'm totally new to DirectX. I'm using SlimDX to create a texture consisting of 13046 DataRectangles. Here's my code. It breaks in the Texture2D constructor with "E_INVALIDARG: An invalid parameter was passed to the returning function (-2147024809)". inParms is just a struct containing a handle to a Panel.

        public Renderer(Parameters inParms, ref DataRectangle[] inShapes)
        {
            Texture2DDescription description = new Texture2DDescription()
            {
                Width = 500,
                Height = 500,
                MipLevels = 1,
                ArraySize = inShapes.Length,
                Format = Format.R32G32B32_Float,
                SampleDescription = new SampleDescription(1, 0),
                Usage = ResourceUsage.Default,
                BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
                CpuAccessFlags = CpuAccessFlags.None,
                OptionFlags = ResourceOptionFlags.None
            };

            SwapChainDescription chainDescription = new SwapChainDescription()
            {
                BufferCount = 1,
                IsWindowed = true,
                Usage = Usage.RenderTargetOutput,
                ModeDescription = new ModeDescription(0, 0, new Rational(60, 1), Format.R8G8B8A8_UNorm),
                SampleDescription = new SampleDescription(1, 0),
                Flags = SwapChainFlags.None,
                OutputHandle = inParms.Handle,
                SwapEffect = SwapEffect.Discard
            };

            Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.None,
                chainDescription, out mDevice, out mSwapChain);
            Texture2D texture = new Texture2D(Device, description, inShapes);
        }

    EDIT: Running with the Debug flag set, I got:

        D3D11 ERROR: ID3D11Device::CreateTexture2D: The format (0x6, R32G32B32_FLOAT) cannot be bound as a RenderTarget, or cast to a format that could be bound as a RenderTarget. This is because the current graphics implementation does not even support this Format. Therefore this format does not support D3D11_BIND_RENDER_TARGET. Use CheckFormatSupport to check Format support. [ STATE_CREATION ERROR #92: CREATETEXTURE2D_UNSUPPORTEDFORMAT]
        D3D11 ERROR: ID3D11Device::CreateTexture2D: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #104: CREATETEXTURE2D_INVALIDARG_RETURN]
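
    The debug output names the cause: render-target support for R32G32B32_Float (three 32-bit channels, no alpha) is optional in D3D11 and missing on most hardware, so requesting BindFlags.RenderTarget with it fails; R32G32B32A32_Float is the usual substitute. As the message suggests, support can also be queried up front. A sketch of that check against the native API (shown in C++; SlimDX wraps the same call):

        #include <d3d11.h>

        // Returns true when 'format' can be bound as a render target on 'device'.
        bool CanRenderTo(ID3D11Device* device, DXGI_FORMAT format)
        {
            UINT support = 0;
            if (FAILED(device->CheckFormatSupport(format, &support)))
                return false;   // format not supported at all
            return (support & D3D11_FORMAT_SUPPORT_RENDER_TARGET) != 0;
        }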

  • 3Ds Max is exporting model with more normals than vertices

    - by Delta
    I made a simple teapot with the "Create Standard Primitives" option and exported it as a COLLADA file, and ended up with this:

        <float_array id="Teapot001-POSITION-array" count="1590"
        <float_array id="Teapot001-Normal0-array" count="9216"

    As far as I know there should be only one normal per vertex; am I wrong? What am I supposed to do with that many normals? Just put them in the normal buffer all at once?
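
    The counts themselves explain what is going on, assuming the usual three floats per entry: 1590 / 3 = 530 positions, while 9216 / 3 = 3072 normals, i.e. one normal per triangle corner (3072 / 3 = 1024 triangles). COLLADA indexes positions and normals independently in the primitive's <p> index list, so a renderer that wants a single index buffer has to weld each unique (position index, normal index) pair into its own vertex, exactly as with OBJ files; duplicated normals across smooth regions are the price of that flattening.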

  • Why am I not getting an sRGB default framebuffer?

    - by Aaron Rotenberg
    I'm trying to make my OpenGL Haskell program gamma correct by making appropriate use of sRGB framebuffers and textures, but I'm running into issues making the default framebuffer sRGB. Consider the following Haskell program, compiled for 32-bit Windows using GHC and linked against 32-bit freeglut:

        import Foreign.Marshal.Alloc(alloca)
        import Foreign.Ptr(Ptr)
        import Foreign.Storable(Storable, peek)
        import Graphics.Rendering.OpenGL.Raw
        import qualified Graphics.UI.GLUT as GLUT
        import Graphics.UI.GLUT(($=))

        main :: IO ()
        main = do
            (_progName, _args) <- GLUT.getArgsAndInitialize
            GLUT.initialDisplayMode $= [GLUT.SRGBMode]
            _window <- GLUT.createWindow "sRGB Test"
            -- To prove that I actually have freeglut working correctly.
            -- This will fail at runtime under classic GLUT.
            GLUT.closeCallback $= Just (return ())
            glEnable gl_FRAMEBUFFER_SRGB
            colorEncoding <- allocaOut $ glGetFramebufferAttachmentParameteriv
                gl_FRAMEBUFFER gl_FRONT_LEFT gl_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING
            print colorEncoding

        allocaOut :: Storable a => (Ptr a -> IO b) -> IO a
        allocaOut f = alloca $ \ptr -> do
            f ptr
            peek ptr

    On my desktop (Windows 8 64-bit with a GeForce GTX 760 graphics card) this program outputs 9729, a.k.a. gl_LINEAR, indicating that the default framebuffer is using linear color space, even though I explicitly requested an sRGB window. This is reflected in the rendering results of the actual program I'm trying to write - everything looks washed out because my linear color values aren't being converted to sRGB before being written to the framebuffer. On the other hand, on my laptop (Windows 7 64-bit with an Intel graphics chip), the program prints 0 (huh?) and I get an sRGB default framebuffer by default whether I request one or not! And on both machines, if I manually create a non-default framebuffer bound to an sRGB texture, the program correctly prints 35904, a.k.a. gl_SRGB. Why am I getting different results on different hardware? Am I doing something wrong? How can I get an sRGB framebuffer consistently on all hardware and target OSes?

  • runtime error: invalid memory address or nil pointer dereference

    - by Klink
    I want to learn OpenGL 3.0 with golang, but when I try to run the following code I get a runtime panic.

        package main

        import (
            "os"
            //"errors"
            "fmt"
            //gl "github.com/chsc/gogl/gl33"
            //"github.com/jteeuwen/glfw"
            "github.com/go-gl/gl"
            "github.com/go-gl/glfw"
            "runtime"
            "time"
        )

        var (
            width  int = 640
            height int = 480
        )

        var (
            points = []float32{0.0, 0.8, -0.8, -0.8, 0.8, -0.8}
        )

        func initScene() {
            gl.Init()
            gl.ClearColor(0.0, 0.5, 1.0, 1.0)
            gl.Enable(gl.CULL_FACE)
            gl.Viewport(0, 0, 800, 600)
        }

        func glfwInitWindowContext() {
            if err := glfw.Init(); err != nil {
                fmt.Fprintf(os.Stderr, "glfw_Init: %s\n", err)
                glfw.Terminate()
            }
            glfw.OpenWindowHint(glfw.FsaaSamples, 1)
            glfw.OpenWindowHint(glfw.WindowNoResize, 1)
            if err := glfw.OpenWindow(width, height, 0, 0, 0, 0, 32, 0, glfw.Windowed); err != nil {
                fmt.Fprintf(os.Stderr, "glfw_Window: %s\n", err)
                glfw.CloseWindow()
            }
            glfw.SetSwapInterval(1)
            glfw.SetWindowTitle("Title")
        }

        func drawScene() {
            for glfw.WindowParam(glfw.Opened) == 1 {
                gl.Clear(gl.COLOR_BUFFER_BIT)

                vertexShaderSrc := `#version 120
                attribute vec2 coord2d;
                void main(void) {
                    gl_Position = vec4(coord2d, 0.0, 1.0);
                }`
                vertexShader := gl.CreateShader(gl.VERTEX_SHADER)
                vertexShader.Source(vertexShaderSrc)
                vertexShader.Compile()

                fragmentShaderSrc := `#version 120
                void main(void) {
                    gl_FragColor[0] = 0.0;
                    gl_FragColor[1] = 0.0;
                    gl_FragColor[2] = 1.0;
                }`
                fragmentShader := gl.CreateShader(gl.FRAGMENT_SHADER)
                fragmentShader.Source(fragmentShaderSrc)
                fragmentShader.Compile()

                program := gl.CreateProgram()
                program.AttachShader(vertexShader)
                program.AttachShader(fragmentShader)
                program.Link()

                attribute_coord2d := program.GetAttribLocation("coord2d")
                program.Use()
                //attribute_coord2d.AttribPointer(size, typ, normalized, stride, pointer)
                attribute_coord2d.EnableArray()
                attribute_coord2d.AttribPointer(0, 3, false, 0, &(points[0]))
                //gl.DrawArrays(gl.TRIANGLES, 0, len(points))
                gl.DrawArrays(gl.TRIANGLES, 0, 3)

                glfw.SwapBuffers()
                inputHandler()
                time.Sleep(100 * time.Millisecond)
            }
        }

        func inputHandler() {
            glfw.Enable(glfw.StickyKeys)
            if glfw.Key(glfw.KeyEsc) == glfw.KeyPress {
                //gl.DeleteBuffers(2, &uiVBO[0])
                glfw.Terminate()
            }
            if glfw.Key(glfw.KeyF2) == glfw.KeyPress {
                glfw.SetWindowTitle("Title2")
                fmt.Println("Changed to 'Title2'")
                fmt.Println(len(points))
            }
            if glfw.Key(glfw.KeyF1) == glfw.KeyPress {
                glfw.SetWindowTitle("Title1")
                fmt.Println("Changed to 'Title1'")
            }
        }

        func main() {
            runtime.LockOSThread()
            glfwInitWindowContext()
            initScene()
            drawScene()
        }

    And after that:

        panic: runtime error: invalid memory address or nil pointer dereference
        [signal 0xb code=0x1 addr=0x0 pc=0x41bc6f74]

        goroutine 1 [syscall]:
        github.com/go-gl/gl._Cfunc_glDrawArrays(0x4, 0x7f8500000003)
            /tmp/go-build463568685/github.com/go-gl/gl/_obj/_cgo_defun.c:610 +0x2f
        github.com/go-gl/gl.DrawArrays(0x4, 0x3, 0x0, 0x45bd70)
            /tmp/go-build463568685/github.com/go-gl/gl/_obj/gl.cgo1.go:1922 +0x33
        main.drawScene()
            /home/klink/Dev/Go/gogl/gopher/exper.go:85 +0x1e6
        main.main()
            /home/klink/Dev/Go/gogl/gopher/exper.go:116 +0x27

        goroutine 2 [syscall]:
        created by runtime.main
            /build/buildd/golang-1/src/pkg/runtime/proc.c:221

        exit status 2

  • How to efficiently render resizable GUI elements in DirectX?

    - by PolGraphic
    I wonder what the most efficient way to render GUI elements would be. For constant-size elements (which can still move), a texture atlas seems good. But what about resizable elements, say a panel with textured borders? Is there a better way than just rendering nine rectangles with textures on them (I guess one texture, with different texture coordinates for the top-left corner, borders, middle and so on, used in the shader)?
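
    What the question sketches is the standard nine-slice (9-patch) technique, and it is generally considered a reasonable choice: corners keep their pixel size, edges stretch along one axis, and the centre stretches along both, all sourced from one texture so the nine quads can share a vertex buffer and a single draw call. A minimal sketch of building the quads; the Rect and Quad types are hypothetical, and the texture is assumed to be split into thirds in UV space:

        #include <array>

        struct Rect { float x, y, w, h; };   // position or UV rectangle
        struct Quad { Rect pos; Rect uv; };

        // Build the nine quads of a 9-slice panel. 'panel' is the on-screen
        // rectangle, 'border' the fixed border thickness in pixels.
        std::array<Quad, 9> nineSlice(Rect panel, float border)
        {
            float xs[4] = { panel.x, panel.x + border,
                            panel.x + panel.w - border, panel.x + panel.w };
            float ys[4] = { panel.y, panel.y + border,
                            panel.y + panel.h - border, panel.y + panel.h };
            float us[4] = { 0.0f, 1.0f / 3.0f, 2.0f / 3.0f, 1.0f };

            std::array<Quad, 9> quads;
            for (int row = 0; row < 3; ++row)
                for (int col = 0; col < 3; ++col)
                    quads[row * 3 + col] = {
                        { xs[col], ys[row], xs[col + 1] - xs[col], ys[row + 1] - ys[row] },
                        { us[col], us[row], us[col + 1] - us[col], us[row + 1] - us[row] },
                    };
            return quads;
        }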

  • Unexpected behaviour with glFramebufferTexture1D

    - by Roshan
    I am using the render-to-texture concept with glFramebufferTexture1D. I am drawing a cube into a non-default FBO, with all vertices at -1 or 1 (the maximum) in the X, Y and Z directions, and the viewport set to WIDTH x HEIGHT while rendering into that FBO. The background is blue and the cube is white. I have created a 1D texture and attached it to the FBO as the color attachment, with the texture's width equal to width*height of the FBO viewport. Now, when I render this texture onto another cube in the default framebuffer, I see a continuous run of white at the start or end of each face of the cube; part of the face is white and the rest is blue. I am not sure whether this behavior is correct. I expected all the texels to be white, since I use -1 and 1 coordinates for the cube rendered into the non-default FBO. Code:

        #define WIDTH 3
        #define HEIGHT 3

        GLfloat vertices8[] = {
             1.0f,  1.0f,  1.0f,  -1.0f,  1.0f,  1.0f,  -1.0f, -1.0f,  1.0f,   1.0f, -1.0f,  1.0f,  // face 1
             1.0f, -1.0f, -1.0f,  -1.0f, -1.0f, -1.0f,  -1.0f,  1.0f, -1.0f,   1.0f,  1.0f, -1.0f,  // face 2
             1.0f,  1.0f,  1.0f,   1.0f, -1.0f,  1.0f,   1.0f, -1.0f, -1.0f,   1.0f,  1.0f, -1.0f,  // face 3
            -1.0f,  1.0f,  1.0f,  -1.0f,  1.0f, -1.0f,  -1.0f, -1.0f, -1.0f,  -1.0f, -1.0f,  1.0f,  // face 4
             1.0f,  1.0f,  1.0f,   1.0f,  1.0f, -1.0f,  -1.0f,  1.0f, -1.0f,  -1.0f,  1.0f,  1.0f,  // face 5
            -1.0f, -1.0f,  1.0f,  -1.0f, -1.0f, -1.0f,   1.0f, -1.0f, -1.0f,   1.0f, -1.0f,  1.0f   // face 6
        };

        GLfloat vertices[] = {
             0.5f,  0.5f,  0.5f,  -0.5f,  0.5f,  0.5f,  -0.5f, -0.5f,  0.5f,   0.5f, -0.5f,  0.5f,  // face 1
             0.5f, -0.5f, -0.5f,  -0.5f, -0.5f, -0.5f,  -0.5f,  0.5f, -0.5f,   0.5f,  0.5f, -0.5f,  // face 2
             0.5f,  0.5f,  0.5f,   0.5f, -0.5f,  0.5f,   0.5f, -0.5f, -0.5f,   0.5f,  0.5f, -0.5f,  // face 3
            -0.5f,  0.5f,  0.5f,  -0.5f,  0.5f, -0.5f,  -0.5f, -0.5f, -0.5f,  -0.5f, -0.5f,  0.5f,  // face 4
             0.5f,  0.5f,  0.5f,   0.5f,  0.5f, -0.5f,  -0.5f,  0.5f, -0.5f,  -0.5f,  0.5f,  0.5f,  // face 5
            -0.5f, -0.5f,  0.5f,  -0.5f, -0.5f, -0.5f,   0.5f, -0.5f, -0.5f,   0.5f, -0.5f,  0.5f   // face 6
        };

        GLuint indices[] = {
             0,  2,  1,   0,  3,  2,
             4,  5,  6,   4,  6,  7,
             8,  9, 10,   8, 10, 11,
            12, 15, 14,  12, 14, 13,
            16, 17, 18,  16, 18, 19,
            20, 23, 22,  20, 22, 21
        };

        GLfloat texcoord[] = {
            0.0, 0.0,  1.0, 0.0,  1.0, 1.0,  0.0, 1.0,
            0.0, 0.0,  1.0, 0.0,  1.0, 1.0,  0.0, 1.0,
            0.0, 0.0,  1.0, 0.0,  1.0, 1.0,  0.0, 1.0,
            0.0, 0.0,  1.0, 0.0,  1.0, 1.0,  0.0, 1.0,
            0.0, 0.0,  1.0, 0.0,  1.0, 1.0,  0.0, 1.0,
            0.0, 0.0,  1.0, 0.0,  1.0, 1.0,  0.0, 1.0
        };

        glGenTextures(1, &id1);
        glBindTexture(GL_TEXTURE_1D, id1);
        glGenFramebuffers(1, &Fboid);
        glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, WIDTH * HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

        glBindFramebuffer(GL_FRAMEBUFFER, Fboid);
        glFramebufferTexture1D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_1D, id1, 0);
        draw_cube();
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        draw();

        draw_cube()
        {
            glViewport(0, 0, WIDTH, HEIGHT);
            glClearColor(0.0f, 0.0f, 0.5f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            glEnableVertexAttribArray(glGetAttribLocation(temp.psId, "position"));
            glVertexAttribPointer(glGetAttribLocation(temp.psId, "position"),
                                  3, GL_FLOAT, GL_FALSE, 0, vertices8);
            glDrawArrays(GL_TRIANGLE_FAN, 0, 24);
        }

        draw()
        {
            glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
            glClearDepth(1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId, "tk_position"));
            glVertexAttribPointer(glGetAttribLocation(shader_data.psId, "tk_position"),
                                  3, GL_FLOAT, GL_FALSE, 0, vertices);
            nResult = GL_ERROR_CHECK((GL_NO_ERROR,
                "glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 0, vertices);"));
            glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId, "inputtexcoord"));
            glVertexAttribPointer(glGetAttribLocation(shader_data.psId, "inputtexcoord"),
                                  2, GL_FLOAT, GL_FALSE, 0, texcoord);
            glBindTexture(*target11, id1);
            glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, indices);
        }

    When I change WIDTH and HEIGHT to 2 and call glReadPixels with width and height equal to 4 in draw_cube(), I see the first two pixels white, the next two blue (the glClearColor), then two white, then two blue, and so on. Now when I change the width parameter in glTexImage1D to 16, ideally I should see alternating patches of white and blue, right? But that is not the case here. Why?

  • Tessellation Texture Coordinates

    - by Stuart Martin
    Firstly, some info: I'm using DirectX 11 and C++, and I'm a fairly good programmer, but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have hit a snag. My current system produces a terrain model from a height map, complete with multiple texture coordinates, normals, binormals and tangents for rendering. When I was using a simple vertex and pixel shader combination everything worked perfectly, but since adding a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high-detail model, but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull into the domain, and finally into the pixel shader for rendering. My only thought is that my hull shader passes the information into the domain shader per patch, and this produces the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as the texturing. Below is a copy of my hull shader, which I think does not work correctly because of the way I am passing the data through. If anyone can help me out by suggesting an alternative way to get the required data into the pixel shader, or by showing me the correct way to handle the data in the hull shader, I'd be very thankful!

        cbuffer TessellationBuffer
        {
            float tessellationAmount;
            float3 padding;
        };

        struct HullInputType
        {
            float3 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
            float3 tangent : TANGENT;
            float3 binormal : BINORMAL;
            float2 tex2 : TEXCOORD1;
        };

        struct ConstantOutputType
        {
            float edges[3] : SV_TessFactor;
            float inside : SV_InsideTessFactor;
        };

        struct HullOutputType
        {
            float3 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
            float3 tangent : TANGENT;
            float3 binormal : BINORMAL;
            float2 tex2 : TEXCOORD1;
            float4 depthPosition : TEXCOORD2;
        };

        ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch,
                                                      uint patchId : SV_PrimitiveID)
        {
            ConstantOutputType output;
            output.edges[0] = tessellationAmount;
            output.edges[1] = tessellationAmount;
            output.edges[2] = tessellationAmount;
            output.inside = tessellationAmount;
            return output;
        }

        [domain("tri")]
        [partitioning("integer")]
        [outputtopology("triangle_cw")]
        [outputcontrolpoints(3)]
        [patchconstantfunc("ColorPatchConstantFunction")]
        HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch,
                                       uint pointId : SV_OutputControlPointID,
                                       uint patchId : SV_PrimitiveID)
        {
            HullOutputType output;
            output.position = patch[pointId].position;
            output.tex = patch[pointId].tex;
            output.tex2 = patch[pointId].tex2;
            output.normal = patch[pointId].normal;
            output.tangent = patch[pointId].tangent;
            output.binormal = patch[pointId].binormal;
            return output;
        }

    Edited to include the domain shader:

        [domain("tri")]
        PixelInputType ColorDomainShader(ConstantOutputType input,
                                         float3 uvwCoord : SV_DomainLocation,
                                         const OutputPatch<HullOutputType, 3> patch)
        {
            float3 vertexPosition;
            PixelInputType output;

            // Determine the position of the new vertex.
            vertexPosition = uvwCoord.x * patch[0].position
                           + uvwCoord.y * patch[1].position
                           + uvwCoord.z * patch[2].position;

            output.position = mul(float4(vertexPosition, 1.0f), worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);
            output.depthPosition = output.position;

            output.tex = patch[0].tex;
            output.tex2 = patch[0].tex2;
            output.normal = patch[0].normal;
            output.tangent = patch[0].tangent;
            output.binormal = patch[0].binormal;
            return output;
        }

  • IrrKlang with Ogre

    - by Vinnie
    I'm trying to set up sound in my Ogre3D project. I have installed irrKlang 1.4.0 and added its include and lib directories to my project's VC++ Include and Library directories, but I still get a linker error when I build. Any suggestions?

        error LNK2019: unresolved external symbol "__declspec(dllimport) class irrklang::ISoundEngine * __cdecl irrklang::createIrrKlangDevice(enum irrklang::E_SOUND_OUTPUT_DRIVER,int,char const *,char const *)" (_imp?createIrrKlangDevice@irrklang@@YAPAVISoundEngine@1@W4E_SOUND_OUTPUT_DRIVER@1@HPBD1@Z) referenced in function "public: __thiscall SoundManager::SoundManager(void)" (??0SoundManager@@QAE@XZ)
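
    An unresolved createIrrKlangDevice at link time usually means the irrKlang import library itself is not being linked: adding the directories only tells the linker where to look, not what to pull in. A sketch of the usual fix, assuming the stock irrKlang 1.4 distribution where the import library is named irrKlang.lib:

        // In a source file (or add irrKlang.lib under Linker > Input >
        // Additional Dependencies). The irrKlang DLL must then sit next to
        // the executable at runtime.
        #include <irrKlang.h>
        #pragma comment(lib, "irrKlang.lib")

        int main()
        {
            irrklang::ISoundEngine* engine = irrklang::createIrrKlangDevice();
            if (engine)
                engine->drop();   // release when done
            return 0;
        }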

  • Isometric drawing order with larger-than-single-tile images - drawing order algorithm?

    - by Roger Smith
    I have an isometric map over which I place various images. Most images fit over a single tile, but some are slightly larger; for example, I have a bed of size 2x3 tiles. This creates a problem when drawing my objects to the screen, as some tiles erroneously overlap other tiles. The two solutions I know of are either splitting the image into 1x1 tile segments or implementing my own draw-order algorithm, for example by assigning each image a number: the image with number 1 is drawn first, then 2, 3, etc. Does anyone have advice on what I should do? It seems to me that splitting an isometric image is far from obvious: how do you decide which parts of the image are "in" a particular tile? I can't afford to split up all of my images manually either. The draw-order algorithm seems like the nicer choice, but I am not sure it will be easy to implement. I can't work out, in my head, how to deal with situations where changing the index of one image has a knock-on effect on many other images. If anyone has any resources or tutorials on this, I would be most grateful.
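
    For what it's worth, a hedged sketch of the draw-order route: rather than maintaining explicit sequence numbers (which is what causes the knock-on renumbering), recompute the order each frame with a painter's sort keyed on each object's nearest-occupied tile, so the 2x3 bed sorts as if it stood on its closest corner. All names below are hypothetical:

        #include <algorithm>
        #include <vector>

        struct Placed
        {
            int tileX, tileY;   // tile of the object's origin in grid space
            int tilesW, tilesH; // footprint, e.g. 2x3 for the bed
            // ... image handle, screen position, etc.
        };

        // Painter's algorithm: in a standard isometric view, objects whose
        // footprint reaches a larger (x + y) sit nearer the camera and must
        // be drawn later. Resorting every frame avoids manual renumbering.
        void sortForDrawing(std::vector<Placed>& objects)
        {
            std::sort(objects.begin(), objects.end(),
                      [](const Placed& a, const Placed& b) {
                          int ka = (a.tileX + a.tilesW - 1) + (a.tileY + a.tilesH - 1);
                          int kb = (b.tileX + b.tilesW - 1) + (b.tileY + b.tilesH - 1);
                          return ka < kb;
                      });
            // draw the objects in this order afterwards
        }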

  • How to move a UIView along a curved CGPath as the user drags the view

    - by Felipe Cypriano
    I'm trying to build an interface where the user can move his finger around the screen and a list of images moves along a path. The idea is that the images' centres never leave the path. Most of what I have found is about animating along a CGPath, not about actually using the path as the track for a user movement. The objects need to stay on the path even if the user's finger isn't over the path. For example (image below), if the object is at the beginning of the path and the user touches anywhere on the screen and moves his finger from left to right, I need the object to move from left to right, but following the path, that is, going up as it goes right towards the path's end. This is the path I've drawn; imagine a view (any image) that the user can touch and drag along the path, with no need to move the finger exactly over the path. This is how I'm creating the path:

        CGPoint endPointUp = CGPointMake(315, 124);
        CGPoint endPointDown = CGPointMake(0, 403);
        CGPoint controlPoint1 = CGPointMake(133, 187);
        CGPoint controlPoint2 = CGPointMake(174, 318);

        CGMutablePathRef path = CGPathCreateMutable();
        CGPathMoveToPoint(path, NULL, endPointUp.x, endPointUp.y);
        CGPathAddCurveToPoint(path, NULL, controlPoint1.x, controlPoint1.y,
                              controlPoint2.x, controlPoint2.y,
                              endPointDown.x, endPointDown.y);

    Any idea how I can achieve this?

  • First Minecraft mod not working: make a new sword

    - by yamikoWebs
    I am making my first mod and cannot see what is wrong with it. I am using MCP and ModLoader. For my first mod I was going to make swords. I started by making new EnumToolMaterials:

        WOOD(0, 59, 2.0F, 0, 15),
        STONE(1, 131, 4.0F, 1, 5),
        IRON(2, 250, 6.0F, 2, 14),
        LAPIS(3, 750, 7.0F, 2, 14),
        OBSIDIAN(3, 1000, 7.5F, 3, 12),
        EMERALD(3, 1561, 8.0F, 3, 10),  // diamond
        GREEN(3, 2000, 9.0F, 4, 10),    // emerald
        GOLD(0, 200, 12.0F, 0, 22);

    Then here is the mod class:

        public class _Mod_Yamiko extends BaseMod {

            /* mod items */
            public static final Item swordLapis =
                (new ItemSword(600, EnumToolMaterial.LAPIS)).setItemName("swordLapis");
            public static final Item swordObsidian =
                (new ItemSword(601, EnumToolMaterial.OBSIDIAN)).setItemName("swordObsidian");
            public static final Item swordGreen =
                (new ItemSword(602, EnumToolMaterial.GREEN)).setItemName("swordGreen");

            public void load() {
                // set images
                swordLapis.iconIndex = ModLoader.addOverride("/gui/items.png", "/gui/swordLapis.png");
                ModLoader.addName(swordLapis, "Lapis Sword");
                // craft
                ModLoader.addRecipe(new ItemStack(_Mod_Yamiko.swordLapis, 1), new Object[]{
                    " X ", " X ", " Y ", 'X', Block.dirt, 'Y', Item.stick
                });
            }

            public String getVersion() {
                return "0.1";
            }
        }

    Then I made a 16×16 .png image. I am not sure where to save it, so I recompiled and reobfuscated, took the mod files and put them in my local Minecraft install, and added the image where it should be. There are no problems when playing, but I cannot craft the new sword.

  • GLSL: is it possible to offset vertices based on height map colour?

    - by Rob
    I am attempting to generate some terrain based upon a heightmap. I have generated a 32 x 32 grid and a corresponding height map. In my vertex shader I am trying to offset the position on the Y axis based upon the colour of the heightmap, white vertices being higher than black ones.

        // Vertex Shader Code
        #version 330

        uniform mat4 modelMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;
        uniform sampler2D heightmap;

        layout (location=0) in vec4 vertexPos;
        layout (location=1) in vec4 vertexColour;
        layout (location=3) in vec2 vertexTextureCoord;
        layout (location=4) in float offset;

        out vec4 fragCol;
        out vec4 fragPos;
        out vec2 fragTex;

        void main()
        {
            // Retrieve the current texel's colour
            vec4 hmColour = texture(heightmap, vertexTextureCoord);

            // Offset the y position by the value of the current texel's colour?
            vec4 offset = vec4(vertexPos.x, vertexPos.y + hmColour.r, vertexPos.z, 1.0);

            // Final position
            gl_Position = projectionMatrix * viewMatrix * modelMatrix * offset;

            // Data sent to the fragment shader
            fragCol = vertexColour;
            fragPos = vertexPos;
            fragTex = vertexTextureCoord;
        }

    However, the code I have produced only creates a grid with none of the Y vertices higher than any others.

  • Partial recalculation of visibility on a 2D uniform grid

    - by Martin Källman
    Problem: Imagine that we have a 2D uniform grid of dimensions N x N. For this grid we have also pre-computed a visibility look-up table, e.g. with DDA, which answers the boolean query "is cell X visible from cell Y?". The look-up table is a complete graph K_N of the cells V in the grid, with each edge E being a binary value denoting the visibility between its vertices.

    Question: If any given cell has its visibility modified, is it possible to extract the subset E_delta of edges which must have their visibility recomputed due to the change, so as to avoid a full recomputation for the entire grid? (Which is N(N-1)/2 or N^2, depending on the implementation.)

    Update: If it is not possible to solve this in closed form, then maintaining a separate mapping from each cell to every cell pair whose line intersects said cell might also be an option. This obviously consumes more memory, but the data is static. The increased memory requirement could be reduced by introducing a hierarchy, subdividing the grid into smaller parts; by doing so, the above mapping can be reused for each sub-grid. This would come at a cost in terms of increased computation relative to the number of subdivisions, and would also require a resumable ray-casting algorithm.
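
    A hedged sketch of the mapping idea from the update: while running DDA to build the table, record for every traversed cell which pair's sight line crossed it; afterwards, E_delta for a modified cell is exactly that cell's recorded pair list plus the edges incident to the cell itself. All names below are hypothetical:

        #include <algorithm>
        #include <cstdint>
        #include <unordered_map>
        #include <utility>
        #include <vector>

        using Cell = uint32_t;               // linear cell index in the N x N grid
        using Edge = std::pair<Cell, Cell>;  // a visibility pair, stored with first < second

        // Filled while running DDA for the full table: every time the ray
        // for pair (a, b) steps through cell c, record (a, b) under c.
        std::unordered_map<Cell, std::vector<Edge>> crossingPairs;

        // Edges whose visibility must be recomputed after 'changed' is
        // modified: every pair whose line crosses the cell, plus every
        // pair that has the cell as an endpoint.
        std::vector<Edge> edgesToRecompute(Cell changed, Cell cellCount)
        {
            std::vector<Edge> delta = crossingPairs[changed];
            for (Cell other = 0; other < cellCount; ++other)
                if (other != changed)
                    delta.emplace_back(std::min(changed, other),
                                       std::max(changed, other));
            return delta;
        }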

  • Algorithm to shift the car

    - by Simran kaur
    I have a track that can be divided into n tracks, and a car GameObject. The track has transforms such that part of the track's width lies on the negative X axis and the rest on the positive. Requirement: one move should cross one track; on every move (left or right), I want the car to reach the exact centre of the next track on that side. My problem: because of the negative values, I am missing something somewhere, and the car does not move to the desired positions. Below is my code; any help would be appreciated. The variable tracks is the number of tracks the whole track is divided into, and dist is the total width of the complete track. On left movement:

        if (Input.GetKeyDown(KeyCode.LeftArrow))
        {
            if (this.transform.position.x < r.renderer.bounds.min.x + box.size.x)
            {
                this.transform.position = new Vector3(
                    r.renderer.bounds.min.x + Mathf.FloorToInt(box.size.x),
                    this.transform.position.y, this.transform.position.z);
            }
            else
            {
                int tracknumber = Mathf.RoundToInt(dist - transform.position.x) / tracks;
                float averagedistance = (tracknumber * (dist / tracks)
                    + (tracknumber - 1) * (dist / tracks)) / 2;
                if (transform.position.x > averagedistoftracks)
                {
                    amountofmovement = amountofmovement + (transform.position.x - averagedistance);
                }
                else
                {
                    amountofmovement = amountofmovement - (averagedistance - transform.position.x);
                }
                this.transform.position = new Vector3(
                    this.transform.position.x - amountofmovement,
                    this.transform.position.y, this.transform.position.z);
            }
        }

  • Does DirectX implement Triple Buffering?

    - by Asik
    As AnandTech put it best in this 2009 article: "In render ahead, frames cannot be dropped. This means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX, they implement render ahead (from 0 to 8 frames with 3 being the default). The major difference in the technique we've described here is the ability to drop frames when they are outdated. Render ahead forces older frames to be displayed. Queues can help smoothness and stuttering as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is in lag (the more frames in the queue, the longer it takes to empty the queue and the older the frames are that are displayed)." As I understand it, the DirectX swap chain is merely a render-ahead queue, i.e. buffers cannot be dropped; the longer the chain, the greater the input latency. At the same time, I find it hard to believe that the most widely used graphics API would not implement such fundamental functionality correctly. Is there a way to get proper triple-buffered vertical synchronisation in DirectX?
