Search Results

Search found 1725 results on 69 pages for 'compute shader'.

Page 48/69

  • How do I reconfigure my GLES frame buffer after a rotation?

    - by Panda Pajama
    I am implementing interface rotation for my GLES based game for iOS, written in Xamarin.iOS with OpenTK. I am detecting the rotation by overriding WillRotate, in my UIViewController, and I correctly re-setup all of my projection matrices. However, when drawing a sprite, the image looks a bit blurrier on the landscape version compared to the portrait version, as you can see in the following closeups magnified 10x. Portrait (before rotating) Landscape (after rotating) In both cases, I'm using the same texture with the same sampler, the same shader, and the same GL state. I just changed the order of the parameters in the projection matrix, so the resulting sizes should be exactly the same pixelwise. Since this could be thought of as a window resize, I suppose that the framebuffer has to be recreated to the new size. When working on desktop apps on Direct3D11 (SharpDX), I would have to call swapChain.ResizeBuffers() to do this. I have tried setting AutoResize = true in my iPhoneOSGameView, but then the framebuffer gets clipped as I rotate the interface, and then everything disappears when rotating the interface again. I'm not doing anything strange, my framebuffer initialization is pretty vanilla: int scaling = (int)UIScreen.MainScreen.Scale; DeviceWidth = (int)UIScreen.MainScreen.Bounds.Width * scaling; DeviceHeight = (int)UIScreen.MainScreen.Bounds.Height * scaling; Size = new System.Drawing.Size((int)(DeviceWidth), (int)(DeviceHeight)); Bounds = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight); Frame = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight); ContextRenderingApi = EAGLRenderingAPI.OpenGLES2; AutoResize = true; LayerRetainsBacking = true; LayerColorFormat = EAGLColorFormat.RGBA8; I get inconsistent results when changing Size, Bounds and Frame on my CreateFrameBuffer override, but since the documentation is so incomplete (it has nothing on Bounds and Frame), I have resorted to randomly changing stuff here and there without really knowing what is going on. There is a similar question which has no answers. However, I don't know if they're experiencing the same problem as I am. Is my supposition that recreating the framebuffer is necessary, correct? If so, does anybody know how to do it correctly in OpenTK for Xamarin.iOS?
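
    For reference only: the raw GLES call for resizing offscreen storage is glRenderbufferStorage at the new pixel size. The sketch below is plain C over GLES2 and just illustrates the idea of recreating the attachments on resize; it is not the OpenTK/iPhoneOSGameView-specific answer (on iOS the colour storage normally comes from the CAEAGLLayer via renderbufferStorage:fromDrawable:, which OpenTK wraps), and the function name and parameters are hypothetical.

    ```c
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>   /* GL_RGBA8_OES */

    /* Hypothetical resize handler: re-allocate the renderbuffers backing the
     * framebuffer at the new pixel size and reset the viewport. */
    void resizeRenderbuffers(GLuint colorRb, GLuint depthRb, int width, int height)
    {
        glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);

        glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);

        glViewport(0, 0, width, height);
    }
    ```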

    Read the article

  • How do I apply skeletal animation from a .x (Direct X) file?

    - by Byte56
    Using the .x format to export a model from Blender, I can load a mesh, armature and animation. I have no problems generating the mesh and viewing models in game. Additionally, I have animations and the armature properly loaded into appropriate data structures. My problem is properly applying the animation to the models. I have the framework for applying the models and the code for selecting animations and stepping through frames. From what I understand, the AnimationKeys inside the AnimationSet supplies the transformations to transform the bind pose to the pose in the animated frame. As small example: Animation { {Armature_001_Bone} AnimationKey { 2; //Position 121; //number of frames 0;3; 0.000000, 0.000000, 0.000000;;, 1;3; 0.000000, 0.000000, 0.005524;;, 2;3; 0.000000, 0.000000, 0.022217;;, ... } AnimationKey { 0; //Quaternion Rotation 121; 0;4; -0.707107, 0.707107, 0.000000, 0.000000;;, 1;4; -0.697332, 0.697332, 0.015710, 0.015710;;, 2;4; -0.684805, 0.684805, 0.035442, 0.035442;;, ... } AnimationKey { 1; //Scale 121; 0;3; 1.000000, 1.000000, 1.000000;;, 1;3; 1.000000, 1.000000, 1.000000;;, 2;3; 1.000000, 1.000000, 1.000000;;, ... } } So, to apply frame 2, I would take the position, rotation and scale from frame 2, create a transformation matrix (call it Transform_A) from them and apply that matrix the vertices controlled by Armature_001_Bone at their weights. So I'd stuff TransformA into my shader and transform the vertex. Something like: vertexPos = vertexPos * bones[ int(bfs_BoneIndices.x) ] * bfs_BoneWeights.x; Where bfs_BoneIndices and bfs_BoneWeights are values specific to the current vertex. When loading in the mesh vertices, I transform them by the rootTransform and the meshTransform. This ensures they're oriented and scaled correctly for viewing the bind pose. The problem is when I create that transformation matrix (using the position, rotation and scale from the animation), it doesn't properly transform the vertex. There's likely more to it than just using the animation data. I also tried applying the bone transform hierarchies, still no dice. Basically I end up with some twisted models. It should also be noted that I'm working in openGL, so any matrix transposes that might need to be applied should be considered. What data do I need and how do I combine it for applying .x animations to models?
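
    For reference, the question's data feeds the standard linear-blend skinning setup: the TRS keys give each bone's local transform, the hierarchy is walked to get the animated world transform, and that is combined with the inverse bind-pose ("offset") matrix before weighting. Written with column vectors (OpenGL users may additionally need a transpose when uploading, as noted above):

    ```latex
    % L_b: local transform of bone b for the current frame, from the AnimationKeys.
    % W_b: animated world transform, accumulated down the hierarchy.
    % B_b: world transform of bone b in the bind pose, so B_b^{-1} is the
    %      per-bone inverse bind ("offset") matrix.
    \[
      L_b = T_b\,R_b\,S_b, \qquad
      W_b = W_{\mathrm{parent}(b)}\,L_b, \qquad
      \mathbf{v}' = \sum_b w_b\,\bigl(W_b\,B_b^{-1}\bigr)\,\mathbf{v}
    \]
    % w_b are the per-vertex bone weights (summing to 1); v is the bind-pose
    % vertex in the space the bind matrices were authored in.
    ```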

    Read the article

  • Difference between the terms Material & Effect

    - by codey
    I'm making an effect system right now (I think, because it may be a material system... or both!). The effects system follows the common (e.g. COLLADA, DirectX) effect framework abstraction of Effects have Techniques, Techniques have Passes, Passes have States & Shader Programs. An effect, according to COLLADA, defines the equations necessary for the visual appearance of geometry and screen-space image processing. Keeping with the abstraction, effects contain techniques. Each effect can contain one or many techniques (i.e. ways to generate the effect), each of which describes a different method for rendering that effect. The technique could be relate to quality (e.g. high precision, high LOD, etc.), or in-game-situation (e.g. night/day, power-up-mode, etc.). Techniques hold a description of the textures, samplers, shaders, parameters, & passes necessary for rendering this effect using one method. Some algorithms require several passes to render the effect. Pipeline descriptions are broken into an ordered collection of Pass objects. A pass provides a static declaration of all the render states, shaders, & settings for "one rendering pipeline" (i.e. one pass). Meshes usually contain a series of materials that define the model. According to the COLLADA spec (again), a material instantiates an effect, fills its parameters with values, & selects a technique. But I see material defined differently in other places, such as just the Lambert, Blinn, Phong "material types/shaded surfaces", or as Metal, Plastic, Wood, etc. In game dev forums, people often talk about implementing a "material/effect system". Is the material not an instance of an effect? Ergo, if I had effect objects, stored in a collection, & each effect instance object with there own parameter setting, then there is no need for the concept of a material... Or am I interpreting it wrong? Please help by contributing your interpretations as I want to be clear on a distinction (if any), & don't want to miss out on the concept of a material if it should be implemented to follow the abstraction of the DirectX FX framework & COLLADA definitions closely.
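
    One common reading of the two terms, sketched below with purely illustrative types (these names are assumptions, not the COLLADA or DirectX FX API): the effect owns the shared techniques/passes/shaders, while a material is an instance of an effect plus concrete parameter values and a chosen technique. Under that reading, an "effect instance object with its own parameter settings" is a material in all but name.

    ```cpp
    #include <cstddef>
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    struct Pass      { /* render states, shader programs, samplers, ... */ };
    struct Technique { std::string name; std::vector<Pass> passes; };

    struct Effect {                      // shared description of "how to render"
        std::vector<Technique> techniques;
    };

    struct Material {                    // per-object instantiation of an Effect
        std::shared_ptr<Effect> effect;            // which effect
        std::size_t technique = 0;                 // which of its techniques
        std::map<std::string, float> parameters;   // concrete parameter values
    };
    ```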

    Read the article

  • Correct use of VAOs in OpenGL ES2 for iOS?

    - by sak
    I'm migrating to OpenGL ES2 for one of my iOS projects, and I'm having trouble to get any geometry to render successfully. Here's where I'm setting up the VAO rendering: void bindVAO(int vertexCount, struct Vertex* vertexData, GLushort* indexData, GLuint* vaoId, GLuint* indexId){ //generate the VAO & bind glGenVertexArraysOES(1, vaoId); glBindVertexArrayOES(*vaoId); GLuint positionBufferId; //generate the VBO & bind glGenBuffers(1, &positionBufferId); glBindBuffer(GL_ARRAY_BUFFER, positionBufferId); //populate the buffer data glBufferData(GL_ARRAY_BUFFER, vertexCount, vertexData, GL_STATIC_DRAW); //size of verte position GLsizei posTypeSize = sizeof(kPositionVertexType); glVertexAttribPointer(kVertexPositionAttributeLocation, kVertexSize, kPositionVertexTypeEnum, GL_FALSE, sizeof(struct Vertex), (void*)offsetof(struct Vertex, position)); glEnableVertexAttribArray(kVertexPositionAttributeLocation); //create & bind index information glGenBuffers(1, indexId); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, *indexId); glBufferData(GL_ELEMENT_ARRAY_BUFFER, vertexCount, indexData, GL_STATIC_DRAW); //restore default state glBindVertexArrayOES(0); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); glBindBuffer(GL_ARRAY_BUFFER, 0); } And here's the rendering step: //bind the frame buffer for drawing glBindFramebuffer(GL_FRAMEBUFFER, outputFrameBuffer); glClear(GL_COLOR_BUFFER_BIT); //use the shader program glUseProgram(program); glClearColor(0.4, 0.5, 0.6, 0.5); float aspect = fabsf(320.0 / 480.0); GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f); GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.0f); GLKMatrix4 mvpMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix); //glUniformMatrix4fv(projectionMatrixUniformLocation, 1, GL_FALSE, projectionMatrix.m); glUniformMatrix4fv(modelViewMatrixUniformLocation, 1, GL_FALSE, mvpMatrix.m); glBindVertexArrayOES(vaoId); glDrawElements(GL_TRIANGLE_FAN, kVertexCount, GL_FLOAT, &indexId); //bind the color buffer glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer); [context presentRenderbuffer:GL_RENDERBUFFER]; The screen is rendering the color passed to glClearColor correctly, but not the shape passed into bindVAO. Is my VAO being built correctly? Thanks!
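
    As a point of comparison, a hedged sketch of how the upload and draw calls are usually sized and typed for a layout like this: glBufferData takes byte counts, GLushort indices are drawn as GL_UNSIGNED_SHORT, and the final glDrawElements argument is a byte offset into the bound GL_ELEMENT_ARRAY_BUFFER rather than a pointer to the buffer id. kIndexCount is a hypothetical index count (it need not equal the vertex count). Fragment only, reusing the question's variables:

    ```cpp
    // upload: sizes in bytes, not element counts
    glBufferData(GL_ARRAY_BUFFER,
                 vertexCount * sizeof(struct Vertex), vertexData, GL_STATIC_DRAW);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                 kIndexCount * sizeof(GLushort), indexData, GL_STATIC_DRAW);

    // draw: index type matches GLushort, offset 0 into the bound index buffer
    glBindVertexArrayOES(vaoId);
    glDrawElements(GL_TRIANGLE_FAN, kIndexCount, GL_UNSIGNED_SHORT, (const void*)0);
    ```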

    Read the article

  • Deferred Shading - Toolkit

    - by AliveDevil
    I recently managed to get some lights rendered in a scene by using a buffer and a for-loop. The problem with this method is the performance drop if more lights are used. I tried to convert Deferred Rendering in XNA4.0 | ROY-T.NL but it is not working, because I am not using any models. I know I have to render color, normals and lights seperate but I don't know how I could get it working. For understanding my structure better I'm using a world-class which holds some chunks. These chunks are loading all vertices from their items. These items have a property which returns the vertices. The item is returning VertexPositionNormalTexture[]. The chunk loads these Vertices and combines them to one large array of VertexPositionNormalTexture via someList.AsParallel().SelectMany(m => m).ToArray()). m is a VertexPositionNormalTexture. someList is List<VertexPositionNormalTexture>. I got my own shader to draw these vertices how I want them to be drawn. The first thing I would try is setting up two RenderTarget2D for rendering the color and normal part. With two different shaders. Than I would have to render the lights and there's the problem: I don't know how. I set up a structure to simplify working with lights but it didn't really help. public struct Light { public Vector3 Position; public Color4 Color; public float Range; public float Intensity; public Light( Vector3 position, Color color, float range, float intensity ) : this() { this.Position = position; this.Color = color; this.Range = range; this.Intensity = intensity; } public float[] Definition { get { return new[] { Position.X, Position.Y, Position.Z, Color.Red, Color.Green, Color.Blue, Intensity, Range }; } } } The next part is equally different because I don't know how to combine the colorMap, normalMap and textureMap to one finalMap. Some information to the system: I'm using SharpDX (Nightly from some months ago) and the SharpDX.Toolkit (I don't want to mess up with Direct3DDevice and similar things). Can someone help me with this problem? If things are missing or I provided insufficient information tell me, I need to get deferred shading working. Things I'm not able to do: create a rendertarget which holds all lights, merge colorMap, normalMap and lightMap to one finalMap and presenting this to the user.

    Read the article

  • Marshalling C# Structs into DX11 cbuffers

    - by Craig
    I'm having some issues with the packing of my structure in C# and passing them through to cbuffers I have registered in HLSL. When I pack my struct in one manner the information seems to be able to pass to the shader: [StructLayout(LayoutKind.Explicit, Size = 16)] internal struct TestStruct { [FieldOffset(0)] public Vector3 mEyePosition; [FieldOffset(12)] public int type; } This works perfectly when used against this HLSL fragment: cbuffer PerFrame : register(b0) { Vector3 eyePos; int type; } float3 GetColour() { float3 returnColour = float(0.0f, 0.0f, 0.0f); switch(type) { case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break; case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break; case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break; } return returnColour; } However, when I use the following structure definitions... // Note this is 16 because HLSL packs in 4 float 'chunks'. // It is also simplified, but still demonstrates the problem. [StructLayout(Layout.Explicit, Size = 16)] internal struct InternalTestStruct { [FieldOffset(0)] public int type; } [StructLayout(LayoutKind.Explicit, Size = 32)] internal struct TestStruct { [FieldOffset(0)] public Vector3 mEyePosition; //Missing 4 bytes here for correct packing. [FieldOffset(16)] public InternalTestStruct mInternal; } ... the following HLSL fragment no longer works. struct InternalType { int type; } cbuffer PerFrame : register(b0) { Vector3 eyePos; InternalType internalStruct; } float3 GetColour() { float3 returnColour = float(0.0f, 0.0f, 0.0f); switch(internaltype.type) { case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break; case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break; case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break; } return returnColour; } Is there a problem with the way I am packing the struct, or is it another issue? To re-iterate: I can pass a struct in a cbuffer so long as it does not contain a nested struct.
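
    For reference, the D3D11 constant-buffer packing rules this maps onto: members pack into 16-byte registers, a variable may not straddle a register boundary, and a nested struct starts on a fresh register. The sketch below mirrors that layout in C++ (the question uses C#, but the byte offsets are the same idea); member names follow the HLSL above, and the explicit padding fields are the assumption being illustrated.

    ```cpp
    #include <cstdint>

    struct InternalType {          // starts at byte 16 of the cbuffer
        std::int32_t type;         // bytes 16..19
        float        pad[3];       // bytes 20..31, rest of that register
    };

    struct PerFrame {
        float        eyePos[3];    // bytes 0..11 (float3)
        float        pad0;         // bytes 12..15, so the struct starts on a new register
        InternalType internalStruct;
    };

    static_assert(sizeof(PerFrame) == 32, "must match the 32-byte cbuffer");
    ```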

    Read the article

  • SSAO implementation

    - by Irbis
    I try to implement a ssao based on this tutorial: link I use a deferred rendering and world coordinates for shading calculations. When saving gbuffer a vertex shader output looks like this: worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0)); normal = normalize(normalModelMatrix * inNormal); gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0); Next for a ssao calculations I render a scene as a full screen quad and I save an occlusion parameter in a texture. (Vertex positions in the world space: link Normals in the world space: link) SSAO implementation: subroutine (RenderPassType) void ssao() { vec2 texCoord = CalcTexCoord(); vec3 worldPos = texture(texture0, texCoord).xyz; vec3 normal = normalize(texture(texture1, texCoord).xyz); vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4); vec3 rvec = texture(texture2, texCoord * noiseScale).xyz; vec3 tangent = normalize(rvec - normal * dot(rvec, normal)); vec3 bitangent = cross(normal, tangent); mat3 tbn = mat3(tangent, bitangent, normal); float occlusion = 0.0; float radius = 4.0; for (int i = 0; i < kernelSize; ++i) { vec3 pix = tbn * kernel[i]; pix = pix * radius + worldPos; vec4 offset = vec4(pix, 1.0); offset = ProjectionMatrix * ViewMatrix * offset; offset.xy /= offset.w; offset.xy = offset.xy * 0.5 + 0.5; float sample_depth = texture(texture0, offset.xy).z; float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0; occlusion += (sample_depth <= pix.z ? 1.0 : 0.0); } outputColor = vec4(occlusion, occlusion, occlusion, 1); } That code gives following results: camera looking towards -z world space: link camera looking towards +z world space: link I wonder if it is possible to use world coordinates in the above code ? When I move camera I get different results because world space positions don't change. Can I treat worldPos.z as a linear depth ? What should I change to get a correct results ? I except the white areas in place of occlusion, so the ground should has the white areas only near to the object.

    Read the article

  • Uniform not being applied to proper mesh

    - by HaMMeReD
    Ok, I got some code, and you select blocks on a grid. The selection works. I can modify the blocks to be raised when selected and the correct one shows. I set a color which I use in the shader. However, I am trying to change the color before rendering the geometry, and the last rendered geometry (in the sequence) is rendered light. However, to debug logic I decided to move the block up and make it white, in which case one block moves up and another block becomes white. I checked all my logic and it knows the correct one is selected and it is showing in, in the correct place and rendering it correctly. When there is only 1 it works properly. Video Of the bug in action, note how the highlighted and elevated blocks are not the same block, however the code for color and My Renderer is here (For the items being drawn) public void render(Renderer renderer) { mGrid.render(renderer, mGameState); for (Entity e:mGameEntities) { UnitTypes ut = UnitTypes.valueOf((String)e.getObject(D.UNIT_TYPE.ordinal())); if (ut == UnitTypes.Soldier) { renderer.testShader.begin(); renderer.testShader.setUniformMatrix("u_mvpMatrix",mEntityMatrix); renderer.texture_soldier.bind(0); Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal()); mEntityMatrix.set(renderer.mCamera.combined); if (mSelectedEntities.contains(e)) { mEntityMatrix.translate(pos.x, 1f, pos.y); renderer.testShader.setUniformf("v_color", 0.5f,0.5f,0.5f,1f); } else { mEntityMatrix.translate(pos.x, 0f, pos.y); renderer.testShader.setUniformf("v_color", 1f,1f,1f,1f); } mEntityMatrix.scale(0.2f, 0.2f, 0.2f); renderer.model_soldier.render(renderer.testShader,GL20.GL_TRIANGLES); renderer.testShader.end(); } else if (ut == UnitTypes.Enemy_Infiltrator) { renderer.testShader.begin(); renderer.testShader.setUniformMatrix("u_mvpMatrix",mEntityMatrix); renderer.testShader.setUniformf("v_color", 1.0f,1,1,1.0f); renderer.texture_enemy_infiltrator.bind(0); Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal()); mEntityMatrix.set(renderer.mCamera.combined); mEntityMatrix.translate(pos.x, 0f, pos.y); mEntityMatrix.scale(0.2f, 0.2f, 0.2f); renderer.model_enemy_infiltrator.render(renderer.testShader,GL20.GL_TRIANGLES); renderer.testShader.end(); } } }

    Read the article

  • Glm Vector Transformations [duplicate]

    - by Reanimation
    This question already has an answer here: Car-like Physics - Basic Maths to Simulate Steering 2 answers I have a cube rendered on the screen which represents a car (or similar). Using Projection/Model matrices and Glm I am able to move it back and fourth along the axes and rotate it left or right. I'm having trouble with the vector mathematics to make the cube move forwards no matter which direction it's current orientation is. (ie. if I would like, if it's rotated right 30degrees, when it's move forwards, it travels along the 30degree angle on a new axes). I hope I've explained that correctly. This is what I've managed to do so far in terms of using glm to move the cube: glm::vec3 vel; //velocity vector void renderMovingCube(){ glUseProgram(movingCubeShader.handle()); GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix"); glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]); glm::mat4 viewMatrixMovingCube; viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ); vel.x = cos(rotX); vel.y=sin(rotX); vel*=moveCube; //move cube ModelViewMatrix = glm::translate(viewMatrixMovingCube,globalPos*vel); //bring ground and cube to bottom of screen ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0,-48,0)); ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0,1,0)); //manually turn glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); //pass matrix to shader movingCube.render(); //draw glUseProgram(0); } keyboard input: void keyboard() { char BACKWARD = keys['S']; char FORWARD = keys['W']; char ROT_LEFT = keys['A']; char ROT_RIGHT = keys['D']; if (FORWARD) //W - move forwards { globalPos += vel; //globalPos.z -= moveCube; BACKWARD = false; } if (BACKWARD)//S - move backwards { globalPos.z += moveCube; FORWARD = false; } if (ROT_LEFT)//A - turn left { rotX +=0.01f; ROT_LEFT = false; } if (ROT_RIGHT)//D - turn right { rotX -=0.01f; ROT_RIGHT = false; } Where am I going wrong with my vectors? I would like change the direction of the cube (which it does) but then move forwards in that direction.
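
    One way to structure it, as a hedged sketch: keep a single heading angle, derive the forward direction from it every frame, and move along that direction for both W and S. The snippet assumes the cube is rotated about +Y by rotX radians and faces -Z when rotX is 0 (flip the signs if your model faces +Z); headingToVelocity is an illustrative helper, not part of GLM.

    ```cpp
    #include <cmath>
    #include <glm/glm.hpp>

    // Forward direction for a heading of rotX radians about +Y (model faces -Z at rotX == 0).
    glm::vec3 headingToVelocity(float rotX, float speed)
    {
        glm::vec3 forward(-std::sin(rotX), 0.0f, -std::cos(rotX));
        return forward * speed;
    }

    // per frame:
    //   if (FORWARD)  globalPos += headingToVelocity(rotX, moveCube);
    //   if (BACKWARD) globalPos -= headingToVelocity(rotX, moveCube);
    // and the model matrix stays translate(globalPos) followed by rotate(rotX, {0,1,0}).
    ```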

    Read the article

  • OpenGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so: glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0); The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised: 1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-wide spaces between them. However, it appears blank. 2) Next, I write with 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect that this should produce 8-wide bars with 24-wide spaces between them. Instead, it produces a white 1-px-wide bar. 3) 0x55555555 = 1010101010101010101010101010101, therefore writing this value ought to create 1-wide vertical stripes with 1 pixel spacing. However, it creates a solid gray color. 4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation. I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8-bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (while I was expecting that I was working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, support this conclusion. Questions: 1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8-elements, as it suggests in the spec? 2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3, I am however forcing a 3.0 context.) 3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for but I suppose there is no such thing as a free lunch.
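
    Since GL_BITMAP/GL_COLOR_INDEX uploads are legacy functionality that modern drivers largely ignore, the usual fallback is exactly the one mentioned at the end: expand the 1-bit-per-pixel buffer to 8 bits per pixel (on the CPU, or in a shader) and upload it as an ordinary single-channel texture. A minimal CPU expansion sketch, assuming MSB-first bit order within each byte (the GL_UNPACK_LSB_FIRST default) and rows padded to whole bytes:

    ```cpp
    #include <cstdint>
    #include <vector>

    // Expand a 1-bpp bitmap into an 8-bpp buffer suitable for
    // glTexImage2D(..., GL_LUMINANCE or GL_RED, GL_UNSIGNED_BYTE, ...).
    std::vector<std::uint8_t> expandBitmap(const std::uint8_t* bits, int w, int h)
    {
        const int stride = (w + 7) / 8;                     // bytes per bitmap row
        std::vector<std::uint8_t> pixels(static_cast<std::size_t>(w) * h);
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                const std::uint8_t byte = bits[y * stride + x / 8];
                const bool set = (byte >> (7 - (x % 8))) & 1;   // MSB first
                pixels[static_cast<std::size_t>(y) * w + x] = set ? 255 : 0;
            }
        }
        return pixels;
    }
    ```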

    Read the article

  • Syntax error in aggregate argument: Expecting a single column argument with possible 'Child' qualifier.

    - by Rushabh
    DataTable distinctTable = dTable.DefaultView.ToTable(true,"ITEM_NO","ITEM_STOCK"); DataTable dtSummerized = new DataTable("SummerizedResult"); dtSummerized.Columns.Add("ITEM_NO",typeof(string)); dtSummerized.Columns.Add("ITEM_STOCK",typeof(double)); int count=0; foreach(DataRow dRow in distinctTable.Rows) { count++; //string itemNo = Convert.ToString(dRow[0]); double TotalItem = Convert.ToDouble(dRow[1]); string TotalStock = dTable.Compute("sum(" + TotalItem + ")", "ITEM_NO=" + dRow["ITEM_NO"].ToString()).ToString(); dtSummerized.Rows.Add(count,dRow["ITEM_NO"],TotalStock); } Error Message: Syntax error in aggregate argument: Expecting a single column argument with possible 'Child' qualifier. Can anyone help me out? Thanks.

    Read the article

  • "device-mapper resume ioctl failed" when run a instance

    - by user1490377
    I install ubuntu-12.04 server and openstack on two computer. When Launch an image, the instance can't run and show error state! nova-compute.log details: 2012-06-24 12:02:00 DEBUG nova.utils [req-71fdca27-5f93-438e-a4be-ddc54f698171 c737b66b2102415f817ca50b9649fd8f 5b1da4eaee3643919a230efc06473720] Unexpected error while running command. Command: sudo nova-rootwrap kpartx -a /dev/nbd15 Exit code: 1 Stdout: '' Stderr: 'device-mapper: resume ioctl failed: Invalid argument\ncreate/reload failed on nbd15p1\n' from (pid=1267) trycmd /usr/lib/python2.7/dist-packages/nova/utils.py:277 2012-06-24 12:02:00 DEBUG nova.utils [req-71fdca27-5f93-438e-a4be-ddc54f698171 c737b66b2102415f817ca50b9649fd8f 5b1da4eaee3643919a230efc06473720] Running cmd (subprocess): sudo nova-rootwrap qemu-nbd -d /dev/nbd15 from (pid=1267) execute /usr/lib/python2.7/dist-packages/nova/utils.py:219 2012-06-24 12:02:02 DEBUG nova.virt.disk.api [req-71fdca27-5f93-438e-a4be-ddc54f698171 c737b66b2102415f817ca50b9649fd8f 5b1da4eaee3643919a230efc06473720] Failed to map partitions: Unexpected error while running command. Command: sudo nova-rootwrap kpartx -a /dev/nbd15 Exit code: 1 Stdout: '' Stderr: 'device-mapper: resume ioctl failed: Invalid argument\ncreate/reload failed on nbd15p1\n' from (pid=1267) mount /usr/lib/python2.7/dist-packages/nova/virt/disk/api.py:205

    Read the article

  • Silverlight RelativeSource of TemplatedParent Binding within a DataTemplate, Is it possible?

    - by Matt.M
    I'm trying to make a bar graph Usercontrol. I'm creating each bar using a DataTemplate. The problem is in order to compute the height of each bar, I first need to know the height of its container (the TemplatedParent). Unfortunately what I have: Height="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=Height, Converter={StaticResource HeightConverter}, Mode=OneWay}" Does not work. Each time a value of NaN is returned to my Converter. Does RelativeSource={RelativeSource TemplatedParent} not work in this context? What else can I do to allow my DataTemplate to "talk" to the element it is being applied to? Incase it helps here is the barebones DataTemplate: <DataTemplate x:Key="BarGraphTemplate"> <Grid Width="30"> <Rectangle HorizontalAlignment="Center" Stroke="Black" Width="20" Height="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=Height, Converter={StaticResource HeightConverter}, Mode=OneWay}" VerticalAlignment="Bottom" /> </Grid> </DataTemplate>

    Read the article

  • How do I use local memory in OpenCL?

    - by splicer
    I've been playing with OpenCL recently, and I'm able to write simple kernels that use only global memory. Now I'd like to start using local memory, but I can't seem to figure out how to use get_local_size() and get_local_id() to compute one "chunk" of output at a time. For example, let's say I wanted to convert Apple's OpenCL Hello World example kernel to something that uses local memory. How would you do it? Here's the original kernel source: __kernel square( __global float *input, __global float *output, const unsigned int count) { int i = get_global_id(0); if (i < count) output[i] = input[i] * input[i]; } If this example can't easily be converted into something that shows how to make use of local memory, any other simple example will do. Thanks!
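
    A hedged sketch of the same kernel routed through local memory, purely to show the mechanics: each work-group stages its chunk into a __local buffer, synchronises with barrier(), then computes from the staged copy. For a purely elementwise operation like squaring this buys no speed-up; local memory only pays off when work-items share or reuse data. The host supplies the buffer by calling clSetKernelArg(kernel, 3, localSize * sizeof(float), NULL).

    ```c
    __kernel void square_local(__global const float *input,
                               __global float *output,
                               const unsigned int count,
                               __local float *tile)
    {
        const int gid = get_global_id(0);
        const int lid = get_local_id(0);

        if (gid < count)
            tile[lid] = input[gid];       /* each work-item stages one element */

        barrier(CLK_LOCAL_MEM_FENCE);     /* whole group sees the staged chunk */

        if (gid < count)
            output[gid] = tile[lid] * tile[lid];
    }
    ```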

    Read the article

  • Adaboost algorithm and its usage in face detection

    - by Hani
    I am trying to understand the AdaBoost algorithm but I am having some trouble. After reading about AdaBoost I realized that it is a classification algorithm (somewhat like a neural network). But I could not work out how the weak classifiers are chosen (I think they are Haar-like features for face detection), or how the final result H, the strong classifier, is actually used. I mean, once I have found the alpha values and computed H, how do I use it to get a value (one or zero) for new images? Is there an example that describes this clearly? I found the plus-and-minus example that appears in most AdaBoost tutorials, but I did not understand exactly how each h_i is chosen or how to apply the same concept to face detection. I have read many papers and had many ideas, but so far they are not well organized. Thanks.
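
    For reference, the standard (discrete) AdaBoost formulation being described: each round t picks the weak classifier h_t with the lowest weighted error e_t on the re-weighted training set, assigns it the weight alpha_t, and the final strong classifier is the sign of the weighted vote. In Viola-Jones face detection each h_t is a threshold on a single Haar-like feature value.

    ```latex
    \[
      \alpha_t = \tfrac{1}{2}\ln\!\frac{1-\epsilon_t}{\epsilon_t},
      \qquad
      H(x) = \operatorname{sign}\!\Bigl(\sum_{t=1}^{T}\alpha_t\,h_t(x)\Bigr)
    \]
    % h_t(x) \in \{-1,+1\}; for a new image (sub-window) x, H(x) = +1 is read
    % as "face" and H(x) = -1 as "not a face" (1 or 0 in the question's terms).
    ```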

    Read the article

  • Computing the matting Laplacian matrix of an image

    - by ajith
    Hi everyone, I need to compute the matting Laplacian matrix L for an nXn image in OpenCV. The computation goes as follows: the term d_ij - (1/|wk|)[1 + ((I_i - µ_k)(I_j - µ_k)) / (e/|wk| + s_k^2)] is evaluated for all (i,j) ∈ wk, and summing over k yields the (i,j)-th element of L. Here d_ij is the Kronecker delta, µ_k and s_k^2 are the mean and variance of the intensities in the window wk around k, and |wk| is the number of pixels in this window; wk is a 3X3 window. I am not clear about two things: 1. What will be the size of L - nXn or (nXn)X(nXn)? 2. How do I select I_i and I_j separately in a 2D image?
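
    Written out, the (i,j)-th entry of the gray-scale matting Laplacian (Levin, Lischinski & Weiss, "A Closed-Form Solution to Natural Image Matting") is:

    ```latex
    \[
      L_{ij} \;=\; \sum_{k \,\mid\, (i,j)\in w_k}
        \left( \delta_{ij} \;-\; \frac{1}{|w_k|}
          \left[ 1 + \frac{(I_i-\mu_k)(I_j-\mu_k)}{\frac{\varepsilon}{|w_k|}+\sigma_k^2} \right]
        \right)
    \]
    % \delta_{ij}: Kronecker delta; \mu_k, \sigma_k^2: mean and variance of the
    % intensities in window w_k; |w_k| = 9 for a 3x3 window; \varepsilon (the
    % "e" above): regularisation constant. Indices i, j run over all pixels.
    ```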

    Read the article

  • Linear Regression and Java Dates

    - by Smithers
    I am trying to find the linear trend line for a set of data. The set contains pairs of dates (x values) and scores (y values). I am using a version of this code as the basis of my algorithm. The results I am getting are off by a few orders of magnitude. I assume that there is some problem with round off error or overflow because I am using Date's getTime method which gives you a huge number of milliseconds. Does anyone have a suggestion on how to minimize the errors and compute the correct results?
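
    A hedged sketch of the usual remedy: re-base the x values before accumulating the regression sums so that x and x*x stay small, e.g. days since the first sample instead of raw getTime() milliseconds, then convert the slope back afterwards if needed. The snippet is C++ rather than Java, but the arithmetic carries over directly; fitScores and its argument layout are illustrative.

    ```cpp
    #include <cstdint>
    #include <vector>

    struct Fit { double slope, intercept; };   // y ~ slope * xDays + intercept

    // Ordinary least squares with x re-based to "days since the first sample".
    // Divide slope by 86,400,000 if score per millisecond is needed again.
    Fit fitScores(const std::vector<std::int64_t>& timesMs,
                  const std::vector<double>& scores)
    {
        const double msPerDay = 86400.0 * 1000.0;
        const std::int64_t t0 = timesMs.front();
        const double n = static_cast<double>(timesMs.size());
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (std::size_t i = 0; i < timesMs.size(); ++i) {
            const double x = static_cast<double>(timesMs[i] - t0) / msPerDay;
            sx += x;  sy += scores[i];  sxx += x * x;  sxy += x * scores[i];
        }
        const double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        return { slope, (sy - slope * sx) / n };
    }
    ```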

    Read the article

  • CUDA: accumulate data into a large histogram of floats

    - by shoosh
    I'm trying to think of a way to implement the following algorithm using CUDA: Working on a large volume of voxels, for each voxel I calculate an index i and a value c. after the calculation I need to perform histogram[i] += c c is a float value and the histogram can have up to 15,000 bins. I'm looking for a way to implement this efficiently using CUDA. The first obvious problem is that with compute capabilities 1.3 which is what I'm using I can't even do an atomicAdd() of floats so how can I accumulate anything reliably? This example by nVidia does something somewhat simpler. The histograms are saved in the shared memory (which I can't do due to its size) and it only accumulates integers. Can this approach be generalized to my case?
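
    On compute capability 1.3 the usual workaround is to emulate a float atomicAdd with a compare-and-swap loop on the value's bit pattern (32-bit atomicCAS works on global memory from 1.1 onwards). With roughly 15,000 float bins the histogram also exceeds the 16 KB of shared memory per block, so accumulating straight into a global-memory histogram, or into per-block partial histograms that are reduced in a second pass, is the common pattern. A hedged sketch; the accumulate kernel and its argument layout are illustrative, not the nVidia sample:

    ```cuda
    #include <cuda_runtime.h>

    // Emulated float atomicAdd for pre-Fermi GPUs: retry a 32-bit atomicCAS on
    // the value's bit pattern until no other thread has raced us.
    __device__ float atomicAddFloat(float *address, float val)
    {
        int *addr_as_int = (int *)address;
        int old = *addr_as_int, assumed;
        do {
            assumed = old;
            old = atomicCAS(addr_as_int, assumed,
                            __float_as_int(val + __int_as_float(assumed)));
        } while (assumed != old);
        return __int_as_float(old);
    }

    // One thread per voxel; bin index i and contribution c computed elsewhere.
    __global__ void accumulate(const int *bin, const float *contrib,
                               float *histogram, int nVoxels)
    {
        int v = blockIdx.x * blockDim.x + threadIdx.x;
        if (v < nVoxels)
            atomicAddFloat(&histogram[bin[v]], contrib[v]);
    }
    ```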

    Read the article

  • Opening a local file in Eclipse from the web

    - by Victor Nicollet
    Right now, when I notice a problem on a page on my PHP web site, I have to look at the URL, mentally deduce what file is responsible for displaying that page, then navigate the Eclipse PDT file tree to open that file. This is annoying and uses brain power that could have been applied to solving the issue instead. I would like my PHP web site to display on every page a link that I could click to automatically open the correct file in Eclipse. I can easily compute the complete absolute path for the file I need to open (for example, open C:/xampp/htdocs/controllers/Foo/Bar.php when visiting /foo/bar), and I can make sure that Eclipse is currently open with the correct project loaded, but I'm stuck on how I can have Firefox/Chrome/IE tell Eclipse to open that specific file.

    Read the article

  • C#: Cached Property: Easier way?

    - by Peterdk
    I have an object with properties that are expensive to compute, so they are only calculated on first access and then cached. private List<Note> notes; public List<Note> Notes { get { if (this.notes == null) { this.notes = CalcNotes(); } return this.notes; } } I wonder, is there a better way to do this? Is it somehow possible to create a Cached Property or something like that in C#?

    Read the article

  • Compile-time lookup array creation for ANSI-C?

    - by multiproximus
    A previous programmer preferred to generate large lookup tables (arrays of constants) to save runtime CPU cycles rather than calculating values on the fly. He did this by creating custom Visual C++ projects that were unique for each individual lookup table... which generate array files that are then #included into a completely separate ANSI-C micro-controller (Renesas) project. This approach is fine for his original calculation assumptions, but has become tedious when the input parameters need to be modified, requiring me to recompile all of the Visual C++ projects and re-import those files into the ANSI-C project. What I would like to do is port the Visual C++ source directly into the ANSI-C microcontroller project and let the compiler create the array tables. So, my question is: Can ANSI-C compilers compute and generate lookup arrays during compile time? And if so, how should I go about it? Thanks in advance for your help!
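
    Partly, yes: an ANSI C compiler evaluates constant expressions in static initializers at translation time, so any table whose entries can each be written as a constant expression (macros can generate the formula) is built by the compiler with no separate generator project. Anything that needs loops, recursion or library math beyond constant expressions still has to come from a code generator or a build step. A small illustrative sketch; the formula is hypothetical:

    ```c
    /* The compiler folds these constant expressions and emits the table
     * directly into the object file -- no runtime cost. */
    #define ENTRY(x)  ((x) * (x) + 3 * (x) + 1)   /* hypothetical formula */

    static const int lookup[8] = {
        ENTRY(0), ENTRY(1), ENTRY(2), ENTRY(3),
        ENTRY(4), ENTRY(5), ENTRY(6), ENTRY(7)
    };
    ```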

    Read the article

  • Haskell Lazy Evaluation and Reuse

    - by Jonathan Sternberg
    I know that if I were to compute a list of squares in Haskell, I could do this: squares = [ x ** 2 | x <- [1 ..] ] Then when I call squares like this: print $ take 4 squares And it would print out [1.0, 4.0, 9.0, 16.0]. This gets evaluated as [ 1 ** 2, 2 ** 2, 3 ** 2, 4 ** 2 ]. Now since Haskell is functional and the result would be the same each time, if I were to call squares again somewhere else, would it re-evaluate the answers it's already computed? If I were to re-use squares after I had already called the previous line, would it re-calculate the first 4 values? print $ take 5 squares Would it evaluate [1.0, 4.0, 9.0, 16.0, 5 ** 2]?

    Read the article

  • Modifying Bresenham's line algorithm

    - by sphennings
    I'm trying to use Bresenham's line algorithm to compute Field of View on a grid. The code I'm using calculates the lines without a problem but I'm having problems getting it to always return the line running from start point to endpoint. What do I need to do so that all lines returned run from (x0,y0) to (x1,y1) def bresenham_line(self, x0, y0, x1, y1): steep = abs(y1 - y0) > abs(x1 - x0) if steep: x0, y0 = y0, x0 x1, y1 = y1, x1 if x0 > x1: x0, x1 = x1, x0 y0, y1 = y1, y0 if y0 < y1: ystep = 1 else: ystep = -1 deltax = x1 - x0 deltay = abs(y1 - y0) error = -deltax / 2 y = y0 line = [] for x in range(x0, x1 + 1): if steep: line.append((y,x)) else: line.append((x,y)) error = error + deltay if error > 0: y = y + ystep error = error - deltax return line
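
    The re-ordering comes from the two swaps: once the endpoints have been exchanged (steep and/or x0 > x1), the loop naturally emits points starting from the swapped start point. A hedged sketch of the same routine with a swapped flag and a final reverse, so the result always runs from (x0,y0) towards (x1,y1); it is written in C++ here, but the change maps one-to-one onto the Python above.

    ```cpp
    #include <algorithm>
    #include <cstdlib>
    #include <utility>
    #include <vector>

    std::vector<std::pair<int,int>> bresenhamLine(int x0, int y0, int x1, int y1)
    {
        const bool steep = std::abs(y1 - y0) > std::abs(x1 - x0);
        if (steep) { std::swap(x0, y0); std::swap(x1, y1); }

        const bool swapped = x0 > x1;      // the flag the original code is missing
        if (swapped) { std::swap(x0, x1); std::swap(y0, y1); }

        const int deltax = x1 - x0, deltay = std::abs(y1 - y0);
        const int ystep = (y0 < y1) ? 1 : -1;
        int error = -deltax / 2, y = y0;

        std::vector<std::pair<int,int>> line;
        for (int x = x0; x <= x1; ++x) {
            line.push_back(steep ? std::make_pair(y, x) : std::make_pair(x, y));
            error += deltay;
            if (error > 0) { y += ystep; error -= deltax; }
        }
        if (swapped) std::reverse(line.begin(), line.end());
        return line;
    }
    ```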

    Read the article

  • Constructing a hash table/hash function.

    - by nn
    Hi, I would like to construct a hash table that looks up keys in sequences (strings) of bytes ranging from 1 to 15 bytes. I would like to store an integer value, so I imagine an array for hashing would suffice. I'm having difficulty conceptualizing how to construct a hash function such that a given key would give an index into the array. Any assistance would be much appreciated. The maximum number of entries in the hash is: 4081*15 + 4081*14 + ... + 4081 = 4081((15*(16))/2) = 489720. So for example: int table[489720]; int lookup(unsigned char *key) { int index = hash(key); return table[index]; } How can I compute hash(key)? I'd preferably like to get a perfect hash function. Thanks.
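
    A perfect hash over arbitrary 1-15 byte keys is only practical when the key set is fixed in advance (a tool such as GNU gperf can then generate one offline); otherwise a simple, well-distributed non-perfect hash plus a collision strategy (chaining or open addressing) is the usual route. A hedged sketch using 32-bit FNV-1a, with an explicit length parameter added to the signature:

    ```c
    #include <stddef.h>
    #include <stdint.h>

    #define TABLE_SIZE 489720u   /* the capacity estimated above */

    /* 32-bit FNV-1a over the key bytes, reduced to a table index.
     * Not perfect: the table still needs chaining or probing for collisions. */
    static unsigned hash(const unsigned char *key, size_t len)
    {
        uint32_t h = 2166136261u;          /* FNV offset basis */
        for (size_t i = 0; i < len; i++) {
            h ^= key[i];
            h *= 16777619u;                /* FNV prime */
        }
        return (unsigned)(h % TABLE_SIZE);
    }
    ```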

    Read the article
