Search Results

Search found 2086 results on 84 pages for 'pixel shader'.

Page 21 of 84

  • wxCam can't open /dev/dsp

    - by SIJAR
    I try to run wxCam through PulseAudio using the following command: $ padsp -d wxcam. wxCam starts OK and has been set to use the xdiv format to enable sound during recording, but while recording I get the error: "Cannot open /dev/dsp. Video file will be recorded without audio track". Please help me fix this issue. Below is some debug information:
    $ padsp -d wxcam
    Determining video4linux API version... Using video4linux 2 API
    VIDIOC_ENUM_FRAMESIZES: Invalid argument
    V4L2_CID_GAMMA is not supported
    Determining pixel format... pixel format: YUV 4:2:2 (YUYV)
    Found V4L2_PIX_FMT_YUYV pixel format
    pixel format: MJPEG
    Found V4L2_PIX_FMT_MJPEG pixel format
    --DEBUG: [wxcam] Generating standard Huffman tables for this frame.
    Corrupt JPEG data: 2 extraneous bytes before marker 0xd2
    ... repeats a couple of times ...
    open of failed: No such file or directory
    --DEBUG: [wxcam] Generating standard Huffman tables for this frame.
    Corrupt JPEG data: 2 extraneous bytes before marker 0xd4
    ... repeats a couple of times ....
    /home/sij/Videos/Webcam/video.avi written: 640x480, 1334396 bytes
    --DEBUG: [wxcam] Generating standard Huffman tables for this frame.
    Corrupt JPEG data: 2 extraneous bytes before marker 0xd3
    ... repeats a couple of times ...

  • Pictures rendered from above and below using an orthographic camera do not match

    - by Roy T.
    I'm using an orthographic camera to render slices of a model (in order to voxelize it). I render each slice both from above and below in order to determine what is inside each slice. I am using an orthographic camera The model I render is a simple 'T' shape constructed from two cubes. The cubes have the same dimensions and have the same Y (height) coordinate. See figure 1 for a render of it in Blender. I render this model once directly from above and once directly from below. My expectation was that I would get exactly the same image (except for mirroring over the y-axis). However when I render using a very low resolution render target (25x25) the position (in pixels) of the 'T' is different when rendered from above as opposed to rendered from below. See figure 2 and 3. The pink blocks are not part of the original rendering but I've added them so you can easily count/see the differences. Figure 2: the T rendered from above Figure 3: the T rendered from below This is probably due to what I've read about pixel and texel coordinates which might be biased to the top-left as seen from the camera. Since I'm using the same 'up' vector for both of my camera's my bias only shows on the x-axis. I've tried to change the position of the camera and it's look-at by, what I thought, should be half a pixel. I've tried both shifting a single camera and shifting both cameras and while I see some effect I am not able to get a pixel-by-pixel perfect copy from both camera's. Here I initialize the camera and compute, what I believe to be, half pixel. boundsDimX and boundsDimZ is a slightly enlarged bounding box around the model which I also use as the width and height of the view volume of the orthographic camera. Matrix projection = Matrix.CreateOrthographic(boundsDimX, boundsDimZ, 0.5f, sliceHeight + 0.5f); Vector3 halfPixel = new Vector3(boundsDimX / (float)renderTarget.Width, 0, boundsDimY / (float)renderTarget.Height) * 0.5f; This is the code where I set the camera position and camera look ats // Position camera if (downwards) { float cameraHeight = bounds.Max.Y + 0.501f - (sliceHeight * i); Vector3 cameraPosition = new Vector3 ( boundsCentre.X, // possibly adjust by half a pixel? cameraHeight, boundsCentre.Z ); camera.Position = cameraPosition; camera.LookAt = new Vector3(cameraPosition.X, cameraHeight - 1.0f, cameraPosition.Z); } else { float cameraHeight = bounds.Max.Y - 0.501f - (sliceHeight * i); Vector3 cameraPosition = new Vector3 ( boundsCentre.X, cameraHeight, boundsCentre.Z ); camera.Position = cameraPosition; camera.LookAt = new Vector3(cameraPosition.X, cameraHeight + 1.0f, cameraPosition.Z); } Main Question Now you've seen all the problems and code you can guess it. My main question is. How do I align both camera's so that they each render exactly the same image (mirrored along the Y axis)? Figure 1 the original model rendered in blender
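    For a concrete sense of scale, here is the half-texel arithmetic worked through with assumed numbers (illustration only, not a verified fix; the 10-unit bounds width is made up, the 25-pixel target is from the question above):
        one texel  = boundsDimX / renderTarget.Width = 10 / 25 = 0.4 world units
        half texel = 0.4 / 2 = 0.2 world units
    Shifting one camera by a full texel (0.4 units here) should move its image by exactly one pixel column, which is a quick way to confirm the offset is being applied along the axis and in the direction you expect before trying to dial in the half-texel value.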

  • Which of these algorithms is best for my goal?

    - by JonathonG
    I have created a program that restricts the mouse to a certain region based on a black/white bitmap. The program is 100% functional as-is, but uses an inaccurate, albeit fast, algorithm for repositioning the mouse when it strays outside the area. Currently, when the mouse moves outside the area, basically what happens is this: A line is drawn between a pre-defined static point inside the region and the mouse's new position. The point where that line intersects the edge of the allowed area is found. The mouse is moved to that point. This works, but only works perfectly for a perfect circle with the pre-defined point set in the exact center. Unfortunately, this will never be the case. The application will be used with a variety of rectangles and irregular, amorphous shapes. On such shapes, the point where the line drawn intersects the edge will usually not be the closest point on the shape to the mouse. I need to create a new algorithm that finds the closest point to the mouse's new position on the edge of the allowed area. I have several ideas about this, but I am not sure of their validity, in that they may have far too much overhead. While I am not asking for code, it might help to know that I am using Objective C / Cocoa, developing for OS X, as I feel the language being used might affect the efficiency of potential methods. My ideas are: Using a bit of trigonometry to project lines would work, but that would require some kind of intense algorithm to test every point on every line until it found the edge of the region... That seems too resource intensive since there could be something like 200 lines that would have each have to have as many as 200 pixels checked for black/white.... Using something like an A* pathing algorithm to find the shortest path to a black pixel; however, A* seems resource intensive, even though I could probably restrict it to only checking roughly in one direction. It also seems like it will take more time and effort than I have available to spend on this small portion of the much larger project I am working on, correct me if I am wrong and it would not be a significant amount of code (100 lines or around there). Mapping the border of the region before the application begins running the event tap loop. I think I could accomplish this by using my current line-based algorithm to find an edge point and then initiating an algorithm that checks all 8 pixels around that pixel, finds the next border pixel in one direction, and continues to do this until it comes back to the starting pixel. I could then store that data in an array to be used for the entire duration of the program, and have the mouse re-positioning method check the array for the closest pixel on the border to the mouse target position. That last method would presumably execute it's initial border mapping fairly quickly. (It would only have to map between 2,000 and 8,000 pixels, which means 8,000 to 64,000 checked, and I could even permanently store the data to make launching faster.) However, I am uncertain as to how much overhead it would take to scan through that array for the shortest distance for every single mouse move event... I suppose there could be a shortcut to restrict the number of elements in the array that will be checked to a variable number starting with the intersecting point on the line (from my original algorithm), and raise/lower that number to experiment with the overhead/accuracy tradeoff. 
Please let me know if I am over thinking this and there is an easier way that will work just fine, or which of these methods would be able to execute something like 30 times per second to keep mouse movement smooth, or if you have a better/faster method. I've posted relevant parts of my code below for reference, and included an example of what the area might look like. (I check for color value against a loaded bitmap that is black/white.) // // This part of my code runs every single time the mouse moves. // CGPoint point = CGEventGetLocation(event); float tX = point.x; float tY = point.y; if( is_in_area(tX,tY, mouse_mask)){ // target is inside O.K. area, do nothing }else{ CGPoint target; //point inside restricted region: float iX = 600; // inside x float iY = 500; // inside y // delta to midpoint between iX,iY and tX,tY float dX; float dY; float accuracy = .5; //accuracy to loop until reached do { dX = (tX-iX)/2; dY = (tY-iY)/2; if(is_in_area((tX-dX),(tY-dY),mouse_mask)){ iX += dX; iY += dY; } else { tX -= dX; tY -= dY; } } while (abs(dX)>accuracy || abs(dY)>accuracy); target = CGPointMake(roundf(tX), roundf(tY)); CGDisplayMoveCursorToPoint(CGMainDisplayID(),target); } Here is "is_in_area(int x, int y)" : bool is_in_area(NSInteger x, NSInteger y, NSBitmapImageRep *mouse_mask){ NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; NSUInteger pixel[4]; [mouse_mask getPixel:pixel atX:x y:y]; if(pixel[0]!= 0){ [pool release]; return false; } [pool release]; return true; }
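    For the third idea (pre-mapping the border), a minimal sketch of the per-move lookup step, written in C++-style code with invented names: a brute-force scan over a few thousand border points is only a few thousand multiplications, which fits comfortably inside a 30-times-per-second budget.
        #include <vector>
        #include <climits>

        struct BorderPoint { int x, y; };

        // Return the border point closest to the target position (tx, ty).
        // Squared distance is enough for the comparison, so no sqrt is needed.
        BorderPoint nearestBorderPoint(const std::vector<BorderPoint>& border, int tx, int ty)
        {
            BorderPoint best = border[0];
            long bestDist = LONG_MAX;
            for (const BorderPoint& p : border) {
                long dx = p.x - tx;
                long dy = p.y - ty;
                long d = dx * dx + dy * dy;
                if (d < bestDist) { bestDist = d; best = p; }
            }
            return best;
        }
    The shortcut described in the question (starting the scan near the line-intersection point) would only ever reduce this cost further.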

  • Architecture- Tracking lead origin when data is submitted by a server

    - by Kevin
    I'm looking for some assistance in determining the least complex strategy for tracking leads on an affiliate's website. The idea is to make the affiliate's integration with my application as easy as possible. I've run into theoretical barriers, so i'm here to explore other options. Application Overview: This is a lead aggregation / distribution platform. We will be focusing on the affiliate portion of this website. Essentially affiliates sign up, enter in marketing campaigns and sell us their conversions. Problem to be solved: We want to track a lead's origin and other events on the affiliate site. We want to know what pages, ads, and forms they viewed before they converted. This can easily be solved with pixel tracking. Very straightforward. Theoretical Issues: I thought I would ask affiliates to place the pixel where I could log impressions and set a third party cookie when the pixel is first called. Then I could associate future impressions with this cookie. The problem is that when the visitor converts on the affiliate's site and I receive their information via HTTP POST from the Affiliate's server I wouldn't be able to access the cookie and associate it with the lead record unless the lead lands on my processor via a redirect and is then redirected back to the affiliate's landing page. I don't want to force the affiliates to submit their forms directly to my tracking site, so allowing them to make an HTTP POST from their server side form processor would be ideal. I've considered writing JavaScript to set a First Party cookie but this seems to make things more complicated for the affiliate. I also considered having the affiliate submit the lead's data via a conversion pixel. This seems to be the most ideal scenario so far as almost all pixels are as easy as copy/paste. The only complication comes from the conversion pixel- which would submit all of the lead information and the request would come from the visitor's machine so I could access my third party cookie.

  • Take a snapshot with JavaFX!

    - by user12610255
    JavaFX 2.2 has a "snapshot" feature that enables you to take a picture of any node or scene. Take a look at the API Documentation and you will find new snapshot methods in the javafx.scene.Scene class. The most basic version has the following signature: public WritableImage snapshot(WritableImage image) The WritableImage class (also introduced in JavaFX 2.2) lives in the javafx.scene.image package, and represents a custom graphical image that is constructed from pixels supplied by the application. In fact, there are 5 new classes in javafx.scene.image: PixelFormat: Defines the layout of data for a pixel of a given format. WritablePixelFormat: Represents a pixel format that can store full colors and so can be used as a destination format to write pixel data from an arbitrary image. PixelReader: Defines methods for retrieving the pixel data from an Image or other surface containing pixels. PixelWriter: Defines methods for writing the pixel data of a WritableImage or other surface containing writable pixels. WritableImage: Represents a custom graphical image that is constructed from pixels supplied by the application, and possibly from PixelReader objects from any number of sources, including images read from a file or URL. The API documentation contains lots of information, so go investigate and have fun with these useful new classes! -- Scott Hommel

  • How would I detect if two 2D arrays of any shape collided?

    - by user2104648
    Say there are two or more movable objects of any shape in a 2D plane. Each object has its own 2D boolean array to act as a bounds box, which can range from 10 to 100 pixels on a side. The program reads each pixel from an image that represents the object and sets the corresponding array entry to true (the pixel's alpha is greater than 1) or false (the pixel's alpha is less than 1). Each time one of these objects moves, what would be the most accurate way to test whether it hit another object in Java, using as few APIs/libraries as possible?
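    A common shape for this kind of test, sketched in C++-like code (the same logic ports directly to Java and needs no libraries): reject cheaply with a bounding-box overlap test first, then AND the two boolean masks only over the overlapping rectangle.
        #include <vector>
        #include <algorithm>

        struct Mask {
            int x, y, w, h;                          // position and size in pixels
            std::vector<std::vector<bool>> solid;    // solid[row][col], true where alpha > 1
        };

        bool collides(const Mask& a, const Mask& b)
        {
            int left   = std::max(a.x, b.x);
            int top    = std::max(a.y, b.y);
            int right  = std::min(a.x + a.w, b.x + b.w);
            int bottom = std::min(a.y + a.h, b.y + b.h);
            if (left >= right || top >= bottom)
                return false;                        // bounding boxes don't even overlap
            for (int y = top; y < bottom; ++y)
                for (int x = left; x < right; ++x)
                    if (a.solid[y - a.y][x - a.x] && b.solid[y - b.y][x - b.x])
                        return true;                 // both objects are opaque at the same pixel
            return false;
        }
    The bounding-box rejection means the per-pixel loop only runs for pairs that are actually touching, which keeps the cost low even with many moving objects.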

  • How to configure mspaint on Windows Server 2008R2/ Win 7 to start up with 1-pixel canvas or auto-crop a pasted image?

    - by Fantomas
    I do a lot of screen print capturing, and I have just figured out how to use AutoHotKey to paste screen prints into MS Paint automatically (see: How to paste Print Screen into MS Paint automatically when pressing the "PrtSc" button?). However, one small problem remains: if I grab a screen region with Alt+PrtScr that is only 50x50 pixels, there is extra white margin around it, because MS Paint starts out with a larger canvas by default. How can I make it always start with a 1x1 canvas instead?

  • What LPR arguments do I need to print a 1400x800 pixel image on a 4x6 label?

    - by Nick
    This is driving me nuts. UPS sends our system a 1400x800 GIF image of a shipping label, which is supposed to fit nicely on a 4x6 page. Unfortunately, I can't seem to get the command line options right to make it happen. We're using an Eltron/Zebra 2844 with a network adapter, and printing from our Ubuntu 8.04 server using CUPS. We're using the correct drivers, and test pages print correctly. No matter what I try though, it insists on printing the UPS labels across 6 pages, with a little bit of the label on each page, or way too small. I've tried a bazillion different lpr settings, most of them producing garbage. The closest I've gotten is this: lpr -P Eltron2844 -o natural-scaling=55 -o page-right=0 -o page-left=0 -o landscape -o media="4x6" ./1ZY437560399620027.gif but it causes the image to be too small on the page. It's about an inch too short, and there's a 1/2" margin on both sides. If I bump the scale up to 56, it explodes the image onto two pages, and squashes it. Any ideas?
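    As a sanity check on the numbers (arithmetic only; whether the 2844 driver honours an explicit density is a separate question), a 1400x800 image printed landscape on a 4x6 label needs:
        1400 px / 6 in ≈ 233 ppi along the long edge
        800 px / 4 in = 200 ppi along the short edge
    So forcing the image density to the larger of the two, e.g. replacing natural-scaling with CUPS's image option -o ppi=234, should in principle keep both dimensions inside a single 4x6 page, assuming the printer's printable area really is the full 4 by 6 inches.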

  • Wpf: Why is WriteableBitmap getting slower?

    - by fritz
    There is a simple MSDN example about WriteableBitmap. It shows how to draw a freehand line with the cursor by updating just one pixel while the mouse is pressed and moving over a WPF Image control. writeableBitmap.Lock(); (...set the writeableBitmap.BackBuffer's pixel value...) writeableBitmap.AddDirtyRect(new Int32Rect(column, row, 1, 1)); writeableBitmap.Unlock(); Now I'm trying to understand the following behaviour when moving the mouse pointer very fast: if the image/bitmap size is relatively small, e.g. 800:600 pixels, then the last drawn pixel is always "synchronized" with the mouse pointer's position, i.e. there is no delay and the reaction to mouse movements is very fast. But if the bitmap gets larger, e.g. 1300:1050 pixels, you can notice a delay; the last drawn pixel always appears a bit behind the moving mouse pointer. Since in both cases only one pixel gets updated with "AddDirtyRect", the reaction speed should be independent of the bitmap size!? But it seems that WriteableBitmap gets slower as its size gets larger. Or does the whole bitmap somehow get transferred to the graphics device on every writeableBitmap.Unlock() call, and not only the rectangle area specified in the AddDirtyRect method? fritz

  • RMagick error in Rails with AM Charts

    - by Elliot
    Hi Everyone, I'm using AMCharts and rails. AMCharts uses the Image Magic lib to export an image of the chart. In rails this is done with the gem, RMagic. In a controller this is implemented with the following controller method: def export width = params[:width].to_i height = params[:height].to_i data = {} img = Magick::Image.new(width, height) height.times do |y| row = params["r#{y}"].split(',') row.size.times do |r| pixel = row[r].to_s.split(':') pixel[0] = pixel[0].to_s.rjust(6, '0') if pixel.size == 2 pixel[1].to_i.times do (data[y] ||= []) << pixel[0] end else (data[y] ||= []) << pixel[0] end end width.times do |x| img.pixel_color(x, y, "##{data[y][x]}") end end img.format = "PNG" send_data(img.to_blob , :disposition => 'inline', :type => 'image/png', :filename => "chart.png?#{rand(99999999).to_i}") end When the controller is accessed however, I receive this error in the page: The change you wanted was rejected. Maybe you tried to change something you didn't have access to. And this error in the logs (its running on heroku btw): ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken): /home/heroku_rack/lib/static_assets.rb:9:in `call' /home/heroku_rack/lib/last_access.rb:25:in `call' /home/heroku_rack/lib/date_header.rb:14:in `call' thin (1.0.1) lib/thin/connection.rb:80:in `pre_process' thin (1.0.1) lib/thin/connection.rb:78:in `catch' thin (1.0.1) lib/thin/connection.rb:78:in `pre_process' thin (1.0.1) lib/thin/connection.rb:57:in `process' thin (1.0.1) lib/thin/connection.rb:42:in `receive_data' eventmachine (0.12.6) lib/eventmachine.rb:240:in `run_machine' eventmachine (0.12.6) lib/eventmachine.rb:240:in `run' thin (1.0.1) lib/thin/backends/base.rb:57:in `start' thin (1.0.1) lib/thin/server.rb:150:in `start' thin (1.0.1) lib/thin/controllers/controller.rb:80:in `start' thin (1.0.1) lib/thin/runner.rb:173:in `send' thin (1.0.1) lib/thin/runner.rb:173:in `run_command' thin (1.0.1) lib/thin/runner.rb:139:in `run!' thin (1.0.1) bin/thin:6 /usr/local/bin/thin:20:in `load' /usr/local/bin/thin:20 Rendering /disk1/home/slugs/149903_609c236_eb4f/mnt/public/422.html (422 Unprocessable Entity) Anyone have any idea what's going on here?

  • Computing "average" of two colors

    - by Francisco P.
    This is only marginally programming related - has much more to do w/ colors and their representation. I am working on a very low level app. I have an array of bytes in memory. Those are characters. They were rendered with anti-aliasing: they have values from 0 to 255, 0 being fully transparent and 255 totally opaque (alpha, if you wish). I am having trouble conceiving an algorithm for the rendering of this font. I'm doing the following for each pixel: // intensity is the weight I talked about: 0 to 255 intensity = glyphs[text[i]][x + GLYPH_WIDTH*y]; if (intensity == 255) continue; // Don't draw it, fully transparent else if (intensity == 0) setPixel(x + xi, y + yi, color, base); // Fully opaque, can draw original color else { // Here's the tricky part // Get the pixel in the destination for averaging purposes pixel = getPixel(x + xi, y + yi, base); // transfer is an int for calculations transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.red + (float) pixel.red)/2); // This is my attempt at averaging newPixel.red = (Byte) transfer; transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.green + (float) pixel.green)/2); newPixel.green = (Byte) transfer; // transfer = (int) ((float) ((float) 255.0 - (float) intensity)/255.0 * (((float) color.blue) + (float) pixel.blue)/2); transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.blue + (float) pixel.blue)/2); newPixel.blue = (Byte) transfer; // Set the newpixel in the desired mem. position setPixel(x+xi, y+yi, newPixel, base); } The results, as you can see, are less than desirable. That is a very zoomed in image, at 1:1 scale it looks like the text has a green "aura". Any idea for how to properly compute this would be greatly appreciated. Thanks for your time!
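    For reference, the conventional "source over" blend for one 8-bit channel is out = (src*a + dst*(255-a)) / 255; it never divides the sum of two colours by 2, which is where the green aura in averaging approaches tends to come from. A minimal C-style sketch (assuming a = 255 means the glyph colour fully covers the background; if the data is inverted, as the code's comments rather than the prose suggest, use a = 255 - intensity first):
        // Blend one 8-bit channel: glyph colour 'src' over background 'dst' with coverage 'a'.
        // Integer arithmetic with rounding; the intermediate product fits easily in an int.
        unsigned char blendChannel(unsigned char src, unsigned char dst, unsigned char a)
        {
            return (unsigned char)((src * a + dst * (255 - a) + 127) / 255);
        }

        // Usage per pixel (coverage derived from the glyph intensity under whichever convention applies):
        // newPixel.red   = blendChannel(color.red,   pixel.red,   coverage);
        // newPixel.green = blendChannel(color.green, pixel.green, coverage);
        // newPixel.blue  = blendChannel(color.blue,  pixel.blue,  coverage);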

  • Black Screen: How to set Projection/View Matrix

    - by Lisa
    I have a Windows Phone 8 C#/XAML with DirectX component project. I'm rendering some particles, but each particle is a rectangle versus a square (as I've set the vertices to be positions equally offset from each other). I used an Identity matrix in the view and projection matrix. I decided to add the windows aspect ratio to prevent the rectangles. But now I get a black screen. None of the particles are rendered now. I don't know what's wrong with my matrices. Can anyone see the problem? These are the default matrices in Microsoft's project example. View Matrix: XMVECTOR eye = XMVectorSet(0.0f, 0.7f, 1.5f, 0.0f); XMVECTOR at = XMVectorSet(0.0f, -0.1f, 0.0f, 0.0f); XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f); XMStoreFloat4x4(&m_constantBufferData.view, XMMatrixTranspose(XMMatrixLookAtRH(eye, at, up))); Projection Matrix: void CubeRenderer::CreateWindowSizeDependentResources() { Direct3DBase::CreateWindowSizeDependentResources(); float aspectRatio = m_windowBounds.Width / m_windowBounds.Height; float fovAngleY = 70.0f * XM_PI / 180.0f; if (aspectRatio < 1.0f) { fovAngleY /= aspectRatio; } XMStoreFloat4x4(&m_constantBufferData.projection, XMMatrixTranspose(XMMatrixPerspectiveFovRH(fovAngleY, aspectRatio, 0.01f, 100.0f))); } I've tried modifying them to use cocos2dx's WP8 example. XMMATRIX identityMatrix = XMMatrixIdentity(); float fovy = 60.0f; float aspect = m_windowBounds.Width / m_windowBounds.Height; float zNear = 0.1f; float zFar = 100.0f; float xmin, xmax, ymin, ymax; ymax = zNear * tanf(fovy * XM_PI / 360); ymin = -ymax; xmin = ymin * aspect; xmax = ymax * aspect; XMMATRIX tmpMatrix = XMMatrixPerspectiveOffCenterRH(xmin, xmax, ymin, ymax, zNear, zFar); XMMATRIX projectionMatrix = XMMatrixMultiply(tmpMatrix, identityMatrix); // View Matrix float fEyeX = m_windowBounds.Width * 0.5f; float fEyeY = m_windowBounds.Height * 0.5f; float fEyeZ = m_windowBounds.Height / 1.1566f; float fLookAtX = m_windowBounds.Width * 0.5f; float fLookAtY = m_windowBounds.Height * 0.5f; float fLookAtZ = 0.0f; float fUpX = 0.0f; float fUpY = 1.0f; float fUpZ = 0.0f; XMMATRIX tmpMatrix2 = XMMatrixLookAtRH(XMVectorSet(fEyeX,fEyeY,fEyeZ,0.f), XMVectorSet(fLookAtX,fLookAtY,fLookAtZ,0.f), XMVectorSet(fUpX,fUpY,fUpZ,0.f)); XMMATRIX viewMatrix = XMMatrixMultiply(tmpMatrix2, identityMatrix); XMStoreFloat4x4(&m_constantBufferData.view, viewMatrix); Vertex Shader cbuffer ModelViewProjectionConstantBuffer : register(b0) { //matrix model; matrix view; matrix projection; }; struct VertexInputType { float4 position : POSITION; float2 tex : TEXCOORD0; float4 color : COLOR; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float4 color : COLOR; }; PixelInputType main(VertexInputType input) { PixelInputType output; // Change the position vector to be 4 units for proper matrix calculations. input.position.w = 1.0f; //===================================== // TODO: ADDED for testing input.position.z = 0.0f; //===================================== // Calculate the position of the vertex against the world, view, and projection matrices. //output.position = mul(input.position, model); output.position = mul(input.position, view); output.position = mul(output.position, projection); // Store the texture coordinates for the pixel shader. output.tex = input.tex; // Store the particle color for the pixel shader. 
output.color = input.color; return output; } Before I render the shader, I set the view/projection matrices into the constant buffer void ParticleRenderer::SetShaderParameters() { ViewProjectionConstantBuffer* dataPtr; D3D11_MAPPED_SUBRESOURCE mappedResource; DX::ThrowIfFailed(m_d3dContext->Map(m_constantBuffer.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource)); dataPtr = (ViewProjectionConstantBuffer*)mappedResource.pData; dataPtr->view = m_constantBufferData.view; dataPtr->projection = m_constantBufferData.projection; m_d3dContext->Unmap(m_constantBuffer.Get(), 0); // Now set the constant buffer in the vertex shader with the updated values. m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf() ); // Set shader texture resource in the pixel shader. m_d3dContext->PSSetShaderResources(0, 1, &m_textureView); } Nothing, black screen... I tried so many different look at, eye, and up vectors. I tried transposing the matrices. I've set the particle center position to always be (0, 0, 0), I tried different positions too, just to make sure they're not being rendered offscreen.
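    One difference that stands out between the two snippets above (worth ruling out first, though it is not guaranteed to be the whole problem): the original sample stores its matrices transposed with XMMatrixTranspose before they go into the constant buffer, while the replacement stores viewMatrix untransposed. If the HLSL side still expects the transposed layout, positions can easily end up entirely outside the clip volume, which shows up as a black screen. A minimal change to test that:
        XMStoreFloat4x4(&m_constantBufferData.view,       XMMatrixTranspose(viewMatrix));
        XMStoreFloat4x4(&m_constantBufferData.projection, XMMatrixTranspose(projectionMatrix));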

  • In GLSL is it possible to offset vertices based on height map colour?

    - by Rob
    I am attempting to generate some terrain based upon a heightmap. I have generated a 32 x 32 grid and a corresponding height map - In my vertex shader I am trying to offset the position of the Y axis based upon the colour of the heightmap, white vertices being higher than black ones. //Vertex Shader Code #version 330 uniform mat4 modelMatrix; uniform mat4 viewMatrix; uniform mat4 projectionMatrix; uniform sampler2D heightmap; layout (location=0) in vec4 vertexPos; layout (location=1) in vec4 vertexColour; layout (location=3) in vec2 vertexTextureCoord; layout (location=4) in float offset; out vec4 fragCol; out vec4 fragPos; out vec2 fragTex; void main() { // Retreive the current pixel's colour vec4 hmColour = texture(heightmap,vertexTextureCoord); // Offset the y position by the value of current texel's colour value ? vec4 offset = vec4(vertexPos.x , vertexPos.y + hmColour.r, vertexPos.z , 1.0); // Final Position gl_Position = projectionMatrix * viewMatrix * modelMatrix * offset; // Data sent to Fragment Shader. fragCol = vertexColour; fragPos = vertexPos; fragTex = vertexTextureCoord; } However the code I have produced only creates a grid with none of the y vertices higher than any others. This is the C++ code that generates the grid and texture co-orientates which I believe to be correct as the texture is mapped to the grid, hence the white blob in the middle. The grid-lines are generated in the fragment shader, sorry for any confusion. I have tried multiplying the r value of hmColour by 1000 unfortunately that had no effect. The only other problem it could be is that the texture coordinate data is incorrect ? for (int z = 0; z < MAP_Z ; z++) { for(int x = 0; x < MAP_X ; x++) { //Generate Vertex Buffer vertexData[iVertex++] = float (x) * MAP_X; vertexData[iVertex++] = 0; vertexData[iVertex++] = -(float) (z) * MAP_Z; //Colour Buffer NOT NEEDED colourData[iColour++] = 255.0f; // R colourData[iColour++] = 1.0f; // G colourData[iColour++] = 0.0f; // B //Texture Buffer textureData[iTexture++] = (float ) x * (1.0f / MAP_X); textureData[iTexture++] = (float ) z * (1.0f / MAP_Z); } } The heightmap texture I am trying to use appears like so (without grid-lines). This is the corresponding fragment shader // Fragment Shader Code #version 330 uniform sampler2D hmTexture; layout (location=0) out vec4 fragColour; in vec2 fragTex; in vec4 pos; void main(void) { vec2 line = fragTex * 32; // Without Gridlines fragColour = texture(hmTexture,fragTex); // With grid lines // + mix(vec4(0.0, 0.0, 1.0, 0.0), vec4(1.0, 1.0, 1.0, 1.0), // smoothstep(0.05,fract(line.y), 0.99) * smoothstep(0.05,fract(line.x),0.99)); }

  • How can I specify interleaved vertex attributes and vertex indices?

    - by freefallr
    I'm writing a generic ShaderProgram class that compiles a set of Shader objects, passes args to the shader (like vertex position, vertex normal, tex coords etc), then links the shader components into a shader program, for use with glDrawArrays. My vertex data already exists in a VertexBufferObject that uses the following data structure to create a vertex buffer: class CustomVertex { public: float m_Position[3]; // x, y, z // offset 0, size = 3*sizeof(float) float m_TexCoords[2]; // u, v // offset 3*sizeof(float), size = 2*sizeof(float) float m_Normal[3]; // nx, ny, nz; float colour[4]; // r, g, b, a float padding[20]; // padded for performance }; I've already written a working VertexBufferObject class that creates a vertex buffer object from an array of CustomVertex objects. This array is said to be interleaved. It renders successfully with the following code: void VertexBufferObject::Draw() { if( ! m_bInitialized ) return; glBindBuffer( GL_ARRAY_BUFFER, m_nVboId ); glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, m_nVboIdIndex ); glEnableClientState( GL_VERTEX_ARRAY ); glEnableClientState( GL_TEXTURE_COORD_ARRAY ); glEnableClientState( GL_NORMAL_ARRAY ); glEnableClientState( GL_COLOR_ARRAY ); glVertexPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 0) ); glTexCoordPointer(3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 12)); glNormalPointer(GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 20)); glColorPointer(3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 32)); glDrawElements( GL_TRIANGLES, m_nNumIndices, GL_UNSIGNED_INT, ((char*)NULL + 0) ); glDisableClientState( GL_VERTEX_ARRAY ); glDisableClientState( GL_TEXTURE_COORD_ARRAY ); glDisableClientState( GL_NORMAL_ARRAY ); glDisableClientState( GL_COLOR_ARRAY ); glBindBuffer( GL_ARRAY_BUFFER, 0 ); glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 ); } Back to the Vertex Array Object though. My code for creating the Vertex Array object is as follows. This is performed before the ShaderProgram runtime linking stage, and no glErrors are reported after its steps. // Specify the shader arg locations (e.g. their order in the shader code) for( int n = 0; n < vShaderArgs.size(); n ++) glBindAttribLocation( m_nProgramId, n, vShaderArgs[n].sFieldName.c_str() ); // Create and bind to a vertex array object, which stores the relationship between // the buffer and the input attributes glGenVertexArrays( 1, &m_nVaoHandle ); glBindVertexArray( m_nVaoHandle ); // Enable the vertex attribute array (we're using interleaved array, since its faster) glBindBuffer( GL_ARRAY_BUFFER, vShaderArgs[0].nVboId ); glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vShaderArgs[0].nVboIndexId ); // vertex data for( int n = 0; n < vShaderArgs.size(); n ++ ) { glEnableVertexAttribArray(n); glVertexAttribPointer( n, vShaderArgs[n].nFieldSize, GL_FLOAT, GL_FALSE, vShaderArgs[n].nStride, (GLubyte *) NULL + vShaderArgs[n].nFieldOffset ); AppLog::Ref().OutputGlErrors(); } This doesn't render correctly at all. I get a pattern of white specks onscreen, in the shape of the terrain rectangle, but there are no regular lines etc. Here's the code I use for rendering: void ShaderProgram::Draw() { using namespace AntiMatter; if( ! m_nShaderProgramId || ! 
m_nVaoHandle ) { AppLog::Ref().LogMsg("ShaderProgram::Draw() Couldn't draw object, as initialization of ShaderProgram is incomplete"); return; } glUseProgram( m_nShaderProgramId ); glBindVertexArray( m_nVaoHandle ); glDrawArrays( GL_TRIANGLES, 0, m_nNumTris ); glBindVertexArray(0); glUseProgram(0); } Can anyone see errors or omissions in either the VAO creation code or rendering code? thanks!
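    Two things that may be worth checking, sketched against the CustomVertex layout above (an illustration of the usual setup, not a confirmed diagnosis). First, the glVertexAttribPointer offsets can be derived with offsetof instead of hand-counted bytes. Second, the shader path draws with glDrawArrays(GL_TRIANGLES, 0, m_nNumTris), but glDrawArrays expects a vertex count, not a triangle count, and it ignores the index buffer that the fixed-function path used through glDrawElements.
        #include <cstddef>   // for offsetof

        // Assumes attribute locations 0..3 were bound to position/texcoord/normal/colour, in that order.
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(CustomVertex), (void*)offsetof(CustomVertex, m_Position));
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(CustomVertex), (void*)offsetof(CustomVertex, m_TexCoords));
        glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(CustomVertex), (void*)offsetof(CustomVertex, m_Normal));
        glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, sizeof(CustomVertex), (void*)offsetof(CustomVertex, colour));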

  • How to update a user created Bitmap in the Windows API

    - by gamernb
    In my code I quickly generate images on the fly, and I want to display them as quickly as possible. So the first time I create my image, I create a new BITMAP, but instead of deleting the old one and creating a new one for every subsequent image, I just want to copy my data back into the existing one. Here is my code to do both the initial creation and the updating. The creation works just fine, but the updating one doesn't work. BITMAPINFO bi; HBITMAP Frame::CreateBitmap(HWND hwnd, int tol1, int tol2, bool useWhite, bool useBackground) { ZeroMemory(&bi.bmiHeader, sizeof(BITMAPINFOHEADER)); bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER); bi.bmiHeader.biWidth = width; bi.bmiHeader.biHeight = height; bi.bmiHeader.biPlanes = 1; bi.bmiHeader.biBitCount = 24; bi.bmiHeader.biCompression = BI_RGB; ZeroMemory(bi.bmiColors, sizeof(RGBQUAD)); // Allocate memory for bitmap bits int size = height * width; Pixel* newPixels = new Pixel[size]; // Recompute the output //memcpy(newPixels, pixels, size*3); ComputeOutput(newPixels, tol1, tol2, useWhite, useBackground); HBITMAP bitmap = CreateDIBitmap(GetDC(hwnd), &bi.bmiHeader, CBM_INIT, newPixels, &bi, DIB_RGB_COLORS); delete newPixels; return bitmap; } and void Frame::UpdateBitmap(HWND hwnd, HBITMAP bitmap, int tol1, int tol2, bool useWhite, bool useBackground) { ZeroMemory(&bi.bmiHeader, sizeof(BITMAPINFOHEADER)); bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER); HDC hdc = GetDC(hwnd); if(!GetDIBits(hdc, bitmap, 0, bi.bmiHeader.biHeight, NULL, &bi, DIB_RGB_COLORS)) MessageBox(NULL, "Can't get base image info!", "Error!", MB_ICONEXCLAMATION | MB_OK); // Allocate memory for bitmap bits int size = height * width; Pixel* newPixels = new Pixel[size]; // Recompute the output //memcpy(newPixels, pixels, size*3); ComputeOutput(newPixels, tol1, tol2, useWhite, useBackground); // Push back to windows if(!SetDIBits(hdc, bitmap, 0, bi.bmiHeader.biHeight, newPixels, &bi, DIB_RGB_COLORS)) MessageBox(NULL, "Can't set pixel data!", "Error!", MB_ICONEXCLAMATION | MB_OK); delete newPixels; } where the Pixel struct is just this: struct Pixel { unsigned char b, g, r; }; Why does my update function not work. I always get the MessageBox for "Can't set pixel data!" I used code similar to this when I was loading in the original bitmap from file, then editing the data, but now when I manually create it, it doesn't work.
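    One thing that might be worth trying (a hedged guess, not a confirmed fix): in UpdateBitmap the BITMAPINFO passed to SetDIBits is whatever GetDIBits filled in, which describes the bitmap's own device format rather than the 24-bit Pixel data being pushed in. Describing the source data explicitly, exactly as CreateBitmap() already does, removes that mismatch:
        ZeroMemory(&bi.bmiHeader, sizeof(BITMAPINFOHEADER));
        bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bi.bmiHeader.biWidth       = width;
        bi.bmiHeader.biHeight      = height;
        bi.bmiHeader.biPlanes      = 1;
        bi.bmiHeader.biBitCount    = 24;
        bi.bmiHeader.biCompression = BI_RGB;

        if (!SetDIBits(hdc, bitmap, 0, height, newPixels, &bi, DIB_RGB_COLORS))
            MessageBox(NULL, "Can't set pixel data!", "Error!", MB_ICONEXCLAMATION | MB_OK);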

  • GDI+ Load a jpg and save as 24bit png problem

    - by wookey
    Problem Hello all! I have this code which takes my jpg image loops through altering pixels and finally saving it as a png type. The problem is that the resulting image has a bit depth of 32 bits. I need it to be 24 bit, can any one shiny some light on the correct method of setting it? Am I along the right tracks looking at setting the pixel format to PixelFormat24bppRGB? Code static inline void Brighten(Gdiplus::Bitmap* img) { int width = img->GetWidth()/8,height = img->GetHeight(), max = (width*height),r,g,b; Gdiplus::Color pixel; for(int a = 0,x = 0, y = -1; a < max; ++a) { x = a%width; if(x == 0) ++y; img->GetPixel(x,y,&pixel); r = pixel.GetR(); g = pixel.GetG(); b = pixel.GetB(); if (r > 245) r = 245; if (g > 245) g = 245; if (b > 245) b = 245; r = 10; g = 10; b = 10; pixel = Gdiplus::Color(r,g,b); img->SetPixel(x,y,pixel);; } } ULONG_PTR m_dwToken = 0; Gdiplus::GdiplusStartupInput input; Gdiplus::GdiplusStartupOutput output; Gdiplus::GdiplusStartup( &m_dwToken, &input, &output ); USES_CONVERSION_EX; Gdiplus::ImageCodecInfo* pEncoders = static_cast< Gdiplus::ImageCodecInfo* >( _ATL_SAFE_ALLOCA(1040, _ATL_SAFE_ALLOCA_DEF_THRESHOLD)); Gdiplus::DllExports::GdipGetImageEncoders(5, 1040, pEncoders ); CLSID clsidEncoder = pEncoders[4].Clsid; Gdiplus::Bitmap img1((CT2W)L"IMG_1.JPG"); Brighten(&img1); img1.Save((CT2W)L"IMG_1_R3.PNG",&clsidEncoder,NULL); Thanks in advance!
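    One hedged option (not verified against every encoder): GDI+ can clone a bitmap into an explicit pixel format before saving, so making a PixelFormat24bppRGB copy and saving that is a way to keep the PNG at 24 bits, e.g.:
        Gdiplus::Bitmap img1((CT2W)L"IMG_1.JPG");
        Brighten(&img1);
        Gdiplus::Bitmap* img24 = img1.Clone(0, 0, (INT)img1.GetWidth(), (INT)img1.GetHeight(), PixelFormat24bppRGB);
        if (img24 != NULL)
        {
            img24->Save((CT2W)L"IMG_1_R3.PNG", &clsidEncoder, NULL);
            delete img24;
        }
    The clone carries no alpha channel, so the PNG encoder has no 32-bit data to preserve in the first place.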

  • Formula for replicating glTexGen in OpenGL ES 2.0 GLSL

    - by visualjc
    I also posted this on the main StackExchange, but this seems like a better place, but for give me for the double post if it shows up twice. I have been trying for several hours to implement a GLSL replacement for glTexGen with GL_OBJECT_LINEAR. For OpenGL ES 2.0. In Ogl GLSL there is the gl_TextureMatrix that makes this easier, but thats not available on OpenGL ES 2.0 / OpenGL ES Shader Language 1.0 Several sites have mentioned that this should be "easy" to do in a GLSL vert shader. But I just can not get it to work. My hunch is that I'm not setting the planes up correctly, or I'm missing something in my understanding. I've pored over the web. But most sites are talking about projected textures, I'm just looking to create UV's based on planar projection. The models are being built in Maya, have 50k polygons and the modeler is using planer mapping, but Maya will not export the UV's. So I'm trying to figure this out. I've looked at the glTexGen manpage information: g = p1xo + p2yo + p3zo + p4wo What is g? Is g the value of s in the texture2d call? I've looked at the site: http://www.opengl.org/wiki/Mathematics_of_glTexGen Another size explains the same function: coord = P1*X + P2*Y + P3*Z + P4*W I don't get how coord (an UV vec2 in my mind) is equal to the dot product (a scalar value)? Same problem I had before with "g". What do I set the plane to be? In my opengl c++ 3.0 code, I set it to [0, 0, 1, 0] (basically unit z) and glTexGen works great. I'm still missing something. My vert shader looks basically like this: WVPMatrix = World View Project Matrix. POSITION is the model vertex position. varying vec4 kOutBaseTCoord; void main() { gl_Position = WVPMatrix * vec4(POSITION, 1.0); vec4 sPlane = vec4(1.0, 0.0, 0.0, 0.0); vec4 tPlane = vec4(0.0, 1.0, 0.0, 0.0); vec4 rPlane = vec4(0.0, 0.0, 0.0, 0.0); vec4 qPlane = vec4(0.0, 0.0, 0.0, 0.0); kOutBaseTCoord.s = dot(vec4(POSITION, 1.0), sPlane); kOutBaseTCoord.t = dot(vec4(POSITION, 1.0), tPlane); //kOutBaseTCoord.r = dot(vec4(POSITION, 1.0), rPlane); //kOutBaseTCoord.q = dot(vec4(POSITION, 1.0), qPlane); } The frag shader precision mediump float; uniform sampler2D BaseSampler; varying mediump vec4 kOutBaseTCoord; void main() { //gl_FragColor = vec4(kOutBaseTCoord.st, 0.0, 1.0); gl_FragColor = texture2D(BaseSampler, kOutBaseTCoord.st); } I've tried texture2DProj in frag shader Here are some of the other links I've looked up http://www.gamedev.net/topic/407961-texgen-not-working-with-glsl-with-fixed-pipeline-is-ok/ Thank you in advance.
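    For what it's worth, in the glTexGen formula each g is a single generated coordinate component (one of s, t, r, q), and each component has its own plane, so the dot product produces one scalar per component rather than a whole vec2. With the planes in the shader above that works out to:
        s = dot(vec4(POSITION, 1.0), (1, 0, 0, 0)) = POSITION.x
        t = dot(vec4(POSITION, 1.0), (0, 1, 0, 0)) = POSITION.y
    i.e. the generated UV is just object-space (x, y). The fixed-function code that worked used the unit-z plane [0, 0, 1, 0], which makes the corresponding component equal to POSITION.z, so reproducing the old behaviour means feeding the shader the same plane vectors that were passed to glTexGen.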

  • error X3501: 'main': entrypoint not found

    - by Pasha
    I am trying to learn DX10 by following this tutorial. However, my shader won't compile. Below is the detailed error message. Build started 9/10/2012 10:22:46 PM. 1>Project "D:\code\dx\Engine\Engine\Engine.vcxproj" on node 2 (Build target(s)). C:\Program Files (x86)\Windows Kits\8.0\bin\x86\fxc.exe /nologo /E"main" /Fo "D:\code\dx\Engine\Debug\color.cso" /Od /Zi color.fx 1>FXC : error X3501: 'main': entrypoint not found compilation failed; no code produced 1>Done Building Project "D:\code\dx\Engine\Engine\Engine.vcxproj" (Build target(s)) -- FAILED. Build FAILED. Time Elapsed 00:00:00.05 I can easily compile the downloaded code, but I want to know how to fix this error myself. My color.fx looks like this //////////////////////////////////////////////////////////////////////////////// // Filename: color.fx //////////////////////////////////////////////////////////////////////////////// ///////////// // GLOBALS // ///////////// matrix worldMatrix; matrix viewMatrix; matrix projectionMatrix; ////////////// // TYPEDEFS // ////////////// struct VertexInputType { float4 position : POSITION; float4 color : COLOR; }; struct PixelInputType { float4 position : SV_POSITION; float4 color : COLOR; }; //////////////////////////////////////////////////////////////////////////////// // Vertex Shader //////////////////////////////////////////////////////////////////////////////// PixelInputType ColorVertexShader(VertexInputType input) { PixelInputType output; // Change the position vector to be 4 units for proper matrix calculations. input.position.w = 1.0f; // Calculate the position of the vertex against the world, view, and projection matrices. output.position = mul(input.position, worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); // Store the input color for the pixel shader to use. output.color = input.color; return output; } //////////////////////////////////////////////////////////////////////////////// // Pixel Shader //////////////////////////////////////////////////////////////////////////////// float4 ColorPixelShader(PixelInputType input) : SV_Target { return input.color; } //////////////////////////////////////////////////////////////////////////////// // Technique //////////////////////////////////////////////////////////////////////////////// technique10 ColorTechnique { pass pass0 { SetVertexShader(CompileShader(vs_4_0, ColorVertexShader())); SetPixelShader(CompileShader(ps_4_0, ColorPixelShader())); SetGeometryShader(NULL); } }
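    A hedged reading of the build log above: fxc is invoked with /E"main", but color.fx contains no function named main; its entry points are ColorVertexShader and ColorPixelShader, and the file is an effect with a technique10 block rather than a single shader. Two usual ways out (assumptions, not verified against this exact project): compile the file as an effect, along the lines of
        fxc /nologo /T fx_4_0 /Fo color.cso color.fx
    or, if the tutorial loads color.fx at runtime through the D3DX10 effect API (as DX10 effect tutorials typically do), exclude the .fx file from the Visual Studio HLSL build step so fxc never touches it.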

  • GLSL compile error when accessing an array with compile-time constant index

    - by Benlitz
    I have this shader that works well on my computer (using an ATI HD 5700). I have a loop iterating between two constant values, which is, afaik, acceptable in a glsl shader. I write stuff in two arrays in this loop. #define NB_POINT_LIGHT 2 ... varying vec3 vVertToLight[NB_POINT_LIGHT]; varying vec3 vVertToLightWS[NB_POINT_LIGHT]; ... void main() { ... for (int i = 0; i < NB_POINT_LIGHT; ++i) { if (bPointLightUse[i]) { vVertToLight[i] = ConvertToTangentSpace(ShPointLightData[i].Position - WorldPos.xyz); vVertToLightWS[i] = ShPointLightData[i].Position - WorldPos.xyz; } } ... } I tried my program on another computer equipped with an nVidia GTX 560 Ti, and it fails to compile my shader. I get the following errors (94 and 95 are the lines of the two affectations) when calling glLinkProgram: Vertex info ----------- 0(94) : error C5025: lvalue in assignment too complex 0(95) : error C5025: lvalue in assignment too complex I think my code is valid, I don't know if this comes from a compiler bug, a conversion of my shader to another format from the compiler (nvidia looks to convert it to CG), or if I just missed something. I already tried to remove the if (bPointLightUse[i]) statement and I still have the same error. However, if I just write this: vVertToLight[0] = ConvertToTangentSpace(ShPointLightData[0].Position - WorldPos.xyz); vVertToLightWS[0] = ShPointLightData[0].Position - WorldPos.xyz; vVertToLight[1] = ConvertToTangentSpace(ShPointLightData[1].Position - WorldPos.xyz); vVertToLightWS[1] = ShPointLightData[1].Position - WorldPos.xyz; Then I don't have the error anymore, but it's really unconvenient so I would prefer to keep something loop-based. Here is the more detailled config that works: Vendor: ATI Technologies Inc. Renderer: ATI Radeon HD 5700 Series Version: 4.1.10750 Compatibility Profile Context Shading Language version: 4.10 And here is the more detailed config that doesn't work (should also be compatibility profile, although not indicated): Vendor: NVIDIA Corporation Renderer: GeForce GTX 560 Ti/PCI/SSE2 Version: 4.1.0 Shading Language version: 4.10 NVIDIA via Cg compiler

  • Unity custom shaders and z-fighting

    - by Heisenbug
    I've just read a chapter of Unity iOS Essentials by Robert Wiebe. It shows a solution for handling a z-fighting problem that occurs while rendering a street on a plane with the same y offset. Basically it modifies the Normal-Diffuse shader provided by Unity, specifying the (texture?) offset as -1, -1. Here's basically what the shader looks like: Shader "Custom/ModifiedNormalDiffuse" { Properties { _Color ("Main Color", Color) = (1,1,1,1) _MainTex ("Base (RGB)", 2D) = "white" {} } SubShader { Offset -1,-1 //THIS IS THE ADDED LINE Tags { "RenderType"="Opaque" } LOD 200 CGPROGRAM #pragma surface surf Lambert sampler2D _MainTex; fixed4 _Color; struct Input { float2 uv_MainTex; }; void surf (Input IN, inout SurfaceOutput o) { half4 c = tex2D (_MainTex, IN.uv_MainTex) *_Color; o.Albedo = c.rgb; o.Alpha = c.a; } ENDCG } FallBack "Diffuse" } Ok. That's simple and it works. The author says about it: ...we could use a copy of the shader that draws the road at an Offset of -1, -1 so that whenever the two textures are drawn, the road is always drawn last. I don't know Cg or GLSL, but I have a little bit of experience with HLSL. Anyway I can't figure out what exactly is going on. Could anyone explain to me what exactly Offset does, and how it solves z-fighting problems?
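    A short note on what Offset actually is (standard depth-bias behaviour, not something Unity invented): it is a polygon/depth offset, not a texture offset. The two numbers are factor and units, and the depth of every fragment drawn by that SubShader is nudged by roughly
        offset = factor * DZ + units * r
    where DZ is the polygon's depth slope and r is the smallest difference the depth buffer can resolve. With Offset -1,-1 the road's depth values are pulled slightly toward the camera, so wherever the road and the ground plane are coplanar, the road now wins the depth test deterministically instead of flickering; that is why it removes the z-fighting.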

  • VBO and shaders confusion, what's their connection?

    - by Jeffrey
    Considering OpenGL 2.1 VBOs and 1.20 GLSL shaders: When creating an entity like "Zombie", is it good to initialize just the VBO buffer with the data once and do N glDrawArrays() calls per each N zombies? Is there a more efficient way? (With a single call we cannot pass different uniforms to the shader to calculate an offset, see point 3) When dealing with logical object (player, tree, cube etc), should I always use the same shader or should I customize (or be able to customize) the shaders per each object? Considering an entity class, should I create and define the shader at object initialization? When having a movable object such as a human, is there any more powerful way to deal with its coordinates than to initialize its VBO object at 0,0 and define an uniform offset to pass to the shader to calculate its real position? Could you make an example of the Data Oriented Design on creating a generic zombie class? Is the following good? Zombielist class: class ZombieList { GLuint vbo; // generic zombie vertex model std::vector<color>; // object default color std::vector<texture>; // objects textures std::vector<vector3D>; // objects positions public: unsigned int create(); // return object id void move(unsigned int objId, vector3D offset); void rotate(unsigned int objId, float angle); void setColor(unsigned int objId, color c); void setPosition(unsigned int objId, color c); void setTexture(unsigned int, unsigned int); ... void update(Player*); // move towards player, attack if near } Example: Player p; Zombielist zl; unsigned int first = zl.create(); zl.setPosition(first, vector3D(50, 50)); zl.setTexture(first, texture("zombie1.png")); ... while (running) { // main loop ... zl.update(&p); zl.draw(); // draw every zombie }
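    On points 1 and 3, a minimal GL 2.1-style sketch of the approach the question describes (shared model VBO, one uniform offset per zombie, N draw calls); the identifier names, and the assumption that vector3D exposes x/y/z members, are invented for the example:
        GLint offsetLoc = glGetUniformLocation(program, "uOffset");   // matches "uniform vec3 uOffset;" in the vertex shader

        glUseProgram(program);
        glBindBuffer(GL_ARRAY_BUFFER, zombieVbo);
        // ... set up vertex pointers once for the shared zombie model ...

        for (size_t i = 0; i < positions.size(); ++i) {
            const vector3D& p = positions[i];             // per-zombie world position
            glUniform3f(offsetLoc, p.x, p.y, p.z);        // the vertex shader adds uOffset to each vertex
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);   // same vertex data, drawn once per zombie
        }
    The uniform-offset route keeps one copy of the mesh in memory; the cost is one state change and one draw call per instance, which is usually acceptable until the instance count gets into the thousands.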

  • How does the AlphaBlend BlendState work in XNA when accumulating light into a RenderTarget?

    - by cubrman
    I am using a Deferred Rendering engine from Catalin Zima's tutorial: his lighting shader returns the color of the light in the RGB channels and the specular component in the alpha channel. Here is how light gets accumulated: Game.GraphicsDevice.SetRenderTarget(LightRT); Game.GraphicsDevice.Clear(Color.Transparent); Game.GraphicsDevice.BlendState = BlendState.AlphaBlend; // Continuously draw 3d spheres with lighting pixel shader. ... Game.GraphicsDevice.BlendState = BlendState.Opaque; MSDN states that the AlphaBlend field of the BlendState class uses the following formula for alpha blending: (source × Blend.SourceAlpha) + (destination × Blend.InvSourceAlpha), where "source" is the color of the pixel returned by the shader and "destination" is the color of the pixel in the rendertarget. My question is: why are my colors accumulated correctly in the Light rendertarget even when the new pixels' alphas equal zero? As a quick sanity check I ran the following code in the light's pixel shader: float specularLight = 0; float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb,specularLight); if (light4.a == 0) light4 = 0; return light4; This prevents lighting from getting accumulated and, subsequently, drawn on the screen. But when I do the following: float specularLight = 0; float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb,specularLight); return light4; The light is accumulated and drawn exactly where it needs to be. What am I missing? According to the formula above, (source x 0) + (destination x 1) should equal destination, so the "LightRT" rendertarget must not change when I draw light spheres into it! It feels like the GPU is using the Additive blend instead: (source × Blend.One) + (destination × Blend.One)
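    A possible resolution worth double-checking against the XNA 4.0 documentation: the built-in BlendState.AlphaBlend is configured for premultiplied alpha, i.e. ColorSourceBlend = Blend.One rather than Blend.SourceAlpha (only the destination factor is InverseSourceAlpha). Under that state the observed behaviour is exactly what the arithmetic predicts:
        (source x 1) + (destination x (1 - 0)) = source + destination
    which is additive whenever the source alpha is 0, so the lights accumulate, and zeroing light4 when light4.a == 0 kills them. The classic straight-alpha formula quoted from MSDN corresponds to BlendState.NonPremultiplied, which does use Blend.SourceAlpha.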

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas Tool( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas, however, when I try to render the mesh object using the atlas the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube: struct sVertexPosNormTex { D3DXVECTOR3 vPos, vNorm; D3DXVECTOR2 vUV; sVertexPosNormTex(){} sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv ) { vPos = v; vNorm = n; vUV = uv; } ~sVertexPosNormTex() { } }; // create a light map texture to fill programatically hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pLightmap ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr ); return hr; } // get the zero level surface from the texture IDirect3DSurface9 *pS = NULL; pLightmap->GetSurfaceLevel( 0, &pS ); // clear surface pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) ); // load a sample mesh DWORD dwcMaterials = 0; LPD3DXBUFFER pMaterialBuffer = NULL; V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice, &pAdjacency, &pMaterialBuffer, NULL, &dwcMaterials, &g_pMesh ) ); // generate adjacency DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ]; g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency ); // create light map coordinates LPD3DXMESH pMesh = NULL; LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL; FLOAT resultStretch = 0; UINT numCharts = 0; hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency, NULL, NULL, NULL, NULL, NULL, 0, &pMesh, &pFacePartitioning, &pVertexRemapArray, &resultStretch, &numCharts ); if( SUCCEEDED( hr ) ) { // release and set mesh SAFE_RELEASE( g_pMesh ); g_pMesh = pMesh; // write mesh to file hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0, ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(), NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr ); } // fill the the light map hr = BuildLightmap( pS, g_pMesh ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr ); } } else { DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr ); } SAFE_RELEASE( pS ); SAFE_DELETE_ARRAY( pdwAdjacency ); SAFE_RELEASE( pFacePartitioning ); SAFE_RELEASE( pVertexRemapArray ); SAFE_RELEASE( pMaterialBuffer ); Here is code to fill lightmap texture: HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh ) { HRESULT hr = S_OK; // validate lightmap texture surface and mesh if( !pS || !pMesh ) return E_POINTER; // lock the mesh vertex buffer sVertexPosNormTex *pV = NULL; pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV ); // lock the mesh index buffer WORD *pI = NULL; pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI ); // get the lightmap texture surface description D3DSURFACE_DESC desc; pS->GetDesc( &desc ); // lock the surface rect to fill with color data D3DLOCKED_RECT rct; hr = pS->LockRect( &rct, NULL, 0 ); if( FAILED( hr ) ) { DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr ); return hr; } // iterate the pixels of the lightmap texture // check each pixel to see if it lies between the uv coordinates of a cube face BYTE *pBuffer = ( BYTE* )rct.pBits; for( UINT y = 0; y < desc.Height; ++y ) { BYTE* pBufferRow = ( BYTE* )pBuffer; for( UINT x = 0; x 
< desc.Width * 4; x+=4 ) { // determine the pixel's uv coordinate D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f, y / ( float )desc.Height + 0.5f / 128.0f ); // for each face of the mesh // check to see if the pixel lies within the face's uv coordinates for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i +=3 ) { sVertexPosNormTex v[ 3 ]; v[ 0 ] = pV[ pI[ i + 0 ] ]; v[ 1 ] = pV[ pI[ i + 1 ] ]; v[ 2 ] = pV[ pI[ i + 2 ] ]; if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) ) { // the pixel lies b/t the uv coordinates of a cube face // light contribution functions aren't needed yet //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos, v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ); //D3DXVECTOR3 vNormal = v[ 0 ].vNorm; // set the color of this pixel red( for demo ) BYTE ba[] = { 0, 0, 255, 255, }; //ComputeContribution( vPos, vNormal, g_sLight, ba ); // copy the byte array into the light map texture memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) ); } } } // go to next line of the texture pBuffer += rct.Pitch; } // unlock the surface rect pS->UnlockRect(); // unlock mesh vertex and index buffers pMesh->UnlockIndexBuffer(); pMesh->UnlockVertexBuffer(); // write the surface to file hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL ); if( FAILED( hr ) ) DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr ); return hr; } bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1, const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p ) { // compute vectors D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0; float f00 = D3DXVec2Dot( &v0, &v0 ); float f01 = D3DXVec2Dot( &v0, &v1 ); float f02 = D3DXVec2Dot( &v0, &v2 ); float f11 = D3DXVec2Dot( &v1, &v1 ); float f12 = D3DXVec2Dot( &v1, &v2 ); // Compute barycentric coordinates float invDenom = 1 / ( f00 * f11 - f01 * f01 ); float fU = ( f11 * f02 - f01 * f12 ) * invDenom; float fV = ( f00 * f12 - f01 * f02 ) * invDenom; // Check if point is in triangle if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) ) return true; return false; } Screenshot Lightmap I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates...for example, here are the lightmap uv coordinates( generated by D3DXUVAtlasCreate() ) for a specific face( tri ) within the mesh, keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture: v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 ); v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 ); v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 ); the lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are: float halfPixel = 0.5 / 128 = 0.00390625; D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel ); will the mapping and sampling of the lightmap texture will require that an offset be taken into account or that the uv coordinates are snapped to the pixel centers..? ...Any ideas on the best way to approach this situation would be appreciated...What are the common practices?
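    On the closing question about common practices (a hedged sketch, not a drop-in fix): the usual cure for visible seams is to fill the gutter around each chart, so that bilinear filtering at chart borders blends into a copy of the chart's own colour instead of the black clear colour. The fGutter argument to D3DXUVAtlasCreate (the 3.5f above) reserves that spacing, but the reserved texels still have to be written. Assuming a parallel 'written' flag array is filled in the texel loop above, a one-pass dilation over the locked surface could look like:
        BYTE* base = (BYTE*)rct.pBits;
        for (UINT y = 0; y < desc.Height; ++y)
        {
            for (UINT x = 0; x < desc.Width; ++x)
            {
                if (written[y * desc.Width + x])
                    continue;                                   // chart texel, leave it alone
                bool filled = false;
                for (int dy = -1; dy <= 1 && !filled; ++dy)
                {
                    for (int dx = -1; dx <= 1 && !filled; ++dx)
                    {
                        int nx = (int)x + dx, ny = (int)y + dy;
                        if (nx < 0 || ny < 0 || nx >= (int)desc.Width || ny >= (int)desc.Height)
                            continue;
                        if (written[ny * desc.Width + nx])      // copy a neighbouring chart texel
                        {
                            memcpy(base + y * rct.Pitch + x * 4, base + ny * rct.Pitch + nx * 4, 4);
                            filled = true;
                        }
                    }
                }
            }
        }
    Repeating the pass a few times widens the skirt; combined with sampling the lightmap at half-texel-centred UVs, this is what usually makes the chart boundaries disappear.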
