Search Results

Search found 1587 results on 64 pages for 'pixel reaper'.

Page 22/64

  • Suggestions for implementing a dynamic 2D level

    - by Wouter
    I am working on a game that needs a level that is completely generated. Currently my approach is to draw textures for the levels pixel by pixel during the game (in XNA with SpriteBatch). Unfortunately this is too intensive: the game drops frames even when I only draw one level texture per draw cycle. Here is an example of the current prototype. It is a simple sidescroller with the avatar swimming through a cave. The shape of this cave will alter throughout the level (textures and physics collision shapes). You can clearly see the boundaries of the level tiles in the screenshot below. These are generated just before they move into camera view. For inspiration I looked at PixelJunk Shooter 2. Those levels are obviously not generated, but some of them have movement. How do you guys think they implemented it? My guess is that the level and other objects in the game are actually flat 3D models, but I am not sure.
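    A common way around drawing pixel by pixel every frame is to rasterize each tile into an offscreen image once, just before it scrolls into view, and then draw the cached texture from then on. A minimal sketch of that idea, written in plain Java for illustration (the noise-based caveDensity function is a made-up stand-in for whatever shape function the generator actually uses):

      import java.awt.image.BufferedImage;

      public class CaveTileGenerator {
          public static final int TILE_SIZE = 256;

          // Called once per tile, just before it scrolls into camera view.
          // The returned image is cached and redrawn cheaply every frame.
          public BufferedImage generateTile(int tileX, int tileY) {
              BufferedImage tile = new BufferedImage(TILE_SIZE, TILE_SIZE, BufferedImage.TYPE_INT_ARGB);
              for (int y = 0; y < TILE_SIZE; y++) {
                  for (int x = 0; x < TILE_SIZE; x++) {
                      boolean solid = caveDensity(tileX * TILE_SIZE + x, tileY * TILE_SIZE + y) > 0.5;
                      tile.setRGB(x, y, solid ? 0xFF554433 : 0x00000000); // rock vs. open water
                  }
              }
              return tile;
          }

          // Hypothetical density function; replace with the real level generator.
          private double caveDensity(int worldX, int worldY) {
              return (Math.sin(worldX * 0.01) + Math.cos(worldY * 0.013)) * 0.25 + 0.5;
          }
      }

    The per-pixel cost is then paid once per tile instead of once per frame, which is usually enough to remove the frame drops on its own.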

    Read the article

  • Calculate the Intersection of Two Volumes

    - by igrad
    If you've ever played The Swapper, you'll have a good idea of what I'm asking about. I need to check for, and isolate, areas of a rectangle that may intersect with either a circle or another rectangle. These selected areas will receive special properties, and the areas will be non-static, since the intersecting shapes themselves will also be dynamic. My first thought was to use raycasting detection, though I've only seen that in use with circles, or even ellipses. I'm curious if there's a method of using raycasting with a more rectangular approach, or if there's a totally different method already in use to accomplish this task. I would like something more exact than checking in large chunks, and since I'm using SDL2 with a logical renderer size of 1920x1080, checking if each pixel is intersecting is out of the question, as it would slow things down past a playable speed. I already have a multi-shape collision function-template in place, and I could use that, though it only checks if sides or corners are intersecting; it does not compute the overlapping area, or even find the circle's secant line, though I can't imagine it would be overly complex to implement. TL;DR: I need to find and isolate areas of a rectangle that may intersect with a circle or another rectangle without checking every single pixel on-screen.
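    For the rectangle-rectangle case the overlap region is itself a rectangle, so it can be computed in constant time rather than per pixel. A rough sketch of that part, assuming simple axis-aligned rectangles (this Rect type is hypothetical, not SDL2's SDL_Rect):

      // Axis-aligned rectangle with min corner (x, y) and size (w, h).
      record Rect(double x, double y, double w, double h) {

          // Returns the overlapping area of two AABBs, or null if they don't intersect.
          static Rect intersection(Rect a, Rect b) {
              double x1 = Math.max(a.x, b.x);
              double y1 = Math.max(a.y, b.y);
              double x2 = Math.min(a.x + a.w, b.x + b.w);
              double y2 = Math.min(a.y + a.h, b.y + b.h);
              if (x2 <= x1 || y2 <= y1) return null; // no overlap
              return new Rect(x1, y1, x2 - x1, y2 - y1);
          }
      }

    For the circle case, clamping the circle's center to the rectangle gives the nearest point on the rectangle, and the overlap region is then bounded by the rectangle's edges and the circle's arc; that is enough to render or flag the area without scanning any pixels.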

    Read the article

  • Modern techniques for spriting

    - by DevilWithin
    Hello, I would like to know the flow for making modern 2D game artwork. How are the assets made nowadays? Bitmap? Vector-based? Hand-drawn and painted? Drawn digitally? Modeled in 3D and exported to bitmaps? I would like some information on programs as well, for fine looking art. Why does Flash's vector art style look good in most games? How do I make equivalent graphics with external tools? Or equally good and not vector-based, anyway. Any special hints for animating? An answer oriented towards a one-man-army indie developer with little experience but some artistic sense would be appreciated! Not a complete dummy with paint programs, but also not a master at all, just need efficient ways to achieve results. Thanks. NOTE: Pixel art is not the goal of this question, nothing related to direct pixel manipulation should be brought up here, but you're free to do exactly that :)

    Read the article

  • Level Creation Help

    - by Brandon oubiub
    I am making a little 2D overhead RPG-type game just for fun. I have almost all the basic stuff set up, but I just need a little help on level creation. I can already make a level and place each tile how I want it, but having to place each tile gets annoying after a while. I noticed that in a lot of games, even extremely simple ones, they have LOTS of levels with LOTS of tiles in each. Creating all that in this fashion would take forever. So I guess my question is, as a game developer, am I supposed to do all that, or maybe make a little level editor so I can see things as I create them? What do game developers do? I'm using Java. EDIT: Okay, say I had an image for a map that I made in MS Paint or Photoshop, where each pixel represents a tile value. Could I somehow detect in Java what color an individual pixel is? If so, that would be perfect. How?
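    Reading per-pixel colors out of a map image is straightforward with the standard library; a minimal sketch (the color-to-tile mapping here is invented purely for illustration):

      import java.awt.image.BufferedImage;
      import java.io.File;
      import javax.imageio.ImageIO;

      public class MapLoader {
          public static int[][] loadTiles(File mapImage) throws java.io.IOException {
              BufferedImage img = ImageIO.read(mapImage);
              int[][] tiles = new int[img.getWidth()][img.getHeight()];
              for (int x = 0; x < img.getWidth(); x++) {
                  for (int y = 0; y < img.getHeight(); y++) {
                      int rgb = img.getRGB(x, y) & 0xFFFFFF; // drop the alpha byte
                      tiles[x][y] = switch (rgb) {
                          case 0x00FF00 -> 0; // green pixel -> grass tile (example mapping)
                          case 0x0000FF -> 1; // blue pixel  -> water tile
                          default       -> 2; // anything else -> wall
                      };
                  }
              }
              return tiles;
          }
      }

    Each pixel of the paint-program image then becomes one tile ID, so a whole level can be sketched in MS Paint in minutes.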

    Read the article

  • What's the best algorithm for... [closed]

    - by Paska
    Hi programmers! A little problem came up today. I have an array of coordinates (latitude and longitude) built like this:

      [0] = "45.01234,9.12345"
      [1] = "46.11111,9.12345"
      [2] = "47.22222,9.98765"
      [...] etc

    In a loop, I convert these coordinates to meters (UTM northing / UTM easting), and after that I convert them to pixels (X / Y) on screen (the output device is an iPhone) to draw a route line on a custom map.

      [0] = "512335.00000,502333.666666"
      [...] etc

    The resulting pixels are passed to a method that draws a line on screen (simulating a route calculation).

      [0] = "20,30"
      [1] = "21,31"
      [2] = "25,40"
      [...] etc

    As there are too many lat/lon coordinates, I need to trim the lat/lon array, eliminating the values that don't fall within the map bounds (the visible part of the map on screen). The map bounds are two pairs of lat/lon coordinates: upper left and lower right. Now, what is the best way to loop over this array (NOT SORTED), check whether each value is in bounds, and remove the values that are outside, so that I return a clean array containing only the coords visible on screen? Note: the coords array is very big, 4000-5000 pairs of items, and this method would run on every drag or zoom. Does anyone have an idea to optimize the search and checks on this array? Many thanks, A
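    A single linear pass that keeps only in-bounds points is O(n) and cheap for 5000 items; a minimal sketch, assuming the bounds arrive as plain upper-left/lower-right doubles:

      import java.util.ArrayList;
      import java.util.List;

      public class RouteClipper {
          // Keeps only the coordinates inside the visible map bounds.
          // ulLat/ulLon = upper-left corner, lrLat/lrLon = lower-right corner.
          public static List<double[]> clip(List<double[]> coords,
                                            double ulLat, double ulLon,
                                            double lrLat, double lrLon) {
              List<double[]> visible = new ArrayList<>();
              for (double[] c : coords) { // c[0] = latitude, c[1] = longitude
                  boolean inLat = c[0] <= ulLat && c[0] >= lrLat;
                  boolean inLon = c[1] >= ulLon && c[1] <= lrLon;
                  if (inLat && inLon) visible.add(c);
              }
              return visible;
          }
      }

    If the linear pass is still too slow when run on every drag, a simple spatial index (for example a grid of buckets keyed by lat/lon cell) lets you test only the buckets overlapping the viewport instead of every point.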

    Read the article

  • How can I perform 2D side-scroller collision checks in a tile-based map?

    - by bill
    I am trying to create a game where you have a player that can move horizontally and jump. It's kind of like Mario but it isn't a side scroller. I'm using a 2D array to implement a tile map. My problem is that I don't understand how to check for collisions using this implementation. After spending about two weeks thinking about it, I've got two possible solutions, but both of them have some problems. Let's say that my map is defined by the following tiles:

      0 = sky
      1 = player
      2 = ground

    The data for the map itself might look like:

      00000
      10002
      22022

    For solution 1, I'd move the player (the 1) a complete tile and update the map directly. This makes collision easy, because you can check whether the player is touching the ground simply by looking at the tile directly below him:

      // x and y are the tile coordinates of the player. The tile origin is the upper-left.
      if (grid[x][y+1] == 2) {
          // The player is standing on top of a ground tile.
      }

    The problem with this approach is that the player moves in discrete tile steps, so the animation isn't smooth. For solution 2, I thought about moving the player via pixel coordinates and not updating the tile map. This will make the animation much smoother because I have a smaller movement unit per frame. However, this means I can't really store the player accurately in the tile map, because sometimes he would logically be between two tiles. But the bigger problem here is that I think the only way to check for collision is to use Java's intersection method, which means the player would need to be at least a single pixel "into" the ground to register a collision, and that won't look good. How can I solve this problem?
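    One common middle ground, sketched below, is to keep the player in pixel coordinates but derive the tiles he overlaps by integer division, then snap him back to the tile boundary on contact instead of waiting for an intersection to register. TILE is a hypothetical tile size in pixels, and bounds checking is omitted for brevity:

      public class TileCollision {
          static final int TILE = 32; // hypothetical tile size in pixels

          // Moves the player down by dy pixels, stopping exactly on top of ground tiles.
          static double moveDown(int[][] grid, double px, double py,
                                 double playerW, double playerH, double dy) {
              double newY = py + dy;
              int leftTile  = (int) (px / TILE);
              int rightTile = (int) ((px + playerW - 1) / TILE);
              int footTile  = (int) ((newY + playerH) / TILE); // row the feet would enter
              for (int tx = leftTile; tx <= rightTile; tx++) {
                  if (grid[tx][footTile] == 2) {
                      // Snap the feet flush with the top of the ground tile: no overlap needed.
                      return footTile * TILE - playerH;
                  }
              }
              return newY;
          }
      }

    Because the resolved position sits exactly on the tile boundary, the player never sinks a pixel into the ground, and the tile map itself never has to store the player at all.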

    Read the article

  • Constructive criticism on my linear sampling Gaussian blur

    - by Aequitas
    I've been attempting to implement a Gaussian blur utilising linear sampling. I've come across a few articles presented on the web and a question posed here which dealt with the topic, and I've now attempted to implement my own Gaussian function and pixel shader, drawing reference from those articles. This is how I'm currently calculating my weights and offsets:

      int support = int(sigma * 3.0);
      weights.push_back(exp(-(0*0)/(2*sigma*sigma))/(sqrt(2*pi)*sigma));
      total += weights.back();
      offsets.push_back(0);
      for (int i = 1; i <= support; i++) {
          float w1 = exp(-(i*i)/(2*sigma*sigma))/(sqrt(2*pi)*sigma);
          float w2 = exp(-((i+1)*(i+1))/(2*sigma*sigma))/(sqrt(2*pi)*sigma);
          weights.push_back(w1 + w2);
          total += 2.0f * weights[i];
          offsets.push_back(w1 / weights[i]);
      }
      for (int i = 0; i < support; i++) {
          weights[i] /= total;
      }

    Here is an example of my vertical pixel shader:

      vec3 acc = texture2D(tex_object, v_tex_coord.st).rgb * weights[0];
      vec2 pixel_size = vec2(1.0 / tex_size.x, 1.0 / tex_size.y);
      for (int i = 1; i < NUM_SAMPLES; i++) {
          acc += texture2D(tex_object, (v_tex_coord.st + (vec2(0.0, offsets[i]) * pixel_size))).rgb * weights[i];
          acc += texture2D(tex_object, (v_tex_coord.st - (vec2(0.0, offsets[i]) * pixel_size))).rgb * weights[i];
      }
      gl_FragColor = vec4(acc, 1.0);

    Am I taking the correct route with this? Any criticism or potential tips for improving my method would be much appreciated.
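    For comparison, the linear-sampling articles this question draws on usually pair adjacent texels two at a time (1 and 2, then 3 and 4, and so on) and place each merged tap at the weighted average of the pair's offsets, rather than at w1/(w1+w2). A sketch of that setup in Java, with normalization by the summed weights omitted for brevity:

      import java.util.ArrayList;
      import java.util.List;

      public class LinearGaussianSetup {
          static double gauss(double x, double sigma) {
              return Math.exp(-(x * x) / (2 * sigma * sigma)) / (Math.sqrt(2 * Math.PI) * sigma);
          }

          // Merges discrete taps (1,2), (3,4), ... into single bilinear taps.
          static void build(double sigma, List<Double> weights, List<Double> offsets) {
              int support = (int) (sigma * 3.0);
              weights.add(gauss(0, sigma));
              offsets.add(0.0);
              for (int i = 1; i <= support; i += 2) {      // step two texels per merged tap
                  double w1 = gauss(i, sigma);
                  double w2 = gauss(i + 1, sigma);
                  weights.add(w1 + w2);
                  offsets.add((i * w1 + (i + 1) * w2) / (w1 + w2)); // weighted-average offset
              }
          }
      }

    Note the loop step of two: incrementing by one, as in the question's code, samples each texel in two different merged taps, which over-weights the interior of the kernel.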

    Read the article

  • Error message when running OpenGL programs with bumblebee

    - by user170860
    X Error of failed request:  BadDrawable (invalid Pixmap or Window parameter)
      Major opcode of failed request:  152 (DRI2)
      Minor opcode of failed request:  8 (DRI2SwapBuffers)
      Resource id in failed request:  0x4200005
      Serial number of failed request:  2166
      Current serial number in output stream:  2167
      primus: warning: timeout waiting for display worker
      Segmentation fault (core dumped)

    I don't get this on all OGL programs, only on particularly GPU-intensive ones. Also, I only get this using primusrun. optirun gives the same error no matter what I run:

      [VGL] NOTICE: Pixel format of 2D X server does not match pixel format of
      [VGL] Pbuffer. Disabling PBO readback.

    I don't know what either of these means. Neither of them stops the programs from running, but I'd like to fix the problem if there is one. Also, I prefer to use primusrun because it is faster and does a better job with vertical sync; however, it only supports OGL 4.2. This isn't a big issue because the programs I write are forward compatible, but it still seems odd to me. So basically I'd just like it if someone could explain to me what is happening and whether there is something I can do about it. Thanks.

    Read the article

  • Box2D how to implement a camera?

    - by Romeo
    So far I have this Camera class:

      package GameObjects;

      import main.Main;
      import org.jbox2d.common.Vec2;

      public class Camera {
          public int x;
          public int y;
          public int sx;
          public int sy;

          public static final float PIXEL_TO_METER = 50f;
          private float yFlip = -1.0f;

          public Camera() {
              x = 0;
              y = 0;
              sx = x + Main.APPWIDTH;
              sy = y + Main.APPHEIGHT;
          }

          public Camera(int x, int y) {
              this.x = x;
              this.y = y;
              sx = x + Main.APPWIDTH;
              sy = y + Main.APPHEIGHT;
          }

          public void update() {
              sx = x + Main.APPWIDTH;
              sy = y + Main.APPHEIGHT;
          }

          public void moveCam(int mx, int my) {
              if (mx >= 0 && mx <= 80) {
                  this.x -= 2;
              } else if (mx <= Main.APPWIDTH && mx >= Main.APPWIDTH - 80) {
                  this.x += 2;
              }
              if (my >= 0 && my <= 80) {
                  this.y += 2;
              } else if (my <= Main.APPHEIGHT && my >= Main.APPHEIGHT - 80) {
                  this.y -= 2;
              }
              this.update();
          }

          public float meterToPixel(float meter) {
              return meter * PIXEL_TO_METER;
          }

          public float pixelToMeter(float pixel) {
              return pixel / PIXEL_TO_METER;
          }

          public Vec2 screenToWorld(Vec2 screenV) {
              return new Vec2(screenV.x + this.x, yFlip * screenV.y + this.y);
          }

          public Vec2 worldToScreen(Vec2 worldV) {
              return new Vec2(worldV.x - this.x, yFlip * worldV.y - this.y);
          }
      }

    I need to know how to modify the screenToWorld and worldToScreen functions to include the PIXEL_TO_METER scaling.
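    One plausible way to fold the scaling in, assuming the camera's x/y stay in pixels while Box2D works in meters, is to convert units around the existing offset and flip. A possible drop-in replacement for the two methods above (an untested sketch, under those assumptions):

      // Screen pixels -> Box2D meters: apply camera offset and flip, then scale down.
      public Vec2 screenToWorld(Vec2 screenV) {
          float px = screenV.x + this.x;
          float py = yFlip * screenV.y + this.y;
          return new Vec2(pixelToMeter(px), pixelToMeter(py));
      }

      // Box2D meters -> screen pixels: scale up first, then apply offset and flip.
      public Vec2 worldToScreen(Vec2 worldV) {
          float px = meterToPixel(worldV.x);
          float py = meterToPixel(worldV.y);
          return new Vec2(px - this.x, yFlip * py - this.y);
      }

    The key point is that the order matters: the offset and flip happen in pixel space, so the scaling has to wrap around them symmetrically in the two directions.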

    Read the article

  • Sprites rendering blurry with velocity

    - by ashes999
    After adding velocity to my game, I feel like my textures are twitching. I thought it was just my eyes, until I finally captured it in a screenshot: The one on the left is what renders in my game; the one on the right is the original sprite, pasted over. (This is a screenshot from Photoshop, zoomed in 6x.) Notice the edges are aliasing -- it looks almost like sub-pixel rendering. In fact, if I had not forced my sprites (which have position and velocity as ints) to draw using integer values, I would swear that MonoGame is drawing with floating point values. But it isn't. What could be the cause of these things appearing blurry? It doesn't happen without velocity applied. To be precise, my SpriteComponent class has a Vector2 Position field. When I call Draw, I essentially use new Vector2((int)Math.Round(this.Position.X), (int)Math.Round(this.Position.Y)) for the position. I had a bug before where even stationary objects would jitter -- that was due to me using the straight Position vector and not rounding the values to ints. If I use Floor/Ceiling instead of round, the sprite sinks/hovers (one pixel difference either way) but still draws blurry.

    Read the article

  • Character movement on a 2D tile map

    - by Chris Morris
    I'm working on making an HTML5 game. Top-down; the closest thing I can equate it to is the Game Boy Zeldas, but open world and with no rooms. What I have so far is a procedurally generated map in a multidimensional array, and a starting position on the map. Along with this I have an array of movable and non-movable tile IDs. I also have a class for my player, and have him rendered in the center of the starting tile. My problem, however, is getting the movement sorted out for the player. I want the character to move freely around the map (pixel by pixel, essentially) on top of this 2D generated world. Ideally this would allow the user to move around the walkable area of the canvas. This is simple enough for me to do, but I am now having problems moving the world. If the user is within 20% of the edge of the screen, I want the world to start panning in the direction the player is heading. But I'm rather lacking in ideas of how to do this. I've looked around for some tutorials, but am coming up blank on how to generate the playable area (zoomed in) and then move this generated area under the player when they reach near the edge of the screen. My current idea is to generate enough full-size tiles to fill the screen and place the player in the middle, then, when the user approaches the edge of the screen, start generating the tiles offset by the distance moved and the direction. I can kind of see this working, but I really have no idea if this is the best or easiest method to code for generating the world. Sorry for the lack of code, but I'm still just in the theory stages of working this all out.
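    The usual trick is to keep the player and every tile in world coordinates and maintain a separate camera offset; the 20% band then only nudges the camera, and everything is drawn at its world position minus the camera position. A minimal sketch of that update (written in Java here for consistency with the other snippets, though the game itself would be JavaScript; all names are illustrative):

      public class Camera2D {
          double camX, camY;                 // top-left of the view, in world pixels
          final double viewW, viewH;         // canvas size in pixels
          final double margin = 0.20;        // start panning within 20% of an edge

          Camera2D(double viewW, double viewH) { this.viewW = viewW; this.viewH = viewH; }

          // Keep the player inside the inner 60% of the screen by moving the camera.
          void follow(double playerX, double playerY) {
              double left   = camX + viewW * margin;
              double right  = camX + viewW * (1 - margin);
              double top    = camY + viewH * margin;
              double bottom = camY + viewH * (1 - margin);
              if (playerX < left)   camX -= left - playerX;
              if (playerX > right)  camX += playerX - right;
              if (playerY < top)    camY -= top - playerY;
              if (playerY > bottom) camY += playerY - bottom;
          }

          // Everything, tiles included, is drawn at world position minus camera offset.
          double screenX(double worldX) { return worldX - camX; }
          double screenY(double worldY) { return worldY - camY; }
      }

    With this arrangement you never "move the world": you only draw the slice of the tile array whose world rectangle intersects the camera rectangle, so nothing has to be regenerated when the view pans.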

    Read the article

  • Does anybody know of any resources to achieve this particular "2.5D" isometric engine effect?

    - by Craig Whitley
    I understand this is a little vague, but I was hoping somebody might be able to describe a high-level workflow, or link to a resource, for achieving a specific isometric "2.5D" tile engine effect. I fell in love with this engine: http://www.youtube.com/watch?v=-Q6ISVaM5Ww. Especially with the lighting and the shaders! He has a brief description of how he achieved what he did, but I could really use a brief flow of where you would start, what you would read up on and learn, and the logical order in which to implement these things. A few specific questions:

      1) Is there a heightmap on the ground texture that lets the light reflect more brightly on certain parts of it?
      2) "..using a special material which calculates the world-space normal vectors of every pixel.." - is this some "magic" special material he has created himself, or can you hazard a guess at what he means?
      3) In relation to the above quote - what does he mean by 'world-space normal vectors of every pixel'?
      4) I'm guessing I'm being a little optimistic when I ask if there's any 'all-in-one' tutorial out there? :)

    Read the article

  • Sampling Heightmap Edges for Normal map

    - by pl12
    I use a Sobel filter to generate normal maps from procedural height maps. The heightmaps are 258x258 pixels. I scale my texture coordinates like so:

      texCoord = (texCoord * (256/258)) + (1/258)

    Yet even with this I am left with the following problem: as you can see, the edges of the normal map still prove to be problematic. Setting the texture wrap mode to "clamp" was no help either. EDIT: The Sobel filter works by sampling the 8 pixels surrounding a given pixel, so that a derivative can be calculated and the "normal" of that pixel found. The texture coordinates are instanced once per quad (for the quadtree that makes up the world) and are created as follows (it is quite possible that the problem results from the way I scale and offset the texCoords, as seen above). Java:

      for (int i = 0; i < vertices.length; i++) {
          Vector2f coord = new Vector2f((vertices[i].x) / (worldSize), (vertices[i].z) / (worldSize));
          texCoords[i] = coord;
      }

    The quad used for input here rests on the X0Z plane. 'worldSize' is the diameter of the planet. No negative texCoords are seen, as the quad used as input for this method is not centered around the origin. Is there something I am missing here? Thanks.
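    If the normal map is generated on the CPU, one way to sidestep edge artifacts entirely is to clamp the sample indices whenever the Sobel kernel reads past the border, so edge texels reuse their nearest neighbours instead of wrapping or reading garbage. A small sketch of that sampling guard (the array layout and method names are assumptions, not the poster's code):

      public class SobelEdges {
          // Reads a height sample with indices clamped to the map border,
          // so the 3x3 Sobel kernel never wraps to the opposite edge.
          static float height(float[][] map, int x, int y) {
              int cx = Math.max(0, Math.min(map.length - 1, x));
              int cy = Math.max(0, Math.min(map[0].length - 1, y));
              return map[cx][cy];
          }

          // Sobel derivatives at (x, y) using the clamped reads.
          static float[] gradient(float[][] map, int x, int y) {
              float dx = (height(map, x + 1, y - 1) + 2 * height(map, x + 1, y) + height(map, x + 1, y + 1))
                       - (height(map, x - 1, y - 1) + 2 * height(map, x - 1, y) + height(map, x - 1, y + 1));
              float dy = (height(map, x - 1, y + 1) + 2 * height(map, x, y + 1) + height(map, x + 1, y + 1))
                       - (height(map, x - 1, y - 1) + 2 * height(map, x, y - 1) + height(map, x + 1, y - 1));
              return new float[] { dx, dy };
          }
      }

    This is also the reason 258x258 maps with a one-texel border are generated for 256x256 tiles in the first place: the border supplies real neighbours so the edge derivatives match the adjacent quad's.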

    Read the article

  • Colored Collision Detection

    - by tugrul büyükisik
    Several years ago, I made a fast collision detection scheme for 2D: it just checked a bullet's front-pixel color to see whether it was about to hit something. Let's say the target RGB color is (124,200,255); then it just checks for that color. After the collision detection, it paints the target with the appropriate picture. So, collision detection is done in the background without drawing, and only then is the result painted and drawn. How can I do this in 3D? A vertex does not simply exist the way a 2D picture's pixel does. I looked at some Java3D and other programs and understood that a 3D world is made of objects, not just pictures. Is there a program that truly fills the world with vertices? But that could need terabytes of RAM, or even more. Do you know an easy way to interpolate the color of a vertex in Java3D or a similar program? Note: for an RGB color-identifier, I can make 255*255*255 different 2D objects in the background.
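    The 3D analogue of this trick is usually called color picking: render each object into an offscreen buffer with a unique flat color that encodes its ID, read back the pixel at the point of interest, and decode. The encode/decode half is plain integer packing; a sketch (the rendering side would be whatever offscreen pass the 3D library provides):

      public class ColorPicking {
          // Pack an object ID (0 .. 16,777,215) into an opaque RGB color.
          static int idToColor(int id) {
              return 0xFF000000 | (id & 0xFFFFFF);
          }

          // Recover the object ID from a pixel read back from the picking buffer.
          static int colorToId(int argb) {
              return argb & 0xFFFFFF;
          }

          public static void main(String[] args) {
              int bulletPixel = idToColor(31337);          // what the offscreen pass would write
              System.out.println(colorToId(bulletPixel));  // prints 31337: the object hit
          }
      }

    Because the GPU interpolates nothing when every triangle of an object is the same flat color, the read-back pixel identifies the object exactly, with no need to fill the world with vertices.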

    Read the article

  • How do I reconstruct depth in deferred rendering using an orthographic projection?

    - by Jeremie
    I've been trying to get the world-space position of my pixel, but I'm missing something. I'm using an orthographic view for a 2.5D game. My depth is linear, and this is my code:

      float3 lightPos = lightPosition;
      float2 texCoord = PostProjToScreen(PSIn.lightPosition) + halfPixel;
      float depth = tex2D(depthMap, texCoord);
      float4 position;
      position.x = texCoord.x * 2 - 1;
      position.y = (1 - texCoord.y) * 2 - 1;
      position.z = depth.r;
      position.w = 1;
      position = mul(position, inViewProjection);
      //position.xyz /= position.w; // commented out, but it doesn't work without it either
      float4 normal = (tex2D(normalMap, texCoord) - .5f) * 2;
      normal = normalize(normal);
      float3 lightDirection = normalize(lightPos - position);
      float att = saturate(1.0f - length(lightDirection) / attenuation);
      float lightning = saturate(dot(normal, lightDirection));
      lightning *= brightness;
      return float4(lightColor * lightning * att, 1);

    I'm using a sphere as the light volume, but it's not working the way I want. I reproject the texture properly onto the sphere, but the light coordinates in the pixel shader seem to be stuck at zero, even though the light volume updates properly when I move the light.
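    One thing worth noting for the orthographic case: if the stored depth is linear view-space depth rather than post-projection z, multiplying by the inverse view-projection will not land on the right point. Under that assumption, the position can instead be rebuilt directly from the ortho volume's extents; a sketch of the math in Java (width/height/near/far are the orthographic projection parameters, and a centered projection is assumed):

      public class OrthoReconstruct {
          // Reconstructs a view-space position from screen UV (0..1) and a
          // linear depth value (0 at the near plane, 1 at the far plane),
          // for a centered orthographic projection looking down -Z.
          static float[] viewSpace(float u, float v, float linearDepth,
                                   float width, float height, float near, float far) {
              float x = (u - 0.5f) * width;                   // ortho: x is a linear map of u
              float y = (0.5f - v) * height;                  // flip v so +y is up
              float z = -(near + linearDepth * (far - near)); // camera looks down -z
              return new float[] { x, y, z };
          }
      }

    The resulting view-space point then only needs the inverse view matrix (not the projection) to reach world space.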

    Read the article

  • What problem does double or triple buffering solve in modern games?

    - by krokvskrok
    I want to check whether my understanding of the reasons for using double (or triple) buffering is correct: A 60 Hz monitor refreshes the display 60 times per second. When the monitor refreshes the display, it updates pixel by pixel and line by line, requesting the color values for the pixels from video memory. If I now run a game, then this game is constantly manipulating that video memory. If the game doesn't use a buffering strategy (double buffering etc.), the following problem can happen: the monitor is refreshing the display and has already refreshed the first half of it. At the same moment, the game writes new data into the video memory. For the second half of the display, the monitor now reads this newly manipulated data from video memory. The visible problems can be tearing or flickering. Is my understanding of the reasons for using a buffering strategy correct? Are there other reasons?
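    That is essentially the right picture: double buffering means the game only ever writes to a back buffer the monitor never scans out, and the two buffers are swapped between frames (with vsync, during the vertical blank, so a swap never lands mid-refresh). Java's AWT exposes exactly this idea through BufferStrategy; a minimal sketch (the Canvas must already be shown in a window for createBufferStrategy to succeed):

      import java.awt.Canvas;
      import java.awt.Graphics;
      import java.awt.image.BufferStrategy;

      public class DoubleBufferedLoop {
          static void renderLoop(Canvas canvas) {
              canvas.createBufferStrategy(2);              // 2 = double, 3 = triple buffering
              BufferStrategy strategy = canvas.getBufferStrategy();
              while (true) {
                  Graphics g = strategy.getDrawGraphics(); // draws into the back buffer only
                  g.clearRect(0, 0, canvas.getWidth(), canvas.getHeight());
                  // ... draw the frame here ...
                  g.dispose();
                  strategy.show();                         // flip: back buffer becomes visible
              }
          }
      }

    Triple buffering adds a second back buffer so the game can start the next frame immediately instead of waiting for the swap, trading memory for reduced stalling.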

    Read the article

  • Using 2d collision with 3d objects

    - by Lyise
    I'm planning to write a fairly basic scrolling shoot 'em up; however, I have run into a question regarding collision checking. I plan to have a fixed top-down view, where the player and enemies are all 3D objects on a fixed plane, and when the enemy or player fires at the other, their shots will also travel along this fixed plane. In order to handle the collision, I have read up a bit on collision detection in 3D, as it is not something I have looked into previously, but I'm not sure what would be ideal for this situation. My options appear to be:

      - Sphere collision; however, this lacks the pixel precision I would like.
      - Detection using all vertices and planes of each object, but this seems overly convoluted for a fixed plane of play.
      - Rendering the play screen in black and white (where white is an object, black is empty space), once for enemies and once for the player, and checking for collisions that way (if a pixel is white in both, there is a collision).

    Which of these would be the best approach, or is there another option that I am missing? I have done this previously using 2D sprites; however, I can't use the same thinking here, as I don't have the image to refer to.
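    Since play happens on one plane, a fourth option worth sketching (as a general approach, not something from the question) is to ignore the third dimension for gameplay and collide cheap 2D shapes that shadow the 3D models, reserving precise checks for pairs that pass the cheap test:

      public class PlaneCollision {
          // 2D circle standing in for a 3D ship on the fixed play plane.
          record Hull(double x, double z, double radius) {}

          // Project entities onto the play plane (drop the vertical axis) and test in 2D.
          static boolean hits(Hull a, Hull b) {
              double dx = a.x - b.x;
              double dz = a.z - b.z;
              double r = a.radius + b.radius;
              return dx * dx + dz * dz <= r * r;
          }
      }

    Per-pixel accuracy can then be layered on only for the few pairs that overlap, for example with small per-sprite collision masks, rather than rendering the whole screen twice per frame.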

    Read the article

  • Shadow mapping with deferred shading for directional lights - shadow map projection problem

    - by Harry
    I'm trying to add shadow mapping to my engine. I started with directional lights because they seemed to be the easiest, but I was wrong :) I have implemented deferred shading and I retrieve position from depth. I think that is where the biggest problem lies, but the code looks OK to me. Now, more about the problem: the shadow map projected onto the meshes looks badly scaled and translated, and some information from the shadow map texture isn't visible. You can see it in this screen: http://img5.imageshack.us/img5/2254/93dn.png The yellow frustum is the light frustum, and I have mixed the shadow map preview with the actual scene. As you can see, the shadows are in the wrong place, and the shadows of the cone and sphere aren't visible. Could you look at my code and tell me where my mistake is?

      // create shadow map
      if (!_shd) glGenTextures(1, &_shd);
      glBindTexture(GL_TEXTURE_2D, _shd);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 1024, 1024, 0,
                   GL_DEPTH_COMPONENT, GL_FLOAT, NULL); // shadow map size
      glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
      glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
      glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _shd, 0);
      glDrawBuffer(GL_NONE);

      // setting camera
      Vector dire = Vector(0, 0, 1);
      ACamera.setLookAt(dire, Vector(0));
      ACamera.setPerspectiveView(60.0f, 1, 0.1f, 10.0f); // currently needed for proper frustum corners calculation
      Vector min(ACamera._point[0]), max(ACamera._point[0]);
      for (int i = 0; i < 8; i++) {
          max = Max(max, ACamera._point[i]);
          min = Min(min, ACamera._point[i]);
      }
      ACamera.setOrthogonalView(min.x, max.x, min.y, max.y, -max.z, -min.z);
      glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _s_buffer); // framebuffer for shadow map
      // rendering to depth buffer
      glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _g_buffer);
      Shaders["DirLight"].set(true);
      Matrix4 bias;
      bias.x.set(0.5, 0.0, 0.0, 0.0);
      bias.y.set(0.0, 0.5, 0.0, 0.0);
      bias.z.set(0.0, 0.0, 0.5, 0.0);
      bias.w.set(0.5, 0.5, 0.5, 1.0);
      Shaders["DirLight"].set("textureMatrix", ACamera.matrix * Projection3D * bias);
      // order of multiplications is 100% correct, everything gives me the same result as using glm
      glActiveTexture(GL_TEXTURE5);
      glBindTexture(GL_TEXTURE_2D, _shd);
      lightDir(dir); // light calculations

    The vertex shader does nothing related to shadow calculations. This is the pixel shader function which calculates whether a pixel is in shadow or not:

      float readShadowMap(vec3 eyeDir)
      {
          // retrieve depth of pixel
          float z = texture2D(depth, gl_FragCoord.xy / screen).z;
          vec3 pos = vec3(gl_FragCoord.xy / screen, z);
          // transform by the projection and view inverse
          vec4 worldSpace = inverse(View) * inverse(ProjectionMatrix) * vec4(pos * 2 - 1, 1);
          worldSpace /= worldSpace.w;
          vec4 coord = textureMatrix * worldSpace;
          float vis = 1.0f;
          if (texture2D(shadow, coord.xy).z < coord.z - 0.001) vis = 0.2f;
          return vis;
      }

    I also have a question about shadows specifically for directional lights. Currently I always look at the (0,0,0) position, and in a further implementation I will have to move the light frustum along with the camera frustum. I've found how to do this here: http://www.gamedev.net/topic/505893-orthographic-projection-for-shadow-mapping/ but it doesn't give me what I want. Maybe that is because of the problems mentioned above, but I'd like to know your opinion. EDIT: vec4 worldSpace is the position read from the depth of the scene (not the shadow map). Maybe I wasn't precise, so I'll quickly explain what is what: View is the camera view matrix, and ProjectionMatrix is the camera projection. First I try to get the world-space position from the depth map, and then multiply it by textureMatrix, which is light view * light projection * bias. The rest of the code is the same as in many tutorials. I can't use the vertex shader to do something like gl_Position = textureMatrix * gl_Vertex and get it interpolated in the fragment shader, because of the deferred rendering, so I want to get it from the depth buffer. EDIT2: I also tried doing it as in the Coding Labs tutorial about Shadow Mapping with Deferred Rendering, but unfortunately that doesn't work either.

    Read the article

  • Black Screen: How to set Projection/View Matrix

    - by Lisa
    I have a Windows Phone 8 C#/XAML project with a DirectX component. I'm rendering some particles, but each particle is a rectangle rather than a square (I've set the vertices to positions equally offset from each other). I used an identity matrix for the view and projection matrices. I decided to account for the window's aspect ratio to fix the stretching, but now I get a black screen: none of the particles are rendered any more. I don't know what's wrong with my matrices. Can anyone see the problem? These are the default matrices in Microsoft's project example.

    View matrix:

      XMVECTOR eye = XMVectorSet(0.0f, 0.7f, 1.5f, 0.0f);
      XMVECTOR at = XMVectorSet(0.0f, -0.1f, 0.0f, 0.0f);
      XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
      XMStoreFloat4x4(&m_constantBufferData.view, XMMatrixTranspose(XMMatrixLookAtRH(eye, at, up)));

    Projection matrix:

      void CubeRenderer::CreateWindowSizeDependentResources()
      {
          Direct3DBase::CreateWindowSizeDependentResources();
          float aspectRatio = m_windowBounds.Width / m_windowBounds.Height;
          float fovAngleY = 70.0f * XM_PI / 180.0f;
          if (aspectRatio < 1.0f)
          {
              fovAngleY /= aspectRatio;
          }
          XMStoreFloat4x4(&m_constantBufferData.projection,
              XMMatrixTranspose(XMMatrixPerspectiveFovRH(fovAngleY, aspectRatio, 0.01f, 100.0f)));
      }

    I've tried modifying them to use cocos2d-x's WP8 example:

      XMMATRIX identityMatrix = XMMatrixIdentity();
      float fovy = 60.0f;
      float aspect = m_windowBounds.Width / m_windowBounds.Height;
      float zNear = 0.1f;
      float zFar = 100.0f;
      float xmin, xmax, ymin, ymax;
      ymax = zNear * tanf(fovy * XM_PI / 360);
      ymin = -ymax;
      xmin = ymin * aspect;
      xmax = ymax * aspect;
      XMMATRIX tmpMatrix = XMMatrixPerspectiveOffCenterRH(xmin, xmax, ymin, ymax, zNear, zFar);
      XMMATRIX projectionMatrix = XMMatrixMultiply(tmpMatrix, identityMatrix);

      // View Matrix
      float fEyeX = m_windowBounds.Width * 0.5f;
      float fEyeY = m_windowBounds.Height * 0.5f;
      float fEyeZ = m_windowBounds.Height / 1.1566f;
      float fLookAtX = m_windowBounds.Width * 0.5f;
      float fLookAtY = m_windowBounds.Height * 0.5f;
      float fLookAtZ = 0.0f;
      float fUpX = 0.0f;
      float fUpY = 1.0f;
      float fUpZ = 0.0f;
      XMMATRIX tmpMatrix2 = XMMatrixLookAtRH(XMVectorSet(fEyeX, fEyeY, fEyeZ, 0.f),
          XMVectorSet(fLookAtX, fLookAtY, fLookAtZ, 0.f), XMVectorSet(fUpX, fUpY, fUpZ, 0.f));
      XMMATRIX viewMatrix = XMMatrixMultiply(tmpMatrix2, identityMatrix);
      XMStoreFloat4x4(&m_constantBufferData.view, viewMatrix);

    Vertex shader:

      cbuffer ModelViewProjectionConstantBuffer : register(b0)
      {
          //matrix model;
          matrix view;
          matrix projection;
      };

      struct VertexInputType
      {
          float4 position : POSITION;
          float2 tex : TEXCOORD0;
          float4 color : COLOR;
      };

      struct PixelInputType
      {
          float4 position : SV_POSITION;
          float2 tex : TEXCOORD0;
          float4 color : COLOR;
      };

      PixelInputType main(VertexInputType input)
      {
          PixelInputType output;
          // Change the position vector to be 4 units for proper matrix calculations.
          input.position.w = 1.0f;
          //=====================================
          // TODO: ADDED for testing
          input.position.z = 0.0f;
          //=====================================
          // Calculate the position of the vertex against the world, view, and projection matrices.
          //output.position = mul(input.position, model);
          output.position = mul(input.position, view);
          output.position = mul(output.position, projection);
          // Store the texture coordinates for the pixel shader.
          output.tex = input.tex;
          // Store the particle color for the pixel shader.
          output.color = input.color;
          return output;
      }

    Before I render the shader, I set the view/projection matrices into the constant buffer:

      void ParticleRenderer::SetShaderParameters()
      {
          ViewProjectionConstantBuffer* dataPtr;
          D3D11_MAPPED_SUBRESOURCE mappedResource;
          DX::ThrowIfFailed(m_d3dContext->Map(m_constantBuffer.Get(), 0,
              D3D11_MAP_WRITE_DISCARD, 0, &mappedResource));
          dataPtr = (ViewProjectionConstantBuffer*)mappedResource.pData;
          dataPtr->view = m_constantBufferData.view;
          dataPtr->projection = m_constantBufferData.projection;
          m_d3dContext->Unmap(m_constantBuffer.Get(), 0);
          // Now set the constant buffer in the vertex shader with the updated values.
          m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());
          // Set shader texture resource in the pixel shader.
          m_d3dContext->PSSetShaderResources(0, 1, &m_textureView);
      }

    Nothing, black screen... I tried so many different look-at, eye, and up vectors. I tried transposing the matrices. I've set the particle center position to always be (0, 0, 0), and I tried different positions too, just to make sure they're not being rendered offscreen.

    Read the article

  • Cocos2d: Moving background on update: offset issue

    - by mm24
    Working with Objective-C, iOS and Cocos2d, I am developing a vertical scrolling shooter game for iPhone (Retina display models with 640x960 pixel resolution). My basic algorithm works as follows:

      - I create two instances of an image that is exactly 640 pixels wide by 960 pixels high, which we will call imageA and imageB.
      - I then set the two images exactly 480.0f apart from each other, as the screen size of a CCScene is set by default to 480.0f.
      - On each update call I move the two images by the same value, and I make sure that their offset stays at 480.0f.

    However, when running the game I see a 1-pixel-high line between the two images. This literally bugs me and I would like to fix it. What am I doing wrong? Below is a zoom-in on the background where the "offset line" is visible. The white line you can see divides the two background images and is not meant to exist, as both images are completely black :) If I change the yPositionOfSecondElement value to 479.0f, the two images overlap correctly until the first loop, but as soon as the loop starts the two images end up with an offset of -1.0f. Here is the initialization code:

      -(void) init {
          //...
          screenHeight = 480.0f;
          yPositionOfSecondElement = screenHeight; // I tried subtracting an offset of -1 but eventually the image would go wrong again
          yPositionOfFirstElement = 0.0f;
          loopedBackgroundImageInstanceA = [BackgroundLoopedImage loopImageForLevel:levelName];
          loopedBackgroundImageInstanceA.anchorPoint = CGPointMake(0.5f, 0.0f);
          loopedBackgroundImageInstanceA.position = CGPointMake(160.0f, yPositionOfFirstElement);
          [node addChild:loopedBackgroundImageInstanceA z:zLevelBackground];
          //loopedBackgroundImageInstanceA.color = ccRED;
          loopedBackgroundImageInstanceB = [BackgroundLoopedImage loopImageForLevel:levelName];
          loopedBackgroundImageInstanceB.anchorPoint = CGPointMake(0.5f, 0.0f);
          loopedBackgroundImageInstanceB.position = CGPointMake(160.0f, yPositionOfSecondElement);
          [node addChild:loopedBackgroundImageInstanceB z:zLevelBackground];
          //...
      }

    And here is the move code, called on each update:

      -(void) moveBackgroundSprites:(BackgroundLoopedImage*)imageA :(BackgroundLoopedImage*)imageB :(ccTime)delta {
          isEligibleToMove = false;
          // This is done to avoid rounding errors
          float yStep = delta * [GameController sharedGameController].currentBackgroundSpeed;
          NSString* formattedNumber = [NSString stringWithFormat:@"%.02f", yStep];
          yStep = atof([formattedNumber UTF8String]);
          // First adjust the position of the images
          [self adjustPosition:imageA :imageB];
          // Then we can get the actual image positions
          CGPoint posA = imageA.position;
          CGPoint posB = imageB.position;
          // Here we can verify that the checksum is equal to the required difference (should be 479.0f)
          if (![self verifyCheckSum:posA :posB]) {
              CCLOG(@"does not comply A");
          }
          // At this stage we can compute the hypothetical new positions
          CGPoint newPosA = CGPointMake(posA.x, posA.y - yStep);
          CGPoint newPosB = CGPointMake(posB.x, posB.y - yStep);
          // Reposition stripes when they're out of bounds
          if (newPosA.y <= -yPositionOfSecondElement) {
              newPosA.y = yPositionOfSecondElement;
              [imageA shuffle];
              if (timeElapsed >= endTime && hasReachedEndLevel == FALSE) {
                  hasReachedEndLevel = TRUE;
                  shouldMoveImageEnd = TRUE;
              }
          } else if (newPosB.y <= -yPositionOfSecondElement) {
              newPosB.y = yPositionOfSecondElement;
              [imageB shuffle];
              if (timeElapsed >= endTime && hasReachedEndLevel == FALSE) {
                  hasReachedEndLevel = TRUE;
                  shouldMoveImageEnd = TRUE;
              }
          }
          // Here we verify again that the checksum is equal to 479.0f
          if (![self verifyCheckSum:posA :posB]) {
              CCLOG(@"does not comply B");
          }
          imageA.position = newPosA;
          imageB.position = newPosB;
          // And once more after assigning the positions
          if (![self verifyCheckSum:posA :posB]) {
              CCLOG(@"does not comply C");
          }
          isEligibleToMove = true;
      }

      -(BOOL) verifyCheckSum:(CGPoint)posA :(CGPoint)posB {
          BOOL comply = false;
          float sum = 0.0f;
          if (posA.y > posB.y) {
              sum = posA.y - posB.y;
          } else if (posB.y > posA.y) {
              sum = posB.y - posA.y;
          } else {
              return false;
          }
          if (sum != yPositionOfSecondElement) {
              comply = false;
          } else {
              comply = true;
          }
          return comply;
      }

    And here is what happens in the update:

      if (shouldMoveImageA && shouldMoveImageB) {
          if (isEligibleToMove) {
              [self moveBackgroundSprites:loopedBackgroundImageInstanceA :loopedBackgroundImageInstanceB :delta];
          }
      }

    Forget about shouldMoveImageA and shouldMoveImageB; they are just for when the background reaches the end of the level, and that part works.

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    I'm following the tutorials in 3D Graphics with XNA Game Studio 4.0 and I got a horrible effect when I tried to implement the light map: http://i.stack.imgur.com/BUWvU.jpg This effect shows up when I look towards the center of the house (and it moves with me). It has this shape because I'm using a sphere to represent the light; using other light shapes gives different results. I'm using a PrelightingRenderer class:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;
      using Dhpoware;
      using Microsoft.Xna.Framework.Content;

      namespace XNAFirstPersonCamera
      {
          public class PrelightingRenderer
          {
              // Normal, depth, and light map render targets
              RenderTarget2D depthTarg;
              RenderTarget2D normalTarg;
              RenderTarget2D lightTarg;

              // Depth/normal effect and light mapping effect
              Effect depthNormalEffect;
              Effect lightingEffect;

              // Point light (sphere) mesh
              Model lightMesh;

              // List of models, lights, and the camera
              public List<CModel> Models { get; set; }
              public List<PPPointLight> Lights { get; set; }
              public FirstPersonCamera Camera { get; set; }

              GraphicsDevice graphicsDevice;
              int viewWidth = 0, viewHeight = 0;

              public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content)
              {
                  viewWidth = GraphicsDevice.Viewport.Width;
                  viewHeight = GraphicsDevice.Viewport.Height;

                  // Create the three render targets
                  depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false,
                      SurfaceFormat.Single, DepthFormat.Depth24);
                  normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false,
                      SurfaceFormat.Color, DepthFormat.Depth24);
                  lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false,
                      SurfaceFormat.Color, DepthFormat.Depth24);

                  // Load effects
                  depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal");
                  lightingEffect = Content.Load<Effect>(@"Effects\PPLight");

                  // Set effect parameters to light mapping effect
                  lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth);
                  lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight);

                  // Load point light mesh and set light mapping effect to it
                  lightMesh = Content.Load<Model>(@"Models\PPLightMesh");
                  lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect;

                  this.graphicsDevice = GraphicsDevice;
              }

              public void Draw()
              {
                  drawDepthNormalMap();
                  drawLightMap();
                  prepareMainPass();
              }

              void drawDepthNormalMap()
              {
                  // Set the render targets to 'slots' 1 and 2
                  graphicsDevice.SetRenderTargets(normalTarg, depthTarg);

                  // Clear the render target to 1 (infinite depth)
                  graphicsDevice.Clear(Color.White);

                  // Draw each model with the PPDepthNormal effect
                  foreach (CModel model in Models)
                  {
                      model.CacheEffects();
                      model.SetModelEffect(depthNormalEffect, false);
                      model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position);
                      model.RestoreEffects();
                  }

                  // Un-set the render targets
                  graphicsDevice.SetRenderTargets(null);
              }

              void drawLightMap()
              {
                  // Set the depth and normal map info to the effect
                  lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg);
                  lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg);

                  // Calculate the view * projection matrix
                  Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix;

                  // Set the inverse of the view * projection matrix to the effect
                  Matrix invViewProjection = Matrix.Invert(viewProjection);
                  lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection);

                  // Set the render target to the graphics device
                  graphicsDevice.SetRenderTarget(lightTarg);

                  // Clear the render target to black (no light)
                  graphicsDevice.Clear(Color.Black);

                  // Set render states to additive (lights will add their influences)
                  graphicsDevice.BlendState = BlendState.Additive;
                  graphicsDevice.DepthStencilState = DepthStencilState.None;

                  foreach (PPPointLight light in Lights)
                  {
                      // Set the light's parameters to the effect
                      light.SetEffectParameters(lightingEffect);

                      // Calculate the world * view * projection matrix and set it to the effect
                      Matrix wvp = (Matrix.CreateScale(light.Attenuation)
                          * Matrix.CreateTranslation(light.Position)) * viewProjection;
                      lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp);

                      // Determine the distance between the light and camera
                      float dist = Vector3.Distance(Camera.Position, light.Position);

                      // If the camera is inside the light-sphere, invert the cull mode
                      // to draw the inside of the sphere instead of the outside
                      if (dist < light.Attenuation)
                          graphicsDevice.RasterizerState = RasterizerState.CullClockwise;

                      // Draw the point-light-sphere
                      lightMesh.Meshes[0].Draw();

                      // Revert the cull mode
                      graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
                  }

                  // Revert the blending and depth render states
                  graphicsDevice.BlendState = BlendState.Opaque;
                  graphicsDevice.DepthStencilState = DepthStencilState.Default;

                  // Un-set the render target
                  graphicsDevice.SetRenderTarget(null);
              }

              void prepareMainPass()
              {
                  foreach (CModel model in Models)
                      foreach (ModelMesh mesh in model.Model.Meshes)
                          foreach (ModelMeshPart part in mesh.MeshParts)
                          {
                              // Set the light map and viewport parameters to each model's effect
                              if (part.Effect.Parameters["LightTexture"] != null)
                                  part.Effect.Parameters["LightTexture"].SetValue(lightTarg);
                              if (part.Effect.Parameters["viewportWidth"] != null)
                                  part.Effect.Parameters["viewportWidth"].SetValue(viewWidth);
                              if (part.Effect.Parameters["viewportHeight"] != null)
                                  part.Effect.Parameters["viewportHeight"].SetValue(viewHeight);
                          }
              }
          }
      }

    It uses three effects. PPDepthNormal.fx:

      float4x4 World;
      float4x4 View;
      float4x4 Projection;

      struct VertexShaderInput
      {
          float4 Position : POSITION0;
          float3 Normal : NORMAL0;
      };

      struct VertexShaderOutput
      {
          float4 Position : POSITION0;
          float2 Depth : TEXCOORD0;
          float3 Normal : TEXCOORD1;
      };

      VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
      {
          VertexShaderOutput output;
          float4x4 viewProjection = mul(View, Projection);
          float4x4 worldViewProjection = mul(World, viewProjection);
          output.Position = mul(input.Position, worldViewProjection);
          output.Normal = mul(input.Normal, World);
          // Position's z and w components correspond to the distance
          // from camera and distance of the far plane respectively
          output.Depth.xy = output.Position.zw;
          return output;
      }

      // We render to two targets simultaneously, so we can't
      // simply return a float4 from the pixel shader
      struct PixelShaderOutput
      {
          float4 Normal : COLOR0;
          float4 Depth : COLOR1;
      };

      PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
      {
          PixelShaderOutput output;
          // Depth is stored as distance from camera / far plane distance
          // to get value between 0 and 1
          output.Depth = input.Depth.x / input.Depth.y;
          // Normal map simply stores X, Y and Z components of normal
          // shifted from (-1 to 1) range to (0 to 1) range
          output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5;
          // Other components must be initialized to compile
          output.Depth.a = 1;
          output.Normal.a = 1;
          return output;
      }

      technique Technique1
      {
          pass Pass1
          {
              VertexShader = compile vs_1_1 VertexShaderFunction();
              PixelShader = compile ps_2_0 PixelShaderFunction();
          }
      }

    PPLight.fx (the sampler texture bindings below, of the form texture = <DepthTexture>;, were eaten by the page's HTML and have been restored from context):

      float4x4 WorldViewProjection;
      float4x4 InvViewProjection;

      texture2D DepthTexture;
      texture2D NormalTexture;

      sampler2D depthSampler = sampler_state
      {
          texture = <DepthTexture>;
          minfilter = point;
          magfilter = point;
          mipfilter = point;
      };

      sampler2D normalSampler = sampler_state
      {
          texture = <NormalTexture>;
          minfilter = point;
          magfilter = point;
          mipfilter = point;
      };

      float3 LightColor;
      float3 LightPosition;
      float LightAttenuation;

      // Include shared functions
      #include "PPShared.vsi"

      struct VertexShaderInput
      {
          float4 Position : POSITION0;
      };

      struct VertexShaderOutput
      {
          float4 Position : POSITION0;
          float4 LightPosition : TEXCOORD0;
      };

      VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
      {
          VertexShaderOutput output;
          output.Position = mul(input.Position, WorldViewProjection);
          output.LightPosition = output.Position;
          return output;
      }

      float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
      {
          // Find the pixel coordinates of the input position in the depth
          // and normal textures
          float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel();
          // Extract the depth for this pixel from the depth map
          float4 depth = tex2D(depthSampler, texCoord);
          // Recreate the position with the UV coordinates and depth value
          float4 position;
          position.x = texCoord.x * 2 - 1;
          position.y = (1 - texCoord.y) * 2 - 1;
          position.z = depth.r;
          position.w = 1.0f;
          // Transform position from screen space to world space
          position = mul(position, InvViewProjection);
          position.xyz /= position.w;
          // Extract the normal from the normal map and move from
          // 0 to 1 range to -1 to 1 range
          float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2;
          // Perform the lighting calculations for a point light
          float3 lightDirection = normalize(LightPosition - position);
          float lighting = clamp(dot(normal, lightDirection), 0, 1);
          // Attenuate the light to simulate a point light
          float d = distance(LightPosition, position);
          float att = 1 - pow(d / LightAttenuation, 6);
          return float4(LightColor * lighting * att, 1);
      }

      technique Technique1
      {
          pass Pass1
          {
              VertexShader = compile vs_1_1 VertexShaderFunction();
              PixelShader = compile ps_2_0 PixelShaderFunction();
          }
      }

    PPShared.vsi has some common functions:

      float viewportWidth;
      float viewportHeight;

      // Calculate the 2D screen position of a 3D position
      float2 postProjToScreen(float4 position)
      {
          float2 screenPos = position.xy / position.w;
          return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
      }

      // Calculate the size of one half of a pixel, to convert
      // between texels and pixels
      float2 halfPixel()
      {
          return 0.5f / float2(viewportWidth, viewportHeight);
      }

    And finally, from the Game class, I set everything up in LoadContent with (the Content.Load type arguments were also stripped by the page and have been restored):

      effect = Content.Load<Effect>(@"Effects\PPModel");
      models[0] = new CModel(Content.Load<Model>(@"Models\teapot"), new Vector3(-50, 80, 0),
          new Vector3(0, 0, 0), 1f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice);
      house = new CModel(Content.Load<Model>(@"Models\house"), new Vector3(0, 0, 0),
          new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice);
      models[0].SetModelEffect(effect, true);
      house.SetModelEffect(effect, true);
      renderer = new PrelightingRenderer(GraphicsDevice, Content);
      renderer.Models = new List<CModel>();
      renderer.Models.Add(house);
      renderer.Models.Add(models[0]);
      renderer.Lights = new List<PPPointLight>() {
          new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000)
      };

    where PPModel.fx is:

      float4x4 World;
      float4x4 View;
      float4x4 Projection;

      texture2D BasicTexture;
      sampler2D basicTextureSampler = sampler_state
      {
          texture = <BasicTexture>;
          addressU = wrap;
          addressV = wrap;
          minfilter = anisotropic;
          magfilter = anisotropic;
          mipfilter = linear;
      };

      bool TextureEnabled = true;

      texture2D LightTexture;
      sampler2D lightSampler = sampler_state
      {
          texture = <LightTexture>;
          minfilter = point;
          magfilter = point;
          mipfilter = point;
      };

      float3 AmbientColor = float3(0.15, 0.15, 0.15);
      float3 DiffuseColor;

      #include "PPShared.vsi"

      struct VertexShaderInput
      {
          float4 Position : POSITION0;
          float2 UV : TEXCOORD0;
      };

      struct VertexShaderOutput
      {
          float4 Position : POSITION0;
          float2 UV : TEXCOORD0;
          float4 PositionCopy : TEXCOORD1;
      };

      VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
      {
          VertexShaderOutput output;
          float4x4 worldViewProjection = mul(World, mul(View, Projection));
          output.Position = mul(input.Position, worldViewProjection);
          output.PositionCopy = output.Position;
          output.UV = input.UV;
          return output;
      }

      float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
      {
          // Sample model's texture
          float3 basicTexture = tex2D(basicTextureSampler, input.UV);
          if (!TextureEnabled)
              basicTexture = float4(1, 1, 1, 1);
          // Extract lighting value from light map
          float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();
          float3 light = tex2D(lightSampler, texCoord);
          light += AmbientColor;
          return float4(basicTexture * DiffuseColor * light, 1);
      }

      technique Technique1
      {
          pass Pass1
          {
              VertexShader = compile vs_1_1 VertexShaderFunction();
              PixelShader = compile ps_2_0 PixelShaderFunction();
          }
      }

    I have no idea what's wrong... Googling the web I found that this tutorial may have some bugs, but I don't know whether the fault is in the light mesh (the sphere), in a shader, or in the PrelightingRenderer class. Any help is very much appreciated. Thank you for reading!

    Read the article

  • Install Mac OS Lion on Macbook with existing Ubuntu 12.04 dual boot

    - by kash
    I don't usually create my own questions, because I can usually find what I need through other people's questions. However, this one doesn't seem to be answered anywhere, except in one place where someone is talking about not using Apple hardware. So here goes. I am using Apple hardware: a MacBook 1,1 with an Intel Core 2 Duo, running Ubuntu Precise Pangolin flawlessly. I never considered starting with Mac OS X and dual-booting, because I never liked Mac OS. Now that I'm realizing that the only way I'm going to be able to run my DAW (Reaper) nicely and use a FireWire recording interface is to run Mac OS X, I'm considering it again. I want to go in the opposite direction of most people and install Mac OS X next to an existing Ubuntu installation, with a boot menu from rEFIt. I've installed rEFIt in Ubuntu and I've converted the .dmg file for Mac OS Lion to a .iso file. Now, should I mount the .iso in Ubuntu and run it there, or burn a DVD and restart? I'm trying to figure out how to do this without hurting my existing Ubuntu installation. I will of course keep looking for existing answers in the meantime.

    Read the article

  • WPF Blurry Images - Bitmap Class

    - by Luke
    I am using the following sample, http://blogs.msdn.com/dwayneneed/archive/2007/10/05/blurry-bitmaps.aspx, from VB.NET; the code is shown below. I am having a problem: when my application loads, the CPU pegs at 50-70%. I have determined that the problem is in the Bitmap class. The OnLayoutUpdated() method is calling InvalidateVisual() continuously. This is because some points are not returning as equal, but rather as Point(0.0, -0.5). Can anyone see any bugs within this code, or know a better implementation for pixel-snapping a Bitmap image so it is not blurry? P.S. The sample code was in C#; however, I believe it was converted correctly.

      Imports System
      Imports System.Collections.Generic
      Imports System.Windows
      Imports System.Windows.Media
      Imports System.Windows.Media.Imaging

      Class Bitmap
          Inherits FrameworkElement ' Use FrameworkElement instead of UIElement so Data Binding works as expected

          Private _sourceDownloaded As EventHandler
          Private _sourceFailed As EventHandler(Of ExceptionEventArgs)
          Private _pixelOffset As Windows.Point

          Public Sub New()
              _sourceDownloaded = New EventHandler(AddressOf OnSourceDownloaded)
              _sourceFailed = New EventHandler(Of ExceptionEventArgs)(AddressOf OnSourceFailed)
              AddHandler LayoutUpdated, AddressOf OnLayoutUpdated
          End Sub

          Public Shared ReadOnly SourceProperty As DependencyProperty = DependencyProperty.Register("Source", GetType(BitmapSource), GetType(Bitmap), New FrameworkPropertyMetadata(Nothing, FrameworkPropertyMetadataOptions.AffectsRender Or FrameworkPropertyMetadataOptions.AffectsMeasure, New PropertyChangedCallback(AddressOf Bitmap.OnSourceChanged)))

          Public Property Source() As BitmapSource
              Get
                  Return DirectCast(GetValue(SourceProperty), BitmapSource)
              End Get
              Set(ByVal value As BitmapSource)
                  SetValue(SourceProperty, value)
              End Set
          End Property

          Public Shared Function FindParentWindow(ByVal child As DependencyObject) As Window
              Dim parent As DependencyObject = VisualTreeHelper.GetParent(child)
              'Check if this is the end of the tree
              If parent Is Nothing Then
                  Return Nothing
              End If
              Dim parentWindow As Window = TryCast(parent, Window)
              If parentWindow IsNot Nothing Then
                  Return parentWindow
              Else
                  ' Use recursion until it reaches a Window
                  Return FindParentWindow(parent)
              End If
          End Function

          Public Event BitmapFailed As EventHandler(Of ExceptionEventArgs)

          ' Return our measure size to be the size needed to display the bitmap pixels.
          ' Use MeasureOverride instead of MeasureCore so Data Binding works as expected.
          ' Protected Overloads Overrides Function MeasureCore(ByVal availableSize As Size) As Size
          Protected Overloads Overrides Function MeasureOverride(ByVal availableSize As Size) As Size
              Dim measureSize As New Size()
              Dim bitmapSource As BitmapSource = Source
              If bitmapSource IsNot Nothing Then
                  Dim ps As PresentationSource = PresentationSource.FromVisual(Me)
                  If Me.VisualParent IsNot Nothing Then
                      Dim window As Window = window.GetWindow(Me.VisualParent)
                      If window IsNot Nothing Then
                          ps = PresentationSource.FromVisual(window.GetWindow(Me.VisualParent))
                      ElseIf FindParentWindow(Me) IsNot Nothing Then
                          ps = PresentationSource.FromVisual(FindParentWindow(Me))
                      End If
                  End If
                  If ps IsNot Nothing Then
                      Dim fromDevice As Matrix = ps.CompositionTarget.TransformFromDevice
                      Dim pixelSize As New Vector(bitmapSource.PixelWidth, bitmapSource.PixelHeight)
                      Dim measureSizeV As Vector = fromDevice.Transform(pixelSize)
                      measureSize = New Size(measureSizeV.X, measureSizeV.Y)
                  Else
                      measureSize = New Size(bitmapSource.PixelWidth, bitmapSource.PixelHeight)
                  End If
              End If
              Return measureSize
          End Function

          Protected Overloads Overrides Sub OnRender(ByVal dc As DrawingContext)
              Dim bitmapSource As BitmapSource = Me.Source
              If bitmapSource IsNot Nothing Then
                  _pixelOffset = GetPixelOffset()
                  ' Render the bitmap offset by the needed amount to align to pixels.
                  dc.DrawImage(bitmapSource, New Rect(_pixelOffset, DesiredSize))
              End If
          End Sub

          Private Shared Sub OnSourceChanged(ByVal d As DependencyObject, ByVal e As DependencyPropertyChangedEventArgs)
              Dim bitmap As Bitmap = DirectCast(d, Bitmap)
              Dim oldValue As BitmapSource = DirectCast(e.OldValue, BitmapSource)
              Dim newValue As BitmapSource = DirectCast(e.NewValue, BitmapSource)
              If ((oldValue IsNot Nothing) AndAlso (bitmap._sourceDownloaded IsNot Nothing)) AndAlso (Not oldValue.IsFrozen AndAlso (TypeOf oldValue Is BitmapSource)) Then
                  RemoveHandler DirectCast(oldValue, BitmapSource).DownloadCompleted, bitmap._sourceDownloaded
                  RemoveHandler DirectCast(oldValue, BitmapSource).DownloadFailed, bitmap._sourceFailed
                  ' ((BitmapSource)newValue).DecodeFailed -= bitmap._sourceFailed; // 3.5
              End If
              If ((newValue IsNot Nothing) AndAlso (TypeOf newValue Is BitmapSource)) AndAlso Not newValue.IsFrozen Then
                  AddHandler DirectCast(newValue, BitmapSource).DownloadCompleted, bitmap._sourceDownloaded
                  AddHandler DirectCast(newValue, BitmapSource).DownloadFailed, bitmap._sourceFailed
                  ' ((BitmapSource)newValue).DecodeFailed += bitmap._sourceFailed; // 3.5
              End If
          End Sub

          Private Sub OnSourceDownloaded(ByVal sender As Object, ByVal e As EventArgs)
              InvalidateMeasure()
              InvalidateVisual()
          End Sub

          Private Sub OnSourceFailed(ByVal sender As Object, ByVal e As ExceptionEventArgs)
              Source = Nothing ' setting a local value seems scetchy...
              RaiseEvent BitmapFailed(Me, e)
          End Sub

          Private Sub OnLayoutUpdated(ByVal sender As Object, ByVal e As EventArgs)
              ' This event just means that layout happened somewhere. However, this is
              ' what we need since layout anywhere could affect our pixel positioning.
              Dim pixelOffset As Windows.Point = GetPixelOffset()
              If Not AreClose(pixelOffset, _pixelOffset) Then
                  InvalidateVisual()
              End If
          End Sub

          ' Gets the matrix that will convert a Windows.Point from "above" the
          ' coordinate space of a visual into the coordinate space "below" the visual.
          Private Function GetVisualTransform(ByVal v As Visual) As Matrix
              If v IsNot Nothing Then
                  Dim m As Matrix = Matrix.Identity
                  Dim transform As Transform = VisualTreeHelper.GetTransform(v)
                  If transform IsNot Nothing Then
                      Dim cm As Matrix = transform.Value
                      m = Matrix.Multiply(m, cm)
                  End If
                  Dim offset As Vector = VisualTreeHelper.GetOffset(v)
                  m.Translate(offset.X, offset.Y)
                  Return m
              End If
              Return Matrix.Identity
          End Function

          Private Function TryApplyVisualTransform(ByVal Point As Windows.Point, ByVal v As Visual, ByVal inverse As Boolean, ByVal throwOnError As Boolean, ByRef success As Boolean) As Windows.Point
              success = True
              If v IsNot Nothing Then
                  Dim visualTransform As Matrix = GetVisualTransform(v)
                  If inverse Then
                      If Not throwOnError AndAlso Not visualTransform.HasInverse Then
                          success = False
                          Return New Windows.Point(0, 0)
                      End If
                      visualTransform.Invert()
                  End If
                  Point = visualTransform.Transform(Point)
              End If
              Return Point
          End Function

          Private Function ApplyVisualTransform(ByVal Point As Windows.Point, ByVal v As Visual, ByVal inverse As Boolean) As Windows.Point
              Dim success As Boolean = True
              Return TryApplyVisualTransform(Point, v, inverse, True, success)
          End Function

          Private Function GetPixelOffset() As Windows.Point
              Dim pixelOffset As New Windows.Point()
              Dim ps As PresentationSource = PresentationSource.FromVisual(Me)
              If ps IsNot Nothing Then
                  Dim rootVisual As Visual = ps.RootVisual
                  ' Transform (0,0) from this element up to pixels.
                  pixelOffset = Me.TransformToAncestor(rootVisual).Transform(pixelOffset)
                  pixelOffset = ApplyVisualTransform(pixelOffset, rootVisual, False)
                  pixelOffset = ps.CompositionTarget.TransformToDevice.Transform(pixelOffset)
                  ' Round the origin to the nearest whole pixel.
                  pixelOffset.X = Math.Round(pixelOffset.X)
                  pixelOffset.Y = Math.Round(pixelOffset.Y)
                  ' Transform the whole-pixel back to this element.
                  pixelOffset = ps.CompositionTarget.TransformFromDevice.Transform(pixelOffset)
                  pixelOffset = ApplyVisualTransform(pixelOffset, rootVisual, True)
                  pixelOffset = rootVisual.TransformToDescendant(Me).Transform(pixelOffset)
              End If
              Return pixelOffset
          End Function

          Private Function AreClose(ByVal Point1 As Windows.Point, ByVal Point2 As Windows.Point) As Boolean
              Return AreClose(Point1.X, Point2.X) AndAlso AreClose(Point1.Y, Point2.Y)
          End Function

          Private Function AreClose(ByVal value1 As Double, ByVal value2 As Double) As Boolean
              If value1 = value2 Then
                  Return True
              End If
              Dim delta As Double = value1 - value2
              Return ((delta < 0.00000153) AndAlso (delta > -0.00000153))
          End Function
      End Class

    Read the article

  • Export JPG out of Dicom File With mDCM

    - by Ross Peoples
    Hello, I'm using mDCM with C# to view DICOM tags, but I'm trying to convert the pixel data to a Bitmap and eventually out to a JPG file. I have read all of the posts on the mDCM Google Group on the subject, and all of the code examples either don't work or are missing important lines of code. The image I am working with is 16-bit MONOCHROME1. I have tried using LockBits, SetPixel, and unsafe code in order to convert the pixel data to a Bitmap, but all attempts fail. Does anyone have any code that could make this work? Thanks in advance. P.S. Before anyone suggests trying something else like ClearCanvas, know that mDCM is the only library that suits my needs, and ClearCanvas is WAY too bloated for what I need to do.
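    Independent of the mDCM specifics, the usual stumbling block with 16-bit MONOCHROME1 data is the conversion down to 8 bits: each sample has to be window/level-mapped to 0-255 and then inverted, since MONOCHROME1 displays the minimum value as white. A sketch of just that mapping step, in Java (the raw short[] would come from whatever the DICOM library returns; all names here are illustrative):

      public class DicomToGray {
          // Maps 16-bit MONOCHROME1 samples to 8-bit grayscale using a window/level,
          // inverting because in MONOCHROME1 the lowest value is displayed as white.
          static byte[] toGray8(short[] pixels, int windowCenter, int windowWidth) {
              byte[] out = new byte[pixels.length];
              double lo = windowCenter - windowWidth / 2.0;
              for (int i = 0; i < pixels.length; i++) {
                  int v = pixels[i] & 0xFFFF; // treat stored samples as unsigned
                  double g = (v - lo) / windowWidth * 255.0;
                  int clamped = (int) Math.max(0, Math.min(255, g));
                  out[i] = (byte) (255 - clamped); // MONOCHROME1 inversion
              }
              return out;
          }
      }

    The resulting 8-bit buffer can then be copied into a standard grayscale bitmap and encoded as JPG; skipping the windowing step is the most common reason a straight LockBits copy comes out all black or all white.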

    Read the article
