Search Results

Search found 1598 results on 64 pages for 'pixel bender'.

Page 21/64 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Drawing transparent glyphs on the HTML canvas

    - by Bertrand Le Roy
    The HTML canvas has a set of methods, createImageData and putImageData, that look like they will enable you to draw transparent shapes pixel by pixel. The data structures that you manipulate with these methods are pseudo-arrays of pixels, with four bytes per pixel. One byte for red, one for green, one for blue and one for alpha. This alpha byte makes one believe that you are going to be able to manage transparency, but that’s a lie. Here is a little script that attempts to overlay a simple generated pattern on top of a uniform background: var wrong = document.getElementById("wrong").getContext("2d"); wrong.fillStyle = "#ffd42a"; wrong.fillRect(0, 0, 64, 64); var overlay = wrong.createImageData(32, 32), data = overlay.data; fill(data); wrong.putImageData(overlay, 16, 16); where the fill method is setting the pixels in the lower-left half of the overlay to opaque red, and the rest to transparent black. And here’s how it renders: As you can see, the transparency byte was completely ignored. Or was it? In fact, what happens is more subtle. What happens is that the pixels from the image data, including their alpha byte, replaced the existing pixels of the canvas. So the alpha byte is not lost, it’s just that it wasn’t used by putImageData to combine the new pixels with the existing ones. This is in fact a clue to how to write a putImageData that works: we can first dump that image data into an intermediary canvas, and then compose that temporary canvas onto our main canvas. The method that we can use for this composition is drawImage, which works not only with image objects, but also with canvas objects. var right = document.getElementById("right").getContext("2d"); right.fillStyle = "#ffd42a"; right.fillRect(0, 0, 64, 64); var overlay = wrong.createImageData(32, 32), data = overlay.data; fill(data); var overlayCanvas = document.createElement("canvas"); overlayCanvas.width = overlayCanvas.height = 32; overlayCanvas.getContext("2d").putImageData(overlay, 0, 0); right.drawImage(overlayCanvas, 16, 16); And there it is, a version of putImageData that works like it should always have.

    Read the article

  • Tessellation Texture Coordinates

    - by Stuart Martin
    Firstly some info - I'm using DirectX 11, C++ and I'm a fairly good programmer but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have reached a snag. My current system produces a terrain model from a height map complete with multiple texture coordinates, normals, binormals and tangents for rendering. Now when I was using a simple vertex and pixel shader combination everything worked perfectly, but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high detail model but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull into the domain and then finally into the pixel shader for use in rendering. My only thought is that in my hull shader I pass the information into the domain shader per patch and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off but not as visibly as texturing. Below is a copy of my hull shader that does not work correctly because I think the way that I am passing the data through is incorrect. If anyone can help me out by suggesting an alternative way to get the required data into the pixel shader, or by showing me the correct way to handle the data in the hull shader, I'd be very thankful! cbuffer TessellationBuffer { float tessellationAmount; float3 padding; }; struct HullInputType { float3 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; float3 binormal : BINORMAL; float2 tex2 : TEXCOORD1; }; struct ConstantOutputType { float edges[3] : SV_TessFactor; float inside : SV_InsideTessFactor; }; struct HullOutputType { float3 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; float3 binormal : BINORMAL; float2 tex2 : TEXCOORD1; float4 depthPosition : TEXCOORD2; }; ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID) { ConstantOutputType output; output.edges[0] = tessellationAmount; output.edges[1] = tessellationAmount; output.edges[2] = tessellationAmount; output.inside = tessellationAmount; return output; } [domain("tri")] [partitioning("integer")] [outputtopology("triangle_cw")] [outputcontrolpoints(3)] [patchconstantfunc("ColorPatchConstantFunction")] HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID) { HullOutputType output; output.position = patch[pointId].position; output.tex = patch[pointId].tex; output.tex2 = patch[pointId].tex2; output.normal = patch[pointId].normal; output.tangent = patch[pointId].tangent; output.binormal = patch[pointId].binormal; return output; } Edited to include the domain shader: [domain("tri")] PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch) { float3 vertexPosition; PixelInputType output; // Determine the position of the new vertex.
vertexPosition = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.position = mul(float4(vertexPosition, 1.0f), worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); output.depthPosition = output.position; output.tex = patch[0].tex; output.tex2 = patch[0].tex2; output.normal = patch[0].normal; output.tangent = patch[0].tangent; output.binormal = patch[0].binormal; return output; }

    Read the article

  • Generate texture for a heightmap

    - by James
    I've recently been trying to blend multiple textures based on the height at different points in a heightmap. However I've been getting poor results. I decided to backtrack and just attempt to recreate one single texture from an SDL_Surface (I'm using SDL) and just send that into OpenGL. I'll put my code for creating the texture and reading the colour values. It is a 24bit TGA I'm loading, and I've confirmed that the rest of my code works because I was able to send the surface's pixels directly to my createTextureFromData function and it drew fine. struct RGBColour { RGBColour() : r(0), g(0), b(0) {} RGBColour(unsigned char red, unsigned char green, unsigned char blue) : r(red), g(green), b(blue) {} unsigned char r; unsigned char g; unsigned char b; }; // main loading code SDLSurfaceReader* reader = new SDLSurfaceReader(m_renderer); reader->readSurface("images/grass.tga"); // new texture unsigned char* newTexture = new unsigned char[reader->m_surface->w * reader->m_surface->h * 3 * reader->m_surface->w]; for (int y = 0; y < reader->m_surface->h; y++) { for (int x = 0; x < reader->m_surface->w; x += 3) { int index = (y * reader->m_surface->w) + x; RGBColour colour = reader->getColourAt(x, y); newTexture[index] = colour.r; newTexture[index + 1] = colour.g; newTexture[index + 2] = colour.b; } } unsigned int id = m_renderer->createTextureFromData(newTexture, reader->m_surface->w, reader->m_surface->h, RGB); // functions for reading pixels RGBColour SDLSurfaceReader::getColourAt(int x, int y) { Uint32 pixel; Uint8 red, green, blue; RGBColour rgb; pixel = getPixel(m_surface, x, y); SDL_LockSurface(m_surface); SDL_GetRGB(pixel, m_surface->format, &red, &green, &blue); SDL_UnlockSurface(m_surface); rgb.r = red; rgb.b = blue; rgb.g = green; return rgb; } // this function taken from SDL documentation // http://www.libsdl.org/cgi/docwiki.cgi/Introduction_to_SDL_Video#getpixel Uint32 SDLSurfaceReader::getPixel(SDL_Surface* surface, int x, int y) { int bpp = m_surface->format->BytesPerPixel; Uint8 *p = (Uint8*)m_surface->pixels + y * m_surface->pitch + x * bpp; switch (bpp) { case 1: return *p; case 2: return *(Uint16*)p; case 3: if (SDL_BYTEORDER == SDL_BIG_ENDIAN) return p[0] << 16 | p[1] << 8 | p[2]; else return p[0] | p[1] << 8 | p[2] << 16; case 4: return *(Uint32*)p; default: return 0; } } I've been stumped by this, and I need help badly! Thanks so much for any advice.
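
    For reference, a tightly packed 24-bit RGB buffer is usually indexed as (y * width + x) * 3, with x advancing one pixel per step rather than three. A minimal sketch of that arithmetic, written in Python purely for illustration (the buffer size and names are made up):

        width, height = 4, 2                      # tiny illustrative buffer
        buf = bytearray(width * height * 3)       # 3 bytes (R, G, B) per pixel

        def set_rgb(x, y, r, g, b):
            # Byte offset of pixel (x, y): multiply the pixel index by 3,
            # rather than stepping x itself by 3.
            i = (y * width + x) * 3
            buf[i], buf[i + 1], buf[i + 2] = r, g, b

        set_rgb(3, 1, 255, 0, 0)                  # last pixel of the last row
        assert buf[-3:] == bytes([255, 0, 0])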

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until reaching a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem with these approaches is that the pipeline gets stalled, which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints at a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be a hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something similar is available in OpenGL ES). However, this would also result in a pipeline stall.
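
    On the accuracy worry for the downsampling and mipmap ideas: averaging a binary white-on-black mask preserves the count exactly in floating point (count = mean × pixel count); the real loss comes from each intermediate render target quantising the averages to its bit depth. A rough sketch of that relationship, in NumPy with made-up numbers:

        import numpy as np

        h, w = 64, 64
        # Hypothetical binary coverage image: 1.0 where a "fragment passed".
        mask = (np.random.rand(h, w) > 0.7).astype(np.float32)

        exact = mask.sum()
        from_mean = mask.mean() * mask.size     # what a perfect average-downsample chain recovers
        assert abs(exact - from_mean) < 1e-3

        # An 8-bit target stores the mean in steps of 1/255, so the recovered
        # count is only resolved to about size/255 pixels (~16 here).
        quantized = round(float(mask.mean()) * 255) / 255 * mask.size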

    Read the article

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect albedo). I have this projector View and Projection matrix: Matrix projection = Matrix.CreateOrthographicOffCenter(-halfWidth * Scale, halfWidth * Scale, -halfHeight * Scale, halfHeight * Scale, 1, 100000); Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up); Where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position and Target is the projector's target. This seems to be OK. I am drawing a full screen quad with this shader: float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; texture2D ProjectorTexture; float4x4 ProjectorViewProjection; sampler2D depthSampler = sampler_state { texture = <DepthTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = <NormalTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D projectorSampler = sampler_state { texture = <ProjectorTexture>; AddressU = Clamp; AddressV = Clamp; }; float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position :POSITION0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = input.Position; output.PositionCopy=output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float2 texCoord =postProjToScreen(input.PositionCopy) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); //return float4(depth.r,0,0,1); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; //compute projection float3 projection=tex2D(projectorSampler,postProjToScreen(mul(position,ProjectorViewProjection)) + halfPixel()); return float4(projection,1); } In the first part of the pixel shader the position is recovered from the G-buffer (I am using this code in other shaders without any problem) and then transformed to projector view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation: The green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for any advice and sorry for my English. EDIT: The shader above is working but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.

    Read the article

  • Android Canvas Coordinate System

    - by Mitch
    I'm trying to find information on how to change the coordinate system for the canvas. I have some vector data I'd like to draw to a canvas using things like circles and lines, but the data's coordinate system doesn't match the canvas coordinate system. Is there a way to map the units I'm using to the screen's units? I'm drawing to an ImageView which isn't taking up the entire display. If I have to do my own calculations prior to each drawing call, how do I find the width and height of my ImageView? The getWidth() and getHeight() calls I tried seem to be returning the entire canvas size and not the size of the ImageView, which isn't helpful. I see some matrix stuff, is that something that will work for me? I tried to use the "public void scale(float sx, float sy)", but that works more like a pixel-level zoom than a vector scale function, expanding each pixel. This means if the dimensions are increased to fit the screen, the line thickness is also increased. Update: After some research I'm starting to think there's no way to change coordinate systems to something else. I'll need to map all my coordinates to the screen's pixel coordinates and do so by modifying each vector. The getWidth() and getHeight() seem to be working better for me now. I can't say what was wrong, but I suspect I can't use these methods inside the constructor.
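
    If it does come down to doing the calculations before each drawing call, the mapping is just a linear transform from the data's bounding box to the view's pixel box. A sketch of the arithmetic (written in Python only to keep it short; the function name and the values are illustrative):

        def make_mapper(data_left, data_top, data_width, data_height,
                        view_width, view_height):
            # Uniform scale so the data fits the view without distorting line widths.
            scale = min(view_width / data_width, view_height / data_height)
            def to_pixels(x, y):
                return (x - data_left) * scale, (y - data_top) * scale
            return to_pixels

        # Illustrative bounds: data spans [-10, 10] on both axes, view is 480x480 px.
        to_pixels = make_mapper(-10.0, -10.0, 20.0, 20.0, 480, 480)
        print(to_pixels(0.0, 0.0))   # (240.0, 240.0) - the data origin lands at the view centre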

    Read the article

  • File Format Conversion to TIFF. Some issues???

    - by Nains
    I have a proprietary image format, SNG, which has a continuous array of image data along with image meta information in a separate HDR file. Now I need to convert this SNG format to the standard TIFF 6.0 format. So I studied the TIFF format, i.e. its header, Image File Directories (IFDs) and stripped image data. Now I have a few concerns about this conversion. Please assist me. SNG continuous data vs TIFF stripped data: Should I convert the SNG data to TIFF as continuous data in one strip (data load/edit time problem?) OR make logical StripOffsets of the SNG image data? The SNG data header uses only the necessary meta information, thus while converting the SNG to TIFF, some information can’t be retrieved, such as NewSubFileType, the Software tag etc. So this raises the concern of whether, after conversion, any missing directory information such as NewSubFileType, the Software tag etc. is a necessary and sufficient condition for a TIFF file. Encoding of each pixel component of the RGB sample in SNG data: Here each SNG image data strip per pixel component is encoded as: Out^[i] := round( LineBuffer^[i * 3] * 0.072169 + LineBuffer^[i * 3 + 1] * 0.715160 + LineBuffer^[i * 3 + 2] * 0.212671); The only thing I deduce from it is that each pixel is represented with 3 RGB components and some coefficient is multiplied with each component to make the SNG Viewer work with the RGB color information of the SNG image data. (The developer who worked on this earlier has left; now I am following the trail :)) Thus while converting this to TIFF, the same decoding needs to be done. This raises the concern of how the RGB information in TIFF is produced, or better, whether we need this information at all. Please assist...
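
    For what it's worth, those three constants look like the Rec. 709 luma weights (≈0.0722, 0.7152, 0.2126) applied to bytes stored in B, G, R order, i.e. an ordinary grayscale conversion rather than anything SNG-specific. A small sketch of the same calculation (Python, purely illustrative):

        def bgr_to_luma(line):
            """Collapse each B, G, R byte triple into one Rec. 709 luma byte."""
            out = []
            for i in range(0, len(line), 3):
                b, g, r = line[i], line[i + 1], line[i + 2]
                out.append(round(b * 0.072169 + g * 0.715160 + r * 0.212671))
            return bytes(out)

        # Example line: one white pixel and one pure-red pixel, stored B, G, R.
        print(bgr_to_luma(bytes([255, 255, 255, 0, 0, 255])))  # white -> 255, red -> ~54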

    Read the article

  • Editing 8bpp indexed Bitmaps

    - by Pedro Sá
    Hi, I'm trying to edit the pixels of an 8bpp bitmap. Since this PixelFormat is indexed, I'm aware that it uses a Color Table to map the pixel values. Even though I can edit the bitmap by converting it to 24bpp, 8bpp editing is much faster (13ms vs 3ms). But changing each value when accessing the 8bpp bitmap results in some random RGB colors even though the PixelFormat remains 8bpp. I'm currently developing in C# and the algorithm is as follows: (C#) 1- Load the original Bitmap at 8bpp 2- Create an empty temp Bitmap with 8bpp with the same size as the original 3- LockBits of both bitmaps and, using P/Invoke, call a C++ method where I pass the Scan0 of each BitmapData object. (I used a C++ method as it offers better performance when iterating through the Bitmap's pixels) (C++) 4- Create an int[256] palette according to some parameters and edit the temp bitmap bytes by passing the original's pixel values through the palette. (C#) 5- UnlockBits. My question is how can I edit the pixel values without getting the strange RGB colors, or even better, edit the 8bpp bitmap's Color Table? Regards, Pedro
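
    Since the format is indexed, the color table itself is the natural thing to edit: the pixel bytes stay untouched and only the 256 palette entries change. A sketch of that idea using PIL/Pillow rather than System.Drawing (the file names and the grayscale remapping are just examples):

        from PIL import Image

        img = Image.open("input_8bpp.png")        # hypothetical 8bpp (mode "P") image
        assert img.mode == "P"

        # Flat [r0, g0, b0, r1, g1, b1, ...] list, padded to 256 RGB entries.
        palette = (img.getpalette() + [0] * 768)[:768]
        # Example edit: remap every entry to a grayscale ramp of its index.
        for i in range(256):
            palette[i * 3:i * 3 + 3] = [i, i, i]
        img.putpalette(palette)

        img.save("output_8bpp.png")               # still indexed, pixel bytes untouched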

    Read the article

  • Using Sendkeys in python to press {F12} results in other keys pressed?

    - by ThantiK
    import time from ctypes import * import win32gui import win32com.client as comclt X = 119 Y = 53 def PILColorToRGB(pil_color): """ convert a PIL-compatible integer into an (r, g, b) tuple """ hexstr = '%06x' % pil_color # reverse byte order r, g, b = hexstr[4:], hexstr[2:4], hexstr[:2] r, g, b = [int(n, 16) for n in (r, g, b)] return (r, g, b) wsh = comclt.Dispatch("WScript.Shell") w = win32gui user = windll.LoadLibrary("c:\\windows\\system32\\user32.dll") h = user.GetDC(0) gdi = windll.LoadLibrary("c:\\windows\\system32\\gdi32.dll") while True: FG = w.GetWindowText(w.GetForegroundWindow()) #FG = Foreground window title. if FG == "World of Warcraft": rgb = (PILColorToRGB(gdi.GetPixel(h,X,Y))) #X, Y time.sleep(0.333) #don't check too often. if (rgb[0] >= 130): #While Pixel (X, Y) is Red... #print "%d %d %d" % (rgb[0], rgb[1], rgb[2]) #Debug wsh.SendKeys("{F12}") #Send a key. time.sleep(0.7) #Add some extra down-time if we send the key. else: time.sleep(5) Basically all this code does is read a pixel on the screen, and send a key (F12) if the pixel is red. But when using this code I regularly get some phantom key-code being pressed. The application I'm using this on is obviously World of Warcraft, and I have checked that all keybinds are standard keybinds. However, randomly it seems I get either an up arrow, or a w pressed, which moves my character forward whenever this code executes (F12 is bound to a macro, unbound from any movement). If I press F12 with a hardware event, it does not exhibit this behavior. What in the world could be going on here?

    Read the article

  • How to scale a sprite image without losing color key information?

    - by Michael P
    Hello everyone, I'm currently developing a simple application that displays a map and draws some markers on it. I'm developing for Windows Mobile, so I decided to use DirectDraw and Imaging interfaces to make the application fast and pretty. The map moves when the user moves a finger on the touchscreen, so the whole map moving/scrolling animation has to be fast, but it is not. On every map update I have to draw a portion of the map, control buttons, and markers - buttons and markers are preloaded on a DirectDraw surface as a mipmap. So the only thing I do is BitBlit from the mipmap to a back buffer, and from the back buffer to a primary surface (I can't use page flipping due to the windowed mode of my application). Previously I used a premultiplied-alpha surface with a 32 bit ARGB pixel format for the images mipmap; everything was looking good, but drawing the entire "scene" was horribly slow - I could forget about smooth map scrolling. Now I'm using a mipmap with the native (RGB565) pixel format and a fuchsia (0xFF00FF) color key. Drawing is much better. My mipmap surface is generated on program loading - images are loaded from files, scaled (with filtering) and drawn on the mipmap. The problem is that the image scaling process blends pixel colors, and those pixels which are on the border of a sprite region are blended with surrounding fuchsia pixels, resulting in a semi-fuchsia color that is not treated as the color key. When I do blitting with the color key option, sprites have small fuchsia-like borders, and it looks really bad. How to solve this problem? I can use alpha blitting, but it is too slow - even in ARGB 1555 format.
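
    One workaround worth trying is to scale the sprite sheet with nearest-neighbour sampling instead of a filtered resize, so no output pixel is ever a blend of sprite and key colours (at the cost of blockier sprites). The idea, sketched with PIL purely as illustration (the file names are made up):

        from PIL import Image

        # Hypothetical sprite sheet whose background is the fuchsia key colour (255, 0, 255).
        src = Image.open("sprites.png").convert("RGB")

        # NEAREST copies whole source pixels instead of averaging neighbours, so every
        # output pixel is either a sprite colour or exactly the key colour - nothing in
        # between, and no semi-fuchsia border to leak through the colour-keyed blit.
        scaled = src.resize((src.width // 2, src.height // 2), Image.NEAREST)
        scaled.save("sprites_scaled.png")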

    Read the article

  • Updating UIView subclass contents

    - by Brett
    Hi, I am trying to update a UIView subclass to draw individual pixels (rectangles) after calculating if they are in the Mandelbrot set. I am using the following code but am getting a blank screen: //drawingView.h ... -(void)drawRect:(CGRect)rect{ CGContextRef context = UIGraphicsGetCurrentContext(); CGContextSetRGBFillColor(context, r, g, b, 1); CGContextSetRGBStrokeColor(context, r, g, b, 1); CGRect arect = CGRectMake(x,y,1,1); CGContextFillRect(context, arect); } ... //viewController.m ... - (void)viewDidLoad { [drawingView setX:10]; [drawingView setY:10]; [drawingView setR:0.0]; [drawingView setG:0.0]; [drawingView setB:0.0]; [super viewDidLoad]; [drawingView setNeedsDisplay]; } Once I figure out how to display one pixel, I'd like to know how to go about updating the view by adding another pixel (once calculated). Once all the variables change, how do I keep the current view as is but add another pixel? Thanks, Brett

    Read the article

  • Qt/C++, Problems with large QImage

    - by David Günzel
    I'm pretty new to C++/Qt and I'm trying to create an application with Visual Studio C++ and Qt (4.8.3). The application displays images using a QGraphicsView, and I need to change the images at pixel level. The basic code is (simplified): QImage* img = new QImage(img_width,img_height,QImage::Format_RGB32); while(do_some_stuff) { img->setPixel(x,y,color); } QGraphicsPixmapItem* pm = new QGraphicsPixmapItem(QPixmap::fromImage(*img)); QGraphicsScene* sc = new QGraphicsScene; sc->setSceneRect(0,0,img->width(),img->height()); sc->addItem(pm); ui.graphicsView->setScene(sc); This works well for images up to around 12000x6000 pixels. The weird thing happens beyond this size. When I set img_width=16000 and img_height=8000, for example, the line img = new QImage(...) returns a null image. The image data should be around 512,000,000 bytes, so it shouldn't be too large, even on a 32 bit system. Also, my machine (Win 7 64bit, 8 GB RAM) should be capable of holding the data. I've also tried this version: uchar* imgbuf = (uchar*) malloc(img_width*img_height*4); QImage* img = new QImage(imgbuf,img_width,img_height,QImage::Format_RGB32); At first, this works. The img pointer is valid and calling img->width() for example returns the correct image width (instead of 0, in case the image pointer is null). But as soon as I call img->setPixel(), the pointer becomes null and img->width() returns 0. So what am I doing wrong? Or is there a better way of modifying large images at pixel level? Regards, David

    Read the article

  • Help in J2ME for creating image and parse it

    - by HAMED
    Can anyone tell me how I can split a PNG image into many PNG images in J2ME? For example: I just want to have a source image of 150*150 pixels and split it into 10×10 images of 15*15 pixels. I wrote some elementary code that throws an exception. This is my code: public class HelloMIDlet extends MIDlet implements CommandListener { private boolean midletPaused = false; private Command exitCommand; private Form form; private StringItem stringItem; Image im , im2; Form form1 = null; public HelloMIDlet() { try { // create source image im = Image.createImage("/image1.JPG"); int height = im.getHeight() ; int width = im.getWidth() ; int x = 0 ; int y = 0 ; while ( y < height ){ while ( x < width ){ // create 15*15 pixel of source image im2 = im.createImage(im, x, y, 15, 15, Sprite.TRANS_NONE) ; x += 15 ; } y += 15 ; x = 0 ; } } catch (IOException ex) { ex.printStackTrace(); } } Please help me to make it right... It's an emergency! Thanks a lot, guys...

    Read the article

  • Screen information while Windows system is locked (.NET)

    - by Matt
    We have a nightly process that updates applications on a user's pc, and that requires bringing the application down and back up again (not looking to get into changing that process). The problem is that we are building a Windows AppBar on launch which requires a valid screen, and when the system is locked there isn't one in the Screen class. So none of the visual effects are enabled and it shows up real ugly. The only way we currently have around this is to detect a locked screen and just spin and wait until the user unlocks the desktop, then continue launching. Leaving it down isn't an option, as this is a key part of our user's workflow, and they expect it to be up and running if they left it that way the night before. Any ideas?? I can't seem to find the display information anywhere, but it has to be stored off someplace, since the user is still logged in. The contents of the Screen.AllScreens array: ** When Locked: Device Name : DISPLAY Primary : True Bits Per Pixel : 0 Bounds : {X=-1280,Y=0,Width=2560,Height=1024} Working Area : {X=0,Y=0,Width=1280,Height=1024} ** When Unlocked: Device Name : \\.\DISPLAY1 Primary : True Bits Per Pixel : 32 Bounds : {X=0,Y=0,Width=1280,Height=1024} Working Area : {X=0,Y=0,Width=1280,Height=994} Device Name : \\.\DISPLAY2 Primary : False Bits Per Pixel : 32 Bounds : {X=-1280,Y=0,Width=1280,Height=1024} Working Area : {X=-1280,Y=0,Width=1280,Height=964}

    Read the article

  • Python: Serial Transmission

    - by Silent Elektron
    I have an image stack of 500 images (jpeg) of 640x480. I intend to make a list of 500 pixels (the 1st pixel of each image) and then send that via COM1 to an FPGA where I do my further processing. I have a couple of questions here: How do I import all the 500 images at a time into Python and how do I store them? How do I send the 500 pixel list via COM1 to the FPGA? I tried the following: Converted the jpeg image to intensity values (each pixel is denoted by a number between 0 and 255) in MATLAB, saved the intensity values in a text file, read that file using readlines(). But it became too cumbersome to make the intensity value files for all the 500 images! Used NumPy to put the read files in a matrix and then pick the first pixel of all images. But when I send it, it's coming out like: [56, 61, 78, ... ,71, 91]. Is there a way to eliminate the [ ] and commas while sending the data serially? Thanks in advance! :)
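
    A minimal sketch of both steps with Pillow and pySerial (the file pattern, port name and baud rate are placeholders): glob the frames, take pixel (0, 0) of each as a 0-255 intensity, and write the 500 values as raw bytes so no brackets or commas ever reach the port.

        import glob
        from PIL import Image
        import serial

        # 1. Load the stack and grab the first pixel of every frame as 0-255 intensity.
        paths = sorted(glob.glob("frames/*.jpg"))          # hypothetical location
        first_pixels = [Image.open(p).convert("L").getpixel((0, 0)) for p in paths]

        # 2. Send raw bytes - bytes(), not str(list), so no '[', ']' or ',' are transmitted.
        port = serial.Serial("COM1", baudrate=9600, timeout=1)
        port.write(bytes(first_pixels))
        port.close()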

    Read the article

  • Selenium Webdriver Java - looking for alternatives for Actions and Robot when performing drag-and-drop

    - by Ja-ke Alconcel
    I first tried the Actions class and the drag-and-drop does work on different elements; however, it was unable to locate a specific draggable element at its exact screen/webpage position. Here's the code I've used: Point loc = driver.findElement(By.id("thiselement")).getLocation(); System.out.println(loc); WebElement drag = driver.findElement(By.id("thiselement")); Actions test = new Actions(driver); test.dragAndDropBy(drag, 0, 60).build().perform(); I checked the element with its pixel location and it prints (837, -52), which was somewhere at the top of the webpage and pixels away from the actual element. Then I tried using the Robot class and it works perfectly fine in my script, but it can only provide consistent successful runs on a single test machine; running it on a different machine with a different screen resolution and screen size will cause the script to fail due to Robot's dependency on the pixel location of the element. The sample code of the Robot script I'm using: Robot dragAndDrop = new Robot(); dragAndDrop.mouseMove(945, 166); //actual pixel location of the draggable element dragAndDrop.mousePress(InputEvent.BUTTON1_MASK); sleep(3000); dragAndDrop.mouseMove(945, 226); dragAndDrop.mouseRelease(InputEvent.BUTTON1_MASK); sleep(3000); Is there any alternative to Actions and Robot to automate drag-and-drop? Or maybe some help on getting the script to work with Actions, as I really can't use Robot. Thanks in advance.

    Read the article

  • "java.lang.OutOfMemoryError: Java heap space" in image and array storage

    - by totalconscience
    I am currently working on an image processing demonstration in Java (an applet). I am running into the problem where my arrays are too large and I am getting the "java.lang.OutOfMemoryError: Java heap space" error. The algorithm I run creates an NxD float array where: N is the number of pixels in the image and D is the coordinates of each pixel plus the colorspace components of each pixel (usually 1 for grayscale or 3 for RGB). For each iteration of the algorithm it creates one of these NxD float arrays and stores it for later use in a vector, so that the user of the applet may look at the individual steps. My client wants the program to be able to load a 500x500 RGB image and run as the upper bound. There are about 12 to 20 iterations per run, so that means I need to be able to store a 12x500x500x5 float array in some fashion. Is there a way to process all of this data and, if possible, how? Example of the issue: I am loading a 512 by 512 grayscale image and even before the first iteration completes I run out of heap space. The line it points me to is: Y.add(new float[N][D]) where Y is a Vector and N and D are described as above. This is the second instance of the code using that line.

    Read the article

  • how to make a CUDA Histogram kernel?

    - by kitw
    Hi all, I am writing a CUDA kernel for a histogram of a picture, but I have no idea how to return an array from the kernel, and the array will change when other threads read it. Any possible solution for it? __global__ void Hist( TColor *dst, //input image int imageW, int imageH, int*data ){ const int ix = blockDim.x * blockIdx.x + threadIdx.x; const int iy = blockDim.y * blockIdx.y + threadIdx.y; if(ix < imageW && iy < imageH) { int pixel = get_red(dst[imageW * (iy) + (ix)]); //this assigns the specific RED value of the image to pixel data[pixel] ++; // ?? problem statement ... } } @param d_dst: input image; TColor is equal to float4. @param data: the array for the histogram, size [255] extern "C" void cuda_Hist(TColor *d_dst, int imageW, int imageH,int* data) { dim3 threads(BLOCKDIM_X, BLOCKDIM_Y); dim3 grid(iDivUp(imageW, BLOCKDIM_X), iDivUp(imageH, BLOCKDIM_Y)); Hist<<<grid, threads>>>(d_dst, imageW, imageH, data); }

    Read the article

  • Creating transparent PNG with exact RGBA values

    - by rrowland
    I'm color-coding a transparent image to be read programmatically. However, the image seems to be getting compressed and my code is reading color values other than the ones I mean to pass it. Concept This is the output I get, exporting as PNG-24. I programmatically check each pixel for one of the six colors I use in creating the image: 0x00000F 0x0000F0 0x000F00 0x00F000 0x0F0000 0xF00000 Each color represents a different texture to apply. Top right (0x00000F) will pull texture from the tile to its top right and blend it at a ratio equal to the opacity of the pixel. The end goal is to create a hex-tiled grid with differing textures that blend smoothly. What's happening It seems that when converting to PNG, Photoshop will change the RGBA to make it smoother, or just to help compression size. Parts that should be 250 red range anywhere from 150 to 255. Question Whether using PNG or another web-compatible format, I need to be able to save these pixel values, essentially instructions, losslessly and still maintain transparency. Is this possible in any format or will I need to re-think my approach?
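
    PNG itself is lossless, so exact RGBA values survive as long as nothing in the export path remaps them (Save for Web can convert colour profiles or dither). One way to rule the editor out is to generate the colour-coded map programmatically; a Pillow sketch, where the six codes are the ones listed above and everything else (file name, alpha value) is illustrative:

        from PIL import Image

        CODES = [0x00000F, 0x0000F0, 0x000F00, 0x00F000, 0x0F0000, 0xF00000]

        img = Image.new("RGBA", (6, 1), (0, 0, 0, 0))
        for x, code in enumerate(CODES):
            r, g, b = (code >> 16) & 0xFF, (code >> 8) & 0xFF, code & 0xFF
            img.putpixel((x, 0), (r, g, b, 128))   # 128 = example blend ratio in the alpha byte

        img.save("codes.png")                       # hypothetical output; PNG stores the RGBA bytes verbatim
        assert Image.open("codes.png").getpixel((0, 0)) == (0, 0, 15, 128)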

    Read the article

  • Voice Recognition Google API

    - by user2966744
    Thanks for reading. I'm creating a simple web-based drawing app that uses speech recognition. I have created a simple page; the project is on GitHub here: https://github.com/a5hton/speechdraw It has a 16x16 pixel grid. I would like to be able to draw on this grid by using simple words. For example, if you say "right", the pixel to the right will be colored black. If you say "down" the pixel below the last one will be colored black. You can say up, down, left or right and the corresponding pixels will be colored. Saying "erase" will switch to erase mode, colouring the pixels back to their original color. Saying "lift" will lift the pen off the page. Saying "draw" will enable the draw mode. Could you please help me work out how to make this happen? Please see the simple page to get an understanding. Thank you! Cheers, Michael

    Read the article

  • Convert png sequence to x264 with ffmpeg

    - by Thucydides411
    I am trying to convert a series of pngs into an mp4 video. I am using ffmpeg, and want to encode the video with the x264 codec. Using the command ffmpeg -y -r 30 -b 1800k -i _tmp%04d.png -vcodec libx264 out.mp4 I get the following warning message Incompatible pixel format 'bgra' for codec 'libx264', auto-selecting format 'yuv420p' My understanding is that there is an alpha channel in the pngs, which the x264 encoder cannot handle. Is there a way to get around this problem? Is there, for example, a way to get the encoder to ignore the alpha channel (my pngs don't actually have any transparent elements)? I'm aware that I could batch convert the pngs beforehand to strip the alpha channel, but the sequence of images is produced by another program, and having to preprocess the images each time I make a video would be less than optimal. Edit: After stripping the alpha channel from each frame using the command convert in.png -background white -flatten +matte out.png ffmpeg gives the warning message Incompatible pixel format 'pal8' for codec 'libx264', auto-selecting format 'yuv420p' so still no dice.
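
    The auto-selection in that warning is normally harmless, but the conversion can be made explicit by forcing the output pixel format with -pix_fmt yuv420p, which lets ffmpeg drop the alpha/palette itself with no pre-processing of the PNGs. A sketch of the invocation, wrapped in a Python subprocess call so it can sit in a build script (paths and bitrate are taken from the question; the flag placement is the assumption to verify):

        import subprocess

        cmd = [
            "ffmpeg", "-y",
            "-r", "30",
            "-i", "_tmp%04d.png",
            "-vcodec", "libx264",
            "-pix_fmt", "yuv420p",   # assumed fix: force the output format so bgra/pal8 input is converted
            "-b:v", "1800k",
            "out.mp4",
        ]
        subprocess.run(cmd, check=True)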

    Read the article

  • ffmpeg open webcam using YUYV but i want MJPEG

    - by Pavel
    I need ffmpeg to open the webcam (Logitech C910) in MJPEG mode, because the webcam can give ~24 fps using the MJPEG "protocol" and only ~10 fps using YUYV. Can I choose between them using the ffmpeg command line? xx@(none) ~ $ v4l2-ctl --list-formats ioctl: VIDIOC_ENUM_FMT Index : 0 Type : Video Capture Pixel Format: 'YUYV' Name : YUV 4:2:2 (YUYV) Index : 1 Type : Video Capture Pixel Format: 'MJPG' (compressed) Name : MJPEG My current command line: ffmpeg -y -f alsa -i hw:3,0 -f video4linux2 -r 20 -s 1280x720 -i /dev/video0 -acodec libfaac -ab 128k -vcodec libx264 /tmp/web.avi ffmpeg produces a corrupted h264 stream when I record from the webcam, but a normal h264 stream when I record from x11grab. Other codecs (mjpeg, mpeg4) work well with the webcam... But this is another story.
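
    The video4linux2 input device does let you request the format the camera delivers: passing -input_format mjpeg as an input option (i.e. before the -i /dev/video0) should ask the driver for the MJPEG stream instead of YUYV. A hedged sketch of the full command, again wrapped in a Python subprocess call (everything other than that flag is copied from the command line above):

        import subprocess

        subprocess.run([
            "ffmpeg", "-y",
            "-f", "alsa", "-i", "hw:3,0",
            "-f", "video4linux2",
            "-input_format", "mjpeg",   # assumed v4l2 option: request MJPEG from the driver instead of YUYV
            "-r", "20", "-s", "1280x720",
            "-i", "/dev/video0",
            "-acodec", "libfaac", "-ab", "128k",
            "-vcodec", "libx264",
            "/tmp/web.avi",
        ], check=True)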

    Read the article

  • Media Information for Constant and Variable bit rate of Video files

    - by cpx
    What is this Maximum bit rate for a .mp4 format file whose bit rate mode is Constant? Media information displayed for MP4 (Using MediaInfo Tool) ID : 1 Format : AVC Format/Info : Advanced Video Codec Format profile : [email protected] Format settings, CABAC : No Format settings, ReFrames : 1 frame Codec ID : avc1 Codec ID/Info : Advanced Video Coding Bit rate mode : Constant Bit rate : 1 500 Kbps Maximum bit rate : 3 961 Kbps Display aspect ratio : 4:3 Frame rate mode : Constant Frame rate : 29.970 fps Color space : YUV Chroma subsampling : 4:2:0 Bit depth : 8 bits Scan type : Progressive Bits/(Pixel*Frame) : 0.163 In this case where the bit rate mode is set to Variable, is the Bit rate field, where the value is displayed as 309, its average bit rate? Media information displayed for M4V (Using MediaInfo Tool) ID : 1 Format : AVC Format/Info : Advanced Video Codec Format profile : [email protected] Format settings, CABAC : No Format settings, ReFrames : 1 frame Codec ID : avc1 Codec ID/Info : Advanced Video Coding Bit rate mode : Variable Bit rate : 309 Kbps Display aspect ratio : 16:9 Frame rate mode : Variable Frame rate : 23.976 fps Minimum frame rate : 23.810 fps Maximum frame rate : 24.390 fps Color space : YUV Chroma subsampling : 4:2:0 Bit depth : 8 bits Scan type : Progressive Bits/(Pixel*Frame) : 0.229 Writing library : x264 core 120

    Read the article

  • Extracting the layer transparency into an editable layer mask in Photoshop

    - by last-child
    Is there any simple way to extract the "baked in" transparency in a layer and turn it into a layer mask in Photoshop? To take a simple example: Let's say that I paint a few strokes with a semi-transparent brush, or paste in a .png-file with an alpha channel. The RGB color values and the alpha value for each pixel are now all contained in the layer-image itself. I would like to be able to edit the alpha values as a layer mask, so that the layer image is solid and contains only the RGB values for each pixel. Is this possible, and if so, how? Thanks. EDIT: To clarify - I'm not really after the transparency values in themselves, but the separation of RGB values and alpha values. That means that the layer must become a solid, opaque image with a mask.

    Read the article

  • High-res icon in Windows Vista alt-tab thumbnail preview?

    - by netvope
    I have customized my alt-tab screen with the following: [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\AltTab] "OverlayIconPx"=dword:00000040 "MaxThumbSizePx"=dword:00000100 "MinThumbSizePcent"=dword:00000064 It works great: the thumbnail becomes 256 pixels wide and the icon at the corner of the thumbnail becomes 64x64 pixels. However, Windows doesn't load the high-res icons from the programs; instead, it uses the 16x16 pixel icon and scales it up with nearest-neighbor. I'm sure the programs have high-res icons because I saw them in the "Extra Large Icons" view in Explorer. So the question is: How can I force Windows to load the high-res icons for the alt-tab thumbnail preview? (Perhaps a registry key, or a .dll hack/injection?)

    Read the article
