Search Results

Search found 1519 results on 61 pages for 'energetic pixels'.


  • Block filters using fragment shaders

    - by Nils
    I was following this tutorial using Apple's OpenGL Shader Builder (a tool similar to Nvidia's FX Composer, but simpler). I could easily apply the filters, but I don't understand whether they worked correctly (and if so, how I can improve the output). For example, the blur filter: OpenGL itself does some image processing on the textures, so if they are displayed at a higher resolution than the original image, they are already blurred by OpenGL. Second, the blurred part is brighter than the unprocessed part, which I think makes no sense, since the filter just takes pixels from the direct neighborhood. That neighborhood is defined by

        float step_w = (1.0/width);

    which I don't quite understand: are the pixels indexed using floating-point values?

    Edit: I forgot to attach the exact code I used.

    Fragment shader:

        // Originally taken from: http://www.ozone3d.net/tutorials/image_filtering_p2.php#part_2
        #define KERNEL_SIZE 9

        float kernel[KERNEL_SIZE];

        uniform sampler2D colorMap;
        uniform float width;
        uniform float height;

        float step_w = (1.0/width);
        float step_h = (1.0/height);
        // float step_w = 20.0;
        // float step_h = 20.0;

        vec2 offset[KERNEL_SIZE];

        void main(void)
        {
            int i = 0;
            vec4 sum = vec4(0.0);

            offset[0] = vec2(-step_w, -step_h);   // south west
            offset[1] = vec2(0.0, -step_h);       // south
            offset[2] = vec2(step_w, -step_h);    // south east
            offset[3] = vec2(-step_w, 0.0);       // west
            offset[4] = vec2(0.0, 0.0);           // center
            offset[5] = vec2(step_w, 0.0);        // east
            offset[6] = vec2(-step_w, step_h);    // north west
            offset[7] = vec2(0.0, step_h);        // north
            offset[8] = vec2(step_w, step_h);     // north east

            // Gaussian kernel
            // 1 2 1
            // 2 4 2
            // 1 2 1
            kernel[0] = 1.0; kernel[1] = 2.0; kernel[2] = 1.0;
            kernel[3] = 2.0; kernel[4] = 4.0; kernel[5] = 2.0;
            kernel[6] = 1.0; kernel[7] = 2.0; kernel[8] = 1.0;

            // TODO make grayscale first
            // Laplacian filter
            // 0  1 0
            // 1 -4 1
            // 0  1 0
            /*
            kernel[0] = 0.0; kernel[1] = 1.0;  kernel[2] = 0.0;
            kernel[3] = 1.0; kernel[4] = -4.0; kernel[5] = 1.0;
            kernel[6] = 0.0; kernel[7] = 2.0;  kernel[8] = 0.0;
            */

            // Mean filter
            // 1 1 1
            // 1 1 1
            // 1 1 1
            /*
            kernel[0] = 1.0; kernel[1] = 1.0; kernel[2] = 1.0;
            kernel[3] = 1.0; kernel[4] = 1.0; kernel[5] = 1.0;
            kernel[6] = 1.0; kernel[7] = 1.0; kernel[8] = 1.0;
            */

            if (gl_TexCoord[0].s < 0.5)
            {
                // For every pixel, sample the neighbor pixels and sum them up
                for (i = 0; i < KERNEL_SIZE; i++)
                {
                    // select the pixel with the corresponding offset
                    vec4 tmp = texture2D(colorMap, gl_TexCoord[0].st + offset[i]);
                    sum += tmp * kernel[i];
                }
                sum /= 16.0;
            }
            else if (gl_TexCoord[0].s > 0.51)
            {
                sum = texture2D(colorMap, gl_TexCoord[0].xy);
            }
            else // draw a red line
            {
                sum = vec4(1.0, 0.0, 0.0, 1.0);
            }

            gl_FragColor = sum;
        }

    Vertex shader:

        void main(void)
        {
            gl_TexCoord[0] = gl_MultiTexCoord0;
            gl_Position = ftransform();
        }
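
    As for the floating-point indexing: GLSL samples textures in normalized coordinates, where 0.0 and 1.0 span the whole texture regardless of its pixel size, so 1.0/width is exactly one texel. The kernel itself can be sanity-checked outside the GL pipeline; here is a minimal NumPy sketch (an illustration added for this point, not part of the original post) showing that the /16.0 normalization preserves average brightness, so a visibly brighter result points at a bug elsewhere (e.g. offsets in the wrong units), not at the kernel:

        import numpy as np

        # The 3x3 Gaussian kernel from the shader; its weights sum to 16,
        # which is why the shader divides the accumulated color by 16.0.
        kernel = np.array([[1, 2, 1],
                           [2, 4, 2],
                           [1, 2, 1]], dtype=np.float64)
        assert kernel.sum() == 16.0

        # Convolve a random grayscale "image" and compare mean brightness.
        img = np.random.rand(64, 64)
        pad = np.pad(img, 1, mode="edge")
        out = np.zeros_like(img)
        for dy in range(3):
            for dx in range(3):
                out += kernel[dy, dx] * pad[dy:dy + 64, dx:dx + 64]
        out /= kernel.sum()

        # A correctly normalized blur keeps the mean almost unchanged.
        print(img.mean(), out.mean())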


  • DirectX: Render to a screen buffer without using a render target

    - by knight666
    Hello, I'm writing an open source 2D game engine, and I want to support as many devices and platforms as possible. Currently I only have Windows Mobile, though. I'm rendering using DirectX Mobile, with DirectDraw as a fallback path. However, I've run into a bit of trouble: it seems that while the reference driver supports CreateRenderTarget, many, many physical devices do not. I need some way to render to the screen without using a render target, because I render sprites using textured quads, but I also need to be able to draw individual pixels. This is how I do it right now:

        // save old values
        if (Error::Failed(m_D3DDevice->GetRenderTarget(&m_D3DOldTarget)))
        {
            ERROR_EXPLAIN("Could not retrieve backbuffer.");
            return false;
        }

        // clear render surface
        if (Error::Failed(m_D3DDevice->SetRenderTarget(m_D3DRenderSurface, NULL)))
        {
            ERROR_EXPLAIN("Could not set render target to render texture.");
            return false;
        }

        if (Error::Failed(m_D3DDevice->Clear(
                0, NULL,                  // target rectangle
                D3DMCLEAR_TARGET,
                D3DMCOLOR_XRGB(0, 0, 0),  // clear color
                1.0f, 0)))
        {
            ERROR_EXPLAIN("Failed to clear render texture.");
            return false;
        }

        D3DMLOCKED_RECT render_rect;
        if (Error::Failed(m_D3DRenderSurface->LockRect(&render_rect, NULL, NULL)))
        {
            ERROR_EXPLAIN("Failed to lock render surface pixels.");
        }
        else
        {
            m_D3DBackSurf->SetBuffer((Pixel*)render_rect.pBits);
            m_D3DRenderSurface->UnlockRect();
        }

        // begin scene
        if (Error::Failed(m_D3DDevice->BeginScene()))
        {
            ERROR_EXPLAIN("Failed to start rendering.");
            return false;
        }

        // =====================
        // example rendering
        // =====================

        // some other stuff, but the most important part of rendering a sprite:
        device->SetTexture(0, m_Texture);
        device->SetStreamSource(0, m_VertexBuffer, sizeof(Vertex));
        device->DrawPrimitive(D3DMPT_TRIANGLELIST, 0, 2);

        // plotting a pixel
        Surface* target = (Surface*)Device::GetRenderMethod()->GetRenderTarget();
        buffer = target->GetBuffer();
        buffer[somepixel] = MAKECOLOR(255, 0, 0);

        // end scene
        if (Error::Failed(device->EndScene()))
        {
            ERROR_EXPLAIN("Failed to end scene.");
            return false;
        }

        // restore the backbuffer as the render target
        if (Error::Failed(device->SetRenderTarget(m_D3DOldTarget, NULL)))
        {
            ERROR_EXPLAIN("Couldn't set render target to backbuffer.");
            return false;
        }

        if (Error::Failed(device->GetBackBuffer(0, D3DMBACKBUFFER_TYPE_MONO, &m_D3DBack)))
        {
            ERROR_EXPLAIN("Couldn't retrieve backbuffer.");
            return false;
        }

        RECT dest = { 0, 0, Device::GetWidth(), Device::GetHeight() };
        if (Error::Failed(device->StretchRect(m_D3DRenderSurface, NULL,
                                              m_D3DBack, &dest, D3DMTEXF_NONE)))
        {
            ERROR_EXPLAIN("Failed to stretch render texture to backbuffer.");
            return false;
        }

        if (Error::Failed(device->Present(NULL, NULL, NULL, NULL)))
        {
            ERROR_EXPLAIN("Failed to present device.");
            return false;
        }

    I'm looking for a way to do the same thing (render sprites using hardware acceleration and plot pixels on a buffer) without using a render target. Thanks in advance.


  • .NET GDI+ image size - file codec limitations

    - by roygbiv
    Is there a limit on the size of image that can be encoded using the image file codecs available in .NET? I'm trying to encode images 4 GB in size, but it simply does not work (or does not work properly, i.e. writes out an unreadable file) with the .bmp, .jpg, .png, or .tif encoders. When I lower the image size to under 2 GB, it does work with the .jpg encoder, but not with .bmp, .tif, or .png. My next attempt would be to try libtiff, because I know TIFF files are meant for large images. What is a good file format for large images, or am I just hitting the file format limitations?

        Random r = new Random((int)DateTime.Now.Ticks);
        int width = 64000;
        int height = 64000;
        int stride = (width % 4) > 0 ? width + (width % 4) : width;
        UIntPtr dataSize = new UIntPtr((ulong)stride * (ulong)height);
        IntPtr p = Program.VirtualAlloc(IntPtr.Zero, dataSize,
            Program.AllocationType.COMMIT | Program.AllocationType.RESERVE,
            Program.MemoryProtection.READWRITE);
        Bitmap bmp = new Bitmap(width, height, stride, PixelFormat.Format8bppIndexed, p);
        BitmapData bd = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
            ImageLockMode.ReadWrite, bmp.PixelFormat);

        ColorPalette cp = bmp.Palette;
        for (int i = 0; i < cp.Entries.Length; i++)
        {
            cp.Entries[i] = Color.FromArgb(i, i, i);
        }
        bmp.Palette = cp;

        unsafe
        {
            for (int y = 0; y < bd.Height; y++)
            {
                byte* row = (byte*)bd.Scan0.ToPointer() + (y * bd.Stride);
                for (int x = 0; x < bd.Width; x++)
                {
                    *(row + x) = (byte)r.Next(256);
                }
            }
        }

        bmp.UnlockBits(bd);
        bmp.Save(@"c:\test.jpg", ImageFormat.Jpeg);
        bmp.Dispose();
        Program.VirtualFree(p, UIntPtr.Zero, 0x8000);

    I have also tried using a pinned GC memory region, but this is limited to under 2 GB:

        Random r = new Random((int)DateTime.Now.Ticks);
        int bytesPerPixel = 4;
        int width = 4000;
        int height = 4000;
        int padding = 4 - ((width * bytesPerPixel) % 4);
        padding = (padding == 4 ? 0 : padding);
        int stride = (width * bytesPerPixel) + padding;

        UInt32[] pixels = new UInt32[width * height];
        GCHandle gchPixels = GCHandle.Alloc(pixels, GCHandleType.Pinned);
        using (Bitmap bmp = new Bitmap(width, height, stride,
            PixelFormat.Format32bppPArgb, gchPixels.AddrOfPinnedObject()))
        {
            for (int y = 0; y < height; y++)
            {
                int row = (y * width);
                for (int x = 0; x < width; x++)
                {
                    pixels[row + x] = (uint)r.Next();
                }
            }
            bmp.Save(@"c:\test.jpg", ImageFormat.Jpeg);
        }
        gchPixels.Free();
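
    The numbers here line up with well-known container limits, which a quick back-of-the-envelope check makes concrete. A small Python sketch (an illustration, not .NET code; the stated field widths are the commonly documented ones for the non-"Big" variants of each format):

        # Check the raw buffer against well-known container limits.
        width = height = 64000
        bytes_per_pixel = 1                     # Format8bppIndexed

        raw = width * height * bytes_per_pixel
        print("raw pixel data: %.2f GiB" % (raw / 2.0**30))   # ~3.81 GiB

        # BMP and classic TIFF store sizes/offsets in 32-bit fields, so data
        # near 4 GiB overflows them; BigTIFF lifts this to 64-bit offsets.
        print("fits a 32-bit container:", raw < 2**32)

        # JPEG stores dimensions in 16-bit fields (max 65535 per side), so
        # 64000x64000 squeaks under the dimension cap, and the *compressed*
        # stream is far smaller than the raw buffer -- consistent with JPEG
        # being the only encoder that worked at the smaller size.
        print("fits JPEG dimensions:", max(width, height) <= 65535)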


  • Add Windows 7’s AeroSnap Feature to Vista and XP

    - by Asian Angel
    Are you using Windows Vista or XP and want that Windows 7 AeroSnap goodness on your own system? Then join us as we look at AeroSnap for Windows Vista and XP. Note: Requires .NET Framework 2.0 or higher (link provided at the bottom of the article).

    Setup

    What exactly does AeroSnap do, you might ask? Here is a quote directly from the website: "AeroSnap is a simple but powerful application that allows you to resize, arrange or maximize your desktop windows with just drag'n'drop. Simply drag a window to a side of your desktop to snap it, or drag it to the top to maximize. When you drag it back to the last position, the last window size will be restored."

    As soon as you have finished installing AeroSnap and started it for the first time, the only item that will be visible is the system tray icon. Before going any further, you should take a moment to view and make any desired adjustments in the Options. Note: AeroSnap works with multiple monitors.

    You may want to have AeroSnap start with Windows each time, but the really nice setting to enable here is the "Snap Preview". If you are using AeroSnap on Vista and have Aero enabled, this will really be nice. The second portion may be of interest to those who would like to enable the keyboard shortcut function. One point worth noting about this screen is that the highest number of pixels from the screen's edge that you can set AeroSnap for is 20 pixels.

    AeroSnap in Action

    AeroSnap is extremely easy to use: just grab the top of an app window and drag it to the left, right, or top of your screen. Since we installed this on Windows Vista, we made certain to enable the "Snap Preview" in the Options. We started off by dragging our Firefox 3.7 window towards the left; once we got close to the edge of the screen, the left half of the screen was temporarily "shaded over". Note: The "Snap Preview" displays on the left and right movements, but not the top movement.

    Releasing Firefox snapped it right into the "shaded over" part of the screen. The great thing about AeroSnap is that it is really easy to return the app window to its former size: all you have to do is click on and grab the top portion of the app window.

    Moving Firefox towards the top of our screen, it quickly snaps into filling the screen. One thing that we did notice is that the window did not "maximize" as per the function of the button in the upper right corner.

    Dragging towards the right side now... and snap! Tucked in all nice and neat. You can minimize the app windows to the Taskbar, and they will return to their previous "snap area" when "maximized" again.

    Conclusion

    If you have been wanting to add Windows 7's AeroSnap goodness to your Vista and XP systems, then you should definitely give this app a try. AeroSnap is very easy to set up and operate.

    Links

    Download AeroSnap for Windows Vista & XP

    Download the .NET Framework


  • How to draw a continuous line, just like in Paint [on hold]

    - by hussain shah
    Hi sir, I want to draw points. The following code works, but the problem is that when I drag with the mouse button held down, it works well if I move slowly; if I move the cursor fast, the points do not form a continuous line. Please, what is the solution?

        #include <iostream>
        #include <GL/glut.h>
        #include <GL/glu.h>
        #include <stdlib.h>

        void first()
        {
            glPushMatrix();
            glTranslatef(1, 01, 01);
            glScalef(1, 1, 1);
            glColor3f(0, 1, 0);
            glBegin(GL_QUADS);
                glVertex2f(0.8, 0.6);
                glVertex2f(0.6, 0.6);
                glVertex2f(0.6, 0.8);
                glVertex2f(0.8, 0.8);
            glEnd();
            glPopMatrix();
            glFlush();
        }

        void display(void)
        {
            glClear(GL_COLOR_BUFFER_BIT); // clear the color of each pixel of the frame
            glClearColor(0, 0, 0, 0);     // screen color
            //glFlush();
        }

        void drag(int x, int y)
        {
            y = 500 - y; // convert from window to projection coordinates
            //x = 500 - x;
            glPointSize(5);
            glColor3f(1.0, 1.0, 1.0);
            glBegin(GL_POINTS);
                glVertex2f(x, y + 2);
            glEnd();
            glutSwapBuffers();
            glFlush();
        }

        void reshape(int w, int h) {}

        void init(void)
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glClearColor(0, 0, 0, 0);
            glViewport(0, 0, 500, 500);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(0.0, 500.0, 0.0, 500.0, 1.0, -1.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }

        void mouse_button(int button, int state, int x, int y)
        {
            if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
            {
                drag(x, y);
                first();
            }
            //else if (button == GLUT_MIDDLE_BUTTON && state == GLUT_DOWN)
            //{
            //
            //}
            else if (button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
            {
                exit(0);
            }
        }

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);                     // initialize the program
            glutInitDisplayMode(GLUT_SINGLE);          // set up a basic display buffer (only singular for now)
            glutInitWindowSize(500, 500);              // set the width and height of the window
            glutInitWindowPosition(100, 100);          // set the position of the window
            glutCreateWindow("A basic OpenGL Window"); // set the caption for the window
            glutMotionFunc(drag);
            //glutMouseFunc(mouse_button);
            init();
            glutDisplayFunc(display); // call the display function to draw our world
            glutMainLoop();           // start the OpenGL loop cycle
            return 0;
        }
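
    Motion callbacks arrive at discrete cursor positions, so fast movement produces widely spaced samples; the usual fix is to remember the previous position and draw a line segment from it to the current one instead of an isolated point (in GLUT terms: cache the last x/y in the motion callback and draw GL_LINES between old and new). A minimal sketch of the idea in Python/Pygame (an illustration, not the poster's code):

        import pygame

        pygame.init()
        screen = pygame.display.set_mode((500, 500))
        screen.fill((0, 0, 0))
        last_pos = None

        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
                elif event.type == pygame.MOUSEBUTTONUP:
                    last_pos = None                 # end the current stroke
                elif event.type == pygame.MOUSEMOTION and event.buttons[0]:
                    if last_pos is not None:
                        # Connecting consecutive samples closes the gaps that
                        # appear when the cursor moves quickly.
                        pygame.draw.line(screen, (255, 255, 255),
                                         last_pos, event.pos, 5)
                    last_pos = event.pos
            pygame.display.flip()

        pygame.quit()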


  • Customizing UPK outputs (Part 1)

    - by [email protected]
    If you are familiar with Oracle's User Productivity Kit, you are aware that UPK is a great product for rapidly developing application training. Did you know that you can also customize the UPK outputs to incorporate your company's logo, colors, and preferred styles? There are several areas that support customization:

    - Logo - Within the developer, you can change the logo for all outputs at one time.
    - Player - The player output uses a style sheet that can be updated to change colors, graphics and other visual branding.
    - Documentation - The print documentation uses a Word-based template that can be modified to match your corporate standards.

    I'll discuss the first one today, and we'll cover the others in subsequent blogs.

    Before you begin:

    - If you are working in a multi-user environment, ensure that you have "Modify" permissions for the Styles directory under the Publishing folder.
    - Make a copy of the current styles. This recommendation is for backup purposes: if something goes wrong, you will have a way to recover.
    - Consider creating your own category by creating a new folder under the Styles directory, and then copying the styles into your new folder. When you upgrade to future versions, the system will overwrite the standard styles with any new feature additions and updates that have been made. With your own category, all of your customizations will remain intact.

    To update the logos in all outputs:

    1. From the Tools menu, choose Customize Logo.
    2. Select the category if necessary.
    3. Browse to select your logo. You can use any size logo, in any graphic format (*.bmp, *.gif, *.jpeg, *.jpg, *.png, or *.tif). The system will make a copy of your logo and add it to each of the publishing styles.
    4. Choose OK, and the update process begins. It may take a few minutes.

    Helpful hints:

    - The logo you select is used "as is" - no resizing or cropping occurs during this process.
    - The Customize Logo process automates replacing all the logo graphics for online deployment (small_logo.gif and large_logo.gif) and the headers in the documentation outputs. You can manually replace these graphics on an individual style basis if you prefer.
    - The recommended logo size is 230 pixels wide x 44 pixels high. Prior to updating the logos, the system will display the size of the selected logo. If you use a logo that is much larger than the recommended size, the heading area will resize to fit the new logo, but that will impact the space available for your training material.
    - If you are using a multi-user environment, the system will check out the publishing styles to you for the logo updates. After you review the styles, remember to check them in so the rest of your team can access the new changes.

    I'd be interested in hearing (or seeing) how you brand your UPK. Feel free to share in the comments!

    --Maria Cozzolino, Manager of Requirements & UI for UPK Product Development

    PS. For those of you who want to customize the player and documentation NOW, check out the detailed instructions in the Publishing Content chapter of the Content Development Guide.


  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.


  • Bounding Box Collision Glitching Problem (Pygame)

    - by Ericson Willians
    So far the "Bounding Box" method is the only one that I know. It's efficient enough to deal with simple games. Nevertheless, the game I'm developing is not that simple anymore and for that reason, I've made a simplified example of the problem. (It's worth noticing that I don't have rotating sprites on my game or anything like that. After showing the code, I'll explain better). Here's the whole code: from pygame import * DONE = False screen = display.set_mode((1024,768)) class Thing(): def __init__(self,x,y,w,h,s,c): self.x = x self.y = y self.w = w self.h = h self.s = s self.sur = Surface((64,48)) draw.rect(self.sur,c,(self.x,self.y,w,h),1) self.sur.fill(c) def draw(self): screen.blit(self.sur,(self.x,self.y)) def move(self,x): if key.get_pressed()[K_w] or key.get_pressed()[K_UP]: if x == 1: self.y -= self.s else: self.y += self.s if key.get_pressed()[K_s] or key.get_pressed()[K_DOWN]: if x == 1: self.y += self.s else: self.y -= self.s if key.get_pressed()[K_a] or key.get_pressed()[K_LEFT]: if x == 1: self.x -= self.s else: self.x += self.s if key.get_pressed()[K_d] or key.get_pressed()[K_RIGHT]: if x == 1: self.x += self.s else: self.x -= self.s def warp(self): if self.y < -48: self.y = 768 if self.y > 768 + 48: self.y = 0 if self.x < -64: self.x = 1024 + 64 if self.x > 1024 + 64: self.x = -64 r1 = Thing(0,0,64,48,1,(0,255,0)) r2 = Thing(6*64,6*48,64,48,1,(255,0,0)) while not DONE: screen.fill((0,0,0)) r2.draw() r1.draw() # If not intersecting, then moves, else, it moves in the opposite direction. if not ((((r1.x + r1.w) > (r2.x - r1.s)) and (r1.x < ((r2.x + r2.w) + r1.s))) and (((r1.y + r1.h) > (r2.y - r1.s)) and (r1.y < ((r2.y + r2.h) + r1.s)))): r1.move(1) else: r1.move(0) r1.warp() if key.get_pressed()[K_ESCAPE]: DONE = True for ev in event.get(): if ev.type == QUIT: DONE = True display.update() quit() The problem: In my actual game, the grid is fixed and each tile has 64 by 48 pixels. I know how to deal with collision perfectly if I moved by that size. Nevertheless, obviously, the player moves really fast. In the example, the collision is detected pretty well (Just as I see in many examples throughout the internet). The problem is that if I put the player to move WHEN IS NOT intersecting, then, when it touches the obstacle, it does not move anymore. Giving that problem, I began switching the directions, but then, when it touches and I press the opposite key, it "glitches through". My actual game has many walls, and the player will touch them many times, and I can't afford letting the player go through them. The code-problem illustrated: When the player goes towards the wall (Fine). When the player goes towards the wall and press the opposite direction. (It glitches through). Here is the logic I've designed before implementing it: I don't know any other method, and I really just want to have walls fixed in a grid, but move by 1 or 2 or 3 pixels (Slowly) and have perfect collision without glitching-possibilities. What do you suggest?


  • Chicago SQL Saturday

    - by Johnm
    This past Saturday, April 17, 2010, I journeyed north to the great city of Chicago for some SQL Server fun, learning and fellowship. The Chicago edition of this grassroots phenomenon was the 31st scheduled SQL Saturday since the program's birth in late 2007. The Chicago SQL Saturday consisted of four tracks with eight sessions each, and was a very energetic and fast-paced day for the 300 or so SQL Server enthusiasts in attendance. The speaker lineup included national notables such as Kevin Kline, Brent Ozar, and Brad McGehee. My hometown of Indianapolis was well represented in the speaker lineup with Arie Jones, Aaron King and Derek Comingore.

    The day began with a very humorous keynote by Kevin Kline and Brent Ozar, who emphasized the importance of community events such as SQL Saturday and the monthly user group meetings. They also brilliantly included the impact that getting involved in the SQL community through social media can have on your professional career.

    My approach to the day was to try to experience as much of the event as I could, so there were very few sessions that I attended for their full duration. I leaped from session to session like a bumblebee, gleaning bits of nectar from each one. Amid these leaps I took the opportunity to briefly chat with some of the in-the-queue speakers as well as other attendees who wandered the hallways. I especially enjoyed a great discussion with Devin Knight about his plans regarding the upcoming Jacksonville SQL Saturday, as well as an interesting SQL interpretation of the Iron Chef, which I think would catch on like wildfire.

    Two sessions stood out as exceptional - so much so that I could not pull myself away:

    - Kevin Kline presented on "SQL Server Internals and Architecture". This session could have been classified as one intended for the beginner; Kevin even personally warned me of such as I entered the room. I am a believer in revisiting the basics regardless of your level of mastery, so I entered this session in that spirit. It was a very clear and precise presentation, masterfully illustrated and demonstrated.

    - Brad McGehee presented on "How and When to Use Indexed Views". This was a topic I had recently been exploring and was considering for use in an integration project. Brad effectively communicated the complexity of this feature and what is involved in gaining its full benefit. It was clear at the conclusion of this session that it was not the right feature for my specific needs.

    Overall, the event was a great success. The use of volunteers, from an attendee's perspective, was masterful. The only recommendation I would have for the next Chicago SQL Saturday would be to include more time between sessions, to permit some networking among the attendees, one-on-one questions for speakers, and visits to the sponsor booths. Congratulations to Wendy Pastrick, Ted Krueger, and Aaron Lowe for their efforts and a very successful SQL Saturday!


  • Update iPhone UIProgressView during NSURLConnection download.

    - by Scott Pendleton
    I am using this code:

        NSURLConnection *oConnection = [[NSURLConnection alloc] initWithRequest:oRequest delegate:self];

    to download a file, and I want to update a progress bar on a subview that I load. For that, I use this code:

        - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
        {
            [oReceivedData appendData:data];
            float n = oReceivedData.length;
            float d = self.iTotalSize;
            NSNumber *oNum = [NSNumber numberWithFloat:n/d];
            self.oDPVC.oProgress.progress = [oNum floatValue];
        }

    The subview is oDPVC, and the progress bar on it is oProgress. Setting the progress property does not update the control. From what I have read on the Web, lots of people want to do this, and yet there is not a complete, reliable sample. Also, there is much contradictory advice. Some people think that you don't need a separate thread. Some say you need a background thread for the progress update. Some say no, do other things in the background and update the progress on the main thread. I've tried all the advice, and none of it works for me.

    One more thing - maybe this is a clue. I use this code to load the subview during applicationDidFinishLaunching:

        self.oDPVC = [[DownloadProgressViewController alloc] initWithNibName:@"DownloadProgressViewController" bundle:nil];
        [window addSubview:self.oDPVC.view];

    In the XIB file (which I have examined in both Interface Builder and in a text editor) the progress bar is 280 pixels wide. But when the view opens, it has somehow been adjusted to maybe half that width. Also, the background image of the view is default.png. Instead of appearing right on top of the default image, it is shoved up about 10 pixels, leaving a white bar across the bottom of the screen. Maybe that's a separate issue, maybe not.


  • iPhone SDK : UIImageView - Collapsing animation

    - by chris.o.
    Hi All, I'm trying to animate a horizontal bar with a color gradient across the screen. For simplicity, I chose to make a PNG of the fully extended bar, assign it to a UIImageView, and (attempt to) animate resizing it using a UIView animation. The problem is that in the "closed" state of the image, the portion of the image showing is not the part that I want. The image is arranged so that, from left to right, a white-to-red color gradient occurs. The right side (about 20 pixels) is solid red and is the part I want to show when the bar is "collapsed". I'm trying to extend the image out by about 100 pixels to its full width. I referenced the "Buy Now" button example for my code, as it seemed relevant: http://stackoverflow.com/questions/1669804/uibutton-appstore-buy-button-animation

    My code:

        [UIView beginAnimations:@"barAnimation" context:nil];
        [UIView setAnimationDuration:0.6];

        CGRect barFrame = bookBar.bounds;
        if (fExtendBar)
        {
            barFrame.origin.x -= 100;
            barFrame.size.width += 100;
        }
        else
        {
            barFrame.origin.x += 100;
            barFrame.size.width -= 100;
        }
        bookBar.frame = barFrame;

        [UIView commitAnimations];

    I feel like this should be possible, maybe by setting an "offset of the image" to display in the UIImageView, but I can't seem to make it work. Also, I've noticed that the behavior is kind of consistent with what happens in Interface Builder when you resize an image. Then again, I would think there would be a way to override this behavior programmatically. Any suggestions (including other approaches) are welcome. Thanks, chris.o.


  • Scrolling a Canvas smoothly in Android

    - by prepbgg
    I'm new to Android. I am drawing bitmaps, lines and shapes onto a Canvas inside the onDraw(Canvas canvas) method of my view. I am looking for help on how to implement smooth scrolling in response to a drag by the user. I have searched but not found any tutorials to help me with this.

    The reference for Canvas seems to say that if a Canvas is constructed from a Bitmap (called bmpBuffer, say), then anything drawn on the Canvas is also drawn on bmpBuffer. Would it be possible to use bmpBuffer to implement a scroll - perhaps copy it back to the Canvas shifted by a few pixels at a time? But if I use Canvas.drawBitmap to draw bmpBuffer back to the Canvas shifted by a few pixels, won't bmpBuffer be corrupted? Perhaps, therefore, I should copy bmpBuffer to bmpBuffer2, then draw bmpBuffer2 back to the Canvas.

    A more straightforward approach would be to draw the lines, shapes, etc. straight into a buffer Bitmap, then draw that buffer (with a shift) onto the Canvas, but as far as I can see the various methods drawLine(), drawShape() and so on are available only for drawing to a Canvas, not to a Bitmap. Could I have two Canvases? One would be constructed from the buffer bitmap and used simply for plotting the lines, shapes, etc., and then the buffer bitmap would be drawn onto the other Canvas for display in the view. I should welcome any advice!

    Answers to similar questions here (and on other websites) refer to "blitting". I understand the concept, but can't find anything about "blit" or "bitblt" in the Android documentation. Are Canvas.drawBitmap and Bitmap.copy Android's equivalents?
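
    The two-canvas plan in the middle paragraph is the standard double-buffering pattern, and blitting a bitmap onto another canvas does not modify the source bitmap. Here is the same pattern sketched in Python/Pygame, purely to illustrate the buffering idea (not Android code): draw the expensive content into an offscreen buffer once, then re-blit it at a new offset on every drag event.

        import pygame

        pygame.init()
        screen = pygame.display.set_mode((400, 300))

        # Draw the expensive content once into an offscreen buffer...
        buffer = pygame.Surface((800, 600))
        buffer.fill((30, 30, 30))
        for x in range(0, 800, 40):
            pygame.draw.line(buffer, (0, 200, 0), (x, 0), (x, 600))

        offset_x = offset_y = 0
        dragging = False
        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
                elif event.type == pygame.MOUSEBUTTONDOWN:
                    dragging = True
                elif event.type == pygame.MOUSEBUTTONUP:
                    dragging = False
                elif event.type == pygame.MOUSEMOTION and dragging:
                    # ...and scrolling is just re-blitting the same buffer
                    # at a new offset; the buffer itself is never modified.
                    offset_x += event.rel[0]
                    offset_y += event.rel[1]
            screen.blit(buffer, (offset_x, offset_y))
            pygame.display.flip()

        pygame.quit()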


  • Force a UIView to redraw immediately, instead of during next run loop

    - by Justin Kent
    I've created a UIImagePicker / camera view, with a toolbar and a custom button for taking a snapshot. I can't really change to using the default way because of the custom button, and because I'm drawing on top of the view. When you hit the button, I want to take a screenshot using UIGetScreenImage(); however, the toolbar shows up in the image, even if I hide it first:

        // hide the toolbar
        self.toolbar.hidden = YES;

        // capture the screen pixels
        CGImageRef screenCap = UIGetScreenImage();

    I'm pretty sure this is because, even though the toolbar is marked hidden, the screen only gets redrawn once the function returns and we enter the next run loop - after UIGetScreenImage is called. I tried making the following addition, but it didn't help:

        // hide the toolbar
        self.toolbar.hidden = YES;
        [self.toolbar drawRect:CGRectMake(0, 0, 320, 52)];

        // capture the screen pixels
        CGImageRef screenCap = UIGetScreenImage();

    I also tried using setNeedsDisplay, but that doesn't work either, because once again the draw happens after the current function returns. Any suggestions? Thanks!


  • Conversion from iPhone Core Surface RGB frame into ffmpeg AVFrame

    - by Sridhar
    Hello, I am trying to convert a Core Surface RGB frame buffer (iPhone) to an ffmpeg AVFrame to encode into a movie file, but I am not getting the correct video output (the video shows dazzling colors, not the correct picture). I guess there is something wrong with the conversion from the Core Surface frame buffer into the AVFrame. Here is my code:

        Surface *surface = [[Surface alloc] initWithCoreSurfaceBuffer:coreSurfaceBuffer];
        [surface lock];

        unsigned int height = surface.height;
        unsigned int width = surface.width;
        unsigned int alignmentedBytesPerRow = (width * 4);

        if (!readblePixels)
        {
            readblePixels = CGBitmapAllocateData(alignmentedBytesPerRow * height);
            NSLog(@"alloced readablepixels");
        }

        unsigned int bytesPerRow = surface.bytesPerRow;
        void *pixels = surface.baseAddress;

        for (unsigned int j = 0; j < height; j++)
        {
            memcpy(readblePixels + alignmentedBytesPerRow * j,
                   pixels + bytesPerRow * j,
                   bytesPerRow);
        }

        pFrameRGB->data[0] = readblePixels; // I guess here is what I am doing wrong.
        pFrameRGB->data[1] = NULL;
        pFrameRGB->data[2] = NULL;
        pFrameRGB->data[3] = NULL;

        pFrameRGB->linesize[0] = pCodecCtx->width;
        pFrameRGB->linesize[1] = 0;
        pFrameRGB->linesize[2] = 0;
        pFrameRGB->linesize[3] = 0;

        sws_scale(img_convert_ctx, pFrameRGB->data, pFrameRGB->linesize, 0,
                  pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);

    Please help me out. Thanks, Raghu


  • Fastest image iteration in Python

    - by Greg
    I am creating a simple green screen app with Python 2.7.4, but am getting quite slow results. I am currently using PIL 1.1.7 to load and iterate the images, and saw huge speed-ups changing from the old getpixel() to the newer load() and pixel access object indexing. However, the following loop still takes around 2.5 seconds to run for an image of around 720p resolution:

        def colorclose(Cb_p, Cr_p, Cb_key, Cr_key, tola, tolb):
            temp = math.sqrt((Cb_key - Cb_p)**2 + (Cr_key - Cr_p)**2)
            if temp < tola:
                return 0.0
            else:
                if temp < tolb:
                    return (temp - tola) / (tolb - tola)
                else:
                    return 1.0

        ....

        for x in range(width):
            for y in range(height):
                Y, cb, cr = fg_cbcr_list[x, y]
                mask = colorclose(cb, cr, cb_key, cr_key, tola, tolb)
                mask = 1 - mask
                bgr, bgg, bgb = bg_list[x, y]
                fgr, fgg, fgb = fg_list[x, y]
                pixels[x, y] = (
                    (int)(fgr - mask * key_color[0] + mask * bgr),
                    (int)(fgg - mask * key_color[1] + mask * bgg),
                    (int)(fgb - mask * key_color[2] + mask * bgb))

    Am I doing anything hugely inefficient here which makes it run so slowly? I have seen similar, simpler examples where the loop is replaced by a boolean matrix, for instance, but for this case I can't see a way to replace the loop. The pixels[x, y] assignment seems to take the most time, but, not knowing Python very well, I am unsure of a more efficient way to do this. Any help would be appreciated.
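
    Per-pixel Python loops at 720p are dominated by interpreter overhead, so the usual fix is to vectorize the whole computation with NumPy. A hedged sketch of the same math (the array names and shapes here are assumptions; e.g. np.asarray(pil_image) yields the kind of HxWx3 arrays used below):

        import numpy as np

        def composite(fg_arr, bg_arr, cb_arr, cr_arr,
                      cb_key, cr_key, tola, tolb, key_color):
            # colorclose() collapses to one clipped expression: 0 inside
            # tola, a linear ramp between tola and tolb, 1 beyond tolb.
            dist = np.sqrt((cb_key - cb_arr.astype(np.float64)) ** 2 +
                           (cr_key - cr_arr.astype(np.float64)) ** 2)
            mask = np.clip((dist - tola) / (tolb - tola), 0.0, 1.0)
            mask = (1.0 - mask)[..., None]      # broadcast over the RGB axis

            out = fg_arr - mask * np.asarray(key_color) + mask * bg_arr
            return np.clip(out, 0, 255).astype(np.uint8)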


  • Editing 8bpp indexed Bitmaps

    - by Pedro Sá
    Hi, I'm trying to edit the pixels of an 8bpp bitmap. Since this pixel format is indexed, I'm aware that it uses a color table to map the pixel values. Even though I can edit the bitmap by converting it to 24bpp, 8bpp editing is much faster (13 ms vs. 3 ms). But changing each value when accessing the 8bpp bitmap results in some random RGB colors, even though the PixelFormat remains 8bpp.

    I'm currently developing in C#, and the algorithm is as follows:

    1. (C#) Load the original bitmap at 8bpp.
    2. (C#) Create an empty temp bitmap at 8bpp with the same size as the original.
    3. (C#) LockBits on both bitmaps and, using P/Invoke, call a C++ method, passing it the Scan0 of each BitmapData object. (I used a C++ method as it offers better performance when iterating through the bitmap's pixels.)
    4. (C++) Create an int[256] palette according to some parameters, and edit the temp bitmap's bytes by passing the original's pixel values through the palette.
    5. (C#) UnlockBits on both bitmaps.

    My question is: how can I edit the pixel values without getting the strange RGB colors - or, even better, how can I edit the 8bpp bitmap's color table? Regards, Pedro
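
    For what it's worth, the indexed-color model is easy to experiment with outside GDI+: in an 8bpp image the pixel bytes are palette indices, and remapping colors means rewriting the palette, not the pixels. A small Python/PIL sketch of that idea (an illustration only; the file names are hypothetical, and the GDI+ counterpart of the palette write is the Bitmap.Palette property used in the question above):

        from PIL import Image

        # Open an image and force it into 8bpp indexed ("P") mode.
        img = Image.open("input.png").convert("P")   # hypothetical input file

        # Build a 256-entry palette as a flat [r0, g0, b0, r1, g1, b1, ...]
        # list; here, an inverted grayscale ramp. The pixel bytes are left
        # untouched -- only the index-to-color mapping changes.
        palette = []
        for i in range(256):
            v = 255 - i
            palette.extend((v, v, v))
        img.putpalette(palette)

        img.save("output.png")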


  • How do you word wrap a RichTextField for Blackberry

    - by Kai
    I've been trying to modify a RichTextField to display correctly in its half of the horizontal field. The goal is this:

        ---------------------------
        | address is | ***********|
        | very long  | ** IMAGE **|
        | state, zip | ***********|
        ---------------------------

    where the address is a single string, separate from the city and zip. I am modifying the address field like this:

        RichTextField addrField = new RichTextField(address) {
            public int getPreferredWidth() {
                return 200;
            }
            protected void layout(int maxWidth, int maxHeight) {
                super.layout(getPreferredWidth(), maxHeight);
                setExtent(getPreferredWidth(), getHeight());
            }
        };

    The results look like this:

        -----------------------------
        | address is ve| ***********|
        | state, zip   | ** IMAGE **|
        |              | ***********|
        -----------------------------

    where clearly the address is just going under the image. Both horizontal fields are a static 200 pixels wide, so it's not as if the system wouldn't know where to wrap the address. However, I have heard it is not easy to do this, and it is not done automatically. I have had no success finding a direct answer online. I have found people saying you need to do it in a custom layout manager, and some refer to the RichTextField API, which is of no use - but nobody actually mentions how to do it.

    I understand that I may need to read character by character and set where the line breaks should happen. What I don't know is how exactly to do any of this. You can't just count characters and assume each is worth 5 pixels, and you shouldn't have to. Surely there must be some way to achieve this in a way that makes sense. Any suggestions?
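
    The character-by-character idea is close: the usual approach is a greedy wrap that measures whole words with the platform's font metrics (on BlackBerry, Font.getAdvance(String) returns a string's pixel width) and starts a new line whenever the next word would overflow. A language-neutral sketch in Python, with the pixel measurement stubbed out (an illustration, not BlackBerry API code):

        # Greedy word wrap against a pixel budget. measure() stands in for
        # the platform's text measurement and is a stub here.
        def wrap(text, max_width, measure):
            lines, current = [], ""
            for word in text.split():
                candidate = word if not current else current + " " + word
                if measure(candidate) <= max_width:
                    current = candidate
                else:
                    if current:
                        lines.append(current)
                    current = word            # this word starts the next line
            if current:
                lines.append(current)
            return lines

        # Toy measure: pretend every character is 8 px wide.
        print(wrap("address is very long state, zip", 100, lambda s: 8 * len(s)))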


  • Clipped UITextField with UITextFieldAlignmentRight

    - by Typeoneerror
    Got a small problem with UITextField. I have a simple UITextField, and when I set the textAlignment property to "right", the text gets clipped by 1-2 pixels. It looks bad, so I'm hoping someone has an idea of how to remedy this. I've tried setting the frame to integral values to prevent it from landing on half-pixel boundaries:

        - (UITextField *)textControlForSetting:(NSDictionary *)settings
        {
            CGRect frame = CGRectIntegral(CGRectMake(100.0f, 0.0f, 170.0f, 44.0f));
            UITextField *textField = [[[UITextField alloc] initWithFrame:frame] autorelease];

            NSString *defaultValue = [settings objectForKey:kDefaultValueKey];
            NSString *currentValue = [prefs objectForKey:[settings objectForKey:kSettingKey]];

            textField.tag = settingsCounter;
            textField.delegate = self;
            textField.textAlignment = UITextAlignmentRight;
            textField.font = [UIFont fontWithName:@"HelveticaNeue" size:14.0f];
            textField.contentVerticalAlignment = UIControlContentVerticalAlignmentCenter;
            textField.placeholder = (currentValue != nil) ? currentValue : defaultValue;

            settingsCounter++;
            return textField;
        }


  • How to scale a sprite image without losing color key information?

    - by Michael P
    Hello everyone, I'm currently developing a simple application that displays a map and draws some markers on it. I'm developing for Windows Mobile, so I decided to use DirectDraw and Imaging interfaces to make the application fast and pretty. The map moves when the user moves a finger on the touchscreen, so the whole map moving/scrolling animation has to be fast - but it is not.

    On every map update I have to draw a portion of the map, the control buttons, and the markers; the buttons and markers are preloaded on a DirectDraw surface as a mipmap. So the only thing I do is BitBlt from the mipmap to a back buffer, and from the back buffer to the primary surface (I can't use page flipping due to the windowed mode of my application).

    Previously I used a premultiplied-alpha surface with a 32-bit ARGB pixel format for the image mipmap. Everything looked good, but drawing the entire "scene" was horribly slow - I could forget about smooth map scrolling. Now I'm using a mipmap with the native (RGB565) pixel format and a fuchsia (0xFF00FF) color key, and drawing is much better.

    My mipmap surface is generated at program load: images are loaded from files, scaled (with filtering), and drawn on the mipmap. The problem is that the image scaling process blends pixel colors, and the pixels on the border of a sprite region get blended with the surrounding fuchsia pixels, resulting in semi-fuchsia colors that are not treated as the color key. When I blit with the color-key option, sprites have small fuchsia-like borders, and it looks really bad.

    How can I solve this problem? I could use alpha blitting, but it is too slow - even in the ARGB 1555 format.
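
    One workable recipe (sketched below in Python/NumPy as an assumed approach, not DirectDraw code) is to filter coverage and color separately: zero out the key-colored pixels, downscale the color channels and a 0/1 coverage mask with the same filter, then divide the filtered color by the filtered coverage and re-key the pixels whose coverage falls below one half. That way fuchsia never contributes to the border pixels' colors:

        import numpy as np
        from PIL import Image

        FUCHSIA = (255, 0, 255)

        # Hypothetical sprite sheet with fuchsia as the transparent key color.
        src = np.asarray(Image.open("sprite.png").convert("RGB"), dtype=np.float64)
        cov = (~np.all(src == FUCHSIA, axis=-1)).astype(np.float64)  # 1 = sprite pixel

        def box_halve(a):
            # 2x2 box filter at exactly half size (assumes even dimensions).
            return (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0

        color = src * cov[..., None]        # key pixels contribute nothing
        color_s = box_halve(color)
        cov_s = box_halve(cov)

        out = np.empty_like(color_s)
        opaque = cov_s > 0.5                # majority-opaque pixels stay visible
        # Un-premultiply: average only over the sprite pixels that contributed.
        out[opaque] = color_s[opaque] / cov_s[opaque][:, None]
        out[~opaque] = FUCHSIA              # everything else becomes clean key color

        Image.fromarray(out.astype(np.uint8)).save("sprite_half.png")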


  • DWM and painting unresponsive apps

    - by Doug Kavendek
    In Vista and later, if an app becomes unresponsive, the Desktop Window Manager is able to handle redrawing it when necessary (move a window over it, drag it around, etc.) because it has kept a pixel buffer for it. Windows also tries to detect when an app has become unresponsive after some timeout, and tries to make the best of the situation - I believe it dims out the window, adds "Not Responding" to its title bar, and perhaps applies some other effects.

    Now, we have a skinned app that uses window regions and layered windows, and it doesn't play well with these effects. We've been developing on XP, but have noticed a strange effect when testing on Vista. At some points the app may spend a few moments on some calculation or callback, and if it passes the unresponsive threshold (I've read that it's a five-second timeout, but I cannot find a link), a strange graphical problem occurs: any pixels that would be 100% transparent due to the window regions turn black, which effectively makes the window rectangular again, with a black background. There seem to be other anomalies as well, with the original window's pixels being shifted a bit in some child dialogs.

    I am working on reducing such delays (ideally Windows will never need to step in like this) and trying to maintain responsiveness while the app is busy, but I'd still like to figure out what is causing it to render like that, as I can't guarantee I can eliminate all delays. Basically, I just would like to know what Windows is doing when this happens, and how I can make my app behave properly with it. Skinned apps still have to work on Vista and later, so I need to figure out what I'm doing that's non-standard. I don't even know exactly how to search for information on how Windows now handles unresponsive apps, as my searches only return people having issues with apps that are unresponsive, or very rudimentary explanations of what the DWM does with such apps. Heck, I'm not even 100% sure it's the DWM that's responsible, but it seems likely. Any potential leads?

    Photo of problem (screen shots won't capture the effect; note that the white dialog's buffer is shifted - it is shifted exactly by the distance it has been offset from the main (blue) window).


  • How do I make the left and right gutters different colors with 960.gs?

    - by Andrew Arrow
    How do I make the left and right gutters different colors with 960.gs? When I try something simple like:

        <div style="background-color: green">
          <div class="container_16">
            <div class="grid_16">
              test
            </div>
          </div>
        </div>
        <div style="background-color: cyan">
          <div class="container_16">
            <div class="grid_16">
              test
            </div>
          </div>
        </div>

    the green and cyan colors are ignored. It seems like the "grid_16" class removes the color for some reason?

    My goal is to have different sections of the page in different colors all the way across the page, even past 960 pixels. So if someone makes their browser 1200px wide, the left and right sides have the right color, and the rest of the grid system is contained within the 960 pixels in the middle. I could add a background color to 'body' to do this for just one color, but I want multiple colors on the page - like different colored horizontal stripes. Thanks.


  • GLSL shader render to texture not saving alpha value

    - by quadelirus
    I am rendering to a texture using a GLSL shader and then sending that texture as input to a second shader. For the first texture I am using the RGB channels to send color data to the second GLSL shader, but I want to use the alpha channel to send a floating-point number that the second shader will use as part of its program. The problem is that when I read the texture in the second shader, the alpha value is always 1.0. I tested this in the following way: at the end of the first shader I did this:

        gl_FragColor = vec4(r, g, b, 0.1);

    and then in the second shader I read the value of the first texture using something along the lines of:

        vec4 f = texture2D(previous_tex, pos);
        if (f.a != 1.0) {
            gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
            return;
        }

    No pixels in my output are black, whereas if I change the above code to:

        gl_FragColor = vec4(r, g, 0.1, 1.0); // notice I'm now sending 0.1 for blue

    and in the second shader:

        vec4 f = texture2D(previous_tex, pos);
        if (f.b != 1.0) {
            gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
            return;
        }

    all the appropriate pixels are black. This means that for some reason, when I set the alpha value to something other than 1.0 in the first shader and render to a texture, it is still seen as 1.0 by the second shader. Before I render to the texture, I call glDisable(GL_BLEND);

    It seems pretty clear to me that the problem has to do with OpenGL handling alpha values in some way that isn't obvious to me, since I can use the blue channel in the way I want, and I figured someone out there will instantly see the problem.


  • Python pixel manipulation library

    - by silinter
    So I'm going through the beginning stages of producing a game in Python, and I'm looking for a library that is able to manipulate pixels and blit them relatively fast. My first thought was Pygame, as it deals in pure 2D surfaces, but it only allows pixel access through Surface.get_at(), Surface.set_at() and Surface.get_buffer(), all of which lock the surface each time they're called, making them slow to use. I can also use the PixelArray and surfarray classes, but they lock the surface for the duration of their lifetimes, and the only way to blit them to a surface is to either copy the pixels to a new surface, or use surfarray.blit_array, which requires creating a subsurface of the screen and blitting it to that, if the array is smaller than the screen (if it's bigger I can just use a slice of the array, which is no problem).

    I don't have much experience with PyOpenGL or Pyglet, but I'm wondering if there is a faster library for doing pixel manipulation, or if there is a faster method in Pygame for doing pixel manipulation. I did some work with SDL and OpenGL in C, and I do like the idea of adding vertex/fragment shaders to my program. My program will chiefly be dealing in loading images and writing/reading to/from surfaces.
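
    For bulk updates, the surfarray route mentioned above is usually the fastest pure-Pygame option: do all the pixel math in NumPy, then push the whole array to the display in one call. A minimal sketch (an illustration; note that surfarray uses (width, height, 3) axis order):

        import numpy as np
        import pygame

        pygame.init()
        screen = pygame.display.set_mode((320, 240))

        # Build a 320x240 RGB gradient entirely in NumPy.
        x = np.linspace(0, 255, 320, dtype=np.uint8)
        y = np.linspace(0, 255, 240, dtype=np.uint8)
        frame = np.zeros((320, 240, 3), dtype=np.uint8)
        frame[..., 0] = x[:, None]          # red varies across x
        frame[..., 1] = y[None, :]          # green varies across y

        # One bulk copy to the screen; no per-pixel locking.
        pygame.surfarray.blit_array(screen, frame)
        pygame.display.flip()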


  • How to tile a 3D mesh with an image brush in XAML

    - by MC9000
    I have a 2D square in a Viewport3D on which I want to tile an image (like a checkerboard, or flooring with a "tiles" effect). I've created an ImageBrush (the image is 50x50 pixels, the surface 250x550 pixels) and a viewport, trying to follow MS's site (though their example is for 2D), but only one of the colors in the tile image shows up, and no tiling is seen. I can't find a single example on the Internet, and MS's site has no info (that I can find) on 3D XAML anywhere, so I'm stumped as to how to actually do this.

        <Viewport3D>
          <Viewport3D.Camera>
            <PerspectiveCamera Position="125,790,120" LookDirection="0,-.7,-0.25" UpDirection="0,0,1" />
          </Viewport3D.Camera>
          <ModelVisual3D>
            <ModelVisual3D.Content>
              <Model3DGroup>
                <AmbientLight Color="white" />
                <GeometryModel3D>
                  <GeometryModel3D.Geometry>
                    <MeshGeometry3D Positions="0,0,0 250,0,0 250,550,0 0,550,0"
                                    TriangleIndices="0 1 3 1 2 3" />
                  </GeometryModel3D.Geometry>
                  <GeometryModel3D.Material>
                    <DiffuseMaterial>
                      <DiffuseMaterial.Brush>
                        <ImageBrush ViewportUnits="Absolute" TileMode="Tile"
                                    ImageSource="testsquare.gif" Viewport="0,0,50,50"
                                    Stretch="None" ViewboxUnits="Absolute" />
                      </DiffuseMaterial.Brush>
                    </DiffuseMaterial>
                  </GeometryModel3D.Material>
                </GeometryModel3D>
              </Model3DGroup>
            </ModelVisual3D.Content>
          </ModelVisual3D>
        </Viewport3D>


  • What are some programming techniques for converting SD images to HD images

    - by Dr Dork
    I'm taking a programming class, and the instructor loves to work with images, so most of our assignments involve manipulating raw RGB image data. One of our assignments is to implement a standard image converter that converts SD images to HD images and vice versa. I always take advantage of these types of assignments to go a little beyond what we were asked to do, so I added a basic anti-aliasing process that uses the average pixel color of the 3x3 surrounding pixels to improve the converted image. While it helps a bit, the resulting image still doesn't look good - which is OK, because it's not expected to for the assignment.

    I've learned that converting SD to HD images is much harder than downsampling from HD to SD, simply because SD to HD effectively involves increasing resolution where it does not exist. Obviously, it is hard to create pixels from nothing, but I'd like to enhance my anti-aliasing to something that provides better results when upscaling an image. Most of the techniques I find and read about on the internet are far beyond my level of image processing and programming.

    Can anybody suggest any better methods or processes to create good HD content from SD content that may be within my programming skill level? I know that's a difficult thing to gauge since you don't know me, but perhaps knowing that I can write C++ code to read in raw RGB data and upscale/downscale it with simple average anti-aliasing will give you an idea. Thanks in advance for all your help!
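
    The standard next steps up from neighborhood averaging are the interpolating resamplers - bilinear, bicubic, and Lanczos - which fit smooth curves through the neighboring source pixels instead of averaging after the fact. A quick Python/Pillow sketch for comparing them side by side (the file name is hypothetical; Image.LANCZOS requires a reasonably recent Pillow):

        from PIL import Image

        img = Image.open("sd_frame.png")   # hypothetical 720x480 SD input
        w, h = img.size

        for name, resampler in [("nearest", Image.NEAREST),
                                ("bilinear", Image.BILINEAR),
                                ("bicubic", Image.BICUBIC),
                                ("lanczos", Image.LANCZOS)]:
            # Each resampler weights a neighborhood of source pixels per
            # output pixel; bicubic and Lanczos preserve edges noticeably
            # better than a plain 3x3 average applied after the scale.
            img.resize((w * 2, h * 2), resampler).save("hd_" + name + ".png")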

