Search Results

Search found 1524 results on 61 pages for 'stimulating pixels'.

  • .NET GDI+ image size - file codec limitations

    - by roygbiv
    Is there a limit on the size of image that can be encoded using the image file codecs available from .NET? I'm trying to encode images 4GB in size, but it simply does not work (or does not work properly, i.e. writes out an unreadable file) with the .bmp, .jpg, .png or .tif encoders. When I lower the image size to < 2GB it does work with .jpg, but not with .bmp, .tif or .png. My next attempt would be to try libtiff, because I know tiff files are meant for large images. What is a good file format for large images? Or am I just hitting the file format limitations?

        Random r = new Random((int)DateTime.Now.Ticks);
        int width = 64000;
        int height = 64000;
        // pad the stride up to the next multiple of 4 (the original code added
        // (width % 4), which does not yield a multiple of 4 in general)
        int stride = width + ((4 - (width % 4)) % 4);
        UIntPtr dataSize = new UIntPtr((ulong)stride * (ulong)height);
        IntPtr p = Program.VirtualAlloc(IntPtr.Zero, dataSize,
            Program.AllocationType.COMMIT | Program.AllocationType.RESERVE,
            Program.MemoryProtection.READWRITE);
        Bitmap bmp = new Bitmap(width, height, stride, PixelFormat.Format8bppIndexed, p);
        BitmapData bd = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
            ImageLockMode.ReadWrite, bmp.PixelFormat);

        // build a grayscale palette
        ColorPalette cp = bmp.Palette;
        for (int i = 0; i < cp.Entries.Length; i++)
        {
            cp.Entries[i] = Color.FromArgb(i, i, i);
        }
        bmp.Palette = cp;

        // fill the bitmap with random noise
        unsafe
        {
            for (int y = 0; y < bd.Height; y++)
            {
                byte* row = (byte*)bd.Scan0.ToPointer() + (y * bd.Stride);
                for (int x = 0; x < bd.Width; x++)
                {
                    *(row + x) = (byte)r.Next(256);
                }
            }
        }
        bmp.UnlockBits(bd);

        bmp.Save(@"c:\test.jpg", ImageFormat.Jpeg);
        bmp.Dispose();
        Program.VirtualFree(p, UIntPtr.Zero, 0x8000); // 0x8000 = MEM_RELEASE

    I have also tried using a pinned GC memory region, but this is limited to < 2GB.

        Random r = new Random((int)DateTime.Now.Ticks);
        int bytesPerPixel = 4;
        int width = 4000;
        int height = 4000;
        int padding = 4 - ((width * bytesPerPixel) % 4);
        padding = (padding == 4 ? 0 : padding);
        int stride = (width * bytesPerPixel) + padding;
        UInt32[] pixels = new UInt32[width * height];
        GCHandle gchPixels = GCHandle.Alloc(pixels, GCHandleType.Pinned);
        using (Bitmap bmp = new Bitmap(width, height, stride,
            PixelFormat.Format32bppPArgb, gchPixels.AddrOfPinnedObject()))
        {
            for (int y = 0; y < height; y++)
            {
                int row = (y * width);
                for (int x = 0; x < width; x++)
                {
                    pixels[row + x] = (uint)r.Next();
                }
            }
            bmp.Save(@"c:\test.jpg", ImageFormat.Jpeg);
        }
        gchPixels.Free();
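
    A note on the libtiff idea: classic TIFF is capped at 4GB because its file offsets are 32-bit, while BigTIFF (libtiff 4.x) uses 64-bit offsets. Below is a minimal C++ sketch, assuming libtiff 4.x and an illustrative file name, of writing a 64000 x 64000 8-bit grayscale image one scanline at a time, so the full image never has to fit in the encoder's memory:

        // Sketch: stream an 8-bit grayscale image to BigTIFF with libtiff 4.x.
        // "w8" requests the BigTIFF container, which is not subject to classic
        // TIFF's 4 GB limit. Error handling kept minimal for brevity.
        #include <tiffio.h>
        #include <vector>
        #include <cstdint>

        int main() {
            const uint32_t width = 64000, height = 64000;
            TIFF* tif = TIFFOpen("test_big.tif", "w8"); // "w8" = BigTIFF
            if (!tif) return 1;
            TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
            TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height);
            TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
            TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);
            TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISBLACK);
            TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
            TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, 1);
            std::vector<uint8_t> row(width);
            for (uint32_t y = 0; y < height; ++y) {
                // fill `row` with this scanline's pixels here; only one row
                // is ever held in memory at a time
                if (TIFFWriteScanline(tif, row.data(), y, 0) < 0) break;
            }
            TIFFClose(tif);
            return 0;
        }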

    Read the article

  • To be effective on your home projects, is it better to use the same technologies used at work?

    - by systempuntoout
    To be more productive and effective, is it better to start developing a home project using the same technologies you use at work? I'm not talking about a simple hello-world web page, but a home project with all the bells and whistles that one day, maybe, you could sell on the internet. This dilemma is often the subject of flames between me and a friend. He thinks that if you want to make a great home-made project, you need to use the same technologies you use daily at work, staying in the same scope too; for example, a C++ computer game programmer should develop a home-made C++ game. I'm pretty sure that developing with the same technologies used at work can be more productive at the beginning, but it is surely less exciting and stimulating than working with languages/IDEs/libraries outside of your daily job. What's your opinion on that?

    Read the article

  • Add Windows 7’s AeroSnap Feature to Vista and XP

    - by Asian Angel
    Are you using Windows Vista or XP and want that Windows 7 AeroSnap goodness on your own system? Then join us as we look at AeroSnap for Windows Vista and XP. Note: requires .NET Framework 2.0 or higher (link provided at the bottom of the article).

    Setup

    What exactly does AeroSnap do, you might ask? Here is a quote directly from the website: "AeroSnap is a simple but powerful application that allows you to resize, arrange or maximize your desktop windows with just drag'n'drop. Simply drag a window to a side of your desktop to snap it or drag it to the top to maximize. When you drag it back to the last position, the last window size will be restored."

    As soon as you have finished installing AeroSnap and started it for the first time, the only item that will be visible is the System Tray icon. Before going any further, you should take a moment to view, and make any desired adjustments in, the Options. Note: AeroSnap works with multiple monitors.

    You may want to have AeroSnap start with Windows each time, but the really nice setting to enable here is the Snap Preview. If you are using AeroSnap on Vista and have Aero enabled, this will really be nice. The second portion may be of interest for those who would like to enable the keyboard shortcut function. One point worth noting about this screen is that the highest number of pixels from the screen's edge that you can set AeroSnap for is 20 pixels.

    AeroSnap in Action

    AeroSnap is extremely easy to use: just grab the top of an app window and drag it to the left, right, or top of your screen. Since we installed this on Windows Vista, we made certain to enable the Snap Preview in the Options. We started off with dragging our Firefox 3.7 window towards the left; once we got close to the edge of the screen, the left half of the screen temporarily "shaded over". Note: the Snap Preview displays on the left and right movements, but not the top movement.

    Releasing Firefox snapped it right into the "shaded over" part of the screen. The great thing about AeroSnap is that it is really easy to return the app window to its former size: all that you have to do is simply click on and grab the top portion of the app window.

    Moving Firefox towards the top of our screen, it quickly snaps into filling the screen. One thing that we did notice is that the window did not "Maximize" as per the function of the button in the upper right corner. Dragging towards the right side now, and snap! Tucked in all nice and neat. You can minimize the app windows to the Taskbar and they will return to their previous "snap area" when "maximized" again.

    Conclusion

    If you have been wanting to add Windows 7's AeroSnap goodness to your Vista and XP systems, then you should definitely give this app a try. AeroSnap is very easy to set up and operate.

    Links: Download AeroSnap for Windows Vista & XP | Download the .NET Framework

    Read the article

  • How to draw a continuous line, just like in Paint [on hold]

    - by hussain shah
    Hi sir, I want to draw points. The following code works, but there is a problem: when I drag the mouse slowly it draws well, but if I move the cursor fast the points do not form a continuous line. Please, what is the solution?

        #include <iostream>
        #include <GL/glut.h>
        #include <GL/glu.h>
        #include <stdlib.h>

        void first()
        {
            glPushMatrix();
            glTranslatef(1, 01, 01);
            glScalef(1, 1, 1);
            glColor3f(0, 1, 0);
            glBegin(GL_QUADS);
            glVertex2f(0.8, 0.6);
            glVertex2f(0.6, 0.6);
            glVertex2f(0.6, 0.8);
            glVertex2f(0.8, 0.8);
            glEnd();
            glPopMatrix();
            glFlush();
        }

        void display(void)
        {
            glClear(GL_COLOR_BUFFER_BIT); // clear the color of each pixel of a frame
            glClearColor(0, 0, 0, 0);     // screen color
            //glFlush();
        }

        void drag(int x, int y)
        {
            y = 500 - y;
            //x = 500 - x;
            glPointSize(5);
            glColor3f(1.0, 1.0, 1.0);
            glBegin(GL_POINTS);
            glVertex2f(x, y + 2);
            glEnd();
            glutSwapBuffers();
            glFlush();
        }

        void reshape(int w, int h) {}

        void init(void)
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glClearColor(0, 0, 0, 0);
            glViewport(0, 0, 500, 500);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(0.0, 500.0, 0.0, 500.0, 1.0, -1.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }

        void mouse_button(int button, int state, int x, int y)
        {
            if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
            {
                drag(x, y);
                first();
            }
            //else if (button == GLUT_MIDDLE_BUTTON && state == GLUT_DOWN)
            //{
            //
            //}
            else if (button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
            {
                exit(0);
            }
        }

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);                     // initialize the program
            glutInitDisplayMode(GLUT_SINGLE);          // set up a basic display buffer (only singular for now)
            glutInitWindowSize(500, 500);              // set the width and height of the window
            glutInitWindowPosition(100, 100);          // set the position of the window
            glutCreateWindow("A basic OpenGL Window"); // set the caption for the window
            glutMotionFunc(drag);
            //glutMouseFunc(mouse_button);
            init();
            glutDisplayFunc(display); // call the display function to draw our world
            glutMainLoop();           // start the OpenGL loop cycle
            return 0;
        }
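
    A common fix for the gaps, sketched below under the assumption that the motion events simply arrive too far apart when the mouse moves quickly (this is illustrative, not a tested drop-in patch): remember the previous drag position and draw a connected segment between it and the current one, so a fast mouse movement still produces an unbroken stroke.

        // Gap-free freehand drawing: keep the last drag position and connect it
        // to the current one with a line segment. Register drag() with
        // glutMotionFunc() and endDrag() with glutMouseFunc().
        static int lastX = -1, lastY = -1;

        void drag(int x, int y) {
            y = 500 - y;                  // flip GLUT's top-left origin
            if (lastX >= 0) {             // we have a previous point: connect
                glLineWidth(5.0f);        // match the 5-pixel point size
                glColor3f(1.0f, 1.0f, 1.0f);
                glBegin(GL_LINES);
                glVertex2f((float)lastX, (float)lastY);
                glVertex2f((float)x, (float)y);
                glEnd();
                glFlush();
            }
            lastX = x;
            lastY = y;
        }

        void endDrag(int button, int state, int x, int y) {
            if (state == GLUT_UP)
                lastX = lastY = -1;       // break the stroke on release
        }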

    Read the article

  • Customizing UPK outputs (Part 1)

    - by [email protected]
    If you are familiar with Oracle's User Productivity Kit, you are aware that UPK is a great product for rapidly developing application training. Did you know that you can also customize the UPK outputs to incorporate your company's logo, colors, and preferred styles? There are several areas that support customization:

    • Logo - Within the developer, you can change the logo for all outputs at one time.
    • Player - The player output uses a style sheet that can be updated to change colors, graphics and other visual branding.
    • Documentation - The print documentation uses a Word-based template that can be modified to match your corporate standards.

    I'll discuss the first one today, and we'll cover the others in subsequent blogs.

    Before you begin:
    • If you are working in a multi-user environment, ensure that you have "Modify" permissions for the Styles directory under the Publishing folder.
    • Make a copy of the current styles. This recommendation is for backup purposes; if something goes wrong, you will have a way to recover.
    • Consider creating your own category by creating a new folder under the Styles directory, and then copying the styles into your new folder. When you upgrade to future versions, the system will overwrite the standard styles with any new feature additions and updates that have been made. With your own category, all of your customizations will remain intact.

    To update the logos in all outputs:
    1. From the Tools menu, choose Customize Logo.
    2. Select the category if necessary.
    3. Browse to select your logo. You can use any size logo, in any graphic format (*.bmp, *.gif, *.jpeg, *.jpg, *.png, or *.tif). The system will make a copy of your logo and add it to each of the publishing styles.
    4. Choose OK, and the update process begins. It may take a few minutes.

    Helpful hints:
    • The logo you select is used "as is" - no resizing or cropping occurs during this process.
    • The Customize Logo process automates replacing all the logo graphics for online deployment (small_logo.gif and large_logo.gif) and the headers in the documentation outputs. You can manually replace these graphics on an individual style basis if you prefer.
    • The recommended logo size is 230 pixels wide x 44 pixels high. Prior to updating the logos, the system will display the size of the selected logo. If you use a logo that is much larger than the recommended size, the heading area will resize to fit the new logo, but that will impact the space available for your training material.
    • If you are using a multi-user environment, the system will check out the publishing styles to you for the logo updates. After you review the styles, remember to check them in so the rest of your team can access the new changes.

    I'd be interested in hearing (or seeing) how you brand your UPK. Feel free to share in the comments!

    --Maria Cozzolino, Manager of Requirements & UI for UPK Product Development

    PS. For those of you who want to customize the player and documentation NOW, check out the detailed instructions in the Publishing Content chapter of the Content Development Guide.

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas:

    • Render the object completely in white, then count all the colors together using a two-pass shader: first a vertical line is rendered, and for each fragment the shader computes the sum over the whole row; then a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient.
    • Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures, until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels in their corresponding region. Is this even accurate enough?
    • Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy, and whether it is even possible with non-power-of-two textures.

    The problem with these approaches is that the pipeline gets stalled, which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal.

    Using the EXT_occlusion_query_boolean extension: Apple introduced EXT_occlusion_query_boolean in iOS 5.0 for iPad 2.

    "4.1.6 Occlusion Queries. Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases."

    The first sentence hints at a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it, so that there may be a hidden API to get access to the pixel count?

    Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something similar is available in OpenGL ES). However, this would also result in a pipeline stall.
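
    For reference, here is how the boolean query is used; a minimal C-style sketch built from the extension's documented entry points (the draw call is a placeholder). It also makes the limitation concrete: the result is a yes/no flag, never a sample count.

        // Boolean occlusion query per EXT_occlusion_query_boolean (iOS 5+).
        // drawOccludedObject() stands in for the caller's draw code.
        GLuint query;
        glGenQueriesEXT(1, &query);

        glBeginQueryEXT(GL_ANY_SAMPLES_PASSED_EXT, query);
        drawOccludedObject();
        glEndQueryEXT(GL_ANY_SAMPLES_PASSED_EXT);

        // Poll later (e.g. next frame) so the pipeline is not stalled.
        GLuint available = 0;
        glGetQueryObjectuivEXT(query, GL_QUERY_RESULT_AVAILABLE_EXT, &available);
        if (available) {
            GLuint anySamplesPassed = 0;
            glGetQueryObjectuivEXT(query, GL_QUERY_RESULT_EXT, &anySamplesPassed);
            // anySamplesPassed is only GL_TRUE or GL_FALSE -- the extension
            // exposes no sample *count*, which is exactly the limitation
            // the question runs into.
        }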

    Read the article

  • Bounding Box Collision Glitching Problem (Pygame)

    - by Ericson Willians
    So far the "Bounding Box" method is the only one that I know. It's efficient enough to deal with simple games. Nevertheless, the game I'm developing is not that simple anymore, and for that reason I've made a simplified example of the problem. (It's worth noticing that I don't have rotating sprites in my game or anything like that. After showing the code, I'll explain better.) Here's the whole code:

        from pygame import *

        DONE = False
        screen = display.set_mode((1024, 768))

        class Thing():
            def __init__(self, x, y, w, h, s, c):
                self.x = x
                self.y = y
                self.w = w
                self.h = h
                self.s = s
                self.sur = Surface((64, 48))
                draw.rect(self.sur, c, (self.x, self.y, w, h), 1)
                self.sur.fill(c)

            def draw(self):
                screen.blit(self.sur, (self.x, self.y))

            def move(self, x):
                if key.get_pressed()[K_w] or key.get_pressed()[K_UP]:
                    if x == 1:
                        self.y -= self.s
                    else:
                        self.y += self.s
                if key.get_pressed()[K_s] or key.get_pressed()[K_DOWN]:
                    if x == 1:
                        self.y += self.s
                    else:
                        self.y -= self.s
                if key.get_pressed()[K_a] or key.get_pressed()[K_LEFT]:
                    if x == 1:
                        self.x -= self.s
                    else:
                        self.x += self.s
                if key.get_pressed()[K_d] or key.get_pressed()[K_RIGHT]:
                    if x == 1:
                        self.x += self.s
                    else:
                        self.x -= self.s

            def warp(self):
                if self.y < -48:
                    self.y = 768
                if self.y > 768 + 48:
                    self.y = 0
                if self.x < -64:
                    self.x = 1024 + 64
                if self.x > 1024 + 64:
                    self.x = -64

        r1 = Thing(0, 0, 64, 48, 1, (0, 255, 0))
        r2 = Thing(6 * 64, 6 * 48, 64, 48, 1, (255, 0, 0))

        while not DONE:
            screen.fill((0, 0, 0))
            r2.draw()
            r1.draw()
            # If not intersecting, then move; else, move in the opposite direction.
            if not ((((r1.x + r1.w) > (r2.x - r1.s)) and (r1.x < ((r2.x + r2.w) + r1.s))) and
                    (((r1.y + r1.h) > (r2.y - r1.s)) and (r1.y < ((r2.y + r2.h) + r1.s)))):
                r1.move(1)
            else:
                r1.move(0)
            r1.warp()
            if key.get_pressed()[K_ESCAPE]:
                DONE = True
            for ev in event.get():
                if ev.type == QUIT:
                    DONE = True
            display.update()

        quit()

    The problem: in my actual game the grid is fixed and each tile is 64 by 48 pixels. I know how to deal with collision perfectly if I move by that size. Nevertheless, obviously, the player moves really fast. In the example, the collision is detected pretty well (just as in many examples throughout the internet). The problem is that if I let the player move only WHEN NOT intersecting, then, when it touches the obstacle, it does not move anymore. Given that problem, I began switching the directions, but then, when it touches the wall and I press the opposite key, it "glitches through". My actual game has many walls, and the player will touch them many times, and I can't afford letting the player go through them.

    The code problem illustrated: when the player goes towards the wall, it is fine. When the player goes towards the wall and presses the opposite direction, it glitches through. Here is the logic I designed before implementing it: I don't know any other method, and I really just want to have walls fixed in a grid, but move by 1, 2 or 3 pixels (slowly) and have perfect collision without glitching possibilities. What do you suggest?

    Read the article

  • Anticipating JavaOne 2012 – Number 17!

    - by Janice J. Heiss
    As I write this, JavaOne 2012 (September 30 - October 4 in San Francisco, CA) is just over a week away -- the seventeenth JavaOne! I'll resist the impulse to travel in memory back to the early days of JavaOne. But I will say that JavaOne is a little like your birthday or New Year's in that it invites reflection, evaluation, and comparison. It's a time when we take the temperature of Java and assess the world of information technology generally. At JavaOne, insight and information flow amongst Java developers like at no other time of the year.

    This year, the status of Java seems more secure in the eyes of most Java developers, who agree that Oracle is doing an acceptable job of stewarding the platform, and while the story is still in progress, few doubt that Oracle is engaging strongly with the Java community and wants to see Java thrive. From my perspective, the biggest news about Java is the growth of some 250 alternative languages for the JVM -- from Groovy to Jython to JRuby to Scala to Clojure and on and on -- offering both new opportunities and challenges. The JVM has proven itself to be unusually flexible, resulting in an embarrassment of riches in which, more and more, developers are challenged to find ways to optimally mix together several different languages on projects.

    To the matter at hand -- I can say with confidence that Oracle is working hard to make each JavaOne better than the last: more interesting, more stimulating, more networking, and more fun! A great deal of thought and attention is being devoted to the task. To free up time for the 475 technical session/Birds-of-a-Feather/Hands-on-Lab slots, the Java Strategy, Partner, and Technical keynotes will be held on Sunday, September 30, beginning at 4:00 p.m.

    Let's not forget Java Embedded@JavaOne, which is being held Wednesday, Oct. 3rd and Thursday, Oct. 4th at the Hotel Nikko. It will provide business decision makers, technical leaders, and ecosystem partners important information about Java Embedded technologies and new business opportunities.

    This year's JavaOne theme is "Make the Future Java". So come to JavaOne and make your future better by:
    • Choosing from 475 sessions given by the experts to improve your working knowledge and coding expertise
    • Networking with fellow developers in both casual and formal settings
    • Enjoying world-class entertainment
    • Delighting in one of the world's great cities (my home town)

    Hope to see you there! Originally published on blogs.oracle.com/javaone.

    Read the article

  • Java Spotlight Episode 85: Migrating from Spring to JavaEE 6

    - by Roger Brinkley
    Interview with Bert Ertman and Paul Bakker on migrating from Spring to Java EE 6. Joining us this week on the Java All Star Developer Panel is Arun Gupta, Java EE Guy. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes, you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News: Transactional Interceptors in Java EE 7; Larry Ellison and Mark Hurd on Oracle Cloud; Duke's Choice Award submissions open until June 15; Registration for the 2012 JVM Language Summit now open.

    Events: June 11-14, Cloud Computing Expo, New York City; June 12, Boulder JUG; June 13, Denver JUG; June 13, Eclipse Juno DemoCamp, Redwood Shores; June 13, JUG Münster; June 14, Java Klassentreffen, Vienna, Austria; June 18-20, QCon, New York City; June 20, 1871, Chicago; June 26-28, Jazoon, Zurich, Switzerland; July 5, Java Forum, Stuttgart, Germany; July 30 - August 1, JVM Language Summit, Santa Clara.

    Feature Interview: Bert Ertman is a Fellow at Luminis in the Netherlands. Next to his customer assignments, he is responsible for stimulating innovation, knowledge sharing, coaching, technology choices and presales activities. Besides his day job, he is a Java User Group leader for NLJUG, the Dutch Java User Group, and a frequent speaker on Enterprise Java and Software Architecture related topics at international conferences (e.g. Devoxx, JavaOne, etc.), as well as an author and member of the editorial advisory board for the Dutch software development magazine Java Magazine. In 2008, Bert was honored with the coveted title of Java Champion by an international panel of Java leaders and luminaries. Paul Bakker is a senior software engineer at Luminis Technologies, where he works on the Amdatu platform, an open source, service-oriented application platform for web applications. He has a background as a trainer, teaching various Java related subjects. Paul is also a regular speaker at conferences and an author for the Dutch Java Magazine.

    Tutorials:
    Part 1: http://howtojboss.com/2012/04/17/article-series-migrating-spring-applications-to-java-ee-6-part-1/
    Part 2: http://howtojboss.com/2012/04/17/article-series-migrating-spring-applications-to-java-ee-6-part-2/
    Part 3: http://howtojboss.com/2012/05/10/article-series-migrating-from-spring-to-java-ee-6-part-3/

    Mail Bag

    What's Cool: Sang Shin in EE team; @larryellison; JavaOne content selection is almost complete - notifications coming soon.

    Read the article

  • Expanding Influence and Community

    - by Johnm
    When I was just nine years of age, my father introduced me to the computer. It was a Radio Shack TRS-80 Color Computer (aka the CoCo). He shared with me the nuances of writing BASIC, and it wasn't long before I was in the back seat of the school bus scribbling, on a pad of paper, the code I would later type. My father demonstrated that while my friends were playing their Atari 2600 consoles, I had the unique opportunity to create my own games on the CoCo, one of which provided a great friend of mine hours and hours of hilarity and entertainment.

    It wasn't long before my father was inviting me to tag along as he drove to the local high school where a gathering of fellow CoCo enthusiasts assembled. In these meetings, all in attendance would chat about their latest challenges and solutions. They would swap the labors of their sleepless nights, eagerly gazing into their green and black screens. Friendships were built and business partners were developed. While these experiences at the time, in my pre-teen mind, were chalked up to simply sharing time with my father, they had a tremendous impact on me later in life.

    This past weekend I attended the Louisville SQL Saturday (#45). It was great to see that there were some who brought along their children. It is encouraging to see fresh faces in the crowd at our monthly IndyPASS meetings. Each time I see the youthful eyes peering from the audience while the finer details of SQL Server are presented, I cannot help but be transported back to the experiences that I enjoyed in those CoCo days. It is exciting to think of how these experiences are impacting their lives and stimulating their minds. Some of these children have actually approached me asking questions about what was presented, or simply bragging about their latest discovery in programming.

    One of the topics that arose in the "Women in Technology" session in Louisville, which was masterfully facilitated by Kathi Kellenberger, was exploring how we could ignite the spark of interest in databases among the youth. It was awesome to hear that there were some who volunteer their time to share their experiences with students. It made me wonder what user groups could achieve if we were to consider expanding our influence and community beyond our immediate peers to include those who are simply enjoying their time with their father or mother.

    Read the article

  • Update iPhone UIProgressView during NSURLConnection download.

    - by Scott Pendleton
    I am using this code:

        NSURLConnection *oConnection = [[NSURLConnection alloc] initWithRequest:oRequest delegate:self];

    to download a file, and I want to update a progress bar on a subview that I load. For that, I use this code:

        - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
        {
            [oReceivedData appendData:data];
            float n = oReceivedData.length;
            float d = self.iTotalSize;
            NSNumber *oNum = [NSNumber numberWithFloat:n/d];
            self.oDPVC.oProgress.progress = [oNum floatValue];
        }

    The subview is oDPVC, and the progress bar on it is oProgress. Setting the progress property does not update the control. From what I have read on the Web, lots of people want to do this, and yet there is not a complete, reliable sample. Also, there is much contradictory advice. Some people think that you don't need a separate thread. Some say you need a background thread for the progress update. Some say no, do other things in the background and update the progress on the main thread. I've tried all the advice, and none of it works for me.

    One more thing, maybe this is a clue. I use this code to load the subview during applicationDidFinishLaunching:

        self.oDPVC = [[DownloadProgressViewController alloc] initWithNibName:@"DownloadProgressViewController" bundle:nil];
        [window addSubview:self.oDPVC.view];

    In the XIB file (which I have examined in both Interface Builder and in a text editor) the progress bar is 280 pixels wide. But when the view opens, it has somehow been adjusted to maybe half that width. Also, the background image of the view is default.png. Instead of appearing right on top of the default image, it is shoved up about 10 pixels, leaving a white bar across the bottom of the screen. Maybe that's a separate issue, maybe not.

    Read the article

  • iPhone SDK: UIImageView - Collapsing animation

    - by chris.o.
    Hi all, I'm trying to animate a horizontal bar with a color gradient across the screen. For simplicity, I chose to make a png of the fully extended bar, assign it to a UIImageView, and (attempt to) animate resizing it using a UIView animation. The problem is that in the "closed" state of the image, the portion of the image showing is not the part that I want. The image is arranged so that from left to right, a white-to-red color gradient occurs. The right side (about 20 pixels) is solid red and is the part I want to show when the bar is "collapsed". I'm trying to extend the image out by about 100 pixels to its full width. I referenced the "Buy Now" button example for my code, as it seemed relevant: http://stackoverflow.com/questions/1669804/uibutton-appstore-buy-button-animation

    My code:

        [UIView beginAnimations:@"barAnimation" context:nil];
        [UIView setAnimationDuration:0.6];
        // note: this takes bounds (local coordinate space) but later assigns
        // it to frame (superview coordinate space), which may not be intended
        CGRect barFrame = bookBar.bounds;
        if (fExtendBar) {
            barFrame.origin.x -= 100;
            barFrame.size.width += 100;
        } else {
            barFrame.origin.x += 100;
            barFrame.size.width -= 100;
        }
        bookBar.frame = barFrame;
        [UIView commitAnimations];

    I feel like this should be possible, maybe by setting an "offset of the image" to display in the UIImageView, but I can't seem to make it work. Also, I've noticed that the behavior is kind of consistent with what happens in Interface Builder when you resize an image. Then again, I would think there would be a way to override this behavior programmatically. Any suggestions (including other approaches) are welcome. Thanks, chris.o.

    Read the article

  • Scrolling a Canvas smoothly in Android

    - by prepbgg
    I'm new to Android. I am drawing bitmaps, lines and shapes onto a Canvas inside the onDraw(Canvas canvas) method of my view. I am looking for help on how to implement smooth scrolling in response to a drag by the user. I have searched but not found any tutorials to help me with this.

    The reference for Canvas seems to say that if a Canvas is constructed from a Bitmap (called bmpBuffer, say), then anything drawn on the Canvas is also drawn on bmpBuffer. Would it be possible to use bmpBuffer to implement a scroll -- perhaps copy it back to the Canvas shifted by a few pixels at a time? But if I use Canvas.drawBitmap to draw bmpBuffer back to the Canvas shifted by a few pixels, won't bmpBuffer be corrupted? Perhaps, therefore, I should copy bmpBuffer to bmpBuffer2, then draw bmpBuffer2 back to the Canvas.

    A more straightforward approach would be to draw the lines, shapes, etc. straight into a buffer Bitmap, then draw that buffer (with a shift) onto the Canvas, but so far as I can see, the various methods drawLine(), drawShape() and so on are not available for drawing to a Bitmap, only to a Canvas. Could I have two Canvases, one of which would be constructed from the buffer bitmap and used simply for plotting the lines, shapes, etc., with the buffer bitmap then drawn onto the other Canvas for display in the View?

    I should welcome any advice! Answers to similar questions here (and on other websites) refer to "blitting". I understand the concept but can't find anything about "blit" or "bitblt" in the Android documentation. Are Canvas.drawBitmap() and Bitmap.copy() Android's equivalents?

    Read the article

  • Force a UIView to redraw immediately, instead of during next run loop

    - by Justin Kent
    I've created a UIImagePicker / camera view, with a toolbar and a custom button for taking a snapshot. I can't really change to using the default way because of the custom button, and I'm drawing on top of the view. When you hit the button, I want to take a screenshot using UIGetScreenImage(); however, the toolbar is showing up in the image, even if I hide it first:

        // hide the toolbar
        self.toolbar.hidden = YES;
        // capture the screen pixels
        CGImageRef screenCap = UIGetScreenImage();

    I'm pretty sure this is because even though the toolbar is hidden, it only gets redrawn once the function returns and we enter the next run loop -- after UIGetScreenImage is called. I tried making the following addition, but it didn't help:

        // hide the toolbar
        self.toolbar.hidden = YES;
        [self.toolbar drawRect:CGRectMake(0, 0, 320, 52)];
        // capture the screen pixels
        CGImageRef screenCap = UIGetScreenImage();

    I also tried using setNeedsDisplay, but that doesn't work either, because once again the draw happens after the current function returns. Any suggestions? Thanks!

    Read the article

  • Converting an iPhone Core Surface RGB frame into an ffmpeg AVFrame

    - by Sridhar
    Hello, I am trying to convert a Core Surface RGB frame buffer (iPhone) to an ffmpeg AVFrame to encode into a movie file. But I am not getting the correct video output (the video shows dazzling colors, not the correct picture). I guess there is something wrong with converting from the Core Surface frame buffer into the AVFrame. Here is my code:

        Surface *surface = [[Surface alloc] initWithCoreSurfaceBuffer:coreSurfaceBuffer];
        [surface lock];
        unsigned int height = surface.height;
        unsigned int width = surface.width;
        unsigned int alignmentedBytesPerRow = (width * 4);
        if (!readblePixels) {
            readblePixels = CGBitmapAllocateData(alignmentedBytesPerRow * height);
            NSLog(@"alloced readablepixels");
        }
        unsigned int bytesPerRow = surface.bytesPerRow;
        void *pixels = surface.baseAddress;

        for (unsigned int j = 0; j < height; j++) {
            memcpy(readblePixels + alignmentedBytesPerRow * j, pixels + bytesPerRow * j, bytesPerRow);
        }

        pFrameRGB->data[0] = readblePixels; // I guess here is what I am doing wrong.
        pFrameRGB->data[1] = NULL;
        pFrameRGB->data[2] = NULL;
        pFrameRGB->data[3] = NULL;
        // note: linesize is measured in bytes per row, so for a 32-bit RGB
        // buffer this should probably be width * 4, not the width in pixels
        pFrameRGB->linesize[0] = pCodecCtx->width;
        pFrameRGB->linesize[1] = 0;
        pFrameRGB->linesize[2] = 0;
        pFrameRGB->linesize[3] = 0;

        sws_scale(img_convert_ctx, pFrameRGB->data, pFrameRGB->linesize, 0,
                  pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);

    Please help me out. Thanks, Raghu
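
    For what it's worth, AVFrame's linesize[] is measured in bytes per row, so a plausible fix, assuming the surface holds 32-bit BGRA pixels and a libswscale of that era, is to give sws_scale a source context and stride that match the actual buffer layout. A sketch, reusing the poster's variable names for illustration only:

        // Assumed fix: source stride in BYTES (width * 4 for 32-bit pixels)
        // and a swscale context whose source format matches the surface.
        struct SwsContext *ctx = sws_getContext(
            width, height, PIX_FMT_BGRA,    // source: 32-bit surface pixels
            width, height, PIX_FMT_YUV420P, // destination: encoder format
            SWS_BILINEAR, NULL, NULL, NULL);

        uint8_t *srcData[4]     = { (uint8_t *)readblePixels, NULL, NULL, NULL };
        int      srcLinesize[4] = { (int)(width * 4), 0, 0, 0 }; // bytes, not pixels!

        sws_scale(ctx, srcData, srcLinesize, 0, height,
                  pFrameYUV->data, pFrameYUV->linesize);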

    Read the article

  • Fastest image iteration in Python

    - by Greg
    I am creating a simple green screen app with Python 2.7.4, but am getting quite slow results. I am currently using PIL 1.1.7 to load and iterate the images, and saw huge speed-ups changing from the old getpixel() to the newer load() and pixel access object indexing. However, the following loop still takes around 2.5 seconds to run for an image of around 720p resolution:

        def colorclose(Cb_p, Cr_p, Cb_key, Cr_key, tola, tolb):
            temp = math.sqrt((Cb_key - Cb_p)**2 + (Cr_key - Cr_p)**2)
            if temp < tola:
                return 0.0
            else:
                if temp < tolb:
                    return (temp - tola) / (tolb - tola)
                else:
                    return 1.0

        ....

        for x in range(width):
            for y in range(height):
                Y, cb, cr = fg_cbcr_list[x, y]
                mask = colorclose(cb, cr, cb_key, cr_key, tola, tolb)
                mask = 1 - mask
                bgr, bgg, bgb = bg_list[x, y]
                fgr, fgg, fgb = fg_list[x, y]
                pixels[x, y] = (
                    (int)(fgr - mask*key_color[0] + mask*bgr),
                    (int)(fgg - mask*key_color[1] + mask*bgg),
                    (int)(fgb - mask*key_color[2] + mask*bgb))

    Am I doing anything hugely inefficient here which makes it run so slow? I have seen similar, simpler examples where the loop is replaced by a boolean matrix, for instance, but for this case I can't see a way to replace the loop. The pixels[x, y] assignment seems to take the most amount of time, but, not knowing Python very well, I am unsure of a more efficient way to do this. Any help would be appreciated.

    Read the article

  • Editing 8bpp indexed Bitmaps

    - by Pedro Sá
    Hi, I'm trying to edit the pixels of an 8bpp bitmap. Since this PixelFormat is indexed, I'm aware that it uses a Color Table to map the pixel values. Even though I can edit the bitmap by converting it to 24bpp, 8bpp editing is much faster (13ms vs 3ms). But changing each value when accessing the 8bpp bitmap results in some random RGB colors, even though the PixelFormat remains 8bpp. I'm currently developing in C#, and the algorithm is as follows:

    1. (C#) Load the original bitmap at 8bpp.
    2. (C#) Create an empty temp bitmap at 8bpp with the same size as the original.
    3. (C#) LockBits on both bitmaps and, using P/Invoke, call a C++ method where I pass the Scan0 of each BitmapData object. (I used a C++ method as it offers better performance when iterating through the bitmap's pixels.)
    4. (C++) Create an int[256] palette according to some parameters and edit the temp bitmap's bytes by passing the original's pixel values through the palette.
    5. (C#) UnlockBits.

    My question is: how can I edit the pixel values without getting the strange RGB colors, or, even better, edit the 8bpp bitmap's Color Table? Regards, Pedro
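
    For what it's worth, in an 8bpp indexed bitmap each byte is an index into the Color Table, not a color value, so step 4 has to stay in index space (or the Color Table itself must be rewritten, via bmp.Palette on the C# side); writing RGB-like bytes is what produces the random colors. A sketch of the C++ pixel pass described in step 4, with all names illustrative rather than the poster's actual function:

        // Map every source byte through a 256-entry lookup table of palette
        // *indices* into the destination 8bpp buffer. srcScan0/dstScan0 are
        // the Scan0 pointers handed over from C# via P/Invoke.
        extern "C" __declspec(dllexport)
        void MapPixels(const unsigned char* srcScan0, unsigned char* dstScan0,
                       int width, int height, int stride,
                       const unsigned char* lut /* 256 palette indices */)
        {
            for (int y = 0; y < height; ++y) {
                const unsigned char* src = srcScan0 + y * stride;
                unsigned char* dst = dstScan0 + y * stride;
                for (int x = 0; x < width; ++x)
                    dst[x] = lut[src[x]]; // stay in index space, not RGB space
            }
        }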

    Read the article

  • How do you word wrap a RichTextField for BlackBerry?

    - by Kai
    I've been trying to modify a RichTextField to display correctly in its half of the horizontal field. The goal is this:

        ---------------------------
        | address is | ***********|
        | very long  | ** IMAGE **|
        | state, zip | ***********|
        ---------------------------

    where address is a single string separate from the city and zip. I am modifying the address field like this:

        RichTextField addrField = new RichTextField(address) {
            public int getPreferredWidth() {
                return 200;
            }

            protected void layout(int maxWidth, int maxHeight) {
                super.layout(getPreferredWidth(), maxHeight);
                setExtent(getPreferredWidth(), getHeight());
            }
        };

    The results look like this:

        -----------------------------
        | address is ve| ***********|
        | state, zip   | ** IMAGE **|
        |              | ***********|
        -----------------------------

    where clearly the address is just going under the image. Both horizontal fields are a static 200 pixels wide. It's not like the system wouldn't know where to wrap the address. However, I have heard it is not easy to do this and is not done automatically. I have had no success finding a direct answer online. I have found people saying you need to do it in a custom layout manager; some refer to the RichTextField API, which is of no use. But nobody actually mentions how to do it.

    I understand that I may need to read character by character and set where the line breaks should happen. What I don't know is how exactly to do any of this. You can't just count characters and assume each is worth 5 pixels, and you shouldn't have to. Surely there must be some way to achieve this in a way that makes sense. Any suggestions?

    Read the article

  • Clipped UITextField with UITextFieldAlignmentRight

    - by Typeoneerror
    Got a small problem with UITextField. I have a simple UITextField, and when I set the textAlignment property to "right", it gets clipped by 1-2 pixels. It looks shite, so I'm hoping someone has an idea of how to remedy this. I've tried setting the frame to integers to prevent them from being on .5 pixels.

        - (UITextField *)textControlForSetting:(NSDictionary *)settings
        {
            CGRect frame = CGRectIntegral(CGRectMake(100.0f, 0.0f, 170.0f, 44.0f));
            UITextField *textField = [[[UITextField alloc] initWithFrame:frame] autorelease];
            NSString *defaultValue = [settings objectForKey:kDefaultValueKey];
            NSString *currentValue = [prefs objectForKey:[settings objectForKey:kSettingKey]];
            textField.tag = settingsCounter;
            textField.delegate = self;
            textField.textAlignment = UITextAlignmentRight;
            textField.font = [UIFont fontWithName:@"HelveticaNeue" size:14.0f];
            textField.contentVerticalAlignment = UIControlContentVerticalAlignmentCenter;
            textField.placeholder = (currentValue != nil) ? currentValue : defaultValue;
            settingsCounter++;
            return textField;
        }

    Read the article

  • How to scale a sprite image without losing color key information?

    - by Michael P
    Hello everyone, I'm currently developing a simple application that displays a map and draws some markers on it. I'm developing for Windows Mobile, so I decided to use the DirectDraw and Imaging interfaces to make the application fast and pretty. The map moves when the user moves a finger on the touchscreen, so the whole map moving/scrolling animation has to be fast, but it is not.

    On every map update I have to draw a portion of the map, the control buttons, and the markers; the buttons and markers are preloaded on a DirectDraw surface as a mipmap. So the only thing I do is BitBlt from the mipmap to a back buffer, and from the back buffer to the primary surface (I can't use page flipping due to the windowed mode of my application). Previously I used a premultiplied-alpha surface with a 32-bit ARGB pixel format for the images mipmap; everything was looking good, but drawing the entire "scene" was horribly slow - I could forget about smooth map scrolling. Now I'm using a mipmap with the native (RGB565) pixel format and a fuchsia (0xFF00FF) color key, and drawing is much faster.

    My mipmap surface is generated at program load: images are loaded from files, scaled (with filtering) and drawn on the mipmap. The problem is that the image scaling process blends pixel colors, and those pixels which are on the border of a sprite region are blended with the surrounding fuchsia pixels, resulting in semi-fuchsia colors that are not treated as the color key. When I do blitting with the color key option, sprites have small fuchsia-like borders, and it looks really bad.

    How can I solve this problem? I can use alpha blitting, but it is too slow - even in ARGB 1555 format.
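
    One way out, sketched here as general C++ rather than DirectDraw-specific code: make the scaler key-aware, so key-colored texels never participate in filtering. Each destination pixel averages only the non-key source pixels under its footprint, and stays exactly key-colored when the footprint is mostly transparent. The sketch assumes RGB565 and an integer shrink factor for brevity; a real resampler would interpolate fractional footprints.

        // Box-downscale RGB565 sprites, treating the fuchsia key as "no pixel".
        #include <cstdint>

        static const uint16_t KEY = 0xF81F; // 0xFF00FF packed into RGB565

        void downscaleWithKey(const uint16_t* src, int srcW, int srcH,
                              uint16_t* dst, int f /* shrink factor */)
        {
            int dstW = srcW / f, dstH = srcH / f;
            for (int dy = 0; dy < dstH; ++dy) {
                for (int dx = 0; dx < dstW; ++dx) {
                    unsigned r = 0, g = 0, b = 0, n = 0;
                    for (int sy = dy * f; sy < (dy + 1) * f; ++sy)
                        for (int sx = dx * f; sx < (dx + 1) * f; ++sx) {
                            uint16_t p = src[sy * srcW + sx];
                            if (p == KEY) continue;    // key = transparent
                            r += (p >> 11) & 31;
                            g += (p >> 5) & 63;
                            b += p & 31;
                            ++n;
                        }
                    if (n * 2 < (unsigned)(f * f))     // mostly transparent:
                        dst[dy * dstW + dx] = KEY;     // keep the exact key
                    else
                        dst[dy * dstW + dx] = (uint16_t)
                            (((r / n) << 11) | ((g / n) << 5) | (b / n));
                }
            }
        }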

    Read the article

  • DWM and painting unresponsive apps

    - by Doug Kavendek
    In Vista and later, if an app becomes unresponsive, the Desktop Window Manager is able to handle redrawing it when necessary (move a window over it, drag it around, etc.) because it has kept a pixel buffer for it. Windows also tries to detect when an app has become unresponsive after some timeout, and tries to make the best of the situation -- I believe it dims out the window, adds "Not Responding" to its title bar, and perhaps some other effects.

    Now, we have a skinned app that uses window regions and layered windows, and it doesn't play well with these effects. We've been developing on XP, but have noticed a strange effect when testing on Vista. At some points the app may spend a few moments on some calculation or callback, and if it passes the unresponsive threshold (I've read that it's a five-second timeout, but I cannot find a link), a strange graphical problem occurs: any pixels that would be 100% transparent due to the window regions turn black, which effectively makes the window rectangular again, with a black background. There seem to be other anomalies, with the original window's pixels being shifted a bit in some child dialogs.

    I am working on reducing such delays (ideally Windows will never need to step in like this), and trying to maintain responsiveness while the app is busy, but I'd still like to figure out what is causing it to render like that, as I can't guarantee I can eliminate all delays. Basically, I just would like to know what Windows is doing when this happens, and how I can make my app behave properly with it. Skinned apps still have to work on Vista and later, so I need to figure out what I'm doing that's non-standard.

    I don't even know exactly how to look for information on how Windows now handles unresponsive apps, as my searches only return people having issues with apps that are unresponsive, or very rudimentary explanations of what the DWM does with such apps. Heck, I'm not even 100% sure it's the DWM that's responsible, but it seems likely. Any potential leads?

    Photo of problem; screen shots won't capture the effect (note that the white dialog's buffer is shifted -- it is shifted exactly by the distance it has been offset from the main (blue) window).

    Read the article

  • How do I make the left and right gutters different colors with 960.gs?

    - by Andrew Arrow
    How do I make the left and right gutters different colors with 960.gs? When I try something simple like:

        <div style="background-color: green">
          <div class="container_16">
            <div class="grid_16">
              test
            </div>
          </div>
        </div>
        <div style="background-color: cyan">
          <div class="container_16">
            <div class="grid_16">
              test
            </div>
          </div>
        </div>

    the green and cyan colors are ignored. Seems like the "grid_16" class removes the color for some reason? My goal is being able to have different sections of the page in different colors all the way across the page, even past 960 pixels. So if someone makes their browser 1200px wide, the left and right sides have the right color and the rest of the grid system is all contained within the 960 pixels in the middle. I could add a background color to 'body' to do this for just one color, but I want multiple colors in the page, like different colored horizontal stripes. Thanks.

    Read the article

  • GLSL shader render to texture not saving alpha value

    - by quadelirus
    I am rendering to a texture using a GLSL shader and then sending that texture as input to a second shader. For the first texture, I am using the RGB channels to send color data to the second GLSL shader, but I want to use the alpha channel to send a floating point number that the second shader will use as part of its program. The problem is that when I read the texture in the second shader, the alpha value is always 1.0. I tested this in the following way: at the end of the first shader I did this:

        gl_FragColor = vec4(r, g, b, 0.1);

    and then in the second shader I read the value of the first texture using something along the lines of:

        vec4 f = texture2D(previous_tex, pos);
        if (f.a != 1.0) {
            gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
            return;
        }

    No pixels in my output are black, whereas if I change the above code to read:

        gl_FragColor = vec4(r, g, 0.1, 1.0); // notice I'm now sending 0.1 for blue

    and in the second shader:

        vec4 f = texture2D(previous_tex, pos);
        if (f.b != 1.0) {
            gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
            return;
        }

    all the appropriate pixels are black. This means that for some reason, when I set the alpha value to something other than 1.0 in the first shader and render to a texture, it is still seen as 1.0 by the second shader. Before I render to texture, I glDisable(GL_BLEND);

    It seems pretty clear to me that the problem has to do with OpenGL handling alpha values in some way that isn't obvious to me, since I can use the blue channel in the way I want, and I figured someone out there will instantly see the problem.
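
    One thing worth checking, offered as an assumption since the question doesn't show the render-to-texture setup: the first pass's color attachment must actually store an alpha channel. If the texture backing the FBO was created as GL_RGB, or alpha writes are masked off, every read will return exactly 1.0. A minimal C++ sketch of an RGBA render target (sizes illustrative; on OpenGL ES 2.0 the internal format would be GL_RGBA rather than GL_RGBA8):

        // Render-to-texture target that actually keeps alpha.
        const int width = 512, height = 512;   // illustrative size
        GLuint tex, fbo;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,          // RGBA, not GL_RGB
                     width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        // Also make sure nothing rewrites alpha behind the shader's back:
        glDisable(GL_BLEND);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);  // alpha writes on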

    Read the article
