Search Results

Search found 18161 results on 727 pages for 'cell array'.

Page 410/727

  • CPU/JVM/JBoss 7 slows down over time

    - by lukas
    I'm experiencing a performance slowdown on JBoss 7.1.1 Final. I wrote a simple program that demonstrates this behavior: I generate an array of 100,000 random integers and run bubble sort on it. @Model public class PerformanceTest { public void proceed() { long now = System.currentTimeMillis(); int[] arr = new int[100000]; for(int i = 0; i < arr.length; i++) { arr[i] = (int) (Math.random() * 200000); } long now2 = System.currentTimeMillis(); System.out.println((now2 - now) + "ms took to generate array"); now = System.currentTimeMillis(); bubbleSort(arr); now2 = System.currentTimeMillis(); System.out.println((now2 - now) + "ms took to bubblesort array"); } public void bubbleSort(int[] arr) { boolean swapped = true; int j = 0; int tmp; while (swapped) { swapped = false; j++; for (int i = 0; i < arr.length - j; i++) { if (arr[i] > arr[i + 1]) { tmp = arr[i]; arr[i] = arr[i + 1]; arr[i + 1] = tmp; swapped = true; } } } } } Just after I start the server, it takes approximately 22 seconds to run this code. After a few days of JBoss 7.1.1 running, it takes 330 seconds to run this code. In both cases, I launch the code when the CPU utilization is very low (say, 1%). Any ideas why? I run the server with the following arguments: -Xms1280m -Xmx2048m -XX:MaxPermSize=2048m -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Duser.timezone=UTC -Djboss.server.default.config=standalone-full.xml -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n I'm running it on Linux 2.6.32-279.11.1.el6.x86_64 with java version "1.7.0_07". It's within a J2EE application. I use CDI, so I have a button on a JSF page that calls the method "proceed" on the @RequestScoped component PerformanceTest. I deploy this as a separate war file, and even if I undeploy other applications, it doesn't change the performance. It's a virtual machine that is sharing CPUs with another machine, but that one doesn't consume anything. Here's yet another observation: when the server has just started and I run the bubble sort, it utilizes 100% of one processor core. It never switches to another core or drops utilization below 95%. However, after the server has been running for some time and I'm experiencing the performance problems, the method above still usually utilizes a CPU core at 100%, but I just found out from htop that the task is being switched to other cores very often. That is, at the beginning it's running on core #1, after say 2 seconds it's running on #5, then after say 2 seconds on #8, etc. Furthermore, the utilization is not kept at 100% on the core but sometimes drops to 80% or even lower. For the freshly started server, even if I simulate a load, the task never switches to another core.

    Read the article

  • App using MonoTouch Core Graphics mysteriously crashes

    - by Stephen Ashley
    My app launches with a view controller and a simple view consisting of a button and a subview. When the user touches the button, the subview is populated with scrollviews that display the column headers, row headers, and cells of a spreadsheet. To draw the cells, I use a CGBitmapContext to draw them, generate an image, and then put the image into the imageview contained in the scrollview that displays the cells. When I run the app on the iPad, it displays the cells just fine, and the scrollview lets the user scroll around in the spreadsheet without any problems. If the user touches the button a second time, the spreadsheet redraws and continues to work perfectly. If, however, the user touches the button a third time, the app crashes. There is no exception information displayed in the Application Output window. My first thought was that the successive button pushes were using up all the available memory, so I overrode the DidReceiveMemoryWarning method in the view controller and used a breakpoint to confirm that this method was not getting called. My next thought was that the CGBitmapContext was not getting released, and I looked for a MonoTouch equivalent of Objective-C's CGContextRelease() function. The closest I could find was the CGBitmapContext instance method Dispose(), which I called, without solving the problem. In order to free up as much memory as possible (in case I was somehow running out of memory without tripping a warning), I tried forcing garbage collection each time I finished using a CGBitmapContext. This made the problem worse. Now the program would crash moments after displaying the spreadsheet the first time. This caused me to wonder whether the Garbage Collector was somehow collecting something necessary to the continued display of graphics on the screen. I would be grateful for any suggestions on further avenues to investigate for the cause of these crashes. I have included the source code for the SpreadsheetView class. The relevant method is DrawSpreadsheet(), which is called when the button is touched. Thank you for your assistance on this matter. 
Stephen Ashley public class SpreadsheetView : UIView { public ISpreadsheetMessenger spreadsheetMessenger = null; public UIScrollView cellsScrollView = null; public UIImageView cellsImageView = null; public SpreadsheetView(RectangleF frame) : base() { Frame = frame; BackgroundColor = Constants.backgroundBlack; AutosizesSubviews = true; } public void DrawSpreadsheet() { UInt16 RowHeaderWidth = spreadsheetMessenger.RowHeaderWidth; UInt16 RowHeaderHeight = spreadsheetMessenger.RowHeaderHeight; UInt16 RowCount = spreadsheetMessenger.RowCount; UInt16 ColumnHeaderWidth = spreadsheetMessenger.ColumnHeaderWidth; UInt16 ColumnHeaderHeight = spreadsheetMessenger.ColumnHeaderHeight; UInt16 ColumnCount = spreadsheetMessenger.ColumnCount; // Add the corner UIImageView cornerView = new UIImageView(new RectangleF(0f, 0f, RowHeaderWidth, ColumnHeaderHeight)); cornerView.BackgroundColor = Constants.headingColor; CGColorSpace cornerColorSpace = null; CGBitmapContext cornerContext = null; IntPtr buffer = Marshal.AllocHGlobal(RowHeaderWidth * ColumnHeaderHeight * 4); if (buffer == IntPtr.Zero) throw new OutOfMemoryException("Out of memory."); try { cornerColorSpace = CGColorSpace.CreateDeviceRGB(); cornerContext = new CGBitmapContext (buffer, RowHeaderWidth, ColumnHeaderHeight, 8, 4 * RowHeaderWidth, cornerColorSpace, CGImageAlphaInfo.PremultipliedFirst); cornerContext.SetFillColorWithColor(Constants.headingColor.CGColor); cornerContext.FillRect(new RectangleF(0f, 0f, RowHeaderWidth, ColumnHeaderHeight)); cornerView.Image = UIImage.FromImage(cornerContext.ToImage()); } finally { Marshal.FreeHGlobal(buffer); if (cornerContext != null) { cornerContext.Dispose(); cornerContext = null; } if (cornerColorSpace != null) { cornerColorSpace.Dispose(); cornerColorSpace = null; } } cornerView.Image = DrawBottomRightCorner(cornerView.Image); AddSubview(cornerView); // Add the cellsScrollView cellsScrollView = new UIScrollView (new RectangleF(RowHeaderWidth, ColumnHeaderHeight, Frame.Width - RowHeaderWidth, Frame.Height - ColumnHeaderHeight)); cellsScrollView.ContentSize = new SizeF (ColumnCount * ColumnHeaderWidth, RowCount * RowHeaderHeight); Size iContentSize = new Size((int)cellsScrollView.ContentSize.Width, (int)cellsScrollView.ContentSize.Height); cellsScrollView.BackgroundColor = UIColor.Black; AddSubview(cellsScrollView); CGColorSpace colorSpace = null; CGBitmapContext context = null; CGGradient gradient = null; UIImage image = null; int bytesPerRow = 4 * iContentSize.Width; int byteCount = bytesPerRow * iContentSize.Height; buffer = Marshal.AllocHGlobal(byteCount); if (buffer == IntPtr.Zero) throw new OutOfMemoryException("Out of memory."); try { colorSpace = CGColorSpace.CreateDeviceRGB(); context = new CGBitmapContext (buffer, iContentSize.Width, iContentSize.Height, 8, 4 * iContentSize.Width, colorSpace, CGImageAlphaInfo.PremultipliedFirst); float[] components = new float[] {.75f, .75f, .75f, 1f, .25f, .25f, .25f, 1f}; float[] locations = new float[]{0f, 1f}; gradient = new CGGradient(colorSpace, components, locations); PointF startPoint = new PointF(0f, (float)iContentSize.Height); PointF endPoint = new PointF((float)iContentSize.Width, 0f); context.DrawLinearGradient(gradient, startPoint, endPoint, 0); context.SetLineWidth(Constants.lineWidth); context.BeginPath(); for (UInt16 i = 1; i <= RowCount; i++) { context.MoveTo (0f, iContentSize.Height - i * RowHeaderHeight + (Constants.lineWidth/2)); context.AddLineToPoint((float)iContentSize.Width, iContentSize.Height - i * RowHeaderHeight + 
(Constants.lineWidth/2)); } for (UInt16 j = 1; j <= ColumnCount; j++) { context.MoveTo((float)j * ColumnHeaderWidth - Constants.lineWidth/2, (float)iContentSize.Height); context.AddLineToPoint((float)j * ColumnHeaderWidth - Constants.lineWidth/2, 0f); } context.StrokePath(); image = UIImage.FromImage(context.ToImage()); } finally { Marshal.FreeHGlobal(buffer); if (gradient != null) { gradient.Dispose(); gradient = null; } if (context != null) { context.Dispose(); context = null; } if (colorSpace != null) { colorSpace.Dispose(); colorSpace = null; } // GC.Collect(); //GC.WaitForPendingFinalizers(); } UIImage finalImage = ActivateCell(1, 1, image); finalImage = ActivateCell(0, 0, finalImage); cellsImageView = new UIImageView(finalImage); cellsImageView.Frame = new RectangleF(0f, 0f, iContentSize.Width, iContentSize.Height); cellsScrollView.AddSubview(cellsImageView); } private UIImage ActivateCell(UInt16 column, UInt16 row, UIImage backgroundImage) { UInt16 ColumnHeaderWidth = (UInt16)spreadsheetMessenger.ColumnHeaderWidth; UInt16 RowHeaderHeight = (UInt16)spreadsheetMessenger.RowHeaderHeight; CGColorSpace cellColorSpace = null; CGBitmapContext cellContext = null; UIImage cellImage = null; IntPtr buffer = Marshal.AllocHGlobal(4 * ColumnHeaderWidth * RowHeaderHeight); if (buffer == IntPtr.Zero) throw new OutOfMemoryException("Out of memory: ActivateCell()"); try { cellColorSpace = CGColorSpace.CreateDeviceRGB(); // Create a bitmap the size of a cell cellContext = new CGBitmapContext (buffer, ColumnHeaderWidth, RowHeaderHeight, 8, 4 * ColumnHeaderWidth, cellColorSpace, CGImageAlphaInfo.PremultipliedFirst); // Paint it white cellContext.SetFillColorWithColor(UIColor.White.CGColor); cellContext.FillRect(new RectangleF(0f, 0f, ColumnHeaderWidth, RowHeaderHeight)); // Convert it to an image cellImage = UIImage.FromImage(cellContext.ToImage()); } finally { Marshal.FreeHGlobal(buffer); if (cellContext != null) { cellContext.Dispose(); cellContext = null; } if (cellColorSpace != null) { cellColorSpace.Dispose(); cellColorSpace = null; } // GC.Collect(); //GC.WaitForPendingFinalizers(); } // Draw the border on the cell image cellImage = DrawBottomRightCorner(cellImage); CGColorSpace colorSpace = null; CGBitmapContext context = null; Size iContentSize = new Size((int)backgroundImage.Size.Width, (int)backgroundImage.Size.Height); buffer = Marshal.AllocHGlobal(4 * iContentSize.Width * iContentSize.Height); if (buffer == IntPtr.Zero) throw new OutOfMemoryException("Out of memory: ActivateCell()."); try { colorSpace = CGColorSpace.CreateDeviceRGB(); // Set up a bitmap context the size of the whole grid context = new CGBitmapContext (buffer, iContentSize.Width, iContentSize.Height, 8, 4 * iContentSize.Width, colorSpace, CGImageAlphaInfo.PremultipliedFirst); // Draw the original grid into the bitmap context.DrawImage(new RectangleF(0f, 0f, iContentSize.Width, iContentSize.Height), backgroundImage.CGImage); // Draw the cell image into the bitmap context.DrawImage(new RectangleF(column * ColumnHeaderWidth, iContentSize.Height - (row + 1) * RowHeaderHeight, ColumnHeaderWidth, RowHeaderHeight), cellImage.CGImage); // Convert the bitmap back to an image backgroundImage = UIImage.FromImage(context.ToImage()); } finally { Marshal.FreeHGlobal(buffer); if (context != null) { context.Dispose(); context = null; } if (colorSpace != null) { colorSpace.Dispose(); colorSpace = null; } // GC.Collect(); //GC.WaitForPendingFinalizers(); } return backgroundImage; } private UIImage DrawBottomRightCorner(UIImage image) { int 
width = (int)image.Size.Width; int height = (int)image.Size.Height; float lineWidth = Constants.lineWidth; CGColorSpace colorSpace = null; CGBitmapContext context = null; UIImage returnImage = null; IntPtr buffer = Marshal.AllocHGlobal(4 * width * height); if (buffer == IntPtr.Zero) throw new OutOfMemoryException("Out of memory: DrawBottomRightCorner()."); try { colorSpace = CGColorSpace.CreateDeviceRGB(); context = new CGBitmapContext (buffer, width, height, 8, 4 * width, colorSpace, CGImageAlphaInfo.PremultipliedFirst); context.DrawImage(new RectangleF(0f, 0f, width, height), image.CGImage); context.BeginPath(); context.MoveTo(0f, (int)(lineWidth/2f)); context.AddLineToPoint(width - (int)(lineWidth/2f), (int)(lineWidth/2f)); context.AddLineToPoint(width - (int)(lineWidth/2f), height); context.SetLineWidth(Constants.lineWidth); context.SetStrokeColorWithColor(UIColor.Black.CGColor); context.StrokePath(); returnImage = UIImage.FromImage(context.ToImage()); } finally { Marshal.FreeHGlobal(buffer); if (context != null){ context.Dispose(); context = null;} if (colorSpace != null){ colorSpace.Dispose(); colorSpace = null;} // GC.Collect(); //GC.WaitForPendingFinalizers(); } return returnImage; } }
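
    One thing worth checking, offered as a minimal sketch rather than a confirmed fix: in the code above, the CGImage returned by every ToImage() call is handed straight to UIImage.FromImage() and never disposed, so native bitmap memory can pile up between button presses without ever raising a managed memory warning. Funnelling each bitmap through a helper like the hypothetical RenderSolidCell below (the same MonoTouch CoreGraphics calls the class already uses, plus using blocks for deterministic cleanup) makes it easy to rule that out:

    // Assumes the same usings as SpreadsheetView (MonoTouch.UIKit,
    // MonoTouch.CoreGraphics, System.Drawing, System.Runtime.InteropServices).
    static UIImage RenderSolidCell(int width, int height, CGColor fill)
    {
        IntPtr buffer = Marshal.AllocHGlobal(4 * width * height);
        if (buffer == IntPtr.Zero)
            throw new OutOfMemoryException("Out of memory: RenderSolidCell().");
        try
        {
            using (var colorSpace = CGColorSpace.CreateDeviceRGB())
            using (var context = new CGBitmapContext(buffer, width, height, 8,
                       4 * width, colorSpace, CGImageAlphaInfo.PremultipliedFirst))
            {
                context.SetFillColorWithColor(fill);
                context.FillRect(new RectangleF(0f, 0f, width, height));
                // Dispose the intermediate CGImage too; the UIImage keeps its own reference.
                using (var cgImage = context.ToImage())
                    return UIImage.FromImage(cgImage);
            }
        }
        finally
        {
            Marshal.FreeHGlobal(buffer);
        }
    }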

    Read the article

  • Need help unformatting text

    - by Axilus
    I am currently programming a Visual C# service to receive emails from various sources then I take certain info and organize it in a database using Regex to retrieve the deferent cell values (such as header, body, problem, cost, etc.etc.). My problem is that I am currently using a Hotmail account to email the service which the service then extracts data and writes it to a csv file; however this is all going fine an dandy except for the fact that the text is formated so when there is a "\n" or something of the sort, the program decides to not input the data that follows that into the cell. For instance, if I emailed this: Cost:$1000.00 Body: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed vulputate mattis dolor, a dapibus turpis lacinia mollis. Fusce in enim nulla, sit amet gravida dolor. Suspendisse at nisi velit, vel ornare odio. Integer metus justo, imperdiet et pellentesque in, facilisis dignissim metus. Suspendisse potenti. Vivamus purus nisl, hendrerit sit amet rutrum eu, euismod in felis. Maecenas blandit, metus ac eleifend vulputate, nibh ligula mollis mi, non malesuada nunc tellus ac risus. In at rutrum elit. Proin metus sem, ullamcorper ut rhoncus sed, semper nec tellus. Maecenas adipiscing nisl nec elit egestas vel bibendum justo vehicula. Aliquam erat volutpat. Nullam fermentum enim in magna consequat a lacinia felis iaculis. Ut odio justo, consectetur nec cursus eu, dignissim non sapien. Duis tincidunt fringilla aliquet. Vivamus elementum lobortis massa vel posuere. Aenean non congue odio. Aenean aliquam elit volutpat tortor tempor pharetra. Mauris non est eu orci ultricies lacinia. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Ut vitae orci lectus, sit amet convallis nunc. Vivamus feugiat ante at justo auctor at pretium ante congue. In hac habitasse platea dictumst. Sed at feugiat odio. The body cell would look as follows: <span class=3D"ecxecxApple-style-s= pan" style=3D"font-family:Arial=2C Helvetica=2C sans=3Bfont-size:11px"><p s= tyle=3D"text-align:justify=3Bfont-size:11px=3Bline-height:14px=3Bmargin-rig= ht:0px=3Bmargin-bottom:14px=3Bmargin-left:0px=3Bpadding-top:0px=3Bpadding-r= ight:0px=3Bpadding-bottom:0px=3Bpadding-left:0px">Lorem ipsum dolor sit ame= t=2C consectetur adipiscing elit. Praesent in augue nec justo tempor varius= eu et tellus. Nunc id massa tortor=2C ut lobortis sem. Class aptent taciti= sociosqu ad litora torquent per conubia nostra=2C per inceptos himenaeos. = Maecenas quis nisl nec quam tristique posuere sed at nibh. Cras fringilla v= estibulum metus vel porttitor. Cras iaculis=2C erat nec gravida accumsan=2C= metus felis vestibulum risus=2C quis venenatis nisl nulla sed diam. Aenean= quis viverra velit. Etiam quis massa lectus=2C faucibus facilisis sem. Cur= abitur non eros tellus. Sed at ligula neque. Donec elementum rhoncus volutp= at. Curabitur eu accumsan erat. Phasellus auctor odio dolor=2C ut ornare au= gue. Suspendisse vel est nibh. Vivamus facilisis placerat augue sit amet al= iquam. Maecenas viverra=2C ipsum a tincidunt elementum=2C arcu tellus rutru= m ipsum=2C et dignissim urna orci ac mi. Vivamus non odio massa. Nulla cong= ue massa eu leo pretium non consequat urna molestie.</p><p style=3D"text-al= ign:justify=3Bfont-size:11px=3Bline-height:14px=3Bmargin-right:0px=3Bmargin= -bottom:14px=3Bmargin-left:0px=3Bpadding-top:0px=3Bpadding-right:0px=3Bpadd= ing-bottom:0px=3Bpadding-left:0px">Integer neque odio=2C scelerisque at mol= estie quis=2C congue sed arcu. Praesent a arcu odio. 
Donec sollicitudin=2C = quam vel tincidunt lobortis=2C urna augue cursus lorem=2C in eleifend nunc = risus nec neque. Donec euismod mauris non nibh blandit sollicitudin. Vivamu= s sed tincidunt augue. Suspendisse iaculis massa ut tellus rutrum auctor. C= ras venenatis consequat urna in viverra. Ut blandit imperdiet dolor non sce= lerisque. Suspendisse potenti. Sed vitae lacus ac odio euismod tempus. Aene= an ut sem odio. Curabitur auctor purus a diam iaculis facilisis. Integer mo= lestie commodo mauris a imperdiet. Nunc aliquet tempus orci sit amet viverr= a.</p><p style=3D"text-align:justify=3Bfont-size:11px=3Bline-height:14px=3B= margin-right:0px=3Bmargin-bottom:14px=3Bmargin-left:0px=3Bpadding-top:0px= =3Bpadding-right:0px=3Bpadding-bottom:0px=3Bpadding-left:0px">Morbi ultrici= es fermentum magna=2C et ultricies urna convallis non. Aenean nibh felis=2C= faucibus et pellentesque ultrices=2C accumsan a ligula. Aliquam vulputate = nisi vitae mi pretium et pretium nulla aliquet. Nam egestas diam vel elit c= ommodo fermentum. Aenean venenatis bibendum tellus=2C eget scelerisque risu= s consequat ut. In porta interdum eleifend. Cras laoreet venenatis pulvinar= .. Praesent ultricies tristique lorem=2C quis interdum arcu scelerisque nec.= Quisque arcu tellus=2C consectetur vel mattis nec=2C feugiat ac quam. Prae= sent sit amet fermentum nulla. Nulla lobortis=2C odio vitae elementum aucto= r=2C libero turpis condimentum mi=2C sed aliquet felis sapien nec tortor. I= nteger vehicula=2C neque in egestas accumsan=2C felis metus sagittis nulla= =2C eu dapibus ligula ipsum ut sapien. Nulla quis urna tortor=2C sed facili= sis leo. In at metus sed velit venenatis varius. Fusce aliquam mattis enim= =2C vitae tincidunt sem cursus in.</p><p style=3D"text-align:justify=3Bfont= -size:11px=3Bline-height:14px=3Bmargin-right:0px=3Bmargin-bottom:14px=3Bmar= gin-left:0px=3Bpadding-top:0px=3Bpadding-right:0px=3Bpadding-bottom:0px=3Bp= adding-left:0px">Proin tincidunt ligula at ligula bibendum vitae condimentu= m nunc congue. Curabitur ac magna nibh=2C vel accumsan nisl. Duis nec eros = et purus vestibulum tincidunt at sit amet libero. Donec eu nibh eros. Pelle= ntesque habitant morbi tristique senectus et netus et malesuada fames ac tu= rpis egestas. Donec accumsan=2C tellus at luctus faucibus=2C est nibh sempe= r diam=2C vitae adipiscing lorem tellus vel nulla. Donec eget ipsum ut lore= m tristique ultricies. Aliquam sem diam=2C semper sit amet volutpat pretium= =2C lobortis et eros. Sed vel iaculis metus. Phasellus malesuada elementum = porta.</p><p style=3D"text-align:justify=3Bfont-size:11px=3Bline-height:14p= x=3Bmargin-right:0px=3Bmargin-bottom:14px=3Bmargin-left:0px=3Bpadding-top:0= px=3Bpadding-right:0px=3Bpadding-bottom:0px=3Bpadding-left:0px">Fusce tinci= dunt dignissim massa quis dapibus. Sed aliquet consequat orci=2C eu cursus = libero dapibus vitae. Pellentesque at felis felis=2C vitae condimentum libe= ro. Vivamus eros erat=2C elementum et tristique vitae=2C mattis et neque. P= raesent bibendum leo ac tortor congue id mollis libero ornare. Pellentesque= adipiscing accumsan mi=2C a bibendum purus dignissim id. Cum sociis natoqu= e penatibus et magnis dis parturient montes=2C nascetur ridiculus mus. Morb= i mollis nisi in nibh cursus facilisis. Ut eu quam dolor=2C sit amet congue= orci. Aliquam quam dolor=2C viverra vitae varius sed=2C molestie et quam. = Suspendisse purus mauris=2C fermentum condimentum pharetra at=2C molestie a= nunc. Nam rhoncus euismod venenatis. 
Nam pellentesque quam ac ipsum volutp= at a eleifend odio imperdiet. Class aptent taciti sociosqu ad litora torque= nt per conubia nostra=2C per inceptos himenaeos. Nulla in nunc magna. Lorem= ipsum dolor sit amet=2C consectetur adipiscing elit. Donec pretium tincidu= nt gravida.</p></span> As you can tell I need a way to get rid of all that html junk and make it readable again. Is there anyway to do this with Regex? Or an easier way if possible. Cheers
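
    For what it's worth, the "junk" is not random noise: it is MIME quoted-printable encoding (the =3D and =2C escapes and the trailing = soft line breaks) wrapped in HTML markup, so decoding the quoted-printable first and then stripping the tags usually recovers readable text. Below is a rough sketch of that two-step cleanup; MailCleaner, DecodeQuotedPrintable, and StripHtml are made-up names, the decoder assumes a single-byte charset, and a real HTML parser would be safer than a regex for untrusted mail:

    using System;
    using System.Text.RegularExpressions;

    static class MailCleaner
    {
        // Undo MIME quoted-printable escapes: remove "=" soft line breaks,
        // then turn each =XX hex escape back into the character it encodes.
        public static string DecodeQuotedPrintable(string input)
        {
            string joined = Regex.Replace(input, @"=\r?\n", string.Empty);
            return Regex.Replace(joined, "=([0-9A-Fa-f]{2})",
                m => ((char)Convert.ToInt32(m.Groups[1].Value, 16)).ToString());
        }

        // Crude tag stripper: drop <...> tags, decode entities, collapse whitespace.
        // WebUtility.HtmlDecode needs .NET 4+; HttpUtility.HtmlDecode (System.Web) works on older frameworks.
        public static string StripHtml(string html)
        {
            string noTags = Regex.Replace(html, "<[^>]+>", " ");
            string decoded = System.Net.WebUtility.HtmlDecode(noTags);
            return Regex.Replace(decoded, @"\s+", " ").Trim();
        }
    }

    A call such as StripHtml(DecodeQuotedPrintable(bodyCell)) should turn the block above back into plain paragraphs before the value is written to the CSV.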

    Read the article

  • Super-Charge GIMP’s Image Editing Capabilities with G’MIC [Cross-Platform]

    - by Asian Angel
    Recently we showed you how to enhance GIMP’s image editing power and today we help you super-charge GIMP even more. G’MIC (GREYC’s Magic Image Converter) will add an impressive array of filters and effects to your GIMP installation for image editing goodness. Note: We applied the Contrast Swiss Mask filter to the image shown in the screenshot above to create a nice, warm sunset effect. To add the new PPA open the Ubuntu Software Center, go to the Edit Menu, and select Software Sources. Access the Other Software Tab in the Software Sources Window and add the first of the PPAs shown below (outlined in red). The second PPA will be automatically added to your system. Once you have the new PPAs set up, go back to the Ubuntu Software Center and do a search for “G’MIC”. You will find two listings available and can select either one to add G’MIC to your system (both work equally well). Click on More Info for the listing that you choose and scroll down to where Add-ons are listed. Make sure to select the Add-on listed, click Apply Changes when it appears, and then click Install. We have both shown here for your convenience… When you get ready to use G’MIC to enhance an image, go to the Filters Menu and select G’MIC. A new window will appear where you can select from an impressive array of filters available for your use. Have fun! Command Line Installation For those of you who prefer using the command line for installation use the following commands: sudo add-apt-repository ppa:ferramroberto/gimp sudo apt-get update sudo apt-get install gmic gimp-gmic Links Note: G’MIC is available for Linux, Windows, and Mac. G’MIC PPA at Launchpad [via Web Upd8] G’MIC Homepage at Sourceforge *Downloads for all three platforms available here. Bonus The anime wallpaper shown in the screenshots above can be found here: anime sport [DesktopNexus]

    Read the article

  • SQLAuthority News – SQL Server 2008 R2 Update for Developers Training Kit (March 2010 Update)

    SQL Server 2008 R2 offers an impressive array of capabilities for developers that build upon key innovations introduced in SQL Server 2008. The SQL Server 2008 R2 Update for Developers Training Kit is ideal for developers who want to understand how to take advantage of the key improvements introduced in SQL [...]

    Read the article

  • Survey of MySQL Storage Engines

    <b>Database Journal:</b> "MySQL has an interesting architecture that sets it apart from some other enterprise database systems. It allows you to plug in different modules to handle storage. What that means to end users is that it is quite flexible, offering an interesting array of different storage engines with different features, strengths, and tradeoffs."

    Read the article

  • Dell Continues to Rise

    Server Snapshot: Dell has pushed Sun out of the No. 3 server spot. Given the OEM's new array of products, will it be long before Dell's ascendancy places it at IBM's and HP's backs?

    Read the article

  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (I)

    - by ccasares
    Adding attachments to a HumanTask is a feature that exists in Oracle HWF (Human Workflow) since 10g. However, in 11g there have been many improvements on this feature and this entry will try to summarize them. Oracle BPM 11g 11.1.1.5.1 (aka PS4 Feature Pack or PS4FP) introduced two great features: Ability to link attachments at a Task scope or at a Process scope: "Task" attachments are only visible within the scope (lifetime) of a task. This means that, initially, any member of the assignment pattern of the Human Task will be able to handle (add, review or remove) attachments. However, once the task is completed, subsequent human tasks will not have access to them. This does not mean those attachments got lost. Once the human task is completed, attachments can be retrieved in order to, i.e., check them in to a Content Server or to inject them to a new and different human task. Aside note: a "re-initiated" human task will inherit comments and attachments, along with history and -optionally- payload. See here for more info. "Process" attachments are visible within the scope of the process. This means that subsequent human tasks in the same process instance will have access to them. Ability to use Oracle WebCenter Content (previously known as "Oracle UCM") as the backend for the attachments instead of using HWF database backend. This feature adds all content server document lifecycle capabilities to HWF attachments (versioning, RBAC, metadata management, etc). As of today, only Oracle WCC is supported. However, Oracle BPM Suite does include a license of Oracle WCC for the solely usage of document management within BPM scope. Here are some code samples that leverage the above features. Retrieving uploaded attachments -Non UCM- Non UCM attachments (default ones or those that have existed from 10g, and are stored "as-is" in HWK database backend) can be retrieved after the completion of the Human Task. Firstly, we need to know whether any attachment has been effectively uploaded to the human task. There are two ways to find it out: Through an XPath function: Checking the execData/attachment[] structure. For example: Once we are sure one ore more attachments were uploaded to the Human Task, we want to get them. In this example, by "get" I mean to get the attachment name and the payload of the file. Aside note: Oracle HWF lets you to upload two kind of [non-UCM] attachments: a desktop document and a Web URL. This example focuses just on the desktop document one. In order to "retrieve" an uploaded Web URL, you can get it directly from the execData/attachment[] structure. Attachment content (payload) is retrieved through the getTaskAttachmentContents() XPath function: This example shows how to retrieve as many attachments as those had been uploaded to the Human Task and write them to the server using the File Adapter service. The sample process excerpt is as follows:  A dummy UserTask using "HumanTask1" Human Task followed by a Embedded Subprocess that will retrieve the attachments (we're assuming at least one attachment is uploaded): and once retrieved, we will write each of them back to a file in the server using a File Adapter service: In detail: We've defined an XSD structure that will hold the attachments (both name and payload): Then, we can create a BusinessObject based on such element (attachmentCollection) and create a variable (named attachmentBPM) of such BusinessObject type. We will also need to keep a copy of the HumanTask output's execData structure. 
Therefore we need to create a variable of type TaskExecutionData... ...and copy the HumanTask output execData to it: Now we get into the embedded subprocess that will retrieve the attachments' payload. First, and using an XSLT transformation, we feed the attachmentBPM variable with the name of each attachment and setting an empty value to the payload: Please note that we're using the XSLT for-each node to create as many target structures as necessary. Also note that we're setting an Empty text to the payload variable. The reason for this is to make sure the <payload></payload> tag gets created. This is needed when we map the payload to the XML variable later. Aside note: We are assuming that we're retrieving non-UCM attachments. However in real life you might want to check the type of attachment you're handling. The execData/attachment[]/storageType contains the values "UCM" for UCM type attachments, "TASK" for non-UCM ones or "URL" for Web URL ones. Those values are part of the "Ext.Com.Oracle.Xmlns.Bpel.Workflow.Task.StorageTypeEnum" enumeration. Once we have fed the attachmentsBPM structure and so it now contains the name of each of the attachments, it is time to iterate through it and get the payload. Therefore we will use a new embedded subprocess of type MultiInstance, that will iterate over the attachmentsBPM/attachment[] element: In every iteration we will use a Script activity to map the corresponding payload element with the result of the XPath function getTaskAttachmentContents(). Please, note how the target array element is indexed with the loopCounter predefined variable, so that we make sure we're feeding the right element during the array iteration:  The XPath function used looks as follows: hwf:getTaskAttachmentContents(bpmn:getDataObject('UserTask1LocalExecData')/ns1:systemAttributes/ns1:taskId, bpmn:getDataObject('attachmentsBPM')/ns:attachment[bpmn:getActivityInstanceAttribute('SUBPROCESS3067107484296', 'loopCounter')]/ns:fileName)  where the input parameters are: taskId of the just completed Human Task attachment name we're retrieving the payload from array index (loopCounter predefined variable)  Aside note: The reason whereby we're iterating the execData/attachment[] structure through embedded subprocess and not, i.e., using XSLT and for-each nodes, is mostly because the getTaskAttachmentContents() XPath function is currently not available in XSLT mappings. So all this example might be considered as a workaround until this gets fixed/enhanced in future releases. Once this embedded subprocess ends, we will have all attachments (name + payload) in the attachmentsBPM variable, which is the main goal of this sample. But in order to test everything runs fine, we finish the sample writing each attachment to a file. To that end we include a final embedded subprocess to concurrently iterate through each attachmentsBPM/attachment[] element: On each iteration we will use a Service activity that invokes a File Adapter write service. In here we have two important parameters to set. First, the payload itself. The file adapter awaits binary data in base64 format (string). We have to map it using XPath (Simple mapping doesn't recognize a String as a base64-binary valid target):  Second, we must set the target filename using the Service Properties dialog box:  Again, note how we're making use of the loopCounter index variable to get the right element within the embedded subprocess iteration. Handling UCM attachments will be part of a different and upcoming blog entry. 
Once I finish with all posts on this matter, I will upload the whole sample project to java.net.

    Read the article

  • Working with Sub-Optimal Disk Configurations (Making the best of what you’ve got)

    - by Jonathan Kehayias
    This is the first post in what will be a series of posts on working with a sub-optimal disk configuration to squeeze as much performance out of it as possible.  You might ask: what is a sub-optimal disk configuration?  In this case it is a Dell PowerVault MD3000 with 15 Seagate Barracuda ES.2 SAS 1 TB 7.2K RPM disks (Model Number ST31000640SS).  This equates to just under 14TB of raw storage that can be configured into a number of RAID configurations.  In this case, the disk array...(read more)

    Read the article

  • JD Edwards Delivers Once Again with Significant Announcements

    Listen to Lenley Hensarling, JD Edwards Group Vice President, talk about the significant JD Edwards announcements made during Oracle OpenWorld 2008. Lenley will highlight how JD Edwards’ customers can benefit from the latest product releases from EnterpriseOne and World, discuss the wave of companies who are upgrading to the most recent JD Edwards releases to take advantage of an array of industry-specific enhancements, and elaborate on JD Edwards’ strategy for integrating with other Oracle solutions, bringing continuous value to customers.

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, they are a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built. 
Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draw on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omiting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface. % cat idx5.c int _idx5[5] = { 0, 1, 2, 3, 4 }; #pragma weak idx5 = _idx5 int idx5_func(int index) { if ((index 4)) return (-1); return (_idx5[index]); } A mapfile is required to describe the interface provided by this shared object. % cat mapfile $mapfile_version 2 STUB_OBJECT; SYMBOL_SCOPE { _idx5 { ASSERT { TYPE=data; SIZE=4[5] }; }; idx5 { ASSERT { BINDING=weak; ALIAS=_idx5 }; }; idx5_func; local: *; }; The following main program is used to print all the index values available from the idx5 shared object. 
% cat main.c #include <stdio.h> extern int _idx5[5], idx5[5], idx5_func(int); int main(int argc, char **argv) { int i; for (i = 0; i The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub % ln -s libidx5.so.1 stublib/libidx5.so % elfdump -d stublib/libidx5.so | grep STUB [11] FLAGS_1 0x4000000 [ STUB ] The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute. % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc % ./a.out ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory Killed % LD_PRELOAD=stublib/libidx5.so.1 ./a.out ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime Killed We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1 Once the real object has been built in the lib subdirectory, the program can be run. % ./a.out [0] 0 0 0 [1] 1 1 1 [2] 2 2 2 [3] 3 3 3 [4] 4 4 4 Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need: NameMeaning ...... _ET_DYNshared object _ET_EXECexecutable object _ET_RELrelocatable object ...... STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object. STUB_OBJECT; A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object. 
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BIND Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table: Section AttributeMeaning BITSSection is not of type SHT_NOBITS NOBITSSection is of type SHT_NOBITS SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE, and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item. 
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • How to show images saved in the documents directory as thumbnails in a cocos2d class

    - by Anil gupta
    I have just implemented multiple photo selection from iphone photo library and i am saving all selected photo in document directory every time as a array, now i want to show all saved images in my class from document directory as a thumbnail, i have tried some logic but my game getting crashing, My code is below. Any help will be appreciate. Thanks in advance. -(id) init { // always call "super" init // Apple recommends to re-assign "self" with the "super" return value if( (self=[super init])) { CCSprite *photoalbumbg = [CCSprite spriteWithFile:@"photoalbumbg.png"]; photoalbumbg.anchorPoint = ccp(0,0); [self addChild:photoalbumbg z:0]; //Background Sound // [[SimpleAudioEngine sharedEngine]playBackgroundMusic:@"Background Music.wav" loop:YES]; CCSprite *photoalbumframe = [CCSprite spriteWithFile:@"photoalbumframe.png"]; photoalbumframe.position = ccp(160,240); [self addChild:photoalbumframe z:2]; CCSprite *frame = [CCSprite spriteWithFile:@"Photo-Frames.png"]; frame.position = ccp(160,270); [self addChild:frame z:1]; /*_____________________________________________________________________________________*/ CCMenuItemImage * upgradebtn = [CCMenuItemImage itemFromNormalImage:@"AlbumUpgrade.png" selectedImage:@"AlbumUpgrade.png" target:self selector:@selector(Upgrade:)]; CCMenu * fMenu = [CCMenu menuWithItems:upgradebtn,nil]; fMenu.position = ccp(200,110); [self addChild:fMenu z:3]; NSError *error; NSFileManager *fM = [NSFileManager defaultManager]; NSString *documentsDirectory = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"]; NSLog(@"Documents directory: %@", [fM contentsOfDirectoryAtPath:documentsDirectory error:&error]); NSArray *allfiles = [fM contentsOfDirectoryAtPath :documentsDirectory error:&error]; directoryList = [[NSMutableArray alloc] init]; for(NSString *file in allfiles) { NSString *path = [documentsDirectory stringByAppendingPathComponent:file]; [directoryList addObject:file]; } NSLog(@"array file name value ==== %@", directoryList); CCSprite *temp = [CCSprite spriteWithFile:[directoryList objectAtIndex:0]]; [temp setTextureRect:CGRectMake(160.0f, 240.0f, 50,50)]; // temp.anchorPoint = ccp(0,0); [self addChild:temp z:10]; for(UIImage *file in directoryList) { // NSData *pngData = [NSData dataWithContentsOfFile:file]; // image = [UIImage imageWithData:pngData]; NSLog(@"uiimage = %@",image); // UIImage *image = [UIImage imageWithContentsOfFile:file]; for (int i=1; i<=3; i++) { for (int j=1;j<=3; j++) { CCTexture2D *tex = [[[CCTexture2D alloc] initWithImage:file] autorelease]; CCSprite *selectedimage = [CCSprite spriteWithTexture:tex rect:CGRectMake(0, 0, 67, 66)]; selectedimage.position = ccp(100*i,350*j); [self addChild:selectedimage]; } } } } return self; }

    Read the article

  • Voxel Face Crawling (Mesh simplification, possibly using greedy)

    - by Tim Winter
    This is in regards to a Minecraft-like terrain engine. I store blocks in chunks (16x256x16 blocks in a chunk). When I generate a chunk, I use multiple procedural techniques to set the terrain and to place objects. While generating, I keep one 1D array for the full chunk (solid or not) and a separate 1D array of solid blocks. After generation, I iterate through the solid blocks checking their neighbors so I only generate block faces that don't have solid neighbors. I store which faces to generate in their own list (that's 6 lists, one per possible face). When rendering a chunk, I render all lists in the camera's current chunk and only the lists facing the camera in all other chunks. Using a 2D atlas with this little shader trick Andrew Russell suggested, I want to merge similar faces together completely. That is, if they are in the same list (same normal), are adjacent to each other, have the same light level, etc. My assumption would be to have each of the 6 lists sorted by the axis they rest on, then by the other two axes (the list for the top of a block would be sorted by it's Y value, then X, then Z). With this alone, I could quite easily merge strips of faces, but I'm looking to merge more than just strips together when possible. I've read up on this greedy meshing algorithm, but I am having a lot of trouble understanding it. To even use it, I would think I'd need to perform a type of flood-fill per sorted list to get the groups of merge-able faces. Then, per group, perform the greedy algorithm. It all sounds awfully expensive if I would ever want dynamic terrain/lighting after initial generation. So, my question: To perform merging of faces as described (ignoring whether it's a bad idea for dynamic terrain/lighting), is there perhaps an algorithm that is simpler to implement? I would also quite happily accept an answer that walks me through the greedy algorithm in a much simpler way (a link or explanation). I don't mind a slight performance decrease if it's easier to implement or even if it's only a little better than just doing strips. I worry that most algorithms focus on triangles rather than quads and using a 2D atlas the way I am, I don't know that I could implement something triangle based with my current skills. PS: I already frustum cull per chunk and as described, I also cull faces between solid blocks. I don't occlusion cull yet and may never.
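
    In case it helps to see the heart of the greedy step without the full write-up, here is a stripped-down sketch (an illustration under simplifying assumptions, not the canonical algorithm from the linked article). It works on a single 2D slice at a time: a bool mask marks which faces in the slice are visible and identical (same normal, texture, light level), each face is grown along one axis and then the other into the largest rectangle it can claim, and the claimed cells are cleared so they are never emitted twice. Quad and GreedyMerge are made-up names; replacing the bool with a "face id" comparison gives the per-material/per-light merging described above.

    using System.Collections.Generic;

    struct Quad { public int X, Y, W, H; }

    static class GreedySketch
    {
        public static List<Quad> GreedyMerge(bool[,] mask)
        {
            int width = mask.GetLength(0), height = mask.GetLength(1);
            var quads = new List<Quad>();
            for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                if (!mask[x, y]) continue;

                // Grow along X while the run of mergeable faces continues.
                int w = 1;
                while (x + w < width && mask[x + w, y]) w++;

                // Grow along Y while every face in the next row is still mergeable.
                int h = 1;
                bool grow = true;
                while (grow && y + h < height)
                {
                    for (int i = 0; i < w; i++)
                        if (!mask[x + i, y + h]) { grow = false; break; }
                    if (grow) h++;
                }

                // Claim the rectangle so its faces are not emitted again,
                // then record one quad instead of w*h individual faces.
                for (int dy = 0; dy < h; dy++)
                    for (int dx = 0; dx < w; dx++)
                        mask[x + dx, y + dy] = false;
                quads.Add(new Quad { X = x, Y = y, W = w, H = h });
            }
            return quads;
        }
    }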

    Read the article

  • How do I check user's unlocked achievement and leaderboard scores via GPG plugin

    - by noob
    I need to load the user's achievements and their leaderboard scores in my game, but Social.LoadScores() and Social.LoadAchievements() both return a zero-size array in the callback. When I checked the implementation in Google Play Games' PlayGamesPlatform.cs, both methods have this summary: "Not implemented yet. Calls the callback with an empty list." So my question is: how do I get this data in Unity? Has anyone tried any other method to get the data?
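
    For what it's worth, the standard call pattern once a Social implementation actually returns data looks roughly like the sketch below (C#; "my_leaderboard_id" is a placeholder and the code assumes the Google Play Games plugin is imported and activated). On plugin versions where these calls are still stubbed out it will keep logging zero results, so the realistic options are updating the plugin once the methods are implemented or pulling the data through Google's REST services instead.

        using GooglePlayGames;                 // assumes the GPG Unity plugin is in the project
        using UnityEngine;
        using UnityEngine.SocialPlatforms;

        public class ScoreLoader : MonoBehaviour
        {
            void Start()
            {
                PlayGamesPlatform.Activate();   // route Social.* calls to Google Play Games

                Social.localUser.Authenticate(success =>
                {
                    if (!success) { Debug.Log("Authentication failed"); return; }

                    Social.LoadAchievements(achievements =>
                        Debug.Log("Achievements loaded: " + achievements.Length));

                    // Placeholder id; use the leaderboard id from the Play developer console.
                    Social.LoadScores("my_leaderboard_id", scores =>
                        Debug.Log("Scores loaded: " + scores.Length));
                });
            }
        }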

    Read the article

  • XNA: Rotating Bones

    - by MLM
    XNA 4.0 I am trying to learn how to rotate bones on a very simple tank model I made in Cinema 4D. It is rigged by 3 bones, Root - Main - Turret - Barrel I have binded all of the objects to the bones so that all translations/rotations work as planned in C4D. I exported it as .fbx I based my test project after: http://create.msdn.com/en-US/education/catalog/sample/simple_animation I can build successfully with no errors but all the rotations I try to do to my bones have no effect. I can transform my Root successfully using below but the bone transforms have no effect: myModel.Root.Transform = world; Matrix turretRotation = Matrix.CreateRotationY(MathHelper.ToRadians(37)); Matrix barrelRotation = Matrix.CreateRotationX(barrelRotationValue); MainBone.Transform = MainTransform; TurretBone.Transform = turretRotation * TurretTransform; BarrelBone.Transform = barrelRotation * BarrelTransform; I am wondering if my model is just not right or something important I am missing in the code. Here is my Game1.cs using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace ModelTesting { /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; float aspectRatio; Tank myModel; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } /// <summary> /// Allows the game to perform any initialization it needs to before starting to run. /// This is where it can query for any required services and load any non-graphic /// related content. Calling base.Initialize will enumerate through any components /// and initialize them as well. /// </summary> protected override void Initialize() { // TODO: Add your initialization logic here myModel = new Tank(); base.Initialize(); } /// <summary> /// LoadContent will be called once per game and is the place to load /// all of your content. /// </summary> protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); // TODO: use this.Content to load your game content here myModel.Load(Content); aspectRatio = graphics.GraphicsDevice.Viewport.AspectRatio; } /// <summary> /// UnloadContent will be called once per game and is the place to unload /// all content. /// </summary> protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } /// <summary> /// Allows the game to run logic such as updating the world, /// checking for collisions, gathering input, and playing audio. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); // TODO: Add your update logic here float time = (float)gameTime.TotalGameTime.TotalSeconds; // Move the pieces /* myModel.TurretRotation = (float)Math.Sin(time * 0.333f) * 1.25f; myModel.BarrelRotation = (float)Math.Sin(time * 0.25f) * 0.333f - 0.333f; */ base.Update(gameTime); } /// <summary> /// This is called when the game should draw itself. 
/// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); // Calculate the camera matrices. float time = (float)gameTime.TotalGameTime.TotalSeconds; Matrix rotation = Matrix.CreateRotationY(MathHelper.ToRadians(45)); Matrix view = Matrix.CreateLookAt(new Vector3(2000, 500, 0), new Vector3(0, 150, 0), Vector3.Up); Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, graphics.GraphicsDevice.Viewport.AspectRatio, 10, 10000); // TODO: Add your drawing code here myModel.Draw(rotation, view, projection); base.Draw(gameTime); } } } And here is my tank class: using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace ModelTesting { public class Tank { Model myModel; // Array holding all the bone transform matrices for the entire model. // We could just allocate this locally inside the Draw method, but it // is more efficient to reuse a single array, as this avoids creating // unnecessary garbage. public Matrix[] boneTransforms; // Shortcut references to the bones that we are going to animate. // We could just look these up inside the Draw method, but it is more // efficient to do the lookups while loading and cache the results. ModelBone MainBone; ModelBone TurretBone; ModelBone BarrelBone; // Store the original transform matrix for each animating bone. Matrix MainTransform; Matrix TurretTransform; Matrix BarrelTransform; // current animation positions float turretRotationValue; float barrelRotationValue; /// <summary> /// Gets or sets the turret rotation amount. /// </summary> public float TurretRotation { get { return turretRotationValue; } set { turretRotationValue = value; } } /// <summary> /// Gets or sets the barrel rotation amount. /// </summary> public float BarrelRotation { get { return barrelRotationValue; } set { barrelRotationValue = value; } } /// <summary> /// Load the model /// </summary> public void Load(ContentManager Content) { // TODO: use this.Content to load your game content here myModel = Content.Load<Model>("Models\\simple_tank02"); MainBone = myModel.Bones["Main"]; TurretBone = myModel.Bones["Turret"]; BarrelBone = myModel.Bones["Barrel"]; MainTransform = MainBone.Transform; TurretTransform = TurretBone.Transform; BarrelTransform = BarrelBone.Transform; // Allocate the transform matrix array. 
boneTransforms = new Matrix[myModel.Bones.Count]; } public void Draw(Matrix world, Matrix view, Matrix projection) { myModel.Root.Transform = world; Matrix turretRotation = Matrix.CreateRotationY(MathHelper.ToRadians(37)); Matrix barrelRotation = Matrix.CreateRotationX(barrelRotationValue); MainBone.Transform = MainTransform; TurretBone.Transform = turretRotation * TurretTransform; BarrelBone.Transform = barrelRotation * BarrelTransform; myModel.CopyAbsoluteBoneTransformsTo(boneTransforms); // Draw the model, a model can have multiple meshes, so loop foreach (ModelMesh mesh in myModel.Meshes) { // This is where the mesh orientation is set foreach (BasicEffect effect in mesh.Effects) { effect.World = boneTransforms[mesh.ParentBone.Index]; effect.View = view; effect.Projection = projection; effect.EnableDefaultLighting(); } // Draw the mesh, will use the effects set above mesh.Draw(); } } } }

    Read the article

  • New DMF for SQL Server 2008 sys.dm_fts_parser to parse a string

    Many times we want to split a string into an array and get each word separately; the sys.dm_fts_parser function helps in these cases. Moreover, this function also differentiates between noise words and exact-match words. sys.dm_fts_parser can also be very powerful for debugging purposes: it helps you check how the word breaker and stemmer work on a given input for Full-Text Search.
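
    As a rough illustration (C# with ADO.NET; the connection string is a placeholder and the DMF needs appropriate server permissions), you pass sys.dm_fts_parser a full-text query string plus an LCID, a stoplist id and an accent-sensitivity flag, and read back one row per token:

        using System;
        using System.Data.SqlClient;

        class FtsParserDemo
        {
            static void Main()
            {
                // 1033 = English word breaker, 0 = system stoplist, 0 = accent-insensitive
                const string sql =
                    "SELECT display_term, special_term, occurrence " +
                    "FROM sys.dm_fts_parser(N'\"The quick brown foxes jumped\"', 1033, 0, 0);";

                using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine("{0} ({1})", reader["display_term"], reader["special_term"]);
                }
            }
        }

    The special_term column is what separates noise words from exact matches, which makes the output handy for checking how the word breaker and stemmer treated each token.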

    Read the article

  • Why do I bother with RAID 10 ?

    - by GrumpyOldDBA
    Before I post anything I just want to clarify what I mean by RAID 10: sets of mirrored pairs that are then striped, as opposed to a RAID 0 that has been mirrored. I've just had a disk failure in the data array for one of my dev servers, an eight-disk RAID 10, so no real worries: replace the disk and off we go. But no. The HP engineers told me from the diagnostics (done to ensure I got the right replacement under warranty) that not only had a disk failed but I'd lost all the data...(read more)

    Read the article

  • tile_static, tile_barrier, and tiled matrix multiplication with C++ AMP

    - by Daniel Moth
    We ended the previous post with a mechanical transformation of the C++ AMP matrix multiplication example to the tiled model and in the process introduced tiled_index and tiled_grid. This is part 2.

    tile_static memory

    You all know that in regular CPU code, static variables have the same value regardless of which thread accesses the static variable. This is in contrast with non-static local variables, where each thread has its own copy. Back to C++ AMP, the same rules apply and each thread has its own value for local variables in your lambda, whereas all threads see the same global memory, which is the data they have access to via the array and array_view. In addition, on an accelerator like the GPU, there is a programmable cache, a third kind of memory type if you'd like to think of it that way (some call it shared memory, others call it scratchpad memory). Variables stored in that memory share the same value for every thread in the same tile. So, when you use the tiled model, you can have variables where each thread in the same tile sees the same value for that variable, that threads from other tiles do not. The new storage class for local variables introduced for this purpose is called tile_static. You can only use tile_static in restrict(direct3d) functions, and only when explicitly using the tiled model. What this looks like in code should be no surprise, but here is a snippet to confirm your mental image, using a good old regular C array:

        // each tile of threads has its own copy of locA,
        // shared among the threads of the tile
        tile_static float locA[16][16];

    Note that tile_static variables are scoped and have the lifetime of the tile, and they cannot have constructors or destructors.

    tile_barrier

    In amp.h one of the types introduced is tile_barrier. You cannot construct this object yourself (although if you had one, you could use a copy constructor to create another one). So how do you get one of these? You get it from a tiled_index object. Beyond the 4 properties returning index objects, tiled_index has another property, barrier, that returns a tile_barrier object. The tile_barrier class exposes a single member, the method wait.

        15: // Given a tiled_index object named t_idx
        16: t_idx.barrier.wait();
        17: // more code

    …in the code above, all threads in the tile will reach line 16 before a single one progresses to line 17. Note that all threads must be able to reach the barrier, i.e. if you had branchy code in such a way which meant that there is a chance that not all threads could reach line 16, then the code above would be illegal.

    Tiled Matrix Multiplication Example – part 2

    So now that we added to our understanding the concepts of tile_static and tile_barrier, let me obfuscate rewrite the matrix multiplication code so that it takes advantage of tiling. Before you start reading this, I suggest you get a cup of your favorite non-alcoholic beverage to enjoy while you try to fully understand the code.

        01: void MatrixMultiplyTiled(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
        02: {
        03:   static const int TS = 16;
        04:   array_view<const float,2> a(M, W, vA);
        05:   array_view<const float,2> b(W, N, vB);
        06:   array_view<writeonly<float>,2> c(M,N,vC);
        07:   parallel_for_each(c.grid.tile< TS, TS >(),
        08:     [=] (tiled_index< TS, TS> t_idx) restrict(direct3d)
        09:   {
        10:     int row = t_idx.local[0]; int col = t_idx.local[1];
        11:     float sum = 0.0f;
        12:     for (int i = 0; i < W; i += TS) {
        13:       tile_static float locA[TS][TS], locB[TS][TS];
        14:       locA[row][col] = a(t_idx.global[0], col + i);
        15:       locB[row][col] = b(row + i, t_idx.global[1]);
        16:       t_idx.barrier.wait();
        17:       for (int k = 0; k < TS; k++)
        18:         sum += locA[row][k] * locB[k][col];
        19:       t_idx.barrier.wait();
        20:     }
        21:     c[t_idx.global] = sum;
        22:   });
        23: }

    Notice that all the code up to line 9 is the same as per the changes we made in part 1 of tiling introduction. If you squint, the body of the lambda itself preserves the original algorithm on lines 10, 11, and 17, 18, and 21. The difference being that those lines use new indexing and the tile_static arrays; the tile_static arrays are declared and initialized on the brand new lines 13-15. On those lines we copy from the global memory represented by the array_view objects (a and b), to the tile_static vanilla arrays (locA and locB) – we are copying enough to fit a tile. Because in the code that follows on line 18 we expect the data for this tile to be in the tile_static storage, we need to synchronize the threads within each tile with a barrier, which we do on line 16 (to avoid accessing uninitialized memory on line 18). We also need to synchronize the threads within a tile on line 19, again to avoid the race between lines 14, 15 (retrieving the next set of data for each tile and overwriting the previous set) and line 18 (not being done processing the previous set of data). Luckily, as part of the awesome C++ AMP debugger in Visual Studio there is an option that helps you find such races, but that is a story for another blog post another time. May I suggest reading the next section, and then coming back to re-read and walk through this code with pen and paper to really grok what is going on, if you haven't already? Cool. Why would I introduce this tiling complexity into my code? Funny you should ask that, I was just about to tell you. There is only one reason we tiled our extent, had to deal with finding a good tile size, ensure the number of threads we schedule are correctly divisible with the tile size, had to use a tiled_index instead of a normal index, and had to understand tile_barrier and to figure out where we need to use it, and double the size of our lambda in terms of lines of code: the reason is to be able to use tile_static memory. Why do we want to use tile_static memory? Because accessing tile_static memory is around 10 times faster than accessing the global memory on an accelerator like the GPU, e.g. in the code above, if you can get 150GB/second accessing data from the array_view a, you can get 1500GB/second accessing the tile_static array locA. And since by definition you are dealing with really large data sets, the savings really pay off. We have seen tiled implementations being twice as fast as their non-tiled counterparts. Now, some algorithms will not have performance benefits from tiling (and in fact may deteriorate), e.g. algorithms that require you to go only once to global memory will not benefit from tiling, since with tiling you already have to fetch the data once from global memory! Other algorithms may benefit, but you may decide that you are happy with your code being 150 times faster than the serial version you had, and you do not need to invest to make it 250 times faster. Also algorithms with more than 3 dimensions, which C++ AMP supports in the non-tiled model, cannot be tiled. Also note that in future releases, we may invest in making the non-tiled model, which already uses tiling under the covers, go the extra step and use tile_static memory on your behalf, but it is obviously way too early to commit to anything like that, and we certainly don't do any of that today. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • MySQL Exotic Storage Engines

    <b>Database Journal:</b> "MySQL has an interesting architecture that sets it apart from some other enterprise database systems. It allows you to plug in different modules to handle storage. What that means to end users is that it is quite flexible, offering an interesting array of different storage engines with different features, strengths, and tradeoffs."

    Read the article

  • MySQL Exotic Storage Engines

    MySQL has an interesting architecture that allows you to plug in different modules to handle storage. What that means is that it's quite flexible, offering an interesting array of different storage engines with different features, strengths, and tradeoffs. Sean Hull presents some of the newest and more exotic storage engines, and even some that are still in development.

    Read the article

  • Looking for a C# implementation of (Pk) Zip32

    - by bukko
    I need to implement Zip32 (PK-compatible) in C#. I can't just call a separate DLL or exe because (1) I don't want to write the uncompressed file to disk and (2) I want to avoid the possibility that someone could wrap that library; either of these would compromise security. My ideal solution would be to find a C# implementation of the Zip32 algorithm which I could use, and just modify it so I can pass a byte array or something. Does anyone have any suggestions or (dare I hope) examples of C# PKZip implementations?
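
    If targeting .NET 4.5 or newer is acceptable (an assumption; on older frameworks, managed source libraries such as SharpZipLib or DotNetZip fill the same role), the built-in System.IO.Compression.ZipArchive writes PKZip-compatible output and can run entirely over a MemoryStream, so nothing is ever written to disk. A minimal sketch, with the entry name as a placeholder:

        using System.IO;
        using System.IO.Compression;   // reference the System.IO.Compression assembly (.NET 4.5+)

        static class InMemoryZip
        {
            public static byte[] Zip(string entryName, byte[] uncompressedData)
            {
                using (var output = new MemoryStream())
                {
                    // leaveOpen: true so the MemoryStream is still readable after the archive is flushed
                    using (var archive = new ZipArchive(output, ZipArchiveMode.Create, leaveOpen: true))
                    {
                        ZipArchiveEntry entry = archive.CreateEntry(entryName, CompressionLevel.Optimal);
                        using (Stream entryStream = entry.Open())
                            entryStream.Write(uncompressedData, 0, uncompressedData.Length);
                    }
                    return output.ToArray();   // a complete PKZip-format archive, never touching disk
                }
            }
        }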

    Read the article

  • Dynamic Environment Creation

    - by Jack
    I was wondering (thinking on a smaller-scale, abstracted level) how one creates a dynamic environment à la Minecraft. Specifically, if I think of the world as a 3-dimensional array of block objects, how are large features such as oceans created? The language isn't important; I'm thinking on a conceptual level, but if it helps, I use C# or C++. Thanks for any help!
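
    On the conceptual level the question asks about, large features such as oceans usually fall out of a low-frequency heightmap combined with a fixed sea level: wherever the terrain surface dips below that level, the column fills with water. A hedged C# sketch of the idea (Noise2D is a crude stand-in, not a real library call; any Perlin or simplex implementation would replace it):

        using System;

        enum Block : byte { Air, Water, Stone }

        static class TerrainGen
        {
            public static Block[,,] GenerateChunk(int sizeX, int sizeY, int sizeZ, int seaLevel)
            {
                var blocks = new Block[sizeX, sizeY, sizeZ];
                for (int x = 0; x < sizeX; x++)
                    for (int z = 0; z < sizeZ; z++)
                    {
                        // Low-frequency octave -> continents and ocean basins spanning many chunks;
                        // higher-frequency octave -> local hills and detail.
                        double height = seaLevel
                                      + 24.0 * Noise2D(x * 0.005, z * 0.005)
                                      +  6.0 * Noise2D(x * 0.05,  z * 0.05);

                        for (int y = 0; y < sizeY; y++)
                        {
                            if (y <= height)        blocks[x, y, z] = Block.Stone;
                            else if (y <= seaLevel) blocks[x, y, z] = Block.Water;  // ocean wherever the surface is below sea level
                            else                    blocks[x, y, z] = Block.Air;
                        }
                    }
                return blocks;
            }

            // Placeholder so the sketch is self-contained; swap in real Perlin/simplex noise.
            static double Noise2D(double x, double z)
            {
                return Math.Sin(x * 1.7 + Math.Cos(z * 2.3)) * Math.Cos(z * 1.3);   // smooth-ish, roughly in [-1, 1]
            }
        }

    In a chunked world the trick is to sample the noise at world coordinates (chunk offset plus local position) with a per-world seed, so an ocean basin continues seamlessly across chunk boundaries.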

    Read the article

< Previous Page | 406 407 408 409 410 411 412 413 414 415 416 417  | Next Page >