Search Results

Search found 992 results on 40 pages for 'garbage'.

Page 17 of 40

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind (correct me if I'm wrong), a bigger SSD should be, everything else being equal, faster than a smaller one. A bigger SSD would have more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage collection optimization. There would also be more time before TRIM became necessary. Wikipedia remarks that "the performance of the SSD can scale with the number of parallel NAND flash chips used in the device", so it seems throughput also increases significantly. Also, many SSDs contain internal caches of some sort, and presumably those caches are larger in correspondingly large SSDs.

    But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of an 8 GB SSD be significantly different from, for example, an 80 GB SSD, assuming both used high quality chips, controllers, etc.? Are there any resources (web pages, research papers, presentations, books, etc.) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc.) and size?

    I realize this does not really sound like a programming question, but it is relevant for what I'm working on (using flash for caching hard drive data), which does involve programming. If there is a better place to ask this question, e.g. a more hardware-oriented site, what would that be? Something like the equivalent of Stack Overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc. would be appreciated.
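
    To make the scaling intuition concrete, here is a toy model (every number below is invented for illustration, not a measurement): sequential throughput grows roughly with the number of parallel NAND channels until the host interface saturates, which is one reason larger drives with more dies tend to be faster.

        # Toy model: throughput scales with channel count up to the bus ceiling.
        per_die_mb_s = 40          # assumed per-die streaming rate
        interface_cap_mb_s = 300   # assumed host interface ceiling (e.g. SATA II minus overhead)

        for channels in (1, 2, 4, 8, 16):
            throughput = min(per_die_mb_s * channels, interface_cap_mb_s)
            print(f"{channels:2d} channels -> ~{throughput} MB/s")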

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property. While applying GZip and Deflate behavior is pretty easy, there are a few caveats that you have to watch out for, as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats, let's review GZip implementation for ASP.NET.

    ASP.NET GZip/Deflate Basics

    Response filters are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream which, if a filter is in place, is also written through the filter stream's interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages, .NET includes the GZipStream and DeflateStream stream classes, which can be readily assigned to the Response.Filter property.

    With these two stream classes in place it's almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods - IsGZipSupported and GZipEncodePage - that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes 'Page' in its name, this code will work with any ASP.NET output).

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        /// <returns></returns>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter
        /// IMPORTANT: You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("deflate"))
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }

                // Allow proxy servers to cache encoded and unencoded versions separately
                Response.AppendHeader("Vary", "Content-Encoding");
            }
        }

    As you can see, the actual assignment of the filter is as simple as:

        Response.Filter = new DeflateStream(Response.Filter,
                              System.IO.Compression.CompressionMode.Compress);

    which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding, and that any pages cached in proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding).
    Using this utility function is now trivially easy. In any ASP.NET code that wants to compress its Response output you simply use:

        protected void Page_Load(object sender, EventArgs e)
        {
            WebUtils.GZipEncodePage();

            Entry = WebLogFactory.GetEntry();
            var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount,
                              "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries");
            if (entries == null)
                throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage);

            this.repEntries.DataSource = entries;
            this.repEntries.DataBind();
        }

    Here I use an ASP.NET page, but the above WebUtils.GZipEncodePage() call will work in any ASP.NET application type, including HTTP handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation, output over a certain size is GZip encoded by default. The output that is generated is JSON or XML, and if the output is over 5k in size I apply WebUtils.GZipEncodePage():

        if (sbOutput.Length > GZIP_ENCODE_TRESHOLD)
            WebUtils.GZipEncodePage();

        Response.ContentType = ControlResources.STR_JsonContentType;
        HttpContext.Current.Response.Write(sbOutput.ToString());

    Ok, so you probably get the idea: encoding GZip/Deflate content is pretty easy.

    Hold on there, Hoss – Watch your Caching

    Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don't support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic HTTP client accessing your content, GZip/Deflate support is by no means guaranteed. For example, WinInet HTTP clients don't support GZip out of the box - it has to be explicitly implemented. Other low-level HTTP clients on other platforms don't support GZip out of the box either.

    The problem is that your application, your Web server, and proxy servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or - worse - the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients*, which is also very hard to debug and fix once in production. You already saw the issue of proxy servers addressed in the GZipEncodePage() function:

        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");

    This ensures that any proxy servers also check the Content-Encoding HTTP header to cache their content - not just the URL. The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page, the GZipped content will be cached (either by ASP.NET's cache or in some cases by the IIS kernel cache). But what if the next client doesn't support GZip? She'll get served a cached GZip page that won't decode, and she'll get a page full of garbage. Wholly undesirable.
    To fix this you need to add a custom OutputCache rule by way of the GetVaryByCustomString() HttpApplication override in your global.asax file:

        public override string GetVaryByCustomString(HttpContext context, string custom)
        {
            // Override Caching for compression
            if (custom == "GZIP")
            {
                string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"];

                if (string.IsNullOrEmpty(acceptEncoding))
                    return "";
                else if (acceptEncoding.Contains("gzip"))
                    return "GZIP";
                else if (acceptEncoding.Contains("deflate"))
                    return "DEFLATE";

                return "";
            }

            return base.GetVaryByCustomString(context, custom);
        }

    In a page that uses output caching you then specify:

        <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %>

    to use that custom rule.

    It's all Fun and Games until ASP.NET throws an Error

    Ok, so you're up and running with GZip, you have your caching squared away, and the pages you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page - a screenful of binary noise where HTML should be. Lovely, isn't it? What's happened here is that I have WebUtils.GZipEncodePage() applied to my page, but there's an error in the page. The error falls back to the ASP.NET error handler, and the error handler removes all existing output (good) and removes all the custom HTTP headers I've set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncodePage) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here's what Fiddler says about the raw HTTP header output when an error occurs while GZip encoding was applied:

        HTTP/1.1 500 Internal Server Error
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Date: Sat, 30 Apr 2011 22:21:08 GMT
        Content-Length: 2138
        Connection: close

        ?`I?%&/m?{J?J??t??`  ... binary output stripped here

    Notice: no Content-Encoding header, and that's why we're seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up, and in this case I've been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top:

        protected void Application_Error(object sender, EventArgs e)
        {
            // Remove any special filtering especially GZip filtering
            Response.Filter = null;
            ...
        }

    And voila, I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for page-level errors handled in Page_Error or ASP.NET MVC error handling methods in a controller.
    Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client, as pointed out by Adam Schroeder in the comments:

        protected void Application_PreSendRequestHeaders()
        {
            // ensure that if GZip/Deflate Encoding is applied that headers are set
            // also works when an error occurs if filters are still active
            HttpResponse response = HttpContext.Current.Response;
            if (response.Filter is GZipStream &&
                response.Headers["Content-encoding"] != "gzip")
                response.AppendHeader("Content-encoding", "gzip");
            else if (response.Filter is DeflateStream &&
                     response.Headers["Content-encoding"] != "deflate")
                response.AppendHeader("Content-encoding", "deflate");
        }

    This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the headers accordingly. This is actually a better solution, since it is generic - it'll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or a short error display might change the output without the filter being cleared, and this code handles that too. Sweet, thanks Adam.

    It's unfortunate that ASP.NET doesn't natively clear out Response.Filter when an error occurs, just as it clears the Response and headers. I can't see where leaving a filter in place in an error situation would make any sense, but hey - this is what it is, and it's easy enough to fix as long as you know where to look. Riiiight!

    IIS and GZip

    I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding and let IIS perform it for you rather than doing it in your code, by all means you should do it! Especially any static or semi-dynamic content that can be made static should use IIS built-in compression. Dynamic caching is also supported, but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression, with some detailed performance comparisons and impact reviews. I'll post another entry next with some more info on IIS compression, since information on it seems to be a bit hard to come by.

    Related Content: Built-in GZip/Deflate Compression in IIS 7.x; HttpWebRequest and GZip Responses

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS7
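
    As a quick sanity check (a hedged aside - example.com stands in for your own URL), curl can show whether the right headers come back with and without compression requested:

        # Ask for compressed content and inspect the response headers;
        # expect "Content-Encoding: gzip" and "Vary: Content-Encoding"
        curl -sI -H "Accept-Encoding: gzip" http://example.com/default.aspx

        # Fetch and auto-decompress the body to confirm it isn't garbage
        curl -s --compressed http://example.com/default.aspx | head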

  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question of how to skip or bypass a trailer record or a badly formatted/empty row in an SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out why, and after an extensive search inside the forum and outside it on the entire Web (using several search engines) I found that even though there are a number of posts and articles on the topic, none of them employ the simplest and most efficient technique. When I say efficient I mean the shortest time to solution for the fellow developers.

    OK, enough talk. Let's face the problem: typically a flat file (e.g. a comma-delimited/CSV file) needs to be processed (loaded into a database, in most cases). Oftentimes such an input file is produced by some sort of out-of-control, third-party solution and comes in with garbage characters and/or malformed rows. One such example would be a file in which several rows have no data and there is an occasional garbage character (a stray "1", in this example, on row #7). Our task is to produce a clean file that captures only the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source.

    Let's outline our course of action to start off: use SSIS 2005 to create a DFT; the DFT will use a Flat File Source pointed at our input [bad] flat file; we will use a Conditional Split to process the bad input file; and finally dump the resulting data to a new [clean] file. Well, only four steps; let's see if it is too much work.

    Step 1: Start BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT). Steps 2 and 3: I added a data viewer just to see what I was getting - alas, surprisingly, the data issues were not visible in it. The real key to the approach is to properly configure the Conditional Split transformation, and specifically its SSIS expression:

        LEN([After CS Column 0]) > 1

    The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I renamed the output "No Empty Rows", but by default it will be named Case 1 (remember to drag your first column into the expression area!). You can close your Conditional Split now. The next part is crucial - consuming the output of our Conditional Split.

    Last step, #4: Add a Flat File Destination (or any other destination you need). Click on the Conditional Split and drag the green arrow onto the target. When you do so, make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings. As the last step, run the package (F5) to examine the produced output file - and... it looks great!
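
    A few more Conditional Split expressions in the same spirit, for common variations of the problem (the column name is an example - substitute your own - and a trailer row flagged with "T" in its first character is a hypothetical layout; the parenthesized notes are descriptions, not part of the expressions):

        LEN([Column 0]) > 1                     (keep only rows with actual content)
        SUBSTRING([Column 0], 1, 1) != "T"      (drop a trailer row whose first character is "T")
        LEN(LTRIM(RTRIM([Column 0]))) > 0       (drop rows that are only whitespace)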

  • qsort on an array of pointers to Objective-C objects

    - by ElBueno
    I have an array of pointers to Objective-C objects. These objects have a sort key associated with them. I'm trying to use qsort to sort the array of pointers to these objects. However, the first time my comparator is called, the first argument points to the first element in my array, but the second argument points to garbage, giving me an EXC_BAD_ACCESS when I try to access its sort key. Here is my code (paraphrased):

        - (void)foo:(int)numThingies
        {
            Thingie **array;
            array = malloc(sizeof(deck[0]) * numThingies);
            for (int i = 0; i < numThingies; i++) {
                array[i] = [[Thingie alloc] initWithSortKey:(float)random()/RAND_MAX];
            }
            qsort(array[0], numThingies, sizeof(array[0]), thingieCmp);
        }

        int thingieCmp(const void *a, const void *b)
        {
            const Thingie *ia = (const Thingie *)a;
            const Thingie *ib = (const Thingie *)b;
            if (ia.sortKey > ib.sortKey)
                return 1; // ib points to garbage, so ib.sortKey produces the EXC_BAD_ACCESS
            else
                return -1;
        }

    Any ideas why this is happening?
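
    For what it's worth, a hedged diagnosis based on how qsort() works: the base pointer should be the array itself, not its first element, and the comparator receives pointers to the array slots (Thingie **), not the Thingie pointers themselves. A corrected sketch:

        // qsort hands the comparator pointers *into* the buffer, i.e. the
        // addresses of the Thingie* slots, so dereference once before use.
        int thingieCmp(const void *a, const void *b)
        {
            Thingie *ia = *(Thingie * const *)a;
            Thingie *ib = *(Thingie * const *)b;
            if (ia.sortKey > ib.sortKey) return 1;
            if (ia.sortKey < ib.sortKey) return -1;
            return 0;   // report equality instead of never returning 0
        }

        // Pass the array itself as the base pointer, not its first element:
        qsort(array, numThingies, sizeof(array[0]), thingieCmp);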

  • Perl strings internals

    - by n0rd
    How are Perl strings represented internally? What encoding is used? How do I handle different encodings properly? I've been using Perl for quite a long time, but my work didn't involve much string handling in different encodings, and when I encountered a minor problem that had something to do with encodings I usually resorted to some shamanic actions. Until this moment I thought of Perl strings as sequences of bytes, which fit pretty well for my tasks. Now I need to do some processing of a UTF-8 encoded file, and here the trouble starts. First, I read the file into a string like this:

        open(my $in, '<', $ARGV[0]) or die "cannot open file $ARGV[0] for reading";
        binmode($in, ':utf8');
        my $contents;
        {
            local $/;
            $contents = <$in>;
        }
        close($in);

    then simply print it:

        print $contents;

    And I get two things: a warning, Wide character in print at <scriptname> line <n>, and garbage in the console. So I can conclude that Perl strings have a concept of a "character" that can be "wide" or not, but when printed these "wide" characters are represented in the console as multiple bytes, not as a single "character". (I wonder now why all my previous experience with binary files worked as I expected it to, without any "character" issues.) Why then do I see garbage in the console? If Perl stores strings as characters in some known encoding, I don't think there is a big problem to find out the console encoding and print the text properly. (I use Windows, BTW.) If Perl stores strings as multibyte sequences (e.g. using the same UTF-8 encoding), why is it done this way? From my C experience, handling multibyte strings is PAIN.
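
    A hedged sketch of the usual fix (the exact code page is an assumption - check what chcp reports in your console): the "Wide character" warning means a character string is being sent to a byte-oriented handle, so give STDOUT an encoding layer that matches the console:

        # Declare what the output handle should encode to; cp866 is just an
        # example of a Windows console code page -- use what `chcp` reports,
        # or ':encoding(UTF-8)' for a UTF-8 capable terminal.
        binmode(STDOUT, ':encoding(cp866)');
        print $contents;    # characters are encoded on the way out, no warning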

  • Why isn’t my autoreleased object getting released?

    - by zoul
    Hello. I am debugging a weird memory management error and I can't figure it out. I noticed that some of my objects are staying in memory longer than expected. I checked all my memory management and finally got to the very improbable conclusion that some of my autorelease operations don't result in a release. Under what circumstances is that possible? I created a small testing Canary class that logs a message in dealloc and have the following testing code in place:

        NSLog(@"On the main thread: %i.", [NSThread isMainThread]);
        [[[Canary alloc] init] autorelease];

    According to the code we're really on the main thread, but the dealloc in Canary does not get called until much later. The delay is not deterministic and can easily take seconds or more. How is that possible? The application runs on a Mac, and garbage collection is turned off (Objective-C Garbage Collection is set to Unsupported on the target). I am mostly used to iOS - is memory management on OS X different in some important way?
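
    One likely explanation, offered as a hedged sketch: with manual retain/release, an autoreleased object is only released when the surrounding autorelease pool drains, and the main thread's pool drains at the end of a run-loop pass - which in a mostly idle Mac app can be seconds away. Wrapping the test in an explicit pool makes the release deterministic:

        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        [[[Canary alloc] init] autorelease];
        [pool drain];   // Canary's -dealloc should fire right here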

  • Low Latency Serial Communications In .Net

    - by bvillersjr
    I have been researching various third-party libraries and approaches to low-latency serial communications in .NET. I've read enough that I have now come full circle and know as little as I did when I started, due to the variety of conflicting opinions. For example, the functionality in the Framework was ruled out by some convincing articles stating that the Microsoft-provided solution has not been stable across framework versions and is lacking in functionality. I have found articles bashing many of the older COM-based libraries. I have found articles bashing the idea of a low-latency .NET app as a whole, due to garbage collection. I have also read articles demonstrating how P/Invoking Windows API functionality for the purpose of low-latency communication is unacceptable. THIS RULES OUT JUST ABOUT ANY APPROACH I CAN THINK OF!

    I would really appreciate some words from those with been-there / done-that experience. Ideally, I could locate a solid library / partner and not have to build the communications library myself. I have the following simple objectives:

    Sustained low-latency serial communication in C# / VB.NET
    32/64 bit
    Well documented (if the solution is third-party)
    Relatively unimpacted (communication- and latency-wise) by garbage collection
    Flexible (I have no idea what I will have to interface with in the future!)

    The only requirement I have for certain is that I need to be able to interface with many different industrial devices, such as RS485-based linear actuators, serial/microcontroller-based gauges, and ModBus (also RS485) devices. Any comments, ideas, thoughts or links to articles that may iron out my confusion are much appreciated!

  • What are the limitations of the .NET Assembly format?

    - by McKAMEY
    We just ran into an interesting issue that I've not experienced before. We have a large-scale production ASP.NET 3.5 SP1 Web App Project in Visual Studio 2008 SP1 which gets compiled and deployed using a Website Deployment Project. Everything has worked fine for the last year, until after a check-in yesterday the app started critically failing with BadImageFormatException. The check-in in question doesn't change anything particularly special, and the errors are coming from areas of the app that weren't even changed. Using Reflector we inspected the offending methods, to find that there were garbage strings in the code (which Reflector humorously interpreted as Chinese characters). We have consistently reproduced this on several machines, so it does not appear to be hardware-related. Further inspection showed that those garbage strings did not exist in the assemblies used as inputs to aspnet_merge.exe during deployment.

    The Web Deployment Project's Output Assemblies options are:

    Merge all outputs to a single assembly
    Merge each individual folder output to its own assembly
    Merge all pages and control outputs to a single assembly
    Create a separate assembly for each page and control output

    If we set the merge options to the first option ("Merge all outputs to a single assembly") we experience the issue, yet all of the other options work perfectly! So my question: does anyone know why this is happening? Is there a size limit to aspnet_merge.exe's capabilities (the resulting merged DLL is around 19.3 MB)? Are there any other known issues with merging the output of WAPs? I would love it if any Assembly format / aspnet_merge gurus know about any such limitations. Seems to me like a 25 MB assembly, while big, isn't outrageous. Less disk to hit if it is all pregen'd stuff.

  • thread reaches end but isn't removed

    - by pstanton
    I create a bunch of threads to do some processing:

        new Thread("upd-" + id) {
            @Override
            public void run() {
                try {
                    doSomething();
                } catch (Throwable e) {
                    LOG.error("error", e);
                } finally {
                    LOG.debug("thread death");
                }
            }
        }.start();

    I know I should be using a thread pool, but I need to understand the following problem before I change it. I'm using Eclipse's debugger and looking at the threads in the debug pane, which lists active threads. Many of them complete as you would expect and are removed from the debug pane; however, some seem to stay in the list of active threads even though the log shows the "thread death" entry for them. When I attempt to debug these threads, they either do not pause for debugging or show an error dialog: "A timeout occurred while retrieving stack frames for thread: upd-...". There is some synchronization going on within the doSomething() call, but I'm fairly sure it's OK, and since the "thread death" log is being called I'm assuming these threads aren't deadlocked in that method. I don't do any Thread.join()s; I do call a third-party API, but doubt they do either. Can anyone think of another reason these threads are lingering? Thanks.

    EDIT: I created this test to check the garbage collection theory:

        Thread thread = new Thread("!!!!!!!!!!!!!!!!") {
            @Override
            public void run() {
                System.out.println("running");
                ThreadUs.sleepQuiet(5000);
                System.out.println("finished"); // <-- thread removed from list here
            }
        };
        thread.start();
        ThreadUs.sleepQuiet(10000);
        System.out.println(thread.isAlive()); // <-- thread already removed from list but hasn't been GC'd
        ThreadUs.sleepQuiet(10000);

    This proves that it is nothing to do with garbage collection, as Eclipse removes the thread from the thread list as soon as it completes and isn't waiting for the object to be de-referenced/GC'd.

  • Are there size limitations to the .NET Assembly format?

    - by McKAMEY
    We ran into an interesting issue that I've not experienced before. We have a large-scale production ASP.NET 3.5 SP1 Web App Project in Visual Studio 2008 SP1 which gets compiled and deployed using a Website Deployment Project. Everything has worked fine for the last year, until after a check-in yesterday the app started critically failing with BadImageFormatException. The check-in in question doesn't change anything particularly special, and the errors are coming from areas of the app that weren't even changed. Using Reflector we inspected the offending methods, to find that there were garbage strings in the code (which .NET Reflector humorously interpreted as Chinese characters). We have consistently reproduced this on several machines, so it does not appear to be hardware-related. Further inspection showed that those garbage strings did not exist in the assemblies used as inputs to aspnet_merge.exe during deployment.

    The aspnet_merge.exe / Web Deployment Project Output Assemblies options are:

    Merge all outputs to a single assembly
    Merge each individual folder output to its own assembly
    Merge all pages and control outputs to a single assembly
    Create a separate assembly for each page and control output

    If we set the merge options to the first option ("Merge all outputs to a single assembly") we experience the issue, yet all of the other options work perfectly! My question: does anyone know why this is happening? Is there a size limit to aspnet_merge.exe's capabilities (the resulting merged DLL is around 19.3 MB)? Are there any other known issues with merging the output of WAPs? I would love it if any Assembly format / aspnet_merge.exe gurus know about any such limitations. Seems to me like a 25 MB assembly, while big, isn't outrageous.

  • Issue encoding java->xls

    - by Xerg
    This is not a pure Java question and can also be related to HTML. I've written a Java servlet that queries a database table and shows the result as an HTML table. The user can also ask to receive the result as an Excel sheet. I'm creating the Excel sheet by printing the same HTML table, but with the content type "application/vnd.ms-excel". The Excel file is created fine. The problem is that the tables may contain non-English data, so I want to use UTF-8 encoding.

        PrintWriter out = response.getWriter();
        response.setContentType("application/vnd.ms-excel:ISO-8859-1");
        //response.setContentType("application/vnd.ms-excel:UTF-8");
        response.setHeader("cache-control", "no-cache");
        response.setHeader("Content-Disposition", "attachment; filename=file.xls");
        out.print(src);
        out.flush();

    The non-English characters appear as garbage (áéíóú). I also tried converting the String to bytes and back:

        byte[] arrByte = src.getBytes("ISO-8859-1");
        String result = new String(arrByte, "UTF-8");

    But I'm still getting garbage. What can I do? Thanks
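
    Two things stand out, offered as a hedged diagnosis: getWriter() is called before the content type is set, which locks in the container's default encoding, and "application/vnd.ms-excel:ISO-8859-1" is not valid header syntax (the charset belongs after a semicolon). A corrected sketch:

        // Commit to the charset *before* asking for the writer:
        response.setContentType("application/vnd.ms-excel; charset=UTF-8");
        response.setCharacterEncoding("UTF-8");
        response.setHeader("cache-control", "no-cache");
        response.setHeader("Content-Disposition", "attachment; filename=file.xls");
        PrintWriter out = response.getWriter();
        out.print('\uFEFF');  // optional BOM; some Excel versions need it to detect UTF-8
        out.print(src);
        out.flush();

    The getBytes/new String round-trip merely reinterprets bytes and can't repair the encoding; once the writer is created with the right charset it isn't needed.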

  • How to extract a 2x2 submatrix from a bigger matrix

    - by ZaZu
    Hello, I am a very basic user and do not know much about commands used in C, so please bear with me... I can't use very complicated code. I have some knowledge of the stdio.h and ctype.h libraries, but that's about it. I have a matrix in a txt file and I want to load the matrix based on my input of number of rows and columns. For example, I have a 5 by 5 matrix in the file and I want to extract a specific 2 by 2 submatrix - how can I do that? I created a nested loop:

        FILE *sample;
        sample = fopen("randomfile.txt", "r");
        for (i = 0; i < rows; i++) {
            for (j = 0; j < cols; j++) {
                fscanf(sample, "%f", &matrix[i][j]);
            }
            fscanf(sample, "\n", &matrix[i][j]);
        }
        fclose(sample);

    Sadly the code does not work. If I have this matrix:

        5.00 4.00 5.00 6.00
        5.00 4.00 3.00 25.00
        5.00 3.00 4.00 23.00
        5.00 2.00 352.00 6.00

    and input 3 for row and 3 for column, I get:

        5.00 4.00 5.00
        6.00 5.00 4.00
        3.00 25.00 5.00

    Not only is this not a 2 by 2 submatrix, but even if I wanted the first 3 rows and first 3 columns, it's not printing them correctly... I need to start at row 3 and col 3, then take the 2 by 2 submatrix! I should have ended up with:

        4.00 23.00
        352.00 6.00

    I heard that I can use fgets and sscanf to accomplish this. Here is my trial code:

        fgets(garbage, 1, fin);
        sscanf(garbage, "\n");

    But this doesn't work either :( What am I doing wrong? Please help. Thanks!
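
    A hedged sketch of the usual fix: the nested loop above reads the first rows × cols numbers in file order, which is exactly the output shown. To pull a block out of the middle, read the whole grid first, then index into it. (ROWS/COLS of 4 match the sample data shown, and the file name is the one from the question; adjust both as needed. Note that %f already skips whitespace and newlines, so no separate "\n" handling is required.)

        #include <stdio.h>

        #define ROWS 4
        #define COLS 4

        int main(void)
        {
            float matrix[ROWS][COLS], sub[2][2];
            int startRow = 3, startCol = 3;   /* 1-based, as in the question */

            FILE *sample = fopen("randomfile.txt", "r");
            if (sample == NULL) return 1;

            /* Read the entire matrix; %f skips spaces and newlines by itself. */
            for (int i = 0; i < ROWS; i++)
                for (int j = 0; j < COLS; j++)
                    fscanf(sample, "%f", &matrix[i][j]);
            fclose(sample);

            /* Copy the 2x2 block whose top-left corner is (startRow, startCol). */
            for (int i = 0; i < 2; i++)
                for (int j = 0; j < 2; j++)
                    sub[i][j] = matrix[startRow - 1 + i][startCol - 1 + j];

            printf("%.2f %.2f\n%.2f %.2f\n", sub[0][0], sub[0][1], sub[1][0], sub[1][1]);
            return 0;
        }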

  • Getting a nicely formatted timestamp without lots of overhead?

    - by Brad Hein
    In my app I have a TextView which contains real-time messages from my app; as things happen, messages get printed to this text box. Each message is time-stamped with HH:MM:SS. Up to now I had also been chasing what seemed to be a memory leak, but as it turns out, it's just my time-stamp formatting method (see below) - it apparently produces thousands of objects that later get GC'd. At 1-10 messages per second, I was seeing 500 KB - 2 MB of garbage collected every second by the GC while this method was in place. After removing it, no more garbage problem (it's back to a nice interval of about 30 seconds, and only a few KB of junk typically). So I'm looking for a new, more lightweight method for producing an HH:MM:SS timestamp string :) Old code:

        /**
         * Returns a string containing the current time stamp.
         * @return - a string.
         */
        public static String currentTimeStamp() {
            String ret = "";
            Date d = new Date();
            SimpleDateFormat timeStampFormatter = new SimpleDateFormat("hh:mm:ss");
            ret = timeStampFormatter.format(d);
            return ret;
        }
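
    A hedged alternative (assuming, as is typical on Android, that this method is only ever called from the UI thread, since SimpleDateFormat is not thread-safe): allocate the formatter and the Date once and reuse them, so each call produces little more than the result String. Note that "HH" is 24-hour time; the original "hh" is 12-hour, despite the HH:MM:SS description:

        import java.text.SimpleDateFormat;
        import java.util.Date;

        public final class TimeStamp {
            // created once, reused on every call -- not thread-safe, UI thread only
            private static final SimpleDateFormat FORMATTER = new SimpleDateFormat("HH:mm:ss");
            private static final Date REUSABLE_DATE = new Date();

            public static String currentTimeStamp() {
                REUSABLE_DATE.setTime(System.currentTimeMillis());
                return FORMATTER.format(REUSABLE_DATE);
            }
        }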

  • Do COM Dll References Require Manual Disposal? If so, How?

    - by Drew
    I have written some code in VB that verifies that a particular port in the Windows Firewall is open, and opens one otherwise. The code uses references to three COM DLLs. I wrote a WindowsFirewall class, which Imports the primary namespace defined by the DLLs. Within members of the WindowsFirewall class I construct some of the types defined by the DLLs referenced. The following code isn't the entire class, but demonstrates what I am doing:

        Imports NetFwTypeLib

        Public Class WindowsFirewall
            Public Shared Function IsFirewallEnabled() As Boolean
                Dim icfMgr As INetFwMgr
                icfMgr = CType(System.Activator.CreateInstance( _
                    Type.GetTypeFromProgID("HNetCfg.FwMgr")), INetFwMgr)

                Dim profile As INetFwProfile
                profile = icfMgr.LocalPolicy.CurrentProfile

                Dim fIsFirewallEnabled As Boolean
                fIsFirewallEnabled = profile.FirewallEnabled

                Return fIsFirewallEnabled
            End Function
        End Class

    I do not reference COM DLLs very often. I have read that unmanaged code may not be cleaned up by the garbage collector, and I would like to know how to make sure that I have not introduced any memory leaks. Please tell me (a) whether I have introduced a memory leak, and (b) how I may clean it up. (My theory is that the icfMgr and profile objects do allocate memory that remains unreleased until after the application closes. I am hopeful that setting their references to Nothing will mark them for garbage collection, since I can find no other way to dispose of them. Neither one implements IDisposable, and neither contains a Finalize method. I suspect they may not even be relevant here, and that both of those methods of releasing memory only apply to .NET types.)
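
    For what it's worth, a hedged sketch: COM objects obtained through interop are wrapped in runtime callable wrappers, which are ordinary managed objects - the underlying COM reference is released when the wrapper is finalized, so this is normally delayed cleanup rather than a true leak. To release deterministically instead of waiting for finalization, Marshal.ReleaseComObject can be called when done (children before parents):

        Imports System.Runtime.InteropServices

        ' ...at the end of IsFirewallEnabled, after reading FirewallEnabled:
        Marshal.ReleaseComObject(profile)
        Marshal.ReleaseComObject(icfMgr)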

  • Silverlight WinDbg Memory Release Issue

    - by Chris Newton
    Hi, I have used WinDbg successfully on a number of occasions to track down and fix memory leaks (or, more accurately, the CLR's inability to garbage collect a released object), but am stuck with one particular control. The control is displayed within a child window, and when the window is closed a reference to the control remains and cannot be garbage collected. I have resolved what I believe to be the majority of the issues that could have caused the leak, but the !gcroot of the affected object is not clear (to me at least) as to what is still holding on to this object. The output is always the same regardless of the content being presented in the child window:

        DOMAIN(03FB7238):HANDLE(Pinned):79b12f8:Root: 06704260(System.Object[])->
          05719f00(System.Collections.Generic.Dictionary`2[[System.IntPtr, mscorlib],[System.Object, mscorlib]])->
          067c1310(System.Collections.Generic.Dictionary`2+Entry[[System.IntPtr, mscorlib],[System.Object, mscorlib]][])->
          064d42b0(System.Windows.Controls.Grid)->
          064d4314(System.Collections.Generic.Dictionary`2[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]])->
          064d4360(System.Collections.Generic.Dictionary`2+Entry[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]][])->
          064d3860(System.Windows.Controls.Border)->
          064d4218(System.Collections.Generic.Dictionary`2[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]])->
          064d4264(System.Collections.Generic.Dictionary`2+Entry[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]][])->
          064d3bfc(System.Windows.Controls.ContentPresenter)->
          064d3d64(System.Collections.Generic.Dictionary`2[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]])->
          064d3db0(System.Collections.Generic.Dictionary`2+Entry[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]][])->
          064d3dec(System.Collections.Generic.Dictionary`2[[System.UInt32, mscorlib],[System.Windows.DependencyObject, System.Windows]])->
          064d3e38(System.Collections.Generic.Dictionary`2+Entry[[System.UInt32, mscorlib],[System.Windows.DependencyObject, System.Windows]][])->
          06490b04(Insurer.Analytics.SharedResources.Controls.HistoricalKPIViewerControl)

    If anyone has any ideas about what could potentially be the problem, or if you require more information, please let me know. Kind Regards, Chris

  • Webserver sending corrupt or corrupting served files

    - by NotIan
    EDIT: Looks like the problem was a rootkit that corrupted a bunch of low-level Linux commands, including top, ps, ifconfig, netstat and others. The problem was resolved by taking all web files off the server and wiping it.

    A dedicated server we operate is having a strange issue. Files are not being sent complete, or are showing up with garbage data. Example: http://sustainablefitness.com/images/banner_bootcamps.jpg To make matters more confusing, this corruption does NOT happen when the files are served over https. (I would post a link, but I don't have enough rep points; just add an 's' after http in the link above.) When I throw load at the server, I get dozens of (swapd)s in top; this is the only thing that really jumps out. I can't post images, but ( imgur.com / ZArSq.png ) is a screenshot of top. I have tried a lot of stuff so far; I am willing to try anything that I can.

  • Entangled text boxes

    - by user38329
    Hi StackOverflow, a mere Windows textbox greatly surprised me today. I have two unrelated text boxes inside an application. I can type in either text box and switch the focus by clicking on them. Then happens some event X, which I can't describe here for reasons given below. After this event happens, the two text boxes become "entangled" in an almost quantum way. Say text box A was focused before X happened. When I click text box B to type in some text, the new text appears in text box A, whereas the blinking cursor happily moves along in text box B through the void, as if the text were there. No amount of clicking on either text box can resolve this. The cursor will always remain in B, whereas the text will always go to A. Message spying reveals that after the event X, the text boxes lose the ability to lose or gain focus. When I click on B, WM_KILLFOCUS does not come to A, and WM_SETFOCUS does not come to B. (The rectangles and visibility of the boxes are OK.) The same thing happens in Windows XP and Windows 7. Now, event X: it's a big event in a third-party UI library which I cannot reverse-engineer in a timely manner. (Namely, docking a pane in wxAUI.) I am sure that this behavior is the result of incorrect WinAPI calls to the text boxes (garbage in - garbage out). I would like to know what could possibly cause such a "textbox trip", to know where to start looking for the bug. Thanks!

  • Weird stuttering issues not related to GC.

    - by Smills
    I am getting some odd stuttering issues with my game even though my FPS never seems to drop below 30. About every 5 seconds my game stutters. I was originally getting stuttering every 1-2 seconds due to my garbage collection issues, but I have sorted those and will often go 15-20 seconds without a garbage collection. Despite this, my game still stutters periodically, even when there is no GC listed in logcat anywhere near the stutter. Even when I take out most of my code and reduce my "physics" to the code below, I get this weird slowdown issue. I feel that I am missing something or overlooking something. Shouldn't the "elapsed" code that I put in stop any variance in the speed of the main character related to changes in FPS? Any input/theories would be awesome. Physics:

        private void updatePhysics() {
            // get current time
            long now = System.currentTimeMillis();

            // added this to see if I could speed it up; it made no difference
            Thread myThread = Thread.currentThread();
            myThread.setPriority(Thread.MAX_PRIORITY);

            // work out elapsed time since last frame in seconds
            double elapsed = (now - mLastTime2) / 1000.0;
            mLastTime2 = now;

            // measures FPS and displays in logcat once every 30 frames
            fps += 1 / elapsed;
            fpscount += 1;
            if (fpscount == 30) {
                fps = fps / fpscount;
                Log.i("myActivity", "FPS: " + fps + " Touch: " + touch);
                fpscount = 0;
            }

            // this should make the main character (theoretically) move upwards at a steady pace
            mY -= 100 * elapsed;

            // increase the amount I translate the draw to = main character's Y
            // location if the main character goes upwards
            if (mY <= viewY) {
                viewY = mY;
            }
        }

  • BufferedImage.getGraphics() resulting in memory leak, is there a fix?

    - by user359202
    Hi friends, I'm having a problem with some framework API calling the BufferedImage.getGraphics() method and thus causing a memory leak. This method always calls BufferedImage.createGraphics(). On a Windows machine, createGraphics() is handled by Win32GraphicsEnvironment, which keeps a listeners list inside its displayChanger field. When I call getGraphics on my BufferedImage someChart, someChart's SurfaceManager (which retains a reference to someChart) is added to the listeners map in Win32GraphicsEnvironment, preventing someChart from being garbage collected. Nothing afterwards removes someChart's SurfaceManager from the listeners map. In general, the path stopping a BufferedImage from being garbage collected once getGraphics is called is:

        GC Root -> localGraphicsEnvironment(Win32GraphicsEnvironment)
                -> displayChanger(SunDisplayChanger)
                -> listeners(Map)
                -> key(D3DChachingSurfaceManager)
                -> bImg(BufferedImage)

    I could have changed the framework's code so that after every call to BufferedImage.getGraphics() I keep a reference to the BufferedImage's SurfaceManager, then get hold of localGraphicsEnvironment, cast it to Win32GraphicsEnvironment, and call removeDisplayChangedListener() using the reference to the BufferedImage's SurfaceManager. But I don't think this is a proper way to solve the problem. Could someone please help me with this issue? Thanks a lot!

  • Showing Loading screen during REST service request in Android app?

    - by sat
    Currently here is what I am following: as soon as my app is launched, I have to send a request to a REST service. It will take a little time, so I thought of showing a loading screen. In onCreate() of my Activity, the first thing will be to show the loading screen (a ProgressDialog), and I kick off the background work using AsyncTask - i.e. requesting the REST service - and in onPostExecute() I close the dialog, then call setContentView(myxml) and update the UI. Can this approach be improved?

    The problem I faced was: sometimes the garbage collector may start (for various reasons) and my app hangs at the loading screen forever. Because of the garbage collector, even the request to the REST service is not sent; then some wake-up call comes, the rest is disaster, and Force Close follows. But sometimes even the Force Close does not come quickly, maybe because of the GC, so I cannot even go back and am stuck on the loading screen. The only thing I can do at that point is go back HOME. After that, if I come back to my app it's still loading, so this approach definitely seems to be a bad design. What's the right approach?
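
    For reference, a hedged sketch of the pattern described above (MyActivity, fetchRest() and the URL are placeholders, not real names from the question):

        import android.app.ProgressDialog;
        import android.os.AsyncTask;

        private class FetchTask extends AsyncTask<String, Void, String> {
            private ProgressDialog dialog;

            @Override
            protected void onPreExecute() {
                // runs on the UI thread, before the background work starts
                dialog = ProgressDialog.show(MyActivity.this, "", "Loading...", true);
            }

            @Override
            protected String doInBackground(String... urls) {
                return fetchRest(urls[0]);   // hypothetical helper doing the HTTP GET
            }

            @Override
            protected void onPostExecute(String result) {
                // back on the UI thread: dismiss the dialog, then build the real UI
                dialog.dismiss();
                setContentView(R.layout.main);
                // ...bind result to the views...
            }
        }

        // in onCreate(): new FetchTask().execute("http://example.com/api/resource");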

  • How come the Actionscript 3 ENTER_FRAME event is crazy nuts?

    - by nstory
    So, I've been toying around with Flash, browsing through the documentation, and all that, and noticed that the ENTER_FRAME event seems to defy my expectation of a deterministic universe. Take the following example:

        (new MovieClip()).addEventListener(Event.ENTER_FRAME, function(ev) { trace("Test"); });

    Notice this anonymous MovieClip is not added to the display hierarchy, and any reference to it is immediately lost. It will actually print "Test" once a frame until it is garbage collected. How insane is that? The behavior of this is actually determined by when the garbage collector feels like coming around in all its unpredictable insanity! Is there a better way to create intermittent failures? Seriously. My two theories are that either the DisplayObject class stores weak references to all its instances for the purpose of dispatching ENTER_FRAME events, or - and much wilder - the Flash player actually scans the heap each frame looking for ENTER_FRAME listeners to pull on. Can any hardened ActionScript developer clue me in on how this works? (And maybe a why-the-f**k they thought this was a good idea?)
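
    For what it's worth, a hedged sketch of the usual defensive pattern: registering with useWeakReference = true (the fifth addEventListener argument) keeps the listener from extending anything's lifetime, and removing the listener explicitly makes the behavior deterministic instead of GC-dependent:

        var clip:MovieClip = new MovieClip();
        // type, listener, useCapture, priority, useWeakReference
        clip.addEventListener(Event.ENTER_FRAME, onFrame, false, 0, true);

        function onFrame(ev:Event):void {
            trace("Test");
            // stop deterministically instead of waiting for the collector:
            ev.currentTarget.removeEventListener(Event.ENTER_FRAME, onFrame);
        }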

  • Freeing ImageData when deleting a Canvas

    - by user578770
    I'm writing a XUL application using an HTML Canvas to display bitmap images. I'm generating ImageDatas and importing them into a canvas using the putImageData function:

        for (var pageIndex = 0; pageIndex < 100; pageIndex++) {
            this.img = imageDatas[pageIndex];

            /* Create the Canvas element */
            var imgCanvasTmp = document.createElementNS("http://www.w3.org/1999/xhtml", 'html:canvas');
            imgCanvasTmp.setAttribute('width', this.img.width);
            imgCanvasTmp.setAttribute('height', this.img.height);

            /* Import the image into the Canvas */
            imgCanvasTmp.getContext('2d').putImageData(this.img, 0, 0);

            /* Use the Canvas in another part of the program (commented out for testing) */
            // this.displayCanvas(imgCanvasTmp, pageIndex);
        }

    The images are imported fine, but there seems to be a memory leak due to the putImageData function. When exiting the "for" loop, I would expect the memory allocated for the canvas to be freed, but by executing the code without calling putImageData, I noticed that my program uses 100 MB less at the end (my images are quite big). I came to the conclusion that the putImageData function prevents the garbage collector from freeing the allocated memory. Do you have any idea how I could force the garbage collector to free the memory? Is there any way to empty the canvas? I already tried to delete the canvas using the delete operator and to use the clearRect function, but they did nothing. I also tried to reuse the same canvas to display the image at each iteration, but the amount of memory used did not change, as if the images were imported without deleting the existing ones...
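
    A hedged teardown sketch (whether this actually returns memory depends on the Gecko/XULRunner version, so treat it as something to try rather than a guaranteed fix): shrink the canvas backing store, detach it, and drop every reference that can still reach the pixel data:

        imgCanvasTmp.getContext('2d').clearRect(0, 0, imgCanvasTmp.width, imgCanvasTmp.height);
        imgCanvasTmp.width = 1;      // resizing discards the old backing store
        imgCanvasTmp.height = 1;
        if (imgCanvasTmp.parentNode) {
            imgCanvasTmp.parentNode.removeChild(imgCanvasTmp);
        }
        imgCanvasTmp = null;         // drop the canvas reference
        this.img = null;             // and the ImageData reference as well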

  • .NET memory leak?

    - by SA
    I have an MDI form which has a child form. The child form has a DataGridView in it. I load a huge amount of data into the DataGridView. When I close the child form, the disposing method is called, in which I dispose of the DataGridView:

        this.dataGrid.Dispose();
        this.dataGrid = null;

    When I close the form, the memory doesn't go down. I use the .NET Memory Profiler to track the memory usage. I see that the memory usage goes high when I initially load the data grid (as expected) and then becomes constant when the loading is complete. When I close the form it still remains constant. However, when I take a snapshot of the memory using the profiler, it goes down to what it was before loading the file. Taking a memory snapshot causes it to forcefully run the garbage collector. What is going on? Is there a memory leak? Or do I need to run the garbage collector forcefully? More information: when I am closing the form I no longer need the information; that is why I am not holding a reference to the data.
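
    As a diagnostic (hedged: this is for confirming the theory, not recommended as production code), you can do by hand what the profiler's snapshot does - force a full collection and see whether the memory drops. If it does, nothing is leaking; the GC simply had no allocation pressure to run:

        // run after closing the child form, purely to test the hypothesis
        GC.Collect();
        GC.WaitForPendingFinalizers();   // let finalizable objects release their resources
        GC.Collect();                    // reclaim what those finalizers freed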

  • Which keymap to use for wired mac keyboard in Gentoo Linux?

    - by Absolute0
    I just purchased the new wired Mac keyboard. Running on Gentoo Linux it works mostly fine. The only problems I am having are the function keys and swapping the Alt and Command keys to resemble a regular PC keyboard. When I tried switching to the "mac-us" keymap in /etc/conf.d/keymaps I got garbage when typing (not even QWERTY). Is there any specific keymap that I can use to get what I want?
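
    One hedged suggestion (the option values below are examples, and swap_opt_cmd requires a reasonably recent kernel's hid_apple driver): keep the plain "us" console keymap and adjust the Apple-specific behavior at the driver level instead:

        # /etc/conf.d/keymaps -- the plain PC keymap; "mac-us" targets older ADB-era layouts
        keymap="us"

        # /etc/modprobe.d/hid_apple.conf
        options hid_apple fnmode=2        # F1-F12 act as function keys; media keys need Fn
        options hid_apple swap_opt_cmd=1  # swap Option/Command to match PC Alt/Super placement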

  • Has anyone gotten Plan9 working in VirtualBox?

    - by Electrons_Ahoy
    More than anything, I'm just curious to know if this is even possible, since Plan 9 isn't in the list of guest OSes on the VirtualBox website. However, if someone out there has got it working, my specific question: whenever I try to boot Plan 9, either as a live CD or to install inside of VirtualBox, once the GUI loads all I get is a screen of garbage. I think I've tried just about every graphics driver combination at startup - specifically, even 640x480x8 in VESA mode didn't work. Any suggestions as to how I can load it?
