Search Results

Search found 9992 results on 400 pages for 'space efficiency'.

Page 18/400

  • RAIDs with a lot of spindles - how to safely put the "wasted" space to use

    - by kubanczyk
    I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to sustain peak I/O performance, and tons of unused disk space. I guess it's a universal issue, since the smallest drives vendors offer are 300 GB, but random I/O performance hasn't really grown much since the days when the smallest drives were 36 GB. One example is a database that occupies 300 GB and needs random performance of 3200 IOPS, so it gets 16 disks (4800 GB minus 300 GB leaves 4.5 TB of wasted space). Another common example is the redo logs for an OLTP database that is sensitive to response time. The redo logs get their own 300 GB mirror, but take 30 GB: 270 GB wasted. What I would like to see is a systematic approach for both Linux and Windows environments. How do I set up the space so the sysadmin team is reminded about the risk of hindering the performance of the main db/app? Or, even better, is protected from that risk? The typical situation that comes to mind is "oh, I have this very large zip file, where do I uncompress it? Umm, let's look at df -h and we'll figure something out in no time..." I don't put emphasis on the strictness of the security (sysadmins are trusted to act in good faith), but on the overall simplicity of the approach. For Linux, it would be great to have a filesystem customized to cap the I/O rate at a very low level - is this possible?

    Read the article

  • where is my disk space?

    - by user166241
    I recently had a problem with the .xsession-errors file - it became very big (90GB) and took up all the disk space: How can I check what takes disk space in /tmp? I cleaned it with the command > .xsession-errors, but after an hour it became large again. So I deleted it (rm .xsession-errors) - that helped in that it wasn't recreated, but again after an hour the disk space disappeared - now there is no .xsession-errors anymore, but I don't know where the space is going: df Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda1 106640456 101223392 4 100% / udev 8166744 8 8166736 1% /dev tmpfs 3270224 972 3269252 1% /run none 5120 0 5120 0% /run/lock none 8175552 152 8175400 1% /run/shm du -sc * .[^.]* | sort -n 0 initrd.img 0 initrd.img.old 0 proc 0 sys 0 vmlinuz 0 vmlinuz.old 4 cdrom 4 lib64 4 media 4 mnt 4 selinux 8 dev 12 srv 16 lost+found 68 tmp 1124 run 3396 lib32 5164 .rpmdb 5540 root 8888 sbin 9120 bin 17132 etc 106080 opt 116956 boot 861908 lib 3530584 usr 3821836 var 13371260 home 21859112 total So there is around 100GB used, but executing du -sc * .[^.]* | sort -n in the root directory finds only ~21 GB - so what takes up the other 80GB? How can I check? I suspect that when I deleted the .xsession-errors file the errors were redirected somewhere else - but where?

    Read the article

  • DVD/CD burning .zip: is it more reliable, faster, longer lasting to burn a zip of files rather than the files as a folder?

    - by Rob
    Is it more reliable, faster, and longer lasting to burn to CD/DVD a zip (or a few large zips) of files rather than the files as a folder? I'm just thinking that 1000s of small files might not be recorded as efficiently as one or a few large zips. Also, even after the burning program verifies the disc, I use Beyond Compare to compare the files with those on the disc. They always binary-compare as identical, but I hear the drive stuttering, presumably as the head is shifted only slightly each time to seek the next file, which leads me to think that it's best to make one or more zips and copy those locally to compare. Or is it that individual files burned to the disc are not as readable, which causes the head to stutter? There aren't any problems - my disc burns are reliable - I'm just thinking more of efficiency and longevity; the discs burn and verify fast enough on my 18x DVD burner. I'm using ImgBurn mostly. I've also used Nero in the past. I burn whole discs, closed and finalised. Not sure which write mode, but I would think Disc At Once from a temporary cached image made by the burning program would be the most reliable.

    Read the article

  • How to clear "insufficient space on disk where the following directory resides" in TeamCity

    - by sam
    I am getting the following message: Warning: insufficient space on disk where the following directory resides: Z:\TeamCity\.BuildServer\system. Disk space available: 915.53Mb. Please contact your system administrator. I have already executed the build history cleanup command, but this has not done much. Can you please advise which directory under this path I can clear to free up space on disk? The Z:\TeamCity\.BuildServer\system path has artifacts, caches, changes, and messages directories. Which directory can I delete to make space? Many thanks

    Read the article

  • Improve XPath efficiency for repeated, parameterized queries

    - by Chris Allan
    Hi, I am repeatedly performing the following XPath query (though parameterized by 'keywordText') around 40,000 times: String query = SystemGlobal.YAHOO_KEYWORDSSUBNODE + "/" + SystemGlobal.YAHOO_KEYWORDNODE + "[" + SystemGlobal.YAHOO_ATTRKEYPHRASE + "='" + keywordText + "']"; CachedXPathAPI cachedXPathAPI = new CachedXPathAPI(); NodeIterator nl = cachedXPathAPI.selectNodeIterator(doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDSROOT).item(0), query); Node n; if ((n = nl.nextNode()) != null) { keyword.setKeywordId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRKEYID).getTextContent())); keyword.setKeyPhrase(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRKEYPHRASE).getTextContent()); keyword.setStatus(mapStatus(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRSTATUS).getTextContent())); keyword.setCampaignId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, "../../" + SystemGlobal.YAHOO_ATTRCAMPAIGNID).getTextContent())); keyword.setAdGroupId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, "../" + SystemGlobal.YAHOO_ATTRADGROUPID).getTextContent())); On the first run of the script, all 40,000 runs of this piece of code will have nl.nextNode() == null, and everything runs quite quickly. However, on the following runs, when nl.nextNode() != null, things slow down a lot - this takes around an additional 40 min to run (whereas the first run takes maybe 1 minute). Oh, and the doc is constructed like so: InputSource in = new InputSource(new FileInputStream(filename)); DocumentBuilderFactory dfactory = DocumentBuilderFactory.newInstance(); dfactory.setNamespaceAware(true); doc = dfactory.newDocumentBuilder().parse(in); I tried including the following lines: reportEvaluator = new XPathEvaluatorImpl(reportDoc); reportResolver = reportEvaluator.createNSResolver(reportDoc); and, rather than creating a NodeIterator, instead creating an XPathResult: XPathResult result = (XPathResult)reportEvaluator.evaluate(query, doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDSROOT).item(0), reportResolver, XPathResult.UNORDERED_NODE_ITERATOR_TYPE, null); however this ran even slower. Is there a way in which I can speed up the running of this script? I have seen references to precompiled queries, though I haven't seen many actual details. Also, as seen in the code, I am using CachedXPathAPI, though the benefit for this case is not so great. Any help is much appreciated! Chris Allan
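
    The precompiled-query idea mentioned above can be sketched with the standard javax.xml.xpath API instead of CachedXPathAPI. This is only an illustrative sketch: the file name and the element/attribute names are hypothetical stand-ins for the SystemGlobal constants, and a per-keyword parameter would still need an XPathVariableResolver (or post-filtering of the selected nodes in Java) rather than string concatenation:

        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathExpression;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.NodeList;

        public class PrecompiledXPathSketch {
            public static void main(String[] args) throws Exception {
                // Parse the report once, as in the question.
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true);
                Document doc = dbf.newDocumentBuilder().parse("report.xml"); // hypothetical file name

                // Compile the expression once, outside the 40,000-iteration loop.
                // "keywords/keyword" and "keyPhrase" stand in for the SystemGlobal constants.
                XPath xpath = XPathFactory.newInstance().newXPath();
                XPathExpression byPhrase =
                        xpath.compile("//keywords/keyword[@keyPhrase='some phrase']");

                // Reuse the compiled expression for every lookup.
                NodeList matches = (NodeList) byPhrase.evaluate(doc, XPathConstants.NODESET);
                System.out.println("matched nodes: " + matches.getLength());
            }
        }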

    Read the article

  • Windows XP SP3 on Macbook Has Limited Disk Space

    - by Mikey.B
    Hi Guys, I set up Windows XP SP3 on a 40 GB partition using Boot Camp (partition formatted as NTFS). For some reason, the hard drive properties only show ~3 GB of space available. The funny thing is, though, I've hardly installed anything on the system... and if I open Windows Explorer, select all directories, and check properties, it appears that I'm only using 11 GB of space. What gives? I thought there might be hidden files/folders, so I enabled the option to show them, but still nada. Has anyone come across this before? Any suggestions for how to proceed here?

    Read the article

  • T/SQL Efficiency and Order of Execution

    - by Kyle Rozendo
    Hi All, Regarding the order of execution of statements in SQL, is there any difference between the following, performance-wise? SELECT * FROM Persons WHERE UserType = 'Manager' AND LastName IN ('Hansen','Pettersen') And: SELECT * FROM Persons WHERE LastName IN ('Hansen','Pettersen') AND UserType = 'Manager' If there is any difference, is there perhaps a link you could share where one can learn more about this? Thanks a ton, Kyle

    Read the article

  • F# currying efficiency?

    - by Eamon Nerbonne
    I have a function that looks as follows: let isInSet setElems normalize p = normalize p |> (Set.ofList setElems).Contains This function can be used to quickly check whether an element is semantically part of some set; for example, to check if a file path belongs to an html file: let getLowerExtension p = (Path.GetExtension p).ToLowerInvariant() let isHtmlPath = isInSet [".htm"; ".html"; ".xhtml"] getLowerExtension However, when I use a function such as the above, performance is poor since evaluation of the function body as written in isInSet seems to be delayed until all parameters are known - in particular, invariant bits such as (Set.ofList setElems).Contains are re-evaluated on each execution of isHtmlPath. How can I best maintain F#'s succinct, readable nature while still getting the more efficient behavior in which the set construction is pre-evaluated? The above is just an example; I'm looking for a general pattern that avoids bogging me down in implementation details - where possible I'd like to avoid being distracted by details such as the implementation's execution order, since that's usually not important to me and kind of undermines a major selling point of functional programming.
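
    The general pattern being asked for - evaluate the invariant part once, then return a function that only does the cheap per-call work - is language-agnostic; here is a rough sketch of the same shape expressed in Java with java.util.function, purely as an illustration (the extension-extracting lambda is a crude stand-in for getLowerExtension and assumes the path contains a dot):

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;
        import java.util.function.Function;
        import java.util.function.Predicate;

        public class IsInSetSketch {
            // Build the set once, up front; the returned predicate only normalizes and looks up.
            static Predicate<String> isInSet(Iterable<String> elems, Function<String, String> normalize) {
                Set<String> set = new HashSet<>();
                elems.forEach(set::add); // evaluated once, not on every call
                return p -> set.contains(normalize.apply(p));
            }

            public static void main(String[] args) {
                Predicate<String> isHtmlPath = isInSet(
                        Arrays.asList(".htm", ".html", ".xhtml"),
                        p -> p.substring(p.lastIndexOf('.')).toLowerCase());
                System.out.println(isHtmlPath.test("index.HTML")); // true
            }
        }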

    Read the article

  • Stack , data and address space limits on an Ubuntu server

    - by PaulDaviesC
    I am running an Ubuntu server which has around 5000 users. The users are allowed to SSH into the system. So, in order to cap the memory used up by a process, I have capped the address space limits using limits.conf. My question is: should I be limiting the data and stack as well? I feel that is not required since I am capping the address space. Are there any pitfalls if I do not cap the stack and data limits?

    Read the article

  • Can't start my Windows XP Virtual Machine: Insufficient Disk Space

    - by Rob
    Okay, I currently have a server with two virtual machines installed on it, a CentOS 5.4 and a Windows XP. I was remote desktopping into the Windows XP machine, chatting on IRC, and all of a sudden I lost the connection. I checked with my hypervisor and tried to restart it, and it won't start at all. It's giving me this error: Message from server0297.serverpool.gnet.ba: Failed to extend swap file (fileHandle 16414) from 0 KB to 524288 KB: No space left on device. Could not power on VM : No space left on device. Failed to power on VM info 4/17/2010 9:49:20 PM root Basically I bought the setup from a host, he installed the hypervisor and the virtual machines, and honestly I don't really know what I'm doing. I've looked at some of the settings, and I can't figure it out. If you need any additional information, I'll try to provide it. The CentOS 5.4 VM is still starting and working flawlessly, if that's relevant.

    Read the article

  • Increase efficiency of a loop with jQuery

    - by Pez Cuckow
    I have a game coded in jQuery where bots are moved around the screen. The below code is a loop that runs every 20ms, currently if you have over 15 bots you start to notice the browser lagging (simply because of all the advanced collision detection going on). Is there any way to reduce the lag, can I make it any more efficient? P.s. sorrry for just posting a block of code, I can't see a way to make my point clear enough without! $.playground().registerCallback(function(){ //Movement Loop if(!pause) { for (var i in bots) { //bots - color, dir, x, y, z, spawned?, spawnerid, prevd var self = $('#b' + i); var current = bots[i]; if(bots[i][5]==1) { var xspeed = 0, yspeed = 0; if(current[1]==0) { yspeed = -D_SPEED; } else if(current[1]==1) { xspeed = D_SPEED; } else if(current[1]==2) { yspeed = D_SPEED; } else if(current[1]==3) { xspeed = -D_SPEED; } var x = current[2] + xspeed; var y = current[3] + yspeed; var z = current[3] + 120; if(current[2]>0&&x>PLAYGROUND_WIDTH||current[2]<0&&x<-GRID_SIZE|| current[3]>0&&y>PLAYGROUND_HEIGHT||current[3]<0&&y<-GRID_SIZE) { remove_bot(i, self); } else { if(current[7]!=current[1]) { self.setAnimation(colors[current[0]][current[1]]); bots[i][7] = current[1]; } if(self.css({"left": ""+(x)+"px", "top": ""+(y)+"px", "z-index": z})) { bots[i][2] = x; bots[i][3] = y; bots[i][4] = z; bots[i][8]++; } } } } $("#debug").html(dump(arrows)); $(".bot").each(function(){ var b_id = $(this).attr("id").substr(1); var collision = false; var c_bot = bots[b_id]; var b_x = c_bot[2]; var b_y = c_bot[3]; var b_d = c_bot[1]; $(this).collision(".arrow,#arrows").each(function(){ //Many thanks to Selim Arsever for this fix! var a_id = $(this).attr("id").substr(1); var piece = arrows[a_id]; var a_v = piece[0]; if(a_v==1) { var a_x = piece[2]; var a_y = piece[3]; var d_x = b_x-a_x; var d_y = b_y-a_y; if(d_x>=4&&d_x<=5&&d_y>=1&&d_y<=2) { //bots - color, dir, x, y, z, spawned?, spawnerid, prevd bots[b_id][7] = c_bot[1]; bots[b_id][1] = piece[1]; collision = true; } } }); if(!collision) { $(this).collision(".wall,#level").each(function(){ var w_id = $(this).attr("id").substr(1); var piece = pieces[w_id]; var w_x = piece[1]; var w_y = piece[2]; d_x = b_x-w_x; d_y = b_y-w_y; if(b_d==0&&d_x>=4&&d_x<=5&&d_y>=27&&d_y<=28) { kill_bot(b_id); collision = true; } //4 // 33 if(b_d==1&&d_x>=-12&&d_x<=-11&&d_y>=21&&d_y<=22) { kill_bot(b_id); collision = true; } //-14 // 21 if(b_d==2&&d_x>=4&&d_x<=5&&d_y>=-9&&d_y<=-8) { kill_bot(b_id); collision = true; } //4 // -9 if(b_d==3&&d_x>=22&&d_x<=23&&d_y>=20&&d_y<=21) { kill_bot(b_id); collision = true; } //22 // 21 }); } if(!collision&&c_bot[8]>GRID_MOVE) { $(this).collision(".spawn,#level").each(function(){ var s_id = $(this).attr("id").substr(1); var piece = pieces[s_id]; var s_x = piece[1]; var s_y = piece[2]; d_x = b_x-s_x; d_y = b_y-s_y; if(b_d==0&&d_x>=4&&d_x<=5&&d_y>=19&&d_y<=20) { kill_bot(b_id); collision = true; } //4 // 33 if(b_d==1&&d_x>=-14&&d_x<=-13&&d_y>=11&&d_y<=12) { kill_bot(b_id); collision = true; } //-14 // 21 if(b_d==2&&d_x>=4&&d_x<=5&&d_y>=-11&&d_y<=-10) { kill_bot(b_id); collision = true; } //4 // -9 if(b_d==3&&d_x>=22&&d_x<=23&&d_y>=11&&d_y<=12) { kill_bot(b_id); collision = true; } //22 // 21*/ }); } if(!collision) { $(this).collision(".exit,#level").each(function(){ var e_id = $(this).attr("id").substr(1); var piece = pieces[e_id]; var e_x = piece[1]; var e_y = piece[2]; d_x = b_x-e_x; d_y = b_y-e_y; if(d_x>=4&&d_x<=5&&d_y>=1&&d_y<=2) { current_bots++; bots[b_id] = false; $("#current_bots").html(current_bots); $("#b" + 
b_id).setAnimation(exit[2], function(node){$(node).fadeOut(200)}); } }); } if(!collision) { $(this).collision(".bot,#level").each(function(){ var bd_id = $(this).attr("id").substr(1); if(bd_id!=b_id) { var piece = bots[bd_id]; var bd_x = piece[2]; var bd_y = piece[3]; d_x = b_x-bd_x; d_y = b_y-bd_y; if(d_x>=0&&d_x<=2&&d_y>=0&&d_y<=2) { kill_bot(b_id); kill_bot(bd_id); collision = true; } } }); } }); } }, REFRESH_RATE); Many thanks,

    Read the article

  • Efficiency of Java "Double Brace Initialization"?

    - by Jim Ferrans
    In Hidden Features of Java the top answer mentions Double Brace Initialization, with a very enticing syntax: Set<String> flavors = new HashSet<String>() {{ add("vanilla"); add("strawberry"); add("chocolate"); add("butter pecan"); }}; This idiom creates an anonymous inner class with just an instance initializer in it, which "can use any [...] methods in the containing scope". Main question: Is this as inefficient as it sounds? Should its use be limited to one-off initializations? (And of course showing off!) Second question: The new HashSet must be the "this" used in the instance initializer ... can anyone shed light on the mechanism? Third question: Is this idiom too obscure to use in production code? Summary: Very, very nice answers, thanks everyone. On question (3), people felt the syntax should be clear (though I'd recommend an occasional comment, especially if your code will pass on to developers who may not be familiar with it). On question (1), the generated code should run quickly. The extra .class files do cause jar file clutter, and slow program startup slightly (thanks to coobird for measuring that). Thilo pointed out that garbage collection can be affected, and the memory cost for the extra loaded classes may be a factor in some cases. Question (2) turned out to be most interesting to me. If I understand the answers, what's happening in DBI is that the anonymous inner class extends the class of the object being constructed by the new operator, and hence has a "this" value referencing the instance being constructed. Very neat. Overall, DBI strikes me as something of an intellectual curiosity. Coobird and others point out you can achieve the same effect with Arrays.asList, varargs methods, Google Collections, and the proposed Java 7 Collection literals. Newer JVM languages like Scala, JRuby, and Groovy also offer concise notations for list construction, and interoperate well with Java. Given that DBI clutters up the classpath, slows down class loading a bit, and makes the code a tad more obscure, I'd probably shy away from it. However, I plan to spring this on a friend who's just gotten his SCJP and loves good-natured jousts about Java semantics! ;-) Thanks everyone!
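
    For comparison, the Arrays.asList alternative mentioned in the summary looks roughly like this (a sketch, not a drop-in replacement for every use of the idiom); it avoids creating the extra anonymous class entirely:

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;

        public class FlavorsInit {
            public static void main(String[] args) {
                // Same contents as the double-brace example, without generating an extra .class file.
                Set<String> flavors = new HashSet<String>(
                        Arrays.asList("vanilla", "strawberry", "chocolate", "butter pecan"));
                System.out.println(flavors.contains("vanilla")); // true
            }
        }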

    Read the article

  • NSXMLParser Memory Allocation Efficiency for the iPhone

    - by Staros
    Hello, I've recently been playing with code for an iPhone app to parse XML. Sticking to Cocoa, I decided to go with the NSXMLParser class. The app will be responsible for parsing 10,000+ "computers", all of which contain 6 other strings of information. For my test, I've verified that the XML is around 900k-1MB in size. My data model is to keep each computer in an NSDictionary hashed by a unique identifier. Each computer is also represented by an NSDictionary with the information. So at the end of the day, I end up with an NSDictionary containing 10k other NSDictionaries. The problem I'm running into isn't about leaking memory or efficient data structure storage. When my parser is done, the total amount of allocated objects only goes up by about 1MB. The problem is that while the NSXMLParser is running, my object allocation is jumping up by as much as 13MB. I could understand 2 (one for the object I'm creating and one for the raw NSData) plus a little room to work, but 13 seems a bit high. I can't imagine that NSXMLParser is that inefficient. Thoughts? Code... The code to start parsing... NSXMLParser *parser = [[NSXMLParser alloc] initWithData: data]; [parser setDelegate:dictParser]; [parser parse]; output = [[dictParser returnDictionary] retain]; [parser release]; [dictParser release]; And the parser's delegate code... -(void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qualifiedName attributes:(NSDictionary *)attributeDict { if(mutableString) { [mutableString release]; mutableString = nil; } mutableString = [[NSMutableString alloc] init]; } -(void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string { if(self.mutableString) { [self.mutableString appendString:string]; } } -(void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName { if([elementName isEqualToString:@"size"]){ //The initial key, tells me how many computers returnDictionary = [[NSMutableDictionary alloc] initWithCapacity:[mutableString intValue]]; } if([elementName isEqualToString:hashBy]){ //The unique identifier if(mutableDictionary){ [mutableDictionary release]; mutableDictionary = nil; } mutableDictionary = [[NSMutableDictionary alloc] initWithCapacity:6]; [returnDictionary setObject:[NSDictionary dictionaryWithDictionary:mutableDictionary] forKey:[NSMutableString stringWithString:mutableString]]; } if([fields containsObject:elementName]){ //Any of the elements from a single computer that I am looking for [mutableDictionary setObject:mutableString forKey:elementName]; } } Everything is initialized and released correctly. Again, I'm not getting errors or leaking - it's just inefficient. Thanks for any thoughts!

    Read the article

  • Disk space consumed

    - by aravind-zoniac
    I have a very serious problem on one of my client's servers. The remote server runs Red Hat ES 5.2 and we have PostgreSQL as the database. I was trying to clone the database. The hard drive had 32 GB of free space before taking the clone. I started cloning the database and, during the process, there was an internet issue; because of this, PuTTY got disconnected before the clone finished. So I opened a fresh session and was able to see only 2.5GB of available space. Also, I was not able to see the clone in the psql terminal. Any solution to get back the 29GB that was consumed?

    Read the article

  • Missing over 100GB of Space on sda1 RHEL

    - by WifiGhost
    I have a server set up with a RAID 5 using (3) 500GB drives, with 1 as a spare so it is unused in the RAID. So in my mind I start out with 990GB with the RAID 5 in place. When looking at df or the built-in disk space utility I only see a total of about 882GB; how can I find where the 100+GB went? How can I get it back? I've checked the RAID 5 BIOS and I see all the space. I've tried looking manually and through terminal commands with no luck. Filesystem - 1K-blocks - Used Available - Use% - Mounted on /dev/mapper/vg_web-lv_root 838084192 48368700 747153060 7% / tmpfs 12104644 592 12104052 1% /dev/shm /dev/sda1 495844 121546 348698 26% /boot /dev/mapper/vg_web-lv_home 82569904 259136 78116468 1% /home Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_web-lv_root 800G 47G 713G 7% / tmpfs 12G 592K 12G 1% /dev/shm /dev/sda1 485M 119M 341M 26% /boot /dev/mapper/vg_web-lv_home 79G 254M 75G 1% /home

    Read the article

  • Efficiency of nested loops

    - by didxga
    See the following snippet: //first nested loops for(int i=0;i<10;i++) { for(int j=1;j<1000000;j++) { //do some stuff } } //second nested loops for(int i=0;i<1000000;i++) { for(int j=1;j<10;j++) { //do some stuff } } I am wondering why the first nested loop runs slower than the second one. Regards!
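
    A rough way to investigate is to time both shapes directly. The sketch below assumes a trivial loop body and ignores JIT warm-up, so the numbers are only indicative; the point is simply to measure rather than guess.

        public class NestedLoopTiming {
            public static void main(String[] args) {
                long sink = 0; // keep the work observable so it is not optimized away

                long t0 = System.nanoTime();
                for (int i = 0; i < 10; i++) {          // first shape: few outer, many inner iterations
                    for (int j = 1; j < 1000000; j++) {
                        sink += j;                      // stand-in for "do some stuff"
                    }
                }
                long t1 = System.nanoTime();
                for (int i = 0; i < 1000000; i++) {     // second shape: many outer, few inner iterations
                    for (int j = 1; j < 10; j++) {
                        sink += j;
                    }
                }
                long t2 = System.nanoTime();

                System.out.println("outer=10, inner=1000000: " + (t1 - t0) / 1000000 + " ms");
                System.out.println("outer=1000000, inner=10: " + (t2 - t1) / 1000000 + " ms");
                System.out.println(sink);
            }
        }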

    Read the article

  • Out of disk space on 4GB partition yet it's only using 2GB

    - by Camsoft
    I'm running Ubuntu and have had a problem where the root partition has run out of disk space. When I run df -h I get the following: Filesystem Size Used Avail Use% Mounted on /dev/sda6 4.6G 4.5G 0 100% / Yet there are only 2GB of files actually using up this partition. I then ran df -i and I get the following: Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda6 305824 118885 186939 39% / I have no idea what the -i flag does, but it clearly shows that only 39% is used. Can anyone explain where my disk space has gone?

    Read the article

  • Mouse-usable alt-tab for windows 7

    - by Florian Peschka
    I'm using Windows 7 with ye good olde Windows 2000 style (I favor usability and space over fancy-pants Aero and effects) and am looking for a way to make the Alt-Tab switcher usable with my mouse. I generally have many, many windows open, and pressing Tab or Shift-Tab over and over again begins to really frustrate me when I want to reach a certain program. Of course, I could just select the program on my taskbar, but that takes even longer. I really like the way Mac OS X does its Alt-Tab menu - more or less exactly the same, but I can select the program with my mouse. Is there something like that for my environment?

    Read the article

  • calloc vs. malloc and time efficiency

    - by yCalleecharan
    Hi, I've read with interest the post "c difference between malloc and calloc". I'm using malloc in my code and would like to know what difference I'll see if I use calloc instead. My present (pseudo)code with malloc: Scenario 1 int main() { allocate large arrays with malloc INITIALIZE ALL ARRAY ELEMENTS TO ZERO for loop //say 1000 times do something and write results to arrays end for loop FREE ARRAYS with free command } //end main If I use calloc instead of malloc, then I'll have: Scenario 2 int main() { for loop //say 1000 times ALLOCATION OF ARRAYS WITH CALLOC do something and write results to arrays FREE ARRAYS with free command end for loop } //end main I have three questions: Which of the scenarios is more efficient if the arrays are very large? Which of the scenarios will be more time efficient if the arrays are very large? In both scenarios, I'm just writing to arrays in the sense that for any given iteration in the for loop, I'm writing each array sequentially from the first element to the last element. The important question: If I'm using malloc as in scenario 1, then is it necessary that I initialize the elements to zero? Say with malloc I have array z = [garbage1, garbage2, garbage3]. For each iteration, I'm writing elements sequentially, i.e. in the first iteration I get z = [some_result, garbage2, garbage3], in the second iteration I get z = [some_result, another_result, garbage3], and so on. Do I then specifically need to initialize my arrays after malloc?

    Read the article

  • SQL Server 2005: reclaiming LOB space

    - by AndrewD
    Hello all, I've got an interesting table in one of my DBs that's confusing me. The table in question has a few LOB-type columns (two nvarchar(max) and a text) and it looks like there are some strange space issues going on. From this query: SELECT type_desc, SUM(total_pages) *8 [Size in kb] FROM sys.partitions p JOIN sys.allocation_units a ON p.partition_id = a.container_id WHERE p.object_id = OBJECT_ID('asyncoperationbase') GROUP BY type_desc; I get: type_desc Size in kb IN_ROW_DATA 27936 LOB_DATA 1198144 ROW_OVERFLOW_DATA 0 (there are just under 8000 rows in the table, and each row has a data length of ~10k - not counting the LOB data) Here's where it gets somewhat interesting: SELECT ( SUM(DATALENGTH(aob.WorkflowState)) + SUM(DATALENGTH(aob.[Message]))+ SUM(DATALENGTH(aob.[Data])) ) / 1024 FROM AsyncOperationBase aob returns: 76617 As I read it, it looks like the ~75MB of LOB data is using over a gig of space to be stored - I would expect some overhead, but not quite that much. Thanks, Andrew

    Read the article

  • jQuery/Javascript framework efficiency

    - by Russell
    My latest project is using a JavaScript framework (jQuery), along with some plugins (validation, jquery-ui, datepicker, facebox, ...) to help make a modern web application. I am now finding pages loading more slowly than I am used to. After some JS profiling (thanks VS2010!), it seems a lot of the time is spent processing inside the framework. Now I understand that the more complex the UI tools, the more processing needs to be done. The project is not yet at a large stage, and I think the functions involved are fairly average. At this stage I can see it is not going to scale well. I noticed that things like the 'each' command in jQuery take quite a lot of processing time. Have others experienced extra latency using JS frameworks? How do I minimise their effect on page performance? Are there best practices for implementing JS frameworks? Thanks

    Read the article

  • CentOS Insufficient space in download directory /var/cache/yum/base/packages

    - by Joao Heleno
    Hello! I was trying to yum install libpcap when I got Error Downloading Packages: 14:libpcap-0.9.4-15.el5.i386: Insufficient space in download directory /var/cache/yum/base/packages * free 0 * needed 108 k Here's the output from df -h: Filesystem Size Used Avail Use% Mounted on /dev/sda1 20G 19G 0 100% / /dev/sda3 202G 38G 154G 20% /home tmpfs 1.5G 0 1.5G 0% /dev/shm And fdisk -l: Disk /dev/sda: 250.0 GB, 250000000000 bytes 255 heads, 63 sectors/track, 30394 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 2611 20972826 83 Linux /dev/sda2 2612 3251 5140800 82 Linux swap / Solaris /dev/sda3 3252 30394 218026147+ 83 Linux I have run yum clean all, but it did not clear up any space. Please advise. Thanks.

    Read the article
