Search Results

Search found 12367 results on 495 pages for 'disk io'.

Page 71/495 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • How to solve "java.io.IOException: error=12, Cannot allocate memory" calling Runtime#exec()?

    - by Andrea Francia
    On my system I can't run a simple Java application that starts a process, and I don't know how to solve it. Could you give me some hints? The program is:

        [root@newton sisma-acquirer]# cat prova.java
        import java.io.IOException;

        public class prova {
            public static void main(String[] args) throws IOException {
                Runtime.getRuntime().exec("ls");
            }
        }

    The result is:

        [root@newton sisma-acquirer]# javac prova.java && java -cp . prova
        Exception in thread "main" java.io.IOException: Cannot run program "ls": java.io.IOException: error=12, Cannot allocate memory
                at java.lang.ProcessBuilder.start(ProcessBuilder.java:474)
                at java.lang.Runtime.exec(Runtime.java:610)
                at java.lang.Runtime.exec(Runtime.java:448)
                at java.lang.Runtime.exec(Runtime.java:345)
                at prova.main(prova.java:6)
        Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
                at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
                at java.lang.ProcessImpl.start(ProcessImpl.java:81)
                at java.lang.ProcessBuilder.start(ProcessBuilder.java:467)
                ... 4 more

    Configuration of the system:

        [root@newton sisma-acquirer]# java -version
        java version "1.6.0_0"
        OpenJDK Runtime Environment (IcedTea6 1.5) (fedora-18.b16.fc10-i386)
        OpenJDK Client VM (build 14.0-b15, mixed mode)
        [root@newton sisma-acquirer]# cat /etc/fedora-release
        Fedora release 10 (Cambridge)

    EDIT: Solution. This solves my problem, though I don't know exactly why:

        echo 0 > /proc/sys/vm/overcommit_memory

    Up-votes for whoever is able to explain :)

    Additional information, top output:

        top - 13:35:38 up 40 min, 2 users, load average: 0.43, 0.19, 0.12
        Tasks: 129 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
        Cpu(s): 1.5%us, 0.5%sy, 0.0%ni, 94.8%id, 3.2%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:   1033456k total,  587672k used,  445784k free,   51672k buffers
        Swap:  2031608k total,       0k used, 2031608k free,  188108k cached

    Additional information, free output:

        [root@newton sisma-acquirer]# free
                     total       used       free     shared    buffers     cached
        Mem:       1033456     588548     444908          0      51704     188292
        -/+ buffers/cache:      348552     684904
        Swap:      2031608          0    2031608
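
    The usual explanation for error=12 is that Runtime#exec() fork()s the JVM, and the kernel's overcommit policy can refuse to duplicate the JVM's large address space even though the child would immediately exec() and never touch it. A minimal shell sketch for inspecting and relaxing the overcommit policy (a general illustration, not necessarily the full story on this box):

        # Show the current policy: 0 = heuristic overcommit, 1 = always allow, 2 = strict accounting
        cat /proc/sys/vm/overcommit_memory

        # Relax the policy so fork() from a large JVM is not refused up front
        sysctl -w vm.overcommit_memory=0    # equivalent to: echo 0 > /proc/sys/vm/overcommit_memory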

    Read the article

  • StringTemplate: Loading a Template from disk?

    - by Amitabh
    I am using StringTemplate in C# with the following code to load a template from a subdirectory of my application:

        StringTemplateGroup group = new StringTemplateGroup("myGroup", "/tmp");
        StringTemplate query = group.GetInstanceOf("Sample");
        query.SetAttribute("column", "name");
        Console.WriteLine(query);

    I have a template file Sample.st in the tmp directory of my application, but I am getting the following error:

        Unhandled Exception: System.ArgumentException: Can't find template Sample.st; group hierarchy is [myGroup]

    Does anyone know what is wrong here?
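
    A likely culprit is the group's root directory: "/tmp" is resolved as an absolute path rather than relative to the application. A small sketch that builds the path from the executable's location instead (the folder name and the copy-to-output setting are assumptions about this project):

        // Assumes Sample.st sits in a "tmp" folder next to the executable
        // (and is marked "Copy to Output Directory" in the project).
        string templateDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "tmp");
        StringTemplateGroup group = new StringTemplateGroup("myGroup", templateDir);
        StringTemplate query = group.GetInstanceOf("Sample");   // resolves to templateDir\Sample.st
        query.SetAttribute("column", "name");
        Console.WriteLine(query.ToString());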

    Read the article

  • Optimizing PHP require_once's for low disk i/o?

    - by buggedcom
    Q1) I'm designing a CMS (who isn't!) but priority is being given to caching. Literally everything is cached: DB rows, DB id queries, configuration data, processed data, compiled templates. Currently it has two layers of caching. The first is an opcode or memory cache such as APC, eAccelerator, XCache or memcached. If an entry is not found there, it is then searched for in the secondary slow cache, i.e. PHP includes. Are the opcode caches actually faster than doing a require_once of a PHP file containing a var_export'd array of data? My tests are inconclusive, as my development box (XAMPP with PHP 5.3) keeps throwing errors when installing any of the aforementioned programs.

    Q2) The CMS has numerous helper classes that are autoloaded on demand instead of loading all files. Mostly each has a require before it so no autoloading needs to take place, but this is not the question. Because a page script can have up to 50-60 helper files included, I have a feeling that if the site were under pressure it would buckle because of all the i/o this incurs. Ignore for the moment that there is an output cache in place that would remove the need for what I am about to suggest, and also that opcode caches would render this moot. What I have tried to do is join all the helper files required for the script's execution into one single file. This is achievable and works well, but it has the side effect of dramatically increasing memory usage even though technically the same code is being used. What are your thoughts and opinions on this?
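
    For reference, a rough sketch of the "slow cache" described in Q1: a var_export'd PHP array written once and read back with include, which an opcode cache can then serve from memory on later requests (paths, function names and keys here are illustrative):

        <?php
        // Write an entry to the include-based cache tier.
        function file_cache_set($key, array $data) {
            $file = __DIR__ . '/cache/' . md5($key) . '.php';
            file_put_contents($file, '<?php return ' . var_export($data, true) . ';', LOCK_EX);
        }

        // Read it back; an opcode cache (APC, eAccelerator, XCache) keeps the compiled file in memory.
        function file_cache_get($key) {
            $file = __DIR__ . '/cache/' . md5($key) . '.php';
            return is_file($file) ? include $file : false;
        }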

    Read the article

  • How do you find the disk size of a Postgres table and its indexes

    - by mmrobins
    I'm coming to Postgres from Oracle and looking for a way to find the table and index size in terms of bytes/MB/GB/etc., or even better, the size of all tables. In Oracle I had a nasty long query that looked at user_lobs and user_segments to give back an answer. I assume in Postgres there's something I can use in the information_schema tables, but I'm not seeing where. Thanks in advance.
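
    Not an information_schema query, but a sketch of what Postgres offers through its built-in size functions ('mytable' is a placeholder):

        -- Size of one table including its indexes and TOAST data
        SELECT pg_size_pretty(pg_total_relation_size('mytable'));

        -- Rough per-table breakdown for the current database
        SELECT relname,
               pg_size_pretty(pg_relation_size(oid))       AS table_size,
               pg_size_pretty(pg_total_relation_size(oid)) AS total_size
        FROM pg_class
        WHERE relkind = 'r'
        ORDER BY pg_total_relation_size(oid) DESC;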

    Read the article

  • Opening a file from a pack URI in WPF

    - by cptmorgan
    Hi all, I am looking to open a .csv file from the application pack to do some unit testing. What I would really love is some analog of File.ReadAllText(string path), something like X.ReadAllText(Uri uri), but I haven't yet been able to find one. Does anyone know if it is possible to read text/bytes (don't mind which) from a file in the pack without writing this file to disk first? Oh, and by the way, File.ReadAllText(@"pack://application:,,,/SpreadSheetEngine/Tests/Example.csv") didn't work for me. Thanks in advance. Gav
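
    A sketch of the WPF way to read a pack resource without touching the disk, assuming Example.csv has a Build Action of Resource:

        // Application.GetResourceStream understands pack:// URIs for items compiled as "Resource".
        var uri = new Uri("pack://application:,,,/SpreadSheetEngine/Tests/Example.csv", UriKind.Absolute);
        StreamResourceInfo info = Application.GetResourceStream(uri);
        using (var reader = new StreamReader(info.Stream))
        {
            string text = reader.ReadToEnd();   // rough equivalent of File.ReadAllText
        }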

    Read the article

  • Removing multiple files from a Git repo that have already been deleted from disk

    - by Codebeef
    I have a Git repo from which I have deleted four files using rm (not git rm), and my Git status looks like this:

        #       deleted:    file1.txt
        #       deleted:    file2.txt
        #       deleted:    file3.txt
        #       deleted:    file4.txt

    How do I remove these files from Git without having to manually go through and add each file like this:

        git rm file1 file2 file3 file4

    Ideally, I'm looking for something that works in the same way that git add . does, if that's possible.
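
    A couple of one-liners in the spirit of git add . (both operate only on paths Git already tracks):

        # Stage all modifications and deletions that already happened in the working tree
        git add -u

        # Or stage just the deletions
        git ls-files --deleted -z | xargs -0 git rm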

    Read the article

  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we are considering right now:

    1. Using memcache for all major queries and leaving it alone to do what it does best.
    2. Using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the nodes.

    What approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache and on upgrading the memory as the load increases with the number of users? Thanks a lot!
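
    A sketch of the combined approach in PHP, assuming the Memcached extension and a serialized file as the slower tier (the key name, path and TTL are made up for illustration):

        $mc = new Memcached();
        $mc->addServer('127.0.0.1', 11211);

        $value = $mc->get('config');
        if ($mc->getResultCode() === Memcached::RES_NOTFOUND) {
            // Fall back to the disk tier, then repopulate the fast tier for five minutes.
            $value = unserialize(file_get_contents('/var/cache/app/config.ser'));
            $mc->set('config', $value, 300);
        }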

    Read the article

  • Intercept windows open file

    - by HyLian
    Hello, I'm trying to make a small program that can intercept the opening of a file. The idea is that when a user double-clicks a file in a given folder, Windows would notify the software, which would then process that request and return the file's data to Windows. Another solution might be to monitor "open" messages and make Windows wait while the program prepares the contents of the file. One application of this concept could be to handle decryption of a file in a way that is transparent to the user. In this context, the encrypted file would sit on the disk, and when the user opens it (by double-clicking it or with some application such as Notepad), the background process would intercept that open event, decrypt the file and hand the contents to the asking application. It's a slightly strange concept; it's a bit like the "man in the middle" idea from networking, but with files instead of network packets. Thanks for reading.

    Read the article

  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on SysAdmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This meant doing much more ourselves: things like networking, storage and monitoring. As part of the big move, to replace our leased direct attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and NFS. It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system, and recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucket loads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.

    Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.

    Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. The first thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/syslog I found this scary-looking entry:

        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
        Nov 15 06:49:45 umbilo smartd[2827]: # 1  Short offline       Completed: read failure       90%      6576         3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 2  Short offline       Completed: read failure       90%      6087         3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 3  Short offline       Completed: read failure       10%      5901         656821791
        Nov 15 06:49:45 umbilo smartd[2827]: # 4  Short offline       Completed: read failure       90%      5818         651637856
        Nov 15 06:49:45 umbilo smartd[2827]:

    So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away just like syslog says it is. But we also see that disk 8's SMART read errors are fluctuating, and there are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate to the high IO wait times! My interpretation is that:

    - Disk 8 is experiencing an odd hardware fault that results in intermittent long operation times.
    - Somehow this fault condition on the disk is locking up the entire array.

    Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array.

    The Question(s): How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? Am I being naïve to think that the RAID card should have dealt with this? How can I prevent a single misbehaving disk from impacting the entire array? Am I missing something?
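
    One mitigation often suggested for exactly this failure mode is checking whether the drives support time-limited error recovery (SCT ERC / TLER), so a sick disk gives up on an unreadable sector after a few seconds instead of retrying for minutes and stalling the array. A hedged sketch with smartctl (the device name and port number are examples; whether a given 3ware setup passes these commands through is an assumption to verify):

        # Query the drive's error-recovery timeouts; "Disabled" is common on desktop-class drives
        smartctl -d 3ware,7 -l scterc /dev/twa0

        # Ask the drive to give up after 7 seconds (values are in tenths of a second)
        smartctl -d 3ware,7 -l scterc,70,70 /dev/twa0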

    Read the article

  • Referencing file on disk from NSManagedObject

    - by Kamchatka
    Hello, what would be the best way to name a file associated with an NSManagedObject? The NSManagedObject will hold the URL to this file, but I need to create a unique filename for the file. Is there some kind of auto-increment id that I could use? Should I use mktemp (even though it's not a temporary file), or try to convert the NSManagedObjectID to a filename? I fear the latter will contain special characters that might cause problems. What would you suggest?
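
    One sketch that avoids both mktemp and the object ID: generate a process-unique name and store it on the entity (documentsDirectory, managedObject and the "fileName" attribute are placeholders):

        // Generate a collision-free name; globallyUniqueString is unique per process and host.
        NSString *uniqueName = [[NSProcessInfo processInfo] globallyUniqueString];
        NSString *fileName   = [uniqueName stringByAppendingPathExtension:@"dat"];
        NSString *filePath   = [documentsDirectory stringByAppendingPathComponent:fileName];

        // Persist only the file name on the managed object (hypothetical "fileName" attribute),
        // rebuilding the full path at runtime so the app survives a change of sandbox path.
        [managedObject setValue:fileName forKey:@"fileName"];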

    Read the article

  • Saving a PNG image using NSData writeToFile writes corrupted data to the iPhone disk

    - by jAmi
    I have a number of images (PNG, GIF and JPG) in my application resource bundle. I want some images to be saved in my Documents directory, so I use:

        imgPath = [documentsDirectoryPath stringByAppendingPathComponent:@"myImage.png"];
        if (![fileMgr fileExistsAtPath:imgPath]) {
            [[fileMgr contentsAtPath:[[NSBundle mainBundle] pathForResource:@"myImage" ofType:@"png"]]
                writeToFile:imgPath atomically:NO];
        }

    This saves an image file at my desired path, but the file has an extra 300 bytes (maybe junk data) in it, which results in a corrupted image... Am I doing something wrong here? This works in the simulator, but on the real device the image has the extra 300 bytes. Also, a GIF image gets copied nicely and works, but the problem occurs for PNG images.
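
    The extra bytes are most likely not a copying bug: Xcode runs PNGs through an iOS-specific optimization at build time, so the file inside the bundle already differs from the original. If a standards-conformant PNG is needed on disk, one workaround sketch is to re-encode it (at the cost of losing the optimization):

        // Decode the (optimized) bundle PNG and write it back out as a plain PNG.
        UIImage *image = [UIImage imageNamed:@"myImage.png"];
        NSData *pngData = UIImagePNGRepresentation(image);
        [pngData writeToFile:imgPath atomically:YES];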

    Read the article

  • Has anybody used the WB B-tree library?

    - by Chris B
    I stumbled across the WB on-disk B-tree library: http://people.csail.mit.edu/jaffer/WB It seems like it could be useful for my purposes (swapping data to disk during very large statistical calculations that do not fit in memory), but I was wondering how stable it is. Reading the manual, it seems worryingly 'researchy': there are sections labelled [NOT IMPLEMENTED] and so on. But maybe the manual is just out of date. So, is this library usable? Am I better off looking at Tokyo Cabinet, MemcacheDB, etc.? By the way, I am working in Java.

    Read the article

  • C# arraylist can't compare objects after they are loaded from disk

    - by Zka
    To make it easy, let's say I have an ArrayList allBooks containing objects of class "books" and an ArrayList someBooks containing some but not all of the books. Using the contains() method worked fine when I wanted to see whether a book from one ArrayList was also contained in the other. The problem is that this no longer works after I save both of the ArrayLists to a .bin file and load them back when the program restarts. Doing the same test as before, contains() returns false even if the compared objects are the same (have the same info inside). I solved it by overriding the equals method and it works fine, but I want to know why this happened.
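
    The reason is that the default Equals is reference equality, and deserializing the .bin file creates brand-new object instances, so Contains() no longer finds a match. A C# sketch of the value-equality override that fixes it (the Book class and its fields are made up for illustration):

        public class Book
        {
            public string Title  { get; set; }
            public string Author { get; set; }

            // ArrayList.Contains() calls Equals(), so compare by value, not by reference.
            public override bool Equals(object obj)
            {
                Book other = obj as Book;
                return other != null && Title == other.Title && Author == other.Author;
            }

            // Keep GetHashCode consistent with Equals.
            public override int GetHashCode()
            {
                return (Title ?? string.Empty).GetHashCode() ^ (Author ?? string.Empty).GetHashCode();
            }
        }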

    Read the article

  • Likelihood of IOError during print vs. write

    - by jkasnicki
    I recently encountered an IOError writing to a file on NFS. There wasn't a disk space or permission issue, so I assume this was just a network hiccup. The obvious solution is to wrap the write in a try-except, but I was curious whether the implementation of print and write in Python makes either of the following more or less likely to raise IOError:

        f_print = open('print.txt', 'w')
        print >>f_print, 'test_print'
        f_print.close()

    vs.

        f_write = open('write.txt', 'w')
        f_write.write('test_write\n')
        f_write.close()

    (If it matters, specifically in Python 2.4 on Linux.)
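
    Both statements end up going through the same file object's write method, so neither form should be meaningfully more prone to IOError; the hiccup can hit either one. A defensive sketch in Python 2.4-compatible syntax (the error handling policy is up to you):

        try:
            f = open('write.txt', 'w')
            try:
                f.write('test_write\n')
            finally:
                f.close()          # close() can also raise IOError on NFS, so it stays inside the try
        except IOError, e:
            # Log and decide whether to retry, back off, or re-raise.
            print 'write failed:', e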

    Read the article

  • Best data recovery tools?

    - by Nonick
    So, due to a recent act of stupidity and bravado, I uttered the words "backups! who needs backups?!", and what followed was the tragic loss of 260 GB of data. This particular scenario requires me to recover a repartitioned hard disk, but I was wondering what tools people here use in general to recover lost data. I'm sure everyone has been there: accidentally overwriting files, resaving an old version, a computer crash, hard disk death, a user deleting an important document, and so on. So I was thinking it might be an interesting point of discussion what you use to recover lost data. I apologise if this is considered irrelevant, but considering there have been a few recovery questions, I think this might be interesting.

    Read the article

  • MySQL: LOAD DATA reclaim disk space after delete

    - by Michael
    I have a DB schema composed of MyISAM tables, and I am interested in deleting old records from some of the tables from time to time. I know that DELETE does not reclaim the disk space, but, as I found in the description of the DELETE command, inserts may reuse the deleted space: "In MyISAM tables, deleted rows are maintained in a linked list and subsequent INSERT operations reuse old row positions." I am interested in whether the LOAD DATA command also reuses the deleted space. UPDATE: I am also interested in how the index space is reclaimed.
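
    Independent of whether LOAD DATA reuses the holes, the explicit way to hand the space back with MyISAM is to rebuild the table; a sketch ('my_table' is a placeholder):

        -- Data_free shows how many bytes of deleted-row space the table is currently holding on to
        SHOW TABLE STATUS LIKE 'my_table';

        -- Rewrites the table and its indexes, reclaiming deleted space and defragmenting
        OPTIMIZE TABLE my_table;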

    Read the article

  • How can I use io.StringIO() with the csv module?

    - by Tim Pietzcker
    I tried to backport a Python 3 program to 2.7, and I'm stuck with a strange problem:

        >>> import io
        >>> import csv
        >>> output = io.StringIO()
        >>> output.write("Hello!")    # Fail: io.StringIO expects Unicode
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'
        >>> output.write(u"Hello!")   # This works as expected.
        6L
        >>> writer = csv.writer(output)          # Now let's try this with the csv module:
        >>> csvdata = [u"Hello", u"Goodbye"]     # Look ma, all Unicode! (?)
        >>> writer.writerow(csvdata)             # Sadly, no.
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'

    According to the docs, io.StringIO() returns an in-memory stream for Unicode text. It works correctly when I try and feed it a Unicode string manually. Why does it fail in conjunction with the csv module, even if all the strings being written are Unicode strings? Where does the str come from that causes the Exception? (I do know that I can use StringIO.StringIO() instead, but I'm wondering what's wrong with io.StringIO() in this scenario.)
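
    The str comes from the csv module itself: under Python 2 it works with encoded byte strings, which io.StringIO (unicode-only) then rejects. A sketch of pairing it with a byte buffer instead:

        import csv
        import io

        output = io.BytesIO()                    # byte-oriented counterpart of io.StringIO
        writer = csv.writer(output)
        writer.writerow([u"Hello", u"Goodbye"])  # the csv module writes these out as byte strings
        print output.getvalue()                  # 'Hello,Goodbye\r\n'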

    Read the article

  • Writing at the end of file

    - by user342534
    Hi, I'm working on a system that requires high file I/O performance (with C#). Basically, I'm filling up large files (~100MB) from the start of the file to the end. Every ~5 seconds I'm adding ~5MB to the file (sequentially from the start of the file), and flushing the stream on every bulk write. Every few minutes I need to update a structure that I write at the end of the file (some kind of metadata). When flushing each of the bulk writes I have no performance issue. However, when updating the metadata at the end of the file I get really low performance. My guess is that when creating the file (which also needs to be done extra fast), the file doesn't really get the entire 100MB allocated on disk, and when I flush the metadata it must allocate all the space up to the end of the file. Any idea how I can overcome this problem? Thanks a lot!
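
    One mitigation sketch consistent with that guess: size the file once at creation and stamp a placeholder metadata block at the end, so later metadata updates overwrite existing bytes instead of forcing the file system to extend and zero-fill the gap during normal operation. The block size and path variable are illustrative, and the one-time cost moves to creation:

        const long FileSize = 100L * 1024 * 1024;          // ~100 MB, as described above
        byte[] placeholder = new byte[4096];               // empty metadata block (size is made up)

        using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write))
        {
            fs.SetLength(FileSize);                                        // reserve the full length
            fs.Seek(FileSize - placeholder.Length, SeekOrigin.Begin);
            fs.Write(placeholder, 0, placeholder.Length);                  // pay the zero-fill cost now
            fs.Flush();
        }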

    Read the article

  • How to partition Seagate FreeAgent GoFlex 2TB hard disk?

    - by balki
    Hi, I bought a new Seagate 2TB external hard disk. I opened the drive's application in my virtual Windows machine and did the product registration using the application present on it. I have a few questions on how best to use it.

    The drive by default has some files and folders: setup.exe, System Volume Information, USB 3.0 PC Card Adapter, etc. I copied all the files to my laptop. Is it safe to delete these files from the drive? The drive also has a dashboard application for Windows which allows tuning power options, testing the drive and so on. Will I be able to use the dashboard if I put all these files back and mount the drive on Windows again?

    I want to partition and format the hard disk. The data I would like to store is: files of around 10 to 20 GB (VirtualBox images), files of around 4 GB (DVD images), and other movies and personal files. What is the best filesystem for storing very large files (10 to 20 GB), so that they are written and accessed quickly and the drive's capacity is used well? If I leave one of the partitions as NTFS and format the others with different filesystems, will the drive still mount on Windows, and will I be able to use the device's dashboard?

    Note: I don't need any encryption for my data. Any other advice on using the hard disk is also welcome.

    Read the article

  • Send less Server Data with "AFK"

    - by Oliver Schöning
    I am working on a 2D (realtime) multiplayer game with Construct2 and a Socket.IO JavaScript server. Right now the code does not include the array for each player:

        var io = require("socket.io").listen(80);
        var x = 10;
        io.sockets.on("connection", function (socket) {
          socket.on("message", function(data) {
            x = x+1;
          });
        });
        setInterval(function() {
          io.sockets.emit("message", 'Pos,' + x);
        },100);

    I noticed a very annoying problem with my server today. It sends my X coordinates every 100 milliseconds. The problem was that when I switched to another browser tab, the browser stopped the game from running, and when I came back, I think the game had to run through all the queued packets, because my offline debugging button still worked immediately while the online button only responded after some seconds. So then I changed my code so that it would only send out an update when it received player input:

        var io = require("socket.io").listen(80);
        var x = 10;
        io.sockets.on("connection", function (socket) {
          socket.on("message", function(data) {
            x = x+1;
            io.sockets.emit("message", 'Pos,' + x);
          });
        });

    And it updated immediately, even when I had been inactive on the browser tab for a long time, confirming my suspicion that it had to get through all the data. Confirm, please! It would be insane to only send information on client input in a realtime game. But how would I write an AFK function? I would think it is easier to run an AFK boolean loop on the server. Here is what I need help with:

        playerArray[Me]
        if ( "Not given any input for X amount of seconds" ) {
            "Don't send data"
        } else {
            "Send data"
        }
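
    One way to sketch that server-side AFK check: remember the last input time per socket and skip the periodic broadcast for sockets that have been silent past a threshold. This is written against the old socket.io 0.x API used above, and the 30-second threshold is arbitrary:

        var io = require("socket.io").listen(80);
        var x = 10;
        var lastInput = {};                      // socket.id -> time of the player's last message
        var AFK_MS = 30 * 1000;                  // treat 30 seconds of silence as AFK

        io.sockets.on("connection", function (socket) {
          lastInput[socket.id] = Date.now();
          socket.on("message", function (data) {
            lastInput[socket.id] = Date.now();   // any input marks the player as active again
            x = x + 1;
          });
          socket.on("disconnect", function () {
            delete lastInput[socket.id];
          });
        });

        setInterval(function () {
          var now = Date.now();
          io.sockets.clients().forEach(function (socket) {
            if (now - lastInput[socket.id] < AFK_MS) {
              socket.emit("message", 'Pos,' + x);   // only active (non-AFK) players get updates
            }
          });
        }, 100);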

    Read the article
