Search Results

Search found 3953 results on 159 pages for 'overlapped io'.

Page 11/159 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • How many disks should I use to meet the capacity and IOPS need?

    - by facebook-100005613813158
    An application needs 1.6 TB of storage capacity and performs 1000 IOPS. How many disks are required to meet the application requirement and offer acceptable response time? The disk specifications are as follows: drive capacity = 100 GB, 15K RPM, and each disk can perform 50 IOPS. There are four candidate answers: 10, 12, 16, and 20. Which one is the most likely? In my opinion, 16 disks can only meet the capacity need but cannot meet the IOPS need, so the right answer should be 20 disks. Right?
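
    For what it's worth, the arithmetic can be checked directly: the answer is the larger of the capacity-driven and the IOPS-driven disk counts. A minimal Java sketch using the figures from the question:

        // Disks needed = max(capacity requirement, IOPS requirement).
        public class DiskSizing {
            public static void main(String[] args) {
                int forCapacity = (int) Math.ceil(1600.0 / 100.0); // 1.6 TB / 100 GB = 16
                int forIops     = (int) Math.ceil(1000.0 / 50.0);  // 1000 IOPS / 50 = 20
                System.out.println(Math.max(forCapacity, forIops)); // prints 20
            }
        }

    So 16 disks satisfy capacity alone, but 20 are needed to sustain 1000 IOPS, which supports the reading above.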

    Read the article

  • At what point is asynchronous reading of disk I/O more efficient than synchronous?

    - by blesh
    Assuming there is some bit of code that reads files for multiple consumers, and the files are of any arbitrary size: At what size does it become more efficient to read the file asynchronously? Or to put it another way, how small must a file be for it to be faster just to read it synchronously? I've noticed (and perhaps I'm incorrect) that when reading very small files, it takes longer to read them asynchronously than synchronously (in particular with .NET). I'm assuming this has to do with set up time for things like I/O Completion Ports, threads, etc. Is there any rule of thumb to help out here? Or is it dependent on the system and the environment?
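
    One way past rules of thumb is to measure both paths on the target machine across a range of file sizes. A rough, illustrative Java sketch of such a micro-benchmark (NIO.2; the file name is a placeholder, and a serious benchmark would add warm-up and repetition):

        import java.nio.ByteBuffer;
        import java.nio.channels.AsynchronousFileChannel;
        import java.nio.file.*;

        public class SyncVsAsyncRead {
            public static void main(String[] args) throws Exception {
                Path file = Paths.get("sample.bin"); // placeholder test file

                long t0 = System.nanoTime();
                byte[] data = Files.readAllBytes(file); // plain blocking read
                long syncNs = System.nanoTime() - t0;

                t0 = System.nanoTime();
                try (AsynchronousFileChannel ch =
                         AsynchronousFileChannel.open(file, StandardOpenOption.READ)) {
                    ByteBuffer buf = ByteBuffer.allocate((int) ch.size());
                    ch.read(buf, 0).get(); // read returns a Future; block only to time it
                }
                long asyncNs = System.nanoTime() - t0;

                System.out.printf("%d bytes: sync %d ns, async %d ns%n",
                        data.length, syncNs, asyncNs);
            }
        }

    The crossover point this finds reflects exactly the setup costs mentioned above (completion ports, thread handoff), so it will differ per system and environment.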

    Read the article

  • Ubuntu 13.XX unable to mount USB HDD. Tried everything. I/O error boot sector/file system

    - by XaviGG
    I know that there are many related posts, but none of them helped me. I will jump to the last test because it is the one that should work, but it does not. An external HDD with a single partition, NTFS-formatted in Windows with a full (slow) format, empty and clean. Checked for errors; it reports that no errors were found. Moving to Ubuntu 13.04... GParted throws the first error when trying to read the disk: Input/output error. The content of the disk appears as unknown. Unable to create a partition table or format it; I get the same error when trying. If I try to mount it in the terminal it tells me the same, specifying that there is also an I/O error reading the boot sector. I have had this problem since I upgraded (always with a fresh install) to 13.04. I thought it would be solved by 13.10, but it has the same behavior. I tried with two different HDDs (HD and SSHD) that work perfectly in Windows 7. In 13.04 at least I got a mount attempt, where the icon of the drive kept appearing and disappearing until finally it disappeared. But now it doesn't even try. Possible causes: the HDD was my old main HDD, so it had WIN, RECOVERY, SYSTEM, UBU, and SWAP partitions. Maybe the way or place where the partition table is defined is not the best for an external HDD, but I don't know a lot about that topic. I would appreciate it a lot if someone could give me a guideline to convert one of these HDDs into a working external HDD. No files to recover, nothing to care about. Just format the disk completely and be able to use it for storing backups, without having to move the files first to the Windows partition, load Windows, and then copy them to the external HDD. Because I want to use a file comparator for the backups. Thanks a lot. Edit 1: I found an option in Windows to convert it to a dynamic disk, which warns me that I won't be able to run an OS from it after changing. I suppose that is what I need, because in the current mode I cannot safely remove it. But it reports an error and couldn't change the mode.

    Read the article

  • Reading input all together or in steps?

    - by nischayn22
    For many programming quizzes we are given a bunch of input lines, and we have to process each input, do some computation, and output the result. My question is: what is the best way to optimize the runtime of the solution? 1. Read all input, store it (in an array or something), compute the results for all of it, and finally output it all together. Or 2. Read one input, compute the result, output the result, and so on for each input given.
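
    A hedged Java illustration of the two styles, assuming one integer per line with doubling as the computation; in practice most of the win comes from buffering the streams rather than from when the computation happens:

        import java.io.*;
        import java.util.*;

        public class TwoIoStyles {
            public static void main(String[] args) throws IOException {
                BufferedReader in = new BufferedReader(new InputStreamReader(System.in));

                // Style 1: read everything, then compute, then emit one batched output.
                List<String> lines = new ArrayList<>();
                for (String s; (s = in.readLine()) != null; ) lines.add(s);
                StringBuilder out = new StringBuilder();
                for (String s : lines) out.append(2 * Integer.parseInt(s.trim())).append('\n');
                System.out.print(out);

                // Style 2 would compute and print inside the read loop instead;
                // with buffered input and batched output the difference is usually small.
            }
        }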

    Read the article

  • Structuring an input file

    - by Ricardo
    I am in the process of structuring a small program to perform some hydraulic analysis of pipe flow. As I am envisioning this, the program will read an input file, store the input parameters in a suitable way, operate on them and finally output results. I am struggling with how to structure the input file in a sane way; that is, in a way that a human can write it easily and a machine can parse it easily. A sample input file made available to me for a similar program is just a stream of comma-separated numbers that don't make much sense on their own, so that's the scenario I am trying to avoid. Though I am giving the details of my particular problem, I am more interested in general input-file structuring strategies. Is a stream of comma-separated values my best bet? Would I be better off using some sort of key:value structure? I don't know much about this, so any help will probably put me on a better track than the one I am on now.
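
    As one illustration of the key:value route, a flat key = value file is easy for a human to write and trivial for a machine to parse, for example with java.util.Properties if the program were written in Java. The parameter names below are invented for the example:

        import java.io.FileReader;
        import java.util.Properties;

        public class ReadInputFile {
            public static void main(String[] args) throws Exception {
                // pipeflow.properties might contain lines such as:
                //   pipe.diameter.m = 0.30
                //   pipe.length.m   = 120.0
                //   flow.rate.m3s   = 0.05
                Properties p = new Properties();
                try (FileReader r = new FileReader("pipeflow.properties")) {
                    p.load(r);
                }
                double diameter = Double.parseDouble(p.getProperty("pipe.diameter.m"));
                System.out.println("diameter = " + diameter + " m");
            }
        }

    Named keys also leave room for defaults, units in the key names, and validation, which is where bare CSV streams tend to hurt.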

    Read the article

  • What is wrong with this code for reading binary files? [on hold]

    - by qed
    What is wrong with this code for reading binary files? It compiles OK, but will not print out the file as planned, in fact, it prints nothing at all. #include <iostream> #include <fstream> int main(int argc, const char *argv[]) { if (argc < 2) { ::std::cerr << "usage: " << argv[0] << " <filename>\n"; return 1; } ::std::basic_ifstream<unsigned char> in(argv[1], ::std::ios::binary); unsigned char uc; while (in.get(uc)) { printf("%02X ", uc); } // TODO: error handling, in case the file could not be opened or read return 0; }

    Read the article

  • Java - HtmlUnit - Unable to save HTML to file (in some cases)

    - by Walter White
    Hi all, I am having intermittent issues saving the response HTML in HtmlUnit. Caused by: java.io.IOException: Unable to save file:C:\ccview\PP50773_4.0_walter\TSC_hca\Applications\HCA_J2EE\HCA\target\HtmlUnitTests\single\1\com\pnc\tsc\hca\ui\test\SiteCrawler\crawlSiteAsProvider\10.SiteCrawler.crawl.html at com.pnc.tsc.hca.ui.util.GetUtil.save(GetUtil.java:128) at com.pnc.tsc.hca.ui.util.GetUtil.add(GetUtil.java:75) at com.pnc.tsc.hca.ui.util.GetUtil.click(GetUtil.java:49) at com.pnc.tsc.hca.ui.test.SiteCrawler.crawl(SiteCrawler.java:87) at com.pnc.tsc.hca.ui.test.SiteCrawler.crawl(SiteCrawler.java:61) at com.pnc.tsc.hca.ui.test.SiteCrawler.crawl(SiteCrawler.java:63) at com.pnc.tsc.hca.ui.test.SiteCrawler.crawl(SiteCrawler.java:63) at com.pnc.tsc.hca.ui.test.SiteCrawler.crawl(SiteCrawler.java:63) at com.pnc.tsc.hca.ui.test.SiteCrawler.crawl(SiteCrawler.java:54) at com.pnc.tsc.hca.ui.test.SiteCrawler.crawlSiteAsProvider(SiteCrawler.java:50) ... 15 more Caused by: java.lang.RuntimeException: java.io.IOException: The system cannot find the path specified at com.gargoylesoftware.htmlunit.html.XmlSerializer.getAttributesFor(XmlSerializer.java:165) at com.gargoylesoftware.htmlunit.html.XmlSerializer.printOpeningTag(XmlSerializer.java:126) at com.gargoylesoftware.htmlunit.html.XmlSerializer.printXml(XmlSerializer.java:83) at com.gargoylesoftware.htmlunit.html.XmlSerializer.printXml(XmlSerializer.java:93) at com.gargoylesoftware.htmlunit.html.XmlSerializer.printXml(XmlSerializer.java:93) at com.gargoylesoftware.htmlunit.html.XmlSerializer.asXml(XmlSerializer.java:73) at com.gargoylesoftware.htmlunit.html.XmlSerializer.save(XmlSerializer.java:55) at com.gargoylesoftware.htmlunit.html.HtmlPage.save(HtmlPage.java:2259) at com.pnc.tsc.hca.ui.util.GetUtil.save(GetUtil.java:126) ... 24 more Caused by: java.io.IOException: The system cannot find the path specified at java.io.WinNTFileSystem.createFileExclusively(Native Method) at java.io.File.createNewFile(File.java:883) at com.gargoylesoftware.htmlunit.html.XmlSerializer.createFile(XmlSerializer.java:216) at com.gargoylesoftware.htmlunit.html.XmlSerializer.getAttributesFor(XmlSerializer.java:160) ... 32 more Now, the parent directory exists and some other files have already been written to the directory. Looking at the filename, I don't see anything that would stand out as a red flag indicating the filename is bad. What can I do to correct this error? Thanks, Walter
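
    Not a definitive diagnosis, but on Windows "The system cannot find the path specified" from File.createNewFile often means either a missing parent directory for one of the auxiliary files the serializer writes, or an absolute path exceeding the legacy 260-character MAX_PATH limit; the failing path above is already roughly 150 characters before HtmlUnit appends resource names. A small defensive Java check that could run before each save (the 240 threshold is an assumption, chosen to leave headroom):

        import java.io.File;

        public class SaveGuard {
            // Hypothetical helper: validate a target path before saving a page to it.
            static void checkSavable(File target) {
                File parent = target.getAbsoluteFile().getParentFile();
                if (parent != null && !parent.exists() && !parent.mkdirs()) {
                    throw new IllegalStateException("cannot create " + parent);
                }
                int len = target.getAbsolutePath().length();
                if (len > 240) { // headroom under the 260-char MAX_PATH limit
                    throw new IllegalStateException("path too long (" + len + " chars)");
                }
            }
        }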

    Read the article

  • Understanding Node.js and concept of non-blocking I/O

    - by Saif Bechan
    Recently I became interested in using Node.js to tackle some parts of my web application. I love that it is all JavaScript and very lightweight, so instead of a JavaScript-to-PHP call you make a lighter JavaScript-to-JavaScript call. However, I do not understand all the concepts explained. Basic concepts: In the presentation on Node.js, Ryan Dahl talks about non-blocking I/O and why this is the way we need to create our programs. I can understand the theoretical concept. You just don't wait for a response; you go ahead and do other things. You register a callback for the response, and when the response arrives millions of clock cycles later, you can fire it. If you have not already, I recommend watching this presentation. It is very easy to follow and pretty detailed. There are some nice concepts explained on how to write your code in a good manner, and some examples given; I am going to work with the basic example. Examples: The way we do things now: puts("Enter your name: "); var name = gets(); puts("Name: " + name); The problem with this is that the code halts at line 1. It blocks your code. The way we need to do things according to Node: puts("Enter your name: "); gets(function (name) { puts("Name: " + name); }); With this, your program does not halt, because the input handler is a callback passed to the read. So the program continues to work without halting. Questions: The basic question I have is how this works in real-life situations. I am talking here about use in web applications. The application I am writing does I/O, but it still does it in a blocking manner. I think that most of the time, if not all, you need to block, because you have to wait for the response you will work with. When you need to get some information from the database, most of the time this data needs to be verified before you can go further with the code. Example 1: Take a login, for example. You have to wait for the database response to return, because you cannot do anything else. I can't see a way around this without blocking. Example 2: Going back to the basic example. The user just requests something from a database which does not need any verification. You still have to block because you don't have anything more to do. I cannot come up with a single example where you would want to do other things while you wait for the response to return. Possible answers: I have read that this frees up resources. When you program like this it takes less CPU or memory usage. So this non-blocking I/O is ONLY meant to free up resources and does not have any other practical use. Not that this is not a huge plus; freeing up resources is always good. Yet I fail to see it as a good solution, because in both of the above examples the program has to wait for the response to the user's request. Whether this happens inside a callback or just inline, in my opinion the program waits for input. Resources I looked at: I have looked at some resources before I posted this question. They talk a lot about the theoretical concept, which is quite clear. Yet I fail to see some real-life examples where it makes a huge difference. Stack Overflow: What is in simple words blocking IO and non-blocking IO? Blocking IO vs non-blocking IO; looking for good articles tidy code for asynchronous IO Other resources: Wikipedia: Asynchronous I/O Introduction to non-blocking I/O The C10K problem
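
    A hedged sketch of the point the presentation makes, in Java only because this listing mixes languages: "non-blocking" means the waiting request does not hold a thread hostage, so one thread can serve other clients while the login lookup is in flight. The database call below is a stand-in:

        import java.util.concurrent.CompletableFuture;

        public class NonBlockingSketch {
            // Stand-in for an asynchronous database driver call (hypothetical).
            static CompletableFuture<String> findUser(String name) {
                return CompletableFuture.supplyAsync(() -> {
                    try { Thread.sleep(200); } catch (InterruptedException ignored) {}
                    return name + ":row";
                });
            }

            public static void main(String[] args) throws Exception {
                findUser("alice").thenAccept(row ->
                        System.out.println("login check done: " + row)); // callback later
                System.out.println("serving other requests meanwhile");  // prints first
                Thread.sleep(500); // keep the demo process alive for the callback
            }
        }

    So the login itself still "waits", but the server does not: that is the resource-freeing effect, and under many concurrent clients it is the practical benefit rather than just a nicety.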

    Read the article

  • FileInfo.MoveTo() vs File.Move()

    - by Eric
    Is there any difference between these two methods of moving a file? System.IO.FileInfo f = new System.IO.FileInfo(@"c:\foo.txt"); f.MoveTo(@"c:\bar.txt"); //vs System.IO.File.Move(@"c:\foo.txt", @"c:\bar.txt");

    Read the article

  • please give me a solution

    - by user327832
    Here is the code I have written so far, but it ended up giving me an error: import java.io.File; import java.io.FileInputStream; import java.io.IOException; import java.io.InputStream; public class Main { public static void main(String[] args) throws Exception { File file = new File("c:\\filea.txt"); InputStream is = new FileInputStream(file); long length = file.length(); System.out.println (length); bytes[] bytes = new bytes[(int) length]; try { int offset = 0; int numRead = 0; while (numRead >= 0) { numRead = is.read(bytes); } } catch (IOException e) { System.out.println ("Could not completely read file " + file.getName()); } is.close(); Object[] see = new Object[(int) length]; see[1] = bytes; System.out.println ((String[])see[1]); } }
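
    Not a line-by-line fix of the code above, but a minimal sketch of what it appears to attempt (read a whole file and print it as text). Note the original declares bytes[] where the Java type is byte[], and its read loop ignores the offset; since Java 7 both problems disappear with Files.readAllBytes:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class ReadWholeFile {
            public static void main(String[] args) throws IOException {
                byte[] bytes = Files.readAllBytes(Paths.get("c:\\filea.txt"));
                System.out.println(bytes.length);
                System.out.println(new String(bytes)); // platform default charset
            }
        }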

    Read the article

  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read / written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: What is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512 byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fastish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness vs. performance)? Thanks in advance for any ideas, Adam
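
    The question is about C#, but the trade-off is language-neutral; here is a hedged Java sketch of a copy loop with a configurable buffer and one progress event per pass. 64 KiB is a common middle ground, though only measurement on the actual disk or link settles it:

        import java.io.*;
        import java.util.function.LongConsumer;

        public class BufferedCopy {
            static void copy(InputStream in, OutputStream out, int bufferSize,
                             LongConsumer onProgress) throws IOException {
                byte[] buf = new byte[bufferSize];
                long total = 0;
                for (int n; (n = in.read(buf)) != -1; ) {
                    out.write(buf, 0, n);
                    total += n;
                    onProgress.accept(total); // fires once per buffer pass
                }
            }

            public static void main(String[] args) throws IOException {
                try (InputStream in = new FileInputStream("src.bin");    // placeholder
                     OutputStream out = new FileOutputStream("dst.bin")) {
                    copy(in, out, 64 * 1024, done -> System.out.println(done + " bytes"));
                }
            }
        }

    Decoupling the UI update rate from the buffer size (for example, reporting on a timer rather than per pass) removes the pressure to keep buffers small.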

    Read the article

  • java.io.IOException: Premature EOF

    - by jouzef19
    Hi: I am trying to download an XML file using a stream, and things were fine until the XML size became bigger than 9 MB, at which point I got this error: java.io.IOException: Premature EOF. This is the code: BufferedInputStream bfi = null; try { bfi = new BufferedInputStream(new URL("The URL").openStream()); String name = "name.xml"; FileOutputStream fb = new FileOutputStream(name); BufferedOutputStream bout = new BufferedOutputStream(fb, 1024); byte[] data = new byte[1024]; int x = 0; while ((x = bfi.read(data, 0, 1024)) >= 0) { bout.write(data, 0, x); } this.deletePhishTankDatabase(this.recreateFileName()); ptda.insertDownloadTime(hour, day, month, year); bout.close(); bfi.close(); } catch (IOException ex) { Logger.getLogger(PhishTankDataBase.class.getName()).log(Level.SEVERE, null, ex); } finally { try { bfi.close(); } catch (IOException ex) { Logger.getLogger(PhishTankDataBase.class.getName()).log(Level.SEVERE, null, ex); } } } else { System.out.println("You can't do anything"); return; } Thanks in advance
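
    Premature EOF generally means the server or an intermediary closed the connection mid-body, so the copy loop itself is not the culprit. A hedged Java sketch that sets explicit timeouts and retries the whole download (the URL is a placeholder, and attempts is assumed to be at least 1):

        import java.io.*;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class RetryingDownload {
            static void download(String url, File dest, int attempts) throws IOException {
                IOException last = null;
                for (int i = 0; i < attempts; i++) {
                    try {
                        HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
                        c.setConnectTimeout(15000);
                        c.setReadTimeout(60000); // generous timeout for a large body
                        try (InputStream in = new BufferedInputStream(c.getInputStream());
                             OutputStream out = new BufferedOutputStream(new FileOutputStream(dest))) {
                            byte[] buf = new byte[8192];
                            for (int n; (n = in.read(buf)) >= 0; ) out.write(buf, 0, n);
                            return; // success
                        }
                    } catch (IOException e) {
                        last = e; // transient EOF; try again
                    }
                }
                throw last;
            }
        }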

    Read the article

  • Hibernate JDBCConnectionException: Communications link failure and java.io.EOFException: Can not read response from server

    - by Marc
    I get a quite well-known error using the MySQL JDBC driver: JDBCConnectionException: Communications link failure, java.io.EOFException: Can not read response from server. This is caused by the wait_timeout parameter in my.cnf. So I decided to use the c3p0 connection pool along with Hibernate. Here is what I added to hibernate.cfg.xml: <property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property> <property name="c3p0.min_size">10</property> <property name="c3p0.max_size">100</property> <property name="c3p0.timeout">1000</property> <property name="c3p0.preferredTestQuery">SELECT 1</property> <property name="c3p0.acquire_increment">1</property> <property name="c3p0.idle_test_period">2</property> <property name="c3p0.max_statements">50</property> idle_test_period is voluntarily low for test purposes. Looking at the MySQL logs I can see the "SELECT 1" request which is regularly sent to the MySQL server, so it works. Unfortunately I still get this EOF exception within my app if I wait longer than wait_timeout seconds (set to 10 for test purposes). I'm using Hibernate 4.1.1 and mysql-jdbc-connector 5.1.18. So what am I doing wrong? Thanks, Marc.
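
    One hedged observation: when c3p0 is configured through hibernate.cfg.xml, Hibernate forwards the properties that carry the hibernate.c3p0. prefix, and bare c3p0.* entries there may not reach the pool at all, which could explain idle testing appearing to work while stale connections are still handed out. A sketch of the prefixed form, with a checkout test added for belt and braces:

        <property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
        <property name="hibernate.c3p0.min_size">10</property>
        <property name="hibernate.c3p0.max_size">100</property>
        <property name="hibernate.c3p0.timeout">1000</property>
        <property name="hibernate.c3p0.acquire_increment">1</property>
        <property name="hibernate.c3p0.idle_test_period">2</property>
        <property name="hibernate.c3p0.max_statements">50</property>
        <property name="hibernate.c3p0.preferredTestQuery">SELECT 1</property>
        <property name="hibernate.c3p0.testConnectionOnCheckout">true</property>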

    Read the article

  • Rapid Opening and Closing System.IO.StreamWriter in C#

    - by ccomet
    Suppose you have a file that you are programmatically logging information into with regards to a process. Kinda like your typical debug Console.WriteLine, but due to the nature of the code you're testing, you don't have a console to write onto so you have to write it somewhere like a file. My current program uses System.IO.StreamWriter for this task. My question is about the approach to using the StreamWriter. Is it better to open just one StreamWriter instance, do all of the writes, and close it when the entire process is done? Or is it a better idea to open a new StreamWriter instance to write a line into the file, then immediately close it, and do this for every time something needs to be written in? In the latter approach, this would probably be facilitated by a method that would do just that for a given message, rather than bloating the main process code with excessive amounts of lines. But having a method to aid in that implementation doesn't necessarily make it the better choice. Are there significant advantages to picking one approach or the other? Or are they functionally equivalent, leaving the choice on the shoulders of the programmer?
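
    For what it's worth, a hedged Java-flavored sketch of the usual middle ground: keep one writer open for the whole run, flush after every line so a crash loses nothing readable, and wrap it in a small helper so call sites stay clean. The same shape applies to StreamWriter with AutoFlush in .NET:

        import java.io.*;

        public class RunLog implements Closeable {
            private final PrintWriter out;

            public RunLog(String path) throws IOException {
                out = new PrintWriter(new FileWriter(path, true)); // open once, append
            }

            public void log(String message) {
                out.println(message);
                out.flush(); // line reaches the OS immediately; handle stays open
            }

            @Override public void close() { out.close(); }
        }

    Opening and closing per line is functionally equivalent but pays the file-open cost on every message, which starts to matter once logging gets chatty.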

    Read the article

  • Need some help understanding IO Statistics

    - by Abe Miessler
    I have a query that has a very costly INDEX SEEK operation in the execution plan. In order to track down the cause I set IO STATISTICS on and ran it. In the problem section it gave the following statistics:

    Table '#TempStudents_Enrollment2_____________________________________000000004D5F'. Scan count 0, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#TempRace2______________________________________________000000004D58'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'RefRace'. Scan count 120, logical reads 240, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'RefFedEnctyRaceCatg'. Scan count 18, logical reads 36, physical reads 2, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#43B0BA0F'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#42BC95D6'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#41C8719D'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#40D44D64'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#LEA2_________________________________________________000000004D56'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#39332B9C'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#School2________________________________________________000000004D57'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#GenderKey______________________________________________000000004D5A'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#LangAcqKey_____________________________________________000000004D5B'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#TransferCatKey___________________________________________000000004D5C'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#ResCatKey______________________________________________000000004D5D'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'RPT_SnapShot_1_4_StuPgm_Denorm'. Scan count 2344954, logical reads 4992518, physical reads 16, read-ahead reads 8, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#3FE0292B'. Scan count 1, logical reads 2344954, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'RPT_SnapShot_1_4_StuEnrlmt_Denorm'. Scan count 20, logical reads 87679, physical reads 0, read-ahead reads 87425, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#GradeKey_______________________________________________000000004D59'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    What should I look for in here when I'm looking to improve the performance? The line with over 2 million for the scan count looked suspicious to me, but I really don't know. Does anyone see anything here that I should look into in more detail?

    Read the article

  • Problem with creating a deterministic finite automaton (DFA) - Mercury

    - by Jabba The hut
    I would like to have a deterministic finite automaton (DFA) simulated in Mercury, but I'm stuck at several places. Formally, a DFA is described by the following characteristics: a set of states S, an input alphabet Σ, a transition function δ : S × Σ → S, a start state s ∈ S, and a set of accepting final states F ⊆ S. A DFA always starts in the start state. Then the DFA reads all the characters on the input, one by one. Based on the current input character and the current state, a move is made to a new state. These transitions are defined in the transition function. If the DFA is in one of its accepting final states after reading the last character, the DFA accepts the input; if not, the input is rejected. The figure shows a DFA accepting the strings in which the number of zeros is a multiple of three. State 1 is the initial state, and also the only accepting state. For each input character the corresponding arc is followed to the next state. Link to Figure. What must be done: A type "mystate" which represents a state; each state has a number which is used for identification. A type "transition" that represents a possible transition between states; each transition has a source_state, an input_character, and a final_state. A type "statemachine" that represents the entire DFA. In the solution, the DFA must have the following properties: the set of all states, the input alphabet, a transition function represented as a set of possible transitions, a set of accepting final states, and the current state of the DFA. A predicate "init_machine(statemachine :: out)" which unifies its argument with the DFA as shown in the figure; the current state of the DFA is set to its initial state, namely 1, and the input alphabet of the DFA is composed of the characters '0' and '1'. A user can enter text, which will be checked by the DFA; the program continues until the user types Ctrl-D, which simulates an EOF. If the user enters characters that are not in the input alphabet of the DFA, an error message is printed and the program closes (pred require). Example: Enter a sentence: 0110 String is not ok! Enter a sentence: 011101 String is not ok! Enter a sentence: 110100 String is ok! Enter a sentence: 000110010 String is ok! Enter a sentence: 011102 Uncaught exception Mercury: Software Error: Character does not belong to the input alphabet! The thing I have so far: :- module dfa. :- interface. :- import_module io. :- pred main(io.state::di, io.state::uo) is det. :- implementation. :- import_module int,string,list,bool. 1 :- type mystate ---> state(int). 2 :- type transition ---> trans(source_state::mystate, input_character::bool, final_state::mystate). 3 (error, finale_state and current_state and input_character) :- type statemachine ---> dfa(list(mystate),list(input_character),list(transition),list(final_state),current_state(mystate)) 4 missing a lot :- pred init_machine(statemachine :: out) is det. %init_machine(statemachine(L_Mystate,0,L_transition,L_final_state,1)) :- <-probably fault 5 not perfect main(!IO) :- io.write_string("\nEnter a sentence: ", !IO), io.read_line_as_string(Input, !IO), ( Invoer = ok(StringVar), S1 = string.strip(StringVar), (if S1 = "mustbeabool" then io.write_string("Sentenceis Ok! ", !IO) else io.write_string("Sentence is not Ok!.", !IO)), main(!IO) ; Invoer = eof ; Invoer = error(ErrorCode), io.format("%s\n", [s(io.error_message(ErrorCode))], !IO) ). Hope you can help me. Kind regards
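
    Not Mercury, but as a language-neutral reference point the automaton described above is just a transition table plus a current state; a compact Java sketch of the multiple-of-three-zeros DFA, including the out-of-alphabet error:

        public class Dfa {
            // States 1..3; state 1 is initial and the only accepting state.
            // On '0' advance to the next state (mod 3); on '1' stay put.
            static final int[][] NEXT = {
                {2, 1},  // state 1: '0' -> 2, '1' -> 1
                {3, 2},  // state 2: '0' -> 3, '1' -> 2
                {1, 3},  // state 3: '0' -> 1, '1' -> 3
            };

            static boolean accepts(String input) {
                int state = 1;
                for (char c : input.toCharArray()) {
                    if (c != '0' && c != '1')
                        throw new IllegalArgumentException("not in the input alphabet: " + c);
                    state = NEXT[state - 1][c - '0'];
                }
                return state == 1; // accepting state
            }

            public static void main(String[] args) {
                System.out.println(accepts("110100")); // true  (three zeros)
                System.out.println(accepts("0110"));   // false (two zeros)
            }
        }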

    Read the article

  • Mongodump on Gridfs is killing the host IOs

    - by Raphael
    I'm trying to make a mongodump from our production MongoDB while production is running. We have three production instances: one regular MongoDB, one with a few GB of data on GridFS, and one with a larger amount of data on GridFS. All MongoDB instances are running version 2.4.9 on an Ubuntu 10.04 virtual server. I use a mongodump command to export the databases to another server. Unfortunately our machines are virtually hosted in a "low performance" datacenter (VMware based), so when I try to export the large GridFS db, the disk IO hits 100% (and 50% of the CPU starts waiting for IO too). This has a very negative impact on the production applications because db access time increases excessively, making the applications unusable. I'm looking for a way to throttle the mongodump so the export goes slower but is easier on the hardware resources, allowing the applications to keep acceptable performance. Has anyone had a similar scenario?

    Read the article

  • Questions related to writing your own file downloader using multiple threads in Java

    - by Shekhar
    Hello In my current company, i am doing a PoC on how we can write a file downloader utility. We have to use socket programming(TCP/IP) for downloading the files. One of the requirements of the client is that a file(which will be large in size) should be transfered in chunks for example if we have a file of 5Mb size then we can have 5 threads which transfer 1 Mb each. I have written a small application which downloads a file. You can download the eclipe project from http://www.fileflyer.com/view/QM1JSC0 A brief explanation of my classes FileSender.java This class provides the bytes of file. It has a method called sendBytesOfFile(long start,long end, long sequenceNo) which gives the number of bytes. import java.io.File; import java.io.IOException; import java.util.zip.CRC32; import org.apache.commons.io.FileUtils; public class FileSender { private static final String FILE_NAME = "C:\\shared\\test.pdf"; public ByteArrayWrapper sendBytesOfFile(long start,long end, long sequenceNo){ try { File file = new File(FILE_NAME); byte[] fileBytes = FileUtils.readFileToByteArray(file); System.out.println("Size of file is " +fileBytes.length); System.out.println(); System.out.println("Start "+start +" end "+end); byte[] bytes = getByteArray(fileBytes, start, end); ByteArrayWrapper wrapper = new ByteArrayWrapper(bytes, sequenceNo); return wrapper; } catch (IOException e) { throw new RuntimeException(e); } } private byte[] getByteArray(byte[] bytes, long start, long end){ long arrayLength = end-start; System.out.println("Start : "+start +" end : "+end + " Arraylength : "+arrayLength +" length of source array : "+bytes.length); byte[] arr = new byte[(int)arrayLength]; for(int i = (int)start, j =0; i < end;i++,j++){ arr[j] = bytes[i]; } return arr; } public static long fileSize(){ File file = new File(FILE_NAME); return file.length(); } } Second Class is FileReceiver.java - This class receives the file. Small Explanation what this file does This class finds the size of the file to be fetched from Sender Depending upon the size of the file it finds the start and end position till the bytes needs to be read. It starts n number of threads giving each thread start,end, sequence number and a list which all the threads share. Each thread reads the number of bytes and creates a ByteArrayWrapper. ByteArrayWrapper objects are added to the list Then i have while loop which basically make sure that all threads have done their work finally it sorts the list based on the sequence number. then the bytes are joined, and a complete byte array is formed which is converted to a file. 
Code of File Receiver package com.filedownloader; import java.io.File; import java.io.IOException; import java.util.ArrayList; import java.util.Collections; import java.util.Comparator; import java.util.List; import java.util.zip.CRC32; import org.apache.commons.io.FileUtils; public class FileReceiver { public static void main(String[] args) { FileReceiver receiver = new FileReceiver(); receiver.receiveFile(); } public void receiveFile(){ long startTime = System.currentTimeMillis(); long numberOfThreads = 10; long filesize = FileSender.fileSize(); System.out.println("File size received "+filesize); long start = filesize/numberOfThreads; List<ByteArrayWrapper> list = new ArrayList<ByteArrayWrapper>(); for(long threadCount =0; threadCount<numberOfThreads ;threadCount++){ FileDownloaderTask task = new FileDownloaderTask(threadCount*start,(threadCount+1)*start,threadCount,list); new Thread(task).start(); } while(list.size() != numberOfThreads){ // this is done so that all the threads should complete their work before processing further. //System.out.println("Waiting for threads to complete. List size "+list.size()); } if(list.size() == numberOfThreads){ System.out.println("All bytes received "+list); Collections.sort(list, new Comparator<ByteArrayWrapper>() { @Override public int compare(ByteArrayWrapper o1, ByteArrayWrapper o2) { long sequence1 = o1.getSequence(); long sequence2 = o2.getSequence(); if(sequence1 < sequence2){ return -1; }else if(sequence1 > sequence2){ return 1; } else{ return 0; } } }); byte[] totalBytes = list.get(0).getBytes(); byte[] firstArr = null; byte[] secondArr = null; for(int i = 1;i<list.size();i++){ firstArr = totalBytes; secondArr = list.get(i).getBytes(); totalBytes = concat(firstArr, secondArr); } System.out.println(totalBytes.length); convertToFile(totalBytes,"c:\\tmp\\test.pdf"); long endTime = System.currentTimeMillis(); System.out.println("Total time taken with "+numberOfThreads +" threads is "+(endTime-startTime)+" ms" ); } } private byte[] concat(byte[] A, byte[] B) { byte[] C= new byte[A.length+B.length]; System.arraycopy(A, 0, C, 0, A.length); System.arraycopy(B, 0, C, A.length, B.length); return C; } private void convertToFile(byte[] totalBytes,String name) { try { FileUtils.writeByteArrayToFile(new File(name), totalBytes); } catch (IOException e) { throw new RuntimeException(e); } } } Code of ByteArrayWrapper package com.filedownloader; import java.io.Serializable; public class ByteArrayWrapper implements Serializable{ private static final long serialVersionUID = 3499562855188457886L; private byte[] bytes; private long sequence; public ByteArrayWrapper(byte[] bytes, long sequenceNo) { this.bytes = bytes; this.sequence = sequenceNo; } public byte[] getBytes() { return bytes; } public long getSequence() { return sequence; } } Code of FileDownloaderTask import java.util.List; public class FileDownloaderTask implements Runnable { private List<ByteArrayWrapper> list; private long start; private long end; private long sequenceNo; public FileDownloaderTask(long start,long end,long sequenceNo,List<ByteArrayWrapper> list) { this.list = list; this.start = start; this.end = end; this.sequenceNo = sequenceNo; } @Override public void run() { ByteArrayWrapper wrapper = new FileSender().sendBytesOfFile(start, end, sequenceNo); list.add(wrapper); } } Questions related to this code 1) Does file downloading becomes fast when multiple threads is used? In this code i am not able to see the benefit. 2) How should i decide how many threads should i create ? 
3) Are there any open-source libraries which do that? 4) The file the receiver gets is valid and not corrupted, but the checksum (I used FileUtils from commons-io) does not match. What's the problem? 5) This code gives an out-of-memory error when used with a large file (above 100 MB), because of the byte arrays that are created. How can I avoid that? I know this is very bad code, but I had to write it in one day :-). Please suggest any other good way to do this? Thanks, Shekhar
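
    On question 5, one hedged way to avoid the OutOfMemoryError is to never build per-chunk byte arrays at all: each worker can stream its byte range directly into the right offset of the output file, which also removes the sorting and concatenation steps. A Java sketch of that idea (paths are the placeholders used in the question):

        import java.io.*;
        import java.util.concurrent.*;

        public class ChunkedCopy {
            public static void main(String[] args) throws Exception {
                File src = new File("c:\\shared\\test.pdf"); // placeholder
                File dst = new File("c:\\tmp\\test.pdf");    // placeholder
                int threads = 5;
                long size = src.length(), chunk = (size + threads - 1) / threads;

                ExecutorService pool = Executors.newFixedThreadPool(threads);
                for (int i = 0; i < threads; i++) {
                    final long start = i * chunk;
                    final long end = Math.min(start + chunk, size);
                    pool.submit(() -> {
                        // Each task streams its own range; no big arrays, no sorting.
                        try (RandomAccessFile in = new RandomAccessFile(src, "r");
                             RandomAccessFile out = new RandomAccessFile(dst, "rw")) {
                            in.seek(start);
                            out.seek(start);
                            byte[] buf = new byte[8192];
                            long remaining = end - start;
                            while (remaining > 0) {
                                int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
                                if (n < 0) break;
                                out.write(buf, 0, n);
                                remaining -= n;
                            }
                        } catch (IOException e) {
                            throw new UncheckedIOException(e);
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS); // replaces the busy-wait loop
            }
        }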

    Read the article

  • Java textfile I/O problem

    - by KáGé
    Hello, I have to make a torpedo game for school with a toplist for it. I want to store it in a folder structure near the JAR: /Torpedo/local/toplist/top_i.dat, where the i is the place of that score. The files will be created at the first start of the program with this call: File f; f = new File(Toplist.toplistPath+"/top_1.dat"); if(!f.exists()){ Toplist.makeToplist(); } Here is the toplist class: package main; import java.awt.Color; import java.io.BufferedReader; import java.io.File; import java.io.FileNotFoundException; import java.io.FileReader; import java.io.FileWriter; import java.io.IOException; import java.io.PrintWriter; import java.text.SimpleDateFormat; import java.util.Calendar; import java.util.prefs.Preferences; import javax.swing.JFrame; import javax.swing.JOptionPane; import javax.swing.JTextArea; public class Toplist { static String toplistPath = "./Torpedo/local/toplist"; //I know it won't work this easily, it's only to get you the idea public static JFrame toplistWindow = new JFrame("Torpedó - [TOPLISTA]"); public static JTextArea toplist = new JTextArea(""); static StringBuffer toplistData = new StringBuffer(3000); public Toplist() { toplistWindow.setSize(500, 400); toplistWindow.setLocationRelativeTo(null); toplistWindow.setResizable(false); getToplist(); toplist.setSize(400, 400); toplist.setLocation(0, 100); toplist.setColumns(5); toplist.setText(toplistData.toString()); toplist.setEditable(false); toplist.setBackground(Color.WHITE); toplistWindow.setLayout(null); toplistWindow.setVisible(true); } public Toplist(Player winner) { //this is to be done yet, this will set the toplist at first and then display it toplistWindow.setLayout(null); toplistWindow.setVisible(true); } /** * Creates a new toplist */ public static void makeToplist(){ new File(toplistPath).mkdir(); for(int i = 1; i <= 10; i++){ File f = new File(toplistPath+"/top_"+i+".dat"); try { f.createNewFile(); } catch (IOException e) { JOptionPane.showMessageDialog(new JFrame(), "Fájl hiba: toplista létrehozása", "Error", JOptionPane.ERROR_MESSAGE); } } } /** * If the score is a top score it inserts it into the list * * @param score - the score to be checked */ public static void setToplist(int score, Player winner){ BufferedReader input = null; PrintWriter output = null; int topscore; for(int i = 1; i <= 10; i++){ try { input = new BufferedReader(new FileReader(toplistPath+"/top_"+i+",dat")); String s; topscore = Integer.parseInt(input.readLine()); if(score > topscore){ for(int j = 9; j >= i; j--){ input = new BufferedReader(new FileReader(toplistPath+"/top_"+j+".dat")); output = new PrintWriter(new FileWriter(toplistPath+"/top_"+(j+1)+".dat")); while ((s = input.readLine()) != null) { output.println(s); } } output = new PrintWriter(new FileWriter(toplistPath+"/top_"+i+".dat")); output.println(score); output.println(winner.name); if(winner.isLocal){ output.println(Torpedo.session.remote.name); }else{ output.println(Torpedo.session.remote.name); } output.println(Torpedo.session.mapName); output.println(DateUtils.now()); break; } } catch (FileNotFoundException e) { JOptionPane.showMessageDialog(new JFrame(), "Fájl hiba: toplista frissítése", "Error", JOptionPane.ERROR_MESSAGE); } catch (IOException e) { JOptionPane.showMessageDialog(new JFrame(), "Fájl hiba: toplista frissítése", "Error", JOptionPane.ERROR_MESSAGE); } finally { if (input != null) { try { input.close(); } catch (IOException e) { JOptionPane.showMessageDialog(new JFrame(), "Fájl hiba: toplista frissítése", "Error", 
JOptionPane.ERROR_MESSAGE); } } if (output != null) { output.close(); } } } } /** * This loads the toplist into the buffer */ public static void getToplist(){ BufferedReader input = null; toplistData = null; String s; for(int i = 1; i <= 10; i++){ try { input = new BufferedReader(new FileReader(toplistPath+"/top_"+i+".dat")); while((s = input.readLine()) != null){ toplistData.append(s); toplistData.append('\t'); } toplistData.append('\n'); } catch (FileNotFoundException e) { JOptionPane.showMessageDialog(new JFrame(), "Fájl hiba: toplista betöltése", "Error", JOptionPane.ERROR_MESSAGE); } catch (IOException e) { JOptionPane.showMessageDialog(new JFrame(), "Fájl hiba: toplista betöltése", "Error", JOptionPane.ERROR_MESSAGE); } } } /** * * @author http://www.rgagnon.com/javadetails/java-0106.html * */ public static class DateUtils { public static final String DATE_FORMAT_NOW = "yyyy-MM-dd HH:mm:ss"; public static String now() { Calendar cal = Calendar.getInstance(); SimpleDateFormat sdf = new SimpleDateFormat(DATE_FORMAT_NOW); return sdf.format(cal.getTime()); } } } The problem is, that it can't access any of the files. I've tried adding them to the classpath and at least six different variations of file/path handling I found online but nothing worked. Could anyone tell me what do I do wrong? Thank you.

    Read the article

  • Handling file upload in a non-blocking manner

    - by Kaliyug Antagonist
    The background thread is here Just to make objective clear - the user will upload a large file and must be redirected immediately to another page for proceeding different operations. But the file being large, will take time to be read from the controller's InputStream. So I unwillingly decided to fork a new Thread to handle this I/O. The code is as follows : The controller servlet /** * @see HttpServlet#doPost(HttpServletRequest request, HttpServletResponse * response) */ protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { // TODO Auto-generated method stub System.out.println("In Controller.doPost(...)"); TempModel tempModel = new TempModel(); tempModel.uploadSegYFile(request, response); System.out.println("Forwarding to Accepted.jsp"); /*try { Thread.sleep(1000 * 60); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); }*/ request.getRequestDispatcher("/jsp/Accepted.jsp").forward(request, response); } The model class package com.model; import java.io.IOException; import java.util.concurrent.ExecutionException; import java.util.concurrent.Future; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import com.utils.ProcessUtils; public class TempModel { public void uploadSegYFile(HttpServletRequest request, HttpServletResponse response) { // TODO Auto-generated method stub System.out.println("In TempModel.uploadSegYFile(...)"); /* * Trigger the upload/processing code in a thread, return immediately * and notify when the thread completes */ try { FileUploaderRunnable fileUploadRunnable = new FileUploaderRunnable( request.getInputStream()); /* * Future<FileUploaderRunnable> future = ProcessUtils.submitTask( * fileUploadRunnable, fileUploadRunnable); * * FileUploaderRunnable processed = future.get(); * * System.out.println("Is file uploaded : " + * processed.isFileUploaded()); */ Thread uploadThread = new Thread(fileUploadRunnable); uploadThread.start(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } /* * catch (InterruptedException e) { // TODO Auto-generated catch block * e.printStackTrace(); } catch (ExecutionException e) { // TODO * Auto-generated catch block e.printStackTrace(); } */ System.out.println("Returning from TempModel.uploadSegYFile(...)"); } } The Runnable package com.model; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; import java.nio.ByteBuffer; import java.nio.channels.Channels; import java.nio.channels.ReadableByteChannel; public class FileUploaderRunnable implements Runnable { private boolean isFileUploaded = false; private InputStream inputStream = null; public FileUploaderRunnable(InputStream inputStream) { // TODO Auto-generated constructor stub this.inputStream = inputStream; } public void run() { // TODO Auto-generated method stub /* Read from InputStream. 
If success, set isFileUploaded = true */ System.out.println("Starting upload in a thread"); File outputFile = new File("D:/06c01_output.seg");/* * This will be changed * later */ FileOutputStream fos; ReadableByteChannel readable = Channels.newChannel(inputStream); ByteBuffer buffer = ByteBuffer.allocate(1000000); try { fos = new FileOutputStream(outputFile); while (readable.read(buffer) != -1) { fos.write(buffer.array()); buffer.clear(); } fos.flush(); fos.close(); readable.close(); } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } System.out.println("File upload thread completed"); } public boolean isFileUploaded() { return isFileUploaded; } } My queries/doubts : Spawning threads manually from the Servlet makes sense to me logically but scares me coding wise - the container isn't aware of these threads after all(I think so!) The current code is giving an Exception which is quite obvious - the stream is inaccessible as the doPost(...) method returns before the run() method completes : In Controller.doPost(...) In TempModel.uploadSegYFile(...) Returning from TempModel.uploadSegYFile(...) Forwarding to Accepted.jsp Starting upload in a thread Exception in thread "Thread-4" java.lang.NullPointerException at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:512) at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:497) at org.apache.coyote.http11.InternalInputBuffer$InputStreamInputBuffer.doRead(InternalInputBuffer.java:559) at org.apache.coyote.http11.AbstractInputBuffer.doRead(AbstractInputBuffer.java:324) at org.apache.coyote.Request.doRead(Request.java:422) at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:287) at org.apache.tomcat.util.buf.ByteChunk.substract(ByteChunk.java:407) at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:310) at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:202) at java.nio.channels.Channels$ReadableByteChannelImpl.read(Unknown Source) at com.model.FileUploaderRunnable.run(FileUploaderRunnable.java:39) at java.lang.Thread.run(Unknown Source) Keeping in mind the point 1., does the use of Executor framework help me in anyway ? package com.utils; import java.util.concurrent.Future; import java.util.concurrent.ScheduledThreadPoolExecutor; public final class ProcessUtils { /* Ensure that no more than 2 uploads,processing req. are allowed */ private static final ScheduledThreadPoolExecutor threadPoolExec = new ScheduledThreadPoolExecutor( 2); public static <T> Future<T> submitTask(Runnable task, T result) { return threadPoolExec.submit(task, result); } } So how should I ensure that the user doesn't block and the stream remains accessible so that the (uploaded)file can be read from it?
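
    One hedged reading of the NullPointerException: the request's InputStream belongs to the container and dies when doPost returns, so the upload bytes must be drained before responding; only the slow processing belongs on the background executor. A sketch of that split (class and method names are illustrative):

        import java.io.*;
        import java.nio.file.*;
        import java.util.concurrent.*;

        public class UploadHandoff {
            private static final ExecutorService POOL = Executors.newFixedThreadPool(2);

            // Called from doPost: copy the stream to a temp file synchronously,
            // then return at once while processing continues in the background.
            public static void accept(InputStream requestBody) throws IOException {
                Path tmp = Files.createTempFile("upload-", ".segy");
                Files.copy(requestBody, tmp, StandardCopyOption.REPLACE_EXISTING);
                POOL.submit(() -> process(tmp)); // request objects no longer needed
            }

            private static void process(Path file) {
                System.out.println("processing " + file); // placeholder for the slow step
            }
        }

    The copy is bounded by the client's upload speed anyway, so the redirect happens as soon as the bytes are in, and the long-running part no longer touches the request.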

    Read the article

  • Why does my program not read full files?

    - by user593395
    I have written code in Java to read the content of a file. But it is working for small line of file only not for more than 1000 line of file. Please tell me me what error I have made in the below program. program: import java.io.DataInputStream; import java.io.DataOutputStream; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.util.regex.Matcher; import java.util.regex.Pattern; public class aaru { public static void main(String args[]) throws FileNotFoundException { File sourceFile = new File("E:\\parser\\parse3.txt"); File destinationFile = new File("E:\\parser\\new.txt"); FileInputStream fileIn = new FileInputStream(sourceFile); FileOutputStream fileOut = new FileOutputStream(destinationFile); DataInputStream dataIn = new DataInputStream(fileIn); DataOutputStream dataOut = new DataOutputStream(fileOut); String str=""; String[] st; String sub[]=null; String word=""; String contents=""; String total=""; String stri="";; try { while((contents=dataIn.readLine())!=null) { total = contents.replaceAll(",",""); String str1=total.replaceAll("--",""); String str2=str1.replaceAll(";","" ); String str3=str2.replaceAll("&","" ); String str4=str3.replaceAll("^","" ); String str5=str4.replaceAll("#","" ); String str6=str5.replaceAll("!","" ); String str7=str6.replaceAll("/","" ); String str8=str7.replaceAll(":","" ); String str9=str8.replaceAll("]","" ); String str10=str9.replaceAll("\\?",""); String str11=str10.replaceAll("\\*",""); String str12=str11.replaceAll("\\'",""); Pattern pattern = Pattern.compile("\\s+", Pattern.CASE_INSENSITIVE | Pattern.DOTALL | Pattern.MULTILINE); Matcher matcher = pattern.matcher(str12); //boolean check = matcher.find(); String result=str12; Pattern p=Pattern.compile("^www\\.|\\@"); Matcher m=p.matcher(result); stri = m.replaceAll(" "); int i; int j; st=stri.split("\\."); for(i=0;i<st.length;i++) { st[i]=st[i].trim(); /*if(st[i].startsWith(" ")) st[i]=st[i].substring(1,st[i].length);*/ sub=st[i].split(" "); if(sub.length>1) { for(j=0;j<sub.length-1;j++) { word = word+sub[j]+","+sub[j+1]+"\r\n"; } } else { word = word+st[i]+"\r\n"; } } } System.out.println(word); dataOut.writeBytes(word+"\r\n"); fileIn.close(); fileOut.close(); dataIn.close(); dataOut.close(); } catch(Exception e) { System.out.print(e); } } }

    Read the article

  • Write asynchronously to file in perl

    - by Stefhen
    Basically I would like to: Read a large amount of data from the network into an array into memory. Asynchronously write this array data, running it thru bzip2 before it hits the disk. repeat.. Is this possible? If this is possible, I know that I will have to somehow read the next pass of data into a different array as the AIO docs say that this array must not be altered before the async write is complete. I would like to background all of my writes to disk in order as the bzip2 pass is going to take much longer than the network read. Is this doable? Below is a simple example of what I think is needed, but this just reads a file into array @a for testing. use warnings; use strict; use EV; use IO::AIO; use Compress::Bzip2; use FileHandle; use Fcntl; my @a; print "loading to array...\n"; while(<>) { $a[$. - 1] = $_; } print "array loaded...\n"; my $aio_w = EV::io IO::AIO::poll_fileno, EV::WRITE, \&IO::AIO::poll_cb; aio_open "./out", O_WRONLY || O_NONBLOCK, 0, sub { my $fh = shift or die "error while opening: $!\n"; aio_write $fh, undef, undef, $a, -1, sub { $_[0] > 0 or die "error: $!\n"; EV::unloop; }; }; EV::loop EV::LOOP_NONBLOCK;

    Read the article

  • In Haskell, I want to read a file and then write to it. Do I need strictness annotation?

    - by Steve
    Hi, I'm still quite new to Haskell. I want to read the contents of a file, do something with it possibly involving IO (using putStrLn for now) and then write new contents to the same file. I came up with: doit :: String -> IO () doit file = do contents <- withFile tagfile ReadMode $ \h -> hGetContents h putStrLn contents withFile tagfile WriteMode $ \h -> hPutStrLn h "new content" However this doesn't work due to laziness. The file contents are not printed. I found this post which explains it well. The solution proposed there is to include putStrLn within the withFile: doit :: String -> IO () doit file = do withFile tagfile ReadMode $ \h -> do contents <- hGetContents h putStrLn contents withFile tagfile WriteMode $ \h -> hPutStrLn h "new content" This works, but it's not what I want to do. The operation that will eventually replace putStrLn might be long, and I don't want to keep the file open the whole time. In general I just want to be able to get the file content out and then close the file before working with that content. The solution I came up with is the following: doit :: String -> IO () doit file = do c <- newIORef "" withFile tagfile ReadMode $ \h -> do a <- hGetContents h writeIORef c $! a d <- readIORef c putStrLn d withFile tagfile WriteMode $ \h -> hPutStrLn h "Test" However, I find this long and a bit obfuscated. I don't think I should need an IORef just to get a value out, but I needed a "place" to put the file contents. Also, it still didn't work without the strictness annotation $! for writeIORef. I guess IORefs are not strict by nature? Can anyone recommend a better, shorter way to do this while keeping my desired semantics? Thanks!

    Read the article
