Search Results

Search found 5919 results on 237 pages for 'io priority'.


  • What is the fastest way for reading huge files in Delphi?

    - by dummzeuch
    My program needs to read chunks from a huge binary file with random access. I have a list of offsets and lengths which may have several thousand entries. The user selects an entry, and the program seeks to the offset and reads length bytes. The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this:

        FileStream.Position := Offset;
        MemoryStream.CopyFrom(FileStream, Size);

    This works fine, but unfortunately it becomes increasingly slower as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 KB in size. The file's content is only read by my program, which is the only program accessing the file at the time. The files are stored locally, so this is not a network issue. I am using Delphi 2007 on a Windows XP box. What can I do to speed up this file access?
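
    A hedged sketch of one thing to try (not from the original thread; the file name is hypothetical, and the exact THandleStream constructor signature varies by Delphi version): TFileStream opens the file with no access-pattern hint, so the Windows cache manager assumes sequential access and reads ahead. Opening the handle directly with FILE_FLAG_RANDOM_ACCESS and wrapping it in a THandleStream keeps the TStream interface while changing the caching behaviour.

        uses Windows, Classes;

        var
          H: THandle;
          Stream: THandleStream;
        begin
          // FILE_FLAG_RANDOM_ACCESS tells the cache manager not to read
          // ahead aggressively on this handle.
          H := CreateFile('huge.bin', GENERIC_READ, FILE_SHARE_READ, nil,
                          OPEN_EXISTING, FILE_FLAG_RANDOM_ACCESS, 0);
          if H <> INVALID_HANDLE_VALUE then
          begin
            Stream := THandleStream.Create(H);
            // ... seek and CopyFrom as before ...
            Stream.Free;
            CloseHandle(H);
          end;
        end;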

    Read the article

  • Java how to copy part of a file

    - by user3479074
    I have to read a file and, depending on the content of its last lines, copy most of its content into a new file. Unfortunately I haven't found a way to copy the first n lines or characters of a file in Java. The only way I found is copying the file using nio FileChannels, where I can specify the length in bytes. However, for that I would need to know how many bytes the content I read occupied in the source file. Does anyone know a solution for one of these problems?
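
    A minimal Java sketch of one approach (file names and line count hypothetical, not from the original thread): copying line by line side-steps the byte-counting problem entirely, at the cost of normalising line endings. Counting bytes for a FileChannel transfer would additionally require knowing the file's encoding and line-terminator style.

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class CopyHead {
            // Copies the first n lines of src into dst.
            static void copyFirstLines(Path src, Path dst, int n) throws IOException {
                try (BufferedReader in = Files.newBufferedReader(src, StandardCharsets.UTF_8);
                     BufferedWriter out = Files.newBufferedWriter(dst, StandardCharsets.UTF_8)) {
                    String line;
                    for (int i = 0; i < n && (line = in.readLine()) != null; i++) {
                        out.write(line);
                        out.newLine();
                    }
                }
            }

            public static void main(String[] args) throws IOException {
                copyFirstLines(Paths.get("source.txt"), Paths.get("head.txt"), 100);
            }
        }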

    Read the article

  • Explained shell statement

    - by Mats Stijlaart
    The following statement removes line numbers from a txt file:

        cat withLineNumbers.txt | sed 's/^.......//' >> withoutLineNumbers.txt

    The input file is created with the following statement (this one I understand):

        nl -ba input.txt >> withLineNumbers.txt

    I know the functionality of cat, and I know the output is written to the 'withoutLineNumbers.txt' file, but the | sed 's/^.......//' part is not really clear to me. Thanks for your time.
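
    For what it's worth: in sed, s/^.......// substitutes the first seven characters of every line (the ^ anchor plus seven dots, each dot matching any single character) with nothing. Seven is exactly the width of the column that nl -ba prepends by default - a six-character number field plus a tab separator. A quick demonstration:

        printf '     1\thello\n' | sed 's/^.......//'    # prints: hello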

    Read the article

  • c++ File input/output

    - by Myx
    Hi: I am trying to read from a file using fgets and sscanf. Each line of my file has characters which I wish to put into a vector. So far, I have the following:

        FILE *fp;
        fp = fopen(filename, "r");
        if (!fp) {
            fprintf(stderr, "Unable to open file %s\n", filename);
            return 0;
        }

        // Read file
        int line_count = 0;
        char buffer[1024];
        while (fgets(buffer, 1023, fp)) {
            // Increment line counter
            line_count++;
            char *bufferp = buffer;
            ...
            while (*bufferp != '\n') {
                char *tmp;
                if (sscanf(bufferp, "%c", tmp) != 1) {
                    fprintf(stderr, "Syntax error reading axiom on "
                                    "line %d in file %s\n", line_count, filename);
                    return 0;
                }
                axiom.push_back(tmp);
                printf("put %s in axiom vector\n", axiom[axiom.size()-1]);
                // increment buffer pointer
                bufferp++;
            }
        }

    My axiom vector is defined as vector<char *> axiom;. When I run my program, I get a seg fault. It happens when I do the sscanf. Any suggestions on what I'm doing wrong?
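
    The likely cause of the crash, sketched rather than quoted from the original thread: %c needs a pointer to a real char to write into, but tmp is an uninitialized char*, so sscanf writes through a garbage pointer. For single characters the vector also wants to be vector<char> rather than vector<char*>. A corrected inner loop might look like this:

        std::vector<char> axiom;                      // one char per element
        ...
        while (*bufferp != '\n' && *bufferp != '\0')  // last line may lack '\n'
        {
            char tmp;                                 // real storage for %c
            if (sscanf(bufferp, "%c", &tmp) != 1) {
                /* report error as before */
            }
            axiom.push_back(tmp);
            printf("put %c in axiom vector\n", axiom.back());
            bufferp++;
        }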

    Read the article

  • How to append text into text file dynamically

    - by niraj deshmukh
        [12]
        key1=val1
        key2=val2
        key3=val3
        key4=val4
        key5=val5
        [13]
        key1=val1
        key2=val2
        key3=val3
        key4=val4
        key5=xyz
        [14]
        key1=val1
        key2=val2
        key3=val3
        key4=val4
        key5=val5

    I want to update key5 to val5 in the [13] section. My code:

        try {
            br = new BufferedReader(new FileReader(oldFileName));
            bw = new BufferedWriter(new FileWriter(tmpFileName));
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(line);
                if (line.contains("[13]")) {
                    while (line.contains("key5")) {
                        if (line.contains("key5")) {
                            line = line.replace("key5", "key5= Val5");
                            bw.write(line + "\n");
                        }
                    }
                }
            }
        } catch (Exception e) {
            return;
        } finally {
            try { if (br != null) br.close(); } catch (IOException e) { }
            try { if (bw != null) bw.close(); } catch (IOException e) { }
        }
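
    A hedged sketch of the usual fix (file names hypothetical, not from the original thread): the posted inner while loop never reads a new line, so it either spins forever or never runs, and lines that don't match are never copied to the temp file at all. A line-by-line rewrite should copy every line, flip a flag on the target section header, and edit just the one key inside that section:

        import java.io.*;
        import java.nio.file.*;

        public class UpdateKey {
            public static void main(String[] args) throws IOException {
                Path oldFile = Paths.get("settings.ini");   // hypothetical names
                Path tmpFile = Paths.get("settings.tmp");
                boolean inSection13 = false;
                try (BufferedReader br = Files.newBufferedReader(oldFile);
                     BufferedWriter bw = Files.newBufferedWriter(tmpFile)) {
                    String line;
                    while ((line = br.readLine()) != null) {
                        if (line.startsWith("[")) {
                            inSection13 = line.equals("[13]"); // entering a new section
                        }
                        if (inSection13 && line.startsWith("key5=")) {
                            line = "key5=val5";                // the one edit
                        }
                        bw.write(line);
                        bw.newLine();                          // copy every line
                    }
                }
                Files.move(tmpFile, oldFile, StandardCopyOption.REPLACE_EXISTING);
            }
        }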

    Read the article

  • Read whole ASCII file into C++ std::string

    - by Arrieta
    Hello, I need to read a whole file into memory and place it in a C++ std::string. If I were to read it into a char array, the answer would be very simple:

        std::ifstream t;
        int length;
        t.open("file.txt");              // open input file
        t.seekg(0, std::ios::end);       // go to the end
        length = t.tellg();              // report location (this is the length)
        t.seekg(0, std::ios::beg);       // go back to the beginning
        char* buffer = new char[length]; // allocate a buffer of the right size
        t.read(buffer, length);          // read the whole file into the buffer
        t.close();                       // close file handle
        // ... do stuff with buffer here ...

    Now, I want to do the exact same thing, but using a std::string instead of a char array. I want to avoid loops, i.e., I don't want to do:

        std::ifstream t;
        t.open("file.txt");
        std::string buffer;
        std::string line;
        while (t) {
            std::getline(t, line);
            // ... append line to buffer and go on
        }
        t.close();

    Any ideas?
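
    A hedged sketch using only the standard library (not necessarily the original thread's accepted answer): a std::string can be built straight from stream-buffer iterators, or by draining rdbuf() into a string stream - no explicit loop either way.

        #include <fstream>
        #include <iterator>
        #include <sstream>
        #include <string>

        // Option 1: construct from istreambuf_iterator (the extra parentheses
        // around the first argument avoid the most-vexing-parse).
        std::ifstream t("file.txt");
        std::string buffer((std::istreambuf_iterator<char>(t)),
                            std::istreambuf_iterator<char>());

        // Option 2: push the whole stream buffer into a string stream.
        std::ifstream t2("file.txt");
        std::ostringstream ss;
        ss << t2.rdbuf();
        std::string buffer2 = ss.str();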

    Read the article

  • Unable to open a file for writing

    - by asdasdas
    I am trying to write to a file. I do a file_exists check on it before fopen, and it returns true (the file does exist). However, the file fails this code and gives me the error every time:

        $handle = fopen($filename, 'w');
        if ($handle) {
            flock($handle, LOCK_EX);
            fwrite($handle, $contents);
        } else {
            echo 'ERROR: Unable to open the file for writing.', PHP_EOL;
            exit();
        }
        flock($handle, LOCK_UN);
        fclose($handle);

    Is there a way I can get more specific error details as to why this file does not let me open it for writing? I know that the filename is legitimate, but for some reason it just won't let me write to it. I do have write permissions; I was able to write and overwrite another file.
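
    A hedged sketch of how to surface the underlying reason: fopen raises a PHP warning carrying the real cause (permissions, open_basedir, the path being a directory, and so on), which error_get_last() can report; is_writable() tests the permission for the PHP process's own user, which often differs from the user who owns the file.

        <?php
        if (!is_writable($filename)) {
            echo "Not writable by the PHP process's user: $filename", PHP_EOL;
        }

        $handle = @fopen($filename, 'w');   // @ hides the warning from output...
        if ($handle === false) {
            $err = error_get_last();        // ...but it is still recorded here
            echo 'fopen failed: ', $err['message'] ?? 'unknown error', PHP_EOL;
            exit;
        }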

    Read the article

  • Error copying file from app bundle

    - by Michael Chen
    I used the Firefox add-on SQLite Manager and created a database, which saved to my desktop as "DB.sqlite". I copied the file into the supporting files for my project. But when I run the app, I immediately get the error:

        Assertion failure in -[AppDelegate copyDatabaseIfNeeded],
        /Users/Mac/Desktop/Note/Note/AppDelegate.m:32
        2014-08-19 23:38:02.830 Note[28309:60b] Terminating app due to uncaught
        exception 'NSInternalInconsistencyException', reason: 'Failed to create
        writable database file with message 'The operation couldn’t be completed.
        (Cocoa error 4.)'.'
        First throw call stack: ...

    Here is the App Delegate code where the error takes place:

        - (void)copyDatabaseIfNeeded {
            NSFileManager *fileManager = [NSFileManager defaultManager];
            NSError *error;
            NSString *dbPath = [self getDBPath];
            BOOL success = [fileManager fileExistsAtPath:dbPath];
            if (!success) {
                NSString *defaultDBPath = [[[NSBundle mainBundle] resourcePath]
                    stringByAppendingPathComponent:@"DB.sqlite"];
                success = [fileManager copyItemAtPath:defaultDBPath toPath:dbPath error:&error];
                if (!success)
                    NSAssert1(0, @"Failed to create writable database file with message '%@'.",
                              [error localizedDescription]);
            }
        }

    I am very new to SQLite, so maybe I didn't create the database correctly in the Firefox SQLite Manager, or maybe I didn't properly copy the .sqlite file in? (I did check the target membership of the .sqlite file and it correctly has my project selected. Also, the .sqlite file names all match up perfectly.)
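
    A hedged guess at the cause: Cocoa error 4 is NSFileNoSuchFileError, so one of the two paths in the copy does not exist - either DB.sqlite never made it into the built product (check Copy Bundle Resources, not just target membership) or the destination directory is missing. A sketch that checks both before copying; getDBPath is the asker's helper, assumed to return a path under Documents:

        NSString *src = [[NSBundle mainBundle] pathForResource:@"DB" ofType:@"sqlite"];
        NSAssert(src != nil, @"DB.sqlite is not in the built bundle");

        NSString *dbPath = [self getDBPath];
        NSString *dbDir = [dbPath stringByDeletingLastPathComponent];
        // Make sure the destination directory exists before copying into it.
        [[NSFileManager defaultManager] createDirectoryAtPath:dbDir
                                  withIntermediateDirectories:YES
                                                   attributes:nil
                                                        error:NULL];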

    Read the article

  • Extract some data from a lot of xml files

    - by LifeH2O
    I have cricket player profiles saved as .xml files in a folder. Each file contains tags like these:

        <playerid>547</playerid>
        <majorteam>England</majorteam>
        <playername>Don</playername>

    The playerid is the same as the .xml file's name (there are about 500 files, each 1-5 KB in size). What I need is to extract the playername, majorteam, and playerid from all these files into a list. I will convert that list to XML later. If you know how I can write it directly to XML, I will be very thankful.
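
    The question doesn't name a language, so here is a hedged Python sketch using the standard library's ElementTree, assuming each file is well-formed XML with a single root element (the folder and output names are hypothetical):

        import glob
        import xml.etree.ElementTree as ET

        players = []
        for path in glob.glob("profiles/*.xml"):
            root = ET.parse(path).getroot()
            players.append({
                "playerid":   root.findtext(".//playerid"),
                "majorteam":  root.findtext(".//majorteam"),
                "playername": root.findtext(".//playername"),
            })

        # Writing the list straight back out as one combined XML document:
        combined = ET.Element("players")
        for p in players:
            node = ET.SubElement(combined, "player")
            for tag, text in p.items():
                ET.SubElement(node, tag).text = text
        ET.ElementTree(combined).write("players.xml", encoding="utf-8")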

    Read the article

  • How to read from database and write into text file with C#?

    - by user147685
    How do I read from a database and write into a text file? I want to write (copy) the records inside my database into a text file, one database row per line in the text file. I'm having no problems on the database side. For creating the text file, the documentation mentions both FileStream and StreamWriter. Which one should I use?
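
    A hedged sketch of one answer: for text output, StreamWriter is the natural fit - FileStream is the raw byte layer underneath it. Assuming SQL Server and a hypothetical table and connection string:

        using System;
        using System.Data.SqlClient;
        using System.IO;

        class DumpTable
        {
            static void Main()
            {
                const string connStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand("SELECT Id, Name FROM People", conn))
                using (var writer = new StreamWriter("people.txt"))
                {
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())       // one database row...
                        {
                            writer.WriteLine("{0},{1}", reader["Id"], reader["Name"]);
                        }                           // ...becomes one text line
                    }
                }
            }
        }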

    Read the article

  • Embarrassingly parallel workflow creates too many output files

    - by Hooked
    On a Linux cluster I run many (N > 10^6) independent computations. Each computation takes only a few minutes and the output is a handful of lines. When N was small I was able to store each result in a separate file to be parsed later. With large N, however, I find that I am wasting storage space (on file-creation overhead) and that simple commands like ls require extra care due to internal limits of bash: -bash: /bin/ls: Argument list too long. Each computation is required to run through the qsub scheduler, so I am unable to create a master program that simply aggregates the output data into a single file. The simple solution of appending to a single file fails when two programs finish at the same time and interleave their output. I have no admin access to the cluster, so installing a system-wide database is not an option. How can I collate the output data from this embarrassingly parallel computation before it gets unmanageable?
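
    A hedged sketch of one approach (paths hypothetical; flock's guarantees can be weaker on NFS-style shared filesystems): util-linux flock can serialize the appends so that concurrent finishers cannot interleave - each job holds an exclusive lock on a sidecar lock file just long enough to append its few lines:

        # At the end of each job's script:
        (
            flock -x 200                                  # block until the lock is ours
            cat "$TMPDIR/my_result.txt" >> /shared/results/combined.txt
        ) 200>/shared/results/combined.lock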

    Read the article

  • What is the magic behind perl read() function and buffer which is not a ref ?

    - by alex8657
    I do not understand how the Perl read($buf) function is able to modify the content of the $buf variable. $buf is not a reference, so (from my C/C++ knowledge) I would expect the parameter to be passed by copy. So how come the $buf variable is modified in the caller? Is it a tied variable or something? The C documentation about setbuf is also quite elusive and unclear to me.

        # Example 1
        $buf = '';                  # It is a scalar, not a ref
        $bytes = $fh->read($buf);
        print $buf;                 # $buf was modified - what is the magic?

        # Example 2
        sub read_it {
            my $buf = shift;
            return $fh->read($buf);
        }

        my $buf;
        $bytes = read_it($buf);
        print $buf;                 # As expected, this scope's $buf was not modified
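
    The mechanism, sketched rather than quoted from the original thread: Perl passes arguments by alias - the elements of @_ are aliases to the caller's variables, so a sub that writes through $_[0] modifies the caller's scalar, and read() relies on exactly this. shift copies the value into a fresh lexical, which breaks the alias; that is why Example 2 sees no change.

        #!/usr/bin/perl
        use strict;
        use warnings;

        sub fill_via_alias {
            $_[0] = "filled";      # writes through the alias to the caller's $buf
        }

        sub fill_via_copy {
            my $copy = shift;      # shift copies; the alias is gone
            $copy = "filled";
        }

        my $buf = "";
        fill_via_alias($buf);
        print "$buf\n";            # prints "filled"

        $buf = "";
        fill_via_copy($buf);
        print "[$buf]\n";          # prints "[]" - the caller's $buf is untouched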

    Read the article

  • VB.NET 2008, Windows 7 and saving files

    - by James Brauman
    Hello, we have to learn VB.NET for the semester; my experience lies mainly with C# - not that this should make a difference to this particular problem. I've used just about the simplest way to save a file using the .NET framework, but Windows 7 won't let me save the file anywhere (or anywhere that I have found yet). Here is the code I am using to save a text file:

        Dim dialog As FolderBrowserDialog = New FolderBrowserDialog()
        Dim saveLocation As String = dialog.SelectedPath

        ' ... Build up output string ...

        Try
            ' Try to write the file.
            My.Computer.FileSystem.WriteAllText(saveLocation, output, False)
        Catch PermissionEx As UnauthorizedAccessException
            ' We do not have permissions to save in this folder.
            MessageBox.Show("Do not have permissions to save file to the folder specified. Please try saving somewhere different.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)
        Catch Ex As Exception
            ' Catch any exceptions that occurred when trying to write the file.
            MessageBox.Show("Writing the file was not successful.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)
        End Try

    The problem is that this code throws an UnauthorizedAccessException no matter where I try to save the file. I've tried running the .exe file as administrator, and the IDE as administrator. Is this just Windows 7 being overprotective? And if so, what can I do to solve this problem? The requirements state that I must be able to save a file! Thanks.
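
    A hedged diagnosis: the snippet never calls ShowDialog, and even when it does, SelectedPath is a folder - so WriteAllText is being asked to write to a directory, which raises UnauthorizedAccessException regardless of actual permissions. Appending a file name fixes it (the file name below is hypothetical):

        Dim dialog As New FolderBrowserDialog()
        If dialog.ShowDialog() = DialogResult.OK Then
            ' SelectedPath is a folder; a file name must be appended to it.
            Dim savePath As String = IO.Path.Combine(dialog.SelectedPath, "output.txt")
            My.Computer.FileSystem.WriteAllText(savePath, output, False)
        End If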

    Read the article

  • What is the easiest way to loop through a folder of files in C#?

    - by badpanda
    I am new to C# and am trying to write a program that navigates the local file system using a config file containing relevant filepaths. My question is this: What are the best practices to use when performing file I/O (this will be from the desktop app to a server and back) and file system navigation in C#? I know how to google, and I have found several solutions, but I would like to know which of the various functions is most robust and flexible. As well, if anyone has any tips regarding exception handling for C# file I/O that would also be very helpful. Thanks!!! badPanda
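
    A minimal hedged sketch (root folder and pattern hypothetical): Directory.EnumerateFiles streams matches lazily (Directory.GetFiles is the eager variant), and wrapping the per-file work in try/catch keeps one unreadable file from aborting the whole walk:

        using System;
        using System.IO;

        class Walker
        {
            static void Main()
            {
                // Recursively visit every *.txt under a hypothetical root folder.
                foreach (string path in Directory.EnumerateFiles(@"C:\data", "*.txt",
                                                                 SearchOption.AllDirectories))
                {
                    try
                    {
                        Console.WriteLine("{0}: {1} bytes", path, new FileInfo(path).Length);
                    }
                    catch (IOException ex)                  // locked or vanished mid-walk
                    {
                        Console.Error.WriteLine("Skipped {0}: {1}", path, ex.Message);
                    }
                    catch (UnauthorizedAccessException ex)
                    {
                        Console.Error.WriteLine("No access to {0}: {1}", path, ex.Message);
                    }
                }
            }
        }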

    Read the article

  • Why does C's "fopen" take a "const char *" as its second argument?

    - by Chris Cooper
    It has always struck me as strange that the C function fopen takes a const char * as its second argument. I would think it would be easier to both read your code and implement the library's code if there were bit masks defined in stdio.h, like IO_READ and such, so you could do things like:

        FILE* myFile = fopen("file.txt", IO_READ | IO_WRITE);

    Is there a programmatic reason for the way it actually is, or is it just historic? (i.e., "That's just the way it is.")
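
    For what it's worth, the short answer most references give is history: the mode string dates to the earliest C libraries and stuck for compatibility. The bitmask style does exist one layer down, in POSIX open(2), on top of which fopen is typically implemented:

        #include <fcntl.h>      /* open, O_* flags */
        #include <unistd.h>     /* close */

        int main(void)
        {
            /* The flag-mask interface the question imagines: read-write
               access, creating the file if missing, flags combined with |. */
            int fd = open("file.txt", O_RDWR | O_CREAT, 0644);
            if (fd >= 0)
                close(fd);
            return 0;
        }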

    Read the article

  • Efficient way to delete a line from a text file (C#)

    - by Valentin Vasilyev
    Hello. I need to delete a certain line from a text file. What is the most efficient way of doing this? The file can potentially be large (over a million records). Thank you. UPDATE: below is the code I'm currently using, but I'm not sure if it is good.

        internal void DeleteMarkedEntries()
        {
            string tempPath = Path.GetTempFileName();
            using (var reader = new StreamReader(logPath))
            {
                using (var writer = new StreamWriter(File.OpenWrite(tempPath)))
                {
                    int counter = 0;
                    while (!reader.EndOfStream)
                    {
                        if (!_deletedLines.Contains(counter))
                        {
                            writer.WriteLine(reader.ReadLine());
                        }
                        ++counter;
                    }
                }
            }
            if (File.Exists(tempPath))
            {
                File.Delete(logPath);
                File.Move(tempPath, logPath);
            }
        }
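
    One genuine bug worth noting, sketched rather than quoted from any answer: as posted, a marked line is never consumed - ReadLine only runs in the non-deleted branch, so the skipped line is simply written out on a later iteration under the wrong index. Reading unconditionally separates the two concerns:

        int counter = 0;
        string line;
        while ((line = reader.ReadLine()) != null)   // always consume the line...
        {
            if (!_deletedLines.Contains(counter))    // ...then decide whether to keep it
            {
                writer.WriteLine(line);
            }
            ++counter;
        }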

    Read the article

  • Write problem - losing the original data

    - by John
    Every time I write to the text file I lose the original data. How can I read the file and enter the new data on the next empty line instead?

        public void writeToFile()
        {
            try {
                output = new Formatter(myFile);
            } catch (SecurityException securityException) {
                System.err.println("Error creating file");
                System.exit(1);
            } catch (FileNotFoundException fileNotFoundException) {
                System.err.println("Error creating file");
                System.exit(1);
            }

            Scanner scanner = new Scanner(System.in);
            String number = "";
            String name = "";
            System.out.println("Please enter number:");
            number = scanner.next();
            System.out.println("Please enter name:");
            name = scanner.next();
            output.format("%s,%s \r\n", number, name);
            output.close();
        }
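
    A hedged sketch of the usual fix: new Formatter(file) truncates the file on every call, which is where the data goes. Handing the Formatter a FileWriter opened in append mode (the boolean second argument) preserves the existing contents:

        // myFile is the asker's existing File; true selects append mode.
        output = new Formatter(new BufferedWriter(new FileWriter(myFile, true)));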

    Read the article

  • How to store a scalable, extensible event log?

    - by firoso
    Hello everyone! I've been contemplating writing a simple "event log" that takes a parameter list and stores event messages in a log file. The trouble is, I foresee this file growing rather large (assume 1M entries or more). The question is: how can I implement this system without pulling teeth? I know that SQL would be a possible way to go. XML would be ideal, but it is not really practical for scalability, if I'm not going nuts. Example log entry:

        Time/Date            Sender                        Tags                      Message
        12/24/2008 24:00:00  $DOMAIN\SYSTEM\Application$   :Trivial: :Notification:  It's Christmas in 1s
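
    Since the question itself floats SQL: an embedded single-file database such as SQLite fits the "without pulling teeth" constraint - one table, one index, no server to install. A hedged sketch of the schema the example entry implies:

        -- One row per event; tags stay a flat text column for simple LIKE filters.
        CREATE TABLE event_log (
            id        INTEGER PRIMARY KEY,
            logged_at TEXT NOT NULL,        -- ISO-8601 timestamp
            sender    TEXT NOT NULL,
            tags      TEXT NOT NULL,
            message   TEXT NOT NULL
        );
        CREATE INDEX idx_event_log_time ON event_log(logged_at);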

    Read the article

  • Search a string in a file and write the matched lines to another file in Java

    - by Geeta
    Searching a string in a file and writing the matched lines to another file takes 15-20 minutes for a single 70 MB zip file (in its compressed state). Is there any way to minimise this? My source code:

    Getting the zip file entries:

        zipFile = new ZipFile(source_file_name);
        entries = zipFile.entries();
        while (entries.hasMoreElements()) {
            ZipEntry entry = (ZipEntry) entries.nextElement();
            if (entry.isDirectory()) {
                continue;
            }
            searchString(Thread.currentThread(), entry.getName(),
                         new BufferedInputStream(zipFile.getInputStream(entry)),
                         Out_File, search_string, stats);
        }
        zipFile.close();

    Searching the string:

        public void searchString(Thread CThread, String Source_File, BufferedInputStream in,
                                 File outfile, String search, String stats) throws IOException {
            int count = 0;
            int countw = 0;
            int countl = 0;
            String s;
            String[] str;
            BufferedReader br2 = new BufferedReader(new InputStreamReader(in));
            System.out.println(CThread.currentThread());
            while ((s = br2.readLine()) != null) {
                str = s.split(search);
                count = str.length - 1;
                countw += count;                 // word count
                if (s.contains(search)) {
                    countl++;                    // line count
                    WriteFile(CThread, s, outfile.toString(), search);
                }
            }
            br2.close();
            in.close();
        }

    Writing the matches:

        public void WriteFile(Thread CThread, String line, String out, String search)
                throws IOException {
            BufferedWriter bufferedWriter = null;
            System.out.println("write thread" + CThread.currentThread());
            bufferedWriter = new BufferedWriter(new FileWriter(out, true));
            bufferedWriter.write(line);
            bufferedWriter.newLine();
            bufferedWriter.flush();
        }

    Please help me. It takes 40 minutes for 10 files using threads, and 15-20 minutes for a single 70 MB compressed file. Is there any way to minimise the time?
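
    A hedged observation: the dominant cost is almost certainly WriteFile opening a new FileWriter for every matched line (and never closing it, which also leaks handles). Opening the writer once per search and closing it at the end removes nearly all of that overhead (outFileName is hypothetical):

        // Open once, reuse for every match, close when the scan ends.
        BufferedWriter out = new BufferedWriter(new FileWriter(outFileName, true));
        try {
            String s;
            while ((s = br2.readLine()) != null) {
                if (s.contains(search)) {
                    out.write(s);
                    out.newLine();          // no per-line open/flush
                }
            }
        } finally {
            out.close();
        }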

    Read the article

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are:

        errorType LargeFile::read(void* data_out, __int64 start_position,
                                  __int64 size_bytes) const
        {
            if (!m_open) {
                // return error
            } else {
                seekPosition(start_position);
                DWORD bytes_read;
                BOOL result = ReadFile(m_file, data_out, DWORD(size_bytes),
                                       &bytes_read, NULL);
                if (size_bytes != bytes_read || result != TRUE) {
                    // return error
                }
            }
            // return no error
        }

        void LargeFile::seekPosition(__int64 position) const
        {
            LARGE_INTEGER target;
            target.QuadPart = LONGLONG(position);
            SetFilePointerEx(m_file, target, NULL, FILE_BEGIN);
        }

    The performance of the above does not seem to be very good. Reads are of 4K blocks of the file; some reads are coherent, but most are not. A couple of questions: is there a good way to profile the reads? What might improve the performance - for example, would sector-aligning the data be useful? I'm relatively new to file I/O optimization, so suggestions or pointers to articles/tutorials would be helpful.
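
    A hedged sketch of one cheap lever (not from the original thread): the access-pattern hint passed when the handle is opened. FILE_FLAG_RANDOM_ACCESS tells the cache manager not to read ahead aggressively, and putting the offset in an OVERLAPPED block folds the seek into the read call itself, which also makes concurrent reads on one handle safe. The file name is hypothetical.

        // Hypothetical open call; m_file lives elsewhere in LargeFile.
        HANDLE m_file = CreateFileW(L"big.bin", GENERIC_READ, FILE_SHARE_READ,
                                    NULL, OPEN_EXISTING,
                                    FILE_FLAG_RANDOM_ACCESS,  // cache hint for scattered reads
                                    NULL);

        // Seek and read in one call: on a synchronous handle, ReadFile honours
        // the offset carried in the OVERLAPPED block.
        OVERLAPPED ov = {};
        ov.Offset     = DWORD(start_position & 0xFFFFFFFF);
        ov.OffsetHigh = DWORD(start_position >> 32);
        DWORD bytes_read = 0;
        BOOL ok = ReadFile(m_file, data_out, DWORD(size_bytes), &bytes_read, &ov);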

    Read the article
