Search Results

Search found 4228 results on 170 pages for 'gathering io'.

Page 31 of 170

  • Reading and writing in parallel

    - by Malfist
    I want to be able to read and write a large file in parallel, or if not in parallel, at least in blocks so that I don't use up so much memory. This is my current code:

        // Define memory stream which will be used to hold encrypted data.
        MemoryStream memoryStream = new MemoryStream();

        // Define cryptographic stream (always use Write mode for encryption).
        CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);

        // Start encrypting.
        using (BinaryReader reader = new BinaryReader(File.Open(fileIn, FileMode.Open)))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read = 0;
            do
            {
                read = reader.Read(buffer, 0, buffer.Length);
                cryptoStream.Write(buffer, 0, read);
            } while (read == buffer.Length);
        }

        // Finish encrypting.
        cryptoStream.FlushFinalBlock();

        // Convert our encrypted data from a memory stream into a byte array.
        //byte[] cipherTextBytes = memoryStream.ToArray();

        // Write our memory stream to a file.
        memoryStream.Position = 0;
        using (BinaryWriter writer = new BinaryWriter(File.Open(fileOut, FileMode.Create)))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read = 0;
            do
            {
                read = memoryStream.Read(buffer, 0, buffer.Length);
                writer.Write(buffer, 0, read);
            } while (read == buffer.Length);
        }

        // Close both streams.
        memoryStream.Close();
        cryptoStream.Close();

    As you can see, it reads the entire file into memory, encrypts it, then writes it out. If I happen to be encrypting files that are very large (2GB+) it tends not to work, or at the very least, consumes ~97% of my memory. How could I do it in a more effective manner?
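
    A common restructuring (a sketch, not from this thread) is to chain the CryptoStream directly onto the output FileStream, so ciphertext flows disk-to-disk and memory use stays bounded by the buffer size. It assumes encryptor is an already-configured ICryptoTransform and fileIn/fileOut are the paths above:

        // Sketch: stream the input file through the encryptor straight into
        // the output file, so memory use stays bounded by the buffer size.
        // Assumes 'encryptor' (an ICryptoTransform) is already configured.
        using (FileStream inStream = File.OpenRead(fileIn))
        using (FileStream outStream = File.Create(fileOut))
        using (CryptoStream cryptoStream =
                   new CryptoStream(outStream, encryptor, CryptoStreamMode.Write))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read;
            while ((read = inStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                cryptoStream.Write(buffer, 0, read);
            }
            cryptoStream.FlushFinalBlock(); // pads and writes the final block
        }

    Nothing larger than one buffer is ever held in memory, so a 2GB+ input behaves the same as a small one.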

  • write 2d array to a file in C (Operating system)

    - by Bobj-C
    Hello All, I used to use the code below to write a 1D array to a file:

        FILE *fp;
        float floatValue[5] = { 1.1F, 2.2F, 3.3F, 4.4F, 5.5F };
        int i;

        /* write the values */
        if((fp = fopen("test", "wb")) == NULL) {
            printf("Cannot open file.\n");
        }
        if(fwrite(floatValue, sizeof(float), 5, fp) != 5)
            printf("File write error.");
        fclose(fp);

        /* read the values */
        if((fp = fopen("test", "rb")) == NULL) {
            printf("Cannot open file.\n");
        }
        if(fread(floatValue, sizeof(float), 5, fp) != 5) {
            if(feof(fp))
                printf("Premature end of file.");
            else
                printf("File read error.");
        }
        fclose(fp);

        for(i = 0; i < 5; i++)
            printf("%f ", floatValue[i]);

    My question is: what do I do if I want to write and read a 2D array?
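
    For what it's worth, a fixed-size 2D array is one contiguous block of memory, so a single fwrite/fread of rows * cols elements round-trips it. A minimal C sketch (the dimensions and values here are made up):

        #include <stdio.h>

        #define ROWS 3
        #define COLS 4

        int main(void)
        {
            float grid[ROWS][COLS] = { { 1.1F, 1.2F, 1.3F, 1.4F },
                                       { 2.1F, 2.2F, 2.3F, 2.4F },
                                       { 3.1F, 3.2F, 3.3F, 3.4F } };
            FILE *fp;

            /* A static 2D array is contiguous, so one fwrite/fread of
               ROWS * COLS elements writes and restores the whole grid. */
            if ((fp = fopen("grid.bin", "wb")) == NULL) return 1;
            fwrite(grid, sizeof(float), ROWS * COLS, fp);
            fclose(fp);

            if ((fp = fopen("grid.bin", "rb")) == NULL) return 1;
            fread(grid, sizeof(float), ROWS * COLS, fp);
            fclose(fp);

            for (int i = 0; i < ROWS; i++)
                for (int j = 0; j < COLS; j++)
                    printf("%f ", grid[i][j]);
            return 0;
        }

    This only works as a single call because the dimensions are fixed at compile time; a row-of-pointers "2D array" would need one fwrite/fread per row.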

  • How can I speed up line by line reading of an ASCII file? (C++)

    - by Jon
    Here's a bit of code that is a considerable bottleneck after doing some measuring:

        //-----------------------------------------------------------------------------
        // Construct dictionary hash set from dictionary file
        //-----------------------------------------------------------------------------
        void constructDictionary(unordered_set<string> &dict)
        {
            ifstream wordListFile;
            wordListFile.open("dictionary.txt");
            string word;
            while( wordListFile >> word )
            {
                if( !word.empty() )
                {
                    dict.insert(word);
                }
            }
            wordListFile.close();
        }

    I'm reading in ~200,000 words and this takes about 240 ms on my machine. Is the use of ifstream here efficient? Can I do better? I'm reading about mmap() implementations but I'm not understanding them 100%. The input file is simply text strings with *nix line terminations.
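
    One approach worth measuring (an assumption on my part, not a confirmed fix from this thread) is to slurp the file in a single bulk read and split it in memory, reserving the hash set up front so it never rehashes mid-insert; note reserve() on unordered_set requires C++11:

        #include <fstream>
        #include <string>
        #include <unordered_set>

        // Sketch: one large read usually beats ~200,000 small extractions.
        // 200000 below matches the word count quoted in the question.
        void constructDictionary(std::unordered_set<std::string> &dict)
        {
            std::ifstream file("dictionary.txt", std::ios::binary);
            std::string contents((std::istreambuf_iterator<char>(file)),
                                 std::istreambuf_iterator<char>());
            dict.reserve(200000);          // avoid rehashing while inserting

            std::string::size_type start = 0;
            while (start < contents.size()) {
                std::string::size_type end = contents.find('\n', start);
                if (end == std::string::npos) end = contents.size();
                if (end > start)
                    dict.insert(contents.substr(start, end - start));
                start = end + 1;
            }
        }

    The split assumes one word per line with *nix line endings, as the question states.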

  • How to get proper alignment when printing to file

    - by user1067334
    I have this structure, the elements of which I need to write to a text file:

        struct Stage3ADisplay {
            int nSlot;
            char *Item;
            char *Type;
            int nIndex;
            unsigned char attributesMD[17]; // the last character is \0
            unsigned char contentsMD[17];   // only for regular files -
                                            // the last character is \0
        };

        buffer = malloc(sizeof(Stage3ADisplayVar[nIterator]->nSlot) +
                        sizeof(Stage3ADisplayVar[nIterator]->Item) +
                        sizeof(Stage3ADisplayVar[nIterator]->Type) +
                        sizeof(Stage3ADisplayVar[nIterator]->nIndex) +
                        sizeof(Stage3ADisplayVar[nIterator]->attributesMD) +
                        sizeof(Stage3ADisplayVar[nIterator]->contentsMD) + 1);

        sprintf(buffer, "%d %s %s %d %x %x",
                Stage3ADisplayVar[nIterator]->nSlot,
                Stage3ADisplayVar[nIterator]->Item,
                Stage3ADisplayVar[nIterator]->Type,
                Stage3ADisplayVar[nIterator]->nIndex,
                Stage3ADisplayVar[nIterator]->attributesMD,
                Stage3ADisplayVar[nIterator]->contentsMD);

    How do I make sure the rows in the file are properly aligned? Thank you.
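
    Field-width conversion specifiers are the usual fix: %-20s left-justifies within 20 columns and %6d right-justifies within 6, so every row lines up regardless of each field's length. A self-contained C sketch with made-up field values (the MD digest fields are left out for brevity):

        #include <stdio.h>

        struct Stage3ADisplay {
            int nSlot;
            const char *Item;
            const char *Type;
            int nIndex;
        };

        int main(void)
        {
            struct Stage3ADisplay rows[2] = {
                { 1, "passwd", "regular", 42 },
                { 2, "log",    "dir",      7 }
            };
            FILE *fp = fopen("report.txt", "w");
            if (fp == NULL) return 1;

            /* Fixed widths pad every field to the same column width,
               so successive rows align vertically. */
            for (int i = 0; i < 2; i++)
                fprintf(fp, "%6d %-20s %-12s %6d\n",
                        rows[i].nSlot, rows[i].Item,
                        rows[i].Type, rows[i].nIndex);
            fclose(fp);
            return 0;
        }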

  • [c++] How to create a std::ofstream to a temp file?

    - by dehmann
    Okay, mkstemp is the preferred way to create a temp file in POSIX. But it opens the file and returns an int, which is a file descriptor. From that I can only create a FILE*, but not an std::ofstream, which I would prefer in C++. (Apparently, on AIX and some other systems, you can create an std::ofstream from a file descriptor, but my compiler complains when I try that.) I know I could get a temp file name with tmpnam and then open my own ofstream with it, but that's apparently unsafe due to race conditions, and results in a compiler warning (g++ v3.4 on Linux):

        warning: the use of `tmpnam' is dangerous, better use `mkstemp'

    So, is there any portable way to create an std::ofstream to a temp file?
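
    One common workaround (POSIX-only, so not strictly portable, and not taken from this thread) is to let mkstemp create the file safely, close the descriptor, and then open a std::ofstream on the now-existing name. The tmpnam race disappears because the file already exists when the stream opens it:

        #include <stdlib.h>     // mkstemp (POSIX)
        #include <unistd.h>     // close
        #include <fstream>

        int main()
        {
            char name[] = "/tmp/myapp-XXXXXX"; // mkstemp fills in the X's
            int fd = mkstemp(name);            // creates + opens atomically
            if (fd == -1) return 1;
            close(fd);                         // keep the file, drop the fd

            std::ofstream out(name);           // no race: file already exists
            out << "hello temp file\n";
            return 0;
        }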

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are:

        errorType LargeFile::read( void* data_out, __int64 start_position,
                                   __int64 size_bytes ) const
        {
            if( !m_open ) {
                // return error
            }
            else {
                seekPosition( start_position );
                DWORD bytes_read;
                BOOL result = ReadFile( m_file, data_out, DWORD( size_bytes ),
                                        &bytes_read, NULL );
                if( size_bytes != bytes_read || result != TRUE ) {
                    // return error
                }
            }
            // return no error
        }

        void LargeFile::seekPosition( __int64 position ) const
        {
            LARGE_INTEGER target;
            target.QuadPart = LONGLONG( position );
            SetFilePointerEx( m_file, target, NULL, FILE_BEGIN );
        }

    The performance of the above does not seem to be very good. Reads are on 4K blocks of the file. Some reads are coherent, most are not. A couple of questions: Is there a good way to profile the reads? What things might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file I/O optimization, so suggestions or pointers to articles/tutorials would be helpful.
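
    One knob worth trying (an assumption on my part, not a confirmed fix from this thread) is the access-pattern hint given when the handle is opened; for mostly scattered 4K reads, FILE_FLAG_RANDOM_ACCESS tells the cache manager not to read ahead on your behalf:

        // Sketch: open the handle with a random-access hint.
        // Error handling elided; the caller owns the returned handle.
        #include <windows.h>

        HANDLE openForRandomReads(const wchar_t* path)
        {
            return CreateFileW(path,
                               GENERIC_READ,
                               FILE_SHARE_READ,
                               NULL,
                               OPEN_EXISTING,
                               FILE_FLAG_RANDOM_ACCESS, // cache-manager hint
                               NULL);
        }

    If the reads were mostly sequential, FILE_FLAG_SEQUENTIAL_SCAN would be the hint to try instead.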

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and deals with reading a large number of disk files. The total load can be more than 3 GB. There is a custom memory manager that uses memory mapped files to achieve reading of such a huge amount of data. The files are mapped into the process memory space only when needed, and with this the process memory is well under control. But what is observed is that, with memory mapping, the system cache keeps on increasing until it occupies the available physical memory. This leads to the slowing down of the entire system. My question is how to prevent the system cache from hogging the physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but with this, the read operations take a considerable amount of time and slow down the application performance. How can I achieve scalability without sacrificing much performance? What are the common techniques used in such cases? I don't have a good understanding of the WinXP OS caching behavior. Any good links explaining the same would also be helpful.

  • Reading a large text file to memory in C++

    - by NoneType
    Is there a way to read a large text file (~60MB) into memory at once (like a compiler flag to increase the program memory limit)? Currently, ifstream's open function causes a segmentation fault while trying to read this file.

        ifstream fis;
        fis.open("my_large_file.txt"); // Segfaults here

    The file just consists of rows of the form

        number_1<tabspace>number_2

    i.e., two numbers separated by a tabspace.
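
    For scale: 60 MB fits comfortably in a normal process address space, so no special flag should be needed; the crash is likely elsewhere. A minimal sketch of a bulk read into a std::string (the file name is taken from the question):

        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>

        int main()
        {
            std::ifstream fis("my_large_file.txt", std::ios::binary);
            if (!fis) { std::cerr << "open failed\n"; return 1; }

            std::ostringstream buffer;
            buffer << fis.rdbuf();                // one bulk read
            std::string contents = buffer.str();  // the whole ~60 MB file

            std::cout << contents.size() << " bytes read\n";
            return 0;
        }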

  • File mode for creating+reading+appending+binary

    - by MihaiD
    I need to open a file for reading and writing. If the file is not found, it should be created. It should also be treated as a binary file on Windows. Can you tell me the file mode sequence I need to use for this? I tried 'r+ab', but that doesn't create the files if they are not found. Thanks
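
    In fopen terms (an assumption about the environment, since the question doesn't pin one down), the mode "a+b" reads anywhere, sends writes to the end, and creates the file when it is missing; the same mode string works for Python's open(). "r+" alone fails on missing files, which matches what you saw. A minimal C sketch with a hypothetical file name:

        #include <stdio.h>

        int main(void)
        {
            /* "a+b": read + append, binary, created if absent. */
            FILE *fp = fopen("data.bin", "a+b");
            if (fp == NULL) return 1;

            fwrite("x", 1, 1, fp); /* writes always go to the end */
            rewind(fp);            /* reads may seek anywhere     */
            fclose(fp);
            return 0;
        }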

  • Python text file processing speed issues

    - by Anonymouslemming
    Hi all, I'm having a problem with processing a largeish file in Python. All I'm doing is:

        import gzip

        counter = 0
        f = gzip.open(pathToLog, 'r')
        for line in f:
            counter = counter + 1
            if (counter % 1000000 == 0):
                print counter
        f.close()

    This takes around 10m25s just to open the file, read the lines and increment this counter. In Perl, dealing with the same file and doing quite a bit more (some regular expression stuff), the whole process takes around 1m17s. Perl code:

        open(LOG, "/bin/zcat $logfile |") or die "Cannot read $logfile: $!\n";
        while (<LOG>) {
            if (m/.*\[svc-\w+\].*login result: Successful\.$/) {
                $_ =~ s/some regex here/$1,$2,$3,$4/;
                push @an_array, $_;
            }
        }
        close LOG;

    Can anyone advise what I can do to make the Python solution run at a similar speed to the Perl solution? I've tried just uncompressing the file and dealing with it using open instead of gzip.open, but that made a very small difference to the overall time.
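
    One option (a sketch, not the thread's accepted fix) is to do exactly what the Perl version does: let an external zcat handle decompression and read its stdout from Python, which sidesteps Python 2's slow gzip line iteration. pathToLog is assumed to be defined as above:

        # Sketch (Python 2 era, matching the question): pipe zcat's
        # output into the line loop instead of using the gzip module.
        import subprocess

        counter = 0
        proc = subprocess.Popen(['zcat', pathToLog], stdout=subprocess.PIPE)
        for line in proc.stdout:
            counter += 1
            if counter % 1000000 == 0:
                print counter
        proc.stdout.close()
        proc.wait()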

  • How to create custom filenames in C?

    - by eSKay
    Please see this piece of code:

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>

        int main()
        {
            int i = 0;
            FILE *fp;
            for(i = 0; i < 100; i++)
            {
                fp = fopen("/*what should go here??*/", "w");
                // I need to create files with names: file0.txt, file1.txt, file2.txt etc
                // i.e. file{i}.txt
            }
        }
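
    A minimal sketch of the usual approach: build each name with snprintf, then hand it to fopen:

        #include <stdio.h>

        int main(void)
        {
            char name[32];
            for (int i = 0; i < 100; i++) {
                /* Format the loop index into the name: file0.txt ... file99.txt */
                snprintf(name, sizeof name, "file%d.txt", i);
                FILE *fp = fopen(name, "w");
                if (fp != NULL)
                    fclose(fp); /* close each file before moving on */
            }
            return 0;
        }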

  • Fastest Way To Format a Plain Text Using Javascript

    - by Nathan Campos
    I have a huge plain text document, about 700KB, which is very big for plain text, and I need to format it in the cloud, converting it to HTML so it can be displayed by the browser. The only things I need to replace are bold and italic. In the plain text, bold looks like this:

        Not on bold... **bold text here** not bold here

    And italic like this:

        Not italic... *italic text* no italic

    Just like StackOverflow does for their formatting, but the problem is that I need to make it a lot faster, since the text is so big... One of my ideas was to add paging, so the script would only need to format part of the text rather than all of it; after the user changes the page the script would be called again. The problem is how to write the code for all of this.
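
    A sketch of the replacement step itself (the paging logic is left out): two global regex replaces cover the whole text, and running the bold rule first keeps ** from being consumed by the single-* italic rule:

        // Sketch: order matters - bold (**) must be replaced before
        // italic (*), or the italic rule would eat the double asterisks.
        function formatText(text) {
          return text
            .replace(/\*\*([^*]+)\*\*/g, '<b>$1</b>')
            .replace(/\*([^*]+)\*/g, '<i>$1</i>');
        }

    For what it's worth, two regex passes over 700KB are likely fast enough that the paging scheme may turn out to be unnecessary.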

  • FileNotFound exception

    - by Pratik
    I am trying to read a file in a servlet. I am using the Eclipse IDE. I get a FileNotFoundException if I provide a relative file name.

        List<String> ls = new ArrayList<String>();
        Scanner input = new Scanner(new File("Input.txt"));
        while(input.hasNextLine()) {
            ls.add(input.nextLine());
        }

    The same code works if I put the absolute path like this:

        Scanner input = new Scanner(new File("F:/Spring and other stuff/AjaxDemo/src/com/pdd/ajax/Input.txt"));

    The Java file and text file are in the same folder. Does it search for the text file in some other folder?
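
    A common fix (a sketch, under the assumption that Input.txt can be placed inside the web app) is to resolve the file through the ServletContext instead of the JVM's working directory; for a servlet container, the working directory is usually the server's bin folder, not your source tree:

        import java.io.InputStream;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.Scanner;
        import javax.servlet.http.HttpServlet;

        public class ReadFileServlet extends HttpServlet {
            // Sketch: /WEB-INF/Input.txt is a hypothetical location inside
            // the deployed web app; the path is relative to the app root.
            private List<String> readLines() {
                List<String> ls = new ArrayList<String>();
                InputStream in = getServletContext()
                        .getResourceAsStream("/WEB-INF/Input.txt");
                if (in == null) return ls; // not found inside the web app
                Scanner input = new Scanner(in);
                while (input.hasNextLine()) {
                    ls.add(input.nextLine());
                }
                input.close();
                return ls;
            }
        }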

  • File.Exists() returns false, but not in debug

    - by Tor Haugen
    I'm being completely confused here, folks. My code throws an exception because File.Exists() returns false:

        public override sealed TCargo ReadFile(string fileName)
        {
            if (!File.Exists(fileName))
            {
                throw new ArgumentException("Provided file name does not exist", "fileName");
            }

    Visual Studio breaks at the throw statement, and I immediately check the value of File.Exists(fileName) in the immediate window. It returns true. When I drag the breakpoint back up to the if statement and execute it again, it throws again. fileName is an absolute path to a file. I'm not creating the file, nor writing to it (it's there all along). If I paste the path into the open dialog in Notepad, it reads the file without problems. The code is executing in a background worker. It's the only complicating factor I can think of. I am positive the file has not been opened already, either in the worker thread or elsewhere. What's going on here?

  • Read char from txt file in C++

    - by Jack in the Box
    I have a program that will read the number of rows and columns from a txt file. Also, the program has to read the contents of a 2D array from the same file. Here is the txt file:

        8 20
        * *
        *** ***

    8 and 20 are the number of rows and columns respectively. The spaces and asterisks are the contents of the array, Array[8][20]. For example, Array[0][1] = '*'. I did make the program read 8 and 20, as follows:

        ifstream myFile;
        myFile.open("life.txt");
        if(!myFile)
        {
            cout << endl << "Failed to open file";
            return 1;
        }
        myFile >> rows >> cols;
        myFile.close();

        grid = new char*[rows];
        for (int i = 0; i < rows; i++)
        {
            grid[i] = new char[cols];
        }

    Now, how do I assign the spaces and the asterisks to the fields in the array? I hope you got the point.
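
    A sketch of the reading step (not from the thread): skip the rest of the header line, then pull characters with get(), which keeps spaces where operator>> would skip them. It assumes every grid row in the file is padded out to cols characters:

        #include <fstream>
        #include <limits>

        int main()
        {
            std::ifstream myFile("life.txt");
            int rows, cols;
            myFile >> rows >> cols;
            // Skip the remainder of the header line before the grid starts.
            myFile.ignore(std::numeric_limits<std::streamsize>::max(), '\n');

            char **grid = new char*[rows];
            for (int i = 0; i < rows; i++) {
                grid[i] = new char[cols];
                for (int j = 0; j < cols; j++)
                    myFile.get(grid[i][j]); // get() keeps ' ' as well as '*'
                myFile.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
            }
            return 0;
        }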

  • Write to the second line of a PHP file

    - by Woz
    I have a php file that I want to add an include path to on the second line. I need to open the file and insert a line of code on line 2. I have tried a few techniques, none of which are working, but I think it has something to do with the text I am trying to write and possibly not escaping characters correctly, as I am not too familiar with file writing. So here is the file I want to write to:

        $file = $_SERVER['DOCUMENT_ROOT'].'/'.$domaindir.'/test.php';

    Here is the piece of text I want to insert into the file:

        $dbfile = "include('".$_SERVER['DOCUMENT_ROOT']."/".$domaindir."/web_".$dbname.".inc.php');";

    Then what I was doing was a string replace, but all it did was bump the "session_start();" bit to a newline! Can anyone point me in the direction of a tutorial that might tell me how to insert this into the second line of my php file, or indeed does anyone have any ideas? I can say for sure that the path to the PHP file is fully tested, so I know it's not that the file is not being opened or written to. Any ideas would be much appreciated. Thanks in advance.
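
    One simple way (a sketch, not a confirmed fix from this thread) is to read the file into an array of lines, splice the new line in at index 1 (the second line), and write the whole thing back; $domaindir and $dbname are assumed to be set as above:

        <?php
        // Sketch: file() keeps each line's trailing newline, so splicing
        // a "\n"-terminated line at index 1 lands it on line 2 intact.
        $file = $_SERVER['DOCUMENT_ROOT'] . '/' . $domaindir . '/test.php';
        $line = "include('" . $_SERVER['DOCUMENT_ROOT'] . "/" . $domaindir
              . "/web_" . $dbname . ".inc.php');";

        $lines = file($file);
        array_splice($lines, 1, 0, $line . "\n");
        file_put_contents($file, implode('', $lines));

    This avoids the string-replace pitfall entirely, since nothing in the existing file has to be matched or escaped.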

  • unbuffered I/O in Linux

    - by stuck
    I'm writing lots and lots of data that will not be read again for weeks - as my program runs, the amount of free memory on the machine (displayed with 'free' or 'top') drops very quickly. The amount of memory my app uses does not increase - neither does the amount of memory used by other processes. This leads me to believe the memory is being consumed by the filesystem's cache - since I do not intend to read this data for a long time, I'm hoping to bypass the system's buffers, such that my data is written directly to disk. I don't have dreams of improving perf or being a super ninja; my hope is to give a hint to the filesystem that I'm not going to be coming back for this memory any time soon, so don't spend time optimizing for those cases. On Windows I've faced similar problems and fixed them using FILE_FLAG_NO_BUFFERING|FILE_FLAG_WRITE_THROUGH - the machine's memory was not consumed by my app and the machine was more usable in general. I'm hoping to duplicate the improvements I've seen, but on Linux. On Windows there is the restriction of writing in sector-sized pieces; I'm happy with this restriction for the amount of gain I've measured. Is there a similar way to do this in Linux?
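
    The closest Linux analogue I know of (an assumption, not from this thread) is posix_fadvise with POSIX_FADV_DONTNEED: writes stay buffered and fast, and the cached pages are dropped afterwards, without O_DIRECT's sector-alignment rules. A minimal C sketch:

        #include <fcntl.h>
        #include <unistd.h>

        /* Sketch: write normally, flush, then advise the kernel to drop
           the cached pages. fsync first so dirty pages can actually be
           evicted; error handling is minimal for brevity. */
        void write_and_drop(int fd, const void *buf, size_t len)
        {
            if (write(fd, buf, len) < 0)
                return;
            fsync(fd);                                    /* flush dirty pages */
            posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED); /* evict from cache  */
        }

    O_DIRECT (passed to open()) is the closer cousin of FILE_FLAG_NO_BUFFERING if bypassing the cache entirely is preferred, but it brings back the aligned-buffer restriction.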

  • How do I open a file in such a way that if the file doesn't exist it will be created and opened automatically?

    - by snakile
    Here's how I open a file for writing+:

        if( fopen_s( &f, fileName, "w+" ) != 0 )
        {
            printf("Open file failed\n");
            return;
        }
        fprintf_s(f, "content");

    If the file doesn't exist, the open operation fails. What's the right way to fopen if I want to create the file automatically if it doesn't already exist? EDIT: If the file does exist, I would like fprintf to overwrite the file, not to append to it.

  • How can I determine if a file is read-only for my process on *nix?

    - by user109078
    Using the stat function, I can get the read/write permissions for owner, user, and other, but this isn't what I want. I want to know the read/write permissions of a file for my process (i.e. the application I'm writing). The owner/user/other is only helpful if I know whether my process is running as the owner/user/other of the file... so maybe that's the solution, but I'm not sure of the steps to get there.
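
    access() answers exactly this question without decoding the stat permission bits, with one caveat: it checks against the process's real uid/gid, which differs from the effective uid in setuid programs. A sketch (the path below is just an example):

        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            const char *path = "/etc/hosts"; /* hypothetical example path */

            /* R_OK / W_OK ask: can *this process* read / write the file? */
            if (access(path, R_OK) == 0)
                printf("readable\n");
            if (access(path, W_OK) != 0)
                printf("read-only for this process\n");
            return 0;
        }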

  • Exciting new Evented I/O technologies

    - by Saif Bechan
    Lately I have been keeping an eye on evented I/O to tackle some of my web application problems. I have been looking at things such as Python's Twisted, Ruby's EventMachine, and Node.js. Are there any other alternatives to these three, maybe in other languages such as PHP?

  • How to get size of file in visual c++?

    - by karikari
    Below is my code. My problem is, my destination file always has a lot more strings than the originating file. Then, inside the for loop, instead of using i < sizeof more, I realized that I should use i < sizeof file2. Now my problem is, how do I get the size of file2?

        int i = 0;
        FILE *file2 = fopen(LOG_FILE_NAME, "r");
        wfstream file3 (myfile, ios_base::out);
        // char more[1024];
        char more[SIZE-OF-file2];

        for(i = 0; i < SIZE-OF-file2; i++)
        {
            fgets(more, SIZE-OF-file2, file2);
            file3 << more;
        }
        fclose(file2);
        file3.close();
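
    For the copy itself no size is needed at all: fgets returns NULL at end-of-file, so the loop can key off that (the extra strings come from fgets being called past EOF, which leaves the old buffer contents in place). If the byte count is wanted anyway, fseek/ftell gives it. A sketch with hypothetical file names:

        #include <cstdio>

        int main()
        {
            std::FILE *in  = std::fopen("log.txt", "r");  // hypothetical names
            std::FILE *out = std::fopen("copy.txt", "w");
            if (in == NULL || out == NULL) return 1;

            // fgets returns NULL at end-of-file, so no size is needed.
            char more[1024];
            while (std::fgets(more, sizeof more, in) != NULL)
                std::fputs(more, out);

            // If the byte count itself is wanted, seek to the end and ask:
            std::fseek(in, 0, SEEK_END);
            long size = std::ftell(in);
            std::printf("%ld bytes\n", size);

            std::fclose(in);
            std::fclose(out);
            return 0;
        }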

  • File Operations in Java

    - by Amir Rachum
    I'm working on a small application in Java that takes a directory structure and renames the files according to a certain format, after parsing the original name. What is the best Java class / methodology to use in order to facilitate these file operations? Edit: the question is only about the file operations part; I've got the "getting the formatted name" part down :)
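
    A sketch using plain java.io.File, one reasonable choice for this kind of tree walk (the formatting rule below is a placeholder standing in for the asker's own parsing logic):

        import java.io.File;

        public class Renamer {
            // Sketch: listFiles() enumerates a directory, renameTo()
            // renames in place; recurse to cover the whole structure.
            public static void renameAll(File dir) {
                File[] entries = dir.listFiles();
                if (entries == null) return; // not a directory / I/O error
                for (File f : entries) {
                    if (f.isDirectory()) {
                        renameAll(f);
                    } else {
                        f.renameTo(new File(dir, format(f.getName())));
                    }
                }
            }

            private static String format(String name) {
                return name.toLowerCase(); // placeholder formatting rule
            }
        }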
