Search Results

Search found 6492 results on 260 pages for 'trigger io'.


  • How to get proper alignment when printing to file

    - by user1067334
    I have this structure, the elements of which I need to write to a text file:

        struct Stage3ADisplay {
            int nSlot;
            char *Item;
            char *Type;
            int nIndex;
            unsigned char attributesMD[17]; // the last character is \0
            unsigned char contentsMD[17];   // only for regular files -
                                            // the last character is \0
        };

        buffer = malloc(sizeof(Stage3ADisplayVar[nIterator]->nSlot) +
                        sizeof(Stage3ADisplayVar[nIterator]->Item) +
                        sizeof(Stage3ADisplayVar[nIterator]->Type) +
                        sizeof(Stage3ADisplayVar[nIterator]->nIndex) +
                        sizeof(Stage3ADisplayVar[nIterator]->attributesMD) +
                        sizeof(Stage3ADisplayVar[nIterator]->contentsMD) + 1);
        sprintf(buffer, "%d %s %s %d %x %x",
                Stage3ADisplayVar[nIterator]->nSlot,
                Stage3ADisplayVar[nIterator]->Item,
                Stage3ADisplayVar[nIterator]->Type,
                Stage3ADisplayVar[nIterator]->nIndex,
                Stage3ADisplayVar[nIterator]->attributesMD,
                Stage3ADisplayVar[nIterator]->contentsMD);

    How do I make sure the rows in the file are properly aligned? Thank you.
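
    One hedged sketch (not the asker's code; function name and 16-byte digest length are assumptions): give every column a fixed width in the format string, and print the digests byte by byte, since %x on a char array formats the pointer value rather than the bytes.

        #include <stdio.h>

        /* Fixed column widths keep rows aligned: %-20s left-justifies the
           field in 20 columns, %6d right-justifies in 6, so every row
           occupies the same positions. */
        void writeRow(FILE *fp, int nSlot, const char *item, const char *type,
                      int nIndex, const unsigned char *attributesMD,
                      const unsigned char *contentsMD)
        {
            int i;
            fprintf(fp, "%6d %-20s %-10s %6d ", nSlot, item, type, nIndex);
            /* print each digest byte as two hex digits instead of %x */
            for (i = 0; i < 16; i++)
                fprintf(fp, "%02x", attributesMD[i]);
            fputc(' ', fp);
            for (i = 0; i < 16; i++)
                fprintf(fp, "%02x", contentsMD[i]);
            fputc('\n', fp);
        }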

    Read the article

  • How to read comma-separated values from a text file in Java?

    - by user1425223
    I have got this text file with latitude and longitude values of different points on a map. I want to store these coordinates in a MySQL database using Hibernate. I want to know how I can split my string into latitudes and longitudes. What is the general way to do this kind of thing, that is, with other delimiters like space, tab, etc.? File:

        28.515046280572285,77.38258838653564
        28.51430151808072,77.38336086273193
        28.513566177802456,77.38413333892822
        28.512830832397192,77.38490581512451
        28.51208605426073,77.3856782913208
        28.511341270865113,77.38645076751709
        28.510530488025346,77.38720178604126
        28.509615992924807,77.38790988922119
        28.50875805732363,77.38862872123718
        28.507994394490268,77.38943338394165
        28.50728729434496,77.39038825035095
        28.506674470385246,77.39145040512085
        28.506174780521828,77.39260911941528
        28.505665660113582,77.39376783370972
        28.505156537248446,77.39492654800415
        28.50466626846366,77.39608526229858
        28.504175997400655,77.39724397659302
        28.503685724059455,77.39840269088745
        28.503195448440064,77.39956140518188
        28.50276174118543,77.4007523059845
        28.502309175192945,77.40194320678711
        28.50185660725938,77.40313410758972
        28.50140403738471,77.40432500839233
        28.500951465568985,77.40551590919495
        28.500498891812207,77.40670680999756
        28.5000463161144,77.40789771080017
        28.49959373847559,77.40908861160278

    Code I am using to read from the file:

        try {
            BufferedReader in = new BufferedReader(new FileReader("G:\\RoutePPAdvant2.txt"));
            String str;
            str = in.readLine(); // note: this discards the first line before the loop
            while ((str = in.readLine()) != null) {
                System.out.println(str);
            }
            in.close();
        } catch (IOException e) {
            System.out.println("File Read Error");
        }
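
    A minimal sketch of the usual answer: String.split takes a regular expression, so "," handles this file and "\\s+" would handle space- or tab-separated fields. The class name is made up; handing the parsed values to Hibernate is left as a comment.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class CoordinateReader {
            public static void main(String[] args) {
                try {
                    BufferedReader in = new BufferedReader(new FileReader("G:\\RoutePPAdvant2.txt"));
                    String str;
                    while ((str = in.readLine()) != null) {
                        String[] parts = str.split(",");   // use "\\s+" for space/tab delimiters
                        double latitude = Double.parseDouble(parts[0]);
                        double longitude = Double.parseDouble(parts[1]);
                        // hand latitude/longitude to the Hibernate entity here
                        System.out.println(latitude + ", " + longitude);
                    }
                    in.close();
                } catch (IOException e) {
                    System.out.println("File Read Error");
                }
            }
        }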

    Read the article

  • write 2d array to a file in C (Operating system)

    - by Bobj-C
    Hello All,
    I used to use the code below to write a 1D array to a file:

        FILE *fp;
        float floatValue[5] = { 1.1F, 2.2F, 3.3F, 4.4F, 5.5F };
        int i;

        if ((fp = fopen("test", "wb")) == NULL) {
            printf("Cannot open file.\n");
        }
        if (fwrite(floatValue, sizeof(float), 5, fp) != 5)
            printf("File write error.");
        fclose(fp);

        /* read the values */
        if ((fp = fopen("test", "rb")) == NULL) {
            printf("Cannot open file.\n");
        }
        if (fread(floatValue, sizeof(float), 5, fp) != 5) {
            if (feof(fp))
                printf("Premature end of file.");
            else
                printf("File read error.");
        }
        fclose(fp);

        for (i = 0; i < 5; i++)
            printf("%f ", floatValue[i]);

    My question is: what if I want to write and read a 2D array?
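
    A sketch under the assumption that the 2D array is a built-in (contiguous) array: its rows are laid out back to back in memory, so a single fwrite/fread covers all of them. Dimensions 3x5 and the file name are arbitrary.

        #include <stdio.h>

        int main(void)
        {
            float grid[3][5] = { { 1.1F, 2.2F, 3.3F, 4.4F, 5.5F } };
            FILE *fp;
            int i, j;

            if ((fp = fopen("test2d", "wb")) == NULL) {
                printf("Cannot open file.\n");
                return 1;
            }
            /* 3*5 floats are one contiguous block, so one call suffices */
            if (fwrite(grid, sizeof(float), 3 * 5, fp) != 3 * 5)
                printf("File write error.");
            fclose(fp);

            if ((fp = fopen("test2d", "rb")) == NULL) {
                printf("Cannot open file.\n");
                return 1;
            }
            if (fread(grid, sizeof(float), 3 * 5, fp) != 3 * 5)
                printf("File read error.");
            fclose(fp);

            for (i = 0; i < 3; i++)
                for (j = 0; j < 5; j++)
                    printf("%f ", grid[i][j]);
            return 0;
        }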

    Read the article

  • How can I speed up line by line reading of an ASCII file? (C++)

    - by Jon
    Here's a bit of code that is a considerable bottleneck after doing some measuring:

        //---------------------------------------------------------------------
        // Construct dictionary hash set from dictionary file
        //---------------------------------------------------------------------
        void constructDictionary(unordered_set<string> &dict)
        {
            ifstream wordListFile;
            wordListFile.open("dictionary.txt");
            string word;
            while (wordListFile >> word) {
                if (!word.empty()) {
                    dict.insert(word);
                }
            }
            wordListFile.close();
        }

    I'm reading in ~200,000 words and this takes about 240 ms on my machine. Is the use of ifstream here efficient? Can I do better? I'm reading about mmap() implementations but I'm not understanding them 100%. The input file is simply text strings with *nix line terminations.
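
    One commonly suggested alternative, shown here as a sketch rather than a guaranteed win: read the whole file into one string with a single bulk operation, then scan it in place, which avoids the per-token overhead of many small operator>> calls. The file name is taken from the question.

        #include <fstream>
        #include <iterator>
        #include <string>
        #include <unordered_set>

        void constructDictionary(std::unordered_set<std::string> &dict)
        {
            // one bulk read of the entire file into memory
            std::ifstream in("dictionary.txt", std::ios::binary);
            std::string contents((std::istreambuf_iterator<char>(in)),
                                 std::istreambuf_iterator<char>());

            // split on '\n' without re-reading from the stream
            std::string::size_type pos = 0;
            while (pos < contents.size()) {
                std::string::size_type end = contents.find('\n', pos);
                if (end == std::string::npos)
                    end = contents.size();
                if (end > pos)
                    dict.insert(contents.substr(pos, end - pos));
                pos = end + 1;
            }
        }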

    Read the article

  • How to load/save C++ class instance (using STL containers) to disk

    - by supert
    I have a C++ class representing a hierarchically organised data tree which is very large (~GB, basically as large as I can get away with in memory). It uses an STL list to store information at each node plus iterators to other nodes. Each node has only one parent, but 0-10 children. Abstracted, it looks something like:

        struct node {
        public:
            node_list_iterator parent;             // iterator to a single parent node
            double node_data_array[X];
            map<int, node_list_iterator> children; // iterators to child nodes
        };

        class strategy {
        private:
            list<node> tree;         // hierarchically linked list of nodes
            struct some_other_data;
        public:
            void build();            // build the tree
            void save();             // save the tree to disk
            void load();             // load the tree from disk
            void use();              // use the tree
        };

    I would like to implement load() and save() to disk, and it should be fairly fast; however, the obvious problems are: I don't know the size in advance; the data contains iterators, which are volatile; and my ignorance of C++ is prodigious. Could anyone suggest a pure C++ solution, please?
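
    A sketch of one common approach (all names hypothetical, and X assumed to be 4 here): replace the volatile iterators with stable integer indices into a flat node vector before writing, and prefix each variable-length piece with its count so the loader knows how much to read. Rebuilding iterators on load is the reverse walk.

        #include <cstdint>
        #include <fstream>
        #include <vector>

        struct FlatNode {
            std::int64_t parent;               // index into the node vector, -1 for root
            double data[4];                    // node_data_array, X assumed = 4
            std::vector<std::int64_t> children;
        };

        void save(const std::vector<FlatNode> &nodes, const char *path)
        {
            std::ofstream out(path, std::ios::binary);
            std::uint64_t n = nodes.size();
            out.write(reinterpret_cast<const char *>(&n), sizeof n);  // size header
            for (const FlatNode &node : nodes) {
                out.write(reinterpret_cast<const char *>(&node.parent), sizeof node.parent);
                out.write(reinterpret_cast<const char *>(node.data), sizeof node.data);
                std::uint64_t c = node.children.size();               // child count prefix
                out.write(reinterpret_cast<const char *>(&c), sizeof c);
                out.write(reinterpret_cast<const char *>(node.children.data()),
                          c * sizeof(std::int64_t));
            }
        }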

    Read the article

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are:

        errorType LargeFile::read( void* data_out, __int64 start_position,
                                   __int64 size_bytes ) const
        {
            if( !m_open ) {
                // return error
            } else {
                seekPosition( start_position );
                DWORD bytes_read;
                BOOL result = ReadFile( m_file, data_out, DWORD( size_bytes ),
                                        &bytes_read, NULL );
                if( size_bytes != bytes_read || result != TRUE ) {
                    // return error
                }
            }
            // return no error
        }

        void LargeFile::seekPosition( __int64 position ) const
        {
            LARGE_INTEGER target;
            target.QuadPart = LONGLONG( position );
            SetFilePointerEx( m_file, target, NULL, FILE_BEGIN );
        }

    The performance of the above does not seem to be very good. Reads are on 4K blocks of the file. Some reads are coherent; most are not. A couple of questions: Is there a good way to profile the reads? What things might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file I/O optimization, so suggestions or pointers to articles/tutorials would be helpful.
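
    One low-risk experiment, sketched here under the assumption the file is opened with CreateFile somewhere in LargeFile: pass the cache manager an access-pattern hint at open time. FILE_FLAG_RANDOM_ACCESS suppresses aggressive read-ahead for scattered 4K reads; FILE_FLAG_SEQUENTIAL_SCAN does the opposite for coherent ones. The helper name is made up.

        #include <windows.h>

        // A hypothetical open helper: the flag tells the Windows cache
        // manager how the file will be touched, which changes read-ahead.
        HANDLE openForRandomReads(const wchar_t *path)
        {
            return CreateFileW(path,
                               GENERIC_READ,
                               FILE_SHARE_READ,
                               NULL,
                               OPEN_EXISTING,
                               FILE_FLAG_RANDOM_ACCESS, // or FILE_FLAG_SEQUENTIAL_SCAN
                               NULL);
        }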

    Read the article

  • Reading a large text file to memory in C++

    - by NoneType
    Is there a way to read a large text file (~60MB) into memory at once (like a compiler flag to increase the program's memory limit)? Currently, ifstream's open function throws a segmentation fault while trying to read this file.

        ifstream fis;
        fis.open("my_large_file.txt"); // Segfaults here

    The file just consists of rows of the form number_1<tabspace>number_2, i.e., two numbers separated by a tabspace.
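
    A minimal sketch, assuming the file fits comfortably in RAM (60 MB does): size the buffer from the stream position after opening at the end, then read in one call. Note that open() itself almost never segfaults; the crash usually comes from how the data is used afterwards.

        #include <fstream>
        #include <vector>

        std::vector<char> slurp(const char *path)
        {
            // ios::ate opens positioned at the end, so tellg() is the size
            std::ifstream in(path, std::ios::binary | std::ios::ate);
            std::streamsize size = in.tellg();
            in.seekg(0, std::ios::beg);

            std::vector<char> buffer(static_cast<std::size_t>(size));
            in.read(buffer.data(), size);   // one bulk read
            return buffer;
        }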

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and deals with reading a large number of disk files. The total load can be more than 3 GB. There is a custom memory manager that uses memory-mapped files to achieve reading of such a huge amount of data. The files are mapped into the process memory space only when needed, and with this the process memory is well under control. But what is observed is that, with memory mapping, the system cache keeps on increasing until it occupies the available physical memory. This leads to the slowing down of the entire system. My question is: how do I prevent the system cache from hogging the physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but with this, the read operations take a considerable amount of time and slow down the application performance. How do I achieve scalability without sacrificing much on performance? What are the common techniques used in such cases? I don't have a good understanding of the WinXP OS caching behavior; any good links explaining the same would also be helpful.
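
    One technique commonly suggested for this, sketched with assumed handle and window parameters: map only a small moving window of each file and unmap it promptly, so the cache manager can reclaim the pages instead of keeping the whole file referenced.

        #include <windows.h>

        // mapping: handle from CreateFileMapping; offset must be a multiple
        // of the allocation granularity (64 KB on x86 Windows).
        void readWindow(HANDLE mapping, ULONGLONG offset, SIZE_T size)
        {
            ULARGE_INTEGER li;
            li.QuadPart = offset;

            void *view = MapViewOfFile(mapping, FILE_MAP_READ,
                                       li.HighPart, li.LowPart, size);
            if (view) {
                // ... consume the bytes in this window ...
                UnmapViewOfFile(view); // release promptly so the pages
                                       // become reclaimable
            }
        }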

    Read the article

  • FileNotFound exception

    - by Pratik
    I am trying to read a file in a servlet. I am using the Eclipse IDE. I get a FileNotFoundException if I provide a relative file name.

        List<String> ls = new ArrayList<String>();
        Scanner input = new Scanner(new File("Input.txt"));
        while (input.hasNextLine()) {
            ls.add(input.nextLine());
        }

    The same code works if I put the absolute path like this:

        Scanner input = new Scanner(new File("F:/Spring and other stuff/AjaxDemo/src/com/pdd/ajax/Input.txt"));

    The Java file and text file are there in the same folder. Does it search for the text file in some other folder?
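
    A sketch of the usual fix (the /WEB-INF location is an assumption): a relative File path resolves against the server JVM's working directory, not the source folder, so inside a servlet it is more reliable to load the file through the servlet context.

        import java.io.InputStream;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.Scanner;
        import javax.servlet.http.HttpServlet;

        public class InputReaderServlet extends HttpServlet {
            private List<String> readLines() {
                List<String> ls = new ArrayList<String>();
                // resolved relative to the deployed web application,
                // independent of the JVM's working directory
                InputStream in = getServletContext()
                        .getResourceAsStream("/WEB-INF/Input.txt"); // assumed location
                Scanner input = new Scanner(in);
                while (input.hasNextLine()) {
                    ls.add(input.nextLine());
                }
                input.close();
                return ls;
            }
        }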

    Read the article

  • File mode for creating+reading+appending+binary

    - by MihaiD
    I need to open a file for reading and writing. If the file is not found, it should be created. It should also be treated as binary on Windows. Can you tell me the file mode sequence I need to use for this? I tried 'r+ab', but that doesn't create the files if they are not found. Thanks
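
    A sketch of the standard-mode answer, shown with C stdio (the same mode string applies to any fopen-style API): "a+b" opens a binary file for reading and appending, creating it if it does not exist. The caveat is that in append mode all writes land at the end of the file regardless of the current position; if arbitrary-position writes are needed, open with "r+b" and fall back to "w+b" when that fails.

        #include <stdio.h>

        int main(void)
        {
            /* read + append + binary; created automatically if missing */
            FILE *fp = fopen("data.bin", "a+b");
            if (fp == NULL) {
                perror("fopen");
                return 1;
            }
            /* reads may seek anywhere; writes always go to the end */
            fclose(fp);
            return 0;
        }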

    Read the article

  • Python text file processing speed issues

    - by Anonymouslemming
    Hi all,
    I'm having a problem with processing a largeish file in Python. All I'm doing is:

        f = gzip.open(pathToLog, 'r')
        for line in f:
            counter = counter + 1
            if (counter % 1000000 == 0):
                print counter
        f.close()  # note: the original had f.close, which never actually calls close()

    This takes around 10m25s just to open the file, read the lines and increment the counter. In Perl, dealing with the same file and doing quite a bit more (some regular expression stuff), the whole process takes around 1m17s. Perl code:

        open(LOG, "/bin/zcat $logfile |") or die "Cannot read $logfile: $!\n";
        while (<LOG>) {
            if (m/.*\[svc-\w+\].*login result: Successful\.$/) {
                $_ =~ s/some regex here/$1,$2,$3,$4/;
                push @an_array, $_;
            }
        }
        close LOG;

    Can anyone advise what I can do to make the Python solution run at a similar speed to the Perl solution? I've tried just uncompressing the file and dealing with it using open instead of gzip.open, but that made a very small difference to the overall time.
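
    A hedged sketch of one workaround: mirror the Perl approach and let an external zcat do the decompression, since Python's gzip module (especially on the older interpreters of this era) decompresses in small chunks with heavy per-line overhead. The log path is a placeholder.

        import subprocess

        pathToLog = '/var/log/app.log.gz'  # assumed path

        # zcat decompresses in a separate process; Python only iterates lines
        proc = subprocess.Popen(['zcat', pathToLog], stdout=subprocess.PIPE)
        counter = 0
        for line in proc.stdout:
            counter += 1
            if counter % 1000000 == 0:
                print counter
        proc.stdout.close()
        proc.wait()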

    Read the article

  • How to create custom filenames in C?

    - by eSKay
    Please see this piece of code:

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>

        int main()
        {
            int i = 0;
            FILE *fp;
            for (i = 0; i < 100; i++) {
                fp = fopen("/* what should go here?? */", "w");
                // I need to create files with names: file0.txt, file1.txt, file2.txt etc.
                // i.e. file{i}.txt
            }
        }
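
    A minimal sketch of the standard answer: build each name into a buffer with snprintf, then pass the buffer to fopen.

        #include <stdio.h>

        int main(void)
        {
            char filename[32];
            FILE *fp;
            int i;

            for (i = 0; i < 100; i++) {
                /* formats "file0.txt", "file1.txt", ... into the buffer */
                snprintf(filename, sizeof filename, "file%d.txt", i);
                fp = fopen(filename, "w");
                if (fp == NULL)
                    continue;   /* or report the error */
                fclose(fp);
            }
            return 0;
        }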

    Read the article

  • unbuffered I/O in Linux

    - by stuck
    I'm writing lots and lots of data that will not be read again for weeks. As my program runs, the amount of free memory on the machine (displayed with 'free' or 'top') drops very quickly; the amount of memory my app uses does not increase, and neither does the amount of memory used by other processes. This leads me to believe the memory is being consumed by the filesystem's cache. Since I do not intend to read this data for a long time, I'm hoping to bypass the system's buffers, such that my data is written directly to disk. I don't have dreams of improving perf or being a super ninja; my hope is to give a hint to the filesystem that I'm not going to be coming back for this memory any time soon, so don't spend time optimizing for those cases. On Windows I've faced similar problems and fixed them using FILE_FLAG_NO_BUFFERING|FILE_FLAG_WRITE_THROUGH; the machine's memory was not consumed by my app and the machine was more usable in general. I'm hoping to duplicate the improvements I've seen, but on Linux. On Windows there is the restriction of writing in sector-sized pieces; I'm happy with this restriction for the amount of gain I've measured. Is there a similar way to do this in Linux?
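
    Two Linux answers are commonly given: O_DIRECT on open (the closest analogue of FILE_FLAG_NO_BUFFERING, with the same alignment restrictions), or, gentler, keeping normal buffered writes but telling the kernel the pages will not be needed again. A sketch of the second, with descriptor and range as placeholders:

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <unistd.h>

        void flush_and_drop(int fd, off_t start, off_t len)
        {
            /* push the dirty pages to disk first, so DONTNEED can drop them */
            sync_file_range(fd, start, len,
                            SYNC_FILE_RANGE_WRITE | SYNC_FILE_RANGE_WAIT_AFTER);
            /* hint: these pages will not be read again soon */
            posix_fadvise(fd, start, len, POSIX_FADV_DONTNEED);
        }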

    Read the article

  • Exciting new Evented I/O technologies

    - by Saif Bechan
    Lately I have been keeping an eye on evented I/O to tackle some of my web application problems. I have been looking at things such as Python's Twisted, Ruby's EventMachine, and Node.js. Are there any other alternatives to these three, maybe in other languages such as PHP?

    Read the article

  • File.Exists() returns false, but not in debug

    - by Tor Haugen
    I'm being completely confused here, folks. My code throws an exception because File.Exists() returns false:

        public override sealed TCargo ReadFile(string fileName)
        {
            if (!File.Exists(fileName))
            {
                throw new ArgumentException("Provided file name does not exist", "fileName");
            }

    Visual Studio breaks at the throw statement, and I immediately check the value of File.Exists(fileName) in the immediate window. It returns true. When I drag the breakpoint back up to the if statement and execute it again, it throws again. fileName is an absolute path to a file. I'm not creating the file, nor writing to it (it's there all along). If I paste the path into the open dialog in Notepad, it reads the file without problems. The code is executing in a background worker; it's the only complicating factor I can think of. I am positive the file has not been opened already, either in the worker thread or elsewhere. What's going on here?

    Read the article

  • Read char from txt file in C++

    - by Jack in the Box
    I have a program that will read the number of rows and columns from a txt file. Also, the program has to read the contents of a 2D array from the same file. Here is the txt file:

        8 20
         *
         *
        ***
        ***

    8 and 20 are the number of rows and columns, respectively. The spaces and asterisks are the contents of the array, Array[8][20]. For example, Array[0][1] = '*'. I made the program read 8 and 20 as follows:

        ifstream myFile;
        myFile.open("life.txt");
        if (!myFile) {
            cout << endl << "Failed to open file";
            return 1;
        }
        myFile >> rows >> cols;
        myFile.close();

        grid = new char*[rows];
        for (int i = 0; i < rows; i++) {
            grid[i] = new char[cols];
        }

    Now, how do I assign the spaces and the asterisks to the fields in the array? I hope you got the point.
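
    A sketch of one way to finish this (note it keeps the stream open rather than closing it after the header, since the grid still has to be read): consume the rest of the header line, then read each row with getline so that spaces are kept as grid cells.

        #include <fstream>
        #include <iostream>
        #include <string>

        int main()
        {
            std::ifstream myFile("life.txt");
            if (!myFile) {
                std::cout << std::endl << "Failed to open file";
                return 1;
            }

            int rows, cols;
            myFile >> rows >> cols;
            myFile.ignore();                       // skip the newline after "8 20"

            char **grid = new char*[rows];
            for (int i = 0; i < rows; i++) {
                grid[i] = new char[cols];
                for (int j = 0; j < cols; j++)
                    grid[i][j] = ' ';              // default for lines shorter than cols
            }

            std::string line;
            for (int i = 0; i < rows && std::getline(myFile, line); i++)
                for (int j = 0; j < cols && j < (int)line.size(); j++)
                    grid[i][j] = line[j];          // preserves both spaces and '*'
            return 0;
        }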

    Read the article

  • Fastest Way to Format Plain Text Using JavaScript

    - by Nathan Campos
    I have a huge plain text document, about 700 KB, which is very big for plain text, and I need to format it in the cloud, converting it to HTML. The only things I need to replace (format to HTML so it can be displayed by the browser) are bold and italic. In the plain text, bold looks like this:

        Not on bold... **bold text here** not bold here

    And italic like this:

        Not italic... *italic text* no italic

    Just like Stack Overflow does for its formatting. The problem is that I need to make it a lot faster, since the text is so big... One of my ideas was to add a page slider, so the script would only need to format part of the text, not all of it; then after the user changes the page the script would be called again. But how can I write the code for all this?
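
    A hedged sketch of the straightforward approach: two regex passes over the whole string, which on a 700 KB input is usually far faster than character-by-character processing. The bold pass must run first so that ** is not consumed by the single-asterisk italic pass; edge cases (nesting, escaping) are not handled here.

        // converts **bold** and *italic* markers to HTML tags
        function formatText(text) {
            return text
                .replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>') // **bold**
                .replace(/\*([^*]+)\*/g, '<em>$1</em>');            // *italic*
        }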

    Read the article

  • How do I open a file in such a way that if the file doesn't exist it will be created and opened automatically?

    - by snakile
    Here's how I open a file for writing+:

        if (fopen_s(&f, fileName, "w+") != 0) {
            printf("Open file failed\n");
            return;
        }
        fprintf_s(f, "content");

    If the file doesn't exist, the open operation fails. What's the right way to fopen if I want to create the file automatically when it doesn't already exist?

    EDIT: If the file does exist, I would like fprintf to overwrite the file, not to append to it.
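
    A sketch of the standard mode semantics, for reference: "w+" already both creates a missing file and truncates an existing one, which matches the stated goal, so a failure with "w+" usually points at a bad path or permissions rather than the mode string. fopen_s is the MSVC-specific variant.

        #include <stdio.h>

        int main(void)
        {
            FILE *f = NULL;
            /* "w+": create if missing, truncate (overwrite) if present */
            if (fopen_s(&f, "out.txt", "w+") != 0) {
                printf("Open file failed\n");
                return 1;
            }
            fprintf(f, "content");
            fclose(f);
            return 0;
        }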

    Read the article

  • File Operations in Java

    - by Amir Rachum
    I'm working on a small application in Java that takes a directory structure and renames the files according to a certain format, after parsing the original name. What is the best Java class / methodology to use in order to facilitate these file operations?

    Edit: the question is only regarding the file operations part; I've got the "getting the formatted name" part down :)
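
    A minimal sketch with java.io.File (the pre-NIO API, matching the era of the question): listFiles walks the directory tree and renameTo performs each rename. The format rule is a placeholder for the asker's own parsing.

        import java.io.File;

        public class Renamer {
            public static void renameAll(File dir) {
                File[] entries = dir.listFiles();
                if (entries == null) return;          // not a directory, or I/O error
                for (File f : entries) {
                    if (f.isDirectory()) {
                        renameAll(f);                 // recurse into subdirectories
                    } else {
                        String formatted = format(f.getName());
                        f.renameTo(new File(dir, formatted)); // returns false on failure
                    }
                }
            }

            private static String format(String original) {
                return original.toLowerCase();        // placeholder parsing rule
            }
        }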

    Read the article

  • How can I determine if a file is read-only for my process on *nix?

    - by user109078
    Using the stat function, I can get the read/write permissions for owner (user), group, and other, but this isn't what I want. I want to know the read/write permissions of a file for my process (i.e. the application I'm writing). The owner/group/other bits are only helpful if I know which of those my process is running as... so maybe that's the solution, but I'm not sure of the steps to get there.
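
    A sketch of the direct answer: access(2) performs exactly this check, evaluating the file against the process's real user and group IDs. For the effective IDs (what open() will actually use, which differs in setuid programs), many systems also provide faccessat with the AT_EACCESS flag. The file name is a placeholder.

        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            const char *path = "data.txt";   /* assumed file */

            /* checks against the process's real UID/GID */
            if (access(path, R_OK | W_OK) == 0)
                printf("readable and writable\n");
            else if (access(path, R_OK) == 0)
                printf("read-only for this process\n");
            else
                printf("not readable\n");
            return 0;
        }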

    Read the article

  • fopen() fails to open stream: permission denied, yet permissions should be valid

    - by about blank
    So, I have this error:

        Warning: fopen(/path/to/test-in.txt) [function.fopen]: failed to open stream: Permission denied

    Performing ls -l in the directory where test-in.txt is produces the following output:

        -rw-r--r-- 1 $USER $USER 1921 Sep  6 20:09 test-in.txt
        -rw-r--r-- 1 $USER $USER    0 Sep  6 20:08 test-out.txt

    In order to get past this, I decided to perform the following:

        chgrp -R www-data /path/to/php/webroot

    And then did:

        chmod g+rw /path/to/php/webroot

    Yet I still get this error when I run my PHP5 script to open the file. Why is this happening? I've tried this using LAMP as well as Cherokee through CGI, so it can't be the server. Is there a solution of some sort?

    Edit: I'll also add that I'm just developing via localhost right now.

    Update - the PHP fopen() lines:

        $fullpath = $this->fileRoot . $this->fileInData['fileName'];
        $file_ptr = fopen( $fullpath, 'r+' );

    I should also mention I'd like to stick with Cherokee if possible. What's the deal with setting file permissions for Apache/Cherokee?
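
    A hedged diagnostic sketch: the ls output shows the files themselves are still group $USER with no group write bit (chgrp/chmod were applied to the webroot path, not recursively to these files), and 'r+' needs write permission, not just read. Checking in PHP first narrows it down:

        <?php
        // what can the web server user actually do with this file?
        $fullpath = $this->fileRoot . $this->fileInData['fileName'];
        if (!is_readable($fullpath)) {
            error_log("$fullpath is not readable by the server user");
        }
        if (!is_writable($fullpath)) {
            error_log("$fullpath is not writable; 'r+' requires write access");
        }
        $file_ptr = fopen($fullpath, 'r+');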

    Read the article

  • How can chunks be allocated in a node.js stream in object mode all at once?

    - by Quentin Engles
    I can see how buffers and strings can be sent as chunks, but I'm having a problem thinking about how streams can be handled when working in object mode. Say I have a byte stream from an HTTP request message. I want to take that message, parse it, and then transform it into one big object. I already know how to parse the message. What I'm wondering is: if the message is big, so it has many chunks, but I want to make one object for the output, how can I make sure the data event waits for the whole thing? Is this just a matter of not using the push method until the chunked data has finished being sent? That would then restrict the stream's data output to a smaller object, which I think I'm fine with for now. As an added condition, the larger data will be reduced in size after the transform.
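
    A hedged sketch of that exact idea: a Transform stream with an object-mode readable side can buffer every incoming chunk in _transform and push a single assembled object from _flush, which Node only calls after the source has ended. The parse() function is assumed to be the asker's existing parser.

        var Transform = require('stream').Transform;
        var util = require('util');

        function Assembler(options) {
            // byte-mode writable side, object-mode readable side
            Transform.call(this, { readableObjectMode: true });
            this.chunks = [];
        }
        util.inherits(Assembler, Transform);

        Assembler.prototype._transform = function (chunk, encoding, callback) {
            this.chunks.push(chunk);   // hold the pieces; push nothing yet
            callback();
        };

        Assembler.prototype._flush = function (callback) {
            // runs only once the whole message has arrived
            var message = Buffer.concat(this.chunks).toString();
            this.push(parse(message)); // parse() assumed to exist
            callback();
        };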

    Read the article
