Search Results

Search found 6654 results on 267 pages for 'socket io'.

Page 59/267 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • read chars from a file - c#

    - by Saskaaa
    How do I read an array of numbers from a file? I mean, how do I read the characters from a file? Sorry for the bad English. Update: yes, I can :) The file is just "1 2 3 4 5 6 7 8" and so on. I simply do not know how to read the characters from the file.
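
    A minimal C# sketch of one way to do this (the file name and the whitespace-separated format are assumed from the post): read the whole file as text, split on whitespace, and parse each token.

        using System;
        using System.IO;

        class ReadNumbers
        {
            static void Main()
            {
                // Read the whole file, split on any whitespace, and parse each token.
                string text = File.ReadAllText("numbers.txt");
                string[] tokens = text.Split(new[] { ' ', '\t', '\r', '\n' },
                                             StringSplitOptions.RemoveEmptyEntries);
                foreach (string token in tokens)
                {
                    int value = int.Parse(token);
                    Console.Write(value + " ");
                }
                Console.WriteLine();
            }
        }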

    Read the article

  • Is it possible to open a pipe-based filehandle which prints to a variable in perl?

    - by blackkettle
    Hi, I know I can do this, ------ open(F, ">", \$var); print F "something cool"; close(F); print $var; ------ or this, open(F, "| ./prog1 | ./prog2 > tmp.file"); print F "something cool"; close(F); but is it possible to combine these? The semantics of what I'd like to do should be clear from the following: open(F, "| ./prog1 | ./prog2", \$var); print F "something cool"; close(F); print $var; however, the above clearly won't work. A few minutes of experimenting and googling seem to indicate that this is not possible, but I'd like to know whether I'm stuck with using backticks to capture the output.
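
    A minimal sketch of one workaround, assuming the IPC::Run module from CPAN is available (./prog1 and ./prog2 are the post's own placeholders): it feeds a string to the pipeline's stdin and captures the pipeline's stdout in a scalar, which is roughly the combined behaviour being asked for.

        use strict;
        use warnings;
        use IPC::Run qw(run);

        my $in  = "something cool";
        my $out = '';

        # Run ./prog1 | ./prog2, writing $in to the pipeline's stdin and
        # collecting its stdout into $out.
        run [ './prog1' ], '|', [ './prog2' ], \$in, \$out
            or die "pipeline failed: $?";

        print $out;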

    Read the article

  • C Remove the first line from a text file without rewriting file

    - by Tom Van den Bon
    Hi, I've got a service which runs all the time and also keeps a log file. It basically adds new lines to the log file every few seconds. I've written a small program which reads these lines and then parses them into various actions. The question I have is: how can I delete the lines which I have already parsed from the log file without disrupting the writing of the log file by the service? Usually when I need to delete a line in a file, I open the original file and a temporary one, and then I just write all the lines to the temp file except the one I want to delete. Obviously this method will not work here. So how do I go about deleting them?
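
    One common alternative, since the front of a file cannot be truncated in place: leave the log alone and persist how far the parser has read, then seek to that offset on the next run. A minimal C sketch, with hypothetical file names:

        #include <stdio.h>

        int main(void)
        {
            long offset = 0;

            /* restore how far we got last time, if a state file exists */
            FILE *state = fopen("parser.offset", "r");
            if (state) { fscanf(state, "%ld", &offset); fclose(state); }

            FILE *log = fopen("service.log", "r");
            if (!log) return 1;
            fseek(log, offset, SEEK_SET);          /* skip everything already parsed */

            char line[1024];
            while (fgets(line, sizeof line, log)) {
                /* parse(line); */
            }
            offset = ftell(log);                   /* remember the new high-water mark */
            fclose(log);

            state = fopen("parser.offset", "w");
            if (state) { fprintf(state, "%ld\n", offset); fclose(state); }
            return 0;
        }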

    Read the article

  • write cache and write sequence order

    - by excanoe
    OK, here I have a somewhat odd question: let's say we have a binary file (.log) and a sequence of write operations, for example log1, log2, log3, each with some block size n (raw data). Question: can I be sure that the log1, log2 and log3 records are written to ONE file in the correct order, even though there are several cache levels involved (disk hardware and OS level)? Update: I am especially interested in what happens to the record order (not to the records themselves) if there is a software or hardware failure (a reboot or some other reason). Update: some percentage of writes may fail, but the main question remains: will the write order stay correct?
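
    If ordering across failures matters, the usual technique is to flush and sync after each record so an earlier record is on disk before a later one is written. A minimal C sketch (POSIX):

        #include <stdio.h>
        #include <unistd.h>

        /* Append one record and force it to disk before returning, so a later
         * record can never reach stable storage ahead of an earlier one. */
        int append_record(FILE *log, const void *data, size_t n)
        {
            if (fwrite(data, 1, n, log) != n) return -1;
            if (fflush(log) != 0) return -1;          /* drain the stdio buffer  */
            return fsync(fileno(log));                /* drain the OS page cache */
        }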

    Read the article

  • How to copy files without slowing down my app?

    - by Kevin Gebhardt
    I have a bunch of little files in my assets which need to be copied to the SD card on the first start of my app. The copy code I got from here, placed in an IntentService, works like a charm. However, when I start to copy many little files, the whole app gets incredibly slow (I'm not really sure why, by the way), which is a really bad experience for the user on first start. As I noticed that other apps run normally during that time, I tried to start a child process for the service, which didn't work, as I can't access my assets from another process as far as I understood. Has anybody out there an idea how a) to copy the files without blocking my app, b) to get through to my assets from a private process (process=":myOtherProcess" in the Manifest), or c) to solve the problem in a completely different way? Edit: To make this clearer: the copying already takes place in a separate thread (started automatically by IntentService). The problem is not separating the task of copying, but that the copying in a dedicated thread somehow affects the rest of the app (e.g. blocking too many app-specific resources?) but not other apps (so it's not blocking the whole CPU or something). Edit 2: Problem solved; it turns out there wasn't really a problem. See my answer below.

    Read the article

  • Best directory to store application data with read\write rights for all users?

    - by Wodzu
    Hi guys. Until Windows Vista I was saving my application data into the directory where the program was located. The most common place was "C:\Program Files\MyApplication". As we know, under Vista and later the common user doesn't have rights to write under the "Program Files" folder. So my first idea was to save the application data under the "All Users\Application Data" folder. But it seems that this folder has writing restrictions too! So to sum up, my requirements are: the folder should exist under Windows XP and above (Microsoft systems). All users of the system should have read\write\create rights to this folder, its subfolders and files. I want to have only one copy of the file(s) for all users. Thanks for your time.
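
    The folder that matches these requirements on XP and later is the common (all-users) application data directory: "All Users\Application Data" on XP, C:\ProgramData on Vista and later. Note that on Vista+ a file created there is, by default, writable only by its creator, so installers typically relax the ACL on the application's own subfolder. The post does not state its language, so as an illustration only, a minimal C# sketch of resolving the path:

        using System;
        using System.IO;

        class CommonDataPath
        {
            static void Main()
            {
                // Resolves to ...\All Users\Application Data (XP) or C:\ProgramData (Vista+).
                string root = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
                string appDir = Path.Combine(root, "MyApplication");   // hypothetical app name
                Directory.CreateDirectory(appDir);
                Console.WriteLine(appDir);
            }
        }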

    Read the article

  • How to run a set of SQL queries from a file, in PHP?

    - by Harish Kurup
    I have a set of SQL queries in a file (i.e. query.sql), and I want to run those queries from the file using PHP. The code that I have written is not working: //database config's... $file_name="query.sql"; $query==file($file_name); $array_length=count($query); for($i=0;$i<$array_length;$i++) { $data .= $query[$i]; } echo $data; mysql_query($data); It echoes the SQL query from the file but throws an error at the mysql_query() function...
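
    One likely cause of the error is that mysql_query() executes only a single statement per call, so the concatenated contents of the file cannot be sent in one go (and the stray == means $query never receives the file at all). A minimal sketch using the same mysql_* API as the question, splitting on ';' and running the statements one at a time (it assumes no statement contains a literal semicolon inside a string):

        <?php
        // database config...
        $file_name = "query.sql";
        $sql = file_get_contents($file_name);

        // Split on ';' and run each non-empty statement separately.
        foreach (explode(';', $sql) as $statement) {
            $statement = trim($statement);
            if ($statement === '') {
                continue;
            }
            if (!mysql_query($statement)) {
                die('Query failed: ' . mysql_error());
            }
        }
        ?>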

    Read the article

  • [c++] How to create a std::ofstream to a temp file?

    - by dehmann
    Okay, mkstemp is the preferred way to create a temp file in POSIX. But it opens the file and returns an int, which is a file descriptor. From that I can only create a FILE*, but not an std::ofstream, which I would prefer in C++. (Apparently, on AIX and some other systems, you can create an std::ofstream from a file descriptor, but my compiler complains when I try that.) I know I could get a temp file name with tmpnam and then open my own ofstream with it, but that's apparently unsafe due to race conditions, and results in a compiler warning (g++ v3.4 on Linux): warning: the use of `tmpnam' is dangerous, better use `mkstemp' So, is there any portable way to create an std::ofstream to a temp file?
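
    One workable compromise on POSIX systems: let mkstemp() create and open the file safely (avoiding the tmpnam race), then attach an std::ofstream to the name it filled in and close the raw descriptor. A minimal sketch:

        #include <cstdio>
        #include <cstdlib>
        #include <unistd.h>
        #include <fstream>

        int main()
        {
            char name[] = "/tmp/example_XXXXXX";   // mkstemp replaces the X's
            int fd = mkstemp(name);                // file is created and opened atomically
            if (fd == -1) { std::perror("mkstemp"); return 1; }

            std::ofstream out(name);               // attach a stream to the already-existing file
            close(fd);                             // the raw descriptor is no longer needed
            out << "hello temp file\n";
            out.close();

            std::remove(name);                     // clean up when finished
            return 0;
        }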

    Read the article

  • Scanner's Read Line returning NoSuchElementException

    - by Brian
    This is my first time using Stack Overflow. I am trying to read a text file which consists of a single number on the first line. try { Scanner s = new Scanner(new File("HighScores.txt")); int temp =Integer.parseInt(s.nextLine()); s.close(); return temp; } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } However, I get an error: java.util.NoSuchElementException: No line found at java.util.Scanner.nextLine(Unknown Source) at GameStart.getHighScore(GameStart.java:334) at GameStart.init(GameStart.java:82) at sun.applet.AppletPanel.run(Unknown Source) at java.lang.Thread.run(Unknown Source) I know that HighScores.txt is not empty, so why is this problem occurring? I tried using BufferedReader, and BufferedReader.readLine() returned null.
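
    NoSuchElementException from nextLine() means the Scanner found no line at all, which in an applet usually points at the file not being where (or what) the code expects rather than at Scanner itself. A minimal defensive sketch that reports the resolved path and checks for a line before reading (file name taken from the post):

        import java.io.File;
        import java.io.FileNotFoundException;
        import java.util.Scanner;

        public class HighScoreReader {
            public static int readHighScore() {
                File f = new File("HighScores.txt");
                System.out.println("Reading " + f.getAbsolutePath());   // verify which file is really opened
                try {
                    Scanner s = new Scanner(f);
                    try {
                        if (s.hasNextLine()) {
                            return Integer.parseInt(s.nextLine().trim());
                        }
                        System.err.println("File exists but contains no line");
                    } finally {
                        s.close();
                    }
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                }
                return 0;
            }
        }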

    Read the article

  • Ruby - Writing Hpricot data to a file

    - by John
    Hey everyone, I am currently doing some XML parsing and I've chosen to use Hpricot because of its ease of use and syntax; however, I am running into some problems. I need to write a piece of XML data that I have found out to another file. However, when I do this the format is not preserved. For example, if the content should look like this: <dict> <key>item1</key><value>12345</value> <key>item2</key><value>67890</value> <key>item3</key><value>23456</value> </dict> And assuming that there are many entries like this in the document, I am iterating through the 'dict' items by using hpricot_element = Hpricot(xml_document_body) f = File.new('some_new_file.xml') (hpricot_element/:dict).each { |dict| f.write( dict.to_original_html ) } After using the above code, I would expect the output to look exactly like the XML shown above. However, to my surprise, the output of the file looks more like this: <dict>\n", " <key>item1</key><value>12345</value>\n", " <key>item2</key><value>67890</value>\n", " <key>item3</key><value>23456</value\n", " </dict> I've tried splitting at the "\n" characters and writing to the file one line at a time, but that didn't seem to work either, as it did not recognize the "\n" characters. Any help is greatly appreciated. It might be a very simple solution, but I am having trouble finding it. Thanks!

    Read the article

  • write 2d array to a file in C (Operating system)

    - by Bobj-C
    Hello all, I use the code below to write a 1D array to a file: FILE *fp; float floatValue[5] = { 1.1F, 2.2F, 3.3F, 4.4F, 5.5F }; int i; if((fp=fopen("test", "wb"))==NULL) { printf("Cannot open file.\n"); } if(fwrite(floatValue, sizeof(float), 5, fp) != 5) printf("File read error."); fclose(fp); /* read the values */ if((fp=fopen("test", "rb"))==NULL) { printf("Cannot open file.\n"); } if(fread(floatValue, sizeof(float), 5, fp) != 5) { if(feof(fp)) printf("Premature end of file."); else printf("File read error."); } fclose(fp); for(i=0; i<5; i++) printf("%f ", floatValue[i]); My question is: how do I write and read a 2D array?
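
    Since a fixed-size 2D array is a single contiguous block of memory, the same fwrite/fread pattern works; only the element count changes. A minimal sketch:

        #include <stdio.h>

        int main(void)
        {
            float values[3][5] = { {1.1F, 1.2F, 1.3F, 1.4F, 1.5F},
                                   {2.1F, 2.2F, 2.3F, 2.4F, 2.5F},
                                   {3.1F, 3.2F, 3.3F, 3.4F, 3.5F} };
            float loaded[3][5];
            int r, c;
            FILE *fp;

            /* the whole 3x5 block is contiguous, so write it in one call */
            if ((fp = fopen("test2d", "wb")) == NULL) { printf("Cannot open file.\n"); return 1; }
            if (fwrite(values, sizeof(float), 3 * 5, fp) != 3 * 5) printf("File write error.");
            fclose(fp);

            /* read it back the same way */
            if ((fp = fopen("test2d", "rb")) == NULL) { printf("Cannot open file.\n"); return 1; }
            if (fread(loaded, sizeof(float), 3 * 5, fp) != 3 * 5) printf("File read error.");
            fclose(fp);

            for (r = 0; r < 3; r++)
                for (c = 0; c < 5; c++)
                    printf("%f ", loaded[r][c]);
            return 0;
        }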

    Read the article

  • How can I speed up line by line reading of an ASCII file? (C++)

    - by Jon
    Here's a bit of code that is a considerable bottleneck after doing some measuring: //----------------------------------------------------------------------------- // Construct dictionary hash set from dictionary file //----------------------------------------------------------------------------- void constructDictionary(unordered_set<string> &dict) { ifstream wordListFile; wordListFile.open("dictionary.txt"); string word; while( wordListFile >> word ) { if( !word.empty() ) { dict.insert(word); } } wordListFile.close(); } I'm reading in ~200,000 words and this takes about 240 ms on my machine. Is the use of ifstream here efficient? Can I do better? I'm reading about mmap() implementations but I'm not understanding them 100%. The input file is simply text strings with *nix line terminations.
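
    One cheap experiment to try before reaching for mmap(): pull the whole file in with a single bulk read and tokenize in memory, which may remove much of the per-word stream overhead. A minimal sketch (assumes the dictionary fits comfortably in RAM):

        #include <fstream>
        #include <iterator>
        #include <sstream>
        #include <string>
        #include <unordered_set>

        void constructDictionary(std::unordered_set<std::string>& dict)
        {
            std::ifstream in("dictionary.txt", std::ios::binary);

            // One bulk read of the entire file into a string.
            std::string contents((std::istreambuf_iterator<char>(in)),
                                 std::istreambuf_iterator<char>());

            // Tokenize in memory; operator>> skips whitespace and newlines.
            std::istringstream words(contents);
            std::string word;
            while (words >> word)
                dict.insert(word);
        }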

    Read the article

  • Reading and writing in parallel

    - by Malfist
    I want to be able to read and write a large file in parallel, or if not in parallel, at least in blocks so that I don't use up so much memory. This is my current code: // Define memory stream which will be used to hold encrypted data. MemoryStream memoryStream = new MemoryStream(); // Define cryptographic stream (always use Write mode for encryption). CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write); //start encrypting using (BinaryReader reader = new BinaryReader(File.Open(fileIn, FileMode.Open))) { byte[] buffer = new byte[1024 * 1024]; int read = 0; do { read = reader.Read(buffer, 0, buffer.Length); cryptoStream.Write(buffer, 0, read); } while (read == buffer.Length); } // Finish encrypting. cryptoStream.FlushFinalBlock(); // Convert our encrypted data from a memory stream into a byte array. //byte[] cipherTextBytes = memoryStream.ToArray(); //write our memory stream to a file memoryStream.Position = 0; using (BinaryWriter writer = new BinaryWriter(File.Open(fileOut, FileMode.Create))) { byte[] buffer = new byte[1024 * 1024]; int read = 0; do { read = memoryStream.Read(buffer, 0, buffer.Length); writer.Write(buffer, 0, read); } while (read == buffer.Length); } // Close both streams. memoryStream.Close(); cryptoStream.Close(); As you can see, it reads the entire file into memory, encrypts it, then writes it out. If I happen to be encrypting files that are very large (2GB+) it tends not to work, or at the very least, consumes ~97% of my memory. How could I do it in a more effective manner?
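
    The MemoryStream is what forces the whole ciphertext into RAM; chaining the CryptoStream directly onto the output FileStream lets the data flow through in block-sized pieces. A minimal sketch (it assumes 'encryptor' is the ICryptoTransform already created elsewhere in the post's code):

        using System.IO;
        using System.Security.Cryptography;

        static class StreamingEncryption
        {
            public static void EncryptFile(string fileIn, string fileOut, ICryptoTransform encryptor)
            {
                using (FileStream input = File.Open(fileIn, FileMode.Open))
                using (FileStream output = File.Open(fileOut, FileMode.Create))
                using (CryptoStream crypto = new CryptoStream(output, encryptor, CryptoStreamMode.Write))
                {
                    byte[] buffer = new byte[1024 * 1024];
                    int read;
                    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        crypto.Write(buffer, 0, read);   // each block is encrypted and written immediately
                    }
                    crypto.FlushFinalBlock();            // pad and flush the final block
                }
            }
        }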

    Read the article

  • How to read comma separated values from text file in JAVA?

    - by user1425223
    I have got this text file with latitude and longitude values of different points on a map. I want to store these coordinates into a MySQL database using Hibernate. I want to know how I can split my string into latitudes and longitudes. What is the general way to do this type of thing, i.e. with other delimiters like space, tab, etc.? File: 28.515046280572285,77.38258838653564 28.51430151808072,77.38336086273193 28.513566177802456,77.38413333892822 28.512830832397192,77.38490581512451 28.51208605426073,77.3856782913208 28.511341270865113,77.38645076751709 28.510530488025346,77.38720178604126 28.509615992924807,77.38790988922119 28.50875805732363,77.38862872123718 28.507994394490268,77.38943338394165 28.50728729434496,77.39038825035095 28.506674470385246,77.39145040512085 28.506174780521828,77.39260911941528 28.505665660113582,77.39376783370972 28.505156537248446,77.39492654800415 28.50466626846366,77.39608526229858 28.504175997400655,77.39724397659302 28.503685724059455,77.39840269088745 28.503195448440064,77.39956140518188 28.50276174118543,77.4007523059845 28.502309175192945,77.40194320678711 28.50185660725938,77.40313410758972 28.50140403738471,77.40432500839233 28.500951465568985,77.40551590919495 28.500498891812207,77.40670680999756 28.5000463161144,77.40789771080017 28.49959373847559,77.40908861160278 The code I am using to read from the file: try { BufferedReader in = new BufferedReader(new FileReader("G:\\RoutePPAdvant2.txt")); String str; str = in.readLine(); while ((str = in.readLine()) != null) { System.out.println(str); } in.close(); } catch (IOException e) { System.out.println("File Read Error"); }
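
    String.split() handles this directly: split each line on "," (or on "\\t" or "\\s+" for tab- or whitespace-delimited data) and parse the pieces. A minimal sketch built on the post's own reading loop:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class CoordinateReader {
            public static void main(String[] args) {
                try {
                    BufferedReader in = new BufferedReader(new FileReader("G:\\RoutePPAdvant2.txt"));
                    String str;
                    while ((str = in.readLine()) != null) {
                        String[] parts = str.split(",");        // "\\t" or "\\s+" for other delimiters
                        double latitude = Double.parseDouble(parts[0]);
                        double longitude = Double.parseDouble(parts[1]);
                        System.out.println(latitude + " / " + longitude);
                    }
                    in.close();
                } catch (IOException e) {
                    System.out.println("File Read Error");
                }
            }
        }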

    Read the article

  • Reading a large text file to memory in C++

    - by NoneType
    Is there a way to read a large text file (~60MB) into memory at once (like a compiler flag to increase the program's memory limit)? Currently, ifstream's open function throws a segmentation fault while trying to read this file. ifstream fis; fis.open("my_large_file.txt"); // Segfaults here The file just consists of rows of the form number_1<tabspace>number_2 i.e., two numbers separated by a tab.

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and deals with reading a large number of disk files. The total load can be more than 3 GB. There is a custom memory manager that uses memory-mapped files to achieve reading of such a huge amount of data. The files are mapped into the process memory space only when needed, and with this the process memory is well under control. But what is observed is that, with memory mapping, the system cache keeps on increasing until it occupies all the available physical memory. This leads to slowing down of the entire system. My question is: how do I prevent the system cache from hogging the physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but with this, the read operations take a considerable amount of time and slow down the application performance. How can I achieve scalability without sacrificing much performance? What are the common techniques used in such cases? I don't have a good understanding of the WinXP OS caching behavior. Any good links explaining the same would also be helpful.

    Read the article

  • How to get proper alignment when printing to file

    - by user1067334
    I have this structure, the elements of which I need to write to a text file: struct Stage3ADisplay { int nSlot; char *Item; char *Type; int nIndex; unsigned char attributesMD[17]; //the last character is \0 unsigned char contentsMD[17]; //only for regular files - //the last character is \0 }; buffer = malloc(sizeof(Stage3ADisplayVar[nIterator]->nSlot) + sizeof(Stage3ADisplayVar[nIterator]->Item) + sizeof(Stage3ADisplayVar[nIterator]->Type) + sizeof(Stage3ADisplayVar[nIterator]->nIndex) + sizeof(Stage3ADisplayVar[nIterator]->attributesMD) + sizeof(Stage3ADisplayVar[nIterator]->contentsMD) + 1); sprintf (buffer,"%d %s %s %d %x %x",Stage3ADisplayVar[nIterator]->nSlot, Stage3ADisplayVar[nIterator]->Item,Stage3ADisplayVar[nIterator]->Type,Stage3ADisplayVar[nIterator]->nIndex,Stage3ADisplayVar[nIterator]->attributesMD,Stage3ADisplayVar[nIterator]->contentsMD); How do I make sure the rows in the file are properly aligned? Thank you.
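
    Fixed-width conversion specifiers are usually enough: give every column a width (with '-' for left-justification) so the rows line up. A minimal sketch with arbitrary widths; note that %x applied to an unsigned char array prints the array's address, so the digest fields are written with %s here:

        #include <stdio.h>

        struct Stage3ADisplay {
            int nSlot;
            char *Item;
            char *Type;
            int nIndex;
            unsigned char attributesMD[17];
            unsigned char contentsMD[17];
        };

        /* one fixed-width row per record; the widths (6, 20, 12, ...) are arbitrary */
        void writeRow(FILE *out, const struct Stage3ADisplay *d)
        {
            fprintf(out, "%6d  %-20s  %-12s  %6d  %-16s  %-16s\n",
                    d->nSlot, d->Item, d->Type, d->nIndex,
                    (const char *)d->attributesMD, (const char *)d->contentsMD);
        }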

    Read the article

  • unbuffered I/O in Linux

    - by stuck
    I'm writing lots and lots of data that will not be read again for weeks - as my program runs, the amount of free memory on the machine (displayed with 'free' or 'top') drops very quickly, while the amount of memory my app uses does not increase - neither does the amount of memory used by other processes. This leads me to believe the memory is being consumed by the filesystem's cache - since I do not intend to read this data for a long time, I'm hoping to bypass the system's buffers, such that my data is written directly to disk. I don't have dreams of improving performance or being a super ninja; my hope is to give a hint to the filesystem that I'm not going to be coming back for this memory any time soon, so don't spend time optimizing for those cases. On Windows I've faced similar problems and fixed them using FILE_FLAG_NO_BUFFERING|FILE_FLAG_WRITE_THROUGH - the machine's memory was not consumed by my app and the machine was more usable in general. I'm hoping to duplicate the improvements I've seen, but on Linux. On Windows there is the restriction of writing in sector-sized pieces; I'm happy with this restriction for the amount of gain I've measured. Is there a similar way to do this in Linux?
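
    The closest Linux equivalents are O_DIRECT (which, like FILE_FLAG_NO_BUFFERING, imposes alignment rules) and the gentler posix_fadvise() with POSIX_FADV_DONTNEED, which simply tells the kernel the written pages can be dropped from the page cache. A minimal sketch of the fadvise approach:

        #include <fcntl.h>
        #include <unistd.h>
        #include <stddef.h>

        /* Append data, push it to disk, then tell the kernel we will not be back
         * for these pages soon so the page cache is free to evict them. */
        int write_and_drop(const char *path, const void *data, size_t len)
        {
            int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
            if (fd == -1) return -1;

            if (write(fd, data, len) == -1) { close(fd); return -1; }
            fsync(fd);                                      /* data must reach disk first    */
            posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);   /* then the cached pages can go  */
            return close(fd);
        }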

    Read the article

  • How to create custom filenames in C?

    - by eSKay
    Please see this piece of code: #include<stdio.h> #include<string.h> #include<stdlib.h> int main() { int i = 0; FILE *fp; for(i = 0; i < 100; i++) { fp = fopen("/*what should go here??*/","w"); //I need to create files with names: file0.txt, file1.txt, file2.txt etc //i.e. file{i}.txt } }
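
    The missing piece is just formatting each name into a buffer with snprintf (or sprintf) before calling fopen. A minimal sketch:

        #include <stdio.h>

        int main(void)
        {
            char name[32];
            int i;
            for (i = 0; i < 100; i++) {
                snprintf(name, sizeof name, "file%d.txt", i);   /* file0.txt, file1.txt, ... */
                FILE *fp = fopen(name, "w");
                if (fp == NULL) {
                    printf("Cannot open %s\n", name);
                    return 1;
                }
                fclose(fp);
            }
            return 0;
        }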

    Read the article

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are: errorType LargeFile::read( void* data_out, __int64 start_position, __int64 size_bytes ) const { if( !m_open ) { // return error } else { seekPosition( start_position ); DWORD bytes_read; BOOL result = ReadFile( m_file, data_out, DWORD( size_bytes ), &bytes_read, NULL ); if( size_bytes != bytes_read || result != TRUE ) { // return error } } // return no error } void LargeFile::seekPosition( __int64 position ) const { LARGE_INTEGER target; target.QuadPart = LONGLONG( position ); SetFilePointerEx( m_file, target, NULL, FILE_BEGIN ); } The performance of the above does not seem to be very good. Reads are on 4K blocks of the file. Some reads are coherent, most are not. A couple questions: Is there a good way to profile the reads? What things might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file i/o optimization, so suggestions or pointers to articles/tutorials would be helpful.
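
    One low-risk tweak to try first: pass an access-pattern hint when the handle is created, so the cache manager stops read-ahead it cannot use for mostly non-sequential 4K reads. A minimal sketch of just the CreateFile call (everything else in the post's read path stays the same):

        #include <windows.h>

        // FILE_FLAG_RANDOM_ACCESS hints that reads will jump around the file;
        // for a truly sequential pass FILE_FLAG_SEQUENTIAL_SCAN is the better hint.
        HANDLE openLargeFile(const wchar_t* path)
        {
            return CreateFileW(path,
                               GENERIC_READ,
                               FILE_SHARE_READ,
                               NULL,
                               OPEN_EXISTING,
                               FILE_FLAG_RANDOM_ACCESS,
                               NULL);
        }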

    Read the article

  • File.Exists() returns false, but not in debug

    - by Tor Haugen
    I'm completely confused here, folks. My code throws an exception because File.Exists() returns false: public override sealed TCargo ReadFile(string fileName) { if (!File.Exists(fileName)) { throw new ArgumentException("Provided file name does not exist", "fileName"); } Visual Studio breaks at the throw statement, and I immediately check the value of File.Exists(fileName) in the immediate window. It returns true. When I drag the breakpoint back up to the if statement and execute it again, it throws again. fileName is an absolute path to a file. I'm not creating the file, nor writing to it (it's there all along). If I paste the path into the open dialog in Notepad, it reads the file without problems. The code is executing in a background worker. It's the only complicating factor I can think of. I am positive the file has not been opened already, either in the worker thread or elsewhere. What's going on here?
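
    A useful first diagnostic, since File.Exists() swallows every error (bad path, permissions, impersonation on the worker thread) and just returns false: log the fully resolved path the worker actually checks and compare it with the one inspected in the debugger. A minimal sketch:

        using System;
        using System.IO;

        static class ExistenceProbe
        {
            public static void Probe(string fileName)
            {
                // A relative fileName resolves against the current working directory,
                // which can differ between a debug session and a normal run.
                string full = Path.GetFullPath(fileName);
                Console.WriteLine("Checking '{0}' (exists: {1})", full, File.Exists(full));
            }
        }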

    Read the article
