Search Results

Search found 3856 results on 155 pages for 'io'.


  • Exciting new Evented I/O technologies

    - by Saif Bechan
    Lately I have had my eye on evented I/O to tackle some of my web application problems. I have been looking at things such as Python's Twisted, Ruby's EventMachine, and Node.js. Are there any other alternatives to these three, maybe in other languages such as PHP?


  • Reading a large text file into memory in C++

    - by NoneType
    Is there a way to read a large text file (~60 MB) into memory at once (like a compiler flag to increase the program's memory limit)? Currently, ifstream's open segfaults while trying to read this file:

        ifstream fis;
        fis.open("my_large_file.txt"); // Segfaults here

    The file just consists of rows of the form number_1<tabspace>number_2, i.e., two numbers separated by a tab.
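
    Reading 60 MB needs no special flag; default process limits are far above that. A minimal sketch that slurps the whole file into one string:

        #include <fstream>
        #include <sstream>
        #include <string>

        std::ifstream fis("my_large_file.txt");
        std::stringstream ss;
        ss << fis.rdbuf();                // stream the entire file in one go
        std::string contents = ss.str();  // ~60 MB now held in memory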


  • Split file with PHP and generate contents

    - by user201140
    How do I split the content below into separate files, without the placeholder tags? I'd also like to take the text inside the placeholder tags and place it inside a new contents file.

        <div class='placeholder'>The First Chapter</div>
        This is some text.

        <div class='placeholder'>The Second Chapter</div>
        This is some more text.

        <div class='placeholder'>Last Chapter</div>
        The last chapter.

    Thanks.
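
    A sketch using preg_split() with PREG_SPLIT_DELIM_CAPTURE, so the chapter titles are kept alongside the bodies (the file names are assumptions):

        <?php
        $text = file_get_contents('book.txt');

        // split on the placeholder tags, capturing the title inside each one
        $parts = preg_split("#<div class='placeholder'>(.*?)</div>#",
                            $text, -1,
                            PREG_SPLIT_DELIM_CAPTURE | PREG_SPLIT_NO_EMPTY);

        // $parts now alternates: title, body, title, body, ...
        $titles = array();
        for ($i = 0; $i < count($parts); $i += 2) {
            $titles[] = $parts[$i];
            file_put_contents('chapter' . ($i / 2 + 1) . '.txt', trim($parts[$i + 1]));
        }
        file_put_contents('contents.txt', implode("\n", $titles));
        ?>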


  • [C++] How to create a std::ofstream to a temp file?

    - by dehmann
    Okay, mkstemp is the preferred way to create a temp file in POSIX. But it opens the file and returns an int, which is a file descriptor. From that I can only create a FILE*, but not an std::ofstream, which I would prefer in C++. (Apparently, on AIX and some other systems, you can create an std::ofstream from a file descriptor, but my compiler complains when I try that.) I know I could get a temp file name with tmpnam and then open my own ofstream with it, but that's apparently unsafe due to race conditions, and results in a compiler warning (g++ v3.4 on Linux):

        warning: the use of `tmpnam' is dangerous, better use `mkstemp'

    So, is there any portable way to create an std::ofstream to a temp file?
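
    One common workaround, sketched here on the assumption that reopening by name is acceptable: let mkstemp create the file safely, then open a std::ofstream on the resulting name. The tmpnam race is avoided because the file already exists by the time the stream opens it:

        #include <stdlib.h>
        #include <unistd.h>
        #include <fstream>

        char name[] = "/tmp/myapp-XXXXXX";  // template; mkstemp fills in XXXXXX
        int fd = mkstemp(name);             // creates and opens the file atomically
        if (fd != -1) {
            std::ofstream out(name);        // reopen by name for C++ stream I/O
            close(fd);                      // the raw descriptor is no longer needed
            out << "hello\n";
        }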


  • [C] Read line from file without knowing the line length.

    - by ryyst
    Hi, I want to read in a file line by line, without knowing the line length beforehand. Here's what I've got so far:

        int ch;
        int length = 0;
        char buffer[4095];

        while ((ch = getc(file)) != '\n' && ch != EOF) {
            buffer[length] = ch;
            length++;
        }
        printf("Line length: %d characters.", length);

    I can now figure out the line length, but only for lines that are shorter than 4095 characters. Is there a better way to do this (I already used fgets() but was told it wasn't the best way)? --Ry
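
    On POSIX systems, getline(3) grows the buffer for you as the line comes in; a minimal sketch (note the returned length includes the trailing newline):

        #include <stdio.h>
        #include <stdlib.h>

        char *line = NULL;   /* getline allocates and resizes this as needed */
        size_t cap = 0;
        ssize_t len;

        while ((len = getline(&line, &cap, file)) != -1) {
            printf("Line length: %zd characters.\n", len);
        }
        free(line);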


  • Program won't save output to file, and prints a string oddly (C++, Linux)

    - by Predictability
    I'm trying to write a password program: the user enters a password, then it saves the password to a file in /tmp/, and then it outputs the password (for me, so I can find bugs). I have included the string header, and I set the password's type to string, but when I output it, it prints like this: 0x7fffb55baac0password // <-- that's the password I entered. It outputs what looks like a memory address, then the password I entered, and it won't save it to the file in /tmp/ that I want it to (or any file in /tmp/). Here's the source code: http://codepad.org/3aamAv7R Thank you for all the help you guys have given me so far.
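
    Without the linked source this is a guess, but a leading value like 0x7fffb55baac0 is usually a pointer being printed (for example, streaming &password instead of password). A minimal sketch of the intended behaviour, with the file location as an assumption:

        #include <fstream>
        #include <iostream>
        #include <string>

        int main() {
            std::string password;
            std::cin >> password;

            std::ofstream out("/tmp/password.txt");  // assumed path
            out << password << '\n';                 // writes the text itself

            std::cout << password << '\n';           // prints the string, not its address
            return 0;
        }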


  • What influences running time of reading a bunch of images?

    - by remi
    I have a program where I read a handful of tiny images (50,000 images of size 32x32). I read them using OpenCV's imread function, in a program like this:

        std::vector<std::string> imageList; // initialized with the full paths to the 50K images

        for (string s : imageList) {
            cv::Mat m = cv::imread(s);
        }

    Sometimes it reads the images in a few seconds; sometimes it takes a few minutes. I run this program in GDB with a breakpoint beyond the reading loop, so it's not because I'm stuck at a breakpoint. The same "erratic" behaviour happens when I run the program outside GDB, whether the program is compiled with or without optimisation, and whether or not other programs are running in the background. The images are always at the same place on my machine's hard drive. I run the program on a SUSE Linux distribution, compiled with gcc. So I am wondering: what could affect the image-reading time that much?


  • Read text files containing binary data as a single matrix in Matlab

    - by user1716595
    I have a text file which contains binary data in the following manner:

        00000000000000000000000000000000001011111111111111111111111111111111111111111111111111111111110000000000000000000000000000000
        00000000000000000000000000000000000000011111111111111111111111111111111111111111111111000111100000000000000000000000000000000
        00000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111100000000000000000000000000000000
        00000000000000000000000000000000000111111111111111111111111111111111111111111111111111111111100000000000000000000000000000000
        00000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111100000000000000000000000000000000
        00000000000000000000000000000000000000011111111111111111111111111111111111111111111111111111100000000000000000000000000000000
        00000000000000000000000000000000000000011111111111111111111111111111111111111111111111000111110000000000000000000000000000000
        00000000000000000000000000000000000000111111111111111111111111111111111111111111111111111111110000000000000000000000000000000
        00000000000000000000000000000000000000000000111111111111111111111111111111111111110000000011100000000000000000000000000000000
        00000000000000000000000000000000000000011111111111111111111111111111111111111111111111100111110000000000000000000000000000000
        00000000000000000000000000000000000111111111111111111111111111111111111111111111111111110111110000000000000000000000000000000
        00000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111100000000000000000000000000000000
        00000000000000000000000000000000000000001111111111111111111111111111111111111111111111000011100000000000000000000000000000000
        00000000000000000000000000000000000000001111111111111111111111111111111111111111111111000011100000000000000000000000000000000
        00000000000000000000000000000000000001111111111111111111111111111111111111111111111111111111000000000000000000000000000000000
        00000000000000000000000000000000000000011111111111111111111111111111111111111111111110000011100000000000000000000000000000000
        00000000000000000000000000000000000000000000011111111111111111111111111111111111100000000011100000000000000000000000000000000
        00000000000000000000000000000000000000111111111111111111111111111111111111111111111111110111100000000000000000000000000000000

    Please note that each 1 or 0 is independent, i.e., the values are not decimal numbers. I need to find the column-wise sum of the file. There are 125 columns in all (the rows may wrap onto the next line here) and there are 840946 rows. I have tried textread, fscanf, and a few other Matlab commands, but the result is that they all read each row in decimal format and create an 840946*1 array. I want to create an 840946*125 array to compute a column-wise sum. Kindly help, thanks!
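
    One sketch of a way to get a numeric matrix in Matlab: read each row as a string, stack the strings into a character array, and subtract '0' (the file name here is an assumption):

        fid = fopen('data.txt', 'r');
        rows = textscan(fid, '%s');    % one string per row of digits
        fclose(fid);

        chars = char(rows{1});         % 840946-by-125 character array
        bits = chars - '0';            % '0'/'1' characters -> numeric 0/1
        colSums = sum(bits, 1);        % 1-by-125 column-wise sums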


  • Preventing threads from writing to the same file

    - by EpsilonVector
    I'm implementing an FTP-like protocol in Linux kernel 2.4 (homework), and I was under the impression that if a file is open for writing, any subsequent attempt to open it by another thread should fail. Then I actually tried it and discovered that the second open goes through. How do I prevent this from happening? PS: I'm using open() to open the file.
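
    By default, Unix allows any number of simultaneous opens of the same file, even for writing; exclusion has to be requested explicitly by cooperating writers. A userspace sketch using advisory locking with flock() (the file name is an assumption):

        #include <sys/file.h>
        #include <fcntl.h>

        int fd = open("transfer.dat", O_WRONLY);
        if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
            /* another writer holds the lock: reject this open */
        } else {
            /* exclusive access; release with close(fd) or flock(fd, LOCK_UN) */
        }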


  • How does including a .csv work in an enum?

    - by Tommy
        enum ID // IDs
        {
            ID_HEADER = 0, // ID 0 = headers
        #include "DATA.CSV"
            ID_LIMIT
        };

    I inherited some code here..... Looking at "DATA.CSV", I see all the IDs used to populate the enum in column B, along with other data. My question: how does the enum know that it is using "column B" to retrieve its members? There must be some other logic in the application, yet I don't see it. What else should I look for? Thanks.
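
    The preprocessor does no CSV parsing: whatever is in DATA.CSV gets pasted in verbatim, so its lines must already be valid enumerator syntax by the time they land inside the enum. One common pattern that makes a shared data file work this way is the X-macro; a sketch with a hypothetical ROW macro (look for a #define in effect wherever the file is included):

        /* hypothetical DATA.CSV -- every line is a macro invocation: */
        /*   ROW(colA, ID_ALPHA, "alpha", 42)                         */
        /*   ROW(colA, ID_BETA,  "beta",  17)                         */

        #define ROW(a, id, name, value) id,   /* keep only "column B" */
        enum ID
        {
            ID_HEADER = 0,
        #include "DATA.CSV"
            ID_LIMIT
        };
        #undef ROW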


  • Likelihood of IOError during print vs. write

    - by jkasnicki
    I recently encountered an IOError writing to a file on NFS. There wasn't a disk-space or permission issue, so I assume this was just a network hiccup. The obvious solution is to wrap the write in a try-except, but I was curious whether the implementations of print and write in Python make either of the following more or less likely to raise IOError:

        f_print = open('print.txt', 'w')
        print >>f_print, 'test_print'
        f_print.close()

    vs.

        f_write = open('write.txt', 'w')
        f_write.write('test_write\n')
        f_write.close()

    (If it matters, this is specifically Python 2.4 on Linux.)
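
    For reference, a sketch of the try-except wrapper mentioned above, in Python 2.4 syntax (try/except/finally cannot be combined in one statement until 2.5):

        f_write = open('write.txt', 'w')
        try:
            try:
                f_write.write('test_write\n')
            except IOError, e:
                print 'write failed:', e   # handle or retry here
        finally:
            f_write.close()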


  • How do I force a file to be deleted? (Windows Server 2008)

    - by acidzombie24
    On my site a user may upload a file (pic, zip, audio, video, whatever). He may then decide to replace it with a newer revision. This user may upload a file, make a post, then decide to put up a new revision replacing the old one (let's say it's a large zip or tar.gz file). There's a good chance people may be downloading it if he sent out an email, or even an IM for the home user. Problem: I need to replace the file while people may be downloading it, and it may be some minutes before it can be deleted. I don't want my code to stall until I can delete it, or to check every second to see if it's unused (especially bad if another user can start downloading and take a long time, creating a cycle). How do I delete the file while users are downloading it? I don't care if their downloads stop; I just care that the file can be replaced and new downloads get the new revision.


  • VB6: Slow Binary Write?

    - by Tom the Junglist
    Wondering why a particular binary write operation in VB is so slow. The function reads a Byte array from memory and dumps it into a file like this:

        Open Destination For Binary Access Write As #1
        Dim startP, endP As Long
        startP = BinaryStart
        endP = UBound(ReadBuf) - 1
        Dim i As Integer
        For i = startP To endP
            DoEvents
            Put #1, (i - BinaryStart) + 1, ReadBuf(i)
        Next
        Close #1

    For two megabytes on a slower system, this can take up to a minute. Can anyone tell me why this is so slow?
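
    Two likely culprits are visible in the loop itself: DoEvents pumps the message queue on every single iteration, and Put writes one byte per call. A sketch that writes the whole array with a single Put (assuming ReadBuf is a Byte array; copy out the slice first if only part of it should go to disk):

        Open Destination For Binary Access Write As #1
        Put #1, 1, ReadBuf   ' one Put call writes the entire array
        Close #1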


  • .NET: Best way to move many files to and from various directories?

    - by Dan
    I've created a program that moves files to and from various directories. An issue I've come across is that when you're trying to move a file and some other program is still using it, you get an error. Leaving the file there isn't an option, so I can only think of trying to move it over and over again. This slows the entire program down, though, so I create a new thread, let it deal with the problem file, and move on to the next one. The bigger problem is when there are too many of these problem files: the program ends up with so many threads trying to move them that it crashes with some kernel.dll error. Here's a sample of the code I use to move the files:

        Public Sub MoveIt()
            Try
                File.Move(_FileName, _CopyToFileName)
            Catch ex As Exception
                Threading.Thread.Sleep(5000)
                MoveIt()
            End Try
        End Sub

    As you can see, I try to move the file, and if it errors, I wait and move it again, over and over. I've tried using FileInfo as well, but that crashes WAY sooner than just using the File object. So has anyone found a foolproof way of moving files without it ever erroring? Note: it takes a lot of files to make it crash. It'll be fine over the weekend, but by the end of the day on Monday, it's done.
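
    Each retry here recurses, so a file that never frees up grows the call stack without bound, and every problem file holds a thread forever. A sketch of a bounded, iterative retry (the limit of 10 attempts is an arbitrary assumption):

        Public Sub MoveIt()
            For attempt As Integer = 1 To 10
                Try
                    File.Move(_FileName, _CopyToFileName)
                    Return              ' success
                Catch ex As IOException
                    Threading.Thread.Sleep(5000)  ' file still in use; wait and retry
                End Try
            Next
            ' still failing after 10 attempts: log it and revisit later
        End Sub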


  • Need data on disk drive management by OS: getting base I/O unit size, “sync” option, Direct Memory Access

    - by Richard T
    Hello All, I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of (any others?) to be concerned about are:

    1. I/O size: the database engine's and the disk's native sizes should either match, or the database's native I/O size should be a multiple of the disk's native I/O size.
    2. Disks that are capable of Direct Memory Access (e.g., IDE) should be configured for it.
    3. When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it.

    I have been looking for information on how to ensure these are so for CentOS and Ubuntu, but can't seem to find anything at all! I want to be able to check these things and change them if needed. Any and all input appreciated.
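
    A sketch of where to start probing on Linux, with /dev/sda as an assumed device (all three are standard utilities; run as root):

        # native block I/O size as the kernel sees it
        blockdev --getbsz /dev/sda

        # DMA capability and the mode currently selected
        hdparm -I /dev/sda | grep -i dma

        # query the drive's write cache; hdparm -W0 /dev/sda disables it
        hdparm -W /dev/sda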


  • Why do some programs write their output to stderr instead of stdout?

    - by Zagorax
    I've recently added an ssh-add command to my .bashrc file. I found that

        ssh-add $HOME/.ssh/id_rsa_github > /dev/null

    results in a message ("identity added", and something else) every time I open a shell, while

        ssh-add $HOME/.ssh/id_rsa_github > /dev/null 2>&1

    did the trick, and my shell is now 'clean'. Reading on the internet, I found that other commands do this too (for example, time). Could you please explain why it's done?


  • How do I read and write to a file using threads in java?

    - by WarmWaffles
    I'm writing an application where I need to read blocks from a single file, each block roughly 512 bytes, while simultaneously writing blocks. One of the ideas I had was BlockReader implements Runnable, BlockWriter implements Runnable, and a BlockManager that manages both the reader and the writer. The problem I'm seeing with most examples I've found is locking problems and potential deadlock situations. Any ideas how to implement this?
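
    One lock-light sketch: share a single FileChannel and use its positioned read/write overloads, which take an explicit file offset instead of moving a shared file pointer (the class and method names here are illustrative, not from the question):

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;

        class BlockManager {
            private static final int BLOCK_SIZE = 512;
            private final FileChannel channel;

            BlockManager(String path) throws IOException {
                channel = new RandomAccessFile(path, "rw").getChannel();
            }

            byte[] readBlock(long index) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(BLOCK_SIZE);
                channel.read(buf, index * BLOCK_SIZE);    // positioned read: no shared pointer
                return buf.array();
            }

            void writeBlock(long index, byte[] data) throws IOException {
                channel.write(ByteBuffer.wrap(data), index * BLOCK_SIZE);  // positioned write
            }
        }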


  • C read part of file into cache

    - by Pete Jodo
    I have to write a program (for Linux) where there's an extremely large index file, and I have to search and interpret the data from the file. The catch is that I'm only allowed to have x bytes of the file cached at any time (x is given as an argument), so I have to remove data from the cache if it's not what I'm looking for. If my understanding is correct, fopen with "r" doesn't put anything in the cache; only when I call getc or fread (specifying a size) does data get cached. So my question is: let's say I use fread and read 100 bytes, but after checking them, only 20 of the 100 bytes contain the data I need. How would I remove the useless 80 bytes from the cache (or overwrite them) in order to read more from the file?
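
    If the cache in question is stdio's own user-space buffer, its size can be capped with setvbuf immediately after opening the stream; the OS page cache is separate and managed by the kernel, not by fread or getc. A sketch (the file name is an assumption; x is the byte limit from the program's argument):

        #include <stdio.h>
        #include <stdlib.h>

        FILE *f = fopen("index.dat", "r");
        char *buf = malloc(x);
        setvbuf(f, buf, _IOFBF, x);   /* fully buffered, at most x bytes held */

        /* stdio refills the x-byte buffer as reads consume it; bytes you are
           done with are simply overwritten by the next refill */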


  • Relative path issue with a .NET Windows service?

    - by Amitabh
    I have a Windows service that is trying to access an XML file from the application directory.

    Windows service install directory: C:\Services\MyService\MyService.exe
    Path of the XML file: C:\Services\MyService\MyService.xml

    I am trying to access the file using the following code:

        using (FileStream stream = new FileStream("MyService.xml", FileMode.Open, FileAccess.Read))
        {
            // Read file
        }

    I get the following error: "Can not find file : C:\WINDOWS\system\MyService.xml". My service is running under the Local System account, and I don't want to use an absolute path.
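
    Windows services start with their current directory pointing at the system directory, so relative paths resolve there rather than next to the executable. A sketch that anchors the path to the service's own install directory:

        // resolve against the directory the service assembly was loaded from,
        // not the process's current working directory
        string baseDir = AppDomain.CurrentDomain.BaseDirectory;
        string xmlPath = System.IO.Path.Combine(baseDir, "MyService.xml");

        using (FileStream stream = new FileStream(xmlPath, FileMode.Open, FileAccess.Read))
        {
            // Read file
        }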


  • Lib to read a DVD FS (data disc)

    - by acidzombie24
    I am thinking I might want to port a lib to read a DVD filesystem. I am not talking about movies but data discs. There's existing code for me to do raw reads from the disc; I need code that requests this data and allows me to browse the files on the disc. What lib can I use for this? -edit- NOTE: I am using OS-less hardware. People seem to miss that, but Alnitak caught it and gave me a great answer :)


  • Read file structure into an array, but only specific files.

    - by dmackerman
    I have a directory structure that looks like this:

        /expandables
            - folder
            - folder
            - folder
            - folder
                - BannerInfo.txt
                - index.html

    Each one of the folders has the same exact structure: one file named BannerInfo.txt and one index.html. There are about 250 of these folders, if that matters. I want to loop through these folders and store each of the index.html files into an array. Inside each index.html is just some simple HTML and JavaScript, which I want to read into a string to be displayed later on. I'm struggling with how to filter out only the index.html file from the individual folders. The purpose of this is that I want to randomly select an index.html file and put its contents into a textarea. I thought I could do a simple array_rand() on the returned array and spit out the string. Any ideas?
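
    A sketch using glob(), whose wildcard pattern matches only each folder's index.html (the base path is an assumption):

        <?php
        // one index.html per folder, one level below /expandables
        $files = glob('/path/to/expandables/*/index.html');

        $contents = array();
        foreach ($files as $file) {
            $contents[] = file_get_contents($file);
        }

        // pick one at random for the textarea
        $random = $contents[array_rand($contents)];
        echo '<textarea>' . htmlspecialchars($random) . '</textarea>';
        ?>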

