Search Results

Search found 60836 results on 2434 pages for 'system io directory'.


  • Seeking to a line in a file in g++

    - by Phenom
    Is there a way that I can seek to a certain line in a file to read or write data? Let's say I want to write some data starting on the 10th line in a text file. There might be some data already in the first few lines, or the file could even be empty. Is there a way I can seek directly to the line I want without having to worry about what's already in the file?

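    The short answer is no: a text file carries no line index, so "line 10" can only be found by scanning for newlines from the start, and writing at that offset overwrites bytes in place rather than inserting (inserting means rewriting the tail of the file). A minimal C++ sketch of the scan:

        #include <fstream>
        #include <string>

        // Returns the byte offset where the 1-based `line` begins,
        // or -1 if the file has fewer lines than that.
        std::streampos seek_to_line(std::istream& in, int line)
        {
            in.seekg(0);
            std::string skipped;
            for (int i = 1; i < line; ++i)
                if (!std::getline(in, skipped))
                    return -1;
            return in.tellg();
        }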

  • How does including a .csv work in an enum?

    - by Tommy
        enum ID // IDs
        {
            ID_HEADER = 0, // ID 0 = headers
        #include "DATA.CSV"
            ID_LIMIT
        };

    I inherited some code here... Looking at "DATA.CSV" I see all the IDs used to populate the enum in column B, along with other data. My question: how does the enum know that it is using "column B" to retrieve its members? There must be some other logic in the application, yet I don't see it. What else should I look for? Thanks.

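    #include is verbatim text substitution done by the preprocessor, so there is no column logic anywhere in the compiler. For this to build, DATA.CSV itself must be shaped so that, read as raw C++, only the column-B cells come through as enumerator names. A hypothetical layout that would behave that way:

        // Hypothetical DATA.CSV, laid out so the non-B columns sit inside
        // C comments and only column B survives as identifiers:
        //
        //   /* row 1 */ ID_ALPHA, /* 42, "alpha" */
        //   /* row 2 */ ID_BETA,  /* 17, "beta"  */
        //
        // After preprocessing, the enum body the compiler sees is just:
        //
        //   ID_HEADER = 0, ID_ALPHA, ID_BETA, ID_LIMIT

    So the thing to look for is not more application logic but the tool or macro discipline that generates/maintains DATA.CSV in that include-safe shape.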

  • Why do some programs write their output to stderr instead of stdout?

    - by Zagorax
    I've recently added an ssh-add command to my .bashrc file. I found that

        ssh-add $HOME/.ssh/id_rsa_github > /dev/null

    still results in an "identity added" message every time I open a shell, while

        ssh-add $HOME/.ssh/id_rsa_github > /dev/null 2>&1

    did the trick, and my shell is now 'clean'. Reading on the internet, I found that other commands do it too (for example, time). Could you please explain why it's done?

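    By convention, stdout carries the program's real output, the data another command might consume through a pipe, while status and diagnostic messages go to stderr so they stay visible (or separately redirectable) even when stdout is captured. ssh-add's "Identity added" notice is a diagnostic, which is exactly why redirecting stdout alone didn't silence it. For illustration:

        # stdout carries data, stderr carries diagnostics:
        ssh-add "$HOME/.ssh/id_rsa_github" > /dev/null         # hides the data stream only
        ssh-add "$HOME/.ssh/id_rsa_github" > /dev/null 2>&1    # hides diagnostics too
        ssh-add "$HOME/.ssh/id_rsa_github" 2> errors.log       # keep stdout, log stderr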

  • C read part of file into cache

    - by Pete Jodo
    I have to write a program (for Linux) where there's an extremely large index file and I have to search and interpret the data from it. The catch is that I'm only allowed to have x bytes of the file cached at any time (x is given as an argument), so I have to evict data from the cache if it's not what I'm looking for. If my understanding is correct, fopen with mode "r" doesn't put anything in the cache; only when I call getc or fread (specifying a size) does the data get cached. So my question is: let's say I use fread and read 100 bytes, but after checking them, only 20 of the 100 bytes contain data I need. How would I remove (or overwrite) the useless 80 bytes in the cache in order to read more from the file?

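    Two details first: fopen() itself reads nothing (stdio fills its buffer on the first getc()/fread()), and setvbuf() can cap stdio's internal buffer so it doesn't blow the x-byte budget. For the explicit cache, a sketch like this keeps the useful bytes and refills the rest; the function and layout are illustrative, not a standard API:

        #include <stdio.h>
        #include <string.h>

        /* Keep cache[keep_off .. keep_off+keep_len) and refill the rest
         * from the file; returns the number of valid bytes now cached. */
        size_t refill(FILE *f, char *cache, size_t cap,
                      size_t keep_off, size_t keep_len)
        {
            memmove(cache, cache + keep_off, keep_len);  /* drop stale bytes */
            size_t got = fread(cache + keep_len, 1, cap - keep_len, f);
            return keep_len + got;
        }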

  • [c++] How to create a std::ofstream to a temp file?

    - by dehmann
    Okay, mkstemp is the preferred way to create a temp file in POSIX. But it opens the file and returns an int, which is a file descriptor. From that I can only create a FILE*, not an std::ofstream, which I would prefer in C++. (Apparently, on AIX and some other systems you can create an std::ofstream from a file descriptor, but my compiler complains when I try that.) I know I could get a temp file name with tmpnam and then open my own ofstream with it, but that's apparently unsafe due to race conditions, and it results in a compiler warning (g++ v3.4 on Linux):

        warning: the use of `tmpnam' is dangerous, better use `mkstemp'

    So, is there any portable way to create an std::ofstream to a temp file?

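    One pattern that is portable across POSIX systems: let mkstemp() create the file (atomically, so none of tmpnam's race), then open an ofstream on the name that now safely exists. A minimal sketch - the template path is illustrative, and returning the stream by value needs C++11:

        #include <cstdlib>     // mkstemp
        #include <fstream>
        #include <stdexcept>
        #include <string>
        #include <unistd.h>    // close

        std::ofstream open_temp(std::string& name)
        {
            char tmpl[] = "/tmp/myapp-XXXXXX";      // illustrative template
            int fd = mkstemp(tmpl);                 // creates + opens atomically
            if (fd == -1) throw std::runtime_error("mkstemp failed");
            close(fd);                              // keep the file, drop the fd
            name = tmpl;
            return std::ofstream(name.c_str());     // reopen the now-existing name
        }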

  • How do I read and write to a file using threads in Java?

    - by WarmWaffles
    I'm writing an application where I need to read blocks from a single file, each block roughly 512 bytes, while simultaneously writing blocks. One idea I had was a BlockReader implements Runnable and a BlockWriter implements Runnable, with a BlockManager managing both the reader and the writer. The problem I see with most examples I've found is locking problems and potential deadlock situations. Any ideas how to implement this?

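    One way to avoid shared locks altogether, sketched below with illustrative names: FileChannel's positional read and write take an explicit file offset, so a reader thread and a writer thread can work on different 512-byte blocks concurrently without a shared seek position to fight over.

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;

        class BlockFile {
            static final int BLOCK_SIZE = 512;
            private final FileChannel channel;

            BlockFile(String path) throws IOException {
                channel = new RandomAccessFile(path, "rw").getChannel();
            }

            byte[] readBlock(long index) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(BLOCK_SIZE);
                channel.read(buf, index * BLOCK_SIZE);   // positional: no shared seek
                return buf.array();
            }

            void writeBlock(long index, byte[] block) throws IOException {
                channel.write(ByteBuffer.wrap(block), index * BLOCK_SIZE);
            }
        }

    Deadlock only enters the picture if two threads take multiple locks in different orders; with per-call positional I/O there is no lock to order.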

  • Preventing threads from writing to the same file

    - by EpsilonVector
    I'm implementing an FTP-like protocol in Linux kernel 2.4 (homework), and I was under the impression that if a file is open for writing, any subsequent attempt to open it by another thread should fail. That held until I actually tried it and discovered the second open goes through. How do I prevent this from happening? PS: I'm using open() to open the file.

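    That impression doesn't hold on Unix: open() deliberately permits any number of simultaneous writers, and exclusion is opt-in through advisory locks that every cooperating process must take (Linux does have mandatory locking, but it is rarely enabled). A minimal userspace sketch with flock():

        #include <fcntl.h>
        #include <sys/file.h>
        #include <unistd.h>

        /* Open for writing, but back off if another holder has the lock. */
        int open_exclusive(const char *path)
        {
            int fd = open(path, O_WRONLY);
            if (fd == -1) return -1;
            if (flock(fd, LOCK_EX | LOCK_NB) == -1) {   /* already locked */
                close(fd);
                return -1;
            }
            return fd;    /* the lock is released when the fd is closed */
        }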

  • File System Types in .Net

    - by Avi
    I don't get the abstractions and the terminology :-( For example, DirectoryInfo.FullName is defined as the full path of the directory or file, but it's a string! So are DirectoryInfo.Name, FileInfo.FullName, Path.GetDirectoryName and so on. This means that in .Net there is no "depth" (or "meat" - my English isn't so good) to the file system objects. There's no protection from a type system. I can't, for example, define two Path objects and ask if one of them is "above" the other - I have to manipulate the strings. I can't differentiate between a Path that identifies a directory and a Path that identifies a file. I can't do anything :-( Just manipulate strings. Is this correct, or am I simply missing something? If correct, are there any alternatives?

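    Correct - System.IO models paths as plain strings, and DirectoryInfo/FileInfo are thin wrappers over them. The typed abstraction being asked for has to be hand-rolled or pulled from a third-party library; a hypothetical sketch of the idea:

        using System;
        using System.IO;

        // Hypothetical value type: a directory path with structure, not a string.
        struct DirPath
        {
            public readonly string Value;
            public DirPath(string value) { Value = Path.GetFullPath(value); }

            // True if `other` lies somewhere underneath this directory.
            public bool IsAbove(DirPath other)
            {
                return other.Value.StartsWith(
                    Value + Path.DirectorySeparatorChar,
                    StringComparison.OrdinalIgnoreCase);
            }
        }

    A sibling FilePath type would give the directory/file distinction the framework's string-based API doesn't enforce.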

  • Reading a large text file to memory in C++

    - by NoneType
    Is there a way to read a large text file (~60MB) into memory at once (like a compiler flag to increase the program's memory limit)? Currently, the program segfaults around ifstream's open call while trying to read this file:

        ifstream fis;
        fis.open("my_large_file.txt"); // Segfaults here

    The file just consists of rows of the form

        number_1<tabspace>number_2

    i.e., two numbers separated by a tabspace.

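    No special flag is needed: 60 MB fits comfortably in a std::string or std::vector, and opening a stream reads nothing by itself, so a segfault "at open" usually points at the code just after it. A minimal whole-file read:

        #include <fstream>
        #include <sstream>
        #include <string>

        std::string read_whole_file(const char* path)
        {
            std::ifstream in(path, std::ios::binary);
            std::ostringstream buf;
            buf << in.rdbuf();      // pulls the entire file into memory
            return buf.str();
        }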

  • Which consumes fewer resources: opening a text file or making an SQL query, both a thousand times?

    - by imin
    Hi, I have a PHP website which displays recipes - www.trymasak.my, to be exact. The recipes displayed on the index page are updated about once a day. To get the latest recipes, I just use a MySQL query, something like

        select recipe_name, page_views, image from table order by last_updated

    So if I get 10,000 visitors a day, obviously the query is run 10,000 times a day. A friend told me a better way (in terms of reducing server load): when I update the recipes, I put the latest recipe details (names, images, etc.) into a text file, and my page gets its data from that file instead of running the same query 10,000 times. Is his suggestion really better? If yes, which PHP commands are best for opening, reading and closing the text file? Thanks.

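    For a read-heavy page the file cache usually wins, since it skips the database connection and query entirely; file_get_contents() does the open/read/close in one call. A hypothetical sketch (table and cache-file names invented) of rebuilding the cache on update and serving reads from it:

        <?php
        // Serve the cache if present; otherwise rebuild it from the database.
        function latest_recipes(PDO $db, $cacheFile) {
            if (is_readable($cacheFile)) {
                return json_decode(file_get_contents($cacheFile), true);
            }
            $rows = $db->query('SELECT recipe_name, page_views, image
                                  FROM recipes ORDER BY last_updated DESC')
                       ->fetchAll(PDO::FETCH_ASSOC);
            file_put_contents($cacheFile, json_encode($rows), LOCK_EX);
            return $rows;
        }

    Deleting the cache file from the once-a-day update job is what keeps the two in sync.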

  • USB device Set Attribute in C#

    - by p19lord
    I have this bit of code:

        DriveInfo[] myDrives = DriveInfo.GetDrives();
        foreach (DriveInfo myDrive in myDrives)
        {
            if (myDrive.DriveType == DriveType.Removable)
            {
                DirectoryInfo mydir = myDrive.RootDirectory;
                foreach (FileInfo file in mydir.GetFiles())
                {
                    // clear the flags rather than setting their complement
                    file.Attributes &= ~(FileAttributes.Hidden | FileAttributes.ReadOnly);
                }
                foreach (DirectoryInfo dir in mydir.GetDirectories())
                {
                    dir.Attributes &= ~(FileAttributes.Hidden | FileAttributes.ReadOnly);
                }
            }
        }

    I have a problem: the code tries the floppy disk drive first, and because there is no floppy disk in it, it throws the error "The device is not ready." How can I prevent that?

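    DriveInfo exposes an IsReady property for exactly this situation; a guard like this sketch skips drives with no media in them:

        if (myDrive.DriveType == DriveType.Removable && myDrive.IsReady)
        {
            // only reached when media is actually present
        }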

  • (C++) Loading a file into a vector

    - by Alden
    This is probably a simple question, however I am new to C++ and I cannot figure this out. I am trying to load a binary file and load each byte into a vector. This works fine with a small file, but when I try to read more than 410 bytes the program crashes and says: "This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information." I am using Code::Blocks on Windows. This is the code:

        #include <iostream>
        #include <fstream>
        #include <vector>

        using namespace std;

        int main()
        {
            std::vector<char> vec;
            std::ifstream file;
            file.exceptions(std::ifstream::badbit
                          | std::ifstream::failbit
                          | std::ifstream::eofbit);
            file.open("file.bin");
            file.seekg(0, std::ios::end);
            std::streampos length(file.tellg());
            if (length) {
                file.seekg(0, std::ios::beg);
                vec.resize(static_cast<std::size_t>(length));
                file.read(&vec.front(), static_cast<std::size_t>(length));
            }
            int firstChar = static_cast<unsigned char>(vec[0]);
            cout << firstChar << endl;
            return 0;
        }

    Thank you for your help!

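    Two likely culprits, sketched below: the file is opened in text mode, so on Windows a 0x1A byte is treated as end-of-file and CRLF pairs shrink, making read() come up short; and with failbit/eofbit in the exception mask, that short read throws (the "unusual way" message is an unhandled exception escaping main). A probable fix:

        file.open("file.bin", std::ios::binary);   // read bytes untranslated
        file.exceptions(std::ifstream::badbit);    // don't throw on EOF/short reads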

  • How do I force a file to be deleted? (Windows Server 2008)

    - by acidzombie24
    On my site a user may upload a file (pic, zip, audio, video, whatever) and later decide to replace it with a newer revision. This user may upload a file, make a post, then put up a new revision replacing the old (let's say it's a large zip or tar.gz file). There's a good chance people are downloading it if he sent out an email or even an IM. The problem: I need to replace the file while people may be downloading it, and it may be some minutes before it can be deleted. I don't want my code to stall until it can delete, or to check every second to see if it's unused (especially bad if another replacement can start before the first finishes, creating a cycle). How do I delete the file while users are downloading it? I don't care if their downloads stop; I just care that the file can be replaced and new downloads get the new revision.

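    One approach on Windows, sketched here with illustrative names (newRevisionPath is hypothetical): it only works if the download code opens the file with FileShare.Delete, which lets the in-use file be renamed or deleted while handles remain open; the new revision then lands under the original name.

        // Serve downloads with delete/rename-friendly sharing:
        var stream = new FileStream(path, FileMode.Open, FileAccess.Read,
                                    FileShare.Read | FileShare.Delete);

        // Replacing: move the live file aside, then install the new revision.
        File.Move(path, path + ".old-" + Guid.NewGuid().ToString("N"));
        File.Move(newRevisionPath, path);
        // sweep up the ".old-" files later, once their last handles close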

  • Python to extract data from a file

    - by user297003
    Hi, I am new to Python. I have a file made up of sections separated by "----" lines, and I am trying to extract only the sections that contain a specific marker. The file:

        ----
        data1
        data1
        data1
        extractme
        ----
        data2
        data2
        data2
        ----
        data3
        data3
        extractme
        ----

    and I want to dump the matching sections to a text file, so the output is:

        ----
        data1
        data1
        data1
        extractme
        ----
        data3
        data3
        extractme
        ----

    Thanks for the help.

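    A minimal sketch (file and function names illustrative): split the text on the delimiter lines, keep only the chunks containing the marker, and write those back out with the delimiters restored.

        def extract_sections(src, dst, marker='extractme', sep='----'):
            with open(src) as fin:
                sections = fin.read().split(sep + '\n')
            with open(dst, 'w') as fout:
                for s in sections:
                    if marker in s:
                        fout.write(sep + '\n' + s)
                fout.write(sep + '\n')   # closing delimiter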

  • How to pass a file (read from Java) most effectively to a native method?

    - by soc
    Hi, I have approx. 30000 files (1MB each) which I want to pass into a native method, which requires just a byte array and its size as arguments. I looked through some examples and benchmarks (like http://nadeausoftware.com/articles/2008/02/java_tip_how_read_files_quickly) but all of them do some other fancy things. Basically I don't care about the contents of the file; I don't want to access anything in the file or the byte array, or do anything else with it. I just want to get a file into a native method which accepts a byte array, as fast as possible. At the moment I'm using RandomAccessFile, but that's horribly slow (10MB/s). Is there anything like

        byte[] readTheWholeFile(File file) { ... }

    which I could put into

        native void fancyCMethod(readTheWholeFile(myFile), myFile.length())

    What would you suggest?

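    For one big sequential gulp, a single bulk read into a preallocated array is hard to beat; 10 MB/s from RandomAccessFile suggests it is being read in tiny pieces. A minimal sketch (method name taken from the question, class name invented):

        import java.io.DataInputStream;
        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;

        class FileSlurper {
            static byte[] readTheWholeFile(File file) throws IOException {
                byte[] data = new byte[(int) file.length()];
                DataInputStream in = new DataInputStream(new FileInputStream(file));
                try {
                    in.readFully(data);    // one bulk read into the target array
                } finally {
                    in.close();
                }
                return data;
            }
        }

    For JNI specifically, a direct ByteBuffer (ByteBuffer.allocateDirect) can save one more copy across the native boundary, at the cost of a less convenient API.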

  • Reading a file into multiple byte[] arrays

    - by hankol
    I have an encryption algorithm (AES) that accepts a file converted to a byte array and encrypts it. Since I am going to process very big files, the JVM may go out of memory. I am planning to read the file into multiple byte arrays, each containing some part of the file, then iteratively feed them to the algorithm, and finally merge the results to produce the encrypted file. So my question is: is there any way to read a file part by part into multiple byte arrays? I thought I could read the whole file into a byte array with

        IOUtils.toByteArray(InputStream input)

    and then split the array into multiple arrays using

        Arrays.copyOfRange()

    But I am afraid that the first call, which reads the whole file into one byte array, will itself make the JVM go out of memory. Any suggestions please? Thanks.

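    Yes: read fixed-size chunks in a loop so only one chunk is ever resident, rather than materializing the whole file and splitting it afterwards. A minimal sketch (the feedAlgorithm call is a hypothetical stand-in for the real AES step):

        import java.io.FileInputStream;
        import java.io.IOException;

        class ChunkedEncryptor {
            void encryptFile(String path) throws IOException {
                FileInputStream in = new FileInputStream(path);
                try {
                    byte[] chunk = new byte[1 << 20];      // 1 MB per pass
                    int n;
                    while ((n = in.read(chunk)) != -1) {
                        feedAlgorithm(chunk, n);           // hypothetical AES step
                    }
                } finally {
                    in.close();
                }
            }

            void feedAlgorithm(byte[] buf, int len) { /* AES over buf[0..len) */ }
        }

    Note that javax.crypto's CipherInputStream/CipherOutputStream package this same streaming pattern around a Cipher, which spares you the manual chunk bookkeeping.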

  • .NET: Best way to move many files to and from various directories?

    - by Dan
    I've created a program that moves files to and from various directories. An issue I've come across is that when you try to move a file some other program is still using, you get an error. Leaving the file there isn't an option, so I can only think of trying to move it over and over again. This slows the entire program down, so I create a new thread, let it deal with the problem file, and move on to the next. The bigger problem is when there are too many of these problem files: the program ends up with so many threads trying to move them that it crashes with some kernel.dll error. Here's a sample of the code I use to move the files:

        Public Sub MoveIt()
            Try
                File.Move(_FileName, _CopyToFileName)
            Catch ex As Exception
                Threading.Thread.Sleep(5000)
                MoveIt()
            End Try
        End Sub

    As you can see, I try to move the file, and if that errors, I wait and move it again, over and over. I've tried using FileInfo as well, but that crashes WAY sooner than just using the File object. So has anyone found a foolproof way of moving files without it ever erroring? Note: it takes a lot of files to make it crash. It'll be fine over the weekend, but by the end of the day on Monday, it's done.

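    The crash pattern fits the recursion: every failed retry adds a stack frame, so a file stuck for hours overflows the stack, and a thread per stuck file multiplies the damage. A loop with a bounded retry count, sketched here, removes both failure modes:

        Public Function TryMove(source As String, dest As String, maxAttempts As Integer) As Boolean
            For attempt As Integer = 1 To maxAttempts
                Try
                    File.Move(source, dest)
                    Return True
                Catch ex As IOException
                    Threading.Thread.Sleep(5000)
                End Try
            Next
            Return False ' give up and queue the file for a later sweep
        End Function

    Files that exhaust their attempts can go onto a single retry queue serviced by one worker, instead of each pinning its own thread.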

  • Random loss of precision in Python readline()

    - by jackyouldon
    Hi all, we have a process which takes a very large csv (1.6GB) and breaks it down into pieces (in this case 3). This runs nightly and normally doesn't give us any problems. When it ran last night, however, the first of the output files had lost precision on the numeric fields in the data. The active ingredient in the script is the lines:

        while lineCounter <= chunk:
            oOutFile.write(oInFile.readline())
            lineCounter = lineCounter + 1

    The normal output might be something like

        StringField1; StringField2; StringField3; StringField4; 1000000; StringField5; 0.000054454

    etc. On this one occasion, and in this one output file, the numeric fields were all output with six zeros at the end, i.e.

        StringField1; StringField2; StringField3; StringField4; 1000000.000000; StringField5; 0.000000

    We are using Python v2.6 (and don't want to upgrade unless we really have to), but we can't afford to lose this data. Does anyone have any idea why this might have happened? If readline is doing some kind of implicit conversion, is there a way to do a binary read, because we really just want this data to pass through untouched? It is very weird to us that this only affected one of the output files generated by the same script, and when it was rerun the output was as expected. Thanks, Jack

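    readline() does no numeric conversion; it returns each line's bytes exactly as stored, so the trailing zeros almost certainly came from whatever produced the source file that night. To rule the copy step out completely, do the pass-through in binary mode, which guarantees byte-for-byte fidelity (variable names follow the script):

        oInFile = open(in_path, 'rb')     # binary mode: bytes in...
        oOutFile = open(out_path, 'wb')   # ...bytes out, untouched
        for _ in xrange(chunk):
            oOutFile.write(oInFile.readline())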
