Search Results

Search found 5919 results on 237 pages for 'io priority'.


  • Reading a file N lines at a time in Ruby

    - by Sam
    I have a large file (hundreds of megabytes) that consists of filenames, one per line. I need to loop through the list of filenames and fork off a process for each one. I want a maximum of 8 forked processes at a time, and I don't want to read the whole filename list into RAM at once. I'm not even sure where to begin; can anyone help me out?
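
    A minimal sketch of the usual pattern, shown in Python since the approach is language-agnostic: iterate the file object so lines are streamed rather than slurped, and keep a small list of live children to cap concurrency at 8. The process_one command is a hypothetical stand-in for the real worker.

        import subprocess
        import time

        MAX_CHILDREN = 8

        def process_all(list_path, cmd="process_one"):  # hypothetical worker command
            running = []
            with open(list_path) as f:  # iterating a file object streams it line by line
                for line in f:
                    name = line.strip()
                    if not name:
                        continue
                    # at the cap: reap finished children until a slot opens
                    while len(running) >= MAX_CHILDREN:
                        running = [p for p in running if p.poll() is None]
                        if len(running) >= MAX_CHILDREN:
                            time.sleep(0.05)
                    running.append(subprocess.Popen([cmd, name]))
            for p in running:  # wait for the stragglers
                p.wait()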


  • Multiple actions upon a case statement in Haskell

    - by Schroedinger
    One last question for the evening: I'm building the main input function of my Haskell program, and I have to check the args that are brought in, so I use

        args <- getArgs
        case length args of
          0         -> putStrLn "No Arguments, exiting"
          otherwise -> { other methods here }

    Is there an intelligent way of setting up the other methods, or is it in my best interest to write a function that the other case is thrown to within main? Or is there an even better solution to the issue of cases? I've just got to take in one name.


  • "Unable to open file", when the program tries to open file in /proc

    - by tristartom
    Hi, I try to read the file /proc/'pid'/status using a C++ program. The code is as follows, and even when I use sudo to run it, it still keeps printing "Unable to open file". Please let me know if you have any ideas on how to fix this. Thanks, Richard

        ...
        int main (int argc, char* argv[])
        {
            string line;
            char* fileLoc;
            if (argc != 2) {
                cout << "a.out file_path" << endl;
                fileLoc = "/proc/net/dev";
            } else {
                sprintf(fileLoc, "/proc/%d/status", atoi(argv[1]));
            }
            cout << fileLoc << endl;
            ifstream myfile (fileLoc);
            if (myfile.is_open()) {
                while (! myfile.eof()) {
                    getline (myfile, line);
                    cout << line << endl;
                }
                myfile.close();
            }
            else
                cout << "Unable to open file";
            return 0;
        }


  • C# DriveInfo FileInfo

    - by maxfridbe
    This may seem like a stupid question, so here goes: other than parsing the FileInfo.FullPath string for the drive letter and then using DriveInfo("c") etc. to see whether there is enough space to write this file, is there a way to get the drive letter from FileInfo?


  • How to use C to write to a flash drive boot sector despite the error 'Failed to open file to write.:Permission Denied'

    - by updateraj
    My goal is to manipulate the boot sector of my flash drive (volume E:). I am using XP. I am able to read the boot sector:

        FILE *fp_read = fopen("\\\\.\\E:", "rb");   /* able to proceed to read the boot sector */

    However, I am not able to open the volume for writing using fopen in "wb" mode:

        FILE *fp_write = fopen("\\\\.\\E:", "wb");  /* fails with "Failed to open file to write.:Permission Denied" */

    The flash drive is not in use at the moment of execution. Hex editors are able to manipulate the boot sector, so I believe it is possible to do so in C. Any suggestion or insight to overcome the access problem so as to be able to write?


  • Serving files using Django - is this a security vulnerability?

    - by Tom Tom
    I'm using the following code to serve uploaded files from a login-secured view in a Django app. Do you think there is a security vulnerability in this code? I'm a bit concerned that the user could place arbitrary strings in the URL after upload/, and that this is mapped directly to the local filesystem. Actually, I don't think it is a vulnerability, since access to the filesystem is restricted to files in the folder defined by the UPLOAD_LOCATION setting, which is not a publicly available folder on the webserver.

        url(r'^upload/(?P<file_url>[/,.,\s,_,\-,\w]+)',
            'aeon_infrastructure.views.serve_upload_files', name='project_detail'),

        @login_required
        def serve_upload_files(request, file_url):
            import os.path
            import mimetypes
            mimetypes.init()
            try:
                file_path = settings.UPLOAD_LOCATION + '/' + file_url
                fsock = open(file_path, "r")
                file_name = os.path.basename(file_path)
                file_size = os.path.getsize(file_path)
                print "file size is: " + str(file_size)
                mime_type_guess = mimetypes.guess_type(file_name)
                if mime_type_guess is not None:
                    response = HttpResponse(fsock, mimetype=mime_type_guess[0])
                    response['Content-Disposition'] = 'attachment; filename=' + file_name
                    #response.write(file)
            except IOError:
                response = HttpResponseNotFound()
            return response
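
    One detail worth checking before calling this safe: the capture group admits both dots and slashes, so a request like upload/../../etc/passwd matches the pattern yet resolves outside UPLOAD_LOCATION. A common guard, sketched below with hypothetical names, is to canonicalize the joined path and confirm it still sits under the upload root before opening it:

        import os.path

        def safe_upload_path(upload_root, file_url):
            """Resolve file_url against upload_root; reject anything that escapes it."""
            root = os.path.realpath(upload_root)
            candidate = os.path.realpath(os.path.join(upload_root, file_url))
            if not candidate.startswith(root + os.sep):
                raise ValueError("path escapes the upload root")  # treat as 404 in the view
            return candidate

    In the view, file_path = safe_upload_path(settings.UPLOAD_LOCATION, file_url) would replace the plain string concatenation.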


  • How can I check the version of an assembly then delete the assembly?

    - by Nescio
    I am using FileVersionInfo to retrieve the version of a .NET assembly. Then I want to immediately delete the file. Unfortunately, after I call GetVersionInfo, any attempt to delete the file results in an error: “…in use by another process…” Is there another technique to determine the version that does not lock the file? Or is it possible to ensure the lock is released after calling GetVersionInfo? The example below is heavily simplified, but its scope matches my real code.

        void Main()
        {
            var fvi = GetVersion("myPath");
            if (fvi.ToString() == "2.0.0.7")
                DeleteFile("myPath");
        }

        FileVersionInfo GetVersion(string path)
        {
            return FileVersionInfo.GetVersionInfo(path);
        }

        void DeleteFile(string path)
        {
            File.Delete(path);
        }


  • mmap() for large file I/O?

    - by Boatzart
    I'm creating a utility in C++, to be run on Linux, which converts videos to a proprietary format. The video frames are very large (up to 16 megapixels), and we need to be able to seek directly to exact frame numbers, so our file format uses libz to compress each frame individually and appends the compressed data to a file. Once all frames have been written, a journal that includes metadata for each frame (including file offsets and sizes) is written to the end of the file. I'm currently using ifstream and ofstream to do the file I/O, but I am looking to optimize as much as possible. I've heard that mmap() can increase performance in a lot of cases, and I'm wondering if mine is one of them. Our files will be in the tens to hundreds of gigabytes, and although writing will always be done sequentially, random-access reads should be done in constant time. Any thoughts as to whether I should investigate this further, and if so, does anyone have any tips for things to look out for? Thanks!
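
    Whether mmap() wins here depends on the access pattern and page cache behavior, but the facility is cheap to prototype; a sketch of the constant-time random read using Python's mmap module, with (offset, size) standing in for one journal entry:

        import mmap

        def read_frame(path, offset, size):
            """Return one compressed frame from a known (offset, size) journal entry."""
            with open(path, "rb") as f:
                # length=0 maps the whole file; fine on 64-bit, where reserving
                # hundreds of GB of address space costs nothing until pages are touched
                with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
                    return m[offset:offset + size]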


  • ASP.net file operations delay

    - by mtranda
    OK, so here's the problem: I'm reading the stream from a FileUpload control, reading in chunks of n bytes and writing the array in a loop until I reach the stream's end. The reason I do this is that I need to check several things while the upload is still going on (rather than doing a Save(), which does the whole thing in one go). Here's the problem: when uploading from the local machine, I can see the file just fine as it's uploading, and its size increases (I had to add a Sleep() call in the loop to actually get to see the file being written). However, when I upload the file from a remote machine, I don't see it until the upload has completed. I've also added a call that writes progress to a text file as the upload goes on, and I get the same thing: locally, the file updates as the upload goes on; remotely, the token file only appears after the upload is done (which is somewhat useless, since I need it while the upload is still happening). Is there some sort of security setting in IIS (or ASP.NET) that saves files to a temporary location for remote machines, as opposed to the local machine, and then moves them to the specified destination? I would liken this to ASP.NET displaying detailed error messages when browsing from the local machine (even on the public hostname), as opposed to the generic compilation error page/generic exception page shown when browsing from a remote machine (when customErrors is not off). Any clues on this? Thanks in advance.


  • Atomic int writes on file

    - by Waneck
    Hello! I'm writing an application that will have to handle many concurrent accesses, by threads as well as by processes, so no mutexes or whole-file locks should be applied. To keep locking to a minimum, I'm designing the file to be "append-only": all data is first appended to disk, and then the address pointing to the info it updates is changed to refer to the new data. So I only need to implement a small locking scheme to change this one int so that it refers to the new address. What is the best way to do it? I was thinking about maybe putting a flag before the address that, when set, makes readers spin until it is released. But I'm afraid that isn't at all atomic, is it? e.g.: a reader reads the flag, and it is unset; at the same time, a writer writes the flag and changes the value of the int; the reader may read an inconsistent value! I'm looking for locking techniques, but all I find is either thread-locking techniques or locking an entire file, not individual fields. Is it not possible to do this? How do append-only databases handle this? Thanks! Cauê
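
    One widely used shape for this is to publish the pointer only after the record it refers to is durable, so a reader sees either the old or the new address and never a half-written record. A sketch, assuming a single appender at a time, and noting loudly that whether one aligned 4-byte overwrite is atomic with respect to concurrent readers is platform-specific and must be verified:

        import os
        import struct

        HEAD_OFFSET = 0  # fixed location of the 4-byte "current record" pointer

        def publish(fd, record):
            """Append a record, then swing the head pointer to it (POSIX I/O)."""
            new_offset = os.lseek(fd, 0, os.SEEK_END)  # single appender assumed;
            os.write(fd, record)                       # concurrent appenders need O_APPEND
            os.fsync(fd)                  # record is durable before the pointer moves
            os.pwrite(fd, struct.pack("<I", new_offset), HEAD_OFFSET)
            os.fsync(fd)

        def read_head(fd):
            (offset,) = struct.unpack("<I", os.pread(fd, 4, HEAD_OFFSET))
            return offset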


  • Performance of fopen vs stat

    - by Alex Marshall
    Hello, I'm writing several C programs for an embedded system where every bit of performance we can squeeze out will matter. Part of that is accessing log files. When determining whether a file exists, is there any performance difference between using open/fopen and stat? I've been using stat on the assumption that it only has to do a quick check against the file system, whereas fopen would have to actually gain access to the file and manipulate internal data structures before returning. Is there any merit to this?
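
    Both calls resolve the path in the kernel; open/fopen additionally allocate a file descriptor (and, for fopen, a userspace FILE with its buffer), so stat is the lighter existence check. When it matters this much, measure on the target; a throwaway harness, in Python purely for brevity, since the comparison is between the same underlying syscalls:

        import os
        import timeit

        PATH = "/var/log/syslog"  # hypothetical log file; substitute a real one

        def exists_stat():
            try:
                os.stat(PATH)
                return True
            except OSError:
                return False

        def exists_open():
            try:
                with open(PATH, "rb"):
                    return True
            except OSError:
                return False

        for fn in (exists_stat, exists_open):
            print(fn.__name__, timeit.timeit(fn, number=10000))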


  • C++: ifstream::getline problem

    - by Jay
    I am reading a file like this:

        char string[256];
        std::ifstream file( "file.txt" ); // open the level file
        if ( ! file ) // check that the file loaded fine
        {
            // error
        }
        while ( file.getline( string, 256, ' ' ) )
        {
            // handle input
        }

    Just for testing purposes, my file is just one line with a space at the end: 12345. My code first reads the 12345 successfully. But then, instead of the loop ending, it reads another string, which seems to be a return/newline. I have saved my file both in gedit and in nano, and I have also output it with the Linux cat command; there is no return at the end, so the file should be fine. Why is my code reading a return/newline? Thanks.


  • How to read an ArrayList from a txt file in Java?

    - by lox
    How do I read an ArrayList from a txt file in Java? My ArrayList holds objects of the form:

        public class Account {
            String username;
            String password;
        }

    I managed to put some Accounts in a txt file, but now I don't know how to read them back. This is how my list looks in the txt file:

        username1 password1 | username2 password2 | etc

    This is part of the code I came up with, but it doesn't work. It looks logical to me, though... :)

        public static void RdAc(String args[]) {
            ArrayList<Account> peoplelist = new ArrayList<Account>(50);
            int i, i2, i3;
            String[] theword = null;
            try {
                FileReader fr = new FileReader("myfile.txt");
                BufferedReader br = new BufferedReader(fr);
                String line = "";
                while ((line = br.readLine()) != null) {
                    String[] theline = line.split(" | ");
                    for (i = 0; i < theline.length; i++) {
                        theword = theline[i].split(" ");
                    }
                    for (i3 = 0; i3 < theline.length; i3++) {
                        Account people = new Account();
                        for (i2 = 0; i2 < theword.length; i2++) {
                            people.username = theword[i2];
                            people.password = theword[i2 + 1];
                            peoplelist.add(people);
                        }
                    }
                }
            } catch (IOException ex) {
                System.out.println("Could not read from file");
            }
        }
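
    One thing to check: Java's String.split takes a regular expression, so split(" | ") means "a space, or a space" and splits on every single space rather than on the three-character separator (the literal form would be split(" \\| ")). The parse itself is simple; a sketch in Python for brevity:

        def read_accounts(path):
            """Parse 'user1 pass1 | user2 pass2 | ...' into (username, password) pairs."""
            accounts = []
            with open(path) as f:
                for line in f:
                    for record in line.split(" | "):  # literal separator, no regex here
                        fields = record.split()
                        if len(fields) == 2:
                            accounts.append((fields[0], fields[1]))
            return accounts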


  • Display records which exist in file2 but not in file1

    - by Phoenix
    Log file1 contains records of customers (name, id, date) who visited yesterday. Log file2 contains records of customers (name, id, date) who visited today. How would you display the customers who visited yesterday but not today? Constraint: don't use an auxiliary data structure, because the files contain millions of records. [So, no hashes.] Is there a way to do this using Unix commands?
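
    With no auxiliary structure allowed, the classic answer is external sort plus a single merge pass, which is exactly what the Unix tools do: sort both files, then comm -23 sorted_yesterday sorted_today prints the lines unique to the first file. The same merge pass, sketched in Python, runs in constant memory over the sorted inputs:

        def only_in_first(yesterday_sorted, today_sorted, out_path):
            """One merge pass over two sorted files: emit lines found only in the first."""
            with open(yesterday_sorted) as fa, open(today_sorted) as fb, \
                 open(out_path, "w") as out:
                b = fb.readline()
                for a in fa:
                    while b and b < a:   # advance the second file until it catches up
                        b = fb.readline()
                    if a != b:           # past a (or EOF): a has no match in file2
                        out.write(a)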


  • How to loop an executable command in the terminal in Linux?

    - by user1452373
    Let me first describe my situation. I am working on a Linux platform and have a collection of .bmp files whose names count up from filename0022.bmp to filename0680.bmp, a total of 658 pictures. I want to be able to run each of these pictures through an executable that operates on the picture and then writes the result to a file specified by the user; it also takes two threshold arguments, lower and upper. So the typical call for the executable is:

        ./filter inputfile outputfile lower upper

    Is there a way I can loop this call over all the files, either straight from the terminal or by creating some kind of bash script? My problem is similar to this: Execute a command over multiple files with a batch file, but this time I am working in a Linux command-line terminal. Thank you for your help, Luke H
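
    A shell for-loop over the glob filename0*.bmp, calling ./filter on each match, is the direct answer; the same loop as a Python sketch (the output naming scheme and threshold values are hypothetical placeholders):

        import glob
        import subprocess

        LOWER, UPPER = "10", "200"  # hypothetical threshold values

        for path in sorted(glob.glob("filename0*.bmp")):
            out = "filtered_" + path  # hypothetical output naming scheme
            subprocess.run(["./filter", path, out, LOWER, UPPER], check=True)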


  • Interface for reading variable-length files with a header and footer

    - by John S
    I could use some hints or tips for a decent interface for reading files with special characteristics. The files in question have a header (~120 bytes), a body (1 byte - 3 GB), and a footer (4 bytes). The header contains information about the body, and the footer is only a simple CRC32 value of the body. I use Java, so my idea was to extend the InputStream class and add a constructor such as public MyInStream(InputStream in), in which I immediately read the header and then direct the overridden read()s at the body. The problem is that I can't give the user of the class the CRC32 value until the whole body has been read. Because the file can be 3 GB large, putting it all in memory is a bad idea. Reading it all into a temporary file is going to be a performance hit if there are many small files. I don't know how large the file is, because the InputStream doesn't have to be a file; it could be a socket. Looking at it again, maybe extending InputStream is a bad idea. Thank you for reading the confused thoughts of a tired programmer. :)
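
    One common shape for this: a wrapper that strips the header up front, checksums body bytes as the caller consumes them, and lags four bytes behind the raw stream so the footer never reaches the caller; the CRC comparison becomes available once the stream is exhausted. A sketch of that design, with Python standing in for the Java wrapper and a hypothetical fixed 120-byte header:

        import zlib

        HEADER_SIZE = 120  # hypothetical fixed header length

        class BodyReader:
            """Wrap a stream laid out as [header][body][4-byte CRC32 footer].

            read() returns body bytes only, CRC-summing them on the fly; once
            exhausted, crc_ok() checks the running sum against the footer."""

            def __init__(self, raw):
                self.raw = raw
                self.header = raw.read(HEADER_SIZE)  # body metadata lives here
                self.tail = raw.read(4)   # lag buffer: the last 4 bytes seen so far
                self.crc = 0
                self.footer = None

            def read(self, n=65536):
                chunk = self.raw.read(n)  # assumes full reads; loop here for sockets
                if not chunk:             # EOF: the lagging 4 bytes are the footer
                    self.footer = self.tail
                    return b""
                buf = self.tail + chunk
                out, self.tail = buf[:-4], buf[-4:]
                self.crc = zlib.crc32(out, self.crc)
                return out

            def crc_ok(self):
                # stored byte order is an assumption; adjust to the real format
                return self.footer is not None and self.footer == self.crc.to_bytes(4, "big")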


  • Python: slow read & write for millions of small files

    - by Jami
    I am building a directory tree which has tons of subdirectories and files. The total directory count is somewhere around 256^32 subdirectories, with 256 files at each end, which are only a few bytes long. I did this so I would have fast access to these files (since I'm not searching, I'm just accessing them directly via a known file path). I have a Python script that builds this filesystem and reads & writes those files. The problem is that when I reach more than 1 GB of total file size, the read and write methods become extremely slow. Here's the function I have that reads the contents of a file (the file contains an integer string), adds a certain number to it, then writes it back to the original file:

        def addInFile(path, scoreToAdd):
            num = scoreToAdd
            try:
                shutil.copyfile(path, '/tmp/tmp.txt')
                fp = open('/tmp/tmp.txt', 'r')
                num += int(fp.readlines()[0])
                fp.close()
            except:
                pass
            fp = open('/tmp/tmp.txt', 'w')
            fp.write(str(num))
            fp.close()
            shutil.copyfile('/tmp/tmp.txt', path)

    I previously tried performing Linux console commands, but that was slower. I copy the file to a temporary file first, then access/modify it, then copy it back, because I found this was faster than accessing the file directly. I think the cause of the slowdown is that there are tons of files. Performing this function 1000 times sometimes takes up to a minute now, whereas before (when there were only a few files) 1000 calls took less than a second. How do you suggest I fix this?
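
    Separately from the directory layout, the copy-to-/tmp round trip turns one tiny update into several filesystem operations plus two whole-file copies; updating in place cuts the metadata traffic considerably. A sketch with the same behavior for a missing or unreadable counter, minus the bare except:

        def add_in_file(path, score_to_add):
            """Read the counter, add to it, write it back -- no temp-file round trip."""
            num = score_to_add
            try:
                with open(path, "r") as fp:
                    num += int(fp.read())
            except (IOError, OSError, ValueError):
                pass  # missing or malformed file: start from the increment alone
            with open(path, "w") as fp:
                fp.write(str(num))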


  • Loop over a file and write the next line if a condition is met

    - by 111078384259264152964
    Having a hard time fixing this or finding any good hints about it. I'm trying to loop over one file, modify each line slightly, and then loop over a different file. If a line in the second file starts with a line from the first, then the following line in the second file should be written to a third file.

        #!/usr/bin/env python
        with open('ids.txt', 'rU') as f:
            with open('seqres.txt', 'rU') as g:
                for id in f:
                    id = id.lower()[0:4] + '_' + id[4]
                    with open(id + '.fasta', 'w') as h:
                        for line in g:
                            if line.startswith('' + id):
                                h.write(g.next())

    All the correct files appear, but they are empty. Yes, I am sure the if has true cases. :-) seqres.txt has lines with an ID number in a certain format, each followed by a line of data. ids.txt has lines with the ID numbers of interest in a different format. I want each line of data with an interesting ID number in its own file. Thanks a million to anyone with a little advice!
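
    The empty files are consistent with one detail of the nesting: the inner for line in g consumes seqres.txt to the end while handling the first ID, so every later ID scans an already-exhausted file. Rewinding per ID, here by reopening the file, is the smallest fix; a sketch:

        with open('ids.txt') as f:
            for raw in f:
                id = raw.lower()[0:4] + '_' + raw[4]
                with open('seqres.txt') as g, open(id + '.fasta', 'w') as h:
                    for line in g:
                        if line.startswith(id):
                            h.write(next(g))  # the data line right after the ID line

    Re-scanning seqres.txt once per ID is O(ids x lines); if that gets slow, one pass over seqres.txt building a dict from ID line to following line avoids the rescans at the cost of memory.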


  • IOCP multiple socket completion ports in same container

    - by Ohmages
    For the past couple of days I have been thinking about how to solve one of the problems I am facing, and I have tried to research the topic but don't really know what I can do. I have two sockets in the same struct, and both use the same completion port. The problem is, they use different protocols. Is there a way I can find out which socket got triggered? They're called game_socket and client_socket. Example code would be something like:

        while (true)
        {
            error = GetQueuedCompletionStatus(CompletionPort, &BytesTransfered,
                                              (PULONG_PTR)&Key, &lpOverlapped, 0);
            srvc = CONTAINING_RECORD( lpOverlapped, client, ol );
            if ( error == TRUE )
            {
                cout << endl << "SOCKET: [" << srvc->client_socket << "] TRIGGERED - WORKER THREAD" << endl;
                cout << endl << "BytesTransfered: [" << BytesTransfered << "]" << endl;
                if ( srvc->game_client triggered )
                {
                    // .. this code
                }
                else
                {
                    // .. this code
                }

    Any ideas or help would be appreciated :)


  • How can I read/write data from a file?

    - by samy
    I'm writing a simple Chrome extension. I need the ability to add site URLs to a list, and to read from the list; I use the list to open the sites in new tabs. I'm looking for a way to have a data file I can write to and read from. I was thinking of XML. I read that there is a problem changing the content of files with JavaScript. Is XML the right choice for this kind of thing? I should add that there is no web server and the app will run locally, so maybe the problems websites are having are not the same as this. Before I wrote this question, I tried one thing and started to feel insecure because it didn't work. I made an XML file called Site.xml:

        <?xml version="1.0" encoding="utf-8" ?>
        <Sites>
          <site>
            <url>
              http://www.sulamacademy.com/AddMsgForum.asp?FType=273171&SBLang=0&WSUAccess=0&LocSBID=20375
            </url>
          </site>
          <site>
            <url>
              http://www.wow.co.il
            </url>
          </site>
          <site>
            <url>
              http://www.Google.co.il
            </url>
          </site>

    I made this script to read the data from it and put it on the HTML page:

        function LoadXML() {
            var ajaxObj = new XMLHttpRequest();
            ajaxObj.open('GET', 'Sites.xml', false);
            ajaxObj.send();
            var myXML = ajaxObj.responseXML;
            document.write('<table border="2">');
            var prs = myXML.getElementsByTagName("site");
            for (i = 0; i < prs.length; i++) {
                document.write("<tr><td>");
                document.write(prs[i].getElementsByName("url")[0].childNode[0].nodeValue);
                document.write("</td></tr>");
            }
            document.write("</table");
        }


  • Cannot write to SD card -- canWrite is returning false

    - by Fizz
    Sorry for the ambiguous title, but I'm doing the following to write a simple string to a file:

        try {
            File root = Environment.getExternalStorageDirectory();
            if (root.canWrite()) {
                System.out.println("Can write.");
                File def_file = new File(root, "default.txt");
                FileWriter fw = new FileWriter(def_file);
                BufferedWriter out = new BufferedWriter(fw);
                String defbuf = "default";
                out.write(defbuf);
                out.flush();
                out.close();
            }
            else
                System.out.println("Can't write.");
        } catch (IOException e) {
            e.printStackTrace();
        }

    But root.canWrite() seems to be returning false every time. I am not running this on an emulator; I have my Android Eris plugged into my computer via USB and am running the app on the phone via Eclipse. Is there a way of giving my app permission so this doesn't happen? Also, this code seems to create the file default.txt, but what if it already exists? Will it skip the creation and just open it to write, or do I have to catch something like FileAlreadyExists (if such an exception exists), which then just opens it and writes? Thanks for any help guys.

