Search Results

Search found 5919 results on 237 pages for 'io priority'.


  • [FIXED] Scan file contents into an array of a structure.

    - by ZaZu
    Hello, I have a structure in my program that contains a particular array. I want to scan a random file of numbers and put the contents into that array. (NOTE: this is a sample from a bigger program, so I need the structure and arrays as declared.) The contents of the file are basically: 5 4 3 2 5 3 4 2. This is my code:

        #include <stdio.h>
        #define first 500
        #define sec 500

        struct trial {
            int f;
            int r;
            float what[first][sec];
        };

        int trialtest(trial *test);

        main() {
            trial test;
            trialtest(&test);
        }

        int trialtest(trial *test) {
            int z, x, i;
            FILE *fin;
            fin = fopen("randomfile.txt", "r");
            for (i = 0; i < 5; i++) {
                fscanf(fin, "%5.2f\t", (*test).what[z][x]);
            }
            fclose(fin);
            return 0;
        }

    But whenever I run this code, I get this error:

        (25) : warning 508 - Data of type 'double' supplied where a pointer is required

    I tried adding

        do {
            for (i = 0; i < 5; i++) {
                q = fscanf(fin, "%5.2f\t", (*test).what[z][x]);
            }
        } while (q != EOF);

    but that didn't work either; it gives the same error. Does anyone have a solution to this problem?
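
    A minimal sketch of one way the read loop could work (an illustration, not the poster's original code): fscanf needs the address of the destination, "%5.2f" is not a valid fscanf conversion (precision is printf-only), and the array indices have to be initialized. The file name and the count of 5 values are taken from the question.

        #include <stdio.h>

        #define first 500
        #define sec   500

        struct trial {
            int f;
            int r;
            float what[first][sec];
        };

        int trialtest(struct trial *test) {
            FILE *fin = fopen("randomfile.txt", "r");
            if (fin == NULL)
                return -1;
            for (int i = 0; i < 5; i++) {
                /* pass the address of the element and use plain %f */
                if (fscanf(fin, "%f", &test->what[0][i]) != 1)
                    break;              /* stop on malformed input or end of file */
            }
            fclose(fin);
            return 0;
        }

        int main(void) {
            static struct trial test;   /* ~1 MB, so keep it off the stack */
            trialtest(&test);
            return 0;
        }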


  • R problem with apply + rbind

    - by Carl
    I cannot seem to get the following to work:

        directory <- "./"
        files.15x16 <- c("15x16-70d.out", "15x16-71d.out")
        data.15x16 <- rbind(
            lapply(
                as.array(paste(directory, files.15x16, sep="")),
                FUN=read.csv, sep=" ", header=F))

    What it should be doing is pretty straightforward: I have a directory name, some file names, and actual files of data. I paste the directory and file names together, read the data from the files in, and then rbind them all together into a single chunk of data. Except the result of the lapply has the data in [[]], i.e., accessing it occurs via a[[1]], a[[2]], etc., which rbind doesn't seem to accept. Suggestions?


  • searching within a compressed sorted fixed width file

    - by user275455
    Assume I have a regular compressed fixed-width file that is sorted on one of the fields. Given that I know the length of the records, I can use lseek to implement a binary search that finds records whose field matches a given value without having to read the entire file. Now the difficulty is that the file is gzipped. Is it possible to do this without completely inflating the file? If not with gzip, is there any compression that supports this kind of behavior?
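
    For reference, a minimal sketch of the lseek/pread-style binary search the question takes as its starting point, on the uncompressed fixed-width file; the record length, key width, file name, and key value are all hypothetical.

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        #define RECLEN 151    /* hypothetical fixed record length */
        #define KEYLEN 10     /* hypothetical width of the sort key at the start of each record */

        /* Binary search over fixed-width records sorted on a leading key field.
           Returns the byte offset of a matching record, or -1 if none is found. */
        static off_t find_record(int fd, const char *key) {
            off_t end = lseek(fd, 0, SEEK_END);
            long lo = 0, hi = (long)(end / RECLEN) - 1;
            char rec[RECLEN];

            while (lo <= hi) {
                long mid = lo + (hi - lo) / 2;
                if (pread(fd, rec, RECLEN, (off_t)mid * RECLEN) != RECLEN)
                    return -1;                            /* short read: give up */
                int cmp = memcmp(key, rec, KEYLEN);
                if (cmp == 0) return (off_t)mid * RECLEN;
                if (cmp < 0)  hi = mid - 1;
                else          lo = mid + 1;
            }
            return -1;
        }

        int main(void) {
            int fd = open("records.dat", O_RDONLY);       /* hypothetical uncompressed file */
            if (fd < 0) { perror("records.dat"); return 1; }
            off_t where = find_record(fd, "0000012345");  /* example 10-byte key */
            printf("match at offset: %lld\n", (long long)where);
            close(fd);
            return 0;
        }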


  • How can chunks be allocated in a node.js stream in object mode all at once?

    - by Quentin Engles
    I can see how buffers and strings can be sent as chunks, but I'm having a problem thinking about how streams should be dealt with when working in object mode. Say I have a byte stream from an HTTP request message. I want to take that message, parse it, and then transform it into one big object. I already know how to parse the message. What I'm wondering is: if the message is big, so it arrives in many chunks, but I want to produce one object as the output, how can I make sure the data event waits for the whole thing? Is this just a matter of not calling the push method until the chunked data has finished being sent? That would then restrict the stream's output to a smaller object, which I think I'm fine with for now. As an added condition, the larger data will be reduced in size after the transform.


  • write 2d array to a file in C (Operating system)

    - by Bobj-C
    Hello All, I used to use the code below to write a 1D array to a file:

        FILE *fp;
        float floatValue[5] = { 1.1F, 2.2F, 3.3F, 4.4F, 5.5F };
        int i;

        if ((fp = fopen("test", "wb")) == NULL) {
            printf("Cannot open file.\n");
        }
        if (fwrite(floatValue, sizeof(float), 5, fp) != 5)
            printf("File read error.");
        fclose(fp);

        /* read the values */
        if ((fp = fopen("test", "rb")) == NULL) {
            printf("Cannot open file.\n");
        }
        if (fread(floatValue, sizeof(float), 5, fp) != 5) {
            if (feof(fp))
                printf("Premature end of file.");
            else
                printf("File read error.");
        }
        fclose(fp);

        for (i = 0; i < 5; i++)
            printf("%f ", floatValue[i]);

    My question is: what if I want to write and read a 2D array?
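
    A minimal sketch of one way to extend this to a 2D array (the dimensions and file name are made up for illustration): a fixed-size 2D array is one contiguous block, so it can be written and read back as ROWS*COLS elements in a single fwrite/fread call.

        #include <stdio.h>

        #define ROWS 3
        #define COLS 5

        int main(void) {
            float a[ROWS][COLS] = {
                { 1.1F,  2.2F,  3.3F,  4.4F,  5.5F},
                { 6.6F,  7.7F,  8.8F,  9.9F, 10.1F},
                {11.1F, 12.2F, 13.3F, 14.4F, 15.5F}
            };
            float b[ROWS][COLS];
            FILE *fp;
            int i, j;

            /* write the whole 2D array in one call: it is ROWS*COLS contiguous floats */
            if ((fp = fopen("test2d", "wb")) == NULL) { printf("Cannot open file.\n"); return 1; }
            if (fwrite(a, sizeof(float), ROWS * COLS, fp) != ROWS * COLS)
                printf("File write error.");
            fclose(fp);

            /* read it back the same way */
            if ((fp = fopen("test2d", "rb")) == NULL) { printf("Cannot open file.\n"); return 1; }
            if (fread(b, sizeof(float), ROWS * COLS, fp) != ROWS * COLS)
                printf("File read error.");
            fclose(fp);

            for (i = 0; i < ROWS; i++)
                for (j = 0; j < COLS; j++)
                    printf("%f ", b[i][j]);
            return 0;
        }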


  • FileNotFound exception

    - by Pratik
    I am trying to read a file in a servlet. I am using the Eclipse IDE. I get a FileNotFoundException if I provide a relative file name:

        List<String> ls = new ArrayList<String>();
        Scanner input = new Scanner(new File("Input.txt"));
        while (input.hasNextLine()) {
            ls.add(input.nextLine());
        }

    The same code works if I put the absolute path, like this:

        Scanner input = new Scanner(new File("F:/Spring and other stuff/AjaxDemo/src/com/pdd/ajax/Input.txt"));

    The Java file and the text file are in the same folder. Does it search for the text file in some other folder?


  • Fastest Way To Format Plain Text Using JavaScript

    - by Nathan Campos
    I have a huge plain text document, about 700 KB, which is very big for plain text, and I need to format it on the fly, converting it to HTML. The only things I need to replace and format to HTML so the browser can display them are bold and italic. In the plain text, bold is marked like this: Not on bold... **bold text here** not bold here. And italic like this: Not italic... *italic text* no italic. Just like Stack Overflow does for its formatting. The problem is that I need to make it a lot faster, since the text is so big. One of my ideas was to add paging, so the script only needs to format part of the text, not all of it; then after the user changes the page the script would be called again. But the problem is how to write the code for all of this.


  • Read char from txt file in C++

    - by Jack in the Box
    I have a program that will read the number of rows and columns from a txt file. The program also has to read the contents of a 2D array from the same file. Here is the txt file: 8 20 * * *** *** (8 and 20 are the number of rows and columns respectively). The spaces and asterisks are the contents of the array, Array[8][20]. For example, Array[0][1] = '*'. I did make the program read 8 and 20, as follows:

        ifstream myFile;
        myFile.open("life.txt");
        if (!myFile) {
            cout << endl << "Failed to open file";
            return 1;
        }
        myFile >> rows >> cols;
        myFile.close();

        grid = new char*[rows];
        for (int i = 0; i < rows; i++) {
            grid[i] = new char[cols];
        }

    Now, how do I assign the spaces and the asterisks to the fields in the array? I hope you got the point.
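
    A minimal sketch of one way to fill the grid, written with C-style stdio rather than ifstream (the maximum dimensions and the file name life.txt are assumptions); rows shorter than cols are padded with spaces.

        #include <stdio.h>
        #include <string.h>

        #define MAXR 100     /* assumed upper bounds for the grid */
        #define MAXC 200

        int main(void) {
            int rows, cols;
            char grid[MAXR][MAXC];
            char line[1024];
            FILE *f = fopen("life.txt", "r");

            if (!f) { printf("Failed to open file\n"); return 1; }
            if (fscanf(f, "%d %d", &rows, &cols) != 2 || rows > MAXR || cols > MAXC) {
                fclose(f);
                return 1;
            }
            fgets(line, sizeof line, f);              /* consume the rest of the header line */

            for (int r = 0; r < rows; r++) {
                if (!fgets(line, sizeof line, f))
                    line[0] = '\0';                   /* missing row: treat as empty */
                size_t len = strcspn(line, "\n");     /* characters before the newline */
                for (int c = 0; c < cols; c++)
                    grid[r][c] = ((size_t)c < len) ? line[c] : ' ';   /* pad short rows */
            }
            fclose(f);

            printf("grid[0][1] = '%c'\n", grid[0][1]);
            return 0;
        }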


  • Write to the second line of a PHP file

    - by Woz
    I have a PHP file that I want to add an include path to on the second line. I need to open the file and insert a line of code on line 2. I have tried a few techniques, none of which are working, but I think it has something to do with the text I am trying to write and possibly not escaping characters correctly, as I am not too familiar with file writing. Here is the file I want to write to:

        $file = $_SERVER['DOCUMENT_ROOT'].'/'.$domaindir.'/test.php';

    Here is the piece of text I want to insert into the file:

        $dbfile = "include('".$_SERVER['DOCUMENT_ROOT']."/".$domaindir."/web_".$dbname.".inc.php');";

    Then what I was doing was a string replace, but all it did was bump the "session_start();" bit to a new line! Can anyone point me in the direction of a tutorial that might tell me how to insert this into the second line of my PHP file, or does anyone have any ideas? I can say for sure that the path to the PHP file is fully tested, so I know it's not that the file is not being opened or written to. Any ideas would be much appreciated. Thanks in advance.


  • How to stop SocketIOClient from reconnecting on Android?

    - by erginduran
    My problem is reconnection. I call SocketIOClient.connect(..) in a background service. I close the service when the internet connection is off, and I restart the service again when the connection is back on. How can I turn off this reconnection? I don't want SocketIOClient to reconnect. Here is my code:

        ConnectCallback mConnectCallback = new ConnectCallback() {
            @Override
            public void onConnectCompleted(Exception ex, SocketIOClient client) {
                if (ex != null) {
                    ex.printStackTrace();
                    return;
                }
                client.setReconnectCallback(new ReconnectCallback() {
                    @Override
                    public void onReconnect() {
                        // TODO Auto-generated method stub
                    }
                });
                client.setDisconnectCallback(new DisconnectCallback() {
                    @Override
                    public void onDisconnect(Exception arg0) {
                        // TODO Auto-generated method stub
                    }
                });
                client.setErrorCallback(new ErrorCallback() {
                    @Override
                    public void onError(String arg0) {
                        // TODO Auto-generated method stub
                    }
                });
                client.on("event", new EventCallback() {
                    @Override
                    public void onEvent(JSONArray jsonArray, Acknowledge acknowledge) {
                        // bla bla
                    }
                });
                ScreenChat.mClient = client;
            }
        };


  • basic file input using C

    - by user1781966
    So I'm working on learning how to do file I/O, but the book I'm using is terrible at teaching how to receive input from a file. Below is their example of how to read input from a file, but it doesn't work. I have copied it word for word, and it should loop through a list of names until it reaches the end of the file (or so the book says), but it doesn't. In fact, if I leave the while loop in there, it doesn't print anything.

        #include <stdio.h>
        #include <conio.h>

        int main() {
            char name[10];
            FILE *pRead;

            pRead = fopen("test.txt", "r");
            if (pRead == NULL) {
                printf("file cannot be opened");
            } else
                printf("contents of test.txt");

            fscanf(pRead, "%s", name);
            while (!feof(pRead)) {
                printf("%s\n", name);
                fscanf(pRead, "%s", name);
            }
            getch();
        }

    Even online, every beginner's tutorial I see does some variation of this, but I can't seem to get it to work even a little bit.
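
    A common alternative pattern is to loop on the return value of fscanf rather than on feof(); a minimal sketch, assuming the same test.txt of whitespace-separated names (the buffer is enlarged and the conversion width-limited to avoid overflow).

        #include <stdio.h>

        int main(void) {
            char name[64];
            FILE *pRead = fopen("test.txt", "r");

            if (pRead == NULL) {
                printf("file cannot be opened\n");
                return 1;
            }
            printf("contents of test.txt:\n");

            /* fscanf returns the number of items converted; stop when it is no longer 1 */
            while (fscanf(pRead, "%63s", name) == 1)
                printf("%s\n", name);

            fclose(pRead);
            return 0;
        }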


  • Get license file from a folder in C# project

    - by daft
    I have a license file that I need to access at runtime in order to create PDF files. After I have created the in-memory PDF, I need to call a method on that PDF to set the license, like this:

        pdf.SetLicense("pathToLicenseFileHere");

    The license file is located in the same project as the .cs file that creates the PDF, but in a separate folder. I cannot get this simple thing to behave correctly, which makes me a bit sad, since it really shouldn't be that hard. :( I try to set the path like this:

        string path = @"\Resources\File.lic";

    But it just isn't working out for me.


  • How can I determine if a file is read-only for my process on *nix?

    - by user109078
    Using the stat function, I can get the read/write permissions for owner, user, and other, but this isn't what I want. I want to know the read/write permissions of a file for my process (i.e. the application I'm writing). The owner/user/other information is only helpful if I know whether my process is running as the owner/user/other of the file, so maybe that's the solution, but I'm not sure of the steps to get there.
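
    A minimal sketch of asking the kernel directly instead of decoding st_mode, with the usual caveat: access() checks against the process's real UID/GID, while faccessat() with AT_EACCESS checks against the effective IDs, which is normally what decides whether open() will succeed. The path here is just a placeholder.

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(int argc, char **argv) {
            const char *path = (argc > 1) ? argv[1] : "somefile.txt";

            /* real uid/gid */
            printf("readable (real ids):      %s\n", access(path, R_OK) == 0 ? "yes" : "no");
            printf("writable (real ids):      %s\n", access(path, W_OK) == 0 ? "yes" : "no");

            /* effective uid/gid */
            printf("readable (effective ids): %s\n",
                   faccessat(AT_FDCWD, path, R_OK, AT_EACCESS) == 0 ? "yes" : "no");
            printf("writable (effective ids): %s\n",
                   faccessat(AT_FDCWD, path, W_OK, AT_EACCESS) == 0 ? "yes" : "no");
            return 0;
        }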


  • best practices question: How to save a collection of images and a Java object in a single file?

    - by Richard
    Hi all, I am making a Java program that has a collection of flash-card-like objects. I store the objects in a JTree composed of DefaultMutableTreeNodes. Each node has a user object attached to it with a few string/native data type parameters. However, I also want each of these objects to have an image (typical formats: jpg, png, etc.). I would like to be able to store all of this information, including the images and the tree data, to disk in a single file so the file can be transferred between users and the entire tree, including the images and parameters for each object, can be reconstructed. I had not approached a problem like this before, so I was not sure what the best practices were. I found XMLEncoder (http://java.sun.com/j2se/1.4.2/docs/api/java/beans/XMLEncoder.html) to be a very effective way of storing my tree and the native data type information. However, I couldn't figure out how to save the image data itself inside the XML file, and I'm not sure it is possible since the data is binary (so restricted characters would be invalid). My next thought was to associate a hash string instead of an image with each user object, and then gzip together all of the images, with the hash strings as the names, and the XML-encoded tree in the same compressed file. That seemed really contrived, though. Does anyone know a good approach for this type of issue? Thanks!


  • unbuffered I/O in Linux

    - by stuck
    I'm writing lots and lots of data that will not be read again for weeks. As my program runs, the amount of free memory on the machine (displayed with 'free' or 'top') drops very quickly; the amount of memory my app uses does not increase, and neither does the amount of memory used by other processes. This leads me to believe the memory is being consumed by the filesystem's cache. Since I do not intend to read this data for a long time, I'm hoping to bypass the system's buffers, such that my data is written directly to disk. I don't have dreams of improving performance or being a super ninja; my hope is to give a hint to the filesystem that I'm not going to be coming back for this memory any time soon, so don't spend time optimizing for those cases. On Windows I've faced similar problems and fixed them using FILE_FLAG_NO_BUFFERING|FILE_FLAG_WRITE_THROUGH: the machine's memory was not consumed by my app and the machine was more usable in general. I'm hoping to duplicate the improvements I've seen, but on Linux. On Windows there is the restriction of writing in sector-sized pieces; I'm happy with this restriction for the amount of gain I've measured. Is there a similar way to do this in Linux?
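
    One way to give that hint on Linux, sketched below under the assumption that the data goes out through a plain file descriptor: flush with fsync() and then call posix_fadvise() with POSIX_FADV_DONTNEED so the kernel is free to drop the cached pages. (O_DIRECT is the closer analogue of FILE_FLAG_NO_BUFFERING, but it imposes alignment requirements.) The file name and sizes are made up.

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void) {
            int fd = open("big-output.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }

            char buf[4096];
            memset(buf, 'x', sizeof buf);
            for (int i = 0; i < 1024; i++) {            /* write ~4 MB of dummy data */
                if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
                    perror("write");
                    break;
                }
            }

            /* flush dirty pages to disk, then tell the kernel the cached pages
               for this file will not be needed again, so it can reclaim them */
            fsync(fd);
            posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

            close(fd);
            return 0;
        }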


  • Any Alternate way for writing to a file other than ofstream

    - by Aditya
    Hi All, I am performing file operations (writeToFile) which fetch data from an XML file and write it into an output file (a1.txt). I am using MS Visual C++ 2008 on Windows XP. Currently I am using this method of writing to the output file:

        ofstream hdrOutputFile;
        /* few other stmts */
        hdrOutputFile.open(fileName, std::ios::out);

        hdrOutputFile << "#include \"commondata.h\"" << endl;
        hdrOutputFile << "#include \"Commonconfig.h\"" << endl;
        hdrOutputFile << "#include \"commontable.h\"" << endl << endl;
        hdrOutputFile << "#pragma pack(push,1)" << endl;
        hdrOutputFile << "typedef struct \n {" << endl;
        /* similar hdrOutputFile statements... */

    I have around 250 lines to write. Is there a better way to perform this task? I want to reduce these hdrOutputFile statements and use a buffer instead. Please guide me on how to do that. I mean something like:

        buff = "#include \"commontable.h\"" + "typedef struct \n {" + .......
        hdrOutputFile << buff;

    Is this possible? Thanks, Ramm


  • What would happen if the same file is being read and appended to at the same time (Python)?

    - by Shane
    I'm writing a script using two separate threads, one doing a file-reading operation and the other doing appending; both threads run fairly frequently. My question is: if one thread happens to read the file while the other is in the middle of appending a string such as "This is a test" to the file, what would happen? I know that if you are appending a smaller-than-buffer string, no matter how frequently you read the file in other threads, there would never be an incomplete line such as "This i" appearing in the file you read. I mean the OS would either do: append "This is a test", then read from the file; or: read from the file, then append "This is a test"; and this would never happen: append "This i", read from the file, append "s a test". But if "This is a test" is big enough (assuming it's a bigger-than-buffer string), the OS can't do the append in one operation, so the append would be divided into two: first append "This i" to the file, then append "s a test". In that situation, if I happen to read the file in the middle of the whole append operation, would I get this result: append "This i", read from the file, append "s a test"? In other words, might I read a file that includes an incomplete string?


  • Read a file with 2048 bytes

    - by Suresh S
    Guys, I have a file which has only one line. The file has no special encoding; it is a simple text file with a single line. For every 2048 bytes in the line there are thirteen records of 151 bytes each (13*151 bytes = 1963 bytes of records + 85 bytes of empty space), and similarly for the next 2048 bytes. What is the best file I/O to use? I am thinking of reading 2048 bytes from the file and storing them in an array:

        while (offset < fileLength && (numRead = in.read(recordChunks, offset, alength)) >= 0) {
        }

    How can I get only 2048 bytes at a time from the read statement? I am getting an IndexOutOfBoundsException.


  • C++ File manipulation problem

    - by Carlucho
    I am trying to open a file which normally has content. For the purpose of testing, I would like to initialize the program without the files being available/existing, in which case the program should create empty ones, but I am having issues implementing it. This is my code originally:

        void loadFiles() {
            fstream city;
            city.open("city.txt", ios::in);

            fstream latitude;
            latitude.open("lat.txt", ios::in);

            fstream longitude;
            longitude.open("lon.txt", ios::in);

            while (!city.eof()) {
                city >> cityName;
                latitude >> lat;
                longitude >> lon;
                t.add(cityName, lat, lon);
            }

            city.close();
            latitude.close();
            longitude.close();
        }

    I have tried everything I can think of: ofstream, ifstream, adding ios::out and all its variations. Could anybody explain what to do in order to fix the problem? Thanks!


  • fopen() fails to open stream: permission denied, yet permissions should be valid

    - by about blank
    So, I have this error:

        Warning: fopen(/path/to/test-in.txt) [function.fopen]: failed to open stream: Permission denied

    Performing ls -l in the directory where test-in.txt is produces the following output:

        -rw-r--r-- 1 $USER $USER 1921 Sep 6 20:09 test-in.txt
        -rw-r--r-- 1 $USER $USER    0 Sep 6 20:08 test-out.txt

    In order to get past this, I decided to perform the following:

        chgrp -R www-data /path/to/php/webroot

    And then did:

        chmod g+rw /path/to/php/webroot

    Yet I still get this error when I run my PHP5 script to open the file. Why is this happening? I've tried this using LAMP as well as Cherokee through CGI, so it can't be this. Is there a solution of some sort? Edit: I'll also add that I'm just developing via localhost right now. Update - the PHP fopen() line:

        $fullpath = $this->fileRoot . $this->fileInData['fileName'];
        $file_ptr = fopen( $fullpath, 'r+' );

    I should also mention I'd like to stick with Cherokee if possible. What's the deal with setting file permissions for Apache/Cherokee?


  • Python text file processing speed issues

    - by Anonymouslemming
    Hi all, I'm having a problem with processing a largish file in Python. All I'm doing is:

        f = gzip.open(pathToLog, 'r')
        for line in f:
            counter = counter + 1
            if (counter % 1000000 == 0):
                print counter
        f.close

    This takes around 10m25s just to open the file, read the lines, and increment this counter. In Perl, dealing with the same file and doing quite a bit more (some regular expression stuff), the whole process takes around 1m17s. Perl code:

        open(LOG, "/bin/zcat $logfile |") or die "Cannot read $logfile: $!\n";
        while (<LOG>) {
            if (m/.*\[svc-\w+\].*login result: Successful\.$/) {
                $_ =~ s/some regex here/$1,$2,$3,$4/;
                push @an_array, $_
            }
        }
        close LOG;

    Can anyone advise what I can do to make the Python solution run at a similar speed to the Perl solution? I've tried just uncompressing the file and dealing with it using open instead of gzip.open, but that made a very small difference to the overall time.


  • Reading a simple Avro file from HDFS

    - by John Galt... who
    I am trying to do a simple read of an Avro file stored in HDFS. I found out how to read it when it is on the local file system:

        FileReader reader = DataFileReader.openReader(new File(filename), new GenericDatumReader());
        for (GenericRecord datum : reader) {
            String value = datum.get(1).toString();
            System.out.println("value = " + value);
        }
        reader.close();

    My file is in HDFS, however. I cannot give the openReader a Path or an FSDataInputStream. How can I simply read an Avro file in HDFS? EDIT: I got this to work by creating a custom class (SeekableHadoopInput) that implements SeekableInput. I "stole" this from "Ganglion" on github. Still, it seems like there would be a Hadoop/Avro integration path for this. Thanks

