Search Results

Search found 12367 results on 495 pages for 'disk io'.


  • close file with fclose() but file still in use

    - by Marco
    Hi all, I've got a problem with deleting/overwriting a file that is also being read by my program. Because my program reads data from the file (output.txt), the file is put in an 'in use' state, which makes it impossible to delete or overwrite it. I don't understand why the file stays 'in use', because I close it after use with fclose(). This is my code:

        bool bBool = true;
        while (bBool) {
            // Run myprogram.exe to generate (a new) output.txt

            // Create file pointer and open file
            FILE* pInputFile = NULL;
            pInputFile = fopen("output.txt", "r");

            // ... then I do some reading using fscanf() ...

            // And when I'm done reading I close the file using fclose()
            fclose(pInputFile);

            // The next step is deleting output.txt
            if (remove("output.txt") == -1) {
                // ERROR
            } else {
                // Successful
            }
        }

    I use fclose() to close the file, but the file remains in use by my program until my program is totally shut down. What is the solution to free the file so it can be deleted/overwritten? In reality my code isn't a loop without an end ;) Thanks in advance! Marco

    Update: As asked, here is a part of my code which also leaves the file 'in use'. This is not a loop, and this function is called from main(). Here is a piece of code:

        int iShapeNr = 0;

        void firstRun() {
            // Run program that generates output.txt
            runProgram();

            // Open shape data file
            FILE* pInputFile = NULL;
            int iNumber = 0;
            pInputFile = fopen("output.txt", "r");

            // Put all orientations of all detected shapes in an array
            int iShapeNr = 0;
            int iRotationBuffer[1024]; // 1024 is maximum detectable shapes, can be changed in RoboRealm
            int iXMinBuffer[1024];
            int iXMaxBuffer[1024];
            int iYMinBuffer[1024];
            int iYMaxBuffer[1024];

            while (feof(pInputFile) == 0) {
                for (int i = 0; i < 9; i++) {
                    fscanf(pInputFile, "%d", &iNumber);
                    fscanf(pInputFile, ",");
                    if (i == 1) {
                        iRotationBuffer[iShapeNr] = iNumber;
                    }
                    if (i == 3) { // xmin
                        iXMinBuffer[iShapeNr] = iNumber;
                    }
                    if (i == 4) { // xmax
                        iXMaxBuffer[iShapeNr] = iNumber;
                    }
                    if (i == 5) { // ymin
                        iYMinBuffer[iShapeNr] = iNumber;
                    }
                    if (i == 6) { // ymax
                        iYMaxBuffer[iShapeNr] = iNumber;
                    }
                }
                iShapeNr++;
            }

            fflush(pInputFile);
            fclose(pInputFile);
        }

    The while loop parses the file. output.txt contains sets of 9 values; the number of sets is unknown, but they always come in groups of 9. For example, output.txt could contain:

        0,1,2,3,4,5,6,7,8,8,7,6,5,4,1,2,3,0
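
    One thing worth ruling out first (a hedged sketch of my own, not Marco's actual fix): if fopen() ever returns NULL, the matching fclose() is skipped, and a handle from an earlier iteration can keep the file 'in use'. The sketch below is the same open/read/close/remove sequence with its return values checked, so a failing step reports why; only the file name output.txt is taken from the question.

        #include <cstdio>
        #include <cerrno>
        #include <cstring>

        // Minimal open/read/close/remove cycle with error checks.
        bool readAndDelete(const char* path)
        {
            FILE* in = std::fopen(path, "r");
            if (in == NULL) {
                std::printf("fopen failed: %s\n", std::strerror(errno));
                return false;               // nothing was opened, so nothing is left in use
            }

            int value = 0;
            while (std::fscanf(in, "%d,", &value) == 1) {
                // ... use value ...
            }

            std::fclose(in);                // close before trying to delete

            if (std::remove(path) != 0) {   // report *why* the delete failed
                std::printf("remove failed: %s\n", std::strerror(errno));
                return false;
            }
            return true;
        }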

    Read the article

  • Execute an Application On The Server Using PHP (With safe_mode enabled)

    - by Nathan Campos
    I have an application on my server called leaf.exe, which takes two arguments, inputfile and outputfile, and is run like this example:

        pnote.exe input.pnt output.txt

    The executable is in exec/, the input file is in upload/ and the output file goes to compiled/. I need PHP to run the application like that, so I want to know: how could I do this on a server that has exec() disabled, which I can't turn on because I don't have the privileges to do so? And how could I echo the output of the program?

    Read the article

  • Out-of-memory algorithms for addressing large arrays

    - by reve_etrange
    I am trying to deal with a very large dataset. I have k = ~4200 matrices (varying sizes) which must be compared combinatorially, skipping non-unique and self comparisons. Each of k(k-1)/2 comparisons produces a matrix, which must be indexed against its parents (i.e. can find out where it came from). The convenient way to do this is to (triangularly) fill a k-by-k cell array with the result of each comparison. These are ~100 x ~100 matrices, on average. Using single precision floats, it works out to 400 GB overall.

    I need to 1) generate the cell array or pieces of it without trying to place the whole thing in memory and 2) access its elements (and their elements) in like fashion. My attempts have been inefficient due to reliance on MATLAB's eval() as well as save and clear occurring in loops.

        for i=1:k
            [~,m] = size(data{i});
            cur_var = ['H' int2str(i)];
            %# if i == 1; save('FileName'); end; %# If using a single MAT file and need to create it.
            eval([cur_var ' = cell(1,k-i);']);
            for j=i+1:k
                [~,n] = size(data{j});
                eval([cur_var '{i,j} = zeros(m,n,''single'');']);
                eval([cur_var '{i,j} = compare(data{i},data{j});']);
            end
            save(cur_var,cur_var); %# Add '-append' when using a single MAT file.
            clear(cur_var);
        end

    The other thing I have done is to perform the split when mod((i+j-1)/2, max(factor(k(k-1)/2))) == 0. This divides the result into the largest number of same-size pieces, which seems logical. The indexing is a little more complicated, but not too bad because a linear index could be used. Does anyone know/see a better way?

    Read the article

  • how to write a floating point value accurately to a bin file.

    - by user319873
    Hi, I am trying to dump floating point values from my program to a bin file. Since I can't use any stdlib function, I am thinking of writing it char by char into a big char array, which my test application then dumps to a file. It's like: with float a = 3132.000001; I will be dumping this to a char array in 4 bytes. A code example would be:

        if((a < 1.0) && (a > 1.0) || (a > -1.0 && a < 0.0))
            a = a*1000000; // 6 bit fraction part.

    Can you please help me write this in a better way?
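
    For the "4 bytes per float" part, one common approach (a sketch of my own, assuming a 4-byte IEEE-754 float and ignoring byte order) is to copy the object representation of the float directly into the char buffer rather than converting it to fixed point, so no precision is lost:

        #include <cstring>

        // Copy the raw bytes of a float into a caller-supplied buffer.
        // Assumes sizeof(float) == 4; byte order is whatever the host uses.
        void floatToBytes(float value, unsigned char out[4])
        {
            std::memcpy(out, &value, sizeof(float));   // bit-for-bit copy, no rounding
        }

        // Reverse direction when reading the binary file back.
        float bytesToFloat(const unsigned char in[4])
        {
            float value;
            std::memcpy(&value, in, sizeof(float));
            return value;
        }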

    Read the article

  • Is using scanf() in C++ programs faster than using cin?

    - by zeroDivisible
    Hello, I don't know if this is true, but when I was reading the FAQ on one of the problem-providing sites, I found something that caught my attention:

        Check your input/output methods. In C++, using cin and cout is too slow. Use these, and you will guarantee not being able to solve any problem with a decent amount of input or output. Use printf and scanf instead.

    Can someone please clarify this? Is using scanf() in C++ programs really faster than using cin? If yes, is it good practice to use it in C++ programs? I thought it was C specific, though I am just learning C++...
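
    A large part of the usual gap comes from iostreams keeping themselves synchronised with C stdio and from cin flushing cout before every read. A minimal sketch of the standard mitigation (these are real standard-library calls, but how much of the gap they close depends on the implementation):

        #include <iostream>

        int main()
        {
            std::ios_base::sync_with_stdio(false); // stop syncing cin/cout with scanf/printf
            std::cin.tie(nullptr);                 // don't flush cout before every cin read

            long long sum = 0;
            int x;
            while (std::cin >> x)   // now usually comparable to scanf on common implementations
                sum += x;

            std::cout << sum << '\n';
            return 0;
        }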

    Read the article

  • Changing brightness in BufferedImage with DataBufferInt

    - by user2958025
    I must read an image and then change its brightness and contrast. I created a main class and a constructor with panels, sliders and other stuff, and I added a ChangeListener to the slider to get the current value. My imagePanel is a new object of this class:

        public class Obrazek extends JPanel {

            public static BufferedImage img = null;

            public Obrazek() {
                super();
                try {
                    img = ImageIO.read(new File("D:\\ja.jpg"));
                } catch (IOException e) {}
            }

            @Override
            public void paint(Graphics g) {
                g.drawImage(img, 0, 0, null);
            }
        }

    This is my load button:

        private void przyciskWczytaj(java.awt.event.ActionEvent evt) {
            int odpowiedz = jFileChooser1.showOpenDialog(this);
            if (odpowiedz == jFileChooser1.APPROVE_OPTION) {
                File file = jFileChooser1.getSelectedFile();
                try {
                    BufferedImage im = ImageIO.read(new File(file.getAbsolutePath()));
                    Obrazek.img = im;
                } catch (IOException ex) {
                    System.out.println("Error");
                }
            }
        }

    Now I want to create a class where I will change the brightness. I have to use this, but I don't know how:

        BufferedImage(256, 256, BufferedImage.TYPE_INT_RGB)

    and to get each pixel of the image I need to do something like:

        int rgb[] = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();

    And here is the next problem: how can I change the value of each r, g, b and show the new image on my panel?

    Read the article

  • R problem with apply + rbind

    - by Carl
    I cannot seem to get the following to work:

        directory <- "./"
        files.15x16 <- c("15x16-70d.out", "15x16-71d.out")
        data.15x16 <- rbind(
            lapply(
                as.array(paste(directory, files.15x16, sep="")),
                FUN=read.csv, sep=" ", header=F))

    What it should be doing is pretty straightforward - I have a directory name, some file names, and actual files of data. I paste the directory and file names together, read the data from the files in, and then rbind them all together into a single chunk of data. Except the result of the lapply has the data in [[]] - i.e., accessing it occurs via a[[1]], a[[2]], etc., which rbind doesn't seem to accept. Suggestions?

    Read the article

  • [FIXED] Scan file contents into an array of a structure.

    - by ZaZu
    Hello, I have a structure in my program that contains a particular array. I want to scan a random file with numbers and put the contents into that array. This is my code (NOTE: this is a sample from a bigger program, so I need the structure and arrays as declared). The contents of the file are basically:

        5 4 3 2 5 3 4 2

    My code:

        #include <stdio.h>

        #define first 500
        #define sec 500

        struct trial {
            int f;
            int r;
            float what[first][sec];
        };

        int trialtest(trial *test);

        main() {
            trial test;
            trialtest(&test);
        }

        int trialtest(trial *test) {
            int z, x, i;
            FILE *fin;
            fin = fopen("randomfile.txt", "r");
            for (i = 0; i < 5; i++) {
                fscanf(fin, "%5.2f\t", (*test).what[z][x]);
            }
            fclose(fin);
            return 0;
        }

    But the problem is, whenever I run this code, I get this error:

        (25) : warning 508 - Data of type 'double' supplied where a pointer is required

    I tried adding:

        do {
            for (i = 0; i < 5; i++) {
                q = fscanf(fin, "%5.2f\t", (*test).what[z][x]);
            }
        } while (q != EOF);

    But that didn't work either; it gives the same error. Does anyone have a solution to this problem?
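
    For comparison, a hedged sketch of how that read loop is usually written (my own illustration against the struct from the post, not the accepted fix): fscanf needs a pointer argument, scanf-style conversions take a plain %f rather than %5.2f, and the z/x indices must be given values before they are used.

        /* assumes the struct from the post:
           struct trial { int f; int r; float what[first][sec]; }; */
        #include <stdio.h>

        int readTrial(struct trial *test, const char *path)
        {
            FILE *fin = fopen(path, "r");
            if (fin == NULL)
                return -1;

            int count = 0;
            /* fill row 0, columns 0..4, stopping early if a value can't be parsed */
            for (int x = 0; x < 5; x++) {
                if (fscanf(fin, "%f", &test->what[0][x]) != 1)   /* pointer + plain %f */
                    break;
                count++;
            }

            fclose(fin);
            return count;   /* how many values were actually read */
        }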

    Read the article

  • C: Remove the first line from a text file without rewriting the file

    - by Tom Van den Bon
    Hi, I've got a service which runs all the time and also keeps a log file. It basically adds new lines to the log file every few seconds. I've written a small program which reads these lines and then parses them into various actions. The question I have is: how can I delete the lines which I have already parsed from the log file without disrupting the writing of the log file by the service?

    Usually when I need to delete a line in a file, I open the original one and a temporary one, and then I just write all the lines to the temp file except the original one which I want to delete. Obviously this method will not work here. So how do I go about deleting them?
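
    On most filesystems there is no way to chop bytes off the front of a file in place, so one common workaround (a sketch of my own, with a made-up offset-file name, not a drop-in answer) is to leave the log untouched and instead remember how far you have already parsed, seeking back to that byte offset on the next run:

        #include <fstream>
        #include <string>

        // Parse only the lines appended since the last run; the log itself is never
        // modified, so the writing service is not disturbed.
        void parseNewLines(const char* logPath, const char* offsetPath)
        {
            std::streamoff offset = 0;
            std::ifstream state(offsetPath);
            if (state) state >> offset;                  // where we stopped last time

            std::ifstream log(logPath);
            if (!log) return;
            log.seekg(offset);

            std::string line;
            while (std::getline(log, line)) {
                // ... parse `line` into actions ...
                std::streamoff pos = log.tellg();
                if (pos != -1) offset = pos;             // end of the last complete line
            }

            std::ofstream(offsetPath, std::ios::trunc) << offset;
        }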

    Read the article

  • How can I speed up line by line reading of an ASCII file? (C++)

    - by Jon
    Here's a bit of code that is a considerable bottleneck after doing some measuring:

        //-----------------------------------------------------------------------------
        // Construct dictionary hash set from dictionary file
        //-----------------------------------------------------------------------------
        void constructDictionary(unordered_set<string> &dict)
        {
            ifstream wordListFile;
            wordListFile.open("dictionary.txt");

            string word;
            while( wordListFile >> word )
            {
                if( !word.empty() )
                {
                    dict.insert(word);
                }
            }

            wordListFile.close();
        }

    I'm reading in ~200,000 words and this takes about 240 ms on my machine. Is the use of ifstream here efficient? Can I do better? I'm reading about mmap() implementations but I'm not understanding them 100%. The input file is simply text strings with *nix line terminations.
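
    One frequently suggested variant (a sketch of my own, not a measured fix) is to pull the whole file into memory with a single bulk read and reserve the hash set up front, so the per-word work is only tokenising and inserting:

        #include <fstream>
        #include <iterator>
        #include <sstream>
        #include <string>
        #include <unordered_set>

        // Read the whole dictionary in one go, then split it into words.
        void constructDictionary(std::unordered_set<std::string>& dict)
        {
            std::ifstream in("dictionary.txt", std::ios::binary);
            if (!in) return;

            std::string contents((std::istreambuf_iterator<char>(in)),
                                 std::istreambuf_iterator<char>());

            dict.reserve(200000);                 // ~word count mentioned in the post

            std::istringstream words(contents);
            std::string word;
            while (words >> word)
                dict.insert(std::move(word));     // moved-from string is reset by the next >>
        }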

    Read the article

  • C++ File manipulation problem

    - by Carlucho
    I am trying to open files which normally have content. For testing purposes I would like to start the program without the files being available/existing, in which case the program should create empty ones, but I am having issues implementing that. This is my code originally:

        void loadFiles()
        {
            fstream city;
            city.open("city.txt", ios::in);

            fstream latitude;
            latitude.open("lat.txt", ios::in);

            fstream longitude;
            longitude.open("lon.txt", ios::in);

            while(!city.eof()){
                city >> cityName;
                latitude >> lat;
                longitude >> lon;
                t.add(cityName, lat, lon);
            }

            city.close();
            latitude.close();
            longitude.close();
        }

    I have tried everything I can think of: ofstream, ifstream, adding ios::out and all its variations. Could anybody explain what to do in order to fix the problem? Thanks!
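
    A sketch of one way to get "create the file if it is missing" behaviour (my own illustration; the helper name is made up): try the read-open first, and only when it fails touch the file with an ofstream before reopening it for reading.

        #include <fstream>
        #include <string>

        // Open `path` for reading; if it doesn't exist yet, create an empty file first.
        std::ifstream openOrCreate(const std::string& path)
        {
            std::ifstream in(path);
            if (!in) {
                std::ofstream(path).flush();   // creates an empty file on disk
                in.open(path);                 // reopen for reading (succeeds, but is empty)
            }
            return in;
        }

        // Usage sketch:
        //   std::ifstream city = openOrCreate("city.txt");
        //   while (city >> cityName) { ... }   // also avoids the eof()-controlled loop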

    Read the article

  • searching within a compressed sorted fixed width file

    - by user275455
    Assume I have a regular compressed fixed-width file that is sorted on one of the fields. Given that I know the length of the records, I can use lseek to implement a binary search to find records whose fields match a given value, without having to read the entire file. Now the difficulty is that the file is gzipped. Is it possible to do this without completely inflating the file? If not with gzip, is there any compression that supports this kind of behavior?
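
    For reference, a sketch of the uncompressed baseline the question describes (my own illustration, using portable fseek/fread rather than lseek): whatever compression scheme is chosen has to support the equivalent of the seek in the middle of this loop.

        #include <cstdio>
        #include <string>

        // Binary search in a sorted fixed-width file. recordLen includes any newline;
        // keyOffset/keyLen describe where the sort key sits inside a record.
        long findRecord(std::FILE* f, long recordCount, long recordLen,
                        long keyOffset, long keyLen, const std::string& key)
        {
            std::string buf(keyLen, '\0');
            long lo = 0, hi = recordCount - 1;
            while (lo <= hi) {
                long mid = lo + (hi - lo) / 2;
                std::fseek(f, mid * recordLen + keyOffset, SEEK_SET);  // jump straight to the key
                if (std::fread(&buf[0], 1, keyLen, f) != (size_t)keyLen)
                    return -1;                       // short read: treat as not found
                int cmp = buf.compare(key);
                if (cmp == 0) return mid;            // index of the matching record
                if (cmp < 0)  lo = mid + 1;
                else          hi = mid - 1;
            }
            return -1;                               // not found
        }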

    Read the article

  • write 2d array to a file in C (Operating system)

    - by Bobj-C
    Hello All, I used to use the code below to write a 1D array to a file:

        FILE *fp;
        float floatValue[5] = { 1.1F, 2.2F, 3.3F, 4.4F, 5.5F };
        int i;

        if((fp=fopen("test", "wb"))==NULL) {
            printf("Cannot open file.\n");
        }

        if(fwrite(floatValue, sizeof(float), 5, fp) != 5)
            printf("File read error.");

        fclose(fp);

        /* read the values */
        if((fp=fopen("test", "rb"))==NULL) {
            printf("Cannot open file.\n");
        }

        if(fread(floatValue, sizeof(float), 5, fp) != 5) {
            if(feof(fp))
                printf("Premature end of file.");
            else
                printf("File read error.");
        }

        fclose(fp);

        for(i=0; i<5; i++)
            printf("%f ", floatValue[i]);

    My question is: what if I want to write and read a 2D array?
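
    Since a fixed-size 2D array in C is stored contiguously, the same fwrite/fread calls work when the element count is rows*cols. A minimal sketch along the lines of the 1D version above (file name and dimensions are just placeholders):

        #include <stdio.h>

        #define ROWS 3
        #define COLS 4

        int main(void)
        {
            float values[ROWS][COLS] = {{0}};
            FILE *fp;

            /* write: one fwrite covers the whole contiguous 2D array */
            if ((fp = fopen("test2d", "wb")) == NULL) return 1;
            if (fwrite(values, sizeof(float), ROWS * COLS, fp) != ROWS * COLS)
                printf("File write error.");
            fclose(fp);

            /* read it back the same way */
            if ((fp = fopen("test2d", "rb")) == NULL) return 1;
            if (fread(values, sizeof(float), ROWS * COLS, fp) != ROWS * COLS)
                printf("File read error.");
            fclose(fp);

            return 0;
        }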

    Read the article

  • Write file needs to be optimised for heavy traffic, part 2

    - by Clayton Leung
    For anyone interested in where I'm coming from, you can refer to part 1, "write file need to optimised for heavy traffic", but it is not necessary.

    Below is a snippet of code I have written to capture some financial tick data from the broker API. The code will run without error. I need to optimize the code, because in peak hours the zf_TickEvent method will be called more than 10000 times a second. I use a MemoryStream to hold the data until it reaches a certain size, then I output it into a text file. The broker API is only single threaded.

        void zf_TickEvent(object sender, ZenFire.TickEventArgs e)
        {
            outputString = string.Format("{0},{1},{2},{3},{4}\r\n",
                e.TimeStamp.ToString(timeFmt),
                e.Product.ToString(),
                Enum.GetName(typeof(ZenFire.TickType), e.Type),
                e.Price,
                e.Volume);
            fillBuffer(outputString);
        }

        public class memoryStreamClass
        {
            public static MemoryStream ms = new MemoryStream();
        }

        void fillBuffer(string outputString)
        {
            byte[] outputByte = Encoding.ASCII.GetBytes(outputString);
            memoryStreamClass.ms.Write(outputByte, 0, outputByte.Length);
            if (memoryStreamClass.ms.Length > 8192)
            {
                emptyBuffer(memoryStreamClass.ms);
                memoryStreamClass.ms.SetLength(0);
                memoryStreamClass.ms.Position = 0;
            }
        }

        void emptyBuffer(MemoryStream ms)
        {
            FileStream outStream = new FileStream("c:\\test.txt", FileMode.Append);
            ms.WriteTo(outStream);
            outStream.Flush();
            outStream.Close();
        }

    Questions: Any suggestion to make this even faster? I will try to vary the buffer length, but in terms of code structure, is this (almost) the fastest? When the MemoryStream is filled up and I am emptying it to the file, what happens to the new data coming in? Do I need to implement a second buffer to hold that data while I am emptying my first buffer, or is C# smart enough to figure it out? Thanks for any advice.

    Read the article

  • Program crashes after trying to use a recently created file. C#

    - by Jason T.
    So here is my code:

        if (!File.Exists(pathName))
        {
            File.Create(pathName);
        }

        StreamWriter outputFile = new StreamWriter(pathName, true);

    Whenever I run the program for the first time, the path with the file gets created. However, once I get to the StreamWriter line, my program crashes because it says my file is in use by another process. Is there something I'm missing between the File.Create and the StreamWriter statements?

    Read the article

  • StreamReader not working as expected

    - by Jon Preece
    Hi, I have written a simple utility that loops through all C# files in my project and updates the copyright text at the top. For example, a file may look like this:

        //Copyright My Company, © 2009-2010

    The program should update the text to look like this:

        //Copyright My Company, © 2009-2011

    However, the code I have written results in this:

        //Copyright My Company, � 2009-2011

    Here is the code I am using:

        public bool ModifyFile(string filePath, List<string> targetText, string replacementText)
        {
            if (!File.Exists(filePath)) return false;
            if (targetText == null || targetText.Count == 0) return false;
            if (string.IsNullOrEmpty(replacementText)) return false;

            string modifiedFileContent = string.Empty;
            bool hasContentChanged = false;

            //Read in the file content
            using (StreamReader reader = File.OpenText(filePath))
            {
                string file = reader.ReadToEnd();

                //Replace any target text with the replacement text
                foreach (string text in targetText)
                    modifiedFileContent = file.Replace(text, replacementText);

                if (!file.Equals(modifiedFileContent))
                    hasContentChanged = true;
            }

            //If we haven't modified the file, dont bother saving it
            if (!hasContentChanged) return false;

            //Write the modifications back to the file
            using (StreamWriter writer = new StreamWriter(filePath))
            {
                writer.Write(modifiedFileContent);
            }

            return true;
        }

    Any help/suggestions are appreciated. Thanks!

    Read the article

  • Read a file 2048 bytes at a time

    - by Suresh S
    Guys, I have a file which has only one line. The file has no encoding; it is a simple text file with a single line. For every 2048 bytes in the line, there is a new record of 151 bytes (13 records of 151 bytes = 1963 bytes, plus 85 bytes of empty space), and similarly for the next 2048 bytes. What is the best file I/O to use? I am thinking of reading 2048 bytes from the file and storing them in an array:

        while (offset < fileLength
                && (numRead = in.read(recordChunks, offset, alength)) >= 0) {
        }

    How can I get only 2048 bytes at a time from the read statement? I am getting an IndexOutOfBoundsException.

    Read the article

  • Any alternate way of writing to a file other than ofstream

    - by Aditya
    Hi All, I am performing file operations (writeToFile) which fetch data from an XML file and write it into an output file (a1.txt). I am using MS Visual C++ 2008 on Windows XP. Currently I am writing the output file like this:

        ofstream hdrOutputFile;
        /* few other stmts */
        hdrOutputFile.open(fileName, std::ios::out);

        hdrOutputFile << "#include \"commondata.h\"" << endl;
        hdrOutputFile << "#include \"Commonconfig.h\"" << endl;
        hdrOutputFile << "#include \"commontable.h\"" << endl << endl;
        hdrOutputFile << "#pragma pack(push,1)" << endl;
        hdrOutputFile << "typedef struct \n {" << endl;
        /* similar hdrOutputFile statements... */

    I have around 250 lines to write. Is there any better way to perform this task? I want to get rid of the repeated hdrOutputFile statements and use a buffer to do this. Please guide me how to do that. I mean something like:

        buff = "#include \"commontable.h\"" + "typedef struct \n {" + .......
        hdrOutputFile << buff;

    Is this way possible? Thanks, Ramm
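
    One way to do the buffering described above is an ostringstream (a sketch of the idea, not the only option): append all the text to the in-memory stream, then write its contents to the file with a single <<.

        #include <fstream>
        #include <sstream>
        #include <string>

        // Build the whole header in memory, then write it out in one shot.
        void writeHeader(const std::string& fileName)
        {
            std::ostringstream buff;
            buff << "#include \"commondata.h\"\n"
                 << "#include \"Commonconfig.h\"\n"
                 << "#include \"commontable.h\"\n\n"
                 << "#pragma pack(push,1)\n"
                 << "typedef struct\n{\n";
            // ... the remaining lines are appended to `buff` the same way ...

            std::ofstream hdrOutputFile(fileName.c_str());
            hdrOutputFile << buff.str();   // single write of the accumulated text
        }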

    Read the article
