Search Results

Search found 4688 results on 188 pages for 'io redirection'.

Page 45 of 188

  • What is a good way of coding a file processing program, which accepts multisource data in Java

    - by jjepsuomi
    I'm making a data processing system which currently uses CSV data as its input and output format. In the future I might want to add support for, for example, database, XML, etc. input and output formats. How should I design my program so that it is easy to add support for new types of data sources? Should I simply make, for example, an abstract data class (which would contain the basic file processing methods) and then inherit from it for the database, XML, etc. cases? Hope my question is clear =) In other words, my question is: "How do I design a file processing system which can easily be updated to accept input data from different sources (database, XML, Excel, etc.)?"
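
    A minimal, hypothetical sketch of the interface-based design the question is circling around (names such as RecordReader and CsvRecordReader are illustrative, not from the question): the processing code depends only on small reader/writer interfaces, so database- or XML-backed implementations can be added later without touching it.

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.ArrayList;
        import java.util.List;

        interface RecordReader {
            List<String[]> readRecords() throws Exception;           // one String[] per record
        }

        interface RecordWriter {
            void writeRecords(List<String[]> records) throws Exception;
        }

        class CsvRecordReader implements RecordReader {
            private final Path path;
            CsvRecordReader(Path path) { this.path = path; }
            public List<String[]> readRecords() throws Exception {
                List<String[]> records = new ArrayList<>();
                for (String line : Files.readAllLines(path)) {
                    records.add(line.split(","));                    // naive CSV split, enough for a sketch
                }
                return records;
            }
        }

        class Processor {
            // The processing logic never sees CSV, XML or JDBC details.
            void process(RecordReader in, RecordWriter out) throws Exception {
                List<String[]> records = in.readRecords();
                // ... transform the records here ...
                out.writeRecords(records);
            }
        }

    A DatabaseRecordReader or XmlRecordReader would then only have to implement readRecords().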

    Read the article

  • Accessing a file (for writing) from a JBoss Web Service

    - by Andreas Grech
    Let's say I have this structure in my Java web application: TheProject -- [Web Pages] -- -- abc.txt -- -- index.jsp -- [Source Packages] -- -- [wservices] -- -- -- WS.java WS.java is my web service, which lives in the wservices package. Now, from this service, I need to access the abc.txt file and write to it. These are my URLs: http://127.0.0.1:8080/TheProject/WS <- the web service http://127.0.0.1:8080/TheProject/abc.txt <- the file I want to access To read the file, I tried getResourceAsStream and was able to read from it successfully. But now I also want to write to this file, and what I tried failed. Is there a way I can access the abc.txt file from WS.java and successfully read from and write to it?
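
    One approach, sketched below under the assumption that this is a JAX-WS service in an exploded deployment, is to resolve the real filesystem path of the resource through the servlet context (getResourceAsStream only gives read access). Method and variable names are illustrative.

        import java.io.File;
        import java.io.FileWriter;
        import java.io.IOException;
        import javax.annotation.Resource;
        import javax.jws.WebMethod;
        import javax.jws.WebService;
        import javax.servlet.ServletContext;
        import javax.xml.ws.WebServiceContext;
        import javax.xml.ws.handler.MessageContext;

        @WebService
        public class WS {
            @Resource
            private WebServiceContext wsContext;                  // injected by the container

            @WebMethod
            public void appendLine(String line) throws IOException {
                ServletContext sc = (ServletContext) wsContext.getMessageContext()
                        .get(MessageContext.SERVLET_CONTEXT);
                String path = sc.getRealPath("/abc.txt");         // null if the war is not exploded
                try (FileWriter writer = new FileWriter(new File(path), true)) {
                    writer.write(line + System.lineSeparator());
                }
            }
        }

    Keep in mind that anything written inside the deployment is lost on redeploy; data that must survive is usually better written to an external, configured directory.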

    Read the article

  • Need to have JProgress bar to measure progress when copying directories and files

    - by user1815823
    I have the code below to copy directories and files, but I'm not sure where to measure the progress. Can someone help with where I can measure how much has been copied and show it in the JProgressBar? public static void copy(File src, File dest) throws IOException{ if(src.isDirectory()){ if(!dest.exists()){ //checking whether destination directory exists dest.mkdir(); System.out.println("Directory copied from " + src + " to " + dest); } String files[] = src.list(); for (String file : files) { File srcFile = new File(src, file); File destFile = new File(dest, file); copy(srcFile,destFile); } }else{ InputStream in = new FileInputStream(src); OutputStream out = new FileOutputStream(dest); byte[] buffer = new byte[1024]; int length; while ((length = in.read(buffer)) > 0){ out.write(buffer, 0, length); } in.close(); out.close(); System.out.println("File copied from " + src + " to " + dest); } }
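
    One common pattern, sketched below with illustrative names: sum the total size up front, run the copy inside a SwingWorker, and publish a 0-100 percentage through setProgress() so a PropertyChangeListener can update the JProgressBar on the event dispatch thread.

        import java.io.*;
        import javax.swing.SwingWorker;

        class CopyTask extends SwingWorker<Void, Void> {
            private final File src, dest;
            private long totalBytes, copiedBytes;

            CopyTask(File src, File dest) { this.src = src; this.dest = dest; }

            private long sizeOf(File f) {                        // total bytes, computed before copying
                if (!f.isDirectory()) return f.length();
                long sum = 0;
                for (File child : f.listFiles()) sum += sizeOf(child);
                return sum;
            }

            @Override protected Void doInBackground() throws IOException {
                totalBytes = Math.max(1, sizeOf(src));           // avoid division by zero
                copy(src, dest);
                return null;
            }

            private void copy(File s, File d) throws IOException {
                if (s.isDirectory()) {
                    if (!d.exists()) d.mkdir();
                    for (String name : s.list()) copy(new File(s, name), new File(d, name));
                } else {
                    try (InputStream in = new FileInputStream(s);
                         OutputStream out = new FileOutputStream(d)) {
                        byte[] buffer = new byte[8192];
                        int length;
                        while ((length = in.read(buffer)) > 0) {
                            out.write(buffer, 0, length);
                            copiedBytes += length;
                            setProgress((int) Math.min(100, 100 * copiedBytes / totalBytes));  // 0-100 for the bar
                        }
                    }
                }
            }
        }

    Wiring it up: create the task, add a PropertyChangeListener that calls progressBar.setValue((Integer) evt.getNewValue()) whenever the "progress" property changes, then call task.execute().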

    Read the article

  • fopen() fails to open stream: permission denied, yet permissions should be valid

    - by about blank
    So, I have this error: Warning: fopen(/path/to/test-in.txt) [function.fopen]: failed to open stream: Permission denied Performing ls -l in the directory where test-in.txt is produces the following output: -rw-r--r-- 1 $USER $USER 1921 Sep 6 20:09 test-in.txt -rw-r--r-- 1 $USER $USER 0 Sep 6 20:08 test-out.txt In order to get past this, I decided to perform the following: chgrp -R www-data /path/to/php/webroot And then did: chmod g+rw /path/to/php/webroot Yet I still get this error when I run my PHP 5 script to open the file. Why is this happening? I've tried this using LAMP as well as Cherokee through CGI, so it can't be the web server itself. Is there a solution of some sort? Edit: I'll also add that I'm just developing via localhost right now. Update - the PHP fopen() line: $fullpath = $this->fileRoot . $this->fileInData['fileName']; $file_ptr = fopen( $fullpath, 'r+' ); I should also mention I'd like to stick with Cherokee if possible. What's the deal with setting file permissions for Apache/Cherokee?

    Read the article

  • In OCaml, how can I create an out_channel which writes to a string/buffer instead of a file on disk

    - by Tianyi Cui
    I have a function of type in_channel -> out_channel -> unit which will output something to an out_channel. Now I'd like to get its output as a string. Creating temporary files to write and read it back seems ugly, so how can I do that? Are there any other methods to create an out_channel besides the Pervasives.open_out family? Actually, this function implements a REPL. What I really need is to test it programmatically, so I'd like to first wrap it in a function of type string -> string. For creating the in_channel, it seems I can use Scanf.Scanning.from_string, but I don't know how to create the out_channel parameter.
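
    The standard library has no Buffer-backed out_channel, so short of changing the function to take a Format.formatter or Buffer, one workable test harness routes both channels through temporary files. A sketch of a string -> string wrapper (really_input_string needs OCaml 4.02 or later):

        (* Feed [input] to [f] and return whatever [f] wrote to its out_channel. *)
        let with_string_channels (f : in_channel -> out_channel -> unit) (input : string) : string =
          let in_name  = Filename.temp_file "repl" ".in" in
          let out_name = Filename.temp_file "repl" ".out" in
          (* materialise the input as a file so it can be opened as an in_channel *)
          let oc = open_out in_name in
          output_string oc input;
          close_out oc;
          (* run the function under test *)
          let ic = open_in in_name in
          let oc = open_out out_name in
          f ic oc;
          close_out oc;
          close_in ic;
          (* read the captured output back *)
          let ic = open_in out_name in
          let result = really_input_string ic (in_channel_length ic) in
          close_in ic;
          Sys.remove in_name;
          Sys.remove out_name;
          result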

    Read the article

  • C: Reading file with a starting point

    - by Shinka
    A simple question, but I can't find the answer in my book. I want to read a binary file to seed a random number generator, but I don't want to seed my generator with the same seed each time I call the function, so I will need to keep a variable for my position in the file (not a problem) and I would need to know how to read a file starting at a specific point in the file (no idea how). The code: void rng_init(RNG* rng) { // ... FILE *input = fopen("random.bin", "rb"); unsigned int seed[32]; fread(seed, sizeof(unsigned int), 32, input); // seed 'rng'... fclose(input); }
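
    A sketch of the seeking part: fseek() moves to an absolute byte offset and ftell() reports the position afterwards, so the reader can remember where it stopped (here in a static variable; the file name and seed size follow the question):

        #include <stdio.h>

        /* Reads the next 32 unsigned ints from random.bin, resuming where the
           previous call left off.  Returns how many values were actually read. */
        size_t read_next_seed(unsigned int seed[32]) {
            static long next_offset = 0;             /* position for the next read */
            FILE *input = fopen("random.bin", "rb");
            size_t got = 0;

            if (input != NULL) {
                fseek(input, next_offset, SEEK_SET);  /* jump to the saved position */
                got = fread(seed, sizeof(unsigned int), 32, input);
                next_offset = ftell(input);           /* remember it for the next call */
                fclose(input);
            }
            return got;
        }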

    Read the article

  • stdio data from write not making it into a file

    - by user1551209
    I'm having a problem with using stdio commands for manipulating data in a file. In short, when I write data into a file, write returns an int indicating that it was successful, but when I read it back out I only get the old data. Here's a stripped-down version of the code: fd = open(filename,O_RDWR|O_APPEND); struct dE *cDE = malloc(sizeof(struct dE)); //Read present data printf("\nreading values at %d\n",off); printf("SeekStatus <%d>\n",lseek(fd,off,SEEK_SET)); printf("ReadStatus <%d>\n",read(fd,cDE,deSize)); printf("current Key/Data <%d/%s>\n",cDE->key,cDE->data); printf("\nwriting new values\n"); //Change the values locally cDE->key = //something new cDE->data = //something new //Write them back printf("SeekStatus <%d>\n",lseek(fd,off,SEEK_SET)); printf("WriteStatus <%d>\n",write(fd,cDE,deSize)); //Re-read to make sure that it got written back printf("\nre-reading values at %d\n",off); printf("SeekStatus <%d>\n",lseek(fd,off,SEEK_SET)); printf("ReadStatus <%d>\n",read(fd,cDE,deSize)); printf("current Key/Data <%d/%s>\n",cDE->key,cDE->data); Furthermore, here's the dE struct in case you're wondering: struct dE { int key; char data[DataSize]; }; This prints: reading values at 1072 SeekStatus <1072> ReadStatus <32> current Key/Data <27/old> writing new values SeekStatus <1072> WriteStatus <32> re-reading values at 1072 SeekStatus <1072> ReadStatus <32> current Key/Data <27/old>
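
    A likely culprit, going by the flags shown: O_APPEND makes every write() land at the end of the file regardless of any preceding lseek(), so the record in the middle is never overwritten and re-reading returns the old bytes. A sketch of a positioned update without O_APPEND, using pwrite() to write at an explicit offset:

        #include <sys/types.h>
        #include <fcntl.h>
        #include <unistd.h>

        /* Overwrites 'len' bytes at offset 'off'; returns 0 on success, -1 on error. */
        int update_record(const char *filename, off_t off, const void *buf, size_t len) {
            int fd = open(filename, O_RDWR);          /* note: no O_APPEND */
            if (fd < 0)
                return -1;
            ssize_t written = pwrite(fd, buf, len, off);
            close(fd);
            return written == (ssize_t) len ? 0 : -1;
        }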

    Read the article

  • Seeking to a line in a file in g++

    - by Phenom
    Is there a way that I can seek to a certain line in a file to read or write data? Let's say I want to write some data starting on the 10th line in a text file. There might be some data already in the first few lines, or the file could even be empty. Is there a way I can seek directly to the line I want without having to worry about what's already in the file?
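
    There is no direct line-level seek, because lines have variable length: the usual approach is to skip lines one at a time and use the resulting byte offset. A minimal sketch (note that seekp() followed by operator<< overwrites bytes in place rather than inserting, and a file with fewer than the requested number of lines would first need newlines written to pad it out):

        #include <fstream>
        #include <string>

        // Returns the byte offset at which the given 1-based line starts.
        std::streampos offset_of_line(std::fstream &fs, int line) {
            fs.seekg(0, std::ios::beg);
            std::string skipped;
            for (int i = 1; i < line && std::getline(fs, skipped); ++i) {
                // just consuming lines 1 .. line-1
            }
            return fs.tellg();
        }

        int main() {
            std::fstream fs("data.txt", std::ios::in | std::ios::out);
            fs.seekp(offset_of_line(fs, 10));    // position the write pointer at line 10
            fs << "some data";                   // overwrites whatever bytes were there
        }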

    Read the article

  • Reading files with Java

    - by sikas
    I would like to know how I can read a file byte by byte and then perform some operation every n bytes. For example: say I have a file of size = 50 bytes, and I want to divide it into blocks each of n bytes. Then each block is sent to a function for some operations to be done on those bytes. The blocks are to be created during the read process and sent to the function when the block reaches n bytes, so that I don't use much memory for storing all blocks. I want the output of the function to be written/appended to a new file. This is what I've got so far for reading, but I don't know if it is right: fc = new JFileChooser(); File f = fc.getSelectedFile(); FileInputStream in = new FileInputStream(f); byte[] b = new byte[16]; in.read(b); I haven't done anything yet for the write process.
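
    A sketch of the block-wise loop (n, the file names and processBlock() are placeholders for the asker's own values and operation); the last block is trimmed with Arrays.copyOf because read() reports how many bytes it actually filled:

        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.Arrays;

        public class BlockProcessor {
            static byte[] processBlock(byte[] block) {
                return block;                           // stand-in for the real per-block operation
            }

            public static void main(String[] args) throws IOException {
                int n = 16;                             // block size in bytes
                try (FileInputStream in = new FileInputStream("input.bin");
                     FileOutputStream out = new FileOutputStream("output.bin")) {
                    byte[] block = new byte[n];
                    int read;
                    while ((read = in.read(block)) != -1) {
                        byte[] exact = (read == n) ? block : Arrays.copyOf(block, read);
                        out.write(processBlock(exact)); // each block's result goes to the output file
                    }
                }
            }
        }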

    Read the article

  • Writing Strings to files in python

    - by Leif Andersen
    I'm getting the following error when trying to write a string to a file in Python: Traceback (most recent call last): File "export_off.py", line 264, in execute save_off(self.properties.path, context) File "export_off.py", line 244, in save_off primary.write(file) File "export_off.py", line 181, in write variable.write(file) File "export_off.py", line 118, in write file.write(self.value) TypeError: must be bytes or buffer, not str I basically have a string class, which contains a string: class _off_str(object): __slots__ = 'value' def __init__(self, val=""): self.value=val def get_size(self): return SZ_SHORT def write(self,file): file.write(self.value) def __str__(self): return str(self.value) Furthermore, I'm calling that class like this: def write(self, file): for variable in self.variables: variable.write(file) I have no idea what is going on. I've seen other Python programs writing strings to files, so why can't this one? Thank you very much for your help.
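
    The traceback points at a file object opened in binary mode, where Python 3 only accepts bytes. A short sketch of the two usual fixes - open the file in text mode, or encode the string before writing:

        value = "some text"

        # Option 1: open in text mode and write the str directly
        with open("out.txt", "w", encoding="utf-8") as f:
            f.write(value)

        # Option 2: keep binary mode and encode the str to bytes
        with open("out.bin", "wb") as f:
            f.write(value.encode("utf-8"))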

    Read the article

  • Is there an easier way to reconcile a list of files and a directory with subfolders/files to find ch

    - by rwmnau
    I have a SQL Server table with a list of files (path + filename), and a folder with multiple layers and files in each layer. I'm looking for a way to reconcile the two without having to process the list twice. Currently, I'm doing this: For Each f as FileInfo In FileListFromDatabase If f.Exists is False, mark it as deleted in the database Next For Each f as FileInfo In RecursiveListOFFilesOnDisk If Not FileExistsInDatabase, then add it Next Is there a better way to do this? I'd like to avoid converting all the matching files (of which most will be) to FileInfo objects twice. Since I'm a T-SQL developer first, I'm picturing something like an OUTER JOIN of the two lists where they don't match. Something LINQ-ish?

    Read the article

  • Write file need to optimised for heavy traffic part 2

    - by Clayton Leung
    For anyone interested in where I'm coming from, you can refer to part 1 ("write file need to optimised for heavy traffic"), but it is not necessary. Below is a snippet of code I have written to capture some financial tick data from the broker API. The code runs without error. I need to optimize it, because in peak hours the zf_TickEvent method will be called more than 10000 times a second. I use a MemoryStream to hold the data until it reaches a certain size, then I output it into a text file. The broker API is only single-threaded. void zf_TickEvent(object sender, ZenFire.TickEventArgs e) { outputString = string.Format("{0},{1},{2},{3},{4}\r\n", e.TimeStamp.ToString(timeFmt), e.Product.ToString(), Enum.GetName(typeof(ZenFire.TickType), e.Type), e.Price, e.Volume); fillBuffer(outputString); } public class memoryStreamClass { public static MemoryStream ms = new MemoryStream(); } void fillBuffer(string outputString) { byte[] outputByte = Encoding.ASCII.GetBytes(outputString); memoryStreamClass.ms.Write(outputByte, 0, outputByte.Length); if (memoryStreamClass.ms.Length > 8192) { emptyBuffer(memoryStreamClass.ms); memoryStreamClass.ms.SetLength(0); memoryStreamClass.ms.Position = 0; } } void emptyBuffer(MemoryStream ms) { FileStream outStream = new FileStream("c:\\test.txt", FileMode.Append); ms.WriteTo(outStream); outStream.Flush(); outStream.Close(); } Question: Any suggestions to make this even faster? I will try varying the buffer length, but in terms of code structure, is this (almost) the fastest? When the MemoryStream is filled up and I am emptying it to the file, what happens to the new data coming in? Do I need to implement a second buffer to hold that data while I am emptying my first buffer? Or is C# smart enough to figure it out? Thanks for any advice.

    Read the article

  • Loading specific files from arbitrary directories?

    - by Haydn V. Harach
    I want to load foo.txt. foo.txt might exist in the data/bar/ directory, or it might exist in the data/New Folder/ directory. There might be a different foo.txt in both of these directories, in which case I would want to either load one and ignore the other according to some order that I've sorted the directories by (perhaps manually, perhaps by date of creation), or else load them both and combine the results somehow. The latter (combining the results of both/all foo.txt files) is circumstantial and beyond the scope of this question, but something I want to be able to do in the future. I'm using SDL and boost::filesystem. I want to keep my list of dependencies as small as possible, and as cross-platform as possible. I'm guessing that my best bet would be to get a list of every directory (within the data/ folder), sort/filter this list, and then, when I go to load foo.txt, search for it in each potential directory? This sounds like it would be very inefficient if I have dozens of potential directories to search through every time. What's the best way to go about accomplishing this? Bonus: What if I want some of the directories to be archives? I.e. considering both data/foo/ and data/bar.zip to be valid, and pulling foobar.txt from either one without caring.
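
    One way to avoid re-scanning dozens of directories on every load is to walk data/ a single time with boost::filesystem, record the first (highest-priority) hit for each file name in a map, and resolve every later request from that map. A sketch, where the priority order is simply whatever the caller sorted search_dirs by:

        #include <boost/filesystem.hpp>
        #include <map>
        #include <string>
        #include <vector>

        namespace fs = boost::filesystem;

        std::map<std::string, fs::path> build_index(const std::vector<fs::path> &search_dirs) {
            std::map<std::string, fs::path> index;
            for (const fs::path &dir : search_dirs) {                 // already sorted by priority
                for (fs::recursive_directory_iterator it(dir), end; it != end; ++it) {
                    if (fs::is_regular_file(it->status())) {
                        const std::string name = it->path().filename().string();
                        index.insert(std::make_pair(name, it->path()));   // insert keeps the first hit
                    }
                }
            }
            return index;
        }

        // Usage: build the index once at startup, then
        //   auto hit = index.find("foo.txt");
        //   if (hit != index.end()) load(hit->second);

    Archives could later plug into the same scheme by registering each archive entry in the same map during the scan, so the lookup code never needs to care where the file physically lives.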

    Read the article

  • Exciting new Evented I/O technologies

    - by Saif Bechan
    Lately I have had my eye on evented I/O to tackle some of my web application problems. I have been looking at things such as Python's Twisted, Ruby's EventMachine, and Node.js. Are there any other alternatives to these three, maybe in other languages such as PHP?

    Read the article

  • Reading a large text file to memory in C++

    - by NoneType
    Is there a way to read a large text file (~60MB) into memory at once (like a compiler flag to increase the program's memory limit)? Currently, ifstream's open function causes a segmentation fault while trying to read this file. ifstream fis; fis.open("my_large_file.txt"); // Segfaults here The file just consists of rows of the form number_1<tabspace>number_2 i.e., two numbers separated by a tabspace.
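
    For reference, a sketch of slurping the whole file into one string through the stream buffer; 60 MB is well within what an ordinary process can allocate, so if the crash persists it is probably caused elsewhere (a fixed-size stack buffer used while parsing is a common suspect):

        #include <fstream>
        #include <sstream>
        #include <string>

        std::string slurp(const char *path) {
            std::ifstream fis(path, std::ios::in | std::ios::binary);
            std::ostringstream buffer;
            buffer << fis.rdbuf();           // copies the entire file into the string stream
            return buffer.str();
        }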

    Read the article

  • Split file with PHP and generate contents

    - by user201140
    How do I split the content below into separate files, without the placeholder tags? I'd also like to take the text inside the placeholder tags and place it in a new contents file. <div class='placeholder'>The First Chapter</div> This is some text. <div class='placeholder'>The Second Chapter</div> This is some more text. <div class='placeholder'>Last Chapter</div> The last chapter. Thanks.
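
    A sketch using preg_split() with PREG_SPLIT_DELIM_CAPTURE, which keeps the captured chapter titles alongside the text between the placeholders (the input and output file names are illustrative):

        <?php
        $text  = file_get_contents('book.html');
        $parts = preg_split("~<div class='placeholder'>(.*?)</div>~",
                            $text, -1, PREG_SPLIT_DELIM_CAPTURE);

        // $parts[0] is anything before the first placeholder; after that the
        // array alternates title, chapter text, title, chapter text, ...
        $contents = array();
        for ($i = 1; $i < count($parts); $i += 2) {
            $title = trim($parts[$i]);
            $body  = isset($parts[$i + 1]) ? trim($parts[$i + 1]) : '';
            $contents[] = $title;
            file_put_contents('chapter_' . (($i + 1) / 2) . '.txt', $body);
        }
        file_put_contents('contents.txt', implode("\n", $contents));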

    Read the article

  • [c++] How to create a std::ofstream to a temp file?

    - by dehmann
    Okay, mkstemp is the preferred way to create a temp file in POSIX. But it opens the file and returns an int, which is a file descriptor. From that I can only create a FILE*, but not an std::ofstream, which I would prefer in C++. (Apparently, on AIX and some other systems, you can create an std::ofstream from a file descriptor, but my compiler complains when I try that.) I know I could get a temp file name with tmpnam and then open my own ofstream with it, but that's apparently unsafe due to race conditions, and results in a compiler warning (g++ v3.4 on Linux): warning: the use of `tmpnam' is dangerous, better use `mkstemp' So, is there any portable way to create an std::ofstream to a temp file?
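
    Not portable, but it matches the asker's toolchain: on libstdc++ one race-free sketch is to let mkstemp() create and open the file, then wrap the descriptor in __gnu_cxx::stdio_filebuf so a std::ostream can write through it (the exact constructor signature has varied between GCC releases, so check the headers for your version). It yields an ostream rather than an ofstream, which is usually all that is needed for writing.

        #include <cstdlib>
        #include <ostream>
        #include <ext/stdio_filebuf.h>       // GNU extension, not portable

        int main() {
            char name[] = "/tmp/myprogXXXXXX";               // mkstemp fills in the Xs
            int fd = mkstemp(name);
            if (fd == -1)
                return 1;

            __gnu_cxx::stdio_filebuf<char> filebuf(fd, std::ios::out);
            std::ostream os(&filebuf);
            os << "hello temp file\n";
            os.flush();
            // The filebuf is expected to close fd when it is destroyed (check your
            // libstdc++ version); remove(name) if the file itself should not remain.
            return 0;
        }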

    Read the article

  • [C] Read line from file without knowing the line length.

    - by ryyst
    Hi, I want to read in a file line by line, without knowing the line length beforehand. Here's what I've got so far: int ch = getc(file); int length = 0; char buffer[4095]; while (ch != '\n' && ch != EOF) { ch = getc(file); buffer[length] = ch; length++; } printf("Line length: %d characters.", length); I can now figure out the line length, but only for lines that are shorter than 4095 characters. Is there a better way to do this (I already used fgets() but got told it wasn't the best way)? --Ry
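
    If POSIX functions are acceptable (glibc has them), getline() grows its buffer as needed, so there is no fixed 4095-byte limit; the same idea can otherwise be hand-rolled with a realloc() loop around getc(). A sketch:

        #define _POSIX_C_SOURCE 200809L
        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            FILE *file = fopen("input.txt", "r");
            if (file == NULL)
                return 1;

            char *line = NULL;                   /* getline allocates and grows this */
            size_t capacity = 0;
            ssize_t length;
            while ((length = getline(&line, &capacity, file)) != -1) {
                if (length > 0 && line[length - 1] == '\n')
                    length--;                    /* do not count the newline itself */
                printf("Line length: %zd characters.\n", length);
            }
            free(line);
            fclose(file);
            return 0;
        }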

    Read the article

  • File won't save output to file, and prints out a string oddly C++ Linux

    - by Predictability
    I'm trying to make a password program: the user enters a password, it saves the password to a file in /tmp/, and then it outputs the password (for me, so I can find bugs). I have included the "string" library and set the password type to string, but when I output it, it comes out like this: 0x7fffb55baac0password // <-- that's the password I entered It outputs hex (I think), then the password I entered, and it won't save it to the file in /tmp/ that I want it to (or any file in /tmp/). Here's the source code: http://codepad.org/3aamAv7R Thank you for all the help you guys have given me so far.

    Read the article

  • What influences running time of reading a bunch of images?

    - by remi
    I have a program where I read a large batch of tiny images (50000 images of size 32x32). I read them using OpenCV's imread function, in a program like this: std::vector<std::string> imageList; // is initialized with full path to the 50K images for(string s : imageList) { cv::Mat m = cv::imread(s); } Sometimes it will read the images in a few seconds. Sometimes it takes a few minutes to do so. I run this program in GDB, with a breakpoint beyond the loop that reads the images, so it's not because I'm stuck at a breakpoint. The same "erratic" behaviour happens when I run the program outside GDB. The same "erratic" behaviour happens with the program compiled with or without optimisation. The same "erratic" behaviour happens whether or not I have other programs running in the background. The images are always in the same place on my machine's hard drive. I run the program on a SUSE Linux distribution, compiled with gcc. So I am wondering: what could affect the image reading time that much?

    Read the article

  • read text files containing binary data as a single matrix in matlab

    - by user1716595
    I have a text file which contains binary data in the following manner: 00000000000000000000000000000000001011111111111111111111111111111111111111111111111111111111110000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111111000111100000000000000000000000000000000 00000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000111111111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111111000111110000000000000000000000000000000 00000000000000000000000000000000000000111111111111111111111111111111111111111111111111111111110000000000000000000000000000000 00000000000000000000000000000000000000000000111111111111111111111111111111111111110000000011100000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111111100111110000000000000000000000000000000 00000000000000000000000000000000000111111111111111111111111111111111111111111111111111110111110000000000000000000000000000000 00000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000000001111111111111111111111111111111111111111111111000011100000000000000000000000000000000 00000000000000000000000000000000000000001111111111111111111111111111111111111111111111000011100000000000000000000000000000000 00000000000000000000000000000000000001111111111111111111111111111111111111111111111111111111000000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111110000011100000000000000000000000000000000 00000000000000000000000000000000000000000000011111111111111111111111111111111111100000000011100000000000000000000000000000000 00000000000000000000000000000000000000111111111111111111111111111111111111111111111111110111100000000000000000000000000000000 Plz note that each 1 or 0 is independent i.e the values are not decimal.I need to find the column wise sum of the file.There are 125 columns in all (here it is jumping onto the next line) and there are 840946 rows. I have tried textread,fscanf and a few other matlab commands but the result is that they all read each row in decimal format and create a 840946*1 array.I want to create a 840946*125 array to compute a column wise sum. Kindly help, Thanks!

    Read the article
