Search Results

Search found 4688 results on 188 pages for 'io redirection'.

Page 34/188 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • loading data from file into 2d array

    - by Chris
    I am just starting with Perl and would like some help with arrays please. I am reading lines from a data file and splitting the line into fields: open (INFILE, $infile); do { my $linedata = <INFILE>; my @data = split ',', $linedata; .... } until eof; I then want to store the individual field values (in @data) in an array so that the array looks like the input data file, ie, the first "row" of the array contains the first line of data from INFILE etc. Each line of data from the infile contains 4 values, x, y, z and w, and once the data are all in the array, I have to pass the array into another program which reads the x, y, z, w and displays the w value on a screen at the point determined by the x, y, z value. I cannot pass the data to the other program on a row-by-row basis as the program expects the data to be in a 2d matrix format. Any help greatly appreciated. Chris
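
    A minimal sketch of the 2D-array pattern the question is after, shown in Python since the idea is language-neutral (the filename is illustrative):

        import csv

        # each "x,y,z,w" line becomes one row of the 2D list,
        # so matrix[0] holds the fields of the first line of the file
        with open("input.dat", newline="") as infile:
            matrix = [row for row in csv.reader(infile)]

    In Perl the same shape falls out of pushing a reference to each line's fields onto a master array (push @matrix, [@data]), giving the row-per-line structure the display program expects.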

    Read the article

  • Load binary file using fstream

    - by Kirill V. Lyadvinsky
    I'm trying to load a binary file using fstream in the following way: #include <iostream> #include <fstream> #include <iterator> #include <vector> using namespace std; int main() { basic_fstream<uint32_t> file( "somefile.dat", ios::in|ios::binary ); vector<uint32_t> buffer; buffer.assign( istream_iterator<uint32_t, uint32_t>( file ), istream_iterator<uint32_t, uint32_t>() ); cout << buffer.size() << endl; return 0; } But it doesn't work. On Ubuntu it crashes with a std::bad_cast exception. In MSVC++ 2008 it just prints 0. I know that I could use file.read to load the file, but I want to use an iterator and operator>> to load parts of the file. Is that possible? Why doesn't the code above work?
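
    For contrast, a sketch of the same load in Python, where the binary read and the 32-bit interpretation are separate, explicit steps (filename and little-endian byte order are assumptions):

        import struct

        with open("somefile.dat", "rb") as f:
            data = f.read()
        count = len(data) // 4                       # whole 4-byte words only
        words = struct.unpack("<%dI" % count, data[:count * 4])
        print(len(words))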

    Read the article

  • Get last n lines of a file with Python, similar to tail

    - by Armin Ronacher
    I'm writing a log file viewer for a web application and for that I want to paginate through the lines of the log file. The items in the file are line based with the newest item at the bottom. So I need a tail() method that can read n lines from the bottom and supports an offset. What I came up with looks like this: def tail(f, n, offset=0): """Reads n lines from f with an offset of offset lines.""" avg_line_length = 74 to_read = n + offset while 1: try: f.seek(-(avg_line_length * to_read), 2) except IOError: # woops. apparently file is smaller than what we want # to step back, go to the beginning instead f.seek(0) pos = f.tell() lines = f.read().splitlines() if len(lines) >= to_read or pos == 0: return lines[-to_read:offset and -offset or None] avg_line_length *= 1.3 Is this a reasonable approach? What is the recommended way to tail log files with offsets?
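
    A shorter alternative sketch using collections.deque; it reads the whole file (so it only suits modest file sizes) but never holds more than n + offset lines in memory:

        from collections import deque

        def tail(f, n, offset=0):
            # maxlen discards older lines on the fly as newer ones arrive
            return list(deque(f, maxlen=n + offset))[:n]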

    Read the article

  • PHP is unable to open a file for writing - but it does exist

    - by asdasdas
    I am trying to write to a file. I do a file_exists check on it before I do fopen and it comes up true: the file does exist. However, the file fails this code and gives me the error every time: $handle = fopen($filename, 'w'); if($handle) { flock($handle, LOCK_EX); fwrite($handle, $contents); } else { echo 'ERROR: Unable to open the file for writing.',PHP_EOL; exit(); } flock($handle, LOCK_UN); fclose($handle); Is there a way I can get more specific error details as to why this file won't open for writing? I know that the filename is legit, but for some reason it just won't let me write to it. I do have write permissions; I was able to create and overwrite another file.

    Read the article

  • Why does File::Slurp return a scalar when it should return a list?

    - by BrianH
    I am new to the File::Slurp module, and on my first test with it, it was not giving the results I was expecting. It took me a while to figure it out, so now I am interested in why I was seeing this certain behavior. My call to File::Slurp looked like this: my @array = read_file( $file ) || die "Cannot read $file\n"; I included the "die" part because I am used to doing that when opening files. My @array would always end up with the entire contents of the file in the first element of the array. Finally I took out the "|| die" section, and it started working as I expected. Here is an example to illustrate: perl -de0 Loading DB routines from perl5db.pl version 1.22 Editor support available. Enter h or `h h' for help, or `man perldebug' for more help. main::(-e:1): 0 DB<1> use File::Slurp DB<2> $file = '/usr/java6_64/copyright' DB<3> x @array1 = read_file( $file ) 0 'Licensed material - Property of IBM.' 1 'IBM(R) SDK, Java(TM) Technology Edition, Version 6' 2 'IBM(R) Runtime Environment, Java(TM) Technology Edition, Version 6' 3 '' 4 'Copyright Sun Microsystems Inc, 1992, 2008. All rights reserved.' 5 'Copyright IBM Corporation, 1998, 2009. All rights reserved.' 6 '' 7 'The Apache Software License, Version 1.1 and Version 2.0' 8 'Copyright 1999-2007 The Apache Software Foundation. All rights reserved.' 9 '' 10 'Other copyright acknowledgements can be found in the Notices file.' 11 '' 12 'The Java technology is owned and exclusively licensed by Sun Microsystems Inc.' 13 'Java and all Java-based trademarks and logos are trademarks or registered' 14 'trademarks of Sun Microsystems Inc. in the United States and other countries.' 15 '' 16 'US Govt Users Restricted Rights - Use duplication or disclosure' 17 'restricted by GSA ADP Schedule Contract with IBM Corp.' DB<4> x @array2 = read_file( $file ) || die "Cannot read $file\n"; 0 'Licensed material - Property of IBM. IBM(R) SDK, Java(TM) Technology Edition, Version 6 IBM(R) Runtime Environment, Java(TM) Technology Edition, Version 6 Copyright Sun Microsystems Inc, 1992, 2008. All rights reserved. Copyright IBM Corporation, 1998, 2009. All rights reserved. The Apache Software License, Version 1.1 and Version 2.0 Copyright 1999-2007 The Apache Software Foundation. All rights reserved. Other copyright acknowledgements can be found in the Notices file. The Java technology is owned and exclusively licensed by Sun Microsystems Inc. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems Inc. in the United States and other countries. US Govt Users Restricted Rights - Use duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. ' Why does the || die make a difference? I have a feeling this might be more of a Perl precedence question instead of a File::Slurp question. I looked in the File::Slurp module and it looks like it is set to croak if there is a problem, so I guess the proper way to do it is to allow File::Slurp to croak for you. Now I'm just curious why I was seeing these differences.

    Read the article

  • implementing a download manager that supports resuming

    - by Idan K
    Hi, I intend to write a small download manager in C++ that supports resuming (and multiple connections per download). From the info I have gathered so far, when sending the HTTP request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". Then the server returns an HTTP response with the data between those offsets. So roughly what I have in mind is to split the file into the number of allowed connections per file and send an HTTP request per split part with the appropriate "Range". So if I have a 4mb file and 4 allowed connections, I'd split the file into 4 and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets are already downloaded and simply not requesting those. Is this the right way to do this? What if the web server doesn't support resuming? (my guess is it will ignore the "Range" and just send the entire file) When sending the HTTP requests, should I specify the entire split size in the range? Or maybe ask for smaller pieces, say 1024k per request? When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks. Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it a good idea memory-wise? What if I have several downloads running simultaneously? If I'm not using a memory-mapped file, should I open the file per allowed connection? Or simply seek when needing to write to the file? (if I did use a memory-mapped file this would be really easy, since I could simply have several pointers). Note: I'll probably be using Qt, but this is a general question so I left code out of it.
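
    A hedged sketch of the probing step in Python (URL and byte range are illustrative). A server that honors the header answers 206 Partial Content with a Content-Range; one that ignores it answers 200 with the whole body, which is the fallback case guessed at above:

        import urllib.request

        req = urllib.request.Request("http://example.com/file.bin",
                                     headers={"Range": "bytes=0-1023"})
        with urllib.request.urlopen(req) as resp:
            # 206 -> Range honored; 200 -> the entire file is coming
            print(resp.status, resp.headers.get("Content-Range"))
            first_chunk = resp.read()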

    Read the article

  • Windows Disk I/O Analysis

    - by Jonathon
    It appears that we are having a problem with the disk I/O speed on our Windows 2003 Enterprise Edition server (64-bit). As we were initializing a database that created two 1G tablespaces on 3 different machines, it became obvious that the two smaller machines (each 32-bit Windows 2003 Standard Edition with less RAM) killed the larger machine when creating the files. The larger machine took 10x as long to create the tablespaces as the other machines did. Now, I am left wondering how that could be. What programs or scripts would you guys recommend for tracking down the I/O problem? I think the issue may be with the controller card (all boxes are hardware RAID 10, but have different controller cards), but I would like to check the actual disk I/O speed as well, so I have some hard numbers to work with. Any help would be appreciated.

    Read the article

  • How to create a Java String from the contents of a file

    - by Oscar Reyes
    I've been using this idiom for some time now, and it seems to be the most widespread, at least on the sites I've visited. Does anyone have a better/different way to read a file into a string in Java? Thanks private String readFile( String file ) throws IOException { BufferedReader reader = new BufferedReader( new FileReader (file)); String line = null; StringBuilder stringBuilder = new StringBuilder(); String ls = System.getProperty("line.separator"); while( ( line = reader.readLine() ) != null ) { stringBuilder.append( line ); stringBuilder.append( ls ); } return stringBuilder.toString(); }

    Read the article

  • A couple questions using fwrite/fread with data structures

    - by Nazgulled
    Hi, I'm using fwrite() and fread() for the first time to write some data structures to disk and I have a couple of questions about best practices and proper ways of doing things. What I'm writing to disk (so I can later read it back) is all user profiles inserted in a Graph structure. Each graph vertex is of the following type: typedef struct sUserProfile { char name[NAME_SZ]; char address[ADDRESS_SZ]; int socialNumber; char password[PASSWORD_SZ]; HashTable *mailbox; short msgCount; } UserProfile; And this is how I'm currently writing all the profiles to disk: void ioWriteNetworkState(SocialNetwork *social) { Vertex *currPtr = social->usersNetwork->vertices; UserProfile *user; FILE *fp = fopen("save/profiles.dat", "w"); if(!fp) { perror("fopen"); exit(EXIT_FAILURE); } fwrite(&(social->usersCount), sizeof(int), 1, fp); while(currPtr) { user = (UserProfile*)currPtr->value; fwrite(&(user->socialNumber), sizeof(int), 1, fp); fwrite(user->name, sizeof(char)*strlen(user->name), 1, fp); fwrite(user->address, sizeof(char)*strlen(user->address), 1, fp); fwrite(user->password, sizeof(char)*strlen(user->password), 1, fp); fwrite(&(user->msgCount), sizeof(short), 1, fp); break; currPtr = currPtr->next; } fclose(fp); } Notes: The first fwrite() you see will write the total user count in the graph so I know how much data I need to read back. The break is there for testing purposes. There are thousands of users and I'm still experimenting with the code. My questions: After reading this I decided to use fwrite() on each element instead of writing the whole structure. I also avoid writing the pointer to the mailbox as I don't need to save that pointer. So, is this the way to go? Multiple fwrite()'s instead of a global one for the whole structure? Isn't that slower? How do I read back this content? I know I have to use fread() but I don't know the size of the strings, because I used strlen() to write them. I could write the output of strlen() before writing the string, but is there any better way without extra writes?
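
    One common answer to the variable-length-string question is exactly that length prefix before each string. A minimal sketch of the record shape in Python (the 2-byte prefix size is an assumption):

        import struct

        def write_string(fp, s):
            data = s.encode("utf-8")
            fp.write(struct.pack("<H", len(data)))  # 2-byte length prefix
            fp.write(data)

        def read_string(fp):
            (n,) = struct.unpack("<H", fp.read(2))
            return fp.read(n).decode("utf-8")

    The alternative is to write each fixed-size array in full (sizeof the field rather than strlen), which wastes padding bytes but makes every record the same length, so it can be read back without any prefixes.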

    Read the article

  • Ruby: Is there a better way to iterate over multiple (big) files?

    - by zxcvbnm
    Here's what I'm doing (sorry for the variable names, I'm not using those in my code): File.open("out_file_1.txt", "w") do |out_1| File.open("out_file_2.txt", "w") do |out_2| File.open_and_process("in_file_1.txt", "r") do |in_1| File.open_and_process("in_file_2.txt", "r") do |in_2| while line_1 = in_1.gets do line_2 = in_2.gets #input files have the same number of lines #process data and output to files end end end end end The open_and_process method is just to open the file and close it once it's done. It's taken from the pickaxe book. Anyway, the main problem is that the code is nested too deeply. I can't load all the files' contents into memory, so I have to iterate line by line. Is there a better way to do this? Or at least prettify it?
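
    For comparison, the same lockstep iteration flattens to a single level in Python, where one with statement can own all four handles and zip pairs the lines (filenames from the question):

        with open("in_file_1.txt") as in_1, open("in_file_2.txt") as in_2, \
             open("out_file_1.txt", "w") as out_1, \
             open("out_file_2.txt", "w") as out_2:
            for line_1, line_2 in zip(in_1, in_2):
                out_1.write(line_1)  # stand-in for the real processing
                out_2.write(line_2)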

    Read the article

  • question about fgets

    - by user105033
    Is this safe to do? (does fgets terminate the buffer with null) Or should I be setting the 20th byte to null after the call to fgets, before I call clean? // strip new lines void clean(char *data) { while (*data) { if (*data == '\n' || *data == '\r') *data = '\0'; data++; } } // for this, assume that the file contains 1 line no longer than 19 bytes // buffer is freed elsewhere char *load_latest_info(char *file) { FILE *f; char *buffer = (char*) malloc(20); if (f = fopen(file, "r")) if (fgets(buffer, 20, f)) { clean(buffer); return buffer; } free(buffer); return NULL; }

    Read the article

  • Not able to open a file in php

    - by ehsanul
    The following code works when invoking through the command line with php -f test.php, from root. It does not work though when being invoked via apache when loading the php page. The code chokes at fopen() and the resulting web page just says "can't open file". <?php $fp = fopen("/path/to/some_file.txt","a") or die("can't open file"); fwrite($fp,"some text"); fclose($fp); ?> I tried to play with the file permissions, but to no avail. I changed the user/group with chown apache:apache test.php and changed permissions with chmod 755 test.php. Here is the relevant result of ls -l /path/to/some_file.txt: -rwxr-xr-x 1 apache apache 0 Apr 12 04:16 some_file.txt

    Read the article

  • Python unicode problem

    - by Somebody still uses you MS-DOS
    I'm receiving some data from a ZODB (Zope Object Database). I receive a mybrains object. Then I do: o = mybrains.getObject() and I receive a "Person" object in my project. Then, I can do b = o.name and doing print b in my class I get: José Carlos and print b.__class__ gives <type 'unicode'> I have a lot of "Person" objects. They are added to a list: names = [o.name, o1.name, o2.name] Then, I try to create a text file with this data: delimiter = ';' all = delimiter.join(names) + '\n' No problem. Now, when I do print all I get: José Carlos;Jonas;Natália Juan;John But when I try to write it to a file: f = open("/tmp/test.txt", "w") f.write(all) I get an error like this (the positions aren't exactly the same, since I changed the names): UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 84: ordinal not in range(128) If I can already print it in the "correct" form for display, why can't I write it to a file? Which encode/decode method should I use to write a file with this data? I'm using Python 2.4.5 (can't upgrade it)
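
    The mismatch is that print sends the unicode to a terminal whose encoding Python can detect, while a plain file object encodes with the default ASCII codec. A sketch of the usual fix, which also runs on the poster's Python 2.4 (names abbreviated from the question):

        # -*- coding: utf-8 -*-
        import codecs

        names = [u"Jos\xe9 Carlos", u"Jonas", u"Nat\xe1lia Juan"]
        out = codecs.open("/tmp/test.txt", "w", encoding="utf-8")
        out.write(u";".join(names) + u"\n")  # encoded to UTF-8 on the way out
        out.close()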

    Read the article

  • Keeping the UI responsive while parsing a very large logfile

    - by Carlos
    I'm writing an app that parses a very large logfile, so that the user can see the contents in a treeview format. I've used a BackgroundWorker to read the file, and as it parses each message, I use a BeginInvoke to get the GUI thread to add a node to my treeview. Unfortunately, there are two issues: The treeview is unresponsive to clicks or scrolls while the file is being parsed. I would like users to be able to examine (ie expand) nodes while the file is parsing, so that they don't have to wait for the whole file to finish parsing. The treeview flickers each time a new node is added. Here's the code inside the form: private void btnChangeDir_Click(object sender, EventArgs e) { OpenFileDialog browser = new OpenFileDialog(); if (browser.ShowDialog() == DialogResult.OK) { tbSearchDir.Text = browser.FileName; BackgroundWorker bgw = new BackgroundWorker(); bgw.DoWork += (ob, evArgs) => ParseFile(tbSearchDir.Text); bgw.RunWorkerAsync(); } } private void ParseFile(string inputfile) { FileStream logFileStream = new FileStream(inputfile, FileMode.Open, FileAccess.Read, FileShare.ReadWrite); StreamReader LogsFile = new StreamReader(logFileStream); while (!LogsFile.EndOfStream) { string Msgtxt = LogsFile.ReadLine(); Message msg = new Message(Msgtxt.Substring(26)); //Reads the text into a class with appropriate members AddTreeViewNode(msg); } } private void AddTreeViewNode(Message msg) { TreeNode newNode = new TreeNode(msg.SeqNum); BeginInvoke(new Action(() => { treeView1.BeginUpdate(); treeView1.Nodes.Add(newNode); treeView1.EndUpdate(); Refresh(); } )); } What needs to be changed?

    Read the article

  • EDIT Control Showing Squares Instead Of Returns

    - by Nathan Campos
    I'm playing around a little with PocketC by writing a simple text editor, using this code to read the contents of a file and display them in an EDIT control: int filehandle; int file_len; string file_mode; initComponents() { createctrl("EDIT", "test", 2, 1, 0, 24, 70, 25, TEXTBOX); wndshow(TEXTBOX, SW_SHOW); guigetfocus(); } main() { filehandle = fileopen(OpenFileDlg("Plain Text Files (*.txt)|*.txt; All Files (*.*)|*.*"), 0, FILE_READWRITE); file_len = filegetlen(filehandle); if(filehandle == -1) { MessageBox("File Could Not Be Found!", "Error", 3, 1); } initComponents(); editset(TEXTBOX, fileread(filehandle, file_len)); } It's all OK, except that my test file contains line breaks: Hello, World! PocketC Test Of My Editor When I open this file in the editor, instead of line breaks I just see two squares (meaning the character is unknown to that control), but if I change the control to a STATIC, it renders the line breaks fine; however, I can't edit the text if I use a STATIC. So I want to know what I need to do to show the line breaks instead of those squares.

    Read the article

  • Opening a file from a pack URI in WPF

    - by cptmorgan
    Hi All, I am looking to open a .csv file from the application pack to do some unit testing. So what I would really love is some analog of File.ReadAllText(string path), which is instead X.ReadAllText(Uri uri). I haven't as yet been able to find this. Does anyone know if it is possible to read text / bytes (don't mind which) from a file in the pack without copying this file to disk first? Oh and btw, File.ReadAllText(@"pack://application:,,,/SpreadSheetEngine/Tests/Example.csv") didn't work for me.. Thanks in advance.. Gav

    Read the article

  • uploading multiple files from client to server with asp.net

    - by Maestro1024
    I have been looking at the asp.net upload control, but that is for one file (unless someone knows a better way to do it): http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.fileupload.aspx For what I want to do I don't even really need a browse. I know the files on the client are at a certain location. Is it possible to create a collection of HttpPostedFiles and upload those? http://msdn.microsoft.com/en-us/library/system.web.httppostedfile.aspx I don't think it is possible but would be glad to be proven wrong. Is there a different asp.net method or control that easily allows uploading multiple files from client to server?

    Read the article

  • Import and Export for CSV are both broken in Mathematica

    - by dreeves
    Consider the following 2 by 2 array: x = {{"a b c", "1,2,3"}, {"i \"comma-heart\" you", "i \",heart\" u, too"}} If we Export that to CSV and then Import it again we don't get the same thing back: Import[Export["tmp.csv", x]] Looking at tmp.csv it's clear that the Export didn't work, since the quotes are not escaped properly. According to the RFC, which I presume is summarized correctly in Wikipedia's entry on CSV, the right way to export the above array is as follows: a b c, "1,2,3" "i ""comma-heart"" you", "i "",heart"" u, too" Importing the above does not yield the original array either, so Import is broken as well. I've reported these bugs to Wolfram support, but I'm wondering if others have workarounds in the meantime. One workaround is to just use TSV instead of CSV. I tested the above with TSV and it seems to work (even with tabs embedded in the entries of the array).
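
    As a cross-check of the expected quoting, Python's csv module implements the same RFC 4180 rules and round-trips the array cleanly (a sketch):

        import csv, io

        x = [['a b c', '1,2,3'],
             ['i "comma-heart" you', 'i ",heart" u, too']]
        buf = io.StringIO()
        csv.writer(buf).writerows(x)   # doubles embedded quotes per the RFC
        back = list(csv.reader(io.StringIO(buf.getvalue())))
        assert back == x               # lossless round trip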

    Read the article

  • Getting Error When Opening Files

    - by Nathan Campos
    I'm developing a simple text editor to better understand the PocketC language, so I've done this: #include "\\Storage Card\\My Documents\\PocketC\\Parrot\\defines.pc" int filehandle; int file_len; string file_mode; initComponents() { createctrl("EDIT", "test", 2, 1, 0, 24, 70, 25, TEXTBOX); wndshow(TEXTBOX, SW_SHOW); guigetfocus(); } main() { filehandle = fileopen(OpenFileDlg("Plain Text Files (*.txt)|*.txt; All Files (*.*)|*.*"), 0, FILE_READWRITE); file_len = filegetlen(filehandle); if(filehandle = -1) { MessageBox("File Could Not Be Found!", "Error", 3, 1); } initComponents(); editset(TEXTBOX, fileread(filehandle, file_len)); } Then I tried to run the application; it opens the Open File dialog, I select a file (which is at \test.txt) that I created with Notepad, and then I get my MessageBox saying that the file wasn't found. So I want to know why I'm getting this if the file is all correct. *PS: When I click to close the MessageBox, I see that the TextBox displays where the file is (I've tested with many other files, and with all of them I got the error and this).

    Read the article

  • What should be the ideal number of parallel java threads for copying a large set of files from a quad core linux box?

    - by ukgenie
    What should be the ideal number of parallel Java threads for copying a large set of files from a quad core linux box to an external shared folder? I can see that with a single thread it takes a hell of a long time to move the files one by one. Multiple threads improve the copy performance, but I don't know what the exact number of threads should be. I am using the Java executor service to create the thread pool.
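
    Since copying is I/O-bound, the right pool size is set by the disk and network rather than the four cores, so the practical answer is to benchmark a few sizes and keep the knee of the curve. A sketch of such a harness in Python (the Java ExecutorService version is structurally the same; paths illustrative):

        import shutil
        from concurrent.futures import ThreadPoolExecutor

        def copy_all(files, dest, workers):
            # I/O-bound work: threads mostly block, so workers > cores is fine
            with ThreadPoolExecutor(max_workers=workers) as pool:
                for f in files:
                    pool.submit(shutil.copy, f, dest)

        # time copy_all(...) with workers = 2, 4, 8, 16 and compare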

    Read the article

  • Execute an Application On The Server Using PHP

    - by Nathan Campos
    I have an application on my server called leaf.exe that takes two arguments to run: inputfile and outputfile, as in this example: pnote.exe input.pnt output.txt The executable is at exec/, the input file is at upload/ and the output file goes to compiled/. I need a PHP script that can run the application like that, so I want to know: How could I do this? How could I echo the output of the program?
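
    For reference, the shape of the call looks like the sketch below, written in Python's subprocess for concreteness (in PHP the counterparts are exec()/shell_exec(), with escapeshellarg() around the file names; paths taken from the question):

        import subprocess

        result = subprocess.run(
            ["exec/leaf.exe", "upload/input.pnt", "compiled/output.txt"],
            capture_output=True, text=True)
        print(result.stdout)  # echo whatever the program printed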

    Read the article

  • How do I take advantage of Android's "Clear Cache" button

    - by Jay Askren
    In Android's settings, in the "Manage Applications" activity when clicking on an app, the data is broken down into Application, Data, and cache. There is also a button to clear the cache. My app caches audio files and I would like the user to be able to clear the cache using this button. How do I store them so they get lumped in with the cache and the user can clear them? I've tried storing files using both of the following techniques: newFile = File.createTempFile("mcb", ".mp3", context.getCacheDir()); newFile = new File(context.getCacheDir(), "mcb.mp3"); newFile.createNewFile(); In both cases, these files are listed as Data and not Cache.

    Read the article

  • How do I find out what process Id and thread id / name has a file open

    - by peter
    Hi All, I am using C# in an application and am having some problems with a file becoming locked. The piece of code does this: while (true) { Read a packet from a socket (with data in it to add to the file) Open a file Write data to it Close the file } But in the process the file becomes locked. I don't really understand how; we are definitely catching and reporting exceptions, so I don't see how the file doesn't get closed every time. My best guess is that something else is opening the file, but I want to prove it. Can someone please provide a piece of code to check whether the file is open and, if so, report what process id and thread id have the file open? For example if I had this code: StreamWriter streamWriter1 = new StreamWriter(@"c:\logs\test.txt"); streamWriter1.WriteLine("Test"); // code to check for locks?? StreamWriter streamWriter2 = new StreamWriter(@"c:\logs\test.txt"); streamWriter1.Close(); streamWriter2.Close(); That will throw an exception because the file is locked when we try to open it the second time. So, where the comment is, what could I put in there to report that the current app (process id) and the current thread (thread id) have the file locked? Thanks.
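
    Outside the code, the classic tools for this are Sysinternals Process Explorer and handle.exe. Programmatically, a sketch with the third-party psutil library shows the idea; note that open handles belong to a process, so a thread id generally cannot be recovered this way:

        import psutil  # third-party: pip install psutil

        target = r"c:\logs\test.txt".lower()
        for proc in psutil.process_iter(["pid", "name"]):
            try:
                if any(f.path.lower() == target for f in proc.open_files()):
                    print(proc.info["pid"], proc.info["name"])
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                continue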

    Read the article

  • Check to see if file transfer is complete

    - by Cymon
    We have a daily job that processes files delivered from an external source. The process usually runs fine without any issues but every once in a while we have an issue of attempting to process a file that is not completely transferred. The external source SCPs these files from a UNIX server to our Windows server. From there we try to process the files. Is there a way to check to see if a file is still being transferred? Does UNIX put a lock on a file while SCPing it that we could check on the Windows side?
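
    scp takes no lock that the Windows side can observe, so the robust fix is to have the sender upload to a temporary name and rename on completion (rename is atomic on the same filesystem), and to process only files with the final name. Failing that, a polling heuristic works in practice; a sketch in Python (the settle interval is an assumption):

        import os, time

        def looks_complete(path, settle=5):
            """Treat the file as fully transferred once its size
            has stopped changing for `settle` seconds."""
            last = -1
            while True:
                size = os.path.getsize(path)
                if size == last:
                    return True
                last = size
                time.sleep(settle)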

    Read the article

  • Spaces while using "Print" in VBA

    - by Josh
    For some reason I am getting a lot of spaces in front of each value while trying to print to a flat text file. 'append headers Cells(start_row - 2, 1).Select For i = 1 To ActiveCell.SpecialCells(xlLastCell).Column If ActiveCell.Offset(0, 1).Column = ActiveCell.SpecialCells(xlLastCell).Column Then Print #finalCSV, Cells(start_row - 2, i) & "\n", Else Print #finalCSV, Cells(start_row - 2, i) & ",", End If Next i Example output: DC Capacity:hi, Resistive Capacity:lo, Resistive Capacity:hi, Reactive Capacity:lo, Is there any way to get rid of these spaces?

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >