Search Results

Search found 68407 results on 2737 pages for 'file organizer'.


  • File-based caching under PHP

    - by azatoth
    I've been using http://code.google.com/p/phpbrowscap/ for a project, and it usually works nicely. But a few times its cache, which consists of plain PHP files (see http://code.google.com/p/phpbrowscap/source/browse/trunk/browscap/Browscap.php#372 et al.), has been "zeroed", i.e. the whole cache file has become one large blob of NUL bytes. Instead of trying to find out why the files become NULs, I thought it might be better to change the caching strategy to something more resilient. So I wonder if you have any good ideas about what a good solution would be; I've been looking at http://www.jongales.com/blog/2009/02/18/simple-file-based-php-cache-class/ and http://www.phpclasses.org/package/313-PHP-Cache-arbitrary-data-in-files-.html, and I also thought of just saving a serialized array to the file instead of the pure PHP it writes now, but I'm uncertain which approach I should target here. I'm grateful for any insight into this area, as I know it's complex from a performance point of view.
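
    A cache file full of NULs is the classic signature of a non-atomic write that got interrupted, so whatever format is chosen, writing to a temporary file and rename()-ing it into place is the usual defence. A minimal PHP sketch of that idea, combined with the serialized-array format mentioned above (function and path names are made up for illustration):

        <?php
        // Sketch: serialize to a temp file, then rename() it into place.
        // rename() is atomic on the same filesystem, so readers never see
        // a half-written (or zeroed) cache file.
        function cache_put($path, array $data)
        {
            $tmp = $path . '.' . uniqid('tmp', true);
            if (file_put_contents($tmp, serialize($data), LOCK_EX) === false) {
                return false;
            }
            return rename($tmp, $path);
        }

        function cache_get($path)
        {
            $raw = @file_get_contents($path);
            return $raw === false ? null : unserialize($raw);
        }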

  • How to extract block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following:

        2010-05-12 12:23:45 Some sort of log entry
        2010-05-12 01:45:12 Request XML:
        <RootTag>
          <Element>Value</Element>
          <Element>Another Value</Element>
        </RootTag>
        2010-05-12 01:45:32 Response XML:
        <ResponseRoot>
          <Element>Value</Element>
        </ResponseRoot>
        2010-05-12 01:45:49 Another log entry

    What I want to do is extract the Request and Response XML (and ultimately dump each into its own file). I had a similar parser that used egrep, but there the XML was all on one line, not spread over multiple lines like above. The log files are also somewhat large, hitting 500-600 MB per log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way, using the built-in tools on a Linux box (CentOS in this case), to extract multiple lines, or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?
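
    sed and awk both handle multi-line extraction with range patterns in a single streaming pass, which sidesteps the memory problem entirely. A sketch with awk (the trigger strings and file names are assumptions based on the sample above; the same pattern with "Response XML:" and </ResponseRoot> handles the responses):

        # Stream the log once; everything between "Request XML:" and the
        # closing </RootTag> goes to request.xml.
        awk '/Request XML:/  { grab = 1; next }
             grab            { print > "request.xml" }
             /<\/RootTag>/   { grab = 0 }' big.log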

  • Using bash to copy a file to specific folders

    - by Franko
    I have a folder with a fair number of subfolders, and some of the subfolders contain a folder.jpg picture. What I am trying to do is find those folders, copy the picture to all other subfolders that have the same artist and album information, then continue on to the next album, etc. The structure of all the folders is "artist - year - album - [encoding information]". I have made a really simple one-liner that finds the folders containing the file, but there I am stuck:

        ls -F | grep / | while read folders; do find "$folders" -name folder.jpg; done

    Does anyone have any good tips or ideas on how to solve this, or pointers on how to proceed?

    Edit: First of all, I'm really new to this (like you can't tell), so please have patience. OK, let me break it down some more. I have a folder structure that looks like this:

        artist1 - year - album - [flac]
        artist1 - year - album - [mp3]
        artist1 - year - album - [AAC]
        artist2 - year - album - [flac]
        etc.

    I would like to loop over each set of folders that share the same artist and album information and look for a folder.jpg file. When I find that file, I would like to copy it to all of the other folders in the same set. E.g. if I find a folder.jpg in the "artist1 - year - album - [flac]" folder, I would like to have that folder.jpg copied to "artist1 - year - album - [mp3]" and "artist1 - year - album - [AAC]", but not to "artist2 - year - album - [flac]", and then continue the loop until all the sets have been processed. I really hope that makes it a bit easier to understand what I am trying to do :)
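
    Since everything before the final " - [encoding]" identifies the set, the whole job can be done with parameter expansion and a glob, with no real parsing at all. A bash sketch, assuming exactly the layout shown above:

        #!/bin/bash
        # For every folder.jpg found, copy it into sibling folders that share
        # the same "artist - year - album" prefix but differ in the bracketed
        # encoding tag. cp -n refuses to overwrite an existing folder.jpg.
        for src in */folder.jpg; do
            [ -e "$src" ] || continue
            dir=${src%/folder.jpg}
            prefix=${dir% - \[*\]}        # strip the trailing " - [encoding]"
            for dest in "$prefix - ["*"]"; do
                [ -d "$dest" ] && [ "$dest" != "$dir" ] && cp -n "$src" "$dest/"
            done
        done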

  • Getting zeros between data while reading a binary file in C

    - by indiajoe
    I have binary data which I am reading into an array of long integers with a C program. A hexdump of the binary data shows that after the first few data points, the data starts again at an address 20000 (hex) further on. The hexdump output is as shown below:

        0000000 0000 0000 0000 0000 0000 0000 0000 0000
        *
        0020000 0000 0000 0053 0000 0064 0000 006b 0000
        0020010 0066 0000 0068 0000 0066 0000 005d 0000
        0020020 0087 0000 0059 0000 0062 0000 0066 0000
        ........ and so on...

    But when I read it into an array 'data' of long integers with the typical fread call

        fread(data, sizeof(*data), filelength / sizeof(*data), fd);

    my data array fills up with zeros until it reaches the 20000 location; after that it reads the data correctly. Why is it reading regions where my data is not there? Or how can I make it read only my data, and nothing in between that is not in the file? I know it looks like a trivial problem, but I cannot figure it out even after googling for a night. Can anyone suggest where I am going wrong? Other info: I am working on a GNU/Linux machine (the slax-atma distro, to be specific), and my C compiler is gcc.
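
    One detail worth knowing when reading a dump like the one above: the lone "*" is hexdump's way of saying "the previous line repeats", so the file genuinely does contain NUL bytes all the way to offset 0x20000, and fread() is reporting them faithfully (hexdump -v prints every line and makes this visible). If the goal is simply to skip the padding, a small C sketch (file name assumed):

        #include <stdio.h>

        /* Sketch: read one long at a time and drop the all-zero records
         * that make up the padding shown in the hexdump above. */
        int main(void)
        {
            FILE *fd = fopen("data.bin", "rb");   /* assumed name */
            if (!fd) { perror("fopen"); return 1; }

            long value;
            while (fread(&value, sizeof value, 1, fd) == 1) {
                if (value != 0)
                    printf("%ld\n", value);
            }
            fclose(fd);
            return 0;
        }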

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot of data (up to 60 TB) consisting of big files (usually 30 to 40 GB each) into tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I need some way to read a file, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, pax and star options, and I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of an existing solution to this before I start code-churning?
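
    For what it's worth, the single-read idea is straightforward to express with Python's tarfile module by hashing each block as tar pulls it; since the workload is I/O-bound, whether an interpreter keeps up with LTO-4 would need measuring rather than assuming. A sketch, not a benchmarked solution:

        import hashlib
        import sys
        import tarfile

        class HashingReader:
            """Wrap a file object and md5 every block as it is read."""
            def __init__(self, fileobj):
                self.fileobj = fileobj
                self.md5 = hashlib.md5()

            def read(self, size=-1):
                data = self.fileobj.read(size)
                self.md5.update(data)
                return data

        def tar_with_checksums(paths, tar_path, sums_path):
            with tarfile.open(tar_path, "w") as tar, open(sums_path, "w") as sums:
                for path in paths:
                    info = tar.gettarinfo(path)
                    with open(path, "rb") as f:
                        reader = HashingReader(f)
                        tar.addfile(info, reader)   # one read: tar + md5 at once
                    sums.write("%s  %s\n" % (reader.md5.hexdigest(), path))

        if __name__ == "__main__":
            tar_with_checksums(sys.argv[2:], sys.argv[1], sys.argv[1] + ".md5")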

  • Saving NSString to file

    - by Michael Amici
    I am having trouble saving simple website data to a file. I believe the syntax is correct; could someone help me? When I run it, it shows that it gets the data, but when I quit and open up the file that it is supposed to save to, nothing is in it.

        - (BOOL)textFieldShouldReturn:(UITextField *)nextField {
            [timer invalidate];
            startButton.hidden = NO;
            startButton.enabled = YES;
            stopButton.enabled = NO;
            stopButton.hidden = YES;
            stopLabel.hidden = YES;
            label.hidden = NO;
            label.text = @"Press to Activate";
            [nextField resignFirstResponder];

            NSString *urlString = textField.text;
            NSData *dataNew = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
            NSUInteger len = [dataNew length];
            NSString *stringCompare = [NSString stringWithFormat:@"%i", len];
            NSLog(@"%@", stringCompare);

            NSString *filePath = [[NSBundle mainBundle] pathForResource:@"websiteone" ofType:@"txt"];
            if (filePath) {
                [stringCompare writeToFile:filePath atomically:YES encoding:NSUTF8StringEncoding error:NULL];
                NSString *myText = [NSString stringWithContentsOfFile:filePath encoding:NSUTF8StringEncoding error:NULL];
                NSLog(@"Saving... %@", myText);
            } else {
                NSLog(@"cant find the file");
            }
            return YES;
        }
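
    A likely culprit, offered as a guess rather than a confirmed diagnosis: on a device the app bundle is read-only, so writing to a path obtained from pathForResource: cannot persist; files the app creates belong in its Documents directory. A sketch of the same write aimed there (file name kept from the code above):

        // Build a path in the writable Documents directory instead of the
        // read-only main bundle, and check the NSError instead of passing NULL.
        NSArray *dirs = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                            NSUserDomainMask, YES);
        NSString *docPath = [[dirs objectAtIndex:0]
                             stringByAppendingPathComponent:@"websiteone.txt"];
        NSError *error = nil;
        BOOL ok = [stringCompare writeToFile:docPath
                                  atomically:YES
                                    encoding:NSUTF8StringEncoding
                                       error:&error];
        if (!ok) {
            NSLog(@"write failed: %@", error);
        }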

  • What can I use to journal writes to a file system

    - by Dmitry
    Hello, all. I need to track all writes to files in order to keep a synchronized version of them in a different place (a server, or just another directory; which one is not important). Assume the following:

      - all files are located in the same directory;
      - it is fine to create some system files (e.g. SomeFileName.Ext~temp-data);
      - no one has concurrent access to the synced directory; nobody spoils our meta-files or changes the real files before we apply the postponed writes (like commits);
      - there is no need to recover "local" changes in case of a crash; the system can just roll back to the "server" state by a simple copy from it;
      - it is important for the mechanism to be transparent to use (the programmer just calls the ordinary fopen(), read(), write()).

    It must be guaranteed that the copy of the files the "server" holds is consistent, that is, the whole set of files as it existed at some moment in time. It may be quite outdated, but it must be a fair snapshot of all files at some point. As I understand it, I should overload the writing logic to collect data in order to send the changes to the "server" (for example, writing to a temporary File~tmp), and also overload reads so the program can read the actual data of the file. It would be great if you could suggest an existing library (Java or C++, it is unimportant) or solution (customizing a VCS?), or give hints on how I should write it myself.

    Edit: After some reading I have more precise requirements: I need a COW (copy-on-write) wrapper for fopen(), fwrite(), ..., or an interceptor (hook) for WriteFile() and the other FS API system calls. A log-structured file system in userspace would be an alternative too.
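
    A toy C++ sketch of the copy-on-write wrapper described above (class and naming invented; a real version would also have to wrap reads so they prefer the pending ~tmp version):

        #include <cstddef>
        #include <cstdio>
        #include <string>

        // Writes go to "<name>~tmp"; commit() renames the temp file over the
        // original, so the published version always changes atomically.
        class CowFile {
        public:
            explicit CowFile(const std::string& path)
                : path_(path), tmp_(path + "~tmp"),
                  fp_(std::fopen(tmp_.c_str(), "wb")) {}

            bool write(const void* buf, std::size_t len) {
                return fp_ && std::fwrite(buf, 1, len, fp_) == len;
            }

            bool commit() {
                if (!fp_ || std::fclose(fp_) != 0) return false;
                fp_ = 0;
                return std::rename(tmp_.c_str(), path_.c_str()) == 0;
            }

            // Abandoned writes leave the original untouched.
            ~CowFile() { if (fp_) { std::fclose(fp_); std::remove(tmp_.c_str()); } }

        private:
            std::string path_, tmp_;
            std::FILE* fp_;
        };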

  • JavaFX: File upload to REST service / servlet fails because of missing boundary

    - by spa
    I'm trying to upload a file from JavaFX using HttpRequest. For this purpose I have written the following function:

        function uploadFile(inputFile : File) : Void {
            // check file
            if (inputFile == null or not(inputFile.exists()) or inputFile.isDirectory()) {
                return;
            }
            def httpRequest : HttpRequest = HttpRequest {
                location: urlConverter.encodeURL("{serverUrl}")
                source: new FileInputStream(inputFile)
                method: HttpRequest.POST
                headers: [
                    HttpHeader {
                        name: HttpHeader.CONTENT_TYPE
                        value: "multipart/form-data"
                    }
                ]
            }
            httpRequest.start();
        }

    On the server side, I am trying to handle the incoming data with the Apache Commons FileUpload API in a Jersey REST service. The code is a simple copy of the FileUpload tutorial on the Apache homepage:

        @Path("Upload")
        public class UploadService {

            public static final String RC_OK = "OK";
            public static final String RC_ERROR = "ERROR";

            @POST
            @Produces("text/plain")
            public String handleFileUpload(@Context HttpServletRequest request) {
                if (!ServletFileUpload.isMultipartContent(request)) {
                    return RC_ERROR;
                }
                FileItemFactory factory = new DiskFileItemFactory();
                ServletFileUpload upload = new ServletFileUpload(factory);
                List<FileItem> items = null;
                try {
                    items = upload.parseRequest(request);
                } catch (FileUploadException e) {
                    e.printStackTrace();
                    return RC_ERROR;
                }
                ...
            }
        }

    However, I get an exception at items = upload.parseRequest(request):

        org.apache.commons.fileupload.FileUploadException: the request was
        rejected because no multipart boundary was found

    I guess I have to add manual boundary information to the InputStream. Is there an easy way to do this? Or are there other solutions?
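
    The error is accurate as far as it goes: multipart/form-data requires a boundary parameter in the Content-Type header, and the body must be framed with that same boundary; streaming the raw file bytes alone is not valid multipart. A Java sketch of the framing (boundary string and field name chosen arbitrarily); the resulting bytes would be sent as the request body, with the header set to "multipart/form-data; boundary=----JavaFXUpload":

        import java.io.ByteArrayOutputStream;
        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;

        public final class MultipartBody {
            private static final String BOUNDARY = "----JavaFXUpload";

            // Wrap a file in minimal multipart/form-data framing.
            public static byte[] wrap(File file) throws IOException {
                String head = "--" + BOUNDARY + "\r\n"
                        + "Content-Disposition: form-data; name=\"file\"; filename=\""
                        + file.getName() + "\"\r\n"
                        + "Content-Type: application/octet-stream\r\n\r\n";
                String tail = "\r\n--" + BOUNDARY + "--\r\n";

                ByteArrayOutputStream out = new ByteArrayOutputStream();
                out.write(head.getBytes("UTF-8"));
                FileInputStream in = new FileInputStream(file);
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) != -1; ) {
                    out.write(buf, 0, n);
                }
                in.close();
                out.write(tail.getBytes("UTF-8"));
                return out.toByteArray();
            }
        }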

  • Java: Embedding Soundbank file in JAR

    - by Pyroclastic
    If I have a soundbank stored in a JAR, how would I load that soundbank into my application using resource loading? I'm trying to consolidate as much of a MIDI program into the JAR file as I can, and the last thing I have to add is the soundbank file I'm using, as users won't have the soundbanks installed. I'm trying to put it into my JAR file and then load it with getResource() from the Class class, but I'm getting an InvalidMidiDataException on a soundbank that I know is valid. Here's the code; it's in the constructor for my synthesizer object:

        try {
            synth = MidiSystem.getSynthesizer();
            channels = synth.getChannels();
            instrument = MidiSystem.getSoundbank(
                    this.getClass().getResource("img/soundbank-mid.gm")).getInstruments();
            currentInstrument = instrument[0];
            synth.loadInstrument(currentInstrument);
            synth.open();
        } catch (InvalidMidiDataException ex) {
            System.out.println("FAIL");
            instrument = synth.getAvailableInstruments();
            currentInstrument = instrument[0];
            synth.loadInstrument(currentInstrument);
            try {
                synth.open();
            } catch (MidiUnavailableException ex1) {
                Logger.getLogger(MIDISynth.class.getName()).log(Level.SEVERE, null, ex1);
            }
        } catch (IOException ex) {
            Logger.getLogger(MIDISynth.class.getName()).log(Level.SEVERE, null, ex);
        } catch (MidiUnavailableException ex) {
            Logger.getLogger(MIDISynth.class.getName()).log(Level.SEVERE, null, ex);
        }
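
    One thing worth trying, offered as a guess: the stream-based overload of MidiSystem.getSoundbank() requires mark/reset support, and resources read out of a JAR don't reliably provide it, which can surface as an InvalidMidiDataException on a perfectly valid soundbank. Wrapping the resource stream in a BufferedInputStream supplies mark/reset:

        import java.io.BufferedInputStream;
        import java.io.InputStream;
        import javax.sound.midi.MidiSystem;
        import javax.sound.midi.Soundbank;

        public final class SoundbankLoader {
            // Resource path copied from the question; adjust to the JAR layout.
            public static Soundbank load() throws Exception {
                InputStream raw =
                        SoundbankLoader.class.getResourceAsStream("img/soundbank-mid.gm");
                if (raw == null) {
                    throw new IllegalStateException("soundbank not found in JAR");
                }
                return MidiSystem.getSoundbank(new BufferedInputStream(raw));
            }
        }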

  • Loading .dll/.exe from file into temporary AppDomain throws Exception

    Hi Gang, I am trying to make a Visual Studio add-in that removes unused references from the projects in the current solution (I know this can be done; ReSharper does it, but my client doesn't want to pay for 300 licences). Anyhoo, I use DTE to loop through the projects, compile their assemblies, then reflect over those assemblies to get their referenced assemblies and cross-examine the .csproj file.

    Problem: since the .dll/.exe I loaded up with reflection doesn't unload until the app domain unloads, it is now locked, and the projects can't be built again because VS tries to re-create the files (all standard stuff). I have tried creating temporary files and then reflecting over them... no luck, I still have locked original files (I totally don't understand that, BTW). So now I am going down the path of creating a temporary AppDomain to load the files into and then destroy. I am having problems loading the files, though. The way I understand AppDomain.Load, I should create and send it a byte array of the assembly. I do that:

        FileStream fs = new FileStream(assemblyFile, FileMode.Open);
        byte[] assemblyFileBuffer = new byte[(int)fs.Length];
        fs.Read(assemblyFileBuffer, 0, assemblyFileBuffer.Length);
        fs.Close();

        AppDomainSetup domainSetup = new AppDomainSetup();
        domainSetup.ApplicationBase = assemblyFileInfo.Directory.FullName;
        AppDomain tempAppDomain = AppDomain.CreateDomain("TempAppDomain", null, domainSetup);
        Assembly projectAssembly = tempAppDomain.Load(assemblyFileBuffer);

    The last line throws an exception:

        Could not load file or assembly 'WindowsFormsApplication1, Version=1.0.0.0,
        Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system
        cannot find the file specified. ("WindowsFormsApplication3, Version=1.0.0.0,
        Culture=neutral, PublicKeyToken=null")

    Any help or thoughts would be greatly appreciated. My head is lopsided from beating it against the wall... Thanks, Dan
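
    A hedged observation on what may be going wrong: AppDomain.Load() called from the default domain also tries to bind the assembly (and its dependencies) in the calling domain, where they are not on the probing path. The usual workaround is a MarshalByRefObject proxy that does the loading inside the temporary domain; the names below are illustrative:

        using System;
        using System.Reflection;

        // Runs inside the temporary AppDomain, so the add-in's own domain
        // never loads (or locks) the project assembly.
        public class AssemblyInspector : MarshalByRefObject
        {
            public string[] GetReferencedAssemblyNames(string assemblyPath)
            {
                Assembly asm = Assembly.ReflectionOnlyLoadFrom(assemblyPath);
                AssemblyName[] refs = asm.GetReferencedAssemblies();
                string[] names = new string[refs.Length];
                for (int i = 0; i < refs.Length; i++)
                    names[i] = refs[i].FullName;
                return names;
            }
        }

        // Usage from the add-in:
        //   AppDomain temp = AppDomain.CreateDomain("TempAppDomain");
        //   var inspector = (AssemblyInspector)temp.CreateInstanceAndUnwrap(
        //       typeof(AssemblyInspector).Assembly.FullName,
        //       typeof(AssemblyInspector).FullName);
        //   string[] refs = inspector.GetReferencedAssemblyNames(assemblyFile);
        //   AppDomain.Unload(temp);   // releases the loaded file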

  • Still don't understand file upload-folder permissions

    - by Camran
    I have checked out articles and tutorials, but I still don't know what to do about the security of my picture upload folder. It holds pictures for classifieds, which are uploaded into the folder. This is what I want:

      - anybody may upload images to the folder;
      - the images will later be moved to another folder by another PHP script (automatically);
      - only I may remove them manually, as well as another PHP file on the server which automatically empties the folder after x days.

    The images are uploaded via a PHP upload script. This script checks that the extension of the file is actually a valid image type. When I try this:

        chmod 755 images

    the images won't upload. But like this it works:

        chmod 777 images

    777 is a security risk, though, right? Please give me detailed information; the question is what to do to solve this problem, not what the various permission modes mean. Thanks. If you need more info, let me know.
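
    The underlying issue is usually ownership rather than mode bits: PHP runs as the web server's user, and 755 only grants write access to the directory's owner. If the directory is owned by (or group-shared with) the server account, 755 or 775 suffices and 777 becomes unnecessary. A sketch, assuming the common www-data account (substitute the actual server user):

        # Option 1: make the web server user own the upload directory.
        chown www-data:www-data images
        chmod 755 images

        # Option 2: keep your ownership, share a group with the server.
        chgrp www-data images
        chmod 775 images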

  • Parse a CSV file using Python (to make a decision tree later)

    - by Margaret
    First off, full disclosure: this is going towards a uni assignment, so I don't want to receive code. :) I'm more looking for approaches; I'm very new to Python, having read a book but not yet written any code. The entire task is to import the contents of a CSV file, create a decision tree from those contents (using the ID3 algorithm), and then parse a second CSV file to run against the tree. There's a big (understandable) preference that it be capable of dealing with different CSV files (I asked if we were allowed to hard-code the column names, mostly to eliminate it as a possibility, and the answer was no). The CSV files are in a fairly standard format; the header row is marked with a #, then the column names are displayed, and every row after that is a simple series of values. Example:

        # Column1, Column2, Column3, Column4
        Value01, Value02, Value03, Value04
        Value11, Value12, Value13, Value14

    At the moment, I'm trying to work out the first part: parsing the CSV. To make the decisions for the decision tree, a dictionary structure seems like it's going to be the most logical, so I was thinking of doing something along these lines:

        Read in each line, character by character
            If the character is not a comma or a space:
                append the character to a temporary string
            If the character is a comma:
                append the temporary string to a list
                empty the string
        Once a line has been read:
            create a dictionary using the header row as the keys (somehow!)
            append that dictionary to a list

    However, if I do things that way, I'm not sure how to make the mapping between the keys and the values. I'm also wondering whether there is some way to perform an action on every dictionary in a list, since I'll need to do things to the effect of "everyone return their values for columns Column1 and Column4, so I can count up who has what!" I assume there is some mechanism for that, but I don't think I know how to do it. Is a dictionary the best way to do it? Would I be better off using some other data structure? If so, what?
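
    Approach-wise, the character-by-character loop can be replaced wholesale by the standard csv module, which already handles the splitting; each row then zips naturally against the header to give one dict per record. A sketch (the file name is invented, and the leading "#" of the header is stripped by hand):

        import csv

        def load_rows(path):
            with open(path, newline="") as f:
                reader = csv.reader(f, skipinitialspace=True)
                header = [h.lstrip("# ") for h in next(reader)]
                return [dict(zip(header, row)) for row in reader]

        rows = load_rows("train.csv")

        # "Everyone return their values for Column1" is then a comprehension:
        values = [row["Column1"] for row in rows]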

  • Looping problem while appending data to an existing text file

    - by Manu
    My code looks like this:

        try {
            stmt = conn.createStatement();
            stmt1 = conn.createStatement();
            stmt2 = conn.createStatement();
            rs = stmt.executeQuery("select cust from trip1");
            rs1 = stmt1.executeQuery("select cust from trip2");
            rs2 = stmt2.executeQuery("select cust from trip3");

            File f = new File(strFileGenLoc);
            OutputStream os = (OutputStream) new FileOutputStream(f, true);
            String encoding = "UTF8";
            OutputStreamWriter osw = new OutputStreamWriter(os, encoding);
            BufferedWriter bw = new BufferedWriter(osw);

            while (rs.next()) {
                while (rs1.next()) {
                    while (rs2.next()) {
                        bw.write(rs.getString(1) == null ? "" : rs.getString(1));
                        bw.write("\t");
                        bw.write(rs1.getString(1) == null ? "" : rs1.getString(1));
                        bw.write("\t");
                        bw.write(rs2.getString(1) == null ? "" : rs2.getString(1));
                        bw.write("\t");
                        bw.newLine();
                    }
                }
            }
        }

    The code above runs fine. My problem is that the "rs" result set contains one record in its table, while "rs1" and "rs2" each contain five records, and the "rs" value gets repeated. When writing to the text file, the output I am getting looks like

        1    2     3
        1    12    21
        1    23    25
        1    10    5
        1    8     54

    but I need output like the following:

        1    2     3
             12    21
             23    25
             10    5
             8     54

    What do I need to change in my code? Please advise.
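
    A sketch of one way to get that shape (reusing the variable names above): advance the three cursors in lockstep instead of nesting them, so each ResultSet is read once per output line and the single "rs" value appears only while it still has rows:

        boolean more = rs.next(), more1 = rs1.next(), more2 = rs2.next();
        while (more || more1 || more2) {
            bw.write(more && rs.getString(1) != null ? rs.getString(1) : "");
            bw.write("\t");
            bw.write(more1 && rs1.getString(1) != null ? rs1.getString(1) : "");
            bw.write("\t");
            bw.write(more2 && rs2.getString(1) != null ? rs2.getString(1) : "");
            bw.newLine();
            more = more && rs.next();     // each cursor advances independently
            more1 = more1 && rs1.next();
            more2 = more2 && rs2.next();
        }
        bw.flush();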

  • Reading strings and integers from .txt file and printing output as strings only

    - by screename71
    Hello, I'm new to C++, and I'm trying to write a short C++ program that reads lines of text from a file, with each line containing one integer key and one alphanumeric string value (no embedded whitespace). The number of lines is not known in advance (i.e., keep reading lines until end of file is reached). The program needs to use the std::map data structure to store the integers and strings read from the input (and to associate the integers with the strings). The program then needs to output the string values (but not the integer values) to standard output, one per line, sorted by integer key value (smallest to largest). So, for example, suppose I have a text file called "data.txt" which contains the following five lines:

        10 dog
        -50 horse
        0 cat
        -12 zebra
        14 walrus

    The output should then be:

        horse
        zebra
        cat
        dog
        walrus

    I've pasted below the progress I've made so far on my C++ program:

        #include <fstream>
        #include <iostream>
        #include <map>

        using namespace std;
        using std::map;

        int main () {
            string name;
            signed int value;
            ifstream myfile ("data.txt");
            while (! myfile.eof() ) {
                getline(myfile,name,'\n');
                myfile >> value >> name;
                cout << name << endl;
            }
            return 0;
            myfile.close();
        }

    Unfortunately, this produces the following incorrect output:

        horse
        cat
        zebra
        walrus

    If anyone has any tips, hints, or suggestions on the changes and revisions I need to make to get it to work as needed, can you please let me know? Thanks!
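
    A sketch of the intended structure, for comparison: the map is what does the sorting, so each (key, value) pair just needs to be inserted, and looping on the extraction itself (rather than on eof()) stops cleanly at the end of the file:

        #include <fstream>
        #include <iostream>
        #include <map>
        #include <string>

        int main()
        {
            std::ifstream in("data.txt");
            std::map<int, std::string> entries;

            int key;
            std::string value;
            while (in >> key >> value)      // fails, and stops, at end of file
                entries[key] = value;

            // std::map iterates in ascending key order: the sort comes free.
            for (std::map<int, std::string>::const_iterator it = entries.begin();
                 it != entries.end(); ++it)
                std::cout << it->second << '\n';
            return 0;
        }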

  • Why do I get "Bad File Descriptor" when I try to read a file with Perl?

    - by Magicked
    I'm trying to read a binary file 40 bytes at a time, then check whether all those bytes are 0x00, and if so ignore them; if not, write them back out to another file (basically just cutting out large blocks of null bytes). This may not be the most efficient way to do it, but I'm not worried about that. However, right now I'm getting a "Bad File Descriptor" error and I cannot figure out why.

        my $comp = "\x00" * 40;
        my $byte_count = 0;
        my $infile = "/home/magicked/image1";
        my $outfile = "/home/magicked/image1_short";

        open IN, "<$infile";
        open OUT, ">$outfile";
        binmode IN;
        binmode OUT;

        my ($buf, $data, $n);
        while (read (IN, $buf, 40)) {   ### Problem is here ###
            $boo = 1;
            for ($i = 0; $i < 40; $i++) {
                if ($comp[$i] != $buf[$i]) {
                    $i = 40;
                    print OUT $buf;
                    $byte_count += 40;
                }
            }
        }
        die "Problems! $!\n" if $!;

        close OUT;
        close IN;

    I marked with a comment where it is breaking. Thanks for any help!
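
    Two details in this code are worth a second look, offered as guesses at the cause: "\x00" * 40 is numeric multiplication (string repetition in Perl is the x operator), and $comp[$i] indexes an array @comp rather than the string $comp; on top of that, $! can be left set by any earlier library call, so the trailing die can report an error even when read() succeeded. A sketch with those points addressed:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $comp    = "\x00" x 40;                # 40 NUL bytes
        my $infile  = "/home/magicked/image1";
        my $outfile = "/home/magicked/image1_short";

        open my $in,  '<', $infile  or die "can't read $infile: $!";
        open my $out, '>', $outfile or die "can't write $outfile: $!";
        binmode $in;
        binmode $out;

        my $buf;
        while (read($in, $buf, 40)) {
            print {$out} $buf if $buf ne $comp;   # copy non-NUL blocks only
        }

        close $out;
        close $in;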

  • Using a function found in a different file in a loop

    - by Anders
    This question is related to BuddyPress, and is a follow-up question from this question. I have a .csv file with 790 rows and 3 columns, where the first column is the group name, the second is the group description, and the last (third) is the slug. As far as I've been told, I can use this code:

        <?php
        $groups = array();
        if (($handle = fopen("groupData.csv", "r")) !== FALSE) {
            while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
                $group = array(
                    'group_id'     => 'SOME ID',
                    'name'         => $data[0],
                    'description'  => $data[1],
                    'slug'         => groups_check_slug(sanitize_title(esc_attr($data[2]))),
                    'date_created' => gmdate("Y-m-d H:i:s"),
                    'status'       => 'public'
                );
                $groups[] = $group;
            }
            fclose($handle);
        }
        foreach ($groups as $group) {
            groups_create_group($group);
        }

    together with http://www.nomorepasting.com/getpaste.php?pasteid=35217, which is called bp-groups.php. The thing is that I can't make it work. I created a new file with the code written above called groupgenerator.php, uploaded the .csv file to the same folder, and opened groupgenerator.php in my browser. But I get this error:

        Fatal error: Call to undefined function groups_check_slug() in

    What am I doing wrong?
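
    A guess at the cause: a script opened directly in the browser runs outside WordPress, so neither WordPress nor BuddyPress functions are defined when it executes. Bootstrapping WordPress first makes the BuddyPress group functions available (the relative path to wp-load.php is an assumption about the install layout):

        <?php
        // Load WordPress, and with it the active plugins (BuddyPress included).
        require_once dirname(__FILE__) . '/wp-load.php';

        if (!function_exists('groups_create_group')) {
            die('BuddyPress is not active.');
        }

        // ... the CSV loop above can now call groups_check_slug() and
        // groups_create_group() safely.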

  • Looking for a Regex to get SccTeamFoundationServer value from .sln file

    - by Arthur
    I am looking for a regex for C# to get the SccTeamFoundationServer value from a .sln file. Maybe someone has come across such a need and found a solution. Could you help? The file:

        Microsoft Visual Studio Solution File, Format Version 10.00
        # Visual Studio 2008
        Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "WebApplication", "WebApplication\WebApplication.csproj", "{AE0F6C02-1C8D-426D-AFA0-C07A52E6112F}"
        EndProject
        Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ConsoleApplication", "ConsoleApplication\ConsoleApplication.csproj", "{2BD82C34-CF50-4559-A3CD-F85ACD657292}"
        EndProject
        Global
            GlobalSection(TeamFoundationVersionControl) = preSolution
                SccNumberOfProjects = 3
                SccEnterpriseProvider = {4CA58AB2-18FA-4F8D-95D4-32DDF27D184C}
                SccTeamFoundationServer = http://ServerName:8080/
                SccLocalPath0 = .
                SccProjectUniqueName1 = ConsoleApplication\\ConsoleApplication.csproj
                SccProjectName1 = ConsoleApplication
                SccLocalPath1 = ConsoleApplication
                SccProjectUniqueName2 = WebApplication\\WebApplication.csproj
                SccProjectName2 = WebApplication
                SccLocalPath2 = WebApplication
            EndGlobalSection
            GlobalSection(SolutionConfigurationPlatforms) = preSolution
                Debug|Any CPU = Debug|Any CPU
                Release|Any CPU = Release|Any CPU
            EndGlobalSection
            GlobalSection(ProjectConfigurationPlatforms) = postSolution
                {AE0F6C02-1C8D-426D-AFA0-C07A52E6112F}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
                {AE0F6C02-1C8D-426D-AFA0-C07A52E6112F}.Debug|Any CPU.Build.0 = Debug|Any CPU
                {AE0F6C02-1C8D-426D-AFA0-C07A52E6112F}.Release|Any CPU.ActiveCfg = Release|Any CPU
                {AE0F6C02-1C8D-426D-AFA0-C07A52E6112F}.Release|Any CPU.Build.0 = Release|Any CPU
                {2BD82C34-CF50-4559-A3CD-F85ACD657292}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
                {2BD82C34-CF50-4559-A3CD-F85ACD657292}.Debug|Any CPU.Build.0 = Debug|Any CPU
                {2BD82C34-CF50-4559-A3CD-F85ACD657292}.Release|Any CPU.ActiveCfg = Release|Any CPU
                {2BD82C34-CF50-4559-A3CD-F85ACD657292}.Release|Any CPU.Build.0 = Release|Any CPU
            EndGlobalSection
            GlobalSection(SolutionProperties) = preSolution
                HideSolutionNode = FALSE
            EndGlobalSection
        EndGlobal
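
    A sketch of one regex that does it (untested against other .sln variants): anchor on the key name in multiline mode and capture the value up to the end of the line:

        using System.IO;
        using System.Text.RegularExpressions;

        static class SlnScc
        {
            // Returns the URL after "SccTeamFoundationServer =", or null.
            public static string GetTfsServer(string slnPath)
            {
                string text = File.ReadAllText(slnPath);
                Match m = Regex.Match(
                    text,
                    @"^\s*SccTeamFoundationServer\s*=\s*(\S+)\s*$",
                    RegexOptions.Multiline);
                return m.Success ? m.Groups[1].Value : null;
            }
        }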

  • Storing header and data sections in a CSV file

    - by morpheous
    This should be relatively easy to do, but after several hours of straight programming my mind seems a bit frazzled and could do with some help. I have a C++ class which I am currently using to read/write data to a file. I was initially using binary data, but have decided to store the data as CSV in order to let programs written in other languages load the data. The C++ class looks a bit like this:

        class BinaryData {
        public:
            BinaryData();

            void serialize(std::ostream& output) const;
            void deserialize(std::istream& input);

        private:
            Header m_hdr;
            std::vector<Row> m_rows;
        };

    I am simply rewriting the serialize/deserialize methods to write to a CSV file. I am not sure of the "best" way to store a header section and a "data" section in a "flat" CSV file, though - any suggestions on the most sensible way to do this?
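
    One common convention, offered as a sketch rather than a standard: give each section its own marker line, so readers in any language can split on the markers before handing the rest to a CSV parser. The header field names below are invented for illustration:

        // Possible serialize() layout: "[header]" and "[data]" marker lines,
        // key,value pairs in the header, one CSV row per record after that.
        void BinaryData::serialize(std::ostream& output) const
        {
            output << "[header]\n";
            output << "version," << m_hdr.version << "\n";   // assumed field
            output << "rows,"    << m_rows.size() << "\n";
            output << "[data]\n";
            for (std::vector<Row>::const_iterator it = m_rows.begin();
                 it != m_rows.end(); ++it)
                output << it->toCsv() << "\n";               // assumed helper
        }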

  • How to remove subsets from a given text file

    - by user324887
    I have a problem like this: my input file contains the transactions

        10 20 30 40 70
        20 30 70
        30 40
        10 20
        29 70 80 90
        20 30 40
        40 45 65
        10 20 80
        45 65
        20

    I want to remove all subset transactions from this file, so the output file should look as follows:

        10 20 30 40 70
        29 70 80 90
        20 30 40
        40 45 65
        10 20 80

    Records like "20 30 70", "30 40", "10 20", "45 65" and "20" are removed because they are subsets of other records. I am using a set for this, but I am not able to create one set per line. Does anybody know how to do this? Please help me. Here is my code (the include lines were eaten by the paste; they are reconstructed from what the code uses):

        #include <cstdio>
        #include <iostream>
        #include <sstream>
        #include <set>
        #include <string>

        using namespace std;

        set<string> s1;

        int main()
        {
            FILE *fp = fopen("abc.txt", "r");
            if (fp != NULL) {
                char line[128]; /* or other suitable maximum line size */
                while (fgets(line, sizeof line, fp) != NULL) { /* read a line */
                    istringstream iss(line);
                    do {
                        string sub;
                        iss >> sub;
                        s1.insert(sub);
                    } while (iss);
                    for (set<string>::const_iterator p = s1.begin(); p != s1.end(); ++p)
                        cout << *p << endl;
                }
            }
        }
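
    A sketch of the overall shape (quadratic, but fine for small files): build one std::set<int> per line, then keep a line only if no larger line contains it; std::includes performs the subset test on the sorted sets:

        #include <algorithm>
        #include <fstream>
        #include <iostream>
        #include <set>
        #include <sstream>
        #include <string>
        #include <vector>

        int main()
        {
            std::ifstream in("abc.txt");
            std::vector<std::set<int> > rows;   // one set per line
            std::vector<std::string> lines;

            std::string line;
            while (std::getline(in, line)) {
                std::istringstream iss(line);
                std::set<int> s;
                int v;
                while (iss >> v)
                    s.insert(v);
                if (!s.empty()) {
                    rows.push_back(s);
                    lines.push_back(line);
                }
            }

            for (size_t i = 0; i < rows.size(); ++i) {
                bool subset = false;
                for (size_t j = 0; j < rows.size() && !subset; ++j)
                    subset = j != i && rows[j].size() > rows[i].size()
                             && std::includes(rows[j].begin(), rows[j].end(),
                                              rows[i].begin(), rows[i].end());
                if (!subset)
                    std::cout << lines[i] << '\n';
            }
        }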

  • Replacing a word in a text file with a value using python

    - by Jamde Jam
    I have been trying to replace a word in a text file with a value (say 1), but my output file is blank. I am new to Python (it's only been a month since I started learning it). My file is relatively large, but I just want to replace a word with the value 1 for now. Here is a segment of what the file looks like:

        NAME SECOND_1
        ATOM  1  6  0      0      0      # ORB 1
        ATOM  2  2  0      12/24  0      # ORB 2
        ATOM  3  2  12/24  0      0      # ORB 2
        ATOM  4  2  0      0      4/24   # ORB 3
        ATOM  5  2  0      0      20/24  # ORB 3
        ATOM  6  2  0      0      8/24   # ORB 3
        ATOM  7  2  0      0      16/24  # ORB 3
        ATOM  8  6  0      0      12/24  # ORB 1
        ATOM  9  2  12/24  0      12/24  # ORB 2
        ATOM 10  2  0      12/24  12/24  # ORB 2
        #1
        #2
        #3

    I want to first replace the word ATOM with the value 1. Next I want to replace #ORB with a space. Here is what I am trying thus far:

        input = open('SECOND_orbitsJ22.txt', 'r')
        output = open('SECOND_orbitsJ22_out.txt', 'w')
        for line in input:
            word = line.split(',')
            if (word[0] == 'ATOM'):
                word[0] = '1'
                output.write(','.join(word))

    Can anyone offer any suggestions or help? Thanks so much.
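
    Three guesses at why the output file is empty: the file is whitespace-separated, so line.split(',') never splits and word[0] is the whole line (hence the == 'ATOM' test never matches); lines that don't match are never written at all; and the files are never closed, so buffered output can be lost. A sketch with those addressed:

        # File names taken from the question; 'with' closes (and flushes) both.
        with open('SECOND_orbitsJ22.txt') as src, \
             open('SECOND_orbitsJ22_out.txt', 'w') as dst:
            for line in src:
                fields = line.split()
                if fields and fields[0] == 'ATOM':
                    line = line.replace('ATOM', '1', 1)    # first task
                    line = line.replace('# ORB', ' ', 1)   # second task
                dst.write(line)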

  • Import module stored in a cStringIO data structure vs. physical disk file

    - by Malcolm
    Is there a way to import a Python module stored in a cStringIO data structure vs. a physical disk file? It looks like "imp.load_compiled(name, pathname[, file])" is what I need, but the description of this method (and similar methods) has the following disclaimer:

        "The file argument is the byte-compiled code file, open for reading
        in binary mode, from the beginning. It must currently be a real file
        object, not a user-defined class emulating a file." [1]

    I tried using a cStringIO object vs. a real file object, but the help documentation is correct - only a real file object can be used. Any ideas on why these modules would impose such a restriction, or is this just a historical artifact? Are there any techniques I can use to avoid the physical-file requirement? Thanks, Malcolm

    [1] http://docs.python.org/library/imp.html#imp.load_module
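
    One technique that avoids the restriction entirely, sketched for the Python 2 line the question targets: skip imp.load_compiled() and exec the in-memory source into a fresh module object instead (this works from source text rather than byte-compiled code, which may or may not fit the use case):

        import imp
        import sys

        def module_from_string(name, source):
            """Create and register a module from source held in memory."""
            mod = imp.new_module(name)
            code = compile(source, "<memory:%s>" % name, "exec")
            exec code in mod.__dict__
            sys.modules[name] = mod
            return mod

        greeting = module_from_string("greeting", "def hello():\n    return 'hi'\n")
        print greeting.hello()                     # prints: hi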

  • In Mercurial, can I apply changes from one file to another file in the same branch?

    - by Stephen
    In the good old days of Subversion, I would sometimes derive a new file from an existing one using svn copy. Then if something changed in sections they had in common, I could still use svn merge to update the derived version. To use the example from hginit.com, say the "guac" recipe already exists, and I want to create a "superguac" that includes instructions on how to serve guacamole to 1000 raving soccer fans. Using the process I just described, I could:

        svn cp guac superguac
        svn ci -m "Created superguac by copying guac"
        (edit superguac)
        svn ci -m "Added instructions for serving 1000 raving soccer fans to superguac"
        (edit guac)
        svn ci -m "Fixed a typo in guac"
        svn merge -r3:4 guac superguac

    and thus the typo fix would be applied to superguac. Mercurial provides an hg copy command that marks a file as a copy of the original, but I'm not sure the repository structure supports a similar workflow. Here's the same example, where I carefully edit only a single file in the commit I want to use in the merge:

        hg cp guac superguac
        hg ci -m "Created superguac by copying guac"
        (edit superguac)
        hg ci -m "Added instructions for serving 1000 raving soccer fans to superguac"
        (edit guac)
        hg ci -m "Fixed a typo in guac"

    I now want to apply the change in guac to superguac. Is that possible? If so, what's the right command? Is there a different workflow in Mercurial that achieves the same results (limited to a single branch)?
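
    One workable approach, sketched and untested against this exact history: export the typo-fix changeset as a patch, rewrite its paths to point at the copied file, and import it (the revision number and GNU sed syntax are assumptions):

        hg export --rev 3 > typo-fix.patch
        sed -i 's|a/guac|a/superguac|; s|b/guac|b/superguac|' typo-fix.patch
        hg import --no-commit typo-fix.patch   # leaves the change uncommitted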

  • Process an XML-like log file queue

    - by Zsolt Botykai
    Hi all, first of all: I'm not a programmer, never was, although I have learned a lot during my professional career as a support consultant. Now my task is to process - and create some statistics about - a constantly written and rapidly growing XML-like log file. It's not valid XML, because it does not have a proper <root> element; the log looks like this:

        <log itemdate="somedate">
            <field id="0" />
            ...
        </log>
        <log itemdate="somedate+1">
            <field id="0" />
            ...
        </log>
        <log itemdate="somedate+n">
            <field id="0" />
            ...
        </log>

    E.g. I have to count all the items with field id=0. But most of the solutions I have found (e.g. using XPath) report an error about the garbage after the first closing </log>. Most probably I can use Python (2.6, although I can compile 3.x as well) or some really old Perl version (5.6.x), and I recently compiled xmlstarlet, which really looks promising - I was able to create the statistics for a certain period after copying the file and prepending and appending the opening and closing root element. But this is a huge file, and copying takes time as well. Isn't there a better solution? Thanks in advance!
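
    A sketch of the streaming route in Python 2.6 (log file name assumed): fake the missing root element around the log without ever copying it, and let iterparse count while discarding elements, so memory stays flat however big the file grows:

        import xml.etree.cElementTree as etree

        def with_root(path):
            """Yield '<root>', then the file's bytes in chunks, then '</root>'."""
            yield '<root>'
            with open(path, 'rb') as f:
                for chunk in iter(lambda: f.read(64 * 1024), ''):
                    yield chunk
            yield '</root>'

        class StreamReader(object):
            """Adapt the generator to the read() interface iterparse expects."""
            def __init__(self, gen):
                self.gen = gen
                self.buf = ''
            def read(self, size):
                while len(self.buf) < size:
                    try:
                        self.buf += next(self.gen)
                    except StopIteration:
                        break
                out, self.buf = self.buf[:size], self.buf[size:]
                return out

        count = 0
        for _, elem in etree.iterparse(StreamReader(with_root('app.log'))):
            if elem.tag == 'field' and elem.get('id') == '0':
                count += 1
            elem.clear()   # drop parsed elements to keep memory use flat
        print count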

  • File.Move, why do I get a FileNotFoundException? The file exists...

    - by acidzombie24
    It's extremely weird, since the program is iterating over that very directory! outfolder and infolder are both on H:\, my external HD, under Windows 7. The idea is to move all folders that only contain files with the extensions db and svn-base. When I try to move a folder I get an exception; VS2010 tells me it can't find the folder specified in dir. This code is iterating through dir, so how can it not find it? This is weird.

        string[] theExt = new string[] { "db", "svn-base" };
        foreach (var dir in Directory.GetDirectories(infolder))
        {
            bool hit = false;
            if (Directory.GetDirectories(dir).Count() > 0)
                continue;

            foreach (var f in Directory.GetFiles(dir))
            {
                var ext = Path.GetExtension(f).Substring(1);
                if (theExt.Contains(ext) == false)
                {
                    hit = true;
                    break;
                }
            }

            if (!hit)
            {
                var dst = outfolder + "\\" + Path.GetFileName(dir);
                File.Move(dir, outfolder); // FileNotFoundException: Could not find file dir.
            }
        }
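
    Two things stand out in the failing line, offered as a likely explanation: directories are moved with Directory.Move (File.Move expects a file), and the destination argument must be the new full path (the dst that was just built), not the parent output folder. A sketch of the corrected branch, reusing the loop's variables:

        if (!hit)
        {
            // Move the directory itself, to its new path under outfolder.
            var dst = Path.Combine(outfolder, Path.GetFileName(dir));
            if (!Directory.Exists(dst))
                Directory.Move(dir, dst);
        }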

  • How to use a rewrite rule to force calls for "domain.tld/subdir/file.html" to show as "subdir.domain.tld/file.html"?

    - by Wion
    Hi! First-time poster, very new to mod_rewrite. I'm on a shared server, and the context of this problem is a virtual directory under my root account. The domain (domain.tld) will have subdirectories representing annual mini-sites of static .html files. Subdirectory names (yyyy) will be the 4-digit year (e.g., "2010"). I want any call to domain.tld/yyyy/file.html to appear as yyyy.domain.tld/file.html in the browser address bar, and (of course) for the page to load properly. I already force dropping "www" by using:

        RewriteCond %{HTTP_HOST} ^www\.domain\.tld [NC]
        RewriteRule (.*) http://domain.tld/$1 [R=301,L]

    So far so good. But no matter what I try after that, I can't get the subdomain to the front of the domain. Here's one of the more complicated examples I've tried (no doubt wrong):

        RewriteCond %{HTTP_HOST} ^domain\.tld/([0-9]+)/([a-z-]+)\.html [NC]
        RewriteRule (.*) %1.domain.tld/%2.html [NC]

    This doesn't break anything (that I can tell), but it doesn't do what I want either. I.e., if I type yyyy.domain.tld, I'll see yyyy.domain.tld in the address bar, and navigating around will give me yyyy.domain.tld/file.html, etc. Fine. But if I type domain.tld/yyyy, I'll see domain.tld/yyyy, etc., which is not how I want people to see it. It doesn't redirect or mask or alias it, or whatever you call it. Is it even possible to force one look over the other like that? Should I be handling this with DNS instead? Thanks in advance!
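
    The reason the attempted condition never matches: %{HTTP_HOST} contains only the host name, never the path, so a pattern with /yyyy/ in it cannot succeed; the path has to be matched in the rule itself. A sketch (assuming wildcard DNS for *.domain.tld and a vhost that accepts those names):

        # Redirect domain.tld/yyyy/... to yyyy.domain.tld/... (visible change)
        RewriteCond %{HTTP_HOST} ^domain\.tld$ [NC]
        RewriteRule ^([0-9]{4})/(.*)$ http://$1.domain.tld/$2 [R=301,L]

        # Internally serve yyyy.domain.tld/file.html from /yyyy/file.html
        RewriteCond %{REQUEST_URI} !^/[0-9]{4}/
        RewriteCond %{HTTP_HOST} ^([0-9]{4})\.domain\.tld$ [NC]
        RewriteRule ^(.*)$ /%1/$1 [L]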
