Search Results

Search found 75611 results on 3025 pages for 'copy file'.

  • Loop over a file and write the next line if a condition is met

    - by 111078384259264152964
    I'm having a hard time fixing this or finding any good hints about it. I'm trying to loop over one file, modify each line slightly, and then loop over a different file. If a line in the second file starts with the (modified) line from the first, then the following line in the second file should be written to a third file.

        #!/usr/bin/env python
        with open('ids.txt', 'rU') as f:
            with open('seqres.txt', 'rU') as g:
                for id in f:
                    id = id.lower()[0:4] + '_' + id[4]
                    with open(id + '.fasta', 'w') as h:
                        for line in g:
                            if line.startswith('' + id):
                                h.write(g.next())

    All the correct files appear, but they are empty. Yes, I am sure the if has true cases. :-) "seqres.txt" has lines with an ID number in a certain format, each followed by a line of data. "ids.txt" has lines with the ID numbers of interest in a different format. I want each line of data with an interesting ID number in its own file. Thanks a million to anyone with a little advice!

  • java get file paths

    - by user283188
    Hey everybody, I have a JSP page that prints all files in a given directory along with their file paths. The code is:

        if (dir.isDirectory()) {
            File[] dirs = dir.listFiles();
            for (File f : dirs) {
                if (f.isDirectory() && !f.isHidden()) {
                    File files[] = f.listFiles();
                    for (File d : files) {
                        if (d.isFile() && !d.isHidden()) {
                            System.out.println(d.getName() + d.getParent() + (d.length() / 1024));
                        }
                    }
                }
                if (f.isFile() && !f.isHidden()) {
                    System.out.println(f.getName() + f.getParent() + (f.length() / 1024));
                }
            }
        }

    The problem is that it prints the complete file path, which is invalid when accessed through Tomcat. For example, the code spits out the following path: /usr/local/tomcat/sites/web_tech/images/scores/blah.jpg and I want it to only print the path from /images onward, i.e. /images/scores/blah.jpg. I know I could just mess around with the string, i.e. splitting it or string matching, but is there an easier way to do it? Thanks
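
    One way to avoid manual string surgery is to relativize each file's path against the directory that corresponds to the webapp root. The following is only a minimal sketch of the idea (Java 7+); the hard-coded root and file paths are placeholders, and in a servlet the root would more likely come from getServletContext().getRealPath("/"):

        import java.io.File;
        import java.nio.file.Path;

        public class RelativePathDemo {
            public static void main(String[] args) {
                // Placeholder for the deployed webapp's root directory on disk.
                Path webappRoot = new File("/usr/local/tomcat/sites/web_tech").toPath();
                File image = new File("/usr/local/tomcat/sites/web_tech/images/scores/blah.jpg");

                // relativize() strips the common prefix, leaving images/scores/blah.jpg
                Path relative = webappRoot.relativize(image.toPath());

                // Prepend a slash and normalise separators so it can be used as a URL path.
                System.out.println("/" + relative.toString().replace(File.separatorChar, '/'));
            }
        }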

  • How to code a batch file to copy and rename the most recently dated file?

    - by david.murtagh.keltie.com
    I'm trying to code a batch file that copies only the most recently dated file in a given folder to another directory on the local machine, renaming it as it does so. I've found a very similar question here: http://stackoverflow.com/questions/97371/batch-script-to-copy-newest-file and have managed to cobble together the code below from other forums too, but I've hit a brick wall, as it only results in the batch file itself being copied to the destination folder. It doesn't matter to me where the batch file itself sits in order for this to run. The source folder is C:\! BATCH and the destination folder is C:\DROP. The code is below; apologies if there is a glaringly obvious answer, but this is literally my first foray into coding batch files... Thanks!

        @echo off
        setLocal EnableDelayedExpansion
        pushd C:\! BATCH
        for /f "tokens=* delims= " %%G in ('dir /b /od') do (set newest=%%G)
        copy "!newest!" C:\DROP\
        PAUSE

  • How can I delete a file created and opened with Perl's IO::File and XML::Writer?

    - by Sho Minamimoto
    So I'm running through a list of things and have code that creates an .xml file with IO::File, called $doc, and then I make a new writer with XML::Writer(OUTPUT => $doc). More code runs and I build a big XML file with XML::Writer. Then, near the end of the file, I find out whether I need this file at all. If I do need it, I just call: $writer->end(); $doc->close(); but if I don't need it, what should I call to delete all the data I've stored/saved and move on to the next file? I tried unlink($docpath) (both before and after $doc->close()), but the file was not deleted.

  • Unable to open a file for writing

    - by asdasdas
    I am trying to write to a file. I do a file_exists check on it before I call fopen, and it returns true (the file does exist). However, the file fails this code and gives me the error every time:

        $handle = fopen($filename, 'w');
        if ($handle) {
            flock($handle, LOCK_EX);
            fwrite($handle, $contents);
        } else {
            echo 'ERROR: Unable to open the file for writing.', PHP_EOL;
            exit();
        }
        flock($handle, LOCK_UN);
        fclose($handle);

    Is there a way I can get more specific error details as to why this file won't open for writing? I know that the filename is legit, but for some reason it just won't let me write to it. I do have write permissions; I was able to write and overwrite another file.

  • Reverse massive text file in Java

    - by DanJanson
    What would be the best approach to reverse a large text file that is uploaded asynchronously to a servlet, in a scalable and efficient way? The text file can be massive (gigabytes long). A multiple-server/clustered environment can be assumed, so the work can be done in a distributed manner, and open source libraries are encouraged. I was thinking of using Java NIO to treat the file as an array on disk (so that I don't have to hold the file as a string buffer in memory). I am also thinking of using MapReduce to break up the file and process it on separate machines. Any input is appreciated. Thanks. Daniel
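
    For the single-machine case, one memory-friendly approach is to walk the file backwards in fixed-size chunks and reverse each chunk before appending it to the output; a MapReduce variant of the same idea would assign byte ranges to workers and concatenate their reversed outputs in reverse range order. The sketch below is only an illustration, assuming "reverse" means reversing the raw byte order of the file; the chunk size and path names are placeholders:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class FileReverser {
            // Reverses the bytes of the input file into the output file without
            // ever holding more than one chunk in memory.
            public static void reverse(String inPath, String outPath) throws IOException {
                final int CHUNK = 1 << 20; // 1 MiB per read; tune as needed
                try (RandomAccessFile in = new RandomAccessFile(inPath, "r");
                     RandomAccessFile out = new RandomAccessFile(outPath, "rw")) {
                    long remaining = in.length();
                    byte[] buf = new byte[CHUNK];
                    while (remaining > 0) {
                        int len = (int) Math.min(CHUNK, remaining);
                        long pos = remaining - len;
                        in.seek(pos);
                        in.readFully(buf, 0, len);
                        // Reverse this chunk in place before writing it out.
                        for (int i = 0, j = len - 1; i < j; i++, j--) {
                            byte tmp = buf[i];
                            buf[i] = buf[j];
                            buf[j] = tmp;
                        }
                        out.write(buf, 0, len);
                        remaining = pos;
                    }
                }
            }
        }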

  • repeated entries in website log file

    - by Reza
    I am writing an ad hoc log analyser for my website's log file. The excerpt below is part of the log file; it shows file1.pdf being downloaded twice. Looking carefully, the timestamp and IP address are exactly the same in both entries. How can it be possible to have two downloads at the same time by the same person? Should I count it as 2 in my programme or as 1? Any reply is appreciated.

        name_of_subdomain xxx.xxx.xx.xx - - [02/Apr/2012:09:13:31 +0100] "GET /file1.pdf HTTP/1.1" 206 3706 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; CMDTDF)"
        name_of_subdomain xxx.xxx.xx.xx - - [02/Apr/2012:09:13:31 +0100] "GET /file1.pdf HTTP/1.1" 206 425462 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; CMDTDF)"

  • Efficient way to delete a line from a text file (C#)

    - by Valentin Vasilyev
    Hello. I need to delete a certain line from a text file. What is the most efficient way of doing this? The file can potentially be large (over a million records). Thank you. UPDATE: below is the code I'm currently using, but I'm not sure it is good.

        internal void DeleteMarkedEntries()
        {
            string tempPath = Path.GetTempFileName();
            using (var reader = new StreamReader(logPath))
            {
                using (var writer = new StreamWriter(File.OpenWrite(tempPath)))
                {
                    int counter = 0;
                    while (!reader.EndOfStream)
                    {
                        if (!_deletedLines.Contains(counter))
                        {
                            writer.WriteLine(reader.ReadLine());
                        }
                        ++counter;
                    }
                }
            }
            if (File.Exists(tempPath))
            {
                File.Delete(logPath);
                File.Move(tempPath, logPath);
            }
        }

  • Update in grub.cfg file for loading new kernel image in Ubuntu 11.04

    - by user1627657
    I am new to Linux. I am compiling the Linux kernel (version 2.6.34.12) with gcc in the traditional manner, inside a VMware machine running Ubuntu 11.04 (kernel version 2.6.38-8-generic). I cannot figure out where the newly compiled kernel has to be registered in the grub.cfg file. I updated the existing menu entry with the newly created image's version name, but then VMware was unable to load the new kernel. I have searched the internet but didn't find anything. Can anyone help me update grub.cfg so that the new kernel loads successfully? A few things about what I have done: make bzImage to create the image file; make modules_install && make install to install the modules; then sudo mkinitramfs -o initramfs.img-2.6.34 2.6.34; then sudo gedit grub.cfg, and in the menuentry I changed the vmlinuz and initrd versions from 2.6.38-8 to 2.6.34.12. This is what I have done.

  • Load/Store Objects in file in Java

    - by brain_damage
    I want to store an object of my class in a file, and after that be able to load the object from this file. But somewhere I am making a mistake (or several) and cannot figure out where. May I get some help?

        public class GameManagerSystem implements GameManager, Serializable {
            private static final long serialVersionUID = -5966618586666474164L;

            HashMap<Game, GameStatus> games;
            HashMap<Ticket, ArrayList<Object>> baggage;
            HashSet<Ticket> bookedTickets;
            Place place;

            public GameManagerSystem(Place place) {
                super();
                this.games = new HashMap<Game, GameStatus>();
                this.baggage = new HashMap<Ticket, ArrayList<Object>>();
                this.bookedTickets = new HashSet<Ticket>();
                this.place = place;
            }

            public static GameManager createManagerSystem(Game at) {
                return new GameManagerSystem(at);
            }

            public boolean store(File f) {
                try {
                    FileOutputStream fos = new FileOutputStream(f);
                    ObjectOutputStream oos = new ObjectOutputStream(fos);
                    oos.writeObject(games);
                    oos.writeObject(bookedTickets);
                    oos.writeObject(baggage);
                    oos.close();
                    fos.close();
                } catch (IOException ex) {
                    return false;
                }
                return true;
            }

            public boolean load(File f) {
                try {
                    FileInputStream fis = new FileInputStream(f);
                    ObjectInputStream ois = new ObjectInputStream(fis);
                    this.games = (HashMap<Game, GameStatus>) ois.readObject();
                    this.bookedTickets = (HashSet<Ticket>) ois.readObject();
                    this.baggage = (HashMap<Ticket, ArrayList<Object>>) ois.readObject();
                    ois.close();
                    fis.close();
                } catch (IOException e) {
                    return false;
                } catch (ClassNotFoundException e) {
                    return false;
                }
                return true;
            }

            ...
        }

        public class JUnitDemo {
            GameManager manager;

            @Before
            public void setUp() {
                manager = GameManagerSystem.createManagerSystem(Place.ENG);
            }

            @Test
            public void testStore() {
                Game g = new Game(new Date(), Teams.LIONS, Teams.SHARKS);
                manager.registerGame(g);
                File file = new File("file.ser");
                assertTrue(airport.store(file));
            }
        }

  • How can I tell why I have access to a file share on Windows Server

    - by Joel
    I have a file share on a Windows 2008 R2 server in an AD domain (call it \\SECURESERVER\STUFF) and I am not sure I have the share and folder permissions set up right. I noticed the problem when I set up a new server (WORKGROUP\FOREIGNSERVER) that was not joined to the domain and tried to copy some files off of \\SECURESERVER\STUFF. I was surprised to find that when I tried to access the files, it did not prompt me for a username and password and proceeded to give me full access to the files. That worried me, so I tried the same thing on some workstations that were not in the domain, and they did NOT show the same behavior (they did prompt for a username/password, as desired/expected). So I think there is something peculiar about FOREIGNSERVER. I am logging into it with a local admin account, but my domain and SECURESERVER should know nothing of this server. I've carefully gone through the share and folder permissions on the share, but I can't find the reason that FOREIGNSERVER has access. How can I find out why FOREIGNSERVER has access to SECURESERVER?

  • How to search for duplicate values in a huge text file with around half a million records

    - by Shibu
    I have an input txt file which holds data in the form of records (each row is a record, much like a row in a DB table) and I need to find duplicate values. For example:

        Rec1: ACCOUNT_NBR_1*NAME_1*VALUE_1
        Rec2: ACCOUNT_NBR_2*NAME_2*VALUE_2
        Rec3: ACCOUNT_NBR_1*NAME_3*VALUE_3

    In the above set, Rec1 and Rec3 are considered to be duplicates, as their account numbers are the same (ACCOUNT_NBR_1). Note: the input file shown above is a delimited file (the delimiter being *); however, the file can also be a fixed-length file where each column starts and ends at specified positions. I am currently doing this with the following logic:

        Loop thru each ACCOUNT NUMBER
            Loop thru each line of the txt file and check if this is repeated.
            If repeated, record the same in a hashtable.
        End
        End

    And I am using the 'Pattern' and 'BufferedReader' Java APIs to perform the above task. But since it is taking a long time, I would like to know a better way of handling it. Thanks, Shibu
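
    One common alternative is a single pass over the file, keeping every account number already seen in a HashSet and flagging a record as a duplicate the moment its key is already present. This is just a sketch, assuming a '*'-delimited layout with the account number in the first column (a fixed-width file would swap the split for a substring call); the file name is a placeholder:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.HashSet;
        import java.util.Set;

        public class DuplicateAccountFinder {
            public static void main(String[] args) throws IOException {
                Set<String> seen = new HashSet<String>();
                try (BufferedReader reader = new BufferedReader(new FileReader("records.txt"))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        // First column up to the '*' delimiter is the account number.
                        String accountNbr = line.split("\\*", 2)[0];
                        if (!seen.add(accountNbr)) {
                            // add() returns false when the key was already present.
                            System.out.println("Duplicate account: " + accountNbr);
                        }
                    }
                }
            }
        }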

  • Elevating UAC via .bat file?

    - by jslaker
    Pretty straightforward one that I'm having trouble finding an answer to. serverfault previously helped me find a way to automate Windows updates without using WSUS. It's working fantastically, but to run it over the network you have to first mount a shared drive. That's pretty simple on XP, since you just mount the drive and run the updater. On Vista and W7, though, this all has to be done with elevated privileges to work correctly. The UAC account can't see network drives mounted by the regular user, so in order to get everything working I have to mount the share via net use from an elevated shell. I'd like to automate mounting this share and launching the updater via a simple .bat file. I could probably just instruct everybody to right-click and "Run as Administrator" on the .bat file, but I'd like to keep things as simple as possible and have the .bat automatically prompt the user to elevate their privileges. Since these computers don't belong to us, I can't count on anything like PowerShell being installed, so that rules out any solution along those lines; I pretty much have to rely on things that would be included in an RTM Vista install. I'm hoping I'm just missing something obvious here. :)

  • Java File manipulation

    - by user69514
    So I have an application with a JFileChooser from which I select a file to read. Then I change some words and write a new file. The problem I am having is that when I write the new file, it's saved in the project directory. How do I save it in the same directory as the file that I chose using the JFileChooser? Note: I don't want to use the JFileChooser to choose the location. I just need to save the file in the same directory as the original file that I read.
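
    One way to do this is to build the output File relative to the parent directory of the selection rather than from a bare file name. A minimal, self-contained sketch of the idea follows; the output file name prefix and written content are placeholders:

        import java.io.File;
        import java.io.FileWriter;
        import java.io.IOException;
        import javax.swing.JFileChooser;

        public class SaveBesideOriginal {
            public static void main(String[] args) throws IOException {
                JFileChooser chooser = new JFileChooser();
                if (chooser.showOpenDialog(null) == JFileChooser.APPROVE_OPTION) {
                    File original = chooser.getSelectedFile();
                    // Anchor the new file in the directory of the chosen file,
                    // not in the working (project) directory.
                    File output = new File(original.getParentFile(), "modified-" + original.getName());
                    try (FileWriter writer = new FileWriter(output)) {
                        writer.write("replacement text goes here");
                    }
                }
            }
        }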

  • In which document do file specifications belong?

    - by Andrew
    In which document would a file specification belong? Perhaps the file is used as an input to a third-party system. Would the specification belong in its own document? Or would it be better to put it in the functional or design spec? Or somewhere else? When I say file specification, I mean a description of the file's format (CSV, fixed width, etc.), its columns, data types, and so on. Also, where should you document how the file is generated, i.e. the business rules/algorithms used to generate it?

  • Windows batch/script code to conditionally process based on date value in text file

    - by CarolinaJay65
    I am using Windows XP. The code can be in a batch file or a VBS script; I intend to use the Windows scheduler to run it. I need code to read a date from a text file (it could be the only line in the text file, or the date could be included in the filename; I control the process that generates the file). The code would then need to evaluate the text file's date against the current date to confirm that it is from the prior month. I'm starting to build a process to run 1st-of-the-month jobs once the monthly data has been refreshed. I'm new to building this kind of process with batch/script files. Thanks for your time.

  • Split a binary file into chunks c++

    - by L4nce0
    I've been bashing my head against trying to divide a file up into chunks for the purpose of sending it over sockets. I can read/write a file easily without splitting it into chunks. The code below runs and sort of works: it will write a text file, but with a garbage character. If this were just for txt files, no problem, but JPEGs aren't working with said garbage. I've been at it for a few days, so I've done my research, and it's time to get some help. I do want to stick strictly to binary readers, as this needs to handle any file. I've seen a lot of slick examples out there (none of them worked for me with jpgs), mostly something along the lines of while(file)... I subscribe to the "if you know the size, use a for-loop, not a while-loop" camp. Thank you for the help!!

        vector<char*> readFile(const char* fn) {
            vector<char*> v;
            ifstream::pos_type size;
            char* memblock;
            ifstream file;
            file.open(fn, ios::in | ios::binary | ios::ate);
            if (file.is_open()) {
                size = fileS(fn);
                file.seekg(0, ios::beg);
                int bs = size / 3; // arbitrary. Actual program will use the socket send size
                int ws = 0;
                int i = 0;
                for (i = 0; i < size; i += bs) {
                    if (i + bs > size)
                        ws = size % bs;
                    else
                        ws = bs;
                    memblock = new char[ws];
                    file.read(memblock, ws);
                    v.push_back(memblock);
                }
            } else {
                exit(-4);
            }
            return v;
        }

        int main(int argc, char **argv) {
            vector<char*> v = readFile("foo.txt");
            ofstream myFile("bar.txt", ios::out | ios::binary);
            for (vector<char*>::iterator it = v.begin(); it != v.end(); ++it) {
                myFile.write(*it, strlen(*it));
            }
        }

  • Granular Clipboard Control in Oracle IRM

    - by martin.abrahams
    One of the main leak prevention controls that customers are looking for is clipboard control. After all, there is little point in controlling access to a document if authorised users can simply make unprotected copies by use of the cut and paste mechanism. Oddly, for such a fundamental requirement, many solutions only offer very simplistic clipboard control - and require the customer to make an awkward choice between usability and security.

    In many cases, clipboard control is simply an ON-OFF option. By turning the clipboard OFF, you disable one of the most valuable edit functions known to man. Try working for any length of time without copying and pasting, and you'll soon appreciate how valuable that function is. Worse, some solutions disable the clipboard completely - not just for the protected document but for all of the various applications you have open at the time. Normal service is only resumed when you close the protected document. In this way, policy enforcement bleeds out of the particular assets you need to protect and interferes with the entire user experience.

    On the other hand, turning the clipboard ON satisfies a fundamental usability requirement - but also makes it really easy for users to create unprotected copies of sensitive information, maliciously or otherwise. All they need to do is paste into another document. If creating unprotected copies is this simple, you have to question how much you are really gaining by applying protection at all. You may not be allowed to edit, forward, or print the protected asset, but all you need to do is create a copy and work with that instead. And that activity would not be tracked in any way.

    So, a simple ON-OFF control creates a real tension between usability and security. If you are only using IRM on a small scale, perhaps security can outweigh usability - the business can put up with the restriction if it only applies to a handful of important documents. But try extending protection to large numbers of documents and large user communities, and the restriction rapidly becomes really unwelcome.

    I am aware of one solution that takes a different tack. Rather than disable the clipboard, pasting is always permitted, but protection is automatically applied to any document that you paste into. At first glance, this sounds great - protection travels with the content. However, at any scale this model may not be so appealing once you've had to deal with support calls from users who have accidentally applied protection to documents that really don't need it - which would be all too easily done. This may help control leakage, but it also pollutes the system with documents that have policies applied with no obvious rhyme or reason, and it can seriously inconvenience the business by making non-sensitive documents difficult to access. And what happens if you paste some protected content into an already protected document? Which policy applies?

    There are no prizes for guessing that Oracle IRM takes a rather different approach.

    The Oracle IRM Approach

    Oracle IRM offers a spectrum of clipboard controls between the extremes of ON and OFF, and it leverages the classification-based rights model to give granular control that satisfies both security and usability needs. Firstly, we take it for granted that if you have EDIT rights, of course you can use the clipboard within a given document. Why would we force you to retype a piece of content that you want to move from HERE... to HERE...?

    If the pasted content remains in the same document, it is equally well protected whether it be at the beginning, middle, or end - or all three. So, the first point is that Oracle IRM always enables the clipboard if you have the right to edit the file.

    Secondly, whether we enable or disable the clipboard, we only affect the protected document. That is, you can continue to use the clipboard in the usual way for unprotected documents and applications regardless of whether the clipboard is enabled or disabled for the protected document(s). And if you have multiple protected documents open, each may have the clipboard enabled or disabled independently, according to whether you have Edit rights for each. So, even for the simplest cases - the ON-OFF cases - Oracle IRM adds value by containing the effect to the protected documents rather than to the whole desktop environment.

    Now to the granular options between ON and OFF. Thanks to our classification model, we can define rights that enable pasting between documents in the same classification - ie. between documents that are protected by the same policy. So, if you are working on this month's financial report and you want to pull some data from last month's report, you can simply cut and paste between the two documents. The two documents are classified the same way, subject to the same policy, so the content is equally safe in both documents. However, if you try to paste the same data into an unprotected document or a document in a different classification, you can be prevented. Thus, the control balances legitimate user requirements to allow pasting with legitimate information security concerns to keep data protected.

    We can take this further. You may have the right to paste between related classifications of document. So, the CFO might want to copy some financial data into a board document, where the two documents are sealed to different classifications. The CFO's rights may well allow this, as it is a reasonable thing for a CFO to want to do. But policy might prevent the CFO from copying the same data into a classification that is accessible to external parties.

    The above option, to copy between classifications, may be for specific classifications or open-ended. That is, your rights might enable you to go from A to B but not to C, or you might be allowed to paste to any classification subject to your EDIT rights. As for so many features of Oracle IRM, our classification-based rights model makes this type of granular control really easy to manage - you simply define that pasting is permitted between classifications A and B, but omit C. Or you might define that pasting is permitted between all classifications, but not to unprotected locations. The classification model enables millions of documents to be controlled by a few such rules.

    Finally, you MIGHT have the option to paste anywhere - such that unprotected copies may be created. This is rare, but a legitimate configuration for some users, some use cases, and some classifications - but not something that you have to permit simply because the alternative is too restrictive. As always, these rights are defined in user roles - so different users are subject to different clipboard controls as required in different classifications.

    So, where most solutions offer just two clipboard options - ON-OFF or ON-but-encrypt-everything-you-touch - Oracle IRM offers real granularity that leverages our classification model. Indeed, I believe it is the lack of a classification model that makes such granularity impractical for other IRM solutions, because the matrix of rules for controlling pasting would be impossible to manage - there are so many documents to consider, and more are being created all the time.

  • How to pause/resume transfer of large files?

    - by Olivier Lalonde
    I recently had to copy about 20 GB of data, split between about 20 files, from my laptop to an external hard drive. Since this operation takes quite a while (at ~560 kB/s), I was wondering if there was any way to pause the transfer and resume it later (in case I need to interrupt it). As a side question, is there any performance difference between copying from the terminal vs. copying from Nautilus?

  • What file manager/uploader do you use with your embedded WYSIWYG editor?

    - by Wookai
    I am looking for a WYSIWYG editor to allow users to edit part of their website. I have already worked with TinyMCE and have heard about CKEditor, both of which look great. However, they both lack a free (PHP) file manager/uploader, as they sell their own tools to make some money. I found a few free alternatives for TinyMCE, the best one (for my needs) being PHP Letter's, but I was wondering what the community uses. Do you buy the "official" file manager? Do you code your own? Or do you have a great (free) alternative?

  • What is the ideal file size for a web page? [closed]

    - by Rob
    Possible Duplicate: Is there a maximum size that web pages should be kept under? What is the ideal file size for a web page? Specifically, when it comes to images, what should the total file size be for a web page that includes several images? I tend to compress images down as much as possible before they start to visually lose quality. We run several CMS websites and the clients tend to ask this question a lot! I'd love to hear another view on it.

  • Uploading multiple files asynchronously by blueimp jquery-fileupload

    - by Ryo
    I'm using the jQuery File Upload library (http://github.com/blueimp/jQuery-File-Upload), and I've been stuck figuring out how to use it while satisfying the following conditions: the page has multiple file input fields surrounded by a form tag; users can attach multiple files to each input field; all files are sent to the server when the button is clicked, not when files are attached to the input fields; and the upload is done asynchronously. Say the page has 3 input fields with their name attributes being "file1[]", "file2[]" and "file3[]"; the request payload should then look like {file1: [ array of files on file1[] ], file2: [ array of files on file2[] ], ...}. Here's a jsFiddle; it's behaving weirdly so far, in that it sends the POST request twice and the first one is cancelled: http://jsfiddle.net/BAQtG/24/ The core part of the JS code looks like this:

        $(document).ready(function(){
            var filesList = [];
            var elem = $("form");
            file_upload = elem.fileupload({
                formData: {extra: 1},
                autoUpload: false,
                fileInput: $("input:file"),
            }).on("fileuploadadd", function(e, data){
                filesList.push(data.files[0]);
            });
            $("button").click(function(){
                file_upload.fileupload('send', {files: filesList});
            });
        })

    Does anybody have an idea how to get this to work? Update: thanks to @CBroe's comment, the issue of the request being sent twice is now fixed. However, the keys of the request parameters are not correctly set. Here's the updated jsFiddle: http://jsfiddle.net/BAQtG/27/
