Search Results

Search found 81445 results on 3258 pages for 'file command'.

  • Information about a file or directory

    - by Tim
    In Linux, the information about a file or directory is stored in its inode. I was wondering: what is the corresponding data structure for a file or directory in Windows 7? How do I get this information about a file or directory in Linux and in Windows 7, from the terminal and from the command prompt? Is the owner of a file or directory always its creator, and can the owner be changed? Is there a creation timestamp for a file in Linux and in Windows 7, and how do I get it? Thanks and regards!
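
    A minimal sketch of how this metadata can be queried from a shell, assuming GNU coreutils on Linux (on Windows 7, dir /Q shows the owner and the file's Properties dialog shows the timestamps; myfile.txt and /dev/sdXN below are placeholders):

        # Print the inode metadata for a file: mode, owner, group, size, atime/mtime/ctime
        stat myfile.txt

        # Owner and group only
        stat -c '%U %G' myfile.txt

        # Note: ext3 has no creation timestamp; ext4 records one (crtime), but reading it
        # needs root, e.g.  debugfs -R "stat /path/inside/filesystem" /dev/sdXN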

    Read the article

  • How to combine wildcards and spaces (quotes) in a Windows command?

    - by Jan Fabry
    I want to remove directories of the following format: C:\Program Files\FogBugz\Plugins\cache\[email protected]_NN NN is a number, so I want to use a wildcard (this is part of a post-build step in Visual Studio). The problem is that I need to combine quotes around the path name (for the space in Program Files) with a wildcard to match the end of the path. I already found out that rd is the remove command that accepts wildcards, but where do I put the quotes? I have tried no ending quote (works for dir), ...example.com*", ...example.com"*, ...example.com_??", ...cache\"[email protected]*, ...cache"\[email protected]*, but none of them work. (How many commands to remove a file/directory are there in Windows anyway? And why do they all differ in capabilities?)

    Read the article

  • Not able to delete file from the server?

    - by kvijayhari
    I have a file called picture-list.php on my website. When I look at it through the FTP client, it shows two files with the same name but different file sizes: picture-list.php at 19818 bytes and picture-list.php at 9063 bytes. When I select the 9063-byte file and delete it over FTP, it deletes the 19818-byte file instead. I then used the command prompt to list the files and saw that there really were two files, one with the original name and another with a space before the filename (" picture-list.php"). I tried to move and delete that file, but nothing works. What may be the issue?
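
    If shell access to the server is available, a leading-space filename can usually be spotted and removed like this (a sketch; the name in the rm command must match the stored name byte for byte):

        # Show filenames with shell quoting so a leading space becomes visible
        printf '%q\n' *.php

        # Remove the copy whose name begins with a space (the space must be quoted)
        rm ' picture-list.php'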

    Read the article

  • What command line tools for monitoring host network activity on linux do you use?

    - by user27388
    What command line tools are good for reliably monitoring network activity? I have used ifconfig, but an office colleague said that its statistics are not always reliable. Is that true? I have recently used ethtool, but is it reliable? What about just reading the 'files' under /proc/net? Is that any better? EDIT: I'm interested in Tx/Rx packets and Tx/Rx bytes, but most importantly in drops or errors and why the drop/error might have occurred.
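
    A few standard places to read these per-interface counters from, as a sketch (eth0 is a placeholder):

        # Raw packet/byte/error/drop counters straight from the kernel
        cat /proc/net/dev

        # The same counters, formatted per interface (iproute2)
        ip -s link show eth0

        # Driver- and ring-buffer-level statistics, where the NIC driver exposes them
        ethtool -S eth0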

    Read the article

  • TrueCrypt drive letter not available

    - by Tono Nam
    With C# or a batch file I mount a TrueCrypt volume located at A:\volumeTrueCrypt.tc. With C# I do:

        static void Main(string[] args)
        {
            var p = Process.Start(
                fileName:@"C:\Program Files\TrueCrypt\TrueCrypt.exe",
                arguments:@"/v a:\volumetruecrypt.tc /lw /a /p truecrypt"
            );
            p.WaitForExit();
        }

    The alternative is to run the command on the command line as:

        C:\Windows\system32>"C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /lw /a /p truecrypt

    Either way I get an error. Why do I get that error? I was able to run that command the first time. The moment I dismounted the volume and tried to mount it again, I got that error. I know that drive letter W is available because it shows as an available letter in TrueCrypt if I open it manually. If I then click the Mount button and type the password truecrypt (truecrypt is the password), it successfully mounts on drive W. Why am I not able to mount it from the command line?! If I change the drive letter on the command line it works, but I want to use drive W. In other words, executing "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /lz /a /p truecrypt will successfully mount that volume on drive Z, but I do not want to mount it on drive Z, I want to mount it on drive W. The first time I ran the batch it ran fine. Also, if I restart my computer I believe it would work. More info on how to use TrueCrypt through the command line can be found at: http://www.truecrypt.org/docs/?s=command-line-usage
    Edit: I was also investigating when this error occurs. In order to generate this error you need to follow these steps:
    1) Execute the command (note the /q argument at the end for quiet): "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /ln /a /p truecrypt /q
       Here "C...TrueCrypt.exe" = location where TrueCrypt is installed; /v "path" = location of the volume; /n = drive letter N; /p truecrypt = the password is "truecrypt"; /q = execute in quiet mode, do not show a window. Note I am mounting to drive letter N.
    2) Now the volume should be mounted.
    3) Open TrueCrypt and manually dismount that volume (without using the command line).
    4) Attempt to run the same command line (without the /q, so you see the error): "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /ln /a /p truecrypt
    5) An error should show up.
    So the problem occurs when I manually dismount the volume. If I dismount it from the command line I get no errors. I think this is a TrueCrypt bug.

    Read the article

  • PDFtk Password Protection Help

    - by Dave W.
    I am using Ubuntu 11.10 and am looking for a solution to password protect a bunch of PDF files in a directory in batch. I came across PDFtk and it looks like it might do what I need, but I've reviewed the command line PDFtk examples and can't figure out if there is a way to do it in batch without having to individually specify the output file name for every file. I'm hoping a command-line guru can take a look at the PDFtk syntax and tell me if there is some trick / command that will allow me to password protect a directory of PDF files (e.g., *.pdf) and overwrite the existing files using the same name, or consistently rename the individual output files without having to specify each output name individually. Here's a link to the PDFtk command line examples page: http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ Thanks for your help. I think I've answered my own question. Here's a bash script that appears to do the trick. I'd welcome help evaluating why the code I've commented out doesn't work...

        #!/bin/bash
        # Created by Dave, 2012-02-23
        # This script uses PDFtk to password protect every PDF file
        # in the directory specified. The script creates a directory named "protected_[DATE]"
        # to hold the password protected version of the files.
        #
        # I'm using the "user_pw" parameter,
        # which means no one will be able to open or view the file without
        # the password.
        #
        # PDFtk must be installed for this script to work.
        #
        # Usage: ./protect_with_pdftk.bsh [FILE(S)]
        # [FILE(S)] can use wildcard expansion (e.g., *.pdf)

        # This part isn't working.... ignore. The goal is to avoid errors if the
        # directory to be created already exists by only attempting to create
        # it if it doesn't exist
        #
        #TARGET_DIR="protected_$(date +%F)"
        #if [ -d "$TARGET_DIR" ]
        #then
        #echo
        # echo "$TARGET_DIR directory exists!"
        #else
        #echo
        # echo "$TARGET_DIR directory does not exist!"
        #fi
        #

        mkdir protected_$(date +%F)
        for i in *pdf ; do pdftk "$i" output "./protected_$(date +%F)/$i" user_pw [PASSWORD]; done
        echo "Complete. Output is in the directory: ./protected_$(date +%F)"
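
    On the commented-out block: as written it only reports whether the directory exists and never skips the unconditional mkdir below it, so that mkdir still errors on a second run. A guarded version might look like this (a sketch; mkdir -p alone would also do the job):

        TARGET_DIR="protected_$(date +%F)"
        if [ ! -d "$TARGET_DIR" ]; then
            mkdir "$TARGET_DIR"        # create the directory only when it is missing
        fi
        # or simply:  mkdir -p "$TARGET_DIR"   (never errors if it already exists)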

    Read the article

  • krunner unreadable black text on black background. File a bug? Where?

    - by user52784
    I'm running Kubuntu 12.04 beta 2 (up to date). I already tried creating a user from scratch, but the problem can still be reproduced. I'll make it simple: with any of the available themes (Air, Air for Netbooks, Oxygen), whenever I disable desktop effects, KRunner instantly becomes black (with black fonts), rendering itself useless because of its unreadable text. Here is a screenshot I took: http://i42.tinypic.com/348sz7n.jpg The weird thing is that with effects enabled, KRunner is "light gray" and perfectly functional. What can I do? Should I file a bug? If yes, where: on the KDE bug tracker or the Kubuntu one? Thanks in advance!

    Read the article

  • USB device Set Attribute in C#

    - by p19lord
    I have this bit of code:

        DriveInfo[] myDrives = DriveInfo.GetDrives();
        foreach (DriveInfo myDrive in myDrives)
        {
            if (myDrive.DriveType == DriveType.Removable)
            {
                string path = Convert.ToString(myDrive.RootDirectory);
                DirectoryInfo mydir = new DirectoryInfo(path);
                String[] dirs = new string[] {Convert.ToString(mydir.GetDirectories())};
                String[] files = new string[] {Convert.ToString(mydir.GetFiles())};
                foreach (var file in files)
                {
                    File.SetAttributes(file, ~FileAttributes.Hidden);
                    File.SetAttributes(file, ~FileAttributes.ReadOnly);
                }
                foreach (var dir in dirs)
                {
                    File.SetAttributes(dir, ~FileAttributes.Hidden);
                    File.SetAttributes(dir, ~FileAttributes.ReadOnly);
                }
            }
        }

    I have a problem: it tries the code on the floppy disk drive first, and because there is no floppy disk in it, it throws the error "The device is not ready". How can I prevent that?

    Read the article

  • Inconsistent file downloads of (what should be) the same file

    - by Austin A.
    I'm working on a system that archives large collections of timetstamped images. Part of the system deals with saving an image to a growing .zip file. This morning I noticed that the log system said that an image was successfully downloaded and placed in the zip file, but when I downloaded the .zip (from an apache alias running on our server), the images didn't match the log. For example, although the log said that camera 3484 captured on January 17, 2011, when I download from the apache alias, the downloaded zip file only contains images up to January 14. So, I sshed onto the server, and unzipped the file in its own directory, and that zip file has images from January 14 to today (January 17). What strikes me as odd is that this should be the exact same file as the one I downloaded from the apache alias. Other experiments: I scp-ed the file from the server to my local machine, and the zip file has the newer images. But when I use an SCP client (in this case, Fugu for OSX), I get the zip file for the older images. In short: unzipping a file on the server or after downloading through scp or after downloading through wget gives one zip file, but unzipping a file from Chrome, Firefox, or SCP client gives a different zip file, when they should be exactly the same. Unzipping on the server... [user@server ~]$ cd /export1/amos/images/2011/84/3484/00003484/ [user@server 00003484]$ ls -la total 6180 drwxr-sr-x 2 user groupname 24 Jan 17 11:20 . drwxr-sr-x 4 user groupname 36 Jan 11 19:58 .. -rw-r--r-- 1 user groupname 6309980 Jan 17 12:05 2011.01.zip [user@server 00003484]$ unzip 2011.01.zip Archive: 2011.01.zip extracting: 20110114_140547.jpg extracting: 20110114_143554.jpg replace 20110114_143554.jpg? [y]es, [n]o, [A]ll, [N]one, [r]ename: y extracting: 20110114_143554.jpg extracting: 20110114_153458.jpg (...bunch of files...) extracting: 20110117_170459.jpg extracting: 20110117_173458.jpg extracting: 20110117_180501.jpg Using the wget through apache alias. local:~ user$ wget http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip --12:38:13-- http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip => `2011.01.zip' Resolving example.com... ip.ip.ip.ip Connecting to example.com|ip.ip.ip.ip|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 6,327,747 (6.0M) [application/zip] 100% [=====================================================================================================>] 6,327,747 1.03M/s ETA 00:00 12:38:56 (143.23 KB/s) - `2011.01.zip' saved [6327747/6327747] local:~ user$ unzip 2011.01.zip Archive: 2011.01.zip extracting: 20110114_140547.jpg (... same as before...) extracting: 20110117_183459.jpg Using scp to grab the zip local:~ user$ scp user@server:/export1/amos/images/2011/84/3484/00003484/2011.01.zip . 2011.01.zip 100% 6179KB 475.3KB/s 00:13 local:~ user$ unzip 2011.01.zip Archive: 2011.01.zip extracting: 20110114_140547.jpg (...same as before...) extracting: 20110117_183459.jpg Using Fugu to download 2011.01.zip from /export1/amos/images/2011/84/3484/00003484/ gives images 20110113_090457.jpg through 201100114_010554.jpg Using Firefox to download 2011.01.zip from http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip gives images 20110113_090457.jpg through 201100114_010554.jpg Using Chrome gives same results as Firefox. Relevant section from apache httpd.conf: # ScriptAlias: This controls which directories contain server scripts. 
        # ScriptAliases are essentially the same as Aliases, except that
        # documents in the realname directory are treated as applications and
        # run by the server when requested rather than as documents sent to the client.
        # The same rules about trailing "/" apply to ScriptAlias directives as to
        # Alias.
        #
        ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
        Alias /zipfiles/ /export1/amos/images/
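
    One way to pin down which copies really differ is to checksum the file at each stage instead of judging by the unzipped contents; a sketch (the server path is taken from the post, everything else is a suggestion):

        # On the server, against the file Apache is supposed to serve
        md5sum /export1/amos/images/2011/84/3484/00003484/2011.01.zip

        # Locally, against each downloaded copy (wget, scp, browser, SFTP client)
        md5sum 2011.01.zip

    If only the HTTP downloads mismatch, that points at a cache or proxy in front of Apache rather than at the file itself.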

    Read the article

  • File Sync Solution for Batch Processing (ETL)

    - by KenFar
    I'm looking for a slightly different kind of sync utility - not one designed to keep two directories identical, but rather one intended to keep files flowing from one host to another. The context is a data warehouse that currently has a custom-developed solution that moves 10,000 files a day, some of which are 1+ GB gzipped files, between Linux servers via ssh. Files are produced by the extract process, then moved to the transform server where a transform daemon is waiting to pick them up. The same process happens between transform and load. Once the files are moved they are typically archived on the source for a week, and the downstream process likewise moves them to temp then archive as it consumes them. So, my requirements and desires:
    - It is never used to refresh updated files - only used to deliver new files.
    - Because it's delivering files to downstream processes, it needs to rename the file once done so that a partial file doesn't get picked up.
    - In order to simplify recovery, it should keep a copy of the source files - but rename them or move them to another directory.
    - If the transfer fails (network down, file system full, permissions, file locked, etc.), then it should retry periodically - and never fail in a non-recoverable way, or in a way that sends the file twice or never sends the file.
    - It should be able to copy files to 2+ destinations.
    - It should have a consolidated log so that it's easy to find problems.
    - It should have an optional checksum feature.
    Any recommendations? Can Unison do this well?
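
    Since rsync is on the table: by default it writes each incoming file under a temporary dot-name and renames it into place only when the transfer finishes, which already gives the "no partial pickup" behaviour (as long as --inplace is not used). Delivering each pair index-last can then be done with two filtered passes; a sketch, where the *.gz / *.xml patterns, paths and host are placeholders:

        # Pass 1: the large data files
        rsync -av --include='*/' --include='*.gz'  --exclude='*' ./outbox/ transform-host:/inbox/

        # Pass 2: the small XML indexes, sent only after the data files are in place,
        # so the waiting daemon never picks up a half-delivered pair
        rsync -av --include='*/' --include='*.xml' --exclude='*' ./outbox/ transform-host:/inbox/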

    Read the article

  • Windows 7 delayed file delete

    - by GregoryM
    I'm stuck with a pretty rare problem that happens only on Windows 7. Every time I delete a file with the *.exe extension through Explorer, the file doesn't get deleted immediately; I'm forced to wait around one to two minutes before the system deletes it. The main problem is that I cannot develop in this situation, because every time I build my solution the old executable gets 'deleted' but is still there, so the new one cannot be created by Visual Studio. This problem also breaks Steam updates and a few other installers. A freshly installed Win7 doesn't have this kind of trouble, so I guess it must be some bad registry entries or some service. Browsing the internet for solutions I found only this: http://www.sevenforums.com/software/72091-several-minute-delay-when-deleting-any-exe-file.html. But the solution the author found is not working (change the userName :)). Are there any ideas on how to find what causes this? BTW: when I move the file to the Recycle Bin, no delay occurs, and when I delete the file with Total Commander there is no delay either. Tech details: Windows 7 x64 Ultimate. UPD: maybe some shadow copying or System Restore service (though I have System Restore turned off) is blocking the files? I can't even guess...

    Read the article

  • How to simulate inner join on very large files in java (without running out of memory)

    - by Constantin
    I am trying to simulate SQL joins using java and very large text files (INNER, RIGHT OUTER and LEFT OUTER). The files have already been sorted using an external sort routine. The issue I have is I am trying to find the most efficient way to deal with the INNER join part of the algorithm. Right now I am using two Lists to store the lines that have the same key and iterate through the set of lines in the right file once for every line in the left file (provided the keys still match). In other words, the join key is not unique in each file so would need to account for the Cartesian product situations ... left_01, 1 left_02, 1 right_01, 1 right_02, 1 right_03, 1 left_01 joins to right_01 using key 1 left_01 joins to right_02 using key 1 left_01 joins to right_03 using key 1 left_02 joins to right_01 using key 1 left_02 joins to right_02 using key 1 left_02 joins to right_03 using key 1 My concern is one of memory. I will run out of memory if i use the approach below but still want the inner join part to work fairly quickly. What is the best approach to deal with the INNER join part keeping in mind that these files may potentially be huge public class Joiner { private void join(BufferedReader left, BufferedReader right, BufferedWriter output) throws Throwable { BufferedReader _left = left; BufferedReader _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _rightRecord = read(_right); } else { List<Record> leftList = new ArrayList<Record>(); List<Record> rightList = new ArrayList<Record>(); _leftRecord = readRecords(leftList, _leftRecord, _left); _rightRecord = readRecords(rightList, _rightRecord, _right); for( Record equalKeyLeftRecord : leftList ){ for( Record equalKeyRightRecord : rightList ){ write(_output, equalKeyLeftRecord, equalKeyRightRecord); } } } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } private Record read(BufferedReader reader) throws Throwable { Record record = null; String data = reader.readLine(); if( data != null ) { record = new Record(data.split("\t")); } return record; } private Record readRecords(List<Record> list, Record record, BufferedReader reader) throws Throwable { int key = record.getKey(); list.add(record); record = read(reader); while( record != null && record.getKey() == key) { list.add(record); record = read(reader); } return record; } private void write(BufferedWriter writer, Record left, Record right) throws Throwable { String leftKey = (left == null ? "null" : Integer.toString(left.getKey())); String leftData = (left == null ? "null" : left.getData()); String rightKey = (right == null ? "null" : Integer.toString(right.getKey())); String rightData = (right == null ? 
"null" : right.getData()); writer.write("[" + leftKey + "][" + leftData + "][" + rightKey + "][" + rightData + "]\n"); } public static void main(String[] args) { try { BufferedReader leftReader = new BufferedReader(new FileReader("LEFT.DAT")); BufferedReader rightReader = new BufferedReader(new FileReader("RIGHT.DAT")); BufferedWriter output = new BufferedWriter(new FileWriter("OUTPUT.DAT")); Joiner joiner = new Joiner(); joiner.join(leftReader, rightReader, output); } catch (Throwable e) { e.printStackTrace(); } } } After applying the ideas from the proposed answer, I changed the loop to this private void join(RandomAccessFile left, RandomAccessFile right, BufferedWriter output) throws Throwable { long _pointer = 0; RandomAccessFile _left = left; RandomAccessFile _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _pointer = _right.getFilePointer(); _rightRecord = read(_right); } else { long _tempPointer = 0; int key = _leftRecord.getKey(); while( _leftRecord != null && _leftRecord.getKey() == key ) { _right.seek(_pointer); _rightRecord = read(_right); while( _rightRecord != null && _rightRecord.getKey() == key ) { write(_output, _leftRecord, _rightRecord ); _tempPointer = _right.getFilePointer(); _rightRecord = read(_right); } _leftRecord = read(_left); } _pointer = _tempPointer; } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } UPDATE While this approach worked, it was terribly slow and so I have modified this to create files as buffers and this works very well. Here is the update ... 
private long getMaxBufferedLines(File file) throws Throwable { long freeBytes = Runtime.getRuntime().freeMemory() / 2; return (freeBytes / (file.length() / getLineCount(file))); } private void join(File left, File right, File output, JoinType joinType) throws Throwable { BufferedReader leftFile = new BufferedReader(new FileReader(left)); BufferedReader rightFile = new BufferedReader(new FileReader(right)); BufferedWriter outputFile = new BufferedWriter(new FileWriter(output)); long maxBufferedLines = getMaxBufferedLines(right); Record leftRecord; Record rightRecord; leftRecord = read(leftFile); rightRecord = read(rightFile); while( leftRecord != null && rightRecord != null ) { if( leftRecord.getKey().compareTo(rightRecord.getKey()) < 0) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) > 0 ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) == 0 ) { String key = leftRecord.getKey(); List<File> rightRecordFileList = new ArrayList<File>(); List<Record> rightRecordList = new ArrayList<Record>(); rightRecordList.add(rightRecord); rightRecord = consume(key, rightFile, rightRecordList, rightRecordFileList, maxBufferedLines); while( leftRecord != null && leftRecord.getKey().compareTo(key) == 0 ) { processRightRecords(outputFile, leftRecord, rightRecordFileList, rightRecordList, joinType); leftRecord = read(leftFile); } // need a dispose for deleting files in list } else { throw new Exception("DATA IS NOT SORTED"); } } if( leftRecord != null ) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); while(leftRecord != null) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } } else { if( rightRecord != null ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); while(rightRecord != null) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } } } leftFile.close(); rightFile.close(); outputFile.flush(); outputFile.close(); } public void processRightRecords(BufferedWriter outputFile, Record leftRecord, List<File> rightFiles, List<Record> rightRecords, JoinType joinType) throws Throwable { for(File rightFile : rightFiles) { BufferedReader rightReader = new BufferedReader(new FileReader(rightFile)); Record rightRecord = read(rightReader); while(rightRecord != null){ if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || 
joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } rightRecord = read(rightReader); } rightReader.close(); } for(Record rightRecord : rightRecords) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } } } /** * consume all records having key (either to a single list or multiple files) each file will * store a buffer full of data. The right record returned represents the outside flow (key is * already positioned to next one or null) so we can't use this record in below while loop or * within this block in general when comparing current key. The trick is to keep consuming * from a List. When it becomes empty, re-fill it from the next file until all files have * been consumed (and the last node in the list is read). The next outside iteration will be * ready to be processed (either it will be null or it points to the next biggest key * @throws Throwable * */ private Record consume(String key, BufferedReader reader, List<Record> records, List<File> files, long bufferMaxRecordLines ) throws Throwable { boolean processComplete = false; Record record = records.get(records.size() - 1); while(!processComplete){ long recordCount = records.size(); if( record.getKey().compareTo(key) == 0 ){ record = read(reader); while( record != null && record.getKey().compareTo(key) == 0 && recordCount < bufferMaxRecordLines ) { records.add(record); recordCount++; record = read(reader); } } processComplete = true; // if record is null, we are done if( record != null ) { // if the key has changed, we are done if( record.getKey().compareTo(key) == 0 ) { // Same key means we have exhausted the buffer. // Dump entire buffer into a file. The list of file // pointers will keep track of the files ... processComplete = false; dumpBufferToFile(records, files); records.clear(); records.add(record); } } } return record; } /** * Dump all records in List of Record objects to a file. Then, add that * file to List of File objects * * NEED TO PLACE A LIMIT ON NUMBER OF FILE POINTERS (check size of file list) * * @param records * @param files * @throws Throwable */ private void dumpBufferToFile(List<Record> records, List<File> files) throws Throwable { String prefix = "joiner_" + files.size() + 1; String suffix = ".dat"; File file = File.createTempFile(prefix, suffix, new File("cache")); BufferedWriter writer = new BufferedWriter(new FileWriter(file)); for( Record record : records ) { writer.write( record.dump() ); } files.add(file); writer.flush(); writer.close(); }

    Read the article

  • Flash AS3 load file xml

    - by Elias
    Hello, I'm just trying to load an XML file which can be anywhere on the hard drive. This is what I have done to browse for it, but later, when I try to load the file, it only looks in the same path as the SWF file. Here is the code:

        package {
            import flash.display.Sprite;
            import flash.events.*;
            import flash.net.*;

            public class cargadorXML extends Sprite {
                public var cuadro:Sprite = new Sprite();
                public var file:FileReference;
                public var req:URLRequest;
                public var xml:XML;
                public var xmlLoader:URLLoader = new URLLoader();

                public function cargadorXML() {
                    cuadro.graphics.beginFill(0xFF0000);
                    cuadro.graphics.drawRoundRect(0,0,100,100,10);
                    cuadro.graphics.endFill();
                    cuadro.addEventListener(MouseEvent.CLICK,browser);
                    addChild(cuadro);
                }

                public function browser(e:Event) {
                    file = new FileReference();
                    file.addEventListener(Event.SELECT,bien);
                    file.browse();
                }

                public function bien(e:Event) {
                    xmlLoader.addEventListener(Event.COMPLETE, loadXML);
                    req=new URLRequest(file.name);
                    xmlLoader.load(req);
                }

                public function loadXML(e:Event) {
                    xml=new XML(e.target.data);
                    //xml.name=file.name;
                    trace(xml);
                }
            }
        }

    When I open an XML file that isn't in the same directory as the SWF, it gives me a file-not-found error. Is there anything I can do? Because, for example, for MP3 there is a special class for loading the file; see http://www.flexiblefactory.co.uk/flexible/?p=46 Thanks

    Read the article

  • Unable to access Java-created file -- sometimes

    - by BlairHippo
    In Java, I'm working with code running under WinXP that creates a file like this:

        public synchronized void store(Properties props, byte[] data) {
            try {
                File file = filenameBasedOnProperties(props);
                if ( file.exists() ) {
                    return;
                }
                File temp = File.createTempFile("tempfile", null);
                FileOutputStream out = new FileOutputStream(temp);
                out.write(data);
                out.flush();
                out.close();
                file.getParentFile().mkdirs();
                temp.renameTo(file);
            } catch (IOException ex) {
                // Complain and whine and stuff
            }
        }

    Sometimes, when a file is created this way, it's just about totally inaccessible from outside the code (though the code responsible for opening and reading the file has no problem), even when the application isn't running. When accessed via Windows Explorer, I can't move, rename, delete, or even open the file. Under Cygwin, I get the following when I ls -l the directory:

        ls: cannot access [big-honkin-filename]
        total 0
        ?????????? ? ? ? ? ? [big-honkin-filename]

    As implied, the filenames are big, but under the 260-character max for XP (though they are slightly over 200 characters). To further add to the sense that my computer just wants me to feel stupid, sometimes the files created by this code are perfectly normal. The only pattern I've spotted is that once one file in the directory "locks", the rest are screwed. Has anybody ever run into something like this before, or does anyone have any insights into what's going on here?

    Read the article

  • How to compare two file structures in PHP?

    - by OM The Eternity
    I have a function which gives me the complete file structure up to n levels:

        function getDirectory($path = '.', $ignore = '') {
            $dirTree = array ();
            $dirTreeTemp = array ();
            $ignore[] = '.';
            $ignore[] = '..';
            $dh = @opendir($path);
            while (false !== ($file = readdir($dh))) {
                if (!in_array($file, $ignore)) {
                    if (!is_dir("$path/$file")) {
                        //display of file and directory name with their modification time
                        $stat = stat("$path/$file");
                        $statdir = stat("$path");
                        $dirTree["$path"][] = $file. " === ". date('Y-m-d H:i:s', $stat['mtime']) . " Directory == ".$path."===". date('Y-m-d H:i:s', $statdir['mtime']) ;
                    } else {
                        $dirTreeTemp = getDirectory("$path/$file", $ignore);
                        if (is_array($dirTreeTemp)) $dirTree = array_merge($dirTree, $dirTreeTemp);
                    }
                }
            }
            closedir($dh);
            return $dirTree;
        }

        $ignore = array('.htaccess', 'error_log', 'cgi-bin', 'php.ini', '.ftpquota');
        //function call
        $dirTree = getDirectory('.', $ignore);
        //file structure array print
        print_r($dirTree);

    Now, here is my requirement: I have two sites.
    - The Development/Test site, where I test all the changes.
    - The Production site, where I finally publish all the changes tested on the development site.
    Now, for example, I have tested an image upload on the Development/Test site and found it appropriate to publish to the Production site, so I will transfer the complete Development/Test DB details to the Production DB. But now I want to compare the file structures as well, so I can transfer the corresponding image file to the Production folder. There could be a situation where I have updated the image by editing it and uploading it with the same name; in this case the image file would already be present there, which rules out simple "file_exists" logic. So, for these types of situations, how can I compare the two file structures to get the synchronization done as required?

    Read the article

  • Modifying File while in use using Java

    - by Marquinio
    Hi all, I have a recurring Java JAR task that tries to modify a file every 60 seconds. The problem is that if the user is viewing the file, then the Java program is not able to modify it; I get the typical IOException. Does anyone know if there is a way in Java to modify a file currently in use? Or does anyone know what would be the best way to solve this problem? I was thinking of using the File canRead() and canWrite() methods to check whether the file is in use. If the file is in use, then I'm thinking of making a backup copy of the data that could not be written. Then, after 60 seconds, add some logic to check whether the backup file is empty or not: if the backup file is not empty, add its contents to the main file; if it is empty, just add the new data to the main file. Of course, the first thing I will always do is check whether the file is in use. Thanks for all your ideas.

    Read the article

  • File mkdirs() method not working in android/java

    - by Leif Andersen
    I've been pulling out my hair on this for a while now. The following method is supposed to download a file, and save it to the location specified on the hard drive. private static void saveImage(Context context, boolean backgroundUpdate, URL url, File file) { if (!Tools.checkNetworkState(context, backgroundUpdate)) return; // Get the image try { // Make the file file.getParentFile().mkdirs(); // Set up the connection URLConnection uCon = url.openConnection(); InputStream is = uCon.getInputStream(); BufferedInputStream bis = new BufferedInputStream(is); // Download the data ByteArrayBuffer baf = new ByteArrayBuffer(50); int current = 0; while ((current = bis.read()) != -1) { baf.append((byte) current); } // Write the bits to the file OutputStream os = new FileOutputStream(file); os.write(baf.toByteArray()); os.close(); } catch (Exception e) { // Any exception is probably a newtork faiilure, bail return; } } Also, if the file doesn't exist, it is supposed to make the directory for the file. (And if there is another file already in that spot, it should just not do anything). However, for some reason, the mkdirs() method never makes the directory. I've tried everything from explicit parentheses, to explicitly making the parent file class, and nothing seems to work. I'm fairly certain that the drive is writable, as it's only called after that has already been determined, also that is true after running through it while debugging. So the method fails because the parent directories aren't made. Can anyone tell me if there is anything wrong with the way I'm calling it? Also, if it helps, here is the source for the file I'm calling it in: https://github.com/LeifAndersen/NetCatch/blob/master/src/net/leifandersen/mobile/android/netcatch/services/RSSService.java Thank you

    Read the article

  • Unset the system immutable bit in Mac OS X

    - by skylarking
    In theory I believe you can unlock and remove the system-immutable bit with: chflags noschg /Path/To/File But how can you do this when you've set the bit as root? I have a file that is locked, and even running this command as root does not work, as the operation is not permitted. I tried booting into single-user mode to no avail. I seem to remember that even when you are in as root, the system is running at securelevel '1', and to be able to remove the system-immutable flag it needs to be at level '0'. Does this have something to do with the issue?
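
    A sketch of how this is usually checked and cleared, assuming the flag really is schg and the kernel securelevel is what blocks it (the path is a placeholder):

        # Show the BSD file flags (look for "schg" in the flags column)
        ls -lO /Path/To/File

        # schg can only be cleared while the securelevel is 0 or lower
        sysctl kern.securelevel

        # From single-user mode (where the securelevel is still 0), remount the root
        # filesystem read-write, then drop the flag
        /sbin/mount -uw /
        chflags noschg /Path/To/File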

    Read the article

  • Is there any reason this cronjob would fail in cron, but not on the command line?

    - by Treffynnon
    I have written a little one liner that will email me when a list of files changes - I used sha512 to generate a list of hashes and then periodically check that those hashes still match. */5 * * * * /usr/bin/sha512sum --status -c /sha512.sumlist && echo "Success" > /dev/null || echo "Check robots.txt and index.html in /var/www as staging sites are now potentially exposed to the world and the damned googlebot" | /usr/bin/mail -s "Default staging server files have changed" [email protected] It works fine on the command line with: /usr/bin/sha512sum --status -c /sha512.sumlist && echo "Success" > /dev/null || echo "Check robots.txt and index.html in /var/www as staging sites are now potentially exposed to the world and the damned googlebot" | /usr/bin/mail -s "Default staging server files have changed" [email protected] As soon as I run it as a cronjob though it emails every time it runs with the failure message instead of only when the sha512sum check should fail. Is there something silly I have missed in a rush? I forgot to mention that I am running an Ubuntu machine.
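
    One common culprit is that cron runs the job with a minimal environment and from a different working directory, so any relative paths inside /sha512.sumlist resolve differently than they do in an interactive shell. A hedged tweak (the cd into /var/www is an assumption based on the files being monitored):

        # Change into the directory the checksums were generated from before running the
        # check, so relative paths inside the sum list resolve the same way as they did
        # on the command line; the failure branch is unchanged from the original entry
        */5 * * * * cd /var/www && /usr/bin/sha512sum --status -c /sha512.sumlist && echo "Success" > /dev/null || echo "Check robots.txt and index.html in /var/www as staging sites are now potentially exposed to the world and the damned googlebot" | /usr/bin/mail -s "Default staging server files have changed" [email protected]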

    Read the article

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story : We have a need for a rock-solid reliable file mover process. We have source directories that are often being written to that we need to move files from. The files come in pairs - a big binary, and a small XML index. We get a CTL file that defines these file bundles. There is a process that operates on the files once they are in the destination directory; that gets rid of them when it's done. Would rsync do the best job, or do we need to get more complex? Long story as follows : We have multiple sources to pull from : one set of directories are on a Windows machine (that does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (Most of these are also Windows.) Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now, the SFTP servers, that we cannot presently run our own scripts on. For security reasons, we can't run the copy jobs on our AIX servers - they have no access to the source servers. We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines, and then deletes the files from the source. However, we're finding any number of bugs or poorly-handled error checking. None of us are Java experts, so fixing/improving this may be difficult. Concerns for us are: With a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rysnc will be very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this? Additional info We will be concerned about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are in the process of copying them; it waits until the small XML index file is present. Our current copy job are supposed to copy the XML file last. Sometimes the network has problems, sometimes the SFTP source servers crap out on us. Sometimes we typo the config files and a destination directory doesn't exist. We never want to lose a file due to this sort of error. We need good logs If you were presented with this, would you just script up some rsync? Or would you build or buy a tool, and if so, what would it be (or what technologies would it use?) I (and others on my team) are decent with Perl.
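
    Where the source side can run rsync over ssh (which rules out the SFTP-only servers but covers the Cygwin host), a sketch of the core "move files, but leave anything still being written alone" behaviour looks like this (host, paths and the 5-minute age threshold are placeholders):

        # Select only files untouched for at least 5 minutes, hand the list to rsync,
        # and delete each source file only after it has transferred cleanly
        cd /data/outbound
        find . -type f -mmin +5 -print0 |
          rsync -av --from0 --files-from=- --remove-source-files ./ user@aix-host:/data/inbound/

        # rsync writes each file under a temporary name and renames it into place when
        # complete, so the downstream consumer never sees a partial file (as long as
        # --inplace is not used)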

    Read the article

  • MSDeploy: error using runCommand provider to call remote .cmd file (timeout)

    - by mjw.
    We are running into an error when trying to use the MSDeploy "runCommand" provider to execute a .cmd file on a remote machine. The expected run time should be about 10 seconds, but MSDeploy only runs it for about 2-3 seconds, after which time error details are returned. Here is the complete MSDeploy "runCommand" command line text I am using: msdeploy.exe -verb:sync -source:runCommand="D:\web deploy tester\test_cmd.cmd",dontUseCommandExe=false,waitAttempts=5,waitInterval=1000 -dest:auto,computername=http://test-machine:89/MsDeployAgentService/,userName=aaa,password=bbb Here are the error details returned: Error 'Error: (4/21/2010 12:19:25 PM) An error occurred when the request was processed on the remote computer. Error: The process 'C:\WINDOWS\system32\cmd.exe' (command line '/c "D:\web deploy tester\test_cmd.cmd"') was terminated because it exceeded the wait time. Error count: 1. ' occurred in call to RunCommand Any ideas as to why this is happening and how to resolve it?
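
    The source settings in the command above appear to cap the wait at waitAttempts x waitInterval = 5 x 1000 ms, which is shorter than the roughly 10 seconds the script needs, so the remote agent kills cmd.exe before it finishes. A sketch with the wait budget raised well past the expected run time (the 60-second budget is only an illustration):

        msdeploy.exe -verb:sync ^
          -source:runCommand="D:\web deploy tester\test_cmd.cmd",dontUseCommandExe=false,waitAttempts=30,waitInterval=2000 ^
          -dest:auto,computername=http://test-machine:89/MsDeployAgentService/,userName=aaa,password=bbb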

    Read the article

  • Prevent command "del /s" from entering a folder

    - by jzuniga
    I need to recursively remove unnecessary files from an SVN repository, and I have the following batch file to do this:

        @echo on
        del /s ~*.*
        del /s *.~*
        del /s Thumbs.db

    However, this also deletes the entries under the .svn/ subfolders. Is there any way to prevent these commands from being executed under the .svn/ folders, so that it doesn't mess things up? Thanks in advance! EDIT: A solution using Bash (Cygwin) would also work for me, since I just need to do this once.
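
    For the Cygwin/Bash route mentioned in the EDIT, a sketch that prunes the .svn directories so nothing inside them is ever touched (run from the top of the working copy; the -print is there only so you can see what gets removed):

        find . -type d -name .svn -prune -o \
               -type f \( -name '~*' -o -name '*.~*' -o -name 'Thumbs.db' \) \
               -print -exec rm -f {} +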

    Read the article
