Search Results

Search found 68873 results on 2755 pages for 'flat file'.


  • File times differ by 1 second, causing Robocopy sync to fail

    - by csmba
    I am trying to use Robocopy to sync (/IMG) a folder on my PC with a shared network drive. The problem is that the file times differ by 1 second between the two locations (creation, modified, and access), so every time I run Robocopy it syncs the file again. The problem is the same if I delete the target file and robocopy it fresh: the new file still has times that are 1 second off. Environment details: Source: Windows 7 64-bit. Target: WD My Book World Edition NAS, 1 TB, which takes its time from the online NTP pool (pool.ntp.org). I don't know whether its file system is FAT or not.
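
    A possible workaround, not mentioned in the question: many NAS boxes expose a FAT-like file system whose timestamps have 2-second granularity, and Robocopy's /FFT switch tells it to compare times with that coarser resolution, so a 1-second difference no longer forces a re-copy. A minimal sketch with placeholder paths:

        rem Sketch only: /MIR mirrors the tree, /FFT assumes FAT file times
        rem (2-second granularity) so near-identical timestamps compare equal.
        robocopy C:\source \\nas\share /MIR /FFT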

    Read the article

  • PHP: Fastest possible way to read the contents of a file

    - by SoLoGHoST
    OK, I'm looking for the fastest possible way to read the entire contents of a file via PHP, given a filepath on the server; these files can also be huge, so it's very important that it does a read-only pass as fast as possible. Is reading it line by line faster than reading the entire contents at once? Though I remember reading somewhere that reading the entire contents can produce errors for huge files. Is this true?
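
    Not part of the original question, but a sketch of the two usual options: file_get_contents() reads the whole file in one call inside C and is normally fastest for a plain read, while chunked fread() keeps memory bounded for very large files. The path below is a placeholder.

        <?php
        $path = '/path/to/file';              // placeholder path

        // Option 1: whole file in one string (fastest for files that fit in memory).
        $all = file_get_contents($path);

        // Option 2: fixed-size chunks, so memory stays bounded for huge files.
        $fh = fopen($path, 'rb');
        while (!feof($fh)) {
            $chunk = fread($fh, 8192);        // 8 KB at a time
            // ... process $chunk ...
        }
        fclose($fh);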

    Read the article

  • Is there a way to efficiently yield every file in a directory containing millions of files?

    - by Josh Smeaton
    I'm aware of os.listdir, but as far as I can gather, that gets all the filenames in a directory into memory and then returns the list. What I want is a way to yield a filename, work on it, and then yield the next one, without reading them all into memory. Is there any way to do this? I worry about the case where filenames change, new files are added, and files are deleted while using such a method. Some iterators prevent you from modifying the collection during iteration, essentially by taking a snapshot of the state of the collection at the beginning and comparing that state on each move operation. If there is an iterator capable of yielding filenames from a path, does it raise an error if there are filesystem changes (adding, removing, or renaming files within the iterated directory) which modify the collection?

    There could potentially be a few cases that could cause the iterator to fail, and it all depends on how the iterator maintains state. Using S.Lott's example: filea.txt, fileb.txt, filec.txt. The iterator yields filea.txt. During processing, filea.txt is renamed to filey.txt and fileb.txt is renamed to filez.txt. When the iterator attempts to get the next file, if it were to use the filename filea.txt to find its current position in order to find the next file, and filea.txt is not there, what would happen? It may not be able to recover its position in the collection. Similarly, if the iterator were to fetch fileb.txt when yielding filea.txt, it could look up the position of fileb.txt, fail, and produce an error. If the iterator instead was able to somehow maintain an index, dir.get_file(0), then maintaining positional state would not be affected, but some files could be missed, as their indexes could be moved to an index 'behind' the iterator.

    This is all theoretical of course, since there appears to be no built-in (Python) way of iterating over the files in a directory. There are some great answers below, however, that solve the problem by using queues and notifications.

    Edit: The OS of concern is Red Hat. My use case is this: Process A is continuously writing files to a storage location. Process B (the one I'm writing) will be iterating over these files, doing some processing based on the filename, and moving the files to another location.

    Edit: Definition of valid: Adjective 1. Well grounded or justifiable, pertinent. (Sorry S.Lott, I couldn't resist.) I've edited the paragraph in question above.
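
    For reference (not from the question itself): modern Python does ship a lazy directory iterator, os.scandir (Python 3.5+), which wraps the OS-level readdir and yields entries one at a time instead of building the whole list. How it behaves when files are added, removed, or renamed mid-iteration is up to the underlying OS rather than Python. A minimal sketch:

        import os

        def iter_files(path):
            """Lazily yield regular-file names in `path` without listing it all at once."""
            with os.scandir(path) as entries:      # Python 3.6+ supports the context manager
                for entry in entries:
                    if entry.is_file():
                        yield entry.name

        # Hypothetical use matching the question's process B:
        # for name in iter_files("/data/incoming"):
        #     process_and_move(name)               # placeholder for the real work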

    Read the article

  • Need a JProgressBar to measure progress when copying directories and files

    - by user1815823
    I have the code below to copy directories and files, but I'm not sure where to measure the progress. Can someone help with where I can measure how much has been copied and show it in the JProgressBar?

        public static void copy(File src, File dest) throws IOException {
            if (src.isDirectory()) {
                if (!dest.exists()) {   // check whether the destination directory exists
                    dest.mkdir();
                    System.out.println("Directory copied from " + src + " to " + dest);
                }
                String files[] = src.list();
                for (String file : files) {
                    File srcFile = new File(src, file);
                    File destFile = new File(dest, file);
                    copy(srcFile, destFile);   // recurse (the original excerpt called copyFolder here)
                }
            } else {
                InputStream in = new FileInputStream(src);
                OutputStream out = new FileOutputStream(dest);
                byte[] buffer = new byte[1024];
                int length;
                while ((length = in.read(buffer)) > 0) {
                    out.write(buffer, 0, length);
                }
                in.close();
                out.close();
                System.out.println("File copied from " + src + " to " + dest);
            }
        }
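
    One possible answer, sketched rather than taken from the question: compute the total number of bytes up front, then add each written chunk to a running total inside the copy loop and push the percentage to the JProgressBar on the Swing event thread. The class and method names below are invented for illustration.

        import java.io.File;
        import javax.swing.JProgressBar;
        import javax.swing.SwingUtilities;

        class CopyProgress {
            private final long totalBytes;
            private long copiedBytes;
            private final JProgressBar bar;

            CopyProgress(File src, JProgressBar bar) {
                this.totalBytes = totalSize(src);
                this.bar = bar;
            }

            // Sum of all file lengths under src, so we know what 100% means.
            static long totalSize(File f) {
                if (f.isFile()) return f.length();
                long sum = 0;
                File[] children = f.listFiles();
                if (children != null) for (File c : children) sum += totalSize(c);
                return sum;
            }

            // Call this right after every out.write(buffer, 0, length) in the copy loop.
            void addBytes(long n) {
                copiedBytes += n;
                final int pct = (int) (100 * copiedBytes / Math.max(totalBytes, 1));
                SwingUtilities.invokeLater(() -> bar.setValue(pct));   // update the bar on the EDT
            }
        }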

    Read the article

  • Referencing Java resource files for ColdFusion

    - by Chimeara
    I am using a .jar file containing a .properties file in my ColdFusion (CF) code; however, it seems unable to find the .properties file when run from CF. My Java code is:

        String key = "";
        String value = "";
        try {
            File file = new File("src/test.properties");
            FileInputStream fileInput = new FileInputStream(file);
            Properties properties = new Properties();
            properties.load(fileInput);
            fileInput.close();
            Enumeration enuKeys = properties.keys();
            while (enuKeys.hasMoreElements()) {
                key = (String) enuKeys.nextElement();
                value = properties.getProperty(key);
                // System.out.println(key + ": " + value);
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
            key = "error";
        } catch (IOException e) {
            e.printStackTrace();
            key = "error";
        }
        return (key + ": " + value);

    I have my test.properties file in the project src folder and make sure it is selected when compiling. When run from Eclipse it gives the expected key and value; however, when run from CF I get the caught errors. My CF code is simply:

        propTest = CreateObject("java", "package.class");
        testResults = propTest.main2();

    Is there a special way to reference the .properties file so CF can access it, or do I need to include the file outside the .jar somewhere?
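
    A sketch of one common fix, not something stated in the question: the relative path "src/test.properties" only resolves against the JVM's working directory, which differs between Eclipse and ColdFusion. Loading the file through the class loader finds it inside the .jar regardless of where the JVM was started; this assumes test.properties ends up at the root of the jar.

        import java.io.InputStream;
        import java.util.Properties;

        public class PropsFromJar {
            public static String read(String key) throws Exception {
                Properties props = new Properties();
                // Resolved against the classpath (including the jar), not the working directory.
                try (InputStream in = PropsFromJar.class.getResourceAsStream("/test.properties")) {
                    if (in == null) {
                        return "error: /test.properties not found on the classpath";
                    }
                    props.load(in);
                }
                return key + ": " + props.getProperty(key);
            }
        }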

    Read the article

  • Can I use PHP's fwrite with 644 file permissions?

    - by filip
    I am trying to set up automated .htaccess updating. This clearly needs to be as secure as possible; however, right now the best I can do, file-permission-wise, is 666. What can I do, in either my server setup or my PHP code, so that my script's fwrite() call will work with 644 or better? For instance, is there a way to set my script(s) to run as the file's owner?

    Read the article

  • [Python] How to create a named temporary file in memory?

    - by conradlee
    I would like to use Python's tempfile module to create a temporary file that I will use for communication between processes (use of pipes is awkward). The documentation I've linked to above shows two functions that almost do what I want:

        tempfile.NamedTemporaryFile    # for creating named tempfiles
        tempfile.SpooledTemporaryFile  # for creating tempfiles in memory

    but actually I want a tempfile that is both named AND in memory. Any ideas?
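
    One workaround worth noting, assuming a Linux host (this is not stated in the question): create the NamedTemporaryFile inside a tmpfs mount such as /dev/shm. The file then has a real name that another process can open, while its contents live in RAM.

        import tempfile

        # Sketch only: /dev/shm is a RAM-backed tmpfs on most Linux systems.
        with tempfile.NamedTemporaryFile(dir="/dev/shm", delete=False) as tmp:
            tmp.write(b"data for the other process")
            print(tmp.name)   # pass this path to the consuming process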

    Read the article

  • Why can't I reserve 1,000,000,000 elements in my vector?

    - by vipersnake005
    When I run the following code, I get the output 1073741823.

        #include <iostream>
        #include <vector>
        using namespace std;

        int main()
        {
            vector<int> v;
            cout << v.max_size();
            return 0;
        }

    However, when I try to resize the vector to 1,000,000,000 with v.resize(1000000000); the program stops executing. How can I enable the program to allocate the required memory, when it seems that it should be able to? I am using MinGW on Windows 7 and I have 2 GB of RAM. Should it not be possible? In case it is not possible, can't I declare it as an array of integers and get away with it? But even that doesn't work.

    Another thing: suppose I were to use a file (which can easily handle so much data). How can I read and write to it at the same time? Using fstream file("file.txt", ios::out | ios::in); doesn't create a file in the first place. But supposing the file exists, I am unable to do reading and writing simultaneously. What I mean is this: let the contents of the file be 111111. Then if I run:

        #include <fstream>
        #include <iostream>
        using namespace std;

        int main()
        {
            fstream file("file.txt", ios::in | ios::out);
            char x;
            while (file >> x)
            {
                file << '0';
            }
            return 0;
        }

    Shouldn't the file's contents now be 101010 (read one character, then overwrite the next one with 0)? Or, in case the entire contents were read at once into some buffer, should there not be at least one 0 in the file (1111110)? But the contents remain unaltered. Please explain. Thank you.
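
    On the second half of the question, a sketch rather than a definitive answer: with a single fstream opened for update you must seek between a read and a write on the same stream, otherwise the write position is unspecified and the file can appear unchanged. The loop below reads one character, overwrites the following character with '0', and seeks again before the next read, so an existing file containing 111111 becomes 101010.

        #include <fstream>
        #include <iostream>

        int main()
        {
            // ios::in | ios::out does not create the file; it must already exist.
            std::fstream file("file.txt", std::ios::in | std::ios::out);
            if (!file) { std::cerr << "file.txt must already exist\n"; return 1; }

            char x;
            while (file.get(x)) {                    // read one character
                std::streampos pos = file.tellg();   // position just after it
                file.seekp(pos);                     // a seek is required before switching to writing
                if (!file.put('0')) break;           // overwrite the next character
                file.seekg(file.tellp());            // and again before switching back to reading
            }
            return 0;
        }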

    Read the article

  • Beginner servlet question: accessing files in a .war, which path?

    - by Navigateur
    When a third-party library I'm using tries to access a file, I'm getting "Error opening ... file ... (No such file or directory)" even though I KNOW the file is in the WAR. I've tried both packaged (.war) and "exploded" (directory) deployment, and the file is definitely there. I've tried setting full permissions on it too. It's on Unix (Ubuntu). The file is war/dict/index.sense and the error is "dict/index.sense (No such file or directory)". It works fine on my Windows computer when running in hosted mode as a GWT app from Eclipse, just not when I transfer it to the Unix machine for deployment. My question is: has anybody experienced this before, and/or are there differences in relative paths that I should consider, i.e. what is the root path for relative file access in a war?
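
    A sketch of the usual answer, not taken from the question: relative paths like "dict/index.sense" resolve against the servlet container's working directory, not the war root, so they break once you leave the IDE. Resolving the path through the ServletContext avoids that; "/dict/index.sense" below is the file named in the question.

        import java.io.File;
        import java.io.InputStream;
        import javax.servlet.ServletContext;

        public class WarResource {
            // Works for exploded deployments; may return null when the war is not unpacked.
            public static File asFile(ServletContext ctx) {
                String real = ctx.getRealPath("/dict/index.sense");
                return real == null ? null : new File(real);
            }

            // Works whether or not the war is exploded, if the library accepts a stream.
            public static InputStream asStream(ServletContext ctx) {
                return ctx.getResourceAsStream("/dict/index.sense");
            }
        }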

    Read the article

  • Why does Java automatically decode %2F in URI encoded filenames?

    - by Lucas
    I have a Java servlet that needs to write out files that have a user-configurable name. I am trying to use URI encoding to properly escape special characters, but the JRE appears to automatically convert encoded forward slashes (%2F) into path separators. Example:

        File dir = new File("C:\\Documents and Settings\\username\\temp\\");
        String fn = "Top 1/2.pdf";
        URI uri = new URI(dir.toURI().toASCIIString() + URLEncoder.encode(fn, "UTF-8"));
        File out = new File(uri);

        System.out.println(dir.toURI().toASCIIString());
        System.out.println(URLEncoder.encode(fn, "UTF-8"));
        System.out.println(uri.toASCIIString());
        System.out.println(out.toURI().toASCIIString());

    The output is:

        file:/C:/Documents%20and%20Settings/username/temp/
        Top+1%2F2.pdf
        file:/C:/Documents%20and%20Settings/username/temp/Top+1%2F2.pdf
        file:/C:/Documents%20and%20Settings/username/temp/Top+1/2.pdf

    After the new File object is instantiated, the %2F sequence is automatically converted to a forward slash and I end up with an incorrect path. Does anybody know the proper way to approach this issue? The core of the problem seems to be that uri.equals(new File(uri).toURI()) == FALSE when there is a '%2F' in the URI. I'm planning to just use the URL-encoded string verbatim rather than trying to use the File(uri) constructor.
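
    For completeness, a sketch of the approach the poster says they plan to take: treat the URL-encoded name as a literal filename and never round-trip it through new File(URI), so %2F is never decoded back into a path separator.

        import java.io.File;
        import java.net.URLEncoder;

        public class SafeName {
            public static void main(String[] args) throws Exception {
                File dir = new File("C:\\Documents and Settings\\username\\temp");
                String fn = "Top 1/2.pdf";
                // The encoded string is used as-is, so the file stays inside dir.
                File out = new File(dir, URLEncoder.encode(fn, "UTF-8"));
                System.out.println(out);   // ...\temp\Top+1%2F2.pdf
            }
        }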

    Read the article

  • Samba server: making multiple user permission profiles

    - by Scriptonaut
    I have a Samba file server running, and I was wondering how I could make multiple user accounts that have different permissions. For example, at the moment I have a user, smbusr, but when I SSH to the share, I can read, write, execute, and even navigate out of the Samba directory and do stuff on the actual computer. This is bad because I want to be able to give out my IP so friends/family can use the server, but I don't want them to be able to do just anything. I want to lock the user into the Samba share directory (and all its subdirectories). Eventually I would like several profiles, such as smbusr_R, smbusr_RW, smbguest_R, and smbguest_RW. I also have a second question related to this: is SSH the best method to connect from other Unix machines? What about VPN? Or simply mounting like this:

        mount -t ext3 -o user=username //ipaddr/share /mnt/mountpoint

    Is that mounting command the same thing as a VPN? This is really confusing me. Thanks for the help, guys; let me know if you need to see any files or need any more information.
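
    One way to get separate read-only and read/write profiles, sketched here rather than taken from the question, is to define separate shares in smb.conf with different "valid users" and "read only" settings. Share names, paths, and account names below are placeholders.

        [share_ro]
           path = /srv/samba/share
           valid users = smbusr_R, smbguest_R
           read only = yes

        [share_rw]
           path = /srv/samba/share
           valid users = smbusr_RW, smbguest_RW
           read only = no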

    Read the article

  • How to Export Flash Animation Data

    - by charliep
    I'd love for my partner, the artist, to be able to animate using Flash movieclips and timelines. Then I, the programmer, would like to read the raw Flash info and re-program it into my engine of choice (which happens to be Torque2D). The data I'd want is: the bitmap images that were used in Flash, like the head and body; the links between the images, like where the head connects to the body; and the motion data from the Flash animation, like move, rotate (at what speed), shear, etc. for the head or arms or whatever. Is there any way to get this data? Here's what I know so far. There are tools like SWFSheet and Spriteloq that convert the entire Flash animation into a frame-by-frame sprite animation (in a sprite sheet). This would take too much space in my case, so I'd like to avoid that; re-animating on the fly would take much less texture memory. There is a PDF that describes the SWF file format, but NOT the individual components like the movieclips. So does anyone know of a library I can use, or how I can learn more about the movieclip components and whatnot? (Better tags: transform, export, convert.)

    Read the article

  • Old School Wizardry Tip: Batch File Comments

    - by jkauffman
    Johnny, the Endangered Keyboard-Driven Windows User

    Some of my proudest, obscure Windows tricks are losing their relevance. I know I'm not alone. Keyboard shortcuts are going the way of the dodo. I used to induce fearful awe by slapping Ctrl+Shift+Esc in front of the lowly, pedestrian Windows users. No Windows key on the keyboard? No problem: Ctrl+Esc. No menu key on the keyboard? Shift+F10. I am also firmly planted in the habit of closing windows with the Alt+Space menu (Alt+Space, C), and I harbor a brooding, slow-growing list of programs that fail to support this correctly (that means you, Paint.NET).

    Every time a new version of Windows comes out, the support for some of these minor time-saving habits gets pared out. Will I complain publicly? Nope, I know my old ways should be axed to conserve precious design energy. In fact, I disapprove of fierce un-intuitiveness for the sake of alleged productivity. Like vim, for example. If you approach a program after being away for 5 years, having to recall encyclopedic knowledge is a flaw. The RTFM disciples have lost. Anyway, some of the items in my arsenal of goofy time-saving tricks are still relevant today. I wanted to draw attention to one that's stood the test of time.

    Remember Batch Files?

    Yes, it's true, batch files are fading faster than the world of print. But they're not dead yet. I still run into some situations where I opt to use batch files. They are still relevant for build processes, or just various development workflow tools. Sure, there's PowerShell, but there's that stupid Set-ExecutionPolicy speed bump standing in your way; can you really spare the time to A) hunt down that setting on all machines affected and/or B) make futile efforts to convince your coworkers/boss that the hassle was worth it? When possible, I prefer the batch file wild card. And whenever I return to batch files, I end up researching some of the unintuitive aspects such as parameters, quote handling, and ERRORLEVEL. But I never have to remember to use "REM" for comment lines, because there's a cleaner way to do them!

    Double Colon for Eye-Friendly Comments

    Here is a very simple batch file, with pretty much minimal content:

        @ECHO OFF
        SETLOCAL
        REM This is a comment
        ECHO This batch file doesn't do much

    If you code on a daily basis, this may be more suitable to your eyes:

        @ECHO OFF
        SETLOCAL
        :: This is a comment
        ECHO This batch file doesn't do much

    Works great! I imagine I find it preferable due to the similarity to comments in other situations: // or ; or #. I often make visual pseudo-line-breaks in my code, and this colon-based syntax works wonders:

        @ECHO OFF
        SETLOCAL
        :: Do stuff
        ECHO Doing Stuff
        ::::::::::::::::::::::::::::
        :: Do more stuff
        ECHO This batch file doesn't do much

    Not only is it more readable, but there's a slight performance benefit: the batch file engine sees this as an invalid line label and immediately reads the following line. Use that fact to your advantage if this trick leads you into heated nerd debate.

    Two Pitfalls to Avoid

    Be aware that there are a couple of situations where this hack will fail you. It most likely won't be a problem unless you're getting really sophisticated with your batch files.

    Pitfall #1: Inline comments

        @ECHO OFF
        SETLOCAL
        IF EXIST C:\SomeFile.txt GOTO END ::This will fail
        :END

    Unfortunately, this fails. You can only have whitespace to the left of your comments.

    Pitfall #2: Code blocks

        @ECHO OFF
        SETLOCAL
        IF EXIST C:\SomeFile.txt (
            :: This will fail
            ECHO HELLO
        )

    Code blocks, such as IF statements and FOR loops, cannot contain these comments. This is ultimately due to the fact that entire code blocks are processed as a single line. I originally learned this from Rob van der Woude's site. He goes into more depth about the behavior of the pitfalls as well, if you are interested in further details. I hope this trick earns you serious geek rep!

    Read the article

  • Examples of limitations in IT due to bit lengths chosen by design

    - by Alaudo
    I am teaching the course "Introduction to Programming" to first-year students and would like to find interesting examples where the datatype size in bits, chosen by design, led to certain well-known restrictions or important values. Here are some examples: Because the Bell teleprinter used a 7-bit code (later adopted as ASCII), to this day we often have to encode attachments in electronic messages so that they contain only 7-bit data. The classic limitation of a 32-bit address space leads to the 4 GB maximum RAM available to 32-bit systems and the 4 GB maximum file size in FAT32. Do you know some other interesting examples of how the choice of a data type (and especially its binary length) has influenced the modern IT world? Added after some discussion in the comments: I am not going to teach how to overcome limitations. I just want them to know that 1 byte can hold the values -128..0..+127 or 0..255, that 2 bytes cover the range 0..65535, etc., by providing examples they know from other sources, like the above-mentioned base64 encoding, etc. We are just learning the basic datatypes and I am trying to find a good reference for "how large" these types are.
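
    A small classroom sketch (an editorial addition, not from the question) that shows how the bit width of a type fixes its range; struct.pack raises an error as soon as a value no longer fits in the chosen width:

        import struct

        for fmt, bits in (("b", 8), ("h", 16), ("i", 32)):      # signed 8/16/32-bit types
            lo, hi = -2 ** (bits - 1), 2 ** (bits - 1) - 1
            print(f"{bits:2d}-bit signed range: {lo} .. {hi}")
            struct.pack(fmt, hi)                                # fits
            # struct.pack(fmt, hi + 1)                          # struct.error: out of range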

    Read the article

  • Samba issue with sharing directories on NTFS/FAT32

    - by Microkernel
    I have some strange problems with the Samba server. I am using Samba version 3.5.4 on Ubuntu 10.10. I have two Windows XP machines: one in VirtualBox on Ubuntu and the other an office laptop. The Windows machine in VirtualBox has no issues accessing the shared folders, but the laptop is not able to access all the shared content. The issue faced on the laptop is the following: shared folders on ext3 drives are accessed without issues, but the contents shared on NTFS and FAT32 drives (mounted ones) are not accessible. When I try to open the shared folder, it asks for a user name and password, but doesn't accept them when I provide them (even if I provide admin login details). I changed the workgroup value to the domain_name on the office laptop, but the problem still persists. Here is the smb.conf I am using:

        [global]
           workgroup = XXX.XXX.ORG
           server string = %h server (Samba, Ubuntu)
           map to guest = Bad User
           obey pam restrictions = Yes
           pam password change = Yes
           passwd program = /usr/bin/passwd %u
           passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
           unix password sync = Yes
           syslog = 0
           log file = /var/log/samba/log.%m
           max log size = 1000
           dns proxy = No
           usershare allow guests = Yes
           panic action = /usr/share/samba/panic-action %d
           guest ok = Yes

        [homes]
           comment = Home Directories

        [printers]
           comment = All Printers
           path = /var/spool/samba
           read only = No
           create mask = 0700
           printable = Yes
           browseable = No

        [print$]
           comment = Samba server's CD-ROM
           path = /cdrom
           force user = nobody
           force group = nobody
           locking = No

    The workgroup was defined as "HOMENET" before; I changed it to the domain name on the office laptop thinking that was the problem, but to no avail.

    Read the article

  • Was API hooking done as needed for Stuxnet to work? I don't think so

    - by The Kaykay
    Caveat: I am a political science student and I have tried my level best to understand the technicalities; if I still sound naive, please overlook that. In the Symantec report on Stuxnet, the authors say that once the worm infects a 32-bit Windows computer that has a WinCC setup on it, Stuxnet does many things, and that it specifically hooks the function CreateFileA(). This function is the route which the worm uses to actually infect the .s7p project files that are used to program the PLCs; i.e., when the PLC programmer opens a file with .s7p, control transfers to the hooked function CreateFileA_hook() instead of CreateFileA(). Once Stuxnet gains control, it covertly inserts code blocks into the PLC without the programmer's knowledge and hides them from his view. However, it should be noted that there is also one more function, called CreateFileW(), which does the same task as CreateFileA(), but the two work on different character sets: CreateFileA works with the ASCII character set and CreateFileW works with wide characters, i.e. the Unicode character set. Farsi (the language of the Iranians) is a language that needs the Unicode character set, not ASCII characters. I'm assuming that the developers of any famous commercial software (for example, WinCC) that will be sold in many countries will take 'localization' and/or 'internationalization' into consideration while it is being developed, in order to make the product fail-safe; i.e. the software developers would use Unicode while compiling their code and not just ASCII. Thus, I think that CreateFileW() would have been invoked on a WinCC system in Iran instead of CreateFileA(). Do you agree? My question is: if Stuxnet hooked only the function CreateFileA(), then based on the above assumption is there a significant chance that it did not work at all? I think my doubt will get clarified if my assumption is proved wrong, or the Symantec report is proved incorrect. Please help me clarify this doubt. Note: I had posted this question on the general Stack Exchange website and did not get the appropriate responses I was looking for, so I'm posting it here.

    Read the article
