Search Results

Search found 61297 results on 2452 pages for 'open files'.


  • ant ftp doesn't download files in subdirectories

    - by Kristof Neirynck
    Hello, I'm trying to download files in subdirectories from an ftp server with ant. The exact set of files is known. Some of them are in subdirectories. Ant only seems to download the ones in the root directory. It does work if I download all files without listing them. The first ftp action should do the exact same thing as the second. Instead it complains about "Hidden files" and seems to prefix the paths with "\\". Does anyone know what's wrong here? Is this a bug in commons-net? build.xml <?xml version="1.0" encoding="utf-8"?> <project name="example" default="example" basedir="."> <taskdef name="ftp" classname="org.apache.tools.ant.taskdefs.optional.net.FTP" /> <target name="example"> <!-- 2 files retrieved --> <ftp action="get" verbose="true" server="localhost" userid="example" password="example"> <fileset dir="downloads" casesensitive="false" includes="root1.txt,root2.txt,a/a.txt,a/b/ab.txt,c/c.txt" /> </ftp> <!-- 5 files retrieved --> <ftp action="get" verbose="true" server="localhost" userid="example" password="example"> <fileset dir="downloads" casesensitive="false" includes="**/*" /> </ftp> </target> </project> run_ant.bat @ECHO OFF PUSHD %~dp0 SET CLASSPATH= SET ANT_HOME=C:\apache-ant-1.8.0 SET ant=%ANT_HOME%\bin\ant.bat SET antoptions=-nouserlib -noclasspath -d SET ftpjars=^ -lib lib\jakarta-oro-2.0.8.jar ^ -lib lib\commons-net-2.0.jar CALL %ant% %antoptions% %ftpjars% > output.txt POPD output.txt Unable to locate tools.jar. Expected to find it in C:\PROGRA~1\Java\jre6\lib\tools.jar Apache Ant version 1.8.0 compiled on February 1 2010 Trying the default build file: build.xml Buildfile: G:\ftp\build.xml Adding reference: ant.PropertyHelper Detected Java version: 1.6 in: C:\PROGRA~1\Java\jre6 Detected OS: Windows 7 Adding reference: ant.ComponentHelper Setting ro project property: ant.file -> G:\ftp\build.xml Setting ro project property: ant.file.type -> file Adding reference: ant.projectHelper Adding reference: ant.parsing.context Adding reference: ant.targets parsing buildfile G:\ftp\build.xml with URI = file:/G:/ftp/build.xml Setting ro project property: ant.project.name -> example Adding reference: example Setting ro project property: ant.project.default-target -> example Setting ro project property: ant.file.example -> G:\ftp\build.xml Setting ro project property: ant.file.type.example -> file Project base dir set to: G:\ftp +Target: +Target: example Adding reference: ant.LocalProperties parsing buildfile jar:file:/C:/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml with URI = jar:file:/C:/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml from a zip file Class org.apache.tools.ant.taskdefs.optional.net.FTP loaded from parent loader (parentFirst) Setting ro project property: ant.project.invoked-targets -> example Attempting to create object of type org.apache.tools.ant.helper.DefaultExecutor Adding reference: ant.executor Build sequence for target(s) `example' is [example] Complete build sequence is [example, ] example: [ftp] Opening FTP connection to localhost [ftp] connected [ftp] logging in to FTP server [ftp] login succeeded [ftp] getting files fileset: Setup scanner in dir G:\ftp\downloads with patternSet{ includes: [root1.txt, root2.txt, a/a.txt, a/b/ab.txt] excludes: [] } will try to cd to A where a directory called a exists testing case sensitivity, attempting to cd to A remote system is case sensitive : false [ftp] Hidden file \\a\b\ assumed to not be a symlink. 
filelist map used in listing files filelist map used in listing files [ftp] Hidden file \\a\b\ assumed to not be a symlink. filelist map used in listing files filelist map used in listing files filelist map used in listing files filelist map used in listing files filelist map used in listing files [ftp] Hidden file \\a\a.txt assumed to not be a symlink. filelist map used in listing files [ftp] Hidden file \\a\a.txt assumed to not be a symlink. filelist map used in listing files filelist map used in listing files filelist map used in listing files [ftp] transferring root1.txt to G:\ftp\downloads\root1.txt [ftp] File G:\ftp\downloads\root1.txt copied from localhost [ftp] transferring root2.txt to G:\ftp\downloads\root2.txt [ftp] File G:\ftp\downloads\root2.txt copied from localhost [ftp] 2 files retrieved [ftp] disconnecting [ftp] Opening FTP connection to localhost [ftp] connected [ftp] logging in to FTP server [ftp] login succeeded [ftp] getting files fileset: Setup scanner in dir G:\ftp\downloads with patternSet{ includes: [**/*] excludes: [] } will try to cd to A where a directory called a exists testing case sensitivity, attempting to cd to A remote system is case sensitive : false [ftp] transferring a\a.txt to G:\ftp\downloads\a\a.txt [ftp] File G:\ftp\downloads\a\a.txt copied from localhost [ftp] transferring a\b\ab.txt to G:\ftp\downloads\a\b\ab.txt [ftp] File G:\ftp\downloads\a\b\ab.txt copied from localhost [ftp] transferring c\c.txt to G:\ftp\downloads\c\c.txt [ftp] File G:\ftp\downloads\c\c.txt copied from localhost [ftp] transferring root1.txt to G:\ftp\downloads\root1.txt [ftp] File G:\ftp\downloads\root1.txt copied from localhost [ftp] transferring root2.txt to G:\ftp\downloads\root2.txt [ftp] File G:\ftp\downloads\root2.txt copied from localhost [ftp] 5 files retrieved [ftp] disconnecting BUILD SUCCESSFUL Total time: 0 seconds server log (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> Connected, sending welcome message... (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-FileZilla Server version 0.9.34 beta (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-written by Tim Kosse ([email protected]) (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220 Please visit http://sourceforge.net/projects/filezilla/ (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> USER example (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 331 Password required for example (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> PASS ******* (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 230 Logged on (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> TYPE I (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Type set to I (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> SYST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 215 UNIX emulated by FileZilla (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,232 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. 
(000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD A (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/A" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD a (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD b (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //a/b (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,233 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,234 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //\\a\b\ (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //\\a\b\ (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a/b" is current directory. 
(000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD a (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //a (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,235 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. 
(000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,236 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> RETR root1.txt (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,237 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> RETR root2.txt (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> QUIT (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 221 Goodbye (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> disconnected. (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> Connected, sending welcome message... (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-FileZilla Server version 0.9.34 beta (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-written by Tim Kosse ([email protected]) (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220 Please visit http://sourceforge.net/projects/filezilla/ (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> USER example (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 331 Password required for example (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> PASS ******* (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 230 Logged on (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> TYPE I (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Type set to I (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> SYST (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 215 UNIX emulated by FileZilla (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,239 (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD A (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/A" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. 
(000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,240 (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD a (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,241 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD //a/ (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD b (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,242 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/a" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD c (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/c" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,243 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. 
(000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,244 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR a/a.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,245 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR a/b/ab.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,246 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR c/c.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,247 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR root1.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,248 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR root2.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> QUIT (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 221 Goodbye (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> disconnected.

    Read the article

  • Using the HTML5 <input type="file" multiple="multiple"> Tag in ASP.NET

    - by Rick Strahl
    Per HTML5 spec the <input type="file" /> tag allows for multiple files to be picked from a single File upload button. This is actually a very subtle change that's very useful as it makes it much easier to send multiple files to the server without using complex uploader controls. Please understand though, that even though you can send multiple files using the <input type="file" /> tag, the process of how those files are sent hasn't really changed - there's still no progress information or other hooks that allow you to automatically make for a nicer upload experience without additional libraries or code. For that you will still need some sort of library (I'll post an example in my next blog post using plUpload). All the new features allow for is to make it easier to select multiple images from disk in one operation. Where you might have required many file upload controls before to upload several files, one File control can potentially do the job. How it works To create a file input box that allows with multiple file support you can simply do:<form method="post" enctype="multipart/form-data"> <label>Upload Images:</label> <input type="file" multiple="multiple" name="File1" id="File1" accept="image/*" /> <hr /> <input type="submit" id="btnUpload" value="Upload Images" /> </form> Now when the file open dialog pops up - depending on the browser and whether the browser supports it - you can pick multiple files. Here I'm using Firefox using the thumbnail preview I can easily pick images to upload on a form: Note that I can select multiple images in the dialog all of which get stored in the file textbox. The UI for this can be different in some browsers. For example Chrome displays 3 files selected as text next to the Browse… button when I choose three rather than showing any files in the textbox. Most other browsers display the standard file input box and display the multiple filenames as a comma delimited list in the textbox. Note that you can also specify the accept attribute in the <input> tag, which specifies a mime-type to specify what type of content to allow.Here I'm only allowing images (image/*) and the browser complies by just showing me image files to display. Likewise I could use text/* for all text formats registered on the machine or text/xml to only show XML files (which would include xml,xst,xsd etc.). Capturing Files on the Server with ASP.NET When you upload files to an ASP.NET server there are a couple of things to be aware of. When multiple files are uploaded from a single file control, they are assigned the same name. In other words if I select 3 files to upload on the File1 control shown above I get three file form variables named File1. This means I can't easily retrieve files by their name:HttpPostedFileBase file = Request.Files["File1"]; because there will be multiple files for a given name. The above only selects the first file. Instead you can only reliably retrieve files by their index. Below is an example I use in app to capture a number of images uploaded and store them into a database using a business object and EF 4.2.for (int i = 0; i < Request.Files.Count; i++) { HttpPostedFileBase file = Request.Files[i]; if (file.ContentLength == 0) continue; if (file.ContentLength > App.Configuration.MaxImageUploadSize) { ErrorDisplay.ShowError("File " + file.FileName + " is too large. 
Max upload size is: " + App.Configuration.MaxImageUploadSize); return View("UploadClassic",model); } var image = new ClassifiedsBusiness.Image(); var ms = new MemoryStream(16498); file.InputStream.CopyTo(ms); image.Entered = DateTime.Now; image.EntryId = model.Entry.Id; image.ContentType = "image/jpeg"; image.ImageData = ms.ToArray(); ms.Seek(0, SeekOrigin.Begin); // resize image if necessary and turn into jpeg Bitmap bmp = Imaging.ResizeImage(ms.ToArray(), App.Configuration.MaxImageWidth, App.Configuration.MaxImageHeight); ms.Close(); ms = new MemoryStream(); bmp.Save(ms,ImageFormat.Jpeg); image.ImageData = ms.ToArray(); bmp.Dispose(); ms.Close(); model.Entry.Images.Add(image); } This works great and also allows you to capture input from multiple input controls if you are dealing with browsers that don't support multiple file selections in the file upload control. The important thing here is that I iterate over the files by index, rather than using a foreach loop over the Request.Files collection. The files collection returns key name strings, rather than the actual files (who thought that was good idea at Microsoft?), and so that isn't going to work since you end up getting multiple keys with the same name. Instead a plain for loop has to be used to loop over all files. Another Option in ASP.NET MVC If you're using ASP.NET MVC you can use the code above as well, but you have yet another option to capture multiple uploaded files by using a parameter for your post action method.public ActionResult Save(HttpPostedFileBase[] file1) { foreach (var file in file1) { if (file.ContentLength < 0) continue; // do something with the file }} Note that in order for this to work you have to specify each posted file variable individually in the parameter list. This works great if you have a single file upload to deal with. You can also pass this in addition to your main model to separate out a ViewModel and a set of uploaded files:public ActionResult Edit(EntryViewModel model,HttpPostedFileBase[] uploadedFile) You can also make the uploaded files part of the ViewModel itself - just make sure you use the appropriate naming for the variable name in the HTML document (since there's Html.FileFor() extension). Browser Support You knew this was coming, right? The feature is really nice, but unfortunately not supported universally yet. Once again Internet Explorer is the problem: No shipping version of Internet Explorer supports multiple file uploads. IE10 supposedly will, but even IE9 does not. All other major browsers - Chrome, Firefox, Safari and Opera - support multi-file uploads in their latest versions. So how can you handle this? If you need to provide multiple file uploads you can simply add multiple file selection boxes and let people either select multiple files with a single upload file box or use multiples. Alternately you can do some browser detection and if IE is used simply show the extra file upload boxes. It's not ideal, but either one of these approaches makes life easier for folks that use a decent browser and leaves you with a functional interface for those that don't. Here's a UI I recently built as an alternate uploader with multiple file upload buttons: I say this is my 'alternate' uploader - for my primary uploader I continue to use an add-in solution. Specifically I use plUpload and I'll discuss how that's implemented in my next post. 
Although I think that plUpload (and many of the other packaged JavaScript upload solutions) are a better choice especially for large uploads, for simple one file uploads input boxes work well enough. The advantage of this solution is that it's very easy to handle on the server side. Any of the JavaScript controls require special handling for uploads, which I'll also discuss in my next post. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML5, ASP.NET, MVC.
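    As a small complement to the by-index loop above, here is a minimal sketch (the helper name GetFilesByFieldName and its usage are my own, not from the article) that groups posted files by their form field name, which is handy when several inputs share the same name or when one multi-file input posts many entries under one key:

    // Minimal sketch: group Request.Files entries by form field name.
    // Requires System.Web and System.Collections.Generic; the helper name
    // GetFilesByFieldName is illustrative only.
    public static Dictionary<string, List<HttpPostedFileBase>> GetFilesByFieldName(HttpRequestBase request)
    {
        var result = new Dictionary<string, List<HttpPostedFileBase>>(StringComparer.OrdinalIgnoreCase);
        for (int i = 0; i < request.Files.Count; i++)
        {
            string key = request.Files.GetKey(i);        // the form field name, e.g. "File1"
            HttpPostedFileBase file = request.Files[i];
            if (file == null || file.ContentLength == 0)
                continue;                                // skip empty slots some browsers post
            List<HttpPostedFileBase> list;
            if (!result.TryGetValue(key, out list))
            {
                list = new List<HttpPostedFileBase>();
                result[key] = list;
            }
            list.Add(file);
        }
        return result;
    }

    Inside a controller action this could be called as GetFilesByFieldName(Request), and each list processed exactly like the indexed loop shown in the post.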

    Read the article

  • Hosting and consuming WCF services without configuration files

    - by martinsj
    In this post, I'll demonstrate how to configure both the host and the client in code without the need for configuring services i the <system.serviceModel> section of the config-file. In fact, you don't need a  <system.serviceModel> section at all. What you'll do need (and want) sometimes, is the Uri of the service in the configuration file. Configuring the Uri of the the service is actually only needed for the client or when self-hosting, not when hosting in IIS. So, exactly What do we need to configure? The binding type and the binding constraints The metadata behavior Debug behavior You can of course configure even more, and even more if you want to, WCF is after all the king of configuration… As an example I'll be hosting and consuming a service that removes most of the default constraints for WCF-services, using a BasicHttpBinding. Of course, in regards to security, it is probably better to have some constraints on the server, but this is only a demonstration. The ServerConfig class in the code beneath is a static helper class that will be used in the examples. In this post, I’ll be using this helper-class for all configuration, for both the server and the client. In WCF, the  client and the server have both their own WCF-configuration. With this piece of code, they will be sharing the same configuration. 1: public static class ServiceConfig 2: { 3: public static Binding DefaultBinding 4: { 5: get 6: { 7: var binding = new BasicHttpBinding(); 8: Configure(binding); 9: return binding; 10: } 11: } 12:  13: public static void Configure(HttpBindingBase binding) 14: { 15: if (binding == null) 16: { 17: throw new ArgumentException("Argument 'binding' cannot be null. Cannot configure binding."); 18: } 19:  20: binding.SendTimeout = new TimeSpan(0, 0, 30, 0); // 30 minute timeout 21: binding.MaxBufferSize = Int32.MaxValue; 22: binding.MaxBufferPoolSize = 2147483647; 23: binding.MaxReceivedMessageSize = Int32.MaxValue; 24: binding.ReaderQuotas.MaxArrayLength = Int32.MaxValue; 25: binding.ReaderQuotas.MaxBytesPerRead = Int32.MaxValue; 26: binding.ReaderQuotas.MaxDepth = Int32.MaxValue; 27: binding.ReaderQuotas.MaxNameTableCharCount = Int32.MaxValue; 28: binding.ReaderQuotas.MaxStringContentLength = Int32.MaxValue; 29: } 30:  31: public static ServiceMetadataBehavior ServiceMetadataBehavior 32: { 33: get 34: { 35: return new ServiceMetadataBehavior 36: { 37: HttpGetEnabled = true, 38: MetadataExporter = {PolicyVersion = PolicyVersion.Policy15} 39: }; 40: } 41: } 42:  43: public static ServiceDebugBehavior ServiceDebugBehavior 44: { 45: get 46: { 47: var smb = new ServiceDebugBehavior(); 48: Configure(smb); 49: return smb; 50: } 51: } 52:  53:  54: public static void Configure(ServiceDebugBehavior behavior) 55: { 56: if (behavior == null) 57: { 58: throw new ArgumentException("Argument 'behavior' cannot be null. Cannot configure debug behavior."); 59: } 60: 61: behavior.IncludeExceptionDetailInFaults = true; 62: } 63: } Configuring the server There are basically two ways to host a WCF service, in IIS and self-hosting. When hosting a WCF service in a production environment using SOA architecture, you'll be most likely hosting it in IIS. When testing the service in integration tests, it's very handy to be able to self-host services in the unit-tests. In fact, you can share the the WCF configuration for self-hosted services and services hosted in IIS. And that is exactly what you want to do, testing the same configurations for test and production environments.   
Configuring when Self-hosting When self-hosting, in order to start the service, you'll have to instantiate the ServiceHost class, configure the  service and open it. 1: // Create the service-host. 2: var host = new ServiceHost(typeof(MyService), endpoint); 3:  4: // Configure the binding 5: host.AddServiceEndpoint(typeof(IMyService), ServiceConfig.DefaultBinding, endpoint); 6:  7: // Configure metadata behavior 8: host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior); 9:  10: // Configure debgug behavior 11: ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]); 12: 13: // Start listening to the service 14: host.Open(); 15:  Configuring when hosting in IIS When you create a WCF service application with the wizard in Visual Studio, you'll end up with bits and pieces of code in order to get the service running: Svc-file with codebehind. A interface to the service Web.config In order to get rid of the configuration in the <system.serviceModel> section, which the wizard has generated for us, we must tell the service that we have a factory that will create the service for us. We do this by changing the markup for the svc-file: 1: <%@ ServiceHost Language="C#" Debug="true" Service="Namespace.MyService" Factory="Namespace.ServiceHostFactory" %> The markup tells IIS that we have a factory called ServiceHostFactory for this service. The service factory has a method we can override which will be called when someone asks IIS for the service. There are overloads we can override: 1: System.ServiceModel.ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses) 2: System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses) 3:  In this example, we'll be using the last one, so our implementation looks like this: 1: public class ServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory 2: { 3:  4: protected override System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses) 5: { 6: var host = base.CreateServiceHost(serviceType, baseAddresses); 7: host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior); 8: ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]); 9: return host; 10: } 11: } 12:  1: public class ServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory 2: { 3: 4: protected override System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses) 5: { 6: var host = base.CreateServiceHost(serviceType, baseAddresses); 7: host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior); 8: ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]); 9: return host; 10: } 11: } 12: As you can see, we are using the same configuration helper we used when self-hosting. Now, when you have a factory, the <system.serviceModel> section of the configuration can be removed, because the section will be ignored when the service has a custom factory. If you want to configure something else in the config-file, one could configure in some other section.   Configuring the client Microsoft has helpfully created a ChannelFactory class in order to create a proxy client. When using this approach, you don't have generate those awfull proxy classes for the client. If you share the contracts with the server in it's own assembly like in the layer diagram under, you can share the same piece of code. 
The contracts in WCF are the interface to the service and, if any, the data contracts (custom types) the service depends on. Using the ChannelFactory with our configuration helper class is very simple: 1: var identity = EndpointIdentity.CreateDnsIdentity("localhost"); 2: var endpointAddress = new EndpointAddress(endPoint, identity); 3: var factory = new ChannelFactory<IMyService>(ServiceConfig.DefaultBinding, endpointAddress); 4: var myService = factory.CreateChannel(); 5: myService.Hello(); 6: ((IClientChannel)myService).Close(); 7: factory.Close();   Happy configuration!
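    Since the post points out that self-hosting is handy for integration tests, here is a minimal round-trip sketch under the same assumptions (IMyService/MyService, the Hello() operation and the ServiceConfig helper come from the post; the localhost address below is illustrative only):

    // Minimal self-hosted round trip, e.g. inside an integration test.
    // Requires System.ServiceModel and System.ServiceModel.Description.
    var endpoint = new Uri("http://localhost:8123/MyService");   // illustrative address

    var host = new ServiceHost(typeof(MyService), endpoint);
    host.AddServiceEndpoint(typeof(IMyService), ServiceConfig.DefaultBinding, endpoint);
    host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);
    ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);
    host.Open();

    try
    {
        var factory = new ChannelFactory<IMyService>(ServiceConfig.DefaultBinding, new EndpointAddress(endpoint));
        IMyService client = factory.CreateChannel();
        client.Hello();                              // exercise the service under test
        ((IClientChannel)client).Close();
        factory.Close();
    }
    finally
    {
        host.Close();                                // always tear the host down
    }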

    Read the article

  • Delphi - threads and FindFirst function

    - by radu-barbu
    Hi, I'm encountering a big problem when i'm trying to make a recursive search function inside a thread (using delphi 7) bellow is the code: TParcFicDir = class(TThread) private several variables.. protected procedure Execute; override; public constructor Create(CreateSuspended: Boolean); constructor TParcFicDir.Create(CreateSuspended: Boolean); begin inherited Create(CreateSuspended); end; procedure TParcFicDir.Execute; begin try FindFiles(FStartDir,FMask);//'c:\' and '*.*' except on e:Exception do end; end; procedure TParcFicDir.FindFiles(StartDir, FileMask: string); var wTmp : string; f:TextFile; wTempSR:TSearchRec; function Search(StartDir, FileMask: string): string; var SR : TSearchRec; IsFound : Boolean; files : integer; dirs : integer; t : string; begin try files := 0; dirs := 0; if StartDir[length(StartDir)] <> '\' then StartDir := StartDir + '\'; try IsFound := (FindFirst(StartDir + '*.*', faAnyFile, SR) = 0);// here the thread gets interrupted except on e: Exception do end; while IsFound do begin if (SR.Name <> '.') and (SR.Name <> '..') then if ((SR.Attr and faDirectory) <> 0) then if FScanDirs then begin inc(dirs); t := Search(StartDir + SR.Name, FileMask); try files := files + strtoint(copy((t), 0, pos('#', t) - 1));//old code, don't take on calcul; Delete(t, 1, pos('#', t)); dirs := dirs + strtoint(t); except on e: Exception do end; begin t := StartDir + SR.Name; wTmp := t; wtmp := ''; Inc(FDirNo); writeln(f,t); inc(filno); end; end else if ScanFiles then begin inc(filno); inc(files); end; IsFound := FindNext(SR) = 0; end; Result := IntToStr(files) + '#' + IntToStr(dirs); sysutils.FindClose(SR); except on e: Exception do end; end; begin filno := 0; try try if trim(FPathFileTmp)<>'' then AssignFile(f, FPathFileTmp+'Temp.bak') else AssignFile(f,ExtractFileDir(GetDllName)+'\Temp.bak'); Rewrite(f); Search(StartDir, FileMask); if StartDir[length(StartDir)] = '\' then delete(StartDir, length(StartDir), 1); wTmp := StartDir; wTmp := ''; if FindFirst(StartDir, faDirectory, wTempSR) = 0 then writeln(f); writeln(f); CloseFile(f); except on e: Exception do end; finally end; end; ok, probably the code is a little messed up, but i don't understand why the thread ends at 'findfirst' part....i googled it, no results. any help will be appreciated! Thanks in advance

    Read the article

  • C# file write permission issue under "Program Files" folder

    - by George2
    Hello everyone, I am using Inno Setup to make an installation package for my application, which is written in C# + .NET 2.0 + VSTS 2008. Inno Setup = http://www.jrsoftware.org/isinfo.php and I install my application under the Program Files/Foo folder (Foo is my application name). My application targets Windows Vista. The issue I found is that my program cannot write to the folder Program Files/Foo, and I need write permission on this folder in order to save some configuration files. The strange thing I noticed is that the folder Program Files/Foo is marked as read-only, and I have checked that all folders under Program Files are marked read-only, such as Office. My questions are: Why are all folders under Program Files marked as read-only? Does it mean we should not write to individual application folders under Program Files? If not, where should we write information to disk, like the user's last selected configuration of an individual application? If we can write to individual application folders under Program Files, what is the solution? I do not want my application to run as administrator to solve this issue, and if there is a solution to write to this folder, I want to require minimal permissions if possible. Thanks in advance, George
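    For the "where should we write" part, the usual answer on Vista is a per-user or per-machine application data folder rather than Program Files; a minimal sketch, assuming a hypothetical Foo application and a settings.xml file name:

    // Minimal sketch: keep writable settings out of Program Files.
    // "Foo" and "settings.xml" are illustrative names, not from the question.
    using System;
    using System.IO;

    static class SettingsLocation
    {
        public static string GetSettingsPath()
        {
            // Per-user, roaming profile: %APPDATA%\Foo. Use
            // SpecialFolder.LocalApplicationData or CommonApplicationData
            // for non-roaming or per-machine data instead.
            string dir = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                "Foo");
            Directory.CreateDirectory(dir);    // no-op if the folder already exists
            return Path.Combine(dir, "settings.xml");
        }
    }

    The installer can still place the read-only program files under Program Files\Foo; only the data the application writes at run time needs to live elsewhere.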

    Read the article

  • Problems importing WAR files in Eclipse?

    - by CitadelCSAlum
    I was unfortunately forced to resort to using a WAR file as my backup for a web application I am working on. Luckily I have the most recent WAR file available. I am using the Eclipse IDE with the Web Tools plugin for all the J2EE work I am doing in a Dynamic Web Application project. When I imported my WAR file and ran it on a local server, everything worked fine. The problem I ran into is that the Java Resources/src folder, which used to hold all my packages and .java files, now contains only the same packages, but they are empty. I checked to see if I could find the files and found the .class files in an "Imported files" folder that is not accessible in the Eclipse Project Explorer. I believe I need to do some type of build or something so that my .java files are available to me, but unfortunately this is one area where I lack experience. One thing I would also like to know is, one way or the other, am I able to obtain the .java source code files if I have access to the .class files? Also, I would like to configure this environment as it was before, where my Java Resources/src folder contained the packages and .java files.

    Read the article

  • Access/Download server files, not in site root, with PHP

    - by user271619
    Usually I save documents (images, MPEGs, Excel and Word docs, etc.) for my friends or family in my website's root, inside a directory called /files/ or something similar. Nothing too uncommon. But I have been playing with user session control and allowing users to upload files to the dedicated /files/ directory (the file names are saved in a db, with that user's ID). That means other people could try to guess and locate other people's files. I do randomize the file names upon upload, and I stop Apache from listing the /files/ directory contents. However, I'd like to start saving the files outside of the website's root, so they can't be accessed directly via the browser. I don't have any code to show, but I didn't want to even start on this endeavor if it can't be accomplished. I did find this snippet that shows how to display an image from outside your website root: $file = $_GET['file']; $fileDir = '/path/to/files/'; if (file_exists($fileDir . $file)) { // Note: You should probably do some more checks // on the filetype, size, etc. $contents = file_get_contents($fileDir . $file); // Note: You should probably implement some kind // of check on filetype header('Content-type: image/jpeg'); echo $contents; } ?> Maybe I can use this for any file type, but has anyone heard of a better way to allow (logged-in) users to access their files online without letting other users have similar access?

    Read the article

  • Visual Studio Linked Files Directory Structure

    - by jeffn825
    I have two versions of a project. One for Silverlight and one for .NET. The SL project has the vast majority of the code base in it. I want to globally add all files from the SL project into the .NET version as linked files. I've managed to do so successfully like this in the csproj file for the .NET version: <Compile Include="..\MyProj.Common.SL\**\*.cs" Exclude="..\MyProj.Common\Properties\**"> Unfortunately, this adds all the files right to the root of my project... so I end up with a long unreadable list of linked files in the .NET project. I really really really don't want to have to maintain an entire duplicate directory structure by hand and deal with directory name changes and file name changes and whatnot. So, is there any way to have Visual Studio preserve the directory structure when adding linked files in the wildcard manner above? Or is there at least a way of making it group all the linked files together under a directory in the .NET project like MyProj.Common.SL.Links? The very closest I've come is to set the <Visible>false</Visible> under the <Compile> tag, which effectively removes the long unreadable list of 300+ files....but unfortunately this screws up Resharper, which no longer sees those files as valid and it goes crazy on all the projects that reference the .NET project. If I could figure out a way of making Resharper not get all messed up, that would be an acceptable solution too... Any suggestions? Thanks.

    Read the article

  • Programmatically rebuild .exd files when loading VBA

    - by aspartame
    Hi, After updating Microsoft Office 2007 to Office 2010 some custom VBA scripts embedded in our software failed to compile with the following error message: Object library invalid or contains references to object definitions that could not be found. As far as I know, this error is a result of a security update from Microsoft (Microsoft Security Advisory 960715). When adding ActiveX-controls to VBA scripts, information about the controls are stored in cache files on the local hard drive (.exd-files). The security update modified some of these controls, but the .exd-files were not automatically updated. When the VBA scripts try to load the old versions of the controls stored in the cached files, the error occurs. These cache-files must be removed from the hard drive in order for the controls to load successfully (which will create new, updated .exd-files automatically). What I would like to do is to programatically (using Visual C++) remove the outdated .exd-files when our software loads. When opening a VBA project using CApcProject::ApcProject.Open I set the following flag:axProjectThrowAwayCompiledState. TestHR(ApcProject.Open(pHost, (MSAPC::AxProjectFlag) (MSAPC::axProjectNormal | MSAPC::axProjectThrowAwayCompiledState))); According to the documentation, this flag should cause the VBA project to be recompiled and the temporary files to be deleted and rebuilt. I've also tried to update the checksum of the host application type library which should have the same effect. However none of these fixes seem to do the job and I'm running out of ideas. Help is very much appreciated!
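    A heavily hedged sketch of the clean-up idea (shown in C# for brevity rather than the Visual C++ the question uses; the folder names below are commonly reported .exd cache locations for Office 2007/2010 and should be verified for the installed Office version):

    // Hedged sketch: delete stale ActiveX control caches (*.exd) so VBA
    // rebuilds them on the next load. Folder names are typical locations,
    // not taken from the question; verify them for your environment.
    using System;
    using System.IO;

    static class ExdCacheCleaner
    {
        public static void DeleteStaleExdFiles()
        {
            string temp = Path.GetTempPath();
            string appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
            string[] cacheDirs =
            {
                Path.Combine(temp, "Excel8.0"),
                Path.Combine(temp, "VBE"),
                Path.Combine(appData, @"Microsoft\Forms"),
            };

            foreach (string dir in cacheDirs)
            {
                if (!Directory.Exists(dir))
                    continue;
                foreach (string file in Directory.GetFiles(dir, "*.exd"))
                {
                    try { File.Delete(file); }        // best effort: the file may be locked
                    catch (IOException) { }
                    catch (UnauthorizedAccessException) { }
                }
            }
        }
    }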

    Read the article

  • IIS 6+ASP.NET - many temp files generated

    - by moshe_ravid
    I have a ASP.NET + some .NET web-services running on IIS 6 (win 2003 server). The issue is that IIS is generating a lot (!) of files in "c:\WINDOWS\Temp" directory. a lot of files means thousands of files, which get to more than 3G of size so far. The files are generated by this command: C:\WINDOWS\SysWOW64\inetsrv "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\csc.exe" /t:library /utf8output /R:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\vfagt\113819dd\db0d5802\assembly\dl3\fedc6ef1\006e24d8_3bc9ca01\VfAgentWService.DLL" /R:"C:\WINDOWS\assembly\GAC_MSIL\System.Web.Services\2.0.0.0__b03f5f7f11d50a3a\System.Web.Services.dll" /R:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorlib.dll" /R:"C:\WINDOWS\assembly\GAC_MSIL\System.Xml\2.0.0.0__b77a5c561934e089\System.Xml.dll" /R:"C:\WINDOWS\assembly\GAC_MSIL\System\2.0.0.0__b77a5c561934e089\System.dll" /out:"C:\WINDOWS\TEMP\9i_i2bmg.dll" /debug- /optimize+ /nostdlib /D:_DYNAMIC_XMLSERIALIZER_COMPILATION "C:\WINDOWS\TEMP\9i_i2bmg.0.cs" The files in the temp directory are pairs of *.out & *.err, where the *.err file is zero size, and the *.out file contains the compilation output messages. What is causing IIS to generate so many files? How can I prevent it? UPDATE: The problem is that the command i described above (csc.exe) is being executed many (many) times, causing the .out & .err to be generated so many times, until it consumes the disk space. So - my question is: what is causing this command to run so many times? (i don't have that many .aspx & .asmx files in my web app). Thanks, Moe
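    The /D:_DYNAMIC_XMLSERIALIZER_COMPILATION flag in that command line points at XmlSerializer generating temporary serialization assemblies. One well-known way this multiplies is constructing XmlSerializer with any overload other than (Type) or (Type, String), because only those two constructors are cached by the framework; a minimal illustrative sketch of caching the instance yourself (the Order type is hypothetical, not from the question):

    // Illustrative sketch: reuse one XmlSerializer instead of creating it per
    // request. Only the (Type) and (Type, String) constructors are cached by
    // the framework; other overloads emit a new temp assembly on every call.
    using System.Xml.Serialization;

    public class Order                        // hypothetical sample type
    {
        public int Id;
        public string Customer;
    }

    static class OrderSerializer
    {
        // Built once per process, so csc.exe runs at most once for this type.
        private static readonly XmlSerializer Instance =
            new XmlSerializer(typeof(Order), new XmlRootAttribute("Order"));

        public static XmlSerializer Get() { return Instance; }
    }

    For the web-service proxies themselves, pre-generating the serialization assemblies at build time with sgen.exe is the usual way to avoid the run-time csc.exe invocations altogether.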

    Read the article

  • Find files on a remote server

    - by Peter Kelly
    I have a web service that resides on serverA. The web service will be responsible for finding files of a certain type in a virtual directory on serverB and then returning the full URL to the files. I have the service working if the files are located on the same machine - this is straightforward enough. My question is: what is the best way to find all files of a certain type (say *.xml) in all directories below a known virtual directory on a remote server? So for example, the web service is on http://ServerA/service.asmx and the virtual directory is located at http://serverB/virtualdirectory So in this code, obviously the DirectoryInfo will not take a path to the remote server - how do I access this so I can find the files it contains? How do I then get the full URL to a file found on that remote server? DirectoryInfo updateDirectory = new DirectoryInfo(path); FileInfo[] files = updateDirectory.GetFiles("*.xml", SearchOption.AllDirectories); foreach (FileInfo fileInfo in files) { // Get URL to the file } I cannot have the files and the service on the same server - an IT decision that is out of my hands. Thanks!
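    One common approach (hedged: it assumes the web service's account has read access to a share on serverB, and the share name and base URL below are assumptions, not from the question) is to scan the remote folder through a UNC path and then translate each physical path into a URL under the known virtual directory:

    // Illustrative sketch: enumerate *.xml over a UNC path and map each hit to
    // a URL under the virtual directory. The share "virtualdirectory$" and the
    // base URL are assumed names.
    using System;
    using System.IO;

    static class RemoteFileFinder
    {
        public static string[] FindXmlUrls()
        {
            string uncRoot = @"\\serverB\virtualdirectory$";          // assumed share
            string baseUrl = "http://serverB/virtualdirectory/";      // known virtual dir

            DirectoryInfo root = new DirectoryInfo(uncRoot);
            FileInfo[] files = root.GetFiles("*.xml", SearchOption.AllDirectories);

            string[] urls = new string[files.Length];
            for (int i = 0; i < files.Length; i++)
            {
                // Turn the physical path back into a relative URL path.
                string relative = files[i].FullName.Substring(uncRoot.Length)
                                                   .TrimStart('\\')
                                                   .Replace('\\', '/');
                urls[i] = baseUrl + relative;
            }
            return urls;
        }
    }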

    Read the article

  • jQuery recursive function to upload many files while giving the user some feedback

    - by checcco
    Hi guys, I'm trying to write a jQuery function to let users upload many files at once. Here's the function I wrote to give the user some feedback about the upload progress. function uploadFiles(numbersOfFiles, start) { $("#info").html(start + " files uploaded"); $.post('upload.php', { start: start }, function (data) { start += 5; if (start < numbersOfFiles) { $("#info").html(start + " files uploaded"); uploadFiles(numbersOfFiles, start); } else { $("#info").html("All files have been uploaded"); } }); } The function calls a PHP script to upload 5 files, then if there are more files to upload it calls the script again. The whole process works. I've tried it with 100 files. The only thing that doesn't work is the #info div updating. The div gets updated the first time and then again only to show "All files have been uploaded". So there's no feedback for the user during the upload process. I can't understand why... Any help?

    Read the article

  • Template inheritance in C++

    - by Chris Condy
    I have made a template singleton class, I have also made a data structure that is templated. My question is; how do I make my templated data structure inherit from a singleton so you can only have one float type of this structure? I have tested both seperate and have found no problems. Code provided under... (That is the problem) template <class Type> class AbstractRManagers : public Singleton<AbstractRManagers<Type> > The problem is the code above doesn't work I get alot of errors. I cant get it to no matter what I do template a templated singleton class... I was asking for maybe advice or maybe if the code above is incorrect guidence? #ifndef SINGLETON_H #define SINGLETON_H template <class Type> class Singleton { public: virtual ~Singleton(); Singleton(); static Type* m_instance; }; template <class Type> Type* Singleton<Type>::m_instance = 0; #include "Singleton.cpp" #endif #ifndef SINGLETON_CPP #define SINGLETON_CPP #include "Singleton.h" template <class Type> Singleton<Type>::Singleton() { } template <class Type> Singleton<Type>::~Singleton() { } template <class Type> Type* Singleton<Type>::getInstance() { if(m_instance==nullptr) { m_instance = new Type; } return m_instance; } #endif #ifndef ABSTRACTRMANAGERS_H #define ABSTRACTRMANAGERS_H #include <vector> #include <map> #include <stack> #include "Singleton.h" template <class Type> class AbstractRManagers : public Singleton<AbstractRManagers<Type> > { public: virtual ~AbstractRManagers(); int insert(Type* type, std::string name); Type* remove(int i); Type* remove(std::string name); Type* get(int i); Type* getS(std::string name); int get(std::string name); int get(Type* i); bool check(std::string name); int resourceSize(); protected: private: std::vector<Type*> m_resources; std::map<std::string,int> m_map; std::stack<int> m_freePos; }; #include "AbstractRManagers.cpp" #endif #ifndef ABSTRACTRMANAGERS_CPP #define ABSTRACTRMANAGERS_CPP #include "AbstractRManagers.h" template <class Type> int AbstractRManagers<Type>::insert(Type* type, std::string name) { int i=0; if(!check(name)) { if(m_freePos.empty()) { m_resources.push_back(type); i = m_resources.size()-1; m_map[name] = i; } else { i = m_freePos.top(); m_freePos.pop(); m_resources[i] = type; m_map[name] = i; } } else i = -1; return i; } template <class Type> int AbstractRManagers<Type>::resourceSize() { return m_resources.size(); } template <class Type> bool AbstractRManagers<Type>::check(std::string name) { std::map<std::string,int>::iterator it; it = m_map.find(name); if(it==m_map.end()) return false; return true; } template <class Type> Type* AbstractRManagers<Type>::remove(std::string name) { Type* temp = m_resources[m_map[name]]; if(temp!=NULL) { std::map<std::string,int>::iterator it; it = m_map[name]; m_resources[m_map[name]] = NULL; m_freePos.push(m_map[name]); delete (*it).second; delete (*it).first; return temp; } return NULL; } template <class Type> Type* AbstractRManagers<Type>::remove(int i) { if((i < m_resources.size())&&(i > 0)) { Type* temp = m_resources[i]; m_resources[i] = NULL; m_freePos.push(i); std::map<std::string,int>::iterator it; for(it=m_map.begin();it!=m_map.end();it++) { if((*it).second == i) { delete (*it).second; delete (*it).first; return temp; } } return temp; } return NULL; } template <class Type> int AbstractRManagers<Type>::get(Type* i) { for(int i2=0;i2<m_resources.size();i2++) { if(i == m_resources[i2]) { return i2; } } return -1; } template <class Type> Type* AbstractRManagers<Type>::get(int i) { if((i < m_resources.size())&&(i >= 0)) { return 
m_resources[i]; } return NULL; } template <class Type> Type* AbstractRManagers<Type>::getS(std::string name) { return m_resources[m_map[name]]; } template <class Type> int AbstractRManagers<Type>::get(std::string name) { return m_map[name]; } template <class Type> AbstractRManagers<Type>::~AbstractRManagers() { } #endif #include "AbstractRManagers.h" struct b { float x; }; int main() { b* a = new b(); AbstractRManagers<b>::getInstance()->insert(a,"a"); return 0; } This program produces next errors when compiled : 1> main.cpp 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::stack<_Ty,_Container> &,const std::stack<_Ty,_Container> &)' : could not deduce template argument for 'const std::stack<_Ty,_Container> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\stack(166) : see declaration of 'std::operator <' 1> c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(124) : while compiling class template member function 'bool std::less<_Ty>::operator ()(const _Ty &,const _Ty &) const' 1> with 1> [ 1> _Ty=std::string 1> ] 1> c:\program files\microsoft visual studio 10.0\vc\include\map(71) : see reference to class template instantiation 'std::less<_Ty>' being compiled 1> with 1> [ 1> _Ty=std::string 1> ] 1> c:\program files\microsoft visual studio 10.0\vc\include\xtree(451) : see reference to class template instantiation 'std::_Tmap_traits<_Kty,_Ty,_Pr,_Alloc,_Mfl>' being compiled 1> with 1> [ 1> _Kty=std::string, 1> _Ty=int, 1> _Pr=std::less<std::string>, 1> _Alloc=std::allocator<std::pair<const std::string,int>>, 1> _Mfl=false 1> ] 1> c:\program files\microsoft visual studio 10.0\vc\include\xtree(520) : see reference to class template instantiation 'std::_Tree_nod<_Traits>' being compiled 1> with 1> [ 1> _Traits=std::_Tmap_traits<std::string,int,std::less<std::string>,std::allocator<std::pair<const std::string,int>>,false> 1> ] 1> c:\program files\microsoft visual studio 10.0\vc\include\xtree(659) : see reference to class template instantiation 'std::_Tree_val<_Traits>' being compiled 1> with 1> [ 1> _Traits=std::_Tmap_traits<std::string,int,std::less<std::string>,std::allocator<std::pair<const std::string,int>>,false> 1> ] 1> c:\program files\microsoft visual studio 10.0\vc\include\map(81) : see reference to class template instantiation 'std::_Tree<_Traits>' being compiled 1> with 1> [ 1> _Traits=std::_Tmap_traits<std::string,int,std::less<std::string>,std::allocator<std::pair<const std::string,int>>,false> 1> ] 1> c:\users\chris\desktop\311\ideas\idea1\idea1\abstractrmanagers.h(28) : see reference to class template instantiation 'std::map<_Kty,_Ty>' being compiled 1> with 1> [ 1> _Kty=std::string, 1> _Ty=int 1> ] 1> c:\users\chris\desktop\311\ideas\idea1\idea1\abstractrmanagers.h(30) : see reference to class template instantiation 'AbstractRManagers<Type>' being compiled 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::stack<_Ty,_Container> &,const std::stack<_Ty,_Container> &)' : could not deduce template argument for 'const std::stack<_Ty,_Container> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\stack(166) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::stack<_Ty,_Container> &,const std::stack<_Ty,_Container> &)' : could not deduce template 
argument for 'const std::stack<_Ty,_Container> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\stack(166) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::deque<_Ty,_Alloc> &,const std::deque<_Ty,_Alloc> &)' : could not deduce template argument for 'const std::deque<_Ty,_Alloc> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\deque(1725) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::deque<_Ty,_Alloc> &,const std::deque<_Ty,_Alloc> &)' : could not deduce template argument for 'const std::deque<_Ty,_Alloc> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\deque(1725) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::deque<_Ty,_Alloc> &,const std::deque<_Ty,_Alloc> &)' : could not deduce template argument for 'const std::deque<_Ty,_Alloc> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\deque(1725) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::_Tree<_Traits> &,const std::_Tree<_Traits> &)' : could not deduce template argument for 'const std::_Tree<_Traits> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\xtree(1885) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::_Tree<_Traits> &,const std::_Tree<_Traits> &)' : could not deduce template argument for 'const std::_Tree<_Traits> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\xtree(1885) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::_Tree<_Traits> &,const std::_Tree<_Traits> &)' : could not deduce template argument for 'const std::_Tree<_Traits> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\xtree(1885) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::vector<_Ty,_Ax> &,const std::vector<_Ty,_Ax> &)' : could not deduce template argument for 'const std::vector<_Ty,_Ax> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\vector(1502) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::vector<_Ty,_Ax> &,const std::vector<_Ty,_Ax> &)' : could not deduce template argument for 'const std::vector<_Ty,_Ax> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\vector(1502) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::vector<_Ty,_Ax> &,const std::vector<_Ty,_Ax> &)' : could not deduce template argument for 'const std::vector<_Ty,_Ax> &' from 'const std::string' 1> c:\program files\microsoft visual studio 
10.0\vc\include\vector(1502) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::unique_ptr<_Ty,_Dx> &,const std::unique_ptr<_Ty2,_Dx2> &)' : could not deduce template argument for 'const std::unique_ptr<_Ty,_Dx> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\memory(2582) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::unique_ptr<_Ty,_Dx> &,const std::unique_ptr<_Ty2,_Dx2> &)' : could not deduce template argument for 'const std::unique_ptr<_Ty,_Dx> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\memory(2582) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::unique_ptr<_Ty,_Dx> &,const std::unique_ptr<_Ty2,_Dx2> &)' : could not deduce template argument for 'const std::unique_ptr<_Ty,_Dx> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\memory(2582) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::reverse_iterator<_RanIt> &,const std::reverse_iterator<_RanIt2> &)' : could not deduce template argument for 'const std::reverse_iterator<_RanIt> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\xutility(1356) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::reverse_iterator<_RanIt> &,const std::reverse_iterator<_RanIt2> &)' : could not deduce template argument for 'const std::reverse_iterator<_RanIt> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\xutility(1356) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::reverse_iterator<_RanIt> &,const std::reverse_iterator<_RanIt2> &)' : could not deduce template argument for 'const std::reverse_iterator<_RanIt> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\xutility(1356) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::_Revranit<_RanIt,_Base> &,const std::_Revranit<_RanIt2,_Base2> &)' : could not deduce template argument for 'const std::_Revranit<_RanIt,_Base> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\xutility(1179) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::_Revranit<_RanIt,_Base> &,const std::_Revranit<_RanIt2,_Base2> &)' : could not deduce template argument for 'const std::_Revranit<_RanIt,_Base> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\xutility(1179) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::_Revranit<_RanIt,_Base> &,const std::_Revranit<_RanIt2,_Base2> &)' : could not deduce template argument for 'const 
std::_Revranit<_RanIt,_Base> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\xutility(1179) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::pair<_Ty1,_Ty2> &,const std::pair<_Ty1,_Ty2> &)' : could not deduce template argument for 'const std::pair<_Ty1,_Ty2> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\utility(318) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::pair<_Ty1,_Ty2> &,const std::pair<_Ty1,_Ty2> &)' : could not deduce template argument for 'const std::pair<_Ty1,_Ty2> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\utility(318) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2784: 'bool std::operator <(const std::pair<_Ty1,_Ty2> &,const std::pair<_Ty1,_Ty2> &)' : could not deduce template argument for 'const std::pair<_Ty1,_Ty2> &' from 'const std::string' 1> c:\program files\microsoft visual studio 10.0\vc\include\utility(318) : see declaration of 'std::operator <' 1>c:\program files\microsoft visual studio 10.0\vc\include\xfunctional(125): error C2676: binary '<' : 'const std::string' does not define this operator or a conversion to a type acceptable to the predefined operator ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
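    Two separate problems seem to be hiding here, and neither is the templated inheritance itself, which is a perfectly legal pattern. First, getInstance() is defined in Singleton.cpp but never declared inside the Singleton class body, so neither the out-of-class definition nor the call in main() can compile. Second, the whole error cascade is about std::less<std::string> not finding operator< for std::string; on MSVC that is the classic symptom of never including <string> directly (the class itself leaks in through other headers, but its comparison operators do not), and it is triggered by the std::map<std::string,int> member. Below is a minimal, self-contained sketch of the pattern, not the original code: ResourceManager is a trimmed stand-in for AbstractRManagers, getInstance() lives in the class, and <string> is included explicitly. It is meant as a compileable illustration to diff against, not a drop-in fix.

    #include <map>
    #include <string>   // brings in operator< for std::string, which std::less<std::string> needs
    #include <vector>

    // Minimal sketch of a templated singleton base class.
    // getInstance() is declared *inside* the class, and the whole template
    // lives in one header so every instantiation can see the definitions.
    template <class Type>
    class Singleton
    {
    public:
        static Type* getInstance()
        {
            if (m_instance == nullptr)
                m_instance = new Type;          // never freed in this sketch
            return m_instance;
        }
    protected:
        Singleton() { }
        virtual ~Singleton() { }
    private:
        static Type* m_instance;
    };

    template <class Type>
    Type* Singleton<Type>::m_instance = nullptr;

    // A trimmed stand-in for AbstractRManagers: a manager that is a singleton
    // per managed type, i.e. one manager for float, one for b, and so on.
    template <class Type>
    class ResourceManager : public Singleton<ResourceManager<Type> >
    {
        friend class Singleton<ResourceManager<Type> >;  // lets the base construct us
    public:
        int insert(Type* item, const std::string& name)
        {
            m_resources.push_back(item);
            int index = static_cast<int>(m_resources.size()) - 1;
            m_map[name] = index;
            return index;
        }
        Type* get(const std::string& name) { return m_resources[m_map[name]]; }
    private:
        ResourceManager() { }
        std::vector<Type*> m_resources;
        std::map<std::string, int> m_map;
    };

    struct b { float x; };

    int main()
    {
        b* a = new b();
        ResourceManager<b>::getInstance()->insert(a, "a");
        return 0;
    }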

    Read the article

  • PHP sessions and class members.

    - by JDW
    OK, I'm messing about with classes in PHP and can't get it to work the way I'm used to as a C++/Java guy. In the "_init" function, if I run a query at the "// query works here" line, everything works, but in the "getUserID" function, all that happens is the warning quoted in the comment below... "getUserID" gets called from login.php (they are in the same dir): login.php <?php include_once 'sitehandler.php'; include_once 'dbhandler.php'; session_start(); #TODO: Safer input handling $t_userName = $_POST["name"]; $t_userId = $_SESSION['handler']['db']->getUserID($t_userName); if ($t_userId != -1) { $_SESSION['user']['name'] = $t_userName; $_SESSION['user']['id'] = $t_userId; } //error_log("user: " . $_SESSION['user']['name'] . ", id: ". $_SESSION['user']['id']); header("Location: " . $_SERVER["HTTP_REFERER"]); ?> dbhandler.php <?php include_once 'handler.php'; class DBHandler extends HandlerAbstract { private $m_handle; function __construct() { parent::__construct(); } public function test() { #TODO: isdir liquibase #TODO: isfile liquibase-195/liquibase + .bat + execrights $this->m_isTested = true; } public function _init() { if (!$this->isTested()) $this->test(); if (!file_exists('files/data.db')) { #TODO: How to do this if host is Windows based? exec('./files/liquibase-1.9.5/liquibase --driver=org.sqlite.JDBC --changeLogFile=files/data_db.xml --url=jdbc:sqlite:files/data.db update'); #TODO: quit if not success } #TODO: Set with default data try { $this->m_handle = new SQLite3('files/data.db'); } catch (Exception $e) { die("<hr />" . $e->getMessage() . "<hr />"); } // query works here $this->m_isSetup = true; } public function teardown() { } public function getUserID($name) { // PHP Warning: SQLite3::prepare(): The SQLite3 object has not been correctly initialised in $t_statement = $this->m_handle->prepare("SELECT id FROM users WHERE name = :name"); $t_statement->bindValue(":name", $name, SQLITE3_TEXT); $t_result = $t_statement->execute(); //var_dump($this->m_handle); return ($t_result)? (int)$t_result['id']: -1; } }

    Read the article

  • Given a trace of packets, how would you group them into flows?

    - by zxcvbnm
    I've tried it these ways so far: 1) Make a hash with the source IP/port and destination IP/port as keys. Each position in the hash is a list of packets. The hash is then saved in a file, with each flow separated by some special characters/line. Problem: Not enough memory for large traces. 2) Make a hash with the same key as above, but only keep in memory the file handles. Each packet is then put into the hash[key] that points to the right file. Problems: Too many flows/files (~200k) and it might run out of memory as well. 3) Hash the source IP/port and destination IP/port, then put the info inside a file. The difference between 2 and 3 is that here the files are opened and closed for each operation, so I don't have to worry about running out of memory because I opened too many at the same time. Problems: WAY too slow, same number of files as 2 so also impractical. 4) Make a hash of the source IP/port pairs and then iterate over the whole trace for each flow. Take the packets that are part of that flow and place them into the output file. Problem: Suppose I have a 60 MB trace that has 200k flows. This way, I would process, say, a 60 MB file 200k times. Maybe removing the packets as I iterate would make it not so painful, but so far I'm not sure this would be a good solution. 5) Split them by IP source/destination and then create a single file for each one, separating the flows by special characters. Still too many files (+50k). Right now I'm using Ruby to do it, which might've been a bad idea, I guess. Currently I've filtered the traces with tshark so that they only have relevant info, so I can't really make them any smaller. I thought about loading everything in memory as described in 1) using C#/Java/C++, but I was wondering if there wouldn't be a better approach here, especially since I might also run out of memory later on even with a more efficient language if I have to use larger traces. In summary, the problem I'm facing is that I either have too many files or that I run out of memory. I've also tried searching for some tool to filter the info, but I don't think there is one. The ones I've found only return some statistics and wouldn't scan for every flow as I need.
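    None of the code below comes from the question; it is a sketch of a middle ground between approaches 2 and 3 that keeps both memory and open file handles bounded. Packets are buffered per flow in a hash keyed by a canonical source/destination pair, and whenever a flow's buffer passes a threshold it is appended to that flow's file and the buffer is freed, so only one file is ever open at a time. A global packet counter could additionally trigger a flush of every buffer if the sheer number of simultaneous flows is the bigger problem. The Packet struct, the sample packets, and the plain-text output format are all placeholders for whatever the pcap/tshark reader actually produces, and the sketch is written in C++ (one of the languages mentioned at the end of the question) rather than the Ruby currently in use:

    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Hypothetical per-packet record; in practice this would be filled in by
    // whatever reads the pcap/tshark output.
    struct Packet {
        std::string srcIp, dstIp;
        int srcPort, dstPort;
        std::string raw;                      // whatever per-packet data must be kept
    };

    // Canonical flow key so that A->B and B->A packets land in the same flow.
    std::string flowKey(const Packet& p) {
        std::string a = p.srcIp + "_" + std::to_string(p.srcPort);
        std::string b = p.dstIp + "_" + std::to_string(p.dstPort);
        return a < b ? a + "__" + b : b + "__" + a;
    }

    // Append a flow's buffered packets to its own file, then release the memory.
    // Only one file is open at a time (assumes a "flows" directory exists).
    void flush(const std::string& key, std::vector<std::string>& buf) {
        std::FILE* f = std::fopen(("flows/" + key + ".txt").c_str(), "a");
        if (!f) return;
        for (std::size_t i = 0; i < buf.size(); ++i)
            std::fprintf(f, "%s\n", buf[i].c_str());
        std::fclose(f);
        buf.clear();
    }

    int main() {
        std::unordered_map<std::string, std::vector<std::string> > flows;
        const std::size_t kFlushAt = 1000;    // per-flow buffer cap, bounds memory

        // In the real program this loop would iterate over the trace reader;
        // two hand-made packets stand in for it here.
        std::vector<Packet> trace;
        trace.push_back(Packet{"10.0.0.1", "10.0.0.2", 1234, 80, "packet one"});
        trace.push_back(Packet{"10.0.0.2", "10.0.0.1", 80, 1234, "packet two"});
        for (const Packet& p : trace) {
            std::vector<std::string>& buf = flows[flowKey(p)];
            buf.push_back(p.raw);
            if (buf.size() >= kFlushAt) flush(flowKey(p), buf);
        }

        for (auto& kv : flows) flush(kv.first, kv.second);   // final flush
        return 0;
    }

    The same idea carries over to Ruby; the point is only that buffering plus periodic appends keeps the handle count at one and the resident memory roughly proportional to the flush threshold times the number of active flows.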

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that changes often get reverted and then re-added in the space of a few dozen to a few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is, what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down or splitting what used to be one data field into two data fields. A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with a series of variants that matches that set of files. This is because, more often than not, most of the file will remain consistent over a long period, only marred by one or two errant sections, but within that period, which section is the inconsistent one varies. For example, say a file has four sections with three data fields, which is represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, 1-3 are consistent, section 4 is one row down, but a section five appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section five disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order. The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version, a section_two_variants dictionary might have three-field and five-field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database. Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
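    The "function to match a particular series of files with a series of variants" can be little more than a data-driven range lookup: a table keyed by the first file number of each range, whose value records which layout variant every section uses in that range, with the applicable entry being the last one whose key is not greater than the file number. The project described above is in Python, but the idea is language-neutral; here is a minimal sketch of just the lookup (all file numbers, section names, and variant labels are illustrative, taken from the example in the question), with the actual per-variant extraction code left out:

    #include <iostream>
    #include <map>
    #include <string>

    // For each section, which layout variant applies: "normal", "shifted_down",
    // "five_fields", ... (a missing key means the section is absent in that range).
    using SectionLayout = std::map<std::string, std::string>;

    int main() {
        // Keyed by the first file number each layout applies to; the labels and
        // numbers below are illustrative, taken from the example in the question.
        const std::map<int, SectionLayout> layoutByFirstFile = {
            {7000, {{"section1", "normal"}, {"section2", "three_fields"},
                    {"section3", "normal"}, {"section4", "shifted_down"}}},
            {7251, {{"section1", "normal"}, {"section2", "three_fields"},
                    {"section3", "normal"}, {"section4", "shifted_down"},
                    {"section5", "normal"}}},
            {7501, {{"section1", "normal"}, {"section2", "five_fields"},
                    {"section3", "normal"}, {"section4", "shifted_down"}}},
            {7636, {{"section1", "normal"}, {"section2", "three_fields"},
                    {"section4", "normal"}}},
            {7800, {{"section1", "normal"}, {"section2", "three_fields"},
                    {"section3", "normal"}, {"section4", "normal"}}},
        };

        int fileNumber = 7612;                 // the file currently being imported
        // upper_bound returns the first range starting *after* fileNumber;
        // the previous entry is the range the file falls into.
        auto it = layoutByFirstFile.upper_bound(fileNumber);
        if (it == layoutByFirstFile.begin()) {
            std::cout << "no layout known for file " << fileNumber << "\n";
            return 0;
        }
        --it;
        std::cout << "file " << fileNumber << " uses the layout starting at " << it->first << "\n";
        for (const auto& section : it->second)
            std::cout << "  " << section.first << " -> " << section.second << "\n";
        return 0;
    }

    Each variant label then selects the matching extractor, so a newly discovered variant costs one table entry and, at worst, one new extractor rather than a whole new schema.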

    Read the article

  • iPhone - Open Excel from SSRS 2008

    - by milesmcgehee
    We're currently having a problem with our 2008 SSRS server sending Excel reports to users with iPhones. They get the email with no problems, but when the XLS file is opened on the phone, it returns an error of: Invalid format. Everyone else can open the report with no problems (email/BlackBerry). The odd thing is that I can drag the file to my desktop from the message, open it, save it, and then email it again, and it opens just fine on the phone. Does anyone know of a hotfix that can be applied to the SSRS server to create the XLS files correctly? Or something I can change to make this work? I know we can send all the attachments as PDF, but I'd like to keep them XLS if at all possible.

    Read the article

  • Microsoft Office documents collaboration - Open Source alternative

    - by Saggi Malachi
    I am looking for a good solution to collaborate on Microsoft Office documents. We currently just edit directly on a Samba share, but it's one big mess: sometimes people leave the office with their laptops while docs are open, so swap files remain behind and then nobody is sure what's going on. Is there any good and simple open source solution based on Linux? I've tried Alfresco but it is much more than what I need; we have an internal wiki for most collaboration and I just need a solution for the stuff we need to do in Microsoft Office (mostly Excel files, the rest is in the wiki). EDIT: Some more info as requested - we are a very small group, 4 full-time employees and a few freelancers. The best idea I've got so far is just managing it in a Subversion repository with a lock-modify-unlock policy, but I'd love to hear about better solutions. Thanks!

    Read the article

  • CHKDSK: What option DOES NOT delete files and turn them into .chk files?

    - by CHKDSKuser
    I had a recent power outage while using my computer, with a 1TB hard drive being directly accessed as the power went out. When the power came back on, and I rebooted my computer, one of my 1TB hard drives would not register with WinXP SP3, and showed a Total Space of 0, and an Available Space of 0. The file system (NTFS) also did not register...every entry for the drive was either blank or zeroed. My assumption is that the file tables were damaged/corrupted because the drive was being directly accessed when the power went out. After doing some research, I ran CHKDSK with whatever default options it runs with (I'm not sure what they are as I didn't see them displayed). Upon completion of CHKDSK, the drive registered with WinXP as a 1TB hard drive, with an accurately-reflected amount of available space. But CHKDSK also deleted about 16GB of files from their original directories, and changed them all into sequentially-named *.chk files. My question is how can CHKDSK be run in a situation like mine where the file tables needed to be restored, but without having CHKDSK delete any files from their original directories, even if they may be damaged/corrupt? I'd simply like to be able to run CHKDSK and have it restore the file tables, and repair bad sector damage, as it did, but not have it do anything else such as delete files and convert them to CHK files. Any ideas? Or is there a CHKDSK alternative that can perform the same functions without the file deletions?

    Read the article

  • Unable to remove file using 'rm'

    - by Alvin Sim
    Hi all, In one of our servers (IBM AIX), we have a file in path /data/1002/ which we were not able to remove or delete using the 'rm' command. The error message we got is "rm: A1208001.002: A file or directory in the path name does not exist." With the "-f" option, no error message was displayed, but the file is still there. This file has a '0' byte size and when I use the command "touch A120001.002", I see two files with the same file name in that directory. The directory listing is as below: $ ls -l total 56 -rwxrwxrwx 1 oracle dba 0 Feb 09 11:57 S1208001.002 drwxrwxrwx 4 nobody dba 24576 Feb 09 13:36 backup How do I remove this bogus file? Thanks.

    Read the article

  • GnuPG PHP gnupg Folder & Files Permission

    - by Michael Robinson
    Situation: we plan on using PHP's GnuPG extension to encrypt/decrypt files. Currently we've set up some test cases, using keys generated with GPG. The generated files reside in: /Users/username/.gnupg/ I am able to get keyinfo for the key I want to use to encrypt/decrypt, but when I attempt to use addencryptkey, I get: (E_WARNING: 2): gnupg::addencryptkey() [gnupg.addencryptkey]: get_key failed I think this is due to the permissions on the ~/.gnupg folder & enclosed files. The files are owned by me - username, but Apache runs as www. A few days ago I did have this working, but it seems each time I use GPG Keychain Access to import/export a key, the folder's permissions are changed. Question: What are the exact permissions required to allow PHP's GnuPG to add encrypt & decrypt keys?

    Read the article

  • Natively open .doc or .odt file in LaTeX

    - by MaQleod
    I have looked at ways to convert Word or OpenOffice text files to LaTeX format, and those have been addressed on SU here and here. These will work, but I'd rather have a native single-step solution. Does anyone know of any module or add-on for LaTeX that will let one open .doc and/or .odt files without the use of third-party conversion tools? It would be ideal if it would allow for editing and saving in the same format. If it makes any difference, I am using LaTeX Editor in Windows.

    Read the article

  • DPM 2010 PowerShell Script to Easily Restore Multiple Files

    - by bmccleary
    I’ve got what I thought would be a simple task with Data Protection Manager 2010 that is turning out to be quite frustrating. I have a file server on one server and it is the only server in a protection group. This file server is the repository for a document management application which stores the files according to the data within a SQL database. Sometimes users inadvertently delete files from within our application and we need to restore them. We have all the information needed to restore the files to include the file name, the folder that the file was stored in and the exact date that the file was deleted. It is easy for me to restore the file from within the DPM console since we have a recovery point created every day, I simply go to the day before the delete, browse to the proper folder and restore the file. The problem is that using the DPM console, the cumbersome wizard requires about 20 mouse clicks to restore a single file and it takes 2-4 minutes to get through all the windows. This becomes very irritating when a client needs 100’s of files restored… it takes all day of redundant mouse clicks to restore the files. Therefore, I want to use a PowerShell script (and I’m a novice at PowerShell) to automate this process. I want to be able to create a script that I pass in a file name, a folder, a recovery point date (and a protection group/server name if needed) and simply have the file restored back to its original location with some sort of success/failure notification. I thought it was a simple basic task of a backup solution, but I am having a heck of a time finding the right code. I have seen the sample code at http://social.technet.microsoft.com/wiki/contents/articles/how-to-use-a-windows-powershell-script-to-recover-an-item-in-data-protection-manager.aspx that I have tried to follow, but it doesn’t accomplish what I really want to do (it’s too simplistic) and there are errors in the sample code. Therefore, I would like to get some help writing a script to restore these files. An example of the known values to restore the data are: DPM Server: BACKUP01 Protection Group: Document Repository Data Protected Server: FILER01 File Path: R:\DocumentRepository\ToBackup\ClientName\Repository\2010\07\24\filename.pdf Date Deleted: 8/2/2010 (last recovery point = 8/1/2010) Bonus Points: If you can help me not only create this script, but also show me how to automate by providing a text file with the above information that the PowerShell script loops through, or even better, is able to query our SQL server for the needed data, then I would be more than willing to pay for this development.

    Read the article

  • can I bundle multiple installs for Mac OSX and do them as a single script?

    - by Dov
    I have a lot of open source software to be installed for a course. We currently run on PCs that we provide. If we allow students to use their own Macs in Mac-centric schools, that means we have to load the software on those Macs. Rather than having to load each package individually, is there any way I can create a single file, mount it, and run a script to install all the packages? We are willing to simplify the installs by standardizing the locations to store the applications, since the students will have identical machines.

    Read the article

  • Cygwin vim doesn't load files in runtimepath

    - by durron597
    I created a custom syntax file, but none of the files in $VIMRUNTIME seem to load. I followed these pieces of the documentation: http://vimdoc.sourceforge.net/htmldoc/filetype.html#new-filetype http://vimdoc.sourceforge.net/htmldoc/syntax.html#mysyntaxfile When I do :echo &runtimepath I get: /home/durron597/.vim,/usr/share/vim/vimfiles,/usr/share/vim/vim73,/usr/share/vim/vimfiles/after,/home/durron597/.vim/after However, if I open a file with vim -D, here are the files listed as I type f: /etc/vimrc /home/durron597/.vimrc /usr/share/vim/vim73/plugin/getscriptPlugin.vim /usr/share/vim/vim73/plugin/gzip.vim /usr/share/vim/vim73/plugin/matchparen.vim /usr/share/vim/vim73/plugin/netrwPlugin.vim /usr/share/vim/vim73/plugin/rrhelper.vim /usr/share/vim/vim73/plugin/spellfile.vim /usr/share/vim/vim73/plugin/tarPlugin.vim /usr/share/vim/vim73/plugin/tohtml.vim /usr/share/vim/vim73/plugin/vimballPlugin.vim /usr/share/vim/vim73/plugin/zipPlugin.vim Here's the output of ls -lR: durron597@Durron597 ~/.vim $ ls -lR .: total 0 drwxr-xr-x+ 1 durron597 None 0 Jun 3 11:06 ftdetect drwxr-xr-x+ 1 durron597 None 0 Jun 3 11:06 syntax ./ftdetect: total 1.0K -rw-r--r-- 1 durron597 None 45 Jun 3 11:06 mytype.vim ./syntax: total 4.0K -rw-r--r-- 1 durron597 None 740 Jun 3 11:06 mytype.vim The exact paths are: /home/durron597/.vim/ftdetect/mytype.vim /home/durron597/.vim/syntax/mytype.vim Note: the problem is that these files don't seem to be loaded at all, not that these files have internal mistakes. Output of :filetype: filetype detection:ON plugin:ON indent:OFF Edit 3: No, really, the files are in the right place: $ find /home -name '*.vim' /home/durron597/.vim /home/durron597/.vim/ftdetect/fix.vim /home/durron597/.vim/syntax/fix.vim

    Read the article

< Previous Page | 114 115 116 117 118 119 120 121 122 123 124 125  | Next Page >