Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.

Page 371/1640 | < Previous Page | 367 368 369 370 371 372 373 374 375 376 377 378  | Next Page >

  • Objects in Java ArrayList don't get updated.

    - by Sbm007
    This is going to be a very long post; hopefully you can understand what I'm talking about, and I appreciate any help. Thanks. Basically, I've created a personal, non-commercial project (which I don't plan to release) that can read ZIP and RAR files. It can only read the contents of an archive: the folders inside, the files inside the folders, and their properties (such as last modified date, last modified time, CRC checksum, uncompressed size, compressed size and file name). It can't extract files either, so it's really a ZIP/RAR viewer, if you will. Anyway, that's slightly irrelevant to my problem, but I thought I'd give you some background info. Now for my problem: I can successfully list all the folders and files inside a ZIP archive, so now I want to take that raw input and link it together in some useful way. I made 2 classes: ArchiveFile (represents a file inside a ZIP) and ArchiveFolder (represents a folder inside a ZIP). They both have some useful methods such as getLastModifiedDate, getName, getPath and so on. But the difference is that ArchiveFolder can hold an ArrayList of ArchiveFiles and additional ArchiveFolders (think of this as files and folders inside a folder). Now I want to populate my raw input into one root ArchiveFolder, which will have all the files in the root dir of the ZIP in its ArchiveFile ArrayList and any folders in the root dir of the ZIP in its ArchiveFolder ArrayList (and this process continues like a chain reaction: more files/folders inside each of those ArchiveFolders, etc.). So I came up with the following code:

        while (archive.hasMore()) {
            String path = "";
            ArchiveFolder current = root;
            String[] contents = archive.getName().split("/");
            for (int x = 0; x < contents.length; ++x) {
                if (x == (contents.length - 1) && !archive.getName().endsWith("/")) { // If on last item and item is a file
                    path += contents[x]; // Update final path
                    ArchiveFile file = new ArchiveFile(path, contents[x], archive.getUncompressedSize(),
                            archive.getCompressedSize(), archive.getModifiedTime(), archive.getModifiedDate(), archive.getCRC());
                    current.addFile(file); // Create and add the file to the current ArchiveFolder
                } else if (x == (contents.length - 1)) { // Else if we are on the last item and it is a folder
                    path += contents[x] + "/"; // Update final path
                    ArchiveFolder folder = new ArchiveFolder(path, contents[x], archive.getModifiedTime(), archive.getModifiedDate());
                    current.addFolder(folder); // Create and add this folder to the current ArchiveFolder
                } else { // Else we are still traversing through the path
                    path += contents[x] + "/"; // Update path
                    ArchiveFolder folder = new ArchiveFolder(path, contents[x]); // We only know the path/name, not the modified date/time
                    current.addFolder(folder); // Add folder (it may be an intermediate folder we only know the name of)
                    current = folder; // Update current ArchiveFolder to the newly created one for the next iteration of the for loop
                }
            }
            archive.getNext();
        }

    Assume that root is the root ArchiveFolder (initially empty), and that archive.getName() returns the name of the current file OR folder in the following fashion: file.txt, or folder1/file2.txt, or folder4/folder2/ (this is an empty folder); so basically the relative path from the root of the ZIP archive. Please read through the comments in the above code to familiarize yourself with it.
    Also assume that the addFolder method in an ArchiveFolder only adds the folder if it doesn't exist already (so there are no duplicate folders), and that it also updates the time and date of an existing folder if they are blank (i.e. it was an intermediate folder we only knew the name of, but now we know its details). The code for addFolder is (pretty self-explanatory):

        public void addFolder(ArchiveFolder folder) {
            int loc = folders.indexOf(folder); // folders is the ArrayList containing ArchiveFolders
            if (loc == -1) {
                folders.add(folder);
            } else {
                ArchiveFolder real = folders.get(loc);
                if (real.time == null) {
                    real.setTime(folder.getTime());
                    real.setDate(folder.getDate());
                }
            }
        }

    So I can't see anything wrong with the code. It works, and after finishing, the root ArchiveFolder contains all the files in the root of the ZIP as I want it to, and it contains all the directories in the root folder as I want it to. So you'd think it works as expected, but no: the ArchiveFolders in the root folder don't contain the data inside those 'child' folders; each is just a blank folder with no additional files and folders (while it really does contain more files/folders when viewed in WinZip). After debugging in Eclipse, the for loop does iterate through all the files (even those not included above), so this led me to believe that there is a problem with this line of the code: current = folder; What it does is update the current folder (used as an intermediate by the loop) to the newly added folder. I thought Java passed by reference, and thus all new operations and new additions in future ArchiveFiles and ArchiveFolders would automatically be reflected, and parent ArchiveFolders would be updated accordingly. But that does not appear to be the case? I know this is a long post and I really hope anyone can help me out with this. Thanks in advance.
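    A plausible reading of the symptom above (offered as a sketch, not as the asker's actual code): when addFolder detects a duplicate, it silently discards the new instance, but the loop still does current = folder, so every subsequent child gets attached to an orphan object that isn't in the tree. The fix pattern is for addFolder to return whichever instance actually lives in the tree, and for the caller to descend into that. A minimal Python sketch of the pattern:

        class Folder:
            def __init__(self, name):
                self.name = name
                self.folders = []  # child Folder objects
                self.files = []    # child file names

            def add_folder(self, folder):
                # If a child with this name already exists, hand back the stored
                # instance: the caller must descend into the object that is
                # really in the tree, not into a discarded duplicate.
                for existing in self.folders:
                    if existing.name == folder.name:
                        return existing
                self.folders.append(folder)
                return folder

        root = Folder("root")
        current = root
        for part in "folder1/folder2/file.txt".split("/")[:-1]:
            current = current.add_folder(Folder(part))  # always follow the returned instance
        current.files.append("file.txt")

    In the Java above, the equivalent change would be for addFolder to return the kept ArchiveFolder, and for the loop to assign that return value to current instead of the freshly constructed folder.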

    Read the article

  • web.xml not reloading in tomcat even after stop/start

    - by ajay
    This is in relation to http://stackoverflow.com/questions/2576514/basic-tomcat-servlet-error. I changed my web.xml file, ran ant compile and ant all, then did /etc/init.d/tomcat stop and start. Even then, the web.xml in my Tomcat deployment is still unchanged. This is my build.properties file:

        app.name=hello
        catalina.home=/usr/local/tomcat
        manager.username=admin
        manager.password=admin

    This is my build.xml file. Is there something wrong with it?

        <!--
          Licensed to the Apache Software Foundation (ASF) under one or more
          contributor license agreements. See the NOTICE file distributed with
          this work for additional information regarding copyright ownership.
          The ASF licenses this file to You under the Apache License, Version 2.0
          (the "License"); you may not use this file except in compliance with
          the License. You may obtain a copy of the License at

              http://www.apache.org/licenses/LICENSE-2.0

          Unless required by applicable law or agreed to in writing, software
          distributed under the License is distributed on an "AS IS" BASIS,
          WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
          See the License for the specific language governing permissions and
          limitations under the License.
        -->

        <!--
          General purpose build script for web applications and web services,
          including enhanced support for deploying directly to a Tomcat 6 based
          server.

          This build script assumes that the source code of your web application
          is organized into the following subdirectories underneath the source
          code directory from which you execute the build script:

            docs  Static documentation files to be copied to the "docs"
                  subdirectory of your distribution.
            src   Java source code (and associated resource files) to be compiled
                  to the "WEB-INF/classes" subdirectory of your web application.
            web   Static HTML, JSP, and other content (such as image files),
                  including the WEB-INF subdirectory and its configuration file
                  contents.

          $Id: build.xml.txt 562814 2007-08-05 03:52:04Z markt $
        -->

        <!--
          A "project" describes a set of targets that may be requested when Ant
          is executed. The "default" attribute defines the target which is
          executed if no specific target is requested, and the "basedir"
          attribute defines the current working directory from which Ant
          executes the requested task. This is normally set to the current
          working directory.
        -->
        <project name="My Project" default="compile" basedir=".">

          <!-- ===================== Property Definitions =========================== -->
          <!--
            Each of the following properties are used in the build script. Values
            for these properties are set by the first place they are defined, from
            the following list:

              * Definitions on the "ant" command line (ant -Dfoo=bar compile).
              * Definitions from a "build.properties" file in the top level source
                directory of this application.
              * Definitions from a "build.properties" file in the developer's home
                directory.
              * Default definitions in this build.xml file.

            You will note below that property values can be composed based on the
            contents of previously defined properties. This is a powerful technique
            that helps you minimize the number of changes required when your
            development environment is modified. Note that property composition is
            allowed within "build.properties" files as well as in the "build.xml"
            script.
          -->
          <property file="build.properties"/>
          <property file="${user.home}/build.properties"/>

          <!-- ==================== File and Directory Names ======================== -->
          <!--
            These properties generally define file and directory names (or paths)
            that affect where the build process stores its outputs.

              app.name          Base name of this application, used to construct
                                filenames and directories. Defaults to "myapp".
              app.path          Context path to which this application should be
                                deployed (defaults to "/" plus the value of the
                                "app.name" property).
              app.version       Version number of this iteration of the application.
              build.home        The directory into which the "prepare" and "compile"
                                targets will generate their output. Defaults to
                                "build".
              catalina.home     The directory in which you have installed a binary
                                distribution of Tomcat 6. This will be used by the
                                "deploy" target.
              dist.home         The name of the base directory in which distribution
                                files are created. Defaults to "dist".
              manager.password  The login password of a user that is assigned the
                                "manager" role (so that he or she can execute
                                commands via the "/manager" web application).
              manager.url       The URL of the "/manager" web application on the
                                Tomcat installation to which we will deploy web
                                applications and web services.
              manager.username  The login username of a user that is assigned the
                                "manager" role (so that he or she can execute
                                commands via the "/manager" web application).
          -->
          <property name="app.name"      value="myapp"/>
          <property name="app.path"      value="/${app.name}"/>
          <property name="app.version"   value="0.1-dev"/>
          <property name="build.home"    value="${basedir}/build"/>
          <property name="catalina.home" value="../../../.."/> <!-- UPDATE THIS! -->
          <property name="dist.home"     value="${basedir}/dist"/>
          <property name="docs.home"     value="${basedir}/docs"/>
          <property name="manager.url"   value="http://localhost:8080/manager"/>
          <property name="src.home"      value="${basedir}/src"/>
          <property name="web.home"      value="${basedir}/web"/>

          <!-- ==================== External Dependencies =========================== -->
          <!--
            Use property values to define the locations of external JAR files on
            which your application will depend. In general, these values will be
            used for two purposes:
              * Inclusion on the classpath that is passed to the Javac compiler.
              * Being copied into the "/WEB-INF/lib" directory during execution of
                the "deploy" target.
            Because we will automatically include all of the Java classes that
            Tomcat 6 exposes to web applications, we will not need to explicitly
            list any of those dependencies. You only need to worry about external
            dependencies for JAR files that you are going to include inside your
            "/WEB-INF/lib" directory.
          -->
          <!-- Dummy external dependency -->
          <!-- <property name="foo.jar" value="/path/to/foo.jar"/> -->

          <!-- ==================== Compilation Classpath =========================== -->
          <!--
            Rather than relying on the CLASSPATH environment variable, Ant includes
            features that make it easy to dynamically construct the classpath you
            need for each compilation. The example below constructs the compile
            classpath to include the servlet.jar file, as well as the other
            components that Tomcat makes available to web applications
            automatically, plus anything that you explicitly added.
          -->
          <path id="compile.classpath">
            <!-- Include all JAR files that will be included in /WEB-INF/lib -->
            <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** -->
            <!-- <pathelement location="${foo.jar}"/> -->
            <!-- Include all elements that Tomcat exposes to applications -->
            <fileset dir="${catalina.home}/bin">
              <include name="*.jar"/>
            </fileset>
            <pathelement location="${catalina.home}/lib"/>
            <fileset dir="${catalina.home}/lib">
              <include name="*.jar"/>
            </fileset>
          </path>

          <!-- ================== Custom Ant Task Definitions ======================= -->
          <!--
            These properties define custom tasks for the Ant build tool that
            interact with the "/manager" web application installed with Tomcat 6.
            Before they can be successfully utilized, you must perform the
            following steps:

              - Copy the file "lib/catalina-ant.jar" from your Tomcat 6
                installation into the "lib" directory of your Ant installation.
              - Create a "build.properties" file in your application's top-level
                source directory (or your user login home directory) that defines
                appropriate values for the "manager.password", "manager.url", and
                "manager.username" properties described above.

            For more information about the Manager web application, and the
            functionality of these tasks, see
            <http://localhost:8080/tomcat-docs/manager-howto.html>.
          -->
          <taskdef resource="org/apache/catalina/ant/catalina.tasks" classpathref="compile.classpath"/>

          <!-- ==================== Compilation Control Options ==================== -->
          <!--
            These properties control option settings on the Javac compiler when it
            is invoked using the <javac> task.

              compile.debug        Should compilation include the debug option?
              compile.deprecation  Should compilation include the deprecation option?
              compile.optimize     Should compilation include the optimize option?
          -->
          <property name="compile.debug"       value="true"/>
          <property name="compile.deprecation" value="false"/>
          <property name="compile.optimize"    value="true"/>

          <!-- ==================== All Target ====================================== -->
          <!--
            The "all" target is a shortcut for running the "clean" target followed
            by the "compile" target, to force a complete recompile.
          -->
          <target name="all" depends="clean,compile"
                  description="Clean build and dist directories, then compile"/>

          <!-- ==================== Clean Target ==================================== -->
          <!--
            The "clean" target deletes any previous "build" and "dist" directory,
            so that you can be ensured the application can be built from scratch.
          -->
          <target name="clean" description="Delete old build and dist directories">
            <delete dir="${build.home}"/>
            <delete dir="${dist.home}"/>
          </target>

          <!-- ==================== Compile Target ================================== -->
          <!--
            The "compile" target transforms source files (from your "src"
            directory) into object files in the appropriate location in the build
            directory. This example assumes that you will be including your
            classes in an unpacked directory hierarchy under "/WEB-INF/classes".
          -->
          <target name="compile" depends="prepare" description="Compile Java sources">
            <!-- Compile Java classes as necessary -->
            <mkdir dir="${build.home}/WEB-INF/classes"/>
            <javac srcdir="${src.home}"
                   destdir="${build.home}/WEB-INF/classes"
                   debug="${compile.debug}"
                   deprecation="${compile.deprecation}"
                   optimize="${compile.optimize}">
              <classpath refid="compile.classpath"/>
            </javac>
            <!-- Copy application resources -->
            <copy todir="${build.home}/WEB-INF/classes">
              <fileset dir="${src.home}" excludes="**/*.java"/>
            </copy>
          </target>

          <!-- ==================== Dist Target ===================================== -->
          <!--
            The "dist" target creates a binary distribution of your application in
            a directory structure ready to be archived in a tar.gz or zip file.
            Note that this target depends on two others:
              * "compile" so that the entire web application (including external
                dependencies) will have been assembled.
              * "javadoc" so that the application Javadocs will have been created.
          -->
          <target name="dist" depends="compile,javadoc" description="Create binary distribution">
            <!-- Copy documentation subdirectories -->
            <mkdir dir="${dist.home}/docs"/>
            <copy todir="${dist.home}/docs">
              <fileset dir="${docs.home}"/>
            </copy>
            <!-- Create application JAR file -->
            <jar jarfile="${dist.home}/${app.name}-${app.version}.war" basedir="${build.home}"/>
            <!-- Copy additional files to ${dist.home} as necessary -->
          </target>

          <!-- ==================== Install Target ================================== -->
          <!--
            The "install" target tells the specified Tomcat 6 installation to
            dynamically install this web application and make it available for
            execution. It does *not* cause the existence of this web application
            to be remembered across Tomcat restarts; if you restart the server,
            you will need to re-install this web application. If you have already
            installed this application, and simply want Tomcat to recognize that
            you have updated Java classes (or the web.xml file), use the "reload"
            target instead.

            NOTE: This target will only succeed if it is run from the same server
            that Tomcat is running on.
            NOTE: This is the logical opposite of the "remove" target.
          -->
          <target name="install" depends="compile" description="Install application to servlet container">
            <deploy url="${manager.url}"
                    username="${manager.username}"
                    password="${manager.password}"
                    path="${app.path}"
                    localWar="file://${build.home}"/>
          </target>

          <!-- ==================== Javadoc Target ================================== -->
          <!--
            The "javadoc" target creates Javadoc API documentation for the Java
            classes included in your application. Normally, this is only required
            when preparing a distribution release, but is available as a separate
            target in case the developer wants to create Javadocs independently.
          -->
          <target name="javadoc" depends="compile" description="Create Javadoc API documentation">
            <mkdir dir="${dist.home}/docs/api"/>
            <javadoc sourcepath="${src.home}" destdir="${dist.home}/docs/api" packagenames="*">
              <classpath refid="compile.classpath"/>
            </javadoc>
          </target>

          <!-- ====================== List Target =================================== -->
          <!--
            The "list" target asks the specified Tomcat 6 installation to list the
            currently running web applications, either loaded at startup time or
            installed dynamically. It is useful to determine whether or not the
            application you are currently developing has been installed.
          -->
          <target name="list" description="List installed applications on servlet container">
            <list url="${manager.url}" username="${manager.username}" password="${manager.password}"/>
          </target>

          <!-- ==================== Prepare Target ================================== -->
          <!--
            The "prepare" target is used to create the "build" destination
            directory, and copy the static contents of your web application to it.
            If you need to copy static files from external dependencies, you can
            customize the contents of this task. Normally, this task is executed
            indirectly when needed.
          -->
          <target name="prepare">
            <!-- Create build directories as needed -->
            <mkdir dir="${build.home}"/>
            <mkdir dir="${build.home}/WEB-INF"/>
            <mkdir dir="${build.home}/WEB-INF/classes"/>
            <!-- Copy static content of this web application -->
            <copy todir="${build.home}">
              <fileset dir="${web.home}"/>
            </copy>
            <!-- Copy external dependencies as required -->
            <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** -->
            <mkdir dir="${build.home}/WEB-INF/lib"/>
            <!-- <copy todir="${build.home}/WEB-INF/lib" file="${foo.jar}"/> -->
            <!-- Copy static files from external dependencies as needed -->
            <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** -->
          </target>

          <!-- ==================== Reload Target =================================== -->
          <!--
            The "reload" target signals the specified application on Tomcat 6 to
            shut itself down and reload. This can be useful when the web
            application context is not reloadable and you have updated classes or
            property files in the /WEB-INF/classes directory, or when you have
            added or updated jar files in the /WEB-INF/lib directory.

            NOTE: The /WEB-INF/web.xml web application configuration file is not
            reread on a reload. If you have made changes to your web.xml file you
            must stop then start the web application.
          -->
          <target name="reload" depends="compile" description="Reload application on servlet container">
            <reload url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/>
          </target>

          <!-- ==================== Remove Target =================================== -->
          <!--
            The "remove" target tells the specified Tomcat 6 installation to
            dynamically remove this web application from service.

            NOTE: This is the logical opposite of the "install" target.
          -->
          <target name="remove" description="Remove application on servlet container">
            <undeploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/>
          </target>

        </project>

    Read the article

  • Do we need to explicitly pass php.ini's location to php-fpm?

    - by F21
    I am seeing a strange issue where my php.ini is not used unless I explicitly pass it to php-fpm when starting it. This is the upstart script I am using:

        start on (filesystem and net-device-up IFACE=lo)
        stop on runlevel [016]

        pre-start script
            mkdir -p /run/php
        end script

        expect fork
        respawn

        exec /usr/local/php/sbin/php-fpm --fpm-config /etc/php/php-fpm.conf

    If PHP is started with the above, my php.ini is never used, even though it is in the Configuration File (php.ini) Path. This is the relevant part from phpinfo():

        Configuration File (php.ini) Path          /etc/php/
        Loaded Configuration File                  (none)
        Scan this dir for additional .ini files    (none)
        Additional .ini files parsed               (none)

    If I modify the last line of the upstart script to point php-fpm to php.ini explicitly:

        exec /usr/local/php/sbin/php-fpm --fpm-config /etc/php/php-fpm.conf -c /etc/php/php.ini

    then we see that the php.ini is loaded:

        Configuration File (php.ini) Path          /etc/php/
        Loaded Configuration File                  /etc/php/php.ini
        Scan this dir for additional .ini files    (none)
        Additional .ini files parsed               (none)

    Why is this the case? Is this a quirk in php-fpm? Minor update: this also seems to be a problem for php5-fpm installed using apt-get. I did a test install in an Ubuntu Server 12.04 virtual machine by running the following:

        sudo apt-get install nginx php5-fpm

    PHP-FPM and nginx were started after installation and everything seemed fine. I then uncommented PHP's settings in nginx's configuration and placed a test phpinfo() file to inspect PHP's settings. The relevant bits are:

        Configuration File (php.ini) Path          /etc/php5/fpm
        Loaded Configuration File                  (none)
        Scan this dir for additional .ini files    /etc/php5/fpm/conf.d
        Additional .ini files parsed               /etc/php5/fpm/conf.d/10-pdo.ini

    I noted that no php.ini was loaded here either. However, if I go to /etc/php5/fpm, I can see that a php.ini exists. I also checked the startup scripts for PHP-FPM, and the -c parameter was not used to point PHP at the ini file. This can potentially be confusing for people who would expect php.ini to be loaded automatically by PHP-FPM.

    Read the article

  • How can I trigger the creation of a new CLB file?

    - by Xperimental
    I'm currently having a problem with an application using COM running on Windows Vista. The application runs OK on one machine, but doesn't work on a similarly configured machine. Both machines are virtual images originating from the same source image. While searching the registry for causes of this error, I came across the CLBVersion key in HKCR\CLSID, which seems to have something to do with COM. The value of the key differs between the two machines (0x6 on the erroneous one, 0xc on the working one). There are also files containing the same number in their filenames in the %SystemRoot%\Registration directories of the machines. They are called R000000000006.clb and R00000000000c.clb respectively. I have already searched the Windows event log for anything leading to the creation of those files (I searched by the creation date of the files). Now a few questions regarding the registry key and the files:

    1. Is it correct that this is connected to COM?
    2. What is the function of the files?
    3. What causes the creation of a new "CLBVersion"?
    4. Is there a way for me to trigger the creation of a new CLB file?

    Edit: I have now found out that this has nothing to do with my application error, but I would still be interested in details about the registry key and the files. An installation of Visual Studio 2005 has brought the second machine to the same configuration (0xc in registry and file) as the other one.

    Read the article

  • Can't install py-subversion on FreeBSD 8.2

    - by max taldykin
    I'm trying to install the Python bindings for Subversion:

        # cd /usr/ports/devel/py-subversion
        # make
        ===> Patching for py26-subversion-1.6.15
        ===> Applying extra patch /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in
        cannot open /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in: No such file or directory
        *** Error code 2

    Yes, there is no such file in subversion/files, but there is a file patch-subversion::bindings::swig::perl::natives::Makefile.PL.in (with colons instead of hyphens). After renaming it and rerunning make, I got the same error:

        # make
        ===> Patching for py26-subversion-1.6.15
        ===> Applying extra patch /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in
        cannot open /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in: No such file or directory
        *** Error code 2

    But now there is nothing like bindings-* left in subversion/files. So the question is: why is this happening, and how can I install py-subversion? PS: FreeBSD is running on a virtual private server, so I think it is somehow patched.

        # uname -a
        FreeBSD mskhug.ru 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0 r50: Thu Feb 24 10:15:34 IRKT 2011 [email protected]:/root/src/sys/amd64/compile/DEBUG amd64

    Read the article

  • Finding a backup and synchronization solution

    - by Andrea Zilio
    I'm having difficulty finding a backup and synchronization solution with the following characteristics:

    - Cross-platform: Windows, Linux, Mac
    - Offsite backup (so Internet backup)
    - Data deduplication
    - Transfers only the new/modified bits of modified files
    - Secure: data encrypted before leaving the computer
    - Maintains multiple versions of files (even deleted files)
    - Folder synchronization integrated with backup and across multiple computers connected to the internet (not necessarily in the same LAN)

    I think the folder sync feature needs a better explanation. The use case is this: you have a desktop PC and a laptop. The desktop PC contains a folder with some files, and this folder is part of the backup (so it was selected to be backed up). The laptop does not contain that folder or those files at all. Then you're abroad with your laptop and you need that folder. So you want to be able to open the backup program, select that folder from the backup and download it to your laptop, keeping it synchronized with the backed-up version. When you then come back home and switch on your desktop PC, you want the folder we're talking about to be updated on the desktop PC too. Does anyone know of any service with all these features? I've only found SpiderOak to support all the features I've mentioned, but I'm not completely satisfied by the time taken to complete a backup. Sometimes it seems to hang for minutes for no reason at all, and folder synchronization occurs only after all files are backed up (instead, folder sync should have a separate queue independent of other backup operations, and synchronization should occur frequently, for example every 5 minutes or less, independently of the frequency of normal backup operations).

    Read the article

  • "Options ExecCGI is off in this directory" When try to run Ruby code using mod_ruby

    - by Itay Moav
    I am on Ubuntu with Apache 2.2. I installed fcgi via apt-get, then removed it via apt-get remove, and installed mod-ruby. This is the configuration I added to Apache:

        LoadModule ruby_module /usr/lib/apache2/modules/mod_ruby.so
        RubyRequire apache/ruby-run
        <Directory /var/www>
            Options +ExecCGI
        </Directory>
        <Files *.rb>
            SetHandler ruby-object
            RubyHandler Apache::RubyRun.instance
        </Files>
        <Files *.rbx>
            SetHandler ruby-object
            RubyHandler Apache::RubyRun.instance
        </Files>

    I have a file in the www directory containing puts 'baba'. I have other files in that directory, all accessible via Apache. The test file has been chmod 777. In the browser I get a 403, and in the Apache error log I get:

        [error] access to /var/www/t.rb failed for (null), reason: Options ExecCGI is off in this directory

    If I move this to a subfolder rubytest and modify the relevant config to be:

        <Directory /var/www/rubytest>
            Options +ExecCGI
        </Directory>

    and make sure the directory has 755 permissions on it, Apache just tries to download the file, as if it no longer recognizes the *.rb extension. If I give the directory and files 777 it fails:

        /usr/lib/ruby/1.8/apache/ruby-run.rb:53: warning: Insecure world writable dir /var/www/rubytest in LOAD_PATH, mode 040777
        [Tue May 24 19:39:58 2011] [error] mod_ruby: error in ruby
        [Tue May 24 19:39:58 2011] [error] mod_ruby: /usr/lib/ruby/1.8/apache/ruby-run.rb:53:in `load': loading from unsafe file /var/www/rubytest/t.rb (SecurityError)
        [Tue May 24 19:39:58 2011] [error] mod_ruby: from /usr/lib/ruby/1.8/apache/ruby-run.rb:53:in `handler'

    BUT, if I use *.rbx it works like a charm... go figure.

    Read the article

  • Extract large zip file (50 GB) on Mac OS X

    - by chingjun
    I was trying to move my files to another hard drive, so I archived all my photos in one large ZIP file using the Mac OS X built-in compress function. But the file fails to extract. I've tried many programs, but none of the programs I tried were able to extract the file. I've tried Mac OS X's extract utility, StuffIt Expander, and 7-Zip (command line); all failed. Mac's Archive Utility and StuffIt don't seem to support large files, and 7-Zip's command-line version gave an error stating the archive is unsupported. I have no luck in Windows either, as many of my files have Chinese filenames and couldn't be extracted to the correct names under Windows. Are there any programs that support large files, can handle files compressed using Mac OS X's compress function, and support UTF-8 filenames? With or without a GUI is fine. Update: Well, I had made the wrong decision to compress the files, and it's already too late. I thought I should be able to extract the file if I could compress it. It's too late; the original copies are gone and only the large ZIP file is left. I have tried using 'unzip', but it says "End-of-central-directory signature not found". I guess it doesn't have large file support either. I would try the Windows Vista method as stated by SuperMagic, but I need to borrow a computer for that. Anyway, thank you everyone, but please provide more suggestions on what software could possibly extract that file.
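    One low-effort check worth trying (a sketch, with a hypothetical path; it assumes a reasonably recent Python 3 is available): Python's zipfile module reads ZIP64 archives and decodes UTF-8 filenames, so it can at least tell you whether the archive's central directory is readable before hunting for more tools:

        import zipfile

        archive = "/Volumes/External/photos.zip"  # hypothetical path

        # is_zipfile() only checks for the end-of-central-directory record,
        # which is exactly what 'unzip' reported as missing
        print(zipfile.is_zipfile(archive))

        try:
            with zipfile.ZipFile(archive) as zf:  # ZIP64 is handled automatically on read
                print(len(zf.infolist()), "entries")
                zf.extractall("/Volumes/External/photos")  # keeps UTF-8 names
        except zipfile.BadZipFile as exc:
            print("central directory unreadable:", exc)

    If is_zipfile() already returns False, the end-of-central-directory record really is missing or malformed, and no standards-following extractor will accept the archive as-is.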

    Read the article

  • Unusual Caching Issue with IE 7/8 and IIS 7

    - by Daniel A. White
    We recently moved a site into production running Server 2008 x64 and IIS 7. The ASP.NET pages apparently load just fine, but when it comes to IE 7 and 8, a weird caching issue has cropped up with the CSS and JavaScript files on the page. On a very sporadic schedule, IE does not get all the files necessary to compose the page (i.e. CSS and JS files). When I manually go to the missing files from the address bar, they come back from local cache as empty. I hit F5 on these source files and magically they come down properly. I refresh the site after loading a few files and the cache seems to hold. This problem has only been reproduced (again, sporadically) on IE 7 and 8 running on XP. Chrome and Firefox appear to be immune. We have set IIS to use server-side kernel caching for CSS, JS and images. We have also set content for the App_Themes and Scripts directories to expire immediately. One initial thought was that a SWF loading an FLV on page load was the culprit. None of these fixes has remedied the problem. We had no problems on our staging server, which is running Server 2003 and IIS 6. Any ideas would be greatly appreciated. P.S. It sounds similar to this problem: http://serverfault.com/questions/115099/iis-content-length-0-for-css-javascript-and-images, but we do have the Static Content module installed.

    Read the article

  • DFS replication and the SYSTEM user (NTFS permissions)

    - by HopelessN00b
    Question for which I'm having trouble finding an answer on Google or TechNet... Does granting the SYSTEM user permissions to DFS-shared files and folders have any effect on DFS replication? (And while we're at it, is there any good reason not to let SYSTEM have permissions to DFS-shared files?) It comes up because I have a collection of DFS namespaces and folders that I'm not able to make someone else's problem, and while troubleshooting a problem where one DFS replica just wasn't replicating with another for no discernible reason, I observed that SYSTEM didn't have any permissions granted to any of the files or folders in the folder in question. So I set SYSTEM to have full control and propagated it down, and our DFS health diagnostic reports went from showing a backlog of ~80 files to a backlog of ~100,000... and things started replicating, including a number of files that had been missing on one server or the other (so more than just the permissions changes started replicating). Naturally, this made me curious as to whether or not DFS needs the SYSTEM account to have permissions to do its work, or if perhaps it was just any change to the folder tree in question that prompted DFS to jump into action. If it matters, our DFS namespaces were set up under 2000/2003, and I have just recently finished upgrading all the servers to 2008 R2 or 2012 (with UAC enabled, blech), but have not yet gotten around to raising the DFS namespace functional levels to Server 2008. (And bonus points if anyone has an official Microsoft article on NTFS file permissions and the SYSTEM account as it pertains to DFS or network files.)

    Read the article

  • How Could My Website Be Hacked

    - by Kiewic
    Hi! I wonder how this could happen. Someone deleted the index.php files from all my domains and put his own index.php files in their place with the following message:

        Hacked by Z4i0n - Fatal Error - 2009 [Fatal Error Group Br]
        Site desfigurado por Z4i0n
        Somos: Elemento_pcx - s4r4d0 - Z4i0n - Belive
        Gr33tz: W4n73d - M4v3rick - Observing - MLK - l3nd4 - Soul_Fly
        2009

    My domain has many subdomains, but only the subdomains that can be accessed with a specific user were hacked; the rest weren't affected. I assumed that someone got in through SSH, but I checked the access log using the 'last' command, and it didn't show any activity through SSH or FTP on the day of the attack nor in the seven days before. Does anybody have an idea? I already changed my passwords. What do you recommend I do? Update: My website is hosted at Dreamhost. I suppose they have the latest patches installed. But while I was looking into how they got into my server, I found weird things. In one of my subdomains, there were several scripts to execute commands on the server, upload files, send mass emails and display compromising information. These files had been there since last December!! I have deleted those files and I'm looking for more malicious files. Maybe the security hole is an old and forgotten PHP application. This application has a file upload form protected by a password system based on sessions. One of the malicious scripts was in the uploads directory. This doesn't seem like an SQL injection attack. Thanks for your help.
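    A sweep along these lines can help surface the rest of the dropped scripts (a sketch; the docroot path and cutoff date are hypothetical and need adjusting): list every PHP file modified since a given date, since backdoors like the ones described tend to show up as recently touched files in upload directories:

        import os
        import time
        from datetime import datetime

        DOCROOT = "/home/user/domains"               # hypothetical document root
        CUTOFF = datetime(2008, 12, 1).timestamp()   # e.g. "last December"

        for dirpath, dirnames, filenames in os.walk(DOCROOT):
            for name in filenames:
                if name.endswith(".php"):
                    path = os.path.join(dirpath, name)
                    mtime = os.path.getmtime(path)
                    if mtime >= CUTOFF:
                        print(time.strftime("%Y-%m-%d", time.localtime(mtime)), path)

    Note that attackers sometimes reset mtimes, so a scan like this is a starting point, not proof of a clean tree.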

    Read the article

  • Strange corruption saving from Textpad 5 within Windows 7-64 VirtualBox VM to shared folder with Mac host

    - by joelarson
    I have a fairly new Windows 7 64-bit install running in VirtualBox on a MacBook Pro. I'm using TextPad 5 within that environment to edit source files that live on a shared folder on the Mac host. When I save some of these source files, the saved file ends up with some amount of the end of the file repeated one or more times. For example, a file that has this at the end:

        ...
        return ttp;
        };

    would, once saved, open up with:

        ...
        return ttp;
        };
        };

    It is definitely a problem with how the file gets written as opposed to how it's read, because I can see this no matter what app I open the file with (Notepad and Word in Windows 7, TextWrangler back on the Mac). I've tried saving as ANSI and UTF-8, with and without 'Write Unicode and UTF-8 BOM' checked in TextPad preferences. It doesn't happen with all files, though I can't see any pattern in which files do or don't have the problem. It doesn't happen with files written to the Windows 7 C:\ drive. And so far it doesn't happen when other applications save files, only TextPad. Any ideas? My versions: TextPad 5.4.2; Windows 7 Professional 64-bit, fully up to date; VirtualBox 4.0.8 r71778; OS X 10.6.7.

    Read the article

  • Rsync: General file/folder synchronization

    - by Rey Leonard Amorato
    I have a file server which is in charge of pulling a folder tree from multiple workstations on a daily basis. My current method for this is rsync, which works pretty well provided directory names and/or files remain the same. However, when files are renamed or moved about within subdir1, rsync will copy them over to the server again, creating duplicates. I then have to manually find and delete the extraneous files/folders that were left on the server during previous syncs. Note that I cannot use rsync's --delete flag, because any sync from a workstation would then mirror that particular folder tree instead of merging it into the server's. Visual diagram:

        Server:         Workstation1    Workstation2    Workstation(n)
        Folder*         Folder*         Folder*         Folder*
         -subdir1        -subdir1        -subdir1        -subdir(n)
          -file1          -file1          -file2          -file(n)
          -file2
          -file(n)

    Is there a simple script (preferably in bash, nothing fancy) that can accomplish the deletion of the extraneous files/folders in the event a file is renamed or moved to a different subdir? Is there a different program, much like rsync, that can accomplish this task autonomously and in a much simpler manner? I have looked at unison, but I did not like the fact that it keeps a local database for the syncing info. Any tips at all as to how I am supposed to tackle this? Thank you in advance for your help. Edit: I have tried unison recently and I can safely say it is out of the question now. Unison is a bi-directional synchronization tool, and from my testing it mirrors the files existing on the server to all workstations. That is unwanted. Preferably, I would want files/folders to stay within their respective workstations and just merge into the server. AKA uni-directional sync, but with renames/moves propagated to the server. I might have to look into Git/Mercurial/Bazaar as mentioned by Kyle, but I'm still unsure if they are fit for the job.
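    A sketch of the cleanup being asked for (in Python rather than the requested bash, and under a big assumption: every workstation's tree must be reachable from the server, e.g. over NFS/SSHFS mounts, so the trees can be compared): since the server holds a merged tree, a file is extraneous only if no workstation still has it at that relative path; delete those, then let the normal rsync runs merge the renamed/moved files back in:

        import os

        def remove_extraneous(server_root, workstation_roots):
            """Delete files under server_root that exist on none of the workstations.

            One-way cleanup to run before the daily rsync merges, so renames
            and moves on the workstations don't pile up as duplicates on the
            server. workstation_roots is a list of mounted workstation trees.
            """
            for dirpath, dirnames, filenames in os.walk(server_root, topdown=False):
                rel = os.path.relpath(dirpath, server_root)
                for name in filenames:
                    if not any(os.path.exists(os.path.join(w, rel, name))
                               for w in workstation_roots):
                        os.remove(os.path.join(dirpath, name))
                if rel != "." and not os.listdir(dirpath):
                    os.rmdir(dirpath)  # prune directories left empty

    Because a file is only removed when it is absent from every workstation, this avoids the --delete problem of one workstation's sync wiping out another's files.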

    Read the article

  • Populating a drive that was formatted to NTFS on a separate computer

    - by Will Love
    I recently downloaded some files that were over 4 GB and wouldn't do a straight transfer to an external HD. This is all being done on a separate computer we shall call computer1; my computer (computer2) is where I'm trying to get the files to. I used cmd to "soft format" the drive to NTFS on computer1 so I could then take the drive to computer2 and transfer the files. This was supposed to allow the format change without losing the files I already had on there. After the process was done I checked the drive on computer1 and all the old files were there and working. But when I took the drive to computer2 to transfer the files I had just downloaded, computer2 won't recognize or populate the external drive, so it can't give it a drive letter and function. The external drive works fine on computer1, but how do I get it to work again on computer2? When I try to populate it, I can't access the properties function in order to set permissions for Everyone so it is recognized. Any help would be appreciated. Also, I am running Windows 7 Ultimate edition, if that helps.

    Read the article

  • MySQL taking a long time to start

    - by Dscoduc
    I'm running Windows Server 2008 with MySQL installed, and every time I reboot the server the MySQL service doesn't start right away. A look into the Windows event log shows that the MySQL service hung at startup. The Services.msc console shows the service state as Starting... Eventually, after something like 10 minutes, the MySQL service actually finishes the startup process and the database becomes available for my WordPress server. I looked at the MySQL .err files and didn't find anything that would indicate a delay in the startup process. Can anyone suggest a way to determine what is causing the delay and, more importantly, how to prevent the delay in the MySQL startup? Update: Here are the .err log contents from the shutdown to the completed startup. Notice that startup begins at 10:30:00, but MySQL isn't ready for connections until 10:47:14, a full 17 minutes later:

        100322 10:27:06 [Note] C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld: Normal shutdown
        100322 10:27:06 [Note] Event Scheduler: Purging the queue. 0 events
        100322 10:27:06 InnoDB: Starting shutdown...
        100322 10:27:08 InnoDB: Shutdown completed; log sequence number 4 3854351346
        100322 10:27:08 [Warning] Forcing shutdown of 1 plugins
        100322 10:27:08 [Note] C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld: Shutdown complete
        100322 10:30:00 [Note] Plugin 'FEDERATED' is disabled.
        100322 10:30:01 InnoDB: Started; log sequence number 4 3854351346
        100322 10:47:14 [Note] Event Scheduler: Loaded 0 events
        100322 10:47:14 [Note] C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld: ready for connections.

    Update 2: MySQL is configured as a service (part of the install process, nothing I did) and executes the following command line (as it appears in the registry):

        "C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld" --defaults-file="C:\Program Files\MySQL\MySQL Server 5.1\my.ini" MySQL

    Read the article

  • Is this a File Header / Magic Number?

    - by Hammer Bro.
    I've got 120,000 files (way more, actually; this is just an arbitrary subset) of an unknown type. The Linux 'file' utility does not identify them (not that they're necessarily Linux files), nor do any other methods I've tried. There are only two hints about them that I currently have. One is that I suspect some compression is employed: I have metadata that claims the file sizes are always some amount larger than what I observe. The other is that in 100,000 of these files, the first 16 bytes are always:

        ff ee ee dd 00 00 00 00 01 00 00 00 00 00 00 00

    That really looks like a file header/magic number to me, but I just can't place it. Does anyone know what kind of files this would indicate? Alternatively, can anyone convince me that these suspiciously common bytes certainly do not indicate a specific file type? Update: I don't know the exact reverse-engineering details, but most of the files in our case are zips after the first 29 (or so) bytes are ignored. So in practice the problem is solved (we know how to process the files), but in theory the question is still unanswered: I don't know which application routinely prepends about 29 bytes to its zips. [I'm not sure if I should leave the question open or not at this point.]
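    For anyone checking files for the same pattern, a small sketch (the glob path is hypothetical): look for the ZIP local-file-header signature near the start of each file. Python's zipfile locates the central directory from the end of the file, so it tolerates a fixed-size prefix like the 29 bytes described:

        import glob
        import zipfile

        for path in glob.glob("/data/unknown/*"):   # hypothetical location
            with open(path, "rb") as fh:
                head = fh.read(4096)
            offset = head.find(b"PK\x03\x04")       # ZIP local-file-header magic
            try:
                count = len(zipfile.ZipFile(path).namelist())
                print(f"{path}: zip data starts at offset {offset}, {count} entries")
            except zipfile.BadZipFile:
                print(f"{path}: not readable as a zip (signature offset: {offset})")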

    Read the article

  • hosts file seems to be ignored

    - by z4y4ts
    I have an almost fresh Ubuntu desktop box. The OS was installed two weeks ago and updated from the karmic repositories. Last week I had no problems with DNS, but this week something has changed. I'm not sure what and when, and I'm not sure whether I changed any configs. So now I have a really weird situation. According to the configs, name resolving should work normally. /etc/hosts:

        127.0.0.1 localhost test
        127.0.1.1 desktop

    /etc/host.conf:

        order hosts,bind
        multi on

    /etc/resolv.conf:

        # Generated by NetworkManager
        search <search servers obtained via DHCP>
        nameserver 192.168.0.3

    /etc/nsswitch.conf:

        passwd: compat
        group: compat
        shadow: compat
        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
        networks: files
        protocols: db files
        services: db files
        ethers: db files
        rpc: db files
        netgroup: nis

    But in fact it is not. Pinging is OK:

        user@test ~ $ ping test
        PING localhost (127.0.0.1) 56(84) bytes of data.
        [skip]

    But host is not:

        user@test ~ $ host test
        test.mydomain.com has address xx.xxx.161.201

    I suspect that NetworkManager might be causing this misbehavior, but I don't know where to start checking. Any thoughts or suggestions?
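    One detail worth keeping in mind while debugging this: ping resolves names through the NSS hosts line above (so /etc/hosts is consulted), while the host command always queries the DNS server directly and never reads /etc/hosts, so the two outputs above are not necessarily contradictory. A quick sketch of probing the NSS path, which should behave like ping rather than like host:

        import socket

        # gethostbyname() resolves through the system resolver configured in
        # /etc/nsswitch.conf ("hosts: files ... dns"), the same path ping uses;
        # the host and dig commands bypass it and talk straight to DNS.
        print(socket.gethostbyname("test"))       # expect 127.0.0.1 if /etc/hosts is honored
        print(socket.gethostbyname("localhost"))  # expect 127.0.0.1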

    Read the article

  • Backup solution, or, how Duplicati duped me

    - by blarghmaster
    TL;DR version: Mono + Duplicati.commandline.exe restore spits this out for several files regardless of what I try. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of:

        Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file

    Any advice here, or an idea of where to look for a better solution? Full story: I've recently put together a nice, clean, friendly backup solution for several servers, predominantly Linux, but occasionally a Windows box is added too. The solution as-is meets all my requirements and does it well... save one: cross-compatibility. The solution is based on a combination of several elements, but eventually comes down to using Duplicity and Duplicati for the actual storage of files. The entire solution was ready to go before I realized that Duplicati does not, in fact, allow me to restore my files to a Linux box, regardless of what the command line under Mono might tell you. It just spits out errors on random zip and image files, for apparently no good reason; I have tried several options to get it to restore, and several versions of Mono, including installing it pretty much lib-for-lib. There is no effective log file explaining the reasons for these errors, and even the --debug-output=true flag does nothing. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors like the one quoted above. Now I could most likely use the friendly instructions on Duplicati's site and script a bash equivalent of the restore, but that's not exactly ideal. Any advice on this? Or possibly an alternative solution that presents the same benefits of Duplicati/Duplicity but actually works across platforms?

    Read the article

  • Why would Ubuntu treat the hosts file so strangely?

    - by z4y4ts
    I have an almost fresh Ubuntu desktop box. The OS was installed two weeks ago and updated from the karmic repositories. Last week I had no problems with DNS, but this week something has changed. I'm not sure what and when, and I'm not sure whether I changed any configs. So now I have a really weird situation. According to the configs, name resolving should work normally. /etc/hosts:

        127.0.0.1 localhost test
        127.0.1.1 desktop

    /etc/host.conf:

        order hosts,bind
        multi on

    /etc/resolv.conf:

        # Generated by NetworkManager
        search <search servers obtained via DHCP>
        nameserver 192.168.0.3

    /etc/nsswitch.conf:

        passwd: compat
        group: compat
        shadow: compat
        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
        networks: files
        protocols: db files
        services: db files
        ethers: db files
        rpc: db files
        netgroup: nis

    But in fact it is not. Pinging is OK:

        user@test ~ $ ping test
        PING localhost (127.0.0.1) 56(84) bytes of data.
        [skip]

    But host is not:

        user@test ~ $ host test
        test.myviacube.com has address xx.xxx.161.201

    I suspect that NetworkManager might be causing this misbehavior, but I don't know where to start checking. Any thoughts or suggestions?

    Read the article

  • Easy_install installs the wrong version of Python modules (Mac OS)

    - by user73250
    I installed Python 2.7 on my Mac. When I type "python" in the terminal, it shows:

        $ python
        Python 2.7 (r27:82508, Jul 3 2010, 20:17:05)
        [GCC 4.0.1 (Apple Inc. build 5493)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.

    The Python version is correct here. But when I try to easy_install some modules, the system installs the modules for Python 2.6, and they cannot be imported into Python 2.7. So of course I cannot do what I need in my code. Here's an example with easy_install graphy:

        $ easy_install graphy
        Searching for graphy
        Reading http://pypi.python.org/simple/graphy/
        Reading http://code.google.com/p/graphy/
        Best match: Graphy 1.0.0
        Downloading http://pypi.python.org/packages/source/G/Graphy/Graphy-1.0.0.tar.gz#md5=390b4f9194d81d0590abac90c8b717e0
        Processing Graphy-1.0.0.tar.gz
        Running Graphy-1.0.0/setup.py -q bdist_egg --dist-dir /var/folders/fH/fHwdy4WtHZOBytkg1nOv9E+++TI/-Tmp-/easy_install-cFL53r/Graphy-1.0.0/egg-dist-tmp-YtDCZU
        warning: no files found matching '*.tmpl' under directory 'graphy'
        warning: no files found matching '*.txt' under directory 'graphy'
        warning: no files found matching '*.h' under directory 'graphy'
        warning: no previously-included files matching '*.pyc' found under directory '.'
        warning: no previously-included files matching '*~' found under directory '.'
        warning: no previously-included files matching '*.aux' found under directory '.'
        zip_safe flag not set; analyzing archive contents...
        graphy.all_tests: module references __file__
        Adding Graphy 1.0.0 to easy-install.pth file
        Installed /Library/Python/2.6/site-packages/Graphy-1.0.0-py2.6.egg
        Processing dependencies for graphy
        Finished processing dependencies for graphy

    So it installs graphy for Python 2.6. Can someone help me with this? I just want to set my default easy_install Python version to 2.7.
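    A quick way to confirm the mismatch (a sketch; it assumes the package imports under the name graphy) is to check which interpreter is running and where the module resolves from:

        import sys
        import graphy  # assumed import name for the Graphy package

        print(sys.version_info)  # interpreter actually running
        print(sys.executable)    # path to that interpreter
        print(graphy.__file__)   # a /Library/Python/2.6/site-packages/... path
                                 # means the egg was installed for the 2.6 stack

    The usual culprit is that the easy_install script on PATH belongs to the system Python 2.6; if the 2.7 install ships its own setuptools, invoking it through that interpreter (for example, python2.7 -m easy_install graphy) or calling a versioned script such as easy_install-2.7 should pin the target version.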

    Read the article

  • rsync general question

    - by CaptnLenz
    I'm trying to use rsync. At first, everything looks very good:

        rsync -Pniahv -e ssh /home/xxx/Videos/ [email protected]:"/shares/Public/Shared\ Videos/" --stats
        ...
        <f+++++++++ Serien/blah.avi
        <f+++++++++ Serien/blah S01E01
        <f+++++++++ Serien/blah - S01E02
        <f+++++++++ Serien/blah - S01E03
        <f+++++++++ Serien/blah - S01E04
        <f+++++++++ Serien/blah - S01E05
        <f+++++++++ Serien/blah - S01E06
        <f+++++++++ Serien/blah - S01E07
        ...
        Number of files: 232
        Number of files transferred: 223
        Total file size: 118.24G bytes
        Total transferred file size: 117.51G bytes
        Literal data: 0 bytes
        Matched data: 0 bytes
        File list size: 9.46K
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 10.18K
        Total bytes received: 712

    After that, I copied some of the files manually and ran rsync again in dry-run mode:

        rsync -Pniahv -e ssh /home/xxx/Videos/ [email protected]:"/shares/Public/Shared\ Videos/" --stats
        ...
        <f..tpo.... Serien/blah.avi
        <f..tpo.... Serien/blah S01E01
        <f..tpo.... Serien/blah - S01E02
        <f..tpo.... Serien/blah - S01E03
        <f..tpo.... Serien/blah - S01E04
        <f..tpo.... Serien/blah - S01E05
        <f..tpo.... Serien/blah - S01E06
        <f..tpo.... Serien/blah - S01E07
        ...
        Number of files: 232
        Number of files transferred: 223
        Total file size: 118.24G bytes
        Total transferred file size: 117.51G bytes
        Literal data: 0 bytes
        Matched data: 0 bytes
        File list size: 9.46K
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 10.18K
        Total bytes received: 712

    Why has nothing changed in the --stats output, even though only the permissions and the timestamps need to be updated rather than the full files being copied?
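    For context on what the itemized output is saying: <f+++++++++ marks a file that would be newly created, while <f..tpo.... marks one whose timestamp, permissions and owner differ (size and data match). rsync's default "quick check" for whether a file needs transferring compares only size and modification time, roughly like this sketch:

        import os

        def quick_check_differs(src: str, dest: str, modify_window: float = 0.0) -> bool:
            """Rough approximation of rsync's default size+mtime quick check.

            Files copied manually without preserving times (e.g. cp without -p)
            get a fresh mtime, so rsync still counts them as needing transfer
            even though their contents are identical.
            """
            s, d = os.stat(src), os.stat(dest)
            if s.st_size != d.st_size:
                return True
            return abs(s.st_mtime - d.st_mtime) > modify_window

    That would explain why the manually copied files still show up under "Number of files transferred": their mtimes differ, even though a real run would likely send little literal data for them.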

    Read the article

  • Getting rsync to move files from source to destination?

    - by fabien-barbier
    Is rsync a good choice for my project? I have to:

    - copy files from a source to a destination folder via SSH,
    - be sure all files are copied,
    - delete the source files after copying,
    - rename files if there is a name conflict.

    It looks like I can use the --remove-source-files option (to delete source files), but how does rsync manage conflicts? Can I add rules? Use case on my project: I run scientific calculations on server A and the results are inserted in the folder "process"; for each calculation I have a directory like /process/calc1. Now I would like to transfer the directory "calc1" to server B (so I get /process/calc1 there) and delete "calc1" from server A. During another calculation I get "/process/calc2" on server A; the idea is to move "calc2" into the "/process/" directory on server B as well, so that server B then holds:

    - /process/calc1
    - /process/calc2

    (and /process/ on server A is empty). How will rsync manage a conflict on server B if a new calculation produces another "/process/calc1" on server A while "/process/calc1" already exists on server B? Is it possible to add rules to rsync and rename "/process/calc1" to "/process/calc1R2" on server B? And so on (e.g. calc1R3)? Thanks.
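    As far as I know, rsync has no built-in rename-on-conflict rule, so this usually ends up in a small wrapper around the transfer. A hedged sketch of the renaming move (paths hypothetical; it assumes server B's /process is reachable as a local mount, e.g. via sshfs, while the bulk transfer itself could still be rsync):

        import os
        import shutil

        def move_with_rename(src: str, dest_dir: str) -> str:
            """Move src into dest_dir, appending R2, R3, ... on name conflicts.

            Mirrors the calc1 -> calc1R2 naming described above.
            """
            base = os.path.basename(src.rstrip("/"))
            target = os.path.join(dest_dir, base)
            n = 2
            while os.path.exists(target):
                target = os.path.join(dest_dir, f"{base}R{n}")
                n += 1
            shutil.move(src, target)  # copies across filesystems, then removes src
            return target

        # e.g. move_with_rename("/process/calc1", "/mnt/serverB/process")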

    Read the article

  • What precautions should I take once defective RAM has been replaced?

    - by DustByte
    I recently discovered that my RAM is faulty (MemTest86+). I am waiting for new RAM to be sent to me.  It was through sheer luck that I discovered something was wrong. I was copying a large amount of big files and decided to verify the copies by their checksums. I discovered strange discrepancies, and noticed that checksum computation for the same file was not consistent. Now, this is the only problem I have encountered; no BSOD, no crashes, no errors. In a sense this makes me more worried than if I would have had massive crashes. I have no idea for how long the RAM has been faulty, and I have no idea if corrupt bits have been saved into files on my hard drives. I do know the RAM was fine two months ago (tested it back then). I am a user of Adobe's Lightroom and I am worried that photos or the catalog itself could carry corrupt data. Question: what should I do once new healthy RAM has been installed? Reinstall Windows (I'm using Windows 7, 64 bit)? Is there a risk that I will be presented with nasty surprises in the future if I don't? What about personal files? I have backups of some of the files but for newer files I'm not sure I can even trust the backups. It's going to take me many hard hours to manually replace files with older versions, or compare checksums.
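    For the checksum-comparison part of the cleanup, a small sketch that hashes every file under a backup tree and reports entries that are missing from, or differ in, the live tree (paths hypothetical):

        import hashlib
        import os

        def sha256_of(path: str, bufsize: int = 1 << 20) -> str:
            h = hashlib.sha256()
            with open(path, "rb") as fh:
                while chunk := fh.read(bufsize):
                    h.update(chunk)
            return h.hexdigest()

        def compare_trees(live_root: str, backup_root: str) -> None:
            """Print backup files that are missing from, or differ in, live_root."""
            for dirpath, _, filenames in os.walk(backup_root):
                for name in filenames:
                    backup = os.path.join(dirpath, name)
                    rel = os.path.relpath(backup, backup_root)
                    live = os.path.join(live_root, rel)
                    if not os.path.exists(live):
                        print("missing in live tree:", rel)
                    elif sha256_of(live) != sha256_of(backup):
                        print("checksum mismatch:", rel)

        # e.g. compare_trees(r"D:\Photos", r"E:\Backups\Photos")

    A mismatch only flags that the copies differ; with RAM corruption in the history, deciding which side is the good one still takes a manual look (or an older backup generation to break the tie).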

    Read the article

  • SVN, Samba and Symbolic Links. How to get them all to play together?

    - by Camsoft
    I've got a website project under version control that relies on files from an unversioned directory on the same server via symbolic links. I'm currently storing the symbolic links in the repository. The idea is that if someone checks out a working copy onto the same server, they can edit and test the working copy of the project before committing it back to the repository. When they check out their working copy, it successfully sets up the symlinks so that the entire site works when testing. The users who work on the project are Windows users, so I've set up Samba shares on the server and then mapped them to network drives in Windows. People can edit their working copies directly on the server via the network shares and then test them in the web browser before committing their changes back to the repository via TortoiseSVN. The problem: Samba resolves the symlinks as expected, but when a user tries to commit their changes back to the repository, TortoiseSVN thinks the linked files are part of the project and tries to commit the target files to the repository rather than the symlinks themselves. I tried turning off symlink support in Samba, which means the linked files cannot be resolved; I don't really want people to have access to the linked files, nor do I want to import the linked files into the repository. The problem with this is that I get:

        Can't stat '\\webserver\projects\working\project\symlinked_file.php'. Access is denied

    Apart from the symlink problem, everything else works 100% perfectly. Users can either check out website projects to their machines and work on them (but can't test), or check them out to their space on the dev web server, work on them and fully test. So I don't want to change the workflow process; I just need a solution to the symbolic link issue. Many thanks. Originally posted on StackOverflow: http://stackoverflow.com/questions/2400917/svn-samba-and-symbolic-links-how-to-get-them-all-to-play-together

    Read the article

< Previous Page | 367 368 369 370 371 372 373 374 375 376 377 378  | Next Page >