Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.

  • Software RAID for several HDs which retains files on each HD

    - by Fuxi
    Is there some kind of software/driver that would enable me to create one big volume out of several hard disks while retaining the file structure on each HD? That way, if one hard disk crashes, only the data on that disk is lost. Windows 7 lets me span a volume across disks, but if one HD breaks, all data is lost.
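
    One approach (my suggestion, not from the original post) is a FUSE-based union filesystem such as mhddfs, which pools several mounted drives into one big mount point while every file still lives whole on a single underlying disk. A minimal sketch, with the mount paths as assumptions:

      # Pool two already-mounted disks into one large volume (example paths)
      sudo apt-get install mhddfs
      mhddfs /mnt/disk1,/mnt/disk2 /mnt/pool -o allow_other
      # If /mnt/disk1 later dies, files that landed on /mnt/disk2 stay readable.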

  • Error message when renaming files on a network drive stored in Windows 7 favorites

    - by paulmorriss
    I have a network drive mapped to a share on a Windows Server 2003 machine, and a shortcut to this drive stored in my Windows 7 favorites. When I double-click the shortcut and then rename a file on the drive, if the name is longer than 8 characters or contains spaces I get this error: The drive that this file or folder is stored on does not allow long file names, or names containing blanks or any of the following characters:... If I get to the network drive by clicking on it in the tree under Computer, renaming works fine. Is there a way to get round this?

  • Removing files with strange names

    - by pythonic metaphor
    Somehow I ended up with a file named "-r". How do I remove it? rm -r doesn't work. I tried 'rm -i `ls -a`' to step through the file names, but it didn't prompt me to delete this one. Edit: a very hacky approach was to use Python's os.unlink function. That worked, but I'm curious to hear other ways.
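
    For reference, two standard ways to handle file names that look like options (common shell behaviour, though it wasn't in the original post):

      rm -- -r    # "--" ends option parsing; everything after is a file name
      rm ./-r     # a path starting with "./" cannot be mistaken for an option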

  • undelete files using a live cd

    - by doug
    I'm trying to recover some data using TestDisk, running Ubuntu from a live CD. When I choose the undelete option, TestDisk exits with the following message: Aborted (core dumped). Do you know why? Can you give me any advice or tips about what to do?

  • [Mac] Recover iCal 10.5 files in 10.6?

    - by shox
    Hi, I installed Mac OS X 10.6 and have the old 10.5 system on a separate HDD. Can I now simply copy the old iCal data into the new iCal installation? I tried ~/Library/Application Support/, but there is no folder called iCal. Thanks

  • Internet Explorer won't open docx files, saves them as zip

    - by David Gard
    I have several docx documents on an intranet at work, but IE8 refuses to open them, instead only saving them as a zip (filename_docx.zip). This seems to be an IE8-only problem (surprise, surprise!), as both FF and Chrome open the documents just fine. Unfortunately, as this is work-based, I cannot simply drop IE in favour of a decent browser as I normally would. Does anybody know how to fix this issue in IE? Thanks.
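
    A frequent cause of this symptom (an assumption on my part, since the post doesn't say what serves the intranet) is the web server sending .docx without its Office Open XML MIME type, so IE falls back to treating the file as the zip container it technically is. If the intranet happens to run Apache, a minimal sketch of the fix would be:

      # .htaccess or server config: declare the MIME type for .docx
      AddType application/vnd.openxmlformats-officedocument.wordprocessingml.document .docx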

  • Is it possible to upload only files that have been updated to a server?

    - by kamikaze_pilot
    Hi guys, suppose I have a server, accessible via FTP, that hosts websites. I want to edit a site locally so the live version isn't affected, and suppose I edit a whole bunch of files and don't want the hassle of keeping track of which ones I've changed. Once I've finished editing, I want to upload the changes to the server via FTP. Is there some FTP software that automatically detects which files have been edited and uploads and overwrites only those, rather than having me manually choose the edited files (and hence keep track of them) or upload the entire site, which is a waste of time? Thanks in advance
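
    One tool that can do the comparison for you (my suggestion, not from the post) is lftp: its mirror command compares local and remote files and, in reverse mode, uploads only what is newer locally. A minimal sketch, with host, credentials and paths as placeholders:

      # Upload only files that are newer locally than on the server
      lftp -u user,password -e "mirror -R --only-newer /local/site /remote/site; quit" ftp.example.com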

  • recover deleted files from another computer

    - by Giorgi
    Hello, I moved an HTML file from my computer to another one, which I accessed from my computer like this: \\name\folder, and accidentally deleted it from that computer. As a result the file did not go to the Recycle Bin. I tried ntfsundelete and it did find the file on my computer, but when I recovered it, it looked as if I had opened a binary file with Notepad. I then tried Recuva, and it says that part of the file is overwritten. Is there any chance to recover it? Can I recover it from the other computer? Thanks.

  • Highly robust and scalable search server needed for managing and analyzing files

    - by ChrisBenyamin
    Hi everybody, I am looking for a professional search server with functionality like that of Solr (http://lucene.apache.org/solr/). It should run in a centralized location that many hosts can query. Furthermore, the system should be extensible so that statistical procedures can be implemented on top of it (e.g. a heatmap, or other common diagrams, of one or more files, each identified by a GUID, that are spread across different hosts). The software doesn't have to be open source. Thanks. chris

  • How long do uploaded files stay in the tmp folder in Linux Ubuntu?

    - by Jean-Nicolas Boulay Desjardins
    I am building a web application where my users will be able to upload files. After the files are uploaded I need to send them to two other servers, after which they will be deleted from the server they were uploaded to. I am wondering: is it a good idea to keep the uploaded files in the tmp/ folder while they are being sent to the other two servers, or should I move them to another folder in case they get deleted? I am also wondering whether I need to build a cron script to get rid of the files that have been transferred to the other servers, so that I get my disk space back.
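
    For what it's worth, on a stock Ubuntu install /tmp is emptied at boot (and may also be cleaned by age, depending on configuration), so a dedicated spool directory plus a small cleanup job is the safer pattern. A minimal sketch, with the path and age threshold as assumptions:

      #!/bin/sh
      # e.g. saved as /etc/cron.hourly/clean-uploads (hypothetical path):
      # delete spooled uploads older than 60 minutes
      find /var/spool/myapp/uploads -type f -mmin +60 -delete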

  • grepping files question

    - by tearman
    I've been using grep to run a few PII scans, and while it's finding results, it's also finding too many false positives. Is there a way I can tell grep not to report a match for a file unless that file contains other data as well? For instance, can I tell it not to trigger an alert on a regex for an SSN unless the file also includes text like "ssn" or "social security number"?
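
    grep alone can't express "pattern A, but only in files that also contain pattern B", but two passes chained together can. A sketch, with a deliberately simplistic SSN regex:

      # Pass 1: list files containing an SSN-shaped number (NUL-separated, so
      # odd file names survive); pass 2: keep only the files that also mention
      # the term, case-insensitively
      grep -rlZE '[0-9]{3}-[0-9]{2}-[0-9]{4}' /path/to/scan \
        | xargs -0 grep -liE 'ssn|social security'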

  • glusterfs to replicate files to other servers

    - by sbrattla
    I've got multiple servers which all need to have the same content in /home. In other words, if the file /home/user1/test.txt is updated on server A, this needs to be replicated to all other servers in the cluster. Is it possible to use GlusterFS for this purpose? That is, let each server have a full copy of all data locally - which that server will be working on - and use GlusterFS solely to replicate this data to the other servers? I'm not interested in combined storage; I simply want all data on all machines, with GlusterFS replicating changes to the other machines.
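
    For context, a sketch of what that would look like (my example, with hypothetical host and brick names): a GlusterFS "replica" volume with as many replicas as servers keeps a full copy of every file on every node, though the data is then meant to be accessed through the Gluster mount rather than the brick directories directly.

      # Create a 3-way replicated volume across three servers
      gluster volume create homes replica 3 \
          srv1:/data/bricks/homes srv2:/data/bricks/homes srv3:/data/bricks/homes
      gluster volume start homes
      # Each server mounts the volume locally, e.g.:
      mount -t glusterfs localhost:/homes /home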

  • Restore files from certain increments using Duplicity

    - by luckytaxi
    Given the following backup sets...

      Found primary backup chain with matching signature chain:
      -------------------------
      Chain start time: Tue Jun 21 11:27:26 2011
      Chain end time: Tue Jun 21 11:27:59 2011
      Number of contained backup sets: 2
      Total number of contained volumes: 2
      Type of backup set:    Time:                      Num volumes:
      Full                   Tue Jun 21 11:27:26 2011   1
      Incremental            Tue Jun 21 11:27:59 2011   1

    If I run the following command, it works (1308655646 is Tue Jun 21 11:27:26 2011 converted to epoch seconds):

      duplicity --no-encryption --restore-time 1308655646 --file-to-restore ORIG_FILE \
        file:///storage/test/ restored-file.txt

    However, if I run the following command, it restores from the latest set instead:

      duplicity --no-encryption --restore-time 2011-06-21T11:27:26 --file-to-restore \
        ORIG_FILE file:///storage/test/ restored-file.txt

    What am I doing wrong with the time? I prefer the second form only because I don't want to do the conversion manually.
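
    One thing worth checking (an educated guess, not verified against this duplicity version): the w3-style datetime duplicity accepts includes an explicit timezone offset, e.g. 2011-06-21T11:27:26-04:00, and an offset-less value may not parse as you expect. If the only objection to the epoch form is the manual conversion, GNU date can do it inline:

      # Let date(1) produce the epoch value for --restore-time
      duplicity --no-encryption \
        --restore-time "$(date -d '2011-06-21 11:27:26' +%s)" \
        --file-to-restore ORIG_FILE file:///storage/test/ restored-file.txt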

  • How to delete files on the command line with regular expressions?

    - by Jack
    Let's say I have 20 files named FOOXX, where XX is the number of the file, e.g. 01, 02, etc. If I want to delete all files numbered below 10, this is easy, I just use a wildcard, e.g. rm FOO0*. However, if I want to delete specific files in a range, e.g. 13-15, this becomes more difficult. rm FOO[13-15] does not work, and asks me if I wish to delete all files. Likewise rm FOO1[3-5] wishes to delete all files that begin with FOO1. So, what is the best way to delete ranges of files like this? I have tried with both bash and zsh, and I don't think they differ much for such a basic task.
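
    For reference (standard shell behaviour, not from the post): [13-15] is a character class, not a numeric range, which is why the globs misbehave. Brace expansion handles numeric ranges:

      # Works in bash and zsh; note the expansion happens whether or not the
      # files exist, so rm complains about any missing name
      rm FOO{13..15}    # expands to: rm FOO13 FOO14 FOO15
      # zsh additionally has a true numeric-range glob:
      rm FOO<13-15>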

  • HTC Sync Manager and MKV files

    - by Zundrium
    My problem is pretty straightforward: HTC Sync Manager works perfectly with my HTC One X. However, it filters out extensions that its stock applications (HTC Sense) can't use, even though 3rd-party applications can of course handle other extensions. Is there a way to adjust HTC Sync Manager so that extensions are not filtered? And if that's not possible, is there a synchronisation tool that syncs automatically once the Android device is connected through USB? (I tried Allway Sync; it doesn't work properly.)

  • Understanding RewriteCond in .htaccess files

    - by Paulo Bu
    I'm having problems understanding how the RewriteCond directive works. So far, it's pretty clear that it compares two strings to decide whether to apply a RewriteRule. I have this file:

      <IfModule rewrite_module>
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ app_dev.php
      </IfModule>

    This works for me, but I don't know why. So far I understand the RewriteCond directive as: if the value of REQUEST_FILENAME is NOT a file on the hard drive, then apply the rule. That doesn't seem to make sense, because app_dev.php, the substitution target, is a file on the hard drive. Could someone enlighten me on this issue? I'm having a very hard time figuring out how this works.
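
    An annotated version may help (my reading of mod_rewrite's documented per-directory behaviour, not from the post): in .htaccess context the whole rule set is re-run after each substitution, and the !-f condition is what stops that loop once the URL points at a real file.

      <IfModule rewrite_module>
        RewriteEngine on
        # The condition guards only the RewriteRule immediately below, and is
        # re-tested against the current URL on every pass.
        RewriteCond %{REQUEST_FILENAME} !-f
        # Pass 1: /some/virtual/path is not a file -> rewritten to app_dev.php.
        # Pass 2: the rules run again; app_dev.php IS a file, so the condition
        # fails, nothing is rewritten, and processing stops.
        RewriteRule ^(.*)$ app_dev.php
      </IfModule>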

  • ubuntu preseed installation keeps missing mirror files

    - by JackWu
    I'm installing Ubuntu 12.04.2 with a preseed file, but there is one buggy problem with the preseed mirror setting. The symptom is that the installation process gets stuck. So I tracked down the log file and found the real problem: the installation is looking for a file that isn't there. This is just one of them; another pops up if I fake this file. This all happens during preseeding, so I believe preseed has something to do with it. I googled "ubuntu preseed mirror" and found this post, which says:

      # If you select ftp, the mirror/country string does not need to be set.
      #d-i mirror/protocol string ftp
      d-i mirror/country string manual
      d-i mirror/http/hostname string archive.ubuntu.com
      d-i mirror/http/directory string /ubuntu
      d-i mirror/http/proxy string
      # Alternatively: by default, the installer uses CC.archive.ubuntu.com where
      # CC is the ISO-3166-2 code for the selected country. You can preseed this
      # so that it does so without asking.
      #d-i mirror/http/mirror select CC.archive.ubuntu.com
      # Suite to install.
      #d-i mirror/suite string lucid
      # Suite to use for loading installer components (optional).
      #d-i mirror/udeb/suite string lucid
      # Components to use for loading installer components (optional).
      #d-i mirror/udeb/components multiselect main, restricted

    I wonder about the difference between d-i mirror/http/hostname and d-i mirror/http/mirror - I mean, they both specify a mirror, right? In my preseed file there is no d-i mirror/http/mirror, and d-i mirror/http/hostname points to my own repo, as you might notice in the previous image. Here are my questions: does preseed fetch files/resources from the internet if I use a local repo? Why is it looking for a file that isn't even there? This has bothered me for quite some time; many thanks in advance to anyone who might give any help.

  • Sync files between two users on Windows 7 Enterprise

    - by Zachary
    I'm running Windows 7 Enterprise on a Dell laptop, and I'd like to sync the entire user directory structure between two users. Background: I am an existing user on the computer, and soon I'll be sharing it with an employee. I want everything from my account to overwrite the other, while anything he does is mirrored on mine. I'm not worried about security because nothing vital is on the computer. Both accounts are administrators, and I have already tried to use hard links to accomplish this; however, the prompt leaves me with "Access Denied". Is what I'm trying to do possible, and if so, what steps must be taken to accomplish it?

  • Ways to parse NCSA combined-based log files

    - by Kyle
    I've done a bit of site: searching with Google on Server Fault, Super User and Stack Overflow. I also checked non-site-specific results and didn't really see a question like this, so here goes... I did spot this question, related to grep and awk, which has some great knowledge, but I don't feel the text-qualification challenge was addressed. This question also broadens the scope to any platform and any program. I've got squid or apache logs based on the NCSA combined format. By "based" I mean the first n columns of the file follow the NCSA combined standard; there might be more columns with custom stuff. Here is an example line from a squid combined log:

      1.1.1.1 - - [11/Dec/2010:03:41:46 -0500] "GET http://yourdomain.com:8080/en/some-page.html HTTP/1.1" 200 2142 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; C) AppleWebKit/532.4 (KHTML, like Gecko)" TCP_MEM_HIT:NONE

    I'd like to be able to parse n logs and output specific columns, for sorting, counting, finding unique values, etc. The main challenge, which makes it a little tricky and is also why I feel this question hasn't yet been asked or answered, is the text-qualification conundrum: the quoted and bracketed fields contain spaces. When I spotted asql in the grep/awk question I was very excited, but then realised it didn't support combined out of the box, something I'll look at extending, I guess. Looking forward to answers and learning new stuff! Answers don't have to be limited to a platform or program/language. For the context of this question, the platforms I use most are Linux and OS X. Cheers
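
    One way to crack the text-qualification problem (my sketch; requires GNU awk 4.0+ for FPAT): define what a field looks like instead of what separates fields, so quoted strings and the bracketed timestamp each count as one column:

      # Treat [bracketed], "quoted", or space-free runs as single fields, then
      # count requests per quoted request line (field 5 in combined format)
      gawk 'BEGIN { FPAT = "(\\[[^]]*\\])|(\"[^\"]*\")|([^ ]+)" }
            { print $5 }' access.log | sort | uniq -c | sort -rn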

  • Nginx Reverse Proxy Node.js and Wordpress + Static Files Issue

    - by joemccann
    I have had quite a time trying to get nginx to serve static assets from my wordpress blog. Have a look at the config and let me know if you can help. (https://gist.github.com/1130332 - to see the entire thing)

      server {
        listen 80;
        server_name subprint.com;
        access_log /var/www/subprint/logs/access.log;
        error_log /var/www/subprint/logs/error.log;
        root /var/www/subprint/server/public;

        # express serves static resources for subprint.com out of here
        location / {
          proxy_pass http://127.0.0.1:8124;
          root /var/www/subprint/server;
          access_log on;
        }

        # serve static assets
        location ~* ^(?!\/).+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$ {
          expires max;
          access_log off;
        }

        # the route for the wordpress blog
        # unfortunately the static assets (css, img, etc.) are not being pathed/served properly
        location /blog {
          root /var/www/localhost/public;
          index index.php;
          access_log /var/www/localhost/logs/access.log;
          error_log /var/www/localhost/logs/error.log;
          if (!-e $request_filename) {
            rewrite ^/(.*)$ /index.php?q=$1 last;
            break;
          }
          if (!-f $request_filename) {
            rewrite /blog$ /blog/index.php last;
            break;
          }
        }

        # actually serves the wordpress and subsequently phpmyadmin
        location ~* (?!\/blog).+\.php$ {
          fastcgi_pass localhost:9000;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME /var/www/localhost/public$fastcgi_script_name;
          fastcgi_param PATH_INFO $fastcgi_script_name;
          include /usr/local/nginx/conf/fastcgi_params;
        }

        # This works fine, but ONLY with a symlink inside the
        # /var/www/localhost/public directory pointing to /usr/share/phpmyadmin
        location /phpmyadmin {
          index index.php;
          access_log /var/www/phpmyadmin/logs/access.log;
          error_log /var/www/phpmyadmin/logs/error.log;
          alias /usr/share/phpmyadmin/;
          if (!-f $request_filename) {
            rewrite /phpmyadmin$ /phpmyadmin/index.php permanent;
            break;
          }
        }

        # opt-in to the future
        add_header "X-UA-Compatible" "IE=Edge,chrome=1";
      }
