Search Results

Search found 962 results on 39 pages for 'tar'.

Page 27/39 | < Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >

  • Using wget to recursively download whole FTP directories

    - by user9406
    I want to copy all of the files and folders from one host to another. The files on the old host sit at /var/www/html and I only have FTP access to that server, and I can't TAR all the files. A regular FTP connection to the old host drops me into the /home/admin folder. I tried running the following command from my new server: wget -r ftp://username:[email protected] But all I get is a generated index.html file. What's the right syntax for using wget recursively over FTP?
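
    A hedged sketch of the usual fix: give wget the absolute path (a leading %2F makes the FTP path absolute instead of relative to /home/admin) and trim the leading directories from the local copy. The host, credentials and the --cut-dirs count are placeholders/assumptions:

        wget -r -nH --cut-dirs=3 "ftp://username:[email protected]/%2Fvar/www/html/"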

    Read the article

  • Debugging logrotate postrotate script

    - by robert
    Following is my logrotate conf. /mnt/je/logs/apache/jesites/web/*.log" { missingok rotate 0 size 5M copytruncate notifempty sharedscripts postrotate /home/bitnami/.conf/compress-and-upload.sh /mnt/je/logs/apache/jesites/web/ web endscript } And the compress-and-upload.sh script, #!/bin/sh # Perform Rotated Log File Compression tar -czPf $1/log.gz $1/*.1 # Fetch the instance id from the instance EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`" if [ -z $EC2_INSTANCE_ID ]; then echo "Error: Couldn't fetch Instance ID .. Exiting .." exit; else /usr/local/bin/s3cmd put $1/log.gz s3://xxxx/logs/$(date +%Y)/$(date +%m)/$(date +%d)/$2/$EC2_INSTANCE_ID-$(date +%H:%M:%S)-$2.gz fi # Removing Rotated Compressed Log File rm -f $1/log.gz The files are rotated, but the shell script is not executed. I don't know how to debug the postrotate script. Is there a log file I can check to see if there are any permission issues? If I execute the script directly from the command line, the file upload works. Thanks.
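
    A hedged way to debug this: run logrotate by hand in debug and verbose mode against the config and capture the output (the config path below is an assumption). Also remember that logrotate under cron runs with a minimal environment, so anything the script calls (s3cmd, wget) may need absolute paths:

        # Dry run: shows which files match and whether postrotate would fire
        sudo /usr/sbin/logrotate -d /etc/logrotate.d/jesites
        # Force a real rotation with verbose output and keep the messages
        sudo /usr/sbin/logrotate -vf /etc/logrotate.d/jesites 2>&1 | tee /tmp/logrotate-debug.log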

    Read the article

  • Compressed disk image on Linux

    - by Aaron Digulla
    I just got my new computer with a much bigger hard disk. I think I copied all the important files over, but just to be sure I'd like to keep a disk image of my old disk. To save space I'd like to compress it, but I didn't find an option to mount a compressed image. My goals: the result must be easy to access; no need to decompress the whole thing before I can access anything; files should be quick to locate - no TAR/CPIO archive; necessary space should be less than just copying the files over. So ideally, I'm looking for a read-only, compressed file system which I can create in a file and which grows automatically.
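
    One option that matches these goals is SquashFS: a read-only, compressed filesystem built into a single file that can be mounted directly. A minimal sketch, assuming the old disk's contents are mounted at /mnt/olddisk:

        sudo apt-get install squashfs-tools
        sudo mksquashfs /mnt/olddisk /backup/olddisk.squashfs -comp xz
        # Later, browse individual files without unpacking anything:
        sudo mount -t squashfs -o loop /backup/olddisk.squashfs /mnt/image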

    Read the article

  • Rsync over ssh with root access on both sides

    - by Tim Abell
    I have one older Ubuntu server and one newer Debian server, and I am migrating data from the old one to the new one. I want to use rsync to transfer data across to make the final migration easier and quicker than the equivalent tar/scp/untar process. As an example, I want to sync the home folders one at a time to the new server. This requires root access at both ends, as not all files on the source side are world readable and the destination has to be written with correct permissions into /home. I can't figure out how to give rsync root access on both sides. I've seen a few related questions, but none quite match what I'm trying to do. I have sudo set up and working on both servers.
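
    A common pattern is to run rsync as root locally and tell it to invoke rsync through sudo on the remote side; a hedged sketch, assuming the remote account has (ideally NOPASSWD) sudo rights for rsync, with host and user names as placeholders:

        sudo rsync -aAXv -e ssh --rsync-path="sudo rsync" \
            adminuser@oldserver:/home/someuser/ /home/someuser/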

    Read the article

  • Poppler installation

    - by Menopia
    I downloaded the new poppler 0.15 tarball and built it from source successfully, but when I try dpkg -l | grep poppler it outputs ii libpoppler-dev 0.14.3-0ubuntu1.1 PDF rendering library -- development files ii libpoppler-glib-dev 0.14.3-0ubuntu1.1 PDF rendering library -- development files (GLib interface) ii libpoppler-glib4 0.12.4-1ubuntu1 PDF rendering library (GLib-based shared library) ii libpoppler-glib5 0.14.3-0ubuntu1.1 PDF rendering library (GLib-based shared library) ii libpoppler5 0.12.4-1ubuntu1 PDF rendering library rc libpoppler6 0.14.2.is.0.14.1-0ubuntu1 PDF rendering library ii libpoppler7 0.14.3-0ubuntu1.1 PDF rendering library ii poppler-utils 0.14.3-0ubuntu1.1 PDF utilities (based on libpoppler) So AFAIK this means the new version is not installed!
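
    dpkg only lists packages installed through the package manager; a default ./configure && make install build goes under /usr/local and never appears in that output. A couple of hedged checks for the source-built copy:

        ls /usr/local/lib | grep poppler           # libraries from the source build, if any
        pkg-config --modversion poppler            # may need PKG_CONFIG_PATH=/usr/local/lib/pkgconfig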

    Read the article

  • pyexiv2 build error src/exiv2wrapper.hpp:32:29: error: exiv2/preview.hpp: No such file or directory

    - by Jake
    The other day I used apt-get install python-pyexiv2 on my Ubuntu server, but it seems to have given me an old version. It's not compatible with the code I wrote in my local development environment, so I'd like to update it. I downloaded the latest tar.gz from the website, extracted it and ran scons as per the readme. But it will not build; I get the error src/exiv2wrapper.hpp:32:29: error: exiv2/preview.hpp: No such file or directory I've also used apt-get to install libboost-python-dev and libexiv2-dev. Can anyone help me with this?
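
    A hedged check: that error usually means the exiv2 development headers scons is compiling against are too old to ship preview.hpp (it only appeared in later exiv2 releases). Two commands to confirm what the packaged libexiv2-dev actually provides:

        dpkg -L libexiv2-dev | grep preview.hpp    # empty output = header not shipped
        apt-cache policy libexiv2-dev              # shows the packaged exiv2 version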

    Read the article

  • Email is not sent when the script is run by cron

    - by Adam Blok
    I wrote a simple backup bash script, and at the end of it it sends me an email that the backup is ready. Everything works perfectly when I run this script from a terminal (as root), but when the script is run by cron, the email is not sent :-/. #!/bin/sh filename=$(date +%d-%m-%Y) backup_dir="/mnt/backup/" email_from_name="BACKUP" email_to="my@email" email_subject="Backup is ready" email_body_file="/tmp/backup-email-body.txt" tar czf "$backup_dir$filename.tgz" "/home/www" echo "Subject: $email_subject" > $email_body_file ls $backup_dir -sh >> $email_body_file sendmail -F $email_from_name -t $email_to < $email_body_file
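
    cron runs with a minimal environment (PATH is typically just /usr/bin:/bin), so sendmail and friends are often simply not found. A hedged sketch using an absolute path and logging the script's output so failures become visible (the paths and the crontab line are examples, not taken from the question):

        /usr/sbin/sendmail -F "$email_from_name" "$email_to" < "$email_body_file"
        # In the crontab, capture everything the script prints:
        # 0 3 * * * /path/to/backup.sh >> /var/log/backup-cron.log 2>&1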

    Read the article

  • Can a working Tomcat 6 webapp be turned into a usable .war file?

    - by Bill Cole
    Problem: I have a working webapp on a FreeBSD 8.1 Tomcat 6 test server that I need to move to a production system. The developer who last touched it (and had root on that server) has moved on and isn't helpful. The running app seems to have been deployed from a CVS server that is now unavailable. My thinking is that I would like to find a way to wrap the working webapp into a proper .war so that I can deploy it on a pristine host and (after testing) send the existing system to a very deep bitbucket. But I'm not having luck finding a way to do that. I'm a sysadmin not a developer and don't work much with Tomcat systems so I may be (likely am) overlooking something blindingly simple. I gather that I may be able to just tar up the deployed directory and untar it on the new machine, but I have a nagging feeling that there are pitfalls in that.
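
    A .war is just a zip of the exploded application directory, so repackaging the deployed copy is usually possible; a minimal sketch, with the Tomcat path as an assumption:

        cd /usr/local/tomcat/webapps/myapp     # wherever the working app is exploded
        jar -cvf /tmp/myapp.war .

    Anything the app expects outside its own directory (datasources or realms in server.xml, extra jars in Tomcat's lib/) still has to be carried over by hand, which is also the main pitfall of the plain tar-and-untar approach.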

    Read the article

  • backuppc - how to backup remote (over the internet) clients?

    - by Scott
    I am testing out BackupPC, which works great so far backing up Windows clients on a LAN via SMB (no backup client/agent required). However, I have quite a few laptops and desktops in various remote locations - some of which move around. I need some way to have those remote computers create an outgoing connection for backup purposes (Windows XP/7). I know BackupPC supports SMB, rsync and 'tar', but I believe these are all connections going from the server TO the client. So, I either need a way to VPN the client on a timed basis, or it would be a lot better if the client could somehow connect to the server (ssh?) and initiate its own backup (rsync?). Of course this all needs to be pre-installed by me and require no maintenance by the end user, no dialogs on their side. What do you think?
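
    One hedged approach for roaming clients: have each client open a reverse SSH tunnel on a schedule, so the BackupPC server can reach it over rsync/SSH even behind NAT (this assumes an SSH client such as Cygwin's is installed on the Windows machines; hostnames and ports are placeholders):

        ssh -N -R 2222:localhost:22 backupuser@backuppc.example.com
        # The server can then back this client up over "ssh -p 2222 ... localhost".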

    Read the article

  • Distributing a Python Software for Linux [closed]

    - by zfranciscus
    I am writing my first piece of software in Python for Ubuntu (or Debian-based Linux). I am looking for good advice on the best way to distribute it. The easiest alternative I can think of at the moment is to archive the Python code into a *.tar.gz and let the user execute the main Python script to run the software. I realize that this may not be the best approach. I looked at the Debian maintainer guide: http://www.debian.org/doc/maint-guide/ch-dother.en.html - not to sound lazy, but the guide looks very intimidating for a beginner. Are there any other tutorials that show how to create a Debian package for a beginner? If anyone has a suggestion, do let me know. Thanks ^_^
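
    If the full Debian maintainer workflow is too much to start with, stdeb can turn a standard setup.py-based project into a .deb in one step; a hedged sketch, assuming the project already has a setup.py:

        sudo apt-get install python-stdeb
        python setup.py --command-packages=stdeb.command bdist_deb
        ls deb_dist/*.deb      # the installable package ends up here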

    Read the article

  • Backup Solr home

    - by user226188
    I'm new to Solr: I've successfully installed Tomcat and the Solr 4.3.1 webapp, plus two collections, on a CentOS 6.4 machine. Now my server is in production and I need to make backups of Solr, so I would like to know the best way to back it up... For the moment I'm doing: stop Tomcat = tar of my Solr home = start Tomcat, but I've read that this is not a good solution? Moreover, it means stopping the whole Tomcat, which hosts other webapps besides Solr. I've also heard that there is a script named "backup" in the Solr home's bin folder, but my bin folder is empty :( I don't want to set up another slave server with replication; for me that's not a backup solution, because my backups are supposed to be sent to a Bacula backup server every night. Is there no built-in solution I can script around, like mysqldump for MySQL servers? Thanks for the help!
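
    Solr 4.x exposes a backup command on the replication handler, which snapshots an index over HTTP without stopping Tomcat; a hedged sketch (core name, port and target directory are assumptions):

        curl "http://localhost:8080/solr/collection1/replication?command=backup&location=/var/backups/solr"
        # Check the result before handing the snapshot directory to Bacula:
        curl "http://localhost:8080/solr/collection1/replication?command=details"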

    Read the article

  • What is the difference between yum, apt-get, rpm, ./configure && make install

    - by Saif Bechan
    I am new to Linux and am running CentOS. When I want to update or install certain software, I come across three ways. Sometimes it's: yum install program; rpm -i program.rpm; or wget program.tar.gz, unpack, ./configure, make, make install. That last one is a real pain, especially when you come from Windows where a program install is usually one click and then a nice guide. Now can someone please explain to me: Why are there so many different ways to do this? Which one do you recommend to use, and why? Are there any other ways of installing programs?
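
    For the source-build case, one way to keep the package manager aware of what was installed is checkinstall, which wraps "make install" into a removable package; a hedged sketch (checkinstall availability on CentOS is an assumption):

        ./configure && make
        sudo checkinstall      # builds and installs an RPM instead of scattering files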

    Read the article

  • PostgreSQL, update existing rows with pg_restore

    - by woky
    Hello. I sometimes need to sync two PostgreSQL databases (some tables from a development db to a production db). So I came up with this script: [...] pg_dump -a -F tar -t table1 -t table2 -U user1 dbname1 | \ pg_restore -a -U user2 -d dbname2 [...] The problem is that this works only for newly added rows. When I edit a non-PK column I get a constraint error and the row isn't updated. For each dumped row I need to check whether it exists in the destination database (by PK) and, if so, delete it before the INSERT/COPY. Thanks for your advice. (Previously posted on stackoverflow.com, but IMHO this is a better place for this question.)
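
    One hedged workaround is to clear the target tables immediately before restoring the data-only dump, so the copy never collides with existing primary keys (table and connection details follow the question's placeholders):

        pg_dump -a -F tar -t table1 -t table2 -U user1 dbname1 > /tmp/sync.tar
        psql -U user2 -d dbname2 -c "TRUNCATE table1, table2 CASCADE;"
        pg_restore -a -U user2 -d dbname2 /tmp/sync.tar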

    Read the article

  • How can I upload a large number of files to Rackspace Cloud Files more quickly?

    - by andy kim
    I have a lot of image files - about a million in a single directory - that I want to upload to Rackspace Cloud Files in the fastest and most efficient way. The python-cloudfiles upload script I'm using is very slow, and I want to know about different approaches or Python script code, because uploading over one connection at a time is very slow. I think tarring the files and uncompressing the directory server-side would be a better way, but Cloud Files does not support this. Does anyone know another way?
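
    Cloud Files has no server-side "upload a tarball and unpack it" operation, so the usual speed-up is to run many uploads in parallel instead of one connection at a time; a hedged sketch where upload_one.py stands in for a small python-cloudfiles script that pushes a single file (not a real tool name):

        find /data/images -type f -print0 | xargs -0 -P 8 -n 1 python upload_one.py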

    Read the article

  • Backup of "Leavers" network directory

    - by Mez
    I want to create a backup of a leaver's network home directory. I've generally done this before by just creating an ISO with genisoimage and then burning it. However, it seems that the latest users have 10G in their files. For archival purposes, I want to be able to burn these to multiple DVDs. How do I create these DVD ISO images (I know it's got something to do with tar and stream-media-size), and then how do I restore them if I need them again? Using Debian.
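
    A hedged sketch that sidesteps ISO images entirely: stream the tar archive through split into DVD-sized chunks, burn each chunk as a data file, and concatenate them again to restore (the size and paths are examples):

        tar czf - /home/leaver | split -b 4300m - leaver-backup.tar.gz.part-
        # Restore by joining the parts back into one stream:
        cat leaver-backup.tar.gz.part-* | tar xzf -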

    Read the article

  • Linux installation CD environment

    - by haw3d
    I recently made a custom Linux system for my specific needs. It's on my HDD, but I want to create a CD to install it from. A few days ago I found a live CD and created an install script for it, but after a power failure my HDD is gone and I can't find that live CD again. My install script is based on restoring a tar.gz backup. My requirements are: based on glibc (not uclibc); recognizes every device. Do you have any suggestions? Excuse my bad English.
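
    A minimal restore sketch that any glibc-based live CD (a stock Debian or Ubuntu live image, for instance) should be able to run; the device names, backup location and GRUB 2 invocation are all assumptions:

        mkfs.ext4 /dev/sda1
        mount /dev/sda1 /mnt
        tar xzpf /media/cdrom/backup.tar.gz -C /mnt
        grub-install --boot-directory=/mnt/boot /dev/sda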

    Read the article

  • Roll standalone JBoss app under Tomcat

    - by Seva Alekseyev
    I've got a Linux box where there’s Tomcat running, with some JSP applications in it. Now, I’ve received a third party app from a developer shop to be eventually deployed. It came as an archive called "jboss7.tar" which, it seems, contained a whole standalone Web server. Once I’ve followed their instructions and run the designated shell script, it would start a server that would listen on port 8081, and app pages are being served up. Still, this strikes me as an inelegant setup. Why run two Web servers side by side, both of them Java-enabled? Also, the manual startup of the standalone app, I don't like that either. The real question is – can I take the user-provided portions from the said archive and somehow plug it under the existing Tomcat instance? It looks like the user code is packaged into files with .war extension, I can see them under /var/jboss7/standalone/deployments.
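
    Whether that works depends on how much the app relies on JBoss-specific Java EE services that plain Tomcat does not provide, but a hedged first experiment is cheap; the Tomcat paths are assumptions:

        cp /var/jboss7/standalone/deployments/*.war /var/lib/tomcat6/webapps/
        tail -f /var/lib/tomcat6/logs/catalina.out     # watch for missing-class or JNDI errors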

    Read the article

  • Viewing zip archive contents using 'less' on OS X.

    - by multihead
    I couldn't help but notice that the 'less' program on all of the recent Linux distributions I've used (Ubuntu and Gentoo in this case) lets me view the contents of ZIP and TAR archives, while the install of 'less' that I have on OS X (and Solaris) instead produces a "foo.zip may be a binary file. See it anyway?" prompt and then spits out the raw binary data instead of a nice file structure listing. Google has not produced much in the way of helpful results - it's tricky to search for 'less' in this context. I downloaded and built the latest version from greenwoodsoftware.com, but even it refuses to show the contents of these archives. I didn't come across any related configure/build options either. Any ideas? Thanks!
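
    On those Linux distributions the archive listing doesn't come from less itself but from an input preprocessor (lesspipe) wired in through the LESSOPEN environment variable; a hedged sketch of enabling the same on OS X, assuming a lesspipe.sh (from MacPorts/Homebrew, or copied from a Linux box) is on the PATH:

        export LESSOPEN="|lesspipe.sh %s"
        less foo.zip       # now shows the archive listing instead of raw binary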

    Read the article

  • Using the Dropbox API instead of an FTP server

    - by Somebody still uses you MS-DOS
    This is a small application scenario. Usually, when you have to back up the source code/database on your server, you use a second FTP server, a cronjob to tar.gz your db dumps and source files, and send this file to your FTP server from your application server. Dropbox created an API to use its infrastructure. Since they provide 2 GB for free accounts, I thought about uploading to it instead of an FTP server. So, if you do some freelance work, you can create a free account for each client and use this approach, maybe encrypting the files you send. You even gain a revision for each sent file, like a revision control system, for free, for the last 30 days. What do you think of this approach? Is it possible? And, more importantly: what are the security risks involved? (That's why I'm asking this on Server Fault, since the sysadmin point of view will be more accurate.) Thanks!
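
    On the security side, one hedged mitigation is to encrypt on the server before anything is uploaded, so the Dropbox account only ever holds ciphertext; dropbox_upload.py below is a stand-in for whatever API client is used, not a real tool name:

        tar czf - /var/www /tmp/db_dump.sql | gpg --symmetric --cipher-algo AES256 -o /tmp/backup.tar.gz.gpg
        python dropbox_upload.py /tmp/backup.tar.gz.gpg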

    Read the article

  • find directories in the current directory, older than 5 days and archive them

    - by user197284
    This is a basic question. I need to find folders in the current working directory (not recursively) and, if they are older than 5 days, archive them. zip or tar.gz is fine. I can find the folders with the following command: find ./ -maxdepth 1 -type d -mtime +5 And I know I can pass the output of find to xargs, but I do not know how to archive each one with its folder name intact - that is, the directory test1 should be archived to test1.zip and the directory "test2" should be archived to "test2.zip". Any input is welcome. Regards
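
    A hedged sketch that keeps each directory's name in its archive (a while-read loop instead of xargs, so the name can be reused for the output file):

        find . -maxdepth 1 -mindepth 1 -type d -mtime +5 -print0 |
            while IFS= read -r -d '' dir; do
                tar czf "${dir#./}.tar.gz" "$dir"
            done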

    Read the article

  • node.js on CentOS box is at v0.6.18, yum doesn't update or upgrade it. Why?

    - by ariestav
    I'm currently working with a CentOS box that has a version of Node installed; when I do nodejs -v I get v0.6.18. But I noticed on the nodejs.org website that the latest release is 0.8.12, so I do sudo yum update nodejs and get: Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: centos-mirror.jchost.net * epel: fedora-epel.mirror.lstn.net * extras: centos.mirror.lstn.net * updates: centos.mirror.lstn.net Setting up Update Process No Packages marked for Update What's the deal? Why doesn't yum find the latest version of Node? Do I have to download the .tar.gz from nodejs.org and install it that way?
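
    yum can only offer what the configured repositories carry, and EPEL for that CentOS release stops at the 0.6.x series, so there is nothing newer for it to update to. A hedged sketch of building the nodejs.org tarball instead (the URL follows the question's version and the download pattern nodejs.org used at the time):

        wget http://nodejs.org/dist/v0.8.12/node-v0.8.12.tar.gz
        tar xzf node-v0.8.12.tar.gz && cd node-v0.8.12
        ./configure && make && sudo make install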

    Read the article

  • What's the best way to install software in Ubuntu?

    - by the0ther
    I'm new to Ubuntu and have been away from Linux for a while. I'm used to Windows and find this tedious on Linux, but I want to give it a shot. My tendency is to prefer GUI tools over the command line, and Ubuntu is a distro that seems to cater to usability. I note it is based somewhat on apt-get, which I've heard good things about. What's the best practice for installing apps on Ubuntu? Should I prefer to try my options in this order: Synaptic Package Manager, apt-get on the command line, .tar.gz files (old school)?
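
    For reference, the command-line flow that sits behind both Synaptic and apt-get looks like this (the package name is just an example):

        sudo apt-get update
        apt-cache search "video player"    # find a package without the GUI
        sudo apt-get install vlc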

    Read the article

  • Copy any file with a specific file extension in subfolders into a folder

    - by Onyxius
    I found a script on here that uses 7-Zip to extract all the archives in all the sub-folders of a specific folder and put each one in its own folder, using the script below. What I need is to add to it (or maybe use another script if I have to) so I can specify where I want those files to go, instead of putting them in their own folder within the folder. I don't know how to do this and hope someone can help. Thanks for the help @echo on FOR /D /r %%F in ("*") DO ( pushd %CD% cd %%F FOR %%X in (*.rar *.zip *.tar) DO ( "C:\Program Files\7-zip\7z.exe" x -o"%%~nX" "%%X" ) popd )

    Read the article

  • Repository Spec file

    - by ahmadfrompk
    I have the source of some web files and I need to make an RPM for it. I have placed my source in the SOURCES folder and use the following spec file, but it creates a noarch RPM of only 2 MB while my source is larger than 2 MB, and it does not include the files either. I think I have a problem in the spec file. Summary: my_project rpm script package Name: my_project Version: 1 Release: 1 Source0: my_project-1.tar.gz License: GPL Group: MyJunk BuildArch: noarch BuildRoot: %{_tmppath}/%{name}-buildroot %description Make some relevant package description here %prep %setup -q %build %install install -m 0755 -d $RPM_BUILD_ROOT/opt/my_project %clean rm -rf $RPM_BUILD_ROOT %post echo " " echo "This will display after rpm installs the package!" %files %dir /opt/my_project
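
    A likely culprit: the %install section only creates an empty directory and %files only declares that directory, so no payload ever reaches the package. A hedged sketch of the missing pieces, keeping /opt/my_project from the spec; the cp line is an assumption about where the web files sit in the unpacked source:

        %install
        rm -rf $RPM_BUILD_ROOT
        install -m 0755 -d $RPM_BUILD_ROOT/opt/my_project
        cp -a * $RPM_BUILD_ROOT/opt/my_project/

        %files
        %defattr(-,root,root,-)
        /opt/my_project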

    Read the article
