Search Results

Search found 2280 results on 92 pages for 'tmp'.

Page 59/92 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • How to use the second volume device of Amazon EC2

    - by Khoyendra Pande
    I have two volumes on an Amazon EC2 instance: the default 1 GiB volume, which is full, and a second 9 GiB volume that I now want to use. Running cat /proc/partitions gives:

        major minor  #blocks  name
        202     1    1048576  xvda1
        202    80    9437184  xvdf

    Then I ran mkfs.ext3 -F /dev/sdf, which fails with:

        mkfs.ext3: No such file or directory while trying to determine filesystem size

    Then I ran df and got:

        Filesystem   1K-blocks    Used Available Use% Mounted on
        /dev/xvda1     1032088 1031280         0 100% /
        tmpfs           313160       8    313152   1% /lib/init/rw
        udev            297800      24    297776   1% /dev
        tmpfs           313160       4    313156   1% /dev/shm
        overflow          1024      32       992   4% /tmp

    So I am still unable to use the 9 GiB volume. I have confirmed that both volumes are attached: i-7e4fb41c:/dev/sda1 (attached) and i-7e4fb41c:/dev/sdf (attached), but only sda1 is in use. Does anyone know how I can use the second volume (sdf)? Thanks.
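
    The kernel has renamed the attached device: the console calls it /dev/sdf, but /proc/partitions shows it as xvdf, so the filesystem tools must use the xvd name. A minimal sketch of formatting and mounting it (assuming the device really is the empty 9 GiB volume; double-check before running mkfs, since it destroys any existing data):

        # format the second volume using the kernel's device name
        mkfs.ext3 /dev/xvdf
        # create a mount point and mount it
        mkdir -p /vol
        mount /dev/xvdf /vol
        # optional: add it to /etc/fstab so it survives reboots
        echo '/dev/xvdf /vol ext3 defaults 0 2' >> /etc/fstab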

    Read the article

  • Cannot install Pecl (Imagick) extension on Centos server - autoconf missing

    - by Stevo
    I'm trying to install the PECL extension Imagick on a CentOS server, but I'm getting an error about autoconf. Autoconf is installed, as are make and gcc, but it's complaining about the path:

        [root@server ~]# pecl install imagick
        downloading imagick-3.0.1.tgz ...
        Starting to download imagick-3.0.1.tgz (93,920 bytes)
        .....................done: 93,920 bytes
        13 source files, building
        running: phpize
        Configuring for:
        PHP Api Version:         20090626
        Zend Module Api No:      20090626
        Zend Extension Api No:   220090626
        /usr/bin/phpize: /var/tmp/imagick/build/shtool: /bin/sh: bad interpreter: Permission denied
        Cannot find autoconf. Please check your autoconf installation and the
        $PHP_AUTOCONF environment variable. Then, rerun this script.
        ERROR: `phpize' failed

    What should I do?
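
    The "bad interpreter: Permission denied" line is the real clue: the build is unpacked under /var/tmp, and if that filesystem is mounted noexec the build scripts cannot execute, which then surfaces as the misleading autoconf error. A hedged sketch, assuming /var/tmp (or /tmp) is mounted noexec (the /root/tmp path is just an example):

        # check for a noexec mount on the temp filesystems
        mount | grep -E '/var/tmp|/tmp'
        # point PECL's build directory somewhere executable
        mkdir -p /root/tmp
        pecl config-set temp_dir /root/tmp
        pecl install imagick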

    Read the article

  • Does cpio produce platform-dependent archives?

    - by TiCL
    I made a cpio archive with the following command on Solaris 11 (SPARC):

        find . | cpio -ov > /tmp/myarchive.cpio

    I copied it to an Intel-based Solaris 11 machine and tried to extract it with:

        cpio -icvdu < myarchive.cpio

    It gives me the following error:

        cpio: Not a cpio file, bad header.
        1 errors

    The MD5 hashes match, and I can extract the archive on another SPARC machine. My question: does cpio produce platform-dependent output? Is there any way to convert it? I cannot use tar at the moment, because the directory I am archiving has long symbolic links that are skipped by the tar command.
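
    Without a format flag, cpio -o writes its binary header format, which is byte-order dependent, so an archive made on big-endian SPARC can be unreadable on x86. The -c flag selects the portable ASCII header format instead. A sketch, assuming the archive can be recreated on the SPARC source machine:

        # create the archive with portable ASCII headers
        find . | cpio -ocv > /tmp/myarchive.cpio
        # extract on the Intel machine with the matching flag
        cpio -icvdu < /tmp/myarchive.cpio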

    Read the article

  • Kill xserver from command line (init 3/5 does not work)

    - by John Smith
    I'm running Linux Mint 10, although I've had this same issue with other variants of Linux. I've been told, and found while researching, that if the X server hangs or otherwise errors, one can drop to a root prompt, usually at another tty, and execute init 3 (to drop to the non-graphical multi-user runlevel) and then init 5 to return to the default graphical session. Needless to say, I've tried this before in multiple configurations on multiple machines to no avail. The only feedback I receive from executing those two commands is a listing of VMware services (from a kernel module) that are stopped and then restarted. Note: if I run startx (either before or after init 3), I am told that the X server is still running and that I should remove /tmp/.X0-lock. Having tried that, it removes that error message, but startx then claims that the X server cannot be attached as another instance is running. How do I kill the X server completely? Can I killall some process name?
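
    On distributions built on Upstart rather than classic SysV runlevels (which includes Mint 10's Ubuntu base), init 3 and init 5 do not stop or start the display manager, which is why nothing happens. Stopping the display manager service, or killing the X process directly, is the reliable route. A hedged sketch (the service name varies by setup: gdm, mdm, lightdm, kdm):

        # stop the display manager service
        service gdm stop
        # or kill the X server process directly
        pkill -9 Xorg
        # clean up the stale lock file if one remains
        rm -f /tmp/.X0-lock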

    Read the article

  • Suphp connection reset by peer in error logs

    - by trante
    In my Linux server's logs I see this record nearly every 5 minutes. I haven't been able to find the reason for two weeks, and I would be very happy if you could recommend a way to diagnose the problem. It appears in the error_log file. I use PHP 5.3.8 and LiteSpeed.

        2012-09-03 16:01:28.399 [INFO] [95.7.223.91:63814-0#APVH_example.com]
        connection to [/tmp/lshttpd/APVH_example.com_Suphp.sock.781] on request #151,
        confirmed, 0, associated process: 845244, running: 0,
        error: Connection reset by peer!
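
    The strict 5-minute cadence suggests the triggering requests are scheduled rather than organic traffic. A first diagnostic step (a sketch, not a definitive fix; the access log path is an example for a typical LiteSpeed install) is to correlate the timestamps with cron activity and with the access log:

        # what runs every 5 minutes?
        crontab -l
        grep CRON /var/log/syslog | tail      # /var/log/cron on some distros
        # which requests arrive at those exact timestamps?
        grep '03/Sep/2012:16:01' /usr/local/lsws/logs/access.log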

    Read the article

  • MediaTemple Django Bad Gateway

    - by Eeyore
    I have a site running on a GS server at MediaTemple, a Django/PostgreSQL setup. For some reason, from time to time I get a Bad Gateway error and I can't figure out what's causing it. What can cause this error? What else can I do to find the cause of the problem?

        url.access-deny = ( "~", ".inc" )
        fastcgi.server = (
            "/main.fcgi" => (
                "main" => (
                    "socket" => "/var/tmp/" + appname + ".sock", # don't change this
                    "check-local" => "disable",
                )
            )
        )
        alias.url = (
            "/media/"  => "/home/xxx/data/python/django/django/contrib/admin/media/",
            "/static/" => "/home/xxx/containers/django/site/static/",
        )
        url.rewrite-once = (
            "^(/media.*)$" => "$1",
            "^(/static.*)$" => "$1",
            "^/favicon\.ico$" => "/media/favicon.ico",
            "^(/.*)$" => "/main.fcgi$1",
        )
        server.error-handler-404 = "/main.fcgi"
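
    With lighttpd in front, Bad Gateway almost always means the FastCGI backend behind the socket has died or stopped accepting connections; on a memory-capped GS container, the Django FCGI process being killed is a plausible culprit, and restarting it usually clears the error. A hedged diagnostic sketch (paths are examples for this setup, not certainties):

        # is the backend process alive?
        ps aux | grep '[m]ain.fcgi'
        # does the socket still exist?
        ls -l /var/tmp/*.sock
        # watch lighttpd's error log while reproducing the error
        tail -f ~/logs/error.log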

    Read the article

  • Puppet templates and undefined/nil variables

    - by larsks
    I often want to include default values in Puppet templates. I was hoping that, given a class like this:

        class myclass ($a_variable = undef) {
          file { '/tmp/myfile':
            content => template('myclass/myfile.erb'),
          }
        }

    I could make a template like this:

        a_variable = <%= a_variable || "a default value" %>

    Unfortunately, undef in Puppet doesn't translate to a Ruby nil value in the context of the template, so this doesn't actually work. What is the canonical way of handling default values in Puppet templates? I can set the default value to an empty string and then use the empty? test...

        a_variable = <%= a_variable.empty? ? "a default value" : a_variable %>

    ...but that seems a little clunky.
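
    The simplest workaround is to move the default out of the template and into the class parameter itself, so the template never has to distinguish undef from nil. A sketch (same class, default supplied in Puppet rather than in ERB):

        class myclass ($a_variable = 'a default value') {
          file { '/tmp/myfile':
            content => template('myclass/myfile.erb'),
          }
        }

        # myfile.erb then reduces to:
        # a_variable = <%= a_variable %>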

    Read the article

  • Solaris mounting partitions

    - by Benco
    I'm trying to mount a partition in Solaris 10...

        bash-3.00# mount /dev/dsk/c0t0d0s3 /data
        mount: /dev/dsk/c0t0d0s3 is already mounted or /data is busy

    As far as I know, c0t0d0s3 isn't already mounted elsewhere, so what's really going on here? From /etc/mnttab:

        /dev/dsk/c1t0d0s0  /  ufs  rw,intr,largefiles,logging,xattr,onerror=panic,dev=780000  1285811136
        /devices  /devices  devfs  dev=4840000  1285811125
        ctfs  /system/contract  ctfs  dev=48c0001  1285811125
        proc  /proc  proc  dev=4880000  1285811125
        mnttab  /etc/mnttab  mntfs  dev=4900001  1285811125
        swap  /etc/svc/volatile  tmpfs  xattr,dev=4940001  1285811125
        objfs  /system/object  objfs  dev=4980001  1285811125
        sharefs  /etc/dfs/sharetab  sharefs  dev=49c0001  1285811125
        /usr/lib/libc/libc_hwcap1.so.1  /lib/libc.so.1  lofs  dev=780000  1285811131
        fd  /dev/fd  fd  rw,dev=4b40001  1285811136
        swap  /tmp  tmpfs  xattr,dev=4940002  1285811137
        swap  /var/run  tmpfs  xattr,dev=4940003  1285811137
        -hosts  /net  autofs  nosuid,indirect,ignore,nobrowse,dev=4c00001  1285811148
        auto_home  /home  autofs  indirect,ignore,nobrowse,dev=4c00002  1285811148
        cordb:vold(pid530)  /vol  nfs  ignore,noquota,dev=4bc0001  1285811149

    I suspect the problem is not related to the mount point, but rather to the disk slice I'm trying to mount:

        bash-3.00# newfs -v /dev/dsk/c0t0d0s3
        /dev/rdsk/c0t0d0s3: Device busy
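
    "Device busy" from both mount and newfs means something other than a mount is holding the slice open. On Solaris 10 the usual suspects are the slice being used as swap, being the dump device, or being part of an SVM metadevice. A sketch of standard checks:

        swap -l                       # is c0t0d0s3 listed as a swap device?
        dumpadm                       # is it the configured dump device?
        metastat -p 2>/dev/null       # is it inside an SVM metadevice?
        fuser /dev/dsk/c0t0d0s3       # is any process holding the raw slice open?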

    Read the article

  • centos 100% disk full - How to remove log files, history, etc?

    - by kopeklan
    mysqld won't start because the disk is full:

        101221 14:06:50 [ERROR] /usr/libexec/mysqld: Error writing file '/var/run/mysqld/mysqld.pid' (Errcode: 28)
        101221 14:06:50 [ERROR] Can't start server: can't create PID file: No space left on device

    Running df -h:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda2              16G  3.2G   12G  23% /
        /dev/sda5             4.8G  4.6G     0 100% /var
        /dev/sda3             430G  855M  407G   1% /home
        /dev/sda1              76M   24M   49M  33% /boot
        tmpfs                 956M     0  956M   0% /dev/shm

    du -sh * in /var:

        12K   account
        56M   cache
        24K   db
        32K   empty
        8.0K  games
        1.5G  lib
        8.0K  local
        32K   lock
        221M  log
        16K   lost+found
        0     mail
        24K   named
        8.0K  nis
        8.0K  opt
        8.0K  preserve
        8.0K  racoon
        292K  run
        70M   spool
        8.0K  tmp
        76K   webmin
        2.6G  www
        20K   yp

    The website files live in /var/www, on /dev/sda5. Since this is the first time this has happened, I have no idea which files to remove, other than moving /var/www to another partition. And one more thing: what is the right way to remove log files, history, and so on from /dev/sda5?
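
    From the du output, the two big consumers are /var/www (2.6G) and /var/lib (1.5G); logs (221M) and cache (56M) are secondary. A cautious cleanup sketch (verify each path before deleting anything):

        du -sk /var/log/* | sort -n       # find the largest logs first
        > /var/log/messages                # truncate a busy log in place (do not rm open log files)
        yum clean all                      # empty /var/cache/yum
        logrotate -f /etc/logrotate.conf   # force rotation of the rest

    Given this layout, moving /var/www onto the mostly empty /home partition and symlinking (or bind-mounting) it back is also a reasonable permanent fix.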

    Read the article

  • rsync doesn't use delta transfer on first run

    - by ockzon
    I'm trying to synchronize a large local directory (with a batch file using rsync 3.0.7 on Cygwin, Windows 7 x64; 30k files, 200 GB) to a remote server (Debian x64 with kernel 2.6, rsyncd 3.0.7) over a slow internet connection (90 kbyte/s upload). I know almost all the files are identical, and I verified that using md5sum locally and remotely. However, when executing rsync from my local machine, every file gets transferred completely the first time. When I terminate the batch file after a few transfers and run it again, the already-transferred files are skipped. But as soon as it gets to a file not yet transferred, it uploads the file as a whole again instead of noticing that the checksum is the same locally and remotely. The batch file calling rsync looks like this (backslashes and line breaks added here for readability):

        c:\cygwin\bin\rsync.exe --verbose --human-readable --progress --stats \
            --recursive --ignore-times --password-file pwd.txt \
            /cygdrive/d/ftp/data/ \
            rsync://[email protected]:33400/data/ | \
            c:\cygwin\bin\tee.exe --append rsync.log

    I experimented with the following parameters in varying combinations, but that didn't help either:

        --checksum
        --partial --partial-dir=/tmp/.rsync-partial
        --compress
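
    This is expected behavior rather than a bug: rsync's delta-transfer algorithm compares a source file against the file of the same name on the destination, so a file that does not yet exist remotely has nothing to delta against and must be sent whole. The md5sum check only proves the content exists remotely under some other, already-transferred name, which rsync does not exploit by default. A hedged sketch of options that can help here:

        # --fuzzy lets the receiver use a similarly named file in the same
        # directory as a delta basis; --ignore-times should be dropped, since
        # it forces retransfer even of identical files
        c:\cygwin\bin\rsync.exe --verbose --recursive --fuzzy --partial \
            --password-file pwd.txt \
            /cygdrive/d/ftp/data/ \
            rsync://[email protected]:33400/data/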

    Read the article

  • Identify "Composite Document File"

    - by Steven
    In a folder containing several PowerPoint presentations and spreadsheets, I discovered the following file:

        Name: ppt115.tmp
        Size: 160 MB
        Meta: no EXIF or other metadata
        Type: (as identified by the cygwin/Linux program 'file')
              Composite Document File V2 Document, No summary info

    Notes: the filename does not correspond to the other files in the directory. Neither MS PowerPoint nor Excel can open the file; MS Word will only attempt to recover text. Please help me identify this file. Is it just a temporary file that I can safely remove?
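
    "Composite Document File V2" is the OLE2/CDF container used by pre-2007 Office formats, and a ppt-prefixed .tmp file of that type is most likely a PowerPoint working/autosave copy left behind after a crash. One way to peek inside without Office (a sketch; 7-Zip can list the streams of an OLE compound file):

        # stream names like "PowerPoint Document" would confirm the origin
        7z l ppt115.tmp
        # or scrape readable text for clues
        strings ppt115.tmp | head -50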

    Read the article

  • Create a log for an encrypted tar

    - by magiza83
    I want to create an encrypted tar, but I also want a log of what tar has compressed. I'm using the following command:

        tar -cvvf - --files-from=/root/backup.cfg | openssl des3 -salt -k backuppass | dd of=/root/tmp/back.encrypted

    But I need to capture a log of tar's verbose output. I don't know how to get it, because if I redirect tar's output inside the command, the openssl result is not correct. I've also checked the tar manual hoping to find an option to write the listing to a file, but I found nothing. Any help? Thanks and regards.
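
    With -f -, tar writes the archive itself to stdout, so the verbose file listing goes to stderr; the log can therefore be captured by redirecting stderr without disturbing the pipeline. A minimal sketch (log path is an example):

        tar -cvvf - --files-from=/root/backup.cfg 2>/root/tmp/back.log \
          | openssl des3 -salt -k backuppass \
          | dd of=/root/tmp/back.encrypted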

    Read the article

  • WebSVN accept untrusted HTTPS certificate

    - by Laurent
    I am using WebSVN with a remote repository. This repository uses the https protocol. After having configured WebSVN, I get on the WebSVN web page:

        svn --non-interactive --config-dir /tmp list --xml
          --username '***' --password '***' 'https://scm.gforge.....'
        OPTIONS of 'https://scm.gforge.....':
          Server certificate verification failed: issuer is not trusted

    I don't know how to tell WebSVN to run the svn command so that it accepts and stores the certificate. Does anyone know how to do it?

    UPDATE: It works! To keep things well organized, I updated the WebSVN config file to relocate the Subversion config directory to /etc/subversion, which is the default path on Debian:

        $config->setSvnConfigDir('/etc/subversion');

    In /etc/subversion/servers I created a group and associated the certificate to trust:

        [groups]
        my_repo = my.repo.url.to.trust

        [global]
        ssl-trust-default-ca = true
        store-plaintext-passwords = no

        [my_repo]
        ssl-authority-files = /etc/apache2/ssl/my.repo.url.to.trust.crt

    Read the article

  • Email is not sending when the script is running by CRON

    - by Adam Blok
    I wrote a simple backup bash script, and at the end it sends me an email saying the backup is ready. Everything works perfectly when I run this script from a terminal (as root), but when the script is run by cron, the email is not sent. :-/

        #!/bin/sh

        filename=$(date +%d-%m-%Y)
        backup_dir="/mnt/backup/"
        email_from_name="BACKUP"
        email_to="my@email"
        email_subject="Backup is ready"
        email_body_file="/tmp/backup-email-body.txt"

        tar czf "$backup_dir$filename.tgz" "/home/www"

        echo "Subject: $email_subject" > $email_body_file
        ls $backup_dir -sh >> $email_body_file
        sendmail -F $email_from_name -t $email_to < $email_body_file
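
    Cron runs jobs with a minimal PATH (typically /usr/bin:/bin), and sendmail usually lives in /usr/sbin, so the last line silently fails under cron while working in a root shell. A hedged fix, assuming the standard sendmail location (check with "which sendmail"):

        # either extend PATH near the top of the script...
        PATH=/usr/sbin:/usr/bin:/bin
        # ...or call sendmail by its absolute path
        /usr/sbin/sendmail -F "$email_from_name" -t "$email_to" < "$email_body_file"

    Cron also mails a job's error output to the job owner, so checking root's local mailbox should confirm the diagnosis.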

    Read the article

  • After converting a file from Unix to DOS it has empty lines. How do I handle this?

    - by user2814717
    After converting a file using the unix2dos command, it has some empty lines. Please help me: how do I handle this? I tried to delete the empty lines as follows, but it didn't work:

        $ sed '/^$/d' /tmp/data.txt

    This is the source data before using unix2dos:

        ID  NAME    DATE
        1   BALA    09/23/2013
        2   KRISHH  09/24/2013
        3   billy   09/24/2013

    After using unix2dos it comes out with an empty line between the first and second records, and they may appear between other rows of the data as well. Thanks.
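
    A likely cause (an assumption, but the classic one): the file already contained carriage returns, so unix2dos produced \r\r\n sequences, which many tools display as an extra blank line. The plain sed '/^$/d' fails because those "empty" lines still contain a \r. A sketch, assuming GNU sed:

        # check what the line endings actually are
        od -c /tmp/data.txt | head
        # normalize back to plain Unix endings, then convert exactly once
        sed -i 's/\r//g' /tmp/data.txt
        unix2dos /tmp/data.txt
        # or just delete lines that contain only a carriage return
        sed -i '/^\r\?$/d' /tmp/data.txt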

    Read the article

  • Installing Java 1.5 on Ubuntu?

    - by StackedCrooked
    I already have Java 1.6, but I need to test something with 1.5. I have downloaded the .bin file from http://java.sun.com/javase/downloads/index_jdk5.jsp using the Sun Download Manager. Now I want to create a deb file from this bin file:

        $ fakeroot make-jpkg java_ee_sdk-5_01-linux.bin
        Creating temporary directory: /tmp/make-jpkg.Zpm1Y7LbZ0
        Loading plugins: blackdown-j2re.sh blackdown-j2sdk.sh common.sh ibm-j2re.sh
        ibm-j2sdk.sh j2re.sh j2sdk-doc.sh j2sdk.sh j2se.sh sun-j2re.sh
        sun-j2sdk-doc.sh sun-j2sdk.sh
        Detected Debian build architecture: i386
        Detected Debian GNU type: i486-linux-gnu
        No matching plugin was found.
        Removing temporary directory: done

    How can I fix the "No matching plugin was found." error?

    Update: I downloaded jdk-1_5_0_22-linux-amd64.bin from the archive page and ran the Linux installer; it works fine. (The original file was the Java EE SDK bundle, which make-jpkg evidently has no plugin for; the plain JDK installer is what it expects.)

    Read the article

  • curl XPUT returning HTTP 500 error message

    - by pradeepchhetri
    I have added the following changes to my nginx configuration:

        server {
            listen 8080;
            root /usr/share/nginx/www;
            client_body_temp_path /tmp/;
            dav_methods PUT DELETE MKCOL COPY MOVE;
            create_full_put_path on;
            dav_access user:rw group:rw all:rw;
        }

    nginx is built with --with-http_dav_module. But when I run the command:

        $ curl -XPUT http://172.16.31.127:8080/test.html -d 'test'

    I get a 500 Internal Server Error. Can anyone help me solve this?
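
    With the DAV module, a 500 on PUT is almost always a filesystem permission problem: the worker process must be able to write to both the client body temp path and the destination directory, and the error log will state the precise failure. A debugging sketch (log path and worker user are typical defaults, not certainties):

        # the error log spells out the exact failure
        tail -n 20 /var/log/nginx/error.log
        # the worker user (often www-data or nginx) needs write access to the docroot
        chown -R www-data:www-data /usr/share/nginx/www
        # and the client body temp path must be writable too
        ls -ld /tmp/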

    Read the article

  • Can't ssh from Ubuntu to RHEL or CentOS

    - by Alex N
    I am trying to set up public-key-based authentication for two different boxes, one RHEL and one CentOS. I am having the same issue with both, where ssh fails and falls back to password-based authentication. The error that seems to be causing this is quite obscure:

        debug1: Unspecified GSS failure.  Minor code may provide more information
        Credentials cache file '/tmp/krb5cc_1000' not found

    The boxes are completely unrelated. I have my public key in the .ssh/authorized_keys file on both boxes, and all permissions are checked and good (700 for .ssh and 600 for its contents). I have a bunch of other servers running various flavors (Gentoo, Fedora, FreeBSD, etc.) where public-key ssh works just fine, but CentOS and RHEL are giving me this for some reason. :( Has anyone experienced this before? I am not even sure how to analyze this issue further.
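
    The GSSAPI/Kerberos message is usually a red herring: it just means the gssapi-with-mic method was tried and skipped before publickey. On RHEL/CentOS, the classic cause of silently rejected keys is an SELinux label problem on the home directory, with the real reason logged server-side. A sketch of checks:

        # server side: restore SELinux contexts on the ssh files
        restorecon -R -v ~/.ssh
        # and watch the auth log while a client connects
        tail -f /var/log/secure
        # client side: full verbosity shows which methods were offered and refused
        ssh -vvv user@host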

    Read the article

  • How to convert non key, value java arguments to applet params? (args like -Xmx64m)

    - by bwizzy
    I'm trying to use xvpviewer (based on TightVNC) to VNC into my VMs running on Citrix XenServer. There are a couple of caveats around trusting the certificate from XenServer, which I've got working. Essentially, I'm trying to convert the java command below (which works on the command line to launch VncViewer) for use in an applet that can be accessed via an HTML page:

        java -Djavax.net.ssl.trustStore=/tmp/kimo.jks -Xmx64m -jar VncViewer.jar \
            HOST "/console?ref=OpaqueRef:141f4204-2240-4627-69c6-a0c7d9898e6a&session_id=OpaqueRef:91a483c4-bc40-3bb0-121c-93f2f89acc3c" \
            PORT 443 PROXYHOST1 192.168.0.5 PROXYPORT1 443 \
            SocketFactory "HTTPSConnectSocketFactory"

    I know I can put the HOST, PORT, etc. arguments into param tags for the applet, but I'm not sure how to apply the two initial JVM arguments.
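
    JVM options are not applet parameters, but since Java 6u10 the next-generation plugin honors a special java_arguments param that passes options to the applet's JVM. A hedged sketch of the HTML (the code attribute and dimensions are assumptions; values carried over from the command line):

        <applet archive="VncViewer.jar" code="VncViewer.class" width="800" height="600">
          <param name="java_arguments"
                 value="-Xmx64m -Djavax.net.ssl.trustStore=/tmp/kimo.jks">
          <param name="HOST" value="/console?ref=OpaqueRef:141f4204-2240-4627-69c6-a0c7d9898e6a&amp;session_id=OpaqueRef:91a483c4-bc40-3bb0-121c-93f2f89acc3c">
          <param name="PORT" value="443">
          <param name="PROXYHOST1" value="192.168.0.5">
          <param name="PROXYPORT1" value="443">
          <param name="SocketFactory" value="HTTPSConnectSocketFactory">
        </applet>

    Note that the trustStore path refers to the client machine, so /tmp/kimo.jks would have to exist on every viewer's machine; the certificate may need to be distributed some other way.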

    Read the article

  • Windows 7 Permissions

    - by Scott
    I have an odd problem with a Windows 7 laptop. It's currently a single-user installation, a fresh install on an Asus laptop. I have an svn repo checked out on my second partition, with a directory that I have added to the svn:ignore list because it is for temp files. This specific directory shows as read-only, but I need write access on it for my project to function properly. If I right-click, clear the read-only flag, and apply it recursively, the directory immediately reverts to read-only. I have also modified Apache's service to run as myself, to no avail. I'm stumped... any ideas?
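
    On Windows 7 the read-only checkbox on a directory is largely cosmetic (it marks a "customized" folder and always shows as re-checked), so its reverting does not by itself mean writes are blocked; real write failures come from NTFS ACLs. A hedged check from an elevated prompt (the path is an example):

        rem show the effective ACL on the temp directory
        icacls "D:\repo\tmp"
        rem grant the current user modify rights, recursively
        icacls "D:\repo\tmp" /grant "%USERNAME%":(OI)(CI)M /T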

    Read the article

  • How can I increase space on the Filesystem linux?

    - by xtrimsky
    I am renting a dedicated server with Parallels Plesk on it (which I hate, so I try to use the command line). One filesystem is full; "df -H" prints this:

        Filesystem              Size  Used  Avail Use% Mounted on
        /dev/md1                4.0G  4.0G   361k 100% /
        /dev/mapper/vg00-usr    4.3G  1.4G   3.0G  32% /usr
        /dev/mapper/vg00-var    4.3G  2.8G   1.6G  64% /var
        /dev/mapper/vg00-home   4.3G  4.4M   4.3G   1% /home
        none                    1.1G   24M   1.1G   3% /tmp
        tmpfs                   1.1G     0   1.1G   0% /usr/local/psa/handlers/before-local
        tmpfs                   1.1G     0   1.1G   0% /usr/local/psa/handlers/before-queue
        tmpfs                   1.1G     0   1.1G   0% /usr/local/psa/handlers/before-remote
        tmpfs                   1.1G     0   1.1G   0% /usr/local/psa/handlers/info
        tmpfs                   1.1G     0   1.1G   0% /usr/local/psa/handlers/spool

    The server I'm renting has a 1 TB hard drive. Why are these filesystems so small, and how can I increase my storage? (I'm a beginner with Linux.) Thank you.
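
    The full / filesystem lives on /dev/md1 (a small RAID partition), while /usr, /var and /home are LVM volumes in what looks like a volume group named vg00; on hosts laid out like this, much of the 1 TB is often left unallocated in that volume group. A sketch of how to check and grow a volume (assuming ext3/ext4 and free extents in vg00; run as root and back up first):

        pvs; vgs; lvs                            # how much free space does vg00 still have?
        lvextend -L +50G /dev/mapper/vg00-home   # grow the logical volume
        resize2fs /dev/mapper/vg00-home          # grow the filesystem to match

    The root filesystem itself is on md1, not LVM, so it cannot be grown this way; freeing space there, or moving directories onto the LVM volumes, is the practical fix.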

    Read the article

  • Gitosis installation of public key not working...

    - by user29600
    I've been following this tutorial to install and set up git on Ubuntu Server 10.04, using Windows 7 as a client. After finally figuring out how it works (I executed gitosis-init a bunch of times on the wrong key), I copied the id_rsa.pub file over to the server's /tmp folder and ran it again. Unfortunately, it still doesn't work: when I execute git clone [email protected]:gitosis-admin.git, it asks for gitosis's password rather than my RSA passphrase. I'm assuming this is the same problem this guy is having here... However, after following his instructions (purge git-core and gitosis, manually remove the /srv/gitosis folder, and follow the instructions again, with the proper id_rsa.pub file this time), I'm still having the same issue. Does anyone know what I'm doing wrong? Is there any way to probe for more information that might help in solving this?
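
    Being prompted for the gitosis account's password means sshd never matched the offered key against the gitosis user's authorized_keys. A sketch of checks, assuming the Ubuntu package layout with the gitosis home at /srv/gitosis:

        # re-import the key as the gitosis user
        sudo -H -u gitosis gitosis-init < /tmp/id_rsa.pub
        # the key must appear here, prefixed with a command="gitosis-serve ..." restriction
        sudo cat /srv/gitosis/.ssh/authorized_keys
        # from the Windows client, verbose ssh shows whether the key is even offered
        ssh -v [email protected]

    A common gotcha from Windows clients is a PuTTYgen-generated key pair not corresponding to the OpenSSH id_rsa.pub that was uploaded, so verify the client is actually using the matching private key.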

    Read the article

  • s3cmd run on command line not on cron

    - by Jonar
    Many have said that the problem is with the environment, but I still can't seem to solve it. BTW, I am using Ubuntu 9.10. Logged in as my user, then sudo -s, this command worked:

        s3cmd put file s3://bucket

    Now here is the simple script intended for testing:

        #!/bin/bash
        env > /tmp/cronjob.log
        s3cmd put file s3://bucket

    And the crontab entry (via crontab -e):

        * * * * * /opt/script 2>&1 | logger

    Then tailing the syslog:

        Dec 3 23:22:01 ubuntu CRON[10795]: (root) CMD (/opt/script 2&1 | logger)

    But verifying in S3Fox Organizer shows the file is not uploaded. I tried changing the shebang to #!/bin/sh (no effect), putting the entry in /etc/crontab (no effect), and setting HOME=/home/user (no effect). What other options are there to try, or other ways to debug this problem? Thanks.
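
    Two likely culprits stand out (assumptions, but both consistent with the symptoms): s3cmd reads its credentials from ~/.s3cfg, and under cron HOME may not point at the directory where that file was created; and the relative path "file" resolves against cron's working directory, not the script's. A sketch:

        #!/bin/bash
        # point s3cmd at an explicit config and use absolute paths throughout
        /usr/bin/s3cmd -c /root/.s3cfg put /opt/file s3://bucket >>/tmp/cronjob.log 2>&1

    With the output redirected like this, /tmp/cronjob.log will show s3cmd's actual error instead of the job failing silently.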

    Read the article

  • What are the best settings for lighttpd with 8 GB of RAM?

    - by user39639
    I have a system with 8 GB of RAM and 8 Xeon 3361 cores. What are the best settings for handling simultaneous connections, and what is the maximum? Are settings like these correct?

        server.max-keep-alive-requests = 0
        server.max-keep-alive-idle = 10
        server.max-read-idle = 60
        server.max-write-idle = 60
        server.event-handler = "linux-sysepoll"
        server.max-fds = 2048

        fastcgi.server = ( ".php" =>
            ( "localhost" =>
                (
                    "socket" => "/tmp/php-fastcgi.socket",
                    "bin-path" => "/usr/bin/php-cgi",
                    "max-procs" => 20,
                    "bin-environment" => (
                        "PHP_FCGI_CHILDREN" => "40",
                        "PHP_FCGI_MAX_REQUESTS" => "800"
                    ),
                    "broken-scriptfilename" => "enable"
                )
            )
        )

    Please help me!
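
    One immediate issue in this configuration (a judgment call, but a common one): max-procs = 20 combined with PHP_FCGI_CHILDREN = 40 asks for 800 PHP workers, which at a typical 30-60 MB per process would exhaust 8 GB several times over. A more conservative sketch, assuming a PHP-heavy workload (the exact numbers should be tuned against observed per-process memory):

        # 4 spawners x 16 children = 64 PHP workers, roughly 2-4 GB at typical sizes
        "max-procs" => 4,
        "bin-environment" => (
            "PHP_FCGI_CHILDREN" => "16",
            "PHP_FCGI_MAX_REQUESTS" => "800"
        ),

    Concurrent connections are governed mainly by server.max-fds (plus the matching OS ulimit); with the epoll event handler, several thousand keep-alive connections are feasible well before CPU or RAM become the limit.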

    Read the article
