Search Results

Search found 5597 results on 224 pages for 'sudo rm rf'.

Page 78/224 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • How do I solve this "unexpected '}' syntax error" in my bash script?

    - by WASasquatch
    I have a piece of code that has some serious issues and I was hoping to get it solved soon but no one has offered any help. I thought I'd try some Ubuntu users since this is the OS running the script. mc_addplugin() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running! Please stop the service before adding a plugin." else echo "Paste the URL to the .JAR Plugin..." read JARURL JARNAME=$(basename "$JARURL") if [ -d "$TEMPPLUGINS" ] then as_user "cd $PLUGINSPATH && wget -r -A.jar $JARURL -o temp_plugins/$JARNAME" else as_user "cd $PLUGINSPATH && mkdir $TEMPPLUGINS && wget -r -A.jar $JARURL -o temp_plugins/$JARNAME" fi if [ -f "$TMPDIR/$JARNAME" ] then if [ -f "$PLUGINSPATH/$JARNAME" ] then if `diff $PLUGINSPATH/$JARNAME $TMPDIR/$JARNAME >/dev/null` then echo "You are already running the latest version of $JARNAME." else NOW=`date "+%Y-%m-%d_%Hh%M"` echo "Are you sure you want to overwrite this plugin? [Y/n]" echo "Note: Your old plugin will be moved to the "$TEMPPLUGINS" folder with todays date." select yn in "Yes" "No"; do case $yn in Yes ) as_user "mv $PLUGINSPATH/$JARNAME $TEMPPLUGINS/${JARNAME}_${NOW} && mv $TEMPPLUGINS/$JARNAME $PLUGINSPATH/$JARNAME"; break;; No ) echo "The plugin has not been installed! Removing temporary plugin and exiting..." as_user "rm $TEMPPLUGINS/$JARNAME"; exit;; esac done echo "Would you like to start the $SERVICE now? [Y/n]" select yn in "Yes" "No"; do case $yn in Yes ) mc_start; break;; No ) "$SERVICE not running! To start the service run: /etc/init.d/craftbukkit start"; exit;; esac done fi else echo "Are you sure you want to add this new plugin? [Y/n]" select yn in "Yes" "No"; do case $yn in Yes ) as_user "mv $PLUGINSPATH/$JARNAME $TEMPPLUGINS/${JARNAME}_${NOW} && mv $TEMPPLUGINS/$JARNAME $PLUGINSPATH/$JARNAME"; break;; No ) echo "The plugin has not been installed! Removing temporary plugin and exiting..." as_user "rm $TEMPPLUGINS/$JARNAME"; exit;; esac done echo "Would you like to start the $SERVICE now? [Y/n]?" select yn in "Yes" "No"; do case $yn in Yes ) mc_start; break;; No ) "$SERVICE not running! To start the service run: /etc/init.d/craftbukkit start"; exit;; esac done fi else echo "Failed to download the plugin from the URL you specified!" exit; fi } It throws it at the closing bracket at the end of the function.
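
    A likely culprit, assuming the script is exactly as pasted: the outermost if pgrep ... check never gets a matching fi before the function's closing brace, so bash reports the unexpected '}' there. A minimal sketch of the required nesting (names taken from the question, inner branches elided):

      mc_addplugin() {
        if pgrep -u "$USERNAME" -f "$SERVICE" > /dev/null
        then
          echo "$SERVICE is running! Please stop the service before adding a plugin."
        else
          # ... download the .jar into $TEMPPLUGINS ...
          if [ -f "$TMPDIR/$JARNAME" ]
          then
            : # ... overwrite / install-new branches ...
          else
            echo "Failed to download the plugin from the URL you specified!"
            exit
          fi
        fi   # <-- this fi, closing the pgrep check, is missing in the original
      }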

    Read the article

  • uploadify scriptData problem

    - by elpaso66
    Hi, I'm having problems with scriptData on uploadify, I'm pretty sure the config syntax is fine but whatever I do, scriptData is not passed to the upload script. I tested in both FF and Chrome with flash v. Shockwave Flash 9.0 r31 This is the config: $(document).ready(function() { $('#id_file').uploadify({ 'uploader' : '/media/filebrowser/uploadify/uploadify.swf', 'script' : '/admin/filebrowser/upload_file/', 'scriptData' : {'session_key': 'e1b552afde044bdd188ad51af40cfa8e'}, 'checkScript' : '/admin/filebrowser/check_file/', 'cancelImg' : '/media/filebrowser/uploadify/cancel.png', 'auto' : false, 'folder' : '', 'multi' : true, 'fileDesc' : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;', 'fileExt' : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;', 'sizeLimit' : 10485760, 'scriptAccess' : 'sameDomain', 'queueSizeLimit' : 50, 'simUploadLimit' : 1, 'width' : 300, 'height' : 30, 'hideButton' : false, 'wmode' : 'transparent', translations : { browseButton: 'BROWSE', error: 'An Error occured', completed: 'Completed', replaceFile: 'Do you want to replace the file', unitKb: 'KB', unitMb: 'MB' } }); $('input:submit').click(function(){ $('#id_file').uploadifyUpload(); return false; }); }); I checked that other values (file name) are passed correctly but session_key is not. This is the decorator code from django-filebrowser, you can see it checks for request.POST.get('session_key'), the problem is that request.POST is empty. def flash_login_required(function): """ Decorator to recognize a user by its session. Used for Flash-Uploading. """ def decorator(request, *args, **kwargs): try: engine = __import__(settings.SESSION_ENGINE, {}, {}, ['']) except: import django.contrib.sessions.backends.db engine = django.contrib.sessions.backends.db print request.POST session_data = engine.SessionStore(request.POST.get('session_key')) user_id = session_data['_auth_user_id'] # will return 404 if the session ID does not resolve to a valid user request.user = get_object_or_404(User, pk=user_id) return function(request, *args, **kwargs) return decorator

    Read the article

  • OSS4 on Debian Squeeze

    - by mit
    Hi, I am trying to get OSS4 to work on a Debian Squeeze 64 bit machine with an usb sound adapter. There is no sound from this adapter at the present, although it worked just before on the previous installation. You can see the output of some commands here: $ sudo /etc/init.d/oss4-base restart Stopping Open Sound System: SNDCTL_MIX_EXTINFO: No such device or address done. Starting Open Sound System: done (OSS is already loaded). $ sudo /etc/init.d/oss4-base stop Stopping Open Sound System: SNDCTL_MIX_EXTINFO: No such device or address done. $ sudo /etc/init.d/oss4-base start Starting Open Sound System: done (OSS is already loaded). $ ossinfo Version info: OSS 4.2 (b 2002/201001250441) (0x00040100) GPL Platform: Linux/x86_64 2.6.32-5-xen-amd64 #1 SMP Fri Dec 10 17:41:50 UTC 2010 (pc11) Number of audio devices: 2 Number of audio engines: 2 Number of MIDI devices: 0 Number of mixer devices: 2 Device objects 0: osscore0 OSS core services 1: oss_hdaudio0 ATI HD Audio interrupts=0 (613) HD Audio controller ATI HD Audio Vendor ID 0x10024383 Subvendor ID 0x10192816 Codec 0: Not present 2: oss_usb0 USB audio core services 3: usb0d8c0126-0 USB sound device 4: usb0d8c0126-1 USB sound device 5: usb0d8c0126-2 USB sound device 6: usb0d8c0126-3 USB sound device MIDI devices (/dev/midi*) Mixer devices 0: (USB sound device )(Mixer 0 of device object 3) 1: USB sound device (Mixer 0 of device object 5) Audio devices (USB sound device play /dev/oss/usb0d8c0126-1/pcm0 ) (device index 0) USB sound device play /dev/oss/usb0d8c0126-3/pcm0 (device index 1) Nodes /dev/dsp -> /dev/oss/usb0d8c0126-1/pcm0 /dev/dsp_out -> /dev/oss/usb0d8c0126-1/pcm0 /dev/dsp_mmap -> /dev/oss/usb0d8c0126-1/pcm0 $ osstest Sound subsystem and version: OSS 4.2 (b 2002/201001250441) (0x00040100) Platform: Linux/x86_64 2.6.32-5-xen-amd64 #1 SMP Fri Dec 10 17:41:50 UTC 2010 *** Scanning sound adapter #-1 *** /dev/oss/usb0d8c0126-1/pcm0 (audio engine 0): USB sound device play - Device not present - Skipping *** Scanning sound adapter #-1 *** /dev/oss/usb0d8c0126-3/pcm0 (audio engine 1): USB sound device play - Performing audio playback test... /dev/oss/usb0d8c0126-3/pcm0: No such file or directory Can't open the device *** Some errors were detected during the tests *** $ ossxmix /dev/oss/usb0d8c0126-2/mix0: No such file or directory No mixers could be opened $ ossmix SNDCTL_MIX_EXTINFO: No such device or address ad@pc11:~$ man ossmix ad@pc11:~$ ossmix -a SNDCTL_MIX_EXTINFO: No such device or address ad@pc11:~$ man ossmix ad@pc11:~$ ossmix -D SNDCTL_MIX_EXTINFO: No such device or address ad@pc11:~$ ossmix -D 0 SNDCTL_MIX_EXTINFO: No such device or address ad@pc11:~$ man ossmix ad@pc11:~$ ossxmix /dev/oss/usb0d8c0126-2/mix0: No such file or directory No mixers could be opened How can I make oss sound work? I can add more information if necessary.
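
    Before digging deeper it may be worth forcing OSS4 to re-detect the adapter so the device nodes under /dev/oss are recreated; the helpers below ship with a stock OSS4 install, but treat this as a debugging sketch rather than a guaranteed fix:

      sudo soundoff          # unload the OSS drivers
      sudo ossdetect -v      # re-scan the hardware and rebuild the device list
      sudo soundon           # reload the drivers and recreate the /dev/oss nodes
      ossinfo                # the usb0d8c0126 devices should no longer be "Not present"
      osstest                # repeat the playback test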

    Read the article

  • Network shares do not mount

    - by Alex
    My network shares were mounting fine yesterday.. suddenly they are not. They were mounting fine for the last two weeks or however long since I added them. When I run sudo mount -a I get the following error: topsy@monolyth:~$ sudo mount -a mount error(12): Cannot allocate memory Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) mount error(12): Cannot allocate memory Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) mount error(12): Cannot allocate memory Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) mount error(12): Cannot allocate memory Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) topsy@monolyth:~$ I followed this guide when setting them up: http://ubuntuforums.org/showthread.php?t=288534 So I tried removing them by doing the reverse, and then rebooting, then adding them again and rebooting. Problem persists.
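
    mount.cifs usually logs the reason for error 12 to the kernel log rather than the console, so a hedged first step is to check dmesg and retry one share by hand (server, share and options below are placeholders, not taken from the question). If the shares are served by a Windows machine, the commonly reported cure is adjusting the server-side LanmanServer/memory registry settings rather than anything on the Ubuntu client.

      dmesg | tail -n 20     # look for a "CIFS VFS" line explaining the failure
      # retry a single share verbosely instead of mount -a
      sudo mount -v -t cifs //server/share /mnt/share -o credentials=/root/.smbcredentials,iocharset=utf8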

    Read the article

  • Cannot install virtualbox-ose-guest-dkms on Ubuntu 10.04

    - by mrduongnv
    I'm using Ubuntu 10.04 and kernel: 2.6.38-11-generic-pae. While using VirtualBox, i got this error: Kernel driver not installed (rc=-1908) Please install the virtualbox-ose-dkms package and execute 'modprobe vboxdrv' as root. I tried to execute the command: sudo modprobe vboxdrv and got this error: FATAL: Module vboxdrv not found. After that, i execute this: sudo aptitude install virtualbox-ose-guest-dkms duongnv@duongnv-laptop:~$ sudo aptitude install virtualbox-ose-guest-dkms Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done The following partially installed packages will be configured: virtualbox-ose-guest-dkms 0 packages upgraded, 0 newly installed, 0 to remove and 105 not upgraded. Need to get 0B of archives. After unpacking 0B will be used. Writing extended state information... Done Setting up virtualbox-ose-guest-dkms (3.1.6-dfsg-2ubuntu2) ... Removing old virtualbox-ose-guest-3.1.6 DKMS files... ------------------------------ Deleting module version: 3.1.6 completely from the DKMS tree. ------------------------------ Done. Loading new virtualbox-ose-guest-3.1.6 DKMS files... First Installation: checking all kernels... Building only for 2.6.38-11-generic-pae Building for architecture i686 Building initial module for 2.6.38-11-generic-pae Error! Bad return status for module build on kernel: 2.6.38-11-generic-pae (i686) Consult the make.log in the build directory /var/lib/dkms/virtualbox-ose-guest/3.1.6/build/ for more information. dpkg: error processing virtualbox-ose-guest-dkms (--configure): subprocess installed post-installation script returned error exit status 10 Errors were encountered while processing: virtualbox-ose-guest-dkms E: Sub-process /usr/bin/dpkg returned an error code (1) A package failed to install. Trying to recover: Setting up virtualbox-ose-guest-dkms (3.1.6-dfsg-2ubuntu2) ... Removing old virtualbox-ose-guest-3.1.6 DKMS files... ------------------------------ Deleting module version: 3.1.6 completely from the DKMS tree. ------------------------------ Done. Loading new virtualbox-ose-guest-3.1.6 DKMS files... First Installation: checking all kernels... Building only for 2.6.38-11-generic-pae Building for architecture i686 Building initial module for 2.6.38-11-generic-pae Error! Bad return status for module build on kernel: 2.6.38-11-generic-pae (i686) Consult the make.log in the build directory /var/lib/dkms/virtualbox-ose-guest/3.1.6/build/ for more information. dpkg: error processing virtualbox-ose-guest-dkms (--configure): subprocess installed post-installation script returned error exit status 10 Errors were encountered while processing: virtualbox-ose-guest-dkms Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done
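
    Two things stand out. First, the original error asks for virtualbox-ose-dkms (the host-side vboxdrv module); virtualbox-ose-guest-dkms builds the guest additions, which is a different package. Second, the DKMS build is failing for the backported 2.6.38 kernel, most often because the matching headers are missing (or because the 3.1.6 sources are simply too old for that kernel). A sketch of the first checks, assuming the stock Ubuntu packages and that headers for the running kernel are available:

      sudo aptitude install linux-headers-$(uname -r) virtualbox-ose-dkms
      sudo modprobe vboxdrv
      # if the DKMS build still fails, the real error is in its log:
      cat /var/lib/dkms/virtualbox-ose-guest/3.1.6/build/make.log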

    Read the article

  • Trouble using Upstart to launch Redis as redis user

    - by Chris
    I'm trying to launch redis-server as a user (called redis) via Upstart. My /etc/init/redis-server.conf looks like this: description "redis server" start on runlevel [23] stop on shutdown exec sudo -u redis /usr/local/bin/redis-server /var/lib/redis/redis.conf Looks good, right? I start redis-server using $start redis-server redis-server start/running, process 16808 $redis-cli Could not connect to Redis at 127.0.0.1:6379: Connection refused $ps ax | grep ps 168 16810 tty1 R+ 0:00 ps ax 16811 tty1 S+ 0:00 grep 168 So redis-server definitely isn't running. Let's try executing the Upstart command by hand, shall we? exec sudo -u redis /usr/local/bin/redis-server /var/lib/redis/redis.conf [16852] 19 Jun 10:37:21 # Can't chdir to './': Permission denied Connection to 10.19.2.94 closed. And then I get logged off. I'm at a loss. Any ideas?
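
    The "Can't chdir to './'" message suggests redis-server is starting in a directory the redis user cannot enter (the dir directive in redis.conf defaults to ./, i.e. wherever the process happens to start). A hedged sketch of the exec stanza, assuming /var/lib/redis exists and is owned by the redis user:

      # /etc/init/redis-server.conf: change into a directory redis owns before starting
      exec sudo -u redis /bin/sh -c 'cd /var/lib/redis && exec /usr/local/bin/redis-server /var/lib/redis/redis.conf'

    Setting dir /var/lib/redis inside redis.conf (and keeping daemonize no so Upstart can track the process) achieves the same thing without the wrapper shell.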

    Read the article

  • Problems with WCF reliable session (reliable messaging)

    - by Rob
    Hi, In our WCF application I am trying to configure reliable sessions. Service: <wsHttpBinding> <binding name="BindingStabiHTTP" maxBufferPoolSize="524288" maxReceivedMessageSize="2097152" messageEncoding="Text"> <reliableSession enabled="true" ordered="true" inactivityTimeout="00:10:00"/> <readerQuotas maxDepth="0" maxStringContentLength="0" maxArrayLength="0" maxBytesPerRead="0" maxNameTableCharCount="0" /> </binding> </wsHttpBinding> Client: <wsHttpBinding> <binding name="BindingClientWsHttpStandard" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="true" /> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true" /> </security> </binding> Unfortunately I get an error which is as follows: No signature message parts were specified for messages with the 'http://schemas.xmlsoap.org/ws/2005/02/rm/CreateSequence' action. If I disable the reliableSession on the client I get this message: The action is not supported by this endpoint. Only WS-ReliableMessaging February 2005 messages are processed by this endpoint. So it seems that the server is configured correctly for RM. I cannot find anything valuable about the error I get so I don't know how to fix this. Any ideas what can be wrong? Thank in advance, Rob

    Read the article

  • Error when starting the Subversion Server in uberSVN

    - by dakiquang
    I'm using uberSVN on an Ubuntu server. I can no longer check out or commit source through the Subversion server (domain.com:9880, the Subversion port). I tried to start httpd from the command line with sudo /opt/ubersvn/bin/httpdserverctl start, but I get this error: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName httpd (pid 895) already running I then tried the following: cd /opt/ubersvn/bin sudo ./ubersvncontrol start which gives these errors: Starting SysV Tomcat Using CATALINA_BASE: /opt/ubersvn/tomcat Using CATALINA_HOME: /opt/ubersvn/tomcat Using CATALINA_TMPDIR: /opt/ubersvn/tomcat/temp Using JRE_HOME: /opt/ubersvn/jre Using CLASSPATH: /opt/ubersvn/tomcat/bin/bootstrap.jar Using CATALINA_PID: /opt/ubersvn/data/run/tomcat.pid Existing PID file found during start. Tomcat appears to still be running with PID 943. Start aborted. In the uberSVN web GUI I also tried to start the server, but that fails too. How do I get the Subversion server started?
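
    The Tomcat output points at a stale PID file left over from an unclean shutdown rather than a second running instance (the httpd "fully qualified domain name" line is only a warning). A cautious sketch, using the paths printed above:

      cat /opt/ubersvn/data/run/tomcat.pid        # shows 943 here
      ps -p 943                                   # confirm nothing is actually running with that PID
      sudo rm /opt/ubersvn/data/run/tomcat.pid    # remove the stale PID file only if ps found nothing
      sudo /opt/ubersvn/bin/ubersvncontrol start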

    Read the article

  • Samba: session setup failed: NT_STATUS_LOGON_FAILURE

    - by stivlo
    I tried to set up Samba with "unix password sync", but I still get logon failure. I am running Ubuntu Natty Narwhal. $ smbclient -L localhost Enter stivlo's password: session setup failed: NT_STATUS_LOGON_FAILURE Here is my /etc/samba/smb.conf [global] workgroup = obliquid server string = %h server (Samba, Ubuntu) dns proxy = no log file = /var/log/samba/log.%m max log size = 1000 syslog = 0 panic action = /usr/share/samba/panic-action %d security = user encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = yes map to guest = bad user [www] path = /var/www browsable = yes read only = no create mask = 0755 After modifying I restarted the servers: $ sudo restart smbd $ sudo restart nmbd However I still can't logon with my Unix username and password. Can anyone please help? Thank you in advance!
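
    unix password sync only keeps the Samba and Unix passwords in step once a Samba account already exists; with the tdbsam backend the user still has to be added to Samba's own password database first. A sketch, assuming the Unix account is stivlo:

      sudo smbpasswd -a stivlo        # create the Samba account and set its password
      sudo restart smbd
      smbclient -L localhost -U stivlo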

    Read the article

  • OSX: The item XYZ.txt~ can’t be moved to the Trash because it can’t be deleted

    - by dsg
    I'm trying to delete a file under OSX Lion, but can't. I get the following message: The item XYZ.txt~ can’t be moved to the Trash because it can’t be deleted. Here's what I've tried: select file and press COMMAND + DELETE (I get the message above.) renaming the file in finder (There is no option to rename the file.) sudo ls -a in a terminal (The file does not appear.) sudo rm XYZ.txt~ (I get "No such file or directory".) How do I remove this file? EDIT The file went away after restarting. My guess is that it was a glitch in finder.

    Read the article

  • Issues configuring CUPS print server for Ubuntu Server 9.10

    - by Tone
    I have Ubuntu Server 9.10 installed, want to make it a print server, and am trying to reach the CUPS browser admin page from a Windows client machine. I installed CUPS: sudo apt-get install cups then I edited the /etc/cups/cupsd.conf file and tried several different Listen combinations: Listen 192.168.1.109:631 #ip my router gives it Listen /var/run/cups/cups.sock #already in conf file Listen fileserver:631 #hostname of server Port 631 #listen for all incoming requests on 631? Samba is also installed (which I think is necessary to share the printer out), and finally I added my user to the lpadmin group: sudo adduser tone lpadmin But when I try to navigate to any of the following I get 403 Forbidden: http://fileserver:631/admin http://fileserver:631 http://192.168.1.109:631/admin http://192.168.1.109:631 What did I miss?
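
    By default cupsd only trusts connections from localhost, so remote browsers get 403 Forbidden even when the Listen lines are right; the /admin location also needs remote administration enabled. Rather than hand-editing cupsd.conf, cupsctl can flip the relevant switches; a sketch, assuming a stock 9.10 CUPS install (tighten --remote-any later with Allow directives if the LAN is not trusted):

      sudo cupsctl --remote-admin --remote-any --share-printers
      sudo service cups restart
      # then browse to http://192.168.1.109:631/admin from the Windows client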

    Read the article

  • Operation not permitted when starting Unicorn

    - by fiskeben
    I've created an nginx/Unicorn/Capistrano setup on Ubuntu (Amazon EC2) by mostly following this guide. I believe everything is set up as it should be, but when I start Unicorn I get (a LOT of) this error in the log: E, [2012-09-08T08:57:20.658092 #12356] ERROR -- : Operation not permitted (Errno::EPERM) /home/deployer/apps/bridgekalenderen.no/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/worker.rb:82:in `initgroups' I can see it's related to the user's permissions, but I can't figure out what I've left out. The server starts up fine if I start it with sudo (or rvmsudo, really). The user has sudo rights, I have chmod'ed the app several times so the file permissions should be OK, and the Unicorn socket in /tmp is owned by the deployer user, so that shouldn't be the problem either. Does anybody have a clue where to look?
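
    initgroups is called when Unicorn tries to switch to the user/group named in its configuration, and that switch only works if the master process was started as root (it drops privileges itself), which is why sudo/rvmsudo makes the error go away. Since Capistrano already starts everything as deployer, a quick check is whether the unicorn config still carries a user line (the path below is an assumption about the Capistrano layout):

      grep -n "^ *user" /home/deployer/apps/bridgekalenderen.no/current/config/unicorn.rb
      # either remove/comment that line and keep starting Unicorn as deployer,
      # or keep it and start the master as root so it can drop to that user itself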

    Read the article

  • How to change SMP affinity of an IRQ on Ubuntu domU inside Xen XCP?

    - by Alexander Gladysh
    I'd like to change IRQ SMP affinity for reasons, outlined in this question: CPU0 is swamped with eth1 interrupts But I can't — I see Input/output error when I try to write to /proc/irq/*/smp_affinity. Please point me to the HOWTO on the matter. (A formal reference on /proc/irq/*/ would be cool as well.) Gory details: Note that this is a VM inside an Ubuntu-based Xen XCP host. $ uname -a Linux MYHOST 2.6.38-15-virtual #59-Ubuntu SMP Fri Apr 27 16:40:18 UTC 2012 i686 i686 i386 GNU/Linux $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 11.04 Release: 11.04 Codename: natty $ sudo cat /proc/irq/*/smp_affinity 01 01 01 01 01 80 80 80 80 80 80 40 40 40 40 40 40 20 20 20 20 20 20 10 10 10 10 10 10 08 08 08 08 08 08 04 04 04 04 04 04 02 02 02 02 02 02 01 01 01 01 01 01 Update. The error details: $ N=$(grep -c processor /proc/cpuinfo) $ echo $N 8 $ printf %x $((2**N-1)) ff $ printf %x $((2**N-1)) | sudo tee /proc/irq/*/smp_affinity fftee: /proc/irq/288/smp_affinity: Input/output error tee: /proc/irq/289/smp_affinity: Input/output error tee: /proc/irq/290/smp_affinity: Input/output error tee: /proc/irq/291/smp_affinity: Input/output error tee: /proc/irq/292/smp_affinity: Input/output error tee: /proc/irq/293/smp_affinity: Input/output error tee: /proc/irq/294/smp_affinity: Input/output error tee: /proc/irq/295/smp_affinity: Input/output error tee: /proc/irq/296/smp_affinity: Input/output error tee: /proc/irq/297/smp_affinity: Input/output error tee: /proc/irq/298/smp_affinity: Input/output error tee: /proc/irq/299/smp_affinity: Input/output error tee: /proc/irq/300/smp_affinity: Input/output error tee: /proc/irq/301/smp_affinity: Input/output error tee: /proc/irq/302/smp_affinity: Input/output error tee: /proc/irq/303/smp_affinity: Input/output error tee: /proc/irq/304/smp_affinity: Input/output error tee: /proc/irq/305/smp_affinity: Input/output error tee: /proc/irq/306/smp_affinity: Input/output error tee: /proc/irq/307/smp_affinity: Input/output error tee: /proc/irq/308/smp_affinity: Input/output error tee: /proc/irq/309/smp_affinity: Input/output error tee: /proc/irq/310/smp_affinity: Input/output error tee: /proc/irq/311/smp_affinity: Input/output error tee: /proc/irq/312/smp_affinity: Input/output error tee: /proc/irq/313/smp_affinity: Input/output error tee: /proc/irq/314/smp_affinity: Input/output error tee: /proc/irq/315/smp_affinity: Input/output error tee: /proc/irq/316/smp_affinity: Input/output error tee: /proc/irq/317/smp_affinity: Input/output error tee: /proc/irq/318/smp_affinity: Input/output error tee: /proc/irq/319/smp_affinity: Input/output error tee: /proc/irq/320/smp_affinity: Input/output error tee: /proc/irq/321/smp_affinity: Input/output error tee: /proc/irq/322/smp_affinity: Input/output error tee: /proc/irq/323/smp_affinity: Input/output error tee: /proc/irq/324/smp_affinity: Input/output error tee: /proc/irq/325/smp_affinity: Input/output error tee: /proc/irq/326/smp_affinity: Input/output error tee: /proc/irq/327/smp_affinity: Input/output error tee: /proc/irq/328/smp_affinity: Input/output error tee: /proc/irq/329/smp_affinity: Input/output error tee: /proc/irq/330/smp_affinity: Input/output error tee: /proc/irq/331/smp_affinity: Input/output error tee: /proc/irq/332/smp_affinity: Input/output error tee: /proc/irq/333/smp_affinity: Input/output error tee: /proc/irq/334/smp_affinity: Input/output error tee: /proc/irq/335/smp_affinity: Input/output error Update. 
irqbalance is running: $ sudo service irqbalance status irqbalance start/running, process 560

    Read the article

  • Cronjob as root?

    - by Rob
    I'm having a bit of a problem with cronjobs for backups. I've set up the following in sudo crontab -e (not under personal account): 0 1 * * * /backups/dobackup /backups/dobackup contains this: #!/bin/sh touch ITRAN tar -cvpjf /backups/$(date +%d.%m.%Y)_backup.tar.bz2 --exclude=/backups --exclude=/proc --exclude=/lost+found --exclude=/sys --exclude=/mnt --exclude=/media --exclude=/dev / The backup file is created, but the file ITRAN is not. Also, the backup file is vastly smaller than expected: -rw-r--r-- 1 rjrudman root 371620259 2012-06-21 12:39 21.06.2012_backup.tar.bz2 -rw-r--r-- 1 rjrudman root 1023211449 2012-06-22 18:00 22.06.2012_backup.tar.bz2 -rw-r--r-- 1 rjrudman root 1512785 2012-06-23 01:00 23.06.2012_backup.tar.bz2 -rw-r--r-- 1 rjrudman root 1023272455 2012-06-24 22:41 24.06.2012_backup.tar.bz2 -rw-r--r-- 1 rjrudman root 1514027 2012-06-25 01:00 25.06.2012_backup.tar.bz2 The backups with much larger file sizes are created by manually running sudo /backups/dobackup. It seems the cronjob is failing at some point.. but I have no idea how to debug this issue or where to start. Any ideas? Running ubuntu 10.04
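
    One difference between the manual run and the cron run, assuming the script is exactly as shown: cron does not start the job in /backups, so the relative touch ITRAN lands in root's home directory, and nothing the script prints is captured anywhere. A sketch of both fixes:

      #!/bin/sh
      cd /backups || exit 1          # make the working directory explicit
      touch /backups/ITRAN           # absolute path, so it ends up where expected
      tar -cvpjf /backups/$(date +%d.%m.%Y)_backup.tar.bz2 --exclude=/backups --exclude=/proc --exclude=/lost+found --exclude=/sys --exclude=/mnt --exclude=/media --exclude=/dev /

    Redirecting the job's output in the crontab entry (0 1 * * * /backups/dobackup >> /var/log/dobackup.log 2>&1) should also reveal why the 01:00 archives come out so much smaller than the manual ones.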

    Read the article

  • Why does Ubuntu's Nautilus display a folder called "Examples" while the console displays "examples.desktop"

    - by broiyan
    These folders occur at /home/username. How does this name discrepancy arise? (Uppercase E versus lowercase e.) It seems to be a shortcut to /usr/share/example-content. How can I delete /usr/share/example-content/Ubuntu_Free_Culture_Showcase without using the command line? One possible answer is to make a privileged Nautilus using something like these SUSE instructions (link below). Unfortunately "gnomesu nautilus" gives me a "gnomesu: no such file" message and "sudo nautilus" does not do anything when added to the properties of the Launcher. Update: "sudo nautilus" from the console let's me delete but there is a mess of error messages. http://forums.opensuse.org/english/get-technical-help-here/how-faq-forums/unreviewed-how-faq/426153-how-nautilus-super-user-mode-gnome.html
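
    The discrepancy is just how .desktop launchers work: Nautilus shows the Name= field stored inside examples.desktop (here "Examples"), while the shell shows the real filename. A quick way to confirm, as a sketch:

      grep '^Name=' ~/examples.desktop    # Nautilus displays this value, not the filename

    On Ubuntu the graphical equivalent of gnomesu is gksudo, so something like gksudo nautilus /usr/share/example-content opens a privileged file manager for deleting Ubuntu_Free_Culture_Showcase without working from a terminal.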

    Read the article

  • ip namespace non-root shell

    - by user2730940
    I am trying to run an ssh command inside another network namespace. I can do it right now, but it runs as root; I want to run it as a normal user. Is there a way to enter a non-root shell in another network namespace? I know you can use this to enter a root shell in another namespace: sudo ip netns exec <namespace> bash Alternatively, is there a way to run single commands as a non-root user? I know you can run commands as root with this: sudo ip netns exec <namespace> <command>
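
    The namespace switch itself needs root, but the command handed to ip netns exec can immediately drop back to a normal account with a second sudo (or su). A sketch, with myuser standing in for the real account name:

      # interactive non-root shell inside the namespace
      sudo ip netns exec <namespace> sudo -u myuser -i

      # a single command run as a non-root user
      sudo ip netns exec <namespace> sudo -u myuser ssh user@host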

    Read the article

  • Ubuntu - Ruby Daemon script creates two processes - sh and ruby - PID file points at sh, not ruby

    - by Jonathan Scoles
    The PID file for a ruby process I have running as a daemon is getting the wrong PID. It appears that running /etc/init.d/sinatra start creates two processes - sh and ruby, and the PID that ends up in the PID file is that of the sh process. This means that when I then run /etc/init.d/sinatra stop or /etc/init.d/sinatra restart, it is killing sh and leaving the ruby process still running. I'd like to know a) why is my script launching two processes - sh and ruby, and not just ruby, and b) how do I fix it to just launch ruby? Details of the setup: I have a small Sinatra server set up on an ubuntu server, running as a daemon. It is set to automatically at server startup run a script named sinatra in /etc/init.d that launches the a control script control.rb, which then runs a ruby daemon command to start the server. The script is run under the 'sinatrauser' account, which has permissions for the directories the script needs. contents of /etc/init.d/sinatra #!/bin/bash # sinatra Startup script for Sinatra server. sudo -u sinatrauser ruby /var/www/sinatra/control.rb $1 RETVAL=$? exit $RETVAL To install this script, I simply copied it to /etc/init.d/ and ran sudo update-rc.d sinatra defaults contents of /var/www/sinatra/control.rb require 'rubygems' require 'daemons' pwd = Dir.pwd Daemons.run_proc('sinatraserver.rb', {:dir_mode => :normal, :dir => "/opt/pids/sinatra"}) do Dir.chdir(pwd) exec 'ruby /var/www/sinatra/sintraserver.rb >> /var/log/sinatra/sinatraOutput.log 2>&1' end portion of output from ps -A 6967 ? 00:00:00 apache2 10181 ? 00:00:00 sh <--- PID file gets this PID 10182 ? 00:00:02 ruby <--- Actual ruby process running Sinatra 12172 ? 00:00:00 sshd The PID file gets created in /opt/pids/sinatra/sinatraserver.rb.pid, and always contains the PID of the sh instance, which is always one less than the PID of the ruby process EDIT: I tried micke's solution, but it had no effect on the behavior I am seeing. This is the output from ps -A f. This output looks the same whether I use sudo -u sinatrauser ... or su sinatrauser -c ... in the service start script in /etc/init.d. 1146 ? S 0:00 sh -c ruby /var/www/sinatra/sinatraserver.rb >> /var/log/sinatra/sinatraOutput.log 2>&1 1147 ? S 0:00 \_ ruby /var/www/sinatra/sinatraserver.rb

    Read the article

  • VirtualBox guest responds to ping but all ports closed in nmap

    - by jeremyjjbrown
    I want to setup a test database on a vm for development purposes but I cannot connect to the server via the network. I've got Ubuntu 12.04vm installed on 12.04 host in Virtualbox 4.2.4 set to - Bridged network mode - Promiscuous Allow All When I try to ping the virtual guest from any network client I get the expected result. PING 192.168.1.209 (192.168.1.209) 56(84) bytes of data. 64 bytes from 192.168.1.209: icmp_req=1 ttl=64 time=0.427 ms ... Internet access inside the vm is normal But when I nmap it I get nothin! jeremy@bangkok:~$ nmap -sV -p 1-65535 192.168.1.209 Starting Nmap 5.21 ( http://nmap.org ) at 2012-11-15 18:39 CST Nmap scan report for jeremy (192.168.1.209) Host is up (0.0032s latency). All 65535 scanned ports on jeremy (192.168.1.209) are closed Service detection performed. Please report any incorrect results at http://nmap.org/submit/ Nmap done: 1 IP address (1 host up) scanned in 0.88 seconds ufw and iptables on VM... jeremy@jeremy:~$ sudo service ufw stop [sudo] password for jeremy: ufw stop/waiting jeremy@jeremy:~$ sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination I have scanned around and have no reason to believe that my router is blocking internal ports. jeremy@bangkok:~$ nmap -v 192.168.1.2 Starting Nmap 5.21 ( http://nmap.org ) at 2012-11-15 18:44 CST Initiating Ping Scan at 18:44 Scanning 192.168.1.2 [2 ports] Completed Ping Scan at 18:44, 0.00s elapsed (1 total hosts) Initiating Parallel DNS resolution of 1 host. at 18:44 Completed Parallel DNS resolution of 1 host. at 18:44, 0.03s elapsed Initiating Connect Scan at 18:44 Scanning 192.168.1.2 [1000 ports] Discovered open port 445/tcp on 192.168.1.2 Discovered open port 139/tcp on 192.168.1.2 Discovered open port 3306/tcp on 192.168.1.2 Discovered open port 80/tcp on 192.168.1.2 Discovered open port 111/tcp on 192.168.1.2 Discovered open port 53/tcp on 192.168.1.2 Discovered open port 5902/tcp on 192.168.1.2 Discovered open port 8090/tcp on 192.168.1.2 Discovered open port 6881/tcp on 192.168.1.2 Completed Connect Scan at 18:44, 0.02s elapsed (1000 total ports) Nmap scan report for 192.168.1.2 Host is up (0.0017s latency). Not shown: 991 closed ports PORT STATE SERVICE 53/tcp open domain 80/tcp open http 111/tcp open rpcbind 139/tcp open netbios-ssn 445/tcp open microsoft-ds 3306/tcp open mysql 5902/tcp open vnc-2 6881/tcp open bittorrent-tracker 8090/tcp open unknown Read data files from: /usr/share/nmap Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds Answer... Turns out all of the ports were open to the network. I installed open ssh and confirmed it. Then I edited my db conf to listen to external IP's and all was well.

    Read the article

  • Nothing happens when trying to upgrade from Linux Mint 12 to 13

    - by Ares
    I am trying to upgrade from Linux Mint 12 to 13 using apt-get. After running the following commands, nothing seems to happen. I am fairly new to linux, what am I doing wrong? user@olympus /etc/default $ sudo apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. user@olympus /etc/default $ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
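
    Nothing happens because the APT sources still point at the Mint 12 (lisa / Ubuntu oneiric) repositories, so there is genuinely nothing newer to install. Mint's own recommendation for 12 to 13 was a fresh install, but if you want to attempt an in-place upgrade the idea looks like this sketch (the codename substitutions are the assumption here; check /etc/apt/sources.list.d/ for extra entries and back everything up first):

      sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
      sudo sed -i 's/lisa/maya/g; s/oneiric/precise/g' /etc/apt/sources.list
      sudo apt-get update
      sudo apt-get dist-upgrade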

    Read the article

  • Github file size limit changed 6/18/13. Can't push now

    - by slindsey3000
    How does this change as of June 18, 2013 affect my existing repository with a file that exceeds that limit? I last pushed 2 months ago with a large file. I have a large file that I have removed locally but I can not push anything now. I get a "remote error" ... remote: error: File cron_log.log is 126.91 MB; this exceeds GitHub's file size limit of 100 MB I added the file to .gitignore after original push... But it still exists on remote (origin) Removing it locally should get rid of it at origin(Github) right? ... but ... it is not letting me push because there is a file on Github that exceeds the limit... https://github.com/blog/1533-new-file-size-limits These are the commands I issued plus error messages.. git add . git commit -m "delete cron_log.log" git push origin master remote: Error code: 40bef1f6653fd2410fb2ab40242bc879 remote: warning: Error GH413: Large files detected. remote: warning: See http://git.io/iEPt8g for more information. remote: error: File cron_log.log is 141.41 MB; this exceeds GitHub's file size limit of 100 MB remote: error: File cron_log.log is 126.91 MB; this exceeds GitHub's file size limit of 100 MB To https://github.com/slinds(omited_here)/linexxxx(omited_here).git ! [remote rejected] master - master (pre-receive hook declined) error: failed to push some refs to 'https://github.com/slinds(omited_here) I then tried things like git rm cron_log.log git rm --cached cron_log.log Same error.
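
    Removing the file in a new commit is not enough: the oversized blob is still reachable from the earlier commits being pushed, and GitHub's pre-receive hook inspects every commit in the push. The file has to be rewritten out of history before pushing. A sketch using filter-branch (this rewrites history, so coordinate with any collaborators and force-push afterwards):

      git filter-branch --force --index-filter \
          'git rm --cached --ignore-unmatch cron_log.log' \
          --prune-empty --tag-name-filter cat -- --all
      git push origin master --force

    The BFG Repo-Cleaner is a commonly used faster alternative for exactly this kind of large-file purge.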

    Read the article

  • Issues with cross-domain uploading

    - by meder
    I'm using a django plugin called django-filebrowser which utilizes uploadify. The issue I'm having is that I'm hosting uploadify.swf on a remote static media server, whereas my admin area is on my django server. At first, the browse button wouldn't invoke my browser's upload. I fixed this by modifying the sameScriptAccess to always instead of sameDomain. Now the progress bar doesn't move at all, I probably have to enable some server setting for cross domain file uploading, or most likely actually host a separate upload script on my media server. I thought I could solve this by adding a crossdomain.xml to enable any site at the root of both servers, but that doesn't seem to solve it. $(document).ready(function() { $('#id_file').uploadify({ 'uploader' : 'http://media.site.com:8080/admin/filebrowser/uploadify/uploadify.swf', 'script' : '/admin/filebrowser/upload_file/', 'scriptData' : {'session_key': '...'}, 'checkScript' : '/admin/filebrowser/check_file/', 'cancelImg' : 'http://media.site.com:8080/admin/filebrowser/uploadify/cancel.png', 'auto' : false, 'folder' : '', 'multi' : true, 'fileDesc' : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;', 'fileExt' : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;', 'sizeLimit' : 10485760, 'scriptAccess' : 'always', //'scriptAccess' : 'sameDomain', 'queueSizeLimit' : 50, 'simUploadLimit' : 1, 'width' : 300, 'height' : 30, 'hideButton' : false, 'wmode' : 'transparent', translations : { browseButton: 'BROWSE', error: 'An Error occured', completed: 'Completed', replaceFile: 'Do you want to replace the file', unitKb: 'KB', unitMb: 'MB' } }); $('input:submit').click(function(){ $('#id_file').uploadifyUpload(); return false; }); }); The page I'm viewing this on is http://site.com/admin/filebrowser/browse on port 80.

    Read the article

  • Is there a way to change the GTK theme for applications run as superuser on KDE?

    - by Patches
    When I run GTK applications on KDE, they use the QtCurve theme that matches my color and font scheme as configured in the KDE System Settings application. However GTK applications run as superuser use the old default GNOME, regardless of whether I run them with kdesudo, gksudo, or sudo on a terminal. For example, here's gedit run as superuser on top, and under my normal user account on the bottom: Strangely, KDE applications run with kdesudo display the default Oxygen styling but use my settings when run with sudo on a terminal. Is there any way to configure the stying GTK applications use when run as superuser on KDE?

    Read the article

  • How do I send traffic from my Mac's wifi to my VPN client?

    - by Heath Borders
    I need to connect my Android to a Juniper VPN. Unfortunately, Juniper doesn't support Android on our VPN version. We've already put in a feature request for it, but we have no idea how long it will take to be complete. Right now, I connect to the Juniper VPN with a Juniper Mac OSX VPN client that uses Java to install kernel extensions to start and stop the VPN. Thus, I can't use the Network panel in System Preferences to create a VPN device, which means it won't show up in the 'Sharing' panel's Internet Sharing Share your connection from: menu, as suggested here. I used newproc.d to see what /usr/libexec/InternetSharing did when it ran, and it runs the following processes: 2013 Nov 1 00:26:54 5565 <1> 64b /usr/libexec/launchdadd 2013 Nov 1 00:26:55 5566 <1> 64b /usr/libexec/InternetSharing 2013 Nov 1 00:26:56 5568 <5566> 64b natpmpd -d -y bridge100 en0 2013 Nov 1 00:26:56 5569 <1> 64b /usr/libexec/pfd -d 2013 Nov 1 00:26:56 5567 <5566> 64b bootpd -d -P My Juniper VPN client creates the following devices (output of ifconfig): jnc0: flags=841<UP,RUNNING,SIMPLEX> mtu 1400 inet 10.61.9.61 netmask 0xffffffff open (pid 920) jnc1: flags=841<UP,RUNNING,SIMPLEX> mtu 1450 closed So, it seems like I should just be able to do this and have everything work: sudo killall -9 natpmpd sudo /usr/libexec/natpmpd -y bridge100 jnc0 My android connected fine and could hit public internet sites, but it couldn't hit private VPN sites. I assume this is because I need to change the routes that /usr/libexec/InternetSharing sets up. This is the output from sudo pfctl -s all before starting Internet Sharing: No ALTQ support in kernel ALTQ related functions disabled TRANSLATION RULES: nat-anchor "com.apple/*" all rdr-anchor "com.apple/*" all FILTER RULES: scrub-anchor "com.apple/*" all fragment reassemble anchor "com.apple/*" all DUMMYNET RULES: dummynet-anchor "com.apple/*" all INFO: Status: Disabled for 0 days 00:11:02 Debug: Urgent State Table Total Rate current entries 0 searches 22875 34.6/s inserts 1558 2.4/s removals 1558 2.4/s Counters match 2005 3.0/s bad-offset 0 0.0/s fragment 0 0.0/s short 0 0.0/s normalize 0 0.0/s memory 0 0.0/s bad-timestamp 0 0.0/s congestion 0 0.0/s ip-option 12 0.0/s proto-cksum 0 0.0/s state-mismatch 1 0.0/s state-insert 0 0.0/s state-limit 0 0.0/s src-limit 0 0.0/s synproxy 0 0.0/s dummynet 0 0.0/s TIMEOUTS: tcp.first 120s tcp.opening 30s tcp.established 86400s tcp.closing 900s tcp.finwait 45s tcp.closed 90s tcp.tsdiff 60s udp.first 60s udp.single 30s udp.multiple 120s icmp.first 20s icmp.error 10s grev1.first 120s grev1.initiating 30s grev1.estblished 1800s esp.first 120s esp.estblished 900s other.first 60s other.single 30s other.multiple 120s frag 30s interval 10s adaptive.start 6000 states adaptive.end 12000 states src.track 0s LIMITS: states hard limit 10000 app-states hard limit 10000 src-nodes hard limit 10000 frags hard limit 5000 tables hard limit 1000 table-entries hard limit 200000 OS FINGERPRINTS: 696 fingerprints loaded This is the output from sudo pfctl -s all after starting Internet Sharing: No ALTQ support in kernel ALTQ related functions disabled TRANSLATION RULES: nat-anchor "com.apple/*" all nat-anchor "com.apple.internet-sharing" all rdr-anchor "com.apple/*" all rdr-anchor "com.apple.internet-sharing" all FILTER RULES: scrub-anchor "com.apple/*" all fragment reassemble scrub-anchor "com.apple.internet-sharing" all fragment reassemble anchor "com.apple/*" all anchor "com.apple.internet-sharing" all DUMMYNET RULES: dummynet-anchor "com.apple/*" all STATES: ALL 
tcp 10.0.1.32:50593 -> 74.125.225.113:443 SYN_SENT:CLOSED ALL udp 10.0.1.32:61534 -> 10.0.1.1:53 SINGLE:NO_TRAFFIC ALL udp 10.0.1.32:55433 -> 10.0.1.1:53 SINGLE:NO_TRAFFIC ALL udp 10.0.1.32:64041 -> 10.0.1.1:53 SINGLE:NO_TRAFFIC ALL tcp 10.0.1.32:50619 -> 74.125.225.131:443 SYN_SENT:CLOSED INFO: Status: Enabled for 0 days 00:00:01 Debug: Urgent State Table Total Rate current entries 5 searches 22886 22886.0/s inserts 1563 1563.0/s removals 1558 1558.0/s Counters match 2010 2010.0/s bad-offset 0 0.0/s fragment 0 0.0/s short 0 0.0/s normalize 0 0.0/s memory 0 0.0/s bad-timestamp 0 0.0/s congestion 0 0.0/s ip-option 12 12.0/s proto-cksum 0 0.0/s state-mismatch 1 1.0/s state-insert 0 0.0/s state-limit 0 0.0/s src-limit 0 0.0/s synproxy 0 0.0/s dummynet 0 0.0/s TIMEOUTS: tcp.first 120s tcp.opening 30s tcp.established 86400s tcp.closing 900s tcp.finwait 45s tcp.closed 90s tcp.tsdiff 60s udp.first 60s udp.single 30s udp.multiple 120s icmp.first 20s icmp.error 10s grev1.first 120s grev1.initiating 30s grev1.estblished 1800s esp.first 120s esp.estblished 900s other.first 60s other.single 30s other.multiple 120s frag 30s interval 10s adaptive.start 6000 states adaptive.end 12000 states src.track 0s LIMITS: states hard limit 10000 app-states hard limit 10000 src-nodes hard limit 10000 frags hard limit 5000 tables hard limit 1000 table-entries hard limit 200000 TABLES: OS FINGERPRINTS: 696 fingerprints loaded It looks like I need to change the pf settings that /usr/libexec/InternetSharing set up, but I have no idea how to do that.

    Read the article

  • rsync ssh not working in crontab after reboot

    - by kabeer
    I was using a script to perform rsync from sudo's crontab. The script does a two-way rsync (from serverA to serverB and back), and rsync uses ssh to connect between the servers. After I rebooted both server machines, the rsync no longer works from sudo's crontab. I also set up a new cron job and it fails. The error is: rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6] rsync: connection unexpectedly closed (0 bytes received so far) [receiver] However, when run from a terminal, the rsync script works as expected without issues. It looks like an ssh issue, yet I am able to ssh into either server without problems. Please help.
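
    The usual reason an rsync-over-ssh job works from a terminal but not from cron is that the interactive shell has an ssh agent (or an already-decrypted key) that cron never sees, and a reboot clears the agent, which matches the timing here. A sketch, assuming a dedicated passphrase-less key for the backup job run from root's crontab:

      # generate a key without a passphrase and install it on the other server
      ssh-keygen -t rsa -f /root/.ssh/rsync_backup -N ""
      ssh-copy-id -i /root/.ssh/rsync_backup.pub user@serverB

      # in the script run from sudo's crontab, point rsync at that key explicitly
      rsync -az -e "ssh -i /root/.ssh/rsync_backup" /data/ user@serverB:/data/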

    Read the article

  • How to correctly add daemon in MacOS 10.6.6 via launchd?

    - by Eye of Hell
    Hello. I have a very simple task to accomplish: starting the Tomcat application server as a daemon on the latest MacOS. I have performed the following steps: installed Tomcat in /Library/Tomcat/Home; validated that it runs fine by executing /Library/Tomcat/Home/bin/startup.sh; added an org.apache.tomcat.plist file to /Library/LaunchDaemons as found on the internet (http://blog.i18n.ro/complete-guide-for-installing-hudson-ci-on-os-x-10-6/); instructed MacOS to load the daemon description via sudo launchctl load org.apache.tomcat.plist, which succeeded (issuing this command a second time outputs "already loaded"); instructed MacOS to start the daemon via sudo launchctl start org.apache.tomcat.plist. At this point MacOS shows an error: "launchctl start error: No such process". I have checked the launchd log file - it has no record of this error. Google says nothing, and from the error text I can't figure out what the "process" is or why it is "wrong" :(. Any hints on what I'm doing wrong?
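
    launchctl start expects the job's Label (the string inside the plist), not the filename that was loaded, which is exactly what "No such process" is complaining about. Assuming the plist from that guide uses the label org.apache.tomcat, a sketch:

      sudo launchctl load /Library/LaunchDaemons/org.apache.tomcat.plist
      sudo launchctl start org.apache.tomcat     # the Label, not the .plist filename
      sudo launchctl list | grep tomcat          # confirm the job is loaded and running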

    Read the article
