Search Results

Search found 5597 results on 224 pages for 'sudo rm rf'.

Page 169/224 | < Previous Page | 165 166 167 168 169 170 171 172 173 174 175 176  | Next Page >

  • glassfish v3 - update all packages via command line on linux

    - by orange80
    Does anyone know how to do this? I just want a one-shot command to "update everything". This is for a remote server, so it must be done over the command line. I used:

        $ sudo pkg list -u

    to see the list of packages that are out of date, but I cannot for the life of me figure out how to say, "ok, update them". I have scoured the web for clues, but to no avail :( This is classic Sun Solaris-type patching, which is the exact reason I am now on Linux. Please help!! Thanks :) Jamie
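
    A minimal sketch of the likely one-shot update, assuming the IPS-based pkg(5) tool that ships with GlassFish v3 (the exact subcommand name varies between releases):

        $ sudo pkg image-update    # older IPS releases
        $ sudo pkg update          # newer releases accept the shorter form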

    Read the article

  • Script errors when run by launchd at startup, but not when run in Terminal

    - by Mechcozmo
    I'm attempting to create a RAM disk that loads its previous contents when the system starts up, and every six hours writes those contents to a disk image. Currently, when you run the script from the terminal ("sudo bash LogToRAM.sh"), everything works fine. But when it is run from launchd during startup, it doesn't work. Here are the lines from the log; the first line just gives some idea as to where in the boot process we are:

        SecurityAgent[202] Showing Login Window
        com.mechcozmo.LogToRAM[51] + /Developer/usr/bin/SetFile -a V /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] ERROR: File Not Found. (-43) on file: /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] + /usr/sbin/asr -source '/Library/Application Support/LogToRAM/RAMdisk_store.dmg' -target /Volumes/LogfileRAMdisk/ -noverify

    Here is the script and plist file in question. Note that 'set -vx' is up at the top of the script; it gives a lot of information about what is happening in the script. My current theory is that the /Volumes directory does not exist at this stage of the boot process, but that seems unlikely, to be honest.
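
    A minimal guard sketch, assuming the theory is right and the volume infrastructure simply isn't ready that early: have the script wait for its prerequisites before touching /Volumes (the 60-second cap is an arbitrary choice):

        # Wait up to 60s for /Volumes to exist before proceeding
        i=0
        until [ -d /Volumes ] || [ "$i" -ge 60 ]; do sleep 1; i=$((i+1)); done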

    Read the article

  • Mounting LVM2 volume with XFS filesystem

    - by Chris
    Unfortunately I'm not able to access the data on my NAS anymore, and I can't figure out why, as I haven't changed anything. So I plugged one of the hard disks into my computer to access the data. What I did:

        kpartx -a /dev/sdc

    Now I should be able to access /dev/mapper/vg001-lv001. When trying to mount it I get:

        sudo mount -t xfs /dev/mapper/vg001-lv001 /home/user/mnt
        mount: /dev/mapper/vg001-lv001: can't read superblock

    Then I did a parted -l, which gave me (translated from the German locale output):

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/vg001-lv001: 498GB
        Sector size (logical/physical): 512B/512B
        Partition Table: loop

        Number  Begin  End    Size   Filesystem  Flags
        1       0,00B  498GB  498GB  xfs

    Does anybody have a solution for how to recover the data?
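
    A hedged first-aid sketch, assuming the LVM volume just isn't active yet and the XFS metadata then needs a read-only check (xfs_repair -n only inspects, so it is safe to run):

        sudo vgscan                                   # look for volume groups on the attached disk
        sudo vgchange -ay vg001                       # activate the group so the LV node appears
        sudo xfs_repair -n /dev/mapper/vg001-lv001    # dry-run check of the superblock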

    Read the article

  • How do I launch a process as a specific user at startup on OS X?

    - by Scott Bonds
    I would like to run a script as a particular user on startup (not on login). I thought a launchd LaunchDaemon would do it, but 'man launchd' says: "If you wish your service to run as a certain user, in that user's environment, making it a launchd agent is the ONLY supported means of accomplishing this on Mac OS X. In other words, it is not sufficient to perform a setuid(2) to become a user in the truest sense on Mac OS X." They aren't kidding -- when I try to run my script as a LaunchDaemon it doesn't work. In particular I'm trying to automate some keychain operations using the 'security' command, and it won't let me change the default keychain when I run the script through a LaunchDaemon, though the script works fine when run using sudo from a shell. A LaunchAgent won't work, because the goal is for the process to run without a user logging in, and LaunchAgents only run when someone logs in. I looked at cron and the @reboot directive, and that looks promising, but I read that cron is deprecated on OS X.
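
    For reference, launchd.plist does offer a UserName key that makes a daemon run as a given user, albeit without that user's full login environment, which is exactly the man page's caveat and likely why the keychain calls fail. A minimal sketch, with the label and script path as placeholders:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0"><dict>
            <key>Label</key>    <string>com.example.myscript</string>
            <key>UserName</key> <string>targetuser</string>
            <key>ProgramArguments</key>
            <array><string>/usr/local/bin/myscript.sh</string></array>
            <key>RunAtLoad</key> <true/>
        </dict></plist>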

    Read the article

  • Gem Load Error about whois command and removed cache

    - by Puru puru rin..
    Hello, I have an awful problem with Gem. After executing this command:

        rm -f /usr/local/lib/ruby/gems/1.9.1/cache/*

    I cannot do anything. If I try, for instance, gem cleanup, I get this kind of answer:

        /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/gemwhois.rb:3:in `require': no such file to load -- rubygems/commands/whois (LoadError)
            from /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/gemwhois.rb:3:in `<top (required)>'
            from /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/rubygems_plugin.rb:2:in `require'
            from /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/rubygems_plugin.rb:2:in `<top (required)>'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1113:in `load'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1113:in `block in <top (required)>'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1105:in `each'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1105:in `<top (required)>'
            from <internal:gem_prelude>:235:in `require'
            from <internal:gem_prelude>:235:in `load_full_rubygems_library'
            from <internal:gem_prelude>:334:in `const_missing'
            from /usr/local/bin/gem:12:in `<main>'

    It's the same for gem -v, or just the gem command... I'm working on Snow Leopard. What would be the best solution, in your opinion? Thanks a lot!
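
    A hedged way out: the backtrace points at the gemwhois plugin, whose rubygems_plugin.rb is loaded on every gem invocation, so removing that one gem by hand should bring gem back to life. The gem directory path is taken from the trace above; the gemspec path next to it is an assumption about the standard layout:

        rm -rf /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1
        rm -f  /usr/local/lib/ruby/gems/1.9.1/specifications/gemwhois-0.1.gemspec
        gem -v    # should work again; reinstall gemwhois afterwards if wanted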

    Read the article

  • duplicity can't find remote backup directory?

    - by leeand00
    Using my private key to do so, this command allows me to connect to /home/backupUser/backup just fine:

        $ sudo sftp -oPort=7843 [email protected]:backup

    However when I run duplicity, I get the following error:

        duplicity full --exclude ... / scp://backupUser:[email protected]:7843:/backup
        bash: [email protected]:7843./backup: No such file or directory

    I'm under the assumption that duplicity would interpret the /backup path as relative to the user's home directory. But since the above command didn't work, I also tried leaving off the / in the backup directory at the end of the command, i.e.

        duplicity full --exclude ... / scp://backupUser:[email protected]:7843:backup
        bash: [email protected]:7843:backup: command not found

    Is there something I'm missing here, like adding the passcode for the private key to make this command work?
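
    One hedged observation: duplicity's documented URL format separates the port and the path with a slash, not a second colon, i.e. scp://user@host:port/path (a double slash for an absolute path). So a form worth trying, with the host written as a placeholder:

        duplicity full --exclude ... / scp://backupUser@<host>:7843/backup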

    Read the article

  • How to install PyQt on Mac OS X 10.6

    - by Albert
    I want to install PyQt. This seems kind of complicated to install on OS X. I haven't found any precompiled packages of it (are there any? I would really prefer those). So I downloaded PyQt, and SIP, because it depends on that. These files:

        http://www.riverbankcomputing.co.uk/static/Downloads/PyQt4/PyQt-mac-gpl-4.7.3.tar.gz
        http://www.riverbankcomputing.co.uk/static/Downloads/sip4/sip-4.10.2.tar.gz

    I did a python configure.py && make && sudo make install on SIP -- it installed without any problems. I tried the same on PyQt -- and it failed, of course:

        /Library/Frameworks/QtCore.framework/Headers/qglobal.h:288:2: error: #error "You are building a 64-bit application, but using a 32-bit version of Qt. Check your build configuration."

    OK, so I tried with python configure.py --use-arch=i386. Same error. Any idea?
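
    A hedged workaround, assuming the installed Qt frameworks really are 32-bit and the Python build is universal: force the whole build (SIP and PyQt both) to run as a 32-bit process with OS X's arch(1) wrapper, so the generated Makefiles target i386 throughout:

        arch -i386 python configure.py
        arch -i386 make
        sudo make install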

    Read the article

  • Cannot connect to FTP server from external host

    - by h3
    I have an FTP server (vsftpd) set up on a Linux box (Ubuntu Server). When I try to connect with a computer on the same network, everything works as expected. But as soon as the IP is external, it won't connect. I first assumed the port was blocked, but then:

        localserver:$ sudo tail -f /var/log/vsftpd.log
        Wed Jan 13 14:21:17 2010 [pid 2407] CONNECT: Client "xxx.xxx.107.4"

        remotemachine:$ netcat svn-motion.no-ip.biz 21
        220 FTP Server

    And it hangs there. Are there any ports other than 21 that are required to be open?
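
    A hedged guess at the cause: FTP opens a second data connection besides the port-21 control channel, so with a firewall or NAT in between, a fixed passive port range has to be configured and forwarded too. A sketch for /etc/vsftpd.conf (the range itself is an arbitrary choice):

        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100
        # then open/forward TCP 40000-40100 alongside 21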

    Read the article

  • Multiple users writing to one Samba mount point in OSX

    - by Sam
    I have an OS X box containing a script which writes a unique file to a Samba share. The first part of the script mounts the share. On the machine are 2 users -- UserA and UserB. Each needs to be able to run this script at any given time, but only the user who mounted the share is able to write to it. I really need both users to have rwx access. Here is what I have tried:

        - Mounting, then chmod'ing the mountpoint (no effect -- overruled by the Samba server?)
        - chmod'ing the mountpoint, then mounting (same as above)
        - sudo mount_smbfs

    Both users have admin privileges. Ideally a solution would be executable by one of the users (contained in the script) and not rely on mounting at machine boot time. Any ideas appreciated, thanks!
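
    A hedged sketch using mount_smbfs's mode options, which let the mounting user grant group write access on everything under the mountpoint (share path and mountpoint are placeholders; both users would need to share a group, e.g. admin):

        # -f/-d set the file/directory permission bits seen locally
        sudo mount_smbfs -f 0775 -d 0775 //user@server/share /Volumes/share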

    Read the article

  • SSH authentication working unless run from a script?

    - by awright418
    I have set up my server to allow key/pair authentication by following instructions similar to what is found in this post. As far as I can tell, that is working correctly. If I do the following, for example, it works correctly:

        ssh [email protected]

    It will NOT prompt me for a password. This is what I want to happen. However, if I write a small bash script like this:

        #!/bin/bash -x
        ssh [email protected]

    and execute it with:

        sudo ./mytestscript.sh

    ...it will prompt me with:

        [email protected]'s password:

    What am I doing wrong? I need to be able to log in from within my script without being prompted for a password!
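
    A hedged explanation: under sudo, the ssh client runs as root, so it looks for keys in /root/.ssh rather than in the invoking user's home. Two sketches, with the key path and host as placeholders:

        # either run the script without sudo, or
        # point ssh at the original user's key explicitly:
        sudo ssh -i /home/youruser/.ssh/id_rsa user@yourserver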

    Read the article

  • install filezilla error, Depends: libatk1.0-0 (>= 1.29.3) but 1.28.0-0ubuntu1 is to be installed

    - by solomongaby
    Hello, I am trying to install FileZilla from this repo: https://launchpad.net/~yofel/+archive/ppa. After sudo apt-get update, I tried to install it, but I get the error:

        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          filezilla: Depends: libatk1.0-0 (>= 1.29.3) but 1.28.0-0ubuntu1 is to be installed

    Do you have any idea what is happening?
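
    A hedged diagnosis: the PPA build targets a newer Ubuntu release than the one installed, so it demands a libatk newer than the distribution ships. Two quick checks:

        lsb_release -sc               # confirm your release codename matches the PPA's target
        apt-cache policy libatk1.0-0  # see which versions are actually available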

    Read the article

  • rsync remote to local automatic backup

    - by Mark Molina
    Because all my work is stored on a remote server, I would like to auto-backup my server monthly and weekly. My server is running CentOS 5.5, and while searching the web I found a tool named rsync. I got my first update manually by using this command in the terminal:

        sudo rsync -chavzP --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    I then typed the password for that user, and Bob's your uncle. This backs up the necessary files from my remote server to my local device, but does somebody know how I can automate this? Like automatically running this script every Sunday? EDIT: I forgot to mention that I let DirectAdmin back up the files I need and then copy those files from the remote server to a local server.
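
    A hedged automation sketch: set up passwordless SSH keys for the backup user, then add a crontab entry (paths reuse the placeholders above; 03:00 every Sunday is an arbitrary choice):

        # crontab -e
        0 3 * * 0 rsync -chavzP --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP >> /var/log/rsync-backup.log 2>&1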

    Read the article

  • upstart not working

    - by dorelal
    I saved the following file at /etc/init/nodejs.conf:

        description "node.js server"
        author      "dorelal"

        start on startup
        stop on shutdown

        script
            # We found $HOME is needed. Without it, we ran into problems
            export HOME="/root"
            exec /usr/local/bin/node /home/dorelal/nodejs.js 2>&1 >> /var/log/node.log
        end script

    Then I tried to start the server:

        > sudo initctl start nodejs
        initctl: Unknown job: nodejs

    Ubuntu information:

        > cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=9.10
        DISTRIB_CODENAME=karmic
        DISTRIB_DESCRIPTION="Ubuntu 9.10"

    What do I need to do to start the server using upstart?
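
    A hedged first step: upstart normally picks up new job files automatically, but forcing a rescan rules out a stale job list; if the job still shows as unknown afterwards, a syntax error in the .conf is the usual suspect:

        sudo initctl reload-configuration    # rescan /etc/init for job definitions
        sudo initctl start nodejs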

    Read the article

  • How to display password policy information for a user (Ubuntu)?

    - by C.W.Holeman II
    The Ubuntu Documentation (Ubuntu 9.04 Ubuntu Server Guide, Security, User Management) states that there is a default minimum password length for Ubuntu: "By default, Ubuntu requires a minimum password length of 4 characters". Is there a command for displaying the current password policies for a user (in the same way that the chage command displays the password expiration information for a specific user)?

        > sudo chage -l SomeUserName
        Last password change                               : May 13, 2010
        Password expires                                   : never
        Password inactive                                  : never
        Account expires                                    : never
        Minimum number of days between password change     : 0
        Maximum number of days between password change     : 99999
        Number of days of warning before password expires  : 7

    This is rather than examining the various places that control the policy and interpreting them myself, since that process could contain errors. A command that reports the composed policy would be used to check the policy-setting steps.
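
    I'm not aware of a single composed-policy command; a hedged fallback is to read the two main policy inputs directly:

        grep '^PASS_' /etc/login.defs                               # PASS_MAX_DAYS, PASS_MIN_DAYS, PASS_WARN_AGE
        grep -E 'pam_unix|pam_cracklib' /etc/pam.d/common-password  # minimum length and complexity rules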

    Read the article

  • Very very weird problem with UIImageView property - I can access it then I can't, and it's not nil.

    - by just_another_coder
    Very, very weird problem with a UIImageView property on an iPad application:

        @interface MyViewController : UIViewController {
            IBOutlet UIImageView* coverImage;
        }
        @property(nonatomic, retain) IBOutlet UIImageView* coverImage;
        … more code

        @implementation MyViewController
        @synthesize coverImage;
        … more code

        - (void)viewDidLoad {
            [super viewDidLoad];
            NSString* imageName = @"my_image.png";
            UIImage* tempImage = [UIImage imageNamed:imageName];
            [self.coverImage setImage:tempImage];
        }

    The above code WILL display the image. In another part of the code:

        - (IBAction)stopButtonPressed:(id)sender {
            [self.coverImage setHidden:YES];
            NSLog(@"coverImage desc: %@", [coverImage description]);
        }

    The image will NOT disappear. I know the reference to the image isn't nil, because it gives me this output:

        2010-05-29 17:37:40.706 MyApp[95360:207] coverImage desc: UIImageView: 0x5128420; frame = (0 0; 1024 768); autoresize = RM+BM; userInteractionEnabled = NO; layer = CALayer: 0x512bed0

    In addition, if I move the code in viewDidLoad to another part of the class and try to execute it from there, it fails to show the image at all.

    Read the article

  • Mac firewall blocking nginx (port 80) from external side

    - by Alex Ionescu
    I installed nginx using ports and started it with sudo. Accessing the nginx welcome page from localhost works perfectly; however, accessing it from an external computer fails. Doing an nmap on the computer from the outside reveals:

        80/tcp filtered http

    So clearly the Mac firewall is blocking the port. I then proceeded to add the nginx executable to the firewall exception list, as seen in this image, but nmap still shows port 80 as filtered and I'm unable to access the webpage. The exact binary in the list is /opt/local/sbin/nginx, which to my knowledge is correct. Any ideas what I should do? Thanks! P.S. Turning the firewall off does allow me to access the website from the outside world, but that isn't an ideal solution.
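
    A hedged sketch using the application firewall's command-line tool, which sometimes takes effect where the GUI list doesn't (flag spellings vary between OS X releases, so this is an assumption to verify against socketfilterfw's help output):

        sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /opt/local/sbin/nginx
        sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblockapp /opt/local/sbin/nginx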

    Read the article

  • Ubuntu Linux: Process swap memory and memory usage

    - by David Halter
    My Ubuntu eats more memory than the task manager is showing:

        sudo ps -e --format rss | awk 'BEGIN{c=0} {c+=$1} END{print c/1024}'
        1043.84

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3860       1878       1982          0         20        679
        -/+ buffers/cache:       1178       2681
        Swap:         2729       1035       1693

    That's strange. Can someone explain this difference? But what is more important: I'd like to know how much memory a process is really using. I don't want to know the virtual memory size, but rather the resident memory plus swap of a process. I have also tried the format param "sz" of ps, but the sum of that is too high (5450 MB), and the param "size" gives 8323.45 MB. Are there any other options? I really want to use this to determine which programs/processes are eating too much memory (and swap), to kill them, because hibernate might not work if the swap partition is too small.
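
    A hedged per-process sketch: on kernels that expose VmSwap (roughly 2.6.34 and later), /proc/<pid>/status reports swapped-out pages directly, and older kernels can be approximated by summing the Swap: lines of /proc/<pid>/smaps (<pid> is a placeholder):

        grep -E 'VmRSS|VmSwap' /proc/<pid>/status
        sudo awk '/^Swap:/ {s+=$2} END {print s " kB"}' /proc/<pid>/smaps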

    Read the article

  • notify url is not called

    - by Jahangeer Ahmed
    Dim redirecturl As String = ""
        redirecturl = ConfigurationManager.AppSettings("papalUrl").ToString() & "us/cgi-bin/webscr?cmd=_cart&upload=1&business=" & ConfigurationManager.AppSettings("paypalemail").ToString()
        Dim j As Integer = 0
        Dim dr1 As DataRow
        If ds.Tables("ReviewOrder").Rows.Count > 0 Then
            Dim requestsFile As String = Server.MapPath("~/App_Data/PaymentRequests.xml")
            ' ds.Tables("ReviewOrder").WriteXml(requestsFile)
            For j = 0 To ds.Tables("ReviewOrder").Rows.Count - 1
                dr1 = ds.Tables("ReviewOrder").Rows(j)
                redirecturl += "&item_name_" & j + 1 & "=" & dr1("varTitle")
                redirecturl += "&amount_" & j + 1 & "=" & dr1("flRate")
                redirecturl += "&image_url_" & j + 1 & "=" & ConfigurationManager.AppSettings("RSSurl").ToString() & dr1("imgImage")
                redirecturl += "&quantity_" & j + 1 & "=" & Convert.ToInt64(dr1("flQuantity"))
                ''redirecturl += "&item_name_2=Sample_testing2&amount_2=9.50"
                ''redirecturl += "&quantity_2=2"
                ''redirecturl += "&item_name_3=Sample_testing3"
                ''redirecturl += "&amount_3=8.50"
                ''redirecturl += "&quantity_3=3"
                redirecturl += "&custom_" & j + 1 & "=" & dr1("BasketID")
            Next
        End If
        redirecturl += "&currency=" & ConfigurationManager.AppSettings("CurrencyCode").ToString()
        redirecturl += "&first_name=" & firstName
        redirecturl += "&last_name=" & lastName
        redirecturl += "&city=" & city
        redirecturl += "&state=" & state
        redirecturl += "&zip=" & zip
        redirecturl += "&address1=" & address1
        redirecturl += "&address2=" & address2
        redirecturl += "&notify_url=" & Server.UrlEncode(ConfigurationManager.AppSettings("NotifyUrl").ToString() & "&rm=2")
        redirecturl += "&return=" & ConfigurationManager.AppSettings("SuccessURL").ToString()
        ' Failed return page url
        redirecturl += "&cancel_return=" & ConfigurationManager.AppSettings("FailedURL").ToString()
        Page.ClientScript.RegisterClientScriptBlock(Me.GetType(), "Redirect", "window.parent.location='" & redirecturl & "';", True)
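
    A hedged guess at why the IPN never fires, written as a correction to the notify_url line above: "rm" is PayPal's separate return-method variable, so URL-encoding "&rm=2" into the notify_url value hands PayPal a callback address it cannot use. Keeping the parameters apart would look like this:

        ' notify_url must be a plain, publicly reachable URL; rm travels as its own parameter
        redirecturl += "&notify_url=" & Server.UrlEncode(ConfigurationManager.AppSettings("NotifyUrl").ToString())
        redirecturl += "&rm=2"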

    Read the article

  • Ubuntu: encrypt user's home directory and protect from admin?

    - by Luc
    I have the following problem: I need to run some scripts on an Ubuntu machine, but I do not want those scripts to be visible to anybody. What could be the best way to do that? I was thinking of the following:

        - create a particular user
        - add the scripts to this user's home directory
        - protect + encrypt the user's home directory

    Can I run the script from outside if the directory is encrypted? Can the superuser see the content of the home dir? Is there a right way to do this? UPDATE: I think the best way would be for root to own those scripts. In that case I would need to allow another user to modify the network configuration. Is it possible to provide ONLY network rights to a user? (via sudo or else)
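
    A hedged sudoers sketch for the network-only part, assuming the standard tools are all the user needs (username and tool paths are placeholders to adjust):

        # /etc/sudoers.d/netadmin -- let one user run only the network tools as root
        someuser ALL=(root) NOPASSWD: /sbin/ifconfig, /sbin/ip, /sbin/route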

    Read the article

  • How to send a future email using the at command

    - by BHare
    I just need to send one email into the future, so I figured I'd be better off using at rather than cron. This is what I have so far; it's messy, ugly and not that great at escaping:

        <?php
        $out = array();
        // Where is the email going?
        $email = "[email protected]";
        // What is the body of the email (make sure to escape any double-quotes)
        $body = "This is what is actually emailed to me";
        $body = escapeshellcmd($body);
        $body = str_replace('!', '\!', $body);
        // What is the subject of the email (make sure to escape any double-quotes)
        $subject = "It's alive!";
        $subject = escapeshellcmd($subject);
        $subject = str_replace('!', '\!', $subject);
        // How long from now should this email be sent? IE: 1 minute, 32 days, 1 month 2 days.
        $when = "1 minute";
        $command = <<<END
        echo "
        echo \"$body\" > /tmp/email;
        mail -s \"$subject\" $email < /tmp/email;
        rm /tmp/email;
        " | at now + $when;
        END;
        $ret = exec($command, $out);
        print_r($out);
        ?>

    The output should be something like:

        warning: commands will be executed using /bin/sh
        job 60 at Thu Dec 30 19:39:00 2010

    However I am doing something wrong with exec and not getting the result. The main thing is this seems very messy. Are there any better alternative methods for doing this? PS: I had to add apache's user (www-data for me) to /etc/at.allow ... which I don't like, but I can live with it.
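
    A hedged simplification of the same job that skips the temp file and the nested quoting entirely (address, subject, and timing are placeholders):

        echo 'echo "This is the body" | mail -s "The subject" user@example.com' | at now + 1 minute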

    Read the article

  • nmap reports host up when it isn't

    - by martianway
    On an Ubuntu VM I ran:

        sudo nmap -sP 192.168.0.*

    This returned:

        Starting Nmap 5.00 ( http://nmap.org ) at 2010-12-28 22:46 PST
        Host 192.168.0.0 is up (0.00064s latency).
        Host 192.168.0.1 is up (0.00078s latency).
        Host 192.168.0.2 is up (0.00011s latency).
        ...
        Host 192.168.0.254 is up (0.00068s latency).
        Host 192.168.0.255 is up (0.00066s latency).

    The problem is I only have 4 live machines on 192.168.0.*, so why did nmap report every IP in the subnet as a live host? The IP address of the Ubuntu machine is 192.168.28.131. From this VM I can ping the live systems on my internal subnet 192.168.0.* and get the expected response, and if I ping a machine that doesn't exist I get no response, as expected.
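
    A hedged reading of the symptom: the VM sits on 192.168.28.x, so the scan is routed through the VM's NAT gateway, and such gateways often answer (or proxy-ARP) for every address, making the whole subnet look up. Scanning from a machine that is on 192.168.0.x itself, where nmap can use real ARP probes, should give honest results:

        sudo nmap -sP -PR 192.168.0.0/24    # -PR: ARP ping, only meaningful on the local LAN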

    Read the article

  • s3cmd runs on the command line but not from cron

    - by Jonar
    Many have said that the problem is with the environment, but I still can't seem to solve it. BTW, I am using Ubuntu 9.10. I log in as a user, then sudo -s, and this command works:

        s3cmd put file s3://bucket

    Now here is the simple script intended for testing:

        #!/bin/bash
        env > /tmp/cronjob.log
        s3cmd put file s3://bucket

    issued via crontab -e:

        * * * * * /opt/script 2>&1 | logger

    Then tailing syslog shows:

        Dec 3 23:22:01 ubuntu CRON[10795]: (root) CMD (/opt/script 2&1 | logger)

    But verifying in S3Fox Organizer shows the file is not uploaded. (I tried changing the shebang to #!/bin/sh (no effect), putting the entry in /etc/crontab (no effect), and setting HOME=/home/user (no effect).) What other options can I try, or how else can I debug this problem? Thanks
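
    A hedged next step: s3cmd reads its credentials from ~/.s3cfg, and under cron HOME is often not what the interactive sudo -s shell had, so pointing at the config explicitly removes that variable from the equation (config path assumes the sudo -s session wrote it for root):

        * * * * * /usr/bin/s3cmd -c /root/.s3cfg put file s3://bucket 2>&1 | logger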

    Read the article

  • Unable to connect with PPTP (From Windows 8 to Ubuntu 12.10)

    - by jaja
    I'm trying to connect to my VPS using PPTP. At first, I got the problem that my connection (WLAN, which is what I use to connect to the Internet) goes "limited" (I can't connect to the Internet) when I connect to the VPN. Then I started getting a long error message, something like "you might be trying to use L2TP". Now it's back to the "limited" problem again. What's the solution? One thing I'm not sure of is what to put as the local IP address in sudo nano /etc/pptpd.conf -- is it 127.0.0.1? I'm following this tutorial: http://thesinclairs.gotdns.com/blog/set-up-a-pptp-vpn-on-ubuntu-server/
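
    A hedged pptpd.conf sketch: localip should be a private tunnel address for the server itself, not the loopback 127.0.0.1, with remoteip giving a matching pool for clients (the 10.x range here is an arbitrary choice):

        localip  10.99.99.1
        remoteip 10.99.99.100-199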

    Read the article

  • dpkg -S not showing all files in package

    - by dimadima
    I've been using dpkg -S <package_name> to list the contents of a package, sometimes piping to grep bin to quickly scan for executables. I just ran into a case where this didn't work out for me:

        $ which virtualenv
        $ sudo apt-get install python-virtualenv
        Reading package lists... Done
        ...
        Setting up python-virtualenv (1.7.1.2-1) ...
        $ which virtualenv
        /usr/bin/virtualenv
        $ dpkg -S /usr/bin/virtualenv
        python-virtualenv: /usr/bin/virtualenv
        $ dpkg -S python-virtualenv | grep bin
        $

    /usr/bin/virtualenv seems to be provided by python-virtualenv, but isn't listed in the package contents given by dpkg -S. All the while, passing /usr/bin/virtualenv to dpkg -S returns that the file comes from python-virtualenv. Can you all explain this?
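
    A hedged pointer: dpkg -S searches for the package owning a file pattern, while dpkg -L is the option that lists a package's contents, so the executable scan probably wants:

        dpkg -L python-virtualenv | grep bin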

    Read the article

  • Sniff packets using tcpdump

    - by denisk
    I have a completely noob question. I want to see all packets that come to my computer from a particular site (google.com). So I start tcpdump:

        sudo tcpdump -i eth0 host google.com

    Then I enter google.com in a browser and hit Enter -- nothing gets captured, and I can't figure out why. What am I doing wrong? Edit: It turned out I was listening on the wrong interface. I changed eth0 to any and it worked; it was ppp1 that needed listening on. Thanks for your answers!
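
    For reference, a hedged version of the catch-all capture the edit describes:

        sudo tcpdump -i any host google.com    # 'any' listens on every interface, ppp1 included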

    Read the article
