Search Results

Search found 26263 results on 1051 pages for 'linux guest'.

  • How to set the laptop screen brightness programmatically?

    - by zls
    I'm currently migrating to Openbox without a GNOME session. In Unity I can use the vendor keys to set the screen brightness, but in Openbox I'm on my own. Writing to /sys/class/backlight/acpi_video0/brightness works fine; the problem is that I need sudo to set the brightness, and that won't work with keyboard mappings. xbacklight -get/-set doesn't do or output anything, and I don't really want to use xrandr --brightness. Are there any other options, or a way to fix the problems with xbacklight or acpi_video0?
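
    One common workaround is a udev rule that makes the sysfs file group-writable at boot, so the key binding needs no sudo. A minimal sketch, assuming your user is in the video group and the acpi_video0 path from the question (the rule file name is arbitrary):

        # /etc/udev/rules.d/90-backlight.rules
        # Let members of "video" write the ACPI backlight value at boot.
        SUBSYSTEM=="backlight", ACTION=="add", RUN+="/bin/chgrp video /sys/class/backlight/acpi_video0/brightness", RUN+="/bin/chmod g+w /sys/class/backlight/acpi_video0/brightness"

    An Openbox keybinding can then run something like echo 5 > /sys/class/backlight/acpi_video0/brightness directly.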

  • Will deleting partitions affect my hard drive in any way?

    - by Portali5t
    I installed a SUSE partition of around 200 gigabytes on my hard drive, which primarily runs Windows 7. I am sick of SUSE and just want to get rid of the OS and reclaim that partition for Windows' use. Is it as simple as deleting the partition and choosing which partition the freed space goes to, or is the freed space communal, accessible to all partitions? I know next to nothing about partitions, so any help would be great. Also, if someone knows how to delete partitions, that would be a great help too. Thanks!

  • Single application through OpenVPN tunnel (Debian Lenny)

    - by user14124
    I'm using Debian Lenny and I want to tunnel only rtorrent through an OpenVPN tunnel. I have a tunnel running; the config file looks like this:

        client
        dev tun
        proto udp
        remote openvpn.xxx.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca /etc/openvpn/xxx/keys/ca.crt
        cert /etc/openvpn/xxx/keys/client.crt
        key /etc/openvpn/xxx/keys/client.key
        tls-auth /etc/openvpn/xxx/keys/tls.key 1
        ns-cert-type server
        comp-lzo
        verb 3
        auth-user-pass
        script-security 3
        reneg-sec 0

    My idea is that I could run a sockd proxy internally that redirects traffic to the OpenVPN tunnel, and use the *nix "proxifier" application tsocks to let rtorrent connect through that proxy (rtorrent doesn't support proxies itself). I have trouble configuring sockd because my IP inside the VPN changes every time I connect. This is a config file someone said would help: http://ircpimps.org/sockd.conf. As my IP changes at each connect, I don't know what to put in that config file. I have no control over the host-side config file. Any help wanted; any other method is very welcome.
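
    If the proxy is Dante's sockd, the changing tunnel address may not matter: Dante accepts an interface name for its external side. A minimal sketch, assuming the VPN interface is tun0 and sockd listens on loopback for tsocks (directive names follow Dante 1.4; older versions use "method" instead of "socksmethod"):

        # /etc/sockd.conf
        internal: 127.0.0.1 port = 1080   # tsocks/rtorrent connect here
        external: tun0                    # outgoing traffic leaves via the VPN

        socksmethod: none                 # no auth; loopback-only listener

        client pass {
            from: 127.0.0.0/8 to: 0.0.0.0/0
        }
        socks pass {
            from: 127.0.0.0/8 to: 0.0.0.0/0
        }

    Point /etc/tsocks.conf at 127.0.0.1 port 1080 and launch the client as tsocks rtorrent.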

  • E: Internal Error, Could not perform immediate configuration (2) on libattr1 in Ubuntu

    I am working with the latest version of Ubuntu. While installing via apt-get install, I tried to abort by pressing Ctrl+Z, and it stopped. The next time I tried to use apt-get I got errors about the lock being "temporarily unavailable", and I unfortunately deleted the /var/lib/dpkg folder. After that I can't install anything with apt-get; I get the error: E: Internal Error, Could not perform immediate configuration (2) on libattr1. How can I solve this issue?
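
    Deleting /var/lib/dpkg removes dpkg's package database, so it has to be reconstructed before apt can work again. A recovery sketch, assuming the daily snapshots Ubuntu keeps under /var/backups are still present:

        sudo mkdir -p /var/lib/dpkg/{info,triggers,updates}
        sudo touch /var/lib/dpkg/available
        sudo cp /var/backups/dpkg.status.0 /var/lib/dpkg/status   # most recent status backup
        sudo apt-get update
        sudo apt-get install --reinstall libattr1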

  • Prevent rmdir -p from traversing above a certain directory

    - by thepurplepixel
    I hacked together this script to rsync some files over SSH. The --remove-source-files option of rsync removes the files it transfers, which is what I want, but I also want the directories those files were in to be gone. The -exec rmdir -p {} \; part of the find command tries to remove the parent directory as well (in this case /srv/torrents), and fails because it doesn't have the right permissions. What I'd like is to stop rmdir from traversing above the directory find is run in, or to find another way to get rid of all the empty folders. I've thought of using some kind of loop with find and running rmdir without the -p switch, but I thought it wouldn't work out. Essentially, is there an alternative way to remove all the empty directories under the parent directory? Thanks in advance!

        #!/bin/bash
        HOST='<hostname>'
        USER='<username>'
        DIR='<destination directory>'
        SOURCE='/srv/torrents/'

        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats -m --progress -i $SOURCE $HOST:$DIR
        find $SOURCE -mindepth 1 -type d -empty -prune -exec rmdir -p \{\} \;
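
    One alternative sketch: GNU find can remove the empty directories itself. -delete processes entries depth-first and never ascends above the starting point, so /srv/torrents itself is left alone:

        find "$SOURCE" -mindepth 1 -type d -empty -delete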

  • CPU/RAM usage log over a period of time to file on CentOS

    - by joel_gil
    I'm looking for an app or line of code that could let me observe a process, save the info in a number of variables, and then write the gathered info to a file. I've been trying variations of top, but no luck. I am running several CentOS virtual servers; each VM has 2 GB RAM and 2 processors. Maybe a script that runs for a specified amount of time, writing lines with the info to a text file, so at the end I have a sort of table with the data. The thing is, I'm going to stress-test the server and I would like the data for some statistics. Any comments and suggestions are most welcome.
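
    A minimal sampling-loop sketch, assuming the process of interest is named myapp (swap in the real name, or select by PID with ps -p):

        #!/bin/bash
        # Append a timestamped CPU%/MEM%/RSS sample every 5 seconds.
        while true; do
            ps -o %cpu=,%mem=,rss= -C myapp \
                | awk -v ts="$(date '+%F %T')" '{print ts, $0}' >> usage.log
            sleep 5
        done

    The resulting columns load straight into a spreadsheet or gnuplot for the stress-test statistics.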

  • Do best practices say to restrict the usage of /var to sudoers?

    - by NewAlexandria
    I wrote a package, and would like to use /var to persist some data. The data I'm storing could perhaps even be thought of as an addition to /var/db. The pattern I observe is that files in /var/db and the surrounding directories are owned by root. The primary (intended) use of the package is to filter cron jobs, meaning you would need permission to edit the crontab. Should I presume a sudo install of the package? Should I have the package gracefully degrade to a /usr subdirectory, and if so, which one? If I 'opinionate' that any non-sudo install requires a configrc (with paths), where should the package look for that config file (presuming a shared-host environment)? Incidentally, this package is a Ruby gem, and you can find it here.

  • mutt isn't sending large messages

    - by Guy
    I'm using mutt in the following way: echo <MESSAGE> | mutt -s <SUBJECT> -- <TO-ADDR>. This usually works when I try small messages (~10 lines in the body), but when I try a very large message (~200 lines) the email just isn't received. Any ideas?
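
    A sketch for narrowing down where the message dies, assuming mutt hands mail to a local MTA: send the large body from a file and then check whether the MTA accepted it (the log path varies by distribution and MTA):

        mutt -s "large test" -- user@example.com < big-body.txt
        sudo tail -n 50 /var/log/mail.log

    If the message shows up in the log as sent, the problem is on the receiving side (size or spam filtering) rather than in mutt.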

  • How to use ccache selectively?

    - by Anonymous
    I have to compile multiple versions of an app written in C++, and I'm thinking of using ccache to speed up the process. ccache howtos show examples that create symlinks named gcc, g++, etc., and make sure they appear in PATH before the original gcc binaries, so ccache is used instead. So far so good, but I'd like to use ccache only when compiling this particular app, not always. Of course, I could write a shell script that creates these symlinks every time I want to compile the app and deletes them when the build finishes, but that looks like filesystem abuse to me. Are there better ways to use ccache selectively, not always? For a single source file I could just call ccache instead of gcc manually and be done, but I have to deal with a complex app that uses an automated build system for multiple source files.
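
    A sketch of the usual symlink-free approach: override the compiler just for this app's build, either per invocation or per shell session:

        # Per-invocation (make-based builds):
        make CC="ccache gcc" CXX="ccache g++"

        # Or for one shell session / wrapper script:
        export CC="ccache gcc" CXX="ccache g++"
        ./configure && make

    On Debian-style systems there is also a ready-made symlink directory, so PATH=/usr/lib/ccache:$PATH make enables ccache for that one command only.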

  • Only allow root to change filesystem

    - by Uejji
    The VPS I manage uses a simple hard-link rsync daily backup system saved to a loop file. This is great, because each backup only takes up as much space as what has changed each day, and all user/group permissions are kept. I would like to give users direct access to their home directories in each backup, but I'm worried about intentional or accidental destruction of backup data: as it stands, users can actually change, destroy or add to backed-up data they originally owned. I've been looking for a way to mount this filesystem similar to an ro mount option, but one that would still allow rw access to root; I've had absolutely no luck. In other words, I want users to be able to view and copy their backed-up data without being able to change it, with the data keeping its original permissions. I have no real preference as far as filesystem goes, as long as it's a standard Unix filesystem that can preserve permissions, support hard links, and deny write access to users without actually stripping the w permission from everything.
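
    A sketch of one way to get both views at once (paths are illustrative): keep the loop file mounted read-write somewhere only root can reach, and expose a read-only bind mount of it to users; the remount step is what actually makes the bind read-only:

        mount -o loop /backups/backup.img /root/backup-rw      # rw, root-only path
        mkdir -p /srv/backup
        mount --bind /root/backup-rw /srv/backup
        mount -o remount,ro,bind /srv/backup                   # users browse here

    Permissions and hard links inside the image are untouched; writes through /srv/backup fail for everyone, while root can still modify data via /root/backup-rw.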

  • Backing up to smaller drive

    - by Dave
    In a few hours I'll have a new 500 GB Sony laptop, filled with the usual Sony bloat, which I'll promptly replace with Ubuntu or Crunchbang or something. First, though, I want to make a full clone of the drive (including recovery partitions), should I wish to return it to Sony or sell it on in its factory state. The problem is that all my backup drives are smaller than 500 GB; the biggest is around 250 GB. So I need to back up and compress on the fly. What's the best way to do this? Presumably dd piped into gzip would do the trick, or does anyone have other suggestions?
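
    The dd-into-gzip sketch, assuming the laptop disk is /dev/sda and the external drive is mounted at /media/backup; run it from a live USB so the disk isn't in use (a mostly empty factory image compresses well, though there is no guarantee it fits in 250 GB):

        dd if=/dev/sda bs=4M | gzip -c > /media/backup/sony-factory.img.gz

        # Restore later with:
        gunzip -c /media/backup/sony-factory.img.gz | dd of=/dev/sda bs=4M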

  • How to configure three IP addresses on a single server

    - by user1363308
    I have a Cisco device for call forwarding and three different systems. I want to configure the .15 and .16 server IPs on the same machine as 192.168.53.197, i.e.:

        eth0 --> 192.168.53.197
        eth1 --> 192.168.16.15
        eth2 --> 192.168.16.16

    The work I have done so far on .15 and .16 was on individual machines; I will do some work on .197 after configuring eth1 and eth2. In short, one system should have three IP addresses, with 192.168.53.197 as the base IP.
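
    A quick, non-persistent sketch with iproute2 (the interface names and /24 masks are assumptions; make the change permanent in your distribution's network config files):

        ip addr add 192.168.53.197/24 dev eth0
        ip addr add 192.168.16.15/24 dev eth1
        ip addr add 192.168.16.16/24 dev eth2
        ip link set eth0 up; ip link set eth1 up; ip link set eth2 up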

  • Running a script at startup as root?

    - by Usman Ajmal
    Hi, I developed a script which I set to run at startup, i.e. when the desktop appears. In the script I mounted a partition using: sudo mount /dev/sda1 /mnt &> result.txt. After the script ran, a file named result.txt was created, containing: sudo: no tty present and no askpass program specified. In other words, the mounting failed. If I run the script manually with sudo ./myProgram, I don't face this problem and the drive mounts successfully. Any suggestions, please?
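
    Two common fixes, sketched: allow this one command without a password in sudoers (edit via visudo; replace usman with the real username), or let ordinary users mount the partition via fstab so sudo isn't needed at all:

        # visudo:
        usman ALL=(root) NOPASSWD: /bin/mount /dev/sda1 /mnt

        # or in /etc/fstab:
        /dev/sda1  /mnt  auto  user,noauto  0  0

    With the fstab entry, the startup script can call plain mount /mnt as the logged-in user.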

  • How to find the process(es) which are hogging the machine

    - by Aaron Digulla
    Scenario: all of a sudden, my computer feels sluggish. The mouse moves, but windows take ages to open, etc. uptime says the load is 7.69 and rising. What is the fastest way to find out which process(es) are the cause of the load? Now, top and similar tools aren't the answer, because they show either CPU or memory usage, but not both at the same time. What I need is a single command that I might be able to type as it happens, something that will figure out any of: "the system is trying to swap 8 GB of RAM to disk because of process X", or "process X seeks all over the disk", or "process X uses 400% CPU". So what I'm looking for is iostat, htop/atop and similar tools rolled into one, with output like this:

        1235  cp         - disk thrashing
        87    chrome     - uses 2 GB of RAM
        137   nfs_bench  - uses 95% of the network bandwidth

    I don't want a tool that gives me numbers to analyze, but a tool that tells me exactly which process causes the current load. Assume the user in front of the keyboard barely knows how to write "process", and is quickly overwhelmed by "resident size", "virtual memory" or "process life cycle". My argument goes like this: a user notices a problem; there can be thousands of reasons... well, almost :-). The user wants to know the source of the problem. The current solutions give me lots of numbers, and I need to know what those numbers mean. What I'm looking for is a meta tool. 99% of the data is irrelevant to the problem, so the tool should look for processes that hog some resource and list only those, along with "this process needs a lot of CPU, this one produces many IRQs, this one allocates a lot of RAM (and it's still growing)". That would be a relatively short list, and much simpler for someone new to this to locate the culprit from than the output of, say, htop, which gives me about 5000 numbers and requires me to fold multi-threaded processes myself (I have 50 lines which say VIRT 2750M but only 16 GB of RAM; the machine ought to swap itself to death, but of course this is a misinterpretation of the data that can happen quickly).
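
    No single standard tool does all of this, but a rough one-shot snapshot of the usual suspects can be sketched as:

        ps aux --sort=-%cpu | head -n 5     # top CPU consumers
        ps aux --sort=-rss | head -n 5      # top RAM consumers
        sudo iotop -b -n 1 | head -n 12     # top disk I/O (needs the iotop package)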

  • scsi and ata entries for same hard drive under /dev/disk/by-id

    - by John Dibling
    I am trying to set up a ZFS pool using four bare drives which I have attached to my Ubuntu system via a SATA hot-swap backplane. They are Hitachi SATA drives. When I list the contents of /dev/disk/by-id, I see two entries for each drive:

        root@scorpius:/dev/disk/by-id# ls | grep Hitachi
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG0ZJ7C
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1064C
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG190AC
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1DGPC
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG0ZJ7C
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1064C
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG190AC
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1DGPC

    I know these are the same drives because I wrote down the serial numbers, and all the other drives in this system are either Seagate or WD. The serial number of the first one, for example, is YNG0ZJ7C. Why are there two entries here for each drive? More to the point, when I create my ZFS pool, which one should I use: the scsi- one or the ata- one?
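
    Both sets of names are symlinks to the same block devices, so a pool-creation sketch (the pool name and raidz layout are assumptions) would be:

        zpool create tank raidz \
            /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG0ZJ7C \
            /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1064C \
            /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG190AC \
            /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1DGPC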

  • How to deal with ssh's "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!"?

    - by Vi.
    I often need to log in to multiple remote stations that, from my side, all appear at the same static IPs. SSH complains about changed keys in this case:

        $ ssh vi@172.1.2.3
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!        @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        ...
        Offending RSA key in /home/vi/.ssh/known_hosts:70
        ...

    I usually just run vim /home/vi/.ssh/known_hosts +70, then dd and wq, and re-run the SSH command. How can I do this more simply? Requirements: the warning should still be displayed, not reduced to the benign "The authenticity of host '172.1.2.3 (172.1.2.3)' can't be established." prompt, and it should be easy to accept the key change. I expect something like this:

        $ ssh vi@172.1.2.3
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!        @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        ...
        The fingerprint for the RSA key sent by the remote host is
        82:cd:be:7a:ae:1b:91:2c:23:c1:74:4d:8a:38:10:32.
        Change the host key in /home/vi/.ssh/known_hosts (yes/no)? yes
        Warning: Changed host key for '172.1.2.3' (RSA) in the list of known hosts.
        vi@172.1.2.3's password:

    Simple, and different from the usual "The authenticity of host can't be established." message.
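
    For the record, the one-step removal; by itself it skips the warning display, so it doesn't meet the requirement alone, but re-running ssh afterwards shows the new fingerprint before accepting it:

        ssh-keygen -R 172.1.2.3
        ssh vi@172.1.2.3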

  • RSync over SSH hangs and fails with timeout

    - by tx2
    Client: Gentoo, GCC 4.3.4, rsync 3.0.9. Server: Ubuntu 10.04.4 LTS, rsync 3.0.7. Client and server are connected over the Internet at about 2 Mbps; ping is OK. rsync, called on any files in either direction, hangs on a random file and then, after a timeout, fails with:

        [sender] io timeout after 30 seconds -- exiting
        rsync error: timeout in data send/receive (code 30) at io.c(140) [sender=3.0.9]
        [sender] _exit_cleanup(code=30, file=io.c, line=140): about to call exit(30)

    About 1 try in 10 passes correctly. I've tried adding the SSH options TcpRcvBufPoll=yes and KeepAlive=yes, and disabling and enabling rsync compression; no change. How can I make rsync work properly?
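
    A workaround sketch while the underlying network problem is hunted down: keep partial transfers and retry until a run completes, so each attempt resumes where the last one died (src/ and user@host:dst/ are placeholders):

        until rsync -avz --partial --timeout=0 src/ user@host:dst/; do
            echo "rsync failed, retrying in 30s..." >&2
            sleep 30
        done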

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here:

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this; am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand and reaps them when the load decreases; it has spun the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to enforce a limit on the number of children as well (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
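
    One relevant fact: ulimit -u maps to RLIMIT_NPROC, which counts all processes belonging to the user, not just those started from the current shell. A quick sketch for checking the baseline a new shell competes against:

        ps -u "$USER" --no-headers | wc -l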

  • Kill program after it outputs a given line, from a shell script

    - by Paul
    Background: I am writing a test script for a piece of computational-biology software. The software can take days or even weeks to run, so it has recovery functionality built in for system crashes or power failures. I am trying to figure out how to test the recovery system; specifically, I can't figure out a way to "crash" the program in a controlled manner. I was thinking of somehow timing a SIGKILL to fire after some amount of time. This is probably not ideal, as the test case isn't guaranteed to run at the same speed every time (it runs in a shared environment), so comparing the logs to desired output would be difficult. The software does print a line for each section of analysis it completes. Question: is there a good/elegant way (in a shell script) to capture output from a program and then kill the program when a given line, or a given number of lines, has been output?
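
    A sketch: run the job in the background with its output going to a log, block until the Nth marker line appears, then kill it. The binary name, marker text, and count are placeholders:

        ./analysis > run.log 2>&1 &
        pid=$!
        tail -f run.log | grep -m 3 "section complete" > /dev/null   # wait for 3rd marker
        kill -KILL "$pid"

    grep -m exits after the given number of matches (tail follows until its next write fails), so the crash lands at a reproducible point in the analysis rather than at a wall-clock time.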

  • How to execute a command whenever a file changes?

    - by Denilson Sá
    I want a quick and simple way to execute a command whenever a file changes. I want something very simple, something I will leave running in a terminal and close when I'm finished working with that file. Currently, I'm using this:

        while read; do ./myfile.py; done

    and then I need to go to that terminal and press Enter whenever I save the file in my editor. What I want is something like this:

        while sleep_until_file_has_changed myfile.py; do ./myfile.py; done

    or any other solution as easy as that. BTW: I'm using Vim, and I know I can add an autocommand to run something on BufWrite, but this is not the kind of solution I want now. Update: I want something simple, discardable if possible. What's more, I want it to run in a terminal because I want to see the program's output (including error messages). About the answers: thanks for all your answers! All of them are very good, and each takes a very different approach from the others. Since I can accept only one, I'm accepting the one I actually used (it was simple, quick and easy to remember), even though I know it is not the most elegant.
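
    For reference, the pattern usually reached for here is inotifywait from the inotify-tools package, which blocks until the file is written (a sketch):

        while inotifywait -e close_write myfile.py; do
            ./myfile.py
        done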

  • How do you get autofs and updatedb to work together?

    - by Veek.M
    My autofs configuration:

        # /etc/my.misc
        sda1 -fstype=ntfs,user,exec :/dev/sda1
        sda3 -fstype=ntfs,user,exec :/dev/sda3
        sda4 -fstype=ntfs,user,exec :/dev/sda4

        # /etc/auto.master
        /my /etc/my.misc --ghost

    When I run locate .pdf, I get nothing, because although the mount points (sda1, sda3, ...) are created in /my, there's nothing in them until I access them. Unfortunately this is not good enough for updatedb, and it purges the /my/sdaX files from its cache. How do I prevent or solve this problem?
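
    One workaround sketch: force the automounts just before indexing, e.g. from a small wrapper run daily instead of the stock cron job (it is also worth checking that the mounted filesystem type isn't listed in PRUNEFS in /etc/updatedb.conf):

        #!/bin/sh
        # Touch each ghost mount point so autofs mounts it, then index.
        for d in /my/*; do
            ls "$d" > /dev/null 2>&1
        done
        updatedb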

  • FTP permissions problem

    - by John Isaacks
    I have vsftpd installed on Ubuntu. I created a new user and set the user's home path to /var/www so I can FTP with that user directly to that location, and that works. However, whenever I FTP in, I have no permission to change anything. How can I fix that? Thanks!
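
    The usual cause is that /var/www is owned by root (or www-data) rather than the FTP user. A sketch, assuming the new user is called ftpuser:

        # Simplest: hand the tree to the FTP user
        sudo chown -R ftpuser:ftpuser /var/www

        # Or keep group ownership for the web server and grant group write:
        sudo usermod -aG www-data ftpuser
        sudo chmod -R g+w /var/www

    If writes still fail, check that write_enable=YES is set in /etc/vsftpd.conf.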
