Search Results

Search found 26263 results on 1051 pages for 'linux guest'.

  • How to change resolution in Ubuntu 10.04, where xvinfo is showing no adapters present?

    - by YumYum
    I am trying to maximize my resolution; right now I have Resolution: 800x600 (4:3) and Refresh rate: 61Hz. I tried the following, but it did not work:

        $ xvinfo
        X-Video Extension version 2.2
        screen #0
         no adaptors present

        $ cvt 1920 1080
        # 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
        Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

        $ xrandr --newmode clever_name 173.00 1920 2048 2248 2576 1080 1083 1088 1120

        $ xrandr -q
        Screen 0: minimum 640 x 480, current 800 x 600, maximum 800 x 600
        default connected 800x600+0+0 0mm x 0mm
           800x600       61.0*
           640x480       60.0
          clever_name (0x11d)  173.0MHz
                h: width  1920 start 2048 end 2248 total 2576 skew    0 clock   67.2KHz
                v: height 1080 start 1083 end 1088 total 1120           clock   60.0Hz

        $ xrandr --addmode default clever_name

        $ xrandr --output default --mode clever_name
        xrandr: Configure crtc 0 failed

  • Netcat UDP File Transfer Between Two Servers Times Out?

    - by Mark Bowytz
    I'm testing file transfer speeds between two Red Hat servers that are connected to the same switch within the data center, and I decided to use netcat to eliminate protocol overhead as much as possible. Testing in TCP mode went well, and I was wondering how UDP might fare. On my receiving (client) end, I ran this:

        nc -u -l 11225 -v > myfile.out

    And then on the sending (server) end I ran the following:

        cat myfile.out | nc -u myserver.foo.zzz.com 11225 -v

    The file I'm testing with is 38 GB, but the transfer seems to stop at around 15 GB (one time at 14.9, another at 15.6). I've tested by adding a "-w 5000" just in case it's timing out, but no joy. Adding the -v doesn't show anything except acknowledging that the connection occurred. No errors. So - any suggestions as to why the transfer ceases?
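
    A hedged aside for anyone reproducing this: UDP has no flow control, so a fast sender can silently overrun the receiver's socket buffer, and the drops look like a stalled transfer. One way to test that theory is to throttle the sender, for example with pv (a sketch, assuming pv is installed; host and file names as in the question):

        # Receiver, as before
        nc -u -l 11225 > myfile.out

        # Sender, capped at 10 MB/s; if the capped transfer completes,
        # receive-side buffer overrun is the likely culprit
        pv -L 10m myfile.out | nc -u myserver.foo.zzz.com 11225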

  • btrfs won't run from cron

    - by Mikkel
    I'm trying to set up a cron job to create a btrfs subvolume snapshot of my root partition. The command works perfectly if I run it from the command line, but nothing happens at the scheduled cron time. I've tried piping to logger and redirecting stdout/stderr to a file, and not only is there no content, the file I'm logging to isn't even created. The cron command I have is as follows:

        0 0 * * * /sbin/btrfs subvolume snapshot / "/snapshots/$(date +%Y-%m-%d)"

    I've tried prefixing it with /bin/bash, but that makes no difference. What am I missing?
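
    A hedged note on a cron quirk that fits these symptoms exactly (this is from crontab(5), not from the question): in a crontab entry, an unescaped % is converted to a newline and everything after the first % is fed to the command on stdin, so the snapshot path above gets truncated before btrfs ever sees it. Escaping the percent signs is the usual fix:

        # %-signs must be backslash-escaped inside crontab entries
        0 0 * * * /sbin/btrfs subvolume snapshot / "/snapshots/$(date +\%Y-\%m-\%d)"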

  • Mount Docker container contents in host file system

    - by dflemstr
    I want to be able to inspect the contents of a Docker container (read-only). An elegant way of doing this would be to mount the container's contents in a directory. I'm talking about mounting the contents of a container on the host, not about mounting a folder on the host inside a container.

    I can see that there are two storage drivers in Docker right now: aufs and btrfs. My own Docker install uses btrfs, and browsing to /var/lib/docker/btrfs/subvolumes shows me one directory per Docker container on the system. This is, however, an implementation detail of Docker, and it feels wrong to mount --bind these directories somewhere else.

    Is there a proper way of doing this, or do I need to patch Docker to support these kinds of mounts?
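
    A hedged workaround that avoids relying on storage-driver internals: docker export streams a container's entire filesystem as a tar archive, which can be listed or unpacked without touching /var/lib/docker (container name hypothetical):

        # List the container's filesystem without mounting anything
        docker export my_container | tar -tvf - | less

        # Or unpack a throwaway copy for read-only browsing
        mkdir /tmp/container-root
        docker export my_container | tar -xf - -C /tmp/container-root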

  • Size of modules within initrd

    - by LiKao
    I am currently trying to manually replace the kernel within ubuntu-core on an embedded device with a custom kernel. However, when I update the initrd, it becomes much bigger. Here is what I did:

        1. Extract the initrd that was shipped with Ubuntu
        2. Make a list of all modules within the old initrd
        3. Get the same modules from the new module directory at /lib/modules/new_kernel_version
        4. Add these modules to the initrd and remove the old ones

    The resulting initrd is quite a bit bigger than the original one, so I checked the individual modules, picking the btrfs.ko filesystem driver as an example. Comparing the two, the one I just put into the initrd is much bigger, which accounts for the difference in size. For the btrfs.ko within the shipped initrd:

        -rw-r--r-- 1 root root 999K Nov 14 15:06 btrfs.ko

    For the new btrfs.ko:

        -rw-r--r-- 1 root root 7.2M Nov 14 15:08 btrfs.ko

    What is different between these two modules? Could this be caused by some faulty setting for the new kernel? When producing the kernel I copied /proc/config.gz and used make oldconfig to update it, so all optimisations should be the same for both kernels. Or is there something else that is done to the modules before they are put into the initrd? Maybe there is even a better way to build a new initrd for the new kernel in Ubuntu altogether.

    Update: I also tested with an initrd I created from scratch using Ubuntu's mkinitramfs command, and it shows the same size difference as the initrd I manually updated.
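
    A hedged guess worth checking (not stated in the question): a roughly 7x size jump in a .ko usually means the new modules still carry debug symbols. Two ways to check and fix, assuming standard binutils and a kernel source tree:

        # "not stripped" in the output means debug info is present
        file /lib/modules/new_kernel_version/kernel/fs/btrfs/btrfs.ko

        # Strip debug sections only; a full strip would break the module
        strip --strip-debug btrfs.ko

        # Or have the kernel build strip modules at install time
        make INSTALL_MOD_STRIP=1 modules_install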

  • Decrease in disk performance after partitioning and encryption, is this much of a drop normal?

    - by Biohazard
    I have a server that I only have remote access to. Earlier in the week I repartitioned the 2-disk RAID as follows:

        Filesystem              Size  Used Avail Use% Mounted on
        /dev/mapper/sda1_crypt  363G  1.8G  343G   1% /
        tmpfs                   2.0G     0  2.0G   0% /lib/init/rw
        udev                    2.0G  140K  2.0G   1% /dev
        tmpfs                   2.0G     0  2.0G   0% /dev/shm
        /dev/sda5               461M   26M  412M   6% /boot
        /dev/sda7               179G  8.6G  162G   6% /data

    The RAID consists of 2 x 300GB SAS 15k disks. Prior to the changes I made, it was being used as a single unencrypted root partition, and hdparm -t /dev/sda was giving readings around 240MB/s, which I still get if I run it now:

        /dev/sda:
         Timing buffered disk reads: 730 MB in 3.00 seconds = 243.06 MB/sec

    Since the repartition and encryption, I get the following on the separate partitions. Unencrypted /dev/sda7:

        /dev/sda7:
         Timing buffered disk reads: 540 MB in 3.00 seconds = 179.78 MB/sec

    Unencrypted /dev/sda5:

        /dev/sda5:
         Timing buffered disk reads: 476 MB in 2.55 seconds = 186.86 MB/sec

    Encrypted /dev/mapper/sda1_crypt:

        /dev/mapper/sda1_crypt:
         Timing buffered disk reads: 150 MB in 3.03 seconds = 49.54 MB/sec

    I expected a drop in performance on the encrypted partition, but not that much, and I didn't expect a drop in performance on the other partitions at all. The other hardware in the server is 2 x Quad Core Intel(R) Xeon(R) CPU E5405 @ 2.00GHz and 4GB RAM.

        $ cat /proc/scsi/scsi
        Attached devices:
        Host: scsi0 Channel: 00 Id: 32 Lun: 00
          Vendor: DP       Model: BACKPLANE        Rev: 1.05
          Type:   Enclosure                        ANSI  SCSI revision: 05
        Host: scsi0 Channel: 02 Id: 00 Lun: 00
          Vendor: DELL     Model: PERC 6/i         Rev: 1.11
          Type:   Direct-Access                    ANSI  SCSI revision: 05
        Host: scsi1 Channel: 00 Id: 00 Lun: 00
          Vendor: HL-DT-ST Model: CD-ROM GCR-8240N Rev: 1.10
          Type:   CD-ROM                           ANSI  SCSI revision: 05

    I'm guessing this means the server has a PERC 6/i RAID controller? The encryption was done with default settings during the Debian 6 installation. I can't recall the exact specifics and am not sure how to go about finding them. Thanks
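
    A hedged pointer on the last point, which has a direct answer: the LUKS header records the cipher, key size, and hash that were chosen at install time, and cryptsetup can print them (device name as in the question):

        # Show the cipher, key size, and hash recorded in the LUKS header
        cryptsetup luksDump /dev/sda1

        # On newer cryptsetup (>= 1.6), compare in-kernel cipher throughput
        cryptsetup benchmark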

  • BIND having trouble resolving service.graphicly.com

    - by Keith Burgoyne
    Since about two weeks ago, we haven't been able to resolve service.graphicly.com:

        dig @192.168.0.12 service.graphicly.com

        ; <<>> DiG 9.3.4-P1 <<>> @192.168.0.12 service.graphicly.com
        ; (1 server found)
        ;; global options:  printcmd
        ;; connection timed out; no servers could be reached

    Digging on the name servers listed for graphicly.com shows that service.graphicly.com is a CNAME to takecomicsadmin.cloudapp.net. Digging on cloudapp.net's name servers seems to fail:

        dig @NS1.LIVEDNS.MSFT.NET takecomicsadmin.cloudapp.net

        ; <<>> DiG 9.3.4-P1 <<>> @NS1.LIVEDNS.MSFT.NET takecomicsadmin.cloudapp.net
        ; (1 server found)
        ;; global options:  printcmd
        ;; connection timed out; no servers could be reached

    Somehow, my home ISP's name servers can resolve service.graphicly.com without issue. Has anyone else noticed this problem? Does anyone know what the cause could be? Thanks!
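
    A hedged debugging sketch for this symptom pattern (authoritative servers time out from one network but resolve from another), which often points at dropped or oversized UDP responses along one path:

        # Walk the delegation chain from the roots to see where it breaks
        dig +trace service.graphicly.com

        # Retry the failing query over TCP; success here while UDP times out
        # suggests a firewall or EDNS/fragmentation problem on the UDP path
        dig +tcp @NS1.LIVEDNS.MSFT.NET takecomicsadmin.cloudapp.net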

  • Intermittent apt-get 'no installation candidate' error on fabric deploy

    - by jberryman
    I'm experiencing a strange issue with a fabric script I'm using to bootstrap a server on EC2. I launch a stock Ubuntu 12.04 AMI, wait for it to start, then proceed with:

        with settings(host_string="ubuntu@%s" % i.dns_name, connection_attempts=30):
            sudo('apt-get -qy update')
            sudo('apt-get -qy install --no-install-recommends mdadm')  # don't install postfix
            # etc...

    The apt-get update appears to run fine and gives no errors; however (2/3 of the time or so), installing mdadm throws a "no installation candidate" error. When I ssh into the server and run apt-get install mdadm, I get the same error. After running apt-get update by hand, the package installs fine. Any ideas on what might be happening, or ideas for debugging?
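
    A hedged line of investigation: on freshly booted EC2 Ubuntu instances, cloud-init runs its own apt housekeeping at first boot, and a provisioning script's apt-get update can race with it or run before the mirrors are fully configured. Two checks that distinguish a stale index from a broken one (package name from the question):

        # Does the index know about the package at all, and which candidate?
        apt-cache policy mdadm

        # Were the package list files actually fetched, and when?
        ls -l /var/lib/apt/lists/

    If the index turns out to be stale, re-running apt-get update immediately before the install (or retrying the install once on failure) is the usual workaround.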

  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I set up a Rails development environment. I am using Parallels Tools to share out my OS X home directory to the VM - the goal here is to run the Rails server on the VM, but host the files on OS X (so they are automatically backed up, and so I can use tools like Textmate to develop with). Everything seems to work with the shared directory - my Debian user can read, write, and execute files. However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in this question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue

    I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc...) are being created without write permissions! If I refresh the Rails page a few times (about 4 or 5, with Rails creating new three-character directories every time), eventually it will create one of the directories with the proper 777 permissions and everything will work! This persists until I make a change to the CSS files and it has to recompile.

    Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why after a few refreshes the good permissions kick in and it works... It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it works fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice... any ideas?

    Update: I've found that the number of times I need to refresh before it works is not random - it has to do with how many assets are being compiled. For example, if I edit one of my CSS files and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app finally works without the "operation not permitted" error...
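
    A hedged workaround consistent with the asker's own last observation (a local directory behaves, the share does not): keep the source on the share but relocate the Rails tmp/ directory onto the VM's native disk via a symlink (paths hypothetical):

        # Move the asset cache off the Parallels share onto VM-local storage
        mkdir -p /var/tmp/myapp-tmp
        rm -rf /path/to/project/tmp
        ln -s /var/tmp/myapp-tmp /path/to/project/tmp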

  • Monitor or log directory permission changes?

    - by Myles
    I'm having an issue with a cPanel shared server running CentOS 5, where a few directories under the public_html folder keep getting changed to 777 from 755. The customer says they are not changing it, and I'm wondering if there is a way to monitor these specific directories to find out who or what is changing the permissions. I have looked into using auditctl, and after testing it and changing the permissions myself, I don't see anything in the logs - so I'm not sure if I'm doing it right or if it's even possible. Does anybody have any suggestions or ideas on how I could figure out what is changing the permissions? Thanks!!
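
    A hedged sketch of an auditctl setup targeting exactly this case (assuming auditd is installed and running; note that -w watches are per-directory, not recursive, so each directory of interest needs its own rule; the path is hypothetical):

        # Record attribute changes (chmod/chown/setxattr) on the directory
        auditctl -w /home/user/public_html -p a -k permwatch

        # Later, list matching events with the responsible uid/pid/executable
        ausearch -k permwatch -i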

  • I have a perl script that is supposed to run indefinitely. It's being killed... how do I determine who or what kills it?

    - by John O
    I run the perl script in screen (I can log in and check debug output). Nothing in the logic of the script should be capable of killing it quite this dead. I'm one of only two people with access to the server, and the other guy swears that it isn't him (and we both have quite a bit of money riding on it continuing to run without a hitch). I have no reason to believe that some hacker has managed to get a shell or anything like that. I have very little reason to suspect the admins of the host operation (bandwidth/cpu-wise, this script is pretty lightweight).

    Screen continues to run, but at the end of the output of the perl script I see "Killed" and it has dropped back to a prompt. How do I go about finding out what is whacking the damn thing? I've checked crontab; nothing in there would kill random/non-random processes. Nothing in any of the log files gives any hint. It will run from 2 to 8 hours, it would seem (and on my Mac at home, it will run well over 24 hours without a problem). The server is running Ubuntu, version something or other; I can look that up if it matters.
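
    A hedged first suspect (a kernel fact rather than anything in the question): a bare "Killed" from the shell means the process received SIGKILL, and on a memory-constrained host the most common sender is the kernel's OOM killer, which logs its victims. Worth checking, along with watching the script's memory over time (script name hypothetical):

        # OOM kills are logged to the kernel ring buffer and syslog
        dmesg | grep -i -E 'out of memory|killed process'
        grep -i oom /var/log/syslog

        # Sample the script's memory use every minute to spot a slow leak
        PID=$(pgrep -f myscript.pl)
        while sleep 60; do ps -o rss=,vsz= -p "$PID" >> /tmp/mem-trace.log; done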

  • After installing Ubuntu, how do I get rid of Unity and go back to GNOME?

    - by aseq
    After installing the newest Ubuntu LTS release (12.04, still in beta though), I am greeted with an unfamiliar and difficult-to-use desktop environment; I believe it is called Unity. However, I have used GNOME for a decade and a half, and I would not like to move to this new and (for me) unusable desktop environment. What is a quick and easy way to remove (most of) Unity and bring back GNOME, as well as configure my display manager to load GNOME by default, with the environment as close as possible to the way it was before?
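
    A hedged sketch of the route most often cited for 12.04 (package name as of that release; this installs the GNOME Classic session alongside Unity rather than removing Unity outright):

        # GNOME 3 "fallback" session, closest to the old GNOME 2 layout
        sudo apt-get install gnome-session-fallback

        # Then pick "GNOME Classic" from the session menu on the LightDM
        # login screen; LightDM remembers the choice per user
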

  • What is the cleanest way to upgrade Fedora and also my individual installs while keeping /home?

    - by Don
    I am a professional programmer using Fedora 10 (and a host of other packages individually installed), and I use my system to telecommute. Every year or so, I go through the ritual dance, usually with a second computer and a KVM switch (as I don't have office space for two monitors), to build the next version of Fedora and install all my favorite apps. Is there a better way? At least a nice way to keep track of what I need to 'add on', so that I don't have to manually install my app collection? Also, I keep /home on a separate RAID drive set, so I also fall prey to 'old-config-file-itis'.
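
    A hedged sketch of one common ritual-shortener on RPM-based systems: snapshot the installed package set before the rebuild and replay it afterwards (file name hypothetical; packages renamed between releases will still need manual attention):

        # Before the upgrade: record every installed package name
        rpm -qa --qf '%{NAME}\n' | sort -u > packages.txt

        # After the fresh install: reinstall the lot in one transaction
        yum -y install $(cat packages.txt)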

  • How to mount LUKS partition securely on server

    - by Ency
    I'm curious whether it is possible to mount a partition encrypted by cryptsetup with LUKS securely and automatically on Ubuntu 10.04 LTS. For example, if I use a key file for the encrypted partition, that key has to be present on a device that is not encrypted, so if someone steals my disk they'll be able to find the key and decrypt the partition. Is there any safe way to mount an encrypted partition automatically? If not, does anything exist to do what I want?
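
    A hedged illustration of the trade-off the asker has already identified: crypttab can unlock a volume at boot from a key file, but the key file is only as safe as the filesystem holding it, so no scheme is both fully unattended and safe against theft of the whole machine. The usual compromises are a key on removable media or a passphrase entered remotely. Device and path names below are hypothetical:

        # /etc/crypttab line - unlock "data" at boot using a root-only key file:
        #   data  /dev/sdb1  /root/keys/data.key  luks

        # Generate and enroll the key file
        dd if=/dev/urandom of=/root/keys/data.key bs=512 count=4
        chmod 0400 /root/keys/data.key
        cryptsetup luksAddKey /dev/sdb1 /root/keys/data.key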

  • Moving a lot of small files between servers using rsync

    - by Adirael
    Hello guys, I'm moving a lot of files (about 2 million) between two servers in different locations using rsync over ssh. It seems to work fine, but I just realised I'm losing some files in the process. I have server 1, with the original data, and server 2, with the copy. Server 1 runs CentOS 5 and server 2 runs Ubuntu 10. I'm doing the transfer on server 2's command line like this:

        rsync -e ssh -avzn usr@server1:/remote/path /local/path

    The first file movement I did using tar, but I didn't think of piping it through ssh, and it failed because the disk on server 1 was almost full, so I transferred it anyway (it was about 200GB) and got about 80% of the files. Then I piped another tar with the rest of the files (they're in folders; I have 100 folders with about 30 subfolders each, with files inside), and now I have everything on server 2. I wanted to be sure, so my two options were getting the md5sum of all the files and checking them, or running an rsync on server 2 against server 1, which is what I did. It picked up some missing stuff, and now it says there's nothing more to do (DRY RUN). But I have at least two files that are still missing inside a subfolder. I ran that same rsync on that folder, but it's still a dry run. Am I doing something wrong? Thanks, and sorry for the wall of text.
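
    A hedged observation about the command as quoted: the trailing n in -avzn is rsync's --dry-run flag, so that invocation only reports what it would transfer and never copies anything - which would explain both the still-missing files and the DRY RUN marker in the output. A sketch without it:

        # -n/--dry-run removed; -i (--itemize-changes) shows what still differs
        rsync -e ssh -avzi usr@server1:/remote/path /local/path

        # Optional end-to-end verification pass by checksum (slow but thorough)
        rsync -e ssh -avzc usr@server1:/remote/path /local/path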

  • CoovaAP router does not get DHCP settings from modem

    - by Dan Sosedoff
    I'm having some serious trouble making a CoovaAP-powered router get network settings from the modem. The wireless router works in captive portal mode (so it handles user login on the same device), but the DNS settings and connection IP (for the router) are not set as they are supposed to be via DHCP on the modem. This makes installation in other locations really complicated, because you have to go and set everything manually. For now I use Google's DNS servers. What's the way to make the router self-configurable via the modem's DHCP?

  • Nginx Ubuntu Postfix Config - Can't connect to incoming IMAP server 'server not responding' but can send mail via outgoing using same details?

    - by daveaspinall
    I'm pretty new to server admin and especially nginx, but I seem to be getting on fine apart from accessing my mail via my iPhone. (I've changed my domain to 'domain.com' below.) The thing is, I can send mail via my outgoing server but can't connect to the incoming one - I just get the message "the mail server at mail.domain.com is not responding".

    /etc/postfix/main.cf:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        home_mailbox = Maildir/
        inet_interfaces = all
        inet_protocols = all
        mailbox_command =
        mailbox_size_limit = 0
        mydestination = domain.com, mail.domain.com, localhost.com, , localhost, localhost.localdomain
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        recipient_delimiter = +
        relayhost =
        smtp_tls_note_starttls_offer = yes
        smtp_tls_security_level = may
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_security_options = noanonymous
        smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
        smtpd_tls_key_file = /etc/ssl/private/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_security_level = may
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom

    Testing SMTP locally:

        telnet localhost 25
        ehlo locahost
        250-mail.domain.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-AUTH=LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN

    I'm using the following details to connect: username, password, hostname mail.domain.com, port 25.

        iptables --list
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    I also sent mail to the server as a test and got this message, if it helps:

        Technical details of temporary failure:
        [mail.domain.com. (10): Connection refused]

    I also looked in /var/log/mail.log, and it has multiple entries like:

        postfix/smtpd[12239]: connect from 5acefc9a.bb.sky.com[90.206.252.xxx]
        Mar 23 06:47:09 new-domain postfix/smtpd[12239]: lost connection after CONNECT from 5acefc9a.bb.sky.com[90.206.252.154]

    Notice "new-domain", which is incorrect, even though the server hostname and the hostname in the configs are correct. I recently moved servers, and the host has set the primary domain on the service as new-domain.com, so this may be the issue? Like I said, I can connect to the outgoing server, but incoming gets the "not responding" error. Any ideas would be much appreciated!
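
    A hedged observation on the setup as shown (a fact about Postfix rather than this particular server): Postfix is an MTA and only speaks SMTP, so pointing a mail client's incoming (IMAP) settings at port 25 cannot work; an IMAP server such as Dovecot must be installed and listening separately. A quick check:

        # Is anything listening on the IMAP ports at all?
        netstat -tlnp | grep -E ':(143|993)\b'

        # If not, install and start an IMAP server (Dovecot as an example)
        sudo apt-get install dovecot-imapd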

  • Audio organizing via CLI

    - by Radek Šimko
    I'm looking for software for my openSUSE system with which I can organize my audio files, and which can run without an X server (in the CLI). I've found one that may be good, but it is unable to run without X: http://musicbrainz.org/doc/MusicBrainz_Picard

    I'm not looking for ID3 renamers - there are maybe hundreds of them. I'm looking for software which has its own database, or which is able to communicate with some database like CDDB, Gracenote, last.fm etc.
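
    A hedged pointer for readers with the same requirement: beets is a command-line organizer that tags and renames against the MusicBrainz database, which matches the criteria above (install route is one of several; the music path is hypothetical):

        # beets matches albums against MusicBrainz and renames per its config
        pip install beets
        beet import ~/music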

  • How can one keep secure regular backups of one's desktop on a remote server through ADSL?

    - by Antonis Christofides
    I'm a system administrator, and I use rsnapshot to back up some servers and duplicity for others. Both work fine, each one with advantages and disadvantages. Despite that, I am at a loss on how to back up my own private files.

    I'd use duplicity to automatically back up my files to a remote server, but the problem is that once in a while I must do a full backup. My emails and important files are 9GB, and I expect this to increase. Uploading through ADSL at 1 Mbit/s would take 20 hours. Too much. rsnapshot doesn't require periodic full backups (only the first time), but it must be running on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good.

    The only solution I've come up with is to use encfs, use unison to synchronize the files to a remote server, and use duplicity or rsnapshot on the remote server to back up these files. In that case, the question is whether I can sync the files on many computers; is it possible for encfs to be used with the same key on many computers? I also think that if I append one character to the unencrypted file, its encrypted encfs counterpart might change a lot, so that incrementals with duplicity would be less efficient - but that's not a big deal. Also, when I need to restore a file, finding the correct file to restore could be a pain, because of filename encryption.

    I wonder whether there is any other possibility that I've overlooked. Maybe I'm asking too much for my personal use, and I should settle for an external disk?
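
    A hedged variant of the encfs idea that keeps everything driven from the client side: encfs's reverse mode presents an encrypted view of existing plaintext, which can then be pushed to the untrusted server with plain rsync, requiring nothing on the server beyond sshd (paths and host hypothetical). On the multi-machine question: an encfs tree is portable as long as the .encfs6.xml config file and the password travel with it.

        # Present an encrypted, read-only view of ~/data
        mkdir ~/data-crypt
        encfs --reverse ~/data ~/data-crypt

        # Push only ciphertext to the remote server; incrementals stay incremental
        rsync -a ~/data-crypt/ backup@remote.example.com:/backups/data/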

  • Is it inefficient to have symbolic links to symbolic links?

    - by Ogre Psalm33
    We're setting up a series of Makefiles where we want to have a project-level include directory that will have symbolic links to sub-project-level include files. Many sub-project developers have chosen to have their include files also be symbolic links to yet another directory where the actual software is located. So my question is: is it inefficient to have a symbolic link to a symbolic link to another file (for, say, a C++ header that may be included dozens or more times during a compile)? Example directory tree:

        /project/include/
            x_header1.h -> /project/src/csci_x/include/header1.h
            x_header2.h -> /project/src/csci_x/include/header2.h
        /project/src/csci_x/
            include/
                header1.h -> /project/src/csci_x/local_1/cxx/header1.h
                header2.h -> /project/src/csci_x/local_2/cxx/header2.h
            local_1/cxx/
                module1.cpp
                header1.h
            local_2/cxx/
                module2.cpp
                header2.h
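
    A hedged way to answer this empirically rather than by intuition: each symlink level adds one extra step of path resolution, which after first use is normally served from the kernel's dentry cache. Timing repeated opens through the chain versus the real path shows whether the cost is visible at all (paths from the tree above):

        # Open the same header 10,000 times via the two-level symlink chain...
        time for i in $(seq 10000); do cat /project/include/x_header1.h >/dev/null; done

        # ...and via its real path; the difference is the symlink overhead
        time for i in $(seq 10000); do cat /project/src/csci_x/local_1/cxx/header1.h >/dev/null; done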

  • lucid lynx runaway process

    - by Jerome
    I am running Lucid Lynx with a GNOME desktop. After uninstalling the klipper program, there is a new and unwelcome process that begins at startup and takes around 66% of my CPU time. According to "top", the process number is 10 and the name of the process is "events/1". I am unable to kill the process using "top" or "kill". I have no extra startup applications except for gnome-do and tomboy; eliminating these has no effect. Can anyone help?
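
    A hedged note on why kill has no effect (a kernel fact, not from the question): processes named events/N are per-CPU kernel workqueue threads, not userspace programs, so they cannot be killed; one spinning at 66% usually means a driver or kernel subsystem is queueing work in a loop. Some places to look:

        # Kernel threads show up bracketed, with no controlling TTY
        ps -ef | grep '\[events'

        # Driver trouble often leaves traces here
        dmesg | tail -50
        cat /proc/interrupts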

  • Why doesn't SSHFS let me look into a mounted directory?

    - by Jan
    I use SSHFS to mount a directory on a remote server. There is a user xxx on client and server; UID and GID are identical on both boxes. I use

        sshfs -o kernel_cache -o auto_cache -o reconnect -o compression=no \
              -o cache_timeout=600 -o ServerAliveInterval=15 \
              [email protected]:/mnt/content /home/xxx/path_to/content

    to mount the directory on the remote server. When I log in as xxx on the client I have no problems: I can cd into /home/xxx/path_to/content. But when I log in on the client as another user zzz and then run

        $ ls -l /home/xxx/path_to

    I get this:

        d????????? ? ? ? ? ? content

    and on

        $ ls -l /home/xxx/path_to/content

    I get:

        ls: cannot access content: Permission denied

    When I do

        $ ls -l /mnt

    on the remote server, I get:

        drwxr-xr-x 6 xxx xxx 4096 2011-07-25 12:51 content

    What am I doing wrong? The permissions seem correct to me. Am I wrong?
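
    A hedged explanation that fits these exact symptoms (FUSE behavior, not something stated in the question): FUSE mounts are by default accessible only to the user who mounted them; every other user, root included, gets the d????????? listing shown above regardless of the underlying Unix permissions. Letting user zzz through requires the allow_other mount option, which in turn must be enabled in /etc/fuse.conf (remote spec kept as redacted in the question):

        # /etc/fuse.conf must contain the line: user_allow_other
        sshfs -o allow_other -o kernel_cache -o auto_cache -o reconnect \
              [email protected]:/mnt/content /home/xxx/path_to/content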
