Search Results

Search found 2503 results on 101 pages for 'danger cat'.


  • How does SELinux affect the /home directory?

    - by Matt Solnit
    Hi everyone. I'm migrating a CentOS 5.3 system from MySQL to PostgreSQL. The way our machine is set up is that the biggest disk partition is mounted to /home. This is out of my control and is managed by the hosting provider. Anyway, we obviously want the database files to be on /home for this reason. With MySQL, we did the following:

      1. Edited my.cnf and changed the datadir setting to /home/mysql
      2. Added a new "file type" policy record (I hope I'm using the right terminology) to set /home/mysql(/.*)? to mysqld_db_t
      3. Ran restorecon -R /home/mysql to assign the labels

    and everything was good. With PostgreSQL, however, I did the following:

      1. Edited /etc/init.d/postgresql and changed the PGDATA and PGLOG variables to /home/pgsql/data and /home/pgsql/pgstartup.log, respectively
      2. Added a new policy record to set /home/pgsql/pgstartup.log to postgresql_log_t
      3. Added a new policy record to set /home/pgsql/data(/.*)? to postgresql_db_t
      4. Ran restorecon -R /home/pgsql to assign the labels

    At this point, I still cannot start PostgreSQL. pgstartup.log says:

      # cat pgstartup.log
      postmaster cannot access the server configuration file "/home/pgsql/data/postgresql.conf": Permission denied

    The weird thing is that I don't see any messages related to this in /var/log/messages or /var/log/secure, but if I turn off SELinux, then everything works. I made sure all the permissions are correct (600 for files and 700 for directories), as well as the ownership (postgres:postgres). Can anyone tell me what I am doing wrong? I'm using the Yum repository from commandprompt.com, version 8.3.7.

    EDIT: The reason my question specifically mentions the /home directory is that if I go through all these steps for any other directory, e.g. /var/lib/pgsql2 or /usr/local/pgsql, then it works as expected.
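
    One way to express the "file type policy record" steps described above on the command line is with semanage; a minimal sketch, assuming the semanage tool from policycoreutils is installed and using the paths from the question:

      # label the new PostgreSQL locations persistently, then apply the labels
      semanage fcontext -a -t postgresql_db_t '/home/pgsql/data(/.*)?'
      semanage fcontext -a -t postgresql_log_t '/home/pgsql/pgstartup.log'
      restorecon -Rv /home/pgsql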

    Read the article

  • ssh_exchange_identification: Connection closed by remote host

    - by Charlie Epps
    First:

      $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
      $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

    Connecting to SSH servers gives this message:

      $ ssh -vvv localhost
      OpenSSH_5.3p1, OpenSSL 0.9.8m 25 Feb 2010
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to localhost [127.0.0.1] port 22.
      debug1: Connection established.
      debug1: identity file /home/charlie/.ssh/identity type -1
      debug1: identity file /home/charlie/.ssh/id_rsa type -1
      debug3: Not a RSA1 key file /home/charlie/.ssh/id_dsa.
      debug2: key_type_from_name: unknown key type '-----BEGIN'
      debug3: key_read: missing keytype
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      debug2: key_type_from_name: unknown key type '-----END'
      debug3: key_read: missing keytype
      debug1: identity file /home/charlie/.ssh/id_dsa type 2
      ssh_exchange_identification: Connection closed by remote host

    My /etc/hosts.allow is as follows:

      sshd: ALLOW

    /etc/hosts.deny is as follows:

      ALL: ALL: DENY

    I have changed my /etc/ssh/sshd_config as follows:

      ListenAddress 0.0.0.0
      Protocol 2
      # HostKey for protocol version 1
      #HostKey /etc/ssh/ssh_host_key
      # HostKeys for protocol version 2
      HostKey /etc/ssh/ssh_host_rsa_key
      HostKey /etc/ssh/ssh_host_dsa_key
      RSAAuthentication yes
      PubkeyAuthentication yes
      #AuthorizedKeysFile .ssh/authorized_keys
      PasswordAuthentication no
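
    For reference, hosts.allow and hosts.deny entries use the form "daemon list : client list [: option]"; a minimal sketch of the classic TCP wrappers way to admit sshd while denying everything else (note that ALLOW/DENY keywords are only valid as the optional third field, not as a client list):

      # /etc/hosts.allow
      sshd : ALL
      # /etc/hosts.deny
      ALL : ALL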

    Read the article

  • Upgrading Ubuntu 9.04 to 9.10 when Update Manager doesn't let you

    - by nickf
    I've been trying to upgrade my installation of Ubuntu 9.04 to 9.10, but all of the instructions I've found haven't been helping. They mostly say to run the update manager and it'll tell you that there's a new distribution ready. Well, mine doesn't say that. Things I've run or checked:

    update-manager -d says:

      Your system is up-to-date
      The package information was last updated less than one hour ago.

    I've set it to get all new distributions, not just LTS:

      $ cat /etc/update-manager/release-upgrades
      [DEFAULT]
      # default prompting behavior, valid options:
      #  never  - never prompt for a new distribution version
      #  normal - prompt if a new version of the distribution is available
      #  lts    - prompt only if a LTS version of the distribution is available
      Prompt=normal

    I'm definitely running 9.04:

      $ lsb_release -r
      Distributor ID: Ubuntu
      Description:    Ubuntu 9.04
      Release:        9.04
      Codename:       jaunty

    Even running the release upgrade from the console doesn't help:

      $ sudo do-release-upgrade
      Checking for a new ubuntu release
      No new release found

    This is running from behind a proxy, but I've set it up such that the regular upgrades, apt-get, etc. don't complain. (export http_proxy=http://myuser:mypass@myserver:8080/) Can you think of anything else which might be stopping me from upgrading?
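
    One thing worth ruling out is whether the proxy variable actually survives sudo, since do-release-upgrade here runs as root and sudo's default env_reset drops http_proxy; a small sketch using the same placeholder proxy as in the question:

      export http_proxy=http://myuser:mypass@myserver:8080/
      # pass the variable through explicitly so the root process sees it
      sudo env http_proxy=$http_proxy do-release-upgrade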

    Read the article

  • Can I upgrade the CPU in my Lenovo 3000 N100 laptop?

    - by Pavel
    I've got an Intel Core Duo T2300 in my laptop (Lenovo 3000 N100, 0768-49G). Here is what I could find out about it:

      $ sudo dmidecode
      # dmidecode 2.11
      SMBIOS 2.4 present.
      42 structures occupying 1436 bytes.
      Table at 0x000DC010.

      Handle 0x0000, DMI type 0, 24 bytes
      BIOS Information
          Vendor: LENOVO
          Version: 61ET37WW
          Release Date: 06/04/07
          Address: 0xE6B70
      [...]
      Handle 0x0002, DMI type 2, 8 bytes
      Base Board Information
          Manufacturer: LENOVO
          Product Name: CAPELL VALLEY(NAPA) CRB
      [...]
      Handle 0x0004, DMI type 4, 35 bytes
      Processor Information
          Socket Designation: U2E1
          Type: Central Processor
          Family: Other
          Manufacturer: Intel
          ID: E8 06 00 00 FF FB E9 BF
          Version: Genuine Intel(R) CPU T2300 @ 1.66GHz
          Voltage: 3.3 V
          External Clock: 166 MHz
          Max Speed: 2048 MHz
          Current Speed: 1600 MHz
          Status: Populated, Enabled
          Upgrade: ZIF Socket
          L1 Cache Handle: 0x0005
          L2 Cache Handle: 0x0006
          L3 Cache Handle: Not Provided
          Serial Number: Not Specified
          Asset Tag: Not Specified
          Part Number: Not Specified

      $ cat /proc/cpuinfo
      processor  : 0
      vendor_id  : GenuineIntel
      cpu family : 6
      model      : 14
      model name : Genuine Intel(R) CPU T2300 @ 1.66GHz
      stepping   : 8
      microcode  : 0x39
      cpu MHz    : 1000.000
      cache size : 2048 KB

    I believe the chipset is "Mobile Intel 945GM Express", but I don't know how to verify it on a Linux system. I'm not sure about the socket, but Intel claims "Sockets Supported: PBGA479, PPGA478". Now, I'd like to upgrade to the fastest compatible CPU available, but I'm a bit lost in all the details. Can you guys help me out with a couple of questions, please?

      1. What CPUs can I choose from? (I think it's only the Core2Duo line, but it should be enough for an upgrade)
      2. Can I use a 64-bit CPU?
      3. Can I use a CPU with a higher FSB than 667 MHz?
      4. Do I have to worry about additional cooling, or is it enough to check for similar voltage/TDP values?

    Thank you!
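
    The chipset question can usually be answered straight from the PCI bus; a quick sketch using lspci (from pciutils, present on most distributions):

      # the host bridge / ISA bridge lines name the chipset generation
      lspci | grep -Ei 'host bridge|isa bridge'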

    Read the article

  • Drupal install and permissions

    - by Richard
    So I'm really stuck on this issue. An install process is complaining about write permission on settings.php and sites/default/files/. However, I've temporarily set these files to world read/write (chmod 777) and changed the owner/group to "apache" as shown below.

      -bash-4.1$ ls -hal
      total 28K
      drwxrwxrwx. 3 richard richard 4.0K Aug 23 15:03 .
      drwxr-xr-x. 4 richard richard 4.0K Aug 18 14:20 ..
      -rwxrwxrwx. 1 apache  apache  9.3K Mar 23 16:34 default.settings.php
      drwxrwxrwx. 2 apache  apache  4.0K Aug 23 15:03 files
      -rwxrwxrwx. 1 apache  apache     0 Aug 23 15:03 settings.php

    However, the install is still complaining about write permissions. I followed steps one and two of the INSTALL.txt but no luck.

    Update: To further explore the situation, I created sites/default/richard.php with the following code:

      <?php
      error_reporting(E_ALL);
      ini_set('display_errors', '1');
      mkdir('files');
      print("<hr> User is ");
      passthru("whoami");
      passthru("pwd");
      ?>

    Run from the command line (under user "richard"), no problem. The folder is created, everything is a go. Run from the web, I get the following:

      Warning: mkdir(): Permission denied in /var/www/html/sites/default/richard.php on line 9
      User is apache
      /var/www/html/sites/default

    Update 2: Safe mode appears to be off...

      -bash-4.1$ cat /etc/php.ini | grep safe | grep mode | grep -v \;
      safe_mode = Off
      safe_mode_gid = Off
      safe_mode_include_dir =
      safe_mode_exec_dir =
      safe_mode_allowed_env_vars = PHP_
      safe_mode_protected_env_vars = LD_LIBRARY_PATH
      sql.safe_mode = Off
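
    The dot suffix on the permission strings above ("drwxrwxrwx.") means SELinux labels are in play; a small sketch for checking whether that, rather than classic permissions, is blocking the web server (getenforce and the -Z flag come with the SELinux userland on RHEL/CentOS):

      getenforce                          # Enforcing / Permissive / Disabled
      ls -Z /var/www/html/sites/default   # httpd needs a writable type such as httpd_sys_rw_content_t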

    Read the article

  • MultiPath configuration on RHEL5 and Clariion CX-300

    - by Kamil Z
    I have a problem with discovering my FC-connected CX-300 storage. Frankly speaking, I'm a complete novice in Fibre Channel, so a step-by-step explanation would be appreciated. My configuration consists of two IBM HS20 blades with RHEL 5.4 on board and 2x QLogic ISP2422-based 4Gb Fibre Channel HBAs on each blade. As the FC switches there are two Brocades built into the BladeCenter chassis, and finally there is the EMC Clariion CX-300. The CX-300 and the Brocade switches should be configured properly, because they were working fine with the previous configuration, whose main difference was RHEL 3 instead of RHEL 5.4. Below is my output from several useful commands:

      # lspci | grep Fibre
      06:01.0 Fibre Channel: QLogic Corp. ISP2422-based 4Gb Fibre Channel to PCI-X HBA (rev 02)
      06:01.1 Fibre Channel: QLogic Corp. ISP2422-based 4Gb Fibre Channel to PCI-X HBA (rev 02)

      # lsmod | grep qla
      qla2xxx            1084741  0
      scsi_transport_fc    37577  1 qla2xxx
      scsi_mod            141717  10 scsi_dh,qla2xxx,sg,scsi_transport_fc,usb_storage,libata,mptspi,mptscsih,scsi_transport_spi,sd_mod

      # cat /proc/scsi/scsi
      Attached devices:
      Host: scsi0 Channel: 00 Id: 00 Lun: 00
        Vendor: LSILOGIC Model: 1030 IM IM    Rev: 1000
        Type:   Direct-Access                 ANSI SCSI revision: 02
      Host: scsi0 Channel: 01 Id: 00 Lun: 00
        Vendor: IBM-ESXS Model: ST936701LC FN Rev: B418
        Type:   Direct-Access                 ANSI SCSI revision: 04
      Host: scsi0 Channel: 01 Id: 00 Lun: 00
        Vendor: IBM-ESXS Model: ST936701LC FN Rev: B418
        Type:   Direct-Access                 ANSI SCSI revision: 04

    I followed the instructions from this site (editing /etc/multipath.conf), but I failed after running multipath -ll: the output was empty. Do you have any suggestions about discovering FC-connected LUNs in such a configuration?
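
    Since /proc/scsi/scsi above shows only local disks, the LUNs are not reaching the SCSI layer at all, and multipath has nothing to aggregate; a minimal sketch for rescanning the QLogic HBAs and checking again (the host numbers are assumptions, so list /sys/class/scsi_host/ to see the real ones):

      # trigger a rescan on each FC host adapter, then re-list devices
      echo "- - -" > /sys/class/scsi_host/host1/scan
      echo "- - -" > /sys/class/scsi_host/host2/scan
      cat /proc/scsi/scsi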

    Read the article

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    I have a heavy application running on a CentOS server and I'm seeing some strange memory behavior. Here is a snapshot of a munin graph. As you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is memory that has been freed up but not yet cleaned by the OS and put back in the free memory pool. It seems that running out of memory is actually caused by this lack of clean-up, but I may be wrong. Can you give some tips to find the cause of the problem and/or cause CentOS to reclaim the inactive memory? Thanks.

    Some extra info:

    1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory).

    2) cat /proc/meminfo (at a later stage than the image) gives:

      MemTotal:     14371428 kB
      MemFree:       1207108 kB
      Buffers:         35440 kB
      Cached:        4276628 kB
      SwapCached:     785316 kB
      Active:        9038924 kB
      Inactive:      3902876 kB
      HighTotal:           0 kB
      HighFree:            0 kB
      LowTotal:     14371428 kB
      LowFree:       1207108 kB
      SwapTotal:    10223608 kB
      SwapFree:      6438320 kB
      Dirty:          627792 kB
      Writeback:           0 kB
      AnonPages:     7844560 kB
      Mapped:          49304 kB
      Slab:           146676 kB
      PageTables:      27480 kB
      NFS_Unstable:        0 kB
      Bounce:              0 kB
      CommitLimit:  17409320 kB
      Committed_AS: 16471488 kB
      VmallocTotal: 34359738367 kB
      VmallocUsed:    275852 kB
      VmallocChunk: 34359462007 kB
      HugePages_Total:     0
      HugePages_Free:      0
      HugePages_Rsvd:      0
      Hugepagesize:     2048 kB

    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/) and a Tomcat-based Java servlet to manage things.
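
    Note that tmpfs files live entirely in the page cache and show up as cached/inactive memory, which would fit point 1 above; a quick sketch to see how much of "inactive" is reclaimable cache versus pinned tmpfs data (drop_caches only releases clean, unreferenced pages, so it is safe, if disruptive to cache warmth):

      sync
      echo 3 > /proc/sys/vm/drop_caches        # drop page cache plus dentries/inodes
      df -h /tmp                               # tmpfs usage stays: those pages cannot be dropped
      grep -E 'Inactive|Cached' /proc/meminfo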

    Read the article

  • How can I fix my corrupted RAID1 ext4 partition on a Synology DS212 NAS?

    - by Neil
    I have two identical 3 TB disks that were in a RAID1 array, where one disk crashed. I replaced the failed disk, though not before the RAID partitions got messed up. I need to figure out how to restore the RAID array and get at my ext4 partition. Here are the properties of the surviving disk:

      # fdisk -l /dev/sda
      fdisk: device has more than 2^32 sectors, can't use all of them
      Disk /dev/sda: 2199.0 GB, 2199023255040 bytes
      255 heads, 63 sectors/track, 267349 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes

         Device Boot    Start       End       Blocks  Id  System
      /dev/sda1             1    267350  2147483647+  ee  EFI GPT

      # parted /dev/sda print
      Model: ATA ST3000DM001-9YN1 (scsi)
      Disk /dev/sda: 3001GB
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt
      Disk Flags:

      Number  Start   End     Size    File system     Name  Flags
       1      131kB   2550MB  2550MB  ext4                  raid
       2      2550MB  4698MB  2147MB  linux-swap(v1)        raid
       5      4840MB  3001GB  2996GB                        raid

    I replaced the failed drive and cloned the surviving drive to it so I have something to work with. I cloned the drives with dd if=/dev/sdb of=/dev/sda conv=noerror bs=64M, and now /dev/sda and /dev/sdb are identical. Here is the RAID information:

      # cat /proc/mdstat
      Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
      md1 : active raid1 sdb2[1]
            2097088 blocks [2/1] [_U]

      md0 : active raid1 sdb1[1]
            2490176 blocks [2/1] [_U]

      unused devices: <none>

    It seems that md2 is missing. Here is what testdisk 6.14-WIP finds:

      Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
      Current partition structure:
           Partition                  Start        End    Size in sectors
       1 P Linux Raid                   256    4980735    4980480 [md0]
       2 P Linux Raid               4980736    9175039    4194304 [md1]
      Invalid RAID superblock
       5 P Linux Raid               9453280 5860519007 5851065728
       5 P Linux Raid               9453280 5860519007 5851065728

      # After a quick search
      Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
           Partition                  Start        End    Size in sectors
       D MS Data                        256    4980607    4980352 [1.41.12-2197]
       D Linux Raid                     256    4980735    4980480 [md0]
       D Linux Swap                 4980736    9174895    4194160
       D Linux Raid                 4980736    9175039    4194304 [md1]
      >P MS Data                    9481056 5858437983 5848956928 [1.41.12-2228]

    And listing the files on the last partition in the list shows all of my files intact. What should I do?
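
    Everything below is read-only probing, not a fix; a cautious sketch for checking whether the data partition still carries an md superblock and, if so, assembling it without writing anything (device names follow the question's layout, and --readonly keeps mdadm from modifying the members):

      mdadm --examine /dev/sda5 /dev/sdb5            # look for a surviving RAID superblock
      mdadm --assemble --readonly /dev/md2 /dev/sdb5
      mount -o ro /dev/md2 /mnt                      # only if assembly succeeds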

    Read the article

  • How to remotely open gedit with SFTP URL in Gnome through SSH?

    - by Álvaro Justen
    My setup is weird and I can't change it now. I have two machines:

      local-machine: my desktop running Ubuntu with Gnome
      remote-machine: one virtual machine, also running Ubuntu but without X

    On both machines I have my private and public SSH keys. I need to run SSH from remote-machine to local-machine and run gedit (in local-machine, under the default $DISPLAY) but opening a file on remote-machine through SFTP. Something like this:

      myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 gedit sftp://remote-machine/some/file"

    The command above doesn't work. gedit shows this message:

      Could not open the file sftp://remote-machine/some/file.
      gedit cannot handle sftp: locations.

    Note that:

      - /some/file exists on remote-machine.
      - I can SSH normally from remote-machine to local-machine using my SSH key without any problems!
      - I can run the command DISPLAY=:0.0 gedit sftp://remote-machine/some/file in a terminal on local-machine and gedit opens the file on remote-machine without any problems - but the terminal in which I executed the command is running in DISPLAY :0 (really, it's gnome-terminal).
      - I also tried the -t option of the SSH client (to force pseudo-tty allocation) but it didn't work.

    If I try to run DISPLAY=:0.0 gedit sftp://remote-machine/some/file on local-machine but under a tty (for example in tty1, by pressing <Ctrl>+<Alt>+<F1>), it does not work: I get the same error as when running from remote-machine. I found that if I pass the environment variable DBUS_SESSION_BUS_ADDRESS with a correct value, it works! So, if I do something like this:

      myuser@local-machine:~$ env | grep DBUS_SESSION_BUS_ADDRESS > env.txt
      myuser@local-machine:~$ scp env.txt remote-machine:

    and then:

      myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 $(cat env.txt) gedit sftp://remote-machine/some/file"

    it works! The problem is that I'm not on local-machine, so I can't get the correct value for this env variable. Is there any other way to make this work?
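
    The address can also be recovered on demand from any process already inside the Gnome session, which removes the need to copy env.txt around; a sketch, assuming a gnome-session process owned by myuser is running on local-machine (values without spaces, which session bus addresses normally are):

      # run on remote-machine: pull DBUS_SESSION_BUS_ADDRESS out of the live session
      ssh local-machine 'pid=$(pgrep -u myuser -n gnome-session); \
        export $(tr "\0" "\n" < /proc/$pid/environ | grep ^DBUS_SESSION_BUS_ADDRESS); \
        DISPLAY=:0.0 gedit sftp://remote-machine/some/file'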

    Read the article

  • Unexpected behaviour when dynamically adding a node to an HAProxy server

    - by Anand Soni
    I wanted to use HAProxy for my web app for load balancing purposes. I am trying to add a new rabbitmq node dynamically to the HAProxy server using the command:

      haproxy -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

    I am using tcp connection mode with the leastconn balance algorithm. What I expect is that when there are 3 connections on one rabbitmq node and I add a new rabbit server to the HAProxy config, the next connection would go to the 2nd rabbitmq server, which is not happening in my case. It distributes the connections in a haphazard manner. Here is my config file:

      defaults
          log     global
          mode    http
          option  httplog
          option  dontlognull
          retries 3
          option  redispatch
          maxconn 2000
          contimeout 5000
          clitimeout 5000
          srvtimeout 5000

      listen rabbitmq 0.0.0.0:5672
          mode tcp
          stats enable
          balance leastconn
          option tcplog
          server rabbit01 xx.xx.xx.xx:5672 check
          server rabbit02 xx.xx.xx.xx:5672 check

      listen tomcatq 0.0.0.0:80
          mode http
          stats enable
          balance roundrobin
          stats refresh 10s
          stats refresh 10s
          stats uri /lb?stats
          stats auth admin:admin
          option httplog

    What is the problem causing this behavior? Any suggestion will be appreciated.
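
    One relevant detail of the reload command shown above is that -sf starts a brand-new HAProxy process: existing connections stay with the old process, and the new one begins with zeroed connection counters, so leastconn decisions start from scratch. A sketch for watching the live per-server session counts through the stats socket (this assumes a "stats socket /var/run/haproxy.sock" line added to a global section, which the config above does not yet have):

      echo "show stat" | socat unix-connect:/var/run/haproxy.sock stdio \
        | cut -d, -f1,2,5      # pxname, svname, scur (current sessions)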

    Read the article

  • MAMP - Host name changes to first vhost SSL entry for project with two localhosts

    - by user1322092
    I have two projects that are copies of each other on my Mac with MAMP. They both have SSL pages. However, whenever I hit a secured SSL page of project 2, the base_url or host changes to project1 instead of remaining project2. I know this is an issue with the vhosts, because if I switch the order of the entries, the reverse happens. Here are my config files:

    /Applications/MAMP/conf/extra/httpd-ssl.conf:

      <VirtualHost _default_:443>
          DocumentRoot "/Applications/MAMP/htdocs/proj1"
          ServerName proj1.localhost:443
          ErrorLog "/Applications/MAMP/Library/logs/error_log"
          TransferLog "/Applications/MAMP/Library/logs/access_log"
          SSLEngine on
          SSLCertificateFile "/Applications/MAMP/conf/apache/ssl/server.crt"
          SSLCertificateKeyFile "/Applications/MAMP/conf/apache/ssl/server.key"
      </VirtualHost>

      <VirtualHost _default_:443>
          DocumentRoot "/Applications/MAMP/htdocs/proj2"
          ServerName proj2.localhost:443
          ErrorLog "/Applications/MAMP/Library/logs/error_log"
          TransferLog "/Applications/MAMP/Library/logs/access_log"
          SSLEngine on
          SSLCertificateFile "/Applications/MAMP/conf/apache/ssl/server.crt"
          SSLCertificateKeyFile "/Applications/MAMP/conf/apache/ssl/server.key"
      </VirtualHost>

    cat /etc/hosts:

      ##
      # Host Database
      #
      # localhost is used to configure the loopback interface
      # when the system is booting.  Do not change this entry.
      ##
      127.0.0.1       localhost
      255.255.255.255 broadcasthost
      ::1             localhost
      fe80::1%lo0     localhost
      127.0.0.1       proj1.localhost
      127.0.0.1       proj2.localhost
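
    For what it's worth, two _default_:443 blocks give Apache no way to tell the hosts apart, so the first block wins for every SSL request; a sketch of the name-based variant for one of the two projects (this assumes the bundled Apache and OpenSSL support SNI, which is true for reasonably recent MAMP builds):

      NameVirtualHost *:443
      <VirtualHost *:443>
          ServerName proj1.localhost
          DocumentRoot "/Applications/MAMP/htdocs/proj1"
          SSLEngine on
          SSLCertificateFile "/Applications/MAMP/conf/apache/ssl/server.crt"
          SSLCertificateKeyFile "/Applications/MAMP/conf/apache/ssl/server.key"
      </VirtualHost>
      # ...plus an equivalent block with the proj2 ServerName and paths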

    Read the article

  • raid 1 and high load average

    - by melocoton
    I have a server with a high load average, and I think the problem is the RAID1 array.

      # cat /proc/mdstat
      Personalities : [raid1]
      md0 : active raid1 sdb1[1] sda1[0]
            256896 blocks [2/2] [UU]

      md3 : active raid1 sdb3[1] sda3[0]
            2562240 blocks [2/2] [UU]

      md4 : active raid1 sdb5[1] sda5[0]
            958566272 blocks [2/2] [UU]

      md1 : active raid1 sdb2[1] sda2[0]
            15366080 blocks [2/2] [UU]

      model name : Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz

      Linux 2.6.18-164.6.1.el5.centos.plus (local)  04/19/2010

      avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                17.37   0.01     6.02    26.17    0.00  50.43

      Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda      61.09      562.65      893.73   1557214   2473546
      sda1      0.01        0.27        0.02       751        42
      sda2      6.11      195.50      169.78    541075    469888
      sda3      0.01        0.23        0.00       641         0
      sda4      0.00        0.01        0.00        18         0
      sda5     54.96      366.54      723.94   1014449   2003616
      sdb      54.40      433.22      893.73   1199015   2473546
      sdb1      0.01        0.16        0.02       436        42
      sdb2      5.31      169.00      169.78    467729    469888
      sdb3      0.01        0.31        0.00       865         0
      sdb4      0.00        0.00        0.00        10         0
      sdb5     49.05      263.65      723.94    729695   2003616
      md1      29.96      364.39      166.68   1008498    461312
      md4     124.15      630.07      713.28   1743822   1974112
      md3       0.05        0.43        0.00      1192         0
      md0       0.04        0.32        0.00       872        10
      dm-0      7.96       83.29       23.02    230530     63720
      dm-1      3.67       51.81        2.73    143394      7560
      dm-2      7.63       67.76       27.35    187546     75696
      dm-3      8.20      134.60       14.02    372514     38792
      dm-4      5.90       10.66       39.35     29498    108912
      dm-5     17.39       24.52      121.79     67850    337080
      dm-6     27.19      229.60      139.89    635442    387168
      dm-7      0.14        1.07        0.28      2970       776
      dm-8     25.84        4.23      202.89     11698    561536
      dm-9     14.77        8.38      112.35     23202    310960
      dm-10     5.29       12.78       29.55     35376     81784
      dm-11     0.16        1.25        0.05      3450       128

    The server runs LVM on md4.
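
    A 26% iowait with only modest block throughput usually points at seek-bound disks rather than the RAID layer itself; a sketch to get per-device latency figures, which the summary output above does not show (iostat -x comes from the sysstat package on CentOS):

      iostat -x 5 3    # watch await and %util per device over three 5-second samples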

    Read the article

  • Java Plugin a huge security risk? How to preserve the Java plugin from privilege escalation?

    - by Johannes Weiß
    Installing a regular Java plugin is IMHO a real security risk for non-IT people. Normally Java applets run in a sandbox and the applet cannot do anything harmful to your computer. If an applet, however, needs to do something like read-only access to your filesystem, e.g. uploading an image, you have to give it more privileges. Usually that's OK, but I think not everyone knows that you are giving the applet the same privileges on your computer as your user has! And that's everything Java asks you. That looks as 'harmful' as a self-signed SSL certificate on a random page where no sensitive data is exchanged. The user will click on Run! You can try that at home using JyConsole, which is Jython (Python on Java)! Simply type in Python code, e.g.:

      import os
      os.system('cat /etc/passwd')

    or worse (DON'T TYPE IN THAT CODE ON YOUR COMPUTER!!!):

      import os
      os.system('rm -rf ~')

    Does anyone know how you can disable the possibility of privilege escalation? And by the way, does anyone know why Sun displays only a dialog as harmless as the one shown above (the self-signed-SSL-certificate dialog from Firefox 3 and above is much clearer here!)? Live sample from my computer:

    Read the article

  • Cachefilesd (cachefiles) everything seems to be set up, still not working

    - by Evgenius
    I'm trying to set up cachefilesd to work with my network folder shared using NFS. I have seemingly everything set up: cachefilesd starts normally, but caching isn't functioning. Here is the output of the commands, which I ran in this order:

      1. sudo mount
         ...
         cache-1:/mnt/datashared on /mnt/nfsshare type nfs (rw,sync,ac,acregmin=3,acregmax=60,acdirmin=30,acdirmax=300,lookupcache=pos,vers=3,fsc)
         ...

      2. lsmod | grep cachefiles
         cachefiles             40555  1
         fscache                57430  4 nfs,cifs,cachefiles,nfsv4

      3. [edited - deleted]

      4. uname -r
         3.8.0-34-generic

      5. grep CONFIG_NFS_FSCACHE /boot/config-3.8.0-34-generic
         CONFIG_NFS_FSCACHE=y

      6. lsb_release -a
         No LSB modules are available.
         Distributor ID: Ubuntu
         Description:    Ubuntu 13.04
         Release:        13.04
         Codename:       raring

      7. sudo service cachefilesd restart
         * Restarting FilesCache daemon cachefilesd    [ OK ]

      8. dmesg
         [6211206.141781] FS-Cache: Withdrawing cache "mycache"
         [6211210.135236] FS-Cache: Cache "mycache" added (type cachefiles)
         [6211210.135242] CacheFiles: File cache on sdb1 registered
         [6214644.348929] CacheFiles: File cache on sdb1 unregistering
         [6214644.348935] FS-Cache: Withdrawing cache "mycache"
         [6214654.575909] FS-Cache: Cache "mycache" added (type cachefiles)
         [6214654.575915] CacheFiles: File cache on sdb1 registered

      9. ps aux | grep cachefilesd
         root     65399  0.0  0.0   4460   540 ?      Ss   23:14   0:00 /sbin/cachefilesd
         1000     65464  0.0  0.0   8160   916 pts/0  S+   23:16   0:00 grep --color=auto cachefilesd

    Finally, the biggest problem is:

      10. cat /proc/fs/nfsfs/volumes
          NV SERVER   PORT DEV  FSID             FSC
          v3 64476645  801 0:24 233e020f0da07a93 no

    tl;dr: I think I configured everything properly, but the fsc mount option plus cachefilesd don't seem to work.
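
    One more data point worth collecting is whether the kernel's FS-Cache layer is seeing any lookups at all; a small sketch (the stats file appears once the fscache module is loaded, which the lsmod output above confirms):

      head -20 /proc/fs/fscache/stats   # non-zero Acquire/Lookups counters mean NFS is actually talking to the cache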

    Read the article

  • Where should CentOS users get /usr/share/virtio-win/drivers for virt-v2v?

    - by Philip Durbin
    I need to migrate a number of virtual machines from VMware ESX to CentOS 6 KVM hypervisors. Ultimately, I wrote an RPM spec file that solved my problem at https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec but I'm not sure if there's another RPM in base CentOS or EPEL (something standard) I should be using instead. Originally, I was getting this "No root device found in this operating system image" error when attempting to migrate a Windows 2008 VM...

      [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
          -os transferimages --network default my-vm
      virt-v2v: No root device found in this operating system image.

    ...but I solved this with a simple yum install libguestfs-winsupport, since the docs say:

      If you attempt to convert a virtual machine using NTFS without the
      libguestfs-winsupport package installed, the conversion will fail.

    Next I got an error about missing drivers for Windows 2008...

      [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
          -os transferimages --network default my-vm
      my-vm_my-vm: 100% [====================================]D
      virt-v2v: Installation failed because the following files referenced in
      the configuration file are required, but missing:
      /usr/share/virtio-win/drivers/amd64/Win2008

    ...and I resolved this by grabbing an ISO from Fedora at http://alt.fedoraproject.org/pub/alt/virtio-win/latest/ as recommended by http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers and building an RPM from it with this spec file: https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec

    Now, virt-v2v exits without error:

      [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
          -os transferimages --network default my-vm
      my-vm_my-vm: 100% [====================================]D
      virt-v2v: my-vm configured with virtio drivers.
      [root@kvm01b ~]#

    Now, my question is: rather than the virtio-win RPM from the spec file I wrote, is there some other, more standard RPM in base CentOS or EPEL that will resolve the error above? Here's a bit more detail about my setup:

      [root@kvm01b ~]# cat /etc/redhat-release
      CentOS release 6.2 (Final)
      [root@kvm01b ~]# rpm -q virt-v2v
      virt-v2v-0.8.3-5.el6.x86_64

    See also Bug 605334 - VirtIO driver for windows does not show specific OS: Windows 7, Windows 2003
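
    A quick way to check whether any enabled repository already packages those driver paths before carrying a custom spec; a sketch (yum provides accepts file globs, and the EPEL line assumes the epel-release repo definition is installed):

      yum provides '/usr/share/virtio-win/drivers/*'
      yum --enablerepo=epel search virtio-win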

    Read the article

  • HP Storageworks 448 tape drive input/output error with Ubuntu

    - by Dan D
    I'm trying to set up a backup to tape of a machine using flexbackup. However, any attempt to write to the tape drive (via either flexbackup or just tar) results in:

      /dev/st0: Input/output error

    The machine seems to recognise the drive (HP StorageWorks Ultrium 448) and that there's a tape in it, and mt status seems to work; mt -f /dev/st0 rewind and erase throw no errors...

      root@stor001:/# mt status
      SCSI 2 tape drive:
      File number=0, block number=0, partition=0.
      Tape block size 0 bytes. Density code 0x42 (LTO-2).
      Soft error count since last status=0
      General status bits on (41010000):
       BOT ONLINE IM_REP_EN

      root@stor001:/# cat /proc/scsi/scsi
      Attached devices:
      Host: scsi0 Channel: 00 Id: 00 Lun: 00
        Vendor: HL-DT-ST Model: DVDRAM GSA-4084N Rev: KS02
        Type:   CD-ROM                           ANSI SCSI revision: 05
      Host: scsi2 Channel: 00 Id: 03 Lun: 00
        Vendor: HP       Model: Ultrium 2-SCSI   Rev: S65D
        Type:   Sequential-Access                ANSI SCSI revision: 03

    "tell" does, however:

      root@stor001:/# mt -f /dev/st0 tell
      /dev/st0: Input/output error

    Based on a forum post I found, I tried:

      root@stor001:/# dd if=/dev/zero of=/dev/nst0 bs=1024 count=10
      10+0 records in
      10+0 records out
      10240 bytes (10 kB) copied, 5.0815 s, 2.0 kB/s

    which gave the person on the forum an error but seems to work for me. If anyone has any suggestions, I'm all ears...
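
    Since raw dd writes succeed while tar fails, one cheap experiment is to pin the drive and the archive to the same fixed block size; a sketch (64 KiB is just a common LTO choice here, not a requirement, and /etc is only sample data):

      mt -f /dev/nst0 setblk 65536     # fixed 64 KiB blocks instead of variable (size 0)
      tar -b 128 -cvf /dev/nst0 /etc   # tar's -b is in 512-byte units: 128 * 512 = 64 KiB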

    Read the article

  • Linux can't find file that exists

    - by Joe
    I'm trying to get Google's Dart language up and running, but it errors when running dart2js. I'm running Arch Linux and I installed dart-sdk from the AUR. Some relevant terminal output lies below.

      % dart2js main.dart
      /usr/local/bin/dart2js: line 7: /usr/local/bin/dart: No such file or directory

      % cat /usr/local/bin/dart2js
      #!/bin/sh
      # Copyright (c) 2012, the Dart project authors. Please see the AUTHORS file
      # for details. All rights reserved. Use of this source code is governed by a
      # BSD-style license that can be found in the LICENSE file.

      BIN_DIR=`dirname $0`
      exec $BIN_DIR/dart --allow_string_plus=false $BIN_DIR/../lib/dart2js/lib/compiler/implementation/dart2js.dart "$@"

      % file /usr/local/bin/dart
      /usr/local/bin/dart: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, BuildID[sha1]=0x27fe166ca015c1adfeaf3a6f9c018e7d7af46d9f, stripped

      % ls -alh /usr/local/bin
      total 4.9M
      drwxr-xr-x  2 root root 4.0K Jun 10 22:51 .
      drwxr-xr-x 12 root root 4.0K Jun 10 22:51 ..
      -rwxr-xr-x  1 root root 422K May 10 22:41 cargo
      -rwxr-xr-x  1 root root 2.7M Jun 10 22:50 dart
      -rwxr-xr-x  1 root root  360 Jun  6 16:20 dart2js
      -rwxr-xr-x  1 root root  176 Jun  6 16:20 pub
      -rwxr-xr-x  1 root root 166K May 10 22:41 rustc
      -rwxr-xr-x  1 root root 1.6M May 10 22:41 rustdoc

      % uname -rm
      3.3.7-1-ARCH x86_64

    Could it be because I'm running a 64-bit OS and the dart binary is 32-bit?
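
    "No such file or directory" for a binary that plainly exists is the classic symptom of a missing 32-bit dynamic loader on x86_64; a sketch to confirm and, on Arch, fix it (this assumes the multilib repository is enabled in pacman.conf):

      ldd /usr/local/bin/dart   # "not a dynamic executable" here means the 32-bit ld-linux.so.2 is absent
      pacman -S lib32-glibc     # provides the 32-bit loader and C library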

    Read the article

  • How to list rpm packages/subpackages sorted by total size

    - by smci
    Looking for an easy way to post-process rpm -q output so it reports the total size of all subpackages matching a regexp, e.g. see the aspell* example below. (Short of scripting it with Python/Perl/awk, which is the next step.)

    (Motivation: I'm trying to remove a few GB of unnecessary packages from a CentOS install, so I'm trying to track down things that are a) large, b) unnecessary and c) not dependencies of anything useful like Gnome. Ultimately I want to pipe the output through sort -n to see what the space hogs are, before doing rpm -e.)

    My reporting command looks like [1]:

      cat unwanted | xargs rpm -q --qf '%9.{size} %{name}\n' > unwanted.size

    and here's just one example where I'd like to see rpm's total for all aspell* subpackages:

      root# rpm -q --qf '%9.{size} %{name}\n' `rpm -qa | grep aspell`
        1040974 aspell
       16417158 aspell-es
        4862676 aspell-sv
        4334067 aspell-en
       23329116 aspell-fr
       13075210 aspell-de
       39342410 aspell-it
        8655094 aspell-ca
       62267635 aspell-cs
       16714477 aspell-da
       17579484 aspell-el
       10625591 aspell-no
       60719347 aspell-pl
       12907088 aspell-pt
        8007946 aspell-nl
        9425163 aspell-cy

    Three extra nice-to-have things:

      1. List the dependencies/depending packages of each group (so I can figure out the uninstall order).
      2. Group them by package group; that would be totally neat.
      3. Human-readable size units like 'M'/'G' (like ls -h does). Can be done with regexp and rounding on the size field.

    Footnote: I'm surprised up2date and yum don't add this sort of intelligence. Ideally you would want to see a tree of group-package-subpackage, with rolled-up sizes.

    Footnote 2: I see yum erase aspell* does actually produce this summary - but not in a query command.

    [1] where unwanted.txt is a text file of unnecessary packages obtained by diffing the output of:

      yum list installed | sed -e 's/\..*//g' > installed.txt
      diff --suppress-common-lines centos4_minimal.txt installed.txt | grep '>'

    and centos4_minimal.txt came from the Google doc given by that helpful blogger.
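
    Since awk is already named as the next step, here is a minimal sketch of that roll-up: total bytes per package-name prefix, sorted, with human-readable units (the grouping rule, taking everything before the first "-", is an assumption that happens to fit names like aspell-fr):

      rpm -qa --qf '%{size} %{name}\n' \
        | awk '{ split($2, a, "-"); s[a[1]] += $1 }
               END { for (p in s) printf "%10.1f MiB  %s\n", s[p]/1048576, p }' \
        | sort -rn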

    Read the article

  • What causes this logrotate behavior in Puppet?

    - by ujjain
    After running logrotate, Puppet starts writing its logs into /var/log/puppet/masterhttp.log-20130616. How come it doesn't keep logging in /var/log/puppet/masterhttp.log? Normal behavior is renaming the original log file and starting with a clean, fresh log file to write into, keeping the other file as a log archive.

      [root@puppetmaster puppet]# ls -al
      total 97520
      drwxr-x---.  2 puppet puppet     4096 Jun 16 03:24 .
      drwxr-xr-x. 12 root   root       4096 Jul  1 09:11 ..
      -rw-r--r--.  1 puppet puppet        0 Jun 16 03:24 masterhttp.log
      -rw-rw----.  1 puppet puppet 99847187 Jul  1 09:19 masterhttp.log-20130616

      [root@puppetmaster init.d]# cat /etc/logrotate.d/puppet
      /var/log/puppet/*log {
          missingok
          notifempty
          create 0644 puppet puppet
          sharedscripts
          postrotate
            pkill -USR2 -u puppet -f /usr/sbin/puppetmasterd || true
            [ -e /etc/init.d/puppet ] && /etc/init.d/puppet reload > /dev/null 2>&1 || true
          endscript
      }

    How can I make Puppet log to /var/log/puppet/masterhttp.log and not to /var/log/puppet/masterhttp.log-20130616? Even restarting Puppet doesn't make it log into /var/log/puppet/masterhttp.log instead of /var/log/puppet/masterhttp.log-20130616.
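
    The ls output above is the classic signature of a daemon that never reopens its log: its open file descriptor follows the renamed file, so writes keep landing in masterhttp.log-20130616. If no reliable reopen signal exists for the process in question, copytruncate is the usual workaround; a sketch of the changed stanza (the trade-off is a tiny window during the copy where log lines can be lost):

      /var/log/puppet/*log {
          missingok
          notifempty
          copytruncate
      }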

    Read the article

  • Resize Debian in VirtualBox

    - by Poni
    I have a VM with one HD of size 3 GB and I'd like to enlarge its HD to 7 GB. So I execute this command on the host (while the guest is shut down):

      VBoxManage modifyhd debian.vdi --resize 7168

    Then I run the guest, Debian 6, and then:

      smith@debian6:~$ df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/sda1             2.8G  2.6G   60M  98% /
      tmpfs                  61M     0   61M   0% /lib/init/rw
      udev                   57M  160K   57M   1% /dev
      tmpfs                  61M     0   61M   0% /dev/shm

      smith@debian6:~$ sudo parted /dev/sda print
      Model: ATA VBOX HARDDISK (scsi)
      Disk /dev/sda: 3221MB
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos

      Number  Start   End     Size    Type      File system     Flags
       1      1049kB  3035MB  3034MB  primary   ext3            boot
       2      3036MB  3220MB  185MB   extended
       5      3036MB  3220MB  185MB   logical   linux-swap(v1)

      smith@debian6:~$ cat /proc/partitions
      major minor  #blocks  name
         8     0   3145728  sda
         8     1   2962432  sda1
         8     2         1  sda2
         8     5    180224  sda5

    So, no automatic resizing (detection) of the HD/partition (while VirtualBox, on the host, shows it's 7 GB now). OK... Then I do:

      smith@debian6:~$ sudo resize2fs /dev/sda1
      resize2fs 1.41.12 (17-May-2010)
      The filesystem is already 740608 blocks long.  Nothing to do!

      smith@debian6:~$ sudo parted
      GNU Parted 2.3
      Using /dev/sda
      Welcome to GNU Parted! Type 'help' to view a list of commands.
      (parted) select /dev/sda1
      Using /dev/sda1
      (parted) resize
      WARNING: you are attempting to use parted to operate on (resize) a file system.
      parted's file system manipulation code is not as robust as what you'll find in
      dedicated, file-system-specific packages like e2fsprogs.  We recommend
      you use parted only to manipulate partition tables, whenever possible.
      Support for performing most operations on most types of
      file systems will be removed in an upcoming release.
      Partition number? 1
      Start? 0
      End?  [3034MB]?

    Here I'm stuck. parted above is asking me to resize within 3 GB. No point in that, right? What should I do in order to enlarge this partition?
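
    Two things stand out in the output above: the guest still reports a 3221 MB disk, so first confirm the kernel has seen the enlarged VDI at all (a guest reboot after the VBoxManage resize usually does it), and sda1 cannot grow past the swap partitions that sit directly behind it. A rough sketch of the usual offline sequence, best done from a live CD; this rewrites the partition table, so note sda1's exact start sector first and recreate it with the same start:

      swapoff -a
      fdisk /dev/sda       # delete 5, 2 and 1; recreate 1 larger (same start); recreate swap at the end
      partprobe /dev/sda   # or reboot so the kernel rereads the table
      e2fsck -f /dev/sda1 && resize2fs /dev/sda1
      mkswap /dev/sda5     # assumption: swap recreated as logical partition 5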

    Read the article

  • Tomato OS: "memory exhausted" running vi .... how to solve?

    - by Sam Jones
    I have set up Tomato (Shibby) on an Asus RT-N66U router. It works great. I loaded up a few pieces, like Transmission and Optware. I can run vi, but when I open a file with it, it fails with a "memory exhausted" error, and the terminal session hangs. For reference: if I simply start vi, it runs fine. But if I give it a file to open, I get the memory exhausted error, even if the file I am opening is just a couple of hundred bytes in size (like fstab). I discovered that my swap partition was not properly set up, so I fixed that. The swapon command now indicates I really do have swap:

      [root@MyRouter samba]$ swapon -s
      Filename     Type       Size      Used  Priority
      /dev/sda1    partition  32900860  0     1

    How can I get vi to work? Thanks!

    System setup reference information: Asus RT-N66U router with a 2 TB USB hard drive. Partitions on the hard drive:

      Disk /dev/sda: 2000.4 GB, 2000398839808 bytes
      255 heads, 63 sectors/track, 30400 cylinders
      Units = cylinders of 16065 * 4096 = 65802240 bytes
      Disk identifier: 0xfacbc8ab

         Device Boot      Start         End      Blocks  Id  System
      /dev/sda1               1         512    32900868  82  Linux swap / Solaris
      /dev/sda2             513       29000  1830638880  83  Linux

    Running Samba. Memory:

      $ cat /proc/meminfo
      MemTotal:     255840 kB
      MemFree:      210980 kB
      Buffers:        5264 kB
      Cached:        22768 kB
      SwapCached:        0 kB
      Active:        20272 kB
      Inactive:      11448 kB
      HighTotal:    131072 kB
      HighFree:      99868 kB
      LowTotal:     124768 kB
      LowFree:      111112 kB
      SwapTotal:  32900860 kB
      SwapFree:   32900860 kB
      Dirty:             0 kB
      Writeback:         0 kB

    TIA!
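
    Since Optware is already installed (per the setup list above), one low-effort workaround is to try the full vim from the Optware feed instead of the firmware's built-in vi; a sketch (the package name is as it appears in the classic Optware feed):

      ipkg update
      ipkg install vim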

    Read the article

  • Ubuntu 9.10 Only Sees 244 MB RAM, while BIOS and Windows Sees 1.5 GB

    - by nicorellius
    I have 1.5 GB of RAM installed on an older Dell, Pentium 4. I just installed Ubuntu 9.10 and the system is only seeing 244 MB of RAM, even though there is 1.5 GB on the system. The BIOS sees all of it. I ran a Knoppix disc and it only saw 25 MB upon booting. I made no particular changes to the installation that would affect this. I looked through the BIOS and the only setting I could see was the AGP aperture. Not even sure what this is. Anyone know where I went wrong? I also tried moving the memory modules around on the board, and booted with just the 1 GB stick: still saw 244 MB.

    NOTE - This same system, except for the hard drive, had Windows XP running on it. The user who ran it said that the RAM was good and always showed 1.5 GB.

    Here is sudo cat /proc/meminfo:

      MemTotal:         250064 kB
      MemFree:            3832 kB
      Buffers:           13356 kB
      Cached:            52216 kB
      SwapCached:        19676 kB
      Active:            91504 kB
      Inactive:         113884 kB
      Active(anon):      60572 kB
      Inactive(anon):    82156 kB
      Active(file):      30932 kB
      Inactive(file):    31728 kB
      Unevictable:           0 kB
      Mlocked:               0 kB
      HighTotal:             0 kB
      HighFree:              0 kB
      LowTotal:         250064 kB
      LowFree:            3832 kB
      SwapTotal:       4883720 kB
      SwapFree:        4781204 kB
      Dirty:               496 kB
      Writeback:           720 kB
      AnonPages:        123796 kB
      Mapped:            23368 kB
      Slab:              17248 kB
      SReclaimable:       7932 kB
      SUnreclaim:         9316 kB
      PageTables:         5304 kB
      NFS_Unstable:          0 kB
      Bounce:                0 kB
      WritebackTmp:          0 kB
      CommitLimit:     5008752 kB
      Committed_AS:     740372 kB
      VmallocTotal:     770600 kB
      VmallocUsed:       26008 kB
      VmallocChunk:     662544 kB
      HugePages_Total:       0
      HugePages_Free:        0
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      Hugepagesize:       4096 kB
      DirectMap4k:      114128 kB
      DirectMap4M:      147456 kB
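
    Since Knoppix shows a similar shortfall, the kernel's view of the BIOS memory map is the first thing to check; a quick sketch (the e820 map is printed near the top of the boot log, so if the ring buffer has wrapped, look in /var/log/dmesg instead):

      dmesg | grep -i -A4 'BIOS-provided physical RAM map'
      free -m   # cross-check what the running kernel actually claims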

    Read the article

  • mysqld causes high CPU load

    - by Radu
    My mysqld goes up to 99.9% CPU for a variable time (between 2 and 20 minutes), and then goes back to a normal 0.1% - 5%. I checked the process list: all is normal, 1 to 20 inserts or updates that last 2 to 5 sec, and about 20 processes that are in sleep mode (maybe because the scripts don't close the mysql connection, but the connections are closed in about 5 - 10 secs; I didn't write the scripts :P but the server was running fine for the last 2 years, since it was set up):

      | 15375 | root  | localhost | stoc  | Query | 0 | NULL | show processlist |
      | 79480 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
      | 79481 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
      | 79482 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
      | 79483 | pppoe | localhost | pppoe | Query | 0 | init | UPDATE acc SET InputOctets="0", OutputOctets="0", InputPackets="unknown", OutputPackets="User |
      | 79484 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |
      | 79485 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |
      | 79486 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |

    I checked the RAID; it seems OK:

      [root@db2]# cat /proc/mdstat
      Personalities : [raid5] [raid4] [raid1]
      md0 : active raid1 sdd1[3] sdc1[2] sdb1[0] sda1[1]
            136448 blocks [4/4] [UUUU]

      md1 : active raid5 sdd2[3] sdc2[2] sdb2[0] sda2[1]
            12023808 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]

      md3 : active raid5 sda4[1] sdd4[3] sdc4[2] sdb4[0]
            203647488 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]

      md2 : active raid5 sda3[1] sdd3[3] sdc3[2] sdb3[0]
            24024576 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]

      unused devices: <none>

    top sees the mysqld CPU load, but nothing else seems to be wrong:

      [root@db2]# top
      top - 17:56:05 up 7 days, 3:55, 3 users, load average: 32.93, 24.72, 22.70
      Tasks:  75 total,   4 running,  71 sleeping,   0 stopped,   0 zombie
      Cpu(s): 63.4% us, 36.6% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si, 0.0% st
      Mem:   1988824k total,  1304776k used,   684048k free,    99588k buffers
      Swap: 12023800k total,        0k used, 12023800k free,   951028k cached

        PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
       5754 mysql 19  0  236m  57m 5108 R 99.9  2.9 21:58.76 mysqld
          1 root  16  0  7216  700  580 S  0.0  0.0  0:00.39 init
          2 root  RT  0     0    0    0 S  0.0  0.0  0:00.00 migration/0

    I repaired all the MySQL databases, reindexed the RAID... I'm running out of ideas. Does anyone have an idea what could be going wrong with this server? Thank you
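
    Sleeping connections are harmless by themselves; since the process list only samples an instant, logging every slow statement across one of those 2-20 minute spikes is the usual next step. A sketch for the [mysqld] section of my.cnf, in the MySQL 5.0-era option spelling that matches this CentOS generation (restart mysqld afterwards):

      log-slow-queries = /var/log/mysql-slow.log
      long_query_time  = 2
      log-queries-not-using-indexes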

    Read the article
