Search Results

Search found 9963 results on 399 pages for 'special commands'.


  • Solaris 10 x86 mirror: making the second disk bootable on failure

    - by Kani
    Did a mirror (RAID 1) with Solaris 10 on x86. Everything OK. Now I'm trying to make the second disk bootable, that is: from GRUB, or in case disk1 fails. I edited /boot/grub/menu.lst:

        #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
        title Solaris 10 9/10 s10x_u9wos_14a X86
        findroot (rootfs1,0,a)
        kernel /platform/i86pc/multiboot
        module /platform/i86pc/boot_archive
        #---------------------END BOOTADM--------------------
        #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
        title Solaris failsafe
        findroot (rootfs1,0,a)
        kernel /boot/multiboot -s
        module /boot/amd64/x86.miniroot-safe
        #---------------------END BOOTADM--------------------
        #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
        title Solaris failsafe
        findroot (rootfs1,0,a)
        kernel /boot/multiboot kernel/unix -s
        module /boot/x86.miniroot-safe
        #---------------------END BOOTADM--------------------
        # Make second disk bootable!
        title alternate boot
        findroot (rootfs1,1,a)
        kernel /boot/multiboot kernel/unix -s
        module /boot/x86.miniroot-safe

    But it is not working. At boot, when I select "alternate boot" I get:

        Error 15: File not found

    Also, how do I configure GRUB to boot from disk2 in case of an error on disk1? Additionally, I did this (though not related to GRUB):

        eeprom altbootpath=/devices/pci@0,0/pci108e,5352@1f,2/disk@1,0:a

    Here is the output of some commands that may help:

        /sbin/biosdev
        0x80 /pci@0,0/pci108e,5352@1f,2/disk@0,0
        0x81 /pci@0,0/pci108e,5352@1f,2/disk@1,0

        ls -l /dev/dsk/c1t?d0s0
        lrwxrwxrwx 1 root root 50 Jul 7 12:01 /dev/dsk/c1t0d0s0 -> ../../devices/pci@0,0/pci108e,5352@1f,2/disk@0,0:a
        lrwxrwxrwx 1 root root 50 Jul 7 12:01 /dev/dsk/c1t1d0s0 -> ../../devices/pci@0,0/pci108e,5352@1f,2/disk@1,0:a

        more /boot/solaris/bootenv.rc
        setprop ata-dma-enabled '1'
        setprop atapi-cd-dma-enabled '0'
        setprop ttyb-rts-dtr-off 'false'
        setprop ttyb-ignore-cd 'true'
        setprop ttya-rts-dtr-off 'false'
        setprop ttya-ignore-cd 'true'
        setprop ttyb-mode '9600,8,n,1,-'
        setprop ttya-mode '9600,8,n,1,-'
        setprop lba-access-ok '1'
        setprop prealloc-chunk-size '0x2000'
        setprop bootpath '/pci@0,0/pci108e,5352@1f,2/disk@0,0:a'
        setprop keyboard-layout 'US-English'
        setprop console 'text'
        setprop altbootpath '/pci@0,0/pci108e,5352@1f,2/disk@1,0:a'

        cat /etc/vfstab
        #device            device             mount              FS       fsck  mount    mount
        #to mount          to fsck            point              type     pass  at boot  options
        #
        fd                 -                  /dev/fd            fd       -     no       -
        /proc              -                  /proc              proc     -     no       -
        #/dev/dsk/c1t0d0s1 -                  -                  swap     -     no       -
        /dev/md/dsk/d1     -                  -                  swap     -     no       -
        /dev/md/dsk/d0     /dev/md/rdsk/d0    /                  ufs      1     no       -
        /devices           -                  /devices           devfs    -     no       -
        sharefs            -                  /etc/dfs/sharetab  sharefs  -     no       -
        ctfs               -                  /system/contract   ctfs     -     no       -
        objfs              -                  /system/object     objfs    -     no       -
        swap               -                  /tmp               tmpfs    -     yes      -

        df -h
        Filesystem                      size  used  avail  capacity  Mounted on
        /dev/md/dsk/d0                  909G  11G   889G   2%        /
        /devices                        0K    0K    0K     0%        /devices
        ctfs                            0K    0K    0K     0%        /system/contract
        proc                            0K    0K    0K     0%        /proc
        mnttab                          0K    0K    0K     0%        /etc/mnttab
        swap                            14G   972K  14G    1%        /etc/svc/volatile
        objfs                           0K    0K    0K     0%        /system/object
        sharefs                         0K    0K    0K     0%        /etc/dfs/sharetab
        /usr/lib/libc/libc_hwcap1.so.1  909G  11G   889G   2%        /lib/libc.so.1
        fd                              0K    0K    0K     0%        /dev/fd
        swap                            14G   40K   14G    1%        /tmp
        swap                            14G   28K   14G    1%        /var/run
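
    A GRUB (legacy) menu can also fail over automatically: a plain "default" plus a "fallback" entry number makes GRUB try the next entry when the first one fails. A minimal sketch - the entry layout is an assumption, and the mirror entry deliberately uses the normal Solaris kernel paths rather than the failsafe ones used in the "alternate boot" entry above (Error 15 usually means one of the named paths does not exist on the slice findroot selected):

        default 0
        fallback 1

        title Solaris 10 (primary, disk1)
        findroot (rootfs1,0,a)
        kernel /platform/i86pc/multiboot
        module /platform/i86pc/boot_archive

        title Solaris 10 (mirror, disk2)
        findroot (rootfs1,1,a)
        kernel /platform/i86pc/multiboot
        module /platform/i86pc/boot_archive

    For the second disk to be bootable at all, its slice also needs GRUB's stage loaders installed, e.g. installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0.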


  • PHP & MySQL on Mac OS X: Access denied for GUI user

    - by Eirik Lillebo
    Hey! This question was first posted to Stack Overflow, but as it is perhaps just as much a server issue I thought it might be just as well to post it here also. I have just installed and configured Apache, MySQL, PHP and phpMyAdmin on my MacBook in order to have a local development environment. But after I moved one of my projects over to the local server I get a weird MySQL error from one of my calls to mysql_query():

        Access denied for user '_securityagent'@'localhost' (using password: NO)

    First of all, the query I'm sending to MySQL is valid, and I've even tested it through phpMyAdmin with perfect results. Secondly, the error only happens here, while I have at least 4 other MySQL connections and queries per page. This call to mysql_query() happens at the end of a really long function that handles data for newly created or modified articles. This is basically what it does:

    1. Collect all the data from the article form (title, content, dates, etc.)
    2. Validate the collected data
    3. Connect to the database
    4. Dynamically build an SQL query based on the validated article data
    5. Send the query to the database before closing the connection

    Pretty basic, I know. I did not recognize the username "_securityagent", so after a quick search I came across an article at Apple's Developer Connection talking about some random bug:

        Mac OS X's security infrastructure gets around this problem by running
        its GUI code as a special user, "_securityagent".

    Then I tried putting a var_dump() on all variables used in the mysql_connect() call, and every time it returns the correct values (where the username is not "_securityagent", of course). Thus I'm wondering if anyone has any idea why '_securityagent' is trying to connect to my database - and how I can keep this error from occurring when I call mysql_query().

    Update: Here is the exact code I'm using to connect to the database. But a little explanation must follow:

    - The connection error happens at a call to mysql_query() in function X in class_1
    - class_1 uses class_2 to connect to the database
    - class_2 reads a config file with the database connection variables (host, user, pass, db)
    - class_2 connects to the database through the following function:

        var $SYSTEM_DB_HOST = "";

        function connect_db() {
            // Reads the config file
            include('system_config.php');
            if (!($SYSTEM_DB_HOST == "")) {
                mysql_connect($SYSTEM_DB_HOST, $SYSTEM_DB_USER, $SYSTEM_DB_PASS);
                @mysql_select_db($SYSTEM_DB);
                return true;
            } else {
                return false;
            }
        }
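
    One likely mechanism, for what it's worth: mysql_query() called without a link identifier uses the last opened connection, and if none is open it silently tries to open a new one using the php.ini defaults - with no password and, in an Apple GUI context, apparently as "_securityagent". So the error suggests the original connection was closed (or never made) by the time this one query runs. A minimal sketch of ruling that out by passing the link around explicitly (variable names follow system_config.php above):

        <?php
        // Keep the connection resource instead of relying on PHP's
        // implicit "current connection".
        $link = mysql_connect($SYSTEM_DB_HOST, $SYSTEM_DB_USER, $SYSTEM_DB_PASS);
        if ($link === false) {
            die('Connect failed: ' . mysql_error());
        }
        mysql_select_db($SYSTEM_DB, $link);

        // Passing $link means this call can never fall back to a default connection.
        $result = mysql_query($sql, $link);
        if ($result === false) {
            die('Query failed: ' . mysql_error($link));
        }

    If the explicit version works, look for a mysql_close() or an out-of-scope connection between connect_db() and the failing query.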


  • DNS and DHCP die after ~2 days of use on ClearOS

    - by TheLQ
    I'm using ClearOS (based on CentOS, so anything specific to CentOS should apply here) as a gateway, DHCP, and DNS server. I had this server running perfectly for a month or two before replacing it with another server. However, due to DNS and DHCP failing 2 days in, and a host of other performance issues (the box was a little underpowered), I changed back to the original server. However, 2 days in, DHCP and DNS are failing again, and I'm out of ideas on why. In both cases, to my knowledge, no network or server changes occurred after installation.

    Right after installing (and at least a day in), DNS and DHCP were working just fine. Then later (day 2) I get a call saying their internet is down (translation: nobody can get to websites because DNS is down). I've tried to fix the problem by checking whether dnsmasq is even running (it is), restarting the service, and restarting the server, to no effect. I do have two internal servers that have static DHCP leases, but one's lease must have expired, as I can't connect to it anymore. I'm hesitant to do any DHCP testing on the last server, as I'll not be able to connect to it anymore.

    Is there anything anyone can think of on why DNS and DHCP would fail 2 days into running perfectly?

    More info: running dnsmasq in debug mode, this is all that's displayed, even when running nslookup quackwall. I'm not sure, though, whether nslookup commands should show up in the log:

        [root@quackwall ~]# /usr/sbin/dnsmasq -dq
        dnsmasq: started, version 2.49 cachesize 150
        dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-I18N DHCP TFTP
        dnsmasq-dhcp: DHCP, IP range 10.0.0.100 -- 10.0.0.254, lease time 12h
        dnsmasq: reading /etc/resolv.conf
        dnsmasq: using nameserver 74.128.17.114#53
        dnsmasq: using nameserver 74.128.19.102#53
        dnsmasq: read /etc/hosts - 5 addresses
        dnsmasq-dhcp: read /etc/ethers - 2 addresses

    On the other server, DNS and the gateway are all configured correctly (10.0.0.2 is quackwall):

        lordquackstar@quackgame:~$ netstat -rn
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        10.0.0.0        0.0.0.0         255.255.240.0   U         0 0          0 eth0
        0.0.0.0         10.0.0.2        0.0.0.0         UG        0 0          0 eth0

        lordquackstar@quackgame:~$ cat /etc/resolv.conf
        nameserver 10.0.0.2
        domain highwow.lan
        search highwow.lan
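
    To answer the side question: with -q, dnsmasq logs every query it receives, so a lookup that produces no log line most likely never reached the daemon (firewall rule, wrong listen interface, or the client asking another resolver). A hedged sketch of making that visible - log-queries is a long-standing dnsmasq option, but check it against your 2.49 build:

        # From a LAN client, force the query at the gateway and watch the debug output:
        dig @10.0.0.2 quackwall

        # Or log permanently via /etc/dnsmasq.conf:
        log-queries
        log-facility=/var/log/dnsmasq.log

    If queries stop appearing in the log exactly when the outage starts, the daemon is fine and the problem is in front of it; if they keep appearing, the upstream nameservers are the next suspects.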


  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: Currently, we use Rackspace cloud servers. We have no intention of stopping, but we would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8 GB of memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price as we pay in one month to rent them on Rackspace Cloud.

    I understand that this is generally a dumb idea. However, in this particular instance the server cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, it's not really a problem. Currently we have access to business-class Verizon FiOS. If I understand correctly, we can get at least 25 dedicated IP addresses with this service, which should be enough.

    Requirements:

    - Each server runs Linux (CentOS 6.3)
    - Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ)
    - Some of the servers are capable of serving static files and Python-driven REST APIs
    - Some of the servers host a Cassandra database cluster
    - One or more of the servers are Redis database servers
    - One or more of the servers are PostgreSQL servers

    Questions:

    - What kind of router or switch is needed? We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for communicating with the servers hosting Redis, which need to be able to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together?
    - Are desktop computers OK for this? We have found that we are mostly RAM-bottlenecked; I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in desktop computers. Will we have problems with the Wi-Fi cards in the desktops or any other unexpected hardware limitation?
    - What tools should be used to "image" the servers? For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with CentOS 6.3 to image the server to a USB drive or something like that, or do we need other software for this? (A sketch follows below.)
    - What other things are we missing that we should be concerned about?

    Thanks so much!
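
    On the imaging question: nothing fancy ships with CentOS, but the base tools can capture a whole disk once a node is configured the way you want. A minimal sketch, with the device and mount paths as placeholder assumptions:

        # Capture a configured node's disk (boot from a live CD so it isn't mounted):
        dd if=/dev/sda bs=4M | gzip > /mnt/usb/redis-node.img.gz

        # Restore onto identical hardware the same way, in reverse:
        gunzip -c /mnt/usb/redis-node.img.gz | dd of=/dev/sda bs=4M

    At any real scale, a purpose-built imager such as Clonezilla, or a kickstart file plus a configuration-management tool, is usually a better fit than raw dd, since dd copies free space too and assumes identical disk sizes.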


  • Slow Memcached: Average 10ms memcached `get`

    - by Chris W.
    We're using New Relic to measure our Python/Django application's performance. New Relic reports that across our system "Memcached" is taking an average of 12 ms to respond to commands. Drilling down into the top dozen or so web views (by number of requests), I can see that some Memcached gets take up to 30 ms; I can't find a single use of Memcached get that returns in less than 10 ms.

    More details on the system architecture:

    - Currently we have four application servers, each of which has a memcached member
    - All four memcached members participate in a memcached cluster
    - We're running on a cloud hosting provider, and all traffic runs across the "internal" network (via "internal" IPs)
    - When I ping from one application server to another, the responses are in ~0.5 ms

    Isn't 10 ms a slow response time for Memcached? As far as I understand, if you think "Memcached is too slow" then "you're doing it wrong". So am I doing it wrong?

    Here's the output of the memcache-top command:

        memcache-top v0.7       (default port: 11211, color: on, refresh: 3 seconds)

        INSTANCE                USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s
        cache1:11211            37.1%   62.7%   10      5.3ms   0.0     73      9       3958    84.6K
        cache2:11211            42.4%   60.8%   11      4.4ms   0.0     46      12      3848    62.2K
        cache3:11211            37.5%   66.5%   12      4.2ms   0.0     75      17      6056    170.4K

        AVERAGE:                39.0%   63.3%   11      4.6ms   0.0     64      13      4620    105.7K

        TOTAL:          0.1GB/  0.4GB           33      13.9ms  0.0     193     38      13.5K   317.2K
        (ctrl-c to quit.)

    Here is the output of the top command on one machine (roughly the same on all cluster machines; as you can see, there is very low CPU utilization, because these machines only run memcached):

        top - 21:48:56 up 1 day,  4:56,  1 user,  load average: 0.01, 0.06, 0.05
        Tasks:  70 total,   1 running,  69 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.3%st
        Mem:    501392k total,   424940k used,    76452k free,    66416k buffers
        Swap:   499996k total,    13064k used,   486932k free,   181168k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         6519 nobody    20   0  384m  74m  880 S  1.0 15.3  18:22.97 memcached
            3 root      20   0     0    0    0 S  0.3  0.0   0:38.03 ksoftirqd/0
            1 root      20   0 24332 1552  776 S  0.0  0.3   0:00.56 init
            2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
            4 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kworker/0:0
            5 root      20   0     0    0    0 S  0.0  0.0   0:00.02 kworker/u:0
            6 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
            7 root      RT   0     0    0    0 S  0.0  0.0   0:00.62 watchdog/0
            8 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 cpuset
            9 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 khelper
        ...output truncated...
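
    One way to narrow this down is to time raw gets yourself, bypassing Django's cache layer, so you can tell whether the ~10 ms lives in the server, the network, or the application's instrumentation. A minimal sketch, assuming the python-memcached client and the node names from memcache-top above:

        # time_get.py - run from an application server
        import time
        import memcache  # python-memcached

        mc = memcache.Client(['cache1:11211'])
        mc.set('probe', 'x' * 100)

        N = 1000
        start = time.time()
        for _ in range(N):
            mc.get('probe')
        total = time.time() - start
        print('avg get: %.3f ms' % (total / N * 1000.0))

    If this loop shows sub-millisecond gets while New Relic still reports ~10 ms, the gap is probably in how the application uses the cache (many small sequential gets per request, large pickled values) rather than in memcached itself.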


  • VMware Tools not installing, with an error

    - by JDS
    VMware Tools is not installing on Ubuntu 12.04. I'm using Chef to manage the installation, but the apt commands fail if run manually. I'm using the VMware Tools Debian repo. Example:

        $ cat /etc/apt/sources.list.d/vmware-tools-source.list
        deb http://packages.vmware.com/tools/esx/5.0u2/ubuntu precise main

    When trying to install, most packages seem to go OK, but one, "vmware-tools-foundation", does not. Example:

        $ apt-get -q -y install vmware-tools-esx-nox=8.6.10-1.precise
        Reading package lists...
        Building dependency tree...
        Reading state information...
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         vmware-tools-esx-kmods-3.2.0-23-generic : Depends: vmware-tools-foundation (>= 8.6.10) but it is not going to be installed
         vmware-tools-esx-nox : Depends: ...snip list of deps...
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

        $ apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          vmware-tools-foundation
        The following NEW packages will be installed:
          vmware-tools-foundation
        0 upgraded, 1 newly installed, 0 to remove and 118 not upgraded.
        7 not fully installed or removed.
        Need to get 0 B/5,886 B of archives.
        After this operation, 86.0 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 103499 files and directories currently installed.)
        Unpacking vmware-tools-foundation (from .../vmware-tools-foundation_8.6.10-1.precise_all.deb) ...
        VMware Tools cannot install because it appears that another installation
        of VMware Tools is already present. Please remove the previous installation
        and then attempt to install this copy of VMware Tools again.
        dpkg: error processing /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        Errors were encountered while processing:
         /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The key seems to be this error: "VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again."

    However, I've tried removing and purging and can't seem to "trick" VMware Tools into thinking the packages are gone. Apt thinks they are gone. Is there some service/file/cache/lock left that VMware Tools sees that makes it think VMware Tools is still installed? I've googled and googled, but there is no answer to this question with my particular circumstances on the interwebs. VMware's documentation of this error is minimal.
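
    A hedged guess at the leftover state: the package's pre-installation script is probably detecting the remains of an old tarball (vmware-install.pl-style) install, which dpkg knows nothing about, so purging packages can never clear it. A sketch of clearing that state - verify these paths exist on your VM before deleting anything:

        # If the old tarball uninstaller is still present, let it clean up properly:
        sudo /usr/bin/vmware-uninstall-tools.pl

        # Otherwise, remove the directories a tarball install leaves behind:
        sudo rm -rf /etc/vmware-tools /usr/lib/vmware-tools

        # Then let apt finish what it started:
        sudo apt-get -f install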


  • Can't get Apache 2.2.21 to compile with OpenSSL support

    - by angstwad
    Alright -- I'm having a bad couple of days here compiling Apache 2.2.21 on CentOS 5.7 with the following configure command:

        ./configure --enable-ssl=shared --with-ssl=/usr/local/openssl

    I've compiled OpenSSL 1.0.0e from source:

        ./config --prefix=/usr/local --openssldir=/usr/local/openssl shared zlib-dynamic

    I attempt to start Apache and it returns:

        httpd: Syntax error on line 54 of /usr/local/apache2/conf/httpd.conf:
        Cannot load /usr/local/apache2/modules/mod_ssl.so into server:
        /usr/local/apache2/modules/mod_ssl.so: undefined symbol: SSL_get_servername

    If I look at how the libraries are linked, this is what I get:

        [root@web1 modules]# ldd mod_ssl.so
            libssl.so.6 => /lib64/libssl.so.6 (0x00002aaaaace4000)
            libcrypto.so.6 => /lib64/libcrypto.so.6 (0x00002aaaaaf30000)
            libdl.so.2 => /lib64/libdl.so.2 (0x00002aaaab281000)
            libz.so.1 => /lib64/libz.so.1 (0x00002aaaab486000)
            libpthread.so.0 => /lib64/libpthread.so.0 (0x00002aaaab69a000)
            libc.so.6 => /lib64/libc.so.6 (0x00002aaaab8b5000)
            libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00002aaaabc0e000)
            libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00002aaaabe3c000)
            libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00002aaaac0d1000)
            libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00002aaaac2d4000)
            /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
            libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00002aaaac4f9000)
            libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00002aaaac702000)
            libresolv.so.2 => /lib64/libresolv.so.2 (0x00002aaaac904000)
            libselinux.so.1 => /lib64/libselinux.so.1 (0x00002aaaacb19000)
            libsepol.so.1 => /lib64/libsepol.so.1 (0x00002aaaacd32000)

    Basically, I've tried compiling OpenSSL from source (both 0.9.8r and 1.0.0e), having yum reinstall from the repos, doing a make clean, and remaking both OpenSSL and Apache numerous times -- but I can't get it to compile into the Apache base or dynamically as a shared object file. What am I doing wrong here?

    Update 1: After doing a make clean and make distclean, I've reconfigured with the same parameters as above without any effect. The config.log is at Pastebin.

    Update 2: Modifying the LD_LIBRARY_PATH had no effect on the lib deps of mod_ssl.so.

    Update 3: I've compiled and recompiled many times, and verified with ldconfig that the OpenSSL libs dir is in my path and included in ld.so.conf. Still cannot get httpd/mod_ssl to load the library at runtime.
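
    For what it's worth, the ldd output is the telling part: mod_ssl.so is binding to the system's /lib64/libssl.so.6 (CentOS 5's OpenSSL 0.9.8e, which likely predates SSL_get_servername) instead of the 1.0.0e build in /usr/local. A hedged sketch of pinning the link to the local build with an rpath - paths assume the prefixes used above:

        make distclean
        LDFLAGS="-L/usr/local/lib -Wl,-rpath,/usr/local/lib" \
            ./configure --enable-ssl=shared --with-ssl=/usr/local/openssl
        make && make install

        # Verify before starting Apache; this should now point into /usr/local:
        ldd /usr/local/apache2/modules/mod_ssl.so | grep libssl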


  • How to move a Ruby on Rails application to a new server

    - by ManiacZX
    I have a Rails app on an old Ubuntu server that I need to move onto a new machine. I haven't worked with Ruby on Rails, so I don't really know anything about the structure of the app. I want to load this onto an Ubuntu 8.04 AMI on Amazon EC2 and am looking for any information regarding the migration process, such as:

    - Do I copy over the entire folder defined as the application root in the Mongrel config (for example /u/apps/myapp/current), or just certain folders?
    - Am I looking for trouble if I go with the latest versions of Ruby and the various gems?
    - Any general gotchas to look out for in the process.

    Current server information (a migration sketch follows below):

        root@webnode001:/# cat /proc/version
        Linux version 2.6.15-27-server (buildd@terranova) (gcc version 4.0.3 (Ubuntu 4.0.3-1ubuntu5)) #1 SMP Fri Dec 8 18:43:54 UTC 2006
        root@webnode001:/# rails -v
        Rails 1.2.3
        root@webnode001:/# mongrel_rails cluster::configure --version
        Version 1.0.1
        root@webnode001:/# gem -v
        0.9.0
        root@webnode001:/# gem list -l

        *** LOCAL GEMS ***

        actionmailer (1.3.3, 1.2.5)
            Service layer for easy email delivery and testing.
        actionpack (1.13.3, 1.12.5)
            Web-flow and rendering framework putting the VC in MVC.
        actionwebservice (1.2.3, 1.1.6)
            Web service support for Action Pack.
        activerecord (1.15.3, 1.15.2, 1.14.4)
            Implements the ActiveRecord pattern for ORM.
        activesupport (1.4.2, 1.4.1, 1.3.1)
            Support and utility classes used by the Rails framework.
        cgi_multipart_eof_fix (2.1)
            Fix an exploitable bug in CGI multipart parsing which affects Ruby <= 1.8.5 when multipart boundary attribute contains a non-halting regular expression string.
        daemons (1.0.7, 1.0.5, 1.0.4, 1.0.2)
            A toolkit to create and control daemons in different ways
        eventmachine (0.7.2, 0.7.0)
            Ruby/EventMachine socket engine library
        fastercsv (1.2.0, 1.1.0)
            FasterCSV is CSV, but faster, smaller, and cleaner.
        fastthread (1.0)
            Optimized replacement for thread.rb primitives
        ferret (0.11.4)
            Ruby indexing library.
        gem_plugin (0.2.2, 0.2.1)
            A plugin system based only on rubygems that uses dependencies only
        mongrel (1.0.1, 0.3.13.4)
            A small fast HTTP library and server that runs Rails, Camping, Nitro and Iowa apps.
        mongrel_cluster (0.2.1)
            Mongrel plugin that provides commands and Capistrano tasks for managing multiple Mongrel processes.
        mysql (2.7)
            MySQL/Ruby provides the same functions for Ruby programs that the MySQL C API provides for C programs.
        piston (1.3.3)
            Piston is a utility that enables merge tracking of remote repositories.
        rails (1.2.3, 1.1.6)
            Web-application framework with template engine, control-flow layer, and ORM.
        rake (0.7.3, 0.7.1)
            Ruby based make-like utility.
        sources (0.0.1)
            This package provides download sources for remote gem installation
        swiftiply (0.5.1)
            A fast clustering proxy for web applications.
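
    A minimal sketch of the conservative route - copy everything and pin the old versions rather than chasing the latest (Rails 1.2 apps generally do not run on newer major versions without code changes). Hostnames and paths are placeholders:

        # On the old server: copy the whole app root, following the
        # Capistrano-style layout shown above. -a preserves permissions
        # and symlinks such as "current".
        rsync -az /u/apps/myapp/ newhost:/u/apps/myapp/

        # On the new server: install the same gem versions the old box reports.
        gem install rails -v 1.2.3
        gem install mongrel -v 1.0.1
        gem install mongrel_cluster -v 0.2.1
        gem install mysql -v 2.7

    Don't forget the parts that live outside the app root: the MySQL database (mysqldump and restore), config/database.yml credentials, and the mongrel_cluster config the old server starts from.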


  • Starting/Stopping Custom PHP Chat Server Linux Service (CentOS)

    - by chad
    I have been trying all night to get this service working properly. I created this script from a template and am very new to bash coding. I wrote a fully functioning chat server in PHP which runs endlessly, but now I want to make it a dedicated service, so that it starts on server boot and comes back up if possible after any server down-time. The issue is that I need this thing to run in a detached screen so that I can monitor packet data or send server commands via SSH when need be. The main problem I'm having is that it needs to have its own PID when it starts so that I can stop/restart it when needed. I am the type who grinds on coding until I figure it out, but this is so new to me that the learning curve here is very steep and frustrating. Below is my code, if anybody can please help me with this one; I've gotten so tired I can't even concentrate any more :(

        #!/bin/sh
        #
        # chatserver
        #
        # chkconfig: 345 20 90
        # description: chatServer Linux Service Daemon \
        #              for general server handling
        ### BEGIN INIT INFO
        # Provides: chatserver
        # Required-Start: $local_fs $network $named $syslog
        # Required-Stop: $local_fs $syslog
        # Default-Start: 3 4 5
        # Default-Stop: 0 1 2 6
        # Short-Description: This service maintains the chatServer
        # Description: chatServer Linux Service Daemon
        #              for general server handling
        ### END INIT INFO

        # Source function library.
        . /etc/rc.d/init.d/functions

        exec="screen php -q /var/www/html/chatServer.php"
        prog="chatserver"
        config="/etc/sysconfig/$prog"
        pidfile="/var/run/chatserver.pid"

        [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

        lockfile=/var/lock/subsys/$prog

        start() {
            #$exec || exit 5
            echo -n $"Starting $prog: "
            daemon $exec --name=$exec --pidfile=$pidfile
            retval=$?
            echo
            [ $retval -eq 0 ] && touch $lockfile
            return $retval
        }

        stop() {
            echo -n $"Stopping $prog: "
            killproc -p $pidfile
            rm -f $pidfile
            retval=$?
            echo
            [ $retval -eq 0 ] && rm -f $lockfile
            return $retval
        }

        restart() {
            stop
            start
        }

        reload() {
            restart
        }

        force_reload() {
            restart
        }

        rh_status() {
            # run checks to determine if the service is running or use generic status
            status $prog
        }

        rh_status_q() {
            rh_status >/dev/null 2>&1
        }

        case "$1" in
            start)
                rh_status_q && exit 0
                $1
                ;;
            stop)
                rh_status_q || exit 0
                $1
                ;;
            restart)
                $1
                ;;
            reload)
                rh_status_q || exit 7
                $1
                ;;
            force-reload)
                force_reload
                ;;
            status)
                rh_status
                ;;
            condrestart|try-restart)
                rh_status_q || exit 0
                restart
                ;;
            *)
                echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
                exit 2
        esac
        exit $?
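
    For what it's worth, the usual sticking point is that daemon and screen both want to manage the process, and plain screen stays attached. A sketch of a start() that detaches screen and records a usable PID - the session name and pgrep pattern are assumptions to adapt:

        start() {
            echo -n $"Starting $prog: "
            # -dmS launches screen detached with a named session; reattach
            # later over SSH with: screen -r chatserver
            screen -dmS chatserver php -q /var/www/html/chatServer.php
            retval=$?
            # Record the PHP process's PID so killproc/status can find it.
            pgrep -f chatServer.php > "$pidfile"
            echo
            [ $retval -eq 0 ] && touch $lockfile
            return $retval
        }

    With the PID file populated, stop() can keep using killproc -p $pidfile. For automatic restarts after crashes, a supervisor such as monit (or an inittab respawn entry) is the usual complement to a plain init script on CentOS of this era.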


  • postfix error: open database /var/lib/mailman/data/aliases.db: No such file

    - by Thufir
    In trying to follow the Ubuntu guide for Postfix and Mailman, I do not understand these directions:

        This build of mailman runs as list. It must have permission to read
        /etc/aliases and read and write /var/lib/mailman/data/aliases.
        Do this with these commands:

        sudo chown root:list /var/lib/mailman/data/aliases
        sudo chown root:list /etc/aliases

        Save and run:

        sudo newaliases

    I'm getting this kind of error:

        root@dur:~# telnet localhost 25
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 dur.bounceme.net ESMTP Postfix (Ubuntu)
        ehlo dur
        250-dur.bounceme.net
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
        quit
        221 2.0.0 Bye
        Connection closed by foreign host.

        root@dur:~# tail /var/log/mail.log
        Aug 28 01:16:43 dur postfix/master[19444]: terminating on signal 15
        Aug 28 01:16:43 dur postfix/postfix-script[19558]: starting the Postfix mail system
        Aug 28 01:16:43 dur postfix/master[19559]: daemon started -- version 2.9.1, configuration /etc/postfix
        Aug 28 01:16:45 dur postfix/postfix-script[19568]: stopping the Postfix mail system
        Aug 28 01:16:45 dur postfix/master[19559]: terminating on signal 15
        Aug 28 01:16:45 dur postfix/postfix-script[19673]: starting the Postfix mail system
        Aug 28 01:16:45 dur postfix/master[19674]: daemon started -- version 2.9.1, configuration /etc/postfix
        Aug 28 01:17:22 dur postfix/smtpd[19709]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory
        Aug 28 01:17:22 dur postfix/smtpd[19709]: connect from localhost[127.0.0.1]
        Aug 28 01:18:37 dur postfix/smtpd[19709]: disconnect from localhost[127.0.0.1]

        root@dur:~# postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        default_transport = smtp
        home_mailbox = Maildir/
        inet_interfaces = loopback-only
        mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
        mailbox_size_limit = 0
        mailman_destination_recipient_limit = 1
        mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost
        myhostname = dur.bounceme.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        readme_directory = no
        recipient_delimiter = +
        relay_domains = lists.dur.bounceme.net
        relay_transport = relay
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/dovecot-auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_tls_auth_only = yes
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
        smtpd_tls_mandatory_ciphers = medium
        smtpd_tls_mandatory_protocols = SSLv3, TLSv1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport

    I am wondering what the connection might be. I do see that I don't have the requisite files:

        root@dur:~# ll /var/lib/mailman/data/aliases
        ls: cannot access /var/lib/mailman/data/aliases: No such file or directory

    At what stage were those aliases created? How can I create them? Is that what's causing the error "open database /var/lib/mailman/data/aliases.db: No such file or directory"?
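
    The mailman aliases file is generated by Mailman itself, not by newaliases, which only rebuilds /etc/aliases.db. A sketch of creating it, assuming the Debian/Ubuntu package layout:

        # Generate /var/lib/mailman/data/aliases (and aliases.db) from the defined lists:
        sudo /usr/lib/mailman/bin/genaliases

        # Then apply the ownership from the guide and rebuild both maps:
        sudo chown root:list /var/lib/mailman/data/aliases*
        sudo newaliases
        sudo postfix reload

    Note that genaliases only has something to write once the site list exists (sudo newlist mailman), so create that first if you haven't.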


  • Why is the installation of certain programs always such a pain in Linux [closed]

    - by Saif Bechan
    I am new to Linux and I am trying to set up a server. For this I sometimes need to install special software, but the installation is always such a pain. For example, I wanted to try htscanner to see if it did the job for me. When I got to the page, there was NO installation guide; I had to search for the right one on Google, and even on Google it's a pain to find the right method. After a long search and trying different things, I finally found out that I had to install some more software before it would work. The website says that the version I used did not have any dependencies ("Release 0.8.1: No dependencies registered"). That's a lie - you do need certain things for it to work. After managing to set it up, it still didn't work, and I can't figure out why, because there is no official guide on the website. So I wanted to just uninstall it and find a better solution.

    Uninstalling. How uninstalling something works in Linux is a real mystery. The best answer I got is to manually look for the files and delete them. What's up with that?! There is never anything said about uninstalling on the websites. Even the CentOS website itself tells you how to install something like the RPMforge packages (it's a miracle they tell you and you don't have to google it), but there is no mention of what to do when you want to uninstall. Why not?

    The forums you land on when trying to solve your problem are most of the time in plain text, and you have to scroll through huge error logs before you see something that vaguely resembles your question, if you are lucky.

    The question: My question is whether there are any recommended websites/forums that explain the basic concepts of installing and uninstalling software on Linux, and other useful operations. Not Wikipedia or the first hits on Google - I have been there already. I am looking for easy-to-read guides on these operations in Linux. I have been on a lot of websites that explain some Linux operation, but I bet it's easier to get a degree in rocket science than to read through those websites and understand what they are trying to say.
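
    For what it's worth, package managers do make this symmetric when the software is packaged; a small sketch (htscanner itself happens to be distributed through PECL, PHP's extension channel, if that is the version you tried):

        # PECL extensions install and uninstall with the same tool:
        pecl install htscanner     # then add extension=htscanner.so to php.ini
        pecl uninstall htscanner

        # Regular packages: yum/rpm record every file, so removal is clean:
        yum install some-package
        rpm -ql some-package       # list exactly which files it installed
        yum remove some-package

    The "hunt down the files yourself" situation mostly arises with software built from source; the usual defense is checkinstall or building a proper RPM, so the package manager can track it.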


  • Add Mirror for volumes other than the last one in Windows 7 (disk "not up-to-date")

    - by rakslice
    I'm using Windows 7 x64 Ultimate. I have an existing 4 TB disk with 3 NTFS volumes, a new 3 TB blank disk, and I'm trying to mirror the volumes onto the new disk. My Windows install is on an SSD, which is Disk 0. The 4 TB disk with volumes is Disk 1, and the new blank disk is Disk 2. I can add a mirror successfully for the last volume, but when I try to add a mirror for the first volume I immediately get errors (see below). Is there something special I need to do to add a mirror for a volume other than the last one?

    More info: I opened Disk Management, right-clicked on the first volume on the existing disk, went to Add Mirror, and selected the new disk. The first time I did this, I was prompted to convert the new disk to a dynamic disk, which I approved. Subsequently I got this message:

        The operation failed to complete because the Disk Management console view is
        not up-to-date. Refresh the view by using the refresh task. If the problem
        persists close the Disk Management console, then restart Disk Management or
        restart the computer.

    I've refreshed Disk Management, restarted the computer, and converted the new disk to basic and back to dynamic, but I still get that error message. Looking around for a workaround, I saw a suggestion to use the diskpart command-line tool. Running diskpart from the Start Menu as Administrator, I did "select volume 2" (the first volume I want to mirror) and then "add disk=2" (the new disk), and received a somewhat similar error:

        Virtual Disk Service error:
        The disk's extent information is corrupted.

        DiskPart has referenced an object which is not up-to-date.
        Refresh the object by using the RESCAN command.
        If the problem persists exit DiskPart, then restart DiskPart or restart the computer.

    A rescan appears to be successful:

        DISKPART> select disk 2

        Disk 2 is now the selected disk.

        DISKPART> rescan

        Please wait while DiskPart scans your configuration...

        DiskPart has finished scanning your configuration.

    But attempting to add the mirror again resulted in the same error. The only similar report I found online was this: http://www.sevenforums.com/hardware-devices/335780-unable-mirror-all-but-last-partition-drive.html. Based on that, I attempted to mirror the last volume on the disk to the new disk using diskpart, and that started successfully - it is currently resynchronizing.

    More background: In the course of dealing with a failing 3 TB hard drive, I bought a replacement 4 TB drive and installed it, copied the partitions from the failing drive to it using MiniTool Partition Wizard Home, and then removed the failing drive and was up and running again normally. Now I've received a warranty replacement for the failing drive, installed it, and I'm attempting to mirror my partitions to it.


  • Trouble setting up PATH for Java on Debian

    - by milkmansrevenge
    I am trying to get Oracle Java 7 update 3 working correctly on Debian 6. I have downloaded and set up the files in /usr/java/jre1.7.0_03. I have also set the following two lines at the end of /etc/bash.bashrc:

        export JAVA_HOME=/usr/java/jre1.7.0_03
        export PATH=$PATH:$JAVA_HOME/bin

    Logging in as other users and root is fine; Java can be found:

        chris@mc:~$ java -version
        java version "1.7.0_03"
        Java(TM) SE Runtime Environment (build 1.7.0_03-b04)
        Java HotSpot(TM) 64-Bit Server VM (build 22.1-b02, mixed mode)

    However, there are two cases where Java cannot be found, as detailed below. Note that both of these worked fine when I had previously installed OpenJDK Java 6 via aptitude, but I need Oracle Java 7 for various reasons.

    Most importantly, I cannot run commands as another user via su, despite the PATH showing that Java should be present. The user was created with adduser chris:

        root@mc:~# su chris -c "echo $PATH"
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/java/jre1.7.0_03/bin:/bin
        root@mc:~# su chris -c "java -version"
        bash: java: command not found
        root@mc:~# su chris -c "/usr/java/jre1.7.0_03/bin/java -version"
        java version "1.7.0_03"
        ...

    How can it be in the PATH but not be found?

    Update 05/04/2012: explained by Daniel - it's to do with it being a non-interactive shell, so files such as /etc/profile and /etc/bash.bashrc are not executed. Doing a full switch to that user and running Java works:

        root@mc:~# su chris
        chris@mc:/root$ java -version
        java version "1.7.0_03"
        ...

    I run a script on start-up which exhibits similar but slightly different problems. The script is located at /etc/init.d/start-mystuff.sh and calls a jar:

        #!/bin/bash
        # /etc/init.d/start-mystuff.sh
        java -jar /opt/Mars.jar

    I can confirm that the script runs on start-up, and the exit code is 127, which indicates command not found. Inserting a line to print/save the PATH shows that it is:

        /sbin:/usr/sbin:/bin:/usr/bin

    This second problem isn't as important, because I can just point directly at the Java executable in the script, but I am still curious!

    I have tried setting the full PATH and JAVA_HOME explicitly in /etc/environment, which didn't help. I have also tried setting them in /etc/profile, which doesn't seem to help either. I have tried logging in and out again after setting PATH in the various locations (duh!).

    Anyway, long post for what will probably have a simple one-line solution :( Any help with this would be greatly appreciated; I have spent far too long trying to fix it by myself.

    Motivation: The first problem may seem obscure, but on my system I have users that are not allowed SSH access, yet I still want to run processes as them. I have a ton of scripts operating in this way and don't want to have to change them all.
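
    Two details worth separating here. First, su chris -c "echo $PATH" is misleading evidence: the double quotes make the calling shell expand $PATH before su ever runs, so what prints is root's PATH, not what chris's non-interactive shell actually gets. Second, non-interactive shells (su -c, init scripts) never read /etc/profile or /etc/bash.bashrc, so a PATH set there doesn't apply. A hedged sketch of both fixes:

        # 1) Use a login shell so the profile files are read (note the dash),
        #    and single quotes so $PATH expands in the target shell:
        su - chris -c 'java -version'
        su - chris -c 'echo $PATH'

        # 2) In init scripts, don't inherit PATH at all - set it explicitly:
        #!/bin/bash
        # /etc/init.d/start-mystuff.sh
        JAVA_HOME=/usr/java/jre1.7.0_03
        PATH="$JAVA_HOME/bin:$PATH"
        export JAVA_HOME PATH
        java -jar /opt/Mars.jar

    /etc/environment is applied by PAM at login time, which is why it helps interactive sessions but not a boot-time init script.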


  • Need help making site available externally

    - by White Island
    I'm trying to open a hole in the firewall (ASA 5505, v8.2) to allow external access to a web application. Via ASDM (6.3?) I've added the server as a Public Server, which creates a static NAT entry (I'm using the public IP that is assigned to "dynamic NAT--outgoing" for the LAN, after confirming on the Cisco forums that it wouldn't bring everyone's access crashing down) and an incoming rule "any... public_ip... https... allow", but traffic is still not getting through. When I look at the log viewer, it says it's denied by access-group outside_access_in, implicit rule, which is "any any ip deny".

    I haven't had much experience with Cisco management, and I can't see what I'm missing to allow this connection through. I'm wondering if there's anything else special I have to add. I tried adding a rule (several variations) within that access-group to allow https to the server, but it never made a difference. Maybe I haven't found the right combination? :P I also made sure the Windows firewall is open on port 443, although I'm pretty sure the current problem is Cisco, because of the logs. :)

    Any ideas? If you need more information, please let me know. Thanks

    Edit: First of all, I had this backward. (Sorry.) Traffic is being blocked by access-group "inside_access_out", which is what confused me in the first place; I guess I confused myself again in the midst of typing the question. Here, I believe, is the pertinent configuration. Please let me know what you see wrong:

        access-list acl_in extended permit tcp any host PUBLIC_IP eq https
        access-list acl_in extended permit icmp CS_WAN_IPs 255.255.255.240 any
        access-list acl_in remark Allow Vendor connections to LAN
        access-list acl_in extended permit tcp host Vendor any object-group RemoteDesktop
        access-list acl_in remark NetworkScanner scan-to-email incoming (from smtp.mail.microsoftonline.com to PCs)
        access-list acl_in extended permit object-group TCPUDP any object-group Scan-to-email host NetworkScanner object-group Scan-to-email
        access-list acl_out extended permit icmp any any
        access-list acl_out extended permit tcp any any
        access-list acl_out extended permit udp any any
        access-list SSLVPNSplitTunnel standard permit LAN_Subnet 255.255.255.0
        access-list nonat extended permit ip VPN_Subnet 255.255.255.0 LAN_Subnet 255.255.255.0
        access-list nonat extended permit ip LAN_Subnet 255.255.255.0 VPN_Subnet 255.255.255.0
        access-list inside_access_out remark NetworkScanner Scan-to-email outgoing (from scanner to Internet)
        access-list inside_access_out extended permit object-group TCPUDP host NetworkScanner object-group Scan-to-email any object-group Scan-to-email
        access-list inside_access_out extended permit tcp any interface outside eq https

        static (inside,outside) PUBLIC_IP LOCAL_IP[server object] netmask 255.255.255.255

    I wasn't sure if I needed to reverse that "static" entry, since I got my question mixed up... and also with that last access-list entry, I tried interface inside and outside - neither proved successful. And I wasn't sure about whether it should be www, since the site is running on https; I assumed it should only be https.
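
    For comparison, a hedged sketch of the minimal pieces an inbound HTTPS publish needs on pre-8.3 ASA code - the names are placeholders, and the key detail is that an ACL does nothing until an access-group binds it to an interface:

        ! Translate the public address to the internal server
        static (inside,outside) PUBLIC_IP LOCAL_IP netmask 255.255.255.255

        ! Permit the traffic inbound on the outside interface
        ! (on 8.2 the destination is the PUBLIC address, not the private one)
        access-list acl_in extended permit tcp any host PUBLIC_IP eq https
        access-group acl_in in interface outside

    Given the edit above, the deny by "inside_access_out" suggests that ACL is applied in the "out" direction on the inside interface; if so, the inbound connection also has to be permitted there, toward the server's private address:

        access-list inside_access_out extended permit tcp any host LOCAL_IP eq https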


  • Windows 7 file-based backup service

    - by Ben Voigt
    I'm looking for a good replacement for Lazy Mirror, since it doesn't support Windows 7 well.

    Pros:

    - One of the things I really loved about Lazy Mirror is that it always maintains a "full" backup, but does so by only copying modified files. As each file was copied, the old version got archived (moved to an out-of-the-way location). So after mirroring ran, there'd be a complete copy of the file system, which could even be booted if necessary. At the same time, extra space on the backup media was used to store as many older versions of files as possible, without wasting space storing multiple copies of the same version. It seems that with Windows 7 backup, there'd be wasted space storing the same data in both the system image and the file backup.
    - It was completely file-based, but also aware of the registry (it had a feature to dump the live registry to hive files in the correct format).
    - The backups were normal NTFS filesystems; no special tool was needed to read them.
    - It automatically cleaned out the oldest previous versions when space ran out (unlike Windows 7 backup, which apparently simply starts failing when the backup media fills).
    - It copied all file attributes, including security.

    Cons:

    - It doesn't deal well with junction points, symbolic links, and hard links.
    - It didn't run as a service without lots of help from firesrv or srvany, and then you couldn't interact with the GUI. Running as a service was necessary to be able to mirror protected OS files.
    - It didn't have open-file handling, except for registry hives.
    - I guess the file-by-file archive and replacement could leave mismatched sets of files if the mirror was interrupted. This would be the advantage of incremental backup techniques that require the old full backup plus all intermediate incremental backups to restore. But I don't see this as much of a problem: you'd really only have a boot failure if you had a mixture of pre- and post-service-pack files, and I can run a full image backup using another tool before applying a service pack.

    Does anyone know of a tool that does both full-system backup and storage of old versions of files like Lazy Mirror did (without storing the same data multiple times), and can also run as a service in Windows 7? Free is best of course, but a reasonably priced paid program would be fine too. (It would be absolutely awesome if it also triggered a backup/mirror pass when a particular external drive was plugged in, and generated popup warnings if backups hadn't been run recently.)


  • Installing Phusion Passenger 4.0.20 on Ubuntu 13.10

    - by tempestfire2002
    So I'm trying to install Passenger on the newest version of Kubuntu (13.10). I installed Apache2 from the apache2-mpm-worker package using the Muon package manager, and then ran these commands:

        rvmsudo gem install passenger
        rvmsudo passenger-install-apache2-module

    But I keep getting the following errors:

        [Fri Oct 18 15:52:13.227790 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_LOCK_DIR} is not defined
        [Fri Oct 18 15:52:13.227933 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_PID_FILE} is not defined
        [Fri Oct 18 15:52:13.227969 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_RUN_USER} is not defined
        [Fri Oct 18 15:52:13.227991 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_RUN_GROUP} is not defined
        [Fri Oct 18 15:52:13.228026 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Oct 18 15:52:13.231737 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_RUN_DIR} is not defined
        [Fri Oct 18 15:52:13.232760 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Oct 18 15:52:13.233043 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Oct 18 15:52:13.233078 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        AH00526: Syntax error on line 74 of /etc/apache2/apache2.conf:
        Invalid Mutex directory in argument file:${APACHE_LOCK_DIR}
        --------------------------------------------
        WARNING: Apache doesn't seem to be compiled with the 'prefork', 'worker' or 'event' MPM

        Phusion Passenger has only been tested on Apache with the 'prefork', the
        'worker' and the 'event' MPM. Your Apache installation is compiled with
        the '' MPM. We recommend you to abort this installer and to recompile
        Apache with either the 'prefork', the 'worker' or the 'event' MPM.

        Press Ctrl-C to abort this installer (recommended).
        Press Enter if you want to continue with installation anyway.

    The result of running apache2ctl -V is:

        Server version: Apache/2.4.6 (Ubuntu)
        Server built:   Aug  9 2013 14:31:04
        Server's Module Magic Number: 20120211:23
        Server loaded:  APR 1.4.8, APR-UTIL 1.5.2
        Compiled using: APR 1.4.8, APR-UTIL 1.5.2
        Architecture:   32-bit
        Server MPM:     worker
          threaded:     yes (fixed thread count)
            forked:     yes (variable process count)
        Server compiled with....
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=256
         -D HTTPD_ROOT="/etc/apache2"
         -D SUEXEC_BIN="/usr/lib/apache2/suexec"
         -D DEFAULT_PIDLOG="/var/run/apache2.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="mime.types"
         -D SERVER_CONFIG_FILE="apache2.conf"

    As can be seen, the server is compiled with the worker MPM, so why is Passenger complaining? And how do I solve the above errors (warnings, really, but to be safe I'd like not to have any warnings)? Thanks.
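
    A hedged reading of those warnings: the installer is running apache2 directly, without the /etc/apache2/envvars file that Debian/Ubuntu's init scripts source to define APACHE_LOCK_DIR and friends; with the config unreadable, the MPM probe then comes back empty. A sketch of loading the variables first - whether rvmsudo forwards your environment depends on sudo's env_keep settings, so treat the -E as an assumption to verify:

        # Load the variables Debian's apache2.conf expects, then re-run:
        . /etc/apache2/envvars
        rvmsudo -E passenger-install-apache2-module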


  • Areca 1280ml RAID6 volume set failed

    - by Richard
    Today we hit some kind of worst-case scenario, and we are open to any kind of good idea. Here is our problem: we are using several dedicated storage servers to host our virtual machines. Before I continue, here are the specs:

    - Dedicated server machine
    - Areca 1280ml RAID controller, firmware 1.49
    - 12x Samsung 1 TB HDDs

    We configured one RAID 6 set with 10 discs, containing one logical volume. We have two hot spares in the system.

    Today one HDD failed. This happens from time to time, so we replaced it. Upon rebuilding, a second disc failed. Normally this is no fun. We stopped heavy IO operations to ensure a stable RAID rebuild. Sadly the hot-spare disc failed while rebuilding, and the whole thing stopped. Now we have the following situation:

    - The controller says that the RAID set is rebuilding
    - The controller says that the volume failed

    It is a RAID 6 system and two discs failed, so the data has to be intact, but we cannot bring the volume online again to access the data. While searching, we found the following leads. I don't know whether they are good or bad:

    1. Mirroring all the discs to a second set of drives, so we would have the possibility to try different things without losing more than we already have. (A sketch follows below.)
    2. Trying to rebuild the array in R-Studio. But we have no real experience with the software.
    3. Pulling all drives, rebooting the system, changing into the Areca controller BIOS, and reinserting the HDDs one by one. Some people say that they brought the system online this way. Some say the effect is zero. Some say they blew the whole thing up.
    4. Using undocumented Areca commands like "rescue" or "LeVel2ReScUe".
    5. Contacting a computer forensics service. But whoa... primary estimates by phone exceeded 20,000€.

    That's why we would kindly ask for help. Maybe we are missing the obvious? And yes, of course, we have backups. But some systems lost one week of data; that's why we'd like to get the system up and running again.

    Any help, suggestions and questions are more than welcome.
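
    On option 1, GNU ddrescue is the usual tool for this, since it tolerates read errors and keeps a map of what it could not recover. A minimal sketch - the device names are placeholders, and each source disc should be cloned to a known-good disc of at least the same size:

        # Clone one array member; repeat per disc with its own map file.
        # -f is required when writing directly to a block device.
        ddrescue -f /dev/sdb /dev/sdj /root/sdb.map

        # A second pass with retries can pick up stragglers later:
        ddrescue -f -r3 /dev/sdb /dev/sdj /root/sdb.map

    All further experiments (R-Studio, controller tricks) can then run against the clones while the originals stay untouched.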


  • Vagrant reporting VirtualBox guest additions out of date

    - by DTest
    Fairly new to Vagrant, so bear with me if I don't understand the process. I downloaded a CentOS box off http://www.vagrantbox.es/, started it up running VirtualBox 4.2.4, and got this message:

        [default] The guest additions on this VM do not match the install version of
        VirtualBox! This may cause things such as forwarded ports, shared folders, and
        more to not work properly. If any of those things fail on this machine, please
        update the guest additions and repackage the box.

        Guest Additions Version: 4.0.8
        VirtualBox Version: 4.2.4

    So I used the vbguest plugin to update the guest additions, then repackaged the box as suggested. Having replaced the old box and loaded it up, I get the same message about the guest additions being outdated, but vbguest reports that they are up to date (the automatic vbguest update is disabled in my Vagrantfile):

        Vagrant::Config.run do |config|
          config.vm.box = "centos56_64"
          config.vbguest.auto_update = false
          config.vbguest.no_remote = true
        end

    And the commands:

        dtest$ vagrant up
        [default] Importing base box 'centos56_64'...
        [default] The guest additions on this VM do not match the install version of
        VirtualBox! This may cause things such as forwarded ports, shared folders, and
        more to not work properly. If any of those things fail on this machine, please
        update the guest additions and repackage the box.

        Guest Additions Version: 4.0.8
        VirtualBox Version: 4.2.4
        [default] Matching MAC address for NAT networking...
        [default] Clearing any previously set forwarded ports...
        [default] Forwarding ports...
        [default] -- 22 => 2222 (adapter 1)
        [default] Creating shared folders metadata...
        [default] Clearing any previously set network interfaces...
        [default] Booting VM...
        [default] Waiting for VM to boot. This can take a few minutes.
        [default] VM booted and ready for use!
        [default] Mounting shared folders...
        [default] -- v-root: /vagrant

        dtest$ vagrant vbguest --no-install
        [default] Detected Virtualbox Guest Additions 4.2.4 --- OK.
        [default] Virtualbox Guest Additions on host: 4.2.4 - guest's version is 4.2.4

    Since they appear to be updated after an install, I could ignore the message. But is it possible to get rid of it?
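
    One hedged possibility: the warning is printed while importing the base box, so it only goes away if the repackaged box actually replaces the old name in Vagrant's box store and the VM is rebuilt from it. A sketch of the full loop, with box and file names as placeholders:

        vagrant up && vagrant vbguest     # one-off: bring additions to 4.2.4
        vagrant package --output centos56_64-updated.box
        vagrant box remove centos56_64
        vagrant box add centos56_64 centos56_64-updated.box
        vagrant destroy                   # drop the VM built from the old box
        vagrant up                        # re-import; the check should now pass

    If the message persists after that, it may simply be read from stale box metadata rather than the live VM, in which case it is cosmetic and safe to ignore, as your vbguest output suggests.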


  • How to unlock and remove a protected partition from Prestigio USB stick?

    - by mr.b
    OK, so, I have one of those fancy-schmancy devices, given to me by a frustrated friend of mine. The device is a Prestigio Leather 8GB, which identifies itself to a Linux host as:

        Bus 001 Device 006: ID 1307:0165 Transcend Information, Inc. 2GB/4GB Flash Drive

    Kernel messages as the USB device is plugged in:

        kernel: [ 2769.580042] usb 1-9: new high speed USB device using ehci_hcd and address 7
        kernel: [ 2769.714782] scsi8 : usb-storage 1-9:1.0
        kernel: [ 2770.713937] scsi 8:0:0:0: Direct-Access     8192MB   flash drive      1.00 PQ: 0 ANSI: 2
        kernel: [ 2770.714535] scsi 8:0:0:1: Direct-Access     8192MB   flash drive      1.00 PQ: 0 ANSI: 2
        kernel: [ 2770.715734] sd 8:0:0:0: Attached scsi generic sg3 type 0
        kernel: [ 2770.716108] sd 8:0:0:1: Attached scsi generic sg4 type 0
        kernel: [ 2770.722175] sd 8:0:0:0: [sdc] 962560 512-byte logical blocks: (492 MB/470 MiB)
        kernel: [ 2770.722657] sd 8:0:0:0: [sdc] Write Protect is on
        kernel: [ 2770.731078] sd 8:0:0:1: [sdd] 14012416 512-byte logical blocks: (7.17 GB/6.68 GiB)
        kernel: [ 2770.731215] sdc:
        kernel: [ 2770.738251] sd 8:0:0:1: [sdd] Write Protect is off
        kernel: [ 2770.880328]
        kernel: [ 2770.885876] sd 8:0:0:0: [sdc] Attached SCSI removable disk
        kernel: [ 2770.887442] sdd: unknown partition table
        kernel: [ 2771.049605] sd 8:0:0:1: [sdd] Attached SCSI removable disk

    So the symptoms are typical of U3-like devices: two separate devices inside a single flash device. Windows also sees it as two USB devices and mounts two separate drives: the first presents itself as a CD-ROM device holding write-protected content, and the second is a regular flash-disk partition that "can" be written to. However, it seems to be broken in some weird way, since it won't let me write anything to it, format it, nothing - but that's not the issue right now.

    Question: How can I unlock the entire USB stick so it appears to the system as a single 8GB device that can be partitioned and used normally, without restrictions?

    Since it appeared to be a U3 device, I tried the standard utilities: both the U3 Uninstaller by u3.com (found on Softpedia) and the open-source u3_tool from SourceForge (on both Windows and Linux). The first utility failed to even detect the stick as a U3 device (it simply stood idle while I re-plugged the stick several times), while the second failed with an obscure error:

        u3_tool -i /dev/sg3    (Display device info)
        u3_partition_info() failed: Device reported command failed: status 1

    ...and every other option fails with the same error, minus the first part, which states which command precisely has failed. So, apparently, this isn't a U3 device. Or, if it is, it doesn't behave like one. I have read on a few occasions that this kind of protection is done by a special command sent to the device which tells it to lock itself, so there should be an unlock command that would set the drive straight.

    Does anyone have any idea what I could do to this device to fix it?

    P.S. I also mentioned a problem with being unable to use the second "drive", but I'll tackle that problem when (and if) I manage to merge the two devices into one...


  • Basic Linux Fiber Channel host setup

    - by Jim
    I've been googling for about 4 hours now with no luck. I am trying to set up a Linux server running Oracle Linux Server 6.3 as a Fibre Channel host, and then connect it to a Dell Compellent Fibre Channel target containing a 500 GB volume. The Oracle server itself contains two Brocade 815 FC HBAs. I've discovered their WWNs (I think) via:

        cat /sys/class/fc_host/host1/port_name
        0x100000051efc3d85
        cat /sys/class/fc_host/host2/port_name
        0x100000051efc3d9f

    The next part is where I am at a loss. I've used iSCSI before - is FC the same deal, where you have an initiator and a target? If so, where do I specify that in Linux? I'm also new to Fibre Channel as a protocol, so I am unsure what is needed to make a transaction. WWN and port ID, similar to the IP:port combination in the Ethernet world?

    I've read a lot regarding the systool, multipath, and fc_transport commands; however, none of these is recognized as a valid command on Oracle Linux Server 6.3. I appreciate the guidance and assistance.

    I installed scsi-target-utils and can now run rescan-scsi-bus and sg_map -x:

        rescan-scsi-bus.sh -l -w -r
        Host adapter 0 (megaraid_sas) found.
        Host adapter 1 ((null)) found.
        Host adapter 2 ((null)) found.
        Host adapter 3 (ata_piix) found.
        Host adapter 4 (ata_piix) found.
        Scanning SCSI subsystem for new devices and remove devices that have disappeared
        Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15, LUNs 0 1 2 3 4 5 6 7
        Scanning for device 0 2 0 0 ....
        OLD: Host: scsi0 Channel: 02 Id: 00 Lun: 00
            Vendor: DELL     Model: PERC H700        Rev: 2.30
            Type:   Direct-Access                    ANSI SCSI revision: 05
        Scanning for device 0 2 1 0 ...
        OLD: Host: scsi0 Channel: 02 Id: 01 Lun: 00
            Vendor: DELL     Model: PERC H700        Rev: 2.30
            Type:   Direct-Access                    ANSI SCSI revision: 05
        Scanning host 1 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7
        Scanning for device 1 0 3 1 ...
        OLD: Host: scsi1 Channel: 00 Id: 03 Lun: 01
            Vendor: COMPELNT Model: Compellent Vol   Rev: 0505
            Type:   Direct-Access                    ANSI SCSI revision: 05
        Scanning host 2 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7
        Scanning host 3 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7
        Scanning for device 3 0 0 0 ...
        REM: Host: scsi3 Channel: 00 Id: 00 Lun: 00
        DEL: Vendor: TEAC     Model: DVD-ROM DV-28SW  Rev: R.2A
            Type:   CD-ROM                           ANSI SCSI revision: 05
        Scanning host 4 channels 0 for SCSI target IDs 0, LUNs 0 1 2 3 4 5 6 7
        0 new device(s) found.
        1 device(s) removed.

        sg_map -x
        /dev/sg0  0 0 32 0  13
        /dev/sg1  0 2 0 0  0  /dev/sda
        /dev/sg2  0 2 1 0  0  /dev/sdb
        /dev/sg4  1 0 3 1  0  /dev/sdc

    I'm not sure what this all means...
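
    Reading the output above: conceptually FC does have initiators (your Brocade HBAs) and targets (the Compellent front-end ports), but there is no iscsiadm-style login on the host. The "configuration" lives in the fabric - you zone the HBA WWPNs with the array's WWPNs on the FC switch, and map the volume to the server on the Compellent side. In fact, the scan already shows a "COMPELNT Compellent Vol" on host scsi1, attached as /dev/sdc, so one path appears to be working. A hedged sketch of the remaining host-side steps:

        # Rescan each FC HBA after changing zoning or volume mappings:
        echo "- - -" > /sys/class/scsi_host/host1/scan
        echo "- - -" > /sys/class/scsi_host/host2/scan

        # With both HBAs zoned, the same volume shows up twice (e.g. sdc and sdd);
        # multipathing merges the two paths into one device:
        yum install device-mapper-multipath
        mpathconf --enable
        service multipathd start
        multipath -ll

    After that, partition and format the multipath device (/dev/mapper/mpathX), not the individual sd* paths.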


  • Vagrant - Failed login, ssh not set up

    - by motleydev
    This question is twofold, because somewhere in my attempts to solve the problem, I created a new one.

    First: I was trying to vagrant up using a Vagrantfile based on the standard hashicorp/precise32 box. Everything worked up until "default: SSH auth method: private key", where it would eventually time out. Enabling the GUI in the Vagrantfile showed that the machine never actually logs in. I can use the standard user/pass and log in from that point, but the vagrant up process still remains at that prior status.

    Here's where my understanding might be a little dim. I've tried setting the auth method from the insecure_pass_key to my root ~/.ssh/id_rsa or whichever one I wanted to use. I'm not entirely sure where to put a copy of the public key or my authorized_keys file. I've got a .vagrant.d folder in my user dir (I'm on OS X) which seems to contain the box images. I've got a .vagrant folder in the directory with the Vagrantfile, which seems to contain the specific machine I am building off of. I've tried pouring over the docs and forums, but I seem to be missing a key concept here.

    And now: after a host of tips/tricks, such as rolling back my VirtualBox install and uninstalling/reinstalling Vagrant and VirtualBox several times, when I try to run vagrant ssh-config with a Vagrantfile based on the same hashicorp/precise32, it says that the box is not enabled for SSH. Specifically, this error:

        The provider for this Vagrant-managed machine is reporting that it
        is not yet ready for SSH. Depending on your provider this can carry
        different meanings. Make sure your machine is created and running and
        try again. Additionally, check the output of `vagrant status` to verify
        that the machine is in the state that you expect. If you continue to
        get this error message, please view the documentation for the provider
        you're using.

    So now I am slightly up a creek. Any help would be appreciated, if not just clarifying a concept.

    Some pertinent info:

    - I'm on OS X Mavericks
    - Due to the fact I'm running a dual-HD system with system files on one HD and user files on another, my permissions are a little wonky, and VBoxManage will only let me run commands via sudo - not sure if it's pertinent, but maybe
    - I have no idea what I'm doing. That part is perhaps more important.
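
    On the key concept: the private key stays on the host and is named in the Vagrantfile; the matching public key must live inside the guest, in the vagrant user's authorized_keys. A sketch, with paths as assumptions:

        # Vagrantfile (host side)
        Vagrant.configure("2") do |config|
          config.vm.box = "hashicorp/precise32"
          # Use your own key instead of Vagrant's insecure default:
          config.ssh.private_key_path = "~/.ssh/id_rsa"
        end

    Then, inside the guest (log in once via the GUI as vagrant/vagrant):

        mkdir -p ~/.ssh && chmod 700 ~/.ssh
        # paste the contents of your host's ~/.ssh/id_rsa.pub:
        echo "ssh-rsa AAAA... you@host" >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys

    The second error usually just means the VM isn't running from VirtualBox's point of view (vagrant status and VBoxManage list runningvms will confirm), which can happen when sudo-run VirtualBox commands create machines under root's account rather than yours.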

    Read the article

  • How can I force a merge of all WAL files in pg_xlog back into my base "data" directory?

    - by Zac B
    Question: Is there a way to tell Postgres (9.2) to "merge all WAL files in pg_xlog back into the non-WAL data files, and then delete all WAL files successfully merged?" I would like to be able to "force" this operation; i.e. checkpoint_segments or archiving settings should be ignored. The filesystem WAL buffer (pg_xlog) directory should be emptied, or nearly emptied. It's fine if some or all of the space consumed by the pg_xlog directory is then consumed by the data directory; our DBA has asked for WAL database backups without any backlogged WALs, but space consumption is not a concern. Having near-zero WAL activity during this operation is a fine constraint. I can ensure that the database server is either shut down or not connectible (zero user-generated transaction load) during this process. Essentially, I'd like Postgres to ignore archiving/checkpoint retention policies temporarily, and flush all WAL activity to the core database files, leaving pg_xlog in the same state as if the database were recently created--with very few WAL files. What I've Tried: I know that the pg_basebackup utility performs something like this (it generates an almost-all-WALs-merged copy of a Postgres instance's data directory), but we aren't ready to use it on all our systems yet, as we are still testing replication settings; I'm hoping for a more short-term solution. I've tried issuing CHECKPOINT commands, but they just recycle one WAL file and replace it with another (that is, if they do anything at all; if I issue them during database idle time, they do nothing). pg_switch_xlog() similarly just forces a switch to the next log segment; it doesn't flush all queued/buffered segments. I've also played with the pg_resetxlog utility. That utility sort of does what I want, but all of its usage docs seem to indicate that it destroys (rather than flushing out of the transaction log and into the main data files) some or all of the WAL data. Is that impression accurate? If not, can I use pg_resetxlog during a zero-WAL-activity period to force a flush of all queued WAL data to non-WAL data? If the answer to that is negative, how can I achieve this goal? Thanks!
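
    For what it's worth, a clean shutdown already achieves most of this: Postgres writes a shutdown checkpoint, after which every committed change is in the main data files and the remaining pg_xlog segments are only recycled/preallocated files rather than unflushed data. And the impression about pg_resetxlog is accurate: it discards WAL contents rather than replaying them, so it is unsafe for this purpose. A hedged sketch of the checkpoint-then-stop sequence (the data directory path is an assumption; adjust -D for the actual cluster):

        # Flush all dirty buffers to the data files immediately
        psql -U postgres -c "CHECKPOINT;"

        # Stop cleanly; "fast" mode still performs a shutdown checkpoint
        pg_ctl -D /var/lib/pgsql/9.2/data stop -m fast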

    Read the article

  • What PHP, Xdebug and Eclipse configurations work on Windows 7 64 bit?

    - by thaddeusmt
    I have been mucking around for days, trying to find the right combination that lets me debug with breakpoints and variable viewing, in Eclipse, without crashing Apache. PHP 5.3? PHP 5.2? Eclipse Helios? Eclipse Galileo? One or the other with certain versions of Xdebug or PHP? Or do I really need to use NetBeans or something else? Is my 64-bit OS the problem? Do I need specific 64-bit versions of PHP, Eclipse, or Xdebug to work on Windows 7 64-bit? Any special Xdebug config options and tricks that I need in php.ini? Like turning off xdebug.profiler_enable, or not using quotes around my zend_extension path to the Xdebug DLL? A vhosts issue? Scrap the whole thing and go back to Win XP or Ubuntu? Here's what I've already been reading: http://stackoverflow.com/questions/4509245/so-eclipse-and-xdebug-walk-into-a-bar-and-then-my-apache-server-dies/4602473 http://stackoverflow.com/questions/206788/why-does-xdebug-crash-apache-on-every-xampp-install-ive-tried http://bugs.xdebug.org/view.php?id=459 https://bugs.eclipse.org/bugs/show_bug.cgi?id=312951#c8 http://stackoverflow.com/questions/2799936/xdebug-for-php-5-2-on-windows-7-64bit and so on and so on... SO, the Xdebug bug tracker, Eclipse Bugzilla, etc., etc. Basically, what would be great is if folks could post their working (i.e. debugging with breakpoints and local variable viewing in Eclipse) Win7 64-bit configurations, including: PHP version (5.3.1, 5.2.11, etc.) Xdebug DLL (2.1.0-5.3-vc6, etc.) Xdebug php.ini config (zend_extension = "C:\xampp\php\ext\php_xdebug.dll", etc.) Apache version (2.2.14, etc.) Eclipse version Anything else important? The "secret ingredient"? Thanks! I miss my debugger since I got a new laptop with Win 7! Sadly, it looks like some of the drivers (switchable graphics, multi-touch pad, etc.) on my lappy don't work right with Ubuntu yet, so I feel a bit trapped on Win :( I know I will figure something out eventually, but I've been at this trial-and-error game a while and am seeking some guidance. (Originally posted on StackOverflow here, but moved to SuperUser:) http://stackoverflow.com/questions/4628215/what-php-xdebug-and-eclipse-configurations-work-on-windows-7-64-bit
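
    For reference, a php.ini layout that commonly works with Xdebug 2.x on Windows looks like the sketch below; the DLL filename and path are assumptions and must match the exact thread-safety/compiler build for your PHP version (the wizard on the Xdebug site picks the right one from phpinfo() output):

        [xdebug]
        zend_extension = "C:\xampp\php\ext\php_xdebug-2.1.0-5.3-vc6.dll"
        xdebug.remote_enable = 1
        xdebug.remote_host = "localhost"
        xdebug.remote_port = 9000
        xdebug.remote_handler = "dbgp"
        ; keep the profiler off while chasing breakpoint problems
        xdebug.profiler_enable = 0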

    Read the article

  • Why apcupsd won't see the UPS connected to the USB port on FreeBSD 8.0 amd64

    - by Max Kosyakov
    Hello. Recently I installed apcupsd on a FreeBSD 8.0 amd64 box via the ports system. It installed perfectly, but it won't run. Here is what it says in the log: FATAL ERROR in generic-usb.c at line 636 Cannot find UPS device It appeared that the HID driver had picked up /dev/ugen4.2, which could be what keeps apcupsd from finding the device. After I discovered this, I rebuilt the kernel and removed the HID driver. Now it just shows "ugen4.2: <Tripp Lite> at usbus4" and no uhid0 device appears. Nevertheless, the problem persisted. I tried leaving the DEVICE config setting blank --- it didn't help. Then I specified the particular device in the config, but it did not help either. Below is the output of several commands that may provide useful information on my case. server# /usr/local/etc/rc.d/apcupsd start Starting apcupsd. server# tail /var/log/messages | grep apcupsd Jun 17 22:30:00 server apcupsd[1520]: apcupsd FATAL ERROR in generic-usb.c at line 636 Cannot find UPS device -- For a link to detailed USB trouble shooting information, please see . Jun 17 22:30:00 server apcupsd[1520]: apcupsd error shutdown completed server# cat /usr/local/etc/apcupsd/apcupsd.conf ## apcupsd.conf v1.1 ## UPSCABLE usb UPSTYPE usb DEVICE /dev/ugen4.2 LOCKFILE /var/lock UPSCLASS standalone UPSMODE disable server# dmesg | grep '^u' uhci0: port 0xa800-0xa81f irq 16 at device 26.0 on pci0 uhci0: [ITHREAD] uhci0: LegSup = 0x0f00 usbus0: on uhci0 uhci1: port 0xa880-0xa89f irq 21 at device 26.1 on pci0 uhci1: [ITHREAD] uhci1: LegSup = 0x0f00 usbus1: on uhci1 uhci2: port 0xac00-0xac1f irq 18 at device 26.2 on pci0 uhci2: [ITHREAD] uhci2: LegSup = 0x0f00 usbus2: on uhci2 usbus3: EHCI version 1.0 usbus3: on ehci0 uhci3: port 0xa080-0xa09f irq 23 at device 29.0 on pci0 uhci3: [ITHREAD] uhci3: LegSup = 0x0f00 usbus4: on uhci3 uhci4: port 0xa400-0xa41f irq 19 at device 29.1 on pci0 uhci4: [ITHREAD] uhci4: LegSup = 0x0f00 usbus5: on uhci4 uhci5: port 0xa480-0xa49f irq 18 at device 29.2 on pci0 uhci5: [ITHREAD] uhci5: LegSup = 0x0f00 usbus6: on uhci5 usbus7: EHCI version 1.0 usbus7: on ehci1 uart0: port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0 uart0: [FILTER] usbus0: 12Mbps Full Speed USB v1.0 usbus1: 12Mbps Full Speed USB v1.0 usbus2: 12Mbps Full Speed USB v1.0 usbus3: 480Mbps High Speed USB v2.0 usbus4: 12Mbps Full Speed USB v1.0 usbus5: 12Mbps Full Speed USB v1.0 usbus6: 12Mbps Full Speed USB v1.0 usbus7: 480Mbps High Speed USB v2.0 ugen0.1: at usbus0 uhub0: on usbus0 ugen1.1: at usbus1 uhub1: on usbus1 ugen2.1: at usbus2 uhub2: on usbus2 ugen3.1: at usbus3 uhub3: on usbus3 ugen4.1: at usbus4 uhub4: on usbus4 ugen5.1: at usbus5 uhub5: on usbus5 ugen6.1: at usbus6 uhub6: on usbus6 ugen7.1: at usbus7 uhub7: on usbus7 uhub0: 2 ports with 2 removable, self powered uhub1: 2 ports with 2 removable, self powered uhub2: 2 ports with 2 removable, self powered uhub4: 2 ports with 2 removable, self powered uhub5: 2 ports with 2 removable, self powered uhub6: 2 ports with 2 removable, self powered uhub3: 6 ports with 6 removable, self powered uhub7: 6 ports with 6 removable, self powered ugen4.2: at usbus4 server#
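
    One sanity check on FreeBSD 8 is to confirm what the OS itself sees on that ugen node and whether the daemon can open it at all. A sketch, assuming ugen4.2 is the Tripp Lite device as dmesg suggests; apcupsd is written for APC hardware, so the vendor/product IDs this dump prints are worth comparing against what the daemon's USB driver actually matches:

        # Show the USB descriptors (vendor/product IDs) for the device
        usbconfig -d ugen4.2 dump_device_desc

        # Verify the device node exists and is readable by the daemon
        ls -l /dev/ugen4.2
        chmod 660 /dev/ugen4.2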

    Read the article

  • Checking the configuration of two systems to determine changes

    - by None
    We are standing up a replica data center at work and need to ensure that the new data center is configured (nearly) identically to the original. The new data center will be addressed and named differently from the original and will have differing user accounts, but all the COTS, patches, and configurations should be the same. We would normally ghost the original servers and install those images onto the new machines; however, we have a few problematic pieces of COTS that require we install them outside of an image, due to how they capture the setup of the network during their installation and maintain it within their configuration information (in some cases storing it in various databases). We have tried multiple times, and this piece of COTS cannot be captured within a ghost image unless the destination machine will have an identical network setup (all the same IPs, hostnames, user accounts, etc. across the entire network) as the original. In truth, it is the setup of these special COTS that I want to audit the most, because they are difficult to install and configure in the first place. In light of the fact that we can't simply ghost, I'm trying to find a reasonable way to audit the new data center and check whether it is set up like the original (some sort of system-wide configuration audit or integrity check). I'm considering using something like Tripwire for Servers to capture the configuration on the source machines and then run an audit on the destination machines. I understand that it will still show some differences due to the minor config changes, but I'm hoping that it will eliminate the majority of the work. Here are some of the constraints I'm working under: The data center is comprised of multiple Windows and Linux machines of differing versions (about 20 total). I absolutely cannot ghost or snap any other type of image of these machines... at least not in their final configuration. I want to audit the final configuration to ensure all of the COTS, patches, configurations, etc. are installed and set up properly (as compared to the original data center). I would rather not install any additional tools on these machines... I'd much rather run it from a standalone machine or off a DVD. Price of tools is important but not an impossible burden; however, getting a solution soon is important (I can't take the time to roll my own tools to do this). For the COTS that stores the network information, I don't know all of the places it stores the network information, so it would be unlikely I could find a way in the near future to adjust its setup after the installation has occurred. Anyone have any thoughts or alternate approaches? Can anyone recommend tools that would be usable for system-wide configuration audits?
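
    If a packaged tool does not pan out, a rough do-it-yourself baseline for the Linux hosts is to capture a package list and a checksum manifest of the config tree on each box, then diff the manifests between data centers from a standalone machine. This is only a sketch (the paths and the rpm call are assumptions that fit RPM-based Linux hosts; the Windows machines would need an equivalent pass, e.g. with PowerShell's Get-FileHash):

        # On each Linux host: record installed packages and /etc checksums
        rpm -qa | sort > packages.txt
        find /etc -type f -exec md5sum {} \; | sort -k2 > etc-manifest.txt

        # On a standalone machine: compare a host against its counterpart
        diff old-dc/host01/packages.txt new-dc/host01/packages.txt
        diff old-dc/host01/etc-manifest.txt new-dc/host01/etc-manifest.txt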

    Read the article
