Search Results

Search found 24814 results on 993 pages for 'linux distro'.

  • Wireless disconnects randomly with wpa_supplicant reason=2

    - by renenglish
    I installed ubuntu-server 12.04 on my PC and use a USB wireless card to join the network. It works when I boot the PC, but the wireless disconnects after a while. If I pkill wpa_supplicant and reload the rtl8192cu driver, it works again - and then disconnects again after a seemingly random number of minutes. Here is the syslog:

        May 29 21:49:27 homecenter kernel: [ 6450.459313] wlan1: authenticated
        May 29 21:49:27 homecenter kernel: [ 6450.459535] wlan1: associate with f4:ec:38:45:62:74 (try 1)
        May 29 21:49:27 homecenter kernel: [ 6450.469080] wlan1: RX AssocResp from f4:ec:38:45:62:74 (capab=0x431 status=0 aid=3)
        May 29 21:49:27 homecenter kernel: [ 6450.469085] wlan1: associated
        May 29 21:49:27 homecenter wpa_supplicant[2342]: Associated with f4:ec:38:45:62:74
        May 29 21:49:27 homecenter kernel: [ 6450.481933] ADDRCONF(NETDEV_CHANGE): wlan1: link becomes ready
        May 29 21:49:27 homecenter wpa_supplicant[2342]: WPA: Key negotiation completed with f4:ec:38:45:62:74 [PTK=CCMP GTK=CCMP]
        May 29 21:49:27 homecenter wpa_supplicant[2342]: CTRL-EVENT-CONNECTED - Connection to f4:ec:38:45:62:74 completed (auth) [id=0 id_str=]
        May 29 21:49:38 homecenter kernel: [ 6461.472014] wlan1: no IPv6 routers present
        May 29 21:49:38 homecenter ntpdate[2263]: step time server 91.189.94.4 offset 0.012758 sec
        May 29 21:49:51 homecenter ntpdate[2404]: step time server 91.189.94.4 offset -0.001190 sec
        May 29 21:54:38 homecenter kernel: [ 6762.052030] wlan1: deauthenticated from f4:ec:38:45:62:74 (Reason: 2)
        May 29 21:54:38 homecenter wpa_supplicant[2342]: CTRL-EVENT-DISCONNECTED bssid=f4:ec:38:45:62:74 reason=2
        May 29 21:54:38 homecenter kernel: [ 6762.064744] cfg80211: All devices are disconnected, going to restore regulatory settings
        May 29 21:54:38 homecenter kernel: [ 6762.064752] cfg80211: Restoring regulatory settings
        May 29 21:54:38 homecenter kernel: [ 6762.064757] cfg80211: Calling CRDA to update world regulatory domain
        May 29 21:54:38 homecenter kernel: [ 6762.069938] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
        May 29 21:54:38 homecenter kernel: [ 6762.069943] cfg80211: World regulatory domain updated:
        May 29 21:54:38 homecenter kernel: [ 6762.069945] cfg80211:     (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        May 29 21:54:38 homecenter kernel: [ 6762.069949] cfg80211:     (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        May 29 21:54:38 homecenter kernel: [ 6762.069952] cfg80211:     (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        May 29 21:54:38 homecenter kernel: [ 6762.069956] cfg80211:     (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        May 29 21:54:38 homecenter kernel: [ 6762.069959] cfg80211:     (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        May 29 21:54:38 homecenter kernel: [ 6762.069962] cfg80211:     (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
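
    Until the root cause is found, one crude workaround is to script the recovery steps described above. A minimal watchdog sketch - the interface name, driver, and use of ifupdown are assumptions, so adjust for your setup:

        #!/bin/sh
        # wlan-watchdog.sh (hypothetical helper): reload rtl8192cu when wlan1
        # loses its association, mirroring the manual recovery steps above.
        while sleep 30; do
            # iwconfig reports "Access Point: Not-Associated" when the link is down
            if iwconfig wlan1 2>/dev/null | grep -q 'Not-Associated'; then
                pkill wpa_supplicant
                rmmod rtl8192cu && modprobe rtl8192cu
                ifdown wlan1; ifup wlan1
            fi
        done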

  • MySQL blocking new connections, and mysqladmin flush-hosts

    - by aidan
    I'm running MySQL on a remote server, and it suddenly started rejecting all connections:

        $ mysql -h 192.168.1.10 -u root -p
        ERROR 1129 (00000): Host 'web' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

    So, I try this flush-hosts command...

        $ mysqladmin flush-hosts -h 192.168.1.10 -u root -p
        mysqladmin: connect to server at '192.168.1.10' failed
        error: 'Host 'web' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts''

    I.e. it's blocking the very un-blocking tool it recommends. Am I doing it wrong, or will I have to resort to ssh/cpanel/physical access?
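
    Since the block is keyed to the connecting host, one way out is to issue the flush from a host that is not blocked - most simply the database server itself, over SSH - and then raise the error threshold so transient failures don't re-trigger the block. A sketch, assuming SSH access to the server (account names are placeholders):

        # Flush the blocked-host cache from the server itself
        ssh admin@192.168.1.10 "mysqladmin -u root -p flush-hosts"

        # Optionally raise the limit (the default is only 10) so the
        # block is harder to trip in the future
        ssh admin@192.168.1.10 "mysql -u root -p -e 'SET GLOBAL max_connect_errors = 10000;'"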

  • How do I automate OS installation on 500+ machines?

    - by Igor
    My company has to image a large number of machines by the end of the year. Each machine will have hardware RAID 1 and will run CentOS 6. What options do I have for automating the OS installation on these systems? I have a little mini desktop I can set up as an install server, and we can get a switch to create an installation network, but I'm not sure how to go about actually performing the automated installs.
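
    For CentOS the usual answer is PXE boot plus a kickstart file served from the install server. A minimal kickstart sketch - every value here (URL, password hash, partitioning) is an illustrative placeholder, not a tested profile:

        # ks.cfg - unattended CentOS 6 install, served over HTTP
        install
        url --url=http://installserver/centos6/
        lang en_US.UTF-8
        keyboard us
        rootpw --iscrypted $6$changeme...
        timezone UTC
        bootloader --location=mbr
        clearpart --all --initlabel
        autopart
        reboot
        %packages
        @core
        %end

    The PXE side then just points the installer kernel at the file, e.g. ks=http://installserver/ks.cfg on the append line of the pxelinux config.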

  • Wicd not playing well with networks that utilize network access control

    - by Sion
    Starting a couple of months ago (I might be able to find the exact date if necessary), my installation of Wicd stopped being able to see wireless networks that use NAC (Network Access Control), such as Aruba networks. But if I shut down Wicd and start NetworkManager, I can connect to those networks and log in to them, depending on what the NAC requires. This is the wicd package currently installed: net-misc/wicd-1.7.1_pre20111210-r1. This is how I switch which network manager is running:

        su -c '/etc/init.d/wicd stop; /etc/init.d/NetworkManager start'

    What would cause a problem this specific to occur?

  • Best MTA setup for home or laptop computers - not server

    - by thomasrutter
    Hello. What is a good MTA (e.g. Postfix or something else) setup for a home computer behind a NAT, or a laptop that connects to various different wifi networks? I've read a lot of Postfix tutorials on how to set it up this way or that, but they are usually geared towards computers that are servers, i.e. they:

    - have a static IP
    - have a domain name
    - are always connected to the same network

    My requirements are, I guess:

    - Ability to forward mail for "root" to another server of my choosing.
    - No listening for incoming SMTP connections - outgoing only.
    - Ability to route outgoing mail via an external SMTP server, with authentication (and perhaps encryption).

    If not Postfix, I need an MTA which can queue up mails in case it temporarily has no internet connection.
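
    What is described here is essentially Postfix's "null client with a relayhost" pattern. A minimal main.cf sketch - the smarthost name, port, and credentials file are placeholders:

        # main.cf - outgoing-only home/laptop client (illustrative values)
        inet_interfaces = loopback-only      # accept mail from this machine only
        mydestination =                      # deliver nothing locally
        relayhost = [smtp.example.net]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = encrypt    # require TLS to the relay

    The credentials file holds one line ("[smtp.example.net]:587 user:password"; run postmap on it afterwards), and forwarding root's mail is a "root: you@example.net" entry in /etc/aliases followed by newaliases. Postfix queues outgoing mail while offline by default, which covers the last requirement.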

  • VirtualBox virtual guest OS issue

    - by user70370
    Hi. My host OS is Ubuntu 10.10. One of my virtual OSes is Ubuntu 9.04 and another is Ubuntu 10.04; both were virtualized with VirtualBox. Now I need to replicate between those two servers. Is it possible? If yes, can you please provide me some help? In short, the whole thing is:

    Host: Ubuntu 10.10
    Guest 1: Ubuntu 9.04 (hostname: mohib-laptop)
    Guest 2: Ubuntu 10.04 (hostname: zaman-laptop)
    Job: Both guests must run at the same time, because two LDAP directories on the two guests need to replicate.

    Now, if I try to reach the first guest (mohib-laptop) from the second guest (zaman-laptop), I cannot get to it. Do I need to change the IP addresses of both guests? Or is there anything else that will make this possible?
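
    A likely culprit: VirtualBox attaches guests to NAT networking by default, which puts each guest behind its own private network so they cannot reach one another. Switching both VMs to bridged networking (or a shared internal network) usually fixes this. A sketch with VBoxManage - the VM names and host adapter are assumptions, and the VMs should be powered off first:

        # Put both guests on the host's LAN so they can see each other
        VBoxManage modifyvm "Guest1-Ubuntu904"  --nic1 bridged --bridgeadapter1 eth0
        VBoxManage modifyvm "Guest2-Ubuntu1004" --nic1 bridged --bridgeadapter1 eth0

        # Alternatively, a private internal network shared only by the two guests
        VBoxManage modifyvm "Guest1-Ubuntu904"  --nic1 intnet --intnet1 ldapnet
        VBoxManage modifyvm "Guest2-Ubuntu1004" --nic1 intnet --intnet1 ldapnet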

  • Password protect web directory with htpasswd on Cherokee

    - by wdkrnls
    I have a directory on my Cherokee web server that I am trying to password protect, so that when I try to enter it from a web browser I get a pop-up demanding a username and password. Needless to say, I am getting stuck. I have created the .htaccess file with:

        AuthUserFile /srv/http/protected
        AuthGroupFile /dev/null
        AuthName "Protected Stuff"
        AuthType Basic
        Require valid-user

    And I used apache-tools' htpasswd command:

        htpasswd -c .htpasswd wdkrnls

    I configured Cherokee with a behavior rule on the /protected directory which requires htpasswd authentication, and restarted. I get "Error 405 Method Not Allowed" whenever I navigate to the directory in a browser. What more do I need to do? Thanks for your help.

  • Search all files containing text

    - by enthdegree
    With BusyBox, how do you search recursively through a bunch of directories for an expression, but only look inside text files? We don't know what the files' suffixes are going to be; it could be .sh, it could be nothing, it could be something else. I was considering somehow basing the search on encoding, although I am not quite sure what the encoding would be either. I've tried busybox grep -r, but it searches through binary files too, which wastes a lot of time.
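
    One approach that sticks to common BusyBox applets is to treat any file whose first block contains a NUL byte as binary and skip it. A sketch, assuming your BusyBox build includes find, head, tr, wc, and grep:

        #!/bin/sh
        # textgrep.sh PATTERN [DIR] - recursive grep that skips binary-looking files
        pattern=$1; dir=${2:-.}
        find "$dir" -type f | while read -r f; do
            # If stripping NUL bytes shrinks the first 512 bytes, assume binary
            total=$(head -c 512 "$f" | wc -c)
            clean=$(head -c 512 "$f" | tr -d '\000' | wc -c)
            [ "$total" -eq "$clean" ] && grep -H "$pattern" "$f"
        done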

  • /proc/net/dev and /sys/class/net/ bogus network interface names

    - by sfink
    I am constructing a list of network interfaces to monitor based on the contents of /proc/net/dev, but I am getting some bogus interfaces in the list:

        __tmp1104705027
        __tmp974528607

    Where do those come from? They also show up in /sys/class/net/:

        # ls -1 /sys/class/net/
        eth0
        eth1
        eth2
        eth3
        lo
        sit0
        __tmp1104705027
        __tmp974528607

    For now, I think I'll just ignore anything starting with __tmp, but I'd like to know what they are and where they come from. This is on a recompiled CentOS 5.3 kernel: 2.6.18-128.7.1.el5.tvh.7PAE #1 SMP PREEMPT
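
    If filtering is good enough for the monitoring list in the meantime, the skip can happen while parsing /proc/net/dev. A sketch of that filter (the __tmp prefix test is just the heuristic mentioned above):

        # List interface names from /proc/net/dev, skipping the two header
        # lines and any __tmp* entries
        awk -F: 'NR > 2 { gsub(/ /, "", $1); if ($1 !~ /^__tmp/) print $1 }' /proc/net/dev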

  • 'Memory read error': server hardware error?

    - by wss8848
    Hello, I got an error on my server, which is running CentOS 5.5:

        MCE 20
        HARDWARE ERROR. This is *NOT* a software problem!
        Please contact your hardware vendor
        CPU 1 BANK 8 TSC 6ab9ff9745f62 [at 2394 Mhz 9 days 1:50:52 uptime (unreliable)]
        MISC cf36ad0100081186 ADDR 203376500
        MCG status:
        MCi status:
        MCi_MISC register valid
        MCi_ADDR register valid
        MCA: MEMORY CONTROLLER RD_CHANNELunspecified_ERR
        Transaction: Memory read error
        STATUS 8c0000400001009f MCGSTATUS 0

    What is the matter? Is it a memory module error or a memory controller error?
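
    The raw record can usually be decoded further with the mcelog utility, which maps the bank and status bits to a human-readable cause and, on many platforms, a DIMM location. A sketch, assuming the text above has been saved to a file:

        # Decode a captured machine-check record from its text form
        mcelog --ascii < /tmp/mce.txt

        # Or review what the mcelog daemon has already decoded
        less /var/log/mcelog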

  • Solved: puppet master REST API returns 403 when running under Passenger but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided by the Puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file, and certificate requests I get a 403 response.

        ### Authenticated paths - these apply only when the client
        ### has a valid certificate and is thus authenticated

        # allow nodes to retrieve their own catalog
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1

        # allow nodes to retrieve their own node definition
        path ~ ^/node/([^/]+)$
        method find
        allow $1

        # allow all nodes to access the certificates services
        path ~ ^/certificate_revocation_list/ca
        method find
        allow *

        # allow all nodes to store their reports
        path /report
        method save
        allow *

        # unconditionally allow access to all file services
        # which means in practice that fileserver.conf will
        # still be used
        path /file
        allow *

        ### Unauthenticated ACL, for clients for which the current master doesn't
        ### have a valid certificate; we allow authenticated users, too, because
        ### there isn't a great harm in letting that request through.

        # allow access to the master CA
        path /certificate/ca
        auth any
        method find
        allow *

        path /certificate/
        auth any
        method find
        allow *

        path /certificate_request
        auth any
        method find, save
        allow *

        path /facts
        auth any
        method find, search
        allow *

        # this one is not strictly necessary, but it has the merit
        # of showing the default policy, which is deny everything else
        path /
        auth any

    The puppet master, however, does not seem to be following this, as I get these errors on the client:

        [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
        [sudo] password for amisr1:
        Starting Puppet client version 3.0.1
        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
        Info: Retrieving plugin
        Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
        Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
        Using cached catalog
        Error: Could not retrieve catalog; skipping run
        Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver.conf file is as follows (and going by what the Puppet site says, it is better to regulate access in auth.conf for reaching the file server, and then let the file server serve everything):

        [files]
            path /apps/puppet/files
            allow *

        [private]
            path /apps/puppet/private/%H
            allow *

        [modules]
            allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

        nginx version: nginx/1.3.9
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and this is the standard Nginx puppet master conf:

        server {
            ssl on;
            listen 8140 ssl;
            server_name _;
            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
            passenger_min_instances 5;
            access_log logs/puppet_access.log;
            error_log logs/puppet_error.log;
            root /apps/nginx/html/rack/public;
            ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    Puppet is picking up the correct settings from the files mentioned, because puppet config print points to /etc/puppet:

        [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
        async_storeconfigs = false
        authconfig = /etc/puppet/namespaceauth.conf
        autosign = /etc/puppet/autosign.conf
        catalog_cache_terminus = store_configs
        confdir = /etc/puppet
        config = /etc/puppet/puppet.conf
        config_file_name = puppet.conf
        config_version = ""
        configprint = all
        configtimeout = 120
        dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
        deviceconfig = /etc/puppet/device.conf
        fileserverconfig = /etc/puppet/fileserver.conf
        genconfig = false
        hiera_config = /etc/puppet/hiera.yaml
        localconfig = /var/lib/puppet/state/localconfig
        name = config
        rest_authconfig = /etc/puppet/auth.conf
        storeconfigs = true
        storeconfigs_backend = puppetdb
        tagmap = /etc/puppet/tagmail.conf
        thin_storeconfigs = false

    I checked the firewall rules on this VM; ports 80, 443, 8140 and 3000 are allowed. Do I still have to tweak anything specific in auth.conf to get this to work?

    Update: I added verbose logging to the puppet master and restarted Nginx; here is the additional info I see in the logs:

        Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31
        Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31
        Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
        Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
        10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"

    On the agent machine, facter fqdn and hostname both return a fully qualified host name:

        [amisr1@blramisr195602 ~]$ sudo facter fqdn
        blramisr195602.XXXXXXX.com

    I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on master and agent, and regenerated and signed the certificates on the master using --allow-dns-alt-names:

        [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com
        Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request.
        [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com
        Signed certificate request for blramisr195602.XXXXXX.com
        Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem'

    However, that doesn't help either; I get the same errors as before. I am not sure why the logs show access rules being compared against the IP rather than the hostname. Is there any Nginx configuration that changes this behavior?
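
    The "no name for 10.209.47.31" line suggests the master never receives the client's certificate DN, falls back to identifying the agent by its IP, and then matches no allow rule. Under Nginx/Passenger, Puppet must be told which CGI headers carry the verified DN. A sketch of the relevant puppet.conf settings, matching the passenger_set_cgi_param names in the server block above (verify against the Puppet 3 documentation for your release):

        # /etc/puppet/puppet.conf on the master
        [master]
        ssl_client_header        = HTTP_X_CLIENT_DN
        ssl_client_verify_header = HTTP_X_CLIENT_VERIFY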

  • postfix specify limited relay domain while allowing sasl-auth relay

    - by tylerl
    I'm trying to set up postfix to allow relaying under a limited set of conditions:

    - The destination domain is one of a pre-defined list, -or-
    - The client successfully logs in

    Here's the relevant bits o' config:

        smtpd_sasl_auth_enable = yes
        relay_domains = example.com
        smtpd_recipient_restrictions = permit_auth_destination, reject_unauth_destination
        smtpd_client_restrictions = permit_sasl_authenticated, reject

    The problem is that it requires that BOTH restrictions be satisfied, rather than either-or. Which is to say, it only allows relaying if the client is authenticated AND the recipient domain is @example.com. Instead, I need it to allow relaying if either one of the requirements is satisfied. How do I do this without resorting to running SMTP on two separate ports with different rules? Note: the context is an outbound-use-only MTA (bound to 127.0.0.1) on a shared web server, where all site owners are allowed to relay mail to one of the "owned" domains (not server-local, though), and a limited set of "trusted" site owners are allowed to relay mail without restriction, provided they have a valid SMTP login.
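
    Each smtpd_*_restrictions list is evaluated top to bottom with the first permit or reject winning, but the separate lists are effectively ANDed: a client rejected at the client stage never reaches the recipient stage. One sketch of the either-or behavior is to drop the client-stage reject and put both permits in the recipient stage, where the first match wins:

        # main.cf sketch - relay if SASL-authenticated OR destined for relay_domains
        smtpd_sasl_auth_enable = yes
        relay_domains = example.com
        smtpd_client_restrictions =
        smtpd_recipient_restrictions =
            permit_sasl_authenticated,
            permit_auth_destination,
            reject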

  • Looking for the best EC2 setup for 3 sites totaling 1.5 million hits monthly

    - by john h.
    I am looking to consolidate our current AWS setup of 2 large EC2 servers and 2 large RDS servers for our 3 websites, which get a total of about 1.5 million hits a month, increasing every month. The majority of the traffic (1 million) goes to one forum site in the group, and the rest goes to an ecommerce site and a small WordPress site. So here is my question: Would it be better for us to combine the two large EC2 servers into just one, and likewise the 2 RDS servers, so we run all three sites off one large EC2 instance and one RDS instance? -or- Should we set up maybe 2-3 smaller EC2 servers, load balanced, with a single RDS? -or- Something completely different? One concern is that if one site crashes, it takes the others with it. That happened in the past, but I am pretty sure it was because of the forum software and not the server setup. -john

  • List full timestamps of files in a tarball

    - by Mechanical snail
    I have a large tar archive and want to see the exact (nanosecond) timestamps that are stored for each file in the archive. In case it's relevant, the tarball is in POSIX-2001 format (tar --format=posix). tar --list --verbose displays the timestamps rounded off to the minute. For comparison, ls --full-time does what I want, but I'd rather not have to extract everything first because it's huge. For my purposes, command-line and GUI tools are both fine.
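
    One trick that avoids extracting anything: pax extended-header records are stored inside the archive as plain "length keyword=value" text, so on an uncompressed POSIX-format tarball the high-resolution mtimes can be pulled out directly. A crude sketch (uncompressed archives only):

        # Print path/mtime extended-header records straight from the archive;
        # mtime values look like: 30 mtime=1350613338.816244815
        tr '\0' '\n' < archive.tar | grep -E '^[0-9]+ (path|mtime)='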

  • Prevent gnome from automatically mounting partition when clicked in nautilus

    - by bjarkef
    Hi, I have two partitions on a hard drive in my machine that are formatted as NTFS but must under no circumstances be mounted by my Ubuntu installation (unless I do some preparation first). However, nautilus happily displays the partitions, and a single click will mount them automatically. This is very dangerous behaviour. How can I hide the partitions from nautilus and prevent accidentally mounting them with a single stray mouse click? Thanks
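
    One possible direction, assuming the desktop automounts via udisks (as GNOME did in this era): a udev rule can hide specific partitions from the desktop entirely. A sketch - the device names are placeholders, and the property name depends on the udisks version, so verify before relying on it:

        # /etc/udev/rules.d/99-hide-ntfs.rules (illustrative device matches)
        KERNEL=="sdb1", ENV{UDISKS_PRESENTATION_HIDE}="1"
        KERNEL=="sdb2", ENV{UDISKS_PRESENTATION_HIDE}="1"

    After adding the rule, reload with udevadm trigger and check whether nautilus still lists the partitions.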

  • Terse, documented, correct way to create Kerberos-backed user shares in Greyhole

    - by MrGomez
    As a migration strategy away from Windows Home Server (which is currently out of support and intractable for our needs, for a variety of reasons), our little cloister of nerds has targeted Greyhole for our shared use at home. Despite the documentation's terseness, getting the system set up for simple, single-user operation isn't especially difficult, but this scenario fails to service our needs. Among other highlights of the system, we're attempting to emulate Integrated Windows Authentication (with Kerberos) and single-user shares to keep the Windows users in the house happy and well-supported. I'm aware of the underlying systems that go into Greyhole and understand how to set up per-user shares in Samba, but the documentation doesn't seem to support cases for Greyhole to sop up these directories as separate landing zones for replication. Enter my question: are both of these cases (IWA user authentication and user-partitioned personal shares) supported by Greyhole? If so, please cite or link the supporting documentation if it exists.

  • Testing home directory scripts by setting $HOME to the location of the test directory

    - by intuited
    I have an interdependent collection of scripts in my ~/bin directory, as well as a developed ~/.vim directory and some other libraries and such in other subdirectories. I've been versioning all of this using git, and have realized that it would be potentially very easy and useful to do development and testing of new and existing scripts, vim plugins, etc. using a cloned repo, and then pull the working code into my actual home directory with a merge. The easiest way to do this would seem to be to just change and export $HOME, e.g.

        cd ~/testing; git clone ~ home
        export HOME=~/testing/home
        cd ~
        screen -S testing-home
        # start vim, write/revise plugins, edit scripts, etc.
        # test revisions

    However, since I've never tried this before, I'm concerned that some programs, environment variables, etc. may end up using my actual home directory instead of the exported one. Is this a viable strategy? Are there just a few outliers that I should be careful about? Is there a much better way to do this sort of thing?
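
    A slightly tighter variant is to set HOME for a single subshell instead of exporting it into the current session, so anything already running keeps the real value. A sketch:

        # Run an isolated login shell whose HOME is the cloned repo
        env HOME="$HOME/testing/home" bash -l

        # Everything started from that shell (vim, scripts) sees the sandbox HOME.
        # Caveat: programs that resolve the home directory via getpwuid() rather
        # than $HOME will still use the real one.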

  • Error with procmail script to use Maildir format

    - by bradlis7
    I have this code in /etc/procmailrc:

        DROPPRIVS=yes
        DEFAULT=$HOME/Maildir/

        :0
        * ? /usr/bin/test -d $DEFAULT || /bin/mkdir $DEFAULT
        {
        }

        :0 E
        {
            # Bail out if directory could not be created
            EXITCODE=127
            HOST=bail.out
        }

        MAILDIR=$HOME/Maildir/

    But when the directory already exists, sometimes it will send a return email with this error:

        554 5.3.0 unknown mailer error 127

    The email still gets delivered, mind you, but it sends back an error code to the sending user as well. I fixed this temporarily by commenting out the EXITCODE and HOST lines, but I'd like to know if there is a better solution. I found this block of code in multiple places across the net, but couldn't really find why this error was coming back to me. It seems to happen when I send an email to a local user. Sometimes the user has a .forward file to send it on to other users, sometimes not, but the result has been the same. I also tried removing DROPPRIVS, just in case it was messing up the forwarding, but it did not seem to affect it. Is the line starting with * ? /usr/bin/test a problem? The * signifies a regex, but the ? makes it return an integer value, correct? What is the integer being matched against? Or is it just comparing the integer return value? Do I need a space between the two blocks? Thanks for the help.
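
    One thing worth checking that is independent of the recipe logic: maildir delivery needs the cur/, new/, and tmp/ subdirectories, and the recipe above only creates the top-level directory. A one-time sketch to pre-create the full structure for a user (paths are assumptions):

        # Pre-create a complete Maildir so procmail never has to
        for d in cur new tmp; do
            mkdir -p "$HOME/Maildir/$d"
        done
        chmod -R 700 "$HOME/Maildir"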

  • What hypervisors support non-homogeneous clusters?

    - by edude05
    I've been using Citrix XenServer for a while on a few machines that don't support hardware virtualization, as a test for various small servers. I recently have been experimenting with moving the PV VMs between machines, but XenServer gives me errors that roughly say I need to have homogeneous hardware for this to work. Because of this I haven't been able to set up XenMotion or any of the nice features that come with server pooling in XenServer. I'm considering moving away from XenServer; however, I can't seem to find a hypervisor that explicitly supports non-homogeneous clusters. On a side note, we do have a few identically configured Dell 1950s that haven't had any VM solution set up on them yet, so if we can find a solution that allows us to move PVs to those as well, that would be great. Non-free solutions are OK as well. What hypervisor will allow this? Thanks!

  • chroot on OSX as a different OS

    - by ekaqu
    I was wondering if anyone has been able to use chroot on OS X to run another OS (Ubuntu, CentOS). I know that they are very different, but almost everything I want to use this for wouldn't care about anything at the level of the kernel, so I was hoping there would be a way to do this without using a VM. Based on my Google searches, I see this question gets asked, but there is no real answer other than "try a VM". I would really like to do this without a VM though.

  • Executing local script/command on remote server

    - by Ian McGrath
    I have a command that I want to run on machine B from machine A. If I run the command on machine B locally, it works fine. Here is the command:

        for n in `find /data1/ -name 'ini*.ext'` ; do echo cp $n "`dirname $n `/` basename $n .ext`"; done

    From machine A, I issue this command:

        ssh user@machineB for n in `find /data1/ -name 'ini*jsem'` ; do echo cp $n "`dirname $n `/` basename $n .jsem`"; done

    But I get the error:

        syntax error near unexpected token `do'

    What is wrong? I think it has something to do with double quotes, single quotes, and semicolons, because executing the command ssh user@machineB ls works fine - so it is not an issue of authentication or anything else. Thanks
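
    A likely explanation, for what it's worth: without quoting, machine A's shell expands the backticks and splits on the semicolons locally, so the remote shell receives only a fragment of the loop. Single-quoting the whole command ships it to machine B intact. A sketch:

        # Quote the entire loop so it is parsed on machineB, not locally
        ssh user@machineB 'for n in $(find /data1/ -name "ini*.ext"); do
            echo cp "$n" "$(dirname "$n")/$(basename "$n" .ext)"
        done'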
