Search Results

Search found 19788 results on 792 pages for 'remote host'.

Page 634/792

  • Setting up multiple servers for one domain

    - by Joseph Torraca
    So I am starting up a new website and I was wondering how to set up 5 servers to host the site. I have already purchased 5 Apple Xserves: one will be used as a test server and the other 4 will be for the live site. Some websites I have read reference installing load-balancing software on one server and having that server distribute the load; I have also read that you could use a hardware, rack-mounted load balancer and plug the servers into that. So I have a few questions about each:

    1) How do you set up the software version, with one "master" directing traffic and the other servers as "slaves"?

    2) Which of the two options above is more reliable and better suited for a startup that doesn't have many users per month yet (hopefully)?

    3) Is there a theoretical maximum number of servers that can be connected to a software load-balancing system? Obviously this will change from software to software, but can the balancing server itself handle it?

    4) In your own opinion, what are you using for your sites? Have you had any problems setting up that system or operating it once it's running? Is there anything you would stay away from if you had to start over?

    5) I also purchased an Apple RAID system; if you are familiar with it, is there any way to connect it to multiple Xserves so they all serve the same data? I'm a little confused on this, so thanks for all your help and for being patient with me.

    Note: Take it easy on me, I am learning this as I go along, so I may have used terms incorrectly or explained things that don't really make sense. Sorry. P.S. If you need me to supply the specs on the servers to determine which system makes the most sense, I can post them for you.
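
    A rough illustration of the software option, assuming Apache 2.2 on the front machine with mod_proxy, mod_proxy_http and mod_proxy_balancer loaded (the backend addresses are placeholders, not anything from the question):

        <Proxy balancer://webfarm>
            BalancerMember http://10.0.1.2:80
            BalancerMember http://10.0.1.3:80
            BalancerMember http://10.0.1.4:80
            BalancerMember http://10.0.1.5:80
        </Proxy>
        ProxyPass        / balancer://webfarm/
        ProxyPassReverse / balancer://webfarm/

    The "master" here is just an ordinary web server that proxies for the others; the trade-off is that it becomes a single point of failure, which is the usual argument for a hardware balancer or a redundant pair.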

    Read the article

  • How to configure Hyper-V failover cluster to live migrate when dynamic memory runs out?

    - by Matt Johnson
    Apologies in advance that this is not a direct programming question, but I have a feeling that the solution involves custom PowerShell scripts (maybe), so this is as good a place to ask as any. I maintain a website that has a large Hyper-V cluster for SQL Servers. We are using Windows 2008 R2 SP1 and the new "dynamic memory" feature. I've already reviewed the Best Practices Guide and implemented its suggested configuration. Everything works well, except that when SQL demand pushes memory pressure beyond what is physically available on the host, the memory status goes into the "Warning" state and stays there. I assume the hypervisor is using a swap file on the host to fulfill the memory requirement, thus slowing the virtual machine down. When this happens, there are plenty of other nodes in the cluster that have available resources. I can live-migrate the virtual server over there and everything works, and the warnings go away. Now how can I automate this? I see no menu options in either Hyper-V or the Failover Cluster Manager for performing a migration or shutdown when dynamic memory goes into the warning state. Any ideas about how to script this, or monitor it and invoke the action directly, would be helpful. If the solution involves coding, PowerShell would be ideal, but I could envision this as a .NET service that monitors for this state and kicks off the migration request. I just don't know what objects are involved in doing the monitoring or kicking off the live migration. Thanks in advance.
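
    The migration half, at least, is scriptable with the failover clustering cmdlets. A minimal sketch, with the VM and node names as placeholders and the memory-pressure test left as a stub, since which counter or WMI class to poll is exactly the open question here:

        Import-Module FailoverClusters

        # Stub: populate this from whatever dynamic-memory pressure source
        # proves reliable (perfmon counters, Msvm_* WMI data, ...)
        $memoryPressureWarning = $false

        if ($memoryPressureWarning) {
            # live-migrate the VM's cluster group to a less loaded node
            Move-ClusterVirtualMachineRole -Name "SQLVM01" -Node "NODE2" -MigrationType Live
        }

    Wrapped in a scheduled task or a service loop, something of this shape could watch for the warning state and fire the live migration.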

    Read the article

  • How much network latency is "typical" for east - west coast USA?

    - by Jeff Atwood
    At the moment we're trying to decide whether to move our datacenter from the west coast (Corvallis, OR) to the east coast (NY, NY). However, I am seeing some disturbing latency numbers from my location (Berkeley, CA) to the NYC host. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes:

        Berkeley to NYC server:       215 ms latency, 46 ms transfer time, 261 ms total
        Berkeley to Corvallis server: 114 ms latency, 41 ms transfer time, 155 ms total

    Some URLs if you want to try it yourself:

        http://careers.stackoverflow.com/content/cso/img/logo.png (NY, NY)
        http://serverfault.com/cache/logo.png (Corvallis, OR)

    It makes sense that Corvallis, OR is geographically closer to Berkeley, CA, so I expect the connection to be a bit faster... but I'm seeing an increase in latency of +100 ms when I perform the same test to the NYC server. That seems... excessive to me. Particularly since the time spent transferring the actual data only went up 10%, yet the latency went up ten times as much! That feels... wrong... to me. I found a few links here that were helpful (through Google, no less!):

        http://serverfault.com/questions/63531/does-routing-distance-affect-performance-significantly
        http://serverfault.com/questions/61719/how-does-geography-affect-network-latency
        http://serverfault.com/questions/6210/latency-in-internet-connections-from-europe-to-usa

    ... but nothing authoritative. So, is this normal? It doesn't feel normal. What is the "typical" latency I should expect when moving network packets from the east coast <--> west coast of the USA?
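
    For context, physics puts a floor under the number. A back-of-envelope sketch, assuming roughly 4,700 km between the coasts and light in fiber at about 200,000 km/s (two-thirds of c):

        # minimum round trip: 2 * 4700 km / 200000 km/s = 0.047 s
        echo "scale=1; 2 * 4700 / 200" | bc    # prints 47.0 (milliseconds)

    Real paths are longer than the great-circle distance and add queueing and router hops, so observed coast-to-coast RTTs normally sit well above that ~47 ms theoretical floor.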

    Read the article

  • Puppet cert mismatch in EC2

    - by Stick
    I'm setting up a puppetmaster (2.7.6) in EC2 via gems (on RHEL 6) and I'm running into problems with the cert names and getting the master able to talk to itself. My puppet.conf looks like this:

        [main]
            logdir = /var/log/puppet
            rundir = /var/run/puppet
            vardir = /var/lib/puppet
            ssldir = $vardir/ssl
            pluginsync = true
            environment = production
            report = true
            certname = master

    When I start the puppetmaster process the ssl directory looks like:

        ssl/private_keys/master.pem
        ssl/crl.pem
        ssl/public_keys/master.pem
        ssl/ca/ca_crl.pem
        ssl/ca/signed/master.pem
        ssl/ca/ca_crt.pem
        ssl/ca/ca_pub.pem
        ssl/ca/ca_key.pem
        ssl/certs/ca.pem
        ssl/certs/master.pem

    I have an /etc/hosts entry on the box pointing the 'puppet' hostname at localhost so that I don't have to change the 'server' option. When I run the agent I get the following:

        # puppet agent --test
        info: Retrieving plugin
        err: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Server hostname 'puppet' did not match server certificate; expected master
        err: /File[/var/lib/puppet/lib]: Could not evaluate: Server hostname 'puppet' did not match server certificate; expected master Could not retrieve file metadata for puppet://puppet/plugins: Server hostname 'puppet' did not match server certificate; expected master
        err: Could not retrieve catalog from remote server: Server hostname 'puppet' did not match server certificate; expected master
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run
        err: Could not send report: Server hostname 'puppet' did not match server certificate; expected master

    If I specify the certname as the server (with a corresponding hosts entry) I get:

        # puppet agent --test --server master
        info: Retrieving plugin
        err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://master/plugins
        info: Caching catalog for master
        info: Applying configuration version '1321805956'
        notice: Finished catalog run in 0.05 seconds

    Which is success of a sort; that source error will bite me later when I'm applying manifests. I've tried a couple of other variations using the EC2 private hostname and gotten mixed results. I'd like to avoid setting server = 'x' and instead use dns/hosts to control what 'puppet' resolves to in order to decide which server to use (it plays easier with availability zones, etc.).
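
    One approach often suggested for this exact mismatch is to give the master's certificate a subject alternative name for 'puppet', so that both names validate. A sketch only, assuming a 2.7.x new enough to support dns_alt_names, and with ssldir backed up first:

        # puppet.conf on the master
        [master]
        dns_alt_names = master,puppet

        # wipe the old master cert, then restart the master once so it
        # re-issues its certificate with the extra name
        puppet cert --clean master

    The exact regeneration steps vary by version, so treat this as a direction to investigate rather than a recipe.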

    Read the article

  • Strange Postfix logwatch log summary on my Ubuntu VPS

    - by DannyRe
    Hi, I would be very thankful if someone could help me interpret this logwatch summary of my Postfix installation on my Ubuntu 10.04 VPS. I don't really know whether this is a normal log, given the many authentication-failure entries and foreign IP addresses. Any advice for a novice? Thanks!

        ****** Summary **************************************************
            113   SASL authentication failed
            195   Miscellaneous warnings

         8.419K   Bytes accepted                             8,621
         8.419K   Bytes delivered                            8,621
        ========   ==================================================
              3   Accepted                                  60.00%
              2   Rejected                                  40.00%
        --------   --------------------------------------------------
              5   Total                                    100.00%
        ========   ==================================================
              2   5xx Reject relay denied                  100.00%
        --------   --------------------------------------------------
              2   Total 5xx Rejects                        100.00%
        ========   ==================================================
            116   Connections
              1   Connections lost (inbound)
            116   Disconnections
              3   Removed from queue
              3   Delivered
              1   Hostname verification errors

        ****** Detail (10) **********************************************
            113   SASL authentication failed
                    113   92.24.80.207  host-92-24-80-207.ppp.as43234.net
                            113   LOGIN
                                    113   generic failure
            195   Miscellaneous warnings
                    113   SASL authentication failure: cannot connect to saslauthd server: Permission denied
                     41   inet_protocols: IPv6 support is disabled: Address family not supported by protocol
                     41   inet_protocols: configuring for IPv4 support only
              2   5xx Reject relay denied
                      1   46.242.103.110  unknown
                              1   [email protected]
                      1   114.42.142.103  114-42-142-103.dynamic.hinet.net
                              1   [email protected]
              1   Connections lost (inbound)
                      1   After RCPT
              3   Delivered
                      3   myhost.xx
              1   Hostname verification errors
                      1   Name or service not known
                              1   46.242.103.110  broadband-46-242-103-110.nationalcablenetworks.ru

        === Delivery Delays Percentiles =================================
             0%    25%    50%    75%    90%    95%    98%   100%
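
    On the "cannot connect to saslauthd server: Permission denied" warnings specifically: on Debian/Ubuntu, Postfix's chrooted smtpd often cannot reach saslauthd's default socket. A commonly cited fix, offered as a sketch to verify rather than a guaranteed recipe:

        # /etc/default/saslauthd -- move the socket inside the Postfix chroot
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"

        # let the postfix user reach the socket, then restart both services
        sudo adduser postfix sasl

    The 113 failed LOGIN attempts from a single outside host, on the other hand, look like a routine brute-force probe; fail2ban or a firewall rule is the usual response.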

    Read the article

  • Sticky sessions not sticky on ColdFusion cluster

    - by GreatSeaSpider
    We're trying to deploy a legacy ColdFusion site onto a new CF8 cluster. The cluster consists of three CF instances running under JRun4 on a single Windows 2008 server. I've got the cluster set not to replicate sessions, and sticky sessions turned on. Each instance is set to use J2EE session variables. The application tag for the site has:

        sessionmanagement="Yes" setclientcookies="Yes" setdomaincookies="Yes"

    When each instance starts, no errors are reported in the instance log and they join the cluster without any issues, though the instances do log:

        16/10 08:31:25 info SessionReplicationService successfully joined a JINI lookup service (assigned JINI-ID .....)

    and

        16/10 08:31:25 info Clusterable service SessionReplicationService discovered a SessionReplicationService peer on a JRun server named "xxxx" on host xxxx

    which is interesting, since session replication is definitely off. Is the SessionReplicationService responsible for sticky sessions as well? That's enough background; the problem is that the sticky sessions simply don't work. Each request is bounced to a different instance, and it seems as if the sessions are being lost on each instance anyway. As soon as the cluster is down to a single instance, the web app works exactly as expected and the sessions seem fine. Has anybody got any ideas for me? I've been trawling the web and I can't seem to find any answers.

    Read the article

  • Problems installing Trac using apt-get on Ubuntu Jaunty

    - by Ben Waine
    Hi, I'm having some issues getting apt to install Trac correctly on my Ubuntu Jaunty box. Using the command 'apt-get install trac' I get the following output:

        root@myserver:~# apt-get install trac
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        Since you only requested a single operation it is extremely likely that
        the package is simply not installable and a bug report against that
        package should be filed.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          trac: Depends: python-setuptools (> 0.5) but it is not installable
                Depends: python-pysqlite2 (>= 2.3.2) but it is not going to be installed
                Depends: python-subversion but it is not installable
                Depends: libjs-jquery but it is not installable
                Recommends: python-pygments (= 0.6) but it is not installable or
                            enscript but it is not installable
                Recommends: python-tz but it is not installable
        E: Broken packages

    I have successfully used the command on my Karmic Koala desktop machine and am able to create new projects etc. I thought I might be able to solve the problem by installing all Python-related extensions, but that produced a very similar output. I have main, universe and multiverse repositories enabled. It's a remote machine and I have no access to the GUI. Hope someone can help; googling failed to turn up a solution! Thanks, Ben
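
    Since "not installable" (as opposed to "not going to be installed") usually means apt cannot see the package at all, the first thing worth ruling out is a stale or incomplete package index on the server. A quick hedged check:

        # confirm universe really is in the server's source lists
        grep -h '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/* 2>/dev/null

        # refresh the index, then ask whether apt now sees a candidate
        sudo apt-get update
        apt-cache policy python-setuptools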

    Read the article

  • How to get the permissions right for /dev/raw1394

    - by Mark0978
    I recently upgraded one of my Ubuntu machines to Karmic and I'm having trouble getting the permissions of /dev/raw1394 set to 0666. The only thing this machine is used for is recording audio from a FirePod, which uses /dev/raw1394 via jackd, and there are no other FireWire devices connected, so security around this device is not really an issue. If I run as root, everything works as expected, but some of the folks who run the recorder shouldn't have root access. However, I can't figure out which lines set up the permissions. I've tried this:

        /etc/udev/permissions.d/raw1394.rules:raw1394:root:root:0666

    And I have this setup (default install):

        /lib/udev/rules.d/75-persistent-net-generator.rules:SUBSYSTEMS=="ieee1394", ENV{COMMENT}="Firewire device $attr{host_id})"
        /lib/udev/rules.d/75-cd-aliases-generator.rules:# the "path" of usb/ieee1394 devices changes frequently, use "id"
        /lib/udev/rules.d/75-cd-aliases-generator.rules:ACTION=="add", SUBSYSTEM=="block", SUBSYSTEMS=="usb|ieee1394", ENV{ID_CDROM}=="?*", ENV{GENERATED}!="?*", \
        /lib/udev/rules.d/60-persistent-storage-tape.rules:KERNEL=="st*[0-9]|nst*[0-9]", ATTRS{ieee1394_id}=="?*", ENV{ID_SERIAL}="$attr{ieee1394_id}", ENV{ID_BUS}="ieee1394"
        /lib/udev/rules.d/50-udev-default.rules:# FireWire (deprecated dv1394 and video1394 drivers)
        /lib/udev/rules.d/50-udev-default.rules:KERNEL=="dv1394-[0-9]*", NAME="dv1394/%n", GROUP="video"
        /lib/udev/rules.d/50-udev-default.rules:KERNEL=="video1394-[0-9]*", NAME="video1394/%n", GROUP="video"
        /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[!0-9]|sr*", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}"
        /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[0-9]", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}-part%n"

    And I find these lines in /var/log/syslog:

        Apr 30 09:11:30 record kernel: [    3.284010] ieee1394: Node added: ID:BUS[0-00:1023]  GUID[000a9200c7062266]
        Apr 30 09:11:30 record kernel: [    3.284195] ieee1394: Host added: ID:BUS[0-01:1023]  GUID[00d0035600a97b9f]
        Apr 30 09:11:30 record kernel: [   18.372791] ieee1394: raw1394: /dev/raw1394 device initialized

    What I can't figure out is which line actually creates that raw1394 device in the first place. How do you get /dev/raw1394 to have permissions 0666?
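
    For what it's worth, the old /etc/udev/permissions.d format was dropped from modern udev, which would explain why that file has no effect on Karmic; a rules-style file is the expected mechanism now. A minimal, untested sketch (the filename is hypothetical):

        # /etc/udev/rules.d/40-raw1394.rules
        KERNEL=="raw1394", MODE="0666"

    After adding it, replugging the device or running 'sudo udevadm trigger' should apply the new mode.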

    Read the article

  • Windows XP SP3 Keyboard stops working

    - by Kevin K
    Here's the strangest thing I have yet to see in 20+ years of computer repairs. My in-laws' Windows XP SP3 machine has stopped recognizing keyboards. The keyboards work fine in the BIOS, during the boot-select process to boot normally, etc., but once Windows comes up it will not recognize any USB keyboard. The USB mouse works fine; I have tried different USB ports, different keyboards, etc., and nothing works. I can log into the machine via VNC and use the remote keyboard just fine, but not connected locally. I tried a system restore; it says nothing changed. I am about to just re-install Windows at this point, except I am afraid it will happen again. I have googled for this and it is not unheard of, but I have not found any solution other than nuking it. Anyone have any ideas? I have re-installed the USB drivers for the motherboard, gone into Device Manager and deleted the devices for a re-install, etc. The keyboard works off a Linux live boot CD and in the BIOS setup, so it is not a hardware issue, and I have tried a few keyboards, all of which I know are good and work fine on other systems.

    Read the article

  • Apache Won't Restart After Compiling PHP with Postgres

    - by gonzofish
    I've compiled PHP (v5.3.1) with Postgres support using the following configure:

        ./configure \
            --build=x86_64-redhat-linux-gnu \
            --host=x86_64-redhat-linux-gnu \
            --target=x86_64-redhat-linux-gnu \
            --program-prefix= \
            --prefix=/usr/ \
            --exec-prefix=/usr/ \
            --bindir=/usr/bin/ \
            --sbindir=/usr/sbin/ \
            --sysconfdir=/etc \
            --datadir=/usr/share \
            --includedir=/usr/include/ \
            --libdir=/usr/lib64 \
            --libexecdir=/usr/libexec \
            --localstatedir=/var \
            --sharedstatedir=/usr/com \
            --mandir=/usr/share/man \
            --infodir=/usr/share/info \
            --cache-file=../config.cache \
            --with-libdir=lib64 \
            --with-config-file-path=/etc \
            --with-config-file-scan-dir=/etc/php.d \
            --with-pic \
            --disable-rpath \
            --with-pear \
            --with-pic \
            --with-bz2 \
            --with-exec-dir=/usr/bin \
            --with-freetype-dir=/usr \
            --with-png-dir=/usr \
            --with-xpm-dir=/usr \
            --enable-gd-native-ttf \
            --with-t1lib=/usr \
            --without-gdbm \
            --with-gettext \
            --without-gmp \
            --with-iconv \
            --with-jpeg-dir=/usr \
            --with-openssl \
            --with-zlib \
            --with-layout=GNU \
            --enable-exif \
            --enable-ftp \
            --enable-magic-quotes \
            --enable-sockets \
            --enable-sysvsem \
            --enable-sysvshm \
            --enable-sysvmsg \
            --with-kerberos \
            --enable-ucd-snmp-hack \
            --enable-shmop \
            --enable-calendar \
            --with-libxml-dir=/usr \
            --enable-xml \
            --with-system-tzdata \
            --with-mime-magic=/usr/share/file/magic \
            --with-apxs2=/usr/sbin/apxs \
            --with-mysql=/usr/include/mysql \
            --without-gd \
            --with-dom=/usr/include/libxml2/libxml \
            --disable-dba \
            --without-unixODBC \
            --disable-pdo \
            --enable-xmlreader \
            --enable-xmlwriter \
            --without-sqlite \
            --without-sqlite3 \
            --disable-phar \
            --enable-fileinfo \
            --enable-json \
            --without-pspell \
            --disable-wddx \
            --with-curl=/usr/include/curl \
            --enable-posix \
            --with-mcrypt \
            --enable-mbstring \
            --with-pgsql=/mnt/mv/pgsql

    I'm using Postgres 8.4.0 and Apache 2.2.8; I have the following line in my Apache conf file:

        LoadModule php5_module /usr/lib64/httpd/modules/libphp5.so

    And when I attempt to restart Apache, I get the following error message:

        Starting httpd: httpd: Syntax error on line 205 of /etc/httpd/conf/httpd.conf: Cannot load /usr/lib64/httpd/modules/libphp5.so into server: /usr/lib64/httpd/modules/libphp5.so: undefined symbol: lo_import_with_oid

    Now, I know this is a problem between Postgres and PHP, because lo_import_with_oid is a function in the Postgres source which allows the importing of large objects; also, if I remove the --with-pgsql option, PHP and Apache get along great. I've scoured the Internet looking for answers all day, but to no avail. Does anyone have ANY insight into what is causing my problems?
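
    Given that lo_import_with_oid first appeared in the libpq shipped with 8.4, one hedged check is whether the module resolves an older libpq at runtime than the one it was compiled against (the library paths below are assumptions):

        # which libpq does the PHP module bind to at runtime?
        ldd /usr/lib64/httpd/modules/libphp5.so | grep libpq

        # does that library actually export the symbol?
        nm -D /usr/lib64/libpq.so.5 | grep lo_import_with_oid

    If the symbol is missing from the library that ldd reports, pointing --with-pgsql at the 8.4 installation's lib directory (or fixing the loader path) would be the direction to explore.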

    Read the article

  • Sendmail delivering locally instead of to MTA in MX record

    - by CreativeNotice
    Ok, so I've got a box named websrv1.mydomain.com. It's a web server running Ubuntu, Apache2, sendmail, etc. My email is outsourced to a third party, so in my DNS I've got MX set to mx.thirdparty.net. I've no reason to accept incoming mail on my web server; every email should be sent to the third party. This works correctly except when sending mail from the web server itself (via cron or the console). From my web server, if I send an email to [email protected], it just disappears: no errors, nothing in dead.letter, nothing. I can send to any other address with no issues. If I send to [email protected] it's delivered locally, which is fine.

    1) Doing an nslookup shows the MX record is correct.

    2) Running /mx mydomain.com from sendmail -bt returns the correct result.

    3) Running sendmail -bv [email protected] returns:

        sudo sendmail -bv [email protected]
        [email protected]... deliverable: mailer esmtp, host mydomain.com., user [email protected]

    4) Running 3,0 [email protected] returns:

        > 3,0 [email protected]
        canonify           input: me @ mydomain . com
        Canonify2          input: me
        Canonify2        returns: me
        canonify         returns: me
        parse              input: me
        Parse0             input: me
        Parse0           returns: me
        Parse1             input: me
        MailerToTriple     input: me
        MailerToTriple   returns: me
        Parse1           returns: $# esmtp $@ mydomain . com . $: me
        parse            returns: $# esmtp $@ mydomain . com . $: me

    So I'm at a loss. Sendmail seems to see the MX record, but it's not using it.
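
    One hedged possibility worth checking: sendmail treats any name listed in class w (typically populated from /etc/mail/local-host-names) as a local domain, and for local domains the MX record never comes into play. Quick checks:

        # the domain should NOT appear here if you want MX-based delivery
        grep -i mydomain.com /etc/mail/local-host-names

        # dump class w from the address-test mode and look for the domain
        echo '$=w' | sudo sendmail -bt

    If the domain does appear, removing it (or adding a mailertable entry pointing at mx.thirdparty.net) and rebuilding the sendmail config is the usual next step.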

    Read the article

  • Enable PasswordAuthentication on OpenSuse 10

    - by Riduidel
    Hi, I've a virtual instance of Suse 10 running in my VMware Player, and I'm fighting with it to allow ssh password authentication. How can I make this work, given that I have already tuned the /etc/ssh/ssh_config file like this:

        # $OpenBSD: ssh_config,v 1.20 2005/01/28 09:45:53 dtucker Exp $
        Host *
        #   ForwardAgent no
            ForwardX11 yes
            ForwardX11Trusted yes
            PubkeyAuthentication no
            RhostsRSAAuthentication no
            RSAAuthentication no
            PasswordAuthentication yes
            HostbasedAuthentication no
            Protocol 2
            SendEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
            SendEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
            SendEnv LC_IDENTIFICATION LC_ALL

    The ssh connection sends me the following logs:

        Incoming packet #0x5, type 51 / 0x33 (SSH2_MSG_USERAUTH_FAILURE)
        00000000  00 00 00 1e 70 75 62 6c 69 63 6b 65 79 2c 6b 65  ....publickey,ke
        00000010  79 62 6f 61 72 64 2d 69 6e 74 65 72 61 63 74 69  yboard-interacti
        00000020  76 65 00                                         ve.
        Outgoing packet #0x6, type 50 / 0x32 (SSH2_MSG_USERAUTH_REQUEST)
        00000000  00 00 00 04 72 6f 6f 74 00 00 00 0e 73 73 68 2d  ....root....ssh-
        00000010  63 6f 6e 6e 65 63 74 69 6f 6e 00 00 00 14 6b 65  connection....ke
        00000020  79 62 6f 61 72 64 2d 69 6e 74 65 72 61 63 74 69  yboard-interacti
        00000030  76 65 00 00 00 00 00 00 00 00                    ve........

    telling me that it expects publickey and keyboard-interactive authentication, which I don't want to use.
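
    A detail that trips a lot of people up here: /etc/ssh/ssh_config configures the client; what the daemon offers is controlled by /etc/ssh/sshd_config on the server side. A sketch of the server-side change:

        # /etc/ssh/sshd_config on the Suse guest
        PasswordAuthentication yes

        # then restart the daemon (SUSE init shortcut; otherwise
        # /etc/init.d/sshd restart)
        rcsshd restart

    The USERAUTH_FAILURE list ("publickey,keyboard-interactive") is the server's offer, which is why tuning the client-side file alone doesn't change it.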

    Read the article

  • Port forwarding + shared connection with Ubuntu

    - by Joey Adams
    Because my wireless router's ethernet ports are defective, I set up a shared wireless connection from my laptop (which has wifi) to my eMac (which does not) via a crossover ethernet cable. The laptop is behind a router as 192.168.1.131, and the eMac is behind the laptop as 10.42.43.1. The laptop is running Ubuntu 9.10 (Karmic). I achieved the shared connection through NetworkManager Applet: I right-clicked on the network icon at the top right, went to Edit Connections, selected the wired connection named "Auto eth0", clicked "Edit...", went to the "IPv4 Settings" tab, and selected the method "Shared to other computers". The eMac can now access the Internet. Now I want to enable port forwarding. There's a game I want to play that needs port 6112 forwarded (both TCP and UDP) in order to host games. I set up the router to enable port forwarding to 192.168.1.131 (the laptop), but port forwarding still isn't available on the eMac. I suppose I need to pretend my laptop is a router and configure port forwarding on it, indicating that incoming connections to the laptop (192.168.1.131) should be forwarded to the eMac on the shared connection (10.42.43.1). Thus, packets coming into the router on port 6112 would be redirected to the laptop (by the router), then to the eMac (by the laptop). My question is, how would I do that on Ubuntu (in light of NetworkManager's presence)? Also, if I can't get this to work, does anyone mind hosting a comp stomp? :D
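
    A sketch of the iptables DNAT rules this describes, assuming wlan0 is the laptop's uplink to the router (the interface name is an assumption; check with ifconfig):

        sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 6112 -j DNAT --to-destination 10.42.43.1
        sudo iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 6112 -j DNAT --to-destination 10.42.43.1
        sudo iptables -A FORWARD -p tcp -d 10.42.43.1 --dport 6112 -j ACCEPT
        sudo iptables -A FORWARD -p udp -d 10.42.43.1 --dport 6112 -j ACCEPT

    One caveat: NetworkManager's "Shared to other computers" mode installs its own NAT rules and may reapply them on reconnect, so rules like these generally need to live in a script that runs after the connection comes up.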

    Read the article

  • How to disable all bounce back email in exim 4.69

    - by liame
    I have set up an email server to send out solicited newsletters. There should be no "regular" users of this server, so it is not desirable to send bounce notifications back to the recipient, especially since I am tracking bounces myself by parsing the log files periodically. What I want is to unconditionally prevent exim from ever sending a bounce notification email back to a sender. How can I do this? Thank you! (I accidentally posted this to Super User before posting it here; disregard that if you come across it.) What I want is an email server that will accept all incoming email, deliver it accordingly (that is, remotely or locally), and not send a bounce notification to the sender upon a bounce. I log bounces myself, in a database. The only function bounce messages have in my setting is to waste resources and bandwidth. I need to send emails fast; using exiwhat during a run, I see a significant number of deliveries to [email protected]. I could potentially increase my email throughput by 10~20% if all bounce emails were eliminated.
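
    One technique sometimes suggested for exim is a catch-all router that discards any message with an empty envelope sender, which is what bounce notifications use. A hypothetical sketch (the router name is invented; it would go ahead of the other routers, and it swallows incoming DSNs too, which here is exactly the point):

        discard_bounces:
          driver = redirect
          senders = :
          data = :blackhole:

    Test on a staging config first; how routers are split across files depends on the distribution's exim packaging.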

    Read the article

  • Solaris 10 invalid ARP requests from 0.0.0.0?

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):

        xxxx:xxxx:xxxx/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp]

    I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is in netstat -a, under SCTP:

        SCTP:
                Local Address                   Remote Address                  Swind  Send-Q Rwind  Recv-Q StrsI/O State
        ------------------------------- ------------------------------- ------ ------ ------ ------ ------- -----------
        0.0.0.0                         0.0.0.0                              0      0 102400      0   32/32 CLOSED

    But I'm not really sure what that means, and it doesn't seem like I can disable SCTP. Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?

    Read the article

  • How to access programs in one PC using another PC

    - by darkstar13
    Hi, I was recently given an old PC for my remote access at work. The tower that comes with it has Windows XP installed, 400+ MB of RAM, and all USB devices disabled. I access my work applications using VPN / Citrix. Basically, it's soooo slow, plus it's bulky and will just occupy space, so I am now hoping to find a way to integrate this work PC with my home PC. I tried putting its hard drive in my home PC's tower, set as slave. However, when I booted my PC from this hard drive, I was stuck at the screen where Windows prompts me to select how to boot (e.g. Safe Mode, Safe Mode with Command Prompt, Last Known Good Configuration, etc.); whatever option I select, I am still stuck at this screen after a reboot. I am thinking maybe I can clone the drive, mount the cloned drive, and access the system as a virtual machine, but I don't know if that will work. I would like to know if there's something I can do so I can work at home using my home PC, where I can access my work programs to connect to VPN / Citrix. My home PC's OS is Windows 7 Ultimate x64.

    Read the article

  • Cygwin, ssh, and git on Windows Server 2008

    - by Paul
    Hi everyone. I'm trying to set up a git repository on an existing Windows 2008 (R2) server. I have successfully installed Cygwin and added git and ssh to the packages, and everything works perfectly (thanks to Mark for his article on it). I can ssh to localhost on the server, and I can do git operations locally on the server. When I try to do either from the client, however, I get the "port 22, Bad file number" error. Detailed SSH output is limited to this:

        OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
        debug1: Connecting to {myserver} [{myserver}] port 22.
        debug1: connect to address {myserver} port 22: Attempt to connect timed out without establishing a connection
        ssh: connect to host {myserver} port 22: Bad file number

    Google tells me that this means I'm being blocked, usually by a firewall. So I double-checked the firewall settings on the server; the rule is there allowing port 22 traffic. I even tried turning off the firewall briefly; no change in behavior. I can ssh just fine from that client to other servers. The hosting company swears there's no other firewall blocking that server on port 22 (or any other port, they claim, but I find that hard to believe). I have another trouble ticket in with them, just in case the first support person was full of it, but meanwhile I wanted to see if anyone could think of anything else it could be. Thanks, Paul
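
    A hedged triage that can narrow down where the block sits, assuming sshd really is listening externally:

        # on the server (Cygwin shell): is sshd bound beyond loopback?
        netstat -an | grep ":22"

        # from the client, if netcat is available: does raw TCP get through?
        nc -zv myserver 22

    A timeout (as seen here), rather than "connection refused", usually does point at a filter somewhere along the path rather than at sshd itself.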

    Read the article

  • Has anyone used the sharedband connection bonding product?

    - by John Rennie
    See http://www.sharedband.com/ for details on the product. Obviously Sharedband aren't too keen on giving away their technical secrets, but I would guess that it bonds the connections at the IP layer, i.e. their routers send the IP packets to the Sharedband routers over all available lines, and the Sharedband routers handle all the virtual circuitry and provide the NATing to whatever IP address(es) they've assigned you. It looks a clever idea, and a good way to provide some resilience over ADSL links. You can even use ADSL links from different ISPs and Sharedband will still bond them for you. But I find myself wondering how well it really works, and whether it's worth it. The Draytek routers can already load-balance (though not bond) up to four ADSL lines, so the Sharedband product really only offers an advantage if you're hosting servers, i.e. you can have one IP address accepting incoming connections through all your (working) ADSL lines. But should you really try to host servers using ADSL lines, especially since ADSL upload performance isn't stellar? Wouldn't it be better to use a hosted server, or maybe pay up for a leased line with an SLA? So I'm asking if anyone is using Sharedband, and if so, what do you think of it? JR

    Read the article

  • So Close: How to get this SSH login working (.bashrc)

    - by This_Is_Fun
    Objective: SSH login (+ eliminate warning message) / run two commands / stay logged in.

    EDIT: Oops, I made a mistake (see below). This code does ~95% of what I wanted to do:

        # .bashrc
        # Run two commands and stay logged in to new server.
        alias gr='ssh -t -p 5xx4x [email protected] 2> /dev/null "cd /var; ls; /bin/bash -i"'

    Now, after a successful login, verifying the user shows:

        root     pts/0        2011-01-30 22:09

    But trying to 'logout' gives: bash: logout: not login shell: use `exit'. I seem to have full root access without being logged into a login shell? (The /bin/bash -i was added to stay logged in, but doesn't work quite as expected.) FYI: the question is "How to get this SSH login working" and it is mostly solved; sorry I made a mess...

    Original question:

        # .bashrc
        # Run two commands and stay logged in to new server.
        alias gr='ssh -t -p 5xx4x [email protected] "cd /var; ls; /bin/bash -i"'

        # (hack) Hide "map back to the address - POSSIBLE BREAK-IN ATTEMPT!" message.
        alias gr='ssh -p 5xx4x [email protected] 2> /dev/null'

    Both examples 'work' as shown. When I try to add the '2> /dev/null' to the first example, the whole thing breaks. I'm out of time trying to solve the warning message other ways, so is it possible to combine both examples to make example #1 work without the warning message? Thank you. P.S. If you also know a proper way to kill the login warning message, please do tell (the 'standard' "edit host file" advice isn't working for me).

    Read the article

  • To clone or to automate a system installation?

    - by Shtééf
    Let's say you're setting up a cluster of servers performing the same task, or just setting up a bunch of different servers on which you expect to use a common base configuration. Would it be better practice to create a base image and clone it, or to automate the installation and configuration? I occasionally end up in this argument with my boss, in situations where we're time-pressed. When he sees me struggle with perfecting the automation, his suggestion is often to clone the entire disk to the other machines. But my instinct has always been to avoid cloning. This is mostly from an Ubuntu perspective, but the question is fairly general. My reasons for avoiding cloning are:

      - On a typical install, even if it's fresh, there are already several unique identifiers installed: filesystem UUIDs, SSH host keys, among others. These would have to be regenerated (a sketch of what that involves follows this list).
      - Network needs to be reconfigured for each clone. This would need to be done off-line, of course, or the settings will conflict with other machines on the network.

    On the other hand, some of the cloning advantages are quite clear as well:

      - (Initially?) less effort required than automating configuration.
      - Tools exist to quickly address (some) of the above disadvantages.

    (I can see right through my own bias there.)
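
    As a rough sketch of the identifier cleanup a clone needs on Ubuntu (the device name and paths are illustrative assumptions):

        # regenerate SSH host keys
        sudo rm /etc/ssh/ssh_host_*key*
        sudo dpkg-reconfigure openssh-server

        # give the filesystem a new UUID -- then update /etc/fstab and grub to match!
        sudo tune2fs -U random /dev/sda1

    None of this is exotic, but every step is one more thing the cloning approach has to remember and the automated approach simply never encounters.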

    Read the article

  • Using dnsmasq for accessing multiple nameservers assigned by DHCP

    - by Ash
    At my work desktop running openSUSE 11.4, I have a local network which gets its address, domain (work.site) and nameservers (10.100.1.1, 10.100.1.2) through DHCP; these get written into /etc/resolv.conf. I access the internet over the work network, and these two nameservers end up answering any public domain lookups. I also have a private VPN that I connect to. The nameserver (10.111.1.1) and domain (private.site) rarely change for this network, but currently they're pushed by the OpenVPN client into NetworkManager and merged into the existing /etc/resolv.conf. My resolv.conf ultimately ends up looking like this:

        search private.site work.site
        nameserver 127.0.0.1
        nameserver 10.111.1.1
        nameserver 10.100.1.1

    As you can see, the second nameserver from my work network was pushed out because of the three-entry limit. It's fine for now, but would be a problem if the remaining nameserver goes down for maintenance or something. So I found out that dnsmasq could help me here, and set up dnsmasq as a local DNS resolver without any DHCP support. This is my /etc/dnsmasq.conf:

        resolv-file=/etc/resolv.conf
        server=/private.site/10.111.1.1
        server=/1.111.10.in-addr.arpa/10.111.1.1
        listen-address=127.0.0.1
        bind-interfaces
        log-queries

    I've made dnsmasq get the list of nameservers from /etc/resolv.conf, since NetworkManager seems to be updating this list correctly (for a maximum of three nameservers). I'm able to resolve the host names on both networks correctly. So these are the questions I have:

      1. Is there a way I can make either NetworkManager or dhclient write the list of nameservers somewhere else, which I can then point dnsmasq at as its resolv-file?
      2. How do I make dnsmasq use certain nameservers as the default for all queries? Right now I notice that lookups for public domains on the internet are usually sent to both sets of nameservers, work.site as well as private.site. It would be good if I could limit this to work.site only.
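
    On question 2 specifically, a sketch of one way to pin the defaults, assuming the work nameservers are stable enough to hard-code: stop reading resolv.conf entirely and name the default servers in dnsmasq.conf, keeping the domain-scoped lines for the VPN:

        no-resolv
        server=10.100.1.1
        server=10.100.1.2
        server=/private.site/10.111.1.1
        server=/1.111.10.in-addr.arpa/10.111.1.1

    With that, public lookups only ever go to the two work.site servers, while private.site (and its reverse zone) is routed to the VPN nameserver. The cost is that DHCP-driven nameserver changes no longer propagate automatically, which is what question 1 was trying to preserve.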

    Read the article

  • Error while detecting start profile for instance with ID _u

    - by Techboy
    I am upgrading a SAP Java instance and am getting the error shown in the log below. There are no backup profile files in the profiles directory. The string _u does not exist in the default, start or instance profile for instance SCS01. Please can you suggest what the issue might be?

        Mar 13, 2010 12:09:01 PM [Info]:  com.sap.sdt.ucp.phases.AbstractPhaseType.initialize(AbstractPhaseType.java:758) [Thread[main,5,main]]: Phase PREPARE/INIT/DETERMINE_PROFILES has been started.
        Mar 13, 2010 12:09:01 PM [Info]:  com.sap.sdt.ucp.phases.AbstractPhaseType.initialize(AbstractPhaseType.java:759) [Thread[main,5,main]]: Phase type is com.sap.sdt.j2ee.phases.PhaseTypeDetermineProfiles.
        Mar 13, 2010 12:09:01 PM [Info]:  ...ap.sdt.j2ee.phases.PhaseTypeDetermineProfiles.checkVariables(PhaseTypeDetermineProfiles.java:284) [Thread[main,5,main]]: All 4 required variables exist in variable handler.
        Mar 13, 2010 12:09:01 PM [Info]:  ....j2ee.tools.sysinfo.AbstractInfoController.updateProfileVariable(AbstractInfoController.java:302) [Thread[main,5,main]]: Parameter /J2EE/StandardSystem/DefaultProfilePath has been detected. Parameter value is \\server1\SAPMNT\PI7\SYS\profile\DEFAULT.PFL.
        Mar 13, 2010 12:09:01 PM [Info]:  com.sap.sdt.j2ee.tools.sysinfo.ProfileDetector.detectInstance(ProfileDetector.java:340) [Thread[main,5,main]]: Instance SCS01 with profile \\server1\SAPMNT\PI7\SYS\profile\PI7_SCS01_server1, start profile \\server1\SAPMNT\PI7\SYS\profile\START_SCS01_server1, and host server1 has been detected.
        Mar 13, 2010 12:09:01 PM [Error]: com.sap.sdt.ucp.phases.AbstractPhaseType.doExecute(AbstractPhaseType.java:862) [Thread[main,5,main]]: Exception has occurred during the execution of the phase.
        Mar 13, 2010 12:09:01 PM [Error]: com.sap.sdt.j2ee.tools.sysinfo.ProfileDetector.detectInstance(ProfileDetector.java:297) [Thread[main,5,main]]: Error while detecting start profile for instance with ID _u.
        Mar 13, 2010 12:09:01 PM [Info]:  com.sap.sdt.ucp.phases.AbstractPhaseType.cleanup(AbstractPhaseType.java:905) [Thread[main,5,main]]: Phase PREPARE/INIT/DETERMINE_PROFILES has been completed.
        Mar 13, 2010 12:09:01 PM [Info]:  com.sap.sdt.ucp.phases.AbstractPhaseType.cleanup(AbstractPhaseType.java:906) [Thread[main,5,main]]: Start time: 2010/03/13 12:09:01.
        Mar 13, 2010 12:09:01 PM [Info]:  com.sap.sdt.ucp.phases.AbstractPhaseType.cleanup(AbstractPhaseType.java:908) [Thread[main,5,main]]: End time: 2010/03/13 12:09:01.
        Mar 13, 2010 12:09:01 PM [Info]:  com.sap.sdt.ucp.phases.AbstractPhaseType.cleanup(AbstractPhaseType.java:909) [Thread[main,5,main]]: Duration: 0:00:00.078.
        Mar 13, 2010 12:09:01 PM [Info]:  com.sap.sdt.ucp.phases.AbstractPhaseType.cleanup(AbstractPhaseType.java:910) [Thread[main,5,main]]: Phase status is error.

    Read the article

  • Poor NFS Performance: OpenFiler

    - by Safin09
    Good day everyone. I have an issue with OpenFiler, a Linux-based operating system that converts a computer into a SAN/NAS appliance. Here is the problem. In my environment we have two NetApp StoreVault 500 appliances that I normally perform backups to over an NFS share. There are two backup cron jobs that use ghettoVCB to back up two groups of VMs. One group is a pool of 3 VMs; this takes 13 minutes to complete. A second job backs up a pool of 5 VMs to the second StoreVault appliance and takes 2 hours. We then installed OpenFiler on an old server that has 2-core Xeon processors, with a software RAID 5 in place. When performing the same backups to an NFS OpenFiler share, the first backup job, which normally takes 13 minutes, takes around 4 hours. The second backup job, which normally takes 2 hours, takes almost 10 hours to complete. This is unacceptable, especially considering the strain placed on the host ESX server. I assumed that the CPU overhead of the software RAID 5 explained the long backup times. I then installed OpenFiler on a second server, an IBM x306 machine with a P4 Intel processor. This time no software RAID, or any RAID at all: a single 750GB hard drive containing the OS, with the rest of the disk used to back up VMs to an NFS share. I performed the first backup job of the pool of 3 VMs. This time the backup took 1.5 hours instead of 13 minutes! Is OpenFiler simply poor at being an NFS server? Has anyone else had these issues with OpenFiler?
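
    One experiment worth running before blaming OpenFiler outright: NFS servers that honor synchronous writes (the safe default on Linux) can be dramatically slower for this kind of workload than appliances that cache writes aggressively. A hedged comparison, assuming shell access to the export definition (the path and client name are placeholders):

        # /etc/exports -- 'async' acknowledges writes before they reach disk;
        # faster, but data can be lost if the server crashes mid-backup
        /mnt/vol0/backups  esxhost(rw,no_root_squash,async,no_subtree_check)

        exportfs -ra    # re-export after the change

    If async closes most of the gap, the bottleneck was write latency on the single disk / software RAID 5 rather than NFS or OpenFiler itself.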

    Read the article

  • Apache2/Shibboleth TCP connections stuck in CLOSE_WAIT

    - by RJT
    I run an Apache2 server which uses the Shibboleth daemon (shibd) as a federated authentication module. Certain server connections using Shibboleth seem to get stuck permanently in the CLOSE_WAIT state:

        tcp       38      0 blah.blah:57346      shib.server.:8443    CLOSE_WAIT
        tcp       38      0 blah.blah:45601      shib.server2:8443    CLOSE_WAIT
        tcp       38      0 blah.blah:41737      shib.server3:5057    CLOSE_WAIT

    From what I can find out, CLOSE_WAIT means that when the remote server disconnects, the local application is failing to close the connection as it should. I suspect shibd is responsible somehow. Needless to say, if enough CLOSE_WAIT connections accumulate, I have a problem. Trying to get rid of them by simply using /etc/init.d/networking restart does not work. In fact networking seems to refuse to close down and restart, and I get a SIOCADDRT: File exists error (i.e. networking is trying to start without having stopped first). Same problem with ifup -a. So I have two questions; one may be easy, and one harder:

      1. What's a good way to force networking to restart, and force whatever connections are stuck in CLOSE_WAIT to clear?
      2. Any ideas about how to fix Shibboleth and force the shibd module to behave?
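
    On question 1, a hedged note: CLOSE_WAIT sockets belong to the process that is failing to close() them, so restarting the network stack cannot clear them; only the owning process can. Identifying and bouncing that process is the usual route:

        # show which PID/program owns the stuck sockets
        sudo netstat -tonp | grep CLOSE_WAIT

        # if it is indeed shibd, restarting it releases the descriptors
        sudo /etc/init.d/shibd restart

    (The init script name is an assumption; it varies by distribution and Shibboleth package.)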

    Read the article

  • Configuration for a two machine ESXi cluster using VSA to present local storage to VMs

    - by MDMarra
    I'm designing a little vSphere 5 cluster for one of our remote sites. We have some IBM x3650s that have 6x 300GB 10K RPM drives in them, along with dual quad-core CPUs and 24GB RAM. Because we use HP P4500 G2s at our primary site, we have licenses available for HP P4000 VSAs, and I thought this would be the perfect opportunity to use them. Here is the basic design I want to accomplish: I want to run a P4000 VSA on each server and run them in a Network RAID-10 (Lefthand speak for network mirroring; think of it as RAID 1 across nodes, or as an active/active storage cluster). I will then present this storage to guests that will run on this mini-cluster. It will be managed by a vCenter Server at our main site. All connections will be GbE, with two dedicated to storage. Management and data will share a pair of connections, since I don't expect there to be high load; these servers are just there to provide directory services, DHCP, printing, etc. Does anyone see anything potentially wrong with this approach? Is this the best way to do this without adding additional dedicated storage heads? Are there any pitfalls in this design, besides the lack of dedicated data/management interfaces?

    Read the article
