Search Results

Search found 12836 results on 514 pages for 'host mechanic'.


  • Disable IPv6 on Debian VPS (Virtuozzo!)

    - by chris_l
    I have a Debian Lenny VPS that runs virtualized under Parallels/Virtuozzo. Currently the network interface doesn't have an IPv6 address - and that's good, because I don't have an ip6tables configuration. But I assume I could wake up one day and find that ifconfig shows an IPv6 address on the interface, because I have no control over the kernel or its modules - they're under the control of the hosting company. That would leave the server completely vulnerable to attacks over IPv6. What would be the best way to disable IPv6 (for the interface, or maybe for the entire host)? Usually I would simply disable the kernel module, but that's not possible in this case.

    Update: Maybe I should add that I can use iptables and everything else normally (I'm root on the VPS), but I can't make changes to the kernel or load kernel modules because of the way Virtuozzo works (shared kernel). lsmod always returns nothing, and I can't call ip6tables -L (it says that I need to insmod, or that the kernel would have to be upgraded). I don't think changes to /etc/modprobe.d/aliases would have any effect - or would they?

    Networking config: I thought maybe I could turn IPv6 off from /etc/network/... Is that possible? I also see that they've set up avahi, so I should probably change the setting use-ipv6=yes to "no" in /etc/avahi/avahi.conf(?). Has anybody already tried this solution, and can I rely on it? I don't know much about avahi - would it actually have any effect, or could it even bring my entire interface down once IPv6 is enabled by the kernel?
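
    A minimal sketch of the sysctl approach, assuming the container is even allowed to set these knobs - on a Virtuozzo/OpenVZ shared kernel they may be missing or read-only, so this is something to test rather than rely on (venet0 is a placeholder for the actual interface name):

      # Try to turn IPv6 off from inside the container via sysctl
      echo 'net.ipv6.conf.all.disable_ipv6 = 1'     >> /etc/sysctl.conf
      echo 'net.ipv6.conf.default.disable_ipv6 = 1' >> /etc/sysctl.conf
      sysctl -p

      # Fallback: if the knob is absent, at least strip any IPv6 address
      # that ever shows up on the interface
      ip -6 addr flush dev venet0 2>/dev/null || true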

    Read the article

  • Why should one have a secondary DNS server?

    - by Sam Levin
    I'm very confused. I basically understand how DNS works. Here's an example that helps illustrate what I'm having trouble understanding. Right now, I run a small web server. I use my provider's DNS manager, so I don't have a DNS server hosted on the machine. Let's say for a second that I don't use my host's DNS, and I decide to set up a DNS server on my own server instead. Hypothetical scenario: my (entire) server goes down - DNS included. Why do I need backup DNS? If the server is down, who cares if the DNS server is down too? Even if DNS were still up (hosted somewhere other than the crashed server), it couldn't help anyway, since the server it points to would still be down. Is the point of having secondary DNS to be able to change the IP addresses that your DNS server points to, so that if your webserver was down you could redirect traffic to a backup? How would you switch to the secondary provider in the event that your main DNS provider becomes unavailable? Is a backup DNS server basically up all the time? How is it configured? Is it just an exact clone of the DNS server you would have on your server? Do they run simultaneously? Hopefully someone can see what I'm hung up on and provide some guidance. Thanks
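
    For illustration, a secondary DNS server is normally just a second authoritative server that is up all the time: it pulls a copy of the zone from the primary via a zone transfer and answers queries in parallel. A hedged BIND named.conf sketch (all names and IPs are placeholders):

      // On the primary (master): allow the secondary to transfer the zone
      zone "example.com" {
          type master;
          file "/etc/bind/db.example.com";
          allow-transfer { 192.0.2.53; };   // secondary's IP (placeholder)
      };

      // On the secondary (slave): keep a synced copy and serve it
      zone "example.com" {
          type slave;
          file "/var/cache/bind/db.example.com";
          masters { 192.0.2.1; };           // primary's IP (placeholder)
      };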

    Read the article

  • What could cause a WMV to not play to completion in a browser?

    - by Ty W
    A realtor has had videos created for a community she is selling homes for, the people who made the videos gave them to us in WMV format. I can play these videos without any problem in Windows Media Player, VLC, and Quicktime (via Flip4Mac). I can play the videos from their location at videohomeguide.com in my browser without any trouble. However when I upload the files to our server the video stops at about the 1 minute mark in Safari and FireFox on Mac OS X Snow Leopard. I'm not sure if Windows browsers have the same issue because they are loaded using Windows Media Player.

      http://carolepaul.com/images/uploads/cottageslsjamestown.wmv <- our server, will fail at 1:09ish.
      http://www.videohomeguide.com/media/cottageslsjamestown.wmv <- should play to completion (3:27ish)

    The files generate the same MD5 hash on my desktop and on our server. I used WGET to transfer the files, always downloading from videohomeguide.com. Since the files are identical and are playable using VLC/WMP/Quicktime, and playable in the browsers from videohomeguide.com, it seems to me that it is some sort of server config... maybe incorrect headers sent to the browsers? Here are the headers sent and received by FireFox on OS X for http://carolepaul.com/images/uploads/cottageslsjamestown.wmv:

      GET /images/uploads/cottageslsjamestown.wmv HTTP/1.1
      Host: carolepaul.com
      User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2
      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
      Accept-Language: en-us,en;q=0.5
      Accept-Encoding: gzip,deflate
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
      Keep-Alive: 115
      Connection: keep-alive

      HTTP/1.1 200 OK
      Date: Mon, 29 Mar 2010 20:43:20 GMT
      Server: Apache/1.3.41 (Unix) PHP/5.2.6 FrontPage/5.0.2.2635 mod_psoft_traffic/0.2 mod_ssl/2.8.31 OpenSSL/0.9.8b
      Last-Modified: Wed, 02 Dec 2009 18:08:46 GMT
      Etag: "1e7919c-198eadc-4b16ad2e"
      Accept-Ranges: bytes
      Content-Length: 26798812
      Keep-Alive: timeout=10, max=200
      Connection: Keep-Alive
      Content-Type: video/x-ms-wmv
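
    One low-risk way to narrow this down is to compare what each server actually sends over HTTP, including how it answers range requests (media plugins often re-request parts of the file while streaming); a sketch using the URLs above:

      # Compare plain response headers from both servers
      curl -sI http://carolepaul.com/images/uploads/cottageslsjamestown.wmv
      curl -sI http://www.videohomeguide.com/media/cottageslsjamestown.wmv

      # Check whether the failing server honours a byte-range request the same way
      curl -s -o /dev/null -D - -H "Range: bytes=1000000-2000000" \
           http://carolepaul.com/images/uploads/cottageslsjamestown.wmv

      # Confirm the full body actually arrives over plain HTTP (not just via wget)
      curl -s -o /tmp/test.wmv http://carolepaul.com/images/uploads/cottageslsjamestown.wmv && ls -l /tmp/test.wmv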

    Read the article

  • High latency issue for web service call from amazon aws ec2 to local server

    - by SibzTer
    We have a legacy web application that is running in our data center on premises in Houston. We have developed a new .NET 4 based web application in order to provide new features to customers. The new web application is hosted in the Amazon AWS EC2 environment (N. Virginia region, us-east-1b zone). In order to integrate seamlessly with the legacy application, the new web application makes web service calls to retrieve data. We are seeing an unusually high latency, on the order of 5+ seconds, for these web service calls. The exact same web service call returns in less than a second on our local PCs (which makes sense given physical proximity to the actual server). The weird part is that we have developers in California who also get the same milliseconds response time. We are testing the web service response using third-party tools such as SoapUI and Google Chrome extensions such as Advanced REST Client, Postman REST Client, etc. As if this wasn't weird enough, we have noticed the same low latency from certain other EC2 instances while testing, which are in the same region and availability zone as well. If we experienced the high latency consistently from all the EC2 instances I could understand it, but there is something else going on. Comparing the various stats and results between the low-latency and high-latency EC2 servers does not show any significant differences: ping (constant 40ms), tracert, winmtr, etc. We have instances that are in the VPC as well, so I tried both the public and private IP address of the web service host server, and that didn't make a difference to the above results either. We need to resolve this latency issue, as it is causing the resulting web pages to load very slowly (almost 15+ seconds, which is simply unacceptable). The EC2 instances run Windows Server Datacenter 64-bit. Let me know if there is any other info I can provide to help diagnose this.
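
    To see where the 5+ seconds actually go from a slow instance, the call can be broken into DNS / connect / first-byte phases; a hedged sketch with curl (the URL is a placeholder for the real web service endpoint) to run from both a "fast" and a "slow" instance and compare:

      curl -s -o /dev/null -w \
        "dns: %{time_namelookup}s  connect: %{time_connect}s  firstbyte: %{time_starttransfer}s  total: %{time_total}s\n" \
        "http://legacy.example.com/Service.asmx?op=GetData"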

    Read the article

  • What's wrong with this HTTP POST request?

    - by bigboy
    I'm trying to fuzz a server using the Sulley fuzzing framework. I observe the following stream in Wireshark. The error talks about a problem with JSON parsing, however, when I try the same HTTP POST request using Google Chrome's Postman extension, it succeeds. Can anyone please explain what could be wrong about this HTTP POST request? The JSON seems valid.

      POST /restconf/config HTTP/1.1
      Host: 127.0.0.1:8080
      Accept: */*
      Content-Type: application/yang.data+json

      { "toaster:toaster" : { "toaster:toasterManufacturer" : "Geqq", "toaster:toasterModelNumber" : "asaxc", "toaster:toasterStatus" : "_." }}

      HTTP/1.1 400 Bad Request
      Server: Apache-Coyote/1.1
      Content-Type: */*
      Transfer-Encoding: chunked
      Date: Sat, 07 Jun 2014 05:26:35 GMT
      Connection: close

      152
      <?xml version="1.0" encoding="UTF-8" standalone="no"?>
      <errors xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf">
        <error>
          <error-type>protocol</error-type>
          <error-tag>malformed-message</error-tag>
          <error-message>Error parsing input: Root element of Json has to be Object</error-message>
        </error>
      </errors>
      0
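
    One difference worth checking is whether the fuzzer actually puts a Content-Length (or Transfer-Encoding) header on the wire for the body - without one the server may see an empty body, which would match the "Root element of Json has to be Object" complaint. A hedged curl reproduction of the same request, to capture in Wireshark and diff against Sulley's traffic:

      # curl adds Content-Length for -d automatically; compare this capture with what Sulley sends
      curl -v -X POST http://127.0.0.1:8080/restconf/config \
           -H 'Content-Type: application/yang.data+json' \
           -H 'Accept: */*' \
           -d '{ "toaster:toaster" : { "toaster:toasterManufacturer" : "Geqq", "toaster:toasterModelNumber" : "asaxc", "toaster:toasterStatus" : "_." }}'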

    Read the article

  • Launching Installer Via Powershell and WinRM and Nothing Happens

    - by Nick DeMayo
    I'm currently working on a Powershell script to run some Microsoft Hotfix installers remotely on several Windows Server 2008 R2 servers that I manage. Basically, the script copies all the appropriate files up to the server, and then runs the installer via Invoke-Command, like so:

      function InstallCU {
          Write-Host "Installing June 2013 CU..."
          Invoke-Command -ComputerName $ServerName -ScriptBlock {
              Start-Process "c:\aaa\prjcusp2\ubersrvprj2010-kb2817530-fullfile-x64-glb.exe" -ArgumentList "/passive"
          }
      }

    If I run the "Start-Process" command locally on the server, the installer runs properly. However, when trying to run it remotely, nothing happens (actually, I can see the installer start up in Task Manager, but it closes a couple seconds later and doesn't run). I've attempted giving the Invoke-Command -Credentials, I've turned off UAC on the server, and I've ensured that my WinRM settings (running 'winrm quickconfig' and setting TrustedHosts to *) are correct. I've also tried having the Invoke-Command script run a local Powershell script to run the installer and changing the Argument from '/passive' to 'quiet' (in case it can't remotely launch something that has a UI), but again, no dice. Is there anything else I can try, or am I just not going to be able to do this?
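
    As a diagnostic sketch, it may help to make the remote script block wait for the installer and return its exit code, so the early exit at least becomes visible; paths and server name are taken from the question, while the "/norestart" switch and the exit-code logging are assumptions about this particular hotfix package:

      function InstallCU {
          Write-Host "Installing June 2013 CU..."
          Invoke-Command -ComputerName $ServerName -ScriptBlock {
              # -Wait keeps the remote session alive until the installer finishes;
              # -PassThru returns the process object so ExitCode can be read.
              $p = Start-Process "c:\aaa\prjcusp2\ubersrvprj2010-kb2817530-fullfile-x64-glb.exe" `
                                 -ArgumentList "/passive", "/norestart" `
                                 -Wait -PassThru
              Write-Output "Installer exit code: $($p.ExitCode)"
          }
      }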

    Read the article

  • How to create RPM for 32-bit arch from a 64-bit arch server?

    - by Gnanam
    Our production server is running CentOS 5, 64-bit arch. Because there is currently no RPM available for the latest SQLite version (v3.7.3), I created an RPM using rpmbuild for the very first time, following the instructions given here. I was able to successfully create an RPM for the 64-bit (x86_64) architecture, but I am not able to create an RPM for the 32-bit (i386) architecture. It failed with the following errors:

      ...
      + ./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=i386-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info --enable-threadsafe
      checking for a BSD-compatible install... /usr/bin/install -c
      checking whether build environment is sane... yes
      checking for gawk... gawk
      checking whether make sets $(MAKE)... yes
      checking for style of include used by make... GNU
      checking for x86_64-redhat-linux-gnu-gcc... no
      checking for gcc... gcc
      checking for C compiler default output file name... configure: error: C compiler cannot create executables
      See `config.log' for more details.
      error: Bad exit status from /var/tmp/rpm-tmp.73141 (%build)

      RPM build errors:
          Bad exit status from /var/tmp/rpm-tmp.73141 (%build)

    This is the command I called: rpmbuild --target i386 -ba sqlite.spec

    My question is, how do I create an RPM for a 32-bit arch from a 64-bit arch server?
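
    A common approach on a 64-bit CentOS 5 build host, assuming the 32-bit C library and compiler support packages are available (verify the exact package names with yum), is to install them and run the build under an i386 personality so configure can compile and run 32-bit test programs; a sketch:

      # 32-bit C library headers and gcc support
      yum install glibc-devel.i386 libgcc.i386

      # Run rpmbuild under a 32-bit personality
      setarch i386 rpmbuild --target i386 -ba sqlite.spec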

    Read the article

  • Virtualhost setup for Ruby on Rails application (mod passenger)

    - by Ingo86
    Hi all, I'm trying to install Redmine under Apache. The Apache server works on a local network. My Apache setup consists of a single virtual host. I can get into different directories simply by using the corresponding path: http://ip_address/folder_of_the_project_1. How can I set up the virtual host to make Redmine work in this situation? Here is my current virtual host setup:

      NameVirtualHost *
      <VirtualHost *>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www/
          RailsBaseURI /redmine
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /var/www/>
              Options FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>
          <Directory /var/www/redmine/public>
              Options FollowSymLinks
              AllowOverride None
              Order allow,deny
              Allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog /var/log/apache2/error.log
          # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
          LogLevel warn
          CustomLog /var/log/apache2/access.log combined
          ServerSignature On
          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
      </VirtualHost>

    Thank you, Ingo86
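
    For reference, Passenger's usual way of serving a Rails app under a sub-URI is to expose only the app's public/ directory inside the DocumentRoot (typically via a symlink) and point RailsBaseURI at that path; a hedged sketch, assuming the Redmine code itself lives outside the DocumentRoot at a placeholder path such as /opt/redmine:

      # Prerequisite (shell): symlink the app's public/ dir into the DocumentRoot, e.g.
      #   ln -s /opt/redmine/public /var/www/redmine
      # Then, inside the existing <VirtualHost *>:
      DocumentRoot /var/www
      RailsBaseURI /redmine
      <Directory /var/www/redmine>
          Options FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
      </Directory>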

    Read the article

  • Installing Tomcat on CentOS 5

    - by andybaird
    Disclaimer: I am not a server admin, I am a Windows user that has led a life of sinful installation wizards and drag and drop. I'm attempting to install Tomcat on CentOS 5, hosted on a MediaTemple dedicated virtual server. I basically followed this guide:

      Installed jpackage and configured the yum.repo.d jpackage file to set enabled=1
      Used yum to install java (yum install java)
      Downloaded the binary distribution of Tomcat with "wget http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.14/bin/apache-tomcat-6.0.14.tar.gz"
      Set JAVA_HOME to point at the JDK location I found with "export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/"
      gunzip/untar the Tomcat files and run ./startup.sh to start the Tomcat server

    That is supposed to put the Tomcat server at myserver.com:8080 - however, I just get a "could not contact host" error when I try to browse to it (or when I try 'curl localhost:8080' from SSH). After I type ./startup.sh, here is the console output:

      [root@myserver bin]# ./startup.sh
      Using CATALINA_BASE:   /root/apache-tomcat-6.0.14
      Using CATALINA_HOME:   /root/apache-tomcat-6.0.14
      Using CATALINA_TMPDIR: /root/apache-tomcat-6.0.14/temp
      Using JRE_HOME:        /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/
      [root@myserver bin]#

    Is there a step I have missed here?

    Edit: I've now discovered, by looking at the log, that the following error is occurring:

      Error occurred during initialization of VM
      Could not reserve enough space for object heap
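
    The "Could not reserve enough space for object heap" message usually means the JVM's default heap request is larger than the memory the VPS will grant; a hedged sketch that caps the heap before starting Tomcat (the sizes are placeholders to adjust to the container's memory limit; catalina.sh picks up JAVA_OPTS from the environment):

      # Give the JVM an explicit, small heap so it fits inside the VPS memory limit
      export JAVA_OPTS="-Xms64m -Xmx256m"
      ./startup.sh

      # Then confirm Tomcat is actually listening
      curl -I http://localhost:8080/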

    Read the article

  • asterisk extensions.conf & sip.conf

    - by Josh
    I'm trying to get my dialplan to work. When I call, the only thing I get is a dial tone to enter an extension - no Background(thanks-calling) is played. When extension 123 is dialed, a busy signal is triggered and the Asterisk CLI freezes. Any help will be appreciated. Conf files below.

      ; PSTN on sip.conf
      [pstn]
      type=friend
      host=dynamic
      context=pstn
      username=pstn
      secret=password
      nat=yes
      canreinvite=no
      dtmfmode=rfc2833
      qualify=yes
      insecure=port,invite
      disallow=all
      allow=ulaw

      ; PSTN on extensions.conf
      [pstn]
      exten => s,1,Answer
      exten => s,2,Wait,2
      exten => s,4,DigitTimeout,5
      exten => s,5,ResponseTimeout,10
      exten => s,6,Background(thanks-calling)
      exten => 0,1,Goto(incoming,123,1)

      ; (Member Services)
      [incoming]
      exten => 123,1,NoOP(${CALLERID}) ; show the caller ID info in the console
      exten => 123,n,Ringing()
      exten => 123,n,Answer()
      exten => 123,n,Playback(silence/1)
      exten => 123,n,Playback(connecting1)
      exten => 123,n,Wait(3)
      exten => 123,n,Dial(SIP/line1,60)
      exten => 123,n,Congestion
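
    One thing worth checking: Asterisk stops walking a numeric priority sequence when it hits a gap, so the jump from priority 2 to 4 in [pstn] would end the call flow before Background() is ever reached. A sketch of the same steps using "n" priorities so no gap can occur (untested, and the timeout syntax shown depends on the Asterisk version in use):

      ; PSTN on extensions.conf -- same steps, but with sequential "n" priorities
      [pstn]
      exten => s,1,Answer()
      exten => s,n,Wait(2)
      exten => s,n,Set(TIMEOUT(digit)=5)      ; replaces DigitTimeout on newer versions
      exten => s,n,Set(TIMEOUT(response)=10)  ; replaces ResponseTimeout on newer versions
      exten => s,n,Background(thanks-calling)
      exten => 0,1,Goto(incoming,123,1)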

    Read the article

  • Vista WHS Client stopped resolving local names

    - by andrewcr
    I’m running Windows Home Server PP2 in my home, with 3 client computers: two XP and one Vista. I have a router that provides my local DHCP and the server has a static IP address. The other day the Vista machine hung, and on reboot stopped resolving local names. It will show the green home server client icon in the system tray, but if I attempt to log in to the console, I get a “This computer cannot connect to your home server” message. If I ping the server name from the command line, it does not resolve, and gives a “could not find host” message. Oddly enough, if I browse the network, I can see the server, but double clicking on it fails. The other machines on the local network have no problems seeing the server, and the Vista machine has no problems resolving names from the internet, it just can’t see any local machines. I’m aware that I can work around this by adding entries to my HOSTS file (it does work), but I’d like this to work the way it’s “supposed” to. I’m an experienced computer user and developer, but not a networking whiz. Can anyone tell me how local name resolution is supposed to work in my environment and/or suggest ways to troubleshoot this? Thanks, Andy
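
    For context, in a workgroup like this local names are usually resolved by NetBIOS broadcast (or LLMNR on Vista) rather than DNS, so a few commands run on the Vista client can show which stage is failing; a hedged sketch, where SERVER stands in for the actual Home Server name and 192.168.x.x for its static address:

      ipconfig /all        (check node type, DNS and WINS settings on the NIC)
      nbtstat -c           (show the cached NetBIOS name table)
      nbtstat -a SERVER    (query the server's NetBIOS names directly)
      ping SERVER          (resolution via the normal resolver path)
      ping 192.168.x.x     (confirm plain IP connectivity to the server)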

    Read the article

  • Cannot perform a PECL installation

    - by Petrusa
    I have been trying to do a few PECL installations, but all of them return the same type of error - something related to timezones? I'm running RedHat x86_64 ES5. Attempting to install geoip-1.0.7:

      root@server [~]# pecl install geoip-1.0.7
      downloading geoip-1.0.7.tgz ...
      Starting to download geoip-1.0.7.tgz (9,416 bytes)
      .....done: 9,416 bytes

      Warning: strtotime(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/Chicago' for 'CST/-6.0/no DST' instead in PEAR/Validate.php on line 489

      Warning: strtotime(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/Chicago' for 'CST/-6.0/no DST' instead in /usr/local/lib/php/PEAR/Validate.php on line 489

      3 source files, building
      running: phpize
      Configuring for:
      PHP Api Version:         20090626
      Zend Module Api No:      20090626
      Zend Extension Api No:   220090626
      building in /var/tmp/pear-build-root/geoip-1.0.7
      running: /root/tmp/pear/geoip/configure
      checking for egrep... grep -E
      checking for a sed that does not truncate output... /bin/sed
      checking for cc... cc
      checking for C compiler default output file name... a.out
      checking whether the C compiler works... configure: error: cannot run C compiled programs.
      If you meant to cross compile, use `--host'.
      See `config.log' for more details.
      ERROR: `/root/tmp/pear/geoip/configure' failed

    What is going on? Anyone that could assist please...
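
    Two separate things show up in that output, for what it's worth as a sketch: the strtotime() warnings only mean date.timezone is unset in php.ini, while the real failure is "cannot run C compiled programs", which is often caused by the build directory living on a filesystem mounted noexec. Both are quick to check (paths below are the usual locations and may differ on this box):

      # 1) Silence the timezone warnings (cosmetic, not the build failure):
      #    set  date.timezone = "America/Chicago"  in php.ini
      php -i | grep 'Loaded Configuration File'

      # 2) Check whether /tmp or /var/tmp is mounted noexec -- configure compiles
      #    a test binary there and then cannot execute it
      mount | grep -E ' /tmp | /var/tmp '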

    Read the article

  • How do I run a stable Windows XP kvm guest on Ubuntu 10.04?

    - by Jean-Paul Calderone
    I have three Windows XP guests running on a recently upgraded 64-bit Ubuntu 10.04 system. Occasionally (on the order of once every few days), one of the guests will become non-responsive and the kvm process on the host which is running that guest will start consuming 100% CPU. It will continue to do so until it is killed. When restarted, it will be fine for a while, and then the issue repeats. The kvm command line used to run all three guests is this:

      /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 1024 -smp 1 -name bigdog21vmxp1 \
        -uuid ea47ff84-125b-16f7-9a4d-a6d0d8bab46a \
        -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/bigdog21vmxp1.monitor,server,nowait \
        -monitor chardev:monitor \
        -localtime \
        -boot c \
        -drive file=/var/lib/libvirt/images/windowsxp-1.qcow2,if=ide,index=0,boot=on,format=qcow2 \
        -net nic,macaddr=54:52:00:02:06:0e,vlan=0,name=nic.0 \
        -net tap,fd=58,vlan=0,name=tap.0 \
        -chardev pty,id=serial0 \
        -serial chardev:serial0 \
        -parallel none \
        -usb \
        -usbdevice tablet \
        -vnc 127.0.0.1:1 \
        -k en-us \
        -vga cirrus \
        -soundhw es1370

    Why do the systems misbehave this way sometimes? And what configuration can I change in order to fix it? Or, if the problem is due to a bug in kvm, what is the process for isolating a kvm failure so that the developers have a chance of fixing it?
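
    As a starting point for isolating a spinning guest, a hedged sketch of host-side commands to capture state while the kvm process is stuck at 100% CPU (replace <pid> with the PID of the affected kvm process):

      # Syscall mix of the spinning process: is it looping in the kernel, on I/O, ...?
      strace -c -f -p <pid>

      # Hottest host-side functions while it spins (requires the linux-tools package)
      perf top -p <pid>

      # KVM event counters, if the kvm_stat tool from the qemu-kvm sources is available
      kvm_stat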

    Read the article

  • Sun T5120: Memory issue with 8GB DIMMs, "Unsupported memory configuration"

    - by watain
    I'm trying to upgrade RAM in a Sun T5120 server, replacing the 2GB DIMMs (Sun P/N: 501-7953-01) with 8GB DIMMs (Sun P/N: 511-1262-01). When bringing up the host system, I get the following errors on the ILOM:

      -> show faulty
      Target                    | Property          | Value
      --------------------------+-------------------+------------------------------------------
      /SP/faultmgmt/0           | fru               | /SYS/MB
      /SP/faultmgmt/0/faults/0  | timestamp         | Dec 14 15:29:42
      /SP/faultmgmt/0/faults/0  | sp_detected_fault | /SYS/MB/CMP0/MCU3 Forced fail (IBIST)
      /SP/faultmgmt/0/faults/1  | timestamp         | Dec 14 15:29:28
      /SP/faultmgmt/0/faults/1  | sp_detected_fault | /SYS/MB/CMP0/MCU2 Forced fail (IBIST)
      /SP/faultmgmt/0/faults/2  | timestamp         | Dec 14 15:29:13
      /SP/faultmgmt/0/faults/2  | sp_detected_fault | /SYS/MB/CMP0/MCU1 Forced fail (IBIST)
      /SP/faultmgmt/0/faults/3  | timestamp         | Dec 14 15:28:59
      /SP/faultmgmt/0/faults/3  | sp_detected_fault | /SYS/MB/CMP0/MCU0 Forced fail (IBIST)
      /SP/faultmgmt/1           | fru               | /SYS
      /SP/faultmgmt/1/faults/0  | timestamp         | Dec 14 15:29:42
      /SP/faultmgmt/1/faults/0  | sp_detected_fault | Dec 14 15:29:42 ERROR: Unsupported memory configuration

    As you can see, the only error message I get is "Unsupported memory configuration". Note that I'm absolutely sure that I placed the DIMMs in the correct slots. Might this issue be related to the voltage of the DIMMs? Any suggestions on how to troubleshoot this? This issue seems to be similar to the one explained at "Inserted disabled" while upgrading Sun Sparc t5120 memory. However, the given link http://docs.sun.com/source/820-4445-10/chapter1.html seems to point to a nonexistent page...

    Read the article

  • Centos 6.2 Fresh 'Basic Server' install networking issues

    - by RWC
    I've had a /29 provisioned on a network port for a server and am trying to at least configure the machine so I can ssh into it. It's CentOS 6.2 x64 with the Basic Server install. Currently I am not able to ping the gateway, or any address for that matter. For reference:

      Default Interface: em2
      Network ID: 66.*.*.0/29
      Gateway: 66.*.*.1
      Broadcast: 66.*.*.7

    Please see my following configs:

      # /etc/sysconfig/network-scripts/ifcfg-em2
      DEVICE=em2
      NM_CONTROLLED=yes
      ONBOOT=yes
      HWADDR=Not Important
      TYPE=Ethernet
      BOOTPROTO=none
      IPADDR=66.*.*.2
      PREFIX=29
      DNS1=8.8.8.8
      DNS2=8.8.4.4
      DEFROUTE=yes
      IPV4_FAILURE_FATAL=yes
      IPV6INIT=no
      NAME="System em2"
      NETMASK=255.255.255.248
      USERCTL=no

      $ route -n
      Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
      66.*.*.0        0.0.0.0         255.255.255.248  U     0      0    0   em2
      169.254.0.0     0.0.0.0         255.255.0.0      U     0      1003 0   em2
      0.0.0.0         66.*.*.1        0.0.0.0          UG    0      0    0   em2

      $ route
      Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
      66.*.*.0        *               255.255.255.248  U     0      0    0   em2
      link-local      *               255.255.0.0      U     0      1003 0   em2
      default         66.*.*.1        0.0.0.0          UG    0      0    0   em2

      $ cat /etc/sysconfig/network
      NETWORKING=yes
      HOSTNAME=excalibur.domain.com
      GATEWAY=66.*.*.1

    Keep in mind that I cannot even currently ping the gateway, which is quite confusing for me. My /etc/hosts is configured correctly with the *.2 address. I'm not concerned with getting all of the addresses on the /29 up and running yet, just one so I can at least ssh in. Thanks!

    Edit: Adding in ifconfig.

      $ ifconfig em2
      em2  Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:XX
           inet addr:66.*.*.2  Bcast:66.*.*.7  Mask:255.255.255.248
           inet6 addr:
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:5536 errors:0 dropped:0 overruns:0 frame:0
           TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:2599469 (2.4 MiB)  TX bytes:748 (748.0 b)
           Interrupt:48 Memory:dc000000-dc012800

      lo   Link encap:Local Loopback
           inet addr:127.0.0.1  Mask:255.0.0.0
           inet6 addr: ::1/128 Scope:Host
           UP LOOPBACK RUNNING  MTU:16436  Metric:1
           RX packets:34 errors:0 etc etc
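
    A hedged checklist of commands for narrowing this down from the server side - whether the port has link, whether the gateway answers ARP at all, and whether NetworkManager is fighting the static config (interface and gateway names as above):

      ethtool em2 | grep -i link          # is there physical link on this port?
      ip addr show em2                    # address really bound to em2?
      arping -I em2 -c 3 66.*.*.1         # does the gateway answer ARP?
      ip neigh show dev em2               # ARP table; "incomplete" entries point to an L2 problem
      service NetworkManager status       # NM_CONTROLLED=yes means NM owns the interface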

    Read the article

  • Physical-to-virtual for personal use

    - by Journeyman Geek
    One of my current projects involves reducing the number of old computers lying around. So far, the office systems have been downsized from 5 to 2. One system which had no data on it was dumped entirely, while three other identical boxes were consolidated into one (I swap hard drives as needed and they just work(tm)). I want to make further cuts. The physical systems in question all run Windows XP SP3 and are old Pentium 4s with 256 MB of RAM. Currently, I have VirtualBox running on another system. My intention is to migrate all services to that. We can assume that disk space and RAM on the host is not an issue. I'd rather not go with a client-server physical-to-virtual (P2V) method like what VirtualBox uses now. I have a few specific concerns as well - specifically whether I'd need to reactivate or sysprep for dissimilar hardware first. An offline tool for imaging (Windows or Linux) would not be an issue, since the drives could be mounted onto another system I have anyway, or I could use a LiveCD. What would be my options for P2V conversion, and what would I need to keep in mind when doing this?

    Read the article

  • Webservice randomly dropping connections - possibly due to firewall nondata events?

    - by adam
    I have a hosted webapp which requests data from a REST webservice in our office. Each page calls one (or several) webservices, which go from our host, via our firewall (a Watchguard Firebox), to a server in our office. All of a sudden, the app has dramatically slowed. We have determined that the webservice is timing out at random when called externally (it's fine when called within the office network). I'm pretty certain it's our connection which is dropping the webservice call, so I've written a quick php/curl script which calls the webservice over many iterations and shows the various timings. Below is an example output, showing both a failed and a successful call (with a 5-second timeout):

      #  http_code  namelookup_time  connect_time  pretransfer_time  starttransfer_time  total_time
      1  0          0.000096         0.0342        0.0000            0.0000              0.0342
      2  200        0.000052         0.0332        0.1327            0.1751              0.1752

    As per iteration #1 above, failed requests seem to be failing between connect and pretransfer. I'm not sure if this shows that the connection successfully passed the firewall, or could the firewall still cause an issue? Our firewall is showing a series of nondata event log messages for the relevant access rule. Our IT team tells me these are routine, although I can find no mention of them in Google. I'm not sure if this fits in between connect and pretransfer. Having eliminated the webservice server (by testing internally) and the live webapp (by testing different code on different external servers), I am left suspecting the connection to the office. Could the Firebox nondata events be causing a problem between connect and pretransfer?
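
    For reference, a minimal sketch of the kind of php/curl timing loop described above, using curl_getinfo() for the same fields; the endpoint URL, timeout and iteration count are placeholders to adjust:

      <?php
      $url = 'https://office.example.com/webservice/endpoint';  // hypothetical endpoint
      printf("#\thttp_code\tnamelookup\tconnect\tpretransfer\tstarttransfer\ttotal\n");

      for ($i = 1; $i <= 20; $i++) {
          $ch = curl_init($url);
          curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
          curl_setopt($ch, CURLOPT_TIMEOUT, 5);        // 5-second timeout, as in the question
          curl_exec($ch);
          $info = curl_getinfo($ch);
          printf("%d\t%d\t%f\t%f\t%f\t%f\t%f\n",
              $i,
              $info['http_code'],
              $info['namelookup_time'],
              $info['connect_time'],
              $info['pretransfer_time'],
              $info['starttransfer_time'],
              $info['total_time']);
          curl_close($ch);
      }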

    Read the article

  • Problem hosting server behind personal router

    - by Venkatesh Hodavdekar
    I recently bought the domain name lucidcontraptions.com and want to host the website from home. I have a D-Link router in which I have set up my personal virtual server correctly. My application server is Apache 2.2. The server works perfectly with the following settings:

      External IP: 207.172.xx.xx   Public port: 8888
      Internal IP: 192.168.xx.xx   Private port: 80

    If I go to 207.172.xx.xx:8888/ the server works perfectly and my Apache page shows up without any issues, both from inside the intranet and from outside. This setup would not work out for me, though, as I am not allowed port numbers in my DNS management. Now when I tweak the settings to the following:

      External IP: 207.172.xx.xx   Public port: 80
      Internal IP: 192.168.xx.xx   Private port: 80

    If I go to 207.172.xx.xx/ the server works perfectly and my Apache page shows up without any issues, BUT ONLY FROM INSIDE THE INTRANET. This page does not show up for people outside the intranet.

    Read the article

  • Switching from prefork MPM to worker MPM + php-fpm on ubuntu

    - by Shane
    All the tutorials I found cover a fresh install of worker MPM + PHP-FPM. Since my wordpress blog is already up and running with prefork MPM, correct me if I'm wrong about the simulated installation process. I'm on Ubuntu, and according to some tutorials the following lines would do all the tricks:

      apt-get install apache2-mpm-worker libapache2-mod-fastcgi php5-fpm php5-gd
      a2enmod actions fastcgi alias

    Then you set up the configuration in /etc/apache2/conf.d/php5-fpm.conf:

      <IfModule mod_fastcgi.c>
          AddHandler php5-fcgi .php
          Action php5-fcgi /php5-fcgi
          Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
          FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization
      </IfModule>

    After all these, restart:

      service apache2 restart && service php5-fpm restart

    Questions:
    1) Would it cause any downtime for the sites previously running under prefork MPM?
    2) Do you have to change any existing configuration files (php, mysql or apache2), or do they take effect immediately after the switch without you doing anything?
    3) I already have APC up and running; do you have to re-install/re-configure it after the switch?
    4) How do you find out if apache2 is working in worker MPM mode as expected?
    Thanks a lot!
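
    Regarding question 4, a sketch of the usual checks for which MPM Apache is actually running and whether PHP requests can reach php5-fpm:

      apache2ctl -V | grep -i mpm      # should report "Server MPM: Worker" after the switch
      apache2ctl -l                    # compiled-in modules; look for worker.c vs prefork.c
      netstat -tlnp | grep :9000       # is php5-fpm listening where FastCgiExternalServer points?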

    Read the article

  • CentOS networking BNX2

    - by james moore
    Having some trouble with my NICs. The server starts fine and I can wget/ping etc. However, when I run /etc/init.d/networking restart, I then receive the following error:

      Bringing up interface eth0:  bnx2: fw sync timeout, reset code = 1030009
      SIOCSIFFLAGS: Device or resource busy

    Consequently, the task fails. I have searched around on Google; users suggest disabling PNP in the BIOS, but I see no such option. Here is some system information:

      $ ethtool -i eth0
      driver: bnx2
      version: 2.0.8-rh
      firmware-version: bc 2.9.1

      $ uname -a
      Linux host 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:13 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

      $ /sbin/lspci | grep Broadcom
      04:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3)
      05:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5700 Gigabit Ethernet (rev 12)
      08:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3)
      09:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5700 Gigabit Ethernet (rev 12)

      $ lsmod | grep bnx2
      bnx2i                  81704  0
      cnic                  109512  1 bnx2i
      libiscsi2              77765  6 be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi_tcp
      scsi_transport_iscsi2  73945  8 be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2
      bnx2                  224780  0
      scsi_mod              199001 15 mpt2sas,scsi_transport_sas,mptctl,be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2,scsi_transport_iscsi2,scsi_dh,sg,libata,megaraid_sas,sd_mod

      $ rmmod bnx2; modprobe bnx2
      PCI: Enabling device 0000:05:00.0 (0158 -> 015a)
      PCI: Enabling device 0000:09:00.0 (0158 -> 015a)
      bnx2: fw sync timeout, reset code = 10300003

    Any help would be appreciated as I am at a loss.

    Read the article

  • BIND9 DNS Problems - Not resolving

    - by clone1018
    I host a BIND9 DNS server for my VirtualMin users to use, and it only resolves for about 75% of the people. It has been WELL over 1 week now. Here is a sample:

      $ttl 38400
      @                        IN  SOA  axxim.net. root.axxim.net. (
                                        1274031391
                                        10800
                                        3600
                                        604800
                                        38400 )
      @                        IN  NS   axxim.net.
      day7tech.com.            IN  A    96.226.216.37
      www.day7tech.com.        IN  A    96.226.216.37
      ftp.day7tech.com.        IN  A    96.226.216.37
      m.day7tech.com.          IN  A    96.226.216.37
      localhost.day7tech.com.  IN  A    127.0.0.1
      webmail.day7tech.com.    IN  A    96.226.216.37
      admin.day7tech.com.      IN  A    96.226.216.37
      mail.day7tech.com.       IN  A    96.226.216.37
      day7tech.com.            IN  MX   5 mail.day7tech.com.
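
    A few hedged checks that often narrow down "resolves for some people only": validate the zone syntax, confirm this server answers authoritatively when asked directly, and compare the delegation in the parent zone against the servers that actually answer (domain and server names as above, the zone file path is a placeholder):

      # Zone parses cleanly?
      named-checkzone day7tech.com /path/to/day7tech.com.zone

      # Does this BIND server answer authoritatively when queried directly?
      dig @axxim.net day7tech.com A +norecurse

      # Which nameservers does the parent delegate to, and do they all respond?
      dig day7tech.com NS +trace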

    Read the article

  • When pointing to new DNS servers is there any chance of E-mails being lost if the old E-mail hosting service is still up?

    - by LaserBeak
    I am changing web hosts and will be using the new host's mail servers instead of the old ones. I have created all the correctly named mailboxes on the new service, but I have not yet cut ties with the old webhost. My expectation is this: even if the new DNS values - which point to the new host's DNS servers and the respective SOA/zone file with the new MX records - have not yet propagated, and an E-mail is directed at the old host's mail servers as per the MX records in the zone the old hosting provider still holds, the E-mail would still arrive in the mailbox on the old host's mail servers. So I am just trying to reaffirm that I have got this right, and that it's essentially impossible for me to lose an E-mail, since it will hit either the old host's mail servers or the new ones. Also, is it possible to configure the same E-mail account to check and collect mail from different mail servers by entering multiple POP3 addresses? And if I choose to keep the old web host's mail hosting service as a backup, by specifying its MX records with a lower priority in the zone hosted by the new webhost, is it possible to have incoming E-mails sent to both servers by the mail daemon so I have two copies? Or is my only option having the primary mail server forward the E-mail somehow to the old mail server?
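
    For reference, a sketch of how primary and backup MX records would look in the new host's zone file - the lower preference value is tried first, and a backup MX normally only accepts and relays mail when the primary is unreachable rather than keeping an independent second copy (hostnames are placeholders):

      example.com.    IN  MX  10  mail.newhost.example.    ; preferred (lower value = higher priority)
      example.com.    IN  MX  20  mail.oldhost.example.    ; backup, used when the primary is unreachable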

    Read the article

  • Rsyslogd not listening on port

    - by amorfis
    I installed rsyslogd on an Ubuntu server and started it, and everything looks fine, but the port the server should listen on is never opened.

      ubuntu@node7:~$ sudo service rsyslog restart
      rsyslog stop/waiting
      rsyslog start/running, process 14114

    Netstat shows it is not listening:

      ubuntu@node7:~$ netstat -tlan
      Active Internet connections (servers and established)
      Proto Recv-Q Send-Q Local Address           Foreign Address         State
      tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
      tcp        0    320 172.22.0.17:22          10.8.8.38:61335         ESTABLISHED
      tcp6       0      0 :::22                   :::*                    LISTEN
      tcp6       0      0 :::2776                 :::*                    LISTEN
      tcp6       0      0 :::2777                 :::*                    LISTEN
      tcp6       0      0 172.22.0.17:2777        172.22.0.11:56554       ESTABLISHED
      tcp6       0      0 172.22.0.17:2776        172.22.0.11:39780       ESTABLISHED

    This is what /etc/rsyslog.conf looks like (most comments omitted):

      ubuntu@node7:~$ cat /etc/rsyslog.conf
      #################
      #### MODULES ####
      #################
      $ModLoad imuxsock # provides support for local system logging
      $ModLoad imklog   # provides kernel logging support (previously done by rklogd)
      $ModLoad imtcp
      $InputTCPServerRun 514

      ###########################
      #### GLOBAL DIRECTIVES ####
      ###########################
      $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
      $RepeatedMsgReduction on
      $WorkDirectory /var/spool/rsyslog
      $FileOwner syslog
      $FileGroup adm
      $FileCreateMode 0640
      $DirCreateMode 0755
      $Umask 0022
      $PrivDropToUser syslog
      $PrivDropToGroup adm
      $IncludeConfig /etc/rsyslog.d/*.conf

    In /etc/rsyslog.d/35-server-per-host.conf I have the following lines, and I suspect this could be the cause. What does it mean?

      # Stop processing of all non-local messages. You can process remote messages
      # on levels less than 35.
      :fromhost-ip,!isequal,"127.0.0.1" ~

    And if it is the cause, how could I change it so that the server listens, receives and logs remote messages?

    UPDATE: I commented out the suspected line, but it is still not listening on port 514.
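
    A hedged pair of checks that usually shows why the TCP listener never comes up: ask rsyslogd to validate the whole configuration (including everything pulled in by $IncludeConfig), then look at rsyslog's own startup messages for module-load or bind errors on port 514:

      # Dry-run config check: syntax errors or a failing "$ModLoad imtcp" show up here
      sudo rsyslogd -N1

      # rsyslog logs its own startup problems to syslog
      grep -i rsyslog /var/log/syslog | tail -20

      # Re-check the listener after a restart
      sudo netstat -tlnp | grep 514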

    Read the article

  • What are useful .screenrc settings?

    - by gyaresu
    Basically like some of my own that I've posted below. I'm looking for added functionality for the programme 'screen'. At the very least, have a look at the last line for a fantastic 'menu bar' at the bottom of a screen session.

      ## gyaresu's .screenrc 2008-03-25
      # http://delicious.com/search?p=screenrc

      # Don't display the copyright page
      startup_message off

      # tab-completion flash in heading bar
      vbell off

      # keep scrollback n lines
      defscrollback 1000

      # Doesn't fix scrollback problem on xterm because if you scroll back
      # all you see is the other terminals history.
      # termcapinfo xterm|xterms|xs|rxvt ti@:te@

      # These will let you use
      bind -c selectHighs 0 select 10  #these three commands are
      bind -c selectHighs 1 select 11  #added to the command-class
      bind -c selectHighs 2 select 12  #selectHighs
      bind -c selectHighs 3 select 13
      bind -c selectHighs 4 select 14
      bind -c selectHighs 5 select 15

      bind - command -c selectHighs    #bind the hyphen to
                                       #command-class selectHighs

      screen -t rtorrent 0 rtorrent
      #screen -t tunes 1 ncmpc --host=192.168.1.4 --port=6600  #was for connecting to MPD music server.
      screen -t stuff 1
      screen -t irssi 2 irssi
      screen -t dancing 4
      screen -t python 5 python
      screen -t giantfriend 6 these_are_ssh_to_server_scripts.sh
      screen -t computerrescue 7 these_are_ssh_to_server_scripts.sh
      screen -t BMon 8 bmon -p eth0
      screen -t htop 9 htop
      screen -t hellanzb 10 hellanzb
      screen -t watching 3
      #screen -t interactive.fiction 8
      #screen -t hellahella 8 paster serve --daemon /home/gyaresu/downloads/hellahella/hella.ini

      shelltitle "$ |bash"

      # THIS IS THE PRETTY BIT
      #change the hardstatus settings to give an window list at the bottom of the
      ##screen, with the time and date and with the current window highlighted
      hardstatus alwayslastline
      #hardstatus string '%{= mK}%-Lw%{= KW}%50>%n%f* %t%{= mK}%+Lw%< %{= kG}%-=%D %d %M %Y %c:%s%{-}'
      hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B} %d/%m %{W}%c %{g}]'

    Read the article

  • How to serve Rails application with Passenger/Apache without domain name?

    - by grifaton
    I am trying to serve a Rails application using Passenger and Apache on a Ubuntu server. The Passenger installation instructions say I should add the following to my Apache configuration file - I assume this is /etc/apache2/httpd.conf:

      <VirtualHost *:80>
          ServerName www.yourhost.com
          DocumentRoot /somewhere/public   # <-- be sure to point to 'public'!
          <Directory /somewhere/public>
              AllowOverride all            # <-- relax Apache security settings
              Options -MultiViews          # <-- MultiViews must be turned off
          </Directory>
      </VirtualHost>

    However, I do not yet have a domain pointing at my server, so I'm not sure what I should put for the ServerName parameter. I have tried the IP address, but when I do that, restarting Apache gives

      apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
      [Sun Jan 17 12:49:26 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
      apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
      [Sun Jan 17 12:49:36 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results

    and pointing the browser at the IP address gives a 500 Internal Server Error. The closest I have got to something sensible is with

      <VirtualHost efate:80>
          ServerName efate
          DocumentRoot /root/jpf/public
          <Directory /root/jpf/public>
              AllowOverride all
              Options -MultiViews
          </Directory>
      </VirtualHost>

    where "efate" is my server's host name. But now pointing my browser at the server's IP address just gives a page saying "It works!" - presumably this is a default page, but I'm not sure where this is being served from. I might be wrong in thinking that the reason I have been unable to get this to work is related to not having a domain name. This is the first time I have used Apache directly - any help would be most gratefully received!
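
    One hedged way to serve the app before a domain exists is to keep the VirtualHost on *:80 (matching a NameVirtualHost *:80 line) and give ServerName a throwaway value: with no other enabled site, Apache hands requests for the bare IP to the first *:80 vhost, so the default "It works!" site (usually /etc/apache2/sites-enabled/000-default) may need to be disabled. A sketch with placeholder names and paths:

      # /etc/apache2/sites-available/myapp   (placeholder site name)
      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerName myapp.local               # placeholder; any name works until DNS exists
          DocumentRoot /home/deploy/jpf/public # kept outside /root, which the Apache user typically cannot read
          <Directory /home/deploy/jpf/public>
              AllowOverride all
              Options -MultiViews
          </Directory>
      </VirtualHost>

    Enabling it would then look something like: a2dissite default && a2ensite myapp && /etc/init.d/apache2 reload (assuming the stock Debian/Ubuntu Apache layout).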

    Read the article
