Search Results

Search found 12836 results on 514 pages for 'host mechanic'.


  • Setting up a home server - what to use? (ZFS vs btrfs, BSD vs Linux, misc other requirements)

    - by monch1962
    I need to get all our home content off individual machines and onto a central server. What I'd like to have is the metaphorical "server under the stairs".

    Stuff we need:
    - Expandable storage. I want to be able to add extra discs as we go along, with minimal maintenance required. Currently we've got about 3 TB of files we need to host, and that's likely to grow by another terabyte every 6-12 months based on recent history, so I need to be able to add additional discs with minimal pain.
    - Storage for all the media (i.e. photos, video, music) we have, plus services to serve the various devices in the house for playback (e.g. DAAP so we can play stuff through iTunes, ccxstream so we can play stuff over XBMC). DAAP and ccxstream are needed now, but we also need to support new standards as they emerge (so a closed-box solution isn't going to work).
    - RAID 5, or something broadly equivalent (e.g. RAID-Z)
    - BitTorrent client
    - ssh, NFS, Samba access
    - Snapshot capability (as in ZFS), so we can snapshot individual file systems regularly and roll back when my kids delete their school assignments the day before they're due...
    - Ability to recover quickly from power outages (it's not unusual for us to have power outages that last longer than our UPS's batteries)
    - FOSS software
    - A modern distributed version control system running on the box, such as Mercurial

    Stuff I'd like to have on the server, but can live without:
    - PVR capability, so I could record TV to the box
    - Web server. We currently run a small web server on a very old box, and I'd ideally like to turn the old box off and move the content to the new server just to save some electricity
    - Nagios + mrtg

    I've been looking at using an Eee Box as the server, primarily because I can get them cheap and they don't consume much power. The choice of OS and file system is more difficult. From what I've found:
    - I've got most experience with various Linux distros, but am happy to use another Unix
    - FreeBSD and OpenSolaris seem to be the best choices for hosting ZFS
    - OpenSolaris' hardware support is nowhere near as good as e.g. Ubuntu's
    - btrfs, while looking very good, doesn't seem ready for prime time yet
    - ZFS doesn't let you (easily?) add new discs to a RAID 5 or RAID-Z
    - Reading around, it seems that ZFS is a bit short of tools for recovering lost data

    At the moment, I'm leaning towards running FreeNAS+ZFS, but I'm concerned about the requirement to add new discs on a fairly regular basis to an existing RAID-Z. Can anyone provide some recommendations, or share experiences? Thanks in advance
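
    For reference, a minimal sketch of the ZFS snapshot and expansion workflow in question (pool and device names are illustrative, not from the post):

        # create a RAID-Z pool and per-use filesystems
        zpool create tank raidz da1 da2 da3
        zfs create tank/media
        # snapshots are cheap; rollback recovers the deleted homework
        zfs snapshot tank/media@2010-03-31
        zfs rollback tank/media@2010-03-31
        # a single disc cannot be added to an existing raidz vdev, but the
        # pool can be grown by adding another whole raidz vdev:
        zpool add tank raidz da4 da5 da6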

    Read the article

  • SMTP for multiple domains on virtual interfaces

    - by Pawel Goscicki
    The setup is like this (Ubuntu 9.10):

        eth0    1.1.1.1  name.isp.com
        eth0:0  2.2.2.2  example2.com
        eth0:1  3.3.3.3  example3.com

    example2.com and example3.com are web apps which need to send emails to their users. 2.2.2.2 points to example2.com and vice versa (A/PTR); MX is Google, which handles all incoming mail. 3.3.3.3 points to example3.com and vice versa (A/PTR); MX is Google, which handles all incoming mail.

    Requirements:
    - Local delivery must be disabled (mail must be delivered to the server specified by MX), so that the following works (note that there is no local user bob on the machine, but there is an existing bob email user): echo "Test" | mail -s "Test 6" [email protected]
    - I need to be able to specify from which IP/domain name the email is delivered when sending an email.

    I fought with sendmail, with not much luck. Here's some debug info:

        sendmail -d0.12 -bt < /dev/null
        Canonical name: name.isp.com
        UUCP nodename: host
        a.k.a.: example2.com
        a.k.a.: example3.com
        ...

    Sendmail always uses the canonical name (taken from eth0). I've found no way for it to select one of the a.k.a. names instead, and it uses the canonical name for sending email:

        echo -e "To: [email protected]\nSubject: Test\nTest\n" | sendmail -bm -t -v
        [email protected]... Connecting to [127.0.0.1] via relay...
        220 name.isp.com ESMTP Sendmail 8.14.3/8.14.3/Debian-9ubuntu1; Wed, 31 Mar 2010 16:33:55 +0200; (No UCE/UBE) logging access from: localhost(OK)-localhost [127.0.0.1]
        >>> EHLO name.isp.com

    I'm OK with other SMTP solutions. I've looked briefly at nbsmtp, msmtp and nullmailer, but I'm not sure they can deal with disabling local delivery and selecting different domains when sending emails. I also know about spoofing the sender field by using mail -a "From: <[email protected]>", but it seems to be a half-solution (mails are still sent from the isp.com domain instead of the proper example2.com, so PTR records are unused and there's more risk of being flagged as spam/spammer).
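
    Since msmtp is already on the shortlist above, a hedged sketch of per-domain sending with it (host names and addresses are illustrative; only the account/host/from keywords are standard msmtp config):

        # ~/.msmtprc -- one account per sending domain
        account example2
            host smtp.example2.com
            from [email protected]
        account example3
            host smtp.example3.com
            from [email protected]

        # pick the sending identity per message:
        msmtp -a example2 [email protected] < message.txt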

    Read the article

  • Apache ProxyPass Missing Images

    - by EpicOfChaos
    I have an Apache server that sits in front of my Glassfish server. mydomain.com goes directly to my static files on Apache; then if you hit the subdomain forum.mydomain.com it goes to the Glassfish webapp forum/ at 127.0.0.1:8080/forum/. This proxy seems to work - it takes me to the web app - but all of the images are missing! Here is my virtual host setup:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.mydomain.com
            ServerAlias subdomain.mydomain.com mydomain.com
            DocumentRoot "/usr/local/apache/htdocs"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName forum.mydomain.com
            # any logging config, etc, that you need
            ProxyPass / http://127.0.0.1:8080/forum/
            ProxyPassReverse / http://127.0.0.1:8080/forum/
        </VirtualHost>

    And this is what I am seeing in the access log:

        [15/Jan/2012:03:28:02 +0000] "GET /forums/list.page HTTP/1.1" 200 12861
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/logo.jpg HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/styles/style.css?1326582403934 HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_recentTopics.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_search.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_members.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/styles/en_US.css?1326582403934 HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_groups.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/folder_big.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_login.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/whosonline.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/icon_mini_register.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/ping_session.jsp HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/folder_lock.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/folder.gif HTTP/1.1" 404 1075
        [15/Jan/2012:03:28:02 +0000] "GET /forum/templates/default/images/folder_new.gif HTTP/1.1" 404 1075

    Any ideas why the images are not working?
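
    A hedged observation on the config above: with ProxyPass / http://127.0.0.1:8080/forum/, a browser request for /forum/templates/default/images/logo.jpg gets forwarded as /forum/forum/templates/default/images/logo.jpg, which would 404 exactly as the log shows. Mapping the prefixes one-to-one avoids the doubled path segment:

        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/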

    Read the article

  • Why can I view my site over a 3G connection but not through my wifi?

    - by Jonathan
    So, I am sitting in my office with four computers on the same network and internet connection. Two of the computers can visit this particular website; the other two get a "Google Chrome could not find" message. I have tried FF and IE also, with the same problem. I can view the site 90% of the time on the two working computers, although the site seems slow and sometimes I also get the same errors as the other two computers. I have flushed the DNS, reset the router, and tested the site on other people's computers with success. Is this likely to be a site issue, an ISP issue, a hosting issue? Any advice is greatly appreciated.

    Here is the ping from the working machine:

        C:\Users\Jon>ping www.balihaicruises.com
        Pinging www.balihaicruises.com [208.113.173.102] with 32 bytes of data:
        Reply from 208.113.173.102: bytes=32 time=331ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=327ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=326ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=329ms TTL=47
        Ping statistics for 208.113.173.102:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 326ms, Maximum = 331ms, Average = 328ms

    Traceroute:

        Tracing route to www.balihaicruises.com [208.113.173.102] over a maximum of 30 hops:
          1     1 ms    17 ms     3 ms  192.168.1.1
          2    42 ms    37 ms    36 ms  180.254.224.1
          3    39 ms    47 ms    40 ms  180.252.1.69
          4    36 ms   616 ms    57 ms  61.94.115.221
          5    84 ms    76 ms    80 ms  180.240.191.98
          6    73 ms    80 ms    72 ms  180.240.191.97
          7   157 ms   143 ms   116 ms  180.240.190.82
          8   115 ms   113 ms   120 ms  ae1-123.hkg11.ip4.tinet.net [183.182.80.93]
          9   331 ms   332 ms   335 ms  xe-3-2-1.was14.ip4.tinet.net [89.149.184.30]
         10   327 ms   330 ms   331 ms  internap-gw.ip4.tinet.net [77.67.69.254]
         11   437 ms   415 ms   350 ms  border10.pc2-bbnet2.wdc002.pnap.net [216.52.127.73]
         12   322 ms   823 ms   398 ms  dreamhost-2.border10.wdc002.pnap.net [216.52.125.74]
         13   328 ms   336 ms   326 ms  ip-208-113-156-4.dreamhost.com [208.113.156.4]
         14   326 ms   328 ms   336 ms  ip-208-113-156-14.dreamhost.com [208.113.156.14]
         15   327 ms   331 ms   333 ms  apache2-udder.crisp.dreamhost.com [208.113.173.102]

    And then for the machine that doesn't work:

        C:\Users\Microsoft>ping www.balihaicruises.com
        Ping request could not find host www.balihaicruises.com. Please check the name and try again.

        C:\Users\Microsoft>tracert www.balihaicruises.com
        Unable to resolve target system name www.balihaicruises.com.
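
    The failing machine cannot resolve the name at all, so a hedged first step is to compare resolvers directly on that machine (8.8.8.8 is just an example public resolver):

        nslookup www.balihaicruises.com            REM default (router/ISP) resolver
        nslookup www.balihaicruises.com 8.8.8.8    REM bypass it
        ipconfig /displaydns | findstr balihai     REM inspect the local DNS cache

    If the second lookup succeeds where the first fails, the router's DNS forwarder or the ISP resolver is the suspect rather than the site or host.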

    Read the article

  • Updating the $PATH for running a command through SSH with an LDAP user account

    - by Guillaume Bodi
    Hi all, I am setting up a Mac OS X 10.6 server to host Git repositories, so we need to push commits to the server through SSH. The server has only an admin account and takes its user list from an LDAP server. Since access comes in through a non-interactive shell, git operations fail: the git executables are not in the default path. As the users are network users, they do not have a local home folder, so I cannot use a ~/.bashrc or similar solution. I browsed several articles here and there but could not get it working in a nice and clean setup. Here is the info on the methods I've gathered so far:

    - I could update the default PATH environment to include the git executables folder. However, I could not manage to do it successfully: updating /etc/paths didn't change anything, and since it's not an interactive shell, /etc/profile and /etc/bashrc are ignored.
    - From the ssh man page, I read that a BASH_ENV variable can be set to get an optional script executed. However, I cannot figure out how to set it system-wide on the server. If it needs to be set up on the client machine, this is not an acceptable solution. If someone has some info on how it is supposed to be done, please, by all means!
    - I can fix this problem by creating a .bashrc with the PATH correction in the system root (since all network users start there, as they do not have a home). But it just feels wrong. Additionally, if we do create a home folder for a user, the git command would fail again.
    - I can install a third-party application to set up hooks on login and then run a script creating a home directory with the necessary path corrections. This smells like backyard tinkering and a duct-tape solution.
    - I can install a small script on the server and ForceCommand sshd to run it on login. The script then looks for a command to execute ($SSH_ORIGINAL_COMMAND) and triggers a login shell to run it, or just starts a regular login shell for an interactive session. The full details of this method can be found here: http://marc.info/?l=git&m=121378876831164

    The last one is the best method I found so far (see the sketch below). Any suggestions on how to deal with this properly?
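
    For reference, a minimal sketch of that ForceCommand approach (paths are illustrative; treat this as an outline of the marc.info method, not the exact script):

        # /etc/ssh/sshd_config
        #   ForceCommand /usr/local/bin/ssh-wrapper

        #!/bin/sh
        # /usr/local/bin/ssh-wrapper
        export PATH=/usr/local/git/bin:"$PATH"
        if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
            exec /bin/bash -c "$SSH_ORIGINAL_COMMAND"   # non-interactive: run the pushed command
        else
            exec /bin/bash -l                           # interactive login session
        fi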

    Read the article

  • Varnish 503 Guru Meditation errors with pfSense and healthy Apache

    - by Fammy
    We are running a pfSense firewall / load balancer with Varnish as a service, in front of Fedora Linux webservers running Apache. We are getting intermittent 503 Guru Meditation errors, and we are a bit stuck scratching our heads because it is not easily repeatable. The timeouts are set to 30 s (connect and first byte), yet the 503 page shows instantly, not after 30 s. If you refresh immediately it may very well work instantly, and sometimes for a hundred refreshes. The load average on the web servers is < 1 and on the DB server < 3; all servers (web, DB, pfSense/Varnish) are physical rather than VMs. I would have thought that if the timeouts were being hit, the 503 page would only appear after 30 s - am I mistaken? Also, when an error happens there does not appear to be any corresponding error in Apache's log files. This affects pages as well as images, so it is possible for the page to load fine and for 9/10 images on the page to be fine but one not to work. An example of the Varnish debug is below. It says "no backend connection" but I can't figure out why; if the load was high on Apache I could understand it being flaky. The machines are on the same gigabit ethernet LAN.

        21 ReqStart     c *IP-REMOVED* 33418 1274368062
        21 RxRequest    c GET
        21 RxURL        c /fashion/
        21 RxProtocol   c HTTP/1.1
        21 RxHeader     c User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.5) Gecko/2008121622 Fedora/3.0.5-1.fc10 Firefox/3.0.5
        21 RxHeader     c Host: *ourdomain.com*
        21 RxHeader     c Accept: */*
        21 RxHeader     c Accept-Encoding: deflate, gzip
        21 VCL_call     c recv lookup
        21 VCL_call     c hash
        21 Hash         c /fashion/
        21 Hash         c *ourdomain.com*
        21 VCL_return   c hash
        21 VCL_call     c miss fetch
        21 FetchError   c no backend connection
        21 VCL_call     c error restart
        21 VCL_call     c recv lookup
        21 VCL_call     c hash
        21 Hash         c /fashion/
        21 Hash         c *ourdomain.com*
        21 VCL_return   c hash
        21 VCL_call     c miss fetch
        21 FetchError   c no backend connection
        21 VCL_call     c error restart
        21 VCL_call     c recv lookup
        21 VCL_call     c hash
        21 Hash         c /fashion/
        21 Hash         c *ourdomain.com*
        21 VCL_return   c hash
        21 VCL_call     c miss fetch
        21 FetchError   c no backend connection
        21 VCL_call     c error deliver
        21 VCL_call     c deliver deliver
        21 TxProtocol   c HTTP/1.1
        21 TxStatus     c 503
        21 TxResponse   c Service Unavailable
        21 TxHeader     c Server: Varnish
        21 TxHeader     c Content-Type: text/html; charset=utf-8
        21 TxHeader     c Content-Length: 384
        21 TxHeader     c Accept-Ranges: bytes
        21 TxHeader     c Date: Wed, 11 Apr 2012 10:36:17 GMT
        21 TxHeader     c X-Varnish: 1274368062
        21 TxHeader     c Age: 0
        21 TxHeader     c Via: 1.1 varnish
        21 TxHeader     c Connection: close
        21 TxHeader     c X-Cache: MISS
        21 Length       c 384
        21 ReqEnd       c 1274368062 1334140577.449995041 1334140577.450334787 1.794108152 0.000282764 0.000056982
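
    An instant "no backend connection" (rather than one delivered after the 30 s timeouts) is consistent with Varnish having already marked the backend sick, so a hedged place to look is the backend definition and its health probe (Varnish 2.x VCL; the address and values are illustrative):

        backend web1 {
            .host = "10.0.0.10";
            .port = "80";
            .connect_timeout = 30s;
            .first_byte_timeout = 30s;
            .probe = {
                .url = "/";
                .interval = 5s;
                .timeout = 2s;
                .window = 5;
                .threshold = 3;
            }
        }

    On Varnish 2.x, "varnishadm debug.health" should show the probe state at the moment the 503s occur.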

    Read the article

  • Sendmail SMART_HOST not working

    - by daniel
    Hello, I've defined SMART_HOST to be a specific server, let's call it foo.bar.com. However, when I send a test mail using sendmail -t, sendmail tries to use mx.bar.com, which subsequently rejects my mail. I've verified that foo.bar.com works and that mx.bar.com does not (yay telnet). I've recompiled sendmail.mc via make, make -C and m4. I've verified the DS entry in sendmail.cf. I've restarted sendmail correctly. I'm not sure how to proceed at this point. Any ideas? Here is my SMART_HOST line:

        define(SMART_HOST',foo.bar.com')dnl

    ...and here is the result of a test mail. It never tries to use foo.bar.com; instead it uses mx.bar.com.

        $ echo subject: test; echo | sendmail -Am -v -flocaluser -- [email protected]
        subject: test
        [email protected]... Connecting to mx.bar.com via relay...
        220 mx.bar.com ESMTP
        >>> EHLO myhost.bar.com
        250-mx.bar.com
        250-8BITMIME
        250 SIZE 52428800
        >>> MAIL From:<[email protected]> SIZE=1
        250 sender <[email protected]> ok
        >>> RCPT To:<[email protected]>
        550 #5.1.0 Address rejected.
        >>> RSET
        250 reset
        localuser... Connecting to local...
        localuser... Sent
        Closing connection to mx.bar.com.
        >>> QUIT
        221 mx.bar.com

    And last, here is a test mail sent using foo.bar.com:

        $ hostname
        myhost.bar.com
        $ telnet foo.bar.com 25
        Trying ***.***.***.***...
        Connected to foo.bar.com (***.***.***.***).
        Escape character is '^]'.
        220 foo.bar.com ESMTP Sendmail 8.14.1/8.14.1/ITS-7.0/ldap2-1+tls; Tue, 21 Dec 2010 13:27:44 -0700 (MST)
        helo foo
        250 foo.bar.com Hello myhost.bar.com [***.***.***.***], pleased to meet you
        mail from: [email protected]
        250 2.1.0 [email protected]... Sender ok
        rcpt to: [email protected]
        250 2.1.5 [email protected]... Recipient ok
        data
        354 Enter mail, end with "." on a line by itself
        testing
        .
        250 2.0.0 oBLKRikZ003758 Message accepted for delivery
        quit
        221 2.0.0 foo.bar.com closing connection
        Connection closed by foreign host.

    Any ideas? Thanks
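
    One hedged thing to double-check: m4 quoting. As pasted, the define is missing its opening backquote (that may just be a paste artifact), and a misquoted SMART_HOST fails silently. The canonical form, followed by a rebuild, would be:

        define(`SMART_HOST', `foo.bar.com')dnl
        # rebuild and restart:
        m4 sendmail.mc > sendmail.cf
        /etc/init.d/sendmail restart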

    Read the article

  • Connecting multiple ColdFusion 10 instances to a single Apache 2.2 server

    - by Adam Cameron
    This is on Windows 7 Home Premium edition. I have two ColdFusion 10 (Updater 2) instances: "cfusion" (the default one) and "scratch", and a single instance of Apache 2.2. Within Apache, I have set up two virtual hosts, each of which needs to be served by a different ColdFusion instance. Each of the CF instances serves files fine via Tomcat's internal web server, and Apache serves vanilla HTML files fine too, so both CF instances and both virtual hosts separately work OK. I can get wsconfig.exe to connect either one of the CF instances to the Apache server and serve CF files via Apache and that instance. However, I cannot find a way of connecting the second CF instance to Apache as well, so that both CF instances are connected, each serving one of the virtual hosts. wsconfig doesn't seem to understand the notion of multiple CF instances, and the changes it makes to httpd.conf (via mod_jk.conf) do not seem to be implemented in a way that accommodates multiple CF instances talking to a single Apache instance, or multiple virtual hosts. I freely admit to not being confident enough with how mod_jk (or even really httpd.conf) works to be able to guess whether I can change stuff to make it work. If I try to add the second CF instance using wsconfig, I just get the message "the web server is already configured for ColdFusion". Be that as it may... not the instance of ColdFusion I want to connect it to! If I remove the existing connector to whichever instance is already connected, I can then connect the other one, no problem. Not that this helps, but it demonstrates that either CF instance can connect to Apache. This all used to be fairly straightforward under older versions of CF and JRun :-(

    The only docs I have found are on the "Connect multiple Apache virtual hosts on a web server to a single ColdFusion server" page, but that specifically deals only with a single CF instance; there is no equivalent page for multiple CF instances. I'm kinda hoping I can move some of the mod_jk config into my virtual host entries in httpd-vhosts.conf (this is how it used to work for JRun), but I've no idea what to put where. I think I've covered all the necessary info here? If not, sing out and I'll add more. Thanks.

    PS: tried to specifically tag this as "ColdFusion-10" as the answer will be different from previous CF versions, but it won't let me cos my rep on this site is too low (odd how it doesn't consider my rep from other S/O sites...). If someone with sufficient rep can add it, that'd be cool: it's probably a valid tag to have. Ta.
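
    For what it's worth, CF10's connector is mod_jk-based, and stock mod_jk handles this case with one worker per instance plus per-vhost JkMount lines. A hedged sketch (worker names, AJP ports and host names are illustrative; each instance's real AJP port is in its server.xml):

        # workers.properties
        worker.list=cfusion,scratch
        worker.cfusion.type=ajp13
        worker.cfusion.host=localhost
        worker.cfusion.port=8012
        worker.scratch.type=ajp13
        worker.scratch.host=localhost
        worker.scratch.port=8013

        # httpd-vhosts.conf
        <VirtualHost *:80>
            ServerName site-a.local
            JkMount /*.cfm cfusion
        </VirtualHost>
        <VirtualHost *:80>
            ServerName site-b.local
            JkMount /*.cfm scratch
        </VirtualHost>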

    Read the article

  • How to configure a tun interface on Linux for SSH port forwarding?

    - by sarshad
    I am trying to forward port 139 from a Windows machine to my Ubuntu SSH server on a tun interface with the IP address 10.0.0.1, so that I can access the Windows machine's shares on my Ubuntu server through the reverse tunnel. I can forward ports to 127.0.0.1, but not to 10.0.0.1. On Windows I am using the Tunnelier SSH client. On my Ubuntu server, the following message is printed in auth.log:

        Received disconnect from 124.109.51.154: 11: Server denied request for client-side server-2-client forwarding on 10.0.0.1:139.

    So far I have tried the following settings in the /etc/ssh/sshd_config file, but they did not work:

        GatewayPorts yes
        PermitTunnel yes
        AllowTcpForwarding yes

    I set up the tun like this:

        sudo tunctl -t loc_0 -u myusername
        sudo ifconfig loc_0 inet 10.0.0.1 netmask 255.255.255.0 up

    The settings in the Tunnelier SSH client should not matter, because I can forward port 139 successfully to the Microsoft Loopback Adapter on a Windows machine running the WinSSHD server. Versions: Windows is XP SP3, Ubuntu is 10.10.

    Update: I tried to forward the port to a number greater than 1024, specifying the IP address of the tun, and it successfully connected - but the forwarding was done on 127.0.0.1 instead of the tun's IP address 10.0.0.1. So there are two separate problems now, when connecting from the Windows machine:
    1) Forwarding on ports less than 1024 is probably being denied. How can we allow that on the server?
    2) Forwarding is done only on 127.0.0.1 even if I specify 10.0.0.1, which is the tun's IP address.

    Another attempt: I also tried to forward port 22 of a Linux machine to the tun's port 55567. It showed success, but when I tried to ssh into that port using both local addresses, the Linux machine's debug display gave the error "Connection failed: no route to host" when connecting via 127.0.0.1, and simply "Connection refused" when using the tun's IP address. So the tun is not getting the forwarded port, whether we connect from a Windows client or a Linux client.
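
    Two hedged leads on the two sub-problems: binding a remote (-R) forward to an address other than the loopback requires sshd to honour the client's requested bind address, and listening on a port below 1024 requires the SSH login on the server to be root. The bind-address part would look like this:

        # /etc/ssh/sshd_config on the Ubuntu server ("yes" forces the
        # wildcard address; "clientspecified" honours the client's choice)
        GatewayPorts clientspecified

        # then request the bind address explicitly; in OpenSSH syntax
        # (Tunnelier has an equivalent server-side-address field):
        ssh -R 10.0.0.1:1139:localhost:139 user@server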

    Read the article

  • Mercurial browser on Windows 2003 takes several refreshes before displaying repositories

    - by Tim Murphy
    When attempting to browse my Mercurial repositories, it usually takes several refreshes before the repository list is displayed. The configuration is as follows:

    - Windows Server 2003 (dedicated machine hosted by http://www.server4you.com/; the site has anonymous password protection with self-signed SSL)
    - Mercurial 1.5.3
    - Python 2.6.5
    - Python for Windows 32 extensions 214 py2.6
    - isapi-wsgi 0.4.2

    The repositories are being served via ISAPI using the standard hgwebdir_wsgi.py file (copy to follow). Other problems with the repository server:

    - Before doing a clone/push/etc. I have to browse the repositories first, otherwise hg on my local machine cannot locate the site.
    - I have one repository with a large changeset that, after a minute or so, throws the error "abort: error: An existing connection was forcibly closed by the remote host". Will be asking another question for this problem.

    What can I do to start tracking down this problem?

    hgwebdir_wsgi.py:

        # Configuration file location
        hgweb_config = r'C:\Public\Mercurial\WebSite\hgweb.config'

        # Global settings for IIS path translation
        path_strip = 0   # Strip this many path elements off (when using url rewrite)
        path_prefix = 0  # This many path elements are prefixes (depends on the
                         # virtual path of the IIS application).

        import sys

        # Adjust python path if this is not a system-wide install
        #sys.path.insert(0, r'c:\path\to\python\lib')

        # Enable tracing. Run 'python -m win32traceutil' to debug
        if hasattr(sys, 'isapidllhandle'):
            import win32traceutil

        # To serve pages in local charset instead of UTF-8, remove the two lines below
        import os
        os.environ['HGENCODING'] = 'UTF-8'

        import isapi_wsgi
        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb.hgwebdir_mod import hgwebdir

        # Example tweak: Replace isapi_wsgi's handler to provide better error message
        # Other stuff could also be done here, like logging errors etc.
        class WsgiHandler(isapi_wsgi.IsapiWsgiHandler):
            error_status = '500 Internal Server Error'  # less silly error message

        isapi_wsgi.IsapiWsgiHandler = WsgiHandler

        # Only create the hgwebdir instance once
        application = hgwebdir(hgweb_config)

        def handler(environ, start_response):
            # Translate IIS's weird URLs
            url = environ['SCRIPT_NAME'] + environ['PATH_INFO']
            paths = url[1:].split('/')[path_strip:]
            script_name = '/' + '/'.join(paths[:path_prefix])
            path_info = '/'.join(paths[path_prefix:])
            if path_info:
                path_info = '/' + path_info
            environ['SCRIPT_NAME'] = script_name
            environ['PATH_INFO'] = path_info
            return application(environ, start_response)

        def __ExtensionFactory__():
            return isapi_wsgi.ISAPISimpleHandler(handler)

        if __name__ == '__main__':
            from isapi.install import *
            params = ISAPIParameters()
            HandleCommandLine(params)

    hgweb.config:

        [paths]
        / = C:\Public\Mercurial\Repositories\*

        [web]
        allow_archive = bz2 gz zip  ; Allows archive downloads.
        allow_push = ########       ; Users that are allowed to push.

    Read the article

  • CentOS 5.8 dig is not resolving ip-address

    - by travisbotello
    I'm running CentOS 5.8 on a local machine at home. Today I was trying to analyze a DNS lookup via dig:

        $ dig +trace -t A www.heise.de.

    This is giving me something like this as a response:

        de.   172800  IN  NS  f.nic.de.
        de.   172800  IN  NS  z.nic.de.
        de.   172800  IN  NS  s.de.net.
        de.   172800  IN  NS  n.de.net.
        de.   172800  IN  NS  a.nic.de.
        de.   172800  IN  NS  l.de.net.
        ;; Received 344 bytes from 192.58.128.30#53(192.58.128.30) in 49 ms

    In contrast, my dedicated CentOS machine returns the following:

        de.   172800  IN  NS  a.nic.de.
        de.   172800  IN  NS  n.de.net.
        de.   172800  IN  NS  f.nic.de.
        de.   172800  IN  NS  z.nic.de.
        de.   172800  IN  NS  l.de.net.
        de.   172800  IN  NS  s.de.net.
        ;; Received 344 bytes from 192.58.128.30#53(j.root-servers.net) in 32 ms

    As you can see, the last line is different. Any idea why my dedicated machine is giving me the host name of the responding DNS server and my local machine is only returning the IP address? Thanks in advance

    UPDATE: The reverse DNS lookup is working without any problems. Also, I just checked this on my local Mac and... exactly the same problem occurs. Is it possible that this has to do with the local router/modem/ISP?
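
    A hedged way to test the router/modem/ISP theory from the update: repeat the trace against a resolver other than the one the router hands out (8.8.8.8 is just an example public resolver), and confirm the root server's name resolves from the local box:

        dig +trace @8.8.8.8 -t A www.heise.de.   # bypass the local resolver
        dig -x 192.58.128.30 +short              # should yield j.root-servers.net.

    If the name appears only when the local resolver is bypassed, the router's DNS proxy is the likely culprit.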

    Read the article

  • Empty $_POST data

    - by Antimony
    I am trying to post to my MyBB server from a Python script, but try as I might, I can't get it to work. The request shows up in the forensic log and the headers are in the $_SERVER variable, but $_POST is always an empty array. The error log shows nothing, even at the debug level. I've already tried searching, but I haven't found anything that's helped. I already checked post_max_size, which is 8M. Another factor is that it's just my own requests which aren't going through; browser-generated requests seem to do just fine. I've looked and looked, but I can't find anything I'm doing differently that should matter. Anyway, here is an example request:

        POST /newreply.php?tid=1&processed=1 HTTP/1.1
        Host: <redacted>
        Accept-Encoding: identity
        Content-Length: 1153
        Content-Type: multipart/form-data; boundary=-->0xa216654L
        Cookie: sid=<redacted>; mybb[lastvisit]=1354995469; mybb[lastactive]=1354995500; mybb[threadread]=a%3A1%3A%7Bi%3A1%3Bi%3A1354995469%3B%7D; mybb[forumread]=a%3A1%3A%7Bi%3A2%3Bi%3A1354995469%3B%7D; loginattempts=1; mybbuser=2_ZlVVfaYS9FstZGQzr4KiNRUm3Z4xAgJkTPPq2ouFcuaragOTVQ
        Accept: text/html
        User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:14.0) Gecko/20100101 Firefox/14.0.1

        -->0xa216654L
        Content-Disposition: form-data; name="my_post_key"

        257b2bbef4334000d9088169154900a3
        -->0xa216654L
        Content-Disposition: form-data; name="quoted_ids"

        -->0xa216654L
        Content-Disposition: form-data; name="tid"

        1
        -->0xa216654L
        Content-Disposition: form-data; name="message"

        foo!2
        -->0xa216654L
        Content-Disposition: form-data; name="attachmentact"

        -->0xa216654L
        Content-Disposition: form-data; name="attachmentaid"

        -->0xa216654L
        Content-Disposition: form-data; name="icon"

        -1
        -->0xa216654L
        Content-Disposition: form-data; name="posthash"

        e93a2c78ce3f6807a86fd475ef4178cf
        -->0xa216654L
        Content-Disposition: form-data; name="postoptions[subscriptionmethod]"

        -->0xa216654L
        Content-Disposition: form-data; name="replyto"

        -->0xa216654L
        Content-Disposition: form-data; name="message_new"

        foo!2
        -->0xa216654L
        Content-Disposition: form-data; name="submit"

        Post Reply
        -->0xa216654L
        Content-Disposition: form-data; name="attachment"; filename=""
        Content-Type: application/octet-stream

        -->0xa216654L
        Content-Disposition: form-data; name="action"

        do_newreply
        -->0xa216654L
        Content-Disposition: form-data; name="subject"

        Lol
        -->0xa216654L
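
    A hedged reading of the capture: in multipart/form-data (RFC 2046) every delimiter line must be the declared boundary prefixed with two extra hyphens, so with boundary=-->0xa216654L each part should begin with ---->0xa216654L; the request uses the bare boundary, which would make PHP discard the whole body. Letting the HTTP library build the multipart body sidesteps hand-rolled boundaries; a sketch using the third-party requests package (an assumption, any multipart-aware client works):

        import requests

        # requests generates and declares a matching boundary itself
        resp = requests.post(
            "http://<redacted>/newreply.php?tid=1&processed=1",
            data={"my_post_key": "257b2bbef4334000d9088169154900a3",
                  "tid": "1", "message": "foo!2", "action": "do_newreply"},
            cookies={"sid": "<redacted>"},
        )
        print(resp.status_code)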

    Read the article

  • No Telnet login prompt when used over SSH tunnel

    - by SCO
    Hi there! I have a device, let's call it d1, running a lightweight Linux. The device is NATed by my internet box/router and hence not reachable from the Internet. It runs a telnet daemon and only has a root user (no password). Its IP address on the private network is 192.168.0.126. From the private network (let's say 192.168.0.x), I can do:

        telnet 192.168.0.126

    This works correctly. However, to allow administration, I need to access the device from outside the private network. Hence, I created an SSH tunnel like this on d1:

        ssh -R 4455:localhost:23 ussh@s1

    s1 is a server somewhere in the private network (but this is for testing purposes only; it will end up somewhere on the Internet), running a standard Linux distro, on which I created a user called 'ussh'. s1's IP address is 192.168.0.48. When I 'telnet' with the following from, let's say, c1 (192.168.0.19):

        telnet -l root s1 4455

    I get:

        Trying 192.168.0.48...
        Connected to 192.168.0.48.
        Escape character is '^]'.
        Connection closed by foreign host.

    The connection is closed after roughly 30 seconds, and I didn't get to log in. I tried without the -l switch, without any success. I tried to telnet with IP addresses instead of names to avoid reverse DNS issues (although I added a line referring to s1's IP/name to d1's /etc/hosts, just in case); no success. I tried another port than 4455; no success. I gathered Wireshark logs from s1. I can see:

    - s1 sends SSH data to c1, c1 ACKs
    - s1 performs an AAAA DNS request for c1, gets only the authoritative nameservers
    - s1 performs an A DNS request, then gets c1's IP address
    - s1 sends a SYN packet to c1, c1 replies with a RST/ACK
    - s1 sends a SYN to c1, c1 RST/ACKs (?)
    - After 0.8 seconds, c1 sends a SYN to s1, s1 SYN/ACKs and then c1 ACKs
    - s1 sends SSH content to d1, d1 sends an ACK back to s1
    - s1 retries the AAAA and A DNS requests
    - After 5 seconds, s1 retries a SYN to c1; once again it is RST/ACKed by c1. This is repeated 3 more times.
    - The last five packets: d1 sends SSH content to s1, s1 sends ACK and FIN/ACK to c1, c1 replies with FIN/ACK, s1 sends ACK to c1.

    The connection seems to be closed by the telnet daemon after 22 seconds. AFAIK there is no way to decode the SSH stream, so I'm really stuck here... Any ideas? Thank you!
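
    A hedged way to narrow this down is to exercise each hop separately, since the capture suggests the tunnel itself comes up:

        # on s1: tests only the SSH tunnel + d1's telnetd, with no c1 involved
        telnet 127.0.0.1 4455
        # on d1: tests only telnetd
        telnet 127.0.0.1 23

    If the first test hangs the same way, c1 is exonerated and the telnet daemon's handling of the tunnelled connection (e.g. reverse lookups or tcp_wrappers on d1) becomes the prime suspect.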

    Read the article

  • Exchange 2010 forwarded emails by external servers being blocked

    - by MadBoy
    Our users were getting spam messages apparently from their own accounts (same domain/login, for example [email protected] to [email protected]). This is a pretty standard trick, and I decided to block it so that anonymous users can't send emails as @company.com. This brought some problems upon us, like our printers not being able to send emails, but I solved that with a secondary SMTP receiver on a different port with IP restrictions. However, it seems to affect forwarding by some e-mail servers as well:

        Hi. This is the qmail-send program at home.pl.
        I'm afraid I wasn't able to deliver your message to the following addresses.
        This is a permanent error; I've given up. Sorry it didn't work out.

        : 89.14.1.26 failed after I sent the message.
        Remote host said: 550 5.7.1 Client does not have permissions to send as this sender

        --- Below this line is a copy of the message.

        Return-Path:
        Return-Path:
        Received: from mail.company.com [89.14.1.26] (HELO mail.company.com) by company.ho.pl [79.93.31.43] with SMTP (IdeaSmtpServer v0.70) id 488fcb01c2f069d9; Tue, 3 Jan 2012 09:46:55 +0100
        Received: from EXCHANGE1.COMPANY ([fe80::d425:135f:b655:1223]) by EXCHANGE2.COMPANY ([fe80::193f:51ac:9316:cb27%14]) with mapi id 14.01.0355.002; Tue, 3 Jan 2012 09:46:55 +0100
        From: =?iso-8859-2?Q?MadBoy?=

    So basically the forwarding server passes the message along without changing the sender address, and our servers treat it like spam and deny it. I used this command to block things:

        Get-ReceiveConnector "DEFAULT Exchange2" | Get-ADPermission -user "NT AUTHORITY\Anonymous Logon" | where {$_.ExtendedRights -like "ms-exch-smtp-accept-authoritative-domain-sender"} | Remove-ADPermission

    Is there any way I can keep receiving things like forwards but still be able to block the spoofing (short of a dedicated antispam solution - that will be added later)? Also, how do I "reassign" back the permissions that were removed?

    EDIT to clarify: I have a domain domain.com configured as authoritative. A couple of our users are on a project for differentcompany.com, which is not on our servers or anywhere close. When they send an email from their accounts, say [email protected], to [email protected], that special alias is configured to forward everything it receives to multiple people, including a group alias at our domain, [email protected], which puts the email in our users' mailboxes. After the email is forwarded by [email protected] and reaches our server, it is denied: the forwarding done by the "external" server doesn't change the sender information, so to our server it looks like [email protected] was the actual sender, and it treats the message as spam and denies it. The server at differentcompany.com just adds itself to the headers as having passed the message through and doesn't modify the sender in any way (this seems to be how forwarding works). I could probably allow this particular server to relay, but that would seem to affect more servers/users, as anyone can set up forwarding on their email back to our domain...
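
    On the "reassign it back" part, the documented inverse of the removal is Add-ADPermission with the same extended right (a sketch using the connector name from above):

        Get-ReceiveConnector "DEFAULT Exchange2" | Add-ADPermission -User "NT AUTHORITY\Anonymous Logon" -ExtendedRights "ms-exch-smtp-accept-authoritative-domain-sender"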

    Read the article

  • Sendmail Tuning For Batch Mail Jobs

    - by Kyle Brandt
    I have webservers that send out emails to a sendmail relay server as a batch job. The emails need to be accepted by the relay server as fast as possible; however, they do not need to be relayed onward very quickly. I am seeing occasional timeouts from the webserver trying to connect to the relay server. The load currently is about 30 emails a second for a couple of minutes. There are quite a few tuning options in the sendmail tuning guide; what I am focusing on now is the delivery mode. Quoting the guide:

        Delivery Mode

        There are a number of delivery modes that sendmail can operate in, set by the
        DeliveryMode (d) configuration option. These modes specify how quickly mail
        will be delivered. Legal modes are:

            i   deliver interactively (synchronously)
            b   deliver in background (asynchronously)
            q   queue only (don't deliver)
            d   defer delivery attempts (don't deliver)

        There are tradeoffs. Mode i gives the sender the quickest feedback, but may
        slow down some mailers and is hardly ever necessary. Mode b delivers promptly
        but can cause large numbers of processes if you have a mailer that takes a
        long time to deliver a message. Mode q minimizes the load on your machine,
        but means that delivery may be delayed for up to the queue interval. Mode d
        is identical to mode q except that it also prevents lookups in maps including
        the -D flag from working during the initial queue phase; it is intended for
        "dial on demand" sites where DNS lookups might cost real money. Some simple
        error messages (e.g., host unknown during the SMTP protocol) will be delayed
        using this mode. Mode b is the usual default.

        If you run in mode q (queue only), d (defer), or b (deliver in background)
        sendmail will not expand aliases and follow .forward files upon initial
        receipt of the mail. This speeds up the response to RCPT commands. Mode i
        should not be used by the SMTP server.

    I currently have the CentOS defaults:

        sendmail.cf: DeliveryMode=background
        submit.cf:   DeliveryMode=i

    sendmail.cf/mc is for outgoing email from the relay (to the intertubes) and submit.cf/mc is for incoming email (from my webservers). Would it make sense to change the outgoing delivery mode to queue? If I did, how would the outbound email flow behave? If this is the right thing to do, can anyone show me example mc configurations for this change? If it isn't, what recommendations are there for these constraints?
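
    For the "example mc configuration" part, a hedged sketch: the delivery mode is set via the confDELIVERY_MODE macro in sendmail.mc, and in queue mode delivery then happens at the daemon's queue-run interval (the interval value here is illustrative):

        dnl in sendmail.mc, then rebuild sendmail.cf:
        define(`confDELIVERY_MODE', `q')dnl
        dnl the queue is drained by the daemon's -q flag, e.g. every minute:
        dnl   sendmail -bd -q1m

    The trade-off is exactly the one the guide describes: RCPT responses get faster, but a message may sit for up to one queue interval before the relay attempts to send it onward.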

    Read the article

  • ISC DHCP - Force clients to get a new IP address, instead of the being re-issued their previous lease's IP

    - by kce
    We are in the middle of a migration of our DHCP and DNS services from a Debian-based server to a Windows Server 2008 R2 implementation. The Debian server is running isc-dhcpd-V3.1.1. All of the workstations are configured with fixed addresses between .3 and .40 (the motivation behind that choice is mostly management/political, much like here). DHCP leases are given out in the range .100 to .175. Statically configured servers live in the .200 block and above (which is mostly empty).

    When we move to the Windows platform, management/political considerations require me to move the IP ranges around again. We would like to keep .1-.10 reserved for network appliances, switches, and other infrastructure; .200 and above will remain designated for servers; and the addressing space in between should be available to clients, with IPs dynamically allocated (edit: instead of automatic, as originally mentioned) by the server. My address pool on the Windows server looks like this:

        192.168.0.1    192.168.0.254   (Address range for distribution)
        192.168.0.1    192.168.0.10    (IP addresses excluded from distribution)
        192.168.0.200  192.168.0.254   (IP addresses excluded from distribution)

    Currently, we still have all of our clients on the .3-.40 range, and a few machines still active in .100-.175 (although there are lots of powered-off devices that still hold expired leases with IPs from that range). Since the lease "database" isn't shared between the old and new DHCP servers, how can I prevent clients from receiving a lease with an IP address that is currently held by a client with a non-expired lease from the old DHCP server? If I just expand the range on the Debian DHCP server to 192.168.0.10-192.168.0.199, is there a way to force clients not to re-use their old IP address when they send their DHCPDISCOVER? Can I make the Windows DHCP server authoritative like the ISC implementation?

    The dhcpd.conf from the Debian server:

        ddns-update-style none;
        authoritative;
        default-lease-time 43200; #12 hours
        max-lease-time 86400; #24 hours

        subnet 192.168.0.0 netmask 255.255.255.0 {
            option routers 192.168.0.1;
            option subnet-mask 255.255.255.0;
            option broadcast-address 192.168.0.255;
            range 192.168.0.100 192.168.0.175;
        }

        host workstation-1 {
            hardware ethernet 00:11:22:33:44:55;
            fixed-address 192.168.0.3;
        }

        ... and so on until 192.168.0.40
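
    One hedged mitigation for the overlap window: Windows DHCP can ping-probe a candidate address before offering it, so addresses still held by live clients with ISC-issued leases are skipped. It is settable in the server properties (Advanced tab, "Conflict detection attempts") or, as a sketch, from the command line (the server name is illustrative; the value is the number of probes):

        netsh dhcp server \\dhcp1 set detectconflictretry 2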

    Read the article

  • iptables & allowed port refusing connection

    - by marfarma
    Can you see what I'm doing wrong? On Ubuntu Server 9.10, I'm attempting to allow traffic on port 1143 for a non-privileged IMAP host. The connection is refused when testing with telnet example.com 1143, but allowed when testing with telnet example.com 80, from my PC to the remote internet-hosted server. Both rules appear identical and are located near each other, with no intervening rules rejecting connections. I can't figure it out. iptables -L returns this:

        Chain INPUT (policy ACCEPT)
        target     prot opt source    destination
        ACCEPT     all  --  anywhere  anywhere
        REJECT     all  --  anywhere  127.0.0.0/8  reject-with icmp-port-unreachable
        ACCEPT     all  --  anywhere  anywhere     state RELATED,ESTABLISHED
        ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:www
        ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:https
        ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:http-alt
        ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:7070
        ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:1143
        ACCEPT     tcp  --  anywhere  anywhere     state NEW tcp dpt:ssh
        ACCEPT     icmp --  anywhere  anywhere     icmp echo-request
        LOG        all  --  anywhere  anywhere     limit: avg 5/min burst 5 LOG level debug prefix `iptables denied: '
        REJECT     all  --  anywhere  anywhere     reject-with icmp-port-unreachable

        Chain FORWARD (policy ACCEPT)
        target     prot opt source    destination
        REJECT     all  --  anywhere  anywhere     reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source    destination
        ACCEPT     all  --  anywhere  anywhere

    and my rules file contains this:

        # Generated by iptables-save v1.4.4 on Wed May 26 19:08:34 2010
        *nat
        :PREROUTING ACCEPT [3556:217296]
        :POSTROUTING ACCEPT [6909:414847]
        :OUTPUT ACCEPT [6909:414847]
        -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
        -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
        COMMIT
        # Completed on Wed May 26 19:08:34 2010
        # Generated by iptables-save v1.4.4 on Wed May 26 19:08:34 2010
        *filter
        :INPUT ACCEPT [1:52]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1:212]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -d 127.0.0.0/8 ! -i lo -j REJECT --reject-with icmp-port-unreachable
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 7070 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 1143 -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
        -A INPUT -j REJECT --reject-with icmp-port-unreachable
        -A FORWARD -j REJECT --reject-with icmp-port-unreachable
        -A OUTPUT -j ACCEPT
        COMMIT
        # Completed on Wed May 26 19:08:34 2010
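
    Since the ACCEPT rule for 1143 is clearly in place, a hedged next check is whether anything is actually listening on that port and whether packets even reach the rule:

        sudo netstat -tlnp | grep 1143               # is the IMAP service bound, and to which address?
        sudo iptables -L INPUT -n -v --line-numbers  # per-rule packet counters

    A service bound only to 127.0.0.1:1143 would produce exactly this "connection refused" while port 80 (REDIRECTed to a listening 8080) keeps working.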

    Read the article

  • RFC 1918 address on open internet?

    - by longneck
    In trying to diagnose a failover problem with my Cisco ASA 5520 firewalls, I ran a traceroute to www.btfl.com and, much to my surprise, some of the hops came back as RFC 1918 addresses. Just to be clear: this host is not behind my firewall and there is no VPN involved; I have to connect across the open internet to get there. How/why is this possible?

        asa# traceroute www.btfl.com

        Tracing the route to 157.56.176.94

         1  <redacted>
         2  <redacted>
         3  <redacted>
         4  <redacted>
         5  nap-edge-04.inet.qwest.net (67.14.29.170) 0 msec 10 msec 10 msec
         6  65.122.166.30 0 msec 0 msec 10 msec
         7  207.46.34.23 10 msec 0 msec 10 msec
         8  * * *
         9  207.46.37.235 30 msec 30 msec 50 msec
        10  10.22.112.221 30 msec 10.22.112.219 30 msec 10.22.112.223 30 msec
        11  10.175.9.193 30 msec 30 msec 10.175.9.67 30 msec
        12  100.94.68.79 40 msec 100.94.70.79 30 msec 100.94.71.73 30 msec
        13  100.94.80.39 30 msec 100.94.80.205 40 msec 100.94.80.137 40 msec
        14  10.215.80.2 30 msec 10.215.68.16 30 msec 10.175.244.2 30 msec
        15  * * *
        16  * * *
        17  * * *

    and it does the same thing from my FiOS connection at home:

        C:\>tracert www.btfl.com

        Tracing route to www.btfl.com [157.56.176.94] over a maximum of 30 hops:

          1     1 ms    <1 ms    <1 ms  myrouter.home [192.168.1.1]
          2     8 ms     7 ms     8 ms  <redacted>
          3    10 ms    13 ms    11 ms  <redacted>
          4    12 ms    10 ms    10 ms  ae2-0.TPA01-BB-RTR2.verizon-gni.net [130.81.199.82]
          5    16 ms    16 ms    15 ms  0.ae4.XL2.MIA19.ALTER.NET [152.63.8.117]
          6    14 ms    16 ms    16 ms  0.xe-11-0-0.GW1.MIA19.ALTER.NET [152.63.85.94]
          7    19 ms    16 ms    16 ms  microsoft-gw.customer.alter.net [63.65.188.170]
          8    27 ms    33 ms     *    ge-5-3-0-0.ash-64cb-1a.ntwk.msn.net [207.46.46.177]
          9     *        *        *    Request timed out.
         10    44 ms    43 ms    43 ms  207.46.37.235
         11    42 ms    41 ms    40 ms  10.22.112.225
         12    42 ms    43 ms    43 ms  10.175.9.1
         13    42 ms    41 ms    42 ms  100.94.68.79
         14    40 ms    40 ms    41 ms  100.94.80.193
         15     *        *        *    Request timed out.

    Read the article

  • Mercurial not receiving push

    - by Jeffrey04
    I have a Mercurial web frontend (hgwebdir.cgi) installed on a server, with nginx in front of it as a reverse proxy, as my friend suggested. However, whenever a large changeset is pushed (via a script), it fails. I found an issue ticket @google-code that describes a similar problem, and there is a solution (#39) that says:

        So the server side answer is: don't send the 401 back early. Be as slow/dumb
        as 'hg serve' and make the hg client send the bundle twice.

    How do I do that? My current nginx config:

        location /repo/testdomain.com {
            rewrite ^(.*) http://bpj.kkr.gov.my$1/hgwebdir.cgi;
        }
        location /repo/testdomain.com/ {
            rewrite ^(.*) http://bpj.kkr.gov.my$1hgwebdir.cgi;
        }
        location /repo/testdomain.com/hgwebdir.cgi {
            proxy_pass http://localhost:81/repo/testdomain.com/hgwebdir.cgi;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_buffering on;
            client_max_body_size 4096M;
            proxy_read_timeout 30000;
            proxy_send_timeout 30000;
        }

    From the access log we keep seeing 408 entries:

        incoming.ip.address - - [18/Nov/2009:08:29:31 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"
        incoming.ip.address - - [18/Nov/2009:08:37:14 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"

    Is there anything else I can do on the server? Solving it on the server side is preferable :/

    Further findings: Bitbucket seems to have this solved on the server side (check the liquidhg Bitbucket project and its Diagnosis wiki page), though I can't find the config anywhere :/

        What happens next varies depending on your server. Some servers refuse the
        BODY, simply closing the pipe from the client and causing Mercurial to fail.
        Some, like Apache (at least the way I configure it, and that could be part of
        the problem) and nginx (the way BitBucket.org configures it), accept the
        BODY, though it may take a few retries. Bottom line: if Mercurial doesn't
        fail the push, it sends the changeset data at least once to a server that
        has already told it it lacks credentials (more on this at Blame). Assuming
        Mercurial is still running, it resends the "unbundle" request and data, this
        time with authentication. Finally, Apache accepts the data successfully.
        Nginx, OTOH, at least under BitBucket's configuration, seems to reassemble
        the previous body (the one that lacked authentication) and somehow keep
        Mercurial from re-sending the whole body.
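
    A hedged note on the 408s specifically: they mean nginx gave up while reading the request body, which is governed by a timeout separate from the proxy_* timeouts already in the location block. Raising it is one server-side experiment (the value is illustrative):

        client_body_timeout 300s;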

    Read the article

  • Apache won't serve images larger than ~2K

    - by dtbaker
    Hello, just upgraded an old box to Ubuntu 10.04.2 LTS. Apache will not display images to a browser that are over about 2 KB; small images seem to display fine, and static HTML and PHP continue to work fine as well. Installed:

        apache2              2.2.14-5ubuntu8.4
        apache2-mpm-prefork  2.2.14-5ubuntu8.4
        apache2-utils        2.2.14-5ubuntu8.4
        apache2.2-bin        2.2.14-5ubuntu8.4
        apache2.2-common     2.2.14-5ubuntu8.4

    Here is an ngrep of an image that doesn't display in the browser:

        T 192.168.0.4:32907 -> 192.168.0.54:80 [AP]
          GET /path/path/logo.png HTTP/1.1..Host: 192.168.0.54..Connection: keep-alive..Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5..User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13..Accept-Encoding: gzip,deflate,sdch..Accept-Language: en-US,en;q=0.8..Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3....

        T 192.168.0.54:80 -> 192.168.0.4:32907 [A]
          HTTP/1.1 200 OK..Date: Wed, 09 Mar 2011 05:28:38 GMT..Server: Apache/2.2.14 (Ubuntu)..Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT..ETag: "17b6f4-15fe-491dd63eb2f40"..Accept-Ranges: bytes..Content-Length: 5630..Keep-Alive: timeout=15, max=100..Connection: Keep-Alive..Content-Type: image/png.....PNG........IHDR...!...v.......%.....sRGB.........bKGD..............pHYs.................tIME.. etc...

    This looks OK to me! I have tried Firefox and Chrome; both display small images fine, but when a large image is requested the browser prompts to download the file. When the image file is saved to the local computer it is corrupt; it also takes a long time to save, which makes me think the browser cannot see the Content-Length header sent from Apache. Also, when I look at the saved image file it includes the headers from Apache, along with a bit of garbage at the top, like so:

        vi logo.png:

        ^@^UÅd^@$^]V^S^H^@E^@^Q,n!@^@@^F^@^@À¨^@6À¨^@^D^@P^Y¬rÇŹéw^P^@Ú^@^@^A^A^H ^@^GÝ^]^@pbSHTTP/1.1 200 OK^M
        Date: Wed, 09 Mar 2011 04:47:04 GMT^M
        Server: Apache/2.2.14 (Ubuntu)^M
        Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT^M
        ETag: "17b6ff-157c-491dd63eb2f40"^M
        Accept-Ranges: bytes^M
        Content-Length: 5500^M
        Keep-Alive: timeout=15, max=94^M
        Connection: Keep-Alive^M
        Content-Type: image/png^M
        ^M
        PNG^M
        etc...

    Any ideas? It's driving me nuts. There is nothing in the Apache error logs, and permissions are fine (the image data is there, it's just somewhat corrupt). There's no proxy or iptables on this Ubuntu box either. Thanks heaps!! Dave

    ps: just tried IE from a different computer, same problem :(
    pps: rebooted the server, no help.
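
    A saved "image" that begins with raw frame bytes and the HTTP headers is the classic signature of a sendfile()/NIC-offload interaction rather than an Apache misconfiguration; the usual hedged experiment is:

        # in /etc/apache2/apache2.conf (or the vhost), then restart Apache:
        EnableSendfile Off

        # and/or disable checksum/segmentation offload on the NIC:
        sudo ethtool -K eth0 tx off tso off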

    Read the article

  • Web application/ site service (like Google App Engine) for PHP/ MySQL and Postgres

    - by Simon
    I would like to find a service similar to Google App Engine for PHP/MySQL/Postgres sites/applications. We host two different types of site.

    i). PHP/MySQL/Zend Framework:

        <VirtualHost *:80>
            DocumentRoot "/home/websites/website.com/public"
            ServerName website.com

            # This should be omitted in the production environment
            SetEnv APPLICATION_ENV development

            <Directory "/home/websites/website.com/public">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
                RewriteEngine On
                RewriteCond %{REQUEST_FILENAME} -s [OR]
                RewriteCond %{REQUEST_FILENAME} -l [OR]
                RewriteCond %{REQUEST_FILENAME} -d
                RewriteRule ^.*$ - [NC,L]
                RewriteRule ^.*$ index.php [NC,L]
            </Directory>
        </VirtualHost>

    ii). Matrix CMS - PHP/Postgres + loads of PEAR classes:

        <VirtualHost *:80>
            ServerName server.example.com
            DocumentRoot /home/websites/mysource_matrix/core/web
            Options -Indexes FollowSymLinks

            <Directory /home/websites/mysource_matrix>
                Order deny,allow
                Deny from all
            </Directory>
            <DirectoryMatch "^/home/websites/mysource_matrix/(core/(web|lib)|data/public|fudge)">
                Order allow,deny
                Allow from all
            </DirectoryMatch>
            <DirectoryMatch "^/home/websites/mysource_matrix/data/public/assets">
                php_flag engine off
            </DirectoryMatch>
            <FilesMatch "\.inc$">
                Order allow,deny
                Deny from all
            </FilesMatch>
            <LocationMatch "/(CVS|\.FFV)/">
                Order allow,deny
                Deny from all
            </LocationMatch>

            Alias /__fudge /home/websites/mysource_matrix/fudge
            Alias /__data /home/websites/mysource_matrix/data/public
            Alias /__lib /home/websites/mysource_matrix/core/lib
            Alias / /home/websites/mysource_matrix/core/web/index.php/
        </VirtualHost>

    My key requirements are:
    - I don't want to worry/know/care about the server/infrastructure
    - Secure, up-to-date software and OS
    - Good monitoring
    - Automatic scalability
    - SLA

    I apologise for the length of the question. In short, all I want to do is: i). create vhost, ii). create db, iii). install app/site, iv). relax. Thanks.

    Edit: I include the Matrix vhost because that is the only complication that I cannot really handle via a .htaccess file.

    Read the article

  • How to get more information from the system crash

    - by viraptor
    I'd like to debug an issue I'm having with a Linux (Debian stable) server, but I'm running out of ideas of how to confirm any diagnosis. Some background: the servers are DL160-class machines with hardware RAID between two disks. They're running a lot of services, mostly utilising the network interface and CPU. There are 8 CPUs, and the 7 most CPU-hungry "main" processes are bound to one core each via CPU affinity; other random background scripts are not pinned anywhere. The filesystem is writing ~1.5k blocks/s the whole time (going above 2k/s at peak times). Normal CPU usage for these servers is ~60% on 7 cores and some minimal usage on the last (whatever's running on shells, usually).

    What actually happens is that the "main" services start using 100% CPU at some point, mostly stuck in kernel time. After a couple of seconds, the load average goes over 400 and we lose any way to connect to the box (KVM is on its way, but not there yet). Sometimes we see the kernel report a hung task (but not always):

        [118951.272884] INFO: task zsh:15911 blocked for more than 120 seconds.
        [118951.272955] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [118951.273037] zsh D 0000000000000000 0 15911 1
        [118951.273093] ffff8101898c3c48 0000000000000046 0000000000000000 ffffffffa0155e0a
        [118951.273183] ffff8101a753a080 ffff81021f1c5570 ffff8101a753a308 000000051f0fd740
        [118951.273274] 0000000000000246 0000000000000000 00000000ffffffbd 0000000000000001
        [118951.273335] Call Trace:
        [118951.273424] [<ffffffffa0155e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
        [118951.273510] [<ffffffff804294f6>] schedule_timeout+0x1e/0xad
        [118951.273563] [<ffffffff8027577c>] __pagevec_free+0x21/0x2e
        [118951.273613] [<ffffffff80428b0b>] wait_for_common+0xcf/0x13a
        [118951.273692] [<ffffffff8022c168>] default_wake_function+0x0/0xe
        ....

    This would point at RAID/disk failure; however, sometimes the tasks are hung on the kernel's gettsc, which would indicate some general weird hardware behaviour. The box is also running MySQL (almost read-only, 99% cache hit), which seems to spawn a lot more threads during the system problems. During the day it does ~200kq/s (selects) and ~10q/s (writes). The host never runs out of memory or swaps, and no OOM reports are spotted. We've got many boxes with similar/identical hardware and they all seem to behave this way, but I'm not sure which part fails, so it's probably not a good idea to just grab something more powerful and hope the problem goes away. Applications themselves don't really report anything wrong when they're running, and I can run anything safely on the same hardware in an isolated environment. What can I do to narrow down the problem? Where else should I look for an explanation?
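
    Since the box dies before anything can be read from it, two hedged options are to stream kernel output to another machine and to keep a lightweight flight recorder running (the addresses and MAC below are illustrative):

        # kernel messages to a remote listener, survives the console dying:
        modprobe netconsole netconsole=@192.168.1.5/eth0,514@192.168.1.9/00:11:22:33:44:55
        # coarse 1-second samples that can be read from the log after a reboot:
        vmstat 1 >> /var/log/vmstat.log &

    If the hangs really are in kernel time, the last few seconds of netconsole output and vmstat samples before each lockup should show whether it's I/O wait, memory reclaim, or something else entirely.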

    Read the article

  • Command-line connect to wired network for Ubuntu

    - by Tim
    I'd like to know how to use the command line to connect to a wired network in general on Ubuntu 8.10. In my case, I connect a cable to my laptop but it doesn't work with WICD, so I'd like to try the command-line method. Here is the ifconfig of my network adapters:

        $ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:c0:9f:8d:23:74
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:19 Base address:0x1800

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4457 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4457 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:493002 (493.0 KB)  TX bytes:493002 (493.0 KB)

        wlan0     Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1
                  RX packets:1508929 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:768144 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:806027375 (806.0 MB)  TX bytes:78834873 (78.8 MB)

        wlan0:avahi Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  inet addr:169.254.5.92  Bcast:169.254.255.255  Mask:255.255.0.0
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1

        wmaster0  Link encap:UNSPEC  HWaddr 00-0E-9B-AB-56-19-00-00-00-00-00-00-00-00-00-00
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    Thanks and regards!

    UPDATE: Tried what oyvindio suggested. Here is the failing message:

        $ sudo dhclient3 eth0
        There is already a pid file /var/run/dhclient.pid with pid 18279
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/

        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        Listening on LPF/eth0/00:c0:9f:8d:23:74
        Sending on   LPF/eth0/00:c0:9f:8d:23:74
        Sending on   Socket/fallback
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 15
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 11
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8
        No DHCPOFFERS received.
        No working leases in persistent database - sleeping.
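
    Since dhclient gets no offers at all (and the eth0 output above shows no RUNNING flag), two hedged follow-ups: check whether the kernel even sees a link, and try a static configuration to separate cabling problems from DHCP problems (addresses are examples for a typical home LAN):

        sudo ethtool eth0 | grep "Link detected"
        sudo ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
        sudo route add default gw 192.168.1.1
        echo "nameserver 192.168.1.1" | sudo tee /etc/resolv.conf

    If "Link detected: no" comes back, the problem is the cable, port, or driver rather than anything WICD or dhclient is doing.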

    Read the article

  • How do I configure a site in IIS 7 for SSL with a wildcard certificate?

    - by michielvoo
    We have a Windows 2008 server with IIS 7 on which we test the sites we develop for our clients. Each site has a binding on a subdomain:

        clienta.example.com
        clientb.example.com
        clientc.example.com

    (* Using example.com to protect the innocent.)

    For one of these sites we now have to test whether it works over HTTPS, so I created a wildcard certificate request with *.example.com as the common name. I received the certificate (issued by PositiveSSL SA) and completed the request; the certificate is now installed in IIS. I then added an https binding to the second site with the following settings:

        type:            https
        IP address:      All Unassigned
        Port:            443
        Host name:       clientb.example.com
        SSL certificate: *.example.com

    Browsing the site over regular HTTP works fine. When I try to browse the site over HTTPS I get the following errors (depending on the browser used):

    Chrome:

        This webpage is not available
        Error 102 (net::ERR_CONNECTION_REFUSED): Unknown error.

    Firefox:

        Unable to connect
        Firefox can't establish a connection to the server at clientb.example.com

    (Firebug says Status: Aborted.)

    Internet Explorer:

        Internet Explorer cannot display the webpage

    I have checked Failed Request Tracing, and according to the log the request was completed with status 200. I have run the SSL Diagnostics Tool with the following result:

        System time: Fri, 04 Mar 2011 14:04:35 GMT
        Connecting to 192.168.2.95:443
        Connected
        Handshake: 115 bytes sent
        Handshake: 3877 bytes received
        Handshake: 326 bytes sent
        Handshake: 59 bytes received
        Handshake succeeded
        Verifying server certificate, it might take a while...
        Server certificate name: *.example.com
        Server certificate subject: OU=Domain Control Validated, OU=PositiveSSL Wildcard, CN=*.example.com
        Server certificate issuer: C=GB, S=Greater Manchester, L=Salford, O=Comodo CA Limited, CN=PositiveSSL CA
        Server certificate validity: From 2-3-2011 1:00:00 To 2-3-2012 0:59:59

        HTTPS request:
        GET / HTTP/1.0
        User-Agent: SSLDiag
        Accept:*/*

        HTTPS: 85 bytes of encrypted data sent
        HTTPS: 533 bytes of encrypted data received
        Status: HTTP/1.1 404 Not Found

        HTTP/1.1 404 Not Found
        Content-Type: text/html; charset=us-ascii
        Server: Microsoft-HTTPAPI/2.0
        Date: Fri, 04 Mar 2011 14:04:35 GMT
        Connection: close
        Content-Length: 315

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <HTML><HEAD><TITLE>Not Found</TITLE>
        <META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
        <BODY><h2>Not Found</h2>
        <hr><p>HTTP Error 404. The requested resource is not found.</p>
        </BODY></HTML>

        HTTPS: server disconnected
        Final handshake: 37 bytes sent successfully

    Q: What can I do to make this work?
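
    Two hedged observations on the diagnostics above: the 404 from Microsoft-HTTPAPI/2.0 may be a red herring, since SSLDiag sends an HTTP/1.0 request with no Host header, so no host-name binding can match; the browsers' ERR_CONNECTION_REFUSED, on the other hand, points at what HTTP.sys has (or hasn't) bound on :443. From an elevated prompt:

        netsh http show sslcert                        REM is a certificate bound to 0.0.0.0:443?
        netsh http show servicestate | findstr /i 443  REM which URLs are registered on 443?

    If no binding shows up for 0.0.0.0:443 (or the expected IP), re-adding the https binding in IIS, or binding the cert via netsh, would be the next step.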

    Read the article
