Search Results

Search found 13404 results on 537 pages for 'george host'.


  • lighttpd: weird behavior on multiple rewrite rule matches

    - by netmikey
    I have a 20-rewrite.conf for my PHP application looking like this:

        $HTTP["host"] =~ "www.mydomain.com" {
            url.rewrite-once += (
                "^/(img|css)/.*" => "$0",
                ".*" => "/my_app.php"
            )
        }

    I want to be able to put the webserver in a kind of "maintenance" mode while I update my application from scm. To do this, my idea was to enable an additional rewrite configuration file before this one. The 16-rewrite-maintenance.conf file looks like this:

        url.rewrite-once += (
            "^/(img|css)/.*" => "$0",
            ".*" => "/maintenance_app.php"
        )

    Now, on the maintenance page, I have a logo that doesn't get loaded: I get a 404 error. Lighttpd debug says the following:

        2012-12-13 20:28:06: (response.c.300) -- splitting Request-URI
        2012-12-13 20:28:06: (response.c.301) Request-URI : /img/content/logo.png
        2012-12-13 20:28:06: (response.c.302) URI-scheme : http
        2012-12-13 20:28:06: (response.c.303) URI-authority: localhost
        2012-12-13 20:28:06: (response.c.304) URI-path : /img/content/logo.png
        2012-12-13 20:28:06: (response.c.305) URI-query :
        2012-12-13 20:28:06: (response.c.300) -- splitting Request-URI
        2012-12-13 20:28:06: (response.c.301) Request-URI : /img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.302) URI-scheme : http
        2012-12-13 20:28:06: (response.c.303) URI-authority: localhost
        2012-12-13 20:28:06: (response.c.304) URI-path : /img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.305) URI-query :
        2012-12-13 20:28:06: (response.c.349) -- sanatising URI
        2012-12-13 20:28:06: (response.c.350) URI-path : /img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (mod_access.c.135) -- mod_access_uri_handler called
        2012-12-13 20:28:06: (response.c.470) -- before doc_root
        2012-12-13 20:28:06: (response.c.471) Doc-Root : /www
        2012-12-13 20:28:06: (response.c.472) Rel-Path : /img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.473) Path :
        2012-12-13 20:28:06: (response.c.521) -- after doc_root
        2012-12-13 20:28:06: (response.c.522) Doc-Root : /www
        2012-12-13 20:28:06: (response.c.523) Rel-Path : /img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.524) Path : /www/img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.541) -- logical -> physical
        2012-12-13 20:28:06: (response.c.542) Doc-Root : /www
        2012-12-13 20:28:06: (response.c.543) Rel-Path : /img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.544) Path : /www/img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.561) -- handling physical path
        2012-12-13 20:28:06: (response.c.562) Path : /www/img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.618) -- file not found
        2012-12-13 20:28:06: (response.c.619) Path : /www/img/content/logo.png, /img/content/logo.png

    Any clue on why lighttpd matches both rules (from my application rewrite config and from my maintenance rewrite config) and concatenates them with a comma? That doesn't seem to make any sense. Shouldn't it stop after the first match with rewrite-once?
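    One way to sidestep the list merging entirely (a sketch, not a confirmed fix; the variable name and handler paths are assumptions, and it relies on lighttpd's config-variable support) is to keep a single url.rewrite-once list and toggle its target, rather than layering a second += list on top:

        # 20-rewrite.conf - a single rewrite list whose target is toggled.
        # Swap the var to "/maintenance_app.php" (and reload) for maintenance mode.
        var.app_handler = "/my_app.php"

        $HTTP["host"] =~ "www.mydomain.com" {
            url.rewrite-once = (
                "^/(img|css)/.*" => "$0",   # static assets pass through untouched
                ".*"             => var.app_handler
            )
        }

    Because only one list ever exists, rewrite-once has exactly one match to stop at, and the comma concatenation cannot occur.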


  • File/folder permissions and groups on Linux with Apache

    - by phobia
    I'm trying to learn about permissions on a Linux web server with Apache. Some clues to the system: the server I have to play around with is Fedora-based, and Apache runs as apache:apache. To allow e.g. PHP to write to a file, the file needs to be chmod 777; 755 is not sufficient.

    What I'm wondering is basically how to set up permissions the way they should be on e.g. a "shared web host". My main problem is that if I set permissions so that one user cannot access another's home folder, then Apache can't read from the public_html folder either. To keep the users out I need to set chmod 700, but to let Apache read I need at least execute on world, so a 701 basically works, but it doesn't really keep other users out. So I'm really stuck on what to do. I have been considering adding the apache user to the four groups below to avoid having to add the world execute flag, but is that a bad thing? Should it be the other way around, i.e. the users in the groups below should also be in the apache group? (See the sketch after this list.)

    I was aiming at having 4 groups:
    1. webapp - same as dev_int, but is the only one that can go inside the webapp/live folder to e.g. do an update from the repo.
    2. dev_int - can read, write and execute everything in the "web root", including the two below, but nothing outside of the web root.
    3. dev_ext - can read, write and execute in all client folders, but cannot access anything outside of the webapp root.
    4. clients - basic FTP accounts. Each has a home folder with a public_html, but cannot access any other home folders.

    An example of the folder structure:

        webroot            (no users in the aforementioned groups can go outside of here)
          some_project     (:dev_int only)
          webapp
            live           (:webapp only)
            staging        (:dev_int and :dev_ext)
          clients          (:dev_int and :dev_ext)
            client_1       (:dev_int, :dev_ext and client1:clients)
              public_html
          dev
            developer_1    (developer_1:dev_int OR :dev_ext)
              public_html
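    A common way to let Apache traverse a client's home without opening it to other users is group ownership plus directory modes, rather than world-execute. A minimal sketch, assuming the group and path names above (not a tested recipe for this exact box):

        # create the client group and a sample account
        groupadd clients
        useradd -g clients client1

        # let apache traverse client1's home without granting world access:
        # owner rwx, group (apache) execute only, world nothing
        chown client1:apache /home/client1
        chmod 710 /home/client1

        # public_html readable by apache, invisible to other clients
        chown client1:apache /home/client1/public_html
        chmod 750 /home/client1/public_html

    The same idea extends to the dev_int/dev_ext trees: give each directory the most specific group that needs access and keep the world bits at zero, so no world-execute flag is ever required.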


  • Forwarding RDP via a Linux machine using iptables: Not working

    - by Nimmy Lebby
    I have a Linux machine and a Windows machine behind a router that implements NAT (the diagram might be overkill, but was fun to make). I am forwarding the RDP port (3389) on the router to the Linux machine because I want to audit RDP connections. For the Linux machine to forward RDP traffic, I wrote these iptables rules:

        iptables -t nat -A PREROUTING -p tcp --dport 3389 -j DNAT --to-destination win-box
        iptables -A FORWARD -p tcp --dport 3389 -j ACCEPT

    The port is listening on the Windows machine:

        C:\Users\nimmy>netstat -a
        Active Connections
        Proto Local Address Foreign Address State
        (..snip..)
        TCP 0.0.0.0:3389 WIN-BOX:0 LISTENING
        (..snip..)

    And the port is forwarding on the Linux machine:

        # tcpdump port 3389
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
        01:33:11.451663 IP shieldsup.grc.com.56387 > linux-box.myapt.lan.ms-wbt-server: Flags [S], seq 94663035, win 8192, options [mss 1460], length 0
        01:33:11.451846 IP shieldsup.grc.com.56387 > win-box.myapt.lan.ms-wbt-server: Flags [S], seq 94663035, win 8192, options [mss 1460], length 0

    However, I am not getting any successful RDP connections from the outside. The port is not even responding:

        C:\Users\outside-nimmy>telnet example.com 3389
        Connecting To example.com...Could not open connection to the host, on port 3389: Connect failed

    Any ideas?

    Update: Per @Zhiqiang Ma, I looked at the nf_conntrack proc file during a connection attempt and this is what I see (192.168.3.1 = linux-box, 192.168.3.5 = win-box):

        # cat /proc/net/nf_conntrack | grep 3389
        ipv4 2 tcp 6 118 SYN_SENT src=4.79.142.206 dst=192.168.3.1 sport=43142 dport=3389 packets=6 bytes=264 [UNREPLIED] src=192.168.3.5 dst=4.79.142.206 sport=3389 dport=43142 packets=0 bytes=0 mark=0 secmark=0 zone=0 use=2

    Second update: I got tcpdump running on the router, and it seems that win-box is sending an RST packet:

        21:20:24.767792 IP shieldsup.grc.com.45349 > linux-box.myapt.lan.3389: S 19088743:19088743(0) win 8192 <mss 1460>
        21:20:24.768038 IP shieldsup.grc.com.45349 > win-box.myapt.lan.3389: S 19088743:19088743(0) win 8192 <mss 1460>
        21:20:24.770674 IP win-box.myapt.lan.3389 > shieldsup.grc.com.45349: R 721745706:721745706(0) ack 755785049 win 0

    Why would Windows be doing this?
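    The [UNREPLIED] conntrack entry suggests the reply path never comes back through linux-box: win-box answers the outside address via its own default gateway, and the asymmetric path breaks the NAT state. A hedged sketch worth trying (assuming win-box resolves as above), which adds source NAT and makes sure forwarding is enabled at all:

        # make sure the kernel forwards packets
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # rewrite the destination, as before
        iptables -t nat -A PREROUTING -p tcp --dport 3389 -j DNAT --to-destination win-box

        # also rewrite the source, so win-box sends its replies back to linux-box
        # instead of straight to the internet via the router
        iptables -t nat -A POSTROUTING -p tcp -d win-box --dport 3389 -j MASQUERADE
        iptables -A FORWARD -p tcp -d win-box --dport 3389 -j ACCEPT

    The trade-off of the MASQUERADE is that win-box then sees all RDP connections as coming from linux-box, so the audit log on the Linux side becomes the place to record the real client addresses.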


  • Passenger 2.2.4, nginx 0.7.61 and SSL

    - by boompa
    Has anyone had any luck configuring Passenger and nginx with SSL? I've spent hours trying to get this configuration working as I'd like, using what few resources there are out there on the net, and I can't get any of the supposedly forwarded headers to show up in the Rails controller. For example, with a conf file of the following (and multiple variations thereof):

        server {
            listen 3000;
            server_name .example.com;
            root /Users/website/public;
            passenger_enabled on;
            rails_env development;
        }
        server {
            listen 3443;
            root /Users/website/public;
            rails_env development;
            passenger_enabled on;
            ssl on;
            #ssl_verify_client on;
            ssl_certificate /Users/website/ssl/server.crt;
            ssl_certificate_key /Users/website/ssl/server.key;
            #ssl_client_certificate /Users/website/ssl/CA.crt;
            ssl_session_timeout 5m;
            ssl_protocols SSLv3 TLSv1;
            ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X_FORWARDED_PROTO https;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header X-SSL-Subject $ssl_client_s_dn;
            #proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
            proxy_redirect off;
            proxy_max_temp_file_size 0;
        }

    and Rails code in the controller like this:

        request.headers.each { |k, v| RAILS_DEFAULT_LOGGER.error "Header #{k} Val #{v}" }

    other headers appear, but not those set in nginx, e.g.:

        Header rack.multithread Val false
        Header REQUEST_URI Val /login/new
        Header REMOTE_PORT Val 64021
        Header rack.multiprocess Val true
        Header PASSENGER_USE_GLOBAL_QUEUE Val false
        Header PASSENGER_APP_TYPE Val rails
        Header SCGI Val 1
        Header SERVER_PORT Val 3443
        Header HTTP_ACCEPT_CHARSET Val ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Header rack.request.query_hash Val
        Header DOCUMENT_ROOT Val /Users/website/public

    I've even gone so far as to modify Passenger's abstract_request_handler's main_loop method, i.e.:

        headers, input = parse_request(client)
        if headers
          if headers[REQUEST_METHOD] == PING
            process_ping(headers, input, client)
          else
            headers.each { |h,v| log.unknown "abstract_request_handler: #{h} = #{v}" }
            process_request(headers, input, client)
          end
        end

    only to find that the supposedly added headers do not exist there either:

        abstract_request_handler: HTTP_KEEP_ALIVE = 300
        abstract_request_handler: HTTP_USER_AGENT = Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1) Gecko/20090624 Firefox/3.5
        abstract_request_handler: PASSENGER_SPAWN_METHOD = smart-lv2
        abstract_request_handler: CONTENT_LENGTH = 0
        abstract_request_handler: HTTP_IF_NONE_MATCH = "b6e8b9afbc1110ee3bf0c87e119252ad"
        abstract_request_handler: HTTP_ACCEPT_LANGUAGE = en-us,en;q=0.5
        abstract_request_handler: SERVER_PROTOCOL = HTTP/1.1
        abstract_request_handler: HTTPS = on
        abstract_request_handler: REMOTE_ADDR = 127.0.0.1
        abstract_request_handler: SERVER_SOFTWARE = nginx/0.7.61
        abstract_request_handler: SERVER_ADDR = 127.0.0.1
        abstract_request_handler: SCRIPT_NAME =
        abstract_request_handler: PASSENGER_ENVIRONMENT = development
        abstract_request_handler: REMOTE_PORT = 64021
        abstract_request_handler: REQUEST_URI = /login/new
        abstract_request_handler: HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.7
        abstract_request_handler: SERVER_PORT = 3443
        abstract_request_handler: SCGI = 1
        abstract_request_handler: PASSENGER_APP_TYPE = rails
        abstract_request_handler: PASSENGER_USE_GLOBAL_QUEUE = false

    I'm tired of banging my head against the wall, so I'd truly appreciate any help I can get!
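    One thing worth noting: proxy_set_header only affects requests that nginx actually proxies, and with passenger_enabled on there is no proxy hop, so those directives are silently ignored. A sketch of the alternative (passenger_set_cgi_param existed in the 2.x nginx module, but verify the directive name against the Passenger docs for your version):

        server {
            listen 3443;
            root /Users/website/public;
            rails_env development;
            passenger_enabled on;
            ssl on;
            # inject values into the CGI/Rack environment Passenger builds,
            # instead of proxy_set_header, which never fires here
            passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
            passenger_set_cgi_param HTTP_X_REAL_IP $remote_addr;
        }

    Keys with an HTTP_ prefix should then surface through request.headers in the controller, which would explain why the proxy_set_header variants never appeared in either the controller or main_loop.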


  • Volume group disappeared, LVs still available

    - by Ben
    I've run into an issue with my KVM host, which runs VMs on an LVM volume. As of last night the logical volumes are no longer seen as such (I can't create snapshots of them, even though I have been for months now). Running any scans all result in nothing being found:

        [root@apollo ~]# pvscan
        No matching physical volumes found
        [root@apollo ~]# vgscan
        Reading all physical volumes. This may take a while...
        No volume groups found
        [root@apollo ~]# lvscan
        No volume groups found

    If I try restoring the VG conf backup from /etc/lvm/backup/vg0 I get the following error:

        [root@apollo ~]# vgcfgrestore -f /etc/lvm/backup/vg0 vg0
        Couldn't find device with uuid 20zG25-H8MU-UQPf-u0hD-NftW-ngsC-mG63dt.
        Cannot restore Volume Group vg0 with 1 PVs marked as missing.
        Restore failed.

    /etc/lvm/backup/vg0 has the following for the physical volume:

        physical_volumes {
            pv0 {
                id = "20zG25-H8MU-UQPf-u0hD-NftW-ngsC-mG63dt"
                device = "/dev/sda5"    # Hint only
                status = ["ALLOCATABLE"]
                flags = []
                dev_size = 4292870143   # 1.99902 Terabytes
                pe_start = 384
                pe_count = 524031       # 1.99902 Terabytes
            }
        }

    fdisk -l /dev/sda shows the following:

        [root@apollo ~]# fdisk -l /dev/sda
        Disk /dev/sda: 6000.1 GB, 6000069312512 bytes
        64 heads, 32 sectors/track, 5722112 cylinders
        Units = cylinders of 2048 * 512 = 1048576 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000188b7
        Device Boot Start End Blocks Id System
        /dev/sda1 2 32768 33553408 82 Linux swap / Solaris
        /dev/sda2 32769 33280 524288 83 Linux
        /dev/sda3 33281 1081856 1073741824 83 Linux
        /dev/sda4 1081857 3177984 2146435072 85 Linux extended
        /dev/sda5 1081857 3177984 2146435071+ 8e Linux LVM

    The server is running a 4-disk hardware RAID 10, which seems perfectly healthy according to megacli and smartd. The only odd message in /var/log/messages is the following, which shows up every couple of hours:

        Jun 10 09:41:57 apollo udevd[527]: failed to create queue file: No space left on device

    Output of df -h:

        [root@apollo ~]# df -h
        Filesystem Size Used Avail Use% Mounted on
        /dev/sda3 1016G 119G 847G 13% /
        /dev/sda2 508M 67M 416M 14% /boot

    Does anyone have any ideas what to do next? The VMs are all running fine at the moment, apart from not being able to snapshot them.

    Updated with extra info: it's not a lack of inodes:

        [root@apollo ~]# df -i
        Filesystem Inodes IUsed IFree IUse% Mounted on
        /dev/sda3 67108864 48066 67060798 1% /
        /dev/sda2 32768 47 32721 1% /boot

    pvs, vgs & lvs either output nothing or "No volume groups found".
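    If the PV label really has been wiped off /dev/sda5, the documented LVM recovery path is to write the recorded UUID back onto the device and then restore the metadata. Shown here strictly as a sketch: running pvcreate against a disk whose state you don't fully understand can destroy data, so take an image first and double-check the UUID against the backup file before anything else:

        # recreate the PV label using the UUID from the metadata backup
        pvcreate --uuid "20zG25-H8MU-UQPf-u0hD-NftW-ngsC-mG63dt" \
                 --restorefile /etc/lvm/backup/vg0 /dev/sda5

        # restore the VG metadata and reactivate the LVs
        vgcfgrestore vg0
        vgchange -ay vg0

    Since the VMs are still running against the live device-mapper tables, scheduling this for a maintenance window (with the guests shut down) would be the cautious choice.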


  • Cannot access website from inside network

    - by musclez
    I have a website running on my internal network, available at the example IP 192.168.1.5. When I type this into the browser, it redirects to my domain name, i.e. "example.com", and gives me Error code: ERR_CONNECTION_REFUSED. Any other machine inside the network can access the website, and the website is also accessible from outside the network. Other services from the server, like file sharing or FTP, are available to all machines on the network, including the one I'm having HTTP issues with.

    The issue may be linked to a proxy service, but to my understanding the service has been completely disabled and its executables have been uninstalled from the machine. I am wondering if there is some residual proxy information remaining on the machine that limits the connection. I'm fairly positive that "example.com" is what is being blocked by the local machine, not an IP address being blocked or a faulty connection. When I examine the hosts file, there are no redirects to the local machine for "example.com". There was a rule, as on my other machines within the network:

        192.168.1.5 example.com

    but I have since removed it for troubleshooting purposes. What intrigued me is that when I use the actual IP, the browser redirects to the domain and THEN says ERR_CONNECTION_REFUSED.

    Server-side results. The server logs are reporting this:

        example.com ::1 - - [Date & time] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Unix) (internal dummy connection)"

    However, this seems to be irrelevant, as it is not triggered when I try to connect to the server from the machine in question.

    Fiddler results:

        Host: *example.com*
        Proxy-Connection: keep-alive
        Chrome-Side [Fiddler] The connection to 'example.com' failed.
        Error: ConnectionRefused (0x274d). System.Net.Sockets.SocketException
        No connection could be made because the target machine actively refused it 01.23.45.67:80

    01.23.45.67:80 would be the external IP, which the server and the machine in question both share. I have done some reading into 0x274d and it comes back with .NET web.config information, but I am still at a loss as to what to do with that. I have WireShark running as well. There is a lot of sensitive information in the readout and I'm not sure what to extract from it. Either way, if it helps, I can access that information if anyone would like me to. Thanks for the help!
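    Two quick checks can separate a name-resolution problem from a lingering proxy setting (standard tools; the domain and LAN IP are the placeholders from the question). curl can pin the name to the internal address for a single request, and netsh shows any machine-wide WinHTTP proxy left behind:

        # force example.com to resolve to the LAN IP for this one request,
        # bypassing the hosts file and DNS entirely (curl 7.21.3+)
        curl -v --resolve example.com:80:192.168.1.5 http://example.com/

    And from an elevated Windows command prompt on the affected machine:

        netsh winhttp show proxy

    If the curl request succeeds while the browser fails, the block is in the browser/proxy layer rather than the network path, which matches the residual-proxy theory.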


  • Disk IO causing high load on Xen/CentOS guest

    - by Peter Lindqvist
    I'm having serious issues with a Xen-based server; this is on the guest partition. It's a paravirtualized CentOS 5.5. The following numbers are taken from top while copying a large file over the network. If I copy the file another time, the speed decreases in relation to load average, so the second time it's half the speed of the first time. It needs some time to cool off after this: load average slowly decreases until the machine is once again usable. ls / takes about 30 seconds.

        top - 13:26:44 up 13 days, 21:44, 2 users, load average: 7.03, 5.08, 3.15
        Tasks: 134 total, 2 running, 132 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.1%sy, 0.0%ni, 25.3%id, 74.5%wa, 0.0%hi, 0.0%si, 0.1%st
        Mem: 1048752k total, 1041460k used, 7292k free, 3116k buffers
        Swap: 2129912k total, 40k used, 2129872k free, 904740k cached
        PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
        1506 root 10 -5 0 0 0 S 0.3 0.0 0:03.94 cifsd
        1 root 15 0 2172 644 556 S 0.0 0.1 0:00.08 init

    Meanwhile the host sits at ~0.5 load average, steady over time, with ~50% wait. The server hardware is dual Xeon, 3 GB RAM, 170 GB SCSI-320 10k rpm, and shouldn't have any problems copying files over the network.

        disk = [ "tap:aio:/vm/dev01.img,xvda,w" ]

    I also get these in the log:

        INFO: task syslogd:1350 blocked for more than 120 seconds.
        "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        syslogd D 00062E4F 2208 1350 1 1353 1312 (NOTLB)
        c0ef0ed0 00000286 6e71a411 00062e4f c0ef0f18 00000009 c0f20000 6e738bfd
        00062e4f 0001e7ec c0f2010c c181a724 c1abd200 00000000 ffffffff c0ef0ecc
        c041a180 00000000 c0ef0ed8 c03d6a50 00000000 00000000 c03d6a00 00000000
        Call Trace:
        [<c041a180>] __wake_up+0x2a/0x3d
        [<ee06a1ea>] log_wait_commit+0x80/0xc7 [jbd]
        [<c043128b>] autoremove_wake_function+0x0/0x2d
        [<ee065661>] journal_stop+0x195/0x1ba [jbd]
        [<c0490a32>] __writeback_single_inode+0x1a3/0x2af
        [<c04568ea>] do_writepages+0x2b/0x32
        [<c045239b>] __filemap_fdatawrite_range+0x66/0x72
        [<c04910ce>] sync_inode+0x19/0x24
        [<ee09b007>] ext3_sync_file+0xaf/0xc4 [ext3]
        [<c047426f>] do_fsync+0x41/0x83
        [<c04742ce>] __do_fsync+0x1d/0x2b
        [<c0405413>] syscall_call+0x7/0xb
        =======================

    I have tried disabling irqbalanced as suggested here, but it does not seem to make any difference.
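    Since the guest's disk is a file-backed image attached via tap:aio, one change people commonly try in this situation (an assumption here, not a verified fix for this box) is moving the guest onto an LVM volume with the phy: backend, which removes the filesystem-on-filesystem double buffering that tends to amplify write stalls like the jbd/fsync trace above:

        # guest config - assumes an LV such as /dev/vg0/dev01 has been created
        # and the image copied onto it first, e.g.:
        #   dd if=/vm/dev01.img of=/dev/vg0/dev01 bs=4M
        disk = [ "phy:/dev/vg0/dev01,xvda,w" ]

    If rebuilding the storage isn't an option, comparing the guest's and host's I/O schedulers is a cheaper experiment, since the defaults can compound badly between the two layers.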


  • Adobe Reader not loading form content

    - by wullxz
    We have an FDF file which is used to offer an online application form. The FDF is filled out and sent to a mailbox. When I open the received file, Adobe Reader starts, loads the document in Internet Explorer (I had to change my default browser because it doesn't work in Chrome; the customer uses IE by default) and displays a warning that Adobe Reader has blocked the connection to the server where the initial document is saved. I can then click on "Trust this document once" (translated by me!) or "Add this host to trusted hosts" (also translated by me!). The second option doesn't work at all. The first option works, but is a little bit annoying.

    I looked into Adobe Reader's options (Edit - "Voreinstellungen" in German - the last option, Security (advanced)) and found the possibility to add hosts, files and directories, or to allow Adobe Reader to use the "Trusted Websites" list from Internet Options. When I add the website either to Trusted Websites or to the trusted list in Adobe Reader's options, the warning doesn't pop up, but the content of the input boxes prefilled by the applicant doesn't show up on Windows 7, though it does show up on Windows XP. (A screenshot showed the settings window described here; the big input box at the bottom normally holds the trusted files/directories/hosts list.)

    System information: Windows 7 Enterprise x64, Adobe Reader X, multiple IE versions (mine is the latest, but there's also IE 7 or 8).

    How do I get Adobe Reader to load the content of the form? This behaviour can be reproduced on a PC: when opening an FDF from a command line, the form fields are blank even though there is data in the FDF and the PDF is located in a manually entered trusted folder.

    Steps to reproduce:
    1. Clean-install a Windows 7 PC (or use a virtual machine).
    2. Map a network drive to a shared folder with a subfolder, e.g. c:\test\docs becomes m:\docs.
    3. Set security permissions to allow full control to everyone.
    4. Add an FDF and a matching PDF in the subfolder.
    5. Manually add m:\docs to each of the trusted folders in the trust manager registry settings.
    6. Ensure that Enhanced Security is on.
    7. Run a command line to open the FDF file.

    Expected result: the PDF is opened in Adobe Reader with the form fields filled out with data.
    Actual results: the PDF is opened with blank fields, and a 'yellow bar' appears asking to add the document to trusted locations.

    It appears that Adobe Reader XI is ignoring the privileged locations entries in the registry. Adding the document via the 'yellow bar' adds the individual document, within the same folder, to the privileged locations, but means that the process has to be repeated for every document that needs to be opened from the folder.
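    As a diagnostic only (not a production setting), temporarily turning Enhanced Security off via the per-user preference values shows whether it is the layer doing the blocking. The key names below are from memory of Adobe's Reader X administration documentation and should be verified there before use:

        Windows Registry Editor Version 5.00

        ; assumption: Reader X keeps these under the 10.0 hive - adjust for the installed version
        [HKEY_CURRENT_USER\Software\Adobe\Acrobat Reader\10.0\TrustManager]
        "bEnhancedSecurityStandalone"=dword:00000000
        "bEnhancedSecurityInBrowser"=dword:00000000

    If the form data appears with these set, the fault is in how the privileged-locations list is being read, rather than in the FDF itself, which narrows the problem considerably.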


  • Why do these ipfw delayed pipes have no effect?

    - by troutwine
    I'm on OS X 10.7.5 and am attempting to add some latency to the connection to my personal domain with ipfw, using this article as a guide.

    Normal latency:

        > ping -c5 troutwine.us
        PING troutwine.us (198.101.227.131): 56 data bytes
        64 bytes from 198.101.227.131: icmp_seq=0 ttl=56 time=92.714 ms
        64 bytes from 198.101.227.131: icmp_seq=1 ttl=56 time=91.436 ms
        64 bytes from 198.101.227.131: icmp_seq=2 ttl=56 time=91.218 ms
        64 bytes from 198.101.227.131: icmp_seq=3 ttl=56 time=91.451 ms
        64 bytes from 198.101.227.131: icmp_seq=4 ttl=56 time=91.243 ms
        --- troutwine.us ping statistics ---
        5 packets transmitted, 5 packets received, 0.0% packet loss
        round-trip min/avg/max/stddev = 91.218/91.612/92.714/0.559 ms

    Enabling ipfw:

        > sudo sysctl -w net.inet.ip.fw.enable=0
        net.inet.ip.fw.enable: 1 -> 0
        > sudo sysctl -w net.inet.ip.fw.enable=1
        net.inet.ip.fw.enable: 0 -> 1

    The configuration of the pipes:

        > sudo ipfw add pipe 1 ip from any to 198.101.227.131
        00200 pipe 1 ip from any to any dst-ip 198.101.227.131
        > sudo ipfw add pipe 2 ip from 198.101.227.131 to any
        00500 pipe 2 ip from 198.101.227.131 to any
        > sudo ipfw pipe 1 config delay 250ms bw 1Mbit/s plr 0.1
        > sudo ipfw pipe 2 config delay 250ms bw 1Mbit/s plr 0.1

    The pipes are in place and configured:

        > sudo ipfw -a list
        00100 166 14178 fwd 127.0.0.1,20559 tcp from any to me dst-port 80 in
        00200 0 0 pipe 1 ip from any to 198.101.227.131
        00300 0 0 pipe 2 ip from 198.101.227.131 to any
        65535 37452525 32060610029 allow ip from any to any
        > sudo ipfw pipe list
        00001: 1.000 Mbit/s 250 ms 50 sl.plr 0.100000 0 queues (1 buckets) droptail
        mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
        00002: 1.000 Mbit/s 250 ms 50 sl.plr 0.100000 0 queues (1 buckets) droptail
        mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000

    Yet this has had no effect:

        > ping -c5 troutwine.us
        PING troutwine.us (198.101.227.131): 56 data bytes
        64 bytes from 198.101.227.131: icmp_seq=0 ttl=56 time=100.920 ms
        64 bytes from 198.101.227.131: icmp_seq=1 ttl=56 time=91.648 ms
        64 bytes from 198.101.227.131: icmp_seq=2 ttl=56 time=91.777 ms
        64 bytes from 198.101.227.131: icmp_seq=3 ttl=56 time=91.466 ms
        64 bytes from 198.101.227.131: icmp_seq=4 ttl=56 time=93.209 ms
        --- troutwine.us ping statistics ---
        5 packets transmitted, 5 packets received, 0.0% packet loss
        round-trip min/avg/max/stddev = 91.466/93.804/100.920/3.612 ms

    What gives? I understand that ipfw is deprecated, but the manpage does not mention it being disabled. Also, I am not using the Network Link Conditioner because I want to affect a single host.
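    The ipfw -a listing shows zero packet counters on both pipe rules, so the first thing to confirm (a diagnostic sketch, not a fix) is whether traffic ever hits them after a fresh ping, and how the one_pass sysctl is set, since that controls what happens to a packet after it passes through a pipe:

        ping -c5 troutwine.us
        sudo ipfw -a list                # rules 00200/00300 should now show non-zero counts
        sysctl net.inet.ip.fw.one_pass   # 1 = packet stops at the first matching pipe;
                                         # 0 = re-injected into the ruleset after it

    If the counters stay at zero even after the ping, the packets are being consumed by an earlier rule (or by a path the ruleset never sees), which would explain the unchanged round-trip times.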


  • centos 6 ps aux hangs up

    - by Guntis
    I have a problem with my server. The server is running CentOS 6 (CloudLinux Server release 6.2); uname -a gives 2.6.32-320.4.1.lve1.1.4.el6.x86_64. It is a KVM guest; the host runs Debian 6. If I run ps aux, it gets stuck on a random process (it shows only some of the processes); top works fine, but htop doesn't work either (black screen).

        top - 12:11:51 up 34 min, 1 user, load average: 4.26, 6.71, 16.15
        Tasks: 201 total, 7 running, 192 sleeping, 0 stopped, 2 zombie
        Cpu(s): 7.9%us, 2.8%sy, 0.0%ni, 87.5%id, 1.6%wa, 0.0%hi, 0.2%si, 0.0%st
        Mem: 9862044k total, 2359484k used, 7502560k free, 171720k buffers
        Swap: 10485720k total, 0k used, 10485720k free, 1336872k cached

    The server has one Intel(R) Xeon(R) CPU E5606 @ 2.13GHz.

        free -m
        total used free shared buffers cached
        Mem: 9630 2336 7293 0 170 1324
        -/+ buffers/cache: 841 8789
        Swap: 10239 0 10239

        php -v
        PHP 5.3.19 (cli) (built: Nov 28 2012 10:03:07)
        Copyright (c) 1997-2012 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies
        with the ionCube PHP Loader v4.2.2, Copyright (c) 2002-2012, by ionCube Ltd.,
        and with Zend Guard Loader v3.3, Copyright (c) 1998-2010, by Zend Technologies
        with Suhosin v0.9.33, Copyright (c) 2007-2012, by SektionEins GmbH

        mysql
        Server version: 5.1.63-cll

        php -i
        disable_functions => apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_exec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, openlog, passthru, php_uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, posix_mkfifo, posix_setpgid, posix_setsid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exec, syslog, system, xmlrpc_entity_decode, xmlrpc_server_create, putenv, show_source, mail => (the same list repeated as the master value)
        suhosin.executor.disable_eval => Off => Off
        suhosin.executor.eval.blacklist => include, include_once, require, require_once, curl_init, fpassthru, base64_encode, base64_decode, mail, exec, system, proc_open, leak, syslog, pfsockopen, shell_exec, ini_restore, symlink, stream_socket_server, proc_nice, popen, proc_get_status, dl, pcntl_exec, pcntl_fork, pcntl_signal, pcntl_waitpid, pcntl_wexitstatus, pcntl_wifexited, pcntl_wifsignaled, pcntl_wifstopped, pcntl_wstopsig, pcntl_wtermsig, socket_accept, socket_bind, socket_connect, socket_create, socket_create_listen, socket_create_pair, link, register_shutdown_function, register_tick_function, gzinflate => (the same list repeated as the master value)

    Sometimes I cannot kill an httpd process: I run kill -9 PID, even several times, and nothing happens. PHP runs via suPHP. I read somewhere that this could be a trojan. I ran strace ps aux and it stops on:

        open("/proc/PID/cmdline", O_RDONLY)

    If I reboot the server, the problem is gone, but after some time it comes back again... :( Thanks.
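    Unkillable processes plus a ps that blocks on /proc/PID/cmdline both point at tasks stuck in uninterruptible sleep (state D): reading a hung process's cmdline can block, which is exactly where the strace stops. A small sketch that lists D-state processes without ever touching cmdline:

        #!/bin/sh
        # scan /proc directly; /proc/PID/status stays readable even when
        # /proc/PID/cmdline would block on a hung task
        for s in /proc/[0-9]*/status; do
            awk '/^Name:/  { name = $2 }
                 /^State:/ { if ($2 == "D") print FILENAME, name }' "$s" 2>/dev/null
        done

    Whatever shows up repeatedly in that list (and its stack in /proc/PID/stack, if available) is the thing actually wedged, usually on I/O between the guest and the Debian host, rather than a trojan.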


  • pptpd configuration

    - by Ian R.
    I would like a little help configuring pptpd so I can use my server as a VPN server; I have 10 IPs on it and I travel a lot, so that would really help me and my partners. I managed to install everything needed, but my VPN client fails to connect for a reason I cannot understand. I know there are two files in pptpd that you're supposed to edit, so I will post both of them here.

    /etc/ppp/pptpd-options:

        name pptpd
        refuse-pap
        refuse-chap
        refuse-mschap
        require-mschap-v2
        require-mppe-128
        proxyarp
        nodefaultroute
        lock
        nobsdcomp

    /etc/pptpd.conf:

        option /etc/ppp/pptpd-options
        logwtmp
        localip xx.158.177.231
        remoteip xx.158.177.103,xx.158.177.116,xx.158.177.121,xx.158.177.124,xx.158.177.125,xx.158.177.131,xx.158.177.134,xx.158.177.139,xx.158.177.142,xx.158.177.145

    The interfaces:

        eth0 Link encap:Ethernet HWaddr 00:16:3e:51:31:ba
          inet addr:xx.158.177.231 Bcast:xx.158.177.255 Mask:255.255.254.0
          inet6 addr: xx80::216:3eff:fe51:31ba/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:56352 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3xx15 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4884030 (4.8 MB) TX bytes:6780974 (6.7 MB)
          Interrupt:16
        eth0:1 Link encap:Ethernet HWaddr 00:16:3e:51:31:ba
          inet addr:xx.158.177.103 Bcast:xx.158.177.255 Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          Interrupt:16
        eth0:2 Link encap:Ethernet HWaddr 00:16:3e:51:31:ba
          inet addr:xx.158.177.116 Bcast:xx.158.177.255 Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          Interrupt:16
        eth0:3 Link encap:Ethernet HWaddr 00:16:3e:51:31:ba
          inet addr:xx.158.177.121 Bcast:xx.158.177.255 Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          Interrupt:16
        eth0:4 Link encap:Ethernet HWaddr 00:16:3e:51:31:ba
          inet addr:xx.158.177.124 Bcast:xx.158.177.255 Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          Interrupt:16
        eth0:5 Link encap:Ethernet HWaddr 00:16:3e:51:31:ba
          inet addr:xx.158.177.125 Bcast:xx.158.177.255 Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          Interrupt:16
        eth0:6 Link encap:Ethernet HWaddr 00:16:3e:51:31:ba
          inet addr:xx.158.177.131 Bcast:xx.158.177.255 Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          Interrupt:16
        eth0:7 Link encap:Ethernet HWaddr 00:16:3e:51:31:ba
          inet addr:xx.158.177.134 Bcast:xx.158.177.255 Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          Interrupt:16
        eth0:8 Link encap:Ethernet HWaddr 00:16:3e:51:31:ba
          inet addr:xx.158.177.139 Bcast:xx.158.177.255 Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          Interrupt:16
        eth0:9 Link encap:Ethernet HWaddr 00:16:3e:51:31:ba
          inet addr:xx.158.177.142 Bcast:xx.158.177.255 Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          Interrupt:16
        eth0:10 Link encap:Ethernet HWaddr 00:16:3e:51:31:ba
          inet addr:xx.158.177.145 Bcast:xx.158.177.255 Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          Interrupt:16
        lo Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING MTU:16436 Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:286 (286.0 B) TX bytes:286 (286.0 B)
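    Since the two config files look plausible, the usual culprit for "client fails to connect" with PPTP is the firewall: the control channel is TCP/1723, but the tunnel itself is GRE (IP protocol 47), which is easy to forget to allow. A hedged iptables sketch, assuming eth0 is the public interface and default-drop policies may be in play:

        # allow the PPTP control channel and the GRE tunnel
        iptables -A INPUT -p tcp --dport 1723 -j ACCEPT
        iptables -A INPUT -p 47 -j ACCEPT

        # let the kernel route between the ppp clients and the world
        echo 1 > /proc/sys/net/ipv4/ip_forward

    With proxyarp and public remoteip addresses, no NAT should be needed, but forwarding must still be enabled, and /var/log/messages during a connection attempt will show whether the failure is at the TCP handshake or the GRE stage.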


  • How should I set up my Hyper-V server and network topology?

    - by Daniel Waechter
    This is my first time setting up either Hyper-V or Windows 2008, so please bear with me. I am setting up a pretty decent server running Windows Server 2008 R2 to be a remote (colocated) Hyper-V host. It will be hosting Linux and Windows VMs, initially for developers to use, but eventually also to do some web hosting and other tasks. Currently I have two VMs, one Windows and one Ubuntu Linux, running pretty well, and I plan to clone them for future use.

    Right now I'm considering the best ways to configure developer and administrator access to the server once it is moved into the colocation facility, and I'm seeking advice on that. My thought is to set up a VPN for access to certain features of the VMs on the server, but I have a few different options for going about this:

    1. Connect the server to an existing hardware firewall (an old-ish Netscreen 5-GT) that can create a VPN and map external IPs to the VMs, which will have their own IPs exposed through the virtual interface. One problem with this choice is that I'm the only one trained on the Netscreen, and its interface is a bit baroque, so others may have difficulty maintaining it. The advantage is that I already know how to do it, and I know it will do what I need.

    2. Connect the server directly to the network and configure the Windows 2008 firewall to restrict access to the VMs and set up a VPN. I haven't done this before, so there will be a learning curve, but I'm willing to learn if this option is better long-term than the Netscreen. Another advantage is that I won't have to train anyone on the Netscreen interface. Still, I'm not certain about the capabilities of the Windows software firewall when it comes to creating VPNs, setting up rules for external access to certain ports on the IPs of the Hyper-V guests, and so on. Will it be sufficient for my needs and easy enough to set up and maintain?

    Anything else? What are the limitations of my approaches? What are the best practices / what has worked well for you? Remember that I need to set up developer access as well as consumer access to some services. Is a VPN even the right choice?


  • Snow Leopard can see Windows shares in Finder but can't connect

    - by Randy Miller
    I have an iMac with the latest version of Snow Leopard on it, plus a NAS drive and a Windows machine that both show up in the Finder's 'Shared' section. However, if I click on them, Finder says "Connection Failed". Clicking on 'Connect As...' gives an error dialog that says "The server 'blah' may not exist or it is unavailable at this time."

    Points of interest:
    - All machines are receiving their IP/DNS info from the router using DHCP.
    - I have a Mac Mini on the same network that connects to the NAS drive and Windows machine perfectly, with no configuration (i.e. it worked out of the box).
    - Both Macs are on the same version of Snow Leopard.
    - There is no password required to access the NAS share.
    - I've never set up a WINS server on any machine, and all machines use 'workgroup' by default. I've tried putting "workgroup" in the Mac's workgroup entry and have tried leaving it blank; neither solves the problem.

    Here are some things I have tried:
    - Finder - Connect To Server: smb:///share. This works, but by name it does not.
    - Terminal: mount_smbfs //@/share share. This also works by IP but not by name, resulting in "mount_smbfs: server connection failed: No route to host".
    - If I put the IP address of the NAS in the WINS server entry in the Mac's network setup, I can connect by name.

    It obviously seems to be a name-resolution error, but I can't figure out why. The only thing that has changed since it used to work is that I got a new router that now gives out DHCP addresses of 192.168.x.x (all machines are DHCP clients), whereas they used to be 10.0.x.x. I've grep'ed through everything that might have saved the old address, but can't find anything. It's also worth noting that the second Mac (the one that connects successfully) was added to the network after the router change.

    Please let me know if there are additional points of information needed to troubleshoot this further. Thanks, Randy
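    To confirm it really is NetBIOS name resolution, you can query directly from Terminal; a sketch using smbutil, which ships with OS X (the exact flags are from memory and worth checking against man smbutil):

        # resolve the NAS's NetBIOS name with the default mechanism
        smbutil lookup nas-name

        # resolve against a specific WINS server (here, the NAS itself; IP assumed)
        smbutil lookup -w 192.168.1.10 nas-name

    If the second form resolves and the first doesn't, that matches what you found with the WINS entry in Network preferences, and persisting that entry on the affected Mac is the likely fix.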


  • Trouble with Debian Lenny and Sphinx

    - by Ando
    I have a very basic understanding of Linux systems, but I have a server which was set up a while ago to host some web apps. Recently I decided to test out and implement Sphinx, but unfortunately I can't get the install to work. I'm running a Debian Lenny distro, and when I try to install Sphinx it says:

        checking MySQL include files... configure: error: missing include files.
        ******************************************************************************
        ERROR: cannot find MySQL include files.
        Check that you do have MySQL include files installed. The package name is typically 'mysql-devel'.
        If include files are installed on your system, but you are still getting this message, you should do one of the following:
        1) either specify includes location explicitly, using --with-mysql-includes;
        2) or specify MySQL installation root location explicitly, using --with-mysql;
        3) or make sure that the path to 'mysql_config' program is listed in your PATH environment variable.
        To disable MySQL support, use --without-mysql option.
        ******************************************************************************

    I do have MySQL 5.1 installed, but I can't find the include files. And one more thing: I read around the net that I probably need libmysqlclient15-dev, but when I try to install that using apt-get I receive the following error:

        The following packages were automatically installed and are no longer required:
        libxcb-aux0 libts-0.0-0 libxcb-atom1 ttf-dejavu-extra hunspell-en-us g++-4.3 libmysql++3 libnspr4-0d libdirectfb-1.0-0 libxcb-event1 libasound2 libstdc++6-4.3-dev libhunspell-1.2-0 ttf-dejavu libmozjs2d conkeror-spawn-process-helper libnss3-1d
        Use 'apt-get autoremove' to remove them.
        The following NEW packages will be installed:
        libmysqlclient15-dev
        0 upgraded, 1 newly installed, 0 to remove and 276 not upgraded.
        Need to get 7590 kB of archives.
        After this operation, 26.3 MB of additional disk space will be used.
        WARNING: The following packages cannot be authenticated!
        libmysqlclient15-dev
        Install these packages without verification [y/N]? Y
        Err http://ftp.us.debian.org/debian/ lenny/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5
        404 Not Found [IP: 35.9.37.225 80]
        Err http://security.debian.org/ lenny/updates/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5
        404 Not Found [IP: 149.20.20.6 80]
        Failed to fetch http://security.debian.org/pool/updates/main/m/mysql-dfsg-5.0/libmysqlclient15-dev_5.0.51a-24+lenny5_amd64.deb 404 Not Found [IP: 149.20.20.6 80]
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Can you help me out by suggesting how to install the required packages and run Sphinx?
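    Those 404s usually just mean a stale package index (a point release moved the files); and once Lenny dropped off the regular mirrors entirely, its packages moved to archive.debian.org. A sketch of the recovery, followed by pointing Sphinx's configure at the headers (the include/lib paths are the usual Debian locations, so verify them after installing):

        apt-get update                        # refresh the indexes first; this alone often fixes the 404s
        # if Lenny is gone from your mirror, switch /etc/apt/sources.list to the archive:
        #   deb http://archive.debian.org/debian lenny main
        apt-get install libmysqlclient15-dev

        # then tell Sphinx's configure where the MySQL headers and libs landed
        ./configure --with-mysql-includes=/usr/include/mysql \
                    --with-mysql-libs=/usr/lib/mysql

    The --with-mysql-includes and --with-mysql-libs flags are the same ones the configure error message itself suggests, so this covers both failure modes at once.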


  • Setting up a home server - what to use? (ZFS vs btrfs, BSD vs Linux, misc other requirements)

    - by monch1962
    I need to get all our home content off individual machines and onto a central server. What I'd like to have is the metaphorical "server under the stairs".

    Stuff we need:
    - Expandable storage. I want to be able to add extra discs as we go along, with minimal maintenance required. Currently we've got about 3 TB of files we need to host, and that's likely to grow by another TB every 6-12 months based on recent history, so I need to be able to add additional discs with minimal pain.
    - Storage for all the media (i.e. photos, video, music) we have, plus services to serve the various devices in the house for playback (e.g. DAAP so we can play stuff through iTunes, ccxstream so we can play stuff over XBMC). DAAP and ccxstream are needed now, but we also need to support new standards as they emerge (so a closed-box solution isn't going to work).
    - RAID 5, or something broadly equivalent (e.g. RAID-Z).
    - BitTorrent client.
    - ssh, NFS, Samba access.
    - Snapshot capability (as in ZFS), so we can snapshot individual file systems regularly and roll back when my kids delete their school assignments the day before they're due...
    - Ability to recover quickly from power outages (it's not unusual for us to have outages that last longer than our UPS's batteries).
    - FOSS software.
    - A modern distributed version control system running on the box, such as Mercurial.

    Stuff I'd like to have on the server, but can live without:
    - PVR capability, so I could record TV to the box.
    - Web server. We currently run a small web server on a very old box, and I'd ideally like to turn the old box off and move the content to the new server, just to save some electricity.
    - Nagios + mrtg.

    I've been looking at using an EEE Box as the server, primarily because I can get them cheap and they don't consume much power. The choice of OS and file system is more difficult, from what I've found:
    - I've got most experience with various Linux distros, but am happy to use another Unix.
    - FreeBSD and OpenSolaris seem to be the best choices for hosting ZFS.
    - OpenSolaris' hardware support is nowhere near as good as e.g. Ubuntu's.
    - btrfs, while looking very good, doesn't seem ready for prime time yet.
    - ZFS doesn't let you (easily?) add new discs to a RAID5 or RAID-Z.
    - Reading around, it seems that ZFS is a bit short of tools for recovering lost data.

    At the moment I'm leaning towards running FreeNAS+ZFS, but I'm concerned about the requirement to add new discs on a fairly regular basis to an existing RAID-Z. Can anyone provide some recommendations, or share experiences? Thanks in advance.
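    On the snapshot-and-rollback requirement, ZFS does make that a one-liner each way; a quick sketch with an assumed pool/dataset layout (tank/home/kids is a placeholder):

        # nightly snapshot of one child's filesystem
        zfs snapshot tank/home/kids@2010-06-01

        # list what exists, then roll back to the night before the deletion
        zfs list -t snapshot
        zfs rollback tank/home/kids@2010-06-01

    Snapshots are cheap (copy-on-write), so taking them per-dataset on a cron schedule fits the "regularly, per file system" requirement without extra tooling.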

  • How to get rid of messages addressed to not existing subdomains?

    - by user71061
    Hi! I have a small problem with my sendmail server and need your help. My situation is as follows: user mailboxes are on an MS Exchange server, and all mail to and from the outside world is relayed through my sendmail box:

        Exchange server ----- sendmail server ------ Internet

    My servers accept messages for one main domain (say, my.domain.com) and for a few other domains (let's narrow it to just one, say my_other.domain.com). After configuring sendmail with the abbreviated sendmail.mc file shown below, essentially everything works OK, but there is a small problem. I want to reject messages addressed to non-existent recipients as soon as possible (to avoid sending non-delivery reports), so my sendmail server makes LDAP queries to the Exchange server, validating every recipient address. This works well for both domains, but not for subdomains. Such subdomains do not exist, but someone (I mean those hated spammers :-)) could try addresses like user@any_host.my.domain.com or user@any_host.my_other.domain.com, and for those addresses the results are as follows:

    - Messages to user@sendmail_hostname.my.domain.com are rejected with the error "Unknown user" (due to the additional LDAPROUTE_DOMAIN line in my sendmail.mc file; this is the expected behaviour).
    - Messages to user@any_other_hostname.my.domain.com are rejected with the error "Relaying denied". It's a little strange to me that the error is different this time, but still OK: the message was rejected, and I don't care very much which error code is returned to the sender (spammer).
    - Messages to user@sendmail_hostname.my_other.domain.com and user@any_other_hostname.my_other.domain.com are rejected with the error "Unknown user", but only when there is no user@my_other.domain.com mailbox (on the Exchange server). If such a mailbox exists, then all three addresses (i.e. user@my_other.domain.com, user@sendmail_hostname.my_other.domain.com and user@any_other_hostname.my_other.domain.com) will be accepted. (Adding an additional line LDAPROUTE_DOMAIN(my_sendmail_host.my_other.domain.com) to my sendmail.mc file doesn't change anything.)

    My abbreviated sendmail.mc file is as follows (sendmail 8.14.3-5). Both domains are listed in the /etc/mail/local-host-names file (FEATURE(use_cw_file)):

        define(`_USE_ETC_MAIL_')dnl
        include(`/usr/share/sendmail/cf/m4/cf.m4')dnl
        OSTYPE(`debian')dnl
        DOMAIN(`debian-mta')dnl
        undefine(`confHOST_STATUS_DIRECTORY')dnl
        define(`confRUN_AS_USER',`smmta:smmsp')dnl
        FEATURE(`no_default_msa')dnl
        define(`confPRIVACY_FLAGS',`needmailhelo,needexpnhelo,needvrfyhelo,restrictqrun,restrictexpand,nobodyreturn,authwarnings')dnl
        FEATURE(`use_cw_file')dnl
        FEATURE(`access_db', , `skip')dnl
        FEATURE(`always_add_domain')dnl
        MASQUERADE_AS(`my.domain.com')dnl
        FEATURE(`allmasquerade')dnl
        FEATURE(`masquerade_envelope')dnl
        dnl define(`confLDAP_DEFAULT_SPEC',`-p 389 -h my_exchange_server.my.domain.com -b dc=my,dc=domain,dc=com')dnl
        dnl define(`ALIAS_FILE',`/etc/aliases,ldap:-k (&(|(objectclass=user)(objectclass=group))(proxyAddresses=smtp:%0)) -v mail')dnl
        FEATURE(`ldap_routing',, `ldap -1 -T<TMPF> -v mail -k proxyAddresses=SMTP:%0', `bounce')dnl
        LDAPROUTE_DOMAIN(`my.domain.com')dnl
        LDAPROUTE_DOMAIN(`my_other.domain.com')dnl
        LDAPROUTE_DOMAIN(`my_sendmail_host.my.domain.com')dnl
        define(`confLDAP_DEFAULT_SPEC', `-p 389 -h "my_exchange_server.my.domain.com" -d "CN=sendmail,CN=Users,DC=my,DC=domain,DC=com" -M simple -P /etc/mail/ldap-secret -b "DC=my,DC=domain,DC=com"')dnl
        FEATURE(`nouucp',`reject')dnl
        undefine(`UUCP_RELAY')dnl
        undefine(`BITNET_RELAY')dnl
        define(`confTRY_NULL_MX_LIST',true)dnl
        define(`confDONT_PROBE_INTERFACES',true)dnl
        define(`MAIL_HUB',`my_exchange_server.my.domain.com.')dnl
        FEATURE(`stickyhost')dnl
        MAILER_DEFINITIONS
        MAILER(smtp)dnl

    Could someone more experienced with sendmail advise me how to reject messages to those unwanted subdomains?

    P.S. Mailboxes @my_other.domain.com are used only for receiving messages, never for sending.
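    For pinning down exactly where each address variant is accepted, sendmail's verify and address-test modes are useful (standard sendmail flags, purely diagnostic, no mail is sent):

        # show how sendmail would verify/route each recipient form
        sendmail -bv user@my_other.domain.com
        sendmail -bv user@any_host.my_other.domain.com

        # step through rulesets 3 and 0 for the subdomain form
        sendmail -bt <<'EOF'
        3,0 user@any_host.my_other.domain.com
        EOF

    Comparing the two -bv outputs shows whether the subdomain form is being routed via MAIL_HUB/stickyhost before the LDAP routing ever gets a chance to reject it, which would explain why the behaviour depends on whether the base mailbox exists.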


  • SMTP for multiple domains on virtual interfaces

    - by Pawel Goscicki
    The setup is like this (Ubuntu 9.10):

        eth0   1.1.1.1  name.isp.com
        eth0:0 2.2.2.2  example2.com
        eth0:1 3.3.3.3  example3.com

    example2.com and example3.com are web apps which need to send emails to their users. 2.2.2.2 points to example2.com and vice versa (A/PTR); MX is Google, which handles all incoming mail. 3.3.3.3 points to example3.com and vice versa (A/PTR); MX is Google, which handles all incoming mail.

    Requirements:
    - Local delivery must be disabled (mail must be delivered to the server specified by MX), so that the following works even though there is no local user bob on the machine, only an existing bob email user:

        echo "Test" | mail -s "Test 6" [email protected]

    - I need to be able to specify from which IP/domain name the email is delivered when sending an email.

    I fought with sendmail, with not much luck. Here's some debug info:

        sendmail -d0.12 -bt < /dev/null
        Canonical name: name.isp.com
        UUCP nodename: host
        a.k.a.: example2.com
        a.k.a.: example3.com
        ...

    Sendmail always uses the canonical name (taken from eth0); I've found no way for it to select one of the a.k.a. names. It uses the canonical name for sending email:

        echo -e "To: [email protected]\nSubject: Test\nTest\n" | sendmail -bm -t -v
        [email protected]... Connecting to [127.0.0.1] via relay...
        220 name.isp.com ESMTP Sendmail 8.14.3/8.14.3/Debian-9ubuntu1; Wed, 31 Mar 2010 16:33:55 +0200; (No UCE/UBE) logging access from: localhost(OK)-localhost [127.0.0.1]
        >>> EHLO name.isp.com

    I'm OK with other SMTP solutions. I've looked briefly at nbsmtp, msmtp and nullmailer, but I'm not sure they can deal with disabling local delivery and selecting different domains when sending emails. I also know about spoofing the sender field by using mail -a "From: <[email protected]>", but that seems to be a half-solution (mails are still sent from the isp.com domain instead of the proper example2.com, so the PTR records go unused and there's more risk of being flagged as spam/spammer).
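    If you do try msmtp, the per-domain identity/IP selection would look roughly like this; a sketch only, since source_ip support depends on the msmtp version, and msmtp relays everything to one fixed host, so it covers the identity and source-address requirement but not per-recipient MX lookup (nullmailer or a full MTA would still be needed for that):

        # ~/.msmtprc - one account per sending domain (host/addresses are assumptions)
        defaults
        auth off
        port 25

        account example2
        host smtp-relay.example2.com
        from bob@example2.com
        source_ip 2.2.2.2        # bind the matching outbound address

        account example3
        host smtp-relay.example3.com
        from bob@example3.com
        source_ip 3.3.3.3

    Usage would then be msmtp -a example2 -t < message, with each web app selecting its own account.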


  • Why can I view my site over a 3G connection but not through my wifi?

    - by Jonathan
    So, I am sitting in my office with four computers on the same network and internet connection. Two of the computers can visit this particular website. Two of the computers get the message "Google Chrome could not find"; I have tried FF and IE as well, with the same problem. I can view the site 90% of the time on the two working computers, although the site seems slow and sometimes I also get the same errors as on the other two computers. I have flushed the DNS, reset the router, and tested the site on other people's computers with success. Is this likely to be a site issue, an ISP issue, or a hosting issue? Any advice is greatly appreciated.

    Here is the ping from a working machine:

        C:\Users\Jon>ping www.balihaicruises.com
        Pinging www.balihaicruises.com [208.113.173.102] with 32 bytes of data:
        Reply from 208.113.173.102: bytes=32 time=331ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=327ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=326ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=329ms TTL=47
        Ping statistics for 208.113.173.102:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
        Minimum = 326ms, Maximum = 331ms, Average = 328ms

    Traceroute:

        Tracing route to www.balihaicruises.com [208.113.173.102] over a maximum of 30 hops:
        1 1 ms 17 ms 3 ms 192.168.1.1
        2 42 ms 37 ms 36 ms 180.254.224.1
        3 39 ms 47 ms 40 ms 180.252.1.69
        4 36 ms 616 ms 57 ms 61.94.115.221
        5 84 ms 76 ms 80 ms 180.240.191.98
        6 73 ms 80 ms 72 ms 180.240.191.97
        7 157 ms 143 ms 116 ms 180.240.190.82
        8 115 ms 113 ms 120 ms ae1-123.hkg11.ip4.tinet.net [183.182.80.93]
        9 331 ms 332 ms 335 ms xe-3-2-1.was14.ip4.tinet.net [89.149.184.30]
        10 327 ms 330 ms 331 ms internap-gw.ip4.tinet.net [77.67.69.254]
        11 437 ms 415 ms 350 ms border10.pc2-bbnet2.wdc002.pnap.net [216.52.127.73]
        12 322 ms 823 ms 398 ms dreamhost-2.border10.wdc002.pnap.net [216.52.125.74]
        13 328 ms 336 ms 326 ms ip-208-113-156-4.dreamhost.com [208.113.156.4]
        14 326 ms 328 ms 336 ms ip-208-113-156-14.dreamhost.com [208.113.156.14]
        15 327 ms 331 ms 333 ms apache2-udder.crisp.dreamhost.com [208.113.173.102]

    And then for a machine that doesn't work:

        C:\Users\Microsoft>ping www.balihaicruises.com
        Ping request could not find host www.balihaicruises.com. Please check the name and try again.
        C:\Users\Microsoft>tracert www.balihaicruises.com
        Unable to resolve target system name www.balihaicruises.com.
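    "Could not find host" on the failing machines points at DNS rather than the site itself. Comparing the configured resolver against a public one narrows it down quickly (standard Windows tools, run on a failing machine):

        :: query via the DHCP-assigned resolver
        nslookup www.balihaicruises.com
        :: query via Google's public DNS instead
        nslookup www.balihaicruises.com 8.8.8.8
        :: then clear the local cache and retry the browser
        ipconfig /flushdns

    If the 8.8.8.8 lookup succeeds while the default one fails, the router's DNS (or the ISP resolver behind it) is intermittently failing for this name, which would also explain the occasional failures on the "working" machines.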


  • Updating the $PATH for running an command through SSH with LDAP user account

    - by Guillaume Bodi
    Hi all, I am setting up a Mac OS X 10.6 server to host Git repositories, so we need to push commits to the server through SSH. The server has only an admin account and takes its user list from an LDAP server. Since access comes through a non-interactive shell, Git operations cannot complete, because the Git executables are not in the default PATH. As the users are network users, they do not have a local home folder, so I cannot use a ~/.bashrc-style solution.

    I browsed several articles here and there but could not get this working in a nice, clean setup. Here is the info on the methods I've gathered so far:

    1. I could update the default PATH environment to include the Git executables folder. However, I could not manage to do it successfully: updating /etc/paths didn't change anything, and since it's not an interactive shell, /etc/profile and /etc/bashrc are ignored.
    2. From the ssh manpage, I read that a BASH_ENV variable can be set to have an optional script executed. However, I cannot figure out how to set it system-wide on the server. If it needs to be set up on the client machine, this is not an acceptable solution. If someone has info on how this is supposed to be done, please, by all means!
    3. I can fix the problem by creating a .bashrc with the PATH correction in the system root (since all network users land there, having no home folder). But it just feels wrong. Additionally, if we do create a home folder for a user, the git command would fail again.
    4. I can install a third-party application to set up hooks on login and then run a script that creates a home directory with the necessary PATH corrections. This smells like a backyard-tinkering, duct-tape solution.
    5. I can install a small script on the server and ForceCommand sshd to this script on login. The script then looks for a command to execute ($SSH_ORIGINAL_COMMAND) and triggers a login shell to run that command, or just starts a regular login shell for an interactive session. The full details of this method can be found here: http://marc.info/?l=git&m=121378876831164

    The last one is the best method I found so far. Any suggestions on how to deal with this properly?
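    For reference, the ForceCommand approach from that thread boils down to something like this; a sketch only, with the Git install path being an assumption for a typical Mac OS X layout:

        # in /etc/sshd_config:
        #   ForceCommand /usr/local/sbin/ssh-wrapper

        #!/bin/bash
        # /usr/local/sbin/ssh-wrapper - prepend Git's location, then hand off
        export PATH="/usr/local/git/bin:$PATH"
        if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
            # non-interactive: run the command the client asked for (e.g. git-receive-pack)
            exec /bin/bash -c "$SSH_ORIGINAL_COMMAND"
        else
            # interactive session: fall back to a login shell
            exec /bin/bash -l
        fi

    Because the wrapper runs for every SSH session regardless of the user's home situation, it sidesteps the network-user problem entirely, which is presumably why that thread settled on it.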


  • Sendmail SMART_HOST not working

    - by daniel
    Hello, I've defined SMART_HOST to be a specific server; let's call it foo.bar.com. However, when I send a test mail using 'sendmail -t', sendmail tries to use mx.bar.com, which subsequently rejects my mail. I've verified that foo.bar.com works and that mx.bar.com does not work (yay telnet). I've recompiled sendmail.mc via make, make -C, and m4. I've verified the DS entry in sendmail.cf. I've restarted sendmail correctly. I'm not sure how to proceed at this point. Any ideas? Here is my SMART_HOST line:

        define(`SMART_HOST',`foo.bar.com')dnl

    ...and here is the result of a test mail. It never tries to use foo.bar.com; instead it uses mx.bar.com.

        $ echo subject: test; echo | sendmail -Am -v -flocaluser -- [email protected]
        subject: test
        [email protected]... Connecting to mx.bar.com via relay...
        220 mx.bar.com ESMTP
        >>> EHLO myhost.bar.com
        250-mx.bar.com
        250-8BITMIME
        250 SIZE 52428800
        >>> MAIL From:<[email protected]> SIZE=1
        250 sender <[email protected]> ok
        >>> RCPT To:<[email protected]>
        550 #5.1.0 Address rejected.
        >>> RSET
        250 reset
        localuser... Connecting to local...
        localuser... Sent
        Closing connection to mx.bar.com.
        >>> QUIT
        221 mx.bar.com

    And last, here is a test mail sent using foo.bar.com:

        $ hostname
        myhost.bar.com
        $ telnet foo.bar.com 25
        Trying ***.***.***.***...
        Connected to foo.bar.com (***.***.***.***).
        Escape character is '^]'.
        220 foo.bar.com ESMTP Sendmail 8.14.1/8.14.1/ITS-7.0/ldap2-1+tls; Tue, 21 Dec 2010 13:27:44 -0700 (MST)
        helo foo
        250 foo.bar.com Hello myhost.bar.com [***.***.***.***], pleased to meet you
        mail from: [email protected]
        250 2.1.0 [email protected]... Sender ok
        rcpt to: [email protected]
        250 2.1.5 [email protected]... Recipient ok
        data
        354 Enter mail, end with "." on a line by itself
        testing
        .
        250 2.0.0 oBLKRikZ003758 Message accepted for delivery
        quit
        221 2.0.0 foo.bar.com closing connection
        Connection closed by foreign host.

    Any ideas? Thanks
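    A couple of checks that may narrow this down (a sketch; the address below is a stand-in). Sendmail's verify and address-test modes show what the compiled config will actually do with a recipient, without sending anything:

        # Confirm the smart host made it into the compiled config
        grep '^DS' /etc/mail/sendmail.cf

        # Verify mode: prints the mailer/host sendmail resolves for an address
        sendmail -bv someuser@bar.com

        # Interactive rule testing: run the canonify (3) and parse (0) rulesets
        sendmail -bt
        > 3,0 someuser@bar.com

    If DS shows foo.bar.com but the parse still hands bar.com recipients to mx.bar.com, something more specific than SMART_HOST is matching first, such as a mailertable entry for that domain; SMART_HOST only catches mail that has no more specific route.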


  • Connecting multiple ColdFusion 10 instances to a single Apache 2.2 server

    - by Adam Cameron
    This is on Windows 7 Home Premium edition. I have got two ColdFusion 10 (updater 2) instances: "cfusion" (the default one) and "scratch". I have got a single instance of Apache 2.2 running. Within Apache, I have set up two virtual hosts, each of which needs to be served by a different ColdFusion instance. Each of the CF instances serves files fine via Tomcat's internal web server, and Apache serves vanilla HTML files fine too. So both CF instances and both virtual hosts separately work OK.

    I can get wsconfig.exe to connect either one of the CF instances to the Apache server, and serve CF files via Apache and that instance. However, I cannot find a way of connecting the second CF instance to Apache as well, so that both CF instances are connected, each serving one of the virtual hosts. WSConfig doesn't seem to understand the notion of multiple CF instances, and the changes it makes to httpd.conf (via mod_jk.conf) do not seem to be implemented in a way that accommodates multiple CF instances talking to a single Apache instance, or multiple virtual hosts. I freely admit to not being confident enough with how mod_jk (or even really httpd.conf) works to guess whether I can change stuff to make it work.

    If I try to add the second CF instance using WSConfig, I just get the message "the web server is already configured for ColdFusion". Be that as it may... not the instance of ColdFusion I want to connect it to! If I remove the existing connector to whichever instance is already connected, I can then connect the other one, no problem. Not that this helps, but it demonstrates that either CF instance can connect to Apache. This all used to be fairly straightforward under older versions of CF and JRun :-(

    The only docs I have found are on the "Connect multiple Apache virtual hosts on a web server to a single ColdFusion server" page, but that specifically deals only with a single CF instance; there is no equivalent page for multiple CF instances. I'm kinda hoping I can move some of the mod_jk config into my virtual host entries in httpd-vhosts.conf (this is how it used to work for JRun), but I've no idea what to put where (a sketch of the stock mod_jk pattern follows below). I think I've covered all the necessary info here? If not, sing out and I'll add more. Thanks.

    PS: tried to specifically tag this as "ColdFusion-10" as the answer will be different from previous CF versions, but it won't let me cos my rep on this site is too low (odd how it doesn't consider my rep from other S/O sites...). If someone with sufficient rep can add it, that'd be cool: it's probably a valid tag to have.
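    For comparison, with stock Tomcat plus mod_jk the usual pattern is one AJP worker per instance plus per-vhost JkMount directives. Something along these lines might translate to the CF10 connector, though wsconfig ships a customized mod_jk, so directive support may differ, and the AJP ports below are pure assumptions (check each instance's server.xml):

        # workers.properties: one worker per ColdFusion instance
        worker.list=cfusion,scratch

        # AJP connector of the "cfusion" instance (port assumed)
        worker.cfusion.type=ajp13
        worker.cfusion.host=localhost
        worker.cfusion.port=8012

        # AJP connector of the "scratch" instance (port assumed)
        worker.scratch.type=ajp13
        worker.scratch.host=localhost
        worker.scratch.port=8013

    Then httpd.conf gets a global `JkWorkersFile conf/workers.properties`, and each virtual host routes to its own worker:

        # httpd-vhosts.conf: route each virtual host to its own worker
        <VirtualHost *:80>
            ServerName site-one.example
            DocumentRoot "C:/sites/one"
            JkMount /*.cfm cfusion
            JkMount /*.cfc cfusion
        </VirtualHost>

        <VirtualHost *:80>
            ServerName site-two.example
            DocumentRoot "C:/sites/two"
            JkMount /*.cfm scratch
            JkMount /*.cfc scratch
        </VirtualHost>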


  • Varnish 503 Guru Meditation errors with pfSense and healthy Apache

    - by Fammy
    We are running a pfSense firewall/load balancer with Varnish as a service, in front of Fedora Linux web servers running Apache. We are getting intermittent 503 Guru Meditation errors, and we are a bit stuck scratching our heads because the problem is not easily repeatable. The timeouts are set to 30s (connect and first byte), yet the 503 page shows instantly, not after 30s. If you refresh immediately it may very well work instantly, and sometimes for 100 refreshes. The load average on the web servers is < 1 and on the DB server < 3 (all the servers, web, DB, and pfSense/Varnish, are physical rather than VMs).

    I would have thought that if the timeouts were being hit, the 503 page would only appear after 30s; am I mistaken? Also, when an error happens there does not appear to be any corresponding error in Apache's log files. This seems to affect pages as well as images, so it is possible for the page to load fine and for 9 of 10 images on the page to be fine, but 1 not to work.

    An example of the Varnish debug output is below. It says "no backend connection", but I can't figure out why; if the load were high on Apache I could understand it being flaky. The machines are on the same gigabit Ethernet LAN.

        21 ReqStart   c *IP-REMOVED* 33418 1274368062
        21 RxRequest  c GET
        21 RxURL      c /fashion/
        21 RxProtocol c HTTP/1.1
        21 RxHeader   c User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.5) Gecko/2008121622 Fedora/3.0.5-1.fc10 Firefox/3.0.5
        21 RxHeader   c Host: *ourdomain.com*
        21 RxHeader   c Accept: */*
        21 RxHeader   c Accept-Encoding: deflate, gzip
        21 VCL_call   c recv lookup
        21 VCL_call   c hash
        21 Hash       c /fashion/
        21 Hash       c *ourdomain.com*
        21 VCL_return c hash
        21 VCL_call   c miss fetch
        21 FetchError c no backend connection
        21 VCL_call   c error restart
        21 VCL_call   c recv lookup
        21 VCL_call   c hash
        21 Hash       c /fashion/
        21 Hash       c *ourdomain.com*
        21 VCL_return c hash
        21 VCL_call   c miss fetch
        21 FetchError c no backend connection
        21 VCL_call   c error restart
        21 VCL_call   c recv lookup
        21 VCL_call   c hash
        21 Hash       c /fashion/
        21 Hash       c *ourdomain.com*
        21 VCL_return c hash
        21 VCL_call   c miss fetch
        21 FetchError c no backend connection
        21 VCL_call   c error deliver
        21 VCL_call   c deliver deliver
        21 TxProtocol c HTTP/1.1
        21 TxStatus   c 503
        21 TxResponse c Service Unavailable
        21 TxHeader   c Server: Varnish
        21 TxHeader   c Content-Type: text/html; charset=utf-8
        21 TxHeader   c Content-Length: 384
        21 TxHeader   c Accept-Ranges: bytes
        21 TxHeader   c Date: Wed, 11 Apr 2012 10:36:17 GMT
        21 TxHeader   c X-Varnish: 1274368062
        21 TxHeader   c Age: 0
        21 TxHeader   c Via: 1.1 varnish
        21 TxHeader   c Connection: close
        21 TxHeader   c X-Cache: MISS
        21 Length     c 384
        21 ReqEnd     c 1274368062 1334140577.449995041 1334140577.450334787 1.794108152 0.000282764 0.000056982
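    For what it's worth, "no backend connection" is Varnish deciding not to attempt the fetch at all, typically because a health probe has marked the backend sick or the backend is already at .max_connections; in either case it fails immediately rather than waiting out the 30s timeouts, which would explain the instant 503. A sketch of a backend definition with an explicit probe (Varnish 3.x VCL syntax; the host, port, and thresholds are assumptions), so that sick/healthy transitions show up as Backend_health lines in varnishlog:

        # Backend with a health probe and explicit connection cap
        backend web1 {
            .host = "192.168.1.10";        # assumed Apache box
            .port = "80";
            .connect_timeout = 30s;
            .first_byte_timeout = 30s;
            .max_connections = 250;        # assumed cap; saturation also fails fast
            .probe = {
                .url = "/";
                .interval = 5s;
                .timeout = 2s;
                .window = 5;               # healthy only if >= 3 of last 5 checks pass
                .threshold = 3;
            }
        }

    If the probe is flapping, the next question becomes why the pfSense box intermittently cannot reach Apache (state table limits, NIC offload issues, and similar firewall-side causes are worth ruling out).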


  • CentOS 5.8 dig is not resolving IP address to host name

    - by travisbotello
    I'm running CentOS 5.8 on a local machine at home. Today I was trying to analyze a DNS lookup via dig:

        $ dig +trace -t A www.heise.de.

    This gives me something like this as a response:

        de.    172800  IN  NS  f.nic.de.
        de.    172800  IN  NS  z.nic.de.
        de.    172800  IN  NS  s.de.net.
        de.    172800  IN  NS  n.de.net.
        de.    172800  IN  NS  a.nic.de.
        de.    172800  IN  NS  l.de.net.
        ;; Received 344 bytes from 192.58.128.30#53(192.58.128.30) in 49 ms

    In contrast, my dedicated CentOS machine returns the following:

        de.    172800  IN  NS  a.nic.de.
        de.    172800  IN  NS  n.de.net.
        de.    172800  IN  NS  f.nic.de.
        de.    172800  IN  NS  z.nic.de.
        de.    172800  IN  NS  l.de.net.
        de.    172800  IN  NS  s.de.net.
        ;; Received 344 bytes from 192.58.128.30#53(j.root-servers.net) in 32 ms

    As you can see, the last line is different. Any idea why my dedicated machine gives me the host name of the responding DNS server while my local machine only returns the IP address? Thanks in advance.

    UPDATE: The reverse DNS lookup works without any problems. Also, I just checked this on my local Mac and exactly the same problem occurs. Is it possible that this has to do with the local router/modem/ISP?
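    The name in parentheses is purely cosmetic: dig attempts a reverse lookup of the responding server's address so it can print a hostname, and whether that happens (or succeeds in time) varies with the dig/BIND version and the local resolver path. Two hedged checks to compare the machines:

        # Compare dig versions; this display behaviour differs across BIND releases
        dig -v

        # Confirm this machine can reverse-resolve the root server's address
        dig +short -x 192.58.128.30

    If the -x lookup returns j.root-servers.net on both boxes, the difference is most likely just the dig version; if it fails or stalls on the local machine, the router/modem/ISP resolver becomes the more likely suspect.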

