Search Results


  • Detecting source of memory usage on a Linux box

    - by apeace
    I have a toy Linux box with 256 MB of RAM running Ubuntu 10.04.1 LTS. Here is the output of free -m:

                     total  used  free  shared  buffers  cached
        Mem:           245   122   122       0       19      64
        -/+ buffers/cache:    38   206
        Swap:          511     0   511

    Unless I'm reading this wrong, 122 MB is being used and only 84 MB of that is disk cache. Here are all the processes I'm running, sorted by memory usage (ps -eo pmem,pcpu,rss,vsize,args | sort -k 1 -r):

        %MEM %CPU   RSS    VSZ COMMAND
        5.0  0.0  12648 633140 node /home/node/main/sites.js
        1.5  0.0   3884 251736 /usr/sbin/console-kit-daemon --no-daemon
        1.3  0.0   3328  77108 sshd: apeace [priv]
        0.9  0.0   2344  19624 -bash
        0.7  0.0   1776  23620 /sbin/init
        0.6  0.0   1624  77108 sshd: apeace@pts/0
        0.6  0.0   1544   9940 redis-server /etc/redis/redis.conf
        0.6  0.0   1524  25848 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 103:105
        0.5  0.0   1324 119880 rsyslogd -c4
        0.4  0.0   1084  49308 /usr/sbin/sshd
        0.4  0.0   1028  44376 /usr/sbin/exim4 -bd -q30m
        0.3  0.0    904   6876 ps -eo pmem,pcpu,rss,vsize,args
        0.3  0.0    888  21124 cron
        0.3  0.0    868  23472 dbus-daemon --system --fork
        0.2  0.0    732  19624 -bash
        0.2  0.0    628   6128 /sbin/getty -8 38400 tty1
        0.2  0.0    628  16952 upstart-udev-bridge --daemon
        0.2  0.0    564  16800 udevd --daemon
        0.2  0.0    552  16796 udevd --daemon
        0.2  0.0    548  16796 udevd --daemon
        0.0  0.0      0      0 [xenwatch]
        0.0  0.0      0      0 [xenbus]
        0.0  0.0      0      0 [sync_supers]
        0.0  0.0      0      0 [netns]
        0.0  0.0      0      0 [migration/3]
        0.0  0.0      0      0 [migration/2]
        0.0  0.0      0      0 [migration/1]
        0.0  0.0      0      0 [migration/0]
        0.0  0.0      0      0 [kthreadd]
        0.0  0.0      0      0 [kswapd0]
        0.0  0.0      0      0 [kstriped]
        0.0  0.0      0      0 [ksoftirqd/3]
        0.0  0.0      0      0 [ksoftirqd/2]
        0.0  0.0      0      0 [ksoftirqd/1]
        0.0  0.0      0      0 [ksoftirqd/0]
        0.0  0.0      0      0 [ksnapd]
        0.0  0.0      0      0 [kseriod]
        0.0  0.0      0      0 [kjournald]
        0.0  0.0      0      0 [khvcd]
        0.0  0.0      0      0 [khelper]
        0.0  0.0      0      0 [kblockd/3]
        0.0  0.0      0      0 [kblockd/2]
        0.0  0.0      0      0 [kblockd/1]
        0.0  0.0      0      0 [kblockd/0]
        0.0  0.0      0      0 [flush-202:1]
        0.0  0.0      0      0 [events/3]
        0.0  0.0      0      0 [events/2]
        0.0  0.0      0      0 [events/1]
        0.0  0.0      0      0 [events/0]
        0.0  0.0      0      0 [crypto/3]
        0.0  0.0      0      0 [crypto/2]
        0.0  0.0      0      0 [crypto/1]
        0.0  0.0      0      0 [crypto/0]
        0.0  0.0      0      0 [cpuset]
        0.0  0.0      0      0 [bdi-default]
        0.0  0.0      0      0 [async/mgr]
        0.0  0.0      0      0 [aio/3]
        0.0  0.0      0      0 [aio/2]
        0.0  0.0      0      0 [aio/1]
        0.0  0.0      0      0 [aio/0]

    Now, I know that ps is not the best tool for viewing process memory usage, since it tends to report more memory than is actually in use. But that means that, no matter how you look at it, all my processes combined shouldn't be using anywhere near 122 MB, even accounting for the disk cache. What's more, memory usage grows all the time: I've had to restart my server once a week, because once my 256 MB fills up it starts swapping, which it wouldn't do just for disk cache. Shouldn't there be some way for me to see the culprit?! I'm new to server administration, so if there's something obvious I'm overlooking, please point it out to me.

    Just for good measure, the output of cat /proc/meminfo:

        MemTotal:         251140 kB
        MemFree:          124604 kB
        Buffers:           20536 kB
        Cached:            66136 kB
        SwapCached:            0 kB
        Active:            65004 kB
        Inactive:          37576 kB
        Active(anon):      15932 kB
        Inactive(anon):      164 kB
        Active(file):      49072 kB
        Inactive(file):    37412 kB
        Unevictable:           0 kB
        Mlocked:               0 kB
        SwapTotal:        524284 kB
        SwapFree:         524284 kB
        Dirty:                 8 kB
        Writeback:             0 kB
        AnonPages:         15916 kB
        Mapped:            10668 kB
        Shmem:               188 kB
        Slab:              18604 kB
        SReclaimable:      10088 kB
        SUnreclaim:         8516 kB
        KernelStack:         536 kB
        PageTables:         1444 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:      649852 kB
        Committed_AS:      64224 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:         752 kB
        VmallocChunk:   34359737600 kB
        DirectMap4k:      262144 kB
        DirectMap2M:           0 kB

    EDIT: I had misinterpreted the meaning of free -m at first. Even so, the important point stands: my OS eventually begins to use swap if I don't restart the server, which disk caching alone wouldn't cause. So where do I look to see what is using all this memory?
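    A few generic places to look when the numbers don't add up; this is a rough sketch of the usual checks rather than anything specific to this box (smem is not installed by default and would come from the repositories):

        # Upper bound on process memory: total RSS (overcounts shared pages)
        ps -eo rss= | awk '{sum += $1} END {printf "total RSS: %d MB\n", sum/1024}'

        # Kernel-side consumers that ps never shows: slab caches and page tables
        grep -E 'Slab|SReclaimable|SUnreclaim|PageTables' /proc/meminfo
        sort -rn -k2 /proc/slabinfo | head    # crude view of the biggest slab caches (run as root)

        # Proportional set size per process, if smem is available (apt-get install smem)
        smem -k -r | head -20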

  • DHCP and DNS services configuration for VOIP system, windows domain, etc

    - by Stemen
    My company has numerous physical offices (for purposes of this discussion, 15 buildings). Some of them are well connected to our primary data center via fiber; others will be connected to the data center by P2P T1. We are in the beginning stages of implementing an Avaya VOIP telephone system, and we will be replacing a significant portion of our network infrastructure in the process. In tandem with the phone system implementation, we are going to re-address some of our networks and consolidate most of our Windows domains into one (not all domains, just most).

    We currently have quite a few Windows domains, and each of course has its own DNS zone. A few of those networks currently use DHCP, but the majority use static IP assignments for every device. I'm tired of managing static assignments -- I want to use DHCP for everything except servers, with DHCP reservations for printers and the like. The new IP phones will also need to get their addresses from DHCP, though they need to be in a separate VLAN from the computers/printers/etc. The computers and printers need to be registered in DNS; that is currently handled by the Windows DHCP servers on each of the respective domains.

    We need to place a priority on DHCP and DNS being available on a per-site basis (in case something interrupts the WAN connection) for computers and, primarily, phones. Smaller locations (which will have IP phones but will not be members of any Windows domain) will not have any Windows DNS/DHCP servers available. We are also looking for the easiest way to replace a part if it fails: if a server/appliance/router hosting DHCP were to crash hard, and we couldn't very quickly recover the DHCP reservations and leases (and subsequently restore them onto a cold spare), we anticipate that bad things could happen.

    What is the best way to re-implement DNS and DHCP keeping all of the above in mind? Some thoughts that have been raised (by myself or my coworkers):

    - Use Windows DNS and DHCP servers where they exist, and use IP helpers to route DHCP requests to some other Windows server if necessary. This may not be acceptable if the WAN goes down and clients don't get a DHCP response.
    - Use Windows DNS (everywhere, over the WAN in some cases) and a mix of Windows DHCP and DHCP provided by Cisco routers. Every site would be covered for DHCP, but from what I've read, Cisco routers can't handle dynamic registration of DHCP clients to Windows DNS servers, which might create a problem where Cisco routers are used for DHCP.
    - Use Windows DNS (everywhere, over the WAN in some cases) and a mix of Windows DHCP and DHCP provided by some service running on an inexpensive Linux server. Is there any such software that would allow DHCP leases granted by these Linux boxes to be dynamically registered on the Windows DNS servers?
    - Come up with a Linux solution for both DNS and DHCP, and deploy inexpensive Linux servers to every site. The requirements would be that the DNS zone be multi-master (like Windows DNS integrated with Active Directory); that DHCP be able to make dynamic DNS registrations in that zone for every lease (where a hostname is provided and registration is thus possible); and that multiple servers either be authoritative for the same DHCP scope or at least receive a real-time copy/replication/sync of the lease table, so that if one server dies we still know which MAC has which address.
    - Purchase dedicated DNS/DHCP appliances and deploy them to all sites. From what I read/see, this solves all of our technical problems. Then come the financial problems... I don't have a ton of money to spend on this.
    - Or some other solution that we've thus far overlooked and will consider upon recommendation.

    Can Cisco routers or Windows servers sync DHCP lease tables so that multiple servers can be authoritative (or active/passive, for all I care) for the same scope, in case one of the partners fails? I've read online (repeatedly) that ISC's DHCP is able to maintain the same lease table across multiple servers in order to solve this problem. Does anyone have any experience with, or advice regarding, that?
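    Since ISC dhcpd keeps coming up as the piece that can hold one lease table across two servers, here is a minimal sketch of what its failover-pair configuration looks like. Addresses, names and timers below are illustrative only, and dynamic registration into a Windows DNS zone additionally requires that zone to accept nonsecure dynamic updates (or GSS-TSIG, with extra setup):

        # /etc/dhcp/dhcpd.conf on the primary; the peer carries a mirrored
        # block declaring "secondary" instead
        failover peer "dhcp-failover" {
            primary;
            address 10.0.1.10;          # this server
            peer address 10.0.1.11;     # the partner
            port 647;
            peer port 647;
            max-response-delay 60;
            max-unacked-updates 10;
            mclt 3600;
            split 128;                  # primary-only: share the pool 50/50
        }

        ddns-update-style interim;      # register leases in DNS
        ddns-domainname "corp.example.com.";

        subnet 192.168.50.0 netmask 255.255.255.0 {
            option routers 192.168.50.1;
            pool {
                failover peer "dhcp-failover";
                range 192.168.50.100 192.168.50.200;
            }
        }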

  • Wget works, Ping doesn't

    - by derty
    There are some anomalies on a Virtuozzo-virtualized Debian 4 box (I know, I'm going to upgrade this one ASAP, but there are dependencies). We run some websites on it. A few days ago exim4 wasn't able to send mail to SOME people; I'll use live.com as the example domain. So some of those people got mail and some didn't. Some of the mails got stuck in the queue, and after 2 days they went out!! My Nagios never showed problems with the internet connection or disk space.

    I then wanted to install "dig" to look at how the box resolves DNS requests, and this Debian tells me it doesn't know dig. Long story made short: Debian is able to download sites by exact IP, or even with wget live.com, but it is not able to ping live.com. I'm 99% sure that the networking is right, and the routing too! Some examples of my testing below:

        wget live.com              (downloads the site)
        ping live.com
        ping http://www.live.com
        ping http://live.com

    each returns:

        ping: unknown host live.com

    EDIT: I now use heise.de rather than live.com, and I found out I can ping the heise.de server by using its IP address:

        myserver:~# ping 193.99.144.85
        PING 193.99.144.85 (193.99.144.85) 56(84) bytes of data.
        64 bytes from 193.99.144.85: icmp_seq=1 ttl=248 time=12.7 ms
        64 bytes from 193.99.144.85: icmp_seq=2 ttl=248 time=12.6 ms
        64 bytes from 193.99.144.85: icmp_seq=3 ttl=248 time=12.9 ms
        64 bytes from 193.99.144.85: icmp_seq=4 ttl=248 time=13.1 ms
        64 bytes from 193.99.144.85: icmp_seq=5 ttl=248 time=13.1 ms
        --- 193.99.144.85 ping statistics ---
        5 packets transmitted, 5 received, 0% packet loss, time 4001ms
        rtt min/avg/max/mdev = 12.671/12.924/13.163/0.238 ms

    EDIT 2:

        myserver:/etc/apt# dig heise.de
        ; <<>> DiG 9.3.4-P1.2 <<>> heise.de
        ;; global options: printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40551
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 3

        ;; QUESTION SECTION:
        ;heise.de.                IN  A

        ;; ANSWER SECTION:
        heise.de.         2266   IN  A   193.99.144.80

        ;; AUTHORITY SECTION:
        heise.de.         1622   IN  NS  ns.pop-hannover.de.
        heise.de.         1622   IN  NS  ns.s.plusline.de.
        heise.de.         1622   IN  NS  ns.plusline.de.
        heise.de.         1622   IN  NS  ns2.pop-hannover.net.
        heise.de.         1622   IN  NS  ns.heise.de.

        ;; ADDITIONAL SECTION:
        ns.plusline.de.        265   IN  A  212.19.48.14
        ns.pop-hannover.de.   5113   IN  A  193.98.1.200
        ns2.pop-hannover.net. 15150  IN  A  62.48.67.66

        ;; Query time: 2 msec
        ;; SERVER: 193.200.112.80#53(193.200.112.80)
        ;; WHEN: Tue Oct 9 13:03:50 2012
        ;; MSG SIZE rcvd: 216
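    When wget can resolve a name but ping cannot, the two tools are usually taking different resolution paths: ping goes through the libc NSS stack, and a broken /etc/nsswitch.conf or a missing libnss library inside the container is a classic symptom on old Debian under Virtuozzo. A diagnostic sketch, not a guaranteed fix:

        getent hosts heise.de      # exercises the same NSS path ping uses
        cat /etc/nsswitch.conf     # the "hosts:" line should read "files dns"
        cat /etc/resolv.conf       # nameservers the libc resolver actually sees
        ls -l /lib/libnss_dns*     # a missing/broken libnss_dns breaks lookups for
                                   # some binaries while others keep working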

  • How can I forward ALL traffic over a site-to-site VPN on Cisco ASA?

    - by Scott Clements
    Hi there, I currently have two Cisco ASA 5100 routers. They are at different physical sites and are configured with a site-to-site VPN, which is active and working. I can communicate with the subnets on either site from the other, and both are connected to the internet. However, I need to ensure that ALL the traffic at my remote site goes through this VPN to my site here. I know that web traffic does, as a tracert confirms it, but I need to ensure that all other network traffic is also being directed over this VPN to my network here. Here is my config for the ASA router at my remote site:

        hostname ciscoasa
        domain-name xxxxx
        enable password 78rl4MkMED8xiJ3g encrypted
        names
        !
        interface Ethernet0/0
         nameif NIACEDC
         security-level 100
         ip address x.x.x.x 255.255.255.0
        !
        interface Ethernet0/1
         description External Janet Connection
         nameif JANET
         security-level 0
         ip address x.x.x.x 255.255.255.248
        !
        interface Ethernet0/2
         shutdown
         no nameif
         security-level 100
         no ip address
        !
        interface Ethernet0/3
         shutdown
         no nameif
         security-level 100
         ip address dhcp setroute
        !
        interface Management0/0
         nameif management
         security-level 100
         ip address 192.168.100.1 255.255.255.0
         management-only
        !
        passwd 2KFQnbNIdI.2KYOU encrypted
        ftp mode passive
        clock timezone GMT/BST 0
        clock summer-time GMT/BDT recurring last Sun Mar 1:00 last Sun Oct 2:00
        dns domain-lookup NIACEDC
        dns server-group DefaultDNS
         name-server 154.32.105.18
         name-server 154.32.107.18
         domain-name XXXX
        same-security-traffic permit inter-interface
        same-security-traffic permit intra-interface
        access-list ren_access_in extended permit ip any any
        access-list ren_access_in extended permit tcp any any
        access-list ren_nat0_outbound extended permit ip 192.168.6.0 255.255.255.0 192.168.3.0 255.255.255.0
        access-list NIACEDC_nat0_outbound extended permit ip 192.168.12.0 255.255.255.0 192.168.3.0 255.255.255.0
        access-list JANET_20_cryptomap extended permit ip 192.168.12.0 255.255.255.0 192.168.3.0 255.255.255.0
        access-list NIACEDC_access_in extended permit ip any any
        access-list NIACEDC_access_in extended permit tcp any any
        access-list JANET_access_out extended permit ip any any
        access-list NIACEDC_access_out extended permit ip any any
        pager lines 24
        logging enable
        logging asdm informational
        mtu NIACEDC 1500
        mtu JANET 1500
        mtu management 1500
        icmp unreachable rate-limit 1 burst-size 1
        asdm image disk0:/asdm-522.bin
        no asdm history enable
        arp timeout 14400
        nat-control
        global (NIACEDC) 1 interface
        global (JANET) 1 interface
        nat (NIACEDC) 0 access-list NIACEDC_nat0_outbound
        nat (NIACEDC) 1 192.168.12.0 255.255.255.0
        access-group NIACEDC_access_in in interface NIACEDC
        access-group NIACEDC_access_out out interface NIACEDC
        access-group JANET_access_out out interface JANET
        route JANET 0.0.0.0 0.0.0.0 194.82.121.82 1
        route JANET 0.0.0.0 0.0.0.0 192.168.3.248 tunneled
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout uauth 0:05:00 absolute
        http server enable
        http 192.168.12.0 255.255.255.0 NIACEDC
        http 192.168.100.0 255.255.255.0 management
        http 192.168.9.0 255.255.255.0 NIACEDC
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
        crypto map JANET_map 20 match address JANET_20_cryptomap
        crypto map JANET_map 20 set pfs
        crypto map JANET_map 20 set peer X.X.X.X
        crypto map JANET_map 20 set transform-set ESP-AES-256-SHA
        crypto map JANET_map interface JANET
        crypto isakmp enable JANET
        crypto isakmp policy 10
         authentication pre-share
         encryption aes-256
         hash sha
         group 2
         lifetime 86400
        crypto isakmp policy 30
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        crypto isakmp policy 50
         authentication pre-share
         encryption aes-256
         hash sha
         group 5
         lifetime 86400
        tunnel-group X.X.X.X type ipsec-l2l
        tunnel-group X.X.X.X ipsec-attributes
         pre-shared-key *
        telnet timeout 5
        ssh timeout 5
        console timeout 0
        dhcpd address 192.168.100.2-192.168.100.254 management
        dhcpd enable management
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        policy-map type inspect dns preset_dns_map
         parameters
          message-length maximum 512
        policy-map global_policy
         class inspection_default
          inspect dns preset_dns_map
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect rsh
          inspect rtsp
          inspect esmtp
          inspect sqlnet
          inspect skinny
          inspect sunrpc
          inspect xdmcp
          inspect sip
          inspect netbios
          inspect tftp
          inspect http
        !
        service-policy global_policy global
        prompt hostname context
        no asdm history enable

    Thanks in advance, Scott
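    For reference, the usual way to force all remote-site traffic into an L2L tunnel on an ASA is to widen the crypto ACL (and the matching NAT exemption) from the single remote subnet to any destination, letting the head end hairpin internet traffic back out. A hedged sketch against this config; the head-end ASA needs the mirror-image ACL changes plus "same-security-traffic permit intra-interface" for the hairpin, and the "route JANET 0.0.0.0 0.0.0.0 192.168.3.248 tunneled" line already present handles the routing side:

        ! match ALL traffic sourced from the remote LAN, not just 192.168.3.0/24
        access-list JANET_20_cryptomap extended permit ip 192.168.12.0 255.255.255.0 any
        ! keep tunnelled traffic out of NAT
        access-list NIACEDC_nat0_outbound extended permit ip 192.168.12.0 255.255.255.0 any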

  • Exchange Server 2007 Setup

    - by AlamedaDad
    Hi, I'm working on an upgrade to Exchange 2007 and I wanted to get some advice on hardware choices. We currently have an Exchange 2003 STD server with 400 users split between 6 AD sites, housed on a single server. We need to move to a redundant, fault-tolerant system to support our users.

    I'm planning on installing 2 Dell 1950 servers with W2k8-std to act as CAS and Hub servers, with NLB to abstract the actual server name from the users. There won't be an Edge system, since we already have a Barracuda box that will handle inbound/outbound spam and virus filtering. On the back end I'm planning 2 mailbox servers, which will be Dell 2950s with 16GB RAM, 2 dual-core or quad-core CPUs, and 6 300GB SAS drives in some RAID config. These systems will be clustered using W2k8 Enterprise clustering and running CCR in Exchange. My questions are as follows:

    Is 16GB enough RAM for serving that many mailboxes along with the Windows clustering and CCR?

    I'm trying to figure out disk layouts, and I'm unsure whether to use all local disk, or some local and some SAN via an OpenFiler iSCSI server. The SAN would be a Dell 2850 with 6 300GB SCSI drives and a PERC controller to slice as I want, with 8GB RAM.

        Option 1: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - logs; 2 drives, RAID 1 - mail stores.
        Option 2: 2 drives, RAID 1 - OS and logs; 4 drives, RAID 5 - mail stores and scratch space for eseutil.
        Option 3: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - logs; 2 drives, RAID 0 - scratch space; ~300GB iSCSI volume for mail stores.
        Option 4: 2 drives, RAID 1 - OS; 4 drives, RAID 5 - scratch space; ~300GB iSCSI volume for mail stores; ~300GB iSCSI volume for logs.

    I have 2 sockets for CPUs and need to choose between dual and quad cores. The dual cores have faster clocks but less cache and, I'm thinking, an older architecture. Am I better off with more cores and cache while sacrificing clock speed?

    I am planning on adding the new E2K7 cluster to the E2K3 organization, moving each mailbox over all at once (as sketched below), and then removing the old server. This seems more complicated than simply getting rid of the 2003 server, adding the 2007 cluster, and restoring the mailboxes using PowerControls or exmerge. But the migration option lets me do this on my own schedule, whereas a cutover means it all needs to work at once. If I go with the cutover method, can I prebuild the servers and add them to the domain right after removing the 2003 server, or can't I? I think the answer is no, and migration is my only real option if I want to prebuild.

    I also need to migrate about 30GB of public folders. Is there anything special about this, other than specifying in the E2K7 install that I want older Outlook clients and PFs set up? I guess I could even keep the E2K3 server just to host the PFs?

    Lastly, if I have a mix of Outlook 2000, 2003 and 2007, what do I need to do to make sure they all have access to the GAL and OAB? At time of cutover we'll be at about 90% Outlook 2007, but we will have some older stuff around. My plan is to use Outlook Anywhere on laptops that are used outside the physical network. Are there any gotchas involved in that? I'm even thinking about using it for all Outlook clients; does anyone do that? The reason I'm considering it is that our WAN is really VPN tunnels over internet connections, so it is not a fully meshed, stable WAN.

    Thank you all very much for the assistance in advance, and I look forward to discussing these points! Regards... Michael
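    On the migration-versus-cutover point, the mailbox moves can at least be scripted from the Exchange 2007 Management Shell, which makes the "on my own schedule" option less painful. A hedged sketch with placeholder database names (Move-Mailbox is the Exchange 2007 cmdlet for this):

        # Move every mailbox from the old 2003 store to the new CCR database
        Get-Mailbox -Database "E2K3SERVER\First Storage Group\Mailbox Store" |
            Move-Mailbox -TargetDatabase "E2K7CLUSTER\SG1\MBDB1" -BadItemLimit 10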

  • Coda 2 and SCP uploading files with the wrong permission

    - by Tom Black
    Currently I have a basic Ubuntu server running a website. The website is for a few students learning HTML/PHP, and each student has their own account with a symbolic link to the shared website folder. Since the students are working on the website together, each user needs to be able to modify all the files (index.html, for example). So I created a Webdev group containing all of the students, with a default umask of 0002 set in their .bashrc (this makes newly created files 664 and directories 775). The shared folder is group-owned by Webdev with chmod g+s, so new files and folders also belong to the group.

    The problem is that the students are using an IDE (Coda 2), and when they create a new file or folder through the IDE, the file ends up with permissions of 644 on the server (not group-writable). When I make a new file by connecting with Cyberduck (an SFTP client), the permissions are 664, as they should be; so I didn't understand why Coda would be any different. After some trial and error, I believe Coda first creates the file on the local disk and then uploads it to the server. On a Mac, a newly created file is 644 by default, and when the client uploads a file that is already 644, it stays 644 on the server side (the umask is useless in this situation).

    I've also tried creating ACL permissions for that folder, but a file uploaded from my Mac via SCP doesn't pick up the default ACL permissions. Coda does have an option to change file permissions on transfer, but it applies a chmod to every file being uploaded or saved; when a student modifies a file created by someone else and tries to upload or save it, Coda attempts that chmod and fails, because the student isn't the owner of the file.

    My current solution is bindfs: I mount the shared web folder, and bindfs sets the permissions and group ownership of newly created files. However, bindfs seems a bit slow, and I'm sure there is a better solution. Even if the students ditched Coda 2 and used Mac vim with scp, newly created files on the server would behave the same way (644), since that's the Mac default.

    Other options: 1) I teach the students to use ssh/chmod alongside their IDE to fix their own file permissions after uploading. 2) I give all the students' Macs a default umask of 0002, so files upload with the right permissions. 3) I write a cron script to fix the file permissions every 5 to 15 minutes (I think this is the worst option if students are working together at the same time).

    Is there any way to make all files uploaded via SCP get default permissions of 664, even though the uploaded file arrives with a lower mode? (After hours of searching, I don't think this is possible client-side.) I guess a cron script is my best option for novice users. How do web developers work together on larger sites?

    Similar to this: http://serverfault.com/questions/283492/how-to-specify-file-permission-when-putting-a-file-using-openssh-sftp-command
    Also similar: http://serverfault.com/questions/395418/managing-linux-directory-permissions-sftp
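    If the students' clients are actually speaking SFTP (which is what Coda 2 and Cyberduck use even when a dialog says SCP), OpenSSH 5.4 or newer can force a umask on the server side regardless of the mode the client requests. A sketch for /etc/ssh/sshd_config, assuming the group is named webdev; note that a real scp transfer bypasses the SFTP subsystem, so this only covers SFTP sessions:

        Subsystem sftp internal-sftp

        Match Group webdev
            # -u overrides the client-supplied mode: uploads land as 664/775
            ForceCommand internal-sftp -u 0002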

  • mod_evasive not working properly on Ubuntu 10.04

    - by Joe Hopfgartner
    I have an Ubuntu 10.04 server where I installed mod_evasive using apt-get install libapache2-mod-evasive. I have already tried several configurations; the result stays the same. The blocking does work, but seemingly at random. I tried low limits with long blocking periods as well as short limits.

    The behaviour I expect is that I can request pages until either the page or the site limit is reached within a given interval; after that, I expect to be blocked until I have made no further requests for the length of the blocking period. However, the actual behaviour is that I can request pages and after a while I get random 403 blocks, which increase and decrease in percentage, but are very scattered. This is output from siege, so you get an idea:

        HTTP/1.1 200 0.09 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.08 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.08 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.11 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.08 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.09 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.09 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.09 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.08 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.08 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.10 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.08 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.09 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.10 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.09 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.09 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.09 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.10 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt

    The exact limits in place during this test run were:

        DOSHashTableSize 3097
        DOSPageCount 10
        DOSSiteCount 100
        DOSPageInterval 10
        DOSSiteInterval 10
        DOSBlockingPeriod 120
        DOSLogDir /var/log/mod_evasive
        DOSEmailNotify ***@gmail.com
        DOSWhitelist 127.0.0.1

    So I would expect to be blocked for at least 120 seconds after being blocked once. Any ideas about this? I also tried adding my configuration in different places (vhost, server config, directory context) and with or without an IfModule directive; that doesn't change anything.
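    One plausible explanation for the scattered 403s: with the prefork MPM, mod_evasive keeps its hash table per Apache child, so the counters only trip when enough consecutive requests happen to land on the same child. You can test the theory by pinning requests to a single child with keep-alive; a sketch using ab from apache2-utils:

        # one keep-alive connection = one child; the per-child counter trips reliably
        ab -n 50 -c 1 -k http://localhost/robots.txt

        # without -k, requests scatter across children and the blocks look random
        ab -n 50 -c 1 http://localhost/robots.txt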

  • What Apache/PHP configurations do you know and how good are they?

    - by FractalizeR
    Hello. I wanted to ask about the PHP/Apache configuration methods you know, and their pros and cons. I will start myself:

    ---------------- PHP as Apache module ----------------

    Pros: good speed, since you don't need to start an executable every time, especially in mpm-worker mode. You can also use various PHP accelerators in this mode, like APC or eAccelerator.

    Cons: if you are running Apache in mpm-worker mode, you may face stability issues, because every glitch in any PHP script destabilizes the whole thread pool of that Apache process. Also, in this mode all scripts are executed on behalf of the apache user, which is bad for security. The mpm-worker configuration requires PHP compiled in thread-safe mode; at least the CentOS and RedHat default repositories don't carry a thread-safe PHP version, so on these OSes you need to compile PHP yourself (there is a way to activate the worker MPM in Apache). The use of thread-safe PHP binaries is considered experimental and unstable, and many PHP extensions do not support thread-safe mode, or were not well tested in it.

    ---------------- PHP as CGI ----------------

    This seems to be the slowest default configuration, which seems to be a "con" in itself ;)

    ---------------- PHP as CGI via mod_suphp ----------------

    Pros: suphp lets you execute PHP scripts on behalf of the script file's owner. This way you can securely separate different sites on the same machine. suphp also allows a different php.ini file per virtual host.

    Cons: PHP in CGI mode means less performance. In this mode you can't use PHP accelerators like APC, because each time a new process is spawned to render a script, the previous process's cache is useless. (BTW, do you know a way to apply some accelerator in this config? I heard something about using shm for a PHP bytecode cache.) Also, you cannot configure PHP via .htaccess files in this mode; you will need to install PECL htscanner if you need to set per-script options via .htaccess (php_value / php_flag directives).

    ---------------- PHP as CGI via suexec ----------------

    This configuration looks the same as suphp, but I've heard it's slower and less safe. Mostly the same pros and cons apply.

    ---------------- PHP as FastCGI ----------------

    Pros: the FastCGI standard allows a single PHP process to handle several scripts before the process is killed, so you gain performance by not spinning up a new PHP process for each script. You can also use PHP accelerators in this configuration (but see the cons). FastCGI, much like suphp, also allows PHP processes to be executed on behalf of some user. mod_fcgid seems to have the most complete FCGI support and flexibility for Apache.

    Cons: using a PHP accelerator in FastCGI mode can lead to high memory consumption, because each PHP process carries its own bytecode cache (unless there is some accelerator that can keep the bytecode cache in shared memory — is there such a thing? see the wrapper sketch below). FastCGI is also a little complex to configure: you need to create various configuration files and make some configuration changes.

    It seems that FastCGI is the most stable, secure, fast and flexible PHP configuration, though a bit difficult to configure. But maybe I missed something? Comments are welcome!
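    On the accelerator-under-FastCGI question raised above: the usual trick is a wrapper script that sets PHP_FCGI_CHILDREN, so that a single php-cgi parent forks children that share one APC segment instead of every FastCGI process carrying its own cache. A minimal sketch (paths and values vary by distribution); point mod_fcgid's FcgidWrapper (or the older FCGIWrapper) directive at it:

        #!/bin/sh
        # /usr/local/bin/php-wrapper
        PHP_FCGI_CHILDREN=4          # children forked by php-cgi share one APC cache
        PHP_FCGI_MAX_REQUESTS=1000   # recycle children to contain memory leaks
        export PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS
        exec /usr/bin/php-cgi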

  • How to access remote LAN machines through an IPSEC / XL2TPD VPN (maybe iptables related)

    - by Simon
    I’m trying to set up an IPSEC / XL2TPD VPN for our office, and I’m having problems accessing the remote local machines after connecting to the VPN. I can connect, and I can browse internet sites through the VPN, but as said, I’m unable to connect to, or even ping, the local ones. My network setup is something like this:

        INTERNET --- eth0 [ ROUTER / VPN ] eth2 --- LAN

    These are some traceroutes from behind the VPN:

        traceroute to google.com (173.194.78.94), 64 hops max, 52 byte packets
         1  192.168.1.80 (192.168.1.80)  74.738 ms  71.476 ms  70.123 ms
         2  10.35.192.1 (10.35.192.1)  77.832 ms  77.578 ms  77.865 ms
         3  10.47.243.137 (10.47.243.137)  78.837 ms  85.409 ms  76.032 ms
         4  10.47.242.129 (10.47.242.129)  78.069 ms  80.054 ms  77.778 ms
         5  10.254.4.2 (10.254.4.2)  86.174 ms  10.254.4.6 (10.254.4.6)  85.687 ms  10.254.4.2 (10.254.4.2)  85.664 ms

        traceroute to 192.168.1.3 (192.168.1.3), 64 hops max, 52 byte packets
         1  * * *
         2  * traceroute: sendto: No route to host
            traceroute: wrote 192.168.1.3 52 chars, ret=-1
            * traceroute: sendto: Host is down
            traceroute: wrote 192.168.1.3 52 chars, ret=-1
            * traceroute: sendto: Host is down
         3  traceroute: wrote 192.168.1.3 52 chars, ret=-1
            * traceroute: sendto: Host is down
            traceroute: wrote 192.168.1.3 52 chars, ret=-1

    These are my iptables rules:

        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        # allow lan to router traffic
        iptables -A INPUT -s 192.168.1.0/24 -i eth2 -j ACCEPT
        # ssh
        iptables -A INPUT -p tcp --dport ssh -j ACCEPT
        # vpn
        iptables -A INPUT -p 50 -j ACCEPT
        iptables -A INPUT -p ah -j ACCEPT
        iptables -A INPUT -p udp --dport 500 -j ACCEPT
        iptables -A INPUT -p udp --dport 4500 -j ACCEPT
        iptables -A INPUT -p udp --dport 1701 -j ACCEPT
        # dns
        iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 53 -j ACCEPT
        iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 53 -j ACCEPT
        iptables -t nat -A POSTROUTING -j MASQUERADE
        # logging
        iptables -I INPUT 5 -m limit --limit 1/min -j LOG --log-prefix "iptables denied: " --log-level 7
        # block all other traffic
        iptables -A INPUT -j DROP

    And here are some firewall log lines:

        Dec 6 11:11:57 router kernel: [8725820.003323] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=192.168.1.3 LEN=60 TOS=0x00 PREC=0x00 TTL=255 ID=62174 PROTO=UDP SPT=61910 DPT=53 LEN=40
        Dec 6 11:12:29 router kernel: [8725852.035826] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=224.0.0.1 LEN=44 TOS=0x00 PREC=0x00 TTL=1 ID=15344 PROTO=UDP SPT=56329 DPT=8612 LEN=24
        Dec 6 11:12:36 router kernel: [8725859.121606] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=224.0.0.1 LEN=44 TOS=0x00 PREC=0x00 TTL=1 ID=11767 PROTO=UDP SPT=63962 DPT=8612 LEN=24
        Dec 6 11:12:44 router kernel: [8725866.203656] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=224.0.0.1 LEN=44 TOS=0x00 PREC=0x00 TTL=1 ID=11679 PROTO=UDP SPT=57101 DPT=8612 LEN=24
        Dec 6 11:12:51 router kernel: [8725873.285979] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=224.0.0.1 LEN=44 TOS=0x00 PREC=0x00 TTL=1 ID=39165 PROTO=UDP SPT=62625 DPT=8612 LEN=24

    I’m pretty sure the problem is iptables-related, but after trying a lot of different configurations, I was unable to find the right one. Any help will be greatly appreciated ;). Kind regards, Simon.

    EDIT: This is my route table:

        default        62.43.193.33.st  0.0.0.0          UG  100 0 0 eth0
        62.43.193.32   *                255.255.255.224  U   0   0 0 eth0
        192.168.1.0    *                255.255.255.0    U   0   0 0 eth2
        192.168.1.81   *                255.255.255.255  UH  0   0 0 ppp0
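    For what it's worth, nothing in the posted rules handles forwarded traffic: every rule sits in the INPUT chain, so packets arriving on ppp0 destined for LAN hosts are left to the FORWARD chain's defaults, and since the client address (192.168.1.81) lives inside the LAN subnet, the router also has to answer ARP for it on eth2. A hedged sketch of the pieces that appear to be missing:

        # allow routing between the VPN interface and the LAN
        sysctl -w net.ipv4.ip_forward=1
        iptables -A FORWARD -i ppp+ -o eth2 -j ACCEPT
        iptables -A FORWARD -i eth2 -o ppp+ -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

        # answer ARP on the LAN for the in-subnet VPN client address
        echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp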

  • Troubleshooting High Load on Plesk LAMP Dedicated Server

    - by Callmeed
    I have 2 nearly identical dedicated servers with the same provider. They also run a nearly identical software stack: RedHat 5 64-bit, Plesk, PHP, Apache, and MySQL. We use them for hosting custom sites we build. The problem is that while our 1st server has a load average (in top) of around 0.3, the 2nd server consistently has a load average of around 4.0 or higher. Basic functions in Plesk are delayed, and there is a bit of latency when executing shell commands. Any ideas why it would be so high, and why it would differ so much from our other server? Here is my current top output, sorted by %MEM. Any help is much appreciated.

        top - 21:48:04 up 100 days, 4:28, 1 user, load average: 3.74, 4.20, 4.23
        Tasks: 336 total, 1 running, 335 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.8%us, 0.4%sy, 0.0%ni, 91.3%id, 7.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 12290884k total, 11886452k used, 404432k free, 2920212k buffers
        Swap: 2096472k total, 244k used, 2096228k free, 6560692k cached

          PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+    COMMAND
        22536 apache   15  0  860m 547m 6484 S  0.0  4.6   0:10.96  httpd
        26467 apache   15  0  859m 546m 6408 S  0.0  4.5   0:07.67  httpd
         3620 apache   15  0  859m 545m 5552 S  0.0  4.5   0:06.15  httpd
         1895 apache   15  0  858m 544m 6356 S  0.0  4.5   0:08.25  httpd
        16933 apache   15  0  858m 544m 5488 S  0.0  4.5   0:01.57  httpd
         6431 apache   15  0  856m 542m 6076 S 10.6  4.5   0:05.32  httpd
        14417 apache   15  0  856m 542m 5568 S  0.0  4.5   0:03.88  httpd
        15403 apache   15  0  855m 541m 5616 S  0.0  4.5   0:03.73  httpd
        19165 apache   15  0  853m 539m 6252 S  0.0  4.5   0:12.40  httpd
        15898 apache   15  0  852m 539m 5376 S  0.0  4.5   0:02.68  httpd
        14401 apache   15  0  851m 538m 5460 S  0.0  4.5   0:02.97  httpd
        15393 apache   15  0  851m 538m 5404 S  0.0  4.5   0:03.12  httpd
        15427 apache   15  0  851m 538m 5496 S  0.0  4.5   0:02.44  httpd
        14412 apache   15  0  851m 538m 5324 S  0.0  4.5   0:02.15  httpd
        18330 apache   15  0  851m 537m 5136 S  0.0  4.5   0:01.30  httpd
        18303 apache   15  0  848m 535m 5140 S  0.0  4.5   0:00.47  httpd
        21190 apache   15  0  845m 533m 3988 S  0.0  4.4   0:00.33  httpd
        15923 root     18  0  822m 521m 9928 S  0.0  4.3  10:04.81  httpd
        22021 apache   15  0  828m 520m 4964 S  0.0  4.3   0:00.16  httpd
        22146 apache   15  0  823m 515m 3016 S  0.0  4.3   0:00.02  httpd
        22345 apache   15  0  822m 514m 2408 S  0.0  4.3   0:00.00  httpd
        14721 apache   15  0  733m 510m  488 S  0.0  4.3   0:00.00  httpd
         5094 root     15  0 1452m 122m  15m S  1.0  1.0 852:24.24  java
         4636 mysql    15  0  532m  57m 6440 S  1.0  0.5 488:05.84  mysqld
         4799 popuser  15  0  166m  53m 2368 S  0.0  0.4   0:36.64  spamd
        16761 popuser  15  0  159m  46m 2312 S  0.0  0.4   0:00.38  spamd
         4797 root     15  0  158m  45m 2448 S  0.0  0.4   0:01.27  spamd
         5074 root     34 19  255m  20m 2144 S  0.0  0.2   1:37.53  yum-updatesd
         9917 named    15  0  366m 9804 1980 S  0.0  0.1   0:10.26  named
         4332 sso      18  0  119m 8028 5212 S  0.0  0.1   0:00.06  sw-engine-cgi
         4341 sso      18  0  119m 8028 5212 S  0.0  0.1   0:00.07  sw-engine-cgi
         4350 sso      18  0  119m 8028 5212 S  0.0  0.1   0:00.09  sw-engine-cgi
         4352 sso      18  0  119m 8028 5212 S  0.0  0.1   0:00.11  sw-engine-cgi
         4376 ntp      15  0 23388 5020 3896 S  0.0  0.0   0:00.58  ntpd
         4331 sw-cp-se 15  0 61336 4572 1480 S  0.0  0.0   5:53.22  sw-cp-serverd
         4213 haldaemo 15  0 31252 4460 1684 S  0.0  0.0   0:01.52  hald
         4778 postgres 18  0  117m 4164 3484 S  0.0  0.0   0:00.11  postmaster
        18555 root     16  0 98.3m 3716 2852 S  0.0  0.0   0:00.01  sshd
         4488 sso      18  0  119m 3044  224 S  0.0  0.0   0:00.00  sw-engine-cgi
         4489 sso      18  0  119m 3044  224 S  0.0  0.0   0:00.00  sw-engine-cgi
         4492 sso      18  0  119m 3044  224 S  0.0  0.0   0:00.00  sw-engine-cgi
         4493 sso      18  0  119m 3044  224 S  0.0  0.0   0:00.00  sw-engine-cgi
         4490 sso      18  0  119m 3040  220 S  0.0  0.0   0:00.00  sw-engine-cgi
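    With 7.5% iowait, roughly 6.5 GB of page cache, and twenty-odd Apache children at around 540 MB resident each, this looks more like I/O and memory pressure than CPU. A few commands that would show where the time is going; a generic diagnostic sketch (iostat comes from the sysstat package):

        vmstat 5 5                             # watch the b column and si/so for swapping
        iostat -x 5 3                          # %util and await reveal a saturated disk
        ps -eo state,pid,cmd | awk '$1=="D"'   # tasks stuck in uninterruptible I/O wait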

  • Why is IIS Anonymous authentication being used with administrative UNC drive access?

    - by Mark Lindell
    My account is a local administrator on my machine. If I tried to browse to a non-existent drive letter on my own box using a UNC path name, such as \\mymachine\x$, my account would get locked out. I would also get the following warning (Event ID 100, type "Warning") 5 times under the "System" group in Event Viewer on my box:

        The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: Logon failure: unknown user name or bad password.

    I would also get the following warning 3 times:

        The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: The referenced account is currently locked out and may not be logged on to.

    On the domain controller, Event ID 680 of type "Failure Audit" would appear 4 times under the "Security" group in Event Viewer:

        Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
        Logon account: myaccount

    followed by Event ID 644:

        User Account Locked Out:
        Target Account Name: myaccount
        Target Account ID: OURDOMAIN\myaccount
        Caller Machine Name: MYMACHINE
        Caller User Name: STAN$
        Caller Domain: OURDOMAIN
        Caller Logon ID: (0x0,0x3E7)

    followed by another 4 errors with Event ID 680.

    Strangely, every time I tried to browse to the UNC path I would be prompted for a user name and password, the above errors would be written to the log, and my account would be locked out. When I hit "Cancel" in response to the user name/password prompt, the following message box would display:

        Windows cannot find \\mymachine\x$. Check the spelling and try again, or try searching for the item by clicking the Start button and then clicking Search.

    I checked with others in the group using XP, and they only got the above message box when browsing to a "bad" drive letter on their box; no one else was prompted for a user name/password and then locked out. So every time I tried to browse to the "bad" drive letter, behind the scenes XP was trying to log in 8 times using bad credentials (or at least a bad password, as the login name was correct), locking my account out on the 4th try. Interestingly, if I tried browsing to a "good" drive such as c$, it worked fine.

    As a test, I logged on to my box with a different login and browsed the "bad" UNC path. Strangely, my ourdomain\myaccount account was getting locked out, not the one I was logged in as! I was totally confused as to why those credentials were being passed. After much Googling, I found a reference to an IIS setting I was vaguely familiar with from the past, though I could not see how it would affect this issue. It is the IIS directory security setting "Anonymous access and authentication control", located under:

        Control Panel/Administrative Tools/Computer Management/Services and Applications/Internet Information Services/Web Sites/Default Web Site/Properties/Directory Security/Anonymous access and authentication control/Edit/Password

    I found no indication while scouring the internet that this property was related to my UNC problem. But I did notice that this property was set to my domain user name and password, and my password had aged recently without me resetting this property's password accordingly. Sure enough, keying in the new password corrected the problem: I am no longer prompted for a user name/password when browsing the UNC path, and the account lockouts have ceased. Now, a couple of questions:

    Why would an IIS setting affect browsing a UNC path on the local box?

    Why had I not encountered this problem before? My password has aged several times, and I've never hit this problem; I can't even remember the last time I updated the "Anonymous access" IIS password, it's been so long. I've run the script after a password reset before and never had my account locked out due to the UNC problem (the script accesses UNC paths as a normal part of its processing). Windows Update did install "Cumulative Security Update for Internet Explorer 7 for Windows XP (KB972260)" on my box on 7/29/2009; I wonder if that is responsible.
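    For what it's worth, the IIS 6-era anonymous-account password can also be updated from a script, which makes it easy to fold into whatever routine handles the domain password change. A hedged sketch using adsutil.vbs from the AdminScripts folder, with metabase property names as I understand them:

        cd /d %systemdrive%\Inetpub\AdminScripts
        cscript adsutil.vbs set w3svc/AnonymousUserName "ourdomain\myaccount"
        cscript adsutil.vbs set w3svc/AnonymousUserPass "NewPasswordHere"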

  • Trouble getting SSL to work with django + nginx + wsgi

    - by Kevin
    I've followed a couple of examples for Django + nginx + wsgi + ssl, but I can't get them to work: I simply get an error in my browser that it can't connect. I'm running two websites off the host. The config files are identical except for the IP addresses, server names, and directories. When neither uses SSL, they work fine. When I try to listen on 443 with one of them, I can't connect to either. My config files are below, and any suggestions would be appreciated.

        server {
            listen xxx.xxx.xxx.xxx:80;
            server_name sub.domain.com;
            access_log /home/django/logs/nginx_customerdb_http_access.log;
            error_log /home/django/logs/nginx_customerdb_http_error.log;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffers 32 4k;
            }
            location /site_media/ {
                alias /home/django/customerdb_site_media/;
            }
            location /admin-media/ {
                alias /home/django/django_admin_media/;
            }
        }

        server {
            listen xxx.xxx.xxx.xxx:443;
            server_name sub.domain.com;
            access_log /home/django/logs/nginx_customerdb_http_access.log;
            error_log /home/django/logs/nginx_customerdb_http_error.log;
            ssl on;
            ssl_certificate sub.domain.com.crt;
            ssl_certificate_key sub.domain.com.key;
            ssl_prefer_server_ciphers on;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Protocol https;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffers 32 4k;
            }
            location /site_media/ {
                alias /home/django/customerdb_site_media/;
            }
            location /admin-media/ {
                alias /home/django/django_admin_media/;
            }
        }

        <VirtualHost *:8080>
            ServerName xxx.xxx.xxx.xxx
            ServerAlias xxx.xxx.xxx.xxx
            LogLevel warn
            ErrorLog /home/django/logs/apache_customerdb_error.log
            CustomLog /home/django/logs/apache_customerdb_access.log combined
            WSGIScriptAlias / /home/django/customerdb/apache/django.wsgi
            WSGIDaemonProcess customerdb_wsgi processes=4 threads=5
            WSGIProcessGroup customerdb_wsgi
            SetEnvIf X-Forwarded-Protocol "^https$" HTTPS=on
        </VirtualHost>

    UPDATE: the existence of two sites (on separate IPs) on the host was the issue; if I delete the other site, the settings above mostly work. Doing so also brings up another problem: Chrome doesn't accept the site as secure, saying that some content is not encrypted.
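    On the mixed-content warning in the update: Chrome complains when a page served over https references assets by absolute http:// URLs, and with this layout the usual culprit is media settings that carry a scheme. A hedged sketch for settings.py, assuming the media locations from the nginx config above; scheme-relative paths inherit https automatically:

        # settings.py (older Django layout, matching the /site_media/ and
        # /admin-media/ aliases above)
        MEDIA_URL = '/site_media/'
        ADMIN_MEDIA_PREFIX = '/admin-media/'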

  • Issues configuring Exchange 2010 as well as SSL problems.

    - by Eric Smith
    Possibly relevant background info: I've recently moved up from icky shared hosting to a glorious, Remote Desktop-administrated VPS running Windows Server 2008 R2. Even though I'm only 21 now and a computer science major, I've tried to play with every Windows Server release since '03, just to learn new things. What usually happens is that inevitably I do something wrong and pretty much ruin the install. You're dealing with an amateur here :) Through the past few months of working with my new server, I've mastered DNS and IIS, got Team Foundation Server running (yay!), and can install all of the other basics like SQL Server and Active Directory.

    The problem: these last few weeks I've been trying to install Exchange Server 2010 (SP1). To make a long story short, it took me several attempts, and I even had to get my server wiped just so I could start fresh, since Exchange decided uninstalling properly was for sissies (cost me $20, bah). Today, at long last, I got Exchange mostly working. Two main problems were left, however, that leave me unsatisfied:

    1) Exchange installed itself and all of its child sites into Default Web Site. I wanted to access Exchange via mail.domain.com, but instead everything was configured at domain.com. My limited server admin knowledge was not enough to configure IIS or Exchange to move over to the website I had set up for it, appropriately titled 'mail.domain.com', which I had bound to a dedicated IP address (I was told this was necessary, but that may be wrong).

    2) I have two SSL certificates: one for my main domain and one for my mail subdomain. For whatever reason, I had issues getting Exchange to use the mail certificate, even though I had assigned the proper roles in the MMC. I did, at one point, get it to work (or mostly work, anyway; frankly, my memory of today is clouded by intense frustration). Additionally, I was confused about which type of SSL certificate request I should be using for Exchange. My SSL provider, GoDaddy, allows me to request a new certificate whenever I like, so I can use either the certificate request generated by IIS or the more complicated and specific request you can create with Exchange. Which type should I be using, the IIS or the Exchange request? If I must use the Exchange request, will that 1) cause issues when I bind that certificate to my mail.domain.com subdomain, or 2) is that an unnecessary step?

    The SSL certificate strikes back: when I thought I had the proper SSL certificate assigned, for those brief, sweet moments, Google Chrome reported the correct mail.domain.com certificate when browsing https://mail.domain.com. However, Outlook 2010 threw up an error when trying to configure my email account, claiming that the certificate didn't match the domain "mail.domain.com". Is this an issue that will be resolved by problem #2, or is it a separate one entirely?

    Apologies for the massive wall of text, but I wanted to provide as much info as I possibly could. Exchange is the last thing I'd like installed on my server, and naturally it's turning out to be the hardest. Thanks for any info at all; even a point in a vague direction would be a huge help at this point. Thanks! -Eric

    P.S.: The reason I keep ruining my install is that when I attempt to uninstall Exchange, something invariably goes wrong. The last time, the uninstaller complained that there was still a mailbox active and it couldn't proceed until I deleted it... and the only mailbox left was the built-in Administrator account, which I couldn't delete. So I attempted to manually uninstall it following several guides online, only to end up unable to launch the installer at all and needing my system wiped AGAIN, for the second time today ($40 down the drain, bah!). I do not understand why "uninstall" can't just mean "hey, you, delete everything and go away". There's not even a force-uninstall option, only a "recover system" option that fails to fix anything and leaves me unable even to use the GUI uninstaller. </rant>
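    Both halves of this are normally driven from the Exchange Management Shell rather than from IIS: the certificate request should come from New-ExchangeCertificate (so the right names end up in it), and the mail.domain.com URLs are properties on the virtual directories. A hedged sketch with placeholder names, not a full procedure:

        # Generate the CSR for GoDaddy (include the autodiscover name too)
        New-ExchangeCertificate -GenerateRequest -SubjectName "CN=mail.domain.com" `
            -DomainName mail.domain.com,autodiscover.domain.com -PrivateKeyExportable $true |
            Set-Content -Path C:\mail_domain_com.req

        # After importing the issued certificate, bind it to the web/SMTP services
        Enable-ExchangeCertificate -Thumbprint <thumbprint> -Services IIS,SMTP

        # Point OWA (and similarly ECP/OAB/EWS/ActiveSync) at the mail subdomain
        Set-OwaVirtualDirectory "SERVER\owa (Default Web Site)" `
            -ExternalUrl https://mail.domain.com/owa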

  • Backup a hosted Sharepoint

    - by David Mackintosh
    One of my customers has outsourced their SharePoint and Exchange services to a hosted services provider. I believe it is a SharePoint 2007 service. It is a shared hosting solution, so we do not have any kind of access to the server itself; we only have user-level and sharepoint-administrator-level access to the SharePoint application. They have come to the point where they would like a copy of everything that is on the SharePoint server.

    I have downloaded Office SharePoint Designer 2007, which features three (!) ways to back up a SharePoint site, none (!) of which work for me:

    - File > Export > Personal Web Package: when selecting everything, it calculates a negative size, then barfs with a 'No "content-type" in CGI environment' error.
    - File > Export > SharePoint Template: barfs with 'A World Wide Web browser, such as Windows Internet Explorer, is required to use this feature'.
    - Site > Administration > Backup Web Site: wants to create the backup .cmp file on the SharePoint server itself. I don't have access to any servers on the same network, so I can't redirect it to any form of the suggested \\server\place. It barfs with 'The Web application at $URL could not be found. [...]'. This is possibly moot anyway, because Google tells me that bad things happen when using OSD to back up sites larger than 24MB (which this site most definitely is).

    So I called the outsource provider's help desk and was told that they recommend using OSD, but that no, they don't actually provide any application support for OSD (not that I blame them for that), though they could do an stsadm.exe backup and provide us with the result, and OSD should be able to read the resulting .cmp file. Then, for authorization reasons, they had my customer call them directly (since I can't authorize such an operation), and they told him that he didn't want an stsadm.exe backup; he wanted to get into an 'explorer view' and deal with things that way (they were vague). Google hasn't been much help in figuring out what an 'explorer view' is, let alone how I bring one up.

    The end goal of this operation is a backup of the site as it exists (hopefully today, but shortly anyway), in a format that doesn't require another SharePoint server to restore to; i.e., we'd like to be able to pick individual content directly out of the backup. We are not excessively concerned with things like formatting; we just want the documents. This is a fairly complex site with multiple subsites and multiple folders per subsite, so sitting there and manually downloading each file isn't really going to happen if there is a better, easier way. So, my questions:

    - Is the stsadm.exe backup what I want? If not, what do I want?
    - If I manage to convince them that I do want the stsadm.exe backup, can I pick files out of the resulting backup file with OSD?
    - If OSD isn't going to let me extract individual files, is there a tool I can use that can?
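    If the host will run stsadm.exe on your behalf, note that the operation producing a .cmp you can pick content out of is export, not backup (backup writes an opaque .bak that needs another farm to restore into). A hedged sketch of both, with placeholder URLs:

        REM content migration package (.cmp) - importable piecemeal
        stsadm -o export -url http://host/sites/yoursite -filename site.cmp -includeusersecurity -versions 4

        REM full-fidelity site collection backup (.bak) - restoring requires SharePoint
        stsadm -o backup -url http://host/sites/yoursite -filename site.bak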

  • How to stop Apache from crashing my entire server?

    - by CyberShadow
    I maintain a Gentoo server with a few services, including Apache. It's fairly low-end (2GB of RAM and a low-end CPU with 2 cores). My problem is that, despite my best efforts, an overloaded Apache crashes the entire server. In fact, at this point I'm close to convinced that Linux is a horrible operating system that isn't worth anyone's time looking for stability under load. Things I tried:

    - Adjusting oom_adj for the root Apache process (and thus all its children). That had close to no effect: when Apache was overloaded it would bring the system to a grind, as the system paged out everything else before it got around to killing anything.
    - Turning off swap. Didn't help; the kernel would instead drop pages backed by process binaries and other files on /, causing the same effect.
    - Putting it in a memory-limited cgroup (limited to 512 MB of RAM, a quarter of the total). This "worked", at least in my own stress tests, except the server keeps crashing under load (basically stalling all other processes, inaccessible via SSH, etc.).
    - Running it with idle I/O priority. This wasn't a very good idea in the end, because it just caused the system load to climb indefinitely (into the thousands) with almost no visible effect, until something tried to access an unbuffered part of the disk, at which point the task froze. (So much for good I/O scheduling, eh?)
    - Limiting the number of concurrent connections to Apache. Setting the number too low caused websites to become unresponsive, with most slots occupied by long requests (file downloads).
    - Various Apache MPMs (prefork, event, itk), without much success.
    - Switching from prefork/event + php-cgi + suphp to itk + mod_php. This improved performance but didn't solve the actual problem.
    - Switching I/O schedulers (cfq to deadline).

    Just to stress this: I don't care if Apache itself goes down under load; I just want the rest of my system to remain stable. Of course, having Apache recover quickly after a brief period of intensive load would be great, but one step at a time. Right now I am mostly dumbfounded by how humanity, in this day and age, can design an operating system where such a seemingly simple task (don't allow one system component to crash the entire system) seems practically impossible, or at least very hard to do. Please don't suggest things like VMs or "BUY MORE RAM".

    Some more information gathered with a friend's help: the processes hang when the cgroup OOM killer is invoked. Here's the call trace:

        [<ffffffff8104b94b>] ? prepare_to_wait+0x70/0x7b
        [<ffffffff810a9c73>] mem_cgroup_handle_oom+0xdf/0x180
        [<ffffffff810a9559>] ? memcg_oom_wake_function+0x0/0x6d
        [<ffffffff810aa041>] __mem_cgroup_try_charge+0x32d/0x478
        [<ffffffff810aac67>] mem_cgroup_charge_common+0x48/0x73
        [<ffffffff81081c98>] ? __lru_cache_add+0x60/0x62
        [<ffffffff810aadc3>] mem_cgroup_newpage_charge+0x3b/0x4a
        [<ffffffff8108ec38>] handle_mm_fault+0x305/0x8cf
        [<ffffffff813c6276>] ? schedule+0x6ae/0x6fb
        [<ffffffff8101f568>] do_page_fault+0x214/0x22b
        [<ffffffff813c7e1f>] page_fault+0x1f/0x30

    At this point, the Apache memory cgroup is practically deadlocked, burning CPU in syscalls (all with the above call trace). This seems like a problem in the cgroup implementation...
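    One companion knob the cgroup attempt may have been missing, sketched below on the assumption that the v1 memory controller is mounted at /sys/fs/cgroup/memory and the kernel was booted with swap accounting enabled (swapaccount=1): capping only memory.limit_in_bytes lets the group thrash into the in-cgroup OOM path shown in the trace, whereas capping memory-plus-swap together, and verifying the group's OOM killer is enabled, tends to get a child killed instead of a stall:

        cd /sys/fs/cgroup/memory/apache
        echo 512M > memory.limit_in_bytes
        echo 512M > memory.memsw.limit_in_bytes   # RAM + swap together
        cat memory.oom_control                    # oom_kill_disable should be 0
        echo $(pidof -s apache2) > tasks          # children forked later inherit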

  • Wordpress Permissions OS X & MAMP

    - by Matt2020
    I have installed several local copies of WordPress for development purposes. After the install I can create posts, pages and edit admin options. However, as soon as I try to upload images, which would be saved in wp-content/uploads, I get an error:

        Upload Error: Unable to create directory ...../blog/wp-content/uploads/2011/05. Is its parent directory writable by the server?

    The MAMP server runs as user _www. The blog directory is owned by User1 with group User1, and _www is not in the User1 group; should it be? I do not want to chmod 777 or 765 the directories just to get it going. I googled up a couple of references. From http://codex.wordpress.org/Changing_File_Permissions, under "Permission Scheme for WordPress":

        All files should be owned by your user (ftp) account on your web server, and should be writable by that account. On shared hosts, files should never be owned by the webserver process itself (sometimes this is www, or apache, or nobody user). Any file that needs write access from WordPress should be owned or group-owned by the user account used by WordPress (which may be different than the server account). For example, you may have a user account that lets you FTP files back and forth to your server, but your server itself may run using a separate user, in a separate usergroup, such as dhapache or nobody. If WordPress is running as the FTP account, that account needs to have write access, i.e., be the owner of the files, or belong to a group that has write access. In the latter case, that would mean permissions are set more permissively than default (for example, 775 rather than 755 for folders, and 664 instead of 644).

    My user and group are User1 (which is an admin account), and running "ps aux | grep httpd" shows httpd running as _www; so I think this means WordPress is running as _www. The advice seems contradictory: "files should never be owned by the webserver process", i.e. _www, but then later it says "any file that needs write access from WordPress should be owned or group-owned by the user account used by the WordPress". Isn't that _www again?

    Another search found http://dancingengineer.com/computing/2009/07/how-to-install-wordpress-on-mac-os-x-leopard, which says:

        My preferred way to do this is to change the group of the wordpress directory and its contents to _www and give write permissions to the group. Keep the owner as your "username".

            $ cd /Users/"username"/Sites
            $ sudo chown -R username:_www wordpress_directory
            $ sudo chmod -R g+w wordpress_directory

        However, when I tried this, it did not work for automatic upgrades to newer versions of WordPress, although it worked for automatically updating the .htaccess file for pretty permalinks.

    It is not entirely clear to me what should be done. This last suggestion seems to say: change the group from User1 to _www and give the group write access, but then WordPress upgrades won't work. Is this the right solution? I would have thought there would be a clear way to set this up on OS X 10.6. It would be great if there were a plugin that could run a script for each of the main OSes that WordPress runs on.
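    For the automatic-upgrade half of the problem: WordPress falls back to asking for FTP credentials whenever it decides it cannot write files as the web server user, so after putting the tree in the _www group it is worth telling it to write directly. A sketch, with the directory path assumed:

        # group-own the tree and keep new files group-writable
        sudo chown -R username:_www ~/Sites/wordpress_directory
        sudo chmod -R g+w ~/Sites/wordpress_directory
        sudo find ~/Sites/wordpress_directory -type d -exec chmod g+s {} \;

        # then in wp-config.php, skip the FTP fallback:
        #   define('FS_METHOD', 'direct');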

  • Troubleshooting Website problems within the local network

    - by HaydnWVN
    We have an external website which opens fine on some PCs, yet seems to time out (or shows symptoms of timing out, without ever actually doing so) on others. It seems to affect only (some of) our newer HP Pro 3305 MT workstations, all of which run Win7 32-bit SP1 with all updates. Older PCs (Win7 32-bit SP1 and WinXP) are unaffected. Using Google Chrome or Firefox makes no difference, and opening the website in IE9 compatibility mode shows exactly the same symptoms.

    All PCs are on the same local network (workgroup), using the same DNS server and gateway (in-house), on the same internet connection and the same subnet. There is no proxy server, no content filtering, no load balancing, etc. The only group policy in effect (locally) is for update scheduling. The local firewalls are all the same (Kaspersky WP4), and our external-facing firewall has no IP-specific settings. I have no control over the external website, and traceroute shows the same destination on all PCs. It is a fairly popular website in our industry (horticulture), and I'm not aware of anyone else (even other sites within our sister companies) with the same problem.

    Update: I used Fiddler2 to monitor the HTTP request, and it seems it is not getting fulfilled for some reason. Request sent:

        GET http://www.rhs.org.uk/ HTTP/1.1
        Host: www.rhs.org.uk
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Log from Fiddler2 of the failing request:

        This session is not yet complete. Press F5 to refresh when session is complete for updated statistics.
        Request Count: 1
        Bytes Sent: 567 (headers:567; body:0)
        Bytes Received: 0 (headers:0; body:0)

        ACTUAL PERFORMANCE
        --------------
        ClientConnected:     17:02:33.720
        ClientBeginRequest:  17:02:39.118
        GotRequestHeaders:   17:02:39.118
        ClientDoneRequest:   17:02:39.118
        Determine Gateway:   0ms
        DNS Lookup:          0ms
        TCP/IP Connect:      46ms
        HTTPS Handshake:     0ms
        ServerConnected:     17:02:39.165
        FiddlerBeginRequest: 17:02:39.165
        ServerGotRequest:    17:02:39.165
        ServerBeginResponse: 00:00:00.000
        GotResponseHeaders:  00:00:00.000
        ServerDoneResponse:  00:00:00.000
        ClientBeginResponse: 00:00:00.000
        ClientDoneResponse:  00:00:00.000

        RESPONSE BYTES (by Content-Type)
        --------------
        ~headers~: 0

    Log of a successful request from a working PC (done this morning; excuse the timestamps differing from the above):

        Request Count: 1
        Bytes Sent: 493 (headers:493; body:0)
        Bytes Received: 20,413 (headers:525; body:19,888)

        ACTUAL PERFORMANCE
        --------------
        ClientConnected:     08:22:47.766
        ClientBeginRequest:  08:22:47.766
        GotRequestHeaders:   08:22:47.766
        ClientDoneRequest:   08:22:47.766
        Determine Gateway:   0ms
        DNS Lookup:          26ms
        TCP/IP Connect:      30ms
        HTTPS Handshake:     0ms
        ServerConnected:     08:22:47.828
        FiddlerBeginRequest: 08:22:47.828
        ServerGotRequest:    08:22:47.828
        ServerBeginResponse: 08:22:48.905
        GotResponseHeaders:  08:22:48.905
        ServerDoneResponse:  08:22:48.905
        ClientBeginResponse: 08:22:48.905
        ClientDoneResponse:  08:22:48.905
        Overall Elapsed:     00:00:01.1388020

        RESPONSE BYTES (by Content-Type)
        --------------
        text/html: 19,888
        ~headers~: 525

    So my question has evolved into: what is the difference between the two requests, and how do I determine why one PC is not getting a reply to its GET request?
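    Given that the failing trace shows the TCP connect succeeding but no response headers ever arriving, one Windows 7-specific lever worth testing on an affected HP machine is TCP receive-window auto-tuning, which some middleboxes and older network gear mishandle. A diagnostic sketch (elevated prompt; revert afterwards either way):

        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=disabled
        REM retest the site, then restore the default:
        netsh interface tcp set global autotuninglevel=normal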

    Read the article

  • nginx : backend https, proxy_pass shows ip

    - by Vulpo
    I am using nginx as a reverse proxy listening on port 80 (HTTP), with proxy_pass forwarding requests to backend HTTP and HTTPS servers. Everything works fine for my HTTP server, but when I try to reach the HTTPS server through the nginx reverse proxy, the IP of the HTTPS server is shown in the client's web browser. I want the URL of the nginx server to be shown instead of the HTTPS backend server's IP (once again, this works fine with the HTTP server but not for the HTTPS server). See this post on the forum. Here is my configuration file:

        server {
            listen 80;
            server_name domain1.com;
            access_log off;
            root /var/www;
            if ($request_method !~ ^(GET|HEAD|POST)$ ) {
                return 444;
            }
            location / {
                proxy_pass http://ipOfHttpServer:port/;
            }
        }

        server {
            listen 80;
            server_name domain2.com;
            access_log off;
            root /var/www;
            if ($request_method !~ ^(GET|HEAD|POST)$ ) {
                return 444;
            }
            location / {
                proxy_pass http://ipOfHttpsServer:port/;
                proxy_set_header X_FORWARDED_PROTO https;
                #proxy_set_header Host $http_host;
            }
        }

    When I try the "proxy_set_header Host $http_host" directive or "proxy_set_header Host $host", the web page can't be reached (page not found). But when I comment it out, the IP of the HTTPS server is shown in the browser (which is bad). Does anyone have an idea? My other config files are:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_hide_header X-Powered-By;
        proxy_intercept_errors on;
        proxy_buffering on;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m;

        user www-data;
        worker_processes 2;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            access_log /var/log/nginx/access.log;
            server_names_hash_bucket_size 64;
            sendfile off;
            tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_comp_level 5;
            gzip_http_version 1.0;
            gzip_min_length 0;
            gzip_types text/plain text/html text/css image/x-icon application/x-javascript;
            gzip_vary on;
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Thanks for your help!
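    A sketch of one common fix, under the assumption that the backend is issuing redirects to its own HTTPS address (ipOfHttpsServer:port is the placeholder from the question): proxy to the backend over HTTPS and let nginx rewrite the Location headers coming back, instead of leaving proxy_redirect off globally:

        location / {
            # Talk to the backend over TLS so its redirects stay internal.
            proxy_pass https://ipOfHttpsServer:port/;
            proxy_set_header Host $host;

            # Rewrite Location/Refresh headers the backend emits, so the
            # client only ever sees nginx's own name, never the backend IP.
            proxy_redirect https://ipOfHttpsServer:port/ http://domain2.com/;
        }

    With "proxy_redirect off" in the shared config, nginx passes the backend's Location header through untouched, which would explain the backend IP appearing in the browser; whether that is the exact trigger here depends on what the HTTPS backend actually sends.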

    Read the article

  • Looking for advice on Hyper-v storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on an SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage as a single point of failure. Ideally, that would involve real-time failover for the storage, like Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually set up a cluster at a colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.)

    The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used items for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Cluster Shared Volumes (CSV), but it seems like replication support for these is either not available or brand new in both these products. It looks like CSVs are supported in Double-Take 5.22 (see this discussion), but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do.

    I would prefer to stick to using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take, because they make the DataKeeper volume(s) available to the Failover Cluster Manager, and failover clustering is then all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover.

    Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in its examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is eliminate the storage as a single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seem to be as popular as SteelEye and Double-Take.

    I have seen a number of people suggest using StarWind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route: 1) the company I work for is exclusively a Dell shop, and Dell does not have any servers that I can pack with more than six 3.5" SATA drives; 2) in the future, it could be advantageous for us not to be locked into a particular brand or type of storage, and third-party replication software all allows replication to heterogeneous storage devices.

    I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices, or overlooking/missing something.

    Read the article

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this, as my previous question was treated as if I were "shopping or seeking product recommendations", even though I was not — BTW, they deleted my comments too, which were not offensive in nature. Anyway, I have re-phrased some parts of my question and I hope SF admins do not modify/edit this one — I will be most grateful for that. I have a lot of respect for the people who visit this site and help others!

    Just to clarify, and to go by SF rules: I am not asking someone to design this solution. I am simply seeking real-world examples, experiences, technical expert opinions/suggestions, any tips or tricks people may have, or any problems they may have faced while doing something similar with these products. I am also not asking for capacity planning for storage; we have done some research and I am seeking expert assurance/suggestions.

    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:

        1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
        1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
        1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - mgmt. app)
        1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the mgmt. application (provided no firewalls are blocking the required ports, etc. — we have Windows Firewall disabled already).

    Server hardware:

        Per SQL server: 16GB RAM + SAS disks + dual Xeon, RAID-10 for the SQL DB, or I can always mount a LUN from our existing Hitachi or EMC SAN.
        SEPM server: 16GB RAM + SAS disks + dual Xeon.
        System Recovery mgmt. server: 16GB RAM + SAS disks + dual Xeon.

    The above is the initial plan we have for 3000-4000 client (Windows) workstations. Now my questions :-)

    a) If we had these users distributed amongst two sites, with an AD DC/GC in each site, how would I restrict the SEPM and desktop mgmt. solution to only check for users in their respective site?

    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/mgmt. server is responsible for which site?

    c) We have NetBackup in our environment backing up other servers. I am planning to protect these four boxes (2 x SQL, 1 x SEPM, 1 x System Recovery mgmt. server) via NetBackup, or I can use System Recovery 2011 Server Edition on all four of them as well. (Licensing is not an issue, as we have the complete Symantec portfolio included in our license.)

    d) Now, saving desktop backups: what strategies have you implemented? Any best-practice recommendations for a large user base? I was thinking either to mount a LUN from our Hitachi SAN on the Symantec Recovery server itself, or to back up to the user's hard drive locally and then copy it over to a network location. Suggestions welcome :-)

    If you have anything to add or correct, that would be really helpful before diving into the actual implementation phase. Will be most grateful for your suggestions, recommendations and corrections on the above — many thanks!

    Read the article

  • Tomcat and ASP site under IIS6 with SSL

    - by Rafe
    I've been working on migrating our company's website from its original server to a new one and am having two different, but possibly related, problems. The box this is sitting on is a Windows Server 2003 x64 machine running IIS 6. The Tomcat version is 5.5.x, as that is the version the original deployment was built on. There are two other sites on the server, one in plain HTML and another in PHP; the one I am trying to migrate is a combination of Java and ASP (the introductory/sign-in pages being Java, as are many reports used for our clients, with the administration pages in ASP).

    First of all, I can only access the site if I enter the IP followed by :8080 (xxx.xxx.xxx.xxx:8080). The original setup had an index.html file in the root of the site with a bit of JavaScript in the header that pointed the site to 'www.mysite.com/app/public', but if I try going directly to the site without the 8080 I get a 'page not found' error, and the JavaScript redirector causes the same problem because it doesn't add the 8080 into the URL — even though on the original site the 8080 wasn't needed, so I don't understand why it would be now. The JS redirect looks like this:

        <script language="JavaScript">
        <!--
        location.href = "/app/public/"
        location.replace("/app/public/");
        //-->
        </script>

    When setting the site up, I used the command line to unbind IIS from all of the IPs on the system (there are 12 IPs on this box), because I was led to believe Tomcat wanted to use localhost, which wasn't accessible. I'm not sure if this was the right thing to do, but I'm throwing it in for the sake of completeness. And actually, at this point, trying to go to localhost from the server itself throws up a 'could not connect to localhost' error. If I go to localhost:8080, I get the Tomcat welcome page. If I go to localhost:8080/app/public, I get the intro page to our website. So I'm not sure what I'm even looking at in this case — that is, what the proper behaviour should be.

    The second part of the problem is that if I go to either the IP or localhost as above (localhost:8080/app/public) and click on our login link, it is supposed to transfer me to our login page, yet instead I receive a 'could not connect' error, and the URL has changed to localhost:8443/app/secure. From my research I see that port 8443 is Tomcat's SSL port, and server.xml alludes to it as follows:

        <Connector port="8080" maxHttpHeaderSize="8192"
                   maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                   enableLookups="false" redirectPort="8443" acceptCount="100"
                   connectionTimeout="20000" disableUploadTimeout="true" />

    I have an SSL certificate assigned to the site via IIS and was under the impression that, by default, Tomcat allowed IIS to handle secure connections, but apparently something is munged because it's not working. There is another section in server.xml that reads like this:

        <Connector port="8009" enableLookups="false"
                   redirectPort="443" protocol="AJP/1.3" />

    I'm not sure what this is for, although port 443 is the SSL port that IIS uses, so I'm confused as to what it is supposed to be doing. Another question I have is: when does the isapi_redirector actually come into play? How does it know when to try to serve pages through Tomcat and when not to? I've hunted around the 'net for an answer and have yet to find a clear dialogue on the subject. Anyone have any pointers as to where to look for a solution to all of this?
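    For what it's worth, a sketch of the usual wiring (file paths and the worker name here are assumptions, not taken from this deployment): the isapi_redirect.dll ISAPI filter in IIS consults a uriworkermap.properties file, and any request whose URL matches a pattern there is forwarded over AJP to the 8009 connector quoted above; everything else is served by IIS itself — which is also how SSL normally remains IIS's job rather than Tomcat's on port 8443.

        # workers.properties -- defines the AJP link to Tomcat
        worker.list=ajp13
        worker.ajp13.type=ajp13
        worker.ajp13.host=localhost
        worker.ajp13.port=8009

        # uriworkermap.properties -- which URLs IIS hands off to Tomcat
        /app/*=ajp13

    Under a mapping like this, browsing http://www.mysite.com/app/public on plain port 80 would reach Tomcat through IIS with no :8080 in the URL, provided IIS is bound to the site's IP and the filter is registered.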

    Read the article

  • Outlook Web Access, reverse proxy and browser

    - by M'vy
    Hi SF'ers! We recently moved an exchange server behind a reverse proxy due to the loss of a public IP. I've managed to configure the reverse proxy (httpd proxy_http). But there is a problem for the SSL configuration. When accessing the OWA interface with Firefox, all is ok and working. When accessing with MSIE or Chrome, they do not retrieve the good SSL Certificate. I think this is due to the multiples virtual host for httpd. Is there a workaround to make sure MSIE/Chrome request the certificate for the good domain name like FF does? Already tested with the SSL virtual host : SetEnvIf User-Agent ".*MSIE.*" value BrowserMSIE Header unset WWW-Authenticate Header add WWW-Authenticate "Basic realm=exchange.domain.com" A: ProxyPreserveHost On also: BrowserMatch ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 Or: SetEnvIf User-Agent ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 And lots of ProxyPassand ProxyReversePath on /exchweb /exchange /public etc... And it still don't seem to work. Any clue? Thanks. Edit 1: Precision of versions # openssl version OpenSSL 0.9.8k-fips 25 Mar 2009 /usr/sbin/httpd -v Server version: Apache/2.2.11 (Unix) Server built: Mar 17 2009 09:15:10 Browser versions : MSIE : 8.0.6001 Opera: Version 11.01 Revision 1190 Firefox: 3.6.15 Chrome: 10.0.648.151 Operating System: Windows Vista 32bits. They are all SNI compliant, I've tested them this afternoon https://sni.velox.ch/ You're right Shane Madden, I have multiple sites on the same public IP (and same port as well). The server itself is just a reverse proxy, that rewrite addresses to internal servers. The default host is a dev site, configure with the certificate that does not match the OWA (of course... would have been to easy) <VirtualHost *:443> ServerName dev2.domain.com ServerAdmin [email protected] CustomLog "| /usr/sbin/rotatelogs /var/log/httpd/access-%y%m%d.log 86400" combined ErrorLog "| /usr/sbin/rotatelogs /var/log/httpd/error-%y%m%d.log 86400" LogLevel warn RewriteEngine on SetEnvIfNoCase X-Forwarded-For .+ proxy=yes SSLEngine on SSLProtocol -all +SSLv3 +TLSv1 SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL:+SSLv3 SSLCertificateFile /etc/httpd/ssl/domain.com.crt SSLCertificateKeyFile /etc/httpd/ssl/domain.com.key RewriteCond %{HTTP_HOST} dev2\.domain\.com RewriteRule ^/(.*)$ http://dev2.domain.com/$1 [L,P] </VirtualHost> The certificate of domain is a *.domain.com The second vHost is : <VirtualHost *:443> ServerName exchange.domain2.com ServerAdmin [email protected] CustomLog "| /usr/sbin/rotatelogs /var/log/httpd/exchange/access-%y%m%d.log 86400" combined ErrorLog "| /usr/sbin/rotatelogs /var/log/httpd/exchange/error-%y%m%d.log 86400" LogLevel warn SSLEngine on SSLProxyEngine On SSLProtocol -all +SSLv3 +TLSv1 SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL:+SSLv3 SSLCertificateFile /etc/httpd/ssl/exchange.pem SSLCertificateKeyFile /etc/httpd/ssl/exchange.key RewriteEngine on SetEnvIfNoCase X-Forwarded-For .+ proxy=yes RewriteCond %{HTTP_HOST} exchange\.domain2\.com RewriteRule ^/(.*)$ https://exchange.domain2.com/$1 [L,P] </VirtualHost> and it's certificate is exchange.domain2.com only. I presume the SNI is somewhere not activated on my server. The versions of openssl and apache seams to be ok for the SNI support. The only thing I do not know is if httpd has been compile with the good options. (I assume it's a fedora packet).

    Read the article

  • Cannot connect to website - SSL handshaking fails

    - by ravenspoint
    So I cannot connect to certain websites — just a few; most are OK. The one I really care about is paypal.com. I have done the usual things. Let's see:

        Checked my /etc/hosts
        Flushed the DNS cache
        Checked the firewall
        Switched virus protection on and off
        Switched ad blocking on and off
        Pinged the sites

    Eventually, I decided to look at what curl is saying in detail:

        == Info: About to connect() to www.paypal.com port 443 (#0)
        == Info:   Trying 66.211.169.2...
        == Info: connected
        == Info: SSLv3, TLS handshake, Client hello (1):
        => Send SSL data, 110 bytes (0x6e)
        0000: 01 00 00 6a 03 01 4f 6c aa 8c 57 2b 3d 1e 74 64  ...j..Ol..W+=.td
        0010: c1 27 25 a5 3a 12 7f 3f 41 0a 17 15 2e c9 67 7c  .'%.:..?A.....g|
        0020: b3 e1 f6 9a db a9 00 00 2a 00 39 00 38 00 35 00  ........*.9.8.5.
        0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00  ......3.2./.....
        0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00  ................
        0050: 03 00 ff 01 00 00 17 00 00 00 13 00 11 00 00 0e  ................
        0060: 77 77 77 2e 70 61 79 70 61 6c 2e 63 6f 6d        www.paypal.com

        (hangs here for ever)

    This looks to me like PayPal is refusing to reply to the first SSL handshake. I don't know much about SSL, but comparing to the output from a site that works for me seems to make it obvious:

        == Info: About to connect() to www.cibc.com port 443 (#0)
        == Info:   Trying 159.231.80.200...
        == Info: connected
        == Info: SSLv3, TLS handshake, Client hello (1):
        => Send SSL data, 108 bytes (0x6c)
        0000: 01 00 00 68 03 01 4f 6c ad 6a 1f 67 d5 84 c4 4b  ...h..Ol.j.g...K
        0010: 0d 49 ae d6 b9 5b c3 63 f9 48 aa 18 da 43 d1 32  .I...[.c.H...C.2
        0020: 47 ae 17 e5 cd e9 00 00 2a 00 39 00 38 00 35 00  G.......*.9.8.5.
        0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00  ......3.2./.....
        0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00  ................
        0050: 03 00 ff 01 00 00 15 00 00 00 11 00 0f 00 00 0c  ................
        0060: 77 77 77 2e 63 69 62 63 2e 63 6f 6d              www.cibc.com
        == Info: SSLv3, TLS handshake, Server hello (2):
        <= Recv SSL data, 74 bytes (0x4a)
        0000: 02 00 00 46 03 01 00 00 58 cf 26 e2 e1 65 db 11  ...F....X.&..e..
        0010: bc 6f 26 7b 3b 6d eb 14 5f ad 47 dd 86 ea 4d a3  .o&{;m.._.G...M.
        0020: fb 9f b7 2a 54 3e 20 5f 6b 04 5a 12 38 64 5d 18  ...*T> _k.Z.8d].
        0030: 65 9e e9 cd 61 eb 91 c1 16 25 61 30 bb 08 2a 78  e...a....%a0..*x
        0040: b8 ee b8 7e f2 65 6a 00 04 00                    ...~.ej...
        == Info: SSLv3, TLS handshake, CERT (11):

    ...and so on — working nicely; eventually I get some nice HTML.

    Now I am really stuck. This has been going on for five days, so I am pretty sure that the problem is not with PayPal. But what on my system could be interfering with the SSL handshaking done by curl with this particular site? I suppose I might not be offering any certificates that PayPal accepts, but wouldn't I get a reply telling me so, or at least some error?
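    If it were me, the next test (a sketch, not a definitive diagnosis) would be whether the TLS extensions in that ClientHello are what something on the path is choking on — the hanging hello visibly carries the www.paypal.com server-name extension at its tail, while an SSLv3-only hello omits extensions entirely:

        # Plain SSLv3 hello: no TLS extensions, no SNI. If this completes
        # while the TLS1 hello below hangs, an intermediary is likely
        # dropping extension-bearing hellos for this destination.
        openssl s_client -connect www.paypal.com:443 -ssl3

        # TLS1 hello carrying the servername extension, like the curl
        # trace above:
        openssl s_client -connect www.paypal.com:443 -tls1 \
            -servername www.paypal.com

        # curl can force SSLv3 for the same comparison:
        # curl -3 -v https://www.paypal.com/

    If both hang, that points away from the handshake contents toward something lower down (e.g. larger packets to this host never arriving), which is a different line of investigation.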

    Read the article

  • Apache URL rewriting in reverse proxy

    - by Jeremy Gooch
    I'm deploying Apache in front of a Karaf-hosted application (Apache and Karaf are on separate servers). I want Apache to operate as a reverse proxy and also to hide part of the URL. The URL for the log-in page of the application, straight from the app server, is http://app-server:8181/jellyfish. Pages are served by the Jetty instance running within Karaf. Of course, this behaviour would usually be blocked by the firewall for everything except the reverse proxy server. With the firewall off, hitting this URL loads the log-in page, the browser's address bar correctly changes to http://app-server:8181/jellyfish/login?0, and everything works.

    What I want is for http://web-server (i.e. from the root) to map to Jetty on the app server with the name of the app (jellyfish) suppressed. E.g. the browser would change to show http://web-server/login?0 in the address bar, and all subsequent URLs and content would be served under the web server's domain, without the jellyfish clutter.

    I can get Apache to operate as a simple reverse proxy using the following config (snippet):

        ProxyPass /jellyfish http://app-server:8181/jellyfish
        ProxyPassReverse / http://app-server:8181/

    ...but this requires the browser's URL to contain jellyfish, and going to the root URL (http://web-server) gives a 404 Not Found. I've spent a lot of time trying to use mod_rewrite, with and without its [P] flag, to get around this, but without success. I then tried the ProxyPassMatch directive, but I can't seem to get that quite right either.

    Here's the current config, as loaded into /etc/apache2/sites-available/ on the web server. Note that there is a locally hosted images directory. I've also kept the mod_rewrite proxy exploit protection and am suppressing a couple of mod_security rules that were giving false positives.

        <VirtualHost *:80>
            ServerAdmin admin@drummer-server
            ServerName drummer-server

            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /images/ "/var/www/images/"

            RewriteEngine On
            RewriteCond %{REQUEST_URI} !^$
            RewriteCond %{REQUEST_URI} !^/
            RewriteRule .* - [R=400,L]

            ProxyPass /images !
            ProxyPassMatch ^/(.*) http://granny-server:8181/jellyfish/$1
            ProxyPassReverse / http://granny-server:8181/jellyfish
            ProxyPreserveHost On

            SecRuleRemoveById 981059 981060

            <Directory "/var/www/images">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    If I go to http://web-server, I get redirected to http://web-server/jellyfish/home, but this gives a 404 complaining about /jellyfish/jellyfish/home — NB the browser's address bar does not contain the double /jellyfish:

        HTTP ERROR 404
        Problem accessing /jellyfish/jellyfish/home. Reason: Not Found

    And if I go to http://web-server/login, I get redirected to http://web-server/jellyfish/login?0, but this gives a 404 complaining about /jellyfish/jellyfish/login:

        HTTP ERROR 404
        Problem accessing /jellyfish/jellyfish/login. Reason: Not Found

    So I'm guessing I'm somehow passing through the rules twice. I am also slightly bemused as to where the home bit of the URL comes from in the first example. Can someone point me in the right direction, please? Thanks, J.
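    A sketch of one possible cause and fix, under an assumption the question doesn't confirm (that Jetty emits absolute Location headers): with ProxyPreserveHost On, Jetty sees the frontend's Host header, so its redirects come back as http://drummer-server/jellyfish/..., which the existing ProxyPassReverse — keyed on granny-server:8181 and missing a trailing slash — never matches. Adding a reverse mapping keyed on the frontend name, with consistent trailing slashes, would let Apache strip the prefix from redirects before the browser sees it:

        ProxyPass /images !
        # Forward everything else to the app, inserting the hidden prefix.
        ProxyPassMatch ^/(.*)$ http://granny-server:8181/jellyfish/$1
        # Rewrite redirects in both forms the backend might emit. With
        # ProxyPreserveHost On, Jetty builds Location headers from the
        # *frontend* host name, so that form needs a mapping too.
        ProxyPassReverse / http://granny-server:8181/jellyfish/
        ProxyPassReverse / http://drummer-server/jellyfish/
        ProxyPreserveHost On

    ProxyPassReverse does a plain prefix match on the Location header, so a path-only redirect (Location: /jellyfish/home) would slip past both lines; if that turns out to be what Jetty sends, rewriting the header with mod_headers would be the fallback.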

    Read the article

  • Where to download replacement "Explorerframe.DLL" Files for x64 Windows 7 Pro?

    - by Ben Franchuk
    After posting this question, I did some research to reveal what the problem likely was, and found what I need to fix. Following this is the original post, then my updated question.

    A few months ago I ended up needing to change my computer's SID to fix a problem it had been having. Instead of fixing the problem, though, it screwed up my at-the-time current install of Windows, to the point of me needing to do a fresh install. As I am in possession of an OEM copy of Windows 7 Pro 64-bit, I successfully reinstalled over the dead copy with that (all the files that were on the computer previous to this Windows install were put in a Windows.old folder). Everything installed and worked absolutely fine, except for one thing.

    The problem I am experiencing is that, in some Windows Explorer windows, the explorer pane doesn't show. Instead, it simply shows a white area where the pane would be. This makes some software unusable, I recently realized; software such as Cubase, which uses just the explorer pane to select file save locations, cannot save at all, as the pane itself is... not operational. Below is a screenshot of this problem as it occurs in Cubase, and again as it shows in uTorrent's save-location selector window. The highlighted area is where the sidebar would normally be. Pardon my scribbling over some of the things in the window — I would personally rather the internet did not get a glimpse of my files.

    I have yet to find a common reason why the pane works in some applications that pull up Explorer and not in others, and I have yet to see it go away; the affected software has been affected since I reinstalled my copy of Windows. Initially, I was able to live with it, as I can type out save locations in the file name bar to navigate, but with software like Cubase, I do not have that option. Reinstalling Windows again is NOT an option.

    Here's the updated question. After originally posting, I did some research on the problem, and it turns out that it is easily fixable by replacing the file ExplorerFrame.dll, which is located in the System32 and SysWOW64 folders under the Windows folder on the C:\ drive. As I quite frequently customize my computer, this is a normal thing for me to do, and I know exactly how to safely and properly replace this file. The only problem is that I cannot for the life of me find a download of this file that actually works with my computer. I tried a couple from different sites, but they all caused Explorer to fail to restart (I was given an error when starting the application from Task Manager) and I was forced to revert to the broken DLL files.

    Since there are two separate ExplorerFrame.dll files, one for 64-bit and the other for 32-bit, I am assuming that I will need to download two separate versions to replace the corrupted ones. Where can I acquire these files? I am currently running Windows 7 Professional 64-bit.
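    Not an answer to "where to download", but worth noting before trusting DLLs from third-party sites: Windows 7 can verify and restore its own protected system files from the local component store. A sketch, run from an elevated command prompt:

        REM Verify and repair all protected system files (covers both the
        REM System32 and SysWOW64 copies of ExplorerFrame.dll):
        sfc /scannow

        REM Or check just the one file:
        sfc /scanfile=C:\Windows\System32\ExplorerFrame.dll

        REM If SFC reports files it could not fix, the details are logged here:
        findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log

    Whether SFC can repair this particular corruption is not guaranteed, but it sources replacements from Microsoft-signed copies already on the machine, which sidesteps the version-mismatch errors described above.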

    Read the article

< Previous Page | 287 288 289 290 291 292 293 294 295 296 297 298  | Next Page >