Search Results

Search found 13411 results on 537 pages for 'proxy servers'.


  • Why is Windows Task Scheduler trying to launch multiple instances?

    - by Paul H
    We have a number of Windows scheduled tasks that run on one Server 2008 web server (not R2) which is in a cluster. We recently moved from the original web server cluster to a new one (also Server 2008, not R2). The new web server running the tasks is, as far as we can tell, set up the same as the original, BUT on the new server the Task Scheduler seems to want to instantly start each task three times.

    If we set the option to queue up a new task we get:

        Event ID 324: Task Scheduler queued instance "{9a1a8411-b042-45ff-8e6b-89874df230d7}" of task "\Client Reporting" and will launch it as soon as instance "{2bcc3df6-ea3b-4453-90c2-75b8b1946388}" completes.

    If we set the option to stop the existing task we get:

        Event ID 323: Task Scheduler stopped instance "{e685a910-b32b-414e-85fd-96bbe54314a2}" of task "\Client Reporting" in order to launch new instance "{4db66265-1f51-4ede-8535-ac7c3cb5c4c1}".

    Ticked settings:

    - Allow task to be run on demand.
    - Run task as soon as possible after a scheduled start is missed.
    - Stop the task if running for longer than 1 hour.
    - If the running task does not end when requested, force it to stop.
    - Start the task only if the computer is on AC power.
    - Stop the task if the computer switches to battery power.

    Selected option: if the task is already running, stop the existing instance.

    Note: we moved the tasks from one server to another within the cluster to see if it was the Task Scheduler on the particular server we'd picked that was causing the problem. Same behaviour. Could it be something to do with the build of the new servers? We have very similar tasks set up on another server cluster that work fine without all this multiple starting. Comparing those tasks to the ones here, there does not seem to be anything obviously different in the settings available to us through the Task Scheduler options.

    Trigger: the task is scheduled to be triggered daily, once an hour, and to be stopped if it exceeds this time. Action: runs a .bat file.

    What could be causing this, and where can we look to see what logic is making the tasks start multiple times in this way?
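    For reference, one way to compare the old and new task definitions directly is to export each task's XML and diff it between servers; a quick sketch from an elevated command prompt (the task name is taken from the event log entries above, the output file name is arbitrary):

        schtasks /Query /TN "\Client Reporting" /XML > client-reporting.xml
        rem Inspect <MultipleInstancesPolicy> and the <Triggers> section -
        rem a stray extra trigger would explain a task launching three times.

    Exporting the same task on the working cluster and diffing the two files should show whether the definitions really are identical.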


  • What's the best way to do user profile/folder redirect/home directory archiving?

    - by tpederson
    My company is in dire need of a redesign around how we handle user account administration, and I've been tasked with automating the process. The end goal is to have the whole works triggered by the business, with IT only looking in when an error is reported. The interim phase is going to be semi-manual: a level 2 tech inputs the user's info and supervises the process.

    The current hurdle I'm facing is user profile archiving. Our security team requires us to archive the profile directories of any terminated user for 60 days in case the legal team requires access to their files. Our AD is as much a mess as everything else, so some users have home directories and some have profiles. Anyone who has a profile dir in AD also has a good deal of their profile redirected to our file servers over DFS.

    To complete the process manually, you find the user in AD, disable them, find their home/profile dir, go there and take ownership, create an archive folder, move all their files over, then delete the old dir. Some users have many gigs of nonsense, so this can take quite some time; even automated, the process would not be a quick one. I'm thinking I need a client-side C# GUI for the quick stuff and a server-side batch script or console app to offload this long-running process. I have a batch script that works decently using takeown and robocopy, but I wonder if a C# console app would do a better job.

    So, my question at long last is: what do you think is the best way to handle this? I can't imagine this is a unique problem - how do other admins get this done? The last place I worked was easily 10x larger than the place I'm in now; if we had been doing this manual crap there, they'd have needed a team of at least 30 full-time workers to keep up. I have decent skills in C#.net and batch scripting, but I'm a quick study and I have used most every language once or twice. Thank you for reading this, and I look forward to seeing what imaginative solutions you all can come up with.
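    For what it's worth, a minimal batch sketch of the archive step, built on the same takeown/robocopy approach mentioned above (the share paths and the %1 username argument are placeholders; adjust to your environment):

        @echo off
        rem Usage: archive-profile.cmd <username>
        set USERDIR=\\fileserver\profiles\%1
        set ARCHIVE=\\fileserver\TermArchive\%1

        rem Take ownership recursively so every file is readable
        takeown /F "%USERDIR%" /R /D Y

        rem Mirror to the archive, deleting the source as we go
        robocopy "%USERDIR%" "%ARCHIVE%" /E /MOVE /R:1 /W:1 /LOG+:archive-%1.log

    robocopy's /MOVE switch removes files and directories from the source after copying, which collapses the "move then delete" steps into one pass; a C# console app would mostly buy better error handling and reporting rather than raw speed.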


  • What is a good usage scenario for Rackspace Cloud Files CDN (powered by Akamai)? [closed]

    - by Andrew Smith
    I have just set up my website as a static page via Rackspace CDN / Akamai:

        www.example.co.uk is an alias for d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com.
        d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com is an alias for a61.rackcdn.com.
        a61.rackcdn.com is an alias for a61.rackcdn.com.mdc.edgesuite.net.
        a61.rackcdn.com.mdc.edgesuite.net is an alias for a63.dscg10.akamai.net.
        a63.dscg10.akamai.net has address 63.166.98.41
        a63.dscg10.akamai.net has address 63.166.98.40
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ecb9
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ed09

    The HTTP header:

        HTTP/1.0 200 OK
        Last-Modified: Fri, 19 Oct 2012 23:27:41 GMT
        ETag: fdf9e14b77def799e09e8ce815a521da
        X-Timestamp: 1350689261.23382
        Content-Type: text/html
        X-Trans-Id: tx457979be3bd746c2b4e5403a1189cdbc
        Cache-Control: public, max-age=900
        Expires: Sat, 27 Oct 2012 22:18:56 GMT
        Date: Sat, 27 Oct 2012 22:03:56 GMT
        Content-Length: 7124
        Connection: keep-alive

    I am wondering whether this is really the fastest solution to power the website. Investigating it through http://www.just-ping.com/, it seems that from many places the ping is very high, and on quick investigation I found that they use GeoIP to resolve addresses based on WHOIS data, which is not accurate. Because of that, from many places the ping is above 300ms (for example, a request from an ISP in Bangladesh may get routed to Bangalore and stay above 300ms for a month at a time). By contrast, just using Amazon Web Services with Route 53's Anycast DNS servers and only 4 EC2 instances, India for example is always below 100ms, while with Akamai it goes above 300ms in some cases - because Route 53 uses BGP.

    Quickly checking Akamai, it seems they are not taking feedback from the traffic: the high ping stays constant even if I keep downloading large files and videos, which is the opposite of what they say on their website. They state that they optimize performance by taking feedback from the requests, while it seems they just use GeoIP with per-city resolution (mostly big cities). Because of this, AWS with Route 53 / Anycast DNS seems much more reliable, as does EdgeCast, which also uses BGP - but I don't know what it costs to deploy a static website there. Actually, I don't know whether EdgeCast's claims hold up either, because from isolated places there are many errors - their performance may come at the cost of delivery quality, with BGP switching routes during transfers of large files.

    So I am wondering what Akamai is really good for, because they don't seem to show particular strength in any area I understand so far, except for the software-based WAF they offer on their website - and what I really care about is the core distribution. So the question is: is Akamai really good for videos? For static websites? So far I have found AWS the most usable, with the most consistent ping and stable transfers.


  • Are random packets normal?

    - by TheLQ
    About a month ago, on one of my servers, I started receiving random packets from IPs all over the world. So I did the smart thing and stopped putting off installing an IDS. This IDS is a ClearOS Gateway which comes with Snort and SnortSam. I enabled it and checked it over. There are a total of 4 ports open, two of which forward to the server I'm talking about. These ports are 3724 and 8085, so they aren't going to be easily found in a port scan. However, checking some logs on this server, I found that the attack has resumed. I found this:

        Accepting connection from '75.166.155.122'
        [Auth] got unknown packet from '75.166.155.122'
        Accepting connection from '98.164.154.93'
        [Auth] got unknown packet from '98.164.154.93'
        Ping MySQL to keep connection alive
        Accepting connection from '70.241.195.129'
        [Auth] got unknown packet from '70.241.195.129'
        Accepting connection from '67.182.229.169'
        [Auth] got unknown packet from '67.182.229.169'
        Accepting connection from '69.137.140.38'
        [Auth] got unknown packet from '69.137.140.38'
        Accepting connection from '76.31.72.55'
        [Auth] got unknown packet from '76.31.72.55'
        Accepting connection from '97.88.139.39'
        [Auth] got unknown packet from '97.88.139.39'
        Accepting connection from '173.35.62.112'
        [Auth] got unknown packet from '173.35.62.112'
        Accepting connection from '187.15.10.73'
        [Auth] got unknown packet from '187.15.10.73'
        Accepting connection from '66.66.94.124'
        [Auth] got unknown packet from '66.66.94.124'
        Accepting connection from '75.159.219.124'
        [Auth] got unknown packet from '75.159.219.124'
        Accepting connection from '99.102.100.82'
        [Auth] got unknown packet from '99.102.100.82'
        Accepting connection from '24.128.240.45'
        [Auth] got unknown packet from '24.128.240.45'
        Accepting connection from '99.231.7.39'
        [Auth] got unknown packet from '99.231.7.39'
        Accepting connection from '206.255.79.56'
        [Auth] got unknown packet from '206.255.79.56'
        Accepting connection from '68.97.106.235'
        [Auth] got unknown packet from '68.97.106.235'
        Accepting connection from '69.134.67.251'
        [Auth] got unknown packet from '69.134.67.251'
        Accepting connection from '63.228.138.186'
        [Auth] got unknown packet from '63.228.138.186'
        Accepting connection from '184.39.146.193'
        [Auth] got unknown packet from '184.39.146.193'
        Accepting connection from '69.171.161.102'
        [Auth] got unknown packet from '69.171.161.102'
        Accepting connection from '76.0.47.228'
        [Auth] got unknown packet from '76.0.47.228'
        Ping MySQL to keep connection alive
        Accepting connection from '126.112.201.14'
        [Auth] got unknown packet from '126.112.201.14'
        Ping MySQL to keep connection alive

    Now that scares me. Why isn't Snort detecting this? How were they able to find this specific port? More importantly, what would these packets normally contain? Is this something I should be worried about? How can I stop this?
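    In the meantime, one way to blunt this kind of probing at the gateway is per-source rate limiting on the two forwarded ports; a hedged iptables sketch (ClearOS wraps its own chains around these, so the chain name may need adapting):

        # Track new connections to the forwarded ports...
        iptables -A INPUT -p tcp -m multiport --dports 3724,8085 \
                 -m state --state NEW -m recent --set --name probes
        # ...and drop any source that opens more than 10 per minute
        iptables -A INPUT -p tcp -m multiport --dports 3724,8085 \
                 -m state --state NEW -m recent --update --seconds 60 \
                 --hitcount 10 --name probes -j DROP

    This doesn't answer why Snort stays silent - valid TCP connections to an open port don't match most signatures - but it caps how fast unknown clients can hammer the service.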


  • Cygwin's RSYNC for large data transfer

    - by Tim Brigham
    I'm using rsync from Cygwin to do a large-scale data transfer from an aging HP MSA 1000 to a new DAS attached to a different server. I have a daemon running on the remote server in read-only mode and a local copy writing the files to disk.

    One of my servers is an image repository with over a million files spread across about 300 directories; each file averages only a couple hundred kilobytes. More so than any other box, this one is proving problematic. The rsync process will work for a while - sometimes 20 minutes, sometimes an hour - and then it simply quits and sits idle at a given file name. I have verified that the file isn't corrupt on the remote server and that the file is successfully created on the local drive. I ran the rsync client in -vv mode, which returns nothing. I checked the logs created by the daemon. I looked at the network utilization on the interface, which is sitting idle. I looked at the AV settings to see if anything could pose a problem there. I even updated to the latest release of Cygwin. What do I need to do in order to keep this connection up?

    EDIT: The client system is using the command:

        rsync.exe server::Drives/f/Repo/ /cygdrive/T/Repo --archive -P -vv

    The server is using the command:

        rsync.exe --daemon --no-detach --config "rsyncd.conf"

    The contents of rsyncd.conf:

        use chroot = false
        strict modes = false
        hosts allow = 192.168.100.9
        log file = c:/rsyncd.log
        uid = 0
        gid = 0

        [Drives]
        path = /cygdrive
        read only = yes

    EDIT: The file server is 2003, the disk type on the array is GPT, and the size of the array is about 4 TB.

    EDIT: Stranger... It looks like the process reliably errors out at about 175,000 files. Rsync runs fine when I run the directories it has problems with one at a time.

    EDIT:

        rsync version 3.0.9 protocol version 30
        Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others.
        Web site: http://rsync.samba.org/
        Capabilities:
            64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
            no socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
            append, ACLs, xattrs, iconv, symtimes

    A similar failure occurred when going from the same set of files with Cygwin to a Linux install; it just didn't happen until several hours later than normal.
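    Until the root cause turns up, a common stopgap is to add an I/O timeout so rsync aborts instead of hanging, and rerun it until it completes; a sketch using the client command from the question (runs in Cygwin's bash):

        #!/bin/bash
        # Retry until rsync exits 0; --timeout makes a stalled connection
        # abort (exit code 30) rather than sit idle forever.
        until rsync --archive -P --timeout=300 \
              server::Drives/f/Repo/ /cygdrive/T/Repo; do
            echo "rsync exited $?, retrying in 30s..." >&2
            sleep 30
        done

    Because rsync is incremental, each retry only has to re-examine, not re-copy, the files that already made it across.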


  • Dangers of Running Computers w/o Air Conditioning

    - by Daniel Bingham
    I recently moved into an apartment without air conditioning. This is fine most of the time, as I am in upstate New York; it only ever gets above the high 70s during the hottest summer months, and when it does, I'm stubborn enough to just deal with wearing minimal clothing around the house. However, I'm worried about my computers.

    I'm a software developer and gamer, so many of my machines are very high-powered, and at least one of them is a server that must be left on 24/7 (not just a game server - it also serves multiple websites). I've never had to worry much about heat before, as I always lived in buildings with central air; the in-building temperature rarely got much above 70 F, and all the machines I built had good enough air cooling that I never saw a problem. Now the temperature in the building is pushing 100 F, and I'm worried the machines will not be able to keep themselves cool enough by simply blowing already-hot air over themselves.

    The hottest of them I've turned off; the server, however, I cannot. It's an old Dell (not a custom build) running a Pentium 4 (2.2 GHz). It has only a single hard drive and integrated video, and it's not running any processor-intensive services - just a basic LAMP stack. It used to run a MUD server, but that's off for now, so it should be idling most of the time. I haven't been able to find any built-in temperature sensors in the hardware - at least none that the programs I've found in the Debian repository can read - and it's an inherited machine for which I don't have the full specs, so I don't know the tolerances anyway.

    How worried should I be about it melting down on me? How worried should I be about the hard drive overheating or becoming corrupted? To generalize the question for other people: what are the safe temperature tolerances for most machines, how widely do they vary, and how does one determine when a machine is running too hot and needs to be shut down?
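    On Debian, two quick ways to get actual temperature readings, assuming the hardware exposes them at all (lm-sensors for the CPU/motherboard, smartmontools for the drive):

        # CPU / motherboard sensors (sensors-detect is interactive, run once)
        apt-get install lm-sensors smartmontools
        sensors-detect
        sensors

        # Most drives report temperature via SMART attribute 194
        smartctl -A /dev/sda | grep -i temperature

    Old Pentium 4-era Dells don't always expose chip sensors to Linux, but the drive's SMART temperature usually works, and the disk is the component most sensitive to sustained heat.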


  • VirtualBox - multiple guests, each with a single bridged adapter?

    - by Martin
    I am running a dedicated server (located at Hetzner, Germany) that runs VirtualBox in order to virtualize several services across multiple virtual guests. Those guests are supposed to:

    1. communicate with each other (for instance, a virtual web server has to access a virtual database server);
    2. be reachable from the dedicated server (for instance, SSH access); and
    3. access the Internet via the dedicated server (for instance, to download security updates).

    Currently, this is achieved by having a host-only adapter vboxnet0 on the dedicated server and two virtual interfaces on each guest: virtual adapter eth0 is attached to vboxnet0 (to achieve (1) and (2)), and virtual adapter eth1 is attached to VirtualBox's NAT (to achieve (3)). Via eth0, the guests have access to a DHCP and a DNS server, both running on the dedicated server (bound to vboxnet0 there), which allows me to assign custom IP addresses and names. Via eth1, VirtualBox pushes a proper route that enables each guest to access the Internet (via eth0 on the dedicated server).

    This setup with two virtual adapters frequently leads to problems and at least complicates many things. For instance, on the dedicated server there is OpenVPN, which allows access to the virtual machines via the Internet; furthermore, there is Shorewall, which controls the incoming and outgoing network traffic between the Internet, the dedicated server, and the individual virtual machines. Not to mention automatic installation of servers via PXE...

    Therefore, I would prefer to have only one single virtual adapter on each guest, used for both incoming and outgoing connections. As far as I understand, one would basically use a bridged interface for that very purpose. Now the question arises: which interface on the dedicated server would the bridge use? eth0 on the host server is not an option, as this is prohibited by the provider. A virtual interface eth0:0 would not make any sense, as a bridge always uses a physical interface (eth0 in this case). Would it be possible to create a bridged interface in each virtual machine that would "dangle in the air", without a counterpart on the dedicated server? How would I then have to set up the routing on the host server?

    Please note that the host / dedicated server has only one network adapter (eth0), which is connected to the provider's network.

    Regards, Martin
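    One pattern that avoids bridging to the provider's eth0 entirely: keep a single host-only adapter per guest and let the dedicated server route and NAT for them, which covers all three goals over one interface. A sketch of what that might look like (the 192.168.56.0/24 subnet is VirtualBox's usual host-only default and may differ here; VM names are placeholders):

        # Each guest: one adapter, attached to vboxnet0
        VBoxManage modifyvm "webserver" --nic1 hostonly --hostonlyadapter1 vboxnet0

        # Host: forward and masquerade guest traffic out through eth0
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 192.168.56.0/24 -o eth0 -j MASQUERADE

    With the guests' default route pointed at the host's vboxnet0 address (your existing DHCP server can hand that out), the NAT adapter eth1 becomes unnecessary, and OpenVPN and Shorewall only have one per-guest interface to reason about.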


  • Nginx - Redirect any Subdomain to File without Rewriting

    - by Waffle
    Recently I switched from Apache to Nginx to increase performance on a web server running Ubuntu 11.10. I have been having issues figuring out how certain things work in Nginx compared to Apache, but one issue has been stumping me and I have not been able to find the answer online.

    My problem is that I need to be able to redirect (not rewrite) any sub-domain to a file, but that file needs to receive the sub-domain part of the URL in order to do a database look-up on it. So far, I have been able to get any sub-domain to rewrite to that file, but then it loses the text of the sub-domain that I need. For example, I would like test.server.com to redirect to server.com/resolve.php but still remain test.server.com in the browser. If this is not possible, the thing I would need at the very least would be something like test.server.com going to server.com/resolve.php?=test. One of these options must be possible in Nginx.

    My config as it stands right now looks something like this:

        server {
            listen 80; ## listen for ipv4; this line is default and implied
            listen [::]:80 default ipv6only=on; ## listen for ipv6

            root /usr/share/nginx/www;
            index index.php index.html index.htm;

            # Make site accessible from http://localhost/
            server_name www.server.com server.com;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ /index.html;
            }

            location /doc {
                root /usr/share;
                autoindex on;
                allow 127.0.0.1;
            }

            location /images {
                root /usr/share;
                autoindex off;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /usr/share/nginx/www;
            #}

            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #
            #location ~ \.php$ {
            #    proxy_pass http://127.0.0.1;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            location ~ \.php$ {
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            #location ~ /\.ht {
            #    deny all;
            #}
        }

        server {
            listen 80 default;
            server_name *.server.com;
            rewrite ^ http://www.server.com/resolve.php;
        }

    As I said before, I am very new to Nginx, so I have a feeling the answer is pretty simple, but no examples online seem to deal with plain redirects without rewrites, or with rewriting while keeping the sub-domain part. Any help on what to do would be most appreciated, and if anyone has a better idea to accomplish what I need, I am also open to ideas. Thank you very much.
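    One approach that should satisfy both variants is a regular-expression server_name that captures the sub-domain into a variable; a hedged sketch (named captures need a reasonably recent PCRE - with older versions, the positional form ~^(.+)\.server\.com$ with $1 works the same way):

        server {
            listen 80;
            server_name ~^(?<sub>.+)\.server\.com$;

            root /usr/share/nginx/www;

            location / {
                # Internal rewrite: the browser stays on test.server.com,
                # but the request is handled by resolve.php
                rewrite ^ /resolve.php?sub=$sub last;
            }

            location ~ \.php$ {
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    Exact server names win over regex matches, so the existing www.server.com block keeps serving the main site; resolve.php can then read the sub-domain from $_GET['sub'] or directly from $_SERVER['HTTP_HOST'].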


  • Troubleshooting an NFS server hanging after authenticated mount request

    - by Christoph
    I need some advice on troubleshooting an NFS server problem on Scientific Linux (RHEL) 6.1. The log on the server shows that an authenticated mount request was made:

        Jan 13 16:30:02 ??? rpc.mountd[3996]: authenticated mount request from ????:784 for /shared-storage/cm/shared (/shared-storage/cm/shared)

    But after that, it does not continue, and on the client it hangs as well. The interesting thing is that I have two NFS servers which should be identical: one works perfectly, while the other exhibits the behaviour above. The problem is also not completely persistent, i.e. sometimes the mount request succeeds. I assume the problem must be related to the server rather than the client, because the same client works perfectly against the other server.

    My question is where I should look for the problem. I have already re-created the exports using exportfs -r, restarted the NFS server, and compared the rpcinfo output of both servers - no success. The problem even survives a reboot. Any other ideas are appreciated.

    In answer to Tim's question: I sporadically see the following in dmesg, but do not know whether it is related:

        e1000e 0000:0c:00.0: eth4: Detected Hardware Unit Hang:
          TDH <24>
          TDT <25>
          next_to_use <25>
          next_to_clean <24>
        buffer_info[next_to_clean]:
          time_stamp <1c3d12940>
          next_to_watch <24>
          jiffies <1c3d12940>
          next_to_watch.status <0>
        MAC Status <80383>
        PHY Status <792d>
        PHY 1000BASE-T Status <7800>
        PHY Extended Status <3000>
        PCI Status <10>

    Further edit: the message above does not occur on the machine that is working, so it probably is related.

    Again an edit: the error is not on the (software) device that is used for NFS, but on another one. The NFS mount also does not trigger the message.
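    Given that dmesg output, it may be worth ruling the NIC/driver in or out before digging deeper into NFS itself, even though the hang is logged on a different interface; a few hedged checks (interface name taken from the dmesg lines):

        # Turn up server-side NFS debugging while reproducing the hang
        rpcdebug -m nfsd -s all     # revert later with: rpcdebug -m nfsd -c all
        tail -f /var/log/messages

        # e1000e "unit hang" reports are often offload-related
        ethtool -K eth4 tso off gso off gro off

        # Compare error counters against the working server
        ethtool -S eth4 | grep -iE 'err|drop|timeout'

    If the mounts become reliable with offloads off, the NFS symptom is probably collateral damage from the driver issue rather than an NFS misconfiguration.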


  • PXE net install of CentOS with a static IP

    - by Stu2000
    I seem to be unable to perform a kickstart installation of CentOS 5.8 with a netinstall. It correctly gets into the text installer, but keeps sending out a request to the DHCP server and failing. I have tried to set the IP manually everywhere. Here is my pxelinux.cfg file:

        DEFAULT menu
        PROMPT 0
        MENU TITLE Ubuntu MAAS
        TIMEOUT 200
        TOTALTIMEOUT 6000
        ONTIMEOUT local

        LABEL centos5.8-net
            kernel /images/centos5.8-net/vmlinuz
            MENU LABEL centos5.8-net
            append initrd=/images/centos5.8-net/initrd.img ip=192.168.1.163 netmask=255.255.255.0 hostname=client101 gateway=192.168.1.1 ksdevice=eth0 dns=8.8.8.8 ks=http://192.168.1.125/cblr/svc/op/ks/profile/centos5.8-net

        MENU end

    and here is my kickstart file:

        # Kickstart file for a very basic Centos 5.8 system
        # Assigns the server ip: 192.211.48.163
        # DNS 8.8.8.8, 8.8.4.4
        # London TZ

        install
        url --url http://mirror.centos.org/centos-5/5.8/os/i386
        lang en_US.UTF-8
        keyboard us
        network --device=eth0 --bootproto=static --ip=192.168.1.163 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=8.8.8.8,8.8.4.4 --hostname=client1-server --onboot=on
        rootpw --iscrypted $1$Snrd2bB6$CuD/07AX2r/lHgVTPZyAz/
        firewall --enabled --port=22:tcp
        authconfig --enableshadow --enablemd5
        selinux --enforcing
        timezone --utc Europe/London
        bootloader --location=mbr --driveorder=xvda --append="console=xvc0"

        # The following is the partition information you requested
        # Note that any partitions you deleted are not expressed
        # here so unless you clear all partitions first, this is
        # not guaranteed to work
        part /boot --fstype ext3 --size=100 --ondisk=xvda
        part pv.2 --size=0 --grow --ondisk=xvda
        volgroup VolGroup00 --pesize=32768 pv.2
        logvol swap --fstype swap --name=LogVol01 --vgname=VolGroup00 --size=528 --grow --maxsize=1056
        logvol / --fstype ext3 --name=LogVol00 --vgname=VolGroup00 --size=1024 --grow

        %packages
        @base
        @core
        @dialup
        @editors
        @text-internet
        keyutils
        iscsi-initiator-utils
        trousers
        bridge-utils
        fipscheck
        device-mapper-multipath
        sgpio
        emacs

    Here is my dhcpd.conf:

        ddns-update-style interim;
        allow booting;
        allow bootp;
        ignore client-updates;
        set vendorclass = option vendor-class-identifier;

        subnet 192.168.1.0 netmask 255.255.255.0 {
            host tower {
                hardware ethernet 50:E5:49:18:D5:C6;
                fixed-address 192.168.1.163;
                option routers 192.168.1.1;
                option domain-name-servers 8.8.8.8,8.8.4.4;
                option subnet-mask 255.255.255.0;
                filename "/pxelinux.0";
                default-lease-time 21600;
                max-lease-time 43200;
                next-server 192.168.1.125;
            }
        }

    Is it impossible to prevent it asking for a dynamic IP before trying to install from the net? Perhaps there is an error in one of my files? My DHCP server is set to ignore client-updates, and only works with one MAC address whilst testing.
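    To narrow down which stage is still broadcasting (the PXE ROM, pxelinux, or the anaconda loader), it can help to watch the DHCP traffic on the boot server during an install attempt; a hedged sketch:

        # On the DHCP/PXE server: watch DHCP requests from the client's MAC
        tcpdump -n -e -i eth0 port 67 or port 68 | grep -i 50:e5:49

        # On the client, once the text installer is up, switch to the shell
        # on tty2 (Ctrl+Alt+F2) and check whether the static address took:
        ifconfig eth0

    If the DHCP requests appear only after pxelinux hands off to the kernel, the loader is ignoring the ip= parameters and the append line is the place to keep digging; if they appear earlier, it's the firmware/pxelinux stage, which always uses DHCP.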


  • Nginx giving a lot of 502 errors

    - by Loki
    A while ago I installed nginx and everything seemed to be working fine. Recently I found out that about 20% of the time users get 502 errors; this is also noticeable when Google tries to crawl my site in Webmaster Tools (out of roughly 10,000 posts, approx. 2,000 502 errors). At first I was thinking of disabling nginx, but I'd really like to keep using it.

    I'm running it on a server with 2 GB RAM and 4 reserved CPU cores, with WHM/cPanel installed, Mod_Ruid2 enabled, DSO as the PHP handler, and APC caching installed. Is there anything I can change in the config that can fix this?

    I have installed Nginx Admin in WHM, and here is what's in the configuration editor screen:

        user nobody;
        worker_processes 4;
        error_log /var/log/nginx/error.log info;
        worker_rlimit_nofile 20480;

        events {
            worker_connections 10240; # increase for busier servers
            use epoll; # you should use epoll here for Linux kernels 2.6.x
        }

        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks if_not_owner;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 5;
            gzip on;
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml;
            ignore_invalid_headers on;
            client_header_timeout 3m;
            client_body_timeout 3m;
            send_timeout 3m;
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 200M;
            client_body_buffer_size 128k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            include "/etc/nginx/vhosts/*";
        }

    I hope someone can help me out. Thanks in advance!
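    Since nginx here is a reverse proxy in front of Apache, a 502 means the upstream connection failed, so the Apache side deserves as much attention as this file. A hedged snippet for the proxy side (whether Nginx Admin proxies to Apache on port 8081 is an assumption - check the generated vhost files for the real proxy_pass target):

        location / {
            proxy_pass http://127.0.0.1:8081;
            proxy_connect_timeout 30s;
            proxy_read_timeout    120s;
            proxy_send_timeout    120s;
            proxy_buffer_size     64k;
            proxy_buffers         8 64k;
        }

    If Apache's MaxClients is exhausted (plausible with 2 GB RAM, DSO and APC), nginx sees refused or stalled connections and returns 502 regardless of its own settings, so Apache's error log at the same timestamps is the first thing to check.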


  • Unextending Sharepoint 2007 Web Application from a zone

    - by dunxd
    When our SharePoint was migrated from SharePoint 2003 to SharePoint 2007 (both fully paid versions), the consultants who carried it out extended each web app into two IIS sites/zones. For example, for the original web app http://intranet, both http://newintranet and http://intranet would be created for SharePoint 2007, each with its own IIS site. The idea was that during the migration period we would set up DNS to point the old URL to the SP2003 servers and the new one to SP2007; then, once the migration was complete, we would make a DNS change so SP2007 would receive the requests to the http://intranet-type URLs.

    Unfortunately, the contractors did not tidy up the application extensions and IIS sites after the migration, and for some time both URLs were in use, resulting in many document links pointing to the http://newintranet-type URLs. This means I need to maintain these URLs.

    Due to a rejig of our organisation structure, we now need to relocate some SharePoint sites, and I'd like to use the RDA Collaboration SharePoint URL Redirector feature. However, a limitation of this is that it doesn't work for web applications which have been extended into multiple zones. So I need to tidy up the situation our consultants left behind.

    I think the right thing to do is use the "Remove SharePoint from IIS Web Site" page in Central Admin to remove the zone for the newintranet-type sites, and select the option to also delete the IIS site. That should result in having no IIS sites listening for http://newintranet-type URLs. Is this the right procedure?

    Once I have done that, I need to set up SharePoint to receive requests sent to the http://newintranet-type URLs so they continue to work. I am not sure which of these I should do:

    - use Alternate Access Mappings, or
    - add a host header to the IIS site, or
    - create a non-SharePoint IIS site for each http://newintranet-type URL and use IIS redirection to forward the requests to the new URL, using variables to pass the path to the SharePoint site.

    Does anyone have any thoughts on these options, or any other way of achieving this? SharePoint 2007 is running on Windows 2003 with IIS 6. We don't currently have plans/budget to upgrade to SharePoint 2010.


  • FTP not listing files behind firewall (setsockopt (ignored): Permission denied)

    - by KennyDs
    We are developing a Magento application that has a module that works with FTP. Today we deployed this on the testing environment, which is set up with a gateway server carrying the following iptables rules:

        # iptables -L -n -v
        Chain INPUT (policy ACCEPT 2 packets, 130 bytes)
         pkts bytes target  prot opt in    out   source      destination
            0     0 ACCEPT  all  --  lo    *     0.0.0.0/0   0.0.0.0/0
          165 13720 ACCEPT  all  --  *     *     0.0.0.0/0   0.0.0.0/0    state RELATED,ESTABLISHED

        Chain FORWARD (policy ACCEPT 7 packets, 606 bytes)
         pkts bytes target  prot opt in    out   source      destination
            0     0 ACCEPT  all  --  eth1  eth0  0.0.0.0/0   0.0.0.0/0    state RELATED,ESTABLISHED
           15   965 ACCEPT  all  --  eth0  eth1  0.0.0.0/0   0.0.0.0/0
            0     0 REJECT  all  --  eth1  eth1  0.0.0.0/0   0.0.0.0/0    reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 126 packets, 31690 bytes)
         pkts bytes target  prot opt in    out   source      destination

    These are set at runtime via the following bash script:

        #!/bin/sh
        PATH=/usr/sbin:/sbin:/bin:/usr/bin

        #
        # delete all existing rules.
        #
        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        iptables -X

        # Always accept loopback traffic
        iptables -A INPUT -i lo -j ACCEPT

        # Allow established connections, and those not coming from the outside
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow outgoing connections from the LAN side.
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

        # Masquerade.
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

        # Don't forward from the outside to the inside.
        iptables -A FORWARD -i eth1 -o eth1 -j REJECT

        # Enable routing.
        echo 1 > /proc/sys/net/ipv4/ip_forward

    The gateway server is connected to the WAN via eth1 and to the internal network via eth0. One of the servers on the internal network has the following problem when trying to list files over FTP:

        $ ftp -vd myftpserver.com
        Connected to myftpserver.com
        220 Welcome to MY FTP Server
        ftp: setsockopt: Bad file descriptor
        Name (myftpserver.com:magento): XXXXXXXX
        ---> USER XXXXXXXX
        331 User XXXXXXXX, password please
        Password:
        ---> PASS XXXX
        230 Password Ok, User logged in
        ---> SYST
        215 UNIX Type: L8
        Remote system type is UNIX.
        Using binary mode to transfer files.
        ftp> ls
        ftp: setsockopt (ignored): Permission denied
        ---> PORT 192,168,19,15,135,75
        421 Service not available, remote server has closed connection

    When I try listing the files in passive mode, same result. When I run the same command on the gateway server itself, everything works fine, so I believe the issue is that the iptables rules are not forwarding properly. Does anyone have an idea which rule I need to add to make this work?
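    One classic cause of exactly this symptom on a Linux NAT gateway - login works, but the listing dies on the data connection - is the missing FTP connection-tracking helper: the control channel on port 21 passes, but the separate data connection is neither recognised as RELATED nor address-translated. A hedged sketch of the fix on the gateway (module names vary by kernel version):

        # Older 2.6 kernels:             # Newer kernels:
        modprobe ip_conntrack_ftp        # modprobe nf_conntrack_ftp
        modprobe ip_nat_ftp              # modprobe nf_nat_ftp

        # Load them at boot too, e.g. from /etc/modules or the firewall script

    With the helpers loaded, the existing RELATED,ESTABLISHED rules should start matching the data connections for both active and passive FTP.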


  • Very slow printing from print server

    - by evolvd
    The print server is a VM on Xen; the VM is Windows 2003 32-bit. During the issue the VM is not being taxed in any way - CPU, memory, HD read/write, and network speed are all good. The problem I see is in the transfer of the print file from the print server to the printer: an 80 MB file is transferred from the client to the print server in about 2 minutes, but it then takes about 2 hours for that file to be sent to the printer. I can't figure out why this would just start to happen.

    The printer is rebooted every evening and is just used for one large print job in the morning. The server has been rebooted with no effect. I changed the spool option to send the entire spool to the server before printing starts, and it had no effect.

    This printer problem did happen to come about after some changes to the Xen environment: the Xen servers changed from using HBA NIC cards to software iSCSI, and a new switch was put in. I don't think this is related to the problem, since all the speeds on the VMs are better now. The change happened on Saturday, and the first print to this printer happened on Monday morning. I'm just putting that out there - like I said, I don't think it is related, but I don't want to rule it out.

    At this point I don't have many other options besides the physical layer. I can switch out the network cable that goes to the printer, and I might be able to print the same job to another printer; I won't be able to test those things until this afternoon, though. Any other ideas or tests I could do to try to find the reason for the slow speed? I forgot to say that this is only happening when printing to this one printer.

    ===Update===

    I found out that a few printers currently have this issue, not just the one. There are over 30 printers on the server, though, so I know it's not happening to all of them. I printed a large PDF doc from the server and it printed at the normal speed. If a client machine sends the large print request, it gets to the server fine but is then slow to get from the server to the printer; if the job is sent directly from the server, it gets to the printer at the normal speed. The question now is: why is there a speed difference when the job comes from a client machine, and why would it start now?


  • Resolving an access violation exception (0xC0000005) that crashes an IIS app pool

    - by Joseph
    IIS 7.5, Server 2008 R2, classic ASP and ASP.NET 2.0/3.5 websites on the same server, in the same app pool. Over the past 4 weeks, thousands of these 'C0000005' errors have been occurring. I know from the IIS Debug Diagnostics tool that 'C0000005' is an access violation error. Below is the top line from my Debug Diag report:

        In w3wp__PID__6656__Date__01_08_2011__Time_01_42_46AM__281__First Chance Access Violation.dmp, the assembly instruction at asp!CActiveScriptEngine::GetApplication+27 in \\?\C:\Windows\System32\inetsrv\asp.dll from Microsoft Corporation has caused an access violation exception (0xC0000005) when trying to read from memory location 0x00000000 on thread 29

        Thread 29 - System ID 6736
        Entry point 0x00000000
        Create time 1/8/2011 12:46:26 AM
        Time spent in user mode 0 Days 00:00:00.140
        Time spent in kernel mode 0 Days 00:00:00.078

        Function Source
        asp!CActiveScriptEngine::GetApplication+27
        vbscript!COleScript::GetDebugApplicationCoreNoRef+2b
        vbscript!COleScript::FDebuggerEnabled+30
        vbscript!COleScript::SetScriptSite+cd
        asp!CActiveScriptEngine::Init+125
        asp!CScriptManager::GetEngine+252
        asp!AllocAndLoadEngines+28f
        asp!ExecuteGlobal+17a
        asp!Execute+b5
        asp!CHitObj::ViperAsyncCallback+3fc
        asp!CViperAsyncRequest::OnCall+6a
        comsvcs!CSTAActivityWork::STAActivityWorkHelper+32
        ole32!EnterForCallback+f4
        ole32!SwitchForCallback+1a8
        ole32!PerformCallback+a3
        ole32!CObjectContext::InternalContextCallback+15b
        ole32!CObjectContext::DoCallback+1c
        comsvcs!CSTAActivityWork::DoWork+12f
        comsvcs!CSTAThread::DoWork+18
        comsvcs!CSTAThread::ProcessQueueWork+37
        comsvcs!CSTAThread::WorkerLoop+135
        msvcrt!_endthreadex+44
        msvcrt!_endthreadex+ce
        kernel32!BaseThreadInitThunk+e
        ntdll!__RtlUserThreadStart+70
        ntdll!_RtlUserThreadStart+1b

    Below is the faulting module's ASP report:

        Executing ASP requests       0 Request(s)
        ASP templates cached         0 Template(s)
        ASP template cache size      0.00 Bytes
        Loaded ASP applications      1 Application(s)
        ASP.DLL Version              7.5.7600.16620

        ASP application report:
        ASP application metabase key
        Physical Path
        Virtual Root
        Session Count                0 Session(s)
        Request Count                0 Request(s)
        Session Timeout              0 minutes(s)
        Path to Global.asa
        Server side script debugging enabled     False
        Client side script debugging enabled     False
        Out of process COM servers allowed       False
        Session state turned on                  False
        Write buffering turned on                False
        Application restart enabled              False
        Parent paths enabled                     False
        ASP Script error messages will be sent to browser    False

    Recent events: the server was being brute-forced by hackers all of December and probably earlier. They weren't able to gain access, but they did get a virus on and blasted out spam. I installed AVG and about the 17 or 22 latest patches. After that, the app pool started crashing, and the server has crashed a couple of times since then.

    I am in no man's land, as I am a developer and not a sysadmin, but I have to assume many roles, so I'm reaching out for help. Sometimes I see hundreds of these 'C0000005' script-engine errors in the event log within a matter of seconds, and other times just a few per hour. I googled the line 'ASP!CACTIVESCRIPTENGINE::GETAPPLICATION' and got nothing - it's as if the function doesn't exist. I have spent many hours googling to no avail and am now turning to the experts on the forums. Thank you for your help.


  • Sharing a folder on a virtual private Windows Server 2008 R2?

    - by Triztian
    See Edit 2. Hello all - it seems my involvement with computers has grown, and I've found myself needing to access a shared folder on a server. I've read some documentation and managed to set up the folder as a share. For this I created a local group and, for now, just one local user that has access to the share. The folder is in the public user folder, and its permissions should be (and I believe are) read/write.

    The problem is that I can't connect from a remote machine - I mean, I don't know how it should be accessed. The server has a public IP, and we also use it as the host for our website; I don't know if that affects anything. The folder will be the keeper of the QuickBooks company files and has the QuickBooks Database Server Manager installed. I've tried setting up a VPN connection to it, but with no success. The server has a domain name, "http://www.example.com", that redirects to our website; I am unsure whether the share could be accessed that way. The share also has a location displayed when I right-click it and choose Properties.

    Here's what I've tried:

    - Setting up a VPN connection (Windows Vista and 7): I got to the point where I was asked for credentials and entered the user I created (which is not an admin), but got a "connection failed, error 800". I suppose this is because in the domain field I entered the server's workgroup.
    - Right-click "Add a network location" (Windows 7): I went through the wizard until I reached the point of entering the location, and tried many things - the name in the share's properties (\\SOMETHING\Share), http://www.example.com, and the IP address.

    I'm quite unfamiliar with this, so I have my guesses:

    - Since the group and user are local, they do not have access to the folder.
    - The firewall on the server is blocking my connection.

    Anyway, any help and guidance is truly appreciated.

    EDIT 1: As @tony roth pointed out, it may be a security fail; I raised that with management and they said it is not an issue, so please bear with me.

    EDIT 2: I've found out that the real question could be streamlined to "sharing a folder on a virtual private server?", as that's what we have - a virtual private Windows Server 2008 R2 - and I would like to know how to make it show up like a normal folder on the client computer. Thanks again for all of your support.
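    For the basic mechanics, once connectivity is sorted out, a Windows share is mapped from a client with net use; a hedged example (the server and share names are placeholders):

        net use Z: \\www.example.com\QuickBooks /user:SERVERNAME\shareuser

    Two caveats that fit the symptoms above: most ISPs and hosting providers block SMB (TCP 445) across the open Internet, which is why the VPN route is usually required first; and VPN error 800 means the tunnel itself never came up (commonly a wrong server address, or PPTP's TCP 1723/GRE being blocked), not that your credentials were rejected.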


  • What is the best config for nginx worker_rlimit_nofile and worker_connections (28672)?

    - by Binh Nguyen
    I have an issue where browser response is very slow (especially on IE): requests sometimes time out, and sometimes hang for up to 20 seconds on a single 301 redirect. Testing with IE's F12 developer tools, the wait/start time is reported as very long, but once the connection is made, the page elements download and display quickly (tested at xaluan.com). It mostly happens when there are more than 2100 active users on the site (per Google's real-time analytics). The server runs CentOS 5 with nginx and Apache on a 32-core CPU with 96 GB RAM and RAID 10 SAS HDDs.

    The following is my config:

        user nobody;
        # no need for more workers in the proxy mode
        worker_processes 28; #old 32 #good at 24
        error_log /var/log/nginx/error.log; #old add in end: info
        worker_rlimit_nofile 22528;

        events {
            worker_connections 22528;
            use epoll; # you should use epoll here for Linux kernels 2.6.x
        }

        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks off;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks off;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 25; #old 5
            gzip on; #old on
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            ignore_invalid_headers on;
            client_header_timeout 1m; #3m
            client_body_timeout 1m; #3m
            send_timeout 1m; #3m
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 100M;
            client_body_buffer_size 256k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            limit_conn_zone $binary_remote_addr zone=limit_per_ip:1m;
            limit_conn limit_per_ip 20;
            limit_req_zone $binary_remote_addr zone=allips:5m rate=200r/s;
            limit_req zone=allips burst=200 nodelay;
            include "/etc/nginx/vhosts/*";
        }

    I have played around with the worker config:

    1. Tried increasing it, as someone suggested: worker_rlimit_nofile = worker_connections = worker_processes * 1024 = 32768.
    2. Tried setting it low: worker_processes = 28 with the other worker settings at 22528, plus other combinations, but that did not work either, because it sometimes made the server load climb very quickly.
    3. Tried commenting out worker_rlimit_nofile, so it is unlimited. That seemed to improve the response-time issue a bit, but it also made the server load climb quickly at peak time.

    Please help, thanks.

    PS: Here is the Apache config you may also want to look at:

        Listen 0.0.0.0:8081
        User nobody
        Group nobody
        ExtendedStatus On
        ServerAdmin [email protected]
        ServerName server.xaluan.com
        LogLevel warn

        # These can be set in WHM under 'Apache Global Configuration'
        Timeout 100
        TraceEnable Off
        ServerSignature Off
        ServerTokens ProductOnly
        FileETag None
        StartServers 15

        <IfModule prefork.c>
            MinSpareServers 20
            MaxSpareServers 50
            #MaxSpareServers 40
        </IfModule>

        ServerLimit 1572
        MaxClients 1572
        MaxRequestsPerChild 4000
        # MaxRequestsPerChild 3000
        KeepAlive On
        KeepAliveTimeout 3
        MaxKeepAliveRequests 300
        #MaxKeepAliveRequests 130
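    Before tuning the numbers further, it's worth confirming what the workers are actually allowed to open, since worker_rlimit_nofile competes with system-wide limits; a few hedged checks:

        # System-wide ceiling on open file descriptors
        cat /proc/sys/fs/file-max

        # Effective limit of a running worker process
        cat /proc/$(pgrep -f 'nginx: worker' | head -1)/limits | grep 'open files'

    As a rule of thumb, worker_processes * worker_connections should stay comfortably under fs.file-max, and each proxied request can consume two descriptors (client plus upstream). 28 workers at 22528 connections each declares roughly 630k potential connections - far beyond this traffic level - and the oversized per-connection buffers here (256k client_header_buffer_size) multiply out to a lot of reserved memory under load.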


  • Transient network dropout for Xen DomU's

    - by Stephen C
    We've got a CentOS server running a cluster of virtuals. Occasionally the cluster's internal network drops out for a minute or so... and then comes back. The problem is somehow related to the actual network traffic, but it is not a simple load issue. (The system is generally lightly loaded, and the problem occurs irrespective of actual load.)

    The setup:

    - CentOS 5.6 on Dom0, various CentOS on the DomU's
    - Hardware: a Dell R710 with a Broadcom NetXtreme II NIC (sigh), using the latest drivers for the NIC from Broadcom
    - Xen configured to use network-bridge and vif-bridge
    - Some iptables tweaks to route an unrelated port to one of the virtuals.

    The system has one externally visible IP address, and Dom0 runs an Apache httpd configured with a number of virtual hosts, each of which reverse-proxies to web servers running on the virtuals. (The virtuals have to be NAT'ed, primarily because we don't have enough allocated public IP addresses.)

    The symptoms: it works fine most of the time, but when someone tries to UPLOAD a large file to one of the virtuals, the internal network drops out... for all virtuals:

    - The Dom0 httpd sees a network timeout talking to the backend server on the virtual and reports a 502.
    - A previously established ssh connection from Dom0 to any of the DomU's freezes.
    - Our monitoring shows ping failures for traffic between virtuals.
    - The Xen consoles to the DomU's do not freeze.
    - No log messages in any log files that I can see, on either Dom0 or the DomU's... apart from the Dom0 httpd logs.
    - After a minute or so, the problem clears by itself.
    - This is 100% reproducible.

    What we've tried:

    - Downloading, building and installing the latest bnx2 driver on Dom0
    - Turning off MSI on the NIC - adding "options bnx2 disable_msi=1" to /etc/modprobe.conf
    - Turning off TCP segmentation offload - "ethtool -K eth0 tso off"
    - Sacrificing a black rooster at midnight.

    I've exhausted all my options apart from switching to KVM... or slaughtering more roosters. Any suggestions?
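    One more offload-related avenue before more roosters die: TSO is only one of several offloads that interact badly with Xen bridging on bnx2 hardware, and the bridge/vif side has its own settings. A hedged sketch (interface names assume the default network-bridge layout, where the physical NIC becomes peth0):

        # Dom0: disable the remaining offloads on physical and bridge interfaces
        ethtool -K peth0 tso off gso off gro off sg off
        ethtool -K eth0  tso off gso off gro off

        # Each DomU, inside the guest:
        ethtool -K eth0 tso off gso off sg off

    Large uploads are exactly the traffic that exercises segmentation and receive offloads, which fits the trigger described; if the dropouts stop, offloads can be re-enabled one at a time to find the guilty one.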


  • Webserver Responses Hanging

    - by drscroogemcduck
    From some networks, requesting certain images on our webserver is very flaky. I've looked at tcpdumps on both sides: the server sends back part of the file and the client ACKs the TCP segment, but the server never receives the ACK.

    The server's view:

        41 19.941136 212.169.34.114 209.20.73.85 TCP 52456 > http [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=2
        42 19.941136 209.20.73.85 212.169.34.114 TCP http > 52456 [SYN, ACK] Seq=0 Ack=1 Win=5440 Len=0 MSS=1360
        46 20.041142 212.169.34.114 209.20.73.85 TCP 52456 > http [ACK] Seq=1 Ack=1 Win=65280 Len=0
        47 20.045142 212.169.34.114 209.20.73.85 HTTP GET /map/map/s+74-WBkWk0aR28Yy-YjXA== HTTP/1.1
        48 20.045142 209.20.73.85 212.169.34.114 TCP http > 52456 [ACK] Seq=1 Ack=522 Win=6432 Len=0
        49 20.045142 209.20.73.85 212.169.34.114 TCP [TCP segment of a reassembled PDU]
           (Part of the content of the image, 2720 bytes; I assume it is reassembled by tcpdump and fragmented over the wire.)

        ** The server never receives the ACK sent in frame 282 and will eventually resend the TCP segment. **

    The client's view:

        274 26.161773 10.0.16.67 209.20.73.85 TCP 52456 > http [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=2
        276 26.262867 209.20.73.85 10.0.16.67 TCP http > 52456 [SYN, ACK] Seq=0 Ack=1 Win=5440 Len=0 MSS=1360
        277 26.263255 10.0.16.67 209.20.73.85 TCP 52456 > http [ACK] Seq=1 Ack=1 Win=65280 Len=0
        278 26.265193 10.0.16.67 209.20.73.85 HTTP GET /map/map/s+74-WBkWk0aR28Yy-YjXA== HTTP/1.1
        279 26.365562 209.20.73.85 10.0.16.67 TCP http > 52456 [ACK] Seq=1 Ack=522 Win=6432 Len=0
        280 26.368002 209.20.73.85 10.0.16.67 TCP [TCP segment of a reassembled PDU]
           (Part of the content of the image; only 1400 bytes.)
        282 26.571380 10.0.16.67 209.20.73.85 TCP 52456 > http [ACK] Seq=522 Ack=1361 Win=65280 Len=0

    The network we are having trouble with is NAT'd. Is there any kind of explanation for this weirdness?
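    The shape of this - handshake and small packets pass, the first full-sized data segment only partially arrives, NAT in the path - is consistent with an MTU/fragmentation problem somewhere between the sites. A quick hedged test from the client side:

        # Linux: full-size, don't-fragment ping (1472 = 1500 - 28 header bytes);
        # shrink the size until it succeeds to find the real path MTU
        ping -M do -s 1472 209.20.73.85

        # Windows equivalent: ping -f -l 1472 209.20.73.85

    If 1472 fails but a smaller size works, MSS clamping on the NAT device (or a lower MTU on the server) would be the usual fix.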


  • Win-XP Browsers Hang on page load - (waiting for...)

    - by CHarmon
    Hello, I’m having problems with my browsers hanging on loading pages on my desktop machine. I’m using Windows XP Pro with SP3 and fully updated except for IE 8. All three of my browsers, IE 7, Chrome and Firefox are having the same problems. Pages are not being loaded and are hanging on “waiting for …”. The browsers are waiting for the page being loaded or ad servers. Sometimes a page will load but the loading graphic continues to be displayed as if the page were still loading when the page appears to be fully loaded. The problem is bad enough that I can’t really use any of my browsers. I can eventually get most pages to load by stopping and restarting the page load. I have DSL modem with a wireless router and I have been able to eliminate the modem and router from being the source of my problem. My laptop doesn’t have any problems even when hardwired to the router and with the wireless connection disabled. I deleted the NIC and let XP re-install. Also tried a different network cable. Tried the same router port used in the laptop test. One clue that may be important is that I can’t connect to my router using the desktop machine…the page hangs while trying to connect. I can ping the router and I can quickly connect to the router using the laptop. I also can’t use the Windows update process – the page never fully loads. The problem affects other user accounts and even happens in safe mode. I am convinced the problem is with part of the O/S…some layer able to affect all of the browsers. The purpose of this post is to see if anyone has some ideas before I do a XP repair. I have done quite a bit of trouble-shooting: Ran a full anti-virus scan with AVG – no problems. Ran full scans with Spybot, MalwareBytes and Sophos anti-rootkit – no problems. Ran Chkdsk with both options checked. Ran Disk Clean up Defragged RE-installed IE7 Cleared all the browser caches Ran Ccleaner (registry tool) Ran HijackThis – nothing unusual (problem happens in safe mode too) Ran Process Explorer – no unusual processes Used System Restore and fell back several days – no change in the problem Booted to last known good configuration – no change in the problem Ran MicrosoftFixit50199.msi – no change in the problem Any ideas or suggestions would be appreciated…I’m not looking forward to doing a repair on XP. Thanks in advance for any help.


  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that scale up and down according to the load we're receiving on our sites at any given moment. For the sake of our discussion here, we're running four instances:

    - master.ourdomain.com - the file-syncing "hub" of the webservers
    - www1/www2/www3.ourdomain.com - three webservers which turn on or off as dictated by load

    I'm using lsyncd to keep all of the webservers in sync, and for the most part it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master, and master syncs against each webserver. Thus the webservers are kept in sync even though they aren't syncing against each other directly.

    I'm having one problem that I'm having a hard time solving, though. It occurs under these circumstances:

    - when changes are made on master (perhaps after we've pushed new code) while some of the redundant webservers are sleeping,
    - and then a sleeping webserver wakes up to absorb load.

    Under that circumstance, I would like the following to happen: first, the newly-awoken webserver should sync its file structure - one way - against master, to bring its web application code up to date. Then, and only then, should it begin pushing changes in its file structure back to master. Unfortunately, currently, when a sleeping server is started, lsyncd pushes changes back to master before updating its own codebase, thus overwriting new code with old.

    So, before lsyncd starts, I'd like to be able to synchronize the webserver's code against master's, perhaps by running a simple one-way rsync against the two machines. We're running lsyncd v.2, and I've tried to make this happen by using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this:

        settings = {
            logfile = "/home/user/log/lsyncd/log.txt",
            statusFile = "/home/user/log/lsyncd/status.txt",
            maxProcesses = 2,
            nodaemon = false,
        }

        bash = {
            onStartup = "rsync [email protected]:/home/user/www /home/user/www"
        }

        sync{
            default.rsyncssh,
            source="/home/user/www/",
            host="[email protected]",
            targetdir="/home/user/www/",
            rsyncOpts="-ltus",
            excludeFrom="/home/user/conf/lsyncd/exclude"
        }

    (I've obviously redacted that file somewhat to protect the identities of the guilty.)

    Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think it does what I'm looking for. Are there any suggestions about how I should approach this problem? Thanks for lending your time and expertise.

    Chris
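    Since the "bash = { onStartup = ... }" table doesn't appear to be part of lsyncd 2's documented Lua configuration, one robust alternative is to do the one-way catch-up outside lsyncd, in whatever wrapper or init script starts it on each webserver; a sketch (the config path is a placeholder):

        #!/bin/sh
        # On webserver start: pull master's current tree first, one way,
        # deleting stale files, and only then begin two-way syncing.
        rsync -az --delete [email protected]:/home/user/www/ /home/user/www/ \
            || exit 1
        exec lsyncd /home/user/conf/lsyncd/lsyncd.conf.lua

    Because lsyncd only starts after the rsync exits successfully, stale code can never be pushed back to master during wake-up.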


  • Internet Explorer cannot display page from apache with single SSL virtual host

    - by P.scheit
    I have a question that has come up in different questions before, but I still can't find the solution. My problem is that I'm hosting a site on Apache 2.4 on Debian with SSL, and Internet Explorer 7 on Windows XP shows

        Internet Explorer cannot display the webpage

    I have only ONE virtual host that uses SSL, but DIFFERENT virtual hosts that use http. Here is my config for the site with SSL enabled (/etc/sites-available/default-ssl is NOT linked):

        <VirtualHost xx.yyy.86.193:443>
            ServerName www.my-certified-domain.de
            ServerAlias my-certified-domain.de
            DocumentRoot "/var/local/www/my-certified-domain.de/current/www"
            Alias /files "/var/local/www/my-certified-domain.de/current/files"
            CustomLog /var/log/apache2/access.my-certified-domain.de.log combined

            <Directory "/var/local/www/my-certified-domain.de/current/www">
                AllowOverride All
            </Directory>

            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/www.my-certified-domain.de.crt
            SSLCertificateKeyFile /etc/ssl/private/www.my-certified-domain.de.key
            SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
            SSLCertificateChainFile /etc/apache2/ssl.crt/www.my-certified-domain.de.ca

            BrowserMatch "MSIE [2-8]" nokeepalive downgrade-1.0 force-response-1.0
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.my-certified-domain.de
            ServerAlias my-certified-domain.de
            CustomLog /var/log/apache2/access.my-certified-domain.de.log combined
            Redirect permanent / https://www.my-certified-domain.de/
        </VirtualHost>

    My ports.conf looks like this:

        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    The output from apache2ctl -S is like this:

        xx.yyy.86.193:443      www.my-certified-domain.de (/etc/apache2/sites-enabled/020-my-certified-domain.de:1)
        wildcard NameVirtualHosts and _default_ servers:
        *:80   is a NameVirtualHost
               default server phpmyadmin.my-certified-domain.de (/etc/apache2/conf.d/phpmyadmin.conf:3)
               port 80 namevhost phpmyadmin.my-certified-domain.de (/etc/apache2/conf.d/phpmyadmin.conf:3)
               port 80 namevhost staging.my-certified-domain.de (/etc/apache2/sites-enabled/010-staging.my-certified-domain.de:1)
               port 80 namevhost testing.my-certified-domain.de (/etc/apache2/sites-enabled/015-testing.my-certified-domain.de:1)
               port 80 namevhost www.my-certified-domain.de (/etc/apache2/sites-enabled/020-my-certified-domain.de:31)

    I included the solution from the question "Internet explorer cannot display the page, other browsers can, possibly htaccess / server error", and I understand the answer to the question "How to setup Apache NameVirtualHost on SSL?".

    In fact, I have only one SSL certificate for the domain, and I only want to run ONE virtual host with SSL, so I just want to use the one IP for the SSL virtual host. But still (after rebooting / restarting / testing) Internet Explorer will not show the page. When I interpret the apache2ctl -S output, I already have only one SSL host, and this should respond to the initial SSL handshake, shouldn't it?

    What is wrong in this setup?

    Thank you so much

    Philipp
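    To see what IE on XP is actually being served, it can help to test the handshake outside any browser; a hedged check from another machine:

        # Inspect the certificate chain and negotiated cipher on port 443
        openssl s_client -connect www.my-certified-domain.de:443 -showcerts < /dev/null

    s_client invoked this way sends no SNI, which matches what IE7 on XP does. If the certificate chain and cipher look right in this test but IE still fails, the problem is more likely DNS, the port-80 redirect, or the downgrade-1.0/nokeepalive BrowserMatch behaviour than the SSL vhost itself.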


  • Sub-domains with /etc/hosts and Apache for Gitorious

    - by QLands
    I managed to get a local install of Gitorious working. Now I need to finalize the Apache integration using a virtual server, but nothing seems to work. See for example my /etc/hosts file:

        127.0.0.1       localhost
        172.26.17.70    darkstar.ilri.org darkstar
        172.26.17.70    git.darkstar.ilri.org

    My vhosts.conf has the following entries:

        #
        # Use name-based virtual hosting.
        #
        NameVirtualHost *:80

        <VirtualHost *:80>
            <Directory /srv/httpd/htdocs>
                Options Indexes FollowSymLinks ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ServerName darkstar.ilri.org
            DocumentRoot /srv/httpd/htdocs
            ErrorLog /var/log/httpd/error_log
            AddHandler cgi-script .cgi
        </VirtualHost>

        <VirtualHost *:80>
            <Directory /srv/httpd/git.darkstar.ilri.org/gitorious/public>
                Options FollowSymLinks ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from All
            </Directory>
            AddHandler cgi-script .cgi
            DocumentRoot /srv/httpd/git.darkstar.ilri.org/gitorious/public
            ServerName git.darkstar.ilri.org
            ErrorLog /var/www/git.darkstar.ilri.org/log/error.log
            CustomLog /var/www/git.darkstar.ilri.org/log/access.log combined
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript text/css application/x-javascript
            BrowserMatch ^Mozilla/4 gzip-only-text/html
            BrowserMatch ^Mozilla/4\.0[678] no-gzip
            BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                ExpiresActive On
                ExpiresDefault "access plus 1 year"
            </FilesMatch>
            FileETag None
            RewriteEngine On
            RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
            RewriteCond %{SCRIPT_FILENAME} !maintenance.html
            RewriteRule ^.*$ /system/maintenance.html [L]
        </VirtualHost>

    Now, when I go with Firefox to darkstar.ilri.org, it shows the default Apache screen: "It works!". But when I go to git.darkstar.ilri.org, it waits a few seconds, then falls back to darkstar.ilri.org and the default Apache page. No error is reported. If I run httpd -S I get:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80 is a NameVirtualHost
             default server darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
             port 80 namevhost darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
             port 80 namevhost git.darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:37)
        Syntax OK

    The funny thing is that if I configure Gitorious on a host called gitrepository, add "127.0.0.1 gitrepository" to /etc/hosts, and go with Firefox to gitrepository... Gitorious works. But why not with git.darkstar.ilri.org? Many thanks in advance.
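    One way to separate name-resolution problems from vhost-matching problems is to send the Host header explicitly, bypassing DNS entirely; a quick check from the server itself:

        # Ask the same IP for each vhost by name
        curl -s -D - -o /dev/null -H "Host: git.darkstar.ilri.org" http://172.26.17.70/
        curl -s -D - -o /dev/null -H "Host: darkstar.ilri.org"     http://172.26.17.70/

    If the first request returns the Gitorious app, Apache's vhost matching is fine and the issue is how the client resolves git.darkstar.ilri.org; if it returns the "It works!" page, the NameVirtualHost/ServerName matching is where to dig.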


  • nginx timeout despite ridiculous configuration

    - by Joa Ebert
    The scenario is an API server that should handle uploads. Posting to my.host.com/api/upload should do something with the body the client sends. However, the API server has been designed to block the whole request until it has fully processed the file, including some analysis which can take up to approx. 5 minutes (...!). This has to change, of course.

    In the meantime, I wanted to set up nginx as a load balancer in front of the API servers. I quickly ran into a timeout issue, consulted Google, and came up with this ridiculous test configuration:

        user www-data;
        worker_processes 4;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            access_log off;

            sendfile on;
            send_timeout 3600;

            keepalive_timeout 3600 120;
            tcp_nopush on;
            tcp_nodelay on;
            gzip off;

            client_header_timeout 3600;
            client_body_timeout 3600;
            proxy_send_timeout 3600;
            proxy_read_timeout 3600;
            proxy_connect_timeout 1800;
            proxy_next_upstream error;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    And:

        upstream test {
            server host1;
            server host2;
        }

        server {
            listen 80;
            server_name my.host.com;

            client_max_body_size 10m;

            location /api/ {
                proxy_pass http://test;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $host;
                proxy_redirect off;
            }
        }

    Still, when an upload happens, I get the following result in the error.log:

        2010/12/22 13:36:42 [error] 5256#0: *187359 upstream timed out (110: Connection timed out) while reading response header from upstream, client: xx.xx.xx.xx, server: my.host.com, request: "POST /api/upload HTTP/1.1", upstream: "http://apiserver:80/upload", host: "my.host.com"

    What else could I do? If I look at the log of the API server, I can see that it is still processing the request and analyzing the file. But 3600 seconds as a timeout should be more than enough - the error sometimes appears after just a couple of seconds. And yes, I did a reload and force-reload of the configuration as well, of course.
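    Two hedged things stand out: proxy_next_upstream error affects how a failed upstream is retried, and timeouts declared in the http block can be shadowed by anything pulled in via the include lines, so pinning them at the location level removes doubt - a sketch:

        location /api/upload {
            proxy_pass http://test;
            proxy_connect_timeout 60s;
            proxy_read_timeout    3600s;
            proxy_send_timeout    3600s;
        }

    It is also worth timing a POST with curl directly against host1, bypassing nginx: if the API server itself (or an application server in front of it) closes or stalls the connection before replying, nginx will report it as an upstream timeout no matter what these values say.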


  • Unable to access a NAT'ed IP via a VPN on Windows 7

    - by crmpicco
    I connect to a range of servers hosted by one provider via a VPN. I can connect to the VPN fine; however, when I then try to connect to the server(s), it fails. A NAT'ed IP address that has worked up until today has stopped working for both SSH and SFTP.

    As you can see below, if I try to ping the IP, it responds with "Destination host unreachable" - but for some reason the reply comes from 192.168.0.8. If I enter that IP address in my browser, I get nothing. Where is this IP coming from, and is there any good reason why I cannot access the IP I am trying to ping?

        C:\Users\crmpicco>ping 172.26.100.x

        Pinging 172.26.100.x with 32 bytes of data:
        Reply from 192.168.0.8: Destination host unreachable.
        Reply from 192.168.0.8: Destination host unreachable.
        Reply from 192.168.0.8: Destination host unreachable.
        Reply from 192.168.0.8: Destination host unreachable.

        Ping statistics for 172.26.100.x:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

    I have the VPN remote host address of 80.75.67.x, which shows me as being connected, but I'm unsure whether a config issue at the server side or at my end has caused this. I have had some recent Win7 (automatic) updates, but it's hard to tell if that's what caused this problem.

    This is my output from arp:

        C:\Users\cmorton>arp -a

        Interface: 192.168.0.8 --- 0xe
          Internet Address      Physical Address      Type
          192.168.0.1           00-18-4d-b9-68-5e     dynamic
          192.168.0.6           00-f4-b9-68-0c-9a     dynamic
          192.168.0.7           08-00-27-f2-9f-d6     dynamic
          192.168.0.255         ff-ff-ff-ff-ff-ff     static
          224.0.0.22            01-00-5e-00-00-16     static
          224.0.0.251           01-00-5e-00-00-fb     static
          224.0.0.252           01-00-5e-00-00-fc     static
          239.255.255.250       01-00-5e-7f-ff-fa     static
          255.255.255.255       ff-ff-ff-ff-ff-ff     static

        Interface: 192.168.56.1 --- 0x15
          Internet Address      Physical Address      Type
          192.168.56.255        ff-ff-ff-ff-ff-ff     static
          224.0.0.22            01-00-5e-00-00-16     static
          224.0.0.251           01-00-5e-00-00-fb     static
          224.0.0.252           01-00-5e-00-00-fc     static
          255.255.255.255      ff-ff-ff-ff-ff-ff      static
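    "Reply from 192.168.0.8: Destination host unreachable" is your own LAN adapter reporting that it has no route to 172.26.100.x - the traffic never enters the VPN tunnel. Two hedged checks from an elevated prompt (the gateway must be the VPN adapter's address, which route print reveals):

        route print 172.26.*

        rem If no route lists the VPN interface, add one for the remote subnet:
        route add 172.26.100.0 mask 255.255.255.0 <your-VPN-interface-IP>

    A Windows update can also reset the VPN connection's "Use default gateway on remote network" setting (IPv4 properties, Advanced); re-ticking that, or adding the static route above, is the usual fix.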

