Search Results

Search found 18773 results on 751 pages for 'router configuration'.

Page 78/751 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • Configuration for httphandler in classic mode

    - by happyspider
    I have to install an httphandler that needs to run in classic mode. I have created an application in IIS that uses a classic app pool and put the handler assembly there. The vendor's deployment document gives me a configuration that looks like this:

        <system.web>
          <globalization requestEncoding="iso-8859-1" responseEncoding="iso-8859-1" />
          <httpModules>
          </httpModules>
          <httpHandlers>
            <add verb="*" path="*" type="ProductName.ProductName, ProductName" />
          </httpHandlers>
        </system.web>
        <system.webServer>
          <validation validateIntegratedModeConfiguration="false"/>
          <handlers>
            <add name="someUnspecificName" path="*" verb="*" modules="IsapiModule"
                 scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll"
                 resourceType="Unspecified" requireAccess="None"
                 preCondition="classicMode,runtimeVersionv2.0,bitness32" />
          </handlers>
        </system.webServer>

    The error I get when requesting a URL on the application is a 404, so I guess the handler is not used at all. Does the configuration look OK for a 64-bit system?
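
    A possible variation to try, assuming the classic app pool runs as a 64-bit worker process ("Enable 32-Bit Applications" set to False); the handler name and type are kept from the vendor's snippet, only the Framework64 path and the bitness precondition change:

        <handlers>
          <add name="someUnspecificName" path="*" verb="*" modules="IsapiModule"
               scriptProcessor="C:\Windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll"
               resourceType="Unspecified" requireAccess="None"
               preCondition="classicMode,runtimeVersionv2.0,bitness64" />
        </handlers>

    If the pool does have "Enable 32-Bit Applications" set to True, the original bitness32/Framework entry should be the matching one instead; the key point is that the precondition and the aspnet_isapi.dll path must agree with the pool's bitness.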

    Read the article

  • Apache can't be restarted after changes to the configuration file

    - by Sharifhs
    Hello, I can't successfully configure the Apache and PHP configuration files; can anybody help me with this? Apache 2.2.16 (win32-x86-no_ssl.msi) was installed into "C:\Apache2.2". Then PHP 5.3.3 (VC9 x86 Thread Safe) was downloaded as a zip file and extracted to "C:\php". In "C:\php" I renamed the "php.ini-development" file to "php.ini". The "php.ini" file was opened with Notepad and modified as:

        doc_root = "C:\Apache2.2\htdocs"
        extension_dir = "C:\php\ext"

    The following lines were added to Apache's configuration file "httpd.conf":

        LoadModule php5_module "C:/php/php5apache2_2.dll"
        AddType application/x-httpd-php .php
        PHPIniDir "C:/php"

    N.B.: Thanks all for the comments and answers, but I can't reply to any of your comments and I don't know why. Maybe I'm not privileged to post comments as I'm new here (is that the case?). That's why I'm editing my post to reply to you all. Tell me what I can do. @jer.salamon: do you want me to post the full httpd.conf file? It will be longer then. @davr: the server started at first, but once I configured those files it never started again. @jer.salamon: did you mean keeping it this way: doc_root = and extension_dir = "ext"? It has not restarted yet!
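
    A debugging sketch rather than a fix, assuming the default install paths above: Apache's own syntax check and a console start usually show why the service refuses to come up.

        REM check the configuration syntax before starting the service
        C:\Apache2.2\bin\httpd.exe -t
        REM if the syntax is OK, start from the console to see startup errors
        C:\Apache2.2\bin\httpd.exe -k start
        REM the error log is also worth a look
        type C:\Apache2.2\logs\error.log

    A frequently reported cause with this exact combination is a compiler mismatch: the VC9 thread-safe PHP build is intended for a VC9-built Apache (such as the Apache Lounge builds), while the apache.org win32 installer is a VC6 build, and loading php5apache2_2.dll then fails at startup.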

    Read the article

  • Trunking between a Juniper EX3300 and a Cisco router

    - by danijuntak
    Hi experts, please tell me how to create a trunk between Juniper and Cisco. The devices are a Cisco 2950, a Juniper EX3300 and a Cisco 2621. I created VLAN 100, VLAN 200 and VLAN 300, and I have created a trunk on the Juniper switch with:

        set interfaces ge-0/0/2 unit 0 family ethernet-switching vlan members
        root@switch# set interfaces ge-0/0/23 unit 0 family ethernet-switching port-mode trunk

    Now I want to telnet to the Juniper switch from a PC, but I don't know how to give an IP address to the Juniper switch or how to assign an IP to a VLAN on the Juniper switch.
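
    A minimal Junos sketch for giving the EX a management address on one of the VLANs (a routed VLAN interface); the VLAN name, ID and address below are assumptions, not values taken from the question:

        set vlans VLAN100 vlan-id 100
        set vlans VLAN100 l3-interface vlan.100
        set interfaces vlan unit 100 family inet address 192.168.100.2/24
        set system services telnet
        commit

    A PC in that VLAN could then telnet to 192.168.100.2, provided the trunk on the Cisco side carries the same VLAN.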

    Read the article

  • Proxmox drbd configuration split brain [on hold]

    - by AudioDan
    I am planning a proxmox HA configuration with two Dell R710 machines (dual 6 core processors in each) with enterprise level drive raid arrays. I would be using DRBD with a quorum disk on a third machine. I would dedicate two 1GB nics on each server to the DRBD communications. We would have approximately 12 to 14 Virtual Machines running on this pair of servers. The proxmox manual recommends creating two DRBD resources - one for the Virtual Machines that normally run on ServerA and one for the Virtual Machines that normally run on ServerB. This is because of the Primary/Primary state in which this configuration runs. If both servers have VMs talking to the same DRBD resource and a split brain situation occurs, there is potential for data corruption that must be resolved. While I understand it would take more effort to create new virtual machines, can anybody foresee any potential problems with running a separate DRBD resource for each VM instead? Does anyone have experience running a setup that way and has it worked well? It seems to me that would allow more flexibility in moving machines back and forth.
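
    For reference, a per-VM DRBD resource would look roughly like the sketch below; the resource name, drbd minor, LV path, port and node names are made up, the node names must match each server's hostname, and every resource needs its own TCP port:

        resource vm-101 {
          on nodeA {
            device    /dev/drbd101;
            disk      /dev/pve-vg/vm-101-disk-1;
            address   10.0.0.1:7801;
            meta-disk internal;
          }
          on nodeB {
            device    /dev/drbd101;
            disk      /dev/pve-vg/vm-101-disk-1;
            address   10.0.0.2:7801;
            meta-disk internal;
          }
        }

    With one such resource per guest, a split brain would be confined to that single VM, at the cost of touching the DRBD configuration on both nodes every time a VM is created.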

    Read the article

  • Unable to ping to outside network from behind a Linux router

    - by Supratik
    Hi, my system is behind a Linux firewall, where eth0 is connected to the internet and eth1 is connected to my LAN. The issue is that I am not able to ping outside my network. The iptables rule I have used is below:

        iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -p icmp -j SNAT --to-source $PUBLICIP

    Please correct me if I am doing anything wrong here. Warm regards, Supratik
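
    A sketch of the pieces usually needed on the router for this to work, assuming eth0 is the public interface and $PUBLICIP is set as in the question; these rules are illustrative, not a complete firewall:

        # routing between the interfaces must be switched on
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # let LAN traffic be forwarded out and replies come back
        iptables -A FORWARD -i eth1 -o eth0 -s 192.168.1.0/24 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # NAT everything from the LAN out eth0, not only ICMP
        iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j SNAT --to-source $PUBLICIP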

    Read the article

  • Centos iptables configuration for Wordpress and Gmail smtp

    - by Fabrizio
    Let me start off by saying that I'm a CentOS newbie, so all info, links and suggestions are very welcome! I recently set up a hosted server with CentOS 6 and configured it as a webserver. The websites running on it are nothing special, just some low-traffic projects. I tried to keep the configuration as close to the defaults as possible, but I like it to be secure as well (no FTP, custom SSH port). Getting my WordPress to run as desired, I'm running into some connection problems. Two things are not working:

    - installing plugins and updates through ssh2 (failed to connect to localhost:sshportnumber)
    - sending emails from my site using the Gmail SMTP server (Failed to connect to server: Permission denied (13))

    I have the feeling that these are both related to the iptables configuration, because I've tried everything else (I think). I tried opening up the firewall to accept traffic for port 465 (Gmail SMTP) and the SSH port (let's say this port is 8000), but both issues remain. SSH connections from the terminal are working fine though. After each change I restarted the iptables service. This is my iptables configuration (using vim):

        # Generated by iptables-save v1.4.7 on Sun Jun 1 13:20:20 2014
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 8000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 465 -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        -A OUTPUT -p tcp -m tcp --dport 8000 -j ACCEPT
        -A OUTPUT -p tcp -m tcp --dport 465 -j ACCEPT
        COMMIT
        # Completed on Sun Jun 1 13:20:20 2014

    Are there any (obvious) issues with my iptables setup considering the above-mentioned problems? Saying that the firewall is doing exactly nothing in this state is also an answer... And again, if you have any other suggestions for me to increase security (considering the basic things I do with this box), I would love to hear them, also the obvious ones! Thanks!
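
    One thing worth checking before iptables, offered as a sketch rather than a diagnosis: on CentOS 6 a "Permission denied (13)" from a web application opening a network connection is often SELinux rather than the firewall, and Apache/PHP is blocked from outbound connections by default.

        # allow Apache/PHP to make outbound network connections (persistent boolean)
        setsebool -P httpd_can_network_connect on

        # confirm SELinux is enforcing and look for recent denials
        getenforce
        grep denied /var/log/audit/audit.log | tail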

    Read the article

  • netkit: why can't pc4 on router 4 ping pc1 on router 1 - how can I solve this please?

    - by donok
    Below I have four routers connected, but pc1 on r1 cannot ping pc4 on r4, pc2 on r2 can't ping pc4 on r4, and vice versa. Below is a network diagram, and the configurations follow; could anyone please help me make them reachable? ![connecting 4 routers][1] I can't post my diagram on Server Fault (less than 10 rep), so I posted it on Stack Overflow and asked the same question there.

        pc1:
        ifconfig eth0 195.11.14.5 netmask 255.255.255.0 broadcast 195.11.14.255 up
        route add default gw 195.11.14.1 dev eth0

        pc2.start:
        ifconfig eth0 200.1.1.7 netmask 255.255.255.0 broadcast 200.1.1.255 up
        route add default gw 200.1.1.1 dev eth0

        pc3:
        ifconfig eth0 195.20.14.9 netmask 255.255.255.0 broadcast 195.20.1.255 up
        route add default gw 195.20.14.1 dev eth0

        pc4:
        ifconfig eth0 200.2.1.11 netmask 255.255.255.0 broadcast 200.2.1.255 up
        route add default gw 200.2.1.1 dev eth0

        r1:
        ifconfig eth0 195.11.14.1 netmask 255.255.255.0 broadcast 195.11.14.255 up
        ifconfig eth1 100.0.0.9 netmask 255.255.255.252 broadcast 100.0.0.11 up
        route add -net 200.1.1.0 netmask 255.255.255.0 gw 100.0.0.10 dev eth1
        route add default gw 100.0.0.10

    lab.conf: if you need more of that I'll post it up, but I think most of the info is there. Any help would be greatly appreciated, especially with getting a connection between pc4 and pc1; even if you think it does not make sense, please explain why. Thank you.
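
    Without the diagram the exact next hops are unknown, so the following is only the general pattern: every router along the path needs a route towards pc4's 200.2.1.0/24 network, and r4 (and any router in between) needs routes back to 195.11.14.0/24 and 200.1.1.0/24, otherwise the echo replies are dropped. The gateway values below are placeholders, not addresses from this lab:

        # on r1: reach pc4's LAN via the next router on the path
        route add -net 200.2.1.0 netmask 255.255.255.0 gw <next-hop-towards-r4> dev eth1

        # on r4: route back to pc1's LAN
        route add -net 195.11.14.0 netmask 255.255.255.0 gw <next-hop-towards-r1>

    Checking "route -n" on each router for both directions is usually enough to spot which hop is missing a route.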

    Read the article

  • Using a second wifi router as a wireless bridge

    - by Greg-J
    I purchased a D-Link DGL-4500 to replace my aging WRT54G around a year ago, only to find it nowhere near as reliable. The WRT54G has been collecting dust since. I'm wondering if there is a way to use it as a wireless bridge, so I can connect it to my home network and then use its ethernet ports to provide network access to several devices. Is this something that can be done? If not, are there devices meant for this? Any help would be appreciated.

    Read the article

  • VPN ipsec tunnel from router to single windows server computer (gateway-to-host)

    - by Chris Miller
    Firstly, is this possible? The situation: two different ISPs. One has several servers and a firewall running. The other is limited to a single virtual server with one network card, running Windows Server 2008 R2. I need to set up a site-to-site-style VPN using IPsec between the firewall at one ISP and the Windows host at the other (gateway-to-host). This host has to run a SQL Server that I can access from the other ISP's servers through the VPN tunnel. Looking at the RFCs for IPsec, it seems this should be possible using the features of Windows 2008, but I can't get it to work so far... It seems that I can't access any services running on the same computer or IP address that is used as the tunnel endpoint. Thanks, Chris
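
    Windows Server 2008 R2 can terminate an IPsec tunnel with a connection security rule; below is a rough, unverified sketch using a pre-shared key, where every address, name and the key are placeholders and the parameter set should be double-checked against "netsh advfirewall consec add rule ?" and matched by the firewall on the other side:

        netsh advfirewall consec add rule name="gw-to-host" ^
            mode=tunnel endpoint1=<windows-host-ip> endpoint2=<remote-lan-subnet> ^
            localtunnelendpoint=<windows-host-ip> remotetunnelendpoint=<remote-firewall-ip> ^
            action=requireinrequireout auth1=computerpsk auth1psk="<shared-secret>"

    Services on the tunnel endpoint itself (such as the SQL Server) still need matching Windows Firewall inbound rules for the remote subnet.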

    Read the article

  • configuration issue with respect to .htaccess file on ubuntu

    - by Registered User
    I am building an application, tshirtshop. I have the following configuration in /etc/apache2/sites-enabled/tshirtshop:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/tshirtshop
            <Directory /var/www/tshirtshop>
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    and the following in the .htaccess file at /var/www/tshirtshop/.htaccess:

        <IfModule mod_rewrite.c>
          # Enable mod_rewrite
          RewriteEngine On
          # Specify the folder in which the application resides.
          # Use / if the application is in the root.
          RewriteBase /tshirtshop
          #RewriteBase /
          # Rewrite to correct domain to avoid canonicalization problems
          # RewriteCond %{HTTP_HOST} !^www\.example\.com
          # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
          # Rewrite URLs ending in /index.php or /index.html to /
          RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP
          RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L]
          # Rewrite category pages
          RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L]
          RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L]
          # Rewrite department pages
          RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L]
          RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L]
          # Rewrite subpages of the home page
          RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L]
          # Rewrite product details pages
          RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L]
        </IfModule>

    The site works on localhost, but it behaves as if no .htaccess rules were specified: if I view a page as http://localhost/tshirtshop/nature-d2 I get a 404 error, but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. Can anyone point out the mistake, if any, in the configuration above, or tell me what else I should check?

        sudo apache2ctl -M
        Loaded Modules:
          core_module (static)
          log_config_module (static)
          logio_module (static)
          mpm_prefork_module (static)
          http_module (static)
          so_module (static)
          alias_module (shared)
          auth_basic_module (shared)
          authn_file_module (shared)
          authz_default_module (shared)
          authz_groupfile_module (shared)
          authz_host_module (shared)
          authz_user_module (shared)
          autoindex_module (shared)
          cgi_module (shared)
          deflate_module (shared)
          dir_module (shared)
          env_module (shared)
          mime_module (shared)
          negotiation_module (shared)
          php5_module (shared)
          reqtimeout_module (shared)
          rewrite_module (shared)
          setenvif_module (shared)
          status_module (shared)
        Syntax OK

    I am using Apache2 on Ubuntu 12.04.
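
    One way to see whether the rules are being consulted at all, assuming the Apache 2.2 that ships with Ubuntu 12.04 (the RewriteLog directives below were removed in Apache 2.4, so this is version-specific): enable the rewrite log in the virtual host, reload, request the friendly URL and read the log.

        # inside <VirtualHost *:80> in /etc/apache2/sites-enabled/tshirtshop
        RewriteLog ${APACHE_LOG_DIR}/rewrite.log
        RewriteLogLevel 3

        # then reload and watch the log while requesting /tshirtshop/nature-d2
        sudo service apache2 reload
        tail -f /var/log/apache2/rewrite.log

    If the log stays empty, the .htaccess file is not being read at all (check AccessFileName and that www-data can read the file); if rules are logged but none match, the patterns themselves are the problem.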

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of a client configuration for rsync that allows per-site configuration options and aliases or similar shortcuts like the .ssh/config? I'm curious because I have a minimal ssh server installed on my android phone and I also have a minimal rsync tool on it as well. I'm getting tired of having to root login onto the phone and sym-link both tools to standard places the android OS looks for executables as the ssh server is bare bones and has a typical *bear multi-link binary for the basic unix commands (that does not include rsync) I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to do any rsyncing of the files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') where specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc. Tre`
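
    rsync has no per-host client configuration of its own, but since it runs over ssh, a common workaround is to pair an ~/.ssh/config Host alias with a small wrapper script; a sketch, with the alias, address, port and phone-side rsync path made up:

        # ~/.ssh/config
        Host android
            HostName 192.168.1.50
            Port 2222
            User root

        # ~/bin/arsync
        #!/bin/sh
        # wrapper that always passes the phone-specific options
        exec rsync --rsync-path=/data/local/bin/rsync --no-g --no-p --no-t "$@"

    Used as: arsync android:/mnt/sdcard/ ./sdcard-backup/ — the ssh alias handles the host details and the wrapper handles the rsync-only options.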

    Read the article

  • powershell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using powershell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008R2 domain, versus using GPO/ADMX/msi. Here's the situation: Because of a comedy of cumulative corporate bumpfuggery we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008R2 and Windows 7 Pro/Enterprise on very short notice and delivery schedule. Of course, I'm not a windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate' and 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then on to solaris and cisco, then linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms). So we decided to bring in a contractor to do this for us. and they met the deadline. The system is up and mostly usable, and this is good. We would not have been able to do this. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn Microsoft stuff anyway until/if we can get a new contract with these guys for ongoing operations. Here's my question. The contractor used powershell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practices for deployment, configuration and updating microsoft stuff uses elements of GPOs and ADMX templates, along with maybe some third party stuff like PolicyPak. Are there solid reasons that I've not found yet that powershell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background on this. Thoughts? or weblinks? Thanks!

    Read the article

  • A device with an unknown MAC address is connected to my router

    - by Yar
    There is a computer that is not mine that is accessible on my network. I can even access its filesystem via AFP. What I want to know is how the computer could get on my network. My network is secured like this (screenshot of the wireless security settings not included in this excerpt): Does that mean that they've used password-cracking tools? The pass is not easy to guess, but not hard to figure out via brute-force hacking, I guess. If I am being hacked, should I switch to WPA?

    Read the article

  • mod_wsgi - Apache configuration file

    - by Kevin
    Guys, sorry, I'm a newbie to this, but I've been following the mod_wsgi configuration tutorial and it's very spotty. In my httpd.conf file I add the virtual host like so:

        # 'Main' server configuration
        # The directives in this section set up the values used by the 'main'
        # server, which responds to any requests that aren't handled by a
        # <VirtualHost> definition. These values also provide defaults for any
        # <VirtualHost> containers you may define later in the file.
        # All of these directives may appear inside <VirtualHost> containers,
        # in which case these default settings will be overridden for the
        # virtual host being defined.
        ServerName wsgihost
        DocumentRoot "/Library/WebServer/Documents"
        <Directory "/Library/WebServer/Documents">
            Order allow,deny
            Allow from all
        </Directory>
        WSGIScriptAlias /myapp /Users/KL/modwsgi/env/myapp.wsgi
        <Directory "/Users/KL/modwsgi/env">
            <Files myapp.wsgi>
                Order allow,deny
                Allow from all
            </Files>
        </Directory>

    Now, I also added the following to my hosts file:

        127.0.1.1 wsgihost

    but I can't seem to connect. Am I doing something terribly wrong?
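
    A minimal sketch of how this usually looks when the goal is an actual name-based virtual host (the paths are copied from the question, everything else is an assumption); the plain 127.0.0.1 loopback entry tends to be less surprising than 127.0.1.1 on OS X:

        # /etc/hosts
        127.0.0.1   wsgihost

        # httpd.conf (Apache 2.2)
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName wsgihost
            DocumentRoot "/Library/WebServer/Documents"
            WSGIScriptAlias /myapp /Users/KL/modwsgi/env/myapp.wsgi
            <Directory "/Users/KL/modwsgi/env">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    After a restart, http://wsgihost/myapp should then be routed to myapp.wsgi rather than to the main server's DocumentRoot.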

    Read the article

  • nginx configuration file explained

    - by Chris Muench
    I have a few questions about this configuration file, "default", in /etc/nginx/sites-enabled. It is shown below.

        server {
            root /usr/share/nginx/www;
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                proxy_pass http://127.0.0.1:8080;
            }

            location /doc {
                root /usr/share;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            location /images {
                root /usr/share;
                autoindex off;
            }
        }

    There is no "listen" directive, so how does it know to default to 80? The server_name is localhost, so how does another domain work? Why is the location directive embedded in the server directive — does that mean these locations ONLY apply to this server? None of my configs have "listen 80 default_server;", so how does nginx then pick which configuration to use?
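
    For comparison, a sketch of the same block with everything made explicit; the domain is made up. With no listen directive nginx listens on port 80 for that server, location blocks apply only to the enclosing server, and when no server_name matches the request's Host header nginx falls back to the first server defined for that port (or the one marked default_server):

        server {
            listen 80 default_server;              # explicit instead of implied
            server_name example.com www.example.com;

            root /usr/share/nginx/www;
            index index.html index.htm;

            location / {
                proxy_pass http://127.0.0.1:8080;  # only for requests hitting this server block
            }
        }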

    Read the article

  • Using iptables to make a VPN router

    - by lost_in_the_sauce
    I am attempting to make a VPN connection to a third-party VPN site, then forward traffic from my internal computers (ssh and ping for now) out to the VPN site using iptables.

        3rd Party <- (tun0/eth0)Linux VPN Box(eth1) <- Windows7TestBox

    I am running CentOS 6.3 and have two network connections, eth0 (public) and eth1 (private). I am running vpnc-0.5.3-4, which currently connects to my destination. When I connect I am able to ping the destination IP addresses, but that is as far as I can get:

        ping -I tun0 10.1.33.26    # success
        ping -I eth0 10.1.33.26    # fail
        ping -I eth1 10.1.33.26    # fail

    My private-network Windows 7 test box is set up to use the eth1 (private) address of my VPN server as its gateway and can ping it fine. I need iptables to send the Windows 7 traffic out the VPN tunnel. Over the last few days I have tried many different iptables configurations from this site and others; the other examples are either too simple or overly complicated. The only thing this server is doing is connecting to the VPN and forwarding all traffic, so we can "flush" everything and start from scratch here. It is a blank slate.

        #!/bin/bash
        echo "Define variables"
        ipt="/sbin/iptables"

        echo "Zero out all counters"
        $ipt -Z
        $ipt -t nat -Z
        $ipt -t mangle -Z

        echo "Flush all active rules, delete all chains"
        $ipt -F
        $ipt -X
        $ipt -t nat -F
        $ipt -t nat -X
        $ipt -t mangle -F
        $ipt -t mangle -X

        $ipt -P INPUT ACCEPT
        $ipt -P FORWARD ACCEPT
        $ipt -P OUTPUT ACCEPT

        $ipt -t nat -A POSTROUTING -o tun0 -j MASQUERADE
        $ipt -A FORWARD -i eth1 -o eth0 -j ACCEPT
        $ipt -A FORWARD -i eth0 -o eth1 -j ACCEPT
        $ipt -A FORWARD -i eth0 -o tun0 -j ACCEPT
        $ipt -A FORWARD -i tun0 -o eth0 -j ACCEPT

    Again, I have done many variations of the above and many other rules from other posts but haven't been able to move forward. It seems like such a simple task, and yet....
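
    A sketch of what appears to be missing for the Windows box's traffic to reach the tunnel, assuming eth1 faces the LAN and tun0 is the vpnc interface: IP forwarding must be enabled, and the FORWARD rules need to pair eth1 with tun0 (the script above only pairs eth1 with eth0):

        # enable routing (persist as net.ipv4.ip_forward = 1 in /etc/sysctl.conf)
        sysctl -w net.ipv4.ip_forward=1

        # forward LAN traffic into the tunnel and allow replies back
        $ipt -A FORWARD -i eth1 -o tun0 -j ACCEPT
        $ipt -A FORWARD -i tun0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # the existing MASQUERADE rule on tun0 then rewrites the LAN source addresses
        $ipt -t nat -A POSTROUTING -o tun0 -j MASQUERADE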

    Read the article

  • Avoiding configSections in .NET app.config files

    - by Chris Clark
    I'm looking for a way to avoid declaring my configuration section in the configSections inside the App.config file. Basically, I want to specify my configuration information just like I do for built-in .NET systems. For instance, when configuring WCF, I just put stuff in the <system.serviceModel>, I don't have to declare a section in the configSections up top. The same thing applies for <system.diagnostics> and many other namespaces. I know I could just load it up as an XML file and parse through it, but I'd prefer to stick with the pattern if possible. Moreover, looking at the WCF configuration with Reflector, I notice that it uses the same configuration subsystem (defined in System.Configuration). If you're wondering why this is important, it's because it's confusing our IT people. If it were self contained in one place, it would be much easier on them. I also realize I'll lose the ability to have multiple of the same section type, but that's not important in our case.
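
    For what it's worth, the built-in sections get away without a local declaration only because they are already declared higher up the configuration inheritance chain (machine.config / the root web.config). A sketch of that idea, registering a custom section once at the machine level so individual app.config files can use it undeclared; the section name, type, token and attribute below are placeholders, and the assembly must be resolvable (typically GAC'd) on every machine:

        <!-- machine.config (or root web.config), inside <configSections> -->
        <section name="myCompany.settings"
                 type="MyCompany.Config.SettingsSection, MyCompany.Config,
                       Version=1.0.0.0, Culture=neutral, PublicKeyToken=abcdef1234567890" />

        <!-- app.config of any application on that machine -->
        <myCompany.settings environment="Production" />

    The trade-off is that the declaration now lives in a machine-wide file your deployment has to manage, which may or may not be easier on the IT side than one extra configSections entry per application.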

    Read the article

  • How to match 'other' applications to a tag in awesome-wm?

    - by Mnementh
    I use version 3.3.4 of awesome and it is fine. But I miss one thing I could do with an older version of awesome (which did not use Lua for configuration): I could add a matcher with the regexp .* to send all windows without another tag to a specific tag:

        rule { name = ".*" tags = "9" }

    With that, all applications I didn't write another rule for were added to tag 9. How can I do something similar with the configuration in rc.lua?

    Read the article

  • Used SQL Svr 2008 Config Manager to Set Service Account to Local System: What Did It Change?

    - by Frank Ramage
    Direct shot-to-the-foot moment... While setting up individual non-admin accounts for the MSSQLSERVER services, I temporarily set the Server service login to the Local System account. I remembered later that: "SQL Server Configuration Manager performs additional configuration such as setting permissions in the Windows Registry so that the new account can read the SQL Server settings." I want my Local System back (actually, just restored to its original security profile). Any advice? Thanks!

    Read the article

  • Specifying prerequisites for Puppet custom facts?

    - by larsks
    I have written a custom Puppet fact that requires the biosdevname tool to be installed. I'm not sure how to set things up correctly such that this tool will be installed before facter tries to instantiate the custom fact. Facts are loaded early on in the process, so I can't simply put a package { biosdevname: ensure => installed } in the manifest, since by the time Puppet gets this far the custom fact has already failed. I was curious if I could resolve this through Puppet's run stages. I tried:

        stage { pre: before => Stage[main] }
        class { biosdevname: stage => pre }

    and:

        class biosdevname {
          package { biosdevname: ensure => installed }
        }

    But this doesn't work... Puppet loads facts before entering the pre stage:

        info: Loading facts in physical_network_config
        ./physical_network_config.rb:33: command not found: biosdevname -i eth0
        info: Applying configuration version '1320248045'
        notice: /Stage[pre]/Biosdevname/Package[biosdevname]/ensure: created

    Etc. Is there any way to make this work?

    EDIT: I should make it clear that I understand, given a suitable package declaration, that the fact will run correctly on subsequent runs. The difficulty here is that this is part of our initial configuration process. We're running Puppet out of kickstart and want the network configuration to be in place before the first reboot. It sounds like the only workable solution is to simply run Puppet twice during the initial system configuration, which will ensure that the necessary packages are in place. Also, for Zoredache:

        # This produces a fact called physical_network_config that describes
        # the number of NICs available on the motherboard, on PCI bus 1, and on
        # PCI bus 2. The fact value is of the form <x>-<y>-<z>, where <x>
        # is the number of embedded interfaces, <y> is the number of interfaces
        # on PCI bus 1, and <z> is the number of interfaces on PCI bus 2.

        em = 0
        pci1 = 0
        pci2 = 0

        Dir['/sys/class/net/*'].each { |file|
          devname=File.basename(file)
          biosname=%x[biosdevname -i #{devname}]

          case
          when biosname.match('^pci1')
            pci1 += 1
          when biosname.match('^pci2')
            pci2 += 1
          when biosname.match('^em[0-9]')
            em += 1
          end
        }

        Facter.add(:physical_network_config) do
          setcode do
            "#{em}-#{pci1}-#{pci2}"
          end
        end
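
    One pattern that sidesteps the ordering problem, shown only as a sketch: make the fact degrade gracefully when the tool is missing, so the first run installs the package and the next run reports the real value. The guard and the 'unknown' placeholder are additions, not part of the original fact, and the binary path is an assumption:

        # at the top of physical_network_config.rb
        if File.executable?('/sbin/biosdevname')
          # ... existing counting loop and Facter.add block from above ...
        else
          Facter.add(:physical_network_config) do
            setcode { 'unknown' }   # placeholder until biosdevname is installed
          end
        end

    Manifests can then treat physical_network_config == 'unknown' as "not configured yet" instead of failing during the kickstart run.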

    Read the article

  • Configure APC for maximum hit rate

    - by Steven De Groote
    I'm currently running PHP 5 with APC, the latter with its default configuration. However, after setting up Munin to monitor APC, I'm surprised by the results:

        apc.shm_size: 30
        apc.gc_ttl: 3600
        apc.ttl: 0
        Used: 14MB
        Request rate: 100 requests/second
        Fragmentation: 0
        Hit ratio: 80% (dropping to 0 a few times per hour)

    So the obvious question: how can I adapt the configuration to achieve a higher hit rate? I find it very strange that the available memory is not fully used while the hit ratio is still below what I would expect. Thanks for any hints!
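
    A hit ratio that periodically falls to zero often means the cache is being cleared; with apc.ttl = 0, APC expunges the entire cache when it fills up. A starting point, with the values below being guesses to tune against the Munin graphs rather than recommendations:

        ; php.ini / apc.ini
        apc.shm_size = 128M   ; more headroom than the 30 MB default (plain "128" on older APC where the unit is MB)
        apc.ttl      = 7200   ; let stale entries expire individually instead of wiping the whole cache
        apc.stat     = 1      ; keep checking file mtimes (set to 0 only if you flush the cache on deploy)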

    Read the article

  • How to optimize Apache on a web server

    - by Prakash
    How can I optimize the server with the following configuration? It takes too much time to load a page. The hardware is an IBM x3200 M3 server with one Intel Xeon processor and 4 GB RAM. Below is my current configuration for Apache:

        Start Servers: 5 (Default)
        Minimum Spare Servers: 10
        Maximum Spare Servers: 20
        Server Limit: 500
        Max Clients: 500
        Max Requests Per Child: 10000 (Default)
        Keep-Alive: On
        Keep-Alive Timeout: 5
        Max Keep-Alive Requests: 100
        Timeout: 200
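
    As a rough illustration only (the right numbers depend on measuring how much memory one Apache/PHP worker actually uses): with 4 GB of RAM, MaxClients 500 usually means swapping long before 500 workers exist, and a lower ceiling plus a shorter Timeout is a common starting point for the prefork MPM:

        # prefork sketch, assuming roughly 20-30 MB per worker and ~3 GB left for Apache
        <IfModule prefork.c>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      15
            ServerLimit         120
            MaxClients          120
            MaxRequestsPerChild 4000
        </IfModule>
        Timeout          60
        KeepAlive        On
        KeepAliveTimeout 3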

    Read the article
