Search Results

Search found 14146 results on 566 pages for 'iis manager'.


  • Web deploy (msdeploy), syncing everything but sites and pools (but include siteDefaults)

    - by jishi
    Today I do the following to sync two web servers while skipping all site configuration:

        msdeploy -verb:sync -source:webServer -dest:webServer,computerName=web25:8080 -skip:objectName=section,absolutePath=system.applicationHost/sites -skip:objectName=section,absolutePath=system.applicationHost/applicationPools

    However, this also skips siteDefaults (system.applicationHost/sites/siteDefaults), which I would like to sync. There doesn't seem to be a way to "include" a section to override a skip directive, and there doesn't seem to be a way to sync only the siteDefaults section from applicationHost either, since the appHostConfig source only seems to sync a specified site, not siteDefaults. Maybe it is possible to skip using an XPath expression or similar, so that only the site and pool nodes are skipped while siteDefaults is included, but I find the documentation a bit confusing and my XPath is rusty.
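
    If your Web Deploy version supports the xPath attribute on skip directives (recent releases document it alongside objectName and absolutePath), an untested sketch of that idea is to skip only the child elements, so the sites and applicationPools sections still sync and carry siteDefaults and applicationPoolDefaults with them:

        msdeploy -verb:sync -source:webServer -dest:webServer,computerName=web25:8080 ^
            -skip:xPath=//system.applicationHost/sites/site ^
            -skip:xPath=//system.applicationHost/applicationPools/add

    The exact XPath that Web Deploy matches against its object tree may differ from this, so verify with -whatif before running a real sync.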

    Read the article

  • IIS6 Front Page Extension: Set Reply-To in Webbot mailers

    - by hurikhan77
    We are running some old legacy IIS6 websites with FrontPage extensions. The contact forms (using the FrontPage webbot extension) use the administrator's email address as the From address, which is pretty cumbersome, as our customers tend to just click Reply on the requests they receive, so we always get these mails and have to deal with them. How do you set a Reply-To per webbot form? Or how do you set a From or Reply-To globally per IIS6 vhost? In the HTML code it looks like this:

        <!--webbot bot="SaveResults" S-Email-Format="TEXT/PRE" S-Email-Address="[email protected]" ... -->

    But this is the recipient address. Additional question: will it be sufficient to just change this code, or do I need to apply settings elsewhere?
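
    From memory of the FrontPage SaveResults component, there are reply-to attributes that parallel the subject ones. The attribute names below are recalled, not verified, so check them against what FrontPage itself generates for a form with a "Reply-to line" configured before relying on this:

        <!--webbot bot="SaveResults" S-Email-Format="TEXT/PRE"
            S-Email-Address="[email protected]"
            B-Email-ReplyTo-From-Field="TRUE" S-Email-ReplyTo="UserEmail" ... -->

    Here "UserEmail" is assumed to be the name of the form field holding the visitor's address; with B-Email-ReplyTo-From-Field="FALSE", the S-Email-ReplyTo value should be treated as a literal address instead.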

    Read the article

  • SSL certificate only valid when viewed externally

    - by user23522
    We have an SSL certificate installed on our server. When viewed externally it validates correctly; however, when the website is viewed from the server itself, we get an invalid certificate error. We are using the fully qualified domain name to access it in both cases. Is there any reason this should be happening? Cheers.
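
    A first diagnostic step (standard Windows commands; www.example.com is a placeholder for your FQDN): check whether the name resolves to the same address on the server as it does externally, since a hosts-file entry or split-horizon DNS can send the server's own requests to a different binding with a different certificate:

        nslookup www.example.com
        type C:\Windows\System32\drivers\etc\hosts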

    Read the article

  • Integrated Windows Authentication not working in IE only

    - by CoreyT
    In my site I have one folder that does not allow anonymous access. It is set up to use Integrated Windows Authentication, as it is on an AD domain. The login works fine in Firefox, Chrome, even Safari, but not in IE8. Has anyone encountered this before? I can't seem to find anyone else with a similar issue, except for cases where the login fails in all browsers, of course.

    Read the article

  • Port Forwarding: Why do my local sites on 80 work but not those on 8080?

    - by Chadworthington
    I set up my router to forward port 80 to the PC hosting my web site. As a result, I am able to access this URL (don't bother clicking on it, it's just an example): http://my.url.com/ When I click on this link, it works: http://localhost:8080/tfs/web/ I also forward port 8080 to the same web server box, but when I try to access this URL I get the error "Page cannot be displayed": http://my.url.com:8080/tfs/web/ I forwarded port 8080 the same way I forwarded port 80. I also turned off Windows Firewall, in case it was blocking port 8080. Any theories why port 80 works but 8080 does not?
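
    Two quick checks worth running (standard Windows commands; my.url.com stands in for your real hostname). Working on localhost:8080 but not externally is the classic symptom of a site bound only to the loopback address, which netstat will show:

        netstat -an | findstr :8080

    If the listener shows 127.0.0.1:8080 rather than 0.0.0.0:8080 (or your LAN IP), the 8080 site is bound to localhost only and the forwarded traffic never reaches it. Also test from a machine outside your network, e.g.:

        telnet my.url.com 8080

    since testing your own public URL from inside the LAN depends on the router supporting NAT loopback.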

    Read the article

  • Hide subdomain AND subdirectory using mod_rewrite?

    - by Jeremy
    I am trying to hide a subdomain and subdirectory from users. I know it may be easier to use a virtual host, but will that not break direct links pointing at our site? The site currently resides at http://mail.ctrc.sk.ca/cms/ and I want www.ctrc.sk.ca and ctrc.sk.ca to access this folder but still display www.ctrc.sk.ca, if that makes any sense. Here is what our current .htaccess file looks like; we are using Joomla, so there are already a few rules set up. Help is appreciated (a host-mapping rule sketch follows after the EDIT below).

        # Helicon ISAPI_Rewrite configuration file
        # Version 3.1.0.78
        ##
        # @version $Id: htaccess.txt 14401 2010-01-26 14:10:00Z louis $
        # @package Joomla
        # @copyright Copyright (C) 2005 - 2010 Open Source Matters. All rights reserved.
        # @license http://www.gnu.org/copyleft/gpl.html GNU/GPL
        # Joomla! is Free Software
        ##
        #####################################################
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that dissallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        #
        #####################################################

        ## Can be commented out if causes errors, see notes above.
        #Options +FollowSymLinks

        # mod_rewrite in use
        RewriteEngine On

        ########## Begin - Rewrite rules to block out some common exploits
        ## If you experience problems on your site block out the operations listed below
        ## This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        ## Deny access to extension xml files (uncomment out to activate)
        #<Files ~ "\.xml$">
        #Order allow,deny
        #Deny from all
        #Satisfy all
        #</Files>
        ## End of deny access to extension xml files

        RewriteCond %{QUERY_STRING} mosConfig_[a-zA-Z_]{1,21}(=|\%3D) [OR]
        # Block out any script trying to base64_encode crap to send via URL
        RewriteCond %{QUERY_STRING} base64_encode.*\(.*\) [OR]
        # Block out any script that includes a <script> tag in URL
        RewriteCond %{QUERY_STRING} (\<|%3C).*script.*(\>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Send all blocked request to homepage with 403 Forbidden error!
        RewriteRule ^(.*)$ index.php [F,L]
        #
        ########## End - Rewrite rules to block out some common exploits

        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root)
        #RewriteBase /

        ########## Begin - Joomla! core SEF Section
        #
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !^/index.php
        RewriteCond %{REQUEST_URI} (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ [NC]
        RewriteRule (.*) index.php
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
        #
        ########## End - Joomla! core SEF Section

    EDIT: Yes, mail.ctrc.sk.ca/cms/ is the root directory. Currently the DNS redirects from ctrc.sk.ca and www.ctrc.sk.ca to mail.ctrc.sk.ca/cms. However, when it redirects, the user still sees the mail.ctrc.sk.ca/cms/ URL, and I want them to see only www.ctrc.sk.ca.
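
    A hedged sketch of one way to do this with ISAPI_Rewrite's Apache-compatible syntax (untested; it assumes DNS for ctrc.sk.ca and www.ctrc.sk.ca points at this server as an A record rather than an HTTP redirect, since a redirect will always expose the target URL in the address bar). Placed right after RewriteEngine On and before the Joomla SEF section, it maps the public hostnames into /cms/ internally, so the browser keeps showing www.ctrc.sk.ca:

        # Serve /cms/ content for the public hostnames without changing the visible URL
        RewriteCond %{HTTP_HOST} ^(www\.)?ctrc\.sk\.ca$ [NC]
        RewriteCond %{REQUEST_URI} !^/cms/ [NC]
        RewriteRule ^(.*)$ /cms/$1 [L]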

    Read the article

  • Join domain in Windows 7

    - by Hassan Ali Khan
    I have created a domain on a server machine, and I am trying to join the domain from another machine running Windows 7 with the following steps: go to My Computer, Properties, Change settings, Computer Name, click the Change button, select the "Domain" radio button, and enter the domain name. After that, when I click the OK button and enter the username and password credentials, it shows me the following error: "An attempt to resolve the DNS name of a domain controller in the domain being joined has failed. Please verify this client is configured to reach a DNS server that can resolve DNS names in the target domain."
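
    That error almost always means the client's DNS points somewhere other than the domain controller. Two standard commands to fix and verify (corp.example.com, "Local Area Connection", and 192.168.1.10 are placeholders for your domain name, adapter name, and DC address):

        netsh interface ip set dns "Local Area Connection" static 192.168.1.10
        nslookup -type=SRV _ldap._tcp.dc._msdcs.corp.example.com

    If the SRV query returns the domain controller, the join should succeed.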

    Read the article

  • Is it possible to have multiple ReWrite rules that all do the same Action, for an IIS7.5 webserver?

    - by Pure.Krome
    I've got the rewrite module working great on my IIS7.5 site. Now I wish to add a number of URLs that all go to an HTTP 410 Gone status. E.g.:

        <rule name="Old Site = image1" patternSyntax="ExactMatch" stopProcessing="true">
            <match url="image/loading_large.gif"/>
            <match url="image/aaa.gif"/>
            <match url="image/bbb.gif"/>
            <match url="image/ccc.gif"/>
            <action type="CustomResponse" statusCode="410" statusReason="Gone"
                    statusDescription="The requested resource is no longer available" />
        </rule>

    But that's invalid - the website doesn't start, saying there's a rewrite config error. Is there another way I can do this? I don't particularly want to define a single URL and ACTION for each URL.
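
    The config fails because URL Rewrite allows only one <match> element per rule. A single rule whose pattern uses regex alternation covers all the dead URLs at once (rule name is arbitrary):

        <rule name="Old site images - 410" patternSyntax="ECMAScript" stopProcessing="true">
            <match url="^image/(loading_large|aaa|bbb|ccc)\.gif$" />
            <action type="CustomResponse" statusCode="410" statusReason="Gone"
                    statusDescription="The requested resource is no longer available" />
        </rule>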

    Read the article

  • App Pool doesn't respect memory limits

    - by lucuma
    I am dealing with a legacy .NET app that has a memory leak. In order to try to mitigate a runaway memory situation, I've set the app pool memory limits to values anywhere between 500 KB and 500000 KB (500 MB); however, the app pool doesn't seem to respect the settings, as I can log in and see its physical memory at 5 GB and above no matter what the values are. This app is killing the server and I can't seem to determine how to adjust the app pool. What settings do you recommend in order to ensure this app pool doesn't exceed around 500 MB of memory? Here is an example, the app pool is using 3.5GB of
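
    Worth noting: IIS memory limits don't cap allocation; when the worker process crosses the limit, IIS schedules a recycle, so a fast leak can still spike past the number. If this is IIS7 or later, a sketch of setting the private-memory recycle threshold from the command line (pool name is hypothetical; the value is in KB):

        %windir%\system32\inetsrv\appcmd set apppool "MyLegacyPool" /recycling.periodicRestart.privateMemory:512000

    If the pool still balloons, double-check you're editing the pool the site actually runs in, and watch Private Bytes rather than virtual memory, since the two limits (privateMemory vs. memory) track different counters.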

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server. A couple times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this? There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem. If we can't find the cause, we're thinking of a few workarounds we could try: Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule. Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server... (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck. What do you think? Thanks for your help, Richard
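
    For workaround #1, the module removal in the shared web.config would look like this (standard IIS7 integrated-pipeline syntax; WindowsAuthentication is the module's registered name):

        <system.webServer>
          <modules>
            <remove name="WindowsAuthentication" />
          </modules>
        </system.webServer>

    With that in place, no request to those sites can enter WindowsAuthenticationModule at all, which also makes it a clean way to confirm the module really is where things wedge.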

    Read the article

  • Web.config file permissions

    - by ristonj
    I would like to lock down the web.config file as much as possible, so that as few accounts as necessary can read the file. I saw the list here http://msdn.microsoft.com/en-us/library/ms178699.aspx but allowing the Users group read permission on the web.config file seems excessive. Thanks.
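
    In practice, the identity that must be able to read web.config is the worker process identity (plus SYSTEM and Administrators for manageability). A hedged icacls sketch, assuming an IIS 7.5 pool running as ApplicationPoolIdentity with a hypothetical name; substitute NETWORK SERVICE or your custom account otherwise, back up the ACL first, and test, since cutting inheritance can break things you didn't anticipate:

        icacls web.config /save webconfig-acl-backup.txt
        icacls web.config /inheritance:r
        icacls web.config /grant "BUILTIN\Administrators":F "NT AUTHORITY\SYSTEM":F
        icacls web.config /grant "IIS AppPool\MyAppPool":R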

    Read the article

  • Download acceleration with jigdo?

    - by james
    I'm using jigdo-lite to download a Debian DVD ISO. I already have the CD version of the image, so I added the CD's files to the task. Now I need to download many (but not all) of the DVD ISO's files. By default, jigdo-lite uses wget to download files, and jigdo (wget) seems to download only one file at a time over a single connection, so I'm getting a low download speed. How can I accelerate the download with jigdo? Possible solutions:
    1. Using a different download manager with jigdo. Is it possible? If yes, how?
    2. Using jigdo (wget) to download multiple files at once. How?
    3. Getting the download links of the remaining files, so that they can be downloaded with a download manager and later added to the jigdo ISO. How?
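
    For option 3, jigdo-file (which jigdo-lite drives under the hood) has a print-missing command that lists the URIs still needed. A hedged sketch with placeholder filenames (check `jigdo-file help print-missing` for your version's exact options):

        jigdo-file print-missing --image=debian-dvd.iso --jigdo=debian-dvd.jigdo --template=debian-dvd.template

    Feed that list to any parallel downloader, put the downloaded files in a directory, and point jigdo-lite at it as a "files to scan" source. For option 1, jigdo-lite reads its settings from ~/.jigdo-lite, including a wgetOpts line you can tweak, though wget itself still fetches one file at a time.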

    Read the article

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well

    - by Thomas
    Ok, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too - but only in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the Request and Response headers from the page, and they all follow the correct formats. Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based; they never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner. My setup is: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages. Firefox 2, Firefox 3: no timeouts; Firebug shows no errors or even lengthy periods serving the pages. I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general - everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?

    Read the article

  • IIS8 Asp.net State service remote connection failure

    - by maxisam
    Recently we upgraded our web server to Windows Server 2012 with IIS8. We have an issue when clients try to connect to the ASP.NET State Service on this web server remotely. They always get: "Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name." In IIS7/7.5 we used the same approach and it worked fine: as long as the state service was running and the firewall was set properly, we didn't have any problems. However, in IIS8 it doesn't work (we even turned off the firewall to test it). Thanks for helping.
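
    The registry value named in the error still applies on Server 2012; a quick sketch to set it and restart the service (42424 is the state service's default port, which the firewall must allow):

        reg add HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters /v AllowRemoteConnection /t REG_DWORD /d 1 /f
        net stop aspnet_state
        net start aspnet_state

    It may also matter which .NET version's aspnet_state.exe is registered as the service on the new box; matching it to the framework version the web apps run under is a reasonable thing to verify.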

    Read the article

  • Web Farm Framework - added servers show as offline

    - by Johan Wikström
    My problem is that I'm trying to set up a server farm, but the nodes all come up as offline. I don't get a connection error saying that the server name is wrong; I just get "offline" after it tests the connection. I have:
    - set up firewall rules to allow remote administration and file sharing on the Domain and Private network profiles
    - installed Web Farm Framework 2.0 on both servers
    - used a domain account that is an Administrator on both machines.
    www01 is the same server as the controller below, but I get the same results if I try www02 as primary. Any ideas?

    Read the article

  • Basic IIS7 permissions question

    - by Tom Gullen
    We have a website with a file: www.example.com/apis/httpapi.asp. This file is used by the site internally to make requests joining two systems on the website together (one is Classic ASP, the other ASP.NET). However, we do not want the public to be able to access the file. In IIS7.5, is there a setting I can use to make this file internal-only? I've tried rewriting the URL for it, but the rewrite is also applied internally, so the scripts stop working as they fetch the rewritten URL. Thanks for any help!
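
    One hedged option, assuming the internal calls reach the file via the server's own loopback address and the "IP and Domain Restrictions" feature is installed: scope an IP restriction to just that file in the site's web.config. Note the ipSecurity section is locked at server level by default, so it may need unlocking in applicationHost.config first:

        <location path="apis/httpapi.asp">
          <system.webServer>
            <security>
              <ipSecurity allowUnlisted="false">
                <!-- only requests originating from the server itself are allowed -->
                <add ipAddress="127.0.0.1" allowed="true" />
              </ipSecurity>
            </security>
          </system.webServer>
        </location>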

    Read the article

  • SSL Ajax type of certificate for the static domain (image + js)

    - by Alexl
    Hi, I have a page that is served over SSL and has a valid extended validation certificate (mainpage.com). But this page requests some static content from another domain (page-static.com), basically images and JS. Currently I only have a certificate for mainpage.com, so when I request the page I get an invalid SSL warning because it contains content from www.page-static.com that isn't served over a valid certificate. What kind of certificate do I need for www.page-static.com? Do I need the same kind as mainpage.com? Those certificates are expensive (it's an extended validation certificate) - or will a cheap certificate from GoDaddy do the trick? Another question: do both certificates have to be signed by the same root provider and/or use the same encryption key length (or can it be only 128 bits)? Thanks for your help

    Read the article

  • Calling LoadLibraryEx on ISAPI filter failed (v4.0.30319)

    - by rob
    I installed .NET 1.1 on a Windows Server 2008 machine (which already had .NET 4 installed). Afterwards, I started getting the following error: HTTP Error 500.0 - Internal Server Error. Calling LoadLibraryEx on ISAPI filter "C:\Windows\Microsoft.NET\Framework\v4.0.30319\\aspnet_filter.dll" failed. I have tried running aspnet_regiis without success. I have also tried the suggestions by Rick Strahl, but to no avail. I have also removed .NET 4.0.30319 using the cleanup tool; when I reinstalled it, the error was still there. I have since removed 1.1, but I still get the error. Please help.
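
    A hedged next step: list where the v4 filter is registered, since installing 1.1 touches the filter entries and the doubled backslash in the quoted path hints at a malformed entry that LoadLibraryEx then can't resolve:

        %windir%\system32\inetsrv\appcmd list config -section:system.webServer/isapiFilters

    Compare the path in the output against the file on disk, and fix or remove the stale entry (IIS Manager, ISAPI Filters) at both the server and site level.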

    Read the article

  • PHP on IIS7 not showing pages

    - by Jeff
    I have a PHP website on a Windows 7 machine I'm working with, and it cannot be viewed in any browser - IE, Chrome, or Firefox. When navigating to the root of the website (default index.php), the browser reports it cannot find the address - not a 404 error from the webserver, just as if it cannot resolve the name. Other websites in the same default web application that are also PHP work perfectly. I've aligned all folder permissions and everything else, but this has got me stumped. I even went as far as to create a new folder and throw in a test phpinfo() page, and it worked. Then I copied this website's content to the new folder, and it cannot find the index.php page. I've checked all the settings I know and can't seem to find what I'm missing. Has anyone else encountered this issue? Remember the fix for it?

    Read the article

  • Why isn't 'Low Fragmentation Heap' LFH enabled by default on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production Classic ASP website running on IIS6 which seems indicative of memory fragmentation. One of the suggestions of how to ameliorate this came from Stack Overflow: How can I find why some classic asp pages randomly take a real long time to execute?. It suggested flipping a setting in the site's global.asa file to 'turn on' Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick:

        Set LFHObj = CreateObject("TURNONLFH.ObjTurnOnLFH")
        LFHObj.TurnOnLFH()
        application("TurnOnLFHResult") = CStr(LFHObj.TurnOnLFHResult)

    (Really the code isn't that important to the question.) An author of a linked post reported a seemingly magic resolution to this issue, and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned: why is this setting not enabled by default on 2003, or, if it works in 2008, why have Microsoft not issued a patch to enable it by default on 2003? I suspect the answer to the above is the same for both (if there is one). Obviously, we're testing it in a non-production environment, and doing an array of metrics and comparisons to deem if it does help us. But aside from this I'm really just trying to understand if there's any technical reason why we should do this, or if there are any gotchas that we need to be aware of.
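
    For the curious, enabling LFH is a single documented Win32 call; the third-party DLL above presumably wraps something like this minimal C sketch (an illustration, not the actual TURNONLFH source):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* 2 selects the Low Fragmentation Heap front end for the given heap. */
            ULONG heapInfo = 2;

            if (HeapSetInformation(GetProcessHeap(),
                                   HeapCompatibilityInformation,
                                   &heapInfo,
                                   sizeof(heapInfo)))
                printf("LFH enabled on the process heap.\n");
            else
                printf("HeapSetInformation failed: %lu\n", GetLastError());

            return 0;
        }

    One gotcha, recalled from the HeapSetInformation documentation: LFH cannot be enabled on heaps created with HEAP_NO_SERIALIZE, and it is not used while certain debugging tools are attached - constraints that hint at why a blanket default on 2003 would have been riskier than on 2008.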

    Read the article

  • How to setup Database Permissions on SqlServer Express 2008

    - by Timo Willemsen
    I'm using a code-first approach with the Entity Framework. When I first run the application, it will try to create a database matching my MVC models. However, I think it doesn't have permission to create it; I get the following error: CREATE DATABASE permission denied in database 'master'. What user is trying to access SQL Server, and how can I add permissions to let it work? This is the connection string I'm using (which should be right...):

        <add name="ContextDb" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;initial catalog=ContextDb" providerName="System.Data.SqlClient"/>

    Cheers
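
    With Integrated Security=SSPI, the connecting user is the Windows identity the app runs under: your own account under the Visual Studio development server, or the app pool identity under IIS (e.g. IIS APPPOOL\DefaultAppPool or NETWORK SERVICE). A hedged T-SQL sketch granting database-creation rights, run as an admin in SSMS or sqlcmd (MYMACHINE\MyAppUser is a placeholder for whichever identity it turns out to be):

        USE [master];
        GO
        -- create a login for the app's Windows identity, then let it create databases
        CREATE LOGIN [MYMACHINE\MyAppUser] FROM WINDOWS;
        GO
        EXEC sp_addsrvrolemember @loginame = N'MYMACHINE\MyAppUser', @rolename = N'dbcreator';
        GO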

    Read the article

