Search Results

Search found 9934 results on 398 pages for 'iis logs'.

Page 89 of 398

  • How to configure a Web.Config file to allow custom 404 handling while still displaying on-page 500 error detail?

    - by Mark
    To customize 404 handling, and based on the hosting company's suggestion, we are currently using the following web.config setup. However, we quickly realized that with this configuration, any page error (500) also gets redirected to this custom error page. How can I modify this config file so we can continue to handle 404s with a custom file while still being able to view on-page error detail?

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <system.webServer>
          <httpErrors errorMode="DetailedLocalOnly" defaultPath="/Custom404.html" defaultResponseMode="ExecuteURL">
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" prefixLanguageFilePath="" path="/Custom404.html" responseMode="ExecuteURL" />
          </httpErrors>
        </system.webServer>
        <system.web>
          <customErrors mode="On">
            <error statusCode="404" redirect="/Custom404.html" />
          </customErrors>
        </system.web>
      </configuration>
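
    (A note on why this happens: the defaultPath and defaultResponseMode attributes on <httpErrors> apply to every status code, not just 404, which is what drags 500 responses onto the custom page. Below is a minimal sketch of a variant that maps only 404 and leaves other errors alone; errorMode="DetailedLocalOnly" and customErrors mode="RemoteOnly" keep detailed error output visible when browsing on the server itself. This is an illustration, not a tested drop-in for this site.)

      <system.webServer>
        <httpErrors errorMode="DetailedLocalOnly">
          <!-- No defaultPath: only the explicit 404 mapping below is customized -->
          <remove statusCode="404" subStatusCode="-1" />
          <error statusCode="404" path="/Custom404.html" responseMode="ExecuteURL" />
        </httpErrors>
      </system.webServer>
      <system.web>
        <!-- RemoteOnly: full ASP.NET error detail on the server itself, friendly pages for remote users -->
        <customErrors mode="RemoteOnly">
          <error statusCode="404" redirect="/Custom404.html" />
        </customErrors>
      </system.web>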

    Read the article

  • Exchange 2007 OWA not listening on SSL port

    - by krs1
    I have an Exchange 2007 server that went down after a power failure. It has OWA access via SSL both externally and internally. OWA is working fine from the internal network; however, I am getting a timeout when I attempt to connect externally. I pulled up Wireshark and noticed that the server actually redirects to SSL, but for some reason it is not listening on the SSL port, and this seems to be causing the timeout. I normally do only development work, but I'm stuck with this since my sysadmin took off for the week and isn't answering my phone calls. As far as I know it shouldn't be a firewall issue. Aside from me not wanting to work on the damn thing, what should I look for?

    Read the article

  • ConfigurationErrorsException when serving images via UNC on IIS6

    - by Mark Richman
    I have a virtual directory in my web app which connects to a Samba share via UNC. I can browse the files via Windows Explorer without issue, but my web app throws a yellow screen with the following message: Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately. Parser Error Message: An error occurred loading a configuration file: Could not find file '\\cluster\cms\qa-images\120400\web.config'. What makes no sense to me is why it's looking for a web.config in that location. I know it's not an authentication issue because the virtual directory can serve images from its root (i.e. \\cluster\cms\qa-images\test.jpg serves as http://myserver/upload/test.jpg just fine).

    Read the article

  • App Pool crashes before loading mscorsvr. How to troubleshoot?

    - by codepoke
    I have an app pool that recycles every 29 hours, which is the default. It recycles smoothly 9 times out of 10, and I'm pretty sure the recycle itself is good for the app. Once every couple of weeks the recycle does not work. The old worker process dies cleanly and the new worker process starts, but will not serve up content. Recycling the app pool again manually works like a charm. The failed worker process stops and dies cleanly and a second new worker process fires up and serves content perfectly. I took a crash dump against the failed worker process prior to recycling it, and DebugDiag found nothing to complain about. I tried to dig a little deeper using WinDBG, but mscorsvr/mscorwks is not loaded yet 15 minutes after the new process started. There are 14 threads running (4 async) and 20 pending client connections, but .NET is not even loaded into the process yet. Any suggestions on where to poke and prod to find a root cause for this?

    Read the article

  • IIS7: hundreds of connections in CLOSE_WAIT

    - by rjlopes
    I have a .NET application on my IIS7 server. It was working fine until I had to move it to another server. I moved the exact same code to the new server, and I noticed that after some hours the website stopped responding to remote requests, but if I did remote desktop to the server it still responded to requests made to localhost. If I stopped the website and the application pool, it started working fine again. I was able to track the problem to hundreds of connections left in CLOSE_WAIT state on the HTTP port that are never closed (I waited a few hours and they remained the same). Any ideas?

    Read the article

  • How do I scale EC2 and push out code / data to my instances?

    - by chris
    Unfortunately I only have limited knowledge of server architecture; I come from a development background. I am looking to ensure my new app can scale properly using EC2. I currently have a t1.micro for development, running Windows with SQL Server 2008. The system allows students to come to our site to search for a mentor, update their profile with pictures and employment history, etc. Roughly the same sort of work as a LinkedIn profile. I need this to be able to scale very quickly without wasted resources. I understand that separation of the data, application, etc. is important. I think I will achieve this by hosting images on S3, moving the database to an RDS instance, and upgrading the EC2 instance. My main question is: how do I push data / code out to multiple EC2 / RDS instances seamlessly?

    Read the article

  • Transparently rewrite requests to a subdomain.

    - by ptrin
    I would like to rewrite requests to http://www.mysite.com/foo to http://foo.mysite.com without the user's address bar changing. Using IIRF I can do the rewrite, but only if I use the [R] modifier flag, which makes the rewrite a redirect. Is there a way for me to transparently rewrite requests to a subdomain? Here's the rewrite rule I've been testing with:

      RewriteRule ^/foo/(.*)?$ http://foo.mysite.com/index.html?$1 [R,L]
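
    (For comparison, IIS's own URL Rewrite module - a different tool from IIRF - expresses this as a rewrite rather than a redirect; rewriting to a different host such as foo.mysite.com additionally needs Application Request Routing (ARR) with proxying enabled. A rough sketch, assuming those modules are installed, not a drop-in IIRF rule:)

      <system.webServer>
        <rewrite>
          <rules>
            <!-- Rewrite, not redirect: the visitor keeps seeing www.mysite.com/foo/... -->
            <rule name="Foo subdomain proxy" stopProcessing="true">
              <match url="^foo/(.*)$" />
              <action type="Rewrite" url="http://foo.mysite.com/{R:1}" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>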

    Read the article

  • Securing internal data accessed by a website on the big, bad internet

    - by aehiilrs
    A close relative of this question on Stack Overflow: When you have a web site in your DMZ that needs to access production data stored on an internal DB, what strategies do you recommend using to lower the risks that come from accessing live data? Is it even considered acceptable to have a connection initiated from the DMZ come inside your network? An extra detail about the nature of the site that kind of throws a monkey wrench into the machinery is that people using the web site will be competing for "spots" on a first-come, first-served basis with others using the internal software. Because of this, as close to zero lag time between the two applications as possible is ideal.

    Read the article

  • How do I upload large (30MB) files via a web interface?

    - by Dan
    Because I'm stumped... The client needs to be able to upload large images to a library, but the upload fails after 5-6 MB (over my poor connection). It seems to be timing out, as the file size at which it fails isn't consistent. The setup is a form which is accepted by PHP. I've googled and played with php.ini, and everything is set for big uploads and long timeouts. The platform is a dedicated Windows server at GoDaddy. What's going wrong?
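
    (One IIS-side setting worth checking in addition to php.ini - where upload_max_filesize, post_max_size and max_input_time are the usual suspects - is the request size limit enforced by IIS request filtering, which defaults to roughly 30 MB on IIS 7 and later. A failure at 5-6 MB sounds more like a timeout than a size cap, but for 30 MB uploads the limit is worth raising anyway. A sketch, assuming IIS 7+ with a web.config in the site root:)

      <system.webServer>
        <security>
          <requestFiltering>
            <!-- 52428800 bytes = 50 MB; the default maxAllowedContentLength is 30,000,000 bytes -->
            <requestLimits maxAllowedContentLength="52428800" />
          </requestFiltering>
        </security>
      </system.webServer>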

    Read the article

  • web.config caching differences

    - by Ivanhoe123
    What are the differences between these two approaches to caching (set up through the web.config file)?

      <caching>
        <profiles>
          <add extension=".php" policy="DisableCache" kernelCachePolicy="DisableCache" />
          <add extension=".html" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="7:00:00" />
        </profiles>
      </caching>

    and:

      <staticContent>
        <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
      </staticContent>

    If the first one has img/css/js extensions added, would both work the same way? Thank you!
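
    (The two elements do different jobs: <caching> controls IIS's server-side output/kernel cache, i.e. whether IIS keeps a copy of the response in memory on the server, while <clientCache> sets the Cache-Control: max-age header that tells browsers how long to reuse a file without asking again. Note also that the durations differ - "7:00:00" reads as seven hours, "7.00:00:00" as seven days. So adding img/css/js extensions to the first block would not replicate what the second one does; sites often use both together, roughly along the lines of this sketch with example extensions:)

      <system.webServer>
        <!-- Server-side output caching: IIS reuses the cached response for seven hours -->
        <caching>
          <profiles>
            <add extension=".jpg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="07:00:00" />
            <add extension=".css" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="07:00:00" />
            <add extension=".js" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="07:00:00" />
          </profiles>
        </caching>
        <!-- Client-side caching: browsers are told to reuse static files for seven days -->
        <staticContent>
          <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
        </staticContent>
      </system.webServer>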

    Read the article

  • PDF download hanging from server with Firefox/Chrome

    - by Cruachan
    I have a Windows 2008 R2 (virtual) server running a number of websites. My client has uploaded several PDFs by FTP to a download directory from where they can be retrieved via a web page. This works fine in IE and Safari, but when attempting to download with Firefox or Chrome both browsers hang and Firefox posts 'stopped' in the status bar at the bottom of the page. We've tried this on several PCs at different locations so I think this is a server problem. Why would this be? Is there some configuration setting I need to change?

    Read the article

  • What are the most likely reasons an application would fail on only one of my servers?

    - by Rising Star
    I have several servers to test new code on. I primarily push out ASP.NET web applications. Last week, I had an issue where I installed a newly developed web application on three servers. The three servers all run in separate environments. The application worked fine on two of them, but consistently crashed on the third server with each web request. The problem was eventually traced to an in-house-developed .dll file being out of date on the third server. I'm certain that this kind of thing happens all the time. However, there are numerous things that could go wrong to cause this kind of behavior. I spent quite a bit of time tracing this problem. I would like to make a list of things to be suspicious of next time this happens. What are the most likely reasons that a web application would crash on one of my servers while identical code runs fine on another?

    Read the article

  • Is there any IP which refers to the internet?

    - by victorferreira
    Hello guys, good morning. 127.0.0.1 is the local IP, isn't it? Is there an IP that I can use to refer to any user on the internet? The thing is: I want to block a specific file from being accessed from the internet using IIS 6, but it doesn't show me a clear way to do this. I can, however, define some security rules, for example IP restrictions for a file. So I could deny everybody (using that IP I am asking about) and allow only the internal IP. Any suggestions?
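
    (There is no single address that stands for "the internet"; the usual pattern is deny-by-default plus explicit allow entries for the internal addresses. In IIS 6 this lives in the Directory Security tab of the file's properties rather than in web.config. For reference, the IIS 7+ equivalent looks roughly like the sketch below - the file path and the 192.168.0.0/255.255.0.0 range are example values only, and the feature requires the "IP and Domain Restrictions" role service:)

      <location path="private/somefile.pdf">
        <system.webServer>
          <security>
            <!-- allowUnlisted="false" denies everyone not explicitly listed, i.e. the internet at large -->
            <ipSecurity allowUnlisted="false">
              <add ipAddress="192.168.0.0" subnetMask="255.255.0.0" allowed="true" />
            </ipSecurity>
          </security>
        </system.webServer>
      </location>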

    Read the article

  • Access Virtual PC Website from Host Browser

    - by rams
    I have Virtual PC 2007 running Windows XP. I can access a website set up on the virtual machine from a browser in the virtual machine. How do I set up the virtual machine so I can access the website from a browser on the host machine? The host machine is also running WinXP. Both host and virtual machine can ping each other via IP and computer name. TIA, rams

    Read the article

  • Exchange 2013 really slow outside of localhost

    - by ItsJustJP
    We've got a 12-core Xeon server with 24 GB of RAM running Windows Server 2012. We've recently migrated from Exchange 2010 (which was on another server) to Exchange 2013, which resides on our new 12-core server. Accessing OWA on the Exchange server itself is fine; it's very quick and responsive. However, accessing it from any other computer connected to the domain over a 1 Gbps connection takes 10-15 seconds to load. Also running slow are the public calendars that people in my place need to access, again taking 10-15 seconds to open, which can sometimes cause Outlook to stop responding. Further to that, we have phones that connect via the internet (of course) to the Exchange server so people can get work emails when they are out of the office. Guess what: this is also running slow. I have searched for many solutions and have tried changing Outlook authentication methods, but there is no change in speed. The old Exchange 2010 server no longer exists, but there was no problem before the migration. Has anyone got any suggestions? Thanks :) I must also mention that the Server 2012 machine that Exchange 2013 is installed on is also the DC. Update: it would appear that any connection via HTTPS is slow. It took more than 15 minutes for an Outlook client to download 50 MB of email (Outlook Anywhere).

    Read the article

  • Website is not accessible from server which is using proxy

    - by Bhoot
    I hosted a website on a Windows 2008 R2 server which runs in a private domain. I set up bindings on ports 80 and 443 for HTTP and HTTPS respectively, and created inbound rules for ports 80 and 443 in Windows Firewall as well. After doing all this, I am still not able to access my website from a remote machine. IE: Internet Explorer cannot display the webpage. Chrome: Oops! Google Chrome could not find xxxxxx. I tried accessing the website by IP address, but no luck. I tried to ping the server, but it says TTL expired in transit. I then found some information on the internet about checking whether the server is using any kind of proxy in between. I found my IP address at www.getip.com, but ipconfig /all gives me a different IP address. Is it really a problem if we use a proxy? I am not sure if I have concluded this correctly, but is there any way to resolve this issue? Update: I figured it out. I have to call the website with the external IP address; due to the proxy settings I was not able to reach it by the server's IP or machine name.

    Read the article

  • Best way to set up servers for .NET performance

    - by msigman
    Assume we have 3 physical servers, and let's say we are only interested in performance, not reliability. Is it better to give each server a specific function, or to make them all duplicates and split the traffic between them? In other words, dedicate one as a DB server, one as a web server, and one as a reporting server/data warehouse, or is it better to put all three services on each server and use them as a web farm?

    Read the article

  • "StartTag: invalid element name" in default.aspx

    - by Epaga
    (Warning: ASP newbie.) I have an .aspx file with the tag <%@ Page Language=VB ... %> right at the beginning of the file. When I call it on my IIS server (http://localhost/myservice/default.aspx), I get the error: "This page contains the following errors: error on line 1 at column 2: StartTag: invalid element name. Below is a rendering of the page up to the first error." What am I doing wrong?
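
    (That message is a browser-side XML parsing error, which suggests the raw .aspx markup is reaching the browser without being executed - the <%@ at column 2 is being read as a malformed XML tag. That usually points at the .aspx handler mapping or the ASP.NET registration for the site rather than at the page itself; re-registering ASP.NET with IIS (aspnet_regiis -i from the framework directory) is a commonly suggested first check. Separately, attribute values in the Page directive are conventionally quoted; a typical directive looks like the line below, where the CodeFile and Inherits values are placeholders for this example:)

      <%@ Page Language="VB" AutoEventWireup="false" CodeFile="Default.aspx.vb" Inherits="_Default" %>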

    Read the article

  • Deploying MVC2 application to IIS7.5 - Ninject asked to provide controllers for content files

    - by Rune Jacobsen
    I have an application that started life as an MVC (1.0) app in Visual Studio 2008 SP1, with a bunch of Silverlight 3 projects as part of the site. Nothing fancy at all. It uses Ninject for dependency injection (first version 2 beta, now the released version 2 with the MVC extensions). With the release of .NET 4.0, VS2010, MVC2, etc., we decided to move the application to the newest platform. The conversion wizard in VS2010 apparently took care of everything, with one exception - it didn't change references to MVC 1 to point to MVC 2, so I had to do that manually. Of course, this makes me think about other MVC2 things that could be missing from my app that would be there if I did File - New Project... But that is not the focus of this question.

    When I deploy this application to the IIS 7.5 server (running on Win2008 R2 x64), the application as such works. However, images, scripts and other static content don't seem to exist. Of course they are there on disk on the server, but they don't show up in the client web browser. I am fairly new to IIS, so the only trick I knew was to open the web page in a browser on the server, as that could give me more information. And here, finally, we meet our enemy. If I try to go directly to the URL of one of the images (http://server/Content/someimage.jpg for instance), I get the following error in the browser: The IControllerFactory 'Ninject.Web.Mvc.NinjectControllerFactory' did not return a controller for a controller named 'Content'.

    Aha. The web server tries to feed this request to MVC, which with its default routing setup assumes Content to be a controller, and fails. How can I get it to treat Content/ and Scripts/ (among others) as non-controllers and just pass the static content through? This of course works with Cassini on my developer machine, but as soon as I deploy, this problem hits. I am using the latest version of Ninject MVC 2, where the IoC tool should pass missing controllers to the base controller factory, but this has apparently not helped. I have also tried to add ignore routes for Content etc., but this apparently has no effect either. I am not even sure I am addressing the problem on the right level. Does anyone know where to look to get this app going? I have full control of the web server, so I can more or less do whatever I want to it, as long as it starts working. Thanks!
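
    (One web.config setting that is often involved when requests for static files end up in managed routing under IIS 7.5 integrated mode is runAllManagedModulesForAllRequests: when it is true, every request, including /Content/... and /Scripts/..., runs through the managed pipeline. The fragment below is only a sketch of the relevant section, not a confirmed fix for this particular deployment:)

      <system.webServer>
        <!-- With this set to false, requests for files that exist on disk are handled by the
             static file handler instead of being pushed through managed routing. -->
        <modules runAllManagedModulesForAllRequests="false">
          <!-- existing module registrations stay as they are -->
        </modules>
      </system.webServer>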

    Read the article
