Search Results

Search found 52885 results on 2116 pages for 'http redirect'.


  • Final ever Virtualisation for Developers slidedeck from NxtGenUG Cambridge

    - by Liam Westley
    Thanks to Chris Hay, Allister Frost and the guys from NxtGenUG Cambridge for hosting an evening of virtualisation, and to their secretary Rachel Hawley for sorting out all the dates and details ;-). It was a good turnout so close to Christmas; obviously the bribe of home made mince pies got some people out on a cold wintery December evening. Big thanks to Allister for driving me to the railway station to ensure I made the 22:29 train, made all the easier by quaffing a couple of very well kept pints of Adnams Broadside in The Punter after the presentation.

    For those who want the last ever slide decks, they're available here in PDF and PowerPoint format:
        http://www.tigernews.co.uk/blog-twickers/nxtgenugcambs/Virt4DevsPdf.zip
        http://www.tigernews.co.uk/blog-twickers/nxtgenugcambs/Virt4DevsPowerPoint.zip

    And a final thanks to all the user groups who have hosted a Virtualisation or Hyper-V talk in the past two years and gave me a chance to enthuse developers about virtualisation:
        Dot Net Developers Network, Bristol * (http://www.dotnetdevnet.com/)
        DeveloperDeveloperDeveloper 7, Reading (DDD7)
        NxtGenUG, Oxford * (http://www.nxtgenug.net/Region.aspx?RegionID=3)
        NxtGenUG, Birmingham (http://www.nxtgenug.net/Region.aspx?RegionID=2)
        DeveloperDeveloperDeveloper Scotland 2, Glasgow (2011 event details)
        DevEvening, Woking (http://www.devevening.co.uk/)
        VistaSquad, London (R.I.P. 2010)
        NxtGenUG, Southampton (http://www.nxtgenug.net/Region.aspx?RegionID=9)
        GL.Net, Gloucester (http://www.gl-net.org.uk/)
        NxtGenUG, Manchester (http://www.nxtgenug.net/Region.aspx?RegionID=11)
        London .NET User Group, London (http://www.dnug.org.uk/)
        VBUG, Bracknell (http://www.vbug.co.uk/events/default.aspx?region=Reading)
        NEBytes, Newcastle Upon Tyne (http://www.nebytes.net/)
        VBUG, London (http://www.vbug.co.uk/events/default.aspx?region=London)
        NxtGenUG, Hereford (http://www.nxtgenug.net/Region.aspx?RegionID=10)
        NxtGenUG, Cambridge (http://www.nxtgenug.net/Region.aspx?RegionID=8)
    * twice, for both Virtualisation for Developers and Hyper-V for Developers

    Virtualisation for Developers 2008 - 2010 R.I.P.
    Hyper-V for Developers 2009 - 2010 R.I.P.

    Read the article

  • Website pages very slow to load on first access

    - by Merianos Nikos
    I have a production web server where sites are very slow the first time they are loaded. After the first request, all connections are normal and fast. The following screenshots show the first test I ran for a random site on my server and the result of the second request: as you can see, there is a big difference on the second load of the same page. There is also a page load time graph. My problem is that I don't really know anything about Red Hat Linux servers, so I don't know where to start. Can somebody help me? I'd like to find the cause of the long "wait" time in the connection. I know my question is very minimal, but you can always ask me for more information about the server.
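
    A quick way to see which phase eats the time on a cold hit (DNS, TCP connect, or the server's first byte) is curl's timing report; this is purely a diagnostic sketch, with the URL as a placeholder:

        # run twice: once cold, once warm, and compare the numbers
        curl -o /dev/null -s -w "dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n" http://example.com/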

    Read the article

  • AWS lighttpd: Sending a copy of requests to a test server

    - by Martin
    I have a load balanced service on AWS, so the ELB evenly distributes the load across my servers. Each server runs lighttpd, which does logging and forwards the requests to my service (on the same machine). I have written a new version of the service. It is installed and running on an EC2 machine, test1 (basically a mirror of our current server, but with the new service running instead of the original), and I have done some preliminary tests that look good. What I would like to do now is mirror a fraction of incoming traffic to the new version so I can compare the original and the new version on real traffic. I was thinking I could modify one box behind the ELB to duplicate its traffic to test1: configure lighttpd so that each request is mirrored, i.e. the original service keeps responding as before, but a mirror request is sent to test1 and its reply is just dropped. Unfortunately I have not been able to work this out. Any ideas on how I could mirror the requests from one box to both itself and test1? Or any other ideas for testing?
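
    lighttpd has no obvious built-in way to duplicate a request to a second backend, so one low-tech alternative is to replay the access log against test1 as traffic arrives. A rough sketch, assuming the default combined log format and that replaying GETs against the test box is side-effect free (the hostname and log path are placeholders):

        # replay logged GET request paths against the test box
        tail -f /var/log/lighttpd/access.log | \
        awk '$6 == "\"GET" { print $7 }' | \
        while read -r path; do
            curl -s -o /dev/null "http://test1.internal${path}" &
        done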

    Read the article

  • Java Resources for Windows Azure

    - by BuckWoody
    Windows Azure is a Platform as a Service – a PaaS – that runs code you write. That code doesn't just mean the languages on the .NET platform – you can run code from multiple languages, including Java. In fact, you can develop for Windows and SQL Azure using not only Visual Studio but the Eclipse Integrated Development Environment (IDE) as well. Although not an exhaustive list, here are several links that deal with Java and Windows Azure:
        Windows Azure Java Development Center: http://www.windowsazure.com/en-us/develop/java/
        Java Development Guidance: http://msdn.microsoft.com/en-us/library/hh690943(VS.103).aspx
        Running a Java Environment on Windows Azure: http://blogs.technet.com/b/port25/archive/2010/10/28/running-a-java-environment-on-windows-azure.aspx
        Run Java with Jetty in Windows Azure: http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx
        Using the plugin for Eclipse: http://blogs.msdn.com/b/craig/archive/2011/03/22/new-plugin-for-eclipse-to-get-java-developers-off-the-ground-with-windows-azure.aspx
        Run Java with GlassFish in Windows Azure: http://blogs.msdn.com/b/dachou/archive/2011/01/17/run-java-with-glassfish-in-windows-azure.aspx
        Improving experience for Java developers with Windows Azure: http://blogs.msdn.com/b/interoperability/archive/2011/02/23/improving-experience-for-java-developers-with-windows-azure.aspx
        Java Access to SQL Azure via the JDBC Driver for SQL Server: http://blogs.msdn.com/b/brian_swan/archive/2011/03/29/java-access-to-sql-azure-via-the-jdbc-driver-for-sql-server.aspx
        How to Get Started with Java, Tomcat on Windows Azure: http://blogs.msdn.com/b/usisvde/archive/2011/03/04/how-to-get-started-with-java-tomcat-on-windows-azure.aspx
        Deploying Java Applications in Azure: http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx
        Using the Windows Azure Storage Explorer in Eclipse: http://blogs.msdn.com/b/brian_swan/archive/2011/01/11/using-the-windows-azure-storage-explorer-in-eclipse.aspx
        Windows Azure Tomcat Solution Accelerator: http://archive.msdn.microsoft.com/winazuretomcat
        Deploying a Java application to Windows Azure with Command-line Ant: http://java.interoperabilitybridges.com/articles/deploying-a-java-application-to-windows-azure-with-command-line-ant
        Video: Open in the Cloud: Windows Azure and Java: http://channel9.msdn.com/Events/PDC/PDC10/CS10
        AzureRunMe: http://azurerunme.codeplex.com/
        Windows Azure SDK for Java: http://www.interoperabilitybridges.com/projects/windows-azure-sdk-for-java
        AppFabric SDK for Java: http://www.interoperabilitybridges.com/projects/azure-java-sdk-for-net-services
        Information Cards for Java: http://www.interoperabilitybridges.com/projects/information-card-for-java
        Apache Stonehenge: http://www.interoperabilitybridges.com/projects/apache-stonehenge
        Channel 9 Case Study on Java and Windows Azure: http://www.microsoft.com/casestudies/Windows-Azure/Gigaspaces/Solution-Provider-Streamlines-Java-Application-Deployment-in-the-Cloud/400000000081

    Read the article

  • IIS7 rejecting POST requests with 400 error

    - by Eli
    I have a web application that is supposed to handle POST requests from SAP. This has been working fine at other customers with win2k3 (IIS6) and win2k8 (IIS7) systems. However, on this specific customer's site, IIS responds with a 400 error without ever calling my aspx page. In fact, the request doesn't even appear in the W3C log for the virtual directory. I do see the request using Network Monitor, so I know no firewalls and the like are eating it, and as far as I can tell all of the fields of the request are valid: there is a Content-Length header and it looks correct (the request is sending a 28K TIFF file, which isn't MIME encoded, curiously enough, now that I think of it...). Ideas?
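
    When IIS7 answers 400 before the request ever reaches the site, it is often http.sys rejecting the request at the kernel level, and those rejections are recorded in the HTTPERR log rather than in the site's W3C log. A quick check, using the default log location (adjust the path if it has been moved):

        rem reason codes such as BadRequest or Hostname appear at the end of each line
        findstr /i "400" C:\Windows\System32\LogFiles\HTTPERR\httperr*.log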

    Read the article

  • nginx: server_name and server_addr wrong with reverse proxy in front of it

    - by user41356
    I have stunnel in front of nginx in order to handle SSL. (I'm aware that nginx can handle SSL, but I'm migrating off nginx and this is a necessary step.) Stunnel and nginx are running on the same box. Without stunnel in front of it, nginx got the server_addr and server_name as the public IP of the box and the domain of the URL I was fetching, respectively. Now with stunnel, nginx thinks the server_addr and server_name are 127.0.0.1 and localhost respectively. This is screwing up a bunch of things. How can I make nginx get (or stunnel send) the correct server_addr and server_name?
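
    Two hedged options, depending on what the versions in play support (the version thresholds below are assumptions to verify): the quick one is to point stunnel's connect at the box's public address instead of loopback, so nginx at least sees the real interface; the fuller one is the PROXY protocol, which stunnel can speak (protocol = proxy, roughly 4.45+) and nginx can consume (1.5.12+ with the realip module):

        ; stunnel.conf: use the public interface rather than 127.0.0.1
        ; (203.0.113.10 is a placeholder for the box's public IP)
        [https]
        accept  = 443
        connect = 203.0.113.10:80
        ; optionally, on newer stunnel builds:
        ; protocol = proxy

        # nginx: accept the PROXY protocol header and restore the client address
        server {
            listen 80 proxy_protocol;
            set_real_ip_from 127.0.0.1;
            real_ip_header   proxy_protocol;
            server_name example.com;
        }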

    Read the article

  • Apache has a 2GB file limit on a CIFS network drive?

    - by netvope
    Setup:
        A Windows and an Ubuntu server are hosted in VMware ESXi
        I have a 6GB file on a Windows share
        The Windows share is mounted on Ubuntu with smbmount
        A symlink pointing to the 6GB file is created in a public_html directory, which is readable by Apache
    The problem:
        wget gets the error "Connection closed at byte 2130706432. Retrying." after downloading 2130706432 bytes (exactly 2032 MiB, and it is the same every time)
        Apache returns 206 Partial Content without showing any errors in the log
        Same error even if I download from localhost
        Similar error when Firefox is used instead of wget
        No error if I md5sum or cp the file on Ubuntu, suggesting that smbmount and the Windows server are OK with 6GB files
        No error if Apache serves a 6GB file from the local disk, suggesting that Apache has no problems handling 6GB files
    Any ideas why Apache/symlink/smbmount/Windows would cause an error when used together? How can I fix the problem?
    Software used:
        VMware ESXi 4 Update 1
        Windows Server 2008 R2
        Ubuntu 8.04 Server, vmxnet3
        Apache 2.2.8
        mount.cifs 1.10-3.0.28a
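
    One thing that may be worth ruling out: Apache's sendfile()/mmap() optimizations are known to misbehave on some network filesystems, and 2130706432 is exactly 0x7F000000, which smells like a 32-bit limit somewhere in the Apache-to-CIFS path. A hedged experiment, not a confirmed fix, for the config serving public_html:

        # serve files through normal read/write instead of sendfile()/mmap()
        EnableSendfile Off
        EnableMMAP Off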

    Read the article

  • Temporarily Utilizing 304 Header on Apache for Crawlers

    - by Volomike
    I have a client who has a hosting arrangement with 400 customer sites, all hosted through suPHP in CGI mode on Apache. The sysop is now gone and the client is calling on me for rolling out a new PHP thing. Trouble is, server load is very high right now and we have found that it's due to the crawlers. We had one customer in particular who complained of slow websites, so we engaged a 304 header plugin on his site against most crawlers, and his site perked right up. We'd like to lower that load by issuing a global 304 header to all the crawlers while letting human visitors through. I have a long list of user agent keywords to trap for. What's the best way to temporarily engage that global 304 header while allowing human visitors to get right on through? I mean, I could roll out 400 .htaccess file changes, but it would be ideal to make this change in one central Apache config and have it affect all the sites at once.
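
    A hedged sketch of a central approach, assuming mod_rewrite is available server-wide; the user-agent list is illustrative, not complete, and the R=304 behaviour of mod_rewrite should be verified on a staging vhost before rolling it out (virtual hosts may also need RewriteOptions Inherit to pick up rules from the main config):

        # in the global httpd.conf
        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} (googlebot|bingbot|slurp|baiduspider) [NC]
        RewriteRule .* - [R=304,L]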

    Read the article

  • Nginx returning 444 for PUT and DELETE

    - by Zorrocaesar
    I'm trying to build a REST API through Nginx, and everything works fine except when the requests are PUT or DELETE. In those cases Nginx returns 444 (no response). I did some research and all I could find was something about Nginx being configured with the "--with-http_dav_module" option. I've checked that with nginx -V, and it seems it was configured with this. So, any idea what else it could be?
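
    Nginx normally only emits 444 when a configuration rule explicitly says return 444, so it's worth grepping the config (including anything pulled in via include) for a method filter. This is a sketch of the kind of copy-pasted block that produces exactly this symptom, not necessarily what is in this particular config:

        # a common "hardening" snippet that silently drops non-GET/HEAD/POST requests
        if ($request_method !~ ^(GET|HEAD|POST)$) {
            return 444;
        }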

    Read the article

  • CUPS basic auth error through web interface

    - by Inaimathi
    I'm trying to configure CUPS to allow remote administration through the web interface. There's enough documentation out there that I can figure out what to change in my cupsd.conf (changing Listen localhost:631 to Port 631, and adding Allow @LOCAL to the /, /admin and /admin/conf sections). I'm now at the point where I can see the CUPS interface from another machine on the same network. The trouble is, when I try to Add Printer, I'm asked for a username and password, but my response is rejected even when I know I've gotten it right (I assume it's asking for the username and password of someone in the lpadmin group on the server machine; I've sshed in with the credentials it's rejecting, and the user I'm using has been added to the lpadmin group). If I disable auth outright, by changing DefaultAuthType Basic to DefaultAuthType None, I get an "Unauthorized" error instead of a password request when I try to Add Printer. What am I doing wrong? Is there a way of letting users from the local network administer the print server through the CUPS web interface? EDIT: By request, my complete cupsd.conf (spoiler: minimally edited default config file that comes with the edition of CUPS from the Debian wheezy repos):

        LogLevel warn
        MaxLogSize 0
        SystemGroup lpadmin
        Port 631
        # Listen localhost:631
        Listen /var/run/cups/cups.sock
        Browsing On
        BrowseOrder allow,deny
        BrowseAllow all
        BrowseLocalProtocols CUPS dnssd
        # DefaultAuthType Basic
        DefaultAuthType None
        WebInterface Yes

        <Location />
          Order allow,deny
          Allow @LOCAL
        </Location>
        <Location /admin>
          Order allow,deny
          Allow @LOCAL
        </Location>
        <Location /admin/conf>
          AuthType Default
          Require user @SYSTEM
          Order allow,deny
          Allow @LOCAL
        </Location>

        # Set the default printer/job policies...
        <Policy default>
          # Job/subscription privacy...
          JobPrivateAccess default
          JobPrivateValues default
          SubscriptionPrivateAccess default
          SubscriptionPrivateValues default
          # Job-related operations must be done by the owner or an administrator...
          <Limit Create-Job Print-Job Print-URI Validate-Job>
            Order deny,allow
          </Limit>
          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>

        # Set the authenticated printer/job policies...
        <Policy authenticated>
          # Job/subscription privacy...
          JobPrivateAccess default
          JobPrivateValues default
          SubscriptionPrivateAccess default
          SubscriptionPrivateValues default
          # Job-related operations must be done by the owner or an administrator...
          <Limit Create-Job Print-Job Print-URI Validate-Job>
            AuthType Default
            Order deny,allow
          </Limit>
          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>

    Read the article

  • Identical traffic

    - by Walter White
    Hi all, I am running an application server and logging all requests for later analysis. One interesting trend I noticed last night: a visitor from Texas on FiOS shared identical traffic with a Blue Coat proxy in California. What would cause the traffic to be identical? For every request the visitor made, the Blue Coat box made one subsequently, within milliseconds of his request. If it is caching, why would there be identical requests? Wouldn't it go through the cache/proxy on their end, so that I would only see the proxied request? I'm just curious; this is an interesting pattern that shows similarities to a DDoS attack, but with far fewer resources. Is it possible that the visitor had malware on their computer? Any other ideas? Walter

    Read the article

  • Get Squid to pass X-Requested-With header

    - by tftd
    I have configured a squid 3.1 proxy server. Everything works great except for the X-Requested-With header: I can't figure out how to pass that header through to the site I'm attempting to open via the proxy. This is my current configuration:

        request_header_access Allow allow all
        request_header_access Authorization allow all
        request_header_access WWW-Authenticate allow all
        request_header_access Proxy-Authorization allow all
        request_header_access Proxy-Authenticate allow all
        request_header_access Cache-Control allow all
        request_header_access Content-Encoding allow all
        request_header_access Content-Length allow all
        request_header_access Content-Type allow all
        request_header_access Date allow all
        request_header_access Expires allow all
        request_header_access Host allow all
        request_header_access If-Modified-Since allow all
        request_header_access Last-Modified allow all
        request_header_access Location allow all
        request_header_access Pragma allow all
        request_header_access Accept allow all
        request_header_access Accept-Charset allow all
        request_header_access Accept-Encoding allow all
        request_header_access Accept-Language allow all
        request_header_access Content-Language allow all
        request_header_access Cookie allow all
        request_header_access Mime-Version allow all
        request_header_access Retry-After allow all
        request_header_access Title allow all
        request_header_access Connection allow all
        request_header_access User-Agent allow all
        request_header_access All deny all   # remove all other headers

        # delete "x-forwarder-for.." headers
        forwarded_for delete
        request_header_access Via deny all
        request_header_access X-Forwarded-For deny all

    I tried to add the line "request_header_access X-Requested-With allow all" to the configuration, but apparently X-Requested-With is an unknown header name... Apparently I'm missing something?
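
    Squid's request_header_access only recognizes a fixed list of header names; anything it does not know, X-Requested-With included, is covered by the special Other token, so the "All deny all" rule is what strips it. Assuming this 3.1 build supports the token (worth verifying against the squid.conf documentation that ships with it):

        # let unrecognized headers such as X-Requested-With through;
        # this must come before the catch-all deny
        request_header_access Other allow all
        request_header_access All deny all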

    Read the article

  • nginx + @font-face + Firefox / IE9

    - by Philip Seyfi
    Just transferred my site from shared hosting to Linode's VPS, and I'm also completely new to nginx, so please don't be harsh if I missed something evident ^^ I've got my WordPress site running pretty well on nginx & MaxCDN, but my @font-face fonts (served from cdn.domain.com) stopped working in IE9 and FF ("@font-face failed cross-origin request. Resource access is restricted."). I've googled for hours and tried adding all of the following to my config files:

        location ~* ^.+\.(eot|otf|ttf|woff)$ {
            add_header Access-Control-Allow-Origin *;
        }

        location ^/fonts/ {
            add_header Access-Control-Allow-Origin *;
        }

        location / {
            if ($request_filename ~* ^.*?/([^/]*?)$) {
                set $filename $1;
            }
            if ($filename ~* ^.*?\.(eot)|(otf)|(ttf)|(woff)$) {
                add_header 'Access-Control-Allow-Origin' '*';
            }
        }

    With all of the following combinations:

        add_header Access-Control-Allow-Origin *;
        add_header 'Access-Control-Allow-Origin' *;
        add_header Access-Control-Allow-Origin '*';
        add_header 'Access-Control-Allow-Origin' '*';

    Of course, I've restarted nginx after every change. The headers just don't get sent at all no matter what I do. I have the default Ubuntu apt-get build of nginx, which should include the headers module by default... How do I check what modules are installed, or what else could be causing this error?
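
    Two quick checks that might narrow this down: nginx -V lists the build's configure arguments, and curl shows whether the header is actually on the wire at the origin (the font path below is a placeholder). Also note that nginx picks exactly one location per request, so another regex location matching static files can win over the font one, and "location ^/fonts/" has no modifier, so it is read as a literal prefix string starting with "^" and never matches anything:

        # show version and configure arguments (optional/third-party modules appear here)
        nginx -V

        # inspect response headers for a font file served by the origin
        curl -sI http://cdn.domain.com/wp-content/fonts/myfont.woff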

    Read the article

  • Lots of 408 Request Timed Out from same IPs

    - by GreatFire
    Web server: Nginx. Checking our log files, there are many log entries of connections that:
        take 59-61 seconds
        send an empty request (or at least none is logged)
        result in a 408 response (request timed out)
        do not contain any http_user_agent
        originate from a limited number of IPs
    We are monitoring average times to serve responses, and this obviously inflates our statistics. Apart from that, though, is this a problem? Any idea why it is occurring? Does it suggest that somebody is intentionally messing with us? What should we do?
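
    The 59-61 second spread lines up suspiciously well with nginx's default 60-second client_header_timeout: a client (often a browser's speculative pre-connection) opens a TCP connection, never sends a request line, and nginx logs a 408 when the timer expires. If that is what is happening here, it is mostly harmless, and a hedged mitigation is to shorten the window (the values are judgment calls, not fixes):

        # in the http or server block; the default for both is 60s
        client_header_timeout 20s;
        client_body_timeout   20s;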

    Read the article

  • Oracle Enterprise Manager Logs

    - by idea_
    Hi, I'm attempting to investigate the root cause of the following error, which occurs each time a user calls an Oracle 10g Forms applet: "FRM-92102: A network error has occurred. The Forms Client has attempted to reestablish its connection to the Server 1 time(s) without success. Please check the network connection and try again later. Details... Java Exception: java.io.IOException: Connection failure with 503". This error remained after restarting all my system components: HTTP_Server, OC4J_BI_Forms, Web Cache, Reports Server, etc. The only way to clear the issue was to restart the server entirely. During the downtime, pages rendered with the PL/SQL cartridge were still being served, so it appears the problem was isolated to Forms. Does anybody know which log files may provide clues here? Any help would be much appreciated :) Update: If somebody can provide me with a way or reference to increase the capacity of my web server to minimize these errors, I will accept that as the solution.
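
    For what it's worth, on a typical Oracle Application Server 10g install the process manager and HTTP server logs are a sensible first stop; the exact paths vary by install, so these are assumptions:

        # OPMN keeps per-component logs (OC4J_BI_Forms among them)
        ls $ORACLE_HOME/opmn/logs/
        # the Oracle HTTP Server error log often shows the immediate cause of a 503
        tail -100 $ORACLE_HOME/Apache/Apache/logs/error_log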

    Read the article

  • (200 ok) ACCEPTED - Is this a hacking attempt?

    - by Byran
    I assume this is some type of hacking attempt. I've tried to Google it, but all I get are sites that look like they have been exploited already. I'm seeing requests to one of my pages that look like this: /listMessages.asp?page=8&catid=5+%28200+ok%29+ACCEPTED (URL-decoded, the query string ends in "catid=5 (200 ok) ACCEPTED"). The "(200 ok) ACCEPTED" part is what is odd, but it does not appear to do anything. I'm running on IIS 5 and ASP 3.0. Is this "hack" meant for some other type of web server?

    Read the article

  • Real-time log parsing and reporting

    - by Alienfluid
    We have a small project we are working on part-time that runs on Nginx/MongoDB on Ubuntu 10.04 LTS Server. We'd like to be able to see reports on things like server load, requests/sec, response time, DB load, DB response time, etc. Is there an open source or free (as in beer) tool that can parse such logs and provide a real-time report? I looked into Splunk briefly, but I wanted to see if there are any others that are highly recommended.

    Read the article

  • IIS7 Compression Configuration

    - by Level1Coder
    Hi, previously, when I used IIS6, I used IIS6 Metabase Explorer to edit Metabase.xml and manually turned on compression, specifying the compression level and the file extensions to compress. IIS7 is a bit different; there is no Metabase.xml file in the system32\inetsrv folder. Enabling compression is easy: just check the checkbox in the Compression module. But how do I manually tweak the compression levels and the file extensions to compress? I also ran across an article saying that IIS7 automatically throttles compression: if your CPU load exceeds 50%, compression is turned off. Where are all these settings located?
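
    In IIS7 these settings moved into applicationHost.config (in system32\inetsrv\config) under the system.webServer/httpCompression section; note that IIS7 selects what to compress by MIME type rather than by file extension. A hedged sketch of the section with illustrative values, including the CPU throttling attributes the article was presumably referring to:

        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"
                         staticCompressionDisableCpuUsage="90"
                         dynamicCompressionDisableCpuUsage="90">
          <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll"
                  staticCompressionLevel="7" dynamicCompressionLevel="4" />
          <staticTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="application/javascript" enabled="true" />
          </staticTypes>
        </httpCompression>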

    Read the article

  • Links from UK TechDays 2010 sessions on Entity Framework, Parallel Programming and Azure

    - by Eric Nelson
    [I will do some longer posts around my sessions when I get back from holiday next week] Big thanks to all those who attended my 3 sessions at TechDays this week (April 13th and 14th, 2010). I really enjoyed both days and watched some great sessions, my personal fave being the Silverlight/Expression session by my friend and colleague Mike Taulty. The following links should help get you up and running on each of the technologies.

    Entity Framework 4
        Entity Framework 4 Resources: http://bit.ly/ef4resources
        Entity Framework Team Blog: http://blogs.msdn.com/adonet
        Entity Framework Design Blog: http://blogs.msdn.com/efdesign/
    Parallel Programming
        Parallel Computing Developer Center: http://msdn.com/concurrency
        Code samples: http://code.msdn.microsoft.com/ParExtSamples
        Managed Team Blog: http://blogs.msdn.com/pfxteam
        Tools Team Blog: http://blogs.msdn.com/visualizeparallel
        My code samples: http://gist.github.com/364522
        And PDC 2009 session recordings to watch
    Windows Azure Platform
        UK Site: http://bit.ly/landazure
        UK Community: http://bit.ly/ukazure (http://ukazure.ning.com)
        Feedback: www.mygreatwindowsazureidea.com
        Azure Diagnostics Manager - A client for Windows Azure Diagnostics
        Cloud Storage Studio - A client for Windows Azure Storage
        SQL Azure Migration Wizard: http://sqlazuremw.codeplex.com

    Read the article

  • I have Ubuntu Server 11.10 64-bit. Updates were working but now fail every time after apt-get update

    - by jason pate
    This is what I get when I try to run apt-get update:

        Err http://security.ubuntu.com oneiric-security InRelease
        Err http://us.archive.ubuntu.com oneiric InRelease
        Err http://security.ubuntu.com oneiric-security Release.gpg
          Temporary failure resolving 'security.ubuntu.com'
        Err http://us.archive.ubuntu.com oneiric-updates InRelease
        Err http://us.archive.ubuntu.com oneiric Release.gpg
          Temporary failure resolving 'us.archive.ubuntu.com'
        Err http://us.archive.ubuntu.com oneiric-updates Release.gpg
          Temporary failure resolving 'us.archive.ubuntu.com'
        Reading package lists... Done
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/oneiric/InRelease
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/oneiric-updates/InRelease
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/InRelease
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/Release.gpg  Temporary failure resolving 'security.ubuntu.com'
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/oneiric/Release.gpg  Temporary failure resolving 'us.archive.ubuntu.com'
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/oneiric-updates/Release.gpg  Temporary failure resolving 'us.archive.ubuntu.com'
        W: Some index files failed to download. They have been ignored, or old ones used instead.
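
    Every failure above is a "Temporary failure resolving" error, which points at DNS on the server rather than at the Ubuntu archives. A couple of hedged checks (8.8.8.8 is just a well-known public resolver for testing):

        # can the box resolve the mirror at all?
        nslookup us.archive.ubuntu.com
        # which resolver is currently configured?
        cat /etc/resolv.conf
        # as a test only: point at a known resolver, then retry apt-get update
        echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf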

    Read the article

  • Links and code from session on Entity Framework 4, Parallel and C# 4.0 new features

    - by Eric Nelson
    Last week (12th May 2010) I did a session in the city on a lot of the new .NET 4.0 stuff. My demo code and links below.

    Code
        Parallel demos: http://gist.github.com/364522
        C# 4.0 new features: http://gist.github.com/403826
    EF4 Links
        Entity Framework 4 Resources: http://bit.ly/ef4resources
        Entity Framework Team Blog: http://blogs.msdn.com/adonet
        Entity Framework Design Blog: http://blogs.msdn.com/efdesign/
    Parallel Links
        Parallel Computing Dev Center: http://msdn.com/concurrency
        Code samples: http://code.msdn.microsoft.com/ParExtSamples
        Managed blog: http://blogs.msdn.com/pfxteam
        Tools blog: http://blogs.msdn.com/visualizeparallel
    C# 4.0
        New features: http://bit.ly/baq3aU
        New in .NET 4.0 Coevolution: http://bit.ly/axglst
        New in C# 4.0: http://bit.ly/bG1U2Y

    Read the article

  • Bypass IIS Basic Authentication for localhost

    - by George
    I'd like to have a website authenticated with basic auth, but also allow the website to access itself locally. That is, I want to allow unauthenticated access only from localhost. In IIS I have only Basic authentication enabled (not worrying about SSL for now), and I have the correct file system permissions such that outside users can log in successfully and view the website. I have tried setting IIS_IUSR as owner of the directory and adding IUSR with modify permissions; however, I'm still getting a 401 error when the website tries to access itself. Anyone have any idea how to get this to work?
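
    One hedged way around IIS's all-or-nothing authentication per site is a second site bound only to 127.0.0.1, pointing at the same content, with Anonymous authentication enabled; the site name, port, and path below are placeholders:

        rem create a loopback-only site over the same content
        appcmd add site /name:"MySiteLocal" /bindings:"http/127.0.0.1:8080:" /physicalPath:"C:\inetpub\wwwroot\mysite"
        rem enable anonymous auth and disable basic auth for that site only
        appcmd set config "MySiteLocal" -section:system.webServer/security/authentication/anonymousAuthentication /enabled:"True" /commit:apphost
        appcmd set config "MySiteLocal" -section:system.webServer/security/authentication/basicAuthentication /enabled:"False" /commit:apphost

    The application can then call itself on http://127.0.0.1:8080/ while outside visitors keep using the Basic-auth site.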

    Read the article
