Search Results

Search found 17054 results on 683 pages for 'jms request reply'.


  • customErrors="RemoteOnly" not working properly in Server 2008

    - by Atomiton
    It would appear that on my brand new Windows Server 2008 box with IIS7, customErrors is not working. We have customErrors set to RemoteOnly in the web.config on our ASP.NET sites and applications. However, no matter what we do, our sites act as though it were set to On, and we can't get any detailed messages to show up when we're remoted into our servers. I'm not entirely sure how to trace where this is being overridden, or whether there is something in the way the server is configured that affects whether a request counts as local. How is that determination actually made, anyway? Any help is appreciated... Our network admin has added domains to our hosts file to direct applications to the IP address.
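
    For reference, IIS7 adds a second error-handling layer on top of ASP.NET's customErrors, and the two have to agree. A minimal sketch of both web.config sections (the defaultRedirect path is illustrative, not from the original post):

      <system.web>
        <!-- detailed errors for local requests, friendly page for remote ones -->
        <customErrors mode="RemoteOnly" defaultRedirect="~/Error.aspx" />
      </system.web>
      <system.webServer>
        <!-- IIS7's own equivalent setting; if this is set to Custom it can mask the ASP.NET behaviour -->
        <httpErrors errorMode="DetailedLocalOnly" />
      </system.webServer>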

    Read the article

  • Exchange 2010 OWA with Client Certificates

    - by Christian
    I have enabled client certificate authentication for Exchange 2010 through IIS7, and users are prompted to choose their user certificate when they log in, but they are all then presented with the following error:

      Request Url: https://<domain_name>:443/owa/
      User host address: <server_ip_address>
      OWA version: 14.1.355.2
      Exception type: System.NullReferenceException
      Exception message: Object reference not set to an instance of an object.

      Call stack:
      Microsoft.Exchange.Clients.Owa.Core.RequestDispatcher.GetUserIdentities(OwaContext owaContext, OwaIdentity& logonIdentity, OwaIdentity& mailboxIdentity, Boolean& isExplicitLogon, Boolean& isAlternateMailbox, ExchangePrincipal& logonExchangePrincipal)
      Microsoft.Exchange.Clients.Owa.Core.RequestDispatcher.InternalDispatchRequest(OwaContext owaContext)
      Microsoft.Exchange.Clients.Owa.Core.RequestDispatcher.DispatchRequest(OwaContext owaContext)
      Microsoft.Exchange.Clients.Owa.Core.OwaRequestEventInspector.OnPostAuthorizeRequest(Object sender, EventArgs e)
      System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
      System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    The method I followed to enable certificate authentication is from this post: http://www.miru.ch/2011/04/how-to-enable-certificate-based-authentication-on-exchange-2010/ Any ideas? Google isn't being very helpful.

    Read the article

  • T-Mobile releases an App to unlock mobile devices

    - by Gopinath
    T-Mobile is in no mood to stop innovating and outsmarting its rival wireless network providers in the USA. It's been the talk of the wireless community and rightly deserves the space for its push to make wireless providers more consumer friendly. Just a couple of days after the US government passed a law that made unlocking smartphones legal, T-Mobile released an Android app that makes unlocking a smartphone as easy as a few taps. The app, aptly named Device Unlock, is available in the Android Play Store, and at the moment it can unlock only Samsung Galaxy Avant smartphones. The app lets you either temporarily unlock your smartphone for 30 days (very helpful for those who travel and want to use their phone with other carriers) or send a request to T-Mobile to permanently unlock the device forever. When a user tries to unlock the device, the app verifies the user's account to make sure that it complies with T-Mobile's rules. If the rule check passes, the app automatically unlocks the phone. The process is very simple, and T-Mobile users are going to love it; users on other carriers will envy such a simple process for unlocking smartphones. Though this app is available only in the Android Play Store and works only with the Samsung Galaxy Avant, it looks like T-Mobile is testing the feature on a small set of users first to learn from it and improve the unlocking process. Hope to see this app able to unlock all T-Mobile devices soon.

    Read the article

  • Does MS Forefront TMG cache authentication?

    - by SnOrfus
    I'm testing a client machine that makes requests to a BizTalk server using a Forefront machine as a web proxy. On the first test I put an invalid name/password into the receive port and received the correct error message (407). Then I set the correct name/password and everything worked correctly. From there, I kept the correct information in the receive port but put an invalid name/password into the send adapter, yet the process completed successfully (it should have failed with a 407). I've ensured that both the receive and send ports are not bypassing the proxy for local addresses. So the only thing that seems to make sense is that TMG is caching the authentication request coming from the machine I'm working on. Is this thinking correct, and if so, does anyone know how to disable it in TMG?

    Read the article

  • Anonymous Access and Sharepoint Web Services

    - by Stacy Vicknair
    A month or so ago I was working on a feature for a project that required a level of anonymity on the SharePoint site in order to function. At the same time I was also working on another feature that required access to the SharePoint search.asmx web service. I found out, the hard way, that the SharePoint web services do not behave as expected while the IIS site is under anonymous access. Even though these web services expect requests with certain permissions (in theory), they never attempt to request those credentials when the web service is contacted. As a result the services return a 401 Unauthorized response. The fix for my situation was to restrict anonymous access to the area that needed it (in this case the control in question had support for being used in an ASP.NET app that I could throw in a virtual directory). After that I removed anonymous access from IIS for the site itself, and the QueryService requests were working once more. Here's a related article with a bit more depth about a similar experience: http://chrisdomino.com/Blog/Post/401-Reasons-Why-SharePoint-Web-Services-Don-t-Work-Anonymously?Length=4 Technorati Tags: SharePoint, QueryService, WSS, IIS, Anonymous Access

    Read the article

  • IIS - Forwarding requests to a folder to another port

    - by user1231958
    Context: I recently installed GlassFish 3 on a server that currently hosts ASP and PHP inside Internet Information Server 7, so we can start moving to a new system architecture (the information system is being remade). Obviously GlassFish uses another port, and without much configuration (all I had to do was install it) it worked: if I type www.domain.com:8080, the person is directed to the GlassFish server.

    Issue: Obviously I don't want people to have to type the port! I also believe it might hold some security issues.

    Requirement: I need the server to take an address of the form www.domain.com/gf or new.domain.com or something alike, and when it receives such a request, "redirect" (masking the URL) the user to the GlassFish site (www.domain.com:8080). Thank you beforehand!
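
    For reference, a plain IIS rewrite to another host/port only works as a reverse proxy when the URL Rewrite module plus Application Request Routing (ARR) are installed and ARR's proxy mode is enabled; without them only a visible redirect is possible. A minimal web.config sketch for the www.domain.com/gf form (rule name and paths are illustrative):

      <rewrite>
        <rules>
          <rule name="glassfish-proxy" stopProcessing="true">
            <match url="^gf/(.*)" />
            <action type="Rewrite" url="http://localhost:8080/{R:1}" />
          </rule>
        </rules>
      </rewrite>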

    Read the article

  • Google Analytics www 301 causing issues with In-Page Analytics

    - by conrad10781
    The closest question I could find to my problem is this one. The similarity is: I have a profile in Google Analytics (GA) that has been collecting data for a year. The domain setting in GA is "http://example.com". The site, however, will redirect any non-www request to www.example.com, via a typical .htaccess refinement. We do this to keep the traffic on the load balancers. I don't know the method the original user had in place, but we're doing a 301 on any non-www request to the www equivalent. I believe this has to be somewhat standard. Where I differ from that question is in the error message I receive when trying to load In-Page Analytics. I'm instead receiving: Error: The Website in your settings (http://example.com), redirects into a different domain. (http://www.example.com). In-Page Analytics currently works on only one domain. Note that www.example.com and example.com are NOT considered to be on the same domain. Also, make sure you're not redirecting from http:// to https:// or vice versa. I understand what's being explained; it just seems as though this can't be the end-all. I tried updating the Analytics settings, which from day one have been set as "One domain with multiple subdomains", but I don't see any options to change the URL (which is currently set to http://example.com and not http://www.example.com). I'd prefer not to have to change the URL if at all possible, but I can't seem to find any documentation or anything that provides any possible solutions.
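
    For context, the non-www to www redirect described above is typically done with something like this in .htaccess (a sketch with a placeholder domain, not the poster's actual rules):

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
      RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

    The error message itself suggests the profile's default URL needs to match the post-redirect host (http://www.example.com), since GA compares the two literally.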

    Read the article

  • hosting people asking for my account username and password to enable curl and socket function only for me

    - by Jayapal Chandran
    I have hosted my site in a shared environment. My hosting people disabled socket functions altogether, and they said they could enable them only for me if I gave them a written statement. I did, but then they asked for my control panel login details so they could run some kind of script to enable it. Is it right for the hosting company to ask for credentials? They have total control, so why can't they do it themselves? Edit: Six months ago many websites on their server got hacked, so they think it was because of socket functions and disabled them. They say they can enable them for specific users who do programming with them, and that is by email request.

    Read the article

  • nginx symlinks permission denied / 403 Forbidden on Mac OSX

    - by Levi Roberts
    So I have an nginx server running on Mac OS X and I am trying to create a symlink in my nginx www directory pointing to somewhere else. In the browser I get the wonderful 403 Forbidden error. I have also tried chmod'ing my life away for the past few hours. There doesn't seem to be anything on the stack about it. One thing that concerns me is that I am not sure whether symlinks are directly supported by nginx on Mac. Trying to use the disable_symlinks directive results in: nginx: [emerg] unknown directive "disable_symlinks" in /usr/local/etc/nginx/nginx.conf:44 Some info about my setup: nginx -v: nginx version: nginx/1.4.2 To create the symlink I do the following: cd /Users/levi/www ln -s "/Users/levi/Desktop/.../client" "/Users/levi/www/client" The error in the log: [error] 11864#0: *7 open() "/Users/levi/www/client" failed (13: Permission denied), client: 127.0.0.1, server: _, request: "GET /client HTTP/1.1", host: "localhost" Any help is much appreciated. Let me know if there's any more information I can give you.
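
    One thing worth checking (a common cause of exactly this open() ... (13: Permission denied) error, not something confirmed in the original post): the nginx worker user needs execute (traverse) permission on every directory along the symlink's target path, not just read permission on the file itself. A sketch, assuming the worker runs as a non-privileged user:

      # every parent directory of the target needs o+x (traverse) for the worker user
      chmod o+x /Users /Users/levi /Users/levi/Desktop
      # ...and likewise for the remaining directories down to the 'client' folder

    On OS X, ~/Desktop is mode 700 by default, which would produce exactly this symptom.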

    Read the article

  • SQL SERVER – DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3

    - by Pinal Dave
    This is the third part of the series on Incremental Statistics. Here is the index of the complete series: What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1; Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2; DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3. In the earlier two parts we saw what incremental statistics are, along with a simple example. In this blog post we will discuss a DMV query that lists all the statistics which are enabled for incremental updates.

      SELECT OBJECT_NAME(sys.stats.OBJECT_ID) AS TableName,
             sys.columns.name AS ColumnName,
             sys.stats.name AS StatisticsName
      FROM   sys.stats
             INNER JOIN sys.stats_columns
                     ON sys.stats.OBJECT_ID = sys.stats_columns.OBJECT_ID
                    AND sys.stats.stats_id = sys.stats_columns.stats_id
             INNER JOIN sys.columns
                     ON sys.stats.OBJECT_ID = sys.columns.OBJECT_ID
                    AND sys.stats_columns.column_id = sys.columns.column_id
      WHERE  sys.stats.is_incremental = 1

    If you run the above script against the example set up in Part 1 and Part 2, you will see the incremental statistics created there listed in the result set. When you execute the script, it lists all the statistics in your database which are enabled for incremental update. The script is very simple and effective. If you have a further improved script, I request you to post it in the comments section and I will post it on the blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Statistics, Statistics

    Read the article

  • Selling Visual Studio ALM

    - by Tarun Arora
    Introduction

    As a consultant I have been selling Application Lifecycle Management services using Visual Studio and Team Foundation Server. I've been contacted various times by friends working in organizations, telling me that the ALM processes in their company were benchmarked when dinosaurs walked the earth. Most of these individuals already know the great features Microsoft ALM tools offer and are keen to start a conversation with the CIO, but don't exactly know where to start. It is very important how you engage in your first conversation. If you start with 'There is this great tooling from Microsoft which offers amazing features to boost developer productivity, ...', from experience I can tell you the reply from your CIO would be 'I already know! Our existing landscape has a combination of bleeding edge open source and cutting edge licensed tools which already cover these features quite well; moreover, Microsoft products have a high licensing cost associated with them.' You will always find it harder to sell by feature. The trick is to highlight the gaps in the existing processes and tools, and then highlight the impact of those gaps on the overall development process; by then you will have captured enough attention to show off how the ALM tooling offered by Microsoft not only fills those gaps but offers great value adds to take their development practices to the next level.

    Rangers ALM Assessment Guide

    Image 1 – Welcome! First look at the Rangers ALM assessment guide

    Most organizations already have some processes in place to cover aspects of ALM. How do you go about proving that there isn't enough coverage in place? This is where the Visual Studio ALM Rangers ALM Assessment Guide can help. The ALM assessment guide is really a tool that helps you gather information about development practices and processes within a customer's environment. Several questionnaires are used to identify the current state of individual development lifecycle areas and decide on a desired state for those processes. It also presents guidance and roll-up summaries to help with recommendations moving forward. The ALM Rangers assessment guide can be downloaded from here.

    Image 2 – ALM Assessment guide divided into different functions of SDLC

    The assessment guide is divided into different functions of the Software Development Lifecycle: Architecture & Design; Requirement Engineering & UX; Development; Software Configuration Management; Governance; Deployment & Operations; Testing & Quality Assurance; and Project Planning & Management. This gives you the ability to assess how mature the company is in each area of the SDLC. Each section has a set of questions; fill in the assessment by selecting "Never/Sometimes/Always" from the Answer column in the question sheets. Each answer carries a weight toward the overall score. Each question has a link next to it; clicking the link takes you to the Reference sheet, which gives you more details about the question along with a reason for "why you need to ask this question?", "other ways to phrase the question" and "what to expect as an answer from the customer". The trick is to engage the customer in a discussion. You need to probe a lot, listen to the customer, and have a discussion with several team members, preferably without management, to ensure that you receive candid feedback.

    This reminds me of a funny incident: during an ALM review a customer told me that they had a sophisticated semi-automated application deployment process; further discussion revealed that deployment actually involved 72 manual configuration steps per production node. Such observations can be recorded in the Issue Brainstorming worksheet for further consideration later. It is also worth mentioning the different levels of ALM maturity to the customer. By default the desired state of ALM maturity is set to Standard; it is possible to set a desired state by area. You should strive for Advanced or Dynamic, and it always helps to explain the classification and its advantages.

    Image 3 – ALM levels by description

    The ALM assessment guide helps you arrive at a quantitative measure of the company's ALM maturity. The resulting graph, plotted on a spider's web, shows you the company's current state of ALM maturity and the desired state of ALM maturity. Further, since the results are classified by area, you can immediately spot the areas where the customer needs immediate help.

    Image 4 – The spider's web!

    The red cross icons are areas shouting out for immediate attention; the yellow exclamation icons are areas that need improvement. These icons are calculated from the difference between the current state of ALM maturity and the desired state of ALM maturity.

    Image 5 – Results by area

    Conclusion

    To conclude, the Rangers ALM assessment guide gives you the ability to measure the customer's current ALM maturity level, understand the ALM maturity level the customer desires to achieve, and capture a healthy list of issues the customer wants to brainstorm further. Now what's next…? Download and get started with the Rangers ALM Assessment Guide. If you have successfully captured the three pieces of information listed above, you are in a great position to make recommendations on the identified areas, highlighting the benefits that the Visual Studio ALM tools would offer. In the next post I will cover how to take the ALM assessment results as the base to actually convert your recommendation into a sale. Remember to subscribe to http://feeds.feedburner.com/TarunArora. I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. *** A special thanks goes out to fellow Rangers Willy, Ethem and Philip for reviewing the blog post and providing valuable feedback. ***

    Read the article

  • Generating Thermal Printer (Zebra Printer) Sized PDFs for FedEx Labels

    - by Michael Hart
    Background: I own a company which does a lot of FedEx Ground shipping. We have a 3rd party fulfillment center, which stores some of our inventory and at our request ships it. Zebra/thermal printers are the most cost-effective shipping label printers available, and our 3rd party fulfillment center has one. I want to generate the labels locally, then e-mail the fulfillment center a PDF of the labels which they can print out on their printer.

    Problem: The trouble is, I can't seem to figure out how to print these 4" x 6" labels to a PDF, as FedEx (both Ship Manager and fedex.com) uses JavaScript to detect what printer I have.

    Question: What's a clever way to send my 3rd party fulfillment center a PDF (or equivalent) of our 4" x 6" Zebra thermal printer labels so they can print them out without re-entering the data?

    Read the article

  • Duplicity not writing to a pre-existing S3 bucket

    - by Saurabh Nanda
    I'm trying to back up a directory to a pre-existing Amazon S3 bucket using the following command: duplicity --no-encryption system/ s3+http://MY_BUCKET_NAME/backup However, I'm consistently getting the following error: S3CreateError: S3CreateError: 409 Conflict <?xml version="1.0" encoding="UTF-8"?> <Error><Code>BucketAlreadyOwnedByYou</Code><Message>Your previous request to create the named bucket succeeded and you already own it.</Message><BucketName>vacationlabs</BucketName><RequestId>3C1B8C49469E3374</RequestId><HostId>4dU1TKf3Td6R0yvG9MaLKCYvQfwaCpdM8FUcv53aIOh0LeJ6wtVHHduPSTqjDwt0</HostId></Error> The S3 bucket is empty and does NOT have the backup directory. The bucket is in the Singapore region.
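
    One possibility worth checking (an assumption based on the Singapore detail, not something stated in the original post): older duplicity/boto versions default to the us-east-1 endpoint with bucket-in-hostname calls, which misfire against buckets in other regions and make duplicity try to re-create the bucket. The usual workaround is new-style (subdomain) calls against the regional endpoint, along the lines of:

      duplicity --no-encryption --s3-use-new-style \
        system/ s3://s3-ap-southeast-1.amazonaws.com/MY_BUCKET_NAME/backup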

    Read the article

  • Cannot read/access Apache2 access logs

    - by webworm
    I have been asked to take a look at some access logs for an Apache2 web server running on Ubuntu. I have been told by the administrator of the machine that my login has "admin" access, yet I cannot seem to copy the access logs from Apache2 to my local machine via FTP for analysis. I figure one of two things is happening: I don't really have full admin access, or some other process (perhaps Apache2) has control of the log files and won't let me copy them. How can I tell if I truly have admin access? What type of access do I need to request? Root access? Something else? Should I be able to copy these log files with admin access?
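
    For reference, a quick way to check this from a shell (a sketch; on a stock Ubuntu install the Apache logs live in /var/log/apache2 and are readable by the adm group):

      groups                                  # are you in 'adm' (can read logs) or 'sudo'/'admin'?
      ls -l /var/log/apache2/access.log       # typically root:adm with mode 640
      sudo cp /var/log/apache2/access.log ~/  # works if your account has sudo rights

    If the account is in neither group, the "admin" access granted probably doesn't extend to the log directory.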

    Read the article

  • "File does not exist" in apache error log.

    - by Samuurai
    Hi, this is an example of an error in our log file:

      File does not exist: /var/www/website/female, referer: http://www.website.com/female/dresses/A-Dress-Black

    "/female" doesn't exist, because we use friendly URLs via our .htaccess file, which looks like this:

      # Turn on the rewriting engine
      RewriteEngine On
      RewriteBase /
      RewriteCond %{http_host} !^www.website.com$ [nc]
      RewriteRule ^(.*)$ http://www.website.com/$1 [r=301,nc,L]
      RewriteRule ^News/?$ news.php [NC,L]
      RewriteRule ^About/?$ about.php [NC,L]
      RewriteRule ^Contact/?$ contact.php [NC,L]
      RewriteRule ^Sign-In/Create-Account?$ sign_up_in.php [NC,L]
      RewriteRule ^Logout?$ sign_up_in.php?l=1 [NC,L]
      RewriteRule ^Your-Bag?$ your_bag.php [NC,L]
      RewriteRule ^Help?$ help.php [NC,L]
      RewriteRule ^Profile?$ profile.php [NC,L]
      RewriteRule ^Create-Profile?$ profile_create.php [NC,L]
      # ITEM
      RewriteRule ^([A-Za-z-]+)/([A-Za-z-]+)/([A-Za-z0-9-]+)/?$ store_focus.php?sex=$1&catName=$2&permalink=$3 [NC,L]
      # PAGE
      RewriteRule ^([A-Za-z-]+)/([A-Za-z-]+)/page/([0-9]+)/?$ store.php?sex=$1&catName=$2&page=$3 [NC,L]
      # CATEGORY
      RewriteRule ^([A-Za-z-]+)/([A-Za-z-]+)/?$ store.php?sex=$1&catName=$2 [NC,L]
      # SEX
      RewriteRule ^([A-Za-z-]+)/?$ store.php?sex=$1 [NC,L]

    Every request for a page results in an error. Has anyone encountered this before? Thanks! Beren
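
    One thing that might be worth trying (an assumption on my part, not from the original post): rulesets like this usually carry a guard that exempts real files and directories before the catch-all patterns, and its absence is a common source of spurious filesystem lookups of rewritten paths:

      # leave requests for real files and directories alone
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d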

    Read the article

  • CUPS basic auth error through web interface

    - by Inaimathi
    I'm trying to configure CUPS to allow remote administration through the web interface. There's enough documentation out there that I can figure out what to change in my cupsd.conf (changing Listen localhost:631 to Port 631, and adding Allow @LOCAL to the /, /admin and /admin/conf sections). I'm now at the point where I can see the CUPS interface from another machine on the same network. The trouble is, when I try to Add Printer, I'm asked for a username and password, but my response is rejected even when I know I've gotten it right (I assume it's asking for the username and password of someone in the lpadmin group on the server machine; I've sshed in with the credentials it's rejecting, and the user I'm using has been added to the lpadmin group). If I disable auth outright, by changing DefaultAuthType Basic to DefaultAuthType None, I get an "Unauthorized" error instead of a password request when I try to Add Printer. What am I doing wrong? Is there a way to let users from the local network administer the print server through the CUPS web interface?

    EDIT: By request, my complete cupsd.conf (spoiler: minimally edited default config file that comes with the edition of CUPS from the Debian wheezy repos):

      LogLevel warn
      MaxLogSize 0
      SystemGroup lpadmin
      Port 631
      # Listen localhost:631
      Listen /var/run/cups/cups.sock
      Browsing On
      BrowseOrder allow,deny
      BrowseAllow all
      BrowseLocalProtocols CUPS dnssd
      # DefaultAuthType Basic
      DefaultAuthType None
      WebInterface Yes
      <Location />
        Order allow,deny
        Allow @LOCAL
      </Location>
      <Location /admin>
        Order allow,deny
        Allow @LOCAL
      </Location>
      <Location /admin/conf>
        AuthType Default
        Require user @SYSTEM
        Order allow,deny
        Allow @LOCAL
      </Location>
      # Set the default printer/job policies...
      <Policy default>
        # Job/subscription privacy...
        JobPrivateAccess default
        JobPrivateValues default
        SubscriptionPrivateAccess default
        SubscriptionPrivateValues default
        # Job-related operations must be done by the owner or an administrator...
        <Limit Create-Job Print-Job Print-URI Validate-Job>
          Order deny,allow
        </Limit>
        <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>
        # All administration operations require an administrator to authenticate...
        <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>
        # All printer operations require a printer operator to authenticate...
        <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>
        # Only the owner or an administrator can cancel or authenticate a job...
        <Limit Cancel-Job CUPS-Authenticate-Job>
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>
        <Limit All>
          Order deny,allow
        </Limit>
      </Policy>
      # Set the authenticated printer/job policies...
      <Policy authenticated>
        # Job/subscription privacy...
        JobPrivateAccess default
        JobPrivateValues default
        SubscriptionPrivateAccess default
        SubscriptionPrivateValues default
        # Job-related operations must be done by the owner or an administrator...
        <Limit Create-Job Print-Job Print-URI Validate-Job>
          AuthType Default
          Order deny,allow
        </Limit>
        <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
          AuthType Default
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>
        # All administration operations require an administrator to authenticate...
        <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>
        # All printer operations require a printer operator to authenticate...
        <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>
        # Only the owner or an administrator can cancel or authenticate a job...
        <Limit Cancel-Job CUPS-Authenticate-Job>
          AuthType Default
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>
        <Limit All>
          Order deny,allow
        </Limit>
      </Policy>
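
    For what it's worth, recent CUPS versions can write the remote-administration settings for you; running this on the server as root and diffing the resulting cupsd.conf against the one above is a quick sanity check (a suggestion, not from the original post):

      cupsctl --remote-admin

    Note also that with DefaultAuthType None, the Add Printer operation still hits the AuthType Default lines in the policy sections, which then have no working authentication type to fall back on; that would match the flat "Unauthorized" behaviour described above.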

    Read the article

  • "The Operation Failed." accepting METHOD: PUBLISH iCalendar files in .pst account

    - by Jamie Kitson
    If I create a new Mail Profile using the Internet E-mail wizard, i.e., creating a new local .pst account, and then try to add a .ics iCalendar file with a METHOD of PUBLISH to the calendar of that account, I get the error "The Operation Failed." If I change any of the above it works OK, e.g., if I use an Exchange account or METHOD: REQUEST in the iCalendar file. I'm using Outlook 2010 on Windows 7, but I think the user that originally reported this was using Outlook 2007. Does anyone have any idea why this might be? Thanks, Jamie Kitson
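
    For anyone trying to reproduce this, a minimal PUBLISH-method file of the shape being described looks like this (all values illustrative):

      BEGIN:VCALENDAR
      VERSION:2.0
      PRODID:-//Example Corp//Test//EN
      METHOD:PUBLISH
      BEGIN:VEVENT
      UID:test-0001@example.com
      DTSTAMP:20120101T120000Z
      DTSTART:20120102T090000Z
      DTEND:20120102T100000Z
      SUMMARY:Test event
      END:VEVENT
      END:VCALENDAR

    Swapping METHOD:PUBLISH for METHOD:REQUEST (and adding the ORGANIZER/ATTENDEE lines that REQUEST requires) gives the variant that reportedly works.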

    Read the article

  • How should clients handle HTTP 401 with unknown authentication schemes?

    - by user113215
    What is the proper behavior for an HTTP client receiving a 401 Unauthorized response that specifies only unrecognized authentication schemes? My server supports Kerberos authentication using WWW-Authenticate: Negotiate. On the first request, the server sends a 401 Unauthorized response with a body containing an HTML document. The behavior that I expect is for clients that support Kerberos to perform that authentication, and for other clients to simply display the HTML document (a login form). It seems that most of the "other clients" I've encountered do work this way, but a few do not. I haven't found anything that mandates any particular behavior in this situation. There's a brief mention in RFC 2617 (HTTP Authentication: Basic and Digest Access Authentication), but is there anything more concrete? "It is possible that a server may want to require Digest as its authentication method, even if the server does not know that the client supports it. A client is encouraged to fail gracefully if the server specifies only authentication schemes it cannot handle."
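
    Concretely, the exchange in question looks something like this (a sketch, with the HTML body trimmed):

      GET /protected HTTP/1.1
      Host: server.example.com

      HTTP/1.1 401 Unauthorized
      WWW-Authenticate: Negotiate
      Content-Type: text/html

      <html><!-- login form for clients that cannot do Negotiate --></html>

    A Kerberos-capable client answers the challenge by retrying with an Authorization: Negotiate <token> header; the expectation described above is that every other client falls back to rendering the body.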

    Read the article

  • Squid site redirection

    - by AndyM
    I have an internal website that cannot be accessed from some machines on my network, due to the physical location, VPN, network ranges, etc. I would like to install Squid on an "in between" network to forward requests from the clients that cannot reach the website. The issue is that the clients have no ability to connect to www.example.com, but they can reach a network with a Squid proxy, which in turn can reach www.example.com. What is the correct term I need to research in Squid: is it just caching www.example.com, or do I need to set the clients to use a URL that gets rewritten? i.e. www.squid-example.com → www.example.com
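
    The term to research is probably "reverse proxy", which Squid calls accelerator mode. A rough sketch of the squid.conf side, with illustrative names (clients would then resolve www.squid-example.com to the proxy host):

      http_port 80 accel defaultsite=www.example.com
      cache_peer www.example.com parent 80 0 no-query originserver name=origin
      acl site dstdomain www.example.com www.squid-example.com
      http_access allow site
      cache_peer_access origin allow site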

    Read the article

  • IIS 7 URL rewrite rule

    - by Andrew
    Hello, guys! We have two web servers behind a router: one IIS and one Tomcat (on different machines / IP addresses). The domain is pointing to our external IP, which is forwarded to IIS (internal IP 192.168.1.10, for example). I'm trying to do the following: when [www.]ourdomain.com is entered, the default web site on IIS has to be loaded (this part is OK), but when test.ourdomain.com is entered I want to redirect this request to another web server (192.168.1.11, for example). I created a site "test" on IIS and it is displayed when test.ourdomain.com is entered. Then I tried to redirect it with the following rule: Requested URL matches the pattern: * (using wildcards); Condition: {HTTP_HOST} matches test.ourdomain.com; Action type: Rewrite; Rewrite URL: http://192.168.1.11/{R:0} But when I try to load test.ourdomain.com now, I get IIS's error 404 page. Obviously I'm wrong :-) How can I do such a redirect?
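
    One detail worth checking (an assumption on my part, not confirmed in the original post): a Rewrite action whose target is a different machine only works when Application Request Routing (ARR) is installed and its proxy mode is switched on; otherwise the URL Rewrite module cannot serve http://192.168.1.11/{R:0} locally, and a 404 is a typical symptom. Proxy mode can be enabled from the ARR UI or from an elevated prompt:

      %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/proxy /enabled:"True" /commit:apphost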

    Read the article

  • Apache proxying to Unicorn server times out, how to avoid?

    - by Ian
    I have a Teambox installation running on Unicorn, and the latter sometimes times out after 30 seconds. The idea of this configuration would be for Apache to wait until the Unicorn master server sends a timeout, because if I'm not wrong, Unicorn will quit the timed-out worker process but spawn a new one to handle the same request. Is there a way to configure Apache to not timeout like the nginx configuration of timeout = 0? Thanks for the help! EDIT I found a way, though it doesn't really work as I expected. In the ProxyPass directive you have to specify a retry=0 option after the url: ProxyPass / http://url/ retry=0 It doesn't work if the url is a ProxyBalancer though.
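
    In case it helps others: ProxyPass also accepts a timeout option alongside retry (and there is a global ProxyTimeout directive), so Apache's wait can be lengthened rather than the retry behaviour changed. A sketch, with a placeholder backend URL and an arbitrary 300-second value:

      ProxyPass / http://127.0.0.1:8080/ retry=0 timeout=300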

    Read the article

  • Forwarding a subdomain to main domain using Godaddy.

    - by Ryan Hayes
    I have a current blog, which was hosted on Tumblr at http://blog.ryanhayes.net. I'm moving it over to http://ryanhayes.net, and have all the 301 redirects set up for the blog entries to map to my new blog, which is hosted using GoDaddy (domain included). When I try to set up a subdomain forward, I'm greeted with a nice 403 Forbidden response (as of this writing, you can see it at http://blog.ryanhayes.net). When I try to ping both the subdomain and the domain, they point to the same IP address, so I know the blog subdomain has at least switched over to point to the same content. I don't really understand why I would get a 403 Forbidden on the same content that I can see perfectly fine via another domain. Currently, I have a CNAME of blog pointing to @, which is how "www" is set up to forward, so I'm assuming it would do the same thing. My question is: what is the proper way to set up my DNS to make the blog subdomain forward to my main domain (301) using the GoDaddy DNS manager? Bonus: What is the background on why I am getting a 403 error the current way? Forbidden: You don't have permission to access / on this server. Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request. UPDATE 12/7/2010: The error on the site has been fixed; you can no longer view it from my site.

    Read the article

  • Ubuntu eats itself after I followed updater instruction

    - by Tony Martin
    Updater (I assume) put a no-entry style alert icon on the panel which informed me that certain package dependencies were not up to snuff. Upgrades were thereafter only partial. The dialogue advised that I (and this is from noob memory) sudo apt-get install -f. I did this, typed in the confirmation phrase, and watched apt-get systematically remove every component of Linux, both the stuff I had installed and the core Ubuntu packages. I could only assume at this stage that this was for a fresh install, but of course I know better now. There's much complaint about Windows, but I've never met with advice from Microsoft tools to wipe out the operating system because of a couple of missing .dlls. So what gives? This was a 64-bit install of 12.04. All that is left is grub pointing to a couple of Windows recovery partitions on the hard drive. I'm tempted, but I have hopes of recovering the data that I had enough misguided faith to trust to the Linux ext4 partition. I've tried pen-driving back into it with a 32-bit ISO, but I'm simply informed that Ubuntu is running in low graphics mode and get to watch the dots cycle indefinitely. EDIT: Thanks for the advice vis-à-vis the positive request. I've got onto the machine with a 64-bit stick and can see the file structure left behind by the installation. My first instinct was to run install from the stick, but it did not seem to offer a recovery option. My question then: is there a way to recover the current installation so that if I reinstall the packages I had, they will pick up the original settings? I'm particularly worried about losing email from Evolution - the rest I could probably lash back together. I would also be interested in how this disaster came about. I see people in the know recommending this same procedure in similar circumstances. Thanks for your attention, Tony Martin
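
    On the narrow question of the Evolution mail: since the file structure is visible from the live stick, the data can be copied out before anything else is attempted. On 12.04, Evolution keeps its data under the home directory; a sketch, where the mount point and USERNAME are placeholders:

      # with the old root mounted at /mnt and USERNAME standing in for the real login
      cp -a /mnt/home/USERNAME/.local/share/evolution  /path/to/backup/   # mail store
      cp -a /mnt/home/USERNAME/.config/evolution       /path/to/backup/   # settings, where present
      cp -a /mnt/home/USERNAME/.gconf/apps/evolution   /path/to/backup/   # account config on older versions

    Copying these back into a fresh home directory after reinstalling is generally enough for Evolution to pick the mail back up.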

    Read the article

  • SQLAuthority News – 2 Whitepapers Announced – AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution

    - by pinaldave
    Understanding the AlwaysOn architecture is extremely important when building a solution with failover clusters and availability groups. Microsoft has just released two very important white papers on this subject. Both white papers are written by top experts in the industry and have been reviewed by an excellent panel of experts. Every time I talk with organizations that are adopting SQL Server 2012, they are excited about the new AlwaysOn feature. One of the requests I often hear is for detailed documentation that can help enterprises build a robust high availability and disaster recovery solution. I believe the following two white papers now satisfy that request.

    AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using AlwaysOn Availability Groups - SQL Server 2012 AlwaysOn Availability Groups provides a unified high availability and disaster recovery (HADR) solution. This paper details the key topology requirements of this specific design pattern, covering important concepts like quorum configuration considerations, steps required to build the environment, and a workflow that shows how to handle a disaster recovery.

    AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using Failover Cluster Instances and Availability Groups - SQL Server 2012 AlwaysOn Failover Cluster Instances (FCI) and AlwaysOn Availability Groups provide a comprehensive high availability and disaster recovery solution. This paper details the key topology requirements of this specific design pattern, covering important concepts like asymmetric storage considerations, quorum model selection, quorum votes, steps required to build the environment, and a workflow.

    Even if you are not going to implement the AlwaysOn feature, these two white papers are still great reference material to review, as they will give you a complete idea of what it takes to implement an AlwaysOn architecture and what kind of effort is needed. One should at least bookmark the above two white papers for future reference. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology Tagged: AlwaysOn

    Read the article

  • How to resolve IPs in DNS based on the subnet of the requesting client?

    - by Nohsib
    Is it possible to configure BIND9 or another DNS server to resolve the domain name of a machine into different IPs based on the subnet of the requesting client? For example, say the same service is running on two different application servers at different geographical locations; when a request comes in to resolve the domain name, the name server provides the IP of one application server or the other based on the requesting client's IP, so the service can be offered by the server that is geographically closer to the client. In short, something like a CDN, but just the IP-resolution part based on the client's subnet. Is this configurable in any DNS?
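
    For reference, this is what BIND calls views (split-horizon DNS): each view matches a set of client addresses and serves its own copy of the zone. A rough named.conf sketch with illustrative names and addresses:

      acl "near-dc1" { 203.0.113.0/24; };

      view "dc1" {
          match-clients { "near-dc1"; };
          zone "example.com" { type master; file "db.example.com.dc1"; };
      };

      view "everyone-else" {
          match-clients { any; };
          zone "example.com" { type master; file "db.example.com"; };
      };

    The two zone files would differ only in the A record handed out for the service name.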

    Read the article
