Search Results

Search found 54190 results on 2168 pages for 'http authentication'.

  • Microsoft Application Request Routing with Windows Authentication

    - by theplatz
    I'm running into a problem trying to get Windows Authentication working in an environment that uses Microsoft Application Request Routing (ARR), and I was hoping someone might be able to help. Only some requests are authenticated, while others fail with 401 errors. I have followed the "Special Case of Running IIS 7.0 in a Web Farm" instructions found at http://blogs.msdn.com/b/webtopics/archive/2009/01/19/service-principal-name-spn-checklist-for-kerberos-authentication-with-iis-7-0.aspx, to no avail. My current server setup looks like the following:

    ARR: two servers set up with IIS shared configuration, using IIS 7.5 on Windows 2008 R2; anonymous authentication turned on for the Default Web Site.

    Web farm: two servers running IIS 7.5 on Windows 2008 R2; three web sites set up using port binding (ports 8000, 8001, and 8002) to differentiate between virtual hosts; application pools for Windows Authentication all using a common domain account; SPNs added to the domain account for http/<virtualhost-name>:<port-number> and http/<virtualhost-name>.<fully-qualified-domain>:<port-number>.

    The IIS logs show the following when authentication is working/failing. If I understand correctly, all requests should show DOMAIN\User_Name:

        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/stylesheets/techweb.landing.css - 8002 DOMAIN\User_Name ARR-HOST-1-IP-ADDRESS 200 0 0 62
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-left.gif - 8002 DOMAIN\User_Name ARR-HOST-IP-ADDRESS 200 0 0 31
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/application-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 3221225581 15
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/building.gif - 8002 DOMAIN\User_Name ARR-HOST-2-IP-ADDRESS 200 0 0 218

    Does anyone know what might cause this problem and how I can resolve it?
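
    A setting often suggested for exactly this symptom (intermittent 401s behind a load balancer when the app pools run under a custom domain account) is to make kernel-mode authentication use the app pool's credentials instead of the machine account. This is only a sketch of the relevant applicationHost.config fragment on the web-farm servers, not a confirmed fix for the setup above:

        <system.webServer>
          <security>
            <authentication>
              <windowsAuthentication enabled="true"
                                     useKernelMode="true"
                                     useAppPoolCredentials="true" />
            </authentication>
          </security>
        </system.webServer>

    The same change can be applied on each farm member with appcmd set config /section:windowsAuthentication /useAppPoolCredentials:true.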

  • Ubuntu 9.10 RSA authentication: ssh fails, filezilla runs fine

    - by MariusPontmercy
    This is quite a mystery for me. I usually use passwordless RSA authentication to log in to my remote *nix servers with ssh and sftp, and I've never had any problem until now. I cannot connect to an Ubuntu 9.10 machine:

        user@myclient$ ssh -i .ssh/Ganymede_key [email protected]
        [...]
        debug1: Host 'ganymede.server.com' is known and matches the RSA host key.
        debug1: Found key in /home/user/.ssh/known_hosts:14
        debug2: bits set: 494/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: .ssh/Ganymede_key (0xb96a0ef8)
        debug2: key: .ssh/Ganymede_key ((nil))
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering public key: .ssh/Ganymede_key
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Trying private key: .ssh/Ganymede_key
        debug1: read PEM private key done: type RSA
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: keyboard-interactive
        debug2: userauth_kbdint
        debug2: we sent a keyboard-interactive packet, wait for reply
        debug2: input_userauth_info_req
        debug2: input_userauth_info_req: num_prompts 1

    Then it falls back to password authentication. If I disable password authentication on the remote machine, my connection attempt just fails with "Permission denied (publickey)." Same thing for sftp from the command line. The "funny" thing is that the exact same RSA key works like a charm in a FileZilla sftp session:

        12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key"
        12:08:00 Trace: Offer of public key accepted, trying to authenticate using it.
        12:08:01 Trace: Access granted
        12:08:01 Trace: Opened channel for session
        12:08:01 Trace: Started a shell/command
        12:08:01 Status: Connected to ganymede.server.com
        12:08:02 Trace: CSftpControlSocket::ConnectParseResponse()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Retrieving directory listing...
        12:08:02 Trace: CSftpControlSocket::SendNextCommand()
        12:08:02 Trace: CSftpControlSocket::ChangeDirSend()
        12:08:02 Command: pwd
        12:08:02 Response: Current directory is: "/root"
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0)
        12:08:02 Trace: CSftpControlSocket::ListSubcommandResult()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Directory listing successful

    Any thoughts? M
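
    The trace above shows the server simply declining the key rather than rejecting a bad signature. Since FileZilla authenticates with its own copy of the key, one hedged diagnostic is to confirm that the OpenSSH client is actually offering the same public key the server has in authorized_keys, and to read sshd's side of the story while connecting (paths assumed):

        # print the public key derived from the private key and compare it
        # byte-for-byte with ~/.ssh/authorized_keys on the server
        ssh-keygen -y -f ~/.ssh/Ganymede_key

        # on the server, watch sshd log the exact reason for the refusal
        sudo tail -f /var/log/auth.log

    If the keys match, the usual suspect is server-side permission checking: with StrictModes enabled, sshd silently ignores authorized_keys when the home directory or ~/.ssh is group- or world-writable (chmod go-w ~; chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys).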

  • Multi-authentication scenario for a public internet service using Kerberos

    - by StrangeLoop
    I have a public web server which has users coming from the internet (via HTTPS) and from a corporate intranet. I wish to use Kerberos authentication for the intranet users, so that they are automatically logged in to the web application without needing to provide any login/password (assuming they are already logged in to the Windows domain). For the users coming from the internet I want to provide traditional basic/form-based authentication; user/password data for these users would be stored internally in a database used by the application. The web application will be configured to use Kerberos authentication for users coming from specific intranet IP networks, and basic/form-based authentication for the rest.

    From a security perspective, are there risks involved in this kind of setup, or is this a generally accepted solution? My understanding is that the server doesn't need access to the KDC (see Kerberos authentication, service host and access to KDC) and can be completely isolated from AD and the corporate intranet. The server has a keytab file stored locally that is used to decrypt tickets sent by users coming from the intranet. The tickets only contain the username and domain of the incoming user; the server never sees the passwords of authenticated users.

    If the server were hacked and the keytab file compromised, an attacker could forge tickets for any domain user and access the web application as any user. But typically this is the case anyway if a hacker gains access to a keytab file on the local filesystem. The encryption key contained in the keytab file is derived from the service account password in AD and is in hashed form; I guess it is very difficult to brute-force this password if strong Kerberos encryption like AES-256-SHA1 is used. And as the server has no network access to the intranet, even the compromised service account couldn't be directly used for anything.
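
    For reference, the keytab such a setup depends on is typically generated on the AD side with ktpass; a sketch with placeholder principal, account, and output path (all assumptions):

        ktpass /princ HTTP/webserver.example.com@EXAMPLE.COM ^
               /mapuser svc-web@EXAMPLE.COM ^
               /crypto AES256-SHA1 ^
               /ptype KRB5_NT_PRINCIPAL ^
               /pass * /out web.keytab

    As the question notes, whoever holds this file can decrypt service tickets for this service, so filesystem permissions on the keytab matter as much as the strength of the service account password behind it.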

  • Google chrome proxy authentication dialogue timeout

    - by Nihar Sarangi
    I am on a network that uses an LDAP proxy for authentication, based on a username and password. Whenever I start Google Chrome, it pops up a proxy authentication dialogue, but the dialogue disappears automatically after a variable amount of time (sometimes it stays for 5 seconds, sometimes less than 1 second). I have found the same issue with Chromium. Is there any configuration I can set to control this timeout, or, say, auto-authenticate with my credentials from the shell or DE (GNOME 3 on Arch)?

  • Nginx Browser Caching using HTTP Headers outside server/location block

    - by Danny O'Sullivan
    I am having difficulty setting the HTTP expires headers for Nginx outside of specific server (and then location) blocks. What I want is something like the following:

        location ~* \.(png|jpg|jpeg|gif|ico)$ {
            expires 1y;
        }

    but without having to repeat it in every single server block, because I am hosting a large number of sites. I can put it in every server block, but it's not very DRY. If I try to put it into the http block or outside of all other blocks, I get "location directive is not allowed here." It seems I have to put it into a server block, and I have a different server block for every virtual host. Any help/clarification would be appreciated.
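
    nginx really does restrict location to server (and nested location) contexts, so the usual workaround is to move the shared directives into a separate file and include it from each server block; one include line per vhost is about as DRY as nginx allows. A sketch, with an assumed file path:

        # /etc/nginx/snippets/static-expires.conf
        location ~* \.(png|jpg|jpeg|gif|ico)$ {
            expires 1y;
        }

        # in each server block:
        server {
            server_name example.com;
            include /etc/nginx/snippets/static-expires.conf;
            # ...the rest of the vhost configuration
        }

    If the server blocks are generated from a template, the include line only has to be written once there.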

  • Make The Web Fast - The HAR Show: Capturing and Analyzing performance data with HTTP Archive format

    Need a flexible format to record, export, and analyze network performance data? Well, that's exactly what the HTTP Archive format (HAR) is designed to do! Even better, did you know that Chrome DevTools supports it? In this episode we'll take a deep dive into the format (as you'll see, it's very simple) and explore the many different ways it can help you capture and analyze your site's performance. Join +Ilya Grigorik and +Peter Lubbers to find out how to capture HAR network traces in Chrome, visualize the data via an online tool, share the reports with your clients and coworkers, automate the logging and capture of HAR data for your build scripts, and even adapt it to server-side analysis use cases! Yes, a rapid-fire session of awesome demos - see you there. (From: GoogleDevelopers)
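
    For a sense of what the episode covers: a HAR file is plain JSON, one entry per request/response pair with timing breakdowns. A heavily trimmed sketch of the HAR 1.2 shape (all values invented for illustration):

        {
          "log": {
            "version": "1.2",
            "creator": { "name": "WebInspector", "version": "537.1" },
            "entries": [
              {
                "startedDateTime": "2012-11-19T15:03:17.000Z",
                "time": 62,
                "request":  { "method": "GET", "url": "http://example.com/style.css" },
                "response": { "status": 200, "statusText": "OK" },
                "timings":  { "send": 1, "wait": 52, "receive": 9 }
              }
            ]
          }
        }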

  • Redirect from https://mydomain.com to http://mydomain.com

    - by Charlie
    Many of my visitors have bookmarked my site already with https://mydomain.com. Under the bad advice of a programmer, I put my whole Joomla site under SSL. I do not sell anything or provide any member services. I asked him many times if it would slow my site down; he said it wouldn't. I knew it did, and I've researched on this site and realized it does slow the site down because the pages can't be cached. Understood. Please, someone tell me how to get away from it now. I'm not sure how to approach this: should I add something to my .htaccess or my main index.php file? I've looked all over the net; there is much advice for adding redirects from http to https, but very few answers regarding the opposite, going from https to http. Thank you very much for your time. I appreciate it.
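
    A minimal .htaccess sketch for the https-to-http direction (assuming mod_rewrite is available; substitute the real domain):

        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^(.*)$ http://mydomain.com/$1 [R=301,L]

    One caveat: the browser completes the TLS handshake before it can see the 301, so the certificate has to stay valid for as long as those https:// bookmarks are in circulation.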

  • Strategy for clients to retrieve real-time log from HTTP server

    - by Jerry Dodge
    I have an HTTP server service application, written in Delphi, which has its own logging mechanism. I would like to provide a way for multiple clients to connect to this service and get a real-time update of the log. The log moves rather fast; there may be up to 50 messages within one second at times. The existing log is not saved anywhere, it's only kept in the memory of the server service, from where I need to distribute it to any client which needs it. Once all clients have a log message, it should be deleted.

    I intend to have clients use HTTP to ask the server for the log, with the server responding with an XML packet. The connections are not keep-alive. The only problem is, the server should send each client only those log records it hasn't seen yet, not everything. I have no way for the server to push the log to the clients in real time, so each client needs to repeatedly poll the server for the latest records. This HTTP server is very lightweight: there is no session management, not even any type of authentication.

    The only way I see is for a client to register itself on the server, and whenever a log message is issued, the server creates a copy of it for each client, where each client has a log queue (string list). However, suppose there are 100 clients connected and expecting to receive this log. That means the server must create 100 copies of each message, add them to the end of each client's queue, and wait for the clients to request them. When the server replies with the XML log, it flushes (deletes) whatever is in that client's queue. I'm worried, however, that this could cause memory issues; each client queue might accumulate 100 log messages before the client requests the latest ones. How should I go about doing this in the fastest way possible without hindering the performance of the server? I'm trying to avoid having to create a copy of each log message for each client.
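
    One standard way to avoid per-client copies is a single bounded buffer in which every message gets a global sequence number; each client only remembers the last sequence number it has seen and sends it back on the next poll. A rough Delphi sketch of the idea (class and member names invented for illustration):

        uses
          Classes, SyncObjs;

        type
          // One shared buffer; clients pass the last sequence number they have
          // seen, so the server never keeps per-client copies of the log.
          TLogBuffer = class
          private
            FLock: TCriticalSection;
            FEntries: TStringList;
            FFirstSeq: Int64;    // sequence number of the oldest retained entry
            FCapacity: Integer;
          public
            constructor Create(ACapacity: Integer);
            destructor Destroy; override;
            procedure Append(const AMessage: string);
            // Copies entries with sequence number >= ASince into AResult and
            // returns the sequence number the client should ask for next time.
            function GetSince(ASince: Int64; AResult: TStrings): Int64;
          end;

        constructor TLogBuffer.Create(ACapacity: Integer);
        begin
          inherited Create;
          FLock := TCriticalSection.Create;
          FEntries := TStringList.Create;
          FCapacity := ACapacity;
        end;

        destructor TLogBuffer.Destroy;
        begin
          FEntries.Free;
          FLock.Free;
          inherited;
        end;

        procedure TLogBuffer.Append(const AMessage: string);
        begin
          FLock.Enter;
          try
            if FEntries.Count = FCapacity then
            begin
              FEntries.Delete(0);   // bounded: discard the oldest entry
              Inc(FFirstSeq);
            end;
            FEntries.Add(AMessage);
          finally
            FLock.Leave;
          end;
        end;

        function TLogBuffer.GetSince(ASince: Int64; AResult: TStrings): Int64;
        var
          I: Integer;
        begin
          FLock.Enter;
          try
            for I := 0 to FEntries.Count - 1 do
              if FFirstSeq + I >= ASince then
                AResult.Add(FEntries[I]);
            Result := FFirstSeq + FEntries.Count;
          finally
            FLock.Leave;
          end;
        end;

    With this shape, memory cost is one copy of the log regardless of client count, and "delete once everyone has it" becomes a simple capacity (or age) policy on the buffer rather than per-client bookkeeping.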

  • How to publicize a new Android HTTP requests library

    - by Yaniv
    I don't know if this really belongs here, but I developed an open-source HTTP requests library for Android called Unite. This library was built mainly to cut coding time significantly; it makes it easy to create and work with HTTP requests. I think a big advantage is that it is open source, so everyone can contribute to make it even better. I started the project for personal use, and I really like the result. What is the right and proper way to publicize the project? I do think it will be handy to Android developers. So how can I make developers aware that this library exists?

  • Is this a valid HTTP response? [migrated]

    - by fatmck
    I am writing a web server in C++ which responds with the following for all requests:

        static std::string rsp[] = {
            "HTTP/1.1 200 OK\r\n",
            "Server: WebServer\r\n",
            "Content-Type: text/html\r\n",
            "Content-Length: 3\r\n",
            "Connection: close\r\n",
            "\r\n",
            "123"
        };

    The content "123" is successfully shown in a browser. But when I use apache-ab to run a test, ab always shows errors like this:

        ab -n 1 -c 1 http://127.0.0.1:1080/
        apr_socket_recv: Connection reset by peer (104)

    I thought I was closing the socket too quickly, so I commented out the close() call. But then ab just hangs, apparently waiting for a complete response.
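
    Assuming the response bytes themselves are fine (the browser suggests they are), a common cause of the reset is closing the socket while unread request data is still buffered: the kernel then sends RST instead of FIN, which ab reports as a connection reset. A sketch of a graceful close that drains the connection first:

        #include <sys/socket.h>
        #include <unistd.h>

        // Close without provoking an RST: announce end-of-stream, then drain
        // whatever the client still has in flight, then close the descriptor.
        void graceful_close(int fd) {
            shutdown(fd, SHUT_WR);           // we are done writing
            char buf[1024];
            while (read(fd, buf, sizeof buf) > 0) {
                // discard unread request bytes until the client closes (EOF)
            }
            close(fd);
        }

    Reading the full request (headers up to the blank line) before writing the response avoids the problem at its source.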

  • HTTP Headers for Unknown Content-Length

    - by jocull
    I am currently trying to stream content out to the web after a transcoding process. This usually works fine by writing binary data out to my web stream, but some browsers (specifically IE7 and IE8) do not like a response with no Content-Length defined in the HTTP header. I believe "valid" headers are supposed to have it set. What is the proper way to stream content to the web when the Content-Length is unknown? The transcoding process can take a while, so I want to start streaming the output as it completes.
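
    HTTP/1.1's mechanism for bodies of unknown length is chunked transfer encoding: omit Content-Length, send Transfer-Encoding: chunked, and emit the body as hex-length-prefixed chunks terminated by a zero-length chunk. A sketch of the wire format (sizes and content type invented):

        HTTP/1.1 200 OK
        Content-Type: video/mp4
        Transfer-Encoding: chunked

        1f40
        ...8000 bytes of transcoded output...
        1f40
        ...the next 8000 bytes...
        0

    Most server stacks switch to chunked output automatically when the body is written without a Content-Length having been set; for HTTP/1.0 clients that don't understand chunks, the fallback is to terminate the body by closing the connection.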

  • HTTP status code for "success with errors"?

    - by Richard Levasseur
    I've poked around a bit, but I don't see an HTTP status code for when a request succeeds but there is an error after the "point of no return". E.g., say you process a request and it's committed to the database, but while returning the result you run out of memory, or encounter an NPE, or what have you. It would have been a 200 response, but now, internally, you aren't able to return the proper, well-formed response. 202 Accepted doesn't seem to fit, since we've already processed the request. What status code means "success, but errors"? Does one even exist?

  • Choosing http status code for unknown command reply

    - by w0rldart
    So, I'm writing a small test that I have been required to complete, and I just want to give it some final touches by adding proper status codes to the responses, among other things. Right now, my dilemma is which HTTP status code to choose for the "Unknown command" response, sent after $_GET['cmd'] has been compared against the list of existing commands:

        case 404: $text = 'Not Found'; break;
        case 405: $text = 'Method Not Allowed'; break;
        case 406: $text = 'Not Acceptable'; break;

    Which one of the above should I go for? And if none of them, which other?
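
    Arguably none of the three fits: 405 is about the request method (GET/POST/...), and 406 is about content negotiation. For a query-string command the server doesn't recognize, 400 Bad Request (or 404, if each command is viewed as a resource) is the usual choice. A minimal PHP sketch, with an invented command list:

        <?php
        $commands = array('start', 'stop', 'status');  // hypothetical commands

        $cmd = isset($_GET['cmd']) ? $_GET['cmd'] : '';
        if (!in_array($cmd, $commands, true)) {
            header('HTTP/1.1 400 Bad Request');  // unknown command
            echo 'Unknown command';
            exit;
        }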

  • How to set a request body for the HTTP DELETE method

    - by Tushar Tarkas
    I am writing client code for a server's Delete API. The API specification requires data to be sent with the request. I am using the HttpComponents v3.1 library for the client code. Using the HttpDelete class, I could not find a way to add request data to it. Is there a way to do so? Below is the code snippet:

        HttpDelete deleteReq = new HttpDelete(uriBuilder.toString());
        List<NameValuePair> postParams = new ArrayList<NameValuePair>();
        postParams.add(new BasicNameValuePair(RestConstants.POST_DATA_PARAM_NAME, postData.toString()));
        try {
            UrlEncodedFormEntity entity = new UrlEncodedFormEntity(postParams);
            entity.setContentEncoding(HTTP.UTF_8);
            //deleteReq.setEntity(entity); // There is no method setEntity()
            deleteReq.setHeader(RestConstants.CONTENT_TYPE_HEADER, RestConstants.CONTENT_TYPE_HEADER_VAL);
        } catch (UnsupportedEncodingException e) {
            logger.error("UnsupportedEncodingException: " + e);
        }

    Thanks in advance.
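
    HttpDelete (an HttpClient 4.x class) deliberately extends the entity-less request base, so it has no setEntity(). The usual workaround is a small subclass of HttpEntityEnclosingRequestBase that reports DELETE as its method; a sketch:

        import java.net.URI;
        import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;

        // A DELETE request that, unlike HttpDelete, can carry a request body.
        public class HttpDeleteWithBody extends HttpEntityEnclosingRequestBase {
            public HttpDeleteWithBody(final String uri) {
                super();
                setURI(URI.create(uri));
            }

            @Override
            public String getMethod() {
                return "DELETE";
            }
        }

    The snippet above then becomes HttpDeleteWithBody deleteReq = new HttpDeleteWithBody(uriBuilder.toString()); followed by deleteReq.setEntity(entity);. Note that some servers and intermediaries ignore or reject DELETE bodies, which is presumably why the stock class omits the method.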

  • Are your redirects HTTP compliant (absolute URI)?

    - by webbiedave
    In the Hypertext Transfer Protocol 1.1 spec, it states for the Location header:

        The field value consists of a single absolute URI.
        Location = "Location" ":" absoluteURI

    An example is:

        Location: http://www.w3.org/pub/WWW/People.html

    I have coded many an app that doesn't include the scheme and domain (i.e., /thankyou/), and every browser I've ever tested with redirects correctly. I'm wondering if it's imperative to go back and change all my redirect code, or whether to just ignore this part of the spec. Have many of you produced code that doesn't comply, and will you go back and change it? Or will you just trust that clients will continue to resolve them well into the future? (Going forward I will certainly adhere to this.)

  • yum update works but yum --security update fails to work in Fedora 12

    - by bobo
    I had already installed yum-security, and I was going to do an update by entering the following command:

        [root@localhost /]# yum update
        Loaded plugins: presto, priorities, refresh-packagekit, security
        Skipping security plugin, no data
        Setting up Update Process
        Resolving Dependencies
        Skipping security plugin, no data
        --> Running transaction check
        ---> Package eject.i686 0:2.1.5-17.fc12 set to be updated
        ---> Package glibc.i686 0:2.11.1-4 set to be updated
        ---> Package glibc-common.i686 0:2.11.1-4 set to be updated
        ---> Package glibc-devel.i686 0:2.11.1-4 set to be updated
        ---> Package glibc-headers.i686 0:2.11.1-4 set to be updated
        ---> Package gnome-themes.noarch 0:2.28.1-3.fc12 set to be updated
        ---> Package gtk2.i686 0:2.18.9-3.fc12 set to be updated
        ---> Package gtk2-immodule-xim.i686 0:2.18.9-3.fc12 set to be updated
        ---> Package kernel-PAE.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-PAE-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-PAEdebug-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-debug-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-firmware.noarch 0:2.6.32.11-99.fc12 set to be updated
        ---> Package kernel-headers.i686 0:2.6.32.11-99.fc12 set to be updated
        ---> Package libnetfilter_conntrack.i686 0:0.0.101-1.fc12 set to be updated
        ---> Package media-player-info.noarch 0:5-1.fc12 set to be updated
        ---> Package nscd.i686 0:2.11.1-4 set to be updated
        ---> Package perf.noarch 0:2.6.32.11-99.fc12 set to be updated
        ---> Package rhythmbox.i686 0:0.12.6-5.fc12 set to be updated
        ---> Package sysvinit-tools.i686 0:2.87-3.dsf.fc12 set to be updated
        --> Finished Dependency Resolution
        --> Running transaction check
        ---> Package kernel-PAE.i686 0:2.6.31.12-174.2.3.fc12 set to be erased
        --> Finished Dependency Resolution

        Dependencies Resolved

        ================================================================================
         Package                 Arch    Version                 Repository       Size
        ================================================================================
        Installing:
         kernel-PAE              i686    2.6.32.11-99.fc12       updates          20 M
         kernel-PAE-devel        i686    2.6.32.11-99.fc12       updates         6.2 M
         kernel-PAEdebug-devel   i686    2.6.32.11-99.fc12       updates         6.2 M
         kernel-debug-devel      i686    2.6.32.11-99.fc12       updates         6.2 M
         kernel-devel            i686    2.6.32.11-99.fc12       updates         6.1 M
        Updating:
         eject                   i686    2.1.5-17.fc12           updates          49 k
         glibc                   i686    2.11.1-4                updates         4.2 M
         glibc-common            i686    2.11.1-4                updates          14 M
         glibc-devel             i686    2.11.1-4                updates         953 k
         glibc-headers           i686    2.11.1-4                updates         590 k
         gnome-themes            noarch  2.28.1-3.fc12           updates         1.5 M
         gtk2                    i686    2.18.9-3.fc12           updates         3.2 M
         gtk2-immodule-xim       i686    2.18.9-3.fc12           updates          60 k
         kernel-firmware         noarch  2.6.32.11-99.fc12       updates         968 k
         kernel-headers          i686    2.6.32.11-99.fc12       updates         749 k
         libnetfilter_conntrack  i686    0.0.101-1.fc12          updates          37 k
         media-player-info       noarch  5-1.fc12                updates          32 k
         nscd                    i686    2.11.1-4                updates         189 k
         perf                    noarch  2.6.32.11-99.fc12       updates          79 k
         rhythmbox               i686    0.12.6-5.fc12           updates         4.0 M
         sysvinit-tools          i686    2.87-3.dsf.fc12         updates          58 k
        Removing:
         kernel-PAE              i686    2.6.31.12-174.2.3.fc12  @updates         72 M

        Transaction Summary
        ================================================================================
        Install       5 Package(s)
        Upgrade      16 Package(s)
        Remove        1 Package(s)
        Reinstall     0 Package(s)
        Downgrade     0 Package(s)

        Total download size: 75 M
        Is this ok [y/N]:

    But then I changed my mind: I decided to do a security-only update instead of a full update, so I entered the following command:

        [root@localhost /]# yum --security update
        Loaded plugins: presto, priorities, refresh-packagekit, security
        Setting up Update Process
        Resolving Dependencies
        Limiting packages to security relevant ones
        http://download.fedoraproject.org/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.riken.jp/Linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.riken.jp/Linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirror.cse.iitk.ac.in/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.cse.iitk.ac.in/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirrors.isu.net.sa/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.isu.net.sa/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        ftp://ftp.chu.edu.tw/linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno -1] Metadata file does not match checksum
        Trying other mirror.
        http://mirror.yandex.ru/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.yandex.ru/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://linus.iyte.edu.tr/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://linus.iyte.edu.tr/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.jaist.ac.jp/pub/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.jaist.ac.jp/pub/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.kddilabs.jp/Linux/packages/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://srv2.ftp.ne.jp/Linux/packages/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://www.ftp.ne.jp/Linux/distributions/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://srv2.ftp.ne.jp/Linux/distributions/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.rhd.ru/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.rhd.ru/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirrors.163.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.163.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirror.nus.edu.sg/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.nus.edu.sg/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.linux.org.tr/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.linux.org.tr/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirrors.cytanet.com.cy/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.cytanet.com.cy/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://fedoramirror.hnsdc.com/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://fedoramirror.hnsdc.com/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.twaren.net/Linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://c147.twaren.net/pub/Linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.mirror.tw/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.mirror.tw/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.cs.pu.edu.tw/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cs.pu.edu.tw/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ubuntu.cn99.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ubuntu.cn99.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        Error: failure: repodata/updateinfo.xml.gz from updates: [Errno 256] No more mirrors to try.
        You could try using --skip-broken to work around the problem
        ^C[root@localhost /]#

    As can be seen in the output, when I run yum --security update it does show the "Limiting packages to security relevant ones" message, so it is aware of the option. But I don't know why it keeps reporting HTTP error 416. I searched Google and found the following description of the error, but it doesn't seem to help much:

        HTTP ERROR 416 - Requested Range Not Satisfiable
        A 416 status code indicates that the server was unable to fulfill the request. This may be, for example, because the client asked for the 800th-900th bytes of a document, but the document was only 200 bytes long.

    It suggests using the --skip-broken option; I tried that, and the output is the same. I have already tested many times: it just doesn't work when the --security option is used. What could be the possible cause of this problem?
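
    The 416 on updateinfo.xml.gz usually points at the local cache rather than the mirrors: yum tries to resume a stale partial copy with a byte-range request that the changed remote file can no longer satisfy. Assuming that is what is happening here, clearing the cached metadata and retrying is the standard fix:

        yum clean metadata     # drop cached repodata, including updateinfo.xml.gz
        yum --security update
        # or, more aggressively:
        yum clean all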

  • Unable to find valid certification path to requested target while CAS authentication

    - by Dmitriy Sukharev
    I'm trying to configure CAS authentication. It requires both CAS and the client application to use the HTTPS protocol. Unfortunately, we have to use a self-signed certificate (with a CN that doesn't have anything in common with our server). Also, the server is behind a firewall and we have only two ports (ssh and https) visible. As several applications should be visible externally, we use Apache to reverse-proxy AJP requests to them; secure connections are handled by Apache, and the Tomcats are not configured for SSL. But I got an exception during authentication, and therefore decided to set the keystore in CATALINA_OPTS:

        export CATALINA_OPTS="-Djavax.net.ssl.keyStore=/path/to/tomcat/ssl/cert.pfx -Djavax.net.ssl.keyStoreType=PKCS12 -Djavax.net.ssl.keyStorePassword=password -Djavax.net.ssl.keyAlias=alias -Djavax.net.debug=ssl"

    cert.pfx was created from the certificate and key used by the Apache HTTP Server:

        $ openssl pkcs12 -export -out /path/to/tomcat/ssl/cert.pfx -inkey /path/to/apache2/ssl/server-key.pem -in /path/to/apache2/ssl/server-cert.pem

    When I try to authenticate a user, I get the following exception:

        Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
            at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:174) ~[na:1.6.0_32]
            at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:238) ~[na:1.6.0_32]
            at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:318) ~[na:1.6.0_32]

    Meanwhile, I can see in catalina.out that Tomcat sees the certificate in cert.pfx, and it's the same one presented during authentication:

        09:11:38.886 [http-bio-8080-exec-2] DEBUG o.j.c.c.v.Cas20ProxyTicketValidator - Constructing validation url: https://external-ip/cas/proxyValidate?pgtUrl=https%3A%2F%2Fexternal-ip%2Fclient%2Fj_spring_cas_security_proxyreceptor&ticket=ST-17-PN26WtdsZqNmpUBS59RC-cas&service=https%3A%2F%2Fexternal-ip%2Fclient%2Fj_spring_cas_security_check
        09:11:38.886 [http-bio-8080-exec-2] DEBUG o.j.c.c.v.Cas20ProxyTicketValidator - Retrieving response from server.
        keyStore is : /path/to/tomcat/ssl/cert.pfx
        keyStore type is : PKCS12
        keyStore provider is :
        init keystore
        init keymanager of type SunX509
        ***
        found key for : 1
        chain [0] = [
        [
          Version: V1
          Subject: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country
          Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
          Key: Sun RSA public key, 1024 bits
          modulus: 13??a lot of digits here??19
          public exponent: ????7
          Validity: [From: Tue Apr 24 16:32:18 CEST 2012,
                     To: Wed Apr 24 16:32:18 CEST 2013]
          Issuer: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country
          SerialNumber: [    d??????? ????????]
        ]
          Algorithm: [SHA1withRSA]
          Signature:
          0000: 65   ...signature is here...
          0070: 96 .
        ]
        ***
        trustStore is: /jdk-home-folder/jre/lib/security/cacerts

    (A long list of trusted CAs follows here; nothing in it is related to our certificate or our (not trusted) CA.)

        ...
        09:11:39.731 [http-bio-8080-exec-4] DEBUG o.j.c.c.v.Cas20ProxyTicketValidator - Retrieving response from server.
        Allow unsafe renegotiation: false
        Allow legacy hello messages: true
        Is initial handshake: true
        Is secure renegotiation: false
        %% No cached client session
        *** ClientHello, TLSv1
        RandomCookie: GMT: 1347433643 bytes = { 63, 239, 180, 32, 103, 140, 83, 7, 109, 149, 177, 80, 223, 79, 243, 244, 60, 191, 124, 139, 108, 5, 122, 238, 146, 1, 54, 218 }
        Session ID: {}
        Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SHA, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
        Compression Methods: { 0 }
        ***
        http-bio-8080-exec-4, WRITE: TLSv1 Handshake, length = 75
        http-bio-8080-exec-4, WRITE: SSLv2 client hello message, length = 101
        http-bio-8080-exec-4, READ: TLSv1 Handshake, length = 81
        *** ServerHello, TLSv1
        RandomCookie: GMT: 1347433643 bytes = { 145, 237, 232, 63, 240, 104, 234, 201, 148, 235, 12, 222, 60, 75, 174, 0, 103, 38, 196, 181, 27, 226, 243, 61, 34, 7, 107, 72 }
        Session ID: {79, 202, 117, 79, 130, 216, 168, 38, 68, 29, 182, 82, 16, 25, 251, 66, 93, 108, 49, 133, 92, 108, 198, 23, 120, 120, 135, 151, 15, 13, 199, 87}
        Cipher Suite: SSL_RSA_WITH_RC4_128_SHA
        Compression Method: 0
        Extension renegotiation_info, renegotiated_connection: <empty>
        ***
        %% Created: [Session-2, SSL_RSA_WITH_RC4_128_SHA]
        ** SSL_RSA_WITH_RC4_128_SHA
        http-bio-8080-exec-4, READ: TLSv1 Handshake, length = 609
        *** Certificate chain
        chain [0] = [
        [
          Version: V1
          Subject: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country
          Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
          Key: Sun RSA public key, 1024 bits
          modulus: 13??a lot of digits here??19
          public exponent: ????7
          Validity: [From: Tue Apr 24 16:32:18 CEST 2012,
                     To: Wed Apr 24 16:32:18 CEST 2013]
          Issuer: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country
          SerialNumber: [    d??????? ????????]
        ]
          Algorithm: [SHA1withRSA]
          Signature:
          0000: 65   ...signature is here...
          0070: 96 .
        ]
        ***
        http-bio-8080-exec-4, SEND TLSv1 ALERT: fatal, description = certificate_unknown
        http-bio-8080-exec-4, WRITE: TLSv1 Alert, length = 2
        http-bio-8080-exec-4, called closeSocket()
        http-bio-8080-exec-4, handling exception: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

    I tried converting our pem certificate to der format and importing it into the trusted keystore (cacerts), without the private key, but it didn't change anything; I'm not confident I did it right, though. I should also mention that I don't know the passphrase for our server-key.pem file, and it probably differs from the password of the keystore I created.

    OS: CentOS 6.2. Architecture: x64. Tomcat version: 7. Apache HTTP Server version: 2.4.

    Is there any way to make Tomcat accept our certificate?
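
    The debug output suggests the problem is trust rather than key material: javax.net.ssl.keyStore only supplies the client's own certificate and key, while PKIX path validation reads the truststore, which here is still the stock cacerts with nothing related to the self-signed certificate in it. A sketch of importing the certificate into a dedicated truststore and pointing Tomcat at it (paths, alias, and password are placeholders):

        # keytool accepts the PEM certificate directly (no private key needed)
        keytool -import -trustcacerts -alias cas-server \
                -file /path/to/apache2/ssl/server-cert.pem \
                -keystore /path/to/tomcat/ssl/truststore.jks -storepass changeit

        # then add to CATALINA_OPTS:
        #   -Djavax.net.ssl.trustStore=/path/to/tomcat/ssl/truststore.jks
        #   -Djavax.net.ssl.trustStorePassword=changeit

    Since only the public certificate is imported, the unknown passphrase for server-key.pem shouldn't block this route.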

  • Does the SPDY protocol eliminate the need for cookieless domains?

    - by Clint Pachl
    With plain HTTP, cookieless domains were an optimization to avoid unnecessarily sending cookie headers for page resources. However, the SPDY protocol compresses HTTP headers and in some cases eliminates unnecessary headers. My question then is, does SPDY make cookieless domains irrelevant? Furthermore, should the page source and all of its resources be hosted at the same domain in order to optimize a SPDY implementation?

  • Understanding HTTP Cookies in Indy 10 for Delphi XE2

    - by Jerry Dodge
    I have been working with Indy 10 HTTP servers/clients lately in Delphi XE2, and I need to make sure I'm understanding session management correctly.

    In the server, I have a "bucket" of sessions, which is a list of objects that each represent a unique session. I don't use a username and password to authenticate users; rather, I use a unique API key which is issued to a client and has an expiration. When a client wishes to connect to the server, it first logs in by calling the "login" command, which is a path like this: http://localhost:1234/login?APIKey=abcdefghij. The server checks this API key against the database and, if it's valid, creates a new session in the bucket, issues a new cookie (a unique string), and sets the response cookies with Success=Y and Cookie=abcdefghij.

    This is where my question comes in. Assuming the client end has its own method of cookie management, the client will receive this login response from the server and automatically save the cookies as necessary. Any future request from the client to the server will automatically send those cookies along, and the client side doesn't necessarily have to worry about setting them when sending requests to the server. Right?

    PS - I'm asking this question here on programmers.stackexchange.com because I didn't see it fit to ask on stackoverflow.com. If anyone thinks this is appropriate enough for stackoverflow.com, please let me know.
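
    That is exactly the contract HTTP cookies define: the server sets the cookie once, and a conforming client echoes it back on every subsequent in-scope request without the application code touching it. A sketch of the wire exchange (cookie name and value invented):

        GET /login?APIKey=abcdefghij HTTP/1.1
        Host: localhost:1234

        HTTP/1.1 200 OK
        Set-Cookie: SessionID=xyz123; Path=/

        GET /some/resource HTTP/1.1
        Host: localhost:1234
        Cookie: SessionID=xyz123

    In Indy specifically, this automatic behavior on the client side comes from assigning a TIdCookieManager to the TIdHTTP component's CookieManager property.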

  • https:// search results appearing on Google for a purely http:// site

    - by hydrurga
    I started weeding through my site's search results on Google today, using a site: search, to determine if there are any links that cause 404s and thus need redirecting. To my amazement, I noticed numerous https:// results relating to various pages. My site doesn't have an SSL certificate, doesn't serve such pages, doesn't internally link to https:// pages, doesn't include any such files in its sitemap.xml, and never has. I did a Google search for https://<my site> and found one site that incorrectly refers to the root of my site with an https:// prefix; I will try to contact them to get it corrected. I'm not sure, however, how Googlebot managed to index the non-root files as https://. I can't find any external links to them, and surely, without certification, Googlebot should have stalled at the first request? I've just added the following lines to the site's .htaccess (although the surfer still has to navigate through the browser's "This site is a security risk. Abandon hope all ye who enter here!" message(s) first to get there):

        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^(.*)$ http://www.<my site>.org/$1 [R=301,L]

    replacing <my site> with my domain name. My big question, though, is this: I would like to use the Google Webmaster Tools "Remove URLs" feature to remove the https:// pages from the index. Can I be guaranteed that this will only remove the https:// versions of each relevant page, and not the valid http:// versions? My thanks to anyone who can help me out with this particular question and the issue in general.

  • How to indicate to a web server the language of a resource

    - by Nik M
    I'm writing an HTTP API to a publishing server, and I want resources with representations in multiple languages. A user whose client GETs a resource which has Korean, Japanese, and Traditional Chinese representations, and sends Accept-Language: en, ja;q=0.7, should get the Japanese. One resource, identified by one URI, will therefore have a number of different language representations. This seems to me like a totally orthodox use of content negotiation and multiple resource representations.

    But when each translator comes to provide these alternate language representations to the server, what's the correct way to tell the server which language to store the representation under? I'm having the translators PUT the representation in its entirety to the same URI, but I can't find a way to do this elegantly. Content-Language is a response header, and none of the request headers seem to fit the bill. It seems my options are:

    1. Invent a new request header.
    2. Supply additional metadata in a multipart/related document.
    3. Provide the language as a parameter to the Content-Type of the request, like Content-Type: text/html;language=en.

    I don't want to get into the business of extending HTTP, and I don't feel great about bundling extra metadata into the representation. Neither approach seems friendly to HTTP caches either. So option 3 seems like the best way I can think of, but even then it's decidedly non-standard to put my own parameters on a very well-established content type. Is there any by-the-book way of achieving this?
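
    One spec detail worth checking before inventing anything: RFC 2616 defines Content-Language as an entity header rather than a response-only header, so it is legitimate on a PUT, where it describes the body being uploaded. Under that reading the upload needs no extension at all; a sketch (URI invented):

        PUT /articles/42 HTTP/1.1
        Host: publishing.example.org
        Content-Type: text/html; charset=utf-8
        Content-Language: ja

        <html>...the Japanese representation...</html>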

  • How can I set up a 404 error page when people access http://ftp.mydomain.com?

    - by Tim B.
    I am a freelance videographer/developer, and part of my job involves transferring large files over FTP to production houses/television stations. While the majority of people in my industry understand the difference between FTP and HTTP, in the past couple of months I've had several interactions with people who still open Internet Explorer, try to access http://ftp.mydomain.com, receive an error page served by HostGator, and tell me that they cannot access my FTP server. Instead of spending time delivering instructions via e-mail, I'd much prefer to serve up a custom error page in this instance that instructs them how to download and use an FTP client. I tried setting up a sub-domain in cPanel, hoping I could simply drop in an .htaccess file with the error page, but I got this error:

        ftp.mydomain.com domainadmin-domainexistsglobal

    I also tried creating a custom error page in PHP which reads the site URL and serves up the custom content only when http://ftp.mydomain.com is accessed. Unfortunately, the error page works for every subdomain except that one. I'm not entirely sure this is even technically possible, which is why I bring it to the good people of StackOverflow. Thanks!

  • How often is authenticated SOCKS5 used as an HTTP proxy in organizations?

    - by brainsnorkel
    I'm wondering how frequently organisations use SOCKS5 as their web proxy protocol, as opposed to, say, plain or authenticated HTTP proxies. Should an application even bother supporting SOCKS5 for HTTP proxying? What percentage of organisations use SOCKS as an HTTP proxy? If you work in an organisation where you use SOCKS5, particularly authenticated SOCKS5, as the means of achieving HTTP Internet connectivity, I'd be interested in hearing your thoughts. If you have experience with requirements for SOCKS5 proxies in your software, I'd like to hear your thoughts too.
