Search Results

Search found 59381 results on 2376 pages for 'http request'.


  • HTTP POST with URL query parameters -- good idea or not?

    - by Steven Huwig
    I'm designing an API to go over HTTP and I am wondering if using the HTTP POST command, but with URL query parameters only and no request body, is a good way to go.

    Considerations:
    - "Good Web design" requires non-idempotent actions to be sent via POST. This is a non-idempotent action.
    - It is easier to develop and debug this app when the request parameters are present in the URL.
    - The API is not intended for widespread use.
    - It seems like making a POST request with no body will take a bit more work, e.g. a Content-Length: 0 header must be explicitly added.
    - It also seems to me that a POST with no body is a bit counter to most developers' and HTTP frameworks' expectations.

    Are there any more pitfalls or advantages to sending parameters on a POST request via the URL query rather than the request body?

    Edit: The reason this is under consideration is that the operations are not idempotent and have side effects other than retrieval. See the HTTP spec: In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested. ... Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent.
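    For what it's worth, the empty-body POST itself is straightforward to produce; here is a minimal Java sketch (the endpoint and parameters are invented) that sends everything in the query string and sets Content-Length: 0 explicitly:

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class EmptyBodyPost {
            public static void main(String[] args) throws Exception {
                // Hypothetical non-idempotent action, parameters in the query string only.
                URL url = new URL("http://api.example.com/accounts/close?id=42&reason=fraud");
                HttpURLConnection con = (HttpURLConnection) url.openConnection();
                con.setRequestMethod("POST");
                con.setDoOutput(true);
                con.setFixedLengthStreamingMode(0);   // forces a Content-Length: 0 header
                con.getOutputStream().close();        // empty request body
                System.out.println("Status: " + con.getResponseCode());
                con.disconnect();
            }
        }

    Whether the server-side framework then picks the parameters up from the query string for a POST is the part worth testing early.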


  • Setting custom HTTP request headers in a URL object doesn't work.

    - by Blagovest Buyukliev
    I am trying to fetch an image from an IP camera using HTTP. The camera requires HTTP basic authentication, so I have to add the corresponding request header:

        URL url = new URL("http://myipcam/snapshot.jpg");
        URLConnection uc = url.openConnection();
        uc.setRequestProperty("Authorization",
                "Basic " + new String(Base64.encode("user:pass".getBytes())));

        // outputs "null"
        System.out.println(uc.getRequestProperty("Authorization"));

    I am later passing the url object to ImageIO.read(), and, as you can guess, I am getting an HTTP 401 Unauthorized, although user and pass are correct. What am I doing wrong? I've also tried new URL("http://user:pass@myipcam/snapshot.jpg"), but that doesn't work either.
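    A minimal sketch that reads the image from the already-opened, authenticated connection instead of handing the URL to ImageIO; ImageIO.read(URL) opens its own stream from the URL, which does not carry the Authorization header. java.util.Base64 stands in for the question's Base64 helper, and the camera URL and credentials are the question's placeholders:

        import java.awt.image.BufferedImage;
        import java.io.InputStream;
        import java.net.URL;
        import java.net.URLConnection;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;
        import javax.imageio.ImageIO;

        public class CameraSnapshot {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://myipcam/snapshot.jpg");
                URLConnection uc = url.openConnection();
                String credentials = Base64.getEncoder()
                        .encodeToString("user:pass".getBytes(StandardCharsets.UTF_8));
                uc.setRequestProperty("Authorization", "Basic " + credentials);

                // Read from this connection's stream so the header stays in play.
                try (InputStream in = uc.getInputStream()) {
                    BufferedImage snapshot = ImageIO.read(in);
                    if (snapshot != null) {
                        System.out.println("Got image: " + snapshot.getWidth() + "x" + snapshot.getHeight());
                    }
                }
            }
        }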


  • How to emulate a real http request via cfhttp?

    - by maectpo
    Hi, I need to emulate a real HTTP request via cfhttp. I was fetching an RSS feed with ColdFusion, but tonight the site started to block my request and now sends an index page in response instead of the RSS feed. I added a user agent to the cfhttp call, but it doesn't help. Opera, Firefox and Chrome open the feed correctly from the same computer.


  • Problem Executing Async Web Request

    - by davidhayes
    Hi, can anyone tell me what I've done wrong with this simple code? When I run it, it hangs on:

        using (Stream postStream = request.EndGetRequestStream(asynchronousResult))

    If I comment out the requestState.Wait.WaitOne(); line the code executes correctly but obviously doesn't wait for the response. I'm guessing the call to EndGetRequestStream is somehow returning me to the context of the main thread?? I'm pretty sure my code is essentially the same as the sample though (MSDN Documentation).

        using System;
        using System.Net;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Documents;
        using System.Windows.Ink;
        using System.Windows.Input;
        using System.Windows.Media;
        using System.Windows.Media.Animation;
        using System.Windows.Shapes;
        using System.IO;
        using System.Text;

        namespace SBRemoteClient
        {
            public class JSONClient
            {
                public string ExecuteJSONQuery(string url, string query)
                {
                    System.Uri uri = new Uri(url);
                    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
                    request.Method = "POST";
                    request.Accept = "application/json";
                    byte[] requestBytes = Encoding.UTF8.GetBytes(query);
                    RequestState requestState = new RequestState(request, requestBytes);
                    IAsyncResult resultRequest = request.BeginGetRequestStream(
                        new AsyncCallback(GetRequestStreamCallback), requestState);
                    requestState.Wait.WaitOne();
                    IAsyncResult resultResponse = (IAsyncResult)request.BeginGetResponse(
                        new AsyncCallback(GetResponseStreamCallback), requestState);
                    requestState.Wait.WaitOne();
                    return requestState.Response;
                }

                private static void GetRequestStreamCallback(IAsyncResult asynchronousResult)
                {
                    try
                    {
                        RequestState requestState = (RequestState)asynchronousResult.AsyncState;
                        HttpWebRequest request = requestState.Request;
                        using (Stream postStream = request.EndGetRequestStream(asynchronousResult))
                        {
                            postStream.Write(requestState.RequestBytes, 0, requestState.RequestBytes.Length);
                        }
                        requestState.Wait.Set();
                    }
                    catch (Exception e)
                    {
                        Console.Out.WriteLine(e);
                    }
                }

                private static void GetResponseStreamCallback(IAsyncResult asynchronousResult)
                {
                    RequestState requestState = (RequestState)asynchronousResult.AsyncState;
                    HttpWebRequest request = requestState.Request;
                    using (HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(asynchronousResult))
                    {
                        using (Stream responseStream = response.GetResponseStream())
                        {
                            using (StreamReader streamRead = new StreamReader(responseStream))
                            {
                                requestState.Response = streamRead.ReadToEnd();
                                requestState.Wait.Set();
                            }
                        }
                    }
                }
            }
        }


  • Haproxy ACL for balance on URL request

    - by Elgreco08
    I'm using Ubuntu with HAProxy version 1.4.13. It is load balancing two subdomains: app1.domain.com and app2.domain.com. Now I want to use ACLs to route requests to the right backend based on the requested URL. For example:
    - http://app1.domain.com/path/games/index.php should be sent to backend1
    - http://app1.domain.com/path/photos/index.php should be sent to backend2
    - http://app2.domain.com/path/mail/index.php should be sent to backend3
    - http://app2.domain.com/path/wazap/index.php should be sent to backend4

    I used the following ACLs:

        frontend http-farm
            bind 0.0.0.0:80
            acl app1web hdr_beg(host) -i app1    # for http://app1.domain.com
            acl app2web hdr_beg(host) -i app2    # for http://app2.domain.com
            acl msg-url-1 url_reg ^\/path/games/.*
            acl msg-url-2 url_reg ^\/path/photos/.*
            acl msg-url-3 url_reg ^\/path/mail/.*
            acl msg-url-4 url_reg ^\/path/wazap/.*
            use_backend games if msg-url-1 app1web
            use_backend photos if msg-url-2 app2web
            use_backend mail if .....

        backend games
            option httpchk GET /alive.php HTTP/1.1\r\nHost:\ app1.domain.com
            option forwardfor
            balance roundrobin
            server appsrv-1 192.168.1.10:80 check inter 2000 fall 3
            server appsrv-2 192.168.1.11:80 check inter 2000 fall 3

        backend photos
            option httpchk GET /alive.php HTTP/1.1\r\nHost:\ app2.domain.com
            option forwardfor
            balance roundrobin
            server appsrv-1 192.168.1.13:80 check inter 2000 fall 3
            server appsrv-2 192.168.1.14:80 check inter 2000 fall 3
        ....

    Since the paths (mail, photos, etc.) will be application pools on IIS, I want to monitor whether they are alive; if a pool does not respond, HAProxy should stop serving it. My problem is most likely in the regular expression in the ACL:

        acl msg-url-4 url_reg ^\/path/wazap/.*

    What should I change in the ACL to make it work? Thanks for any hints.


  • How to make a ksoap2 request with multiple properties in Android?

    - by nexusone
    I must make the following SOAP request, but I cannot get it to work; I have tried several approaches and they all fail, and I always get a blank field in response. The request should look like this:

        POST /service.asmx HTTP/1.1
        Host: host
        Content-Type: text/xml; charset=utf-8
        Content-Length: length
        SOAPAction: "SOAPAction"

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <GetQuickParkEvents xmlns="NAMESPACE">
              <User>
                <ID>int</ID>
                <Name>string</Name>
                <UserName>string</UserName>
                <Password>string</Password>
              </User>
              <Filter>
                <TimeSpan>
                  <Since>dateTime</Since>
                  <To>dateTime</To>
                </TimeSpan>
                <Reg>string</Reg>
                <Nalog>string</Nalog>
                <Status>string</Status>
                <Value>string</Value>
              </Filter>
            </GetQuickParkEvents>
          </soap:Body>
        </soap:Envelope>

    I thank you in advance if anyone can help me!
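    A rough ksoap2-android sketch of building that envelope with nested SoapObjects; the namespace, element names and placeholder values come from the request above, while the SOAP action and service URL are assumptions that need adjusting to the real service:

        import org.ksoap2.SoapEnvelope;
        import org.ksoap2.serialization.SoapObject;
        import org.ksoap2.serialization.SoapSerializationEnvelope;
        import org.ksoap2.transport.HttpTransportSE;

        public class QuickParkClient {
            private static final String NAMESPACE = "NAMESPACE";
            private static final String METHOD = "GetQuickParkEvents";
            private static final String SOAP_ACTION = NAMESPACE + "/" + METHOD;  // assumption
            private static final String URL = "http://host/service.asmx";        // assumption

            public static String call() throws Exception {
                SoapObject request = new SoapObject(NAMESPACE, METHOD);

                // Nested complex types are built as their own SoapObjects and added as properties.
                SoapObject user = new SoapObject(NAMESPACE, "User");
                user.addProperty("ID", 1);
                user.addProperty("Name", "string");
                user.addProperty("UserName", "string");
                user.addProperty("Password", "string");
                request.addProperty("User", user);

                SoapObject filter = new SoapObject(NAMESPACE, "Filter");
                SoapObject timeSpan = new SoapObject(NAMESPACE, "TimeSpan");
                timeSpan.addProperty("Since", "2010-01-01T00:00:00");
                timeSpan.addProperty("To", "2010-01-31T00:00:00");
                filter.addProperty("TimeSpan", timeSpan);
                filter.addProperty("Reg", "string");
                filter.addProperty("Nalog", "string");
                filter.addProperty("Status", "string");
                filter.addProperty("Value", "string");
                request.addProperty("Filter", filter);

                SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
                envelope.dotNet = true;                  // .asmx services usually need this
                envelope.setOutputSoapObject(request);

                HttpTransportSE transport = new HttpTransportSE(URL);
                transport.debug = true;                  // requestDump/responseDump help when the reply is blank
                transport.call(SOAP_ACTION, envelope);
                return String.valueOf(envelope.getResponse());
            }
        }

    With transport.debug enabled, comparing transport.requestDump against the raw request above is usually the quickest way to see which element or namespace is off.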


  • How to get BinarySecurityToken into the wcf soap request

    - by Mr Bell
    I need to sign my SOAP request to a 3rd party. They provided an example of what the call should look like, and I am trying, rather unsuccessfully, to make this call with WCF. I need to make a WCF SOAP call where the header contains BinarySecurityToken, Signature, and SecurityTokenReference elements. Here is the example they sent me (with some of the values omitted). I have a certificate for signing, but I can't for the life of me figure out how to make this work.

        <?xml version="1.0" encoding="UTF-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
            xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <soapenv:Header>
            <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
              <wsse:BinarySecurityToken
                  EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary"
                  ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"
                  wsu:Id="SecurityToken-..omitted.."
                  xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">..omitted..</wsse:BinarySecurityToken>
              <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                <ds:SignedInfo>
                  <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                  <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
                  <ds:Reference URI="#Body">
                    <ds:Transforms>
                      <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                    </ds:Transforms>
                    <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
                    <ds:DigestValue>..omitted...</ds:DigestValue>
                  </ds:Reference>
                </ds:SignedInfo>
                <ds:SignatureValue>..omitted..</ds:SignatureValue>
                <ds:KeyInfo>
                  <wsse:SecurityTokenReference>
                    <wsse:Reference URI="#SecurityToken-..omitted.."
                        ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"/>
                  </wsse:SecurityTokenReference>
                </ds:KeyInfo>
              </ds:Signature>
            </wsse:Security>
          </soapenv:Header>
          <soapenv:Body wsu:Id="Body">
            <in0 xmlns="http://test.3rdParty.com">123</in0>
          </soapenv:Body>
        </soapenv:Envelope>


  • Microsoft Standalone CA - Set expiration date of an individual request

    - by Hall72215
    I have set up a Microsoft Standalone CA on 2008 R2 as a root CA, and I'm trying to set up a subordinate Enterprise CA. I generated the certificate request and submitted it to the root CA. Then I ran the following command to set the expiration date to 20 years (the request ID is 5):

        certutil -setattributes 5 "ValidityPeriod:Years\nValidityPeriodUnits:20"

    Then I approved the request, but it failed. The Request Status Code is: The specified time is invalid. 0x8007076d (WIN32: 1901). The Request Disposition Message is: Denied by Policy Module 0x8007076d, The requested validity period is invalid. Confirm that the validity period or expiration date and time specified in the request does not extend beyond the validity period of the CA certificate, the certificate template, and the CA. The validity period of the CA can be verified by running the following commands: certutil -getreg ca\validityPeriod & certutil -getreg ca\ValidityPeriodUnits

    The validity period of the CA certificate is 40 years (it expires in 2052). The template condition doesn't apply since this is a standalone CA. The result of those commands is Years and 1, respectively. It appears that I will need to change the CA's validityPeriod and validityPeriodUnits, but I want to keep the default expiration for a request at 1 year. Is there a way to set a maximum and a default expiration, or am I going to have to change it, issue the certificate, and then change it back?


  • SharePoint MOSS - Serve HTTP content on an HTTPS page without Mixed Content Warning?

    - by kcb263
    Our "portal-like" SharePoint site is served using HTTPS/SSL. So a user goes to https://web.company.com and sees content and different Web Parts. So far, no problem. The desire now is to have new Web Parts added that either frame HTTP content (such as Weather Bug) or HTTP RSS feeds. The issue that arises is that by doing this, results in a "Mixed Content" warning in the browser. Has anybody successfully been able to implement such a scenario, or one similar to it? The options we have looked at, unsuccessfully, have been: using Apache Reverse Proxy Server mirror an external site Custom Web Parts


  • WAN interface responds as LAN when request comes from LAN, is that correct?

    - by Eugenio Miró
    Hi everyone! I have a problem with my router/modem. I've published an HTTP service from one of my internal computers, and when I access the service from the internal LAN using the external IP address, the modem responds itself instead of redirecting the call to the forwarded port. I can access the service from outside without trouble, but from the internal network the modem answers my calls. I'm using a ZTE ZXDSL 831 Series modem with ZXDSL 831IIV7.5.1e_E09_BR1 firmware. Thanks in advance!


  • wcf http 504: Working on a mystery

    - by James Fleming
    Ok, so you're here because you've been trying to solve the mystery of why you're getting a 504 error. If you've made it to this lonely corner of the Internet, then the advice you're getting from other bloggers isn't the answer you are after. It wasn't the answer I needed either, so once I did solve my problem, I thought I'd share the answer with you.

    For starters, if by some miracle you landed here first, you may not already know that the 504 error is NOT coming from IIS or Cassini; that response code is coming from Fiddler:

        HTTP/1.1 504 Fiddler - Receive Failure
        Content-Type: text/html
        Connection: close
        Timestamp: 09:43:05.193
        ReadResponse() failed: The server did not return a response for this request.

    The takeaway here is that Fiddler won't help you with the diagnosis, and any further digging in that direction is a red herring.

    Assuming you've dug around a bit, you may have arrived at posts which suggest you may be getting the error because you're trying to hump too much data over the wire and have an urgent need to employ an anti-pattern due to a special case: http://delphimike.blogspot.com/2010/01/error-504-in-wcfnet-35.html

    Or perhaps you're experiencing wonky behavior using the WCF-CustomIsolated Adapter on a Windows Server 2008 64bit environment, in which case the rather fly MVP Dwight Goins' advice is what you need: http://dgoins.wordpress.com/2009/12/18/64bit-wcf-custom-isolated-%E2%80%93-rest-%E2%80%93-%E2%80%9C504%E2%80%9D-response/

    For me, none of that was helpful. I could perform a GET on a single record (http://localhost:8783/Criterion/Skip(0)/Take(1)) but I couldn't get more than one record in my collection, as in http://localhost:8783/Criterion/Skip(0)/Take(2). I didn't have a big payload or a large number of objects; a single record is just a handful of short GUID, text, flag and timestamp fields. (The sample record shown in the original post lost its XML markup in extraction and is omitted here.)

    So while I was able to retrieve one record without a hitch, I wasn't able to return multiple records. I confirmed I could get each record individually (Skip(1)/Take(1)), so it stood to reason the problem wasn't with the data at all, and I suspected a serialization error.

    The first step to resolving this was to enable WCF Tracing. Instructions on how to set it up are here: http://msdn.microsoft.com/en-us/library/ms733025.aspx. The tracing log led me to the solution:

        The use of type 'Application.Survey.Model.Criterion' as a get-only collection is not supported with NetDataContractSerializer. Consider marking the type with the CollectionDataContractAttribute attribute or the SerializableAttribute attribute or adding a setter to the property.

    So I was wrong (but close!). The problem was a deserializing issue in trying to recreate my read-only collection: http://msdn.microsoft.com/en-us/library/aa347850.aspx#Y1455

    Looking at my underlying model, I saw I did have a read-only collection. Adding a setter was all it took.

        public virtual ICollection<string> GoverningResponses
        {
            get
            {
                if (!string.IsNullOrEmpty(GoverningResponse))
                {
                    return GoverningResponse.Split(';');
                }
                else
                    return null;
            }
            // The fix: give NetDataContractSerializer a setter to call. The original
            // snippet omitted the setter body; this sketch simply writes the values back.
            set
            {
                GoverningResponse = (value == null)
                    ? null
                    : string.Join(";", new List<string>(value).ToArray());
            }
        }

    Hope this helps. If it does, post a comment.


  • How can I enable http auth in lighttpd for all directories except one?

    - by Nuri Hodges
    I am trying to authenticate access to everything in the webroot (/) except anything that resides in a particular directory (/directory/), and I've tried both of these options to no avail:

        $HTTP["url"] =~ "^(?!(/directory))" {
            auth.require = ( "" => ( "method" => "basic",
                                     "realm" => "auth to this area",
                                     "require" => "user=username" ) )
        }

        $HTTP["url"] != "/directory" {
            auth.require = ( "" => ( "method" => "basic",
                                     "realm" => "auth to this area",
                                     "require" => "user=username" ) )
        }


  • Fixing Chrome’s AJAX Request Caching Bug

    - by Steve Wilkes
    I recently had to make a set of web pages restore their state when the user arrived on them after clicking the browser's back button. The pages in question had various content loaded in response to user actions, which meant I had to manually get them back into a valid state after the page loaded.

    I got hold of the page's data in a JavaScript ViewModel using a jQuery ajax call, then iterated over the properties, filling in the fields as I went. I built in the ability to describe dependencies between inputs to make sure fields were filled in in the correct order and at the correct time, and that all worked nicely.

    To make sure the browser didn't cache the AJAX call results I used jQuery's cache: false option, and ASP.NET MVC's OutputCache attribute for good measure. That all worked perfectly… except in Chrome. Chrome insisted on retrieving the data from its cache. cache: false adds a random query string parameter to make the browser think it's a unique request – it made no difference. I made the AJAX call a POST – it made no difference.

    Eventually what I had to do was add a random token to the URL (not the query string) and use MVC routing to deliver the request to the correct action. The project had a single Controller for all AJAX requests, so this route:

        routes.MapRoute(
            name: "NonCachedAjaxActions",
            url: "AjaxCalls/{cacheDisablingToken}/{action}",
            defaults: new { controller = "AjaxCalls" },
            constraints: new { cacheDisablingToken = "[0-9]+" });

    …and this amendment to the ajax call:

        function loadPageData(url) {
            // Insert a timestamp before the URL's action segment:
            var indexOfFinalUrlSeparator = url.lastIndexOf("/");
            var uniqueUrl = url.substring(0, indexOfFinalUrlSeparator) +
                new Date().getTime() + "/" +
                url.substring(indexOfFinalUrlSeparator);
            // Call the now-unique action URL:
            $.ajax(uniqueUrl, { cache: false, success: completePageDataLoad });
        }

    …did the trick.


  • Oracle Communications Service Broker is now available at http://edelivery.oracle.com/EPD/Download/ge

    - by francois.deza
    Oracle Communications Service Broker is now available at http://edelivery.oracle.com/EPD/Download/get_form?egroup_aru_number=12359008 and documented at http://edelivery.oracle.com/EPD/Download/get_form?egroup_aru_number=12359013
    See also white paper "Transforming Service Delivery with Oracle Service Brokering" at http://www.oracle.com/us/products/servers-storage/servers/netra-carrier-grade/060194.pdf


  • The Silverlightning Talks

    - by Brian Genisio's House Of Bilz
    Tomorrow, I will be speaking in Grand Rapids at the Silverlight Firestarter. It is a one day event intended to get people bootstrapped with Silverlight. I will be giving the "Advanced Topics" presentation. I have decided to run it as a series of "Lightning Talks". The idea is to give a lot of breadth so you know that the topic exists and move quickly between them. To go along with the talks, here are a bunch of links that you might find useful:

    MVVM
    http://msdn.microsoft.com/en-us/magazine/dd458800.aspx
    http://msdn.microsoft.com/en-us/magazine/dd419663.aspx
    http://channel9.msdn.com/shows/Continuum/MVVM/
    http://karlshifflett.wordpress.com/2008/11/08/learning-wpf-m-v-vm/
    http://johnpapa.net/silverlight/5-minute-overview-of-mvvm-in-silverlight/

    Good MVVM Frameworks
    http://www.galasoft.ch/mvvm/getstarted/
    http://caliburn.codeplex.com/Wikipage

    Prism
    http://compositewpf.codeplex.com/
    http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2009/10/27/prism-and-silverlight-screencasts-on-channel-9.aspx
    http://www.grumpydev.com/2009/07/04/why-shouldn%E2%80%99t-i-use-prism/

    Unit Testing
    Silverlight Unit Testing Framework:
    http://code.msdn.microsoft.com/silverlightut
    http://silverlight.codeplex.com/
    http://www.jeff.wilcox.name/2008/03/silverlight2-unit-testing/
    NUnit Testing with Silverlight:
    http://weblogs.asp.net/nunitaddin/archive/2008/05/01/silverlight-nunit-projects.aspx
    Useful Testing Tools:
    http://testdriven.net/
    http://nunit.org/
    http://code.google.com/p/moq/
    http://www.ayende.com/projects/rhino-mocks.aspx

    Navigation Framework
    http://www.silverlightshow.net/items/The-Silverlight-3-Navigation-Framework.aspx
    http://www.silverlight.net/learn/videos/silverlight-videos/navigation-framework/

    Farseer Physics Engine
    http://farseerphysics.codeplex.com/Wikipage
    http://physicshelper.codeplex.com/Wikipage
    http://www.andybeaulieu.com/Home/tabid/67/Default.aspx

    Windows Phone 7
    http://www.silverlight.net/getstarted/devices/windows-phone/
    http://msdn.microsoft.com/en-us/library/ff402535%28VS.92%29.aspx


  • REST API rule about tunneling

    - by miku
    I just read this in the REST API Rulebook:

    GET and POST must not be used to tunnel other request methods. Tunneling refers to any abuse of HTTP that masks or misrepresents a message's intent and undermines the protocol's transparency. A REST API must not compromise its design by misusing HTTP's request methods in an effort to accommodate clients with limited HTTP vocabulary. Always make proper use of the HTTP methods as specified by the rules in this section. [highlights by me]

    But then a lot of frameworks use tunneling to expose REST interfaces via HTML forms, since <form> knows only about GET and POST. My most recent example is a MethodRewriteMiddleware for Flask (submitted by the author of the framework): http://flask.pocoo.org/snippets/38/. Are there any ways to comply with the "Rule" without hacks or add-ons in web frameworks?
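    For reference, the kind of method-override shim the question refers to looks roughly like this when expressed as a servlet filter rather than the Flask middleware linked above (a sketch, not an endorsement; the _method field name is just the common convention):

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletRequestWrapper;

        // Rewrites POSTs carrying a "_method" form field (e.g. from an HTML <form>)
        // into PUT/DELETE before they reach the rest of the application.
        public class MethodRewriteFilter implements Filter {

            @Override
            public void init(FilterConfig filterConfig) { }

            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                String override = request.getParameter("_method");

                if ("POST".equalsIgnoreCase(request.getMethod()) && override != null) {
                    final String method = override.toUpperCase();
                    request = new HttpServletRequestWrapper(request) {
                        @Override
                        public String getMethod() {
                            return method;  // the rest of the stack sees PUT/DELETE
                        }
                    };
                }
                chain.doFilter(request, res);
            }

            @Override
            public void destroy() { }
        }

    Whether a filter like this counts as a pragmatic bridge for HTML forms or as exactly the tunneling the Rulebook forbids is, of course, the question being asked.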


  • Building Custom HTTP Help Pages with WCF

    - by Jesse Ezell
    I've been asked this a few times and needed to figure it out myself, so I put together a post on how to host custom HTTP help pages for your WCF services: http://blog.iserviceoriented.com/index.php/2010/05/04/building-custom-http-help-pages-with-wcf/ A little help from the WCF team to open up some of the internal classes would make it more straightforward... until then, it takes a bit of hacking and black magic.


  • Common request: export #Tabular model and data to #PowerPivot

    - by Marco Russo (SQLBI)
    I received this request in many courses, messages and also forum discussions: given an Analysis Services Tabular model, it would be nice to be able to extract a corresponding PowerPivot data model. In order of priority, here are the specific features people (including me) would like to see:
    - Create an empty PowerPivot workbook with the same data model as a Tabular model.
    - Change the connections of the tables in the PowerPivot workbook, extracting data from the Tabular data model; every table should have an EVALUATE 'TableName' query in DAX.
    - Apply a filter to the data extracted from every table. For example, you might want to extract all data for a single country, year or customer group. Using the same technique for applying filters that is used for role-based security would be nice.
    - Expose an API to automate the process of creating a PowerPivot workbook. Use case: prepare one workbook for every employee containing only his data, which he can use offline. This is a common request from salespeople who want a mini-BI tool to use in front of the customer/lead/supplier, regardless of whether a connection is available.

    This feature would increase the adoption of PowerPivot and Tabular (and, therefore, of Business Intelligence licenses instead of Standard), and would probably raise sales of Office 2013 / Office 365 driven by ISVs, who are the companies requesting this feature most. If Microsoft did this, it would be acceptable for it to work only on Office 2013. But if a third party does it, it will make sense (for their revenues) to cover both Excel 2010 and Excel 2013. Another important reason for this feature is that the "Offline cube" feature you have in Excel is not available when your PivotTable is connected to a Tabular model; it can only be used when you connect to Analysis Services Multidimensional. If you think this is an important feature, you can vote for this Connect item.


  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files: images, videos etc. The download seems to be capped at around 550 kB/s, even over 100 Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that, when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7 MByte/s! I also tried the AnalogX SimpleWWW server just to test the plain HTTP speed of it, and it gave me a healthy 3.3 MByte/s. I am at a total loss here.

    I searched the web and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all:

        SendBufferSize 1048576    # (tried multiples of that too, up to 100 MBytes)
        EnableSendfile Off        # (minor performance boost)
        EnableMMAP Off
        Win32DisableAcceptEx
        HostnameLookups Off       # (default)

    I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards:

        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow
        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow

    Additionally, I tried to install mod_bw, which sets the event timer precision to 1 ms and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me.

    So:
    - AnalogX HTTP: 3300 kB/s
    - Gene6 FTPD, plain: 3500 kB/s
    - Gene6 FTPD, Implicit and Explicit SSL, AES256 cipher: 1800-2000 kB/s
    - freeSSHD: 1100 kB/s
    - SMB shared folder: about 3000 kB/s
    - Apache HTTP, plain: 550 kB/s
    - Apache HTTPS: 2700 kB/s

    Clients used in the bandwidth testing:
    - Internet Explorer 8 (HTTP, HTTPS)
    - Firefox 8 (HTTP, HTTPS)
    - Chrome 13 (HTTP, HTTPS)
    - Opera 11.60 (HTTP, HTTPS)
    - wget under CygWin (HTTP, HTTPS)
    - FileZilla (FTP, FTPS, FTP+ES, SFTP)
    - Windows Explorer (SMB)

    Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200 MHz machine with 2 GB RAM. However, I would like Apache to serve at least 2 MByte/s instead of 550 kB/s, and that already works easily with HTTPS, so I fail to see why plain HTTP is so crippled. I am using a Kerio WinRoute Firewall, but with no throttling and no special filters peeking into HTTP traffic, just the plain firewall functionality for blocking/allowing connections. The Apache error.log (LogLevel info) shows no warnings and no errors, and there is nothing strange to be seen in access.log either. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks!

    Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same.

    Edit 2: KM01 has requested a sniff of the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The output is generated on downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an Open Source license. I have taken the liberty of editing the URL, removing folders and changing the actual server's domain name to www.mydomain.com.

    LiveHTTPHeaders, plain HTTP (http://www.mydomain.com/elephantsdream_source.264):

        GET /elephantsdream_source.264 HTTP/1.1
        Host: www.mydomain.com
        User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:55:16 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain

    LiveHTTPHeaders, HTTPS (https://www.mydomain.com/elephantsdream_source.264):

        GET /elephantsdream_source.264 HTTP/1.1
        Host: www.mydomain.com
        User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:56:57 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain


  • Modularity through HTTP

    - by Michael Williamson
    As programmers, we strive for modularity in the code we write. We hope that splitting the problem up makes it easier to solve, and allows us to reuse parts of our code in other applications. Object-orientation is the most obvious of many attempts to get us closer to this ideal, and yet one of the most successful approaches is almost accidental: the web.

    Programming languages provide us with functions and classes, and plenty of other ways to modularize our code. This allows us to take our large problem, split it into small parts, and solve those small parts without having to worry about the whole. It also makes it easier to reason about our code. So far, so good, but now that we've written our small, independent module, for example to send out e-mails to my customers, we'd like to reuse it in another application. By creating DLLs, JARs or our platform's package container of choice, we can do just that – provided our new application is on the same platform. Want to use a Java library from C#? Well, good luck – it might be possible, but it's not going to be smooth sailing.

    Even if a library exists, it doesn't mean that using it is going to be a pleasant experience. Say I want to use Java to write out an XML document to an output stream. You'd imagine this would be a simple one-liner. You'd be wrong:

        import org.w3c.dom.*;
        import java.io.*;
        import javax.xml.transform.*;
        import javax.xml.transform.dom.*;
        import javax.xml.transform.stream.*;

        private static final void writeDoc(Document doc, OutputStream out) throws IOException {
            try {
                Transformer t = TransformerFactory.newInstance().newTransformer();
                t.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, doc.getDoctype().getSystemId());
                t.transform(new DOMSource(doc), new StreamResult(out));
            } catch (TransformerException e) {
                throw new AssertionError(e); // Can't happen!
            }
        }

    Most of the time, there is a good chance somebody else has written the code before, but if nobody can understand the interface to that code, nobody's going to use it. The result is that most of the code we write is just a variation on a theme. Despite our best efforts, we've fallen a little short of our ideal, but the web brings us closer.

    If we want to send e-mails to our customers, we could write an e-mail-sending library. More likely, we'd use an existing one for our language. Even then, we probably wouldn't have niceties like A/B testing or DKIM signing. Alternatively, we could just fire some HTTP requests at MailChimp, and get a whole slew of features without getting anywhere near the code that implements them.

    The web is inherently language agnostic. So long as your language can send and receive text over HTTP, and probably parse some JSON, you're about as well equipped as anybody. Instead of building libraries for a specific language, we can build a service that almost every language can reuse. The text-based nature of HTTP also helps to limit the complexity of the API. As SOAP will attest, you can still make a horrible mess using HTTP, but at least it is an obvious horrible mess. Complex data structures are tedious to marshal to and from text, providing a strong incentive to keep things simple. By contrast, spotting the complexities in a class hierarchy is often not as easy.

    HTTP doesn't solve every problem. It probably isn't such a good idea to use it inside an inner loop that's executed thousands of times per second. What's more, the HTTP approach might introduce some new problems. We often need to add a thin shim to each application that we wish to communicate over HTTP. For instance, we might need to write a small plugin in PHP if we want to integrate WordPress into our system. Suddenly, instead of a system written in one language, we're maintaining a system with several distinct languages and platforms.

    Even then, we should strive to avoid re-implementing the same old thing. As programmers, we consistently underestimate both the cost of building a system and the ongoing maintenance. If we allow ourselves to integrate existing applications, even if they're in unfamiliar languages, we save ourselves those development and maintenance costs, as well as being able to pick the best solution for our problem. Thanks to the web, HTTP is often the easiest way to get there.
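    To make the language-agnostic point concrete, here is a minimal sketch of calling such a service from Java with nothing but the standard library; the endpoint and JSON payload are invented for illustration:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.util.Scanner;

        public class MailServiceClient {
            public static void main(String[] args) throws IOException {
                // Hypothetical endpoint standing in for something like MailChimp's API.
                URL endpoint = new URL("https://mail.example.com/api/v1/messages");
                String json = "{\"to\":\"customer@example.com\",\"subject\":\"Hello\",\"body\":\"...\"}";

                HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(json.getBytes(StandardCharsets.UTF_8));
                }

                // Any language that can do this much HTTP and text can reuse the same service.
                try (InputStream in = conn.getInputStream();
                     Scanner scanner = new Scanner(in, StandardCharsets.UTF_8.name()).useDelimiter("\\A")) {
                    System.out.println(conn.getResponseCode() + ": " + (scanner.hasNext() ? scanner.next() : ""));
                }
            }
        }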


  • Hiding an HTTP Auth-Realm by sending 404 to non-known IPs?

    - by zhenech
    I have an Apache (2.2) serving a web app on example.com. That web app has a debug page reachable via example.com/debug. /debug is currently protected with HTTP basic auth. As there is only a very small user base with access to the debug page, I would like to hide it based on IP address and return 404 to clients not accessing it from our VPN. Serving a 404 based on IP address alone is easy and is described in http://serverfault.com/a/13071, but as soon as I add authentication, the users see a 401 instead of a 404. Basically, what I need is:

        if ($REMOTE_ADDR ~ 10.11.12.*):
            do_basic_auth    (aka return 401)
        else:
            return 404
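    This isn't an Apache answer, but just to pin down the intended decision logic, here is the same behaviour sketched as a Java servlet filter (the VPN prefix is the placeholder range from the question; the actual credential check is left to the container):

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // 404 for anyone outside the VPN range, 401 challenge for VPN clients without credentials.
        public class DebugAreaFilter implements Filter {
            private static final String VPN_PREFIX = "10.11.12.";   // placeholder range from the question

            @Override public void init(FilterConfig cfg) { }
            @Override public void destroy() { }

            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;

                if (!request.getRemoteAddr().startsWith(VPN_PREFIX)) {
                    // Outsiders never learn the realm exists.
                    response.sendError(HttpServletResponse.SC_NOT_FOUND);
                    return;
                }
                if (request.getHeader("Authorization") == null) {
                    // VPN clients get the normal basic-auth challenge.
                    response.setHeader("WWW-Authenticate", "Basic realm=\"debug\"");
                    response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
                    return;
                }
                chain.doFilter(req, res);   // credential validation happens downstream
            }
        }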


  • Are there other application layer firewalls like Microsoft TMG (ISA) that do advanced HTTP rules?

    - by Bret Fisher
    Since the old days, ISA and now TMG have had several great features that I often want to deploy for my customers because of the enhanced functionality and security, but often the cost of an additional server (hardware, Windows Server and TMG licenses) is too much to justify when compared to a $300-500 appliance. Are there other gateway firewalls that can perform one or more of these application layer features:
    - pre-authenticate incoming HTTP traffic against AD/LDAP before sending packets to the internal server (forms auth or basic credentials popup)?
    - read the host headers of incoming HTTP traffic (even on HTTPS) to a single public IP and route packets to different internal servers based on that host header?


  • Handle HTTPS Request in Proxy Server by C#

    - by Masoud Zohrabi
    I'm trying to write a home proxy server in C# and I have almost succeeded, but I have a problem handling HTTPS requests (CONNECT). I don't really know how to handle this type of request. In my studies I realized that for these requests the proxy must connect the client to the target host directly. The steps for these requests, as I understand them:
    1. Receive the first request from the client (CONNECT https://www.example.ltd:443 HTTP/1.1) and send that to the target host
    2. Send HTTP/1.1 200 Connection Established\r\n\r\n to the client
    3. Listen on both sockets (client and target host) and forward whatever each one receives to the other
    4. Keep relaying until one of the sockets disconnects
    Is this correct? If it is, how?
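    A rough Java sketch of that flow, for whatever it's worth (one caveat: the CONNECT line is consumed by the proxy rather than forwarded to the target; after the 200 response everything is relayed verbatim). The port and buffer size are arbitrary:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.nio.charset.StandardCharsets;

        public class ConnectTunnelSketch {

            public static void main(String[] args) throws IOException {
                try (ServerSocket proxy = new ServerSocket(8888)) {
                    while (true) {
                        Socket client = proxy.accept();
                        new Thread(() -> handle(client)).start();
                    }
                }
            }

            // Steps 1-2: parse "CONNECT host:port HTTP/1.1", open the target socket, answer 200.
            private static void handle(Socket client) {
                Socket target = null;
                try {
                    InputStream cin = client.getInputStream();
                    OutputStream cout = client.getOutputStream();

                    String requestLine = readLine(cin);
                    while (!readLine(cin).isEmpty()) { /* discard the remaining request headers */ }

                    String[] parts = requestLine.split(" ");
                    if (parts.length < 2 || !"CONNECT".equalsIgnoreCase(parts[0])) {
                        cout.write("HTTP/1.1 400 Bad Request\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
                        return;
                    }
                    String[] hostPort = parts[1].split(":");
                    target = new Socket(hostPort[0], Integer.parseInt(hostPort[1]));

                    cout.write("HTTP/1.1 200 Connection Established\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
                    cout.flush();

                    // Steps 3-4: relay raw bytes both ways until one side closes.
                    InputStream tin = target.getInputStream();
                    OutputStream tout = target.getOutputStream();
                    new Thread(() -> pump(cin, tout)).start();  // client -> target
                    pump(tin, cout);                            // target -> client (blocks here)
                } catch (IOException | RuntimeException e) {
                    // malformed request or connection failure: just drop the client
                } finally {
                    closeQuietly(target);
                    closeQuietly(client);
                }
            }

            private static void pump(InputStream in, OutputStream out) {
                byte[] buffer = new byte[8192];
                try {
                    int n;
                    while ((n = in.read(buffer)) != -1) {
                        out.write(buffer, 0, n);
                        out.flush();
                    }
                } catch (IOException ignored) {
                    // the other side went away; closing both sockets in handle() ends the twin pump too
                }
            }

            // Reads one CRLF-terminated line from a raw stream (no extra buffering).
            private static String readLine(InputStream in) throws IOException {
                StringBuilder sb = new StringBuilder();
                int b;
                while ((b = in.read()) != -1 && b != '\n') {
                    if (b != '\r') {
                        sb.append((char) b);
                    }
                }
                return sb.toString();
            }

            private static void closeQuietly(Socket socket) {
                if (socket != null) {
                    try {
                        socket.close();
                    } catch (IOException ignored) {
                    }
                }
            }
        }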

