Search Results

Search found 123115 results on 4925 pages for 'http code 503'.


  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files: images, videos, etc. The download seems to be capped at around 550 kB/s, even over 100 Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7 MByte/s! I also tried the AnalogX SimpleWWW server just to test plain HTTP speed, and it gave me a healthy 3.3 MByte/s. I am at a total loss here.

    I searched the web and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all:

        SendBufferSize 1048576   # (tried multiples of that too, up to 100 MBytes)
        EnableSendfile Off       # (minor performance boost)
        EnableMMAP Off
        Win32DisableAcceptEx
        HostnameLookups Off      # (default)

    I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards:

        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow
        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow

    Additionally, I tried to install mod_bw, which sets the event timer precision to 1 ms and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me. So:

        AnalogX HTTP: 3300 kB/s
        Gene6 FTPD, plain: 3500 kB/s
        Gene6 FTPD, Implicit and Explicit SSL, AES256 cipher: 1800-2000 kB/s
        freeSSHD: 1100 kB/s
        SMB shared folder: about 3000 kB/s
        Apache HTTP, plain: 550 kB/s
        Apache HTTPS: 2700 kB/s

    Clients used in the bandwidth testing: Internet Explorer 8 (HTTP, HTTPS), Firefox 8 (HTTP, HTTPS), Chrome 13 (HTTP, HTTPS), Opera 11.60 (HTTP, HTTPS), wget under Cygwin (HTTP, HTTPS), FileZilla (FTP, FTPS, FTP+ES, SFTP), Windows Explorer (SMB).

    Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200 MHz machine with 2 GB RAM. However, I would like Apache to serve at least 2 MByte/s instead of 550 kB/s, and that already works easily with HTTPS, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but no throttling and no special filters peeking into HTTP traffic, just the plain firewall functionality for blocking/allowing connections. The Apache error.log (LogLevel info) shows no warnings and no errors. There is also nothing strange to be seen in access.log. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks!

    Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same.

    Edit 2: KM01 has requested a sniff of the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The output is generated by downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an open-source license. I have taken the liberty of editing the URL, removing folders and changing the actual server's domain name to www.mydomain.com. Here it is:

    LiveHTTPHeaders, plain HTTP: http://www.mydomain.com/elephantsdream_source.264

        GET /elephantsdream_source.264 HTTP/1.1
        Host: www.mydomain.com
        User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:55:16 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain

    LiveHTTPHeaders, HTTPS: https://www.mydomain.com/elephantsdream_source.264

        GET /elephantsdream_source.264 HTTP/1.1
        Host: www.mydomain.com
        User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:56:57 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain
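
    As an aside, one way to take browsers and download managers out of the equation is to time the raw transfer yourself. A minimal sketch in Java is shown below; the URL is only a placeholder, and the same code fetches an https:// URL unchanged (assuming a trusted certificate), which makes it easy to compare the two numbers the question is about.

        import java.io.InputStream;
        import java.net.URL;

        public class ThroughputCheck {
            public static void main(String[] args) throws Exception {
                // Placeholder URL: point this at a large static file on the server under test.
                String url = args.length > 0 ? args[0] : "http://www.mydomain.com/elephantsdream_source.264";

                byte[] chunk = new byte[64 * 1024];
                long bytes = 0;
                long start = System.nanoTime();
                try (InputStream in = new URL(url).openStream()) {
                    // Read the whole body and count bytes; nothing is written to disk.
                    for (int n; (n = in.read(chunk)) != -1; ) {
                        bytes += n;
                    }
                }
                double seconds = (System.nanoTime() - start) / 1e9;
                System.out.printf("%d bytes in %.1f s = %.0f kB/s%n", bytes, seconds, bytes / seconds / 1024);
            }
        }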

    Read the article

  • Modularity through HTTP

    - by Michael Williamson
    As programmers, we strive for modularity in the code we write. We hope that splitting the problem up makes it easier to solve, and allows us to reuse parts of our code in other applications. Object-orientation is the most obvious of many attempts to get us closer to this ideal, and yet one of the most successful approaches is almost accidental: the web.

    Programming languages provide us with functions and classes, and plenty of other ways to modularize our code. This allows us to take our large problem, split it into small parts, and solve those small parts without having to worry about the whole. It also makes it easier to reason about our code. So far, so good, but now that we've written our small, independent module, for example to send out e-mails to my customers, we'd like to reuse it in another application. By creating DLLs, JARs or our platform's package container of choice, we can do just that – provided our new application is on the same platform. Want to use a Java library from C#? Well, good luck – it might be possible, but it's not going to be smooth sailing.

    Even if a library exists, it doesn't mean that using it is going to be a pleasant experience. Say I want to use Java to write out an XML document to an output stream. You'd imagine this would be a simple one-liner. You'd be wrong:

        import org.w3c.dom.*;
        import java.io.*;
        import javax.xml.transform.*;
        import javax.xml.transform.dom.*;
        import javax.xml.transform.stream.*;

        private static final void writeDoc(Document doc, OutputStream out) throws IOException {
            try {
                Transformer t = TransformerFactory.newInstance().newTransformer();
                t.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, doc.getDoctype().getSystemId());
                t.transform(new DOMSource(doc), new StreamResult(out));
            } catch (TransformerException e) {
                throw new AssertionError(e); // Can't happen!
            }
        }

    Most of the time, there is a good chance somebody else has written the code before, but if nobody can understand the interface to that code, nobody's going to use it. The result is that most of the code we write is just a variation on a theme. Despite our best efforts, we've fallen a little short of our ideal, but the web brings us closer.

    If we want to send e-mails to our customers, we could write an e-mail-sending library. More likely, we'd use an existing one for our language. Even then, we probably wouldn't have niceties like A/B testing or DKIM signing. Alternatively, we could just fire some HTTP requests at MailChimp, and get a whole slew of features without getting anywhere near the code that implements them.

    The web is inherently language agnostic. So long as your language can send and receive text over HTTP, and probably parse some JSON, you're about as well equipped as anybody. Instead of building libraries for a specific language, we can build a service that almost every language can reuse. The text-based nature of HTTP also helps to limit the complexity of the API. As SOAP will attest, you can still make a horrible mess using HTTP, but at least it is an obvious horrible mess. Complex data structures are tedious to marshal to and from text, providing a strong incentive to keep things simple. By contrast, spotting the complexities in a class hierarchy is often not as easy.

    HTTP doesn't solve every problem. It probably isn't such a good idea to use it inside an inner loop that's executed thousands of times per second. What's more, the HTTP approach might introduce some new problems. We often need to add a thin shim to each application that we wish to communicate over HTTP. For instance, we might need to write a small plugin in PHP if we want to integrate WordPress into our system. Suddenly, instead of a system written in one language, we're maintaining a system with several distinct languages and platforms.

    Even then, we should strive to avoid re-implementing the same old thing. As programmers, we consistently underestimate both the cost of building a system and the ongoing maintenance. If we allow ourselves to integrate existing applications, even if they're in unfamiliar languages, we save ourselves those development and maintenance costs, as well as being able to pick the best solution for our problem. Thanks to the web, HTTP is often the easiest way to get there.
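
    To make the "fire some HTTP requests" point concrete, here is a minimal sketch of the kind of call the article has in mind, using nothing but the JDK. The endpoint and JSON fields are invented for illustration; the point is that no language-specific client library is required.

        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public class MailServiceClient {
            // POSTs a JSON payload to a (hypothetical) e-mail service over HTTP.
            public static int sendMail(String endpoint, String json) throws IOException {
                HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "application/json");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(json.getBytes(StandardCharsets.UTF_8));
                }
                int status = conn.getResponseCode(); // e.g. 200 on success
                conn.disconnect();
                return status;
            }

            public static void main(String[] args) throws IOException {
                String json = "{\"to\":\"customer@example.com\",\"subject\":\"Hello\",\"body\":\"...\"}";
                int status = sendMail("http://mail.example.com/api/send", json);
                System.out.println("Mail service answered with HTTP " + status);
            }
        }

    A Python, C# or Ruby client would look almost identical, which is exactly the modularity argument being made.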

    Read the article

  • Hiding an HTTP Auth-Realm by sending 404 to unknown IPs?

    - by zhenech
    I have an Apache (2.2) serving a web-app on example.com. That web-app has a debug page reachable via example.com/debug. /debug is currently protected with HTTP basic auth. As there is only a very small user base with access to the debug page, I would like to hide it based on IP address and return 404 to clients not accessing from our VPN. Serving a 404 based on IP address only is easy and is described in http://serverfault.com/a/13071. But as soon as I add authentication, the users see a 401 instead of a 404. Basically, what I need is:

        if ($REMOTE_ADDR ~ 10.11.12.*):
            do_basic_auth (aka return 401)
        else:
            return 404

    Read the article

  • What happens when an HTTP request is terminated prematurely?

    - by Gowtham
    Suppose I enter a URL in my browser and the browser submits the HTTP request. The remote HTTP server accepts the request and initiates a long task to serve it. If I terminate the request before it is complete (for example, by pressing Esc in Firefox), how is the request closed? Will the browser communicate this abort to the server (I think it doesn't)? Presuming not, what will the server do with the result upon completion of the long task? Does it send it back anyway? If it does, what happens? Does it reach my PC, or does it get lost on the way? This is just for my curiosity. Thanks for your time :)
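
    In general the browser simply closes the TCP connection, and the server-side code typically only notices when it next tries to read from or write to that connection. A minimal sketch of that behaviour using the JDK's built-in HTTP server is shown below; the port, path and payload size are arbitrary.

        import com.sun.net.httpserver.HttpExchange;
        import com.sun.net.httpserver.HttpServer;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;

        public class AbortDemo {
            public static void main(String[] args) throws IOException {
                HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
                server.createContext("/slow", AbortDemo::handle);
                server.start();
            }

            static void handle(HttpExchange exchange) throws IOException {
                byte[] chunk = new byte[64 * 1024];
                exchange.sendResponseHeaders(200, 0); // 0 = chunked, length unknown
                try (OutputStream body = exchange.getResponseBody()) {
                    // Simulate a long-running response. If the browser cancels the
                    // request, one of these writes eventually fails with an
                    // IOException (broken pipe / connection reset); that failure is
                    // the only signal the server gets.
                    for (int i = 0; i < 10_000; i++) {
                        body.write(chunk);
                    }
                } catch (IOException clientGone) {
                    System.out.println("Client aborted: " + clientGone.getMessage());
                }
            }
        }

    A server that computes everything first and writes the result at the end would finish the whole task and only then discover, on the final write, that nobody is listening; the result is discarded and never reaches the PC.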

    Read the article

  • SharePoint MOSS - Serve HTTP content on an HTTPS page without Mixed Content Warning?

    - by kcb263
    Our "portal-like" SharePoint site is served using HTTPS/SSL. So a user goes to https://web.company.com and sees content and different Web Parts. So far, no problem. The desire now is to have new Web Parts added that either frame HTTP content (such as Weather Bug) or HTTP RSS feeds. The issue that arises is that by doing this, results in a "Mixed Content" warning in the browser. Has anybody successfully been able to implement such a scenario, or one similar to it? The options we have looked at, unsuccessfully, have been: using Apache Reverse Proxy Server mirror an external site Custom Web Parts

    Read the article

  • Are there other application layer firewalls like Microsoft TMG (ISA) that do advanced HTTP rules?

    - by Bret Fisher
    Since the old days of ISA, and now TMG, there have been several great features that I often want to deploy to my customers because of the enhanced functionality and security, but often the cost of an additional server (hardware, Windows Server, and a TMG license) is too much to justify when compared to a $300-500 appliance. Are there other gateway firewalls that can perform one or more of these application layer features: pre-authenticate incoming HTTP traffic against AD/LDAP before sending packets to the internal server (forms auth or basic credentials popup)? Read the host headers of incoming HTTP traffic (even on HTTPS) to a single public IP and route packets to different internal servers based on that host header?

    Read the article

  • Simple HTTP server that will send the same file for all requests?

    - by Rory McCann
    I need to debug an XML-RPC application, which sends XML replies over HTTP. I have a sample XML reply (i.e. data from the server, sent to the client that isn't working), and I'd like to debug my application. Ideally I'd like a simple HTTP server that will serve one file in reply to all requests. Someone requests /? Send them this file. Someone makes a POST to /server/page.php with a certain cookie? Just send them this file. I don't care about multithreading or security. I will only need to use this for a few hours of debugging. I have root on the machine. In other words, I'm hoping there's something as easy to use as this:

        simple_http_server -p 12445 -f my_test_file

    I'm aware of Python's SimpleHTTPServer module, but I'm not sure how to make it work in this case.
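
    For reference, a server like this is only a few dozen lines with the JDK's built-in com.sun.net.httpserver (Java 6+). A minimal sketch follows; the default file name, port and Content-Type are placeholders to adjust.

        import com.sun.net.httpserver.HttpExchange;
        import com.sun.net.httpserver.HttpServer;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class SameFileServer {
            public static void main(String[] args) throws IOException {
                final byte[] reply = Files.readAllBytes(Paths.get(args.length > 0 ? args[0] : "my_test_file"));
                int port = args.length > 1 ? Integer.parseInt(args[1]) : 12445;

                HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
                // "/" matches every path, so GETs, POSTs, cookies, query strings:
                // everything gets the same canned reply.
                server.createContext("/", (HttpExchange exchange) -> {
                    exchange.getResponseHeaders().set("Content-Type", "text/xml"); // adjust as needed
                    exchange.sendResponseHeaders(200, reply.length);
                    try (OutputStream body = exchange.getResponseBody()) {
                        body.write(reply);
                    }
                });
                server.start();
                System.out.println("Serving the same file for every request on port " + port);
            }
        }

    Usage would be along the lines of: java SameFileServer my_reply.xml 12445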

    Read the article

  • How do I capture and play back HTTP requests against multiple web servers?

    - by KevM
    My overall goal is to not interrupt a production system while capturing HTTP Posts to a web application so that I can reverse engineer the telemetry coming from a closed application. I have control over the transmitter of the HTTP Posts but not the receiving web application. It seems like I need a request "forking" proxy. Sort of a reverse proxy that pushes the request to 2 endpoints, a master and slave, only relaying the response from the master endpoint back to the requester. I am not a server geek so something like this may exist but I don't know the term of art for what I am looking for. Another possibility could be a simple logging proxy. Capture a log of the web requests. Rewrite the log to target my "slave" web application. Playback the log with curl or something. Thank you for your assistance.
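
    One way to sketch the "forking" idea without touching the production receiver is a tiny relay that the transmitter is pointed at: it saves each POST body to disk for later replay, forwards it to the real (master) endpoint, returns the master's answer, and mirrors the same body to a test copy on a best-effort basis. The sketch below uses the JDK's built-in HTTP server; the master/mirror URLs are placeholders, only POST bodies are handled, and status codes and headers are not faithfully copied, so treat it as a starting point rather than a finished proxy.

        import com.sun.net.httpserver.HttpExchange;
        import com.sun.net.httpserver.HttpServer;
        import java.io.*;
        import java.net.HttpURLConnection;
        import java.net.InetSocketAddress;
        import java.net.URL;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class ForkingRelay {
            static final String MASTER = "http://master.example.com";  // real receiver (placeholder)
            static final String MIRROR = "http://mirror.example.com";  // your test copy (placeholder)

            public static void main(String[] args) throws IOException {
                HttpServer relay = HttpServer.create(new InetSocketAddress(8080), 0);
                relay.createContext("/", ForkingRelay::handle);
                relay.start();
            }

            static void handle(HttpExchange exchange) throws IOException {
                byte[] body = readAll(exchange.getRequestBody());
                String path = exchange.getRequestURI().toString();

                // Keep a copy for later replay (e.g. curl --data-binary @file <url>).
                Files.write(Paths.get("captured-" + System.currentTimeMillis() + ".bin"), body);

                // Master: its answer is what the original transmitter sees.
                byte[] answer = post(MASTER + path, body);

                // Mirror: best effort only; never let it break the production path.
                try { post(MIRROR + path, body); } catch (IOException ignored) { }

                exchange.sendResponseHeaders(200, answer.length); // simplification: always 200
                try (OutputStream out = exchange.getResponseBody()) { out.write(answer); }
            }

            static byte[] post(String url, byte[] body) throws IOException {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.getOutputStream().write(body);
                return readAll(conn.getInputStream()); // error handling omitted
            }

            static byte[] readAll(InputStream in) throws IOException {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] chunk = new byte[8192];
                for (int n; (n = in.read(chunk)) != -1; ) buf.write(chunk, 0, n);
                return buf.toByteArray();
            }
        }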

    Read the article

  • Serving WordPress menu links in only HTTPS or HTTP depending on how it's accessed

    - by Gelatin
    I have a WordPress site which uses WordPress HTTPS to enable SSL when users access it via that protocol. However, currently the menu links point back to the HTTP version. I want users to be linked to HTTPS pages while accessing the site over HTTPS, but not when accessing it over HTTP. Is this possible? Note: I have tried changing the menu options to use // and / for the links, but in both cases they are just rendered as HTTP links.

    Read the article

  • Calling a native callback from managed .NET code (when loading the managed code using COM)

    - by evilfred
    Hi, I am really confused by the multitude of misinformation about native / managed interop. I have a C++ exe which is NOT built using CLR stuff (it is not Managed C++ or C++/CLI and never will be). I would like to access some code I have in a C# assembly. I can access the C# assembly using COM. However, when my C# code detects an event I would like it to call back into my C++ code. The C++ function pointer to call back into will be provided at runtime. Note that the C++ function pointer is a function found in the exe's execution environment. I don't want the managed code to try and load up some DLL to call a function (there is no DLL). How do I pass this C++ function pointer to my C# code through .NET and have my C# code successfully call it? Thanks!

    Read the article

  • Debug .Net Framework's source code only shows disassembly in Visual Studio 2010

    - by jdecuyper
    Hi! I'm trying to debug the .NET Framework's source code using Visual Studio 2010 Professional. I followed the steps described in Raj Kaimal's post, but I must be doing something wrong, since the only code I get to see is the disassembly: the Go To Source Code and Load Symbols options are disabled. Nevertheless, symbols are downloaded from Microsoft's server, since I can see them inside the local cache directory. The code I'm debugging goes as follows:

        var wr = WebRequest.Create("http://www.google.com");
        Console.WriteLine("Web request created");
        var req = wr.GetRequestStream();
        Console.Read();

    When I hit F11 to step into the first line of code, a window pops up looking for the "WebRequest.cs" file inside "f:\dd\ndp\fx\src\Net\System\Net\WebRequest.cs", which does not exist on my machine. What am I missing? Thanks a lot for your help.

    Read the article

  • Formatting code in research reports

    - by RoseOfJericho
    I am currently writing a formal research report, and I'll be including JavaScript and PHP code. None of the sections of code will be more than 25 lines (so they're mere snippets). There will be approx. half a dozen snippets, each of which will have a couple of paragraphs explaining what is happening in the code and a discussion on its pros/cons. Is there an accepted way of displaying code in research reports? I'm thinking in terms of font, spacing, whether the code should be displayed inside the document or in an appendix, and related details. I have no contact with the body the report will be submitted to, and they have no published guidelines on how to format code.

    Read the article

  • Refactor/rewrite code or continue?

    - by Dan
    I just completed a complex piece of code. It works to spec, it meets performance requirements, etc., but I feel a bit anxious about it and am considering rewriting and/or refactoring it. Should I do this (spending time that could otherwise be spent on features that users will actually notice)? The reasons I feel anxious about the code are:

        - The class hierarchy is complex and not obvious
        - Some classes don't have a well-defined purpose (they do a number of unrelated things)
        - Some classes use others' internals (they're declared as friend classes) to bypass the layers of abstraction for performance, but I feel they break encapsulation by doing this
        - Some classes leak implementation details (e.g., I changed a map to a hash map earlier and found myself having to modify code in other source files to make the change work)
        - My memory management/pooling system is kinda clunky and less-than-transparent

    They look like excellent reasons to refactor and clean code, aiding future maintenance and extension, but could be quite time consuming. Also, I'll never be perfectly happy with any code I write anyway... So, what does stackoverflow think? Clean code or work on features?

    Read the article

  • Code Review "ToDo" Follow Up Items

    - by Blake Blackwell
    I am trying to institute code reviews at my company, and recently we had our first code review meeting. During the meeting, several positive suggestions were made and I added TODO comments in my code for following up. After reading suggestions on here on best practices for code reviews, I would like to follow up on all the items I corrected at the start of the next meeting. Is there a way to add a similar type of comment to the code such as FOLLOWUP so that I can quickly highlight through those code segments in the next meeting?

    Read the article

  • Unit testing a SQL code generator

    - by Tom H.
    The team I'm on is currently writing code in TSQL to generate TSQL code that will be saved as scripts and later run. We're having a little difficulty in separating our unit tests between testing the code generator parts and testing the actual code that they generate. I've read through another similar question, but I was hoping to get some specific examples of what kind of unit test cases we might have. As an example, let's say that I have a bit of code that simply generates a DROP statement for a view, given the view schema and name. Do I just test that the generated code matches some expected outcome using string comparisons and then in a later integration or system test make sure that the drop actually drops the view if it exists, does nothing if the view doesn't exist, or raises an error if the view is one that we are marking as not allowing a drop? Thanks for any advice!
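
    The team's generator is written in T-SQL, but the split they describe can be illustrated in any language: unit tests compare the generated text to an expected string, while a separate integration test runs the script against a scratch database and checks the effect. A small sketch in Java with JUnit follows purely as illustration; the generator method and expected SQL are hypothetical stand-ins for the real code.

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class DropViewGeneratorTest {
            // Hypothetical generator: given a schema and view name, emit a guarded DROP statement.
            static String generateDropView(String schema, String view) {
                return "IF OBJECT_ID(N'[" + schema + "].[" + view + "]', N'V') IS NOT NULL\n"
                     + "    DROP VIEW [" + schema + "].[" + view + "];";
            }

            @Test
            public void generatesGuardedDropForView() {
                String expected =
                      "IF OBJECT_ID(N'[dbo].[vw_Orders]', N'V') IS NOT NULL\n"
                    + "    DROP VIEW [dbo].[vw_Orders];";
                // Unit level: only the generated text is checked. Whether the script
                // really drops the view, does nothing when it is absent, or errors on
                // protected views is left to an integration test against a scratch database.
                assertEquals(expected, generateDropView("dbo", "vw_Orders"));
            }
        }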

    Read the article

  • PPV Code Review - Is it a good idea?

    - by user93422
    I believe in (solo) code reviews a lot. But I am in a one-developer shop now. I would like to have my code reviewed, and I'm willing to pay money for it. Question: Is there a place where I can have my code reviewed for money? Note: I understand that most of us are willing to review someone else's code for free, but there is a limit to how much code one (e.g. I) is willing to review for free before getting bored. Question: Is it a good idea to pay a 3rd party to do my code reviews?

    Read the article

  • Embedding EasyVideoPlayer Code into a WordPress Theme - Video not showing

    - by bbacarat
    I'm attempting to place some embed code into a Premium WordPress Theme. NOTE: I'm not great when it comes to PHP. The embed code is produced by a video player called EasyVideoPlayer. (Basically it allows me to use Amazon S3 and gives me feedback on when people stop watching the video.) This is the embed code I have:

        _evpInit('ZXh0cmEtbW9uZXktZnJvbS1ob21lLTEubW92');

    I've opened the index.php WordPress file and placed this video embed code inside the markup that represents the area of the website I want it to show up in. However, the video is not showing. If we place both the theme and the video player aside, would you expect the PHP code to accept what I've done, or is this not the way to go about adding this embed code? NOTE: I've contacted both the WordPress premium theme support at Woothemes.com and the video player's support at EasyVideoPlayer.com. However, both tend to stop at the point that another paid product is involved! Grrreat. The website is www.extramoneyfromhome.co.uk

    Read the article

  • Putting a dollar value on code quality

    - by Chris Nelson
    As noted in another thread, "In most businesses, code quality is defined in dollars." So my company has an opportunity to acquire a large-ish C code base. Obviously, if the code quality is good, the code base is worth more than if it's poor. That is, if we can readily read, understand, and update the code, it's worth more to us than if it's a spaghetti-coded mess. Without being able to see the code ahead of time, we'd like to set some objective measure as an acceptance criteria like "If the XXX measure is below the price will be discounted YY%." What criteria can we or should we measure and what tool can we use to measure it?

    Read the article

  • Searching for empty methods

    - by Brian McCord
    I am currently working on a security audit/code review of our system. This requires me to check all pages in the system and make sure that the code-behind contains two methods that are used to check security. Sometimes the code in these methods gets commented out to make testing easier. So, my question is: does anyone know an easy way to search code, make sure the methods are present, and determine which ones have no code or have all the code commented out? It would make my job much easier if I can get a list instead of having to look at every file... I'm sure I could write this myself, but I thought someone may know of something that already exists. Thanks!
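
    A rough sketch of the kind of scan that could produce such a list is shown below. The two security-check method names are placeholders for the real ones, and the brace/comment matching is a deliberate approximation (it will not cope with nested braces inside the methods), not a real parser.

        import java.io.IOException;
        import java.nio.file.*;
        import java.util.regex.*;
        import java.util.stream.Stream;

        public class EmptyMethodAudit {
            // Placeholder names: substitute the real security-check methods.
            static final String[] REQUIRED = { "CheckPageSecurity", "CheckUserRights" };

            public static void main(String[] args) throws IOException {
                Path root = Paths.get(args.length > 0 ? args[0] : ".");
                try (Stream<Path> files = Files.walk(root)) {
                    files.filter(p -> p.toString().endsWith(".cs")).forEach(EmptyMethodAudit::audit);
                }
            }

            static void audit(Path file) {
                try {
                    String source = new String(Files.readAllBytes(file));
                    for (String method : REQUIRED) {
                        // Find "CheckPageSecurity(...) { ... }" and capture the body.
                        Matcher m = Pattern.compile(method + "\\s*\\([^)]*\\)\\s*\\{([^}]*)\\}").matcher(source);
                        if (!m.find()) {
                            System.out.println(file + ": MISSING " + method);
                        } else if (isEffectivelyEmpty(m.group(1))) {
                            System.out.println(file + ": EMPTY/COMMENTED " + method);
                        }
                    }
                } catch (IOException e) {
                    System.err.println("Could not read " + file);
                }
            }

            // A body counts as empty if nothing is left after stripping // and /* */ comments.
            static boolean isEffectivelyEmpty(String body) {
                String stripped = body.replaceAll("(?s)/\\*.*?\\*/", "").replaceAll("//[^\n]*", "");
                return stripped.trim().isEmpty();
            }
        }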

    Read the article

  • JAX-RS - How to return JSON and HTTP status code together?

    - by masato-san
    I'm writing a REST web app (NetBeans 6.9, JAX-RS, TopLink Essentials) and trying to return JSON and an HTTP status code. I have code ready and working that returns JSON when the HTTP GET method is called from the client. Code snippet:

        @Path("get/id")
        @GET
        @Produces("application/json")
        public M_?? getMachineToUpdate(@PathParam("id") String id) {
            //some code to return JSON
            .
            .
            return myJson;
        }

    But I also want to return an HTTP status code (500, 200, 204, etc.) along with the JSON. I tried using the HttpServletResponse object:

        response.sendError(500, "error message");

    But this made the browser think it's a real 500, so the output web page was a regular HTTP 500 error page. What I want is just to return the status code so that my JavaScript on the client side can handle some logic depending on which HTTP status code is returned (maybe just to display the error code and message on the HTML page). Is it possible to do so? Or should HTTP status codes not be used for such things?
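
    For reference, the usual JAX-RS way to do this is to return a javax.ws.rs.core.Response, which carries the status code and the JSON entity together. A minimal sketch is below; the resource paths, lookup method and error body are invented for illustration.

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.Response;

        @Path("get")
        public class MachineResource {

            @GET
            @Path("{id}")
            @Produces("application/json")
            public Response getMachineToUpdate(@PathParam("id") String id) {
                Object myJson = loadMachine(id);   // hypothetical lookup that builds the JSON entity
                if (myJson == null) {
                    // Any status can carry a JSON entity describing the problem.
                    return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
                                   .entity("{\"error\":\"could not load machine " + id + "\"}")
                                   .build();
                }
                return Response.ok(myJson).build();   // 200 OK with the JSON body
            }

            private Object loadMachine(String id) {
                return null; // placeholder for the real data access
            }
        }

    The same pattern covers 204 via Response.noContent().build(), and if you prefer to keep the entity type in the method signature, a WebApplicationException built around such a Response achieves the same effect.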

    Read the article

  • RESTful issue with data access when using HTTP DELETE method ...

    - by Wilhelm Murdoch
    I'm having an issue accessing raw request information from PHP when accessing a script using the HTTP DELETE directive. I'm using a JS front end which is accessing a script using Ajax. This script is actually part of a RESTful API which I am developing. The endpoint in this example is: http://api.site.com/session

    This endpoint is used to generate an authentication token which can be used for subsequent API requests. Using the GET method on this URL along with a modified version of HTTP Basic Authentication will provide an access token for the client. This token must then be included in all other interactions with the service until it expires. Once a token is generated, it is passed back to the client in a format specified by an 'Accept' header which the client sends the service; in this case 'application/json'. Upon success it responds with an HTTP 200 OK status code. Upon failure, it throws an exception using the HTTP 401 Authorization Required code.

    Now, when you want to delete a session, or 'log out', you hit the same URL, but with the HTTP DELETE directive. To verify access to this endpoint, the client must prove they were previously authenticated by providing the token they want to terminate. If they are 'logged in', the token and session are terminated and the service should respond with the HTTP 204 No Content status code; otherwise, they are greeted with the 401 exception again.

    Now, the problem I'm having is with removing sessions. With the DELETE directive, using Ajax, I can't seem to access any parameters I've set once the request hits the service. In this case, I'm looking for the parameter entitled 'token'. I look at the raw request headers using Firebug and I notice the 'Content-Length' header changes with the size of the token being sent. This is telling me that this data is indeed being sent to the server. The question is, using PHP, how the hell do I access the parameter information? It's not a POST or GET request, so I can't access it as you normally would in PHP. The parameters are within the content portion of the request. I've tried looking in $_SERVER, but that shows me a limited number of headers. I tried 'apache_request_headers()', which gives me more detailed information, but still only headers. I even tried 'file_get_contents('php://stdin');' and I get nothing. How can I access the content portion of a raw HTTP request? Sorry for the lengthy post, but I figured too much information is better than too little. :)

    Read the article
