Search Results

Search found 56465 results on 2259 pages for 'http status 401'.

Page 7/2259 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Proper status codes for JSON responses to Ajax calls?

    - by anonymous coward
    My project is returning JSON to Ajax calls from the browser. I'm wondering what the proper status code is for sending back with responses to invalid (but successfully handled) data submissions.

    For example, jQuery has the following two callbacks when making Ajax requests:

    - success: fired when a 200/2xx status code is delivered along with the response.
    - error: fired when a 4xx, 5xx, etc. status code comes back with the response.

    If a user attempts to create a new "Person" object, I send back a JSON representation of the newly created object upon success, giving JavaScript access to the necessary unique IDs for the new object, etc. This, of course, is sent with a 200 status code.

    If a user submits malformed or invalid data (say, an invalid or incomplete "name" field), I would like to send back the validation error messages via JSON. (I don't see why this would be a bad thing.)

    My question is: in doing so, should I send a 200 status code, because I successfully handled their invalid data? I'd then use the jQuery success callback and simply check for errors. Or should I use a 4xx status code, perhaps 'Bad Request', because the data they sent me is invalid, and thus use the error callback to do the necessary client-side notifications?
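
    For a concrete picture of the 4xx option: the server can return the validation errors as JSON with an error status, so jQuery's error callback fires while the response body still carries structured details. A minimal sketch follows; it assumes Flask and a 422 Unprocessable Entity response purely for illustration (neither appears in the question, and 400 Bad Request behaves the same way for jQuery).

        # Hypothetical Flask handler sketch: invalid input gets a 4xx status
        # plus a JSON body describing the validation errors.
        from flask import Flask, jsonify, request

        app = Flask(__name__)

        @app.route("/people", methods=["POST"])
        def create_person():
            data = request.get_json(silent=True) or {}
            errors = {}
            if not data.get("name"):
                errors["name"] = "Name is required."
            if errors:
                # 422 (or 400) routes the response into jQuery's error callback,
                # but the JSON payload is still available for client-side messages.
                return jsonify({"errors": errors}), 422
            # Pretend the person was persisted and received an ID.
            return jsonify({"id": 123, "name": data["name"]}), 201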

    Read the article

  • Overwrite HTTP method with JAX-RS

    - by deamon
    Today's browsers (or HTML before version 5) only support HTTP GET and POST, but to communicate RESTfully one also needs PUT and DELETE. If the workaround is not to be Ajax, something like a hidden form field is required to override the actual HTTP method. Rails uses the following trick: <input name="_method" type="hidden" value="put" /> Is it possible to do something similar with JAX-RS?
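
    For illustration only - this is not JAX-RS (there, the analogous place for such a rewrite would be a server-side request filter) - here is a minimal WSGI-style sketch of the override pattern: a plain POST is rewritten to PUT or DELETE based on an X-HTTP-Method-Override header. Supporting the Rails-style hidden _method field would additionally require parsing the form body.

        # Hypothetical WSGI middleware sketch: honor an X-HTTP-Method-Override
        # header so a browser POST can stand in for PUT or DELETE.
        ALLOWED_OVERRIDES = {"PUT", "DELETE", "PATCH"}

        class MethodOverrideMiddleware:
            def __init__(self, app):
                self.app = app

            def __call__(self, environ, start_response):
                override = environ.get("HTTP_X_HTTP_METHOD_OVERRIDE", "").upper()
                if environ.get("REQUEST_METHOD") == "POST" and override in ALLOWED_OVERRIDES:
                    environ["REQUEST_METHOD"] = override
                return self.app(environ, start_response)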

    Read the article

  • Adobe AIR application as a http server?

    - by iamgopal
    Is there any way to use an Adobe AIR application as a local HTTP server? That is, with the AIR application listening on http://localhost:8020 so that I'm able to use a browser to access the application? If so, is it true or pseudo multithreading? Is there any library or code that I can look at?

    Read the article

  • how to create http headers from scratch

    - by Sean Ochoa
    So, I made a simple socket server using Python, and now I'm trying to structure a proper HTTP response. However, I can't seem to find any tutorial or spec that discusses how to format HTTP responses. Could someone point me to the right place?
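
    As a pointer: the wire format is defined in RFC 2616 (now RFC 7230) - a status line, header lines separated by CRLF, a blank line, then the body. A minimal hand-rolled sketch over a raw socket (the port, headers, and body below are arbitrary placeholders):

        # Minimal sketch: hand-writing an HTTP response over a raw socket.
        import socket

        body = b"<html><body>Hello</body></html>"
        response = (
            b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: text/html; charset=utf-8\r\n"
            b"Content-Length: " + str(len(body)).encode() + b"\r\n"
            b"Connection: close\r\n"
            b"\r\n"
        ) + body

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", 8080))
        server.listen(1)
        conn, _addr = server.accept()
        conn.recv(4096)          # read (and ignore) the request for this sketch
        conn.sendall(response)
        conn.close()
        server.close()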

    Read the article

  • Apache logs: "::1 ... "OPTIONS * HTTP/1.0" 200 -

    - by Meltemi
    Just looking at the logs of a not-so-busy site on one of our Apache servers, I notice tons of these entries:

        ::1 - - [15/Apr/2011:12:11:40 -0700] "OPTIONS * HTTP/1.0" 200 -
        ::1 - - [15/Apr/2011:12:11:41 -0700] "OPTIONS * HTTP/1.0" 200 -
        ::1 - - [15/Apr/2011:12:11:44 -0700] "OPTIONS * HTTP/1.0" 200 -

    They seem to appear multiple times just below the GET requests where Apache has served a page and its related images. What do they mean? What IP is "::1"? If they're benign, can I suppress them?

    Read the article

  • SharePoint Server 2010 Bootcamp URLs and Helpful Info

    - by Da_Genester
    Below are the URLs that I found helpful during the time I was teaching the SharePoint 2010 BootCamp. NOT DONE YET!  :)

    Helpful third-party tools and sites:

    - Idera.com
    - Quest.com
    - Free add-ins for SharePoint, et al... - codeplex.com
    - Microsoft Virtual Labs - http://tinyurl.com/VirtualLabs

    Installing SharePoint 2010 on a Windows Server 2008 Web Edition box is a NO NO!  SharePoint 2010 requires the Application Server Role, which is not available on Web Edition.

    - http://tinyurl.com/SP2010InstallInfo
    - http://tinyurl.com/SP2010PlanWk
    - http://tinyurl.com/NamingLimits
    - http://tinyurl.com/KerberosSP
    - http://tinyurl.com/SP2010Upgrade
    - http://tinyurl.com/SP2010ProdHub
    - http://tinyurl.com/SP2010ContTypeSynd
    - http://tinyurl.com/SP2010UnderstandingMgdMeta
    - http://www.robotstxt.org/
    - http://tinyurl.com/SP2010ContentOrganizer
    - http://tinyurl.com/SP2010GeoDisp
    - http://tinyurl.com/SPWarmupJob
    - http://tinyurl.com/SP2010RecMgt
    - http://tinyurl.com/SP2010WCMTag
    - http://tinyurl.com/SP2010WCMDetailed
    - http://tinyurl.com/SP2010WCMImproved
    - http://tinyurl.com/SP2010ContentOrganizer
    - http://tinyurl.com/SP2010ContentCaching
    - http://tinyurl.com/SP2010PerfPoint
    - http://tinyurl.com/SP2010SSRS2008R2
    - http://tinyurl.com/SP2010Limits
    - http://tinyurl.com/SQL08R2LogShip
    - http://tinyurl.com/SQL08R2DBMirror
    - http://tinyurl.com/SP2010DBSnapshot
    - http://tinyurl.com/SP2010BURestore
    - http://tinyurl.com/SP2010Backup
    - http://tinyurl.com/W2K8R2NLBOverview
    - http://tinyurl.com/SP2010ExcelSvcs
    - http://tinyurl.com/SP2010SiteTemplates
    - http://tinyurl.com/WSSFab40
    - http://tinyurl.com/SP2010MySiteManage
    - http://tinyurl.com/SP2010UpgAxceler
    - http://tinyurl.com/SP2010UpgDocAve

    Read the article

  • Reduce HTTP Requests method for js and css

    - by Giberno
    Can these approaches reduce HTTP requests?

    Multiple JavaScript files combined with the & symbol:

        <script type="text/javascript" src="http://yui.yahooapis.com/combo?2.5.2/build/yahoo-dom-event/yahoo-dom-event.js&http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js">
        </script>

    Multiple CSS files with @import:

        <style type="text/css">
        @import url(css/style.css);
        @import url(css/custom.css);
        </style>

    Read the article

  • Testing HTTP status codes

    - by amusero
    I'm running an Apache Tomcat server. While doing some security testing, I noticed that when I try to access a non-existent resource, my server returns the default error page with a 200 HTTP status code instead of returning a 404 status code and redirecting me to the default error page. I suspect this is not the only failure of this kind. Can anyone suggest a process for checking the most common HTTP status codes?
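
    One quick way to check is to request a handful of paths and print the status code the server actually returns; a 200 where a 404 is expected then stands out immediately. A small standard-library sketch (the base URL and paths are placeholders):

        # Probe a few paths and report the returned HTTP status codes.
        from urllib.request import urlopen
        from urllib.error import HTTPError

        BASE = "http://localhost:8080"
        PATHS = ["/", "/index.jsp", "/definitely-not-here", "/WEB-INF/web.xml"]

        for path in PATHS:
            try:
                with urlopen(BASE + path, timeout=5) as resp:
                    print(f"{path:30s} -> {resp.status}")
            except HTTPError as err:
                # urllib raises for 4xx/5xx; the status code is still available here.
                print(f"{path:30s} -> {err.code}")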

    Read the article

  • Web interface with FastCGI or with direct HTTP?

    - by Basile Starynkevitch
    Let's assume I want (for fun, at first) to play with some new DSL (domain-specific language) idea, and I really want its user[s] (probably only me at first) to interact with it through a web interface. I'll probably implement it in C++ (probably using LLVM). Should I use an HTTP server library (like libonion or microhttpd) to speak HTTP directly, or should I use FastCGI? In particular, I notice that several recent web frameworks (Opa, Ocsigen, ...) do not have any FastCGI interface, only an HTTP one, so my feeling is that FastCGI is really out of fashion. Any opinions on that? Do you know of any recently started projects using FastCGI? (And what about SCGI?)

    Read the article

  • Site is working with http:// not with http://www

    - by Forza
    My site works if I type it as domain.com, but if I type it as www.domain.com I get a 404 error page. The domain is registered with Google Apps for Business and the hosting is done with another company. I have some A records pointing the site to this server. However, it appears the A records are only working halfway. What do I have to do to make it work for both http:// and http://www ?
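
    A quick check (a tiny sketch; domain.com is just the placeholder used in the question) is to confirm whether the www host resolves at all and, if so, whether it points at the same address as the bare domain:

        # Compare how the bare domain and the www host resolve.
        import socket

        for host in ("domain.com", "www.domain.com"):
            try:
                print(host, "->", socket.gethostbyname(host))
            except socket.gaierror as err:
                print(host, "-> does not resolve:", err)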

    Read the article

  • HTTP crawler in Erlang

    - by ctp
    I'm coding a simple HTTP crawler, but I have an issue running the code at the bottom. I'm requesting 50 URLs and get the content of only 20+ of them back. I've generated a few files of 150 kB each to test the crawler, so I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file is fetched? The test data server is online, so please try the code out; any hints are welcome :)

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/1]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            lists:foreach(fun do_spawn/1, Ids).

        do_spawn(Id) ->
            proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

        do_send_req(Id) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]);
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
            end.

    That's the output:

        Requesting ID 1 ... Requesting ID 2 ... Requesting ID 3 ... Requesting ID 4 ... Requesting ID 5 ...
        Requesting ID 6 ... Requesting ID 7 ... Requesting ID 8 ... Requesting ID 9 ... Requesting ID 10 ...
        Requesting ID 11 ... Requesting ID 12 ... Requesting ID 13 ... Requesting ID 14 ... Requesting ID 15 ...
        Requesting ID 16 ... Requesting ID 17 ... Requesting ID 18 ... Requesting ID 19 ... Requesting ID 20 ...
        Requesting ID 21 ... Requesting ID 22 ... Requesting ID 23 ... Requesting ID 24 ... Requesting ID 25 ...
        Requesting ID 26 ... Requesting ID 27 ... Requesting ID 28 ... Requesting ID 29 ... Requesting ID 30 ...
        Requesting ID 31 ... Requesting ID 32 ... Requesting ID 33 ... Requesting ID 34 ... Requesting ID 35 ...
        Requesting ID 36 ... Requesting ID 37 ... Requesting ID 38 ... Requesting ID 39 ... Requesting ID 40 ...
        Requesting ID 41 ... Requesting ID 42 ... Requesting ID 43 ... Requesting ID 44 ... Requesting ID 45 ...
        Requesting ID 46 ... Requesting ID 47 ... Requesting ID 48 ... Requesting ID 49 ... Requesting ID 50 ...
        OK -- ID: 49 -- Status: "200" -- Content length: 150000
        OK -- ID: 47 -- Status: "200" -- Content length: 150000
        OK -- ID: 50 -- Status: "200" -- Content length: 150000
        OK -- ID: 17 -- Status: "200" -- Content length: 150000
        OK -- ID: 48 -- Status: "200" -- Content length: 150000
        OK -- ID: 45 -- Status: "200" -- Content length: 150000
        OK -- ID: 46 -- Status: "200" -- Content length: 150000
        OK -- ID: 10 -- Status: "200" -- Content length: 150000
        OK -- ID: 09 -- Status: "200" -- Content length: 150000
        OK -- ID: 19 -- Status: "200" -- Content length: 150000
        OK -- ID: 13 -- Status: "200" -- Content length: 150000
        OK -- ID: 21 -- Status: "200" -- Content length: 150000
        OK -- ID: 16 -- Status: "200" -- Content length: 150000
        OK -- ID: 27 -- Status: "200" -- Content length: 150000
        OK -- ID: 03 -- Status: "200" -- Content length: 150000
        OK -- ID: 23 -- Status: "200" -- Content length: 150000
        OK -- ID: 29 -- Status: "200" -- Content length: 150000
        OK -- ID: 14 -- Status: "200" -- Content length: 150000
        OK -- ID: 18 -- Status: "200" -- Content length: 150000
        OK -- ID: 01 -- Status: "200" -- Content length: 150000
        OK -- ID: 30 -- Status: "200" -- Content length: 150000
        OK -- ID: 40 -- Status: "200" -- Content length: 150000
        OK -- ID: 05 -- Status: "200" -- Content length: 150000

    Update: thanks stemm for the hint with wait_workers. I've combined your code and mine, but the behaviour is the same :(

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/2]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            %% collect reference to each worker
            Refs = [ do_spawn(Id) || Id <- Ids ],
            %% wait for response from each worker
            wait_workers(Refs).

        wait_workers(Refs) ->
            lists:foreach(fun receive_by_ref/1, Refs).

        receive_by_ref(Ref) ->
            %% receive message only from worker with specific reference
            receive
                {Ref, done} -> done
            end.

        do_spawn(Id) ->
            Ref = make_ref(),
            proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
            Ref.

        do_send_req(Id, {Pid, Ref}) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]),
                    %% send message that work is done
                    Pid ! {Ref, done};
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
                    %% repeat request if there was error while fetching a page,
                    do_send_req(Id, {Pid, Ref})
                    %% or - if you don't want to repeat request, put there:
                    %% Pid ! {Ref, done}
            end.

    Running the crawler works fine for a handful of files, but then the code doesn't even fetch entire files (each file is 150000 bytes) - the crawler fetches some files only partially; see the following web server log :(

        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"

    Any hints are welcome. I have no clue what's going wrong there :(

    Read the article

  • Is SOAP Http POST more complicated than I thought

    - by Pete Petersen
    I'm currently writing a bit of code to send some XML data to a web service via HTTP POST. I thought this would be really simple and have written the following example code (C#):

        Console.WriteLine("Press enter to send data...");
        while (Console.ReadLine() != "q")
        {
            HttpWebRequest httpWReq = (HttpWebRequest)WebRequest.Create(@"http://localhost:8888/");

            Foo fooItem = new Foo
            {
                Member1 = "05",
                Member2 = "74455604",
                Member3 = "15101051",
                Member4 = 1,
                Member5 = "fsf",
                Member6 = 6.52,
            };

            ASCIIEncoding encoding = new ASCIIEncoding();
            string postData = fooItem.ToXml();
            byte[] data = encoding.GetBytes(postData);

            httpWReq.Method = "POST";
            httpWReq.ContentType = "application/xml";
            httpWReq.ContentLength = data.Length;

            using (Stream stream = httpWReq.GetRequestStream())
            {
                stream.Write(data, 0, data.Length);
            }

            HttpWebResponse response = (HttpWebResponse)httpWReq.GetResponse();
            string responseString = new StreamReader(response.GetResponseStream()).ReadToEnd();
            Console.WriteLine("Received " + responseString);
            Console.WriteLine("Press enter to send data...");
        }

    This is all I thought would be necessary; however, I have now been given the details for the web service. They include some information which is unfamiliar to me, and I'm unsure whether I need to include it. The information I was sent was:

        <url>http://sometext/soap/rpc</url>
        <namespace>http://sometext/a.services</namespace>
        <method>receiveInfo</method>
        <parm-id>xmldata</parm-id> (Input data) (Actual XML data as string)
        <parm-id>status</parm-id> (Output data)
        <userid>user</userid>
        <password>pass</password>
        <secure>false</secure>

    I guess this means I need to include a username and password somehow, but I'm not sure what the namespace or method fields are used for. Could anyone give me a hint? Sorry, I've never used web services before.

    Read the article

  • Http header 302 error

    - by Katherine Katie
    Response headers:

        status          HTTP/1.1 302 Found
        connection      close
        pragma          no-cache
        cache-control   no-cache
        location        /
        location        /NKiXN/

    I don't know how this happened. I used the W3 Total Cache plugin, but I have deactivated it for now. Please help me solve this; the site is dropping in search engine rankings and Googlebot is unable to follow it. Urgent help required - if this is a configuration problem with the server, please let me know the solution.

    Site: http://onlinecheapestcarinsurance.co.uk/
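
    To see exactly what the server sends, it can help to fetch the page without following the redirect and dump the status line and headers. A small sketch using only the Python standard library (http.client does not follow redirects on its own; the host is taken from the question):

        # Fetch the home page and print the raw status and response headers.
        import http.client

        conn = http.client.HTTPConnection("onlinecheapestcarinsurance.co.uk", timeout=10)
        conn.request("GET", "/", headers={"User-Agent": "redirect-check/0.1"})
        resp = conn.getresponse()
        print(resp.status, resp.reason)
        for name, value in resp.getheaders():
            print(f"{name}: {value}")
        conn.close()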

    Read the article

  • HTTP Header - ntCoent-Length

    - by DMcKenna
    I get the following HTTP response headers in a particular response. All looks okay. However, I have noticed that the content length appears twice:

        Content-Length: 2424
        ntCoent-Length: 2424

    Is there a particular reason why the content length is returned a second time as ntCoent-Length?

        HTTP/1.0 200 OK
        Date: Wed, 26 May 2010 09:38:19 GMT
        Server: Apache
        P3P: CP="NOI DSP COR CURa ADMa TA1a OUR BUS IND UNI COM NAV INT"
        Accept-Charset: iso-8859-1, unicode-1-1;q=0.8
        Expires: Sun, 15 Jul 1990 00:00:00 GMT
        Pragma: no-cache
        Cache-Control: no-cache
        Content-Language: en
        ntCoent-Length: 2424
        Connection: close
        Content-Type: text/html;charset=iso-8859-1
        Content-Length: 2424

    Read the article

  • Oracle HTTP Server access_log - GET /error/404.html HTTP/1.0 200 7001 entries

    - by Pavan
    access_log shows the following entries repeatedly; it seems like something is polling. So many entries keep getting added to the log that it is difficult to find the actual error messages.

        aaa.bbb.ccc.ddd - - [07/Nov/2012:00:02:48 -0800] "HEAD /index.html HTTP/1.1" 200 -
        abc.bcd.cda.dab - - [07/Nov/2012:00:02:50 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dac - - [07/Nov/2012:00:02:51 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dab - - [07/Nov/2012:00:02:56 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dac - - [07/Nov/2012:00:02:56 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dab - - [07/Nov/2012:00:03:01 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dac - - [07/Nov/2012:00:03:01 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dab - - [07/Nov/2012:00:03:06 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dac - - [07/Nov/2012:00:03:06 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        aaa.bbb.ccc.ddd - - [07/Nov/2012:00:03:08 -0800] "HEAD /index.html HTTP/1.1" 200 -

    How do I avoid these repeating entries?

    Read the article

  • IIS 7.5 401.3 Access Denied

    - by Jeffrey
    I am having this weird issue with IIS 7.5 on Windows 2008 R2 x64. I created a site in IIS, manually created a test file index.html, and everything worked. When I try to do a deployment, I copy all the files from my local PC to the IIS server, try to access index.html (this is the properly deployed file), and get a 401.3 access denied error. I then manually recreate index.html, copy the content into this newly created file, and the page is accessible again... I just can't figure this out. So the issue is that IIS 7.5 can't serve files that have been copied from other PCs. I tried to reset/apply permission settings on the copied folders/files, but nothing has worked. Please help. Thanks! By the way, the files that I copied are just some HTML cutups, i.e. generic HTML, CSS and image files, nothing special.

    Read the article

  • How reliable is HTTP compression using gzip?

    - by Liam
    YSlow has suggested that I use HTTP compression to improve the performance of my site. However, as noted by Yahoo, there are some problems: "There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically." I understand that the most common problem occurs with IE6 behind a proxy. But how common are these problems today? To quantify it, roughly what percentage of web users experience bugs with HTTP compression?
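
    Not an answer to the percentage question, but a quick way to observe the mechanism: request the same URL with and without Accept-Encoding: gzip and compare the Content-Encoding and Vary headers that come back. A small sketch (the URL is a placeholder; urllib neither sends Accept-Encoding nor decompresses on its own):

        # Compare response headers with and without gzip negotiation.
        from urllib.request import Request, urlopen

        URL = "http://www.example.com/"

        for accept in (None, "gzip"):
            headers = {"Accept-Encoding": accept} if accept else {}
            with urlopen(Request(URL, headers=headers), timeout=10) as resp:
                print("Accept-Encoding:", accept or "(none)",
                      "| Content-Encoding:", resp.headers.get("Content-Encoding", "(none)"),
                      "| Vary:", resp.headers.get("Vary", "(none)"))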

    Read the article

  • What does this error mean in my IIS7 Failed Request Tracing report?

    - by Pure.Krome
    Hi folks, when I attempt to go to any page in my web application (I'm migrating the code from an ASP.NET web site to a web application, and now testing it), I keep getting some not-authenticated error(s). So I've turned on FREB, and this is what it says... I'm not sure what that means? Secondly, I've also made sure that my site (or at least the default document, which has been set up to be default.aspx) has anonymous authentication on and the rest off. Proof:

        C:\Windows\System32\inetsrv>appcmd list config "My Web App/default.aspx" -section:anonymousAuthentication
        <system.webServer>
          <security>
            <authentication>
              <anonymousAuthentication enabled="true" userName="IUSR" />
            </authentication>
          </security>
        </system.webServer>

        C:\Windows\System32\inetsrv>appcmd list config "My Web App" -section:anonymousAuthentication
        <system.webServer>
          <security>
            <authentication>
              <anonymousAuthentication enabled="true" userName="IUSR" />
            </authentication>
          </security>
        </system.webServer>

    Can someone please help?

    Read the article

  • How can I set up a 404 error page when people access http://ftp.mydomain.com?

    - by Tim B.
    I am a freelance videographer/developer, and part of my job involves transferring large files over FTP to production houses/television stations. While the majority of people in my industry understand the difference between FTP and HTTP, I've experienced several interactions in the past couple months of people who still open Internet Explorer and try to access http://ftp.mydomain.com, receive an error page served by HostGator, and tell me that they cannot access my FTP server. Instead of spending time delivering instructions via e-mail, I'd much prefer to serve up a custom error page in this instance that instructs them how to download and use an FTP client. I tried setting up a sub-domain in Cpanel hoping I could simply drop in an .htaccess file with the error page, but I got this error: ftp.mydomain.com domainadmin-domainexistsglobal I also tried creating a custom error page in PHP which reads the site URL and serves up the custom content only when http://ftp.mydomain.com is accessed. Unfortunately, the error page works for every subdomain except that one. I'm not entirely sure this is even technically possible, which is why I bring it to the good people of StackOverflow to help. Thanks!

    Read the article

  • Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

    - by Neil Pitman
    I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, ETags, ...) and, for the purpose of this question, please assume that I have maxed out the opportunities to reduce load. I am thinking of doing a brute-force traversal of all URLs in the system to prime a cache and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night. Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution that is just a configuration of open-source technologies. Thanks
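
    The brute-force priming step itself can be as simple as walking a list of known URLs through the cache front end so the disk cache is warm before the contents are copied out. A minimal sketch (the front-end host, seed paths, and the X-Cache header are assumptions for illustration):

        # Warm a cache front end by fetching a seed list of URLs through it.
        import time
        from urllib.request import Request, urlopen

        CACHE_FRONTEND = "http://cache.example.internal"   # hypothetical Squid / mod_disk_cache front end
        SEED_PATHS = ["/", "/products", "/products/1", "/about"]

        for path in SEED_PATHS:
            req = Request(CACHE_FRONTEND + path, headers={"User-Agent": "cache-primer/0.1"})
            with urlopen(req, timeout=30) as resp:
                resp.read()                                 # pull the body so the cache stores it
                print(path, resp.status, resp.headers.get("X-Cache", "-"))
            time.sleep(0.1)                                 # be gentle with the origin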

    Read the article

  • Apache DVB http video Streaming bandwidth or priority problem

    - by igino manfre'
    I'm streaming a few precompressed DVB videos from the cloud. The streams are generated by VLC on "impossible" ports (such as 64085, 64086, etc.) and reverse proxied by Apache on ports 80 and 8080. All the generated streams are listed at "http://95.110.164.61/indexv.html". From an ADSL connection with enough downlink bandwidth, requesting the stream generated by VLC directly (such as "http://95.110.164.61:64087/mpg2_6.4"), it flows fluently. Requesting the same stream proxied by Apache ("http://95.110.164.61/mpg2_6.4"), the stream stops and goes. The only situation in which the Apache-proxied streams flow regularly is from a site connected through 64 Mbps guaranteed bandwidth with an RTT to the server of less than 10 milliseconds. Please note that streams below 2 Mbps are proxied fluently. The system is a single-core Xeon running Windows 2008 R2 with 4 GB of RAM and 1 Gbps of network bandwidth. The drain on computational and bandwidth resources is negligible; RAM usage is always lower than 50%. On the system I run many VLC streamers. Each of them uses a variable amount of RAM (from about 25 to 70 MB). By contrast, the couple of httpd.exe processes use no more than 7 MB. Using Wireshark (on the server) I see that VLC sends many more packets directly to the client than Apache does, and the stream is fragmented into many frames. I'm not a programmer, and a newbie with Apache. Can anyone please point me to a specific portion of Apache's huge documentation? Thank you. igino

    Read the article

  • sudo apt-get update errors

    - by Adrian Begi
    Here is what I get on my terminal when running sudo apt-get update. I don't know if the issue is from my sources.list or my proxy setup (I have not made any changes to proxies). Thank you in advance for any help.

        Ign http://security.ubuntu.com oneiric-security Release.gpg
        Ign http://security.ubuntu.com oneiric-security Release
        Ign http://security.ubuntu.com oneiric-security/main Sources/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/restricted Sources/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/universe Sources/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/multiverse Sources/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/main amd64 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/restricted amd64 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/universe amd64 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/multiverse amd64 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/main i386 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/restricted i386 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/universe i386 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/multiverse i386 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/main TranslationIndex
        Ign http://security.ubuntu.com oneiric-security/multiverse TranslationIndex
        Ign http://security.ubuntu.com oneiric-security/restricted TranslationIndex
        Ign http://security.ubuntu.com oneiric-security/universe TranslationIndex
        Err http://security.ubuntu.com oneiric-security/main Sources 404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/restricted Sources 404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/universe Sources 404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/multiverse Sources 404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/main amd64 Packages 404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/restricted amd64 Packages 404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/universe amd64 Packages 404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/multiverse amd64 Packages 404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/main i386 Packages 404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/restricted i386 Packages 404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/universe i386 Packages 404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/multiverse i386 Packages 404 Not Found [IP: 91.189.91.15 80]
        Ign http://security.ubuntu.com oneiric-security/main Translation-en_US
        Ign http://security.ubuntu.com oneiric-security/main Translation-en
        Ign http://security.ubuntu.com oneiric-security/multiverse Translation-en_US
        Ign http://security.ubuntu.com oneiric-security/multiverse Translation-en
        Ign http://security.ubuntu.com oneiric-security/restricted Translation-en_US
        Ign http://security.ubuntu.com oneiric-security/restricted Translation-en
        Ign http://security.ubuntu.com oneiric-security/universe Translation-en_US
        Ign http://security.ubuntu.com oneiric-security/universe Translation-en
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/main/source/Sources 404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/restricted/source/Sources 404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/universe/source/Sources 404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/multiverse/source/Sources 404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/main/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/restricted/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/multiverse/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/main/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/restricted/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/universe/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80]
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Here is my sources.list:

        #
        # deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ dists/oneiric/main/binary-i386/
        # deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ dists/oneiric/restricted/binary-i386/
        # deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ oneiric main restricted
        #deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ dists/oneiric/main/binary-i386/
        #deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ dists/oneiric/restricted/binary-i386/
        #deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ oneiric main restricted
        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric main restricted
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric main restricted
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric-updates main restricted
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-updates main restricted
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric universe
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric universe
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric-updates universe
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-updates universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric multiverse
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric multiverse
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric-updates multiverse
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-updates multiverse
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric-backports main restricted universe multiverse
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-backports main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu oneiric-security main restricted
        deb-src http://security.ubuntu.com/ubuntu oneiric-security main restricted
        deb http://security.ubuntu.com/ubuntu oneiric-security universe
        deb-src http://security.ubuntu.com/ubuntu oneiric-security universe
        deb http://security.ubuntu.com/ubuntu oneiric-security multiverse
        deb-src http://security.ubuntu.com/ubuntu oneiric-security multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        # deb http://archive.canonical.com/ubuntu oneiric partner
        # deb-src http://archive.canonical.com/ubuntu oneiric partner
        ## Uncomment the following two lines to add software from Ubuntu's
        ## 'extras' repository.
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        # deb http://extras.ubuntu.com/ubuntu oneiric main
        # deb-src http://extras.ubuntu.com/ubuntu oneiric main

    Read the article

  • wireshark does not show any http traffic

    - by hayat
    My PC is running Windows XP. I have been trying to capture HTTP traffic, but if I apply the following filters, Wireshark does not give the results I need:

        http - result: it only shows SSDP traffic
        http (!udp) - result: it does not show anything
        http (!http contains ssdp) - result: it does not show anything
        http && tcp - result: it does not show anything
        http (!udp.dstport == 1900) - result: it does not show anything
        tcp - result: it shows TCP traffic, but I cannot reach my required HTTP messages

    Please guide me on what the problem may be, as the same thing happens when I run it on a different OS (Windows 7) on my laptop.

    Read the article
