Search Results

Search found 50475 results on 2019 pages for 'rpc over http'.


  • Why NOT use POST method here?

    - by Camran
    I have a classifieds website. On the main (index) page I have several form fields which the user may or may not fill in, in order to specify a detailed search of classifieds. For example: Category: Cars, Price from: 3000, Price to: 10000, Color: Red, Area: California. The form's action is set to a PHP page:

        <form action='query_sql.php' method='post'>

    In query_sql.php I fetch the variables like this:

        $category = $_POST['category'];
        // etc. etc.

    Then I query MySQL:

        $query = "SELECT ........ WHERE category='$category'"; // etc. etc.
        $results = mysql_query($query);

    Then I simply display the results of the query to the user by creating a table which is filled in dynamically depending on the result set. However, according to an answer by Col. Shrapnel to my previous question, I shouldn't use POST here: http://stackoverflow.com/questions/3004754/how-to-hide-url-from-users-when-submitting-this-form

    The reason I use POST is simply to hide the "one-page-word-document"-long URL in the browser's address bar. I am very confused: is it okay to use POST or not? It works fine with both GET and POST now, and it is already on a production server. By the way, in the linked question I wasn't referring to making the URL invisible (or hiding it); I just wanted it to look better (which I have accomplished with mod_rewrite).

    UPDATE: If I use GET, then how should I make the URL better looking (beautiful)? Check out this previous question: http://stackoverflow.com/questions/3000524/how-to-make-this-very-long-url-appear-short
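    A minimal sketch of the GET variant, with hypothetical field names: GET makes the search bookmarkable and cacheable, and mod_rewrite can still shorten the resulting query string afterwards:

        <form action='query_sql.php' method='get'>

        // query_sql.php: read from $_GET instead of $_POST, escaping
        // the value before it reaches the SQL string
        $category = mysql_real_escape_string($_GET['category']);
        $query    = "SELECT ........ WHERE category='$category'";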

    Read the article

  • Android - How can I upload a txt file to a website?

    - by Donal Rafferty
    I want to upload a txt file to a website. I'll admit I haven't looked into it in any great detail, but I have looked at a few examples and would like more experienced opinions on whether I'm going in the right direction. Here is what I have so far:

        DefaultHttpClient httpClient = new DefaultHttpClient();
        HttpContext localContext = new BasicHttpContext();
        private String ret;
        HttpResponse response = null;
        HttpPost httpPost = null;

        public String postPage(String url, String data, boolean returnAddr) {
            ret = null;
            httpClient.getParams().setParameter(ClientPNames.COOKIE_POLICY, CookiePolicy.RFC_2109);
            httpPost = new HttpPost(url);
            response = null;
            StringEntity tmp = null;
            try {
                tmp = new StringEntity(data, "UTF-8");
            } catch (UnsupportedEncodingException e) {
                System.out.println("HTTPHelp : UnsupportedEncodingException : " + e);
            }
            httpPost.setEntity(tmp);
            try {
                response = httpClient.execute(httpPost, localContext);
            } catch (ClientProtocolException e) {
                System.out.println("HTTPHelp : ClientProtocolException : " + e);
            } catch (IOException e) {
                System.out.println("HTTPHelp : IOException : " + e);
            }
            ret = response.getStatusLine().toString();
            return ret;
        }

    And I call it as follows:

        postPage("http://www.testwebsite.com", "data/data/com.testxmlpost.xml/files/logging.txt", true);

    I want to be able to upload a file from the device to a website, but when trying it this way I get the following response back:

        HTTP/1.1 405 Method Not Allowed

    Am I going about this the correct way, or should I be doing it another way?
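    A hedged sketch, not the poster's code: note that postPage() above sends the file's path as the request body, not its contents. Sending the file itself, for example with FileEntity, looks like this (the upload URL is hypothetical, and a 405 can also simply mean the target URL does not accept POST at all):

        File file = new File("/data/data/com.testxmlpost.xml/files/logging.txt");
        HttpPost post = new HttpPost("http://www.testwebsite.com/upload.php");
        // FileEntity streams the file contents as the POST body
        post.setEntity(new FileEntity(file, "text/plain"));
        HttpResponse resp = new DefaultHttpClient().execute(post);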

    Read the article

  • android httpurlconnection [closed]

    - by user620451
    Hi, I'm a new Android developer. I am trying to log in to my Asterisk server by passing my username and password, and that works. But when I request another URL from the server after logging in, I get "access denied". I know the cause: the login connection has been closed. So I want a way to make two requests, the first to log in to the server, and the second to do something else while still logged in. Please help, and thanks anyway. This is part of my code; I want to request these two URLs:

        url1 = "http://192.168.1.7:8088/rawman?action=login&username=admin&secret=admin"
        url2 = "http://192.168.1.5:8088/rawman?action=updateconfig&reload=yes&srcfilename=users.conf&dstfilename=users.conf&Action-000000=newcat&Cat-000000=6001&Var-000000=&Value-000000="

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            tv1 = (TextView) this.findViewById(R.id.display);
            ed1 = (EditText) this.findViewById(R.id.editText);
            bt1 = (Button) this.findViewById(R.id.submit);
            bt1.setOnClickListener(new OnClickListener() {
                public void onClick(View view) {
                    try {
                        ServerRequest(url1);
                        ServerRequest(url2);
                    } catch (Exception e) {
                        Log.v("Exception", "Exception:" + e.getMessage());
                    }
                }
            });
        }

        public String ServerRequest(String serverString) throws MalformedURLException, IOException {
            String newFeed = serverString;
            StringBuilder response = new StringBuilder();
            Log.v("server", "server url:" + newFeed);
            URL url = new URL(newFeed);
            HttpURLConnection httpconn = (HttpURLConnection) url.openConnection();
            if (httpconn.getResponseCode() == HttpURLConnection.HTTP_OK) {
                BufferedReader input = new BufferedReader(
                        new InputStreamReader(httpconn.getInputStream()), 8192);
                String strLine = null;
                while ((strLine = input.readLine()) != null) {
                    response.append(strLine);
                }
                input.close();
            }
            tv1.setText(response);
            return response.toString();
        }
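    A hedged sketch of one way to stay logged in: the Asterisk HTTP manager hands back a session cookie on login (e.g. mansession_id), so capturing Set-Cookie from the first response and replaying it on the second request usually carries the session over:

        HttpURLConnection login = (HttpURLConnection) new URL(url1).openConnection();
        String cookie = login.getHeaderField("Set-Cookie");

        HttpURLConnection next = (HttpURLConnection) new URL(url2).openConnection();
        if (cookie != null) {
            // send back only the name=value pair, not the attributes
            next.setRequestProperty("Cookie", cookie.split(";", 2)[0]);
        }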

    Read the article

  • Consume a JSON webservice and process data in Android

    - by user1783391
    I am trying to consume the webservice below:

        http://62.253.195.179/disaster/webservices/login.php?message=[{"email":"[email protected]","password":"welcome"}]

    This returns a JSON array:

        [{"companyuserId":"2","name":"ben stein","superiorname":"Leon","departmentId":"26","departmentname":"Development","companyId":"23","UDID":"12345","isActive":"1","devicetoken":"12345","email":"[email protected]","phone":"5456465465654","userrole":"1","chngpwdStatus":"1"}]

    My code is below:

        try {
            String weblink = URLEncoder.encode("http://62.253.195.179/disaster/webservices/login.php?message=[{\"email\":\"[email protected]\",\"password\":\"welcome\"}]");
            HttpParams httpParameters = new BasicHttpParams();
            int timeoutConnection = 7500;
            HttpConnectionParams.setConnectionTimeout(httpParameters, timeoutConnection);
            int timeoutSocket = 7500;
            HttpConnectionParams.setSoTimeout(httpParameters, timeoutSocket);
            HttpClient client = new DefaultHttpClient(httpParameters);
            HttpGet request = new HttpGet();
            URI link = new URI(weblink);
            request.setURI(link);
            HttpResponse response = client.execute(request);
            BufferedReader rd = new BufferedReader(new InputStreamReader(
                    response.getEntity().getContent()));
            result = rd.readLine();
            JSONObject myData = new JSONObject(result);
            JSONArray jArray = myData.getJSONArray("");
            JSONObject steps = jArray.getJSONObject(0);
            String name = steps.getString("name");
        } catch (JSONException e) {
            e.printStackTrace();
        }

    But it's not working, and I am not 100% sure this is the best way to do it:

        11-10 10:49:55.489: E/AndroidRuntime(392): java.lang.IllegalStateException: Target host must not be null, or set in parameters.
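    Two hedged fixes, sketched: URLEncoder.encode() is applied to the whole URL above, which encodes "http://" itself and produces exactly the "Target host must not be null" error; only the message parameter should be encoded. And since the service returns a bare JSON array, it parses with JSONArray rather than JSONObject:

        String message = URLEncoder.encode(
                "[{\"email\":\"[email protected]\",\"password\":\"welcome\"}]", "UTF-8");
        String weblink = "http://62.253.195.179/disaster/webservices/login.php?message=" + message;
        // ... execute the request as before, then:
        JSONArray jArray = new JSONArray(result);
        String name = jArray.getJSONObject(0).getString("name");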

    Read the article

  • How to implement a log window in a web browser?

    - by Jeremy Friesner
    Hi all, I'm interested in adding an HTML/web-browser-based "log window" to my net-enabled device. Specifically, my device has a customized web server and an event log, and I'd like to be able to leave a web browser window open to e.g. http://my.devices.ip.address/system_log and have events show up as text in the browser window as they happen. People could then use this as a quick way to monitor what the system is doing, without needing to run any special software.

    My question is: what is the best way to implement this? I've tried the obvious approach, which is to have my device's embedded web server hold the HTTP/TCP connection open indefinitely and write the necessary text to the TCP socket when an event occurs. The problem is that most web browsers (e.g. Safari) don't display the page until the server has closed the TCP connection, so the log data never appears in the browser; it just acts as if the page is taking forever to load.

    Is there some trick to make this work? I could implement it as a Java applet, but I'd much prefer something more lightweight/simple, using only HTML or possibly HTML+JavaScript. I'd also like to avoid having the web browser 'poll' the server, since that would either introduce too much latency (if the reload delay were large) or put load on the system (if the delay were small).
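    A hedged sketch of the old "streamed script blocks" trick (sometimes called Comet or a forever frame), which needs nothing beyond HTML and JavaScript: the server sends the page below and keeps the connection open, then flushes one tiny script element per log event. Most browsers execute scripts as they arrive, though some want around a kilobyte of initial padding before they start rendering. appendLog is a made-up name:

        <pre id="log"></pre>
        <script>
        function appendLog(line) {
          document.getElementById('log')
                  .appendChild(document.createTextNode(line + '\n'));
        }
        </script>
        <!-- then, streamed by the server once per event: -->
        <script>appendLog("example log event");</script>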

    Read the article

  • Login to website and use cookie to get source for another page

    - by Stu
    I am trying to log in to the TV Rage website and get the source code of the My Shows page. I am successfully logging in (I have checked the response from my POST request), but when I then try to perform a GET request on the My Shows page, I am redirected to the login page. This is the code I am using to log in:

        private string LoginToTvRage()
        {
            string loginUrl = "http://www.tvrage.com/login.php";
            string formParams = string.Format("login_name={0}&login_pass={1}", "xxx", "xxxx");
            string cookieHeader;
            WebRequest req = WebRequest.Create(loginUrl);
            req.ContentType = "application/x-www-form-urlencoded";
            req.Method = "POST";
            byte[] bytes = Encoding.ASCII.GetBytes(formParams);
            req.ContentLength = bytes.Length;
            using (Stream os = req.GetRequestStream())
            {
                os.Write(bytes, 0, bytes.Length);
            }
            WebResponse resp = req.GetResponse();
            cookieHeader = resp.Headers["Set-cookie"];
            String responseStream;
            using (StreamReader sr = new StreamReader(resp.GetResponseStream()))
            {
                responseStream = sr.ReadToEnd();
            }
            return cookieHeader;
        }

    I then pass the cookieHeader into this method, which should get the source of the My Shows page:

        private string GetSourceForMyShowsPage(string cookieHeader)
        {
            string pageSource;
            string getUrl = "http://www.tvrage.com/mytvrage.php?page=myshows";
            WebRequest getRequest = WebRequest.Create(getUrl);
            getRequest.Headers.Add("Cookie", cookieHeader);
            WebResponse getResponse = getRequest.GetResponse();
            using (StreamReader sr = new StreamReader(getResponse.GetResponseStream()))
            {
                pageSource = sr.ReadToEnd();
            }
            return pageSource;
        }

    I have been using this previous question as a guide, but I'm at a loss as to why my code isn't working.
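    A hedged sketch of the usual fix: HttpWebRequest tracks cookies properly only through a CookieContainer, and hand-copying the raw Set-Cookie header can drop attributes or extra cookies issued along the way. Sharing one container across both requests typically cures this kind of redirect-back-to-login loop:

        CookieContainer cookieJar = new CookieContainer();

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(loginUrl);
        req.CookieContainer = cookieJar;        // login cookies land here
        // ... POST the form exactly as before ...

        HttpWebRequest getRequest = (HttpWebRequest)WebRequest.Create(getUrl);
        getRequest.CookieContainer = cookieJar; // same jar carries the session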

    Read the article

  • (Android) Seems like my JSON query is getting double encoded

    - by A Gardner
    Hi, I am getting some weird errors with my Android app. It appears that this code is double-encoding the JSON string. What should be sent is:

        ?{"email":"[email protected]","password":"asdf"}

    or:

        ?%7B%22email%22:%22.....

    What the server is actually seeing is:

        %257B%2522email%2522:%2522....

    which means the server decodes it to:

        %7B%22email%22:%22.....

    This confuses the server. Any ideas why this is happening? Thanks for your help. Code:

        DefaultHttpClient c = new DefaultHttpClient();
        if (cookies != null)
            c.setCookieStore(cookies);
        if (loginNotLogout) {
            jso.put("email", userData.email);
            jso.put("password", userData.password);
        }
        URI u = null;
        if (loginNotLogout)
            u = new URI("HTTP", "www.website.com", "/UserService", jso.toString(), "");
        else
            u = new URI("HTTP", "www.website.com", "/UserService", jso.toString(), "");
        HttpGet httpget = new HttpGet(u);
        HttpResponse response = c.execute(httpget);
        ret.jsonString = EntityUtils.toString(response.getEntity());
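    A hedged observation with a sketch: the multi-argument java.net.URI constructor percent-encodes the query string it is given, so if the JSON has already been encoded once elsewhere (or the resulting string is fed through another encoder later), %7B turns into %257B exactly as described. The raw JSON should pass through an encoding step exactly once:

        // give the constructor the *raw* JSON; it applies the single
        // round of percent-encoding itself
        String rawQuery = jso.toString();
        URI u = new URI("http", "www.website.com", "/UserService", rawQuery, null);
        HttpGet httpget = new HttpGet(u);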

    Read the article

  • Preserve name of file using cURL to transfer files

    - by Toby
    I'm transferring files from an existing HTTP request using cURL, like so:

        $postargs = array(
            'nonfilefield' => 'nonfilevalue',
            'fileentry' => '@' . $_FILES['thefile']['tmp_name'][0]
        );
        $ch = curl_init('http://localhost/curl/rec.php');
        curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_POST, TRUE);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $postargs);
        curl_exec($ch);
        curl_close($ch);

    The only way I can get this to work is by using tmp_name; without it, nothing is sent. However, I then lose the name value for when I want to name the file later. Is there some way to do this while preserving the $_FILES array as it normally would be without cURL? I'm also using an array of file fields in my script, so at the moment I have to convert my multidimensional array into a single dimension for this to work.
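    A hedged sketch of keeping the original filename: on PHP 5.5+, CURLFile takes the posted filename as its third argument; older releases may also accept a ;filename= suffix in the '@' syntax, though support for that varies by version:

        $f = $_FILES['thefile'];
        $postargs = array(
            'nonfilefield' => 'nonfilevalue',
            // tmp path, mime type, filename the receiving script will see
            'fileentry' => new CURLFile($f['tmp_name'][0], $f['type'][0], $f['name'][0]),
        );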

    Read the article

  • How to ensure nginx serves a request from an external IP?

    - by Matt
    I have a strange situation where my nginx setup stopped handling external requests, and I'm pretty stuck. If I hit the domain without a subdomain, I properly get redirected; however, if I request the full URL, that fails and doesn't log anything, anywhere. I am able to curl localhost on the server itself, but when I attempt to curl from an external machine, it fails with:

        curl: (7) couldn't connect to host

    I've also noticed that bots can get through; I've seen Google hit the log every now and then. My nginx.conf file:

        upstream mongrels {
            server 127.0.0.1:5000;
        }

        server {
            listen 80;
            server_name culini.com;
            rewrite ^/(.*) http://www.culini.com/$1 permanent;
        }

        # the server directive is nginx's virtual host directive.
        server {
            # port to listen on. Can also be set to an IP:PORT
            listen 80;
            # Set the max size for file uploads to 50Mb
            client_max_body_size 50M;
            # sets the domain[s] this vhost serves requests for
            server_name www.culini.com;
            # doc root
            root /var/www/culini/current/public;
            log_format app '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $body_bytes_sent "$http_referer" '
                           '"$http_user_agent" "$http_x_forwarded_for" [$upstream_addr $upstream_response_time $upstream_status]';
            # vhost specific access log
            access_log /var/www/culini/current/log/nginx.access.log app;
            error_log /var/www/culini/current/log/nginx.error.log debug;
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect false;
                proxy_max_temp_file_size 0;
                proxy_intercept_errors on;
                proxy_ignore_client_abort on;
                if (-f $request_filename) {
                    break;
                }
                if (!-f $request_filename) {
                    proxy_pass http://mongrels;
                    break;
                }
            }
        }

    Please, please, any help would be greatly appreciated.
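    A hedged checklist rather than an answer: "connects from localhost, connection refused from outside" usually points below nginx, at the listen address or a firewall, rather than at the vhost config:

        # is nginx bound to 0.0.0.0:80, or only to 127.0.0.1?
        netstat -tlnp | grep :80

        # any DROP/REJECT rules filtering port 80?
        iptables -L -n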

    Read the article

  • How do I deal with different requests that map to the same response?

    - by daxim
    I'm designing a Web service. The request is idempotent, so I chose the GET method. The response is relatively expensive to calculate and not small, so I want to get caching (at the protocol level) right. (Don't worry about memoisation on my part, I have that already covered; my question here is about the Web as a whole.) There's only one mandatory parameter and a number of optional parameters with default values if missing. For example, the following two requests map to the same representation of the response. (If this is a dumb way to go about the interface, propose something better.)

        GET /service?mandatory_parameter=some_data HTTP/1.1
        GET /service?mandatory_parameter=some_data;optional_parameter=default1;another_optional_parameter=default2;yet_another_optional_parameter=default3 HTTP/1.1

    However, I imagine clients do not know this and would treat the two as separate, therefore wasting cache storage. What should I do to avoid violating the golden rule of caching? Make up a canonical form, document it (e.g. all parameters are required after all and need to be sorted in a specific order), and return a client error unless the required form is met? Instead of an error, redirect permanently to the canonical form of a request? Or is it enough not to mind what the request looks like, and just respond with the same ETag for identical responses?
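    For concreteness, a hedged sketch of how the redirect option would look on the wire; caches then only ever see the one canonical form:

        GET /service?mandatory_parameter=some_data HTTP/1.1

        HTTP/1.1 301 Moved Permanently
        Location: /service?mandatory_parameter=some_data;optional_parameter=default1;another_optional_parameter=default2;yet_another_optional_parameter=default3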

    Read the article

  • How can I retrieve multiple nutrients from the ESHA Research API? (api.esha.com)

    - by user1833044
    I want to call the ESHA Research nutrient REST API, but I cannot seem to figure out how to request multiple nutrients in one call. So far I am only able to retrieve the calories, or the protein, or one other type of nutrient per call, so I was hoping someone with experience could say whether retrieving all the nutrient information with one call is possible. This is the call I make to retrieve the TWIX calorie data (note that the API key is not really xxxx, but a key generated by ESHA once you sign up as a developer):

        http://api.esha.com/analysis?apikey=xxxx&fo=urn:uuid:81d268ac-f1dc-4991-98c1-1b4d3a5006da

    The return is in JSON format. If I want fat instead, the call is:

        http://api.esha.com/analysis?apikey=xxxx&fo=urn:uuid:81d268ac-f1dc-4991-98c1-1b4d3a5006da&n=urn:uuid:589294dc-3dcc-4b64-be06-c07e7f65c4bd

    How can I make one call and get back all the nutrients (Fat, Calories, Carbs, Vitamins, etc.) for a particular food ID? I have researched this for a while and cannot seem to find the answer. Thanks in advance for your help.

    Read the article

  • HttpWebRequest Cookie weirdness

    - by Lachman
    I'm sure I must be doing something wrong, but I can't for the life of me figure out what is going on. I have a problem where it seems that the HttpWebRequest class in the framework is not correctly parsing the cookies from a web response. I'm using Fiddler to see what is going on, and after making a request the headers of the response look like this:

        HTTP/1.1 200 Ok
        Connection: close
        Date: Wed, 14 Jan 2009 18:20:31 GMT
        Server: Microsoft-IIS/6.0
        P3P: policyref="/w3c/p3p.xml", CP="CAO DSP IND COR ADM CONo CUR CUSi DEV PSA PSD DELi OUR COM NAV PHY ONL PUR UNI"
        Set-Cookie: user=v.5,0,EX01E508801E$97$2E401000t$1BV6$A1$EC$104$A1$EC$104$A1$EC$104$21O001000$1E31!90$7CP$AE$3F$F3$D8$19o$BC$1Cd$23; Domain=.thedomain.com; path=/
        Set-Cookie: minfo=v.4,EX019ECD28D6k$A3$CA$0C$CE$A2$D6$AD$D4!2$8A$EF$E8n$91$96$E1$D7$C8$0F$98$AA$ED$DC$40V$AB$9C$C1$9CF$C9$C1zIF$3A$93$C6$A7$DF$A1$7E$A7$A1$A8$BD$A6$94c$D5$E8$2F$F4$AF$A2$DF$80$89$BA$BBd$F6$2C$B6$A8; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: accttype=v.2,3,1,EX017E651B09k$A3$CA$0C$DB$A2$CB$AD$D9$8A$8C$EF$E8t$91$90$E1$DC$C89$98$AA$E0$DC$40O$A8$A4$C1$9C; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: tpid=v.1,20001; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: MC1=GUID=541977e04a341a2a4f4cdaaf49615487; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: linfo=v.4,EQC|0|0|255|1|0||||||||0|0|0||0|0|0|-1|-1; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: group=v.1,0; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Content-Type: text/html

    But when I look at response.Cookies, I see far more cookies than I am expecting, with the values of individual cookies split up across several cookies. Manually getting the headers results in more weirdness. For example, the code:

        foreach (string cookie in response.Headers.GetValues("Set-Cookie"))
        {
            Console.WriteLine("Cookie found: " + cookie);
        }

    produces the output:

        Cookie found: user=v.5
        Cookie found: 0
        Cookie found: EX01E508801E$97$2E401000t$1BV6$A1$EC$104$A1$EC$104$A1$EC$104$21O001000$1E31!90$7CP$AE$3F$F3$D8$19o$BC$1Cd$23; Domain=.thedomain.com; path=/
        Cookie found: minfo=v.4
        Cookie found: EX019ECD28D6k$A3$CA$0C$CE$A2$D6$AD$D4!2$8A$EF$E8n$91$96$E1$D7$C8$0F$98$AA$ED$DC$40V$AB$9C$C1$9CF$C9$C1zIF$3A$93$C6$A7$DF$A1$7E$A7$A1$A8$BD$A6$94c$D5$E8$2F$F4$AF$A2$DF$80$89$BA$BBd$F6$2C$B6$A8; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: accttype=v.2
        Cookie found: 3
        Cookie found: 1
        Cookie found: EX017E651B09k$A3$CA$0C$DB$A2$CB$AD$D9$8A$8C$EF$E8t$91$90$E1$DC$C89$98$AA$E0$DC$40O$A8$A4$C1$9C; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: tpid=v.1
        Cookie found: 20001; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: MC1=GUID=541977e04a341a2a4f4cdaaf49615487; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: linfo=v.4
        Cookie found: EQC|0|0|255|1|0||||||||0|0|0||0|0|0|-1|-1; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: group=v.1
        Cookie found: 0; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/

    As you can see, the first cookie in the list, raw response:

        Set-Cookie: user=v.5,0,EX01E508801...

    is getting split into:

        Cookie found: user=v.5
        Cookie found: 0
        Cookie found: EX01E508801E$...

    So, what's going on here? Am I wrong? Is the HttpWebRequest class incorrectly parsing the HTTP headers? Is the webserver that is spitting out the responses producing invalid HTTP headers?
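    A hedged explanation with a workaround sketch: .NET's WebHeaderCollection folds the repeated Set-Cookie headers into one comma-joined string, and because these cookie values themselves contain commas (and the expires dates contain "Sunday, 31-Dec-..."), the naive comma split shreds them. A heuristic that happens to survive this particular data is splitting only on commas immediately followed by a name= token:

        // heuristic only: still breaks if a cookie *value* contains ",name="
        string joined = response.Headers["Set-Cookie"];
        foreach (string cookie in Regex.Split(joined, @",(?=\w+=)"))
        {
            Console.WriteLine("Cookie found: " + cookie);
        }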

    Read the article

  • Can't get CouchDB external HTTP handlers to work

    - by fuzzy lollipop
    following the instructions here http://wiki.apache.org/couchdb/ExternalProcesses this is what I get { * error: "{{badarg,[{erlang,port_command, [#Port<0.2056>, [123, [34,<<"info">>,34], 58, [123, [34,"db_name",34], 58, [34,<<"transfer_central">>,34], 44, [34,"doc_count",34], 58,"39441",44, [34,"doc_del_count",34], 58,"0",44, [34,"update_seq",34], 58,"56508",44, [34,"purge_seq",34], 58,"0",44, [34,"compact_running",34], 58,<<"false">>,44, [34,"disk_size",34], 58,"43593828",44, [34,"instance_start_time",34], 58, [34,<<"1272560477320483">>,34], 44, [34,"disk_format_version",34], 58,"5",125], 44, [34,<<"id">>,34], 58,<<"null">>,44, [34,<<"method">>,34], 58, [34,"GET",34], 44, [34,<<"path">>,34], 58, [91, [34,<<"transfer_central">>,34], 44, [34,<<"_test">>,34], 93], 44, [34,<<"query">>,34], 58,<<"{}">>,44, [34,<<"headers">>,34], 58, [123, [34,<<"Accept">>,34], 58, [34, <<"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/json">>, 34], 44, [34,<<"Accept-Charset">>,34], 58, [34,<<"ISO-8859-1,utf-8;q=0.7,*;q=0.7">>,34], 44, [34,<<"Accept-Encoding">>,34], 58, [34,<<"gzip,deflate">>,34], 44, [34,<<"Accept-Language">>,34], 58, [34,<<"en-us,en;q=0.5">>,34], 44, [34,<<"Connection">>,34], 58, [34,<<"keep-alive">>,34], 44, [34,<<"Host">>,34], 58, [34,<<"127.0.0.1:5984">>,34], 44, [34,<<"Keep-Alive">>,34], 58, [34,<<"115">>,34], 44, [34,<<"User-Agent">>,34], 58, [34, <<"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3">>, 34], 125], 44, [34,<<"body">>,34], 58, [34,"undefined",34], 44, [34,<<"peer">>,34], 58, [34,<<"127.0.0.1">>,34], 44, [34,<<"form">>,34], 58,<<"{}">>,44, [34,<<"cookie">>,34], 58,<<"{}">>,44, [34,<<"userCtx">>,34], 58, [123, [34,<<"db">>,34], 58, [34,<<"transfer_central">>,34], 44, [34,<<"name">>,34], 58,<<"null">>,44, [34,<<"roles">>,34], 58,<<"[]">>,125], 125,10]]}, {couch_os_process,writeline,2}, {couch_os_process,writejson,2}, {couch_os_process,handle_call,3}, {gen_server,handle_msg,5}, {proc_lib,init_p_do_apply,3}]}, {gen_server,call, [<0.110.0>, {prompt,{[{<<"info">>, {[{db_name,<<"transfer_central">>}, {doc_count,39441}, {doc_del_count,0}, {update_seq,56508}, {purge_seq,0}, {compact_running,false}, {disk_size,43593828}, {instance_start_time,<<"1272560477320483">>}, {disk_format_version,5}]}}, {<<"id">>,null}, {<<"method">>,'GET'}, {<<"path">>,[<<"transfer_central">>,<<"_test">>]}, {<<"query">>,{[]}}, {<<"headers">>, {[{<<"Accept">>, <<"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/json">>}, {<<"Accept-Charset">>, <<"ISO-8859-1,utf-8;q=0.7,*;q=0.7">>}, {<<"Accept-Encoding">>,<<"gzip,deflate">>}, {<<"Accept-Language">>,<<"en-us,en;q=0.5">>}, {<<"Connection">>,<<"keep-alive">>}, {<<"Host">>,<<"127.0.0.1:5984">>}, {<<"Keep-Alive">>,<<"115">>}, {<<"User-Agent">>, <<"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3">>}]}}, {<<"body">>,undefined}, {<<"peer">>,<<"127.0.0.1">>}, {<<"form">>,{[]}}, {<<"cookie">>,{[]}}, {<<"userCtx">>, {[{<<"db">>,<<"transfer_central">>}, {<<"name">>,null}, {<<"roles">>,[]}]}}]}}, infinity]}}" * reason: "{gen_server,call, [<0.109.0>, {execute,{[{<<"info">>, {[{db_name,<<"transfer_central">>}, {doc_count,39441}, {doc_del_count,0}, {update_seq,56508}, {purge_seq,0}, {compact_running,false}, {disk_size,43593828}, {instance_start_time,<<"1272560477320483">>}, {disk_format_version,5}]}}, {<<"id">>,null}, {<<"method">>,'GET'}, {<<"path">>,[<<"transfer_central">>,<<"_test">>]}, {<<"query">>,{[]}}, {<<"headers">>, 
{[{<<"Accept">>, <<"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/json">>}, {<<"Accept-Charset">>, <<"ISO-8859-1,utf-8;q=0.7,*;q=0.7">>}, {<<"Accept-Encoding">>,<<"gzip,deflate">>}, {<<"Accept-Language">>,<<"en-us,en;q=0.5">>}, {<<"Connection">>,<<"keep-alive">>}, {<<"Host">>,<<"127.0.0.1:5984">>}, {<<"Keep-Alive">>,<<"115">>}, {<<"User-Agent">>, <<"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3">>}]}}, {<<"body">>,undefined}, {<<"peer">>,<<"127.0.0.1">>}, {<<"form">>,{[]}}, {<<"cookie">>,{[]}}, {<<"userCtx">>, {[{<<"db">>,<<"transfer_central">>}, {<<"name">>,null}, {<<"roles">>,[]}]}}]}}, infinity]}" }

    Read the article

  • apt-get update getting 404 on debian lenny

    - by JoelFan
    Here is my /etc/apt/sources.list:

        ###### Debian Main Repos
        deb http://ftp.us.debian.org/debian/ lenny main contrib non-free

        ###### Debian Update Repos
        deb http://security.debian.org/ lenny/updates main contrib non-free
        deb http://ftp.us.debian.org/debian/ lenny-proposed-updates main contrib non-free

    When I do:

        # apt-get update

    I'm getting some good lines, then:

        Err http://ftp.us.debian.org lenny/contrib Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny/non-free Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny-proposed-updates/main Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny-proposed-updates/contrib Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny-proposed-updates/non-free Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny/main Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/main/binary-i386/Packages 404 Not Found [IP: 149.20.20.6 80]
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/contrib/binary-i386/Packages 404 Not Found [IP: 149.20.20.6 80]
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/non-free/binary-i386/Packages 404 Not Found [IP: 149.20.20.6 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/contrib/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/non-free/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny-proposed-updates/main/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny-proposed-updates/contrib/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny-proposed-updates/non-free/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/main/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        E: Some index files failed to download, they have been ignored, or old ones used instead.

    Now what?
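    A hedged suggestion, not part of the question: lenny reached end of life and was removed from the regular mirrors, so these 404s are expected. Pointing sources.list at the Debian archive usually makes apt-get update work again (bearing in mind that no new security updates are published for lenny):

        deb http://archive.debian.org/debian/ lenny main contrib non-free
        deb http://archive.debian.org/debian-security/ lenny/updates main contrib non-free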

    Read the article

  • What is the best practice with KML files when adding geositemap?

    - by Floran
    I'm not sure how to deal with KML files, now particularly important in reference to the Google Venice update. My site is basically a guide of many company listings (sort of Yellow Pages). I want each company listing to have a geolocation associated with it. Which of the options I present below is the way to go?

    OPTION 1: all locations in a single KML file, with a reference to that KML file from a geositemap.xml.

    MYGEOSITEMAP.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:geo="http://www.google.com/geo/schemas/sitemap/1.0">
          <url>
            <loc>http://www.mysite.com/locations.kml</loc>
            <geo:geo><geo:format>kml</geo:format></geo:geo>
          </url>
        </urlset>

    ALLLOCATIONS.kml:

        <kml xmlns="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
          <Document>
            <name>MyCompany</name>
            <atom:author><atom:name>MyCompany</atom:name></atom:author>
            <atom:link href="http://www.mysite.com/locations/3454/MyCompany" rel="related" />
            <Placemark>
              <name>MyCompany, Kalverstraat 26 Amsterdam 1000AG</name>
              <description><![CDATA[<address><a href="http://www.mysite.com/locations/3454/MyCompany">MyCompany</a><br />Address: Kalverstraat 26, Amsterdam 1000AG <br />Phone: 0646598787</address><p>hello there, im MyCompany</p>]]></description>
              <Point><coordinates>5.420686499999965,51.6298808,0</coordinates></Point>
            </Placemark>
          </Document>
        </kml>
        <kml xmlns="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
          <Document>
            <name>MyCompany</name>
            <atom:author><atom:name>MyCompany</atom:name></atom:author>
            <atom:link href="http://www.mysite.com/locations/22/companyX" rel="related" />
            <Placemark>
              <name>MyCompany, Rosestreet 45 Amsterdam 1001XF</name>
              <description><![CDATA[<address><a href="http://www.mysite.com/locations/22/companyX">companyX</a><br />Address: Rosestreet 45, Amsterdam 1001XF <br />Phone: 0642195493</address><p>some text about companyX</p>]]></description>
              <Point><coordinates>5.520686499889632,51.6197705,0</coordinates></Point>
            </Placemark>
          </Document>
        </kml>

    OPTION 2: a separate KML file for each location, and a reference to each KML file from a geositemap.xml (KML files placed in a \kmlfiles folder).

    MYGEOSITEMAP.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:geo="http://www.google.com/geo/schemas/sitemap/1.0">
          <url>
            <loc>http://www.mysite.com/kmlfiles/3454_MyCompany.kml</loc>
            <geo:geo><geo:format>kml</geo:format></geo:geo>
          </url>
          <url>
            <loc>http://www.mysite.com/kmlfiles/22_companyX.kml</loc>
            <geo:geo><geo:format>kml</geo:format></geo:geo>
          </url>
        </urlset>

    3454_MyCompany.kml:

        <kml xmlns="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
          <Document>
            <name>MyCompany</name>
            <atom:author><atom:name>MyCompany</atom:name></atom:author>
            <atom:link href="http://www.mysite.com/locations/3454/MyCompany" rel="related" />
            <Placemark>
              <name>MyCompany, Kalverstraat 26 Amsterdam 1000AG</name>
              <description><![CDATA[<address><a href="http://www.mysite.com/locations/3454/MyCompany">MyCompany</a><br />Address: Kalverstraat 26, Amsterdam 1000AG <br />Phone: 0646598787</address><p>hello there, im MyCompany</p>]]></description>
              <Point><coordinates>5.420686499999965,51.6298808,0</coordinates></Point>
            </Placemark>
          </Document>
        </kml>

    22_companyX.kml:

        <kml xmlns="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
          <Document>
            <name>companyX</name>
            <atom:author><atom:name>companyX</atom:name></atom:author>
            <atom:link href="http://www.mysite.com/locations/22/companyX" rel="related" />
            <Placemark>
              <name>companyX, Rosestreet 45 Amsterdam 1001XF</name>
              <description><![CDATA[<address><a href="http://www.mysite.com/locations/22/companyX">companyX</a><br />Address: Rosestreet 45, Amsterdam 1001XF <br />Phone: 0642195493</address><p>some text about companyX</p>]]></description>
              <Point><coordinates>5.520686499889632,51.6197705,0</coordinates></Point>
            </Placemark>
          </Document>
        </kml>

    OPTION 3?

    Read the article

  • How to setup SyntaxHighlighter with GeeksWithBlogs in about 10 minutes.

    - by mbcrump
    SyntaxHighlighter is a fully functional, self-contained code syntax highlighter developed in JavaScript. Below is a sample of what it looks like in your blog:

        class Test
        {
            static void Main()
            {
                System.Console.WriteLine("Sample SyntaxHighlighter");
            }
        }

    This tutorial will help you set up SyntaxHighlighter with GeeksWithBlogs.net in about 10 minutes. Even though this guide is specifically for GWB, you can use it on any other hosting provider that does not allow you to upload custom CSS/JavaScript. If you are using Live Writer, it is recommended to download Code Snippet with SyntaxHighlighter Support to integrate this functionality within Live Writer.

    1) Log into GWB and select Options -> Configure. Then, under the Custom CSS, insert the following code at the top of the textbox:

        @import url("http://alexgorbatchev.com/pub/sh/current/styles/shCore.css");
        @import url("http://alexgorbatchev.com/pub/sh/current/styles/shThemeDefault.css");

    Please note that you can change the default theme by changing shThemeDefault.css to one listed below:

        shThemeDefault.css
        shThemeDjango.css
        shThemeEmacs.css
        shThemeFadeToGrey.css
        shThemeMidnight.css
        shThemeRDark.css

    2) Under the Static News/Announcements, insert the following code at the top:

        <script type="text/javascript" src="http://alexgorbatchev.com/pub/sh/current/scripts/shCore.js"></script>
        <script type="text/javascript" language="javascript" src="http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCSharp.js"></script>
        <script type="text/javascript" language="javascript" src="http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJScript.js"></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJava.js' type='text/javascript'></script>
        <script language='javascript'>
        SyntaxHighlighter.config.bloggerMode = true;
        SyntaxHighlighter.config.clipboardSwf = 'http://alexgorbatchev.com/pub/sh/current/scripts/clipboard.swf';
        SyntaxHighlighter.all();
        </script>

    Please note that this will only give you support for Java, JavaScript and C#. If you want more languages, like Ruby and SQL, then add the proper tags from the list below. The reason I didn't add them all is that I do not want to load languages I will not be blogging about.

        <link href='http://alexgorbatchev.com/pub/sh/current/styles/shCore.css' rel='stylesheet' type='text/css'/>
        <link href='http://alexgorbatchev.com/pub/sh/current/styles/shThemeDefault.css' rel='stylesheet' type='text/css'/>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shCore.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCpp.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCSharp.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCss.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJava.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJScript.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPhp.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPython.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushRuby.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushSql.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushVb.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushXml.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPerl.js' type='text/javascript'></script>
        <script language='javascript'>
        SyntaxHighlighter.config.bloggerMode = true;
        SyntaxHighlighter.config.clipboardSwf = 'http://alexgorbatchev.com/pub/sh/current/scripts/clipboard.swf';
        SyntaxHighlighter.all();
        </script>

    3) Now install Code Snippet with SyntaxHighlighter Support and launch Windows Live Writer. Click on the PreCode Snippet plugin and copy/paste your code into the window. Make sure you select "PRE" and the language that you are using. It should look similar to the following screenshot. After you finish editing the post, hit publish, and your code should look nice and neat like the example shown earlier.

    Read the article

  • Do WebSockets have exclusive access to their sockets?

    - by Aoriste
    I'm curious to know if, after a WebSocket has been established (after having received the proper handshake from a server that supports them), whether or not the TCP socket used by the "WebSocket connection" is used exclusively by the WebSocket, or if the browser may still make regular HTTP requests with it. It only makes sense to me that WebSockets would have exclusive use of their TCP sockets, but I don't remember having read in any of the documentation that such is the case.

    Read the article

  • GZip compression with WCF hosted on IIS7

    - by joniba
    So I'm going to add my query to the small ocean of questions on the subject. I'm trying to enable GZip compression on large SOAP responses from a WCF service. So far, I've followed instructions here and in a variety of other places to enable dynamic compression on IIS. Here's my dynamicTypes section from applicationHost.config:

        <dynamicTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="application/atom+xml" enabled="true" />
            <add mimeType="application/xaml+xml" enabled="true" />
            <add mimeType="application/xop+xml" enabled="true" />
            <add mimeType="application/soap+xml" enabled="true" />
            <add mimeType="*/*" enabled="false" />
        </dynamicTypes>

    And also (though I'm not so clear on why that's needed):

        <urlCompression doDynamicCompression="true" dynamicCompressionBeforeCache="true" />

    I threw some extra mime types in there just in case. I've implemented IClientMessageInspector to add Accept-Encoding: gzip, deflate to my client's HTTP requests. Here's an example of a request header taken from Fiddler:

        POST http://[omitted]/TestMtomService/TextService.svc HTTP/1.1
        Content-Type: application/soap+xml; charset=utf-8
        Accept-Encoding: gzip, deflate
        Host: [omitted]
        Content-Length: 542
        Expect: 100-continue

    Now, this doesn't work: there's simply no compression happening, no matter what the size of the message (tried up to 1.5 MB). I've looked at this post, but have not run into the exception it describes, so I haven't tried the CodeProject implementation it proposes. I've also seen a lot of other implementations that are supposed to get this to work, but cannot make sense of them (e.g. MSDN's GZip encoder). Why would I need to implement the encoder, or the CodeProject solution? Shouldn't IIS take care of the compression? So what else do I need to do to get this to work? Joni
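    A hedged first check, not from the question: none of the configuration above has any effect unless IIS's Dynamic Content Compression module is actually installed and enabled, which can be verified from an elevated prompt:

        %windir%\system32\inetsrv\appcmd.exe list modules | findstr /i Compression
        %windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/urlCompression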

    Read the article

  • Detecting REFERRER 301 redirects in AwStats

    - by Riccardo
    About six months ago I moved a website to a new domain, easing the migration with 301 redirects in the .htaccess of the old domain. This morning I was looking at the AwStats log of the new domain, and was surprised to notice that in the "HTTP Status codes" section, 301 redirects account for 77% of all codes (it seems 200s are not tracked there). So, what is the proper meaning of the 301 code in those stats? Does it mean that 77% of traffic comes in (as referrer) from 301 redirects, or...?

    Read the article

  • Can URIs have non-ASCII characters?

    - by Cheeso
    I tried to find this in the relevant RFC, IETF RFC 3986, but couldn't figure it out. Do URIs for HTTP allow Unicode, or non-ASCII characters of any kind? Can you please cite the section and the RFC that support your answer? NB: For those who might think this is not programming related: it is. It's related to an ISAPI filter I'm building.

    Read the article

  • What might cause the big overhead of making a HttpWebRequest call?

    - by Dimitri C.
    When I send/receive data using HttpWebRequest (on Silverlight, using the HTTP POST method) in small blocks, I measure a very small throughput of 500 bytes/s over a "localhost" connection. When sending the data in large blocks, I get 2 MB/s, which is some 5000 times faster. Does anyone know what could cause this incredibly big overhead?

    UPDATE: I did the performance measurement on both Firefox 3.6 and Internet Explorer 7. Both showed similar results.

    UPDATE: The Silverlight client-side code I use is essentially my own implementation of the WebClient class. The reason I wrote it is that I noticed the same performance problem with WebClient, and I thought HttpWebRequest would allow me to tweak the performance issue. Regrettably, that did not work. The implementation is as follows:

        public class HttpCommChannel
        {
            public delegate void ResponseArrivedCallback(object requestContext, BinaryDataBuffer response);

            public HttpCommChannel(ResponseArrivedCallback responseArrivedCallback)
            {
                this.responseArrivedCallback = responseArrivedCallback;
                this.requestSentEvent = new ManualResetEvent(false);
                this.responseArrivedEvent = new ManualResetEvent(true);
            }

            public void MakeRequest(object requestContext, string url, BinaryDataBuffer requestPacket)
            {
                responseArrivedEvent.WaitOne();
                responseArrivedEvent.Reset();
                this.requestMsg = requestPacket;
                this.requestContext = requestContext;
                this.webRequest = WebRequest.Create(url) as HttpWebRequest;
                this.webRequest.AllowReadStreamBuffering = true;
                this.webRequest.ContentType = "text/plain";
                this.webRequest.Method = "POST";
                this.webRequest.BeginGetRequestStream(new AsyncCallback(this.GetRequestStreamCallback), null);
                this.requestSentEvent.WaitOne();
            }

            void GetRequestStreamCallback(IAsyncResult asynchronousResult)
            {
                System.IO.Stream postStream = webRequest.EndGetRequestStream(asynchronousResult);
                postStream.Write(requestMsg.Data, 0, (int)requestMsg.Size);
                postStream.Close();
                requestSentEvent.Set();
                webRequest.BeginGetResponse(new AsyncCallback(this.GetResponseCallback), null);
            }

            void GetResponseCallback(IAsyncResult asynchronousResult)
            {
                HttpWebResponse response = (HttpWebResponse)webRequest.EndGetResponse(asynchronousResult);
                Stream streamResponse = response.GetResponseStream();
                Dim.Ensure(streamResponse.CanRead);
                byte[] readData = new byte[streamResponse.Length];
                Dim.Ensure(streamResponse.Read(readData, 0, (int)streamResponse.Length) == streamResponse.Length);
                streamResponse.Close();
                response.Close();
                webRequest = null;
                responseArrivedEvent.Set();
                responseArrivedCallback(requestContext, new BinaryDataBuffer(readData));
            }

            HttpWebRequest webRequest;
            ManualResetEvent requestSentEvent;
            BinaryDataBuffer requestMsg;
            object requestContext;
            ManualResetEvent responseArrivedEvent;
            ResponseArrivedCallback responseArrivedCallback;
        }

    I use this code to send data back and forth to an HTTP server.
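    A hedged guess at the gap, not a confirmed diagnosis: small HTTP writes are classically throttled by the Expect: 100-continue round trip and by Nagle's algorithm interacting with delayed ACKs. On the full .NET framework both can be disabled to test that theory; Silverlight routes HTTP through the hosting browser's stack, which may add buffering of its own:

        // desktop .NET sketch: switch off the two classic small-write throttles
        ServicePointManager.Expect100Continue = false;
        ServicePointManager.UseNagleAlgorithm = false;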

    Read the article

  • Mochiweb's Scalability Features

    - by ErJab
    From all the articles I've read so far about Mochiweb, I've heard over and over that Mochiweb provides very good scalability. My question is: how exactly does Mochiweb get its scalability property? Is it inherited from Erlang's scalability, or does Mochiweb contain additional code that explicitly enables it to scale well? Put another way: if I were to write a simple HTTP server in Erlang myself, with a simple 'loop' (recursive function) to handle requests, would it have the same level of scalability as a simple web server built using the Mochiweb framework?
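    For concreteness, a hedged sketch (not Mochiweb's actual code) of the kind of "simple loop" server the question describes, with one lightweight Erlang process spawned per connection. Most of the scalability comes from this process model and the BEAM scheduler, which Mochiweb inherits; the framework adds production plumbing such as HTTP parsing, keep-alive handling and socket option tuning on top:

        serve(Port) ->
            {ok, Listen} = gen_tcp:listen(Port, [binary, {active, false}, {reuseaddr, true}]),
            accept_loop(Listen).

        accept_loop(Listen) ->
            {ok, Socket} = gen_tcp:accept(Listen),
            spawn(fun() -> handle(Socket) end),   %% one process per connection
            accept_loop(Listen).

        handle(Socket) ->
            {ok, _Request} = gen_tcp:recv(Socket, 0),
            gen_tcp:send(Socket, <<"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok">>),
            gen_tcp:close(Socket).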

    Read the article

  • Uploading a file using post() method of QNetworkAccessManager

    - by user304361
    I'm having some trouble with a Qt application; specifically with the QNetworkAccessManager class. I'm attempting to perform a simple HTTP upload of a binary file using the post() method of QNetworkAccessManager. The documentation states that I can give a pointer to a QIODevice to post(), and that the class will transmit the data found in the QIODevice. This suggests to me that I ought to be able to give post() a pointer to a QFile. For example:

        QFile compressedFile("temp");
        compressedFile.open(QIODevice::ReadOnly);
        netManager.post(QNetworkRequest(QUrl("http://mywebsite.com/upload")), &compressedFile);

    What seems to happen on the Windows system where I'm developing this is that my Qt application pushes the data from the QFile, but then doesn't complete the request; it seems to sit there waiting for more data to show up from the file. The POST request isn't "closed" until I manually kill the application, at which point the whole file shows up at my server end.

    From some debugging and research, I think this is happening because the read() operation of QFile doesn't return -1 when you reach the end of the file. I think that QNetworkAccessManager tries to read from the QIODevice until it gets a -1 from read(), at which point it assumes there is no more data and closes the request. If it keeps getting a return code of zero from read(), QNetworkAccessManager assumes that there might be more data coming, and so it keeps waiting for that hypothetical data. I've confirmed with some test code that the read() operation of QFile just returns zero after you've read to the end of the file. This seems to be incompatible with the way the post() method of QNetworkAccessManager expects a QIODevice to behave. My questions are:

    Is this some sort of limitation with the way QFile works under Windows? Is there some other way I should be using either QFile or QNetworkAccessManager to push a file via post()? Is this not going to work at all, and will I have to find some other way to upload my file?

    Any suggestions or hints would be appreciated. Thanks, Don
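    A hedged workaround sketch: reading the file into a QByteArray first gives post() a payload of known size, so the manager never waits on the QIODevice for data that will not arrive (post() has an overload that takes a QByteArray directly and sets Content-Length itself):

        QFile compressedFile("temp");
        if (compressedFile.open(QIODevice::ReadOnly)) {
            QByteArray payload = compressedFile.readAll();
            netManager.post(QNetworkRequest(QUrl("http://mywebsite.com/upload")), payload);
        }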

    Read the article

  • How to expand URLs in C#?

    - by Hao Wooi Lim
    If I have a URL like http://popurls.com/go/msn.com/l4eba1e6a0ffbd9fc915948434423a7d5, how do I expand it back to the original URL programmatically? Obviously I could use an API like expandurl.com, but that limits me to 100 requests per hour.
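    A hedged do-it-yourself sketch: shortener-style links usually answer with a 301/302, so a single request with redirects disabled exposes the target in the Location header (some hosts reject HEAD, in which case GET works the same way):

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(
            "http://popurls.com/go/msn.com/l4eba1e6a0ffbd9fc915948434423a7d5");
        req.Method = "HEAD";                // headers are enough
        req.AllowAutoRedirect = false;      // keep the 3xx instead of following it
        using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
        {
            string expanded = resp.Headers["Location"];
        }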

    Read the article

  • Why is file_get_contents() faster than using fsockopen()?

    - by eds
    In PHP, sometimes I want to send an HTTP request to a remote site just to look at the response headers, so I build the request manually and use the fsockopen() function. However, this goes much slower than calling file_get_contents() with a remote URL (which loads the whole page content). Why is this? Is there a good alternative way to get just the response headers (to check whether a page returns a 404 error, for example) that works as fast as file_get_contents()?
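    A hedged alternative sketch: get_headers() fetches only the status line and headers through the same HTTP stream wrapper that file_get_contents() uses. (A common cause of a slow handwritten request is omitting the Connection: close header, which leaves fsockopen() waiting until the server times the connection out.)

        // index 0 of the returned array is the status line
        $headers = get_headers('http://www.example.com/page', 1);
        if (strpos($headers[0], '404') !== false) {
            echo 'page is missing';
        }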

    Read the article
