Search Results

Search found 50062 results on 2003 pages for 'http 1 0'.

  • Why am I getting "(304) Not Modified" error on some links when using HttpWebRequest?

    - by Greg
    Any ideas why, on some links that I access using HttpWebRequest, I get "The remote server returned an error: (304) Not Modified." in the code? The code I'm using is from Jeff's post here. The idea is a simple proxy server: I point my browser at this locally running piece of code, which takes my browser's request and proxies it on by creating a new HttpWebRequest, as you'll see in the code. It works great for most sites/links, but for some this error comes up. One key bit of the code copies the HTTP header settings from the browser's request onto its own outgoing request to the site, attribute by attribute. I'm not sure if the issue is something to do with how it mimics this aspect of the request, and what then happens when the result comes back:

        case "If-Modified-Since":
            request.IfModifiedSince = DateTime.Parse(listenerContext.Request.Headers[key]);
            break;

    I get the issue, for example, from http://en.wikipedia.org/wiki/Main_Page. Thanks
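    For what it's worth, a 304 is not really an error: it is the server's legitimate reply to the forwarded If-Modified-Since header, telling the browser its cached copy is still good. .NET surfaces any non-success status from HttpWebRequest as a WebException, so a proxy has to catch it and relay the status. A minimal sketch of that handling (variable names follow the code above; this is not the original post's fix):

        try
        {
            response = (HttpWebResponse)request.GetResponse();
        }
        catch (WebException ex)
        {
            HttpWebResponse notModified = ex.Response as HttpWebResponse;
            if (notModified != null && notModified.StatusCode == HttpStatusCode.NotModified)
            {
                // A 304 carries no body; relay the status line so the
                // browser serves the page from its own cache.
                response = notModified;
            }
            else
            {
                throw;
            }
        }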

  • Why do I get empty request from the Jakarta Commons HttpClient?

    - by polyurethan
    I have a problem with the Jakarta Commons HttpClient. Before my self-written HTTP server gets the real request, there is one request which is completely empty. That's the first problem. The second problem is that the request data sometimes ends after the third or fourth line of the HTTP request:

        POST / HTTP/1.1
        User-Agent: Jakarta Commons-HttpClient/3.1
        Host: 127.0.0.1:4232

    For debugging I am using the Axis TCPMonitor; everything looks fine there except the empty request. This is how I process the stream:

        StringBuffer requestBuffer = new StringBuffer();
        InputStreamReader is = new InputStreamReader(socket.getInputStream(), "UTF-8");
        int byteIn = -1;
        do {
            byteIn = is.read();
            if (byteIn > 0) {
                requestBuffer.append((char) byteIn);
            }
        } while (byteIn != -1 && is.ready());
        String requestData = requestBuffer.toString();

    And this is how I send the request:

        client.getParams().setSoTimeout(30000);
        method = new PostMethod(url.getPath());
        method.getParams().setContentCharset("utf-8");
        method.setRequestHeader("Content-Type", "application/xml; charset=utf-8");
        method.addRequestHeader("Connection", "close");
        method.setFollowRedirects(false);
        byte[] requestXml = getRequestXml();
        method.setRequestEntity(new InputStreamRequestEntity(new ByteArrayInputStream(requestXml)));
        client.executeMethod(method);
        int statusCode = method.getStatusCode();

    Does anyone have an idea how to solve these problems? Alex
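    One likely culprit in the reading code: InputStreamReader.ready() only reports whether data is already buffered, so the loop above stops whenever the client pauses mid-request rather than at the end of the message. A safer sketch reads headers line by line until the blank line, then reads exactly Content-Length characters (a simplification that ignores chunked encoding and assumes a single-byte charset for headers):

        // Hedged sketch: parse headers up to the blank line, then the body.
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(socket.getInputStream(), "ISO-8859-1"));
        int contentLength = 0;
        String line;
        StringBuilder request = new StringBuilder();
        while ((line = reader.readLine()) != null && line.length() > 0) {
            request.append(line).append("\r\n");
            if (line.toLowerCase().startsWith("content-length:")) {
                contentLength = Integer.parseInt(line.substring(15).trim());
            }
        }
        request.append("\r\n");
        char[] body = new char[contentLength];
        int read = 0;
        while (read < contentLength) {          // read() may return fewer chars
            int n = reader.read(body, read, contentLength - read);
            if (n == -1) break;
            read += n;
        }
        request.append(body, 0, read);
        String requestData = request.toString();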

  • TortoiseSVN Subversion Update Error

    - by Boushley
    Hey all, I was recently working on an open source project. Everything was going great for a week or two, but then something happened, I don't know what, and now I can't update anymore. I know the URL is correct, because I can check it out on my Linux server, but when I try to check it out with TortoiseSVN on my Windows box it doesn't work. The error message I'm getting is this:

        OPTIONS of 'http://opensource.adobe.com/svn/opensource/flex/sdk/branches': 200 OK (http://opensource.adobe.com)

    Does anyone know what that means? The "200 OK" part seems odd to me: it connected to the server but wasn't able to get the code? And what does "OPTIONS of..." mean? I've looked around, and some people were having proxy issues, but I'm not behind a proxy, and I made sure that TortoiseSVN is not trying to use one. If anyone could help, that would be great! Boushley

  • How to revert back from SSL to non-SSL in Tomcat 6?

    - by mohamida
    I'm using JSF 2 + JAAS + SSL + Tomcat 6.0.26. My site has two paths: /faces/protected/*, which uses SSL, and /faces/unprotected/*, which does not. I've put this in my web.xml:

        <login-config>
          <auth-method>FORM</auth-method>
          <form-login-config>
            <form-login-page>/faces/login.jsp</form-login-page>
            <form-error-page>/faces/error.jsp</form-error-page>
          </form-login-config>
        </login-config>
        <security-constraint>
          <web-resource-collection>
            <web-resource-name>Secure Resource</web-resource-name>
            <description/>
            <url-pattern>/faces/unprotected/*</url-pattern>
            <http-method>GET</http-method>
            <http-method>POST</http-method>
            <http-method>HEAD</http-method>
            <http-method>PUT</http-method>
            <http-method>OPTIONS</http-method>
            <http-method>TRACE</http-method>
            <http-method>DELETE</http-method>
          </web-resource-collection>
          <auth-constraint>
            <role-name>C</role-name>
          </auth-constraint>
        </security-constraint>
        <security-constraint>
          <web-resource-collection>
            <web-resource-name>Secure Resource</web-resource-name>
            <description/>
            <url-pattern>/faces/protected/*</url-pattern>
            <http-method>GET</http-method>
            <http-method>POST</http-method>
            <http-method>HEAD</http-method>
            <http-method>PUT</http-method>
            <http-method>OPTIONS</http-method>
            <http-method>TRACE</http-method>
            <http-method>DELETE</http-method>
          </web-resource-collection>
          <auth-constraint>
            <role-name>C</role-name>
          </auth-constraint>
          <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
          </user-data-constraint>
        </security-constraint>
        <security-role>
          <description>Role Client</description>
          <role-name>C</role-name>
        </security-role>

    And this is my server.xml:

        <Connector port="8080" protocol="HTTP/1.1" maxThreads="400"
                   maxKeepAliveRequests="1" acceptCount="100" connectionTimeout="3000"
                   redirectPort="8443" compression="on" compressionMinSize="2048"
                   noCompressionUserAgents="gozilla, traviata"
                   compressableMimeType="text/javascript,text/css,text/html,text/xml,text/plain,application/x-javascript,application/javascript,application/xhtml+xml" />
        <Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
                   SSLEnabled="true" maxThreads="400" scheme="https" secure="true"
                   clientAuth="optional" sslProtocol="TLS"
                   SSLCertificateFile="path/to/crt" SSLCertificateKeyFile="path/to/pem"/>

    When I enter the protected paths, Tomcat switches to HTTPS (port 8443), but when I then go to /faces/unprotected/something it stays on HTTPS. What I want is for unprotected paths to revert to non-SSL communication (otherwise I have to log in again when I type the plain address into my browser). What's wrong with my configuration? Is there a way to do such a thing?
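    Part of the behaviour is by design: the container only upgrades a request to CONFIDENTIAL; the servlet spec does not require it to downgrade again, so Tomcat keeps serving the unprotected paths over the HTTPS connection the browser already has. One common workaround is a small filter, mapped to /faces/unprotected/*, that bounces HTTPS requests back to the plain connector. This is an illustrative sketch only (ports hard-coded from the server.xml above, class name made up), and note the session cookie must not be marked secure or the login is lost on the redirect:

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.*;

        public class RevertToHttpFilter implements Filter {
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;
                if (request.isSecure()) {
                    // Rebuild the URL on the plain HTTP connector (port 8080).
                    String q = request.getQueryString();
                    response.sendRedirect("http://" + request.getServerName() + ":8080"
                            + request.getRequestURI() + (q == null ? "" : "?" + q));
                    return;
                }
                chain.doFilter(req, res);
            }
            public void init(FilterConfig config) { }
            public void destroy() { }
        }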

  • setting cookies

    - by aharon
    Okay, so I'm trying to set cookies using Ruby, in a Rack environment. response[name]=value adds an HTTP header to the hash of HTTP headers Rack keeps; I know that part works. The following method doesn't:

        def set_cookie(opts={})
          args = {
            :name => nil,
            :value => nil,
            :expires => Time.now+314,
            :path => '/',
            :domain => Cambium.uri # contains the IP address of the dev server this is running on
          }.merge(opts)
          raise ArgumentError, ":name and :value are mandatory" if args[:name].nil? or args[:value].nil?
          response['Set-Cookie']="#{args[:name]}=#{args[:value]}; expires=#{args[:expires].clone.gmtime.strftime("%a, %d-%b-%Y %H:%M:%S GMT")}; path=#{args[:path]}; domain=#{args[:domain]}"
        end

    Why not? And how can I solve it? Thanks.
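    Hard to say without seeing the response on the wire, but one thing worth ruling out: many browsers refuse a Domain attribute that is a bare IP address and silently drop the whole cookie, so try omitting the domain on the dev server. It is also less error-prone to let Rack build the header itself; a minimal sketch with Rack::Utils, using the same Rack response hash as above:

        require 'rack/utils'

        def set_cookie(opts = {})
          raise ArgumentError, ":name and :value are mandatory" if opts[:name].nil? || opts[:value].nil?
          Rack::Utils.set_cookie_header!(response, opts[:name],
            :value   => opts[:value],
            :expires => opts[:expires] || Time.now + 314,
            :path    => opts[:path]    || '/')
          # NB: no :domain here -- browsers commonly reject domain=<IP address>.
        end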

  • http_access.log on WebSphere 6.1.0.29

    - by DavidG
    I am running WebSphere 6.1.0.29 and I need to track the requests being made to an Enterprise Application. Previously I did this by routing the requests through a proxy server, but I need to repeat the exercise and I figure there must be a simpler way. Does anyone know how to enable HTTP access logging? I have been through the console and thought I had enabled http_access.log and http_error.log via:

        Application servers > server1 > HTTP error and NCSA access logging

    (where 'server1' is the application server). I've enabled the service at startup and ticked the boxes to enable access logging and error logging. However, nothing has happened. I have restarted the server, restarted the Enterprise apps, and even did a "find . -name" for the log files, but they don't seem to be anywhere on the system. I saw on a JavaRanch thread that someone suggested writing a custom filter for requests in an application, but this seems like wild overkill; besides, I am using the logs to test a pre-built binary, so I don't want to mess with the code. Anyone have any ideas/suggestions? Help! :-)

  • Why NOT use POST method here?

    - by Camran
    I have a classifieds website. On the main (index) page I have several form fields which the user may or may not fill in to specify a detailed search of classifieds, for example:

        Category:   Cars
        Price from: 3000
        Price to:   10000
        Color:      Red
        Area:       California

    The form's action is set to a PHP page:

        <form action='query_sql.php' method='post'>

    In query_sql.php I fetch the variables like this:

        $category = $_POST['category'];
        // etc...

    Then I query MySQL:

        $query = "SELECT ... WHERE category='$category' ...";
        $results = mysql_query($query);

    Then I simply display the results of the query to the user in a table which is filled in dynamically depending on the result set. However, according to an answer by Col. Shrapnel to my previous question, I shouldn't use POST here: http://stackoverflow.com/questions/3004754/how-to-hide-url-from-users-when-submitting-this-form. The reason I use POST is simply to hide the "one-page-word-document"-long URL in the browser's address bar. I am very confused: is it okay to use POST or not? It works fine with both GET and POST now, and it is already on a production server. By the way, in the linked question I wasn't referring to making the URL invisible (or hiding it); I just wanted it to look better (which I have accomplished with mod_rewrite). UPDATE: If I use GET, then how should I make the URL better looking (beautiful)? Check out this previous question: http://stackoverflow.com/questions/3000524/how-to-make-this-very-long-url-appear-short
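    Since a search is an idempotent read, GET is the semantically right verb here (it keeps results bookmarkable and cacheable), and mod_rewrite can keep the URL short. A hypothetical .htaccess sketch along those lines; the /search/... URL shape and parameter names are made up for illustration, not taken from the site:

        RewriteEngine On
        # /search/cars/3000-10000 -> query_sql.php?category=cars&price_from=3000&price_to=10000
        RewriteRule ^search/([^/]+)/([0-9]+)-([0-9]+)/?$ query_sql.php?category=$1&price_from=$2&price_to=$3 [L,QSA]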

  • Android - How can I upload a txt file to a website?

    - by Donal Rafferty
    I want to upload a txt file to a website. I'll admit I haven't looked into it in any great detail, but I have looked at a few examples and would like more experienced opinions on whether I'm going in the right direction. Here is what I have so far:

        DefaultHttpClient httpClient = new DefaultHttpClient();
        HttpContext localContext = new BasicHttpContext();
        private String ret;
        HttpResponse response = null;
        HttpPost httpPost = null;

        public String postPage(String url, String data, boolean returnAddr) {
            ret = null;
            httpClient.getParams().setParameter(ClientPNames.COOKIE_POLICY, CookiePolicy.RFC_2109);
            httpPost = new HttpPost(url);
            response = null;
            StringEntity tmp = null;
            try {
                tmp = new StringEntity(data, "UTF-8");
            } catch (UnsupportedEncodingException e) {
                System.out.println("HTTPHelp : UnsupportedEncodingException : " + e);
            }
            httpPost.setEntity(tmp);
            try {
                response = httpClient.execute(httpPost, localContext);
            } catch (ClientProtocolException e) {
                System.out.println("HTTPHelp : ClientProtocolException : " + e);
            } catch (IOException e) {
                System.out.println("HTTPHelp : IOException : " + e);
            }
            ret = response.getStatusLine().toString();
            return ret;
        }

    And I call it as follows:

        postPage("http://www.testwebsite.com", "data/data/com.testxmlpost.xml/files/logging.txt", true);

    I want to be able to upload a file from the device to a website, but when trying it this way I get the following response back:

        HTTP/1.1 405 Method Not Allowed

    Am I trying the correct way, or should I be doing it another way?
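    Two things stand out. The 405 means the URL being posted to does not accept POST at all (posting to a specific upload script usually cures that), and the code above sends the file's path as the request body rather than its contents. A hedged sketch of both fixes, assuming a hypothetical upload.php endpoint that accepts a raw text/plain body:

        import java.io.File;
        import org.apache.http.HttpResponse;
        import org.apache.http.client.methods.HttpPost;
        import org.apache.http.entity.FileEntity;
        import org.apache.http.impl.client.DefaultHttpClient;

        File logFile = new File("/data/data/com.testxmlpost.xml/files/logging.txt");
        HttpPost post = new HttpPost("http://www.testwebsite.com/upload.php");
        post.setEntity(new FileEntity(logFile, "text/plain"));  // file *contents*, not its path
        HttpResponse resp = new DefaultHttpClient().execute(post);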

  • Android HttpURLConnection [closed]

    - by user620451
    Hi, I'm a new Android developer. I am trying to log in to my Asterisk server by passing my username and password, which works, but when I request another URL from the server after logging in I get "access denied". I believe the problem is that the login connection has been closed, so I want a way to make two requests: the first logs in to the server, and the second does something else while still logged in. Please help, and thanks anyway. This is part of my code; I want to request these two URLs:

        url1 = "http://192.168.1.7:8088/rawman?action=login&username=admin&secret=admin"
        url2 = "http://192.168.1.5:8088/rawman?action=updateconfig&reload=yes&srcfilename=users.conf&dstfilename=users.conf&Action-000000=newcat&Cat-000000=6001&Var-000000=&Value-000000="

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            tv1 = (TextView) this.findViewById(R.id.display);
            ed1 = (EditText) this.findViewById(R.id.editText);
            bt1 = (Button) this.findViewById(R.id.submit);
            bt1.setOnClickListener(new OnClickListener() {
                public void onClick(View view) {
                    try {
                        ServerRequest(url1);
                        ServerRequest(url2);
                    } catch (Exception e) {
                        Log.v("Exception", "Exception:" + e.getMessage());
                    }
                }
            });
        }

        public String ServerRequest(String serverString) throws MalformedURLException, IOException {
            String newFeed = serverString;
            StringBuilder response = new StringBuilder();
            Log.v("server", "server url:" + newFeed);
            URL url = new URL(newFeed);
            HttpURLConnection httpconn = (HttpURLConnection) url.openConnection();
            if (httpconn.getResponseCode() == HttpURLConnection.HTTP_OK) {
                BufferedReader input = new BufferedReader(
                        new InputStreamReader(httpconn.getInputStream()), 8192);
                String strLine = null;
                while ((strLine = input.readLine()) != null) {
                    response.append(strLine);
                }
                input.close();
            }
            tv1.setText(response);
            return response.toString();
        }
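    HttpURLConnection remembers nothing between calls, so the session cookie from the login response has to be carried over to the second request by hand (or via a CookieManager where available). A minimal sketch of the manual approach, reusing the URLs above; the cookie name shown in the comment is an assumption:

        // After the login request, grab the session cookie...
        HttpURLConnection login = (HttpURLConnection) new URL(url1).openConnection();
        String cookie = login.getHeaderField("Set-Cookie");  // e.g. "mansession_id=..." (name assumed)
        login.getInputStream().close();

        // ...and present it on the follow-up request.
        HttpURLConnection update = (HttpURLConnection) new URL(url2).openConnection();
        if (cookie != null) {
            update.setRequestProperty("Cookie", cookie.split(";", 2)[0]);
        }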

  • How to implement a log window in a web browser?

    - by Jeremy Friesner
    Hi all, I'm interested in adding an HTML/web-browser-based "log window" to my net-enabled device. Specifically, my device has a customized web server and an event log, and I'd like to be able to leave a web browser window open to e.g. http://my.devices.ip.address/system_log and have events show up as text in the browser window as they happen. People could then use this as a quick way to monitor what the system is doing, without needing to run any special software. My question is: what is the best way to implement this? I've tried the obvious approach of having my device's embedded web server hold the HTTP/TCP connection open indefinitely and write the necessary text to the TCP socket when an event occurs, but the problem is that most web browsers (e.g. Safari) don't display the page until the server has closed the TCP connection, so the log data never appears; it just acts as if the page is taking forever to load. Is there some trick to make this work? I could implement it as a Java applet, but I'd much prefer something more lightweight and simple, using only HTML or possibly HTML+JavaScript. I'd also like to avoid having the web browser poll the server, since that would either introduce too much latency (if the reload delay was large) or put load on the system (if the delay was small).
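    One HTML+JavaScript-only technique from this era is the "forever frame": a hidden iframe stays connected while the server streams a small <script> chunk per event, and browsers execute each chunk as it arrives even though the connection never closes. A rough sketch only; the /system_log_stream endpoint name is hypothetical, and browser buffering behaviour varies:

        <!-- Log page served by the device -->
        <pre id="log"></pre>
        <iframe src="/system_log_stream" style="display:none"></iframe>
        <script type="text/javascript">
          function appendLog(line) {   // invoked by each chunk the server streams
            document.getElementById('log')
                    .appendChild(document.createTextNode(line + '\n'));
          }
        </script>

        <!-- For each event, the server writes a chunk like this into the
             still-open /system_log_stream response:
             <script>parent.appendLog("ev 42: temperature alarm");</script> -->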

  • Consume a JSON webservice and process data in Android

    - by user1783391
    I am trying to consume the web service below:

        http://62.253.195.179/disaster/webservices/login.php?message=[{"email":"[email protected]","password":"welcome"}]

    This returns a JSON array:

        [{"companyuserId":"2","name":"ben stein","superiorname":"Leon","departmentId":"26","departmentname":"Development","companyId":"23","UDID":"12345","isActive":"1","devicetoken":"12345","email":"[email protected]","phone":"5456465465654","userrole":"1","chngpwdStatus":"1"}]

    My code is below:

        try {
            String weblink = URLEncoder.encode("http://62.253.195.179/disaster/webservices/login.php?message=[{\"email\":\"[email protected]\",\"password\":\"welcome\"}]");
            HttpParams httpParameters = new BasicHttpParams();
            int timeoutConnection = 7500;
            HttpConnectionParams.setConnectionTimeout(httpParameters, timeoutConnection);
            int timeoutSocket = 7500;
            HttpConnectionParams.setSoTimeout(httpParameters, timeoutSocket);
            HttpClient client = new DefaultHttpClient(httpParameters);
            HttpGet request = new HttpGet();
            URI link = new URI(weblink);
            request.setURI(link);
            HttpResponse response = client.execute(request);
            BufferedReader rd = new BufferedReader(new InputStreamReader(
                    response.getEntity().getContent()));
            result = rd.readLine();
            JSONObject myData = new JSONObject(result);
            JSONArray jArray = myData.getJSONArray("");
            JSONObject steps = jArray.getJSONObject(0);
            String name = steps.getString("name");
        } catch (JSONException e) {
            e.printStackTrace();
        }

    But it's not working, and I am not 100% sure this is the best way to do it:

        11-10 10:49:55.489: E/AndroidRuntime(392): java.lang.IllegalStateException: Target host must not be null, or set in parameters.
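    The exception is a direct consequence of URLEncoder.encode being applied to the whole URL: it encodes the "://" too, so HttpGet can no longer find a host. Only the message value needs encoding, and since the service returns a bare JSON array, the response parses as a JSONArray rather than a JSONObject. A minimal sketch of those two fixes, keeping the rest of the code above as-is:

        // Encode just the query value, not the entire URL.
        String message = "[{\"email\":\"[email protected]\",\"password\":\"welcome\"}]";
        String weblink = "http://62.253.195.179/disaster/webservices/login.php?message="
                + URLEncoder.encode(message, "UTF-8");
        HttpGet request = new HttpGet(weblink);

        // ... execute as before, then parse the top-level array directly:
        JSONArray jArray = new JSONArray(result);
        String name = jArray.getJSONObject(0).getString("name");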

  • (Android) Seems like my JSON query is getting double encoded

    - by A Gardner
    Hi, I am getting some weird errors with my Android app. It appears that this code is double-encoding the JSON string. What should be sent is

        ?{"email":"[email protected]","password":"asdf"}

    or

        ?%7B%22email%22:%22.....

    What the server is actually seeing is

        %257B%2522email%2522:%2522....

    which means that after one decode the server sees %7B%22email%22:%22....., and this confuses the server. Any ideas why this is happening? Thanks for your help. Code:

        DefaultHttpClient c = new DefaultHttpClient();
        if (cookies != null)
            c.setCookieStore(cookies);
        if (loginNotLogout) {
            jso.put("email", userData.email);
            jso.put("password", userData.password);
        }
        URI u = null;
        if (loginNotLogout)
            u = new URI("HTTP", "www.website.com", "/UserService", jso.toString(), "");
        else
            u = new URI("HTTP", "www.website.com", "/UserService", jso.toString(), "");
        HttpGet httpget = new HttpGet(u);
        HttpResponse response = c.execute(httpget);
        ret.jsonString = EntityUtils.toString(response.getEntity());

  • Login to website and use cookie to get source for another page

    - by Stu
    I am trying to log in to the TV Rage website and get the source code of the "My Shows" page. I am successfully logging in (I have checked the response from my POST request), but when I then perform a GET request on the My Shows page, I am redirected to the login page. This is the code I am using to log in:

        private string LoginToTvRage()
        {
            string loginUrl = "http://www.tvrage.com/login.php";
            string formParams = string.Format("login_name={0}&login_pass={1}", "xxx", "xxxx");
            string cookieHeader;
            WebRequest req = WebRequest.Create(loginUrl);
            req.ContentType = "application/x-www-form-urlencoded";
            req.Method = "POST";
            byte[] bytes = Encoding.ASCII.GetBytes(formParams);
            req.ContentLength = bytes.Length;
            using (Stream os = req.GetRequestStream())
            {
                os.Write(bytes, 0, bytes.Length);
            }
            WebResponse resp = req.GetResponse();
            cookieHeader = resp.Headers["Set-cookie"];
            String responseStream;
            using (StreamReader sr = new StreamReader(resp.GetResponseStream()))
            {
                responseStream = sr.ReadToEnd();
            }
            return cookieHeader;
        }

    I then pass the cookieHeader into this method, which should get the source of the My Shows page:

        private string GetSourceForMyShowsPage(string cookieHeader)
        {
            string pageSource;
            string getUrl = "http://www.tvrage.com/mytvrage.php?page=myshows";
            WebRequest getRequest = WebRequest.Create(getUrl);
            getRequest.Headers.Add("Cookie", cookieHeader);
            WebResponse getResponse = getRequest.GetResponse();
            using (StreamReader sr = new StreamReader(getResponse.GetResponseStream()))
            {
                pageSource = sr.ReadToEnd();
            }
            return pageSource;
        }

    I have been using this previous question as a guide, but I'm at a loss as to why my code isn't working.
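    Replaying a raw Set-Cookie header as a Cookie header is fragile: the value still contains attributes like path and expires, and any additional session cookie gets lost. HttpWebRequest's CookieContainer handles all of that; a minimal sketch sharing one container across both requests (same URLs and form fields as above):

        CookieContainer cookies = new CookieContainer();

        HttpWebRequest login = (HttpWebRequest)WebRequest.Create("http://www.tvrage.com/login.php");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;                  // container records Set-Cookie
        byte[] bytes = Encoding.ASCII.GetBytes("login_name=xxx&login_pass=xxxx");
        login.ContentLength = bytes.Length;
        using (Stream os = login.GetRequestStream())
            os.Write(bytes, 0, bytes.Length);
        login.GetResponse().Close();

        HttpWebRequest page = (HttpWebRequest)WebRequest.Create("http://www.tvrage.com/mytvrage.php?page=myshows");
        page.CookieContainer = cookies;                   // same container carries the session
        using (StreamReader sr = new StreamReader(page.GetResponse().GetResponseStream()))
            Console.WriteLine(sr.ReadToEnd());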

  • Preserve name of file using cURL to transfer files

    - by Toby
    I'm transferring files from an existing HTTP request using cURL, like so:

        $postargs = array(
            'nonfilefield' => 'nonfilevalue',
            'fileentry'    => '@'.$_FILES['thefile']['tmp_name'][0]
        );
        $ch = curl_init('http://localhost/curl/rec.php');
        curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_POST, TRUE);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $postargs);
        curl_exec($ch);
        curl_close($ch);

    The only way I can get this to work is by using tmp_name; without it, it won't send. However, I then lose the name value for when I want to name the file later. Is there some way to do this while preserving the $_FILES array as it would normally look without cURL? I'm also using an array of file fields in my script, so at the moment I have to convert my multidimensional array into a single dimension for this to work.
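    The temp name is all cURL knows about the file, so the original client-side name has to travel along explicitly. One hedged way is to pass it as an ordinary POST field and rename on the receiving side; the field name origname and the /uploads/ path are illustrative, not from the original code:

        // Sending side: include the original name as a plain field.
        $postargs = array(
            'nonfilefield' => 'nonfilevalue',
            'origname'     => $_FILES['thefile']['name'][0],
            'fileentry'    => '@'.$_FILES['thefile']['tmp_name'][0]
        );

        // Receiving side (rec.php): store under the original name.
        $target = '/uploads/' . basename($_POST['origname']);
        move_uploaded_file($_FILES['fileentry']['tmp_name'], $target);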

  • How to ensure nginx serves a request from an external IP?

    - by Matt
    I have a strange situation where my nginx setup stopped handling external requests, and I'm pretty stuck. If I hit the domain without a subdomain, I properly get redirected; however, if I request the full URL, that fails and doesn't log anything, anywhere. I am able to curl localhost on the server itself, but when I attempt to curl from an external machine, it fails with:

        curl: (7) couldn't connect to host

    I've also noticed that bots can get through; I've seen Google hit the log every now and then. My nginx.conf file:

        upstream mongrels {
            server 127.0.0.1:5000;
        }

        server {
            listen 80;
            server_name culini.com;
            rewrite ^/(.*) http://www.culini.com/$1 permanent;
        }

        # the server directive is nginx's virtual host directive.
        server {
            # port to listen on. Can also be set to an IP:PORT
            listen 80;
            # Set the max size for file uploads to 50Mb
            client_max_body_size 50M;
            # sets the domain[s] that this vhost server requests for
            server_name www.culini.com;
            # doc root
            root /var/www/culini/current/public;
            log_format app '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $body_bytes_sent "$http_referer" '
                           '"$http_user_agent" "$http_x_forwarded_for" [$upstream_addr $upstream_response_time $upstream_status]';
            # vhost specific access log
            access_log /var/www/culini/current/log/nginx.access.log app;
            error_log /var/www/culini/current/log/nginx.error.log debug;
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect false;
                proxy_max_temp_file_size 0;
                proxy_intercept_errors on;
                proxy_ignore_client_abort on;
                if (-f $request_filename) {
                    break;
                }
                if (!-f $request_filename) {
                    proxy_pass http://mongrels;
                    break;
                }
            }
        }

    Please, please, any help would be greatly appreciated.
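    Since curl works from localhost but external connections die with "couldn't connect" before anything reaches nginx's logs, the usual suspects sit in front of nginx rather than inside this config: the listen address or a host firewall. A few quick checks worth running on the server (standard tools, assumed installed):

        # Is nginx bound to all interfaces (0.0.0.0:80) or only 127.0.0.1:80?
        netstat -tlnp | grep ':80 '

        # Is a firewall rule dropping external traffic to port 80?
        iptables -L -n

        # Does the vhost answer locally when asked by name?
        curl -H 'Host: www.culini.com' http://127.0.0.1/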

  • How do I deal with different requests that map to the same response?

    - by daxim
    I'm designing a Web service. The request is idempotent, so I chose the GET method. The response is relatively expensive to calculate and not small, so I want to get caching (on the protocol level) right. (Don't worry about memoisation on my part; I have that covered already. My question here is about playing well with the Web as a whole.) There's one mandatory parameter and a number of optional parameters with default values if missing. For example, the following two requests map to the same representation of the response. (If this is a dumb way to design the interface, propose something better.)

        GET /service?mandatory_parameter=some_data HTTP/1.1
        GET /service?mandatory_parameter=some_data;optional_parameter=default1;another_optional_parameter=default2;yet_another_optional_parameter=default3 HTTP/1.1

    However, I imagine clients do not know this and would treat the two as separate, therefore wasting cache storage. What should I do to avoid violating the golden rule of caching? Make up a canonical form, document it (e.g. all parameters are required after all and need to be sorted in a specific order), and return a client error unless the required form is met? Instead of an error, redirect permanently to the canonical form of the request? Or is it enough not to mind what the request looks like, and just respond with the same ETag for the same responses?
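    To make the second option concrete, here is roughly what a permanent redirect to a canonical form could look like on the wire; the host name is hypothetical and the canonical rule assumed here is "all parameters spelled out, sorted alphabetically":

        GET /service?mandatory_parameter=some_data HTTP/1.1
        Host: api.example.org

        HTTP/1.1 301 Moved Permanently
        Location: /service?another_optional_parameter=default2;mandatory_parameter=some_data;optional_parameter=default1;yet_another_optional_parameter=default3

    Caches can then store the small redirect cheaply and keep a single copy of the expensive representation under the canonical URL.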

  • How can I retrieve multiple nutrients' information from the ESHA Research API? (api.esha.com)

    - by user1833044
    I want to call the ESHA Research nutrient REST API, but I cannot figure out how to request multiple nutrients in a single call. So far I can retrieve the calories, or the protein, or one other type of nutrient information at a time, and I was hoping someone has experience retrieving all the nutrient information with one call. Is this possible? This is how I retrieve the Twix nutrient data (the return is in JSON format; note the API key is not really xxxx, but a key generated by ESHA once you sign up as a developer):

        http://api.esha.com/analysis?apikey=xxxx&fo=urn:uuid:81d268ac-f1dc-4991-98c1-1b4d3a5006da

    This returns calories. If I want to call fat instead, it would be the following:

        http://api.esha.com/analysis?apikey=xxxx&fo=urn:uuid:81d268ac-f1dc-4991-98c1-1b4d3a5006da&n=urn:uuid:589294dc-3dcc-4b64-be06-c07e7f65c4bd

    How can I make one call and get back all the nutrients (fat, calories, carbs, vitamins, etc.) for a particular food ID? I have researched this for a while and cannot seem to find the answer. Thanks in advance for your help.

  • HttpWebRequest Cookie weirdness

    - by Lachman
    I'm sure I must be doing something wrong, but I can't for the life of me figure out what is going on. I have a problem where it seems that the HttpWebRequest class in the framework is not correctly parsing the cookies from a web response. I'm using Fiddler to see what is going on, and after making a request the headers of the response look like this:

        HTTP/1.1 200 Ok
        Connection: close
        Date: Wed, 14 Jan 2009 18:20:31 GMT
        Server: Microsoft-IIS/6.0
        P3P: policyref="/w3c/p3p.xml", CP="CAO DSP IND COR ADM CONo CUR CUSi DEV PSA PSD DELi OUR COM NAV PHY ONL PUR UNI"
        Set-Cookie: user=v.5,0,EX01E508801E$97$2E401000t$1BV6$A1$EC$104$A1$EC$104$A1$EC$104$21O001000$1E31!90$7CP$AE$3F$F3$D8$19o$BC$1Cd$23; Domain=.thedomain.com; path=/
        Set-Cookie: minfo=v.4,EX019ECD28D6k$A3$CA$0C$CE$A2$D6$AD$D4!2$8A$EF$E8n$91$96$E1$D7$C8$0F$98$AA$ED$DC$40V$AB$9C$C1$9CF$C9$C1zIF$3A$93$C6$A7$DF$A1$7E$A7$A1$A8$BD$A6$94c$D5$E8$2F$F4$AF$A2$DF$80$89$BA$BBd$F6$2C$B6$A8; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: accttype=v.2,3,1,EX017E651B09k$A3$CA$0C$DB$A2$CB$AD$D9$8A$8C$EF$E8t$91$90$E1$DC$C89$98$AA$E0$DC$40O$A8$A4$C1$9C; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: tpid=v.1,20001; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: MC1=GUID=541977e04a341a2a4f4cdaaf49615487; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: linfo=v.4,EQC|0|0|255|1|0||||||||0|0|0||0|0|0|-1|-1; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Set-Cookie: group=v.1,0; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Content-Type: text/html

    But when I look at response.Cookies, I see far more cookies than I am expecting, with the values of individual cookies split up across several entries. Manually getting the headers results in more weirdness. This code:

        foreach (string cookie in response.Headers.GetValues("Set-Cookie"))
        {
            Console.WriteLine("Cookie found: " + cookie);
        }

    produces the output:

        Cookie found: user=v.5
        Cookie found: 0
        Cookie found: EX01E508801E$97$2E401000t$1BV6$A1$EC$104$A1$EC$104$A1$EC$104$21O001000$1E31!90$7CP$AE$3F$F3$D8$19o$BC$1Cd$23; Domain=.thedomain.com; path=/
        Cookie found: minfo=v.4
        Cookie found: EX019ECD28D6k$A3$CA$0C$CE$A2$D6$AD$D4!2$8A$EF$E8n$91$96$E1$D7$C8$0F$98$AA$ED$DC$40V$AB$9C$C1$9CF$C9$C1zIF$3A$93$C6$A7$DF$A1$7E$A7$A1$A8$BD$A6$94c$D5$E8$2F$F4$AF$A2$DF$80$89$BA$BBd$F6$2C$B6$A8; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: accttype=v.2
        Cookie found: 3
        Cookie found: 1
        Cookie found: EX017E651B09k$A3$CA$0C$DB$A2$CB$AD$D9$8A$8C$EF$E8t$91$90$E1$DC$C89$98$AA$E0$DC$40O$A8$A4$C1$9C; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: tpid=v.1
        Cookie found: 20001; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: MC1=GUID=541977e04a341a2a4f4cdaaf49615487; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: linfo=v.4
        Cookie found: EQC|0|0|255|1|0||||||||0|0|0||0|0|0|-1|-1; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/
        Cookie found: group=v.1
        Cookie found: 0; expires=Sunday
        Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/

    As you can see, the first cookie in the list, raw response Set-Cookie: user=v.5,0,EX01E508801..., is getting split into:

        Cookie found: user=v.5
        Cookie found: 0
        Cookie found: EX01E508801E$..........

    So, what's going on here? Am I wrong? Is the HttpWebRequest class incorrectly parsing the HTTP headers? Is the web server it's talking to producing invalid HTTP headers?
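    What appears to be happening is that WebHeaderCollection.GetValues splits multi-valued headers on commas, and these cookie values (legally dubious under the cookie RFCs) contain bare commas. A heavily hedged workaround sketch: re-join the fragments, then split only at commas that begin a new name= pair. This happens to hold for the cookies above, since the "31-Dec-2014..." tail of each expires= date contains no '=', but it is not a general-purpose cookie parser:

        using System.Text.RegularExpressions;

        string raw = string.Join(",", response.Headers.GetValues("Set-Cookie"));
        // Split at a comma only when it is immediately followed by a "name=" token.
        foreach (string cookie in Regex.Split(raw, @",(?=\s*[^;,=]+=)"))
        {
            Console.WriteLine("Cookie found: " + cookie.Trim());
        }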

  • Can't get CouchDB external HTTP handlers to work

    - by fuzzy lollipop
    following the instructions here http://wiki.apache.org/couchdb/ExternalProcesses this is what I get { * error: "{{badarg,[{erlang,port_command, [#Port<0.2056>, [123, [34,<<"info">>,34], 58, [123, [34,"db_name",34], 58, [34,<<"transfer_central">>,34], 44, [34,"doc_count",34], 58,"39441",44, [34,"doc_del_count",34], 58,"0",44, [34,"update_seq",34], 58,"56508",44, [34,"purge_seq",34], 58,"0",44, [34,"compact_running",34], 58,<<"false">>,44, [34,"disk_size",34], 58,"43593828",44, [34,"instance_start_time",34], 58, [34,<<"1272560477320483">>,34], 44, [34,"disk_format_version",34], 58,"5",125], 44, [34,<<"id">>,34], 58,<<"null">>,44, [34,<<"method">>,34], 58, [34,"GET",34], 44, [34,<<"path">>,34], 58, [91, [34,<<"transfer_central">>,34], 44, [34,<<"_test">>,34], 93], 44, [34,<<"query">>,34], 58,<<"{}">>,44, [34,<<"headers">>,34], 58, [123, [34,<<"Accept">>,34], 58, [34, <<"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/json">>, 34], 44, [34,<<"Accept-Charset">>,34], 58, [34,<<"ISO-8859-1,utf-8;q=0.7,*;q=0.7">>,34], 44, [34,<<"Accept-Encoding">>,34], 58, [34,<<"gzip,deflate">>,34], 44, [34,<<"Accept-Language">>,34], 58, [34,<<"en-us,en;q=0.5">>,34], 44, [34,<<"Connection">>,34], 58, [34,<<"keep-alive">>,34], 44, [34,<<"Host">>,34], 58, [34,<<"127.0.0.1:5984">>,34], 44, [34,<<"Keep-Alive">>,34], 58, [34,<<"115">>,34], 44, [34,<<"User-Agent">>,34], 58, [34, <<"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3">>, 34], 125], 44, [34,<<"body">>,34], 58, [34,"undefined",34], 44, [34,<<"peer">>,34], 58, [34,<<"127.0.0.1">>,34], 44, [34,<<"form">>,34], 58,<<"{}">>,44, [34,<<"cookie">>,34], 58,<<"{}">>,44, [34,<<"userCtx">>,34], 58, [123, [34,<<"db">>,34], 58, [34,<<"transfer_central">>,34], 44, [34,<<"name">>,34], 58,<<"null">>,44, [34,<<"roles">>,34], 58,<<"[]">>,125], 125,10]]}, {couch_os_process,writeline,2}, {couch_os_process,writejson,2}, {couch_os_process,handle_call,3}, {gen_server,handle_msg,5}, {proc_lib,init_p_do_apply,3}]}, {gen_server,call, [<0.110.0>, {prompt,{[{<<"info">>, {[{db_name,<<"transfer_central">>}, {doc_count,39441}, {doc_del_count,0}, {update_seq,56508}, {purge_seq,0}, {compact_running,false}, {disk_size,43593828}, {instance_start_time,<<"1272560477320483">>}, {disk_format_version,5}]}}, {<<"id">>,null}, {<<"method">>,'GET'}, {<<"path">>,[<<"transfer_central">>,<<"_test">>]}, {<<"query">>,{[]}}, {<<"headers">>, {[{<<"Accept">>, <<"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/json">>}, {<<"Accept-Charset">>, <<"ISO-8859-1,utf-8;q=0.7,*;q=0.7">>}, {<<"Accept-Encoding">>,<<"gzip,deflate">>}, {<<"Accept-Language">>,<<"en-us,en;q=0.5">>}, {<<"Connection">>,<<"keep-alive">>}, {<<"Host">>,<<"127.0.0.1:5984">>}, {<<"Keep-Alive">>,<<"115">>}, {<<"User-Agent">>, <<"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3">>}]}}, {<<"body">>,undefined}, {<<"peer">>,<<"127.0.0.1">>}, {<<"form">>,{[]}}, {<<"cookie">>,{[]}}, {<<"userCtx">>, {[{<<"db">>,<<"transfer_central">>}, {<<"name">>,null}, {<<"roles">>,[]}]}}]}}, infinity]}}" * reason: "{gen_server,call, [<0.109.0>, {execute,{[{<<"info">>, {[{db_name,<<"transfer_central">>}, {doc_count,39441}, {doc_del_count,0}, {update_seq,56508}, {purge_seq,0}, {compact_running,false}, {disk_size,43593828}, {instance_start_time,<<"1272560477320483">>}, {disk_format_version,5}]}}, {<<"id">>,null}, {<<"method">>,'GET'}, {<<"path">>,[<<"transfer_central">>,<<"_test">>]}, {<<"query">>,{[]}}, {<<"headers">>, 
{[{<<"Accept">>, <<"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/json">>}, {<<"Accept-Charset">>, <<"ISO-8859-1,utf-8;q=0.7,*;q=0.7">>}, {<<"Accept-Encoding">>,<<"gzip,deflate">>}, {<<"Accept-Language">>,<<"en-us,en;q=0.5">>}, {<<"Connection">>,<<"keep-alive">>}, {<<"Host">>,<<"127.0.0.1:5984">>}, {<<"Keep-Alive">>,<<"115">>}, {<<"User-Agent">>, <<"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3">>}]}}, {<<"body">>,undefined}, {<<"peer">>,<<"127.0.0.1">>}, {<<"form">>,{[]}}, {<<"cookie">>,{[]}}, {<<"userCtx">>, {[{<<"db">>,<<"transfer_central">>}, {<<"name">>,null}, {<<"roles">>,[]}]}}]}}, infinity]}" }

  • apt-get update getting 404 on debian lenny

    - by JoelFan
    Here is my /etc/apt/sources.list:

        ###### Debian Main Repos
        deb http://ftp.us.debian.org/debian/ lenny main contrib non-free

        ###### Debian Update Repos
        deb http://security.debian.org/ lenny/updates main contrib non-free
        deb http://ftp.us.debian.org/debian/ lenny-proposed-updates main contrib non-free

    When I do:

        # apt-get update

    I'm getting some good lines, then:

        Err http://ftp.us.debian.org lenny/contrib Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny/non-free Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny-proposed-updates/main Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny-proposed-updates/contrib Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny-proposed-updates/non-free Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny/main Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/main/binary-i386/Packages 404 Not Found [IP: 149.20.20.6 80]
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/contrib/binary-i386/Packages 404 Not Found [IP: 149.20.20.6 80]
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/non-free/binary-i386/Packages 404 Not Found [IP: 149.20.20.6 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/contrib/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/non-free/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny-proposed-updates/main/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny-proposed-updates/contrib/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny-proposed-updates/non-free/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/main/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        E: Some index files failed to download, they have been ignored, or old ones used instead.

    Now what?
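    One likely explanation, offered as an assumption rather than a confirmed diagnosis: lenny's package indexes are removed from the regular mirrors once the release is archived, at which point apt-get update 404s exactly like this. In that case the sources need to point at the archive instead, along these lines:

        ###### Debian Main Repos (archived)
        deb http://archive.debian.org/debian/ lenny main contrib non-free

        ###### Debian Update Repos (archived)
        deb http://archive.debian.org/debian-security/ lenny/updates main contrib non-free
        deb http://archive.debian.org/debian/ lenny-proposed-updates main contrib non-free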

  • What is the best practice with KML files when adding a geositemap?

    - by Floran
    I'm not sure how to deal with KML files, which now seem particularly important in reference to the Google Venice update. My site is basically a guide of many company listings (a sort of Yellow Pages), and I want each company listing to have a geolocation associated with it. Which of the options I present below is the way to go? (If this is a dumb way to go about the interface, propose something better.)

    OPTION 1: all locations in a single KML file, with a reference to that KML file from a geositemap.xml.

    MYGEOSITEMAP.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:geo="http://www.google.com/geo/schemas/sitemap/1.0">
          <url>
            <loc>http://www.mysite.com/locations.kml</loc>
            <geo:geo><geo:format>kml</geo:format></geo:geo>
          </url>
        </urlset>

    ALLLOCATIONS.kml:

        <kml xmlns="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
          <Document>
            <name>MyCompany</name>
            <atom:author><atom:name>MyCompany</atom:name></atom:author>
            <atom:link href="http://www.mysite.com/locations/3454/MyCompany" rel="related" />
            <Placemark>
              <name>MyCompany, Kalverstraat 26 Amsterdam 1000AG</name>
              <description><![CDATA[<address><a href="http://www.mysite.com/locations/3454/MyCompany">MyCompany</a><br />Address: Kalverstraat 26, Amsterdam 1000AG <br />Phone: 0646598787</address><p>hello there, im MyCompany</p>]]></description>
              <Point><coordinates>5.420686499999965,51.6298808,0</coordinates></Point>
            </Placemark>
          </Document>
        </kml>
        <kml xmlns="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
          <Document>
            <name>MyCompany</name>
            <atom:author><atom:name>MyCompany</atom:name></atom:author>
            <atom:link href="http://www.mysite.com/locations/22/companyX" rel="related" />
            <Placemark>
              <name>MyCompany, Rosestreet 45 Amsterdam 1001XF</name>
              <description><![CDATA[<address><a href="http://www.mysite.com/locations/22/companyX">companyX</a><br />Address: Rosestreet 45, Amsterdam 1001XF <br />Phone: 0642195493</address><p>some text about companyX</p>]]></description>
              <Point><coordinates>5.520686499889632,51.6197705,0</coordinates></Point>
            </Placemark>
          </Document>
        </kml>

    OPTION 2: a separate KML file for each location, with a reference to each KML file from a geositemap.xml (KML files placed in a \kmlfiles folder).

    MYGEOSITEMAP.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:geo="http://www.google.com/geo/schemas/sitemap/1.0">
          <url>
            <loc>http://www.mysite.com/kmlfiles/3454_MyCompany.kml</loc>
            <geo:geo><geo:format>kml</geo:format></geo:geo>
          </url>
          <url>
            <loc>http://www.mysite.com/kmlfiles/22_companyX.kml</loc>
            <geo:geo><geo:format>kml</geo:format></geo:geo>
          </url>
        </urlset>

    3454_MyCompany.kml:

        <kml xmlns="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
          <Document>
            <name>MyCompany</name>
            <atom:author><atom:name>MyCompany</atom:name></atom:author>
            <atom:link href="http://www.mysite.com/locations/3454/MyCompany" rel="related" />
            <Placemark>
              <name>MyCompany, Kalverstraat 26 Amsterdam 1000AG</name>
              <description><![CDATA[<address><a href="http://www.mysite.com/locations/3454/MyCompany">MyCompany</a><br />Address: Kalverstraat 26, Amsterdam 1000AG <br />Phone: 0646598787</address><p>hello there, im MyCompany</p>]]></description>
              <Point><coordinates>5.420686499999965,51.6298808,0</coordinates></Point>
            </Placemark>
          </Document>
        </kml>

    22_companyX.kml:

        <kml xmlns="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
          <Document>
            <name>companyX</name>
            <atom:author><atom:name>companyX</atom:name></atom:author>
            <atom:link href="http://www.mysite.com/locations/22/companyX" rel="related" />
            <Placemark>
              <name>companyX, Rosestreet 45 Amsterdam 1001XF</name>
              <description><![CDATA[<address><a href="http://www.mysite.com/locations/22/companyX">companyX</a><br />Address: Rosestreet 45, Amsterdam 1001XF <br />Phone: 0642195493</address><p>some text about companyX</p>]]></description>
              <Point><coordinates>5.520686499889632,51.6197705,0</coordinates></Point>
            </Placemark>
          </Document>
        </kml>

    Or is there an OPTION 3?

  • How to set up SyntaxHighlighter with GeeksWithBlogs in about 10 minutes

    - by mbcrump
    SyntaxHighlighter is a fully functional, self-contained code syntax highlighter developed in JavaScript. Below is a sample of what it looks like in your blog:

        class Test
        {
            static void Main()
            {
                System.Console.WriteLine("Sample SyntaxHighlighter");
            }
        }

    This tutorial will help you set up SyntaxHighlighter with GeeksWithBlogs.net in about 10 minutes. Even though this guide is specifically for GWB, you can use it on any other hosting provider that does not allow you to upload custom CSS/JavaScript. If you are using Live Writer, it is recommended to go ahead and download Code Snippet with SyntaxHighlighter Support to integrate this functionality within Live Writer.

    1) Log into GWB and select Options->Configure. Under Custom CSS, insert the following code at the top of the textbox:

        @import url("http://alexgorbatchev.com/pub/sh/current/styles/shCore.css");
        @import url("http://alexgorbatchev.com/pub/sh/current/styles/shThemeDefault.css");

    Please note that you can change the default theme by replacing shThemeDefault.css with one of the following:

        shThemeDefault.css
        shThemeDjango.css
        shThemeEmacs.css
        shThemeFadeToGrey.css
        shThemeMidnight.css
        shThemeRDark.css

    2) Under Static News/Announcements, insert the following code at the top:

        <script type="text/javascript" src="http://alexgorbatchev.com/pub/sh/current/scripts/shCore.js"></script>
        <script type="text/javascript" language="javascript" src="http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCSharp.js"></script>
        <script type="text/javascript" language="javascript" src="http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJScript.js"></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJava.js' type='text/javascript'></script>
        <script language='javascript'>
            SyntaxHighlighter.config.bloggerMode = true;
            SyntaxHighlighter.config.clipboardSwf = 'http://alexgorbatchev.com/pub/sh/current/scripts/clipboard.swf';
            SyntaxHighlighter.all();
        </script>

    Please note that this only gives you support for Java, JavaScript, and C#. If you want more languages, like Ruby and SQL, add the proper tags from the full list below. The reason I didn't add them all is that I do not want to load languages I will not be blogging about.

        <link href='http://alexgorbatchev.com/pub/sh/current/styles/shCore.css' rel='stylesheet' type='text/css'/>
        <link href='http://alexgorbatchev.com/pub/sh/current/styles/shThemeDefault.css' rel='stylesheet' type='text/css'/>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shCore.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCpp.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCSharp.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCss.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJava.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJScript.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPhp.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPython.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushRuby.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushSql.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushVb.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushXml.js' type='text/javascript'></script>
        <script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPerl.js' type='text/javascript'></script>
        <script language='javascript'>
            SyntaxHighlighter.config.bloggerMode = true;
            SyntaxHighlighter.config.clipboardSwf = 'http://alexgorbatchev.com/pub/sh/current/scripts/clipboard.swf';
            SyntaxHighlighter.all();
        </script>

    3) Now install Code Snippet with SyntaxHighlighter Support and launch Windows Live Writer. Click on the PreCode Snippet plugin and copy/paste your code into the window. Make sure you select "PRE" and the language that you are using. It should look similar to the following screenshot. After you finish editing the post, hit Publish, and your code should look nice and neat like the example shown earlier.

  • Do WebSockets have exclusive access to their sockets?

    - by Aoriste
    I'm curious whether, after a WebSocket connection has been established (after receiving the proper handshake from a server that supports WebSockets), the TCP socket used by the WebSocket connection is used exclusively by the WebSocket, or whether the browser may still make regular HTTP requests over it. It would only make sense to me that WebSockets have exclusive use of their TCP sockets, but I don't remember reading in any of the documentation that this is the case.

  • GZip compression with WCF hosted on IIS7

    - by joniba
    So I'm going to add my query to the small ocean of questions on the subject. I'm trying to enable GZip compression on large SOAP responses from a WCF service. So far, I've followed instructions here and in a variety of other places to enable dynamic compression on IIS. Here's my dynamicTypes section from the applicationHost.config (I threw some extra mime types in there just in case):

        <dynamicTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="application/atom+xml" enabled="true" />
            <add mimeType="application/xaml+xml" enabled="true" />
            <add mimeType="application/xop+xml" enabled="true" />
            <add mimeType="application/soap+xml" enabled="true" />
            <add mimeType="*/*" enabled="false" />
        </dynamicTypes>

    And also this, though I'm not so clear on why it's needed:

        <urlCompression doDynamicCompression="true" dynamicCompressionBeforeCache="true" />

    I've implemented IClientMessageInspector to add "Accept-Encoding: gzip, deflate" to my client's HTTP requests. Here's an example of a request header taken from Fiddler:

        POST http://[omitted]/TestMtomService/TextService.svc HTTP/1.1
        Content-Type: application/soap+xml; charset=utf-8
        Accept-Encoding: gzip, deflate
        Host: [omitted]
        Content-Length: 542
        Expect: 100-continue

    Now, this doesn't work: there's simply no compression happening, no matter what the size of the message (I tried up to 1.5 MB). I've looked at this post, but have not run into the exception it describes, so I haven't tried the CodeProject implementation it proposes. I've also seen a lot of other implementations that are supposed to get this to work, but cannot make sense of them (e.g., MSDN's GZip encoder). Why would I need to implement the encoder, or the CodeProject solution? Shouldn't IIS take care of the compression? So what else do I need to do to get this to work? Joni

  • Detecting REFERRER 301 redirects in AwStats

    - by Riccardo
    About six months ago I moved a website to a new domain and helped the migration along with 301 redirects in the .htaccess of the old domain. This morning I was looking at the AWStats log of the new domain and was surprised to notice that, in the "HTTP Status codes" section, 301 redirects account for 77% of all the codes there (it seems 200s are not tracked in that section). So what is the proper meaning of the 301 code in those stats? Does it mean that 77% of traffic is incoming (referred) from 301 redirects, or something else?
