Search Results

Search found 55652 results on 2227 pages for 'http response'.

  • Unauthorized response from Server with API upload

    - by Ethan Shafer
    I'm writing a library in C# to help me develop a Windows application. The library uses the Ubuntu One API. I am able to authenticate and can even make requests to get the Quota (access to Account Admin API) and Volumes (so I know I have access to the Files API at least) Here's what I have as my Upload code: public static void UploadFile(string filename, string filepath) { FileStream file = File.OpenRead(filepath); byte[] bytes = new byte[file.Length]; file.Read(bytes, 0, (int)file.Length); RestClient client = UbuntuOneClients.FilesClient(); RestRequest request = UbuntuOneRequests.BaseRequest(Method.PUT); request.Resource = "/content/~/Ubuntu One/" + filename; request.AddHeader("Content-Length", bytes.Length.ToString()); request.AddParameter("body", bytes, ParameterType.RequestBody); client.ExecuteAsync(request, webResponse => UploadComplete(webResponse)); } Every time I send the request I get an "Unauthorized" response from the server. For now the "/content/~/Ubuntu One/" is hardcoded, but I checked and it is the location of my root volume. Is there anything that I'm missing? UbuntuOneClients.FilesClient() starts the url with "https://files.one.ubuntu.com" UbuntuOneRequests.BaseRequest(Method.{}) is the same requests that I use to send my Quota and Volumes requests, basically just provides all of the parameters needed to authenticate. EDIT:: Here's the BaseRequest() method: public static RestRequest BaseRequest(Method method) { RestRequest request = new RestRequest(method); request.OnBeforeDeserialization = resp => { resp.ContentType = "application/json"; }; request.AddParameter("realm", ""); request.AddParameter("oauth_version", "1.0"); request.AddParameter("oauth_nonce", Guid.NewGuid().ToString()); request.AddParameter("oauth_timestamp", DateTime.Now.ToString()); request.AddParameter("oauth_consumer_key", UbuntuOneRefreshInfo.UbuntuOneInfo.ConsumerKey); request.AddParameter("oauth_token", UbuntuOneRefreshInfo.UbuntuOneInfo.Token); request.AddParameter("oauth_signature_method", "PLAINTEXT"); request.AddParameter("oauth_signature", UbuntuOneRefreshInfo.UbuntuOneInfo.Signature); //request.AddParameter("method", method.ToString()); return request; } and the FilesClient() method: public static RestClient FilesClient() { return (new RestClient("https://files.one.ubuntu.com")); }
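
    A hedged guess at the 401 (my own, not from the thread): with RestSharp-style clients, parameters added with AddParameter can be dropped or relocated once a raw request body is set, and OAuth parameters are normally expected in an Authorization header for content PUTs; also, oauth_timestamp must be seconds since the Unix epoch rather than DateTime.Now.ToString(). A minimal sketch of building the PLAINTEXT header by hand, with hypothetical ConsumerKey/ConsumerSecret/Token/TokenSecret fields:

        // Sketch only: OAuth 1.0 PLAINTEXT Authorization header.
        // PLAINTEXT signature = urlencoded consumer secret + "&" + urlencoded token secret.
        public static void AddOAuthHeader(RestRequest request)
        {
            string timestamp = ((long)(DateTime.UtcNow - new DateTime(1970, 1, 1)).TotalSeconds).ToString();
            string signature = Uri.EscapeDataString(ConsumerSecret) + "&" + Uri.EscapeDataString(TokenSecret);

            string header = "OAuth realm=\"\""
                + ", oauth_version=\"1.0\""
                + ", oauth_nonce=\"" + Guid.NewGuid().ToString("N") + "\""
                + ", oauth_timestamp=\"" + timestamp + "\""
                + ", oauth_consumer_key=\"" + ConsumerKey + "\""
                + ", oauth_token=\"" + Token + "\""
                + ", oauth_signature_method=\"PLAINTEXT\""
                + ", oauth_signature=\"" + signature + "\"";

            request.AddHeader("Authorization", header);
        }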

    Read the article

  • IIS6 Multiple SSL websites to a single HTTP website?

    - by docflabby
    Running an IIS6 server on Windows 2003. All the websites use ASP.NET. I have a number of websites, each running as a separate HTTP website: www.domain1.com www.domain2.com www.domain3.com. I have a separate HTTPS website, www.secure.com. These websites all run on the same server. I now wish to integrate the content of www.secure.com into each of the domains in a transparent way, such that each website, despite having its own SSL connection, displays the same content. The complication is that www.secure.com needs to know which website the connection came from so it can apply the appropriate branding. The idea behind this is to have only one website and location, but keep each core website brand. https://domain1.com looks a lot better from a marketing point of view (and avoids users getting confused about what our secure website is). SSL www.domain1.com/secure - displays www.secure.com (branded domain1) SSL www.domain2.com/secure - displays www.secure.com (branded domain2) SSL www.domain3.com/secure - displays www.secure.com (branded domain3) What would be the best way of achieving this? I'm open to using additional software if necessary. Would a reverse proxy be suitable for this situation?
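
    As a rough illustration (my own sketch, not from the question): if www.secure.com ends up reverse-proxied under each domain, the secure application can pick its branding from the incoming Host header, or from an X-Forwarded-Host header if the proxy rewrites Host. A minimal ASP.NET Web Forms sketch with made-up theme names:

        // Sketch only: choose branding from the requesting host. Assumes the proxy
        // either preserves the original Host header or forwards it as X-Forwarded-Host.
        protected void Page_PreInit(object sender, EventArgs e)
        {
            string host = Request.Headers["X-Forwarded-Host"] ?? Request.Url.Host;

            if (host.EndsWith("domain1.com"))      Theme = "Domain1Theme";
            else if (host.EndsWith("domain2.com")) Theme = "Domain2Theme";
            else                                   Theme = "DefaultTheme";
        }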

    Read the article

  • Server 2003 and XP client: why are HTTP connections being silently dropped?

    - by Asa Yeamans
    On my network, my edge router, a Windows 2003 R2 server router with all the latest updates, will drop packets, but only under specific circumstances. I have troubleshot and isolated it down to the simplest configuration I can. There is NO NAT involved, only fully public IP addresses. No firewalls are running either; all have been disabled. There are no packet filters on any interfaces anywhere either. I have a single Windows XP virtual machine and my edge router (the Windows 2003 R2 server, also a virtual machine) running on a Windows 2008 R2 x64 system (running Virtual Server 2005, as I don't have an Intel VT-compatible chip yet). The edge router can access any external HTTP site just fine, no issues. However, the Windows XP machine is only able to access certain sites. These work: www.google.com www.txstate.edu www.workintexas.com www.thedailywtf.com. These don't: www.yahoo.com www.utexas.edu en.wikipedia.org slashdot.org www.bing.com. I have removed all possibility of DNS issues by connecting with netcat from the XP box and sending GET /\r\nHost: \r\n\r\n, and that connection replicates the issue as well. The network setup: my statically assigned IP block is x.x.x.168/29. DSL Modem -----PPPoE Connection---- x.x.x.169 [EdgeRouter]; [EdgeRouter] x.x.x.170 -----Virtual Ethernet----- x.x.x.174 [Test2]. Test2's default gateway is x.x.x.170, and Test2 can ping any and every valid, accessible, public IP address with no packet loss whatsoever. If I connect directly over PPPoE from Test2 (the XP box) everything works just fine... I'm at my wits' end; I have no idea what's causing this.
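
    One hedged guess at a culprit (mine, not from the thread): the symptoms match a path MTU black hole. PPPoE drops the usable MTU to 1492, and if the "fragmentation needed" ICMP messages are blocked or the router is not clamping TCP MSS, sites whose responses fill full-size packets stall while smaller pages load fine. A quick probe from the XP box:

        rem Probe the largest unfragmented payload that survives the path (-f = don't fragment, -l = payload size).
        rem 1472 bytes corresponds to a 1500-byte MTU; expect failures above about 1464 when PPPoE (MTU 1492) is in the path.
        ping -f -l 1472 www.yahoo.com
        ping -f -l 1464 www.yahoo.com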

    Read the article

  • Client-Server connection response timeout issues

    - by Srikar
    A user creates a folder in the client, and in the client-side code I hit an API on the server to make this persistent for that user. But in some cases my server is so busy that the request times out. The server has executed my request but timed out before sending a response back to the client. The timeout set in the client is 10 seconds. At this point the client thinks that the server has not executed its request (of creating a folder) and ends up sending it again. Now I have 2 folders on the server but the user has created only 1 folder in the client. How do I prevent this? One way to solve this is to use a unique ID with each new request, so the ID acts as a distinguisher between old and new requests from the client. But this leads to storing these IDs on my server and doing a lookup for each API call, which I want to avoid. The other way is to increase the timeout duration, but I don't want to change this from 10 seconds. Something tells me that there are better solutions. I have posted this question on Stack Overflow but I think it is better suited here. UPDATE: I will make my problem even more explicit. The client is a web browser and the server is running nginx+django+mysql (a standard stack). The user creates a folder in the web browser. As a result I need to hit a server API. The API call responds, and thereby the client knows the API call was a success. That is the normal scenario. Sometimes, though, the server successfully completes the API request but the client-side (web browser) connection times out before the server can respond. The client has no clue at this point. The user thinks the request failed and clicks again. This time it succeeds, but when the UI refreshes he sees 2 folders. I want to remedy this situation.
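
    A hedged sketch of the usual fix (my suggestion, not from the question): make the create call idempotent on the server, so a retried request finds the existing folder instead of creating a second one; then no per-request IDs need to be stored. Assuming a hypothetical Django model named Folder with a unique constraint on (owner, name):

        # Sketch: idempotent folder creation. get_or_create relies on a database-level
        # unique constraint on (owner, name) to stay race-safe under retries.
        import json
        from django.http import HttpResponse

        def create_folder(request):
            name = request.POST["name"]
            folder, created = Folder.objects.get_or_create(owner=request.user, name=name)
            return HttpResponse(json.dumps({"id": folder.id, "created": created}),
                                content_type="application/json")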

    Read the article

  • Is this safe? <a href=http://javascript:...>

    - by KajMagnus
    I wonder if href and src attributes on <a> and <img> tags are always safe with regard to XSS attacks if they start with http:// or https://. For example, is it possible to append javascript: ... to the href or src attribute in some manner so as to execute code? This is disregarding whether or not the destination page is e.g. a phishing site, or whether the <img src=...> triggers a terribly troublesome HTTP GET request. Background: I'm processing text with Markdown, and then I sanitize the resulting HTML (using Google Caja's JsHtmlSanitizer). Some sample code in Google Caja assumes all hrefs and srcs that start with http:// or https:// are safe -- I wonder if it's safe to use that sample code. Kind regards, Kaj-Magnus
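
    A hedged illustration (mine, not from the question): the http:// prefix only rules out dangerous schemes such as javascript:; it does not make the value safe to splice into markup verbatim, because a quote inside the URL can still break out of the attribute if output encoding is skipped. For example, the raw input http://example.com/" onmouseover="alert(1) concatenated into an href without attribute encoding yields:

        <a href="http://example.com/" onmouseover="alert(1)">link</a>

    So the prefix check is reasonable only as long as the value is also attribute-encoded when the HTML is emitted, which JsHtmlSanitizer normally takes care of.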

    Read the article

  • Impact of migrating home page from HTTP to HTTPS on search results

    - by 2Stroke SEO
    I've had to change one of my sites' home pages from HTTP to HTTPS. I had plenty of links coming into the HTTP page and it was performing well in Google against many of my targeted search phrases. I did a 301 redirect from the HTTP page to transfer the link juice to the HTTPS page (and to prevent duplicate content issues), but my search rankings have tanked, which indicates no link juice has been transferred. My PageRank has vanished - which I'd expect - but I'm really surprised that the SERP rankings fell off the face of the earth. Anyone have any ideas how I can recover from this? I've waited a couple of months since the changes took effect, just in case Google was taking time to check it out.

    Read the article

  • SharePoint 2010 slow page response time suddenly!

    - by H(at)Ni
    Hello, one of my customers faced a problem where their SharePoint portal suddenly started loading much more slowly than usual. After some basic troubleshooting I did not find anything suspicious in the ULS logs, IIS logs or even the event logs. After that, I came to the part that I like most, which is capturing a memory dump of the IIS process and analyzing the running threads. I searched for common mistakes like looping over a large list or calling a remote web service, but couldn't find any. After a deep analysis of the memory dump (which was done by an Escalation Engineer for SharePoint), it turned out that the farm root certificate was missing, so the server was trying to validate it over the internet every time a user requested a page. This was the resolution: http://support.microsoft.com/kb/2625048 Cheers,
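
    For reference, roughly the shape of the fix the linked KB describes (paraphrased from memory - verify against the article before using it): export the farm root certificate with PowerShell and add it to the server's Trusted Root Certification Authorities store so certificate validation no longer goes out to the internet.

        # Sketch, run from the SharePoint 2010 Management Shell on the affected server.
        $rootCert = (Get-SPCertificateAuthority).RootCertificate
        $rootCert.Export("Cert") | Set-Content C:\SPRootAuthority.cer -Encoding byte

        # Then add it to the local machine's trusted roots.
        certutil -addstore -f Root C:\SPRootAuthority.cer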

    Read the article

  • Running response time tests on PHP code - how much is 7.2E-5 microseconds?

    - by Ali
    Hi guys, I'm using the microtime() function of PHP to tell how long certain snippets of code take to run. I do this by taking the time before and after the snippet and subtracting the two values returned by microtime(). I got the following results for the different snippets: 1 - 0.022976, 2 - 0.003656, 3 - -0.196361, 4 - 0.006563, 5 - 7.2E-5, 6 - 0.847695, 7 - 0.005092, 8 - 7.6E-5, 9 - 0.08024. The first number identifies the snippet and the second is the time taken... I've forgotten whatever I learnt back in college on numerical methods :( - how big is 7.2E-5 microseconds?
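
    For reference (my note, not from the question): 7.2E-5 is just scientific notation for 0.000072, and since microtime() differences are measured in seconds, that snippet took roughly 72 microseconds. The negative result for snippet 3 also hints that microtime() may have been called without the true argument, in which case it returns a string and plain subtraction is unreliable. A tiny PHP sketch that avoids both issues:

        <?php
        // Time a snippet and print the result in fixed-point notation.
        $start = microtime(true);          // seconds, as a float
        usleep(100);                       // stand-in for the snippet being measured
        $elapsed = microtime(true) - $start;
        printf("%.6f seconds (%.1f microseconds)\n", $elapsed, $elapsed * 1e6);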

    Read the article

  • ASP.NET MVC Response Filter + OutputCache Attribute

    - by Shane Andrade
    I'm not sure if this is an ASP.NET MVC-specific thing or ASP.NET in general, but here's what's happening. I have an action filter that removes whitespace by means of a response filter: public class StripWhitespaceAttribute : ActionFilterAttribute { public StripWhitespaceAttribute () { } public override void OnResultExecuted(ResultExecutedContext filterContext) { base.OnResultExecuted(filterContext); filterContext.HttpContext.Response.Filter = new WhitespaceFilter(filterContext.HttpContext.Response.Filter); } } When used in conjunction with the OutputCache attribute, my calls to Response.WriteSubstitution for "donut hole caching" do not work. The first and second time the page loads, the callback passed to WriteSubstitution gets called; after that it is not called anymore until the output cache expires. I've noticed this not just with this particular filter but with any filter assigned to Response.Filter... am I missing something? I also forgot to mention that I've tried this without the MVC action filter attribute, by attaching to the PostReleaseRequestState event in global.asax and setting the Response.Filter value there... but still no luck.

    Read the article

  • How to configure nginx so it works with Express?

    - by Michal Stefanow
    I'm trying to configure nginx so it proxy_passes requests to my node apps. A question on Stack Overflow got many upvotes: http://stackoverflow.com/questions/5009324/node-js-nginx-and-now and I'm using the config from there (but since this question is about server configuration, it is supposed to be on Server Fault). Here is the nginx configuration: server { listen 80; listen [::]:80; root /var/www/services.stefanow.net/public_html; index index.html index.htm; server_name services.stefanow.net; location / { try_files $uri $uri/ =404; } location /test-express { proxy_pass http://127.0.0.1:3002; } location /test-http { proxy_pass http://127.0.0.1:3003; } } Using plain node: var http = require('http'); http.createServer(function (req, res) { res.writeHead(200, {'Content-Type': 'text/plain'}); res.end('Hello World\n'); }).listen(3003, '127.0.0.1'); console.log('Server running at http://127.0.0.1:3003/'); It works! Check: http://services.stefanow.net/test-http Using express: var express = require('express'); var app = express(); // app.get('/', function(req, res) { res.redirect('/index.html'); }); app.get('/index.html', function(req, res) { res.send("blah blah index.html"); }); app.listen(3002, "127.0.0.1"); console.log('Server running at http://127.0.0.1:3002/'); It doesn't work :( See: http://services.stefanow.net/test-express I know that something is going on: a) test-express is NOT running; b) test-express is running (and I can confirm it is running via the command line while SSHed into the server): root@stefanow:~# service nginx restart * Restarting nginx nginx [ OK ] root@stefanow:~# curl localhost:3002 Moved Temporarily. Redirecting to /index.html root@stefanow:~# curl localhost:3002/index.html blah blah index.html I tried setting headers as described here: http://www.nginxtips.com/how-to-setup-nginx-as-proxy-for-nodejs/ (still doesn't work): proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; I also tried replacing '127.0.0.1' with 'localhost' and vice versa. Please advise. I'm pretty sure I'm missing some obvious detail and I would like to learn more. Thank you.
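
    A hedged guess at the cause (mine, not confirmed in the thread): with proxy_pass http://127.0.0.1:3002; and no URI part, nginx forwards the original path /test-express upstream, which the Express app has no route for, whereas the plain Node server ignores the path entirely and answers anything. Adding a URI (the trailing slash) makes nginx replace the matched location prefix before proxying, roughly:

        location /test-express/ {
            # The trailing "/" in proxy_pass substitutes the matched prefix,
            # so /test-express/index.html reaches Express as /index.html.
            proxy_pass http://127.0.0.1:3002/;
        }

    The alternative is to leave the nginx config alone and register the Express routes under /test-express instead of /.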

    Read the article

  • What MS technology to use for HTTP service returning XML?

    - by Borek
    I need to create a service that: accepts HTTP requests (with query string or HTTP POST parameters) does some processing on the requests (checking if the request is valid, authentication etc.) reads data from a custom store (another HTTP call in our case) returns the result as custom XML (defined with XSD) I'm trying to think of various MS technologies that could help me and how good they would be for this scenario (pretty standard one I guess). The tasks above are relatively separate, this is what comes to mind: HTTP front-end: ASP.NET Web Forms ASP.NET MVC (seems more appropriate here as I won't need server controls, view state etc.) WCF? Don't know much about it or how well it would suit my task. Custom logic on the server: this will probably be a generic C# code in all cases (sometimes "plugged into" or called from MVC controllers or some equivalent place in other technologies) Reading data from internal data stores: As said, this is another HTTP server in our case. Options that come to mind: Just read the data using something like WebClient (Just theoretically) implement a LINQ provider (Just even more theoretically) implement an EF provider Output the data as custom XML: Linq2XML Serialization? Is it flexible enough? Does WCF provide some tools for this? Some "OXM" - Object/XML mapper if there is something like that for .NET I may be wrong in many of my assumptions, this is just a quick list that comes to mind after a quick research. Some general notes / questions: Testing is important Solution with a clear domain model would be much preferred over the one without Can Entity Framework actually help somewhere in my scenario? If so, where and how? Would WCF be an appropriate technology for this? I don't know much about it.
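
    To make the last two steps concrete, a hedged C# sketch (my own, not from the question) of an ASP.NET MVC-style action that builds the payload with LINQ to XML and returns it with an XML content type; the controller and element names are made up:

        using System.Web.Mvc;
        using System.Xml.Linq;

        public class OrdersController : Controller
        {
            // Hypothetical endpoint: validate the request, read data, return XML.
            public ActionResult Get(string id)
            {
                var doc = new XDocument(
                    new XElement("order",
                        new XAttribute("id", id ?? string.Empty),
                        new XElement("status", "ok")));

                return Content(doc.ToString(), "text/xml");
            }
        }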

    Read the article

  • How to disable web page caching throughout the servlets

    - by Kurt
    To prevent a web page from being cached, in the Java controller servlet I did something like this in a method: public ModelAndView home(HttpServletRequest request, HttpServletResponse response) throws Exception { ModelAndView mav = new ModelAndView(ViewConstants.MV_MAIN_HOME); mav.addObject("testing", "Test this string"); mav.addObject(request); response.setHeader("Cache-Control", "no-cache, no-store"); response.setHeader("Pragma", "no-cache"); response.setDateHeader("Expires", 0); return mav; } But this only works for a particular response object. I have many similar methods in a servlet, and I have many servlets too. If I want to disable caching throughout the application, what should I do? (I do not want to add the above code for every single response object.) Thanks in advance.
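
    One common way to avoid repeating the three header calls (a sketch of my own, not from the question) is a servlet Filter mapped to the whole application, so every response picks up the cache headers no matter which controller produced it:

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletResponse;

        // Map this filter to /* in web.xml (or annotate with @WebFilter("/*") on Servlet 3.0+).
        public class NoCacheFilter implements Filter {
            public void init(FilterConfig config) { }
            public void destroy() { }

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletResponse response = (HttpServletResponse) res;
                response.setHeader("Cache-Control", "no-cache, no-store");
                response.setHeader("Pragma", "no-cache");
                response.setDateHeader("Expires", 0);
                chain.doFilter(req, res);
            }
        }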

    Read the article

  • GZipStream in an WebHttpResponse producing no data

    - by Pierre 303
    I want to compress my HTTP Responses for client that supports it. Here is the code used to send a standard response: IHttpClientContext context = (IHttpClientContext)sender; IHttpRequest request = e.Request; string responseBody = "This is some random text"; IHttpResponse response = request.CreateResponse(context); using (StreamWriter writer = new StreamWriter(response.Body)) { writer.WriteLine(responseBody); writer.Flush(); response.Send(); } The code above works fine. Now I added gzip support below. When I test it with a browser that supports gzip or a custom method, it returns an empty string. I'm sure I'm missing something simple, but I just can't find it... IHttpClientContext context = (IHttpClientContext)sender; IHttpRequest request = e.Request; string acceptEncoding = request.Headers["Accept-Encoding"]; string responseBody = "This is some random text"; IHttpResponse response = request.CreateResponse(context); if (acceptEncoding != null && acceptEncoding.Contains("gzip")) { byte[] bytes = ASCIIEncoding.ASCII.GetBytes(responseBody); response.AddHeader("Content-Encoding", "gzip"); using (GZipStream writer = new GZipStream(response.Body, CompressionMode.Compress)) { writer.Write(bytes, 0, bytes.Length); writer.Flush(); response.Send(); } } else { using (StreamWriter writer = new StreamWriter(response.Body)) { writer.WriteLine(responseBody); writer.Flush(); response.Send(); } }
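
    A hedged guess at the empty body (mine, not from the thread): GZipStream only writes its trailing compression blocks when it is closed, so calling response.Send() inside the using block ships the response before the gzip data is complete. One sketch of a workaround is to compress into a buffer first and only then hand the finished bytes to the response (same IHttpResponse API as above, assumed):

        // Sketch: finish the gzip stream in memory, then send the completed bytes.
        byte[] compressed;
        using (MemoryStream buffer = new MemoryStream())
        {
            using (GZipStream gzip = new GZipStream(buffer, CompressionMode.Compress))
            {
                gzip.Write(bytes, 0, bytes.Length);
            }                                   // disposing the GZipStream finalizes the data
            compressed = buffer.ToArray();
        }

        response.AddHeader("Content-Encoding", "gzip");
        response.AddHeader("Content-Length", compressed.Length.ToString());
        response.Body.Write(compressed, 0, compressed.Length);
        response.Send();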

    Read the article

  • Is GAE Really GZipping My Content? Slow Response Times with GAE as CDN

    - by viatropos
    I am testing out Google App Engine as a free content delivery network, and it feels like it's taking a long time to serve up my content. Why does this GAE page take, say, half a second to download, while your typical Stack Overflow page downloads much faster even with a ton more content? What am I missing here? All I have done is create an app and upload an image according to that tutorial, but the content is being served very slowly it seems. Any suggestions? (Not considering Amazon or other CDNs right now, just looking for help with GAE.) Note: I am using Safari when I visit those links; maybe Safari is causing problems?

    Read the article

  • Add Access-Control-Allow-Origin to header in PHP

    - by SANDeveloper
    I am trying to workaround CORS restriction on a WebGL application. I have a Web Service which resolves URL and returns images. Since this web service is not CORS enabled, I can't use the returned images as textures. I was planning to: Write a PHP script to handle image requests Image requests would be sent through the query string as a url parameter The PHP Script will: Call the web service with the query string url Fetch the image response (web service returns a content-type:image response) Add the CORS header (Add Access-Control-Allow-Origin) to the response Send the response to the browser I tried to implement this using a variety of techniques including CURL, HTTPResponse, plain var_dump etc. but got stuck at some point in each. So I have 2 questions: Is the approach good enough? Considering the approach is good enough: I made the most progress with CURL. I could get the image header and data with: $ch = curl_init(); $url = $_GET["url"]; curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_HEADER, true); curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type:image/jpeg')); //Execute request $response = curl_exec($ch); //get the default response headers $headers = curl_getinfo($ch); //close connection curl_close($ch); But this doesn't actually change set the response content-type to image/jpeg. It dumps the header + response into a new response of content-type text/html and display the header and the image BLOB data in the browser. How do I get it to send the response in the format I want? Managed to get it working: $ch = curl_init(); $url = $_GET["url"]; curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_HEADER, false); //Execute request $response = curl_exec($ch); //get the default response headers $headers = curl_getinfo($ch); curl_setopt($ch, CURLOPT_HTTPHEADER, $headers); header('Content-Type: image/jpeg'); header("Access-Control-Allow-Origin: *"); // header("Expires: Sat, 26 Jul 2017 05:00:00 GMT"); //close connection curl_close($ch); flush();
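
    For comparison, a slightly tidier sketch of the same proxy (my own variant, not the poster's final code): let curl return the body, forward the upstream content type, and add the CORS header before echoing. The url parameter is assumed to be validated elsewhere so the script cannot be abused as an open proxy.

        <?php
        // Sketch of a pass-through image proxy that adds the CORS header.
        $url = $_GET["url"];

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // capture the body instead of printing it
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        $body = curl_exec($ch);
        $contentType = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
        curl_close($ch);

        header("Content-Type: " . ($contentType ? $contentType : "image/jpeg"));
        header("Access-Control-Allow-Origin: *");
        echo $body;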

    Read the article

  • Getting the location from a WebClient on a HTTP 302 Redirect?

    - by Michael Stum
    I have a URL that returns a HTTP 302 redirect, and I would like to get the URL it redirects to. The problem is that System.Net.WebClient seems to actually follow it, which is bad. HttpWebRequest seems to do the same. Is there a way to make a simple HTTP Request and get back the target Location without the WebClient following it? I'm tempted to do raw socket communication as HTTP is simple enough, but the site uses HTTPS and I don't want to do the Handshaking. At the end, I don't care which class I use, I just don't want it to follow HTTP 302 Redirects :)
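
    A hedged sketch of one way to do this with HttpWebRequest (my own, not from the question): turn off automatic redirection and read the Location header from the 302 response; the HTTPS handshaking is still handled for you. The URL is a placeholder.

        // Sketch: fetch the redirect target without following it.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://example.com/start");
        request.AllowAutoRedirect = false;          // stop the 302 from being followed

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            // For a 301/302 the target URL is in the Location header.
            string location = response.Headers["Location"];
            Console.WriteLine((int)response.StatusCode + " -> " + location);
        }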

    Read the article

  • Can WCF TCP and HTTP endpoints have the same port?

    - by dlanod
    I'm interested in one WCF server exposing both HTTP and TCP interfaces. It'll be used with Silverlight clients, so the thinking is that the HTTP interface will be for secure communications while TCP will be used the rest of the time. Is it possible for these two interfaces to use the same port in their endpoints, e.g. http://localhost:9000/ and net.tcp://localhost:9000/?

    Read the article

  • Ruby: How to post a file via HTTP as multipart/form-data?

    - by kch
    I want to do an HTTP POST that looks like an HTML form posted from a browser. Specifically, post some text fields and a file field. Posting text fields is straightforward; there's an example right there in the net/http rdocs, but I can't figure out how to post a file along with it. Net::HTTP doesn't look like the best idea. curb is looking good.
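
    A hedged sketch of one common approach (my own, not from the question), using the multipart-post gem on top of Net::HTTP; the URL and field names are made up:

        # Sketch: multipart/form-data POST with text fields and a file (gem install multipart-post).
        require 'net/http/post/multipart'

        url = URI.parse('http://example.com/upload')
        File.open('./photo.jpg') do |jpg|
          req = Net::HTTP::Post::Multipart.new(url.path,
                  'title' => 'My photo',
                  'file'  => UploadIO.new(jpg, 'image/jpeg', 'photo.jpg'))
          res = Net::HTTP.start(url.host, url.port) { |http| http.request(req) }
          puts res.code
        end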

    Read the article

  • Cannot get JSON response using $.getJSON

    - by Mellon
    I am currently developing a Ruby on Rails 3 application. My server controller action renders a JSON object as the response: class DaysController < BaseController ... def the_days ... render :json => days end end In my JavaScript I use the following code to get the JSON response from the server (that is, from the the_days action in the controller): $.getJSON( url, {emp_id: emp_id}, function(data) { var result = data.response; alert(result) alert(data) }, "json" ); I use the Firefox browser and checked with Firebug; in the Firebug Net > XHR panel I can see the GET request is successful and the response "days" is there. So both the request and the response are successful. But I do not see the two alert windows defined in the above $.getJSON callback. Why? Why can I not get the response "days" in the $.getJSON function?
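
    One hedged debugging step (my suggestion, not from the question): $.getJSON fails silently, so if the response cannot be parsed as JSON, or the request is treated as cross-domain, the success callback simply never runs. Rewriting the call with $.ajax and an error handler usually reveals what is going wrong:

        // Sketch: the same request, but with an error callback so transport and
        // parse failures are no longer silent.
        $.ajax({
          url: url,
          data: { emp_id: emp_id },
          dataType: "json",
          success: function (data) {
            alert(data);            // 'data' is already the parsed "days" object
          },
          error: function (xhr, status, err) {
            alert(status + ": " + (err || xhr.responseText));
          }
        });

    Note also that $.getJSON only takes (url, data, success), so the extra "json" argument is ignored, and data.response will be undefined unless the Rails action explicitly wraps the payload in a "response" key.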

    Read the article

  • Jetty servlet response to Ajax always empty

    - by chris
    Hi, I'm trying to run a Java server on Jetty which should respond to an Ajax call. Unfortunately the response seems to be empty when I call it with Ajax. When I call http://localhost:8081/?id=something directly I get an answer. The Java server: public class Answer extends AbstractHandler { public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { String id = request.getParameter("id"); response.setContentType("text/xml"); response.setHeader("Cache-Control", "no-cache"); response.setContentLength(19+id.length()); response.setStatus(HttpServletResponse.SC_OK); response.getWriter().write("<message>"+id+"</message>"); //response.setContentType("text/html;charset=utf-8"); response.flushBuffer(); baseRequest.setHandled(true); } public static void main(String[] args) throws Exception { Server server = new Server(8081); server.setHandler(new Answer()); server.start(); server.join(); } } The JS and HTML: <html> <head> <script> var req; function validate() { var idField = document.getElementById("userid"); var url = "validate?id=" + encodeURIComponent(idField.value); if (typeof XMLHttpRequest != "undefined") { req = new XMLHttpRequest(); } else if (window.ActiveXObject) { req = new ActiveXObject("Microsoft.XMLHTTP"); } req.open("GET", "http://localhost:8081?id=fd", true); req.onreadystatechange = callback; req.send(null); } function callback() { if (req.readyState == 4) { if (req.status == 200) { var message = req.responseXML.getElementsByTagName("message")[0]; document.getElementById("userid").innerHTML = "message.childNodes[0].nodeValue"; } } } </script> </head> <body onload="validate('foobar')"> <div id="userid">hannak</div> </body> </html> I actually don't know what I'm doing wrong here. Maybe someone has a good idea. Greetings, chris
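
    A hedged guess (mine, not from the thread): if the HTML file is opened from disk or served from a different port than 8081, the browser's same-origin policy blocks the XHR, so req.responseXML stays empty even though the servlet replied. Two things worth trying: serve the page from the same Jetty instance, or, while testing, allow cross-origin access from the handler and let the container compute the Content-Length instead of hand-counting characters:

        // Sketch: inside handle(), before writing the body (testing only - lock this down later).
        response.setHeader("Access-Control-Allow-Origin", "*");
        // Also consider dropping the manual setContentLength(19 + id.length()) call so the
        // container computes the length of the bytes actually written.

    (Separately, the callback assigns the literal string "message.childNodes[0].nodeValue" because of the surrounding quotes; that will need fixing once the response arrives.)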

    Read the article

  • How to reliably categorize HTTP sessions in a proxy by the browser window/tab the user is viewing?

    - by Jehonathan
    I was using the Fiddler core .Net library as a local proxy to record the user activity in web. However I ended up with a problem which seems dirty to solve. I have a web browser say Google Chrome, and the user opened like 10 different tabs each with different web URLs. The problem is that the proxy records all the HTTP session initiated by each pages separately, causing me to figure out using my intelligence the tab which the corresponding HTTP session belonged to. I understand that this is because of the stateless nature of HTTP protocol. However I am just wondering is there an easy way to do this? I ended up with below c# code for that in Fiddler. Still its not a reliable solution due to the heuristics. This is a modification of the sample project bundled with Fiddler core for .NET 4. Basically what it does is filtering HTTP sessions initiated in last few seconds to find the first request or switching to another page made by the same tab in browser. It almost works, but not seems to be a universal solution. Fiddler.FiddlerApplication.AfterSessionComplete += delegate(Fiddler.Session oS) { //exclude other HTTP methods if (oS.oRequest.headers.HTTPMethod == "GET" || oS.oRequest.headers.HTTPMethod == "POST") //exclude other HTTP Status codes if (oS.oResponse.headers.HTTPResponseStatus == "200 OK" || oS.oResponse.headers.HTTPResponseStatus == "304 Not Modified") { //exclude other MIME responses (allow only text/html) var accept = oS.oRequest.headers.FindAll("Accept"); if (accept != null) { if(accept.Count>0) if (accept[0].Value.Contains("text/html")) { //exclude AJAX if (!oS.oRequest.headers.Exists("X-Requested-With")) { //find the referer for this request var referer = oS.oRequest.headers.FindAll("Referer"); //if no referer then assume this as a new request and display the same if(referer!=null) { //if no referer then assume this as a new request and display the same if (referer.Count > 0) { //lock the sessions Monitor.Enter(oAllSessions); //filter further using the response if (oS.oResponse.MIMEType == string.Empty || oS.oResponse.MIMEType == "text/html") //get all previous sessions with the same process ID this session request if(oAllSessions.FindAll(a=>a.LocalProcessID == oS.LocalProcessID) //get all previous sessions within last second (assuming the new tab opened initiated multiple sessions other than parent) .FindAll(z => (z.Timers.ClientBeginRequest > oS.Timers.ClientBeginRequest.AddSeconds(-1))) //get all previous sessions that belongs to the same port of the current session .FindAll(b=>b.port == oS.port ).FindAll(c=>c.clientIP ==oS.clientIP) //get all previus sessions with the same referrer URL of the current session .FindAll(y => referer[0].Value.Equals(y.fullUrl)) //get all previous sessions with the same host name of the current session .FindAll(m=>m.hostname==oS.hostname).Count==0 ) //if count ==0 that means this is the parent request Console.WriteLine(oS.fullUrl); //unlock sessions Monitor.Exit(oAllSessions); } else Console.WriteLine(oS.fullUrl); } else Console.WriteLine(oS.fullUrl); Console.WriteLine(); } } } } };

    Read the article
