Search Results

Search found 53597 results on 2144 pages for 'http requests'.

Page 88 of 2144

  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated, with one page that involves selecting and linking multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and their associated repositories across multiple posts. When a repository is first accessed it uses the existing data context if one exists, otherwise it creates a new context.

    The problem is that if the lifetime of the context is only per-request (as suggested in the article), then you have to deal with multiple contexts and the complexity of detaching and attaching entities between contexts. My solution is to share the context between posts by creating a single view model that includes all required repositories (initialised to share the same context) plus any associated data, storing this model in a Session variable and retrieving it from Session on subsequent page requests, thereby maintaining the same context across all posts until the profile is saved.

    This works fine, BUT I am concerned that I don't actually know exactly what is stored in the model's Session variable or, more importantly, how large the Session variable is. So two questions, I suppose: firstly, should I look for a better solution to handle the shared context across posts (any suggestions welcome)? And secondly, what is actually stored in the Session when it includes a repository plus context? Any help appreciated!
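    For illustration, a minimal sketch of the pattern being described (class and repository names are hypothetical, not taken from the article; this assumes in-process session state, since an EF4 ObjectContext is not serializable):

        // Hypothetical sketch: one context shared by every repository, with the
        // whole model held in Session between posts.
        public class ProfileBuilderModel
        {
            private readonly MyEntities _context = new MyEntities();   // EF4 ObjectContext

            public GenericRepository<Person> People { get; private set; }
            public GenericRepository<Skill> Skills { get; private set; }

            public ProfileBuilderModel()
            {
                People = new GenericRepository<Person>(_context);
                Skills = new GenericRepository<Skill>(_context);
            }
        }

        // In the page code-behind:
        // var model = (ProfileBuilderModel)(Session["ProfileModel"]
        //     ?? (Session["ProfileModel"] = new ProfileBuilderModel()));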

    Read the article

  • WCF and streaming requests and responses

    - by Cheeso
    Is it correct that in WCF, I cannot have a service write to a stream that is received by the client? My understanding is that streaming is supported in WCF for requests, responses, or both. Is it true that in all cases the receiver of the stream must invoke Read? I would like to support a scenario where the receiver of the stream can Write on it. Is this supported?

    Let me show it this way. The simplest example of streaming in WCF is the service returning a FileStream to a client. This is a streamed response. The server code is like this:

        [ServiceContract]
        public interface IStreamService
        {
            [OperationContract]
            Stream GetData(string fileName);
        }

        public class StreamService : IStreamService
        {
            public Stream GetData(string filename)
            {
                FileStream fs = new FileStream(filename, FileMode.Open);
                return fs;
            }
        }

    And the client code is like this:

        StreamDemo.StreamServiceClient client =
            new WcfStreamDemoClient.StreamDemo.StreamServiceClient();
        Stream str = client.GetData(@"c:\path\to\myfile.dat");
        do
        {
            b = str.ReadByte(); // read next byte from stream
            ...
        } while (b != -1);

    (example taken from http://blog.joachim.at/?p=33)

    Clear, right? The server returns the Stream to the client, and the client invokes Read on it. Is it possible for the client to provide a Stream, and the server to invoke Write on it? In other words, rather than a pull model - where the client pulls data from the server - it is a push model, where the client provides the "sink" stream and the server writes into it. Is this possible in WCF, and if so, how? What are the config settings required for the binding, interface, etc.? The analogy is the Response.OutputStream from an ASP.NET request. In ASP.NET, any page can invoke Write on the output stream, and the content is received by the client. Can I do something similar in WCF? Thanks.
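    For reference, the streamed behaviour in examples like the one above hinges on the binding's transfer mode; a minimal sketch of the client-side setup (endpoint address is illustrative, the contract is reused from the question, and this would sit inside a method or test):

        using System.ServiceModel;

        // Sketch: binding settings typically used for streamed transfers, the
        // programmatic equivalent of transferMode="StreamedResponse" in config.
        var binding = new BasicHttpBinding
        {
            TransferMode = TransferMode.StreamedResponse,   // or Streamed / StreamedRequest
            MaxReceivedMessageSize = 64L * 1024 * 1024      // allow large streamed payloads
        };
        var factory = new ChannelFactory<IStreamService>(
            binding, new EndpointAddress("http://localhost:8000/stream"));
        IStreamService proxy = factory.CreateChannel();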

    Read the article

  • My Rails session is getting reset when I have concurrent requests

    - by alex_c
    I think I might be misunderstanding something about Rails sessions, so please bear with me; I might not be phrasing my question the best way. I'm working on an iPhone app with a Ruby on Rails backend. I have a web view which by default goes to the index action of one controller (and uses sessions), and in the background a bunch of API calls going to a different controller (which don't need to use sessions). The problem is, the sessions set by my web view seem to be overwritten by the API calls. My staging server is pretty slow, so there's lots of time for the requests to overlap each other - what I see in the logs is basically this:

    - Request A (first controller) starts. Session is empty.
    - Request B (second controller) starts. Session is empty.
    - Request A finishes. Request A has done authentication, and stored the user ID in the session. Session contains user ID.
    - Request B finishes. Session is empty.
    - Request C starts. Session is empty - not what I want.

    Now, the strange thing is that request B should NOT be writing anything to the session. I do have before and after filters which READ from the session - things like:

        user = User.find_by_id(session[:id])

    or

        logger.debug session.inspect

    and if I remove all of those, then everything works as expected - session contents get set by request A, and they're still there when request C starts. So. I think I'm missing something about how sessions work. Why would reading from the session overwrite it? Should I be accessing it some other way? Am I completely on the wrong track and the problem is elsewhere? Thank you for any insights!

    Read the article

  • Memory leak involving jQuery Ajax requests

    - by Eli Courtwright
    I have a webpage that's leaking memory in both IE8 and Firefox; the memory usage displayed in the Windows Process Explorer just keeps growing over time. The following page requests the "unplanned.json" url, which is a static file that never changes (though I do set my Cache-control HTTP header to no-cache to make sure that the Ajax request always goes through). When it gets the results, it clears out an HTML table, loops over the json array it got back from the server, and dynamically adds a row to an HTML table for each entry in the array. Then it waits 2 seconds and repeats this process.

    Here's the entire webpage: <html> <head> <title>Test Page</title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js"></script> </head> <body> <script type="text/javascript"> function kickoff() { $.getJSON("unplanned.json", resetTable); } function resetTable(rows) { $("#content tbody").empty(); for(var i=0; i<rows.length; i++) { $("<tr>" + "<td>" + rows[i].mpe_name + "</td>" + "<td>" + rows[i].bin + "</td>" + "<td>" + rows[i].request_time + "</td>" + "<td>" + rows[i].filtered_delta + "</td>" + "<td>" + rows[i].failed_delta + "</td>" + "</tr>").appendTo("#content tbody"); } setTimeout(kickoff, 2000); } $(kickoff); </script> <table id="content" border="1" style="width:100% ; text-align:center"> <thead><tr> <th>MPE</th> <th>Bin</th> <th>When</th> <th>Filtered</th> <th>Failed</th> </tr></thead> <tbody></tbody> </table> </body> </html>

    If it helps, here's an example of the json I'm sending back (it's this exact array with thousands of entries instead of just one): [ { mpe_name: "DBOSS-995", request_time: "09/18/2009 11:51:06", bin: 4, filtered_delta: 1, failed_delta: 1 } ]

    EDIT: I've accepted Toran's extremely helpful answer, but I feel I should post some additional code, since his removefromdom jQuery plugin has some limitations:

    - It only removes individual elements. So you can't give it a query like `$("#content tbody tr")` and expect it to remove all of the elements you've specified.
    - Any element that you remove with it must have an `id` attribute. So if I want to remove my `tbody`, then I must assign an `id` to my `tbody` tag or else it will give an error.
    - It removes the element itself and all of its descendants, so if you simply want to empty that element then you'll have to re-create it afterwards (or modify the plugin to empty instead of remove).

    So here's my page above modified to use Toran's plugin. For the sake of simplicity I didn't apply any of the general performance advice offered by Peter. Here's the page which now no longer memory leaks: <html> <head> <title>Test Page</title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js"></script> </head> <body> <script type="text/javascript"> <!-- $.fn.removefromdom = function(s) { if (!this) return; var el = document.getElementById(this.attr("id")); if (!el) return; var bin = document.getElementById("IELeakGarbageBin"); //before deleting el, recursively delete all of its children. 
while (el.childNodes.length > 0) { if (!bin) { bin = document.createElement("DIV"); bin.id = "IELeakGarbageBin"; document.body.appendChild(bin); } bin.appendChild(el.childNodes[el.childNodes.length - 1]); bin.innerHTML = ""; } el.parentNode.removeChild(el); if (!bin) { bin = document.createElement("DIV"); bin.id = "IELeakGarbageBin"; document.body.appendChild(bin); } bin.appendChild(el); bin.innerHTML = ""; }; var resets = 0; function kickoff() { $.getJSON("unplanned.json", resetTable); } function resetTable(rows) { $("#content tbody").removefromdom(); $("#content").append('<tbody id="id_field_required"></tbody>'); for(var i=0; i<rows.length; i++) { $("#content tbody").append("<tr><td>" + rows[i].mpe_name + "</td>" + "<td>" + rows[i].bin + "</td>" + "<td>" + rows[i].request_time + "</td>" + "<td>" + rows[i].filtered_delta + "</td>" + "<td>" + rows[i].failed_delta + "</td></tr>"); } resets++; $("#message").html("Content set this many times: " + resets); setTimeout(kickoff, 2000); } $(kickoff); // --> </script> <div id="message" style="color:red"></div> <table id="content" border="1" style="width:100% ; text-align:center"> <thead><tr> <th>MPE</th> <th>Bin</th> <th>When</th> <th>Filtered</th> <th>Failed</th> </tr></thead> <tbody id="id_field_required"></tbody> </table> </body> </html> FURTHER EDIT: I'll leave my question unchanged, though it's worth noting that this memory leak has nothing to do with Ajax. In fact, the following code would memory leak just the same and be just as easily solved with Toran's removefromdom jQuery plugin: function resetTable() { $("#content tbody").empty(); for(var i=0; i<1000; i++) { $("#content tbody").append("<tr><td>" + "DBOSS-095" + "</td>" + "<td>" + 4 + "</td>" + "<td>" + "09/18/2009 11:51:06" + "</td>" + "<td>" + 1 + "</td>" + "<td>" + 1 + "</td></tr>"); } setTimeout(resetTable, 2000); } $(resetTable);

    Read the article

  • 401 error when consuming a Web service with HTTP Basic authentication using CXF

    - by seanhodges
    I'm trying to consume a remote Web service that uses HTTP basic authentication, using Apache CXF, within a JUnit test. The error I am getting is: javax.xml.ws.WebServiceException: Failed to access the WSDL at: http://localhost:8080/services/MyService?wsdl. It failed with: Server returned HTTP response code: 401 for URL: http://localhost:8080/services/MyService?wsdl. at com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.tryWithMex(RuntimeWSDLParser.java:151) at com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:133) at com.sun.xml.internal.ws.client.WSServiceDelegate.parseWSDL(WSServiceDelegate.java:254) at com.sun.xml.internal.ws.client.WSServiceDelegate.<init>(WSServiceDelegate.java:217) at com.sun.xml.internal.ws.client.WSServiceDelegate.<init>(WSServiceDelegate.java:165) at com.sun.xml.internal.ws.spi.ProviderImpl.createServiceDelegate(ProviderImpl.java:93) at javax.xml.ws.Service.<init>(Service.java:76) at com.wave2.marketplace.importer.impl.adportal.ws.MyServiceService.<init>(MyServiceService.java:37) at com.wave2.marketplace.importer.impl.adportal.MyWSTest.testConsumingTheWS(MyWSTest.java:22) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:232) at junit.framework.TestSuite.run(TestSuite.java:227) at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) Caused by: java.io.IOException: Server returned HTTP response code: 401 for URL: http://localhost:8080/services/MyService?wsdl at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1269) at java.net.URL.openStream(URL.java:1029) at com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.createReader(RuntimeWSDLParser.java:793) at com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.resolveWSDL(RuntimeWSDLParser.java:251) at com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:118) ... 
    26 more

    Having read this StackOverflow post, I have attempted to add the auth credentials to my request context, as follows:

        @Test
        public void testConsumingTheWS() throws Exception {
            URL wsdl = new URL("http://localhost:8080/services/MyService?wsdl");
            MyServiceService provider = new MyServiceService(wsdl); // <-- Error occurs here
            MyService service = provider.getMyService();

            BindingProvider binding = (BindingProvider) service;
            binding.getRequestContext().put(BindingProvider.USERNAME_PROPERTY, "username");
            binding.getRequestContext().put(BindingProvider.PASSWORD_PROPERTY, "password");

            Ping out = service.getPing();
            assertNotNull(out);
        }

    However, as my in-line comment indicates, the error occurs before the BindingProvider code is reached, so the error remains the same. I did have a read of this article and its follow-up, but so far I've had trouble determining how to add the interceptor code without the use of Spring (this is for a JUnit test). How might I go about authenticating against this Web service?
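    One detail worth noting: the stack trace shows the 401 happening while JAX-WS downloads the WSDL, i.e. before any BindingProvider properties can apply. A hedged sketch of a common workaround, registering a JDK-wide Authenticator before constructing the service (credentials are illustrative; pointing the service at a local copy of the WSDL is another option):

        import java.net.Authenticator;
        import java.net.PasswordAuthentication;

        // Sketch: supply HTTP Basic credentials for the WSDL download itself.
        // This is JVM-global, so it is best confined to test setup code, e.g. a @Before method.
        Authenticator.setDefault(new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication("username", "password".toCharArray());
            }
        });

        // ...then construct the service as before:
        // MyServiceService provider = new MyServiceService(wsdl);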

    Read the article

  • Equivalent of http://www.cplusplus.com/ for C# .net

    - by David Relihan
    Hi folks, I've read through a lot of the "Learn C# .NET" questions just to see if this question was already answered (directly or indirectly). I program mostly in C++, so I find the website http://www.cplusplus.com/ invaluable and there's rarely a day when it is not open in my browser! However, I'm just wondering: is there a C# .NET equivalent that people find themselves constantly referencing? The best I'm aware of are:

    http://stackoverflow.com
    http://msdn.microsoft.com/en-us/vcsharp/aa336809.aspx
    http://www.java2s.com/Tutorial/CSharp/CatalogCSharp.htm

    Thanks,

    Read the article

  • Setting up a Proxy to record Firefox requests

    - by Marco
    I'm using Ruby+Watir to request pages through Firefox. I would like to record the headers and content of every http request made through the browser. Would it be possible to configure a proxy solution to store this information, either in a file or pipe it straight into an application? Could I use something such as squid or nginx to record header/content information? PS: Running Ubuntu x64.
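    As one illustration of what such a setup could look like, a minimal logging proxy using Ruby's bundled WEBrick (port and log path are arbitrary; squid, nginx, or a dedicated tool would work just as well, and HTTPS traffic goes through CONNECT so it is not decoded here):

        # Sketch: a forward proxy that logs the request line, request headers,
        # and response body. Point Firefox (or the Watir profile) at
        # localhost:8888 as its HTTP proxy.
        require 'webrick'
        require 'webrick/httpproxy'

        log = File.open('/tmp/http_traffic.log', 'a')

        proxy = WEBrick::HTTPProxyServer.new(
          Port: 8888,
          ProxyContentHandler: proc do |req, res|
            log.puts "#{req.request_method} #{req.unparsed_uri}"
            req.header.each { |name, values| log.puts "  #{name}: #{values.join(', ')}" }
            log.puts res.body if res.body
            log.puts '-' * 40
          end
        )

        trap('INT') { proxy.shutdown }
        proxy.start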

    Read the article

  • What's the BPF for HTTP?

    - by Gtker
    The definition can be seen here. The candidate answer may be tcp and dst port 80, but can tcp and dst port 80 guarantee that it's HTTP traffic, and that it includes all HTTP traffic? It seems not, because some sites can be visited on a port other than 80, like this: http://domain.name:8080 So my question is: what's the exact BPF for HTTP?

    Read the article

  • Tried http post doesn't work

    - by Rebol Tutorial
    I wanted to try the example here: http://www.codeconscious.com/rebol/rebol-net.html#HTTP

        print read/custom http://babelfish.altavista.com/translate.dyn reduce ['POST {text=REBOL+Rules&lp=en_fr}]

    Since the page has changed since then, I modified it to:

        write clipboard:// read/custom http://babelfish.altavista.com/translate.dyn reduce ['POST {trtext=hello+world&lp=en_fr&btnTrTxt=Translate}]

    It does return an HTML page, but it doesn't contain any translation. What did I miss? Thanks.

    Read the article

  • http compression shared hosting apache/php

    - by gansodesoya
    Hi, I was sniffing the response headers of one of my sites, and apparently it is not using HTTP compression to deliver responses, because I'm not seeing Content-Encoding: gzip in the response header. The weird thing is that phpinfo() shows me HTTP_ACCEPT_ENCODING: gzip,deflate,sdch. I'm using a Rackspace cloud site (shared hosting, can't access the httpd config), and I really want to activate HTTP compression, but the support guys over there tell me that if phpinfo() says it, it's already on. Thanks!
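    (Note that HTTP_ACCEPT_ENCODING only reflects what the client offers in its request, not what the server sends back. As a hedged sketch of one shared-hosting-friendly workaround, PHP can compress its own output without touching the Apache config:)

        <?php
        // Sketch: let PHP gzip its own output when the client supports it.
        // ob_gzhandler checks the request's Accept-Encoding header and sets
        // Content-Encoding accordingly; it only covers PHP-generated pages,
        // not static files served directly by Apache.
        ob_start('ob_gzhandler');

        echo "...page content...";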

    Read the article

  • Install / Start Apache HTTP Server on Windows [closed]

    - by Bagira
    I have downloaded Apache HTTP Server 2.4 from here, and after extracting the zip file I can see a couple of folders, but I could not figure out how to start or install the server. I want to install the server standalone, i.e. without PHP or MySQL. This is because I already have MySQL installed and I don't want to use PHP. I want the Apache HTTP server to communicate with Apache Tomcat. I am using Windows Vista.
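    For what it's worth, the zip distribution of Apache httpd on Windows is usually installed and started from an elevated command prompt in the extracted bin folder; a sketch, assuming the archive was extracted to C:\Apache24 (which the bundled conf\httpd.conf typically expects as ServerRoot):

        cd C:\Apache24\bin
        rem check the configuration first
        httpd.exe -t
        rem register Apache as a Windows service, then start it
        httpd.exe -k install
        httpd.exe -k start

    Once it is running, hooking it up to Tomcat is a separate configuration step (typically mod_proxy_ajp or mod_jk) and does not require PHP or MySQL at all.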

    Read the article

  • Browser displays 'Apache is functioning normally' when I enter 'http://you/' in the location bar

    - by bobo
    I am on a Mac, and when I enter http://you/ in the location bar it displays 'Apache is functioning normally' no matter which browser (Safari, Firefox, Chrome) I am using. Is this normal? Or could it be that my computer has been hacked in some way and is now acting as a server for the hackers?

    EDIT: I originally wanted to go to youtube.com, but entered http://you/ by accident and found out this strange behavior.

    Read the article

  • Nagios only create warning for a http service

    - by MeinAccount
    I would also like to monitor non-crucial services with Nagios, for example our GitLab server or phpMyAdmin instance. Is there any way to just create warnings instead of critical errors for some services? At the moment I'm using the following:

        define service {
            host_name           localhost
            use                 generic-service
            service_description HTTP GitLab
            check_command       check_www!git.example.com!'/users/sign_in'
        }

        define command {
            command_name check_www
            command_line /usr/lib/nagios/plugins/check_http -H '$ARG1$' -I '$HOSTADDRESS$' -e 'HTTP/1.1 200 OK' -u '$ARG2$'
        }
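    One possible direction (a sketch, not the only way - check_http's own thresholds or a dedicated service template with different notification options may fit better): wrap the plugin in a small script that downgrades a CRITICAL exit code to WARNING, and point check_command at the wrapper instead of check_http.

        #!/bin/sh
        # check_http_warnonly: run check_http, but report CRITICAL (2) as WARNING (1).
        # Nagios exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.
        /usr/lib/nagios/plugins/check_http "$@"
        rc=$?
        if [ "$rc" -eq 2 ]; then
            exit 1
        fi
        exit $rc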

    Read the article

  • route http and ssh traffic normally, everything else via vpn tunnel

    - by Normadize
    I've read quite a bit and am close, I feel, and I'm pulling my hair out ... please help! I have an OpenVPN client whose server sets local routes and also changes the default gw (I know I can prevent that with --route-nopull). I'd like to have all outgoing http and ssh traffic go via the local gw, and everything else via the vpn. Local IP is 192.168.1.6/24, gw 192.168.1.1. OpenVPN local IP is 10.102.1.6/32, gw 192.168.1.5. OpenVPN server is at {OPENVPN_SERVER_IP}. Here's the route table after the openvpn connection:

        # ip route show table main
        0.0.0.0/1 via 10.102.1.5 dev tun0
        default via 192.168.1.1 dev eth0 proto static
        10.102.1.1 via 10.102.1.5 dev tun0
        10.102.1.5 dev tun0 proto kernel scope link src 10.102.1.6
        {OPENVPN_SERVER_IP} via 192.168.1.1 dev eth0
        128.0.0.0/1 via 10.102.1.5 dev tun0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.6 metric 1

    This makes all packets go via the VPN tunnel except those destined for 192.168.1.0/24. Doing wget -qO- http://echoip.org shows the vpn server's address, as expected; the packets have 10.102.1.6 as source address (the vpn local ip) and are routed via tun0, as reported by tcpdump -i tun0 (tcpdump -i eth0 sees none of this traffic). What I tried was:

    - create a 2nd routing table holding the 192.168.1.6/24 routing info (copied from the main table above)
    - add an iptables -t mangle -I PREROUTING rule to mark packets destined for port 80
    - add an ip rule to match on the mangled packets and point them to the 2nd routing table
    - add ip rules for "to 192.168.1.6" and "from 192.168.1.6" pointing to the 2nd routing table (though this is superfluous)
    - set the ipv4 reverse-path filter to none with net.ipv4.conf.tun0.rp_filter=0 and net.ipv4.conf.eth0.rp_filter=0

    I also tried an iptables mangle output rule and an iptables nat prerouting rule. It still fails and I'm not sure what I'm missing:

    - iptables mangle prerouting: packet still goes via vpn
    - iptables mangle output: packet times out

    Is it not the case that to achieve what I want, when doing wget http://echoip.org I should change the packet's source address to 192.168.1.6 before routing it off? But if I do that, wouldn't the response from the http server be routed back to 192.168.1.6, and wget would not see it as it is still bound to tun0 (the vpn interface)? Can a kind soul please help? What commands would you execute after openvpn connects to achieve what I want? Looking forward to hair regrowth ...
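    For reference, a hedged sketch of the usual fwmark-based recipe for this kind of split, using the addresses above (mark value, table number, and port list are illustrative; locally generated traffic is matched in the OUTPUT chain rather than PREROUTING, and the SNAT rule pins the source address to the eth0 IP so replies come back the same way):

        # mark locally generated HTTP, HTTPS and SSH traffic
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,443,22 -j MARK --set-mark 1

        # send marked packets through a separate routing table that uses the local gateway
        ip rule add fwmark 1 table 100
        ip route add default via 192.168.1.1 dev eth0 table 100
        ip route flush cache

        # rewrite the source address so replies return via eth0 instead of tun0
        iptables -t nat -A POSTROUTING -o eth0 -p tcp -m multiport --dports 80,443,22 \
                 -j SNAT --to-source 192.168.1.6

        # relax reverse-path filtering (as already done above)
        sysctl -w net.ipv4.conf.eth0.rp_filter=0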

    Read the article

  • Setting up reverse proxy via an HTTP proxy?

    - by billc.cn
    I have a piece of software that has to call an external web API from inside our firewall. Currently the only way to do this is through the HTTP proxy feature of the firewall; however, the software itself does not support proxy configuration. So I am wondering whether it is possible to set up a reverse proxy for the API that itself goes through the HTTP proxy? The server is running WS2003, and I can install any software on it.

    Read the article

  • Setting http auth type in phpMyAdmin on Debian

    - by Daniel Hollands
    I'm trying to set up a fresh phpMyAdmin install on my Debian 6 server to use HTTP authentication rather than the cookie-based auth that is the default when it is installed. To do this, I edited the $cfg['Servers'][$i]['auth_type'] line in /etc/phpmyadmin/config.inc.php to use 'http' as its setting, and restarted the server, but the setting seems to be ignored: when I go to phpMyAdmin, it is still offering up the regular login box. I've done this twice before (once on Debian and once on Ubuntu), so I'm not sure why it isn't working this time. Thank you

    Read the article

  • Win7 Modifying incoming HTTP packet from specific url automatically

    - by xeross
    Hey, is there an application that can listen in on my PC's HTTP traffic (preferably process-specific) and modify packets that were requested from a certain URL? So let's say every time I request http://example.tld/test.html, it would replace any occurrence of, let's say, "i" with "I". It's a simple example, but still, it's an example. Thanks for your time, Xeross

    Read the article

  • Any way to view dynamic java content ex-post? Browser session still open

    - by Ryan
    I feel like a grandpa from 1996 asking this, but is it at all possible to view a representation of a particular screen that was rendered as part of a java-based online checkout process I executed a couple days ago? I haven't cleared my browser cache or temp files or anything, and I don't think I've restarted the comp or even the browser since. I'm using mac OS X 10.6.8, and the page(s) were viewed with Chrome version 21.0.1180.89 in standard mode (not incognito). Specifically the page in question was part of Verizon Wireless's 'iconic' contract/checkout process, which leads the user through several pages to make selections on various criteria and seems to be based on java. (Obviously I'm a dummy regarding web stuff so the question is probably not very well defined, I'm happy to elaborate). ^This is the tl;dr question. If it belongs on another site please just let me know. This is what I've been able to figure out on my own, for the bored / ultra-helpful / those who could use a laugh at a noob fumbling his way around cache files with no idea what he's doing: The progress through the selection pages is very clear in Chrome's browser history, the sequential pages are: https://www.verizonwireless.com/b2c/accountholder/estore/phoneupgrade?execution=e3s2 https://www.verizonwireless.com/b2c/accountholder/estore/phoneupgrade?execution=e3s3 https://www.verizonwireless.com/b2c/accountholder/estore/phoneupgrade?execution=e3s4 https://www.verizonwireless.com/b2c/accountholder/estore/phoneupgrade?execution=e3s5 https://preorder.verizonwireless.com/iconic/?format=JSON&value={%22action%22:%22START_ORDER%22,%22custType%22:%22EXISTING%22,%22orderType%22:%22UPGRADE%22,%22lookupMtn%22:%22*(NumberA)*%22,%22lineData%22:[{%22mtn%22:%22*(NumberA)*%22,%22upgType%22:%22ALTERNATE_UPGRADE%22,%22eligibleMtn%22:%22*(NumberB)*%22}]} https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicOrder.do?format=JSON&value={%22action%22:%22START_ORDER%22,%22custType%22:%22EXISTING%22,%22orderType%22:%22UPGRADE%22,%22lookupMtn%22:%22*(NumberA)*%22,%22lineData%22:[{%22mtn%22:%22*(NumberA)*%22,%22upgType%22:%22ALTERNATE_UPGRADE%22,%22eligibleMtn%22:%22*(NumberB)*%22}]} https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicEligibility.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicDeviceSelection.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/PlanOptions.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicFeatures.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicAccessories.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicShipmentBilling.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicReview.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicPaymentCreditInfo.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicConfirmation.do The visual representation I would need could come from any of these pages, as the necessary information was shown at the top of each of them (although the two with long URLs were just like redirects or something). Of course, clicking the link to the page in History right now requires a new sign-in and just returns the user to the initial step for doing the process again; it does not pull up a representation of the page as it was seen several days ago. This I understand. 
Instead using Chrome's integrated cache viewer by typing about:cache in the address bar, I can search and find links that appear to be relevant, when I click on the link I just get a http header and a bunch of hexadecimal gobbledygook. I've tried to use the URL at the top of the cache and URLs in the http headers, but they take me to current versions of those pages and not the versions I saw during the checkout process. I tried this with a few of them but stopped because I noticed that it updated the date in the http header to the present moment and I don't want to take chances overwriting the cache files since I don't know what I'm doing. The links to the cache files look like this: https://login.verizonwireless.com/amserver/UI/Login?realm=vzw&goto=https%3A%2F%2Fpreorder.verizonwireless.com%3A443%2Ficonic%2Ficonic%2Fsecured%2Fscreens%2FPlanOptions.do https://preorder.verizonwireless.com/iconic/iconic/screens/customerTypeOverlay.jsp https://verizonwireless.tt.omtrdc.net/m2/verizonwireless/mbox/standard?mboxHost=login.verizonwireless.com&mboxSession=1347776884663-145230&mboxPC=1347609748832-956765.19&mboxPage=1347776884663-145230&screenHeight=1200&screenWidth=1920&browserWidth=1299&browserHeight=868&browserTimeOffset=-420&colorDepth=24&mboxCount=1&mbox=My_Verizon_Global&mboxId=0&mboxTime=1347751684666&mboxURL=https%3A%2F%2Flogin.verizonwireless.com%2Famserver%2FUI%2FLogin%3Frealm%3Dvzw%26goto%3Dhttps%253A%252F%252Fpreorder.verizonwireless.com%253A443%252Ficonic%252Ficonic%252Fsecured%252Fscreens%252FPlanOptions.do&mboxReferrer=&mboxVersion=41 and https://verizonwireless.tt.omtrdc.net/m2/verizonwireless/mbox/standard?mboxHost=login.verizonwireless.com&mboxSession=1347735676953-663794&mboxPC=1347609748832-956765.19&mboxPage=1347738347511-550383&screenHeight=1200&screenWidth=1920&browserWidth=1299&browserHeight=845&browserTimeOffset=-420&colorDepth=24&mboxCount=1&mbox=My_Verizon_Global&mboxId=0&mboxTime=1347713147517&mboxURL=https%3A%2F%2Flogin.verizonwireless.com%2Famserver%2FUI%2FLogin%3Frealm%3Dvzw%26goto%3Dhttps%253A%252F%252Fpreorder.verizonwireless.com%253A443%252Ficonic%252Ficonic%252Fsecured%252Fscreens%252FIconicOrder.do%253Fformat%253DJSON%2526value%253D%257B%252522action%252522%253A%252522START_ORDER%252522%252C%252522custType%252522%253A%252522EXISTING%252522%252C%252522orderType%252522%253A%252522UPGRADE%252522%252C%252522lookupMtn%252522%253A%252522*(NumberA)*%252522%252C%252522lineData%252522%253A%255B%257B%252522mtn%252522%253A%252522*(NumberA)*%252522%252C%252522upgType%252522%253A%252522ALTERNATE_UPGRADE%252522%252C%252522eligibleMtn%252522%253A%252522*(NumberB)*%252522%257D%255D%257D&mboxReferrer=&mboxVersion=41 and the http headers look like this: HTTP/1.1 200 OK Server: VZW Date: Sun, 16 Sep 2012 14:55:48 GMT Cache-control: private Pragma: no-cache Expires: 0 X-dsameversion: VZW Am_client_type: genericHTML Content-type: text/html;charset=ISO-8859-1 Content-Encoding: gzip Content-Length: 6220 and HTTP/1.1 200 OK Cache-Control: no-cache Date: Sun, 16 Sep 2012 16:16:30 GMT Content-Type: text/html Expires: Thu, 01 Jan 1970 00:00:00 GMT Content-Encoding: gzip X-Powered-By: Servlet/2.5 JSP/2.1 and HTTP/1.1 302 Moved Temporarily Server: VZW Date: Sun, 16 Sep 2012 16:29:32 GMT Cache-control: private Pragma: no-cache X-dsameversion: VZW Am_client_type: genericHTML Location: 
https://preorder.verizonwireless.com:443/iconic/iconic/secured/screens/IconicOrder.do?format=JSON&value={%22action%22:%22START_ORDER%22,%22custType%22:%22EXISTING%22,%22orderType%22:%22UPGRADE%22,%22lookupMtn%22:%22*(*(NumberA)*%22,%22lineData%22:[{%22mtn%22:%22*(NumberA)*%22,%22upgType%22:%22ALTERNATE_UPGRADE%22,%22eligibleMtn%22:%22*(NumberB)*%22}]} Content-length: 0 ^^this last one actually returned me to a page in the middle of the process when I used the "Location:" given in this http header rather than the URL at the top of the cache page (and was signed in to Verizon's website through a separate tab), but the page it took me to had already been updated to reflect new information, it wasn't presented as of the time the actions were taken several days ago when the page was originally viewed. (It's clear I can't achieve what I'm looking for by visiting current versions of these pages on the web…I should actually probably disable my network adapter while testing this out). The cache folder seems promising, but I don't know what to make of all that hexadecimal mess - if it contains what I'm looking for and if so, how to view it. Finally, the third thing I've come across is the Google Chrome cache folder on my local machine, at ~/Library/Caches/Google/Chrome/ then there are 'Default' and 'Media Cache' folders within. There are ~4,000 files in the former averaging ~100kb each, and 100 files in the latter averaging ~900kb each. The filenames all start "f_00xxxx" except for files titled data_0 through data_4 in each folder. I'm not sure how to observe the contents of these files and don't really want to start opening them up and potentially overwriting existing cached pages, as I notice there are already some holes in the arrangement of the files which I have never deleted manually. Hopefully this is an easy question to answer for someone who knows this stuff, admittedly web stuff is my weak point. As such, I've spent the past five hours searching around and trying to provide all the information I can. I'm probably asking for a miracle - like can those cached pages full of hexadecimal data be used to recreate the representation of the information that was on screen during the process? Or could screenshots of the previously viewed webpages be lurking in the /Caches folder? I have doubt because the content wasn't viewed at a permanent link, rather it seems like the on-screen information was served by Verizon's db, and probably securely so. I'm just not sure if Chrome saves the visual rendering of the page contents somewhere, even just temporarily. Alternatively I would be happy just to get the raw data that was on the page, even if not a visual representation…I just need to be able to demonstrate the phone line that was referenced on this page: https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicFeatures.do . Can anyone point me in the right direction?

    Read the article

  • Identical traffic

    - by Walter White
    Hi all, I am running an application server and logging all requests for later analysis. One interesting trend I noticed last night: a visitor from Texas on FiOS shared identical traffic with a Bluecoat host in California. What would cause the traffic to be identical? For every request the visitor made, Bluecoat made one subsequently, within milliseconds of his request. If it is caching, why would there be identical requests? Wouldn't it go through the cache/proxy on their end, so that I would only see the proxied request? I'm just curious; this is an interesting pattern that shows similarities to a DDoS attack, but with far fewer resources. Is it possible that the visitor had malware on their computer? Any other ideas? Walter

    Read the article

  • Ignoring GET parameters in Varnish VCL

    - by JamesHarrison
    Okay: I've got a site set up which has some APIs we expose to developers, which are in the format

        /api/item.xml?type_ids=34,35,37&region_ids=1000002,1000003&key=SOMERANDOMALPHANUM

    In this URI, type_ids is always set; region_ids and key are optional. The important thing to note is that the key variable does not affect the content of the response. It is used for internal tracking of requests so we can identify people who make slow or otherwise unwanted requests. In Varnish, we have a VCL like this:

        if (req.http.host ~ "the-site-in-question.com") {
            if (req.url ~ "^/api/.+\.xml") {
                unset req.http.cookie;
            }
        }

    We just strip cookies out and let the backend do the rest as far as times are concerned (this is a hackaround, since Rails/authlogic sends session cookies with API responses). At present, though, distinct developers are basically hitting different caches, since &key=SOMEALPHANUM is considered part of the Varnish hash for storage. This is obviously not a great solution and I'm trying to work out how to tell Varnish to ignore that part of the URI.
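    One possible shape for this (a sketch, assuming key never appears as the first query parameter - as in the format above, where type_ids always comes first - and that the backend is happy to receive the stripped URL): rewrite req.url in vcl_recv before Varnish computes its hash, so every value of key collapses onto the same cache object.

        sub vcl_recv {
            if (req.http.host ~ "the-site-in-question.com" && req.url ~ "^/api/.+\.xml") {
                unset req.http.cookie;
                # drop "&key=..." so it is not part of the cache hash (or of the backend URL)
                set req.url = regsub(req.url, "[&;]key=[^&;]*", "");
            }
        }

    An alternative, if the backend still needs the key for its own logging, is to leave req.url intact and instead hash a scrubbed copy of the URL in a custom vcl_hash.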

    Read the article

  • OpenVPN (HideMyAss) client on Ubuntu: Route only HTTP traffic

    - by Andersmith
    I want to use HideMyAss VPN (hidemyass.com) on Ubuntu Linux to route only HTTP (ports 80 & 443) traffic to the HideMyAss VPN server, and leave all the other traffic (MySQL, SSH, etc.) alone. I'm running Ubuntu on AWS EC2 instances. The problem is that when I try and run the default HMA script, I suddenly can't SSH into the Ubuntu instance anymore and have to reboot it from the AWS console. I suspect the Ubuntu instance will also have trouble connecting to the RDS MySQL database, but haven't confirmed it. HMA uses OpenVPN like this: sudo openvpn client.cfg The client configuration file (client.cfg) looks like this: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client auth-user-pass #management-query-passwords #management-hold # Disable management port for debugging port issues #management 127.0.0.1 13010 ping 5 ping-exit 30 # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. #;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. proto tcp ;proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. # All VPN Servers are added at the very end ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. # We order the hosts according to number of connections. # So no need to randomize the list # remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca ./keys/ca.crt cert ./keys/hmauser.crt key ./keys/hmauser.key # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". 
This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ;ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. #comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 # Detect proxy auto matically #auto-proxy # Need this for Vista connection issue route-metric 1 # Get rid of the cached password warning #auth-nocache #show-net-up #dhcp-renew #dhcp-release #route-delay 0 120 # added to prevent MITM attack ns-cert-type server # # Remote servers added dynamically by the master server # DO NOT CHANGE below this line # remote-random remote 173.242.116.200 443 # 0 remote 38.121.77.74 443 # 0 # etc... remote 67.23.177.5 443 # 0 remote 46.19.136.130 443 # 0 remote 173.254.207.2 443 # 0 # END

    Read the article

  • Domain redirection to port on Windows Server 2008

    - by Rauffle
    I have a Windows server running IIS. I wish to run a piece of software that hosts a web interface on a non-standard HTTP port (let's say port 9999). I have static DNS entries on my router for two FQDNs, both of which point to the Windows server. I want requests for 'website1' to continue to go to the IIS website on port 80, but requests for 'website2' to instead go to port 9999, to be handled by the other application. How can I accomplish this? Right now I can get to the application by going to 'website1:9999' or 'website2:9999'.

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website which essentially allows users to view their web cameras remotely. The current flow of data is as follows:

    1. User opens page to view the web camera image.
    2. JavaScript polls a url on the server (appended with a unique timestamp) every 1000ms.
    3. An ftp connection is enabled for the camera's ftp user.
    4. The web camera opens an ftp connection to the server.
    5. The web camera begins taking photos.
    6. The web camera sends each photo to the ftp server.
    7. On each image url request: the server reads the latest image uploaded via ftp for that camera from the hard drive, and deletes any older images from the server.

    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach. My original plan was that, instead of having the files read from the local disk, the web server would open an ftp connection to the ftp server and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But ftp connection establishment times were too slow (mainly because PHP out of the box is unable to persist ftp connections), so we abandoned this approach and went straight for reading from the hard drive.

    The firmware provider for the cameras states they're able to build an http client which, instead of using ftp to upload the image, could post the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack: the web camera issues post requests with the latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory. The data flow would then become:

    1. User opens page to view the web camera image.
    2. JavaScript polls a url on the server (appended with a unique timestamp) every 1000ms.
    3. The camera is sent an http request to start posting images to a provided url.
    4. The web camera begins taking photos.
    5. The web camera sends post requests to the server as fast as it can.
    6. On each image url request: the server reads the latest image from Redis, and tells Redis to delete the older image.

    My questions are:

    - Are there any greater overheads to transferring images via HTTP instead of FTP?
    - Is there a simple way to calculate how many cameras we could potentially have streaming at once?
    - Is there any way to prevent potentially DoS'ing our own servers with web camera requests?
    - Is Redis a good solution to this problem?
    - Should I abandon the PHP/Nginx combination and go for something else?
    - Is this proposed solution actually any good?
    - Will adding HTTPS to the mix cause posting the image to become too slow?

    Thanks in advance
    Alan
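    For illustration, a hedged sketch of what the proposed POST endpoint could look like with PHP and the phpredis extension (camera authentication, content-type checks, and error handling are omitted; file names, key layout, and the camera_id parameter are made up):

        <?php
        // upload.php -- the camera POSTs the raw JPEG body to /upload.php?camera_id=...
        $cameraId = preg_replace('/[^a-zA-Z0-9_-]/', '', $_GET['camera_id']);
        $image    = file_get_contents('php://input');   // raw POST body

        $redis = new Redis();
        $redis->connect('127.0.0.1', 6379);

        // Overwrite the previous frame; the expiry means idle cameras clean up after themselves.
        $redis->setex("camera:$cameraId:latest", 60, $image);

        // latest.php -- the browser polls /latest.php?camera_id=...
        // $image = $redis->get("camera:$cameraId:latest");
        // header('Content-Type: image/jpeg');
        // echo $image;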

    Read the article
