Search Results

Search found 17081 results on 684 pages for 'request tracking'.

Page 73/684 | < Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >

  • How can I catch connection requests in my framework?

    - by Falx
    I'm building a framework (OSGi-like) that other parties can write bundles for. But I want my framework to manage the QoS of the connection requests that those parties will make. The easy solution would be to ask them to use (or force them to use - although I don't know how) a specific ConnectionRequest bundle of the framework. The problem with this approach is that they wouldn't be able to use any of their own preferred libraries that rely on the standard Java libraries to make a connection (request). So I wondered: is there a way in Java to catch all outgoing connection requests, so I can add my QoS-handling code before the request is sent off to the underlying layer?

    Read the article

  • ExpressJS: What is the difference between app.local and res.local?

    - by aeyang
    I'm trying to learn Express, and in my app I have middleware that passes the session object from the Request object to my Response object so that I can access it in my views: app.use((req, res, next) -> res.locals.session = req.session next() ) But app.locals is available to the view as well, right? So is it the same if I do app.locals.session = req.session? Is there a convention for the types of things app.locals and res.locals are used for? I was also confused about the difference between res.render() and res.redirect() - when should each be used? Thanks for reading. Any help related to Express is appreciated!

    Read the article

  • .NET WebRequest.PreAuthenticate not quite what it sounds like

    - by Rick Strahl
    I’ve run into this problem a few times now: How to pre-authenticate .NET WebRequest calls doing an HTTP call to the server – essentially send authentication credentials on the very first request instead of waiting for a server challenge first? At first glance this sounds like it should be easy: The .NET WebRequest object has a PreAuthenticate property which sounds like it should force authentication credentials to be sent on the first request. Looking at the MSDN example, it certainly looks like it does: http://msdn.microsoft.com/en-us/library/system.net.webrequest.preauthenticate.aspx Unfortunately the MSDN sample is wrong. As is the text of the Help topic which incorrectly leads you to believe that PreAuthenticate… wait for it - pre-authenticates. But it doesn’t allow you to set credentials that are sent on the first request. What this property actually does is quite different. It doesn’t send credentials on the first request but rather caches the credentials once you have already authenticated. HTTP authentication is typically based on a challenge/response mechanism, where the client sends a request and the server responds with a 401 header requesting authentication. So the client sends a request like this: GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive and the server responds with: HTTP/1.1 401 Unauthorized Cache-Control: private Content-Type: text/html; charset=utf-8 Server: Microsoft-IIS/7.5 WWW-Authenticate: basic realm=rasnote" X-AspNet-Version: 2.0.50727 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM WWW-Authenticate: Basic realm="rasnote" X-Powered-By: ASP.NET Date: Tue, 27 Oct 2009 00:58:20 GMT Content-Length: 5163 plus the actual error message body. The client then is responsible for re-sending the current request with the authentication token information provided (in this case Basic Auth): GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Cookie: TimeTrakker=2HJ1998WH06696; WebLogCommentUser=Rick Strahl|http://www.west-wind.com/|[email protected]; WebStoreUser=b8bd0ed9 Authorization: Basic cgsf12aDpkc2ZhZG1zMA== Once the authorization info is sent the server responds with the actual page result. Now if you use WebRequest (or WebClient) the default behavior is to re-authenticate on every request that requires authorization. This means if you look in Fiddler or some other HTTP client proxy that captures requests you’ll see that each request re-authenticates. Here are two requests fired back to back, and you can see the 401 challenge and the 200 response for both requests. If you watch this same conversation between a browser and a server you’ll notice that the first 401 is also there but the subsequent 401 requests are not present.
WebRequest.PreAuthenticate: And this is precisely what the WebRequest.PreAuthenticate property does: It’s a caching mechanism that caches the connection credentials for a given domain in the active process and resends them on subsequent requests. It does not send credentials on the first request but it will cache credentials on subsequent requests after authentication has succeeded: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rick", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rstrahl", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); which results in the desired sequence, where only the first request doesn’t send credentials. This is quite useful as it saves quite a few round trips to the server – basically it saves one auth request for every authenticated request you make. In most scenarios I think you’d want to send these credentials this way but one downside to this is that there’s no way to log out the client. Since the client always sends the credentials once authenticated only an explicit operation ON THE SERVER can undo the credentials by forcing another login explicitly (i.e. re-challenging with a forced 401 request). Forcing Basic Authentication Credentials on the first Request: On a few occasions I’ve needed to send credentials on a first request – mainly to some oddball third party Web Services (why you’d want to use Basic Auth on a Web Service is beyond me – don’t ask but it’s not uncommon in my experience). This is true of certain services that are using Basic Authentication (especially some Apache based Web Services) and REQUIRE that the authentication is sent right from the first request. No challenge first. Ugly but there it is. Now the following works only with Basic Authentication because it’s pretty straightforward to create the Basic Authorization ‘token’ in code since it’s just an unencrypted encoding of the user name and password into base64. As you might guess this is totally insecure and should only be used when using HTTPS/SSL connections (I’m not in this example so I can capture the Fiddler trace and my local machine doesn’t have a cert installed, but for production apps ALWAYS use SSL with basic auth).
The idea is that you simply add the required Authorization header to the request on your own along with the authorization string that encodes the username and password: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "rick"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); This works and causes the request to immediately send auth information to the server. However, this only works with Basic Auth because you can actually create the authentication credentials easily on the client because it’s essentially clear text. The same doesn’t work for Windows or Digest authentication since you can’t easily create the authentication token on the client and send it to the server. Another issue with this approach is that PreAuthenticate has no effect when you manually force the authentication. As far as Web Request is concerned it never sent the authentication information so it’s not actually caching the value any longer. If you run 3 requests in a row like this: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "ricks"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); you’ll find the trace looking like this: where the first request (the one we explicitly add the header to) authenticates, the second challenges, and any subsequent ones then use the PreAuthenticate credential caching. In effect you’ll end up with one extra 401 request in this scenario, which is still better than 401 challenges on each request. Getting Access to WebRequest in Classic .NET Web Service Clients If you’re running a classic .NET Web Service client (non-WCF) one issue with the above is how do you get access to the WebRequest to actually add the custom headers to do the custom Authentication described above? 
    One easy way is to implement a partial class that allows you to add headers with something like this: public partial class TaxService { protected NameValueCollection Headers = new NameValueCollection(); public void AddHttpHeader(string key, string value) { this.Headers.Add(key,value); } public void ClearHttpHeaders() { this.Headers.Clear(); } protected override WebRequest GetWebRequest(Uri uri) { HttpWebRequest request = (HttpWebRequest) base.GetWebRequest(uri); request.Headers.Add(this.Headers); return request; } } where TaxService is the name of the .NET generated proxy class. In code you can then call AddHttpHeader() anywhere to add additional headers which are sent as part of the GetWebRequest override. Nice and simple once you know where to hook it. For WCF there’s a bit more work involved by creating a message extension as described here: http://weblogs.asp.net/avnerk/archive/2006/04/26/Adding-custom-headers-to-every-WCF-call-_2D00_-a-solution.aspx. FWIW, I think that HTTP header manipulation should be readily available on any HTTP based Web Service client DIRECTLY without having to subclass or implement a special interface hook. But alas a little extra work is required in .NET to make this happen. Not a Common Problem, but when it happens… This has been one of those issues that is really rare, but it’s bitten me on several occasions when dealing with oddball Web services – a couple of times in my own work interacting with various Web Services and a few times on customer projects that required interaction with credentials-first services. Since the servers determine the protocol, we don’t have a choice but to follow the protocol. Lovely following standards that implementers decide to ignore, isn’t it? :-} © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET, CSharp, Web Services
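
    For anyone hitting the same credentials-first services on current .NET, the manual-header approach above ports directly to HttpClient. A minimal sketch, assuming a Basic-auth endpoint (the URL and credentials are placeholders):

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class PreemptiveBasicAuth
    {
        static async Task Main()
        {
            const string user = "rick";   // placeholder credentials
            const string pwd = "secret";
            string token = Convert.ToBase64String(Encoding.UTF8.GetBytes(user + ":" + pwd));

            using var client = new HttpClient();
            // Attach the Authorization header up front so the very first request
            // carries credentials and no 401 challenge round trip is needed.
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

            string body = await client.GetStringAsync("https://example.com/protected/resource");
            Console.WriteLine(body.Length);
        }
    }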

    Read the article

  • Why is ValidateInput(False) not working?

    - by xenosyde
    I am converting an application I created using WebForms to the ASP.NET MVC framework using VB.NET. I have a problem with one of my views. I get the yellow screen of death saying "A potentially dangerous Request.Form value was detected from the client" when I submit my form. I am using TinyMCE as my RTE. I have set ValidateRequest="false" on the view itself, but from what I've read so far, MVC doesn't respect it on the view. So I put it on the controller action as well. I have tried different setups: <ValidateInput(False), AcceptVerbs(HttpVerbs.Post)> _ ...and... <AcceptVerbs(HttpVerbs.Post), ValidateInput(False)> _ ...and like this as well... <ValidateInput(False)> _ <AcceptVerbs(HttpVerbs.Post)> _ Just to see if it made a difference, yet I still get the yellow screen of death. I only want to set it for this view and the specific action in my controller that my post pertains to. Am I missing something?
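
    If the project targets ASP.NET 4, one frequent cause is that request validation now runs earlier in the pipeline, so <ValidateInput(False)> on the action is only honored once web.config switches validation back with <httpRuntime requestValidationMode="2.0" /> under <system.web>. A minimal sketch of the per-action opt-out, assuming that switch is in place (C# shown for brevity; the controller, action and form field names are illustrative):

    using System.Web.Mvc;

    public class ArticlesController : Controller
    {
        [AcceptVerbs(HttpVerbs.Post)]
        [ValidateInput(false)]   // skip request validation for this action only
        public ActionResult Save(FormCollection form)
        {
            // The raw TinyMCE markup arrives without triggering the
            // "potentially dangerous Request.Form value" exception.
            string html = form["Body"];
            TempData["Preview"] = html;
            return RedirectToAction("Index");
        }
    }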

    Read the article

  • Android HttpClient and HTTPS

    - by user309769
    Hi all, I'm new to implementing HTTPS connections in Android. Essentially, I'm trying to connect to a server using the org.apache.http.client.HttpClient. I believe, at some point, I'll need to access the application's keystore in order to authorize my client with a private key. But, for the moment, I'm just trying to connect and see what happens; I keep getting an HTTP/1.1 400 Bad Request error. I can't seem to make heads or tails of this despite many examples (none of them seem to work for me). My code looks like this (the BODY constant is XmlRPC): private void connect() throws IOException, URISyntaxException{ HttpPost post = new HttpPost(new URI(PROD_URL)); HttpClient client = new DefaultHttpClient(); post.setEntity(new StringEntity(BODY)); HttpResponse result = client.execute(post); Log.d("MainActivity", result.getStatusLine().toString()); } So, pretty simple. Let me know if anyone out there has any advice. Thanks!

    Read the article

  • Why does my MacBook Pro trackpad sometimes set its tracking speed to slow, all by itself?

    - by Paul D. Waite
    Every now and then, the trackpad on my MacBook Pro will seem to set its own tracking speed to slow. I’ll notice that the cursor is moving slowly, and when I check in System Preferences, the tracking speed is indeed at slow, even though I never set it to slow myself. This might happen before/after switching into a VMWare virtual machine, but I’m not sure. It doesn’t seem to happen on startup or anything, just during use. Anyone else seen this?

    Read the article

  • cancelPreviousPerformRequestsWithTarget is not canceling my previously delayed thread started with performSelector

    - by jmurphy
    Hello, I've launched a delayed thread using performSelector but the user still has the ability to hit the back button on the current view causing dealloc to be called. When this happens my thread still seems to be called which causes my app to crash because the properties that thread is trying to write to have been released. To solve this I am trying to call cancelPreviousPerformRequestsWithTarget to cancel the previous request but it doesn't seem to be working. Below are some code snippets. - (void) viewDidLoad { [self performSelector:@selector(myStopUpdatingLocation) withObject:nil afterDelay:6]; } - (void)viewWillDisappear:(BOOL)animated { [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(myStopUpdatingLocation) object:nil]; } Am I doing something incorrect here? The method myStopUpdatingLocation is defined in the same class from which I'm calling the perform requests. A little more background. The function that I'm trying to implement is to find a user's location, search Google for some locations around that location and display several annotations on the map. On viewDidLoad I start updating the location with CLLocationManager. I've built in a timeout after 6 seconds in case I don't get my desired accuracy within that time, and I'm using a performSelector to do this. What can happen is the user clicks the back button in the view and this thread will still execute even though all my properties have been released, causing a crash. Thanks in advance! James

    Read the article

  • Javascript + PHP $_POST array empty

    - by Peterim
    While trying to send a POST request via xmlhttp.open("POST", "url", true) (javascript) to the server I get an empty $_POST array. Firebug shows that the data is being sent. Here is the data string from Firebug: a=1&q=151a45a150.... But $_POST['q'] returns nothing. The interesting thing is that file_get_contents('php://input') does have my data (the string above), but PHP somehow doesn't recognize it. Tried both $_POST and $_REQUEST, nothing works. Headers being sent: POST /test.php HTTP/1.1 Host: website.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us;q=0.7,en;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://website.com/ Content-Length: 156 Content-Type: text/plain; charset=UTF-8 Pragma: no-cache Cache-Control: no-cache Thank you for any suggestions.

    Read the article

  • How to set the request start time with HAProxy?

    - by Tupy
    I would like to measure the time of the full request stack. New Relic captures the time spent in the middleware (e.g. Java, Python, Ruby) and the request time (see https://newrelic.com/docs/features/tracking-front-end-time). For this, I need to set the X-Request-Start header as the request passes through the HAProxy load balancer. The haproxy.cfg should look like: backend www balance roundrobin mode http reqadd "X-Request-Start" UNKNOWN_TIME_FUNCTION() server servername 192.168.0.1:80 weight 1 check Is there a native HAProxy function to replace UNKNOWN_TIME_FUNCTION()?

    Read the article

  • Unable to set maxReceivedMessageSize through web.config

    - by Michael Mortensen
    Hello there, I have now investigated the 400 - Bad Request code for the last two hours. A lot of suggestions go towards ensuring the bindingConfiguration attribute is set correctly, and in my case, it is. Now, I need YOUR help before destroying the building I am in :-) I run a WCF RESTful service (very lightweight, using this resource for inspiration: http://msdn.microsoft.com/en-us/magazine/dd315413.aspx) which (for now) accepts an XmlElement (POX) provided through the POST verb. I am currently ONLY using Fiddler's request builder before implementing a true client (as this is a mixed environment). When I do this for XML smaller than 65K, it works fine - larger, it throws this exception: The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element. Here is my web.config file (which I even included the client-tag for (desperate times!)): <system.web> <httpRuntime maxRequestLength="1500000" executionTimeout="180"/> </system.web> <system.serviceModel> <diagnostics> <messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" /> </diagnostics> <bindings> <webHttpBinding> <binding name="WebHttpBinding" maxReceivedMessageSize="1500000" maxBufferPoolSize="1500000" maxBufferSize="1500000" closeTimeout="00:03:00" openTimeout="00:03:00" receiveTimeout="00:10:00" sendTimeout="00:03:00"> <readerQuotas maxStringContentLength="1500000" maxArrayLength="1500000" maxBytesPerRead="1500000" /> <security mode="None"/> </binding> </webHttpBinding> </bindings> <client> <endpoint address="" binding="webHttpBinding" bindingConfiguration="WebHttpBinding" contract="Commerce.ICatalogue"/> </client> <services> <service behaviorConfiguration="ServiceBehavior" name="Catalogue"> <endpoint address="" behaviorConfiguration="RestFull" binding="webHttpBinding" bindingConfiguration="WebHttpBinding" contract="Commerce.ICatalogue" /> <!-- endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" / --> </service> </services> <behaviors> <endpointBehaviors> <behavior name="RestFull"> <webHttp/> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="ServiceBehavior"> <serviceDebug httpHelpPageEnabled="true" includeExceptionDetailInFaults="true"/> <serviceMetadata httpGetEnabled="true"/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> Thanks in advance for any help leading to a successful call with >65K XML ;-)
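
    One detail worth double-checking in the config above: WCF matches the name attribute of the <service> element against the fully qualified type name of the service class, and if it does not match, the whole <service> block (including the endpoint that points at the WebHttpBinding configuration) is ignored and the host falls back to defaults, whose maxReceivedMessageSize is exactly the 65536 in the exception. Since the contract is Commerce.ICatalogue, the service name probably needs the namespace as well; a hedged sketch, assuming the implementation class is Commerce.Catalogue:

    <services>
      <service behaviorConfiguration="ServiceBehavior" name="Commerce.Catalogue">
        <!-- endpoints unchanged; they now pick up the WebHttpBinding configuration above -->
      </service>
    </services>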

    Read the article

  • Streaming binary data to WCF rest service gives Bad Request (400) when content length is greater than 64k

    - by Mikey Cee
    I have a WCF service that takes a stream: [ServiceContract] public class UploadService : BaseService { [OperationContract] [WebInvoke(BodyStyle=WebMessageBodyStyle.Bare, Method=WebRequestMethods.Http.Post)] public void Upload(Stream data) { // etc. } } This method is to allow my Silverlight application to upload large binary files, the easiest way being to craft the HTTP request by hand from the client. Here is the code in the Silverlight client that does this: const int contentLength = 64 * 1024; // 64 Kb var request = (HttpWebRequest)WebRequest.Create("http://localhost:8732/UploadService/"); request.AllowWriteStreamBuffering = false; request.Method = WebRequestMethods.Http.Post; request.ContentType = "application/octet-stream"; request.ContentLength = contentLength; using (var outputStream = request.GetRequestStream()) { outputStream.Write(new byte[contentLength], 0, contentLength); outputStream.Flush(); using (var response = request.GetResponse()); } Now, in the case above, where I am streaming 64 kB of data (or less), this works OK and if I set a breakpoint in my WCF method, and I can examine the stream and see 64 kB worth of zeros - yay! The problem arises if I send anything more than 64 kB of data, for instance by changing the first line of my client code to the following: const int contentLength = 64 * 1024 + 1; // 64 kB + 1 B This now throws an exception when I call request.GetResponse(): The remote server returned an error: (400) Bad Request. In my WCF configuration I have set maxReceivedMessageSize, maxBufferSize and maxBufferPoolSize to 2147483647, but to no avail. Here are the relevant sections from my service's app.config: <service name="UploadService"> <endpoint address="" binding="webHttpBinding" bindingName="StreamedRequestWebBinding" contract="UploadService" behaviorConfiguration="webBehavior"> <identity> <dns value="localhost" /> </identity> </endpoint> <host> <baseAddresses> <add baseAddress="http://localhost:8732/UploadService/" /> </baseAddresses> </host> </service> <bindings> <webHttpBinding> <binding name="StreamedRequestWebBinding" bypassProxyOnLocal="true" useDefaultWebProxy="false" hostNameComparisonMode="WeakWildcard" sendTimeout="00:05:00" openTimeout="00:05:00" receiveTimeout="00:05:00" maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" maxBufferPoolSize="2147483647" transferMode="StreamedRequest"> <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647" /> </binding> </webHttpBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="webBehavior"> <webHttp /> </behavior> <endpointBehaviors> </behaviors> How do I make my service accept more than 64 kB of streamed post data?
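
    One thing that stands out in the config above: the endpoint references the binding through bindingName rather than bindingConfiguration (the attribute used in the working config of the previous question on this page). bindingName only affects the WSDL binding name, so the StreamedRequestWebBinding settings are never applied and the endpoint runs with webHttpBinding defaults: buffered transfer and a 65536-byte maxReceivedMessageSize, which matches the 64 kB cliff seen here. As a sanity check, a minimal self-hosted sketch that applies the same settings programmatically (the UploadService class and base address come from the post; everything else is illustrative):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    class StreamedHostSketch
    {
        static void Main()
        {
            var binding = new WebHttpBinding
            {
                TransferMode = TransferMode.StreamedRequest,   // stream the request body
                MaxReceivedMessageSize = 2147483647,           // lift the 64 KB default
                MaxBufferSize = 2147483647                     // header buffering limit
            };

            var host = new WebServiceHost(typeof(UploadService),
                new Uri("http://localhost:8732/UploadService/"));
            host.AddServiceEndpoint(typeof(UploadService), binding, "");
            host.Open();

            Console.WriteLine("Listening; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }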

    Read the article

  • Symfony: ajax call causes server to queue subsequent queries

    - by Remiz
    Hello, I have a problem with my application when an ajax call to the server takes too much time: it queues all the other queries from the user until it's done server side (I realized that canceling the call client side has no effect and the user still has to wait). Here is my test case: <script type="text/javascript" src="jquery-1.4.1.min.js"></script> <a href="another-page.php">Go to another page on the same server</a> <script type="text/javascript"> url = 'http://localserver/some-very-long-complex-query'; $.get(url); </script> So when the get is fired and I then click on the link, the server finishes serving the first call before bringing me to the other page. My problem is that I want to avoid this behavior. I'm on a LAMP server and I'm looking for a way to inform the server that the user aborted the query, with a function like connection_aborted(); do you think that's the way to go? Also, I know that the longest part of this PHP script is a MySQL query, so even if connection_aborted() can detect that the user canceled the call, I still need to check this during the MySQL query... I'm not really sure that PHP can handle this kind of "event". So if you have any better idea, I can't wait to hear it. Thank you. Update: After further investigation, I found that the problem happens only with the Symfony framework (which I omitted to mention, my bad). It seems that an Ajax call locks out any other future call. It may be related to the controller or the routing system; I'm looking into it. Also, for those interested in the problem, here is my new test case: -a new project with Symfony 1.4.3, default configuration, I just created an app and a default module. -jQuery 1.4 for the ajax query. Here is my actions.class.php (in my only module): class defaultActions extends sfActions { public function executeIndex(sfWebRequest $request) { //Do nothing } public function executeNewpage() { //Do also nothing } public function executeWaitingaction(){ // Wait sleep(30); return false; } } Here is my indexSuccess.php template file: <script type="text/javascript" src="jquery-1.4.1.min.js"></script> <a href="<?php echo url_for('default/newpage');?>">Go to another symfony action</a> <script type="text/javascript"> url = '<?php echo url_for('default/waitingaction');?>'; $.get(url); </script> For the new page template, it's not very relevant... But with this, I'm able to reproduce the lock problem I have in my real application. Is somebody else having the same issue? Thanks.

    Read the article

  • HTTP DOM: request.use? Usage?

    - by Jim G.
    I'm looking at the following code block in javascript: var request = new Request(); if(request.Use()) // What exactly does this do? { // ...do stuff } else { // no ajax support? } I've never seen anyone invoke the request.Use() method. My Question: What exactly does request.Use() check? Does it in fact check for AJAX support? Can anyone redirect me to an online API reference?

    Read the article

  • Will Tracking Subdomains as Single Entity with Google Analytics Help SEO? [closed]

    - by Sam Gridley
    Possible Duplicate: Does Google Analytics data affect SEO? We have two subdomains, one for our blog and one for our ecommerce store. The blog serves to bring traffic and the store is how we monetize the site. We have them designed to appear as one large site, but I know google sees them as two sites. Here is how the subdomains look: www.example.com (store) blog.example.com (blog) I believe I can configure analytics to use subdomain tracking as explained here: http://support.google.com/googleanalytics/bin/answer.py?hl=en&answer=55524 But my question is whether this will cause google to see our 2 subdomains as one larger domain for SEO purposes. In other words, is there any relationship to how you configure google analytics and how google indexes and ranks your website(s) and pages? Is there anything I need to do in anaytics or webmaster tools to make google aware that these two subdomains work together as one website? Thanks! Sam

    Read the article

  • Pulling in changes from a forked repo without a request on GitHub?

    - by Alec
    I'm new to the social coding community and don't know how to proceed properly in this situation: I created a GitHub repository a couple of weeks ago. Someone forked the project and has made some small changes that have been on my to-do list. I'm thrilled someone forked my project and took the time to add to it. I'd like to pull the changes into my own code, but have a couple of concerns. 1) I don't know how to pull in the changes via git from a forked repo. My understanding is that there is an easy way to merge the changes via a pull request, but it appears as though the forker has to issue that request? 2) Is it acceptable to pull in changes without a pull request? This relates to the first one. I'd put the code aside for a couple of weeks and come back to find that what I was going to work on next was done by someone else, and I don't want to just copy their code without giving them credit in some way. Shouldn't there be a way to pull the changes in even if they don't explicitly ask you to? What's the etiquette here? I may be overthinking this, but thanks for your input in advance. I'm pretty new to the hacker community, but I want to do what I can to contribute!
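
    On the mechanics of the first question: the usual approach is to add the fork as a second remote, fetch it, and merge (or cherry-pick) the commits you want; the commits keep their original author, so credit is preserved in the history automatically. A sketch, with the fork URL and branch name as placeholders:

    git remote add contributor https://github.com/contributor/project.git
    git fetch contributor
    git merge contributor/master      # or: git cherry-pick <sha> for individual commits
    git push origin master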

    Read the article

  • Dashboard to aggregate Google Analytics, Facebook, YouTube etc tracking data?

    - by Richard
    I'd like to see as much tracking data as possible about my online presence, in one single dashboard - so views/conversions from Google Analytics data, the performance of my Facebook campaigns via the Insights API, views/clicks from my YouTube campaigns, etc. This could be as simple as a graph with time on the x-axis, and key indicators from each source on the y-axis (conversions from Analytics, likes on Facebook, views on YouTube, etc). The idea is that I can see customer engagement with each source, over time. I can write my own such dashboard easily enough, but I wondered if there was something off-the-shelf that already did this. Apologies if this isn't the right forum for such a question - would appreciate tips for the best place to ask.

    Read the article

  • Tracking logged in vs. non-logged in users in Google Analytics

    - by Justin
    I am building a social media site that is similar is structure to twitter and facebook.com where unauthenticated users who go to https://mysite.com will see a login + sign-up page, and authenticated users who go to https://mysite.com will see their timeline. My question is, what is the best practice (using Google Analytics) for tracking these two different types of users who are viewing completely different content but are visiting the same URL. I tried searching the Google Analytics docs but couldn't find what they suggested for this scenario. Perhaps I just don't know what keywords to search for. Thanks in advance for any help.

    Read the article

  • Weird response for controller.request.format.html? in Rails

    - by Tony
    In my main controller, I have this: class MainController < ApplicationController before_filter do |controller| logger.info "controller.request.format.html? = #{controller.request.format.html?}" logger.info "controller.request.format.fbml? = #{controller.request.format.fbml?}" controller.send :login_required if controller.request.format.html? controller.send :facebook_auth_required if controller.request.format.fbml? end As expected, I get "true" for the ...fbml? line if a request comes from Facebook (my facebooker gem automatically sets the format). However, I get "5" for the ...html? line if the request comes from Facebook. Why would a method with a ? ever return a "5"? Isn't that against Rails conventions? Also, I think "5" is considered true so this might mess up my filters. Still looking into that... Any ideas?

    Read the article

  • JSF: How to forward a request to another page in action?

    - by Satya
    I want to forward the request to another page from my action class. I am using the code below in my JSF action: public String submitUserResponse(){ ...... ...... parseResponse(uri, request, response); .... return "nextpage"; } public String parseResponse(String uri, HttpServletRequest request, HttpServletResponse response){ if(uri != null){ RequestDispatcher dispatcher = request.getRequestDispatcher(uri); dispatcher.forward(request, response); return null; } ................. .................. return "xxxx"; } The "submitUserResponse" method is called when the user clicks the submit button on the JSP, and in the normal flow this method returns the "nextpage" string and the request is forwarded to the next page. But in my case I call "submitUserResponse()", which executes and forwards the request to the next page. The forward does happen, but the server logs this exception: java.lang.IllegalStateException: Cannot forward after response has been committed My doubts are: 1. Why are the next lines of code executed after forwarding my request using dispatcher.forward()? The same thing happens with response.sendRedirect("").

    Read the article

  • How do I add a header to a VB.NET 2008 SOAP request? [migrated]

    - by robokev
    I have a VB.NET 2008 program that accesses a Siebel web service defined by a WSDL and using the SOAP protocol. The Siebel web service requires that a header containing the username, password and session type be included with the service request, but the header is not defined in the WSDL. So, when I test the WSDL using the soapUI utility, the request as defined by the WSDL looks like this: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:lov="http://www.siebel.com/xml/LOVService" xmlns:lis="http://www.siebel.com/xml/ListQuery"> <soapenv:Header/> <soapenv:Body> <lov:EAILOVGetListOfValues_Input> <lis:ListsQuery> <lis:ListQuery> <lis:Active>Y</lis:Active> <lis:LanguageCode>ENU</lis:LanguageCode> <lis:Type>CUT_ACCOUNT_TYPE</lis:Type> </lis:ListQuery> </lis:ListsQuery> </lov:EAILOVGetListOfValues_Input> </soapenv:Body> </soapenv:Envelope> But the above does not work because it contains an empty header that is missing user and session credentials. It only works if I manually replace <soapenv:Header/> with a header containing the username, password, and session type as follows: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:lov="http://www.siebel.com/xml/LOVService" xmlns:lis="http://www.siebel.com/xml/ListQuery"> <soapenv:Header> <UsernameToken xmlns="http://siebel.com/webservices">TESTUSER</UsernameToken> <PasswordText xmlns="http://siebel.com/webservices">TESTPASSWORD</PasswordText> <SessionType xmlns="http://siebel.com/webservices">None</SessionType> </soapenv:Header> <soapenv:Body> <lov:EAILOVGetListOfValues_Input> <lis:ListsQuery> <lis:ListQuery> <lis:Active>Y</lis:Active> <lis:LanguageCode>ENU</lis:LanguageCode> <lis:Type>CUT_ACCOUNT_TYPE</lis:Type> </lis:ListQuery> </lis:ListsQuery> </lov:EAILOVGetListOfValues_Input> </soapenv:Body> </soapenv:Envelope> My problem is that I cannot sort out how to translate the above into VB.NET 2008 code. I have no problem importing the WSDL into Visual Studio 2008, defining the service in VB code and referencing the web service methods. However, I cannot sort out how to define the web service in VB such that the updated header in included in the web service request instead of the empty header. Consequently all my service requests from VB fail. I can define a class that inherits from the SoapHeader class... Public Class MySoapHeader : Inherits System.Web.Services.Protocols.SoapHeader Public Username As String Public Password As String Public SessionType As String End Class ...but how do I include this header in the SOAP request made from VB?

    Read the article

  • What is correct HTTP status code when redirecting to a login page?

    - by PHP_Jedi
    When a user is not logged in and tries to access an page that requires login, what is the correct HTTP status code for a redirect to the login page? I don't feel that any of the 3xx fit that description. 10.3.1 300 Multiple Choices The requested resource corresponds to any one of a set of representations, each with its own specific location, and agent- driven negotiation information (section 12) is being provided so that the user (or user agent) can select a preferred representation and redirect its request to that location. Unless it was a HEAD request, the response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content- Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection. If the server has a preferred choice of representation, it SHOULD include the specific URI for that representation in the Location field; user agents MAY use the Location field value for automatic redirection. This response is cacheable unless indicated otherwise. 10.3.2 301 Moved Permanently The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise. The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request. 10.3.3 302 Found The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field. The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 302 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. Note: RFC 1945 and RFC 2068 specify that the client is not allowed to change the method on the redirected request. However, most existing user agent implementations treat 302 as if it were a 303 response, performing a GET on the Location field-value regardless of the original request method. 
The status codes 303 and 307 have been added for servers that wish to make unambiguously clear which kind of reaction is expected of the client. 10.3.4 303 See Other The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable. The different URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). Note: Many pre-HTTP/1.1 user agents do not understand the 303 status. When interoperability with such clients is a concern, the 302 status code may be used instead, since most user agents react to a 302 response as described here for 303. 10.3.5 304 Not Modified If the client has performed a conditional GET request and access is allowed, but the document has not been modified, the server SHOULD respond with this status code. The 304 response MUST NOT contain a message-body, and thus is always terminated by the first empty line after the header fields. The response MUST include the following header fields: - Date, unless its omission is required by section 14.18.1 If a clockless origin server obeys these rules, and proxies and clients add their own Date to any response received without one (as already specified by [RFC 2068], section 14.19), caches will operate correctly. - ETag and/or Content-Location, if the header would have been sent in a 200 response to the same request - Expires, Cache-Control, and/or Vary, if the field-value might differ from that sent in any previous response for the same variant If the conditional GET used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. Otherwise (i.e., the conditional GET used a weak validator), the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers. If a 304 response indicates an entity not currently cached, then the cache MUST disregard the response and repeat the request without the conditional. If a cache uses a received 304 response to update a cache entry, the cache MUST update the entry to reflect any new field values given in the response. 10.3.6 305 Use Proxy The requested resource MUST be accessed through the proxy given by the Location field. The Location field gives the URI of the proxy. The recipient is expected to repeat this single request via the proxy. 305 responses MUST only be generated by origin servers. Note: RFC 2068 was not clear that 305 was intended to redirect a single request, and to be generated by origin servers only. Not observing these limitations has significant security consequences. 10.3.7 306 (Unused) The 306 status code was used in a previous version of the specification, is no longer used, and the code is reserved. 10.3.8 307 Temporary Redirect The requested resource resides temporarily under a different URI. Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field. 
The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s) , since many pre-HTTP/1.1 user agents do not understand the 307 status. Therefore, the note SHOULD contain the information necessary for a user to repeat the original request on the new URI. If the 307 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. I'm using 302 for now, until I find THE correct answer.

    Read the article

  • Is this technique for stat tracking without a database workable?

    - by baptzmoffire
    If I wanted to create a chess game, for iOS, that tracked both players' moves (for retracing the progression of a game and for player stats), what would be the simplest route to take? To clarify, I want to track not only the moves a player has made in a particular game, but how often that player has made that move in past games. For example, I want to be able to track: how many times a given player has opened by moving the king pawn up two squares (e4) as white, on move number one; what percentage of the time the player responds to white's e4 opening move by moving his own king pawn to e5; what percentage of the time he responds by moving his queenside bishop pawn to c5; and so on. If it's not clear, the stat tracking system should also be able to report how many times this player, as black, moved his queen to h1 on move number 30. I'm using Parse.com as my backend-as-a-service (BaaS). If I were to create a class that writes strings identifying the move number, player color, moved piece, and algebraic notation of the square (e.g. "d8") to a file locally in the file system; save the file to Parse and delete the temporary file from the file system; and then, upon opening the same game in my table view (a la a "With Friends" game), download the file from Parse, parse through it to retrieve all stats/history, and assign all relevant values to variables - is this plan viable, or is there an easier way?
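
    Rather than writing free-form strings to a file and re-parsing them later, it can be simpler to store each move as a structured record and aggregate counts by (move number, color, square); a sketch of that aggregation (C# is used purely for illustration, and the sample data is made up; on iOS with Parse the same shape would map to a class with those columns):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    record MoveRecord(int MoveNumber, string Color, string Piece, string Square);

    class MoveStats
    {
        static void Main()
        {
            var history = new List<MoveRecord>
            {
                new MoveRecord(1, "white", "pawn", "e4"),
                new MoveRecord(1, "black", "pawn", "e5"),
                new MoveRecord(1, "white", "pawn", "e4"),   // another game, same opening
            };

            // How often does this player open 1. e4 as white?
            int e4Openings = history.Count(m =>
                m.MoveNumber == 1 && m.Color == "white" && m.Square == "e4");

            // Percentage of games answering 1. e4 with 1... e5
            var blackReplies = history.Where(m => m.MoveNumber == 1 && m.Color == "black").ToList();
            double e5Percent = blackReplies.Count == 0
                ? 0
                : 100.0 * blackReplies.Count(m => m.Square == "e5") / blackReplies.Count;

            Console.WriteLine($"1. e4 played {e4Openings} times; 1... e5 reply rate {e5Percent:F1}%");
        }
    }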

    Read the article

  • Hough transformation for iris detection in opencv

    - by iva123
    Hi, I wrote the code for iris detection and it works well. I can also crop the eye location of a face. Now I want to detect the iris in the cropped image by applying the Hough transformation (cvHoughCircle). However, when I try this procedure, the system is not able to find any circle in the image. Maybe the reason is that there is noise in the image, but I don't think that's it. So, how can I detect the iris? I have the code for binary thresholding; maybe I can use it, but I don't know how. Any help is really appreciated. thx :)

    Read the article

  • Detect Click into Iframe using JavaScript

    - by Russ Bradberry
    I understand that it is not possible to tell what the user is doing inside an iframe if it is cross domain. What I would like to do is track whether the user clicked at all in the iframe. I imagine a scenario where there is an invisible div on top of the iframe and the div will then just pass the click event through to the iframe. Is something like this possible? If it is, then how would I go about it? The iframes are ads, so I have no control over the tags that are used.

    Read the article

  • How to track Google Analytics Events in Server Side asp.net?

    - by Raj
    Hello, Is there a way to track Google Analytics Events from the server side in ASP.NET? The requirement is that the event should be tracked on button click, after some functionality has executed on the server side. With OnClientClick on the button we cannot fulfill this requirement completely, as the server-side functionality can sometimes fail but the event would still get tracked in Google. Please help me in this regard. Appreciate expert answers. Thanks in advance, Raj
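
    One way to do this server side (assuming the site uses Universal Analytics; GA4 has a different Measurement Protocol) is to post the event directly to the collection endpoint after the server-side work succeeds. A hedged sketch, with the tracking ID, client ID and event names as placeholders:

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ServerSideGaEvent
    {
        static readonly HttpClient Client = new HttpClient();

        static async Task TrackEventAsync(string category, string action, string label)
        {
            var payload = new Dictionary<string, string>
            {
                ["v"] = "1",                          // protocol version
                ["tid"] = "UA-XXXXXXX-1",             // placeholder tracking ID
                ["cid"] = Guid.NewGuid().ToString(),  // anonymous client ID
                ["t"] = "event",
                ["ec"] = category,
                ["ea"] = action,
                ["el"] = label
            };
            // Post the hit to the (legacy) Universal Analytics Measurement Protocol.
            await Client.PostAsync("https://www.google-analytics.com/collect",
                new FormUrlEncodedContent(payload));
        }

        static async Task Main()
        {
            // e.g. call this after the server-side button-click work has completed successfully
            await TrackEventAsync("Orders", "Submit", "Checkout button");
        }
    }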

    Read the article
