Search Results

Search found 92198 results on 3688 pages for 'http error'.


  • jQuery $.ajax calls success handler when request fails because of browser reloading

    - by Martin
    I have the following code:

        $.ajax({
            type: "POST",
            url: url,
            data: sendable,
            dataType: "json",
            success: function(data) {
                if (customprocessfunc)
                    customprocessfunc(data);
            },
            error: function(XMLHttpRequest, textStatus, errorThrown) {
                // error handler here
            }
        });

    I have a timer which makes AJAX requests often. If I do not receive anything in 'data', I show an error message to the user - it means something went bad on the server. The problem is when the user reloads the page while the AJAX call is in progress. I can see in Firebug that the AJAX call fails (the URL is colored red and no HTTP status is displayed), so I expect jQuery to stop the request or at least go to the error handler. But it goes to the success handler and passes null in the 'data' variable. As a result, when the user reloads the page, he sometimes sees my big red message about an unknown error (because data is null). Is there any way to make jQuery abort the request on page reload, or at least not call my success function? In the success handler I have no way to know why the data is null - did it come back empty from the server, or was the call aborted because of the reload?
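
    One hedged workaround (a sketch, not from the original post): a request killed by navigation typically reports HTTP status 0, so the success handler can bail out on that, optionally backed by a beforeunload flag:

        var unloading = false;
        $(window).bind('beforeunload', function() { unloading = true; });

        $.ajax({
            type: "POST",
            url: url,
            data: sendable,
            dataType: "json",
            success: function(data, textStatus, xhr) {
                // an aborted request typically reports HTTP status 0 and null data
                if (unloading || xhr.status === 0 || data === null) {
                    return; // the browser killed the request, not the server
                }
                if (customprocessfunc) customprocessfunc(data);
            }
        });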

  • Access.Application.CurrentDb is Nothing?

    - by Allain Lalonde
    I'm at a loss to explain this one: I'm getting error 91 ("Object variable or With block variable not set") on the second line below:

        Dim rs As DAO.Recordset
        Set rs = CurrentDb.OpenRecordset("SELECT * FROM employees")

    The following also causes it:

        Set rs = CurrentDb.OpenRecordset("employees")

    Executing ?CurrentDb.Name alone in the Immediate window raises the error as well. Now, clearly the database is open, since I'm editing the form within it, so what can cause this error here?
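
    A hedged diagnostic sketch (an assumption, not a confirmed fix): test whether CurrentDb really returns Nothing and fall back to the DBEngine's first open database, which sometimes responds when CurrentDb does not:

        Dim db As DAO.Database
        Set db = CurrentDb
        If db Is Nothing Then
            ' fall back to the first database in the default workspace
            Set db = DBEngine.Workspaces(0).Databases(0)
        End If

        Dim rs As DAO.Recordset
        Set rs = db.OpenRecordset("SELECT * FROM employees")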

  • How should I handle expected errors? eg. "username already exists"

    - by Pheter
    I am struggling to understand how I should design the error handling parts of my code. I recently asked a similar question about how I should go about returning server error codes to the user, e.g. 404 errors. I learnt that I should handle the error from within the current part of the application; seems simple enough. However, what should I do when I can't handle the error from the current link in the chain? For example, I may have a class that is used to manage authentication. One of its methods could be createUser($username, $password). Within that function, I need to determine if the username already exists. If it does, how should I alert the calling code? Returning null instead of a user object is one way, but how do I then know what caused the error? How should I handle errors in such a way that calling code can easily find out what caused them? Is there a design pattern commonly used for this kind of situation?
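
    One commonly cited pattern (a sketch; Auth and usernameExists() are hypothetical stand-ins for your own class): throw a dedicated exception type, so calling code can catch exactly the failure it cares about:

        class DuplicateUsernameException extends Exception {}

        class Auth
        {
            public function createUser($username, $password)
            {
                if ($this->usernameExists($username)) {   // assumed helper
                    throw new DuplicateUsernameException("Username '$username' is already taken");
                }
                // ... create and return the new user object
            }
        }

        try {
            $user = $auth->createUser('bob', 'secret');
        } catch (DuplicateUsernameException $e) {
            // the calling code knows exactly what went wrong
            echo $e->getMessage();
        }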

  • C++ enumeration

    - by asli
    Hi, my question is about enumeration. My code is:

        #include <iostream>
        using namespace std;

        int main()
        {
            enum bolumler { programcilik, donanim, muhasebe, motor, buro } bolum;
            bolum = donanim;
            cout << bolum << endl;
            bolum += 2; /* bolum = motor */
            cout << bolum;
            return 0;
        }

    The output should be 1 and 3, but with this code I get the error:

        error C2676: binary '+=' : 'enum main::bolumler' does not define this operator or a conversion to a type acceptable to the predefined operator
        Error executing cl.exe.
        111.obj - 1 error(s), 0 warning(s)

    Can you help me? My other question is: what can I do if I want to see the output as a name, like "muhasebe"?
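
    A hedged sketch of the usual fixes: arithmetic on a plain enum needs an explicit cast back to the enum type, and printing an enumerator's name takes a parallel array of strings:

        #include <iostream>
        using namespace std;

        int main()
        {
            enum bolumler { programcilik, donanim, muhasebe, motor, buro };
            // names indexed by enumerator value, for printing
            const char* adlar[] = { "programcilik", "donanim", "muhasebe", "motor", "buro" };

            bolumler bolum = donanim;
            cout << bolum << endl;                      // 1
            bolum = static_cast<bolumler>(bolum + 2);   // enum -> int -> enum
            cout << bolum << endl;                      // 3
            cout << adlar[muhasebe] << endl;            // "muhasebe"
            return 0;
        }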

  • Avoiding try/catch hell in my web pages

    - by Shaun_web
    I am writing an ASP.NET website, which is a new framework for me. I find that I have a try/catch block in literally every method of my code-behind. All these try/catch blocks do is catch the exception and then pop up an error message to the user. Isn't there some sort of global error handler in ASP.NET? It's worth noting that my error handling is within control (ASCX) pages, and I would like a way to simply get each ASCX to handle its own errors, without forcing all error handling into a single master page or a redirect...
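
    For reference, ASP.NET does ship a global hook; a minimal sketch (assuming a generic ~/Error.aspx page exists) of Application_Error in Global.asax, though it is application-wide rather than per-ASCX:

        // Global.asax.cs
        protected void Application_Error(object sender, EventArgs e)
        {
            Exception ex = Server.GetLastError();
            // log ex here, then clear it and show a friendly page
            Server.ClearError();
            Response.Redirect("~/Error.aspx"); // assumption: a generic error page
        }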

  • Proper mechanism for sending PHP errors to the client

    - by Chris
    Greetings, I was trying to discover a proper way to send captured errors or business-logic exceptions to the client in an Ajax-PHP system. In my case, the browser needs to react differently depending on whether a request was successful or not. However, in all the examples I've found, only a simple string is reported back to the browser in both cases, e.g.:

        if (something worked)
            echo "Success!";
        else
            echo "ERROR: that failed";

    So when the browser gets back the Ajax response, the only way to know if an error occurred would be to parse the string (looking for 'error', perhaps). This seems clunky. Is there a better/proper way to send back the Ajax response and notify the browser of an error? Thank you.
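
    One common convention (a sketch; $somethingWorked and $result are placeholders for your own logic): return JSON with an explicit status flag, optionally paired with a non-200 status code so the client's error callback fires:

        <?php
        // report status in a machine-readable envelope
        header('Content-Type: application/json');

        if ($somethingWorked) {            // assumption: your success check
            echo json_encode(array('success' => true, 'data' => $result));
        } else {
            header('HTTP/1.1 500 Internal Server Error'); // or a suitable 4xx
            echo json_encode(array('success' => false, 'error' => 'that failed'));
        }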

  • Zend Framework: How to handle exceptions in Ajax requests?

    - by understack
    Normally when an exception is thrown, the Error controller takes over and displays an error page with the regular common header and footer. This behavior is not wanted in an Ajax request, because in case of an error a whole HTML page is sent over - and in cases where I'm loading the content of the HTTP response directly into a div, this is even more unwanted. Instead, for an Ajax request I just want to receive the actual error thrown by the exception. How can I do this? I think one dirty way could be to set a var in the Ajax request and process accordingly - not a good solution.
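
    One approach that keeps the Error controller (a sketch against Zend Framework 1 APIs): branch on isXmlHttpRequest() inside errorAction, disabling layout and view rendering for Ajax callers:

        class ErrorController extends Zend_Controller_Action
        {
            public function errorAction()
            {
                $errors = $this->_getParam('error_handler');

                if ($this->getRequest()->isXmlHttpRequest()) {
                    // Ajax: skip layout and view, return just the message
                    $this->_helper->layout()->disableLayout();
                    $this->_helper->viewRenderer->setNoRender(true);
                    echo json_encode(array(
                        'error' => $errors->exception->getMessage()
                    ));
                    return;
                }

                // ... regular HTML error page for normal requests
            }
        }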

  • Android: HTTPClient

    - by primal
    Hi, I was trying the http-client tutorials from svn.apache.org. While running the application I get the following error in the console:

        [2010-04-30 09:26:36 - HalloAndroid] ActivityManager: java.lang.SecurityException: Permission Denial: starting Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.org.example/.HalloAndroid } from null (pid=-1, uid=-1) requires android.permission.INTERNET

    I have added android.permission.INTERNET in AndroidManifest.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
              package="com.org.example"
              android:versionCode="1"
              android:versionName="1.0">
            <application android:icon="@drawable/icon" android:label="@string/app_name">
                <activity android:name=".HalloAndroid"
                          android:label="@string/app_name"
                          android:permission="android.permission.INTERNET">
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>
            <uses-permission android:name="android.permission.INTERNET"></uses-permission>
        </manifest>

    The Java code in HalloAndroid.java is as follows:

        HttpClient httpclient = new DefaultHttpClient();
        HttpGet httpget2 = new HttpGet("http://google.com/");
        HttpResponse response2 = null;
        try {
            response2 = httpclient.execute(httpget2);
        } catch (ClientProtocolException e1) {
            e1.printStackTrace();
        } catch (IOException e1) {
            e1.printStackTrace();
        }
        HttpEntity entity = response2.getEntity();
        if (entity != null) {
            long len = entity.getContentLength();
            if (len != -1 && len < 2048) {
                try {
                    Log.d(TAG, EntityUtils.toString(entity));
                } catch (ParseException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            } else {
                // Stream content out
            }
        }

    Any help is much appreciated.
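
    One detail the error message points at (a hedged reading, not a confirmed fix): android:permission on the <activity> requires the caller - here the launcher - to hold INTERNET, which matches this exact denial. The app-level <uses-permission> already grants network access, so the attribute can likely just be dropped:

        <!-- sketch: no android:permission on the activity; the app-level
             <uses-permission> element is what grants INTERNET access -->
        <activity android:name=".HalloAndroid"
                  android:label="@string/app_name">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>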

  • Win32: What is the status of chunked encoding support in WinHttpReadData?

    - by Cheeso
    The documentation for WinHttpReadData says, regarding HTTP's chunked transfer coding: "Starting in Windows Vista and Windows Server 2008, WinHttp enables applications to perform chunked transfer encoding on data sent to the server. When the Transfer-Encoding header is present on the WinHttp response, WinHttpReadData strips the chunking information before giving the data to the application." Can anyone decipher this?

    Q1: First, this text is on the page for WinHttpReadData, which is used to read data within an HTTP client application - specifically the response data. So what does it mean when it says "Starting in Windows Vista and Windows Server 2008, WinHttp enables applications to perform chunked transfer encoding on data sent to the server"? The WinHttpReadData function isn't used with data being sent to the server; it is used when reading data from the server. Consulting the doc for the WinHttpWriteData function, which is used to send data to the server as part of an HTTP request, there is no mention of the chunked transfer capability.

    Q2: Supposing that I figure out just what the newish chunked transfer support amounts to, how do I get that support? The doc says it is new in Vista and WS2008. What happens if I write an app that runs on WS2003 and uses WinHttpReadData and it encounters a chunked response, or WinHttpWriteData when it wants to send a chunked request? Between the lines, is this documentation saying that I need to link against the Vista-era Windows SDK, or later, in order to get the capability to do chunked encoding? Or is it really impossible on WS2003 - in other words, must an app doing chunked transfer with this library run on the OS specified? This might read like a rant, but it's not. I truly want to know.

  • Javascript Post Request like a Form Submit

    - by Joseph Holsten
    I'm trying to direct a browser to a different page. If I wanted a GET request, I might say:

        document.location.href = 'http://example.com/q=a';

    But the resource I'm trying to access won't respond properly unless I use a POST request. If this were not dynamically generated, I might use the HTML:

        <form action="http://example.com/" method="POST">
            <input type="hidden" name="q" value="a">
        </form>

    Then I would just submit the form from the DOM. But really I would like JavaScript that allows me to say:

        post_to_url('http://example.com/', {'q':'a'});

    What's the best cross-browser implementation?

    Edit: I'm sorry I was not clear. I need a solution that changes the location of the browser, just like submitting a form. If this is possible with XMLHttpRequest, it is not obvious. And this should not be asynchronous, nor use XML, so AJAX is not the answer.
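
    A minimal sketch of the usual approach - a hypothetical post_to_url helper matching the signature above, which builds a hidden form and submits it so the browser navigates exactly as with a static form:

        function post_to_url(url, params) {
            var form = document.createElement('form');
            form.method = 'POST';
            form.action = url;
            for (var key in params) {
                if (params.hasOwnProperty(key)) {
                    var input = document.createElement('input');
                    input.type = 'hidden';
                    input.name = key;
                    input.value = params[key];
                    form.appendChild(input);
                }
            }
            document.body.appendChild(form); // required by some browsers
            form.submit();                   // navigates, just like a real form
        }

        post_to_url('http://example.com/', {'q': 'a'});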

  • setcookie, Cannot modify header information - headers already sent

    - by Nano HE
    Hi, I am new to PHP. I practised PHP setcookie() just now and failed.

    http://localhost/test/index.php:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <title></title>
        </head>
        <body>
        <?php
        $value = 'something from somewhere';
        setcookie("TestCookie", $value);
        ?>
        </body>
        </html>

    http://localhost/test/view.php:

        <?php
        // I plan to view the cookie value via view.php
        echo $_COOKIE["TestCookie"];
        ?>

    But index.php fails, with a warning like this:

        Warning: Cannot modify header information - headers already sent by (output started at C:\xampp\htdocs\test\index.php:9) in C:\xampp\htdocs\test\index.php on line 12

    I enabled cookies in my IE 6, no doubt. Is there anything wrong in my procedure above? Thank you. WinXP OS and XAMPP 1.7.3 used.
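
    For what it's worth, a sketch of the usual fix: cookies travel in the HTTP response headers, so setcookie() must run before any output at all - the doctype on line 1 already counts as output:

        <?php
        // send the cookie first, before a single byte of HTML
        $value = 'something from somewhere';
        setcookie("TestCookie", $value);
        ?>
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html>
        <head><title></title></head>
        <body></body>
        </html>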

  • Issue with IHttpHandler and relative URLs

    - by vtortola
    Hi, I've developed an IHttpHandler class and configured it as verb="*" path="*", so I'm handling all requests with it in an attempt to create my own REST implementation for a test web site that generates its HTML dynamically. So, when a request for a .css file arrives, I have to do something like context.Response.WriteFile(Server.MapPath(url)) - same for pictures and so on; I have to serve everything myself. My main issue is when I put relative URLs in the anchors. For example, I have a main page with a link like <a href="page1">Go to Page 1</a>, and in Page 1 I have another link <a href="page2">Go to Page 2</a>. Page 1 and 2 are supposed to be at the same level (http://host/page1 and http://host/page2), but when I click on "Go to Page 2", I get this URL in the handler: ~/page1/~/page2 - which is a pain, because I have to clean it with url = url.Substring(url.LastIndexOf('~')), although I feel that there is nothing wrong and this behavior is totally normal. Right now I can cope with it, but I think that in the future this is going to bring me some headaches. I've tried to set all the links to absolute URLs using the information in context.Request.Url, but that's also a pain :D, so I'd like to know if there is a nicer way to do this kind of thing. Don't hesitate to give me pretty obvious responses, because I'm pretty new to web development and am probably missing something basic about URLs and HTTP. Thanks in advance and kind regards.
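
    A hedged suggestion (assuming the pages live at the site root): use root-relative hrefs, which the browser resolves against the host rather than against the current path:

        <!-- resolves to http://host/page2 no matter which "page" you are on -->
        <a href="/page1">Go to Page 1</a>
        <a href="/page2">Go to Page 2</a>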

  • Access SSAS cube from across domains without direct database connection

    - by SuperKing
    Hello, I'm working with SQL Server Analysis Services for the first time and have the dilemma of a project in which users must be able to access SSAS cubes (via a custom web dashboard) that live across different servers and domains, but without having access to the other server's SSAS database connection strings. So Organization A and Organization B will have their own cubes on their own servers, but Organization A users must be able to view Organization B's cubes, and vice versa, while neither organization has access to the other's connection string.

    I've read about allowing HTTP access to the SSAS server and cube via the link below, but that requires setting up users for authentication or allowing anonymous access to one organization's server for users of another organization, and I'm not sure this would be acceptable for this situation, or whether it's the preferred way to do this. Is performance acceptable here? http://technet.microsoft.com/en-us/library/cc917711.aspx

    I also wonder if it makes sense to run a nightly/weekly process that accesses the other organization's SSAS database via a web service or something, pulls that data into a database on the organization's own server, and then rebuilds the cube. That cube could then be queried without having to connect to the other organization's server. Has anyone else attempted to accomplish something similar? Is HTTP access the standard way to go for this? Or are there any other possible options? Thanks, and please let me know if you need more info; I'm still unclear on how some of this works.

  • Login Website, curious Cookie Problem

    - by Collin Peters
    Hello. Language: C#. Development environment: Visual Studio 2008. Sorry if the English is not perfect. I want to log in to a website and get some data from there. My problem is that the cookies do not work: every time, the website says that I should activate cookies, but I activated them through a CookieContainer. I sniffed the traffic several times during the login process and I see no problem there. I tried different methods to log in, and I searched for others with this problem, but no results... The login page is www.uploaded.to. Here is my code, in short form:

        private void login()
        {
            // global CookieContainer for all the cookies
            CookieContainer _cookieContainer = new CookieContainer();

            // first, log in to the website
            HttpWebRequest _request1 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login");
            _request1.Method = "POST";
            _request1.CookieContainer = _cookieContainer;

            string _postData = "email=XXXXX&password=XXXXX";
            byte[] _byteArray = Encoding.UTF8.GetBytes(_postData);
            Stream _reqStream = _request1.GetRequestStream();
            _reqStream.Write(_byteArray, 0, _byteArray.Length);
            _reqStream.Close();

            HttpWebResponse _response1 = (HttpWebResponse)_request1.GetResponse();
            _response1.Close();

            // follow the link from request 1
            HttpWebRequest _request2 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login?coo=1");
            _request2.Method = "GET";
            _request2.CookieContainer = _cookieContainer;
            HttpWebResponse _response2 = (HttpWebResponse)_request2.GetResponse();
            _response2.Close();

            // get the data from the page after login
            HttpWebRequest _request3 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/home");
            _request3.Method = "GET";
            _request3.CookieContainer = _cookieContainer;
            HttpWebResponse _response3 = (HttpWebResponse)_request3.GetResponse();
            _response3.Close();
        }

    I've been stuck on this problem for many weeks and have found no solution that works. Please help...
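
    One hedged observation (an assumption, not a verified fix): the login POST never declares a form content type, and some servers ignore the body without it. Setting it before writing the request stream costs one line:

        // assumption: the site expects a standard URL-encoded form post
        _request1.ContentType = "application/x-www-form-urlencoded";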

  • Does mod_php honor HEAD requests properly?

    - by rkulla
    The HTTP/1.1 RFC stipulates: "The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response." I know Apache honors the RFC, but modules don't have to. My question is: does mod_php5 honor this? The reason I ask is that I just came across an article saying that PHP developers should check this themselves with:

        if (stripos($_SERVER['REQUEST_METHOD'], 'HEAD') !== FALSE) {
            exit();
        }

    But seeing as how browsers send HEAD requests for cache checking, it seems unlikely to me that no book, docs, etc. advise PHP developers to do this check. I googled a second and not much turned up, other than some people saying they try strange things like mod_rewrite/redirect after getting HEAD requests, and some old bug ticket from around 2002 claiming that mod_php still executed the rest of the script by default. So I ran a quick test by using PECL::HTTP to run http_head('http://mysite.com/test-head-request.php') while having

        <?php error_log('REST OF SCRIPT STILL RAN'); ?>

    in test-head-request.php, to see if the rest of the script still executed - and it didn't. I figure that should be enough to settle it, but I want to get more feedback and maybe help clear up confusion for anyone else who has wondered about this. So if anyone knows off the top of their head (no pun intended) - or has any conventions for receiving HEAD requests - that'd be great. Otherwise, I'll grep the C source later and respond in a comment with my findings. Thanks.

  • Translating CURL to FLEX HTTPRequests

    - by Joshua
    I am trying to convert some cURL code to Flex/ActionScript. Since I am 100% ignorant about cURL, 50% ignorant about Flex, and 90% ignorant about HTTP in general... I'm having some significant difficulty. The following cURL code is from http://code.google.com/p/ga-api-http-samples/source/browse/trunk/src/v2/accountFeed.sh and I have every reason to believe it works correctly:

        USER_EMAIL="[email protected]"  # insert your Google Account email here
        USER_PASS="secretpass"                  # insert your password here

        googleAuth="$(curl https://www.google.com/accounts/ClientLogin -s \
            -d Email=$USER_EMAIL \
            -d Passwd=$USER_PASS \
            -d accountType=GOOGLE \
            -d source=curl-accountFeed-v2 \
            -d service=analytics \
            | awk /Auth=.*/)"

        feedUri="https://www.google.com/analytics/feeds/accounts/default\
        ?prettyprint=true"

        curl $feedUri --silent \
            --header "Authorization: GoogleLogin $googleAuth" \
            --header "GData-Version: 2"

    The following is my abortive attempt to translate the above cURL to AS3:

        var request:URLRequest = new URLRequest("https://www.google.com/analytics/feeds/accounts/default");
        request.method = URLRequestMethod.POST;
        var GoogleAuth:String = "$(curl https://www.google.com/accounts/ClientLogin -s " +
            "-d [email protected] " +
            "-d Passwd=secretpass " +
            "-d accountType=GOOGLE " +
            "-d source=curl-accountFeed-v2" +
            "-d service=analytics " +
            "| awk /Auth=.*/)";
        request.requestHeaders.push(new URLRequestHeader("Authorization", "GoogleLogin " + GoogleAuth));
        request.requestHeaders.push(new URLRequestHeader("GData-Version", "2"));

        var loader:URLLoader = new URLLoader();
        loader.dataFormat = URLLoaderDataFormat.BINARY;
        loader.addEventListener(Event.COMPLETE, GACompleteHandler);
        loader.addEventListener(IOErrorEvent.IO_ERROR, GAErrorHandler);
        loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, GAErrorHandler);
        loader.load(request);

    This probably gives you all a good laugh, and that's okay, but if you can find any pity on me, please let me know what I'm missing. I readily admit functional ineptitude, so letting me know how stupid I am is optional.
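
    An untested sketch of the translation (names like loginLoader are hypothetical): the shell fragment cannot run inside AS3, so the ClientLogin POST has to be made from Flex itself, with the Auth token extracted from the response before requesting the feed:

        // step 1: POST the credentials, as curl -d does
        var loginReq:URLRequest = new URLRequest("https://www.google.com/accounts/ClientLogin");
        loginReq.method = URLRequestMethod.POST;
        var vars:URLVariables = new URLVariables();
        vars.Email = "[email protected]";
        vars.Passwd = "secretpass";
        vars.accountType = "GOOGLE";
        vars.source = "flex-accountFeed-v2";
        vars.service = "analytics";
        loginReq.data = vars;

        var loginLoader:URLLoader = new URLLoader();
        loginLoader.addEventListener(Event.COMPLETE, function(e:Event):void {
            // step 2: the response body contains a line like "Auth=<token>"
            var match:Object = /Auth=(.*)/.exec(String(loginLoader.data));
            if (match == null) return; // login failed
            var token:String = match[1];

            // step 3: request the feed with the GoogleLogin header
            var feedReq:URLRequest = new URLRequest("https://www.google.com/analytics/feeds/accounts/default");
            feedReq.requestHeaders.push(new URLRequestHeader("Authorization", "GoogleLogin auth=" + token));
            feedReq.requestHeaders.push(new URLRequestHeader("GData-Version", "2"));
            var feedLoader:URLLoader = new URLLoader();
            feedLoader.addEventListener(Event.COMPLETE, GACompleteHandler);
            feedLoader.load(feedReq);
        });
        loginLoader.load(loginReq);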

  • Why doesn't Default route work using Html.ActionLink in this case?

    - by StuperUser
    I have a rather peculiar issue with routing. Coming back to routing after not having to worry about its configuration for a year, I am using the default route and the ignore route for resources:

        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
        routes.MapRoute(
            "Default", // Route name
            "{controller}/{action}/{id}", // URL with parameters
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });

    I have a RulesController with actions for Index and Lorem, and Index.aspx and Lorem.aspx in the Views/Rules directory. I have an ActionLink aimed at Rules/Index on the master page:

        <li><div><%: Html.ActionLink("linkText", "Index", "Rules")%></div></li>

    The link is being rendered as http://localhost:12345/Rules/ and I am getting a 404. When I type Index into the URL, the application routes it to the action. When I change the default route action from "Index" to "Lorem", the ActionLink is rendered as http://localhost:12345/Rules/Index (adding the Index, as it's no longer the default route action) and the application routes to the Index action correctly. I have used Phil Haack's Routing Debugger, but entering the URL http://localhost:12345/Rules/ causes a 404 using that too. I think I've covered all of the rookie mistakes, relevant SO questions and basic RTFMs. I'm assuming that "Rules" isn't any sort of reserved word in routing. Other than updating the routes and debugging them, what can I look at?

  • How to POST to a URL using node.js

    - by Mr JSON
    I am trying to POST some JSON to a URL. I saw various other questions about this on Stack Overflow, but none of them seemed to be clear or to work. This is how far I got; I modified the example in the API docs:

        var http = require('http');
        var google = http.createClient(80, 'server');
        var request = google.request('POST', '/get_stuff',
            {'host': 'server', 'content-type': 'application/json'});
        request.write(JSON.stringify(some_json), 'utf8'); // possibly need to escape as well?
        request.end();
        request.on('response', function (response) {
            console.log('STATUS: ' + response.statusCode);
            console.log('HEADERS: ' + JSON.stringify(response.headers));
            response.setEncoding('utf8');
            response.on('data', function (chunk) {
                console.log('BODY: ' + chunk);
            });
        });

    When I post this to the server, I get an error telling me that it's not of the JSON format or that it's not UTF-8, which they should be. I tried to pull the request URL but it is null. I am just starting with node.js, so please be nice.
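
    One hedged guess at the missing piece (a sketch against the same old createClient API; some_json is the object from the question): the request never declares a Content-Length, so the server may never read the body:

        var http = require('http');

        var body = JSON.stringify(some_json);
        var client = http.createClient(80, 'server');
        var request = client.request('POST', '/get_stuff', {
            'host': 'server',
            'content-type': 'application/json',
            // without this (or chunked encoding), many servers ignore the body
            'content-length': Buffer.byteLength(body, 'utf8')
        });
        request.write(body, 'utf8');
        request.end();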

  • Why is Firefox prompting to download a file that is POST'd to?

    - by alex
    This is the most peculiar thing. It is from an old in-house CMS. When I attempt to submit my changes, it prompts to save the file linked in the action attribute of the form.

    Request headers:

        POST /~site/edit/articles/article_save.php?id=54 HTTP/1.1
        Host: example.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://example.com
        Content-Type: multipart/form-data; boundary=---------------------------10102754414578508781458777923
        Content-Length: 940

    Request body:

        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="title"

        Home Content
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="catid"

        18
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="activecheck"

        1
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="image"

        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="contentWidgToolbarSelectBlock"

        <p>
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="content"

        <p>Edit your article in this text box.</p>
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="contentWidgEditor"

        true
        -----------------------------10102754414578508781458777923--

    Response:

        HTTP/0.9 200 OK

    And then Firefox shows the save dialog. I can't determine from the response headers why this is prompting to open/save. It has always worked, and all other PHP files on the site work fine. Anyone have a clue? Thanks.

    Update: Apparently, it just crashes Safari.

  • Is there a max recommended size on bundling js/css files due to chunking or packet loss?

    - by George Mauer
    So we have all heard that it's good to bundle your JavaScript. Of course it is, but it seems to me that the story is too simple. See if my logic makes sense here. Obviously fewer HTTP requests means fewer round trips and hence better. However - and I don't know much about bare HTTP - aren't HTTP responses sent in chunks? And if a file is larger than one of those chunks, doesn't it have to be downloaded as multiple (possibly synchronous?) round trips? As opposed to this, several requests for files just under the chunking size would arrive much quicker, since modern web browsers download resources like JavaScript in parallel. Even if chunking is not an issue, it seems like there would be some maximum recommended size just due to the likelihood of packet loss alone, since a bundled file must wait until it is entirely downloaded to execute, versus the more lenient native rule that scripts must execute in order. Obviously there are also matters of browser caching and code volatility to consider, but can someone confirm this or explain why I'm off base? Does anyone have any numbers to put to it?

  • Exclude subdirectory from rewrite rule in web.config

    - by Clog
    This question comes up often, but I can only find solutions for PHP, Apache, .htaccess etc., not for web.config. I would like my pages to be served over HTTP, not HTTPS, except for forms within certain subdirectories. I have created the following web.config file, but how do I exclude a subdirectory called forms?

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Force all to HTTP" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions>
                    <add input="{HTTPS}" pattern="on" ignoreCase="true" />
                  </conditions>
                  <action type="Redirect" redirectType="Found" url="http://www.mysite.com/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Many thanks, all you clever clogs.
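
    A hedged sketch of one way to do it with the IIS URL Rewrite module (assuming forms sits at the site root): add a negated condition so anything under /forms never matches the redirect rule:

        <conditions>
          <add input="{HTTPS}" pattern="on" ignoreCase="true" />
          <!-- skip anything under /forms so those pages stay on HTTPS -->
          <add input="{REQUEST_URI}" pattern="^/forms/" negate="true" ignoreCase="true" />
        </conditions>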

  • Understanding REST: is GET fundamentally incompatible with any "number of views" counter?

    - by cocotwo
    I'm trying to understand REST. Under REST, a GET must not trigger something transactional on the server (this is a definition everybody agrees upon; it is fundamental to REST). So imagine you've got a website like stackoverflow.com (I say "like" so that if I got the underlying details of SO wrong, it doesn't change anything about my question) where every time someone reads a question, using a GET, there's also some display showing "This question has been read 256 times". Now someone else reads that question. The counter is now at 257. The GET is transactional because the number of views got incremented and is now incremented again. The "number of views" is incremented in the DB; there's no arguing about that (for example, on SO the number of times any question has been viewed is always displayed).

    So, is a REST GET fundamentally incompatible with any kind of "number of views" functionality in a website? To be "RESTful", should the SO main page either stop displaying plain HTML links that are accessed using GETs, or stop displaying "this question has been viewed x times"? Because incrementing a counter in a DB is transactional and hence "un-RESTful"?

    EDIT: just so that people Googling this can get some pointers, from http://www.xfront.com/REST-Web-Services.html: "4. All resources accessible via HTTP GET should be side-effect free. That is, the request should just return a representation of the resource. Invoking the resource should not result in modifying the resource." Now, to me, if the representation contains the "number of views", it is part of the resource [and on SO the "number of views" a question has is a very important piece of information], and accessing it definitely modifies the resource. This is in sharp contrast with, say, a true RESTful HTTP GET like the one you can make on an Amazon S3 resource, where your GET is guaranteed not to modify the resource you get back. But then I'm still very confused.

  • Process data BEFORE a 301 Redirect?

    - by Jesse
    So, I've been working on a PHP link shortener (I know, just what the world needs). Basically, when the page loads, PHP determines where it needs to go and sends a 301 header to redirect the browser, like so:

        header("HTTP/1.1 301 Moved Permanently");
        header("Location: http://newsite.com");

    Now, I'm trying to add some tracking to my redirects and insert some custom analytics data into a MySQL table before the redirect happens. It works perfectly if I don't specify the redirect type and just use:

        header("Location: http://newsite.com");

    But as soon as you add in the 301 header, nothing else gets processed. Actually, on the first request it sends the data to MySQL, but on any subsequent requests there's no communication with the database. I assume it's a browser caching issue: once it's seen the 301, it decides there's no reason to parse anything on future requests. But does anyone know if there's any way to get around this? I'd really like to keep it as a 301 for SEO purposes (I believe if you don't specify, it sends a 302 by default?). I thought about using .htaccess to prepend a file to the page that will do the MySQL work, but with the 301, wouldn't that just get ignored as well? Anyway, I'm not sure if there's any solution other than using a different type of redirect, but I'm not ready to give up just yet. So, any suggestions would be much appreciated. Thanks!
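
    A hedged sketch of one way around the caching (record_click is a hypothetical stand-in for the MySQL insert): do the database work before sending the headers, and tell the browser not to cache the 301 - or fall back to a 302/307:

        <?php
        record_click($linkId); // hypothetical: your MySQL analytics insert

        header("HTTP/1.1 301 Moved Permanently");
        // without this, browsers may cache the 301 and skip the server entirely
        header("Cache-Control: no-store, no-cache, must-revalidate");
        header("Location: http://newsite.com");
        exit;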
