Search Results

Search found 415 results on 17 pages for '302'.

Page 14/17

  • dig works but dig +trace <domain_name> not working

    - by anoopmathew
    On my local system I can't get the proper result from dig +trace, but plain dig works fine. I'm using Ubuntu 10.04 LTS. I'm attaching the results of dig and dig +trace with this update. dig +trace gmail.com ; <<>> DiG 9.7.0-P1 <<>> +trace gmail.com ;; global options: +cmd ;; Received 12 bytes from 4.2.2.4#53(4.2.2.4) in 291 ms dig gmail.com ; <<>> DiG 9.7.0-P1 <<>> gmail.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59528 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;gmail.com. IN A ;; ANSWER SECTION: gmail.com. 49 IN A 74.125.236.118 gmail.com. 49 IN A 74.125.236.117 ;; Query time: 302 msec ;; SERVER: 4.2.2.4#53(4.2.2.4) ;; WHEN: Sat Oct 13 14:57:56 2012 ;; MSG SIZE rcvd: 59 Could anyone please suggest a solution for this issue? I'm quite worried about it.
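
    dig +trace bypasses the configured resolver and walks the delegation chain itself, starting at the root servers, so it needs outbound DNS to arbitrary hosts, while plain dig only has to reach 4.2.2.4. The 12-byte reply above is essentially an empty referral, which usually points at that outbound path rather than at dig itself. A quick way to check it, as a diagnostic sketch in Python (assuming dig is on the PATH and using 198.41.0.4, a.root-servers.net, as the test target):

        import subprocess

        # Query a root server directly, bypassing the configured resolver.
        # If this times out while "dig gmail.com" via 4.2.2.4 works, outbound
        # DNS to arbitrary servers is being blocked or rewritten, which is
        # exactly the situation in which +trace fails.
        result = subprocess.run(
            ["dig", "@198.41.0.4", ".", "NS", "+norecurse", "+time=3"],
            capture_output=True, text=True,
        )
        print(result.stdout or result.stderr)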

    Read the article

  • 401 Unauthorized returned on GET request (https) with correct credentials

    - by Johnny Grass
    I am trying to login to my web app using HttpWebRequest but I keep getting the following error: System.Net.WebException: The remote server returned an error: (401) Unauthorized. Fiddler has the following output: Result Protocol Host URL 200 HTTP CONNECT mysite.com:443 302 HTTPS mysite.com /auth 401 HTTP mysite.com /auth This is what I'm doing: // to ignore SSL certificate errors public bool AcceptAllCertifications(object sender, System.Security.Cryptography.X509Certificates.X509Certificate certification, System.Security.Cryptography.X509Certificates.X509Chain chain, System.Net.Security.SslPolicyErrors sslPolicyErrors) { return true; } try { // request Uri uri = new Uri("https://mysite.com/auth"); HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri) as HttpWebRequest; request.Accept = "application/xml"; // authentication string user = "user"; string pwd = "secret"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); request.Headers.Add("Authorization", auth); ServicePointManager.ServerCertificateValidationCallback = new System.Net.Security.RemoteCertificateValidationCallback(AcceptAllCertifications); // response. HttpWebResponse response = (HttpWebResponse)request.GetResponse(); // Display Stream dataStream = response.GetResponseStream(); StreamReader reader = new StreamReader(dataStream); string responseFromServer = reader.ReadToEnd(); Console.WriteLine(responseFromServer); // Cleanup reader.Close(); dataStream.Close(); response.Close(); } catch (WebException webEx) { Console.Write(webEx.ToString()); } I am able to log in to the same site with no problem using ASIHTTPRequest in a Mac app like this: NSURL *login_url = [NSURL URLWithString:@"https://mysite.com/auth"]; ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:login_url]; [request setDelegate:self]; [request setUsername:name]; [request setPassword:pwd]; [request setRequestMethod:@"GET"]; [request addRequestHeader:@"Accept" value:@"application/xml"]; [request startAsynchronous];
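
    One detail worth pulling out of the Fiddler trace above: the 302 comes back on the HTTPS request to /auth, the 401 comes back on a follow-up request that has dropped to plain HTTP, and a manually added Authorization header is generally not re-sent by HttpWebRequest when it follows a redirect. A hedged diagnostic sketch (Python with the requests library, reusing the host, path and credentials from the question) that stops at the 302 so each hop can be inspected separately:

        import requests

        # Turn off automatic redirect handling so the 302 and its Location
        # header can be inspected directly, instead of letting the client
        # follow it and potentially drop the credentials on the next hop.
        resp = requests.get(
            "https://mysite.com/auth",
            auth=("user", "secret"),
            headers={"Accept": "application/xml"},
            allow_redirects=False,  # look at the 302 instead of following it
            verify=False,           # mirrors AcceptAllCertifications in the C# code
        )
        print(resp.status_code, resp.headers.get("Location"))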

    Read the article

  • ExpertPDF and Caching of URLs

    - by Josh
    We are using ExpertPDF to take URLs and turn them into PDFs. Everything we do is through memory, so we build up the request, read the stream into ExpertPDF, and then write the bits to file. All the files we have been requesting so far are just plain HTML documents. Our designers update CSS files or change the HTML and re-request the documents as PDFs, but oftentimes things are getting cached. For example, if I rename the only CSS file and view the HTML page through a web browser, the page looks broken because the CSS doesn't exist. But if I request that page through the PDF generator, it still looks OK, which means the CSS is cached somewhere. Here's the relevant PDF creation code: // Create a request HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url); request.UserAgent = "IE 8.0"; request.ContentType = "application/x-www-form-urlencoded"; request.Method = "GET"; // Send the request HttpWebResponse resp = (HttpWebResponse)request.GetResponse(); if (resp.IsFromCache) { System.Web.HttpContext.Current.Trace.Write("FROM THE CACHE!!!"); } else { System.Web.HttpContext.Current.Trace.Write("not from cache"); } // Read the response pdf.SavePdfFromHtmlStream(resp.GetResponseStream(), System.Text.Encoding.UTF8, "Output.pdf"); When I check the trace file, nothing is being loaded from cache. I checked the IIS log file and found a 200 response coming from the request, even after a file had been updated (I would expect a 302). We've tried putting the No-Cache attribute on all HTML pages, but still no luck. I even turned off all caching at the IIS level. Is there anything in ExpertPDF that might be caching somewhere, or something I can do to the request object to force a hard refresh of all resources? UPDATE: I put ?foo at the end of my style href links and this updates the CSS every time. Is there a setting someplace that can prevent stylesheets from being cached so I don't have to resort to this inelegant solution?
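
    The ?foo workaround in the update works because the changed query string gives every cache along the way a new cache key for the stylesheet. A sketch of the same idea applied programmatically, plus no-cache request headers, is below; it is plain Python rather than the ExpertPDF code, the page URL is a placeholder, and whether ExpertPDF's internal fetcher honours these request headers for linked CSS is an assumption that would need testing.

        import time
        import urllib.request

        url = "http://example.com/page-to-render.html"  # placeholder for the real page

        # Give the fetch a unique query string so intermediate caches treat it as
        # a brand-new URL, and ask caches not to serve a stored copy at all.
        busted = f"{url}?nocache={int(time.time())}"
        req = urllib.request.Request(busted, headers={
            "Cache-Control": "no-cache",
            "Pragma": "no-cache",
            "User-Agent": "IE 8.0",
        })
        with urllib.request.urlopen(req) as resp:
            html = resp.read()
        print(len(html), "bytes fetched")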

    Read the article

  • swfupload session problem destroy session

    - by saquib
    Hello friends, I have a problem with SWFUpload. I am passing session_id() like this: /upload-file.php?s=189477fcfa1ec7f630e70a09e1e84cae but it's not maintaining the session and is destroying my current session (logging me out). Here is the code in the upload file: <?php if(isset($_GET['s'])) { session_id($_GET['s']); session_start(); require_once 'admin/class/user.php'; $u = new User(); //Check for user logged in if($u->islogged() == FALSE) { header("location: index.php"); exit(); code continue ..... } Because I am not logged in, the server redirects me to index.php. This is the SWFUpload debugger window output: SWF DEBUG: ----- END SWF DEBUG OUTPUT ---- SWF DEBUG: SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: *.jpg SWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list... SWF DEBUG: Event: fileQueued : File ID: SWFUpload_0_0 SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1 SWF DEBUG: StartUpload: First file in queue SWF DEBUG: Event: uploadStart : File ID: SWFUpload_0_0 SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for upload. Starting upload to /upload-file.php?s='189477fcfa1ec7f630e70a09e1e84cae' for File ID: SWFUpload_0_0 SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_0_0 SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 317793. Total: 317793 SWF DEBUG: Event: uploadError: HTTP ERROR : File ID: SWFUpload_0_0. HTTP Status: 302. SWF DEBUG: Event: uploadComplete : Upload cycle complete. SWF DEBUG: StartUpload: First file in queue SWF DEBUG: StartUpload(): No files found in the queue.

    Read the article

  • PHP header redirection does not reload <iframe> in IE

    - by Marco Demaio
    When displaying data from the DB I'm usually in this situation: (1) I'm on page A.php, which shows data from the DB; (2) the user performs some action (like edit/delete etc.) and page B.php is loaded to perform that action; (3) once page B has performed the action, it redirects the browser back to page A; (4) page A is automatically reloaded by the redirect in step (3), therefore it shows the updated data. In order to make page B redirect to page A I use a simple PHP header("Location: " . "A.php", TRUE, 302); This works well in all situations, except when page A.php is displayed inside an <iframe>: in such a case it does not reload (step 4 does not happen). This seems to happen only in IE7 (I don't know about IE8); it works perfectly in FF/Safari. And only when using an <iframe>: if page A.php is not in an <iframe> it gets refreshed in IE7 as well. In order to solve this I simply added a couple of headers to page A.php to mark it as not cacheable: header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1 header("Expires: Sat, 26 Jul 1997 05:00:00 GMT"); // Date in the past But I was curious whether you might have run into the same issue in the past, and whether you could give me some advice about it?
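
    The same redirect-plus-no-cache pattern, stripped down to just the headers so it is easy to watch in a network inspector, can be sketched outside PHP. Below is a minimal standalone version using Python's http.server (not the original code), with /A and /B standing in for A.php and B.php.

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                if self.path.startswith("/B"):
                    # Page B: perform the action, then bounce back to A, the
                    # equivalent of header("Location: A.php", TRUE, 302).
                    self.send_response(302)
                    self.send_header("Location", "/A")
                    self.end_headers()
                else:
                    # Page A: mark it uncacheable so the browser re-fetches it
                    # inside the <iframe> after the redirect, mirroring the two
                    # headers added in the question.
                    self.send_response(200)
                    self.send_header("Content-Type", "text/html")
                    self.send_header("Cache-Control", "no-cache, must-revalidate")
                    self.send_header("Expires", "Sat, 26 Jul 1997 05:00:00 GMT")
                    self.end_headers()
                    self.wfile.write(b"<p>fresh copy of A</p>")

        HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()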

    Read the article

  • How do I use Perl's LWP to log in to a web application?

    - by maxjackie
    I would like to write a script to log in to a web application and then move to other parts of the application: use HTTP::Request::Common qw(POST); use LWP::UserAgent; use Data::Dumper; $ua = LWP::UserAgent->new(keep_alive=>1); my $req = POST "http://example.com:5002/index.php", [ user_name => 'username', user_password => "password", module => 'Users', action => 'Authenticate', return_module => 'Users', return_action => 'Login', ]; my $res = $ua->request($req); print Dumper(\$res); if ( $res->is_success ) { print $res->as_string; } When I try this code I am not able to log in to the application. The HTTP status code returned is 302 (Found), but with no data. If I post the username/password with everything else that is required, it should return the home page of the application and keep the connection alive so I can move to other parts of the application.
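
    A 302 with an empty body is normally the success path for a form login: the interesting parts are the Set-Cookie header on that response and the page the redirect points to, so the client needs to keep cookies (in LWP terms, give the UserAgent a cookie_jar) and then follow the redirect. A sketch of the same flow in Python's requests library, reusing the field names from the Perl snippet above:

        import requests

        # A Session keeps whatever cookies the login response sets on the 302,
        # and requests follows the redirect to the post-login page by default.
        session = requests.Session()
        resp = session.post(
            "http://example.com:5002/index.php",
            data={
                "user_name": "username",
                "user_password": "password",
                "module": "Users",
                "action": "Authenticate",
                "return_module": "Users",
                "return_action": "Login",
            },
        )
        print(resp.status_code, resp.url)  # final page after following the redirect
        # Later calls on the same session reuse the login cookie, e.g.
        # session.get("http://example.com:5002/index.php?module=Home")  # hypothetical URL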

    Read the article

  • Is it possible to redirect non-HTML files with HTTP? And chaining redirects?

    - by Earlz
    Hello, I have been thinking about a neat way of load balancing, and one thing that would be required is the ability to load an image on an HTML page from multiple locations without rewriting the URL (on each load). So what I need is to have one URL which is the "static" URL, such as http://example.com/myimage.png. The image is not actually hosted on example.com, though. So example.com returns either a 301, 302 or 307 HTTP response to cause a redirect to 2.example.com. How do browsers handle this with images in a situation like this? Also, how do browsers handle multiple redirections, for instance if 2.example.com also didn't contain it and redirected on to 3.example.com? (Note: I am asking this because I've never seen a 301 redirect on anything but an HTML page.) Also, which status code would be best to use? 301 means "moved permanently", but this "move" isn't permanent, so I don't want it cached. Should I use 307? Is that supported by search engines and modern browsers?
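
    Browsers follow 301/302/307 responses for image requests the same way they do for pages, and they follow chains of redirects up to a client-defined hop limit rather than indefinitely. A small sketch using Python's requests library (which caps the chain at 30 hops by default) that walks the chain for the "static" URL and prints each hop:

        import requests

        # Follow the redirect chain for the "static" image URL and show each hop,
        # ending with the server that actually returned the image bytes.
        resp = requests.get("http://example.com/myimage.png", allow_redirects=True)
        for hop in resp.history:
            print(hop.status_code, "->", hop.headers.get("Location"))
        print("final:", resp.status_code, resp.url, resp.headers.get("Content-Type"))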

    Read the article

  • Problem uploading app to google app engine

    - by Oberon
    I'm having problems uploading an app to Google App Engine from my workplace. I believe the problem is related to the proxy, because I do not see the same problem when following the same procedure from home (I do not specify HTTP_PROXY from home). These are the commands I run: HTTP_PROXY=http://proxy.<thehostname>.com:8080 HTTP_PROXY=https://proxy.<thehostname>.com:8080 appcfg.py --insecure update myappfolder When running the commands I get prompted for email and password, as expected, but after that it immediately exits with this error message: Error 302: --- begin server output --- <HTML> <HEAD> <TITLE>Moved Temporarily</TITLE> </HEAD> <BODY BGCOLOR="#FFFFFF" TEXT="#000000"> <H1>Moved Temporarily</H1> The document has moved <A HREF="https://www.google.com/accounts/ClientLogin">here</A>. </BODY> </HTML> --- end server output --- Note: I added the --insecure option because otherwise it gave a warning about a missing ssl module. Any idea how to solve or work around this problem?
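
    One thing worth noting in the commands above: both lines assign HTTP_PROXY (the second simply overwrites the first with an https:// proxy URL), while the 302 points at an https:// ClientLogin address. appcfg.py is a Python tool and, like most Python HTTP clients, picks its proxy settings up from the environment, so a quick diagnostic, a sketch that is not part of appcfg itself, is to print what the process actually sees (urllib.request on Python 3, plain urllib on the Python 2 that appcfg ran on):

        import urllib.request

        # Shows the proxy map a Python process derives from HTTP_PROXY,
        # HTTPS_PROXY and related environment variables. If there is no
        # "https" entry here, requests to https:// URLs (such as the
        # ClientLogin endpoint in the 302 above) will not go through the
        # workplace proxy.
        print(urllib.request.getproxies())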

    Read the article

  • iOS Simulator is black on app execution

    - by Terryn
    I am running xcode 5.1.1 and have been trying to learn objective C / iOS development. Right now whenever I try to run my code on the emulator (I do not have an actual device atm) it comes up with a black screen. Code can be found here. Compiling and running gives me the following error: 2014-08-23 10:42:57.429 Calculator[1862:60b] *** Terminating app due to uncaught exception 'NSUnknownKeyException', reason: '[<XYZViewController 0xe436640> setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key didgetPressed.' *** First throw call stack: ( 0 CoreFoundation 0x017ed1e4 __exceptionPreprocess + 180 1 libobjc.A.dylib 0x0156c8e5 objc_exception_throw + 44 2 CoreFoundation 0x0187cfe1 -[NSException raise] + 17 3 Foundation 0x0122cd9e -[NSObject(NSKeyValueCoding) setValue:forUndefinedKey:] + 282 4 Foundation 0x011991d7 _NSSetUsingKeyValueSetter + 88 5 Foundation 0x01198731 -[NSObject(NSKeyValueCoding) setValue:forKey:] + 267 6 Foundation 0x011fab0a -[NSObject(NSKeyValueCoding) setValue:forKeyPath:] + 412 7 UIKit 0x004e31f4 -[UIRuntimeOutletConnection connect] + 106 8 libobjc.A.dylib 0x0157e7de -[NSObject performSelector:] + 62 9 CoreFoundation 0x017e876a -[NSArray makeObjectsPerformSelector:] + 314 10 UIKit 0x004e1d4d -[UINib instantiateWithOwner:options:] + 1417 11 UIKit 0x0034a6f5 -[UIViewController _loadViewFromNibNamed:bundle:] + 280 12 UIKit 0x0034ae9d -[UIViewController loadView] + 302 13 UIKit 0x0034b0d3 -[UIViewController loadViewIfRequired] + 78 14 UIKit 0x0034b5d9 -[UIViewController view] + 35 15 UIKit 0x0026b267 -[UIWindow addRootViewControllerViewIfPossible] + 66 16 UIKit 0x0026b5ef -[UIWindow _setHidden:forced:] + 312 17 UIKit 0x0026b86b -[UIWindow _orderFrontWithoutMakingKey] + 49 18 UIKit 0x002763c8 -[UIWindow makeKeyAndVisible] + 65 19 UIKit 0x00226bc0 -[UIApplication _callInitializationDelegatesForURL:payload:suspended:] + 2097 20 UIKit 0x0022b667 -[UIApplication _runWithURL:payload:launchOrientation:statusBarStyle:statusBarHidden:] + 824 21 UIKit 0x0023ff92 -[UIApplication handleEvent:withNewEvent:] + 3517 22 UIKit 0x00240555 -[UIApplication sendEvent:] + 85 23 UIKit 0x0022d250 _UIApplicationHandleEvent + 683 24 GraphicsServices 0x037e2f02 _PurpleEventCallback + 776 25 GraphicsServices 0x037e2a0d PurpleEventCallback + 46 26 CoreFoundation 0x01768ca5 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 53 27 CoreFoundation 0x017689db __CFRunLoopDoSource1 + 523 28 CoreFoundation 0x0179368c __CFRunLoopRun + 2156 29 CoreFoundation 0x017929d3 CFRunLoopRunSpecific + 467 30 CoreFoundation 0x017927eb CFRunLoopRunInMode + 123 31 UIKit 0x0022ad9c -[UIApplication _run] + 840 32 UIKit 0x0022cf9b UIApplicationMain + 1225 33 Calculator 0x00002c8d main + 141 34 libdyld.dylib 0x01e34701 start + 1 35 ??? 0x00000001 0x0 + 1 ) libc++abi.dylib: terminating with uncaught exception of type NSException (lldb) I have attempted the following things, and so far nothing has worked 1) checked the deployment info, it has the main interface set to main 2) double checked all break points to turn them off and disabled all of them through Debug-Disable Breakpoints 3) Reset Content and Settings on the iOS Simulator. 4) Checked the issue with the LLDB Debugger, however from what I have read the issue is no longer present with the 5.1.1 xcode. 5) checked the local host is still set to 127.0.0.1 Thanks! - Terryn

    Read the article

  • Difference in clientX and clientY when going out of the browser on ie/ff

    - by Py
    I just ran into a little problem with clientX and clientY. I set up a little event to detect whether the mouse goes out of the window and to know where it exits. And there comes the trouble: it works fine in Firefox, but it only reports -1 in IE. Does someone know an easy way to solve that problem, without using a framework? A little bit of code to reproduce it: <html> <head> <script type="text/javascript"> document.onmouseout=function(e){ if (!e) var e = window.event; var relTarg = e.relatedTarget || e.toElement; if (!relTarg){ document.getElementById('result1').innerHTML="e.clientY:"+e.clientY+" e.clientX:"+e.clientX; } }; </script> </head> <body> <div id="result1">Not Yet</div> </body> </html> The results if I exit through the left of the window are: e.clientY:302 e.clientX:-130 in Firefox, and e.clientY:-1 e.clientX:-1 in IE. Thanks in advance.

    Read the article

  • javax.xml.ws.soap.SOAPFaultException: Could not send Message - at JaxWsClientProxy.invoke - caused by HTTP response code: 401 for URL

    - by Mikkis
    I moved working code from dev to test and encountered the following error(s) in test: javax.xml.ws.soap.SOAPFaultException: Could not send Message. at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:143) ...... at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:64) at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:236) at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:472) at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:302) at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:254) at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:73) at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:123) at $Proxy739.copyIntoItems(Unknown Source) Caused by: java.io.IOException: Server returned HTTP response code: 401 for URL: http:///_vti_bin/Copy.asmx at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1436) at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:379) at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:2046) Environment specs: Java 1.6, Tomcat 6, Eclipse Helios, Maven2, CXF 2.2.3. As background work, I tried to explore errors in similar categories: a bad URL (ruled out, as I am using the same URL in dev and test, and the URL, user ID and password are all accessible from both machines), and a connection timeout (the error is not a 404 and it doesn't say the connection timed out; it says 401 response code for the URL). I also checked that all the jars, in the same versions, are included in the test environment. Can someone shed some light on how to understand and resolve the error? Please let me know if any more details are needed.

    Read the article

  • Forms authentication: disable redirect to the login page

    - by codeka
    I have an application that uses ASP.NET Forms Authentication. For the most part, it's working great, but I'm trying to add support for a simple API via an .ashx file. I want the ashx file to have optional authentication (i.e. if you don't supply an Authorization header, then it just works anonymously). But, depending on what you do, I want to require authentication under certain conditions. I thought it would be a simple matter of responding with status code 401 if the required authentication was not supplied, but it seems like the Forms Authentication module is intercepting that and responding with a redirect to the login page instead. What I mean is, if my ProcessRequest method looks like this: public void ProcessRequest(HttpContext context) { Response.StatusCode = 401; Response.StatusDescription = "Authentication required"; } Then instead of getting a 401 error code on the client, like I expect, I'm actually getting a 302 redirect to the login page. For normal HTTP traffic, I can see how that would be useful, but for my API page, I want the 401 to go through unmodified so that the client-side caller can respond to it programmatically instead. Is there any way to do that?
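
    As described, the symptom on the wire is that an unauthenticated API call comes back as a 302 to the login page instead of a 401. A sketch of what the client-side caller currently has to do to detect that situation (Python's requests library; the handler URL is a hypothetical placeholder):

        import requests

        # Call the API endpoint without credentials and stop at the first
        # response instead of following redirects.
        resp = requests.get(
            "https://example.com/api/handler.ashx",  # hypothetical endpoint
            allow_redirects=False,
        )
        location = resp.headers.get("Location", "")
        if resp.status_code == 302 and "login" in location.lower():
            print("Forms Authentication turned the 401 into a login redirect:", location)
        else:
            print(resp.status_code)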

    Read the article

  • get_post_meta return empty string

    - by Jean-philippe Emond
    I guess it is a small issue, but I am running a SQL query to get some post IDs: $result = $wpdb->get_results("SELECT wppm.post_id FROM wp_postmeta wppm INNER JOIN wp_posts wpp ON wppm.post_id=wpp.ID WHERE wppm.meta_key LIKE 'activity'"); (count: 302) After that, I take each ID and run get_post_meta like this: foreach($result as $id){ $activity = get_post_meta($id); var_dump($activity); foreach($activity as $key=>$value){ if(is_array($value) && $key=="age"){ var_dump($value); } } } (var_dump result: string "") The same thing happens if I run it with: $activity = get_post_meta($id,'activity',true); where we should be getting a result. What is wrong? Thank you for your help! [Bonus question] If the "activity" meta_key has an array value and I fetch it directly, like this: $result = $wpdb->get_results("SELECT wppm.meta_value FROM wp_postmeta wppm INNER JOIN wp_posts wpp ON wppm.post_id=wpp.ID WHERE wppm.meta_key LIKE 'activity'"); how do I parse it? Thanks again!

    Read the article

  • [php] Firefox refreshes page 1, even after redirection to page 2

    - by Znarkus
    Hi! This is quite a weird and annoying problem, which can be reproduced with the script below. Say we have two pages: script.php and script.php?second. Page 1 creates some database entries and redirects to page 2. On page 2, the user is presented with an editor for said entries. If page 1 for some reason crashes on the first try, and prints some error message, a strange thing will happen. If we refresh page 1 (and this time it redirects fine), every consecutive refresh (of page 2) will actually refresh page 1 and again redirect to page 2. In the above example this would create new database entries on every refresh, which is the problem I want to circumvent by redirecting to page 2. <?php header('Content-type: text/plain'); session_start(); if (!isset($_GET['second'])) { $_SESSION['counter'] = isset($_SESSION['counter']) ? $_SESSION['counter'] + 1 : 1; /*$_SESSION['counter'] = 0; exit('asd');*/ header("Location: {$_SERVER['PHP_SELF']}?second", true, 303); exit; } echo "Counter: {$_SESSION['counter']}"; To try the above complete script, first run it with the commented code intact, then with the commented code enabled. I've tried 301, 302 and 303 redirections. Does someone know why this is happening?

    Read the article

  • Ruby On Rails - Contact form not sending email via localhost

    - by anonymousxxx
    Similar problem: Rails contact form not working. Guide I followed: https://github.com/thomasklemm/email_form_rails Rails 3.2.x. app\models\message.rb class Message include ActiveAttr::Model include ActiveModel::Validations attribute :name attribute :email attribute :subject attribute :body attr_accessible :name, :email, :subject, :body validates_presence_of :name validates_presence_of :email validates :email, email_format: { message: "is not looking like a valid email address"} validates_presence_of :subject validates_length_of :body, maximum: 500 end app\mailers\contact_form.rb class ContactForm < ActionMailer::Base default from: "[email protected]" default to: "[email protected]" def email_form(message) @message = message mail subject: "#{message.subject} #{message.name}" mail body: "#{message.body}" end end development.rb config.action_mailer.delivery_method = :smtp config.action_mailer.perform_deliveries = true config.action_mailer.smtp_settings = { :address => "smtp.gmail.com", :port => 587, :domain => "mydomain.com", :user_name => "[email protected]", :password => "mypassword", :authentication => :plain, :enable_starttls_auto => true } config.action_mailer.default_url_options = { :host => "localhost:3000" } Output in the console: Started POST "/email" for 127.0.0.1 at 2012-09-04 22:10:40 +0700 Processing by HomeController#send_email_form as HTML Parameters: {"utf8"="v", "authenticity_token"="w39BLqCrjTMm4RRi/Sm5hZoEpcw46 npyRy/RS0h48x0=", "message"={"name"="anonymousxxx", "email"="[email protected]", "subject"="Test", "body"="send email"}, "commit"="Create Message"} Redirected to localhost:3000/home/contact Completed 302 Found in 1ms (ActiveRecord: 0.0ms) but the email (message) never arrives in my inbox.

    Read the article

  • jquery window.unload triggers post after unload

    - by index
    I am trying to do a post to server before unloading a page and I followed this and it's working fine. My problem is the $.post on window.unload is triggered after it has unloaded. I tried it with a signout link and checking on my logs, I get the following: Started GET "/signout" for 127.0.0.1 at 2012-11-22 00:15:08 +0800 Processing by SessionsController#destroy as HTML Redirected to http://localhost:3000/ Completed 302 Found in 1ms Started GET "/" for 127.0.0.1 at 2012-11-22 00:15:08 +0800 Processing by HomeController#index as HTML Rendered home/index.html.erb within layouts/application (0.4ms) Rendered layouts/_messages.html.erb (0.1ms) Completed 200 OK in 13ms (Views: 12.9ms) Started POST "/unloading" for 127.0.0.1 at 2012-11-22 00:15:08 +0800 Processing by HomeController#unloading as */* Parameters: {"p1"=>"1"} WARNING: Can't verify CSRF token authenticity Completed 500 Internal Server Error in 0ms NoMethodError (undefined method `id' for nil:NilClass): app/controllers/home_controller.rb:43:in `unloading' First part is the signout and then user gets redirected to root then it runs the post ('/unloading'). Is there a way to make the '/unloading' execute first then execute whatever the unload action was? I have this as my jquery post $(window).unload -> $.ajax { async: false, beforeSend: (xhr) -> xhr.setRequestHeader('X-CSRF-Token', $('meta[name="csrf-token"]').attr('content')) , url: '/unloading' , type: 'Post' , data: { p1: '1' } }

    Read the article

  • How to use .htaccess to redirect to an url that includes a query parameter

    - by wbervoets
    Hi guys, I've been struggling with a redirect where the final URL includes a query parameter that is itself a URL. It seems .htaccess is escaping some characters. Here is my .htaccess: Code: RewriteRule ^mypath https://www.otherserver.com/cookie?param1=123&redirectto=http://otherserver2.com/&param2=1 [L,R=302] First, if I put Code: https://www.otherserver.com/cookie?param1=123&redirectto=http://otherserver2.com/&param2=1 in my browser address bar, www.otherserver.com will do its thing and then redirect to otherserver2 (including the &param2=1, which is a parameter of that URL and not of the otherserver.com URL). That's the behaviour I need :-) Now when I try to use the .htaccess redirect from my site, http://mysite/mypath, the behaviour is not the same as putting the same URL in the browser address bar; it now tries to redirect to http://otherserver2.com/ (no param2=1 anymore). (PS: otherserver1 and otherserver2 are not under my control.) I've tried escaping the redirectto parameter in my .htaccess, like below, but it didn't work either: Code: https://www.otherserver.com/cookie?param1=123&redirectto=http%3a%2f%otherserver2.com%2f%3fparam2%3d1 Because then my browser tries to go to httpotherserver.com (all the special characters are gone). In the end I would like http://mysite/mypath to show the contents of Code: https://www.otherserver.com/cookie?param1=123&redirectto=http://otherserver2.com/&param2=1 (preferred solution) or do a redirect to that URL. I hope my message is not too confusing, and I hope someone can help me out, as I've already spent hours on this :-)
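
    The inner URL has to be percent-encoded before it is embedded as the value of redirectto, otherwise its own ? and any following & are parsed as part of the outer cookie URL. A small Python sketch showing what the correctly encoded value looks like; the printed string is what would need to end up in the RewriteRule target:

        from urllib.parse import quote

        # Percent-encode the inner URL so its "?" and "/" survive being embedded
        # as a single query-string value of the outer URL.
        inner = "http://otherserver2.com/?param2=1"
        outer = ("https://www.otherserver.com/cookie?param1=123&redirectto="
                 + quote(inner, safe=""))
        print(outer)
        # https://www.otherserver.com/cookie?param1=123&redirectto=
        #   http%3A%2F%2Fotherserver2.com%2F%3Fparam2%3D1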

    Read the article

  • Use `require()` with `node --eval`

    - by rentzsch
    When utilizing node.js's newish support for --eval, I get an error (ReferenceError: require is not defined) when I attempt to use require(). Here's an example of the failure: $ node --eval 'require("http");' undefined:1 ^ ReferenceError: require is not defined at eval at <anonymous> (node.js:762:36) at eval (native) at node.js:762:36 $ Here's a working example of using require() typed into the REPL: $ node > require("http"); { STATUS_CODES: { '100': 'Continue' , '101': 'Switching Protocols' , '102': 'Processing' , '200': 'OK' , '201': 'Created' , '202': 'Accepted' , '203': 'Non-Authoritative Information' , '204': 'No Content' , '205': 'Reset Content' , '206': 'Partial Content' , '207': 'Multi-Status' , '300': 'Multiple Choices' , '301': 'Moved Permanently' , '302': 'Moved Temporarily' , '303': 'See Other' , '304': 'Not Modified' , '305': 'Use Proxy' , '307': 'Temporary Redirect' , '400': 'Bad Request' , '401': 'Unauthorized' , '402': 'Payment Required' , '403': 'Forbidden' , '404': 'Not Found' , '405': 'Method Not Allowed' , '406': 'Not Acceptable' , '407': 'Proxy Authentication Required' , '408': 'Request Time-out' , '409': 'Conflict' , '410': 'Gone' , '411': 'Length Required' , '412': 'Precondition Failed' , '413': 'Request Entity Too Large' , '414': 'Request-URI Too Large' , '415': 'Unsupported Media Type' , '416': 'Requested Range Not Satisfiable' , '417': 'Expectation Failed' , '418': 'I\'m a teapot' , '422': 'Unprocessable Entity' , '423': 'Locked' , '424': 'Failed Dependency' , '425': 'Unordered Collection' , '426': 'Upgrade Required' , '500': 'Internal Server Error' , '501': 'Not Implemented' , '502': 'Bad Gateway' , '503': 'Service Unavailable' , '504': 'Gateway Time-out' , '505': 'HTTP Version not supported' , '506': 'Variant Also Negotiates' , '507': 'Insufficient Storage' , '509': 'Bandwidth Limit Exceeded' , '510': 'Not Extended' } , IncomingMessage: { [Function: IncomingMessage] super_: [Function: EventEmitter] } , OutgoingMessage: { [Function: OutgoingMessage] super_: [Function: EventEmitter] } , ServerResponse: { [Function: ServerResponse] super_: [Circular] } , ClientRequest: { [Function: ClientRequest] super_: [Circular] } , Server: { [Function: Server] super_: { [Function: Server] super_: [Function: EventEmitter] } } , createServer: [Function] , Client: { [Function: Client] super_: { [Function: Stream] super_: [Function: EventEmitter] } } , createClient: [Function] , cat: [Function] } > Is there a way to use require() with node's --eval? I'm on node 0.2.6 on Mac OS X 10.6.5.

    Read the article

  • why is Active Record firing extra query when I use Includes method to fetch data

    - by riddhi_agrawal
    I have the following model structure: class Group < ActiveRecord::Base has_many :group_products, :dependent => :destroy has_many :products, :through => :group_products end class Product < ActiveRecord::Base has_many :group_products, :dependent => :destroy has_many :groups, :through => :group_products end class GroupProduct < ActiveRecord::Base belongs_to :group belongs_to :product end I wanted to minimize my database queries, so I decided to use includes. In the console I tried something like: groups = Group.includes(:products) My development logs show the following calls: Group Load (403.0ms) SELECT `groups`.* FROM `groups` GroupProduct Load (60.0ms) SELECT `group_products`.* FROM `group_products` WHERE (`group_products`.group_id IN (1,3,14,15,16,18,19,20,21,22,23,24,25,26,27,28,29,30,33,42,49,51)) Product Load (22.0ms) SELECT `products`.* FROM `products` WHERE (`products`.`id` IN (382,304,353,12,63,103,104,105,262,377,263,264,265,283,284,285,286,287,302,306,307,308,328,335,336,337,340,355,59,60,61,247,309,311,66,30,274,294,324,350,140,176,177,178,64,240,327,332,338,380,383,252,254,255,256,257,325,326)) Product Load (10.0ms) SELECT `products`.* FROM `products` WHERE (`products`.`id` = 377) LIMIT 1 I can see why the initial three calls are necessary, but I don't get why the last database call is made: Product Load (10.0ms) SELECT `products`.* FROM `products` WHERE (`products`.`id` = 377) LIMIT 1 Any idea why this is happening? Thanks in advance. :)

    Read the article

  • Lighttpd 403 Errors on HTML and PHP pages

    - by Brian
    I installed lighttpd on CentOS 5.5 64-bit. Everything seems fine and running except I cannot get past 403 errors on both HTML and PHP pages. I have used CHMOD and CHOWN, changed ownership in the config file, done everything possible and have been stuck for 2 days. Appreciate any help, and here's hoping to a stupid error on my part. Here is the log file with debug options on: 2011-02-21 11:23:13: (request.c.304) fd: 7 request-len: 408 GET /index.html HTTP/1.1 Host: 10.0.1.8 User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Cache-Control: max-age=0 2011-02-21 11:23:13: (response.c.241) run condition 2011-02-21 11:23:13: (response.c.300) -- splitting Request-URI 2011-02-21 11:23:13: (response.c.301) Request-URI : /index.html 2011-02-21 11:23:13: (response.c.302) URI-scheme : http 2011-02-21 11:23:13: (response.c.303) URI-authority: 10.0.1.8 2011-02-21 11:23:13: (response.c.304) URI-path : /index.html 2011-02-21 11:23:13: (response.c.305) URI-query : 2011-02-21 11:23:13: (response.c.349) -- sanatising URI 2011-02-21 11:23:13: (response.c.350) URI-path : /index.html 2011-02-21 11:23:13: (response.c.470) -- before doc_root 2011-02-21 11:23:13: (response.c.471) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.472) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.473) Path : 2011-02-21 11:23:13: (response.c.521) -- after doc_root 2011-02-21 11:23:13: (response.c.522) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.523) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.524) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.541) -- logical -> physical 2011-02-21 11:23:13: (response.c.542) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.543) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.544) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.561) -- handling physical path 2011-02-21 11:23:13: (response.c.562) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.608) -- access denied 2011-02-21 11:23:13: (response.c.609) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.128) Response-Header: HTTP/1.1 403 Forbidden Content-Type: text/html Content-Length: 345 Date: Mon, 21 Feb 2011 16:23:13 GMT Server: lighttpd/1.4.28 Here is the directory listing. I used CHOWN to set to lighttpd:lighttpd [root@localhost lighttpd]# ls -al total 40 drwxrwxrwx 2 lighttpd lighttpd 4096 Feb 21 10:48 . drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 .. -rwxrwxrwx 1 lighttpd lighttpd 10 Feb 20 08:32 index.html -rwxrwxrwx 1 lighttpd lighttpd 20 Feb 21 10:48 index.php -rwxrwxrwx 1 lighttpd lighttpd 20 Feb 21 10:39 info.php [root@localhost lighttpd]# Requested Commands: [root@localhost lighttpd]# ls -ld / /srv /srv/www drwxr-xr-x 22 root root 4096 Feb 21 04:39 / drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 20 07:38 /srv drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 /srv/www [root@localhost lighttpd]# ps auxZ | grep lighttpd root:system_r:httpd_t lighttpd 3842 0.0 0.2 48368 896 ? S 12:24 0:00 /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf root:system_r:unconfined_t:SystemLow-SystemHigh root 3845 0.0 0.2 61152 764 pts/0 R+ 12:24 0:00 grep lighttpd

    Read the article

  • How to change SMP affinity of an IRQ on Ubuntu domU inside Xen XCP?

    - by Alexander Gladysh
    I'd like to change IRQ SMP affinity for reasons, outlined in this question: CPU0 is swamped with eth1 interrupts But I can't — I see Input/output error when I try to write to /proc/irq/*/smp_affinity. Please point me to the HOWTO on the matter. (A formal reference on /proc/irq/*/ would be cool as well.) Gory details: Note that this is a VM inside an Ubuntu-based Xen XCP host. $ uname -a Linux MYHOST 2.6.38-15-virtual #59-Ubuntu SMP Fri Apr 27 16:40:18 UTC 2012 i686 i686 i386 GNU/Linux $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 11.04 Release: 11.04 Codename: natty $ sudo cat /proc/irq/*/smp_affinity 01 01 01 01 01 80 80 80 80 80 80 40 40 40 40 40 40 20 20 20 20 20 20 10 10 10 10 10 10 08 08 08 08 08 08 04 04 04 04 04 04 02 02 02 02 02 02 01 01 01 01 01 01 Update. The error details: $ N=$(grep -c processor /proc/cpuinfo) $ echo $N 8 $ printf %x $((2**N-1)) ff $ printf %x $((2**N-1)) | sudo tee /proc/irq/*/smp_affinity fftee: /proc/irq/288/smp_affinity: Input/output error tee: /proc/irq/289/smp_affinity: Input/output error tee: /proc/irq/290/smp_affinity: Input/output error tee: /proc/irq/291/smp_affinity: Input/output error tee: /proc/irq/292/smp_affinity: Input/output error tee: /proc/irq/293/smp_affinity: Input/output error tee: /proc/irq/294/smp_affinity: Input/output error tee: /proc/irq/295/smp_affinity: Input/output error tee: /proc/irq/296/smp_affinity: Input/output error tee: /proc/irq/297/smp_affinity: Input/output error tee: /proc/irq/298/smp_affinity: Input/output error tee: /proc/irq/299/smp_affinity: Input/output error tee: /proc/irq/300/smp_affinity: Input/output error tee: /proc/irq/301/smp_affinity: Input/output error tee: /proc/irq/302/smp_affinity: Input/output error tee: /proc/irq/303/smp_affinity: Input/output error tee: /proc/irq/304/smp_affinity: Input/output error tee: /proc/irq/305/smp_affinity: Input/output error tee: /proc/irq/306/smp_affinity: Input/output error tee: /proc/irq/307/smp_affinity: Input/output error tee: /proc/irq/308/smp_affinity: Input/output error tee: /proc/irq/309/smp_affinity: Input/output error tee: /proc/irq/310/smp_affinity: Input/output error tee: /proc/irq/311/smp_affinity: Input/output error tee: /proc/irq/312/smp_affinity: Input/output error tee: /proc/irq/313/smp_affinity: Input/output error tee: /proc/irq/314/smp_affinity: Input/output error tee: /proc/irq/315/smp_affinity: Input/output error tee: /proc/irq/316/smp_affinity: Input/output error tee: /proc/irq/317/smp_affinity: Input/output error tee: /proc/irq/318/smp_affinity: Input/output error tee: /proc/irq/319/smp_affinity: Input/output error tee: /proc/irq/320/smp_affinity: Input/output error tee: /proc/irq/321/smp_affinity: Input/output error tee: /proc/irq/322/smp_affinity: Input/output error tee: /proc/irq/323/smp_affinity: Input/output error tee: /proc/irq/324/smp_affinity: Input/output error tee: /proc/irq/325/smp_affinity: Input/output error tee: /proc/irq/326/smp_affinity: Input/output error tee: /proc/irq/327/smp_affinity: Input/output error tee: /proc/irq/328/smp_affinity: Input/output error tee: /proc/irq/329/smp_affinity: Input/output error tee: /proc/irq/330/smp_affinity: Input/output error tee: /proc/irq/331/smp_affinity: Input/output error tee: /proc/irq/332/smp_affinity: Input/output error tee: /proc/irq/333/smp_affinity: Input/output error tee: /proc/irq/334/smp_affinity: Input/output error tee: /proc/irq/335/smp_affinity: Input/output error Update. 
    irqbalance is running: $ sudo service irqbalance status irqbalance start/running, process 560
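
    For reference, /proc/irq/<n>/smp_affinity is just a per-IRQ CPU bitmask file, and reading it is safe even where writes fail. Below is a read-only Python sketch that pairs each IRQ's current mask with its device name from /proc/interrupts; on a Xen PV domU these are event-channel IRQs, and the Input/output error on write is typically the kernel refusing affinity changes for that interrupt type rather than a problem with the mask value itself.

        from pathlib import Path

        # Map IRQ number -> device name from /proc/interrupts (skip the CPU header line).
        names = {}
        for line in Path("/proc/interrupts").read_text().splitlines()[1:]:
            parts = line.split()
            if parts and parts[0].rstrip(":").isdigit():
                names[parts[0].rstrip(":")] = parts[-1]

        # Print each IRQ's current affinity mask alongside its name.
        for irq_dir in sorted(Path("/proc/irq").iterdir(), key=lambda p: p.name):
            mask_file = irq_dir / "smp_affinity"
            if mask_file.is_file():
                mask = mask_file.read_text().strip()
                print(irq_dir.name, mask, names.get(irq_dir.name, ""))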

    Read the article

  • Magento - Users unable to login from corporate networks with Bluecoat / F5 Load balancers

    - by user1330440
    Hoping someone has come across this issue before with Magento and corporate clients. We have two clients for our Magento site who both have their internal networks set up using Bluecoat security devices and F5 load balancers. Some users within these networks are unable to log in to Magento: Magento eventually sends a 302 redirect to /index.php/ when users attempt to log in. Through our testing, the problem appears to be isolated to this setup; we can log into the accounts in question from anywhere outside of these networks without issue, and if the client tries to access the site without going through the F5 load balancer, they are able to log in successfully. Strangely enough, the issue only started occurring for the two sites the day after we introduced a system upgrade which added a new site to the Magento installation. The system upgrade should not have affected any standard login functionality, and as said, the problem does not appear to be with the users in question, but with where the users are accessing the site from. Initially we thought the issue might have something to do with communications between the clients' networks and the network the server was hosted on, so we've tried moving the server to different hosts, but this has not helped. I'm currently waiting for more info from the clients on the exact devices / models used in their network setup. I will update this post if more information becomes available. The Magento version is Enterprise Edition 1.9.0.0. Does anyone know of any tucked-away Magento settings that might be able to cause this kind of behavior? Experience with this kind of set-up and ideas for things to look at? All help and ideas for things to follow up would be appreciated, as this is a current production issue for a large number of users. I will respond asap with any requests for additional information on the topic, but currently am not able to disclose any identifying information on the project in question, and/or the clients experiencing issues. Thanks in advance for any assistance offered :) Note: This question has also been posted on the Magento forums: http://www.magentocommerce.com/boards/viewthread/277917/ And also on Stack Overflow (moved here as a commenter thought this site may be better suited): http://stackoverflow.com/questions/10133978/magento-users-unable-to-login-from-corporate-networks-with-bluecoat-f5-load

    Read the article

  • Apache2 graceful restart stops proxying requests to passenger

    - by Rob
    Issue with apache mod proxy, it stops proxying requests after a graceful restart but not all the time. It seems to happen only on a Sunday when a graceful restart is triggered by logrotate. [Sun Sep 9 05:25:06 2012] [notice] SIGUSR1 received. Doing graceful restart [Sun Sep 9 05:25:06 2012] [notice] Apache/2.2.22 (Ubuntu) Phusion_Passenger/3.0.11 configured -- resuming normal operations [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(492) failed in child 26153 for worker proxy:reverse [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(486) failed in child 26153 for worker http://api.myservice.org/api [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(487) failed in child 26153 for worker http://api.myservice.org/editor/$1 [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(489) failed in child 26153 for worker http://api.myservice.org/build [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(490) failed in child 26153 for worker http://api.myservice.org/help [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(491) failed in child 26153 for worker http://api.myservice.org/motd.html [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(480) failed in child 26153 for worker http://api.myservice.org/api [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(481) failed in child 26153 for worker http://api.myservice.org/editor/$1 [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(483) failed in child 26153 for worker http://api.myservice.org/build [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(484) failed in child 26153 for worker http://api.myservice.org/help [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(485) failed in child 26153 for worker http://api.myservice.org/motd.html [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(479) failed in child 26153 for worker http://api.myservice.org/motd.html After these lines, the logs are flooded with 404's because the requests are not being proxied. It's worth noting that the destination is just another vhost on the same apache instance, but the vhost (http://api.myservice.org) is serving passenger (mod_rails) I was thinking that maybe there's some startup issues with the passenger workers not being ready during a graceful restart? After a full restart resolves it and everything returns to normal. //Edit Here's the vhost config, thanks :) <VirtualHost *:80> UseCanonicalName Off LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon <Directory /var/www/vhosts> RewriteEngine on AllowOverride All </Directory> RewriteEngine on RewriteCond /var/www/vhosts/%{SERVER_NAME} !-d RewriteCond /var/www/vhosts/%{SERVER_NAME} !-l RewriteRule ^ http://sitenotfound.myservice.org/ [R=302,L] VirtualDocumentRoot /var/www/vhosts/%0/current # Rewrite requests to /assets to map to the /var/file-store/<SERVER_NAME>/ RewriteMap lowercase int:tolower RewriteCond %{REQUEST_URI} ^/assets/ RewriteRule ^/assets/(.*)$ /var/file-store/${lowercase:%{SERVER_NAME}}/$1 # Map /login to /editor.html as it's far friendlier. 
RewriteCond %{REQUEST_URI} ^/login RewriteRule .* /editor.html [PT] # Forward some requests to the API ProxyPass /api http://api.myservice.org/api ProxyPass /site.json http://api.myservice.org/api/editor/site ProxyPassMatch ^/editor/(.*)$ http://api.myservice.org/editor/$1 ProxyPassMatch ^/api/(.*) http://api.myservice.org/api/$1 ProxyPass /build http://api.myservice.org/build ProxyPass /help http://api.myservice.org/help ProxyPass /motd.html http://api.myservice.org/motd.html <Proxy *> Order allow,deny Allow from all </Proxy> # TODO generate slightly more specific Error Documents for 401/403/500's, # but for now the 404 page is good enough ErrorDocument 401 /404.html ErrorDocument 403 /404.html ErrorDocument 404 /404.html ErrorDocument 500 /404.html </VirtualHost>

    Read the article

  • Why won't IE let users login to a website unless in In Private mode?

    - by Richard Fawcett
    I'm not entirely sure this belongs on SuperUser.com. I also considered ServerFault.com and StackOverflow.com, but on balance, I think it should belong here? We host a website which has the same code responding to multiple domain names. On 28th December (without any changes deployed to the website) a percentage of users suddenly could not login, and the blank login page was just rendered again even when the correct credentials were entered. The issue is still ongoing. After remote controlling an affected user's PC, we've found the following: The issue affects Internet Explorer 9. The user can login from the same machine on Chrome. The user can login from an In Private browser session using IE9. The user can login if the website is added to the Trusted Sites security zone. The user can NOT login from an IE session in safe mode (started with iexplore -extoff). Only one hostname that the website responds to prevents login, the same user account on the other hostname works fine (note that this is identical code and database running server side), even though that site is not in trusted sites zone. Series of HTTP requests in the failure case: GET request to protected page, returns a 302 FOUND response to login page. GET request to login page. POST to login page, containing credentials, returns redirect to protected page. GET request to protected page... for some reason auth fails and browser is redirected to login page, as in step 1. Other information: Operating system is Windows 7 Ultimate Edition. AV system is AVG Internet Security 2012. I can think of lots of things that could be going wrong, but in every case, one of the findings above is incompatible with the theory. Any ideas what is causing login to fail? Update 06-Jan-2012 Enhanced logging has shown that the .ASPXAUTH cookie is being set in step 3. Its expiry date is 28 days in the future, its path is /, the domain is mysite.com, and its value is an encrypted forms ticket, as expected. However, the cookie is not being received by the web server during step 4. Other cookies are being presented to the server during step 4, it's just this one that is missing. I've seen that cookies are usually set with a domain starting with a period, but mine isn't. Should it be .mysite.com instead of mysite.com? However, if this was wrong, it would presumably affect all users?

    Read the article

  • Updated script for downloading from youtube

    - by asksuperuser
    I'm not looking for software or a site to download from YouTube, but for an open-source script in bash (or any language) which is up to date, as YouTube often changes the download URL. I've found these, but they seem deprecated: http://linux.byexamples.com/archives/302/how-to-wget-flv-from-youtube/ http://www.daniweb.com/forums/thread104419.html

    Read the article
