Search Results

Search found 7249 results on 290 pages for 'https everywhere'.


  • Session cookie not being created in Rails, very rarely and frustratingly.

    - by James
    Hi everyone. This issue affects only a handful of users, sporadically, and we haven't been able to replicate it. However, I now have a Chrome instance (Mac) that reproduces the error (for some unknown reason), and I hope not to restart it until I have this nailed!

    Rails application, using memcached for the session store. While the bug manifests as the _app_session_id cookie not being created, our JavaScript-generated test cookie and app-generated language cookies are created successfully. This means that 422 / InvalidAuthToken errors are thrown for every form submitted by those afflicted - people can't log into the app. The error occurs across all browsers - we've had reports for IE7 and Firefox (which most users use). Switching to another browser often fixes the issue (though not always), and standard clear-cache-and-cookies tactics do not.

    So now I have Chrome open exhibiting the same issue - in development, staging and live environments (meaning both http and https). All other browsers are fine. I've restarted the servers and restarted memcached. I don't really want to restart Chrome - at the risk that the issue goes away with that (having said that, it hasn't worked for users). I've been tcpdumping the requests - and although I'll keep digging, I'd love it if anyone had any suggestions, places to start looking, anything. This is really painful ;) Thanks!


  • Select box is not working properly after including google custom search box in web page

    - by Vinay
    I have a select box and a Google custom search box on a page. When I choose an option from the select box, navigate away from the page, and then come back to the same page, the option is no longer selected (violating the select box's default behaviour). The code is below:

        <script src="https://www.google.com/jsapi" type="text/javascript"></script>
        <script type="text/javascript">
        google.load('search', '1', {language : 'en'});
        google.setOnLoadCallback(function() {
            var customSearchControl = new google.search.CustomSearchControl('004920913350056953771:kpkclvhujzk');
            customSearchControl.setResultSetSize(google.search.Search.FILTERED_CSE_RESULTSET);
            var options = new google.search.DrawOptions();
            options.setAutoComplete(true);
            options.enableSearchboxOnly("<?=$homeurl?>my_results.php", "query");
            customSearchControl.draw('cse-search-form', options);
        }, true);
        </script>

        <select multiple="yes">
            <option>1</option>
            <option>2</option>
        </select>

    If I remove the custom search script, the selected option is retained even after navigating away from the page (the default behaviour). It works fine in Chrome and IE, but not in Firefox. Is there a solution so that the select box works correctly even in the presence of the search box? The order must stay the same: 1) search box, 2) select box.


  • [iphone, twitter] Accessing the Twitter API through a proxy using NSURLConnection, OAuth problem

    - by akaii
    I'm having no problems sending an update directly via https://api.twitter.com/, but the app (for the iPhone; I'm using NSURLConnection) I'm working on is supposed to let the user select a preferred proxy (e.g. https://twitter-proxy.appspot.com/api/ or https://nest.onedd.net/api/), and I keep getting a 401 error (Failed to validate oauth signature and token) whenever I try to get an access token via these proxies. Even though I send my POST request to the proxy, I am still using the direct URL for the API (https://api.twitter.com/[rest api path]) in the base string. Despite the 401 error message above, the status code I'm actually getting from connection:didReceiveResponse: is 200, probably because it was able to successfully contact the proxy...

    Is there anything else I need to consider when using a proxy to access the API? Should anything in the authorization header change, for example? Or the base string? I can connect via Basic Auth without issue, but support for that will be dropped in a month.

    On a somewhat unrelated note: what are the possible causes of Twitter's error 403, and how do you distinguish between them? Is the only way to differentiate between exceeding the hourly status update limit (150 per hour) and the daily limit (1000 per day) to check the string reply returned in the response? Is there any way to simulate a status update limit error without going through the motions of actually sending 150/1000 tweets?


  • Cast errors with IXmlSerializable

    - by Nathan
    I am trying to use the IXmlSerializable interface to deserialize an object (I need specific control over what gets deserialized and what does not - see my previous question for more information). However, I'm stumped by the error I get (I have changed some names for clarity):

        An unhandled exception of type 'System.InvalidCastException' occurred in App.exe
        Additional information: Unable to cast object of type 'System.Xml.XmlNode[]' to type 'MyObject'.

    MyObject defines all the methods the interface requires, and regardless of what I put in the body of ReadXml() I still get this error - it doesn't matter whether it contains my implementation code or is just blank. Some googling turned up a similar-looking error involving polymorphic objects that implement IXmlSerializable, but my class does not inherit from any other (besides Object). I suspected this might be the issue because I never reference XmlNode anywhere else in my code. Microsoft describes a solution to the polymorphism error: https://connect.microsoft.com/VisualStudio/feedback/details/422577/incorrect-deserialization-of-polymorphic-type-that-implements-ixmlserializable?wa=wsignin1.0#details

    The code the error occurs at is as follows; the object to be read back in is an ArrayList of MyObjects:

        IO::FileStream ^fs = gcnew IO::FileStream(filename, IO::FileMode::Open);
        array<System::Type^>^ extraTypes = gcnew array<System::Type^>(1);
        extraTypes[0] = MyObject::typeid;
        XmlSerializer ^xmlser = gcnew XmlSerializer(ArrayList::typeid, extraTypes);
        System::Object ^obj;
        obj = xmlser->Deserialize(fs);
        fs->Close();
        ArrayList ^al = safe_cast<ArrayList^>(obj);
        MyObject ^objs;
        for each(objs in al) //Error occurs here
        {
            //do some processing
        }

    Thanks for reading and for any help.


  • How can I detect if this dictionary key exists in C#?

    - by Adam Tuttle
    I am working with the Exchange Web Services Managed API, with contact data. I have the following code, which is functional, but not ideal:

        foreach (Contact c in contactList)
        {
            string openItemUrl = "https://" + service.Url.Host + "/owa/" + c.WebClientReadFormQueryString;
            row = table.NewRow();
            row["FileAs"] = c.FileAs;
            row["GivenName"] = c.GivenName;
            row["Surname"] = c.Surname;
            row["CompanyName"] = c.CompanyName;
            row["Link"] = openItemUrl;

            //home address
            try { row["HomeStreet"] = c.PhysicalAddresses[PhysicalAddressKey.Home].Street.ToString(); }
            catch (Exception e) { }
            try { row["HomeCity"] = c.PhysicalAddresses[PhysicalAddressKey.Home].City.ToString(); }
            catch (Exception e) { }
            try { row["HomeState"] = c.PhysicalAddresses[PhysicalAddressKey.Home].State.ToString(); }
            catch (Exception e) { }
            try { row["HomeZip"] = c.PhysicalAddresses[PhysicalAddressKey.Home].PostalCode.ToString(); }
            catch (Exception e) { }
            try { row["HomeCountry"] = c.PhysicalAddresses[PhysicalAddressKey.Home].CountryOrRegion.ToString(); }
            catch (Exception e) { }

            //and so on for all kinds of other contact-related fields...
        }

    As I said, this code works. Now I want to make it suck a little less, if possible. I can't find any methods that allow me to check for the existence of the key in the dictionary before attempting to access it, and if I try to read it (with .ToString()) and it doesn't exist, an exception is thrown:

        500 The given key was not present in the dictionary.

    How can I refactor this code to suck less (while still being functional)?
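    A dictionary-style lookup can usually be probed without a try/catch. This is a hedged sketch, not a confirmed EWS recipe - it assumes the PhysicalAddresses collection exposes the standard TryGetValue pattern (verify against your version of the Managed API):

        // Probe for the entry once, then null-check its members, since an
        // address entry can exist with some fields missing.
        PhysicalAddressEntry home;
        if (c.PhysicalAddresses.TryGetValue(PhysicalAddressKey.Home, out home))
        {
            row["HomeStreet"]  = home.Street ?? string.Empty;
            row["HomeCity"]    = home.City ?? string.Empty;
            row["HomeState"]   = home.State ?? string.Empty;
            row["HomeZip"]     = home.PostalCode ?? string.Empty;
            row["HomeCountry"] = home.CountryOrRegion ?? string.Empty;
        }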


  • Homebrew PATH issue

    - by Shaun Stanislaus
        Master:~ shaunstanislaus$ ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go)
        ==> This script will install:
        /usr/local/bin/brew
        /usr/local/Library/...
        /usr/local/share/man/man1/brew.1

        Press enter to continue
        ==> Downloading and Installing Homebrew...
        remote: Counting objects: 82368, done.
        remote: Compressing objects: 100% (39323/39323), done.
        remote: Total 82368 (delta 56782), reused 65301 (delta 42220)
        Receiving objects: 100% (82368/82368), 11.68 MiB | 1.59 MiB/s, done.
        Resolving deltas: 100% (56782/56782), done.
        From https://github.com/mxcl/homebrew
         * [new branch]      master     -> origin/master
        HEAD is now at 2ea1a0e smpeg: depends on gtk
        ==> Installation successful!
        You should run `brew doctor' *before* you install anything.
        Now type: brew help

        Master:~ shaunstanislaus$ brew doctor
        -bash: /usr/local/bin/brew: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby: bad interpreter: No such file or directory

        Master:~ shaunstanislaus$ echo $PATH
        /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/Users/shaunstanislaus/Library/Application Support/GoodSync:/opt/local/bin:/opt/local/sbin:/usr/local/sbin:/Users/shaunstanislaus/.ec2/bin:/Users/shaunstanislaus/.rvm/bin

        /usr/local/bin/brew: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby: bad interpreter: No such file or directory

    How do I fix this? I can't use the brew command, and I think I previously symlinked it to the wrong location. Please advise, thank you.
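    The "bad interpreter" message suggests this isn't a PATH problem at all: the brew script's shebang points at the Ruby 1.8 framework, which no longer ships with newer versions of OS X. A hedged sketch of two possible repairs (paths assumed; inspect before running):

        # Inspect the interpreter line that bash is complaining about.
        head -1 /usr/local/bin/brew

        # Option 1: point the shebang at the system ruby that does exist.
        sudo sed -i '' '1s|.*|#!/usr/bin/ruby|' /usr/local/bin/brew

        # Option 2: let git pull a brew whose shebang is already fixed.
        cd /usr/local && git fetch origin && git reset --hard origin/master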


  • Problem reading certificates on iOS

    - by David Schiefer
    I am trying to read certificates from various URLs in iOS. My code, however, is not working well - the array that should return the information I need always returns null. What am I missing?

        - (void)findCertificate:(NSString *)url
        {
            NSInputStream *input = [[NSInputStream inputStreamWithData:
                [NSData dataWithContentsOfURL:
                    [NSURL URLWithString:@"https://store.writeitstudios.com"]]] retain];
            [input setDelegate:self];
            [input scheduleInRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
            [input open];
            NSLog(@"Status: %i", [input streamStatus]);
        }

        - (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
        {
            NSLog(@"handle Event: %i", eventCode);
            if (eventCode == NSStreamStatusOpen) {
                NSArray *certificates = (NSArray *)CFReadStreamCopyProperty(
                    (CFReadStreamRef)aStream, kCFStreamPropertySSLPeerCertificates);
                NSLog(@"Certs: %@", CFReadStreamCopyProperty(
                    (CFReadStreamRef)aStream, kCFStreamPropertySSLPeerCertificates));
                if ([certificates count] > 0) {
                    SecCertificateRef certificate =
                        (SecCertificateRef)[certificates objectAtIndex:0];
                    NSString *description =
                        (NSString *)SecCertificateCopySubjectSummary(certificate);
                    NSData *data = (NSData *)SecCertificateCopyData(certificate);
                    NSLog(@"Description: %@", description);
                }
            }
        }

    And yes, I am aware that I am leaking memory. This is just a snippet.


  • Cookie handling on WebRequest and WebResponse

    - by manish patel
    I have created an application with a function, Mainpost, that posts data to https sites. Here I want to handle cookies in this function. How can I do this?

        public string Mainpost(string website, string content)
        {
            // this is what we are sending
            string post_data = content;

            // this is where we will send it
            string uri = website;

            // create a request
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
            request.KeepAlive = false;
            request.ProtocolVersion = HttpVersion.Version10;
            request.Method = "POST";

            // turn our request string into a byte stream
            byte[] postBytes = Encoding.ASCII.GetBytes(post_data);

            // this is important - make sure you specify type this way
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = postBytes.Length;
            Stream requestStream = request.GetRequestStream();

            // now send it
            requestStream.Write(postBytes, 0, postBytes.Length);
            requestStream.Close();

            // grab the response and print it out to the console along with
            // the status code
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            string str = new StreamReader(response.GetResponseStream()).ReadToEnd();
            Console.WriteLine(response.StatusCode);

            return str;
        }
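    A hedged sketch of one way to do it: HttpWebRequest only tracks cookies when you give it a CookieContainer, so create one, attach it to each request, and the container will store and replay cookies across calls (the field name here is illustrative):

        // Keep one container for the life of the session so cookies set by
        // the server are sent back on subsequent requests.
        private static readonly CookieContainer cookieJar = new CookieContainer();

        // inside Mainpost, after creating the request:
        request.CookieContainer = cookieJar;

        // after getting the response, the captured cookies are available:
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        foreach (Cookie cookie in response.Cookies)
        {
            Console.WriteLine("{0} = {1}", cookie.Name, cookie.Value);
        }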


  • Uploading to S3 using cURL

    - by Carl Crawley
    Hi All,

    I'm currently using cURL to upload a file from my server to S3, using AJAX to call the script. So I have the following:

        $fullfilepath = '/server/sitepath/files/' . $_POST['file'];
        $upload_url = 'https://' . $_POST['buckets'] . '.s3.amazonaws.com/';

        $params = array(
            'key' => $_POST['key'],
            'AWSAccessKeyId' => $_POST['AWSAccessKeyId'],
            'acl' => $_POST['acl'],
            'success_action_status' => $_POST['success_action_status'],
            'policy' => $_POST['policy'],
            'signature' => $_POST['signature'],
            'Content-Type' => $_POST['Content-Type'],
            'file' => "@$fullfilepath"
        );

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_VERBOSE, 1);
        curl_setopt($ch, CURLOPT_URL, $upload_url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
        $response = curl_exec($ch);
        curl_close($ch);
        echo $response;

    However, when it posts I get the following S3 error, and I'm unsure why, because I'm not passing JSON to it:

        <?xml version="1.0" encoding="UTF-8"?>
        <Error>
          <Code>InvalidPolicyDocument</Code>
          <Message>Invalid Policy: Invalid JSON.</Message>
          <RequestId>B29469C6151BE0E8</RequestId>
          <HostId>BFPk6W2kt1b6hTtx0mEq6dWdN/IhO0gNR5bct//7LAOwJxm1C3PrxS4RPv1blzJ8</HostId>
        </Error>

    I've googled it for the last hour or so and can't figure it out. If I change the order of the array fields, I get a different error - I believe the order of the posted fields is somehow important. Any help would be much appreciated!

    C
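    The InvalidPolicyDocument message usually means the policy field S3 received doesn't decode to valid JSON, which points at how the policy was built or encoded before it reached this script. A hedged sketch of generating it server-side instead of trusting $_POST ($bucket and $aws_secret are placeholder variables):

        // The policy is base64-encoded JSON, and the signature is an
        // HMAC-SHA1 of that base64 string using your AWS secret key.
        $policy = base64_encode(json_encode(array(
            'expiration' => gmdate('Y-m-d\TH:i:s\Z', time() + 3600),
            'conditions' => array(
                array('bucket' => $bucket),
                array('acl' => 'private'),
                array('starts-with', '$key', ''),
                array('success_action_status' => '201'),
            ),
        )));
        $signature = base64_encode(hash_hmac('sha1', $policy, $aws_secret, true));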


  • PayPal Encrypted Website Payments

    - by John Isaacks
    I am trying to integrate the PayPal Website Payments Standard "Cart Upload" payment type into my shopping cart. I integrated Google Checkout a while back and did not find it nearly as confusing as I'm finding PayPal. I am getting info on how to encrypt the cart from here: https://cms.paypal.com/us/cgi-bin/?&cmd=_render-content&content_ID=developer/e_howto_html_encryptedwebpayments#id08A3I0P017Q

    PayPal says I need to generate a private key and a public certificate using OpenSSL. I went to OpenSSL's site and downloaded the latest release, which is just a folder containing various files; I see no application I can use and am not sure what to do here.

    Even if I were to get OpenSSL to generate a private key and public cert, the next step is to download either an MS or Java command-line tool to create the encrypted cart ahead of time, with the cart total, tax, etc. - which sounds crazy to me, as if I am supposed to do this manually before every order. Obviously I do not know ahead of time which items the customer is going to buy, so I need this to be done on the fly on my website using PHP. But I am completely lost.

    There has to be a way to set up dynamic secure cart uploads to PayPal. Can someone please point me in the right direction?
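    On the key/certificate step at least: OpenSSL is a command-line tool (the download linked from openssl.org is source code; on most systems it's easier to install a packaged build), and a key/certificate pair of the kind PayPal asks for can be generated with two commands along these lines (file names are illustrative):

        # Generate an RSA private key, then a self-signed X.509 certificate
        # valid for one year; the resulting .pem files are what you configure.
        openssl genrsa -out my-prvkey.pem 2048
        openssl req -new -key my-prvkey.pem -x509 -days 365 -out my-pubcert.pem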


  • Posting a status via Facebook's Graph API

    - by Simon R
    In PHP, I am trying to post a status to our Facebook fan page using the Graph API. Despite following the instructions Facebook gives, the following code does not update the status:

        $xPost['access_token'] = "{key}";
        $xPost['message'] = "Posting a message test.";

        $ch = curl_init('https://graph.facebook.com/{page_id}/feed');
        curl_setopt($ch, CURLOPT_VERBOSE, 1);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_HEADER, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, 120);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $xPost);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 1);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 1);
        curl_setopt($ch, CURLOPT_CAINFO, NULL);
        curl_setopt($ch, CURLOPT_CAPATH, NULL);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0);
        $result = curl_exec($ch);

    Does anyone know why this code is not working? The access_token is correct.
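    A hedged first step is to look at what the call actually returned - Graph replies with a JSON error object that usually names the problem (commonly a token that lacks page-publishing permissions, or an SSL verification failure on the cURL side, given VERIFYPEER is on with no CA bundle set):

        if ($result === false) {
            // The request never completed; with VERIFYPEER on and CAINFO
            // unset, this is often a certificate verification failure.
            echo 'cURL error: ' . curl_error($ch);
        } else {
            $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            // Graph returns {"error": {...}} with a 4xx status on failure.
            echo "HTTP $status\n" . $result;
        }
        curl_close($ch);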


  • nginx error: (99: Cannot assign requested address)

    - by k-g-f
    I am running Ubuntu Hardy 8.04 and nginx 0.7.65, and when I try starting my nginx server:

        $ sudo /etc/init.d/nginx start

    I get the following error:

        Starting nginx: [emerg]: bind() to IP failed (99: Cannot assign requested address)

    where "IP" is a placeholder for my IP address. Does anybody know why that error might be happening? This is running on EC2.

    My nginx.conf file looks like this:

        user www-data www-data;
        worker_processes 4;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            access_log /usr/local/nginx/logs/access.log;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 3;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /usr/local/nginx/sites-enabled/*;
        }

    and my /usr/local/nginx/sites-enabled/example.com looks like:

        server {
            listen IP:80;
            server_name example.com;
            rewrite ^/(.*) https://example.com/$1 permanent;
        }

        server {
            listen IP:443 default ssl;
            ssl on;
            ssl_certificate /etc/ssl/certs/myssl.crt;
            ssl_certificate_key /etc/ssl/private/myssl.key;
            ssl_protocols SSLv3 TLSv1;
            ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP;
            server_name example.com;
            access_log /home/example/example.com/log/access.log;
            error_log /home/example/example.com/log/error.log;
        }
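    A likely explanation on EC2, offered as a hedged guess: the public (elastic) IP is NATted and is never actually assigned to the instance's network interface, so nginx cannot bind() to it. Listening on all interfaces (or on the instance's private IP) sidesteps this:

        server {
            listen 80;               # instead of listen PUBLIC_IP:80
            server_name example.com;
            rewrite ^/(.*) https://example.com/$1 permanent;
        }

        server {
            listen 443 default ssl;  # instead of listen PUBLIC_IP:443
            server_name example.com;
            # ssl_certificate / ssl_certificate_key etc. as before
        }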


  • curl halts script execution

    - by Funky Dude
    My script uses cURL to upload images to the SmugMug site via the SmugMug API. I loop through a folder and upload every image in there, but after 3-4 uploads curl_exec fails, stopping everything and preventing the remaining images from uploading.

        $upload_array = array(
            "method" => "smugmug.images.upload",
            "SessionID" => $session_id,
            "AlbumID" => $alb_id,
            "FileName" => zerofill($n, 3) . ".jpg",
            "Data" => base64_encode($data),
            "ByteCount" => strlen($data),
            "MD5Sum" => $data_md5);

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $upload_array);
        curl_setopt($ch, CURLOPT_URL, "https://upload.smugmug.com/services/api/rest/1.2.2/");
        $upload_result = curl_exec($ch); //fails here
        curl_close($ch);

    Updated: I added logging to my script. When it does fail, the logging stops after fwrite($fh, "begin curl\n");

        fwrite($fh, "begin curl\n");
        $upload_result = curl_exec($ch);
        fwrite($fh, "curl executed\n");
        fwrite($fh, "curl info: " . print_r(curl_getinfo($ch, true)) . "\n");
        fwrite($fh, "xml dump: $upload_result \n");
        fwrite($fh, "curl error: " . curl_error($ch) . "\n");

    I also set curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 60*60);
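    Two hedged things to check: PHP's max_execution_time will silently kill a script mid-curl_exec after 30 seconds by default (which fits "fails after 3-4 uploads"), and an unbounded transfer can hang forever. Something along these lines keeps one bad upload from taking down the whole loop:

        set_time_limit(0);  // lift PHP's execution-time cap for long upload runs

        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);  // cap connect time
        curl_setopt($ch, CURLOPT_TIMEOUT, 300);        // hard cap per upload

        $upload_result = curl_exec($ch);
        if ($upload_result === false) {
            // log and move on instead of halting the batch
            error_log('upload failed: ' . curl_errno($ch) . ' ' . curl_error($ch));
            curl_close($ch);
            continue; // assumes this runs inside the folder loop
        }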


  • Amazon access key showing in URL for Carrierwave and Fog

    - by kcurtin
    I just switched from storing my images uploaded via CarrierWave locally to using Amazon S3 via the fog gem in my Rails 3.1 app. Images are being added, but when I click on an image in my application, the URL includes my access key and a signature. Here is a sample URL (XXX replaces the actual strings):

        https://s3.amazonaws.com/bucketname/uploads/photo/image/2/IMG_4842.jpg?AWSAccessKeyId=XXX&Signature=XXX%3D&Expires=1332093418

    This happens in development (localhost:3000) and in production on Heroku. Here is my uploader:

        class ImageUploader < CarrierWave::Uploader::Base
          include CarrierWave::RMagick

          storage :fog

          def store_dir
            "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
          end

          process :convert => :jpg
          process :resize_to_limit => [640, 640]

          version :thumb do
            process :convert => :jpg
            process :resize_to_fill => [280, 205]
          end

          version :avatar do
            process :convert => :jpg
            process :resize_to_fill => [120, 120]
          end
        end

    And my config/initializers/fog.rb:

        CarrierWave.configure do |config|
          config.fog_credentials = {
            :provider => 'AWS',
            :aws_access_key_id => 'XXX',
            :aws_secret_access_key => 'XXX',
          }
          config.fog_directory = 'bucketname'
          config.fog_public = false
        end

    Anyone know how to make sure this information isn't exposed?
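    A hedged observation: with config.fog_public = false, CarrierWave deliberately builds signed, expiring URLs, and those always embed the access key id (which, unlike the secret key, is not sensitive on its own). If the images can be world-readable and you want clean URLs, flipping that flag should do it:

        CarrierWave.configure do |config|
          # ... credentials and directory as before ...

          # Public objects get plain https://s3.amazonaws.com/... URLs with
          # no AWSAccessKeyId/Signature/Expires query parameters.
          config.fog_public = true
        end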


  • Getting FeedBurner subscriber count with cURL

    - by Eray Alakese
    I'm using the FeedBurner Awareness API. The XML data looks like this:

        <rsp stat="ok">
          <!-- This information is part of the FeedBurner Awareness API. If you want
               to hide this information, you may do so via your FeedBurner Account. -->
          <feed id="9n66llmt1frfir51p0oa367ru4" uri="teknoblogo">
            <entry date="2011-01-15" circulation="11" hits="18" reach="0"/>
          </feed>
        </rsp>

    I want to get the circulation value (11). I'm using this code:

        $whaturl = "https://feedburner.google.com/api/awareness/1.0/GetFeedData?uri=teknoblogo";

        // Initialize the cURL session
        $ch = curl_init();
        // Set curl to return the data instead of printing it to the browser
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        // Set the URL
        curl_setopt($ch, CURLOPT_URL, $whaturl);
        // Execute the fetch
        $data = curl_exec($ch);
        // Close the connection
        curl_close($ch);

        $xml = new SimpleXMLElement($data);
        $fb = $xml->feed->entry['circulation'];
        echo $fb;
        echo "OK";

    But the returned data is blank, with no error - only OK is printed. How can I solve this?

    EDIT: echo $data; prints nothing as well.
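    Since echo $data; shows nothing, curl_exec() is almost certainly returning false rather than empty XML. A hedged check (SSL verification with no configured CA bundle is a common culprit for https endpoints):

        $data = curl_exec($ch);
        if ($data === false) {
            // Surface the real failure instead of silently constructing
            // SimpleXMLElement from an empty string.
            die('cURL error: ' . curl_error($ch));
        }

        // If the error turns out to be certificate verification, point cURL
        // at a CA bundle (preferred over disabling verification):
        // curl_setopt($ch, CURLOPT_CAINFO, '/path/to/cacert.pem');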


  • Heroku: bash: bundle: command not found

    - by Space Monkey
    My Heroku deployment is crashing with the following errors:

        2012-12-12T17:16:18+00:00 app[web.1]: bash: bundle: command not found
        2012-12-12T17:16:19+00:00 heroku[web.1]: Process exited with status 127
        2012-12-12T17:16:19+00:00 heroku[web.1]: State changed from starting to crashed

    The Heroku documentation for this error says to set the PATH and GEM_PATH variables as described in https://devcenter.heroku.com/articles/changing-ruby-version-breaks-path - I tried that, but it didn't help:

        $ heroku config:add PATH=bin:vendor/bundle/ruby/1.9.1/bin:/usr/local/bin:/usr/bin:/bin
        $ heroku config:add GEM_PATH=vendor/bundle/ruby/1.9.1
        $ heroku run rake db:migrate
        Running rake db:migrate attached to terminal... up, run.7130
        bash: bundle: command not found

    Next, I tried setting the Ruby version in my Gemfile. This increased the slug size, but the app still wasn't up:

        ruby "1.9.2"

    Pushed to Heroku:

        -----> Using Ruby version: ruby-1.9.2
        -----> Installing dependencies using Bundler version 1.2.2

        $ heroku run "ruby -v"
        Running `ruby -v` attached to terminal... up, run.4483
        ruby 1.9.2p320 (2012-04-20 revision 35421) [x86_64-linux]

    Can someone please advise?


  • Getting the Access Token from a Facebook Open Graph response in Ruby

    - by Gearóid
    Hi, I'm trying to implement single sign-on using Facebook in my Ruby Sinatra app. So far I've been following this tutorial: http://jaywiggins.com/2010/05/facebook-oauth-with-sinatra/

    I am able to send a request for a user to connect to my application, but I'm having trouble actually getting the access token. The user can connect without trouble and I receive a response with the "code" parameter, which I'm supposed to exchange for an access token - but this is where I get stuck.

    I submit a URL with the following parameters:

        https://graph.facebook.com/oauth/access_token/{client_id}&{client_secret}&{code}&{redirect_uri}

    The values in curly brackets above are obviously replaced by the real ones. I submit this using the following code:

        response = open(url)

    This doesn't seem to return anything useful in the way of an access token (it has a @base_uri, which is the URL I submitted above, and a few other parameters, though nothing useful-looking). However, if I paste that same URL into a browser, I receive back an access token.

    Can anyone tell me how I can take the response from Facebook and pull out the access token? Thanks.
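    What open() (open-uri) hands back is an IO-like object, which is why inspecting it only shows @base_uri and friends; the token is in the response body, which Facebook's token endpoint returns form-encoded rather than as JSON. A hedged sketch of reading and parsing it:

        require 'open-uri'
        require 'cgi'

        # The body looks like "access_token=XXX&expires=NNNN".
        body = open(url).read
        params = CGI.parse(body)           # => {"access_token"=>["XXX"], ...}
        access_token = params['access_token'].first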


  • Email function using templates. Includes via ob_start and global vars

    - by Geo
    I have a simple Email() class. It's used to send out emails from my website:

        <? Email::send($to, $subj, $msg, $options); ?>

    I also have a bunch of email templates written in plain HTML pierced with a few PHP variables, e.g. /inc/email/templates/account_created.php:

        <p>Dear <?=$name?>,</p>
        <p>Thank you for creating an account at <?=$SITE_NAME?>. To login use the link below:</p>
        <p><a href="https://<?=$SITE_URL?>/account" target="_blank"><?=$SITE_NAME?>/account</a></p>

    In order to have the PHP vars rendered, I had to include the template inside my function. But since include does not return the contents but rather sends them directly to the output, I had to wrap it with the buffer functions:

        <?
        abstract class Email
        {
            public static function send($to, $subj, $msg, $options = array())
            {
                /* ... */
                ob_start();
                include '/inc/email/templates/account_created.php';
                $msg = ob_get_clean();
                /* ... */
            }
        }

    After that I realized that the PHP vars are not rendered because they are inside the function's scope, so I had to globalize the variables inside the template:

        <? global $SITE_NAME, $SITE_URL, $name; ?>
        <p>Dear <?=$name?>,</p>
        ...

    So the question is whether there is a more elegant solution to this. Mainly I am concerned about my workarounds using ob_start() and global; for some reason that seems odd to me. Or is this pretty much common practice?
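    Output buffering around include is in fact the standard way to render PHP templates; the part worth replacing is the global. A hedged sketch of a small helper that passes variables explicitly via extract() (names are illustrative):

        // Render a template file with an explicit set of variables, so the
        // template never has to reach for globals.
        function render_template($path, array $vars)
        {
            extract($vars);   // exposes $name, $SITE_NAME, ... to the include
            ob_start();
            include $path;
            return ob_get_clean();
        }

        // Usage inside Email::send():
        $msg = render_template('/inc/email/templates/account_created.php', array(
            'name'      => $name,
            'SITE_NAME' => $SITE_NAME,
            'SITE_URL'  => $SITE_URL,
        ));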


  • Passing a URL as a URL parameter

    - by Andrea
    I am implementing OpenID login in a CakePHP application. At a certain point I need to redirect to another action while preserving the OpenID identity, which is itself a URL (with GET parameters), for instance:

        https://www.google.com/accounts/o8/id?id=31g2iy321i3y1idh43q7tyYgdsjhd863Es

    How do I pass this data? The first attempt would be:

        function openid() {
            ...
            $this->redirect(array('controller' => 'users', 'action' => 'openid_create', $openid));
        }

    but the obvious problem is that this completely messes up the way CakePHP parses URL parameters. I'd need to do one of the following:

    1) encode the URL in a CakePHP-friendly manner for passing it, and decode it after that, or
    2) pass the URL as a POST parameter, but I don't know how to do this.

    EDIT: In response to comments, I should be clearer. I am using the OpenID component and have a working OpenID implementation. What I need to do is link OpenID with an existing user system. When a new user logs in via OpenID, I ask for more details and then create a new user with this data. The problem is that I have to keep the OpenID URL throughout this process.
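    For option 1, a hedged sketch: URL-safe base64 keeps the slashes and query string of the identity URL out of CakePHP's routing, and the target action simply reverses it ($packed is an illustrative name):

        // Before redirecting: pack the identity URL into a routing-safe token.
        $packed = rtrim(strtr(base64_encode($openid), '+/', '-_'), '=');
        $this->redirect(array('controller' => 'users',
                              'action' => 'openid_create', $packed));

        // In users/openid_create($packed): unpack it again.
        $openid = base64_decode(strtr($packed, '-_', '+/'));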


  • How to secure login and member area with SSL certificate?

    - by citronas
    Background: I have an ASP.NET web application project that contains a public area and a member area. I want to use SSL to secure communication between the client and the server. (At the university we have an unsecured wireless network where a WLAN sniffer can read usernames/passwords, and I do not want my application to have this security problem.) The application is running on IIS 7.5.

    Is it possible to have one web app with unsecured pages (the public area) and a secured area (the member area, which requires a login)? If yes, how do I handle navigation between these two areas?

    Example: my web app is hosted at http://foo.abc, with pages like http://foo.abc/default.aspx and http://foo.abc/foo.aspx. The same project contains a page like /member/default.aspx, which is protected by a login on the page http://foo.abc/login.aspx. So I would need to implement SSL for /login.aspx and all pages under /member/.

    How can I do that? I have found out how to create SSL certificates in IIS 7.5 and how to add such a binding to a web app, but how can I tell my web app which pages should be called with https and not with http? What is best practice here?
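    One common pattern, sketched with hedges (a shared base page class used by login.aspx and the /member/ pages is an assumption, not something from the question): redirect any plain-http request for a protected page to its https equivalent before anything sensitive happens.

        using System;
        using System.Web.UI;

        // Hypothetical base class for pages that must be served over SSL.
        public class SecurePage : Page
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);
                if (!Request.IsSecureConnection)
                {
                    // Rebuild the current URL with the https scheme and
                    // redirect before any sensitive processing happens.
                    UriBuilder secure = new UriBuilder(Request.Url)
                    {
                        Scheme = Uri.UriSchemeHttps,
                        Port = 443
                    };
                    Response.Redirect(secure.Uri.AbsoluteUri, true);
                }
            }
        }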


  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone, I have a problem that requires me to convert the column types of an existing live production database (I've duplicated the schema on my local development box, don't worry :)) from ENUMs to strings.

    Background: a previous developer left my codebase in terrible shape; migration versions are extremely out of date, and apparently he stopped using migrations at some point in development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5 because my table columns have ENUM column types, which get dumped as :string, :limit => 0 in my schema.rb. That produces an invalid default value when running rake db:test:prepare, as in:

        Mysql::Error: Invalid default value for 'own_vehicle': CREATE TABLE `lifestyles` (
          `id` int(11) DEFAULT NULL auto_increment PRIMARY KEY,
          `member_id` int(11) DEFAULT 0 NOT NULL,
          `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL,
          `hobbies` text,
          `sports` text,
          `AStar_activities` text,
          `how_know_IRC` varchar(100),
          `IRC_referral` varchar(200),
          `IRC_others` varchar(100),
          `IRC_rdrive` varchar(30)
        ) ENGINE=InnoDB

    I'm thinking of writing a migration task that looks through all the database tables for columns with ENUM types and replaces them with VARCHAR, and I'm wondering if this is the right way to approach this problem. I'm also not sure how to write it so that it loops through the database tables and replaces all ENUM column types with VARCHAR.

    References:
    [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql
    [2] http://dev.rubyonrails.org/ticket/2832
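    That approach can work. A hedged sketch of such a migration for Rails 2.3 (MySQL-specific; the :limit of 20 is an arbitrary placeholder - pick limits per column, and test against the duplicated schema first):

        class ConvertEnumsToStrings < ActiveRecord::Migration
          def self.up
            connection = ActiveRecord::Base.connection
            connection.tables.each do |table|
              connection.columns(table).each do |col|
                # MySQL reports ENUM columns with an sql_type like "enum('Y','N')".
                if col.sql_type =~ /^enum/i
                  change_column table, col.name, :string, :limit => 20,
                                :default => col.default, :null => col.null
                end
              end
            end
          end

          def self.down
            raise ActiveRecord::IrreversibleMigration
          end
        end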


  • HDFS: some datanodes of the cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (ver 0.21). Some datanodes of the cluster suddenly disconnect while I am running MapReduce code on 10 GB of data. After all mappers finish and around 80% of the reducers have processed, randomly one or more datanodes disconnect from the network, and then the other datanodes start to disappear from the network even if I kill the MapReduce job as soon as I notice a datanode was disconnected.

    I've tried changing dfs.datanode.max.xcievers to 4096, turned off the firewalls on all computing nodes, disabled SELinux and increased the open-file limit to 20000, but none of it worked. Does anyone have an idea how to solve this problem?

    The following are the error logs from MapReduce:

        12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
        java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

    and these are logs from a datanode:

        2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
        2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
        2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
            at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
            at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
            at java.lang.Thread.run(Thread.java:722)
        2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
        2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010]
            at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
            at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
            at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284)
            at java.lang.Thread.run(Thread.java:722)

    hdfs-site.xml:

        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/home/hadoop/data/name</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value>
          </property>
          <property>
            <name>dfs.replication</name>
            <value>3</value>
          </property>
          <property>
            <name>dfs.datanode.max.xcievers</name>
            <value>4096</value>
          </property>
          <property>
            <name>dfs.http.address</name>
            <value>0.0.0.0:20070</value>
            <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.http.address</name>
            <value>0.0.0.0:20075</value>
            <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.secondary.http.address</name>
            <value>0.0.0.0:20090</value>
            <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.address</name>
            <value>0.0.0.0:20010</value>
            <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.ipc.address</name>
            <value>0.0.0.0:20020</value>
            <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.https.address</name>
            <value>0.0.0.0:20475</value>
          </property>
          <property>
            <name>dfs.https.address</name>
            <value>0.0.0.0:20470</value>
          </property>
        </configuration>

    mapred-site.xml:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>masternode:29001</value>
          </property>
          <property>
            <name>mapred.system.dir</name>
            <value>/home/hadoop/data/mapreduce/system</value>
          </property>
          <property>
            <name>mapred.local.dir</name>
            <value>/home/hadoop/data/mapreduce/local</value>
          </property>
          <property>
            <name>mapred.map.tasks</name>
            <value>32</value>
            <description>default number of map tasks per job.</description>
          </property>
          <property>
            <name>mapred.tasktracker.map.tasks.maximum</name>
            <value>4</value>
          </property>
          <property>
            <name>mapred.reduce.tasks</name>
            <value>8</value>
            <description>default number of reduce tasks per job.</description>
          </property>
          <property>
            <name>mapred.map.child.java.opts</name>
            <value>-Xmx2048M</value>
          </property>
          <property>
            <name>io.sort.mb</name>
            <value>500</value>
          </property>
          <property>
            <name>mapred.task.timeout</name>
            <value>1800000</value> <!-- 30 minutes -->
          </property>
          <property>
            <name>mapred.job.tracker.http.address</name>
            <value>0.0.0.0:20030</value>
            <description>50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>mapred.task.tracker.http.address</name>
            <value>0.0.0.0:20060</value>
            <description>50060</description>
          </property>
        </configuration>


  • When to Store Temporary Values in Hidden Field vs. Session vs. Database?

    - by viatropos
    I am trying to build a simple OpenID login panel similar to how Stack Overflow's works. The goal is:

    1. User clicks an OpenID/OAuth provider.
    2. OpenID/OAuth stuff happens; we end up with the result (already made that).
    3. We want to confirm that the user actually wants to create a new account (vs. associating it with another OpenID account).

    In Stack Overflow, they keep hidden fields on a form that looks like this:

        <form action="/users/openidconfirm" method="post">
          <p>This is an OpenID we haven't seen on Stack Overflow before:</p>
          <p class="openid-identifier">https://me.yahoo.com/a/some-hash</p>
          <p>Do you want to associate this OpenID with your Stack Overflow account?</p>
          <div>
            <input type="hidden" name="fkey" value="9792ab2zza1q2a4ac414casdfa137eafba7">
            <input type="hidden" name="s" value="c1a3q133-11fa-49r0-a7bz-da19849383218">
            <input type="submit" value="Associate OpenID">
            <input type="button" value="Cancel" onclick="window.location.href = 'http://stackoverflow.com/users/169992/viatropos?s=c1a3q133-11fa-49r0-a7bz-da19849383218'">
          </div>
        </form>

    The initial question is: what are those hashes, fkey and s? Not that I really care what these specific hashes are, but what seems to be happening is that they have processed the OpenID response and saved it to the DB in a temporary object or something, and from there they generate these keys, because they don't look like OAuth keys to me.

    The main situation is: after I have processed the OpenID/OAuth responses, I don't yet want to create a new user/account until the user submits the "confirm" form. Should I store the keys and tokens temporarily in a "confirm" form like this? Or is there a better way? It seems that a temporary database object would be a lot of work to manage properly.

    Thanks for the help.
    Lance


  • What algorithm should I follow to retrieve data in the prescribed format?

    - by Prateek
    I have to retrieve data from a database whose tables consist of fields named "ttc", "rm", "atc" and "lta". These values are stored on a daily basis at 15-minute intervals, like:

        From_time  To_time  ttc  rm  atc  lta
        00:00      00:15    45   10  35   25
        00:15      00:30    35   10  25   25

    and so on. These values are stored for every day of every month, and I want them previewed in the prescribed format. What algorithm should I follow? I am confused about how to do the comparisons for a format like the one at this link: https://drive.google.com/a/itbhu.ac.in/file/d/0B_J0Ljq64i4Za2J1V0lvbDZ4eGc/edit?usp=sharing

    To be specific once again: I have to prepare a report covering an entire month from the data stored as described above. There may be cases where, for two particular days, the value of "ttc" is the same for some interval, and I want those listed together (as shown in the format). The confusing part is that any of the values "ttc", "rm", "atc", "lta" can be the same for any particular interval. So what algorithm should I follow for such comparisons? If anything about the question is unclear, feel free to ask.


  • TimeZone changes to UTC while updating the Appointment

    - by Firoz Ansari
    I am using EWS 1.2 to send appointments. When creating a new appointment, the time zone shows properly in the notification mail, but on updating the same appointment its time zone resets to UTC. Could anyone help me fix this issue? Here is sample code to replicate it:

        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010_SP1,
            TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time"));
        service.Credentials = new WebCredentials("ews_calendar", PASSWORD, "acme");
        service.Url = new Uri("https://acme.com/EWS/Exchange.asmx");

        Appointment newAppointment = new Appointment(service);
        newAppointment.Subject = "Test Subject";
        newAppointment.Body = "Test Body";
        newAppointment.Start = new DateTime(2012, 03, 27, 17, 00, 0);
        newAppointment.End = newAppointment.Start.AddMinutes(30);
        newAppointment.RequiredAttendees.Add("[email protected]");

        // Attendees get a notification mail for this appointment using the
        // (UTC-05:00) Eastern Time (US & Canada) time zone. Notification content:
        // When: Tuesday, March 27, 2012 5:00 PM-5:30 PM. (UTC-05:00) Eastern Time (US & Canada)
        newAppointment.Save(SendInvitationsMode.SendToAllAndSaveCopy);

        // Pull the existing appointment
        string itemId = newAppointment.Id.ToString();
        Appointment existingAppointment = Appointment.Bind(service, new ItemId(itemId));

        // Attendees get a notification mail for this appointment using the
        // UTC time zone. Notification content:
        // When: Tuesday, March 27, 2012 11:00 PM-11:30 PM. UTC
        existingAppointment.Update(ConflictResolutionMode.AlwaysOverwrite,
            SendInvitationsOrCancellationsMode.SendToAllAndSaveCopy);
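    A hedged guess at the cause: Appointment.Bind with the default property set may not bring the time-zone properties back, so the update goes out without one and Exchange falls back to UTC. Re-asserting the time zone before the update is worth trying (StartTimeZone/EndTimeZone require the Exchange2010_SP1 schema; verify these property names against Managed API 1.2):

        // Bind with all first-class properties so time-zone data is loaded.
        Appointment existingAppointment = Appointment.Bind(service, new ItemId(itemId),
            new PropertySet(BasePropertySet.FirstClassProperties));

        // Re-assert the intended time zone before updating.
        existingAppointment.StartTimeZone =
            TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");
        existingAppointment.EndTimeZone = existingAppointment.StartTimeZone;

        existingAppointment.Update(ConflictResolutionMode.AlwaysOverwrite,
            SendInvitationsOrCancellationsMode.SendToAllAndSaveCopy);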

