Search Results

Search found 887 results on 36 pages for 'expires'.

  • modx friendly urls nginx FPM php5.3 - friendly url's not working

    - by okdan
    Hi, I'm using PHP 5.3 on nginx 0.8.53 with FPM on MODX Revolution. I'm trying to get "friendly URLs" to work, but all I get is 404s. In the MODX config, friendly URLs is set to yes and friendly aliases is set to no (so it drops the suffix). My config file:

        server {
            listen 80;
            server_name .mydomain.net;
            # index index.php;
            root /home/mylogin/htdocs;

            location / {
                index index.php index.html;
                if (!-e $request_filename) {
                    rewrite ^/(.*)$ /index.php?q=$1 last;
                }
            }

            # serve static files directly
            location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico)$ {
                root /home/mylogin/htdocs;
                access_log off;
                expires 30d;
                break;
            }
        }

    FastCGI MODX file:

        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 300;
        fastcgi_buffers 4 32k;
        fastcgi_busy_buffers_size 32k;
        fastcgi_temp_file_write_size 32k;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_ignore_client_abort on;
        fastcgi_intercept_errors on;
        fastcgi_read_timeout 300;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REDIRECT_STATUS 200;
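
    One thing worth checking, sketched as a guess rather than a confirmed fix: the server block above rewrites everything to /index.php?q=$1, but nothing shown actually routes .php requests to PHP-FPM, so the rewritten request may itself 404. Assuming the FastCGI file above is meant to be included for PHP requests (its path below is hypothetical), a minimal rework using try_files (available since nginx 0.7.27, so 0.8.53 has it):

        location / {
            index index.php index.html;
            # serve real files/dirs, otherwise hand the path to MODX as ?q=
            try_files $uri $uri/ @modx;
        }

        location @modx {
            rewrite ^/(.*)$ /index.php?q=$1 last;
        }

        location ~ \.php$ {
            include /etc/nginx/modx_fastcgi.conf;   # hypothetical path to the FastCGI file above
        }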

  • Cookie add in the Global.asax warning in application log

    - by Ioxp
    In my Global.asax file I have the following:

        System.Web.HttpCookie isAccess = new System.Web.HttpCookie("IsAccess");
        isAccess.Expires = DateTime.Now.AddDays(-1);
        isAccess.Value = "";
        System.Web.HttpContext.Current.Response.Cookies.Add(isAccess);

    So every time this code runs, the following is logged in the application events as a warning:

        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 5/25/2010 12:23:20 PM
        Event time (UTC): 5/25/2010 4:23:20 PM
        Event ID: c515e27a28474eab8d99720c3f5a8e90
        Event sequence: 4148
        Event occurrence: 332
        Event detail code: 0

        Application information:
            Application domain: /LM/W3SVC/2100509645/Root-1-129192259222289896
            Trust level: Full
            Application Virtual Path: /
            Application Path: <PathRemoved>\www\
            Machine name: TIPPER

        Process information:
            Process ID: 6936
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE

        Exception information:
            Exception type: NullReferenceException
            Exception message: Object reference not set to an instance of an object.

        Request information:
            Request URL:
            Request path:
            User host address:
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: NT AUTHORITY\NETWORK SERVICE

        Thread information:
            Thread ID: 7
            Thread account name: NT AUTHORITY\NETWORK SERVICE
            Is impersonating: False
            Stack trace: at ASP.global_asax.Session_End(Object sender, EventArgs e) in <PathRemoved>\Global.asax:line 113

    Any idea why this code would cause this error?
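
    The stack trace points at Session_End, which ASP.NET raises internally on a timer thread with no active request or response, so HttpContext.Current is null there and the Cookies.Add line throws. A minimal sketch of the usual guard (a cookie can't be delivered from Session_End anyway, since no browser is attached, so the cookie work typically moves elsewhere):

        // Session_End runs with no HTTP context: there is no request,
        // no response, and no browser to receive a cookie.
        protected void Session_End(object sender, EventArgs e)
        {
            var context = System.Web.HttpContext.Current;
            if (context == null)
            {
                // Nothing to send a cookie to; do server-side cleanup only.
                return;
            }

            var isAccess = new System.Web.HttpCookie("IsAccess")
            {
                Expires = DateTime.Now.AddDays(-1),
                Value = ""
            };
            context.Response.Cookies.Add(isAccess);
        }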

  • .NET 4.0 Implementing OutputCacheProvider

    - by azamsharp
    I am checking out the OutputCacheProvider in ASP.NET 4.0 and using it to store my output cache in a MongoDB database. I am not able to understand the purpose of the Add method, which is one of the override methods for OutputCacheProvider. The Add method is invoked when you have VaryByParam set to something. So, if I have VaryByParam = "id" then the Add method will be invoked. But after the Add, the Set is also invoked, and I can insert into the MongoDB database inside the Set method:

        public override void Set(string key, object entry, DateTime utcExpiry)
        {
            // if there is something in the query, use the path and query to generate the key
            var url = HttpContext.Current.Request.Url;
            if (!String.IsNullOrEmpty(url.Query))
            {
                key = url.PathAndQuery;
            }

            Debug.WriteLine("Set(" + key + "," + entry + "," + utcExpiry + ")");

            _service.Set(new CacheItem() { Key = MD5(key), Item = entry, Expires = utcExpiry });
        }

    Inside the Set method I use PathAndQuery to get the params of the query string, then take an MD5 of the key and save it into the MongoDB database. It seems like the Add method would be useful if I am doing something like VaryByParam = "custom" or something. Can anyone shed some light on the Add method of OutputCacheProvider?
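
    For what it's worth, the documented contract of Add differs from Set: Add must insert only when the key is absent and return whatever entry is already stored, so concurrent requests converge on a single cached copy, while Set overwrites unconditionally. A hedged sketch against the same hypothetical _service used above (its Get method is assumed by analogy to the Set in the question):

        // Add is "insert only if absent": return the existing entry when the
        // key is already cached, otherwise store the new entry and return it.
        public override object Add(string key, object entry, DateTime utcExpiry)
        {
            var existing = _service.Get(MD5(key));      // assumed _service lookup
            if (existing != null && existing.Expires > DateTime.UtcNow)
            {
                return existing.Item;                   // keep the first writer's entry
            }

            _service.Set(new CacheItem() { Key = MD5(key), Item = entry, Expires = utcExpiry });
            return entry;
        }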

  • Custom webserver caching

    - by Mark Kinsella
    I'm working with a custom webserver on an embedded system and having some problems correctly setting my HTTP headers for caching. Our webserver is generating all dynamic content as XML, and we're using semi-static XSL files to display it, with some dynamic JSON requests thrown in for good measure, along with semi-static images. I say "semi-static" because the problems occur when we need to do a firmware update, which might change the XSL and image files.

    Here's what needs to be done: cache the XSL and image files, and do not cache the XML and JSON responses. I have full control over the HTTP response and am currently using ETags with the XSL and image files (generated from the modified time and size), and setting Cache-Control: no-cache on the XML and JSON responses.

    As I said, everything works dandy until a firmware update, when the XSL and image files are sometimes cached. I've seen it work fine with the latest versions of Firefox and Safari but have had some problems with IE. I know one solution to this problem would be to simply rename the XSL and image files after each version (e.g. logo-v1.1.png, logo-v1.2.png) and set the Expires header to a date in the future, but this would be difficult with the XSL files and I'd like to avoid it.

    Note: There is a clock on the unit, but it requires the user to set it and might not be 100% reliable, which is what might be causing my caching issues when using ETags.

    What's the best practice that I should employ? I'd like to avoid as many webserver requests as possible, but invalidating old XSL and image files after a software update is the #1 priority.

  • How can I validate/secure/authenticate a JavaScript-based POST request?

    - by Bungle
    A product I'm helping to develop will basically work like this:

    1. A Web publisher creates a new page on their site that includes a <script> from our server.
    2. When a visitor reaches that new page, that <script> gathers the text content of the page and sends it to our server via a POST request (cross-domain, using a <form> inside of an <iframe>).
    3. Our server processes the text content and returns a response (via JSONP) that includes an HTML fragment listing links to related content around the Web.
    4. This response is cached and served to subsequent visitors until we receive another POST request with text content from the same URL, at which point we regenerate a "fresh" response. These POSTs only happen when our cached TTL expires, at which point the server signifies that and prompts the <script> on the page to gather and POST the text content again.

    The problem is that this system seems inherently insecure. In theory, anyone could spoof the HTTP POST request (including the referer header, so we couldn't just check for that) that sends a page's content to our server. This could include any text content, which we would then use to generate the related content links for that page.

    The primary difficulty in making this secure is that our JavaScript is publicly visible. We can't use any kind of private key or other cryptic identifier or pattern because that won't be secret. Ideally, we need a method that somehow verifies that a POST request corresponding to a particular Web page is authentic. We can't just scrape the Web page and compare the content with what's been POSTed, since the purpose of having JavaScript submit the content is that it may be behind a login system.

    Any ideas? I hope I've explained the problem well enough. Thanks in advance for any suggestions.

  • Refreshing Facebook session from an iframe application

    - by zombat
    I've got a Facebook iframe application that is completely external. By this I mean that once a user accesses the canvas URL to load the application, all the links in the iframe app go to my servers, and the canvas page never gets refreshed unless the user navigates to somewhere else on Facebook and comes back (or does a browser refresh). On the initial load of the app where Facebook creates the iframe, I get passed all the usual parameters like fb_sig_user, which allows me to create an internal app session based on the Facebook user. This app session (which is not the Facebook session, it's my own app session) is all I need to allow the user to work with the app.

    The problem comes an hour later. If the user leaves the computer, or uses the app for more than an hour, the Facebook session expires. There are some app pages which require fetching friend information, and once the FB session has expired, these pages break, throwing out errors such as "Error: Session key invalid or no longer valid".

    My question is whether there is a way to refresh the user's Facebook session from within an iframe application to keep it from expiring an hour later. Do any of the API calls do this? Is there a Facebook Connect trick to ping something? Is there any definitive method to keep it alive? I haven't been able to find any examples that specifically address this.

  • PHP ZipArchive Empty in IE

    - by Jesse Bunch
    Hi, I am using PHP's ZipArchive class to create a zip file containing photos and then serve it up to the browser for download. Here is my code:

        /**
         * Grabs the order, packages the files, and serves them up for download.
         *
         * @param string $intEntryID
         * @return void
         * @author Jesse Bunch
         */
        public static function download_order_by_entry_id($intUniqueID) {
            $objCustomer = PhotoCustomer::get_customer_by_unique_id($intUniqueID);
            if ($objCustomer):
                if (!class_exists('ZipArchive')):
                    trigger_error('ZipArchive Class does not exist', E_USER_ERROR);
                endif;

                $objZip = new ZipArchive();
                $strZipFilename = sprintf('%s/application/tmp/%s-%s.zip', $_SERVER['DOCUMENT_ROOT'], $objCustomer->getEntryID(), time());
                if ($objZip->open($strZipFilename, ZIPARCHIVE::CREATE) !== TRUE):
                    trigger_error('Unable to create zip archive', E_USER_ERROR);
                endif;

                foreach ($objCustomer->arrPhotosRequested as $objPhoto):
                    $filename = PhotoCart::replace_ee_file_dir_in_string($objPhoto->strHighRes);
                    $objZip->addFile($filename, sprintf('/press_photos/%s-%s', $objPhoto->getEntryID(), basename($filename)));
                endforeach;

                $objZip->close();

                header('Last-Modified: '.gmdate('D, d M Y H:i:s', filemtime($strZipFilename)).' GMT', TRUE, 200);
                header('Cache-Control: no-cache', TRUE);
                header('Pragma: Public', TRUE);
                header('Expires: ' . gmdate('D, d M Y H:i:s', time()) . ' GMT', TRUE);
                header('Content-Length: '.filesize($strZipFilename), TRUE);
                header('Content-disposition: attachment; filename=press_photos.zip', TRUE);
                header('Content-Type: application/octet-stream', TRUE);

                ob_start();
                readfile($strZipFilename);
                ob_end_flush();
                exit;
            else:
                trigger_error('Invalid Customer', E_USER_ERROR);
            endif;
        }

    This code works really well with all browsers but IE. In IE, the file downloads correctly, but the zip archive is empty. When trying to extract the files, Windows tells me that the zip archive is corrupt. Has anyone had this issue before?
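
    Two IE-specific suspects, offered as guesses rather than a confirmed diagnosis: zip entry names with a leading slash (Windows' built-in Compressed Folders handler, which IE users typically end up in, is known to choke on absolute-looking entry paths, while the tools common on other platforms tolerate them), and the no-cache header (IE historically refuses to hand a download to another handler when it was forbidden from caching it, especially over HTTPS). A sketch of both changes:

        // entry name without the leading slash
        $objZip->addFile($filename, sprintf('press_photos/%s-%s',
            $objPhoto->getEntryID(), basename($filename)));

        // ...

        // let IE keep the response long enough to open it
        header('Cache-Control: private', TRUE);
        header('Pragma: public', TRUE);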

  • Discussion - Allowing / blocking user access to pages (Client Side Only!) - Javascript / Jquery

    - by Ozaki
    TLDR: Using plain HTML/JavaScript (client side only) I want to prevent viewing of certain pages. The user will have to type a username and password, and depending on that they get access to different pages. Answers can NOT include server side whatsoever. It does not matter if they can break it easily; there is no sensitive information etc. Also the target audience will not have access to the internet OR probably know what a cookie is...

    At some point the user will have to type a username/password (I can define the cookie here). Currently I thought of using cookies to set a cookie for each page to say "true"/"false", but that would get messy with so many cookies. Or setting an array within a cookie for each page?

    I have a div field "#Content" which as it looks encompasses all of my content on the page, so blocking out content will be as simple as replacing it with "sorry you don't have access" etc.

    For example:

        $.cookie("Access", "page1, page2, page3", { expires: 1 });

    I am looking for any way to do this; it does not have to be with cookies. Would be nice to get a discussion of different ways this can be done. So the question is: what do YOU think would be a good way to go about doing this with client side validation?
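
    For the single-cookie idea, a minimal sketch using the jquery.cookie plugin already referenced above; currentPage is a hypothetical per-page id you would set on each page:

        // One cookie holds a comma-separated whitelist of page ids; each page
        // declares its own id and blanks out #Content when it isn't listed.
        var currentPage = "page2";  // hypothetical id, set per page

        function hasAccess(page) {
            var list = $.cookie("Access");            // e.g. "page1, page2, page3"
            if (!list) { return false; }
            return $.inArray(page, $.map(list.split(","), $.trim)) !== -1;
        }

        $(function () {
            if (!hasAccess(currentPage)) {
                $("#Content").html("Sorry, you don't have access.");
            }
        });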

  • Automated testing for Facebook SDK wrapper

    - by Andree
    Hi there! In my Facebook application, I have one Facebook wrapper class to encapsulate some calls to the Facebook API. I want to write a unit test for this wrapper class, but since it depends on a so-called "access token", which we should get from Facebook dynamically, I'm not sure if it's possible to write one. But apparently the Facebook SDK itself has a PHPUnit test class. After studying the test code for a while, I know that it involves the creation of a dummy cookie-based session key:

        private static $VALID_EXPIRED_SESSION = array(
            'access_token' => '254752073152|2.I_eTFkcTKSzX5no3jI4r1Q__.3600.1273359600-1677846385|uI7GwrmBUed8seZZ05JbdzGFUpk.',
            'expires' => '1273359600',
            'secret' => '0d9F7pxWjM_QakY_51VZqw__',
            'session_key' => '2.I_eTFkcTKSzX5no3jI4r1Q__.3600.1273359600-1677846385',
            'sig' => '9f6ae89510b30dddb3f864f3caf32fb3',
            'uid' => '1677846385'
        );
        .
        .
        .
        $cookieName = 'fbs_' . self::APP_ID;
        $session = self::$VALID_EXPIRED_SESSION;
        $_COOKIE[$cookieName] = '"' . http_build_query($session) . '"';

    What I don't understand is, how do I get the "access_token", "sig", "session_key" etc.? As far as I'm concerned, they should be dynamically exchanged from Facebook and involve user action (logging in).

  • Amazon access key showing in URL for Carrierwave and Fog

    - by kcurtin
    I just switched from storing my images uploaded via CarrierWave locally to using Amazon S3 via the fog gem in my Rails 3.1 app. While images are being added, when I click on an image in my application, the URL is providing my access key and a signature. Here is a sample URL (XXX replaces the string with the info):

        https://s3.amazonaws.com/bucketname/uploads/photo/image/2/IMG_4842.jpg?AWSAccessKeyId=XXX&Signature=XXX%3D&Expires=1332093418

    This is happening in development (localhost:3000) and when I am using Heroku for production. Here is my uploader:

        class ImageUploader < CarrierWave::Uploader::Base
          include CarrierWave::RMagick

          storage :fog

          def store_dir
            "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
          end

          process :convert => :jpg
          process :resize_to_limit => [640, 640]

          version :thumb do
            process :convert => :jpg
            process :resize_to_fill => [280, 205]
          end

          version :avatar do
            process :convert => :jpg
            process :resize_to_fill => [120, 120]
          end
        end

    And my config/initializers/fog.rb:

        CarrierWave.configure do |config|
          config.fog_credentials = {
            :provider => 'AWS',
            :aws_access_key_id => 'XXX',
            :aws_secret_access_key => 'XXX',
          }
          config.fog_directory = 'bucketname'
          config.fog_public = false
        end

    Anyone know how to make sure this information isn't available?
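
    Most likely this is fog_public = false at work rather than a leak: with private objects, CarrierWave/fog has to mint time-limited signed URLs, and the AWSAccessKeyId/Signature/Expires query string is that signature (the access key id is not the secret key, which never leaves the server). If the images aren't actually sensitive, a hedged sketch of the usual change:

        CarrierWave.configure do |config|
          config.fog_credentials = {
            :provider              => 'AWS',
            :aws_access_key_id     => 'XXX',
            :aws_secret_access_key => 'XXX',
          }
          config.fog_directory = 'bucketname'
          # Upload objects public-read and emit plain, unsigned S3 URLs
          # with no AWSAccessKeyId/Signature/Expires query string.
          config.fog_public = true
        end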

  • requesting ajax via HttpWebRequest

    - by Sami Abdelgadir Mohammed
    Hi guys: I'm writing a simple application that will download some piece of data from a website, so I can use it later for any purpose. The following is the request and response copied from Firebug, as the browser did that... When you type

        http://x5.travian.com.sa/ajax.php?f=k7&x=18&y=-186&xx=12&yy=-192

    you will get a PHP file that has some data. But when I make a request with HttpWebRequest I get wrong data (some unknown letters). Can anyone help me with that, and do I have to apply some encodings or what? I will be so appreciative.

    Response:

        Server: nginx
        Date: Tue, 04 Jan 2011 23:03:49 GMT
        Content-Type: application/json; charset=UTF-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.2.8
        Expires: Mon, 26 Jul 1997 05:00:00 GMT
        Last-Modified: Tue, 04 Jan 2011 23:03:49 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Content-Encoding: gzip
        Vary: Accept-Encoding

    Request:

        Host: x5.travian.com.sa
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Cookie: CAD=57878984%231292375897%230%230%23%230; T3E=%3DImYykTN2EzMmhjO5QTM2QDN2oDM1ITOyoDOxIjM4EDN5ITM6gjO4MDOxIWZyQWMipTZu9metl2ctl2c6MDNxADN6MDNxADNjMDNxADNjMDNxADN; orderby_b1=0; orderby_b=0; orderby2=0; orderby=0
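
    The "unknown letters" line up with the Content-Encoding: gzip header in the capture: Firefox advertises gzip support and transparently inflates the body, while a bare HttpWebRequest hands back the compressed bytes. A minimal sketch (the session cookies in the capture may also matter to the game server, but this addresses the garbling):

        using System;
        using System.IO;
        using System.Net;

        class TravianFetch
        {
            static void Main()
            {
                string url = "http://x5.travian.com.sa/ajax.php?f=k7&x=18&y=-186&xx=12&yy=-192";
                var request = (HttpWebRequest)WebRequest.Create(url);

                // The capture shows "Content-Encoding: gzip"; ask the framework
                // to inflate the body instead of returning raw gzip bytes.
                request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream())) // UTF-8 default matches the charset
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }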

  • File Handler returning garbled files.

    - by forcripesake
    For the past 3 months my site has been using a PHP file handler in combination with htaccess. Users accessing the uploads folder of the site would be redirected to the handler as such:

        RewriteRule ^(.+)\.*$ downloader.php?f=%{REQUEST_FILENAME} [L]

    The purpose of the file handler is pseudo coded below, followed by actual code:

        // Check if file exists and user is downloading from uploads directory; true.
        // Check against a file type white list and set the mime type; $ctype = mime type.

        header("Pragma: public"); // required
        header("Expires: 0");
        header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
        header("Cache-Control: private", false); // required for certain browsers
        header("Content-Type: $ctype");
        header("Content-Disposition: attachment; filename=\"".basename($filename)."\";");
        header("Content-Transfer-Encoding: binary");
        header("Content-Length: ".filesize($filename));
        readfile("$filename");

    As of yesterday, the handler started returning garbled files and unreadable images, and had to be bypassed. I'm wondering what settings could have gone awry to cause this.

  • php sessions in database only writing part of information to the table...

    - by Ronedog
    I'm having difficulty figuring out what's going on here, hoping someone can help me out. I have been using PHP and MySQL, storing my session information in the database. The app is only running on localhost, Vista. In the php.ini file I commented out the "session.save_handler = files" line and am using a PHP class to handle the session writes/reads, etc.

    My login process is this:

    1. Submit login credentials via login.php.
    2. login.php calls loginprocess.php.
    3. loginprocess.php verifies the user, and if valid starts a new session and adds data to the session vars, then it redirects to index.php.

    Here's the problem: the loginprocess.php page has a bunch of session vars that get set, like $_SESSION['account_id'] = $account_id; etc., but when I go to index.php and do a var_dump($_SESSION) it just says "array() empty". However, if I do a var_dump($_SESSION) in loginprocess.php, just before the redirection line header("Location: ../index.php");, then it shows all the data in the session variable.

    If I look in the database where the session information is stored, there is data in the session_id field, created_ts field, and expires field, but the session_data field has nothing inside of it, and in the past this is the field where all my session data was stored. How could I be able to var_dump the session in loginprocess.php, but the data not exist in the db table? Is it using some kind of caching? I cleared my cookies, etc... but no change. Why is the session_id being written to the table, but the actual session data is not? Any ideas are appreciated. Thanks.
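
    One pattern that fits these symptoms, offered as an educated guess: with an object-based custom save handler, PHP's shutdown sequence can destroy the handler object before the deferred session write runs, so open/read creates the row (session_id, timestamps) but the write that fills session_data never happens. Flushing the session explicitly before redirecting sidesteps that, and also the race where index.php loads before the write lands:

        // end of loginprocess.php -- a minimal sketch
        $_SESSION['account_id'] = $account_id;
        // ... other session vars ...

        session_write_close();            // run the custom write handler now,
                                          // while the handler object still exists
        header('Location: ../index.php');
        exit;

    Calling register_shutdown_function('session_write_close') in the handler class's constructor is the belt-and-braces version of the same fix.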

  • Django caching seems to be causing problems

    - by Issy
    Hey guys, I have just implemented the Django local-memory cache back end in some of my code, however it seems to be causing a problem. I get the following error when trying to view the site (with debug on):

        Traceback (most recent call last):
          File "/usr/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 279, in run
            self.result = application(self.environ, self.start_response)
          File "/usr/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 651, in __call__
            return self.application(environ, start_response)
          File "/usr/lib/python2.6/dist-packages/django/core/handlers/wsgi.py", line 245, in __call__
            response = middleware_method(request, response)
          File "/usr/lib/python2.6/dist-packages/django/middleware/cache.py", line 91, in process_response
            patch_response_headers(response, timeout)
          File "/usr/lib/python2.6/dist-packages/django/utils/cache.py", line 112, in patch_response_headers
            response['Expires'] = http_date(time.time() + cache_timeout)
        TypeError: unsupported operand type(s) for +: 'float' and 'str'

    I have checked my code; for caching everything seems to be OK. For example, I have the following in my middleware:

        MIDDLEWARE_CLASSES = (
            'django.contrib.sessions.middleware.SessionMiddleware',
            'django.contrib.auth.middleware.AuthenticationMiddleware',
            'django.middleware.cache.UpdateCacheMiddleware',
            'django.middleware.common.CommonMiddleware',
            'django.middleware.cache.FetchFromCacheMiddleware',
            'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware',
        )

    My settings for cache:

        CACHE_BACKEND = 'locmem://'
        CACHE_MIDDLEWARE_SECONDS = '3600'
        CACHE_MIDDLEWARE_KEY_PREFIX = 'za'
        CACHE_MIDDLEWARE_ANONYMOUS_ONLY = True

    And some of my code (template tag):

        def get_featured_images():
            """ provides featured images """
            cache_key = 'featured_images'
            images = cache.get(cache_key)
            if images is None:
                images = FeaturedImage.objects.all().filter(enabled=True)[:5]
                cache.set(cache_key, images)
            return {'images': images}

    Any idea what could be the problem? From the error message it looks like there's an issue in django's cache.py?
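
    The traceback is fairly literal here: time.time() (a float) is being added to cache_timeout, and that timeout comes straight from CACHE_MIDDLEWARE_SECONDS, which is quoted in the settings above. A sketch with the value as a number (assuming nothing else rebinds the timeout):

        CACHE_BACKEND = 'locmem://'
        CACHE_MIDDLEWARE_SECONDS = 3600        # int, not the string '3600'
        CACHE_MIDDLEWARE_KEY_PREFIX = 'za'
        CACHE_MIDDLEWARE_ANONYMOUS_ONLY = True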

  • ASP.NET Webforms site using HTTPCookie with 100 year timeout times out after 20 minutes

    - by Rob
    I have a site that is using forms auth. The client does not want the site session to expire at all for users. In the login page codebehind, the following code is used:

        // user passed validation
        FormsAuthentication.Initialize();

        // grab the user's roles out of the database
        String strRole = AssignRoles(UserName.Text);

        // creates forms auth ticket with expiration date of 100 years from now and makes it persistent
        FormsAuthenticationTicket fat = new FormsAuthenticationTicket(1, UserName.Text, DateTime.Now, DateTime.Now.AddYears(100), true, strRole, FormsAuthentication.FormsCookiePath);

        // create a cookie and throw the ticket in there, set expiration date to 100 years from now
        HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(fat)) { Expires = DateTime.Now.AddYears(100) };

        // add the cookie to the response queue
        Response.Cookies.Add(cookie);

        Response.Redirect(FormsAuthentication.GetRedirectUrl(UserName.Text, false));

    The web.config file auth section looks like this:

        <authentication mode="Forms">
          <forms name="APLOnlineCompliance" loginUrl="~/Login.aspx" defaultUrl="~/Course/CourseViewer.aspx" />
        </authentication>

    When I log into the site I do see the cookie correctly being sent to the browser and passed back up. However, when I walk away for 20 minutes or so, come back and try to do anything on the site, the login window reappears. This solution was working for a while on our servers - now it's back. The problem doesn't occur on my local dev box running Cassini in VS2008. Any ideas on how to fix this?
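
    A hedged guess that fits both the roughly 20-minute window and the Cassini difference: with auto-generated machine keys, an IIS app-pool recycle (the default idle timeout is 20 minutes) mints new keys, so existing tickets no longer decrypt and every user is silently sent back to the login page; Cassini never recycles, so the dev box behaves. Pinning the keys in web.config keeps tickets valid across recycles. The hex values below are placeholders, generate your own:

        <system.web>
          <!-- placeholder keys: generate real random values, do not copy these -->
          <machineKey
            validationKey="0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF"
            decryptionKey="FEDCBA9876543210FEDCBA9876543210FEDCBA9876543210"
            validation="SHA1"
            decryption="AES" />
        </system.web>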

  • Encrypt a file base upon a pregenerated "key" C#

    - by Anubis
    Hello everyone. I'm trying to determine the best course of action to implement a simple "licensing" system with a partner of mine. The concept is:

    1. Generate an encrypted value based upon several internal hardware components.
    2. Have the customer send this value to us, which we will feed into our key generator.
    3. Once we have that, we add any other restrictions on the license (user, expires, etc.).
    4. From there we generate a file which we send to the customer; they can add it to their installation and voila, happy people about.

    I have the first part all done. My next part is trying to figure out which encryption methodology I would need to use. I already know symmetric encryption is pretty much the only route I can take. Most of the information I have found involves .NET already creating a key from its own internal methods.

    That's a bit of background; my question is: "Which encryption method could I use which would allow me to encrypt the restrictions based upon the 'id' I was given from the customer's computer?" I'm writing this in C#, by the way. Any ideas would be greatly appreciated! Take care!
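
    One sketch of that idea, assuming the hardware id arrives as a string: derive an AES key from the id with PBKDF2 (Rfc2898DeriveBytes) and encrypt the restriction blob with it; the key generator and the installed app derive the same key from the same id. The salt and names here are illustrative, and the usual caveat applies that a symmetric key derivable inside the app can be recovered by a determined user, which is why commercial schemes often sign licenses with an asymmetric private key instead:

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Text;

        static class LicenseCrypto
        {
            // Hypothetical fixed salt; any app-specific constant works here.
            private static readonly byte[] Salt = Encoding.UTF8.GetBytes("MyApp.License.v1");

            // Encrypt the license restrictions with a key derived from the
            // customer's hardware id; the installed app derives the same key
            // from the same id to decrypt.
            public static byte[] Encrypt(string hardwareId, string restrictions)
            {
                var kdf = new Rfc2898DeriveBytes(hardwareId, Salt, 10000);
                using (var aes = Aes.Create())
                {
                    aes.Key = kdf.GetBytes(32);               // AES-256 key from the id
                    aes.GenerateIV();

                    using (var ms = new MemoryStream())
                    {
                        ms.Write(aes.IV, 0, aes.IV.Length);   // prepend IV for decryption
                        using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                        using (var sw = new StreamWriter(cs))
                        {
                            sw.Write(restrictions);           // e.g. "user=bob;expires=2012-01-01"
                        }
                        return ms.ToArray();
                    }
                }
            }
        }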

  • Downloading a file from a PHP page in C#

    - by FoxyShadoww
    Okay, we have a PHP script that creates a download link from a file, and we want to download that file via C#. This works fine with progress etc., but when the PHP page gives an error, the program downloads the error page and saves it as the requested file. Here is the code we have atm:

    PHP code:

        <?php
        $path = 'upload/test.rar';
        if (file_exists($path)) {
            $mm_type = "application/octet-stream";
            header("Pragma: public");
            header("Expires: 0");
            header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
            header("Cache-Control: public");
            header("Content-Description: File Transfer");
            header("Content-Type: " . $mm_type);
            header("Content-Length: " . (string)(filesize($path)));
            header('Content-Disposition: attachment; filename="'.basename($path).'"');
            header("Content-Transfer-Encoding: binary\n");
            readfile($path);
            exit();
        } else {
            print 'Sorry, we could not find requested download file.';
        }
        ?>

    C# code:

        private void btnDownload_Click(object sender, EventArgs e)
        {
            string url = "http://***.com/download.php";
            WebClient client = new WebClient();
            client.DownloadFileCompleted += new AsyncCompletedEventHandler(client_DownloadFileCompleted);
            client.DownloadProgressChanged += new DownloadProgressChangedEventHandler(ProgressChanged);
            client.DownloadFileAsync(new Uri(url), @"c:\temp\test.rar");
        }

        private void ProgressChanged(object sender, DownloadProgressChangedEventArgs e)
        {
            progressBar.Value = e.ProgressPercentage;
        }

        void client_DownloadFileCompleted(object sender, AsyncCompletedEventArgs e)
        {
            MessageBox.Show(print);
        }
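
    Since WebClient happily saves any 200 response, the PHP side needs to signal failure at the protocol level; a hedged sketch of the failure path only:

        <?php
        // download.php -- error path returns a real HTTP status
        $path = 'upload/test.rar';
        if (!file_exists($path)) {
            header('HTTP/1.0 404 Not Found');   // not a 200 page of HTML
            exit('Sorry, we could not find the requested download file.');
        }
        // ... existing headers and readfile($path) for the success path ...

    The C# handler then sees the failure in e.Error (a WebException carrying the 404) inside DownloadFileCompleted, instead of silently writing the error HTML into test.rar.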

  • How long do you keep session cookies around for?

    - by user246114
    Hi, I'm implementing a web app which uses sessions. I'm using GWT and App Engine as my client/server, but I don't think they're doing anything really different than I would do with PHP and Apache etc.

    When a user logs into my web app, I am using HttpSession to start a session for them. I get the session id like this:

        // From my login servlet:
        getThreadLocalRequest().getSession(false).getId();

    I return the sessionId back to the client, and they store it in a cookie. The tutorial I'm using sets this cookie to expire in two weeks:

        Cookie.write("sid", theSessionId, 1000 * 60 * 60 * 24 * 14); // two weeks

    Here's where I'm confused: if the cookie expires in two weeks, then my user will go along using the webapp happily, only to one day browse to my site and be shown a login screen. What are my options? Can I just set no expiration time for this cookie? That way the user would have to explicitly log out, otherwise they could just use the app forever without having to log back in. Or is there a better way to do this?

    I can't remember sites like Twitter having ever asked me to log back in again. I seem to be permanently logged in. Do they just set no expiration? The webapp isn't protecting any sort of highly sensitive data, so I don't mind leaving a cookie that doesn't expire, but it seems like there must be a better way?

    This is the tutorial I'm referencing: http://code.google.com/p/google-web-toolkit-incubator/wiki/LoginSecurityFAQ

    Thanks

  • performSelector:withObject:afterDelay: not working from scrollViewDidZoom

    - by oldbeamer
    Hi all, I feel like I should know this, but I've been stumped for hours now and I've run out of ideas. The theory is simple: the user manipulates the zoom and positioning in a scroll view using a pinch. If they hold that pinch for a short period of time, then the scroll view records the zoom level and content offsets.

    So I thought I'd start a performSelector:withObject:afterDelay: on the scrollViewDidZoom delegate method. If it expires, then I record the settings. I delete the scheduled call every time scrollViewDidZoom is called and when the pinch gesture finishes. This is what I have:

        - (void)scrollViewDidZoom:(UIScrollView *)scrollView {
            NSLog(@"resetting timer");
            [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(positionLock) object:nil];
            [self performSelector:@selector(positionLock) withObject:nil afterDelay:0.4];
        }

        - (void)positionLock {
            NSLog(@"position locked");
        }

        - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale {
            NSLog(@"deleting timer");
            [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(positionLock) object:nil];
        }

    This is the output:

        2010-05-19 22:43:01.931 resetting timer
        2010-05-19 22:43:01.964 resetting timer
        2010-05-19 22:43:02.231 resetting timer
        2010-05-19 22:43:02.253 resetting timer
        2010-05-19 22:43:02.269 resetting timer
        2010-05-19 22:43:02.298 resetting timer
        2010-05-19 22:43:05.399 deleting timer

    As you can see, the delay between the last and second-last events should have been more than enough for the delayed selector call to execute. If I replace performSelector:withObject:afterDelay: with a plain old performSelector: I get this:

        2010-05-19 23:08:30.333 resetting timer
        2010-05-19 23:08:30.333 position locked
        2010-05-19 23:08:30.366 resetting timer
        2010-05-19 23:08:30.367 position locked
        2010-05-19 23:08:30.688 deleting timer

    Which isn't what I want, but it serves to show that it's only the delay that's causing it to not function, not something to do with the selector not being declared in the header, being misspelt, or any other reason. Any ideas as to why it's not working?
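
    The timing in the log suggests a run loop mode issue: while a pinch is being tracked, the run loop sits in UITrackingRunLoopMode, and performSelector:withObject:afterDelay: schedules its timer in the default mode only, so it cannot fire until the gesture ends, by which time scrollViewDidEndZooming: has already cancelled it. A minimal sketch using the inModes: variant so the delayed call can fire mid-gesture:

        - (void)scrollViewDidZoom:(UIScrollView *)scrollView {
            [NSObject cancelPreviousPerformRequestsWithTarget:self
                                                     selector:@selector(positionLock)
                                                       object:nil];
            // NSRunLoopCommonModes includes the tracking mode, so the 0.4 s
            // timer ticks even while the pinch is still in progress.
            [self performSelector:@selector(positionLock)
                       withObject:nil
                       afterDelay:0.4
                          inModes:[NSArray arrayWithObject:NSRunLoopCommonModes]];
        }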

  • Constant CMS Session Expiry On 1&1 Cloud Server?

    - by leen3o
    I have a couple of 1&1's 'Dynamic Cloud Servers' running Win2008R2, set up as web servers. I have a number of Umbraco CMS installs on them and they have been running fine for over a year.

    On Saturday, on BOTH servers, a very strange thing happened: as soon as I login to the CMS/Umbraco admin, I am logged out within about 5 seconds. It's as if my session expires the moment I login. I have checked everything I can, as I'm not really a server admin, and everything seems to be exactly as it was last week.

    I'm just looking for ideas of what I should be looking for. Also, the front end of the sites seems fine... it's only the backend when I login. I have gone to 1&1 about this, and as usual they have washed their hands, saying it's nothing to do with them, when I am certain it is. How can this happen on two different servers, and affect the same sites in exactly the same way? Any help, tips, or things to try would be greatly appreciated.

  • PHP header redirection does not reload <iframe> in IE

    - by Marco Demaio
    When displaying data from the DB, I'm usually in this situation:

    1. I'm in page A.php, which shows data from the DB,
    2. the user performs some action (like edit/delete etc.) and page B.php is loaded to perform the action,
    3. once page B has performed the action, it redirects the browser to page A,
    4. page A is auto reloaded during step (3), therefore it shows an updated view of the data.

    In order to make page B redirect to page A I use a simple PHP:

        header("Location: " . "A.php", TRUE, 302);

    This works well in all situations, except when page A.php is displayed in an <iframe>: in such a case it does not reload (step 4 does not get done). This seems to happen only in IE7 (don't know about IE8); it works perfectly in FF/Safari. And only when using an <iframe>; if page A.php is not in an <iframe> it gets refreshed also in IE7.

    In order to solve this I simply added a couple of headers in page A.php to set it to not be cached:

        header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1
        header("Expires: Sat, 26 Jul 1997 05:00:00 GMT");   // Date in the past

    But I was curious whether you might have experienced the same issue in the past, and if you could give me some advice about this?

  • nginx multiple domain virtual host configuration

    - by Poe
    I'm setting up nginx with multiple domain or wildcard support for convenience's sake, rather than setting up 50+ different sites-available/* files. Hopefully this is enough to show you what I'm trying to do. Some are static sites, some are dynamic, usually with WordPress installed.

    If an index.php exists, everything works as expected. If a file is requested that does not exist (missing.html), a 500 error is given due to the rewrite. The logged error is:

        *112 rewrite or internal redirection cycle while processing "/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/missing.html"

    The basic nginx configuration I'm currently using is:

        server {
            listen 80 default;
            server_name _;
            ...
            location / {
                root /var/www/$host;
                if (-f $request_filename) {
                    expires max;
                    break;
                }
                # problem, what if index.php does not exist?
                if (!-e $request_filename) {
                    rewrite ^/(.*)$ /index.php/$1 last;
                }
            }
            ...
        }

    If an index.php does not exist, and the file also does not exist, I would like it to error 404. Currently, nginx does not support multiple-condition ifs or nested if, so I need a workaround.
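
    A hedged sketch of one workaround: the cycle appears because the rewrite target, /var/www/$host/index.php, may itself be missing for some hosts, so the !-e branch keeps re-matching. try_files tests each candidate once and falls through to a named location, where a single -f guard turns the missing-index.php case into a plain 404:

        location / {
            root /var/www/$host;
            try_files $uri $uri/ @dynamic;
        }

        location @dynamic {
            root /var/www/$host;
            # only hand off to the front controller when this host has one
            if (!-f $document_root/index.php) {
                return 404;
            }
            rewrite ^/(.*)$ /index.php/$1 last;
        }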

  • How can I log any login operation in case of "Remember Me" option ?

    - by Space Cracker
    I have an asp.net login web form that has a username textbox, a password textbox, and a "Remember Me" checkbox option. When the user logs in, I run the code below:

        if (provider.ValidateUser(username, password))
        {
            int timeOut = 0x13;
            DateTime expireDate = DateTime.Now.AddMinutes(19.0);
            if (rememberMeCheckBox.Checked)
            {
                timeOut = 0x80520;
                expireDate = DateTime.Now.AddYears(1);
            }
            FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(username, true, timeOut);
            string cookieValue = FormsAuthentication.Encrypt(ticket);
            HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, cookieValue);
            cookie.Expires = expireDate;
            HttpContext.Current.Response.Cookies.Add(cookie);
            AddForLogin(username);
            Response.Redirect("...");
        }

    As in the code, after the user is authenticated I log the login in the db by calling the method AddForLogin(username). But if the user chose "Remember Me" at login and then returns to the site later, this login method isn't executed, as the cookie authenticates him. So I have some questions:

    1. Is this the best way to log login operations, or is there any better way?
    2. In my case, how do I log the login operation when "Remember Me" was chosen by the user?
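
    A hedged sketch of one common answer to question 2: with "Remember Me", the login page is skipped entirely, so the cookie-based sign-in has to be logged where it actually happens. Session_Start in Global.asax fires once per new visit, after forms authentication has already populated the user from the persistent ticket:

        // Global.asax -- logs visits authenticated purely by the cookie
        protected void Session_Start(object sender, EventArgs e)
        {
            var user = HttpContext.Current.User;
            if (user != null && user.Identity.IsAuthenticated)
            {
                AddForLogin(user.Identity.Name);   // same logger the login page calls
            }
        }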

  • autologin component doesn't work on remote server

    - by user606521
    I am using the AutoLogin component from http://milesj.me/code/cakephp/auto-login (v3.5.1). It works on my localhost WAMP server but fails on the remote server. I am using these settings in the beforeFilter() callback:

        $this->AutoLogin->settings = array(
            // Model settings
            'model' => 'User',
            'username' => 'username',
            'password' => 'password',

            // Controller settings
            'plugin' => '',
            'controller' => 'users',

            // Cookie settings
            'cookieName' => 'rememberMe',
            'expires' => '+1 month',

            // Process logic
            'active' => true,
            'redirect' => true,
            'requirePrompt' => true
        );

    On the remote server it simply doesn't autolog users after the browser was closed. I can't figure out what may cause the problem.

    -------------------- edit

    I figured out what is causing the problem, but I don't know how to fix it. First of all, the cookie is set like this:

        $this->Cookie->write('key', array('username' => 'someusername', 'hash' => 'somehash', ...));

    Then it's read like this:

        $cookie = $this->Cookie->read('key');

    On my WAMP server $cookie is

        array('username' => 'someusername', 'hash' => 'somehash', ...)

    and on the remote server the returned $cookie is

        string(159) "{\"username\":\"YWxlay5iYXJzsdsdZXdza2ldssd21haWwuY29t\",\"password\":\"YWxlazc3ODEy\",\"hash\":\"aa15bffff9ca12cdcgfgb351d8bfg2f370bf458\",\"time\":1339923926}"

    but it should be:

        array(
            'username' => "YWxlay5iYXJzsdsdZXdza2ldssd21haWwuY29t",
            'password' => "YWxlazc3ODEy",
            ...
        )

    Why is the returned cookie a string and not an array?

  • why is css not being applied to this jquery anchor button?

    - by Tim
    I must be missing something very basic in the CSS. My jQuery anchor button is functional, but it's rendering as a simple underlined label, not as a rounded-corner UI button. I would be grateful if someone could point out the error in this simple example. Thanks.

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <HTML LANG="en-US">
        <HEAD>
        <TITLE>button test</TITLE>
        <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <meta http-equiv="Expires" content="Sat, 22 May 2010 00:00:11 GMT">
        <link href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/themes/base/jquery-ui.css" rel="stylesheet" type="text/css">
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.1/jquery-ui.min.js"></script>
        <SCRIPT type="text/javascript">
        $(document).ready( function() {
            $('a','.test').click(function(){showIntro();return false;});
        });
        function showIntro() {
            document.location.href="intro.htm";
        }
        </script>
        <body>
        <div class='test'><a href="#">Button</a></div>
        </body>
        </html>
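
    For what it's worth, linking the jQuery UI theme CSS only supplies the styles; the markup never opts in to them. jQuery UI turns an element into a button via the .button() widget, which is what adds the ui-button and corner classes. A minimal sketch of the ready handler (same elements, same showIntro):

        $(document).ready(function () {
            $('.test a')
                .button()                      // apply the jQuery UI button widget
                .click(function () {
                    showIntro();
                    return false;
                });
        });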
