Search Results

Search found 4963 results on 199 pages for '302 redirect'.


  • .htaccess rewrite www to non-www and remove .html

    - by lester8891
    I need rewrite rules to redirect the following: http://www. to http://, and /file.html to /file. I've tried using a combination of these, but each time it results in a redirect loop in one of the situations:

        RewriteBase /
        RewriteRule ^(.*)\.html$ $1 [NC]
        RewriteCond %{HTTP_HOST} ^www.domain.co.uk [NC, L]
        RewriteRule ^(.*)$ http://domain.co.uk/$1 [R=301]

    I figure it's probably something to do with the flags, but I don't know how to fix it. Just to be clear, it needs to handle all of these situations:

        http://www.domain.co.uk to http://domain.co.uk
        http://www.domain.co.uk/file.html to http://domain.co.uk/file
        http://domain.co.uk to http://domain.co.uk
        http://domain.co.uk/file.html to http://domain.co.uk/file

    Thanks!
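    A minimal .htaccess sketch of one way to avoid the loop, assuming the domain from the question and that the .html files exist on disk; this is untested, and the [NC, L] flags above would have to go anyway (L is not a valid RewriteCond flag, and flag lists must not contain spaces):

        RewriteEngine On
        RewriteBase /

        # www to non-www, as an external 301
        RewriteCond %{HTTP_HOST} ^www\.domain\.co\.uk$ [NC]
        RewriteRule ^(.*)$ http://domain.co.uk/$1 [R=301,L]

        # strip .html only when the client itself asked for it,
        # so the internal rewrite below cannot trigger a loop
        RewriteCond %{THE_REQUEST} \s/[^\s]+\.html\s [NC]
        RewriteRule ^(.*)\.html$ /$1 [R=301,L]

        # internally map the clean URL back onto the real file
        RewriteCond %{REQUEST_FILENAME}.html -f
        RewriteRule ^(.*)$ $1.html [L]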

    Read the article

  • Company Administrators: Stay Alert!

    - by Pete
    Some of our customers choose to use the Themes feature to rebrand their Training and Support Center link and redirect it to an internal support site. If your company does this, we strongly advise that, for employees who have the Administrator role, you maintain a separate theme that keeps the Administrator's Training and Support link pointed to the CRM On Demand Training and Support Center rather than redirecting it to an internal support site. Why? The company administrator needs access to the Training and Support Center because it gives them pod-specific application alerts on the Support tab and pod-specific release information on the Release Info tab. If a customer no longer has the Training and Support Center URL because they have already rebranded that link, they can contact Customer Care to request it again.

    Read the article

  • How-to logout from ADF Security

    - by frank.nimphius
    ADF Security configures an authentication servlet, AuthenticationServlet, in the web.xml file that also provides logout functionality. Developers can invoke the logout through a redirect performed from an action method in a managed bean, as shown next:

        // requires an import of java.io.IOException
        public String onLogout() {
          FacesContext fctx = FacesContext.getCurrentInstance();
          ExternalContext ectx = fctx.getExternalContext();
          String url = ectx.getRequestContextPath() +
                       "/adfAuthentication?logout=true&end_url=/faces/Home.jspx";
          try {
            ectx.redirect(url);
          } catch (IOException e) {
            e.printStackTrace();
          }
          fctx.responseComplete();
          return null;
        }

    To use this functionality in your application, change the Home.jspx reference to a public page of yours that the user is redirected to after a successful logout. Note that for a successful logout, authentication should be through form-based authentication. Basic authentication is known as browser sign-on and re-authenticates users after the logout redirect, which is confusing to many developers.

    Read the article

  • Apache web logs "GET http/1.1" 200 error?

    - by songdogtech
    I've got some puzzling (at least to me) entries that show up in my webserver logs. What's puzzling is that they show my error page being referenced, but that error page doesn't show up as being displayed in hits on statcounter.com. Can someone tell me the difference between these two kinds of entries? Or is only one a "real" error? This is in WordPress, and the 404.php file in my theme redirects to a static WordPress page at mydomain.com/error/.

    These first two log entries show /error/, but a hit on my /error/ page doesn't show in my web stats. The page referenced, /2009/08/delete-preference-files/, exists and shows a hit at statcounter:

        92.17.232.242 - - [13/Mar/2010:01:45:43 -0800] "GET /error/ HTTP/1.1" 200 4012 "http://markratledge.com/2009/08/delete-preference-files/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)" 732 4432
        92.17.232.242 - - [13/Mar/2010:01:47:11 -0800] "GET /error/ HTTP/1.1" 200 4012 "http://markratledge.com/wp-content/themes/default/style.css" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)" 861 4432

    These two are error entries that display my /error/ page, because /error/ gets a hit in statcounter. The page /cookies/ doesn't exist:

        72.174.16.202 - - [13/Mar/2010:10:53:37 -0800] "GET /cookies HTTP/1.1" 302 21 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)" 394 642
        72.174.16.202 - - [13/Mar/2010:10:53:38 -0800] "GET /error/ HTTP/1.1" 200 4012 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)" 393 4432
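    A hedged aside on the pattern above: a 404.php that redirects produces exactly this pair of lines, a 302 for the missing URL followed by a 200 for /error/, so the client never receives a 404 status. If a real 404 status were the goal, Apache's ErrorDocument directive with a local path (a sketch, not the poster's actual setup) serves the error page while keeping the original status code:

        # in .htaccess or the vhost; a local path preserves the 404 status,
        # whereas a full http:// URL here would itself cause a 302
        ErrorDocument 404 /error/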

    Read the article

  • File corrupted by some tools (probably virus or antivirus) - does the pattern indicate any known corruptions?

    - by StackTrace
    As part of our software we install PostgreSQL (Windows). At one of our customer sites, a set of files got corrupted. All of the affected files were part of the timezone information (postgres/share/timezone); they are binary files of some sort. After the corruption, they all start with the following pattern (od -tac output):

        $ od -tac GMT
        0000000 can esc etx sub nak dle em | nl em so | o r l _
                030 033 003 032 025 020 031 | \n 031 016 | o r l _
        0000020 \ \ \ \ \ \ \ del 3 fs ] del del del del del
                \ \ \ \ \ \ \ 377 3 034 ] 377 377 377 377 377
        0000040 > ack r v s ack p soh q h r s q w h q
                276 206 362 366 363 206 360 201 361 350 362 363 361 367 350 361
        0000060 t r ack h eot s } v h | etx p eot ack nul }
                364 362 206 350 204 363 375 366 350 374 203 360 204 206 200 375
        0000100 | q t s t 8 E E E E E E E E E E
                374 361 364 363 364 270 305 305 305 305 305 305 305 305 305 305
        0000120 E E E E E E E E E E E E E E E E
                305 305 305 305 305 305 305 305 305 305 305 305 305 305 305 305
        *
        0000240 m ; z dc3 7 sub c can em a u 5 can d 2 B
                355 ; z 023 267 232 343 230 031 a u 5 230 d 262 302
        0000260 X nul y J o S - 9 ] stx soh L can 1 ! j
                330 \0 y 312 o S 255 9 335 202 001 314 030 261 241 j
        0000300 dle g o etb n ff em ] 9 F ' dc4 } , em $
                020 g 357 227 n \f 231 ] 271 F 247 024 375 254 231 244
        0000320 Q si ff L bs 2 # stx i 5 r % | | c del
                Q 017 214 314 210 2 # 002 351 5 362 245 374 374 343 177
        0000340 m C esc H em enq ~ X o V p / l dc3 N sp
                m C 033 H 031 205 376 X o 326 360 257 l 023 N
        0000360 } ) enq ( syn ! 3 s $ E z dc3 A dc3 ff P

    Read the article

  • HTTP responses: curl and wget give different results

    - by Fab
    To check the HTTP response header for a set of URLs, I send the following request headers with curl:

        foreach ( $urls as $url ) {
            // Setup headers - I used the same headers from Firefox version 2.0.0.6
            $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,";
            $header[] = "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            $header[] = "Cache-Control: max-age=0";
            $header[] = "Connection: keep-alive";
            $header[] = "Keep-Alive: 300";
            $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
            $header[] = "Accept-Language: en-us,en;q=0.5";
            $header[] = "Pragma: "; // browsers keep this blank.

            curl_setopt( $ch, CURLOPT_URL, $url );
            curl_setopt( $ch, CURLOPT_USERAGENT, 'Googlebot/2.1 (+http://www.google.com/bot.html)' );
            curl_setopt( $ch, CURLOPT_HTTPHEADER, $header );
            curl_setopt( $ch, CURLOPT_REFERER, 'http://www.google.com' );
            curl_setopt( $ch, CURLOPT_HEADER, true );
            curl_setopt( $ch, CURLOPT_NOBODY, true );
            curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
            curl_setopt( $ch, CURLOPT_FOLLOWLOCATION, true );
            curl_setopt( $ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY );
            curl_setopt( $ch, CURLOPT_TIMEOUT, 10 ); // timeout 10 seconds
        }

    Sometimes I receive 200 OK, which is good; other times 301, 302, or 307, which I consider good as well; but other times I receive odd statuses such as 406, 500, or 504, which should indicate an invalid URL, yet when I open them in the browser they are fine. For example, the script returns:

        http://www.awe.co.uk/ => HTTP/1.1 406 Not Acceptable

    while wget returns:

        wget http://www.awe.co.uk/
        --2011-06-23 15:26:26-- http://www.awe.co.uk/
        Resolving www.awe.co.uk... 77.73.123.140
        Connecting to www.awe.co.uk|77.73.123.140|:80... connected.
        HTTP request sent, awaiting response... 200 OK

    Does anyone know which request header I am missing or sending in excess?

    Read the article

  • Move site from one tld to another

    - by Amol Ghotankar
    If we want to move a site from, say, xyz.com to xyz.org, what do we need to do to make sure SEO keeps working? I am doing something like this:

        Point both xyz.com and xyz.org to the same IP, where the site is running.
        Use canonical URLs referencing xyz.org/* instead of xyz.com/*.
        Add the site to Webmaster Tools and make a change-of-address request.

    The problem is that we are not able to 301 redirect from xyz.com to xyz.org: both domains are on the same IP, and redirecting causes a redirect loop and an error. How do we fix this? Please help.
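    A minimal .htaccess sketch of a loop-free 301, assuming Apache and using the xyz.com/xyz.org placeholders from the question. Keying the redirect on the Host header rather than the IP means requests that already arrive as xyz.org fall through untouched, so no loop occurs even though both names resolve to the same server:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?xyz\.com$ [NC]
        RewriteRule ^(.*)$ http://xyz.org/$1 [R=301,L]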

    Read the article

  • wget hangs at "HTTP request sent, awaiting response" on some sites

    - by gkr
    Using Ubuntu 12.04, wget hangs at "HTTP request sent, awaiting response..." on some sites. Browsers also fail to open the sites that fail in wget, yet on Windows XP everything works. This works:

        gkr@gkr-desktop:~/Documents/curl$ wget google.com
        --2012-06-12 21:29:37-- http://google.com/
        Resolving google.com (google.com)... 74.125.236.174, 74.125.236.160, 74.125.236.161, ...
        Connecting to google.com (google.com)|74.125.236.174|:80... connected.
        HTTP request sent, awaiting response... 301 Moved Permanently
        Location: http://www.google.com/ [following]
        --2012-06-12 21:29:38-- http://www.google.com/
        Resolving www.google.com (www.google.com)... 74.125.236.179, 74.125.236.180, 74.125.236.176, ...
        Connecting to www.google.com (www.google.com)|74.125.236.179|:80... connected.
        HTTP request sent, awaiting response... 302 Found
        Location: http://www.google.co.in/ [following]
        --2012-06-12 21:29:38-- http://www.google.co.in/
        Resolving www.google.co.in (www.google.co.in)... 74.125.236.184, 74.125.236.191, 74.125.236.183, ...
        Connecting to www.google.co.in (www.google.co.in)|74.125.236.184|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: `index.html.3'
        [ ] 13,383 --.-K/s in 0.04s
        2012-06-12 21:29:39 (308 KB/s) - `index.html.3' saved [13383]

    This site just stops/hangs awaiting the response:

        gkr@gkr-desktop:~/Documents/curl$ wget grooveshark.com
        --2012-06-12 21:27:29-- http://grooveshark.com/
        Resolving grooveshark.com (grooveshark.com)... 8.20.213.76
        Connecting to grooveshark.com (grooveshark.com)|8.20.213.76|:80... connected.
        HTTP request sent, awaiting response... ^C

    Thanks

    Read the article

  • DNS issue on Fedora 12? wget wordpress.org fails where wget www.google.com works

    - by Tom Auger
    I'm administering a Fedora 12 box, but am quite new to networking specifics. Recently one of our WordPress apps hosted on our server has stopped being able to perform its auto-update or auto-download of plugins. Investigating further, I have tried the following:

        $ wget wordpress.org
        --2010-12-17 11:26:50-- http://wordpress.org/
        Resolving wordpress.org... failed: Temporary failure in name resolution.
        wget: unable to resolve host address 'wordpress.org'

    Whereas:

        $ wget www.google.com
        --2010-12-17 11:27:26-- http://www.google.com/
        Resolving www.google.com... 74.125.226.82, 74.125.226.84, 74.125.226.80, ...
        Connecting to www.google.com|74.125.226.82|:80... connected.
        HTTP request sent, awaiting response... 302 Found
        Location: http://www.google.ca/ [following]
        --2010-12-17 11:27:26-- http://www.google.ca/
        Resolving www.google.ca... 173.194.32.104
        Connecting to www.google.ca|173.194.32.104|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: 'index.html.4'
        [ <=> ] 9,079 --.-K/s in 0.02s
        2010-12-17 11:27:26 (462 KB/s) - 'index.html.4' saved

    Interestingly:

        $ ping wordpress.org
        PING wordpress.org (72.233.56.138) 56(84) bytes of data.
        64 bytes from wordpress.org (72.233.56.138): icmp_seq=1 ttl=50 time=81.5 ms
        64 bytes from wordpress.org (72.233.56.138): icmp_seq=2 ttl=50 time=67.3 ms
        ^C
        --- wordpress.org ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1783ms
        rtt min/avg/max/mdev = 67.361/74.448/81.536/7.092 ms

    and:

        $ nslookup wordpress.org
        Server: 192.168.2.1
        Address: 192.168.2.1#53
        Non-authoritative answer:
        Name: wordpress.org
        Address: 72.233.56.138
        Name: wordpress.org
        Address: 72.233.56.139

    nscd has been stopped and flushed. iptables appear to be clean. At this point I have exhausted my limited abilities to diagnose the issue. Can anyone suggest a resolution path?

    Read the article

  • Apache 301 redirection from one domain to another

    - by Sebastien Lachance
    I'm trying to set up a redirection in the VirtualHost configuration for my website. So far I am able to redirect non-www traffic to the www address like this:

        <VirtualHost *:80>
        ServerAlias www.gcbeauce.com
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^guidedescommercesdebeauce\.com$ [NC]
        RewriteRule ^(.*)$ http://www.guidedescommercesdebeauce.com$1 [R=301,L]

    But I also want to redirect the old domain to this new one. I have tried adding:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^guidedescommercesdebeauce\.com$ [NC]
        RewriteCond %{HTTP_HOST} ^gcbeauce\.com$ [NC]
        RewriteRule ^(.*)$ http://www.guidedescommercesdebeauce.com$1 [R=301,L]

    But nothing happens. Am I missing something here?
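    A hedged sketch of the likely fix: multiple RewriteCond lines are ANDed by default, and a single Host header can never equal both names at once, so the second block never fires. Joining the conditions with [OR] (host names taken from the question) should work, provided the vhost's ServerAlias entries also cover the old domain so those requests reach this block at all:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www\.)?gcbeauce\.com$ [NC,OR]
        RewriteCond %{HTTP_HOST} ^guidedescommercesdebeauce\.com$ [NC]
        RewriteRule ^(.*)$ http://www.guidedescommercesdebeauce.com$1 [R=301,L]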

    Read the article

  • My .htaccess file redirect problems?

    - by Glenn Curtis
    I am hoping you can help me! Below is the .htaccess file for my Apache server running on top of Ubuntu Server. This is my local server, which I installed so I can develop my site on it instead of on my live site. I have all my files and the database on localhost now, but each time I access my server, vaio-server (it's a Sony laptop), it just takes me to my live site! Everything is in the Apache root, /var/www; it's the only site I will develop on this system, so I don't need to configure it to serve more than this one site. I think that's all; the Apache files, sites-available/default etc., are standard. Please help! Many thanks, Glenn Curtis.

        DirectoryIndex index.php index.html

        # Upload sizes
        php_value upload_max_filesize 25M
        php_value post_max_size 25M

        # Avoid folder listings
        Options -Indexes

        <IfModule mod_rewrite.c>
            Options +FollowSymlinks
            RewriteEngine on
            RewriteBase /

            # Maintenance
            #RewriteCond %{REQUEST_URI} !/maintenance.html$
            #RewriteRule $ /maintenance.html [R=302,L]

            # Redirects to www
            #RewriteCond %{HTTP_HOST} !^vaio-server [NC]
            #RewriteCond %{HTTPS}s ^on(s)|off
            #RewriteRule ^(.*)$ glenns-showcase.net/$1 [R=301,QSA,L]

            # Empty string
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>

    Read the article

  • PHP cannot connect to MySQL

    - by yogal
    Hello, I recently installed Apache 2 + PHP 5.3.1 + MySQL 5.1.44 on my Windows 7 64-bit machine following this guide: http://sleeplessgeek.blogspot.com/2010/01/setting-up-apache-php-mysql-phpmyadmin.html It all went fine; PHP is working great (even with XDebug), but I cannot connect to the MySQL server. A simple script I wrote to test the connection (yes, root has no password):

        $username = "root";
        $password = "";
        $database = "test";
        $hostname = "localhost";
        $conn = mysql_connect($hostname, $username, $password)
            or die("Unable to connect to MySQL Database!!");

    It prints this error after a 60-second timeout:

        Warning: mysql_connect() [function.mysql-connect]: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

    I can connect to MySQL from the command line using: mysql -h localhost -u root. The services are running properly. There also seems to be a problem with phpMyAdmin (using 3.2.5): as soon as I type the user and password, the page loads and turns blank (Content-Length in the headers is 0, but the status code is 302 Found). Looks like something wrong with cookies (my auth method). I hope someone has a clue; it has to be something dumb simple I missed. Thanks in advance.

    Read the article

  • Apache 2: Mod_Rewrite Help - If/else for directory exists

    - by BHare
    This is my current (and sloppy) Apache 2 mod_rewrite setup. Keep in mind that the part with site1, site2, etc. covers about 50 sites.

        RewriteEngine on

        RewriteCond %{HTTP_HOST} ^([^.]+)\.mainsite\.org$
        RewriteCond /home/%1/ -d
        RewriteRule ^(.+) %{HTTP_HOST}$1
        RewriteRule ^([^.]+)\.mainsite\.org/media/(.*) /home/$1/special/media/$2
        RewriteRule ^([^.]+)\.mainsite\.org/(.*) /home/$1/www/$2

        RewriteCond %{HTTP_HOST} ^mainsite\.org$ [NC]
        RewriteRule ^(.*)$ http://www.mainsite.org$1 [R=302]

        RewriteCond %{HTTP_HOST} (site1|site2|site3|site4)\.(com|net|biz|org|us)$ [NC]
        RewriteCond %{REQUEST_URI} !^/media/
        RewriteRule ^/(.*)$ /home/%1/www/$1

        RewriteCond %{HTTP_HOST} (site1|site2|site3|site4)\.(com|biz|net|org|us)$ [NC]
        RewriteRule ^/media/(.*)$ /home/%1/special/media/$1

        RewriteCond %{REQUEST_URI} favicon.ico$
        RewriteRule ^(.*)$ /misc/favicon.ico

    So if someone goes to theirusername.mainsite.org, it checks whether /home/theirusername/ exists, and if it does, uses their www directory (/home/theirusername/www/) as the file location for web files. If they access theirusername.mainsite.org/media/, a special file location is used instead: /home/theirusername/special/media/. What I'd like is that if the username does NOT have /home/username, it automatically defaults to www.mainsite.org. I am having a hard time understanding how to do skips and such. So: if someone went to notrealusername.mainsite.org/forum/, it would redirect to www.mainsite.org/forum/. Extra: I am using repetitive code for other sites; for example, say foobar has a website foobar.com that goes through the same process as mainsite.org. So I figured maybe having something like:

        RewriteCond %{HTTP_HOST} ^([^.]+).(mainsite.org|com|net|biz|org)$

    where I could have one major rule for all existing domains that have a /home/.
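    A hedged sketch of the missing else branch, reusing the question's own -d test: placed before the per-user rules, it sends any subdomain whose home directory does not exist off to www.mainsite.org with the path intact (302 kept to match the existing rules; host and paths are from the question):

        RewriteCond %{HTTP_HOST} !^www\.mainsite\.org$ [NC]
        RewriteCond %{HTTP_HOST} ^([^.]+)\.mainsite\.org$ [NC]
        RewriteCond /home/%1 !-d
        RewriteRule ^(.*)$ http://www.mainsite.org$1 [R=302,L]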

    Read the article

  • AWS elastic load balancer basic issues

    - by Jones
    I have an array of EC2 t1.micro instances behind a load balancer, and each node can manage ~100 concurrent users before it starts to get wonky. I would THINK that if I have 2 such instances, the network could manage 200 concurrent users... apparently not. When I really slam the server (blitz.io) with a full 275 concurrent users, it behaves the same as if there were just one node: it goes from a 400 ms response time to 1.6 seconds (which is expected for a single t1.micro, but not for 6). So the question is: am I simply not doing something right, or is ELB effectively worthless? Anyone have some wisdom on this?

    ab logs, load balancer (3x m1.medium):

        Document Path: /ping/index.html
        Document Length: 185 bytes
        Concurrency Level: 100
        Time taken for tests: 11.668 seconds
        Complete requests: 50000
        Failed requests: 0
        Write errors: 0
        Non-2xx responses: 50001
        Total transferred: 19850397 bytes
        HTML transferred: 9250185 bytes
        Requests per second: 4285.10 [#/sec] (mean)
        Time per request: 23.337 [ms] (mean)
        Time per request: 0.233 [ms] (mean, across all concurrent requests)
        Transfer rate: 1661.35 [Kbytes/sec] received

        Connection Times (ms)
                     min  mean[+/-sd] median   max
        Connect:       1    2    4.3      2    63
        Processing:    2   21   15.1     19   302
        Waiting:       2   21   15.0     19   261
        Total:         3   23   15.7     21   304

    Single instance (1x m1.medium, direct connection):

        Document Path: /ping/index.html
        Document Length: 185 bytes
        Concurrency Level: 100
        Time taken for tests: 9.597 seconds
        Complete requests: 50000
        Failed requests: 0
        Write errors: 0
        Non-2xx responses: 50001
        Total transferred: 19850397 bytes
        HTML transferred: 9250185 bytes
        Requests per second: 5210.19 [#/sec] (mean)
        Time per request: 19.193 [ms] (mean)
        Time per request: 0.192 [ms] (mean, across all concurrent requests)
        Transfer rate: 2020.01 [Kbytes/sec] received

        Connection Times (ms)
                     min  mean[+/-sd] median   max
        Connect:       1    9  128.9      3  3010
        Processing:    1   10    8.7      9   141
        Waiting:       1    9    8.7      8   140
        Total:         2   19  129.0     12  3020

    Read the article

  • How to handle multiple domains correctly? [closed]

    - by Eric Itzhak
    Possible Duplicate: Could I buy a domain name to increase traffic to my site like this?

    I have a website with multiple keyword-based domains (2, actually), and both domains are common Google searches for the topic. What I'm interested in is handling the domains such that Google recognizes the second domain with the same PageRank as the primary domain, so it will also appear on the first page. My question is: how do I do this correctly so that it helps SEO? I want the second domain to be on the first page, because the primary is on the first page for its keyword. Do I simply put a redirect in the index.php file? Do a 301 redirect? Change the .htaccess file? Or create a domain pointer in the control panel?
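    For reference, a minimal .htaccess sketch of the 301 option, assuming Apache; the domain names below are placeholders, since the post doesn't name the actual domains:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?secondary-example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.primary-example.com/$1 [R=301,L]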

    Read the article

  • URL Rewrite http to https EXCEPT files in a specific subfolder

    - by BrettRobi
    I am trying to force all traffic on my web site to use HTTPS, using the URL Rewrite 2.0 module added to IIS 7.5. I got that working, and now I need to exclude a couple of pages from using SSL. So I need a rule that rewrites every URL to HTTPS except those referencing this folder. I've been banging my head against the wall on this and am hoping someone can help. I tried creating a rule to match all URLs except those in a nossl subfolder, as in this example:

        <rule name="HTTP to HTTPS redirect" enabled="true" stopProcessing="true">
            <match url="(/nossl/.*)" negate="true" />
            <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                <add input="{HTTPS}" pattern="off" />
            </conditions>
            <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Found" />
        </rule>

    But this doesn't work. Can anyone help?

    Read the article

  • How to Implement Single Sign-On between Websites

    - by hmloo
    Introduction

    Single sign-on (SSO) is a way to control access to multiple related but independent systems: a user only needs to log in once and gains access to all the other systems. A lot of commercial systems provide a single sign-on solution, and you can also choose open-source solutions such as OpenSSO or CAS. Both of those use centralized authentication and provide a more robust authentication mechanism, but if each system has its own authentication mechanism, how do we provide a seamless transition between them? Here I will show you one way.

    How it Works

    The method we'll use is based on a secret key shared between the sites. The origin site builds a hashed authentication token from some other parameters and redirects the user to the target site.

        Variable     Status    Description
        ssoEncode    required  hash(ssoSharedSecret + "," + ssoTime + "," + ssoUserName)
        ssoTime      required  timestamp in YYYYMMDDHHMMSS format, used to prevent replay attacks
        ssoUserName  required  unique username; required when a user is logged in

    Note: the variables are sent via POST for security reasons.

    Building a Single Sign-On Solution

    The origin site has a function to: 1. create the URL for the request; 2. generate the required authentication parameters; 3. redirect to the target site.

        using System;
        using System.Web.Security;
        using System.Text;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string postbackUrl = "http://www.targetsite.com/sso.aspx";
                string ssoTime = DateTime.Now.ToString("yyyyMMddHHmmss");
                string ssoUserName = User.Identity.Name;
                string ssoSharedSecret = "58ag;ai76"; // get this from config or similar
                string ssoHash = FormsAuthentication.HashPasswordForStoringInConfigFile(
                    string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5");
                string value = string.Format("{0}:{1},{2}", ssoHash, ssoTime, ssoUserName);

                Response.Clear();
                StringBuilder sb = new StringBuilder();
                sb.Append("<html>");
                sb.AppendFormat(@"<body onload='document.forms[""form""].submit()'>");
                sb.AppendFormat("<form name='form' action='{0}' method='post'>", postbackUrl);
                sb.AppendFormat("<input type='hidden' name='t' value='{0}'>", value);
                sb.Append("</form>");
                sb.Append("</body>");
                sb.Append("</html>");
                Response.Write(sb.ToString());
                Response.End();
            }
        }

    The target site has a function to: 1. get the authentication parameters; 2. validate the parameters with the shared secret; 3. if the user is valid, authenticate and redirect to the target page; 4. if the user is invalid, show errors and return.

        using System;
        using System.Web.Security;
        using System.Text;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack)
                {
                    if (User.Identity.IsAuthenticated)
                    {
                        Response.Redirect("~/Default.aspx");
                    }
                }

                if (Request.Params.Get("t") != null)
                {
                    string ticket = Request.Params.Get("t");
                    char[] delimiters = new char[] { ':', ',' };
                    string[] ssoVariable = ticket.Split(delimiters, StringSplitOptions.None);
                    string ssoHash = ssoVariable[0];
                    string ssoTime = ssoVariable[1];
                    string ssoUserName = ssoVariable[2];
                    DateTime appTime = DateTime.MinValue;
                    int offsetTime = 60; // get this from config or similar

                    try
                    {
                        appTime = DateTime.ParseExact(ssoTime, "yyyyMMddHHmmss", null);
                    }
                    catch
                    {
                        // show error
                        return;
                    }

                    // reject stale tokens to prevent replay attacks
                    if (Math.Abs(appTime.Subtract(DateTime.Now).TotalSeconds) > offsetTime)
                    {
                        // show error
                        return;
                    }

                    bool isValid = false;
                    string ssoSharedSecret = "58ag;ai76"; // get this from config or similar
                    string hash = FormsAuthentication.HashPasswordForStoringInConfigFile(
                        string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5");

                    if (string.Compare(ssoHash, hash, true) == 0)
                    {
                        isValid = true;
                    }

                    if (isValid)
                    {
                        // do authenticate
                    }
                    else
                    {
                        // show error
                        return;
                    }
                }
                else
                {
                    // show error
                }
            }
        }

    Summary

    This is a very simple and basic SSO solution. Its main advantage is its simplicity: it only needs a single added page to do SSO authentication and does not require modifying the existing system infrastructure.

    Read the article

  • Do search engines rank internal redirects negatively?

    - by siverd
    A client is in the late stages (code complete) of a website redesign and unfortunately hasn't implemented 301 redirects to point high-traffic pages at the new URLs. As I understand it, our only option at this point is to create redirects within the CMS. Our CMS allows us to do this:

        www.mysite.com/category/current-page.html

    will redirect to

        www.mysite.com/new-category-name/new-page.html

    The site now uses custom logic on our 404 page to check this list of redirects and, if one exists, forwards the user to the new-page.html. I understand that using 301 redirects would be the correct way to maintain our page rank, but I think that would require a code change, which isn't possible.

    Question: how will search engines respond to this? Will they wait until the redirect happens and allow us to keep our page rank (authority, trust, etc.), or will they see the 404 page and down-rank us? Worst case: will they make our new-page.html start from a rank of "0"? Thanks for your help.
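    For comparison, a hedged sketch of what a true 301 for one of these mappings would look like if the web server happened to be Apache with mod_alias available (an assumption; the question doesn't name the server), using the question's example paths:

        # one directive per high-traffic page, served before the CMS's 404 logic runs
        Redirect 301 /category/current-page.html /new-category-name/new-page.html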

    Read the article

  • How to optimize a one language website's SEO for foreign languages?

    - by moomoochoo
    DETAILS: I have a website whose content is in English. It is a niche website with a global market, but I would like users to be able to find the website using their own language. The scenario I envision is that the searcher is looking for the English content but is searching in their own language. An example could be someone looking for "downloadable English crosswords."

    MY IDEAS: Buy ccTLDs and have them permanently redirect to subdirectories on domain.com; the subdirectories would contain HTML sitemaps in the target language (e.g. redirect domain.fr to domain.com/fr). OR would it perhaps be better to maintain domain.fr as an independent site in the target language, with the HTML sitemap linking to pages on domain.com?

    QUESTION: Are the above methods good or bad? What are some other ways I can optimize SEO for foreign languages?
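    A minimal sketch of the first idea's permanent redirect, assuming Apache answers for the ccTLD host (domain.fr and domain.com are the question's own placeholders):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.fr$ [NC]
        RewriteRule ^(.*)$ http://domain.com/fr/$1 [R=301,L]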

    Read the article

  • Server 2003 R2 - IIS 6 - granting access to website via IP with subnet range

    - by John
    We are trying to allow a client to connect to our website. By default we deny all access except from specified IPs; until now that has always meant a single IP address per client. Now, however, we must allow a range of IPs, and rather than enter thousands of records we want to use the "group of computers" option on the Grant Access page. We have it configured with the network ID 72.21.192.0 and a subnet mask of 255.255.224.0, but they are unable to connect. Looking over our IIS logs, they receive a 302, which is the same behavior anyone unauthorized to view the page gets. The incoming IP address is 72.21.217.2, so it should be well within the range of acceptable IP addresses. I'm at a loss, as everything I look up tells me to do what we are already doing. Any insight would be appreciated, especially because I'm a software guy, not hardware. Thanks!

    Read the article

  • .htaccess file to implement multiple redirects

    - by RobMorta
    I have a dynamic site and am creating clean URLs from the .htaccess file:

        RewriteCond %{REQUEST_URI} !(\.png|\.jpg|\.gif|\.jpeg|\.bmp)$
        RewriteRule ^([a-zA-Z0-9_\-\+\ ]+)$ flight.php?flights=$1&slug=$1

    This code worked fine for me, but then I created a new type of page and tried to get clean URLs for it with the same approach:

        RewriteCond %{REQUEST_URI} !(\.png|\.jpg|\.gif|\.jpeg|\.bmp)$
        RewriteRule ^([a-zA-Z0-9_\-\+\ ]+)$ manual-page.php?url=$1&slug=$1

    This isn't working, and if I comment out the previous two lines, it works fine; only one of the two works at a time. For the first, I have a URL domain.com/flight.php?flights=san-fransisco-london-flights and I want it reachable as domain.com/san-fransisco-london-flights; for the second, I have domain.com/manual-page.php?url=my-new-page and I want it reachable as domain.com/my-new-page. Is there any way to get both working together?
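    A hedged sketch of one workaround: two identical catch-all patterns can't both fire, because the first matching rule consumes the request, so route every clean URL to a single dispatcher that decides which page type the slug belongs to. Here dispatcher.php is a hypothetical script, not something from the question; it would look the slug up and include either flight.php or manual-page.php:

        RewriteCond %{REQUEST_URI} !(\.png|\.jpg|\.gif|\.jpeg|\.bmp)$
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^([a-zA-Z0-9_\-\+\ ]+)$ dispatcher.php?slug=$1 [L]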

    Read the article

  • What are the Consequences for using Relative Location Headers?

    - by Alan Storm
    According to the spec, Location headers used in a redirect require a scheme and server name:

        HTTP/1.1 301 Moved Permanently
        ...
        Location: http://example.com/foo/baz/bar

    However, in 2012, most web browsers will recognize a relative path and redirect you to the new location using the original server name:

        HTTP/1.1 301 Moved Permanently
        ...
        Location: /foo/baz/bar

    Are there any negative or surprising consequences of using relative URLs in Location headers? My particular concern is how Google/search engines will interpret this, but if there's anything else I'm not thinking about, I'd love to hear it.

    Read the article

  • ecommerce item deleted by user: 301 redirect to HOME PAGE or 404 Not Found?

    - by Marco Demaio
    I know this question is somewhat similar to this one, where they recommend using a 404, but after reading this other one, where they suggest using a 301 when changing site URLs (in that specific case it was due to a redesign/refactoring), I get a bit confused, and I hope someone could clarify this specific example:

    Let's say I have an ecommerce site, and the end user added some interesting items to it, so the ecommerce webapp created the item pages at the URLs http://...?id=20, http://...?id=30, etc. Now let's say some of these items got many external links from other sites, because people found them very interesting and linked to them. After some years the end user deletes those items, so obviously the pages/URLs http://...?id=20, http://...?id=30, etc. no longer exist, but many pages on the web still link to them.

    What should the ecommerce site do now, just show a 404 page for those items? But, I'm confused, wouldn't this lose all the Google PR passed by the external links to the item pages? So isn't it better to use a 301 redirect to the HOME PAGE, which at least passes the PR to the HOME PAGE? Thanks.

    EDIT: Well, according to the answers, the best thing to do so far is a 404/410. To make this question more complete, I would like to ask about a special case, just to make sure I understood properly. Let's say the user creates those items again (the ones previously deleted at point 4), maybe changing their names and descriptions a bit, but they are basically the same items. The webapp has no way to know these newly added items are the old items, so it obviously creates them as new items with new URLs, http://...?id=100, http://...?id=101. Does it make sense at this point to 301 redirect the old URLs to the new ones?

    MORE EDIT (it would be VERY IMPORTANT TO UNDERSTAND): Well, according to the clever answers received so far, it seems that for the special case explained in my last EDIT I could use a 301, since it's nothing deceptive: the new page is basically a replacement for the old page in terms of content. This is done to keep the PR passed from external links and also for a better user experience. But besides the user experience, which is debatable (*1), in order to preserve PR from external broken links, why not just always use a 301? In my understanding Google dislikes duplicated content, but are we sure that a 301 redirect to the HOME PAGE is seen as duplicated content by Google? Google itself suggests 301 redirecting index.html to the document root, so if they considered 301s duplicated content, wouldn't that be duplicated content too? Why do they suggest it? Let me provoke you: why not just add a 301 to the HOME PAGE for every not-found page?

    (*1) As a user, when I follow a broken URL from some external link to a website's page, I would stick around more on that website if I were redirected to the HOME PAGE rather than shown a 404 page, where I would think the website doesn't even exist anymore and might not even try its HOME PAGE.
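    For reference, a hedged sketch of both outcomes in Apache mod_rewrite terms, assuming the item pages are served by a script (item.php is a hypothetical name; the question truncates the real path) and using the question's example IDs:

        RewriteEngine On

        # deleted forever: answer 410 Gone ([G]) so engines drop the URL
        RewriteCond %{QUERY_STRING} ^id=30$
        RewriteRule ^item\.php$ - [G]

        # recreated under a new id: a one-to-one 301 to the replacement page
        # (the ? in the substitution replaces the old query string)
        RewriteCond %{QUERY_STRING} ^id=20$
        RewriteRule ^item\.php$ /item.php?id=100 [R=301,L]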

    Read the article

  • Terrible ping time with TP-Link wireless router

    - by rabbid
    I am literally a foot away from this useless TL-WR340G/TL-WR340GD router; check out this ping time:

        64 bytes from 192.168.1.8: icmp_seq=291 ttl=64 time=9477.516 ms
        64 bytes from 192.168.1.8: icmp_seq=292 ttl=64 time=8954.423 ms
        64 bytes from 192.168.1.8: icmp_seq=293 ttl=64 time=8262.836 ms
        64 bytes from 192.168.1.8: icmp_seq=294 ttl=64 time=7937.853 ms
        64 bytes from 192.168.1.8: icmp_seq=295 ttl=64 time=7517.768 ms
        64 bytes from 192.168.1.8: icmp_seq=296 ttl=64 time=7106.063 ms
        64 bytes from 192.168.1.8: icmp_seq=297 ttl=64 time=6492.109 ms
        64 bytes from 192.168.1.8: icmp_seq=298 ttl=64 time=5835.305 ms
        64 bytes from 192.168.1.8: icmp_seq=299 ttl=64 time=5314.897 ms
        64 bytes from 192.168.1.8: icmp_seq=300 ttl=64 time=4902.705 ms
        64 bytes from 192.168.1.8: icmp_seq=301 ttl=64 time=4716.959 ms
        64 bytes from 192.168.1.8: icmp_seq=302 ttl=64 time=5224.450 ms
        64 bytes from 192.168.1.8: icmp_seq=303 ttl=64 time=5024.079 ms
        64 bytes from 192.168.1.8: icmp_seq=304 ttl=64 time=5044.100 ms
        64 bytes from 192.168.1.8: icmp_seq=305 ttl=64 time=4477.990 ms
        64 bytes from 192.168.1.8: icmp_seq=306 ttl=64 time=3582.432 ms
        64 bytes from 192.168.1.8: icmp_seq=307 ttl=64 time=2911.896 ms

    At this time mine is the only computer using the router. This happens from time to time: I restart the router, it has a 1-2 ms ping for a while, and then it's back to terrible ping. Is it just a poor-quality router? Suggestions? Thank you

    Read the article
