Search Results

Search found 5595 results on 224 pages for '302 permanent redirect'.


  • Can I redirect to a PHP page from a Perl CGI script?

    - by sea_1987
    I am working with a site that uses an outside source to handle payment transactions; one of the prerequisites is that on success a CGI script is called. What I want to know is whether it is possible to redirect to a PHP page from the CGI script and have the PHP detect that it was loaded via a Perl redirect. I currently have this in my Perl:

        #!/usr/bin/perl
        #
        # fixedredir.cgi
        use strict;
        use warnings;
        my $URL = "http://www.example.com/";
        Location: $URL;
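
    For reference, a minimal sketch of what the redirect side could look like; the landing page name and the "from=perl" query flag are assumptions added for illustration, not part of the original question:

        #!/usr/bin/perl
        # fixedredir.cgi -- sketch only; landing.php and the from=perl flag are assumed names
        use strict;
        use warnings;

        my $URL = "http://www.example.com/landing.php?from=perl";

        # A CGI redirect is just an HTTP header printed before any body output.
        print "Status: 302 Found\n";
        print "Location: $URL\n\n";

    On the PHP side, checking something like $_GET['from'] === 'perl' would detect the hand-off; if the flag must not be forgeable, a session token or a check of the referring URL would be sturdier.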

    Read the article

  • Using htaccess to redirect all files to another domain but exclude the root domain

    - by Shawn
    Recently I decided to restructure and redesign my website. My old site lives at www.example.com and has lots of blog posts under that domain, like:

        www.example.com/post1
        www.example.com/post2
        www.example.com/post3
        ...

    I can redirect all those posts to a subdomain that points at another folder (not a subfolder; if I put the posts in a subfolder, the subfolder name recurses into the redirect). That much works for me with the code below:

        RewriteEngine On
        RewriteRule ^(.*)$ http://pre.example.com/$1 [L,R=301]

    But the one thing I still want is to not redirect the main domain itself, only the posts. I tried:

        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^/blog/   # the new blog
        # I also tried these:
        #RewriteCond %{REQUEST_URI} !^/$
        #RewriteCond %{REQUEST_URI} !^www.example.com$
        RewriteRule ^(.*)$ http://pre.example.com/$1 [L,R=301]

    Is it possible to do that? Thx.
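
    One minimal sketch that is commonly used for this, leaving the bare root alone and redirecting everything else, is shown below; depending on the DirectoryIndex setup you may also need to exclude the index file itself:

        RewriteEngine On
        # Requests for the bare root ("/") are excluded; everything else is sent
        # to the same path on the new subdomain with a permanent redirect.
        RewriteCond %{REQUEST_URI} !^/$
        RewriteRule ^(.*)$ http://pre.example.com/$1 [L,R=301]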

    Read the article

  • Do I get SEO rankings for redirects? [closed]

    - by Gavin Morrice
    Possible Duplicate: Could I buy a domain name to increase traffic to my site like this?

    URLs add SEO weight to any site. If I have a site that (for example) sells chickens and its URL is http://cluckorama.com, and I also own www.chickensforsale.com, will search engines list it for "chickens for sale" if I set a permanent redirect to cluckorama.com? (Provided the content of cluckorama.com is relevant to chickens for sale.)

    Read the article

  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that explain how to set up nginx to redirect http requests to https. Many are outdated, or just flat out wrong.

        server {
          listen *:80;
          server_name <%= @fqdn %>;
          #root /nowhere;
          #rewrite ^ https://$server_name$request_uri? permanent;
          #rewrite ^ https://$server_name$request_uri permanent;
          #return 301 https://$server_name$request_uri;
          #return 301 http://$server_name$request_uri;
          #return 301 http://192.168.33.10$request_uri;
          return 301 http://$host$request_uri;
        }

        server {
          listen *:443 ssl default_server;
          server_name <%= @fqdn %>;
          server_tokens off;
          root <%= @git_home %>/gitlab/public;

          ssl on;
          ssl_certificate <%= @gitlab_ssl_cert %>;
          ssl_certificate_key <%= @gitlab_ssl_key %>;
          ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
          ssl_ciphers AES:HIGH:!ADH:!MDF;
          ssl_prefer_server_ciphers on;

          location / {
            # serve static files from the defined root folder;
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

          # if a file which is not found in the root folder is requested,
          # then proxy the request to the upstream (gitlab puma)
          location @gitlab {
            proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_redirect off;
            ...

    I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10, whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect http requests to https?

        tailf /var/log/nginx/access.log
        192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
        192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"

        tailf /var/log/nginx/gitlab_error.log
        2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10"

    Resources:
        http://wiki.nginx.org/Pitfalls
        How to make nginx redirect
        How to force or redirect to SSL in nginx?
        nginx ssl redirect
        Nginx & Https Redirection
        https://www.tinywp.in/301-redirect-wordpress/
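
    For reference, the canonical form of the http-to-https redirect is a very small sketch like the one below (server name assumed). Note that the active line in the config above returns 301 to http://, not https://, and that stock nginx packages often ship an extra default server on port 80 (the "Welcome to nginx" site) that can catch plain-http requests if it is still enabled:

        # Minimal sketch of the usual pattern; "gitlab.example.com" is an assumed name.
        server {
            listen 80;
            server_name gitlab.example.com;
            # Redirect every plain-http request to the same host and path over https.
            return 301 https://$host$request_uri;
        }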

    Read the article

  • Any way to identify a redirect when using jQuery's $.ajax() or $.getScript() methods?

    - by Bungle
    Within my company's online application, we've set up a JSONP-based API that returns some data that is used by a bookmarklet I'm developing. Here is a quick test page I set up that hits the API URL using jQuery's $.ajax() method: http://troy.onespot.com/static/3915/index.html

    If you look at the requests using Firebug's "Net" tab (or the like), you'll see that what's happening is that the URL is requested successfully, but since our app redirects any unauthorized users to a login page, the login page is also requested by the browser and seemingly interpreted as JavaScript. This inevitably causes an exception since the login page is HTML, not JavaScript.

    Basically, I'm looking for any sort of hook to determine when the request results in a redirect - some way to determine if the URL resolved to a JSONP response (which will execute a method I've predefined in the bookmarklet script) or if it resulted in a redirect. I tried wrapping the $.ajax() method in a try {} catch(e) {} block, but that doesn't trap the exception, I'm assuming because the requests were successful, just not the parsing of the login page as JavaScript. Is there anywhere I could use a try {} catch(e) {} block, or any property of $.ajax() that might allow me to hone in on the exception or otherwise determine that I've been redirected?

    I actually doubt this is possible, since $.getScript() (or the equivalent setup of $.ajax()) just loads a script dynamically, and can't inspect the response headers since it's cross-domain and not truly AJAX: http://api.jquery.com/jQuery.getScript/

    My alternative would be to just fire off the $.ajax() for a period of time until I either get the JSONP callback or don't, and in the latter case, assume the user is not logged in and prompt them to do so. I don't like that method, though, since it would result in a lot of unnecessary requests to the app server, and would also pile up the JavaScript exceptions in the meantime. Thanks for any suggestions!
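
    One hedged sketch of the timeout approach: reasonably recent jQuery (1.5+) will invoke the error callback for a JSONP request when a timeout is set, which is about the only failure signal available cross-domain. The endpoint URL, the timeout value and the helper functions below are assumptions, not part of the original question:

        // Minimal sketch: with cross-domain JSONP the browser exposes no status codes or
        // response headers, so a timeout is the only practical failure signal.
        $.ajax({
            url: "https://app.example.com/api/data",  // assumed endpoint
            dataType: "jsonp",
            timeout: 5000,                            // assumed value
            success: function (data) {
                // The JSONP callback fired: the user is authenticated and data is usable.
                handleData(data);                     // hypothetical helper
            },
            error: function (xhr, status) {
                // "timeout" or "parsererror" here usually means the request was redirected
                // to the HTML login page instead of returning JSONP.
                promptLogin();                        // hypothetical helper
            }
        });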

    Read the article

  • Make PATH variable changes permanent on openSuse

    - by Marlon
    Okay, so I'm trying to do something that should be rather simple, but for some reason I can't quite make it work. All I want to do is add a path to the PATH environment variable in openSuse. So far, I've replaced the following line in /etc/default/su:

        PATH=/usr/local/bin:/bin:/usr/bin

    with this line:

        PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/php/bin:/usr/local/mysql/bin

    Basically, all I want is access to php and mysqld regardless of how I log in, directly from the command prompt, without having to type the /usr/local/php/bin/ prefix every time. Am I even editing the right file? I'm a bit of a Linux newbie and something as trivial as this is eluding me. Server gods out there, drop me a crumb, please? :-)
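
    A minimal sketch of the usual shell-profile approach (openSUSE's /etc/profile sources /etc/profile.local for local, system-wide additions; the two paths are taken from the question). Note that /etc/default/su only affects the environment set up by su, not normal logins:

        # /etc/profile.local -- sourced by /etc/profile for login shells on openSUSE
        export PATH="$PATH:/usr/local/php/bin:/usr/local/mysql/bin"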

    Read the article

  • Can't make SELinux context types permanent with semanage

    - by Safado
    I created a new folder at /modevasive to hold my mod_evasive scripts and its log directory. I'm trying to change the context type to httpd_sys_content_t so Apache can write to the folder. I ran

        semanage fcontext -a -t "httpd_sys_content_t" /modevasive

    to change the context, and then

        restorecon -v /modevasive

    to apply the change, but restorecon didn't do anything. So I used chcon to change it manually, ran restorecon again to see what would happen, and it changed the type back to default_t.

    semanage fcontext -l gives:

        /modevasive/    all files    system_u:object_r:httpd_sys_content_t:s0

    And /etc/selinux/targeted/contexts/files/file_contexts.local contains:

        /modevasive/    system_u:object_r:httpd_sys_content_t:s0

    So why does restorecon keep setting it back to default_t?
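
    For reference, a sketch of the pattern usually used to cover a directory and everything under it; note the regex, which is what makes the rule recursive:

        # Register a file-context rule for the directory and its contents, then relabel.
        semanage fcontext -a -t httpd_sys_content_t "/modevasive(/.*)?"
        restorecon -Rv /modevasive

    If Apache actually needs to write into the folder, httpd_sys_rw_content_t (or httpd_log_t for a log directory) is usually the type to use rather than httpd_sys_content_t.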

    Read the article

  • Mod disk_cache: permanently caching images and disabling recurring header updates

    - by user135532
    I am trying to get mod_disk_cache to permanently cache images retrieved from an image server via ProxyPass on the webserver. The image is retrieved correctly and served from the cache on further requests, but the webserver still calls the image server and updates the cached headers. Because of load concerns, I need to never call the image server again for a specific URL after it has been cached once, or to extend the refresh time as far as possible.

    The webserver is IHS 7.0. The modules are mod_disk_cache.so, mod_cache.so and mod_proxy.so, version 2.2.8.0. The following is from my httpd.conf:

        ProxyPass /webserver/media/images/ http://imageserver.com/ws/media/images/

        # Caching pictures
        <IfModule mod_cache.c>
          <IfModule mod_disk_cache.c>
            CacheDefaultExpire 2628000
            #CacheDisable
            CacheEnable disk /webserver/media/images/
            CacheIgnoreCacheControl On
            CacheIgnoreHeaders Cookie Referer User-Agent X-Forwarded-For X-Forwarded-Host X-Forwarded-Server Accept-Language Accept Host
            CacheIgnoreNoLastMod On
            CacheIgnoreQueryString Off
            #CacheIgnoreURLSessionIdentifiers
            CacheLastModifiedFactor 10000000.1
            #CacheLock on
            #CacheLockMaxAge 5
            #CacheLockPath
            CacheMaxExpire 1576800
            CacheStoreNoStore On
            CacheStorePrivate On
            CacheDirLength 2
            CacheDirLevels 3
            CacheMaxFileSize 1000000
            CacheMinFileSize 1
            CacheRoot c:/cacheroot2
          </IfModule>
        </IfModule>

    Read the article

  • permanent NAS-mount in Ubuntu - wrong fs type, bad option, bad superblock

    - by Emil
    My network drive shows up in the file browser, just like my external USB hard drive. Moving, running and editing files works. Hovering over it shows smb://lacie-2big/nasdisk. BUT when I want to save a file, the drive doesn't come up as an option; all I can see are my other places, including my USB hard drive. I am a complete newbie, but I am GUESSING that it has something to do with the mount not being a "real" mount but just a shortcut to the smb location.

    So I followed the tutorial at https://wiki.ubuntu.com/MountWindowsSharesPermanently about how to "mount a network drive permanently" and edited my fstab to:

        //LaCie-2big/nasdisk /media/nasmount cifs guest,uid=1000,iocharset=utf8,codepage=unicode,unicode 0 0

    Running sudo mount -a gave me the following error:

        mount: wrong fs type, bad option, bad superblock on //LaCie-2big/nasdisk,
        missing codepage or helper program, or other error
        (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program)
        In some cases useful info is found in syslog - try dmesg | tail or so

    Now that's a very helpful error message. BUT, before I go any further, I'd be really thankful if one of you could tell me whether I'm even in the right ballpark, or whether my actual need, being able to download files (i.e. torrents) directly to the drive, is possible as it is already.

    Question: how do I fix "wrong fs type, bad option, bad superblock on //LaCie-2big/nasdisk, missing codepage or helper program" when running mount -a?
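
    For context, a minimal sketch of a cifs fstab entry that usually works on Ubuntu; it assumes the mount.cifs helper from the cifs-utils package is installed, and it drops the old smbfs-era codepage/unicode options that cifs does not understand:

        # Sketch only: requires the cifs-utils package, which provides /sbin/mount.cifs
        //LaCie-2big/nasdisk  /media/nasmount  cifs  guest,uid=1000,iocharset=utf8  0  0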

    Read the article

  • bash script with permanent ssh connection

    - by samuelf
    I use a bash script which runs

        /usr/bin/ssh -f -N -T -L8888:127.0.0.1:3306 [email protected]

    However, when I run the bash script it waits: I see the connection coming up, but the script doesn't exit. It's as if it's waiting for the SSH process to finish, because when I manually kill the SSH process the bash script finishes as well. Any ideas how to resolve this?

    UPDATE: I have cronned this script, and the cron process is the one that becomes a zombie; the actual script runs just fine, sorry about that. With ps -auxf I get:

        root      597  0.0  0.7  2372  912 ?   Ss   Jul12  0:00 cron
        root     2595  0.0  0.8  2552 1064 ?   S    02:09  0:00  \_ CRON
        1001     2597  0.0  0.0     0    0 ?   Zs   02:09  0:00      \_ [sh] <defunct>
        1001     2603  0.0  0.0     0    0 ?   Z    02:09  0:00          \_ [cron] <defunct>

    When I kill the ssh the defunct entries disappear. Why would they become defunct?
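
    One commonly suggested workaround, sketched below with a placeholder host, is to detach ssh's standard streams so that nothing downstream (the shell or cron) keeps waiting on the pipes the backgrounded ssh inherits:

        # Sketch only: user@example.com is a placeholder for the redacted address above.
        /usr/bin/ssh -f -N -T -L8888:127.0.0.1:3306 user@example.com </dev/null >/dev/null 2>&1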

    Read the article

  • make "sort by" permanent for specific folder

    - by Black Cobra
    I have a folder where I store all my web icons in .png and .ico format. I set the view to large icons and the sort order to date modified -> descending. When I save new icons to this folder from the web, I drag and drop them from the browser (Chrome) into the folder. The problem is that afterwards the icons in the folder are no longer sorted by date modified descending. So how can I set up a file, program or something that will keep the sort order at date modified -> descending for that folder only, each time I access (open) the folder? I am using Windows 7 x64.

    Read the article

  • Setting XFCE terminal PS1 value and making it permanent

    - by Matt
    I'm trying to add the value PS1='\u@\h: \w\$ ' to my terminal in XFCE. I added the line to (what I think is) the correct area in /etc/profile. The relevant segment is:

        # Set a default shell prompt:
        #PS1='`hostname`:`pwd`# '
        PS1='\u@\h: \w\$ '
        if [ "$SHELL" = "/bin/pdksh" ]; then
        # PS1='! $ '
          PS1='\u@\h: \w\$ '
        elif [ "$SHELL" = "/bin/ksh" ]; then
        # PS1='! ${PWD/#$HOME/~}$ '
          PS1='\u@\h: \w\$ '
        elif [ "$SHELL" = "/bin/zsh" ]; then
        # PS1='%n@%m:%~%# '
          PS1='\u@\h: \w\$ '
        elif [ "$SHELL" = "/bin/ash" ]; then
        # PS1='$ '
          PS1='\u@\h: \w\$ '
        else
          PS1='\u@\h: \w\$ '
        fi

    Most of that was already there; I just commented out the existing value and added the one I want. By manually opening the terminal and doing . profile, I can load these values, but they don't stick: I close the terminal and reopen, and I'm back to sh-4.1$. Maybe I'm doing this in the wrong place, but how can I make that value stick? All the info I've found on Google is Fedora/Ubuntu-specific; I use Slackware. Any help on this matter would be greatly appreciated.
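
    A minimal sketch of the usual per-user approach, with a caveat: /etc/profile is only read by login shells, and the default sh-4.1$ prompt suggests the terminal may be starting the shell as sh rather than as a login bash, so exactly which file applies depends on how the terminal invokes the shell:

        # ~/.bashrc (read by interactive non-login bash); exporting the value also lets
        # any child shell started from this one inherit the prompt.
        export PS1='\u@\h: \w\$ '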

    Read the article

  • Permanent solution to Win XP SP3 window animation removal

    - by epale85
    Hello everyone. How can I permanently get rid of the window animation (seen when you minimise or maximise a window) in Windows XP Service Pack 3? I have tried the following three solutions:

    1. I went to the Control Panel, adjusted the visual effects, and unchecked the "Animate windows when maximising and minimising" option.
    2. I tried using the Windows PowerToys TweakUI to disable the animation.
    3. I even tried this: you can shut off the animation displayed when you minimize and maximize windows by opening RegEdit, going to HKEY_CURRENT_USER\Control Panel\Desktop\WindowMetrics, creating a new string value "MinAnimate", and setting the value data to 0 for off or 1 for on.

    But still no luck. The big problem is that the window animation disappears for a while but returns again some time later. When I navigate back to the "adjust visual effects" window, the checkbox for "Animate windows when maximising and minimising" is checked again. Thank you very much.

    Read the article

  • Set Permanent System variable through bat file

    - by shyameniw
    I want to change system variables in XP by running a bat file, but when I run it I get the error "Too many command-line parameters" for the following code:

        set KEY="HKLM\SYSTEM\CurrentControlSet\Control\Sessions Manager\Environment"
        set PATHxx=%Path%
        reg add %KEY% /v Pathx /t REG_EXPAND_SZ 5 /d %PATHxx%

    How can I fix this?
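
    A sketch of a corrected variant is below; the stray 5 and the unquoted expansion of %Path% (which contains spaces) are what trigger "Too many command-line parameters", and the registry key is "Session Manager", not "Sessions Manager". The ";C:\extra\bin" suffix is an assumed example value:

        @echo off
        REM Sketch only: run from an administrator account, since it writes to HKLM.
        set KEY=HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment
        reg add "%KEY%" /v Pathx /t REG_EXPAND_SZ /d "%Path%;C:\extra\bin" /f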

    Read the article

  • Really remove non-permanent certificate exception in firefox

    - by user1719315
    I visited japan.indymedia.org and Firefox gave me the "Invalid certificate" screen. I added an exception, but did not click "Store this exception permanently". But now Firefox still happily visits the same site without giving any warnings, even after a restart of the browser. I tried going to Options -> Advanced -> Encryption -> View Certificates -> Servers to remove the certificate, but I did not find it there. How do I remove this exception and make Firefox give me the warning again when visiting the site?

    Read the article

  • MVC OnActionExecuting to Redirect

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/12/mvc-onactionexecuting-to-redirect.aspx

    I recently had the following requirements in an MVC application:

        Given a new user that still has the default password
        When they first log in
        Then the user must change their password and optionally provide contact information

    I found that I can override the OnActionExecuting method in a BaseController class:

        public class BaseController : Controller
        {
            [Inject]
            public ISessionManager SessionManager { get; set; }

            protected override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                // call the base method first
                base.OnActionExecuting(filterContext);

                // if the user hasn't changed their password yet, force them to the welcome page
                if (!filterContext.RouteData.Values.ContainsValue("WelcomeNewUser"))
                {
                    var currentUser = this.SessionManager.GetCurrentUser();
                    if (currentUser.FusionUser.IsPasswordChangeRequired)
                    {
                        filterContext.Result = new RedirectResult("/welcome");
                    }
                }
            }
        }

    Better yet, you can use an ActionFilterAttribute (and here) and apply the attribute to the base or individual controllers:

        /// <summary>
        /// Redirect the user to the welcome page if FusionUser.IsPasswordChangeRequired is true.
        /// </summary>
        public class WelcomePageRedirectActionFilterAttribute : ActionFilterAttribute
        {
            [Inject]
            public ISessionManager SessionManager { get; set; }

            public override void OnActionExecuting(ActionExecutingContext actionContext)
            {
                base.OnActionExecuting(actionContext);

                // skip the redirect if the user is already on the welcome page
                if (actionContext.RouteData.Values.ContainsValue("WelcomeNewUser"))
                {
                    return;
                }

                var currentUser = this.SessionManager.GetCurrentUser();
                if (currentUser.FusionUser.IsPasswordChangeRequired)
                {
                    actionContext.Result = new RedirectResult("/welcome");
                }
            }
        }

        [WelcomePageRedirectActionFilterAttribute]
        public class BaseController : Controller
        {
            ...
        }

    The requirement is now met.

    Read the article

  • Duplicate content issue after URL-change with 301-redirects

    - by David
    We have the following problem: we changed all URLs on our site from oldURL.html to newURL.html and set up 301 redirects (ca. 600 URLs). Google re-crawled our site and indexed all the new URLs (newURL.html), but didn't crawl the old URLs (oldURL.html) again, as there were no internal links pointing at them anymore after the URL change. This resulted in massive ranking drops, because (i) Google thought oldURL.html had exactly the same content as newURL.html, causing duplicate-content issues, and (ii) Google did not transfer the juice from oldURL to newURL, because the 301 redirect was never noticed.

    Now we have reset all internal links to the old URLs again, which then redirect to the new URLs, in the hope that Google will re-crawl the pages once there are internal links pointing at them. This is partially happening, but at a really low speed, so it would take several months for all redirects to be noticed. I guess that's because Google thinks: "Aah, I already know oldURL.html, so no need to re-crawl it."

    Possible solutions we thought of are:

    1. Submitting as many of the old URLs as possible via Webmaster Tools, to manually trigger a crawl (already doing that).
    2. Submitting a sitemap with all old URLs, but we're not sure that's a good idea, because Google does not seem to like 301 redirects in a sitemap.

    Neither solution is perfect, and we cannot wait for three months just to regain our old rankings. What are your ideas? Best, David

    Read the article

  • Major increase in Google "not able to follow" errors since introducing 301s to site

    - by jakob
    Recently we implemented Varnish in front of our web nodes so that the backend would get some rest from time to time. Since Varnish is case-sensitive and our app was not, we implemented a 301 in Varnish to redirect to lower case. Example: if you search for "PlumBer StockHOLM" you get a 301 redirect to "plumber stockholm", and then "plumber stockholm" is cached. This worked like a charm, but when checking Google Webmaster Tools we suddenly got a crazy amount of "Status - Not able to follow" errors (shown in a screenshot not reproduced here). This of course stirred up some panic and I started to read up on the documentation once again; clicking one of the links took me to the help section.

    Well, this is strange, but as the day progressed more and more errors were thrown by Google. We decided to make Varnish return 200 instead of the 301. Now, when testing the links that appear in the "Not able to follow" section, I get a 200 back. I have tested with Chrome, curl and the Lynx reader and everything looks OK, but the number of errors is still increasing. What is a little comforting is that the links that appear in the "Not able to follow" section are dated before the 200 change in Varnish.

    Why do I get these errors, and why do they keep increasing? Did Google release something new on October 31? Or maybe I don't understand the docs correctly?
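
    For readers unfamiliar with the lower-casing redirect described above, a hedged sketch of how it is typically done in Varnish 3 VCL (this is an illustration of the technique, not the poster's actual configuration, and it requires "import std;" at the top of the VCL file):

        # Issue a 301 to the lower-cased URL only when the request contains upper case.
        sub vcl_recv {
            if (req.url ~ "[A-Z]") {
                set req.http.X-Redir-Url = std.tolower(req.url);
                error 750 "Moved Permanently";
            }
        }

        sub vcl_error {
            if (obj.status == 750) {
                set obj.http.Location = req.http.X-Redir-Url;
                set obj.status = 301;
                return (deliver);
            }
        }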

    Read the article

  • redirect an old URL using web.config

    - by Dog
    I'm still very new to URL rewrites and redirects, and I'm having problems with something I thought was quite simple. I've just rebuilt a website and want to redirect the old URLs to the new ones. For example:

        http://www.mydomain.com/about.asp?lang=1

    should now be

        http://www.mydomain.com/content.asp?id=100230&title=about&langid=1

    Unfortunately, everything I've tried gives me errors or simply does nothing. Here is one rule I tried:

        <rule name="redirectoldabout" enabled="true" stopProcessing="true">
          <match url="( .*)" negate="true" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^mydomain\.com/about\.asp\?lang=1$" />
          </conditions>
          <action type="Redirect" url="http://www.mydomain.com/content.asp?id=100230&title=about&langid=1" redirectType="Permanent" />

    But I get an error back:

        HTTP Error 500.19 - Internal Server Error
        The requested page cannot be accessed because the related configuration data for the page is invalid.

    Any suggestions as to what I'm doing wrong? Thanks, dog
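
    A sketch of how such a rule is usually written for the IIS URL Rewrite module is below; it is not guaranteed to be the final fix, but note that raw & characters in web.config make the file invalid XML (a classic cause of 500.19), the url pattern matches only the path, and the query string belongs in a separate condition:

        <!-- Sketch only: escape ampersands as &amp;, match the path, test the query string. -->
        <rule name="redirectoldabout" enabled="true" stopProcessing="true">
          <match url="^about\.asp$" />
          <conditions>
            <add input="{QUERY_STRING}" pattern="^lang=1$" />
          </conditions>
          <action type="Redirect"
                  url="http://www.mydomain.com/content.asp?id=100230&amp;title=about&amp;langid=1"
                  appendQueryString="false"
                  redirectType="Permanent" />
        </rule>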

    Read the article

  • Will search engines discover that our old pages have been 301 redirected if there are no more links to them in the old site?

    - by Obay
    We've moved our website to a new domain. Thousands of our pages come from one PHP file in the old site (e.g. oldsite.com/news.php?id=<id>). So we added some code to the news.php file to do a 301 redirect to the corresponding news article on the new website (newsite.com/news/<id>). We have not yet set up a 301 redirect for the root of the old site (so we can display a notice to our users that we've moved), but all links inside it are already 301-redirected.

    My concern is that when Google crawls our old website, it will no longer be able to find the old news articles and discover that they have been 301-redirected -- is this correct? If so, does that mean our PageRank won't be carried over to the new site?

    I've also read that we would need to create a sitemap for the new site. Is it possible to indicate in the sitemap the old and new locations of specific pages? Because if not, how will Google know? (I'm not sure a change of address in Webmaster Tools would be specific enough.)
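
    For reference, the kind of per-article 301 described above usually boils down to a few lines of PHP like this sketch (the new URL scheme is taken from the question; the input handling is an assumption):

        <?php
        // news.php on the old site: send a permanent redirect to the matching new URL.
        $id = (int) $_GET['id'];
        header('HTTP/1.1 301 Moved Permanently');
        header('Location: http://newsite.com/news/' . $id);
        exit;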

    Read the article

  • Access Control Service: Programmatically Accessing Identity Provider Information and Redirect URLs

    - by Your DisplayName here!
    In my last post I showed you that different redirect URLs trigger different response behaviors in ACS. Where did I actually get these URLs from? The answer is simple – I asked ACS ;) ACS publishes a JSON-encoded feed that contains information about all registered identity providers, their display names, logos and URLs. With that information you can easily write a discovery client which, at the very heart, does this:

        public void GetAsync(string protocol)
        {
            var url = string.Format(
                "https://{0}.{1}/v2/metadata/IdentityProviders.js?protocol={2}&realm={3}&version=1.0",
                AcsNamespace,
                "accesscontrol.windows.net",
                protocol,
                Realm);

            _client.DownloadStringAsync(new Uri(url));
        }

    The protocol can be one of these two values: wsfederation or javascriptnotify. Based on that value, the returned JSON will contain the URLs for either the redirect or notify method. Now, with the help of some JSON serializer, you can turn that information into CLR objects and display them in some sort of selection dialog. The next post will have a demo and source code.

    Read the article

  • Pipe an infinite stream to internal loop?

    - by Sh3ljohn
    I've seen a lot of material about redirecting stdout to a TCP socket, but no real example of how to do it in practice, specifically when the output stream generated by the first "command" never ends. To make it concrete, take programs like servers that typically write their log endlessly to stdout (well, as long as they run). If you redirect the output to a log file on disk, that file is always open (therefore not readable by others?) and grows indefinitely, which will eventually cause problems.

    This might be a noob question, but I don't know how this works or how to do it:

    1. How do I redirect the output of a command to the internal loop?
    2. I want to make sure that data is sent EVERY time something is written to stdout, and that the pipe won't wait for the command to end (which ideally never happens). Is that right?
    3. If 2 is true, is there a buffering mechanism that sends chunks of data only once they reach a certain size?

    Could you give me concrete command-line examples? Thanks in advance.
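
    A hedged sketch of the usual shell plumbing; the host, port and "some_server" command are placeholders, not from the original question:

        # Stream a never-ending log to a TCP listener as it is produced.
        some_server 2>&1 | nc 127.0.0.1 9000

        # Pure-bash alternative using the /dev/tcp pseudo-device:
        some_server > /dev/tcp/127.0.0.1/9000 2>&1

    For the buffering question, prefixing the producer with stdbuf (e.g. stdbuf -oL for line buffering, or stdbuf -o64K for fixed-size chunks) is the usual lever, since many programs switch to block buffering when stdout is not a terminal.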

    Read the article

  • WordPress mod_rewrite redirect specific folders

    - by Ps Cjef
    System: Debian Etch, Apache 2.2. I have a WordPress instance with multiple blogs. I would like to redirect some of the folders based on the year and month, while letting other folders go to their actual locations. Example: I have archives for a few years, like 2010, 2011 and 2012:

        http://mydomain.com/wordpress/myblog/2010/02
        http://mydomain.com/wordpress/myblog/2011/01
        http://mydomain.com/wordpress/myblog/2012/01

    I would like to redirect all 2010 and 2011 posts to another blog with the same folder structure:

        http://mydomain.com/wordpress/myotherblog/2010/02
        http://mydomain.com/wordpress/myotherblog/2011/01

    and so on, and have 2012 and beyond go to the actual site (http://mydomain.com/wordpress/myblog/2012/01).

    I tried mod_rewrite with the following, one rule at a time, to test redirection for just one year (planning to expand later to other years), and none of them worked:

    * RewriteEngine is already on, since there are some default WordPress rewrites.
    * RewriteBase is set to http://mydomain.com/wordpress/.
    * I put my rule before all the other default WordPress rules are processed.

    Didn't work, solution #1:

        RedirectMatch 301 /myblog/2010/(.*) /myotherblog/2010/$1

    Didn't work, solution #2:

        RewriteRule /myblog/2010/(.*) http://mydomain.com/myotherblog/2010/$1 [R=301]

    Didn't work, solution #3:

        RedirectPermanent /myblog/2010/(.*) http://mydomain.com/myotherblog/2010/$1

    I've also tried the above rules with and without a fully qualified URL for the new location. The rewrite log, with the log level set to 9, did not provide any useful information. It shows the pattern from the rule being checked against the URL, but what finally happens is a passthrough to http://mydomain.com/myblog/ for all URLs, or a 500 Internal Server Error. Any ideas on where I could be going wrong, or any alternative solutions?
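
    A hedged sketch of a mod_rewrite variant, assuming the rules live in the .htaccess of the /wordpress/ directory above the default WordPress block. In per-directory context the matched path has no leading slash or directory prefix, which is the usual reason rules written as /myblog/... never fire:

        RewriteEngine On
        # Send 2010 and 2011 archives to the other blog, same year/month structure.
        RewriteRule ^myblog/(2010|2011)/(.*)$ http://mydomain.com/wordpress/myotherblog/$1/$2 [R=301,L]

    Note also that RewriteBase expects a URL path such as /wordpress/, not a full URL; an invalid RewriteBase argument in .htaccess can itself produce the 500 Internal Server Error described above.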

    Read the article
