Search Results

Search found 19155 results on 767 pages for 'url redirection'.


  • Case-specific mod_rewrite on a WordPress subdomain multisite

    - by Steve
    I have split a WordPress blog into multiple category-specific blogs using subdomains, as the topics in the original blog were too broad to be lumped together effectively. Posts were exported from the parent www blog and imported into the subject-specific subdomain blogs. I believe .htaccess provides mod_rewrite for all subdomains (including the original www) in a single .htaccess file. I use .htaccess to perform a 301 redirect on post categories to the relevant post on the subdomain's blog, e.g.:

        RedirectMatch 301 ^/auto/(.*)$ http://auto.example.com/$1

    The problem I have is that the category has been retained in the permalink structure on the subdomain blog, so that www.example.com/auto/mercedes is now auto.example.com/auto/mercedes. The first URL is redirected to the second, but unfortunately the second URL is then redirected to auto.example.com/mercedes by the same rewrite rule, which is not found, as the permalink on the subdomain's blog retains the parent category of auto.

    The solution would be to adjust the permalink structure in the subdomain's WP settings, so that the top-level category does not duplicate the subdomain. My question would then be: how do I strip a section of the original (www) blog's post URL from the subdomain's URL when redirecting? For example, how do I redirect www.example.com/auto/mercedes to auto.example.com/mercedes? I'm assuming this would be a regular expression trick, which I am not great at.

    Update: I might have to use:

        RewriteCond %{HTTP_HOST} !auto.example.com$

    in the default WordPress if-loop in .htaccess, and separate my custom subdomain redirections into a second if-loop section.
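    A minimal sketch along the lines of that update (domain names are the example ones from above): anchoring the redirect to the parent host with a RewriteCond means the rule never fires on the subdomain, so auto.example.com/auto/mercedes is left alone while www.example.com/auto/mercedes still redirects with the category segment stripped:

        # Only redirect when the request arrives on the parent (www) host,
        # dropping the duplicated /auto/ category segment in the process.
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
        RewriteRule ^auto/(.*)$ http://auto.example.com/$1 [R=301,L]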

    Read the article

  • I got these strange messages on my website's feedback form?

    - by Ali
    Hi guys - all of a sudden I got a number of strange messages from my site's feedback form. It's where users would normally come and enter feedback, which I then review on an admin panel. However, these messages make little to no sense. For example, here are two 'messages':

        2GyOim <a href=\"http://vdjzpnoyzfji.com/\">vdjzpnoyzfji</a>, [url=http://gixlpbtswcdh.com/]gixlpbtswcdh[/url], [link=http://zudauexgjgot.com/]zudauexgjgot[/link], http://vqhafprwogyf.com/

        jF2wdU <a href=\"http://aprjkscbhnxf.com/\">aprjkscbhnxf</a>, [url=http://dhfeoqufoqvu.com/]dhfeoqufoqvu[/url], [link=http://whmzpbqrsume.com/]whmzpbqrsume[/link], http://xxfntqzhhbza.com/

    I got over a dozen of these, and they are all from very different IPs. Is someone playing around, and is it a cause for me to get vigilant?

    Read the article

  • Issue with www to non www redirect

    - by bob
    Hello, I am on Slicehost and I followed the articles that they gave for DNS redirection, and the www to non-www URL redirection does work. However, what if I want www.domain.com to be the default domain? Would I put www.domain.com. as my DNS record name, or would I keep domain.com. as my DNS record and then do something else?

    Basically, what happens now is that if someone goes to the URL www.domain.com/directory/something.html they will be redirected to domain.com and not to domain.com/directory/something.html. I would like the second thing to happen, not just go to domain.com and call it a day. I am running nginx and am confounded on how to solve this issue. I'm not sure whether it's an nginx issue or a DNS issue. Any help would be greatly appreciated!
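    This is largely a web-server concern rather than a DNS one: DNS can only point both names at the same box, while the redirect itself, and whether the path survives it, is decided by nginx. A minimal sketch, using the poster's placeholder domain, that redirects the bare domain to www while preserving the path and query string:

        # Catch requests for the bare domain and redirect them, keeping the URI.
        server {
            listen 80;
            server_name domain.com;
            return 301 http://www.domain.com$request_uri;
        }

        # The real site lives on the www host.
        server {
            listen 80;
            server_name www.domain.com;
            # ... normal site configuration ...
        }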

    Read the article

  • mod_cache not working

    - by Pistos
    I have a PHP site with many dynamically generated pages. I'm trying to turn to mod_cache to help boost performance, because in most cases content does not change in a given day. I have configured mod_cache as best I could, following examples around the web, including the mod_cache page on apache.org. When I set LogLevel debug, I see a bit of information about the caching that is [not] happening. There are plenty of pairs of lines like this:

        [Fri Jun 01 17:28:18 2012] [debug] mod_cache.c(141): Adding CACHE_SAVE filter for /foo/bar
        [Fri Jun 01 17:28:18 2012] [debug] mod_cache.c(148): Adding CACHE_REMOVE_URL filter for /foo/bar

    Which is fine, because I've set CacheEnable disk /foo to indicate that I want everything under /foo cached. I'm new to mod_cache, but my understanding of these lines is that they just mean mod_cache has acknowledged that the URL is supposed to be cached; there should be further lines indicating that it is saving the data to the cache, and then later retrieving it on subsequent hits to the same URL. I can hit the same URL till I'm blue in the face, whether with F5 refreshing or not, with different browsers, or with different computers. It's always that pair of lines that shows in the logs, and nothing else.

    When I set CacheEnable disk /, I see more activity. But I don't want to cache the entire site, and there are many, many different subpaths to the site, so I don't want to have to modify code to set no-cache headers in all the necessary places. I'll mention that mod_rewrite is in use here, rewriting /foo/bar to something like index.php?baz=/foo/bar, but my understanding is that mod_cache uses the pre-rewrite URL, not the post-rewrite URL. As far as I can tell, the response headers are not getting in the way of caching. Here's an example from one hit:

        Cache-Control: must-revalidate, max-age=3600
        Connection: Keep-Alive
        Content-Encoding: gzip
        Content-Length: 16790
        Content-Type: text/html
        Date: Fri, 01 Jun 2012 21:43:09 GMT
        Expires: Fri, 1 Jun 2012 18:43:09 -0400
        Keep-Alive: timeout=15, max=100
        Pragma:
        Server: Apache
        Vary: Accept-Encoding

    The mod_cache config is as follows:

        CacheRoot /var/cache/apache2/
        CacheDirLevels 3
        CacheDirLength 2
        CacheEnable disk /foo

    What is getting in the way of mod_cache doing its job of caching?
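    One detail worth checking in the headers above, offered as a possible culprit rather than a confirmed diagnosis: the Expires value (Fri, 1 Jun 2012 18:43:09 -0400) is not RFC 1123 format (two-digit day, GMT rather than a numeric offset), and a malformed Expires can be treated as already expired, which stops mod_cache from storing the response. A hedged sketch of the same config with one extra directive that often unblocks caching of dynamic pages:

        CacheRoot /var/cache/apache2/
        CacheDirLevels 3
        CacheDirLength 2
        CacheEnable disk /foo
        # Allow caching of responses that carry no Last-Modified header,
        # which is common for dynamically generated PHP pages.
        CacheIgnoreNoLastMod On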

    Read the article

  • Preview Chitika Premium Ads On Your Website Quickly

    - by Gopinath
    Google AdSense is an excellent option for publishers like us to monetize traffic. As Google AdSense allows only 3 ad units per page, we have a good amount of space left empty on the blog. Why not use this empty space to earn some revenue (making sure we are not annoying our visitors with too many ads)? On Tech Dreams, today we started experimenting with Chitika Premium Ads to display advertisements to visitors landing on us through search engines.

    Chitika Premium Ads are displayed only to US visitors who find our pages through search engines. Visitors from outside the USA do not see these ads anywhere on our site. We being in India, how do we preview the Chitika ads on our site? To preview Chitika ads, add #chitikatest at the end of the URL. For example, to preview the ads on Tech Dreams I use the URL http://techdreams.org/#chitikatest

    The above URL displays the default list of ads Chitika shows. But if you want to see a preview of ads for a specific keyword, you can append it at the end of the URL. Here is another example: http://www.techdreams.org/#chitikatest=ipad

    Do You Know What The Word "Chitika" Means?

    When Chitika co-founders Venkat Kolluri and Alden DoRosario left Lycos in 2003 to start their own company, they sought a name that would suggest the speed with which its customers would be able to put up ads on their Web sites. Chitika, which means "snap of the fingers" in Telugu (a South Indian language), captured this sentiment and Chitika Inc. was born.

    This article, titled "Preview Chitika Premium Ads On Your Website Quickly", was originally published at Tech Dreams.

    Read the article

  • Redirect an Apache2 SSL VirtualHost with mod_alias

    - by Jeff
    I want to make sure there aren't any odd behaviors that I don't know about when redirecting an SSL VirtualHost with mod_alias Redirect, as outlined by Apache here. My code seems to work, but since SSL virtual hosts are restricted to just one IP address, I want to make sure there aren't any problems eluding me. Explicitly not using TLS. I'm stuck with Apache 2.2 for now.

        <VirtualHost *:443>
            ServerName example.com
            SSLEngine On
            Redirect 301 / https://www.example.com/
        </VirtualHost>

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine On
            # Do stuff #
        </VirtualHost>

    So I guess my question is: should SSL VirtualHost redirection with mod_alias Redirect work the same as non-SSL redirection?

    Read the article

  • Accessing Virtual Host from outside LAN

    - by Ray
    I'm setting up a web development platform that makes it as easy as possible to write and test all code on my local machine, and then sync it with my web server. I set up several virtual hosts so that I can access my projects by typing in "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network. I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now when I try to access my projects by typing in my DynDNS URL, I get the 403 Forbidden error message, "You don't have permission to access / on this server."

    To set up my virtual hosts, I edited two files: hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server name to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www/ladybug"
            ServerName ladybug
            ErrorLog "logs/your_own-error.log"
            CustomLog "logs/your_own-access.log" common
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" common
        </VirtualHost>

    Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type in http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!
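    A hedged sketch of one likely explanation: with name-based virtual hosts, a request arriving via the DynDNS name matches none of the ServerName entries, so Apache falls back to the first vhost listed (the ladybug one), whose directory permissions may then produce the 403. Adding the external name as a ServerAlias to whichever vhost should answer it is the usual fix (url.dyndns.info stands in for the real DynDNS name):

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            # Also answer requests that arrive via the DynDNS hostname.
            ServerAlias url.dyndns.info
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" common
        </VirtualHost>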

    Read the article

  • Port Forwarding: Why do my local sites on 80 work but not those on 8080?

    - by Chadworthington
    I set up my router to forward port 80 to the PC hosting my web site. As a result, I am able to access this URL (don't bother clicking on it, it's just an example): http://my.url.com/

    When I click on this link, it works: http://localhost:8080/tfs/web/

    I also forward port 8080 to the same web server box, but when I try to access this URL I get the error "Page cannot be displayed": http://my.url.com:8080/tfs/web/

    I forwarded port 8080 the same way I forwarded port 80. I also turned off Windows Firewall, in case it was blocking port 8080. Any theories why port 80 works but 8080 does not?

    Read the article

  • I got these strange messages on my website's feedback form? Is someone trying to hack my site?

    - by Ali
    Hi guys - all of a sudden I got a number of strange messages from my site's feedback form. It's where users would normally come and enter feedback, which I then review on an admin panel. However, these messages make little to no sense. For example, here are two 'messages':

        2GyOim <a href=\"http://vdjzpnoyzfji.com/\">vdjzpnoyzfji</a>, [url=http://gixlpbtswcdh.com/]gixlpbtswcdh[/url], [link=http://zudauexgjgot.com/]zudauexgjgot[/link], http://vqhafprwogyf.com/

        jF2wdU <a href=\"http://aprjkscbhnxf.com/\">aprjkscbhnxf</a>, [url=http://dhfeoqufoqvu.com/]dhfeoqufoqvu[/url], [link=http://whmzpbqrsume.com/]whmzpbqrsume[/link], http://xxfntqzhhbza.com/

    I got over a dozen of these, and they are all from very different IPs. Is someone playing around, and is it a cause for me to get vigilant? Also, they all have the exact same time and date of entry, which is spooky.

    Read the article

  • Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that is crashing our NGINX server running Magento down to the following point.

    Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line break inserted for better readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: XXXXXXXXX, server: test.local, request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When checking the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType()):

        # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);

    The requested filename is a URL back to the same server that is running the script: a link to a static .gif that does not exist. Sample URL: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif

    When the above line is executed, any subsequent request to the NGINX server no longer responds. After waiting around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but this does not crash. It simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
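    A hedged sketch of a workaround (a standalone helper, not actual Magento code): fetching the URL with an explicit short timeout before measuring it keeps getimagesize() from blocking indefinitely on a dead or self-referencing loopback URL. getimagesizefromstring() requires PHP 5.4+:

        <?php
        // Fetch the image with a hard timeout, then measure it from memory,
        // so an unreachable or self-referencing URL cannot hang the worker.
        function imageSizeWithTimeout($url, $timeoutSeconds = 5)
        {
            $context = stream_context_create(
                array('http' => array('timeout' => $timeoutSeconds))
            );
            $data = @file_get_contents($url, false, $context);
            if ($data === false) {
                return false; // unreachable, timed out, or missing image
            }
            return getimagesizefromstring($data);
        }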

    Read the article

  • Google Mini Search Does Not Return Results

    - by James Lawruk
    I pointed the Google Mini at a small intranet site (8 simple HTML pages). It crawls all 8 pages just fine, but a simple search for a word clearly contained within the body of the pages returns 0 results. If I enter a search like "site:mysite.net", it returns all 8 pages. If I enter a search with a word contained in a URL, it will return that URL. In the search results, only the URL is returned in the list items; there is no page title or blurb text like you normally see in Google search results. It's as if it is only indexing the URL and skipping the page title, body, etc. I have an old software version, 3.4.14, but I wanted to try to get this thing working without an upgrade. It worked before for another domain, so why would it not work now? Any ideas?

    Read the article

  • Hide subdomain AND subdirectory using mod_rewrite?

    - by Jeremy
    I am trying to hide a subdomain and subdirectory from users. I know it may be easier to use a virtual host, but would that not break direct links pointing at our site? The site currently resides at http://mail.ctrc.sk.ca/cms/ and I want www.ctrc.sk.ca and ctrc.sk.ca to access this folder but still display www.ctrc.sk.ca, if that makes any sense. Here is what our current .htaccess file looks like; we are using Joomla, so there are already a few rules set up. Help is appreciated.

        # Helicon ISAPI_Rewrite configuration file
        # Version 3.1.0.78
        ##
        # @version $Id: htaccess.txt 14401 2010-01-26 14:10:00Z louis $
        # @package Joomla
        # @copyright Copyright (C) 2005 - 2010 Open Source Matters. All rights reserved.
        # @license http://www.gnu.org/copyleft/gpl.html GNU/GPL
        # Joomla! is Free Software
        ##

        #####################################################
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that dissallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        #####################################################

        ## Can be commented out if causes errors, see notes above.
        #Options +FollowSymLinks

        # mod_rewrite in use
        RewriteEngine On

        ########## Begin - Rewrite rules to block out some common exploits
        ## If you experience problems on your site block out the operations listed below
        ## This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        ## Deny access to extension xml files (uncomment out to activate)
        #<Files ~ "\.xml$">
        #Order allow,deny
        #Deny from all
        #Satisfy all
        #</Files>
        ## End of deny access to extension xml files
        RewriteCond %{QUERY_STRING} mosConfig_[a-zA-Z_]{1,21}(=|\%3D) [OR]
        # Block out any script trying to base64_encode crap to send via URL
        RewriteCond %{QUERY_STRING} base64_encode.*\(.*\) [OR]
        # Block out any script that includes a <script> tag in URL
        RewriteCond %{QUERY_STRING} (\<|%3C).*script.*(\>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Send all blocked request to homepage with 403 Forbidden error!
        RewriteRule ^(.*)$ index.php [F,L]
        #
        ########## End - Rewrite rules to block out some common exploits

        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root)
        #RewriteBase /

        ########## Begin - Joomla! core SEF Section
        #
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !^/index.php
        RewriteCond %{REQUEST_URI} (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ [NC]
        RewriteRule (.*) index.php
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
        #
        ########## End - Joomla! core SEF Section

    EDIT: Yes, mail.ctrc.sk.ca/cms/ is the root directory. Currently the DNS redirects from ctrc.sk.ca and www.ctrc.sk.ca to mail.ctrc.sk.ca/cms. However, when it redirects, the user still sees the mail.ctrc.sk.ca/cms/ URL, and I want them to only see www.ctrc.sk.ca.
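    A hedged sketch of one approach (hostnames as in the question): point www.ctrc.sk.ca at the same server in DNS (an ordinary A record, not a redirect), then rewrite internally into the /cms folder. Because there is no [R] flag, the rewrite happens inside Apache and the address bar keeps showing www.ctrc.sk.ca. These lines would go near the top of the .htaccess, just after RewriteEngine On:

        # Internally serve everything from /cms for the www host, without
        # exposing /cms in the visible URL.
        RewriteCond %{HTTP_HOST} ^(www\.)?ctrc\.sk\.ca$ [NC]
        RewriteCond %{REQUEST_URI} !^/cms/
        RewriteRule ^(.*)$ /cms/$1 [L]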

    Read the article

  • Discover in one morning everything Oracle can do for you

    - by Noelia Gomez
    Today, technology is changing the world in unprecedented ways. The convergence of developments such as cloud computing, mobile devices, social networks, Big Data and the "Internet of Things" is driving innovation and revolutionizing old business models. How will companies manage to adapt to these changes quickly without endangering the day-to-day running of the business? Oracle has always taken on this challenge, and that is why we want to present, exclusively for our customers, the biggest new additions to our range of solutions on 5 November at the Oracle Day.

    On the applications side, we will talk about the significant opportunity to achieve a leadership position in CX, since offering an excellent experience is directly linked to an increase in sales. The more relevant and consistent your customers' experience, the more likely they are to buy. Enjoy a unique experience at this interactive event, where you can take part in discussions with Oracle executives, watch videos and hear about customer experiences, expand your network of contacts, attend hands-on product demonstrations, and much more. For more information, click here.

    Read the article

  • Spam in Whois: How is it done and how do I protect my domain?

    - by user2964971
    Yes, there are answered questions regarding spam in Whois. But it is still unclear: How do they do it? How should I respond? What precautions can I take?

    For example, Whois for google.com:

        [...]
        Server Name: GOOGLE.COM.ZOMBIED.AND.HACKED.BY.WWW.WEB-HACK.COM
        IP Address: 217.107.217.167
        Registrar: DOMAINCONTEXT, INC.
        Whois Server: whois.domaincontext.com
        Referral URL: http://www.domaincontext.com

        Server Name: GOOGLE.COM.ZZZZZ.GET.LAID.AT.WWW.SWINGINGCOMMUNITY.COM
        IP Address: 69.41.185.195
        Registrar: TUCOWS DOMAINS INC.
        Whois Server: whois.tucows.com
        Referral URL: http://domainhelp.opensrs.net

        Server Name: GOOGLE.COM.ZZZZZZZZZZZZZ.GET.ONE.MILLION.DOLLARS.AT.WWW.UNIMUNDI.COM
        IP Address: 209.126.190.70
        Registrar: PDR LTD. D/B/A PUBLICDOMAINREGISTRY.COM
        Whois Server: whois.PublicDomainRegistry.com
        Referral URL: http://www.PublicDomainRegistry.com

        Server Name: GOOGLE.COM.ZZZZZZZZZZZZZZZZZZZZZZZZZZ.HAVENDATA.COM
        IP Address: 50.23.75.44
        Registrar: PDR LTD. D/B/A PUBLICDOMAINREGISTRY.COM
        Whois Server: whois.PublicDomainRegistry.com
        Referral URL: http://www.PublicDomainRegistry.com

        Server Name: GOOGLE.COMMAS2CHAPTERS.COM
        IP Address: 216.239.32.21
        Registrar: CRAZY DOMAINS FZ-LLC
        Whois Server: whois.crazydomains.com
        Referral URL: http://www.crazydomains.com
        [...]

        >>> Last update of whois database: Thu, 05 Jun 2014 02:10:51 UTC <<<
        [...]
        >>> Last update of WHOIS database: 2014-06-04T19:04:53-0700 <<<
        [...]

    Read the article

  • Using proxy.pac to access Apache 2 with a hostname?

    - by leeand00
    Note that I do not have a DNS server on my network, which is why I am resorting to a proxy.pac file. I would like to be able to access my development Apache 2 server using a name instead of an IP without setting up a full-blown DNS. I am aware of setting names in the C:\Windows\System32\drivers\etc\hosts file and the /etc/hosts files; however, I cannot edit the hosts file on all of the devices that I am testing the site on. I've added a proxy.pac file to my Apache 2 server and pointed my browser's settings to it at http://192.168.2.221/proxyutils/proxy.pac, where 192.168.2.221 is the server's IP address. I set the above URL in Firefox in the following manner:

        1. From the menu bar, select "Edit - Preferences".
        2. In the resulting "Firefox Preferences" window, click the "Advanced" tab.
        3. Click the "Network" tab.
        4. Click the "Settings" button.
        5. Select the "Automatic proxy configuration URL:" radio button.
        6. Enter http://192.168.2.221/proxyutils/proxy.pac and press OK.

    The contents of the proxy.pac file on the Apache server:

        function FindProxyForURL(url, host)
        {
            if( dnsDomainIs(host, "thehostname") )
                return "PROXY 192.168.2.221:80";
            return "DIRECT";
        }

    In Firefox I then access the following URL: http://thehostname/wp-blog/

    Instead of the development version of the WordPress blog I am trying to access, I get a URL of http://thehostnamehttp/thehostname/wp-blog/ in my address bar and a 404 Not Found page in the browser window. Looking over proxy.pac, it seems like calling dnsDomainIs shouldn't work considering I don't have a DNS set up on my network, but I've also tried just comparing the host argument with the string "hostname", and it yielded the same result, even after modifying the proxy.pac file and clicking the reload button near the proxy settings. This could also be a WordPress problem, since I've noticed that directories without WordPress seem to function perfectly normally. (See cross post here.) Is there any way I can modify my configuration so that I can access the site using http://thehostname/wp-blog/ ?

    Read the article

  • How can I move mysites to a new location

    - by Bob
    I recently restored my content and was instructed to create mysites in a different location than was originally used. Now I have several users' mysites in /personal. The new desired location is /mysites. From what I found in the documentation, I should back them up and restore them to the new location. Here's what I've done:

    Back up the individual site collection for a user's mysite:

        stsadm -o backup -url "https://myUrl/personal/john_smith" -filename johnsmith.bkup

    Restore the individual site collection for the user's mysite:

        stsadm -o restore -url "https://myUrl/mysites/john_smith" -filename johnsmith.bkup -overwrite

    The result of this, and the problem, is that when I enumerate sites I end up with this:

        <Site Url="https://myUrl/mysites" Owner="domainname\john.smith" ContentDatabase="WSS_Content_MySites" StorageUsedMB="1.6" StorageWarningMB="90000" StorageMaxMB="100000" />

    It leaves off the username part of the URL, and if I restore more than one, they want to overwrite each other.
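    A hedged guess at the cause, with a sketch of the fix: a truncated site URL like that is what a restore produces when /mysites is not a wildcard-inclusion managed path, so every site collection lands at the path root instead of under its own username segment. Creating the managed path on the web application before restoring should let each user's site keep its own URL:

        stsadm -o addpath -url "https://myUrl/mysites" -type wildcardinclusion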

    Read the article

  • Order of mod_rewrite rules in .htaccess not being followed

    - by user39461
    We're trying to enforce HTTPS on certain URLs and HTTP on others. We are also rewriting URLs so all requests go through our index.php. Here is our .htaccess file:

        # enable mod_rewrite
        RewriteEngine on

        # define the base url for accessing this folder
        RewriteBase /

        # Enforce http and https for certain pages
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(en|fr)/(customer|checkout)(.*)$ [NC]
        RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/(en|fr)/(customer|checkout)(.*)$ [NC]
        RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        # rewrite all requests for files and folders that do not exist
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?query=$1 [L,QSA]

    If we don't include the last rule (RewriteRule ^(.*)$ index.php?query=$1 [L,QSA]), the HTTPS and HTTP rules work perfectly. However, when we add the last three lines, our other rules stop working properly. For example, if we try to go to https://www.domain.com/en/customer/login, it redirects to http://www.domain.com/index.php?query=en/customer/login. It's as if the last rule is applied before the redirection is done, despite the [L] flag indicating that the redirection should be the last rule to apply.
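    A hedged sketch of one explanation and fix: in per-directory (.htaccess) context, [L] only ends the current pass; after the internal rewrite to index.php the whole rule set is re-run, the new URI no longer matches the /(en|fr)/(customer|checkout) exception, and the HTTP-enforcing rule fires as an external redirect that exposes index.php. Skipping requests that result from an internal rewrite avoids that:

        # Enforce http, but only on the original request, not on URIs
        # produced by the internal rewrite (REDIRECT_STATUS is set then).
        RewriteCond %{ENV:REDIRECT_STATUS} ^$
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(en|fr)/(customer|checkout)(.*)$ [NC]
        RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]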

    Read the article

  • Strange spam posts not making sense

    - by Paaland
    I'm running a web site with a forum where one small part is open for posting from unregistered users. The site uses a captcha, but some spam posts still get through every day. Here is the thing: all of the messages follow the same pattern, yet all come from different IPs. That makes me think this is some sort of automated, scripted "attack" from a botnet of some kind. The strange thing is that all the messages start with six random characters and contain a couple of links. The words have no meaning and the domains in the links do not even exist. Why would anyone spend time and resources spreading these things? Below you can see two of these messages:

        A5Zfs6 exrzvrbspntz, [url=http://nktqoqllnuab.com/]nktqoqllnuab[/url], [link=http://wtrenldadvsy.com/]wtrenldadvsy[/link], [http://rnlrqfgdvdot.com/]

        O2oLpL nqeffxhryfdk, [url=http://jutyurbpfxow.com/]jutyurbpfxow[/url], [link=http://jpcdtmdalpow.com/]jpcdtmdalpow[/link], [http://qopqwqxwjdjx.com/]

    Since all the messages come from different IPs, I can't see blocking those helping much. For now I'm considering just dropping all messages following this pattern, since it's quite easy to match with a regexp. Has anyone else seen these kinds of messages, or does anyone know the point of posting them?

    Read the article

  • How to Change System Application Pages (Like AccessDenied.aspx, Signout.aspx etc)

    - by Jayant Sharma
    An advantage of SharePoint 2010 over SharePoint 2007 is that we can programmatically change the URL of system application pages. For example, it was not very easy to change the URL of the AccessDenied.aspx page in SharePoint 2007, but in SharePoint 2010 we can easily change the URL with just a few lines of code. For this purpose we have two methods available:

        GetMappedPage
        UpdateMappedPage - returns true if the custom application page is successfully mapped; otherwise, false.

    You can override the following pages:

        None
        AccessDenied - Specifies AccessDenied.aspx.
        Confirmation - Specifies Confirmation.aspx.
        Error - Specifies Error.aspx.
        Login - Specifies Login.aspx.
        RequestAccess - Specifies ReqAcc.aspx.
        Signout - Specifies SignOut.aspx.
        WebDeleted - Specifies WebDeleted.aspx.

    (http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spwebapplication.spcustompage.aspx)

    So now it's time for implementation:

        using (SPSite site = new SPSite("http://testserver01"))
        {
            // Get a reference to the web application.
            SPWebApplication webApp = site.WebApplication;
            webApp.UpdateMappedPage(SPWebApplication.SPCustomPage.AccessDenied, "/_layouts/customPages/CustomAccessDenied.aspx");
            webApp.Update();
        }

    Similarly, you can use SPCustomPage.Confirmation, SPCustomPage.Error, SPCustomPage.Login, SPCustomPage.RequestAccess, SPCustomPage.Signout and SPCustomPage.WebDeleted to override these pages. To reset a mapping, set the target value to null:

        webApp.UpdateMappedPage(SPWebApplication.SPCustomPage.AccessDenied, null);
        webApp.Update();

    One restriction: you are limited to locations in the /_layouts folder. When updating the mapped page, the URL has to start with "/_layouts/".

    Ref:
    http://msdn.microsoft.com/en-us/library/gg512103.aspx#bk_spcustapp
    http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spwebapplication.updatemappedpage.aspx

    Read the article

  • Cloudfront - How to invalidate objects in a distribution that was transformed from secured to public?

    - by Gil
    The setting: I have an Amazon CloudFront distribution that was originally set up as secured. Objects in this distribution required URL signing. For example, a valid URL used to be of the following format:

        https://d1stsppuecoabc.cloudfront.net/images/TheImage.jpg?Expires=1413119282&Signature=NLLRTVVmzyTEzhm-ugpRymi~nM2v97vxoZV5K9sCd4d7~PhgWINoTUVBElkWehIWqLMIAq0S2HWU9ak5XIwNN9B57mwWlsuOleB~XBN1A-5kzwLr7pSM5UzGn4zn6GRiH-qb2zEoE2Fz9MnD9Zc5nMoh2XXwawMvWG7EYInK1m~X9LXfDvNaOO5iY7xY4HyIS-Q~xYHWUnt0TgcHJ8cE9xrSiwP1qX3B8lEUtMkvVbyLw__&Key-Pair-Id=APKAI7F5R77FFNFWGABC

    The distribution points to an S3 bucket that also used to be secured (it only allowed access through the CloudFront).

    What happened: at some point, the URL signing expired and would return a 403. Since we no longer need to keep the same security level, I recently changed the settings of both the CloudFront distribution and the S3 bucket it points to so that they are public. I then tried to invalidate objects in this distribution. The invalidation did not throw any errors; however, it did not seem to succeed. Requests to the same CloudFront URL (with or without the query string) still return 403. The response header looks like:

        HTTP/1.1 403 Forbidden
        Server: CloudFront
        Date: Mon, 18 Aug 2014 15:16:08 GMT
        Content-Type: text/xml
        Content-Length: 110
        Connection: keep-alive
        X-Cache: Error from cloudfront
        Via: 1.1 3abf650c7bf73e47515000bddf3f04a0.cloudfront.net (CloudFront)
        X-Amz-Cf-Id: j1CszSXz0DO-IxFvHWyqkDSdO462LwkfLY0muRDrULU7zT_W4HuZ2B==

    Things I tried: I set up another CloudFront distribution that points to the same S3 bucket as origin server. Requests to the same object in the new distribution were successful.

    The question: did anyone encounter the same situation, where a CloudFront URL that returns 403 cannot be invalidated? Is there any reason why the object wouldn't get invalidated? Thanks for your help!
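    For reference, a hedged sketch of issuing the invalidation from the AWS CLI (the distribution ID is a placeholder). Invalidation paths are relative to the distribution root and do not include query strings, which is worth double-checking for objects that were originally requested with signed-URL parameters:

        # Invalidate the object by its path only; one entry per object,
        # or a wildcard such as "/images/*" for a whole prefix.
        aws cloudfront create-invalidation \
            --distribution-id EDFDVBD6EXAMPLE \
            --paths "/images/TheImage.jpg"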

    Read the article

  • Firefox not displaying icons in KhanAcademy

    - by ADTC
    If you don't know what Khan Academy is, check it out. It's awesome. (For testing purposes you may view any video on the website.) My problem, a minor but annoying one, is that in Firefox (Windows 7) the icons below the video are shown as boxes with hex codes in them. This means the icons come from some font that isn't getting downloaded by Firefox.

    How it appears on Chrome (Windows 7), Safari (Mac OS X) and Stainless (Mac OS X): [screenshot]

    I checked out the source and found that the font in question is called "FontAwesome". I found this question on S.O. that may explain why this happens: the CSS does use single quotes to enclose the font's src location. However, I don't have any write access to Khan Academy's servers, so I can't modify the actual website. I want to know if this can be fixed in Firefox, and how. I can run Greasemonkey scripts if that would help. Also, would manually downloading the font and adding it to Windows' Fonts folder help? I tried this with the TTF font, and it does not help. For reference, the CSS that sets this font up (not processed properly by Firefox) is:

        @font-face {
            font-family: 'FontAwesome';
            src: url('./fontawesome-webfont.eot');
            src: url('./fontawesome-webfont.eot?#iefix') format('embedded-opentype'),
                 url('./fontawesome-webfont.woff') format('woff'),
                 url('./fontawesome-webfont.ttf') format('truetype'),
                 url('./fontawesome-webfont.svg#FontAwesome') format('svg');
            font-weight: normal;
            font-style: normal
        }

        [class^="icon-"]:before, [class*=" icon-"]:before {
            font-family: FontAwesome;
            font-weight: normal;
            font-style: normal;
            display: inline-block;
            text-decoration: inherit
        }
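    A hedged note on one likely cause, not confirmed from the post: Firefox, unlike the other browsers listed, refuses to use web fonts served from a different origin unless the font's host sends a CORS header, so when the @font-face files live on a separate host (e.g. a CDN) the glyphs fall back to boxed hex codes. On a server you control, the usual fix looks like this (Apache syntax, mod_headers assumed):

        # Allow cross-origin use of the web font files.
        <FilesMatch "\.(eot|ttf|otf|woff)$">
            Header set Access-Control-Allow-Origin "*"
        </FilesMatch>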

    Read the article

  • Page Spamming via locations

    - by codemonkey
    Hi guys, I am new here, so please be gentle. :) I have created a web page for a small mail order business. The page asks the reader if they are in need of a supplier for products in their "area" and if they have ever been let down by a supplier in that "area", etc. It also lists all the local villages and hamlets around the [area] that they can also supply to. This page is dynamically created: the [area] changes, and so do the small towns that are local to it. The page also contains information on the products, so the word count vs. town names is not stupid. An example of one of the URLs would be www.website.com/1014/Halesowen/

    It basically covers the whole of the UK: around 800 main towns with 28,000 local villages. The URL changes, and so do the title and h1 tags; each page is also geocoded for its town. My question really is: is this a good or bad idea? Is it a black hat technique? I have been told that if I have to ask the question then it probably is, but the site does supply to all these areas, just as any mail order company does, and I would like to get listed higher in each town for the products. I have seen this done on a few sites, but only with a few targeted towns and not the whole of the UK, so I would be really interested in your thoughts on this.

    I would post the URL to the site, but as I am new here I am a bit unsure of the rules regarding posting links. The whole site needs a lot of other on-site SEO work doing, and I will be doing that over the next few weeks. I look forward to your views on this. P.S. If I am allowed to post the URL without getting into trouble, so you can see the site, someone let me know?

    Read the article

  • Firefox cannot render icons from Font Awesome webfont set

    - by ADTC
    In Firefox (Windows 7), icons and glyphs that are called from the Font Awesome package do not render properly. An example of this can be seen on the Khan Academy website: below the video, the icons are shown as boxes with hex codes in them. This means the font isn't getting downloaded by Firefox.

    How it appears on Chrome (Windows 7), Safari (Mac OS X) and Stainless (Mac OS X): [screenshot]

    I found this question on Stack Overflow that may explain why this happens: the CSS does use single quotes to enclose the font's src location. However, I don't have any write access to Khan Academy's servers, so I can't modify the actual website. I want to know if this can be fixed in Firefox, and how. I can run Greasemonkey scripts if that would help. I've already tried manually downloading the font and adding it to Windows' Fonts folder, but this does not help. For reference, the CSS that sets this font up (not processed properly by Firefox) is:

        @font-face {
            font-family: 'FontAwesome';
            src: url('./fontawesome-webfont.eot');
            src: url('./fontawesome-webfont.eot?#iefix') format('embedded-opentype'),
                 url('./fontawesome-webfont.woff') format('woff'),
                 url('./fontawesome-webfont.ttf') format('truetype'),
                 url('./fontawesome-webfont.svg#FontAwesome') format('svg');
            font-weight: normal;
            font-style: normal
        }

        [class^="icon-"]:before, [class*=" icon-"]:before {
            font-family: FontAwesome;
            font-weight: normal;
            font-style: normal;
            display: inline-block;
            text-decoration: inherit
        }

    Read the article

  • How to 301 redirect from old query string urls to CakePHP Canonical urls?

    - by Daniel Bingham
    I currently have a .htaccess file that looks like this:

        RewriteCond %{QUERY_STRING} ^action=view&item=([0-9]+)$
        RewriteRule ^index\.php$ /index.php?url=item/%1 [R=301]

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]

    It is meant to 301 redirect my old query-string-based URLs to the new CakePHP URLs. This will successfully send users to the correct page. However, Google doesn't seem to like it (see below). I previously tried doing this:

        RewriteCond %{QUERY_STRING} ^action=view&item=([0-9]+)$
        RewriteRule ^index\.php$ /item/%1 [R=301]

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]

    But that fails: the second rewrite rule doesn't seem to catch the rewritten URL; it goes straight through. Using the first version wouldn't be a problem, except that I suspect it is what is choking up Google. It hasn't indexed my sitemap full of the new URLs. My old sitemap had been fully indexed and all its URLs are in Google's index, but Google isn't following the redirects from the old URLs to the new ones. I have a 'not followed' error for every one of the query URLs that was in my old sitemap. Am I properly using a 301 redirect here? Is it the weird rewrite rule? What can I do to send both Google and users to the proper page and save my page rank?
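    A hedged guess at why the second version misbehaves, with a sketch: by default mod_rewrite re-appends the original query string to the substitution, so the redirect actually lands on /item/123?action=view&item=123, and that lingering query string may be what trips up both the follow-up rule and Google. Ending the substitution with a bare "?" discards the old query string (on Apache 2.4 the QSD flag does the same):

        RewriteCond %{QUERY_STRING} ^action=view&item=([0-9]+)$
        # The trailing "?" drops the original query string from the redirect.
        RewriteRule ^index\.php$ /item/%1? [R=301,L]

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]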

    Read the article

  • Dynamically loading Chef recipes from URLs

    - by andy
    I'm deploying a web app on AWS. I intend to use Chef to build AMIs, which I'll then put into production. I want to have Chef monitor a URL stored in SimpleDB. The URL would point to a tarball in S3. There would be different URLs: one for a config tarball, one for a code tarball. When I update a URL in SimpleDB, I want Chef to spot this and pull in and apply the configs or deploy the code. Is this possible? Has anything like this been done before, or would I need to roll my own code? I think Chef can monitor URLs, but what would be the best way of getting it to load that URL from SimpleDB?

    Read the article
