Search Results

Search found 20405 results on 817 pages for 'url rewrite'.

Page 59/817 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • SH404SEF URLs in Joomla 1.5

    - by Tao Bellamine
    I have two modules to play with URLs: the global configuration module and the sh404sef module. The global config is set to "SEF URLs: YES" and "mod_rewrite enabled: YES", and sh404sef is set to "URL optimization: NO". My problem is that even with "SEF URLs" enabled in the global config, my URLs still don't seem very user friendly, so I turn on "URL optimization" in the sh404sef module and get better, more descriptive URLs. The problem I inherit from doing this is that my dynamically populated Chronoforms get messed up (only the Chronoforms; other forms are fine): these forms now show up at the homepage instead of on their own reserved page. Here's an example:
        Old form "GOOD" URL: http://www.mycraftwork.com/index.php?option=com_content&view=article&id=94
        New optimized "BAD" URL: http://www.mycraftwork.com/handthrown-pottery/alladin-teapot/index.php?option=com_content&view=article&id=94
    Any help would be GREATLY appreciated! I can even turn sh404sef on and off if anyone wants to see the issue live. Thanks!! Tao Bellamine

  • What should filenames and URLs of images contain for SEO benefit?

    - by Baumr
    We know that good site architecture usually looks like this:
        example-company.com/
        example-company.com/about/
        example-company.com/contact/
        example-company.com/products/
        example-company.com/products/category/
        example-company.com/products/category/productname/
    Now, when it comes to Google Image search, it is clear that the img alt attribute, the filename/URL, and the surrounding text (captions, headings, paragraphs) have an effect on ranking. I want to ask about the filename we should give images (e.g. product-photo.jpg), but first about the URL. Web developers often stick all images in a single folder off the root, such as example-company.com/img/, and I have stopped doing that. (I don't want to get into it, but basically it seems more semantic to keep images with the content they belong to in each sub-directory.) However, when all images sit in one folder, I feel their filenames need to describe them a bit more than usual, for example:
        example-company.com/img/example-company-productname-category.jpg
    It's a longer filename than just product.png, but as long as it's relevant I see no SEO problem (unless you're keyword stuffing), and it could even help rank for the keywords "example company", "productname" and "category". So no questions there. But what about when we place images within the site architecture outlined at the beginning? In other words, what if image URL paths look like this:
        example-company.com/products/category/productname/productname.jpg
    My question is: should the URL be kept short like the above, with only the product name (and some descriptive keywords) in the filename? Or should the filename also include "example-company" and "category", like so:
        example-company.com/products/category/productname/example-company-category-productname.jpg
    That looks much longer, and redundant next to the URL path, but here are a few considerations. Images are often downloaded onto computers, and to the average user they lose their original URL, so it isn't clear where they came from. Also, some social networks, forums, and other platforms leave the filename intact when images are uploaded (many others rewrite it, for example Pinterest and Facebook). Another consideration: will this really help, even ever so slightly, with ranking in Google Image Search, or at least tell Google that the product is something specific to "example-company"? For example, what if this product can only be bought at this store and is the flagship product? In addition to an abundance of internal links to the product page, would having the "example company" name and "category" in the filename help it appear in "example company" searches? In other words, is less more?

  • 500.50 error using URL Rewrite

    I ran into a 500.50 error yesterday, which wasn't very descriptive initially, so I thought I would provide some details on what it could be. First off, if you hit the page remotely, IIS is going to hide the real details. It will look something like this: Server Error 500 - Internal server error. There is a problem with the resource you are looking for, and it cannot be displayed. Checking the IIS logs will show that it is a 500.50 error in this case. The first thing to do after running into...
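
    For context, a hedged first-step sketch: switching IIS to detailed, local-only error pages makes the underlying URL Rewrite error visible while troubleshooting. The element may already exist in the site's web.config, in which case adjust it rather than adding a duplicate.

        <configuration>
          <system.webServer>
            <!-- show full error details to local requests only; remote visitors keep the generic page -->
            <httpErrors errorMode="DetailedLocalOnly" />
          </system.webServer>
        </configuration>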

  • Unshorten.it! Unpacks Shortened URLs and Provides Safety Rating

    - by Jason Fitzpatrick
    Shortened URLs sure are convenient and compact, but they hide the destination URL. Unshorten.it! is a free Chrome/Firefox extension that not only shows you the full URL but even gives you a safety rating, so there is no need to click blindly again. Install the extension for Chrome or Firefox and then, when you come across a shortened URL, simply right-click on it and click "Unshorten this link" to see both the unpacked URL and a safety rating provided by Web of Trust and HPHosts. Unshorten.it! is a free extension, available for both Chrome and Firefox. Unshorten.it! [via Gizmo's Freeware]

  • Redirect/Rewrite Subdomain to Subfolder

    - by Laurent Ho
    I'm trying to redirect a subdomain to a subfolder, e.g. forums.domain.com to www.domain.com/forums. Note that I started the forums in the subfolder format but worried that members might mistakenly try to access the forums using the subdomain format.
        RewriteCond %{HTTP_HOST} ^(www\.)?forums\.domain\.com
        RewriteRule .* /forums [L]
    From what I've read, the rules above should work through .htaccess, but do I still need to create a DNS A record pointing to the IP address of the server?
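
    For comparison, a hedged sketch of an external (visible) redirect that also preserves the requested path; the 301 status and the www host are assumptions. Note that none of this runs unless forums.domain.com actually resolves to the server, so a DNS record (plus a matching vhost or ServerAlias) is still required.

        RewriteEngine On
        # requests arriving on the forums subdomain are sent to the subfolder on the main host
        RewriteCond %{HTTP_HOST} ^forums\.domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com/forums/$1 [R=301,L]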

  • Ubuntu 12.04 LTS Desktop 64 bit user permissions or apache2 rewrite problem

    - by mtm
    I have installed 12.04 Desktop 64-bit, manually installed LAMP, phpMyAdmin, php5-dev, PEAR, PECL, APC and SSH, created a user to own /var/www/, and transferred 3 sites into it. The sites live in subfolders, and their sites-available entries are all configured and enabled. One site is pure HTML; the other two are PHP. I enabled cURL. phpMyAdmin worked at first, and so did the PHP sites, but then the PHP sites stopped working (they show blank pages) and report that clean URLs cannot be enabled. The HTML site still works. Where is the problem, and why did the PHP sites stop working? In all Apache .conf files AllowOverride is set to All, and the PHP sites have .htaccess files. This same configuration worked on Ubuntu 10.04.
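
    A hedged checklist sketch of the usual suspects when "clean URLs cannot be enabled" on a hand-built LAMP stack; the directory path below is a placeholder, and this assumes the stock Apache 2.2 shipped with 12.04.

        # make sure mod_rewrite is actually loaded, then restart Apache
        sudo a2enmod rewrite
        sudo service apache2 restart

        # blank pages usually mean a fatal PHP error; check /var/log/apache2/error.log

        # and in each enabled vhost (sites-available), the directory serving the site:
        #   <Directory /var/www/mysite>
        #       Options FollowSymLinks
        #       AllowOverride All
        #       Order allow,deny
        #       Allow from all
        #   </Directory>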

  • Rewrite rule for 403

    - by Jitesh Tukadiya
    I have an .htaccess file that redirects to index.php in case a file or directory is not found. My code is as below:
        <IfModule mod_rewrite.c>
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?/$1 [L]
        </IfModule>
    Everything is working fine with this code. Now, when I get a Forbidden (403) error, I would like it to be redirected to index.php as well. Any idea how to write an .htaccess file for this purpose?
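
    A hedged sketch of the simplest route, assuming the goal is just to serve index.php for Forbidden responses: a request Apache has already refused may never reach these per-directory rewrite rules, but a custom error document still catches the 403. The response status stays 403 unless index.php overrides it with its own header.

        # in the same .htaccess: hand 403 responses to the front controller
        ErrorDocument 403 /index.php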

  • Need a little help with sound in a JApplet

    - by jacob schuschel
    I am working on a solitaire game in Java, and I need to implement sound when the deck is shuffled, a card is flipped, etc. I used a few sites as reference (links at the bottom of this post) to try to get it to work, but I am getting NullPointerExceptions or a MalformedURLException, depending on what I tweak. Also, I am using NetBeans 6.7.1 as my IDE. I will try to break down the code and explain:

        package cardgame;

        import java.applet.*;
        import java.util.logging.Level;
        import java.util.logging.Logger;
        import javax.swing.JApplet;
        import javax.swing.*;
        import java.io.*;
        import java.net.*;

        /**
         *
         * @author jacob
         */
        public class Sound extends JApplet {

            private AudioClip song;    // Sound player
            private String URL = null;
            private URL songPath;      // Sound path

            /*
             * sound_1 = shuffling cards
             * sound_2 = to discard
             * sound_3 = from discard
             * sound_4 = cardflip 1
             * sound_5 = cardflip 2
             */

            Sound(String filename) {
                try {
                    songPath = new URL(getCodeBase(), filename);   // Get the Sound URL
                } catch (MalformedURLException ex) {
                    Logger.getLogger(Sound.class.getName()).log(Level.SEVERE, null, ex);
                }
                song = getAudioClip(songPath);   // Load the Sound
            }

            Sound(int i) {
                URL = "./sounds/sound_" + i + ".wav";
                System.out.println(URL);
                try {
                    songPath = new URL(URL);   // Get the Sound URL
                    song = getAudioClip(songPath);
                } catch (MalformedURLException ex) {
                    Logger.getLogger(Sound.class.getName()).log(Level.SEVERE, null, ex);
                }
            }

            public void playSound() {
                song.loop();   // Play
            }

            public void stopSound() {
                song.stop();   // Stop
            }

            public void playSoundOnce() {
                song.play();   // Play only once
            }
        }

    The two different constructors are different ways I tried to implement this. The first one is handed a file path that is built elsewhere and passed in. The second one builds the file path in the constructor, given a sound number (I made a list of which numbers correspond to which sound for reference).
    I am getting the following errors (the printed paths and the stack traces are interleaved on the console):

        ./sounds/sound_1.wav
        ./sounds/sound_2.wav
        ./sounds/sound_3.wav
        ./sounds/sound_4.wav
        ./sounds/sound_5.wav

        Nov 16, 2009 4:14:13 PM cardgame.Sound <init>
        SEVERE: null
        java.net.MalformedURLException: no protocol: ./sounds/sound_1.wav
            at java.net.URL.<init>(URL.java:583)
            at java.net.URL.<init>(URL.java:480)
            at java.net.URL.<init>(URL.java:429)
            at cardgame.Sound.<init>(Sound.java:46)
            at cardgame.Game.loadSounds(Game.java:712)
            at cardgame.Game.<init>(Game.java:62)
            at cardgame.Main.main(Main.java:25)

    The same "no protocol" MalformedURLException and identical stack trace repeat for sound_2.wav through sound_5.wav. Thanks to those who read, and more thanks to those who help. I know it is somewhat long, but I would rather get it all out there than have 50 questions come back, or have people not answer due to lack of initial info. The forum only lets me post a single link right now, so the reference links are given below:
        dreamincode.net/forums/showtopic14083.htm
        stackoverflow.com/questions/512436/java-playing-wav-sounds
        deitel.com/articles/java_tutorials/20060422/LoadingPlayingAudioClips/index.html
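
    The "no protocol" message comes from handing java.net.URL a bare relative path. Below is a minimal sketch of two ways around it, assuming the sounds folder sits under the working directory or on the classpath; the class name and structure here are placeholders, and only the sound path is taken from the post.

        import java.applet.Applet;
        import java.applet.AudioClip;
        import java.io.File;
        import java.net.URL;

        public class SoundUrlSketch {
            public static void main(String[] args) throws Exception {
                // 1) build the URL from a file on disk, relative to the working directory
                URL fromFile = new File("./sounds/sound_1.wav").toURI().toURL();

                // 2) or load it from the classpath, if the sounds folder is packaged
                //    with the classes (returns null if the resource is not found)
                URL fromClasspath = SoundUrlSketch.class.getResource("/sounds/sound_1.wav");

                AudioClip clip = Applet.newAudioClip(fromFile);  // also works outside an applet
                clip.play();
            }
        }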

  • Question about RewriteRule and HTTP_HOST server variable

    - by SeancoJr
    When a rewrite rule redirects to a specific URL and its rewrite condition is met, is it possible to use HTTP_HOST as part of the URL being redirected to? The rule in question:
        RewriteRule .*\.(jpg|jpe?g|gif|png|bmp)$ http://%{HTTP_HOST}/no-leech.jpg [R,NC]
    The motive behind this question is a desire to create a single .htaccess file that would work for an addon domain (on a shared hosting account) and any number of subdomains below it, to prevent hotlinking of images.
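
    For context, a hedged sketch of the referer-based guard such a rule usually sits inside: %{HTTP_HOST} does expand in the substitution string, so the redirect target follows whichever (sub)domain was requested. The domain.com in the condition is a placeholder for the addon domain, since server variables cannot be expanded inside a RewriteCond pattern the same way.

        RewriteEngine On
        # let empty referers (direct requests) and our own pages through;
        # anything else asking for an image gets the placeholder picture
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^https?://([^/]+\.)?domain\.com [NC]
        RewriteRule \.(jpe?g|gif|png|bmp)$ http://%{HTTP_HOST}/no-leech.jpg [R,NC,L]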

  • WebDav uploads fail on files with certain characters on Apache

    - by bnferguson
    We have WebDAV uploads working great on one of our boxes, but any time a filename contains ;, # or * (and maybe a few others) the upload fails. That is expected, since they're restricted characters, but I'm curious whether there's a way to rewrite/rename those files on their way through. We don't really care what the name ends up being; it just has to make it up to the server. I started looking at mod_rewrite solutions, but my rewrite fu is rather weak.
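
    An untested sketch of one mod_rewrite idea, assuming the goal is simply to strip the offending characters from the request path before it is mapped to a file; whether mod_dav then stores the upload under the rewritten name needs verifying, and # in particular may never reach the server un-encoded.

        RewriteEngine On
        # [N] restarts the ruleset with the rewritten path, removing one restricted
        # character per pass until none are left (bounded by LimitInternalRecursion)
        RewriteRule ^(.*)[;#*](.*)$ $1$2 [N]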

  • mod_rewrite all but two files causing loop

    - by mpounsett
    I'm trying to set up a web site to allow the creation of a semaphore file that closes the site. The logic I want to follow is:
    1. when the semaphore file exists
    2. and the request is not for /style.css or /favicon.ico
    3. show the content of /closed.html
    I have 1 and 3 working, but my exceptions for 2 result in a processing loop when style.css or favicon.ico are requested. This is my most recent attempt:
        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^/style.css
        RewriteCond %{REQUEST_URI} !^/favicon.ico
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^.*$ /closed.html [L]
    This is in a VirtualHost block, not in a Directory. There is no .htaccess file in play. I have also recently tried this, based on an answer I found elsewhere, but with the same (looping) result:
        RewriteCond %{REQUEST_URI} ^/style.css [OR]
        RewriteCond %{REQUEST_URI} ^/favicon.ico
        RewriteRule ^.*$ - [L]
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^.*$ /closed.html [L]
    I expect a request for /style.css or /favicon.ico to fail one of the first two rewrite conditions, which should prevent the URI from being rewritten, which should stop the mod_rewrite iteration. However, mod_rewrite seems to think the URI has been rewritten in those cases, and iterates over the rules again (and again, and again). The above works properly in all cases except for style.css and favicon.ico; in those cases I exceed the loop limits. What am I missing here to make the rewrite iteration stop when someone requests style.css or favicon.ico?
    EDIT: Here's a loglevel 9 example of what happens using the first ruleset when a request arrives for /style.css. This is just the first two iterations; it continues to loop identically until the limit is reached.
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (2) init rewrite engine with requested uri /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (1) pass through /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (2) init rewrite engine with requested uri /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (1) pass through /style.css
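
    A hedged variant worth trying: also exclude the rewrite target itself, so that once /closed.html has been substituted and re-injected it can no longer match the rule; whether this is what breaks the /style.css loop on this particular build is an assumption to verify against the rewrite log.

        RewriteEngine on
        # skip the two assets and the closed page, then test the semaphore file
        RewriteCond %{REQUEST_URI} !^/(style\.css|favicon\.ico|closed\.html)$
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^ /closed.html [L]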

  • Using URL rewrite module for http to https redirect

    - by johnnyb10
    Following ruslany's suggestion on the URL Rewrite Tips page, I'm trying to use URL Rewrite to redirect http:// requests for my site to https://. I've written and tested the rule using a test site I set up, so now the final piece is to create a second (http) site that redirects to my https site. (I need a second site because I don't want to uncheck the "Require SSL encryption" checkbox on my existing site.) I'm an IIS newbie, so my question is: how do I do this? Should I create a site with the same name and host header, only bound to http? Will IIS let me create a site with the same name? I don't want to screw anything up with my existing site (a SharePoint site currently used by external users), which has both http and https bound to it. So my assumption is that, using IIS (not SharePoint), I will create a new http-only site with the same name and host header as my existing site and add the URL Rewrite rule to that http site. And then I guess I should remove the http binding from my existing site? Does that seem correct? Any advice, gotchas, etc. would be appreciated. Thanks.
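
    For reference, a hedged sketch of the redirect rule that post describes, as it would sit in the http-only site's web.config; the rule name and the permanent redirect type are assumptions.

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="HTTP to HTTPS redirect" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <!-- only act when the request did not arrive over SSL -->
                  <add input="{HTTPS}" pattern="off" ignoreCase="true" />
                </conditions>
                <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>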

  • Help with redirect and query strings

    - by James
    I'm a total novice at mod_rewrite, so I'll try to present my question as clearly as possible. I'm trying to create a URL redirect for the following (static) affiliate URL, which can have any product link appended to it via a query string:
        affiliate url: hxxp://clk.affilite.com/fs-bin/click?id=aFb*BBBBBpQ&subid=&offerid=9999.2&type=5&tmpid=9999&RD_PARM1=
        product url: hxxp://example.domain.com
    What I want to achieve is a short redirect for the affiliate code, as below, with dynamic product URLs added after it, as in these examples:
        rewritten affiliate url: hxxp://domain.com/go
        affiliate url + product url: hxxp://domain.com/go?=http://example.domain.com
        redirects to: hxxp://clk.affilite.com/fs-bin/click?id=aFb*BBBBBpQ&subid=&offerid=9999.2&type=5&tmpid=9999&RD_PARM1=http://example.domain.com
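
    A hedged .htaccess sketch for domain.com (the clk.affilite.com target is copied from the question; swap in the real affiliate URL). It relies on two mod_rewrite behaviours: a substitution that carries its own query string replaces the original one, and NE keeps the appended product URL from being percent-escaped.

        RewriteEngine On
        # /go?=<product-url>  ->  affiliate click URL with the product URL as RD_PARM1
        RewriteCond %{QUERY_STRING} ^=(.+)$
        RewriteRule ^go$ http://clk.affilite.com/fs-bin/click?id=aFb*BBBBBpQ&subid=&offerid=9999.2&type=5&tmpid=9999&RD_PARM1=%1 [NE,R=302,L]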

  • Modify “Link”/ "HyperLink"/URL field using Powershell

    - by KunaalKapoor
    If you are trying to update a hyperlink/URL type of column on a SharePoint list item using PowerShell and are getting the exception:
        Unable to index into an object of type Microsoft.SharePoint.SPListItem.
        At C:\mypowershell.ps1:39 char:10
        +       $item[ <<<< "Website"] = $itemUrl
            + CategoryInfo          : InvalidOperation: (RW_Website:String) [], RuntimeException
            + FullyQualifiedErrorId : CannotIndex
    then look no further :) The URL is basically stored as a simple string, with the URL and the description separated by a comma. So all you need to do is:
        $myUrl = "http://www.google.com, Google"
        $listitem["Link"] = $myUrl
    That will, assuming "Link" is a column of type "Hyperlink or Picture" (Hyperlink), create a link that says Google and points to http://www.google.com. Also make sure you don't miss out the 'http://' part, as without it the value will not pass SharePoint's validation of allowed values.
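
    A minimal end-to-end sketch of the same idea, assuming the SharePoint snap-in is available; the site URL and list name are hypothetical, and only the "url, description" assignment comes from the post above.

        # update a Hyperlink column on the first item of a list
        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

        $web  = Get-SPWeb "http://sharepoint.local/sites/demo"    # hypothetical site URL
        $list = $web.Lists["Links"]                                # hypothetical list title
        $item = $list.Items[0]

        $item["Link"] = "http://www.google.com, Google"            # "url, description"
        $item.Update()
        $web.Dispose()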

  • URL Multiple Query Parameters Encoded with HTML Entities

    - by BRADINO
    I came across a situation where a URL with multiple query parameters had been encoded using htmlentities(), and PHP was not recognizing the query parameters via $_GET. A common case for encoding URLs with htmlentities() is using them inside XML documents. A URL with multiple query parameters, encoded using htmlentities(), looks like this:
        http://www.bradino.com/?color=white&amp;size=medium&amp;quantity=3
    When that URL is accessed, the second and third query parameters are not recognized, because instead of the subsequent variables being separated by & they are separated by &amp;. I could not find a good way to resolve this, so basically I decoded the query string back to normal using html_entity_decode() and then slammed the parameters back into the $_GET array using parse_str():
        $query = html_entity_decode($_SERVER['QUERY_STRING']);
        parse_str($query, $_GET);
    There must be a better way! Anyone come across this before?
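
    A small self-contained sketch of that workaround in context, assuming it sits at the top of the entry script before anything reads $_GET; the example URL is the one from the post.

        <?php
        // rebuild $_GET when the incoming query string still carries HTML entities,
        // e.g. /?color=white&amp;size=medium&amp;quantity=3 pasted from an XML document
        $query = html_entity_decode($_SERVER['QUERY_STRING']);
        parse_str($query, $_GET);

        // after the two lines above, color, size and quantity are all present
        var_dump($_GET);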

  • URL blocked in robots.txt but still showing up on Google search [closed]

    - by Ahmad Alfy
    Possible Duplicate: Why do Google search results include pages disallowed in robots.txt?
    In my robots.txt I am disallowing a lot of URLs. Google Webmaster Tools says there are 750+ URLs blocked. The problem is that the URLs are still showing up in Google search. For example, I have the following rule:
        Disallow: /entity/child-health/
    But when I search for some-keyword + child health, the following URL shows up:
        http://www.sitename.com/entity/child-health/
    Am I doing anything wrong? Is it possible for a URL to be blocked using robots.txt and still show up in search results?

  • Google keeps indexing /comment/reply URL

    - by jaypabs
    With the new update of the Google algorithm called Penguin, I think my site was being penalized for webspam. But of course I don't create posts that would look like spam to Google; I think it is just how Google indexes my site. I found out that Google indexes URLs on my site like:
        http://www.example.com/comment/reply/3866/26556
    So there are many comment/reply URLs indexed by Google. I have already added:
        Disallow: /comment/reply/
        Disallow: /?q=comment/reply/
    but Google still indexes these URLs. Any idea how to prevent Google from indexing comments?

  • Google Webmasters tools crawl error caused by URL split into two lines

    - by Shiro
    I am looking at the Crawl Errors section in Google Webmaster Tools. How should I handle URLs that another system or application has turned into invalid URLs? For example, my URL is:
        http://www.example/images/products/s_=enlarge_16gb.jpg
    but I don't know what happened on Yahoo Groups; it broke the link into:
        http://www.example/images/products/s_=
        enlarge_16gb.jpg
    and only the top part became a hyperlink, which is http://www.example/images/products/s_=
    Because of that URL, Google shows a crawl error. I get a few errors because of this kind of result, or because of other people's typos. How do I prevent this? I'm sure I don't have the right to go and change other people's posts. What is the solution for this? Thanks!

  • Track url from Amazon S3 using Google Analytics

    - by morktron
    I couldn't find any decent pay-per-view video solutions for low-budget clients, so I'm considering using a membership extension with Joomla and hosting the video on Amazon S3. The only issue is that once someone has signed up to view or download the video, anyone with a little web development experience will be able to get the URL of the video and freely publish it on the web. How can this be prevented? It looks like it can be done using IAM user temporary credentials with the AWS SDK for PHP, but the client would prefer not to pay someone to spend hours writing custom PHP code to get this working. With Amazon S3 I could at least check the log files, I guess, to manually monitor the URL, but is there a way to track the URL with Google Analytics? Or is there a more elegant solution?
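
    For what it's worth, a hedged sketch of the expiring-link approach the AWS SDK for PHP supports (shown here against SDK v3; region, bucket, key and lifetime are placeholders). Once the signed URL expires, a leaked link stops working on its own.

        <?php
        // generate a short-lived, signed S3 URL for one video object
        require 'vendor/autoload.php';

        use Aws\S3\S3Client;

        $s3 = new S3Client([
            'version' => 'latest',
            'region'  => 'us-east-1',           // placeholder region
        ]);

        $cmd = $s3->getCommand('GetObject', [
            'Bucket' => 'my-video-bucket',      // placeholder bucket
            'Key'    => 'videos/lesson-1.mp4',  // placeholder key
        ]);

        $request = $s3->createPresignedRequest($cmd, '+20 minutes');
        echo (string) $request->getUri();       // hand this URL to the logged-in member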

  • Joomla url issue with sh404SEF

    - by user5858
    I've been using sh404SEF with my site for a couple of months, but my site is producing URLs in the form:
        http://www.downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html?task=view
    If I remove the suffix (?task=view), it leads to the same page. I raised this issue on the sh404SEF forum and was told that search engines treat this data as a parameter and hence ignore it. I want to use a rewrite rule in .htaccess to redirect all such URLs to the version without ?task=view, i.e. redirect
        ....downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html?task=view
    to
        http://www.downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html
    So my question is: will this redirection create 404s in Google Webmaster Tools? I have thousands of pages on the site.
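
    A hedged sketch of one way to do that with mod_rewrite (a plain RedirectMatch cannot see or strip the query string); whether sh404SEF's own .htaccess rules need to run after this is an assumption to verify.

        RewriteEngine On
        # send any URL carrying exactly task=view to the same path with the query string dropped
        RewriteCond %{QUERY_STRING} ^task=view$
        RewriteRule ^(.*)$ /$1? [R=301,L]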

  • seo for complex url

    - by Mark
    I have a jobs website. There are categories and sub-categories, e.g. the URL for builders is:
        www.mydomain.com/jobs/all/active/202/tradesmen/799/builders/county/all
    Because of the framework I'm using (Elgg), the "all" and "active" come after the plugin name. Also, I need to do a lot of things with the URL: get the plugin (job), get the correct type of entities (all and active), get the category (202 and tradesmen), get the subcategory (799 and builders), and get the location (county and all). Is this URL SEO friendly, or is it too messy? Is something like www.mydomain.com/jobs/tradesmen/builders/all better?

  • Loop URL (2 replies)

    I have a 3rd-party product that is called on a server with a URL. I will have to run thousands of these URLs in a batch. I will do this in WinForms, or perhaps in a Windows service, running on the server. Are there .NET calls to loop through the URLs, where I can request a URL, wait for its response, then run the next one, in a loop? I am just unclear on the call to run the URL, and the waiting...
