Search Results

Search found 8013 results on 321 pages for 'clean urls'.


  • Is it considered duplicate content when search results can be retrieved via 2 different urls? [closed]

    - by Floran
    Possible Duplicate: What is duplicate content and how can I avoid being penalized for it on my site? I'm building up friendly URLs like so: http://www.1001locaties.nl/trouwlocaties The same content can also be viewed via the filter options on the left side, but under a different URL: http://www.1001locaties.nl/locaties/?search=1&category=Trouwlocaties Is this considered duplicate content by Google? And if so, what can I do about it?
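
    One common way to handle this kind of situation (not part of the original question, so treat it as an illustrative suggestion) is to declare one of the two addresses as canonical, so crawlers fold the filtered URL and the friendly URL into a single page. A minimal HTML sketch, assuming the friendly URL is the one you want indexed:

        <!-- placed in the <head> of the filtered page
             http://www.1001locaties.nl/locaties/?search=1&category=Trouwlocaties -->
        <link rel="canonical" href="http://www.1001locaties.nl/trouwlocaties" />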

    Read the article

  • What is the SEO-recommended method for using underscores and dashes in URLs that contain geographic locations?

    - by ElHaix
    In reading through this article: In Subfolder & File Names, Use Dashes, Not Underscores Good: http://www.domain.com/sub-folder/file-name.htm Bad: http://www.domain.com/sub_folder/file_name.htm In my URLs, I may have one or two city names, ending with the province/state: Burnaby_New_Westminister-BC/[some search term]. My URL rules are currently defined such that everything after the dash is the prov/state. Some geographic locations already contain dashes: Notre-Dame-de-Grâce (in QC), which I would convert to ~/Notre_Dame_de_Grace-QC/ I thought of placing the prov/state after another "/", but in some cases the province/state name may not exist, thus ~/Notre_Dame_de_Grace/, so the first term after the domain name contains the geo location {city, city_name-state}. I am now revisiting this and wondering if this rule set should change, and if so, what is the recommended way of implementing it? -- UPDATE -- After reviewing this video, I see that I should be using dashes rather than underscores. However, since I still want to have my geo locations in the first URL section, is there anything wrong with using a double-dash separator, i.e. /city-name--state/ ?

    Read the article

  • Are there advantages of using hard coded URLs for localization?

    - by nbolton
    On the Synergy website, localization is detected (and can be overridden), but the same URL is used for all languages. Some websites, however, like Wikipedia, have language-specific subdomains. What are the advantages of having either subdomains or subdirectories (i.e. a specific URL) for each language localization? Also, should the site automatically redirect the user to the specific subdomain/subdirectory based on the language that the browser requests? I suspect that there are advantages, which I'm guessing are: When the website appears in search results for non-English languages, the translated page description will be shown (assuming there is a translation provided by the website). When a user shares a page (e.g. through Twitter), it will show in a specific language. Perhaps this is a disadvantage, though? Am I correct? If so, are there more advantages?

    Read the article

  • Why do some urls in Firefox change when copy / paste?

    - by user203748
    This may not be a Firefox/Ubuntu-specific issue. When I copy and paste a web link containing _ and ( ), it is rendered with %20, %28, and %29. Yet in the Firefox URL bar these % symbols do not appear. The %20 is particularly weird because the _ itself does render in the URL: https://www.capitalsecuritybank.com/en/PDF/CSB_%20Account_%20Application_%20Form_%20%28Personal%29.pdf Can anyone explain why the URL is different when copied and pasted?
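
    The % sequences are ordinary percent-encoding: %20 is a space, and %28/%29 are the parentheses, while underscores pass through unchanged (the odd-looking "_%20" pairs in the link above are an underscore followed by an encoded space). Firefox shows the decoded form in the address bar but copies the encoded form. A small Python sketch, purely illustrative and not tied to Firefox internals, showing the round trip:

        from urllib.parse import quote, unquote

        name = "CSB_ Account_ Application_ Form_ (Personal).pdf"

        encoded = quote(name)
        print(encoded)           # CSB_%20Account_%20Application_%20Form_%20%28Personal%29.pdf

        print(unquote(encoded))  # back to the readable form shown in the address bar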

    Read the article

  • Software to clean up photos of whiteboards and documents?

    - by Norman Ramsey
    I take a lot of photos of whiteboards, blackboards, and so on for teaching purposes (examples online through May 2010). I'm interested in cleaning them up for archival purposes, preferably using Linux. Commercial products ClearBoard and PhotoNote are priced a little aggressively for my purposes, plus my students would like to have this capability too. Does anyone know of any good, open-source software for converting photographs to images with just a few colors, eliminating perspective distortion, removing unwanted junk from around the edges of an image, or anything like that? I'm imagining that I start out with a picture of my whiteboard using red and black markers, and I end up with a three-color image using just white, red, and black. Or I photograph a laser-printed document and end up with a clean black-and-white image. I have tried standard tools that reduce the number of colors in an image, and they do a terrible job, probably because they are trying to reproduce the uneven illumination of the original image. Command-line Linux tools would be ideal.

    Read the article

  • Is there a clean way to obtain exclusive access to a physical partition under Windows?

    - by zneak
    Hey guys, I'm trying, under Windows 7, to run a virtual machine with VMWare Player from an OS installed on a physical partition. However, when I boot the virtual machine, VMWare Player says that it couldn't access the physical drive for writing. This seems to be a generally acknowledged problem in the VMWare community, as Windows Vista introduced a compelling new security feature that makes it impossible to write to a raw drive without obtaining exclusive access to it first. I have googled the issue and found a few workarounds. However, the clean ones seem to only work on whole physical disks, and not on partitions. So I would be left with the dirty solution. In short, it meddles with the MBR to erase any trace of the partitions to use, makes Windows forget about them, then restores the MBR so we can launch the VM. I'm not sure I want to do that. Is there a way to let VMWare acquire exclusive access to the partition without requiring me to nuke it away? What I'd be looking for, I suppose, is a way to put just partitions offline instead of whole physical drives.

    Read the article

  • What is the recommended method of HTTP Redirection from multiple URLs to one URL?

    - by ChrisHDog
    I have a website that has a number of URLs that people use to connect to that site (uses the bindings on the IIS website and everything works as intended): http://www.sample.com http://sample.com https://www.sample.com http://xyz.sample.com http://oldurl.com Now what I want to do is have all of the URLs go to https://www.sample.com - so if you type in "http://xyz.sample.com" or "sample.com" you should go to https://www.sample.com The question is what is the best mechanism to do this? I have one possible solution (which I will put as an answer to this question), but I get the feeling that there might be another, better solution available.

    Read the article

  • Why did mislav-will_paginate start adding so much garbage to urls between rails 2.3.2 and 2.3.5?

    - by user30997
    I've used will_paginate in a number of projects, but when I moved one of them to Rails 2.3.5, clicking on any of the pagination links (page number, next, prev, etc.) went from getting nice URLs like this: http://foo.com/user/1/date/2005_01_31/phone/555-6161 to this: http://foo.com/?options[]=user&options[]=date&options[]=2005_01_31&options[]=phone&options[]=555-6161 I have a route that looks like this, which is probably the source of the 'options' keyword: map.connect '/browse/*options', :controller=>'assets', :action=>'browse' It's enough of an annoyance that I'm willing to roll my own paginator to get around this if there isn't a way to get back to where I was before. Is there a way to get will_paginate to turn array-style routes into sane URLs again? Thanks.

    Read the article

  • How to monitor HTTP(S) traffic / URLs in Android?

    - by Pawel Krakowiak
    I'm interested in whether there is a way to monitor HTTP(S) traffic on an Android phone. What I would like to do is retrieve all URLs that have been accessed in the phone's browser. I thought that there would be a browser intent for that, but have not seen anything - given that I'm green, maybe I just did not know where to look. I followed the question and answer here, but that works only for hyperlinks clicked by the user; I need to catch all URLs, including the ones typed in by the user. Basically I need to know about every single URL that was opened in the web browser. Can I register some kind of a handler with the browser? Is something like that feasible at all?

    Read the article

  • How do I force SSL for some URLs and force non-SSL for all others?

    - by brad
    I'd like to ensure that certain URLs on my site are always accessed via HTTPS while all other URLs are accessed via HTTP. I can get either case working in my .htaccess file; however, if I enable both, I get infinite redirects. My .htaccess file is:

        <IfModule mod_expires.c>
            # turn off the module for this directory
            ExpiresActive off
        </IfModule>

        Options +FollowSymLinks
        AddHandler application/x-httpd-php .csv
        RewriteEngine On

        RewriteRule ^/?registration(.*)$ /register$1 [R=301,L]

        # Force SSL for certain URL's
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} (login|register|account)
        RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

        # Force non-SSL for certain URL's
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !(login|register|account)
        RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

        # Force files ending in X to use same protocol as initial request
        RewriteRule \.(gif|jpg|jpeg|jpe|png|ico|css|js)$ - [S=1]

        # Use index.php as the controller
        RewriteCond %{REQUEST_URI} !\.(exe|css|js|jpe?g|gif|png|pdf|doc|txt|rtf|xls|swf|htc|ico)$ [NC]
        RewriteCond %{REQUEST_URI} !^(/js.*)$
        RewriteRule ^(.*)$ index.php [NC,L]

    Read the article

  • How to match the last url in a line containing multiple urls, using regular expressions?

    - by Mert Nuhoglu
    I want to write a regex that matches a URL ending in ".mp4", given that there are multiple URLs in a line. For example, for the following line: "http://www.link.org/1610.jpg","Debt","http://www.archive.org/610_.mp4","66196517" the following pattern matches from the first http all the way to mp4: (http:\/\/[^"].*?\.mp4)[",].*? How can I make it match only the last URL? Note that the lines may contain any number of URLs and anything in between, but only the last URL has the .mp4 ending.
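
    One way to keep the match from spilling across fields is to forbid quote characters inside the URL itself, so the engine cannot start at the first http and crawl forward to the .mp4 of a later field. A short Python sketch of that idea, assuming the URLs always sit inside double-quoted fields (the same character-class trick works in most regex flavours):

        import re

        line = ('"http://www.link.org/1610.jpg","Debt",'
                '"http://www.archive.org/610_.mp4","66196517"')

        # [^"]* cannot cross the closing quote of a field, so only the URL
        # that really ends in .mp4 can match.
        match = re.search(r'http://[^"]*?\.mp4', line)
        if match:
            print(match.group(0))   # http://www.archive.org/610_.mp4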

    Read the article

  • Validating Internationalized URLs - Is this going to be a problem?

    - by VirtuosiMedia
    After reading about the new Arabic URLs, and with more languages to come, how should URL validation be done for internationalized applications? Does the validation change at all and will existing solutions break? Is regex still a good approach? If so, what would that regex look like? If not, what's a good strategy? What are some good resources to read more on the topic? I ask this because it has the potential to cause a good many localized applications to have to be rewritten if they have to validate URLs at any point.
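
    One strategy that keeps an existing ASCII-only validator usable is to normalise the URL first: convert the host to its punycode (IDNA) form, percent-encode the non-ASCII parts of the path and query, and then validate the resulting ASCII string as before. A rough Python sketch of that idea (the example URL is made up and port/fragment handling is omitted):

        from urllib.parse import urlsplit, urlunsplit, quote

        url = "https://münchen.example/straße?q=grüße"
        parts = urlsplit(url)

        host = parts.hostname.encode("idna").decode("ascii")   # xn--mnchen-3ya.example
        path = quote(parts.path)
        query = quote(parts.query, safe="=&")

        ascii_url = urlunsplit((parts.scheme, host, path, query, ""))
        print(ascii_url)   # validate this with the existing ASCII URL regex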

    Read the article

  • django-haystack urlpatterns include('haystack.urls') - where does it lead?

    - by Eugene
    I've recently begun to learn/install django/haystack/solr. Following the tutorial given on the haystack site, I have urlpatterns = pattern('', r'^search/', include('haystack.urls')) I found haystack installed in /usr/local/lib/python2.6/dist-packages/haystack and located urls.py there. It has urlpatterns=patterns('haystack.views', url(r'^$', SearchView(), name='haystack_search'),) I thought the second argument of url() should be a callable object. I looked at views.py, and SearchView is a class. What is going on here? What gets called eventually?
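
    The usual explanation is that url() only needs something callable: in the haystack releases of that era, SearchView defines __call__, so the instance created by SearchView() is itself the callable Django invokes with each request, and __call__ then runs the search and renders the template. A stripped-down Python sketch of the pattern (not haystack's actual code):

        from django.http import HttpResponse

        class SearchView(object):
            def __call__(self, request):
                # The real class builds the form, runs the query against the
                # backend (e.g. Solr) and renders search/search.html here.
                return HttpResponse("results for %r" % request.GET.get("q", ""))

        # urls.py: the *instance* is the view callable
        # url(r'^$', SearchView(), name='haystack_search')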

    Read the article

  • .htaccess to create friendly URLs - help needed

    - by Jonathan
    Hi, I'm having a hard time with .htaccess. I want to create friendly URLs for a site I'm working on. Basically I want to convert this: http://website.com/index.php?ctrl=pelicula&id=0221889 http://website.com/index.php?ctrl=pelicula&id=0160399&tab=posters Into this: http://website.com/pelicula/0221889/ http://website.com/pelicula/0221889/posters/ In case I need it later, I would also like to know how to add the article title to the end of the URL, like this (I'm using PHP): http://website.com/pelicula/0221889/the-article-name/ http://website.com/pelicula/0221889/the-article-name/posters/ Note: the Stack Overflow method is also fine for me; for example, the URL of this question is: http://stackoverflow.com/questions/3033407/htacces-to-create-friendly-urls-help-needed but you can put anything after the id and it will still work, like this: http://stackoverflow.com/questions/3033407/just-anything-i-want I have used some automatic web tools for creating the .htaccess file, but it's not working correctly, so I'm asking for your help. I would also be glad if you could recommend .htaccess best practices. Thanks!
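
    For reference, a rough sketch of the kind of mod_rewrite rules involved; the details are assumptions (the .htaccess sits in the site root, the id is numeric, and index.php ignores an extra slug segment), and rule order matters, with the more specific patterns first:

        RewriteEngine On

        # /pelicula/0221889/                           -> index.php?ctrl=pelicula&id=0221889
        # /pelicula/0221889/posters/                   -> index.php?ctrl=pelicula&id=0221889&tab=posters
        # /pelicula/0221889/the-article-name/          -> index.php?ctrl=pelicula&id=0221889
        # /pelicula/0221889/the-article-name/posters/  -> index.php?ctrl=pelicula&id=0221889&tab=posters
        RewriteRule ^pelicula/([0-9]+)/?$ index.php?ctrl=pelicula&id=$1 [L,QSA]
        RewriteRule ^pelicula/([0-9]+)/posters/?$ index.php?ctrl=pelicula&id=$1&tab=posters [L,QSA]
        RewriteRule ^pelicula/([0-9]+)/[^/]+/posters/?$ index.php?ctrl=pelicula&id=$1&tab=posters [L,QSA]
        RewriteRule ^pelicula/([0-9]+)/[^/]+/?$ index.php?ctrl=pelicula&id=$1 [L,QSA]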

    Read the article

  • How do I extract info from a block of URLs in php?

    - by Jack
    I have a list of URLs, which can come in any format: one per line, separated by commas, with random text in between them, etc. The URLs are all from 2 different sites and have a similar structure. For this example, let's say it looks like this: Random Text - http://www.domain2.com/variable-value Random Text 2 - http://www.domain1.com/variable-value, http://www.domain1.com/variable-value, http://www.domain1.com/variable-value http://www.domain1.com/variable-value http://www.domain2.com/variable-value http://www.domain1.com/variable-value http://www.domain2.com/variable-value http://www.domain1.com/variable-value I need to extract 2 pieces of information: whether it's domain1 or domain2, and the value that follows "variable-". So it should create a multi-dimensional array, where each entry has 2 items: domain + value. What's the best way of doing that?
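
    As an illustrative PHP sketch only (PHP because that is what the question uses; the variable names and the exact pattern are assumptions), preg_match_all with two capture groups can pull out the domain and the value in one pass:

        <?php
        $text = 'Random Text - http://www.domain2.com/variable-value ...';  // the pasted block

        preg_match_all(
            '~https?://(?:www\.)?(domain1\.com|domain2\.com)/variable-([^\s,"]+)~i',
            $text,
            $matches,
            PREG_SET_ORDER
        );

        $results = array();
        foreach ($matches as $m) {
            $results[] = array(
                'domain' => strtolower($m[1]),  // domain1.com or domain2.com
                'value'  => $m[2],              // whatever follows "variable-"
            );
        }

        print_r($results);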

    Read the article

  • Cleaning your BizTalk Build Server

    - by Michael Stephenson
    Just a little note for myself, this one. At one of my customers, where it is still BizTalk 2006, one of the build servers is intermittently getting issues, so I wanted to run a script periodically to clean things up a little. The script below is an example of how you can stop Cruise Control and all of the BizTalk services, then clean the BizTalk databases and reset the backup process, and then kick everything off again. This should keep the server a little cleaner and reduce the number of builds that occasionally fail for ad hoc environmental issues.

        REM Server Clean Script
        REM ===================
        REM This script is run to move the build server back to a clean state

        echo Stop Cruise Control
        net stop CCService

        echo Stop IIS
        iisreset /stop

        echo Stop BizTalk Services
        net stop BTSSvc$<Name of BizTalk Host>
        <Repeat for other BizTalk services>

        echo Stop SSO
        net stop ENTSSO

        echo Stop SQL Job Agent
        net stop SQLSERVERAGENT

        echo Clean Message Box
        sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_CleanupMsgbox"
        sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_PurgeSubscriptions"

        echo Clean Tracking Database
        sqlcmd -E -d BizTalkDTADb -Q "Exec dtasp_CleanHMData"

        echo Reset TDDS Stream Status
        sqlcmd -E -d BizTalkDTADb -Q "Update TDDS_StreamStatus Set lastSeqNum = 0"

        echo Force Full Backup
        sqlcmd -E -d BizTalkMgmtDB -Q "Exec sp_ForceFullBackup"

        echo Clean Backup Directory
        del E:\BtsBackups\*.* /q

        echo Start SSO
        net start ENTSSO

        echo Start SQL Job Agent
        net start SQLSERVERAGENT

        echo Start BizTalk Services
        net start BTSSvc$<Name of BizTalk Host>
        <Repeat for other BizTalk services>

        echo Start IIS
        iisreset /start

        echo Start Cruise Control
        net start CCService

    Read the article

  • Memcached server: Is it a good practice to point two server urls to the same server?

    - by Niro
    I have a system where there are connections to a memcached server from several different files and servers. I would like to stay with one server but keep the option of increasing the number of memcached servers (for periods of high traffic). My idea is to tell the clients there are two servers, while the two URLs will point (by DNS) to a single server. In the future, if I want, I can add a server and change DNS without changing the code in many places. Is this a good practice? Is there a performance cost to the fact that there are two server connections even though they both point to the same server? Any other ideas how to achieve instant expandability of memcached capacity without needing to change the code and redeploy?

    Read the article

  • Multiple URLs to one website with a wildcard SSL certificate

    - by dagda1
    Hi, At the moment we have 27 single sites in IIS6, each with its own URL, all subdomains of the same domain, e.g. https://company1.mycompany.com https://company2.mycompany.com etc. To further complicate things, there is one wildcard certificate which covers *.mycompany.com and is assigned to each website. All these websites run under the same codebase. We want to consolidate all these websites into one website. Are there any issues with having a large number of host headers running under one IIS6 site, or is there a better way of configuring the site? Thanks Paul

    Read the article

  • SharePoint 2007: Moving the main site to be a subsite - how can URLs be redirected/changed?

    - by program247365
    The setup: SharePoint 2007 (MOSS Enterprise) on WINSVR03/IIS6, one site collection with one access mapping (http://mainsite). I'm currently moving the main SharePoint site in our one site collection to be a subsite in a new site collection, using the SharePoint Content Deployment Wizard to complete this task (http://spdeploymentwizard.codeplex.com/). The question: the main site http://mainsite being moved has many subsites, etc. How can I be sure that URLs like this: http://mainsite/subsite/doclib/doc1.docx map to and redirect to the new URL: http://newsite/mainsite/subsite/doclib/doc1.docx ? Furthermore, I'm aware of this - http://rdacollaboration.codeplex.com/releases/view/28073 - however, is it IIS7 only? That wouldn't work for me. This question - http://serverfault.com/questions/107537/dealing-with-moved-documents-and-sites-in-sharepoint - is the only similar one I see. Would an IIS redirect of http://mainsite to http://newsite/mainsite work only for the root URL?

    Read the article

  • Is there a free tool/package that can monitor web traffic and display URLs accessed? [closed]

    - by Anthony
    I couldn't find a similar question, but then maybe I am searching for the wrong terms. A few years ago I used a router-like device, I'm pretty sure it was a SonicWall, that did this on a client's site. Basically all traffic would be routed through this device, and it allowed the manager/administrator to inspect the web usage of the workers, determine how often certain resources were accessed, and block them if necessary (much like a content filter). It showed reports based on the domain name reached, e.g. Facebook.com, Bebo.com and so on. It also displayed the usual IP traffic information, etc.; it was a UTM as well. I have tried Endian Firewall with its NTOP install, but I don't think that will show URLs browsed. Maybe I just haven't found it in NTOP yet? I need this to troubleshoot connection and traffic issues at my home, with about twenty devices/users, so I didn't want to buy a dedicated solution, and I have spare hardware to run a community product.

    Read the article

  • Is there a method to export the URLs of the open tabs of a Firefox window?

    - by hekevintran
    If I have a Firefox window open that contains 10 tabs, is there a way in Firefox or by a plug-in to get the URLs of those 10 tabs as a text file or some other format? Right now if I want to do this I need to copy the URL of tab A, paste it somewhere, move to tab B, and repeat. I could also bookmark all the tabs into a folder and export that, but that seems like such a hassle. If there is no such method, could someone point me to some documents that describe the basics of writing a Firefox plug-in. I am willing to write this myself if there is no "standard" way.

    Read the article

  • Where to find URLs for sources.list for debian for running apt-get update?

    - by Boda Cydo
    Can anyone tell me where to find URLs to put in /etc/apt/sources.list for Debian so that I can run apt-get update? I couldn't find a precise answer by searching Google. When I currently try running apt-get update I get:

        W: Failed to fetch ftp://ftp.debian.org/debian/dists/lenny/contrib/binary-i386/Packages Unable to fetch file, server said 'Failed to open file. ' [IP: 130.89.148.12 21]
        W: Failed to fetch ftp://ftp.debian.org/debian/dists/lenny/non-free/binary-i386/Packages Unable to fetch file, server said 'Failed to open file. ' [IP: 130.89.148.12 21]

    I have no idea how to solve this. Here is what my current sources.list looks like:

        deb ftp://ftp.debian.org/debian lenny main contrib non-free
        deb-src ftp://ftp.debian.org/debian lenny main contrib non-free
        deb ftp://ftp.debian.org/debian lenny/updates main contrib non-free
        deb-src ftp://ftp.debian.org/debian lenny/updates main contrib non-free

    I'm running debian_version 5.0.8:

        # cat /etc/debian_version
        5.0.8

    Thanks!
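
    The ftp.debian.org entries are the likely culprit: the lenny/updates suite is served from security.debian.org rather than ftp.debian.org, HTTP mirrors are generally more reliable than FTP, and once a release has been archived the main entries need to point at archive.debian.org instead. A typical lenny-era sources.list looks roughly like this (mirror choice is illustrative; any nearby Debian mirror works):

        deb http://ftp.debian.org/debian lenny main contrib non-free
        deb-src http://ftp.debian.org/debian lenny main contrib non-free
        deb http://security.debian.org/ lenny/updates main contrib non-free
        deb-src http://security.debian.org/ lenny/updates main contrib non-free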

    Read the article

  • How can I redirect URLs using the proxy module in Apache?

    - by LearningIT
    This seems like a super-basic question, but I am having a hard time tracking down a straightforward solution, so I appreciate any help and patience with me on this. I want to configure my Apache proxy server to redirect certain URLs so that, for example, a web browser's HTTP request for www.olddomain.com gets passed to the proxy server, which then routes the request to www.newdomain.com, which sends a response to the proxy server, which then passes it back to the web browser. It seems so simple, yet I don't see how to achieve this with Apache. I know Squid/Squirm offer this functionality, so I'm guessing I am missing something really basic. I know I can use RewriteRule to dynamically modify the URL and pass it to the proxy server, but I effectively want to do the reverse, whereby the proxy server receives the original URL, applies the RewriteRule, and then forwards the HTTP request to the new URL. Hope that makes sense. Thanks in advance for any help.
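
    If mod_proxy and mod_proxy_http are loaded, a plain reverse-proxy block is usually all this takes. A rough sketch, assuming the proxy answers for www.olddomain.com in its own virtual host (the names are illustrative):

        <VirtualHost *:80>
            ServerName www.olddomain.com

            ProxyRequests Off
            ProxyPass        / http://www.newdomain.com/
            ProxyPassReverse / http://www.newdomain.com/
        </VirtualHost>

    A RewriteRule with the [P] flag achieves the same forwarding when the decision needs a regular expression rather than a fixed prefix.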

    Read the article

  • Is there a way to redirect certain URLs to specific web browsers in Linux?

    - by jraxxo
    I'm using Chrome as my default browser in Ubuntu 12.10, but I need to use Firefox for business purposes (certain websites pertaining to my work only work with Firefox). Is there a way to force Ubuntu to use Firefox for certain types of URLs (maybe as defined by a regular expression) while maintaining Chrome as my default browser for all my other tasks? Perhaps as a shell script running in the background? I'd like this to work system-wide, covering links from Chrome itself as well as from PDFs/ODTs, etc. I have searched for solutions, but I couldn't find anything besides OpenWith, a Firefox extension that adds a button to open certain links in other browsers - which would again require me to open Firefox beforehand, so it does not help me at all. Does anyone have any ideas? Something like Choosy for Linux?
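
    One workable approach is a small dispatcher script registered as the system's default browser (via its own .desktop entry), which matches the URL against a list of patterns and hands it to Firefox or Chrome accordingly. A rough Python sketch; the patterns and the google-chrome binary name are assumptions:

        #!/usr/bin/env python3
        import re
        import subprocess
        import sys

        # URLs that must open in Firefox (illustrative patterns)
        FIREFOX_PATTERNS = [
            r"^https?://(\w+\.)?work-portal\.example\.com/",
            r"^https?://.*\.internal\.example\.org/",
        ]

        url = sys.argv[1] if len(sys.argv) > 1 else ""

        if any(re.search(p, url) for p in FIREFOX_PATTERNS):
            cmd = ["firefox", url]
        else:
            cmd = ["google-chrome", url]   # everything else keeps using Chrome

        subprocess.Popen(cmd)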

    Read the article

  • De-index URL parameters by value

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have parameters with certain values appended. I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only English (default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page: https://example.com (default) https://example.com/main?l=fr_FR (French) I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots.txt testing tool, and it works. But under HTML improvements, the previously crawled language-translation URLs remain indexed. The internet says to return a 404 for the removed URLs so Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM This URL should not exist - I removed the language translations - yet the page loads when it should not! I played around and typed example.com?whatever123 - it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads - it's the parameter that needs to be de-indexed.

    Read the article
