Search Results

Search found 3028 results on 122 pages for 'urls'.

Page 21/122 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Configure Django project in a subdirectory using mod_python. Admin not working.

    - by David
    HI guys. I was trying to configure my django project in a subdirectory of the root, but didn't get things working.(LOcally it works perfect). I followed the django official django documentarion to deploy a project with mod_python. The real problem is that I am getting "Page not found" errors, whenever I try to go to the admin or any view of my apps. Here is my python.conf file located in /etc/httpd/conf.d/ in Fedora 7 LoadModule python_module modules/mod_python.so SetHandler python-program PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE mysite.settings PythonOption django.root /mysite PythonDebug On PythonPath "['/var/www/vhosts/mysite.com/httpdocs','/var/www/vhosts/mysite.com/httpdocs/mysite'] + sys.path" I know /var/www/ is not the best place to put my django project, but I just want to send a demo of my work in progress to my customer, later I will change the location. For example. If I go to www.domain.com/mysite/ I get the index view I configured in mysite.urls. But I cannot access to my app.urls (www.domain.com/mysite/app/) and any of the admin.urls.(www.domain.com/mysite/admin/) Here is mysite.urls: urlpatterns = patterns('', url(r'^admin/password_reset/$', 'django.contrib.auth.views.password_reset', name='password_reset'), (r'^password_reset/done/$', 'django.contrib.auth.views.password_reset_done'), (r'^reset/(?P<uidb36>[0-9A-Za-z]+)-(?P<token>.+)/$', 'django.contrib.auth.views.password_reset_confirm'), (r'^reset/done/$', 'django.contrib.auth.views.password_reset_complete'), (r'^$', 'app.views.index'), (r'^admin/', include(admin.site.urls)), (r'^app/', include('mysite.app.urls')), (r'^photologue/', include('photologue.urls')), ) I also tried changing admin.site.urls with ''django.contrib.admin.urls' , but it didn't worked. I googled a lot to solve this problem and read how other developers configure their django project, but didn't find too much information to deploy django in a subdirectory. I have the admin enabled in INSTALLED_APPS and the settings.py is ok. Please if you have any guide or telling me what I am doing wrong it will be much appreciated. THanks.

    Read the article

  • Am I correctly handling duplicate URLs for my homepage?

    - by Rob Goldstein
    I own a Job Search site named www.conservationjobboard.com and have a concern about how the domain is viewed by search engines. The issue is that when the site was first designed, the default page was left as default.php, but the homepage was actually JobBoard.php. To handle this, the default.php page performed a redirect to the JobBoard.php file when www.conservationjobboard.com/ was requested. The main problem resulted because the redirect was a temporary redirect causing search engines to index conservationjobboard.com/ and conservationjobboard.com/JobBoard.php as 2 separate pages. This has since been corrected to use the .htaccess file so that JobBoard.php is now the default file for the root directory eliminating the need for the redirect. Problem is that search engines still show both URL's in search results (one including JobBoard.php and one that ends with /). Another potential problem is that some of my early backlinks are to conservationjobboard.com/JobBoard.php while the rest are to conservationjobboard.com The 2 outstanding questions are as follows: 1. Is my domain still being penalized by search engines like Google for having duplicate homepage URL's? 2. Are all of the back links to my homepage being considered as the same now or is the total number of back links being split between the 2 different URL's? If you think there are still issues with how we have this set-up, I was wondering if you could give me advice on what we should do differently. Thanks.

    Read the article

  • Why do some user agents have spam urls in them?

    - by Erx_VB.NExT.Coder
    If you go to (say) the last 100 entries (visits) to the botsvsbrowsers.com website (exact link, feel free to take a look: http://www.botsvsbrowsers.com/recent/listings/index.html ), you'd notice that almost every User Agent that has the keywords "Opera" and "Presto" inside them, will almost certainly have a web link (URL/Web Address) inside it, and it won't just be a normal web address, but a HTML anchor tag/link to that address. Why is this so, I could not even find a single discussion about it on the internet, nowhere, I tried varying my search terms many times. If the user agent contains the words "Opera" and "Presto" it doesnt mean it will have this weblink, but it means there is about an 80% change that it will. A typical anchor tag/link inside a user agent will look like this: Mozilla/4.0 <a href="http://osis-uk.co.uk/disabled-equipment">disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229 Version/11.60 If you check it out at the website, http://www.botsvsbrowsers.com/recent/listings/index.html you will notice that the back and forward arrows are in there unescaped format. This isn't just true for botsvsbrowsers, but several other user agent listing sites. I'm really confused and feel line I'm in a room full of 10,000 people and am the only one seeing this ghost :). If I'm doing statistical analysis, should I include or exclude this type of user agent from my listing (ie: are these just normal users who've set their user agents to attempt to drive some traffic to their sites as they browser the web), or is there something else going on? The fact that it is so consistent in terms of its format leads me to believe that it is an automated process (the setting or alteration of the user agent) so I cannot decide or understand the process by which this change is made (I know how to change a user agent), but unsure which program or facility is doing this, especially since it is exclusive to Opera (Presto) user agents that are beyond I think an 8 or 9 point something browser version. I've run some statistical tests, parsing entries from all over the place, writing custom programs, to get a better understanding of this. Keep in mind that I see normal URL's in user agents infrequently, they are just text such as +http://www.someSite.com appended to a user agent normally, especially if its a crawler or bot it provided its service URL, this is normal and isnt done with an embedded link (A HREF=) etc, so I'm not talking about "those".

    Read the article

  • for a blog with posts and categories what are all the best ways to create user friendly and seo friendly urls

    - by Jayapal Chandran
    I am creating a module in my website which displays ringtones. it is like creating blog posts and categories It will have categories(tags) and posts. (i am using category and tag interchangeably) i am using the following linking for this module sitename.com/blog sitename.com/blog/category/category-name-slug/ - will list all ringtones of that category/tag sitename.com/blog/title/name-slug-of-the-ringtone/ - this will display the details and a download link in all page at the left i display the category/tag . This is how i have formed the url structure. it will be user friendly i hope yet will it be seo friendly? Please hint if i am missing something or other ways to improve. meanwhile i am browsing the net to get more information on linking content (categorizing) and to find best ways for the user and search engine.

    Read the article

  • Why is a # sign added to the end of URLS?

    - by Niro
    Note: I'm asking this from the perspective of the site developers (trying to help someone there). not as a user. Please don't forward this to superuser.com. It's a server admin question. Have a look here http://www.wanimo.com/fr/chiens/coussin-matelas-tapis-pour-chien-sc28/tapis-plat-urban-chic-sf7263/ you'll see that the page gets redirected to the same page with # at the end. Worse, when you click back you get garbage url. I'm trying to debug what is causing the redirect. Any advice on how to find it ?

    Read the article

  • Should I add a "nofollow" attribute to download links, or disallow the URLs in robots.txt?

    - by Laurent
    I have a download link very similar to Opera's one - it's just a script that sends the file. It doesn't have an extension and there's no obvious way to tell that it's actually a download link. So since I don't want robots to crawl this link, do I need to add it to robots.txt or maybe add a "nofollow" attribute to it? I see that on Opera's website they didn't do either of this, so perhaps it's not necessary?

    Read the article

  • What should filenames and URLs of images contain for SEO benefit?

    - by Baumr
    We know that good site architecture usually looks like this: example-company.com/ example-company.com/about/ example-company.com/contact/ example-company.com/products/ example-company.com/products/category/ example-company.com/products/category/productname/ Now, when it comes to Google Image search, it is clear that the img alt tag, filename/URL, and surrounding text (captions, headings, paragraphs) have an effect on ranking. I want to ask about the filename of the images that we should use (e.g. product-photo.jpg). ...but first about the URL: Often web developers stick all images in a single folder in the root: example-company.com/img/ — and I have stopped doing that. (I don't want to get into it, but basically, it seems more semantic for images which make up part of the content at each sub-directory) However, when all images appear in a folder, I feel that their filename needs to reflect what they are a bit more than usual, for example: example-company.com/img/example-company-productname-category.jpg It's a longer filename than just product.png, but as long as it's relevant, I see no problem with regards to SEO (unless you're keyword stuffing), and it could even help rank for keywords: "example company" "productname" "category" So no questions there. But what about when we have places images in the site architecture we outlined at the beginning? In other words, what if image URL paths look like this: example-company.com/products/category/productname/productname.jpg My question is, should the URL be kept short like above and only have the "productname" (and some descriptive keywords) as part of it's filename? Or, should it also include the "example-company" and "category"? Like so: example-company.com/products/category/productname/example-company-category-productname.jpg That seems much longer, and redundant when we look at the URL, but here are a few considerations. Images are often downloaded onto computers, and, to the average user, they lose their original URL and thus — it isn't clear where they came from. Also, some social networks, forums, and other platforms leave the filename intact when uploaded. (Many others rewrite it, for example, Pinterest and Facebook.) Another consideration, will this really help (even if ever so slightly) rank in Google Image Search, or at least inform Google that the product is something specific to the "example-company"? For example, what if this product can only be bought at this store and is the flagship product? In addition to an abundance of internal links to this product page, would having the "example company" name and "category" help it appear in "example company" searches? In other words, is less more?

    Read the article

  • What's the best way to version CSS and JS URLs?

    - by David Eyk
    As per Yahoo's much-ballyhooed Best Practices for Speeding Up Your Site, we serve up static content from a CDN using far-future cache expiration headers. Of course, we need to occasionally update these "static" files, so we currently add an infix version as part of the filename (based on the SHA1 sum of the file contents). Thus: styles.min.css Becomes: styles.min.abcd1234.css However, managing the versioned files can become tedious, and I was wondering if a GET argument notation might be cleaner and better: styles.min.css?v=abcd1234 Which do you use, and why? Are there browser- or proxy/cache-related considerations that I should consider?

    Read the article

  • How can I test for a URLs existeance before redirecting to it?

    - by ckliborn
    I am using Apache's mod_rewrite to redirect mobile users to my mobile site based on their http_user_agent. However not all pages have a mobile equivalent. Also mobile pages end in .html and "full" pages end in .shtml. Here is some pseudo code. Does the user have a certain HTTP_USER_AGENT? Is there a mobile page? If so take them there. If not, no redirection is needed. I want to do this with apache.

    Read the article

  • .htaccess rules to rewrite URLs to front end page?

    - by Dizzley
    I am adding a new application to my site at example.com/app. I want views at that URL to always open myapp.php. E.g. example.com/app -> example.com/app/myapp.php and example.com/app/ -> example.com/app/myapp.php What's the correct form of rewrite rules in the .htaccess file? I've got: <IfModule mod_rewrite.c> RewriteEngine On RewriteBase /app/ RewriteRule ^myapp\.php$ - [L] RewriteRule ^myapp.php$ - [L] RewriteRule . - [L] </IfModule> ...based on what the Wordpress front-end does. But all I see at example.com/app is a directory of files. :( (I put those rewrites at the top of my .htaccess file). Any ideas? Update What actually worked: RewriteEngine On RewriteBase / RewriteCond %{REQUEST_URI} ^/app(/.*)?$ [NC] RewriteCond %{REQUEST_FILENAME} !-f RewriteRule . /app/myapp.php [L] This is good because: Explicit or implicit calls to app/myapp.php work. example.com/app redirects to app/myapp.php example.com/app/ redirects to app/myapp.php example.com/app/subfunction redirects to app/myapp.php All other calls to example.com/otherstuff are untouched. Item 4 is Wordpress-like Front Controller pattern behaviour. I think that rule RewriteCond %{REQUEST_URI} ^/app.*$ [NC] needs refining as it allows /app-oh-my-goodness etc. through too. Thanks for the answers.

    Read the article

  • Why do some user agents have spam urls in them (and why are they always Opera/Presto User-Agents)?

    - by Erx_VB.NExT.Coder
    If you go to (say) the last 100 entries (visits) to the botsvsbrowsers.com website (exact link, feel free to take a look: http://www.botsvsbrowsers.com/recent/listings/index.html ), you'd notice that almost every User Agent that has the keywords "Opera" and "Presto" inside them, will almost certainly have a web link (URL/Web Address) inside it, and it won't just be a normal web address, but a HTML anchor tag/link to that address. Why is this so, I could not even find a single discussion about it on the internet, nowhere, I tried varying my search terms many times. If the user agent contains the words "Opera" and "Presto" it doesnt mean it will have this weblink, but it means there is about an 80% change that it will. A typical anchor tag/link inside a user agent will look like this: Mozilla/4.0 <a href="http://osis-uk.co.uk/disabled-equipment">disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229 Version/11.60 If you check it out at the website, http://www.botsvsbrowsers.com/recent/listings/index.html you will notice that the back and forward arrows are in there unescaped format. This isn't just true for botsvsbrowsers, but several other user agent listing sites. I'm really confused and feel line I'm in a room full of 10,000 people and am the only one seeing this ghost :). If I'm doing statistical analysis, should I include or exclude this type of user agent from my listing (ie: are these just normal users who've set their user agents to attempt to drive some traffic to their sites as they browser the web), or is there something else going on? The fact that it is so consistent in terms of its format leads me to believe that it is an automated process (the setting or alteration of the user agent) so I cannot decide or understand the process by which this change is made (I know how to change a user agent), but unsure which program or facility is doing this, especially since it is exclusive to Opera (Presto) user agents that are beyond I think an 8 or 9 point something browser version. I've run some statistical tests, parsing entries from all over the place, writing custom programs, to get a better understanding of this. Keep in mind that I see normal URL's in user agents infrequently, they are just text such as +http://www.someSite.com appended to a user agent normally, especially if its a crawler or bot it provided its service URL, this is normal and isnt done with an embedded link (A HREF=) etc, so I'm not talking about "those".

    Read the article

  • Is it considered duplicate content when search results can be retrieved via 2 different urls? [closed]

    - by Floran
    Possible Duplicate: What is duplicate content and how can I avoid being penalized for it on my site? I'm building up friendly url's like so: http://www.1001locaties.nl/trouwlocaties But the same content can also be viewed when using the filter options on the left side, but via a different url: http://www.1001locaties.nl/locaties/?search=1&category=Trouwlocaties Is this considered duplicate content by Google? And if so: what can I do about it?

    Read the article

  • What is the SEO-recommended method for using underscores and dashes in URLs that contain geographic locations?

    - by ElHaix
    In reading through this article: In Subfolder & File Names, Use Dashes, Not Underscores Good: Good: http://www.domain.com/sub-folder/file-name.htm Bad: http://www.domain.com/sub_folder/file_name.htm In my URL's, I may have one or two city names, ending with the province/state: Burnaby_New_Westminister-BC/[some search term]. My URL rules currently are defined such that everything after the dash is the prov/state. Some geographic locations already contain dashes: Notre-Dame-de-Grâce (in QC), which I would convert to ~/Notre_Dame_de_Grace-QC/ I thought of placing the prov/state after another "/", however in some cases the province/state name may not exist, thus ~/Notre_Dame_de_Grace/, so the first term after the domain name contains the geo location {city, city_name-state}. I am now revisiting this, and wondering if this rule set should change, and if so, what is the recommended way of implementing this? -- UPDATE -- After reviewing this video, I see that I should be using the dashes, rather than underscores. However since I still want to have my geo locations in the first URL section, is there anything wrong with using a double-dash separator - ie: /city-name--state/ ?

    Read the article

  • Are there advantages of using hard coded URLs for localization?

    - by nbolton
    On the Synergy website, localization is detected (and can be overridden) but uses the same URL for all languages. Some websites however, like Wikipedia have language specific subdomains. What are the advantages of having either subdomains or subdirectories (i.e. a specific URL) for each language localization? Also, should it automatically redirect the user to the specific subdomain/subdir based on the language that the browser requests? I suspect that there are advantages, which I'm guessing are: When the website appears in search results for non-English languages, the translated page description will be shown (assuming there is a translation provided by the website). When a user shares a page (e.g. through twitter), it will show in a specific language. Perhaps this is a disadvantage though? Am I correct, if so, are there more advantages?

    Read the article

  • Why do some urls in Firefox change when copy / paste?

    - by user203748
    This may not be a Firefox / Ubuntu specific issue. When I Copy / Paste a web link with _ and ( ) it is rendered as %20, %28, and %29. Yet in the Firefox URL these % symbols do not appear. The %20 is particularly weird because the _ itself does render in the URL: https://www.capitalsecuritybank.com/en/PDF/CSB_%20Account_%20Application_%20Form_%20%28Personal%29.pdf Can anyone explain why the URL is different when Copied / Pasted?

    Read the article

  • What is the recommended method of HTTP Redirection from multiple URLs to one URL?

    - by ChrisHDog
    I have a website that has a number of URLs that people use to connect to that site (uses the bindings on the IIS website and everything works as intended): http://www.sample.com http://sample.com https://www.sample.com http://xyz.sample.com http://oldurl.com Now what I want to do is have all of the URLs go to https://www.sample.com - so if you type in "http://xyz.sample.com" or "sample.com" you should go to https://www.sample.com The question is what is the best mechanism to do this? I have one possible solution (which I will put as an answer to this question), but I get the feeling that there might be another, better solution available.

    Read the article

  • Why did mislav-will_paginate start adding so much garbage to urls between rails 2.3.2 and 2.3.5?

    - by user30997
    I've used will_paginate in a number of projects now, but when I moved one of them to Rails 2.3.5, clicking on any of the pagination links (page number, next, prev, etc.,) went from getting nice URLs like this: http://foo.com/user/1/date/2005_01_31/phone/555-6161 to this: http://foo.com/?options[]=user&options[]=date&options[]=2005_01_31&options[]=phone&options[]=555-6161 I have a route that looks like this that is probably the source of the 'options' keyword: map.connect '/browse/*options', :controller=>'assets', :action=>'browse' It's enough of an annoyance that I'm willing to roll a paginator to get around this if there isn't a way to get back to where I was before. Is there a way to get will_paginate to turn array-style routes into sane urls again? Thanks.

    Read the article

  • How to monitor HTTP(S) traffic / URLs in Android?

    - by Pawel Krakowiak
    I'm interested in whether there is a way to monitor HTTP(S) traffic on an Android phone? What I would like to do is to be able to retrieve all URLs that have been accessed on the phone's browser. I thought that there would be a browser intent for that, but have not seen anything - given I'm green, maybe I just did not know where to look? I followed the question and answer here, but it works only for hyperlinks which were clicked by the user, however I need to catch all URLs - including the ones typed in by the user. Basically I need to know about every single URL that was opened in the web browser. Can I register some kind of a handler with the browser? Is something like that feasible at all?

    Read the article

  • How do I force SSL for some URLs and force non-SSL for all others?

    - by brad
    I'd like to ensure that certain URLs on my site are always accessed via HTTPS while all other URLs are accessed via HTTP. I can get either case working in my .htaccess file, however if I enable both, then I get infinite redirects. My .htaccess file is: <IfModule mod_expires.c> # turn off the module for this directory ExpiresActive off </IfModule> Options +FollowSymLinks AddHandler application/x-httpd-php .csv RewriteEngine On RewriteRule ^/?registration(.*)$ /register$1 [R=301,L] # Force SSL for certain URL's RewriteCond %{HTTPS} off RewriteCond %{REQUEST_URI} (login|register|account) RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L] # Force non-SSL for certain URL's RewriteCond %{HTTPS} on RewriteCond %{REQUEST_URI} !(login|register|account) RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L] # Force files ending in X to use same protocol as initial request RewriteRule \.(gif|jpg|jpeg|jpe|png|ico|css|js)$ - [S=1] # Use index.php as the controller RewriteCond %{REQUEST_URI} !\.(exe|css|js|jpe?g|gif|png|pdf|doc|txt|rtf|xls|swf|htc|ico)$ [NC] RewriteCond %{REQUEST_URI} !^(/js.*)$ RewriteRule ^(.*)$ index.php [NC,L]

    Read the article

  • How to match the last url in a line containing multiple urls, using regular expressions?

    - by Mert Nuhoglu
    I want to write a regex that matches a url that ends with ".mp4" given that there are multiple urls in a line. For example, for the following line: "http://www.link.org/1610.jpg","Debt","http://www.archive.org/610_.mp4","66196517" Using the following pattern matches from the first http until mp4. (http:\/\/[^"].*?\.mp4)[",].*? How can I make it match only the last url only? Note that, the lines may contain any number of urls and anything in between. But only the last url contains .mp4 ending.

    Read the article

  • Validating Internationalized URLs - Is this going to be a problem?

    - by VirtuosiMedia
    After reading about the new Arabic URLs, and with more languages to come, how should URL validation be done for internationalized applications? Does the validation change at all and will existing solutions break? Is regex still a good approach? If so, what would that regex look like? If not, what's a good strategy? What are some good resources to read more on the topic? I ask this because it has the potential to cause a good many localized applications to have to be rewritten if they have to validate URLs at any point.

    Read the article

  • django-haystack urlpatterns include('haystack.urls') where does it lead to?

    - by Eugene
    I've recently begun to learn/install django/haystack/solr. Following the tutorial given in haystack site, I have urlpatterns = pattern('', r'^search/', include('haystack.urls')) I found haystack installed in /usr/local/lib/python2.6/dist-packages/haystack and located urls.py there. It has urlpatterns=patterns('haystack.views', url(r'^$', SearchView(), name='haystack_search'),) I thought the second argument of url() should be callable object. I looked at the views.py and SearchView is a class. What is going on here? What's get called eventually?

    Read the article

  • How do I extract info from a block of URLs in php?

    - by Jack
    I have a list of urls, which can come in any format. One per line, separated by commas, have random text in between them, etc. the URLs are all from 2 different sites, and have a similar structure For this example, lets say it looks like this Random Text - http://www.domain2.com/variable-value Random Text 2 - http://www.domain1.com/variable-value, http://www.domain1.com/variable-value, http://www.domain1.com/variable-value http://www.domain1.com/variable-value http://www.domain2.com/variable-value http://www.domain1.com/variable-value http://www.domain2.com/variable-value http://www.domain1.com/variable-value I need to extract 2 pieces of information. Check to see if its domain1 or domain2 and the value that follows "variable-" So it should create a multi-dimensional array, which would have 2 items: domain + value. Whats the best way of doing that?

    Read the article

  • .htacces to create friendly URLs. Help needed....

    - by Jonathan
    Hi, I'm having a hard time with .htacces. I want to create friendly URLs for a site I'm working on... Basically I want to convert this: http://website.com/index.php?ctrl=pelicula&id=0221889 http://website.com/index.php?ctrl=pelicula&id=0160399&tab=posters Into this: http://website.com/pelicula/0221889/ http://website.com/pelicula/0221889/posters/ In case I need it later I would also want to know how to add the article title to the end of the URL like this (I'm using PHP): http://website.com/pelicula/0221889/the-article-name/ http://website.com/pelicula/0221889/the-article-name/posters/ Note: Stackoverflow method is also good for me, for example the url of this question is: http://stackoverflow.com/questions/3033407/htacces-to-create-friendly-urls-help-needed But you can put anything after the id and it will also work. like this: http://stackoverflow.com/questions/3033407/just-anything-i-want I have used some automatic web tools for creating the .htacces file, but its not working correctly. So I ask for your help. I will also be glad if you can recommend .htacces best practices and recommendations.. Thanks!

    Read the article

  • Memcached server: Is it a good practice to point two server urls to the same server?

    - by Niro
    I have a system where there are connections to a memcache server from several different files and servers. I would like to stay with one server but keep the option of increasing the number of memcache servers (for periods of of high traffic). My idea is to tell memcache there are two servers, while the two urls will point (by DNS) to a single server. In the future if I want I can add a server and change DNS without changing the code in many places. Is this a good practice? Is there a performance cost to the fact that there are two server connections but they both point to the same server? Any other idea how to achive instant expeandability of memcache capacity without need to change the code and deploy ?

    Read the article

< Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >