Search Results

Search found 3028 results on 122 pages for 'urls'.

Page 2/122 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Problem decoupling urls.py while following a Django tutorial

    - by Nitin Garg
    http://docs.djangoproject.com/en/dev/intro/tutorial03/ I was at the step "Decoupling the URLconfs", where the tutorial illustrates how to decouple urls.py. On doing exactly what it says, I get the following error: error at /polls/1/ nothing to repeat Request Method: GET Request URL: http://localhost:8000/polls/1/ Exception Type: error Exception Value: nothing to repeat Exception Location: C:\jython2.5.1\Lib\re.py in _compile, line 241 Python Executable: C:\jython2.5.1\jython.bat Python Version: 2.5.1 Python Path: ['E:\\Programming\\Project\\django_app\\mysite', 'C:\\jython2.5.1\\Lib\\site-packages\\setuptools-0.6c11-py2.5.egg', 'C:\\jython2.5.1\\Lib', '__classpath__', '__pyclasspath__/', 'C:\\jython2.5.1\\Lib\\site-packages'] Server time: Mon, 12 Apr 2010 12:02:56 +0530
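
    For context, here is a sketch of what that decoupling step boils down to, using the mysite/polls names from the tutorial and the Django 1.x-era patterns() syntax the tutorial used at the time (my reconstruction, not the poster's actual files):

    ```python
    # mysite/urls.py -- the project URLconf only mounts the app's URLs
    from django.conf.urls.defaults import *
    from django.contrib import admin

    admin.autodiscover()

    urlpatterns = patterns('',
        (r'^polls/', include('polls.urls')),
        (r'^admin/', include(admin.site.urls)),
    )

    # polls/urls.py -- the app carries its own patterns, without the polls/ prefix
    from django.conf.urls.defaults import *

    urlpatterns = patterns('polls.views',
        (r'^$', 'index'),
        (r'^(?P<poll_id>\d+)/$', 'detail'),
        (r'^(?P<poll_id>\d+)/results/$', 'results'),
        (r'^(?P<poll_id>\d+)/vote/$', 'vote'),
    )
    ```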

    Read the article

  • Generating Component SEF URLs in Joomla

    - by Jarrod
    I have a custom Joomla component and a router for building my SEF URLs for use within the site, and everything is usually shiny - internally, all of my links look and act fantastic. I recently routed a controller action that sends a list of links through email, and I've noticed that my URLs are coming out... funky - hopefully someone can tell me why. Usually, my router generates an internal link that looks like this: http://localhost/Registry/calendar/265889635/Some-Long-Boring-Event However, when I send an email and prepare the same URL through the same router, I get: http://localhost/Registry/Registry/component/calendar/569555803/Some-Long-Boring-Event Has anybody run into this issue before?

    Read the article

  • Drupal Clean URLs break randomly for arbitrary paths

    - by picardo
    I've done everything right. My server has mod_rewrite enabled, my virtual host has AllowOverride set to All, and I have the .htaccess file in place with the same rewrite rules as everyone else. But I have trouble accessing some pages using their clean URL paths. For 90% of the pages, clean URLs work fine; for the other 10%, they don't. I have checked whether those pages exist -- they do. Checked whether they are accessible using index.php?q=[path] -- and they are. They are only inaccessible through their clean URL paths. Can anyone help me with this mystery?

    Read the article

  • JavaScript: find URLs in a document

    - by user317005
    How do I find URLs (e.g. www.domain.com) within a document and wrap them in anchors: <a href="www.domain.com">www.domain.com</a>? html: Hey dude, check out this link www.google.com and www.yahoo.com! javascript: (function(){var text = document.body.innerHTML;/*do replace regex => text*/})(); output: Hey dude, check out this link <a href="www.google.com">www.google.com</a> and <a href="www.yahoo.com">www.yahoo.com</a>!
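
    The same find-and-wrap idea, sketched in Python just to illustrate one possible regex and replacement (the question itself is after a browser-side JavaScript version operating on innerHTML):

    ```python
    import re

    text = "Hey dude, check out this link www.google.com and www.yahoo.com!"

    # Wrap every bare www.* token in an anchor; the pattern is deliberately
    # simple and only meant to handle inputs like the example above.
    linked = re.sub(r'(www\.[\w-]+(?:\.[\w-]+)+)', r'<a href="\1">\1</a>', text)
    print(linked)
    # Hey dude, check out this link <a href="www.google.com">www.google.com</a>
    # and <a href="www.yahoo.com">www.yahoo.com</a>!
    ```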

    Read the article

  • Django: Why Doesn't the Current URL Match any Patterns in urls.py

    - by austin_sherron
    I've found a few questions here related to my issue, but I haven't found anything that has helped me resolve it. I'm using Python 2.7.5 and Django 1.8.dev20140627143448. I have a view that's interacting with my database to delete objects, and it takes two arguments in addition to a request: def delete_data_item(request, dataclass_id, dataitem_id): form = AddDataItemForm(request.POST) data_set = get_object_or_404(DataClass, pk=dataclass_id) context = {'data_set': data_set, 'form': form} data_item = get_object_or_404(DataItem, pk=dataitem_id) data_item.delete() data_set.save() return HttpResponseRedirect(reverse('detail', args=(dataclass_id,))) The URL in myapp.urls.py looks something like this: url(r'^(?P<dataclass_id>[0-9]+)/(?P<dataitem_id>[0-9]+)/delete_data_item/$', views.delete_data_item, name='delete_data_item') and the portion of my template relevant to the view is: <a href="{% url 'delete_data_item' data_set.id data_item.id %}">DELETE</a> Whenever I click on the DELETE link, Django tells me that the request URL http://127.0.0.1:8000/myapp/5/%7B%%20url%20'delete_data_item'%20data_set.id%20data_item.id%20%%7D doesn't match any of my URL patterns. What am I missing? The URL on which the DELETE links exist is myapp/(<dataclass_id>[0-9]+)/
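
    As a point of reference, here is a standalone sketch (mine, not the poster's code) of what the {% url 'delete_data_item' ... %} tag should expand to for this pattern, assuming the app's URLconf is mounted under the myapp/ prefix seen in the request URLs and a Django 1.8-era install as in the question:

    ```python
    import django
    from django.conf import settings

    # Minimal settings: use this module itself as the root URLconf.
    settings.configure(ROOT_URLCONF=__name__)
    django.setup()

    from django.conf.urls import include, url
    from django.http import HttpResponse

    def delete_data_item(request, dataclass_id, dataitem_id):
        return HttpResponse('deleted')  # stand-in for the real view

    app_patterns = [
        url(r'^(?P<dataclass_id>[0-9]+)/(?P<dataitem_id>[0-9]+)/delete_data_item/$',
            delete_data_item, name='delete_data_item'),
    ]

    urlpatterns = [url(r'^myapp/', include(app_patterns))]

    from django.core.urlresolvers import reverse
    print(reverse('delete_data_item', args=(5, 12)))
    # -> /myapp/5/12/delete_data_item/ -- a literal, percent-encoded {% url %}
    #    tag showing up in the request URL instead suggests the template tag
    #    was never rendered on the server.
    ```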

    Read the article

  • Does Django cache URL regex patterns somehow?

    - by Emre Sevinç
    I'm a Django newbie who needs help: even though I change some URLs in my urls.py, I keep on getting the same error message from Django. Here is the relevant line from my settings.py: ROOT_URLCONF = 'mydjango.urls' Here is my urls.py: from django.conf.urls.defaults import * # Uncomment the next two lines to enable the admin: from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', # Example: # (r'^mydjango/', include('mydjango.foo.urls')), # Uncomment the admin/doc line below and add 'django.contrib.admindocs' # to INSTALLED_APPS to enable admin documentation: #(r'^admin/doc/', include(django.contrib.admindocs.urls)), # (r'^polls/', include('mydjango.polls.urls')), (r'^$', 'mydjango.polls.views.homepage'), (r'^polls/$', 'mydjango.polls.views.index'), (r'^polls/(?P<poll_id>\d+)/$', 'mydjango.polls.views.detail'), (r'^polls/(?P<poll_id>\d+)/results/$', 'mydjango.polls.views.results'), (r'^polls/(?P<poll_id>\d+)/vote/$', 'mydjango.polls.views.vote'), (r'^polls/randomTest1/', 'mydjango.polls.views.randomTest1'), (r'^admin/', include(admin.site.urls)), ) So I expect that whenever I visit http://mydjango.yafz.org/polls/randomTest1/ the mydjango.polls.views.randomTest1 function should run, because in my polls/views.py I have the relevant function: def randomTest1(request): # mainText = request.POST['mainText'] return HttpResponse("Default random test") However, I keep on getting the following error message: Page not found (404) Request Method: GET Request URL: http://mydjango.yafz.org/polls/randomTest1 Using the URLconf defined in mydjango.urls, Django tried these URL patterns, in this order: 1. ^$ 2. ^polls/$ 3. ^polls/(?P<poll_id>\d+)/$ 4. ^polls/(?P<poll_id>\d+)/results/$ 5. ^polls/(?P<poll_id>\d+)/vote/$ 6. ^admin/ 7. ^polls/randomTest/$ The current URL, polls/randomTest1, didn't match any of these. I'm surprised, because again and again I check urls.py and there is no ^polls/randomTest/$ in it, but there is ^polls/randomTest1/. It seems like Django is somehow storing the previous contents of urls.py and I just don't know how to make my latest changes effective. Any ideas? Why do I keep on seeing some old version of the regexes when I try to load that page, even though I changed my urls.py?
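
    One quick way to see which copy of urls.py the running process actually loaded, and which regexes it contains, is something along these lines from a `python manage.py shell` (a sketch, written against the Django 1.x attribute names the post is using):

    ```python
    import importlib
    from django.conf import settings

    urlconf = importlib.import_module(settings.ROOT_URLCONF)  # 'mydjango.urls' here
    print(urlconf.__file__)             # is this the file you have been editing?

    for entry in urlconf.urlpatterns:   # works for plain patterns and include()s
        print(entry.regex.pattern)      # the regexes Django actually sees
    ```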

    Read the article

  • Adding slugs to the URLs afterwards

    - by altuure
    We have had a website for the last 5 months and we did not use slugs for the bottom-level elements, so URLs looked like /apps/webmasters/badges/1100. Would it make sense to add the name to the URL at this point and redirect to the new ones? I am interested in building more search terms and increasing page rank. /apps/webmasters/badges/1100 - redirected and served at /apps/webmasters/badges/1100-supporter Or should I keep the old URLs as they are and create new URLs with slugs? I would also appreciate some advice on shared URLs on Facebook or Twitter in those cases. Thanks in advance...
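
    Purely as an illustration of the redirect idea (the post doesn't say what the site is built with), here is a Django-style view that 301-redirects the old ID-only URL to its slugged form, so /apps/webmasters/badges/1100 keeps resolving after the change; Badge and render_badge are hypothetical names:

    ```python
    from django.http import HttpResponsePermanentRedirect
    from django.shortcuts import get_object_or_404

    def badge_detail(request, badge_id, slug=None):
        # Badge is a hypothetical model with a slug field; the URL pattern is
        # assumed to capture the numeric id and, optionally, the slug.
        badge = get_object_or_404(Badge, pk=badge_id)
        canonical = '/apps/webmasters/badges/%s-%s' % (badge.pk, badge.slug)
        if slug != badge.slug:
            # Old bookmarks and already-indexed URLs get a permanent (301)
            # redirect to the new slugged address instead of a 404.
            return HttpResponsePermanentRedirect(canonical)
        return render_badge(badge)  # hypothetical rendering helper
    ```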

    Read the article

  • Do keyword-based filenames and URLs really matter?

    - by Justin Scott
    We've developed a dynamic web application which uses URLs such as product.cfm?id=42 but our marketing team says we should use search friendly URLs and put our keywords into the URLs (so it would be product-name.cfm instead). Our developers tell us this will cost more money and take additional time. Is it worth the effort? How important is this to the search engines and will it impact our rankings?

    Read the article

  • Are shorter URLs better for SEO?

    - by articlestack
    Many people shorten their URLs, but as I understand it, this creates the overhead of an extra redirect, others cannot guess the target article from the URL, and it should be less friendly for "inurl:..." type searches. Should I shorten the URLs of my sites? Is there any advantage to short URLs besides the fact that they take fewer characters in anchor tags on the page (good for site loading)?

    Read the article

  • Long URLs (Bitly and TinyURL)

    - by Sixfoot Studio
    I'm sitting with a problem where I need to pass more than 2000 characters from my Flash application to an HTML page which reads the information and displays the correct options made in the Flash app the person came from. All's good, but at the final stage, when the user needs to post their choices to a form, the characters cannot be sent because the string is too long. Is there a way to use a service such as Bitly or TinyURL to send these long strings and have them "deconstructed" on the other end when the form is sent? Otherwise, is there another solution to this problem? Many thanks!

    Read the article

  • Present a static page URL as a different URL which is SEO friendly

    - by Gaurav Sharma
    Hi, I have developed a site which has some static pages, like explore, home, and feedback. The links for these go as follows: website.com/views/explore.php website.com/index.php website.com/views/feedback.php I want to write a different SEO URL for each of the URLs mentioned above. Is it possible? i.e. for example, website.com/views/explore.php should be converted/visible as website.com/explore website.com/views/feedback.php should be converted/visible as website.com/give/feedback and so on

    Read the article

  • Create Shortened goo.gl URLs in Your Favorite Browser

    - by Asian Angel
    Now that the new goo.gl URL shortening service has been active for a bit, you may be wanting to add it to your favorite non-Internet Explorer/Firefox browser. See how easy it is to enjoy that goo.gl URL shortening goodness with the “goo.gl with prompt for copying Bookmarklet”. goo.gl with Prompt for Copying Bookmarklet In Action To add the bookmarklet to your browser, simply drag it to your “Bookmarks Toolbar” and get ready to start creating those shortened URLs. For our example we chose one of the wonderful malware removal articles here at the site. All that you will need to do is click on the bookmarklet to create your shortened URL. The wonderful thing about this bookmarklet is the small pop-up window that provides an easy way for you to copy the shortened URL. You can see the “Context Menu” for the small window here…definitely nice. Once you have copied your new URL you can close the window by clicking on “OK” or pressing “Enter”. Now you are ready to use your new shortened goo.gl URL wherever you like or need to. Conclusion If you have been wanting to add goo.gl URL shortening power to your favorite browser then this is the perfect bookmarklet to have. Links Add the goo.gl with Prompt for Copying Bookmarklet to Your Favorite Browser

    Read the article

  • Google is re-indexing pages after redirecting URLs from HTTP to HTTPS incorrectly

    - by SLIM
    I upgraded my site so that all pages have gone from using HTTP to HTTPS. I didn't consider that Google treats HTTPS pages differently than HTTP. I recreated my sitemap so that all links now reflect the new HTTPS URLs and let it be for a few days. (Whoops!) Google is now re-indexing all the HTTPS pages. I have about 19k pages on the site, and Google has already indexed about 8k of the new HTTPS pages. The problem is that Google sees all of these as brand new pages when many of them have a long HTTP history. Of course most of you will recognize the problem: I didn't set up a 301 from the old HTTP to the new HTTPS URLs. Is it too late to do this? Should I switch my sitemap back to HTTP URLs and then 301 redirect to the new HTTPS URLs? Or should I leave the sitemap as is and set up 301 redirects anyway... I'm not even sure if Google is trying to reach the HTTP site anymore. Currently the site is doing 303 redirects (from HTTP to HTTPS), although I haven't figured out why yet.
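
    A quick way to confirm what the old HTTP URLs actually return before deciding on the sitemap (a sketch; example.com stands in for the real domain, which isn't given):

    ```python
    import requests

    # Fetch an old HTTP URL without following redirects, so the raw status
    # and Location header the server sends are visible.
    resp = requests.get('http://example.com/some-old-page/', allow_redirects=False)
    print(resp.status_code, resp.headers.get('Location'))
    # A permanent move should answer 301 with the HTTPS URL in Location;
    # the question reports 303 instead, which is not the permanent-move
    # signal that a 301 gives.
    ```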

    Read the article

  • Access Control Service: Programmatically Accessing Identity Provider Information and Redirect URLs

    - by Your DisplayName here!
    In my last post I showed you that different redirect URLs trigger different response behaviors in ACS. Where did I actually get these URLs from? The answer is simple – I asked ACS ;) ACS publishes a JSON encoded feed that contains information about all registered identity providers, their display names, logos and URLs. With that information you can easily write a discovery client which, at the very heart, does this: public void GetAsync(string protocol) {     var url = string.Format( "https://{0}.{1}/v2/metadata/IdentityProviders.js?protocol={2}&realm={3}&version=1.0",         AcsNamespace,         "accesscontrol.windows.net",         protocol,         Realm);     _client.DownloadStringAsync(new Uri(url)); } The protocol can be one of these two values: wsfederation or javascriptnotify. Based on that value, the returned JSON will contain the URLs for either the redirect or notify method. Now with the help of some JSON serializer you can turn that information into CLR objects and display them in some sort of selection dialog. The next post will have a demo and source code.
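
    A rough Python equivalent of that discovery call, fetching and decoding the same JSON feed (a sketch: the endpoint format comes from the post, while the namespace and realm values are placeholders):

    ```python
    import json
    import urllib.parse
    import urllib.request

    namespace = 'yourNamespace'              # placeholder ACS namespace
    realm = 'http://localhost/yourApp/'      # placeholder relying-party realm

    url = ('https://{0}.accesscontrol.windows.net/v2/metadata/IdentityProviders.js'
           '?protocol=wsfederation&realm={1}&version=1.0').format(
               namespace, urllib.parse.quote(realm, safe=''))

    with urllib.request.urlopen(url) as response:
        identity_providers = json.loads(response.read().decode('utf-8'))

    for idp in identity_providers:           # each entry describes one identity provider
        print(idp)
    ```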

    Read the article

  • Should my URLs be lowercase?

    - by Rowan Freeman
    According to this blog ("Understanding SEO Friendly URL Syntax Practices") I should change http://example.com/Hello-Dolly to http://example.com/hello-dolly The reasons given are: URLs, in general, are case-sensitive; it will simplify any case-sensitive SEO and analytics reports. According to this GIF that I found on Wikipedia's article on URL Normalization, I should convert my URLs from any uppercase to all lowercase. However, I use ASP.NET MVC4 and by default my URLs are structured like this (CamelCase): http://www.domain.com/Controller/Action/Parameter http://www.greatsite.com/Categories/List/Bicycles I've skimmed through RFC 1738 but I didn't see any definitive answers to this. Should I go out of my way to force the framework to change everything to lowercase? Why did Microsoft choose to design their framework like this if everybody is telling me to use lowercase?

    Read the article

  • Robots.txt never downloaded but some blocked URLs in GWT

    - by Zistoloen
    There is something I don't understand in Google Webmaster Tools (GWT) for my WordPress site. In the "Blocked URLs" menu, it says that my robots.txt has never been downloaded, but there are some blocked URLs. That's kind of weird and not logical. Am I missing something? User-agent : * Disallow: /*? Disallow: /wp-login.php Disallow: /wp-admin Disallow: /wp-includes Disallow: /wp-content Allow: /wp-content/uploads Disallow: */trackback Disallow: /*/feed Disallow: /*/comments Disallow: /cgi-bin Disallow: /*.php$ Disallow: /*.inc$ Disallow: /*.gz$ Disallow: /*.cgi$ Disallow: /author/* I'm afraid my robots.txt doesn't block several URLs I want to block.

    Read the article

  • How important are SEO-friendly URLs? [closed]

    - by nute
    Possible Duplicate: Is a URL with a query string better or worse for SEO than one without one? Currently, my URLs look something like http://mydomain.ext/question/5 where question is the Controller and 5 is the ID of the object or article retrieved. In theory, I could spend some development time and some server resources to have URLs that contain more information about the page loaded. However, seeing how websites like YouTube and many others just keep simple URLs with just an ID, I am asking: does it matter? Is it worth it?

    Read the article

  • Is it safe to block redirected (but still linked) URLs with robots.txt?

    - by Edgar Quintero
    I have a website that has all URLs optimized and 301 redirected from nasty URLs to clean ones. However, everywhere throughout the site the unclean URLs are linked in menus, content, products, etc. Google currently has all the clean URLs indexed, along with a few unclean URLs too. So the old URLs are still linked everywhere throughout the site (ideally this wouldn't be the case, but this is how it is at the moment). I would like to block the unclean URLs with robots.txt. The question: if I block these unclean URLs with robots.txt, when the entire website is linked with them (but they all redirect to the clean version), will this affect the indexing status at all?

    Read the article

  • Should I use title case in URLs?

    - by Amadiere
    We are currently deciding on a consistent naming convention across a site with multiple web applications. Historically, I've been an advocate of the 'lowercase all the letters!' approach when creating URLs: http://example.com/mysystem/account/view/1551 However, within the last year or two, specifically since I began using ASP.NET MVC and had more dealings with REST-based URLs, I've become a fan of capitalizing the first letter of each section/word within the URL, as it makes it easier to read (imho). http://example.com/MySystem/Account/View/1551 We're not in a situation where people need to read or be able to understand the URLs, so that's not a driver per se. The main thing we are after is a consistent approach that is rational and makes sense. Are there any standards that declare it good to do one way or another, or issues that we may run into on (at least realistically modern) setups that would make one preferable over the other? What is the general consensus for this debate currently?

    Read the article

  • Weird URLs being accessed by Googlebot

    - by Avishai
    Lately I've been seeing all sorts of strange URLs show up as errors in my Webmaster Tools account, but they're URLs that don't actually exist on my site, nor are they linked from the pages that Google claims they're linked from. URL Response Code Detected yR3kna/5RfA4+ndtn/X4zcevudMlXbqbIrnPbH9irw= 404 9/16/12 OK4iaOVdr6Ocjmz+u1kuR5Q486mhDo/e45nwjl2+y8= 404 9/9/12 pxGz/oHEA0BS8U3VFBzJcZnnIHMsFXb3/rIxMxh2ws= 404 9/16/12 Af8tbvQ0HniIpf53I8Txz1hM1/JxxrFQxgqPuErWII= 404 9/9/12 7Bk7c0LDmm4PHyTjml017EGwNNPCn/p/0xMSWWPDic= 404 9/16/12 umCwnDvTE8ybpUB19MIb+VRj5xRJncyYGGfAQ2Mxn0= 404 9/1/12 # etc... Do you know how to make these stop? It's not at all clear to me why Googlebot would be going to these URLs in the first place.

    Read the article

  • Django admin fails when using includes in urlpatterns

    - by zenWeasel
    I am trying to refactor my application a little bit to keep it from getting too unwieldy, so I started to move some of the urlpatterns out to sub-files as the documentation proposes. Besides the fact that it just doesn't seem to be working (the items are not being rerouted), when I go to the admin it says that 'urlpatterns has not been defined'. The urls.py I have at the root of my application is: if settings.ENABLE_SSL: urlpatterns = patterns('', (r'^checkout/orderform/onepage/(\w*)/$','checkout.views.one_page_orderform',{'SSL':True},'commerce.checkout.views.single_product_orderform'), ) else: urlpatterns = patterns('', (r'^checkout/orderform/onepage/(\w*)/$','commerce.checkout.views.single_product_orderform'), ) urlpatterns+= patterns('', (r'^$', 'alchemysites.views.route_to_home'), (r'^%s/' % settings.DAJAXICE_MEDIA_PREFIX, include('dajaxice.urls')), (r'^/checkout/', include('commerce.urls')), (r'^/offers',include('commerce.urls')), (r'^/order/',include('commerce.urls')), (r'^admin/', include(admin.site.urls)), (r'^accounts/login/$', login), (r'^accounts/logout/$', logout), (r'^(?P<path>.*)/$','alchemysites.views.get_path'), (r'^static/(?P<path>.*)$', 'django.views.static.serve', {'document_root':settings.MEDIA_ROOT}), The URLs I have moved out so far are checkout/offers/order, which are all sub-apps of 'commerce'. So, to be clear, the files in question are /urls.py (included here) and /commerce/urls.py, where the urls.py I want to include is: order_info = { 'queryset': Order.objects.all(), } urlpatterns+= patterns('', (r'^offers/$','offers.views.start_offers'), (r'^offers/([a-zA-Z0-9-]*)/order/(\d*)/add/([a-zA-Z0-9-]*)/(\w*)/next/([a-zA-Z0-9-)/$','offers.views.show_offer'), (r'^reports/orders/$', list_detail.object_list,order_info), ) and the offers application lies under commerce. And so the additional problem is that the admin will not work at all, so I'm thinking I killed it somewhere with my includes. Things I have checked for: Is the urlpatterns variable accidentally getting reset somewhere (i.e. urlpatterns = patterns instead of urlpatterns += patterns)? Are the patterns in commerce.urls valid (yes, when moved back to root they work)? So from there I am stumped. I can move everything back into the root, but I was trying to get a little decoupled, not just for theoretical reasons but for some short-term ones. Lastly, if I enter www.domainname/checkout/orderform/onepage/xxxjsd I get the correct page. However, entering www.domainname/checkout/ gets handled by alchemysites.views.get_path. If this isn't enough to find an answer (because this is pretty darn specific), then is there a good way to troubleshoot urls.py? It seems to just be trial and error. It seems there should be some sort of parser that will tell you what your urlpatterns will do.
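
    On that last point, the kind of "pattern dump" the post is wishing for can be approximated from a `python manage.py shell` by walking the resolver tree (a sketch against the Django 1.x resolver API the post is using):

    ```python
    from django.core.urlresolvers import get_resolver

    def dump_patterns(resolver, prefix=''):
        """Print every URL regex Django has actually loaded, include()s expanded."""
        for entry in resolver.url_patterns:
            if hasattr(entry, 'url_patterns'):   # an include() / nested resolver
                dump_patterns(entry, prefix + entry.regex.pattern)
            else:                                # a plain pattern
                print(prefix + entry.regex.pattern)

    dump_patterns(get_resolver(None))            # None means "use ROOT_URLCONF"
    ```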

    Read the article

  • Mod_Rewrite - Remove Query String From URLs

    There are plenty of tutorials and articles showing how to use the Apache mod_rewrite module to tidy up messy URLs and create nice, human-readable, search-engine-friendly URLs. No problem there, and if that is what you are looking for, you won't find it here.

    Read the article

  • Moving from a static site to a CMS with new URLs and meta-data for pages

    - by Chris J
    Hi, I am in the process of rebuilding a site from static pages to a CMS which will be using mod_rewrite to generate new page URLs. In this process our marketing people and I have decided to tidy up the descriptions, keywords and titles. E.g. a page whose URL is currently "website-name/about_us.html" and has a title of "website-name - something not quite page specific" will change to "website-name/about-us/" and title: "about us - website-name", and may have a few keywords and the description changed. Our goal with updating the meta data is to improve our page rankings and try to keep in line with some best practices for SEO. Though our current page rankings are quite good in many respects, there is room for improvement. All of the pages will also have content changes (like rearranging heading tags, a new menu on all pages, new content in the footer, extra pieces of dynamic content relating to other pages). In this new site process I plan to use 301 redirects for all the old URLs, pointing to the new URLs. My question is: what can I expect to happen to the page rankings in Google, in the short term and long term? Will this be like kicking off a new site which will have to build up trust over time, or will the original page rankings have an effect?

    Read the article

  • nginx URL rewrites for PHP URLs

    - by Tronic
    I have to permanently redirect some old URLs in nginx. The old URLs are old-style PHP URLs including a parameter for loading content. They look like this: http://www.foo.com/index.php?site=foo http://www.foo.com/index.php?site=bar I want to redirect them to other URLs like: http://www.foo.com/news http://www.foo.com/gallery Any advice on how I can achieve this? My attempts have failed. Thanks in advance!

    Read the article
