Search Results

Search found 6362 results on 255 pages for 'django urls'.


  • Google Webmaster Tools showing 6 pages submitted, 0 indexed, yet I can see them all when I search in Google?

    - by sam
    I have a small 'brochure'-type site with 6 pages. I can see all of the pages showing up in Google when I search for my site, but in Webmaster Tools, under the Sitemaps section, it says 6 submitted (the blue bar of the graph) while the indexed pages (the red bar) show 0 indexed pages, even though they seem to be indexed. Any idea why this is? I don't really think it's that important, as the pages are still indexed, but it just seems odd.

    UPDATE 9/3/12: Having just looked in Google Webmaster Tools, it shows 11 pages indexed under the Health > Index Status tab, but under the Optimization > Sitemaps tab it shows 6 URLs submitted and only 1 indexed. Please see the screenshots (index status, sitemap status; images omitted).

    Read the article

  • Static HTML to WordPress Migration SEO Implications?

    - by Kayle
    Recently, I migrated a client's site to a new server and a new home within WordPress so they could more easily edit their website and start a blog section. The static site was 10 years old and had been showing up at position #3 for its primary keyword, consistently, according to my client, and has dropped to rank #6-8 following the migration. At launch, we made sure the URLs were identical (save the removal of ".htm", which we used 301 redirects to compensate for), and we generated a new XML sitemap and pinged Google with the new site. We keep a 404 log to make sure we're not losing any incoming links. We also have Google Webmaster Tools on this site and have zero errors/suggestions; everything seems OK. I was told by numerous sources that Google would not penalize us for the use of 301s, but it's the only thing I can think of right now that is different about the site, other than the platform. Any ideas about what we could be getting docked for?
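
    For reference, a minimal sketch of the kind of ".htm"-removal rule described above (assuming Apache mod_rewrite, with the WordPress permalink rules following it; the exact target pattern depends on the chosen permalink structure):

        # Sketch: 301 old .htm URLs to their extensionless WordPress equivalents
        RewriteEngine on
        RewriteRule ^(.+)\.htm$ /$1/ [R=301,L]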

    Read the article

  • How can I backup my PPAs?

    - by Scaine
    Related to this question. My concern is that over the past year, most of my more interesting (or used) applications have come from PPAs, and just backing up my sources list won't add the associated Launchpad keys the way that add-apt-repository does. So I'm looking for a way to list all the PPA URLs (like ppa:chromium-daily/stable) so that I can easily script a series of add-apt-repository commands to add them to a new installation gracefully. Short of dumping my bash history, of course. Which might be feasible, depending on how far back that file goes?
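
    A sketch of one way to reconstruct the ppa: identifiers from an existing installation (this assumes the standard /etc/apt/sources.list.d layout that add-apt-repository writes to):

        #!/bin/bash
        # Turn each Launchpad PPA source line back into an add-apt-repository command
        grep -rh '^deb .*ppa\.launchpad\.net' /etc/apt/sources.list.d/ \
          | sed -E 's#^deb https?://ppa\.launchpad\.net/([^/]+)/([^/]+)/ubuntu.*#add-apt-repository ppa:\1/\2#' \
          | sort -u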

    Read the article

  • Best way to redirect users back to the pretty URL who land on the _escaped_fragment_ one?

    - by Ryan
    I am working on an AJAX site and have successfully implemented Google's AJAX crawling recommendation by creating _escaped_fragment_ versions of each page for it to index. Thus each page has 2 URLs: pretty: example.com#!blog; ugly: example.com?_escaped_fragment_=blog. However, I have noticed in my analytics that some users are arriving on the site via the "ugly" URL, and I am looking for a clean way to redirect them to the pretty URL without impacting Google's ability to index the site. I have considered using a 301 redirect in the response headers but fear that Googlebot might try to follow it and end up in an endless loop. I have also considered using a JavaScript redirect that Googlebot wouldn't execute, but fear that Google may interpret this as cloaking and penalize the website. Is there a good, clean, acceptable way to redirect real users away from the ugly URL if for some reason or another they end up arriving at the site that way?
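
    One server-side approach, sketched below for Apache: 301 the ugly form back to the hash-bang URL (the NE flag keeps the # from being percent-escaped). Whether Googlebot following this produces the feared loop is exactly the open question here, so treat it as untested:

        # Sketch: send example.com?_escaped_fragment_=blog back to example.com/#!blog
        RewriteEngine on
        RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.+)$
        RewriteRule ^$ /#!%1? [R=301,NE,L]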

    Read the article

  • How to track registrations that come from Google AdWords ads in Google Analytics?

    - by automatix
    I created a campaign in Google AdWords with some ads in it and gave them URLs like mydomain.tld/registration/?utm_campaign=mycampaing&ad=x, mydomain.tld/registration/?utm_campaign=mycampaing&ad=y, and mydomain.tld/registration/?utm_campaign=mycampaing&ad=z. All ads lead to the registration page. A registration is a visit to the page mydomain.tld/registration-complited/?user={ID}, so I can track the registrations in Google Analytics: I just go to Behavior -> Site Content -> All Pages and filter the pages to registration-complited. But how can I see how many (and which) users registered after they came from an ad of a campaign, e.g. utm_campaign? And how can I also track this for a single ad of the campaign, e.g. x?

    Read the article

  • Website falsely blocked because of spam. Does anyone know how we should proceed?

    - by Thomas Crepain
    I'm responsible for ICT at FOS Open Scouting, a Belgian scouting organisation. Our website was hacked a few years back and blocked by Facebook as a result. After we regained control over the site, Facebook continued to block our domain, and this is causing us a number of problems. We have tried many times in the past year to contact Facebook using their 'I am blocked from adding content' form (https://www.facebook.com/help/contact.php?show_form=block_appeal), to no avail. The blocked URLs are: http://www.fos.be and http://www.fosopenscouting.be. Does anyone know how we should/could proceed?

    Read the article

  • What is the best way to have the same website in multiple domains?

    - by Daniel Magliola
    I would like to have the same website selling a specific product on multiple domains, to take advantage of keywords matching the domain name for several different searches. However, I understand that having the same content on multiple sites will unleash the wrath of Google. If I have a redirect from all domains minus one to that last one, do I still get any bonus for the "magic exact domain match jackpot"? The same question applies to canonical URLs... What's the best way to approach this? Thanks!
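
    A minimal sketch of the redirect-everything-to-one-domain variant discussed above (assuming Apache; the domain names are placeholders):

        # Sketch: 301 every request on a satellite domain to the main domain
        RewriteEngine on
        RewriteCond %{HTTP_HOST} !^www\.main-domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.main-domain.com/$1 [R=301,L]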

    Read the article

  • Getting the keyword as a parameter from Adwords using ValueTrack

    - by Stephen Ostermiller
    I set up an AdWords campaign for a website following the instructions for Google AdWords ValueTrack. One of the things it is supposed to be able to do is pass the keyword as a URL parameter, using the code {keyword} in the URL. I set it up for integration with Google Analytics such that the landing URLs would look like: http://example.com/landing.html?utm_source=adwords&utm_medium=cpc&utm_term=%7Bkeyword%7D&utm_content=my_content&utm_campaign=my_page where {keyword} is in the utm_term parameter. However, this keyword substitution isn't happening. Why?
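
    One detail worth checking, given the encoded braces in the URL above (an educated guess, not a confirmed diagnosis): ValueTrack substitutes the literal {keyword} token, so if the braces were entered URL-encoded as %7Bkeyword%7D in the ad's destination URL, no substitution would occur. The destination URL would need the raw braces:

        http://example.com/landing.html?utm_source=adwords&utm_medium=cpc&utm_term={keyword}&utm_content=my_content&utm_campaign=my_page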

    Read the article

  • Redirect before rewrite

    - by Kirk Strobeck
    Had an issue where I need to redirect old URLs, but not disable the mod_rewrite for the page structure:

        redirect 301 /home.html http://www.url.com/

    It needs to live in the Symphony 2.0 .htaccess file:

        ### Symphony 2.0.x ###
        Options +FollowSymlinks -Indexes

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteBase /

            ### DO NOT APPLY RULES WHEN REQUESTING "favicon.ico"
            RewriteCond %{REQUEST_FILENAME} favicon.ico [NC]
            RewriteRule .* - [S=14]

            ### IMAGE RULES
            RewriteRule ^image\/(.+\.(jpg|gif|jpeg|png|bmp))$ extensions/jit_image_manipulation/lib/image.php?param=$1 [L,NC]

            ### CHECK FOR TRAILING SLASH - Will ignore files
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_URI} !/$
            RewriteCond %{REQUEST_URI} !(.*)/$
            RewriteRule ^(.*)$ $1/ [L,R=301]

            ### ADMIN REWRITE
            RewriteRule ^symphony\/?$ index.php?mode=administration&%{QUERY_STRING} [NC,L]

            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^symphony(\/(.*\/?))?$ index.php?symphony-page=$1&mode=administration&%{QUERY_STRING} [NC,L]

            ### FRONTEND REWRITE - Will ignore files and folders
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*\/?)$ index.php?symphony-page=$1&%{QUERY_STRING} [L]
        </IfModule>
        ######
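
    A sketch of one common workaround: express the redirect with mod_rewrite itself, placed ahead of the Symphony rules inside the same block, so the ordering is explicit rather than split between mod_alias and mod_rewrite (which run in separate phases):

        # Sketch: old-URL redirect as a rewrite, before the Symphony rules
        RewriteRule ^home\.html$ http://www.url.com/ [R=301,L]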

    Read the article

  • A lot of 302 redirects

    - by user3651934
    I have a website whose one-month stats show: Unique Visitors 6274, Total Visitors 7260, Pages visited 9520, Hits 88891. What concerns me is the HTTP status code count: 302 Moved temporarily (redirect): 36302. How come 40% of hits are being redirected? If this is not normal, what could be the possible reasons?

    Adding more information: here is the code I'm using in my .htaccess file for clean URLs. Is this causing as many as 36302 redirect hits?

        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^([^\.]+)$ $1.php [L]

        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^(.+[^/])$ $1/ [R]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?page=$1 [L,QSA]

        RewriteRule ^(.*)/$ index.php?page=$1 [L,QSA]
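
    For what it's worth, a sketch of one likely contributor: a bare [R] flag issues a 302 by default, so every directory request without a trailing slash generates one. Making the status explicit (and the rule final) would look like:

        # Sketch: bare [R] defaults to 302; R=301 makes the redirect permanent
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^(.+[^/])$ $1/ [R=301,L]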

    Read the article

  • Moving one site in Webmaster Tools to more than one site [closed]

    - by Towhid
    Possible Duplicate: How should I structure my urls for both SEO and localization? I have a question-and-answer site about immigration. Now I have divided it into 2 sites: mysite.co.uk, about immigration to the UK, and mysite.com, with subdomains for every country, like australia.mysite.com, sweden.mysite.com, ... I have moved all the content from my first site onto the .co.uk site and the .com site and its subdomains to fill them. I know that Google will detect my 2 new sites as duplicates of the first one, which is very bad for SEO, and I don't think Google Webmaster Tools has a tool for this. So please guide me on how to fix this problem.

    Read the article

  • Rewrite rule to show as directory using .htaccess

    - by chanchal1987
    I want to implement a rewrite rule in my .htaccess file to show a specific URL as a directory on my server. See the code I wrote below:

        RewriteRule ^(.*)/$ ?page=$1 [NC]

    This rewrites URLs like www.mysite.com/abc/ to www.mysite.com/index.php?page=abc. But if I request www.mysite.com/abc, it throws a 404 error. How can I write a rewrite rule which will match both www.mysite.com/abc and www.mysite.com/abc/? Edit: My current .htaccess file (after the 3rd revision of Litso's answer) is like below:

        ##
        ErrorDocument 401 /index.php?error=401
        ErrorDocument 400 /index.php?error=400
        ErrorDocument 403 /index.php?error=403
        ErrorDocument 500 /index.php?error=500
        ErrorDocument 404 /index.php?error=404

        DirectoryIndex index.htm index.html index.php

        RewriteEngine on
        RewriteBase /
        Options +FollowSymlinks

        RewriteRule ^(.+)\.html?$ $1.php

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)/$ ?page=$1 [NC,L]

        RewriteCond %{REQUEST_URI} !index.php
        RewriteRule ^(.*)$ ?page=$1 [NC,L]
        ##
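
    A sketch of a single rule that accepts both forms by making the trailing slash optional (same query-string mapping as in the question; the file/directory guards keep real resources reachable):

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.+?)/?$ ?page=$1 [NC,L]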

    Read the article

  • URL-rewriting on Plesk using ISAPI_rewrite3 Lite

    - by Anusha
    I am using a Plesk Windows-based web server (Windows 2008 Server OS with IIS-6) for my e-commerce website. I want to rewrite the URLs for all dynamic pages, so I installed ISAPI_Rewrite 3 Lite on the web server and uploaded an .htaccess file with basic rules as follows:

        RewriteEngine on
        RewriteRule ^contact\.html$ contactus.php? [NC,R]

    I have never worked with ISAPI before, nor with URL rewriting. My question is how I should proceed after installation. Should I upload an .htaccess or httpd.conf file, or, since this software has an ISAPI_Rewrite Manager which provides a place to edit httpd.conf, should I write the rules there? Anyway, I have tried all these steps but unfortunately couldn't find a remedy. Any immediate solution would be appreciated.
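
    A sketch, under the assumption (worth verifying against the Helicon documentation) that the Lite edition only reads the single global httpd.conf edited via the ISAPI_Rewrite Manager and ignores per-site .htaccess files, which would explain why an uploaded .htaccess does nothing:

        # httpd.conf for ISAPI_Rewrite 3 Lite (global config; note the leading slash)
        RewriteEngine on
        # Permanently redirect the old static page to the PHP script
        RewriteRule ^/contact\.html$ /contactus.php [NC,R=301,L]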

    Read the article

  • What are the Consequences of Using Relative Location Headers?

    - by Alan Storm
    According to the spec, Location headers used in a redirect require an absolute URI, including the server name:

        HTTP/1.1 301 Moved Permanently
        ...
        Location: http://example.com/foo/baz/bar

    However, in 2012, most web browsers will recognize a relative path and redirect you to the new location using the original server name:

        HTTP/1.1 301 Moved Permanently
        ...
        Location: /foo/baz/bar

    Are there any negative/surprising consequences to using relative URLs in Location headers? My particular concern is how Google/search engines will interpret this, but if there's anything else I'm not thinking about, I'd love to hear it.

    Read the article

  • Extracting meta tag attributes using wget [migrated]

    - by Amit
    I have a file with some URLs, one per line. I need to extract the "keywords" present in each page's meta tags, i.e. if there is a meta tag for "keywords", I want to get its "content" value. Example: if the web page has the meta tag <meta name="keywords" content="wikipedia,encyclopedia">, then for that URL I want "wikipedia,encyclopedia" to be extracted. One approach is to download the web page using wget and then parse it using some standard HTML parser. I was wondering whether there is a better way to do this without downloading the entire web page.
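
    A rough sketch in shell (urls.txt is a hypothetical input file, one URL per line). Note the page still has to be fetched (wget -qO- merely streams it to the pipe instead of saving it), and this only handles the common double-quoted, name-before-content form, so a real HTML parser remains the safer option:

        #!/bin/bash
        # Extract the keywords meta tag's content value for each URL
        while read -r url; do
            tag=$(wget -qO- "$url" | grep -io '<meta name="keywords" content="[^"]*"' | head -n 1)
            content=${tag#*content=\"}     # strip everything up to content="
            echo "$url: ${content%\"}"     # strip the trailing quote
        done < urls.txt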

    Read the article

  • Ubuntu 12.04 LTS Desktop 64 bit user permissions or apache2 rewrite problem

    - by mtm
    I have installed 12.04 Desktop 64-bit, manually installed LAMP, phpMyAdmin, php5-dev, PEAR, PECL, APC and SSH, created a user to own /var/www/, and transferred 3 sites to /www/. The sites are in subfolders; sites-available is all configured and enabled. One site is pure HTML, the other two are PHP. I enabled curl. phpMyAdmin worked at first, as did the PHP sites, but then they stopped working (they show blank pages) and the sites said "Clean URLs cannot be enabled". The HTML site is still working. Where is the problem, and why did the PHP sites stop working? In all Apache .conf files, AllowOverride is set to All. The PHP sites have .htaccess files. And this configuration worked with Ubuntu 10.04.

    Read the article

  • Best/simplest way to inform search engines of a sitemap's location

    - by Don
    AFAIK, there are 2 ways to make search engines aware of a sitemap's location: (1) include an absolute link to it in robots.txt, or (2) submit it to them directly. The relevant URLs for (2) are http://www.google.com/webmasters/tools/ping?sitemap=SITEMAP_URL and http://www.bing.com/webmaster/ping.aspx?sitemap=SITEMAP_URL, where SITEMAP_URL is the absolute URL of the sitemap. Currently, I do both. Regarding (2), I have a job that runs automatically every day which submits the sitemap to Bing and Google. I don't think there's any reason to do both (1) and (2), but I'm paranoid, so I do. I imagine you could avoid both (1) and (2) if you just made your sitemap accessible at a conventional URL (like robots.txt). What's the simplest and most reliable way to ensure that search engines can find your sitemap?
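
    Both options sketched concretely (example.com/sitemap.xml is a placeholder; the ping endpoints are the ones quoted above, with the sitemap URL percent-encoded):

        # Option 1: a single line anywhere in robots.txt
        #   Sitemap: http://example.com/sitemap.xml

        # Option 2: ping the engines from a daily cron job
        curl -s "http://www.google.com/webmasters/tools/ping?sitemap=http%3A%2F%2Fexample.com%2Fsitemap.xml"
        curl -s "http://www.bing.com/webmaster/ping.aspx?sitemap=http%3A%2F%2Fexample.com%2Fsitemap.xml"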

    Read the article

  • How relevant is PHP today for browser games?

    - by Bitgarden
    I was the lead developer of 2 moderately successful browser games quite a few years back, and plan on working on a new game soon. At the time, I wrote them in pure PHP (no template engine or anything of the sort). I'd like to start working on a new game, but have been out of the web development world for a while. Reading around, I hear a lot of good things about Rails, Django, Node.js, etc., with which I have no experience (although I know my way around Python, JavaScript, and the others quite well). So my question is the following: if I were to go back to my old ways and choose PHP again, would I be making things hard for myself? Would picking something more "trendy" have a real impact on my development? In addition, does anyone have any pointers on specifically developing browser games with these more modern tools?

    Read the article

  • Working with different URL structures

    - by Dane411
    As I'm quite a newbie in this field, I have some doubts, and there are some answers I couldn't find on Google. For example: if I'm not wrong, index.html makes it possible to omit the filename from the URL, so www.example.com/ is equal to www.example.com/index.html. And that works for the following subdirectories too, right? (www.example.com/music/) Is there any other way to achieve this without using an index.html file? (I've read something about converting dynamic URLs to static ones: ./?var1=value1&varN=valueN -> ./value1/valueN) Also, how can I convert www.example.com/music/ to music.example.com/, and why should that be used? Thanks in advance!
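
    Two of these questions, sketched for Apache (this assumes mod_rewrite is enabled and a DNS record points music.example.com at the same host; the rule rewrites internally rather than redirecting):

        # DirectoryIndex answers the first question: any file can play index.html's role
        DirectoryIndex index.php index.html

        # Map music.example.com/anything to /music/anything on this host
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^music\.example\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/music/
        RewriteRule ^(.*)$ /music/$1 [L]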

    Read the article

  • How to show the right country domain in Google Places?

    - by Baumr
    Background: a site has multiple ccTLDs: example.com for the US, example.co.uk for UK users, example.de for Germans, etc. Googling for certain city keywords will return rich snippets with a list of Google Places (screenshot omitted). Problem: when searching on Google Germany, the domain for US users (example.com) appears instead of the corresponding ccTLD (example.de). This is not good user experience, as users would most likely like to book on a site localized for them (e.g. language and currency). Question: what solutions are there? Is it possible to return different ccTLDs in rich snippets for Google searches in Germany/UK? Ideas: would implementing the hreflang annotation resolve this? What about entering multiple corresponding URLs in the structured data markup?
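
    For concreteness, a sketch of the hreflang idea floated above: each version of a page lists all of its alternates in the <head> (URLs are placeholders; whether this influences the Places rich snippets is exactly the open question):

        <link rel="alternate" hreflang="en-US" href="http://example.com/" />
        <link rel="alternate" hreflang="en-GB" href="http://example.co.uk/" />
        <link rel="alternate" hreflang="de" href="http://example.de/" />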

    Read the article

  • Xubuntu: Open file browser via "run command" menu

    - by mbelow
    In older Xubuntu versions, if I entered a path to a directory in the "run command" dialog, a Thunar window was opened showing that directory. I found that to be very handy: if I wanted to open, e.g., /tmp, I just needed to press WindowsKey+R, enter /tmp, and press Enter, and there I was. This was also great for URLs (ftp, http, etc.). Unfortunately, since 12.04, this doesn't work anymore. It seems as if the "run command" dialog is now integrated with the program finder. It still works when I type "thunar ftp://...." or "thunar /tmp", but it's a bit tedious now. Is there a way to restore the old behavior? Note: I'm running a German localization of Xubuntu; I hope I translated "run command" and "program finder" correctly...

    Read the article

  • Open screen and run some projects and applications

    - by trex
    I am a Python web developer. I need to run my 3-4 local Django projects in screen sessions, and I also need to launch some of my applications, like Skype, Chrome, Eclipse, and a text file, daily status.txt, every day. Is there any way to launch all of them by running a single shell script? Here is my attempt so far:

        #!/bin/bash
        # gnome-terminal -e "screen -dmS myapps"

        # (Attach the following command to one of the screens)
        cd /var/opt/project1
        python manage.py runserver 127.0.0.1:8001

        # (Attach another command to one of the screens)
        cd /var/opt/project2
        python manage.py runserver 127.0.0.1:8002

        # (Attach another command to one of the screens)
        cd /var/opt/project3
        python manage.py runserver 127.0.0.1:8003

        # start my applications
        eclipse
        skype
        gedit "/home/myname/Desktop/daily status.txt"

    Can anyone help me write a shell script to do this?
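
    A minimal working sketch (paths and ports taken from the question; screen's -dmS flag starts a named, detached session, the desktop apps are backgrounded so the script returns, and the Chrome binary name varies by install, e.g. google-chrome or chromium-browser):

        #!/bin/bash
        # One detached screen session per Django dev server
        for i in 1 2 3; do
            screen -dmS "project$i" bash -c "cd /var/opt/project$i && exec python manage.py runserver 127.0.0.1:800$i"
        done

        # Desktop applications
        eclipse &
        skype &
        google-chrome &
        gedit "/home/myname/Desktop/daily status.txt" &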

    Read the article

  • RewriteRule working locally but not on remote server

    - by m0tv
    I have an .htaccess file with one simple RewriteRule:

        RewriteEngine on
        RewriteRule ^([A-Za-z0-9-]+)$ ?site=$1

    I want to have a URL like http://www.example.com/imprint and forward it to http://www.example.com/?site=imprint. I checked this rule with a RewriteRule tester, which gave me the results I want to achieve, and it works well on my local development system too. But on the remote server, the URLs just give me a 404 error. Other, simpler rewrite rules work with no problems, so everything must be set up correctly (I think...). The problem is that I don't have access to any error logs or the server configs, so the only thing I can do is guess... Can anyone tell me if there's something wrong with this rule? Or anything else I can do or test to solve this? Or has anyone an idea what could be wrong on the server?
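
    A more defensive sketch of the same rule, of the kind that often behaves identically locally but survives a differently configured remote vhost (the RewriteBase value is an assumption; adjust it if the site lives in a subdirectory):

        RewriteEngine on
        RewriteBase /
        # Leave real files and directories alone
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([A-Za-z0-9-]+)$ ?site=$1 [L]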

    Read the article

  • Blocking path scanning

    - by clinisbut
    I'm seeing a number of very suspicious requests in my access log: /i, /im, /imaa, /imag, /image, /images, /images/d, /images/di, /images/dis. They start from a known resource (in the above example, /images/disrupt.jpg), all coming from the same IP. Request rates vary from 1/sec to 10/sec and seem somewhat random. Obviously they are trying to find something, and it seems they are using a script. How do I block this kind of behaviour? I thought of blocking the IP's requests, at least for a given time, keeping in mind that: request intervals seem legitimate (at least I think so), and I don't want to end up blocking a search-engine bot, which may hit 404 URLs too (and that's a different problem, I know). Do they always use the same IP?
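
    The simplest form of the IP block, sketched in Apache 2.2-style .htaccess syntax (1.2.3.4 is a placeholder; for automatic, time-limited bans, a tool like fail2ban watching the access log would be the next step up):

        Order Allow,Deny
        Allow from all
        Deny from 1.2.3.4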

    Read the article

  • IIS 6 nested virtual directory redirection

    - by threedaysatsea
    We're running IIS 6 on a WinServer2k3 box and we're having some trouble with the following problem. E-mails were sent out to users asking them to go to the following URL: alias.contoso.com/directory2/view.aspx?queryparam1=no&queryparam2=blue. However, the URLs are actually supposed to be: server.contoso.com/directory2/view.aspx?queryparam1=no&queryparam2=blue. It's too late to recall all of the e-mails, and we'd like to redirect traffic to make this as seamless as possible for our users. The real problem here is that the server (server.contoso.com) hosts the alias (alias.contoso.com) as a redirect, thusly, and the existing redirect must stay functional:

        Default Web Site (server.contoso.com)
            - Directory1
            - Directory2
            - Directory3
        Redirection to Directory3 (alias.contoso.com)
            - Essentially, alias.contoso.com takes the user to server.contoso.com/Directory3

    Is there any way to host a separate redirect inside of the existing redirect? We need to keep alias.contoso.com taking the user to server.contoso.com/Directory3, but also make alias.contoso.com/directory2/view.aspx?queryparam1=no&queryparam2=blue point to server.contoso.com/directory2/view.aspx?queryparam1=no&queryparam2=blue. Any tips? Is this even possible?
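
    One possibility to test, based on IIS 6's documented redirect variables ($S appends the remaining URL suffix and $Q the query string; the "The exact URL entered above" option stops IIS appending the path a second time). Give Directory2 on the alias site its own, more specific redirect so it takes precedence over the site-wide one:

        alias.contoso.com/directory2  ->  http://server.contoso.com/directory2$S$Q
        (with "The exact URL entered above" checked)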

    Read the article
