Search Results

Search found 6498 results on 260 pages for 'indexed views'.

Page 104/260

  • h1 title attribute [closed]

    - by codemonkey
    Possible Duplicate: Why don't TITLE tags get indexed in Google? I have an h1 element on my page which is working great. I have also been using the title attribute on this element, which I don't think has been causing any harm. My h1 text is "The Great Ocean Road" and the title attribute on it is "Great Ocean Road", so it's a variation of the h1 text. Both are keywords for the site, so I'm hoping this is a good thing for SEO. Do you think it's a bad idea? I'm not sure how search engines treat the title attribute, or whether they would mark me down for using it in such a way. Do you think that if the h1 text is "The Great Ocean Road", the title attribute should be "The Great Ocean Road" as well? Thank you in advance.
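
    For reference, a minimal sketch of the markup in question (the text values are the asker's own):

        <h1 title="Great Ocean Road">The Great Ocean Road</h1>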

    Read the article

  • Speaking in Omaha: December 7, 2011

    - by Bill Graziano
    I’m presenting in Omaha on Writing Faster SQL at 6PM on December 7th.  You can find meeting details on the Omaha SQL Server User Group page. The meeting location requires an RSVP so building security has a list of attendees. The presentation is a series of suggestions on improving performance.  It ranges from simple things like comparing indexed columns to scalar values up to tips for reducing query compiles and asynchronous processing patterns.  Nearly all of these come from specific issues I’ve encountered working on poorly performing SQL Servers.
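
    As a hedged illustration of the simplest tip mentioned above (the table and column names are invented for the example): comparing the bare indexed column to scalar values lets SQL Server seek the index, while wrapping the column in a function forces a scan.

        -- Non-sargable: the function applied to the indexed column forces a scan.
        SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2011;

        -- Sargable rewrite: compare the bare column to scalar boundaries.
        SELECT OrderID FROM dbo.Orders
        WHERE OrderDate >= '20110101' AND OrderDate < '20120101';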

    Read the article

  • Google is still crawling and indexing my old, dummy, test pages which now are 404 not found

    - by Ace
    I set up my site with sample pages and data (lorem ipsum, etc.) and Google crawled those pages. I then deleted all of the sample pages and added real content, but in Webmaster Tools I still get a lot of 404 errors from Google trying to crawl the old pages. I have set them to "mark as resolved", but some pages still come back as 404. Furthermore, a lot of these sample pages are still listed when I search for my site on Google. How can I remove them? I think these irrelevant pages are hurting my rating. I actually wanted to erase all of these pages and have my site indexed from scratch as a new one, but I read that this is not possible. (I have submitted a sitemap and used "Fetch as Google.")
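
    One hedged option, assuming Apache and mod_alias (the /lorem/ path is a made-up example): answer the dead sample URLs with 410 Gone rather than 404, which search engines generally treat as a stronger signal to drop a URL from the index.

        # .htaccess: the old sample pages are gone for good (HTTP 410).
        RedirectMatch gone ^/lorem/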

    Read the article

  • Is there a word or description for this type of query?

    - by Nick
    We have a requirement to find a result in a collection of records based on a prioritised set of search criteria against a relational DB (I'm talking indexed field matching here rather than text search). The way we are thinking about designing the query is to begin with a highly refined and specific set of criteria. If there are no results for this initial query, we want to progressively relax the criteria one by one, starting with the lowest-priority ones, querying each successively less specific set of criteria until we find a result we can accept. Alternatively, we have considered starting with a smaller set of criteria and adding criteria until the results are narrowed down to a final acceptable set. What I would like to know is whether an existing term describes this type of query, so that we can model our own on existing patterns and use best practice.
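
    This is often informally called a "best match" or fallback lookup. A hedged sketch of a single-query variant in SQL (the schema and parameters are invented): score each candidate row by which criteria it satisfies, with weights chosen so a higher-priority match beats any combination of lower-priority ones (4 > 2 + 1), then keep the top row.

        -- Parameters (@Customer, @Product, @Channel, @Region) come from the app.
        SELECT TOP 1 *
        FROM dbo.Rates AS r
        WHERE r.Region = @Region            -- minimum criterion that must always hold
        ORDER BY
            CASE WHEN r.Customer = @Customer THEN 4 ELSE 0 END +
            CASE WHEN r.Product  = @Product  THEN 2 ELSE 0 END +
            CASE WHEN r.Channel  = @Channel  THEN 1 ELSE 0 END DESC;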

    Read the article

  • Google showing meta descriptions from other pages in the SERPs

    - by ojek
    Recently I added some content to my website and submitted sitemap files to Google. Now that Google has indexed those pages, I have discovered that some of the results that lead to my website have their meta descriptions mixed up. Here is how it works: after I put a sentence into Google to check my website's ranking, I see the title of page1 in the results, a link to page1, and a description taken from page2. Since my website is a forum, when Google mixes up the threads like this it leads my users to different material from what they were looking for. Is there anything I can do about it?

    Read the article

  • Moved sitemaps to a different subdomain and losing search referrals around the same time. Red herring or correlation?

    - by er1234
    We started to lose search referral traffic around the same time that I moved some of our sitemaps to a subdomain. Could this have hurt us? I followed Google's steps for creating a sitemap under a different subdomain. The new sitemaps.foo.com subdomain is being crawled and indexed well. Both www.foo.com and sitemaps.foo.com have been verified in Google Webmaster Tools, but they appear as distinct sites. Is this correct? I can't find a way in Webmaster Tools to say "Hey, sitemaps.foo.com is really owned by www.foo.com, so show them together and make sure to attribute sitemaps.foo URLs to www.foo". Our www.foo.com/robots.txt contains:

        Sitemap: http://www.foo.com/sitemap.xml
        Sitemap: http://sitemaps.foo.com/subdir/sitemap.xml.gz

    Read the article

  • Should I add rel nofollow to internal links which already have meta noindex?

    - by CamSpy
    Let's say I have a products page listing products, and the page has pagination. I would like the first page to receive all the search-ranking weight, so I decided to put meta noindex on the rest of the paginated pages (from page 2 to page N). My common sense says that if I don't want those pages to get indexed, I also shouldn't pass link/PR juice to them. (Is that smart?) What happens if I set rel="nofollow" on all pagination links from page 2 to page N?
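
    A hedged sketch of the usual compromise: noindex keeps pages 2..N out of the index, while leaving follow (the default) lets link equity keep flowing through them to the products they link to, which is why nofollow on the pagination links is generally counterproductive.

        <!-- On pages 2..N: stay out of the index but keep passing link equity -->
        <meta name="robots" content="noindex, follow">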

    Read the article

  • How do I get the root index page to redirect to a subdirectory without affecting SEO?

    - by paradroid
    I am reviving/reorganising my personal WordPress blog. It uses a URL that looks like this: http://mydomain.com/blog. The web server 301-redirects www.mydomain.com to mydomain.com. I want to use the blog subdirectory because I plan to add other parts to the site, with the blog being only one part of it. However, at the moment there is nothing there but the blog, so I want the root index page to redirect to the blog for the time being. I have been using this on the root index.html page to do the redirect... <meta http-equiv="REFRESH" content="0;url=./blog"></HEAD> ...but this seems to have stopped the site being indexed by Google and Bing. How do I do this without affecting SEO? Also, what URL should I put in the sitemap.xml?
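
    A hedged sketch of the usual fix, assuming Apache with mod_rewrite: replace the meta refresh with a server-side redirect, which search engines handle far better. A 302 (temporary) fits here, since the root is meant to hold its own content later; the sitemap.xml would then list the /blog URLs rather than the root.

        # .htaccess at the document root: redirect only the bare root to /blog/.
        RewriteEngine On
        RewriteRule ^$ /blog/ [R=302,L]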

    Read the article

  • Making sub domain the new main domains for ssl

    - by Dean Legg
    What would be your best advice for changing your main domain to a subdomain? Our site used to be example.co.uk but has now changed to https://secure.example.co.uk/. Any example.co.uk URLs redirect to the new secure domain; effectively, example.co.uk is now just there to redirect old links and is no longer part of the site's URL structure. I have added the new domain https://secure.example.co.uk/ to Google Webmaster Tools, added the sitemap, and am waiting for it to be indexed. Is there anything else you would advise, and will this take away a lot of the juice from all the links I developed for example.co.uk? I'm guessing this is not best practice, as I have struggled to find any information online on this subject.

    Read the article

  • Can I redirect the HTTP request towards an old folder to the homepage using .htaccess file?

    - by AndreaNobili
    I have the following situation: I had an old blog made with Joomla (it was indexed well enough by search engines). Because of some problems I deleted it and recreated it using WordPress. Now I get many visits (from Google) leading to specific pages of the old site, pages that don't exist in the new version. For example, I get visits to URLs such as /scorejava/index.php/corso-spring-mvc/1-test that don't exist on my new site. I would like to know whether, using the .htaccess file (or some other mechanism), I can redirect the HTTP requests aimed at subfolders that don't exist in the new version to the homepage of my new site. For example, for a request towards the dead URL /scorejava/index.php/corso-spring-mvc/1-test, I would like a regular expression that says: all requests toward the subfolder corso-spring-mvc (and all its files and subfolders) must be redirected to www.scorejava.com. Is that possible?
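
    A hedged sketch with Apache's mod_alias (adjust the pattern to where the .htaccess file lives; whether the index.php segment appears in the live URLs is an assumption here):

        # .htaccess: permanently redirect the old Joomla section to the new homepage.
        RedirectMatch 301 ^/scorejava/index\.php/corso-spring-mvc(/.*)?$ http://www.scorejava.com/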

    Read the article

  • After a domain change, what can I do to recover lost traffic, rankings, impressions etc? [duplicate]

    - by Felix
    This question already has an answer here: How do I rename a domain and preserve PageRank? 3 answers. I moved my site to a legacy exact-match domain I purchased about a couple of months ago. I have seen a significant reduction in traffic, impressions, and rankings, even though I took all the right steps and followed best practices: change of address in GWT, mapping the old site hierarchy to the new site for 301 redirects, and so on. Indexation has gone through the usual Google process: the old site has all but disappeared from the index and the new site is indexed, albeit with some 404 errors which I am addressing. Does anyone else who has gone through the domain-change process have any thoughts or advice? Thanks!

    Read the article

  • 301 redirect from a country specific domain

    - by Raj
    I originally started using a .do domain extension for my site, but later realized that this country-specific domain would prevent us from appearing in search results for places outside the Dominican Republic. We started using a .co domain extension and redirected all requests to the new domain using an HTTP 301. The "Crawl Stats" in Google Webmaster Tools show me that the .co domain is being crawled, but its "Index Status" shows the number of pages indexed as 0. The "Crawl Stats" for the .do domain say that it is still being crawled, and its "Index Status" shows a number greater than 0. I also set a "Change of Address" in Google Webmaster Tools to point the .do domain to the new .co domain. We're still not appearing in search results at all, even for very specific strings where I would expect to find us. Am I doing something wrong?

    Read the article

  • CSS being Displayed by Google spiders

    - by vipul_vj
    I have written an article, HTML Image Tag, for the site and it has been indexed by Google. But when I search for it, Google displays:

        HTML Image Tag - ProgrammingBulls
        http://programmingbulls.com/html-image-tag-1
        content { font-family:verdana; font-size:14px; font-weight:normal } We often use images in a webpage. To insert images in our webpage < img tag is used...

    Why is CSS displayed in the Google snippet? I know that CSS and HTML are supposed to be ignored by Google, but for some reason the CSS is being displayed.
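
    Two hedged fixes that usually cure this (the stylesheet path is hypothetical): give the page a meta description so Google has something better to show than body text, and move the inline style block into an external stylesheet so no CSS appears in the page text at all.

        <head>
          <meta name="description" content="How to insert images in a webpage with the HTML img tag.">
          <link rel="stylesheet" href="/css/site.css"> <!-- CSS moved out of the page body -->
        </head>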

    Read the article

  • SQL Server Data Type Precedence

    I am executing a simple query/stored procedure from my application against a large table and it's taking a long time to execute. The column I'm using in my WHERE clause is indexed and it's very selective. The search column is not wrapped in a function, so that's not the issue. What could be going wrong?
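
    The title points at the classic culprit: data type precedence. If a parameter's type outranks the column's (nvarchar outranks varchar), SQL Server implicitly converts the column side of the comparison, which defeats the index even though nothing is visibly wrapped in a function. A hedged sketch with invented names:

        -- Implicit conversion: @name is nvarchar, the column is varchar, so
        -- SQL Server applies CONVERT_IMPLICIT to Customers.Name => index scan.
        DECLARE @name nvarchar(50);
        SET @name = N'Acme';
        SELECT CustomerID FROM dbo.Customers WHERE Name = @name;

        -- Fix: match the parameter's type to the column's => index seek.
        DECLARE @name2 varchar(50);
        SET @name2 = 'Acme';
        SELECT CustomerID FROM dbo.Customers WHERE Name = @name2;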

    Read the article

  • Site returning 404 header to google, not sure why

    - by Damon
    A Drupal site that works fine for regular users returns a 404 Not Found error when I try to use the W3C validator on it; it is also not being indexed by Google at all (which is the main issue, but I suspect there is a connection). It is an https:// site with an .htaccess rule to redirect any http:// request to https://. I had it running in Google Webmaster Tools and thought it was fine, but it turns out I had not added the https domain. After adding the https domain, it is also returning this header:

        HTTP/1.1 404 Not Found
        Date: Mon, 15 Oct 2012 19:37:43 GMT
        Server: Apache
        Expires: Sun, 19 Nov 1978 05:00:00 GMT
        Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0

    robots.txt just has:

        User-agent: *
        Crawl-delay: 10
        # Files
        Disallow: /cron.php

    How can I check what the issue is here?
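
    One hedged way to narrow it down from the command line (the hostname is a placeholder): fetch the headers the way a crawler would and compare them with a plain request, to see whether the 404 depends on the User-Agent.

        # Headers for a plain request
        curl -I https://example.org/

        # Headers as Googlebot would see them (tests for user-agent sniffing)
        curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" https://example.org/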

    Read the article

  • How can I allow search engines to index my invite only website in ruby on rails?

    - by tstyle
    I have a Ruby on Rails website that will be in invite-only mode for the next couple of months. Currently I have it set up so that a visit to any page performs authentication: before_filter :authenticate, :except => [:beta] (authenticate checks for a logged-in user). But the site has a lot of content that I would like to see indexed by search engines, and I was wondering if there's an easy way to allow crawlers to do their work. I am not very knowledgeable about SEO-related things, so sorry if this is a suboptimal way to phrase the question.
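
    A hedged Ruby sketch of one approach (the controller and action names are invented): exempt the read-only actions you want indexed rather than detecting crawlers by user agent, since serving crawlers pages that logged-out humans cannot see is the kind of cloaking search engines penalize.

        class ArticlesController < ApplicationController
          # Anyone, crawlers included, may read; everything else stays invite-only.
          before_filter :authenticate, :except => [:index, :show]
        end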

    Read the article

  • How to prevent a search engines from indexing a section of a page?

    - by BrunoLM
    I have many pages with lots of text on them, but each page will always have two sections of text, and I want to prevent one section from appearing in search results while the other section must be indexed.

        <p class="please-index-me">text</p>
        <p class="get-out">never index me please</p>

    I thought that maybe if I loaded the "please don't index me" text with JavaScript, search engines wouldn't pick it up, but I am not sure that would work and it is not really nice. I was wondering if there is a way to tell search engines "hey, you can't grab this text, move on". So, is there a way to do it?
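
    There is no standard per-section noindex for the public web, so any workaround is best-effort. A hedged sketch of the JavaScript idea (all names and paths are hypothetical): serve the sensitive fragment from a URL blocked in robots.txt and inject it client-side; crawlers that honor robots.txt never fetch the fragment, though crawlers that execute JavaScript and ignore robots.txt could still see it.

        <div id="get-out"></div>
        <script>
          // robots.txt contains:  Disallow: /private-fragments/
          fetch('/private-fragments/never-index.html')
            .then(function (res) { return res.text(); })
            .then(function (html) {
              document.getElementById('get-out').innerHTML = html;
            });
        </script>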

    Read the article

  • Remove home page from Google cache

    - by Steve
    The Google Webmaster Tools Remove URL section allows you to specify a page URL to be removed from the index, the cache, or both. However, I want to remove just the home page, which is /. I want to remove it from the cache because the cached copy was indexed while the "under construction" page was up. This URL is not recognised by the Remove URL section as an individual page; instead, Google assumes you want to remove the entire website from the index. I have requested that /index.php and /index.html be removed from the cache, but neither of these is the URL listed in the search results for the home page whose cached copy I want removed.
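
    A hedged complement to the removal tool: once the real home page is live, a noarchive robots meta tag stops Google from keeping a cached copy of it going forward, without affecting indexing.

        <!-- On the home page: index the page, but keep no cached copy -->
        <meta name="robots" content="noarchive">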

    Read the article

  • How to improve a single-paged site search result [closed]

    - by Trisism
    Possible Duplicate: How to SEO a Single-Page website. I created an online CV of mine a couple of weeks ago and it has had quite a few visits. Now I want to improve its chances of appearing in Google search results; however, my web CV is a one-page site and contains only internal anchor links (those with a hash #), so I can't really submit a sitemap. I could change the internal links to normal links processed server-side, but there's no point in doing so. I'm very new to web SEO, so I would really appreciate it if somebody could show me what I should do to get a single-page site with internal anchor links effectively indexed by crawlers.

    Read the article

  • Is Google indexing pages that has no connection with other pages? [duplicate]

    - by Grkmksk
    This question already has an answer here: How did Google find my unlinked newly created pages? 3 answers. I am working on a web project that has nearly 100 thousand users, and there is a webpage that we use for test cases. There are no links pointing to it from other pages, and it shouldn't be indexed by Google or any other search engine. I know "noindex" can be used in this situation, but I wonder whether Google (or anyone else) would index this page if I did nothing to prevent it.
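
    For reference, a hedged sketch of belt-and-braces prevention at the web server (Apache with mod_headers is assumed; the path is hypothetical). The X-Robots-Tag response header keeps compliant crawlers from indexing the page even if they discover the URL somehow.

        # In the vhost configuration:
        <Location "/internal/test-page">
            Header set X-Robots-Tag "noindex, nofollow"
        </Location>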

    Read the article

  • Need to add 30K new pages to a 10K page website - troubles ahead? (SEO)

    - by Jurga
    We have a situation where we plan to add a huge number of new pages to a website. The domain is over 10 years old, with approximately 10 thousand indexed pages, and the planned addition is approximately 30K new pages. Any idea how we should go about it? Must we schedule a gradual release? Have you heard of any industry standard for how many new pages per day, week, or month can be added in order to appear natural and not get in trouble with Google? That is, should we plan a bi-weekly addition of 5K?

    Read the article

  • Dell Management Packs in System Center Operations Manager 2007 R2?

    - by bwerks
    Hey all, I recently set up SCOM in a small business network environment. The root management server is a Dell PowerEdge 2950, and I'd like to use SCOM to monitor it using Dell's management packs. I've imported the management packs into the SCOM deployment and followed Dell's installation instructions, but it doesn't seem to be fully working yet. Currently, the Diagram views in the Dell tree (Monitoring tab) show the server's place in the network topology, so at least part of it is working. However, none of the reports under "Performance and Power Monitoring Views" provide any information. When I click on one of them (Power Consumption (Watts), for instance), the display area is blank and a tooltip reads "No performance counter is selected. To select a counter, place a check mark in the Show column in legend below." However, there is nothing in the legend for me to check. I've installed OpenManage 6.2 on the server as per the Dell documentation, but I don't know what else I could have done that I missed. Does this sound like a familiar problem to anyone?

    Read the article

  • MS SQL Server slows down over time?

    - by Dave Holland
    Have any of you experienced the following, and have you found a solution? A large part of our website's back end is MS SQL Server 2005. Every week or two the site begins running slower, and I see queries taking longer and longer to complete in SQL. I have a query that I like to use:

        USE master
        SELECT text, wait_time, blocking_session_id AS "Block",
               percent_complete, *
        FROM sys.dm_exec_requests
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
        ORDER BY start_time ASC

    It is fairly useful: it gives a snapshot of everything running at that moment against your SQL Server. What's nice is that even if your CPU is pegged at 100% for some reason and Activity Monitor refuses to load (I'm sure some of you have been there), this query still returns, and you can see which query is killing your DB. When I run this, or Activity Monitor, during the periods when SQL has begun to slow down, I don't see any specific queries causing the issue; they are ALL running slower across the board. If I restart the MS SQL service then everything is fine and speeds right up, for a week or two, until it happens again. Nothing that I can think of has changed, but this just started a few months ago. Ideas?

    Added: Please note that when this database slowdown happens, it doesn't matter whether we are getting 100K page views an hour (the busier time of day) or 10K page views an hour (a slow time); the queries all take longer than normal to complete. The server isn't really under stress: the CPU isn't high and disk usage doesn't seem to be out of control. It feels like index fragmentation or something of the sort, but that doesn't seem to be the case. As for pasting the results of the query above, I really can't do that: it lists the login of the user performing the task, the entire query, and so on, and I'd rather not hand out the names of my databases, tables, columns, and logins online :). I can tell you that the queries running at that time are our normal, standard site queries that run all the time, nothing out of the norm.
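
    To test the index-fragmentation hypothesis directly, a hedged sketch using the standard DMV (available on SQL Server 2005 and later); run it in the affected database:

        -- Indexes whose logical fragmentation exceeds 30% are the usual
        -- candidates for a rebuild rather than a reorganize.
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.avg_fragmentation_in_percent > 30
        ORDER BY ips.avg_fragmentation_in_percent DESC;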

    Read the article

  • Is TortoiseSVN really this buggy?

    - by John Isaacks
    I have been using TortoiseSVN for a couple of weeks now, and I get errors very often; almost everything I do creates an error. This happens with repositories on the internet, locally on my machine, and on a machine on the network, so I started to keep track. Some examples are below.

        12/31/2010 Can't move 'C:\Users\jisaacks\Desktop\my branch test.svn\tmp\entries' to 'C:\Users\jisaacks\Desktop\my branch test.svn\entries': The file or directory is corrupted and unreadable.

        01/04/2011 Commit failed (details follow): Server sent unexpected return value (405 Method Not Allowed) in response to MKCOL request for '/svn/kranichs-svn/!svn/wrk/b316f15e-0869-4644-9c53-87aa0103506b/branches'

        01/06/2011 Can't move 'C:\Users\jisaacks\Desktop\DVD Catalog\vendors.svn\tmp\entries' to 'C:\Users\jisaacks\Desktop\DVD Catalog\vendors.svn\entries': The file or directory is corrupted and unreadable.

        01/06/2011 Can't move 'C:\Users\jisaacks\Desktop\DVD Catalog\cake\tests\test_app\views\layouts.svn\tmp\entries' to 'C:\Users\jisaacks\Desktop\DVD Catalog\cake\tests\test_app\views\layouts.svn\entries': The file or directory is corrupted and unreadable.

        01/06/2011 Commit failed (details follow): attempt to write a readonly database attempt to write a readonly database

    That last one, about the read-only database, happens every time I commit. Say I am working on the head revision (7) in a working copy. I make a change and commit it. It gives me this error, but if I look at the log it tells me that there is now a revision 8 (the commit I just made) while I am still on revision 7, so I need to run update to be on the revision I just committed. I hope I explained that clearly. Anyway, with all these errors I wonder: is TortoiseSVN just this unstable? Does everyone have these issues, or is it just me? And if it is just me, what could I be doing wrong?

    Read the article

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company, and generally it has been very positively received. However, there is a concern over security: not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with a customer, which then goes to the competitor. The administration of such a matrix is a nightmare, because (1) the matrix is based on departments and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting the wiki's content to non-top-secret material, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site racking up 2,000 views in two days was reported); again, this is not ideal, because heavy viewing does not directly imply a wrong motive. Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW, we use MediaWiki with Lockdown to exclude some administrative staff.

    Read the article
