Search Results

Search found 69200 results on 2768 pages for 'file indexing'.

  • Efficient SQL Server Indexing by Design

    Having a good set of indexes on your SQL Server database is critical to performance. Efficient indexes don't happen by accident; they are designed to be efficient. Greg Larsen discusses whether primary keys should be clustered, when to use filtered indexes and what to consider when using the Fill Factor.

    Read the article

  • Is it safe to block redirected (but still linked) URLs with robots.txt?

    - by Edgar Quintero
    I have a website that has all URLs optimized and 301 redirected from nasty URLs to clean ones. However, the unclean URLs are still linked everywhere throughout the site: in menus, content, products, etc. Google currently has all the clean URLs indexed, along with a few unclean URLs too. So the old URLs are still linked all over the site (ideally this wouldn't be the case, but this is how it is at the moment). I would like to block the unclean URLs with robots.txt. The question: if I block these unclean URLs with robots.txt while the entire website still links to them (but they all redirect to the clean version), will this affect the indexing status at all?

    Read the article

  • Search for partial IP address using Windows Search?

    - by Dr. Dre
    I have a folder, c:\projects\, added to the Windows Index. I know the indexing is working because I search for content in this folder all the time, the results come up very fast, and I've never noticed any accuracy problem until now. (I have had to tweak the Indexing Options to expand which file types have their contents indexed rather than just the file name, but after that, Search has worked pretty well for me.) The problem appeared while searching for references to a particular IP address subnet: I'm trying to find all references to IPs matching the pattern "192.168.220.xxx" (i.e. the 192.168.220.0/24 network, or 192.168.220.0/255.255.255.0 in IP/netmask notation).

    Within Windows Explorer: everything under c:\projects\ is indexed, c:\projects\work\project1\network_list.txt contains several "192.168.220.xxx" IPs, and the indexing status says all items are indexed (193,000 items). When I try to search for a partial IP match, there are no results. I tried searching for 192.168.220, 192.168, 192.168.220., 192.168.220.?, 192.168.220.??, 192.168.220.??? and similar truncated variants, as well as all of the above surrounded with double quotes. Every search returned 0 results.

    Within MS Outlook 2007: my mailbox and all my offline .pst files are indexed. I search in Outlook pretty frequently, so I'm fairly sure indexed searches work across the inbox and all the .pst files, and the indexing status in Outlook says all items are indexed. I also have references to these IPs in email and I'd like to find all of them, but it's basically the same deal as above: I can't search for "192.168.220.xxx" IPs. Any way to fix this?
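    One way to confirm the files really do contain the addresses, independent of the Windows Search index, is to scan the folder directly with a script. A minimal sketch in Python, using the folder and subnet from the question (treat it as a workaround for verification, not a fix for the index itself):

```python
import os
import re

# Match 192.168.220.x, where x is a final octet of 1-3 digits.
IP_PATTERN = re.compile(r"\b192\.168\.220\.\d{1,3}\b")

ROOT = r"c:\projects"  # folder from the question

for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            with open(path, "r", errors="ignore") as handle:
                for lineno, line in enumerate(handle, start=1):
                    if IP_PATTERN.search(line):
                        print(f"{path}:{lineno}: {line.strip()}")
        except OSError:
            # Skip files that cannot be opened (locked, in use, etc.).
            continue
```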

    Read the article

  • Google is not indexing my entire site despite having a sitemap

    - by Anusha
    I have an e-commerce website, www.beyondtime.in. I have been constantly monitoring Googlebot crawling on my website and my webmaster account. Lately, I have found two issues that I have not been able to understand. 1.) Googlebot has only been crawling www.beyondtime.in/telecom.php, even though that URL is not even valid. What needs to be done to get Google to crawl the other pages of the website as well? 2.) The second question is about my Google Webmaster account, where I've submitted a sitemap with 227 URLs. Of those, only 156 have been indexed, and none of the images on my website have been indexed by Google.

    Read the article

  • Correctly indexing multiple domains with same content in Google and others

    - by AJweb
    I have a client with a dozen territorial domains, like mydomain.co.uk, mydomain.fr, mydomain.de, etc. Most of these domains hold a different language version of the same dynamic content (a shop), but some, like .co.uk and .com, have the same language and content, except for some content customized to each country/domain on the front page, the contact page, and a few other pages. I am aware that we should use the canonical meta tag to mark that duplicated content, but we want the .co.uk site to be present in the UK (indexed in google.co.uk) and the .com site to be present in the US and other countries, or at least that is the goal. Is there anything we can do to "help" Google determine the geographical meaning of each domain? If we mark the .com and .co.uk sites with the canonical tag, do you know how Google will decide which one to show for a given search?

    Read the article

  • Scalable distributed file system for blobs like images and other documents

    - by Pinnacle
    Cassandra & HBase both do not efficiently support storage of blobs like images. Storing directly on HDFS stresses the Namenode because of huge number of files. Facebook uses Haystack for images and attachments storage, but this is not open source. So is Lustre a good choice for distributed blob storage? I have read that Amazon S3 is used by many, but this would cost money and personally, I would not like to rely on third party system. What are other suggestions?

    Read the article

  • Why can the number of 'indexed' images go down?

    - by Roman Matveev
    I have a site with several thousand images. All those images are included in the sitemap submitted to Google Webmaster Tools. The number of 'submitted' images is fine, but the number of 'indexed' images is significantly lower than the number of 'submitted' ones, and it is going DOWN! I'd understand if not all of my images got indexed (although that is also unclear and very frustrating for me), but I cannot understand how the indexing can go in the negative direction. All the images stay in their places, and the pages containing them stay unchanged. At least, they are intended to. Any thoughts?

    Read the article

  • How to test robots.txt in googlebot to find out what is being indexed

    - by Amar Jarubula
    This question is a continuation of this answer: How to check if googlebot will index a given url? As suggested there, I went to Webmaster Tools and tested the contents of my robots.txt file. However, that only tells me whether the file's content is valid or not. For my scenario, I need to test whether URLs matching a disallowed pattern end up indexed or not. For example, I have something like the following in my robots.txt: Disallow: /pattern*. My understanding is that URLs matching this pattern should not be crawled, but how do I test that the pattern is enforced while the website is being indexed?
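    One way to check specific URLs against the rules, without waiting for a crawl, is to run them through a robots.txt parser. A minimal sketch using Python's standard-library parser (the site, paths, and rule are placeholders; note that urllib.robotparser implements the original robots.txt rules, and that "Disallow: /pattern" already blocks every path starting with /pattern, so the trailing * is not needed for that case):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content, mirroring the rule from the question.
robots_txt = """\
User-agent: *
Disallow: /pattern
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Check a few example URLs against the rules.
for url in (
    "http://www.example.com/pattern-page.html",
    "http://www.example.com/other-page.html",
):
    allowed = parser.can_fetch("Googlebot", url)
    print(f"{url} -> {'allowed' if allowed else 'blocked'}")
```

    Keep in mind that robots.txt controls crawling rather than indexing: a blocked URL that is linked from elsewhere can still appear in the index without its content.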

    Read the article

  • Guaranteed Google Indexing

    Do you believe that you can get any site listed in Google in under seven days? Are you frustrated at hearing this while you're still waiting months to get your site indexed in Google? If this sounds like you, then maybe I can help.

    Read the article

  • Best practice Java - String array constant and indexing it

    - by Pramod
    For string constants it's usual to use a class with final String values, but what's the best practice for storing a string array? I want to store different categories in a constant array, and every time a category is selected, I want to know which category it is and process based on that. Addition: to make it clearer, I have categories A, B, C, D, E in a constant array. Whenever a user clicks one of the items (buttons will have those texts), I should know which item was clicked and do processing based on that. I can define an enum (say Cat) and every time do if clickedItem == Cat.A ... else if clickedItem == Cat.B ... else if ..., or even register listeners for each item separately. But I wanted to know the best practice for handling these kinds of problems.
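    The question is about Java, but the underlying pattern (a fixed set of constants, each mapped to its own handling code, rather than a chain of if/else comparisons) is language-neutral. A minimal sketch of that idea in Python, with hypothetical category names and handlers; in Java the same shape is usually an enum whose constants override a method, or a Map from the enum to a listener:

```python
from enum import Enum


class Category(Enum):
    """Hypothetical fixed set of categories from the question."""
    A = "A"
    B = "B"
    C = "C"
    D = "D"
    E = "E"


def handle_a(label: str) -> None:
    print(f"processing {label} as category A")


def handle_default(label: str) -> None:
    print(f"no special processing for {label}")


# Map each constant to its handler instead of chaining if/else checks.
HANDLERS = {
    Category.A: handle_a,
    # Category.B: handle_b, ... and so on for the remaining categories.
}


def on_click(label: str) -> None:
    category = Category(label)  # which constant was clicked
    HANDLERS.get(category, handle_default)(label)


on_click("A")  # -> processing A as category A
on_click("C")  # -> no special processing for C
```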

    Read the article

  • Open Source PHP based secure file download script?

    - by SiddharthP
    Basically I need a self-hosted solution where I, as the admin, can create client areas (which can be simple folders), upload files to them, and secure them with a username/password. A client page will then be automatically generated, which the client can access with the username/password to download the files. It's a relatively simple script, but I'm having a hard time finding open-source solutions that accomplish what I need. Any help would be appreciated.

    Read the article

  • Wordpress with user login and file manager support

    - by Don
    This may be an RTFM kind of thing, so I'll apologize up front. I've been asked by a friend I used to freelance for whether there's a solution in WordPress where users can log in and then upload their own files in a "my docs" kind of area. I've never used WP, so before I dig into the docs I thought I'd see if anyone here can confirm or maybe point me to a resource. It's one of those "I'll look it up at lunch and get back to you" things, which is why I'm bugging you all before reading the docs. Thanks

    Read the article

  • Alexa indexing browsing history?

    - by Haluk
    We have this test.php sitting around in a forgotten folder. It is a script which just sends an email to our site admin. We never had a page linking to it. It is not indexed by Google. It does not exist in the Internet Archive Wayback Machine. But every now and then it gets crawled by ia_archiver. I wonder how it got indexed. Could it be because of the Alexa toolbar installed on our computer? Does Alexa index our personal browsing history?

    Read the article

  • How do I stop Google indexing my main page as https [duplicate]

    - by user2897488
    Due to historic reasons, we have things set up so that "www.mydomain.com" redirects to "store.mydomain.com". This worked perfectly fine until recently, when Google appears to have started sending visitors to "https://www.mydomain.com", which doesn't have an SSL certificate (and never has). Strangely, it's only the first link that goes to "https://www.mydomain.com"; all other links point correctly to "http://store.mydomain.com". Because there is no certificate on the "www" version, users are getting an error message. How do I make Google revert to pointing the main link at "http://store.mydomain.com" (or even "http://www.mydomain.com")? If I remove "https://www.mydomain.com" from Google Webmaster Tools, will this also remove the redirected page ("http://store.mydomain.com")? Thanks.

    Read the article

  • Microsoft BI Indexing Connector Announced

    - by Enrique Lima
    Wait, more awesome stuff has been released. With Microsoft's acquisition of FAST, the options for content being indexed have increased. That's not all it brings, but for the purposes of this post, since we focus on Business Intelligence content, that is where we see the benefit at this time. Here is the link to the SharePoint Insights: BI In Action blog, where you will find guidance and components to download.

    Read the article

  • Re-indexing a website with clean URLs

    - by artsi
    So I have a website with URL's like this: http://www.domain.com/profile.php?id=151 I've now cleaned them up with mod_rewrite into this: http://www.domain.com/profile/firstname-lastname/151 I've fetched and re-indexed my website after the change. What is the best way to make the old dirty ones disappear from search results and keep the clean ones? Is blocking profile.php with robots.txt enough?

    Read the article

  • What measures can be taken to make sure Google is aware of the existence of a newly created page?

    - by knorv
    Consider a website with a large number of pages where new pages are published regularly. When publishing a new page, the website operator wants to get the newly created page indexed in Google as soon as possible, minimizing the time between publication and indexing. Consider the site http://www.example.com/ with hundreds of thousands of pages. The page http://www.example.com/something/important-page.html is created at, say, 12:00. I want to get important-page.html indexed as soon as possible after 12:00, ideally within seconds or minutes. What options are available to try to get Google to index a specific newly created page as soon as possible?
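    One concrete step is to make sure the new URL shows up in the site's sitemap immediately, with a current lastmod, and then resubmit the sitemap from Webmaster Tools. A minimal sketch of the sitemap update in Python, assuming an existing sitemap.xml and using the example URL from the question:

```python
from datetime import date
from xml.etree import ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", SITEMAP_NS)


def add_url_to_sitemap(sitemap_path: str, page_url: str) -> None:
    """Append a <url> entry with today's <lastmod> to an existing sitemap."""
    tree = ET.parse(sitemap_path)
    urlset = tree.getroot()

    url_el = ET.SubElement(urlset, f"{{{SITEMAP_NS}}}url")
    ET.SubElement(url_el, f"{{{SITEMAP_NS}}}loc").text = page_url
    ET.SubElement(url_el, f"{{{SITEMAP_NS}}}lastmod").text = date.today().isoformat()

    tree.write(sitemap_path, encoding="utf-8", xml_declaration=True)


# Newly published page from the question, added to the sitemap right away.
add_url_to_sitemap("sitemap.xml", "http://www.example.com/something/important-page.html")
```

    Linking the new page prominently from pages that are already crawled frequently (for example the front page) also tends to get it discovered sooner.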

    Read the article

  • Hosting files with support for file tagging / keywords

    - by Zev Chonoles
    I have a large (approx. 25GB) collection of files I would like to host online for people to view or download. I have a spare computer I can use as a dedicated server for these files. I'm looking for a method of, or piece of software for, hosting my files where I can assign tags or keywords to the files, and people viewing my files online can search the collection via the tags. By way of approximate solutions I've found so far, I see that there is software such as Collectorz.com or Readerware for creating databases of one's books / music / movies, and these databases can be searched by tags or keywords, and the databases can be made available and searchable online; this would suit my purposes except that my files are not necessarily books, music, or movies, and I want the files themselves accessible online, not a database describing my files. A commercially-available solution like the ones above would be acceptable, but I'd prefer to have the whole setup under my control (i.e. I'd like to either implement it by hand, or use commercial software that doesn't rely on using the company's servers, paying them a continued fee, etc.). The current extent of my internet experience is designing a few Google Sites, so I know there's a fair chance I won't understand the answers I receive, but I'm always happy to have a summer project :)

    Read the article

  • Proper Search Engine Indexing

    On the surface, Google's search engine is nothing but a very simple search box, but behind the scenes it is a complicated algorithm that grows more sophisticated as time goes on. Search engines like Google, Yahoo, and Bing are getting more precise at sorting websites in order to help people find the information they are looking for.

    Read the article

  • Monitoring file I/O on a file server

    - by jenglee
    Is there a way to monitor the file I/O on my file server? I want to gather some metrics on my current file system. I am running an old Windows 2003 file server and I am planning on moving to a new file server running either Windows Server 2008 or 2012. I want to use these metrics to pick a new file server that improves file I/O and access. Can someone please advise me on the best way to monitor file access and gather file I/O information so I can upgrade to a better file server?
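    The usual answer on Windows is Performance Monitor, sampling counters from the PhysicalDisk and LogicalDisk objects (reads/sec, writes/sec, bytes/sec, queue length) over a representative period. As a rough, script-based alternative for grabbing baseline numbers, here is a minimal sketch using the third-party psutil package; the sampling interval and output format are arbitrary choices:

```python
import time

import psutil  # third-party package: pip install psutil


def sample_disk_io(interval_seconds: float = 5.0) -> None:
    """Print per-disk read/write deltas over one sampling interval."""
    before = psutil.disk_io_counters(perdisk=True)
    time.sleep(interval_seconds)
    after = psutil.disk_io_counters(perdisk=True)

    for disk, end in after.items():
        start = before[disk]
        reads = end.read_count - start.read_count
        writes = end.write_count - start.write_count
        read_mb = (end.read_bytes - start.read_bytes) / (1024 * 1024)
        write_mb = (end.write_bytes - start.write_bytes) / (1024 * 1024)
        print(f"{disk}: {reads} reads ({read_mb:.1f} MiB), "
              f"{writes} writes ({write_mb:.1f} MiB) in {interval_seconds}s")


sample_disk_io()
```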

    Read the article

  • A relatively new blog seems to be getting very poor Google indexing

    - by Genadinik
    I have a new blog that is 2 months old. In the first few weeks it was getting indexed nicely, and my Google Webmaster reports were showing that it was getting crawled and beginning to rank for some terms. Then, as I kept writing, the Google Webmaster report thinned out and showed fewer and fewer terms that this blog ranks for. Now there are only 4 terms, one of them being my name. Is there something I need to do to keep the old posts indexed and crawled? Thanks, Alex

    Read the article

  • LOB Pointer Indexing Proposal

    - by jchang
    My observations are that IO to LOB pages (and row-overflow pages as well?) is restricted to synchronous IO, which can result in serious problems when these reside on disk drive storage. Even if the storage system is composed of hundreds of HDDs, the realizable IO performance to LOB pages is that of a single disk, with some improvement in parallel execution plans. The reason for this appears to be that each thread must work its way through a page to find the LOB pointer information, and then generates...(read more)

    Read the article
