Search Results

Search found 21183 results on 848 pages for 'indexing service'.

Page 20/848 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • Why can the number of 'indexed' images go down?

    - by Roman Matveev
    I have a site with several thousand images, all of which are included in the sitemap submitted to Google Webmaster Tools. The number of 'submitted' images is fine, but the number of 'indexed' images is significantly lower than the number of 'submitted' ones, and it is going DOWN! I could accept that not all of my images get indexed (although that is also unclear and frustrating), but I cannot understand how indexing can move in the negative direction. All the images are still in place, and the pages containing them are unchanged, or at least they are intended to be. Any thoughts? (A short image-sitemap sketch follows below.)

    Read the article
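
    One thing worth ruling out in a case like this is whether every image URL in the sitemap still returns a 200 response at exactly the listed location, since Google tends to drop image entries it can no longer fetch or that now redirect. A minimal image-sitemap entry looks roughly like the sketch below; the URLs are placeholders, not the poster's actual paths.

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
          <url>
            <!-- the page that displays the image -->
            <loc>http://www.example.com/gallery/photo-page.html</loc>
            <!-- the image itself; it must return 200 at this exact URL -->
            <image:image>
              <image:loc>http://www.example.com/images/photo-1.jpg</image:loc>
            </image:image>
          </url>
        </urlset>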

  • How to test robots.txt against Googlebot to find out what is being indexed

    - by Amar Jarubula
    This question is a continuation of the answer to "How to check if googlebot will index a given url?" As suggested there, I went to Webmaster Tools and tested the contents of my robots.txt file, but that only tells me whether the file itself is valid. For my scenario I need to test whether URLs matching a disallowed pattern actually stay out of the index. For example, I have something like this in my robots.txt: disallow:/pattern* My understanding is that URLs containing the word "pattern" should not be crawled, but how do I test that this rule is enforced while the site is being indexed? (See the sketch below.)

    Read the article
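
    One detail worth noting: a rule of the form disallow:/pattern* only matches URLs whose path starts with /pattern, and since Googlebot treats a rule as a prefix match anyway, the trailing * adds nothing. If the intent is to block URLs that contain the word anywhere in the path, Googlebot's wildcard support allows a leading * as well. A minimal sketch (the /pattern path is the poster's placeholder, not a real URL):

        User-agent: *
        # blocks any URL whose path begins with /pattern
        Disallow: /pattern
        # blocks any URL that contains "pattern" anywhere in the path
        # (wildcard support is a Googlebot extension, not part of the original robots.txt standard)
        Disallow: /*pattern

    Keep in mind that robots.txt controls crawling rather than indexing: a blocked URL that is linked from elsewhere can still appear in results as a bare URL. The robots.txt testing tool in Webmaster Tools lets you enter an individual URL and see whether it would be allowed or blocked for Googlebot, which is the closest thing to testing enforcement without waiting for a crawl.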

  • Guaranteed Google Indexing

    Do you believe that you can get any site listed in Google in under seven days? Are you frustrated at hearing this while you are still waiting months to get your site indexed in Google? If this sounds like you, then maybe I can help.

    Read the article

  • Best practice Java - String array constant and indexing it

    - by Pramod
    For string constants it is usual to use a class with final String values, but what is the best practice for storing a string array? I want to store the different categories in a constant array, and every time a category is selected I want to know which one it is and process it accordingly. Addition: to make it clearer, I have categories A, B, C, D, E held as a constant array. Whenever a user clicks one of the items (the buttons carry those texts) I need to know which item was clicked and act on it. I could define an enum (say Cat) and each time write if clickedItem == Cat.A ... else if clickedItem == Cat.B ... and so on, or even register a separate listener for each item, but I wanted to know the best practice for handling this kind of problem. (An enum-based sketch is shown below.)

    Read the article
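
    A common way to avoid the if/else-if chain is to make the enum itself own both the display text and the behaviour, and to resolve the clicked label through a lookup map rather than comparisons. This is only a sketch under the assumptions in the question (category names A to E, a click handler that receives the button text); none of the class or method names come from the original post.

        import java.util.HashMap;
        import java.util.Map;

        public enum Category {
            A("A"), B("B"), C("C"), D("D"), E("E");

            private static final Map<String, Category> BY_LABEL =
                    new HashMap<String, Category>();
            static {
                // built once, when the enum class is loaded
                for (Category c : values()) {
                    BY_LABEL.put(c.label, c);
                }
            }

            private final String label;

            Category(String label) {
                this.label = label;
            }

            /** Resolves a button's text back to its category, or null if unknown. */
            public static Category fromLabel(String text) {
                return BY_LABEL.get(text);
            }

            /** Each constant can carry its own behaviour instead of an if/else chain. */
            public void process() {
                System.out.println("Processing category " + label);
            }
        }

    A single click handler can then call Category.fromLabel(clickedText) and invoke process() on the result (checking for null on unknown labels) instead of comparing against every constant, and adding a new category means adding one enum constant rather than touching the dispatch code.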

  • Aberdeen 10/25 Webcast: Service Excellence and the Path to Business Transformation

    - by Charles Knapp
    The uncertain economy has had a sustained impact on service organizations and processes. The impact has contributed to new complexities - new customer engagement channels, enhanced user and customer expectations, rapidly evolving technologies, increased competition, and increased compliance and regulatory mandates. Yet many organizations have embraced these challenges by investing in and transforming customer service to evolve, differentiate, and thrive under current constraints. What is their secret? Transforming Support Centers into Profit Centers According to the recent Aberdeen research report, “Service Excellence and the Path to Business Transformation”, service is now viewed as a strategic profit center at nearly 70% of organizations. As customers demand improved service, in terms of speed, efficiency and reliability, an organization's success has become increasingly dependent on optimizing the customer ownership experience. Those service organizations focused on providing easy, consistent, and relevant interactions across the customer lifecycle, including service and support delivery, are experiencing higher levels of customer acquisition and retention and are achieving better revenue and margin growth rates.  Don't miss this opportunity to learn how to transform to provide the next generation of service offerings. Click here to register now for the webcast and download a complimentary copy of this informative new research paper.

    Read the article

  • Efficient SQL Server Indexing by Design

    Having a good set of indexes on your SQL Server database is critical to performance. Efficient indexes don't happen by accident; they are designed to be efficient. Greg Larsen discusses whether primary keys should be clustered, when to use filtered indexes and what to consider when using the Fill Factor.

    Read the article

  • Alexa indexing browsing history?

    - by Haluk
    We have a test.php script sitting around in a forgotten folder; it just sends an email to our site admin. We never had a page linking to it, it is not indexed by Google, and it does not exist in the Internet Archive's Wayback Machine. Yet every now and then it gets crawled by ia_archiver, and I wonder how it was ever discovered. Could it be because of the Alexa toolbar installed on our computer? Does Alexa index our personal browsing history?

    Read the article

  • Is it safe to block these URLs with robots.txt?

    - by Edgar Quintero
    I have a website where all URLs have been optimized and 301 redirected from nasty URLs to clean ones. However, throughout the site the unclean URLs are still linked in menus, content, products, and so on. Google currently has all the clean URLs indexed, along with a few unclean ones too. So the old URLs are still linked everywhere (ideally this wouldn't be the case, but that is how it is at the moment). I would like to block the unclean URLs with robots.txt. The question: if I block these unclean URLs with robots.txt while the entire website still links to them (but they all redirect to the clean versions), will this affect the indexing status at all?

    Read the article

  • How do I stop Google indexing my main page as https [duplicate]

    - by user2897488
    This question already has an answer here: "https:// search results appearing on Google for purely http:// site". For historic reasons we have things set up so that "www.mydomain.com" redirects to "store.mydomain.com". This worked perfectly fine until recently, when Google appears to be sending visitors to "https:// www.mydomain.com", which doesn't have an SSL certificate (and never has). Strangely, it is only the first link that goes to "https:// www.mydomain.com"; all the other links point correctly to "http:// store.mydomain.com". Because there is no certificate on the "www" version, users are getting an error message. How do I make Google revert to pointing the main link at "http:// store.mydomain.com" (or even "http:// www.mydomain.com")? If I remove "https:// www.mydomain.com" from Google Webmaster Tools, will this also remove the redirected page ("http:// store.mydomain.com")? Thanks.

    Read the article

  • Microsoft BI Indexing Connector Announced

    - by Enrique Lima
    Wait, more awesome stuff released? With Microsoft’s acquisition of FAST, the options for indexing content have increased. That is not the whole story, but for the purposes of this post we focus on Business Intelligence content, and that is where we see the benefit at this time. Here is the link to the SharePoint Insights: BI In Action blog, where you will find guidance and components to download.

    Read the article

  • Re-indexing website with clean URLs

    - by artsi
    So I have a website with URLs like http://www.domain.com/profile.php?id=151, which I've now cleaned up with mod_rewrite into http://www.domain.com/profile/firstname-lastname/151. I've fetched and re-indexed my website after the change. What is the best way to make the old dirty URLs disappear from search results while keeping the clean ones? Is blocking profile.php with robots.txt enough? (See the note and sketch below.)

    Read the article
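
    A caveat worth adding here: robots.txt blocks crawling, so if profile.php is disallowed, Googlebot can no longer see the 301s that point from the old URLs to the clean ones, and the already-indexed dirty URLs may linger instead of being replaced. Leaving the redirects crawlable (and, if needed, adding a rel="canonical" on the clean pages) is usually the faster route. If blocking is still wanted afterwards, a minimal entry would look like this (the path is taken from the question, everything else is assumed):

        User-agent: *
        # prevents further crawling of the old dynamic URLs,
        # but also hides their 301 redirects from Googlebot
        Disallow: /profile.php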

  • What measures can be taken to make sure Google is aware of the existence of a newly created page?

    - by knorv
    Consider a website with a large number of pages where new pages are published regularly. When publishing a new page, the website operator wants to get the newly created page indexed in Google as soon as possible, minimizing the time between publication and indexing. For example, take the site http://www.example.com/ with hundreds of thousands of pages. The page http://www.example.com/something/important-page.html is created at, say, 12:00, and I want important-page.html indexed as soon as possible after 12:00, ideally within seconds or minutes. What options are available to get Google to index a specific newly created page as soon as possible? (A sitemap-based sketch follows below.)

    Read the article
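
    One commonly used lever (beyond linking the new page prominently from already-crawled pages) is to add the page to the XML sitemap the moment it goes live and then notify Google that the sitemap has changed, either by resubmitting it in Webmaster Tools or by requesting the sitemap ping endpoint. A sketch of the new entry follows; the lastmod timestamp is invented for illustration:

        <url>
          <loc>http://www.example.com/something/important-page.html</loc>
          <!-- hypothetical timestamp: the moment the page was published -->
          <lastmod>2012-06-01T12:00:00+00:00</lastmod>
          <changefreq>hourly</changefreq>
          <priority>0.8</priority>
        </url>

    The ping is a plain GET of http://www.google.com/ping?sitemap=http://www.example.com/sitemap.xml. None of this guarantees indexing within minutes, but it removes the wait for Google's next scheduled fetch of the sitemap.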

  • Google Site Search (commercial) not indexing files in sitemap

    - by melat0nin
    I have a client for whom we have purchased Google Site Search. It works well for HTML pages served by the CMS, but files aren't being reliably indexed. I wrote a script to generate an XML feed (sitemap) of all the files in the CMS, which I've plugged into Google Webmaster Tools for the site. It says that for that sitemap 923 URLs have been submitted, but only 26 have been indexed. The client relies heavily on searching within files, which is why we decided to use Google search, so this is a bit of a problem. Many of the files aren't linked to from any page on the site, as they are old and therefore don't merit having a page of their own, but they still need to be accessible through search for archiving purposes. The file archive XML can be found at www.sniffer.org.uk/file-archive and the standard XML sitemap (of pages) can be found at www.sniffer.org.uk/sitemap.xml. Any thoughts would be much appreciated!

    Read the article

  • Proper Search Engine Indexing

    Google's search engine looks like nothing more than a very simple search box, but behind the scenes it is a complicated algorithm that grows more sophisticated as time goes on. Search engines like Google, Yahoo, and Bing are getting more precise at ranking websites in order to help people find the information they are looking for.

    Read the article

  • Google Indexing Issue after htaccess changes

    - by Klement
    I have a site called www.FuneralCoverFinder.co.za with about 30 pages, and I usually have 29 of them indexed (excluding 15 blog posts, which are new). I recently upgraded the entire site and made some redirection changes in my .htaccess file, making my URLs more SEO-friendly (removing index.php/) and redirecting dead pages to working pages. I have plenty of unique content, all checked with Grammarly and Plagium to ensure there is no duplicate content. I have since resubmitted my sitemap to Google and now have only one page indexed. That happened within a couple of minutes; I usually see results almost immediately after submitting, but now it is stuck on one page indexed. I assume I may have made errors in the .htaccess file, as this was my first attempt, although the site runs perfectly and all the URLs redirect the way they should. I'm worried I have created some redirect loop, even though the website runs fine. I still see many of my old indexed pages in the SERPs; I'm just worried that the issue with the new sitemap could harm my rankings. The site is fairly well optimized on-page, and I have about 1500 indexed backlinks built steadily over about half a year. I would really appreciate some clarity on this matter. (A typical rewrite sketch is shown below.)

    Read the article
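
    For reference, a common way the index.php/ removal is done in .htaccess is one external 301 plus one internal rewrite. This is only a generic sketch of that pattern, not taken from the poster's actual file, but comparing it against the live rules can help spot an accidental loop (which usually shows up as a URL redirecting to itself):

        RewriteEngine On

        # externally 301 any /index.php/... request to the clean URL
        RewriteCond %{THE_REQUEST} \s/index\.php/(\S*)\s [NC]
        RewriteRule ^ /%1 [R=301,L]

        # internally map the clean URL back onto index.php for the CMS
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /index.php/$1 [L]

    If the live file differs materially from this shape, the Fetch as Google tool in Webmaster Tools will show exactly what response code Googlebot receives for a sample URL.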

  • A relatively new blog seems to be getting very poor Google indexing

    - by Genadinik
    I have a new blog that is two months old. In the first few weeks it was getting indexed nicely, and my Google Webmaster reports showed that it was being crawled and had begun ranking for some terms. Then, as I kept writing, the Webmaster report thinned out and showed fewer and fewer terms that the blog ranks for. Now there are only four terms, one of them being my name. Is there something I need to do to keep the old posts indexed and crawled? Thanks, Alex

    Read the article

  • LOB Pointer Indexing Proposal

    - by jchang
    My observation is that IO to LOB pages (and row-overflow pages as well?) is restricted to synchronous IO, which can result in serious problems when these pages reside on disk drive storage. Even if the storage system is composed of hundreds of HDDs, the realizable IO performance to LOB pages is that of a single disk, with some improvement in parallel execution plans. The reason for this appears to be that each thread must work its way through a page to find the LOB pointer information, and then generates...(read more)

    Read the article

  • Google Indexing Gets You Noticed

    Getting attention in the online world is not as easy as it might seem, and your website has to do a lot to get its share of visitors. The internet has so many websites that before someone reaches yours, they usually have to browse through plenty of other businesses just like it.

    Read the article

  • Provide a user with service start/stop permissions

    - by slakr007
    I have a very basic domain that I use for development. I want to create a GPO that gives users in the Backup Operators group start/stop permissions for two specific services on a specific server. I have read several articles about this, and they all indicate that it is very easy: create a GPO, grant the user start/stop permissions to the services under Computer Configuration > Policies > Windows Settings > Security Settings > System Services, and voila, done. Not so much, so I must be doing something wrong. My install is pretty much the default: the domain controller is in the Domain Controllers OU, the Backup Operators group is under Builtin, and I created a user called Backup under Users. I created a GPO and linked it to the Domain Controllers OU, and in the GPO I gave the Backup user permission to start/stop two specific services on the server. I forced an update with gpupdate and used Group Policy Results to verify that my GPO is the winning GPO granting the permission. However, the user is still unable to start/stop the services. I attempted different loopback settings on the GPO to no avail. I'm sort of at a loss here. (A quick verification command is noted below.)

    Read the article
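
    One low-effort way to see whether the policy is actually reaching the service is to dump the service's security descriptor on the target server before and after gpupdate and check whether the new access control entry appears. The service name below is a placeholder, not one of the services from the question:

        rem show the SDDL security descriptor currently applied to the service
        sc sdshow Spooler

    If the descriptor is unchanged after a refresh, the problem is in policy application; if it has changed but access is still denied, the descriptor is granting the wrong rights or the wrong SID.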

  • .Net 3.5 Windows Service hide WCF Service Host

    - by Melursus
    I have a Windows service that I wrote installed on my development machine, and I want to interact with it. For a reason I don't understand, each time I start the client, a WCF Service Host pops up and says that the address is already in use ... which is true ... but what can I do to NOT start that window? Is it because my two projects (server and client) are in the same solution?

    Read the article

  • Consuming WebSphere service from WCF client: Unable to create AxisService from ServiceEndpointAddress

    - by JohnIdol
    I am consuming (or trying to consume) a WebSphere service from a WCF client (service reference and bindings generated through svcutil). The connection seems to be established successfully, but I am getting the following error: CWWSS7200E: Unable to create AxisService from ServiceEndpointAddress [address]. Does this ring any bells? I am guessing the request format is somehow being rejected by the service; I am sniffing it with Fiddler and it looks fine overall (I can post it if people think it could help). I found an article on the subject, but it doesn't seem to apply to my case. Any help appreciated!

    Read the article

  • How to start a process from within a Windows service

    - by BaBu
    I want to pop up a browser with a given URL from within a Windows service, like so: System.Diagnostics.Process.Start("http://www.venganza.org/"); This works fine when running in a console but not from within the service. There are no error messages and no exceptions; the Process.Start() call just seems to do nothing. It smells like some security issue, maybe something with the service properties and/or logon options? Annoying stuff, this... Anybody? (Oh, and this is on Windows 7 / .NET Framework 3.5.)

    Read the article

  • Problems with VBScript - RegRead when running as a service

    - by Brandon
    I am working on a script that runs under a custom installation utility, which runs as a service. To get the current user name the script executes these statements: str_Acct_Name_Val = "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Logon User Name" str_Acct_Name = RegRead(str_Acct_Name_Val) When I run the script from the command prompt (under an administrator account), it reads that value just fine. When the value is read with service/LocalSystem privileges, the read fails. What is the problem here? EDIT: some additional information. When running as a service, asking for the current user name returns "SYSTEM", and my guess is that HKCU doesn't "exist" from SYSTEM's point of view, since there is technically no current user. There is a user logged in at the time, but not in the scope of the running script. Maybe there is somewhere in HKLM where I could find the currently logged-on user?

    Read the article

< Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >