Search Results

Search found 1124 results on 45 pages for 'indexing'.

Page 5/45 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Strategies for Indexing Custom Fields in RavenDB

    - by Adrian Thompson Phillips
    In the relational database world, if I were developing a CRM system and wanted to let users add their own searchable custom fields, I could have tables that store the name of the new column, the data type, the value, etc. (which is less efficient to index), or I could use the less elegant (but more searchable) solution that software like Dynamics and SharePoint uses, whereby I create a load of columns on my aggregate root called CustomInt1, CustomInt2, etc. (which looks dirty and limits how many custom fields a user can have, but has indexing advantages). But my question is this: in NoSQL databases, what would be the best way of achieving the same thing? My priority is searchability. So what would be the best way to store this data? If I used a predefined set of properties (i.e. CustomData1, CustomData2, etc.), because these are all stored as JSON (i.e. strings) in the database, does that make things simpler because I don't have to worry about data types?
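
    For illustration only (the property and field names below are made up, and no RavenDB API specifics are shown): a document store lets you keep arbitrary custom fields as a nested map on the document itself rather than reserving CustomData1..N slots, and then index over whatever keys appear. A minimal Python sketch of the two document shapes being compared:

        # Sketch only: two ways to shape a CRM "Contact" document in a document database.
        # Field names here are hypothetical, not a RavenDB API.

        # Option 1: fixed slots, Dynamics/SharePoint style
        contact_fixed = {
            "Id": "contacts/1",
            "Name": "Acme Ltd",
            "CustomData1": "42",      # meaning defined elsewhere
            "CustomData2": "Gold",
            "CustomData3": None,      # unused slot
        }

        # Option 2: an open dictionary of user-defined fields
        contact_dynamic = {
            "Id": "contacts/2",
            "Name": "Globex Inc",
            "CustomFields": {         # any number of fields, named by the user
                "Region": "EMEA",
                "EmployeeCount": 250,
            },
        }

        # Searching option 2 means indexing every key/value pair under CustomFields,
        # which document databases typically support via dynamic indexes.
        def custom_field_terms(doc):
            return [(k, v) for k, v in doc.get("CustomFields", {}).items()]

        print(custom_field_terms(contact_dynamic))  # [('Region', 'EMEA'), ('EmployeeCount', 250)]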

    Read the article

  • Keep search engine from indexing specific content on your site

    - by Jimmy Chopps
    I've got a pretty weird scenario that I was hoping someone could help me out with. I recently created a blog site and noticed that search engines have been including the content of my footer in the description. This presents a problem because my footer is basically a brief legal statement saying that the views are my own and don't represent the company I work for (and yada yada yada). So, basically, I need a way to prevent search engines from indexing the content in my footer, or even my footer altogether. I've been looking back through some of my SEO books and searching through forums, but this doesn't seem to be possible. Is it possible to keep search engines from indexing only certain content on a page? If it isn't, what alternatives are there to ensure this legal mumbo jumbo doesn't show up in the results?

    Read the article

  • Why is Google still not indexing my #! website?

    - by Zubair
    I have been working on a website which uses #! (2minutecv.com), but even after 6 weeks of the site being up and running and conforming to the Google hash bang guidelines stated here, you can see that Google still hasn't indexed the site. For example, if you use Google to search for 2MinuteCV.com benefits it does not find this page, which is referenced from the homepage. Can anyone tell me why Google isn't indexing this website? Update: Thanks for all the help with this answer. So, just to make sure I understand what is wrong: according to the answers, Google never actually indexes the pages after the Javascript has run. I need to create a "shadow site" which Google indexes (which Google calls HTML snapshots). If I am right in thinking this then I can pick a winner for the bounty.
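
    For context, the (since-deprecated) AJAX crawling scheme worked by having the crawler rewrite #! URLs into an _escaped_fragment_ query parameter and expecting the server to return a pre-rendered HTML snapshot for it. A minimal sketch of that idea, using Flask purely for illustration (the render_snapshot helper is hypothetical):

        # Sketch only: serve an HTML snapshot when a crawler requests ?_escaped_fragment_=...
        # e.g. http://example.com/#!about  ->  crawler fetches http://example.com/?_escaped_fragment_=about
        from flask import Flask, request

        app = Flask(__name__)

        def render_snapshot(state):
            # Hypothetical helper: return fully rendered HTML for this #! state,
            # e.g. produced by a headless browser or a server-side template.
            return "<html><body><h1>Snapshot for %s</h1></body></html>" % state

        @app.route("/")
        def index():
            fragment = request.args.get("_escaped_fragment_")
            if fragment is not None:
                # Crawler asking for the snapshot of #!<fragment>
                return render_snapshot(fragment)
            # Normal visitors get the JavaScript application shell
            return "<html><body><script src='/app.js'></script></body></html>"

        if __name__ == "__main__":
            app.run()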

    Read the article

  • Search for odt files without indexing

    - by josinalvo
    I am looking for:
    1) a way to search inside odt files (i.e. search for contents, not name)
    2) that does not require any kind of indexing
    3) that is graphical and very user-friendly (for a relatively old person, who does not like computers much)
    I know that it is possible to have 1) and 2):

        for x in `find . -iname '*odt'`; do odt2txt $x | grep Query; done

    works well enough, and it's pretty fast. But I wonder if there is already a good solution that does this with a GUI (or can be adapted to do this easily).
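
    Not a GUI, but as a sketch of the underlying approach: an .odt file is just a zip archive whose text lives in content.xml, so a script can search contents with no index and no odt2txt dependency. Assumptions: a plain search term and Python 3.

        # Sketch only: grep-like search inside .odt files without indexing.
        # An .odt is a zip archive; the document text is in content.xml.
        import os
        import re
        import zipfile

        def search_odt(root, term):
            pattern = re.compile(re.escape(term), re.IGNORECASE)
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    if not name.lower().endswith(".odt"):
                        continue
                    path = os.path.join(dirpath, name)
                    try:
                        with zipfile.ZipFile(path) as z:
                            xml = z.read("content.xml").decode("utf-8", "ignore")
                    except (zipfile.BadZipFile, KeyError):
                        continue  # not a valid odt, or no content.xml inside
                    # Strip XML tags so we only match visible text
                    text = re.sub(r"<[^>]+>", " ", xml)
                    if pattern.search(text):
                        print(path)

        if __name__ == "__main__":
            search_odt(".", "Query")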

    Read the article

  • sprite group doesn't support indexing

    - by user3956
    I have a sprite group created with pygame.sprite.Group() (and I add sprites to it with the add method). How would I retrieve the nth sprite in this group? Code like this does not work:

        mygroup = pygame.sprite.Group(mysprite01)
        print mygroup[n].rect

    It returns the error: group object does not support indexing. For the moment I'm using the following function:

        def getSpriteByPosition(position, group):
            for index, spr in enumerate(group):
                if index == position:
                    return spr
            return False

    Although it works, it just doesn't seem right... Is there a cleaner way to do this? EDIT: I mention a "position" and index, but it's not important actually; returning any single sprite from the group is enough.
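
    A common way to do this, since pygame's Group exposes its members via the sprites() method (which returns a plain list), is shown below; the Block class and group contents are placeholders:

        # Sketch only: pull a single sprite out of a pygame.sprite.Group.
        import pygame

        class Block(pygame.sprite.Sprite):
            def __init__(self):
                super().__init__()
                self.image = pygame.Surface((10, 10))
                self.rect = self.image.get_rect()

        mygroup = pygame.sprite.Group(Block(), Block(), Block())

        # Group.sprites() returns an ordinary list, which does support indexing.
        nth = mygroup.sprites()[1]
        print(nth.rect)

        # If any member will do, next(iter(...)) avoids building the list.
        any_sprite = next(iter(mygroup))
        print(any_sprite.rect)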

    Read the article

  • Tune Your Indexing Strategy with SQL Server DMVs

    SQL Server indexes need to be effective: it is a mistake to have too few or too many. The ones you create must ensure that the workload reads the data quickly with a minimum of I/O. As well as a sound knowledge of the way that relational databases work, it helps to be familiar with the Dynamic Management Objects that are there to assist with your indexing strategy.
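
    As a hedged illustration of the kind of DMV query this is about (the connection string is a placeholder, and the query is a common pattern rather than anything taken from the article), sys.dm_db_index_usage_stats shows how often each index is actually read versus written:

        # Sketch only: list index read/write counts for the current database.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=MyDb;Trusted_Connection=yes;"  # placeholder connection
        )

        sql = """
        SELECT OBJECT_NAME(s.object_id) AS table_name,
               i.name                   AS index_name,
               s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
        FROM sys.dm_db_index_usage_stats AS s
        JOIN sys.indexes AS i
          ON i.object_id = s.object_id AND i.index_id = s.index_id
        WHERE s.database_id = DB_ID()
        ORDER BY s.user_updates DESC;
        """

        # Indexes with many user_updates but few seeks/scans/lookups are candidates for review.
        for row in conn.cursor().execute(sql):
            print(row.table_name, row.index_name, row.user_seeks, row.user_scans,
                  row.user_lookups, row.user_updates)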

    Read the article

  • If C-Panel Indexing Manager sets a folder to "No Indexing" can it be crawled by a webcrawler?

    - by Graham
    People are able to view directories / folders on my site right now. So, they could go to mysite.com/images and see the full index. To prevent this, C-Panel offers an option to set a directory / folder to "No Indexing" under the "Index Manager." Will this option allow webcrawlers to crawl / index the images? Or, is there a simpler alternative to block access to all folders directly while still having it SEO friendly? My old server restricted direct access to folders by default. But, the new one does not. Any ideas on this? Thanks!

    Read the article

  • What are the popular file indexing engines on Linux?

    - by netvope
    It would be nice if you could share your experience on the pros and cons of each of them. Personally I only know Google Desktop and Beagle, and I haven't really used them. I mainly store my files on Windows (and use its integrated indexed search), but I'm trying to see if I can switch over to Linux. Also, can any of the search indexers run without X? Do any of them provide an API for search queries?

    Read the article

  • Full-text Indexing Books Online

    - by Most Valuable Yak (Rob Volk)
    While preparing for a recent SQL Saturday presentation, I was struck by a crazy idea (shocking, I know): could someone import the content of SQL Server Books Online into a database and apply full-text indexing to it? The answer is yes, and it's really quite easy to do.

    The first step is finding the installed help files. If you have SQL Server 2012, BOL is installed under the Microsoft Help Library. You can find the install location by opening SQL Server Books Online and clicking the gear icon for the Help Library Manager. When the new window pops up, click the Settings link and you'll see the path under Library Location. Once you navigate to that path you'll have to drill down a little further, to C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store. This is where the help file content is kept if you downloaded it for offline use.

    Depending on which products you've downloaded help for, you may see a few hundred files. Fortunately they're named well and you can easily find the "SQL_Server_Denali_Books_Online_" files. We are interested in the .MSHC files only, and can skip the Installation and Developer Reference files.

    Despite the .MSHC extension, these files are compressed with the standard Zip format, so your favorite archive utility (WinZip, 7Zip, WinRar, etc.) can open them. When you do, you'll see a few thousand files in the archive. We are only interested in the .htm files, but there's no harm in extracting all of them to a folder. 7zip provides a command-line utility, and the following will extract to a D:\SQLHelp folder previously created:

        7z e -oD:\SQLHelp "C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store\SQL_Server_Denali_Books_Online_B780_SQL_110_en-us_1.2.mshc" *.htm

    Well that's great Rob, but how do I put all those files into a full-text index? I'll tell you in a second, but first we have to set up a few things on the database side. I'll be using a database named Explore (you can certainly change that), and the following setup is a fragment of the script I used in my presentation:

        USE Explore;
        GO
        CREATE SCHEMA help AUTHORIZATION dbo;
        GO
        -- Create default fulltext catalog for later FT indexes
        CREATE FULLTEXT CATALOG FTC AS DEFAULT;
        GO
        CREATE TABLE help.files(
            file_id int not null IDENTITY(1,1) CONSTRAINT PK_help_files PRIMARY KEY,
            path varchar(256) not null CONSTRAINT UNQ_help_files_path UNIQUE,
            doc_type varchar(6) DEFAULT('.xml'),
            content varbinary(max) not null);
        CREATE FULLTEXT INDEX ON help.files(content TYPE COLUMN doc_type LANGUAGE 1033)
            KEY INDEX PK_help_files;

    This will give you a table, default full-text catalog, and full-text index on that table for the content you're going to insert. I'll be using the command line again for this; it's the easiest method I know:

        for %a in (D:\SQLHelp\*.htm) do sqlcmd -S. -E -d Explore -Q"set nocount on;insert help.files(path,content) select '%a', cast(c as varbinary(max)) from openrowset(bulk '%a', SINGLE_CLOB) as c(c)"

    You'll need to copy and run that as one line in a command prompt. I'll explain what this does while you run it and watch several thousand files get imported: the "for" command allows you to loop over a collection of items. In this case we want all the .htm files in the D:\SQLHelp folder. For each file it finds, it will assign the full path and file name to the %a variable. In the "do" clause, we'll specify another command to be run for each iteration of the loop. I make a call to "sqlcmd" in order to run a SQL statement. I pass in the name of the server (-S.), where "." represents the local default instance. I specify -d Explore as the database, and -E for trusted connection. I then use -Q to run a query that I enclose in double quotes. The query uses OPENROWSET(BULK…SINGLE_CLOB) to open the file as a data source, and to treat it as a single character large object. In order for full-text indexing to work properly, I have to convert the text content to varbinary. I then INSERT these contents along with the full path of the file into the help.files table created earlier. This process continues for each file in the folder, creating one new row in the table.

    And that's it! 5 SQL statements and 2 command-line statements to unzip and import SQL Server Books Online! In case you're wondering why I didn't use FILESTREAM or FILETABLE, it's simply because I haven't learned them…yet. I may return to this blog after I figure that out and update it with the steps to do so. I believe that will make it even easier.

    In the spirit of exploration, I'll leave you to work on some fulltext queries of this content. I also recommend playing around with the sys.dm_fts_xxxx DMVs (I particularly like sys.dm_fts_index_keywords; it's pretty interesting). There are additional example queries in the download material for my presentation linked above.

    Many thanks to Kevin Boles (t) for his advice on (re)checking the content of the help files. Don't let that .htm extension fool you! The 2012 help files are actually XML, and you'd need to specify '.xml' in your document type column in order to extract the full-text keywords. (You probably noticed this in the default definition for the doc_type column.) You can query sys.fulltext_document_types to get a complete list of the types that can be full-text indexed.

    I also need to thank Hilary Cotter for giving me the original idea. I believe he used MSDN content in a full-text index for an article from waaaaaaaaaaay back, that I can't find now, and had forgotten about until just a few days ago. He is also co-author of Pro Full-Text Search in SQL Server 2008, which I highly recommend. He also has some FTS articles on Simple Talk:
    http://www.simple-talk.com/sql/learn-sql-server/sql-server-full-text-search-language-features/
    http://www.simple-talk.com/sql/learn-sql-server/sql-server-full-text-search-language-features,-part-2/

    Read the article

  • Thinking Sphinx with Rails - Delta indexing seems to work fine for one model but not for the other

    - by hack3r
    I have 2 models, User and Discussion, and I have defined the indices for them as below.

    For the User model:

        define_index do
          indexes email
          indexes first_name
          indexes last_name, :sortable => true
          indexes groups(:name), :as => :group_names
          has "IF(email_confirmed = true and status = 'approved', true, false)", :as => :approved_user, :type => :boolean
          has "IF(email_confirmed = true and (status = 'approved' or status='blocked'), true, false)", :as => :approved_or_blocked_user, :type => :boolean
          has points, :type => :integer
          has created_at, :type => :datetime
          has user(:id)
          set_property :delta => true
        end

    For the Discussion model:

        define_index do
          indexes title
          indexes description
          indexes category(:title), :as => :category_title
          indexes tags(:title), :as => :tag_title
          has "IF(publish_to_blog = true AND sticky = false, true, false)", :as => :publish_to_main, :type => :boolean
          has created_at
          has updated_at, :type => :datetime
          has recent_activity_at, :type => :datetime
          has views_count, :type => :integer
          has featured
          has publish_to_blog
          has sticky
          set_property :delta => true
        end

    I have added a delta column to both tables as per the documentation. My problem is that delta indexing works only for the Discussion model and not for the User model. For example, when I update the 'title' of a discussion, I can see that Thinking Sphinx is rotating the indices etc. (as is evident from the logs). But when I update the 'first_name' or the 'last_name' of a user, nothing happens. The User model also has a has_many :through association through a model called GroupsUser. I have set up an after_save on GroupsUser as follows:

        def set_user_delta_flag
          user.delta = true
          user.save
        end

    Even this doesn't seem to trigger delta indexing on the User model. A similar setup for the Discussion model works perfectly! Can anyone tell me why this is happening?

    Read the article

  • SQL Server – Learning SQL Server Performance: Indexing Basics – Interview of Vinod Kumar by Pinal Dave

    - by pinaldave
    Recently I wrote a blog post about Learning SQL Server Performance: Indexing Basics, and I received lots of requests to share some insight into the course. Every time performance is discussed, indexes are mentioned along with it. In recent times, data and application complexity have been growing continuously. The demand from organizations for faster query response, performance, and scalability is increasing, and developers and DBAs now need to write efficient code to achieve this. When we developed the course, we made sure that it remains practical and demo-heavy instead of just theory on the subject. Vinod Kumar and I often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of indexing, but there is some minimum expertise one should gain if performance is a concern. Here is a 200-second interview of Vinod Kumar I took right after completing the course. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video

    Read the article

  • Data indexing frameworks fit for large E-Commerce applications

    - by Dabu
    We wrote and still maintain a large E-Commerce application. Our feature list resembles what you would expect from most shops. We'd like to improve some of our features, and now the search/suggestion list functionality (enter some letters, a JScripted suggestion list appears) has caught our eye. Currently we use http://xapian.org/. It has some drawbacks. Firstly, it's not actually the right solution: it was created to index documents, not ever-changing data at the granularity an E-Commerce application needs. Secondly, the load on the database is significant when we reindex all data every night. We'd like a framework that has been designed for indexing database data, which can add to the index easily and without much load, and which can push data changes made in the back office to the frontend quickly, without much load and delay. I'm aware of the fact that Xapian is Open Source and even Free Software, so we could adapt it to our needs if we decided to invest the time and manpower. But taking a quick look around for a more suitable solution seems fair, right? Oh, and commercial applications are fine, too. FOSS is not required. Thanks a bunch.

    Read the article

  • PHP-based indexing and search implementation

    - by Chris
    Is there such a thing? A while back I designed a rudimentary form-based app for my users. We receive hardware manufacturing data from our suppliers in XML files: the file name is made of eleven fields separated by tildes, with each field having its own meaning. The R&D guys wanted to be able to search each field of the file names, so I used regex() with decent results. The problem is that we now have upwards of 2.5 million files, and my app can't hack it anymore. I looked at Apache Lucene & Solr. Though it seemed like the best solution to my problem, the fields in the filenames are not peers to the file content. Big no-no with Solr. What is the best way to implement a PHP app with indexing and search capability over such a large number of files? Do I have to buy Zend and use Zend_Search? Is that the only way? Thanks for your input.
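
    One common approach, sketched in Python here purely for illustration (the same idea ports directly to PHP with PDO), is to parse the eleven tilde-separated fields once into an indexed SQLite/MySQL table so searches hit indexed columns instead of running a regex over millions of names. The field names and paths below are placeholders:

        # Sketch only: index tilde-delimited filename fields in SQLite for fast lookup.
        import os
        import sqlite3

        FIELDS = ["f%02d" % i for i in range(1, 12)]  # placeholder names for the 11 fields

        def build_index(root, db_path="filenames.db"):
            con = sqlite3.connect(db_path)
            cols = ", ".join("%s TEXT" % f for f in FIELDS)
            con.execute("CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, %s)" % cols)
            for f in FIELDS:
                con.execute("CREATE INDEX IF NOT EXISTS idx_%s ON files(%s)" % (f, f))
            rows = []
            for dirpath, _dirs, names in os.walk(root):
                for name in names:
                    parts = os.path.splitext(name)[0].split("~")
                    if len(parts) != 11:
                        continue  # skip names that don't match the convention
                    rows.append([os.path.join(dirpath, name)] + parts)
            con.executemany(
                "INSERT OR REPLACE INTO files VALUES (%s)" % ",".join("?" * 12), rows)
            con.commit()
            return con

        # Usage: exact match on one field, prefix match on another (hypothetical values).
        con = build_index("/data/xml")
        cur = con.execute(
            "SELECT path FROM files WHERE f03 = ? AND f07 LIKE ?", ("ACME", "REV%"))
        for (path,) in cur:
            print(path)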

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic etc.) without transferring the data into a new boot system - partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list:
    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run Disk Warrior on the copy, repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value - wide open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer data to a newly formatted drive folder by folder, resetting the Spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except for the Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own, no trouble. It appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
    - use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files
    Between each of the above steps I stopped Spotlight (by searching for anything beginning with md in Activity Monitor - All Processes - and quitting it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it. In each case the same issue occurs - Spotlight begins indexing normally (or so it seems), then the index estimated time increases, usually to 4 hours remaining. This is where it gets stuck and continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4 hours remaining situation - if I reattach it, Spotlight forever estimates remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas?

    ---update 2-6-11
    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID, where PID is the mdworker process ID, to try and determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop result stops moving.
    I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:

        501 57 mds 21 /
        501 57 mds 21 /Volumes/Sno Leppard
        501 57 mds 21 /Volumes/Tiger
        501 57 mds 21 /Volumes/Leppard
        501 57 mds 21 /Volumes/Disk Warrior
        501 57 mds 21 /Volumes/ONM Data

    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions - what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M

    __final edit__
    I finally resolved the issue and here is how I did it. I used the terminal command "sudo opensnoop -p PID" where the PID is the process id of the processes I was monitoring. I was looking at all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated - mds and mdworker(s) would start indexing, and eventually the Spotlight icon would say 6 hours remaining, all mdworkers were gone, and mds was at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I still have no clue what is causing this behavior, but I did find a functional solution to the problem. Anyone with a similar problem - run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio related files - performer, live, SD2 - things like that), I will edit again to let you all know. Thanks for any attempts to help, M

    Read the article

  • Indexing data from multiple tables with Oracle Text

    - by Roger Ford
    It's well known that Oracle Text indexes perform best when all the data to be indexed is combined into a single index. The query

        select * from mytable where contains (title, 'dog') > 0 or contains (body, 'cat') > 0

    will tend to perform much worse than

        select * from mytable where contains (text, 'dog WITHIN title OR cat WITHIN body') > 0

    For this reason, Oracle Text provides the MULTI_COLUMN_DATASTORE, which will combine data from multiple columns into a single index. Effectively, it constructs a "virtual document" at indexing time, which might look something like:

        <title>the big dog</title>
        <body>the ginger cat smiles</body>

    This virtual document can be indexed using either AUTO_SECTION_GROUP, or by explicitly defining sections for title and body, allowing the query as expressed above. Note that we've used a column called "text" - this might have been a dummy column added to the table simply to allow us to create an index on it - or we could have created the index on either of the "real" columns - title or body. It should be noted that MULTI_COLUMN_DATASTORE doesn't automatically handle updates to the columns used by it - if you create the index on the column text, but specify that columns title and body are to be indexed, you will need to arrange triggers such that the text column is updated whenever title or body are altered.

    That works fine for single tables. But what if we actually want to combine data from multiple tables? In that case there are two approaches which work well:
    1. Create a real table which contains a summary of the information, and create the index on that using the MULTI_COLUMN_DATASTORE. This is simple and effective, but it does use a lot of disk space as the information to be indexed has to be duplicated.
    2. Create our own "virtual" documents using the USER_DATASTORE. The user datastore allows us to specify a PL/SQL procedure which will be used to fetch the data to be indexed, returned in a CLOB, or occasionally in a BLOB or VARCHAR2. This PL/SQL procedure is called once for each row in the table to be indexed, and is passed the ROWID value of the current row being indexed. The actual contents of the procedure are entirely up to the owner, but it is normal to fetch data from one or more columns of database tables.
    In both cases, we still need to take care of updates - making sure that we have all the triggers necessary to update the indexed column (and, in case 1, the summary table) whenever any of the data to be indexed gets changed.

    I've written full examples of both these techniques, as SQL scripts to be run in the SQL*Plus tool. You will need to run them as a user who has the CTXAPP role and the CREATE DIRECTORY privilege. Part of the data to be indexed is a Microsoft Word file called "1.doc". You should create this file in Word, preferably containing the single line of text "test document". This file can be saved anywhere, but the SQL scripts need to be changed so that the "create or replace directory" command refers to the right location. In the example, I've used C:\doc.
    multi_table_indexing_1.sql : creates a summary table containing all the data, and uses multi_column_datastore. Download link / View in browser
    multi_table_indexing_2.sql : creates "virtual" documents using a procedure as a user_datastore. Download link / View in browser

    Read the article

  • Easy way to identify pages which are indexed, but not linked to on my site

    - by Stephen Connolly
    I'm just wondering if there is an easy way to find out which pages Google has indexed that are not directly linked on my site. For example: www.mysite.com/skype.html originally had a link in my site menu, but this link was removed. The page is still available when the URL is typed directly, but there is no direct link to it. I don't want Google indexing this page any more and want to put a 'disallow' in my robots.txt.
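
    One way to approach this, sketched below under the assumption that you can export a list of indexed URLs (for example from Webmaster Tools or a site: query), is to crawl your own site, collect every internally linked URL, and diff the two sets. The start URL and file name are placeholders:

        # Sketch only: find URLs that are indexed but no longer linked from the site.
        from html.parser import HTMLParser
        from urllib.parse import urljoin, urlparse
        from urllib.request import urlopen

        class LinkCollector(HTMLParser):
            def __init__(self):
                super().__init__()
                self.links = []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    for name, value in attrs:
                        if name == "href" and value:
                            self.links.append(value)

        def crawl(start_url, limit=500):
            site = urlparse(start_url).netloc
            seen, queue = set(), [start_url]
            while queue and len(seen) < limit:
                url = queue.pop()
                if url in seen:
                    continue
                seen.add(url)
                try:
                    html = urlopen(url).read().decode("utf-8", "ignore")
                except Exception:
                    continue  # skip pages that fail to load or aren't text
                parser = LinkCollector()
                parser.feed(html)
                for href in parser.links:
                    absolute = urljoin(url, href).split("#")[0]
                    if urlparse(absolute).netloc == site:
                        queue.append(absolute)
            return seen

        linked = crawl("http://www.mysite.com/")                  # placeholder start page
        indexed = set(open("indexed_urls.txt").read().split())    # exported list, one URL per line
        for url in sorted(indexed - linked):
            print("indexed but not linked:", url)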

    Read the article

  • Windows 7: Search indexing is stuck

    - by Ricket
    When I open Indexing Options, it says:

        4,317 items indexed
        Indexing in progress. Search results might not be complete during this time.

    It's stuck at 4,317 though; no more items have been indexed. Worst of all, SearchIndexer.exe is taking up 100% CPU (well, 50%, but I have a dual-core CPU; it's taking up all the processing power it can). It is not causing hard drive activity though. I tried clicking "Troubleshoot search and indexing" at the bottom of the Indexing Options window, but it couldn't find any problem. I've also tried the repair registry key that several websites suggest: I changed HKLM\SOFTWARE\Microsoft\Windows Search\SetupCompletedSuccessfully to 0 and restarted the computer, and it apparently repaired because it flipped back to 1, but the same problem continues to occur. It's reducing the battery life of my laptop and making it really hot, so my fans are running all the time. I've had to disable the Windows Search service. How can I fix this? Do I need to just flat-out reformat my computer?

    Update: I've tried rebuilding a couple of times. There's nothing unusual about the locations I have to index, and I don't have any downloads in progress or anything like that. I don't see any reason why it stopped, and I noticed it much too late to do a system restore. At this point, I'm hoping someone will offer up some secret answer that will fix the problem, thus the bounty.

    Another update: I tried starting the service again, just to let it try yet again. It seemed okay at first (Indexing Options showed it operating at reduced speed due to user activity, and the number of files was going up). A while later I checked, and the service had stopped. Event Viewer revealed some errors like this:

        Log Name:      Application
        Source:        Application Error
        Date:          2/1/2010 7:34:23 PM
        Event ID:      1000
        Task Category: (100)
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      ricky-win7
        Description:
        Faulting application name: SearchIndexer.exe, version: 7.0.7600.16385, time stamp: 0x4a5bcdd0
        Faulting module name: NLSData0007.dll, version: 6.1.7600.16385, time stamp: 0x4a5bda88
        Exception code: 0xc0000005
        Fault offset: 0x002141ba
        Faulting process id: 0x13a0
        Faulting application start time: 0x01caa39f2a70ec02
        Faulting application path: C:\Windows\system32\SearchIndexer.exe
        Faulting module path: C:\Windows\System32\NLSData0007.dll
        Report Id: b4f7a7ae-0f92-11df-87fc-e5d65d8794c2
        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Application Error" />
            <EventID Qualifiers="0">1000</EventID>
            <Level>2</Level>
            <Task>100</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2010-02-02T00:34:23.000000000Z" />
            <EventRecordID>10689</EventRecordID>
            <Channel>Application</Channel>
            <Computer>ricky-win7</Computer>
            <Security />
          </System>
          <EventData>
            <Data>SearchIndexer.exe</Data>
            <Data>7.0.7600.16385</Data>
            <Data>4a5bcdd0</Data>
            <Data>NLSData0007.dll</Data>
            <Data>6.1.7600.16385</Data>
            <Data>4a5bda88</Data>
            <Data>c0000005</Data>
            <Data>002141ba</Data>
            <Data>13a0</Data>
            <Data>01caa39f2a70ec02</Data>
            <Data>C:\Windows\system32\SearchIndexer.exe</Data>
            <Data>C:\Windows\System32\NLSData0007.dll</Data>
            <Data>b4f7a7ae-0f92-11df-87fc-e5d65d8794c2</Data>
          </EventData>
        </Event>

    If you are having the same error and arrived here from a Google search, please comment or add an answer detailing your progress on this, if any...

    Read the article

  • UG Session - Service Broker & Indexing

    - by NeilHambly
    SQL Server User Group session in Reading this Wednesday (21st April 2010, 6pm - 10pm). Along with Tony Rogerson MVP, I {Neil Hambly} will be presenting @ the forthcoming User Group meeting @ Microsoft Campus, Reading. Tony will be presenting the session he gave @ SQLBits VI on Thinking Sets, Normalisation, Surrogate Keys and Referential Integrity. This is very insightful and was a very popular session. I will be continuing my recent presentation on Indexed Views @ the London UG; this time I will be doing a...(read more)

    Read the article

  • Google indexing wrong domain that does not work

    - by user174117
    Can anyone tell me why Google indexes the site at http://0.3c.7aae.static.theplanet.com instead of the actual domain name (my domain was registered with Webmaster Tools)? The aforementioned link was linking to my site, but has now become a broken link. Initially I tried 301 redirects, but nothing worked. The hosting company cannot explain why, leaving me wondering what to do. Any guidance would be appreciated. Thanks

    Read the article

  • Google De-Index many pages at once?

    - by Jakobud
    On one of our websites, Google has been indexing something it wasn't supposed to. We fixed the problem so it shouldn't happen anymore, but are interested in requesting that Google de-index these pages. The problem is that there are about 10,000 pages. They all look similar to this: http://www.mysite.com/file.php?o=34995 http://www.mysite.com/file.php?o=4566 http://www.mysite.com/file.php?o=223af http://www.mysite.com/file.php?o=6ga3h http://www.mysite.com/file.php?o=sfh45a etc... All the pages are file.php with get parameters like above. Is it possible to put in a de-index request like: http://www.mysite.com/file.php* so that Google removes all 10,000 pages?

    Read the article

  • Preventing indexing duplicate content by search engines

    - by umesh awasthi
    I am in the process of migrating my old domain (www.oldurl.com) to a new domain (www.newurl.com). Almost all the content, the URL structure, and the database are the same except for a few URLs; the only difference will be in the domain name. I have made entries in Apache's .htaccess file to set up 301 redirects, and I have currently blocked all search engines from crawling my new domain via the robots.txt file. I am not sure how I will handle the duplicate content issue when I make the new domain go live. Should I block search engines from indexing/crawling my old domain? I am new to this field and not sure if this is actually a duplicate content issue or not.

    Read the article

  • To disallow indexing the category and tag listings in a blog

    - by Mert Nuhoglu
    Mark Wilson says that category and tag listings in a blog should be disallowed in order to prevent duplicate content. I understand this. However, I want to put internal links on keywords in the blog posts to the tag and category pages so that readers can find more relevant content. I wonder whether internal links to category/tag pages that are disallowed in robots.txt still count as useful from the perspective of SEO internal linking.

    Read the article

  • Best way to prevent Google from indexing a directory [duplicate]

    - by Gkhan14
    This question already has an answer here: Stopping Google index some web pages (5 answers)
    I've researched many methods of preventing Google and other search engines from crawling a specific directory. The two most popular ones I've seen are:
    1. Adding it to the robots.txt file: Disallow: /directory/
    2. Adding a meta tag: <meta name="robots" content="noindex, nofollow">
    Which method would work best? I want this directory to remain "invisible" to search engines so it does not affect any of my site's ranking. In other words, I want this directory to be neutral/invisible and "just there." I don't want it to affect any ranking. Which method would be the best way to achieve this?

    Read the article
