Search Results

Search found 19515 results on 781 pages for 'advanced search'.


  • Are there any decent open-source search engine solutions?

    - by Nazariy
    A few weeks ago my friend asked me how hard it is to launch your own search engine service with a list of websites that are supposed to be crawled from time to time. The first thing that came to mind was Google Custom Search, however its pricing policy is quite tricky and would drain your budget if you reach 500K queries per year. Another solution I found here was SearchBlox, which can be compared to the Google Mini service. It's quite a good solution if you are planning to cover search over a small number of websites, but for larger projects it is not very handy. I also found a few other search platforms like Lucene, Hadoop and Xapian, which seem to be powerful enough to approach Google's search quality, and Nutch as a web crawler. Like most open-source projects they share the same problem: a lack of comprehensive usage guidance and examples, and the expectation that you are already an expert in the subject. I'm wondering if any of you use these solutions, which of them you would recommend, and what I should be aware of.
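    For readers weighing the "roll your own" route, here is a minimal sketch of a periodic crawl-and-index loop built on Whoosh, a pure-Python search library. The site list, index directory, field names and the crude tag stripping are illustrative assumptions, not anything from the question; a real deployment would use something like Nutch for crawling and Solr/Lucene for indexing at larger scale.

        # Minimal crawl-and-index sketch using Whoosh (a pure-Python search library).
        # The site list, index directory and field names are illustrative assumptions.
        import os, re, urllib.request
        from whoosh.index import create_in, open_dir
        from whoosh.fields import Schema, ID, TEXT
        from whoosh.qparser import QueryParser

        SITES = ["https://example.com/", "https://example.org/"]   # hypothetical crawl list
        INDEX_DIR = "site_index"

        schema = Schema(url=ID(stored=True, unique=True), body=TEXT(stored=False))

        def build_index():
            os.makedirs(INDEX_DIR, exist_ok=True)
            ix = create_in(INDEX_DIR, schema)            # recreate the index on every crawl pass
            writer = ix.writer()
            for url in SITES:
                html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
                text = re.sub(r"<[^>]+>", " ", html)     # crude tag stripping, fine for a sketch
                writer.add_document(url=url, body=text)
            writer.commit()

        def search(query_text, limit=10):
            ix = open_dir(INDEX_DIR)
            with ix.searcher() as searcher:
                query = QueryParser("body", ix.schema).parse(query_text)
                return [hit["url"] for hit in searcher.search(query, limit=limit)]

        if __name__ == "__main__":
            build_index()                                # run from cron to re-crawl "time to time"
            print(search("open source"))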

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • Implementing Search for BlogReader Windows 8 Sample

    - by Harish Ranganathan
    The BlogReader sample is an excellent place to start speeding up your Windows 8 development skills.  The tutorial is available here and the complete source code is available here Create a project called WindowsBlogReader and create pages for ItemsPage.xaml, SplitPage.xaml and DetailPage.xaml and copy the corresponding code blocks from the sample listed above. Created a class file FeedData.cs and copy the code.  Finally, create a class DateConverter.cs and copy the code associated with it. With that you should be able to build and run the project.  There seems to be one issue in the sample feeds listed that the first week (feed1) doesn’t seem to expose it.  So you can skip that and use the second feed as first feed.  You will end up with one feed less but it works. I had demonstrated this in the recent TechDays at Chennai.  How we can use the Search Contract and implement Search for within the Blog Titles. First off, we need to declare that the App will be using Search Contract, in the Package.appmanifest file Next, we would need a handle of the Search Contract when user types on the search window in Charms Menu. If you had completed the code sample from the link above, you would have ItemsPage.xaml and ItemsPage.xaml.cs.  Open the ItemsPage.xaml.cs. Import the namespaces using System.Collections.ObjectModel and System.Linq. in the ItemsPage() constructor, right after this.InitializeComponent(); add the following code Windows.ApplicationModel.Search.SearchPane.GetForCurrentView().QuerySubmitted += ItemsPage_QuerySubmitted; This event is fired when users open up the Search Panel from Charms Menu, type something and hit enter. We need to handle this event declared in the delegate.  For that we need to pull the FeedDataSource instantiation to the root of the class to make it global. So, add the following as the first line within the partial class FeedDataSource feedDataSource; Also, modify the LoadState method, as follows:- protected override void LoadState(Object navigationParameter, Dictionary<String, Object> pageState)        {            feedDataSource = (FeedDataSource)App.Current.Resources["feedDataSource"];            if (feedDataSource != null)            {                this.DefaultViewModel["Items"] = feedDataSource.Feeds;            }        } Next is to implement the ItemsPage_QuerySubmitted method void ItemsPage_QuerySubmitted(Windows.ApplicationModel.Search.SearchPane sender, Windows.ApplicationModel.Search.SearchPaneQuerySubmittedEventArgs args)         {             this.DefaultViewModel["Items"] = from dynamic item in feedDataSource.Feeds                                              where                                              item.Title.Contains(args.QueryText)                                              select item;         } As you can see we are almost using the same defaultviewmodel with the change that we are using a linq query to do a search on feeds which has the Title that matches QueryText. With this we are ready to run the app. Run the App.  Hit the Charms Menu with Windows + C key combination and type a text to search within the blog. You can see that it filters the Blogs which has the matching text. We can modify the above Linq query to do a search for the Text in other attributes like description, actual blog content etc., I have uploaded the complete code since the original WindowsBlogReader Code is not available for download.  You can download it from here note:  this code is provided as-is without any warranties.  Cheers!!!

    Read the article

  • Advanced reporting in Oracle Service Bus

    - by [email protected]
    Reporting in OSB is useful: it allows you to audit messages going through OSB, and the service bus console lets you view the content that you reported. To report data you simply use the Report action in your proxy. The action itself is rather straightforward: you specify the content to report ($body for example), an optional key for easier searching (for example the id of the record), and that's it. Sometimes, though, what you want to do is a bit more complicated. I recently had a case where the key was built from the message type (XML) and the id of the message. That seems simple enough, but the id could be any element anywhere in the message depending on its type. This could be handled by 'if' statements, but adding new cases would mean changing the proxy service, and if you have lots of message types this gets tedious, so I wanted the solution to be as dynamic as possible (read "just change a configuration file and that's it"). The following entry details how you can make this dynamic in your proxy by using XQuery/XSLT.

    First step: the XQuery. We're going to use an XQuery to map each XML message type to the location of the identifier within it. We assume here that the message type is the first node of the input XML and use a rather simple XPath to find the identifier. The XQuery looks like this for two messages:

        <reportmapping>
            <row>
                <logical>messageType1</logical>
                <type>MT1</type>
                <reportingreferencelocation>//customID</reportingreferencelocation>
            </row>
            <row>
                <logical>messageType2</logical>
                <type>MT2</type>
                <reportingreferencelocation>//theOtherIDLocation</reportingreferencelocation>
            </row>
        </reportmapping>

    Second step: the XSLT. To get the identifier value at the dynamic path, we use an XSLT transformation. This XSLT takes an XML parameter as input which contains our XPath (coming from the previous XQuery). The XSLT looks like this:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xalan="http://xml.apache.org/xalan">
            <xsl:param name="PathToNode"/>
            <xsl:template match="/">
                <IDVALUE>
                    <xsl:value-of select="xalan:evaluate($PathToNode/reportingreferencelocation)"/>
                </IDVALUE>
            </xsl:template>
        </xsl:stylesheet>

    (Note the use of a xalan function here.
    Xalan is the XSLT processor used in WebLogic Server.)

    Last step: the proxy service. We now wire everything up in the proxy service. First we assign the XQuery to a variable. We then get the entry in the XQuery corresponding to the record we're treating. Next we extract the id of the message using the XSLT transformation, and a final Assign builds the variable that will be used as the reporting key. The Report action is then called with this variable. Everything is set up, and we're now ready to test.

    Testing the solution. Using the test console, we send our first XML...

        <messageType1>
            <sender>test console 1</sender>
            <customID>ID12345</customID>
            <content>
                <field1>value of field 1</field1>
            </content>
        </messageType1>

    ...and a second one of another supported type:

        <messageType2>
            <header>
                <theOtherIDLocation>ID67890</theOtherIDLocation>
            </header>
            <body>
                <data>Test data</data>
            </body>
        </messageType2>

    The reporting result is shown in the service bus console. Conclusion: the report is done as expected, and if a new message type must be supported we only have to modify the XQuery and nothing at the proxy service level. Sample project attached to this entry: sbconfig-dynamicReport.jar

    Read the article

  • Gmail Now Searches Inside PDF, Word, and PowerPoint Attachments

    - by Jason Fitzpatrick
    Gmail has long had a robust system for searching within the subjects and bodies of your emails; now you can also search inside select attachments: PDF, Word, and PowerPoint attachments are all searchable. Prior to this update, Gmail could search inside HTML attachments but lacked more advanced attachment querying abilities. Now when you search your Gmail account you'll see results not only for the subject and body contents but also for the contents of popular formats like PDF and Word documents. Don't forget to take advantage of advanced search terms to speed up your query. If you know the information you need is in an attachment but can't remember which email, include "has:attachment" in your search to only peek inside emails with attachments. [via GadgetBox]

    Read the article

  • Rails multiple select box issue for search

    - by Reido
    First off here is my model, controller, view: My model, this is where I have my search code:--------------------------- def self.find_by_lcc(params) where = [] where << "category = 'Land'" unless params[:mls].blank? where << "mls = :mls" end unless params[:county].blank? where << "county = :county" end unless params[:acreage_range].blank? where << "acreage_range = :acreage_range" end unless params[:landtype].blank? where << "landtype = :landtype" end unless params[:price_range].blank? where << "price_range = :price_range" end if where.empty? [] else find(:all, :conditions => [where.join(" AND "), params], :order => "county, price desc") end end My controller:---------------- def land @counties = ['Adams', 'Alcorn', 'Amite', 'Attala'] @title = "Browse" return if params[:commit].nil? @properties = Property.find_by_lcc(params) else 'No properties were found' render :action = 'land_table' end My View: ---------------------- <table width="900"> <tr> <td> <% form_tag({ :action => "land" }, :method => "get") do %> <fieldset> <legend>Search our Land Properties</legend> <div class="form_row"><p>&nbsp;</p></div> <div class="form_row"> <label for="mls">MLS Number:</label>&nbsp; <%= text_field_tag 'mls', params[:mls] %> </div> <div class="form_row"> <label for "county"><font color="#ff0000">*County:</font></label>&nbsp; <%= select_tag "county", options_for_select(@counties), :multiple => true, :size => 6 %> </div> <div class="form_row"> <label for "acreage_range">Acreage:</label>&nbsp; <%= select_tag "acreage_range", options_for_select([['All',''],['1-10','1-10'],['11-25','11-25'],['26-50','26-50'],['51-100','51-100']]) %> </div> <div class="form_row"> <label for "landtype">Type:</label>&nbsp; <%= select_tag "landtype", options_for_select([['All',''],['Waterfront','Waterfront'],['Wooded','Wooded'],['Pasture','Pasture'],['Woods/Pasture','Woods/Pasture'],['Lot','Lot']]) %> </div> <div class="form_row"> <label for="price_range"><font color="#ff0000">*Price:</font></label>&nbsp; <%= select_tag "price_range", options_for_select([['All',''],['0-1,000','0-1,000'],['1,001-10,000','1,001-10,000'],['10,001-50,000','10,001-50,000'],['50,001-100,000','50,001-100,000'],['100,001-150,000']])%> </div> <input type="text" style="display: none;" disabled="disabled" size="1" /> <%= submit_tag "Search", :class => "submit" %> </fieldset> <% end%> </td> </tr> </table> The search works fine until I add ", :multiple = true, :size = 6" to make the county field multiple select. Then I get the error: Processing PublicController#land (for 65.0.81.83 at 2010-04-01 13:11:30) [GET] Parameters: {"acreage_range"=>"", "commit"=>"Search", "county"=>["Adams", "Amite"], "landtype"=>"", "price_range"=>"", "mls"=>""} ActiveRecord::StatementInvalid (Mysql::Error: Operand should contain 1 column(s): SELECT * FROM `properties` WHERE (category = 'Land' AND county = 'Adams','Amite') ORDER BY county, price desc): app/models/property.rb:93:in `find_by_lcc' app/controllers/public_controller.rb:84:in `land' /usr/lib/ruby/1.8/thread.rb:135:in `synchronize' fcgi (0.8.7) lib/fcgi.rb:117:in `session' fcgi (0.8.7) lib/fcgi.rb:104:in `each_request' fcgi (0.8.7) lib/fcgi.rb:36:in `each' dispatch.fcgi:24 I've tried to make the county, acreage_range, and price_range fields into multiple select boxes numerous ways, but can not get any method to work correctly. Any help would be greatly appreciated. Thanks,

    Read the article

  • Assistance with building an inverted-index

    - by tipu
    It's part of an information retrieval thing I'm doing for school. The plan is to create a hashmap of words, using the first two letters of the word as a key and any words starting with those two letters saved as a string value. So, hashmap["ba"] = "bad barley base". Once I'm done tokenizing a line I take that hashmap, serialize it, and append it to the text file named after the key. The idea is that if I take my data and spread it over hundreds of files I'll lessen the time it takes to fulfill a search by lessening the density of each file. The problem I am running into is that when I'm making 100+ files in each run, it chokes on creating a few files for whatever reason, and so those entries end up empty. Is there any way to make this more efficient? Is it worth continuing this, or should I abandon it? I'd like to mention I'm using PHP. The two languages I know relatively intimately are PHP and Java. I chose PHP because the front end will be very simple to do and I will be able to add features like autocompletion/suggested search without a problem. I also see no benefit in using Java. Any help is appreciated, thanks.
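    For comparison, here is a minimal sketch of the same two-letter-prefix sharding idea, written in Python rather than the poster's PHP; the file naming scheme and JSON serialization are assumptions for illustration. Buffering each bucket in memory and writing it once per run avoids the hundreds of per-line append operations that seem to be choking the original approach.

        # Sketch of the two-letter-prefix sharded word index described above.
        # File layout and names are assumptions for illustration, not the poster's code.
        import json, re
        from collections import defaultdict

        def build_shards(lines):
            shards = defaultdict(set)              # "ba" -> {"bad", "barley", "base"}
            for line in lines:
                for word in re.findall(r"[a-z]+", line.lower()):
                    if len(word) >= 2:
                        shards[word[:2]].add(word)
            return shards

        def write_shards(shards, prefix="shard_"):
            # One JSON file per two-letter key, written once per run: far fewer
            # filesystem operations than appending a serialized map per input line.
            for key, words in shards.items():
                with open(f"{prefix}{key}.json", "w") as fh:
                    json.dump(sorted(words), fh)

        def lookup(word, prefix="shard_"):
            try:
                with open(f"{prefix}{word[:2]}.json") as fh:
                    return word in json.load(fh)
            except FileNotFoundError:
                return False

        shards = build_shards(["bad barley base", "a banana and an apple"])
        write_shards(shards)
        print(lookup("barley"))   # True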

    Read the article

  • jQuery Autocomplete Json Ajax cross browser issue with Google Search Appliance

    - by skyfoot
    I am implementing a jQuery autocomplete on a search form and am getting the suggestions from the Google Search Appliance autocomplete suggestions service, which returns a result set in JSON. What I am trying to do is go off to the GSA to get suggestions when the user types something in the search box. The URL to get the JSON suggestions is as follows: http://gsaurl/suggest?q=<query>&max=10&site=default_site&client=default_frontend&access=p&format=rich The JSON which is returned is as follows: { "query":"re", "results": [ {"name":"red", "type":"suggest"}, {"name":"read", "type":"suggest"}] } The jQuery autocomplete code is as follows:

        $('#q').autocomplete(searchUrl, {
            width: 320,
            dataType: 'json',
            highlight: false,
            scroll: true,
            scrollHeight: 300,
            parse: function(data) {
                var array = new Array();
                for (var i = 0; i < data.results.length; i++) {
                    array[i] = {
                        data: data.results[i],
                        value: data.results[i].name,
                        result: data.results[i].name
                    };
                }
                return array;
            },
            formatItem: function(row) {
                return row.name;
            }
        });

    This works in IE but fails in Firefox, as the data returned in the parse function is null. Any ideas why this would be the case? Workaround: I created an aspx page to call the GSA suggest service and return the JSON from it. Using this page as a proxy and setting it as the url in the jQuery autocomplete worked in both IE and Firefox.
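    The workaround above is a same-origin proxy, and the idea is independent of the stack: the browser calls a page on your own domain, and that page fetches the GSA suggest JSON server-side. As a rough illustration, here is a minimal equivalent sketched with Python/Flask (the GSA URL parameters are copied from the question; the host name, route and port are placeholder assumptions):

        # Minimal same-origin proxy for the GSA suggest service, sketched with Flask.
        # The GSA query parameters come from the question above; the host is a placeholder.
        from urllib.parse import quote
        from urllib.request import urlopen

        from flask import Flask, Response, request

        app = Flask(__name__)
        GSA_SUGGEST = ("http://gsaurl/suggest?q={q}&max=10&site=default_site"
                       "&client=default_frontend&access=p&format=rich")

        @app.route("/suggest")
        def suggest():
            q = request.args.get("q", "")
            # Fetch the JSON server-side so the browser only ever talks to our own origin,
            # avoiding the cross-domain behaviour that differs between IE and Firefox.
            body = urlopen(GSA_SUGGEST.format(q=quote(q)), timeout=5).read()
            return Response(body, mimetype="application/json")

        if __name__ == "__main__":
            app.run(port=5000)

    The autocomplete's searchUrl would then point at /suggest on the same origin.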

    Read the article

  • Getting Google results in Java? Need help!

    - by Cris Carter
    Hello. Right now, I'm trying to get the results from Google in Java by searching for a term. I'm using a desktop program, not an applet. That in itself isn't complicated, but then Google gave me a 403 error. Anyway, I added a referrer and a User-Agent and then it worked. Now, my problem is that I don't get the results page from Google. Instead, I get their script which gets the results page. My code right now simply uses a GET request on "http://www.google.com/search?q=" + Dork; then it outputs each line. Here is what I get when I run my program: <.!doctype html<.head<.titledork - Google Search<./title<.scriptwindow.google={kEI:"9myaS-Date).getTime()}}};try{}catch(u){}window.google.jsrt_kill=1; align:center}#logo{display:block;overflow:hidden;position:relative;width:103px;height:37px; <./ script<./div Lots of stuff like that. I shortened it (A LOT) and put in dots to fit it here. So my big question is: how do I turn this whole mess into the nice results page I get when searching Google with a browser? Any help would be seriously appreciated, and I really need the answer fast. Also, please keep in mind that I do NOT want to use Google's API for this. Thanks in advance!

    Read the article

  • Mysql - Help me alter this search query to get desired results

    - by sandeepan-nath
    Following is a dump of the tables and data needed to answer understand the system:- The system consists of tutors and classes. The data in the table All_Tag_Relations stores tag relations for each tutor registered and each class created by a tutor. The tag relations are used for searching classes. CREATE TABLE IF NOT EXISTS `Tags` ( `id_tag` int(10) unsigned NOT NULL auto_increment, `tag` varchar(255) default NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`), KEY `tag_4` (`tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `Tags` (`id_tag`, `tag`) VALUES (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'), (5, 'new'), (6, 'Bob'), (7, 'Cratchit'); CREATE TABLE IF NOT EXISTS `All_Tag_Relations` ( `id_tag` int(10) unsigned NOT NULL default '0', `id_tutor` int(10) default NULL, `id_wc` int(10) unsigned default NULL, KEY `All_Tag_Relations_FKIndex1` (`id_tag`), KEY `id_wc` (`id_wc`), KEY `id_tag` (`id_tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `All_Tag_Relations` (`id_tag`, `id_tutor`, `id_wc`) VALUES (1, 1, NULL), (2, 1, NULL), (3, 1, 1), (4, 1, 1), (6, 2, NULL), (7, 2, NULL), (5, 2, 2), (4, 2, 2); Following is my query:- This query searches for "first class" (tag for first = 3 and for class = 4, in Tags table) and returns all those classes such that both the terms first and class are present in the class name. SELECT wtagrels.id_wc,SUM(DISTINCT( wtagrels.id_tag =3)) AS key_1_total_matches, SUM(DISTINCT( wtagrels.id_tag =4)) AS key_2_total_matches FROM all_tag_relations AS wtagrels WHERE ( wtagrels.id_tag =3 OR wtagrels.id_tag =4 ) GROUP BY wtagrels.id_wc HAVING key_1_total_matches = 1 AND key_2_total_matches = 1 LIMIT 0, 20 And it returns the class with id_wc = 1. But, I want the search to show all those classes such that all the search terms are present in the class name or its tutor name So that searching "Sandeepan class" (wtagrels.id_tag = 1,4) or "Sandeepan Nath" also returns the class with id_wc=1. And Searching. Searching "Bob First" should not return any classes. Please modify the above query or suggest a new query, if possible using MyIsam - fulltext search, but somehow help me get the result.

    Read the article

  • Search and highlight html - ignoring and maintaining tags

    - by Sleepwalker
    I am looking for a good way to highlight key words in a block of html without stripping the html tags. I can use a regex to search for key words within html tags, but I haven't found a great way to search across tags. For example, if the key word phrase is "not bound" I want to be able to make this <p>I am not<strong>bound to please thee</strong> with my answers.</p> become wrapped in highlight tags, without breaking the "strong" tag (and making the html invalid), and become: <p>I am <span class="highlight">not</span><strong><span class="highlight">bound</span> to please thee</strong> with my answers.</p> The main issue is maintaining the html as it is AND wrapping blocks of text with highlight tags. I need to maintain the original html; otherwise I would simply strip the tags. The best solution I can think of right now would entail making a copy of the html and placing counter tokens where each space occurs, then stripping all tags and searching for matching phrases, then looking back at the original and the tokenized strings to figure out where to start building the highlight tags, then walking forward, starting and ending highlight spans as needed from the beginning of the match until the end. This seems like overkill. I would like something more elegant if possible. The solution would be written in C# or perhaps JavaScript, depending.
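    One way to approach the "wrap text nodes only" requirement is to split the markup into tags and text, copy the tags through untouched, and wrap matches only inside the text pieces. The rough Python sketch below does exactly that for the example above; note that it highlights each word of the phrase wherever it occurs rather than first verifying that the phrase is contiguous across the tag boundary, which is the genuinely hard part of the question. The regex-based splitting is an illustrative assumption, not a production HTML parser.

        # Sketch: wrap search words in <span class="highlight"> inside text nodes only,
        # leaving the existing tags untouched. Regex-based for brevity; a real
        # implementation would use an HTML parser.
        import re

        def highlight(html, phrase):
            words = [re.escape(w) for w in phrase.split()]
            word_re = re.compile(r"\b(" + "|".join(words) + r")\b", re.IGNORECASE)
            parts = re.split(r"(<[^>]+>)", html)       # keep tags as separate parts
            out = []
            for part in parts:
                if part.startswith("<"):               # a tag: copy through unchanged
                    out.append(part)
                else:                                  # a text node: wrap the matches
                    out.append(word_re.sub(r'<span class="highlight">\1</span>', part))
            return "".join(out)

        src = '<p>I am not<strong>bound to please thee</strong> with my answers.</p>'
        print(highlight(src, "not bound"))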

    Read the article

  • Hibernate Search + Spring

    - by Zane
    I'm trying to integrate Hibernate Search with Spring, but I can't seem to index anything. I was able to get Hibernate Search to work without Spring, but I'm having a problem integrating it with Spring. Any help would be much appreciated. Below is my springmvc-servlet.xml: <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean"> <property name="persistenceUnitName" value="enewsclipsPersistenceUnit" /> </bean> And here is my DAO class: @Repository public class SearchDaoImpl implements SearchDao { JpaTemplate jpaTemplate; @Autowired public SearchDaoImpl(EntityManagerFactory entityManagerFactory) { this.jpaTemplate = new JpaTemplate(entityManagerFactory); } @SuppressWarnings("unchecked") public void updateSearchIndex() { /* Implement the callback method */ jpaTemplate.execute(new JpaCallback() { public Object doInJpa(EntityManager em) throws PersistenceException { List<Article> articles = em.createQuery("select a from Article a").getResultList(); FullTextEntityManager ftEm = Search.getFullTextEntityManager(em); ftEm.getTransaction().begin(); for(Article article : articles) { System.out.println("Indexing Item " + article.getTitle()); ftEm.index(article); } ftEm.getTransaction().commit(); return null; } }); } } I think that it may have to do with the transactions but I'm not exactly sure. If you could just point me in the right direction, that would be helpful too! Thank you.

    Read the article

  • Extending / changing how Zend_Search_Lucene searches

    - by Grant Collins
    Hi, I am currently using Zend_Search_Lucene to index and search a number of documents currently at around a 1000 or so. What I would like to do is change how the engine scores hits on a document, from the current default. Zend_Search_Lucene scores on the frequency of number of hits within a document, so a document that has 10 matches of the word PHP will score higher than a document with only 3 matches of PHP. What I am trying to do is pass a number of key words and score depending on the hits of those keywords. e.g. I pass 5 key words say,PHP, MySQL, Javascript, HTML and CSS that I search against the index. One document has 3 matches to those key words and one document has all 4 matches, the 4 matches scores the highest. The number of instances of those words in the document do not concern me. Now I've had a quick look at Zend_Search_Lucene_Search_Similarity however I have to confess that I am not sure (or that bright) to know how to use this to achieve what I am after. Is what I want to do possible using Lucene or is there a better solution out there?

    Read the article

  • jQuery Ajax search on pressing the return key in Internet Explorer

    - by haribo83
    Hi I have found the following code to create a search on my site - this works really well the only problem is in Internet Explorer the search doesn't work when you press the return key. Does anybody have any ideas? The search code is below - if anything else is needed please let me know. $(function() { $(".search_button").click(function() { var search_word = $("#search_box").val(); var dataString = 'search_word='+ search_word; if(search_word=='') { } else { $.ajax({ type: "GET", url: "searchdata.php", data: dataString, cache: false, beforeSend: function(html) { document.getElementById("insert_search").innerHTML = ''; $("#flash").show(); $("#searchword").show(); $(".searchword").html(search_word); $("#flash").html('<img src="ajax-loader.gif" /> Loading Results...'); }, success: function(html){ $("#insert_search").show(); $("#insert_search").append(html); $("#flash").hide(); } }); } return false; }); });

    Read the article

  • Lucene raise document score if sibling entity matches query

    - by Pitagoras
    I have the following design situation. I use Hibernate Search (Lucene in the back). The application manages ITEMs which have a title, description and tags. These are full-text indexed. On the other hand, we have COLLECTIONs of ITEMs. The user can create a COLLECTION and add as many ITEMs as she wants. ITEMs can also belong to many COLLECTIONs. I have a boosted query so that search terms that appear in the tags are more important than in the title, and lastly in the description. But I need an additional matching criterion: a given ITEM should rank better if other documents in some COLLECTION where the ITEM belongs also match the query. That is to say: the title/tags/description of "fellow" items (i.e. items in some shared collection) make the item rank better. I was thinking that adding an ITEM to a COLLECTION would add something like "extra tags" to every other ITEM in the collection, these extra tags being the elements to match in the added ITEM. I feel a more clever solution, Lucene-wise, should exist. Any ideas/pointers are welcome. Thanks.
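    The "extra tags" idea amounts to denormalising at index time: when an ITEM is (re)indexed, also feed it the tags of its collection siblings in a separate, lower-boosted field, so the ordinary query naturally lifts items whose fellows match. A small Python sketch of building such an enriched index document, with plain dicts standing in for the real ITEM/COLLECTION entities (all names here are assumptions for illustration):

        # Sketch: enrich each item's index document with the tags of its collection
        # siblings, so a match on a sibling's tags lifts the item's own score.
        items = {
            1: {"title": "Intro to Lucene", "tags": ["search", "lucene"]},
            2: {"title": "Hibernate tips",  "tags": ["hibernate", "orm"]},
            3: {"title": "Spring basics",   "tags": ["spring"]},
        }
        collections = {"java-stack": [1, 2], "misc": [2, 3]}

        def index_document(item_id):
            doc = dict(items[item_id])
            sibling_tags = set()
            for members in collections.values():
                if item_id in members:
                    for sibling in members:
                        if sibling != item_id:
                            sibling_tags.update(items[sibling]["tags"])
            # Sibling tags go into their own field so they can be given a lower
            # boost than the item's own title/tags at query time.
            doc["sibling_tags"] = sorted(sibling_tags)
            return doc

        print(index_document(2))   # picks up tags from items 1 and 3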

    Read the article

  • Your Content Can Get Visibility Through Search Engines - Part 1

    Search engine marketing aims at getting higher search engine rankings for your website, but for that you need to get yourself into the search engine index first. The content designed for your SEO activity should be accessible to millions of internet users, and if your content is not found through the search engines, then all your search engine optimization efforts will go to waste.

    Read the article

  • Website not coming in Search engine results because of a term

    - by curiosity
    We have this site named Vialogues (a video+discussion web-based application): https://vialogues.com It has been around for some time on the internet and we have also submitted a sitemap.xml to the search engines. However, when we search on Google, Bing or Yahoo using the keyword Vialogues, we are given results for the keyword dialogues and this message: “showing results for dialogues, search instead for vialogues”. I am wondering if it's possible to list the site without the search engine suggesting “showing results for dialogues, search instead for vialogues”?

    Read the article

  • 5 Common Mistakes Made by Search Engine Optimization Companies

    Search engine optimization process or SEO is used for making the website search-friendly. The websites are made friendly not only for the search-engines, but also for those who search for products and services with the help of search-engines. There are 5 common mistakes which people make while optimizing the site. You should be aware that these mistakes can affect your ranking in the SE.

    Read the article

  • How Can a Website Benefit From Search Engine Optimization

    Search Engine Optimization, also plainly referred to as SEO is tuning the potentials of a website with the aim of getting high rankings in search engines like Google, Yahoo and Bing. Businesses battle for this high ranking because when web users search for particular services or products online, they only use specific keywords in the search engines then perform a web search.

    Read the article

  • Search Engine Optimisation For Online Businesses

    Most Internet users make use of search engines to get apt results for their search and this is the reason that a website with good search engine ranking is preferred much by the visitors. Landing on top search engine results not only helps to draw quality traffic on the website, but also helps the website to become a brand in the market. The more visitors navigate the web page, more publicity for the site shall boost up and this is made possible with Search engine optimization.

    Read the article

  • SEO - Search Engine Optimization Working For You

    SEO (search engine optimization), sometimes referred to as natural or organic search optimization, website SEO, or just plain old SEO, is the process of ensuring that the ranking of your website with search engines i.e. Google, Yahoo etc puts your web site near the top of searches. Natural or Organic search results are "free" i.e., you do not pay for a listing in the search results pages for your keywords. This means you need to do all the work.

    Read the article

  • Basic SEO Strategies - How to Get Your Website at the Top of the Search Results

    Search Engine Optimisation means making changes on your website so that it can be crawled more easily by search engines and found in search listings. By making changes on your website you can increase targeted visitors to your site. Search Engines use robots to crawl websites and the robots are only able to read content that is text. Your website results are displayed based on how relevant the content is in relation to the keyword that is being searched for in the search engine.

    Read the article

  • Where can I safely search domain whois without worrying about the search engine parking on the domain immediately after the search?

    - by Evan Plaice
    There are a lot of companies that provide domain whois but I've heard of a lot of people who had bad experiences where the domain was bought soon after the whois search and the price was increased dramatically. Where can I gain access to a domain whois where I don't have to worry about that happening? Update: Apparently, the official name for this practice is called Domain Front Running and some sites go as far as to create explicit policies stating that they don't do it. This is where a domain registrar or an intermediary (like a domain lookup site) mines the searches for possibly attractive domains and then either sells the data to a third-party, or goes ahead and registers the name themselves ahead of you. In one case a registrar took advantage of what's known as the "grace period" and registered every single domain users looked up through them and held on to them for 5 days before releasing them back into the pool at no cost to themselves. Source: domainwarning.com And apparently, after ICANN was notified of the practice, they wrote it off as a coincidence of random 'domain tasting'. Source: See for yourself

    Read the article

  • Python - Using "Google AJAX Search" API's Local Search Objects

    - by user330739
    Hi! I've just started using Google's search API to find addresses and the distances between those addresses. I used geopy for this, but I often had the problem of not getting the correct addresses for my queries. I decided, therefore, to experiment with Google's "Local Search" (http://code.google.com/apis/ajaxsearch/local.html). Anyway, I wanted to ask if I can use the "Local Search" objects provided by the API within Python. Something tells me that I can't and that I have to use JSON. Does anyone know if there is a workaround? PS: I'm trying to make something like this: http://www.google.com/uds/samples/random/lead.html ... except a matrix-type deal where the insides will be filled with distances between the addresses. Thanks for reading!
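    The JavaScript GlocalSearch objects are not usable from a plain Python process, but the AJAX Search API also exposed a JSON/REST endpoint for non-JavaScript clients, which is the "use json" route the poster suspects. A minimal sketch is below; the endpoint URL, the Referer requirement and the response field names are recalled from the documentation of the time and should be treated as assumptions (the API has since been deprecated):

        # Sketch: calling the AJAX Search API's Local Search from plain Python via its
        # JSON/REST endpoint rather than the JavaScript GlocalSearch objects.
        # Endpoint and field names are assumptions based on the old documentation.
        import json
        from urllib.parse import urlencode
        from urllib.request import Request, urlopen

        def local_search(query, referer="http://example.com/"):
            params = urlencode({"v": "1.0", "q": query})
            url = "http://ajax.googleapis.com/ajax/services/search/local?" + params
            req = Request(url, headers={"Referer": referer})   # the API asked callers to send a referer
            data = json.loads(urlopen(req, timeout=10).read().decode("utf-8"))
            return data.get("responseData", {}).get("results", [])

        for result in local_search("525 W 120th St, New York"):
            print(result.get("titleNoFormatting"), result.get("lat"), result.get("lng"))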

    Read the article
