Search Results

Search found 37688 results on 1508 pages for 'site search'.


  • Searching Techniques/Algorithms for Resources over a given area

    - by Raydon
    I have a flat area with nodes randomly placed on it. I need techniques that can take a starting point, move in a certain way (the algorithm), find nodes, and continue searching. I do not have an overall view of the surface (i.e. I cannot see everything), only a limited view (4 cells in any direction). Ideally, these methods would be efficient in how they work. Any pointers in the right direction would be greatly appreciated.
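
    A simple baseline for this setting is a boustrophedon ("lawnmower") sweep: walk the surface in parallel strips spaced so that the limited view covers every cell once, noting nodes as they come into sight. Below is a minimal Java sketch of generating such a sweep path; the grid dimensions and the helper names are illustrative assumptions, not part of the original question.

        import java.util.ArrayList;
        import java.util.List;

        public class SweepSearch {
            // Generate a lawnmower path over a width x height grid for an agent
            // that sees `view` cells in any direction. Strips are spaced
            // 2*view + 1 rows apart so adjacent strips jointly cover every cell.
            static List<int[]> sweepPath(int width, int height, int view) {
                List<int[]> waypoints = new ArrayList<>();
                boolean leftToRight = true;
                for (int y = view; y < height + view; y += 2 * view + 1) {
                    int row = Math.min(y, height - 1);
                    int start = leftToRight ? 0 : width - 1;
                    int end = leftToRight ? width - 1 : 0;
                    int dx = leftToRight ? 1 : -1;
                    for (int x = start; x != end + dx; x += dx) {
                        waypoints.add(new int[] { x, row });
                    }
                    leftToRight = !leftToRight; // reverse direction each strip
                }
                return waypoints;
            }

            public static void main(String[] args) {
                for (int[] p : sweepPath(10, 10, 4)) {
                    System.out.println(p[0] + "," + p[1]);
                }
            }
        }

    More informed strategies (a spiral outward from the start point, or frontier-based exploration that always moves to the nearest unseen cell) trade this simplicity for shorter expected travel when nodes cluster.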

    Read the article

  • What is the MySQL 5.5 equivalent of sys.dm_fts_index_keywords_by_document in SQL Server 2008?

    - by djsurge
    I'm making a web application that uses the data in sys.dm_fts_index_keywords_by_document. I'm interested in how many times a given term occurs in each string that is indexed. For example, I have a table with a column called comments, which holds various strings. When I make that column full-text searchable, dm_fts_index_keywords_by_document is created and I can see the per-document word data. Can I do the same thing in MySQL?

    Read the article

  • MSSQL 2008 FTS CONTAINSTABLE Not Returning More Than Five Rows

    - by Elijah Glover
    I have a single table called "Indexes"; it contains one nvarchar and three ntext columns (all full-text indexed), and the index is up to date.

        CONTAINSTABLE(Indexes, *, 'test', 5)  -- 5 results

    No matter what I change the keyword above to, it only returns the first 3-5 results. It should return roughly 90-120 results for that query.

        SELECT count(*) FROM Indexes WHERE [Description] LIKE '%test%'  -- 122 results

    How would I start to troubleshoot this problem?

    Read the article

  • Best way to handle SQL Server fulltext index updates

    - by tlianza
    Hi all, I have a fulltext index which doesn't need to be immediately up to date. I'd like to spare myself the I/O (when I do bulk updates, I see a ton of I/O related to the index) and do the index updates during low-usage times (nightly, perhaps even weekly). It seems there are two ways to go about this:

        1. Turn off change tracking (SET CHANGE_TRACKING OFF) and add a timestamp
           field to the indexed table, so that you can run
           ALTER FULLTEXT INDEX ON <table> START INCREMENTAL POPULATION, or

        2. Enable change tracking, but set it to MANUAL, so that you can run
           ALTER FULLTEXT INDEX ON <table> START UPDATE POPULATION
           when you need it updated.

    Is there a preferred method? I couldn't tell from this overview whether there is a performance benefit one way or the other. Tom

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help shed some light on. At a high level, the problem is as follows. The following query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, in the second case there are two places with huge differences between the actual and estimated number of rows:

        1. the FulltextMatch table-valued function, where the estimate is approx
           22,000 rows and the actual is 29 million rows (which are then filtered
           down to 1,670 rows before the join), and
        2. the index seek on the full-text index, where the estimate is 1 row
           and the actual is 13,000 rows.

    As a result of the estimates, the optimiser chooses a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem either by (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN), or (b) forcing a HASH JOIN. In both cases the query returns in under 1 second and the estimates appear reasonable. My question really is: why are the estimates in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes of the indexed view being used here. Any help greatly appreciated.

    Read the article

  • BST insert operation: don't insert a node if a duplicate already exists

    - by jeev
    The following code reads an input array and constructs a BST from it. If the current arr[i] is a duplicate of a node already in the tree, arr[i] is discarded and that node's count is incremented; count in the struct refers to the number of times a number appears in the array, and fi is the first index at which the element appears in the array. After the insertion I do a post-order traversal of the tree, printing data, count and index (in this order). The output I was getting from the original code was:

        0 0 7
        0 0 6

    Thank you for your help. Jeev

        #include <stdio.h>
        #include <stdlib.h>

        struct node {
            int data;
            struct node *left;
            struct node *right;
            int fi;     /* first index of the element in the array */
            int count;  /* occurrences of the value in the array */
        };

        struct node *binSearchTree(int arr[], int size);
        int setdata(struct node **nod, int data, int index);
        void insert(int data, struct node **root, int index);
        void sortOnCount(struct node *root);

        int main(void)
        {
            int arr[] = {2, 5, 2, 8, 5, 6, 8, 8};
            int size = sizeof(arr) / sizeof(arr[0]);
            struct node *temp = binSearchTree(arr, size);
            sortOnCount(temp);
            return 0;
        }

        struct node *binSearchTree(int arr[], int size)
        {
            int i;
            struct node *root = (struct node *) malloc(sizeof(struct node));
            if (!setdata(&root, arr[0], 0))
                fprintf(stderr, "root couldn't be initialized\n");
            for (i = 1; i < size; i++)
                insert(arr[i], &root, i);
            return root;
        }

        int setdata(struct node **nod, int data, int index)
        {
            if (*nod != NULL) {
                (*nod)->data = data;  /* the original never stored data...     */
                (*nod)->count = 1;    /* ...or initialized count, which is why
                                         the printout showed zeros             */
                (*nod)->fi = index;
                (*nod)->left = NULL;
                (*nod)->right = NULL;
                return 1;
            }
            return 0;
        }

        void insert(int data, struct node **root, int index)
        {
            struct node *new = (struct node *) malloc(sizeof(struct node));
            struct node *cur = *root;  /* walk with a local cursor: the original
                                          advanced *root itself, corrupting the
                                          caller's root pointer */
            setdata(&new, data, index);
            while (1) {
                if (data < cur->data) {  /* was <=, which sent duplicates left and
                                            made the duplicate branch unreachable */
                    if (cur->left != NULL)
                        cur = cur->left;
                    else {
                        cur->left = new;
                        break;
                    }
                } else if (data > cur->data) {
                    if (cur->right != NULL)
                        cur = cur->right;
                    else {
                        cur->right = new;
                        break;
                    }
                } else {  /* duplicate: bump the count, discard the new node */
                    cur->count++;
                    free(new);
                    break;
                }
            }
        }

        void sortOnCount(struct node *root)
        {
            if (root != NULL) {
                sortOnCount(root->left);
                sortOnCount(root->right);
                printf("%d %d %d\n", root->data, root->count, root->fi);
            }
        }

    Read the article

  • Best way to store a list of numbers and retrieve them

    - by bingoNumbers
    Hi. What is the best way to store a list of random numbers (like lotto/bingo numbers) and retrieve them? I'd like to store a number of rows in a database, where each row contains 5-10 numbers ranging from 0 to 90, and I will store a large number of such rows. What I'd like is to retrieve the rows that have at least X numbers in common with a newly generated row. Example:

        [3,4,33,67,85,99]
        [55,56,77,89,98,99]
        [3,4,23,47,85,91]

    Those are in the DB. I will generate this:

        [1,2,11,45,47,88]

    and now I want to get the rows that have at least 1 number in common with it. The easiest (and dumbest?) way is to make 6 selects and check for similar results. I also thought of storing the numbers as a large binary string like

        000000000000000000000100000000010010110000000000000000000000000

    with 99 positions, where each position represents a number from 1 to 99, so if I have a 1 at the 44th position, it means that row contains 44. This method probably just shifts the difficult task to the DB, and it's again not very smart. Any suggestions?
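
    For the in-application overlap test, the binary-string idea from the question maps directly onto a bit set: encode each row as bits and intersect. A minimal Java sketch, assuming the 1-99 number range from the question (whether this beats filtering inside the database depends on how many rows you can afford to load):

        import java.util.BitSet;

        public class RowOverlap {
            // Encode a row of numbers (1..99) as a BitSet, one bit per number.
            static BitSet encode(int... numbers) {
                BitSet bits = new BitSet(100);
                for (int n : numbers) bits.set(n);
                return bits;
            }

            // How many numbers two rows share: AND the sets, count the bits.
            static int overlap(BitSet a, BitSet b) {
                BitSet both = (BitSet) a.clone();
                both.and(b);
                return both.cardinality();
            }

            public static void main(String[] args) {
                BitSet stored = encode(3, 4, 23, 47, 85, 91);
                BitSet drawn = encode(1, 2, 11, 45, 47, 88);
                System.out.println(overlap(stored, drawn)); // 1 (both contain 47)
            }
        }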

    Read the article

  • How can I identify unknown query string fragments that are coming to my site?

    - by Jon
    In the Google Analytics content overview for a site that I work on, the home page is getting many pageviews with some unfamiliar query string fragments, for example:

        /?jkId=1234567890abcdef1234567890abcdef&jt=1&jadid=1234567890&js=1&jk=key words&jsid=12345&jmt=1

    (potentially identifiable IDs have been changed). It clearly looks like some kind of ad-tracking info, but no one who works on the site knows where it comes from, and I haven't been able to find any useful information by searching. Is there a listing of common query string keys available anywhere? Alternatively, does anyone happen to know where these keys (jkId, jt, jadid, js, jk, jsid and jmt) might come from?

    Read the article

  • Add domain to relative URLs

    - by Rick
    How can I add http://facebook.com to relative URLs contained within #facebook_urls? E.g.

        <a href="/test.html">

    becomes

        <a href="http://facebook.com/test.html">

    #facebook_urls also contains absolute URLs, so I want to make sure I don't touch those. Thanks!
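
    If the rewrite has to happen on raw markup rather than in the DOM, one hedged approach is a regular expression that only touches href values beginning with a single "/". A Java sketch of that idea (assuming simple, well-formed href attributes; a DOM or jQuery traversal of #facebook_urls would be more robust):

        import java.util.regex.Pattern;

        public class PrefixRelativeUrls {
            public static void main(String[] args) {
                String html = "<a href=\"/test.html\">a</a> <a href=\"http://example.com/x\">b</a>";
                // Match href="/..." but skip protocol-relative "//" and absolute URLs.
                Pattern rel = Pattern.compile("href=\"(/[^/\"][^\"]*)\"");
                String out = rel.matcher(html).replaceAll("href=\"http://facebook.com$1\"");
                System.out.println(out);
                // <a href="http://facebook.com/test.html">a</a> <a href="http://example.com/x">b</a>
            }
        }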

    Read the article

  • SQL Server full-text index: CONTAINS returns empty

    - by max
    Hi, All: I have an issue with a full-text index; can anybody help me with this? I set up the full-text index:

        CREATE FULLTEXT INDEX ON dbo.Companies  -- my table name
        (
            CompanyName  -- column of my table
            LANGUAGE 0X0
        )
        KEY INDEX IX_Companies_CompanyAlias
        ON QuestionsDB
        WITH CHANGE_TRACKING AUTO
        GO

    Then I use CONTAINS to find the matching rows:

        SELECT CompanyId, CompanyName
        FROM dbo.Companies
        WHERE CONTAINS(CompanyName, 'Micro')

    Everything runs without error; the query just returns an empty result set. And I am sure there is a company with CompanyName "Microsoft" in the Companies table. Much appreciated if anybody can do me a favor on this.

    Read the article

  • RegularExpression-esque search matching Objects in List

    - by Pindatjuh
    I'm currently working on an implementation of the following idea, and I was wondering if there is any literature on this subject. I'm working with Java, but the principle applies in any language with a decent type system. I'd like to implement matching of Objects from a List using a RegularExpression-esque search. So let's say I have a List containing:

        List<Object> x = new ArrayList<Object>();
        x.add(new Object());
        x.add("Hello World");
        x.add("Second String");
        x.add(5); // Integer (auto-boxing)
        x.add(6); // Integer

    Then I create a "regular expression" that works not on a stream of characters but on a stream of Objects, and instead of character classes I use type-system properties:

        [String][Integer]

    This would match one sublist: {Match["Second String", 5]}. The expression

        [String:length()<15]

    will match two sublists (each of length 1) containing a String instance that passes instance.length() < 15: {Match["Hello World"], Match["Second String"]}. And

        [Object][Object]

    matches any pair in the List: {Match[Object, "Hello World"], Match["Second String", 5]}, in a streamed manner (no overlapping matches). Of course, my implementation will have grouping, lookaheads/lookbehinds, and is hierarchical (i.e. matching n elements from Lists in Lists), etc. The above merely illustrates the concept. Is there a name for this principle, and is there literature available on it?
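
    Whatever the canonical name, the core is easy to prototype: treat predicates over Objects as the character classes and run them over a sliding window. A minimal Java sketch of that idea (the class and method names are illustrative, not a published library):

        import java.util.Arrays;
        import java.util.List;
        import java.util.function.Predicate;

        public class ObjectPattern {
            // First index where every predicate matches consecutive elements, or -1.
            static int indexOf(List<Object> items, List<Predicate<Object>> pattern) {
                for (int i = 0; i + pattern.size() <= items.size(); i++) {
                    boolean all = true;
                    for (int j = 0; j < pattern.size(); j++) {
                        if (!pattern.get(j).test(items.get(i + j))) {
                            all = false;
                            break;
                        }
                    }
                    if (all) return i;
                }
                return -1;
            }

            public static void main(String[] args) {
                List<Object> x = Arrays.asList(new Object(), "Hello World", "Second String", 5, 6);
                // The question's [String][Integer] pattern as two predicates.
                List<Predicate<Object>> p = Arrays.asList(
                        o -> o instanceof String,
                        o -> o instanceof Integer);
                System.out.println(indexOf(x, p)); // 2 -> {"Second String", 5}
            }
        }

    Grouping, quantifiers and lookarounds can then be layered on top the same way a conventional regex engine builds on character classes.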

    Read the article

  • Finding key Solr performance metrics

    - by Mike Malloy
    To improve the performance of Solr: find your slowest searches; monitor query results, cache hit rate and cache size, including the document cache and the filter cache; and find problems with Solr update handlers by tracking index operations and document operations. There is a tool from New Relic which may help: http://www.newrelic.com/solr.html

    Read the article

  • CakePHP: filter index pages according to foreign keys

    - by Marki
    Hi there, I'm pretty new to CakePHP and was missing a crucial feature not generated by the scaffolding: filtering. What do I have to do to provide dropdowns or multi-selects on the index pages for each field that is a (foreign) key, thereby allowing the table to be filtered ("OR" inside a multi-select, "AND" between different multi-selects, if any)? From what my web search has shown me, many more people are trying to accomplish the same thing, although I couldn't find anything that would work for me: either they have text fields and do wildcard filtering, or the plugins they propose only work for 1.2, whereas I started with 1.3, etc. Can someone alleviate the confusion and maybe present some working code, or direct me to the definitive guide[tm] where this matter has been solved? Thx

    Read the article

  • Are there any good reasons to intentionally serve a new web site in Quirks mode?

    - by wsanville
    I was a little surprised that Amazon's site doesn't specify a doctype and is rendered in quirks mode. What could possibly be the reason for this? I understand what quirks mode is and why doctypes were introduced, but I can't understand why a doctype would be intentionally left out. I guess it might simplify markup if they're trying to support ancient browsers, but isn't that shooting yourself in the foot when it comes to modern browsers, especially when their site is so JavaScript-rich? Does it level the playing field when it comes to supporting really old browsers? Is there something else I'm missing?

    Read the article

  • Lucene (.NET) document structure and performance suggestions

    - by Josh Handel
    Hello, I am indexing about 100M documents that consist of a few string identifiers and a hundred or so numeric terms. I won't be doing range queries, so I haven't dug too deeply into NumericField, but I'm not thinking it's the right choice here. My problem is that query performance degrades quickly when I start adding OR criteria to my query. All my queries are on specific numeric terms, so a document looks like StringField:[someString] and N DataField:[someNumber]. I then query it with something like

        DataField:((+1 +(2 3)) (+75 +(3 5 52)) (+99 +88 +(102 155 199)))

    Currently these queries take about 7 to 16 seconds to run on my laptop. I would like to make sure that's really the best they can do. I am open to suggestions on both field structure and query structure :-). Thanks, Josh. PS: I have already read over all the other Lucene performance discussions on here, on the Lucene wiki, and at Lucid Imagination... I'm a bit further down the rabbit hole than that...

    Read the article

  • Counting Alphabetic Characters That Are Contained in an Array with C

    - by Craig
    Hello everyone, I am having trouble with a homework question that I've been working at for quite some time. I don't know exactly what the question is asking and need some clarification on that, and also a push in the right direction. Here is the question:

    (2) Solve this problem using one single-subscripted array of counters. The program uses an array of characters defined using the C initialization feature. The program counts the number of each of the alphabetic characters a to z (only lower-case characters are counted) and prints a report (in a neat table) of the number of occurrences of each lower-case character found. Only print the counts for the letters that occur at least once; that is, do not print a count if it is zero. DO NOT use a switch statement in your solution. NOTE: if x is of type char, x-'a' is the difference between the ASCII codes for the character in x and the character 'a'. For example, if x holds the character 'c' then x-'a' has the value 2, while if x holds the character 'd', then x-'a' has the value 3. Provide test results using the following string: "This is an example of text for exercise (2)."

    And here is my source code so far:

        #include <stdio.h>

        int main(void)
        {
            char c[] = "This is an example of text for exercise (2).";
            char s;        /* was commented out, leaving s undeclared */
            int i, k = 0;  /* k was used uninitialized */

            for (i = 0; i < (int) sizeof(c); i++) {
                for (s = 'a'; s <= 'z'; s++) {  /* was s < 'z', which missed 'z' */
                    if (c[i] == s) {
                        k++;
                        printf("%c,%d\n", s, k);  /* prints once per occurrence */
                        k = 0;
                    }
                }
            }
            return 0;
        }

    As you can see, my current solution is a little anemic: it prints a line per occurrence instead of accumulating totals in the array of counters the question asks for. Thanks for the help, and I know everyone on the net doesn't necessarily like helping with other people's homework. ;P
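
    The single-array-of-counters technique the assignment describes looks like this when sketched in Java (the homework itself asks for C; this only illustrates the x - 'a' indexing the hint refers to):

        public class LetterCounts {
            public static void main(String[] args) {
                String text = "This is an example of text for exercise (2).";
                int[] counts = new int[26]; // one counter per letter a..z
                for (char ch : text.toCharArray()) {
                    if (ch >= 'a' && ch <= 'z') {
                        counts[ch - 'a']++; // the ch - 'a' trick from the hint
                    }
                }
                for (int i = 0; i < 26; i++) {
                    if (counts[i] > 0) { // only letters that occur at least once
                        System.out.printf("%c %d%n", (char) ('a' + i), counts[i]);
                    }
                }
            }
        }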

    Read the article

  • Get highest frequency terms from Lucene index

    - by Julia
    Hello! I need to extract the terms with the highest frequencies from several Lucene indexes, to use them for some semantic analysis. I want to get maybe the top 30 most-occurring terms (I still haven't decided on a threshold; I will analyze the results) and their per-index counts. I am aware that I might lose some precision because of potentially dropped duplicates, but for now let's say I am OK with that. For the proposed solutions, (needless to say, maybe) speed is not important, since this is static analysis; I would put the accent on simplicity of implementation, because I'm not so skilled with Lucene (not a programming guru either :/ ) and can't wrap my mind around many of its concepts. I cannot find any code samples for something similar, so all concrete advice (code, pseudocode, links to code samples...) will be much appreciated!!! Thank you!
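
    For a pre-4.0 Lucene index, one simple route is to walk the whole term dictionary with TermEnum and keep the top 30 in a small heap. A sketch, with the caveats that this is the old Lucene 3.x API, that docFreq() counts documents containing a term rather than total occurrences (often an acceptable proxy), and that the index path is illustrative:

        import java.io.File;
        import java.util.PriorityQueue;
        import org.apache.lucene.index.IndexReader;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.index.TermEnum;
        import org.apache.lucene.store.FSDirectory;

        public class TopTerms {
            public static void main(String[] args) throws Exception {
                IndexReader reader = IndexReader.open(FSDirectory.open(new File("/path/to/index")));
                // Min-heap ordered by frequency; evicting the head keeps the 30 largest.
                PriorityQueue<Object[]> top = new PriorityQueue<>(31,
                        (a, b) -> Integer.compare((Integer) a[0], (Integer) b[0]));
                TermEnum terms = reader.terms();
                while (terms.next()) {
                    Term t = terms.term();
                    top.offer(new Object[] { terms.docFreq(), t.field() + ":" + t.text() });
                    if (top.size() > 30) top.poll(); // drop the current smallest
                }
                terms.close();
                while (!top.isEmpty()) { // prints in ascending frequency order
                    Object[] e = top.poll();
                    System.out.println(e[1] + " -> " + e[0]);
                }
                reader.close();
            }
        }

    Run it once per index to get the per-index counts.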

    Read the article

  • Keep Google Analytics in a backup site or not?

    - by Yannis Dran
    I backed up my website and uploaded it to another server for testing and backup purposes. Should I remove the Google Analytics snippet from the index.php of the backup (the snippet is for the real site), or does it not matter, since the backup is not on the same server and URL as the one declared in the Google Analytics account? The reason I don't want to remove it is that someone might forget about it if they upload the backup to the real site after the real one breaks. I also know that if I take the backup site offline there is no GA snippet to worry about, but I need it accessible so I can easily test it without entering a password all the time.

    Read the article

  • Can Sphinx be used over Cassandra?

    - by Mickey Shine
    I am planning to build a Cassandra-based store, and I also need a full-text (Chinese) search system. Can Sphinx be used on top of Cassandra? (Sphinx supports an XML input format, but I am not going to use it, because it is slow and much of the time is spent on XML parsing.) Alternatively, please share your experiences if you have ever built a full-text search system over Cassandra. Thank you

    Read the article

  • How to Serialize a Binary Tree

    - by Veljko Skarich
    I went to an interview today where I was asked to serialize a binary tree. I implemented an array-based approach where the children of node i (numbering in level-order traversal) were at index 2*i for the left child and 2*i + 1 for the right child. The interviewer seemed more or less pleased, but I'm wondering what serialize means exactly. Does it specifically pertain to flattening the tree for writing to disk, or would serializing a tree also include just turning the tree into a linked list, say? Also, how would we go about flattening the tree into a (doubly) linked list, and then reconstructing it? Can you recreate the exact structure of the tree from the linked list? Thank you.
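
    Serializing just means producing a flat representation from which the exact tree can be rebuilt, whether it ends up on disk, on the wire, or in a string; the array encoding is one valid answer, though it wastes space on deep, sparse trees. A common alternative is a pre-order traversal with explicit null markers, which does allow exact reconstruction; a minimal Java sketch:

        import java.util.ArrayDeque;
        import java.util.Arrays;
        import java.util.Deque;

        public class TreeCodec {
            static class Node {
                int val;
                Node left, right;
                Node(int v) { val = v; }
            }

            // Pre-order walk; "#" marks a missing child so the shape is unambiguous.
            static void serialize(Node n, StringBuilder out) {
                if (n == null) { out.append("# "); return; }
                out.append(n.val).append(' ');
                serialize(n.left, out);
                serialize(n.right, out);
            }

            // Consume tokens in the same order to rebuild the exact structure.
            static Node deserialize(Deque<String> tokens) {
                String tok = tokens.poll();
                if (tok == null || tok.equals("#")) return null;
                Node n = new Node(Integer.parseInt(tok));
                n.left = deserialize(tokens);
                n.right = deserialize(tokens);
                return n;
            }

            public static void main(String[] args) {
                Node root = new Node(1);
                root.left = new Node(2);
                root.right = new Node(3);
                StringBuilder sb = new StringBuilder();
                serialize(root, sb);
                System.out.println(sb); // 1 2 # # 3 # #
                Deque<String> tokens = new ArrayDeque<String>(
                        Arrays.asList(sb.toString().trim().split(" ")));
                Node copy = deserialize(tokens);
                System.out.println(copy.right.val); // 3
            }
        }

    A plain linked list of values without such markers, by contrast, loses the shape: many different trees share the same traversal sequence unless you keep two traversals or the null markers.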

    Read the article

  • 2D Array values frequency

    - by Morano88
    If I have a 2D array that is arranged as follows:

        String[][] x = new String[][] {
            {"127.0.0.9", "60", "75000", "UDP", "Good"},
            {"127.0.0.8", "75", "75000", "TCP", "Bad"},
            {"127.0.0.9", "75", "70000", "UDP", "Good"},
            {"127.0.0.1", "",   "70000", "UDP", "Good"},
            {"127.0.0.1", "75", "75000", "TCP", "Bad"}
        };

    I want to know the frequency of each value, so 127.0.0.9 gets 2. How can I write a general solution for this, in Java or as an algorithm for any language?
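
    A general single-pass solution in Java is to fold every cell into a map from value to count; a minimal sketch using the rows from the question:

        import java.util.HashMap;
        import java.util.Map;

        public class ValueFrequency {
            public static void main(String[] args) {
                String[][] x = {
                    {"127.0.0.9", "60", "75000", "UDP", "Good"},
                    {"127.0.0.8", "75", "75000", "TCP", "Bad"},
                    {"127.0.0.9", "75", "70000", "UDP", "Good"},
                    {"127.0.0.1", "",   "70000", "UDP", "Good"},
                    {"127.0.0.1", "75", "75000", "TCP", "Bad"}
                };
                Map<String, Integer> freq = new HashMap<>();
                for (String[] row : x) {
                    for (String cell : row) {
                        freq.merge(cell, 1, Integer::sum); // count every cell value
                    }
                }
                System.out.println(freq.get("127.0.0.9")); // 2
            }
        }

    The same shape (a hash map keyed by value) works in any language.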

    Read the article

  • Is there a general rule of thumb for which browsers to optimize your site for?

    - by Christian
    I have a site (recently relaunched with a new design) for which I have put off optimizing for IE7 for far too long; I was just never too worried about it. The site is optimized for IE8-10, Firefox, Chrome, Opera, Safari, etc. Then I asked myself: is it even worth it? I checked traffic over the couple of months before the relaunch, and about 1.3% of the traffic is coming from IE7. So, is there a general cutoff percentage below which you would not optimize for a specific browser?

    Read the article

  • How to find files matching a regex in C#

    - by bao
    I need to get a list of files on some drive whose names match a specific pattern, for example FA\d\d\d\d.xml, where \d is a digit (0, 1, 2, ..., 9), so files can have names like FA5423.xml. What is the most efficient way to do this?
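
    The usual approach is to enumerate the files once and filter the names with a compiled regular expression; in C# that is Directory.EnumerateFiles plus Regex.IsMatch. The same shape in Java, as an illustrative sketch (the drive root is an assumption):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.regex.Pattern;
        import java.util.stream.Stream;

        public class FindByRegex {
            public static void main(String[] args) throws IOException {
                Pattern name = Pattern.compile("FA\\d{4}\\.xml"); // compile once
                try (Stream<Path> paths = Files.walk(Paths.get("D:/"))) {
                    paths.filter(p -> p.getFileName() != null
                                   && name.matcher(p.getFileName().toString()).matches())
                         .forEach(System.out::println);
                }
            }
        }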

    Read the article
