Search Results

Search found 18361 results on 735 pages for 'hibernate search'.

Page 206 of 735

  • Finding a string inside an XmlDocument

    - by Gady
    Hi, I need to find the inner text of an element inside an XmlDocument and return its XPath. For example, searching for "ThisText" inside <xml> <xml2>ThisText</xml2> </xml> should return the XPath of xml2. What is the most efficient way of doing this in C#?
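
    The question is about C#'s XmlDocument; purely as an illustration of the idea (my own sketch, not the asker's code, and written in Java rather than C#), here is a DOM walk that returns a /xml/xml2-style path for the first element whose text equals the target:

        import javax.xml.parsers.DocumentBuilderFactory;
        import java.io.StringReader;
        import org.w3c.dom.Document;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;
        import org.xml.sax.InputSource;

        public class FindTextPath {
            // Depth-first walk: append each element name to the path and stop at the
            // first element whose only child is a text node equal to the target.
            static String findPath(Node node, String target, String path) {
                if (node.getNodeType() == Node.ELEMENT_NODE) {
                    path = path + "/" + node.getNodeName();
                    NodeList kids = node.getChildNodes();
                    if (kids.getLength() == 1
                            && kids.item(0).getNodeType() == Node.TEXT_NODE
                            && target.equals(node.getTextContent().trim())) {
                        return path;
                    }
                }
                NodeList children = node.getChildNodes();
                for (int i = 0; i < children.getLength(); i++) {
                    String hit = findPath(children.item(i), target, path);
                    if (hit != null) return hit;
                }
                return null;
            }

            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                        .parse(new InputSource(new StringReader("<xml><xml2>ThisText</xml2></xml>")));
                System.out.println(findPath(doc, "ThisText", ""));  // prints /xml/xml2
            }
        }

    In C#, the same traversal over XmlDocument child nodes (or a SelectNodes XPath query such as //*[text()='ThisText'] plus a walk up the ParentNode chain) achieves the equivalent result.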

    Read the article

  • Save a binary file in SQL Server as BLOB and text (or get the text from Full-Text index)

    - by Glennular
    Currently we are saving files (PDF, DOC) into the database as BLOB fields. I would like to be able to retrieve the raw text of the file so I can manipulate it for hit-highlighting and other functions. Does anyone know of a simple way to parse out the files and save the raw text at save time, either via SQL or .NET code? I have found that Adobe has a Filtdump utility that will convert a PDF to text, but Filtdump seems to be a command-line tool, and I don't see a way to use a file stream. And what would the extractor be for Office documents and other file types? Or is there a way to pull the raw text out of the full-text index? Note: I am trying to build a .NET & MS SQL solution without having to use a third-party tool such as Lucene.
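
    One common shape for the "extract the text at save time" option is to run the uploaded file through a text-extraction library before (or while) inserting the row. As a hedged sketch only, here it is with Apache PDFBox in Java; the question is about .NET, where an iTextSharp-style library or an IFilter wrapper plays the same role, and the library choice here is my assumption, not something from the question:

        import java.io.File;
        import org.apache.pdfbox.pdmodel.PDDocument;
        import org.apache.pdfbox.text.PDFTextStripper;

        public class PdfTextExtractor {
            // Extract the plain text of a PDF so it can be stored in a separate column
            // next to the BLOB and used later for hit-highlighting.
            // Sketch only; assumes Apache PDFBox 2.x.
            public static String extractText(File pdf) throws Exception {
                try (PDDocument doc = PDDocument.load(pdf)) {
                    return new PDFTextStripper().getText(doc);
                }
            }
        }

    Office documents would need a different extractor (e.g. Apache POI on the Java side); whether the raw tokens can instead be pulled back out of SQL Server's own full-text index is a separate question.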

    Read the article

  • How to find similar/related text with Zend Lucene?

    - by Arty
    Say I need to implement searching for related titles, just like Stack Overflow does before you add your question, or digg.com does before submitting news. I didn't find a way to do this with Zend Lucene. There is a setSlop method for queries, but as I understand it, it doesn't help. Is there any way to do this kind of search?
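
    Zend_Search_Lucene mirrors the Java Lucene API, so one way to picture the usual "related titles" approach is: tokenize the new title and OR its terms together, letting scoring rank existing titles by how many terms they share. A minimal sketch, in classic (pre-4.0) Java Lucene rather than PHP, with an assumed lower-cased "title" field; it illustrates the idea and is not the poster's code:

        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.BooleanClause;
        import org.apache.lucene.search.BooleanQuery;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.TermQuery;
        import org.apache.lucene.search.TopDocs;

        public class RelatedTitles {
            // Build an OR-query from the words of the new title; documents that match
            // more of the words score higher, which is roughly the "related titles" effect.
            public static TopDocs findRelated(IndexSearcher searcher, String newTitle) throws Exception {
                BooleanQuery query = new BooleanQuery();
                for (String word : newTitle.toLowerCase().split("\\W+")) {
                    if (word.length() > 2) {  // crude stop-word filter for the sketch
                        query.add(new TermQuery(new Term("title", word)), BooleanClause.Occur.SHOULD);
                    }
                }
                return searcher.search(query, 10);  // ten most similar existing titles
            }
        }

    setSlop only controls how far apart the terms of a phrase query may be, which is why it does not help with finding related documents.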

    Read the article

  • How to select the rest of the word in incremental search in Eclipse?

    - by arberg
    When in incremental search mode in Eclipse, is there a way to select the rest of the word? For example, suppose I want to find the word “handleReservationGranted”. I type Ctrl-f to enter incremental search mode, and start typing the letters “han”. Now suppose I have found the beginning of “handleReservationGranted”. In my search box I have “han”, but I would now like to be able to select the rest of the word, so that the search box contains “handleReservationGranted” instead of “han”. In XEmacs, I can type Ctrl-s, type “han”, and then type Ctrl-w. Now my search term is “handleReservationGranted”, and not “han”. So now if I press Ctrl-s, I find the next occurrence of “handleReservationGranted”. I frequently prefer the incremental search over the search dialog, as the search dialog takes too much space on my screen, and, most annoyingly, it frequently hides the matches it finds. I am using Eclipse Galileo (3.5.2). Ctrl-Shift-L gives me the list of possible shortcuts in the given context, but none seems to fit what I'm looking for.

    Read the article

  • How can I store data in a table as a trie? (SQL Server)

    - by Matt
    Hi, to make things easier, say the table contains all the words in the English dictionary. What I would like to do is store the data as a trie, so that I can traverse the different branches of the trie and return the most relevant result. First, how do I store the data in the table as a trie? Second, how do I traverse the tree? If it helps at all, the suggestion in this previous question is where this question was sparked from. Please keep in mind it's SQL we're talking about: I understood Mike Dunlavey's C implementation because of the pointers, but I can't see how this part (the trie itself) works in SQL. Thanks, Matt
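
    A hedged illustration of the usual mapping (not from the question): store the trie as an adjacency list, e.g. a table TrieNode(id, parent_id, letter, is_word), so traversal becomes a loop of lookups keyed on (parent_id, letter). The table and column names are assumptions; the sketch below uses plain JDBC to show the traversal:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class TrieLookup {
            // Walk an adjacency-list trie stored as TrieNode(id, parent_id, letter, is_word),
            // one child lookup per character. Returns true if the word is in the trie.
            // Table and column names are assumptions for illustration.
            public static boolean contains(Connection conn, String word) throws Exception {
                String sql = "SELECT id, is_word FROM TrieNode WHERE parent_id = ? AND letter = ?";
                int nodeId = 0;                      // assume 0 is the root node's id
                boolean isWord = false;
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (char c : word.toCharArray()) {
                        ps.setInt(1, nodeId);
                        ps.setString(2, String.valueOf(c));
                        try (ResultSet rs = ps.executeQuery()) {
                            if (!rs.next()) return false;   // no branch for this letter
                            nodeId = rs.getInt("id");
                            isWord = rs.getBoolean("is_word");
                        }
                    }
                }
                return isWord;
            }
        }

    Walking a whole branch (for prefix suggestions) is the same idea, done with a recursive query or repeated child lookups per level.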

    Read the article

  • Searching Techniques/Algorithms for Resources over a given area

    - by Raydon
    I have a flat area with nodes randomly placed on it. I need techniques that take a starting point, move in a certain way (the algorithm), find nodes, and continue searching. I do not have an overall view of the surface (i.e. I cannot see everything), only a limited view (4 cells in any direction). Ideally, these methods would be efficient. Any pointers in the right direction would be greatly appreciated.
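
    One systematic coverage pattern worth knowing here, offered only as an illustration and not as the definitive answer, is the boustrophedon ("lawnmower") sweep: with a view radius of 4 cells, passes spaced 2*4+1 = 9 rows apart see every cell at least once. A minimal Java sketch that generates the waypoints:

        import java.util.ArrayList;
        import java.util.List;

        public class LawnmowerSweep {
            // Generate waypoints for a boustrophedon ("lawnmower") sweep of a width x height grid.
            // Rows are spaced so that a viewRadius-cell view covers everything between passes;
            // the last pass is clamped to the grid edge when height is not a multiple of the stride.
            static List<int[]> waypoints(int width, int height, int viewRadius) {
                List<int[]> path = new ArrayList<>();
                int stride = 2 * viewRadius + 1;
                boolean leftToRight = true;
                for (int y = viewRadius; ; y += stride) {
                    int row = Math.min(y, height - 1);
                    for (int i = 0; i < width; i++) {
                        int x = leftToRight ? i : width - 1 - i;
                        path.add(new int[]{x, row});
                    }
                    leftToRight = !leftToRight;
                    if (y >= height - 1) break;
                }
                return path;
            }
        }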

    Read the article

  • What is the MySQL 5.5 equivalent of sys.dm_fts_index_keywords_by_document in SQL Server 2008?

    - by djsurge
    I'm making a web application that uses the data in sys.dm_fts_index_keywords_by_document. I'm interested in how many times a given term occurs in each string that is indexed. For example, I have a table with a column called comments, and the table has various strings in the comments field. When I make that column full-text searchable, dm_fts_index_keywords_by_document is created and I can see the per-document word data. Can I do the same thing in MySQL?

    Read the article

  • Best way to store lists of numbers and retrieve them

    - by bingoNumbers
    Hi. What is the best way to store lists of random numbers (like lotto/bingo numbers) and retrieve them? I'd like to store in a database a number of rows, where each row contains 5-10 numbers ranging from 0 to 90, and I will store a large number of those rows. What I'd like to be able to do is retrieve the rows that have at least X numbers in common with a newly generated row. Example: these rows are in the DB: [3,4,33,67,85,99] [55,56,77,89,98,99] [3,4,23,47,85,91]. I will generate this: [1,2,11,45,47,88], and now I want to get the rows that have at least one number in common with it. The easiest (and dumbest?) way is to make six SELECTs and check for similar results. I also thought of storing the numbers as a large binary string like 000000000000000000000100000000010010110000000000000000000000000, 99 bits long, where each bit position represents a number from 1 to 99; if I have a 1 at the 44th position, it means that row contains 44. This method probably just shifts the difficult task to the DB, and it's again not very smart. Any suggestion?
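
    The bitmask idea from the question can also live in application code; as a minimal sketch (my own illustration, assuming the comparison happens in the application rather than in SQL), java.util.BitSet makes "at least X numbers in common" a matter of intersecting two bit sets and counting the bits:

        import java.util.BitSet;

        public class DrawOverlap {
            // Encode a draw such as [3, 4, 23, 47, 85, 91] as a bit set: bit n is set
            // when the draw contains the number n.
            static BitSet encode(int... numbers) {
                BitSet bits = new BitSet(100);
                for (int n : numbers) bits.set(n);
                return bits;
            }

            // Count how many numbers two draws share: intersect the bit sets, count the bits.
            static int common(BitSet a, BitSet b) {
                BitSet overlap = (BitSet) a.clone();   // clone so the stored row is not modified
                overlap.and(b);
                return overlap.cardinality();
            }

            public static void main(String[] args) {
                BitSet stored = encode(3, 4, 23, 47, 85, 91);
                BitSet fresh  = encode(1, 2, 11, 45, 47, 88);
                System.out.println(common(stored, fresh) >= 1);   // true: 47 is shared
            }
        }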

    Read the article

  • Add domain to relative URLs

    - by Rick
    How can I add http://facebook.com to relative URLs contained within #facebook_urls? E.g. <a href="/test.html"> becomes <a href="http://facebook.com/test.html">. #facebook_urls also contains absolute URLs, so I want to make sure I don't touch those. Thanks!

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help shed some light on. At a high level, the problem is as follows. This query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key]=SA.UNIQUE_ID

    but if we add a filter to the query, then it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key]=SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plan for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows: (1) for the FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1,670 rows before the join), and (2) for the index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows. As a result of the estimates, the optimiser is choosing a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem either (a) by parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN), or (b) by forcing a HASH JOIN to be used. In both of these cases the query returns in under 1 second and the estimates appear reasonable. My question really is: why are the estimates used in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes of the indexed view being used here. Any help greatly appreciated.

    Read the article

  • MSSQL 2008 FTS CONTAINSTABLE Not Returning More Than Five Rows

    - by Elijah Glover
    I have a single table called "Indexes"; it contains one nvarchar and three ntext columns (all full-text indexed). The index is up to date.

        CONTAINSTABLE(Indexes, *, 'test', 5) //5 results

    No matter what I change the above keyword to, it only returns the first 3-5 results. It should return roughly 90-120 results for the above query.

        SELECT count(*) FROM Indexes WHERE [Description] like '%test%' //122 results

    How would I start to troubleshoot this problem?

    Read the article

  • Best way to handle SQL Server fulltext index updates

    - by tlianza
    Hi all, I have a full-text index which doesn't need to be immediately up to date. I'd like to spare myself the I/O (when I do bulk updates, I see a ton of I/O related to the index) and do the index updates during low-usage times (nightly, perhaps even weekly). It seems there are two ways to go about this: (1) turn off change tracking (SET CHANGE_TRACKING OFF) and add a timestamp field to the indexed table, so that you can run ALTER FULLTEXT INDEX ON <table> START INCREMENTAL POPULATION, or (2) enable change tracking but set it to MANUAL, so that you can run ALTER FULLTEXT INDEX ON <table> START UPDATE POPULATION when you need it updated. Is there a preferred method? I couldn't tell from this overview whether there was a performance benefit one way or the other. Tom

    Read the article

  • BST insert operation: don't insert a node if a duplicate already exists

    - by jeev
    The following code reads an input array and constructs a BST from it. If the current arr[i] is a duplicate of a node in the tree, then arr[i] is discarded. count in the struct node refers to the number of times a number appears in the array; fi refers to the first index at which the element is found in the array. After the insertion, I am doing a post-order traversal of the tree and printing the data, count and index (in this order). The output I am getting when I run this code is: 0 0 7 0 0 6. Thank you for your help. Jeev

        struct node{
            int data;
            struct node *left;
            struct node *right;
            int fi;
            int count;
        };

        struct node* binSearchTree(int arr[], int size);
        int setdata(struct node**node, int data, int index);
        void insert(int data, struct node **root, int index);
        void sortOnCount(struct node* root);

        void main(){
            int arr[] = {2,5,2,8,5,6,8,8};
            int size = sizeof(arr)/sizeof(arr[0]);
            struct node* temp = binSearchTree(arr, size);
            sortOnCount(temp);
        }

        struct node* binSearchTree(int arr[], int size){
            struct node* root = (struct node*)malloc(sizeof(struct node));
            if(!setdata(&root, arr[0], 0))
                fprintf(stderr, "root couldn't be initialized");
            int i = 1;
            for(;i<size;i++){
                insert(arr[i], &root, i);
            }
            return root;
        }

        int setdata(struct node** nod, int data, int index){
            if(*nod!=NULL){
                (*nod)->fi = index;
                (*nod)->left = NULL;
                (*nod)->right = NULL;
                return 1;
            }
            return 0;
        }

        void insert(int data, struct node **root, int index){
            struct node* new = (struct node*)malloc(sizeof(struct node));
            setdata(&new, data, index);
            struct node** temp = root;
            while(1){
                if(data<=(*temp)->data){
                    if((*temp)->left!=NULL)
                        *temp=(*temp)->left;
                    else{
                        (*temp)->left = new;
                        break;
                    }
                }
                else if(data>(*temp)->data){
                    if((*temp)->right!=NULL)
                        *temp=(*temp)->right;
                    else{
                        (*temp)->right = new;
                        break;
                    }
                }
                else{
                    (*temp)->count++;
                    free(new);
                    break;
                }
            }
        }

        void sortOnCount(struct node* root){
            if(root!=NULL){
                sortOnCount(root->left);
                sortOnCount(root->right);
                printf("%d %d %d\n", (root)->data, (root)->count, (root)->fi);
            }
        }

    Read the article

  • SQL Server full-text index: CONTAINS returns an empty result set

    - by max
    Hi, all: I have an issue with a full-text index; can anybody help me with this? Setting up the full-text index:

        CREATE FULLTEXT INDEX ON dbo.Companies (my table name)
        (
            CompanyName (column of my table) Language 0X0
        )
        KEY INDEX IX_Companies_CompanyAlias ON QuestionsDB
        WITH CHANGE_TRACKING AUTO
        GO

    Using CONTAINS to find the matching rows:

        SELECT CompanyId, CompanyName
        FROM dbo.Companies
        WHERE CONTAINS(CompanyName, 'Micro')

    Everything runs without errors; it just returns an empty result set. And I am sure there is a company with CompanyName "Microsoft" in the Companies table. Much appreciated if anybody can do me a favor on this.

    Read the article

  • RegularExpression-esque search matching Objects in List

    - by Pindatjuh
    I'm currently working on an implementation of the following idea, and I was wondering if there is any literature on this subject. I am working in Java, but the principle applies to any language with a decent type system. I'd like to implement matching Objects from a List using a RegularExpression-esque search. So let's say I have a List containing:

        List<Object> x = new ArrayList<Object>();
        x.add(new Object());
        x.add("Hello World");
        x.add("Second String");
        x.add(5); // Integer (auto-boxing)
        x.add(6); // Integer

    Then I create a "Regular Expression" (not working with a stream of characters, but working with a stream of Objects), and instead of character classes, I use type-system properties: [String][Integer]. This would match one sublist: {Match["Second String", 5]}. The expression [String:length()<15] will match two sublists (each of length 1) containing a String whose instance passes the expression instance.length() < 15: {Match["Hello World"], Match["Second String"]}. [Object][Object] matches any pair in the List: {Match[Object, "Hello World"], Match["Second String", 5]}, in a streamed manner (no overlapping matches). Of course, my implementation will have grouping, lookaheads/lookbehinds, and is hierarchical (i.e. matching n elements from Lists in Lists), etc. The above merely illustrates the concept. Is there a name for this principle, and is there literature available on it?
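
    There is no single name I can point to with certainty; the idea is close to "object pattern matching over sequences" (and, for the hierarchical part, to tree pattern matching or XPath-like matching). As a hedged sketch of the core concept only, not a full engine, each "character class" can be represented as a Predicate over Object, and the list scanned for non-overlapping matches:

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;
        import java.util.function.Predicate;

        public class ObjectPatternMatcher {
            // Find non-overlapping sublists where the i-th element satisfies the i-th predicate.
            // Each predicate plays the role of a character class ("[String]", "[Integer]", ...).
            static List<List<Object>> findAll(List<Object> input, List<Predicate<Object>> pattern) {
                List<List<Object>> matches = new ArrayList<>();
                int i = 0;
                while (i + pattern.size() <= input.size()) {
                    boolean ok = true;
                    for (int j = 0; j < pattern.size(); j++) {
                        if (!pattern.get(j).test(input.get(i + j))) { ok = false; break; }
                    }
                    if (ok) {
                        matches.add(input.subList(i, i + pattern.size()));
                        i += pattern.size();   // no overlapping matches, as in the question
                    } else {
                        i++;
                    }
                }
                return matches;
            }

            public static void main(String[] args) {
                List<Object> x = Arrays.asList(new Object(), "Hello World", "Second String", 5, 6);
                // Equivalent of the "[String][Integer]" pattern from the question.
                List<List<Object>> found = findAll(x, Arrays.asList(
                        o -> o instanceof String,
                        o -> o instanceof Integer));
                System.out.println(found);   // [[Second String, 5]]
            }
        }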

    Read the article

  • Finding key Solr performance metrics

    - by Mike Malloy
    To improve the performance of Solr: find your slowest searches; monitor query results, cache hit rate and cache size, the document cache and the filter cache; and find problems with Solr update handlers by tracking index operations and document operations. There is a tool from New Relic which may help: http://www.newrelic.com/solr.html

    Read the article

  • CakePHP: filter index pages according to foreign keys

    - by Marki
    Hi there, I'm pretty new to CakePHP and was missing a crucial feature not generated as scaffolding: filtering. What do I have to do to provide dropdowns or multi-selects on the index pages for each field that is a (foreign) key, thereby allowing the table to be filtered ("OR" inside a multi-select, "AND" between different multi-selects, if any)? From what my web search has shown me, there are many more people trying to accomplish the same thing, although I couldn't find anything that would work for me, because either they have text fields and do wildcard filtering, or the plugins they propose only work for 1.2 whereas I started with 1.3, etc. Can someone alleviate the confusion and maybe present some working code, or direct me to the definitive guide[tm] where this matter has been solved? Thx

    Read the article

  • Counting Alphabetic Characters That Are Contained in an Array with C

    - by Craig
    Hello everyone, I am having trouble with a homework question that I've been working at for quite some time. I don't know exactly what the question is asking and need some clarification on that, and also a push in the right direction. Here is the question: (2) Solve this problem using one single-subscripted array of counters. The program uses an array of characters defined using the C initialization feature. The program counts the number of each of the alphabetic characters a to z (only lower case characters are counted) and prints a report (in a neat table) of the number of occurrences of each lower case character found. Only print the counts for the letters that occur at least once; that is, do not print a count if it is zero. DO NOT use a switch statement in your solution. NOTE: if x is of type char, x-‘a’ is the difference between the ASCII codes for the character in x and the character ‘a’. For example, if x holds the character ‘c’ then x-‘a’ has the value 2, while if x holds the character ‘d’, then x-‘a’ has the value 3. Provide test results using the following string: “This is an example of text for exercise (2).” And here is my source code so far:

        #include<stdio.h>

        int main()
        {
            char c[] = "This is an example of text for exercise (2).";
            char d[26];
            int i;
            int j = 0;
            int k;
            j = 0;
            //char s = 97;
            for(i = 0; i < sizeof(c); i++)
            {
                for(s = 'a'; s < 'z'; s++){
                    if( c[i] == s){
                        k++;
                        printf("%c,%d\n", s, k);
                        k = 0;
                    }
                }
            }
            return 0;
        }

    As you can see, my current solution is a little anemic. Thanks for the help, and I know everyone on the net doesn't necessarily like helping with other people's homework. ;P

    Read the article

  • Lucene (.NET) document structure and performance suggestions

    - by Josh Handel
    Hello, I am indexing about 100M documents that consist of a few string identifiers and a hundred or so numeric terms. I won't be doing range queries, so I haven't dug too deep into NumericField, but I'm not thinking it's the right choice here. My problem is that query performance degrades quickly when I start adding OR criteria to my query. All my queries are on specific numeric terms. So a document looks like StringField:[someString] and N DataField:[someNumber]. I then query it with something like DataField:((+1 +(2 3)) (+75 +(3 5 52)) (+99 +88 +(102 155 199))). Currently these queries take about 7 to 16 seconds to run on my laptop. I would like to make sure that's really the best they can do. I am open to suggestions on field structure and query structure :-). Thanks, Josh. PS: I have already read over all the other Lucene performance discussions on here, on the Lucene wiki and at Lucid Imagination... I'm a bit further down the rabbit hole than that.

    Read the article

  • 2D Array values frequency

    - by Morano88
    If I have a 2D array that is arranged as follows:

        String X[][] = new String [][] {
            {"127.0.0.9", "60", "75000", "UDP", "Good"},
            {"127.0.0.8", "75", "75000", "TCP", "Bad"},
            {"127.0.0.9", "75", "70000", "UDP", "Good"},
            {"127.0.0.1", "",   "70000", "UDP", "Good"},
            {"127.0.0.1", "75", "75000", "TCP", "Bad"}
        };

    I want to know the frequency of each value, so 127.0.0.9 gets 2. How can I do a general solution for this, in Java or as an algorithm for any language?
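
    A minimal sketch of one general way to do it in Java (my own illustration): walk every cell and count occurrences in a map.

        import java.util.HashMap;
        import java.util.Map;

        public class ValueFrequency {
            // Count how often each value appears anywhere in a 2D String array.
            static Map<String, Integer> frequencies(String[][] data) {
                Map<String, Integer> counts = new HashMap<>();
                for (String[] row : data) {
                    for (String value : row) {
                        counts.merge(value, 1, Integer::sum);   // add 1, starting from 0
                    }
                }
                return counts;
            }

            public static void main(String[] args) {
                String[][] x = {
                    {"127.0.0.9", "60", "75000", "UDP", "Good"},
                    {"127.0.0.9", "75", "70000", "UDP", "Good"}
                };
                System.out.println(frequencies(x).get("127.0.0.9"));   // 2
            }
        }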

    Read the article

  • Can Sphinx be used over Cassandra?

    - by Mickey Shine
    I am planning to build a Cassandra store system, and I also need a full-text (Chinese) search system. Can Sphinx be used on Cassandra? (Sphinx supports an XML format, but I am not going to use it, because it is slow and much of the time is spent on XML parsing.) Or, if you have ever built a full-text search system over Cassandra, please share your experiences. Thank you

    Read the article

  • Get highest frequency terms from Lucene index

    - by Julia
    Hello! I need to extract the terms with the highest frequencies from several Lucene indexes, to use them for some semantic analysis. So I want to get maybe the top 30 most occurring terms (I still haven't decided on a threshold; I will analyze the results) and their per-index counts. I am aware that I might lose some precision because of potentially dropped duplicates, but for now, let's say I am OK with that. For the proposed solutions, (needless to say, maybe) speed is not important, since I would do static analysis. I would put the accent on simplicity of implementation, because I'm not so skilled with Lucene (not a programming guru either :/ ) and can't wrap my mind around many of its concepts. I cannot find any code samples for something similar, so any concrete advice (code, pseudocode, links to code samples...) would be very much appreciated!!! Thank you!
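
    As a sketch only, assuming the classic pre-4.0 Lucene API where IndexReader.terms() exposes a TermEnum with docFreq() (newer versions expose the same information through Terms/TermsEnum): the simplest notion of "frequency" to start with is document frequency, i.e. in how many documents a term appears.

        import java.io.File;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.stream.Collectors;
        import org.apache.lucene.index.IndexReader;
        import org.apache.lucene.index.TermEnum;
        import org.apache.lucene.store.FSDirectory;

        public class TopTerms {
            // For one index, collect every term's document frequency (how many documents
            // contain the term) and return the 30 most frequent as "field:text (count)".
            public static List<String> topTerms(String indexPath) throws Exception {
                Map<String, Integer> freq = new HashMap<>();
                IndexReader reader = IndexReader.open(FSDirectory.open(new File(indexPath)));
                TermEnum terms = reader.terms();
                while (terms.next()) {
                    freq.put(terms.term().field() + ":" + terms.term().text(), terms.docFreq());
                }
                terms.close();
                reader.close();
                return freq.entrySet().stream()
                        .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                        .limit(30)
                        .map(e -> e.getKey() + " (" + e.getValue() + ")")
                        .collect(Collectors.toList());
            }
        }

    Running this once per index gives the per-index lists to compare; if total occurrences (rather than document counts) matter, the per-document term frequencies would have to be summed instead.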

    Read the article

  • How to Serialize a Binary Tree

    - by Veljko Skarich
    I went to an interview today where I was asked to serialize a binary tree. I implemented an array-based approach where the children of node i (numbering in level-order traversal) were at index 2*i for the left child and 2*i + 1 for the right child. The interviewer seemed more or less pleased, but I'm wondering what "serialize" means exactly. Does it specifically pertain to flattening the tree for writing to disk, or would serializing a tree also include just turning the tree into a linked list, say? Also, how would we go about flattening the tree into a (doubly) linked list, and then reconstructing it? Can you recreate the exact structure of the tree from the linked list? Thank you.
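
    For illustration only (one common convention, not necessarily what the interviewer had in mind): a pre-order traversal that writes an explicit marker for missing children produces a flat form from which the exact tree can be rebuilt, which is the essence of what "serialize" usually asks for.

        public class TreeCodec {
            static class Node {
                int value;
                Node left, right;
                Node(int value) { this.value = value; }
            }

            // Pre-order serialization with "#" marking null children, e.g. "1,2,#,#,3,#,#".
            static String serialize(Node node) {
                if (node == null) return "#";
                return node.value + "," + serialize(node.left) + "," + serialize(node.right);
            }

            // Rebuild the exact tree by consuming the tokens in the same pre-order.
            static Node deserialize(String data) {
                return build(new java.util.LinkedList<>(java.util.Arrays.asList(data.split(","))));
            }

            private static Node build(java.util.Deque<String> tokens) {
                String token = tokens.poll();
                if ("#".equals(token)) return null;
                Node node = new Node(Integer.parseInt(token));
                node.left = build(tokens);
                node.right = build(tokens);
                return node;
            }

            public static void main(String[] args) {
                Node root = new Node(1);
                root.left = new Node(2);
                root.right = new Node(3);
                String flat = serialize(root);
                System.out.println(flat);                          // 1,2,#,#,3,#,#
                System.out.println(serialize(deserialize(flat)));  // identical: round trip works
            }
        }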

    Read the article
