Search Results

Search found 18288 results on 732 pages for 'meta search'.

Page 47/732

  • scroll/search JList when user starts typing

    - by alex
    I would like to implement one of those fancy features I run into every now and then. I would like to allow a user to click on a JList and, if words are typed, run a query and advance the selection to the next match (by prefix). Is there an example of such an implementation in Java somewhere? I'm thinking of a combination of key listeners, getNextMatch() and setSelectedValue().
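
    A minimal sketch of that combination (an illustration, not from the post; it assumes a JList<String> and ignores the timeout a real type-ahead buffer would need):

        import javax.swing.*;
        import javax.swing.text.Position;
        import java.awt.event.KeyAdapter;
        import java.awt.event.KeyEvent;

        // Attach with: list.addKeyListener(new TypeToSelect(list));
        class TypeToSelect extends KeyAdapter {
            private final JList<String> list;
            private final StringBuilder prefix = new StringBuilder();

            TypeToSelect(JList<String> list) { this.list = list; }

            @Override
            public void keyTyped(KeyEvent e) {
                prefix.append(e.getKeyChar());
                int start = Math.max(list.getSelectedIndex(), 0);
                // Look forward from the current selection for an item starting with the prefix
                int match = list.getNextMatch(prefix.toString(), start, Position.Bias.Forward);
                if (match >= 0) {
                    list.setSelectedIndex(match);
                    list.ensureIndexIsVisible(match);
                }
            }
        }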

    Read the article

  • manipulating strings, search text

    - by alhambraeidos
    Hi all, I'll try to explain my issue. Note 1: I have only strings, not files, ONLY strings.

    I have a string like this (NOTE: I include line numbers for clarity; the line separator is \r\n (CRLF)):

        string allText =
        1 Lorem ipsum Lorem ipsum
        2 == START 001partXXX.sql ==
        3 Lorem ipsum TEXT Lorem ipsum
        4 == END 001partXXX.sql ==
        5 Lorem ipsum TEXT Lorem ipsum
        6 == START 002partzzz.sql ==
        7 Lorem ipsum TEXT Lorem ipsum
        8 == END 002partzzz.sql ==

    I have contents strings like this:

        string contents1 =
        == START 001partXXX.sql ==
        Lorem ipsum TEXT Lorem ipsum
        == END 001partXXX.sql ==

    The other content string:

        string contents2 =
        == START 002partzzz.sql ==
        Lorem ipsum TEXT Lorem ipsum
        == END 002partzzz.sql ==

    Then:

        allText.IndexOf(contents1) != -1
        allText.IndexOf(contents2) != -1

    I need a function that receives 3 parameters (allText, contents, and the text to find in contents) and returns the line number of that text in allText. For example:

        input: allText, contents2, "TEXT"            output = line number 7
        input: allText, contents1, "TEXT"            output = line number 3
        input: allText, contents1, "TEXT NOT FOUND"  output = line number -1

    How can I implement this function? Any help is very useful to me. Thanks in advance.
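
    A sketch of one way to do it (the class and method names are illustrative, not from the post): locate the contents block inside allText, find the matching line inside the block, then count the lines that precede the block.

        using System;

        static class LineLocator
        {
            // Returns the 1-based line number, within allText, of the first line of
            // `contents` that contains `textToFind`, or -1 if it is not found.
            public static int FindLineNumber(string allText, string contents, string textToFind)
            {
                int start = allText.IndexOf(contents, StringComparison.Ordinal);
                if (start < 0)
                    return -1;                                   // contents block not present

                string[] blockLines = contents.Split(new[] { "\r\n" }, StringSplitOptions.None);
                int offsetInBlock = -1;
                for (int i = 0; i < blockLines.Length; i++)
                {
                    if (blockLines[i].Contains(textToFind)) { offsetInBlock = i; break; }
                }
                if (offsetInBlock < 0)
                    return -1;                                   // text not inside the block

                // Count the CRLFs before the block to know how many lines precede it
                int linesBefore = 0;
                for (int pos = allText.IndexOf("\r\n", StringComparison.Ordinal);
                     pos >= 0 && pos < start;
                     pos = allText.IndexOf("\r\n", pos + 2, StringComparison.Ordinal))
                {
                    linesBefore++;
                }
                return linesBefore + offsetInBlock + 1;
            }
        }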

    Read the article

  • Smart image search via Powershell

    - by Oleg Svechkarenko
    I'm interested in searching for files by custom properties. For example, I want to find all JPEG images with certain dimensions. Something like:

        Get-ChildItem -Path C:\ -Filter *.jpg -Recurse | Where-Object { $_.Dimension -eq '1024x768' }

    I suspect it involves using System.Drawing. How can it be done? Thanks in advance.
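
    A sketch along those lines (assuming System.Drawing is available, as it is in Windows PowerShell; not a tested script):

        Add-Type -AssemblyName System.Drawing

        Get-ChildItem -Path C:\ -Filter *.jpg -Recurse -ErrorAction SilentlyContinue |
            ForEach-Object {
                $img = [System.Drawing.Image]::FromFile($_.FullName)
                try {
                    # Keep only images with the requested dimensions
                    if ($img.Width -eq 1024 -and $img.Height -eq 768) { $_ }
                } finally {
                    $img.Dispose()   # release the file handle
                }
            }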

    Read the article

  • Search a variable for an address

    - by chrissygormley
    Hello, I am trying to match information stored in a variable. I have a list of UUIDs with IP addresses beside them. The code I have is:

        r = re.compile(r'urn:uuid:5EEF382F-JSQ9-3c45-D5E0-K15X8M8K76')
        m = r.match(str(serv))
        if m1:
            print 'Found'

    The string serv contains:

        urn:uuid:7FDS890A-KD9E-3h53-G7E8-BHJSD6789D:[u'http://10.10.10.20:12365/7FDS890A-KD9E-3h53-G7E8-BHJSD6789D/']
        ---------------------------------------------
        urn:uuid:5EEF382F-JSQ9-3c45-D5E0-K15X8M8K76:[u'http://10.10.10.10:42365']
        ---------------------------------------------
        urn:uuid:8DSGF89S-FS90-5c87-K3DF-SDFU890US9:[u'http://10.10.10.40:5234']
        ---------------------------------------------

    So basically I want to find the UUID string, work out what its address is, and store that as a variable. So far I have just tried to get it to match the string, to no avail. Can anyone point out a solution to this? Thanks.
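
    A sketch of one way to pull the address out (illustrative, not the poster's code): re.match only matches at the start of the string, so use re.search with a capture group instead.

        import re

        uuid = '5EEF382F-JSQ9-3c45-D5E0-K15X8M8K76'
        pattern = re.compile(r"urn:uuid:%s:\[u'(?P<address>[^']+)'\]" % re.escape(uuid))

        m = pattern.search(str(serv))        # serv is assumed to hold the text shown above
        if m:
            address = m.group('address')     # e.g. http://10.10.10.10:42365
            print('Found', address)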

    Read the article

  • MSSQL Search Proper Names Full Text Index vs LIKE + SOUNDEX

    - by Matthew Talbert
    I have a database of names of people that (currently) has 35 million rows. I need to know the best method for quickly searching these names. The current system (not designed by me) simply has the first and last name columns indexed and uses LIKE queries, with the additional option of using SOUNDEX (though I'm not sure this is actually used much). Performance has always been a problem with this system, so currently the searches are limited to 200 results (which still takes too long to run). So, I have a few questions:

    1. Does full-text indexing work well for proper names?
    2. If so, what is the best way to query proper names? (CONTAINS, FREETEXT, etc.)
    3. Is there some other system (like Lucene.Net) that would be better?

    Just for reference, I'm using Fluent NHibernate for data access, so methods that work well with that will be preferred. I'm using MS SQL 2008 currently.
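
    For a sense of what the full-text option looks like (table and column names here are illustrative, not from the post):

        -- Prefix search against a full-text index covering the name columns
        SELECT TOP 200 PersonId, FirstName, LastName
        FROM People
        WHERE CONTAINS((FirstName, LastName), '"smi*"');

        -- The current LIKE-based approach, for comparison
        SELECT TOP 200 PersonId, FirstName, LastName
        FROM People
        WHERE LastName LIKE 'smi%';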

    Read the article

  • Need a tool to search large structured text documents for words, phrases and related phrases

    - by pitosalas
    I have to keep up with structured documents containing things such as requests for proposals, government program reports, threat models and all kinds of things like that. They are in what I would call techno-legalese: highly structured, with section numbering and 3, 4 and 5 levels of nesting. All in English. I need a more efficient way to locate those paragraphs of nuggets that matter to me. So what I'd like is a kind of local document index/repository that would let me keep some standing queries and easily locate the sections in documents that talk about them. Here's an example: I'd like to load in 10 large PDF files, each of, say, 100 pages. Each PDF contains English text, formatted very nicely into paragraphs and sections. I'd like to specify that I am interested in "blogging platforms", "weaknesses in Ruby", "localization and internationalization", and ideally then look at a list that shows the section of text, the name of the document, and other information that seems to be related to and/or includes the words and phrases I specified. I am sure something like this exists. I would call it something like document indexing, document comprehension or structured searching.

    Read the article

  • Smart search/replace in Vim

    - by Amir Rachum
    I have a file with the following expressions:

        something[0]

    where instead of 0 there could be different numbers. I want to replace all these occurrences with

        somethingElse0

    where the number should be the same as in the expression I replaced. How do I do that?
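
    A sketch of the kind of substitution this calls for (assuming the bracketed part is always a number):

        :%s/something\[\(\d\+\)\]/somethingElse\1/g

    The \(\d\+\) group captures the number and \1 re-inserts it in the replacement.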

    Read the article

  • Google search box

    - by user343282
    I am working on a Google box, something like this: http://mytwentyfive.com/blog/wp-content/uploads/byme/Google%20Search%20Appliances.jpg. I am pointing the crawler to a folder containing HTML files. Before, the crawler was crawling the files and indexing them, but right now it finds the pattern or the folder yet does not follow any of the HTML files within the folder. I have tried everything I could and can't think of anything else. Can someone help? Thanks.

    Read the article

  • Recursive Binary Search Tree Insert

    - by Nick Sinklier
    So this is my first Java program, but I've done C++ for a few years. I wrote what I think should work, but in fact it does not. I had a stipulation of having to write a method for this call: tree.insertNode(value); where value is an int. I wanted to write it recursively, for obvious reasons, so I had to do a workaround:

        public void insertNode(int key) {
            Node temp = new Node(key);
            if (root == null)
                root = temp;
            else
                insertNode(temp);
        }

        public void insertNode(Node temp) {
            if (root == null)
                root = temp;
            else if (temp.getKey() <= root.getKey())
                insertNode(root.getLeft());
            else
                insertNode(root.getRight());
        }

    Thanks for any advice.
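
    For comparison, the usual recursive shape (a sketch under the assumption that Node exposes getters and setters for its key and children; it is not the poster's code) recurses on a subtree and reassigns the link on the way back up, so the new node actually gets attached:

        private Node insert(Node current, int key) {
            if (current == null)
                return new Node(key);                    // found the empty spot
            if (key <= current.getKey())
                current.setLeft(insert(current.getLeft(), key));
            else
                current.setRight(insert(current.getRight(), key));
            return current;                              // re-link this subtree in the parent
        }

        public void insertNode(int key) {
            root = insert(root, key);
        }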

    Read the article

  • Delphi Search files and directories fastest algorithm

    - by radu-barbu
    Hi, I'm using Delphi 7 and I need a solution to a big problem. Can someone provide me with a faster way of searching through files and folders than using FindFirst and FindNext? I also process the data for each file/folder (creation date, author, size, etc.) and it takes a lot of time... I've searched a lot in the WinAPI but probably haven't seen the best function to accomplish this. All the examples I've found for Delphi use FindFirst and FindNext... Also, I don't want to buy components or use free ones... Thanks in advance!

    Read the article

  • XPath ordered priority attribute search

    - by user94000
    I want to write an XPath that can return some link elements on an HTML DOM. The syntax is wrong, but here is the gist of what I want:

        //web:link[@text='Login' THEN_TRY @href='login.php' THEN_TRY @index=0]

    THEN_TRY is a made-up operator, because I can't find what operator(s) to use. If many links exist on the page for the given set of [attribute=name] pairs, the link which matches the most left-most attribute(s) should be returned instead of any others. For example, consider a case where the above example XPath finds 3 links that match any of the given attributes:

        link A: text='Sign In', href='Login.php', index=0
        link B: text='Login',   href='Signin.php', index=15
        link C: text='Login',   href='Login.php',  index=22

    Link C ranks as the best match because it matches the First and Second attributes. Link B ranks second because it only matches the First attribute. Link A ranks last because it does not match the First attribute; it only matches the Second and Third attributes. The XPath should return the best match, Link C. If more than one link were tied for "best match", the XPath should return the first best link that it found on the page.

    Read the article

  • Radial Grid Search Algorithm

    - by grey
    I'm sure there's a clean way to do this, but I'm probably not using the right keywords to find it. So let's say I have a grid. Starting from a position on the grid, return all of the grid coordinates that fall within a given distance. So I call something like:

        getCoordinates( currentPosition, distance )

    And for each coordinate, starting from the initial position, add all cardinal directions, and then add the spaces around those, and so forth, until the distance is reached. I imagine that on a grid this would look like a diamond.
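
    A small sketch of that diamond (using Manhattan distance, which is what the cardinal-direction expansion produces; the function name is illustrative):

        def get_coordinates(position, distance):
            x0, y0 = position
            cells = []
            for dx in range(-distance, distance + 1):
                remaining = distance - abs(dx)       # how far we can still go vertically
                for dy in range(-remaining, remaining + 1):
                    cells.append((x0 + dx, y0 + dy))
            return cells

        # get_coordinates((0, 0), 2) returns the 13 cells of a radius-2 diamond.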

    Read the article

  • Breadth first search all paths

    - by Amndeep7
    First of all, thank you for looking at this question. For a school assignment we're supposed to create a BFS algorithm and use it to do various things. One of these things is that we're supposed to find all of the paths between the root and the goal nodes of a graph. I have no idea how to do this, as I can't find a way to keep track of all of the alternate routes without also including copies/cycles. Here is my BFS code:

        def makePath(predecessors, last):
            return makePath(predecessors, predecessors[last]) + [last] if last else []

        def BFS1b(node, goal):
            Q = [node]
            predecessor = {node: None}
            while Q:
                current = Q.pop(0)
                if current[0] == goal:
                    return makePath(predecessor, goal)
                for subnode in graph[current[0]][2:]:
                    if subnode[0] not in predecessor:
                        predecessor[subnode[0]] = current[0]
                        Q.append(subnode[0])

    A conceptual push in the right direction would be greatly appreciated. tl;dr: How do I use BFS to find all of the paths between two nodes?
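
    One conceptual direction (a sketch, not the poster's code, and it assumes a plain adjacency-list dict rather than the graph format used above): instead of a single predecessor map, keep whole paths on the queue, so branching naturally produces one entry per alternate route.

        from collections import deque

        def bfs_all_paths(graph, root, goal):
            paths = []
            queue = deque([[root]])
            while queue:
                path = queue.popleft()
                node = path[-1]
                if node == goal:
                    paths.append(path)        # found one complete root-to-goal path
                    continue
                for neighbor in graph[node]:
                    if neighbor not in path:  # skip cycles within this particular path
                        queue.append(path + [neighbor])
            return paths

        # graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
        # bfs_all_paths(graph, 'A', 'D')  ->  [['A', 'B', 'D'], ['A', 'C', 'D']]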

    Read the article

  • Exploring search options for PHP

    - by Joshua
    I have an InnoDB table using numerous foreign keys, but we just want to look up some basic info out of it. I've done some research but am still lost.

    1) How can I tell if my host has Sphinx installed already? I don't see it as an option for the table storage engine (i.e. InnoDB, MyISAM).
    2) Is Zend_Search_Lucene responsive enough for AJAX functionality over millions of records?
    3) Mirror my InnoDB table with a MyISAM copy? Make every InnoDB transaction end with a write to the MyISAM table, then use 1:1 lookups? How would I do this automagically? This should make the MyISAM copy ACID-compliant and free(er) from corruption, no?
    4) PostgreSQL fulltext queries don't even look like SQL to me; I don't have time to learn a new SQL syntax, I need noob options.
    5) ????????????????????

    This is a high-volume site on a decently-equipped VPS. Thanks very much for any ideas.

    Read the article

  • Select distinct... in fulltext search

    - by lam3r4370
        <?php
        session_start();
        $user = $_GET['user'];
        $conn = mysql_connect("localhost", "...", "...");
        mysql_select_db("...");
        $sql = "SELECT filter FROM userfilter WHERE user='$user'";
        $mksql = mysql_query($sql);
        while ($row = mysql_fetch_assoc($mksql)) {
            $filter = $row['filter'];
            $sql2 = "SELECT DISTINCT * FROM rss WHERE MATCH(content,title) AGAINST ('$filter')";
            $mksql2 = mysql_query($sql2) or die(mysql_error());
            while ($rows = mysql_fetch_assoc($mksql2)) {
                echo .....
            }
        }
        ?>

    If I have two rows whose content contains the $filter, it outputs that content, but the output repeats. For example:

        title | content
        asd   | This is a sample content, number one
        das   | This is a sample content, number two

    And if my keywords are "sample" and "number", it outputs the title and the content twice. How do I prevent that?
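
    One way to avoid the repeats (a sketch, not the poster's code; it reuses the $mksql result from the query above): collect all of the user's filters first, then run a single full-text query so each matching row is fetched only once.

        <?php
        $filters = array();
        while ($row = mysql_fetch_assoc($mksql)) {
            $filters[] = mysql_real_escape_string($row['filter']);
        }
        // OR the terms together in one BOOLEAN MODE search, e.g. "sample number"
        $against = implode(' ', $filters);
        $sql2 = "SELECT DISTINCT * FROM rss
                 WHERE MATCH(content, title) AGAINST ('$against' IN BOOLEAN MODE)";
        $mksql2 = mysql_query($sql2) or die(mysql_error());
        while ($rows = mysql_fetch_assoc($mksql2)) {
            echo $rows['title'] . ' | ' . $rows['content'] . "\n";
        }
        ?>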

    Read the article

  • How to index and search .doc files

    - by Jared
    I have an application that needs to have .doc files uploaded to it. These documents should then be indexed, and the whole collection of documents should be searchable. This will run on a Windows server, without Word installed, using IIS and SQL Server, but I'd rather not be tied to SQL Server's full-text indexing. I was thinking of using Lucene.Net for the indexing part and was wondering what the best way to get the text out of the .doc files would be. I could probably extract the text by reading in the whole stream and then using a regex to pull out any regular characters, but that seems hefty and prone to error. I saw an article on using IFilters that sounds promising, but I thought I'd put this out there since it's not something I'm familiar with. P.S. If it matters, these .doc files will have mail-merge fields in them and there's no other current alternative to the .doc format.

    Read the article

  • C++ file input/output search

    - by Brian J
    Hi, I took the following code from a program I'm writing to check a user-generated string against a dictionary, as well as other validation. My problem is that although my dictionary file is referenced correctly, the program gives the default "No dictionary file" message. I can't see clearly what I'm doing wrong here; if anyone has any tips or pointers it would be appreciated. Thanks.

        //variables for checkWordInFile
        #define gC_FOUND 99
        #define gC_NOT_FOUND -99

        static bool certifyThat(bool condition, const char* error)
        {
            if(!condition) printf("%s", error);
            return !condition;
        }

        //method to validate a user generated password following password guidelines.
        void validatePass()
        {
            FILE *fptr;
            char password[MAX+1];
            int iChar,iUpper,iLower,iSymbol,iNumber,iTotal,iResult,iCount;

            //shows user password guidelines
            printf("\n\n\t\tPassword rules: ");
            printf("\n\n\t\t 1. Passwords must be at least 9 characters long and less than 15 characters. ");
            printf("\n\n\t\t 2. Passwords must have at least 2 numbers in them.");
            printf("\n\n\t\t 3. Passwords must have at least 2 uppercase letters and 2 lowercase letters in them.");
            printf("\n\n\t\t 4. Passwords must have at least 1 symbol in them (eg ?, $, £, %).");
            printf("\n\n\t\t 5. Passwords may not have small, common words in them eg hat, pow or ate.");

            //gets user password input
        get_user_password:
            printf("\n\n\t\tEnter your password following password rules: ");
            scanf("%s", &password);

            iChar   = countLetters(password,&iUpper,&iLower,&iSymbol,&iNumber,&iTotal);
            iUpper  = countLetters(password,&iUpper,&iLower,&iSymbol,&iNumber,&iTotal);
            iLower  = countLetters(password,&iUpper,&iLower,&iSymbol,&iNumber,&iTotal);
            iSymbol = countLetters(password,&iUpper,&iLower,&iSymbol,&iNumber,&iTotal);
            iNumber = countLetters(password,&iUpper,&iLower,&iSymbol,&iNumber,&iTotal);
            iTotal  = countLetters(password,&iUpper,&iLower,&iSymbol,&iNumber,&iTotal);

            if(certifyThat(iUpper >= 2, "Not enough uppercase letters!!!\n") ||
               certifyThat(iLower >= 2, "Not enough lowercase letters!!!\n") ||
               certifyThat(iSymbol >= 1, "Not enough symbols!!!\n") ||
               certifyThat(iNumber >= 2, "Not enough numbers!!!\n") ||
               certifyThat(iTotal >= 9, "Not enough characters!!!\n") ||
               certifyThat(iTotal <= 15, "Too many characters!!!\n"))
                goto get_user_password;

            iResult = checkWordInFile("dictionary.txt", password);
            if(certifyThat(iResult != gC_FOUND, "Password contains small common 3 letter word/s."))
                goto get_user_password;

            iResult = checkWordInFile("passHistory.txt", password);
            if(certifyThat(iResult != gC_FOUND, "Password contains previously used password."))
                goto get_user_password;

            printf("\n\n\n Your new password is verified ");
            printf(password);

            //writing password to passHistory file.
            fptr = fopen("passHistory.txt", "w"); // create or open the file
            for( iCount = 0; iCount < 8; iCount++)
            {
                fprintf(fptr, "%s\n", password[iCount]);
            }
            fclose(fptr);
            printf("\n\n\n");
            system("pause");
        }//end validatePass method

        int checkWordInFile(char * fileName, char * theWord)
        {
            FILE * fptr;
            char fileString[MAX + 1];
            int iFound = -99;

            //open the file
            fptr = fopen(fileName, "r");
            if (fptr == NULL)
            {
                printf("\nNo dictionary file\n");
                printf("\n\n\n");
                system("pause");
                return (0); // just exit the program
            }

            /* read the contents of the file */
            while( fgets(fileString, MAX, fptr) )
            {
                if( 0 == strcmp(theWord, fileString) )
                {
                    iFound = -99;
                }
            }
            fclose(fptr);
            return(0);
        }//end of checkWordInFile

    Read the article

  • Selectively search and replace certain lines using a regular expression

    - by eneveu
    I have a file containing a lot of SQL statements, such as:

        CREATE TABLE "USER" (
            "ID" INTEGER PRIMARY KEY,
            "NAME" CHARACTER VARYING(50) NOT NULL,
            "AGE" INTEGER NOT NULL
        );

        COPY "USER" (id, name, age) FROM stdin;
        1   Skywalker   19
        2   Kenobi      57

    I want the column names in the COPY statements to be uppercased and quoted:

        COPY "USER" ("ID", "NAME", "AGE") FROM stdin;

    Using sed, I found the following regexp:

        sed -r 's/([( ])(\w+)([,)])/\1"\U\2\E"\3/g'

    It does replace the column names, but it is not selective enough and replaces other words in the file:

        ~/test]$ sed -r 's/([( ])(\w+)([,)])/\1"\U\2\E"\3/g' star_wars_example
        CREATE TABLE "USER" (
            "ID" INTEGER PRIMARY "KEY",
            "NAME" CHARACTER VARYING("50")NOT "NULL",
            "AGE" INTEGER NOT NULL
        );
        COPY "USER" ("ID", "NAME", "AGE") FROM stdin;
        1   Skywalker   19
        2   Kenobi      57

    To avoid this problem, I want sed to only apply my regexp to the lines starting with COPY and ending with FROM stdin;. I have looked into lookahead/lookbehind, but they are not supported in sed. They seem to be supported in super-sed, but I am currently using Cygwin (Windows is mandatory here...) and it does not seem to be available in the package list. Is there a way to force sed to only consider specific lines? I've considered piping my file through grep before applying sed, but other lines would then disappear from the output. Am I missing something obvious? It would be great if the answer was easily applicable on a default Cygwin install. I guess I could try installing super-sed on Cygwin, but I'd like to know if there are more obvious ideas.
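
    For reference, sed can restrict a substitution to lines matched by an address pattern, which covers exactly this case (a sketch, assuming GNU sed as shipped with Cygwin):

        sed -r '/^COPY .* FROM stdin;$/ s/([( ])(\w+)([,)])/\1"\U\2\E"\3/g' star_wars_example

    Lines that do not match the leading /^COPY .* FROM stdin;$/ address are passed through unchanged.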

    Read the article

  • Best way to search for a saturation value in a sorted list

    - by AB Kolan
    A question from Math Battle. This particular question was also asked in one of my job interviews.

    "A monkey has two coconuts. It is fooling around by throwing coconuts down from the balconies of an M-storey building. The monkey wants to know the lowest floor from which a coconut breaks. What is the minimal number of attempts needed to establish that fact?"

    Conditions: if a coconut is broken, you cannot reuse it; you are left with only the other coconut.

    Possible approaches/strategies I can think of are: binary break-ups, and once you find the floor on which the coconut breaks, count upwards from the last lower index found; or windows/slices of smaller sets of floors, using binary break-up within the window/slice (but on the downside this would require a slicing algorithm of its own). Wondering if there are any other ways to do this.
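
    For reference, the standard analysis of this puzzle (a hedged note, not part of the original post): drop the first coconut from floor k, then floor k + (k-1), then k + (k-1) + (k-2), and so on; when it breaks, step up one floor at a time through the last gap with the second coconut. The worst case is then k attempts, which works whenever

        k + (k-1) + ... + 1 = k(k+1)/2 >= M

    so the answer is the smallest k with k(k+1)/2 >= M (for example, k = 14 for a 100-storey building, since 14*15/2 = 105 >= 100).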

    Read the article

  • findNode in binary search tree

    - by Weadadada Awda
    Does this look right? I mean, I am trying to implement the delete function.

        Node* BST::findNode(int tofind) {
            Node* node = new Node;
            node = root;
            while (node != NULL) {
                if (node->val == tofind) {
                    return node;
                } else if (tofind < node->val) {
                    node = node->left;
                } else {
                    node = node->right;
                }
            }
        }

    Here is the delete; it's not even close to done, but:

        void BST::Delete(int todelete) {
            // bool found = false;
            Node* toDelete = new Node();
            toDelete = findNode(todelete);
            if(toDelete->val != NULL) {
                cout << toDelete->val << endl;
            }
        }

    This causes a segmentation fault just running that, any ideas?
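
    A sketch of the two fixes this usually needs (an educated guess at the intent, not the poster's code): findNode should return NULL when the key is absent instead of falling off the end of the function, and Delete should check for NULL before dereferencing.

        Node* BST::findNode(int tofind) {
            Node* node = root;                 // no need to allocate a throwaway Node here
            while (node != NULL) {
                if (node->val == tofind)
                    return node;
                node = (tofind < node->val) ? node->left : node->right;
            }
            return NULL;                       // key not found
        }

        void BST::Delete(int todelete) {
            Node* toDelete = findNode(todelete);
            if (toDelete != NULL) {            // guard against a missing key
                cout << toDelete->val << endl;
            }
        }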

    Read the article

  • Binary Search Tree can't delete the root

    - by Ali Zahr
    Everything is working fine in this function, but the problem is that I can't delete the root; I couldn't figure out what the bug is here. I've traced the "else" part: it works fine until the return, but then it returns the old value and I don't know why. Please help!

        node *removeNode(node *Root, int key)
        {
            node *tmp = new node;
            if(key > Root->value)
                Root->right = removeNode(Root->right, key);
            else if(key < Root->value)
                Root->left = removeNode(Root->left, key);
            else if(Root->left != NULL && Root->right != NULL)
            {
                node *minNode = findNode(Root->right);
                Root->value = minNode->value;
                Root->right = removeNode(Root->right, Root->value);
            }
            else
            {
                tmp = Root;
                if(Root->left == NULL)
                    Root = Root->right;
                else if(Root->right == NULL)
                    Root = Root->left;
                delete tmp;
            }
            return Root;
        }
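
    One common cause of exactly this symptom (an assumption, since the call site isn't shown in the post): the function returns the new subtree root, so the caller has to assign that return value back, otherwise the tree keeps pointing at the deleted node.

        root = removeNode(root, key);   // not just: removeNode(root, key);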

    Read the article

  • Practicing inserting data into an array by using binary search, few problems

    - by HelpNeeder
    I'm trying to create a method which inserts elements into a sorted array, finding the insertion point with a binary search. The problem I am experiencing is that my code doesn't insert the data correctly, which means the output does not appear to be in order at all: the list is not organized, and data is added in the order it is inserted. Now, 2 questions: what am I doing wrong here, and how do I fix it?

        public void insertBinarySearch(long value) // put element into array
        {
            int j = 0;
            int lower = 0;
            int upper = elems - 1;
            int cur = 0;

            while (cur < elems)
            {
                curIn = (lower + upper) / 2;
                if(a[cur] < value)
                {
                    j = cur + 1;
                    break;
                }
                else if(a[cur] > value)
                {
                    j = cur;
                    break;
                }
                else
                {
                    if(a[cur] < value)
                        lower = cur + 1;
                    else
                        upper = cur - 1;
                }
            }

            for(int k = elems; k > j; k--)
                a[k] = a[k-1];

            a[j] = value;
            elems++;
        }
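
    For comparison, a sketch of how the binary-search insert is usually written (assuming the same a array and elems count; this is not the poster's code): the loop narrows lower/upper until they cross, and lower ends up as the insertion point.

        public void insertBinarySearch(long value)
        {
            int lower = 0;
            int upper = elems - 1;
            while (lower <= upper)
            {
                int mid = (lower + upper) / 2;
                if (a[mid] < value)
                    lower = mid + 1;      // insertion point is to the right of mid
                else
                    upper = mid - 1;      // insertion point is at mid or to its left
            }
            // shift the tail one slot to the right and drop the new value into the gap
            for (int k = elems; k > lower; k--)
                a[k] = a[k - 1];
            a[lower] = value;
            elems++;
        }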

    Read the article
