Search Results

Search found 21427 results on 858 pages for 'enterprise search'.


  • search array and get array key

    - by SoulieBaby
    Hi all, I have an array which I'd like to search for a value, retrieving the array key if the value exists, but I'm not sure how to go about doing that. Here's my array: Array ( [hours] => Array ( [0] => 5 [1] => 5 [2] => 6 [3] => 6 [4] => 8 [5] => 10 ) ) I'd like to search the hours array for 10, and if 10 exists in the array, I want its key (5) returned. I'm trying to do this dynamically, so the search value (10) will change, but I figure if I can get it working for the number 10, I can get it working with a variable.
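
    A minimal sketch of one way to do this with PHP's built-in array_search(), which returns the key of the first match, or false if nothing matches (the $data and $needle names are just for illustration):

        $data = array('hours' => array(5, 5, 6, 6, 8, 10));
        $needle = 10; // in practice this comes from a variable

        // Third argument enables strict (===) comparison
        $key = array_search($needle, $data['hours'], true);
        if ($key !== false) {
            echo "Found $needle at key $key\n"; // prints: Found 10 at key 5
        } else {
            echo "$needle not found\n";
        }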

    Read the article

  • Lucene.NET faceted search.

    - by Paul Knopf
    I found a great tutorial on performing a faceted search: http://www.devatwork.nl/articles/lucenenet/faceted-search-and-drill-down-lucenenet/ The article does not, however, explain how to retrieve the narrowed set of available attributes to filter on (for further drill-down). Let's say I am looking for planners that are red. When I perform the faceted search, I want to get back all the attributes still available to filter on within the red results. Then, when I add a "weekly format" filter, the attribute list should get smaller still, containing only the filters available within that narrowed group.

    Read the article

  • Having trouble using 'AND' in CONTAINSTABLE SQL SERVER FULL TEXT SEARCH

    - by Joshua
    I've been using full-text search for a while, but I cannot always get the most relevant results. Say I have a field with something like "An Overview of Pain Medicine 5/12/2006" and a user types "An Overview 5/12/2006", so we create a search like: '"An" AND "Overview" AND "5/12/2006"' - 0 results (bad) '"Overview" AND "5/12/2006"' - 1 result (good) The CONTAINSTABLE portion of my query: FROM ce_Activity A INNER JOIN CONTAINSTABLE(View_Activities, (Searchable), @Search) AS KeyTbl ON A.ActivityID = KeyTbl.[KEY] "Searchable" is a field that contains the activity title and start date (converted to a string) in one field, so it is all search friendly. Why would this happen?
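
    A likely culprit (an assumption, not confirmed in the question): "an" is on SQL Server's default noise-word/stopword list, and ANDing a stopword term into a CONTAINSTABLE query can yield zero matches. A hedged sketch of the workaround, which strips such words from the user's input before building the search string; the noise-word handling shown is illustrative:

        -- Build @Search from user input with known noise words removed,
        -- e.g. 'An Overview 5/12/2006' becomes:
        DECLARE @Search nvarchar(200);
        SET @Search = '"Overview" AND "5/12/2006"';

        SELECT A.*
        FROM ce_Activity A
        INNER JOIN CONTAINSTABLE(View_Activities, (Searchable), @Search) AS KeyTbl
            ON A.ActivityID = KeyTbl.[KEY];

    The filtering itself is easiest to do in application code against the same word list the server uses; SQL Server 2008 and later also offer the "transform noise words" server option so that noise words no longer force empty results.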

    Read the article

  • Android Market, Search results position Mystery.

    - by Paul G.
    How is an app's position in the Android Market search results determined? Is it as mysterious and complex as Google web search ranking? We obviously don't want to change any words in our app's title or description that would hurt our position. The same question applies not only to search results but also to the list shown when clicking on a Category in the Android Market - how is that order determined? Hopefully someone here can help. I would think that Google would have published at least some guidelines that could help, but I haven't found anything yet.

    Read the article

  • Have boost effect on lucene/compass field search.

    - by PeterP
    Hi there, In our Compass mapping, we're boosting "better" documents to push them up in the list of search results. Something like this: <boost name="boostFactor" default="1.0"/> <property name="name"><meta-data>name</meta-data></property> While this works fine for full-text search, it does not for a field search; that is, the boost is ignored when searching for something like name:Peter Is there any way to enable boosting for field searches? Thanks for your help, and sorry if this is a dumb question - I am new to Lucene/Compass. Best regards, Peter

    Read the article

  • Depth First Search Basics

    - by cam
    I'm trying to improve my current algorithm for the 8 Queens problem, and this is the first time I'm really dealing with algorithm design. I want to implement a depth-first search combined with a permutation of the different Y values described here: http://en.wikipedia.org/wiki/Eight_queens_puzzle#The_eight_queens_puzzle_as_an_exercise_in_algorithm_design I've implemented the permutation part to solve the problem, but I'm having a little trouble wrapping my mind around the depth-first search. It is described as a way of traversing a tree/graph, but does it also generate the tree? It seems this method would only be more efficient if the depth-first search generates the tree structure to be traversed, using some logic to generate only certain parts of the tree. So essentially, I would have to create an algorithm that generates a pruned tree of lexicographic permutations. I know how to implement the pruning logic, but I'm just not sure how to tie it in with the permutation generator, since I've been using next_permutation. Are there any resources that could help me with the basics of depth-first search or creating lexicographic permutations in tree form?
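
    The short answer is that DFS does not need a pre-built tree: the tree is implicit, generated on the fly as you extend partial placements, and pruning simply means never descending into a branch. A minimal Python sketch under that reading (independent of next_permutation):

        def solve(n=8):
            """Depth-first search over partial queen placements.

            cols[r] holds the column of the queen in row r. Each node of the
            implicit tree is a partial placement; its children are the safe
            one-row extensions, so pruning = not generating unsafe children.
            """
            solutions = []

            def dfs(cols):
                row = len(cols)
                if row == n:                      # leaf: every row has a queen
                    solutions.append(tuple(cols))
                    return
                for col in range(n):              # candidate children
                    if all(col != c and abs(col - c) != row - r
                           for r, c in enumerate(cols)):
                        dfs(cols + [col])         # descend into safe branches only

            dfs([])
            return solutions

        print(len(solve()))  # 92 solutions on the classic 8x8 board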

    Read the article

  • How to search for "R" materials?

    - by James Lavin
    "The Google" is very helpful... unless your language is called "R," in which case it spits out tons of irrelevant stuff. Anyone have any search engine tricks for "R"? There are some specialized websites, like those below, but how can you tell Google you mean "R" the language? If I'm searching for something specific, I'll use an R-specific term, like "cbind." Are there other such tricks? http://rweb.stat.umn.edu/R/doc/html/search/SearchEngine.html www.rseek.org http://search.r-project.org/ www.dangoldstein.com/search_r.html

    Read the article

  • Search Lucene with precise edit distances

    - by askullhead
    I would like to search a Lucene index with precise edit distances. For example, say there is a document with a field FIRST_NAME; I want all documents with first names that are exactly 1 edit away from, say, 'john'. I know that Lucene supports fuzzy searches (FIRST_NAME:john~) and takes a number between 0 and 1 to control the fuzziness. The problem (for me) is that this number does not directly translate to an edit distance. And when the values in the documents are short strings (fewer than 3 characters), the fuzzy search has difficulty finding them. For example, if there is a document with FIRST_NAME 'J' and I search for FIRST_NAME:I~0.0, I don't get anything back.
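
    For what it's worth, Lucene 4.0 and later replaced the 0-1 similarity with an explicit maximum edit distance, which is exactly what is being asked for here (and it sidesteps the short-string problem, since a 1-character value is still within 1 edit of another). A sketch against the Lucene 4+ Java API, using the field and term from the question:

        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.FuzzyQuery;

        public class EditDistanceQuery {
            public static void main(String[] args) {
                // Lucene 4.0+ takes a maximum Damerau-Levenshtein edit
                // distance directly (0, 1, or 2); query syntax: FIRST_NAME:john~1
                FuzzyQuery oneEditAway =
                        new FuzzyQuery(new Term("FIRST_NAME", "john"), 1);
                System.out.println(oneEditAway);
            }
        }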

    Read the article

  • Free (as in beer) Reverse Image Search API/Library/Service

    - by Bauer
    TinEye provides a great way to "reverse" search by image (i.e., upload/transload an image and get back multiple possible sources of that image as results). Since screen-scraping is messy and unreliable, I'm looking for a free API/library/web service that offers the same (or a similar) reverse image search function. At present, TinEye offers a commercial API, but since I'll only be using the service for small personal projects, it's hard to justify the cost (the lowest tier being 1,000 searches for $70 USD). Is anyone aware of such a free service? Or is there a simpler way to approach this (a programmatic solution in any language)? I understand that this is a tall order, and submitting the question is really only a last resort in the hope that there is some solution. An example image to search for is 99designs' StackOverflow logo competition entry by wolv.

    Read the article

  • automatic link submission in search engine

    - by Bharanikumar
    Hi, I found an open-source search engine through Google. This is my first search engine project. The engine's site has an "Add link" page where a visitor can submit his/her site; the admin then reviews the submitted links and indexes them later. That is the basic functionality of this open-source project. My question is how Google itself really fetches sites and produces its search results. I know one way: the owner submits his site to Google, and Google then crawls its contents. But one of my sites was never submitted to Google, yet Google crawled it anyway, and I don't know how. So here is my doubt: I want to add other sites automatically, without manual submission. Is that really possible, and what steps should I follow? Regards Bharanikumar

    Read the article

  • Implications of Fulltext Search over many columns

    - by Alex
    Hello, I have a really wide table which includes separate columns for billing address, shipping address, primary address, names, aliases, etc. (I can't normalize this table further, and that's not the question here anyway.) I'm implementing SQL Server full-text search, and I'm wondering whether I should limit the search to just the primary fields (the primary address and names, for example), or whether I can extend it across all columns without incurring too much of a performance or memory penalty. I've done some basic testing with 10,000 sample rows and it's quite fast, but I don't have much experience with full-text indexing, especially its dictionary internals, so I don't know whether the index will grow excessively over time, or whether there is anything else to consider. Thoughts?

    Read the article

  • Building a code search engine for java code in git repositories

    - by zero1
    I'm trying to build a Java code search engine. Apart from just searching for keywords, I would also like cross-referencing between classes to work, the way Eclipse's referencing works: click on anything to open its definition. A bonus would be something like search-all-usages-of-foo. I'm thinking of using Apache Solr to index the files and build the basic search, but I'm not sure how I'd do the cross-referencing part, since Solr doesn't understand Java code. Any suggestions on what I could use here? EDIT: I mainly want to index a lot of Java git repositories.

    Read the article

  • R storing a complex search as a string

    - by Tahnoon Pasha
    Hi, I'm working with a large data frame that I frequently need to subset by different combinations of variables. I'd like to be able to store the search in a string so I can just refer to the string when I want to see a subset.

    x = read.table(textConnection("
    cat1 cat2 value
    A    Z    1
    A    Y    2
    A    X    3
    B    N    2"), header=T, strip.white=T)

    search_string = "cat1 == 'A' & cat2 == 'Z'"
    with(x, subset(x, search_string))

    doesn't work. What I'm looking for is the result of a search like the one below:

    with(x, subset(x, cat1 == 'A' & cat2 == 'Z'))

    I'd prefer not to just create multiple subsetted data frames at the start if another solution exists. Is there a simple way to do what I'm trying?
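
    One common way to do this is to parse and evaluate the stored string inside subset(); a minimal sketch (eval/parse has well-known drawbacks, but it matches the store-the-search-as-a-string requirement):

        x <- read.table(textConnection("
        cat1 cat2 value
        A    Z    1
        A    Y    2
        A    X    3
        B    N    2"), header = TRUE, strip.white = TRUE)

        search_string <- "cat1 == 'A' & cat2 == 'Z'"

        # Turn the stored string into an unevaluated expression, then let
        # subset() evaluate it in the context of the data frame; this is
        # equivalent to subset(x, cat1 == 'A' & cat2 == 'Z')
        subset(x, eval(parse(text = search_string)))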

    Read the article

  • Redirecting search results into an ASP.NET page

    - by Arjun Vasudevan
    I've an ASP.NET page with a textbox and a choice of one of the following options: Wikipedia, Google, Dictionary.com, Flickr, Google Images. The user enters a word (or words) in the textbox and selects one of the choices. Depending on the choice selected, I wish to return the following. Wikipedia: the content of, and a link to, the page about the word. Google: the top 10 Google search results for the word. Flickr: a few images (at most 10) from a Flickr search. Google Images: a few images from a Google image search. Dictionary: the meaning of the word. How can I do that?
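
    Embedding the actual results requires each service's own API, but the lightest-weight approach is to redirect the user to each site's public search URL. A minimal ASP.NET (C#) sketch; txtQuery and ddlSource are assumed control names, and the URLs are the sites' ordinary search endpoints:

        // In the search button's click handler (needs: using System.Web;)
        string q = HttpUtility.UrlEncode(txtQuery.Text);
        string url;
        switch (ddlSource.SelectedValue)
        {
            case "wikipedia":  url = "https://en.wikipedia.org/wiki/Special:Search?search=" + q; break;
            case "google":     url = "https://www.google.com/search?q=" + q; break;
            case "googleimg":  url = "https://www.google.com/search?tbm=isch&q=" + q; break;
            case "flickr":     url = "https://www.flickr.com/search/?text=" + q; break;
            case "dictionary": url = "https://www.dictionary.com/browse/" + q; break;
            default:           url = "https://www.google.com/search?q=" + q; break;
        }
        Response.Redirect(url);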

    Read the article

  • Macro to search for a Variable, Date or Value

    - by John
    To whom it may concern: Good day. I have an Excel workbook with 10 sheets, in which rows 1 to 5 are headers. I would like to search for a value, a variable, or a date, as required. If a match is found, the matching rows should be copied to a new workbook. I need a button to run the macro, and the program should ask what to search for. If I enter a date, for example, the macro should search the whole workbook, and every matching row should be copied to a new workbook. Can anyone give me a solution for this?
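
    A minimal VBA sketch of the macro described, assuming the search value is typed into an InputBox and compared as displayed text; it walks every sheet with Find/FindNext and copies each hit's row to a new workbook (assign the macro to a button via Developer > Insert > Button):

        Sub SearchAllSheets()
            Dim ws As Worksheet, hit As Range, firstAddr As String
            Dim dest As Workbook, destRow As Long, what As String

            what = InputBox("Enter the value, variable or date to search for:")
            If what = "" Then Exit Sub

            Set dest = Workbooks.Add
            destRow = 1

            For Each ws In ThisWorkbook.Worksheets
                Set hit = ws.Cells.Find(What:=what, LookIn:=xlValues, LookAt:=xlPart)
                If Not hit Is Nothing Then
                    firstAddr = hit.Address
                    Do
                        If hit.Row > 5 Then   ' skip the 5 header rows
                            hit.EntireRow.Copy dest.Sheets(1).Cells(destRow, 1)
                            destRow = destRow + 1
                        End If
                        Set hit = ws.Cells.FindNext(hit)
                        If hit Is Nothing Then Exit Do
                    Loop While hit.Address <> firstAddr
                End If
            Next ws
        End Sub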

    Read the article

  • Sharepoint 2010 search: How to modify search query before submit (programmatically?)

    - by palgo
    Hi, I'm making a custom search box Web Part, similar to the OOTB Web Part from SharePoint (SearchBoxEx class). I'm interested in modifying the search query with additional text before it is submitted, based on a custom checkbox added on the Web Part. Any help on how I can achieve this? UPDATE: I've used the AppendToQuery and AppQueryTerms properties, but this will rewrite the text in the search box as well. I'm interested in passing the values "in the background", maybe as an extra parameter. Point is that the query modification should happen without the user seeing it explicitly.

    Read the article

  • Sql (partial) search in a list and get matched fields

    - by qods
    I have two tables. I want to look up each TermID from Table-A within the TermID values of Table-B and, where a match exists, get the result table shown below. The TermIDs have different lengths, and each Table-A TermID occurs somewhere inside a Table-B TermID, so there is no fixed prefix to search with a simple LIKE pattern.

    Table-A
    ID          TermID
    101256666   126006230
    101256586   126006231
    101256810   126006233
    101256841   126006238
    101256818   126006239
    101256734   1190226408
    101256809   1190226409
    101256585   1200096999
    101256724   1200096997
    101256748   1200097005

    Table-B
    TermNo   TermID
    14       8990010901190226366F
    16       8990010901190226374F
    15       8990010901190226382F
    18       8990010901190226408F
    19       8990010901190226416F
    11       8990010901200096981F
    10       8990010901200096999F
    12       8990010901200097005F
    13       8990010901200097013F
    17       8990010901260062337F

    As a result I want to get this table:

    A.ID        A.TermID     B.TermNo
    101256734   1190226408   18
    101256585   1200096999   10
    101256748   1200097005   12

    Regards,
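
    Because each Table-A TermID appears as a substring of a Table-B TermID, a join on a double-wildcard LIKE expresses exactly that containment, even though the lengths differ. A sketch in SQL Server syntax (use CONCAT or || for other dialects; the leading wildcard prevents index use, so this scans Table-B):

        SELECT a.ID, a.TermID, b.TermNo
        FROM TableA AS a
        JOIN TableB AS b
          ON b.TermID LIKE '%' + a.TermID + '%';

        -- Returns:
        -- 101256734  1190226408  18
        -- 101256585  1200096999  10
        -- 101256748  1200097005  12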

    Read the article

  • MySQL Full-text Search Workaround for innoDB tables

    - by Rob
    I'm designing an internal web application that uses MySQL as its backend database. The integrity of the data is crucial, so I am using the innoDB engine for its foreign key constraint features. I want to do a full-text search of one type of records, and that is not supported natively with innoDB tables. I'm not willing to move to MyISAM tables due to their lack of foreign key support and due to the fact that their locking is per table, not per row. Would it be bad practice to create a mirrored table of the records I need to search using the MyISAM engine and use that for the full-text search? This way I'm just searching a copy of the data and if anything happens to that data it's not as big of a deal because it can always be re-created. Or is this an awkward way of doing this that should be avoided? Thanks.
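
    This mirrored-table pattern is a common workaround, and triggers can keep the MyISAM copy in sync so the search data never drifts from the InnoDB source. A hedged sketch with invented table and column names (note that MySQL 5.6 later added native InnoDB full-text indexes, which makes the workaround unnecessary on newer versions):

        -- Searchable MyISAM mirror of the InnoDB `articles` table
        CREATE TABLE articles_search (
            article_id INT NOT NULL PRIMARY KEY,
            title      VARCHAR(255),
            body       TEXT,
            FULLTEXT KEY ft_title_body (title, body)
        ) ENGINE=MyISAM;

        -- Keep the mirror in sync with the source table
        CREATE TRIGGER articles_ai AFTER INSERT ON articles FOR EACH ROW
            INSERT INTO articles_search (article_id, title, body)
            VALUES (NEW.id, NEW.title, NEW.body);

        CREATE TRIGGER articles_au AFTER UPDATE ON articles FOR EACH ROW
            UPDATE articles_search SET title = NEW.title, body = NEW.body
            WHERE article_id = NEW.id;

        CREATE TRIGGER articles_ad AFTER DELETE ON articles FOR EACH ROW
            DELETE FROM articles_search WHERE article_id = OLD.id;

        -- Search the mirror, join back to the authoritative InnoDB rows
        SELECT a.*
        FROM articles a
        JOIN articles_search s ON s.article_id = a.id
        WHERE MATCH(s.title, s.body) AGAINST ('some search terms');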

    Read the article

  • Reverse Search Best Practices?

    - by edub
    I'm making an app that has a need for reverse searches. By this, I mean that users of the app will enter search parameters and save them; then, when any new objects get entered onto the system, if they match the existing search parameters that a user has saved, a notification will be sent, etc. I am having a hard time finding solutions for this type of problem. I am using Django and thinking of building the searches and pickling them using Q objects as outlined here: http://www.djangozen.com/blog/the-power-of-q The way I see it, when a new object is entered into the database, I will have to load every single saved query from the db and somehow run it against this one new object to see if it would match that search query... This doesn't seem ideal - has anyone tackled such a problem before?
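
    A sketch of the saved-search pattern described above, with assumed model names (SavedSearch, Item): pickle the Q object when the user saves it, and when a new Item arrives, evaluate each stored Q against a queryset restricted to that single object's primary key, so the database checks one row per saved query rather than loading every object:

        import pickle

        from django.conf import settings
        from django.db import models
        from django.db.models import Q


        class Item(models.Model):
            name = models.CharField(max_length=100)
            price = models.DecimalField(max_digits=8, decimal_places=2)


        class SavedSearch(models.Model):
            owner = models.ForeignKey(settings.AUTH_USER_MODEL,
                                      on_delete=models.CASCADE)
            pickled_query = models.BinaryField()  # the pickled Q object

            def matches(self, item):
                """Does this stored query match a single new Item?"""
                q = pickle.loads(self.pickled_query)
                # Restrict to one pk: the saved criteria run against one row
                return Item.objects.filter(q, pk=item.pk).exists()


        def notify_watchers(new_item):
            # e.g. called from a post_save signal handler for Item
            for search in SavedSearch.objects.iterator():
                if search.matches(new_item):
                    pass  # send the notification to search.owner

    Saving a search is then just SavedSearch.objects.create(owner=user, pickled_query=pickle.dumps(Q(price__lt=10))).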

    Read the article

  • fastest way to perform string search in general and in python

    - by Rkz
    My task is to search for a string or a pattern in a list of documents that are very short (say 200 characters long). However, there may be a million such documents. What is the most efficient way to perform this search? I was thinking of tokenizing each document and putting the words in a hashtable, with the word as the key and the document number as the value, thereby creating a bag of words. Then I would perform the word search and retrieve the list of documents containing that word. From what I can see, this will take O(n) operations. Is there any other way? Maybe without using hashtables? Also, is there a Python library or third-party package that can perform efficient searches?
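
    A sketch of that inverted-index idea in Python: building the index is O(total words), after which each word lookup is a constant-time average dictionary hit plus set intersections, rather than a scan of all million documents. (For anything production-grade, an embedded engine such as Whoosh, or SQLite's FTS tables, implements the same idea with persistence and ranking.)

        from collections import defaultdict

        def build_index(documents):
            """Map each word to the set of document ids containing it."""
            index = defaultdict(set)
            for doc_id, text in enumerate(documents):
                for word in text.lower().split():
                    index[word].add(doc_id)
            return index

        def search(index, query):
            """Ids of documents containing every query word (AND semantics)."""
            words = query.lower().split()
            if not words:
                return set()
            result = set(index.get(words[0], set()))
            for word in words[1:]:
                result &= index.get(word, set())
            return result

        docs = ["the quick brown fox", "a quick red dog", "the slow brown dog"]
        idx = build_index(docs)
        print(search(idx, "quick"))      # {0, 1}
        print(search(idx, "brown dog"))  # {2}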

    Read the article

  • Android search list, String

    - by NightSky
    Hey guys, what is the best way to search through my list of objects? They each expose a few strings - last name and first name, for example. Here's how I'm currently searching, but equalsIgnoreCase() needs to match the entire string, which I don't want. The search needs to match part of the string, like the contacts list on a phone, and ignore case. if (searchQ.equalsIgnoreCase(child.first_name)) { addChildToList(child); } I've tried contains() and startsWith(), for example, and they did not work. What's going on? Thanks! Cheers!
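
    contains() and startsWith() are case-sensitive, which is likely why they appeared not to work. Lower-casing both sides first gives a case-insensitive partial match; a short Java sketch using the question's field names (last_name is assumed to exist alongside first_name):

        String query = searchQ.toLowerCase();

        // Match anywhere in either name, ignoring case, like the
        // phone's contacts search
        if (child.first_name.toLowerCase().contains(query)
                || child.last_name.toLowerCase().contains(query)) {
            addChildToList(child);
        }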

    Read the article

  • In Chrome, is there a way to search only within my bookmarked sites?

    - by McGirl
    I'm not trying to search for a bookmark - I'm trying to search for a term that may appear on one of my bookmarked sites. For instance, if I wanted to search on the term "Obama" and restrict my query to the New York Times web site, I know I could type this into the main URL/Search bar in Chrome: obama site:nytimes.com I'm wondering if there's a way to change that so that it's effectively: obama site:any one of the sites in my Chrome bookmark folder Thanks in advance for your help!

    Read the article

  • "I'm Feeling Lucky"-style behavior in Windows 8.1 search?

    - by vorou
    Is it possible to make Windows 8.1 automatically select the most relevant search result, so that I could just press Enter to execute it? Right now, I have to press the down arrow each time to select it, and then press Enter. I occasionally forget to press the down arrow, and Windows opens that very useful Search Results window for me. This makes me sad, since I'm absolutely happy with the top search result nearly all the time. P.S.: I'm talking about the search in the Start menu (the menu which opens when I press the Win key).

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger, because the amount of data is much bigger - so much bigger that our approach in the previous post won't scale to meet the real-world needs.

    From our previous post, here are the challenges we need to conquer:

    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger - too big to fit into the normal R memory space, and too big to want to move across the network
    - The overall process needs to run fast - much faster than a single processor
    - The actual data needs to be kept secured - another reason not to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In this post, we will show how we moved from the sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept.

    Getting started with the Database

    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise: we had sample data, but we did not have the full customer data yet, so our database was empty. This was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The code below shows how we took our sample data SimpleMWRRData and turned it into a new Oracle database table called IRR_DATA via ore.create(), and how we can then access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. (In sql*plus, we could also check out our new IRR_DATA table.)

    At this point, we have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function against database data. As a first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, and called our IRR function. This worked - no SQL coding required!

    Going from Crawling to Walking

    Now that we had shown our R code working with database-resident data for a single account, we wanted to do the same for multiple accounts - in other words, to implement the split-apply-combine technique we discussed in the first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1

    Below is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
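
    A reconstruction of the calls just described; hedged, since the connection details and the SimpleMWRR() argument list are assumptions:

        library(ORE)
        ore.connect(user = "rquser", sid = "orcl", host = "dbserver",
                    password = "...", all = TRUE)

        # Turn the in-memory sample data into a database table, then use
        # it like a regular data.frame via the ORE transparency layer
        ore.create(SimpleMWRRData, table = "IRR_DATA")
        head(IRR_DATA)

        # Split IRR_DATA by ACCOUNT in the database, apply our function to
        # each group in an embedded R engine on the database server, and
        # combine the per-account results
        results <- ore.groupApply(IRR_DATA,
                                  INDEX = IRR_DATA$ACCOUNT,
                                  function(dat) {
                                    library(myIRR)
                                    SimpleMWRR(dat)
                                  },
                                  ore.connect = TRUE)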
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which it is invoked. What actually happens is that ore.groupApply uses the Oracle database to perform the work: the database splits the IRR_DATA table by ACCOUNT, sends the data for each account to an embedded R engine running on the database server to apply our R function, and then combines all the individual results from the calls to the R function.

    This is significant because the embedded R engine only needs to deal with the data for a single account at a time. Whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data from a single account in R memory - not the data for all of the accounts.

    Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA never leaves the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet.

    Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.

    From Walking to Running

    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run the same work from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof.

    Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, a bit of simple setup is needed: the Oracle Database needs to know the structure of the input table and the grouping column, which we define using the database's pipelined table function mechanisms. Once that setup is done for the IRR_DATA table, the next step is to define our R function to the database, via a call to ORE's rqScriptCreate.

    Now we can test it. The SQL used to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement, which the database uses to identify the columns and datatypes the R function is expected to return. The fourth argument is the column of the input table to split/group by. The final argument is the name of the R function as you defined it when you called rqScriptCreate(). A sketch of the registration and the call follows.
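
    A hedged sketch of what the registration and call might look like; the pipelined-function setup that defines irr_dataGroupEval is omitted here, and the output columns in the dummy select are assumptions:

        -- Register the R function in the database's R script repository
        BEGIN
          sys.rqScriptCreate('myIRRscript',
            'function(dat) { library(myIRR); SimpleMWRR(dat) }');
        END;
        /

        -- Input cursor, optional parameters, dummy select declaring the
        -- output shape, grouping column, and the registered script name
        SELECT *
        FROM table(irr_dataGroupEval(
               cursor(SELECT * FROM IRR_DATA),
               NULL,
               'SELECT CAST(''x'' AS VARCHAR2(100)) ACCOUNT, 1 IRR FROM dual',
               'ACCOUNT',
               'myIRRscript'));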
    The Real-World Results

    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.

    For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. Doing so with ORE is as simple as leveraging the Oracle Database Parallel Query features: in the SQL used in the customer proof, we put a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines.

    Watching this SQL in the Real-Time SQL Monitor when we ran it during the proof of concept, we observed the following:

    - The SQL completed in 110 seconds (1.8 minutes)
    - We calculated rates of return for 5 time periods for each of 911k accounts (the number of rows returned by the IRRSTAGEGROUPEVAL operation)
    - We accessed 103m rows of detailed cash flow/market value data (the number of rows returned by the IRR_STAGE2 operation)
    - We ran with 72 degrees of parallelism spread across 4 database servers
    - Most of our 110 seconds was spent in the "External Procedure call" event

    Doing the arithmetic on those numbers:

    - On average, we performed about 8,200 executions of our R function per second (911k accounts / 110s)
    - On average, each execution was passed about 110 rows of data (103m detail rows / 911k accounts)
    - On average, we did about 41,000 single-time-period rate of return calculations per second (each of the 8,200 executions of our R function covered 5 time periods)
    - On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s)

    R + Oracle R Enterprise: Best of R + Best of Oracle Database

    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained.

    This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
    In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article
