Search Results

Search found 57986 results on 2320 pages for 'breadth first search'.

Page 42/2320

  • php codeigniter MySQL search query

    - by kalafun
    I want to create a search query on a MySQL database that takes 5 different strings typed in by the user and matches them against 5 different table columns. For example, I have input fields like: first name, last name, address, post number, city. How should I query the database so that I don't always get either all the rows or none? My query is something like this: SELECT user_id, username FROM users WHERE a LIKE %?% AND b LIKE %?% AND c LIKE %?% AND d LIKE %?% AND e LIKE %?%; When I exchange the AND for OR I always get all the results, which makes sense, and when I use AND I get only the exact matches... Is there any function or statement that would help me with this?
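
    A common alternative to the all-or-nothing AND/OR choice is to rank rows by how many of the fields matched and drop the rows that match none. A minimal sketch of that idea (shown against an in-memory SQLite table so it runs standalone; on MySQL the '||' concatenation becomes CONCAT('%', ?, '%'), and the table and data here are made up):

        # Rank users by how many of the filled-in fields match, instead of
        # requiring all of them (AND) or accepting any row at all (OR).
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (user_id INTEGER, username TEXT, firstname TEXT, "
                     "lastname TEXT, address TEXT, post_number TEXT, city TEXT)")
        conn.execute("INSERT INTO users VALUES (1, 'jdoe', 'John', 'Doe', 'Main St 1', '1000', 'Oslo')")
        conn.execute("INSERT INTO users VALUES (2, 'asmith', 'Anna', 'Smith', 'High St 2', '2000', 'Bergen')")

        terms = {"firstname": "john", "city": "oslo"}   # only the fields the user filled in

        score = " + ".join(
            f"(CASE WHEN {col} LIKE '%' || ? || '%' THEN 1 ELSE 0 END)" for col in terms
        )
        sql = f"SELECT user_id, username, ({score}) AS relevance FROM users ORDER BY relevance DESC"
        rows = [r for r in conn.execute(sql, list(terms.values())) if r[2] > 0]
        print(rows)   # best matches first, rows matching no field are dropped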

    Read the article

  • Search server 2008 express working with WSS 3.0 (error when crawl second web application (website) s

    - by tberube
    My Search Server 2008 Express crawls 3 SharePoint servers and 1 Windows file server. The file server and 2 of the SharePoint servers crawl all content, master and sub sites, just fine. On the 3rd SharePoint server the default site crawls just fine. The second web application (content database) crawls the top site and all sub sites, BUT the subsites get the error below in the crawl log: "Deleted by the gatherer (The start address or content source that contained this item was deleted and hence this item was deleted.)" I have checked the security rights (OK) and the setting to crawl a SharePoint site (if that were wrong the top site would not work either). Last part: I can crawl 3 other SharePoint sites (web applications/other content databases) on the same server, so it seems to be just this one site...

    Read the article

  • Javascript search and replace sequence of characters that contain square brackets

    - by Ruth
    Hello all, I'm trying to search for '[EN]' in the string 'Nationality [EN] [ESP]'. I want to remove this from the string, so I'm using a replace method; code example below: var str = 'Nationality [EN] [ESP]'; var find = "[EN]"; var regex = new RegExp(find, "g"); alert(str.replace(regex, '')); Since [EN] is interpreted as a character set, this outputs the string 'Nationality [] [ESP]', but I want to remove the square brackets as well. I thought that I could escape them using \ but it didn't work. Any advice would be much appreciated.
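
    The usual fix is to escape the regex metacharacters before building the pattern; in JavaScript that means doubling the backslash in the string literal (new RegExp("\\[EN\\]", "g")), because a single \ is consumed by the string literal before the regex engine ever sees it. The same idea sketched in Python, whose re.escape does the escaping for you:

        import re

        s = "Nationality [EN] [ESP]"
        find = "[EN]"

        pattern = re.compile(re.escape(find))   # re.escape turns '[EN]' into '\[EN\]'
        print(pattern.pattern)                  # \[EN\]
        print(pattern.sub("", s))               # Nationality  [ESP]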

    Read the article

  • How do search engines see dynamic profiles?

    - by Lumpy
    Recently search engines have been able to index dynamic content on social networking sites. I would like to understand how this is done. Are there static pages created by a site like Facebook that update semi-frequently? Does Google attempt to store every possible user name? As I understand it, a page like www.facebook.com/username is not an actual file stored on disk but is shorthand for a query like 'select username from users' plus code that displays the information on the page. How does Google know about every user? This gets even more complicated when things like tweets are involved.

    Read the article

  • Search for string within text column in MySQL

    - by user94154
    I have a MySQL table with a column that stores XML as a string. I need to find all tuples where the XML column contains a given string of 6 characters. Nothing else matters -- all I need to know is whether this 6-character string is there or not, so it probably doesn't matter that the text is formatted as XML. Question: how can I search within MySQL? i.e. SELECT * FROM items WHERE items.xml [contains the text '123456'] Is there a way I can use the LIKE operator to do this? Thanks
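
    A LIKE with wildcards on both sides does answer "contains", at the cost of scanning every row. A parameterized sketch (demonstrated on an in-memory SQLite table so it runs as-is; the same '%...%' pattern works in MySQL, where a FULLTEXT index with MATCH ... AGAINST is the usual way to make such lookups fast):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE items (id INTEGER, xml TEXT)")
        conn.executemany("INSERT INTO items VALUES (?, ?)", [
            (1, "<item><code>123456</code></item>"),
            (2, "<item><code>999999</code></item>"),
        ])

        needle = "123456"
        # SQLite concatenates with '||'; in MySQL write LIKE CONCAT('%', %s, '%')
        rows = conn.execute(
            "SELECT * FROM items WHERE xml LIKE '%' || ? || '%'", (needle,)
        ).fetchall()
        print(rows)   # only the row whose xml contains '123456'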

    Read the article

  • PHP - Search array in array

    - by Anonymous2011
    I have been googling for the past hour straight and have tried many ways to search for an array within an array. My objective is to find a keyword in the URL, where the keywords are in a txt file. This is what I have so far - but it doesn't work. $file = "keywords.txt"; $open = fopen($file,'r'); $data = fread($open,filesize($file)); $data = explode(" ",$data); $url = (!empty($_SERVER['HTTPS'])) ? "https://".$_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'] : "http://".$_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI']; $url = parse_url($url); //parse the URL into an array foreach($data as $d) { if(strstr($d,$url)) { echo "yes"; } } This works WITHOUT the text file or array - but that's not what I want. I'd appreciate it if anyone can assist me.
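
    Two likely culprits in the snippet above: PHP's strstr($haystack, $needle) takes the haystack first, so strstr($d, $url) looks for the URL inside the keyword rather than the other way round, and after parse_url() the $url variable is an array, not a string. A small Python sketch of the intended logic (the keywords file and URL below are illustrative stand-ins):

        # Does the current URL contain any keyword from a text file?
        with open("keywords.txt", "w") as fh:          # stand-in for the real keywords file
            fh.write("widgets gadgets sale")

        url = "http://example.com/shop/red-widgets?page=2"   # stand-in for the request URL

        with open("keywords.txt") as fh:
            keywords = fh.read().split()

        matches = [kw for kw in keywords if kw and kw.lower() in url.lower()]
        if matches:
            print("yes:", matches)   # ['widgets']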

    Read the article

  • Faceted search with Solr on Windows

    - by Dr.NETjes
    With over 10 million hits a day, funda.nl is probably the largest ASP.NET website which uses Solr on a Windows platform. While all our data (i.e. real estate properties) is stored in SQL Server, we're using Solr 1.4.1 to return the faceted search results as fast as we can. And yes, Solr is very fast. We did some heavy stress testing on our Solr service, which allowed us to do over 1,000 req/sec on a single 64-bit Solr instance; and that's including converting search URLs to Solr HTTP queries and deserializing Solr's result XML back to .NET objects! Let me tell you about faceted search and how to integrate Solr in a .NET/Windows environment. I'll bet it's easier than you think :-)

    What is faceted search? Faceted search is the clustering of search results into categories, allowing users to drill into search results. By showing the number of hits for each facet category, users can easily see how many results match that category. If you're still a bit confused, this example from CNET explains it all.

    The SQL solution for faceted search
    Our ("pre-Solr") solution for faceted search was done by adding a lot of redundant columns to our SQL tables and doing a COUNT(...) for each of those columns. So if a user was searching for real estate properties in the city 'Amsterdam', our facet query would be something like:

        SELECT COUNT(hasGarden), COUNT(has2Bathrooms), COUNT(has3Bathrooms), COUNT(etc...)
        FROM Houses
        WHERE city = 'Amsterdam'

    While this solution worked fine for a couple of years, it wasn't very easy for developers to add new facets. And also, performing COUNTs on all matched rows only performs well if you have a limited number of rows in a table (i.e. less than a million).

    Enter Solr
    "Solr is an open source enterprise search server based on the Lucene Java search library, with XML/HTTP and JSON APIs, hit highlighting, faceted search, caching, replication, and a web administration interface." (quoted from Wikipedia's page on Solr)
    Solr isn't a database, it's more like a big index. Every time you upload data to Solr, it will analyze the data and create an inverted index from it (like the index pages of a book). This way Solr can look up data very quickly. Explaining the inner workings of Solr is beyond the scope of this post, but if you want to learn more, please visit the Solr Wiki pages.

    Getting faceted search results from Solr is very easy; first let me show you how to send an HTTP query to Solr:

        http://localhost:8983/solr/select?q=city:Amsterdam

    This will return an XML document containing the search results (in this example only three houses in the city of Amsterdam):

        <response>
          <result name="response" numFound="3" start="0">
            <doc>
              <long name="id">3203</long>
              <str name="city">Amsterdam</str>
              <str name="street">Keizersgracht</str>
              <int name="numberOfBathrooms">2</int>
            </doc>
            <doc>
              <long name="id">3205</long>
              <str name="city">Amsterdam</str>
              <str name="street">Vondelstraat</str>
              <int name="numberOfBathrooms">3</int>
            </doc>
            <doc>
              <long name="id">4293</long>
              <str name="city">Amsterdam</str>
              <str name="street">Wibautstraat</str>
              <int name="numberOfBathrooms">2</int>
            </doc>
          </result>
        </response>

    By adding a facet query part for the field "numberOfBathrooms", Solr will return the facets for this particular field. We will see that there's one house in Amsterdam with three bathrooms and two houses with two bathrooms.

        http://localhost:8983/solr/select?q=city:Amsterdam&facet=true&facet.field=numberOfBathrooms

    The complete XML response from Solr now looks like:

        <response>
          <result name="response" numFound="3" start="0">
            <doc>
              <long name="id">3203</long>
              <str name="city">Amsterdam</str>
              <str name="street">Keizersgracht</str>
              <int name="numberOfBathrooms">2</int>
            </doc>
            <doc>
              <long name="id">3205</long>
              <str name="city">Amsterdam</str>
              <str name="street">Vondelstraat</str>
              <int name="numberOfBathrooms">3</int>
            </doc>
            <doc>
              <long name="id">4293</long>
              <str name="city">Amsterdam</str>
              <str name="street">Wibautstraat</str>
              <int name="numberOfBathrooms">2</int>
            </doc>
          </result>
          <lst name="facet_fields">
            <lst name="numberOfBathrooms">
              <int name="2">2</int>
              <int name="3">1</int>
            </lst>
          </lst>
        </response>

    Trying Solr for yourself
    To run Solr on your local machine and experiment with it, you should read the Solr tutorial. This tutorial really only takes 1 hour, in which you install Solr, upload sample data and get some query results. And yes, it works on Windows without a problem. Note that in the Solr tutorial you're using Jetty as a Java Servlet Container (that's why you must start it using "java -jar start.jar"). In our environment we prefer to use Apache Tomcat to host Solr, which installs like a Windows service and works more like .NET developers expect. See the SolrTomcat page.

    Some best practices for running Solr on Windows:
    - Use the 64-bit version of Tomcat. In our tests, this doubled the req/sec we were able to handle!
    - Use a .NET XmlReader to convert Solr's XML output stream to .NET objects. Don't use XPath; it won't scale well.
    - Use filter queries (the "fq" parameter) instead of the normal "q" parameter where possible. Filter queries are cached by Solr and will speed up Solr's response time (see FilterQueryGuidance).

    In my next post I'll talk about how to keep Solr's indexed data in sync with the data in your SQL tables. Timestamps / rowversions will help you out here!
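
    For readers who want to poke at a facet query before wiring up any .NET code, here is a minimal sketch that issues the same HTTP request from Python and prints the facet counts. It assumes a local Solr instance at localhost:8983 with the city and numberOfBathrooms fields used in the examples above; wt=json simply asks Solr for JSON instead of the XML shown earlier:

        import json
        from urllib.request import urlopen
        from urllib.parse import urlencode

        params = urlencode({
            "q": "city:Amsterdam",
            "facet": "true",
            "facet.field": "numberOfBathrooms",
            "wt": "json",                      # JSON instead of the default XML
        })
        with urlopen("http://localhost:8983/solr/select?" + params) as resp:
            data = json.load(resp)

        print("hits:", data["response"]["numFound"])
        # facet_fields arrive as a flat [value, count, value, count, ...] list
        counts = data["facet_counts"]["facet_fields"]["numberOfBathrooms"]
        for value, count in zip(counts[::2], counts[1::2]):
            print(f"{value} bathrooms: {count}")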

    Read the article

  • Google tweets – Now search twitter archives using Google

    - by samsudeen
    Google has launched a Twitter archive service which allows you to search tweets in real time as well as in its huge public archive (remember, Twitter crossed its 10 billionth tweet last month). The search results are displayed as tweets with the Twitter logo. To explore the Twitter search, go to the Google.com homepage, select “Show options” on the search results page, then select “Updates.” The search is similar to a regular Google search, with options to dig through the tweets by timeframe. You can explore results by zooming in on a particular time range or date. In addition to the time chart, it also displays the relative volume of activity on Twitter about the topic; as you can see, there is a spike about the GSLV launch after 3 PM today. There is also a shortcut link “Now” in the left corner which displays the latest results on the topics searched. The tweets also get refreshed automatically. Considering the huge volume of activity on Twitter (50 million messages per day), the archive is only going to get bigger. By providing such a feature, Google has once again proved it is way ahead of others in search.

    Read the article

  • Finding if a Binary Tree is a Binary Search Tree

    - by dharam
    Today I had an interview where I was asked to write a program which takes a Binary Tree and returns true if it is also a Binary Search Tree, otherwise false. My Approach 1: Perform an inorder traversal and store the elements in O(n) time. Now scan through the array/list of elements and check if the element at the ith index is greater than the element at the (i+1)th index. If such a condition is encountered, return false and break out of the loop. (This takes O(n) time.) At the end return true. But this gentleman wanted me to provide an efficient solution. I tried but I was unsuccessful, because to find out if it is a BST I have to check each node. Moreover he was pointing me to think about recursion. My Approach 2: A BT is a BST if for any node N, N.left < N and N.right > N, the inorder successor of the left node of N is less than N, the inorder successor of the right node of N is greater than N, and the left and right subtrees are BSTs. But this is going to be complicated and the running time doesn't seem to be good. Please help if you know any optimal solution. Thanks.
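
    For what it's worth, one common O(n) answer is a single recursive pass that tightens the allowed (low, high) range as it descends, so every node is visited exactly once. A sketch (the node layout and test trees are made up for illustration):

        # A binary tree is a BST iff every node's key lies inside the range
        # implied by its ancestors; recurse with tightened bounds.
        class Node:
            def __init__(self, key, left=None, right=None):
                self.key, self.left, self.right = key, left, right

        def is_bst(node, low=float("-inf"), high=float("inf")):
            if node is None:
                return True
            if not (low < node.key < high):
                return False
            return (is_bst(node.left, low, node.key) and
                    is_bst(node.right, node.key, high))

        good = Node(4, Node(2, Node(1), Node(3)), Node(6))
        bad = Node(4, Node(2, Node(1), Node(5)), Node(6))   # 5 hides in 4's left subtree
        print(is_bst(good), is_bst(bad))   # True False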

    Read the article

  • rails search nested set (categories and sub categories)

    - by bob
    Hello, I am using the http://github.com/collectiveidea/awesome_nested_set awesome_nested_set plugin and currently, if I choose a sub category as my category_id for an item, I can not search by its parent. Category.parent, Category.child: I choose Category.child as the category that my item is in, so now my item has a category_id of 4 stored in it. If I go to a page in my Rails application, let's say the Category page, and I am on Category.parent's page, I want to show products that have the category_ids of all the descendants as well. So ideally I want a find method that can take the descendants into account. You can get the descendants of a root by calling root.descendants (a built-in plugin method). How would I go about making a find query that gets the descendants of a root, instead of what it's doing now, which is bringing up nothing unless the product has the specific category_id of Category.parent? I hope I am being clear here. I either need to figure out a way to create a find method or named_scope that can query and return an array of objects whose ids correspond to the descendants of a root, OR, if I have any other options, what are they? I thought about creating a field in my products table like parent_id which can keep track of the parent, so I can then create two named scopes, one finding the parent stuff and one finding the child stuff, and chain them. I know I can create a named scope for each child and chain them together for multiple children, but this seems a very tedious process and also, if you add more children, you would need to specify more named scopes.
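
    The usual shape of the answer is to collect the ids of the category plus all of its descendants and filter with a single IN clause, rather than chaining one scope per child. A language-neutral sketch of that idea, reading the nested-set lft/rgt columns directly (all of the data below is invented):

        # Nested-set rule: a node's descendants are exactly the rows whose
        # lft falls between the node's own lft and rgt.
        categories = [
            # (id, name, lft, rgt)
            (1, "parent", 1, 6),
            (2, "child-a", 2, 3),
            (3, "child-b", 4, 5),
        ]
        products = [
            # (id, name, category_id)
            (10, "widget", 2),
            (11, "gadget", 3),
            (12, "gizmo", 9),   # belongs to an unrelated category
        ]

        def ids_in_subtree(root_id):
            _, _, lft, rgt = next(c for c in categories if c[0] == root_id)
            return {c[0] for c in categories if lft <= c[2] <= rgt}

        subtree = ids_in_subtree(1)                        # {1, 2, 3}
        hits = [p for p in products if p[2] in subtree]    # everything under 'parent'
        print(hits)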

    Read the article

  • shell scripting: search/replace & check file exist

    - by johndashen
    I have a perl script (or any executable) E which will take a file foo.xml and write a file foo.txt. I use a Beowulf cluster to run E for a large number of XML files, but I'd like to write a simple job server script in shell (bash) which doesn't overwrite existing txt files. I'm currently doing something like #!/bin/sh PATTERN="[A-Z]*0[1-2][a-j]"; # this matches foo in all cases todo=`ls *.xml | grep $PATTERN -o`; isdone=`ls *.txt | grep $PATTERN -o`; whatsleft=todo - isdone; # what's the unix magic? #tack on the .xml prefix with sed or something #and then call the job server; jobserve E "$whatsleft"; and then I don't know how to get the difference between $todo and $isdone. I'd prefer using sort/uniq to something like a for loop with grep inside, but I'm not sure how to do it (pipes? temporary files?) As a bonus question, is there a way to do lookahead search in bash grep? To clarify: the simplest way to do what I'm asking is (in pseudocode) for i in `/bin/ls *.xml` do replace xml suffix with txt if [that file exists] add to whatsleft list end done
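
    The classic shell answer is to strip the extensions, sort both name lists, and keep only the names present in the first one (comm -23 prints the lines unique to the first of two sorted inputs). The same set difference is a one-liner in most scripting languages; a small Python sketch of the idea (paths are illustrative):

        # Which .xml inputs have no matching .txt output yet?
        from pathlib import Path

        todo = {p.stem for p in Path(".").glob("*.xml")}
        done = {p.stem for p in Path(".").glob("*.txt")}

        whats_left = sorted(todo - done)                # plain set difference
        print([name + ".xml" for name in whats_left])   # arguments for the job server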

    Read the article

  • What is the correct way to implement a massive hierarchical, geographical search for news?

    - by Philip Brocoum
    The company I work for is in the business of sending press releases. We want to make it possible for interested parties to search for press releases based on a number of criteria, the most important being location. For example, someone might search for all news sent to New York City, Massachusetts, or ZIP code 89134, sent from a governmental institution, under the topic of "traffic". Or whatever. The problem is, we've sent, literally, hundreds of thousands of press releases. Searching is slow and complex. For example, a press release sent to Queens, NY should show up in the search I mentioned above even though it wasn't specifically sent to New York City, because Queens is a subset of New York City. We may also want to implement "and" and "or" and negation and text search to the query to create complex searches. These searches also have to be fast enough to function as dynamic RSS feeds. I really don't know anything about search theory, or how it's properly done. The way we are getting by right now is using a data mart to store the locations the releases were sent to in a single table. However, because of the subset thing mentioned above, the data mart is gigantic with millions of rows. And we haven't even implemented cities yet, and there are about 50,000 cities in the United States, which will exponentially increase the size of the data mart by so much I'm afraid it just won't work anymore. Anyway, I realize this is not a simple question and there won't be a "do this" answer. However, I'm hoping one of you can point me in the right direction where I can learn about how massive searches are done? Because I really know nothing about it. And such a search engine is turning out to be incredibly difficult to make. Thanks! I know there must be a way because if Google can search the entire internet we must be able to search our own database :-)

    Read the article

  • average case running time of linear search algorithm

    - by Brahadeesh
    Hi all. I am trying to derive the average-case running time of the deterministic linear search algorithm. The algorithm searches for an element x in an unsorted array A in the order A[1], A[2], A[3], ..., A[n]. It stops when it finds the element x or proceeds until it reaches the end of the array. I searched on Wikipedia and the answer given was (n+1)/(k+1), where k is the number of times x is present in the array. I approached it in another way and am getting a different answer. Can anyone please give me the correct proof and also let me know what's wrong with my method? E(T) = 1*P(1) + 2*P(2) + 3*P(3) + ... + n*P(n), where P(i) is the probability that the algorithm runs for 'i' time (i.e. compares 'i' elements). P(i) = (n-i)C(k-1) * (n-k)! / n!. Here, (n-i)C(k-1) is (n-i) choose (k-1). As the algorithm has reached the ith step, the rest of the k-1 x's must be in the last n-i elements, hence (n-i)C(k-1). (n-k)! is the total number of ways of arranging the remaining non-x numbers, and n! is the total number of ways of arranging the n elements in the array. I am not getting (n+1)/(k+1) on simplifying.
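
    One quick way to sanity-check the Wikipedia figure against your own derivation is to simulate the process: place k copies of x uniformly at random in an array of length n and average the number of comparisons until the first hit. A small sketch (the model and the numbers are illustrative):

        import random

        def trial(n, k):
            arr = [0] * n
            for pos in random.sample(range(n), k):   # k positions hold x
                arr[pos] = 1
            for i, v in enumerate(arr, start=1):
                if v == 1:
                    return i                         # comparisons used
            return n

        n, k, runs = 20, 3, 200_000
        avg = sum(trial(n, k) for _ in range(runs)) / runs
        print(avg, (n + 1) / (k + 1))   # the two numbers should be close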

    Read the article

  • Python SQLite FTS3 alternatives?

    - by Mike Cialowicz
    Are there any good alternatives to SQLite + FTS3 for Python? I'm iterating over a series of text documents, and would like to categorize them according to some text queries. For example, I might want to know if a document mentions the words "rating" or "upgraded" within three words of "buy." The FTS3 syntax for this query is the following: (rating OR upgraded) NEAR/3 buy That's all well and good, but if I use FTS3, this operation seems rather expensive. The process goes something like this: # create an SQLite3 db in memory conn = sqlite3.connect(':memory:') c = conn.cursor() c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)') conn.commit() Then, for each document, do something like this: # insert the document text into the fts table, so I can run a query c.execute('insert into fts(content) values (?)', (content,)) conn.commit() # execute my FTS query here, look at the results, etc # remove the document text from the fts table before working on the next document c.execute('delete from fts') conn.commit() This seems rather expensive to me. The other problem I have with SQLite FTS is that it doesn't appear to work with Python 2.5.4: the 'CREATE VIRTUAL TABLE' syntax is unrecognized. This means that I'd have to upgrade to Python 2.6, which means re-testing numerous existing scripts and programs to make sure they work under 2.6. Is there a better way? Perhaps a different library? Something faster? Thank you.

    Read the article

  • PHP: If no Results - Split the Searchrequest and Try to find Parts of the Search

    - by elmaso
    Hello, I want to split the search request into parts if there's nothing to be found. Example: "nelly furtado ft. jimmy jones" - no results - then try to find with nelly, furtado, jimmy or jones. I have an API URL, and that's the difficult part. Here are some of the actual snippets: $query = urlencode (strip_tags ($_GET[search])); and $found = '0'; if ($source == 'all') { if (!($res = @get_url ('http://api.example.com/?key=' . $API . '&phrase=' . $query . ' . '&sort=' . $sort))) { exit ('<error>Cannot get requested information.</error>'); ; } How can I put an else branch in this snippet, so that if nothing is found it takes the first word, or the second word? Is this possible? Or maybe you can tell me where I can read up on this? Thank you!!
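
    One common shape for this kind of fallback is: query the full phrase first, and only if nothing comes back, retry word by word and merge the results. In the PHP above that means wrapping the API call in a function and looping over explode(' ', $query) when $res is empty. A Python sketch of the control flow (search_api below is a made-up stand-in for the real API call):

        def search_api(phrase):
            # hypothetical stand-in for the real HTTP request
            fake_index = {"nelly": ["result A"], "jones": ["result B"]}
            return fake_index.get(phrase.lower(), [])

        def search_with_fallback(query):
            results = search_api(query)
            if results:                      # full phrase matched, done
                return results
            merged = []
            for word in query.replace("ft.", " ").split():
                merged.extend(r for r in search_api(word) if r not in merged)
            return merged

        print(search_with_fallback("nelly furtado ft. jimmy jones"))
        # the full phrase misses, the per-word retries find ['result A', 'result B']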

    Read the article

  • Binary Search Tree - Postorder logic

    - by daveb
    I am looking at implementing code to work with a binary search tree. Before I do this I want to verify my input data in postorder and preorder, but I am having trouble working out what the following numbers would be in each order. I have the numbers 4, 3, 14, 8, 1, 15, 9, 5, 13, 10, 2, 7, 6, 12, 11, which I intend to insert into an empty binary search tree in that order. The order I arrived at for the numbers in POSTORDER is 2, 1, 6, 3, 7, 11, 12, 10, 9, 8, 13, 15, 14, 4. Have I got this right? I was wondering if anyone here would be able to kindly verify whether the postorder sequence I came up with is indeed the correct sequence for my input, i.e. doing left subtree, right subtree and then root. The order I got for preorder (visit root, do left subtree, do right subtree) is 4, 3, 1, 2, 5, 6, 14, 8, 7, 9, 10, 12, 11, 15, 13. I can't be certain I got this right. Very grateful for any verification. Many thanks.
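
    Rather than checking the sequences by hand, it is easy to build the tree from the stated insertion order and let the traversals speak for themselves; a small sketch:

        # Build a BST from the insertion order given above, then print the
        # preorder and postorder walks so the hand-worked answers can be checked.
        class Node:
            def __init__(self, key):
                self.key, self.left, self.right = key, None, None

        def insert(root, key):
            if root is None:
                return Node(key)
            if key < root.key:
                root.left = insert(root.left, key)
            else:
                root.right = insert(root.right, key)
            return root

        def preorder(n):
            return [] if n is None else [n.key] + preorder(n.left) + preorder(n.right)

        def postorder(n):
            return [] if n is None else postorder(n.left) + postorder(n.right) + [n.key]

        root = None
        for key in [4, 3, 14, 8, 1, 15, 9, 5, 13, 10, 2, 7, 6, 12, 11]:
            root = insert(root, key)

        print("preorder: ", preorder(root))
        print("postorder:", postorder(root))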

    Read the article

  • how do i filter my lucene search results?

    - by Andrew Bullock
    Say my requirement is "search for all users by name, who are over 18". If I were using SQL, I might write something like: Select * from [Users] Where ([firstname] like '%' + @searchTerm + '%' OR [lastname] like '%' + @searchTerm + '%') AND [age] >= 18 However, I'm having difficulty translating this into Lucene.net. This is what I have so far: var parser = new MultiFieldQueryParser({ "firstname", "lastname"}, new StandardAnalyser()); var luceneQuery = parser.Parse(searchterm) var query = FullTextSession.CreateFullTextQuery(luceneQuery, typeof(User)); var results = query.List<User>(); How do I add in the "where age >= 18" bit? I've heard about .SetFilter(), but this only accepts LuceneQueries, and not IQueries. If SetFilter is the right thing to use, how do I make the appropriate filter? If not, what do I use and how do I do it? Thanks! P.S. This is a vastly simplified version of what I'm trying to do, for clarity; my WHERE clause is actually a lot more complicated than shown here. In reality I need to check whether ids exist in subqueries and check a number of unindexed properties. Any solutions given need to support this. Thanks

    Read the article

  • Deletion procedure for a Binary Search Tree

    - by Metz
    Consider the deletion procedure on a BST when the node to delete has two children. Let's say I always replace it with the node holding the minimum key in its right subtree. The question is: is this procedure commutative? That is, does deleting x and then y give the same result as deleting first y and then x? I think the answer is no, but I can't find a counterexample, nor figure out any valid reasoning. EDIT: Maybe I've got to be clearer. Consider the transplant(node x, node y) procedure: it replaces x with y (and its subtree). So, if I want to delete a node (say x) which has two children, I replace it with the node holding the minimum key in its right subtree: y = minimum(x.right) transplant(y, y.right) // extracts the minimum (it doesn't have a left child) y.right = x.right y.left = x.left transplant(x,y) The question is how to prove the procedure above is not commutative.
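
    One way to settle the question without a clever argument is brute force: build small BSTs from random insertion orders, delete x then y and y then x using exactly the successor-replacement rule described above, and compare the resulting tree shapes. A sketch (delete() below implements the "replace with the minimum of the right subtree" policy; the key ranges and trial count are arbitrary):

        import random

        class Node:
            def __init__(self, key):
                self.key, self.left, self.right = key, None, None

        def insert(root, key):
            if root is None:
                return Node(key)
            if key < root.key:
                root.left = insert(root.left, key)
            else:
                root.right = insert(root.right, key)
            return root

        def delete(root, key):
            # standard BST delete; the two-child case copies the minimum of
            # the right subtree into the node, as described in the question
            if root is None:
                return None
            if key < root.key:
                root.left = delete(root.left, key)
            elif key > root.key:
                root.right = delete(root.right, key)
            else:
                if root.left is None:
                    return root.right
                if root.right is None:
                    return root.left
                succ = root.right
                while succ.left:
                    succ = succ.left
                root.key = succ.key
                root.right = delete(root.right, succ.key)
            return root

        def shape(n):
            return None if n is None else (n.key, shape(n.left), shape(n.right))

        def build(keys):
            root = None
            for k in keys:
                root = insert(root, k)
            return root

        random.seed(0)
        for _ in range(10_000):
            keys = random.sample(range(10), random.randint(3, 7))
            x, y = random.sample(keys, 2)
            a = shape(delete(delete(build(keys), x), y))
            b = shape(delete(delete(build(keys), y), x))
            if a != b:
                print("counterexample:", keys, "delete", x, "then", y)
                break
        else:
            print("no difference found in this sample")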

    Read the article

  • First languages with generic programming support

    - by oluies
    Which was the first language with generic programming support, and what was the first major statically typed (widely used) language with generics support? Generics implement the concept of parameterized types to allow for multiple types. The term generic means "pertaining to or appropriate to large groups of classes." I have seen the following mentions of "first": "First-order parametric polymorphism is now a standard element of statically typed programming languages. Starting with System F [20,42] and functional programming languages, the constructs have found their way into mainstream languages such as Java and C#. In these languages, first-order parametric polymorphism is usually called generics." -- from "Generics of a Higher Kind", Adriaan Moors, Frank Piessens, and Martin Odersky. "Generic programming is a style of computer programming in which algorithms are written in terms of to-be-specified-later types that are then instantiated when needed for specific types provided as parameters. This approach, pioneered by Ada in 1983 ..." -- from Wikipedia, Generic Programming.

    Read the article

  • does lucene search function work in large size document?

    - by shaon-fan
    Hi there, I have a problem when searching with Lucene. Lucene's indexing works well even for huge documents, such as a .pst file (the Outlook mail storage): it can build an index that includes all the information from the .pst. The only problem is that the indexed content is sometimes very large and contains a great many words. When I search using Lucene, it only seems to process the front part of the indexed document; if a word appears in the back part, it is not found and there are no hits in the result. But when I split the document into several parts (in a crude way, while debugging) and search each part, it works well. So I want to know how to split the indexed content, and what size limit applies to searching? Cheers, and waiting for a reply. ++++++++++++++++++++++++++++++++++++++++++++++++++ Hi there, following what Coady said, I set the length to the maximum of 2^31-1, but the search results still don't include what I want. Simply put, I convert the document text to a string array to analyze; one document has 79,680 words including spaces and symbols. When I search for a certain word, it returns only 300 hits, while actually there are more than 300 results. For the same reason, when I search for a word in the back part of the document, it also can't be found. // set the length indexWriter.SetMaxFieldLength(2147483647); // search IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString()); Hits hits = searcher.Search(query); This is my code, the same as others use. I found this problem when I needed to count the hits for every word in a document, and also found that words in the back part of the document couldn't be found. Please help: is there a searcher length setting somewhere? Have you run into this problem?

    Read the article

  • Google Image Search Quick Fix

    - by Asian Angel
    Are you tired of unneeded webpage loading and extra link clicking just to access an image found using Google Image Search? Now you can jump directly to the image itself with the clickGOOGLEview extension for Google Chrome. The Problem When you find an image that you like using Google Image Search you always have to go through extra hassle just to get to the image itself. First you have an entire webpage loading in your browser and then you have to click through that irritating “See full size image” link. All that you need is the image, right? Problem Fixed Once you have installed the clickGOOGLEview extension you will absolutely love the result. Find an image that you like, click the link, and there is your new image without any of the hassle or extra link clicking. Big or small having direct access to the image is how it should have been from the beginning. Conclusion The clickGOOGLEview extension does one thing and does it extremely well…it gets you to those images without the extra hassle or additional link clicking. Links Download the clickGOOGLEview extension (Google Chrome Extensions)

    Read the article

  • MSSQL Search Proper Names Full Text Index vs LIKE + SOUNDEX

    - by Matthew Talbert
    I have a database of names of people that has (currently) 35 million rows. I need to know what is the best method for quickly searching these names. The current system (not designed by me) simply has the first and last name columns indexed and uses "LIKE" queries, with the additional option of using SOUNDEX (though I'm not sure this is actually used much). Performance has always been a problem with this system, and so currently the searches are limited to 200 results (which still takes too long to run). So, I have a few questions: Does a full-text index work well for proper names? If so, what is the best way to query proper names? (CONTAINS, FREETEXT, etc.) Is there some other system (like Lucene.net) that would be better? Just for reference, I'm using Fluent NHibernate for data access, so methods that work well with that will be preferred. I'm using MS SQL 2008 currently.

    Read the article

  • Skip the first RenderTarget when writing to MRT with Opaque blending

    - by cubrman
    I am writing to three render targets and want to know how to tell the GPU not to write to the first RT. When you write a shader you can simply output less data than you have RTs (like outputting a single float4 when writing to three RTs) and only the first RTs will be affected, but you cannot specify that this data should go anywhere other than COLOR0, then 1, etc. Is there a way to write to several RTs but skip the first target? If I output zeroes, the data in the target will become zeroes, but I need it to remain untouched in the first target and only change in the specified ones. The reason I need this is to prevent data loss when calling SetRenderTarget() with DiscardContents RTs. I write to all the RTs at one point and I need to write to only the specified ones afterwards. It must be the first texture as I have a depth buffer linked to it (XNA 4.0). Thanks.

    Read the article
