Search Results

Search found 81412 results on 3257 pages for 'file search'.

  • wssql always returning zero rows

    - by Lavinski
    I'm using the Windows Search 4.0 service (wssql) to find some files. It works fine on my computer, but on our server, which has two drives, C: and D:, searching D: always returns 0 rows. Also, I'm not sure if it's related, but cd d: goes back to c: in the command prompt.
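
    The post doesn't show the actual query, but for reference, a minimal sketch of the kind of Windows Search SQL query that would be scoped to the D: drive looks like the following (the selected properties are just standard Windows Search system properties chosen as examples):

        SELECT System.ItemName, System.ItemPathDisplay
        FROM SystemIndex
        WHERE SCOPE = 'file:D:\'

    For any rows to come back at all, D: has to be included in the indexed locations (Indexing Options) on the server, which is worth checking before suspecting the query itself.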

    Read the article

  • Searching subversion history (full text)

    - by rjmunro
    Is there a way to perform a full text search of a subversion repository, including all the history? For example, I've written a feature that I used somewhere, but then it wasn't needed, so I svn rm'd the files, but now I need to find it again to use it for something else. The svn log probably says something like "removed unused stuff", and there's loads of checkins like that.

    Read the article

  • How to create managed properties at site collection level in SharePoint 2013

    - by ybbest
    In SharePoint 2013, you can create managed properties at the site collection level. Today, I'd like to show you how to do so through PowerShell.

    1. Define your managed properties, crawled properties and managed property types in an external CSV file. The PowerShell script will read this file and create the managed properties and the mappings (a sample CSV layout is shown at the end of this entry).

    2. As you can see, I also defined the variant type; this is because you need the variant type to create the crawled property. In order to have the crawled properties, you need to do a full crawl and also make sure you have data populated for your custom column. However, if you do not want to do a full crawl to create those crawled properties, you can create them yourself using PowerShell; you just need to make sure the crawled properties you create have the same names they would have if created by a full crawl.

    Managed property types:
    Text = 1
    Integer = 2
    Decimal = 3
    DateTime = 4
    YesNo = 5
    Binary = 6

    Variant types:
    Text = 31
    Integer = 20
    Decimal = 5
    DateTime = 64
    YesNo = 11

    3. You can use the following script to create your managed properties at the site collection level; the difference when creating a managed property at site collection level is that you pass in the site collection id.

        param(
            [string] $siteUrl = "http://SP2013/",
            [string] $searchAppName = "Search Service Application",
            $ManagedPropertiesList = (IMPORT-CSV ".\ManagedProperties.csv")
        )

        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
        $searchapp = $null

        function AppendLog
        {
            param ([string] $msg, [string] $msgColor)
            $currentDateTime = Get-Date
            $msg = $msg + " --- " + $currentDateTime
            if (!($logOnly -eq $True))
            {
                # write to console
                Write-Host -f $msgColor $msg
            }
            # write to log file
            Add-Content $logFilePath $msg
        }

        $scriptPath = Split-Path $myInvocation.MyCommand.Path
        $logFilePath = $scriptPath + "\CreateManagedProperties_Log.txt"

        function CreateRefiner
        {
            param ([string] $crawledName, [string] $managedPropertyName, [Int32] $variantType, [Int32] $managedPropertyType, [System.GUID] $siteID)

            $cat = Get-SPEnterpriseSearchMetadataCategory -Identity SharePoint -SearchApplication $searchapp
            $crawledproperty = Get-SPEnterpriseSearchMetadataCrawledProperty -Name $crawledName -SearchApplication $searchapp -SiteCollection $siteID
            if ($crawledproperty -eq $null)
            {
                Write-Host AppendLog "Creating Crawled Property for $managedPropertyName" Yellow
                $crawledproperty = New-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $searchapp -VariantType $variantType -SiteCollection $siteID -Category $cat -PropSet "00130329-0000-0130-c000-000000131346" -Name $crawledName -IsNameEnum $false
            }

            $managedproperty = Get-SPEnterpriseSearchMetadataManagedProperty -Identity $managedPropertyName -SearchApplication $searchapp -SiteCollection $siteID -ErrorAction SilentlyContinue
            if ($managedproperty -eq $null)
            {
                Write-Host AppendLog "Creating Managed Property for $managedPropertyName" Yellow
                $managedproperty = New-SPEnterpriseSearchMetadataManagedProperty -Name $managedPropertyName -Type $managedPropertyType -SiteCollection $siteID -SearchApplication $searchapp -Queryable:$true -Retrievable:$true -FullTextQueriable:$true -RemoveDuplicates:$false -RespectPriority:$true -IncludeInMd5:$true
            }

            $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{ $_.Name -eq $managedProperty.Name }
            if ($mappedProperty -eq $null)
            {
                Write-Host AppendLog "Creating Crawled -> Managed Property mapping for $managedPropertyName" Yellow
                New-SPEnterpriseSearchMetadataMapping -CrawledProperty $crawledproperty -ManagedProperty $managedproperty -SearchApplication $searchapp -SiteCollection $siteID
            }
            $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{ $_.Name -eq $managedProperty.Name }
            #Get-FASTSearchMetadataCrawledPropertyMapping -ManagedProperty $managedproperty
        }

        $searchapp = Get-SPEnterpriseSearchServiceApplication $searchAppName
        $site = Get-SPSite $siteUrl
        $siteId = $site.id

        Write-Host "Start creating Managed properties"
        $i = 1
        FOREACH ($property in $ManagedPropertiesList)
        {
            $propertyName = $property.managedPropertyName
            $crawledName = $property.crawledName
            $managedPropertyType = $property.managedPropertyType
            $variantType = $property.variantType
            Write-Host $managedPropertyType
            Write-Host "Processing managed property $propertyName $($i)..."
            $i++
            CreateRefiner $crawledName $propertyName $variantType $managedPropertyType $siteId
            Write-Host "Managed property created " $propertyName
        }

    Key Concepts

    Crawled properties: crawled properties are discovered by the search index service component when crawling content.

    Managed properties: properties that are part of the Search user experience, which means they are available for search results, advanced search, and so on, are managed properties.

    Mapping crawled properties to managed properties: to make a crawled property available for the Search experience (available for Search queries and displayed in Advanced Search and search results), you must map it to a managed property.

    References

    Administer search in SharePoint 2013 Preview
    Managing Metadata
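
    For illustration, a minimal ManagedProperties.csv of the kind the script above imports could look like the following; the column names come from the script's FOREACH loop, while the property names and type values shown are hypothetical examples rather than anything from the original post (types follow the tables above: 1/31 = Text, 4/64 = DateTime).

        managedPropertyName,crawledName,managedPropertyType,variantType
        CustomerName,ows_CustomerName,1,31
        CustomerSince,ows_CustomerSince,4,64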

    Read the article

  • What are the search engine effects of registering the same domain on multiple top level domains (i.e. .com, .ie, .nl etc.)?

    - by user1020317
    I'm looking to register a few more domains for my company. I have my-company.com at the moment, but now require my-company.com.au, my-company.nl and some others. I'm running through my options and wondering which is best:

    1. Duplicate all the content on the .com site and make a replica at the other domains.
    2. Buy the other domains but do a 301 redirect back to the .com domain.
    3. Create a fully new website with different content for the new domains, thus having no text duplication.

    We currently sell all over the world and would like to raise our search rankings in various countries. Can this be done by buying the domain in each country, and if so, how will the above methods affect our search rankings? Any other suggestions are welcome!

    Read the article

  • SQL Server Full-Text Search: Hung processes with MSSEARCH wait type

    - by CheeseInPosition
    We have a SQL Server 2005 SP2 machine running a large number of databases, all of which contain full-text catalogs. Whenever we try to drop one of these databases or rebuild a full-text index, the drop or rebuild process hangs indefinitely with a MSSEARCH wait type. The process can’t be killed, and a server reboot is required to get things running again. Based on a Microsoft forums post[1], it appears that the problem might be an improperly removed full-text catalog. Can anyone recommend a way to determine which catalog is causing the problem, without having to remove all of them? [1] [http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2681739&SiteID=1] “Yes we did have full text catalogues in the database, but since I had disabled full text search for the database, and disabled msftesql, I didn't suspect them. I got however an article from Microsoft support, showing me how I could test for catalogues not properly removed. So I discovered that there still existed an old catalogue, which I ,after and only after re-enabling full text search, were able to delete, since then my backup has worked”
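
    As a starting point for the "which catalog is causing the problem" question (a general sketch using the SQL Server 2005 catalog views, not something taken from the linked thread), the catalogs each database still references can be listed and cross-checked against the full-text indexes that actually use them; a catalog with no indexes pointing at it is a candidate for the improperly removed one described in the quoted post.

        -- Run in each suspect database: catalogs the database still references.
        SELECT c.name, c.path, c.is_default
        FROM sys.fulltext_catalogs AS c;

        -- Cross-check which tables actually have a full-text index in each catalog;
        -- a catalog that appears above but has no rows here may be a leftover.
        SELECT c.name AS catalog_name, OBJECT_NAME(fti.object_id) AS table_name
        FROM sys.fulltext_catalogs AS c
        LEFT JOIN sys.fulltext_indexes AS fti
               ON fti.fulltext_catalog_id = c.fulltext_catalog_id;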

    Read the article

  • Unable to get simple ruby on rails Search to work :/

    - by edu222
    I am new to RoR; any help would be greatly appreciated :) I have a basic scaffolded CRUD app to add customers, and I am trying to search by the first_name or last_name fields. The error that I am getting is:

        NoMethodError in Clientes#find
        You have a nil object when you didn't expect it!
        You might have expected an instance of Array.
        The error occurred while evaluating nil.each

    Extracted source (around line #9):

        6: <th>Apellido</th>
        7: </tr>
        8:
        9: <% for cliente in @clientes %>
        10: <tr>
        11: <td><%=h cliente.client_name %></td>
        12: <td><%=h cliente.client_lastname %></td>

    Application Trace:

        C:/Rails/clientes/app/views/clientes/find.html.erb:9:in `_run_erb_app47views47clientes47find46html46erb'

    My find action in controllers/clientes_controlee.rb is:

        # Find
        def find
          @cliente = Cliente.find(:all, :conditions => ["client_name = ? OR client_lastname = ?", params[:search_string], params[:search_string]])
        end

    My form code fragment in views/layouts/clientes.html.erb is:

        <span style="text-align: right">
          <% form_tag "/clientes/find" do %>
            <%= text_field_tag :search_string %>
            <%= submit_tag "Search" %>
          <% end %>
        </span>

    The search template I created in views/clientes/find.html.erb:

        <h1>Listing clientes for <%= params[:search_string] %></h1>
        <table>
          <tr>
            <th>Nombre</th>
            <th>Apellido</th>
          </tr>
          <% for cliente in @clientes %>
            <tr>
              <td><%=h cliente.client_name %></td>
              <td><%=h cliente.client_lastname %></td>
              <td><%= link_to 'Mostrar', cliente %></td>
              <td><%= link_to 'Editar', edit_cliente_path(cliente) %></td>
              <td><%= link_to 'Eliminar', cliente, :confirm => 'Estas Seguro de que desear eliminar a este te cliente?', :method => :delete %></td>
            </tr>
          <% end %>
        </table>
        <%= link_to 'Atras', clientes_path %

    Read the article

  • MySQL full text search with partial words

    - by Rob
    MySQL full-text searching appears to be great and the best way to search in SQL. However, I seem to be stuck on the fact that it won't search partial words. For instance, if I have an article titled "MySQL Tutorial" and search for "MySQL", it won't find it. Having done some searching I found various references to support for this coming in MySQL 4 (I'm using 5.1.40). I've tried using "MySQL" and "%MySQL%", but neither works (one link I found suggested asterisks, but that you could only use them at the end or the beginning, not both). Here's my table structure and my query; if someone could tell me where I'm going wrong, that would be great. I'm assuming partial word matching is built in somehow.

        CREATE TABLE IF NOT EXISTS `articles` (
          `article_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
          `article_name` varchar(64) NOT NULL,
          `article_desc` text NOT NULL,
          `article_link` varchar(128) NOT NULL,
          `article_hits` int(11) NOT NULL,
          `article_user_hits` int(7) unsigned NOT NULL DEFAULT '0',
          `article_guest_hits` int(10) unsigned NOT NULL DEFAULT '0',
          `article_rating` decimal(4,2) NOT NULL DEFAULT '0.00',
          `article_site_id` smallint(5) unsigned NOT NULL DEFAULT '0',
          `article_time_added` int(10) unsigned NOT NULL,
          `article_discussion_id` smallint(5) unsigned NOT NULL DEFAULT '0',
          `article_source_type` varchar(12) NOT NULL,
          `article_source_value` varchar(12) NOT NULL,
          PRIMARY KEY (`article_id`),
          FULLTEXT KEY `article_name` (`article_name`,`article_desc`,`article_link`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=7;

        INSERT INTO `articles` VALUES
        (1, 'MySQL Tutorial', 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry''s standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.', 'http://www.domain.com/', 6, 3, 1, '1.50', 1, 1269702050, 1, '0', '0'),
        (2, 'How To Use MySQL Well', 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry''s standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.', 'http://www.domain.com/', 1, 2, 0, '3.00', 1, 1269702050, 1, '0', '0'),
        (3, 'Optimizing MySQL', 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry''s standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.', 'http://www.domain.com/', 0, 1, 0, '3.00', 1, 1269702050, 1, '0', '0'),
        (4, '1001 MySQL Tricks', 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry''s standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.', 'http://www.domain.com/', 0, 1, 0, '3.00', 1, 1269702050, 1, '0', '0'),
        (5, 'MySQL vs. YourSQL', 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry''s standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.', 'http://www.domain.com/', 0, 2, 0, '3.00', 1, 1269702050, 1, '0', '0'),
        (6, 'MySQL Security', 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry''s standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.', 'http://www.domain.com/', 0, 2, 0, '3.00', 1, 1269702050, 1, '0', '0');

        SELECT count(a.article_id)
        FROM articles a
        WHERE MATCH (a.article_name, a.article_desc, a.article_link) AGAINST ('mysql')
        GROUP BY a.article_id
        ORDER BY a.article_time_added ASC

    The prefix is used because the query comes from a function that sometimes adds additional joins. As you can see, a search for MySQL should return a count of 6, but unfortunately it doesn't.
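
    Two pieces of general MySQL full-text behaviour are worth noting here (they are not stated in the original post): in natural-language mode, a word that appears in 50% or more of the rows is treated as a stopword, which in this six-row sample table affects "mysql" itself, and boolean mode supports a trailing asterisk for prefix matching (though still no leading wildcard). A sketch of the same query in boolean mode:

        SELECT count(a.article_id)
        FROM articles a
        WHERE MATCH (a.article_name, a.article_desc, a.article_link)
              AGAINST ('mysql*' IN BOOLEAN MODE)  -- prefix match; boolean mode also skips the 50% threshold
        GROUP BY a.article_id
        ORDER BY a.article_time_added ASC;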

    Read the article

  • Best way to retrieve a certain field of all documents returned by a Lucene search

    - by Philipp
    Hi, I was wondering what the best way is to retrieve a certain field of all documents returned by a Lucene Searcher. Background: each document has a date field (written on) and I would like to show a timeline of all found documents, so I need to extract the date (day) field of every document the search finds. I currently retrieve each document using Searcher.doc(int, FieldSelector), with the selector retrieving only that one field. I have indexed 250k documents; the search itself takes no time and returns about 10k document ids. Retrieving those, however, takes 20+ seconds. What can I do to speed things up while still getting all the values I need? Thanks in advance, Philipp

    Read the article

  • Search results filter effects - Flex

    - by Adam
    I've created a search with a couple of comboboxes that allow users to filter their search results. The results are currently displayed using a TileList and an itemRenderer, and now I'd like to add an animation effect when the user filters the results. I know that you can use itemsChangeEffect to create an animation effect when the user drags and moves result items. So I'd like to know if there's a way to create a similar effect triggered by the filtering on the comboboxes? Thanks.

    Read the article

  • Optimizing encrypted column search

    - by Sung Meister
    I have a table called tblClient with an encrypted column called SSN. Due to company policy, we encrypted the SSN using a symmetric key (chosen over an asymmetric key for performance reasons) protected by a password. Here is a partial LIKE search on SSN:

        declare @SSN varchar(11)
        set @SSN = '111-22-%'

        open symmetric key SSN_KEY decrypt by password = 'secret'

        select Client_ID
        from tblClient (nolock)
        where convert(nvarchar(11), DECRYPTBYKEY(SSN)) like @SSN

        close symmetric key SSN_KEY

    Before encryption, searching through 150,000 records took less than 1 second, but with the decryption mixed in, the same search takes around 5 seconds. What strategy can I apply to optimize searching through the encrypted column?
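
    One strategy sometimes used for this (a sketch, not from the original post, and assuming the partial searches are always on the leading area-group segment as in the example) is to persist a one-way hash of that prefix in its own indexed column at write time, so the query can seek on the hash and only decrypt the narrowed set of candidate rows:

        -- Hypothetical helper column and index; populate SSN_PrefixHash in the same code path
        -- that encrypts the SSN, e.g. HASHBYTES('SHA1', '111-22-') for this row.
        ALTER TABLE tblClient ADD SSN_PrefixHash varbinary(20) NULL;
        CREATE INDEX IX_tblClient_SSNPrefixHash ON tblClient (SSN_PrefixHash);

        -- Inside the same open-symmetric-key block as the original query:
        SELECT Client_ID
        FROM tblClient
        WHERE SSN_PrefixHash = HASHBYTES('SHA1', '111-22-')
          AND CONVERT(nvarchar(11), DECRYPTBYKEY(SSN)) LIKE '111-22-%';

    The trade-off is that the hash column reveals which rows share the same SSN prefix, which may or may not be acceptable under the same company policy.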

    Read the article

  • Lucene search taking too long

    - by Josh Handel
    I'm using Lucene.net (2.9.2.2) on a (currently) 70 GB index. I can do a fairly complicated search and get all the document IDs back in 1-2 seconds, but actually loading up all the hits (about 700 thousand in my test queries) takes 5+ minutes. We aren't using Lucene for a UI; this is a datastore between processes where we have hundreds of millions of pre-cached data elements, and the part I am working on exports a few specific fields from each found document. (Ergo, pagination doesn't make sense, as this is an export between processes.) My question is: what is the best way to get all of the documents in a search result? Currently I am using a custom collector that does a get on the document (with a MapFieldSelector) as it's collecting. I've also tried iterating through the list after the collector has finished, but that was even worse. I'm open to ideas :-) Thanks in advance.

    Read the article

  • Eclipse Search Only Specific Folders

    - by Craig
    Hello, I already saw the answers to a question almost identical to this one: http://stackoverflow.com/questions/443169/eclipse-exclude-folders-from-search. However, I am looking for a solution that lets me look at, say, 10 of my 200 folders, where those 10 change all the time. Is there a way I can search through just one folder and avoid any folders that are not inside it? I don't want to create a different project: I am using SVN, and I have had cases with mxml and other files where moving a file from one project to another caused problems for other developers.

    Read the article

  • Search select statement

    - by Nana
    I am creating a page which has different fields for the user to search from, e.g. search by:

    Grade: -dropdownlist1-
    Student name: -dropdownlist2-
    Student ID: -dropdownlist3-
    Lessons: -dropdownlist4-
    Year: -dropdownlist5-

    How do I write the select statement for this? Each dropdownlist would need a select statement which extracts different data from the database, but I want to write ONE select statement which can dynamically handle the dropdownlist options, instead of writing many, many select statements. Let's say:

    Grade: -dropdownlist1-; default value (all)
    Student name: -dropdownlist2-; default value (all)
    Student ID: -dropdownlist3-; 0-100 is chosen
    Lessons: -dropdownlist4-; A-C is chosen
    Year: -dropdownlist5-; 2009 is chosen
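
    A common way to do this with a single statement (a sketch only; the table and column names below are hypothetical, since the real schema isn't shown) is a "catch-all" query where every dropdown maps to a parameter, and a parameter left at its default disables that filter:

        -- Bind NULL (or the 'all' default) for any dropdown the user leaves untouched.
        SELECT s.student_id, s.student_name, s.grade, s.lesson, s.year
        FROM Students AS s
        WHERE (@grade IS NULL OR s.grade = @grade)
          AND (@name  IS NULL OR s.student_name = @name)
          AND (@id_from IS NULL OR s.student_id BETWEEN @id_from AND @id_to)
          AND (@lesson_from IS NULL OR s.lesson BETWEEN @lesson_from AND @lesson_to)
          AND (@year IS NULL OR s.year = @year);

    Because the statement itself never changes, only the bound parameter values do, one query covers every combination of dropdown selections.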

    Read the article

  • PostgreSql XML Text search

    - by cro
    I have a text column in a table in which we store XML. Now I want to search for tags and values. Example data: Citi Bank ..... ..... / I would like to run the following query:

        select *
        from xxxx
        where to_tsvector('english', xml_column) @@ to_tsquery('Citi Bank')

    This works fine, but it also matches when the words appear under other tags, such as name1, or under no tag at all. How do I have to set up my search so that I get an exact match for the tag and the value?
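
    If the tag itself matters, one alternative (a sketch, not from the original post; it assumes the column parses as well-formed XML and that the element of interest is called name, which the post only hints at) is to pull out that specific element with xpath() and compare its text directly:

        -- Extract the assumed <name> element and compare its text for an exact match.
        SELECT *
        FROM xxxx
        WHERE (xpath('//name/text()', xml_column::xml))[1]::text = 'Citi Bank';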

    Read the article

  • SQL Server Full Text Search Leading Wildcard

    - by aherrick
    After taking a look at this SO question and doing my own research, it appears that you cannot have a leading wildcard while using full-text search. So in the most simple example, if I have a table TABLE1 with one column containing the values:

        coin
        coinage
        undercoin

    then

        select COLUMN1 from TABLE1 where COLUMN1 LIKE '%coin%'

    would get me the results I want. How can I get the exact same results with full-text search enabled on the column? The following two queries return the exact same data, which is not exactly what I want:

        SELECT COLUMN1 FROM TABLE1 WHERE CONTAINS(COLUMN1, '"coin*"')
        SELECT COLUMN1 FROM TABLE1 WHERE CONTAINS(COLUMN1, '"*coin*"')

    Read the article

  • Search 2 Columns with 1 Input Field

    - by Norbert
    I have a db with two columns: first name and last name. The first name can have multiple words. The last name can contain hyphenated words. Is there a way to search both columns with only one input box?

    Database:

        ID  First Name   Last Name
        1   John Peter   Doe
        2   John         Fubar
        3   Michael      Doe

    Search:

        john peter  returns id 1
        john        returns id 1, 2
        doe         returns id 1, 3
        john doe    returns id 1
        peter john  returns id 1
        peter doe   returns id 1
        doe john    returns id 1

    I previously tried the following, searching for John Doe:

        SELECT * FROM names
        WHERE ( `first` LIKE '%john%' OR `first` LIKE '%doe%'
             OR `last`  LIKE '%john%' OR `last`  LIKE '%doe%' )

    which returns both 1 and 3.
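
    One common approach (a sketch, not from the original post) is to concatenate the two columns and require every word of the input to match the combined string; it is shown here hard-coded for the two-term search "john doe", with the list of LIKE conditions normally built up by the application from the split input:

        -- Every term must appear somewhere in the combined "first last" string.
        SELECT id
        FROM names
        WHERE CONCAT_WS(' ', `first`, `last`) LIKE '%john%'
          AND CONCAT_WS(' ', `first`, `last`) LIKE '%doe%';

    Because each term must match, this returns only id 1 for "john doe" and "doe john", while a single term such as "doe" still returns ids 1 and 3, matching the expected results above.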

    Read the article

  • Meta Tags in Google Site Search

    - by ullmark
    Hi, I'm planning on implementing a Google Site Search (and paying for it so I can get access to the XML). One thing I am wondering about is the possibility of using custom meta tags in it. I've heard yes from colleagues, but nothing confirmed, and searching for an answer has given me nothing (maybe because you can't?). Does anybody know? Edit: I want to be able to retrieve those meta tags from the search results to be able to provide different styling for different types of pages.

    Read the article

  • search engine crawling frequency

    - by Aditya Pratap Singh
    I want to design a search engine for news websites, i.e., download various article pages from these websites, index the pages, and answer search queries on the index. I want short pseudocode to find an appropriate crawling frequency: I do not want to crawl too often, because the website may not have changed, and I do not want to crawl too infrequently, because the index would then be out of date. Assume that the crawling code looks as follows:

        while (1) {
            sleep(sleep_interval);  // sleep for sleep_interval
            crawl(website);         // crawls the entire website
            diff = diff(currently_crawled_website, previously_crawled_website);
            // returns a % value of difference between the latest and previous crawls of the website
            sleep_interval = infer_sleep_interval(diff, sleep_interval);
        }

    I am looking for pseudocode for the infer_sleep_interval method:

        long infer_sleep_interval(int diff_percentage, long previous_sleep_interval) {
            ...
        }

    I want to design a method which adaptively alters the sleep interval based on the update frequency of the website.

    Read the article

  • Problem with ranking of search results in SharePoint 2007 if using the CONTAINS predicate

    - by mythicdawn
    While writing a front-end for the SharePoint Search web service for work, I did some quick testing with the MOSS Search Tool to make sure things were working right under the hood. What I found was that queries composed only of CONTAINS predicates (FREETEXT ones were fine) would have a rank of 1000 for any results that were returned. According to the documentation (http://msdn.microsoft.com/en-us/library/ms544086.aspx): "If the query returns a document because a non–full-text predicate evaluates to TRUE for that document, the rank value is calculated as 1000." Given that the behaviour I am seeing seems to contradict the documentation, is it the case that all queries that use only the CONTAINS predicate will produce ranking like this?

    Read the article

  • Writing a post search algorithm.

    - by MdaG
    I'm trying to write a free-text search algorithm for finding specific posts on a wall (a similar kind of wall to the one Facebook uses). A user is supposed to be able to write some words in a search field and get hits on posts that contain those words, with the best match on top and the other posts in decreasing order according to match score. I'm using the edit distance (Levenshtein) e(x, y) = e to calculate the score of each post when compared to the query word x and post word y, according to:

        score(x, y) = 2^(2 - e) * (1 - min(e, |x|) / |x|)

    Each word in a post contributes to the total score for that specific post. This approach seems to work well when the posts are of roughly the same size, but sometimes certain large posts manage to rack up score solely by having a lot of words in them while in practice not being relevant to the query. Am I approaching this problem in the wrong way, or is there some way to normalize the score that I haven't thought of?

    Read the article
