Search Results

Search found 32130 results on 1286 pages for 'local search'.


  • How Can a Website Benefit From Search Engine Optimization

    Search Engine Optimization, usually shortened to SEO, is the practice of tuning a website so that it ranks highly in search engines such as Google, Yahoo and Bing. Businesses compete for these high rankings because when web users look for particular services or products online, they type a handful of specific keywords into a search engine, and the sites that appear near the top of the results attract most of that attention.

    Read the article

  • Search Engine Optimisation For Online Businesses

    Most Internet users rely on search engines to find what they are looking for, which is why a website with a good search engine ranking attracts far more visitors. Landing at the top of the search results not only draws quality traffic to the website, it also helps the site build a brand in its market. The more visitors the pages receive, the more exposure the site earns, and search engine optimization is what makes this possible.

    Read the article

  • SEO - Search Engine Optimization Working For You

    SEO (search engine optimization), sometimes referred to as natural or organic search optimization, website SEO, or just plain old SEO, is the process of ensuring that search engines such as Google and Yahoo rank your website near the top of their results. Natural or organic search results are "free": you do not pay for a listing on the results pages for your keywords. This also means you have to do all the work yourself.

    Read the article

  • Basic SEO Strategies - How to Get Your Website at the Top of the Search Results

    Search Engine Optimisation means making changes to your website so that it can be crawled more easily by search engines and found in search listings; these changes help bring more targeted visitors to your site. Search engines use robots to crawl websites, and those robots can only read text content. Your pages are then ranked according to how relevant their content is to the keyword being searched for.

    Read the article

  • Used SQL Svr 2008 Config Manager to Set Service Account to Local System: What Did It Change?

    - by Frank Ramage
    Direct shot-to-foot moment... While setting up individual non-admin accounts for the MSSQLSERVER services, I temporarily set the Server service login to the Local System account. I remembered later that SQL Server Configuration Manager performs additional configuration, such as setting permissions in the Windows Registry, so that the new account can read the SQL Server settings. I want my Local System back (or, more precisely, restored to its original security profile). Any advice? Thanks!

    Read the article

  • rc.local is not always executed upon boot

    - by starcorn
    Hey, I have a weird problem with the rc.local file located at /etc/rc.local: it is not always run when I boot up the laptop, maybe every second time (I haven't counted). When that happens I have to go to a terminal and type sudo /etc/init.d/rc.local start manually, which kind of kills the purpose of having the script. Does anyone know what the problem could be?

    EDIT: Since this wasn't obvious: this is on a fresh boot, i.e. I have shut the computer down completely, and on the next boot rc.local seems to decide at random whether it will run automatically or not. Here's a copy of what my rc.local file contains:

        echo -n 255 > /sys/devices/platform/i8042/serio1/serio2/sensitivity
        echo level 2 > /proc/acpi/ibm/fan
        touch /home/starcorn/Desktop/foo
        rfkill block bluetooth
        exit 0

    Read the article

  • Where can I safely search domain whois without worrying about the search engine parking on the domain immediately after the search?

    - by Evan Plaice
    There are a lot of companies that provide domain whois, but I've heard of plenty of people who had bad experiences where the domain was bought soon after the whois search and the price was increased dramatically. Where can I access a domain whois lookup without having to worry about that happening?

    Update: Apparently the official name for this practice is Domain Front Running, and some sites go as far as publishing explicit policies stating that they don't do it. This is where a domain registrar or an intermediary (such as a domain lookup site) mines the searches for potentially attractive domains and then either sells the data to a third party or registers the names itself ahead of you. In one case a registrar took advantage of what's known as the "grace period" and registered every single domain users looked up through it, holding them for 5 days before releasing them back into the pool at no cost to itself. Source: domainwarning.com

    And apparently, after ICANN was notified of the practice, it wrote it off as a coincidence of random 'domain tasting'. Source: See for yourself

    Read the article

  • Syncing objects to a remote server, and caching on local storage

    - by Harry
    What's the best method of syncing objects (as JSON) to a remote server, with local caching? I have some objects that will pretty much just be plain text with some extra metadata. I was thinking of including a "last modified date" for both local storage and remote storage, which could then be used to determine which copy is the most recent. For example, even though objects are saved to both local and remote storage whenever they change, sometimes the user may not have internet access, the server may be down, or any number of other things may go wrong. In that case the last modified date for remote storage would remain at its previous value, while local storage would be updated as usual. The user could then exit the application, and when they reload it the application would compare the last modified dates of the local and remote copies and decide which one wins. Is there anything I'm missing with this? Is there a better method I could use?
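
    A minimal sketch of the comparison step described above, in Python. The field names ("modified", "payload") and file layout are hypothetical, and real code would also have to account for clock skew between client and server or use revision counters instead of wall-clock timestamps:

        import json
        from datetime import datetime, timezone

        def load_cached(path):
            """Read a cached object ({"modified": ISO-8601 string, "payload": ...}), or None if absent/corrupt."""
            try:
                with open(path) as f:
                    return json.load(f)
            except (OSError, ValueError):
                return None

        def newest(local_copy, remote_copy):
            """Pick the copy with the later 'modified' timestamp; prefer local on a tie or if remote is missing."""
            if local_copy is None:
                return remote_copy
            if remote_copy is None:
                return local_copy
            local_t = datetime.fromisoformat(local_copy["modified"]).astimezone(timezone.utc)
            remote_t = datetime.fromisoformat(remote_copy["modified"]).astimezone(timezone.utc)
            return remote_copy if remote_t > local_t else local_copy

        # On startup: compare the locally cached copy with whatever the server
        # returned (pass None if the server was unreachable) and keep the winner.
        # winner = newest(load_cached("cache/note-42.json"), remote_copy_from_server)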

    Read the article

  • How to make XAMPP virtual hosts accessible to VM's and other computers on LAN?

    - by martin's
    XAMPP running on Vista 64 Ultimate dev machine (don't think it matters).

    Machine / Browser configuration:
    - Safari, Firefox, Chrome and IE9 on dev machine
    - IE7 and IE8 on separate XP Pro VM's (VMware on dev machine)
    - IE10 and Chrome on Windows 8 VM (VMware on dev machine)
    - Safari, Firefox and Chrome running on an iMac (same network as dev)
    - Safari, Firefox and Chrome running on a couple of Mac Pro's (same network as dev)
    - IE7, IE8, IE9 running on other PC's on the same network as dev machine

    Development configuration:
    - Multiple virtual hosts for different projects
    - .local fake TLD for development
    - No firewall restrictions on dev machine for Apache
    - Some sites have .htaccess mapping www to non-www
    - Port 80 is open in the dev machine's firewall

    Problem:
    - XAMPP local home page (http://192.168.1.98/xampp/) can be accessed from everywhere, real or virtual, by IP
    - All .local sites can be accessed from the browsers on the dev machine
    - All .local sites can be accessed from the browsers in the XP VM's
    - Some .local sites cannot be accessed from IE10 or Chrome on the W8 VM
    - Sites that cannot be accessed from the W8 VM have a minimal .htaccess file
    - No .local sites can be accessed from ANY machine (PC or Mac) on the LAN

    hosts on dev machine (relevant excerpt):

        127.0.0.1    site1.local
        127.0.0.1    site2.local
        127.0.0.1    site3.local
        127.0.0.1    site4.local
        127.0.0.1    site5.local
        127.0.0.1    site6.local
        127.0.0.1    site7.local
        127.0.0.1    site8.local
        127.0.0.1    site9.local
        192.168.1.98 site1.local
        192.168.1.98 site2.local
        192.168.1.98 site3.local
        192.168.1.98 site4.local
        192.168.1.98 site5.local
        192.168.1.98 site6.local
        192.168.1.98 site7.local
        192.168.1.98 site8.local
        192.168.1.98 site9.local

    httpd-vhosts.conf on dev machine (relevant excerpt):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName localhost
            ServerAlias localhost *.localhost.*
            DocumentRoot D:/xampp/htdocs
        </VirtualHost>

        # ======================================== site1.local
        <VirtualHost *:80>
            ServerName site1.local
            ServerAlias site1.local *.site1.local
            DocumentRoot D:/xampp-sites/site1/public_html
            ErrorLog D:/xampp-sites/site1/logs/access.log
            CustomLog D:/xampp-sites/site1/logs/error.log combined
            <Directory D:/xampp-sites/site1>
                Options Indexes FollowSymLinks
                AllowOverride All
                Require all granted
            </Directory>
        </VirtualHost>

    NOTE: The above <VirtualHost *:80> block is repeated for each of the nine virtual hosts in the file, no sense in posting it here.

    hosts on all VM's and physical machines on the network (relevant excerpt):

        127.0.0.1    localhost
        ::1          localhost
        192.168.1.98 site1.local
        192.168.1.98 site2.local
        192.168.1.98 site3.local
        192.168.1.98 site4.local
        192.168.1.98 site5.local
        192.168.1.98 site6.local
        192.168.1.98 site7.local
        192.168.1.98 site8.local
        192.168.1.98 site9.local

    None of the VM's have any firewall blocks on HTTP traffic; they can reach any site on the real Internet. The same is true of the real machines on the network. The biggest puzzle perhaps is that the W8 VM actually DOES reach some of the virtual hosts. It does NOT reach site2, site6 and site9, all of which have this minimal .htaccess file:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^www\.
            RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
        </IfModule>

    Adding this file to any of the virtual hosts that do work on the W8 VM will break the site (only for the W8 VM, not the XP VM's) and require a cache flush on the W8 VM before it will see the site again after deleting the file.

    Regardless of whether a .htaccess file exists or not, no machine on the same LAN can access anything other than the XAMPP home page via IP, even with hosts files on all machines. I can ping any virtual host from any machine on the network and get a response from the correct IP address. I can't see anything in our Netgear router that might prevent one machine from reaching another; besides, once the local hosts file resolves the name to an IP address, that is all that goes out onto the local network. I've gone through an extensive number of posts, both on SO and via Google searches, and I can't say I've found anything definitive anywhere.
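
    Not part of the original question, but a small diagnostic that fits the setup described above: a hedged Python sketch (standard library only) a LAN client could run to check, for each name, whether the client resolves the .local name at all and whether Apache serves the right vhost when the request goes to 192.168.1.98 with that Host header. The IP and host names are taken from the question; everything else is illustrative:

        import http.client
        import socket

        DEV_IP = "192.168.1.98"
        HOSTS = ["site1.local", "site2.local", "site6.local", "site9.local"]

        for name in HOSTS:
            # 1. Does this client resolve the name (i.e. is the hosts-file entry in effect)?
            try:
                resolved = socket.gethostbyname(name)
            except socket.gaierror:
                resolved = None
            # 2. Does Apache answer when we connect to the dev IP but send the
            #    .local name in the Host header (name-based vhost selection)?
            try:
                conn = http.client.HTTPConnection(DEV_IP, 80, timeout=5)
                conn.request("GET", "/", headers={"Host": name})
                status = conn.getresponse().status
                conn.close()
            except OSError as exc:
                status = exc
            print(f"{name}: resolves to {resolved}, HTTP status via {DEV_IP}: {status}")

    A failure in step 1 points at the client's hosts file; a failure in step 2 points at the firewall or the vhost configuration rather than name resolution.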

    Read the article

  • Python - Using "Google AJAX Search" API's Local Search Objects

    - by user330739
    Hi! I've just started using Google's search API to find addresses and the distances between those addresses. I used geopy for this but often had the problem of not getting the correct addresses for my queries, so I decided to experiment with Google's "Local Search" (http://code.google.com/apis/ajaxsearch/local.html). Anyway, I wanted to ask whether I can use the "Local Search" objects provided by the API from within Python. Something tells me that I can't and that I have to use JSON instead. Does anyone know if there is a workaround? PS: I'm trying to make something like this: http://www.google.com/uds/samples/random/lead.html ... except a matrix-type deal where the cells will be filled with distances between the addresses. Thanks for reading!
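
    For reference, a minimal sketch of the JSON route mentioned above: the AJAX Search API exposed its local results over a plain HTTP endpoint returning JSON, so Python can consume it without the JavaScript objects. This is only an illustration of that pattern; the endpoint, parameters and response field names below are assumptions to be checked against the API documentation (the AJAX Search API has since been deprecated), not verified behaviour:

        import json
        import urllib.parse
        import urllib.request

        # Assumed REST endpoint of the (deprecated) AJAX "Local Search" API --
        # verify against the documentation before relying on it.
        BASE = "http://ajax.googleapis.com/ajax/services/search/local"

        def local_search(query):
            """Return the raw result list for a local-search query (assumed schema)."""
            url = BASE + "?" + urllib.parse.urlencode({"v": "1.0", "q": query})
            with urllib.request.urlopen(url) as resp:
                data = json.load(resp)
            # The result objects were documented with fields such as
            # 'titleNoFormatting', 'lat' and 'lng' -- treat these names as assumptions.
            return data.get("responseData", {}).get("results", [])

        # for hit in local_search("pizza near new york"):
        #     print(hit.get("titleNoFormatting"), hit.get("lat"), hit.get("lng"))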

    Read the article

  • Algorithms for "fuzzy matching" strings

    - by Alexey Romanov
    By fuzzy matching I don't mean similar strings by Levenshtein distance or something similar, but the way it's used in TextMate/Ido/Icicles: given a list of strings, find those which include all characters in the search string, but possibly with other characters between, preferring the best fit.
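
    A rough sketch of that idea in Python, purely illustrative: treat the query as an in-order subsequence of each candidate and prefer matches whose characters sit close together, start early, and come from shorter candidates. The scoring weights are arbitrary and not taken from TextMate/Ido/Icicles:

        def fuzzy_score(query, candidate):
            """Score how well `query` matches `candidate` as an in-order subsequence.
            Returns None when a query character is missing; otherwise a tuple where
            smaller means a tighter, earlier match in a shorter candidate."""
            q, c = query.lower(), candidate.lower()
            pos, first = -1, None
            for ch in q:
                pos = c.find(ch, pos + 1)
                if pos == -1:
                    return None                      # character missing -> no match
                if first is None:
                    first = pos                      # where the match starts
            if first is None:                        # empty query matches everything equally
                return (0, 0, len(c))
            span = pos - first + 1                   # spread of the matched characters
            return (span - len(q), first, len(c))    # fewer gaps, earlier start, shorter string

        def fuzzy_filter(query, candidates):
            """Return matching candidates, best fit first."""
            scored = [(s, c) for c in candidates if (s := fuzzy_score(query, c)) is not None]
            return [c for s, c in sorted(scored)]

        # fuzzy_filter("fbz", ["foo_bar_baz.py", "fizzbuzz.py", "readme.txt"])
        # -> ['fizzbuzz.py', 'foo_bar_baz.py']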

    Read the article

  • wssql always returning zero rows

    - by Lavinski
    I'm using the Windows Search 4.0 service (via wssql) to find some files. It works fine on my computer, but on our server, which has two drives, C: and D:, it always returns 0 rows when searching D:. Also, I'm not sure if it's related, but cd d: goes back to C: in the command prompt.

    Read the article

  • Searching subversion history (full text)

    - by rjmunro
    Is there a way to perform a full text search of a subversion repository, including all the history? For example, I've written a feature that I used somewhere, but then it wasn't needed, so I svn rm'd the files, but now I need to find it again to use it for something else. The svn log probably says something like "removed unused stuff", and there's loads of checkins like that.
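
    One approach (not from the question) is to have Subversion print every revision together with its diff and grep that stream; `svn log --diff` does this in Subversion 1.7 and later. A hedged sketch in Python, assuming the svn command-line client is on the PATH and that the repository URL is supplied by the caller; on a large repository this will produce a lot of output:

        import subprocess

        def grep_history(repo_url, needle):
            """Yield (revision, line) pairs where `needle` appears in a commit's diff.
            Relies on `svn log --diff` (Subversion 1.7+)."""
            out = subprocess.run(
                ["svn", "log", "--diff", repo_url],
                capture_output=True, text=True, check=True,
            ).stdout
            revision = None
            for line in out.splitlines():
                if line.startswith("r") and " | " in line:
                    revision = line.split(" | ", 1)[0]   # e.g. "r1234"
                elif needle in line:
                    yield revision, line

        # for rev, line in grep_history("http://svn.example.com/repo/trunk", "my_old_feature"):
        #     print(rev, line)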

    Read the article

  • How to create managed properties at site collection level in SharePoint 2013

    - by ybbest
    In SharePoint 2013 you can create managed properties at the site collection level. Today I'd like to show you how to do so through PowerShell.

    1. Define your managed properties, crawled properties and managed property types in an external CSV file. The PowerShell script will read this file and create the managed properties and the mappings.

    2. As you can see, I also defined the variant type, because you need the variant type to create the crawled property. To have the crawled properties available you need to do a full crawl and make sure you have data populated for your custom column. However, if you do not want to run a full crawl to create those crawled properties, you can create them yourself with PowerShell; just make sure the crawled properties you create have the same names they would have if created by a full crawl.

    Managed property types:
        Text = 1
        Integer = 2
        Decimal = 3
        DateTime = 4
        YesNo = 5
        Binary = 6

    Variant types:
        Text = 31
        Integer = 20
        Decimal = 5
        DateTime = 64
        YesNo = 11

    3. You can use the following script to create your managed properties at the site collection level; the difference when creating a managed property at the site collection level is that you pass in the site collection id.

        param(
            [string] $siteUrl="http://SP2013/",
            [string] $searchAppName = "Search Service Application",
            $ManagedPropertiesList=(IMPORT-CSV ".\ManagedProperties.csv")
        )

        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
        $searchapp = $null

        function AppendLog
        {
            param ([string] $msg, [string] $msgColor)
            $currentDateTime = Get-Date
            $msg = $msg + " --- " + $currentDateTime
            if (!($logOnly -eq $True))
            {
                # write to console
                Write-Host -f $msgColor $msg
            }
            # write to log file
            Add-Content $logFilePath $msg
        }

        $scriptPath = Split-Path $myInvocation.MyCommand.Path
        $logFilePath = $scriptPath + "\CreateManagedProperties_Log.txt"

        function CreateRefiner
        {
            param ([string] $crawledName, [string] $managedPropertyName, [Int32] $variantType, [Int32] $managedPropertyType, [System.GUID] $siteID)

            $cat = Get-SPEnterpriseSearchMetadataCategory -Identity SharePoint -SearchApplication $searchapp
            $crawledproperty = Get-SPEnterpriseSearchMetadataCrawledProperty -Name $crawledName -SearchApplication $searchapp -SiteCollection $siteID
            if($crawledproperty -eq $null)
            {
                Write-Host AppendLog "Creating Crawled Property for $managedPropertyName" Yellow
                $crawledproperty = New-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $searchapp -VariantType $variantType -SiteCollection $siteID -Category $cat -PropSet "00130329-0000-0130-c000-000000131346" -Name $crawledName -IsNameEnum $false
            }

            $managedproperty = Get-SPEnterpriseSearchMetadataManagedProperty -Identity $managedPropertyName -SearchApplication $searchapp -SiteCollection $siteID -ErrorAction SilentlyContinue
            if($managedproperty -eq $null)
            {
                Write-Host AppendLog "Creating Managed Property for $managedPropertyName" Yellow
                $managedproperty = New-SPEnterpriseSearchMetadataManagedProperty -Name $managedPropertyName -Type $managedPropertyType -SiteCollection $siteID -SearchApplication $searchapp -Queryable:$true -Retrievable:$true -FullTextQueriable:$true -RemoveDuplicates:$false -RespectPriority:$true -IncludeInMd5:$true
            }

            $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name }
            if($mappedProperty -eq $null)
            {
                Write-Host AppendLog "Creating Crawled -> Managed Property mapping for $managedPropertyName" Yellow
                New-SPEnterpriseSearchMetadataMapping -CrawledProperty $crawledproperty -ManagedProperty $managedproperty -SearchApplication $searchapp -SiteCollection $siteID
            }
            $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name }
            #Get-FASTSearchMetadataCrawledPropertyMapping -ManagedProperty $managedproperty
        }

        $searchapp = Get-SPEnterpriseSearchServiceApplication $searchAppName
        $site = Get-SPSite $siteUrl
        $siteId = $site.id

        Write-Host "Start creating Managed properties"
        $i = 1
        FOREACH ($property in $ManagedPropertiesList)
        {
            $propertyName = $property.managedPropertyName
            $crawledName = $property.crawledName
            $managedPropertyType = $property.managedPropertyType
            $variantType = $property.variantType
            Write-Host $managedPropertyType
            Write-Host "Processing managed property $propertyName $($i)..."
            $i++
            CreateRefiner $crawledName $propertyName $variantType $managedPropertyType $siteId
            Write-Host "Managed property created " $propertyName
        }

    Key Concepts

    Crawled Properties: Crawled properties are discovered by the search index service component when crawling content.

    Managed Properties: Properties that are part of the Search user experience, meaning they are available for search results, advanced search, and so on, are managed properties.

    Mapping Crawled Properties to Managed Properties: To make a crawled property available for the Search experience (available for Search queries and displayed in Advanced Search and search results) you must map it to a managed property.

    References

    Administer search in SharePoint 2013 Preview
    Managing Metadata
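
    The CSV itself is not shown in the post. A hypothetical ManagedProperties.csv, using the column names the script reads (managedPropertyName, crawledName, managedPropertyType, variantType) and the type codes listed above, might look like the following; the property names and the ows_ crawled-property prefix are illustrative only:

        managedPropertyName,crawledName,managedPropertyType,variantType
        ProjectCode,ows_ProjectCode,1,31
        ProjectStartDate,ows_ProjectStartDate,4,64
        ProjectBudget,ows_ProjectBudget,3,5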

    Read the article

  • What are the search engine effects of registering the same domain on multiple top-level domains (i.e. .com, .ie, .nl etc.)?

    - by user1020317
    I'm looking to register a few more domains for my company. I have my-company.com at the moment, but now require my-company.com.au and my-company.nl and some others. I'm running through my options and wondering which is best:

    - Duplicate all the content of the .com site and make a replica at the other domains
    - Buy the other domains but 301 redirect them back to the .com domain
    - Create a full new website with different content for the new domains, thus avoiding any text duplication

    We currently sell all over the world and would like to raise our search rankings in various countries. Can this be done by buying the domain in each country, and if so, how will the above methods affect our search rankings? Any other suggestions are welcome!

    Read the article

  • Conflicting ip routes with local table on attaching a virtual network interface

    - by user1071840
    I have an EC2 instance with these ip rules:

        $ sudo ip rule show
        0:      from all lookup local
        32766:  from all lookup main
        32767:  from all lookup default

    I can attach an elastic network interface to it with a private IP. Say the IP of my machine is 10.1.3.12 and the IP of the interface is 10.1.1.190. As soon as I attach the interface to my machine, a new entry is added to the routing policy and to the local routing table:

        $ sudo ip rule show
        0:      from all lookup local
        32765:  from 10.1.1.190 lookup 10003
        32766:  from all lookup main
        32767:  from all lookup default

        $ sudo ip route show table local
        broadcast 10.1.1.0 dev eth3 proto kernel scope link src 10.1.1.190
        local 10.1.1.190 dev eth3 proto kernel scope host src 10.1.1.190
        broadcast 10.1.1.255 dev eth3 proto kernel scope link src 10.1.1.190
        broadcast 10.1.3.0 dev eth0 proto kernel scope link src 10.1.3.12
        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12
        broadcast 10.1.3.255 dev eth0 proto kernel scope link src 10.1.3.12
        broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
        local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
        local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
        broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1

    I can send traffic to this ENI directly from a host that can have the same IP as the host the ENI is attached to, and this is where the problem starts. I ran tcpdump on the port in question and saw multiple SYNs going to the ENI with src 10.1.3.12 and destination 10.1.1.190, but didn't see even a single ACK. In my understanding, if ACKs were being sent from the ENI they would have destination 10.1.3.12, i.e. the same as the local machine's IP, and such packets will be routed as local packets because they match this local routing policy entry:

        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12

    I'd like all packets originating from 10.1.1.190 (my ENI) to go back out on the same interface, i.e. eth3 in this case. The contents of the new table 10003 are:

        $ sudo ip route show table 10003
        default via 10.1.1.1 dev eth3

    I think I can do one of the following, but I'm not sure if these are the right approaches:

    - I don't know if it's possible, but decrease the priority of the local table so the packets match table 10003 instead.
    - Use iptables to mangle these packets, and update the local table route to include the mark information.

    Read the article

  • Setup Guide for updating local system and the repository with the incremental Solaris 11.1 SRU

    - by Gurubalan
    This guide covers the steps to implement the following setup:

    I. Updating the local system from Solaris 11.1 to Solaris 11.1 SRU 16.5
    II. Setting up the local system as an IPS repository server (HTTP interface)
    III. Updating the local repository with the incremental Solaris 11.1 SRU 16.5

    I. Updating the local system from Solaris 11.1 to Solaris 11.1 SRU 16.5

    We assume that the local system is currently installed with Solaris 11.1 GA and that the system doesn't have internet connectivity.

    What I have:

    1. Two parts of the full repo ISO downloaded from http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html. Both files are concatenated into a single file using the following command:

        $ cat sol-11_1-repo-full.iso-a sol-11_1-repo-full.iso-b > sol-11_1-repo-full.iso

    I suggest verifying the downloaded file against its md5 checksum value [http://download.oracle.com/otn/solaris/11_1/md5sum.txt] using the following command; its output should match the original checksum value for that file:

        digest -a md5 <file-name>

    2. The incremental repo sol-11_1_16_5_0-incr-repo.iso downloaded from MOS [Patch 18269379: ORACLE SOLARIS 11.1.16.5.0 REPO ISO IMAGE (SPARC/X86 (64-BIT)]. You can get the checksum value of the incremental repo ISO by ticking the "show digest details" check box when you download the file.

    3. The local system IP is 192.168.10.10 and port 81 is reserved for the repo server.

    Please note that this repo file (either full or incremental) is common to both SPARC and X86 (64-bit).

    Steps to update the local system:

    1. Mount the Solaris 11.1 full repo ISO on /mnt:

        $ mount -F hsfs /soft/sol-11_1-repo-full.iso /mnt

    2. Set the pkg publisher to the full repo source:

        $ pkg set-publisher -g file:///mnt/repo solaris

    3. Perform the update of the packages:

        $ pkg update

    II. Setting up the local system (Oracle Solaris 11.1) as an IPS repository server (HTTP interface)

    Please note that we have already mounted the full repo ISO at /mnt.

    1. Copy /mnt permanently to the disk location /s11.1:

        # zfs create -o atime=off -o mountpoint=/s11.1 rpool/s11.1
        # rsync -aP /mnt/* /s11.1

    2. Unmount /mnt:

        # umount /mnt

    3. To allow clients to access the local repository via HTTP, enable the application/pkg/server Service Management Facility (SMF) service:

        svccfg -s application/pkg/server setprop pkg/inst_root=<data_source>/repo
        eg: $ svccfg -s application/pkg/server setprop pkg/inst_root=/s11.1/repo

    4. Set the port number to 81:

        svccfg -s application/pkg/server setprop pkg/port=<port_number>
        eg: svccfg -s application/pkg/server setprop pkg/port="81"

    5a. Enable the pkg/server service (if the service is disabled):

        $ svcs pkg/server
        STATE          STIME    FMRI
        disabled       19:55:03 svc:/application/pkg/server:default

        $ svcadm enable pkg/server

    5b. Refresh/restart the service if it is already online:

        $ svcadm refresh application/pkg/server
        $ svcadm restart application/pkg/server

    6. Set the pkg publisher on the repo server and the repo clients:

        pkg set-publisher -G '*' -g http://<ip>:<port> solaris
        eg: $ pkg set-publisher -G '*' -g 'http://192.168.10.10:81' solaris

    7. Verify the Solaris 11.1 version from the repository (you will see multiple entries if the repository is set up with incremental SRUs):

        $ pkgrepo list -s http://192.168.10.10:81 | grep entire
        solaris   entire     0.5.11,5.11-0.175.1.0.0.24.2:20120919T190135Z

    III. Updating the local repository with the incremental Solaris 11.1 SRU 16.5

    1. Mount the Solaris 11.1 incremental SRU repo ISO on /mnt:

        $ mount -F hsfs <full_path_to>/sol-11_1_sruN_bldnum_respinnum-incr-repo.iso /mnt
        $ mount -F hsfs /soft/sol-11_1_16_5_0-incr-repo.iso /mnt

    2. Update the local repository:

        $ pkgrecv -s /mnt/repo -d /s11.1/repo '*'

    3. Build a search index:

        $ pkgrepo -s /s11.1/repo refresh
        Initiating repository refresh.

    4. Refresh/restart the service:

        $ svcadm refresh svc:/application/pkg/server
        $ svcadm restart svc:/application/pkg/server

    5. Verify that the repo now contains the incremental SRU as well:

        # pkgrepo list -s http://192.168.10.10:81 | grep entire
        solaris   entire      0.5.11,5.11-0.175.1.16.0.5.0:20140218T165248Z
        solaris   entire      0.5.11,5.11-0.175.1.0.0.24.2:20120919T190135Z

    Read the article

  • SQL Server Full-Text Search: Hung processes with MSSEARCH wait type

    - by CheeseInPosition
    We have a SQL Server 2005 SP2 machine running a large number of databases, all of which contain full-text catalogs. Whenever we try to drop one of these databases or rebuild a full-text index, the drop or rebuild process hangs indefinitely with a MSSEARCH wait type. The process can’t be killed, and a server reboot is required to get things running again. Based on a Microsoft forums post[1], it appears that the problem might be an improperly removed full-text catalog. Can anyone recommend a way to determine which catalog is causing the problem, without having to remove all of them? [1] [http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2681739&SiteID=1] “Yes we did have full text catalogues in the database, but since I had disabled full text search for the database, and disabled msftesql, I didn't suspect them. I got however an article from Microsoft support, showing me how I could test for catalogues not properly removed. So I discovered that there still existed an old catalogue, which I ,after and only after re-enabling full text search, were able to delete, since then my backup has worked”

    Read the article

  • Unable to get simple ruby on rails Search to work :/

    - by edu222
    I am new to RoR, any help would be greatly appreciated :) I have a basic scaffolded CRUD app to add customers, and I am trying to search by the first_name or last_name fields. The error that I am getting is:

        NoMethodError in Clientes#find
        You have a nil object when you didn't expect it!
        You might have expected an instance of Array.
        The error occurred while evaluating nil.each

        Extracted source (around line #9):
        6:   <th>Apellido</th>
        7: </tr>
        8:
        9: <% for cliente in @clientes %>
        10:   <tr>
        11:     <td><%=h cliente.client_name %></td>
        12:     <td><%=h cliente.client_lastname %></td>

        Application Trace
        C:/Rails/clientes/app/views/clientes/find.html.erb:9:in `_run_erb_app47views47clientes47find46html46erb'

    My find action in app/controllers/clientes_controller.rb is:

        # Find
        def find
          @cliente = Cliente.find(:all, :conditions => ["client_name = ? OR client_lastname = ?", params[:search_string], params[:search_string]])
        end

    The form fragment in views/layouts/clientes.html.erb is:

        <span style="text-align: right">
          <% form_tag "/clientes/find" do %>
            <%= text_field_tag :search_string %>
            <%= submit_tag "Search" %>
          <% end %>
        </span>

    The search template I created in views/clientes/find.html.erb:

        <h1>Listing clientes for <%= params[:search_string] %></h1>
        <table>
          <tr>
            <th>Nombre</th>
            <th>Apellido</th>
          </tr>
          <% for cliente in @clientes %>
            <tr>
              <td><%=h cliente.client_name %></td>
              <td><%=h cliente.client_lastname %></td>
              <td><%= link_to 'Mostrar', cliente %></td>
              <td><%= link_to 'Editar', edit_cliente_path(cliente) %></td>
              <td><%= link_to 'Eliminar', cliente, :confirm => 'Estas Seguro de que desear eliminar a este te cliente?', :method => :delete %></td>
            </tr>
          <% end %>
        </table>
        <%= link_to 'Atras', clientes_path %>

    Read the article

  • MySQL full text search with partial words

    - by Rob
    MySQL full-text searching appears to be great and the best way to search in SQL. However, I seem to be stuck on the fact that it won't search partial words. For instance, if I have an article titled "MySQL Tutorial" and search for "MySQL", it won't find it. Having done some searching I found various references to support for this coming in MySQL 4 (I'm using 5.1.40). I've tried using "MySQL" and "%MySQL%", but neither works (one link I found suggested it was stars, but that you could only use them at the end or the beginning, not both). Here's my table structure and my query; if someone could tell me where I'm going wrong, that would be great. I'm assuming partial word matching is built in somehow.

        CREATE TABLE IF NOT EXISTS `articles` (
          `article_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
          `article_name` varchar(64) NOT NULL,
          `article_desc` text NOT NULL,
          `article_link` varchar(128) NOT NULL,
          `article_hits` int(11) NOT NULL,
          `article_user_hits` int(7) unsigned NOT NULL DEFAULT '0',
          `article_guest_hits` int(10) unsigned NOT NULL DEFAULT '0',
          `article_rating` decimal(4,2) NOT NULL DEFAULT '0.00',
          `article_site_id` smallint(5) unsigned NOT NULL DEFAULT '0',
          `article_time_added` int(10) unsigned NOT NULL,
          `article_discussion_id` smallint(5) unsigned NOT NULL DEFAULT '0',
          `article_source_type` varchar(12) NOT NULL,
          `article_source_value` varchar(12) NOT NULL,
          PRIMARY KEY (`article_id`),
          FULLTEXT KEY `article_name` (`article_name`,`article_desc`,`article_link`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=7 ;

        INSERT INTO `articles` VALUES
        (1, 'MySQL Tutorial', 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.', 'http://www.domain.com/', 6, 3, 1, '1.50', 1, 1269702050, 1, '0', '0'),
        (2, 'How To Use MySQL Well', 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.', 'http://www.domain.com/', 1, 2, 0, '3.00', 1, 1269702050, 1, '0', '0'),
        (3, 'Optimizing MySQL', 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.', 'http://www.domain.com/', 0, 1, 0, '3.00', 1, 1269702050, 1, '0', '0'),
        (4, '1001 MySQL Tricks', 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.', 'http://www.domain.com/', 0, 1, 0, '3.00', 1, 1269702050, 1, '0', '0'),
        (5, 'MySQL vs. YourSQL', 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.', 'http://www.domain.com/', 0, 2, 0, '3.00', 1, 1269702050, 1, '0', '0'),
        (6, 'MySQL Security', 'Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.', 'http://www.domain.com/', 0, 2, 0, '3.00', 1, 1269702050, 1, '0', '0');

        SELECT count(a.article_id)
        FROM articles a
        WHERE MATCH (a.article_name, a.article_desc, a.article_link) AGAINST ('mysql')
        GROUP BY a.article_id
        ORDER BY a.article_time_added ASC

    The table alias is used because the query comes from a function that sometimes adds additional joins. As you can see, a search for MySQL should return a count of 6, but unfortunately it doesn't.
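
    Not from the question itself, but for context: the usual way to get prefix matches out of MyISAM full-text search is boolean mode with a trailing wildcard, e.g. AGAINST ('mysql*' IN BOOLEAN MODE); natural-language mode also tends to hide terms that occur in more than half of the rows, which is the situation in this six-row sample. A hedged sketch of issuing that query from Python, assuming the mysql-connector-python driver and made-up connection details:

        import mysql.connector  # pip install mysql-connector-python (assumed driver)

        conn = mysql.connector.connect(
            host="localhost", user="user", password="secret", database="test"  # placeholders
        )
        cur = conn.cursor()

        # Boolean mode accepts a trailing * wildcard, so 'mysql*' matches "MySQL",
        # "MySQLdb", etc., and it is not subject to the 50% natural-language threshold.
        cur.execute(
            """
            SELECT COUNT(*)
            FROM articles a
            WHERE MATCH (a.article_name, a.article_desc, a.article_link)
                  AGAINST (%s IN BOOLEAN MODE)
            """,
            ("mysql*",),
        )
        print(cur.fetchone()[0])
        conn.close()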

    Read the article

  • Best way to retrieve a certain field of all documents returned by a Lucene search

    - by Philipp
    Hi, I was wondering what the best way is to retrieve a certain field from all documents returned by a Lucene Searcher. Background: each document has a date field ("written on") and I would like to show a timeline of all found documents, so I need to extract the date (day) field from every document the search finds. I currently retrieve each document using Searcher.doc(int, FieldSelector), with the selector set to load only that one field. I have indexed 250k documents; the search itself takes no time and returns about 10k document ids, but retrieving those documents takes 20+ seconds. What can I do to speed things up while still getting all the values I need? Thx in advance, Philipp

    Read the article

  • search results filter effects - flex

    - by Adam
    I've created a search with a couple of comboboxes that allow users to filter their search results. The results currently use a TileList and an itemRenderer for display, and now I'd like to add an animation effect when the user filters the results. I know you can use itemsChangeEffect to create an animation when the user drags and moves result items, so I'd like to know if there's a way to trigger a similar effect when the results are filtered by the comboboxes. Thanks.

    Read the article
