Search Results

Search found 28685 results on 1148 pages for 'query performance'.

Page 32/1148 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • ATG Live Webcast March 21 Reminder: Network, WAN, and PC Performance Tuning (Performance Series Part 3 of 3)

    - by BillSawyer
    A quick reminder about tomorrow's webcast: Andy Tremayne, Senior Architect, Applications Performance, and co-author of Oracle Applications Performance Tuning Handbook from Oracle Press, and Uday Moogala, Senior Principal Engineer, Applications Performance, will discuss network performance for E-Business Suite. Andy and Uday will cover tuning the client and tuning the network. They will share real-life examples of network performance, and show you tools and techniques that you can use to estimate or simulate performance on your own network.
    The agenda for the Performance Tuning - Part 3 of 3 webcast includes the following topics:
      - Tuning the Client
      - Tuning the Network
    Date: Thursday, March 21, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Andy Tremayne, Senior Architect, Applications Performance; Uday Moogala, Senior Principal Engineer, Applications Performance
    Webcast Registration Link (preregistration is optional but encouraged)
    To hear the audio feed:
      Domestic Participant Dial-In Number: 877-697-8128
      International Participant Dial-In Number: 706-634-9568
      Additional International Dial-In Numbers Link
      Dial-In Passcode: 99341
    To see the presentation, the Direct Access Web Conference details are:
      Website URL: https://ouweb.webex.com
      Meeting Number: 591264961
    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And you can find our archive of our past webcasts and training here.

    Read the article

  • How to select a server that supports Windows scheduled file IO

    - by Kristof Verbiest
    Background: I am developing an application that needs to read data from disk with a fairly consistent throughput. It is important that this throughput is not influenced by other activity on the disk (e.g. by other processes). For this purpose, I was hoping to use the 'Scheduled File I/O' feature in Windows (through the GetFileBandwidthReservation and SetFileBandwidthReservation functions). However, this StackOverflow question has taught me that this feature is only available if the device driver supports it. Currently I have no computer at my disposal that seems to support this feature (I have an HP ProLiant server and a Dell Precision workstation). Question: If I were to order a new server, how can I know beforehand whether this feature will be supported by the device driver? How 'upscale' does the server have to be? Has anybody used this feature with success and cares to share their experience?

    Read the article

  • Which will give more free RAM to Linux?

    - by Linda Thomas
    Trying to avoid some issues, so I've been trying to learn the vm.* settings in kernel tuning, but I'm still a little confused even after googling. The lower dirty_background_ratio is, the sooner the flushes happen? And the lower dirty_ratio is, the less dirty RAM is kept, right? Which of these will leave more free RAM:
      vm.dirty_ratio = 20
      vm.dirty_background_ratio = 1
    or
      vm.dirty_ratio = 60
      vm.dirty_background_ratio = 20
    or
      vm.dirty_ratio = 20
      vm.dirty_background_ratio = 10
    or
      vm.dirty_ratio = 20
      vm.dirty_background_ratio = 5

    Read the article

  • Best tool for monitoring backups, etc. and trending statistics from that data

    - by Randy Syring
    I have done some research on Nagios, OpenNMS, and Zenoss but am not confident that I have found what I am looking for. The main driving force for me right now is being able to monitor backups. This includes MySQL, MSSQL, and eventually some file system backups. We have a tool that wraps the backup process for these different systems and collects statistics, items like:
      - number of databases backed up
      - size of the db backup file
      - size of the db backup file compressed
      - time to make the backup
      - time to zip the file
    I want to be able to: A) have notifications if the jobs are not run according to schedule; B) set thresholds on the statistics which would trigger notifications; C) trend and graph the statistics. I am planning on sending this information to the monitoring application through an HTTP POST, or the monitoring application could pull it from a log file as well. However, we will have other processes with other "arbitrary" (from the monitoring system's perspective) statistics that we will want to monitor and trend, so flexibility is very important. The tool or tools should also be able to do general monitoring and trending of network interfaces, server load, etc. Once we get the backup monitoring in place, we will want to include those items as well. Thanks.

    Read the article

  • How to check for bottlenecks in Windows Server 2008 R2

    - by Phil Koury
    I recently switched out a 10-year-old server for a brand new server in a small office and upgraded from Windows Server 2000 to Windows Server 2008 R2. After the switch was complete and some configurations were changed around, we are running into what appear to be some bottlenecks in network speed. Accessing programs on the server is slower (resulting in long loading times, slower report generation, etc.) than it was on the old server hardware. I am wondering what options or tools I have, if there are any at all, to find out exactly where these hang-ups might be coming from.

    Read the article

  • How to know if my nginx is in good health?

    - by Howard
    I am running nginx on EC2 (m1.small) for SSL termination. I am using 2 workers on Ubuntu, with the latest nginx (stable); network throughput is around 2 Mbps and the system load average is around 2 to 3. I am wondering whether this system is in good health for now, e.g.:
      - what is the queue length (I know nginx can handle a lot of concurrent requests, but I mean how many requests have to wait before being served)
      - what is the average queue time for a given request to be served
    I want to know because if my nginx is CPU bound (e.g. due to SSL), I will need to upgrade to a faster instance. My current nginx status:
      Active connections: 4076
      server accepts handled requests
      90664283 90664283 104117012
      Reading: 525 Writing: 81 Waiting: 3470

    Read the article

  • Performance implications of Synchronous Sockets vs Asynchronous Sockets

    - by Akash Kava
    We are trying to build an SMTP server to receive mail notifications from various clients over the internet. Because each communication is long-lived and everything needs to be logged, doing this the asynchronous way is a little challenging, and with the Socket class's asynchronous methods we are not sure how flow of control and error handling happen. Previously we wrote a lot of server/client apps, but we always used synchronous sockets, the reason being that the sessions are long and each session also has a lot of local data to manage, message parsing, etc. Does anyone have any experience with the real performance differences between these two methods? Async calls use the ThreadPool, which we have seen die for no apparent reason many times, and we then fail to restart it. For a request-response protocol like HTTP, async sockets make sense, but SMTP/IMAP etc. are longer-lived protocols with interleaved messages plus a server-side state machine, so async methods are really complicated to program. However, if anyone can share the performance of the two socket approaches, it would be helpful.

    Read the article

  • SQL Server 2005 high memory usage and performance problems

    - by emzero
    Hi there guys. I have this ASP.NET/SQL Server 2005 website running on a production server (Win2003, quad core, 4 GB). The site normally runs smoothly, but after 2-3 weeks I notice slow performance on the site (specifically in one particular page). I also notice that the SQL Server process is using about 2 GB of RAM. So I restart the service, the site runs fast again, and the process is back to 300-400 MB. I'm looking for an explanation of why this is happening. What is SQL Server storing in RAM that takes up so much space and degrades performance? What can I do to avoid this? I'm trying to avoid restarting SQL Server every time this happens. Thank you!
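    SQL Server caches data pages in RAM (the buffer pool) and keeps growing toward its configured maximum, so the 2 GB figure by itself is expected behaviour; the usual alternative to restarting the service is to cap it. A minimal sketch, assuming roughly half of the 4 GB box should stay free for the OS and ASP.NET (the value is illustrative, adjust it to your workload):
        -- cap the buffer pool at 2 GB (illustrative value)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 2048;
        RECONFIGURE;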

    Read the article

  • MySQL Query Select using sub-select takes too long

    - by True Soft
    I noticed something strange while executing a select from 2 tables: SELECT * FROM table_1 WHERE id IN ( SELECT id_element FROM table_2 WHERE column_2=3103); This query took approximately 242 seconds. But when I executed the subquery SELECT id_element FROM table_2 WHERE column_2=3103 on its own, it took less than 0.002s (and returned 2 rows). Then, when I did SELECT * FROM table_1 WHERE id IN (/* prev. result */) it was the same: 0.002s. I was wondering why MySQL executes the first query like that, taking much more time than the last 2 queries run separately. Is it an optimal solution for selecting something based on the results of a sub-query? Other details: table_1 has approx. 9,000 rows, and table_2 has 90,000 rows. After I added an index on column_2 of table_2, the first query took 0.15s.
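    Older MySQL versions (5.0/5.1) tend to rewrite an uncorrelated IN (SELECT ...) as a dependent subquery that is re-evaluated for every row of table_1, which would be consistent with the 242-second timing. A hedged sketch of the same filter expressed as a join, using the table and column names from the question (add DISTINCT if table_2 can hold duplicate id_element values for the same column_2):
        SELECT t1.*
        FROM table_1 AS t1
        INNER JOIN table_2 AS t2
                ON t2.id_element = t1.id
        WHERE t2.column_2 = 3103;

        -- a composite index lets the join branch be resolved from the index alone
        ALTER TABLE table_2 ADD INDEX idx_col2_element (column_2, id_element);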

    Read the article

  • Slow query with unexpected index scan

    - by zerkms
    Hello. I have this query: SELECT * FROM sample INNER JOIN test ON sample.sample_number = test.sample_number INNER JOIN result ON test.test_number = result.test_number WHERE sampled_date BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00' The biggest table here is RESULT, which contains 11.1M records; the other 2 tables are about 1M each. This query runs slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan (over its PRIMARY KEY, result.result_number, which actually doesn't take part in the query) over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records; it executes in 300 ms, and the plan shows an index seek (over the result.test_number index). If I replace * in the SELECT clause with result.test_number (covered by the index), then everything becomes fast in the first case too. This points to disk IO issues, but doesn't explain the change of plan. So, any ideas? UPDATE: sampled_date is in table sample and covered by an index. The other fields from this query, test.sample_number and result.test_number, are covered by indexes too. UPDATE 2: Obviously SQL Server for some reason doesn't want to use the index. I did a small experiment: I removed the INNER JOIN with result, selected all the test.test_number values, and then did SELECT * FROM RESULT WHERE TEST_NUMBER IN (...). This, of course, works fast. But I cannot see what the difference is and why the query optimizer chooses such an inappropriate way to select data in the first case.
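    The flip between seek and scan at a wider date range is typical optimizer costing: once the estimated row count crosses a threshold, SQL Server decides that the key lookups needed to satisfy SELECT * cost more than scanning the clustered index. Two hedged things to compare plans with, using a hypothetical index name since the real one isn't shown in the question:
        -- refresh the statistics the row-count estimates are based on
        UPDATE STATISTICS result;
        UPDATE STATISTICS sample;

        -- or force the narrow index and accept the key lookups, just to compare plans
        SELECT *
        FROM sample
        INNER JOIN test
                ON sample.sample_number = test.sample_number
        INNER JOIN result WITH (INDEX (ix_result_test_number))  -- hypothetical index name
                ON test.test_number = result.test_number
        WHERE sampled_date BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00';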

    Read the article

  • What's wrong with my MySQL query?

    - by Anytime
    This is a query I am running against MySQL using PHP. This is the query line: <?php $query = "SELECT * FROM node WHERE type = 'student_report' AND uid = '{$uid}' LIMIT 1 ORDER BY created DESC"; ?> I get the following error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ORDER BY created DESC' at line 1
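    The error is about clause order: in MySQL, ORDER BY must come before LIMIT. A sketch of the corrected statement, keeping the interpolated {$uid} placeholder from the question:
        SELECT * FROM node
        WHERE type = 'student_report' AND uid = '{$uid}'
        ORDER BY created DESC
        LIMIT 1;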

    Read the article

  • WCF performance improvements

    - by Burt
    I am developing a WPF application that talks to a server via WCF services over the internet. After profiling the application I noticed a lot of time is being taken up by creating the appropriate WCF client proxy and making the call to the server. The code on the server is optimised and doesn't take any time to run, yet I am still seeing a 1.5 second delay from when a service is invoked to when it returns to the client. A few points to give a bit of background:
      - I am using ASP.NET membership for security
      - I will eventually hook into the same server-side code through a website
      - I would eventually like to have offline support in the application
    I really need to nail the performance early though, as the app taking a couple of seconds to come back is too long for what I am trying to do. Can anyone suggest performance tips that will help me, please?

    Read the article

  • Complex SQL which runs extremely slowly when the query has an ORDER BY clause

    - by basit.
    I have the following complex query which I need to use. When I run it, it takes 30 to 40 seconds. But if I remove the ORDER BY clause, it takes 0.0317 sec to return the result, which is really fast compared to 30 or 40 seconds. select DISTINCT media.* , username from album as album , album_permission as permission , user as user, media as media where ((media.album_id = album.album_id and album.private = 'yes' and album.album_id = permission.album_id and (permission.email = '' or permission.user_id = '') ) or (media.album_id = album.album_id and album.private = 'no' ) or media.album_id = '0' ) and media.user_id = user.user_id and media.media_type = 'video' order by media.id DESC LIMIT 0,20 The id in the ORDER BY is the primary key, which is indexed too, so I don't know what the problem is. I also have the album and album_permission tables just to check whether media is public or private, and if private, whether the user has permission. I was thinking maybe that is causing the issue. What if I did this in a sub-query, would that work better? Also, can someone help me write that sub-query, if that is the solution? If you can't help write it, at least tell me. I'm really going crazy with this issue. POSSIBLE SOLUTION: Yes, I think a sub-query would be the best solution for this, because the following query runs in 0.0022 seconds. But I'm not sure whether the validation of an album would be accurate or not, please check. select media.*, username from media as media , user as user where media.user_id = user.user_id and media.media_type = 'video' and media.id in (select media2.id from media as media2 , album as album , album_permission as permission where ((media2.album_id = album.album_id and album.private = 'yes' and album.album_id = permission.album_id and (permission.email = '' or permission.user_id = '')) or (media.album_id = album.album_id and album.private = 'no' ) or media.album_id = '0' ) and media.album_id = media2.album_id ) order by media.id DESC LIMIT 0,20
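    The DISTINCT over a comma-join of four tables is what makes the ORDER BY expensive: the whole result has to be materialized and deduplicated before it can be sorted and limited. One hedged alternative (untested, using the table and column names from the question) keeps media as the driving table and pushes the album/permission check into EXISTS, so the optimizer can walk the media.id index in descending order and stop after 20 rows:
        SELECT m.*, u.username
        FROM media AS m
        JOIN user AS u ON u.user_id = m.user_id
        WHERE m.media_type = 'video'
          AND (
                m.album_id = '0'
             OR EXISTS (SELECT 1 FROM album a
                        WHERE a.album_id = m.album_id AND a.private = 'no')
             OR EXISTS (SELECT 1 FROM album a
                        JOIN album_permission p ON p.album_id = a.album_id
                        WHERE a.album_id = m.album_id AND a.private = 'yes'
                          AND (p.email = '' OR p.user_id = ''))
          )
        ORDER BY m.id DESC
        LIMIT 0, 20;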

    Read the article

  • Specifying a variable name in QUERY WHERE clause in JDBC

    - by Noona
    Could someone please give me a link on how to create a query in JDBC that uses a variable in the WHERE clause, or write an example? To be more specific, my code looks something like this:
      private String getLastModified(String url) {
          String lastModified = null;
          ResultSet resultSet;
          String query = "select LastModified from CacheTable where " + " URL.equals(url)";
          try {
              resultSet = sqlStatement.executeQuery(query);
          }
    Now I need the syntax that enables me to return a ResultSet object where URL in CacheTable equals the url from the method's argument. Thanks.
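    SQL has no notion of URL.equals(url); the value has to be supplied as a parameter. A minimal sketch of just the SQL side, written for use with a parameterized statement (the ? placeholder is filled in by the caller, e.g. via a JDBC PreparedStatement):
        select LastModified
        from CacheTable
        where URL = ?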

    Read the article

  • SQL select descendants of a row

    - by Joey Adams
    Suppose a tree structure is implemented in SQL like this:
      CREATE TABLE nodes (
          id INTEGER PRIMARY KEY,
          parent INTEGER -- references nodes(id)
      );
    Although cycles can be created in this representation, let's assume we never let that happen. The table will only store a collection of roots (records where parent is null) and their descendants. The goal is to, given an id of a node in the table, find all nodes that are descendants of it. A is a descendant of B if either A's parent is B or A's parent is a descendant of B. Note the recursive definition. Here is some sample data:
      INSERT INTO nodes VALUES (1, NULL);
      INSERT INTO nodes VALUES (2, 1);
      INSERT INTO nodes VALUES (3, 2);
      INSERT INTO nodes VALUES (4, 3);
      INSERT INTO nodes VALUES (5, 3);
      INSERT INTO nodes VALUES (6, 2);
    which represents:
      1
      `-- 2
          |-- 3
          |   |-- 4
          |   `-- 5
          `-- 6
    We can select the (immediate) children of 1 by doing this:
      SELECT a.* FROM nodes AS a WHERE parent=1;
    We can select the children and grandchildren of 1 by doing this:
      SELECT a.* FROM nodes AS a WHERE parent=1
      UNION ALL
      SELECT b.* FROM nodes AS a, nodes AS b WHERE a.parent=1 AND b.parent=a.id;
    We can select the children, grandchildren, and great grandchildren of 1 by doing this:
      SELECT a.* FROM nodes AS a WHERE parent=1
      UNION ALL
      SELECT b.* FROM nodes AS a, nodes AS b WHERE a.parent=1 AND b.parent=a.id
      UNION ALL
      SELECT c.* FROM nodes AS a, nodes AS b, nodes AS c WHERE a.parent=1 AND b.parent=a.id AND c.parent=b.id;
    How can a query be constructed that gets all descendants of node 1 rather than those at a finite depth? It seems like I would need to create a recursive query or something. I'd like to know if such a query would be possible using SQLite. However, if this type of query requires features not available in SQLite, I'm curious to know if it can be done in other SQL databases.
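    For reference, recursive common table expressions handle the arbitrary-depth case directly: SQLite has supported WITH RECURSIVE since version 3.8.3, and PostgreSQL, MySQL 8+, and SQL Server (which omits the RECURSIVE keyword) have equivalents. A sketch against the sample schema above:
        WITH RECURSIVE descendants(id, parent) AS (
            -- anchor: the immediate children of node 1
            SELECT id, parent FROM nodes WHERE parent = 1
            UNION ALL
            -- recursive step: children of anything already found
            SELECT n.id, n.parent
            FROM nodes AS n
            JOIN descendants AS d ON n.parent = d.id
        )
        SELECT * FROM descendants;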

    Read the article

  • How to call a named query

    - by sandeep
    I wrote a named query in the entity class Voter: NamedQuery(name = "Voter.findvoter", query = "SELECT count(*) FROM Voter v WHERE v.voterID = :voterID" and where v.password= : password). I want to call this named query, and I also need to set voterID and password. Can you help me? Thank you.
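    As an aside, the query string itself looks mangled: JPQL wants a single WHERE with AND, no space after the colon in a named parameter, and COUNT(v) rather than count(*). A hedged guess at what was intended:
        SELECT COUNT(v) FROM Voter v
        WHERE v.voterID = :voterID AND v.password = :password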

    Read the article

  • How to maximize http.sys file upload performance

    - by anelson
    I'm building a tool that transfers very large streaming data sets (possibly on the order of terabytes in a single stream; routinely in the tens of gigabytes) from one server to another. The client portion of the tool will read blocks from the source disk and send them over the network. The server side will read these blocks off the network and write them to a file on the server disk. Right now I'm trying to decide which transport to use. The options are raw TCP and HTTP. I really, REALLY want to be able to use HTTP. The HttpListener (or WCF if I want to go that route) makes it easy to plug in to the HTTP Server API (http.sys), and I can get things like authentication and SSL for free. The problem right now is performance. I wrote a simple test harness that sends 128K blocks of NULL bytes using the BeginWrite/EndWrite async I/O idiom, with async BeginRead/EndRead on the server side. I've modified this test harness so I can do this with either HTTP PUT operations via HttpWebRequest/HttpListener, or plain old socket writes using TcpClient/TcpListener. To rule out issues with network cards or network pathways, both the client and server are on one machine and communicate over localhost. On my 12-core Windows 2008 R2 test server, the TCP version of this test harness can push bytes at 450MB/s, with minimal CPU usage. On the same box, the HTTP version of the test harness runs between 130MB/s and 200MB/s depending upon how I tweak it. In both cases CPU usage is low, and the vast majority of the CPU usage that does occur is kernel time, so I'm pretty sure my usage of C# and the .NET runtime is not the bottleneck. The box has two 6-core Xeon X5650 processors, 24GB of single-ranked DDR3 RAM, and is used exclusively by me for my own performance testing. I already know about HTTP client tweaks like ServicePointManager.MaxServicePointIdleTime, ServicePointManager.DefaultConnectionLimit, ServicePointManager.Expect100Continue, and HttpWebRequest.AllowWriteStreamBuffering. Does anyone have any ideas for how I can get HTTP.sys performance beyond 200MB/s? Has anyone seen it perform this well in any environment?

    Read the article

  • MySQL index cardinality - performance vs storage efficiency

    - by Sean
    Say you have a MySQL 5.0 MyISAM table with 100 million rows, with one index (other than the primary key) on two integer columns. From my admittedly poor understanding of B-tree structure, I believe that a lower cardinality means the storage efficiency of the index is better, because there are fewer parent nodes; whereas a higher cardinality means less efficient storage, but faster read performance, because the engine has to navigate through fewer branches to get to whatever data it is looking for to narrow down the rows for the query. (Note - by "low" vs "high", I don't mean e.g. 1 million vs 99 million for a 100 million row table. I mean more like 90 million vs 95 million.) Is my understanding correct? Related question - how does cardinality affect write performance?
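    For concreteness, cardinality in MySQL is just the estimated number of distinct values in the index prefix, and you can inspect (and refresh) the figure the optimizer uses; a small sketch with a hypothetical table name:
        -- the Cardinality column is an estimate of distinct values per indexed column
        SHOW INDEX FROM my_table;

        -- on MyISAM, refresh the key distribution statistics behind that estimate
        ANALYZE TABLE my_table;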

    Read the article

  • WPF: Tips for better performance

    - by viky
    I am working on a WPF application in which I am working with a TreeView. Each node represents a different datatype; these datatypes have properties defined and use a data template to show their properties. My application reads from XML and creates the tree accordingly. My problem is that when I load it, it is too slow. I want to know the tricks that will help me improve the performance of my (or any) WPF application. Edit: Please provide me some tips for better performance in WPF! I am using the WPF profiler but it is not much help to me.

    Read the article

  • C# LINQ to XML dynamic query

    - by David Archer
    Right, bit of a strange question; I have been doing some LINQ to XML work recently (see my other recent posts here and here). Basically, I want to be able to create a query that checks whether a textbox is empty before its value is included in the query, like so: XDocument db = XDocument.Load(xmlPath); var query = (from vals in db.Descendants("Customer") where (if(textbox1.Text != "") {vals.Element("CustomerID") == Convert.ToInt32(textbox1.Text) } || if(textbox2.Text != "") {vals.Element("Name") == textbox2.Text}) select vals).ToList();

    Read the article

  • PostgreSQL - Why are some queries on large datasets so incredibly slow

    - by Brad Mathews
    Hello, I have two types of queries I run often on two large datasets. They run much slower than I would expect them to. The first type is a sequential scan updating all records:
      UPDATE rcra_sites SET street = regexp_replace(street,'/','','i')
    rcra_sites has 700,000 records. It takes 22 minutes from pgAdmin! I wrote a VB.NET function that loops through each record and sends an update query for each record (yes, 700,000 update queries!) and it runs in less than half the time. Hmmm.... The second type is a simple update with a relation and then a sequential scan:
      UPDATE rcra_sites AS sites SET violations='No'
      FROM narcra_monitoring AS v
      WHERE sites.agencyid=v.agencyid AND v.found_violation_flag='N'
    narcra_monitoring has 1,700,000 records. This takes 8 minutes. The query planner refuses to use my indexes. The query runs much faster if I start with set enable_seqscan = false;. I would prefer if the query planner would do its job. I have appropriate indexes, and I have vacuumed and analyzed. I optimized my shared_buffers and effective_cache_size as best I know how, to use more memory since I have 4GB. My hardware is pretty darn good. I am running v8.4 on Windows 7. Is PostgreSQL just this slow? Or am I still missing something? Thanks! Brad
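    One likely contributor to the first case is that PostgreSQL's MVCC writes a whole new row version for every row an UPDATE touches, even when the new value equals the old one, so updating all 700,000 rows rewrites the entire table. A hedged variant that only touches rows actually containing a slash, and uses the cheaper replace() since no regular expression is needed, plus a way to compare the planner's estimates with reality on the second query:
        UPDATE rcra_sites
        SET street = replace(street, '/', '')
        WHERE street LIKE '%/%';

        -- EXPLAIN ANALYZE executes the statement, so wrap it to discard the changes
        BEGIN;
        EXPLAIN ANALYZE
        UPDATE rcra_sites AS sites SET violations = 'No'
        FROM narcra_monitoring AS v
        WHERE sites.agencyid = v.agencyid AND v.found_violation_flag = 'N';
        ROLLBACK;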

    Read the article

  • Help me alter this query to get the desired results - New*

    - by sandeepan
    Please load this data dump first:
      CREATE TABLE IF NOT EXISTS `all_tag_relations` ( `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT, `id_tag` int(10) unsigned NOT NULL DEFAULT '0', `id_tutor` int(10) DEFAULT NULL, `id_wc` int(10) unsigned DEFAULT NULL, PRIMARY KEY (`id_tag_rel`), KEY `All_Tag_Relations_FKIndex1` (`id_tag`), KEY `id_wc` (`id_wc`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=19 ;
      INSERT INTO `all_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`, `id_wc`) VALUES (1, 1, 1, NULL), (2, 2, 1, NULL), (3, 6, 2, NULL), (4, 7, 2, NULL), (8, 3, 1, 1), (9, 4, 1, 1), (10, 5, 2, 2), (11, 4, 2, 2), (15, 8, 1, 3), (16, 9, 1, 3), (17, 10, 1, 4), (18, 4, 1, 4), (19, 1, 2, 5), (20, 4, 2, 5);
      CREATE TABLE IF NOT EXISTS `tags` ( `id_tag` int(10) unsigned NOT NULL AUTO_INCREMENT, `tag` varchar(255) DEFAULT NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`), KEY `tag_4` (`tag`), FULLTEXT KEY `tag_5` (`tag`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=11 ;
      INSERT INTO `tags` (`id_tag`, `tag`) VALUES (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'), (5, 'new'), (6, 'Bob'), (7, 'Cratchit'), (8, 'more'), (9, 'fresh'), (10, 'second');
      CREATE TABLE IF NOT EXISTS `webclasses` ( `id_wc` int(10) NOT NULL AUTO_INCREMENT, `id_author` int(10) NOT NULL, `name` varchar(50) DEFAULT NULL, PRIMARY KEY (`id_wc`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;
      INSERT INTO `webclasses` (`id_wc`, `id_author`, `name`) VALUES (1, 1, 'first class'), (2, 2, 'new class'), (3, 1, 'more fresh'), (4, 1, 'second class'), (5, 2, 'sandeepan class');
    About the system:
      - The system consists of tutors and classes.
      - The data in the table all_tag_relations stores tag relations for each tutor registered and each class created by a tutor. The tag relations are used for searching classes.
      - The current data dump corresponds to tutor "Sandeepan Nath", who has created the classes named "first class", "more fresh", "second class", and tutor "Bob Cratchit", who has created the classes "new class" and "Sandeepan class".
    I am trying to write a search query that performs AND logic on the search keywords and returns every class for which the search terms are present in the class name or its tutor's name. To make it easy, here is the list of search terms and desired result classes (check the id_wc in the results):
      - "first class" -> 1
      - "Sandeepan Nath class" -> 1
      - "Sandeepan Nath" -> 1, 3
      - "Bob Cratchit" -> 2
      - "Sandeepan Nath bob" -> none
      - "Sandeepan Class" -> 1, 4, 5
    I have so far reached this query:
      -- Two keywords search
      SET @tag1 = 4, @tag2 = 1; -- Setting some user variables to see where the ids go.
      SELECT wc.id_wc, sum( DISTINCT ( wtagrels.id_tag = @tag1 ) ) AS key_1_class_matches, sum( DISTINCT ( wtagrels.id_tag = @tag2 ) ) AS key_2_class_matches, sum( DISTINCT ( ttagrels.id_tag = @tag1 ) ) AS key_1_tutor_matches, sum( DISTINCT ( ttagrels.id_tag = @tag2 ) ) AS key_2_tutor_matches, sum( DISTINCT ( ttagrels.id_tag = wtagrels.id_tag ) ) AS key_class_tutor_matches FROM WebClasses as wc join all_tag_relations AS wtagrels on wc.id_wc = wtagrels.id_wc join all_tag_relations as ttagrels on (wc.id_author = ttagrels.id_tutor) WHERE ( wtagrels.id_tag = @tag1 OR wtagrels.id_tag = @tag2 OR ttagrels.id_tag = @tag1 OR ttagrels.id_tag = @tag2 ) GROUP BY wtagrels.id_wc LIMIT 0 , 20
    For a search with 1 or 3 terms, remove/add the variable part in this query.
    Tabulating my observations of the values of key_1_class_matches, key_2_class_matches (say, class keys) and key_1_tutor_matches, key_2_tutor_matches (say, tutor keys) for various cases:
      - "first class" (expected: 1): for class 1, all class keys + all tutor keys = 1
      - "Sandeepan Nath class" (expected: 1): for class 1, one class key + all tutor keys = 1
      - "Sandeepan Nath" (expected: 1, 3): both tutor keys = 1 for these classes
      - "Bob Cratchit" (expected: 2): both tutor keys = 1
      - "Sandeepan Nath bob" (expected: none): no complete tutor matches for any class
    I found a pattern: in each case, the class(es) which should appear in the result have the highest number of matches (all class keys and tutor keys). E.g. searching "first class", only for class 1 does the total of key matches = 4 (1+1+1+1). Searching "Sandeepan Nath", classes 1, 3, 4 (all classes by Sandeepan Nath) have all the tutor keys matching. But there is no such pattern in the search for "Sandeepan Class" - classes 1, 4, 5 should match. Now, how do I put a condition into the query, based on that pattern, so that only those classes are returned? Do I need to use full-text search here, because it gives a scoring/rank value indicating the strength of the match? Any sample query would help. Please note - I have already found a solution for showing classes when any/all of the search terms match the class name: http://stackoverflow.com/questions/3030022/mysql-help-me-alter-this-search-query-to-get-desired-results But if all the search terms are in the tutor name, it does not work. So, I am modifying the query and experimenting.
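    A hedged sketch of the usual way to express AND semantics over tags: collect, per class, the set of tags belonging either to the class itself or to its tutor, count how many of the searched tags appear in that set, and keep only classes where the count equals the number of keywords. Untested, against the dump above (tags 4 = "class" and 1 = "Sandeepan"):
        SET @tag1 = 4, @tag2 = 1;
        SELECT t.id_wc
        FROM (
            -- tags attached to the class itself
            SELECT wc.id_wc, rel.id_tag
            FROM webclasses AS wc
            JOIN all_tag_relations AS rel ON rel.id_wc = wc.id_wc
            UNION
            -- tags attached to the class's tutor (tutor-name rows have id_wc NULL)
            SELECT wc.id_wc, rel.id_tag
            FROM webclasses AS wc
            JOIN all_tag_relations AS rel ON rel.id_tutor = wc.id_author
                                         AND rel.id_wc IS NULL
        ) AS t
        WHERE t.id_tag IN (@tag1, @tag2)
        GROUP BY t.id_wc
        HAVING COUNT(DISTINCT t.id_tag) = 2;  -- = number of search keywords
    Against the sample data this returns classes 1, 4 and 5 for "Sandeepan Class" and class 1 alone for "first class"; note it also returns class 5 for "Bob Cratchit", since that class's tutor matches both keywords, which differs slightly from the expected-results table above.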

    Read the article
