Search Results

Search found 1714 results on 69 pages for 'optimizer hints'.


  • Need advice in setting up server. fastCGI, suExec, speed, security, etc.

    - by lewisqic
    I am running my own dedicated server with centOS 5 and WHM/cPanel. I would like to configure my server to meet my needs but I need a little help. It will only be my own websites being run on this server. I'm still a little green when it comes to server administration so please forgive my ignorance. What I Would Like to Have: I need some public directories to be writable (for user image uploads and things like that) but I don't want those directories to have 777 permissions. I need individual accounts to have the ability to set custom php settings for their own account without affecting other accounts, whether through a php.ini file or through .htaccess or any other method. I would like things to run as fast as possible, whether that means using a php optimizer or cacher, such as eaccelerator or xcache or anything else. I need things to be as secure as possible. Here Are My Questions What should I use for my php handler? DSO? CGI? fastCGI? suPHP? Other? Should I be using suEXEC? What are the benefits or downfalls of this? What php optimizer/cacher is best to use? Are there any other security tips I need to know about all of this? I'd appreciate any advice or direction that can be offered. Thanks!

    Read the article

  • Operator of the week - Assert

    - by Fabiano Amorim
    Well my friends, I was wondering how to help you understand execution plans in a practical way, so I think I'll talk about the Showplan Operators. Showplan Operators are used by the Query Optimizer (QO) to build the query plan in order to perform a specified operation. A query plan will consist of many physical operators. The Query Optimizer uses a simple language that represents each physical operation by an operator, and each operator is represented in the graphical execution plan by an icon. I'll try to talk about one operator every week but, to avoid having to keep writing about these operators for years, I'll mention only those that are more common. The first is the Assert. The Assert is used to verify a certain condition: it validates a constraint on every row to ensure that the condition was met. If, for example, our DDL includes a check constraint which specifies only two valid values for a column, the Assert will, for every row, validate the value passed to the column to ensure that the input is consistent with the check constraint. Assert and Check Constraints: Let's see where SQL Server uses that information in practice. Take the following T-SQL:

    IF OBJECT_ID('Tab1') IS NOT NULL
       DROP TABLE Tab1
    GO
    CREATE TABLE Tab1(ID Integer, Gender CHAR(1))
    GO
    ALTER TABLE TAB1 ADD CONSTRAINT ck_Gender_M_F CHECK(Gender IN('M','F'))
    GO
    INSERT INTO Tab1(ID, Gender) VALUES(1,'X')
    GO

    For the command above, SQL Server generated the following execution plan: As we can see, the execution plan uses the Assert operator to check that the inserted value doesn't violate the Check Constraint. In this specific case, the Assert applies the rule 'if the value is different from "F" and different from "M" then return 0, otherwise return NULL'. The Assert operator is programmed to raise an error if the returned value is not NULL; in other words, if the returned value is not an "M" or an "F". Assert checking Foreign Keys: Now let's take a look at an example where the Assert is used to validate a foreign key constraint. Suppose we have this T-SQL:

    ALTER TABLE Tab1 ADD ID_Genders INT
    GO
    IF OBJECT_ID('Tab2') IS NOT NULL
       DROP TABLE Tab2
    GO
    CREATE TABLE Tab2(ID Integer PRIMARY KEY, Gender CHAR(1))
    GO
    INSERT INTO Tab2(ID, Gender) VALUES(1, 'F')
    INSERT INTO Tab2(ID, Gender) VALUES(2, 'M')
    INSERT INTO Tab2(ID, Gender) VALUES(3, 'N')
    GO
    ALTER TABLE Tab1 ADD CONSTRAINT fk_Tab2 FOREIGN KEY (ID_Genders) REFERENCES Tab2(ID)
    GO
    INSERT INTO Tab1(ID, ID_Genders, Gender) VALUES(1, 4, 'X')

    Let's look at the text execution plan to see what these Assert operators are doing. To see the text execution plan, just execute SET SHOWPLAN_TEXT ON before running the INSERT command.

    |--Assert(WHERE:(CASE WHEN NOT [Pass1008] AND [Expr1007] IS NULL THEN (0) ELSE NULL END))
         |--Nested Loops(Left Semi Join, PASSTHRU:([Tab1].[ID_Genders] IS NULL), OUTER REFERENCES:([Tab1].[ID_Genders]), DEFINE:([Expr1007] = [PROBE VALUE]))
              |--Assert(WHERE:(CASE WHEN [Tab1].[Gender]<>'F' AND [Tab1].[Gender]<>'M' THEN (0) ELSE NULL END))
              |    |--Clustered Index Insert(OBJECT:([Tab1].[PK]), SET:([Tab1].[ID] = RaiseIfNullInsert([@1]),[Tab1].[ID_Genders] = [@2],[Tab1].[Gender] = [Expr1003]), DEFINE:([Expr1003]=CONVERT_IMPLICIT(char(1),[@3],0)))
              |--Clustered Index Seek(OBJECT:([Tab2].[PK]), SEEK:([Tab2].[ID]=[Tab1].[ID_Genders]) ORDERED FORWARD)

    Here we can see the Assert operator twice, the first (reading the text plan from the bottom up and the graphical plan from right to left) validating the Check Constraint.
The same concept shown above is used: if the returned value is "0" the query keeps running, but if NULL is returned an exception is raised. The second Assert is validating the result of the join between Tab1 and Tab2. It is interesting to see the "[Expr1007] IS NULL". To understand that, you need to know what this Expr1007 is; look at the Probe Value (green text) in the text plan and you will see that it is the result of the join. If the value passed to the INSERT for the column ID_Genders exists in the table Tab2, then the probe will return the join value; otherwise it will return NULL. So the Assert is checking the result of the lookup against Tab2; if the value passed to the INSERT is not found, the Assert will raise an exception. If the value passed to the column ID_Genders is NULL, SQL Server can't raise an exception; in that case the Assert returns "0" and the query keeps running. If you run the INSERT above, SQL Server will raise an exception because of the "X" value, but if you change the "X" to "F" and run it again, it will raise an exception because of the value "4". If you change the value "4" to NULL, 1, 2 or 3, the insert will be executed without any error. Assert checking a SubQuery: The Assert operator is also used to check a subquery. As we know, a scalar subquery can't validly return more than one value. Sometimes, however, a mistake happens and a subquery attempts to return more than one value. Here the Assert comes into play by validating the condition that a scalar subquery returns just one value. Take the following query:

    INSERT INTO Tab1(ID_TipoSexo, Sexo) VALUES((SELECT ID_TipoSexo FROM Tab1), 'F')

    |--Assert(WHERE:(CASE WHEN NOT [Pass1016] AND [Expr1015] IS NULL THEN (0) ELSE NULL END))
         |--Nested Loops(Left Semi Join, PASSTHRU:([tempdb].[dbo].[Tab1].[ID_TipoSexo] IS NULL), OUTER REFERENCES:([tempdb].[dbo].[Tab1].[ID_TipoSexo]), DEFINE:([Expr1015] = [PROBE VALUE]))
              |--Assert(WHERE:([Expr1017]))
              |    |--Compute Scalar(DEFINE:([Expr1017]=CASE WHEN [tempdb].[dbo].[Tab1].[Sexo]<>'F' AND [tempdb].[dbo].[Tab1].[Sexo]<>'M' THEN (0) ELSE NULL END))
              |         |--Clustered Index Insert(OBJECT:([tempdb].[dbo].[Tab1].[PK__Tab1__3214EC277097A3C8]), SET:([tempdb].[dbo].[Tab1].[ID_TipoSexo] = [Expr1008],[tempdb].[dbo].[Tab1].[Sexo] = [Expr1009],[tempdb].[dbo].[Tab1].[ID] = [Expr1003]))
              |              |--Top(TOP EXPRESSION:((1)))
              |                   |--Compute Scalar(DEFINE:([Expr1008]=[Expr1014], [Expr1009]='F'))
              |                        |--Nested Loops(Left Outer Join)
              |                             |--Compute Scalar(DEFINE:([Expr1003]=getidentity((1856985942),(2),NULL)))
              |                             |    |--Constant Scan
              |                             |--Assert(WHERE:(CASE WHEN [Expr1013]>(1) THEN (0) ELSE NULL END))
              |                                  |--Stream Aggregate(DEFINE:([Expr1013]=Count(*), [Expr1014]=ANY([tempdb].[dbo].[Tab1].[ID_TipoSexo])))
              |                                       |--Clustered Index Scan(OBJECT:([tempdb].[dbo].[Tab1].[PK__Tab1__3214EC277097A3C8]))
              |--Clustered Index Seek(OBJECT:([tempdb].[dbo].[Tab2].[PK__Tab2__3214EC27755C58E5]), SEEK:([tempdb].[dbo].[Tab2].[ID]=[tempdb].[dbo].[Tab1].[ID_TipoSexo]) ORDERED FORWARD)

You can see from this text showplan that SQL Server has generated a Stream Aggregate to count how many rows the
subquery will return. This value is then passed to the Assert, which then does its job by checking its validity. It is very interesting to see that the Query Optimizer is smart enough to avoid using Assert operators when they are not necessary. For instance:

    INSERT INTO Tab1(ID_TipoSexo, Sexo) VALUES((SELECT ID_TipoSexo FROM Tab1 WHERE ID = 1), 'F')
    INSERT INTO Tab1(ID_TipoSexo, Sexo) VALUES((SELECT TOP 1 ID_TipoSexo FROM Tab1), 'F')

For both of these INSERTs, the Query Optimizer is smart enough to know that only one row will ever be returned, so there is no need to use the Assert. Well, that's all folks; I'll see you next week with more "Operators". Cheers, Fabiano

    Read the article

  • Top 5 Developer Enabling Nuggets in MySQL 5.6

    - by Rob Young
    MySQL 5.6 is truly a better MySQL and reflects Oracle's commitment to the evolution of the most popular and widely used open source database on the planet. The feature-complete 5.6 release candidate was announced at MySQL Connect in late September, and the production-ready, generally available ("GA") product should be available in early 2013. While the messaging around 5.6 has focused mainly on mass appeal and advanced topics like performance/scale, high availability, and self-healing replication clusters, MySQL 5.6 also provides many developer-friendly nuggets designed to enable those who are building the next generation of web-based and embedded applications and services. Boiling the 5.6 feature set down to a smaller set of simple, easy-to-use goodies designed with developer agility in mind, these things deserve a quick look: Subquery Optimizations: Using semi-joins and late materialization, the MySQL 5.6 Optimizer delivers greatly improved subquery performance. Specifically, the optimizer is now more efficient in handling subqueries in the FROM clause; materialization of subqueries in the FROM clause is now postponed until their contents are needed during execution. Additionally, the optimizer may add an index to derived tables during execution to speed up row retrieval. Internal tests run using DBT-3 benchmark Query #13, shown below, demonstrate an order of magnitude improvement in execution times (from days to seconds) over previous versions.

    select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
    from customer, orders, lineitem
    where o_orderkey in (
            select l_orderkey
            from lineitem
            group by l_orderkey
            having sum(l_quantity) > 313)
      and c_custkey = o_custkey
      and o_orderkey = l_orderkey
    group by c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice
    order by o_totalprice desc, o_orderdate
    LIMIT 100;

    What does this mean for developers? For starters, simplified subqueries can now be coded instead of complex joins for cross-table lookups:

    SELECT title FROM film WHERE film_id IN (SELECT film_id FROM film_actor GROUP BY film_id HAVING count(*) > 12);

    And even more importantly, subqueries embedded in packaged applications no longer need to be re-written into joins. This is good news for both ISVs and their customers who have access to the underlying queries and who have spent development cycles writing, testing and maintaining their own versions of re-written queries across updated versions of a packaged app. The details are in the MySQL 5.6 docs. Online DDL Operations: Today's web-based applications are designed to rapidly evolve and adapt to meet business and revenue-generation requirements. As a result, development SLAs are now most often measured in minutes rather than days or weeks. For example, when an application must quickly support new product lines or new products within existing product lines, the backend database schema must adapt in kind, most commonly while the application remains available for normal business operations. MySQL 5.6 supports this level of online schema flexibility and agility by providing the following new ALTER TABLE online DDL syntax additions: CREATE INDEX; DROP INDEX; change the AUTO_INCREMENT value for a column; ADD/DROP FOREIGN KEY; rename a COLUMN; change the ROW FORMAT or KEY_BLOCK_SIZE for a table; change a COLUMN between NULL and NOT NULL; add, drop and reorder COLUMNs. Again, the details are in the MySQL 5.6 docs.
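    As a quick sketch of what the online DDL additions above look like in practice (the table and index names here are made up for illustration), MySQL 5.6 lets you request the in-place algorithm explicitly so the statement errors out rather than silently falling back to a blocking table copy:

    ALTER TABLE orders
        ADD INDEX idx_customer_created (customer_id, created_at),
        ALGORITHM=INPLACE, LOCK=NONE;

    If the requested operation cannot be performed in place with concurrent writes, the statement returns an error instead, which makes the behaviour easy to verify before rolling a change out.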
Key-value access to InnoDB via the Memcached API: Many of the next generation of web, cloud, social and mobile applications require fast operations against simple key/value pairs. At the same time, they must retain the ability to run complex queries against the same data, as well as ensure the data is protected with ACID guarantees. With the new NoSQL API for InnoDB, developers have all the benefits of a transactional RDBMS, coupled with the performance capabilities of a key/value store. MySQL 5.6 provides simple key-value interaction with InnoDB data via the familiar Memcached API. Implemented via a new Memcached daemon plug-in to mysqld, the Memcached protocol is mapped directly to the native InnoDB API and enables developers to use existing Memcached clients to bypass the expense of query parsing and go directly to InnoDB data for lookups and transactionally compliant updates. The API makes it possible to re-use standard Memcached libraries and clients, while extending Memcached functionality by integrating a persistent, crash-safe, transactional database back end. The implementation is shown here: So does this option provide a performance benefit over SQL? Internal performance benchmarks using a customized Java application and test harness show some very promising results, with a 9X improvement in overall throughput for SET/INSERT operations. You can follow the InnoDB team blog for the methodology, implementation and internal test cases that generated these results here. How to get started with the Memcached API to InnoDB is here. New Instrumentation in Performance Schema: The MySQL Performance Schema was introduced in MySQL 5.5 and is designed to provide point-in-time metrics for key performance indicators. MySQL 5.6 improves the Performance Schema in answer to the most common DBA and developer problems. New instrumentation includes: Statements/Stages (What are my most resource-intensive queries? Where do they spend time?); Table/Index I/O and Table Locks (Which application tables/indexes cause the most load or contention?); Users/Hosts/Accounts (Which application users, hosts and accounts are consuming the most resources?); Network I/O (What is the network load like? How long do sessions idle?); Summaries (aggregated statistics grouped by statement, thread, user, host, account or object). The MySQL 5.6 Performance Schema is now enabled by default in the my.cnf file with optimized and auto-tuned settings that minimize overhead (< 5%, but mileage will vary), so using the Performance Schema on a production server to monitor the most common application use cases is less of an issue. In addition, new atomic levels of instrumentation enable the capture of granular levels of resource consumption by users, hosts, accounts, applications, etc. for billing and chargeback purposes in cloud computing environments. The MySQL docs are an excellent resource for all that is available and all that can be done with the 5.6 Performance Schema. Better Condition Handling - GET DIAGNOSTICS: MySQL 5.6 enables developers to easily check for error conditions and code for exceptions by introducing the new MySQL Diagnostics Area and the corresponding GET DIAGNOSTICS interface command. The Diagnostics Area can be populated via multiple options and provides two kinds of information: Statement, which provides the affected row count and the number of conditions that occurred; and Condition, which provides error codes and messages for all conditions that were returned by a previous operation. The addressable items for each are: The new GET DIAGNOSTICS command provides a standard interface into the Diagnostics Area and can be used via the CLI or from within application code to easily retrieve and handle the results of the most recent statement execution. An example of how it is used might be:

    mysql> DROP TABLE test.no_such_table;
    ERROR 1051 (42S02): Unknown table 'test.no_such_table'
    mysql> GET DIAGNOSTICS CONDITION 1
        -> @p1 = RETURNED_SQLSTATE, @p2 = MESSAGE_TEXT;
    mysql> SELECT @p1, @p2;
    +-------+------------------------------------+
    | @p1   | @p2                                |
    +-------+------------------------------------+
    | 42S02 | Unknown table 'test.no_such_table' |
    +-------+------------------------------------+

Options for leveraging the MySQL Diagnostics Area and GET DIAGNOSTICS are detailed in the MySQL Docs.
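    The same command can also be used from inside a stored program's condition handler rather than from the CLI. A minimal sketch (the procedure name is invented for illustration, and the failing DROP TABLE simply reuses the example above):

    DELIMITER //
    CREATE PROCEDURE demo_get_diagnostics()
    BEGIN
      DECLARE v_state CHAR(5);
      DECLARE v_msg   TEXT;
      DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
        BEGIN
          -- capture the first condition raised by the failing statement
          GET DIAGNOSTICS CONDITION 1
            v_state = RETURNED_SQLSTATE, v_msg = MESSAGE_TEXT;
          SELECT v_state AS sqlstate, v_msg AS message;
        END;
      DROP TABLE test.no_such_table;  -- deliberately fails
    END//
    DELIMITER ;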
While the above is a summary of some of the key developer-enabling 5.6 features, it is by no means exhaustive. You can dig deeper into what MySQL 5.6 has to offer by reading this developer zone article or checking out "What's New in MySQL 5.6" in the MySQL docs. BONUS ALERT! If you are developing on Windows or are considering MySQL as an alternative to SQL Server for your next project, application or shipping product, you should check out the MySQL Installer for Windows. The installer includes the MySQL 5.6 RC database, all drivers, Visual Studio and Excel plugins, tray monitor and development tools, all in a single download and GUI installer. So what are your next steps? Register for the Dec. 13 "MySQL 5.6: Building the Next Generation of Web-Based Applications and Services" live web event (hurry, seats are limited); download the MySQL 5.6 Release Candidate (look under the Development Releases tab); provide feedback <link to http://bugs.mysql.com/>; join the developer discussion on the MySQL Forums; and explore all MySQL Products and Developer Tools. As always, thanks for your continued support of MySQL!

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have been continuing to consolidate and develop the InnoDB Full-Text Search feature. There is one recent improvement that is worth blogging about: an effort with the MySQL Optimizer team that simplifies the query plans of some common queries and dramatically shortens the query time. I will describe the issue, our solution, and the end result, with some performance numbers to demonstrate our continuing efforts to enhance the Full-Text Search capability. The Issue: As we discussed in previous blogs, InnoDB implements the Full-Text index as inverted auxiliary tables. Once the query is parsed, it is reinterpreted into several queries against the related auxiliary tables, and the results are then merged and consolidated to come up with the final result. So at the end of the query, we will have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing had initially been designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples. Case 1: Query result ordered by rank with only the top N results:

    mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1;

    In this query, the user tries to retrieve the single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change, MySQL would retrieve the rankings for almost every row in the table, sort them, and then come up with the top-ranked result. This whole retrieve-and-sort is quite unnecessary, given that InnoDB already has the answer. In a real-life case, a user could have millions of rows, so in the old scheme it would retrieve and sort millions of rows' rankings, even if our FTS had already found that there were only 3 matching rows. Apparently, the million-ranking retrieval is done in vain. In the above case, it should just ask for the 3 matching rows' rankings; all other rows' rankings are 0. If it wants the top ranking, it can just get the first record from our already sorted result. Case 2: SELECT COUNT(*) on matching records:

    mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    In this case, the InnoDB search can find the matching rows quickly and will have all of them on hand. However, before our change, in the old scheme, every row in the table was requested by MySQL one by one, just to check whether its ranking was larger than 0, and a count was produced at the end. In fact, there is no need for MySQL to fetch all the rows; InnoDB already has all the matching records. The only thing needed is to call an InnoDB API to retrieve the count.
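    For experimenting with queries like the two cases above, a minimal InnoDB full-text table is enough. A sketch (the table mirrors the articles table used in the queries above; the sample rows are made up and any data will do):

    CREATE TABLE articles (
        id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        title VARCHAR(200),
        body  TEXT,
        FULLTEXT INDEX ft_title_body (title, body)
    ) ENGINE=InnoDB;

    INSERT INTO articles (title, body) VALUES
        ('MySQL Tutorial', 'A short tutorial about the database ...'),
        ('InnoDB Full-Text Search', 'Notes on the InnoDB database engine ...');

    SELECT COUNT(*) FROM articles
    WHERE MATCH (title, body) AGAINST ('database' IN NATURAL LANGUAGE MODE);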
The difference can be huge. The following query output shows how big it can be:

    mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people');
    +----------+
    | count(*) |
    +----------+
    |   666877 |
    +----------+
    1 row in set (16 min 17.37 sec)

So the query took almost 16 minutes. Let's see how long it takes InnoDB to come up with the result. In InnoDB, you can obtain extra diagnostic printout by turning on "innodb_ft_enable_diag_print"; this prints extra query info to the error log:

    keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
    UNION: Searching: 'people' Processing time: 2 secs: row(s) 666877: error: 10
    ft_init() ft_init_ext() keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
    UNION: Searching: 'people' Processing time: 3 secs: row(s) 666877: error: 10

The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. So a large amount of time has been wasted on the unneeded row fetching. The Solution: The solution is obvious: MySQL can skip some of its steps, optimize its plan, and obtain useful information directly from InnoDB. Some of the savings from doing this include: 1) Avoid redundant sorting. Since InnoDB has already sorted the result according to ranking, the MySQL query processing layer does not need to sort to get the top matching results. 2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records. All those not in the result list have a ranking of 0 and do not need to be retrieved, and InnoDB already has a count of the total matching records on hand; there is no need to recount. 3) Covered index scan. InnoDB results always contain the matching records' Document IDs and their rankings. So if only the Document ID and ranking are needed, there is no need to go to the user table to fetch the record itself. 4) Narrow the search result early, to reduce user table access. If the user wants the top N matching records, we do not need to fetch all matching records from the user table. We should be able to first select the top N matching Doc IDs, and then fetch only the corresponding records with these Doc IDs. Performance Results and comparison with MyISAM: The result of this change is very obvious. I include six test results performed by Alexander Rubin just to demonstrate how fast the InnoDB query now becomes when compared with MyISAM Full-Text Search. These tests are based on the English Wikipedia data of 5.4 million rows and an approximately 16G table. The test was performed on a machine with 1 dual-core CPU, an SSD drive, 8G of RAM, and the InnoDB buffer pool set to 8 GB. Table 1: SELECT with LIMIT clause

    mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10;

Time for the query: InnoDB 1.63 sec; MyISAM 3 min 26.31 sec; times faster: 127. You can see that for this particular query (retrieve the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM. Table 2: SELECT COUNT query

    mysql> select count(*) from si where match(si_title, si_text) against('family');
    +----------+
    | count(*) |
    +----------+
    |   293955 |
    +----------+

Time for the query: InnoDB 1.35 sec; MyISAM 28 min 59.59 sec; times faster: 1289. In this particular case, where there are 293k matching results, InnoDB took only 1.35 seconds to get all of them, while it took MyISAM almost half an hour; that is about 1289 times faster!
Table 3: SELECT ID with ORDER BY and LIMIT clause for selected terms

    mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

    Term                                       | InnoDB (time to execute) | MyISAM (time to execute) | Times Faster
    family                                     | 0.5 sec                  | 5.05 sec                 | 10.1
    family film                                | 0.95 sec                 | 25.39 sec                | 26.7
    Pizza restaurant orange county California  | 0.93 sec                 | 32.03 sec                | 34.4
    President united states of America         | 2.5 sec                  | 36.98 sec                | 14.8

Table 4: SELECT title and text with ORDER BY and LIMIT clause for selected terms

    mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

    Term                                       | InnoDB (time to execute) | MyISAM (time to execute) | Times Faster
    family                                     | 0.61 sec                 | 41.65 sec                | 68.3
    family film                                | 1.15 sec                 | 47.17 sec                | 41.0
    Pizza restaurant orange county California  | 1.03 sec                 | 48.2 sec                 | 46.8
    President united states of America         | 2.49 sec                 | 44.61 sec                | 17.9

Table 5: SELECT ID with ORDER BY and LIMIT clause for selected terms

    mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

    Term                                       | InnoDB (time to execute) | MyISAM (time to execute) | Times Faster
    family                                     | 0.5 sec                  | 5.05 sec                 | 10.1
    family film                                | 0.95 sec                 | 25.39 sec                | 26.7
    Pizza restaurant orange county California  | 0.93 sec                 | 32.03 sec                | 34.4
    President united states of America         | 2.5 sec                  | 36.98 sec                | 14.8

Table 6: SELECT COUNT(*)

    mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;

    Term                                       | InnoDB (time to execute) | MyISAM (time to execute) | Times Faster
    family                                     | 0.47 sec                 | 82 sec                   | 174.5
    family film                                | 0.83 sec                 | 131 sec                  | 157.8
    Pizza restaurant orange county California  | 0.74 sec                 | 106 sec                  | 143.2
    President united states of America         | 1.96 sec                 | 220 sec                  | 112.2

Again, Tables 3 to 6 all show InnoDB consistently outperforming MyISAM in these queries by a large margin. It becomes obvious that InnoDB has a great advantage over MyISAM in handling searches over large data sets. Summary: These results demonstrate the great performance we can achieve by making the MySQL optimizer and InnoDB Full-Text Search more tightly coupled. I think there are still many cases where InnoDB's result info has not been fully taken advantage of, which means we still have great room to improve. We will continue to explore this area and get more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (we just go up to 45000 rows since we know the bigger picture now)   Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We're dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn't looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven't used OPTION (RECOMPILE) then you're toast. Otherwise, there isn't much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we've lost the advantage of the table variable's speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can't use constraints, you're going to need that OPTION (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let's try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book 'Dracula' by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book 'Dracula' (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in 'Dracula', i.e. all the words in Dracula that aren't matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms. If both Table Variables use a unique constraint, then the query takes 36 Ms. If neither table contains a Primary Key and both are Table Variables, it took 116383 Ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variable and the other is a temporary table, it took 113 Ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms. If both tables are temporary tables and both have primary keys, it took 46 Ms. Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis.
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble (well, slow queries) unless you use the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin's Skew: In describing the table size, I used the term 'relatively small'. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a table consisted of 100,000 rows in which the key column held a single number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimizer team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows, it adopted exactly the same execution plan as for the temporary table, and then all was well. Undeniably, real data does occasionally cause problems for the performance of joins on Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin's example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range. Conclusions: Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as 'Heaps', unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that 'just runs' without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
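    To see the cardinality behaviour described above on your own server, a stripped-down sketch along these lines is enough (the row count and table shapes are arbitrary; compare the estimated row counts in the two plans):

    DECLARE @a TABLE (n INT NOT NULL PRIMARY KEY);
    DECLARE @b TABLE (n INT NOT NULL PRIMARY KEY);

    INSERT INTO @a (n)
    SELECT TOP (50000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns c1 CROSS JOIN sys.all_columns c2;
    INSERT INTO @b (n) SELECT n FROM @a;

    -- compiled with the usual low row-count guess for table variables
    SELECT COUNT(*) FROM @a AS a JOIN @b AS b ON a.n = b.n;

    -- recompiled at execution time, so the actual row counts are visible to the optimizer
    SELECT COUNT(*) FROM @a AS a JOIN @b AS b ON a.n = b.n OPTION (RECOMPILE);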

    Read the article

  • Gathering statistics for an Oracle WebCenter Content Database

    - by Nicolas Montoya
    Have you ever heard: "My Oracle WebCenter Content instance is running slow. I checked the memory and CPU usage of the application server and it has plenty of resources. What could be going wrong?" An Oracle WebCenter Content instance runs on an application server and relies on a database server on the back end. If your application server tier is running fine, chances are that your database server tier may host the root of the problem. While many things could cause performance problems, on active Enterprise Content Management systems keeping database statistics updated is extremely important. The Oracle Database has a set of built-in optimizer utilities that can help make database queries more efficient. It is strongly recommended to update or re-create the statistics about the physical characteristics of a table and the associated indexes in order to maximize the efficiency of the optimizer. These physical characteristics include: number of records, number of pages, and average record length. The frequency with which you need to update statistics depends on how quickly the data is changing. Typically, statistics should be updated when the number of new items since the last update is greater than ten percent of the number of items when the statistics were last updated. If a large number of documents are being added to or removed from the system, a post step should be added to gather statistics upon completion of this massive data change. In some cases, you may need to collect statistics in the middle of the data processing to expedite its execution. These processes include, but are not limited to: data migration, bootstrapping of a new system, records management disposition processing (typically at the end of the calendar year), etc. A DOCUMENTS table with ten million rows will often generate a very different plan than a table with just a thousand. A quick check of the statistics for the WebCenter Content (WCC) database can be performed via the query below:

    SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
           TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
    FROM DBA_TABLES
    WHERE TABLE_NAME='DOCUMENTS';

    OWNER       TABLE_NAME             NUM_ROWS   BLOCKS   AVG_ROW_LEN   LAST_ANALYZED
    ATEAM_OCS   DOCUMENTS                  4172       46            61   04/06/2012 11:17:51

    This output returns not only the date when the WCC table DOCUMENTS was last analyzed, but also the <DATABASE SCHEMA OWNER> for this table, in the form <PREFIX>_OCS. This database username can later be used to check on other objects owned by the WCC <DATABASE SCHEMA OWNER>, as shown below:

    SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
           TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
    FROM DBA_TABLES
    WHERE OWNER='ATEAM_OCS'
    ORDER BY NUM_ROWS ASC;
    ...
    OWNER       TABLE_NAME             NUM_ROWS   BLOCKS   AVG_ROW_LEN   LAST_ANALYZED
    ATEAM_OCS   REVISIONS                  2051       46           141   04/09/2012 22:00:22
    ATEAM_OCS   DOCUMENTS                  4172       46            61   04/06/2012 11:17:51
    ATEAM_OCS   ARCHIVEHISTORY             4908      244           218   04/06/2012 11:17:49
    ATEAM_OCS   DOCUMENTHISTORY            5865      110            72   04/06/2012 11:17:50
    ATEAM_OCS   SCHEDULEDJOBSHISTORY      10131      244           131   04/06/2012 11:17:54
    ATEAM_OCS   SCTACCESSLOG              10204      496           268   04/06/2012 11:17:54
    ...

    The Oracle Database allows you to collect many different kinds of statistics as an aid to improving performance. The DBMS_STATS package is concerned with optimizer statistics only. The database turns on automatic statistics collection of this kind by default; the DBMS_STATS package is intended only for specialized cases. The following subprograms gather certain classes of optimizer statistics: the GATHER_DATABASE_STATS procedures, the GATHER_DICTIONARY_STATS procedure, the GATHER_FIXED_OBJECTS_STATS procedure, the GATHER_INDEX_STATS procedure, the GATHER_SCHEMA_STATS procedures, the GATHER_SYSTEM_STATS procedure, and the GATHER_TABLE_STATS procedure. The DBMS_STATS.GATHER_SCHEMA_STATS PL/SQL procedure gathers statistics for all objects in a schema:

    DBMS_STATS.GATHER_SCHEMA_STATS (
        ownname          VARCHAR2,
        estimate_percent NUMBER   DEFAULT to_estimate_percent_type(get_param('ESTIMATE_PERCENT')),
        block_sample     BOOLEAN  DEFAULT FALSE,
        method_opt       VARCHAR2 DEFAULT get_param('METHOD_OPT'),
        degree           NUMBER   DEFAULT to_degree_type(get_param('DEGREE')),
        granularity      VARCHAR2 DEFAULT GET_PARAM('GRANULARITY'),
        cascade          BOOLEAN  DEFAULT to_cascade_type(get_param('CASCADE')),
        stattab          VARCHAR2 DEFAULT NULL,
        statid           VARCHAR2 DEFAULT NULL,
        options          VARCHAR2 DEFAULT 'GATHER',
        objlist          OUT      ObjectTab,
        statown          VARCHAR2 DEFAULT NULL,
        no_invalidate    BOOLEAN  DEFAULT to_no_invalidate_type(get_param('NO_INVALIDATE')),
        force            BOOLEAN  DEFAULT FALSE);

    There are several values for the OPTIONS parameter that we need to know about: GATHER reanalyzes the whole schema; GATHER EMPTY only analyzes tables that have no existing statistics; GATHER STALE only reanalyzes tables with more than 10 percent modifications (inserts, updates, deletes); GATHER AUTO reanalyzes objects that currently have no statistics and objects with stale statistics. Using GATHER AUTO is like combining GATHER STALE and GATHER EMPTY. Example:

    exec dbms_stats.gather_schema_stats( -
        ownname          => '<PREFIX>_OCS', -
        options          => 'GATHER AUTO' -
    );
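    If only a single heavily modified table needs refreshing (for example, right after a large load into DOCUMENTS), the table-level call is a lighter-weight alternative; a sketch using the schema owner found earlier:

    exec dbms_stats.gather_table_stats( -
        ownname => 'ATEAM_OCS', -
        tabname => 'DOCUMENTS', -
        cascade => TRUE -
    );

    Setting cascade to TRUE also refreshes the statistics on the table's indexes.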

    Read the article

  • iPhone-like scrolling on Silverlight ListBox

    - by Larsi
    Hi! I need a listbox with iPhone-like functionality for Silverlight. That is, animated scrolling, and click-and-drag to scroll. Scrolling should continue a bit after the mouse-up event, based on the "speed" of the dragging. I've searched and found no control vendors providing this. So the question is how I should build it. I need some hints to get started. There are two parts to this question: Part 1, how to get the animated scrolling of the listbox. Part 2, how to build "draggable" scrolling; I guess I should put a canvas on top, track the mouse events, and simulate some physics. Some hints here would be great. Thanks, Larsi.

    Read the article

  • setting library include paths in c++

    - by Drew
    Hi all, I just installed gd2 using MacPorts (sudo port install gd2), which installed libraries in the following places: /opt/local/include/gd.h /opt/local/lib/libgd.dylib (link) /opt/local/lib/libgd.la /opt/local/lib/libgd.a So when I create my c++ app I add '#include "gd.h"', which throws: main.cpp:4:16: error: gd.h: No such file or directory If I set gd.h as an absolute path (as above) (not a solution, but I was curious), I am thrown: g++ -L/opt/local/include -L/opt/local/lib main.o Heatmap_Map.o Heatmap_Point.o -o heatmap Undefined symbols: "_gdImagePng", referenced from: _main in main.o "_gdImageLine", referenced from: _main in main.o "_gdImageColorAllocate", referenced from: _main in main.o _main in main.o "_gdImageDestroy", referenced from: _main in main.o "_gdImageCreate", referenced from: _main in main.o "_gdImageJpeg", referenced from: _main in main.o ld: symbol(s) not found So, I understand this means that ld cannot find the libraries it needs (hence trying to give it hints with the "-L" values). So after giving g++ the -L hints and the absolute path in the #include, I can get it to work, but I don't think I should have to do this. How can I make g++/ld search in the right places for the libraries? Drew J. Sonne.

    Read the article

  • Beginner's resources/introductions to classification algorithms.

    - by Dirk
    Hi, everybody. I am entirely new to the topic of classification algorithms and need a few good pointers about where to start some "serious reading". I am right now in the process of finding out whether machine learning and automated classification algorithms could be a worthwhile thing to add to some application of mine. I have already scanned through "How to Solve It: Modern Heuristics" by Z. Michalewicz and D. Fogel (in particular, the chapters about linear classifiers using neural networks), and on the practical side I am currently looking through the WEKA toolkit source code. My next (planned) step would be to dive into the realm of Bayesian classification algorithms. Unfortunately, I am lacking a serious theoretical foundation in this area (let alone having used it in any way as of yet), so any hints at where to look next would be appreciated; in particular, a good introduction to the available classification algorithms would be helpful. Being more a craftsman and less a theoretician, the more practical, the better... Hints, anyone?

    Read the article

  • why refresh every time?

    - by cowboy
    I have the form below, which contains a text field and a button. When I start typing, it shows hints from my database, and on clicking the button or hitting the Enter key it displays search results. AJAX is used for both the hints and the search results, and it works well, but the problem is that it works only the first time the page loads, or only after refreshing the page. If I want to do another search it doesn't respond; it just shows the previous search result unless the page is refreshed. <input type="text" id="txt" onkeyup="showhint(this.value)"/> <input type="button" value="search" onclick="searchresult()"/> What kind of problem is this? Any solution? Please help.

    Read the article

  • java.lang.NumberFormatException: unable to parse '' as integer one more time

    - by Quzziy
    I will take two numbers from user, but this number from EditText must be converted to int. I think it should be working, but I still have problem with compilation code in Android Studio. CatLog show error in line with: int wiek = Integer.parseInt(wiekEditText.getText().toString()); Below is my full Android code: public class MyActivity extends ActionBarActivity { int Wynik; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_my); int Tmax, RT; EditText wiekEditText = (EditText) findViewById(R.id.inWiek); EditText tspoczEditText = (EditText) findViewById(R.id.inTspocz); int wiek = Integer.parseInt(wiekEditText.getText().toString()); int tspocz = Integer.parseInt(tspoczEditText.getText().toString()); Tmax = 220 - wiek; RT = Tmax - tspocz; Wynik = 70*RT/100 + tspocz; final EditText tempWiekEdit = wiekEditText; TabHost tabHost = (TabHost) findViewById(R.id.tabHost); //Do TabHost'a z layoutu tabHost.setup(); TabHost.TabSpec tabSpec = tabHost.newTabSpec("Calc"); tabSpec.setContent(R.id.Calc); tabSpec.setIndicator("Calc"); tabHost.addTab(tabSpec); tabSpec = tabHost.newTabSpec("Hints"); tabSpec.setContent(R.id.Hints); tabSpec.setIndicator("Hints"); tabHost.addTab(tabSpec); final Button Btn = (Button) findViewById(R.id.Btn); Btn.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Toast.makeText(getApplicationContext(),"blablabla"+ "Wynik",Toast.LENGTH_SHORT).show(); } }); wiekEditText.addTextChangedListener(new TextWatcher() { @Override public void beforeTextChanged(CharSequence s, int start, int count, int after) { } @Override public void onTextChanged(CharSequence s, int start, int before, int count) { Btn.setEnabled(!(tempWiekEdit.getText().toString().trim().isEmpty())); } @Override public void afterTextChanged(Editable s) { } }); } @Override public boolean onCreateOptionsMenu(Menu menu) { // Inflate the menu; this adds items to the action bar if it is present. getMenuInflater().inflate(R.menu.my, menu); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { // Handle action bar item clicks here. The action bar will // automatically handle clicks on the Home/Up button, so long // as you specify a parent activity in AndroidManifest.xml. int id = item.getItemId(); if (id == R.id.action_settings) { return true; } return super.onOptionsItemSelected(item); } }

    Read the article

  • More SQL Smells

    - by Nick Harrison
    Let's continue exploring some of the SQL Smells from the list Phil has been putting together. Datatype mismatches in predicates that rely on implicit conversion (Plamen Ratchev). This is a great example poking holes in the whole theory of "if it works, it's not broken". Queries like this will generally work and give the correct response. In fact, without careful analysis, you may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example:

    CREATE TABLE [dbo].[Page](
        [PageId] [int] IDENTITY(1,1) NOT NULL,
        [Title] [varchar](75) NOT NULL,
        [Sequence] [int] NOT NULL,
        [ThemeId] [int] NOT NULL,
        [CustomCss] [text] NOT NULL,
        [CustomScript] [text] NOT NULL,
        [PageGroupId] [int] NOT NULL);

    CREATE PROCEDURE PageSelectBySequence
    (
        @sequenceMin smallint,
        @sequenceMax smallint
    )
    AS
    BEGIN
        SELECT [PageId], [Title], [Sequence], [ThemeId], [CustomCss], [CustomScript], [PageGroupId]
        FROM [CMS].[dbo].[Page]
        WHERE Sequence BETWEEN @sequenceMin AND @SequenceMax
    END

    Note that the Sequence column is defined as int while the sequence parameters are defined as smallint. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place. Using correlated subqueries instead of a join (Dave_Levy/Plamen Ratchev). There are two main problems here. The first is a little subjective: since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic: you are taking much of the control away from the optimizer. Written properly, such a query may well outperform a corresponding query written with traditional joins. More likely than not, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundamental problem with any hint. Consider a query like this:

    SELECT Page.Title
         , Page.Sequence
         , Page.ThemeId
         , Page.CustomCss
         , Page.CustomScript
         , PageEffectParam.Name
         , PageEffectParam.Value
         , ( SELECT EffectName
             FROM dbo.Effect
             WHERE EffectId = dbo.PageEffect.EffectId ) AS EffectName
    FROM Page
    INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId
    INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId

    This can and should be written as:

    SELECT Page.Title
         , Page.Sequence
         , Page.ThemeId
         , Page.CustomCss
         , Page.CustomScript
         , PageEffectParam.Name
         , PageEffectParam.Value
         , EffectName
    FROM Page
    INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId
    INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId
    INNER JOIN dbo.Effect ON dbo.Effect.EffectId = dbo.PageEffect.EffectId

    The correlated query may just as easily show up in the WHERE clause. It's not a good idea in the SELECT clause or the WHERE clause. Few or no comments. This one is a bit more complicated and controversial. All comments are not created equal. Some comments are helpful and need to be included. Other comments are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good. Comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out.
If a comment is needed to explain what is going on or how it works, the logic is too complex and needs to be simplified. Comments that explain why are good. Comments that explain why the SQL is needed are good. Comments that explain where the SQL is used are good. Comments that explain how tables are related should not be needed if the SQL is well written; if they are needed, you need to consider reworking the SQL or simplifying your data model. Use of functions in a WHERE clause (Anil Das). Calling a function in the WHERE clause will often negate the indexing strategy. The function will be called for every record considered, and this will often force a full table scan on the tables affected. Calling a function does not guarantee a full table scan, but there is a good chance that it will cause one. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied.
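    A quick illustration of that last point (the Orders table and its columns here are made up for the example): leaving the column bare keeps the predicate sargable, so an index on OrderDate can be used for a seek.

    -- likely to force a scan: the function has to be evaluated for every row
    SELECT OrderId FROM dbo.Orders WHERE YEAR(OrderDate) = 2012;

    -- same result, but the optimizer can seek on an index over OrderDate
    SELECT OrderId FROM dbo.Orders
    WHERE OrderDate >= '20120101' AND OrderDate < '20130101';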

    Read the article

  • wamp cannot load mysqli extension

    - by localhost
    WAMP installed fine, no problems, BUT... When going to phpMyAdmin, I get the error from phpMyAdmin as follows: "Cannot load mysqli extension. Please check your PHP configuration". Also, phpMyAdmin documentation explains this error message as follows: "To connect to a MySQL server, PHP needs a set of MySQL functions called "MySQL extension". This extension may be part of the PHP distribution (compiled-in), otherwise it needs to be loaded dynamically. Its name is probably mysql.so or php_mysql.dll. phpMyAdmin tried to load the extension but failed. Usually, the problem is solved by installing a software package called "PHP-MySQL" or something similar." Finally, the apache_error.log file has the following PHP warnings (see the mySQL warning): PHP Warning: Zend Optimizer does not support this version of PHP - please upgrade to the latest version of Zend Optimizer in Unknown on line 0 PHP Warning: Zend Platform does not support this version of PHP - please upgrade to the latest version of Zend Platform in Unknown on line 0 PHP Warning: Zend Debug Server does not support this version of PHP - please upgrade to the latest version of Zend Debug Server in Unknown on line 0 PHP Warning: gd wrapper does not support this version of PHP - please upgrade to the latest version of gd wrapper in Unknown on line 0 PHP Warning: java wrapper does not support this version of PHP - please upgrade to the latest version of java wrapper in Unknown on line 0 PHP Warning: mysql wrapper does not support this version of PHP - please upgrade to the latest version of mysql wrapper in Unknown on line 0 So, for some reason PHP is not recognizing the mysql extension. Anyone know why? Any solution or workaround?

    Read the article

  • Sybase stored procedure - how do I create an index on a #table?

    - by DVK
    I have a stored procedure which creates and works with a temporary #table. Some of the queries would be tremendously optimized if that temporary #table had an index created on it. However, creating an index within the stored procedure fails: create procedure test1 as SELECT f1, f2, f3 INTO #table1 FROM main_table WHERE 1 = 2 -- insert rows into #table1 create index my_idx on #table1 (f1) SELECT f1, f2, f3 FROM #table1 (index my_idx) WHERE f1 = 11 -- "QUERY X" When I call the above, the query plan for "QUERY X" shows a table scan. If I simply run the code above outside the stored procedure, the messages show the following warning: Index 'my_idx' specified as optimizer hint in the FROM clause of table '#table1' does not exist. Optimizer will choose another index instead. This can be resolved when running ad hoc (outside the stored procedure) by splitting the code above into two batches by adding "go" after the index creation: create index my_idx on #table1 (f1) go Now the "QUERY X" query plan shows the use of index "my_idx". QUESTION: How do I mimic running the "create index" in a separate batch when it's inside the stored procedure? I can't insert a "go" there like I do with the ad-hoc copy above. P.S. If it matters, this is on Sybase 12.
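    One workaround that is often suggested for this situation is to defer compilation of the statements that need the index, for example by running them as dynamic SQL so they are only compiled after the index exists. The sketch below reuses the question's object names; it is untested and behaviour may vary by ASE version.

        create procedure test1
        as
            SELECT f1, f2, f3 INTO #table1 FROM main_table WHERE 1 = 2

            -- insert rows into #table1

            create index my_idx on #table1 (f1)

            -- The dynamic SQL batch is compiled at execution time, after my_idx exists,
            -- so the optimizer hint can be resolved.
            exec ('SELECT f1, f2, f3 FROM #table1 (index my_idx) WHERE f1 = 11')

    An alternative along the same lines is to move the queries that use the index into a separate stored procedure called from this one, so that they are compiled in their own batch.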

    Read the article

  • Beginner Question: For extract a large subset of a table from MySQL, how does Indexing, order of tab

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs. tblA has 5 columns: colA, colB, colC, mydata, A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC. tblB has 3 columns: colA, colB, B_id. It has about 10^4 records. I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely: SELECT a.colA, a.colB, a.colC, a.mydata FROM tblA AS a INNER JOIN tblB AS b ON a.colA=b.colA AND a.colB=b.colB ; It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, Ubuntu), and I just want to check my understanding of the following optimization steps. ** Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries. Now my questions: 1) What indexes should I create to optimize this query? I think I just need a multiple-column index on (colA, colB) for both tables; I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find) mentioned that adding new indexes is slower when there are existing indexes, so that might be a reason to use the single multiple-column index. 2) Is INNER JOIN correct? I just want results where a match is found. 3) Is it faster if I join (tblA to tblB) or the other way around (tblB to tblA)? This previous answer says that the optimizer should take care of that. 4) Does the order of the part after ON matter? This previous answer says that the optimizer also takes care of the execution order.
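    For what it's worth, the composite indexes described in question 1 would look something like this (a sketch using the question's table and column names; the index names are made up):

        -- One multiple-column index per table, covering the join columns.
        ALTER TABLE tblA ADD INDEX idx_tblA_colA_colB (colA, colB);
        ALTER TABLE tblB ADD INDEX idx_tblB_colA_colB (colA, colB);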

    Read the article

  • Limiting DOPs – Who rules over whom?

    - by jean-pierre.dijcks
    I've gotten a couple of questions from Dan Morgan and figured I would start to answer them in this way. Dan is running a big system with Database Resource Manager and is trying to make sure the system doesn't go crazy (remember, end users are never, ever crazy!) on very high DOPs. Q: How do I control statements with very high DOPs driven from user hints in queries? A: The best way to do this is to work with DBRM and impose limits on consumer groups. The Max DOP setting you can set in DBRM allows you to override the hint. Now let's go into some more detail here. Assume my object (and for simplicity we assume there is a single object - and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query) has PARALLEL 64 as its setting. Assume that the query that selects something cool from that table lives in a consumer group with a max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime. Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM and DBRM overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints. Q: What happens if I set parallel_max_servers higher than processes (e.g. the max number of processes allowed to run on my machine)? A: Processes rules. It is important to understand that processes are fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter you should get a warning in the alert log stating it cannot take effect. As a follow up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number. Parallel_max_processes will prevent this. And since parallel_max_servers should be lower than max processes you can never go over either...
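    For readers who want to see what the DBRM side of this looks like, here is a rough sketch. The plan and consumer group names are hypothetical, and the creation of the plan and group (and the mapping of users to the group) is assumed to happen elsewhere.

        -- Cap the DOP for one consumer group at 32 (hypothetical names).
        BEGIN
          DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
          DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
            plan                     => 'DAYTIME_PLAN',
            group_or_subplan         => 'REPORTING_GROUP',
            comment                  => 'Cap parallel degree for reporting users',
            parallel_degree_limit_p1 => 32);
          DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
        END;
        /

        -- Table decoration, which the post recommends over statement-level hints:
        ALTER TABLE sales PARALLEL 64;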

    Read the article

  • How to launch LOV and Date dialogs using the keyboard

    - by frank.nimphius
    Using the ADF Faces JavaScript API, developers can listen for user keyboard input in input components to filter or respond to specific characters or key combinations. The JavaScript shown below can be used with an af:clientListener tag on af:inputListOfValues or af:inputDate. At runtime, the JavaScript code determines the component type it is executed on and either opens the LOV dialog or the input date popup.   <af:resource type="javascript">     /**     * function to launch dialog if cursor is in LOV or     * input date field     * @param evt argument to capture the AdfUIInputEvent object     */   function launchPopUpUsingF8(evt) {      var component = evt.getSource();      if (evt.getKeyCode() == AdfKeyStroke.F8_KEY) {      //check for input LOV component        if (component.getTypeName() == 'AdfRichInputListOfValues') {            AdfLaunchPopupEvent.queue(component, true);            //event is handled on the client. Server does not need            //to be notified            evt.cancel();          }         //check for input Date component               else if (component.getTypeName() == 'AdfRichInputDate') {           //the inputDate af:popup component ID always is ::pop           var popupClientId = component.getAbsoluteLocator() + '::pop';           var popup = component.findComponent(popupClientId);           var hints = {align : AdfRichPopup.ALIGN_END_AFTER,                        alignId : component.getAbsoluteLocator()};           popup.show(hints);           //event is handled on the client. Server does not need           //to be notified           evt.cancel();        }              } } </af:resource> The af:clientListener that calls the JavaScript is added as shown below. <af:inputDate label="Label 1" id="id1">    <af:clientListener method="launchPopUpUsingF8" type="keyDown"/> </af:inputDate> As you may have noticed, the call to open the popup is different for af:inputListOfValues and af:inputDate. For the list of values component, an ADF Faces AdfLaunchPopupEvent is queued with the LOV component passed as an argument. Launching the input date popup is a bit more complicated and requires you to look up the implicit popup dialog and open it manually. Because the popup is opened manually using the show() method on the af:popup component, the alignment of the dialog also needs to be handled manually. For this, the show() call specifies alignment hints; the ALIGN_END_AFTER hint aligns the dialog at the end of, and below, the date component. The alignId hint specifies the component the dialog is positioned relative to, which of course should be the input date field. 
The ADF Faces JavaScript API and how to use it are further explained in the Using JavaScript in ADF Faces Rich Client Applications whitepaper, available from the Oracle Technology Network (OTN): http://www.oracle.com/technetwork/developer-tools/jdev/1-2011-javascript-302460.pdf An ADF Insider recording about JavaScript in ADF Faces can be watched here: http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/adf-insider-javascript/adf-insider-javascript.html

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to Basics of a Query Hint

    - by pinaldave
    This blog post is inspired by SQL Architecture Basics Joes 2 Pros: Core Architecture concepts – SQL Exam Prep Series 70-433 – Volume 3. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject - SQL SERVER – Introduction to Basics of a Query Hint – A Primer. In that article we discussed the basic terminology of query hints. The article further covers the following important concepts of query hints: expecting a seek and getting a scan, creating an index for improved optimization, and implementing the query hint. The above three are the most important concepts related to query hints in SQL Server. There are many more things one has to learn, but without the beginner's fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right. Quiz 1) You have the following query: DECLARE @UlaChoice TinyInt SET @UlaChoice = 1 SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice You have a nonclustered index named IX_Legal_Ula on the UlaChoice field. The primary key is on the ID field and is called PK_Legal_ID. 99% of the time the value of @UlaChoice is set to 'YP101'. What query will achieve the best optimization for this query? SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice WITH(INDEX(IX_Legal_Ula)) SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice WITH(INDEX(PK_Legal_ID)) SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice OPTION (OPTIMIZE FOR(@UlaChoice = 'YP101')) 2) You have the following query: SELECT * FROM CurrentProducts WHERE ShortName = 'Yoga Trip' You have a nonclustered index on the ShortName field and the query runs an efficient index seek. You change your query to use a variable for ShortName and now you are using a slow index scan. What query hint can you use to get the same execution time as before? WITH LOCK FAST OPTIMIZE FOR MAXDOP READONLY Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change an answer you still have a chance. Solution 1) 3 2) 4 Now let us check the answers and compare yours to the following. I am very confident you will get them correct. Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5 Please leave your feedback in the comment area for the quiz and video. Did you know all the answers to the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
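    For reference, the bare syntax of the two kinds of hint the quiz revolves around looks like this (illustrative only, reusing the quiz's object names; note that a table hint such as INDEX goes immediately after the table name):

        -- Query hint: compile the plan as if the parameter held the common value.
        SELECT * FROM LegalActivity
        WHERE UlaChoice = @UlaChoice
        OPTION (OPTIMIZE FOR (@UlaChoice = 'YP101'));

        -- Table hint: force the use of a particular index.
        SELECT * FROM LegalActivity WITH (INDEX(IX_Legal_Ula))
        WHERE UlaChoice = @UlaChoice;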

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #034

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2007 UDF – User Defined Function to Strip HTML – Parse HTML – No Regular Expression The UDF used in the blog does a fantastic job – it scans the entire HTML text and removes all the HTML tags, keeping only the valid text data. This is one of the most commonly requested tasks many developers face every day. De-fragmentation of Database at Operating System to Improve Performance The operating system skips the MDF file while defragging the entire filesystem. It is absolutely fine and there is no impact on performance. Read the entire blog post for my conversation with our network engineers. Delay Function – WAITFOR clause – Delay Execution of Commands How do you delay execution of commands in SQL Server – of course, by using the WAITFOR keyword. In this blog post, I explain the same with the help of a T-SQL script. Find Length of Text Field To measure the length of TEXT fields the function is DATALENGTH(textfield). LEN will not work for text fields. As of SQL Server 2005, developers should migrate all text fields to VARCHAR(MAX) as that is the way forward. Retrieve Current Date Time in SQL Server CURRENT_TIMESTAMP, GETDATE(), {fn NOW()} There are three ways to retrieve the current datetime in SQL Server: CURRENT_TIMESTAMP, GETDATE(), {fn NOW()}. Explanation and Comparison of NULLIF and ISNULL An interesting observation is that NULLIF returns null if its comparison is successful, whereas ISNULL returns not null if its comparison is successful. In a way they are opposites of each other. Here is my question to you – how do you create an infinite loop using NULLIF and ISNULL, if this is even possible? 2008 Introduction to SERVERPROPERTY and example SERVERPROPERTY is a very interesting system function. It returns many of the system values. I use it very frequently to get different server values like Server Collation, Server Name etc. SQL Server Start Time We can use a DMV to find out the start time of SQL Server in 2008 and later versions. In this blog you can see how you can do the same. Find Current Identity of Table Many times we need to know the current identity of a column. I have found one of my developers using the aggregate function MAX() to find the current identity. However, I prefer the following DBCC command to figure out the current identity. Create Check Constraint on Column Sometimes we just need to create a simple constraint on a table, but I have noticed that developers do many different things to make a column follow rules rather than just creating a constraint. I suggest a constraint is a very useful concept and every SQL developer should pay good attention to this subject. 2009 List Schema Name and Table Name for Database This is one of those blog posts where I simply display a script – the kind of post I still love to read and write. Clustered Index on Separate Drive From Table Location A table devoid of a primary key index is called a heap, and here data is not arranged in a particular order, which gives rise to issues that adversely affect performance. Data must be stored in some kind of order. 
If we put a clustered index on it, then the order will be forced by that index and the data will be stored in that particular order. Understanding Table Hints with Examples Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query. 2010 Data Pages in Buffer Pool – Data Stored in Memory Cache One of my earlier articles, which I still read many times and point developers to. It is clear from the resultset that when more than one index is used, data pages related to both or all of the indexes are stored in the memory cache separately. TRANSACTION, DML and Schema Locks Can you create a situation where you can see a schema lock? Well, this is a very simple question; however, during interviews I have noticed over 50 candidates failed to come up with the scenario. In this blog post, I have demonstrated a situation where we can see the schema lock in a database. 2011 Solution – Puzzle – Statistics are not updated but are Created Once In this example I have created the following situation: create a table, insert 1,000 records, check the statistics, then insert 10 times more records (10,000) and check the statistics again – they will NOT be updated, even though Auto Update Statistics and Auto Create Statistics for the database are TRUE. I then asked two things in the example: 1) Why is this happening? 2) How do you fix this issue? Selecting Domain from Email Address This is a straight-to-script blog post where I explain how to select only the domain name from an entire email address. Solution – Generating Zero Without using Any Numbers in T-SQL How do you get the digit zero without using any digits? This is indeed a very interesting question and the answer is even more interesting. Try to come up with the answer in the next 10 minutes, and if you can't, read the post for the solution. 2012 Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function In simple words – SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The DIFFERENCE function returns an integer value; the integer returned is the number of characters in the SOUNDEX values that are the same. Read Only Files and SQL Server Management Studio (SSMS) I have come across a very interesting feature in SSMS related to "Read Only" files. I believe it is a little-known feature as well, so I decided to write a blog about it. Identifying Column Data Type of uniqueidentifier without Querying System Tables How do I know if a table has a uniqueidentifier column, and what is its value, without using any DMVs or system catalogues? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column. Read the blog post to find the answer. Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue Interesting question – "When I try to connect to SQL Server, it lets me connect just fine and lets me open and explore the database. I noticed that I do not see any user created objects, but when my colleague attempts to connect to the server, he is able to explore the database as well as see all the user created tables and other objects. Can you help me fix it?" Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video Here is an interesting small 60-second video on how to import a CSV file into a database. 
ColumnStore Index – Batch Mode vs Row Mode Here is the logic behind when a columnstore index uses batch mode and when it uses row mode. A batch typically represents about 1000 rows of data. Batch mode processing also uses algorithms that are optimized for multicore CPUs and increased memory throughput. Follow up – Usage of $rowguid and $IDENTITY This is an excellent follow-up to my earlier blog post where I explain where to use $rowguid and $identity. If you do not know the difference between them, this is a blog with a script example. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
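    A couple of the one-liners from the 2007 entries above, spelled out as runnable T-SQL (the table and column in the DATALENGTH example are hypothetical):

        -- Three ways to retrieve the current date and time.
        SELECT CURRENT_TIMESTAMP, GETDATE(), {fn NOW()};

        -- DATALENGTH works on a TEXT column where LEN does not.
        SELECT DATALENGTH(Body) AS BodyLength FROM dbo.Articles;

        -- Delay execution of the next command by five seconds.
        WAITFOR DELAY '00:00:05';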

    Read the article

  • Microsoft Business Intelligence Seminar 2011

    - by DavidWimbush
    I was lucky enough to attend the maiden presentation of this at Microsoft Reading yesterday. It was pretty gripping stuff not only because of what was said but also because of what could only be hinted at. Here's what I took away from the day. (Disclaimer: I'm not a BI guru, just a reasonably experienced BI developer, so I may have misunderstood or misinterpreted a few things. Particularly when so much of the talk was about the vision and subtle hints of what is coming. Please comment if you think I've got anything wrong. I'm also not going to even try to cover Master Data Services as I struggled to imagine how you would actually use it.) I was a bit worried when I learned that the whole day was going to be presented by one guy but Rafal Lukawiecki is a very engaging speaker. He's going to be presenting this about 20 times around the world over the coming months. If you get a chance to hear him speak, I say go for it. No doubt some of the hints will become clearer as Denali gets closer to RTM. Firstly, things are definitely happening in the SQL Server Reporting and BI world. Traditionally IT would build a data warehouse, then cubes on top of that, and then publish them in a structured and controlled way. But, just as with many IT projects in general, by the time it's finished the business has moved on and the system no longer meets their requirements. This is not sustainable and something more agile is needed but there has to be some control. Apparently we're going to be hearing the catchphrase 'Balancing agility with control' a lot. More users want more access to more data. Can they define what they want? Of course not, but they'll recognise it when they see it. It's estimated that only 28% of potential BI users have meaningful access to the data they need, so there is a real pent-up demand. The answer looks like: give them some self-service tools so they can experiment and see what works, and then IT can help to support the results. It's estimated that 32% of Excel users are comfortable with its analysis tools such as pivot tables. It's the power user's preferred tool. Why fight it? That's why PowerPivot is an Excel add-in and that's why they released a Data Mining add-in for it as well. It does appear that the strategy is going to be to use Reporting Services (in SharePoint mode), PowerPivot, and possibly something new (smiles and hints but no details) to create reports and explore data. Everything will be published and managed in SharePoint which gives users the ability to mash-up, share and socialise what they've found out. SharePoint also gives IT tools to understand what people are looking at and where to concentrate effort. If PowerPivot report X becomes widely used, it's time to check that it shows what they think it does and perhaps get it a bit more under central control. There was more SharePoint detail that went slightly over my head regarding where Excel Services and Excel Web Application fit in, the differences between them, and the suggestion that it is likely they will one day become one (but not in the immediate future). That basic pattern is set to be expanded upon by further exploiting Vertipaq (the columnar indexing engine that enables PowerPivot to store and process a lot of data fast and in a small memory footprint) to provide scalability 'from the desktop to the data centre', and some yet to be detailed advances in 'frictionless deployment' (part of which is about making the difference between local and the cloud pretty much irrelevant). 
Excel looks like it is becoming Microsoft's primary BI client. It already has: the ability to consume cubes, strong visualisation tools, slicers (which are part of Excel, not PowerPivot), a data mining add-in, and PowerPivot. A major hurdle for self-service BI is presenting the data in a consumable format. You can't just give users PowerPivot and a server with a copy of the OLTP database(s). Building cubes is labour intensive and doesn't always give the user what they need. This is where the BI Semantic Model (BISM) comes in. I gather it's a layer of metadata you define that can combine multiple data sources (and types of data source) into a clear 'interface' that users can work with. It comes with a new query language called DAX. SSAS cubes are unlikely to go away overnight because, with their pre-calculated results, they are still the most efficient way to work with really big data sets. A few other random titbits that came up: Reporting Services is going to get some good new stuff in Denali. Keep an eye on www.projectbotticelli.com for the slides. You can also view last year's seminar sessions, which covered a lot of the same ground as far as the overall strategy is concerned. They plan to add more material as Denali's features are publicly exposed. Check out the PASS keynote address for a showing of Yahoo's SQL BI servers. Apparently they wheeled the rack out on stage still plugged in and running! Check out the Excel 2010 Data Mining Add-Ins. 32 bit only at present but 64 bit is on the way. There are lots of data sets, many of them free, at the Windows Azure Marketplace Data Market (where you can also get ESRI shape files). If you haven't already seen it, have a look at the Silverlight Pivot Viewer (http://weblogs.asp.net/scottgu/archive/2010/06/29/silverlight-pivotviewer-now-available.aspx). The Bing Maps Data Connector is worth a look if you're into spatial stuff (http://www.bing.com/community/site_blogs/b/maps/archive/2010/07/13/data-connector-sql-server-2008-spatial-amp-bing-maps.aspx).  

    Read the article

  • How to get the revision history of a branch with bzrlib

    - by David Planella
    I'm trying to get a list of committers to a bzr branch. I know I can get it through the command line with something along these lines: bzr log -n0 | grep committer | sed -e 's/^[[:space:]]*committer: //' | uniq However, I'd like to get that list programmatically with bzrlib. After having looked at the bzrlib documentation, I can't manage to find out how I would even get the full list of revisions from my branch. Any hints on how to get the full history of revisions from a branch with bzrlib, or ultimately, the list of committers?

    Read the article

  • Oracle WebCenter in Action: Best Practices from Oracle Consulting

    - by Kellsey Ruppel
    Oracle WebCenter in Action: Best Practices from Oracle Consulting. See concrete, real-world examples of deployments throughout the Oracle WebCenter stack. Oracle Consulting will lead you through a discussion about best practices and key customer use cases, as well as offer practical tips to support web experience management, enterprise content management, and portal deployments. Watch this webcast as our presenters discuss: best practices for deployments of large, complex architectures with Oracle WebCenter Sites; key deployments and helpful hints for Oracle WebCenter Content; and performance tuning takeaways when using Oracle WebCenter Portal. Watch the webcast by registering now. REGISTER NOW

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >