Search Results

Search found 17968 results on 719 pages for 'query tuning'.

  • Output problem in mysql query in MFC program

    - by D.Gaughan
    I'm currently working on a small MFC program that outputs data from a MySQL database. I can get output when I'm using a SQL statement that does not contain any variable, e.g. select album from Artists; but when I try to use a variable the program compiles but I get no output, e.g.

        mysql_perform_query(conn,select album from Artists where artists = '"+m_search_edit"'")

    Here is the function for mysql_perform_query:

        MYSQL_RES* mysql_perform_query(MYSQL *conn, const char* query)
        {
            // send the query to the database
            if (mysql_query(conn, query))
            {
                // printf("MySQL query error : %s\n", mysql_error(conn));
                // exit(1);
            }
            return mysql_use_result(conn);
        }

    And here is the code block for outputting the data:

        struct connection_details mysqlD;
        mysqlD.server = "www.freesqldatabase.com"; // where the mysql database is
        mysqlD.user = "**********";                // the root user of mysql
        mysqlD.password = "***********";           // the password of the root user in mysql
        mysqlD.database = "***************";       // the database to pick

        // connect to the mysql database
        conn = mysql_connection_setup(mysqlD);

        CStringA query;
        query.Format("select album from Artists where artist = '%s'", CT2CA(m_search_edit));
        res = mysql_perform_query(conn, query);
        //res = mysql_perform_query(conn, "select distinct artist from Artists");

        while ((row = mysql_fetch_row(res)) != NULL) {
            CString str;
            UpdateData();
            str = ("%s\n", row[0]);
            UpdateData(FALSE);
            m_list_control.AddString(str);
        }

    The m_search_edit variable is the variable for an edit box. I am using Visual Studio 2008, with one copy of this program Unicode and one non-Unicode; I also have a version built with VC++ 6. Any tips on how I can get output from the database using the m_search_edit variable?

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting

    A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline.

    One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips

    Here is a simple tick list you can follow in order to tune your lookups:

    - Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
    - Only pick the columns that you need; ignore everything else.
    - Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column - that is a big waste if the actual values are significantly less than 20 characters in length.
    - Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    - Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause! (A sketch of such a cache-charging statement appears at the end of this post.)

    Thinking outside the box

    It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:

    - There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource-intensive lookups then consider putting that lookup into a dataflow all of its own, and using raw files to pass the pipeline data in and out of that dataflow.
    - Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data, then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    - Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId], then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement, you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
    - Taking the previous data partitioning idea further... a dataflow can contain more than one data path, so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition.
    - Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set then you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    - Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
    - Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts

    If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover.

    I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect.

    If you have any other tips to share then please stick them in the comments. Hope this helps!

    @Jamiet
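
    To make the tick list concrete: a minimal sketch of a cache-charging statement, with hypothetical table and column names, and an illustrative region filter standing in for whatever partitioning predicate fits your data:

        -- Charge the lookup cache with only the columns and rows the dataflow needs.
        SELECT CustomerKey,          -- the surrogate key the pipeline wants back
               CustomerBusinessKey   -- the join column coming down the pipeline
        FROM   dbo.DimCustomer
        WHERE  Region = 'EMEA';      -- restrict the cache to the partition being processed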

    Read the article

  • Developer Training – 6 Online Courses to Learn SQL Server, MySQL and Technology

    - by Pinal Dave
    Video courses are the next big thing and I am so happy that I have so far authored 6 different video courses with Pluralsight. Here is the list of the courses. I have listed all of my video courses over here. Note: If you click on the courses and they do not open, you need to log in to Pluralsight with a valid username and password or sign up for a FREE trial. Please leave a comment with your favorite course in the comment section. Random 10 winners will get a surprise gift via email. Bonus: if you list your favorite module from the course site.

    SQL Server Performance: Introduction to Query Tuning
    SQL Server performance tuning is an in-depth topic, and an art to master. A key component of overall application performance tuning is query tuning. Writing queries in an efficient manner, and making sure they execute in the most optimal way possible, is always a challenge. The basics revolve around the details of how SQL Server carries out query execution, so the optimizations explored in this course follow along the same lines. Click to View Course

    SQL Server Performance: Indexing Basics
    Indexes are the most crucial objects of the database. They are the first stop for any DBA and developer when it comes to performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning one has to understand the fundamentals of indexes and the best practices associated with them. This course is for every DBA and developer who deals with performance tuning and wants to use indexes to improve the performance of the server. Click to View Course

    SQL Server Questions and Answers
    This course is designed to help you better understand how to use SQL Server effectively. The course presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works. This course is for anyone working with SQL Server databases who wants to improve her knowledge and understanding of this complex platform. Click to View Course

    MySQL Fundamentals
    MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open source web application software stack. This course covers the fundamentals of MySQL, including how to install MySQL as well as how to write basic data retrieval and data modification queries. Click to View Course

    Building a Successful Blog
    Expressing yourself is the most common behavior of humans. Blogging has made it easy to express yourself. Just like a letter or book has a structure and formula, blogging also has a structure and formula. In this introductory course on blogging we will go over a few of the basics of blogging and show the way to get started with blogging immediately. If you already have a blog, this course will be even more relevant, as it discusses many of the common questions and issues you face in your blogging routine. Click to View Course

    Introduction to ColdFusion
    ColdFusion is a rapid web application development platform. In this course you will learn the basics of how to use the ColdFusion platform and rapidly develop web sites. The course begins with the basics of ColdFusion Markup Language and moves to common development language practices. From there we move to frequent database operations and advanced concepts of Forms, Sessions and Cookies. The last module sums up all the concepts covered in the course with a sample application. Click to View Course

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Dynamically select field names in a query with Spring JDBCTemplate

    - by Francesco
    Hi, I have a problem with parameter replacement in Spring JdbcTemplate. I have this query:

        <bean id="fixQuery" class="java.lang.String">
            <constructor-arg type="java.lang.String"
                value="select fa.id, fi.? from fix_ambulation fa left join fix_i18n fi on fa.translation_id = fi.id order by name" />
        </bean>

    And this method:

        public List<FixAmbulation> readFixAmbulation(String locale) throws Exception {
            List<FixAmbulation> ambulations = this.getJdbcTemplate().query(
                    fixQuery, new Object[] { locale.toLowerCase() },
                    ParameterizedBeanPropertyRowMapper.newInstance(FixAmbulation.class));
            return ambulations;
        }

    I'd like to have the ? filled with the string representing the locale the user is using. So if the user is Brazilian I'd send him the column pt_br from the table fix_i18n; otherwise, if he's American, I'd send him the column en_us. What I get from this method is a PostgreSQL exception:

        org.postgresql.util.PSQLException: ERROR: syntax error at or near "$1"

    If I replace fi.? with just ? (the column name of the locale is unique, so if I run this query in the database it works just fine), what I get is that every object returned from the method has the locale string in the name field, i.e. in the name field I have "en_us". The only way I found to get it working was to change the method into:

        public List<FixAmbulation> readFixAmbulation(String locale) throws Exception {
            String query = "select fa.id, fi." + locale.toLowerCase() + " as name " + fixQuery;
            this.log.info("QUERY : " + query);
            List<FixAmbulation> ambulations = this.getJdbcTemplate().query(
                    query,
                    ParameterizedBeanPropertyRowMapper.newInstance(FixAmbulation.class));
            return ambulations;
        }

    and setting fixQuery to:

        <bean id="fixQuery" class="java.lang.String">
            <constructor-arg type="java.lang.String"
                value=" from telemedicina.fix_ambulation fa left join telemedicina.fix_i18n fi on fa.translation_id = fi.id order by name" />
        </bean>

    My DAO extends Spring JdbcDaoSupport and works just fine for all other queries. What am I doing wrong?
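
    The underlying issue is that a JDBC ? placeholder can only bind values, never identifiers such as column names; that is why fi.? fails and a bare ? just echoes the bound string back. If the set of locale columns is known up front, one way to keep a single parameterized statement is a CASE expression - a sketch only (the ::text cast helps PostgreSQL type the bound parameter):

        SELECT fa.id,
               CASE ?::text              -- the locale string is bound as a value, not an identifier
                   WHEN 'pt_br' THEN fi.pt_br
                   WHEN 'en_us' THEN fi.en_us
               END AS name
        FROM fix_ambulation fa
        LEFT JOIN fix_i18n fi ON fa.translation_id = fi.id
        ORDER BY name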

    Read the article

  • Linq to LLBLGen query problem

    - by Jeroen Breuer
    Hello, I've got a stored procedure and I'm trying to convert it to a Linq to LLBLGen query. The query in Linq to LLBLGen works, but when I trace the query which is sent to SQL Server it is far from perfect. This is the stored procedure:

        ALTER PROCEDURE [dbo].[spDIGI_GetAllUmbracoProducts]
            -- Add the parameters for the stored procedure.
            @searchText nvarchar(255),
            @startRowIndex int,
            @maximumRows int,
            @sortExpression nvarchar(255)
        AS
        BEGIN
            SET @startRowIndex = @startRowIndex + 1
            SET @searchText = '%' + @searchText + '%'

            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- This is the query which will fetch all the UmbracoProducts.
            -- This query also supports paging and sorting.
            WITH UmbracoOverview As
            (
                SELECT ROW_NUMBER() OVER(
                           ORDER BY
                               CASE WHEN @sortExpression = 'productName' THEN umbracoProduct.productName
                                    WHEN @sortExpression = 'productCode' THEN umbracoProduct.productCode
                               END ASC,
                               CASE WHEN @sortExpression = 'productName DESC' THEN umbracoProduct.productName
                                    WHEN @sortExpression = 'productCode DESC' THEN umbracoProduct.productCode
                               END DESC
                       ) AS row_num,
                       umbracoProduct.umbracoProductId,
                       umbracoProduct.productName,
                       umbracoProduct.productCode
                FROM umbracoProduct
                INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId
                WHERE (umbracoProduct.productName LIKE @searchText
                    OR umbracoProduct.productCode LIKE @searchText
                    OR product.code LIKE @searchText
                    OR product.description LIKE @searchText
                    OR product.descriptionLong LIKE @searchText
                    OR product.unitCode LIKE @searchText)
            )
            SELECT UmbracoOverview.UmbracoProductId,
                   UmbracoOverview.productName,
                   UmbracoOverview.productCode
            FROM UmbracoOverview
            WHERE (row_num >= @startRowIndex AND row_num < (@startRowIndex + @maximumRows))

            -- This query will count all the UmbracoProducts.
            -- This query is used for paging inside ASP.NET.
            SELECT COUNT (umbracoProduct.umbracoProductId) AS CountNumber
            FROM umbracoProduct
            INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId
            WHERE (umbracoProduct.productName LIKE @searchText
                OR umbracoProduct.productCode LIKE @searchText
                OR product.code LIKE @searchText
                OR product.description LIKE @searchText
                OR product.descriptionLong LIKE @searchText
                OR product.unitCode LIKE @searchText)
        END

    This is my Linq to LLBLGen query:

        using System.Linq.Dynamic;

        var q = (
            from up in MetaData.UmbracoProduct
            join p in MetaData.Product on up.UmbracoProductId equals p.UmbracoProductId
            where up.ProductCode.Contains(searchText)
               || up.ProductName.Contains(searchText)
               || p.Code.Contains(searchText)
               || p.Description.Contains(searchText)
               || p.DescriptionLong.Contains(searchText)
               || p.UnitCode.Contains(searchText)
            select new UmbracoProductOverview
            {
                UmbracoProductId = up.UmbracoProductId,
                ProductName = up.ProductName,
                ProductCode = up.ProductCode
            }).OrderBy(sortExpression);

        // Save the count in HttpContext.Current.Items. This value will only be saved during 1 single HTTP request.
        HttpContext.Current.Items["AllProductsCount"] = q.Count();

        // Returns the results paged.
        return q.Skip(startRowIndex).Take(maximumRows).ToList<UmbracoProductOverview>();

    This is my initial expression to process:

        value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.UmbracoProductEntity])
        .Join(value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.ProductEntity]),
              up => up.UmbracoProductId, p => p.UmbracoProductId, (up, p) => new <>f__AnonymousType0`2(up = up, p = p))
        .Where(<>h__TransparentIdentifier0 => (((((<>h__TransparentIdentifier0.up.ProductCode.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)
            || <>h__TransparentIdentifier0.up.ProductName.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText))
            || <>h__TransparentIdentifier0.p.Code.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText))
            || <>h__TransparentIdentifier0.p.Description.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText))
            || <>h__TransparentIdentifier0.p.DescriptionLong.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText))
            || <>h__TransparentIdentifier0.p.UnitCode.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)))
        .Select(<>h__TransparentIdentifier0 => new UmbracoProductOverview() {UmbracoProductId = <>h__TransparentIdentifier0.up.UmbracoProductId, ProductName = <>h__TransparentIdentifier0.up.ProductName, ProductCode = <>h__TransparentIdentifier0.up.ProductCode})
        .OrderBy( => .ProductName).Count()

    Now this is how the queries that are sent to SQL Server look. Select query:

        SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId],
               [LPA_L2].[productName] AS [ProductName],
               [LPA_L2].[productCode] AS [ProductCode]
        FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2]
               INNER JOIN [eurofysica].[dbo].[product] [LPA_L3]
                       ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId])
        WHERE ( ( ( ( ( ( ( ( [LPA_L2].[productCode] LIKE @ProductCode1)
                     OR ( [LPA_L2].[productName] LIKE @ProductName2))
                     OR ( [LPA_L3].[code] LIKE @Code3))
                     OR ( [LPA_L3].[description] LIKE @Description4))
                     OR ( [LPA_L3].[descriptionLong] LIKE @DescriptionLong5))
                     OR ( [LPA_L3].[unitCode] LIKE @UnitCode6))))

        Parameter: @ProductCode1 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @ProductName2 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @Code3 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @Description4 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @DescriptionLong5 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @UnitCode6 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".

    Count query:

        SELECT TOP 1 COUNT(*) AS [LPAV_]
        FROM (SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId],
                     [LPA_L2].[productName] AS [ProductName],
                     [LPA_L2].[productCode] AS [ProductCode]
              FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2]
                     INNER JOIN [eurofysica].[dbo].[product] [LPA_L3]
                             ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId])
              WHERE ( ( ( ( ( ( ( ( [LPA_L2].[productCode] LIKE @ProductCode1)
                           OR ( [LPA_L2].[productName] LIKE @ProductName2))
                           OR ( [LPA_L3].[code] LIKE @Code3))
                           OR ( [LPA_L3].[description] LIKE @Description4))
                           OR ( [LPA_L3].[descriptionLong] LIKE @DescriptionLong5))
                           OR ( [LPA_L3].[unitCode] LIKE @UnitCode6))))) [LPA_L1]

        Parameter: @ProductCode1 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @ProductName2 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @Code3 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @Description4 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @DescriptionLong5 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @UnitCode6 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".

    As you can see, no sorting or paging is done (like in my stored procedure). This is probably done inside the code after all the results are fetched, which costs a lot of performance! Does anybody know how I can convert my stored procedure to Linq to LLBLGen the proper way?

    Read the article

  • Date problem in MYSQL Query

    - by davykiash
    I am looking for a query to sum values over a particular time duration, say a year, or a particular month in a year, using MySQL syntax. Note that each row stores a daily transacted amount against a transaction_date. An example of a query that returns total sales in a year would look something like this:

        SELECT SUM(transaction_amount) WHERE transaction_date = (YEAR)

    An example of a query that returns total sales in a particular month and year would look something like this:

        SELECT SUM(transaction_amount) WHERE transaction_date = (YEAR)(MONTH)

    How achievable is this?
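
    MySQL's YEAR() and MONTH() functions get you there directly; a sketch (the table name transactions is assumed):

        -- Total for one year
        SELECT SUM(transaction_amount) FROM transactions
        WHERE YEAR(transaction_date) = 2010;

        -- Total for one month of one year
        SELECT SUM(transaction_amount) FROM transactions
        WHERE YEAR(transaction_date) = 2010 AND MONTH(transaction_date) = 3;

        -- Equivalent range form; unlike the versions above, this one
        -- can use an index on transaction_date.
        SELECT SUM(transaction_amount) FROM transactions
        WHERE transaction_date >= '2010-03-01' AND transaction_date < '2010-04-01';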

    Read the article

  • File system query

    - by Balaji
    Is there an easy way to query data in a file system? We are storing data in the file system (instead of a database). Is there a way to query the content of the file system? The data is stored in XML format. Since the data is growing day by day, we are finding it difficult to query the content of the files. Can anyone suggest a tool or method to query the data in the existing file system?

    Read the article

  • converting linq query to icollection

    - by bergin
    Hi there. I need to take the results of a query:

        var query = from m in db.SoilSamplingSubJobs
                    where m.order_id == id
                    select m;

    and prepare it as an ICollection so that I can have something like:

        ICollection<SoilSamplingSubJob> subjobs

    At the moment I create a list, which isn't appropriate to my needs:

        query.ToList();

    What do I do - is it query.ToIcollection()?

    Read the article

  • Drawbacks of Dynamic Query in Sqlserver 2005 ?

    - by KuldipMCA
    I am using many dynamic queries in my database procedures, because my filter is not fixed; I take @Filter as a parameter and pass it into the procedure:

        Declare @query as varchar(8000)
        Declare @Filter as varchar(1000)

        set @query = 'Select * from Person.Address where 1=1 and ' + @Filter
        exec(@query)

    My filter can contain any field from the table for comparison. Will this affect performance or not? Is there an alternate way to achieve this type of thing?
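
    One commonly suggested alternative, sketched here: keep the SQL text stable and bind the filter values through sp_executesql, so plans can be reused and the procedure is not open to injection through @Filter. This assumes the filter can be decomposed into known predicates (the City example below is purely illustrative):

        DECLARE @query nvarchar(4000);
        SET @query = N'SELECT * FROM Person.Address WHERE 1 = 1 AND City = @City';

        -- Values travel as parameters, not as concatenated text,
        -- so the same plan can be reused for different filter values.
        EXEC sp_executesql @query, N'@City nvarchar(60)', @City = N'Seattle';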

    Read the article

  • WordPress SQL Query on Category/Terms

    - by mroggle
    Hi, I am modifying a plugin slightly to meet my needs, and need to change this query to return post IDs of just one category. I know it has something to do with INNER JOIN, but I can't get the query right. Here is the original query:

        $query = "SELECT ID as PID FROM $wpdb->posts";
        $results = $wpdb->get_results($querydetails, ARRAY_A);
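
    For reference, WordPress stores category assignments in its taxonomy tables, so a join along posts -> term_relationships -> term_taxonomy -> terms is one way to filter by a single category. A sketch, assuming the default wp_ table prefix and a placeholder category slug 'news':

        SELECT p.ID AS PID
        FROM wp_posts p
        INNER JOIN wp_term_relationships tr ON tr.object_id = p.ID
        INNER JOIN wp_term_taxonomy tt ON tt.term_taxonomy_id = tr.term_taxonomy_id
        INNER JOIN wp_terms t ON t.term_id = tt.term_id
        WHERE tt.taxonomy = 'category'   -- restrict to categories (not tags)
          AND t.slug = 'news'            -- placeholder: the one category wanted
          AND p.post_status = 'publish';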

    Read the article

  • Improve long mysql query

    - by John Adawan
    I have a PHP MySQL query like this:

        $query = "SELECT * FROM articles FORCE INDEX (articleindex)
                  WHERE category='$thiscat' and did>'$thisdid' and mid!='$thismid'
                    and status='1' and group='$thisgroup' and pid>'$thispid'
                  LIMIT 10";

    As an optimization, I've indexed all the parameters in articleindex, and I use FORCE INDEX to force MySQL to use the index, supposedly for faster processing. But it seems that this query is still quite slow, and it's causing a jam and maxing out the MySQL connection limit. Let's discuss how we can improve on such a long query.
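
    Column order in the composite index matters more than coverage here: equality predicates should lead, and at most one range column can be used for seeking. A sketch of that ordering (note also that group is a reserved word in MySQL, so the query itself needs backticks around it):

        -- Equality columns first; MySQL can then range-scan on the first
        -- inequality column (did), but cannot seek on mid != ... or pid > ...
        ALTER TABLE articles
          ADD INDEX idx_articles_filter (category, status, `group`, did);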

    Read the article

  • running same query in different databases

    - by user316833
    I wrote a query that I want to run in several Access databases. I have 1000+ Access databases with the same tables (same names, same fields). So far, I have been manually copying this query from a txt file into the SQL view of the Access query design screen for each database and then running it. I do not need to change the query language - everything is the same for all 1000 databases. Is there a way to automate this?

    Read the article

  • Grails query not using GORM

    - by Tihom
    What is the best way to query for something without using GORM in Grails? I have a query that doesn't seem to fit the GORM model: it has a subquery and a computed field. I posted on Stack Overflow already with no response, so I decided to take a different approach. I want to run a query without GORM from within a Grails application. Is there an easy way to get the connection and go through the result set?

    Read the article

  • A Query to remove relationships that do not belong [closed]

    - by Segfault
    In a SQL Server 2008 R2 database, given this schema:

        AgentsAccounts
        --------------
        AgentID int UNIQUE
        AccountID

        FinalAgents
        -----------
        AgentID

    I need to create a query that does this: for each AgentID 'final' in FinalAgents, remove all of the OTHER AgentIDs from AgentsAccounts that have the same AccountID as 'final'. So if the tables have these rows before the query:

        AgentsAccounts
        AgentID  AccountID
        1        A
        2        A
        3        B
        4        B

        FinalAgents
        1
        3

    then after the query the AgentsAccounts table will look like this:

        AgentsAccounts
        AgentID  AccountID
        1        A
        3        B

    What T-SQL query will delete the correct rows without using a cursor?
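
    One cursor-free way to express this, sketched under the assumption that FinalAgents.AgentID is never NULL (otherwise swap the NOT IN for a NOT EXISTS):

        DELETE aa
        FROM AgentsAccounts AS aa
        WHERE aa.AgentID NOT IN (SELECT f.AgentID FROM FinalAgents AS f)  -- keep the final agents themselves
          AND aa.AccountID IN (                                           -- ...but only on accounts a final agent holds
                SELECT aa2.AccountID
                FROM AgentsAccounts AS aa2
                INNER JOIN FinalAgents AS f ON f.AgentID = aa2.AgentID);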

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
    Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments. In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE. In this segment, I will lay the groundwork for the modeling concepts. First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

    Oracle BI EE Query Cycle

    The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources. There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources.

    Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves. Most of them create queries without knowing it, by picking a dashboard page and some filters. Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations.

    The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business-focused measures and dimensional attributes that business people can use in their analyses and dashboards. After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

    The CEIM is a model that controls the processing of the BI Server. It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis. It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules. The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

    Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL. The API simply uses ODBC or JDBC to pass the query to the BI Server. Presentation services writes the LSQL query in terms of the simplified objects presented to the users. The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources. For example, the first statement below (LSQL) was rewritten into the physical SQL for an Oracle 11g database that follows it.
    Logical SQL:

        SELECT
            "D0 Time"."T02 Per Name Month" saw_0,
            "D4 Product"."P01  Product" saw_1,
            "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
        FROM "Sample Sales"
        ORDER BY saw_0, saw_1

    Physical SQL:

        WITH SAWITH0 AS (
            select T986.Per_Name_Month as c1,
                   T879.Prod_Dsc as c2,
                   sum(T835.Units) as c3,
                   T879.Prod_Key as c4
            from Product T879 /* A05 Product */ ,
                 Time_Mth T986 /* A08 Time Mth */ ,
                 FactsRev T835 /* A11 Revenue (Billed Time Join) */
            where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
            group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
        )
        select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
        from SAWITH0
        order by c1, c2

    Probably everybody reading this blog can write SQL or MDX. However, the trick in designing the CEIM is that you are modeling a query-generation factory. Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests. This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

    The Structure of the Common Enterprise Information Model (CEIM)

    The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result. The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below.

    Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request. When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries. That's the left to right flow in the diagram below. When the results come back from the data source queries, the right to left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

    Business Model

    Think of the business model as the heart of the CEIM you are designing. This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole. It also provides the baseline business-friendly names and user-readable dictionary. For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models.

    Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well. This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
    It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next. The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model.

    Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time. This is important for federation, where a given logical object can map to several different physical objects in different databases. It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products.

    Your requirements process with your user community will mostly affect the business model. This is where you will define most of the things they specifically ask for, such as metric definitions. For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer.

    Physical Model

    The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries. Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time.

    Each physical data source will have its own physical model, or "database" object in the CEIM. The shape of each physical model matches the shape of its physical source. In other words, if the source is normalized relational, the physical model will mimic that normalized shape. If it is a hypercube, the physical model will have a hypercube shape. If it is a flat file, it will have a denormalized tabular shape.

    To aid in query optimization, the physical layer also tracks the specifics of the database brand and release. This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer. For most data sources, native APIs are used to further optimize performance and functionality.

    The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics. This encapsulation is another enabler of packaged BI applications and federation. It is also key to hiding the complex shapes and relationships in the physical sources from the end users. Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database. The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same.

    Mappings

    Within the Business Model and Mapping Layer, the mappings provide the binding from each logical column and join in the dimensional business model, to each of the objects that can provide its data in the physical layer.
    When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used. These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources. These mappings are usually the most sophisticated part of the CEIM.

    Presentation

    You might think of the presentation layer as a set of very simple relational-like views into the business model. Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns. For business users, presentation services interprets these as subject areas, folders and columns, respectively. (Note that in 10g, subject areas were called presentation catalogs in the CEIM. In this blog, I will stick to 11g terminology.) Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio).

    The purpose of the presentation layer is to specialize the business model for different categories of users. Based on a user's role, they will be restricted to specific subject areas, tables and columns for security. The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users. Customized names and descriptions can be used to override the business model names for a specific audience. Variables in the object names can be used for localization.

    For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables. The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer. In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

    Other Model Objects

    There are some objects that apply to multiple layers. These include security-related objects, such as application roles, users, data filters, and query limits (governors). There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis. Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.

    The Query Factory

    At this point, you should have a grasp on the query factory concept. When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand.

    The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • EJB-QL query never returning unless another query is run

    - by KevMo
    I have a strange, strange problem. When executing the following EJB-QL query, my ENTIRE application will stop responding to requests, as the query never finishes executing:

        Query q = em.createQuery("SELECT o from RoomReservation as o WHERE o.deleted = FALSE AND o.room.id IN (Select r.id from Room as r where r.deleted = FALSE AND r.type.name = 'CLASSROOM')");

    However, if I execute this query before I execute the other query, it runs without issue:

        Query dumbQuery = em.createQuery("SELECT o from Room as o WHERE o.deleted = FALSE");

    Any idea what in the world is going on?

    Read the article

  • Slow INFORMATION_SCHEMA query

    - by Thomas
    We have a .NET Windows application that runs the following query on login to get some information about the database:

        SELECT t.TABLE_NAME,
               ISNULL(pk_ccu.COLUMN_NAME,'') PK,
               ISNULL(fk_ccu.COLUMN_NAME,'') FK
        FROM INFORMATION_SCHEMA.TABLES t
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk_tc
               ON pk_tc.TABLE_NAME = t.TABLE_NAME
              AND pk_tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE pk_ccu
               ON pk_ccu.CONSTRAINT_NAME = pk_tc.CONSTRAINT_NAME
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk_tc
               ON fk_tc.TABLE_NAME = t.TABLE_NAME
              AND fk_tc.CONSTRAINT_TYPE = 'FOREIGN KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE fk_ccu
               ON fk_ccu.CONSTRAINT_NAME = fk_tc.CONSTRAINT_NAME

    Usually this runs in a couple of seconds, but on one server running SQL Server 2000 it is taking over four minutes. I ran it with the execution plan enabled, and the results are huge, but this part caught my eye (it won't let me post an image): http://img35.imageshack.us/i/plank.png/

    I then updated the statistics on all of the tables that were mentioned in the execution plan:

        update statistics sysobjects
        update statistics syscolumns
        update statistics systypes
        update statistics master..spt_values
        update statistics sysreferences

    But that didn't help. The Index Tuning Wizard doesn't help either, because it doesn't let me select system tables. There is nothing else running on this server, so nothing else could be slowing it down. What else can I do to diagnose or fix the problem on that server?
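
    Two hedged follow-ups worth trying, assuming sufficient permissions on the system tables of that server: rebuild the statistics with a full scan rather than the default sample, and flush the plan cache to rule out a stale cached plan:

        UPDATE STATISTICS sysobjects WITH FULLSCAN
        UPDATE STATISTICS syscolumns WITH FULLSCAN

        -- Discard cached plans so the next run compiles against the fresh statistics.
        DBCC FREEPROCCACHE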

    Read the article

  • Why does "commit" appear in the mysql slow query log?

    - by Tom
    In our MySQL slow query logs I often see lines that just say "COMMIT". What causes a commit to take time? Another way to ask this question is: "How can I reproduce getting a slow commit; statement with some test queries?"

    From my investigation so far I have found that if there is a slow query within a transaction, then it is the slow query that gets output into the slow log, not the commit itself.

    Testing in the mysql command line client:

        mysql> begin;
        Query OK, 0 rows affected (0.00 sec)

        mysql> UPDATE members SET myfield=benchmark(9999999, md5('This is to slow down the update')) WHERE id = 21560;
        Query OK, 0 rows affected (2.32 sec)
        Rows matched: 1  Changed: 0  Warnings: 0

    At this point (before the commit) the UPDATE is already in the slow log.

        mysql> commit;
        Query OK, 0 rows affected (0.01 sec)

    The commit happens fast; it never appeared in the slow log. I also tried an UPDATE which changes a large amount of data, but again it was the UPDATE that was slow, not the COMMIT. However, I can reproduce a slow ROLLBACK that takes 46s and gets output to the slow log:

        mysql> begin;
        Query OK, 0 rows affected (0.00 sec)

        mysql> UPDATE members SET myfield=CONCAT(myfield,'TEST');
        Query OK, 481446 rows affected (53.31 sec)
        Rows matched: 481446  Changed: 481446  Warnings: 0

        mysql> rollback;
        Query OK, 0 rows affected (46.09 sec)

    I understand why rollback has a lot of work to do and therefore takes some time. But I'm still struggling to understand the COMMIT situation - i.e. why it might take a while.
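
    If the goal is just to observe COMMIT latency rather than force a slow one, one hedged approach (assuming MySQL 5.1 or later, where these variables are dynamic) is to drop the slow-log threshold to zero so every statement, commits included, is logged with its timing:

        SET GLOBAL slow_query_log = ON;
        SET GLOBAL long_query_time = 0;  -- log every statement with its latency; revert after testing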

    Read the article

  • SQL Query Not Functioning - No Error Message

    - by gamerzfuse
        // Write the data to the database
        $query = "INSERT INTO staff (name, lastname, username, password, position, department,
                      birthmonth, birthday, birthyear, location, phone, email, street, city,
                      state, country, zip, tags, photo)
                  VALUES ('$name', '$lastname', '$username', '$password', '$position', '$department',
                      '$birthmonth', '$birthday', '$birthyear', '$location', '$phone', '$email',
                      '$street', '$city', '$state', '$country', '$zip', '$tags', '$photo')";
        mysql_query($query);
        var_dump($query);
        echo '<p>' . $name . ' has been added to the Employee Directory.</p>';
        if (!$query) {
            die('Invalid query: ' . mysql_error());
        }

    Can someone tell me why the above code produced:

        string(332) "INSERT INTO staff (name, lastname, username, password, position, department, birthmonth, birthday, birthyear, location, phone, email, street, city, state, country, zip, tags, photo) VALUES ('Craig', 'Hooghiem', 'sdf', 'sdf', 'sdf', 'sdf', '01', '01', 'sdf', 'sdf', '', 'sdf', 'sdf', 'sd', 'sdf', 'sdf', 'sd', 'sdg', 'leftround.gif')"
        Craig has been added to the Employee Directory.

    but does not actually add anything into the database table "staff"? I must be missing something obvious here.

    Read the article

  • java: decoding URI query string

    - by Jason S
    I need to decode a URI that contains a query string; expected input/output behavior is something like the following:

        abstract class URIParser
        {
            /** example input:
             *  something?alias=pos&FirstName=Foo+A%26B%3DC&LastName=Bar
             */
            URIParser(String input) { ... }

            /** should return "something" for the example input */
            public String getPath();

            /** should return a map
             *  {alias: "pos", FirstName: "Foo+A&B=C", LastName: "Bar"}
             */
            public Map<String,String> getQuery();
        }

    I've tried using java.net.URI, but it seems to decode the query string, so in the above example I'm left with "alias=pos&FirstName=Foo+A&B=C&LastName=Bar"; there is then ambiguity over whether a "&" is a query separator or a character in a query component.

    edit: just tried URI.getRawQuery() and it doesn't do the decoding, so I can split the query string with a "&", but then what do I do? Any suggestions?

    Read the article

  • MsSQL 2005 query performance

    - by Max
    I have the following query:

        select .............
        from // one table and about 20 left joins //
        where
        (
            ( this_.driverName like 'blah*' or this_.renterName like 'blah*' )
            or exists
            (
                select this0__.id as y0_
                from ThirdParty this0__
                where this0__.name like 'blah*' and this0__.claim_id=this_.id
            )
        )
        order by this_.id asc

    And I have two environments: one with 175,000 records in the table "this_" and a second with 25,000 records. This query works fine on the 175k database, running in about 2 seconds, but on the base with 25k records the query freezes. And if I drop the following item from the where clause:

        ( this_.driverName like 'blah*' or this_.renterName like 'blah*' )
        or exists
        (
            select this0__.id as y0_
            from ThirdParty this0__
            where this0__.name like 'blah*' and this0__.claim_id=this_.id
        )

    the query runs normally. How can I increase the performance of this query?
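
    One rewrite often suggested for OR-plus-EXISTS predicates like this is to split the two branches into a UNION, which lets the optimizer plan each branch independently. A sketch only, since the column list and the 20 joins are elided here and the base table name claims is assumed (note also that the T-SQL LIKE wildcard is %, not *):

        SELECT /* column list elided */
        FROM claims this_ /* ...plus the 20 left joins... */
        WHERE this_.driverName LIKE 'blah%' OR this_.renterName LIKE 'blah%'

        UNION

        SELECT /* column list elided */
        FROM claims this_ /* ...plus the 20 left joins... */
        INNER JOIN ThirdParty tp ON tp.claim_id = this_.id
        WHERE tp.name LIKE 'blah%'
        -- re-apply the ORDER BY over the whole UNION, on a column in the select list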

    Read the article

  • PDO update query with conditional?

    - by dmontain
    I have a PDO MySQL query that updates 3 fields:

        $update = $mypdo->prepare("UPDATE tablename SET field1=:field1, field2=:field2, field3=:field3 WHERE key=:key");

    But I want field3 to be updated only when $update3 = true; (meaning that the update of field3 is controlled by a conditional statement). Is this possible to accomplish with a single query?

    I could do it with 2 queries, where I update field1 and field2, then check the boolean and update field3 if needed in a separate query:

        // run this query to update only fields 1 and 2
        $update_part1 = $mypdo->prepare("UPDATE tablename SET field1=:field1, field2=:field2 WHERE key=:key");

        // if field3 should be updated, run a separate query to update it separately
        if ($update3) {
            $update_part2 = $mypdo->prepare("UPDATE tablename SET field3=:field3 WHERE key=:key");
        }

    But hopefully there is a way to accomplish this in 1 query?
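
    One single-statement approach, sketched here: bind the boolean as a 0/1 flag and let a CASE expression keep the old value when the flag is off (table and column names are the placeholders from the question):

        UPDATE tablename
        SET field1 = :field1,
            field2 = :field2,
            field3 = CASE WHEN :update3 = 1 THEN :field3 ELSE field3 END
        WHERE `key` = :key;  -- note: key is a reserved word in MySQL, hence the backticks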

    Read the article

  • MySQL query killing my server

    - by Webnet
    Looking at this query, there's got to be something bogging it down that I'm not noticing. I ran it for 7 minutes and it only updated 2 rows.

        // set product count for makes
        $tru->query->run(array(
            'name' => 'get-make-list',
            'sql' => 'SELECT id, name FROM vehicle_make',
            'connection' => 'core'
        ));
        while ($tempMake = $tru->query->getArray('get-make-list')) {
            $tru->query->run(array(
                'name' => 'update-product-count',
                'sql' => 'UPDATE vehicle_make SET product_count = (
                              SELECT COUNT(product_id) FROM taxonomy_master WHERE v_id IN (
                                  SELECT id FROM vehicle_catalog WHERE make_id = '.$tempMake['id'].'
                              )
                          ) WHERE id = '.$tempMake['id'],
                'connection' => 'core'
            ));
        }

    I'm sure this query can be optimized to perform better, but I can't think of how to do it.

        vehicle_make    =     45 rows
        taxonomy_master = 11,223 rows
        vehicle_catalog =  5,108 rows

    All tables have appropriate indexes.
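
    One set-based alternative, a sketch assuming taxonomy_master.v_id references vehicle_catalog.id: fold the per-make loop into a single correlated UPDATE, so the tables are scanned once instead of once per make:

        UPDATE vehicle_make vm
        SET vm.product_count = (
            SELECT COUNT(tm.product_id)
            FROM taxonomy_master tm
            INNER JOIN vehicle_catalog vc ON vc.id = tm.v_id   -- replaces the nested IN (...) subquery
            WHERE vc.make_id = vm.id
        );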

    Read the article

  • make reference to an empty query in flex

    - by Adam
    A bit of a dumb question, I'm sure. I'm trying to allow a user to set an item to be the default. I've got a function that runs a query to first find the current default item. Then it runs a second query that unsets the current default item. Then a third query runs to set the new user-selected item to be the default.

    This seems to work fine when a default item has been previously selected, but when I try to set the default item initially I get the good old "Cannot access a property or method of a null object reference." error. This is because the first query returns no items, I'm sure. So I need to write an if statement that, if the first query returns nothing, skips the second and goes right to the third. The only problem is I can't make a reference to a null object. So how do I go about writing this statement? Thanks

    Read the article
