Search Results

Search found 1369 results on 55 pages for 'clause'.

Page 40/55 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • MySQL GROUP BY with three tables

    - by Psaniko
    I have the following tables: posts (post_id, content, etc) comments (comment_id, post_id, content, etc) posts_categories (post_category_id, post_id, category_id) and this query: SELECT `p`.*, COUNT(comments.comment_id) AS cmts, posts_categories.*,comments.* FROM `posts` AS `p` LEFT JOIN `posts_categories` ON `p`.post_id = `posts_categories`.post_id LEFT JOIN `comments` ON `p`.post_id = `comments`.post_id GROUP BY `p`.`post_id` There are three comments on post_id=1 and four in total. In posts_categories there are two rows, both assigned to post_id=1. I have four rows in posts. But when I run the statement above, I get a result of 6 for COUNT(comments.comment_id) at post_id=1. How is this possible? I guess the mistake is somewhere in the GROUP BY clause, but I can't figure out where. Any suggestions?
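
    The likely culprit is not the GROUP BY but the double LEFT JOIN: each of the 3 comments is paired with each of the 2 posts_categories rows, yielding 3 × 2 = 6 joined rows. A hedged sketch of one fix, using the table and column names from the question: counting distinct comment ids so the category fan-out no longer inflates the count.

      SELECT p.*,
             COUNT(DISTINCT comments.comment_id) AS cmts
      FROM posts AS p
      LEFT JOIN posts_categories ON p.post_id = posts_categories.post_id
      LEFT JOIN comments         ON p.post_id = comments.post_id
      GROUP BY p.post_id;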

    Read the article

  • SQL change "like" to "contains"

    - by Paul
    products table (mySQL) record_id categories (comma-delimited list) --------- -------------------------------- 1 960|1,957|1,958|1 I have the following dynamic query (simplified for the purposes of this question). The query is passed specified categories, each in the format xxxx|yyyy, and I need to return products having the passed category in their comma-delimited list of categories. The current query looks like: select p.* from products p where (p.categories like '%27|0%' or p.categories like '%972|1%' or p.categories like '%969|1%') But the LIKE clause sometimes permits anomalies (for example, '%27|0%' would also match a category such as '927|0'). I would like to write the query more like: select p.* from products p where (p.categories contains '27|0' or p.categories contains '972|1' or p.categories contains '969|1') How would I do this?
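
    A hedged sketch of one MySQL-specific approach: FIND_IN_SET() matches whole items in a comma-delimited list, so '27|0' will not match inside '927|0' the way LIKE '%27|0%' does. Table and column names are taken from the question.

      SELECT p.*
      FROM products p
      WHERE FIND_IN_SET('27|0',  p.categories) > 0
         OR FIND_IN_SET('972|1', p.categories) > 0
         OR FIND_IN_SET('969|1', p.categories) > 0;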

    Read the article

  • Dynamic "WHERE IN" on IQueryable (LINQ to SQL)

    - by user320235
    I have a LINQ to SQL query returning rows from a table into an IQueryable object. IQueryable<MyClass> items = from table in DBContext.MyTable select new MyClass { ID = table.ID, Col1 = table.Col1, Col2 = table.Col2 } I then want to perform a SQL "WHERE ... IN ...." query on the results. This works fine using the following (returning rows whose ID is ID1, ID2, or ID3): sQuery = "ID1,ID2,ID3"; string[] aSearch = sQuery.Split(','); items = items.Where(i => aSearch.Contains(i.ID)); What I would like to be able to do is perform the same operation, but without having to hard-code the i.ID part. So if I have the string of the field name I want to apply the "WHERE IN" clause to, how can I use this in the .Contains() method?

    Read the article

  • SQL select from a large number of IDs

    - by Claudiu
    I have a table, Foo. I run a query on Foo to get the ids from a subset of Foo. I then want to run a more complicated set of queries, but only on those IDs. Is there an efficient way to do this? The best I can think of is creating a query such as: SELECT ... --complicated stuff WHERE ... --more stuff AND id IN (1, 2, 3, 9, 413, 4324, ..., 939393) That is, I construct a huge "IN" clause. Is this efficient? Is there a more efficient way of doing this, or is the only way to JOIN with the initial query that gets the IDs? If it helps, I'm using SQLObject to connect to a PostgreSQL database, and I have access to the cursor that executed the query to get all the IDs.
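
    A hedged sketch of an alternative that avoids round-tripping a huge literal list: in PostgreSQL you can join against an inline VALUES list, or better yet fold the first query in as a subselect so the IDs never leave the database. The table name foo is taken from the question; the IDs are hypothetical.

      SELECT f.*
      FROM foo AS f
      JOIN (VALUES (1), (2), (3), (9), (413), (4324)) AS ids(id)
        ON f.id = ids.id;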

    Read the article

  • Oracle ORA-01795 error in Rails

    - by Cyborgo
    Hi, I am using Oracle as the database for my Rails applications and have some pretty intense tables. I'm trying to find particular entries using a query like this: Author.all( :conditions => { :name => names } ) I had been working with SQLite all along and just migrated to Oracle, which complains when the IN clause has more than 1000 entries. The obvious workaround would be to break it into subclauses, for which I would need to write some raw SQL. Is there anything in Rails that facilitates this?
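
    For reference, ORA-01795 limits an IN list to 1000 expressions. A hedged sketch of the usual raw-SQL workaround: splitting the list into OR-ed chunks of at most 1000 entries. The authors table and the name literals are hypothetical stand-ins for the model's table.

      SELECT *
      FROM authors
      WHERE name IN ('name0001', /* ... up to 1000 literals ... */ 'name1000')
         OR name IN ('name1001', /* ... next chunk ... */ 'name2000');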

    Read the article

  • What's the name of this language that describes another language's syntax?

    - by Boolean
    for example: <SELECT statement> ::= [WITH <common_table_expression> [,...n]] <query_expression> [ ORDER BY { order_by_expression | column_position [ ASC | DESC ] } [ ,...n ] ] [ COMPUTE { { AVG | COUNT | MAX | MIN | SUM } ( expression ) } [ ,...n ] [ BY expression [ ,...n ] ] ] [ <FOR Clause>] [ OPTION ( <query_hint> [ ,...n ] ) ] <query_expression> ::= { <query_specification> | ( <query_expression> ) } [ { UNION [ ALL ] | EXCEPT | INTERSECT } <query_specification> | ( <query_expression> ) [...n ] ] <query_specification> ::= SELECT [ ALL | DISTINCT ] [TOP expression [PERCENT] [ WITH TIES ] ] < select_list > [ INTO new_table ] [ FROM { <table_source> } [ ,...n ] ] [ WHERE <search_condition> ] [ <GROUP BY> ] [ HAVING < search_condition > ] What's the language called?

    Read the article

  • How can I sum a group of sums? SQL Server 2008

    - by billynomates
    I have a query with a sum in it like this: SELECT Table1.ID, SUM(Table2.[Number1] + Table2.[Number2]) AS SumColumn FROM Table1 INNER JOIN Table3 ON Table1.ID = Table3.ID INNER JOIN Table2 ON Table3.ID = Table2.ID WHERE (Table2.[Something] = 'Whatever') GROUP BY Table1.ID, Table2.[Number1] , Table2.[Number2] and it gives me a table like this: ID SumColumn 67 1 67 4 70 2 70 6 70 3 70 6 80 5 97 1 97 3 How can I make it give me a table like this, where the SumColumn is summed, grouped by the ID column? ID SumColumn 67 5 70 17 80 5 97 4 I cannot GROUP BY SumColumn because I get an error (Invalid column name 'SumColumn'.) COALESCE doesn't work either. Thanks in advance. EDIT: Just grouping by the ID gives me an error: [Number1, Number2 and the other column names that I'm selecting] is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
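
    A hedged sketch of one fix, keeping the names from the question: since Number1 and Number2 only appear inside the SUM, they can be dropped from the GROUP BY, and the query then aggregates once per ID.

      SELECT Table1.ID,
             SUM(Table2.[Number1] + Table2.[Number2]) AS SumColumn
      FROM Table1
      INNER JOIN Table3 ON Table1.ID = Table3.ID
      INNER JOIN Table2 ON Table3.ID = Table2.ID
      WHERE Table2.[Something] = 'Whatever'
      GROUP BY Table1.ID;

    Alternatively, the original query can be wrapped as a derived table and summed again: SELECT ID, SUM(SumColumn) FROM ( ... original query ... ) AS t GROUP BY ID.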

    Read the article

  • :from parameter in ActiveRecord find not well designed?

    - by potlee
    I got this error: SQLite3::SQLException: no such column: apis.name: SELECT * FROM examples WHERE ("apis"."name" = 'deep') My code: Api.find :all, :from => params[:table_name], :conditions => {:name => 'deep' } I need to make a back-end Rails application which will be used by a Silverlight application. One of the requirements is to fetch simple data from the database, and I need to be able to query different tables with the same code. (My app has 2000 tables!) I think it does not make sense for Rails to put "apis" in the WHERE clause. Is there any specific reason for this?

    Read the article

  • Does the order of the columns in a SELECT statement make a difference?

    - by Frank Computer
    This question was inspired by a previous question posted on SO, "Does the order of the WHERE clause make a difference?". Would it improve a SELECT statement's performance if the columns used in the WHERE section were placed at the beginning of the SELECT statement? example: SELECT customer.id, transaction.id, transaction.efective_date, transaction.a, [...] FROM customer, transaction WHERE customer.id = transaction.id; I do know that limiting the list of columns to only the needed ones in a SELECT statement improves performance as opposed to using SELECT *, because the column list is smaller and less data is returned.

    Read the article

  • Any way for linq query to check against existing select?

    - by danrhul
    I have an offer that can be in any number of categories. I don't, however, want that offer to then appear twice (or more). I was wondering if it's possible to have a where clause that ascertains whether that offer already exists in the select statement and, if so, ignores it. Here is the linq query: Offers = from o in offerCategories orderby o.RewardCategory.Ordering, o.Order where o.RewardOffer.IsDeleted == false select new OfferOverviewViewModel { Partner = o.RewardOffer.Partner, Description = String.Format("{0} {1}", o.RewardOffer.MainTitle, o.RewardOffer.SecondaryTitle), OfferId = o.OfferId, FeaturedOffer = o.RewardOffer.FeaturedOfferOrder.HasValue, Categories = from c in offerCategories.Where(oc => oc.OfferId == o.OfferId) orderby c.RewardCategory.Ordering select new CategoryDetailViewModel { Description = c.RewardCategory.DisplayName } },

    Read the article

  • compare date split across columns

    - by alex-tech
    Greetings. I am querying tables from Microsoft SQL 2008 which have the date split across three columns: day, month and year. Unfortunately, I do not have control over this because data is coming into the database daily from a 3rd-party source in that format. I need to add BETWEEN to a WHERE clause so the user can pull records within a range. It would be easy enough if the date were in a single column, but I'm finding it nearly impossible when it's split across three columns. To display the date, I am doing a CAST( CAST(year as varchar(4)) + '-' + CAST(month as varchar(2)) + '-' + CAST(day as varchar(2)) as date) AS "date" in a select. I tried to put it as a parameter for the DATEDIFF function, or just a regular BETWEEN, but I get no results. Thanks for any help.
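
    A hedged sketch of what usually fixes this: a column alias from the SELECT list is not visible in the WHERE clause, so the CAST expression has to be repeated there. The table name readings and the date literals are hypothetical; note that DATEFROMPARTS only arrived in SQL Server 2012, so the string CAST is the 2008-era approach.

      SELECT *
      FROM readings
      WHERE CAST(CAST([year]  AS varchar(4)) + '-' +
                 CAST([month] AS varchar(2)) + '-' +
                 CAST([day]   AS varchar(2)) AS date)
            BETWEEN '2010-01-01' AND '2010-12-31';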

    Read the article

  • Optimizing MySQL queries with IN operator

    - by Arkadiusz Kondas
    I have a MySQL database with a fairly large products table. Each product has its own id and a categoryId field holding the id of the category it belongs to. Now I have a query that pulls out products from given categories, such as: SELECT * FROM products WHERE categoryId IN ( 1, 2, 3, 4, 5, 34, 6, 7, 8, 9, 10, 11, 12 ) Of course, the full query also has a WHERE clause and an ORDER BY sort, but that is not the issue here. Let's say there are 250k products and over 100k visits per day. Under such conditions the slow query log records these queries with large execution times. Do you have any ideas how to optimize this? The table engine is MyISAM.
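
    A hedged first step, assuming the schema from the question: make sure categoryId is indexed, so the IN list can be resolved by index range scans instead of full table scans, and confirm it with EXPLAIN.

      -- Add an index on the filter column (skip if one already exists):
      ALTER TABLE products ADD INDEX idx_category (categoryId);

      -- Verify that the optimizer now uses it:
      EXPLAIN SELECT * FROM products
      WHERE categoryId IN (1, 2, 3, 4, 5, 34, 6, 7, 8, 9, 10, 11, 12);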

    Read the article

  • Serialization of a TChan String

    - by J Fritsch
    I have declared the following: type KEY = (IPv4, Integer) type TPSQ = TVar (PSQ.PSQ KEY POSIXTime) type TMap = TVar (Map.Map KEY [String]) data Qcfg = Qcfg { qthresh :: Int, tdelay :: Rational, cwpsq :: TPSQ, cwmap :: TMap, cwchan :: TChan String } deriving (Show) and would like this to be serializable in the sense that Qcfg can either be written to disk or be sent over the network. When I compile this I get the error No instances for (Show TMap, Show TPSQ, Show (TChan String)) arising from the 'deriving' clause of a data type declaration Possible fix: add instance declarations for (Show TMap, Show TPSQ, Show (TChan String)) or use a standalone 'deriving instance' declaration, so you can specify the instance context yourself When deriving the instance for (Show Qcfg) I am now not quite sure whether there is a chance at all to serialize my TChan, although all individual nodes in it are members of the Show class. For TMap and TPSQ I wonder whether there are ways to show the values in the TVar directly (because it does not get changed, so there should be no need to lock it) without having to declare an instance that does a readTVar?

    Read the article

  • Why does var evaluate to System.Object in "foreach (var row in table.Rows)"?

    - by DanM
    When I enter this foreach statement... foreach (var row in table.Rows) ...the tooltip for var says class System.Object. I'm confused why it's not class System.Data.DataRow. (In case you're wondering, yes, I have using System.Data at the top of my code file.) If I declare the type explicitly, as in... foreach (DataRow row in table.Rows) ...it works fine with no errors. Also if I do... var numbers = new int[] { 1, 2, 3 }; foreach (var number in numbers) ...var evaluates to struct System.Int32. So the problem is not that var doesn't work in a foreach clause; there's something strange about DataRowCollection where the items don't automatically evaluate to DataRow. But I can't figure out what it is. Does anyone have an explanation?

    Read the article

  • Why do these seemingly similar queries have such drastically different run times?

    - by Jherico
    I'm working with an Oracle DB trying to tune some queries, and I'm having trouble understanding why writing a particular clause in a particular way has such a drastic impact on query performance. Here is a performant version of the query I'm doing: select * from ( select a.*, rownum rn from ( select * from table_foo ) a where rownum < 3 ) where rn >= 2 The same query with the last two lines replaced by ) a where rownum >= 2 and rownum < 3 ) performs horribly; several orders of magnitude worse. ) a where rownum between 2 and 3 ) also performs horribly. I don't understand the magic of the first query or how to apply it to further similar queries.
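
    A hedged explanation of the magic: Oracle assigns rownum to a row only after it passes the WHERE filter. With a filter like rownum >= 2, the first candidate row gets rownum = 1, is rejected, rownum never advances, and so no row ever qualifies; Oracle still grinds through the whole underlying result to discover that. A condition of the form rownum < N, by contrast, is a stopkey that lets Oracle quit after N rows, which is why aliasing rownum as rn inside and filtering on rn outside stays fast. A sketch against the question's table_foo:

      -- Returns no rows, after scanning everything:
      select * from table_foo where rownum >= 2;

      -- Fast pagination: stopkey inside, alias filter outside.
      select *
      from ( select a.*, rownum rn
             from ( select * from table_foo ) a
             where rownum < 3 )
      where rn >= 2;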

    Read the article

  • With a SELECT...WHERE id IN (...), order results by IN()?

    - by Deca
    With a query such as: SELECT * FROM images WHERE id IN (12,9,15,3,1) is it possible to order the results by the contents of the IN clause? The result I'm looking for would be something like: [0] => Array ( [id] => 12 [file_name] => foo ) [1] => Array ( [id] => 9 [file_name] => bar ) [2] => Array ( [id] => 15 [file_name] => baz ) ...
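
    If this is MySQL (the array-style output suggests PHP with MySQL, an assumption), a hedged sketch using FIELD(), which sorts by each id's position in the list:

      SELECT * FROM images
      WHERE id IN (12, 9, 15, 3, 1)
      ORDER BY FIELD(id, 12, 9, 15, 3, 1);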

    Read the article

  • WHERE clause on field defined by sub-query

    - by stUrb
    I have this query which displays some properties and counts the number of references to each from another table: SELECT p.id, p.propName, ( SELECT COUNT(*) FROM propLoc WHERE propLoc.propID = p.id ) AS number FROM property as p WHERE p.category != 'natural' This generates a good table with all the information I want to filter: id | propName | number 3 | Name 1 | 3 4 | Name 2 | 1 5 | Name 3 | 0 6 | Name 4 | 10 etc etc I now want to filter out the properties with number <= 0, so I tried to add AND number > 0, but it fails with Unknown column 'number' in 'where clause'. Apparently you can't filter on a name specified by a subquery? How can I achieve my goal?
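
    The WHERE clause is evaluated before select-list aliases exist, which is why number is unknown there. A hedged sketch of two fixes, using the names from the question; the HAVING form relies on MySQL's non-standard willingness to reference aliases there, while the derived-table form is portable.

      -- MySQL: filter on the alias in HAVING
      SELECT p.id, p.propName,
             (SELECT COUNT(*) FROM propLoc
              WHERE propLoc.propID = p.id) AS number
      FROM property AS p
      WHERE p.category != 'natural'
      HAVING number > 0;

      -- Portable: wrap the query in a derived table
      SELECT t.* FROM (
        SELECT p.id, p.propName,
               (SELECT COUNT(*) FROM propLoc
                WHERE propLoc.propID = p.id) AS number
        FROM property AS p
        WHERE p.category != 'natural'
      ) AS t
      WHERE t.number > 0;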

    Read the article

  • Count, inner join

    - by Urosh
    I have two tables: DRIVER (Driver_Id, First name, Last name, ...) PARTICIPANT IN CAR ACCIDENT (Participant_Id, Driver_Id - foreign key, responsibility - yes or no, ...) Now, I need to find out which drivers participated in accidents where responsibility is 'YES', and how many times. I did this: Select Driver_ID, COUNT (Participant.Driver_ID) as 'Number of accidents' from Participant in car accident where responsibility='YES' group by Driver_ID order by COUNT (Participant.Driver_ID) desc But I need to add the driver's first and last name from the first table (using an inner join, I suppose). I don't know how, because every selected column has to be contained in either an aggregate function or the GROUP BY clause. Please help :)
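
    A hedged sketch, assuming SQL Server-style bracketed identifiers for the table and column names given in the question: join to DRIVER and put the name columns in the GROUP BY alongside the id, which satisfies the aggregate rule.

      SELECT d.Driver_Id, d.[First name], d.[Last name],
             COUNT(p.Driver_Id) AS [Number of accidents]
      FROM [Participant in car accident] AS p
      INNER JOIN Driver AS d ON d.Driver_Id = p.Driver_Id
      WHERE p.responsibility = 'YES'
      GROUP BY d.Driver_Id, d.[First name], d.[Last name]
      ORDER BY COUNT(p.Driver_Id) DESC;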

    Read the article

  • Outer select column value in joined subquery?

    - by Michael DePetrillo
    Is it possible to use a column value from an outer select within a joined subquery? SELECT table1.id, table2.cnt FROM table1 LEFT JOIN (SELECT COUNT(*) as `cnt` FROM table2 where table2.lt > table1.lt and table2.rt < table1.rt) as table2 ON 1; This results in "Unknown column 'table1.lt' in 'where clause'". Here is the db dump. CREATE TABLE IF NOT EXISTS `table1` ( `id` int(1) NOT NULL, `lt` int(1) NOT NULL, `rt` int(4) NOT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1; CREATE TABLE IF NOT EXISTS `table2` ( `id` int(1) NOT NULL, `lt` int(1) NOT NULL, `rt` int(4) NOT NULL) ENGINE=MyISAM DEFAULT CHARSET=latin1; INSERT INTO `table1` (`id`, `lt`, `rt`) VALUES (1, 1, 4); INSERT INTO `table2` (`id`, `lt`, `rt`) VALUES (2, 2, 3);
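
    A hedged sketch of a rewrite: a derived table in a JOIN cannot reference columns of other tables in the FROM list (MySQL only allows that with LATERAL, from 8.0.14), but the same correlation is legal as a scalar subquery in the select list. Names are taken from the question's dump.

      SELECT t1.id,
             (SELECT COUNT(*)
              FROM table2
              WHERE table2.lt > t1.lt
                AND table2.rt < t1.rt) AS cnt
      FROM table1 AS t1;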

    Read the article

  • Update multiple values in a single statement

    - by Kluge
    I have a master table and a detail table, and I want to update some summary values in the master table from the detail table. I know I can update them like this: update MasterTbl set TotalX = (select sum(X) from DetailTbl where DetailTbl.MasterID = MasterTbl.ID) update MasterTbl set TotalY = (select sum(Y) from DetailTbl where DetailTbl.MasterID = MasterTbl.ID) update MasterTbl set TotalZ = (select sum(Z) from DetailTbl where DetailTbl.MasterID = MasterTbl.ID) But, I'd like to do it in a single statement, something like this: update MasterTbl set TotalX = sum(DetailTbl.X), TotalY = sum(DetailTbl.Y), TotalZ = sum(DetailTbl.Z) from DetailTbl where DetailTbl.MasterID = MasterTbl.ID group by MasterID but that doesn't work. I've also tried versions that omit the "group by" clause. I'm not sure whether I'm bumping up against the limits of my particular database (Advantage), or the limits of my SQL. Probably the latter. Can anyone help?
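
    A hedged sketch of a portable single-statement version: instead of UPDATE ... FROM (which not every engine supports), set all three columns from correlated subqueries in one UPDATE. Whether Advantage accepts scalar subqueries in SET is an assumption worth testing.

      UPDATE MasterTbl
      SET TotalX = (SELECT SUM(X) FROM DetailTbl WHERE DetailTbl.MasterID = MasterTbl.ID),
          TotalY = (SELECT SUM(Y) FROM DetailTbl WHERE DetailTbl.MasterID = MasterTbl.ID),
          TotalZ = (SELECT SUM(Z) FROM DetailTbl WHERE DetailTbl.MasterID = MasterTbl.ID);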

    Read the article

  • String path validation

    - by CMAñora
    I have here a string (input from the user) for a file path. I check the string against the following criteria: it must contain no characters that are invalid in a file path, and it must not be an absolute path (\Sample\text.txt). I have tried catching the invalid characters in a catch clause. It works, except for '\'. It will accept 'C:\\Sample\text.txt', which is an invalid file path. The following examples should be invalid paths: :\text.txt :text.txt \:text.txt \text.txt C:\\\text.txt I have been through similar questions posted here, but none of them seemed to solve my issue. What would be the best way to do such a check?

    Read the article

  • SQL SERVER – FT_IFTS_SCHEDULER_IDLE_WAIT – Full Text – Wait Type – Day 13 of 28

    - by pinaldave
    In the last few days during this series, I have gotten many questions about this wait type. It would be great if you read my original wait stats query in the first post, because I have filtered this type out in the WHERE clause. However, I still get questions about it being one of the most frequent wait types people encounter. The truth is, this is background task processing; it really does not matter, and it should be filtered out. There are many new wait types related to Full Text Search that were introduced in SQL Server 2008. If you run the following query, you will be able to find them in the list. Currently there is not enough information available on BOL or anywhere else for all of them. But don’t worry; I will write an in-depth article when I learn more about them. SELECT * FROM sys.dm_os_wait_stats WHERE wait_type LIKE 'FT_%' The result set will contain the following rows. FT_RESTART_CRAWL FT_METADATA_MUTEX FT_IFTSHC_MUTEX FT_IFTSISM_MUTEX FT_IFTS_RWLOCK FT_COMPROWSET_RWLOCK FT_MASTER_MERGE FT_IFTS_SCHEDULER_IDLE_WAIT We have understood so far that there is not much information available. But the problem is, when you have this wait type, what should you do? The answer is to filter it out for the moment (i.e., do not pay attention to it) and focus on other pressing issues in wait stats or performance tuning. Here are two of my informal suggestions, which are totally independent of wait stats: Turn off the Full Text Search service on your system if you are not actually using it on your server. Learn proper Full Text Search methodology. You can get Michael Coles’ book: Pro Full-Text Search in SQL Server 2008. Now I invite you to speak out your suggestions or any input regarding Full Text-related best practices and wait stats issues. Please leave a comment. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQL SERVER – Index Created on View not Used Often – Observation of the View

    - by pinaldave
    I always enjoy writing about concepts on Views. Views are a frequently used concept, so it’s not surprising that I have seen so many misconceptions about this subject. To clear up such misconceptions, I previously wrote the article SQL SERVER – The Limitations of the Views – Eleven and more…. I also wrote a follow-up article wherein I demonstrated that, without even creating an index on the basic table, the query on the View will not use the View. You can read about this demonstration over here: SQL SERVER – Index Created on View not Used Often – Limitation of the View 12. I promised in that post that I would also write an article demonstrating the condition where the index will be used. I got many responses suggesting that I can do that using NOEXPAND; I agree. I have already written about this in my original summary article. Here is a way for you to see how an index created on a View can be utilized. We will do the following steps in this exercise: Create a Table Create a View Create an Index on the View Write SELECT with ORDER BY on the View USE tempdb GO IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]')) DROP VIEW [dbo].[SampleView] GO IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U')) DROP TABLE [dbo].[mySampleTable] GO -- Create SampleTable CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100)) INSERT INTO mySampleTable (ID1,ID2,SomeData) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY o1.name), ROW_NUMBER() OVER (ORDER BY o2.name), o2.name FROM sys.all_objects o1 CROSS JOIN sys.all_objects o2 GO -- Create View CREATE VIEW SampleView WITH SCHEMABINDING AS SELECT ID1,ID2,SomeData FROM dbo.mySampleTable GO -- Create Index on View CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView] ( ID2 ASC ) GO -- Select from view SELECT ID1,ID2,SomeData FROM SampleView ORDER BY ID2 GO When we check the execution plan for this, we clearly find that the index created on the View is utilized. The ORDER BY clause uses the index created on the View. I hope this makes the puzzle of how the index on the View gets used simpler. Again, I strongly recommend reading my earlier series about the limitations of Views, found here: SQL SERVER – The Limitations of the Views – Eleven and more…. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL View, T SQL, Technology

    Read the article

  • Backup Meta-Data

    - by BuckWoody
    I'm working on a PowerShell script to show me the trending durations of my backup activities. The first thing I need is the data, so I looked at the Standard Reports in SQL Server Management Studio, found a report that suited my needs, and pulled out the script it runs, modifying it into this T-SQL script. A few words here - you need to be in the MSDB database for this to run, and you can add a WHERE clause to limit it to a database, timeframe, type of backup, whatever. For that matter, I won't use all of the data in this query in my PowerShell script, but it gives me lots of avenues to graph: SELECT distinct t1.name AS 'DatabaseName' ,(datediff( ss,  t3.backup_start_date, t3.backup_finish_date)) AS 'DurationInSeconds' ,t3.user_name AS 'UserResponsible' ,t3.name AS backup_name ,t3.description ,t3.backup_start_date ,t3.backup_finish_date ,CASE WHEN t3.type = 'D' THEN 'Database' WHEN t3.type = 'L' THEN 'Log' WHEN t3.type = 'F' THEN 'FileOrFilegroup' WHEN t3.type = 'G' THEN 'DifferentialFile' WHEN t3.type = 'P' THEN 'Partial' WHEN t3.type = 'Q' THEN 'DifferentialPartial' END AS 'BackupType' ,t3.backup_size AS 'BackupSizeKB' ,t6.physical_device_name ,CASE WHEN t6.device_type = 2 THEN 'Disk' WHEN t6.device_type = 102 THEN 'Disk' WHEN t6.device_type = 5 THEN 'Tape' WHEN t6.device_type = 105 THEN 'Tape' END AS 'DeviceType' ,t3.recovery_model  FROM sys.databases t1 INNER JOIN backupset t3 ON (t3.database_name = t1.name )  LEFT OUTER JOIN backupmediaset t5 ON ( t3.media_set_id = t5.media_set_id ) LEFT OUTER JOIN backupmediafamily t6 ON ( t6.media_set_id = t5.media_set_id ) ORDER BY backup_start_date DESC I'll munge this into my Excel PowerShell chart script tomorrow. Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • OWB 11gR2 – Degenerate Dimensions

    - by David Allan
    Ever wondered how to build degenerate dimensions in OWB and get the benefits of slowly changing dimensions and cube loading? It's now possible through some changes in 11gR2 that make dimension and cube loading much more flexible. This will let you get the benefits of OWB's surrogate key handling and slowly changing dimension references when loading a fact table that needs degenerate dimensions (see Ralph Kimball's degenerate dimensions design tip). Here we will see how to use the cube operator to load slowly changing, regular and degenerate dimensions. The cube and cube operator can now work with dimensions which have no surrogate key as well as dimensions with surrogates, so you can get the benefit of the cube loading and incorporate the degenerate dimension loading. What you need to do is create a dimension in OWB that is purely used for ETL metadata; the dimension itself is never deployed (its table is, but it has no data), it has no surrogate keys, and it has a single level with a business attribute (the degenerate dimension data) and a dummy attribute, say description, just to pass the OWB validation. When this degenerate dimension is added into a cube, you will need to configure the fact table created and set the 'Deployable' flag to FALSE for the foreign key generated to the degenerate dimension table. The degenerate dimension reference will then be in the cube operator and used when matching. Create the degenerate dimension using the regular wizard. Delete the Surrogate ID attribute; this is not needed. Define a level name for the dimension member (any name). After the wizard has completed, in the editor delete the hierarchy STANDARD that was automatically generated; there is only a single level, so no hierarchy is needed and it shouldn't really be created. Deploy the implementing table DD_ORDERNUMBER_TAB; this needs to be deployed, but with no data (the mapping here will do a left outer join of the source data with the empty degenerate dimension table). Now go ahead and build your cube: use the regular TIMES dimension, for example, and your degenerate dimension DD_ORDERNUMBER; you can add in SCD dimensions, etc. Configure the fact table created and set Deployable to false so the foreign key does not get generated. You can now use the cube in a mapping and load data into the fact table via the cube operator; this will look after surrogate lookups and slowly changing dimension references. If you generate the SQL you will see that the ON clause for matching includes the columns representing the degenerate dimension, as sketched below. Here we have seen how this use case for loading fact tables using degenerate dimensions becomes a whole lot simpler using OWB 11gR2. I'm sure there are other use cases where this mix of dimensions with surrogate and regular identifiers is useful; fact tables partitioned by date columns are another classic example that this will greatly help, making the cube operator much more useful. Good to hear any comments.
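
    To illustrate that last point, a purely hypothetical sketch of the shape of the generated MERGE, with invented table and column names (sales_fact, sales_stage, time_key, order_number, amount); the point is that the degenerate dimension column sits alongside the surrogate key in the ON clause.

      MERGE INTO sales_fact f
      USING (SELECT time_key, order_number, amount FROM sales_stage) s
      ON (    f.time_key     = s.time_key       -- surrogate key from the TIMES dimension
          AND f.order_number = s.order_number ) -- degenerate dimension column, no surrogate
      WHEN MATCHED THEN UPDATE SET f.amount = s.amount
      WHEN NOT MATCHED THEN
        INSERT (time_key, order_number, amount)
        VALUES (s.time_key, s.order_number, s.amount);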

    Read the article
