Search Results

Search found 20265 results on 811 pages for 'oracle bi 11g scorecards dashboards strategy execution'.


  • Create a trigger that updates a column on one table when a column in another table is updated

    - by GigaPr
    Hi, I have two tables, Order(id, date, note) and Delivery(Id, Note, Date). I want to create a trigger that updates the date in Delivery when the date is updated in Order. I was thinking of something like: CREATE OR REPLACE TRIGGER your_trigger_name BEFORE UPDATE ON Order DECLARE BEGIN UPDATE Delivery SET date = ??? WHERE id = ??? END; How do I get the new date and the row id? Thanks.
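    A minimal sketch of such a row-level trigger, assuming Delivery.Id matches Order.id; the table and column names are renamed here (orders, order_date, delivery_date) because ORDER and DATE are reserved words in Oracle:

      CREATE OR REPLACE TRIGGER trg_sync_delivery_date
      AFTER UPDATE OF order_date ON orders          -- hypothetical names
      FOR EACH ROW
      BEGIN
        UPDATE Delivery d
           SET d.delivery_date = :NEW.order_date    -- :NEW holds the updated row's values
         WHERE d.Id = :NEW.id;                      -- :NEW.id identifies the updated order
      END;
      /

    The :OLD and :NEW pseudo-records available in a FOR EACH ROW trigger are how the updated date and the row id are obtained.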

    Read the article

  • Greatest not null column

    - by Álvaro G. Vicario
    I need to update a row with a formula based on the largest value of two DATETIME columns. I would normally do this: GREATEST(date_one, date_two) However, both columns are allowed to be NULL. I need the greatest date even when the other is NULL (of course, I expect NULL when both are NULL) and GREATEST() returns NULL when one of the columns is NULL. This seems to work: GREATEST(COALESCE(date_one, date_two), COALESCE(date_two, date_one)) But I wonder... am I missing a more straightforward method?
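    A hedged alternative that reads a little more explicitly is a CASE expression over the same two nullable columns:

      CASE
        WHEN date_one IS NULL THEN date_two
        WHEN date_two IS NULL THEN date_one
        ELSE GREATEST(date_one, date_two)
      END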

    Read the article

  • Not sure how to use Decode, NVL, and/or isNull (or something else?) in this situation

    - by RSW
    I have a table of orders for particular products, and a table of products that are on sale. (It's not ideal database structure, but that's out of my control.) What I want to do is outer join the order table to the sale table via product number, but I don't want to include any particular data from the sale table, I just want a Y if the join exists or N if it doesn't in the output. Can anyone explain how I can do this in SQL? Thanks in advance!
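    One hedged way to do this with a LEFT OUTER JOIN and a CASE expression; the table and column names (orders, sale, product_no) are illustrative:

      SELECT o.*,
             CASE WHEN s.product_no IS NULL THEN 'N' ELSE 'Y' END AS on_sale
      FROM   orders o
      LEFT OUTER JOIN sale s
             ON s.product_no = o.product_no;

    In Oracle, NVL2(s.product_no, 'Y', 'N') or a DECODE would work equally well; the CASE form is the portable one.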

    Read the article

  • Delete rows with the SQL LIKE operator using data from another table

    - by Captastic
    Hi all, I am trying to delete rows from a table ("lovalarm") where a field ("pointid") is LIKE any one of a number of strings. Currently I am entering them all manually, but I need to be able to handle a list of over 100,000 options. My thought is to have a table ("lovdata") containing all possible strings and to run a query that deletes rows where the field is LIKE any of the strings in the other table. Can anyone point me in the right direction as to if/how I can use LIKE in this way? Many thanks, Cap
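    A hedged sketch using a correlated EXISTS, assuming lovdata holds its LIKE patterns (e.g. 'ABC%') in a column named pattern:

      DELETE FROM lovalarm a
      WHERE EXISTS (
              SELECT 1
              FROM   lovdata d
              WHERE  a.pointid LIKE d.pattern   -- delete the row if it matches any pattern
            );

    With 100,000+ patterns this can be slow, since LIKE against a column value generally defeats index use; if the patterns are really fixed-length prefixes, comparing an indexed prefix (e.g. SUBSTR(a.pointid, 1, n)) to a plain column would be cheaper.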

    Read the article

  • Multiple column Union Query without duplicates

    - by Adam Halegua
    I'm trying to write a union query with multiple columns from two different tables, but for some reason the second column of the second SELECT statement isn't showing up in the output. I don't know if that painted the picture properly, but here is my code:

      SELECT empno, job FROM EMP WHERE job = 'MANAGER'
      UNION
      SELECT empno, empstate FROM EMPADDRESS WHERE empstate = 'NY'
      ORDER BY empno

    The output looks like:

      EMPNO  JOB
      4600   NY
      5300   MANAGER
      5300   NY
      7566   MANAGER
      7698   MANAGER
      7782   MANAGER
      7782   NY
      7934   NY
      9873   NY

    Instead of 5300 and 7782 appearing twice, I thought empstate would appear next to job in the output, and for all other empnos I thought the values in those fields would be null. Am I misunderstanding unions, or is this how they are supposed to work? Thanks for any help in advance.
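    UNION stacks rows vertically, so job and empstate end up sharing one output column rather than sitting side by side. If the goal is one row per employee with job and state next to each other, a join is the usual tool; a hedged sketch, assuming empno is the key linking the two tables:

      SELECT e.empno, e.job, a.empstate
      FROM   EMP e
      LEFT JOIN EMPADDRESS a
             ON a.empno = e.empno AND a.empstate = 'NY'
      WHERE  e.job = 'MANAGER';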

    Read the article

  • Does the order of the columns in a SELECT statement make a difference?

    - by Frank Computer
    This question was inspired by a previous question posted on SO, "Does the order of the WHERE clause make a difference?". Would it improve a SELECT statement's performance if the columns used in the WHERE section are placed at the beginning of the SELECT statement? Example: SELECT customer.id, transaction.id, transaction.efective_date, transaction.a, [...] FROM customer, transaction WHERE customer.id = transaction.id; I do know that limiting the list of columns to only the needed ones in a SELECT statement improves performance compared to using SELECT *, because the column list is smaller.

    Read the article

  • Alternative to the LAG SQL function

    - by mahen
    I have a table with data like this:

      Month   Book_Type   Sold_in_Dollars
      Jan     A           100
      Jan     B           120
      Feb     A           50
      Mar     A           60
      Mar     B           30

    and so on. I have to calculate the expected sales for each month and book type based on the last two months' sales. So for March and type A it would be (100+50)/2 = 75; for March and type B it is 120/1, since there is no data for February. I was trying to use the LAG function, but it wouldn't work since data is missing in a few rows. Any ideas on this?
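    A hedged sketch using an analytic AVG over a value-based window, assuming the month is (or can be converted to) a number month_no, so that missing months are skipped by RANGE rather than by fixed LAG offsets:

      SELECT month_no,
             book_type,
             AVG(sold_in_dollars) OVER (
               PARTITION BY book_type
               ORDER BY month_no
               RANGE BETWEEN 2 PRECEDING AND 1 PRECEDING
             ) AS expected_sales          -- averages only the rows that exist in the prior two months
      FROM   book_sales;                  -- table name is illustrative

    Because the window is value-based, March/B averages only the January row (120/1), while March/A averages January and February (75).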

    Read the article

  • sql exception handling

    - by christine33990
    CREATE OR REPLACE PROCEDURE p_createLocaltable IS
      table_already_exist EXCEPTION;
      PRAGMA EXCEPTION_INIT (table_already_exist, -00955);
    BEGIN
      create table local_table as select * from supplied_table where rownum < 1;
    EXCEPTION
      when table_already_exist then
        DBMS_OUTPUT.put_line('Table already exists , does not need to recreate it');
    END;

    Can anyone see any problem with the above code?
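    The likely problem: static DDL is not allowed inside PL/SQL, so the block will not even compile. A hedged rewrite using EXECUTE IMMEDIATE (the exception number -955 is the same as -00955):

      CREATE OR REPLACE PROCEDURE p_createLocaltable IS
        table_already_exist EXCEPTION;
        PRAGMA EXCEPTION_INIT(table_already_exist, -955);
      BEGIN
        -- DDL must be issued dynamically from PL/SQL
        EXECUTE IMMEDIATE
          'CREATE TABLE local_table AS SELECT * FROM supplied_table WHERE ROWNUM < 1';
      EXCEPTION
        WHEN table_already_exist THEN
          DBMS_OUTPUT.put_line('Table already exists, no need to recreate it');
      END;
      /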

    Read the article

  • SQL Modify question

    - by Jeff
    I need to update a lot of values in a table in SQL if the inactivity is greater than 30 days. I have:

      UPDATE VERSION SET isActive = 0
      WHERE customerNo = (
        SELECT c.VersionNo
        FROM Activity b
        INNER JOIN VERSION c ON b.VersionNo = c.VersionNo
        WHERE (Months_between(sysdate, b.Activitye) > 30)
      );

    It only works for one value though; if more than one is returned, it fails. What am I missing here? If someone could educate me on what is going on, I'd also appreciate it.
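    The = comparison expects a single-row subquery, so ORA-01427 is raised as soon as more than one VersionNo comes back. A hedged sketch using IN instead (this also assumes the intended comparison column is the version number rather than customerNo):

      UPDATE VERSION
      SET    isActive = 0
      WHERE  versionNo IN (
               SELECT c.VersionNo
               FROM   Activity b
               INNER JOIN VERSION c ON b.VersionNo = c.VersionNo
               WHERE  MONTHS_BETWEEN(SYSDATE, b.Activitye) > 30
             );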

    Read the article

  • Performance: subquery or join?

    - by Auro
    Hello, I have a little question about the performance of a subquery versus joining another table:

      INSERT INTO Original.Person ( PID, Name, Surname, SID )
      ( SELECT ma.PID_new, TBL.Name, ma.Surname, TBL.SID
        FROM Copy.Person TBL, original.MATabelle MA
        WHERE TBL.PID = p_PID_old
        AND TBL.PID = MA.PID_old );

    This is my SQL, and it runs around 1 million times or more. My question is: what would be faster, changing TBL.SID to (SELECT new FROM helptable WHERE old = tbl.sid), or adding helptable to the FROM clause and doing the join in the WHERE? Greets, Auro
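    A hedged sketch of the join variant, assuming helptable(old, new) maps old SIDs to new ones and that every SID has a match (otherwise an outer join would be needed):

      INSERT INTO Original.Person (PID, Name, Surname, SID)
      SELECT ma.PID_new, tbl.Name, ma.Surname, h.new
      FROM   Copy.Person tbl
      JOIN   original.MATabelle ma ON ma.PID_old = tbl.PID
      JOIN   helptable h           ON h.old      = tbl.SID
      WHERE  tbl.PID = p_PID_old;

    A scalar subquery in the SELECT list is executed per row (subject to caching), so over millions of executions the join form usually gives the optimizer more freedom.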

    Read the article

  • How to check whether an EXCEPTION block exists for the main PL/SQL block or routine

    - by user1297211
    I am trying to write a validator that checks whether an EXCEPTION block exists for the main body of a PL/SQL block or routine (highlighted in bold). E.g.:

      DECLARE
        some data
        PROCEDURE xyx IS
        BEGIN
          ....
        EXCEPTION
          ..
        END;
      BEGIN
        some data
        BEGIN
          ....
        EXCEPTION
          ..
        END;
      **EXCEPTION**
        some data
        BEGIN
          ....
        EXCEPTION
          ..
        END;
      END;

    This is a simple example; there can be many other scenarios, but my need is to find whether an EXCEPTION block is available for the main block of the PL/SQL code. Please let me know if you have any suggestions. Thanks.

    Read the article

  • How is IF NOT (i_ReLaunch = 1 AND (dt_enddate IS NOT NULL)) evaluated in Oracle?

    - by Phani Kumar PV
    How will IF NOT (i_ReLaunch = 1 AND (dt_enddate IS NOT NULL)) be evaluated in Oracle 10g when the input value of i_ReLaunch is NULL and dt_enddate is not NULL? According to the rules in normal C#, with those values it would evaluate as IF(NOT(false AND true)) = IF(NOT(false)) = IF(true), which implies it should enter the branch, but that is not what happens. Can someone let me know where I am going wrong?
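    A hedged illustration of the three-valued logic involved: when i_ReLaunch is NULL, the comparison i_ReLaunch = 1 is UNKNOWN (not FALSE), NOT(UNKNOWN AND TRUE) is still UNKNOWN, and an IF only takes the branch when its condition is TRUE. Wrapping the variable in NVL restores the C#-style behaviour:

      IF NOT (NVL(i_ReLaunch, 0) = 1 AND dt_enddate IS NOT NULL) THEN
        -- NVL turns NULL into 0, so the comparison yields FALSE rather than UNKNOWN,
        -- and NOT(FALSE AND TRUE) = TRUE, so this branch is entered.
        NULL;   -- placeholder for the real branch body
      END IF;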

    Read the article

  • Replicating master-table mappings in transaction tables

    - by NoDisplay
    I have three master tables for location information:

      Country {ID, Name}
      State   {ID, Name, CountryID}
      City    {ID, Name, StateID}

    Now I have one transaction table called Person which holds the person's name and location information. My question is: shall I have only CityID in the Person table, like Person {ID, Name, CityID}, and a view over a join query which gives me detail like Person {ID, Name, City, State, Country}, or shall I replicate the mapping as Person {ID, Name, CityID, StateID, CountryID}? Please suggest which you feel should be selected and why, and if there is any other option available, please suggest it. Thanks in advance.
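    A hedged sketch of the normalized option (CityID only on Person) with a view that reassembles the full location; the names follow the question:

      CREATE VIEW person_location_vw AS
      SELECT p.ID, p.Name,
             c.Name  AS City,
             s.Name  AS State,
             co.Name AS Country
      FROM   Person  p
      JOIN   City    c  ON c.ID  = p.CityID
      JOIN   State   s  ON s.ID  = c.StateID
      JOIN   Country co ON co.ID = s.CountryID;

    Storing StateID and CountryID on Person as well duplicates facts that City already determines, so they can drift out of sync; the single CityID plus a view is usually the safer default.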

    Read the article

  • Create a trigger for an auto-increment id and a default Unix datetime

    - by user1804985
    Can anyone help me create a trigger for an auto-incrementing fld_id and a Unix datetime? My table fields are fld_id (int), fld_date (number), fld_value (varchar2). My insert queries are:

      insert into table (fld_value) values ('xxx');
      insert into table (fld_value) values ('yyy');

    I need the table records to look like this:

      fld_id  fld_date    fld_value
      1       1354357476  xxx
      2       1354357478  yyy

    Please help me create this; I haven't been able to do it.
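    A hedged sketch, assuming a sequence has already been created (CREATE SEQUENCE seq_fld_id;) and that the table is really named something other than the reserved word table (your_table is used as a placeholder). fld_date is filled with seconds since 1970-01-01 UTC:

      CREATE OR REPLACE TRIGGER trg_your_table_bi
      BEFORE INSERT ON your_table
      FOR EACH ROW
      BEGIN
        :NEW.fld_id   := seq_fld_id.NEXTVAL;   -- direct assignment needs 11g+; older versions use SELECT ... INTO ... FROM dual
        :NEW.fld_date := ROUND((CAST(SYS_EXTRACT_UTC(SYSTIMESTAMP) AS DATE)
                                - DATE '1970-01-01') * 86400);
      END;
      /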

    Read the article

  • Fill data gaps without UNION

    - by Dave Jarvis
    Problem: there are data gaps that need to be filled, possibly using PARTITION BY.

    Query statement. The select statement reads as follows:

      SELECT count( r.incident_id ) AS incident_tally, r.severity_cd, r.incident_typ_cd
      FROM report_vw r
      GROUP BY r.severity_cd, r.incident_typ_cd
      ORDER BY r.severity_cd, r.incident_typ_cd

    Code tables. The severity codes and incident type codes are from severity_vw and incident_type_vw.

    Actual result data (tally, severity, type):

      36  0  ENVIRONMENT
      1   1  DISASTER
      27  1  ENVIRONMENT
      4   2  SAFETY
      1   3  SAFETY

    Required result data:

      36  0  ENVIRONMENT
      0   0  DISASTER
      0   0  SAFETY
      27  1  ENVIRONMENT
      0   1  DISASTER
      0   1  SAFETY
      0   2  ENVIRONMENT
      0   2  DISASTER
      4   2  SAFETY
      0   3  ENVIRONMENT
      0   3  DISASTER
      1   3  SAFETY

    Any ideas how to use PARTITION BY (or JOINs) to fill in the zero counts?
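    A hedged sketch that cross-joins the two code tables to generate every (severity, type) pair and then outer-joins the report; the column names on the code views (severity_cd, incident_typ_cd) are assumptions:

      SELECT COUNT(r.incident_id)  AS incident_tally,
             s.severity_cd,
             t.incident_typ_cd
      FROM   severity_vw s
      CROSS JOIN incident_type_vw t
      LEFT JOIN report_vw r
             ON  r.severity_cd     = s.severity_cd
             AND r.incident_typ_cd = t.incident_typ_cd
      GROUP BY s.severity_cd, t.incident_typ_cd
      ORDER BY s.severity_cd, t.incident_typ_cd;

    COUNT(r.incident_id) ignores the NULLs produced by the outer join, so missing combinations come back as 0.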

    Read the article

  • Difference between INSERT INTO and INSERT ALL INTO

    - by emily soto
    While I was inserting some records into a table I found the following:

      INSERT INTO T_CANDYBAR_DATA
      SELECT CONSUMER_ID, CANDYBAR_NAME, SURVEY_YEAR, GENDER, 1 AS STAT_TYPE, OVERALL_RATING
      FROM CANDYBAR_CONSUMPTION_DATA
      UNION
      SELECT CONSUMER_ID, CANDYBAR_NAME, SURVEY_YEAR, GENDER, 2 AS STAT_TYPE, NUMBER_BARS_CONSUMED
      FROM CANDYBAR_CONSUMPTION_DATA;
      -- 79 rows inserted.

      INSERT ALL
        INTO t_candybar_data VALUES (consumer_id, candybar_name, survey_year, gender, 1, overall_rating)
        INTO t_candybar_data VALUES (consumer_id, candybar_name, survey_year, gender, 2, number_bars_consumed)
      SELECT * FROM candybar_consumption_data;
      -- 86 rows inserted.

    I have read somewhere that INSERT ALL INTO automatically unions, so why is this difference showing?
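    The difference is almost certainly deduplication: UNION removes duplicate rows before the insert, whereas INSERT ALL writes every row produced by every INTO clause. A hedged check is to rerun the first statement with UNION ALL, which should also insert 86 rows:

      INSERT INTO T_CANDYBAR_DATA
      SELECT consumer_id, candybar_name, survey_year, gender, 1 AS stat_type, overall_rating
      FROM   candybar_consumption_data
      UNION ALL
      SELECT consumer_id, candybar_name, survey_year, gender, 2 AS stat_type, number_bars_consumed
      FROM   candybar_consumption_data;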

    Read the article

  • Select distinct ... inner join vs. select ... where id in (...)

    - by Tonio
    I'm trying to create a subset of a table (as a materialized view), defined as those records which have a matching record in another materialized view. For example, let's say I have a Users table with user_id and name columns, and a Log table, with entry_id, user_id, activity, and timestamp columns. First I create a materialized view of the Log table, selecting only those rows with timestamp some_date. Now I want a materialized view of the Users referenced in my snapshot of the Log table. I can either create it as select * from Users where user_id in (select user_id from Log_mview), or I can do select distinct u.* from Users u inner join Log_mview l on u.user_id = l.user_id (need the distinct to avoid multiple hits from users with multiple log entries). The former seems cleaner and more elegant, but takes much longer. Am I missing something? Is there a better way to do this?
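    A hedged third option is a semi-join written with EXISTS, which avoids both the DISTINCT and materializing the full IN list:

      SELECT u.*
      FROM   Users u
      WHERE  EXISTS (SELECT 1
                     FROM   Log_mview l
                     WHERE  l.user_id = u.user_id);

    Many optimizers rewrite IN and EXISTS to the same semi-join plan, so a large timing gap is more likely down to the plan chosen (for example, a missing index on Log_mview.user_id) than to the syntax itself.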

    Read the article

  • Calculating sum of activity

    - by Maddy
    I have a table with the following kind of information (activity, cost, order, date, other information):

      activity  cost  order
      10        1     100
      20        2     100
      10        1     100
      30        4     100
      40        4     100
      20        2     100
      40        4     100
      20        2     100
      10        1     101
      10        1     101
      20        1     101

    My requirement is to get the sum of all activities over a work order. E.g. for order 100: 1+2+4+4 = 11, that is 1 (for activity 10), 2 (for activity 20), 4 (for activity 30), etc. I tried with GROUP BY, but it takes a lot of time to calculate; there are over 100,000 records in the warehouse. Is there a more efficient way?

      SELECT SUM(MIN(cost)) FROM COST_WAREHOUSE a WHERE order = 100 GROUP BY (order, ACTIVITY)
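    A hedged alternative to SUM(MIN(cost)) is to collapse to one cost per (order, activity) in an inline view and then sum per order; since ORDER is a reserved word, the column is assumed here to really be called something like order_no:

      SELECT order_no, SUM(activity_cost) AS order_total
      FROM  (SELECT order_no, activity, MIN(cost) AS activity_cost
             FROM   cost_warehouse
             GROUP BY order_no, activity)
      GROUP BY order_no;

    With around 100,000 rows this should be quick; a composite index on (order_no, activity, cost) would let the inner GROUP BY be satisfied from the index alone.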

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (We just go up to 45000 rows since we know the bigger picture now.) Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that Option (RECOMPILE) hint.

Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms. If both Table Variables use a unique constraint, then the query takes 36 Ms. If neither table contains a Primary Key and both are Table Variables, it took 116383 Ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variable and the other is a temporary table, it took 113 Ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms. If both tables are temporary tables and both have primary keys, it took 46 Ms. Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use option Recompile? If you take the rogue query and add the hint, then suddenly, the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis.
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use option (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble (well, slow queries) unless you give the table hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).

Kevin’s Skew. In describing the table-size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100000 rows in which the key column was one number (1). To this was added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in the table, then it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, then the plans were identical across a huge range.

Conclusions. Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however, since, as the number of rows increase, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
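A minimal T-SQL sketch of the core pattern the article tests: joining two table-variable heaps and asking the optimizer to re-estimate with OPTION (RECOMPILE). The row counts and column names are illustrative, not the article's actual test harness:

    -- Two table variables declared as heaps: no primary key, no statistics.
    DECLARE @a TABLE (id INT);
    DECLARE @b TABLE (id INT);

    -- Load enough rows that cardinality estimates start to matter.
    INSERT INTO @a (id)
    SELECT TOP (50000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_objects c1 CROSS JOIN sys.all_objects c2;

    INSERT INTO @b (id)
    SELECT id FROM @a WHERE id % 3 = 0;

    SELECT COUNT(*)
    FROM   @a a
    JOIN   @b b ON b.id = a.id
    OPTION (RECOMPILE);   -- without this hint, the optimizer guesses tiny row counts for @a and @b

Declaring id as a PRIMARY KEY in both table variables is the other cure the article describes; either change gives the optimizer enough to avoid the pathological plan.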

    Read the article

  • Step Aside Google

    - by David Dorf
    Step aside Google. While search will always be a huge part of the web, I can see a day in the not-too-distant future where search takes a backseat to the social graph. Links between pages will give way to relationships between people, including context like location. What does this mean for retail? It means your e-commerce strategy will slowly transition to an f-commerce strategy. Remember when a large portion of the online population was held captive inside the walls of AOL? All the commercials listed an AOL keyword, not a web address, because that's where the majority of people surfed. Now, people are spending a huge amount of time in Facebook (despite Betty White's proclamation that it's a big waste of time). According to Facebook, users spend 500 billion minutes per month on the site. Selling products where consumers are spending their time makes sense. The power of Like and Share is the most effective approach to marketing. More and more stores are popping up on Facebook, and soon they will be the front-end to e-commerce systems. As sites adopt the Facebook Open Graph API, users will have a harder time distinguishing the open web from their Facebook experience, including shopping. Of course e-commerce sites won't go away, but a large portion of their traffic will emanate from Facebook and in some cases Facebook will act as the front-end for the web store. Ignore Facebook Open Graph at your peril. In a Mashable article, Mitchell Harper made several predictions about how e-commerce will change based on Facebook. His five points are not far-fetched at all, so we need to watch this space carefully.

    Read the article
