Search Results

Search found 26878 results on 1076 pages for 'temp table'.

  • MySQL syntax: can't create table

    - by peng
    mysql> create table balance_sheet(
        -> Cash_and_cash_equivalents VARCHAR(20),
        -> Trading_financial_assets VARCHAR(20),
        -> Note_receivable VARCHAR(20),
        -> Account_receivable VARCHAR(20),
        -> Advance_money VARCHAR(20),
        -> Interest_receivable VARCHAR(20),
        -> Dividend_receivable VARCHAR(20),
        -> Other_notes_receivable VARCHAR(20),
        -> Due_from_related_parties VARCHAR(20),
        -> Inventory VARCHAR(20),
        -> Consumptive_biological_assets VARCHAR(20),
        -> Non_current_asset(expire_in_a_year) VARCHAR(20),
        -> Other_current_assets VARCHAR(20),
        -> Total_current_assets VARCHAR(20),
        -> Available_for_sale_financial_assets VARCHAR(20),
        -> Held_to_maturity_investment VARCHAR(20),
        -> Long_term_account_receivable VARCHAR(20),
        -> Long_term_equity_investment VARCHAR(20),
        -> Investment_real_eastate VARCHAR(20),
        -> Fixed_assets VARCHAR(20),
        -> Construction_in_progress VARCHAR(20),
        -> Project_material VARCHAR(20),
        -> Liquidation_of_fixed_assets VARCHAR(20),
        -> Capitalized_biological_assets VARCHAR(20),
        -> Oil_and_gas_assets VARCHAR(20),
        -> Intangible_assets VARCHAR(20),
        -> R&d_expense VARCHAR(20),
        -> Goodwill VARCHAR(20),
        -> Deferred_assets VARCHAR(20),
        -> Deferred_income_tax_assets VARCHAR(20),
        -> Other_non_current_assets VARCHAR(20),
        -> Total_non_current_assets VARCHAR(20),
        -> Total_assets VARCHAR(20),
        -> Short_term_borrowing VARCHAR(20),
        -> Transaction_financial_liabilities VARCHAR(20),
        -> Notes_payable VARCHAR(20),
        -> Account_payable VARCHAR(20),
        -> Item_received_in_advance VARCHAR(20),
        -> Employee_pay_payable VARCHAR(20),
        -> Tax_payable VARCHAR(20),
        -> Interest_payable VARCHAR(20),
        -> Dividend_payable VARCHAR(20),
        -> Other_account_payable VARCHAR(20),
        -> Due_to_related_parties VARCHAR(20),
        -> Non_current_liabilities_due_within_one_year VARCHAR(20),
        -> Other_current_liabilities VARCHAR(20),
        -> Total_current_liabilities VARCHAR(20),
        -> Long_term_loan VARCHAR(20),
        -> Bonds_payable VARCHAR(20),
        -> Long_term_payable VARCHAR(20),
        -> Specific_payable VARCHAR(20),
        -> Estimated_liabilities VARCHAR(20),
        -> Deferred_income_tax_liabilities VARCHAR(20),
        -> Other_non_current_liabilities VARCHAR(20),
        -> Total_non_current_liabilities VARCHAR(20),
        -> Total_liabilities VARCHAR(20),
        -> Paid_in_capital VARCHAR(20),
        -> Contributed_surplus VARCHAR(20),
        -> Treasury_stock VARCHAR(20),
        -> Earned_surplus VARCHAR(20),
        -> Retained_earnings VARCHAR(20),
        -> Translation_reserve VARCHAR(20),
        -> Nonrecurring_items VARCHAR(20),
        -> Total_equity(non) VARCHAR(20),
        -> Minority_interests VARCHAR(20),
        -> Total_equity VARCHAR(20),
        -> Total_liabilities_&_shareholder's_equity VARCHAR(20));
        '>
        '>
    When I press Enter, all I get is the '> continuation prompt and no other reaction. What's wrong?
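    For reference: several of the identifiers above contain characters that are not legal in unquoted MySQL column names, for example Non_current_asset(expire_in_a_year), R&d_expense and Total_liabilities_&_shareholder's_equity. The apostrophe in the last one is the likely culprit for the '> prompt, since it opens a string literal that is never closed. A minimal sketch of the two usual fixes, shown with a shortened column list rather than the full table:

        -- Option 1 (assumes renaming is acceptable): use only letters, digits and underscores
        CREATE TABLE balance_sheet (
            Cash_and_cash_equivalents          VARCHAR(20),
            Non_current_asset_expire_in_a_year VARCHAR(20),
            R_and_d_expense                    VARCHAR(20),
            Total_liabilities_and_shareholders_equity VARCHAR(20)
        );

        -- Option 2: keep the original names by quoting each identifier with backticks
        CREATE TABLE balance_sheet2 (
            `Non_current_asset(expire_in_a_year)`      VARCHAR(20),
            `R&d_expense`                              VARCHAR(20),
            `Total_liabilities_&_shareholder's_equity` VARCHAR(20)
        );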

    Read the article

  • SQL Cartesian product joining table to itself and inserting into existing table

    - by Emma
    I am working in phpMyAdmin using SQL. I want to take the primary key (EntryID) from TableA and create a Cartesian product (if I am using the term correctly) in TableB (an empty table that is already created) for all entries which share the same value for FieldB in TableA, except where the two TableA.EntryID values are equal. So, for example, if the values in TableA were:

    TableA.EntryID  TableA.FieldB
    1               23
    2               23
    3               23
    4               25
    5               25
    6               25

    The result in TableB would be:

    Primary key  EntryID1  EntryID2  FieldD (Default or manually entered)
    1            1         2         Default value
    2            1         3         Default value
    3            2         1         Default value
    4            2         3         Default value
    5            3         1         Default value
    6            3         2         Default value
    7            4         5         Default value
    8            4         6         Default value
    9            5         4         Default value
    10           5         6         Default value
    11           6         4         Default value
    12           6         5         Default value

    I am used to working in Access and this is the first query I have attempted in SQL. I started trying to work out the query and got this far. I know it's not right yet, as I'm still trying to get used to the syntax and pieced this together from various articles I found online. In particular, I wasn't sure where the INSERT INTO text goes (to create what would be an Append Query in Access).

    SELECT EntryID FROM TableA.EntryID TableA.EntryID WHERE TableA.FieldB=TableA.FieldB TableA.EntryID<>TableA.EntryID INSERT INTO TableB.EntryID1 TableB.EntryID2

    After I've got that query right, I need to do a TRIGGER query (I think), so if an entry changes its value in TableA.FieldB (changing its membership of that grouping to another grouping), the Cartesian product will be re-run on THAT entry, unless TableB.FieldD = valueA or valueB (manually entered values). I have been using the Designer tab. Does there have to be a relationship link between TableA and TableB? If so, would it be two links from the EntryID primary key in TableA, one to each EntryID in TableB? I assume this would not work because they are numbered EntryID1 and EntryID2 and the name needs to be the same to set up a relationship? If you can offer any suggestions, I would be very grateful. Research: http://www.fluffycat.com/SQL/Cartesian-Joins/ Cartesian Join example two Q: You said you can have a Cartesian join by joining a table to itself. Show that! Select * From Film_Table T1, Film_Table T2;
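    For what it's worth, a minimal sketch of how the append could look in MySQL, using the column names from the question and assuming TableB's primary key is auto-increment (this is a sketch, not a tested solution):

        INSERT INTO TableB (EntryID1, EntryID2, FieldD)
        SELECT a1.EntryID,
               a2.EntryID,
               'Default value'
        FROM TableA AS a1
        JOIN TableA AS a2
          ON  a1.FieldB  = a2.FieldB
          AND a1.EntryID <> a2.EntryID;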

    Read the article

  • Updating a sql server table with data from another table

    - by David G
    I have two basic SQL Server tables:

    Customer (ID [pk], AddressLine1, AddressLine2, AddressCity, AddressDistrict, AddressPostalCode)
    CustomerAddress (ID [pk], CustomerID [fk], Line1, Line2, City, District, PostalCode)

    CustomerAddress contains multiple addresses for each Customer record. For each Customer record I want to merge in the most recent CustomerAddress record, where "most recent" is determined by the highest CustomerAddress ID value. I've currently got the following:

    UPDATE Customer
    SET AddressLine1 = CustomerAddress.Line1,
        AddressPostalCode = CustomerAddress.PostalCode
    FROM Customer, CustomerAddress
    WHERE Customer.ID = CustomerAddress.CustomerID

    which works, but how can I ensure that the most recent (highest ID) CustomerAddress record is the one used to update the Customer table?
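    One possible shape for that update, sketched with a correlated subquery that pins each customer to its highest CustomerAddress.ID (column names taken from the question; only two address columns are shown, extend as needed):

        UPDATE c
        SET    AddressLine1      = ca.Line1,
               AddressPostalCode = ca.PostalCode
        FROM   Customer c
        JOIN   CustomerAddress ca ON ca.CustomerID = c.ID
        WHERE  ca.ID = (SELECT MAX(ca2.ID)
                        FROM   CustomerAddress ca2
                        WHERE  ca2.CustomerID = c.ID);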

    Read the article

  • What advantages do we have when creating a separate mapping table for two relational tables

    - by Pankaj Upadhyay
    In various open source CMSs, I have noticed that there is a separate table for mapping two related tables. For categories and products, for example, there is a separate product_category_mapping table. This table just has a primary key and two foreign keys, one from the categories table and one from the products table. My question is: what are the benefits of this database design compared to just linking the tables directly by defining a foreign key in either table? Is it just a matter of convenience?
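    For context, a minimal sketch of such a mapping (junction) table; the names here are illustrative rather than taken from any particular CMS. Its main benefit is that it models a many-to-many relationship: a product can sit in several categories and a category can hold several products, which a single foreign key column on either table cannot express, and a key over both columns also prevents duplicate pairings:

        CREATE TABLE product_category_mapping (
            product_id  INT NOT NULL,
            category_id INT NOT NULL,
            PRIMARY KEY (product_id, category_id),
            FOREIGN KEY (product_id)  REFERENCES products (id),
            FOREIGN KEY (category_id) REFERENCES categories (id)
        );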

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have, as well as to see if I've missed anything. So to that end, I give you my table:

    CREATE TABLE [dbo].[lq_ActivityLog](
        [ID] [bigint] IDENTITY(1,1) NOT NULL,
        [PlacementID] [int] NOT NULL,
        [CreativeID] [int] NOT NULL,
        [PublisherID] [int] NOT NULL,
        [CountryCode] [nvarchar](10) NOT NULL,
        [RequestedZoneID] [int] NOT NULL,
        [AboveFold] [int] NOT NULL,
        [Period] [datetime] NOT NULL,
        [Clicks] [int] NOT NULL,
        [Impressions] [int] NOT NULL,
        CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED
        (   [Period] ASC,
            [PlacementID] ASC,
            [CreativeID] ASC,
            [PublisherID] ASC,
            [RequestedZoneID] ASC,
            [AboveFold] ASC,
            [CountryCode] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]

    And now some assumptions and additional information:
    The table currently has 200,000,000 rows.
    PlacementID ranges from 1 to 5000 and should support at least 50,000.
    CreativeID ranges from 1 to 5000 and should support at least 50,000.
    PublisherID ranges from 1 to 500 and should support at least 50,000.
    CountryCode is a 2-character ISO code (e.g. US) and there is already a country table with an integer ID. There are < 300 rows.
    RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    AboveFold has values of -1, 0, or 1 only.
    Period is a date (no time).
    Clicks range from 0 to 5000.
    Impressions range from 0 to 5,000,000.
    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables.
    Here's the current information on the database table's size:

    Design Goals
    This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and also that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.

    Refactor
    There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.
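    As a reference point for the discussion, here is one hedged sketch of the sort of refactoring the numbers above suggest. This is not the author's follow-up answer, just an illustration under stated assumptions: SQL Server 2008 or later (for the date type), the existing country table can be keyed with a smallint, and the surrogate ID column can be dropped because nothing reads it and the clustered key already identifies a row:

        CREATE TABLE [dbo].[lq_ActivityLog](
            [PlacementID]     [int]      NOT NULL,  -- must grow to 50,000+, so int stays
            [CreativeID]      [int]      NOT NULL,
            [PublisherID]     [int]      NOT NULL,
            [CountryID]       [smallint] NOT NULL,  -- FK to the existing country table (< 300 rows)
            [RequestedZoneID] [int]      NOT NULL,
            [AboveFold]       [smallint] NOT NULL,  -- only -1, 0, 1; tinyint cannot hold -1
            [Period]          [date]     NOT NULL,  -- date only, no time component
            [Clicks]          [int]      NOT NULL,
            [Impressions]     [int]      NOT NULL,
            CONSTRAINT [PK_lq_ActivityLog] PRIMARY KEY CLUSTERED
            ([Period] ASC, [PlacementID] ASC, [CreativeID] ASC, [PublisherID] ASC,
             [RequestedZoneID] ASC, [AboveFold] ASC, [CountryID] ASC)
        ) ON [PRIMARY]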

    Read the article

  • How to access a row from af:table out of context

    - by Vijay Mohan
    Scenario: Let's say you have an ADF table in a .jsff page fragment, and the fragment is included as an af:region inside another (parent) page. Your requirement is to access some specific rows from the table and perform some operations on them. Since you are accessing the table outside the context in which it is present, you first have to set up that context, and then you can use the visit-callback mechanism to perform the operations on the table. Here is the sample code:
    =================
    final RichTable table = this.getRichTable();
    FacesContext facesContext = FacesContext.getCurrentInstance();
    VisitContext visitContext =
        RequestContext.getCurrentInstance().createVisitContext(facesContext, null,
            EnumSet.of(VisitHint.SKIP_TRANSIENT, VisitHint.SKIP_UNRENDERED), null);
    // Anonymous call
    UIXComponent.visitTree(visitContext, facesContext.getViewRoot(), new VisitCallback() {
        public VisitResult visit(VisitContext context, UIComponent target) {
            if (table != target) {
                return VisitResult.ACCEPT;
            } else if (table == target) {
                // Here goes the actual logic
                Iterator selection = table.getSelectedRowKeys().iterator();
                while (selection.hasNext()) {
                    Object key = selection.next();
                    // store the original key
                    Object origKey = table.getRowKey();
                    try {
                        table.setRowKey(key);
                        Object o = table.getRowData();
                        JUCtrlHierNodeBinding rowData = (JUCtrlHierNodeBinding) o;
                        Row row = rowData.getRow();
                        System.out.println(row.getAttribute(0));
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    } finally {
                        // restore the original key
                        table.setRowKey(origKey);
                    }
                }
            }
            return VisitResult.COMPLETE;
        }
    });

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance of queries that are sensitive to the content of temporary tables.

    I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which is part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.

    We can work around this issue by disabling temporary table caching, which is done by explicitly executing a DDL statement on the temporary table. One possibility is to execute an ALTER TABLE statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index.

    I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics -> Active Temp Tables.

    The script to understand the issue and the workaround is listed below:

    set nocount on
    set statistics time off
    set statistics io off
    drop table tab7
    go
    create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
    go
    create index test on tab7(c2, c1, c3)
    go
    begin tran
    declare @i int
    set @i = 1
    while @i <= 50000
    begin
    insert into tab7 values (@i, 1, 'a')
    set @i = @i + 1
    end
    commit tran
    go
    insert into tab7 values (50001, 1, 'a')
    go
    checkpoint
    go
    drop proc test_slow
    go
    create proc test_slow @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_slow 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    exec test_slow 2
    go
    drop proc test_with_recompile
    go
    create proc test_with_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    --low reads on 3rd execution as expected for parameter '2'
    exec test_with_recompile 2
    go
    drop proc test_with_alter_table_recompile
    go
    create proc test_with_alter_table_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create a constraint
    --but this might lead to duplicate constraint name error on concurrent usage
    alter table #temp1 add constraint test123 unique(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_alter_table_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_alter_table_recompile 2
    go
    drop proc test_with_index_recompile
    go
    create proc test_with_index_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create an index
    create index test on #temp1(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    set statistics time on
    set statistics io on
    dbcc dropcleanbuffers
    go
    --high reads as expected for parameter '1'
    exec test_with_index_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_index_recompile 2
    go

    Read the article

  • SQL Server: Are temp tables or unions better?

    - by Jonathan Allen
    Generally speaking, for combining a lot of data is it better to use a temp table/temp variable as a staging area or should I just stick to "UNION ALL"? Assumptions: No further processing is needed, the results are sent directly to the client. The client waits for the complete recordset, so streaming results isn't necessary.
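    For reference, a minimal sketch of the two patterns being compared (table and column names are made up for illustration). Given the stated assumptions, the practical difference is whether the combined rows are materialized in tempdb before being returned:

        -- UNION ALL straight to the client
        SELECT col1, col2 FROM SourceA
        UNION ALL
        SELECT col1, col2 FROM SourceB
        UNION ALL
        SELECT col1, col2 FROM SourceC;

        -- Staging the same rows in a temp table first
        CREATE TABLE #staging (col1 INT, col2 VARCHAR(50));
        INSERT INTO #staging (col1, col2) SELECT col1, col2 FROM SourceA;
        INSERT INTO #staging (col1, col2) SELECT col1, col2 FROM SourceB;
        INSERT INTO #staging (col1, col2) SELECT col1, col2 FROM SourceC;
        SELECT col1, col2 FROM #staging;
        DROP TABLE #staging;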

    Read the article

  • ProgrammingError: (1146, "Table 'test_<DB>.<TABLE>' doesn't exist") when running unit test for Django

    - by abigblackman
    I'm running a unit test using the Django framework and get this error. Running the actual code does not have this problem; running the unit tests creates a test database on the fly, so I suspect the issue lies there. The code that throws the error looks like this:

    member = Member.objects.get(email=email_address)

    and the model looks like:

    class Member(models.Model):
        member_id = models.IntegerField(primary_key=True)
        created_on = models.DateTimeField(editable=False, default=datetime.datetime.utcnow())
        flags = models.IntegerField(default=0)
        email = models.CharField(max_length=150, blank=True)
        phone = models.CharField(max_length=150, blank=True)
        country_iso = models.CharField(max_length=6, blank=True)
        location_id = models.IntegerField(null=True, blank=True)
        facebook_uid = models.IntegerField(null=True, blank=True)
        utc_offset = models.IntegerField(null=True, blank=True)
        tokens = models.CharField(max_length=3000, blank=True)

        class Meta:
            db_table = u'member'

    There's nothing too odd there that I can see. The user running the tests has the same permissions to the database server as the user that runs the website. Where else can I look to see what's going wrong? Why is this table not being created?

    Read the article

  • How to rotate table-headline in LaTeX table

    - by pagid
    Hi, is there a way to rotate the "Demo 1", "Demo 2" and "Demo 3" headlines by 90° in the following LaTeX table?

    \documentclass[a4paper,twoside,10pt]{report}
    \begin{document}
    \begin{tabular}{|l|l|l|l|}
    \hline
             & Demo1 & Demo2 & Demo3 \\ \hline
    Person 1 & x     &       &       \\ \hline
    Person 2 & x     &       & x     \\ \hline
    Person 3 & x     & x     &       \\ \hline
    Person 4 &       & x     & x     \\ \hline
    \end{tabular}
    \end{document}

    Thanks

    Read the article

  • MySQL not releasing temp file descriptors

    - by Wakaru44
    Since a few days ago, we've been experiencing some serious problems with our MySQL installation: MySQL keeps opening temporary files (normal behaviour) but these files are never released. The consequence is that, eventually, the disk space is exhausted and we have to restart the service and clean up /tmp manually. Using lsof, we see something like this:

    mysqld 16866 mysql 5u REG 8,3 0 692 /tmp/ibyWJylQ (deleted)
    mysqld 16866 mysql 6u REG 8,3 0 707 /tmp/ibf5adsT (deleted)
    mysqld 16866 mysql 7u REG 8,3 0 728 /tmp/ibGjPRyW (deleted)
    mysqld 16866 mysql 8u REG 8,3 0 5678 /tmp/ibMQDLMZ (deleted)
    mysqld 16866 mysql 13u REG 8,3 0 5679 /tmp/ibQAnM42 (deleted)

    Maybe it's not related, but when we shut down the server, the files are finally freed, and we can see the following warnings in the MySQL log:

    121029 7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1333 user: 'xxx'
    121029 7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1156 user: 'yyy'
    121029 7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1151 user: 'zzz'

    where 'xxx', 'yyy' and 'zzz' are distinct MySQL users (and the only 3 users with active connections to the database). We have a few theories: There is a problem in the OS that keeps file handles open. Could it be possible that the OS "delete" operation blocks the threads until shutdown? This would explain the warnings at shutdown and the fact that the files are finally deleted when the process dies. Until now, data sets were so small that temp files were relatively small and there was enough time to release the file handles without exhausting disk space. We are using MySQL 5.5 on RHEL 6.2 with the default kernel.

    Read the article

  • Escaping %’s in file-/folder-names at the command-line

    - by Synetech
    Does anybody know of a way to access files and directories that have a % in their name (which is valid) from the command line? Specifically, the problem arises when there are two %'s and the text between them happens to correspond to an environment variable. For example, if there is a file called C:\blah\%temp%.txt or a folder called C:\Program Files\%temp%\, none of the following will work because the variable gets expanded:

    > dir "c:\blah\%temp%.txt"
    > dir "c:\blah\^%temp^%.txt"
    > dir "c:\blah\%%temp%%.txt"
    > dir "c:\blah\\%temp\%.txt"
    > dir "c:\program files\%temp%"
    > dir "c:\program files\^%temp^%"
    > dir "c:\program files\%%temp%%"
    > dir "c:\program files\\%temp\%"

    Using wildcards will work, but does not uniquely select the file/folder and may include others:

    > dir "c:\blah\?temp?.txt"        (also shows ztempz.temp, 1tempa.txt, etc.)
    > dir "c:\program files\?temp?"   (likewise)

    (This is frustrating because every now and then, usually when Explorer is restarted for whatever reason, the environment variables stop expanding, and some places where they are used end up creating files or directories with the environment variable literally in the name. For example, because I configured Chromium to store its cache in a subdirectory of %temp%, when the variable expands it is fine, but when it doesn't, Chromium creates a directory actually called %temp% under its own directory and stores the cache, which can get large, there. I want to add a line to my temp-/junk-file cleaning script to automatically delete that folder if it exists, but I cannot figure out how to access it from the command line without resorting to wildcards.)

    Read the article

  • Incremental Statistics Maintenance – what statistics will be gathered after DML occurs on the table?

    - by Maria Colgan
    Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables. When incremental statistics maintenance is enabled for a partitioned table, Oracle accurately generates global-level statistics by aggregating partition-level statistics. As more people begin to adopt this functionality, we have gotten more questions about how they expect incremental statistics to behave in a given scenario. For example, last week we got a question about which partitions should have statistics gathered on them after DML has occurred on the table. The person who asked the question assumed that statistics would only be gathered on partitions that had stale statistics (10% of the rows in the partition had changed). However, what they actually saw when they did a DBMS_STATS.GATHER_TABLE_STATS was that all of the partitions affected by the DML had statistics re-gathered on them.

    This is the expected behavior: incremental statistics maintenance is supposed to yield the same statistics as gathering table statistics from scratch, just faster. This means incremental statistics maintenance needs to gather statistics on any partition that will change the global or table-level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table.

    It might be easier to demonstrate this using an example. Let's take the ORDERS2 table, which is partitioned by month on order_date. We will begin by enabling incremental statistics for the table and gathering statistics on the table. After the statistics gather, the last_analyzed date for the table and all of the partitions now shows 13-Mar-12, and we have a fresh set of column statistics for the ORDERS2 table. We can also confirm that we really did use incremental statistics by querying the dictionary table sys.HIST_HEAD$, which should have an entry for each column in the ORDERS2 table.

    So, now that we have established a good baseline, let's move on to the DML. Information is loaded into the latest partition of the ORDERS2 table once a month. Existing orders may also be updated to reflect changes in their status. Let's assume the following transactions take place on the ORDERS2 table this month. After these transactions have occurred, we need to re-gather statistics, since the partition ORDERS_MAR_2012 now has rows in it, and the number of distinct values and the maximum value for the STATUS column have also changed. Now if we look at the last_analyzed date for the table and the partitions, we will see that the global statistics and the statistics on the partitions where rows have changed due to the update (ORDERS_FEB_2012) and the data load (ORDERS_MAR_2012) have been updated. The column statistics also reflect the changes, with the number of distinct values in the STATUS column increasing to reflect the update.

    So, incremental statistics maintenance will gather statistics on any partition whose data has changed and where that change will impact the global-level statistics.
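    For readers following along, a minimal sketch of the DBMS_STATS calls the walkthrough relies on; the 'SH' schema name is an assumption here (the post only names the ORDERS2 table), so substitute your own owner:

        BEGIN
          -- enable incremental statistics maintenance for the partitioned table
          DBMS_STATS.SET_TABLE_PREFS(ownname => 'SH', tabname => 'ORDERS2',
                                     pname   => 'INCREMENTAL', pvalue => 'TRUE');
          -- gather statistics; global statistics are then aggregated from the
          -- partition-level synopses instead of being re-computed from scratch
          DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SH', tabname => 'ORDERS2');
        END;
        /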

    Read the article

  • Table Alias in SubSonic

    - by rockacola
    How can I assign aliases to tables with SubSonic 2.1? I am trying to reproduce the following query:

    SELECT *
    FROM posts P
    RIGHT OUTER JOIN post_meta X ON P.post_id = X.post_id
    RIGHT OUTER JOIN post_meta Y ON P.post_id = Y.post_id
    WHERE X.meta_key = "category" AND X.meta_value = "technology"
      AND Y.meta_key = "keyword" AND Y.meta_value = "cloud"

    I'm using SubSonic 2.1 and upgrading to 2.2 isn't an option (yet). Thanks.

    Read the article

  • How to JOIN a COUNT from a table, and then affect that COUNT with another JOIN

    - by jakenoble
    Hi, I have three tables:

    Post
    ID  Name
    1   'Something'
    2   'Something else'
    3   'One more'

    Comment
    ID  PostId  ProfileID  Comment
    1   1       1          'Hi my name is'
    2   2       2          'I like cakes'
    3   3       3          'I hate cakes'

    Profile
    ID  Approved
    1   1
    2   0
    3   1

    I want to count the comments for a post where the profile for the comment is approved. I can select the data from Post and then join a count from Comment fine, but this count should be dependent on whether the Profile is approved or not. The result I am expecting (comment counts per post) is:

    PostId  Count
    1       1
    2       0
    3       1

    Thanks for any help.
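    One way to get that shape, sketched here with outer joins so that posts whose only comments come from unapproved profiles still show a zero (table and column names taken from the question; the syntax is generic SQL):

        SELECT p.ID         AS PostId,
               COUNT(pr.ID) AS CommentCount
        FROM Post p
        LEFT JOIN Comment c  ON c.PostId = p.ID
        LEFT JOIN Profile pr ON pr.ID = c.ProfileID
                            AND pr.Approved = 1
        GROUP BY p.ID;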

    Read the article

  • Collapse Tables with specific Table ID? JavaScript

    - by medoix
    I have the below JS at the top of my page and it successfully collapses ALL tables on load. However, I am trying to figure out how to only collapse tables with the ID of "ctable", or is there some other way of specifying which tables to make collapsible, etc.?

    <script type="text/javascript">
    var ELpntr=false;
    function hideall() {
      locl = document.getElementsByTagName('tbody');
      for (i=0;i<locl.length;i++) {
        locl[i].style.display='none';
      }
    }
    function showHide(EL,PM) {
      ELpntr=document.getElementById(EL);
      if (ELpntr.style.display=='none') {
        document.getElementById(PM).innerHTML=' - ';
        ELpntr.style.display='block';
      } else {
        document.getElementById(PM).innerHTML=' + ';
        ELpntr.style.display='none';
      }
    }
    onload=hideall;
    </script>

    Read the article

  • ValueError with multi-table inheritance in Django Admin

    - by jorde
    I created two new classes which inherit from the Entry model:

    class Entry(models.Model):
        LANGUAGE_CHOICES = settings.LANGUAGES
        language = models.CharField(max_length=2, verbose_name=_('Comment language'), choices=LANGUAGE_CHOICES)
        user = models.ForeignKey(User)
        country = models.ForeignKey(Country, null=True, blank=True)
        created = models.DateTimeField(auto_now=True)

    class Comment(Entry):
        comment = models.CharField(max_length=2000, blank=True, verbose_name=_('Comment in English'))

    class Discount(Entry):
        discount = models.CharField(max_length=2000, blank=True, verbose_name=_('Comment in English'))
        coupon = models.CharField(max_length=2000, blank=True, verbose_name=_('Coupon code if needed'))

    After adding these new models to the admin via admin.site.register, I'm getting a ValueError when trying to create a comment or a discount via the admin. Adding entries works fine. Error msg:

    ValueError at /admin/reviews/discount/add/
    Cannot assign "''": "Discount.discount" must be a "Discount" instance.
    Request Method: GET
    Request URL: http://127.0.0.1:8000/admin/reviews/discount/add/
    Exception Type: ValueError
    Exception Value: Cannot assign "''": "Discount.discount" must be a "Discount" instance.
    Exception Location: /Library/Python/2.6/site-packages/django/db/models/fields/related.py in set, line 211
    Python Executable: /usr/bin/python
    Python Version: 2.6.1

    Read the article

  • SQL: Recursively get parent records using Common Table Expressions

    - by Martijn B
    Hi there,
    Suppose you have the following tables, where a sale consists of products, a product can be placed in multiple categories, and categories have a hierarchical structure like:

    Men
      Shoes
        Sport
        Casual
      Watches
    Women
      Shoes
        Sport
        Casual
      Watches

    Tables:

    Sale
    id  name
    1   Sale1

    Product
    id  saleidfk  name
    1   1         a
    2   1         b
    3   1         c
    4   1         d
    5   1         e

    ProductCategory
    productid  categoryid
    1          3
    2          3
    3          4
    4          5
    5          10

    Category
    id  ParentCategoryIdFk  name
    1   null                Men
    2   1                   Shoes
    3   2                   Sport
    4   2                   Casual
    5   1                   Watches
    6   null                Women
    7   6                   Shoes
    8   7                   Sport
    9   7                   Casual
    10  6                   Watches

    Question: Now on my website I want to create a control where only the categories of a certain sale are shown, and where the categories are filled with the products of the sale. I also want to include the hierarchical structure of the categories, so if we have a leaf node, recursively go up to the top node. So with Sale1 I should have a query with the following result:

    Men
      Shoes
        Sport
        Casual
      Watches
    Women
      Watches

    This thing is driving me crazy :-) Thanks in advance! Gr Martijn
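    For reference, a sketch of the kind of recursive common table expression the question is after, assuming SQL Server syntax and the table/column names shown above: the anchor part picks up the categories directly attached to the sale's products, and the recursive part walks up to each parent until the root:

        WITH CategoryPath AS (
            -- categories directly attached to the products of sale 1
            SELECT c.id, c.ParentCategoryIdFk, c.name
            FROM Category c
            WHERE c.id IN (SELECT pc.categoryid
                           FROM ProductCategory pc
                           JOIN Product p ON p.id = pc.productid
                           WHERE p.saleidfk = 1)
            UNION ALL
            -- walk up the hierarchy towards the root categories
            SELECT parent.id, parent.ParentCategoryIdFk, parent.name
            FROM Category parent
            JOIN CategoryPath child ON child.ParentCategoryIdFk = parent.id
        )
        SELECT DISTINCT id, ParentCategoryIdFk, name
        FROM CategoryPath;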

    Read the article

  • One-to-many relationship in the same table in Zend

    - by Behrang
    I have groupTable(group_id, group_name, group_date, group_parent_id); in fact, each group can have many child groups. I created a groupModel and I want to begin coding. Is this the right code to handle it?

    protected $_name = 'group';
    protected $_dependentTables = array('Model_group');
    protected $_referenceMap = array(
        'Model_group' => array(
            'columns'       => array('group_parent_id'),
            'refTableClass' => 'Model_group',
            'refColumns'    => array('group_id'),
            'onDelete'      => self::CASCADE,
            'onUpdate'      => self::RESTRICT
        )
    );

    Read the article

  • Table Partitioning

    - by Ankur Gahlot
    How advantageous is it to use partitioning of tables compared to the normal approach? Is there a sample case or detailed comparative analysis that could statistically (I know this is too strong a word, but it would really help if it were illustrated by some numbers) emphasize the utility of the process?
    Thanks, Ankur
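    For concreteness, a minimal sketch (MySQL-style syntax, hypothetical names) of the kind of range partitioning being asked about; the advantages usually cited are partition pruning on date-bounded queries and the ability to drop or archive whole partitions instead of deleting rows one by one:

        CREATE TABLE sales_log (
            sale_id   INT NOT NULL,
            sale_date DATE NOT NULL,
            amount    DECIMAL(10,2) NOT NULL
        )
        PARTITION BY RANGE (YEAR(sale_date)) (
            PARTITION p2022 VALUES LESS THAN (2023),
            PARTITION p2023 VALUES LESS THAN (2024),
            PARTITION pmax  VALUES LESS THAN MAXVALUE
        );

        -- a query restricted to one year only needs to touch the matching partition
        SELECT SUM(amount)
        FROM sales_log
        WHERE sale_date >= '2023-01-01' AND sale_date < '2024-01-01';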

    Read the article
