Search Results

Search found 31357 results on 1255 pages for 'database indexes'.

Page 17/1255 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Database Table of Boolean Values

    - by guazz
    What's the best method of storing a large number of booleans in a database table? Should I create a column for each boolean value, or is there a more optimal method? Employee table: IsHardWorking, IsEfficient, IsCrazy, IsOverworked, IsUnderpaid, ... etc.
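
    A minimal, MySQL-flavoured sketch of the two shapes usually weighed here (the flag names are the asker's; the trait-table variant is only an illustration, not the asker's schema):

      -- Option A: one flag column per boolean
      CREATE TABLE employee (
        employee_id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
        is_hardworking TINYINT(1) NOT NULL DEFAULT 0,
        is_efficient   TINYINT(1) NOT NULL DEFAULT 0,
        is_crazy       TINYINT(1) NOT NULL DEFAULT 0,
        is_overworked  TINYINT(1) NOT NULL DEFAULT 0,
        is_underpaid   TINYINT(1) NOT NULL DEFAULT 0,
        PRIMARY KEY (employee_id)
      ) ENGINE = InnoDB;

      -- Option B: a trait table, useful when new flags keep appearing
      CREATE TABLE employee_trait (
        employee_id INT UNSIGNED NOT NULL,
        trait       VARCHAR(30)  NOT NULL,   -- e.g. 'hardworking', 'underpaid'
        PRIMARY KEY (employee_id, trait)
      ) ENGINE = InnoDB;

    Option A keeps queries simple; option B avoids an ALTER TABLE every time a new flag is needed.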

    Read the article

  • Database design for heavy timed data logging

    - by user293995
    Hi, I have an application where data arrives in batches of 40,000 rows, and I have about 5 million rows to handle (a 500 MB MySQL 5.0 database). At the moment all of those rows are stored in a single table, which makes it slow to update and hard to back up. What kind of schema is used in this type of application to keep the data accessible long term without the problems of an oversized table, while allowing easy backups and fast reads/writes? Is PostgreSQL better than MySQL for this purpose? Thanks in advance, best regards.
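
    One common approach (an illustrative sketch, not from the question, and it assumes MySQL 5.1+ since partitioning is not available in 5.0): partition the log table by time, so old ranges can be backed up or dropped as whole partitions rather than row by row.

      -- Hypothetical log table, range-partitioned by month
      CREATE TABLE sensor_log (
        logged_at DATETIME NOT NULL,
        device_id INT      NOT NULL,
        value     DOUBLE,
        KEY idx_device_time (device_id, logged_at)
      )
      PARTITION BY RANGE (TO_DAYS(logged_at)) (
        PARTITION p2010_04 VALUES LESS THAN (TO_DAYS('2010-05-01')),
        PARTITION p2010_05 VALUES LESS THAN (TO_DAYS('2010-06-01')),
        PARTITION pmax     VALUES LESS THAN MAXVALUE
      );

      -- Retiring a month is then a metadata operation, not a row-by-row delete:
      -- ALTER TABLE sensor_log DROP PARTITION p2010_04;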

    Read the article

  • Good tool to visualise database schema?

    - by Mat
    Are there any good tools for visualising a pre-existing database schema? I'm using MySQL if it matters. I'm currently using MySQL Workbench to process an SQL create script dump, but it's clunky, slow and a manual process to drag all the tables about (which would be okay if it wasn't so slow).

    Read the article

  • internal implementation of database Queries

    - by harigm
    In my experience I have used many queries (SELECT, ORDER BY, WHERE clauses, etc.) in MySQL, SQL Server, Oracle and so on. For a moment I wondered: 1) how are these queries implemented internally? 2) which language do the database vendors use? 3) is it a programming language, and if so, which one? 4) what kind of environment is required to implement such a complex database?

    Read the article

  • MultiOS "Jet Database" for C++/Qt?

    - by Airjoe
    Hopefully I can articulate this well: I'm porting an application I made years ago from VB6 (I know, I know!) to C++/Qt. In my original application, one thing I liked was that I didn't need an actual SQL server running, I could just use MS Access .mdb files. I was wondering if something similar exists for Qt that will work on multiple OSes - a database stored in a file, pretty much, but that I can still run SQL queries with. Not sure if something like this exists or not, but any help appreciated, thanks!

    Read the article

  • (database design): Which tables should be created for all kinds of files (images, attached email files,

    - by meyosef
    Hi, I'm new to database design and have a question, along with a few solutions of my own. What do you think? Which tables should be created for all the kinds of files (images, attached email files, text files that store the email body, etc.) stored in my online store?

    Option 1: use a separate table for file types:
    files { id, files_types_id (FK), file_path, file_extension }
    files_types { id, type_name (unique) }

    Option 2: use a boolean field for each file type:
    files { id, file_path, file_extension, is_image_main, is_image_icon, is_image_logo, is_pdf_file, is_text_file }

    Option 3: use one ENUM field 'file_type':
    files { id, file_path, file_extension, file_type ENUM(image_main, image_icon, image_logo, pdf, text) }

    Thank you, Yosef
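
    A minimal MySQL-flavoured sketch of option 1 (the table and column names are the asker's; the types and sizes are assumptions):

      CREATE TABLE files_types (
        id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
        type_name VARCHAR(50)  NOT NULL,          -- 'image_main', 'image_icon', 'pdf', ...
        PRIMARY KEY (id),
        UNIQUE KEY uq_type_name (type_name)
      ) ENGINE = InnoDB;

      CREATE TABLE files (
        id             INT UNSIGNED NOT NULL AUTO_INCREMENT,
        files_types_id INT UNSIGNED NOT NULL,
        file_path      VARCHAR(255) NOT NULL,
        file_extension VARCHAR(10),
        PRIMARY KEY (id),
        FOREIGN KEY (files_types_id) REFERENCES files_types (id)
      ) ENGINE = InnoDB;

    Adding a new file type is then an INSERT into files_types rather than an ALTER TABLE, which is the usual argument for option 1 over options 2 and 3.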

    Read the article

  • Database Owner Conundrum

    - by Johnm
    Have you ever restored a database from a production environment on Server A into a development environment on Server B and had some items, such as Service Broker, mysteriously cease functioning? You might want to consider reviewing the database owner property of the database.

    The Scenario: Recently, I was developing some messaging functionality that utilized the Service Broker feature of SQL Server in a development environment. Within the instance of the development environment resided two databases: one was a restored version of a production database that we will call "RestoreDB"; the second database was a brand new database that has yet to exist in the production environment that we will call "DevDB". The goal is to set up a communication path between RestoreDB and DevDB that will later be implemented into the production database. After implementing all of the Service Broker objects that are required to communicate within a database as well as between two databases on the same instance, I found myself a bit confounded. My testing was showing that the communication was successful when it was occurring internally within DevDB; but the communication between RestoreDB and DevDB did not appear to be working.

    Profiler to the rescue: After carefully reviewing my code for any misspellings, missing commas or any other minor items that might be a syntactical cause of failure, I decided to launch Profiler to aid in the troubleshooting. After simulating the cross database messaging, I noticed the following error appearing in Profiler: "An exception occurred while enqueueing a message in the target queue. Error: 33009, State: 2. The database owner SID recorded in the master database differs from the database owner SID recorded in database '[Database Name Here]'. You should correct this situation by resetting the owner of database '[Database Name Here]' using the ALTER AUTHORIZATION statement." Now, this error message is a helpful one. Not only does it identify the issue in plain language, it also provides a potential solution. An execution of the following query, which utilizes the catalog view sys.transmission_queue, revealed the same error message for each communication attempt:

      SELECT *
      FROM   sys.transmission_queue;

    Seeing the situation as a learning opportunity, I dove a bit deeper.

    Reviewing the database properties: The owner of a specific database can be easily viewed by right-clicking the database in SQL Server Management Studio and selecting the "Properties" option. The owner is listed on the "General" page of the properties screen. In my scenario, the database in the production server was created by Frank the DBA; therefore his server login appeared as the owner: "ServerName\Frank". While this is interesting information, it certainly doesn't tell me much in regard to the SID (security identifier) and its existence, or lack thereof, in the master database as the error suggested. I pulled together the following query to gather more interesting information:

      SELECT a.name
           , a.owner_sid
           , b.sid
           , b.name
           , b.type_desc
      FROM   master.sys.databases a
             LEFT OUTER JOIN master.sys.server_principals b
                  ON a.owner_sid = b.sid
      WHERE  a.name NOT IN ('master', 'tempdb', 'model', 'msdb');

    This query also helped identify how many other user databases in the instance were experiencing the same issue. In this scenario, I saw that there were no matching SIDs in server_principals for my database's owner SID. What login should be used as the database owner instead of Frank's? The system stored procedure sp_helplogins will provide a list of the valid logins that can be used. Here is an example of its use, revealing all available logins:

      EXEC sp_helplogins;

    Fixing a hole: The error message stated that the recommended solution was to execute the ALTER AUTHORIZATION statement. The full statement for this scenario would appear as follows:

      ALTER AUTHORIZATION ON DATABASE::[Database Name Here] TO [Login Name];

    Another option is to execute the following statement using the sp_changedbowner system stored procedure; but please keep in mind that this stored procedure has been deprecated and will likely disappear in future versions of SQL Server:

      EXEC dbo.sp_changedbowner @loginname = [Login Name];

    And They Lived Happily Ever After: Upon changing the database owner to an existing login and simulating the inner and cross database messaging, the errors ceased. More importantly, all messages sent through this feature now successfully complete their journey. I have added the ownership change to my restoration script for the development environment.
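
    A compact T-SQL sketch pulling the article's pieces together (the login [DevServer\DevDBA] is a placeholder of mine, not from the article):

      -- List user databases whose owner SID has no matching server login
      SELECT d.name
      FROM   master.sys.databases d
             LEFT OUTER JOIN master.sys.server_principals p
                  ON d.owner_sid = p.sid
      WHERE  p.sid IS NULL
             AND d.name NOT IN ('master', 'tempdb', 'model', 'msdb');

      -- Repoint one of them at a known-good login
      ALTER AUTHORIZATION ON DATABASE::RestoreDB TO [DevServer\DevDBA];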

    Read the article

  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with functional specs. Later, performance testing shows that these perform poorly and it is only then that developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index has brought a radical improvement in query execution times. The execution times went down from 4 seconds to 2 milliseconds!

    E-commerce solution based on Oracle ATG – Endeca: In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. Corporate global discounts information is stored in an auxiliary Coherence cache with over 800,000 entries. In particular, to obtain each price the process needs to execute six queries over the global discount cache. After the implementation was finished, we discovered that the most expensive step in the price calculation process was the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimise this query are described below.

    Starting point: The initial query was:

      String filter = "levelId = :iLevelId AND salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "+
          "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "+
          "AND areaId = :iAreaId AND endDate >= :iEndDate AND startDate <= :iStartDate";
      Map<String, Object> params = new HashMap<String, Object>(10);
      // Fill all parameters.
      params.put("iLevelId", xxxx);
      // Executing filter.
      Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params);
      NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
      Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter);

    With the small dataset used for development the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds.

    First round of optimizations: The first optimisation step was the creation of a separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows:

      globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX"), false, null);

    After adding these indexes the query execution time was reduced to between 450 ms and 1 s. However, these execution times were still not good enough.

    Second round of optimizations: In this optimisation phase a Coherence query explain plan was used to identify how many entries each index reduced the result set by, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was "sub-optimal". Parameters associated with high-cardinality object attributes should appear at the beginning of the filter; more specifically, the attributes that filter out the highest number of records should be placed at the beginning. But examining corporate global discount data we realized that, depending on the values of the parameters used in the query, the "good" order for the attributes was different. In particular, if the attributes brand and family had specific values it was more optimal to have a different query, changing the order of the attributes. Ultimately, we ended up with three different optimal variants of the query, each used in its relevant case:

      String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "+
          "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
          "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

      String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "+
          "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
          "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

      String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "+
          "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
          "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

    Using the appropriate query depending on the value of the brand and family parameters, the query execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome.

    Third and last round of optimizations: The third and final optimization was to introduce a composite index. However, this did mean that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built for the 8 attributes using EqualsFilter. The final query had an EqualsFilter for the multi extractor, plus a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes. All individual indexes were dropped except the ones being used for LessEqualsFilter and GreaterEqualsFilter. We were now running in a scenario with an 8-attribute composite filter and 2 single-attribute filters.

    The composite index created was as follows:

      ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
          new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
          new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
          new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") };
      MultiExtractor me = new MultiExtractor(ve);
      NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
      globalDiscountsCache.addIndex(me, false, null);

    And the final query was:

      ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
          new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
          new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
          new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") };
      MultiExtractor me = new MultiExtractor(ve);
      // Fill composite parameters.
      String SalesCompanyId = xxxx;
      AndFilter composite = new AndFilter(new EqualsFilter(me,
              Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId, iFamilyId, iManufacturer, iBrand, SalesCompanyId)),
          new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate));
      AndFilter finalFilter = new AndFilter(composite,
          new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate));
      NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
      Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter);

    Using this composite index the query improved dramatically and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index the order of the attributes inside the ValueExtractor was not relevant.

    Read the article

  • Database - Designing an "Events" Table

    - by Alix Axel
    After reading the tips from this great Nettuts+ article I've come up with a table schema that would separate highly volatile data from other tables subjected to heavy reads, and at the same time lower the number of tables needed in the whole database schema. However, I'm not sure if this is a good idea since it doesn't follow the rules of normalization, and I would like to hear your advice. Here is the general idea: I've four types of users modeled in a Class Table Inheritance structure; in the main "user" table I store data common to all the users (id, username, password, several flags, ...) along with some TIMESTAMP fields (date_created, date_updated, date_activated, date_lastLogin, ...). To quote tip #16 from the Nettuts+ article mentioned above: "Example 2: You have a 'last_login' field in your table. It updates every time a user logs in to the website. But every update on a table causes the query cache for that table to be flushed. You can put that field into another table to keep updates to your users table to a minimum." Now it gets even trickier. I need to keep track of some user statistics, like how many unique times a user profile was seen, how many unique times an ad from a specific type of user was clicked, how many unique times a post from a specific type of user was seen, and so on. In my fully normalized database this adds up to about 8 to 10 additional tables. It's not a lot, but I would like to keep things simple if I could, so I've come up with the following "events" table:

      | ID | TABLE    | EVENT     | DATE         | IP        |
      |----|----------|-----------|--------------|-----------|
      | 1  | user     | login     | 201004190030 | 127.0.0.1 |
      | 1  | user     | login     | 201004190230 | 127.0.0.1 |
      | 2  | user     | created   | 201004190031 | 127.0.0.2 |
      | 2  | user     | activated | 201004190234 | 127.0.0.2 |
      | 2  | user     | approved  | 201004190930 | 217.0.0.1 |
      | 2  | user     | login     | 201004191200 | 127.0.0.2 |
      | 15 | user_ads | created   | 201004191230 | 127.0.0.1 |
      | 15 | user_ads | impressed | 201004191231 | 127.0.0.2 |
      | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
      | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
      | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
      | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
      | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
      | 2  | user     | blocked   | 201004200319 | 217.0.0.1 |
      | 2  | user     | deleted   | 201004200320 | 217.0.0.1 |

    Basically the ID refers to the primary key (id) field in the TABLE table; I believe the rest should be pretty straightforward. One thing that I've come to like in this design is that I can keep track of all the user logins instead of just the last one, and thus generate some interesting metrics with that data. Due to the nature of the events table I also thought of making some optimizations, such as:

    #9: Since there is only a finite number of tables and a finite (and predetermined) number of events, the TABLE and EVENT columns could be set up as ENUMs instead of VARCHARs to save some space.
    #14: Store IPs as UNSIGNED INT with INET_ATON() instead of VARCHARs.
    Store DATEs as TIMESTAMPs instead of DATETIMEs.
    Use the ARCHIVE (or the CSV?) engine instead of InnoDB / MyISAM.

    Overall, each event would only consume 14 bytes, which is okay for my traffic I guess.

    Pros: ability to store more detailed data (such as logins); no need to design (and code for) almost a dozen additional tables (dates and statistics); reduces a few columns per table and keeps volatile data separated.

    Cons: non-relational (still not as bad as EAV): SELECT * FROM events WHERE id = 2 AND table = 'user' ORDER BY date DESC; and 6 bytes of overhead per event (ID, TABLE and EVENT).

    I'm more inclined to go with this approach since the pros seem to far outweigh the cons, but I'm still a little bit reluctant. Am I missing something? What are your thoughts on this? Thanks!
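
    A minimal MySQL sketch of the events table as described (the ENUM value lists are just the ones that appear in the sample rows; InnoDB is used here because the ARCHIVE engine only allows an index on an AUTO_INCREMENT column):

      CREATE TABLE events (
        id      INT UNSIGNED NOT NULL,             -- PK value in the referenced table
        `table` ENUM('user', 'user_ads') NOT NULL,
        event   ENUM('login', 'created', 'activated', 'approved',
                     'impressed', 'clicked', 'blocked', 'deleted') NOT NULL,
        date    TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        ip      INT UNSIGNED NOT NULL,             -- INET_ATON('127.0.0.1'); read back with INET_NTOA()
        KEY idx_events_lookup (`table`, id, date)
      ) ENGINE = InnoDB;

      -- Example usage:
      INSERT INTO events (id, `table`, event, ip)
      VALUES (2, 'user', 'login', INET_ATON('127.0.0.2'));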

    Read the article

  • When the cooling in the server room is not enough: Sun Cooling Door for the Database Machine

    - by Fekete Zoltán
    Because of the Database Machine's enormous performance, it generally needs far less cooling than if we were cooling a separate high-end server and separate high-end storage! If, however, the remaining cooling capacity of our server room is not sufficient and we are not satisfied with the "traditional washing powder", then a further cooling trick is needed. The Sun Cooling Door models offer a solution for this, for example the 5200 and 5600 models.

    Read the article

  • Datasheet for Exadata and Database Machine support

    - by Fekete Zoltán
    The COMPLETE SUPPORT SERVICES FOR ORACLE EXADATA document can be downloaded, viewed, and even read :). It describes in detail which product support options are available for a Database Machine solution. The members of the Exadata product family provide an optimal, high-performance platform for running the databases of both transactional systems and data warehouses, and for running databases together on a shared server.

    Read the article

  • Details on Oracle's Primavera P6 Reporting Database R2

    - by mark.kromer
    Below is a graphic screenshot of our detailed announcement for the new Oracle data warehouse product for Primavera P6 called P6 Reporting Database R2. This DW product includes the ETL, data warehouse star schemas and ODS that you'll need to build an enterprise reporting solution for your projects & portfolios. This product is included on a restricted license basis with the new Primavera P6 Analytics R1 product from Oracle because those Analytics are built in OBIEE based on this data warehouse product.

    Read the article

  • SQL SERVER – Creating All New Database with Full Recovery Model

    - by pinaldave
    Sometimes, complex problems have very simple solutions. Let us see the following email which I received recently: "Hi Pinal, in our system when we create a new database, by default it is created with the Simple Recovery Model. We have to manually change the recovery model after we create the database. We use the following simple T-SQL code: CREATE DATABASE dbname. We are very frustrated with this situation. We want all our databases to have the Full Recovery Model option by default. We are considering the following methods; please suggest the most efficient one among them. 1) Creating a Policy; when it is violated, the database recovery model can be fixed. 2) Triggers at the server level. 3) An automated job which goes through all the databases and checks their recovery model; if the DBA has not changed the model, then the job will list the databases and change their recovery model. Also, we have a situation where we need a database in the Simple Recovery Model as well; how do we white-list them? Please suggest the best method." Indeed, an interesting email! The answer to their question, i.e., which is the best method to fit their needs (white list, default, etc.)? It will be NONE of the above. Here is the solution in one line and also the easiest way: just go to your Model database. Path in SSMS: Databases >> System Databases >> model >> Right Click >> Properties >> Options >> Recovery Model >> select Full from the dropdown. Every newly created database takes its base template from the Model Database. If you create a custom SP in the Model Database, when you create a new database, it will automatically exist in that database. Any database that was already created before making changes in the Model Database will not be affected at all. Creating a Policy is also a good method, and I will blog about this in a separate blog post, but looking at the current specifications of the reader, I think the Model Database should be modified to have the Full Recovery option. While writing this blog post, I remembered another of my blog posts where the model database log file was growing drastically even though there were no transactions: SQL SERVER – Log File Growing for Model Database – model Database Log File Grew Too Big. NOTE: Please do not touch the Model Database unnecessarily. It is a strict "No". If you want to create an object that you need in all the databases, then instead of creating it in the Model Database, I suggest that you create a new database called maintenance and create the object there. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, Readers Question, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
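
    The same change can also be scripted instead of clicking through SSMS; a small T-SQL sketch shown here as an equivalent illustration, not as the post's own code:

      -- Make every future database default to the FULL recovery model
      ALTER DATABASE [model] SET RECOVERY FULL;

      -- Verify the current recovery model of all databases
      SELECT name, recovery_model_desc
      FROM   sys.databases;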

    Read the article

  • Oracle Database 12c New Partition Maintenance Features by Gwen Lazenby

    - by hamsun
    One of my favourite new features in Oracle Database 12c is the ability to perform partition maintenance operations on multiple partitions. This means we can now add, drop, truncate and merge multiple partitions in one operation, and can split a single partition into more than two partitions, also in just one command. This would certainly have made my life slightly easier had it been available when I administered a data warehouse at Oracle 9i. To demonstrate this new functionality and syntax, I am going to create two tables, ORDERS and ORDER_ITEMS, which have a parent-child relationship. ORDERS is to be partitioned using range partitioning on the ORDER_DATE column, and ORDER_ITEMS is going to be partitioned using reference partitioning and its foreign key relationship with the ORDERS table. This form of partitioning was a new feature in 11g and means that any partition maintenance operations performed on the ORDERS table will also take place on the ORDER_ITEMS table. First create the ORDERS table -

      SQL> CREATE TABLE orders
             ( order_id     NUMBER(12),
               order_date   TIMESTAMP,
               order_mode   VARCHAR2(8),
               customer_id  NUMBER(6),
               order_status NUMBER(2),
               order_total  NUMBER(8,2),
               sales_rep_id NUMBER(6),
               promotion_id NUMBER(6),
               CONSTRAINT orders_pk PRIMARY KEY(order_id) )
           PARTITION BY RANGE(order_date)
             (PARTITION Q1_2007 VALUES LESS THAN (TO_DATE('01-APR-2007','DD-MON-YYYY')),
              PARTITION Q2_2007 VALUES LESS THAN (TO_DATE('01-JUL-2007','DD-MON-YYYY')),
              PARTITION Q3_2007 VALUES LESS THAN (TO_DATE('01-OCT-2007','DD-MON-YYYY')),
              PARTITION Q4_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY')));

      Table created.

    Now the ORDER_ITEMS table -

      SQL> CREATE TABLE order_items
             ( order_id     NUMBER(12) NOT NULL,
               line_item_id NUMBER(3)  NOT NULL,
               product_id   NUMBER(6)  NOT NULL,
               unit_price   NUMBER(8,2),
               quantity     NUMBER(8),
               CONSTRAINT order_items_fk
                 FOREIGN KEY(order_id) REFERENCES orders(order_id) ON DELETE CASCADE)
           PARTITION BY REFERENCE(order_items_fk) TABLESPACE example;

      Table created.

    Now look at DBA_TAB_PARTITIONS to get details of what partitions we have in the two tables -

      SQL> select table_name, partition_name, partition_position position, high_value
           from dba_tab_partitions
           where table_owner='SH' and table_name like 'ORDER_%'
           order by partition_position, table_name;

      TABLE_NAME     PARTITION_NAME  POSITION  HIGH_VALUE
      -------------- --------------- --------  -------------------------------
      ORDERS         Q1_2007         1         TIMESTAMP' 2007-04-01 00:00:00'
      ORDER_ITEMS    Q1_2007         1
      ORDERS         Q2_2007         2         TIMESTAMP' 2007-07-01 00:00:00'
      ORDER_ITEMS    Q2_2007         2
      ORDERS         Q3_2007         3         TIMESTAMP' 2007-10-01 00:00:00'
      ORDER_ITEMS    Q3_2007         3
      ORDERS         Q4_2007         4         TIMESTAMP' 2008-01-01 00:00:00'
      ORDER_ITEMS    Q4_2007         4

    Just as an aside, it is also now possible in 12c to use interval partitioning on reference partitioned tables. In 11g it was not possible to combine these two partitioning features. For our first example of the new 12c functionality, let us add all the partitions necessary for 2008 to the tables using one command. Notice that the partition specification part of the add command is identical in format to the partition specification part of the create command as shown above -

      SQL> alter table orders add
             PARTITION Q1_2008 VALUES LESS THAN (TO_DATE('01-APR-2008','DD-MON-YYYY')),
             PARTITION Q2_2008 VALUES LESS THAN (TO_DATE('01-JUL-2008','DD-MON-YYYY')),
             PARTITION Q3_2008 VALUES LESS THAN (TO_DATE('01-OCT-2008','DD-MON-YYYY')),
             PARTITION Q4_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY'));

      Table altered.

    Now look at DBA_TAB_PARTITIONS and we can see that the 4 new partitions have been added to both tables -

      SQL> select table_name, partition_name, partition_position position, high_value
           from dba_tab_partitions
           where table_owner='SH' and table_name like 'ORDER_%'
           order by partition_position, table_name;

      TABLE_NAME     PARTITION_NAME  POSITION  HIGH_VALUE
      -------------- --------------- --------  -------------------------------
      ORDERS         Q1_2007         1         TIMESTAMP' 2007-04-01 00:00:00'
      ORDER_ITEMS    Q1_2007         1
      ORDERS         Q2_2007         2         TIMESTAMP' 2007-07-01 00:00:00'
      ORDER_ITEMS    Q2_2007         2
      ORDERS         Q3_2007         3         TIMESTAMP' 2007-10-01 00:00:00'
      ORDER_ITEMS    Q3_2007         3
      ORDERS         Q4_2007         4         TIMESTAMP' 2008-01-01 00:00:00'
      ORDER_ITEMS    Q4_2007         4
      ORDERS         Q1_2008         5         TIMESTAMP' 2008-04-01 00:00:00'
      ORDER_ITEMS    Q1_2008         5
      ORDERS         Q2_2008         6         TIMESTAMP' 2008-07-01 00:00:00'
      ORDER_ITEMS    Q2_2008         6
      ORDERS         Q3_2008         7         TIMESTAMP' 2008-10-01 00:00:00'
      ORDER_ITEMS    Q3_2008         7
      ORDERS         Q4_2008         8         TIMESTAMP' 2009-01-01 00:00:00'
      ORDER_ITEMS    Q4_2008         8

    Next, we can drop or truncate multiple partitions by giving a comma separated list in the alter table command. Note the use of the plural 'partitions' in the command, as opposed to the singular 'partition' prior to 12c -

      SQL> alter table orders drop partitions Q3_2008, Q2_2008, Q1_2008;

      Table altered.

    Now look at DBA_TAB_PARTITIONS and we can see that the 3 partitions have been dropped from both tables -

      TABLE_NAME     PARTITION_NAME  POSITION  HIGH_VALUE
      -------------- --------------- --------  -------------------------------
      ORDERS         Q1_2007         1         TIMESTAMP' 2007-04-01 00:00:00'
      ORDER_ITEMS    Q1_2007         1
      ORDERS         Q2_2007         2         TIMESTAMP' 2007-07-01 00:00:00'
      ORDER_ITEMS    Q2_2007         2
      ORDERS         Q3_2007         3         TIMESTAMP' 2007-10-01 00:00:00'
      ORDER_ITEMS    Q3_2007         3
      ORDERS         Q4_2007         4         TIMESTAMP' 2008-01-01 00:00:00'
      ORDER_ITEMS    Q4_2007         4
      ORDERS         Q4_2008         5         TIMESTAMP' 2009-01-01 00:00:00'
      ORDER_ITEMS    Q4_2008         5

    Now let us merge all the 2007 partitions together to form one single partition -

      SQL> alter table orders merge partitions Q1_2007, Q2_2007, Q3_2007, Q4_2007 into partition Y_2007;

      Table altered.

      TABLE_NAME     PARTITION_NAME  POSITION  HIGH_VALUE
      -------------- --------------- --------  -------------------------------
      ORDERS         Y_2007          1         TIMESTAMP' 2008-01-01 00:00:00'
      ORDER_ITEMS    Y_2007          1
      ORDERS         Q4_2008         2         TIMESTAMP' 2009-01-01 00:00:00'
      ORDER_ITEMS    Q4_2008         2

    Splitting partitions is slightly more involved. In the case of range partitioning one of the new partitions must have no high value defined, and in list partitioning one of the new partitions must have no list of values defined. I call these partitions the 'everything else' partitions; they will contain any rows contained in the original partition that are not contained in any of the other new partitions. For example, let us split the Y_2007 partition back into 4 quarterly partitions -

      SQL> alter table orders split partition Y_2007 into
             (PARTITION Q1_2007 VALUES LESS THAN (TO_DATE('01-APR-2007','DD-MON-YYYY')),
              PARTITION Q2_2007 VALUES LESS THAN (TO_DATE('01-JUL-2007','DD-MON-YYYY')),
              PARTITION Q3_2007 VALUES LESS THAN (TO_DATE('01-OCT-2007','DD-MON-YYYY')),
              PARTITION Q4_2007);

    Now look at DBA_TAB_PARTITIONS to get details of the new partitions -

      TABLE_NAME     PARTITION_NAME  POSITION  HIGH_VALUE
      -------------- --------------- --------  -------------------------------
      ORDERS         Q1_2007         1         TIMESTAMP' 2007-04-01 00:00:00'
      ORDER_ITEMS    Q1_2007         1
      ORDERS         Q2_2007         2         TIMESTAMP' 2007-07-01 00:00:00'
      ORDER_ITEMS    Q2_2007         2
      ORDERS         Q3_2007         3         TIMESTAMP' 2007-10-01 00:00:00'
      ORDER_ITEMS    Q3_2007         3
      ORDERS         Q4_2007         4         TIMESTAMP' 2008-01-01 00:00:00'
      ORDER_ITEMS    Q4_2007         4
      ORDERS         Q4_2008         5         TIMESTAMP' 2009-01-01 00:00:00'
      ORDER_ITEMS    Q4_2008         5

    Partition Q4_2007 has a high value equal to the high value of the original Y_2007 partition, and so has inherited its upper boundary from the partition that was split. As for a list partitioning example, let us look at another table, SALES_PAR_LIST, which has 2 partitions, Americas and Europe, and a partitioning key of country_name.

      SQL> select table_name, partition_name, high_value
           from dba_tab_partitions
           where table_owner='SH' and table_name = 'SALES_PAR_LIST';

      TABLE_NAME     PARTITION_NAME  HIGH_VALUE
      -------------- --------------- ------------------------------------------------------------------------
      SALES_PAR_LIST AMERICAS        'Argentina', 'Canada', 'Peru', 'USA', 'Honduras', 'Brazil', 'Nicaragua'
      SALES_PAR_LIST EUROPE          'France', 'Spain', 'Ireland', 'Germany', 'Belgium', 'Portugal', 'Denmark'

    Now split the Americas partition into 3 partitions -

      SQL> alter table sales_par_list split partition americas into
             (partition south_america values ('Argentina','Peru','Brazil'),
              partition north_america values ('Canada','USA'),
              partition central_america);

      Table altered.

    Note that no list of values was given for the 'Central America' partition. However it should have inherited any values in the original 'Americas' partition that were not assigned to either the 'North America' or the 'South America' partitions. We can confirm this by looking at the DBA_TAB_PARTITIONS view.

      SQL> select table_name, partition_name, high_value
           from dba_tab_partitions
           where table_owner='SH' and table_name = 'SALES_PAR_LIST';

      TABLE_NAME     PARTITION_NAME   HIGH_VALUE
      -------------- ---------------- ------------------------------------------------------------------------
      SALES_PAR_LIST SOUTH_AMERICA    'Argentina', 'Peru', 'Brazil'
      SALES_PAR_LIST NORTH_AMERICA    'Canada', 'USA'
      SALES_PAR_LIST CENTRAL_AMERICA  'Honduras', 'Nicaragua'
      SALES_PAR_LIST EUROPE           'France', 'Spain', 'Ireland', 'Germany', 'Belgium', 'Portugal', 'Denmark'

    In conclusion, I hope that DBAs whose work involves maintaining partitions will find these operations a bit more straightforward to carry out once they have upgraded to Oracle Database 12c. Gwen Lazenby is a Principal Training Consultant at Oracle. She is part of Oracle University's Core Technology delivery team based in the UK, teaching Database Administration and Linux courses. Her specialist topics include using Oracle Partitioning and Parallelism in Data Warehouse environments, as well as Oracle Spatial and RMAN.

    Read the article

  • Should experienced programmers know database queries?

    - by Shamim Hafiz
    There are so many programmers out there who are also experts at query writing and database design. Should this be a core requirement for being an expert programmer or software engineer? Though there are lots of similarities in the way queries and code are developed, my personal opinion is that queries seem to have a different structure than code, and it can be tough to master both simultaneously due to the different approaches.

    Read the article

  • Statistical Sampling for Verifying Database Backups

    A DBA's huge workload can start to threaten best practices for data backup and recovery, but ingenuity, and an eye for a good tactic, can usually find a way. For Tom, the revelation about a solution came from eating crabs. Statistical sampling can be brought to bear to minimise the risk of failure of an emergency database restore.
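
    As a rough illustration of the idea (my own back-of-the-envelope figures, not taken from the article): if a fraction p of your backups would fail to restore and you test-restore n of them chosen at random, the chance of the test catching nothing is roughly

      P(\text{no failure found}) \approx (1 - p)^n,
      \qquad
      n \ge \frac{\ln(0.05)}{\ln(1 - p)} \ \text{keeps that chance below 5\%}.

    For example, with p = 0.05 (one bad backup in twenty), n works out to about 59 test restores.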

    Read the article

  • Get Info From Database, or Build Inferred Info?

    - by Zaemz
    Does it make more sense to store and retrieve an item's properties directly in the database, or, in a case where a product's ID itself encodes information about the product, should that information be derived from the ID? Example item SKU: 4HBU12, where 4 is the number of motors, H is the voltage, B is the color (blue), U is the model, and 12 is the length. Should I store those individual attributes as well as the SKU, or should I store only the SKU and build the attributes from it?
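
    A hypothetical MySQL-flavoured sketch of the first option, keeping the SKU for display while storing the attributes it encodes as real, indexable columns (the column names and types are my assumptions):

      CREATE TABLE products (
        product_id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
        sku         CHAR(6)      NOT NULL,   -- e.g. '4HBU12'
        motor_count TINYINT      NOT NULL,   -- '4'
        voltage     CHAR(1)      NOT NULL,   -- 'H'
        color       CHAR(1)      NOT NULL,   -- 'B' = blue
        model       CHAR(1)      NOT NULL,   -- 'U'
        item_length TINYINT      NOT NULL,   -- '12'
        PRIMARY KEY (product_id),
        UNIQUE KEY uq_sku (sku)
      ) ENGINE = InnoDB;

      -- Queries like "all blue, 4-motor products" then need no string parsing:
      SELECT sku FROM products WHERE color = 'B' AND motor_count = 4;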

    Read the article

  • Configurable tables in sql database

    - by dot
    I have the following tables in my database:

      Config:
      Start_Range | End_Range | Config_id
      10          | 15        | 1

      Available_UserIDs:
      ID | UserID | Used_YN
      1  | 10     | t
      1  | 11     | f
      1  | 12     | f
      1  | 13     | f
      1  | 14     | f
      1  | 15     | f

      Users:
      UserId | FName | LName
      10     | John  | Doe

    This is used in a reservation system of sorts... which lets an administrator specify a range of numbers that will be assigned to users in the config table. Once the range has been defined, the system then populates the Available_UserIDs table with all the numbers in the range, and sets the Used_YN flag to false. As users sign up, they grab the next user_id number that's not in use... and reserve it. Then the system adds a record to the Users table. Once the admin has specified a range, it is possible for them to change it. For example, they can start with 10-15... and then when the range is used up, they should be able to specify another range like 16-99. I've put a unique constraint on the Available_UserIDs table, as well as on the Users table, to ensure that UserIds can't be duplicated. My questions are as follows:

    1) What's the best way to prevent the admins from using a range that's already in use? I thought of the following options: check the Users table to see if the start or end numbers of the new range are being used, and if they are, assume that all the numbers in between are in use too and reject the range; or let them specify whatever they want, try to populate the Available_UserIDs table, and if there are duplicates just ignore that specific error message from the database and continue on.

    2) How do I find gaps in the number ranges? For example, if they specify 10-15, and then 20-25, it'd be nice to be able to somehow suggest on my web page that 16-19 is currently available. I found this article: http://stackoverflow.com/questions/1312101/how-to-find-a-gap-in-running-counter-with-sql but it only seems to return the first available number... so in my example above, it would only return the number 16. I'm sure there's a simpler way to do things that I'm overlooking!
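
    A sketch for the gap question (an assumption on my part: it uses the LEAD window function, so it needs MySQL 8.0+ or SQL Server 2012+, and the column names follow the Config table above):

      -- Report every unused gap between configured ranges
      SELECT end_range + 1 AS gap_start,
             next_start - 1 AS gap_end
      FROM (
             SELECT end_range,
                    LEAD(start_range) OVER (ORDER BY start_range) AS next_start
             FROM   config
           ) AS t
      WHERE  next_start > end_range + 1;

      -- With ranges (10, 15) and (20, 25) this returns one row: gap_start = 16, gap_end = 19.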

    Read the article

  • Database ERD design: 2 types of user in one table

    - by Giskin Leow
    I have read this (Database design: 3 types of users, separate or one table?) and decided to put admins and normal users in one table since the attributes are similar: fullname, address, phone, email, gender, ... Then, as I went to draw the ERD, a question popped into my mind: how do I draw it? A customer makes an appointment and an admin approves the appointment, but now there are only two tables, with admins and customers in the same table. Help.
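
    One way to draw it (an illustrative sketch, not the asker's design): keep the single users table and give the appointments entity two separate relationships back to it, one for the customer who booked and one for the admin who approved.

      CREATE TABLE users (
        id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
        role     ENUM('admin', 'customer') NOT NULL,
        fullname VARCHAR(100) NOT NULL,
        -- address, phone, email, gender, ...
        PRIMARY KEY (id)
      ) ENGINE = InnoDB;

      CREATE TABLE appointments (
        id             INT UNSIGNED NOT NULL AUTO_INCREMENT,
        customer_id    INT UNSIGNED NOT NULL,   -- "makes" relationship
        approved_by    INT UNSIGNED NULL,       -- "approves" relationship; NULL until approved
        appointment_at DATETIME     NOT NULL,
        PRIMARY KEY (id),
        FOREIGN KEY (customer_id) REFERENCES users (id),
        FOREIGN KEY (approved_by) REFERENCES users (id)
      ) ENGINE = InnoDB;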

    Read the article

  • How to connect to database on remote server

    - by user137263
    Where there is a VPN to the remote server and the database is then accessed via the local network interface, how can one establish a remote link between one's computer (with a programme such as Visual Studio 2010) and SQL Server (e.g. 2008 R2)? Any attempt to create a direct link to the SQL Server is blocked. Whilst the SQL Server can be configured to allow external access, this brings its own host of problems. Any help would be much appreciated.

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >