Search Results

Search found 11052 results on 443 pages for 'linked tables'.


  • Mysqldump causes "Too many connections"

    - by vbachev
    A scheduled backup using mysqldump on one of our databases is causing "Too many connections" errors. The database contains both InnoDB and MyISAM tables and is around 500 MB in size. The "Too many connections" condition lasts for about 2-3 minutes. We understand that mysqldump locks the tables, which causes all other queries and connections to pile up and jam the MySQL server. We need frequent backups, and we cannot afford server downtime or putting websites in maintenance mode while doing it. Our websites are global and traffic is high all the time, so it's hard to find a quiet moment for backups. How can we avoid downtime during backups? Is there a way to use mysqldump so that it does not lock all tables at the same time? Is there an alternative to backing up with mysqldump?
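    A minimal sketch of one common mitigation, assuming the large, busy tables are the InnoDB ones; database name, credentials, and paths are placeholders:

        # Hypothetical invocation -- adjust credentials, database name, and paths.
        # --single-transaction dumps InnoDB tables from a consistent snapshot
        # without holding table locks; MyISAM tables get no such snapshot, so
        # concurrent writes to them can still produce an inconsistent dump.
        mysqldump --single-transaction --user=backup_user --password \
            mydatabase | gzip > /var/backups/mydatabase-$(date +%F).sql.gz

    Dumping from a replication slave instead of the production master is another common way to keep the load and locks off the busy server.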

    Read the article

  • DB2 users and groups

    - by Arun Srini
    I just want to know everyone's experience and take on managing users/authentication on a multi-node DB2 cluster with users and groups. I have 17 apps in production (project-based company, only 2 online apps), and some 30 users in 7 groups: prodsel - group that has select privilege on all tables; produpdt - update privilege on selected tables (as required by the apps); proddel - delete; prodins - insert. Now, what my company does when an app uses a certain user (called app1user) that needs select and insert privileges on a table is: 1. grant select and insert to prodsel and prodins respectively; 2. add the user to those two groups. This creates a one-to-many relationship between users and privileges, and app1user also gets select on every other table granted to the prodsel group. I know this is wrong. Before I explain, I need to know how this is done elsewhere. Please share your experiences, even if you use other databases that rely on OS-level authentication.
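    For contrast, a minimal sketch of the narrower alternative (database, schema, table, and user/group names are made up): grant exactly the privileges an application needs directly to its own user, or to a dedicated per-application group, instead of widening shared groups.

        # Hypothetical DB2 CLP session; privileges go to the app's own user/group.
        db2 connect to PRODDB
        db2 "GRANT SELECT, INSERT ON TABLE appschema.orders TO USER app1user"
        db2 "GRANT SELECT ON TABLE appschema.customers TO GROUP app1grp"
        db2 connect reset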

    Read the article

  • JZ0SJ: Metadata accessor information was not found on this database.

    - by Anthony Kong
    The complete message is: "JZ0SJ: Metadata accessor information was not found on this database. Please install the required tables as mentioned in the jConnect documentation." The server is Adaptive Server Enterprise/15.5/EBF 18380 SMP ESD#3/P/x86_64/Enterprise Linux/asear155/2531/64-bit/FBO/Fri Jan 14 07:04:05 2011. A Google search does not turn up a definite answer to the problem, and a lot of the results are rather dated. I checked the documentation but it does not mention any tables. What is the root cause of this problem? At the very least, I would like to see a list of the required tables somewhere.
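    A sketch of the usual remedy, with a path and script name that depend on the jConnect release and are therefore only illustrative: jConnect ships SQL scripts (under the sp/ directory of the jConnect installation) that create the metadata stored procedures and tables the driver expects, and they are run against the server with isql.

        # Hypothetical path and script name -- check the sp/ directory of your
        # jConnect installation for the script matching your ASE version.
        cd $SYBASE/jConnect-7_0/sp
        isql -S MYASE -U sa -P "$SA_PASSWORD" -i sql_server15.5.sql

    Treat the script name as an assumption and confirm it against the jConnect installation guide for your version.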

    Read the article

  • Bash script for mysql backup - error handling

    - by Jure1873
    I'm trying to back up a bunch of MyISAM tables in a way that would allow me to rsync/rdiff the backup directory to a remote location. I've come up with a script that dumps only the recently changed tables and sets the date of each file so that rsync can pick up only the changed ones, but now I don't know how to do the error handling - I would like the script to exit with a non-zero value if there are errors. How could I do that?

        #!/bin/bash
        BKPDIR="/var/backups/db-mysql"
        mkdir -p $BKPDIR
        ERRORS=0
        FIELDS="TABLE_SCHEMA, TABLE_NAME, UPDATE_TIME"
        W_COND="UPDATE_TIME >= DATE_ADD(CURDATE(), INTERVAL -2 DAY) AND TABLE_SCHEMA<>'information_schema'"
        mysql --skip-column-names -e "SELECT $FIELDS FROM information_schema.tables WHERE $W_COND;" | while read db table tstamp; do
            echo "DB: $db: TABLE: $table: ($tstamp)"
            mysqldump $db $table | gzip > $BKPDIR/$db-$table.sql.gz
            touch -d "$tstamp" $BKPDIR/$db-$table.sql.gz
        done
        exit $ERRORS
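    A minimal sketch of one way to get the exit status right, not necessarily the only one: in the original, the while loop runs in a pipeline subshell, so increments to ERRORS are lost. Feeding the loop through process substitution keeps it in the current shell, and set -o pipefail makes the mysqldump | gzip pipeline fail if either command fails.

        #!/bin/bash
        set -o pipefail                      # a pipeline fails if any stage fails
        BKPDIR="/var/backups/db-mysql"
        mkdir -p "$BKPDIR" || exit 1
        ERRORS=0
        FIELDS="TABLE_SCHEMA, TABLE_NAME, UPDATE_TIME"
        W_COND="UPDATE_TIME >= DATE_ADD(CURDATE(), INTERVAL -2 DAY) AND TABLE_SCHEMA<>'information_schema'"
        while read -r db table tstamp; do
            echo "DB: $db TABLE: $table ($tstamp)"
            if mysqldump "$db" "$table" | gzip > "$BKPDIR/$db-$table.sql.gz"; then
                touch -d "$tstamp" "$BKPDIR/$db-$table.sql.gz" || ERRORS=$((ERRORS + 1))
            else
                ERRORS=$((ERRORS + 1))       # dump or compression failed
            fi
        done < <(mysql --skip-column-names -e "SELECT $FIELDS FROM information_schema.tables WHERE $W_COND;")
        # Exit codes are truncated to 0-255, so treat any non-zero value as "errors occurred".
        exit $ERRORS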

    Read the article

  • How to safely backup MySQL using VSS-based backup solutions

    - by Rhyven
    One of my clients is running MySQL on a Windows Server 2008 system. Their regular backups are performed using StorageCraft's ShadowCopy, which uses the VSS service to perform backups of open files. Some investigation indicates that MySQL is not entirely VSS-aware, and that the tables need to be locked prior to the shadow operation and unlocked afterwards. There is a post at http://forum.storagecraft.com/Community/forums/p/548/2702.aspx which indicates the steps that need to be performed; however, the user had some difficulty performing them and no follow-up solution was ever posted. Specifically, they succeeded in writing a batch file to lock the database, but once the batch file returns from MySQL it drops the connection and thus relinquishes the lock. I'm looking for a method that lets me send the MySQL command FLUSH TABLES WITH READ LOCK, then perform the backup, then send UNLOCK TABLES when the backup is complete. Alternatively, I can exclude the MySQL data storage folder from the backup and schedule a mysqldump backup into a folder that will then be backed up by VSS. Can I have some recommendations please?

    Read the article

  • PostgreSQL 9: Does Vacuuming a table on the primary replicate on the mirror?

    - by Scott Herbert
    Running PostgreSQL 9.0.1, with streaming replication keeping one read-only mirror instance up to date. Auto-vacuum is enabled on the primary, except for a few tables which are excluded from the auto-vacuum daemon in an effort to reduce business-hour IO. These tables are "materialised views". Each night at midnight, we run a vacuum across the database to clean up the tables that are excluded from auto-vacuum. I'm wondering if that process replicates across to the mirror, or if I need to set up vacuuming on the mirror as well?

    Read the article

  • PHPMyAdmin - Can I view actual field values instead of the foreign keys?

    - by Callum
    I have a database with some MyISAM tables. The tables contain foreign key relationships, but they are not actually enforced (I believe "referential integrity" is the term). Using phpMyAdmin, what I'd like to do is view records from the tables but have all foreign keys replaced by the value they represent. E.g., instead of seeing: 23 | 14 | $6.50 I'd like to see: Extra Large | Thin Crust | $6.50 ($6.50 would be an editable field; the first two fields would not be). Any helpers? Thanks.
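    One workaround, sketched with made-up table and column names: define a view that joins in the lookup tables, then browse the view in phpMyAdmin instead of the raw table for readable output, while edits to the price column still go through the base table.

        # Hypothetical schema -- sizes/crusts are the lookup tables behind the two IDs.
        mysql -u root -p pizzadb -e "
            CREATE OR REPLACE VIEW orders_readable AS
            SELECT s.name AS size_name, c.name AS crust_name, o.price
            FROM   orders o
            JOIN   sizes  s ON s.id = o.size_id
            JOIN   crusts c ON c.id = o.crust_id;"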

    Read the article

  • Is it faster to create indexes before or after data loading in MySQL?

    - by Josh Glover
    I have a data replication process that drops and recreates a few tables in a target database, then loads them up with data from a source database (running on another host, but that is immaterial to the question at hand). The target database does need primary keys and a few other indexes on its tables, but not during the data loading. I'm currently loading all of the data, then creating the indexes. However, index creation takes a pretty long time--30 minutes of my data loader's 5 and a half hour running time. My intuition tells me that creating the indexes at the end should be faster than creating them first, since the index would need to be rewritten with each insert. Can anyone tell me for sure which way is faster? FWIW, I'm running MySQL 5.1 with InnoDB tables.
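    A sketch of the "load first, index afterwards" variant the question describes, with hypothetical table, column, and credential names. One caveat worth noting for InnoDB: the primary key is the clustered index, so it is usually defined before the load, and the secondary indexes are the ones worth deferring.

        # Hypothetical example: after the bulk load finishes, add the secondary
        # indexes in a single ALTER so the table is scanned/rebuilt only once.
        mysql -u repl_user -p target_db -e "
            ALTER TABLE customers
                ADD INDEX idx_customers_email (email),
                ADD INDEX idx_customers_created (created_at);"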

    Read the article

  • How do I start mysqld with options

    - by xiankai
    I need to start up mysqld with command-line options, as described here: http://dev.mysql.com/doc/refman/5.1/en/server-options.html#option_mysqld_skip-grant-tables I normally do sudo service mysqld start, but passing the option as sudo service mysqld start --skip-grant-tables does not seem to work. Alternatively, I have tried starting it as a daemon, sudo mysqld_safe --skip-grant-tables & but it seems to terminate too soon:

        131101 04:59:57 mysqld_safe Logging to '/var/lib/mysql/vagrant.example.com.err'.
        131101 04:59:57 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        131101 05:00:03 mysqld_safe mysqld from pid file /var/lib/mysql/vagrant.example.com.pid ended

    My last option seems to be specifying the option in /etc/my.cnf instead, but is there any way to do it via the command line?
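    A sketch of the /etc/my.cnf fallback the question mentions, plus an obvious first step for the mysqld_safe failure (the error log path comes from the output above):

        # Check why mysqld exited when started via mysqld_safe:
        sudo tail -n 50 /var/lib/mysql/vagrant.example.com.err

        # Config-file alternative: add the option under [mysqld] in /etc/my.cnf,
        # restart, and remember to remove it again afterwards.
        #     [mysqld]
        #     skip-grant-tables
        sudo service mysqld restart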

    Read the article

  • EXEC() syntax error using ODBC

    - by Mike Trader
    I have written a little ETL application that I wish to run a few lines of T-SQL from. If I enter a simple query like "SELECT * FROM MyTable", everything is fine. All single-line commands run as expected. A multiline query like this is also fine:

        DECLARE @TableName NVARCHAR(MAX)
        SET @TableName = 'MyTable'
        EXECUTE ( 'DROP TABLE ' + @TableName )

    However, when I try to run:

        DECLARE @TableName NVARCHAR(MAX)
        OPEN Tables
        FETCH NEXT FROM Tables INTO @TableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC( 'DROP TABLE ' + @TableName )
            FETCH NEXT FROM Tables INTO @TableName
        END

    I get a syntax error after TABLE in the EXEC() call. I have spent 6 hours trying to figure this out, thinking perhaps I need to escape the single quote or something. I just cannot see the problem. A set of fresh eyes would be appreciated.

    Read the article

  • Randomize table guests in Excel

    - by Jo Voud
    I have a list of people: Column A: person A, person A guest, person B, person C, person C guest, ... Column B: 1, 1, 2, 3, 3, ... So column A holds the person's name, and column B gives each person a unique ID (the same ID for their guest, so we know they are together). Now suppose we have a list of 100 people (note that not all persons have guests) and we have to seat them. We have a list of tables (for example, 10 four-person tables and 10 six-person tables). We need to randomly assign each person to a table, with their guest seated at the same table. What is the best way to do this? (I also need to be able to generate this 4 times in a row without getting the same results, so that over the 4 courses of the dinner people switch tables without losing their guest.)

    Read the article

  • Formula in table header cells

    - by Cylindric
    I have a table in Excel 2007 that I want to summarise, in a similar fashion to a Pivot Table, but for various reasons I can't use a pivot table. I like the "Format as table" features (sort and filter buttons, automatic formatting etc.), so I have used that to create a simple table:

                A           B            C            N
           +-----------+------------+------------+-------+------------+
        1  |           | 01/01/2010 | 01/02/2010 |  ...  | 01/12/2010 |
           +-----------+------------+------------+-------+------------+
        2  | CategoryA |         15 |        545 |       |        634 |
        3  | CategoryB |         32 |        332 |       |        231 |
        4  | CategoryC |          5 |        234 |       |        644 |
           | ...       |            |            |       |            |
        27 | CategoryZ |          2 |        123 |       |         64 |
           +-----------+------------+------------+-------+------------+

    The numbers are retrieved from a "back-end" pivot table using GETPIVOTDATA(). All that works fine. Now, the problem is that I can't seem to use formulas for my column headings in these new "smart" tables - they are converted to text or just broken. For example, if I put NOW() in B1, I don't get the date, I get 00/01/1900. Is there any way of getting a formula to work in the auto tables? Or do I have to use standard tables and manually alternate-colour my rows etc.?

    Read the article

  • MSWord table shading prints too dark

    - by Relaxed1
    My friend has very light shading in his MS Word tables. However, the tables still print too dark to read the text. When the document is emailed to a colleague using the same printer, it prints nicely light. Neither of them can find any setting that differs between their machines. Any ideas? Thanks! (P.S. This would also help me with non-table text, when 'highlighting' it. I do know that 'shading' gives more colour options for non-table text, but it would be nice to know anyway. Thanks.)

    Read the article

  • db2 tablespace size and performance impact

    - by jrhickey
    Originally, when we began moving to DB2 LUW, we ran into issues where some of our tables were too big to fit into the default 4K table space. As a result of "pressure" to get it done, we just went with a 32K default table space and put ALL of our tables there. What impact would that have, if any? I talked to one person who said it could possibly make our database MUCH larger than it needs to be. Is that true? What about memory? Would there be any benefit to moving the smaller tables back to a 4K table space? I have looked around in forums and whatnot but cannot seem to find a good answer.
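    If it ever comes to moving the smaller tables, a minimal sketch of the supporting objects (bufferpool and tablespace names are invented); actually relocating existing tables would still require recreating them or using a table-move utility such as ADMIN_MOVE_TABLE:

        # Hypothetical DB2 CLP commands: a 4K bufferpool plus a 4K tablespace
        # that the small tables could be created in.
        db2 connect to PRODDB
        db2 "CREATE BUFFERPOOL BP4K SIZE AUTOMATIC PAGESIZE 4K"
        db2 "CREATE TABLESPACE TS_SMALL PAGESIZE 4K BUFFERPOOL BP4K"
        db2 connect reset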

    Read the article

  • Scaling web application with SQL Server 2008 database

    - by John
    I have a database in which 90% of the tables are read-only; the other 10% hold writable data. We need to scale the ASP.NET application and add more users who will not be writing to the database. We are thinking of adding another server and routing the users who only need read access to that server. Is there a way to replicate just some tables to another database server? Since 90% of the data doesn't change, we don't want to set up full database replication. Please advise.
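    One low-tech sketch of copying just the static tables (server, database, table, and file names are placeholders); SQL Server's built-in transactional replication can also publish a selected subset of tables as articles, but it needs distributor and publication setup that goes beyond a snippet:

        # Hypothetical bcp-based copy of a read-only table to the read-only server
        # (assumes the table already exists on the target).
        bcp SalesDb.dbo.Products out products.dat -S PRIMARYSRV -T -n
        bcp SalesDb.dbo.Products in  products.dat -S READONLYSRV -T -n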

    Read the article

  • Unable to update the EntitySet because it has a DefiningQuery and no <UpdateFunction> element

    - by Harish Ranganathan
    When working with the ADO.NET Entity Data Model, it is common to generate an entity schema for more than a single table from our database. With entity model generation automated by Visual Studio, it becomes even more tempting to create and work with entity models to achieve an object mapping relationship. One of the errors you might hit while trying to update an entity set, either programmatically using context.SaveChanges or via the automatic insert/update code generated by GridView etc., is “Unable to update the EntitySet <EntityName> because it has a DefiningQuery and no <UpdateFunction> element exists in the <ModificationFunctionMapping> element to support the current operation”. While the description is pretty lengthy, the immediate instinct is to open the generated entity model code and see if it can be updated accordingly. However, the first thing to check, if the entity set is generated from a table, is whether that table defines a primary key. Most of the time we create tables with primary keys, but reference tables and other tables without a primary key cannot be updated through the Entity context, and hence this error is thrown. Unless the entity set maps to a View (in which case the default model is read-only), the above error most often occurs because no primary key is defined on the table. There are other reasons why this error can pop up, which I am not going into for the sake of simplicity. If you find something new, please feel free to share it in the comments. Hope this helps. Cheers !!!
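    A minimal sketch of the fix for the primary-key case (server, database, table, and column names are made up); after altering the table, refresh or regenerate the model so the entity picks up the key:

        # Hypothetical: add the missing primary key to the reference table.
        sqlcmd -S MYSERVER -d MyShopDb -Q "ALTER TABLE dbo.OrderStatus ADD CONSTRAINT PK_OrderStatus PRIMARY KEY (StatusId);"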

    Read the article

  • How to: Check which table is the biggest, in SQL Server

    - by AngelEyes
    The company I work with had its DB double in size lately, so I needed to find out which tables were the biggest. I found this on the web and decided it's worth remembering! Taken from http://www.sqlteam.com/article/finding-the-biggest-tables-in-a-database, the code is from http://www.sqlteam.com/downloads/BigTables.sql

        /**************************************************************************************
        *
        *  BigTables.sql
        *  Bill Graziano (SQLTeam.com)
        *  [email protected]
        *  v1.1
        *
        **************************************************************************************/
        DECLARE @id INT
        DECLARE @type CHARACTER(2)
        DECLARE @pages INT
        DECLARE @dbname SYSNAME
        DECLARE @dbsize DEC(15, 0)
        DECLARE @bytesperpage DEC(15, 0)
        DECLARE @pagesperMB DEC(15, 0)

        CREATE TABLE #spt_space
          (
             objid    INT NULL,
             ROWS     INT NULL,
             reserved DEC(15) NULL,
             data     DEC(15) NULL,
             indexp   DEC(15) NULL,
             unused   DEC(15) NULL
          )

        SET nocount ON

        -- Create a cursor to loop through the user tables
        DECLARE c_tables CURSOR FOR
          SELECT id
          FROM   sysobjects
          WHERE  xtype = 'U'

        OPEN c_tables

        FETCH NEXT FROM c_tables INTO @id

        WHILE @@FETCH_STATUS = 0
          BEGIN
              /* Code from sp_spaceused */
              INSERT INTO #spt_space
                          (objid,
                           reserved)
              SELECT objid = @id,
                     SUM(reserved)
              FROM   sysindexes
              WHERE  indid IN ( 0, 1, 255 )
                     AND id = @id

              SELECT @pages = SUM(dpages)
              FROM   sysindexes
              WHERE  indid < 2
                     AND id = @id

              SELECT @pages = @pages + Isnull(SUM(used), 0)
              FROM   sysindexes
              WHERE  indid = 255
                     AND id = @id

              UPDATE #spt_space
              SET    data = @pages
              WHERE  objid = @id

              /* index: sum(used) where indid in (0, 1, 255) - data */
              UPDATE #spt_space
              SET    indexp = (SELECT SUM(used)
                               FROM   sysindexes
                               WHERE  indid IN ( 0, 1, 255 )
                                      AND id = @id) - data
              WHERE  objid = @id

              /* unused: sum(reserved) - sum(used) where indid in (0, 1, 255) */
              UPDATE #spt_space
              SET    unused = reserved - (SELECT SUM(used)
                                          FROM   sysindexes
                                          WHERE  indid IN ( 0, 1, 255 )
                                                 AND id = @id)
              WHERE  objid = @id

              UPDATE #spt_space
              SET    ROWS = i.ROWS
              FROM   sysindexes i
              WHERE  i.indid < 2
                     AND i.id = @id
                     AND objid = @id

              FETCH NEXT FROM c_tables INTO @id
          END

        SELECT TOP 25 table_name = (SELECT LEFT(name, 25)
                                    FROM   sysobjects
                                    WHERE  id = objid),
                      ROWS = CONVERT(CHAR(11), ROWS),
                      reserved_kb = Ltrim(Str(reserved * d.low / 1024., 15, 0) + ' ' + 'KB'),
                      data_kb = Ltrim(Str(data * d.low / 1024., 15, 0) + ' ' + 'KB'),
                      index_size_kb = Ltrim(Str(indexp * d.low / 1024., 15, 0) + ' ' + 'KB'),
                      unused_kb = Ltrim(Str(unused * d.low / 1024., 15, 0) + ' ' + 'KB')
        FROM   #spt_space,
               MASTER.dbo.spt_values d
        WHERE  d.NUMBER = 1
               AND d.TYPE = 'E'
        ORDER  BY reserved DESC

        DROP TABLE #spt_space

        CLOSE c_tables

        DEALLOCATE c_tables

    Read the article

  • Using transactions with LINQ-to-SQL

    - by Jalpesh P. Vadgama
    Today one of my colleagues asked how we can use transactions with LINQ-to-SQL classes when more than one entity is updated at the same time. It was a good question. Here is my answer. .NET 2.0 and later provide a class called TransactionScope which can be used to manage transactions with LINQ. Let's take a simple scenario: we have a shopping cart application in which we store the details of each order placed in the database using LINQ-to-SQL. There are two tables, Order and OrderDetails, which hold all the information related to an order. Order stores general information about the order, while OrderDetails holds the product and quantity for a particular order. We need to insert data into both tables at the same time, and if any error occurs the transaction should be rolled back. To use TransactionScope in the above scenario, we first have to add a reference to the System.Transactions assembly. After adding the reference, we drag and drop the Order and OrderDetails tables onto the LINQ-to-SQL designer, which creates entities for them. Below is the code that uses a transaction scope to manage the transaction with the LINQ context.

        MyContextDataContext objContext = new MyContextDataContext();
        using (System.Transactions.TransactionScope tScope =
                   new System.Transactions.TransactionScope(TransactionScopeOption.Required))
        {
            objContext.Order.InsertOnSubmit(order);
            objContext.OrderDetails.InsertOnSubmit(orderDetails);
            objContext.SubmitChanges();
            tScope.Complete();
        }

    The transaction is committed only if the using block completes successfully. Hope this will help you. Technorati Tags: Linq,Transaction,System.Transactions,ASP.NET

    Read the article

  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
    A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature to address this for dimensional objects entitled Orphan Management. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record. In this article, a simple example will be provided to show you how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need. Then we take some time to examine all the objects. After that, we prepare the source data, deploy the target table and dimension/cube loading map. Finally, we run the loading maps, and check the data in target dimension/cube tables. OK, let’s start… 1. Import MDL file and examine sample project First, download zip file from here, which includes a MDL file and three source data files. Then we open OWB design center, import orphan_management.mdl by using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in BI_DEMO project as below: Mapping LOAD_CHANNELS_OM: The mapping for dimension loading. Mapping LOAD_SALES_OM: The mapping for cube loading. Dimension CHANNELS_OM: The dimension that contains channels data. Cube SALES_OM: The cube that contains sales data. Table CHANNELS_OM: The star implementation table of dimension CHANNELS_OM. Table SALES_OM: The star implementation table of cube SALES_OM. Table SRC_CHANNELS: The source table of channels data, that will be loaded into dimension CHANNELS_OM. Table SRC_ORDERS and SRC_ORDER_ITEMS: The source tables of sales data that will be loaded into cube SALES_OM. Sequence CLASS_OM_DIM_SEQ: The sequence used for loading dimension CHANNELS_OM. Dimension CHANNELS_OM This dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the following screenshots: The orphan management policy options that you can set for loading are: Reject Orphan: The record is not inserted. Default Parent: You can specify a default parent record. This default record is used as the parent record for any record that does not have an existing parent record. If the default parent record does not exist, Warehouse Builder creates the default parent record. You specify the attribute values of the default parent record at the time of defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder also creates this record. No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. While removing data from a dimension, you can select one of the following orphan management policies: Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records. No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1) Cube SALES_OM This cube is references to dimension CHANNELS_OM. It has three measures: AMOUNT, QUANTITY and COST. 
The orphan management policy setting are shown as following screenshot: The orphan management policy options that you can set for loading are: No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows. Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row. Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG) Mapping LOAD_CHANNELS_OM This mapping loads source data from table SRC_CHANNELS to dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator is used for generating a constant value for the top level in the dimension. The CLASS_FILTER operator is used to filter out the “invalid” class name, so then we can see what will happen when those channel records with an “invalid” parent are loading into dimension. Some properties of the dimension operator in this mapping are important to orphan management. See the screenshot below: Create Default Level Records: If YES, then default level records will be created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is “Default Parent” or “Default Dimension Record”. This property is set to NO by default, so the user may need to set this to YES manually. LOAD policy for INVALID keys/ LOAD policy for NULL keys: These two properties have the same meaning as in the dimension editor. The values are set to the same as the dimension value when user drops the dimension into the mapping. The user does not need to modify these properties. Record Error Rows: If YES, error rows will be inserted into error table when loading the dimension. REMOVE Orphan Policy: This property is used when removing data from a dimension. Since the dimension loading type is set to LOAD in this example, this property is disabled. Mapping LOAD_SALES_OM This mapping loads source data from table SRC_ORDERS and SRC_ORDER_ITEMS to cube SALES_OM. This mapping seems a little bit complicated, but operators in the red rectangle are used to filter out and generate the records with “invalid” or “null” dimension keys. Some properties of the cube operator in a mapping are important to orphan management. See the screenshot below: Enable Source Aggregation: Should be checked in this example. If the default dimension record orphan policy is set for the cube operator, then it is recommended that source aggregation also be enabled. Otherwise, the orphan management processing may produce multiple fact rows with the same default dimension references, which will cause an “unstable rowset” execution error in the database, since the dimension refs are used as update match attributes for updating the fact table. LOAD policy for INVALID keys/ LOAD policy for NULL keys: These two properties have the same meaning as in the cube editor. The values are set to the same as in the cube editor when the user drops the cube into the mapping. The user does not need to modify these properties. Record Error Rows: If YES, error rows will be inserted into error table when loading the cube. 2. Deploy objects and mappings We now can deploy the objects. First, make sure location SALES_WH_LOCAL has been correctly configured. 
Then open Control Center Manager by using the menu Tools->Control Center Manager. Expand BI_DEMO->SALES_WH_LOCAL, click SALES_WH node on the project tree. We can see the following objects: Deploy all the objects in the following order: Sequence CLASS_OM_DIM_SEQ Table CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS Dimension CHANNELS_OM Cube SALES_OM Mapping LOAD_CHANNELS_OM, LOAD_SALES_OM Note that we deployed source tables as well. Normally, we import source table from database instead of deploying them to target schema. However, in this example, we designed the source tables in OWB and deployed them to database for the purpose of this demonstration. 3. Prepare and examine source data Before running the mappings, we need to populate and examine the source data first. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as target user. Then we check the data in these three tables. Table SRC_CHANNELS SQL> select rownum, id, class, name from src_channels; Records 1~5 are correct; they should be loaded into dimension without error. Records 6,7 and 8 have null parents; they should be loaded into dimension with a default parent value, and should be inserted into error table at the same time. Records 9, 10 and 11 have “invalid” parents; they should be rejected by dimension, and inserted into error table. Table SRC_ORDERS and SRC_ORDER_ITEMS SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost from src_orders a, src_order_items b where a.id = b.order_id; Record 178 has null dimension reference; it should be loaded into cube with a default dimension reference, and should be inserted into error table at the same time. Record 179 has “invalid” dimension reference; it should be rejected by cube, and inserted into error table. Other records should be aggregated and loaded into cube correctly. 4. Run the mappings and examine the target data In the Control Center Manager, expand BI_DEMO-> SALES_WH_LOCAL-> SALES_WH-> Mappings, right click on LOAD_CHANNELS_OM node, click Start. Use the same way to run mapping LOAD_SALES_OM. When they successfully finished, we can check the data in target tables. Table CHANNELS_OM SQL> select rownum, total_id, total_name, total_source_id, class_id,class_name, class_source_id, channel_id, channel_name,channel_source_id from channels_om order by abs(dimension_key); Records 1,2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally have null parents. We see their parents name (class_name) is set to DEF_CLASS_NAME. Those records whose CHANNEL_NAME are Special_4, Special_5 and Special_6 are not loaded to this table because of the invalid parent. Error Table CHANNELS_OM_ERR SQL> select rownum, class_source_id, channel_id, channel_name,channel_source_id, err$$$_error_reason from channels_om_err order by channel_name; We can see all the record with null parent or invalid parent are inserted into this error table. Error reason is “Default parent used for record” for the first three records, and “No parent found for record” for the last three. Table SALES_OM SQL> select a.*, b.channel_name from sales_om a, channels_om b where a.channels=b.channel_id; We can see the order record with null channel_name has been loaded into target table with a default channel_name. The one with “invalid” channel_name are not loaded. 
Error Table SALES_OM_ERR SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name, a.err$$$_error_reason from sales_om_err a, channels_om b where a.channels=b.channel_id(+); We can see the order records with null or invalid channel_name are inserted into error table. If the dimension reference column is null, the error reason is “Default dimension record used for fact”. If it is invalid, the error reason is “Dimension record not found for fact”. Summary In summary, this article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.

    Read the article

  • PASS 13 Dispatches: Memory Optimized = On

    - by Tony Davis
    I'm at the PASS Summit in Charlotte for the Day 1 keynote by Quentin Clarke, Corporate VP of the data platform group at Microsoft. He's talking about how SQL Server 2014 is “pushing boundaries”, and first up is SQL Server 2014's In-Memory OLTP technology (former codename “hekaton”). It is a feature that provokes a lot of interest, and for good reason: without any need for application rewrites or hardware updates, it can ensure that an application finds in memory most or all of the data it needs, and can lead to huge improvements in processing times. A good recent hekaton use-cases article talks about applications that need a “Shock Absorber” when spikes, or just a high rate of incoming workload (including data in ETL scenarios), become a primary bottleneck. To get a really deep look at this technology, I would check out David DeWitt's summit keynote tomorrow (it will be live streamed). Other than that, to get started I'd recommend Kalen Delaney's whitepaper. She offers a lot of insight into how it works and how to start defining memory-optimized tables and natively compiled stored procedures. These memory-optimized tables use completely optimistic multi-version concurrency control – no waiting on locks! After that, Tom LaRock has compiled a useful set of links to drill deeper, including one to Microsoft's AMR tool to help you gauge the tables that might benefit most. Tony.

    Read the article

  • Reading and conditionally updating N rows, where N > 100,000 for DNA Sequence processing

    - by makerofthings7
    I have a proof of concept application that uses Azure tables to associate DNA sequences to "something". Table 1 is the master table. It uniquely lists every DNA sequence. The PK is a load balanced hash of the RK. The RK is the unique encoded value of the DNA sequence. Additional tables are created per subject. Each subject has a list of N DNA sequences that have one reference in the Master table, where N is 100,000. It is possible for many tables to reference the same DNA sequence, but in this case only one entry will be present in the Master table. My Azure dilemma: I need to lock the reference in the Master table as I work with the data. I need to handle timeouts, and prevent other threads from overwriting my data as one C# thread is working with the information. Other threads need to realise that this is locked, and move onto other unlocked records and do the work. Ideally I'd like to get some progress report of how my computation is going, and have the option to cancel the process (and unwind the locks). Question What is the best approach for this? I'm looking at these code snippets for inspiration: http://blogs.msdn.com/b/jimoneil/archive/2010/10/05/azure-home-part-7-asynchronous-table-storage-pagination.aspx http://stackoverflow.com/q/4535740/328397

    Read the article

  • Dynamic Forms: Pattern or AntiPattern?

    - by Segfault
    I'm sure you've seen it. The database has a bunch of tables called Forms, Controls, FormsControls, ControlSets, Actions, and the program that queries these tables has a dynamically generated user interface. It reads all the forms, loads a home page that links to them all (or embeds them in some tabbed or paged home page), and for each of those forms it reads the various text boxes, check boxes, radio buttons, submit buttons, combo boxes, labels and whatnot from the controls and form-to-control join tables, lays those elements out according to the database, and links all the controls to logic according to other rules in the database. To me, this is an anti-pattern. It actually makes the application more difficult to maintain, because its design is now spread across multiple systems. Also, the database is not source-controlled. Sure, it may make one or two changes go more quickly - after you've analyzed the program to understand how to change the data, and as long as you don't stray from the sorts of changes that were anticipated and accounted for - but that's often just not sustainable. What say you?

    Read the article

  • Formalizing a requirements spec written in narrative English

    - by ProfK
    I have a fairly technical functional requirements spec, expressed in English prose, produced by my project manager. It is structured as a collection of UI tabs, where the requirements for each tab are expressed as a list of UI fields and a list of business rules for the tab. Most business rules are for UI fields on a tab, e.g.: a) Must be alphanumeric, max length 20. b) Must be a dropdown, with values from table x. c) Is mandatory. d) Is mandatory under certain conditions, e.g. another field is populated, or has a specific value. Then other business rules get a little more complex. The spec is for a job application, so the central business object (table) is the Applicant, and we have several other tables with one-to-many relationships to Applicant, such as Degree, HighSchool, PreviousEmployer, Diploma, etc. e) One such complex rule says a status field can only be assigned a certain value if a many-side record exists in at least one of the many-side tables, e.g. the Applicant has at least one HighSchool or at least one Diploma record. I am looking for advice on how to codify these requirements into a more structured specification defined in terms of tables, fields, and relationships, especially for the conditional rules for fields and for the presence of related records. Any suggestions and advice will be most welcome, but I would be overjoyed if I could find an already defined system or structure for expressing things like this.

    Read the article

  • How to filter a mysql database with user input on a website and then spit the filtered table back to the website? [migrated]

    - by Luke
    I've been researching this on Google for literally 3 weeks, racking my brain and still not quite finding anything. I can't believe this is so elusive. (I'm a complete beginner, so if my terminology sounds stupid then that's why.) I have a database in MySQL/phpMyAdmin on my web host. I'm trying to create a front end that will allow a user to specify criteria for querying the database without having to know SQL - basically just combo boxes and checkboxes on a form. The form would then 'submit' a query to the database and show the filtered tables. This is how the SQL looks in Microsoft Access:

        PARAMETERS TEXTINPUT1 Text ( 255 ), NUMBERINPUT1 IEEEDouble;   // pops up a list of parameters for the user to input
        SELECT DISTINCT Table1.Column1, Table1.Column2, Table1.Column3,*   // selects only the unique rows in these three columns
        FROM Table1   // the table where this query is happening
        WHERE (((Table1.Column1) Like [TEXTINPUT1]) AND ((Table1.Column2)<=[NUMBERINPUT1]) AND ((Table1.Column3)>=[NUMBERINPUT1]));
        // the criteria for the filter: it compares the user input parameters to the data in the rows and only selects
        // matches according to the equals sign, or greater-than-or-equal, or less-than-or-equal

    What I don't get: WHAT IN THE WORLD AM I SUPPOSED TO USE (that isn't totally hard)!? I've tried Google Fusion Tables - it doesn't filter correctly with numerical data or empty cells in rows, and can't relate tables. I've tried DataTables.net - it can't filter correctly with numerical data and can't use SQL without a bunch of in-depth knowledge, and I'm not even sure it can with that. I've looked into using jQuery with Google Spreadsheets - that doesn't work at all either. I have no idea how I'm supposed to build a front end for my database. Every place that looks promising (like Zoho Creator) is asking for money, and is far too simplified to be able to do the LIKE criteria or SELECT DISTINCT stuff.

    Read the article
