Search Results

Search found 1787 results on 72 pages for 'foreign'.

Page 23/72 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • What advantages do we have when creating a separate mapping table for two relational tables

    - by Pankaj Upadhyay
    In various open source CMSes, I have noticed that there is a separate table for mapping two related tables. For example, for categories and products there is a separate product_category_mapping table. This table just has a primary key and two foreign keys, one to the categories table and one to the products table. My question is: what are the benefits of this database design over just linking the tables directly by defining a foreign key in either table? Is it just a matter of convenience?
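
    For comparison, a minimal sketch of both designs (table and column names are illustrative, not taken from any particular CMS). A category_id column on products ties each product to exactly one category, while the mapping table allows many-to-many links and gives you a natural place for attributes of the relationship itself (display order, a "primary category" flag, and so on):

        -- Shared base tables
        CREATE TABLE categories (
          category_id INT PRIMARY KEY,
          name        VARCHAR(100) NOT NULL
        );
        CREATE TABLE products (
          product_id  INT PRIMARY KEY,
          name        VARCHAR(100) NOT NULL
        );

        -- Alternative A: direct link, each product sits in exactly one category
        ALTER TABLE products
          ADD COLUMN category_id INT NOT NULL,
          ADD FOREIGN KEY (category_id) REFERENCES categories (category_id);

        -- Alternative B: mapping table, any number of categories per product (and vice versa)
        CREATE TABLE product_category_mapping (
          mapping_id  INT PRIMARY KEY,
          product_id  INT NOT NULL,
          category_id INT NOT NULL,
          UNIQUE (product_id, category_id),
          FOREIGN KEY (product_id)  REFERENCES products (product_id),
          FOREIGN KEY (category_id) REFERENCES categories (category_id)
        );

    So the mapping table is not just a convenience: it changes the cardinality the schema can express.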

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a higher-level view in his blog post April 2012 Labs Release – Online DDL Improvements. MySQL before the InnoDB Plugin Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example: CREATE TABLE t(a INT); INSERT INTO t VALUES (1),(2),(3); CREATE INDEX a ON t(a); DROP TABLE t; The CREATE INDEX statement would be executed roughly as follows: CREATE TABLE temp(a INT, INDEX(a)); INSERT INTO temp SELECT * FROM t; RENAME TABLE t TO temp2; RENAME TABLE temp TO t; DROP TABLE temp2; The database could crash while copying all rows from the original table to the new one; for example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time. Fast Index Creation in the InnoDB Plugin of MySQL 5.1 MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor. The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine. The ALTER TABLE interface in MySQL 5.6 The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE; LOCK=SHARED and LOCK=EXCLUSIVE are also available. The old-style table copying can be requested by ALGORITHM=COPY, which requires at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods: handler::check_if_supported_inplace_alter checks whether the storage engine can perform all requested operations, and if so, what kind of locking is needed. handler::prepare_inplace_alter_table is used by InnoDB to set up the data dictionary cache for the upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash. 
handler::inplace_alter_table In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table). handler::commit_inplace_alter_table This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild. The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine. Online Secondary Index Creation It may occur that an index needs to be created on a new column to speed up queries. But, it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases: Set up a stub for the index, for logging changes. Scan the table for index records. Sort the index records. Bulk load the index records. Apply the logged changes. Replace the stub with the actual index. Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion. Native ALTER TABLE Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed, because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of table. Threads that modify the table will also write the changes to the log. 
When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
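
    For reference, the MySQL 5.6 syntax described in the post looks like this in practice; a short sketch reusing the example table t (not part of the original article):

        CREATE TABLE t (a INT) ENGINE=InnoDB;
        INSERT INTO t VALUES (1),(2),(3);

        -- Built online: concurrent INSERT/UPDATE/DELETE on t remain possible.
        ALTER TABLE t ADD INDEX a (a), ALGORITHM=INPLACE, LOCK=NONE;

        -- Force the old table-copying behaviour instead (needs at least a shared lock).
        ALTER TABLE t DROP INDEX a, ALGORITHM=COPY, LOCK=SHARED;

    If InnoDB cannot satisfy the requested ALGORITHM or LOCK level for a given operation, the statement fails with an error rather than silently taking a stronger lock.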

    Read the article

  • .htaccess / 301 redirection question

    - by John K
    All my WordPress post URLs generate subdirectories with duplicate content and I do not know what regular expression to use to consistently 301 redirect domain.com/category/post/random-number/ to domain.com/category/post/ and domain.com/category/post/random-number/another-random-number/ also to domain.com/category/post/. Here is an example of my problem: http://www.example.com/features/harb-constitution-not-to-allow-kr-provinces-to-receive-foreign-officials/ http://www.example.com/features/harb-constitution-not-to-allow-kr-provinces-to-receive-foreign-officials/1345257927000/
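
    A hedged sketch of a rule for the site's root .htaccess, assuming the extra segments are purely numeric and posts always live exactly two segments deep (category/post-slug); it would need to sit before the standard WordPress rewrite block:

        RewriteEngine On
        # /category/post-slug/123456/ or /category/post-slug/123456/7890/  ->  /category/post-slug/
        RewriteRule ^([^/]+)/([^/]+)/[0-9]+(/[0-9]+)*/?$ /$1/$2/ [R=301,L]

    If the trailing segments are not always numeric, the [0-9]+ parts would have to be loosened to match whatever those IDs actually look like.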

    Read the article

  • i got mysql error on this statement i don't know why [closed]

    - by John Smiith
    I got a MySQL error on this statement and I don't know why. The error is: #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CONSTRAINT fk_objet_code FOREIGN KEY (objet_code) REFERENCES objet(code) ) ENG' at line 6. The SQL code is: CREATE TABLE IF NOT EXISTS `class` ( `numero` int(11) NOT NULL AUTO_INCREMENT, `type_class` varchar(100) DEFAULT NULL, `images` varchar(200) NOT NULL, PRIMARY KEY (`numero`) CONSTRAINT fk_objet_code FOREIGN KEY (objet_code) REFERENCES objet(code) ) ENGINE=InnoDB;;
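
    The immediate cause of the #1064 is a missing comma between PRIMARY KEY (`numero`) and the CONSTRAINT clause; once that is fixed, the table also needs an objet_code column for the foreign key to point at. A corrected sketch, assuming objet.code is an int(11) (the FK column must match the referenced column's type):

        CREATE TABLE IF NOT EXISTS `class` (
          `numero` int(11) NOT NULL AUTO_INCREMENT,
          `type_class` varchar(100) DEFAULT NULL,
          `images` varchar(200) NOT NULL,
          `objet_code` int(11) NOT NULL,
          PRIMARY KEY (`numero`),
          CONSTRAINT fk_objet_code FOREIGN KEY (`objet_code`) REFERENCES `objet` (`code`)
        ) ENGINE=InnoDB;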

    Read the article

  • OTRS upgrade 3.0 to 3.1 fails

    - by Valentin0S
    Today I've started upgrading OTRS from version 2.3 to 2.4 , 2.4 to 3.0 and 3.0 to 3.1. Everything went smoothly except the upgrade from 3.0 to 3.1 OTRS provides a few perl scripts which make the upgrade easier. I've used these scripts for each upgrade step. The upgrade from 3.0 to 3.1 fails at the following after using the upgrade script. scripts/DBUpdate-to-3.1.pl The error is : root@tickets:/opt/otrs# su - otrs $ scripts/DBUpdate-to-3.1.pl Migration started... Step 1 of 24: Refresh configuration cache... If you see warnings about 'Subroutine Load redefined', that's fine, no need to worry! Subroutine Load redefined at /opt/otrs/Kernel/Config/Files/ZZZAAuto.pm line 5. Subroutine Load redefined at /opt/otrs/Kernel/Config/Files/ZZZAuto.pm line 4. done. Step 2 of 24: Check framework version... done. Step 3 of 24: Creating DynamicField tables (if necessary)... done. DBD::mysql::db do failed: Cannot add or update a child row: a foreign key constraint fails (`pp_otrs`.`dynamic_field`, CONSTRAINT `FK_dynamic_field_create_by_id` FOREIGN KEY (`create_by`) REFERENCES `users` (`id`)) at /opt/otrs-3.1.10/Kernel/System/DB.pm line 478. ERROR: OTRS-DBUpdate-to-3.1-10 Perl: 5.14.2 OS: linux Time: Wed Sep 5 15:36:20 2012 Message: Cannot add or update a child row: a foreign key constraint fails (`pp_otrs`.`dynamic_field`, CONSTRAINT `FK_dynamic_field_create_by_id` FOREIGN KEY (`create_by`) REFERENCES `users` (`id`)), SQL: 'INSERT INTO dynamic_field (name, label, field_order, field_type, object_type, config, valid_id, create_time, create_by, change_time, change_by) VALUES (?, ?, ?, 'Text', 'Ticket', '--- {} ', 1, '2012-09-05 15:36:20' , 1, '2012-09-05 15:36:20' , 1)' Traceback (20405): Module: main::_DynamicFieldCreation (v1.85) Line: 466 Module: scripts/DBUpdate-to-3.1.pl (v1.85) Line: 95 Could not create new DynamicField TicketFreeKey1 at scripts/DBUpdate-to-3.1.pl line 477. Step 4 of 24: Create new dynamic fields for free fields (text, key, date)... $ Did anyone else face the same issue? Thanks in advance
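
    One plausible cause, given the trace: the failing INSERT sets create_by = 1, and FK_dynamic_field_create_by_id requires a matching row in users, so the migration breaks if there is no user with id 1 in this database. A quick, read-only check against the schema named in the error (pp_otrs):

        SELECT id, login FROM users WHERE id = 1;
        -- If that returns nothing, see which ids actually exist:
        SELECT MIN(id) AS min_id, MAX(id) AS max_id, COUNT(*) AS user_rows FROM users;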

    Read the article

  • Megacli is killing me, any help appreciated

    - by Stefan
    I run a server with 2 drives in raid0 configured through BIOS. I just added 2 more drives using hotplug (the server is dell r610 with RHEL 5.4 64bit) and I would like to configure a separate raid0 partition on these drives. I am getting the following error: /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd r0[32:2, 32:3] -a0 The specified physical disk does not have the appropriate attributes to complete the requested command. Exit Code: 0x26 All the parameters are correct and there is just no reason why this command could not work, see this (fujitsu is current raid, seagate is the new one I want to create): /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | egrep 'Adapter|Enclosure|Slot|Inquiry' Adapter #0 Enclosure Device ID: 32 Slot Number: 0 Enclosure position: 0 Inquiry Data: FUJITSU MBD2147RC D807D0A4PA101174 Enclosure Device ID: 32 Slot Number: 1 Enclosure position: 0 Inquiry Data: FUJITSU MBD2147RC D807D0A4PA10115T Enclosure Device ID: 32 Slot Number: 2 Enclosure position: 0 Inquiry Data: SEAGATE ST9300603SS FS033SE0TF5K Enclosure Device ID: 32 Slot Number: 3 Enclosure position: 0 Inquiry Data: SEAGATE ST9300603SS FS023SE070FK I also tried to set up the drive as hotspare, also some strange error: /opt/MegaRAID/MegaCli/MegaCli64 -PDHSP -Set -physdrv[32:3] -a0 Adapter: 0: Set Physical Drive at EnclId-32 SlotId-3 as Hot Spare Failed. FW error description: The specified device is in a state that doesn't support the requested command. Exit Code: 0x32 As you can see the disk is in Unconfigured, Good state: Enclosure Device ID: 32 Slot Number: 3 Enclosure position: 0 Device Id: 3 Sequence Number: 1 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SAS Raw Size: 279.396 GB [0x22ecb25c Sectors] Non Coerced Size: 278.896 GB [0x22dcb25c Sectors] Coerced Size: 278.875 GB [0x22dc0000 Sectors] Firmware state: Unconfigured(good), Spun Up SAS Address(0): 0x5000c50005cd20b1 SAS Address(1): 0x0 Connected Port Number: 3(path0) Inquiry Data: SEAGATE ST9300603SS FS023SE070FK FDE Capable: Not Capable FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: Foreign Foreign Secure: Drive is not secured by a foreign lock key Device Speed: Unknown Link Speed: Unknown Media Type: Hard Disk Device Drive Temperature :30C (86.00 F)
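
    One detail in that last listing stands out: the Seagate drives report Foreign State: Foreign, meaning they still carry RAID metadata from a previous controller, and MegaCli will not configure such disks into a new array or as a hot spare. A possible next step is to review and clear the foreign configuration (note that clearing discards whatever old array metadata, and therefore data, those disks describe), then retry the -CfgLdAdd command:

        /opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Scan -a0
        /opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Clear -a0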

    Read the article

  • Problem creating a database with PHP PDO

    - by Leandro Alonso
    Hello guys, I'm having a problem with a SQL query in my PHP Application. When the user access it for the first time, the app executes this query to create all the database: CREATE TABLE `databases` ( `id` bigint(20) NOT NULL auto_increment, `driver` varchar(45) NOT NULL, `server` text NOT NULL, `user` text NOT NULL, `password` text NOT NULL, `database` varchar(200) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ; -- -------------------------------------------------------- -- -- Table structure for table `modules` -- CREATE TABLE `modules` ( `id` bigint(20) unsigned NOT NULL auto_increment, `title` varchar(100) NOT NULL, `type` varchar(150) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=29 ; -- -------------------------------------------------------- -- -- Table structure for table `modules_data` -- CREATE TABLE `modules_data` ( `id` bigint(20) NOT NULL auto_increment, `module_id` bigint(20) unsigned NOT NULL, `key` varchar(150) NOT NULL, `value` tinytext, PRIMARY KEY (`id`), KEY `fk_modules_data_modules` (`module_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=184 ; -- -------------------------------------------------------- -- -- Table structure for table `modules_position` -- CREATE TABLE `modules_position` ( `user_id` bigint(20) unsigned NOT NULL, `tab_id` bigint(20) unsigned NOT NULL, `module_id` bigint(20) unsigned NOT NULL, `column` smallint(1) default NULL, `line` smallint(1) default NULL, PRIMARY KEY (`user_id`,`tab_id`,`module_id`), KEY `fk_modules_order_users` (`user_id`), KEY `fk_modules_order_tabs` (`tab_id`), KEY `fk_modules_order_modules` (`module_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -------------------------------------------------------- -- -- Table structure for table `tabs` -- CREATE TABLE `tabs` ( `id` bigint(20) unsigned NOT NULL auto_increment, `title` varchar(60) NOT NULL, `columns` smallint(1) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=12 ; -- -------------------------------------------------------- -- -- Table structure for table `tabs_has_modules` -- CREATE TABLE `tabs_has_modules` ( `tab_id` bigint(20) unsigned NOT NULL, `module_id` bigint(20) unsigned NOT NULL, PRIMARY KEY (`tab_id`,`module_id`), KEY `fk_tabs_has_modules_tabs` (`tab_id`), KEY `fk_tabs_has_modules_modules` (`module_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -------------------------------------------------------- -- -- Table structure for table `users` -- CREATE TABLE `users` ( `id` bigint(20) unsigned NOT NULL auto_increment, `login` varchar(60) NOT NULL, `password` varchar(64) NOT NULL, `email` varchar(100) NOT NULL, `name` varchar(250) default NULL, `user_level` bigint(20) unsigned NOT NULL, PRIMARY KEY (`id`), KEY `fk_users_user_levels` (`user_level`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ; -- -------------------------------------------------------- -- -- Table structure for table `users_has_tabs` -- CREATE TABLE `users_has_tabs` ( `user_id` bigint(20) unsigned NOT NULL, `tab_id` bigint(20) unsigned NOT NULL, `order` smallint(2) NOT NULL, `columns_width` varchar(255) default NULL, PRIMARY KEY (`user_id`,`tab_id`), KEY `fk_users_has_tabs_users` (`user_id`), KEY `fk_users_has_tabs_tabs` (`tab_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -------------------------------------------------------- -- -- Table structure for table `user_levels` -- CREATE TABLE `user_levels` ( `id` bigint(20) unsigned NOT NULL auto_increment, `level` 
smallint(2) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ; -- -------------------------------------------------------- -- -- Table structure for table `user_meta` -- CREATE TABLE `user_meta` ( `id` bigint(20) unsigned NOT NULL auto_increment, `user_id` bigint(20) unsigned default NULL, `key` varchar(150) NOT NULL, `value` longtext NOT NULL, PRIMARY KEY (`id`), KEY `fk_user_meta_users` (`user_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ; -- -- Constraints for dumped tables -- -- -- Constraints for table `modules_data` -- ALTER TABLE `modules_data` ADD CONSTRAINT `fk_modules_data_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION; -- -- Constraints for table `modules_position` -- ALTER TABLE `modules_position` ADD CONSTRAINT `fk_modules_order_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION, ADD CONSTRAINT `fk_modules_order_tabs` FOREIGN KEY (`tab_id`) REFERENCES `tabs` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION, ADD CONSTRAINT `fk_modules_order_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION; -- -- Constraints for table `users` -- ALTER TABLE `users` ADD CONSTRAINT `fk_users_user_levels` FOREIGN KEY (`user_level`) REFERENCES `user_levels` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION; -- -- Constraints for table `user_meta` -- ALTER TABLE `user_meta` ADD CONSTRAINT `fk_user_meta_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION; INSERT INTO `user_levels` VALUES(1, 10); INSERT INTO `user_levels` VALUES(2, 1); INSERT INTO `users` VALUES(1, 'admin', 'password', '[email protected]', NULL, 1); INSERT INTO `user_meta` VALUES (NULL, 1, 'last_tab', 1); In some environments i get this error: SQLSTATE[HY000]: General error: 1005 Can't create table 'dms.databases' (errno: 150) I tried everything that I could find on Google but nothing works. The strange part is that if I run this query in PhpMyAdmin he creates my database, without any error.
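
    Error 1005 with errno 150 means InnoDB rejected one of the FOREIGN KEY clauses, most often because the referencing and referenced columns differ in type, signedness or character set, because the referenced table does not yet exist in that environment, or because the referenced column lacks a suitable index. Run immediately after the failing statement, this shows exactly which constraint InnoDB refused and why:

        SHOW ENGINE INNODB STATUS;
        -- Read the "LATEST FOREIGN KEY ERROR" section of the output.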

    Read the article

  • referencing part of the composite primary key

    - by Zavael
    I have problems with setting the reference on database table. I have following structure: CREATE TABLE club( id INTEGER NOT NULL, name_short VARCHAR(30), name_full VARCHAR(70) NOT NULL ); CREATE UNIQUE INDEX club_uix ON club(id); ALTER TABLE club ADD CONSTRAINT club_pk PRIMARY KEY (id); CREATE TABLE team( id INTEGER NOT NULL, club_id INTEGER NOT NULL, team_name VARCHAR(30) ); CREATE UNIQUE INDEX team_uix ON team(id, club_id); ALTER TABLE team ADD CONSTRAINT team_pk PRIMARY KEY (id, club_id); ALTER TABLE team ADD FOREIGN KEY (club_id) REFERENCES club(id); CREATE TABLE person( id INTEGER NOT NULL, first_name VARCHAR(20), last_name VARCHAR(20) NOT NULL ); CREATE UNIQUE INDEX person_uix ON person(id); ALTER TABLE person ADD PRIMARY KEY (id); CREATE TABLE contract( person_id INTEGER NOT NULL, club_id INTEGER NOT NULL, wage INTEGER ); CREATE UNIQUE INDEX contract_uix on contract(person_id); ALTER TABLE contract ADD CONSTRAINT contract_pk PRIMARY KEY (person_id); ALTER TABLE contract ADD FOREIGN KEY (club_id) REFERENCES club(id); ALTER TABLE contract ADD FOREIGN KEY (person_id) REFERENCES person(id); CREATE TABLE player( person_id INTEGER NOT NULL, team_id INTEGER, height SMALLINT, weight SMALLINT ); CREATE UNIQUE INDEX player_uix on player(person_id); ALTER TABLE player ADD CONSTRAINT player_pk PRIMARY KEY (person_id); ALTER TABLE player ADD FOREIGN KEY (person_id) REFERENCES person(id); -- ALTER TABLE player ADD FOREIGN KEY (team_id) REFERENCES team(id); --this is not working It gives me this error: Error code -5529, SQL state 42529: a UNIQUE constraint does not exist on referenced columns: TEAM in statement [ALTER TABLE player ADD FOREIGN KEY (team_id) REFERENCES team(id)] As you can see, team table has composite primary key (club_id + id), the person references club through contract. Person has some common attributes for player and other staff types. One club can have multiple teams. Employed person has to have a contract with a club. Player (is the specification of person) - if emplyed - can be assigned to one of the club's teams. Is there better way to design my structure? I thought about excluding the club_id from team's primary key, but I would like to know if this is the only way. Thanks. UPDATE 1 I would like to have the id as team identification only within the club, so multiple teams can have equal id as long as they belong to different clubs. Is it possible? UPDATE 2 updated the naming convention as adviced by philip Some business rules to better understand the structure: One club can have 1..n teams (Main squad, Reserve squad, Youth squad or Team A, Team B... only team can play match, not club) One team belongs to one club only A player is type of person (other types (staff) are scouts, coaches etc so they do not need to belong to specific team, just to the club, if employed) Person can have 0..1 contract with 1 club (that means he is employed or unemployed) Player (if employed) belongs to one team of the club Now thinking about it - moving team_id from player to contract would solve my problem, and it could hold the condition "Player (if employed) belongs to one team of the club", but it would be redundant for other staff types. What do you think?
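
    One way to keep team's composite key and still reference it from player is to carry the club on player as well and make the foreign key composite, so that it matches team's primary key (id, club_id). A sketch using the names from the script above (the same idea applies if team_id is moved onto contract instead, as discussed in UPDATE 2):

        ALTER TABLE player ADD COLUMN club_id INTEGER;
        ALTER TABLE player
          ADD FOREIGN KEY (team_id, club_id) REFERENCES team (id, club_id);

    Both columns stay nullable, so an unemployed player simply has NULL for team and club; as a bonus, when both are set the composite key guarantees that the player's team really belongs to the player's club.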

    Read the article

  • MySQL Cluster 7.3 - Join This Week's Webinar to Learn What's New

    - by Mat Keep
    The first Development Milestone and Early Access releases of MySQL Cluster 7.3 were announced just a few weeks ago. To provide more detail and demonstrate the new features, Andrew Morgan and I will be hosting a live webinar this coming Thursday 25th October at 0900 Pacific Time / 16.00 UTC. Even if you can't make the live webinar, it is still worth registering for the event, as you will receive a notification when the replay is available to view on demand at your convenience. In the webinar, we will discuss the enhancements being previewed as part of MySQL Cluster 7.3, including: - Foreign Key Constraints: Yes, we've looked into the future and decided Foreign Keys are it ;-) You can read more about the implementation of Foreign Keys in MySQL Cluster 7.3 here. - Node.js NoSQL API: Allowing web, mobile and cloud services to query and receive result sets from MySQL Cluster, natively in JavaScript, enables developers to seamlessly couple high-performance, distributed applications with a high-performance, distributed persistence layer delivering 99.999% availability. You can study the Node.js / MySQL Cluster tutorial here. - Auto-Installer: This new web-based GUI makes it simple for DevOps teams to quickly configure and provision highly optimized MySQL Cluster deployments on-premise or in the cloud. You can view a YouTube tutorial on the MySQL Cluster Auto-Installer here. So we have a lot to cover in our 45-minute session. It will be time well spent if you want to know more about the future direction of MySQL Cluster and how it can help you innovate faster, with greater simplicity. Registration is open 
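
    As a taste of the first item, foreign keys in the 7.3 labs release are written with the familiar InnoDB-style syntax, just against NDB tables; a minimal sketch (table names are made up):

        CREATE TABLE parent (
          id INT NOT NULL PRIMARY KEY
        ) ENGINE=ndbcluster;

        CREATE TABLE child (
          id INT NOT NULL PRIMARY KEY,
          parent_id INT,
          FOREIGN KEY (parent_id) REFERENCES parent (id)
        ) ENGINE=ndbcluster;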

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies. 1. Cross-schema dependencies Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1: SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(Col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY); We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of: Creating a table with the new schema Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER) Dropping the old table Rename new table to same name as old table Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is: Drop foreign key constraint on SchemaA.Table1 Rebuild SchemaB.Table1 Rebuild SchemaA.Table1, adding the foreign key constraint to the new table This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we’re populating, or point to objects in the schemas we’re populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies eg a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn’t enough. 2. Dependency chains The solution above will only get the immediate dependencies of objects in populated schemas. What if there’s a chain of dependencies? A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1 If we’re only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents. 
Fortunately, we don’t have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we’re populating. We should end up with all the objects that might be affected by modifications in the initial schema we’re populating. Good solution? Well, no. For one thing, it’s sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance. 3. Comparison dependencies Consider the following schema: SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100)); What will happen if we used the dependency algorithm above on the source & target database? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference. Therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two objects models, what you will end up with is: SOURCE  TARGET SchemaA.Table1 -> SchemaA.Table1 SchemaB.Table1 -> (no object exists) When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions: Create SchemaB.Table1 Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1 Oops. Because the dependencies are only followed within a single database, we’ve tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won’t try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1): SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY); CREATE TABLE SchemaC.Table1 ( Col1 NUMBER);   CREATE TABLE SchemaC.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1); Although we’re now including SchemaB.Table1 on both sides of the comparison, there’s a third table (SchemaC.Table1) that we don’t know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That’s because we’re only running the dependency query on the schemas we’re explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database. 
Furthermore, this dependency chain could be arbitrarily extended.This leads us to the following algorithm for finding all the dependencies of a comparison: Find initial dependencies of schemas the user has selected to compare on the source and target Include these objects in both the source and target object populations Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa Repeat 2 & 3 until no more objects are found For the schema above, this will result in the following sequence of actions: Find initial dependenciesSchemaA.Table1 -> SchemaB.Table1 found on sourceNo objects found on target Include objects in both source and targetSchemaB.Table1 included in source and target Run dependency query, starting with found objectsNo objects to start with on sourceSchemaB.Table1 -> SchemaC.Table1 found on target Include objects in both source and targetSchemaC.Table1 included in source and target Run dependency query on found objectsNo objects found in sourceNo objects to start with in target Stop This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don’t read the entire dependency graph onto the client we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don’t yet know about a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed. 4. Object blacklists and fast dependencies When we tested this solution, we were puzzled in that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (eg the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn’t follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY, ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some columns, with dependency information we required, that were querying system tables with no indexes on them! 
To cut a long story short, running the following query: SELECT * FROM all_tab_cols WHERE data_type_owner = ‘XDB’; results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the ‘Ignore slow dependencies’ option was born – not querying this and a couple of similar clauses to drastically speed up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
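
    For illustration, a stripped-down version of the kind of hierarchical dependency query discussed above; the real implementation also unions in foreign-key edges from all_constraints and seeds the traversal from both databases, and SCHEMAA is a placeholder schema name:

        SELECT owner, name, type, referenced_owner, referenced_name, referenced_type
        FROM   all_dependencies
        START WITH owner = 'SCHEMAA'
        CONNECT BY NOCYCLE PRIOR referenced_owner = owner
                       AND PRIOR referenced_name  = name;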

    Read the article

  • Why am I not getting an sRGB default framebuffer?

    - by Aaron Rotenberg
    I'm trying to make my OpenGL Haskell program gamma correct by making appropriate use of sRGB framebuffers and textures, but I'm running into issues making the default framebuffer sRGB. Consider the following Haskell program, compiled for 32-bit Windows using GHC and linked against 32-bit freeglut: import Foreign.Marshal.Alloc(alloca) import Foreign.Ptr(Ptr) import Foreign.Storable(Storable, peek) import Graphics.Rendering.OpenGL.Raw import qualified Graphics.UI.GLUT as GLUT import Graphics.UI.GLUT(($=)) main :: IO () main = do (_progName, _args) <- GLUT.getArgsAndInitialize GLUT.initialDisplayMode $= [GLUT.SRGBMode] _window <- GLUT.createWindow "sRGB Test" -- To prove that I actually have freeglut working correctly. -- This will fail at runtime under classic GLUT. GLUT.closeCallback $= Just (return ()) glEnable gl_FRAMEBUFFER_SRGB colorEncoding <- allocaOut $ glGetFramebufferAttachmentParameteriv gl_FRAMEBUFFER gl_FRONT_LEFT gl_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING print colorEncoding allocaOut :: Storable a => (Ptr a -> IO b) -> IO a allocaOut f = alloca $ \ptr -> do f ptr peek ptr On my desktop (Windows 8 64-bit with a GeForce GTX 760 graphics card) this program outputs 9729, a.k.a. gl_LINEAR, indicating that the default framebuffer is using linear color space, even though I explicitly requested an sRGB window. This is reflected in the rendering results of the actual program I'm trying to write - everything looks washed out because my linear color values aren't being converted to sRGB before being written to the framebuffer. On the other hand, on my laptop (Windows 7 64-bit with an Intel graphics chip), the program prints 0 (huh?) and I get an sRGB default framebuffer by default whether I request one or not! And on both machines, if I manually create a non-default framebuffer bound to an sRGB texture, the program correctly prints 35904, a.k.a. gl_SRGB. Why am I getting different results on different hardware? Am I doing something wrong? How can I get an sRGB framebuffer consistently on all hardware and target OSes?

    Read the article

  • Working Abroad Advice Needed

    - by RBA
    Hi, First of all, Happy New Year everybody! I wish you for 2011 all the best! For several years I was thinking about to work abroad, and I want to make this step(to work in a/several foreign countries, for several years). I am a Software Developer with a Bachelor of Engineering in Computer Science, with a +4 years of experience in Delphi(and small working experience in other programming languages). Until now I've applied at aprox. 100 positions over the world, and I've been contacted by 4-5 hiring managers. Our technical discussions went good, but then we reached at working visa 'problem'. I don't have legal/health problems, but I don't poses a working permit in other countries except Romania. I've been reading several forums concerning the working visas (here I take out working visa for US or other 'hard to get visa' countries), and there are several steps which companies must do (so I can take out the bureaucracy problem) to make the papers. Concerning the cost of a visa, this goes up to one medium salary from that country(in most of the cases). I've been working for the last years with different clients, from a wide variety of countries, and I don't believe I will have problems with integration in a foreign country So, the problem is that the market don't need Delphi Developers (there is a small amount of open positions on the recruiting sites), and I should start learning other programming language(all the time is better to know other programming languages - but to master it, requires some time) with a higher market 'rank' (Java/C#/etc), OR the problem is only concerning the working visa, and maybe the reticence of the employer to foreign possible candidates? I'm asking this especially for those users who made this step in their life or they want to make it in the future. Best regards,

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead. The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, it lets you delete data sources, without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague. So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again. I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports. Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn’t seem to have been any changes here for quite a while. dbo.Catalog The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too… Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally. Type tells you what kind of item it is. Some examples are 1 for a folder and 2 a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it). Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. 
It stores the actual item definition in binary form, whether it’s actually an image, a report, whatever. LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID. Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn’t be a separate table, but I guess that’s just the way it goes. This field gets changed when the default parameters get changed in Report Manager. There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they’re not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I’m not sure. They’re in dbo.DataSource instead. dbo.DataSource The Primary key is DSID. Yes it’s a uniqueidentifier... ItemID is a foreign key reference back to dbo.Catalog Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question. Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You’d think this should be enforced by foreign key, but it’s not. It does allow NULLs though. Flags this is an int, and I’ll come back to this. When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you’d be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don’t need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn’t quite do that. If it deleted everything on a cascading delete, we’d’ve lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don’t want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure. UPDATE [DataSource]    SET       [Flags] = [Flags] & 0x7FFFFFFD, -- broken link       [Link] = NULL FROM    [Catalog] AS C    INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link] WHERE    (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*') Unfortunately there’s no semi-colon on the end (but I’d rather they fix the ntext and image types first), and don’t get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what’s going on with the Flags field. What I’d LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that’s in the shared data source, the one that’s about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I’d say that’s probably still slightly better than LOSING THE INFORMATION COMPLETELY. 
Sorry, rant over. I should log a Connect item – I’ll put that on my todo list. So it sets the Link field to NULL, and marks the Flags to tell you they’re broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the ‘2’ bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10 get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we’ve just used. I’d also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of broken reports are – in case you get a surprise from a different data source being broken in the past. SELECT c.Path, ds.Name FROM dbo.[DataSource] AS ds JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD AND ds.[Link] IS NULL AND ds.[ConnectionString] IS NULL; When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you’d be forgetting something. So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you’re changing, just in case you need to revert something later after testing (doing it all in a transaction won’t help, because you’ll just lock out the table, stopping you from testing anything). UPDATE ds SET [Flags] = [Flags] | 2, [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /*Insert your own GUID*/ OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags FROM dbo.[DataSource] AS ds JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD AND ds.[Link] IS NULL AND ds.[ConnectionString] IS NULL; But please be careful. Your mileage may vary. And there’s no reason why 400-odd broken reports needs to be quite the nightmare that it could be. Really, it should be less than five minutes. @rob_farley
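
    For the step of finding the recreated shared data source's ItemID, a lookup along these lines does the job (Type = 5 is a data source, as noted above; the path shown is hypothetical):

        SELECT ItemID, Path, Name
        FROM dbo.[Catalog]
        WHERE [Type] = 5
          AND [Path] = '/Data Sources/MySharedDataSource';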

    Read the article

  • Problems Enforcing Referential Integrity on SQL Server Tables

    - by SidC
    Hello All, I have a SQL Server 2005 database comprised of Customer, Quote, QuoteDetail tables. I want/need to enforce referential integrity such that when an insert is made on quotedetail, the quote and customer tables are also affected. I have tried my best to set up primary/foreign keys on my tables but need some help. Here's the scripts for my tables as they stand now (please don't laugh): Customers: USE [Diel_inventory] GO /****** Object: Table [dbo].[Customers] Script Date: 05/08/2010 03:39:04 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[Customers]( [pkCustID] [int] IDENTITY(1,1) NOT NULL, [CompanyName] [nvarchar](50) NULL, [Address] [nvarchar](50) NULL, [City] [nvarchar](50) NULL, [State] [nvarchar](2) NULL, [ZipCode] [nvarchar](5) NULL, [OfficePhone] [nvarchar](12) NULL, [OfficeFAX] [nvarchar](12) NULL, [Email] [nvarchar](50) NULL, [PrimaryContactName] [nvarchar](50) NULL, CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ([pkCustID] ASC)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] Quotes: USE [Diel_inventory] GO /****** Object: Table [dbo].[Quotes] Script Date: 05/08/2010 03:30:46 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[Quotes]( [pkQuoteID] [int] IDENTITY(1,1) NOT NULL, [fkCustomerID] [int] NOT NULL, [QuoteDate] [timestamp] NOT NULL, [NeedbyDate] [datetime] NULL, [QuoteAmt] [decimal](6, 2) NOT NULL, [QuoteApproved] [bit] NOT NULL, [fkOrderID] [int] NOT NULL, CONSTRAINT [PK_Bids] PRIMARY KEY CLUSTERED ( [pkQuoteID] ASC)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ALTER TABLE [dbo].[Quotes] WITH CHECK ADD CONSTRAINT [fkCustomerID] FOREIGN KEY([fkCustomerID]) REFERENCES [dbo].[Customers] ([pkCustID]) GO ALTER TABLE [dbo].[Quotes] CHECK CONSTRAINT [fkCustomerID] QuoteDetail: USE [Diel_inventory] GO /****** Object: Table [dbo].[QuoteDetail] Script Date: 05/08/2010 03:31:58 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[QuoteDetail]( [ID] [int] IDENTITY(1,1) NOT NULL, [fkQuoteID] [int] NOT NULL, [fkCustomerID] [int] NOT NULL, [fkPartID] [int] NULL, [PartNumber1] [float] NOT NULL, [Qty1] [int] NOT NULL, [PartNumber2] [float] NULL, [Qty2] [int] NULL, [PartNumber3] [float] NULL, [Qty3] [int] NULL, [PartNumber4] [float] NULL, [Qty4] [int] NULL, [PartNumber5] [float] NULL, [Qty5] [int] NULL, [PartNumber6] [float] NULL, [Qty6] [int] NULL, [PartNumber7] [float] NULL, [Qty7] [int] NULL, [PartNumber8] [float] NULL, [Qty8] [int] NULL, [PartNumber9] [float] NULL, [Qty9] [int] NULL, [PartNumber10] [float] NULL, [Qty10] [int] NULL, [PartNumber11] [float] NULL, [Qty11] [int] NULL, [PartNumber12] [float] NULL, [Qty12] [int] NULL, [PartNumber13] [float] NULL, [Qty13] [int] NULL, [PartNumber14] [float] NULL, [Qty14] [int] NULL, [PartNumber15] [float] NULL, [Qty15] [int] NULL, [PartNumber16] [float] NULL, [Qty16] [int] NULL, [PartNumber17] [float] NULL, [Qty17] [int] NULL, [PartNumber18] [float] NULL, [Qty18] [int] NULL, [PartNumber19] [float] NULL, [Qty19] [int] NULL, [PartNumber20] [float] NULL, [Qty20] [int] NULL, CONSTRAINT [PK_QuoteDetail] PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ALTER TABLE [dbo].[QuoteDetail] WITH CHECK ADD CONSTRAINT 
    [FK_QuoteDetail_Customers] FOREIGN KEY ([fkCustomerID]) REFERENCES [dbo].[Customers] ([pkCustID]) GO ALTER TABLE [dbo].[QuoteDetail] CHECK CONSTRAINT [FK_QuoteDetail_Customers] GO ALTER TABLE [dbo].[QuoteDetail] WITH CHECK ADD CONSTRAINT [FK_QuoteDetail_PartList] FOREIGN KEY ([fkPartID]) REFERENCES [dbo].[PartList] ([RecID]) GO ALTER TABLE [dbo].[QuoteDetail] CHECK CONSTRAINT [FK_QuoteDetail_PartList] GO ALTER TABLE [dbo].[QuoteDetail] WITH CHECK ADD CONSTRAINT [FK_QuoteDetail_Quotes] FOREIGN KEY([fkQuoteID]) REFERENCES [dbo].[Quotes] ([pkQuoteID]) GO ALTER TABLE [dbo].[QuoteDetail] CHECK CONSTRAINT [FK_QuoteDetail_Quotes] Your advice/guidance on how to set these up so that the customer ID in Customers is the same as in Quotes (referential integrity), and that the CustomerID is inserted on Quotes and Customers when an insert is made to QuoteDetail, would be much appreciated. Thanks, Sid
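
    Worth noting: foreign keys only verify that the parent rows already exist; they will not create a Customer or Quote row when a QuoteDetail is inserted. The usual pattern is one transaction that inserts the parents first and carries the generated keys down, roughly like this (values are invented, column lists abbreviated from the scripts above; QuoteDate is a timestamp column, so it populates itself):

        BEGIN TRANSACTION;

        DECLARE @CustID int, @QuoteID int;

        INSERT INTO dbo.Customers (CompanyName) VALUES ('Acme Tooling');
        SET @CustID = SCOPE_IDENTITY();

        INSERT INTO dbo.Quotes (fkCustomerID, NeedbyDate, QuoteAmt, QuoteApproved, fkOrderID)
        VALUES (@CustID, '20100601', 1500.00, 0, 0);
        SET @QuoteID = SCOPE_IDENTITY();

        INSERT INTO dbo.QuoteDetail (fkQuoteID, fkCustomerID, PartNumber1, Qty1)
        VALUES (@QuoteID, @CustID, 1001, 25);

        COMMIT;

    If a QuoteDetail row must always belong to the same customer as its quote, consider dropping fkCustomerID from QuoteDetail and reaching the customer through Quotes instead.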

    Read the article

  • Problem with cascade delete using Entity Framework and System.Data.SQLite

    - by jamone
    I have a SQLite DB that is set up so when I delete a Person the delete is cascaded. This works fine when I manually delete a Person (all records that reference the PersonID are deleted). But when I use Entity Framework to delete the Person I get an error: System.InvalidOperationException: The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted. I don't understand why this is occurring. My trigger is set to clean up all related objects before deleting the object it was told to delete. When I go into the model editor and check the properties of the relationship it shows no action for the OnDelete property. Why isn't this set correctly by pulling it from the DB? If I change this value to Cascade everything works properly, but I would rather not rely on this manual change, because what if I refresh my model from the DB and it loses that? Here's the relevant SQL for my tables. CREATE TABLE [SomeTable] ( [SomeTableID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, [PersonID] INTEGER NOT NULL REFERENCES [Person](PersonID) ON DELETE CASCADE ) CREATE TABLE [Person] ( [PersonID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT )
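
    If the SQLite provider keeps leaving the rule off when the model is generated, one workaround is to set it by hand on the association in the .edmx, which is what the designer's OnDelete property edits; a sketch with hypothetical association and role names:

        <!-- Conceptual model (CSDL) section of the .edmx -->
        <Association Name="FK_SomeTable_Person">
          <End Role="Person" Type="Model.Person" Multiplicity="1">
            <OnDelete Action="Cascade" />
          </End>
          <End Role="SomeTable" Type="Model.SomeTable" Multiplicity="*" />
        </Association>

    As noted above, a model refresh can overwrite this, so the alternative is to load and delete the dependent rows explicitly before removing the Person.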

    Read the article

  • Problem deleting record using Entity Framework and System.Data.SQLite

    - by jamone
    I have a SQLite DB that is set up so when I delete a Person the delete is cascaded. This works fine when I manually delete a Person (all records that reference the PersonID are deleted). But when I use Entity Framework to delete the Person I get an error: System.InvalidOperationException: The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted. I don't understand why this is occurring. My trigger is set to clean up all related objects before deleting the object it was told to delete. When I go into the model editor and check the properties of the relationship it shows no action for the OnDelete property. Why isn't this set correctly by pulling it from the DB? Here's the relevant SQL for my tables. CREATE TABLE [SomeTable] ( [SomeTableID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, [PersonID] INTEGER NOT NULL REFERENCES [Person](PersonID) ON DELETE CASCADE ) CREATE TABLE [Person] ( [PersonID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT )

    Read the article

  • Displaying Many-To-Many Database relationship in VB.NET 2008 with DataGrid, MS SQL 2008

    - by user337501
    My computer bombed while posting this; I couldn't find a duplicate question, but if there is one, forgive me. I've run into a wall, and rather than use a ladder to avoid it, I'd like to go through it. I'm setting up what I can best describe as a many-to-many relationship in a database. To illustrate, imagine I have three primary tables: Items, Categories, and Sections (never mind the potential redundancy). Then I have another table, Properties. Items, Categories, and Sections can each be associated with many properties, and a single property can be associated with one, all, or none of the other tables. The best way I can figure to do this is to have join tables make the relationship, i.e. tblItems----(Foreign Key)----tblItems_To_Properties----(Foreign Key)----tblProperties. In this example, tblItems simply has an "ItemID" primary key. tblItems_To_Properties has its own primary key (tblItems_To_PropertiesID), a foreign key to the item (ItemID), and a foreign key to the property (PropertyID). The Properties table simply has its primary key (PropertyID). I hope this example isn't too confusing; if I have to, I can find a way to put a diagram up. My problem is that I want to display this in a DataGrid using the master-detail method (DevExpress GridControl). Using tblItems as a test, I can see the Items in the parent view, but in the child view I see (understandably) the join table and nothing else. My goal is to make the grid ignore the join table and show the Properties table as the only child. Any help on this method, or insight into another solution, would be much appreciated.
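    One common workaround with this schema is to bind the detail band to a view (or query) that joins through the mapping table, so the grid's master-detail relation runs from ItemID straight to the property columns. A sketch, assuming the table names from the question and a hypothetical PropertyName display column:
        -- Join-table layout as described above (PropertyName is a made-up display column).
        CREATE TABLE tblItems (
            ItemID INT IDENTITY PRIMARY KEY
        );
        CREATE TABLE tblProperties (
            PropertyID   INT IDENTITY PRIMARY KEY,
            PropertyName NVARCHAR(100) NOT NULL
        );
        CREATE TABLE tblItems_To_Properties (
            tblItems_To_PropertiesID INT IDENTITY PRIMARY KEY,
            ItemID     INT NOT NULL REFERENCES tblItems (ItemID),
            PropertyID INT NOT NULL REFERENCES tblProperties (PropertyID)
        );
        GO
        -- Bind the grid's detail level to this view, related to the parent on ItemID,
        -- so the mapping table never appears in the UI.
        CREATE VIEW vwItemProperties AS
        SELECT m.ItemID, p.PropertyID, p.PropertyName
        FROM   tblItems_To_Properties AS m
               JOIN tblProperties AS p ON p.PropertyID = m.PropertyID;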

    Read the article

  • RIA Services for transmitting non DB object-graph

    - by Mike Gates
    I have been getting into RIA Services because I thought it would simplify dealing with the service layer of the web applications I want to build. I see lots of examples out there showing how to create DomainService classes which expose and consume entities that have some kind of relational database backing, and therefore have foreign-key relationships. However, I would like to know how to expose and consume normal object graphs: objects that contain references to each other but don't have foreign keys. For example, say I want a service operation called "GetFolderInformation(string pathToFolder)". I want this to return a custom object called "FolderInformation", structured with a string Name and an IEnumerable<FileInformation> Files. I cannot get this to work, because it seems that RIA wants to deal with entities that have foreign-key relationships. Why? Why can't the serializer just see my object references and recreate them in the proxy on the other side? Data exists behind service layers that doesn't necessarily have foreign-key relationships, like folder/file for example. EDIT: I realized I hadn't asked my question! My question is: is there a way to do what I am trying to do?

    Read the article

  • Common one-to-many table for multiple entities

    - by Ben V
    Suppose I have two tables, Customer and Vendor. I want to have a common address table for customer and vendor addresses. Customers and Vendors can both have one to many addresses.
    Option 1: add columns for the AddressIDs to the Customer and Vendor tables. This just doesn't seem like a clean solution to me.
        Customer (CustomerID, AddressID1, AddressID2, ...)
        Vendor (VendorID, AddressID1, AddressID2, ...)
        Address (AddressID, Street, City, ...)
    Option 2: move the foreign key to the Address table. For a Customer, Address.CustomerID will be populated; for a Vendor, Address.VendorID will be populated. I don't like this either - I shouldn't need to modify the Address table every time I want to use it for another entity.
        Customer (CustomerID)
        Vendor (VendorID)
        Address (AddressID, CustomerID, VendorID)
    Option 3: I've also seen this - only one foreign key column on the Address table, with another column to identify which table the foreign key belongs to. I don't like this one because it requires all the foreign key tables to have the same type of ID. It also seems messy once you start coding against it.
        Customer (CustomerID)
        Vendor (VendorID)
        Address (AddressID, FKTable, FKID)
    So, am I just too picky, or is there something I haven't thought of?
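    For what it's worth, a variation on Option 2 keeps a single Address table and lets a CHECK constraint insist that exactly one owner column is populated per row. A sketch in T-SQL-style syntax, with made-up Street/City columns:
        CREATE TABLE Customer (
            CustomerID INT IDENTITY PRIMARY KEY
        );
        CREATE TABLE Vendor (
            VendorID INT IDENTITY PRIMARY KEY
        );
        CREATE TABLE Address (
            AddressID  INT IDENTITY PRIMARY KEY,
            CustomerID INT NULL REFERENCES Customer (CustomerID),
            VendorID   INT NULL REFERENCES Vendor (VendorID),
            Street     NVARCHAR(200),
            City       NVARCHAR(100),
            -- Exactly one of the two owner columns must be set.
            CONSTRAINT CK_Address_OneOwner CHECK (
                (CASE WHEN CustomerID IS NULL THEN 0 ELSE 1 END)
              + (CASE WHEN VendorID   IS NULL THEN 0 ELSE 1 END) = 1
            )
        );
    The trade-off from the question still applies: each new owning entity means another nullable column and an updated constraint, so this only softens rather than removes the objection to Option 2.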

    Read the article

  • one table is shared between several websites

    - by sami
    I have a static table that's shared by several websites. By static, I mean that the data is read but never updated by the websites. Currently, all websites are served from the same server, but that may change. I want to minimize the need for creating and maintaining this table for each of the websites, so I thought about turning it into an XML file stored in a shared library that all websites have access to. The problem is that I use an ORM and rely on foreign key constraints to ensure the integrity of the ids used from that table, so by moving that table out of the MySQL database into an XML file, will this affect the integrity of the ids coming from that table? My table looks like this:
        <table name="entry">
            <column name="id" type="INTEGER" primaryKey="true" autoIncrement="true" />
            <column name="title" type="VARCHAR" size="500" required="true" />
        </table>
    and I use it as a foreign key in other tables:
        <table name="refer">
            <column name="id" type="INTEGER" primaryKey="true" autoIncrement="true" />
            <column name="linkto" type="INTEGER"/>
            <foreign-key foreignTable="entry">
                <reference local="linkto" foreign="id" />
            </foreign-key>
        </table>
    So I'm wondering: if I remove that table from the database, is there a way to retain that referential integrity? And of course, are there any other efficient ways to do the same thing? I just don't want to have to repeat that table for several websites.
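    If all of the sites stay on the same MySQL server, one alternative to the XML file is to keep a single copy of the table in a shared schema and point each site's foreign key across schemas; InnoDB accepts foreign keys that reference a table in another database on the same server. A sketch, with shared_data and site_one as hypothetical schema names:
        CREATE DATABASE IF NOT EXISTS shared_data;

        CREATE TABLE shared_data.entry (
            id    INT AUTO_INCREMENT PRIMARY KEY,
            title VARCHAR(500) NOT NULL
        ) ENGINE=InnoDB;

        -- In each individual site's schema:
        CREATE TABLE site_one.refer (
            id     INT AUTO_INCREMENT PRIMARY KEY,
            linkto INT,
            CONSTRAINT fk_refer_entry FOREIGN KEY (linkto)
                REFERENCES shared_data.entry (id)
        ) ENGINE=InnoDB;
    Once the sites move to separate servers this stops working, and at that point the integrity of the ids has to be enforced in application code (or by replicating the shared table), whichever storage format is chosen.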

    Read the article

  • SQL - Query to display average as either "longer than" or "shorter than"

    - by user1840801
    Here are the tables I've created:
        CREATE TABLE Plane_new (Pnum char(3), Feature varchar2(20), Ptype varchar2(15), primary key (Pnum));
        CREATE TABLE Employee_new (eid char(3), ename varchar(10), salary number(7,2), mid char(3), PRIMARY KEY (eid), FOREIGN KEY (mid) REFERENCES Employee_new);
        CREATE TABLE Pilot_new (eid char(3), Licence char(9), primary key (eid), foreign key (eid) references Employee_new on delete cascade);
        CREATE TABLE FlightI_new (Fnum char(4), Fdate date, Duration number(2), Pid char(3), Pnum char(3), primary key (Fnum), foreign key (Pid) references Pilot_new (eid), foreign key (Pnum) references Plane_new);
    And here is the query I must complete: For each flight, display its number, the name of the pilot who implemented the flight, and the words 'Longer than average' if the flight duration was longer than average, or the words 'Shorter than average' if the flight duration was shorter than or equal to the average. For the column holding the words 'Longer than average' or 'Shorter than average', make a header Length. Here is what I've come up with - with no luck!
        SELECT F.Fnum, E.ename,
        CASE Length
            WHEN F.Duration>(SELECT AVG(F.Duration) FROM FlightI_new F) THEN "Longer than average"
            WHEN F.Duration<=(SELECT AVG(F.Duration) FROM FlightI_new F) THEN 'Shorter than average'
        END
        FROM FlightI_new F
        LEFT OUTER JOIN Employee_new E ON F.Pid=E.eid
        GROUP BY F.Fnum, E.ename;
    Where am I going wrong?
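    A sketch of one way to write that query: use a searched CASE (no expression between CASE and WHEN), put the alias after END, use single quotes for the string literals, and drop the GROUP BY, since nothing is aggregated in the outer query:
        SELECT f.Fnum,
               e.ename,
               CASE
                   WHEN f.Duration > (SELECT AVG(Duration) FROM FlightI_new)
                       THEN 'Longer than average'
                   ELSE 'Shorter than average'
               END AS Length
        FROM   FlightI_new f
               LEFT OUTER JOIN Employee_new e ON f.Pid = e.eid;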

    Read the article

  • JPA 2.1 Schema Generation (TOTD #187)

    - by arungupta
    This blog explained some of the key features of JPA 2.1 earlier. Since then, Schema Generation has been added to JPA 2.1. This Tip Of The Day (TOTD) will provide more details about this new feature in JPA 2.1. Schema Generation refers to the generation of database artifacts like tables, indexes, and constraints in a database schema. It may or may not involve generation of a proper database schema, depending upon the credentials and authorization of the user. This helps in prototyping of your application, where the required artifacts are generated either prior to application deployment or as part of EntityManagerFactory creation. This is also useful in environments that require provisioning a database on demand, e.g. in a cloud. This feature will allow your JPA domain object model to be directly generated in a database. The generated schema may need to be tuned for the actual production environment. This use case is supported by allowing the schema generation to occur into DDL scripts, which can then be further tuned by a DBA. The following set of properties in persistence.xml, or specified during EntityManagerFactory creation, controls the behaviour of schema generation:
    - javax.persistence.schema-generation-action: controls the action to be taken by the persistence provider. Values: "none", "create", "drop-and-create", "drop".
    - javax.persistence.schema-generation-target: controls whether the schema is to be created in the database, whether DDL scripts are to be created, or both. Values: "database", "scripts", "database-and-scripts".
    - javax.persistence.ddl-create-script-target, javax.persistence.ddl-drop-script-target: control the target locations for writing of scripts. Writers are pre-configured for the persistence provider; these need to be specified only if scripts are to be generated. Values: java.io.Writer (e.g. MyWriter.class) or URL strings.
    - javax.persistence.ddl-create-script-source, javax.persistence.ddl-drop-script-source: specify the locations from which DDL scripts are to be read. Readers are pre-configured for the persistence provider. Values: java.io.Reader (e.g. MyReader.class) or URL strings.
    - javax.persistence.sql-load-script-source: specifies the location of an SQL bulk load script. Values: java.io.Reader (e.g. MyReader.class) or a URL string.
    - javax.persistence.schema-generation-connection: the JDBC connection to be used for schema generation.
    - javax.persistence.database-product-name, javax.persistence.database-major-version, javax.persistence.database-minor-version: needed if scripts are to be generated and there is no connection to the target database. Values are those obtained from JDBC DatabaseMetaData.
    - javax.persistence.create-database-schemas: whether the persistence provider needs to create a schema in addition to creating database objects such as tables, sequences, constraints, etc. Values: "true", "false".
    Section 11.2 in the JPA 2.1 specification defines the annotations used for the schema generation process. For example, @Table, @Column, @CollectionTable, @JoinTable, and @JoinColumn are used to define the generated schema. Several layers of defaulting may be involved. For example, the table name is defaulted from the entity name, and the entity name (which can be specified explicitly as well) is defaulted from the class name. However, annotations may be used to override or customize the values. The following entity class:
        @Entity
        public class Employee {
            @Id private int id;
            private String name;
            . . .
            @ManyToOne
            private Department dept;
        }
    is generated in the database with the following attributes: it maps to the EMPLOYEE table in the default schema; the "id" field is mapped to the ID column as the primary key; "name" is mapped to the NAME column with a default of VARCHAR(255), and the length of this field can easily be tuned using @Column; the @ManyToOne is mapped to the DEPT_ID foreign key column and can be customized using @JoinColumn. In addition to these properties, a couple of new annotations were added to JPA 2.1:
    @Index - An index for the primary key is generated by default in a database. This new annotation allows additional indexes to be defined, over one or more columns, for better performance. It is specified as part of @Table, @SecondaryTable, @CollectionTable, @JoinTable, and @TableGenerator. For example:
        @Table(indexes = {@Index(columnList="NAME"), @Index(columnList="DEPT_ID DESC")})
        @Entity
        public class Employee {
            . . .
        }
    The generated table will have a default index on the primary key. In addition, two new indexes are defined: one on the NAME column (default ascending) and one on the foreign key that maps to the department, in descending order.
    @ForeignKey - It is used to define a foreign key constraint, or to otherwise override or disable the persistence provider's default foreign key definition. It can be specified as part of JoinColumn(s), MapKeyJoinColumn(s), and PrimaryKeyJoinColumn(s). For example:
        @Entity
        public class Employee {
            @Id private int id;
            private String name;
            @ManyToOne
            @JoinColumn(foreignKey=@ForeignKey(foreignKeyDefinition="FOREIGN KEY (MANAGER_ID) REFERENCES MANAGER"))
            private Manager manager;
            . . .
        }
    In this entity, the employee's manager is mapped by a MANAGER_ID foreign key column that references the MANAGER table. The value of foreignKeyDefinition would be a database-specific string. A complete replay of Linda's talk at JavaOne 2012 can be seen here (click on CON4212_mp4_4212_001 in Media). These features will be available in GlassFish 4 promoted builds in the near future. JPA 2.1 will be delivered as part of Java EE 7. The different components in the Java EE 7 platform are tracked here. The JPA 2.1 Expert Group has released Early Draft 2 of the specification. Sections 9.4 and 11.2 provide all the details about Schema Generation. The latest javadocs can be obtained from here, and the JPA EG would appreciate feedback.
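    For illustration only, the DDL a persistence provider might emit for the Employee mappings above could look roughly like the following; the exact types, index names, and constraint names are provider- and database-specific, and the names used here are invented:
        CREATE TABLE EMPLOYEE (
            ID      INT NOT NULL,
            NAME    VARCHAR(255),
            DEPT_ID INT,
            PRIMARY KEY (ID)
        );

        -- From @Index(columnList="NAME") and @Index(columnList="DEPT_ID DESC")
        CREATE INDEX IDX_EMPLOYEE_NAME    ON EMPLOYEE (NAME);
        CREATE INDEX IDX_EMPLOYEE_DEPT_ID ON EMPLOYEE (DEPT_ID DESC);

        -- From the @ManyToOne mapping; a @ForeignKey(foreignKeyDefinition=...) would
        -- replace this generated definition with the supplied database-specific string.
        ALTER TABLE EMPLOYEE ADD CONSTRAINT FK_EMPLOYEE_DEPT
            FOREIGN KEY (DEPT_ID) REFERENCES DEPARTMENT (ID);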

    Read the article

  • JMS Step 7 - How to Write to an AQ JMS (Advanced Queueing JMS) Queue from a BPEL Process

    - by John-Brown.Evans
    This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were:
    JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
    JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
    JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
    JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue
    JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue
    JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes
    This example demonstrates how to write a simple message to an Oracle AQ via the WebLogic AQ JMS functionality from a BPEL process and a JMS adapter. If you have not yet reviewed the previous posts, please do so first, especially the JMS Step 6 post, as this one references objects created there.
    1. Recap and Prerequisites
    In the previous example, we created an Oracle Advanced Queue (AQ) and some related JMS objects in WebLogic Server to be able to access it via JMS.
    Here are the objects which were created, with their names and JNDI names:
    Database Objects (Name - Type):
    - AQJMSUSER - Database User
    - MyQueueTable - Advanced Queue (AQ) Table
    - UserQueue - Advanced Queue
    WebLogic Server Objects (Object Name - Type - JNDI Name):
    - aqjmsuserDataSource - Data Source - jdbc/aqjmsuserDataSource
    - AqJmsModule - JMS System Module
    - AqJmsForeignServer - JMS Foreign Server
    - AqJmsForeignServerConnectionFactory - JMS Foreign Server Connection Factory - AqJmsForeignServerConnectionFactory
    - AqJmsForeignDestination - AQ JMS Foreign Destination - queue/USERQUEUE
    - eis/aqjms/UserQueue - Connection Pool - eis/aqjms/UserQueue
    2. Create a BPEL Composite with a JMS Adapter Partner Link
    This step requires that you have a valid Application Server Connection defined in JDeveloper, pointing to the application server on which you created the JMS Queue and Connection Factory. You can create this connection in JDeveloper under the Application Server Navigator. Give it any name and be sure to test the connection before completing it. This sample will write a simple XML message to the AQ JMS queue via the JMS adapter, based on the following XSD file, which consists of a single string element:
    stringPayload.xsd
        <?xml version="1.0" encoding="windows-1252" ?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns="http://www.example.org"
                    targetNamespace="http://www.example.org"
                    elementFormDefault="qualified">
            <xsd:element name="exampleElement" type="xsd:string">
            </xsd:element>
        </xsd:schema>
    The following steps are all executed in JDeveloper. The SOA project will be created inside a JDeveloper Application. If you do not already have an application to contain the project, you can create a new one via File > New > General > Generic Application. Give the application any name, for example JMSTests, and, when prompted for a project name and type, call the project JmsAdapterWriteAqJms and select SOA as the project technology type. If you already have an application, continue below.
    Create a SOA Project
    Create a new project and select SOA Tier > SOA Project as its type. Name it JmsAdapterWriteAqJms. When prompted for the composite type, choose Composite With BPEL Process. When prompted for the BPEL Process, name it JmsAdapterWriteAqJms too and choose Synchronous BPEL Process as the template. This will create a composite with a BPEL process and an exposed SOAP service. Double-click the BPEL process to open and begin editing it. You should see a simple BPEL process with a Receive and Reply activity. As we created a default process without an XML schema, the input and output variables are simple strings.
    Create an XSD File
    An XSD file is required later to define the message format to be passed to the JMS adapter. In this step, we create a simple XSD file containing a string variable and add it to the project. First select the xsd item in the left-hand navigation tree to ensure that the XSD file is created under that item. Select File > New > General > XML and choose XML Schema. Call it stringPayload.xsd and, when the editor opens, select the Source view. Then replace the contents with the contents of the stringPayload.xsd example above and save the file. You should see it under the XSD item in the navigation tree.
    Create a JMS Adapter Partner Link
    We will create the JMS adapter as a service at the composite level. If it is not already open, double-click the composite.xml file in the navigator to open it.
From the Component Palette, drag a JMS adapter over onto the right-hand swim lane, under External References. This will start the JMS Adapter Configuration Wizard. Use the following entries: Service Name: JmsAdapterWrite Oracle Enterprise Messaging Service (OEMS): Oracle Advanced Queueing AppServer Connection: Use an existing application server connection pointing to the WebLogic server on which the connection factory created earlier is located. You can use the “+” button to create a connection directly from the wizard, if you do not already have one. Adapter Interface > Interface: Define from operation and schema (specified later) Operation Type: Produce Message Operation Name: Produce_message Produce Operation Parameters Destination Name: Wait for the list to populate. (Only foreign servers are listed here, because Oracle Advanced Queuing was selected earlier, in step 3) .         Select the foreign server destination created earlier, AqJmsForeignDestination (queue) . This will automatically populate the Destination Name field with the name of the foreign destination, queue/USERQUEUE . JNDI Name: The JNDI name to use for the JMS connection. This is the JNDI name of the connection pool created in the WebLogic Server.JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime. In our example, this is the value eis/aqjms/UserQueue Messages URL: We will use the XSD file we created earlier, stringPayload.xsd to define the message format for the JMS adapter. Press the magnifying glass icon to search for schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement : string . Press Next and Finish, which will complete the JMS Adapter configuration. Wire the BPEL Component to the JMS Adapter In this step, we link the BPEL process/component to the JMS adapter. From the composite.xml editor, drag the right-arrow icon from the BPEL process to the JMS adapter’s in-arrow.   This completes the steps at the composite level. 3. Complete the BPEL Process Design Invoke the JMS Adapter Open the BPEL component by double-clicking it in the design view of the composite.xml. This will display the BPEL process in the design view. You should see the JmsAdapterWrite partner link under one of the two swim lanes. We want it in the right-hand swim lane. If JDeveloper displays it in the left-hand lane, right-click it and choose Display > Move To Opposite Swim Lane. An Invoke activity is required in order to invoke the JMS adapter. Drag an Invoke activity between the Receive and Reply activities. Drag the right-hand arrow from the Invoke activity to the JMS adapter partner link. This will open the Invoke editor. The correct default values are entered automatically and are fine for our purposes. We only need to define the input variable to use for the JMS adapter. By pressing the green “+” symbol, a variable of the correct type can be auto-generated, for example with the name Invoke1_Produce_Message_InputVariable. Press OK after creating the variable. Assign Variables Drag an Assign activity between the Receive and Invoke activities. We will simply copy the input variable to the JMS adapter and, for completion, so the process has an output to print, again to the process’s output variable. 
Double-click the Assign activity and create two Copy rules: for the first, drag Variables > inputVariable > payload > client:process > client:input_string to Invoke1_Produce_Message_InputVariable > body > ns2:exampleElement for the second, drag the same input variable to outputVariable > payload > client:processResponse > client:result This will create two copy rules, similar to the following: Press OK. This completes the BPEL and Composite design. 4. Compile and Deploy the Composite Compile the process by pressing the Make or Rebuild icons or by right-clicking the project name in the navigator and selecting Make... or Rebuild... If the compilation is successful, deploy it to the SOA server connection defined earlier. (Right-click the project name in the navigator, select Deploy to Application Server, choose the application server connection, choose the partition on the server (usually default) and press Finish. You should see the message ----  Deployment finished.  ---- in the Deployment frame, if the deployment was successful. 5. Test the Composite Execute a Test Instance In a browser, log in to the Enterprise Manager 11g Fusion Middleware Control (EM) for your SOA installation. Navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite) and click on  JmsAdapterWriteAqJms [1.0] , then press the Test button. Enter any string into the text input field, for example “Test message from JmsAdapterWriteAqJms” then press Test Web Service. If the instance is successful, you should see the same text you entered in the Response payload frame. Monitor the Advanced Queue The test message will be written to the advanced queue created at the top of this sample. To confirm it, log in to the database as AQJMSUSER and query the MYQUEUETABLE database table. For example, from a shell window with SQL*Plus sqlplus aqjmsuser/aqjmsuser SQL> SELECT user_data FROM myqueuetable; which will display the message contents, for example Similarly, you can use the JDeveloper Database Navigator to view the contents. Use a database connection to the AQJMSUSER and in the navigator, expand Queues Tables and select MYQUEUETABLE. Select the Data tab and scroll to the USER_DATA column to view its contents. This concludes this example. The following post will be the last one in this series. In it, we will learn how to read the message we just wrote using a BPEL process and AQ JMS. Best regards John-Brown Evans Oracle Technology Proactive Support Delivery
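    As a small addition to the monitoring step above: if the raw USER_DATA column is hard to read, the message text can usually be pulled out of the payload object directly. A sketch, assuming the queue table stores the standard SYS.AQ$_JMS_TEXT_MESSAGE payload (short messages land in the TEXT_VC attribute, larger ones in TEXT_LOB):
        -- Run as AQJMSUSER; the table alias is required to dereference the object attributes.
        SELECT t.msgid,
               t.enq_time,
               t.user_data.text_vc  AS short_text,
               t.user_data.text_lob AS long_text
        FROM   myqueuetable t;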

    Read the article

  • what is IP 10.1.1.130, which seems to be monitored by the NT Kernel & System process on Windows 7?

    - by EndangeringSpecies
    I used netstat to see what is happening with network connections, and I see this weird IP address somehow listed together with PID 4, "NT Kernel & System", whatever that might be. Netstat describes it as a "local address" and there is no "foreign address" involved (by the way, what are local and foreign addresses anyway?). In the state column to the right there is neither a "LISTENING" nor an "ESTABLISHED" entry - no state at all.

    Read the article

  • Delete a directory with pipe (|) in its name?

    - by Dave Jarvis
    Without booting to Linux, how do you delete a directory that was created in Linux on an NTFS partition and whose name contains a pipe? For example:
        f:\flac\foreign\Yoshida_Brothers\Best_of_Yoshida_Brothers_|_Tsugaru_Shamisen
    Tried and failed:
    - Midnight Commander
    - Recursively deleting the parent folder
    - del /f /s /q Yoshida_Brothers
    - del /f /s /q "\\?f:\flac\foreign\Yoshida_Brothers\"
    - rmdir /s Yoshida_Brothers
    - rmdir Best*
    - FileASSASSIN ("Cannot delete folder")
    Other ideas?

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >