Search Results

Search found 137085 results on 5484 pages for 'passing data from one dat'.

Page 30/5484 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • Big Data Matters with ODI12c

    - by Madhu Nair
    contributed by Mike Eisterer

    On October 17th, 2013, Oracle announced the release of Oracle Data Integrator 12c (ODI12c). This release signifies improvements to Oracle's Data Integration portfolio of solutions, particularly Big Data integration.

    Why Big Data = Big Business

    Organizations are gaining greater insights and actionability through the increased storage, processing and analytical capabilities offered by Big Data solutions. New technologies and frameworks like HDFS, NoSQL, Hive and MapReduce support these capabilities now. As more data is collected, analytical requirements increase, the complexity of managing transformations and aggregations of data compounds, and organizations need scalable Data Integration solutions. ODI12c provides enterprise solutions for the movement, translation and transformation of information and data, heterogeneously and in Big Data environments, through:

    - The ability for existing ODI and SQL developers to leverage new Big Data technologies.
    - A metadata-focused approach for cataloging, defining and reusing Big Data technologies, mappings and process executions.
    - Integration between many heterogeneous environments and technologies such as HDFS and Hive.
    - Generation of Hive Query Language.

    Working with Big Data using Knowledge Modules

    ODI12c provides developers with the ability to define sources and targets and visually develop mappings to effect the movement and transformation of data. As the mappings are created, ODI12c leverages a rich library of prebuilt integrations, known as Knowledge Modules (KMs). These KMs are contextual to the technologies and platforms to be integrated. The steps and actions needed to manage the data integration are pre-built and configured within the KMs. The Oracle Data Integrator Application Adapter for Hadoop provides a series of KMs specifically designed to integrate with Big Data technologies. The Big Data KMs include:

    - Check Knowledge Module
    - Reverse Engineer Knowledge Module
    - Hive Transform Knowledge Module
    - Hive Control Append Knowledge Module
    - File to Hive (LOAD DATA) Knowledge Module
    - File-Hive to Oracle (OLH-OSCH) Knowledge Module

    Nothing beats an example: to demonstrate the use of the KMs which are part of the ODI Application Adapter for Hadoop, a mapping may be defined to move data between files and Hive targets. The mapping is defined by dragging the source and target into the mapping, performing the attribute (column) mapping (see Figure 1) and then selecting the KM which will govern the process. In this mapping example, movie data is being moved from an HDFS source into a Hive table. Some of the attributes, such as "CUSTID to custid", have been mapped over.

    Figure 1: Defining the Mapping

    Before the proper KM can be assigned to define the technology for the mapping, it needs to be added to the ODI project. The Big Data KMs are made available to the project through the KM import process; generally, this is done prior to defining the mapping.

    Figure 2: Importing the Big Data Knowledge Modules

    Following the import, the KMs are available in the Designer Navigator.
    Figure 3: The Project View in Designer, Showing Installed IKMs

    Once the KM is imported, it may be assigned to the mapping target. This is done by selecting the Physical View of the mapping and examining the Properties of the target; in this case, MOVIAPP_LOG_STAGE is the target of our mapping.

    Figure 4: Physical View of the Mapping and Assigning the Big Data Knowledge Module to the Target

    Alternative KMs may be selected as well, providing flexibility and abstracting the logical mapping from the physical implementation. Our mapping may be applied to other technologies as well. The mapping is now complete and ready to run. We will see more in a future blog about running a mapping to load Hive.

    To complete this quick ODI for Big Data overview, let us take a closer look at what the IKM File to Hive is doing for us. ODI provides differentiated capabilities by building into the KM the process and steps which normally would have to be manually developed, tested and implemented. As shown in Figure 5, the KM prepares the Hive session, manages the Hive tables, performs the initial load from HDFS and then performs the insert into Hive. HDFS and Hive options are selected graphically, as shown in the properties in Figure 4.

    Figure 5: Process and Steps Managed by the KM

    What's Next

    Big Data, being the shape-shifting business challenge it is, is fast evolving into the deciding factor between market leaders and the rest. Now that an introduction to ODI and Big Data has been provided, look for additional blogs coming soon on the Knowledge Modules which make up the Oracle Data Integrator Application Adapter for Hadoop:

    - Importing Big Data Metadata into ODI, Testing Data Stores and Loading Hive Targets
    - Generating Transformations using Hive Query Language
    - Loading Oracle from Hadoop Sources

    For more information now, please visit the Oracle Data Integrator Application Adapter for Hadoop web site: http://www.oracle.com/us/products/middleware/data-integration/hadoop/overview/index.html

    Do not forget to tune in to the ODI12c Executive Launch webcast on the 12th to hear more about ODI12c and GG12c.

    Read the article

  • Linq-to-sql Add item and a one-to-many record at once

    - by Oskar Kjellin
    I have a function where I can add articles, and users can comment on them. This is done with a one-to-many relationship, like "commentId => ArticleId". However, when I try to add the comment to the database at the same time as I add the one-to-many record, the commentId is not yet known. Like this code:

        Comment comment = new Comment();
        comment.Date = DateTime.UtcNow;
        comment.Text = text;
        comment.UserId = userId;
        db.Comments.InsertOnSubmit(comment);
        comment.Articles.Add(new CommentsForArticle() {
            ArticleId = articleId,
            CommentId = comment.CommentId
        });

    The CommentId will be 0 before I submit. Is there any way around having to submit in between, or do I simply have to cut out the part where I have a one-to-many relationship and just use a comment table with a column like "ArticleId"? What is best from a performance perspective? I understand the underlying issue; I just want to know which solution works best.
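    For comparison, most ORMs avoid the intermediate submit by letting you associate the mapped objects themselves rather than copying raw key values; the generated key is then resolved at flush time. A minimal sketch of that pattern in SQLAlchemy (Python), with all model names hypothetical:

        from sqlalchemy import Column, ForeignKey, Integer, Text, create_engine
        from sqlalchemy.orm import Session, declarative_base, relationship

        Base = declarative_base()

        class Comment(Base):
            __tablename__ = "comment"
            comment_id = Column(Integer, primary_key=True)  # autoincrement key
            text = Column(Text)
            links = relationship("CommentsForArticle", back_populates="comment")

        class CommentsForArticle(Base):
            __tablename__ = "comments_for_article"
            article_id = Column(Integer, primary_key=True)
            comment_id = Column(Integer, ForeignKey("comment.comment_id"),
                                primary_key=True)
            comment = relationship("Comment", back_populates="links")

        engine = create_engine("sqlite://")
        Base.metadata.create_all(engine)

        with Session(engine) as session:
            comment = Comment(text="nice article")
            # associate the object, not the id: comment_id is still None here
            comment.links.append(CommentsForArticle(article_id=42))
            session.add(comment)
            session.commit()  # one submit: the ORM fills in comment_id for the link

    LINQ to SQL's association properties follow the same idea: set the object reference and let SubmitChanges fix up the foreign key, rather than copying CommentId by hand.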

    Read the article

  • Python many-to-one mapping (creating equivalence classes)

    - by Adam Matan
    Hi, I have a project converting one database to another. One of the original database columns defines the row's category. This column should be mapped to a new category in the new database. For example, let's assume the original categories are: parrot, spam, cheese_shop, Cleese, Gilliam, Palin. Now that's a little verbose for me, and I want to have these rows categorized as sketch or actor; that is, define all the sketches and all the actors as two equivalence classes.

        >>> monty = {'parrot': 'sketch', 'spam': 'sketch', 'cheese_shop': 'sketch',
        ...          'Cleese': 'actor', 'Gilliam': 'actor', 'Palin': 'actor'}
        >>> monty
        {'Gilliam': 'actor', 'Cleese': 'actor', 'parrot': 'sketch', 'spam': 'sketch', 'Palin': 'actor', 'cheese_shop': 'sketch'}

    That's quite awkward; I would prefer having something like:

        monty = {('parrot', 'spam', 'cheese_shop'): 'sketch',
                 ('Cleese', 'Gilliam', 'Palin'): 'actor'}

    But this, of course, sets the entire tuple as a key:

        >>> monty['parrot']
        Traceback (most recent call last):
          File "<pyshell#29>", line 1, in <module>
            monty['parrot']
        KeyError: 'parrot'

    Any ideas how to create an elegant many-to-one dictionary in Python? Thanks, Adam
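    One approach, sketched here rather than prescribed, is to keep the readable tuple-keyed table and expand it into a flat lookup dict with a comprehension:

        monty = {('parrot', 'spam', 'cheese_shop'): 'sketch',
                 ('Cleese', 'Gilliam', 'Palin'): 'actor'}

        # expand each (members -> class) row into member -> class pairs
        flat = {member: cls
                for members, cls in monty.items()
                for member in members}

        assert flat['parrot'] == 'sketch'
        assert flat['Palin'] == 'actor'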

    Read the article

  • iPhone: Best Method for Passing Data to and from a Server

    - by SAPNA
    I am developing an iPhone application that downloads data from a website. The website database is implemented in SQL and the site itself uses the classic ASP interface. I am unsure which method would be best for transferring data to and from the server. SOAP requires XML processing, and I'm not sure how that affects performance or whether JSON would be the better choice. What would be the best method in general for data transfer given the server configuration we currently have? I'm very new to this field and a bit confused. Any help would be appreciated.

    Read the article

  • CEIL is one too high for exact integer divisions

    - by Synetech
    This morning I lost a bunch of files, but because the volume they were on was both internally and externally defragmented, all of the information necessary for a 100% recovery is available; I just need to fill in the FAT where required. I wrote a program to do this and tested it on a copy of the FAT that I dumped to a file, and it works perfectly, except that for a few of the files (17 out of 526), the FAT chain is one single cluster too long, and thus cross-linked with the next file. Fortunately I know exactly what the problem is. I used ceil in my EOF calculation because even a single byte over will require a whole extra cluster:

        // Cluster  is the starting cluster of the file
        // Size     is the size (in bytes) of the file
        // BPC      is the number of bytes per cluster
        // NumClust is the number of clusters in the file
        // EOF      is the last cluster of the file's FAT chain
        DWORD NumClust = ceil( (float)(Size / BPC) );
        DWORD EOF = Cluster + NumClust;

    This algorithm works fine for everything except files whose size happens to be an exact multiple of the cluster size, in which case they end up one cluster too long. I thought about it for a while but am at a loss for a way to do this. It seems like it should be simple, but somehow it is surprisingly tricky. What formula would work for files of any size?
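    For reference, one way to get an exact count with no floating point at all is integer ceiling division, and if EOF is meant to be the last cluster rather than one past it, the sum also needs a minus one. A minimal sketch in Python (names mirror the question and are otherwise hypothetical):

        def last_cluster(cluster, size, bpc):
            # ceil(size / bpc) for positive integers, no float involved
            num_clust = (size + bpc - 1) // bpc
            # the last cluster is start + count - 1, not start + count
            return cluster + num_clust - 1

        # 4096-byte clusters: an 8192-byte file ends exactly two clusters in
        assert last_cluster(100, 8192, 4096) == 101
        # one extra byte rolls over into a third cluster
        assert last_cluster(100, 8193, 4096) == 102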

    Read the article

  • JavaScript Data Binding Frameworks

    - by dwahlin
    Data binding is where it's at nowadays when it comes to building client-centric Web applications. Developers experienced with desktop frameworks like WPF or web frameworks like ASP.NET, Silverlight, or others are used to being able to take model objects containing data and bind them to UI controls quickly and easily. When moving to client-side Web development, the data binding story hasn't been great, since neither HTML nor JavaScript natively supports data binding. This means that you have to write code to place data in a control and write code to extract it. Although it's certainly feasible to do it from scratch (many of us have done it this way for years), it's definitely tedious and not exactly the best solution when it comes to maintenance and re-use.

    Over the last few years several different script libraries have been released to simplify the process of binding data to HTML controls. In fact, the subject of data binding is becoming so popular that it seems like a new script library is being released nearly every week. Many of the libraries provide MVC/MVVM pattern support in client-side JavaScript apps, and some even integrate directly with server frameworks like Node.js. Here's a quick list of a few of the available libraries that support data binding (if you like any others, please add a comment and I'll try to keep the list updated):

    - AngularJS: MVC framework for data binding (although it closely follows the MVVM pattern).
    - Backbone.js: MVC framework with support for models, key/value binding, custom events, and more.
    - Derby: Provides a real-time environment that runs in the browser and in Node.js. The library supports data binding and templates.
    - Ember: Provides support for templates that automatically update as data changes.
    - JsViews: Data binding framework that provides "interactive data-driven views built on top of JsRender templates".
    - jQXB Expression Binder: Lightweight jQuery plugin that supports bi-directional data binding.
    - KnockoutJS: MVVM framework with robust support for data binding. For an excellent look at using KnockoutJS, check out John Papa's course on Pluralsight.
    - Meteor: End-to-end framework that uses Node.js on the server and provides support for data binding on the client.
    - Simpli5: JavaScript framework that provides support for two-way data binding.
    - WinRT with HTML5/JavaScript: If you're building Windows 8 applications using HTML5 and JavaScript, there's built-in support for data binding in the WinJS library.

    I won't have time to write about each of these frameworks, but in the next post I'm going to talk about my (current) favorite when it comes to client-side JavaScript data binding libraries, which is AngularJS. AngularJS provides an extremely clean way, in my opinion, to extend HTML syntax to support data binding while keeping model objects (the objects that hold the data) free from custom framework method calls or other weirdness. While I'm writing up the next post, feel free to visit the AngularJS developer guide if you'd like additional details about the API and want to get started using it.

    Read the article

  • Protect Data and Save Money? Learn How Best-in-Class Organizations do Both

    - by roxana.bradescu
    Databases contain nearly two-thirds of the sensitive information that must be protected as part of any organization's overall approach to security, risk management, and compliance. Solutions for protecting data housed in databases vary from encrypting data at the application level to defense-in-depth protection of the database itself. So is there a difference? Absolutely! According to new research from the Aberdeen Group, Best-in-Class organizations experience fewer data breaches and audit deficiencies, at lower cost, by deploying database security solutions. And the results are dramatic: Aberdeen found that organizations encrypting data within their databases achieved 30% fewer data breaches and 15% greater audit efficiency, with 34% less total cost, when compared to organizations encrypting data within applications. Join us for a live webcast with Derek Brink, Vice President and Research Fellow at the Aberdeen Group, next week to learn how your organization can become Best-in-Class.

    Read the article

  • Bad Data is Really the Monster

    - by Dain C. Hansen
    "Bad Data Is Really the Monster" is an article written by Bikram Sinha, from whom I borrowed the title and the inspiration for this blog. Sinha writes: "Bad or missing data makes application systems fail when they process order-level data. One of the key items in the supply-chain industry is the product (aka SKU). Therefore, it becomes the most important data element to tie up multiple merchandising processes including purchase order allocation, stock movement, shipping notifications, and inventory details... Bad data can cause huge operational failures and cost millions of dollars in terms of time, resources, and money to clean up and validate data across multiple participating systems."

    Yes, bad data really is the monster, so what do we do about it? Close our eyes and hope it stays in the closet? We've tackled this problem for some years now at Oracle, and our latest introduction of Oracle Enterprise Data Quality, along with our integrated Oracle Master Data Management products, provides a complete, best-in-class answer to the bad data monster. What's unique about it? Oracle Enterprise Data Quality combines powerful data profiling, cleansing, matching, and monitoring capabilities while offering unparalleled ease of use. What makes it unique is that it has dedicated capabilities to address the distinct challenges of both customer and product data quality (different monsters have different needs, of course!). The ability to profile data is just as important, to identify and measure poor-quality data and identify new rules and requirements. Included are semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. Finally, all of the data quality components are integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM.

    Want to learn more? On Tuesday, Nov 15th, I invite you to listen to our webcast, Reduce ERP Consolidation Risks with Oracle Master Data Management. I'll be joined by our partner iGate Patni, and we will be talking about one specific way to deal with the bad data monster, specifically around ERP consolidation. Look forward to seeing you there!

    Read the article

  • My files disappeared from the UbuntuOne synced folder

    - by Junji
    I set up an Ubuntu One account on PC1 (Ubuntu 10.10) and the same account on PC2 (Ubuntu 10.04). I did the following:

    - Created a file named maverick.txt in PC1's ~/Ubuntu One/log
    - Created a file named venus.txt in PC2's ~/Ubuntu One/log

    Both files appeared on one.ubuntu.com. A few hours later, those two files had disappeared from PC1's Ubuntu One/log, PC2's Ubuntu One/log, and one.ubuntu.com. So my files are gone forever. Why did this happen? Is there any way to recover those files?

    Read the article

  • JS closures - Passing a function to a child, how should the shared object be accessed

    - by slicedtoad
    I have a design and am wondering what the appropriate way to access variables is. I'll demonstrate with this example, since I can't seem to describe it better than the title.

    - Term is an object representing a bunch of time data (a repeating duration of time defined by a bunch of attributes).
    - Term has some print functionality but does not implement the print functions itself; rather, they are passed in as anonymous functions by the parent. This would be similar to how shaders can be passed to a renderer rather than defined by the renderer.
    - A container (let's call it Box) has a Schedule object that can understand and use Term objects. Box creates Term objects and passes them to Schedule as required. Box also defines the print functions stored in Term.
    - A print function usually takes an argument and uses it to return a string based on that argument and Term's internal data. Sometimes the print function could also use data stored in Schedule, though. I'm calling this data shared.

    So, the question is: what is the best way to access this shared data? I have a lot of options, since JS has closures, and I'm not familiar enough to know if I should be using them or avoiding them in this case. Options:

    1. Create a local "reference" (term used lightly) to the shared data (the data is not a primitive) when defining the print function, by accessing the shared data through Schedule from Box. Example:

        var schedule = function () {
            var sched = Schedule();
            var t1 = Term(function (x) { // Term.print()
                return (x + sched.data).format();
            });
        };

    2. Bind it to Term explicitly (pass it in Term's constructor or something, or bind it in Schedule after Box passes it), and then access it as an attribute of Term.

    3. Pass it in at the same time x is passed to the print function (from Schedule). This is the most familiar way for me, but it doesn't feel right given JS's closure ability.

    4. Do something weird, like bind some context and arguments to print.

    I'm hoping the correct answer isn't purely subjective. If it is, then I guess the answer is just "do whatever works". But I feel like there are some significant differences between the approaches that could have a large impact when stretched beyond my small example.
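    For what it's worth, option 1 is just a closure over the shared object, and because the closure captures a reference rather than a copy, later mutations by the owner remain visible inside the function. A minimal sketch of that shape in Python (names hypothetical):

        def make_box():
            shared = {"offset": 5}  # stand-in for Schedule's shared data

            def term_print(x):
                # the closure holds a live reference to `shared`, so
                # mutations made by the owner are visible on later calls
                return str(x + shared["offset"])

            return term_print, shared

        term_print, shared = make_box()
        assert term_print(1) == "6"
        shared["offset"] = 10           # the owner mutates the shared object
        assert term_print(1) == "11"    # the closure sees the new value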

    Read the article

  • Where can I locate business data to use in my application?

    - by Aaron McIver
    This question talks about any and all free public raw data, which appeared to have valuable pieces, but nothing that really provides what I am looking for. Instead of using a socially defined listing of businesses (Foursquare), I would like a business listing data set of registered businesses and associated addresses that could then be searched by location (coordinates). The critical need is that the data set should be filterable on varying criteria (give me all restaurants, coffee shops, etc.). If the data is free, that is great, but anywhere that sells this type of data would also suffice. Infochimps looked like a possibility, but perhaps something a bit more extensive exists. Where can I find a free or for-fee data set of registered businesses that is filterable by type of business and location?

    Read the article

  • 2 VB Scripts one to remove Default Gateway and one to add a Default Gateway

    - by Tom
    Hello everyone, I have a client with a bunch of children using about 30 machines on a regular basis. All machines that the children use are set with static IP addresses. On the machines that the kids use, I would like to be able to run a script that removes the default gateway so they can't get to the Internet. Then I need another that adds the default gateway back, so Windows and software updates can be run. Both scripts need to use the domain admin account for permissions. Any help would be greatly appreciated.
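    Not VBScript, but as a hedged sketch of the underlying commands, here are the same two operations driven from Python via netsh (the interface name and addresses are hypothetical, and both calls must run with admin rights, e.g. under the domain admin account):

        import subprocess

        IFACE = "name=Local Area Connection"  # hypothetical interface name

        def remove_gateway():
            # strip every default gateway from the static IP configuration
            subprocess.run(["netsh", "interface", "ip", "delete", "address",
                            IFACE, "gateway=all"], check=True)

        def restore_gateway():
            # reassert the full static config: IP, mask, gateway, metric
            subprocess.run(["netsh", "interface", "ip", "set", "address",
                            IFACE, "static", "192.168.1.50", "255.255.255.0",
                            "192.168.1.1", "1"], check=True)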

    Read the article

  • Passing object from PHP to Mysql Stored procedure

    - by user268982
    Hi All,

    Scenario: I have to call a MySQL stored procedure from PHP and do some operations (around 15 commands) on the database.

    Problem: I have to call the stored procedure with 36 parameters. That is a lot of parameters; I don't think it is a good idea to pass that many individually, and I have even heard that passing individual parameters increases network traffic.

    Looking for: I created a data object on the PHP side. Is there any way I can create a similar kind of object in MySQL, pass this object as a parameter, and extract the data from the object in the MySQL stored procedure?

    Thanks for your help. Regards, Kiran
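    One common workaround, sketched below rather than prescribed, is to serialize the object into a single string parameter (e.g. JSON) and let the procedure unpack it server-side; MySQL 5.7+ can do this with JSON_EXTRACT, while older servers would need manual string parsing inside the procedure. The pattern is the same from PHP (json_encode plus a single IN parameter); shown here from Python with mysql-connector, all names hypothetical:

        import json
        import mysql.connector  # pip install mysql-connector-python

        payload = {"customer_id": 7, "status": "active"}  # ...all 36 fields

        conn = mysql.connector.connect(host="localhost", user="app",
                                       password="secret", database="appdb")
        cur = conn.cursor()
        # one string parameter instead of 36 scalars; inside the procedure:
        #   SET v_customer_id = JSON_EXTRACT(p_doc, '$.customer_id');
        cur.callproc("process_order", (json.dumps(payload),))
        conn.commit()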

    Read the article

  • Connect three computers (including one laptop) to one monitor

    - by Jesse Beder
    I have the following hardware:

    - 2 desktop PCs, running Windows XP and Ubuntu
    - a MacBook Pro
    - an LCD monitor, a wired keyboard, and a wired mouse

    Currently, I'm using an oldish IOGear KVM switch to connect the two PCs to the input/output (and it works very well). I'd like a setup that includes the laptop as well, ideally maintaining as much portability as possible (meaning I'd like to be able to sit down, easily plug in my laptop, work on all computers, then easily pick up and leave with the laptop; is "docking station" the right word here?). What hardware do I need to do this?

    Read the article

  • Jquery Toggle & passing data

    - by Ross
    I have created a toggle button with jQuery, but I need to pass my $id so it can execute the MySQL in toggle_visibility.php. How do I pass my variable from my anchor tag?

        $("a.toggleVisibility").toggle(
            function () {
                $(this).html("No");
            },
            function () {
                $.ajax({
                    type: "POST",
                    url: "toggle_visibility.php",
                    data: "id",
                    success: function (msg) {
                        alert("Data Saved: " + msg);
                    }
                });
                $(this).html("Yes");
            }
        );

    Read the article

  • Forward one RDP port on one machine to multiple external users at the same time

    - by matnagel
    We have a Windows Server 2003 machine with the RDP service listening on the standard port 3389. For security reasons this port is not opened on the router, but we have the freeSSHd service running; a remote admin can log in via SSH, and the RDP port is forwarded to external port 33001 for the first external user. This works great. Now we have another admin who wants to work remotely (he uses a different Windows account, but needs to work on the same machine). So this is basically an SSH port forwarding question. Will the other user be able to log in at the same time using the same port 33001? Please keep in mind that there will be a second tunnel, and this second tunnel will also use the local port 3389 on the Windows server.

    Read the article

  • One network, two macbooks, one is fast and the other is slow

    - by Brendan
    I really need help for my friend. I know next to nothing about computers. My roommate and I both have MacBook Pros from the same year running OS X, and we both connect wirelessly to the same Xfinity Wi-Fi. While mine runs perfectly fine, my roommate complains that his works very slowly and times out every few seconds. I can't seem to figure out why this is. He is trying to get me to switch Internet providers because he is convinced that it is their problem, but this cannot possibly be the issue, since it works great on mine. He has an Xbox hooked up to the Wi-Fi that he says also works poorly. I really can't see switching providers given that I am experiencing absolutely zero problems. How can I help my friend?

    Read the article

  • One SSL certificate (one domain) for two servers ?

    - by marioosh.net
    I have two servers. On SERVER1 I have configured an SSL certificate (on Apache) for the domain https://somedomain.com. I need to connect an app that exists on the remote server SERVER2 to my working domain; the working app is, for example, https://remoteapps.com/remoteApp. I used mod_proxy to do it, but the SSL certificate doesn't work:

        ProxyPass /remoteApp https://remoteapps.com/remoteApp
        ProxyPassReverse /remoteApp https://myapp.com/remoteApp

    How do I make the certificate work for https://somedomain.com/remoteApp too?

    Read the article

  • Always use one slow connection in preference of a "faster" one

    - by billc.cn
    In Windows, there's this automatic metric feature where the metric is selected according to the declared speed of the link. I now have a gigabit LAN routed to a 2 Mb DSL service, and an HSDPA mobile broadband connection. The former is always chosen for Internet packets even though the latter is actually faster. I tried setting the mobile broadband interface's metric to 1 and raising its priority in the advanced settings of the adapter, but this does not seem to affect the metric of the default route. The default route to the Ethernet interface always has a lower metric than the mobile broadband interface. Am I missing something here?

    Read the article

  • Use one home directory for more than one operating system

    - by Just Jake
    I want to configure the same user account across multiple operating systems. Right now, I'm set up for general use in Mac OS 10.6.6 "Snow Leopard," and I have about 200 GB of files in my home directory (/Users/justjake/). I want to use this user (and home directory) for other operating systems on other partitions. For example, I have Mac OS 10.5 installed on a 12 GB partition. How can I share permissions and user accounts across my two operating systems? Would moving my /Users directory from 10.6 to its own partition and then mounting it using /etc/fstab solve my issue?

    Read the article

  • asp.net-mvc feature - one css file per (view / master-page / user-control)

    - by Mendy
    I'm trying to implement the following feature: I want just one CSS file to be attached for any page that I render. For example, take the StackOverflow site. For the questions page, we would have a questions.css file:

        so.com/questions           ---> questions.css
        so.com/question/1234/title ---> question.css
        so.com/about               ---> about.css
        so.com/faq                 ---> faq.css

    Now, I know that these CSS files share code in common, because the pages may have the same MasterPage(s) / UserControls. So the solution needs to take MasterPages, views and user controls into account as well. What would be the right solution for this kind of problem? I'm thinking about one solution, which I'll post as an answer, but maybe you have a better one?

    Read the article

  • Burn more than one ISO to one dvd?

    - by Doug
    I just turned 11 CDs with a total of 1 GB of stuff into ISOs. Is it possible for me to just burn all the ISO videos onto one DVD? Is there an alternative way for me to do it? Edit: it needs to be playable on any DVD player; it's videos for my grandparents. Update: I read that DVD Shrink should work (re-authoring software), but I didn't try it, because when I imported the VIDEO_TS folders into the software, my video wasn't played widescreen, and I don't know how to fix it.

    Read the article

  • unique constraint (w/o Trigger) on "one-to-many" relation

    - by elgcom
    To illustrate the problem, I will use an example: a tag_bundle consists of one or more tags. A unique tag combination maps to a unique tag_bundle, and vice versa.

        tag_bundle            tag          tag_bundle_relation
        +---------------+     +--------+   +---------------+--------+
        | tag_bundle_id |     | tag_id |   | tag_bundle_id | tag_id |
        +---------------+     +--------+   +---------------+--------+
        | 1             |     | 100    |   | 1             | 100    |
        +---------------+     +--------+   +---------------+--------+
                              | 101    |   | 1             | 101    |
                              +--------+   +---------------+--------+

    There can't be another tag_bundle having the combination of tag 100 and tag 101. How can I ensure such a unique constraint when executing SQL concurrently, i.e. prevent two bundles with the same tag combination from being added at the same time? Adding a simple unique constraint on any table does not work. Is there any solution other than a trigger or an explicit lock?

    I have come up with only this simple way: turn the tag combination into a string, and make that column unique.

        tag_bundle (unique on tags)       tag          tag_bundle_relation
        +---------------+---------+       +--------+   +---------------+--------+
        | tag_bundle_id | tags    |       | tag_id |   | tag_bundle_id | tag_id |
        +---------------+---------+       +--------+   +---------------+--------+
        | 1             | 100,101 |       | 100    |   | 1             | 100    |
        +---------------+---------+       +--------+   +---------------+--------+
                                          | 101    |   | 1             | 101    |
                                          +--------+   +---------------+--------+

    but it seems not a good way :(
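    If the string approach is kept, the one thing that makes it safe is canonicalization: the same set of tags must always serialize to the same string before it reaches the unique column, so concurrent inserts of the same combination collide on the index instead of slipping past it. A minimal sketch of that step in Python (helper name hypothetical):

        def canonical_tags(tag_ids):
            # sorted and de-duplicated, so (101, 100) and (100, 101, 100)
            # both serialize to "100,101" and collide on the unique index
            return ",".join(str(t) for t in sorted(set(tag_ids)))

        assert canonical_tags([101, 100]) == "100,101"
        assert canonical_tags([100, 101, 100]) == "100,101"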

    Read the article

  • Sybase PowerDesigner Change Many (Find/Replace/Convert) Data Item's Data Types

    - by Andy
    Hello, I have a relatively large Conceptual Data Model in PowerDesigner. After generating a Physical Data Model and seeing the DBMS data types, I need to update the data types (NUMBER/TEXT) for each data item. I'd like to either do a find/replace within the Conceptual Data Model or somehow map to different data types when creating the Physical Data Model. For example, change the automatic conversion so that Text maps to NVARCHAR(20) instead of CLOB. Thanks!

    Read the article

< Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >