Search Results

Search found 1115 results on 45 pages for 'relationships'.

Page 13 of 45

  • Kohana 3 ORM limitations

    - by yoda
    Hi, what are the limitations of Kohana 3 ORM regarding table relationships? I'm trying to modify the built-in Auth module so that it also accepts groups of users, and I now have the following tables: groups, groups_users, roles, roles_groups, users, user_tokens. By default, this module is set up to work without groups, linking users and roles through a third table named roles_users, but I need to add groups to it. As you can see from the names, I'm linking groups to users and roles to groups, but I'm failing to build the ORM code for it. So that's pretty much the question here: is the ORM limited to two relationships, or can it handle three in this case? Cheers!
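    Setting the Kohana specifics aside, the relational logic described here is just two chained many-to-many hops across the junction tables. The sketch below shows that join logic with Python's sqlite3 purely for illustration; the table names come from the question, while the column names and sample data are assumptions.

```python
import sqlite3

# Schema from the question: users <-> groups <-> roles, linked through the
# groups_users and roles_groups junction tables (column names assumed).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, username TEXT);
CREATE TABLE groups (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE roles  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE groups_users (group_id INTEGER, user_id INTEGER);
CREATE TABLE roles_groups (role_id INTEGER, group_id INTEGER);
""")
con.execute("INSERT INTO users VALUES (1, 'yoda')")
con.execute("INSERT INTO groups VALUES (1, 'admins')")
con.executemany("INSERT INTO roles VALUES (?, ?)", [(1, 'login'), (2, 'admin')])
con.execute("INSERT INTO groups_users VALUES (1, 1)")
con.executemany("INSERT INTO roles_groups VALUES (?, ?)", [(1, 1), (2, 1)])

# A user's roles are reached by joining across both junction tables.
rows = con.execute("""
    SELECT DISTINCT r.name
    FROM roles r
    JOIN roles_groups rg ON rg.role_id = r.id
    JOIN groups_users gu ON gu.group_id = rg.group_id
    WHERE gu.user_id = ?
""", (1,)).fetchall()
print([name for (name,) in rows])  # ['login', 'admin'] (order not guaranteed)
```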

    Read the article

  • jQuery Cycle Plugin - Content not cycling

    - by fmz
    I am setting up a page with jQuery's Cycle plugin and have four divs set to fade. I have the code in place, the images set, but it doesn't cycle properly. Firefox says there is a problem with the following code: <script type="text/javascript"> $(document).ready(function() { $('.slideshow').cycle({ fx: 'fade' }); }); </script> Here is the html: <div class="slideshow"> <div id="mainImg-1" class="slide"> <div class="quote"> <h2>Building Big Relationships with Small Business.</h2> <p>&ldquo;This is quote Number One.<br /> They are there when I need them the most.&rdquo;</p> <p><span class="author">Jane Doe &ndash; Charlotte Flower Shop</span></p> <div class="help"><a href="cb_services.html">Let Us Help You</a></div> </div> </div> <div id="mainImg-2" class="slide"> <div class="quote"> <h2>Building Big Relationships with Small Business.</h2> <p>&ldquo;This is quote Number Two.<br /> They are there when I need them the most.&rdquo;</p> <p><span class="author">Jane Doe &ndash; Charlotte Flower Shop</span></p> <div class="help"><a href="cb_services.html">Let Us Help You</a></div> </div> </div> <div id="mainImg-3" class="slide"> <div class="quote"> <h2>Building Big Relationships with Small Business.</h2> <p>&ldquo;This is quote Number three.<br /> They are there when I need them the most.&rdquo;</p> <p><span class="author">Jane Doe &ndash; Charlotte Flower Shop</span></p> <div class="help"><a href="cb_services.html">Let Us Help You</a></div> </div> </div> <div id="mainImg-4" class="slide"> <div class="quote"> <h2>Building Big Relationships with Small Business.</h2> <p>&ldquo;This is quote Number Fout.<br /> They are there when I need them the most.&rdquo;</p> <p><span class="author">Jane Doe &ndash; Charlotte Flower Shop</span></p> <div class="help"><a href="cb_services.html">Let Us Help You</a></div> </div> </div> Here is the CSS: .slideshow { width: 946px; height: 283px; border: 1px solid #c29c5d; margin: 8px; overflow: hidden; z-index: 1; } #mainImg-1 { width: 946px; height: 283px; background: url(../_images/main.jpg) no-repeat 9px 9px; } #mainImg-2 { width: 946px; height: 283px; background: url(../_images/main.jpg) no-repeat 9px 9px; } #mainImg-3 { width: 946px; height: 283px; background: url(../_images/main.jpg) no-repeat 9px 9px; } #mainImg-4 { width: 946px; height: 283px; background: url(../_images/main.jpg) no-repeat 9px 9px; } #mainImg-1 .quote, #mainImg-2 .quote, #mainImg-3 .quote, #mainImg-4 .quote { width: 608px; height: 168px; float: right; margin: 80px 11px 0 0; background: url(../_images/bg_quoteBox.png) repeat-x; } Before you go off and say, "hey, those images are all the same". You are right, the images are all the same right now, but the text should be rotating as well and there is a slight difference there. In addition, the fade should still show up. Anyway, you can see the dev page here: http://173.201.163.213/projectpath/first_trust/index.html I would appreciate some help to get this cycling through as it should. Thanks!

    Read the article

  • Implementing tagging in JDO

    - by Julie Paltrow
    I am implementing a tagging system for a website that uses JDO. I would like to use this method. However, I am new to relationships in JDO. To keep it simple, what I have looks like this: @PersistenceCapable class Post { @Persistent String title; @Persistent String body; } @PersistenceCapable class Tag { @Persistent String name; } What kind of JDO relationships do I need, and how do I implement them? I want to be able to list all Tags that belong to a Post, and also be able to list all Posts that have a given Tag. So in the end I would like to have something like this: Table: Post Columns: PostID, Title, Body Table: Tag Columns: TagID, name Table: PostTag Columns: PostID, TagID
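    For reference, the three-table layout sketched above and the two lookups it asks for boil down to simple joins over the PostTag link table. A language-agnostic sketch using Python's sqlite3 (not JDO code; the sample data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Post    (PostID INTEGER PRIMARY KEY, Title TEXT, Body TEXT);
CREATE TABLE Tag     (TagID  INTEGER PRIMARY KEY, Name  TEXT);
CREATE TABLE PostTag (PostID INTEGER, TagID INTEGER);  -- many-to-many link table
""")
con.execute("INSERT INTO Post VALUES (1, 'Hello JDO', '...')")
con.executemany("INSERT INTO Tag VALUES (?, ?)", [(1, 'jdo'), (2, 'tagging')])
con.executemany("INSERT INTO PostTag VALUES (?, ?)", [(1, 1), (1, 2)])

# All Tags that belong to a Post.
tags = con.execute("""
    SELECT t.Name FROM Tag t
    JOIN PostTag pt ON pt.TagID = t.TagID
    WHERE pt.PostID = ?""", (1,)).fetchall()

# All Posts that have a given Tag.
posts = con.execute("""
    SELECT p.Title FROM Post p
    JOIN PostTag pt ON pt.PostID = p.PostID
    WHERE pt.TagID = ?""", (1,)).fetchall()

print(tags, posts)  # [('jdo',), ('tagging',)] [('Hello JDO',)] (row order not guaranteed)
```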

    Read the article

  • How do I update with a newly-created detached entity using NHibernate?

    - by Daniel T.
    Explanation: Let's say I have an object graph that's nested several levels deep and the entities have bi-directional relationships with each other. A -> B -> C -> D -> E Or in other words, A has a collection of B and B has a reference back to A, and B has a collection of C and C has a reference back to B, etc... Now let's say I want to edit some data for an instance of C. In WinForms, I would use something like this: var instanceOfC; using (var session = SessionFactory.OpenSession()) { // get the instance of C with Id = 3 instanceOfC = session.Linq<C>().Where(x => x.Id == 3); } SendToUIAndLetUserUpdateData(instanceOfC); using (var session = SessionFactory.OpenSession()) { // re-attach the detached entity and update it session.Update(instanceOfC); } In plain English, we grab a persistent instance out of the database, detach it, give it to the UI layer for editing, then re-attach it and save it back to the database. Problem: This works fine for WinForms applications because we're using the same entity all throughout, the only difference being that it goes from persistent to detached to persistent again. The problem occurs when I'm using a web service and a browser, sending over JSON data. In this case, the data that comes back is no longer a detached entity, but rather a transient one that just happens to have the same ID as the persistent one. If I use this entity to update, it will wipe out the relationships to B and D unless I send the entire object graph over to the UI and get it back in one piece. Question: My question is, how do I serialize detached entities over the web, receive them back, and save them, while preserving any relationships that I didn't explicitly change? I know about ISession.SaveOrUpdateCopy and ISession.Merge() (they seem to do the same thing?), but these will still wipe out the relationships if I don't explicitly set them. I could copy the fields from the transient entity to the persistent entity one by one, but this doesn't work too well when it comes to relationships, and I'd have to handle version comparisons manually.
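    One generic way to implement the "copy the fields one by one" option mentioned above, while leaving relationship properties untouched, is a whitelist-based merge. A framework-free sketch in Python (the field and class names are illustrative, not NHibernate API):

```python
# Copy only explicitly whitelisted scalar fields from the incoming (transient)
# object onto the persistent one, so references/collections (B, D) are preserved.
SCALAR_FIELDS = {"name", "description", "price"}  # illustrative field names

def merge_scalars(persistent, incoming, fields=SCALAR_FIELDS):
    for field in fields:
        if hasattr(incoming, field):
            setattr(persistent, field, getattr(incoming, field))
    return persistent

class C:  # stand-in for the persistent entity
    def __init__(self, name, description, price, parent_b=None):
        self.name, self.description, self.price = name, description, price
        self.parent_b = parent_b  # relationship we want to preserve

persistent = C("old", "old desc", 10, parent_b="B#7")
incoming = C("new", "new desc", 12)   # deserialized from the browser's JSON
merge_scalars(persistent, incoming)
assert persistent.parent_b == "B#7"   # relationship untouched
```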

    Read the article

  • Can't use ActiveRecord find method with associations

    - by fenec
    Here are my models: #game class Game < ActiveRecord::Base #relationships with the teams for a given game belongs_to :team_1,:class_name=>"Team",:foreign_key=>"team_1_id" belongs_to :team_2,:class_name=>"Team",:foreign_key=>"team_2_id" def self.find_games(name) items = Game.find(:all,:include=>[:team_1,:team_2] , :conditions => ["team_1.name = ?", name] ) end end #teams class Team < ActiveRecord::Base #relationships with games has_many :games, :foreign_key =>'team_1' has_many :games, :foreign_key =>'team_2' end When I execute Game.find_games("real") I get: ActiveRecord::StatementInvalid: SQLite3::SQLException: no such column: team_1.name How can I fix this problem? I thought that using :include would fix it.

    Read the article

  • To join or not to join - database structure

    - by Industrial
    Hi! We're drawing up the database structure with the help of MySQL Workbench for a new app, and the number of joins required to make a listing of the data is increasing drastically as the number of many-to-many relationships increases. The questions: Is it really that bad to merge tables where needed and thereby reduce joins? Is there a better way than pivot tables to take care of many-to-many relationships? We also discussed storing all data in serialized text columns and having the application do the sorting instead of the database, but this seems like a very bad idea, even though the database will be heavily cached. What do you think? Thanks!

    Read the article

  • TSQL, select values from large many-to-many relationship

    - by eugeneK
    I have two tables, Publishers and Campaigns; both have similar many-to-many relationships with Countries, Regions, Languages and Categories. More info: Publisher2Categories has publisherID and categoryID, which are foreign keys to publisherID in Publishers and categoryID in Categories, which are identity columns. On the other side I have Campaigns2Categories with campaignID and categoryID columns, which are foreign keys to campaignID in Campaigns and categoryID in Categories, which again are identities. The same goes for the Regions, Languages and Countries relationships. I pass a certain publisherID to the query and want to get the campaignIDs of Campaigns that share at least one value with that Publisher from regions, countries, languages or categories. Thanks
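    The "share at least one value" condition can be expressed as an EXISTS over each pair of link tables. A sketch with Python's sqlite3 showing just the Categories branch (the Regions, Languages and Countries branches would be additional OR'd EXISTS clauses); table and column names follow the question, and the sample data is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Publisher2Categories (publisherID INTEGER, categoryID INTEGER);
CREATE TABLE Campaigns2Categories (campaignID INTEGER, categoryID INTEGER);
""")
con.executemany("INSERT INTO Publisher2Categories VALUES (?, ?)", [(7, 1), (7, 2)])
con.executemany("INSERT INTO Campaigns2Categories VALUES (?, ?)", [(100, 2), (200, 9)])

# Campaigns that share at least one category with publisher 7.
rows = con.execute("""
    SELECT DISTINCT c.campaignID
    FROM Campaigns2Categories c
    WHERE EXISTS (
        SELECT 1 FROM Publisher2Categories p
        WHERE p.publisherID = ? AND p.categoryID = c.categoryID
    )""", (7,)).fetchall()
print(rows)  # [(100,)]
```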

    Read the article

  • Doctrine: Relating a model to itself using a link table, like "This event is related to the following other events"

    - by mattalexx
    So in English, the relationship would sound like "This event is related to the following other events". My first instinct is to create an EventEvent model, with a first_event_id field and a second_event_id field. Then I would define the following two relationships in the Event model: $this->hasMany('Event as FirstRelatedEvents', array('local' => 'first_event_id', 'foreign' => 'second_event_id', 'refClass' => 'EventEvent')); $this->hasMany('Event as SecondRelatedEvents', array('local' => 'second_event_id', 'foreign' => 'first_event_id', 'refClass' => 'EventEvent')); But I would rather not have to use two relationships on the Event model. Is there a better way to do this?
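    If the goal is simply "all events related to event X, regardless of which side of the link table X sits on", the symmetric lookup can be done with a single UNION over the one link table. A sketch with Python's sqlite3 (column names follow the proposed EventEvent model; this illustrates the SQL, not the Doctrine mapping):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE event_event (first_event_id INTEGER, second_event_id INTEGER)")
con.executemany("INSERT INTO event_event VALUES (?, ?)", [(1, 2), (3, 1), (2, 4)])

def related_event_ids(con, event_id):
    # Check both columns of the link table so one query covers both "directions".
    rows = con.execute("""
        SELECT second_event_id FROM event_event WHERE first_event_id = ?
        UNION
        SELECT first_event_id  FROM event_event WHERE second_event_id = ?
    """, (event_id, event_id)).fetchall()
    return [r[0] for r in rows]

print(related_event_ids(con, 1))  # 2 and 3, in no guaranteed order
```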

    Read the article

  • Ria Services loading foreign keys with Linq-to-SQL

    - by Stephan
    I have a database that consists of five tables: Course, Category, Location, CourseCategories, and CourseLocations. The last two tables just contain the two foreign keys. A Course has many-to-many relationships with both Category and Location. I am trying to load the data into a Silverlight app using RIA Services. My DB model is LINQ to SQL. I have tried adding the [Include] attribute to the metadata classes, and I have added the DataLoadOptions so it should load all the tables when you ask for a Course. However, on the client side I am never getting back any entries in the CourseCategories and CourseLocations properties. What else needs to be done to get the foreign key relationships to exist across the serialization?

    Read the article

  • What is the difference between causal models and directed graphical models?

    - by Neil G
    What is the difference between causal models and directed graphical models? or: What is the difference between causal relationships and directed probabilistic relationships? or, even better: What would you put in the interface of a DirectedProbabilisticModel class, and what in a CausalModel class? Would one inherit from the other? Collaborative solution: interface DirectedModel { map<Node, double> InferredProbabilities(map<Node, double> observed_probabilities, set<Node> nodes_of_interest) } interface CausalModel: DirectedModel { bool NodesDependent(set<Node> nodes, map<Node, double> context) map<Node, double> InferredProbabilities(map<Node, double> observed_probabilities, map<Node, double> externally_forced_probabilities, set<Node> nodes_of_interest) }
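    For readers who prefer runnable code to pseudocode, the collaborative solution above translates roughly into Python abstract base classes as follows (purely illustrative of the proposed interface split; the causal model layers interventions, i.e. externally forced probabilities, on top of plain observation):

```python
from abc import ABC, abstractmethod
from typing import Dict, Set

Node = str  # illustrative: nodes identified by name

class DirectedModel(ABC):
    @abstractmethod
    def inferred_probabilities(
        self,
        observed_probabilities: Dict[Node, float],
        nodes_of_interest: Set[Node],
    ) -> Dict[Node, float]:
        """Condition on observations and return marginals for the queried nodes."""

class CausalModel(DirectedModel):
    @abstractmethod
    def nodes_dependent(self, nodes: Set[Node], context: Dict[Node, float]) -> bool:
        """Whether the given nodes are dependent in the given context."""

    @abstractmethod
    def inferred_probabilities(  # mirrors the pseudocode: adds forced (intervened) nodes
        self,
        observed_probabilities: Dict[Node, float],
        externally_forced_probabilities: Dict[Node, float],
        nodes_of_interest: Set[Node],
    ) -> Dict[Node, float]:
        """Like the base method, but also accepts externally forced ("do") probabilities."""
```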

    Read the article

  • Correct association mapping in Entity Framework

    - by Matt Thrower
    Hi, I'm trying to change two relationships in our Entity Framework model from many-to-one to many-to-many relationships. So I tried the obvious thing: clicked on each association on the diagram, changed the appropriate end of the association accordingly and then changed the name of the navigation property to a plural to reflect the change. This led to the following build error, or rather one each for the two changes I've made: Error 3002: Problem in mapping fragments starting at line 1761:Potential runtime violation of table CustomerServices's keys (CustomerServices.Id): Columns (CustomerServices.Id) are mapped to EntitySet CompiledDatabaseCustomerService's properties (CompiledDatabaseCustomerService.CustomerService.Id) on the conceptual side but they do not form the EntitySet's key properties (CompiledDatabaseCustomerService.CompiledDatabase.Id, CompiledDatabaseCustomerService.CustomerService.Id) I'm not entirely sure why this is happening, so unsurprisingly I haven't had much luck fixing it. I've tried fiddling with the mapping details and adding referential constraints, to no avail. Can anyone point me in the right direction? Cheers, Matt

    Read the article

  • SQL Analysis Services - Dimension attributes with a "many" cardinality

    - by MonkeyBrother
    I am creating a cube with the following tables: Customer (CustomerID, Name), Customer Rep (CustomerID, RepID), Rep (RepID, Name). The important thing here is that there is a many-to-many relationship between Reps and Customers. I want to be able to ask the question "How much in sales for customers working with rep 'A'?" In the data source view I set up the relationships between both CustomerID columns and both RepID columns. I set up the Rep attribute in the dimension builder, and when I try to build the cube I get this error: Errors in the high-level relationship engine. The 'Rep' table that is required for a join cannot be reached based on the relationships in the data source view.

    Read the article

  • What to do if 2 (or more) relationship tables would have the same name?

    - by primehunter326
    So I know the convention for naming M-M relationship tables in SQL is to have something like this: for tables User and Data the relationship table would be called UserData or User_Data, or something similar (from here). What happens, then, if you need to have multiple relationships between User and Data, representing each in its own table? I have a site I'm working on where I have two primary items and multiple independent M-M relationships between them. I know I could just use a single relationship table and have a field which determines the relationship type, but I'm not sure whether this is a good solution. Assuming I don't go that route, what naming convention should I follow to work around my original problem?
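    For concreteness, the two options being weighed look roughly like this (SQLite DDL run from Python; the table and column names are made up for illustration, e.g. "Favorites" and "Ownership" as two independent relationships):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Option 1: one link table per relationship, named after the relationship itself.
con.executescript("""
CREATE TABLE User_Data_Favorites (UserID INTEGER, DataID INTEGER);
CREATE TABLE User_Data_Ownership (UserID INTEGER, DataID INTEGER);
""")

# Option 2: a single link table with a column that determines the relationship type.
con.executescript("""
CREATE TABLE User_Data (UserID INTEGER, DataID INTEGER, RelationshipType TEXT);
""")
```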

    Read the article

  • many-to-many relationship in CI (not using ORM)

    - by Ross
    I'm implementing a categories system in my CI app and trying to work out the best way of working with many-to-many relationships. I'm not using an ORM at this stage, but could use, say, Doctrine if necessary. Each entry may have multiple categories. I have three tables (simplified): Entries: entryID, entryName Categories: categoryID, categoryName Entry_Category: entryID, categoryID My CI code returns a record set like this: entryID, entryName, categoryID, categoryName but, as expected with many-to-many relationships, each "entry" is repeated for each "category". What would be the best way to "group" the categories so that when I output the results, I am left with something like: Entry Name Appears in Category: Foo, Bar rather than: Entry Name Appears in Category: Foo Entry Name Appears in Category: Bar I believe the option is to track whether the post ID matches a previous entry, and if so, store the respective category and output it as one, rather than several, but I am unsure of how to do this in CI. Thanks for any pointers (I appreciate this may be a vague/complex question without better knowledge of the system).
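    Independently of CodeIgniter, collapsing the repeated rows into one row per entry is a simple grouping pass over the result set. A sketch in Python (the same idea ports directly to a PHP foreach over the query result; the sample rows are made up):

```python
# Flat rows as returned by the join: one row per (entry, category) pair.
rows = [
    {"entryID": 1, "entryName": "Entry Name", "categoryName": "Foo"},
    {"entryID": 1, "entryName": "Entry Name", "categoryName": "Bar"},
    {"entryID": 2, "entryName": "Other Entry", "categoryName": "Foo"},
]

entries = {}  # entryID -> {"name": ..., "categories": [...]}
for row in rows:
    entry = entries.setdefault(row["entryID"], {"name": row["entryName"], "categories": []})
    entry["categories"].append(row["categoryName"])

for entry in entries.values():
    print(f'{entry["name"]} Appears in Category: {", ".join(entry["categories"])}')
# Entry Name Appears in Category: Foo, Bar
# Other Entry Appears in Category: Foo
```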

    Read the article

  • SQLAuthority News – Microsoft SQL Server Protocol Documentation Download

    - by pinaldave
    The Microsoft SQL Server protocol documentation provides detailed technical specifications for Microsoft proprietary protocols (including extensions to industry-standard or other published protocols) that are implemented and used in Microsoft SQL Server to interoperate or communicate with Microsoft products. The documentation includes a set of companion overview and reference documents that supplement the technical specifications with conceptual background, overviews of inter-protocol relationships and interactions, and technical reference information. Microsoft SQL Server Protocol Documentation Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQLAuthority News – SQL Server 2008 R2 System Views Map

    - by pinaldave
    SQL Server 2008 R2 System Views Map is released. I am very proud that my organization (Solid Quality Mentors) is part of making this possible. This map shows the key system views included in SQL Server 2008 and 2008 R2, and the relationships between them. SQL Server 2008 R2 System Views Map Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • LLBLGen Pro feature highlights: model views

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) To be able to work with large(r) models, it's key that you can view subsets of these models so you can have a better, more focused look at them; for example, because you want to display how a subset of entities relates to one another in a different way than the plain list of entities does. LLBLGen Pro offers this in the form of Model Views. Model Views are views on parts of the entity model of a project, and the subsets are displayed in a graphical way. Additionally, one can add documentation to a Model View. As Model Views display parts of the model in a graphical way, they're easier to explain to people who aren't familiar with entity models, e.g. the stakeholders you're interviewing for your project. The documentation can then be used to communicate specifics of the elements on the model view to the developers who have to write the actual code. Below I've included an example. It's a model view on a subset of the entities of AdventureWorks. It displays several entities, their relationships (both relational and inheritance relationships) and also some specifics gathered from the interview with the stakeholder. As the information is inside the actual project the developer will work with, the information doesn't have to be converted back and forth from e.g. Word documents or other intermediate formats; it's the same project. This makes sure there are fewer errors and misunderstandings. (Of course you can hide the docked documentation pane or dock it to another corner.) The Model View can contain entities which are placed in different groups. This makes it ideal to group entities together for close examination even though they're stored in different groups. The Model View is a first-class citizen of the code generator. This means you can write templates which consume Model Views and generate code accordingly. For example, you can write a template which generates a service per Model View and exposes the entities in the Model View as a single entity graph, fetched through a method. (This template isn't included in the LLBLGen Pro package, but it's easy to write it up yourself with the built-in template editor.) Viewing an entity model in different ways is key to fully understanding it, and Model Views help with that.

    Read the article

  • Visual Database Design Application

    - by tshauck
    Hi, I'm getting to the point where the applications I write need a little more structure during the planning phase, so I'd like to use some sort of visual tool to design the tables and relationships. I'm on a Mac and have tried MySQL Workbench, but I find it buggy and a bit bloated for my intended use. Is there something I could design in that has a nice interface and is primarily a tool for visual design? Thanks

    Read the article

  • Implementing User-Defined Hierarchies in SQL Server Analysis Services

    To be able to drill into multidimensional cube data at several levels, you must implement all of the hierarchies on the database dimensions. Then you'll create the attribute relationships necessary to optimize performance. Analysis Services hierarchies offer plenty of possibilities for displaying the data that your business requires. Rob Sheldon continues his series on SQL Server Analysis Services 2008.

    Read the article

  • Understanding Column Properties for a SQL Server Table

    Designing a table can be a little complicated if you don’t have the correct knowledge of data types, relationships, and even column properties. In this tip, Brady Upton goes over the column properties and provides examples.

    Read the article

  • Deterministic/Consistent Unique Masking

    - by Dinesh Rajasekharan-Oracle
    One of the key requirements while masking data in large databases or multi-database environments is to consistently mask some columns, i.e. for a given input the output should always be the same. At the same time, the masked output should not be predictable. Deterministic masking also eliminates the need to spend an enormous amount of time identifying data relationships, i.e. parent and child relationships among columns defined in the application tables. In this blog post I will explain different ways of consistently masking data across databases using Oracle Data Masking and Subsetting. Readers of this post should have minimal knowledge of Oracle Enterprise Manager 12c, Application Data Modeling, and Data Masking concepts. For more information on these concepts, please refer to the Oracle Data Masking and Subsetting documentation. Oracle Data Masking and Subsetting 12c provides four methods with which users can consistently yet irreversibly mask their inputs:
    1. Substitute
    2. SQL Expression
    3. Encrypt
    4. User Defined Function

    SUBSTITUTE
    The substitute masking format replaces the original value with a value from a pre-created database table. As the method uses a hash-based algorithm in the back end, the mappings are consistent. For example, consider DEPARTMENT_ID in the EMPLOYEES table being replaced with FAKE_DEPARTMENT_ID from FAKE_TABLE. The substitute masking transformation ensures that all occurrences of DEPARTMENT_ID, say '101', will be replaced with '502', provided the same substitution table and column is used, i.e. FAKE_TABLE.FAKE_DEPARTMENT_ID. The following screen shot shows the usage of the Substitute masking format within a masking definition: Note that the uniqueness of the masked value depends on the number of distinct values in the substitution column, i.e. if the original table contains 50000 unique values, then for the masked output to be unique and deterministic the substitution column should also contain 50000 unique values, without which only consistency is maintained but not uniqueness.

    SQL EXPRESSION
    SQL Expression replaces an existing value with the output of a specified SQL Expression. For example, while masking an EMPLOYEES table the EMAIL_ID of an employee has to be in the format EMPLOYEE's [email protected], where FIRST_NAME and LAST_NAME are the actual column names of the EMPLOYEES table; the corresponding SQL Expression will look like %FIRST_NAME%||'.'||%LAST_NAME%||'@COMPANY.COM'. The advantage of this technique is that if you are masking FIRST_NAME and LAST_NAME of the EMPLOYEES table, then the corresponding EMAIL_ID will be replaced accordingly by the masking scripts. One of the interesting aspects of SQL Expressions is that you can use sub SQL expressions, which means that you can write a nested SQL and use it as a SQL Expression to address complex masking business use cases. SQL Expression can also be used to consistently replace a value with a hashed value using Oracle's PL/SQL function ORA_HASH. The following SQL Expression will help in the previous example for replacing the DEPARTMENT_IDs with a hashed number: ORA_HASH (%DEPARTMENT_ID%, 1000) The following screen shot shows the usage of the SQL Expression masking format within the masking definition: ORA_HASH takes three arguments: 1. Expression, which can be of any data type except LONG, LOB, and User Defined Type [nested table type is allowed]. In the above example I used the original value as the expression. 2. Number of hash buckets, which can be a number between 0 and 4294967295. The default value is 4294967295.
    You can also correlate the number of hash buckets to a range of numbers. In the example above the bucket value is specified as 1000, so the end result will be a hashed number between 0 and 1000. 3. Seed, which can be any number and decides the consistency, i.e. for a given seed value the output will always be the same. The default seed is 0. In the above SQL Expression a seed is not specified, so it defaults to 0. If you have to use a non-default seed, the function will look like: ORA_HASH (%DEPARTMENT_ID%, 1000, 1234) The uniqueness depends on the input and the number of hash buckets used. However, as ORA_HASH uses a 32-bit algorithm, the birthday paradox means there is a 0.5 probability of collision after only about 2^16 unique values, and the pigeonhole principle guarantees collisions beyond 2^32 values.

    ENCRYPT
    The Encrypt masking format uses a blend of the 3DES encryption algorithm, hashing, and regular expressions to produce a deterministic and unique masked output. The format of the masked output corresponds to the specified regular expression. As this technique uses a key [string] to encrypt the data, the same string can be used to decrypt the data. The key also acts as a seed to maintain consistent outputs for a given input. The following screen shot shows the usage of the Encrypt masking format within the masking definition: Regular Expressions may look complex to first-time users, but you will soon realize that it's a simple language. There are many resources on the internet, in the Oracle documentation, the Oracle Learning Library, and My Oracle Support on writing Regular Expressions; out of all of them, the following My Oracle Support document helped me get started with Regular Expressions: Oracle SQL Support for Regular Expressions [Video] (Doc ID 1369668.1)

    USER DEFINED FUNCTION [UDF]
    A User Defined Function or UDF provides flexibility for users to code their own masking logic in PL/SQL, which can be called from a masking definition. The standard format of a UDF in Oracle Data Masking and Subsetting is: Function udf_func (rowid varchar2, column_name varchar2, original_value varchar2) returns varchar2; where
    • rowid is the row identifier of the column that needs to be masked
    • column_name is the name of the column that needs to be masked
    • original_value is the column value that needs to be masked
    You can achieve deterministic masking by using Oracle's built-in hash functions like ORA_HASH, DBMS_CRYPTO.MD4, DBMS_CRYPTO.MD5, and DBMS_UTILITY.GET_HASH_VALUE. Please refer to the Oracle Database Documentation for more information on the Oracle hash functions. For example, the following masking UDF generates deterministic, unique hexadecimal values for a given string input:
    CREATE OR REPLACE FUNCTION RD_DUX (rid varchar2, column_name varchar2, orig_val VARCHAR2)
    RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE
    IS
      stext varchar2(26);
      no_of_characters number(2);
    BEGIN
      no_of_characters := 6;
      stext := substr(RAWTOHEX(DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(orig_val), 1)), 0, no_of_characters);
      RETURN stext;
    END;
    The uniqueness depends on the input, the length of the output string, and the number of bits used by the hash algorithm. In the above function the MD4 hash is used [denoted by argument 1 in the DBMS_CRYPTO.HASH function], which is a 128-bit algorithm producing up to 2^128 unique hashed values; however, this is limited by the output length of 6 hexadecimal characters, so only 16^6 unique values will be generated. Also do not forget about the birthday paradox/pigeonhole principle mentioned earlier in this post.
    Another example is to consistently replace characters or numbers, preserving the length and special characters, as shown below:
    CREATE OR REPLACE FUNCTION RD_DUS (rid varchar2, column_name varchar2, orig_val VARCHAR2)
    RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE
    IS
      stext varchar2(26);
    BEGIN
      DBMS_RANDOM.SEED(orig_val);
      stext := TRANSLATE(orig_val, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', DBMS_RANDOM.STRING('U', 26));
      stext := TRANSLATE(stext, 'abcdefghijklmnopqrstuvwxyz', DBMS_RANDOM.STRING('L', 26));
      stext := TRANSLATE(stext, '0123456789', to_char(DBMS_RANDOM.VALUE(1, 9)));
      stext := REPLACE(stext, '.', '0');
      RETURN stext;
    END;
    The following screen shot shows the usage of a UDF within a masking definition: To summarize, Oracle Data Masking and Subsetting helps you to consistently mask data across databases using one or all of the methods described in this post. It saves the hassle of identifying the parent-child relationships defined in the application tables. Happy Masking!
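    As a language-agnostic illustration of the deterministic-masking idea described above (this is not Oracle code): a keyed hash with a fixed seed always maps the same input to the same masked output while remaining hard to reverse. A minimal Python sketch, with the seed and output length chosen arbitrarily:

```python
import hashlib
import hmac

SEED = b"masking-seed"  # illustrative; keep it secret and stable across runs

def mask_value(value: str, length: int = 6) -> str:
    """Deterministically map `value` to a fixed-length hexadecimal token."""
    digest = hmac.new(SEED, value.encode("utf-8"), hashlib.sha256).hexdigest()
    return digest[:length]

def mask_bucket(value: str, buckets: int = 1000) -> int:
    """Deterministically map `value` into [0, buckets), similar in spirit to ORA_HASH bucketing."""
    digest = hmac.new(SEED, value.encode("utf-8"), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % buckets

print(mask_value("101"), mask_bucket("101"))  # same output on every run for "101"
```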

    Read the article

  • CRM vs VRM

    - by David Dorf
    In a previous post, I discussed the potential power of combining social, interest, and location graphs in order to personalize marketing and shopping experiences for consumers. Marketing companies have been trying to collect detailed information for that very purpose, a large majority of which comes from tracking people on the internet. But their approaches stem from the one-way nature of traditional advertising. With TV, radio, and magazines there is no opportunity to truly connect to customers, which has trained marketing companies to [covertly] collect data and segment customers into easily identifiable groups. To a large extent, we think of this as CRM. But what if we turned this viewpoint upside-down to accommodate the two-way nature of social media? The notion of marketing as conversations was the basis for the Cluetrain, an early attempt at drawing attention to the fact that customers are actually unique humans. A more practical implementation is Project VRM, which is a reverse CRM of sorts. Instead of vendors managing their relationships with customers, customers manage their relationships with vendors. Your shopping experience is not really controlled by you; rather, it's controlled by the retailer and advertisers. And unfortunately, they typically don't give you a say in the matter. Yes, they might tailor the content for "female age 25-35 interested in shoes," but that's not really the essence of you, is it? A better approach is to let consumers volunteer information about themselves. And why wouldn't they, if it means a better, more relevant shopping experience? I'd gladly list out my likes and dislikes in exchange for getting rid of all those annoying cookies on my hard drive. I really like this diagram from Beyond SocialCRM, as it captures the differences between CRM and VRM. The closest thing to VRM I can find is Buyosphere, a start-up that allows consumers to track their shopping history across many vendors, then share it appropriately. Also, Amazon does a pretty good job allowing its customers to edit their profile, which includes everything you've ever purchased from Amazon. You can mark items as gifts, or explicitly exclude them from the recommendation engine. This is a win-win for both the consumer and the retailer. So here is my plea to retailers: instead of trying to infer my interests from snapshots of my day, please just ask me. We'll both have a better experience in the long run.

    Read the article

  • WebCenter Customer Spotlight: Institute of Financing for Agriculture and Fisheries

    - by kellsey.ruppel
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    The Institute of Financing for Agriculture and Fisheries (IFAP) provides access, processes payments, and oversees the application of EU and domestic funds distribution to individuals and companies. IFAP's business objectives were to establish electronic processing of EU funds and improve relations between government agencies and the public, in compliance with International Organization for Standardization (ISO) requirements for information management and security. They implemented a complete solution for managing the entire document content life cycle through the use of Oracle WebCenter Content and Oracle WebCenter Capture. IFAP improved relationships with the public by accelerating payments electronically to individuals and organizations engaged in agriculture and fisheries, which is much easier, faster, and more secure than paper-based payments, and the solution complies with ISO information and security requirements.

    Company Overview
    As part of the Ministry of Agriculture, Rural Development, and Fisheries, the mission of the Institute of Financing for Agriculture and Fisheries (IFAP) is to provide access, process payments, and oversee the application of European Union (EU) and domestic funds distribution to individuals and companies engaged in the agriculture, rural development, and fisheries industries.

    Business Challenges
    IFAP's main business objective was to establish electronic processing of EU funds invested in agriculture and fisheries, improve relations between government agencies and the public, and comply with International Organization for Standardization (ISO) requirements for information management and security systems regarding access to stored documents.

    Solution Deployed
    IFAP implemented a complete solution for managing the entire document content life cycle through the use of Oracle WebCenter Content and Oracle WebCenter Capture. The use of paper was replaced with digital formats, accelerating internal processes and ensuring compliance with ISO requirements.

    Business Results
    Scalability: The number of documents included and managed in the document system, called iDOC, increased to a total of 490,847, of which 103,298 are internally generated, 113,824 are digitized correspondence, and 264,870 are forms that have been digitized or received via the institute's Web site.
    Efficiency: IFAP improved relationships with the public by accelerating payments electronically to individuals and organizations engaged in agriculture and fisheries, which is much easier, faster, and more secure than paper-based payments. Overall productivity increased through the use of digital formats and citizens' ID cards as digital signatures.
    Compliance: The implemented solution complies with International Organization for Standardization (ISO) requirements for information management and security systems regarding access to stored documents.

    Oracle Products and Services (IFAP Customer Snapshot): Oracle WebCenter Content, Oracle WebCenter Capture, Oracle Application Server, Oracle Forms, Oracle Reports

    Read the article

  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
    One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we’ve found.  After looking at discovery activities across a very wide range of industries, question types, business needs, and problem solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context.  We label these activities Modes: Explore, compare, and comprehend are three of the nine recognizable modes.  Modes describe *how* people go about realizing insights.  (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery) By analogy to languages, modes are the 'verbs' of discovery activity.  When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities, to which specific types of information visualizations better enable these scenarios for the types of data users will analyze. The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficult of communicating using only verbs.  So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics.  Continuing the language analogy, we've identified some of the ‘nouns’ in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into.  We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery as with the Modes. Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sense making needs and activity in more specific, consistent, and complete fashion.  In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people’s analytical needs and goals.  For example, a scenario such as ‘Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject] clearly captures the nature of the activity — exploration of trends vs. deep analysis of underlying factors — and the central focus — attrition rates for customers above a certain set of size criteria — from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods. We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides the perspective divide.  For example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis.  
The question posed to a Data Scientist or analyst may be something like “Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?” or “Where can innovate, by expanding our product portfolio to meet unmet needs?”.  Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis.  From the perspective of analysis this second question might become, “Which customer needs of type ‘A', identified and measured in terms of ‘B’, that are not directly or indirectly addressed by any of our current products, offer 'X' potential for ‘Y' positive return on the investment ‘Z' required to launch a new offering, in time frame ‘W’?  And how do these compare to each other?”.  Translation also happens from the perspective of analysis to the perspective of data; in terms of availability, quality, completeness, format, volume, etc. By implication, we are proposing that most working organizations — small and large, for profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects.  This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one.  (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity - but we’re not considering the recursive aspects of this exercise at the moment.) Compellingly, we can place the collection of subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain.  As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics. The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject.  Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (Data centric and blue), or people (User centric and orange).  This axis is shown explicitly above the Spectrum.  Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity.  This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity. Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets.  For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks.  
Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or take the form of Signals, such as ‘some Measure had some value at some time’ - what many enterprises understand as alerts.   The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible.  Accordingly, business analytics as a domain is structured around the fundamental assumption that sense making depends on analytical transformation of data.  Analytical activity incrementally synthesizes more complex and larger scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data, and recombined with other Subjects.  The end goal of  ‘laddering’ successive transformations is to enable sense making from the business perspective, rather than the analytical perspective.Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods. Beginning with some motivating question such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (Plan), Analysts will iteratively wrangle and transform source data Records, Values and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business.  More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders.  These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer.  The fulfillment process will involve other types of Entities, such as the products or services the business provides.  The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included.  All of this may be interpreted through an understanding of the operational domain of the businesses supply chain (a Domain).   We'll discuss the Spectrum in more depth in succeeding posts.

    Read the article
