Search Results

Search found 1115 results on 45 pages for 'relationships'.

Page 11/45 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • A Change of Seasons...

    - by James Michael Hare
    As some of you already know, today is my last day at Scottrade. It has been a great place to work and I'll miss all the relationships I've formed over the last 5 years immensely! Starting Monday, I will be taking a new position at Amazon.com in Seattle. It should be an exciting new adventure and I look forward to sharing more about my experiences in the days to come! I do intend to continue blogging (after the move settles down) about C# as I'm able, and may mix in some Java as well as I rekindle (Amazon? Kindle? Get it? Okay, that was lame, I know...) my knowledge of the language for my new job responsibilities. I'll miss all the relationships I've developed with the .NET community in St. Louis and the surrounding area, and hope to come back sometime to participate in future Days of .NET conferences, if able! Stay tuned for more updates!

    Read the article

  • Many-to-many relations in RDBMS databases

    - by Industrial
    What is the best way of handling many-to-many relations in an RDBMS database like MySQL? I have tried using a pivot table to keep track of the relationships, but it leads to one of the following: normalization gets left behind, or columns that are empty or null. What approach have you taken in order to support many-to-many relationships?
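
    For reference, a junction (pivot) table stays fully normalized when its two foreign keys form a composite primary key, so there are no empty or null columns and no duplicate links. A minimal sketch in MySQL, using hypothetical student/course tables purely for illustration:

        -- Hypothetical example: students enrolled in courses (many-to-many).
        CREATE TABLE student (
            student_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name       VARCHAR(100) NOT NULL
        );

        CREATE TABLE course (
            course_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            title     VARCHAR(100) NOT NULL
        );

        -- The junction table holds only the two keys (plus any attributes that
        -- belong to the relationship itself), so nothing is left empty or null.
        CREATE TABLE student_course (
            student_id INT NOT NULL,
            course_id  INT NOT NULL,
            PRIMARY KEY (student_id, course_id),
            FOREIGN KEY (student_id) REFERENCES student (student_id),
            FOREIGN KEY (course_id)  REFERENCES course (course_id)
        );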

    Read the article

  • sql server: import operation won't copy full schema

    - by P a u l
    I recall that the import tool in SQL Server 2000 would copy indexes, relationships, etc. In SQL Server 2005/2008 the import tool in SSMS will only create the tables and copy the data, but the keys, indexes, and relationships are missing. I can find no option in the import wizard to enable this. What am I missing here? Is this not possible anymore, for any good reason?

    Read the article

  • Hibernate pojo file issues

    - by Truezplaya
    Hi all, I am currently using NetBeans for a project. I have created my DB using MySQL Workbench. I have two relationships that are one-to-one. However, once I create the POJOs using the NetBeans Hibernate mapping file tools, they are being created as one-to-many. I have tried reverse-engineering the DB within Workbench and the relationships are shown as one-to-one. Has anyone had a similar problem? Cheers in advance

    Read the article

  • Multi-Column Primary Key in MySQL 5

    - by Kaji
    I'm trying to learn how to use keys and to break the habit of necessarily having SERIAL type IDs for all rows in all my tables. At the same time, I'm also doing many-to-many relationships, and so requiring unique values on either column of the tables that coordinate the relationships would hamper that. How can I define a primary key on a table such that any given value can be repeated in any column, so long as the combination of values across all columns is never repeated exactly?
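
    For what it's worth, this is exactly what a composite (multi-column) primary key provides; a minimal MySQL sketch with hypothetical table and column names:

        -- Hypothetical linking table: neither column is unique on its own,
        -- but the (band_id, genre_id) pair can never repeat.
        CREATE TABLE band_genre (
            band_id  INT NOT NULL,
            genre_id INT NOT NULL,
            PRIMARY KEY (band_id, genre_id)
        );

        -- band_id = 1 may appear many times and genre_id = 2 may appear many
        -- times, but the row (1, 2) can only be inserted once.
        INSERT INTO band_genre VALUES (1, 2), (1, 3), (4, 2);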

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be “one table for every entity persistent class.” This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both “is a” and “has a” relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don’t support type inheritance—and even when it’s available, it’s usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy: Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information. Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships. Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema. I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and learn "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario. Inheritance Mapping with Entity Framework Code First: All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I’m a big fan of the EF Code First approach, and I’m pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, Code First not only fully supports all the strategies but also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise compared to CTP4. A Note For Those Who Follow Other Entity Framework Approaches: If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series: although the implementation is Code First specific, the explanation of each strategy applies equally to all approaches, be it Code First or others. A Note For Those Who Are New to Entity Framework and Code First: If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First like Complex Types and Shared Primary Key Associations. A Top Down Development Scenario: These posts take a top-down approach; they assume that you’re starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C# and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you’re working bottom up, starting with existing database tables. 
    I’ll show some tricks along the way that help you deal with imperfect table layouts. Let’s start with the mapping of entity inheritance. The Domain Model: In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We do allow various billing types and represent them as subclasses of the BillingDetail class. For now, we support CreditCard and BankAccount: Implement the Object Model with Code First: As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class, which is BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention. public abstract class BillingDetail  {     public int BillingDetailId { get; set; }     public string Owner { get; set; }             public string Number { get; set; } } public class BankAccount : BillingDetail {     public string BankName { get; set; }     public string Swift { get; set; } } public class CreditCard : BillingDetail {     public int CardType { get; set; }                     public string ExpiryMonth { get; set; }     public string ExpiryYear { get; set; } } public class InheritanceMappingContext : DbContext {     public DbSet<BillingDetail> BillingDetails { get; set; } } This object model is all that is needed to enable inheritance with Code First. If you put this in your application you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries. Polymorphic Queries: LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries—that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query: IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b; List<BillingDetail> billingDetails = linqQuery.ToList(); Or the same query in EntitySQL: string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b"; ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext                                          .CreateQuery<BillingDetail>(eSqlQuery); List<BillingDetail> billingDetails = objectQuery.ToList(); linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class, but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount. Non-polymorphic Queries: All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and which only return instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method. 
    For example, the following query returns only instances of BankAccount: IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b; EntitySQL has an OFTYPE operator that does the same thing: string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b"; In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators: string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)                       FROM BillingDetails AS b                       WHERE b IS OF(Model.BankAccount)"; (Note that in the above query, Model.BankAccount is the fully qualified name of the BankAccount class. You need to replace "Model" with your own namespace name.) Table per Hierarchy (TPH): An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don’t have to do anything special in Code First to enable TPH. It's the default inheritance mapping strategy: This mapping strategy is a winner in terms of both performance and simplicity. It’s the best-performing way to represent polymorphism—both polymorphic and nonpolymorphic queries perform well—and it’s even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward. Discriminator Column: As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn’t a property of the persistent class in our object model; it’s used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names—in this case, “BankAccount” or “CreditCard”. EF Code First automatically sets and retrieves the discriminator values. TPH Requires Properties in Subclasses to be Nullable in the Database: TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property in a persistent class. But in this case, since a BankAccount instance won’t have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity. TPH Violates the Third Normal Form: Another important issue is normalization. We’ve created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA). 
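
    To make the nullable-column point concrete, the single table that TPH produces for this hierarchy looks roughly like the sketch below. This is only an approximation; the exact DDL emitted by Code First CTP5 may differ, and the column types shown are assumptions:

        CREATE TABLE [dbo].[BillingDetails]
        (
            [BillingDetailId] INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
            [Owner]           NVARCHAR(MAX) NULL,
            [Number]          NVARCHAR(MAX) NULL,
            -- BankAccount-only columns must be nullable: a CreditCard row has no values for them.
            [BankName]        NVARCHAR(MAX) NULL,
            [Swift]           NVARCHAR(MAX) NULL,
            -- CreditCard-only columns, nullable for the same reason
            -- (note CardType ends up as INT NULL rather than INT NOT NULL).
            [CardType]        INT NULL,
            [ExpiryMonth]     NVARCHAR(MAX) NULL,
            [ExpiryYear]      NVARCHAR(MAX) NULL,
            -- Type discriminator added by Code First; holds 'BankAccount' or 'CreditCard'.
            [Discriminator]   NVARCHAR(128) NOT NULL
        );
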
    Generated SQL Query: Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement: SELECT  [Extent1].[Discriminator] AS [Discriminator],  [Extent1].[BillingDetailId] AS [BillingDetailId],  [Extent1].[Owner] AS [Owner],  [Extent1].[Number] AS [Number],  [Extent1].[BankName] AS [BankName],  [Extent1].[Swift] AS [Swift],  [Extent1].[CardType] AS [CardType],  [Extent1].[ExpiryMonth] AS [ExpiryMonth],  [Extent1].[ExpiryYear] AS [ExpiryYear] FROM [dbo].[BillingDetails] AS [Extent1] WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard') And the non-polymorphic query for the BankAccount subclass generates this SQL statement: SELECT  [Extent1].[BillingDetailId] AS [BillingDetailId],  [Extent1].[Owner] AS [Owner],  [Extent1].[Number] AS [Number],  [Extent1].[BankName] AS [BankName],  [Extent1].[Swift] AS [Swift] FROM [dbo].[BillingDetails] AS [Extent1] WHERE [Extent1].[Discriminator] = 'BankAccount' Note how Code First adds a restriction on the discriminator column and also how it only selects those columns that belong to the BankAccount entity. Change Discriminator Column Data Type and Values With Fluent API: Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively: protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder) {     modelBuilder.Entity<BillingDetail>()                 .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))                 .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC")); } Changing the data type of the discriminator column is also interesting. In the above code, we passed strings to the HasValue method, but this method has been defined to accept an object: public void HasValue(object value); Therefore, if for example we pass a value of type int to it, then Code First not only uses our desired values (i.e. 1 & 2) in the discriminator column but also changes the column type to (INT, NOT NULL): modelBuilder.Entity<BillingDetail>()             .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))             .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2)); Summary: In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design—after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn’t expose you to this problem. References: ADO.NET team blog, Java Persistence with Hibernate book

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
      Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments.  In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE.  In this segment, I will lay the groundwork for the modeling concepts.  First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing. Oracle BI EE Query Cycle The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources.  There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources. Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves.  Most of them create queries without knowing it, by picking a dashboard page and some filters.  Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations. The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business focused measures and dimensional attributes that business people can use in their analyses and dashboards.   After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts. The CEIM is a model that controls the processing of the BI Server.  It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis.  It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules.  The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.     Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL.  The API simply uses ODBC or JDBC to pass the query to the BI Server.  Presentation services writes the LSQL query in terms of the simplified objects presented to the users.  The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources.  For example, the LSQL on the left below was rewritten into the physical SQL for an Oracle 11g database on the right. 
Logical SQL   Physical SQL SELECT "D0 Time"."T02 Per Name Month" saw_0, "D4 Product"."P01  Product" saw_1, "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2 FROM "Sample Sales" ORDER BY saw_0, saw_1       WITH SAWITH0 AS ( select T986.Per_Name_Month as c1, T879.Prod_Dsc as c2,      sum(T835.Units) as c3, T879.Prod_Key as c4 from      Product T879 /* A05 Product */ ,      Time_Mth T986 /* A08 Time Mth */ ,      FactsRev T835 /* A11 Revenue (Billed Time Join) */ where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid) group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month ) select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3 from SAWITH0 order by c1, c2   Probably everybody reading this blog can write SQL or MDX.  However, the trick in designing the CEIM is that you are modeling a query-generation factory.  Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests.  This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.   The Structure of the Common Enterprise Information Model (CEIM) The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result.  The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below. Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request.  When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries.  That's the left to right flow in the diagram below.  When the results come back from the data source queries, the right to left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.   Business Model Think of the business model as the heart of the CEIM you are designing.  This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole.  It also provides the baseline business-friendly names and user-readable dictionary.  For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models. Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well.  This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.  
It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next.  The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model. Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time.  This is important for federation, where a given logical object can map to several different physical objects in different databases.  It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products. Your requirements process with your user community will mostly affect the business model.  This is where you will define most of the things they specifically ask for, such as metric definitions.  For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer. Physical Model The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries.  Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time. Each physical data source will have its own physical model, or "database" object in the CEIM.  The shape of each physical model matches the shape of its physical source.  In other words, if the source is normalized relational, the physical model will mimic that normalized shape.  If it is a hypercube, the physical model will have a hypercube shape.  If it is a flat file, it will have a denormalized tabular shape. To aid in query optimization, the physical layer also tracks the specifics of the database brand and release.  This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer.  For most data sources, native APIs are used to further optimize performance and functionality. The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics.  This encapsulation is another enabler of packaged BI applications and federation.  It is also key to hiding the complex shapes and relationships in the physical sources from the end users.  Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database.  The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same. Mappings Within the Business Model and Mapping Layer, the mappings provide the binding from each logical column and join in the dimensional business model, to each of the objects that can provide its data in the physical layer.  
When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used.  These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources.  These mappings are usually the most sophisticated part of the CEIM. Presentation You might think of the presentation layer as a set of very simple relational-like views into the business model.  Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns.  For business users, presentation services interprets these as subject areas, folders and columns, respectively.  (Note that in 10g, subject areas were called presentation catalogs in the CEIM.  In this blog, I will stick to 11g terminology.)  Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio). The purpose of the presentation layer is to specialize the business model for different categories of users.  Based on a user's role, they will be restricted to specific subject areas, tables and columns for security.  The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users.  Customized names and descriptions can be used to override the business model names for a specific audience.  Variables in the object names can be used for localization. For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables.  The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer.  In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model. Other Model Objects There are some objects that apply to multiple layers.  These include security-related objects, such as application roles, users, data filters, and query limits (governors).  There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis.  Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.   The Query Factory At this point, you should have a grasp on the query factory concept.  When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.  
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • LLBLGen Pro v3.1 released!

    - by FransBouma
    Yesterday we released LLBLGen Pro v3.1! Version 3.1 comes with new features and enhancements, which I'll describe briefly below. v3.1 is a free upgrade for v3.x licensees. What's new / changed? Designer Extensible Import system. An extensible import system has been added to the designer to import project data from external sources. Importers are plug-ins which import project meta-data (like entity definitions, mappings and relational model data) from an external source into the loaded project. In v3.1, an importer plug-in for importing project elements from existing LLBLGen Pro v3.x project files has been included. You can use this importer to create source projects from which you import parts of models to build your actual project with. Model-only relationships. In v3.1, relationships of the type 1:1, m:1 and 1:n can be marked as model-only. A model-only relationship isn't required to have a backing foreign key constraint in the relational model data. They're ideal for projects which have to work with relational databases where changes can't always be made or some relationships can't be added to (e.g. the ones which are important for the entity model, but are not allowed to be added to the relational model for some reason). Custom field ordering. Although fields in an entity definition don't really have an ordering, it can be important for some situations to have the entity fields in a given order, e.g. when you use compound primary keys. Field ordering can be defined using a pop-up dialog which can be opened through various ways, e.g. inside the project explorer, model view and entity editor. It can also be set automatically during refreshes based on new settings. Command line relational model data refresher tool, CliRefresher.exe. The command line refresh tool shipped with v2.6 is now available for v3.1 as well Navigation enhancements in various designer elements. It's now easier to find elements like entities, typed views etc. in the project explorer from editors, to navigate to related entities in the project explorer by right clicking a relationship, navigate to the super-type in the project explorer when right-clicking an entity and navigate to the sub-type in the project explorer when right-clicking a sub-type node in the project explorer. Minor visual enhancements / tweaks LLBLGen Pro Runtime Framework Entity creation is now up to 30% faster and takes 5% less memory. Creating an entity object has been optimized further by tweaks inside the framework to make instantiating an entity object up to 30% faster. It now also takes up to 5% less memory than in v3.0 Prefetch Path node merging is now up to 20-25% faster. Setting entity references required the creation of a new relationship object. As this relationship object is always used internally it could be cached (as it's used for syncing only). This increases performance by 20-25% in the merging functionality. Entity fetches are now up to 20% faster. A large number of tweaks have been applied to make entity fetches up to 20% faster than in v3.0. Full WCF RIA support. It's now possible to use your LLBLGen Pro runtime framework powered domain layer in a WCF RIA application using the VS.NET tools for WCF RIA services. WCF RIA services is a Microsoft technology for .NET 4 and typically used within silverlight applications. SQL Server DQE compatibility level is now per instance. (Usable in Adapter). 
It's now possible to set the compatibility level of the SQL Server Dynamic Query Engine (DQE) per instance of the DQE instead of the global setting it was before. The global setting is still available and is used as the default value for the compatibility level per-instance. You can use this to switch between CE Desktop and normal SQL Server compatibility per DataAccessAdapter instance. Support for COUNT_BIG aggregate function (SQL Server specific). The aggregate function COUNT_BIG has been added to the list of available aggregate functions to be used in the framework. Minor changes / tweaks I'm especially pleased with the import system, as that makes working with entity models a lot easier. The import system lets you import from another LLBLGen Pro v3 project any entity definition, mapping and / or meta-data like table definitions. This way you can build repository projects where you store model fragments, e.g. the building blocks for a customer-order system, a user credential model etc., any model you can think of. In most projects, you'll recognize that some parts of your new model look familiar. In these cases it would have been easier if you would have been able to import these parts from projects you had pre-created. With LLBLGen Pro v3.1 you can. For example, say you have an Oracle schema called CRM which contains the bread 'n' butter customer-order-product kind of model. You create an entity model from that schema and save it in a project file. Now you start working on another project for another customer and you have to use SQL Server. You also start using model-first development, so develop the entity model from scratch as there's no existing database. As this customer also requires some CRM like entity model, you import the entities from your saved Oracle project into this new SQL Server targeting project. Because you don't work with Oracle this time, you don't import the relational meta-data, just the entities, their relationships and possibly their inheritance hierarchies, if any. As they're now entities in your project you can change them a bit to match the new customer's requirements. This can save you a lot of time, because you can re-use pre-fab model fragments for new projects. In the example above there are no tables yet (as you work model first) so using the forward mapping capabilities of LLBLGen Pro v3 creates the tables, PK constraints, Unique Constraints and FK constraints for you. This way you can build a nice repository of model fragments which you can re-use in new projects.

    Read the article

  • Is SugarCRM really adequate for custom development (or adequate at all)? [closed]

    - by dukeofgaming
    Have you used SugarCRM for custom development successfully? If so, have you done it programmatically or through the Module Builder? Were you successful? If not, why? I used SugarCRM for a project about two years ago. I ran into errors from the very installation, having to hack the actual installation file to deploy the software on the server, and other errors that I can't recall now. Two years later, I'm picking it up for a project once again. I'm feeling like I should have developed the whole thing from scratch myself. Some examples: I couldn't install it on the server (again). I had to install it locally, then copy the files and database over to the server and manually edit the config file. Constantly getting deployment errors from the module builder. One reason is that SugarCRM keeps creating a record in the upgrade_history table for a file that does not exist; I keep deleting that record and it keeps coming back corrupt. I get other deployment errors, but have not figured them out. Then I have to roll back all files and the database to try again. I deleted a custom module with relationships, the relationships stayed in the other modules and cannot be deleted anymore, PHP warnings all over the place. Quick create for custom modules does not appear, hack needed. Its whole cache directory is a joke, permanent data/files are stored there. The module builder interface makes required fields disappear. Edit the wrong thing and the module builder won't deploy again, then pray Quick Repair and/or Rebuild Relationships do the trick. My impression of SugarCRM now is that, regardless of its pretty exterior and apparent functionality, it is a very low quality piece of software. This scared me even more: http://amplicate.com/hate/sugarcrm; a quote: I wish this info had been available when I tried to implement it 2 years ago... I searched high and low and the only info I found was positive. Yes, it's a piece of crap. The community edition was full of bugs... nothing worked. Essentially I got fired for implementing it. I'm glad though, because now I work for myself, am much happier and make more money... so, I should really thank SugarCRM for sucking so much I guess! I figured that perhaps some of you have had similar experiences, and have either stuck with SugarCRM or moved on to another solution. I'm very interested in knowing what your resolutions were, or what your current situations are, to make up my own mind, since the project I'm working on is long term and I'm feeling SugarCRM will be more an obstacle than an aid. After further failed attempts to continue using this software I continued to stumble upon dead-ends when using the module editor, and I could only recover from these errors by using version control. We are now moving on to a custom implementation using Symfony; perhaps if we were using it with its out-of-the-box modules we would have stuck with it.

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Columnar, Graph and Spatial Database – Day 14 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the Key-Value Pair Databases and Document Databases in the Big Data Story. In this article we will understand the role of Columnar, Graph and Spatial Databases in supporting the Big Data Story. Now we will see a few examples of the operational databases. Relational Databases (The day before yesterday’s post) NoSQL Databases (The day before yesterday’s post) Key-Value Pair Databases (Yesterday’s post) Document Databases (Yesterday’s post) Columnar Databases (Today’s post) Graph Databases (Today’s post) Spatial Databases (Today’s post) Columnar Databases  A relational database is a row store database, or a row-oriented database. Columnar databases are column-oriented or column store databases. As we discussed earlier, in Big Data we have different kinds of data and we need to store those different kinds of data in the database. With a columnar database this is very easy to do, as we can just add a new column to the columnar database. HBase is one of the most popular columnar databases. It uses the Hadoop file system and MapReduce for its core data storage. However, remember this is not a good solution for every application. It is particularly good for databases where high-volume incremental data is gathered and processed. Graph Databases For highly interconnected data it is suitable to use a Graph Database. This database has a node-relationship structure. Nodes and relationships contain Key-Value Pairs where data is stored. The major advantage of this database is that it supports faster navigation among various relationships. For example, Facebook uses a graph database to list and demonstrate various relationships between users. Neo4J is one of the most popular open source graph databases. One of the major disadvantages of a Graph Database is that it is not possible to self-reference (self joins in RDBMS terms), and there might be real-world scenarios where this is required that a graph database does not support. Spatial Databases  We all use Foursquare, Google+ as well as Facebook Check-ins for location-aware check-ins. All the location-aware applications figure out the position of the phone with the help of the Global Positioning System (GPS). Think about it: so many different users at different locations in the world, all checking in together. Additionally, the applications are now feature-rich and users are demanding more and more information from them, for example about movies, coffee shops or places to see. They are all running with the help of Spatial Databases. Spatial data is standardized by the Open Geospatial Consortium, known as OGC. Spatial data helps answer many interesting questions like “distance between two locations, area of interesting places, etc.” When we think of it, it is very clear that handling spatial data and returning meaningful results is one big task when there are millions of users moving dynamically from one place to another and requesting various spatial information. The PostGIS/OpenGIS suite is a very popular spatial database. It runs as a layer implementation on the RDBMS PostgreSQL. This makes it totally unique, as it offers the best of both worlds. Tomorrow In tomorrow’s blog post we will discuss a very important component of the Big Data Ecosystem – Hive. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
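
    As a small illustration of the kind of spatial question mentioned above (“distance between two locations”), here is a hedged PostGIS sketch; the table and column names are made up for the example:

        -- Hypothetical table of check-in locations stored as geography points.
        CREATE TABLE checkin (
            checkin_id SERIAL PRIMARY KEY,
            user_name  TEXT,
            location   GEOGRAPHY(Point, 4326)
        );

        -- Distance in meters between two known points (longitude/latitude order).
        SELECT ST_Distance(
            ST_GeogFromText('POINT(-122.33 47.61)'),   -- Seattle
            ST_GeogFromText('POINT(-73.99 40.73)')     -- New York
        ) AS distance_in_meters;

        -- All check-ins within 5 km of a given point.
        SELECT user_name
        FROM checkin
        WHERE ST_DWithin(location, ST_GeogFromText('POINT(-122.33 47.61)'), 5000);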

    Read the article

  • TechEd North America 2012 – Day 1 #msTechEd

    - by Marco Russo (SQLBI)
    Yesterday Alberto and I delivered the PreCon day about BISM Tabular in Analysis Services 2012. We received very good feedback and now I am looking forward to meeting people who read our blogs and our books! Ping me on Twitter at @marcorus if you want to contact me during the conference. This is my schedule for the next few days: ·         Monday, June 11, 2012 o   10:30am-12:30pm I will be in the Technical Learning Center area, at the Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012) o   I will try to watch some sessions in the afternoon o   6:30pm-7:00pm I will be at the O’Reilly booth meeting book readers and doing some book signing ·         Tuesday, June 12, 2012 o   12:30pm-3:30pm I will be in the Technical Learning Center area, at the Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012) o   5:00pm-6:15pm I will attend Alberto’s session DBI413 Many-to-Many Relationships in BISM Tabular (room S330E) o   6:15pm-9:00pm Community Night & Ask the Experts, where we’ll discuss Analysis Services, Tabular and Multidimensional! ·         Wednesday, June 13, 2012 o   11:15am-11:30am Don’t miss this special demo session at the Private Cloud, Public Cloud and Data Platform Theater in the Technical Learning Center area (next to the SQL Server 2012 zone). Alberto and I will present Querying multi-billion rows with many to many relationships in SSAS Tabular (xVelocity) and you’re invited to guess the response time of DAX queries on a 4-billion-row table with many-to-many relationships before we run them! We’ll give away some 8GB USB keys if you guess the right answer! o   12:30pm-1:00pm Alberto and I will have a book signing session at the TechEd Bookstore o   3:00pm-5:00pm I will be in the Technical Learning Center area, at the Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012) ·         Thursday, June 14, 2012 o   2:45pm-4:00pm I will deliver my DBI319 BISM: Multidimensional vs. Tabular breakthrough session in room S320A. I expect many questions here! And if you want to learn more about Analysis Services Tabular, we announced two more online sessions of our SSAS Tabular Workshop: ·         July 2-3, 2012 - SSAS Workshop Online - America's time zone ·         September 3-4, 2012 - SSAS Workshop Online - America's time zone Register now if you are interested, the early bird for the July session expires on June 19, 2012! I will also deliver an SSAS Workshop in Oslo (Norway) on August 27-28, 2012.

    Read the article

  • Data Modeling: Logical Modeling Exercise

    - by swisscheese
    In trying to learn the art of data storage I have been trying to take in as much solid information as possible. PerformanceDBA posted some really helpful tutorials/examples in the following posts among others: is my data normalized? and Relational table naming convention. I already asked a subset question of this model here. So to make sure I understood the concepts he presented and I have seen elsewhere I wanted to take things a step or two further and see if I am grasping the concepts. Hence the purpose of this post, which hopefully others can also learn from. Everything I present is conceptual to me and for learning rather than applying it in some production system. It would be cool to get some input from PerformanceDBA also since I used his models to get started, but I appreciate all input given from anyone. As I am new to databases and especially modeling I will be the first to admit that I may not always ask the right questions, explain my thoughts clearly, or use the right verbiage due to lack of expertise on the subject. So please keep that in mind and feel free to steer me in the right direction if I head off track. If there is enough interest in this I would like to take this from the logical to physical phases to show the evolution of the process and share it here on Stack. I will keep this thread for the Logical Diagram though and start a new one for the additional steps. For my understanding I will be building a MySQL DB in the end to run some tests and see if what I came up with actually works. Here is the list of things that I want to capture in this conceptual model. Edit for V1.2 The purpose of this is to list Bands, their members, and the Events that they will be appearing at, as well as offer music and other merchandise for sale Members will be able to match up with friends Members can write reviews on the Bands, their music, and their events. There can only be one review per member on a given item, although they can edit their reviews and history will be maintained. BandMembers will have the chance to write a single Comment on Reviews about the Band they are associated with. Collectively as a Band only one Comment is allowed per Review. Members can then rate all Reviews and Comments but only once per given instance Members can select their favorite Bands, music, Merchandise, and Events Bands, Songs, and Events will be categorized into the type of Genre that they are and then further subcategorized into a SubGenre if necessary. It is ok for a Band or Event to fall into more than one Genre/SubGenre combination. Event date, time, and location will be posted for a given band and members can show that they will be attending the Event. An Event can be comprised of more than one Band, and multiple Events can take place at a single location on the same day Every party will be tied to at least one address and address history shall be maintained. Each party could also be tied to more than one address at a time (i.e. billing, shipping, physical) There will be stored profiles for Bands, BandMembers, and general members. So there it is, maybe a bit involved but it could be a great learning tool for many, hopefully, as the process evolves and input is given by the community. Any input? EDIT v1.1 In response to PerformanceDBA U.3) That means no merchandise other than Band merchandise in the database. Correct? That was my original thought but you got me thinking. Maybe the site would want to sell its own merchandise or even other merchandise from the bands. 
    Not sure what mod to make for that. Would it require an entire rework of the Catalog section or just the identifying relationship that exists with the Band? Attempted a mod to sell either complete albums or songs. Either way they would both be in electronic format only available for download. That is why I listed an Album as being comprised of Songs rather than 2 separate entities. U.5) I understand what you bring up about the circular relation with Favorite. I would like to get to this “It is either one Entity with some form of differentiation (FavoriteType) which identifies its treatment” but how to do that is not clear to me. What am I missing here? u.6) “Business Rules This is probably the only area you are weak in.” Thanks for the honest response. I will readdress these but I hope to clear up some confusion in my head first with the responses I have posted back to you. Q.1) Yes I would like to have Accepted, Rejected, and Blocked. I am not sure what you are referring to as to how this would change the logical model? Q.2) A person does not have to be a User. They can exist only as a BandMember. Is that what you are asking? Minor Issue Zero, One, or More…Oops I admit I forgot to give this attention when building the model. I am submitting this version as is and will address it in a future version. I need to read up more on Constraint Checking to make sure I am understanding things. M.4) Depends if you envision OrderPurchase in the future. Can you expand on what you mean here? EDIT V1.2 In response to PerformanceDBA input... Lessons learned. I was mixing the concepts of Identifying / Non-Identifying relationships and Cardinality (i.e. Genre / SubGenre), and doing so inconsistently, to make things worse. Associative Tables are not required in Logical Diagrams as their many-to-many relationships can be depicted and then expanded in the Physical Model. I was overlooking the Cardinality in a lot of the relationships. The importance of reading through relationships using effective Verb Phrases to ensure I am modeling what I want to accomplish. U.2) In the concept of this model it is only required to track a Venue as a location for an Event. No further data needs to be collected. With that being said, Events will take place on a given EventDate and will be hosted at a Venue. Venues will host multiple events and possibly multiple events on a given date. In my new model my thinking was that EventDate is already tied to Event. Therefore, Venue will not need a relationship with EventDate. The 5th and 6th bullets you have listed under U.2) leave me questioning my thinking though. Am I missing something here? U.3) Is it time to move the link between Item and Band up to Item and Party instead? With the current design I don't see a possibility to sell merchandise not tied to the band, as you have brought up. U.5) I left it as per your input rather than making it a discrete Supertype/Subtype Relationship as I don’t see a benefit of having that type of roll up. Additional Revisions AR.1) After going through the exercise for FavoriteItem, I feel that Item to Review requires a many-to-many relationship so that is indicated. Necessary? Ok here we go for v1.3 I took a few days on this version, going back and forth with my design. Once the logical process is complete, as I want to see if I am on the right track, I will go through in depth what I learned and the troubles I faced as a beginner going through this process. The big point for this version was that it took throwing in some Keys to help see what I was missing in the past. 
    Going through the process of doing a matrix proved to be of great help also. Regardless of anything, if it wasn't for the input given by PerformanceDBA I would still be a lost soul wandering in the dark. Who knows, my current design might reaffirm that I still am, but I have learned a lot, so I know I at least have a flashlight in my hand. At this point in time I admit that I am still confused about identifying and non-identifying relationships. In my model I had to use non-identifying relationships with non-nulls just to join the relationships I wanted to model. In reading a lot on the subject there seems to be a lot of disagreement and indecisiveness on the subject, so I did what I thought represented the right things in my model. When to force (identifying) and when to be free (non-identifying)? Anyone have input? EDIT V1.4 Ok took the V1.3 inputs and cleaned things up for this V1.4. Currently working on a V1.5 to include attributes.
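
    As a side note on the "one review per member on a given item" rule listed above, that kind of constraint usually ends up as a composite unique key in the physical model. A minimal MySQL sketch with hypothetical, simplified names (edit history would live in a separate structure):

        CREATE TABLE review (
            review_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            member_id INT NOT NULL,
            item_id   INT NOT NULL,
            body      TEXT,
            -- A member may review many items and an item may have many reviews,
            -- but the same member can never review the same item twice.
            UNIQUE KEY uq_review_member_item (member_id, item_id)
        );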

    Read the article

  • Using Mapping Models to migrate between Core Data Object Models

    - by westsider
    I have a fairly simple schema. Essentially, Run <-- Data (where a Run holds data, e.g., Temperature, sampled from some sort of sensor). Now, it seems that sensors can have more than one measurement (e.g., Temperature and Humidity). So, a single Run could have multiple data samples. Hence, Run <-- Sample and Sample <-- Data. (And for simplicity I am leaving Run <-- Data in place, for now.) If I create a new mapping model, then things generally work - except that no new Samples are created, and no relationships are established between Runs and Samples nor between Samples and Datas. I am trying to get the mapping model to migrate my model, but even the slightest change to the generated mapping model results in Cocoa error 134110. For example, if I take the "Sample" mapping (which has no Source) and set its Source to 'Run' (so that I can set Sample's inverse relationship 'run' appropriately) then the mapping changes its name to "RunToSample". There are two relationships handled in this mapping: data and run. The data property gets set automatically to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "DataToData", $source.dataSet) Following this example, I set the run property to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToRun", $source) Similarly, I set the 'sample' property mapping in RunToRun to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToSample", $source) and the 'sample' property in DataToData to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToSample", $source.run) So, what, I wonder, is going wrong? I have tried various permutations, such as leaving the 'inverse' relationships unspecified. But I continue to get the same error (134110) regardless. I imagine that this is a lot easier than it seems and that I am missing some fundamental but minor piece. I have also tried subclassing NSEntityMigrationPolicy and overriding -createDestinationInstancesForSourceInstance: but these efforts have met with much the same results. Thanks in advance for any pointers or (relevant :-) advice.

    Read the article

  • Got a table of people, who I want to link to each other, many-to-many, with the links being bidirect

    - by dflock
    Imagine you live in very simplified example land - and imagine that you've got a table of people in your MySQL database: create table person ( person_id int, name text ) select * from person; +-------------------------------+ | person_id | name | +-------------------------------+ | 1 | Alice | | 2 | Bob | | 3 | Carol | +-------------------------------+ and these people need to collaborate/work together, so you've got a link table which links one person record to another: create table person__person ( person__person_id int, person_id int, other_person_id int ) This setup means that links between people are uni-directional - i.e. Alice can link to Bob, without Bob linking to Alice and, even worse, Alice can link to Bob and Bob can link to Alice at the same time, in two separate link records. As these links represent working relationships, in the real world they're all two-way mutual relationships. The following are all possible in this setup: select * from person__person; +---------------------+-----------+--------------------+ | person__person_id | person_id | other_person_id | +---------------------+-----------+--------------------+ | 1 | 1 | 2 | | 2 | 2 | 1 | | 3 | 2 | 2 | | 4 | 3 | 1 | +---------------------+-----------+--------------------+ For example, with person__person_id = 4 above, when you view Carol's (person_id = 3) profile, you should see a relationship with Alice (person_id = 1) and when you view Alice's profile, you should see a relationship with Carol, even though the link goes the other way. I realize that I can do union and distinct queries and whatnot to present the relationships as mutual in the UI, but is there a better way? I've got a feeling that there is a better way, one where this issue would neatly melt away by setting up the database properly, but I can't see it. Anyone got a better idea?
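
    As a side note on the union approach mentioned at the end, here is a sketch of two common ways to handle the symmetry, using the table and column names from the post; note that older MySQL versions parse but ignore CHECK constraints, so option 2 may need a trigger or application-level check instead:

        -- Option 1: keep the table as-is and expose a symmetric view,
        -- so every link can be read from either direction.
        CREATE VIEW person_link AS
            SELECT person_id, other_person_id FROM person__person
            UNION
            SELECT other_person_id, person_id FROM person__person;

        -- "Everyone Carol works with", regardless of which way the row was stored:
        SELECT p.name
        FROM person_link pl
        JOIN person p ON p.person_id = pl.other_person_id
        WHERE pl.person_id = 3;

        -- Option 2: store each pair exactly once in canonical order, which makes
        -- duplicates like (1,2) vs (2,1) impossible; combine with a unique key
        -- on (person_id, other_person_id).
        ALTER TABLE person__person
            ADD CONSTRAINT chk_link_order CHECK (person_id < other_person_id);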

    Read the article

  • How to prevent Hibernate from nullifying relationship column during entity removal

    - by Grzegorz
    I have two entities, A and B. I need to easily retrieve entities A, joined with entities B on the condition of equal values of some column (some column in A equal to some column in B). Those columns are not primary or foreign keys; they contain the same business data. I just need to have access from each instance of A to the collection of B's with the same value of this column. So I model it like this:

        class A {
          @OneToMany
          @JoinColumn(name="column_in_B", referencedColumnName="column_in_A")
          Collection<B> bs;
        }

    This way, I can run queries like "select A join fetch a.bs b where b....". (Actually, the real relationship here is many-to-many, but when I use @ManyToMany, Hibernate forces me to use a join table, which doesn't exist here. So I have to use @OneToMany as a workaround.) So far so good. The main problem is: whenever I delete an instance of A, Hibernate calls "update B set column_in_B = null", because it thinks column_in_B is a foreign key pointing at the primary key of A (and because the row in A is deleted, it tries to clean up the foreign key in B). BUT column_in_B IS NOT a foreign key and can't be modified, because that causes data loss (and this column is NOT NULL anyway in my case, causing a data integrity exception to be thrown). Please help me with this. How do I model such relationships with Hibernate? (I would call them "virtual relationships", or "secondary relationships", or so: as they are not based on foreign keys, they are just shortcuts which allow for retrieving related objects and querying for them with HQL.)
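    One workaround that gets suggested for this pattern (a sketch only - the attribute combination is an assumption worth testing against your Hibernate version, not something verified against the asker's schema) is to declare the join column read-only, so Hibernate may use column_in_B for fetching and joining but never issues writes against it:

        import java.util.Collection;
        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.OneToMany;

        @Entity
        public class A {

            @Id
            @GeneratedValue
            private Long id;

            // The business column on the A side of the "virtual relationship".
            @Column(name = "column_in_A")
            private String sharedValue;

            // insertable/updatable = false marks the association read-only: Hibernate can
            // join on column_in_B, but should not try to set it (or null it) on delete.
            @OneToMany
            @JoinColumn(name = "column_in_B", referencedColumnName = "column_in_A",
                        insertable = false, updatable = false)
            private Collection<B> bs;
        }

    If that still triggers the delete-time update, the heavier fallback is to drop the mapped association entirely and expose the B's through a repository query on the shared column.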

    Read the article

  • RIA Services for transmitting non DB object-graph

    - by Mike Gates
    I have been getting into RIA Services because I thought it would simplify dealing with the services layer of web applications I wish to build. I see lots of examples out there showing how to create DomainService classes which expose and consume entities that have some kind of relational database backing, and therefore have foreign-key relationships. However, I would like to know how to expose and consume normal object graphs - objects that contain references to each other but don't have foreign keys. For example, say I want a service operation called "GetFolderInformation(string pathToFolder)". I want this to return a custom object called "FolderInformation", structured as:

        - string Name
        - IEnumerable<FileInformation> Files

    I cannot get this to work because it seems that RIA wants to deal with entities that have foreign-key relationships. Why? Why can't the serializer just see my object references and recreate them in the proxy on the other side? Data exists behind service layers that doesn't necessarily have foreign-key relationships - like folder/file, for example. EDIT: I realized I hadn't asked my question! My question is: is there a way to do what I am trying to do?
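    For what it's worth, WCF RIA Services can carry a non-database graph, but it still demands entity-style metadata: every exposed type needs a [Key], and the child collection is only serialized if it is declared as an [Association] and the query [Include]s it. A rough sketch of the shape that takes here (the FolderPath back-reference and the exact attribute/namespace choices are assumptions for illustration, not verified code):

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;
        using System.Linq;
        using System.ServiceModel.DomainServices.Hosting;
        using System.ServiceModel.DomainServices.Server;

        public class FolderInformation
        {
            [Key]
            public string Path { get; set; }            // synthetic key: RIA insists on one

            public string Name { get; set; }

            [Include]
            [Association("Folder_Files", "Path", "FolderPath")]
            public IEnumerable<FileInformation> Files { get; set; }
        }

        public class FileInformation
        {
            [Key]
            public string FullPath { get; set; }

            public string FolderPath { get; set; }      // pseudo "foreign key" back to the folder

            public string Name { get; set; }
        }

        [EnableClientAccess]
        public class FileSystemDomainService : DomainService
        {
            public IEnumerable<FolderInformation> GetFolderInformation(string pathToFolder)
            {
                var folder = new FolderInformation
                {
                    Path = pathToFolder,
                    Name = System.IO.Path.GetFileName(pathToFolder),
                    Files = System.IO.Directory.GetFiles(pathToFolder)
                        .Select(f => new FileInformation
                        {
                            FullPath = f,
                            FolderPath = pathToFolder,
                            Name = System.IO.Path.GetFileName(f)
                        })
                        .ToList()
                };
                return new[] { folder };
            }
        }

    Which is, of course, exactly the irritation the question describes: the FolderPath property exists purely to satisfy the association metadata, not because the domain needs it.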

    Read the article

  • How to bundle extension methods requiring configuration in a library

    - by Greg
    Hi, I would like to develop a library that I can re-use to add various methods involved in navigating/searching through a graph (nodes/relationships, or if you like vertices/edges). The generic requirements would be:

        - There are existing classes in the main project that already implement the equivalent of the graph class (which contains the lists of nodes/relationships), the node class and the relationship class (which links nodes together) - the main project likely already has persistence mechanisms for the info (e.g. these classes might be built using Entity Framework for persistence).
        - Methods would need to be added to each of these 3 classes: (a) graph class - methods like "search all nodes", (b) node class - methods such as "find all children to depth i", (c) relationship class - methods like "return relationship type", "get parent node", "get child node".
        - I assume there would be a need to inform the library with the extending methods of the class names for the graph/node/relationship classes (as different projects might use different names). To some extent it would need to work like a generic collection does (where you pass the classes to the collection so it knows what they are).
        - There needs to be a way to inform the library of which node property to use for equality checks (e.g. if it were a graph of webpages, the equality field to use might be the URI path).

    I'm assuming that using abstract base classes wouldn't really work, as this would tie usage down to the same persistence approach and the same class names, etc. Whereas really I want to be able to, for a project that has "graph-like" characteristics, add graph searching/walking methods to it.
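    One shape this could take (a sketch - INode, IRelationship, Descendants and the rest are invented names for illustration) is to ship the library as a set of small generic interfaces plus extension methods, so the host project keeps its own EF classes and persistence and only has to implement the interfaces and, where needed, hand in an equality comparer:

        using System.Collections.Generic;
        using System.Linq;

        public interface INode<TKey>
        {
            TKey Key { get; }
        }

        public interface IRelationship<TNode>
        {
            TNode Parent { get; }
            TNode Child { get; }
            string RelationshipType { get; }
        }

        public interface IGraph<TKey, TNode, TRel>
            where TNode : INode<TKey>
            where TRel : IRelationship<TNode>
        {
            IEnumerable<TNode> Nodes { get; }
            IEnumerable<TRel> Relationships { get; }
        }

        public static class GraphExtensions
        {
            // "Find all children to depth i", walking level by level from the root.
            // The comparer is how the host project says "equality means same URI path", etc.
            public static IEnumerable<TNode> Descendants<TKey, TNode, TRel>(
                this IGraph<TKey, TNode, TRel> graph,
                TNode root,
                int depth,
                IEqualityComparer<TNode> comparer = null)
                where TNode : INode<TKey>
                where TRel : IRelationship<TNode>
            {
                comparer = comparer ?? EqualityComparer<TNode>.Default;
                var frontier = new List<TNode> { root };
                for (var level = 0; level < depth; level++)
                {
                    // ToList() forces evaluation before frontier is reassigned.
                    frontier = graph.Relationships
                        .Where(r => frontier.Contains(r.Parent, comparer))
                        .Select(r => r.Child)
                        .ToList();
                    foreach (var child in frontier)
                        yield return child;
                }
            }
        }

    Because the host classes only implement interfaces, the same extension methods work whether the storage is Entity Framework, NHibernate or an in-memory test double; a visited set for cycle protection would be the obvious next addition for real graphs.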

    Read the article

  • With this generics code why am I getting "Argument 1: cannot convert from 'ToplogyLibrary.RelationshipBase<TNode>' to 'TRelationship'"?

    - by Greg
    Hi, anyone see why I'm getting an "Argument 1: cannot convert from 'ToplogyLibrary.RelationshipBase<TNode>' to 'TRelationship'" in the code below, in CreateRelationship()?

        public class TopologyBase<TKey, TNode, TRelationship>
            where TNode : NodeBase<TKey>, new()
            where TRelationship : RelationshipBase<TKey>, new()
        {
            // Properties
            public Dictionary<TKey, TNode> Nodes { get; private set; }
            public List<TRelationship> Relationships { get; private set; }

            // Constructors
            protected TopologyBase()
            {
                Nodes = new Dictionary<TKey, TNode>();
                Relationships = new List<TRelationship>();
            }

            // Methods
            public TNode CreateNode(TKey key)
            {
                var node = new TNode { Key = key };
                Nodes.Add(node.Key, node);
                return node;
            }

            public void CreateRelationship(TNode parent, TNode child)
            {
                // Validation
                if (!Nodes.ContainsKey(parent.Key) || !Nodes.ContainsKey(child.Key))
                {
                    throw new ApplicationException("Can not create relationship as either parent or child was not in the graph: Parent:" + parent.Key + ", Child:" + child.Key);
                }

                // Add Relationship
                var r = new RelationshipBase<TNode>();
                r.Parent = parent;
                r.Child = child;
                Relationships.Add(r); // *** HERE *** "Argument 1: cannot convert from 'ToplogyLibrary.RelationshipBase<TNode>' to 'TRelationship'"
            }
        }

        public class RelationshipBase<TNode>
        {
            public TNode Parent { get; set; }
            public TNode Child { get; set; }
        }

        public class NodeBase<T>
        {
            public T Key { get; set; }
            public NodeBase() { }
            public NodeBase(T key) { Key = key; }
        }
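    The conversion fails because the method news up the concrete RelationshipBase<TNode> and then tries to put it into a List<TRelationship>: the compiler only knows that TRelationship derives from RelationshipBase, not the reverse. A sketch of one way to make it compile (assuming the intent is that relationships connect nodes, so the constraint changes from RelationshipBase<TKey> to RelationshipBase<TNode>; NodeBase and RelationshipBase stay as defined in the question, CreateNode is unchanged):

        using System;
        using System.Collections.Generic;

        public class TopologyBase<TKey, TNode, TRelationship>
            where TNode : NodeBase<TKey>, new()
            where TRelationship : RelationshipBase<TNode>, new()   // <TNode>, not <TKey>
        {
            public Dictionary<TKey, TNode> Nodes { get; private set; }
            public List<TRelationship> Relationships { get; private set; }

            protected TopologyBase()
            {
                Nodes = new Dictionary<TKey, TNode>();
                Relationships = new List<TRelationship>();
            }

            public TRelationship CreateRelationship(TNode parent, TNode child)
            {
                if (!Nodes.ContainsKey(parent.Key) || !Nodes.ContainsKey(child.Key))
                    throw new ApplicationException("Parent or child is not in the graph.");

                // Instantiate the type parameter itself; the new() constraint allows it,
                // and a TRelationship always fits into List<TRelationship>.
                var r = new TRelationship { Parent = parent, Child = child };
                Relationships.Add(r);
                return r;
            }
        }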

    Read the article

  • Pecking order of pigeons?

    - by sc_ray
    I was going through problems on graph theory posted by Prof. Ericksson from my alma mater and came across this rather unique question about pigeons and their innate tendency to form pecking orders. The question goes as follows:

        Whenever groups of pigeons gather, they instinctively establish a pecking order. For any pair of pigeons, one pigeon always pecks the other, driving it away from food or potential mates. The same pair of pigeons always chooses the same pecking order, even after years of separation, no matter what other pigeons are around. Surprisingly, the overall pecking order can contain cycles - for example, pigeon A pecks pigeon B, which pecks pigeon C, which pecks pigeon A. Prove that any finite set of pigeons can be arranged in a row from left to right so that every pigeon pecks the pigeon immediately to its left.

    Since this is a question on graph theory, the first thing that crossed my mind was that it is just asking for a topological sort of the graph of relationships (the relationships being the pecking order). What made this a little more complex was the fact that there can be cyclic relationships between the pigeons. If we have a cyclic dependency as follows:

        A-B-C-A, where A pecks B, B pecks C, and C goes back and pecks A

    and we represent it in the way suggested by the problem, we get something like:

        C B A

    But the above row ordering does not factor in the pecking order between C and A. I had another idea of solving it by mathematical induction, where the base case is two pigeons arranged according to their pecking order, assuming the pecking-order arrangement is valid for n pigeons and then proving it to be true for n+1 pigeons. I am not sure if I am going down the wrong track here. Some insights into how I should be analyzing this problem would be helpful. Thanks
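    The induction does go through, cycles and all, because the row only constrains adjacent pairs. A sketch of the induction step written as code (assuming pecks(a, b) is true exactly when a pecks b, and that for any two distinct pigeons exactly one direction holds - this is the standard argument that every tournament has a Hamiltonian path):

        using System;
        using System.Collections.Generic;

        public static class PeckingOrder
        {
            // Returns a row in which every pigeon pecks the pigeon immediately to its left.
            public static List<T> Arrange<T>(IEnumerable<T> pigeons, Func<T, T, bool> pecks)
            {
                var row = new List<T>();   // invariant: row[i + 1] pecks row[i] for every i
                foreach (var p in pigeons)
                {
                    // Leftmost pigeon that pecks the newcomer, if any.
                    var i = row.FindIndex(q => pecks(q, p));
                    if (i < 0)
                        row.Add(p);        // nobody pecks p, so p pecks the current rightmost pigeon
                    else
                        row.Insert(i, p);  // row[i] pecks p, and (when i > 0) p pecks row[i - 1]
                }
                return row;
            }
        }

    Feeding it the cyclic example (A pecks B, B pecks C, C pecks A) produces the row C B A, which is perfectly valid: each pigeon pecks its immediate left neighbour, and the C/A edge simply never has to be represented. Turning the invariant in the comments into prose is exactly the induction proof.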

    Read the article

  • SQL Database Schema Design For Large 3 Billion Relationship Database.

    - by K-Bell
    Get your geek on. Can you solve this? I am designing a products database for SQL Server 2008 R2 Ed. (not Enterprise Ed.) that will be used to store custom product configurations for over 30,000 distinct products. The database will have up to 500 users at a time. Here is the design problem: each Product has a collection of Parts (up to 50 parts per product). So if I have 30,000 Products and each of them can have up to 50 Parts, that's 1.5 million distinct Product-to-Part relationships, or as an equation:

        30,000 (Products) x 50 (Parts) = 1.5 million Product-to-Part records

    And if each Part can have up to 2,000 finish options (a finish is a paint color; only one finish will be selected by a user at run-time, and the 2,000 finish options I need to store are the allowed options for a specific part on a specific product), then 1.5 million distinct product-to-part records, each with up to 2,000 finishes, is 3 billion allowable product-to-part-to-finish relationships, or as an equation:

        1.5 million (Product-to-Part records) x 2,000 (Finishes) = 3 billion Product-to-Part-to-Finish records

    How can I design this database so that I can execute fast and efficient queries for a specific product and return its list of Parts and all the allowable Finishes for each part, without 3 billion Product-to-Part-to-Finish records? Read time is more important than write time. Please post your thoughts/suggestions if you have experience with large databases. Thanks!
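    One design that often collapses this kind of explosion (a sketch; it assumes Product, Part and Finish tables already exist and, crucially, that many product/part combinations share the same list of allowed finishes) is to store each distinct allowed-finish list once as a reusable set and have the product-part rows point at it:

        -- A reusable "allowed finishes" set, shared by any number of product/part rows.
        CREATE TABLE FinishSet (
            FinishSetId INT NOT NULL PRIMARY KEY
        );

        CREATE TABLE FinishSetItem (
            FinishSetId INT NOT NULL REFERENCES FinishSet (FinishSetId),
            FinishId    INT NOT NULL REFERENCES Finish (FinishId),
            PRIMARY KEY (FinishSetId, FinishId)
        );

        -- 1.5 million rows, each carrying a pointer to its allowed-finish set.
        CREATE TABLE ProductPart (
            ProductId   INT NOT NULL REFERENCES Product (ProductId),
            PartId      INT NOT NULL REFERENCES Part (PartId),
            FinishSetId INT NOT NULL REFERENCES FinishSet (FinishSetId),
            PRIMARY KEY (ProductId, PartId)
        );

        -- Parts and allowable finishes for one product.
        SELECT pp.PartId, fsi.FinishId
        FROM ProductPart pp
        JOIN FinishSetItem fsi ON fsi.FinishSetId = pp.FinishSetId
        WHERE pp.ProductId = @ProductId;

    If the allowed-finish lists turn out to be nearly all unique, the set table degenerates back towards the 3 billion rows, and the conversation shifts to partitioning, page compression and covering indexes instead.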

    Read the article

  • What software is available to keep track of hundreds of servers?

    - by djangofan
    What good software is available (free or not free) to help me keep track of information relating to hundreds of servers - their relationships to each other (parent/child, category, type) and information on connecting to them - and possibly show a picture or grid of some kind that lets me report these relationships and key information to my supervisor? I am trying to avoid the "spreadsheet solution" or "Visio solution" because I want to share this information and make changes together with other people on my server team. In other words, the solution I am looking for is a cross between a spreadsheet and Visio, providing both graphing and configuration information WITHOUT monitoring, and in a consistent format.

    Read the article

  • Expanding the Oracle Enterprise Repository with functional documentation by Marc Kuijpers

    - by JuergenKress
    Introduction. Have you ever experienced the challenge of mapping both your functional and technical assets in one software package? Finding a software package that is able to describe the metadata about these assets and their mutual relationships? And if you found the correct software package, was it maintainable? The Oracle Enterprise Repository (OER) is a powerful SOA repository. Its core task is to map and visualize the interaction between technical assets generated by the SOA Suite and OSB. However, OER can be configured to contain not only these technical assets but also functional assets, i.e. functional designs, use cases and a logical data model. Now that's interesting! OER is able to show all the assets in your system and, if necessary, zoom in on one of the assets and their mutual relationships (Figure 1). This opens the door to a set of powerful features, e.g.:

        - Impact analysis: if a functional design is adjusted, which other functional designs and use cases do I need to adjust?
        - Traceability: if a web service generates an error, in which functional and technical designs is that web service described?

    This sounds great, but how do we get all the functional and technical documents into OER, and how are we going to keep this repository up to date? Read the full article. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Formalizing a requirements spec written in narrative English

    - by ProfK
    I have a fairly technical functional-requirements spec, expressed in English prose, produced by my project manager. It is structured as a collection of UI tabs, where the requirements for each tab are expressed as a list of UI fields and a list of business rules for the tab. Most business rules are for UI fields on a tab, e.g.:

        a) Must be alphanumeric, max length 20.
        b) Must be a dropdown, with values from table x.
        c) Is mandatory.
        d) Is mandatory under certain conditions, e.g. another field is populated, or has a specific value.

    Then other business rules get a little more complex. The spec is for a job application, so the central business object (table) is the Applicant, and we have several other tables with one-to-many relationships with Applicant, such as Degree, HighSchool, PreviousEmployer, Diploma, etc.

        e) One such complex rule says a status field can only be assigned a certain value if a record exists on the many side of at least one of those relationships, e.g. the Applicant has at least one HighSchool or at least one Diploma record.

    I am looking for advice on how to codify these requirements into a more structured specification defined in terms of tables, fields, and relationships, especially for the conditional rules for fields and for the presence of related records. Any suggestions and advice will be most welcome, but I would be overjoyed if I could find an already defined system or structure for expressing things like this.
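    In the absence of a ready-made structure, one crude but workable sketch (T-SQL flavoured; every name here is invented for illustration) is a rule-metadata table keyed by tab/table/field, with a rule type and a value column whose interpretation depends on the type:

        CREATE TABLE FieldRule (
            RuleId    INT IDENTITY PRIMARY KEY,
            TabName   VARCHAR(50)  NOT NULL,
            TableName VARCHAR(50)  NOT NULL,
            FieldName VARCHAR(50)  NOT NULL,
            RuleType  VARCHAR(30)  NOT NULL,  -- 'MaxLength', 'LookupTable', 'Mandatory', 'MandatoryIf', 'RequiresChildIn'
            RuleValue VARCHAR(200) NULL       -- '20', 'table x', a field condition, or 'HighSchool|Diploma'
        );

        -- Rule (e): Status may only take a given value if at least one HighSchool or Diploma record exists.
        INSERT INTO FieldRule (TabName, TableName, FieldName, RuleType, RuleValue)
        VALUES ('Education', 'Applicant', 'Status', 'RequiresChildIn', 'HighSchool|Diploma');

    The simple rules (a)-(c) sit comfortably in such a table; the conditional ones (d) and (e) are where it runs out of steam, and a small expression grammar or a decision table per rule is usually the next step.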

    Read the article

  • Watch @marcorus and @ferrarialberto sessions online #teched #msteched #tee2012

    - by Marco Russo (SQLBI)
    In June I participated in two TechEd editions (North America and Europe). Alberto and I delivered a pre-conference session and two sessions about Tabular. Both conferences provide recorded sessions freely available on Channel 9, so you can compare which one was delivered best! If you have to choose between the two versions, consider that in North America we received more questions during and after the session (still recording), increasing the interaction, whereas in Europe questions usually came after the session finished (so no recording is available). If you're curious, watch both and let me know which version you prefer, especially for Multidimensional vs. Tabular!

        - BISM: Multidimensional vs. Tabular (TechEd North America 2012)
        - BISM: Multidimensional vs. Tabular (TechEd Europe 2012)
        - Many-to-Many Relationships in BISM Tabular (TechEd North America 2012)
        - Many-to-Many Relationships in BISM Tabular (TechEd Europe 2012)

    If you are interested in learning SSAS Tabular, don't miss the next SSAS Tabular Workshop online on September 3-4, 2012. We are also planning dates for another roadshow in Europe this fall, and I'm happy to announce we'll have two dates in Germany, too. More updates in the coming weeks.

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >