Search Results

Search found 13501 results on 541 pages for 'dimensional model'.


  • Deploying Data Mining Models using Model Export and Import, Part 2

    - by [email protected]
    In my last post, Deploying Data Mining Models using Model Export and Import, we explored using DBMS_DATA_MINING.EXPORT_MODEL and DBMS_DATA_MINING.IMPORT_MODEL to move a model from one system to another. In this post, we'll look at two distributed scenarios that make use of this capability, plus a tip for easily moving models from one machine to another using only Oracle Database, rather than an external file transport mechanism such as FTP.

In the first scenario, consider a company with geographically distributed business units, each collecting and managing its data locally for the products it sells. Each business unit has in-house data analysts who build models to predict which products to recommend to customers in their space. A central telemarketing business unit also uses these models to score new customers locally using data collected over the phone. Since the models recommend different products, each customer is scored using each model. This is depicted in Figure 1.

Figure 1: Target instance importing multiple remote models for local scoring

In the second scenario, consider multiple hospitals that collect data on patients with certain types of cancer. The data collection is standardized, so each hospital collects the same patient demographic and other health/tumor data, along with the clinical diagnosis. Instead of each hospital building its own models, the data is pooled at a central data analysis lab where a predictive model is built. Once completed, the model is distributed to hospitals, clinics, and doctors' offices, which can score patient data locally.

Figure 2: Multiple target instances importing the same model from a source instance for local scoring

Since this blog focuses on model export and import, we'll only discuss what is necessary to move a model from one database to another. Here, we use the package DBMS_FILE_TRANSFER, which can move files between Oracle databases. The script is fairly straightforward, but requires setting up a database link and directory objects. We saw how to create directory objects in the previous post. To create a database link to the source database from the target, we can use, for example:

create database link SOURCE1_LINK connect to <schema> identified by <password> using 'SOURCE1';

Note that 'SOURCE1' refers to the service name of the remote database entry in your tnsnames.ora file.

From SQL*Plus, first connect to the remote database and export the model. Note that the model_file_name does not include the .dmp extension, because EXPORT_MODEL appends "01" to this name. Next, connect to the local database, invoke DBMS_FILE_TRANSFER.GET_FILE, and import the model. Note that "01" is eliminated in the target system file name.

connect <source_schema>/<password>@SOURCE1_LINK;
BEGIN
  DBMS_DATA_MINING.EXPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                 'MY_SOURCE_DIR_OBJECT',
                                 'name =''MY_MINING_MODEL''');
END;

connect <target_schema>/<password>;
BEGIN
  DBMS_FILE_TRANSFER.GET_FILE ('MY_SOURCE_DIR_OBJECT',
                               'EXPORT_FILE_NAME' || '01.dmp',
                               'SOURCE1_LINK',
                               'MY_TARGET_DIR_OBJECT',
                               'EXPORT_FILE_NAME' || '.dmp');
  DBMS_DATA_MINING.IMPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                 'MY_TARGET_DIR_OBJECT');
END;

To clean up afterward, you may want to drop the exported .dmp file at the source and the transferred file at the target. For example:

utl_file.fremove('&directory_name', '&model_file_name' || '.dmp');

    Read the article

  • Deploying Data Mining Models using Model Export and Import

    - by [email protected]
    In this post, we'll take a look at how Oracle Data Mining facilitates model deployment. After building and testing models, a next step is often putting your data mining model into a production system -- referred to as model deployment. The ability to move data mining model(s) easily into a production system can greatly speed model deployment and reduce the overall cost. Since Oracle Data Mining provides models as first-class database objects, models can be manipulated using familiar database techniques and technology. For example, one or more models can be exported to a flat file, similar to a database table dump file (.dmp). This file can be moved to a different instance of Oracle Database EE, and then imported. All methods for exporting and importing models are based on Oracle Data Pump technology and found in the DBMS_DATA_MINING package.

Before performing the actual export or import, a directory object must be created. A directory object is a logical name in the database for a physical directory on the host computer. Read/write access to a directory object is necessary to access the host computer file system from within Oracle Database. For our example, we'll work in the DMUSER schema. First, DMUSER requires the privilege to create any directory. This is often granted through the sysdba account.

grant create any directory to dmuser;

Now, DMUSER can create the directory object specifying the path where the exported model file (.dmp) should be placed. In this case, on a Linux machine, we have the directory /scratch/oracle.

CREATE OR REPLACE DIRECTORY dmdir AS '/scratch/oracle';

If you aren't sure of the exact name of the model or models to export, you can find the list of models using the following query:

select model_name from user_mining_models;

There are several options when exporting models. We can export a single model, multiple models, or all models in a schema using the following procedure calls:

BEGIN
  DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODEL.dmp', 'dmdir', 'name =''MY_DT_MODEL''');
END;

BEGIN
  DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODELS.dmp', 'dmdir',
             'name IN (''MY_DT_MODEL'',''MY_KM_MODEL'')');
END;

BEGIN
  DBMS_DATA_MINING.EXPORT_MODEL ('ALL_DMUSER_MODELS.dmp', 'dmdir');
END;

A .dmp file can be imported into another schema or database using the following procedure call, for example:

BEGIN
  DBMS_DATA_MINING.IMPORT_MODEL ('MY_MODELS.dmp', 'dmdir');
END;

As with models from any data mining tool, when moving a model from one environment to another, care needs to be taken to ensure the transformations that prepare the data for model building are matched (with appropriate parameters and statistics) in the system where the model is deployed. Oracle Data Mining provides automatic data preparation (ADP) and embedded data preparation (EDP) to reduce, or possibly eliminate, the need to explicitly transport transformations with the model. In the case of ADP, ODM automatically prepares the data and includes the necessary transformations in the model itself. In the case of EDP, users can associate their own transformations with attributes of a model. These transformations are automatically applied when applying the model to data, i.e., scoring. Exporting and importing a model with ADP or EDP results in these transformations being immediately available with the model in the production system.

    Read the article

  • PHP: Aggregate Model Classes or Uber Model Classes?

    - by sunwukung
    In many of the discussions regarding the M in MVC (sidestepping ORM controversies for a moment), I commonly see Model classes described as object representations of table data (be that an Active Record, Table Gateway, Row Gateway or Domain Model/Mapper). Martin Fowler warns against the development of an anemic domain model, i.e. a class that is nothing more than a wrapper for CRUD functionality.

I've been working on an MVC application for a couple of months now. The DBAL in the application started out simple (on account of my understanding - oh, the benefits of hindsight), and is organised so that Controllers invoke Business Logic classes, which in turn access the database via DAO/Transaction Scripts pertinent to the task at hand. There are a few "Entity" classes that aggregate these DAO objects to provide a convenient CRUD wrapper, but also embody some of the "behaviour" of that domain concept (for example, a user - since it's easy to isolate).

Looking at some of the code, and thinking about refactoring it into a Rich Domain Model, it occurred to me that were I to wrap the CRUD routines and behaviour of, say, a Company into a single "Model" class, that would be a sizeable class.

So, my question is this: do Models represent domain objects, business logic, service layers, or all of the above combined? How do you go about defining the responsibilities for these components?

    Read the article

  • Design Book–Dimensional or No Dimensional, that is..the question

    - by drsql
    So, it is right there in the title of the book "Relational Database Design" etc. (the title is kinda long :) But as I consider what to cover and, conversely, what not to cover, dimensional design inevitably pops up, so I am considering including it in the book. One thing I try to do is to cover topics to a level where you can start using them immediately, and I am not sure that I could get deep enough coverage of the subject to do that. I don't really feel like it has to be the definitive source...(read more)

    Read the article

  • Rails model belongs to model that belongs to model, but I want to use another name

    - by Micke
    Hello. This may be a stupid question, but I'm just starting to learn Rails, which is why I'm asking. I have one model called "User" which handles all the users in my community. Now I want to add a guestbook to every user. So I created a model called "user_guestbook" and inserted this into the new model:

belongs_to :user

and this into the user model:

has_one :user_guestbook, :as => :guestbook

The next thing I did was to add a new model to handle the posts inside the guestbook. I named it "guestbook_posts" and added this code into the new model:

belongs_to :user_guestbook

And this into the user_guestbook model:

has_many :guestbook_posts, :as => :posts

What I wanted to achieve was to be able to fetch all the posts for a certain user by:

@user = User.find(1)
puts @user.guestbook.posts

But it doesn't work for me. I don't know what I am doing wrong, and if there is any easier way to do this, please tell me. Just to note, I have created some migrations for it as follows:

create_user_guestbook:
t.integer :user_id

create_guestbook_posts:
t.integer :guestbook_id
t.integer :from_user
t.string :post

Thanks in advance!

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
    Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments. In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE. In this segment, I will lay the groundwork for the modeling concepts. First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

Oracle BI EE Query Cycle

The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources. There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source or, most of the time, are spread across various types of sources.

Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves. Most of them create queries without knowing it, by picking a dashboard page and some filters. Others create their own analyses by selecting metrics and dimensional attributes, and possibly creating additional calculations.

The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business-focused measures and dimensional attributes that business people can use in their analyses and dashboards. After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

The CEIM is a model that controls the processing of the BI Server. It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis. It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules. The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL. The API simply uses ODBC or JDBC to pass the query to the BI Server. Presentation services writes the LSQL query in terms of the simplified objects presented to the users. The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources. For example, the Logical SQL below was rewritten into the physical SQL that follows it, for an Oracle 11g database.
Logical SQL:

SELECT "D0 Time"."T02 Per Name Month" saw_0,
       "D4 Product"."P01  Product" saw_1,
       "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
FROM "Sample Sales"
ORDER BY saw_0, saw_1

Physical SQL:

WITH SAWITH0 AS (
  select T986.Per_Name_Month as c1,
         T879.Prod_Dsc as c2,
         sum(T835.Units) as c3,
         T879.Prod_Key as c4
  from Product T879 /* A05 Product */ ,
       Time_Mth T986 /* A08 Time Mth */ ,
       FactsRev T835 /* A11 Revenue (Billed Time Join) */
  where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
  group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
)
select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
from SAWITH0
order by c1, c2

Probably everybody reading this blog can write SQL or MDX. However, the trick in designing the CEIM is that you are modeling a query-generation factory. Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests. This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

The Structure of the Common Enterprise Information Model (CEIM)

The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result. The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below.

Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request. When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries. That's the left-to-right flow in the diagram below. When the results come back from the data source queries, the right-to-left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

Business Model

Think of the business model as the heart of the CEIM you are designing. This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole. It also provides the baseline business-friendly names and user-readable dictionary. For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models.

Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well. This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next. The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model.

Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time. This is important for federation, where a given logical object can map to several different physical objects in different databases. It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products.

Your requirements process with your user community will mostly affect the business model. This is where you will define most of the things they specifically ask for, such as metric definitions. For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer.

Physical Model

The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries. Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time.

Each physical data source will have its own physical model, or "database" object in the CEIM. The shape of each physical model matches the shape of its physical source. In other words, if the source is normalized relational, the physical model will mimic that normalized shape. If it is a hypercube, the physical model will have a hypercube shape. If it is a flat file, it will have a denormalized tabular shape.

To aid in query optimization, the physical layer also tracks the specifics of the database brand and release. This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer. For most data sources, native APIs are used to further optimize performance and functionality.

The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics. This encapsulation is another enabler of packaged BI applications and federation. It is also key to hiding the complex shapes and relationships in the physical sources from the end users. Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database. The only difference from the user's point of view is that the second query added a more detailed dimension level column - everything else was the same.

Mappings

Within the Business Model and Mapping layer, the mappings provide the binding from each logical column and join in the dimensional business model to each of the objects that can provide its data in the physical layer.
When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used. These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources. These mappings are usually the most sophisticated part of the CEIM.

Presentation

You might think of the presentation layer as a set of very simple relational-like views into the business model. Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns. For business users, presentation services interprets these as subject areas, folders and columns, respectively. (Note that in 10g, subject areas were called presentation catalogs in the CEIM. In this blog, I will stick to 11g terminology.) Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio).

The purpose of the presentation layer is to specialize the business model for different categories of users. Based on a user's role, they will be restricted to specific subject areas, tables and columns for security. The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users. Customized names and descriptions can be used to override the business model names for a specific audience. Variables in the object names can be used for localization.

For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables. The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer. In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

Other Model Objects

There are some objects that apply to multiple layers. These include security-related objects, such as application roles, users, data filters, and query limits (governors). There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user-session basis. Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.

The Query Factory

At this point, you should have a grasp on the query factory concept. When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand.

The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.
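
Since the post notes that clients simply pass Logical SQL to the BI Server over ODBC or JDBC, you can experiment with LSQL outside of presentation services. Below is a minimal, hypothetical C# sketch - the DSN name and credentials are placeholders, not details from the post - that submits the Logical SQL example above through System.Data.Odbc:

using System;
using System.Data.Odbc;

class LsqlSmokeTest
{
    static void Main()
    {
        // "BIServerDSN" is a placeholder for an ODBC DSN configured
        // against the Oracle BI Server (an assumption, not from the post).
        var builder = new OdbcConnectionStringBuilder { Dsn = "BIServerDSN" };
        builder["Uid"] = "weblogic";
        builder["Pwd"] = "password";

        using (var conn = new OdbcConnection(builder.ConnectionString))
        using (var cmd = conn.CreateCommand())
        {
            conn.Open();
            // Logical SQL references presentation-layer objects only;
            // the BI Server rewrites it into physical SQL at run time.
            cmd.CommandText =
                "SELECT \"D0 Time\".\"T02 Per Name Month\" saw_0, " +
                "\"D4 Product\".\"P01  Product\" saw_1, " +
                "\"F2 Units\".\"2-01  Billed Qty  (Sum All)\" saw_2 " +
                "FROM \"Sample Sales\" ORDER BY saw_0, saw_1";

            using (OdbcDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} | {1} | {2}",
                        reader[0], reader[1], reader[2]);
                }
            }
        }
    }
}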

    Read the article

  • Why model => model.Reason_ID turns into model => Convert(model.Reason_ID)

    - by er-v
    I have my own HTML helper extension, which I use this way:

<%=Html.LocalizableLabelFor(model => model.Reason_ID, Register.PurchaseReason) %>

It is declared like this:

public static MvcHtmlString LocalizableLabelFor<T>(this HtmlHelper<T> helper, Expression<Func<T, object>> expr, string captionValue) where T : class
{
    return helper.LocalizableLabelFor(ExpressionHelper.GetExpressionText(expr), captionValue);
}

But when I open it in the debugger, expr.Body.ToString() shows me Convert(model.Reason_ID), when it should show model.Reason_ID. That's a big problem, because ExpressionHelper.GetExpressionText(expr) returns an empty string. What strange magic is that? How can I get rid of it?
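
(For context: this happens because Reason_ID is a value type, and binding a value-typed member to Expression<Func<T, object>> makes the compiler insert a boxing conversion, which shows up as a Convert(...) node. One common remedy - sketched below as an illustration, not necessarily the fix the poster adopted - is to unwrap the UnaryExpression before asking for the expression text; another is to make the helper generic in the value type, e.g. Expression<Func<T, TValue>>, so no conversion is inserted at all:)

using System;
using System.Linq.Expressions;

static class ExpressionUtil
{
    // Strips the boxing Convert(...) node that the compiler adds when a
    // value-typed member is used with Expression<Func<T, object>>.
    public static MemberExpression GetMemberExpression<T>(Expression<Func<T, object>> expr)
    {
        Expression body = expr.Body;
        if (body.NodeType == ExpressionType.Convert ||
            body.NodeType == ExpressionType.ConvertChecked)
        {
            body = ((UnaryExpression)body).Operand;  // unwrap the boxing node
        }
        return (MemberExpression)body;
    }
}

// Usage (MyModel is a hypothetical type):
//   ExpressionUtil.GetMemberExpression<MyModel>(m => m.Reason_ID).Member.Name
//   returns "Reason_ID".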

    Read the article

  • SQL SERVER – Log File Growing for Model Database – model Database Log File Grew Too Big

    - by pinaldave
    After reading my earlier article SQL SERVER – master Database Log File Grew Too Big, I received an email from another reader asking why the log file of the model database grows every day when he is not carrying out any operation in the model database. As per the email, he was absolutely sure that he was doing nothing on his model database; he had used policy management to catch any T-SQL operation in the model database and there was none. This was indeed surprising to me. I sent a request for access to his server, which he happily agreed to, and within a minute we figured out the issue: he was taking a full backup of the model database every night. When I explained this to him, he did not believe it, so I quickly wrote down the following script. The results before and after the usage of the script were very clear.

What is the model database? The model database is used as the template for all databases created on an instance of SQL Server. Any object you create in the model database will be automatically created in subsequent user databases created on the server.

NOTE: Do not run this in a production environment. During the demo, the model database was in full recovery mode and only full backup operations were performed (no log backups).

Before Backup Script

Backup Script in loop:

DECLARE @FLAG INT
SET @FLAG = 1
WHILE(@FLAG < 1000)
BEGIN
BACKUP DATABASE [model] TO DISK = N'D:\model.bak'
SET @FLAG = @FLAG + 1
END
GO

After Backup Script

Why did this happen? The model database was in full recovery mode, and taking a full backup is a logged operation. As there were no log backups and only full backups were performed on the model database, the size of the log file kept growing.

Resolution: Change the recovery model of the model database from "Full" to "Simple", and take a full backup of the model database only when you change something in the model database.

Let me know if you have encountered a situation like this. If so, how did you resolve it? It will be interesting to know about your experience.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Data Modeling Resources

    - by Dejan Sarka
    You can find many different data modeling resources. It is impossible to list all of them. I selected only the most valuable ones for me and, of course, the ones I contributed to.

Books

Chris J. Date: An Introduction to Database Systems – IMO a "must" to understand the relational model correctly.
Terry Halpin, Tony Morgan: Information Modeling and Relational Databases – meet the object-role modeling leaders.
Chris J. Date, Nikos Lorentzos and Hugh Darwen: Time and Relational Theory, Second Edition: Temporal Databases in the Relational Model and SQL – all the theory needed to manage temporal data.
Louis Davidson, Jessica M. Moss: Pro SQL Server 2012 Relational Database Design and Implementation – the best SQL Server focused data modeling book I know, by two of my friends.
Dejan Sarka, et al.: MCITP Self-Paced Training Kit (Exam 70-441): Designing Database Solutions by Using Microsoft® SQL Server™ 2005 – SQL Server 2005 data modeling training kit. Most of the text is still valid for SQL Server 2008, 2008 R2, 2012 and 2014.
Itzik Ben-Gan, Lubor Kollar, Dejan Sarka, Steve Kass: Inside Microsoft SQL Server 2008 T-SQL Querying – Steve wrote a chapter with mathematical background, and I added a chapter with a theoretical introduction to the relational model.
Itzik Ben-Gan, Dejan Sarka, Roger Wolter, Greg Low, Ed Katibah, Isaac Kunen: Inside Microsoft SQL Server 2008 T-SQL Programming – I added three chapters with a theoretical introduction and practical solutions for user-defined data types, dynamic schema and temporal data.
Dejan Sarka, Matija Lah, Grega Jerkic: Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft SQL Server 2012 – my first two chapters are about data warehouse design and implementation.

Courses

Data Modeling Essentials – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar format, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well.
Logical and Physical Modeling for Analytical Applications – an online course I wrote for Pluralsight.
Working with Temporal data in SQL Server – my latest Pluralsight course, where besides theory and implementation I introduce many original ways to optimize temporal queries.

Forthcoming presentations

SQL Bits 12, July 17th – 19th, Telford, UK – I have a full-day pre-conference seminar, Advanced Data Modeling Topics, there.

    Read the article

  • How to use the client object model with SharePoint2010

    - by ybbest
    In SharePoint 2010, you can use the client object model to communicate with the SharePoint server. Today, I'd like to show you how to achieve this using a C# console application. You can download the solution here.

1. Create a console application in Visual Studio and add the following references to the project.

2. Insert your code as below:

ClientContext context = new ClientContext("http://demo2010a");
Web currentWeb = context.Web;
context.Load(currentWeb, web => web.Title);
context.ExecuteQuery();
Console.WriteLine(currentWeb.Title);
Console.ReadLine();

3. Run your code, and you will get the web title displayed as shown below.

Note: If you get the following errors, you need to change your target framework from .NET Framework 4 Client Profile to .NET Framework 4 as shown below.
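
(As a small extension of the same idea - a hypothetical sketch, not part of the original post - the identical ClientContext pattern works for other objects, for example enumerating the lists in the web; note that collections must also be loaded before ExecuteQuery is called:)

using System;
using Microsoft.SharePoint.Client;

class Program
{
    static void Main()
    {
        // Same sample server URL as in the post above.
        using (ClientContext context = new ClientContext("http://demo2010a"))
        {
            Web currentWeb = context.Web;
            // Load the list collection, fetching only the Title property.
            context.Load(currentWeb.Lists, lists => lists.Include(l => l.Title));
            context.ExecuteQuery();

            foreach (List list in currentWeb.Lists)
            {
                Console.WriteLine(list.Title);
            }
        }
    }
}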

    Read the article

  • Creating an Entity Data Model using the Model First approach

    - by nikolaosk
    This is going to be the second post in a series of posts regarding Entity Framework and how we can use the new features of Entity Framework version 4.0. You can read the first post here. In order to follow along you must have some knowledge of C# and know what an ORM system is and what kind of problems Entity Framework addresses. It will be handy to know how to work inside the Visual Studio 2010 IDE. I have a post regarding ASP.Net and EntityDataSource; you can read it here. I have 3 more posts on Profiling...(read more)
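
(The excerpt cuts off before the walkthrough, but as a flavor of where model-first ends up: once the designer generates the database and the ObjectContext-derived classes, the model is consumed like any other EF 4 model. A hypothetical sketch - the SchoolEntities context and Course entity are invented for illustration, not taken from the article:)

using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        // SchoolEntities is a hypothetical ObjectContext generated by the
        // Entity Data Model designer in a model-first workflow.
        using (var context = new SchoolEntities())
        {
            // LINQ to Entities query against the conceptual model.
            var courses = from c in context.Courses
                          where c.Credits >= 3
                          orderby c.Title
                          select c;

            foreach (var course in courses)
            {
                Console.WriteLine("{0} ({1} credits)", course.Title, course.Credits);
            }
        }
    }
}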

    Read the article

  • Given the presentation model pattern, is the view, presentation model, or model responsible for adding child views to an existing view at runtime?

    - by Ryan Taylor
    I am building a Flex 4 based application using the presentation model design pattern. This application will have several different components to it as shown in the image below. The MainView and DashboardView will always be visible, and they each have corresponding presentation models and models as necessary. These views are easily created by declaring their MXML in the application root:

<s:HGroup width="100%" height="100%">
    <MainView width="75%" height="100%"/>
    <DashboardView width="25%" height="100%"/>
</s:HGroup>

There will also be many WidgetViewN views that can be added to the DashboardView by the user at runtime through a simple drop down list. This will need to be accomplished via ActionScript. The drop down list should always show which WidgetViewN views have already been added to the DashboardView, therefore some state about which WidgetViewN's have been created needs to be stored. Since the list of available WidgetViewN views, and which ones are added to the DashboardView, also needs to be accessible from other components in the system, I think this needs to be stored in a Model object.

My understanding of the presentation model design pattern is that the view is very lean. It contains as close to zero logic as is practical. The view communicates/binds to the presentation model, which contains all the necessary view logic. The presentation model is effectively an abstract representation of the view, which supports low coupling and eases testability. The presentation model may have one or more models injected in order to display the necessary information. The models themselves contain no view logic whatsoever.

So I have several questions around this design.

Who should be responsible for creating the WidgetViewN components and adding these to the DashboardView? Is this the responsibility of the DashboardView, DashboardPresentationModel, DashboardModel or something else entirely? It seems like the DashboardPresentationModel would be responsible for creating/adding/removing any child views from its display, but how do you do this without passing the DashboardView to the DashboardPresentationModel?

The list of available and visible WidgetViewN components needs to be accessible to a few other components as well. Is it okay for a reference to a WidgetViewN to be stored/referenced in a model?

Are there any good examples of the presentation model pattern online in Flex that also include creating child views at runtime?

    Read the article

  • MVP, WinForms - how to avoid bloated view, presenter and presentation model

    - by MatteS
    When implementing the MVP pattern in WinForms, I often find bloated view interfaces with too many properties, setters and getters. An easy example would be a view with 3 buttons and 7 textboxes, all having value, enabled and visible properties exposed from the view. Add validation results to this, and you could easily end up with an interface with 40-ish properties. Using the Presentation Model, there'll be a model with the same number of properties as well.

How do you easily sync the view and the presentation model without bloated presenter logic that passes all the values back and forth? (With that 80-ish lines of presenter code, imagine what the presenter test that mocks the model and view will look like... 160-ish lines of code just to mock that transfer.)

Is there any framework to handle this without resorting to WinForms databinding? (You might want to use different views than a WinForms view. According to some, this sync should be the presenter's job.) Would you use AutoMapper?

Maybe I'm asking the wrong questions, but it seems to me MVP easily gets bloated without some good solution here.
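
(One frequently suggested direction - sketched below as an illustration with invented names, not an endorsement of any particular framework - is a small reflection-based synchronizer that copies identically named properties between the view interface and the presentation model, so the presenter shrinks to a couple of calls:)

using System;
using System.Reflection;

static class PropertySync
{
    // Copies values of identically named, same-typed readable/writable
    // properties from source to target. A tiny stand-in for a mapping
    // library such as AutoMapper.
    public static void Copy(object source, object target)
    {
        PropertyInfo[] sourceProps = source.GetType().GetProperties();
        foreach (PropertyInfo sp in sourceProps)
        {
            if (!sp.CanRead) continue;
            PropertyInfo tp = target.GetType().GetProperty(sp.Name);
            if (tp != null && tp.CanWrite && tp.PropertyType == sp.PropertyType)
            {
                tp.SetValue(target, sp.GetValue(source, null), null);
            }
        }
    }
}

// In the presenter (hypothetical presentationModel and view objects):
//   PropertySync.Copy(presentationModel, view);   // model -> view
//   PropertySync.Copy(view, presentationModel);   // view -> model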

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers

We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers. Churn, sentiment analysis, identifying customer segments - all things that can be anticipated and hence preconceived and implemented inside applications. Read on for more information!

TM Forum Management World, Nice, France - 18 May 2010

News Facts

To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced its data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent.

Product Details

Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using:

More than 1,300 industry-specific measurements and key performance indicators (KPIs) such as network reliability statistics, provisioning metrics and customer churn propensity.
Embedded OLAP cubes for extremely fast dimensional analysis of business information.
Embedded data mining models for sophisticated trending and predictive analysis.
Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements.

With Oracle Communications Data Model, CSPs can jump-start the implementation of a communications data warehouse in line with communications-industry standards, including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more.

Supporting Quotes

"Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation.

"Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models. They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC.

"The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore cost-effective and flexible, solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum.

Supporting Resources

Oracle Communications
Oracle Communications Data Model Data Sheet
Oracle Communications Data Model Podcast
Oracle Data Warehousing
Oracle Communications on YouTube
Oracle Communications on Delicious
Oracle Communications on Facebook
Oracle Communications on Twitter
Oracle Communications on LinkedIn
Oracle Database on Twitter
The Data Warehouse Insider Blog

    Read the article

  • Model Driven Architecture Approach in programming / modelling

    - by yak
    I know the basics of model driven architecture: it is all about modeling the system you want to create, and generating the core code afterwards. I used CORBA a while ago. The first thing I needed to do was to create an abstract interface (some kind of model of the system I wanted to build) and generate core code later.

But I have a different question: is model driven architecture a broad approach or not? I mean, let's say that I have a (modeling) language in which I want to model an EXISTING system (as opposed to a system I want to CREATE), and then analyze the model of that system and different facts about the modeled abstraction. In this case, can the process I described be considered a model driven architecture approach? I mean, I have the model, but this is the model of an existing system, not of a system to be created.

    Read the article

  • Embedded Model Designing -- top down or bottom up?

    - by Jeff
    I am trying to learn RoR and develop a web app. I have a few models I have thought of for this app, and they are fairly embedded. For example (please excuse my lack of RoR syntax):

Model: textbook
  title:string
  type:string
  has_many :chapters

Model: chapter
  content:text
  has_one :review_section

Model: review_section
  title:string
  has_many :questions
  has_many :answers, :through => :questions

Model: questions
  ...

Model: answers
  ...

My question is, with the example I gave, should I start at the top model (textbook) or the bottom-most (answers)?

    Read the article

  • Are factors such as Intellisense support and strong typing enough to justify the use of an 'Anaemic Domain Model'?

    - by David Osborne
    It's easy to accept that objects should be used in all layers except a layer nominated as a data layer. However, it's just as easy to end up with an 'anaemic domain model' that is just an object representation of data with no real functionality (http://martinfowler.com/bliki/AnemicDomainModel.html). On the other hand, using objects in this fashion brings the benefit of factors such as Intellisense support, strong typing, readability, discoverability, etc. Are these factors strong arguments for an otherwise anaemic domain model?
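
(To make the distinction concrete - an illustrative C# sketch with invented types, not from the question - the anaemic version holds only data, while the rich version moves the behaviour and its invariants onto the object itself; both give you Intellisense and compile-time type checking:)

using System;

// Anaemic: a bag of getters/setters; all logic lives elsewhere.
class AnaemicAccount
{
    public decimal Balance { get; set; }
}

// Rich: the same data plus the domain behaviour and its invariants.
class Account
{
    public decimal Balance { get; private set; }

    public void Withdraw(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount");
        if (amount > Balance)
            throw new InvalidOperationException("Insufficient funds.");
        Balance -= amount;
    }
}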

    Read the article

  • Using design patterns to transform web-service model classes into local model classes and vice versa

    - by Daniil Petrov
    There is a web application built with Play framework 1.2.7. It contains fewer than 10 model classes. The main purpose of the application is lightweight access to a complex remote application (more than 50 model classes). The remote application has its own SOAP API, and we use it for synchronization of data. There is a scheduled job in the web app which makes requests to the remote app. It gets batches of objects from the remote model and populates the corresponding objects of the local model.

Currently, there are two groups of classes - the local model and the remote model (generated from the WSDL schema). It is not allowed to make any modifications to the remote model. Transformations are made in the scheduled job class: when it gets objects from the remote app, it creates local objects.

Recently, it was decided to add the possibility to modify the remote objects. This requires more transformations on our side: we need to transform from the remote to the local model when reading objects, and from the local to the remote model when changing objects. I wonder if it would be possible to use some design patterns to reduce the number of transformations?
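
(One pattern-level answer - a sketch in C# with invented entity names, though the same shape applies to the Java/Play code in the question - is to put both directions of each transformation behind a single bidirectional mapper interface, so every entity pair has exactly one class that owns the mapping:)

// A single place that owns both directions of the transformation
// for one entity pair (the classic Mapper/Assembler idea).
interface IBidirectionalMapper<TRemote, TLocal>
{
    TLocal ToLocal(TRemote remote);
    TRemote ToRemote(TLocal local);
}

// Hypothetical pair: a WSDL-generated PatientDto and a local Patient.
class PatientDto
{
    public int Id { get; set; }
    public string FullName { get; set; }
}

class Patient
{
    public int Id { get; set; }
    public string Name { get; set; }
}

class PatientMapper : IBidirectionalMapper<PatientDto, Patient>
{
    public Patient ToLocal(PatientDto remote)
    {
        return new Patient { Id = remote.Id, Name = remote.FullName };
    }

    public PatientDto ToRemote(Patient local)
    {
        return new PatientDto { Id = local.Id, FullName = local.Name };
    }
}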

    Read the article

  • Waterfall Model (SDLC) vs. Prototyping Model

    The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children's fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character's attributes in correlation with both software development models, a trend appears: the Waterfall Model closely resembles the Tortoise, in that it is typically a slow-moving process broken up into multiple sequential steps that must be executed in a standard linear pattern. The Tortoise can be quoted several times in the story saying "Slow and steady wins the race." This is the perfect mantra for the Waterfall Model, which is often seen as cumbersome and slow-moving.

Waterfall Model Phases

Requirement Analysis & Definition
This phase focuses on defining requirements for a project to be developed and determining whether the project is even feasible. Requirements are collected by analyzing existing systems and functionality in correlation with the needs of the business and the desires of the end users. The desired output of this phase is a list of specific requirements from the business that are to be designed and implemented in the subsequent steps. In addition, this phase is used to determine whether any value will be gained by completing the project.

System Design
This phase focuses primarily on the actual architectural design of a system, and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level so that actual implementation details are decided in the implementation phase. However, major environmental decisions like hardware and platform choices are typically made in this phase. The basic goal of this phase is to design the application at the system level: classes, interfaces, and interactions are defined. Decisions about scalability, distribution and reliability should also be considered. The desired output of this phase is a functional design document that states all of the architectural decisions made in regards to the project, as well as diagrams such as sequence and class diagrams.

Software Design
This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously indicated. The output of this phase is a formal design document.

Implementation / Coding
This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and are integrated as the system is put together as part of a whole system.

Software Integration & Verification
This phase primarily focuses on testing each of the units of code developed, as well as testing the system as a whole. There are two basic types of testing at this phase: unit tests and integration tests. Unit tests are built to test the functionality of a code unit to ensure that it performs its desired task. Integration tests test the system as a whole, because they focus on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that includes tests with expected results and actual results.

System Verification
This phase primarily focuses on testing the system as a whole against the list of project requirements and the desired operating environment.

Operation & Maintenance
This phase primarily focuses on handing the completed project over to the customer so that they can verify that all of their requirements have been met based on their original requirements. This phase will also validate the correctness of their requirements and whether any changes need to be made. In addition, any problems not resolved in the previous phase will be handled in this section.

The Waterfall Model's linear and sequential methodology does offer a project certain advantages and disadvantages.

Advantages of the Waterfall Model

Simple to implement and execute for projects and/or company-wide
Limited demand on resources
Large emphasis on documentation

Disadvantages of the Waterfall Model

Completed phases cannot be revisited, regardless of whether issues arise within a project
Accurate requirements are never gathered prior to the completion of the requirements phase, due to the lack of clarification of the client's desires
Small changes or errors that arise in applications may cause additional problems
The client cannot change any requirements once the requirements phase has been completed, leaving them no option for changes as their desires change
Excess documentation
Phases are cumbersome and slow-moving

Learn more about the major processes in the Software Development Life Cycle and the Waterfall Model.

Conversely, the Hare shares similar traits with the Prototyping software development model, in that ideas are rapidly converted to basic working examples, and subsequent changes are made to quickly align the project with customers' desires as they are formulated and as the software strays from the customers' vision. The basic concept of prototyping is to eliminate the use of well-defined project requirements. Projects are allowed to grow as the customer's needs and requests grow. Projects are initially designed according to basic requirements and are refined as the requirements become more refined. This process allows customers to feel their way around the application to ensure that they are developing exactly what they want. This model also works well for determining the feasibility of certain approaches in regards to an application. Prototypes allow for quickly developing examples of implementing specific functionality based on certain techniques.

Advantages of Prototyping

Active participation from users and customers
Allows customers to change their mind in specifying requirements
Customers get a better understanding of the system as it is developed
Earlier bug/error detection
Promotes communication with customers
The prototype could be used as the final production version
Reduced time needed to develop applications compared to the Waterfall method

Disadvantages of Prototyping

Promotes constantly redefining project requirements, which can cause major system rewrites
Potential for increased complexity of a system as the scope of the system expands
The customer could mistake the prototype for the working version
Implementation compromises can increase complexity when applying updates and/or application fixes

When companies are trying to decide between the Waterfall Model and the Prototyping Model, they need to evaluate the benefits and disadvantages of both models. Typically, smaller companies or projects with major time constraints head for more of a Prototyping approach, because it can reduce the time needed to complete the project; there is more of a focus on building the project and less on defining requirements and scope prior to the start of the project. On the other hand, companies with well-defined requirements and the time allowed to generate proper documentation should steer towards more of a Waterfall approach, because they are in a position to obtain clarified requirements and to design an optimal solution prior to the start of coding.

    Read the article

  • AngularJS on top of ASP.NET: Moving the MVC framework out to the browser

    - by Varun Chatterji
    Heavily drawing inspiration from Ruby on Rails, MVC4’s convention over configuration model of development soon became the Holy Grail of .NET web development. The MVC model brought with it the goodness of proper separation of concerns between business logic, data, and the presentation logic. However, the MVC paradigm, was still one in which server side .NET code could be mixed with presentation code. The Razor templating engine, though cleaner than its predecessors, still encouraged and allowed you to mix .NET server side code with presentation logic. Thus, for example, if the developer required a certain <div> tag to be shown if a particular variable ShowDiv was true in the View’s model, the code could look like the following: Fig 1: To show a div or not. Server side .NET code is used in the View Mixing .NET code with HTML in views can soon get very messy. Wouldn’t it be nice if the presentation layer (HTML) could be pure HTML? Also, in the ASP.NET MVC model, some of the business logic invariably resides in the controller. It is tempting to use an anti­pattern like the one shown above to control whether a div should be shown or not. However, best practice would indicate that the Controller should not be aware of the div. The ShowDiv variable in the model should not exist. A controller should ideally, only be used to do the plumbing of getting the data populated in the model and nothing else. The view (ideally pure HTML) should render the presentation layer based on the model. In this article we will see how Angular JS, a new JavaScript framework by Google can be used effectively to build web applications where: 1. Views are pure HTML 2. Controllers (in the server sense) are pure REST based API calls 3. The presentation layer is loaded as needed from partial HTML only files. What is MVVM? MVVM short for Model View View Model is a new paradigm in web development. In this paradigm, the Model and View stuff exists on the client side through javascript instead of being processed on the server through postbacks. These frameworks are JavaScript frameworks that facilitate the clear separation of the “frontend” or the data rendering logic from the “backend” which is typically just a REST based API that loads and processes data through a resource model. The frameworks are called MVVM as a change to the Model (through javascript) gets reflected in the view immediately i.e. Model > View. Also, a change on the view (through manual input) gets reflected in the model immediately i.e. View > Model. The following figure shows this conceptually (comments are shown in red): Fig 2: Demonstration of MVVM in action In Fig 2, two text boxes are bound to the same variable model.myInt. Thus, changing the view manually (changing one text box through keyboard input) also changes the other textbox in real time demonstrating V > M property of a MVVM framework. Furthermore, clicking the button adds 1 to the value of model.myInt thus changing the model through JavaScript. This immediately updates the view (the value in the two textboxes) thus demonstrating the M > V property of a MVVM framework. Thus we see that the model in a MVVM JavaScript framework can be regarded as “the single source of truth“. This is an important concept. Angular is one such MVVM framework. We shall use it to build a simple app that sends SMS messages to a particular number. Application, Routes, Views, Controllers, Scope and Models Angular can be used in many ways to construct web applications. 
For this article, we shall only focus on building Single Page Applications (SPAs). Many of the approaches we will follow in this article have alternatives. It is beyond the scope of this article to explain every nuance in detail but we shall try to touch upon the basic concepts and end up with a working application that can be used to send SMS messages using Sent.ly Plus (a service that is itself built using Angular). Before you read on, we would like to urge you to forget what you know about Models, Views, Controllers and Routes in the ASP.NET MVC4 framework. All these words have different meanings in the Angular world. Whenever these words are used in this article, they will refer to Angular concepts and not ASP.NET MVC4 concepts. The following figure shows the skeleton of the root page of an SPA: Fig 3: The skeleton of a SPA The skeleton of the application is based on the Bootstrap starter template which can be found at: http://getbootstrap.com/examples/starter­template/ Apart from loading the Angular, jQuery and Bootstrap JavaScript libraries, it also loads our custom scripts /app/js/controllers.js /app/js/app.js These scripts define the routes, views and controllers which we shall come to in a moment. Application Notice that the body tag (Fig. 3) has an extra attribute: ng­app=”smsApp” Providing this tag “bootstraps” our single page application. It tells Angular to load a “module” called smsApp. This “module” is defined /app/js/app.js angular.module('smsApp', ['smsApp.controllers', function () {}]) Fig 4: The definition of our application module The line shows above, declares a module called smsApp. It also declares that this module “depends” on another module called “smsApp.controllers”. The smsApp.controllers module will contain all the controllers for our SPA. Routing and Views Notice that in the Navbar (in Fig 3) we have included two hyperlinks to: “#/app” “#/help” This is how Angular handles routing. Since the URLs start with “#”, they are actually just bookmarks (and not server side resources). However, our route definition (in /app/js/app.js) gives these URLs a special meaning within the Angular framework. angular.module('smsApp', ['smsApp.controllers', function () { }]) //Configure the routes .config(['$routeProvider', function ($routeProvider) { $routeProvider.when('/binding', { templateUrl: '/app/partials/bindingexample.html', controller: 'BindingController' }); }]); Fig 5: The definition of a route with an associated partial view and controller As we can see from the previous code sample, we are using the $routeProvider object in the configuration of our smsApp module. Notice how the code “asks for” the $routeProvider object by specifying it as a dependency in the [] braces and then defining a function that accepts it as a parameter. This is known as dependency injection. Please refer to the following link if you want to delve into this topic: http://docs.angularjs.org/guide/di What the above code snippet is doing is that it is telling Angular that when the URL is “#/binding”, then it should load the HTML snippet (“partial view”) found at /app/partials/bindingexample.html. Also, for this URL, Angular should load the controller called “BindingController”. We have also marked the div with the class “container” (in Fig 3) with the ng­view attribute. This attribute tells Angular that views (partial HTML pages) defined in the routes will be loaded within this div. 
You can see that the Angular JavaScript framework, unlike many other frameworks, works purely by extending HTML tags and attributes. It also allows you to extend HTML with your own tags and attributes (through directives) if you so desire. You can find out more about directives at the following URL: http://www.codeproject.com/Articles/607873/Extending-HTML-with-AngularJS-Directives

Controllers and Models

We have seen how we define what views and controllers should be loaded for a particular route. Let us now consider how controllers are defined. Our controllers are defined in the file /app/js/controllers.js. The following snippet shows the definition of the "BindingController", which is loaded when we hit the URL http://localhost:port/index.html#/binding (as we defined in the route earlier, shown in Fig 5). Remember that we had declared that our application module "smsApp" depends on the "smsApp.controllers" module (see Fig 4). The code snippet below shows how the "BindingController" referenced in the route shown in Fig 5 is defined in the module smsApp.controllers:

angular.module('smsApp.controllers', [])
.controller('BindingController', ['$scope', function ($scope) {
    $scope.model = {};
    $scope.model.myInt = 6;
    $scope.addOne = function () {
        $scope.model.myInt++;
    };
}]);

Fig 6: The definition of a controller in the "smsApp.controllers" module.

The pieces are falling into place! Remember Fig 2? That was the code of a partial view that was loaded within the container div of the skeleton SPA shown in Fig 3. The route definition shown in Fig 5 also specified that the controller called "BindingController" (shown in Fig 6) was loaded when we loaded the URL http://localhost:22544/index.html#/binding. The button in Fig 2 was marked with the attribute ng-click="addOne()", which added 1 to the value of model.myInt. In Fig 6, we can see that this function is actually defined in the "BindingController".

Scope

We can see from Fig 6 that in the definition of "BindingController", we defined a dependency on $scope and then, as usual, defined a function which "asks for" $scope as per the dependency injection pattern. So what is $scope? Any guesses? As you might have guessed, a scope is a particular "address space" where variables and functions may be defined. This has a similar meaning to scope in a programming language like C#.

Model: The Scope is not the Model

It is tempting to assign variables in the scope directly. For example, we could have defined myInt as $scope.myInt = 6 in Fig 6 instead of $scope.model.myInt = 6. The reason why this is a bad idea is that scope is hierarchical in Angular. Thus, if we were to define a controller within another controller (nested controllers), the inner controller would inherit the scope of the parent controller. This inheritance follows JavaScript prototypal inheritance. Let's say the parent controller defined a variable through $scope.myInt = 6. The child controller would inherit the scope through JavaScript prototypal inheritance. This basically means that the child scope has a variable myInt that points to the parent scope's myInt variable. Now if we assigned the value of myInt in the parent, the child scope would see the same value, as the child scope's myInt variable points to the parent scope's myInt variable.
However, if we were to assign the value of the myInt variable in the child scope, the link of that variable to the parent scope would be broken: the myInt variable in the child scope now points to the value 6 and not to the parent scope's myInt variable. But if we defined a variable model in the parent scope, then the child scope would also have a variable model that points to the model variable in the parent scope. Updating the value of $scope.model.myInt in the parent scope would change the value seen in the child scope too, as the model variable in the child scope points to the model variable in the parent scope. Now, changing the value of $scope.model.myInt in the child scope would ALSO change the value in the parent scope. This is because the model reference in the child scope points to the model variable in the parent scope. We did no new assignment to the model variable in the child scope; we only changed an attribute of the model variable. Since the model variable (in the child scope) points to the model variable in the parent scope, we have successfully changed the value of myInt in the parent scope. Thus the value of $scope.model.myInt in the parent scope becomes the "single source of truth". This is a tricky concept, so it is considered good practice NOT to rely on scope inheritance. More info on prototypal inheritance in Angular can be found in the "JavaScript Prototypal Inheritance" section at the following URL: https://github.com/angular/angular.js/wiki/Understanding-Scopes
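The shadowing behaviour described above is plain JavaScript prototypal inheritance and can be observed without Angular at all. The following standalone snippet (our illustration, not from the article) shows both cases:

    var parent = { myInt: 6, model: { myInt: 6 } };
    var child = Object.create(parent); // child's prototype is parent

    console.log(child.myInt);  // 6 - the read falls through to the parent
    child.myInt = 7;           // assignment creates a NEW property on child
    console.log(parent.myInt); // still 6 - the link is now broken

    child.model.myInt = 42;          // no assignment to child.model itself, so the
                                     // lookup of "model" falls through to the parent
    console.log(parent.model.myInt); // 42 - the parent sees the change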
Building It: An Angular JS application using a .NET Web API Backend

Now that we have a perspective on the basic components of an MVVM application built using Angular, let's build something useful. We will build an application that can be used to send out SMS messages to a given phone number. The following diagram describes the architecture of the application we are going to build:

Fig 7: Broad application architecture

We are going to add an HTML partial to our project. This partial will contain the form fields that will accept the phone number and the message that needs to be sent as an SMS. It will also display all the messages that have previously been sent. All the executable code that is run on the occurrence of events (button clicks etc.) in the view resides in the controller. The controller interacts with the ASP.NET WebAPI to get a history of SMS messages, add a message etc. through a REST based API. For simplicity, we will use an in-memory data structure for the purposes of creating this application. Thus, the tasks ahead of us are:

1. Creating the REST WebAPI with GET, PUT, POST and DELETE methods
2. Creating the SmsView.html partial
3. Creating the SmsController controller with methods that are called from the SmsView.html partial
4. Adding a new route that loads the controller and the partial

1. Creating the REST WebAPI

This is a simple task that should be quite straightforward to any .NET developer. The following listing shows our ApiController:

public class SmsMessage
{
    public string to { get; set; }
    public string message { get; set; }
}

public class SmsResource : SmsMessage
{
    public int smsId { get; set; }
}

public class SmsResourceController : ApiController
{
    public static Dictionary<int, SmsResource> messages = new Dictionary<int, SmsResource>();
    public static int currentId = 0;

    // GET api/<controller>
    public List<SmsResource> Get()
    {
        List<SmsResource> result = new List<SmsResource>();
        foreach (int key in messages.Keys)
        {
            result.Add(messages[key]);
        }
        return result;
    }

    // GET api/<controller>/5
    public SmsResource Get(int id)
    {
        if (messages.ContainsKey(id))
            return messages[id];
        return null;
    }

    // POST api/<controller>
    public List<SmsResource> Post([FromBody] SmsMessage value)
    {
        //Synchronize on messages so we don't have id collisions
        lock (messages)
        {
            //Copy the incoming message into a resource
            //(the binder gives us an SmsMessage, not an SmsResource)
            SmsResource res = new SmsResource { to = value.to, message = value.message };
            res.smsId = currentId++;
            messages.Add(res.smsId, res);
            //SentlyPlusSmsSender.SendMessage(value.to, value.message);
            return Get();
        }
    }

    // PUT api/<controller>/5
    public List<SmsResource> Put(int id, [FromBody] SmsMessage value)
    {
        //Synchronize on messages so we don't have id collisions
        lock (messages)
        {
            if (messages.ContainsKey(id))
            {
                //Update the message
                messages[id].message = value.message;
                messages[id].to = value.to;
            }
            return Get();
        }
    }

    // DELETE api/<controller>/5
    public List<SmsResource> Delete(int id)
    {
        if (messages.ContainsKey(id))
        {
            messages.Remove(id);
        }
        return Get();
    }
}

Once this class is defined, we should be able to access the WebAPI with a simple GET request from the browser:

http://localhost:port/api/SmsResource

Notice the commented line:

//SentlyPlusSmsSender.SendMessage

The SentlyPlusSmsSender class is defined in the attached solution. We have shown this line commented out because we want to explain the core Angular concepts first. If you load the attached solution, this line is uncommented in the source and an actual SMS will be sent!

By default, the API returns XML. For consumption of the API in Angular, we would like it to return JSON. To change the default to JSON, we make the following change to the WebApiConfig.cs file located in the App_Start folder:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );

        var appXmlType = config.Formatters.XmlFormatter.
            SupportedMediaTypes.
            FirstOrDefault(t => t.MediaType == "application/xml");
        config.Formatters.XmlFormatter.SupportedMediaTypes.Remove(appXmlType);
    }
}

We now have our backend REST API which we can consume from Angular!
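Since jQuery is already loaded by our skeleton page, a quick way to smoke-test the API is from the browser's JavaScript console (this test is our suggestion and assumes the default route shown above):

    // Fetch the current message list (returns JSON after the formatter change above)
    $.getJSON('/api/smsresource', function (messages) {
        console.log(messages); // initially an empty array: []
    });

    // Post a new message; the API responds with the updated list
    $.post('/api/smsresource',
           { to: '+44 7778 609466', message: 'Hello from the console' },
           function (messages) { console.log(messages); });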
2. Creating the SmsView.html partial

This simple partial will define two fields: the destination phone number (international format, starting with a +) and the message. These fields will be bound to model.phoneNumber and model.message. We will also add a button that we shall hook up to sendMessage() in the controller. A list of all previously sent messages (bound to model.allMessages) will also be displayed below the form input. The following code shows the partial:

<!-- If model.errorMessage is defined, then render the error div -->
<div class="alert alert-danger alert-dismissable" style="margin-top: 30px;"
     ng-show="model.errorMessage != undefined">
    <button type="button" class="close" data-dismiss="alert" aria-hidden="true">&times;</button>
    <strong>Error!</strong> <br /> {{ model.errorMessage }}
</div>

<!-- The input fields bound to the model -->
<div class="well" style="margin-top: 30px;">
    <table style="width: 100%;">
        <tr>
            <td style="width: 45%; text-align: center;">
                <input type="text" placeholder="Phone number (eg; +44 7778 609466)"
                       ng-model="model.phoneNumber" class="form-control" style="width: 90%"
                       onkeypress="return checkPhoneInput();" />
            </td>
            <td style="width: 45%; text-align: center;">
                <input type="text" placeholder="Message" ng-model="model.message"
                       class="form-control" style="width: 90%" />
            </td>
            <td style="text-align: center;">
                <button class="btn btn-danger" ng-click="sendMessage();"
                        ng-disabled="model.isAjaxInProgress" style="margin-right: 5px;">Send</button>
                <img src="/Content/ajax-loader.gif" ng-show="model.isAjaxInProgress" />
            </td>
        </tr>
    </table>
</div>

<!-- The past messages -->
<div style="margin-top: 30px;">
    <!-- The following div is shown if there are no past messages -->
    <div ng-show="model.allMessages.length == 0">
        No messages have been sent yet!
    </div>
    <!-- The following div is shown if there are some past messages -->
    <div ng-show="model.allMessages.length != 0">
        <table style="width: 100%;" class="table table-striped">
            <tr>
                <td>Phone Number</td>
                <td>Message</td>
                <td></td>
            </tr>
            <!-- The ng-repeat directive is like the repeater control in .NET,
                 but as you can see this partial is pure HTML which is much cleaner -->
            <tr ng-repeat="message in model.allMessages">
                <td>{{ message.to }}</td>
                <td>{{ message.message }}</td>
                <td>
                    <button class="btn btn-danger" ng-click="delete(message.smsId);"
                            ng-disabled="model.isAjaxInProgress">Delete</button>
                </td>
            </tr>
        </table>
    </div>
</div>

The above code is commented and should be self-explanatory. Conditional rendering is achieved through the ng-show="condition" attribute on various div tags. Input fields are bound to the model, and the Send button is bound to the sendMessage() function in the controller through the ng-click="sendMessage()" attribute defined on the button tag. While AJAX calls are taking place, the controller sets model.isAjaxInProgress to true. Based on this variable, buttons are disabled through the ng-disabled directive, which is added as an attribute to the buttons. The ng-repeat directive added as an attribute to the tr tag causes the table row to be rendered multiple times, much like an ASP.NET repeater.

3. Creating the SmsController controller

The penultimate piece of our application is the controller which responds to events from our view and interacts with our MVC4 REST WebAPI. The following listing shows the code we need to add to /app/js/controllers.js. Note that controller definitions can be chained. Also note that this controller "asks for" the $http service. The $http service is a simple way in Angular to do AJAX. So far we have only encountered modules, controllers, views and directives in Angular; $http is a new kind of entity in Angular called a service. More information on Angular services can be found at the following URL: http://docs.angularjs.org/guide/dev_guide.services.understanding_services
.controller('SmsController', ['$scope', '$http', function ($scope, $http) {
    //We define the model
    $scope.model = {};
    //We define the allMessages array in the model
    //that will contain all the messages sent so far
    $scope.model.allMessages = [];
    //The error, if any
    $scope.model.errorMessage = undefined;
    //We initially load data, so set isAjaxInProgress = true
    $scope.model.isAjaxInProgress = true;

    //Load all the messages
    $http({ url: '/api/smsresource', method: "GET" }).
        success(function (data, status, headers, config) {
            //this callback will be called asynchronously
            //when the response is available
            $scope.model.allMessages = data;
            //We are done with AJAX loading
            $scope.model.isAjaxInProgress = false;
        }).
        error(function (data, status, headers, config) {
            //called asynchronously if an error occurs
            //or server returns response with an error status
            $scope.model.errorMessage = "Error occurred status:" + status;
            //We are done with AJAX loading
            $scope.model.isAjaxInProgress = false;
        });

    $scope.delete = function (id) {
        //We are making an ajax call so we set this to true
        $scope.model.isAjaxInProgress = true;
        $http({ url: '/api/smsresource/' + id, method: "DELETE" }).
            success(function (data, status, headers, config) {
                //this callback will be called asynchronously
                //when the response is available
                $scope.model.allMessages = data;
                //We are done with AJAX loading
                $scope.model.isAjaxInProgress = false;
            }).
            error(function (data, status, headers, config) {
                //called asynchronously if an error occurs
                //or server returns response with an error status
                $scope.model.errorMessage = "Error occurred status:" + status;
                //We are done with AJAX loading
                $scope.model.isAjaxInProgress = false;
            });
    }

    $scope.sendMessage = function () {
        $scope.model.errorMessage = undefined;
        var message = '';
        if ($scope.model.message != undefined)
            message = $scope.model.message.trim();
        if ($scope.model.phoneNumber == undefined || $scope.model.phoneNumber == ''
            || $scope.model.phoneNumber.length < 10 || $scope.model.phoneNumber[0] != '+') {
            $scope.model.errorMessage = "You must enter a valid phone number in international format. Eg: +44 7778 609466";
            return;
        }
        if (message.length == 0) {
            $scope.model.errorMessage = "You must specify a message!";
            return;
        }
        //We are making an ajax call so we set this to true
        $scope.model.isAjaxInProgress = true;
        $http({
            url: '/api/smsresource',
            method: "POST",
            data: { to: $scope.model.phoneNumber, message: $scope.model.message }
        }).
            success(function (data, status, headers, config) {
                //this callback will be called asynchronously
                //when the response is available
                $scope.model.allMessages = data;
                //We are done with AJAX loading
                $scope.model.isAjaxInProgress = false;
            }).
            error(function (data, status, headers, config) {
                //called asynchronously if an error occurs
                //or server returns response with an error status
                $scope.model.errorMessage = "Error occurred status:" + status;
                //We are done with AJAX loading
                $scope.model.isAjaxInProgress = false;
            });
    }
}]);

We can see from the previous listing how the functions that are called from the view are defined in the controller. It should also be evident how easy it is to make AJAX calls to consume our MVC4 REST WebAPI. Now we are left with the final piece: we need to define a route that associates a particular path with the view and the controller we have defined.

4. Add a new route that loads the controller and the partial

This is the easiest part of the puzzle.
We simply define another route in the /app/js/app.js file (a consolidated app.js, combining this route with the one from Fig 5, is sketched at the end of this article):

$routeProvider.when('/sms', {
    templateUrl: '/app/partials/smsview.html',
    controller: 'SmsController'
});

Conclusion

In this article we have seen how much of the server side functionality in the MVC4 framework can be moved to the browser, thus delivering a snappy and fast user interface. We have seen how we can build client side, HTML-only views that avoid the messy syntax offered by server side Razor views. We have built a functioning app from the ground up. The significant advantage of this approach to building web apps is that the front end can be completely platform independent. Even though we used ASP.NET to create our REST API, we could just as easily have used another language such as Node.js, Ruby, etc., without changing a single line of our front end code. Angular is a rich framework and we have only touched on the basic functionality required to create an SPA. For readers who wish to delve further into the Angular framework, we would recommend the following URL as a starting point: http://docs.angularjs.org/misc/started

To get started with the code for this project:

1. Sign up for an account at http://plus.sent.ly (free)
2. Add your phone number
3. Go to the "My Identies Page"
4. Note down your Sender ID, Consumer Key and Consumer Secret
5. Download the code for this article at: https://docs.google.com/file/d/0BzjEWqSE31yoZjZlV0d0R2Y3eW8/edit?usp=sharing
6. Change the values of Sender Id, Consumer Key and Consumer Secret in the web.config file
7. Run the project through Visual Studio!
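For reference, here is the consolidated sketch of /app/js/app.js with the '/binding' route from Fig 5 and the '/sms' route from step 4 side by side (the .otherwise() fallback is our assumption and does not appear in the article):

    angular.module('smsApp', ['smsApp.controllers'])
    .config(['$routeProvider', function ($routeProvider) {
        $routeProvider.when('/binding', {
            templateUrl: '/app/partials/bindingexample.html',
            controller: 'BindingController'
        });
        $routeProvider.when('/sms', {
            templateUrl: '/app/partials/smsview.html',
            controller: 'SmsController'
        });
        //Send unknown URLs to the SMS view (our assumption)
        $routeProvider.otherwise({ redirectTo: '/sms' });
    }]);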

    Read the article

  • strange data annotations issue in MVC 2

    - by femi
Hello, I came across something strange when creating an edit form with MVC 2. I realised that my error messages come up on form submission even when I have filled out valid data! I am using a buddy class which I have configured correctly (I know that because I can see my custom errors). Here is the code from the view that generates this:

<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<TG_Careers.Models.Applicant>" %>
<script src="/Scripts/MicrosoftAjax.js" type="text/javascript"></script>
<script src="/Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script>
<script src="/Scripts/MicrosoftMvcValidation.js" type="text/javascript"></script>
<%= Html.ValidationSummary() %>
<% Html.EnableClientValidation(); %>
<% using (Html.BeginForm()) {%>
<div class="confirm-module">
  <table cellpadding="4" cellspacing="2">
    <tr> <td><%= Html.LabelFor(model => model.FirstName) %></td> <td><%= Html.EditorFor(model => model.FirstName) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.FirstName) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.MiddleName) %></td> <td><%= Html.EditorFor(model => model.MiddleName) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.MiddleName) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.LastName) %></td> <td><%= Html.EditorFor(model => model.LastName) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.LastName) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.Gender) %></td> <td><%= Html.EditorFor(model => model.Gender) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.Gender) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.MaritalStatus) %></td> <td><%= Html.EditorFor(model => model.MaritalStatus) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.MaritalStatus) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.DateOfBirth) %></td> <td><%= Html.EditorFor(model => model.DateOfBirth) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.DateOfBirth) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.Address) %></td> <td><%= Html.EditorFor(model => model.Address) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.Address) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.City) %></td> <td><%= Html.EditorFor(model => model.City) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.City) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.State) %></td> <td><%= Html.EditorFor(model => model.State) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.State) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.StateOfOriginID) %></td> <td><%= Html.DropDownList("StateOfOriginID", new SelectList(ViewData["States"] as IEnumerable, "StateID", "Name", Model.StateOfOriginID)) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.StateOfOriginID) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.CompletedNYSC) %></td> <td><%= Html.EditorFor(model => model.CompletedNYSC) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.CompletedNYSC) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.YearsOfExperience) %></td> <td><%= Html.EditorFor(model => model.YearsOfExperience) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.YearsOfExperience) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.MobilePhone) %></td> <td><%= Html.EditorFor(model => model.MobilePhone) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.MobilePhone) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.DayPhone) %></td> <td><%= Html.EditorFor(model => model.DayPhone) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.DayPhone) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.CVFileName) %></td> <td><%= Html.EditorFor(model => model.CVFileName) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.CVFileName) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.CurrentPosition) %></td> <td><%= Html.EditorFor(model => model.CurrentPosition) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.CurrentPosition) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.EmploymentCommenced) %></td> <td><%= Html.EditorFor(model => model.EmploymentCommenced) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.EmploymentCommenced) %></td> </tr>
    <tr> <td><%= Html.LabelFor(model => model.DateofTakingupCurrentPosition) %></td> <td><%= Html.EditorFor(model => model.DateofTakingupCurrentPosition) %></td> </tr>
    <tr> <td colspan="2"><%= Html.ValidationMessageFor(model => model.DateofTakingupCurrentPosition) %></td> </tr>
    <tr> <td>&nbsp;</td> <td>&nbsp;</td> </tr>
    <tr> <td colspan="2">&nbsp;</td> </tr>
  </table>
  <p> <input type="submit" value="Save Profile Details" /> </p>
</div>
<% } %>

Any ideas on this one please? Thanks

    Read the article

  • ASP.NET MVC Postbacks and HtmlHelper Controls ignoring Model Changes

    - by Rick Strahl
So here's a binding behavior in ASP.NET MVC that I didn't really get until today: HtmlHelper controls (like .TextBoxFor() etc.) don't bind to model values on Postback, but rather get their value directly out of the POST buffer from ModelState. Effectively it looks like you can't change the display value of a control via model value updates on a Postback operation. To demonstrate, here's an example. I have a small section in a document where I display an editable email address:

This is what the form displays on a GET operation and, as expected, I get the email value displayed in both the textbox and the plain value display below, which reflects the value in the model. I added a plain text value to demonstrate the model value compared to what's rendered in the textbox. The relevant markup is the email address, which needs to be manipulated via the model in the Controller code. Here's the Razor markup:

<div class="fieldcontainer">
    <label>
        Email: &nbsp; <small>(username and <a href="http://gravatar.com">Gravatar</a> image)</small>
    </label>
    <div>
        @Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield"})
        @Model.User.Email
    </div>
</div>

So, I have this form and the user can change their email address. On postback the Post controller code then asks the business layer whether the change is allowed. If it's not, I want to reset the email address back to the old value which exists in the database and was previously stored. The obvious thing to do would be to modify the model. Here's the Controller logic block that deals with that:

// did user change email?
if (!string.IsNullOrEmpty(oldEmail) && user.Email != oldEmail)
{
    if (userBus.DoesEmailExist(user.Email))
    {
        userBus.ValidationErrors.Add("New email address exists already. Please…");
        user.Email = oldEmail;
    }
    else
        // allow email change but require verification by forcing a login
        user.IsVerified = false;
}
…
model.user = user;
return View(model);

The logic is straightforward - if the new email address is not valid because it already exists, I don't want to display the new email address the user entered, but rather the old one. To do this I change the value on the model, which effectively does this:

model.user.Email = oldEmail;
return View(model);

So when I press the Save button after entering in my new email address ([email protected]) here's what comes back in the rendered view:

Notice that the textbox value and the raw displayed model value are different. The TextBox displays the POST value; the raw value displays the actual model value. This means that MVC renders the textbox value from the POST data rather than from the view data when an Http POST is active. Now I don't know about you, but this is not the behavior I expected - initially. This behavior effectively means that I cannot modify the contents of the textbox from the Controller code if using HtmlHelpers for binding. Updating the model for display purposes in a POST has in effect - no effect. (Apr. 25, 2012 - edited the post heavily based on comments and more experimentation)

What should the behavior be?

After getting quite a few comments on this post I quickly realized that the behavior I described above is actually the behavior you'd want in 99% of the binding scenarios. You do want to get the POST values back into your input controls at all times, so that the data displayed on a form for the user matches what they typed. So if an error occurs, the error doesn't mysteriously disappear, getting replaced either with a default value or some value that you changed on the model on your own. Makes sense. Still, it is a little non-obvious because of the way you create the UI elements with MVC; it certainly looks like you are binding to the model value:

@Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield",required="required" })

and so unless one understands a little bit about how the model binder works, this is easy to trip up on. At least it was for me. Even though I'm telling the control which model value to bind to, that model value is only used initially on GET operations. After that, ModelState/POST values provide the display value.

Workarounds

The default behavior should be fine for 99% of binding scenarios. But if you do need to fix up values based on your model rather than the default POST values, there are a number of ways that you can work around this. Initially when I ran into this, I couldn't figure out how to set the value using code, and so the simplest solution to me was simply to not use the MVC Html Helper for the specific control and explicitly bind the model via HTML markup and a @Razor expression:

<input type="text" name="User.Email" id="User_Email" value="@Model.User.Email" />

And this produces the right result. This is easy enough to create, but feels a little out of place when using the @Html helpers for everything else. As you can see by the difference in the name and id values, you also are forced to remember the naming conventions that MVC imposes in order for ModelBinding to work properly, which is a pain to remember and set manually (name is the same as the property with . syntax, id replaces dots with underscores).

Use the ModelState

Some of my original confusion came because I didn't understand how the model binder works. The model binder basically maintains ModelState on a postback, which holds a value and binding errors for each of the POST values submitted on the page that can be mapped to the model. In other words, there's one ModelState entry for each bound property of the model. Each ModelState entry contains a value property that holds AttemptedValue and RawValue properties. The AttemptedValue is essentially the POST value retrieved from the form. The RawValue is the value that the model holds. When MVC binds controls like @Html.TextBoxFor() or @Html.TextBox(), it always binds values on a GET operation. On a POST operation, however, it'll always use the AttemptedValue to display the control. MVC binds using the ModelState on a POST operation, not the model's value. So, if you want the behavior that I was expecting originally, you can actually get it by clearing the ModelState in the controller code:

ModelState.Clear();

This clears out all the captured ModelState values and effectively binds to the model. Note this will produce very similar results - in fact, if there are no binding errors, you see exactly the same behavior as if binding from ModelState, because the model has been updated from the ModelState already and binding to the updated values most likely produces the same values you would get with POST back values. The big difference, though, is that any values that couldn't bind - like, say, putting a string into a numeric field - will now not display back the value the user typed, but the default field value or whatever you changed the model value to. This is the behavior I was actually expecting previously. But - clearing out all values might be a bit heavy handed. You might want to fix up one or two values in a model, but rarely would you want the entire form to rebind from the model. So, you can also clear out individual values on an as-needed basis:

if (userBus.DoesEmailExist(user.Email))
{
    userBus.ValidationErrors.Add("New email address exists already. Please…");
    user.Email = oldEmail;
    ModelState.Remove("User.Email");
}

This allows you to remove a single value from the ModelState and effectively lets you replace that value for display from the model.

Why?

While researching this I came across a post from Microsoft's Brad Wilson, who describes the default binding behavior best in a forum post:

The reason we use the posted value for editors rather than the model value is that the model may not be able to contain the value that the user typed. Imagine in your "int" editor the user had typed "dog". You want to display an error message which says "dog is not valid", and leave "dog" in the editor field. However, your model is an int: there's no way it can store "dog". So we keep the old value. If you don't want the old values in the editor, clear out the Model State. That's where the old value is stored and pulled from the HTML helpers.

There you have it. It's not the most intuitive behavior, but in hindsight this behavior does make some sense, even if at first glance it looks like you should be able to update values from the model. The solution of clearing ModelState works and is a reasonable one, but you have to know about some of the innards of ModelState and how it actually works to figure that out.

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC

    Read the article

  • Tips on ensuring Model Quality

    - by [email protected]
Given enough data that represents the domain well, and models that reflect exactly the decision being optimized, models usually provide good predictions that ensure lift. Nevertheless, sometimes the modeling situation is less than ideal. In this blog entry we explore the problems found in a few such situations and how to avoid them.

1 - The Model does not reflect the problem you are trying to solve

For example, you may be trying to solve the problem "What product should I recommend to this customer?" but your model learns on the problem "Given that a customer has acquired our products, what is the likelihood for each product?". In this case the model you built may be too distant a proxy for the problem you are really trying to solve. What you could do in this case is try to build a model based on the results of actual recommendations of products to customers. If there is not enough data from actual recommendations, you could use a hybrid approach in which you would use the [bad] proxy model until the recommendation model converges.

2 - Data is not predictive enough

If the inputs are not correlated with the output, then the models may be unable to provide good predictions. For example, if the inputs are the phase of the moon and the weather, and the output is what car the customer bought, there may be no correlations found. In this case you should see a low quality model. The solution in this case is to include more relevant inputs.

3 - Not enough cases seen

If the training data does not include enough cases (at least 200 positive examples for each output), then the quality of recommendations may be low. The obvious solution is to include more data records. If this is not possible, then it may be possible to build a model based on the characteristics of the output choices rather than the choices themselves. For example, instead of using products as output, use the product category, price and brand name, and then combine these models.

4 - Output leaking into input, giving the false impression of good quality models

If the input data in the training includes values that have changed, or are available only because the output happened, then you will find some strong correlations between the input and the output, but these strong correlations do not reflect the data that you will have available at decision (prediction) time. For example, if you are building a model to predict whether a web site visitor will succeed in registering, and the input includes the variable DaysSinceRegistration, and you learn when this variable has already been set, you will probably see a big correlation between having a zero (or one) in this variable and the fact that registration was successful. The solution is to remove these variables from the input, or make sure they reflect the value as of the time of decision and not after the result is known.

    Read the article

  • Data Model Dissonance

    - by Tony Davis
So often at the start of the development of database applications, there is a premature rush to the keyboard. Unless, before we get there, we've mapped out and agreed the three data models (the Conceptual, the Logical and the Physical), the inevitable refactoring will dog development work. It pays to get the data models sorted out up-front, however 'agile' you profess to be. The hardest model to get right, the most misunderstood, and the one most neglected by the various modeling tools, is the conceptual data model, and yet it is critical to all that follows. The conceptual model distils what the business understands about itself, and the way it operates. It represents the business rules that govern the required data, its constraints and its properties. The conceptual model uses the terminology of the business and defines the most important entities and their inter-relationships. Don't assume that the organization's understanding of these business rules is consistent or accurate. Too often, one department has a subtly different understanding of what an entity means and what it stores, from another. If our conceptual data model fails to resolve such inconsistencies, it will reduce data quality. If we don't collect and measure the raw data in a consistent way across the whole business, how can we hope to perform meaningful aggregation? The conceptual data model has more to do with business than technology, and as such, developers often regard it as a worthy but rather arcane ceremony, like saluting the flag or only eating fish on Friday. However, the consequences of getting it wrong have a direct and painful impact on many aspects of the project. If you adopt a silo-based (a.k.a. domain-driven) approach to development, you are still likely to suffer by starting with an incomplete knowledge of the domain. Even when you have surmounted these problems, so that the data entities accurately reflect the business domain that the application represents, there are likely to be dire consequences from abandoning the goal of a shared, enterprise-wide understanding of the business. In reading this, you may recall your own experiences of the consequences of getting the conceptual data model wrong. I believe that Phil Factor, for example, witnessed the abandonment of a multi-million dollar banking project due to an inadequate conceptual analysis of how the bank defined a 'customer'. We'd love to hear of any examples you know of development projects poleaxed by errors in the conceptual data model. Cheers, Tony

    Read the article

  • Achieving decoupling in Model classes

    - by Guven
I am trying to test-drive (or at least write unit tests for) my Model classes, but I noticed that my classes end up being too coupled. Since I can't break this coupling, writing unit tests is becoming harder and harder. To be more specific:

Model Classes: These are the classes that hold the data in my application. They pretty much resemble POJOs (plain old Java objects), but they also have some methods. The application is not too big, so I have around 15 model classes.

Coupling: Just to give an example, think of a simple case of Order Header - Order Item. The header knows the item and the item knows the header (it needs some information from the header for performing certain operations). Then, let's say there is the relationship between Order Item - Item Report. The item report needs the item as well. At this point, imagine writing tests for Item Report; you need to have an Order Header to carry out the tests. This is a simple case with 3 classes; things get more complicated with more classes.

I can come up with decoupled classes when I design algorithms, persistence layers, UI interactions, etc., but with model classes, I can't think of a way to separate them. They currently sit as one big chunk of classes that depend on each other. Here are some workarounds that I can think of:

Data Generators: I have a package that generates sample data for my model classes. For example, the OrderHeaderGenerator class creates OrderHeaders with some basic data in them. I use the OrderHeaderGenerator from my ItemReport unit tests so that I get an instance of the OrderHeader class. The problem is that these generators get complicated pretty fast, and then I also need to test the generators themselves, defeating the purpose a little bit.

Interfaces instead of dependencies: I can come up with interfaces to get rid of the hard dependencies. For example, the OrderItem class would depend on the IOrderHeader interface. So, in my unit tests, I can easily mock the behaviour of an OrderHeader with a FakeOrderHeader class that implements the IOrderHeader interface (see the sketch below). The problem with this approach is the complexity that the Model classes would end up having.

Would you have other ideas on how to break this coupling in the model classes? Or, how to make it easier to unit-test the model classes?
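To make the second workaround concrete, here is a small sketch of the idea (ours, not the asker's; rendered in JavaScript for brevity, where duck typing stands in for the IOrderHeader interface, and every name below is hypothetical):

    // A fake order header exposing only what the report actually needs.
    // In Java or C# this object would implement IOrderHeader.
    var fakeOrderHeader = {
        getCurrency: function () { return 'USD'; }
    };

    // The item depends only on the header's interface, not the concrete class
    var orderItem = { header: fakeOrderHeader, quantity: 3, unitPrice: 10 };

    // The unit test can now exercise the report logic without
    // constructing a real OrderHeader/OrderItem object graph
    function buildItemReport(item) {
        return {
            total: item.quantity * item.unitPrice,
            currency: item.header.getCurrency()
        };
    }

    console.log(buildItemReport(orderItem)); // { total: 30, currency: 'USD' }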

    Read the article

1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >