Search Results

Search found 15947 results on 638 pages for 'model quality'.

Page 6 of 638

  • Automating HP Quality Center with Python or Java

    - by Hari
    Hi, we have a project that uses HP Quality Center, and one of the recurring issues we face is people not updating comments on defects. So I was thinking we could come up with a small script or tool that periodically throws up a reminder and forces the user to update the comments. I came across the Open Test Architecture API and was wondering if there are any good Python or Java examples of using it that I could look at. Thanks, Hari

    Read the article

  • Passing variables from Model to Model in codeigniter

    - by Craig Ward
    Hi, I need to pass a variable to a model, have that model send another variable back, and then use that second variable to query a different model. E.g. I have a product_ID which I send to the product model; from that I find out the supplier_ID. I then want to pass that supplier_ID to the supplier model to get the supplier name. How do you implement this in CodeIgniter?

    Read the article

  • how to Improve DrawDIB's quality?

    - by sxingfeng
    I am coding in C++ with GDI. I use StretchDIBits to draw images to a DC: ::SetStretchBltMode(hDC, HALFTONE); ::StretchDIBits( hDC, des.left, des.top, des.right - des.left, des.bottom - des.top, 0, 0, img.getWidth(), img.getHeight(), (img.accessPixels()), (img.getInfo()), DIB_RGB_COLORS, SRCCOPY ); However, it is slow, so I switched to the DrawDib functions: ::SetStretchBltMode(hDC, HALFTONE); DrawDibDraw( hdd, hDC, des.left, des.top, des.right - des.left, des.bottom - des.top, (LPBITMAPINFOHEADER)(img.getInfo()), (img.accessPixels()), 0, 0, img.getWidth(), img.getHeight(), DDF_HALFTONE ); However, the result looks as if it had been drawn with the COLORONCOLOR mode. How can I improve the drawing quality?

    Read the article

  • High quality software examples

    - by Francisco Garcia
    One of the best ways to learn about programming is reading high-quality code/projects from great engineers. Which open-source projects do you think are worth looking at? I mean code that you can print out and sit under a tree with a glass of wine and enjoy reading. If you can, also specify whether the software is great to look at because of its documentation, design, UML diagrams or just plain code. I believe UML is not very common within open-source projects. Is there such a thing as a project branch that polishes code and design with the sole objective of giving other programmers an example of great software?

    Read the article

  • JQuery pass model to controller

    - by slandau
    I want to pass the MVC page model back to my controller within a JavaScript object. How would I do that? var urlString = "<%= System.Web.VirtualPathUtility.ToAbsolute("~/mvc/Indications.cfc/ExportToExcel")%>"; var jsonNickname = { model: Model, viewName: "<%= VirtualPathUtility.ToAbsolute("~/Views/Indications/TermSheetViews/Swap/CashFlows.aspx")%>", fileName: 'Cashflows.xls' } $.ajax({ type: "POST", url: urlString, data: jsonNickname, async: false, success: function (data) { $('#termSheetPrinted').append(data); } }); So where it says model: Model, I want Model to be the actual page model that I declare at the top of the page: Inherits="System.Web.Mvc.ViewPage<Chatham.Web.Models.Indications.SwapModel>" How can I do that?
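    A server-side object cannot be referenced directly from client script, so one common approach (a sketch only, not the poster's code; the action and parameter names are assumptions) is to emit the page model as JSON in the view and accept it as a string in the POST action:

        // In the view (.aspx), emit the typed Model as a JSON literal the script can post back:
        //   var pageModel = <%= new System.Web.Script.Serialization.JavaScriptSerializer().Serialize(Model) %>;
        //   var jsonNickname = { model: JSON.stringify(pageModel), viewName: "...", fileName: 'Cashflows.xls' };
        // The controller then rebuilds the strongly typed model from that string:
        using System.Web.Mvc;
        using System.Web.Script.Serialization;
        using Chatham.Web.Models.Indications;

        public class IndicationsController : Controller
        {
            [HttpPost]
            public ActionResult ExportToExcel(string model, string viewName, string fileName)
            {
                // Deserialize the posted JSON back into the page model type.
                SwapModel swapModel = new JavaScriptSerializer().Deserialize<SwapModel>(model);

                // ... render viewName with swapModel and return the Excel content ...
                return new EmptyResult();
            }
        }

    This only works if the model type round-trips cleanly through JSON (no circular references), which is worth checking for a view model as large as a term sheet.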

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle’s acquisition of Datanomic and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration!

    As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, PeopleSoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich, out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box.

    Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack – a ready set of standard processes with standard interfaces that provide integrated address verification and cleansing, individual matching, and organization matching. The services are suitable for either batch or real-time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total, across all locales, CDS includes well over a million dictionary entries.

    (Figure: excerpt from EDQ’s CDS Individual Name Standardization Dictionary)

    CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but it does this in a way that creates a ‘best of both worlds’ situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate ‘Identity Resolution’ and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products. Here is a brief guide to the main services provided in the pack.

    Address Verification and Standardization

    (Figure: EDQ’s CDS Address Cleaning Process)

    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch. The Address Verification processor is wrapped in an EDQ process – this adds significant capabilities over calling the underlying Address Verification API directly, specifically: country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level reported by the API; optimization of address verification by pre-standardizing data where required; formatting of output addresses into the input address fields normally used by applications; and adding descriptions of the address verification and geocoding return codes. The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.

    Duplicate Prevention

    Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention to avoid issues caused by complex replication and synchronization of large volumes of customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called and returns a number of key values. These are used to select other records (‘candidates’) that may match in the application data (which has been pre-seeded with keys using the same service). The ‘driving record’ (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits.

    (Figure: EDQ Duplicate Prevention Architecture)

    Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).

    Batch Matching

    To allow customers to use different match rules in batch than in real time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case a more precise (and automated) type of matching is normally required, in order to minimize the review work performed by Data Stewards. In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel’s Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel’s ‘Full Match’ (match all records against each other) and ‘Incremental Match’ (match a subset of records against all of their selected candidates) modes.

    The Future

    The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including an out-of-the-box Data Quality Dashboard, even more comprehensive international data handling, address search (suggesting multiple results), and integrated address matching. The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.
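    To make the duplicate-prevention call sequence described under "Duplicate Prevention" above concrete, here is a purely illustrative C# sketch. The proxy interfaces and record types are hypothetical stand-ins for clients generated from the EDQ web service endpoints, not the product's actual API; the point is only the order of the calls and where the application keeps control of its own data.

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical proxy interfaces standing in for clients generated from the EDQ
        // web services; the real service and type names will differ.
        public interface IKeyService { IList<string> GenerateKeys(CustomerRecord record); }
        public interface IMatchService { IEnumerable<MatchResult> Match(CustomerRecord driving, IEnumerable<CustomerRecord> candidates); }
        public interface ICustomerStore { IList<CustomerRecord> FindByClusterKeys(IEnumerable<string> keys); }

        public class CustomerRecord { public string Name { get; set; } public string Address { get; set; } }
        public class MatchResult { public CustomerRecord Candidate { get; set; } public double Score { get; set; } }

        public class DuplicatePreventionFlow
        {
            private readonly IKeyService keys;      // EDQ Cluster Key Generation service
            private readonly IMatchService matcher; // EDQ Matching service
            private readonly ICustomerStore store;  // the application's own data access

            public DuplicatePreventionFlow(IKeyService keys, IMatchService matcher, ICustomerStore store)
            {
                this.keys = keys;
                this.matcher = matcher;
                this.store = store;
            }

            public IList<MatchResult> CheckForDuplicates(CustomerRecord driving)
            {
                // 1. Ask the key generation service for cluster key values for the new/updated record.
                IList<string> keyValues = keys.GenerateKeys(driving);

                // 2. Select candidate records from the application data, which has been
                //    pre-seeded with keys produced by the same service.
                IList<CustomerRecord> candidates = store.FindByClusterKeys(keyValues);

                // 3. Present the driving record plus the candidates to the matching service,
                //    which scores each candidate by strength of match.
                return matcher.Match(driving, candidates)
                              .OrderByDescending(r => r.Score)
                              .ToList();
            }
        }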

    Read the article

  • How should I architect my Model and Data Access layer objects in my website?

    - by Robin Winslow
    I've been tasked with designing the Data layer for a website at work, and I am very interested in an architecture that gives the best flexibility, maintainability and readability. I am acutely aware of the value in completely separating my actual Models from the Data Access layer, so that the Models are completely naive when it comes to Data Access. In this case it's particularly useful to do this, as the Models may be built from the Database or from a SOAP web service. So it seems to make sense to have Factories in my data access layer which create Model objects. So here's what I have so far (in my made-up pseudocode): class DataAccess.ProductsFromXml extends DataAccess.ProductFactory {} class DataAccess.ProductsFromDatabase extends DataAccess.ProductFactory {} These then get used in the controller in a fashion similar to the following: var xmlProductCreator = DataAccess.ProductsFromXml(xmlDataProvider); var databaseProductCreator = DataAccess.ProductsFromDatabase(databaseDataProvider); // Returns array of Product model objects var XmlProducts = xmlProductCreator.Products(); // Returns array of Product model objects var DbProducts = databaseProductCreator.Products(); So my question is, is this a good structure for my Data Access layer? Is it a good idea to use a Factory for building my Model objects from the data? Do you think I've misunderstood something? And are there any general patterns I should read up on for how to write my data access objects to create my Model objects?
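    For comparison, a minimal C# sketch of the same idea (names are illustrative, not from the post): both factories build the same naive Product model, and the caller only ever sees the common interface.

        using System.Collections.Generic;

        // The model stays ignorant of where its data came from.
        public class Product
        {
            public string Name { get; set; }
        }

        public interface IProductFactory
        {
            IList<Product> Products();
        }

        public class XmlProductFactory : IProductFactory
        {
            private readonly string xmlPayload;
            public XmlProductFactory(string xmlPayload) { this.xmlPayload = xmlPayload; }

            public IList<Product> Products()
            {
                // Parse the XML/SOAP payload here and map each record to a Product.
                return new List<Product> { new Product { Name = "example-from-xml" } };
            }
        }

        public class DatabaseProductFactory : IProductFactory
        {
            private readonly string connectionString;
            public DatabaseProductFactory(string connectionString) { this.connectionString = connectionString; }

            public IList<Product> Products()
            {
                // Query the database here and map each row to the same naive Product model.
                return new List<Product> { new Product { Name = "example-from-db" } };
            }
        }

    A controller (or a test) can then take an IProductFactory in its constructor and call Products() without knowing which data source is behind it.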

    Read the article

  • Oracle Enterprise Data Quality: A Leader in Customer Satisfaction

    - by Mala Narasimharajan
    It’s always good to hear feedback from practitioners – the ones who are in the trenches and have experienced both the good and the bad sides of enterprise software. Gartner recently released a report which surveyed 260 data quality professionals from around the world and found that most expressed considerable satisfaction with their data quality tool vendors as a whole. A couple of key findings stand out, however, including that Datanomic (acquired by Oracle) leads the pack in overall customer satisfaction among data quality tools. Read all about it right here http://bit.ly/Ay45SG

    Read the article

  • Partner Webcast - Focus on Oracle Data Profiling and Data Quality 11g

    - by lukasz.romaszewski(at)oracle.com
    Partner Webcast: Focus on Oracle Data Profiling and Data Quality 11g, February 24th, 12am CET. Oracle offers an integrated suite of Data Quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's yet unknown data challenges. Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis. Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single-byte and double-byte data are processed in local languages to provide a unique and centralized view of customers, products and services. During this briefing, Data Integration Solution Specialists will provide a technical overview and a walkthrough. Agenda: Oracle Data Integration strategy overview; a focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator (Oracle Data Profiling, Oracle Data Quality for Data Integrator, live demo, Q&A). Delivery format: this FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. To register, click here. For any questions please contact [email protected]

    Read the article

  • How can I have better sound quality?

    - by joelalmeidaptg
    I have been using Ubuntu for years now, and the latest Ubuntu 13.10 is working perfectly on my N56VZ. The image quality is awesome, better than on Windows, but there is one thing that is really bugging me and that kills my "cinema" experience... the sound. Ubuntu's sound quality isn't nearly as good as it is on Windows using Realtek (with Powerful on the equalizer). On Ubuntu the sound is "faded"; it isn't as clear as on Windows. This happens across the whole system: VLC, YouTube, Rhythmbox... I think it is PulseAudio itself that has horrible sound quality. So, does anyone know a solution for this? How can I get better sound quality on Ubuntu?

    Read the article

  • How do you formulate the Domain Model in Domain Driven Design properly (Bounded Contexts, Domains)?

    - by lko
    Say you have a few applications which deal with a few different Core Domains. The examples are made up and it's hard to put a real example with meaningful data together (concisely). In Domain Driven Design (DDD) when you start looking at Bounded Contexts and Domains/Sub Domains, it says that a Bounded Context is a "phase" in a lifecycle. An example of Context here would be within an ecommerce system. Although you could model this as a single system, it would also warrant splitting into separate Contexts. Each of these areas within the application have their own Ubiquitous Language, their own Model, and a way to talk to other Bounded Contexts to obtain the information they need. The Core, Sub, and Generic Domains are the area of expertise and can be numerous in complex applications. Say there is a long process dealing with an Entity for example a Book in a core domain. Now looking at the Bounded Contexts there can be a number of phases in the books life-cycle. Say outline, creation, correction, publish, sale phases. Now imagine a second core domain, perhaps a store domain. The publisher has its own branch of stores to sell books. The store can have a number of Bounded Contexts (life-cycle phases) for example a "Stock" or "Inventory" context. In the first domain there is probably a Book database table with basically just an ID to track the different book Entities in the different life-cycles. Now suppose you have 10+ supporting domains e.g. Users, Catalogs, Inventory, .. (hard to think of relevant examples). For example a DomainModel for the Book Outline phase, the Creation phase, Correction phase, Publish phase, Sale phase. Then for the Store core domain it probably has a number of life-cycle phases. public class BookId : Entity { public long Id { get; set; } } In the creation phase (Bounded Context) the book could be a simple class. public class Book : BookId { public string Title { get; set; } public List<string> Chapters { get; set; } //... } Whereas in the publish phase (Bounded Context) it would have all the text, release date etc. public class Book : BookId { public DateTime ReleaseDate { get; set; } //... } The immediate benefit I can see in separating by "life-cycle phase" is that it's a great way to separate business logic so there aren't mammoth all-encompassing Entities nor Domain Services. A problem I have is figuring out how to concretely define the rules to the physical layout of the Domain Model. A. Does the Domain Model get "modeled" so there are as many bounded contexts (separate projects etc.) as there are life-cycle phases across the core domains in a complex application? Edit: Answer to A. Yes, according to the answer by Alexey Zimarev there should be an entire "Domain" for each bounded context. B. Is the Domain Model typically arranged by Bounded Contexts (or Domains, or both)? Edit: Answer to B. Each Bounded Context should have its own complete "Domain" (Service/Entities/VO's/Repositories) C. Does it mean there can easily be 10's of "segregated" Domain Models and multiple projects can use it (the Entities/Value Objects)? Edit: Answer to C. There is a complete "Domain" for each Bounded Context and the Domain Model (Entity/VO layer/project) isn't "used" by the other Bounded Contexts directly, only via chosen paths (i.e. via Domain Events). The part that I am trying to figure out is how the Domain Model is actually implemented once you start to figure out your Bounded Contexts and Core/Sub Domains, particularly in complex applications. 
The goal is to establish the definitions which can help to separate Entities between the Bounded Contexts and Domains.
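    As an aside, a minimal sketch of the separation being described (names invented for illustration, not taken from the post): each Bounded Context owns its own Book model, the namespaces stand in for the contexts, and the contexts share nothing but the identity, communicating through a domain event rather than referencing each other's entity types.

        // Illustrative only: two contexts, two Book models, one shared identity.
        namespace Publishing.Creation
        {
            public class Book
            {
                public long Id { get; set; }
                public string Title { get; set; }
                public System.Collections.Generic.List<string> Chapters { get; set; }
            }
        }

        namespace Publishing.Publish
        {
            public class Book
            {
                public long Id { get; set; }
                public System.DateTime ReleaseDate { get; set; }
            }

            // Raised by the Publish context; the Store context subscribes and updates
            // its own Inventory model from the event data, never from Publish.Book.
            public class BookPublished
            {
                public long BookId { get; set; }
                public System.DateTime ReleaseDate { get; set; }
            }
        }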

    Read the article

  • What is a good strategy for binding view objects to model objects in C++?

    - by B.J.
    Imagine I have a rich data model that is represented by a hierarchy of objects. I also have a view hierarchy with views that can extract required data from model objects and display the data (and allow the user to manipulate the data). Actually, there could be multiple view hierarchies that represent and manipulate the model (e.g. an overview-detail view and a direct manipulation view). My current approach is for the controller layer to store a reference to the underlying model object in the View object. The view object can then get the current data from the model for display, and can send the model object messages to update the data. View objects are effectively observers of the model objects, and the model objects broadcast notifications when properties change. This approach allows all the views to update simultaneously when any view changes the model. Implemented carefully, this all works. However, it does require a lot of work to ensure that no view or model objects hold any stale references to model objects. The user can delete model objects or sub-hierarchies of the model at any time. Ensuring that all the view objects that hold references to deleted model objects are cleaned up is time-consuming and difficult. It feels like the approach I have been taking is not especially clean; while I don't want to have explicit code in the controller layer for mediating the communication between the views and the model, it seems like there must be a better (implicit) approach for establishing bindings between the view and the model and between related model objects. In particular, I am looking for an approach (in C++) that understands two key points: there is a many-to-one relationship between view and model objects, and if the underlying model object is destroyed, all the dependent view objects must be cleaned up so that no stale references exist. While shared_ptr and weak_ptr can be used to manage the lifetimes of the underlying model objects and allow for weak references from the view to the model, they don't provide for notification of the destruction of the underlying object (they allow a stale weak_ptr to be detected when it is used, but not proactive notification), and I need an approach that notifies the dependent objects that their weak reference is going away. Can anyone suggest a good strategy to manage this?

    Read the article

  • How to cut the line between quality and time?

    - by m3th0dman
    On one hand, I have been taught by various software engineering books ([1] as an example) that my job as a programmer is to make the best possible software: great design, flexibility, easy to maintain, etc. On the other hand, I realize that I actually write software for money and not for entertainment; although it is very nice to write good code, plan ahead, and refactor after writing, I wonder if it is always best for the business (after all, we should be responsible). Does the business always benefit from the best code? Maybe I'm over-engineering something, and it's not always useful? So how should I know when to stop in the process of achieving the best possible code? I am sure that experience is something that makes a difference here, but I believe this cannot be the only answer. [1] Uncle Bob, in Clean Code, says on page 6: They [managers] may defend the schedule and requirements with passion; but that’s their job. It’s your job to defend the code with equal passion.

    Read the article

  • How to Build Quality Back Links For Your Site

    Search engine optimization, known as SEO, relies on high-quality back links. In this regard, one should always know that a site will rank higher in the search engines if it has quality back links, which is to say that it will receive organic traffic from those who may be searching for what you're offering. To help you get quality back links, the following tips can help you.

    Read the article

  • Visual Studio project using the Quality Center API remains in memory

    - by Traveling Tech Guy
    Hi, I'm currently developing a connector DLL to HP's Quality Center. I'm using their (insert expletive) COM API to connect to the server. An interop wrapper gets created automatically by Visual Studio. My solution has 2 projects: the DLL and a tester application - essentially a form with buttons that call functions in the DLL. Everything works well - I can create defects, update them and delete them. When I close the main form, the application stops nicely. But when I call a function that returns a list of all available projects (to fill a combo box), if I close the main form, Visual Studio still shows the solution as running and I have to stop it. I've managed to pinpoint a single function in my code: when I call it, the solution remains "hung", and if I don't, it closes well. It's a call to a property on the TDC object, get_VisibleProjects, that returns a List (not the .NET one, but a type in the COM library) - I just iterate over it and return a proper list (that I later use to fill the combo box): public List<string> GetAvailableProjects() { List<string> projects = new List<string>(); foreach (string project in this.tdc.get_VisibleProjects(qcDomain)) { projects.Add(project); } return projects; } My assumption is that something gets retained in memory. If I run the EXE outside of Visual Studio it closes - but who knows what gets left behind in memory? My question is: how do I get rid of whatever calling this property returns? Shouldn't the GC handle this? Do I need to delve into pointers? Things I've tried: getting the list into a variable and setting it to null at the end of the function; adding a destructor to the class and nulling the tdc object; stepping through the tester application all the way out, when the form closes and the Main function ends - it closes, but Visual Studio still shows I'm running. Thanks for your assistance!
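    A common cause with COM interop is that the runtime callable wrapper for the returned List keeps the COM server alive until the wrapper is garbage collected. One mitigation (a sketch only, assuming the returned object is enumerable exactly as in the code above) is to hold the returned COM object in a variable and release it explicitly when done:

        using System.Collections;
        using System.Collections.Generic;
        using System.Runtime.InteropServices;

        public List<string> GetAvailableProjects()
        {
            List<string> projects = new List<string>();
            object comList = this.tdc.get_VisibleProjects(qcDomain); // keep a handle so it can be released
            try
            {
                foreach (string project in (IEnumerable)comList)
                {
                    projects.Add(project);
                }
            }
            finally
            {
                // Drop the RCW's reference count so the COM object does not outlive the call.
                Marshal.FinalReleaseComObject(comList);
            }
            return projects;
        }

    If the process still lingers, forcing a collection with GC.Collect() followed by GC.WaitForPendingFinalizers() on shutdown is a common (if blunt) way to confirm that a leaked interop wrapper was the culprit.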

    Read the article

  • ASP.NET. MVC2. Entity Framework. Cannot pass primary key value back from view to [HttpPost]

    - by Paul Connolly
    I pass a ViewModel (which contains a "Person" object) from the "EditPerson" controller action into the view. When the form is posted back from the view, the ActionResult receives all of the Person properties except the ID (which comes through as zero instead of its real integer value). Can anyone tell me why? The controllers look like this:

    public ActionResult EditPerson(int personID) { var personToEdit = repository.GetPerson(personID); FormationViewModel vm = new FormationViewModel(); vm.Person = personToEdit; return View(vm); }

    [HttpPost] public ActionResult EditPerson(FormationViewModel model) // <-- passes in all properties except ID { // Persistence code }

    The View looks like this:

    <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Afp.Models.Formation.FormationViewModel>" %>
    <% using (Html.BeginForm()) { %>
    <%= Html.ValidationSummary(true) %>
    <fieldset>
    <legend>Fields</legend>
    <div class="editor-label"> <%= Html.LabelFor(model => model.Person.Title) %> </div>
    <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.Title) %> <%= Html.ValidationMessageFor(model => model.Person.Title) %> </div>
    <div class="editor-label"> <%= Html.LabelFor(model => model.Person.Forename) %> </div>
    <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.Forename) %> <%= Html.ValidationMessageFor(model => model.Person.Forename) %> </div>
    <div class="editor-label"> <%= Html.LabelFor(model => model.Person.Surname) %> </div>
    <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.Surname) %> <%= Html.ValidationMessageFor(model => model.Person.Surname) %> </div>
    <%-- <div class="editor-label"> <%= Html.LabelFor(model => model.DOB) %> </div>
    <div class="editor-field"> <%= Html.TextBoxFor(model => model.DOB, String.Format("{0:g}", Model.DOB)) %> <%= Html.ValidationMessageFor(model => model.DOB) %> </div> --%>
    <div class="editor-label"> <%= Html.LabelFor(model => model.Person.Nationality) %> </div>
    <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.Nationality) %> <%= Html.ValidationMessageFor(model => model.Person.Nationality) %> </div>
    <div class="editor-label"> <%= Html.LabelFor(model => model.Person.Occupation) %> </div>
    <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.Occupation) %> <%= Html.ValidationMessageFor(model => model.Person.Occupation) %> </div>
    <div class="editor-label"> <%= Html.LabelFor(model => model.Person.CountryOfResidence) %> </div>
    <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.CountryOfResidence) %> <%= Html.ValidationMessageFor(model => model.Person.CountryOfResidence) %> </div>
    <div class="editor-label"> <%= Html.LabelFor(model => model.Person.PreviousNameForename) %> </div>
    <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.PreviousNameForename) %> <%= Html.ValidationMessageFor(model => model.Person.PreviousNameForename) %> </div>
    <div class="editor-label"> <%= Html.LabelFor(model => model.Person.PreviousSurname) %> </div>
    <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.PreviousSurname) %> <%= Html.ValidationMessageFor(model => model.Person.PreviousSurname) %> </div>
    <div class="editor-label"> <%= Html.LabelFor(model => model.Person.Email) %> </div>
    <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.Email) %> <%= Html.ValidationMessageFor(model => model.Person.Email) %> </div>
    <p> <input type="submit" value="Save" /> </p>
    </fieldset>
    <% } %>

    And the Person class looks like:

    [MetadataType(typeof(Person_Validation))] public partial class Person { public Person() { } }

    [Bind(Exclude = "ID")] public class Person_Validation { public int ID { get; private set; } public string Title { get; set; } public string Forename { get; set; } public string Surname { get; set; } public System.DateTime DOB { get; set; } public string Nationality { get; set; } public string Occupation { get; set; } public string CountryOfResidence { get; set; } public string PreviousNameForename { get; set; } public string PreviousSurname { get; set; } public string Email { get; set; } }

    And the ViewModel:

    public class FormationViewModel { public Company Company { get; set; } public Address RegisteredAddress { get; set; } public Person Person { get; set; } public PersonType PersonType { get; set; } public int CurrentStep { get; set; } }
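    One thing to check first (a sketch of the usual fix, not taken from the original post): nothing in the form above renders Person.ID, so the key is never included in the POST and the default model binder leaves it at 0. Round-tripping the key in a hidden field inside the form normally resolves this; it is also worth confirming that the [Bind(Exclude = "ID")] attribute and the private setter on ID are not suppressing the value once it is posted.

        <%-- inside the Html.BeginForm block, alongside the other editors --%>
        <%= Html.HiddenFor(model => model.Person.ID) %>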

    Read the article

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you could easily find them back, yet they can be left inside the complete entity model altogether so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views which are located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate of each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all supported O/R mapper frameworks. Situation one: grouping entities in a single model. This situation is common for entity models which are dense, so many relationships exist between all sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has for each sub-group a schema, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism which is what this situation is all about: we group the elements for visual purposes, it has no real meaning for the model nor the code generated. Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the target schema an element is in which is being reverse engineered, as the group it will be in. This is convenient if you already have categorized tables/views in schemas, like which is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which will guide us with the selection of the elements which relational model data should be retrieved, which we can later on use to reverse engineer to an entity model. The first step after specifying which database server connect to is to select these elements. below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them. After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. We select in the catalog explorer the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu, we select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top to mark them all to be added to the project. 
As you can see LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen so we'll simply click Add to Project. This gives the following result:   (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped. Just to see how dense this model is, I've expanded the relationships of Employee: As you can see, it has relationships with entities from three other groups than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base, however it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier. Now let's look at another situation, namely where we work with a single database while we want to have multiple models and for each model a separate code base. Situation two: grouping entities in separate models within the same project. To get rid of the entities to see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To switch LLBLGen Pro to see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and select again Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases. To see this variant of the grouping feature, seeing the groups as separate projects, in action, we'll generate code from the project with the two groups we just created: select from the main menu: Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use, the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected the code generator has simply generated two code bases, one for Person and one for Production: The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class. //... using System.Xml.Serialization; using AdventureWorks.Person; using AdventureWorks.Person.HelperClasses; using AdventureWorks.Person.FactoryClasses; using AdventureWorks.Person.RelationClasses; using SD.LLBLGen.Pro.ORMSupportClasses; namespace AdventureWorks.Person.EntityClasses { //... /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary> [Serializable] public partial class AddressEntity : CommonEntityBase //... 
The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so with a change of one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity into a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models which makes maintenance easier than when you'd have a separate project for each group, with its own relational model data. Conclusion LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.
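    A hypothetical consumer of the two generated code bases (the entity and property names below are assumptions based on the AdventureWorks tables, not taken from the post): because each group gets its own namespace, both generated projects can be referenced side by side in one solution with no compile-time dependency between them.

        using AdventureWorks.Person.EntityClasses;
        using AdventureWorks.Production.EntityClasses;

        public class CatalogueMailing
        {
            // Entities from the Person and Production models used together in one consumer.
            public string Describe(AddressEntity address, ProductEntity product)
            {
                return string.Format("Ship '{0}' to address #{1}", product.Name, address.AddressId);
            }
        }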

    Read the article

  • How to set up a one-man research in the difference between BDD and Waterfall?

    - by Martijn van der Maas
    Earlier, I asked a question about how to measure the quality of a project. The outcome of that question was that the quality of a project can be divided into two parts: internal quality (code quality, measurable by code quality metrics) and external quality (acceptance testing: how well the software meets the requirements). Based on that, I want to set up some research and validate the outcome of the project. The problem is that I will conduct this research on my own, so it's not possible for me to run the project once in BDD style and once more in waterfall style. It's also not possible to compare BDD and waterfall projects on a larger scale, because there are not yet enough measurable BDD projects, given how young BDD is. So, my question is: has anybody faced this problem? How could I execute my experiment in such a way that it is of scientific value?

    Read the article

  • ISO 12207: SQA as a supporting process?

    - by user970696
    I have been following ISO 12207 for my thesis dealing with software quality. Now I need to explain quality assurance, and here comes the problem: according to this standard, QA is a supporting process, separate from but on the same level as the verification, validation and auditing processes. According to other sources, Quality Assurance is basically a high-level activity making sure that standards, norms etc. are being followed. Usually Quality Control (testing, reviewing, inspections, also V&V) is considered part of Quality Assurance; it measures quality and provides QA with this information so it can be acted upon. I somehow do not understand what QA is supposed to be according to this ISO standard and what activities it should perform. Also, the standard does not mention QC except in a footnote.

    Read the article

  • Changing Recovery Model in Replicated Database

    - by Rob
    I am now the proud owner of two servers that replicate with each other. I had nothing to do with the install, but (of course) now I have to support the databases. Both databases are in the Simple recovery model, but the users want to ensure as little data loss as possible, so I'm thinking that I should change the recovery model over to Full and start doing transaction log backups. I wasn't planning on backing up the subscribing database, only the publisher. Is this the right plan? Do I need to switch both the subscriber and the publisher to Full, or can I leave the subscriber in Simple and have the publisher in Full? When I change the recovery model in one (or both), do the databases need to be offline? Thanks

    Read the article

  • .Net MVC UserControl - Form values not mapped to model

    - by Andreas
    Hi I have a View that contains a usercontrol. The usercontrol is rendered using: <% Html.RenderPartial("GeneralStuff", Model.General, ViewData); %> My problem is that the usercontrol renders nicely with values from the model but when I post values edited in the usercontrol they are not mapped back to Model.General. I know I can find the values in Request.Form but I really thought that MVC would manage to map these values back to the model. My usercontrol: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<namespace.Models.GeneralViewModel>" %> <fieldset> <div> <%= Html.LabelFor(model => model.Value)%> <%= Html.TextBoxFor(model => model.Value)%> </div> </fieldset> I'm using .Net MVC 2 Thanks for any help!
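    One common explanation (a sketch of a possible fix, not the poster's code): with RenderPartial the inputs are named after the partial's own model (e.g. "Value"), so on post-back the default binder has nothing called "General.Value" to map onto the parent view model. Rendering the child model through a typed editor template instead makes MVC 2 emit the prefixed names the binder expects:

        <%-- In the parent view, instead of Html.RenderPartial("GeneralStuff", Model.General, ViewData): --%>
        <%= Html.EditorFor(model => model.General) %>
        <%-- With the existing partial moved to Views/Shared/EditorTemplates/GeneralViewModel.ascx,
             its TextBoxFor(model => model.Value) now renders name="General.Value",
             which the default model binder maps back onto Model.General on post. --%>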

    Read the article

  • Define "Validation in the Model"

    - by sunwukung
    There have been a couple of discussions regarding the location of user input validation: http://stackoverflow.com/questions/659950/should-validation-be-done-in-form-objects-or-the-model http://stackoverflow.com/questions/134388/where-do-you-do-your-validation-model-controller-or-view These discussions were quite old, so I wanted to ask the question again to see if anyone had any fresh input. If not, I apologise in advance. If you come from the "validation in the Model" camp, does Model mean the OOP representation of data (i.e. Active Record/Data Mapper) as an "Entity" (to borrow the DDD terminology)? In that case you would, I assume, want all Model classes to inherit common validation constraints. Or can these rules simply be part of a service in the Model, i.e. a validation service? For example, could you consider Zend_Form and its validation classes part of the Model? The concept of a Domain Model does not appear to be limited to Entities, and so validation may not necessarily need to be confined to those Entities. It seems that you would otherwise require a lot of potentially superfluous handing of values and responses back and forth between forms and "Entities", and in some instances you may not persist the data received from user input, or receive it from user input at all.
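    A small illustrative sketch of the two placements being debated (invented names, shown in C# purely for brevity): an invariant that lives on the entity itself versus a separate validation service in the domain layer that forms and controllers can call, even for input that never becomes a persisted entity.

        using System.Collections.Generic;

        public class Customer
        {
            public string Email { get; set; }

            // Entity-level invariant: the object can refuse obviously invalid state itself.
            public bool HasValidEmail()
            {
                return !string.IsNullOrWhiteSpace(Email) && Email.Contains("@");
            }
        }

        public interface IValidator<T>
        {
            IEnumerable<string> Validate(T candidate);
        }

        // Domain-level validation service: reusable by forms, importers and web services,
        // even when the validated values are never persisted on an entity.
        public class CustomerValidator : IValidator<Customer>
        {
            public IEnumerable<string> Validate(Customer candidate)
            {
                if (!candidate.HasValidEmail())
                {
                    yield return "A valid e-mail address is required.";
                }
            }
        }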

    Read the article

  • Experience Oracle Enterprise Data Quality at OpenWorld 2012

    - by Mala Narasimharajan
    This year Enterprise Data Quality features sessions designed to cover topics above and beyond product strategy and roadmap. Specifically, we have planned sessions around data governance and how EDQ enables data acquisition, migration and integration. In addition, check out our hands-on lab to see for yourself the new enhancements and integrations to EDQ. Finally, no OpenWorld is complete without fully exploring the DemoGrounds – come and learn how enterprise data quality is critical to enterprise success and fit-for-purpose data. SESSIONS: Mon Oct 1, 1:45 PM - 2:45 PM, Oracle Enterprise Data Quality: Product Overview and Roadmap, Moscone West 3006. Wed Oct 3, 1:15 PM - 2:15 PM, Data Preparation and Ongoing Governance with the Oracle Enterprise Data Quality Platform, Moscone West 3000. Thurs Oct 4, 12:45 PM - 1:45 PM, Data Acquisition, Migration, and Integration with the Oracle Enterprise Data Quality Platform, Moscone West 3005. HANDS-ON LAB: Mon Oct 1, 4:45 PM - 5:45 PM, Introduction to the Oracle Enterprise Data Quality Application, Marriott Marquis, Salon ½. DEMO: Trusted Data with Oracle Enterprise Data Quality, Moscone South, Right - S-243.

    Read the article
