Search Results

Search found 8349 results on 334 pages for 'entity groups'.


  • Is there a clean separation of my layers with this attempt at Domain Driven Design in XAML and C#

    - by Buddy James
    I'm working on an application using a mixture of TDD and DDD. I'm working hard to separate the layers of my application, and that is where my question comes in. My solution is laid out as follows:

        Solution
          MyApp.Domain (WinRT class library)
            Entity (folder)
              Interfaces (folder)
                IPost.cs
              BlogPosts.cs (implementation of IPost)
            Service (folder)
              Interfaces (folder)
                IDataService.cs
              BlogDataService.cs (implementation of IDataService)
          MyApp.Presentation (Windows 8 XAML + C# application)
            ViewModels (folder)
              BlogViewModel.cs
            App.xaml
            MainPage.xaml (contains a property of type BlogViewModel)
          MyApp.Tests (WinRT unit testing project used for my TDD)

    I'm planning to use my ViewModel with the XAML UI. I'm writing a test and defining the interfaces in my system, and I have the following code thus far:

        [TestMethod]
        public void Get_Zero_Blog_Posts_From_Presentation_Layer_Returns_Empty_Collection()
        {
            IBlogViewModel viewModel = _container.Resolve<IBlogViewModel>();
            viewModel.LoadBlogPosts(0);
            Assert.AreEqual(0, viewModel.BlogPosts.Count, "There should be 0 blog posts.");
        }

    viewModel.BlogPosts is an ObservableCollection<IPost>. My first thought is that I'd like the LoadBlogPosts method on the ViewModel to call a static method on the BlogPost entity. My problem is that I feel I need to inject the IDataService into the entity object to promote loose coupling. Here are the two options I'm struggling with:

    1. Don't use a static method; use a member method on the BlogPost entity. Have BlogPost take an IDataService in the constructor and use dependency injection to resolve the BlogPost instance and the IDataService implementation.
    2. Don't use the entity to call the IDataService. Put the IDataService in the constructor of the ViewModel and use my container to resolve the IDataService when the ViewModel is instantiated.

    With option one the layers will look like this: ViewModel (presentation layer) - Entity (domain layer) - IDataService (service layer). With option two: ViewModel (presentation layer) - IDataService (service layer).
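
    As a rough illustration of option two, a minimal sketch follows. The interface shapes are assumptions (the question does not show IDataService's or IBlogViewModel's members; GetPosts in particular is invented for the example), so treat this as a sketch rather than the asker's actual code:

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        // Stub shapes for compilation; the real members live in MyApp.Domain.
        public interface IPost { }
        public interface IDataService
        {
            IEnumerable<IPost> GetPosts(int count); // assumed member
        }
        public interface IBlogViewModel
        {
            ObservableCollection<IPost> BlogPosts { get; }
            void LoadBlogPosts(int count);
        }

        public class BlogViewModel : IBlogViewModel
        {
            private readonly IDataService _dataService;

            // The container injects IDataService here, so the presentation
            // layer never references the concrete BlogDataService.
            public BlogViewModel(IDataService dataService)
            {
                _dataService = dataService;
                BlogPosts = new ObservableCollection<IPost>();
            }

            public ObservableCollection<IPost> BlogPosts { get; private set; }

            public void LoadBlogPosts(int count)
            {
                BlogPosts.Clear();
                foreach (var post in _dataService.GetPosts(count))
                    BlogPosts.Add(post);
            }
        }

    With this shape, the entity stays persistence-ignorant and the test's _container.Resolve<IBlogViewModel>() call wires the service in automatically.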

    Read the article

  • NHibernate Pitfalls: Cascades

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. For entities that have associations – one-to-one, one-to-many, many-to-one or many-to-many – NHibernate needs to know what to do with their related entities at three particular moments: when saving, updating or deleting. There are two possible behaviors: either ignore these related entities or cascade changes to them. NHibernate allows setting the cascade behavior for each association, and the default behavior is not to cascade (ignore). The possible cascade options are:

    - None: ignore; this is the default.
    - Save-Update: if the entity is being saved or updated, also save any related entities that are either not yet saved or have been modified, and associate these related entities with the root entity. Generally safe.
    - Delete: if the entity is being deleted, also delete the related entities. This is only useful for parent-child relations.
    - Delete-Orphan: identical to Delete, with the addition that if a related entity is removed from the association – orphaned – it is also deleted. Also only for parent-child relations.
    - All: combination of Save-Update and Delete; usually that's what we want (for parent-child relations, of course).
    - All-Delete-Orphan: same as All, plus deletes any related entities that lose their relationship.

    In summary, Save-Update is generally what you want in most cases. As for the Delete variations, they should only be used if the related entities depend on the root entity (parent-child), so that deleting the root entity but not its related entities would result in a constraint violation in the database.
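
    For reference, in an hbm.xml mapping the cascade style is set per association; a minimal sketch (the class, collection and column names here are invented, not from the post):

        <class name="Order">
          <id name="Id">
            <generator class="native" />
          </id>
          <!-- Parent-child collection: deleting the Order also deletes its lines,
               and removing a line from the collection deletes the orphan. -->
          <bag name="Lines" cascade="all-delete-orphan">
            <key column="OrderId" />
            <one-to-many class="OrderLine" />
          </bag>
          <!-- Plain association: only save/update is cascaded. -->
          <many-to-one name="Customer" cascade="save-update" />
        </class>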

    Read the article

  • Reflective discovery of an inner class in an API

    - by wassup
    Let me ask you – as this has bothered me for quite a while but appears, subjectively, to be the best solution to my problem – whether reflective discovery of an inner class for API purposes is such a bad idea. First, let me explain what I mean by "reflective discovery" and all that stuff. I am sketching an API for a Java database system that will be centered around block-based entities (don't ask me what that means – that's a long story), and those entities can be read and returned to the Java code as objects subclassed from the Entity class. I have an Entity.Factory class that, by means of fluent interfaces, takes a Class<? extends Entity> argument and then uses an instance of Section.Builder, Property.Builder, or whatever builder the entity has, to put it into the back-end storage. The idea of registering all entity types and their builders just doesn't appeal to me, so I thought the closest solution that would satisfy my design needs would be to discover, using reflection, all inner classes of Entity classes and find the one called Builder. Looking for some expert insight :) And if I missed some important design details (which could happen, as I tried to make this question as concise as possible), just tell me and I'll add them.

    Read the article

  • OIM 11g notification framework

    - by Rajesh G Kumar
    OIM 11g has introduced an improved, template-based notifications framework. The new release removes the limitation of sending only text-based (out-of-the-box) emails and adds support for HTML. It provides built-in out-of-the-box templates for events like 'Reset Password', 'Create User Self Service', 'User Deleted', etc., and new APIs that support custom templates for sending notifications out of OIM. The OIM notification framework is based on events, notification templates and template resolvers, defined as follows:

    - Events are defined as XML files and imported into the MDS database in order to make a notification event available for use.
    - Notification templates are created using the OIM advanced administration console. A template contains the text and the substitution 'variables' which will be replaced with the data provided by the template resolver. Templates support internationalization and can be defined as HTML or as simple text.
    - A template resolver is a Java class responsible for providing attributes and data at runtime and design time. It must be deployed following the OIM plug-in framework. Resolver data provided at design time is used by the end user to design the notification template with the available entity variables; at runtime the resolver provides the data that replaces the designed variables with the values displayed to recipients.

    The steps to define custom notifications in OIM 11g are:

    1. Define the notification event
    2. Create the custom template resolver class
    3. Create the template with the notification contents to be sent to recipients
    4. Create event-triggering spots in OIM

    1. Notification event metadata

    The notification event is defined as an XML file which needs to be imported into the MDS database. An event file must be compliant with the schema defined by the notification engine, NotificationEvent.xsd. The event file contains basic information about the event. XSD location in the MDS database: "/metadata/iam-features-notification/NotificationEvent.xsd". The schema file can be viewed by exporting it from MDS using the weblogicExportMetadata.sh script. Sample notification event metadata definition:

        1:  <?xml version="1.0" encoding="UTF-8"?>
        2:  <Events xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="../../../metadata/NotificationEvent.xsd">
        3:  <EventType name="Sample Notification">
        4:  <StaticData>
        5:  <Attribute DataType="X2-Entity" EntityName="User" Name="Granted User"/>
        6:  </StaticData>
        7:  <Resolver class="com.iam.oim.demo.notification.DemoNotificationResolver">
        8:  <Param DataType="91-Entity" EntityName="Resource" Name="ResourceInfo"/>
        9:  </Resolver>
        10: </EventType>
        11: </Events>

    Line 1: XML file notation tag.
    Line 2: Events is the root tag.
    Line 3: The EventType tag declares a unique event name which will be available for template design.
    Line 4: The StaticData element lists parameters that are not data dependent; in other words, it defines the static data displayed when the notification is configured. An example of static data is the User entity, which does not depend on any other data and has the same set of attributes for all event instances and notification templates. The available attributes are used as substitution tokens in the template.
    Line 5: The Attribute tag is a child of StaticData and declares the entity and its data type with a unique reference name. The User entity is the most commonly used entity as StaticData.
    Line 6: StaticData closing tag.
    Line 7: The Resolver tag defines the resolver class, which must be defined for each notification. It defines what parameters are available on the notification creation screen and how those parameters are replaced when the notification is sent. The resolver class resolves the data dynamically at runtime and displays the attributes in the UI.
    Line 8: The Param element lists parameters that are data dependent. An example of a data-dependent (dynamic) entity is a resource object that the user can select at runtime. A notification template is configured for the resource object; corresponding to the resource object field, a lookup is displayed in the UI. When a user selects the event, the call goes to the resolver class to fetch the fields displayed in the Available Data list, from which the user can select the attribute to be used on the template. Param is a child tag declaring the entity and its data type with a unique reference name.
    Line 9: Resolver closing tag.
    Line 10: EventType closing tag.
    Line 11: Events closing tag.

    Note: DataType must be declared as "X2-Entity" for the User entity and "91-Entity" for Resource or Organization entities. The dynamic entities supported for lookup are user, resource and organization. Once the notification event metadata is defined, it needs to be imported into the MDS database. The fully qualified resolver class name must be defined in the XML, but the class does not need to be loaded into OIM yet (it can be loaded later).

    2. Coding the notification resolver

    All event owners have to provide a resolver class which resolves the data dynamically at runtime. A custom resolver class must implement the interface oracle.iam.notification.impl.NotificationEventResolver and override its two methods with the actual implementation:

    1. public List<NotificationAttribute> getAvailableData(String eventType, Map<String, Object> params)

    This API returns the list of available data variables. These variables are shown in the UI while creating/modifying templates and let the user select variables to embed as tokens in the messages on the template; the tokens are replaced by the values passed by the resolver class at runtime. The parameter "eventType" specifies the event name for which the template is to be read. The parameter "params" is a map with the entity name and the corresponding value for which available data is to be fetched. Sample code snippet:

        List<NotificationAttribute> list = new ArrayList<NotificationAttribute>();
        long objKey = (Long) params.get("resource");
        // Form field details based on the resource object key
        HashMap<String, Object> formFieldDetail = getObjectFormName(objKey);
        for (Iterator<?> itrd = formFieldDetail.entrySet().iterator(); itrd.hasNext(); ) {
            NotificationAttribute availableData = new NotificationAttribute();
            Map.Entry formDetailEntrySet = (Entry<?, ?>) itrd.next();
            String fieldLabel = (String) formDetailEntrySet.getValue();
            availableData.setName(fieldLabel);
            list.add(availableData);
        }
        return list;

    2. public HashMap<String, Object> getReplacedData(String eventType, Map<String, Object> params)

    This API returns the resolved values of the variables present on the template at the time the notification is sent. The parameter "eventType" specifies the event name for which the template is to be read. The parameter "params" is a map with the base values, such as usr_key and obj_key, required by the resolver implementation to resolve the rest of the variables in the template. Sample code snippet:

        HashMap<String, Object> resolvedData = new HashMap<String, Object>();
        String firstName = getUserFirstname(params.get("usr_key"));
        resolvedData.put("fname", firstName);
        String lastName = getUserLastName(params.get("usr_key"));
        resolvedData.put("lname", lastName);
        resolvedData.put("count", "1 million");
        return resolvedData;

    This code must be deployed as per the OIM 11g plug-in framework. The XML file defining the plug-in is as below:

        <?xml version="1.0" encoding="UTF-8"?>
        <oimplugins xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <plugins pluginpoint="oracle.iam.notification.impl.NotificationEventResolver">
            <plugin pluginclass="com.iam.oim.demo.notification.DemoNotificationResolver" version="1.0" name="Sample Notification Resolver"/>
          </plugins>
        </oimplugins>

    3. Defining the template

    To create a notification template:

    1. Log in to the Oracle Identity Administration console.
    2. Click the System Management tab and then click the Notification tab.
    3. From the Actions list on the left pane, select Create.
    4. On the Create page, enter values under the Template Information section: Template Name: Demo template; Description Text: Demo template.
    5. Under the Event Details section, from the Available Event list, select the event for which the notification template is to be created. Depending on your selection, other fields are displayed in the Event Details section. Note that the notification event Sample Notification created in the previous step is being used here. The contents of the Available Data drop-down are based on the event XML StaticData tag; the drop-down lists all attributes of the entities defined in that tag. Once you select an element in the drop-down it shows up in the Selected Data text field; copy it and paste it into either the message subject or the message body, prefixing it with the $ symbol. For example, if the list has an attribute First_Name, the message body will contain it as $First_Name, which the resolver will parse and replace with the actual value at runtime.
    6. In the Resource field, select a resource from the lookup. This is the dynamic data defined by the Param element in the XML definition. Based on the selected resource, the getAvailableData method of the resolver is called to fetch the resource object attribute details, if the method is overridden with the required implementation. For the current scenario, Map<String, Object> params is populated with the object key as value and "resource" as key; this is the only input provided to the resolver at design time. You need to implement the further logic that fetches the object attribute details to populate the Available Data list. List strings must not contain spaces: if an object attribute name contains a space, implement logic to replace the space with '_' before populating the list (for example, "First Name" becomes "First_Name"). Spaces are not supported when the token is parsed and replaced with the real value at runtime. Note that Available Data and Selected Data are used for the substitution token definitions only; they do not define the final data sent in the notification. OIM invokes the resolver class to get the data and make the substitutions.
    7. Under the Locale Information section, enter values in the following fields: to specify a form of encoding, select either UTF-8 or ASCII; in the Message Subject field, enter a subject for the notification; from the Type options, select the data type in which you want to send the message (HTML or Text/Plain); in the Short Message field, enter a gist of the message in very few words; in the Long Message field, enter the message that will be sent as the notification, with the available-data tokens to be replaced by the resolver at runtime.
    8. After you have entered the required values in all the fields, click Save. A message is displayed confirming the creation of the notification template. Click OK.

    4. Triggering the event

    A notification event can be triggered from different places in OIM. The logic behind the triggering must be coded and plugged into OIM. Examples of triggering points for notifications:

    - Event handlers: post-process notifications for specific data updates on OIM users
    - Process tasks: to notify users that a provisioning task was executed by OIM
    - Scheduled tasks: to notify something related to the task

    The scheduled job has two parameters: Template Name, which defines the notification template to be sent, and User Login, which defines the user record that will provide the data sent in the notification. Sample code snippet:

        public void execute(String templateName, String userId) {
            try {
                NotificationService notService = Platform.getService(NotificationService.class);
                NotificationEvent eventToSend = this.createNotificationEvent(templateName, userId);
                notService.notify(eventToSend);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        private NotificationEvent createNotificationEvent(String poTemplateName, String poUserId) {
            NotificationEvent event = new NotificationEvent();
            String[] receiverUserIds = { poUserId };
            event.setUserIds(receiverUserIds);
            event.setTemplateName(poTemplateName);
            event.setSender(null);
            HashMap<String, Object> templateParams = new HashMap<String, Object>();
            templateParams.put("USER_LOGIN", poUserId);
            event.setParams(templateParams);
            return event;
        }

        public HashMap getAttributes() { return null; }

        public void setAttributes() {}

    Read the article

  • Is it possible to group records belonging to an entity in dbunit?

    - by Joshua
    Our JPA entity model auto-generates primary key identifiers for the user and user_address tables. Would it be possible to group these entities (given below) via dbunit, so that I don't need to provide either the primary key or the foreign key reference user_address.user_id? It is getting very hard to maintain these keys (i.e. I would prefer to group the primary record 'user' and the child records 'user_address' so that dbunit can group them automatically by looking at the entity metadata). Is this achievable?

        <user id="1" first_name="Josh" creation_date="2009-07-11 15:45:28"/>
        <user_address id="1" user_id="1" address="Main St" city="Los Angeles"/>

    I would prefer something like this:

        <!-- First user -->
        <user first_name="Josh" creation_date="2009-07-11 15:45:28"/>
        <user_address address="Main St" city="Los Angeles"/>

        <!-- Second user -->
        <user first_name="Mary" creation_date="2009-07-11 15:45:28"/>
        <user_address address="Taylors St" city="San Jose"/>

    Read the article

  • CRM2011 - "The given key was not present in the dictionary"

    - by DJZorrow
    I am what you call a "n00b" in CRM plugin development. I am trying to write a plugin for Microsoft Dynamics CRM 2011 that will create a new activity entity when you create a new contact. I want this activity entity to be associated with the contact entity. This is my current code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Xrm.Sdk;

        namespace ITPH_CRM_Deactivate_Account_SSP_Disable
        {
            public class SSPDisable_Plugin : IPlugin
            {
                public void Execute(IServiceProvider serviceProvider)
                {
                    // Obtain the execution context from the service provider.
                    IPluginExecutionContext context = (IPluginExecutionContext)
                        serviceProvider.GetService(typeof(IPluginExecutionContext));
                    IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)
                        serviceProvider.GetService(typeof(IOrganizationServiceFactory));
                    IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

                    if (context.InputParameters.Contains("Target") && context.InputParameters["target"] is Entity)
                    {
                        Entity entity = context.InputParameters["Target"] as Entity;
                        if (entity.LogicalName != "account")
                        {
                            return;
                        }

                        Entity followup = new Entity();
                        followup.LogicalName = "activitypointer";
                        followup.Attributes = new AttributeCollection();
                        followup.Attributes.Add("subject", "Created via Plugin.");
                        followup.Attributes.Add("description", "This is generated by the magic of C# ...");
                        followup.Attributes.Add("scheduledstart", DateTime.Now.AddDays(3));
                        followup.Attributes.Add("actualend", DateTime.Now.AddDays(5));

                        if (context.OutputParameters.Contains("id"))
                        {
                            Guid regardingobjectid = new Guid(context.OutputParameters["id"].ToString());
                            string regardingobjectidType = "account";
                            followup["regardingobjectid"] = new EntityReference(regardingobjectidType, regardingobjectid);
                        }

                        service.Create(followup);
                    }
                }
            }
        }

    But when I run this code, I get an error as I try to create a new contact in the CRM environment. The error is: "The given key was not present in the dictionary" (link *1). It pops up right as I try to save the new contact. Link *1: http://puu.sh/4SXrW.png (translated bold text: "Error on business process"). Thanks for any help or suggestions :)

    Read the article

  • Getting 404 when attempting to POST file to Google Cloud Storage from service account

    - by klactose
    I'm wondering if anyone can tell me the proper syntax and formatting for a service account to send a POST Object to Bucket request? I'm attempting it programmatically using the HttpComponents library. I manage to get a token from my GoogleCredential, but every time I construct the POST request, I get:

        HTTP/1.1 403 Forbidden
        <?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message><Details>bucket-name</Details></Error>

    The Google documentation that describes the request method mentions posting using HTML forms, but I'm hoping that wasn't suggesting the ONLY way to get the job done. I know that HttpComponents has a way to explicitly create form data by using UrlEncodedFormEntity, but it doesn't support multipart data, which is why I went with the MultipartEntity class. My code is below:

        MultipartEntity entity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE);
        String token = credential.getAccessToken();
        entity.addPart("Authorization", new StringBody("OAuth " + token));
        String date = formatDate(new Date());
        entity.addPart("Date", new StringBody(date));
        entity.addPart("Content-Encoding", new StringBody("UTF-8"));
        entity.addPart("Content-Type", new StringBody("multipart/form-data"));
        entity.addPart("bucket", new StringBody(bucket));
        entity.addPart("key", new StringBody("fileName"));
        entity.addPart("success_action_redirect", new StringBody("/storage"));
        File uploadFile = new File("pathToFile");
        FileBody fileBody = new FileBody(uploadFile, "text/xml");
        entity.addPart("file", fileBody);
        httppost.setEntity(entity);
        System.out.println("Posting URI = " + httppost.toString());
        HttpResponse response = client.execute(httppost);
        HttpEntity resp_entity = response.getEntity();

    As I mentioned, I am able to get an actual token, so I'm pretty sure the problem is in how I've formed the request, as opposed to not being properly authenticated. Keep in mind this is being performed by a service account, which means it does have read/write access. Thanks for reading, and I appreciate any help!

    Read the article

  • Configuring JPA Primary key sequence generators

    - by pachunoori.vinay.kumar(at)oracle.com
    This article describes the JPA feature of generating and assigning unique sequence numbers to a JPA entity, and provides information on the JPA sequence generator annotations and their usage.

    Use case: adding a new employee to the organization using an Employee form should assign a unique employee id. The following steps implement the generation of unique employee numbers using the JPA generators feature.

    Steps to configure JPA generators:

    1. Generate the Employee entity using the "Entities from Table" wizard.
    2. Create a database connection, select the table "Employee" for which the entity will be generated, and finish the wizard with the default selections.
    3. Select the offline database sources - Schema - and create a Sequence object, or copy it to the offline DB from the online database connection.
    4. Open persistence.xml in the application navigator, select the entity "Employee" in the structure view, and select the "Generators" tab in the flat editor.
    5. In the Sequence Generator section, enter the sequence name "InvSeq" and select, from the drop-down list, the sequence created in step 3.
    6. Expand the Employee entity in the structure view and select EmployeeId, then select the "Primary Key Generation" tab.
    7. In the Generated Value section, select the "Use Generated Value" check box, select the strategy "Sequence", and select the generator "InvSeq" defined in step 5.

    The following annotations get added for the JPA generator configured in JDeveloper for an entity. To use a specific named sequence object (whether it is generated by schema generation or already exists in the database) you must define a sequence generator using a @SequenceGenerator annotation. Provide a unique label as the name for the sequence generator and refer to that name in the @GeneratedValue annotation along with the generation strategy. For example, see the Employee entity sample code below, configured for sequence generation. EMPLOYEE_ID is the primary key and is configured for auto-generation of sequence numbers. EMPLOYEE_SEQ is the sequence object existing in the database; it generates the sequence numbers assigned as primary key values to the Employee_id column in the Employee table.

        @SequenceGenerator(name = "InvSeq", sequenceName = "EMPLOYEE_SEQ")
        @Entity
        public class Employee implements Serializable {
            @Id
            @Column(name = "EMPLOYEE_ID", nullable = false)
            @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "InvSeq")
            private Long employeeId;
        }

    @SequenceGenerator defines the sequence generator based on a database sequence object. Usage: @SequenceGenerator(name = "SequenceGenerator", sequenceName = "EMPLOYEE_SEQ"). @GeneratedValue defines the generation strategy and refers to the sequence generator. Usage: @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "name of the sequence generator defined in @SequenceGenerator").

    Read the article

  • [EF + Oracle] Inserting Data (1/2)

    - by JTorrecilla
    Prologue: following the EF series (I, II and III), in this chapter we will see how to create a DB record from EF.

    Inserting data. As we indicated in the 2nd post, "one entity matches a DB record, and one property matches a table column". To start, we need to create an object from one of the entities:

        EMPLEADOS empleado = new EMPLEADOS();

    Also, as I mentioned previously, there is the possibility to use the static function defined by VS for each entity. Once we have created the object, we can access its properties and fill them as with any common class:

        empleado.NOMBRE = "Javier Torrecilla";

    After we finish filling our entity's properties, we need to add the object to the appropriate ObjectSet in the ObjectContext:

        enti.EMPLEADOS.AddObject(empleado);

    or

        enti.AddToEMPLEADOS(empleado);

    Both methods do the same thing: create an insert statement. Have we finished? No. Every entity has a property called "EntityState". This property is an enum of type EntityState, which has the following values:

    - Detached: the entity is created, but not added to the context.
    - Unchanged: there are no pending changes in the entity.
    - Added: the entity is added to the ObjectSet, but has not yet been sent to the DB.
    - Deleted: the object is deleted from the ObjectSet, but not yet from the DB.
    - Modified: there are pending changes to confirm.

    Let's see the values of the property during the creation steps:

    1. While the object is created and we are filling the properties: EntityState.Detached.
    2. After adding it to the ObjectSet: EntityState.Added. This does not indicate that the record is in the DB.
    3. Saving the data: to save the data in the DB, we call the "SaveChanges" method of the ObjectContext. After invoking it, the property will be EntityState.Unchanged.

    What does the SaveChanges method do? It synchronizes and sends all pending changes to the DB: it adds, modifies or deletes all entities whose EntityState property is set to Added, Deleted or Modified. After finishing, all added or modified entities change their state to Unchanged, and deleted entities take the Detached state.
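
    A minimal sketch of the full cycle, observing the state at each step through the ObjectStateManager (the context variable name "enti" follows the series; its concrete type is not shown in the post, so treat this as illustrative):

        using (var enti = new ModeloEntities()) // context type name assumed
        {
            var empleado = new EMPLEADOS { NOMBRE = "Javier Torrecilla" };
            // At this point the entity is Detached: created, not yet tracked.

            enti.EMPLEADOS.AddObject(empleado);
            var entry = enti.ObjectStateManager.GetObjectStateEntry(empleado);
            Console.WriteLine(entry.State); // Added: pending INSERT, nothing in the DB yet

            enti.SaveChanges(); // sends the INSERT to the database
            Console.WriteLine(entry.State); // Unchanged: now in sync with the DB
        }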

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
    I am going to write a series of articles on using Windows Communication Foundation (WCF) to develop client and server applications, and this is the first part of that series.

    What is WCF? As Juwal puts it in his Programming WCF book, WCF provides an SDK for developing and deploying services on Windows, and a runtime environment to expose CLR types as services and consume services as CLR types. Building services with WCF is incredibly easy, and its implementation provides a set of industry standards and off-the-shelf plumbing including service hosting, instance management, reliability, transaction management, security, etc., such that it greatly increases productivity.

    Scenario: let's consider a typical bank customer trying to create an account, deposit an amount, and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services, each of them working with the database directly. We will run test cases with and without transactional support across services. In this post we will build contracts, services, a data access layer, and unit tests to verify end-to-end communication; nothing big here, and we will dig into other features of WCF in subsequent posts with incremental changes.

    In any distributed architecture we have two pieces, i.e. services and clients. Services, as the name implies, provide the functionality to execute various pieces of business logic on the server, and clients provide the interaction with the end user. Services can be built with Web Services or with WCF. Services built on WCF have the advantage of being binding independent, i.e. they can run against TCP and HTTP protocols without any significant changes to the code.

    Solution layout:

    - Profile: for creating a new bank customer and getting details about an existing customer (ProfileContract, ProfileService)
    - Checking Account: to get the checking account balance, deposit or withdraw an amount (CheckingAccountContract, CheckingAccountService)
    - Savings Account: to get the savings account balance, deposit or withdraw an amount (SavingsAccountContract, SavingsAccountService)
    - ServiceHost: to host the services, i.e. run them at a particular address, binding and contract where clients can connect
    - Client: helps the end user use the services, like creating an account and transferring amounts between accounts
    - BankDAL: data access layer to work with the database

    BankDAL. It's a no-brainer to use an ORM, as many mature products are currently available on the market, including Linq2Sql, Entity Framework (EF), LLBLGen Pro, etc. For this exercise I am going to use Entity Framework 4.0, CTP 5, with the code-first approach. There are two approaches when working with data: data driven and code driven. In data driven we start by designing tables and their constraints in the database and generate entities in code, while in code driven (code first) entities are defined in code and the metadata generated from the entities is used by EF to create the tables and table constraints. In previous versions the entity classes had to derive from EF-specific base classes; in EF 4 it is not required to derive from any EF classes. The entities are not only persistence ignorant but also enable full test-driven development using mock frameworks.

    The application consists of 3 entities: the Customer entity, which contains customer details, and CheckingAccount and SavingsAccount, which hold the respective account balances.
    We could have introduced an Account base class for CheckingAccount and SavingsAccount, which is certainly possible with EF mappings, but to keep it simple we are just going to follow a 1-to-1 mapping between entities and tables. Let's start by defining a class called Customer which will be mapped to the Customer table; observe that the class is simply a plain old CLR object (POCO) and has no reference to EF at all.

        using System;

        namespace BankDAL.Model
        {
            public class Customer
            {
                public int Id { get; set; }
                public string FullName { get; set; }
                public string Address { get; set; }
                public DateTime DateOfBirth { get; set; }
            }
        }

    In order to inform EF about the Customer entity, we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in the database. EF uses convention over configuration to generate the metadata, resulting in much less configuration.

        using System.Data.Entity;

        namespace BankDAL.Model
        {
            public class BankDbContext : DbContext
            {
                public DbSet<Customer> Customers { get; set; }
            }
        }

    Entity constraints can be defined through attributes on the Customer class or using fluent syntax (no need to wrestle with XML files) in a CustomerConfiguration class. By defining constraints in a separate class we can maintain clean POCOs without corrupting entity classes with database-specific information.

        using System;
        using System.Data.Entity.ModelConfiguration;

        namespace BankDAL.Model
        {
            public class CustomerConfiguration : EntityTypeConfiguration<Customer>
            {
                public CustomerConfiguration()
                {
                    Initialize();
                }

                private void Initialize()
                {
                    // Setting the primary key
                    this.HasKey(e => e.Id);

                    // Setting required fields
                    this.HasRequired(e => e.FullName);
                    this.HasRequired(e => e.Address);
                    // Todo: can't create a required constraint as DateOfBirth is not a reference type; research it
                    // this.HasRequired(e => e.DateOfBirth);
                }
            }
        }

    Any queries executed against the Customers property in BankDbContext are executed against the Customers table. By convention, EF looks for a connection string with the key BankDbContext when working with the context.

    We are going to define a helper class to work with the Customer entity, with methods for querying, adding new entities, etc.; these are known as repository classes, i.e. CustomerRepository:

        using System;
        using System.Data.Entity;
        using System.Linq;
        using BankDAL.Model;

        namespace BankDAL.Repositories
        {
            public class CustomerRepository
            {
                private readonly IDbSet<Customer> _customers;

                public CustomerRepository(BankDbContext bankDbContext)
                {
                    if (bankDbContext == null)
                        throw new ArgumentNullException();
                    _customers = bankDbContext.Customers;
                }

                public IQueryable<Customer> Query()
                {
                    return _customers;
                }

                public void Add(Customer customer)
                {
                    _customers.Add(customer);
                }
            }
        }

    From the above code it is observable that the Query method returns customers as IQueryable, i.e. customers are retrieved only when actually used, that is, iterated. Returning IQueryable also allows filtering and joining statements to be executed from the business logic using lambda expressions, without cluttering the data access layer with tens of methods.
    Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other:

        using System;
        using System.Data.Entity;
        using System.Linq;
        using BankDAL.Model;

        namespace BankDAL.Repositories
        {
            public class CheckingAccountRepository
            {
                private readonly IDbSet<CheckingAccount> _checkingAccounts;

                public CheckingAccountRepository(BankDbContext bankDbContext)
                {
                    if (bankDbContext == null)
                        throw new ArgumentNullException();
                    _checkingAccounts = bankDbContext.CheckingAccounts;
                }

                public IQueryable<CheckingAccount> Query()
                {
                    return _checkingAccounts;
                }

                public void Add(CheckingAccount account)
                {
                    _checkingAccounts.Add(account);
                }

                public IQueryable<CheckingAccount> GetAccount(int customerId)
                {
                    return (from act in _checkingAccounts
                            where act.CustomerId == customerId
                            select act);
                }
            }
        }

    The repository classes look very similar to each other for the Query and Add methods; with the help of C# generics and the repository pattern (Martin Fowler) we can reduce the repeated code. Jarod from ElegantCode has posted an article on how to use the repository pattern with EF, which we will implement in subsequent articles along with WCF Unity lifetime managers by Drew.

    Contracts. It is very easy to follow the contract-first approach with WCF: define the interface and append the ServiceContract and OperationContract attributes. The IProfile contract exposes functionality for creating a customer and getting customer details.

        using System;
        using System.ServiceModel;
        using BankDAL.Model;

        namespace ProfileContract
        {
            [ServiceContract]
            public interface IProfile
            {
                [OperationContract]
                Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);

                [OperationContract]
                Customer GetCustomer(int id);
            }
        }

    The ICheckingAccount contract exposes functionality for working with a checking account, i.e. getting the balance and depositing or withdrawing amounts. ISavingsAccount looks the same as the checking account contract.

        using System.ServiceModel;

        namespace CheckingAccountContract
        {
            [ServiceContract]
            public interface ICheckingAccount
            {
                [OperationContract]
                decimal? GetCheckingAccountBalance(int customerId);

                [OperationContract]
                void DepositAmount(int customerId, decimal amount);

                [OperationContract]
                void WithdrawAmount(int customerId, decimal amount);
            }
        }

    Services. Having covered the data access layer and contracts so far, here comes the core of the business logic, i.e. the services.
    ProfileService implements the IProfile contract for creating a customer and getting customer details using CustomerRepository.
        using System;
        using System.Linq;
        using System.ServiceModel;
        using BankDAL;
        using BankDAL.Model;
        using BankDAL.Repositories;
        using ProfileContract;

        namespace ProfileService
        {
            [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
            public class Profile : IProfile
            {
                public Customer CreateAccount(string customerName, string address, DateTime dateOfBirth)
                {
                    Customer cust = new Customer
                    {
                        FullName = customerName,
                        Address = address,
                        DateOfBirth = dateOfBirth
                    };

                    using (var bankDbContext = new BankDbContext())
                    {
                        new CustomerRepository(bankDbContext).Add(cust);
                        bankDbContext.SaveChanges();
                    }
                    return cust;
                }

                public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth)
                {
                    return CreateAccount(customerName, address, dateOfBirth);
                }

                public Customer GetCustomer(int id)
                {
                    return new CustomerRepository(new BankDbContext()).Query()
                        .Where(i => i.Id == id).FirstOrDefault();
                }
            }
        }

    From the above code you will observe that we call bankDbContext's SaveChanges method and that there is no save method specific to the Customer entity: EF manages all changes centrally at the context level, and all pending changes are submitted in a batch, represented as a unit of work.

    Similarly, the Checking service implements the ICheckingAccount contract using CheckingAccountRepository; notice that we throw an overdraft exception if the balance falls below zero. WCF has its own way of raising exceptions using fault contracts, which will be explained in subsequent articles. SavingsAccountService is similar to CheckingAccountService.

        using System;
        using System.Linq;
        using System.ServiceModel;
        using BankDAL.Model;
        using BankDAL.Repositories;
        using CheckingAccountContract;

        namespace CheckingAccountService
        {
            [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
            public class Checking : ICheckingAccount
            {
                public decimal? GetCheckingAccountBalance(int customerId)
                {
                    using (var bankDbContext = new BankDbContext())
                    {
                        CheckingAccount account = (new CheckingAccountRepository(bankDbContext)
                            .GetAccount(customerId)).FirstOrDefault();

                        if (account != null)
                            return account.Balance;

                        return null;
                    }
                }

                public void DepositAmount(int customerId, decimal amount)
                {
                    using (var bankDbContext = new BankDbContext())
                    {
                        var checkingAccountRepository = new CheckingAccountRepository(bankDbContext);
                        CheckingAccount account = (checkingAccountRepository.GetAccount(customerId))
                            .FirstOrDefault();

                        if (account == null)
                        {
                            account = new CheckingAccount() { CustomerId = customerId };
                            checkingAccountRepository.Add(account);
                        }

                        account.Balance = account.Balance + amount;
                        if (account.Balance < 0)
                            throw new ApplicationException("Overdraft not accepted");

                        bankDbContext.SaveChanges();
                    }
                }

                public void WithdrawAmount(int customerId, decimal amount)
                {
                    DepositAmount(customerId, -1 * amount);
                }
            }
        }

    BankServiceHost. The host acts as glue, binding contracts with their services and exposing the endpoints. The services can be exposed either through code or through the configuration file; the configuration file is preferred, as it allows runtime changes to service behavior even after deployment. We have 3 services, and for each service you need to define the name (the class that implements the service, with fully qualified namespace) and the endpoint, known as the ABC: address, binding and contract.
    We are using netTcpBinding and have defined the base address for each of the contracts.

        <system.serviceModel>
          <services>
            <service name="ProfileService.Profile">
              <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/>
              <host>
                <baseAddresses>
                  <add baseAddress="net.tcp://localhost:1000/Profile"/>
                </baseAddresses>
              </host>
            </service>
            <service name="CheckingAccountService.Checking">
              <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
              <host>
                <baseAddresses>
                  <add baseAddress="net.tcp://localhost:1000/Checking"/>
                </baseAddresses>
              </host>
            </service>
            <service name="SavingsAccountService.Savings">
              <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
              <host>
                <baseAddresses>
                  <add baseAddress="net.tcp://localhost:1000/Savings"/>
                </baseAddresses>
              </host>
            </service>
          </services>
        </system.serviceModel>

    We have to open the services by creating a service host which will handle the incoming requests from clients.

        using System;

        namespace ServiceHost
        {
            class Program
            {
                static void Main(string[] args)
                {
                    CreateHosts();
                    Console.ReadLine();
                }

                private static void CreateHosts()
                {
                    CreateHost(typeof(ProfileService.Profile), "Profile Service");
                    CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service");
                    CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service");
                }

                private static void CreateHost(Type type, string hostDescription)
                {
                    System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type);
                    host.Open();

                    if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0
                        && host.ChannelDispatchers[0].Listener != null)
                        Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri);
                    else
                        Console.WriteLine("Failed to start: " + hostDescription);
                }
            }
        }

    BankClient. The client has no knowledge of the service business logic other than the functionality exposed through the contract, the endpoints, and a proxy to work against. The endpoint data and server proxy can be generated by right-clicking the project's references, choosing 'Add Service Reference', and entering the service endpoint address. Or, if you have access to the source, you can manually reference the contract DLLs and update the client's configuration file to point to the service endpoint, provided both server and client are built using the .NET Framework. One of the pros of the manual approach is that you don't have to work with messy generated code files.
        <system.serviceModel>
          <client>
            <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile" binding="netTcpBinding" contract="ProfileContract.IProfile"/>
            <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking" binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
            <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings" binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
          </client>
        </system.serviceModel>

    The client uses a façade to connect to the services:

        using System.ServiceModel;
        using CheckingAccountContract;
        using ProfileContract;
        using SavingsAccountContract;

        namespace Client
        {
            public class ProxyFacade
            {
                public static IProfile ProfileProxy()
                {
                    return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel();
                }

                public static ICheckingAccount CheckingAccountProxy()
                {
                    return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount")).CreateChannel();
                }

                public static ISavingsAccount SavingsAccountProxy()
                {
                    return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount")).CreateChannel();
                }
            }
        }

    With that in place, let's get our unit tests going:

        using System;
        using System.Diagnostics;
        using BankDAL.Model;
        using NUnit.Framework;
        using ProfileContract;

        namespace Client
        {
            [TestFixture]
            public class Tests
            {
                private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount)
                {
                    ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
                    ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);
                }

                private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount)
                {
                    ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount);
                    ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount);
                }

                [Test]
                public void CreateAndGetProfileTest()
                {
                    IProfile profile = ProxyFacade.ProfileProxy();
                    const string customerName = "Tom";
                    int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id;
                    Customer customer = profile.GetCustomer(customerId);
                    Assert.AreEqual(customerName, customer.FullName);
                }

                [Test]
                public void DepositWithDrawAndTransferAmountTest()
                {
                    IProfile profile = ProxyFacade.ProfileProxy();
                    string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss");
                    var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1));

                    // Deposit to savings
                    ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100);
                    ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25);
                    Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
                    // Withdraw
                    ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30);
                    Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));

                    // Deposit to checking
                    ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60);
                    ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40);
                    Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
                    // Withdraw
                    ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30);
                    Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

                    // Transfer from savings to checking
                    TransferFundsFromSavingsToCheckingAccount(customer.Id, 10);
                    Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
                    Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

                    // Transfer from checking to savings
                    TransferFundsFromCheckingToSavingsAccount(customer.Id, 50);
                    Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
                    Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
                }

                [Test]
                public void FundTransfersWithOverDraftTest()
                {
                    IProfile profile = ProxyFacade.ProfileProxy();
                    string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");

                    var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;

                    ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100);
                    TransferFundsFromSavingsToCheckingAccount(customerId, 80);
                    Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
                    Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));

                    try
                    {
                        TransferFundsFromSavingsToCheckingAccount(customerId, 30);
                    }
                    catch (Exception e)
                    {
                        Debug.WriteLine(e.Message);
                    }

                    Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));
                    Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
                }
            }
        }

    We are creating a new instance of the channel for every operation; we will look into instance management and how creating a new channel instance affects it in subsequent articles. The first two test cases deal with the creation of a customer and the deposit, withdrawal and transfer of money between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting: the customer starts by depositing $100 into the savings account, followed by a transfer of $80 into the checking account, leaving $20 in savings. The customer then initiates a $30 transfer from savings to checking, resulting in an overdraft exception on savings, with the $30 nevertheless deposited to checking. As we are not running the two requests in a transaction, the customer ends up with more money than the $100 he started with. In subsequent posts we will look into transaction handling. Make sure the ServiceHost project is set as the startup project and start the solution. Run the test cases either from the NUnit client or from TestDriven.Net/Resharper, whichever is your favorite tool. Make sure you have updated the database connection string in the ServiceHost config file to point to your local database.

    Read the article

  • SQLAuthority News – 2 Whitepapers Announced – AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution

    - by pinaldave
    Understanding the AlwaysOn architecture is extremely important when building a solution with failover clusters and availability groups. Microsoft has just released two very important white papers on this subject. Both are written by top experts in the industry and have been reviewed by an excellent panel of experts. Every time I talk with organizations adopting SQL Server 2012, they are excited about the new AlwaysOn feature. One of the requests I often hear is for detailed documentation that can help enterprises build a robust high availability and disaster recovery solution. I believe the following two white papers now satisfy that request.

    AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using AlwaysOn Availability Groups. SQL Server 2012 AlwaysOn Availability Groups provides a unified high availability and disaster recovery (HADR) solution. This paper details the key topology requirements of this specific design pattern, covering important concepts like quorum configuration considerations, the steps required to build the environment, and a workflow that shows how to handle a disaster recovery.

    AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using Failover Cluster Instances and Availability Groups. SQL Server 2012 AlwaysOn Failover Cluster Instances (FCI) and AlwaysOn Availability Groups provide a comprehensive high availability and disaster recovery solution. This paper details the key topology requirements of this specific design pattern, covering important concepts like asymmetric storage considerations, quorum model selection, quorum votes, the steps required to build the environment, and a workflow.

    Even if you are not going to implement the AlwaysOn feature, these two white papers are still great reference material to review, as they will give you a complete idea of what it takes to implement an AlwaysOn architecture and what kind of effort is needed. One should at least bookmark these two white papers for future reference. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology Tagged: AlwaysOn

    Read the article

  • Can an aggregate root hold references to members of another aggregate root?

    - by Rushino
    Hello. I know outside aggregates can't change anything inside an aggregate without going through its root. That said, I would like to know whether an aggregate root can hold references to members (objects inside) of another aggregate root, following DDD rules. Example: a Calendar contains a list of phases, which contain a list of sequences, which contain a list of assignations. Calendar is the root because phases, sequences and assignations only make sense in the context of a calendar. You also have Students and groups of students (called Groups). Is it possible (following DDD rules) for Groups to hold references to assignations, or does one need to go through the root to access assignations from Groups? Thanks.
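
    One common DDD answer is to let an aggregate hold only the identity of another root's inner member, never the object itself, so every change still passes through the owning root. A hedged sketch using the question's names (the member shapes are invented for illustration):

        using System;
        using System.Collections.Generic;

        public class Assignation
        {
            public Guid Id { get; set; }
        }

        public class Calendar // aggregate root owning phases, sequences, assignations
        {
            private readonly Dictionary<Guid, Assignation> _assignations =
                new Dictionary<Guid, Assignation>();

            // All traversal and modification passes through the root.
            public Assignation FindAssignation(Guid id)
            {
                return _assignations[id];
            }
        }

        public class Group // a separate aggregate root
        {
            // Holds identities, not object references, into the Calendar
            // aggregate, so the Calendar's invariants stay protected.
            private readonly List<Guid> _assignationIds = new List<Guid>();

            public void Assign(Assignation assignation)
            {
                _assignationIds.Add(assignation.Id);
            }
        }

    With this shape, a Group never mutates anything inside the Calendar aggregate; it asks the Calendar root to resolve an id whenever it needs the actual object.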

    Read the article

  • Cannot add service account to domain group during SQL cluster install

    - by Sam
    I'm installing a SQL Server 2008 instance on a Windows Server 2003 machine which is already running SQL Server 2005. I need to set up domain groups for the security setup step: http://msdn.microsoft.com/en-us/library/ms179530.aspx "On Windows Server 2003, specify domain groups for SQL Server services. All resource permissions are controlled by domain-level groups that include SQL Server service accounts as group members." Much more info on this here: http://support.microsoft.com/kb/910708 I've had problems adding the Windows service accounts to the groups at install time. The security admins had to make my account a domain admin, which they were hesitant to do. "The account under which SQL Server Setup is running must have permissions to add accounts to the domain groups." Is there a specific security setting that would allow my account to add accounts to a group?

    Read the article

  • Group Matchmaking

    - by Simon Kérouack
    Consider different groups (1 or more players each) queuing together. We want to make two opposing teams, each containing the same number of players, while keeping the groups together. At the same time we want to make both teams' average rankings as close as possible. Now also consider that our working set is the subset of groups currently queuing within a given ranking range. For example, say we have the following groups, ordered by queuing time:

    Id, playerCount, totalRank, avgRank
    0, 3, 126, 42
    1, 2, 60, 30
    2, 1, 25, 25
    3, 2, 80, 40
    4, 1, 40, 40
    5, 1, 20, 20
    6, 3, 150, 50

    For this specific subset, the expected output should ideally be:

    team1: 0, 1 (total: 186)
    team2: 2, 5, 6 (total: 195)

    Up to now, the solution I have been using is to balance the teams by having each team pick the highest-ranking group remaining in the subset, turn by turn. The team that picks is the one with the currently lower average rank, unless one team is already full. If one team is already full, the other team tries to complete itself with groups that make the rank gap as small as possible. This solution turns out to have issues with frequent edge cases, and I'm looking for a better solution, or some fine-tuning that could be made. In most cases players seem to want teams of 5 and queue in groups of 2. Our average subset when two teams of 5 are chosen is made up of about 14 players, if that is of any help.
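    For illustration, here is a minimal C# sketch of the greedy pick-for-the-lowest-average-team strategy described above. All type and member names are assumptions (C# 9 record syntax), and it reproduces the strategy in the question, including its edge-case weaknesses, rather than solving them:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public record MatchGroup(int Id, int PlayerCount, int TotalRank)
    {
        public double AvgRank => (double)TotalRank / PlayerCount;
    }

    public static class Matchmaker
    {
        public static (List<MatchGroup> Team1, List<MatchGroup> Team2) Balance(
            IEnumerable<MatchGroup> subset, int teamSize)
        {
            var team1 = new List<MatchGroup>();
            var team2 = new List<MatchGroup>();
            int count1 = 0, count2 = 0, rank1 = 0, rank2 = 0;

            // Highest-ranked groups are assigned first, as in the question.
            foreach (var g in subset.OrderByDescending(x => x.AvgRank))
            {
                bool fits1 = count1 + g.PlayerCount <= teamSize;
                bool fits2 = count2 + g.PlayerCount <= teamSize;
                double avg1 = count1 == 0 ? 0 : (double)rank1 / count1;
                double avg2 = count2 == 0 ? 0 : (double)rank2 / count2;

                // The team with the lower current average picks, unless full.
                if (fits1 && (!fits2 || avg1 <= avg2))
                {
                    team1.Add(g); count1 += g.PlayerCount; rank1 += g.TotalRank;
                }
                else if (fits2)
                {
                    team2.Add(g); count2 += g.PlayerCount; rank2 += g.TotalRank;
                }
                // Groups that fit neither team are skipped; a real matchmaker
                // would backtrack here - one of the edge cases mentioned.
            }
            return (team1, team2);
        }
    }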

    Read the article

  • How to structure my AdWords campaign well?

    - by Romain Dorange
    I am starting an AdWords campaign and I will measure conversion rates using the AdWords conversion tracking pixel. A conversion might be an account creation or a concrete sale. As it will be a test campaign, meant to give some insight into CTR, CR, etc. for the future, I am likely to try several configurations: two different ads with different landing URLs and messages (one focused on the product, the other containing a discount embedded in the URL), and four different groups/themes of keywords. I guess I have to: build 4 ad groups based on the keywords; create 2 ads with the different messages; assign the two ads to each ad group; and follow the campaign precisely in the Ads tab, where I can see the effectiveness of each ad per ad group (for a total of 8 lines of reporting). Am I right? Also, what KPIs can I get from an AdWords campaign to measure overall effectiveness? Measuring ROI from concrete sales (tracking pixel with an e-commerce tag on the confirmation page), measuring ROI from lead acquisition (tracking pixel on account creation), and measuring the traffic increase from the campaign. Thanks a lot.

    Read the article

  • LDAP: Restrict application/server logins to members of a certain group

    - by Geo
    We need to configure LDAP logins for different servers and applications. We have created all users and the different groups as follows:

    dn: dc=ldapserver,dc=local
    dn: ou=people,dc=ldapserver,dc=local
    ou: people
    dn: uid=geo,ou=people,dc=ldapserver,dc=local
    dn: uid=user,ou=people,dc=ldapserver,dc=local
    dn: ou=groups,dc=ldapserver,dc=local
    dn: cn=server,ou=groups,dc=ldapserver,dc=local
    member: uid=geo,ou=people,dc=ldapserver,dc=local
    dn: cn=website,ou=groups,dc=ldapserver,dc=local
    member: uid=user,ou=people,dc=ldapserver,dc=local

    We need a setup where users who are members of the server group can only log in to servers (that is, geo), and users who are members of the website group can only log in to websites (that is, the user "user"). Please let me know how we can configure this. For the website login we tried giving the DN as cn=website,ou=groups,dc=ldapserver,dc=local and the login attribute as uid, and also member, but it is not working. Can anyone please help us with it? Also, please let us know if there is any other option to accomplish this scenario. Thanks, Geo
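    One direction that often solves this, sketched under the assumption that each application lets you customize its LDAP search/bind filter (the %u placeholder syntax varies per application, and the memberOf form requires the memberOf overlay on OpenLDAP):

    # Server logins: only match users that belong to cn=server
    (&(uid=%u)(memberOf=cn=server,ou=groups,dc=ldapserver,dc=local))

    # Without memberOf, search the group entry itself for the user's DN
    (&(cn=server)(member=uid=%u,ou=people,dc=ldapserver,dc=local))

    The same pattern with cn=website instead of cn=server restricts website logins to members of that group.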

    Read the article

  • How can you tell whether to use Composite Pattern or a Tree Structure, or a third implementation?

    - by Aske B.
    I have two client types, an "Observer" type and a "Subject" type. Both are associated with a hierarchy of groups. The Observer will receive (calendar) data from the groups it is associated with throughout the different hierarchies. This data is calculated by combining data from the 'parent' groups of the group trying to collect data (each group can have only one parent). The Subject will be able to create the data (that the Observers will receive) in the groups it is associated with. When data is created in a group, all 'children' of the group will have the data as well, and they will be able to make their own version of a specific area of the data, still linked to the original data created (in my specific implementation, the original data will contain time-period(s) and a headline, while the subgroups specify the rest of the data for the receivers directly linked to their respective groups). However, when the Subject creates data, it has to check whether any affected Observers have data that conflicts with it, which means a huge recursive function, as far as I can understand. So I think this can be summed up to the fact that I need a hierarchy that you can go up and down in, and in some places be able to treat as a whole (recursion, basically). Also, I'm not just aiming at a solution that works. I'm hoping to find a solution that is relatively easy to understand (architecture-wise, at least) and also flexible enough to easily receive additional functionality in the future. Is there a design pattern, or a good practice to go by, to solve this problem or similar hierarchy problems? EDIT: Here's the design I have: The "Phoenix" class is named that way because I haven't thought of an appropriate name yet. But besides this, I need to be able to hide specific activities from specific observers, even though they are attached to them through the groups. A little off-topic: Personally, I feel that I should be able to chop this problem down into smaller problems, but how escapes me. I think it's because it involves multiple recursive functionalities that aren't associated with each other and different client types that need to get information in different ways. I can't really wrap my head around it. If anyone can guide me in a direction of how to become better at encapsulating hierarchy problems, I'd be very glad to receive that as well.
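    For the up-and-down navigation with "treat a subtree as a whole" semantics, a minimal Composite-style sketch might look like this in C#. All names are illustrative assumptions, and the data-combining and conflict-checking logic is left out:

    using System.Collections.Generic;

    public class Group
    {
        public string Name { get; private set; }
        public Group Parent { get; private set; }   // at most one parent
        private readonly List<Group> children = new List<Group>();

        public Group(string name) { Name = name; }

        public void Add(Group child)
        {
            child.Parent = this;
            children.Add(child);
        }

        // Walk down: this group plus all descendants - the recursive
        // "treat them as a whole" operation (e.g. conflict checks).
        public IEnumerable<Group> SelfAndDescendants()
        {
            yield return this;
            foreach (var child in children)
                foreach (var g in child.SelfAndDescendants())
                    yield return g;
        }

        // Walk up: the chain of parents, used when combining data
        // from 'parent' groups.
        public IEnumerable<Group> Ancestors()
        {
            for (var g = Parent; g != null; g = g.Parent)
                yield return g;
        }
    }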

    Read the article

  • What's New in Oracle VM VirtualBox 4.2?

    - by Fat Bloke
    A year is a long time in the IT industry. Since the last VirtualBox feature release, which was a little over a year ago, we've seen: new releases of cool new operating systems, such as Windows 8, ChromeOS, and Mountain Lion; a myriad of new Linux releases, from big enterprise-class distributions like Oracle Linux 6.3 to accessible desktop distros like Ubuntu 12.04 and Fedora 17; and we've also seen the spec of a typical PC or laptop double in power. All of these events have influenced our new VirtualBox version, which we're releasing today. Here's how...

    Powerful hosts
    One of the trends we've seen is that as the average host platform becomes more powerful, our users are consistently running more and more VMs. Some of our users have large libraries of VMs of various vintages, whilst others have groups of VMs that are run together as an assembly of the various tiers in a multi-tiered software solution, for example, a database tier, middleware tier, and front-ends. So we're pleased to unveil a more powerful VirtualBox Manager to address the needs of these users:

    VM Groups
    Groups allow you to organize your VM library in a sensible way, e.g. by platform type, by project, by version, by whatever. To create groups you can drag one VM onto another or select one or more VMs and choose Machine...Group from the menu bar. You can expand and collapse groups to save screen real estate, and you can Enter and Leave a group (think iPad navigation here) by using the right and left arrow keys when groups are selected. But groups are more than passive folders, because you can now also perform operations on groups, rather than all the individual VMs. So if you have a multi-tiered solution you can start the whole stack up with just one click.

    Autostart
    Many VirtualBox users run dedicated services in their VMs, for example, running a wiki. With these types of VM workloads, you really want the VM to start up when the host machine boots up. So with 4.2 we've introduced a cross-platform auto-start mechanism to allow you to treat VMs as host services.

    Headless VM Launching
    With VMs such as web servers, wikis, and other types of server-class workloads, the console of the VM is pretty much redundant. For some time now VirtualBox has offered a separate launch mechanism for these VMs, namely the command-line interface commands VBoxHeadless or VBoxManage startvm ... --type headless. But with 4.2 we also allow you to launch headless VMs from the Manager. Simply hold down Shift when launching the VM from the Manager. It's that easy. But how do you stop a headless VM? Well, with 4.2 we allow you to Close the VM from the Manager. (BTW it's best to use the ACPI Shutdown method, which allows the guest VM to close down gracefully.)

    Easy VM Creation
    For our expert users, the New VM Wizard was a little tiresome, so now there's a faster 2-click VM creation mode. Just hide the description when creating a new VM.

    Powerful VMs
    As the hosts have become more powerful, so have the guests that are running inside them. Here are some of the 4.2 features to accommodate them:

    Virtual Network Interface Cards
    With 4.2, it's now possible to create VMs with up to 36 NICs, when using the ICH9 chipset emulation. But with great power comes great responsibility (didn't Obi-Wan say something similar?), and so we have also introduced bandwidth limiting to prevent a rogue VM stealing the whole pipe.

    VLAN tagging
    Some of our users leverage VLANs extensively, so we've enhanced the E1000 NICs to support this.

    Processor Performance
    If you are running a CPU which supports Nested Paging (aka EPT in the Intel world), such as most of the Core i5 and i7 CPUs, or are running an AMD Bulldozer or later, you should see some performance improvements from our work with these processors. And while we're talking processors, we've added support for some of the more modern VIA CPUs too.

    Powerful Automation
    Because VirtualBox runs atop a fully blown operating system, it makes sense to leverage the capabilities of the host to run scripts that can drive the guest VMs. Guest Automation was introduced in a prior release, but with 4.2 we've revamped the APIs to allow a richer and more powerful set of operations to be executed by the guest. Check out the IGuest APIs in the VirtualBox Programming Guide and Reference (SDK).

    Powerful Platforms
    All the hardcore engineering that has gone into 4.2 has been done for a purpose, and that is to deliver a fast and powerful engine that can run almost any x86 OS because of the integrity of the virtualization. So we're pleased to add support for these platforms:

    Mac OS X "Mountain Lion"
    Windows 8
    Windows Server 2012
    Ubuntu 12.04 ("Precise Pangolin")
    Fedora 17
    Oracle Linux 6.3

    Here's the proof: We don't have time to go into the myriad of smaller improvements such as support for burning audio CDs from a guest, bi-directional clipboard control, drag-and-drop of files into Linux guests, etc., so we'll leave that as an exercise for the user as soon as you've downloaded from the Oracle or community site and taken a peek at the User Guide. So all in all, a pretty solid release, one that we hope you'll enjoy discovering. - FB

    Read the article

  • Hibernate unable to instantiate default tuplizer - cannot find getter

    - by ZeldaPinwheel
    I'm trying to use Hibernate to persist a class that looks like this: public class Item implements Serializable, Comparable<Item> { // Item id private Integer id; // Description of item in inventory private String description; // Number of items described by this inventory item private int count; //Category item belongs to private String category; // Date item was purchased private GregorianCalendar purchaseDate; public Item() { } public Integer getId() { return id; } public void setId(Integer id) { this.id = id; } public String getDescription() { return description; } public void setDescription(String description) { this.description = description; } public int getCount() { return count; } public void setCount(int count) { this.count = count; } public String getCategory() { return category; } public void setCategory(String category) { this.category = category; } public GregorianCalendar getPurchaseDate() { return purchaseDate; } public void setPurchasedate(GregorianCalendar purchaseDate) { this.purchaseDate = purchaseDate; } My Hibernate mapping file contains the following: <property name="puchaseDate" type="java.util.GregorianCalendar"> <column name="purchase_date"></column> </property> When I try to run, I get error messages indicating there is no getter function for the purchaseDate attribute: 577 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - Using Hibernate built-in connection pool (not for production use!) 577 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - Hibernate connection pool size: 20 577 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - autocommit mode: false 592 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - using driver: com.mysql.jdbc.Driver at URL: jdbc:mysql://localhost:3306/home_inventory 592 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - connection properties: {user=root, password=****} 1078 [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: MySQL, version: 5.1.45 1078 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC driver: MySQL-AB JDBC Driver, version: mysql-connector-java-5.1.12 ( Revision: ${bzr.revision-id} ) 1103 [main] INFO org.hibernate.dialect.Dialect - Using dialect: org.hibernate.dialect.MySQLDialect 1107 [main] INFO org.hibernate.engine.jdbc.JdbcSupportLoader - Disabling contextual LOB creation as JDBC driver reported JDBC version [3] less than 4 1109 [main] INFO org.hibernate.transaction.TransactionFactoryFactory - Using default transaction strategy (direct JDBC transactions) 1110 [main] INFO org.hibernate.transaction.TransactionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) 1110 [main] INFO org.hibernate.cfg.SettingsFactory - Automatic flush during beforeCompletion(): disabled 1110 [main] INFO org.hibernate.cfg.SettingsFactory - Automatic session close at end of transaction: disabled 1110 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC batch size: 15 1110 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC batch updates for versioned data: disabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Scrollable result sets: enabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC3 getGeneratedKeys(): enabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Connection release mode: auto 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Maximum outer join fetch depth: 2 1111 [main] INFO 
org.hibernate.cfg.SettingsFactory - Default batch fetch size: 1 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Generate SQL with comments: disabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Order SQL updates by primary key: disabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Order SQL inserts for batching: disabled 1112 [main] INFO org.hibernate.cfg.SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory 1113 [main] INFO org.hibernate.hql.ast.ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Query language substitutions: {} 1113 [main] INFO org.hibernate.cfg.SettingsFactory - JPA-QL strict compliance: disabled 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Second-level cache: enabled 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Query cache: disabled 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Cache region factory : org.hibernate.cache.impl.NoCachingRegionFactory 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Optimize cache for minimal puts: disabled 1114 [main] INFO org.hibernate.cfg.SettingsFactory - Structured second-level cache entries: disabled 1117 [main] INFO org.hibernate.cfg.SettingsFactory - Echoing all SQL to stdout 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Statistics: disabled 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Deleted entity synthetic identifier rollback: disabled 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Default entity-mode: pojo 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Named query checking : enabled 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Check Nullability in Core (should be disabled when Bean Validation is on): enabled 1151 [main] INFO org.hibernate.impl.SessionFactoryImpl - building session factory org.hibernate.HibernateException: Unable to instantiate default tuplizer [org.hibernate.tuple.entity.PojoEntityTuplizer] at org.hibernate.tuple.entity.EntityTuplizerFactory.constructTuplizer(EntityTuplizerFactory.java:110) at org.hibernate.tuple.entity.EntityTuplizerFactory.constructDefaultTuplizer(EntityTuplizerFactory.java:135) at org.hibernate.tuple.entity.EntityEntityModeToTuplizerMapping.<init>(EntityEntityModeToTuplizerMapping.java:80) at org.hibernate.tuple.entity.EntityMetamodel.<init>(EntityMetamodel.java:323) at org.hibernate.persister.entity.AbstractEntityPersister.<init>(AbstractEntityPersister.java:475) at org.hibernate.persister.entity.SingleTableEntityPersister.<init>(SingleTableEntityPersister.java:133) at org.hibernate.persister.PersisterFactory.createClassPersister(PersisterFactory.java:84) at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:295) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1385) at service.HibernateSessionFactory.currentSession(HibernateSessionFactory.java:53) at service.ItemSvcHibImpl.generateReport(ItemSvcHibImpl.java:78) at service.test.ItemSvcTest.testGenerateReport(ItemSvcTest.java:226) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at junit.framework.TestCase.runTest(TestCase.java:164) at junit.framework.TestCase.runBare(TestCase.java:130) at junit.framework.TestResult$1.protect(TestResult.java:106) at 
junit.framework.TestResult.runProtected(TestResult.java:124) at junit.framework.TestResult.run(TestResult.java:109) at junit.framework.TestCase.run(TestCase.java:120) at junit.framework.TestSuite.runTest(TestSuite.java:230) at junit.framework.TestSuite.run(TestSuite.java:225) at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.hibernate.tuple.entity.EntityTuplizerFactory.constructTuplizer(EntityTuplizerFactory.java:107) ... 29 more Caused by: org.hibernate.PropertyNotFoundException: Could not find a getter for puchaseDate in class domain.Item at org.hibernate.property.BasicPropertyAccessor.createGetter(BasicPropertyAccessor.java:328) at org.hibernate.property.BasicPropertyAccessor.getGetter(BasicPropertyAccessor.java:321) at org.hibernate.mapping.Property.getGetter(Property.java:304) at org.hibernate.tuple.entity.PojoEntityTuplizer.buildPropertyGetter(PojoEntityTuplizer.java:299) at org.hibernate.tuple.entity.AbstractEntityTuplizer.<init>(AbstractEntityTuplizer.java:158) at org.hibernate.tuple.entity.PojoEntityTuplizer.<init>(PojoEntityTuplizer.java:77) ... 34 more I'm new to Hibernate, so I don't know all the ins and outs, but I do have the getter and setter for the purchaseDate attribute. I don't know what I'm missing here - does anyone else? Thanks!
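    A note on the likely cause, based purely on the snippets above: the stack trace says it cannot find a getter for puchaseDate (note the missing 'r'), which matches the typo in the mapping file's name attribute, and the setter is also spelled setPurchasedate with a lowercase 'd'. Hibernate resolves getters and setters from the mapped property name, so the corrected mapping and setter would presumably look like:

    <property name="purchaseDate" type="java.util.GregorianCalendar">
        <column name="purchase_date"></column>
    </property>

    public void setPurchaseDate(GregorianCalendar purchaseDate) {
        this.purchaseDate = purchaseDate;
    }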

    Read the article

  • How to Export data to Excel using LINQ to Entity?

    - by Rita
    Hi, I have data coming from an Entity Data Model table on my ASP.NET page. Now I have to export this data to Excel on a button click. With OLEDB it is straightforward, as shown here: http://csharp.net-informations.com/excel/csharp-excel-oledb-insert.htm Here is my query that reads data from the Inquiries table:

    var model = from i in myEntity.Inquiries
                where i.User_Id == 5
                orderby i.TX_Id descending
                select new
                {
                    RequestID = i.TX_Id,
                    CustomerName = i.CustomerMaster.FirstName,
                    RequestDate = i.RequestDate,
                    Email = i.CustomerMaster.MS_Id,
                    DocDescription = i.Document.Description,
                    ProductName = i.Product.Name
                };
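    One low-ceremony way to get this into Excel, as a sketch: stream the query results as a CSV download, which Excel opens directly. This assumes a CSV file is acceptable, that the code runs in the page's button-click handler where 'model' (the query above) is in scope, and that the file name is arbitrary; string.Join over objects requires .NET 4 or later.

    protected void ExportButton_Click(object sender, EventArgs e)
    {
        var sb = new System.Text.StringBuilder();
        sb.AppendLine("RequestID,CustomerName,RequestDate,Email,DocDescription,ProductName");
        foreach (var row in model)
        {
            // Note: values containing commas or quotes would need CSV escaping.
            sb.AppendLine(string.Join(",", row.RequestID, row.CustomerName,
                row.RequestDate, row.Email, row.DocDescription, row.ProductName));
        }
        Response.Clear();
        Response.ContentType = "text/csv";
        Response.AddHeader("Content-Disposition", "attachment; filename=inquiries.csv");
        Response.Write(sb.ToString());
        Response.End();
    }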

    Read the article

  • LINQ to SQL - Save an entity without creating a new DataContext?

    - by aximili
    I get the error "Cannot add an entity with a key that is already in use" when I try to save an Item:

    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Edit(Item item)
    {
        Global.DataContext.Items.Attach(item);
        Global.DataContext.SubmitChanges();
        return View(item);
    }

    That's because I cannot attach the item to the static global DataContext. Is it possible to save an item without creating a new DataContext? (I am very new to LINQ.)
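    The common workaround, as a sketch rather than code from the post: create a short-lived DataContext per request instead of a static global one, and attach the detached entity as modified. This assumes Item has a version/timestamp member (which LINQ to SQL requires for Attach(entity, true)); MyDataContext is a hypothetical name.

    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Edit(Item item)
    {
        using (var dc = new MyDataContext())   // fresh context, disposed per request
        {
            dc.Items.Attach(item, true);       // attach as modified
            dc.SubmitChanges();
        }
        return View(item);
    }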

    Read the article

  • How do I merge a transient entity with a session using Castle ActiveRecordMediator?

    - by Daniel T.
    I have a Store and a Product entity:

    public class Store
    {
        public Guid Id { get; set; }
        public int Version { get; set; }
        public ISet<Product> Products { get; set; }
    }

    public class Product
    {
        public Guid Id { get; set; }
        public int Version { get; set; }
        public Store ParentStore { get; set; }
        public string Name { get; set; }
    }

    In other words, I have a Store that can contain multiple Products in a bidirectional one-to-many relationship. I'm sending data back and forth between a web browser and a web service. The following steps emulate the communication between the two, using session-per-request. I first save a new instance of a Store:

    using (new SessionScope())
    {
        // this is data from the browser
        var store = new Store { Id = Guid.Empty };
        ActiveRecordMediator.SaveAndFlush(store);
    }

    Then I grab the store out of the DB, add a new product to it, and then save it:

    using (new SessionScope())
    {
        // this is data from the browser
        var product = new Product { Id = Guid.Empty, Name = "Apples" };
        var store = ActiveRecordLinq.AsQueryable<Store>().First();
        store.Products.Add(product);
        ActiveRecordMediator.SaveAndFlush(store);
    }

    Up to this point, everything works well. Now I want to update the Product I just saved:

    using (new SessionScope())
    {
        // data from browser
        var product = new Product { Id = Guid.Empty, Version = 1, Name = "Grapes" };
        var store = ActiveRecordLinq.AsQueryable<Store>().First();
        store.Products.Add(product);
        // throws exception on this call
        ActiveRecordMediator.SaveAndFlush(store);
    }

    When I try to update the product, I get the following exception: "a different object with the same identifier value was already associated with the session: 00000000-0000-0000-0000-000000000000, of entity: Product". As I understand it, the problem is that when I get the Store out of the database, it also gets the Product that's associated with it. Both entities are persistent. Then I try to save a transient Product (the one that has the updated info from the browser) that has the same ID as the one that's already associated with the session, and an exception is thrown. However, I don't know how to get around this problem. If I could get access to an NHibernate.ISession, I could call ISession.Merge() on it, but it doesn't look like ActiveRecordMediator has anything similar (SaveCopy threw the same exception). Does anyone know the solution? Any help would be greatly appreciated!
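    One possible route, as a sketch: ActiveRecordMediator.Execute hands you the underlying NHibernate ISession through an NHibernateDelegate callback, from which Merge can be called. The exact overload shape may vary between Castle versions, so treat this as an assumption to verify:

    var merged = (Product)ActiveRecordMediator.Execute(
        typeof(Product),
        (session, instance) => session.Merge(instance),
        product);
    // 'merged' is the session-associated copy; make further changes
    // against it rather than the original detached 'product'.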

    Read the article

  • How to insert a foreign key value using LINQ to Entities?

    - by Gamble
    I have a table (TestTable), for example: ID, Name, ParentId (FK). I would like to insert a new record like ID(1), Name(Test), ParentID(5) FK. How can I insert a new record into TestTable with LINQ to Entities?

    var testTable = new TestTable();
    testTable.ID = 1;
    testTable.Name = "TestName";
    testTable ...

    Thank you for a working example.
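    For illustration, here is a sketch of how the FK might be set with the EntityReference style used by the first Entity Framework version. The context name, entity-set name, key name, and the ParentReference property are assumptions based on the table described:

    using (var context = new MyEntities())
    {
        var testTable = new TestTable();
        testTable.ID = 1;
        testTable.Name = "TestName";
        // Point the Parent association at the row with ID 5 without loading it.
        testTable.ParentReference.EntityKey =
            new EntityKey("MyEntities.TestTables", "ID", 5);
        context.AddToTestTables(testTable);
        context.SaveChanges();
    }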

    Read the article

  • How do I check if a Linq to SQL entity has grandchildren?

    - by EdenMachine
    How can I find out if a LINQ to SQL entity has grandchildren or not? Pseudo-code below:

    Return From p In dc.Processes Where p.Signers.Count > 0 and p.Signers.Signatures.Count > 0

    Obviously I can't run the code above, but I need to make sure that all the returned Processes have at least one Signer and that all of those Signers have at least one Signature. TIA!
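    A sketch of how this could be expressed so it actually compiles, in C# method syntax (assuming the Signers and Signatures associations from the pseudo-code, and a using System.Linq directive in scope):

    var processes = dc.Processes.Where(p =>
        p.Signers.Any() &&                          // at least one Signer
        p.Signers.All(s => s.Signatures.Any()));    // every Signer has a Signature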

    Read the article

< Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >