Search Results

Search found 109760 results on 4391 pages for 'ado net entity data model'.


  • Do entity collections and object sets implement IQueryable<T>?

    - by Chevex
    I am using Entity Framework for the first time and noticed that the entities object returns entity collections.

        DBEntities db = new DBEntities();
        db.Users; // Users is an ObjectSet<User>
        User user = db.Users.Where(x => x.Username == "test").First(); // Is this executed in SQL or in memory?
        user.Posts; // Posts is an EntityCollection<Post>
        Post post = user.Posts.Where(x => x.PostID == "123").First(); // Is this executed in SQL or in memory?

    Do both ObjectSet and EntityCollection implement IQueryable? I am hoping they do, so that I know the queries are executed at the data source and not in memory.

    EDIT: So apparently EntityCollection does not, while ObjectSet does. Does that mean I would be better off using this code?

        DBEntities db = new DBEntities();
        User user = db.Users.Where(x => x.Username == "test").First();
        Post post = db.Posts.Where(x => (x.PostID == "123") && (x.Username == user.Username)).First(); // Querying the object set instead of the entity collection.

    Also, what is the difference between ObjectSet and EntityCollection? Shouldn't they be the same? Thanks in advance!
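    A minimal sketch of the difference in practice (type and property names assumed from the question): ObjectSet<T> implements IQueryable<T>, so composed filters are translated into the SQL sent to the database, while EntityCollection<T> only exposes IEnumerable<T>, so filtering it runs LINQ to Objects over posts already loaded into memory.

        using System.Linq;

        using (var db = new DBEntities())
        {
            // ObjectSet<User> is IQueryable<User>: this filter becomes a SQL WHERE clause.
            User user = db.Users.Where(x => x.Username == "test").First();

            // EntityCollection<Post> is only IEnumerable<Post>: enumerating it
            // lazy-loads all of the user's posts, then filters them in memory.
            Post inMemory = user.Posts.Where(x => x.PostID == "123").First();

            // Querying the ObjectSet directly keeps the whole filter in SQL.
            Post inSql = db.Posts
                           .Where(x => x.PostID == "123" && x.Username == user.Username)
                           .First();
        }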

    Read the article

  • Parallelism in .NET – Part 14, The Different Forms of Task

    - by Reed
    Before discussing Task creation and actual usage in concurrent environments, I will briefly expand upon my introduction of the Task class and provide a short explanation of the distinct forms of Task.  The Task Parallel Library includes four distinct, though related, variations on the Task class.  In my introduction to the Task class, I focused on the most basic version of Task.  This version of Task, the standard Task class, is most often used with an Action delegate.  This allows you to implement each task within the task decomposition as a single delegate.  Typically, when using the new threading constructs in .NET 4 and the Task Parallel Library, we use lambda expressions to define anonymous methods.  The advantage of using a lambda expression is that it allows the Action delegate to directly use variables in the calling scope.  This eliminates the need for separate Task classes for Action<T>, Action<T1,T2>, and all of the other Action<…> delegate types.  As an example, suppose we wanted to make a Task to handle the “Show Splash” task from our earlier decomposition.  Even if this task required parameters, such as a message to display, we could still use an Action delegate specified via a lambda:

        // Store this as a local variable
        string messageForSplashScreen = GetSplashScreenMessage();

        // Create our task
        Task showSplashTask = new Task(() =>
        {
            // We can use variables in our outer scope,
            // as well as methods scoped to our class!
            this.DisplaySplashScreen(messageForSplashScreen);
        });

    This provides a huge amount of flexibility.  We can use this single form of task for any task which performs an operation, provided the only information we need to track is whether the task has completed successfully or not.  This leads to my first observation: Use a Task with a System.Action delegate for any task for which no result is generated.  This observation leads to an obvious corollary: we also need a way to define a task which generates a result.  The Task Parallel Library provides this via the Task<TResult> class.  Task<TResult> subclasses the standard Task class, providing one additional feature – the ability to return a value back to the user of the task.  This is done by switching from providing an Action delegate to providing a Func<TResult> delegate.  If we decompose our problem and realize we have one task whose result is required by a future operation, this can be handled via Task<TResult>.  For example, suppose we want to make a task for our “Check for Update” task; we could do:

        Task<bool> checkForUpdateTask = new Task<bool>(() =>
        {
            return this.CheckWebsiteForUpdate();
        });

    Later, we would start this task, and perform some other work.
    At any point in the future, we could get the value from the Task<TResult>.Result property, which will cause our thread to block until the task has finished processing:

        // This uses the Task<bool> checkForUpdateTask created above...
        // Start the task, typically on a background thread
        checkForUpdateTask.Start();

        // Do some other work on our current thread
        this.DoSomeWork();

        // Discover, from our background task, whether an update is available.
        // This will block until our task completes.
        bool updateAvailable = checkForUpdateTask.Result;

    This leads me to my second observation: Use a Task<TResult> with a System.Func<TResult> delegate for any task which generates a result.  Task and Task<TResult> provide a much cleaner alternative to the previous Asynchronous Programming design patterns in the .NET Framework.  Instead of trying to implement IAsyncResult and providing BeginXXX() and EndXXX() methods, implementing an asynchronous programming API can be as simple as creating a method that returns a Task or Task<TResult>.  The client side of the pattern is also dramatically simplified – the client can call a method, then either call task.Wait() or use task.Result when it needs to wait for the operation’s completion.  While this provides a much cleaner model for future APIs, there is quite a bit of infrastructure built around the current Asynchronous Programming design patterns.  In order to provide a model to work with existing APIs, two other forms of Task exist.  There is a constructor for Task which takes an Action<Object> and a state parameter.  In addition, there is a constructor for creating a Task<TResult> which takes a Func<Object, TResult> as well as a state parameter.  When using these constructors, the state parameter is stored in the Task.AsyncState property.  While these two overloads exist and are usable directly, I strongly recommend avoiding them for new development.  The two forms of Task which take an object state parameter exist primarily for interoperability with traditional .NET Asynchronous Programming methodologies.  Using lambda expressions to capture variables from the scope of the creator is a much cleaner approach than using the untyped state parameters, since lambda expressions provide full type safety without introducing new variables.
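    For completeness, a quick sketch of those state-based overloads (the delegate bodies and state values here are made up):

        // Task with Action<object>: the state object is handed to the delegate
        // and is also exposed later via Task.AsyncState.
        Task logTask = new Task(
            state => Console.WriteLine("State was: {0}", state),
            "some state");

        // Task<TResult> with Func<object, TResult> and a state parameter.
        Task<int> lengthTask = new Task<int>(
            state => ((string)state).Length,
            "some state");

        logTask.Start();
        lengthTask.Start();

        Console.WriteLine(lengthTask.Result);     // 10
        Console.WriteLine(lengthTask.AsyncState); // some state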

    Read the article

  • Inheritance with POCO entities in Entity Framework 4

    - by Juvaly
    Hi All, I have a Consumer class and a BillableConsumer : Consumer class. When trying to do any operation on my "Consumers" set, I get the error message "Object mapping could not be found for Type with identity Models.BillableConsumer". From the CSDL:

        <EntityType Name="BillableConsumer" BaseType="Models.Consumer">
          <Property Type="String" Name="CardExpiratoin" Nullable="false" />
          <Property Type="String" Name="CardNumber" Nullable="false" />
          <Property Type="String" Name="City" Nullable="false" />
          <Property Type="String" Name="Country" Nullable="false" />
          <Property Type="String" Name="CVV" Nullable="false" />
          <Property Type="String" Name="NameOnCard" Nullable="false" />
          <Property Type="String" Name="PostalCode" Nullable="false" />
          <Property Type="String" Name="State" />
          <Property Type="String" Name="StreetAddress" Nullable="false" />
        </EntityType>

    From the C-S mapping:

        <EntitySetMapping Name="Consumers">
          <EntityTypeMapping TypeName="IsTypeOf(Models.Consumer)">
            <MappingFragment StoreEntitySet="consumer">
              <ScalarProperty Name="LoginID" ColumnName="LoginID" />
              <ScalarProperty Name="FirstName" ColumnName="FirstName" />
              <ScalarProperty Name="LastName" ColumnName="LastName" />
            </MappingFragment>
          </EntityTypeMapping>
          <EntityTypeMapping TypeName="IsTypeOf(Models.BillableConsumer)">
            <MappingFragment StoreEntitySet="billinginformation">
              <ScalarProperty Name="CardExpiratoin" ColumnName="CardExpiratoin" />
              <ScalarProperty Name="CardNumber" ColumnName="CardNumber" />
              <ScalarProperty Name="City" ColumnName="City" />
              <ScalarProperty Name="Country" ColumnName="Country" />
              <ScalarProperty Name="CVV" ColumnName="CVV" />
              <ScalarProperty Name="LoginID" ColumnName="LoginID" />
              <ScalarProperty Name="NameOnCard" ColumnName="NameOnCard" />
              <ScalarProperty Name="PostalCode" ColumnName="PostalCode" />
              <ScalarProperty Name="State" ColumnName="State" />
              <ScalarProperty Name="StreetAddress" ColumnName="StreetAddress" />
            </MappingFragment>
          </EntityTypeMapping>
        </EntitySetMapping>

    Is this because I did not specifically add the BillableConsumer entity to the object set? How do I do that in a POCO scenario? Thanks!

    UPDATE: I decided to test whether or not POCOs generated with the T4 template would solve the problem, and they did. The most annoying part is that when I restored my original classes from SVN to try and figure out how they are different - they worked as well!! Not adding this as an answer because someone else might have an actual explanation...
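    For reference, a sketch of how a derived POCO type is normally used once the mapping is valid (the context name is hypothetical; the set names are assumed from the question): there is no separate set for the subtype; everything goes through the base type's set:

        using (var context = new ModelEntities()) // hypothetical context name
        {
            // Derived instances are added through the base type's set.
            var billable = new BillableConsumer { /* ... */ };
            context.Consumers.AddObject(billable);
            context.SaveChanges();

            // Querying for the subtype goes through OfType<T>().
            var billableConsumers = context.Consumers
                                           .OfType<BillableConsumer>()
                                           .ToList();
        }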

    Read the article

  • Entity framework 4 many-to-many insertion?

    - by Saxman
    Hi all, I'm not very familiar with the many-to-many insertion process using Entity Framework 4, POCO. I have a blog with 3 tables: Post, Comment, and Tag. A Post can have many Tags and a Tag can be in many Posts. Here are the Post and Tag models:

        public class Tag
        {
            public int Id { get; set; }

            [Required]
            [StringLength(25, ErrorMessage = "Tag name can't exceed 25 characters.")]
            public string Name { get; set; }

            public virtual ICollection<Post> Posts { get; set; }
        }

        public class Post
        {
            public int Id { get; set; }

            [Required]
            [StringLength(512, ErrorMessage = "Title can't exceed 512 characters")]
            public string Title { get; set; }

            [Required]
            [AllowHtml]
            public string Content { get; set; }

            public string FriendlyUrl { get; set; }
            public DateTime PostedDate { get; set; }
            public bool IsActive { get; set; }

            public virtual ICollection<Comment> Comments { get; set; }
            public virtual ICollection<Tag> Tags { get; set; }
        }

    Now when I'm adding a new post, I'm not sure what the right way to do it would be. I'm thinking that I'll have a textbox where I can select multiple tags for that post (this part is already done); in my controller, I will check whether the tag already exists, and if not, I will insert the new tag. But I'm not even sure, based on the models I've created for EF, whether they will create a PostsTags table, or just a Tags and a Posts table with links between the two. How would I insert the new Post and set the tags on that post? Is it just newPost.Tags = Tags (where Tags are the ones that got selected - do I even need to check whether they already exist?), and then something like _post.Add(newPost);? Thanks.
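    A sketch of one common approach (a hypothetical helper, assuming a code-first DbContext named db exposing Posts and Tags sets): with the classes above, EF infers the join table from the two collection navigation properties, and on insert you attach an existing Tag where one matches or create a new one:

        public void AddPost(Post newPost, IEnumerable<string> tagNames)
        {
            newPost.Tags = new List<Tag>();
            foreach (var name in tagNames)
            {
                // Reuse the existing tag if there is one; otherwise create it.
                var tag = db.Tags.FirstOrDefault(t => t.Name == name)
                          ?? new Tag { Name = name };
                newPost.Tags.Add(tag);
            }

            db.Posts.Add(newPost); // rows in the join table are written automatically
            db.SaveChanges();
        }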

    Read the article

  • How to host customer developed code server side

    - by user963263
    I'm developing a multi-tenant web application, most likely using ASP.NET MVC5 and Web API. I have used business applications in the past where it was possible to upload custom DLLs or paste custom code into a GUI to have custom functions run server side. Those applications were self-hosted and single-tenant, though, so the customer-developed bits didn't impact other clients. I want to host the multi-tenant web application myself and allow customers to upload custom code that will run server side. This could be for things like custom web services that client side JavaScript could interact with, or for automation steps that they want triggered server side asynchronously when a user takes a particular action. Additionally, I want to expose an API that allows customers' code to interact with data specific to the web application itself. Client code may need to be "wrapped" so that it has access to appropriate references - to our custom API and maybe to a white list of approved libraries. There are several issues to consider - security, performance (infinite loops, otherwise poorly written code, load balancing, etc.), whether to receive compiled DLLs or require raw code, and so on. Is there an established pattern for this sort of thing, or a sample project anyone can point to? Or any general recommendations?
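    One building block worth knowing about (a sketch, not a complete answer; the plugin type, assembly name, and path are hypothetical): in .NET 4 you can load customer assemblies into a partial-trust AppDomain so they execute with only the permissions you grant, and unloading the domain gives you a lever against runaway code:

        using System;
        using System.Security;
        using System.Security.Permissions;

        // Grant only execution permission: no file, network, or reflection access.
        var permissions = new PermissionSet(PermissionState.None);
        permissions.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

        var setup = new AppDomainSetup { ApplicationBase = @"C:\TenantCode\Tenant42" }; // hypothetical path

        AppDomain sandbox = AppDomain.CreateDomain("Tenant42Sandbox", null, setup, permissions);
        try
        {
            // CustomerPlugin is a hypothetical base type that must derive from
            // MarshalByRefObject so calls cross the AppDomain boundary.
            var plugin = (CustomerPlugin)sandbox.CreateInstanceAndUnwrap(
                "Tenant42.CustomCode", "Tenant42.CustomCode.CustomerPlugin");
            plugin.Run();
        }
        finally
        {
            AppDomain.Unload(sandbox); // tears down the tenant's code, even if it misbehaves
        }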

    Read the article

  • How to bind dropdownlist data to complex class?

    - by chobo2
    Hi, I am using asp.net mvc 2.0 (default model binding) and I have this problem. I have a strongly typed view that has a dropdownlist:

        <%= Html.DropDownList("List", "-----") %>

    Now I have a model class like:

        public class Test
        {
            public SelectList List { get; set; }
            public string Selected { get; set; }
        }

    Now I have in my controller this:

        public ActionResult TestAction()
        {
            Test ViewModel = new Test();
            ViewModel.List = new SelectList(GetList(), "value", "text", "selected");
            return View(ViewModel);
        }

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult TestAction(Test ViewModel)
        {
            return View();
        }

    Now when I load up the TestAction page for the first time it populates the dropdown list as expected. Now I want to post the selected value back to the server (the dropdownlist is within a form tag with some other textboxes). So I am trying to bind it automatically when it comes in, as seen in the signature (Test ViewModel). However I get this big nasty error:

        Server Error in '/' Application.
        No parameterless constructor defined for this object.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.MissingMethodException: No parameterless constructor defined for this object.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [MissingMethodException: No parameterless constructor defined for this object.]
        System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor, Boolean& bNeedSecurityCheck) +0
        System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache) +98
        System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean skipCheckThis, Boolean fillCache) +241
        System.Activator.CreateInstance(Type type, Boolean nonPublic) +69
        System.Activator.CreateInstance(Type type) +6
        System.Web.Mvc.DefaultModelBinder.CreateModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Type modelType) +403
        System.Web.Mvc.DefaultModelBinder.BindSimpleModel(ControllerContext controllerContext, ModelBindingContext bindingContext, ValueProviderResult valueProviderResult) +544
        System.Web.Mvc.DefaultModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) +479
        System.Web.Mvc.DefaultModelBinder.GetPropertyValue(ControllerContext controllerContext, ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor, IModelBinder propertyBinder) +45
        System.Web.Mvc.DefaultModelBinder.BindProperty(ControllerContext controllerContext, ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor) +658
        System.Web.Mvc.DefaultModelBinder.BindProperties(ControllerContext controllerContext, ModelBindingContext bindingContext) +147
        System.Web.Mvc.DefaultModelBinder.BindComplexElementalModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Object model) +98
        System.Web.Mvc.DefaultModelBinder.BindComplexModel(ControllerContext controllerContext, ModelBindingContext bindingContext) +2504
        System.Web.Mvc.DefaultModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) +548
        System.Web.Mvc.ControllerActionInvoker.GetParameterValue(ControllerContext controllerContext, ParameterDescriptor parameterDescriptor) +474
        System.Web.Mvc.ControllerActionInvoker.GetParameterValues(ControllerContext controllerContext, ActionDescriptor actionDescriptor) +181
        System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +830
        System.Web.Mvc.Controller.ExecuteCore() +136
        System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +111
        System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +39
        System.Web.Mvc.<>c__DisplayClass8.<BeginProcessRequest>b__4() +65
        System.Web.Mvc.Async.<>c__DisplayClass1.<MakeVoidDelegate>b__0() +44
        System.Web.Mvc.Async.<>c__DisplayClass8`1.<BeginSynchronous>b__7(IAsyncResult _) +42
        System.Web.Mvc.Async.WrappedAsyncResult`1.End() +141
        System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +54
        System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +40
        System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +52
        System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +38
        System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8836913
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +184

    So how can I do this?
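    For the record, a sketch of the usual resolution (assuming the reconstructed model above): because the form posts a value named "List", the DefaultModelBinder tries to construct the SelectList property, and SelectList has no parameterless constructor. Binding the selection to the string property instead - with the view changed to <%= Html.DropDownList("Selected", Model.List, "-----") %> - sidesteps the construction entirely:

        public class Test
        {
            // Populated by the controller on every request; never model-bound.
            public SelectList List { get; set; }

            // Matches the drop-down's name attribute, so the posted value binds here.
            public string Selected { get; set; }
        }

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult TestAction(Test viewModel)
        {
            // Rebuild the items before redisplaying the view.
            viewModel.List = new SelectList(GetList(), "value", "text", viewModel.Selected);
            return View(viewModel);
        }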

    Read the article

  • ASP.NET MVC model binding foreign key relationship

    - by Rory Fitzpatrick
    Is it possible to bind a foreign key relationship on my model to a form input? Say I have a one-to-many relationship between Car and Manufacturer. I want to have a form for updating Car that includes a select input for setting the Manufacturer. I was hoping to be able to do this using the built-in model binding, but I'm starting to think I'll have to do it myself. My action method signature looks like this:

        public JsonResult Save(int id, [Bind(Include = "Name, Description, Manufacturer")] Car car)

    The form posts the values Name, Description and Manufacturer, where Manufacturer is a primary key of type int. Name and Description get set properly, but not Manufacturer, which makes sense since the model binder has no idea what the PK field is. Does that mean I would have to write a custom IModelBinder that is aware of this? I'm not sure how that would work, since my data access repositories are loaded through an IoC container in each Controller constructor.
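    A sketch of the approach most answers converge on (the edit-model and repository names are hypothetical): expose the key as a simple int on the bound model and resolve the Manufacturer in the action, rather than teaching the binder about the relationship:

        public class CarEditModel // hypothetical edit model
        {
            public string Name { get; set; }
            public string Description { get; set; }
            public int ManufacturerId { get; set; } // posted by the <select>
        }

        public JsonResult Save(int id, CarEditModel form)
        {
            var car = _carRepository.GetById(id); // hypothetical repositories via IoC
            car.Name = form.Name;
            car.Description = form.Description;
            car.Manufacturer = _manufacturerRepository.GetById(form.ManufacturerId);
            _carRepository.Save(car);
            return Json(new { success = true });
        }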

    Read the article

  • ASP.NET MVC 2 loading partial view using jQuery - no client side validation

    - by brainnovative
    I am using jQuery.load() to render a partial view. This part looks like this:

        $('#sizeAddHolder').load('/MyController/MyAction', function () {
            ...
        });

    The code for the actions in my controller is the following:

        public ActionResult MyAction(byte id)
        {
            var model = new MyModel { ObjectProp1 = "Some text" };
            return View(model);
        }

        [HttpPost]
        public ActionResult MyAction(byte id, FormCollection form)
        {
            // TODO: DB insert logic goes here
            var result = ...;
            return Json(result);
        }

    I am returning a partial view that looks something like this:

        <% using (Html.BeginForm("MyAction", "MyController")) { %>
            <%= Html.ValidationSummary(true) %>
            <h3>Create my object</h3>
            <fieldset>
                <legend>Fields</legend>
                <div class="editor-label">
                    <%= Html.LabelFor(model => model.ObjectProp1) %>
                </div>
                <div class="editor-field">
                    <%= Html.TextBoxFor(model => model.Size.ObjectProp1) %>
                    <%= Html.ValidationMessageFor(model => model.ObjectProp1) %>
                </div>
                <div class="editor-label">
                    <%= Html.LabelFor(model => model.ObjectProp2) %>
                </div>
                <div class="editor-field">
                    <%= Html.TextBoxFor(model => model.ObjectProp2) %>
                    <%= Html.ValidationMessageFor(model => model.ObjectProp2) %>
                </div>
                <p>
                    <input type="submit" value="Create" />
                </p>
            </fieldset>
        <% } %>

    Client side validation does not work in this case. What's more, the script that contains the validation messages also isn't included in the view that's returned. Both properties in my model class have Required and StringLength attributes. Is there any way to trigger client side validation in a view which has been loaded like this?
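    One workaround often suggested for MVC 2 (a sketch, hedged: it assumes the MicrosoftMvcValidation scripts are already loaded on the host page, and it relies on an internal, underscore-prefixed helper that could change between versions): enable client validation inside the partial so its validation metadata is emitted with the form, then re-run the form scanner after the AJAX load completes:

        <%-- At the top of the partial, before BeginForm: --%>
        <% Html.EnableClientValidation(); %>

        // In the load callback, re-scan the newly inserted markup for forms:
        $('#sizeAddHolder').load('/MyController/MyAction', function () {
            Sys.Mvc.FormContext._Application_Load(); // internal MVC 2 helper
        });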

    Read the article

  • Random sort list with LINQ and Entity Framework in vb.net

    - by Sander Versluys
    I've seen many examples in LINQ but I'm not able to reproduce the same result in vb.net. I have the following code:

        Dim context As New MyModel.Entities()
        Dim rnd As New System.Random()
        Dim gardens As List(Of Tuin) = (From t In context.Gardens Where _
            t.Approved = True And _
            Not t.Famous = True _
            Order By rnd.Next() _
            Select t).ToList()

    But I receive an error when binding this list to a control:

        LINQ to Entities does not recognize the method 'Int32 Next()' method, and this method cannot be translated into a store expression.

    Any suggestion on how to get this to work?
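    A sketch of the usual workaround: LINQ to Entities cannot translate Random.Next() into SQL, so filter in the database and then switch to LINQ to Objects with AsEnumerable() so the random ordering runs in memory over the filtered results:

        Dim gardens As List(Of Tuin) = context.Gardens.
            Where(Function(t) t.Approved AndAlso Not t.Famous).
            AsEnumerable().
            OrderBy(Function(t) rnd.Next()).
            ToList()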

    Read the article

  • Metalanguage like BNF or XML-Schema to validate a tree-instance against a tree-model

    - by Stefan
    Hi! I'm implementing a new machine learning algorithm in Java that extracts a prototype datastructure from a set of structured datasets (tree structures). As I'm developing a generic library for that purpose, I kept my design independent from concrete data representations like XML. My problem now is that I need a way to define a data model - essentially a ruleset describing valid trees - against which a set of trees is matched. I thought of using BNF or a similar dialect. Basically I need a way to iterate through the space of all valid TreeNodes defined by the ModelTree (like a search through the search space for algorithms like A*) so that I can compare my set of concrete trees with the model. I know that I'll have to deal with infinite spaces there, but first things first. I know it's rather tricky, but I would appreciate any clues. Thanks in advance, Stefan

    Read the article

  • ASP.NET MVC 2 actionlink breaking after migration from MVC version 1

    - by thermal7
    Hi, I am migrating my application from asp.net mvc to mvc version 2 and am having the following issue. I have paging links << < that I include in each page, like so:

        <% Html.RenderPartial("PagingControl", Model); %>

    They exist in an ascx file as follows:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<BankingDB.Controllers.Utility.IPagedSortedObject>" %>
        <div class="paging">
            <div class="previous-paging">
                <!-- error!! --><%= Model.HasPreviousPage ? Html.ActionLink("<<", "Index", Model.buildParams(1)) : "<<" %>
                <%= Model.HasPreviousPage ? Html.ActionLink("<", "Index", Model.buildParams(Model.PreviousPageIndex)) : "<" %>
            </div>
            <div class="paging-details">
                Showing records <%= Model.BaseRecordIndex %> to <%= Model.MaxRecordIndex %> of <%= Model.TotalRecordCount %>
            </div>
            <div class="next-paging">
                <%= Model.HasNextPage ? Html.ActionLink(">", "Index", Model.buildParams(Model.NextPageIndex)) : ">" %>
                <%= Model.HasNextPage ? Html.ActionLink(">>", "Index", Model.buildParams(Model.PageCount)) : ">>" %>
            </div>
        </div>

    When I try to access the page I get the error:

        CS0173: Type of conditional expression cannot be determined because there is no implicit conversion between 'System.Web.Mvc.MvcHtmlString' and 'string'

    The error is marked above and appears to be with the action link. Including the controller name doesn't help. Any ideas?
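    A sketch of the common fix: in MVC 2, Html.ActionLink returns MvcHtmlString rather than string, so the two arms of the conditional no longer share a type; wrapping the literal with MvcHtmlString.Create gives both arms the same type:

        <%= Model.HasPreviousPage
                ? Html.ActionLink("<<", "Index", Model.buildParams(1))
                : MvcHtmlString.Create("<<") %>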

    Read the article

  • How can I highlight empty fields in ASP.NET MVC 2 before model binding has occurred?

    - by Richard Poole
    I'm trying to highlight certain form fields (let's call them important fields) when they're empty. In essence, they should behave a bit like required fields, but they should be highlighted if they are empty when the user first GETs the form, before POST & model validation has occurred. The user can also ignore the warnings and submit the form when these fields are empty (i.e. empty important fields won't cause ModelState.IsValid to be false). Ideally it needs to work server-side (empty important fields are highlighted with a warning message on GET) and client-side (highlighted if empty when losing focus). I've thought of a few ways of doing this, but I'm hoping some bright spark can come up with a nice elegant solution...

    1. Just use a CSS class to flag important fields. Update every view/template to render important fields with an important CSS class. Write some jQuery to highlight empty important fields when the DOM is ready and hook their blur events so highlights & warning messages can be shown/hidden as appropriate. Pros: quick and easy. Cons: unnecessary duplication of importance flags and warning messages across views & templates; clients with JavaScript disabled will never see highlights/warnings.

    2. Custom data annotation and client-side validator. Create classes similar to RequiredAttribute, RequiredAttributeAdapter and ModelClientValidationRequiredRule, and register the adapter with DataAnnotationsModelValidatorProvider.RegisterAdapter. Create a client-side validator like this that responds to the blur event. Pros: the data annotation follows the DRY principle (Html.ValidationMessageFor<T> picks up field importance and the warning message from the attribute; no duplication). Cons: must call TryValidateModel from GET actions to ensure empty fields are decorated; not technically validation (client- & server-side rules don't match), so it's at the mercy of framework changes; clients with JavaScript disabled will never see highlights/warnings.

    3. Clone the entire validation framework. It strikes me that I'm trying to achieve exactly the same thing as validation, but with warnings rather than errors. It needs to run before model binding (and therefore validation) has occurred. Perhaps it's worth designing a similar framework with annotations like Required, RegularExpression, StringLength, etc. that somehow cause Html.TextBoxFor<T> etc. to render the warning CSS class and Html.ValidationMessageFor<T> to emit the warning message and JSON needed to enable client-side blur checks. Pros: sounds like something MVC 2 could do with out of the box. Cons: way too much effort for my current requirement!

    I'm swaying towards option 1, as sketched below. Can anyone think of a better solution?
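    A minimal sketch of the jQuery side of option 1 (the CSS class names are made up):

        // Highlight empty "important" fields when the DOM is ready and on blur.
        $(function () {
            function toggleWarning() {
                var $field = $(this);
                // Add the warning class when the trimmed value is empty; remove it otherwise.
                $field.toggleClass('important-empty', $.trim($field.val()) === '');
            }
            $('input.important, textarea.important')
                .each(toggleWarning)
                .blur(toggleWarning);
        });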

    Read the article

  • ADO.NET Entity Framework - Bridge entities

    - by user308806
    Hello, I have used the ADO.NET Entity Framework to build my application. My application's data diagram contains 2 bridge entities. The problem is that I cannot see or access these bridge entities in the ADO.NET framework using domain services. Do you have any idea how to use CRUD operations on them? Regards,

    Read the article

  • Entity Framework ObjectContext re-usage

    - by verror
    I'm learning EF now and have a question regarding the ObjectContext: should I create an instance of ObjectContext for every query (function) when I access the database? Or is it better to create it once (as a singleton) and reuse it? Before EF I was using the Enterprise Library Data Access Application Block and created an instance of the data access class for each data access function...
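    A sketch of the commonly recommended pattern instead of a singleton (the context and entity names are hypothetical): treat the context as a short-lived unit of work, created and disposed per operation or per web request; a shared singleton context grows its tracked-entity state indefinitely and is not thread safe:

        // One context per logical operation; materialize results before disposal.
        public List<Customer> GetActiveCustomers()
        {
            using (var context = new MyEntities()) // hypothetical ObjectContext type
            {
                return context.Customers
                              .Where(c => c.IsActive)
                              .ToList();
            }
        }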

    Read the article

  • asp.net Dynamic Data Site with custom MetaData

    - by loviji
    Hello, I'm searching for info about configuring my own metadata in an ASP.NET Dynamic Data site. For example, I have a table in MS SQL Server with the structure shown below:

        CREATE TABLE [dbo].[someTable](
            [id] [int] NOT NULL,
            [pname] [nvarchar](20) NULL,
            [FullName] [nvarchar](50) NULL,
            [age] [int] NULL)

    and there are 2 MS SQL tables (I've created), sysTables and sysColumns.

    sysTables:

        ID | sysTableName | TableName | TableDescription
        1  | someTable    | Persons   | All Data about Persons in system

    sysColumns:

        ID | TableName | sysColumnName      | ColumnName | ColumnDesc                   | ColumnType   | MUnit
        1  | someTable | sometable_pname    | Name       | Person Name (ex. John)       | nvarchar(20) | null
        2  | someTable | sometable_Fullname | Full Name  | Person Name (ex. John Black) | nvarchar(50) | null
        3  | someTable | sometable_age      | age        | Person age                   | int          | null

    I want the Details/Edit/Insert/List/ListDetails pages to use sysColumns and sysTables as metadata, because, for example, in the Details page "FullName" is not as readable as "Full Name". Any ideas - is it possible? Thanks
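    For comparison, Dynamic Data's built-in metadata mechanism is attribute-based rather than table-based; a sketch of how a friendlier column title is normally supplied (class and property names taken from the table above) - to drive it from your sysColumns table instead, you would have to write a custom metadata provider that reads those rows yourself:

        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        [MetadataType(typeof(SomeTableMetadata))]
        public partial class someTable { }

        public class SomeTableMetadata
        {
            [DisplayName("Name")]
            public object pname { get; set; }

            [DisplayName("Full Name")]
            [Description("Person Name (ex. John Black)")]
            public object FullName { get; set; }
        }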

    Read the article

  • Getting code-first Entity Framework to build tables on SQL Azure

    - by NER1808
    I am new to code-first Entity Framework. I have tried a few things now, but can't get EF to construct any tables in my SQL Azure database. Can anyone advise on some steps and settings I should check? The membership provider has no problem creating its tables. I have added PersistSecurityInfo=True in the connection string. The connection string is using the main user account for the server. When I implement the tables in the database using SQL, everything works fine.

    I have the following in WebRole.cs:

        // Initialize the database
        Database.SetInitializer<ReykerSCPContext>(new DbInitializer());

    My DbInitializer (which does not get run before I get an "Invalid object name 'dbo.ClientAccountIFAs'." error when I try to access the table for the first time, sometime after startup):

        public class DbInitializer : DropCreateDatabaseIfModelChanges<ReykerSCPContext>
        {
            protected override void Seed(ReykerSCPContext context)
            {
                using (context)
                {
                    // Add Doc Types
                    context.DocTypes.Add(new DocType() { DocTypeId = 1, Description = "Statement" });
                    context.DocTypes.Add(new DocType() { DocTypeId = 2, Description = "Contract note" });
                    context.DocTypes.Add(new DocType() { DocTypeId = 3, Description = "Notification" });
                    context.DocTypes.Add(new DocType() { DocTypeId = 4, Description = "Invoice" });
                    context.DocTypes.Add(new DocType() { DocTypeId = 5, Description = "Document" });
                    context.DocTypes.Add(new DocType() { DocTypeId = 6, Description = "Newsletter" });
                    context.DocTypes.Add(new DocType() { DocTypeId = 7, Description = "Terms and Conditions" });

                    // Add Reyker account types
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 1, Description = "ISA" });
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 2, Description = "Trading" });
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 3, Description = "SIPP" });
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 4, Description = "CTF" });
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 5, Description = "JISA" });
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 6, Description = "Direct" });
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 7, Description = "ISA & Direct" });

                    // Save the changes
                    context.SaveChanges();
                }
            }
        }

    and my DbContext class looks like:

        public class ReykerSCPContext : DbContext
        {
            // set the connection explicitly
            public ReykerSCPContext() : base("ReykerSCPContext") { }

            // define tables
            public DbSet<ClientAccountIFA> ClientAccountIFAs { get; set; }
            public DbSet<Document> Documents { get; set; }
            public DbSet<DocType> DocTypes { get; set; }
            public DbSet<ReykerAccountType> ReykerAccountTypes { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Runs when creating the model. Can be used to define special relationships, such as many-to-many.
            }
        }

    The code used to access the data is:

        public List<ClientAccountIFA> GetAllClientAccountIFAs()
        {
            using (DataContext)
            {
                var caiCollection = from c in DataContext.ClientAccountIFAs
                                    select c;
                return caiCollection.ToList();
            }
        }

    and it errors on the last line. Help!
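    One thing worth checking (a sketch, assuming EF 4.1 or later; not a guaranteed fix): initializers only run lazily, on first use of the registered context type, so registering the initializer and then forcing initialization at startup surfaces any database-creation errors immediately rather than at the first query:

        // Register and force the initializer to run right away.
        Database.SetInitializer(new DbInitializer());
        using (var context = new ReykerSCPContext())
        {
            context.Database.Initialize(force: true);
        }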

    Read the article

  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
    In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations.  Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects.  By introducing parallelism to our declarative model, we add some extra complexity.  This, in turn, adds some extra requirements that must be addressed.  In order to illustrate the main differences, and why they exist, let’s begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ.  LINQ to Objects is mainly built upon a single class: Enumerable.  The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>.  Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent style interface.  This is what allows us to write statements that chain together, and lead to the nice declarative programming model of LINQ:

        double min = collection
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    Other LINQ variants work in a similar fashion.  For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database.  PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable.  When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel().  This method takes any IEnumerable<T>, and converts it into a ParallelQuery<T>, the core class for PLINQ.  There is a similar ParallelQuery class for working with non-generic IEnumerable implementations.  This brings us to our first subtle, but important difference between PLINQ and LINQ – PLINQ always works upon specific types, which must be explicitly created.  Typically, the type you’ll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>.  Instead of dealing with an interface, implemented by an unknown class, we’re dealing with a specific class type.  This works seamlessly from a usage standpoint – ParallelQuery<T> implements IEnumerable<T>, so you can always “switch back” to an IEnumerable<T>.  The difference only arises at the beginning of our parallelization.  When we’re using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().

    There is an important consideration here – AsParallel() does not need to be called on your specific collection, but rather on any IEnumerable<T>.  This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning.  This can be useful if you have an operation which will not parallelize well or is not thread safe.  For example, the following is perfectly valid, and similar to our previous examples:

        double min = collection
            .AsParallel()
            .Select(item => item.SomeOperation())
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    However, if SomeOperation() is not thread safe, we could just as easily do:

        double min = collection
            .Select(item => item.SomeOperation())
            .AsParallel()
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    In this case, we’re using standard LINQ to Objects for the Select(…) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel.  PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects.  If SomeOperation() was thread-safe, but PerformComputation() was not thread-safe, we would need to handle this by using the AsEnumerable() method:

        double min = collection
            .AsParallel()
            .Select(item => item.SomeOperation())
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .AsEnumerable()
            .Min(item => item.PerformComputation());

    Here, we’re converting our collection into a ParallelQuery<T>, doing our map operation (the Select(…) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially.  This could also be written as two statements, which would allow us to use the language integrated syntax for the first portion:

        var tempCollection = from item in collection.AsParallel()
                             let e = item.SomeOperation()
                             where (e.SomeProperty > 6 && e.SomeProperty < 24)
                             select e;
        double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation());

    This allows us to use the standard LINQ style language integrated query syntax, but control whether it’s performed in parallel or serial by adding AsParallel() and AsEnumerable() appropriately.  The second important difference between PLINQ and LINQ deals with order preservation.  PLINQ, by default, does not preserve the order of the source collection.  This is by design.  In order to process a collection in parallel, the system needs to naturally deal with multiple elements at the same time.  Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary.  Therefore, by default, the system is allowed to completely change the order of your sequence during processing.  If you are doing a standard query operation, this is usually not an issue.  However, there are times when keeping a specific ordering in place is important.  If this is required, you can explicitly request the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method.  This will cause our sequence ordering to be preserved.  For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements.

    In LINQ to Objects, our code might look something like:

        // Using IEnumerable<SourceClass> collection
        IEnumerable<ResultClass> results = collection
            .Select(e => e.CreateResult())
            .Take(100);

    If we just converted this to a parallel query naively, like so:

        IEnumerable<ResultClass> results = collection
            .AsParallel()
            .Select(e => e.CreateResult())
            .Take(100);

    we could very easily get a very different, and non-reproducible, set of results, since the ordering of elements in the input collection is not preserved.  To get the same results as our original query, we need to use:

        IEnumerable<ResultClass> results = collection
            .AsParallel()
            .AsOrdered()
            .Select(e => e.CreateResult())
            .Take(100);

    This requests that PLINQ process our sequence in a way that verifies that our resulting collection is ordered as if it were processed serially.  This will cause our query to run slower, since there is overhead involved in maintaining the ordering.  However, in this case, it is required, since the ordering is required for correctness.  PLINQ is incredibly useful.  It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we’ve used previously.  There are some important differences in operation that must be considered, however – it is not a free pass to parallelize everything.  When using PLINQ in order to parallelize your routines declaratively, the same guideline I mentioned before still applies: parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.

    Read the article

  • Mapping between 4+1 architectural view model & UML

    - by Sadeq Dousti
    I'm a bit confused about how the 4+1 architectural view model maps to UML. Wikipedia gives the following mapping:

    - Logical view: class diagram, communication diagram, sequence diagram
    - Development view: component diagram, package diagram
    - Process view: activity diagram
    - Physical view: deployment diagram
    - Scenarios: use-case diagram

    The paper "Role of UML Sequence Diagram Constructs in Object Lifecycle Concept" gives the following mapping:

    - Logical view: class diagram (CD), object diagram (OD), sequence diagram (SD), collaboration diagram (COD), state chart diagram (SCD), activity diagram (AD)
    - Development view: package diagram, component diagram
    - Process view: use case diagram, CD, OD, SD, COD, SCD, AD
    - Physical view: deployment diagram
    - Use case view: use case diagram, OD, SD, COD, SCD, AD (which combines the four mentioned above)

    The web page "UML 4+1 View Materials" presents its own mapping. Finally, the white paper "Applying 4+1 View Architecture with UML 2" gives yet another mapping:

    - Logical view: class diagrams, object diagrams, state charts, and composite structures
    - Process view: sequence diagrams, communication diagrams, activity diagrams, timing diagrams, interaction overview diagrams
    - Development view: component diagrams
    - Physical view: deployment diagram
    - Use case view: use case diagram, activity diagrams

    I'm sure further searching will reveal other mappings as well. While various people usually have different perspectives, I don't see why that is the case here. Especially, since each UML diagram describes the system from a particular aspect, why is the "sequence diagram" considered as describing the "logical view" of the system by one author, while another author considers it as describing the "process view"? Could you please help me clarify the confusion?

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
    The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft got a decent API for building generic HTTP endpoints into the framework.

    DataContractJsonSerializer sucks

    As nice as Web API's overall design is, one thing still sucks: the built-in JSON serialization uses the DataContractJsonSerializer, which is just too limiting for many scenarios. The biggest issues I have with it are:

    - No support for untyped values (object, dynamic, anonymous types)
    - MS AJAX style date formatting
    - Ugly serialization formats for types like Dictionaries

    To me the most serious issue is dealing with serialization of untyped objects. I have a number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there are IEnumerable query results from a database, with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid for example). Creating custom types to fit these messages seems like overkill, and projections using Linq make this much easier to code up. Alas, DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output, for that matter. What this means is that you can't do stuff like this in Web API out of the box:

        public object GetAnonymousType()
        {
            return new { name = "Rick", company = "West Wind", entered = DateTime.Now };
        }

    Basically, anything that doesn't have an explicit type, DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer, which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections. Apparently there's a good possibility that Microsoft will ship Json.NET as part of the Web API RTM release. Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but no details yet in what capacity it will show up. Let's hope it ends up as the default in the box. Meanwhile, this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version.

    What about JsonValue?

    To be fair, Web API DOES include the new JsonValue/JsonObject/JsonArray types that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create a .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned.
    For serialization you can create an object structure on the fly and pass it back as part of a Web API action method like this:

        public JsonValue GetJsonValue()
        {
            dynamic json = new JsonObject();
            json.name = "Rick";
            json.company = "West Wind";
            json.entered = DateTime.Now;

            dynamic address = new JsonObject();
            address.street = "32 Kaiea";
            address.zip = "96779";
            json.address = address;

            dynamic phones = new JsonArray();
            json.phoneNumbers = phones;

            dynamic phone = new JsonObject();
            phone.type = "Home";
            phone.number = "808 123-1233";
            phones.Add(phone);

            phone = new JsonObject();
            phone.type = "Mobile";
            phone.number = "808 123-1234";
            phones.Add(phone);

            //var jsonString = json.ToString();

            return json;
        }

    which produces the following output (formatted here for easier reading):

        {
            name: "Rick",
            company: "West Wind",
            entered: "2012-03-08T15:33:19.673-10:00",
            address: {
                street: "32 Kaiea",
                zip: "96779"
            },
            phoneNumbers: [
                { type: "Home", number: "808 123-1233" },
                { type: "Mobile", number: "808 123-1234" }
            ]
        }

    If you need to build a simple JSON type on the fly, these types work great. But if you have an existing type - or worse, a query result/list that's already formatted - JsonValue et al. become a pain to work with. As far as I can see, there's no way to just throw an object instance at JsonValue and have it convert into a JsonValue dictionary. It's a manual process.

    Using alternate Serializers in Web API

    So, currently the default serializer in Web API is DataContractJsonSerializer and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions, or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer.
    Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:

        using System;
        using System.Net.Http.Formatting;
        using System.Threading.Tasks;
        using System.Web.Script.Serialization;
        using System.Json;
        using System.IO;

        namespace Westwind.Web.WebApi
        {
            public class JavaScriptSerializerFormatter : MediaTypeFormatter
            {
                public JavaScriptSerializerFormatter()
                {
                    SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
                }

                protected override bool CanWriteType(Type type)
                {
                    // don't serialize JsonValue structures - use the default formatter for those
                    if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                        return false;
                    return true;
                }

                protected override bool CanReadType(Type type)
                {
                    if (type == typeof(IKeyValueModel))
                        return false;
                    return true;
                }

                protected override Task<object> OnReadFromStreamAsync(Type type, Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
                {
                    var task = Task<object>.Factory.StartNew(() =>
                    {
                        var ser = new JavaScriptSerializer();
                        string json;
                        using (var sr = new StreamReader(stream))
                        {
                            json = sr.ReadToEnd();
                            sr.Close();
                        }
                        object val = ser.Deserialize(json, type);
                        return val;
                    });
                    return task;
                }

                protected override Task OnWriteToStreamAsync(Type type, object value, Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext)
                {
                    var task = Task.Factory.StartNew(() =>
                    {
                        var ser = new JavaScriptSerializer();
                        var json = ser.Serialize(value);
                        byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                        stream.Write(buf, 0, buf.Length);
                        stream.Flush();
                    });
                    return task;
                }
            }
        }

    Formatter implementation is pretty simple: you override 4 methods to tell which types you can handle, and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow JsonValue/JsonObject/JsonArray types to be handled by the default serializer, so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them, they'd try to render the dictionaries, which is very undesirable.
    If you'd rather use Json.NET, here's the JSON.NET version of the formatter:

        // this code requires a reference to JSON.NET in your project
        #if true
        using System;
        using System.Net.Http.Formatting;
        using System.Threading.Tasks;
        using System.Web.Script.Serialization;
        using System.Json;
        using Newtonsoft.Json;
        using System.IO;
        using Newtonsoft.Json.Converters;

        namespace Westwind.Web.WebApi
        {
            public class JsonNetFormatter : MediaTypeFormatter
            {
                public JsonNetFormatter()
                {
                    SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
                }

                protected override bool CanWriteType(Type type)
                {
                    // don't serialize JsonValue structures - use the default formatter for those
                    if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                        return false;
                    return true;
                }

                protected override bool CanReadType(Type type)
                {
                    if (type == typeof(IKeyValueModel))
                        return false;
                    return true;
                }

                protected override Task<object> OnReadFromStreamAsync(Type type, Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
                {
                    var task = Task<object>.Factory.StartNew(() =>
                    {
                        var settings = new JsonSerializerSettings()
                        {
                            NullValueHandling = NullValueHandling.Ignore,
                        };
                        var sr = new StreamReader(stream);
                        var jreader = new JsonTextReader(sr);
                        var ser = new JsonSerializer();
                        ser.Converters.Add(new IsoDateTimeConverter());
                        object val = ser.Deserialize(jreader, type);
                        return val;
                    });
                    return task;
                }

                protected override Task OnWriteToStreamAsync(Type type, object value, Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext)
                {
                    var task = Task.Factory.StartNew(() =>
                    {
                        var settings = new JsonSerializerSettings()
                        {
                            NullValueHandling = NullValueHandling.Ignore,
                        };
                        string json = JsonConvert.SerializeObject(value, Formatting.Indented,
                                                                  new JsonConverter[1] { new IsoDateTimeConverter() });
                        byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                        stream.Write(buf, 0, buf.Length);
                        stream.Flush();
                    });
                    return task;
                }
            }
        }
        #endif

    One advantage of the Json.NET serializer is that you can specify a few options on how things are formatted and handled. You get null value handling, and you can plug in the IsoDateTimeConverter, which is nice to produce proper ISO dates that I would expect any JSON serializer to output these days.

    Hooking up the Formatters

    Once you've created the custom formatters, you need to enable them for your Web API application. To do this, use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:

        protected void Application_Start(object sender, EventArgs e)
        {
            // Action based routing (used for RPC calls)
            RouteTable.Routes.MapHttpRoute(
                name: "StockApi",
                routeTemplate: "stocks/{action}/{symbol}",
                defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" }
            );

            // WebApi configuration to hook up formatters and message handlers (optional)
            RegisterApis(GlobalConfiguration.Configuration);
        }

        public static void RegisterApis(HttpConfiguration config)
        {
            // Add JavaScriptSerializer formatter instead - add at top to make default
            //config.Formatters.Insert(0, new JavaScriptSerializerFormatter());

            // Add Json.net formatter - add at the top so it fires first!
            // This leaves the old one in place so JsonValue/JsonObject/JsonArray still are handled
            config.Formatters.Insert(0, new JsonNetFormatter());
        }

    One thing to remember here is the GlobalConfiguration object, which is Web API's static configuration instance. I think this thing is seriously misnamed, given that GlobalConfiguration could stand for anything and so is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is, you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I also am not removing the old formatter, because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since they process in sequence and I exclude processing for these types, JsonValue et al. still get properly serialized/deserialized.

    Summary

    Currently, DataContractJsonSerializer in Web API is a pain, but at least we have the ability, with relatively limited effort, to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services, you can plug in the JavaScriptSerializer and get exactly the same serializer you used in the past, so your results will be the same and won't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like Dictionaries and dates, for example), so if you're migrating, it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward, it looks like Microsoft is planning on plugging Json.NET into Web API and making that the default. I think that's an awesome choice, since Json.NET has been around forever, is fast and easy to use, and provides a ton of functionality as part of this great library. I just wish Microsoft had figured this out sooner instead of integrating with it now at the last minute, especially given that Json.NET has a similar set of lower-level JSON objects (JsonValue/JsonObject etc.) which will now end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray, and now Json.NET). For years I've been using my own JSON serializer because the built-in choices are limited. However, with an official endorsement of Json.NET, I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web Api  AJAX  ASP.NET

    Read the article

  • Parallelism in .NET – Part 20, Using Task with Existing APIs

    - by Reed
Although the Task class provides a huge amount of flexibility for handling asynchronous actions, the .NET Framework still contains a large number of APIs that are based on the previous asynchronous programming model.  While Task and Task<T> provide a much nicer syntax as well as extending the flexibility, allowing features such as continuations based on multiple tasks, the existing APIs don’t directly support this workflow. There is a method in the TaskFactory class which can be used to adapt the existing APIs to the new Task class: TaskFactory.FromAsync.  This method provides a way to convert from the BeginOperation/EndOperation method pair syntax common throughout the .NET Framework directly to a Task<T> containing the results of the operation in the task’s Result property. While this method does exist, it unfortunately comes at a cost – the method overloads are far from simple to decipher, and the resulting code is not always as easily understood as newer code based directly on the Task class.  For example, a single call to handle WebRequest.BeginGetResponse/EndGetResponse, one of the easiest “pairs” of methods to use, looks like the following:

var task = Task.Factory.FromAsync<WebResponse>(
    request.BeginGetResponse,
    request.EndGetResponse,
    null);

The compiler is unfortunately unable to infer the correct type, and, as a result, the WebResponse must be explicitly mentioned in the method call.  As a result, I typically recommend wrapping this into an extension method to ease use.  For example, I would place the above in an extension method like:

public static class WebRequestExtensions
{
    public static Task<WebResponse> GetResponseAsync(this WebRequest request)
    {
        return Task.Factory.FromAsync<WebResponse>(
            request.BeginGetResponse,
            request.EndGetResponse,
            null);
    }
}

This dramatically simplifies usage.  For example, if we wanted to asynchronously check to see if this blog supported XHTML 1.0, and report that in a text box to the user, we could do:

var webRequest = WebRequest.Create("http://www.reedcopsey.com");

webRequest.GetResponseAsync().ContinueWith(t =>
{
    using (var sr = new StreamReader(t.Result.GetResponseStream()))
    {
        string str = sr.ReadLine();

        this.textBox1.Text = string.Format("Page at {0} supports XHTML 1.0: {1}",
            t.Result.ResponseUri,
            str.Contains("XHTML 1.0"));
    }
}, TaskScheduler.FromCurrentSynchronizationContext());

By using a continuation with a TaskScheduler based on the current synchronization context, we can keep this request asynchronous, check based on the first line of the response string, and report the results back on our UI directly.
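The same technique adapts Begin/End pairs that take arguments before the AsyncCallback. As a sketch of my own (not from the original post; the extension method name is my invention), here is Stream.BeginRead/EndRead wrapped with the three-argument FromAsync overload:

using System.IO;
using System.Threading.Tasks;

public static class StreamExtensions
{
    // Wraps the classic Stream.BeginRead/EndRead pair in a Task<int>
    // whose Result is the number of bytes actually read.
    public static Task<int> ReadAsyncTask(this Stream stream, byte[] buffer, int offset, int count)
    {
        return Task<int>.Factory.FromAsync<byte[], int, int>(
            stream.BeginRead,
            stream.EndRead,
            buffer, offset, count,
            null);
    }
}

A continuation can then consume t.Result exactly as the GetResponseAsync example above does, keeping the read fully asynchronous.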

    Read the article

  • Parallelism in .NET – Part 8, PLINQ’s ForAll Method

    - by Reed
Parallel LINQ extends LINQ to Objects, and is typically very similar.  However, as I previously discussed, there are some differences.  Although the standard way to handle simple Data Parallelism is via Parallel.ForEach, it’s possible to do the same thing via PLINQ. PLINQ adds a new method unavailable in standard LINQ which provides new functionality… LINQ is designed to provide a much simpler way of handling querying, including filtering, ordering, grouping, and many other benefits.  Reading the description in LINQ to Objects on MSDN, it becomes clear that the thinking behind LINQ deals with retrieval of data.  LINQ works by adding a functional programming style on top of .NET, allowing us to express filters in terms of predicate functions, for example. PLINQ is, generally, very similar.  Typically, when using PLINQ, we write declarative statements to filter a dataset or perform an aggregation.  However, PLINQ adds one new method, which provides a very different purpose: ForAll. The ForAll method is defined on ParallelEnumerable, and will work upon any ParallelQuery<T>.  Unlike the sequence operators in LINQ and PLINQ, ForAll is intended to cause side effects.  It does not filter a collection, but rather invokes an action on each element of the collection. At first glance, this seems like a bad idea.  For example, Eric Lippert clearly explained two philosophical objections to providing an IEnumerable<T>.ForEach extension method, one of which still applies when parallelized.  The sole purpose of this method is to cause side effects, and as such, I agree that the ForAll method “violates the functional programming principles that all the other sequence operators are based upon”, in exactly the same manner an IEnumerable<T>.ForEach extension method would violate these principles.  Eric Lippert’s second reason for disliking a ForEach extension method does not necessarily apply to ForAll – replacing ForAll with a call to Parallel.ForEach has the same closure semantics, so there is no loss there. Although ForAll may have philosophical issues, there is a pragmatic reason to include this method.  Without ForAll, we would take a fairly serious performance hit in many situations.  Often, we need to perform some filtering or grouping, then perform an action using the results of our filter.  Using a standard foreach statement to perform our action would avoid this philosophical issue:

// Filter our collection
var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() );

// Now perform an action
foreach (var item in filteredItems)
{
    // These will now run serially
    item.DoSomething();
}

This would cause a loss in performance, since we lose any parallelism in place, and cause all of our actions to be run serially.
We could easily use a Parallel.ForEach instead, which adds parallelism to the actions:

// Filter our collection
var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() );

// Now perform an action once the filter completes
Parallel.ForEach(filteredItems, item =>
{
    // These will now run in parallel
    item.DoSomething();
});

This is a noticeable improvement, since both our filtering and our actions run parallelized.  However, there is still a large bottleneck in place here.  The problem lies with my comment “perform an action once the filter completes”.  Here, we’re parallelizing the filter, then collecting all of the results, blocking until the filter completes.  Once the filtering of every element is completed, we then repartition the results of the filter, reschedule into multiple threads, and perform the action on each element.  By moving this into two separate statements, we potentially double our parallelization overhead, since we’re forcing the work to be partitioned and scheduled twice.

This is where the pragmatism comes into play.  By violating our functional principles, we gain the ability to avoid the overhead and cost of rescheduling the work:

// Perform an action on the results of our filter
collection
    .AsParallel()
    .Where( i => i.SomePredicate() )
    .ForAll( i => i.DoSomething() );

The ability to avoid the scheduling overhead is a compelling reason to use ForAll.  This really goes back to one of the key points I discussed in data parallelism: partition your problem in a way to place the most work possible into each task.  Here, this means leaving the statement attached to the expression, even though it causes side effects and is not standard usage for LINQ.

This leads to my one guideline for using ForAll: The ForAll extension method should only be used to process the results of a parallel query, as returned by a PLINQ expression. Any other usage scenario should use Parallel.ForEach, instead.
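To see the pattern end to end, here is a minimal, self-contained sketch of mine; the data set and predicate are made up for illustration:

using System;
using System.Linq;

public class Program
{
    public static void Main()
    {
        // Filter and act in a single parallel pipeline; ForAll avoids the
        // merge and repartition step a separate Parallel.ForEach would add.
        Enumerable.Range(1, 100)
            .AsParallel()
            .Where(n => n % 7 == 0)
            .ForAll(n => Console.WriteLine("Multiple of 7: {0}", n));
    }
}

Note that the output order is nondeterministic, which is acceptable here: ForAll is intended for side effects where ordering does not matter.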

    Read the article

  • Why Cornell University Chose Oracle Data Masking

    - by Troy Kitch
One of the eight Ivy League schools, Cornell University found itself in the unfortunate position of having to inform over 45,000 University community members that their personal information had been breached when a laptop was stolen. To ensure this wouldn’t happen again, Cornell took steps to ensure that data used for non-production purposes is de-identified with Oracle Data Masking. A recent podcast highlights why organizations like Cornell are choosing Oracle Data Masking to irreversibly de-identify production data for use in non-production environments.

Organizations often copy production data that contains sensitive information into non-production environments so they can test applications and systems using “real world” information. Data in non-production has increasingly become a target of cyber criminals and can be lost or stolen due to weak security controls and unmonitored access. Similar to production environments, data breaches in non-production environments can cost millions of dollars to remediate and cause irreparable harm to reputation and brand.

Cornell’s applications and databases help carry out the administrative and academic mission of the university. They are running Oracle PeopleSoft Campus Solutions that include highly sensitive faculty, student, alumni, and prospective student data. This data is supported and accessed by a diverse set of developers and functional staff distributed across the university. Several years ago, Cornell experienced a data breach when an employee’s laptop was stolen. Centrally stored backup information indicated there was sensitive data on the laptop. With no way of knowing what the criminal intended, the university had to spend significant resources reviewing data, setting up service centers to handle constituent concerns, and providing free credit checks and identity theft protection services, all of which cost money and took time away from other projects.

To avoid this issue in the future, Cornell came up with several options, one of which was to sanitize the testing and training environments. “The project management team was brought in and they developed a project plan and implementation schedule; part of which was to evaluate competing products in the market-space and figure out which one would work best for us.  In the end we chose Oracle’s solution based on its architecture and its functionality.” – Tony Damiani, Database Administration and Business Intelligence, Cornell University

The key goals of the project were to mask the elements that were identifiable as sensitive in a consistent and efficient manner, but still support all the previous activities in the non-production environments. Tony concludes,  “What we saw was a very minimal impact on performance. The masking process added an additional three hours to our refresh window, but it was well worth that time to secure the environment and remove the sensitive data. I think some other key points you can keep in mind here is that there was zero impact on the production environment. Oracle Data Masking works in non-production environments only. 
Additionally, the risk of exposure has been significantly reduced and the impact to business was minimal.”

With Oracle Data Masking, organizations like Cornell can:

  • Make application data securely available in non-production environments
  • Prevent application developers and testers from seeing production data
  • Use an extensible template library and policies for data masking automation
  • Gain the benefits of referential integrity so that applications continue to work

Listen to the podcast to hear the complete interview.  Learn more about Oracle Data Masking by registering to watch this SANS Institute Webcast and viewing this short demo.

    Read the article
