Search Results

Search found 16824 results on 673 pages for 'model binding'.


  • If I create a transient property in the model, isn't this managed by Core Data then?

    - by mystify
    Just to grok this: if I have a transient property, let's say averagePrice, and I mark it as "transient" in the data modeler, it will not be persisted and no column will be created for it in SQLite, correct? And if I make my own NSManagedObject subclass with an averagePrice property, does it make any sense to model that property in the xcdatamodel file? Would it make a difference if I simply created a property in my subclass and did not model it in the entity? (I think it doesn't matter at all ... but I'm not sure.)

    Read the article

  • JPA: what is the proper pattern for iterating over large result sets?

    - by Caffeine Coma
    Let's say I have a table with millions of rows. Using JPA, what's the proper way to iterate over a query against that table, such that I don't end up with an in-memory List of millions of objects? I suspect that the following will blow up if the table is large:

        List<Model> models = entityManager()
            .createQuery("from Model m", Model.class)
            .getResultList();
        for (Model model : models) {
            // do something with model
        }

    Is pagination (looping and manually updating setFirstResult()/setMaxResults()) really the best solution?

    Read the article

  • ASP.NET MVC Unit Testing Controllers - Repositories

    - by Brian McCord
    This is more of an opinion-seeking question, so there may not be a "right" answer, but I would welcome arguments as to why your answer is the "right" one. Given an MVC application that uses Entity Framework for the persistence engine, a repository layer, a service layer that basically defers to the repository, and a delete method on a controller that looks like this:

        public ActionResult Delete(State model)
        {
            try
            {
                if (model == null)
                {
                    return View(model);
                }
                _stateService.Delete(model);
                return RedirectToAction("Index");
            }
            catch
            {
                return View(model);
            }
        }

    I am looking for the proper way to unit test this. Currently I have a fake repository that gets used in the service, and my unit test looks like this:

        [TestMethod]
        public void Delete_Post_Passes_With_State_4()
        {
            // Arrange
            var stateService = GetService();
            var stateController = new StateController(stateService);
            ViewResult result = stateController.Delete(4) as ViewResult;
            var model = (State)result.ViewData.Model;

            // Act
            RedirectToRouteResult redirectResult = stateController.Delete(model) as RedirectToRouteResult;
            stateController = new StateController(stateService);
            var newresult = stateController.Delete(4) as ViewResult;
            var newmodel = (State)newresult.ViewData.Model;

            // Assert
            Assert.AreEqual(redirectResult.RouteValues["action"], "Index");
            Assert.IsNull(newmodel);
        }

    Is this overkill? Do I need to check whether the record actually got deleted (I already have service and repository tests that verify this)? Should I even use a fake repository here, or would it make more sense just to mock the whole thing? The examples I was looking at used this approach and I just copied it, but I'm really open to doing things in a "best practices" way. Thanks.
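    For comparison, here is what the "mock the whole thing" route might look like: a minimal sketch using Moq, assuming an IStateService interface with a Delete(State) method and a State class with an Id property (names inferred from the code above, not confirmed by it):

        using System.Web.Mvc;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using Moq;

        [TestMethod]
        public void Delete_Post_Redirects_To_Index_And_Calls_Service()
        {
            // Arrange: mock only the collaborator the action touches
            var stateService = new Mock<IStateService>();
            var controller = new StateController(stateService.Object);
            var model = new State { Id = 4 };

            // Act
            var result = controller.Delete(model) as RedirectToRouteResult;

            // Assert: the action redirected, and the service was asked to delete exactly once
            Assert.AreEqual("Index", result.RouteValues["action"]);
            stateService.Verify(s => s.Delete(model), Times.Once());
        }

    This keeps the test focused on controller behavior (redirect vs. view) and leaves "did the row really disappear" to the existing service and repository tests.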

    Read the article

  • What is the best way to maintain an entity's original properties when they are not included in MVC binding from the edit page?

    - by kingdango
    I have an ASP.NET MVC view for editing a model object. The edit page includes most of the properties of my object but not all of them -- specifically, it does not include the CreatedOn and CreatedBy fields, since those are set upon creation (in my service layer) and shouldn't change in the future. Unless I include these properties as hidden fields, they will not be picked up during binding and are unavailable when I save the modified object in my EF 4 DB context; in actuality, upon save the original values would be overwritten by nulls (or some type-specific default). I don't want to drop these in as hidden fields, because it is a waste of bytes and I don't want those values exposed to potential manipulation. Is there a "first class" way to handle this situation? Is it possible to specify that an EF model property is to be ignored unless explicitly set?
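    One common approach (a sketch of the usual pattern rather than an EF setting; the context field and entity names here are hypothetical): load the current entity, bind the posted values onto it, and explicitly exclude the audit columns so their stored values survive:

        // Minimal sketch inside a controller, assuming a _db context field
        // and a Widget entity with an Id key (hypothetical names).
        [HttpPost]
        public ActionResult Edit(int id, FormCollection form)
        {
            // Load the entity so CreatedOn/CreatedBy keep their stored values.
            var entity = _db.Widgets.Single(w => w.Id == id);

            // Bind the posted fields onto the tracked entity, excluding the audit columns.
            TryUpdateModel(entity, null, null, new[] { "CreatedOn", "CreatedBy" });

            _db.SaveChanges(); // only the properties that actually changed are written
            return RedirectToAction("Index");
        }

    Since nothing is round-tripped through the page, there are no extra bytes and nothing for a user to tamper with.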

    Read the article

  • Passing ViewModel for backbone.js from MVC3 Server-Side

    - by Roman
    In ASP.NET MVC there is Model, View and Controller. The MODEL represents the entities stored in the database and is essentially all the data used in an application (for example, generated by Entity Framework with the "DB First" approach). You don't want to show all the model's data in the view (for example, password hashes), so you create a VIEW MODEL, one for every strongly-typed Razor view you have in the application. Like this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;

        namespace MyProject.ViewModels.SomeController.SomeAction
        {
            public class ViewModel
            {
                public ViewModel()
                {
                    Entities1 = new List<ViewEntity1>();
                    Entities2 = new List<ViewEntity2>();
                }

                public List<ViewEntity1> Entities1 { get; set; }
                public List<ViewEntity2> Entities2 { get; set; }
            }

            public class ViewEntity1
            {
                // some properties from the original DB entity you want to show
            }

            public class ViewEntity2
            {
            }
        }

    When you build a complex client-side interface (I do), you use some pattern for the JavaScript on the client, MVC or MVVM (those are the only two I know). With MVC on the client you have yet another model (a Backbone.Model, for example), which is the third model in the application. That is a bit much. Why don't we use the same view model on the client (in backbone.js or another framework)? Is there a way to transfer a C#-coded model to a JS-coded one?

    With the MVVM pattern and knockout.js you can do this in SomeAction.cshtml:

        <div style="display: none;" id="view_model">@Json.Encode(Model)</div>

    and then in JavaScript:

        var ViewModel = ko.mapping.fromJSON($("#view_model").get(0).innerHTML);

    Now you can extend your ViewModel with actions, event handlers, etc.:

        ko.utils.extend(ViewModel, {
            some_function: function () {
                // some code
            }
        });

    So we are not building the same view model on the client again; we are transferring the existing view model - at least its data - from the server. But knockout.js is not suitable for me: you can't build a complex UI with it, it is just data binding. I need something more structural, like backbone.js. The only way I can see right now to build a ViewModel for backbone.js is to re-write the same view model in JS by hand. Is there any way to transfer it from the server? To reuse the same view model in the server view and the client view?
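    One server-side piece that may help (a sketch, not a complete answer; the action name is hypothetical): expose the same view model class as JSON from a controller action, so a Backbone model or collection can point its url at it and fetch() the server-built shape instead of redeclaring it by hand:

        // Minimal sketch inside the same controller: reuse the Razor view's
        // ViewModel class as a JSON endpoint for the client-side framework.
        public JsonResult SomeActionData()
        {
            var vm = new ViewModel(); // built exactly as it would be for the Razor view
            return Json(vm, JsonRequestBehavior.AllowGet);
        }

    The data transfers cleanly this way; the behavior (methods, event handlers) still has to be declared on the JS side, which no serializer can generate for you.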

    Read the article

  • ASP.NET MVC View with different objects

    - by user195910
    If I have a controller action "Create" that returns a view with the following as the model type:

        public class PaymentModel
        {
            public Model.SummaryInformation SummaryInformation;
            public Model.CardDetail CardDetail;
        }

    and there is a button on this view that POSTs to an action "New", I want that action to receive a different object, e.g.:

        public class PaymentNewModel
        {
            public Model.CardDetail CardDetail;
        }

    Is this possible? I do not want the model used when the view is rendered to be the same model that is POSTed.
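    It should be, since the default model binder matches posted input names to the target type's members rather than to the type the view was rendered with. A minimal sketch assuming the types above (with one caveat: the default binder only binds public properties, so the public fields shown would need { get; set; }):

        // Minimal sketch: the view was rendered with PaymentModel, but the POST binds to
        // PaymentNewModel as long as the input names (e.g. "CardDetail.Number") line up.
        [HttpPost]
        public ActionResult New(PaymentNewModel model)
        {
            // model.CardDetail is populated from the matching form fields
            return RedirectToAction("Index");
        }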

    Read the article

  • HTML code in anchor tag text in C# Razor

    - by Tripping
    I am trying to show the rupee sign ₹ (HTML entity &#8377;) in my anchor tag text. I have tried a few things but nothing works ... please help.

        @{
            string _text = String.Format("{0} {1} {2} {3} {4}",
                @Html.DisplayFor(model => model.NoOfBedrooms),
                "bedroom flat",
                @Html.DisplayFor(model => model.TransactionTypeDescription),
                @Html.Raw(&#8377;),
                @Html.DisplayFor(model => model.Price));

            string _rental = "";
            if (Model.TransactionType == 2)
            {
                _rental = " per month";
            }
            else
            {
                _rental = "";
            }

            string _linkText = _text + _rental;
        }
        @Html.ActionLink(_linkText, "Details", "Property", new { id = Model.PropertyId }, null)
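    A likely explanation and fix (an assumption, not confirmed in the thread): Html.ActionLink HTML-encodes its link text, so entity markup like "&#8377;" is displayed literally instead of as ₹. Characters above the Latin-1 range should pass through the encoder untouched, so putting the actual character in the string works:

        // Minimal sketch: "\u20B9" is the rupee sign itself, which survives
        // ActionLink's HTML encoding, unlike the "&#8377;" entity markup.
        string _text = String.Format("{0} bedroom flat {1} \u20B9 {2}",
            Model.NoOfBedrooms, Model.TransactionTypeDescription, Model.Price);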

    Read the article

  • Visual C++ Linker Error

    - by LordByron
    I have a class called MODEL, in which public static int theMaxFrames resides. The class is defined in its own header file. theMaxFrames is accessed by a class within the MODEL class and by one function, void set_up(), which is also in the MODEL class. The Render.cpp source file contains a function which calls a function in the Direct3D.cpp source file, which in turn calls the set_up() function through a MODEL object. This is the only connection between these two source files and theMaxFrames. When I try to compile my code I get the following error messages:

        1>Direct3D.obj : error LNK2001: unresolved external symbol "public: static int MODEL::theMaxFrames" (?theMaxFrames@MODEL@@2HA)
        1>Render.obj : error LNK2001: unresolved external symbol "public: static int MODEL::theMaxFrames" (?theMaxFrames@MODEL@@2HA)
        1>C:\Users\Byron\Documents\Visual Studio 2008\Projects\xFileViewer\Debug\xFileViewer.exe : fatal error LNK1120: 1 unresolved externals

    Read the article

  • Type casting Collections using Conversion Operators

    - by Vyas Bharghava
    The code below gives me "User-defined conversion must convert to or from the enclosing type", while snippet #2 doesn't. It seems that a user-defined conversion routine must convert to or from the class that contains the routine. What are my alternatives? An explicit operator as an extension method? Anything else?

        public static explicit operator ObservableCollection<ViewModel>(ObservableCollection<Model> modelCollection)
        {
            var viewModelCollection = new ObservableCollection<ViewModel>();
            foreach (var model in modelCollection)
            {
                viewModelCollection.Add(new ViewModel() { Model = model });
            }
            return viewModelCollection;
        }

    Snippet #2:

        public static explicit operator ViewModel(Model model)
        {
            return new ViewModel() { Model = model };
        }

    Thanks in advance!
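    For what it's worth, C# has no extension operators, so the usual workaround is a plainly named conversion extension method; a minimal sketch (the class name is hypothetical):

        using System.Collections.ObjectModel;
        using System.Linq;

        public static class ModelCollectionExtensions
        {
            // Named conversion in place of the disallowed explicit operator.
            public static ObservableCollection<ViewModel> ToViewModels(
                this ObservableCollection<Model> modelCollection)
            {
                return new ObservableCollection<ViewModel>(
                    modelCollection.Select(m => new ViewModel { Model = m }));
            }
        }

    Call sites then read viewModels = models.ToViewModels();, which is arguably clearer than a cast anyway.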

    Read the article

  • ASP.NET MVC 3: Strange Validation

    - by coure06
    I have applied DataAnnotation-based validation to two of my properties like this:

        [Required(ErrorMessage = "Title is required")]
        public string Title { get; set; }

        [Required(ErrorMessage = "Description is required")]
        public string Description { get; set; }

    Here is the view page's code:

        @Html.LabelFor(model => model.Obj.Title)
        @Html.EditorFor(model => model.Obj.Title)

        @Html.LabelFor(model => model.Obj.Description)
        @Html.TextAreaFor(model => model.Obj.Description)

    The problem is that when I click the submit button, on the client side (js) it only gives me an error for the Title and not for the Description. But it does give me a validation error for the Description after the postback. What are the possible causes?

    Read the article

  • Django - Problem with models/manager to organise a query...

    - by user296644
    Hi, I have an application that counts the number of accesses to an object, per website, in the same database.

        class SimpleHit(models.Model):
            """Hit is the hit counter of a given object"""
            content_type = models.ForeignKey(ContentType)
            object_id = models.PositiveIntegerField()
            content_object = generic.GenericForeignKey('content_type', 'object_id')
            site = models.ForeignKey(Site)
            hits_total = models.PositiveIntegerField(default=0, blank=True)
            [...]

        class SimpleHitManager(models.Manager):
            def get_query_set(self):
                print self.model._meta.fields
                qset = super(SimpleHitManager, self).get_query_set()
                qset = qset.filter(hits__site=settings.SITE_ID)
                return qset

        class SimpleHitBase(models.Model):
            hits = generic.GenericRelation(SimpleHit)
            objects = SimpleHitManager()
            _hits = None

            def _db_get_hits(self, only=None):
                if self._hits == None:
                    try:
                        self._hits = self.hits.get(site=settings.SITE_ID)
                    except SimpleHit.DoesNotExist:
                        self._hits = SimpleHit()
                return self._hits

            @property
            def hits_total(self):
                return self._db_get_hits().hits_total

            [...]

            class Meta:
                abstract = True

    And I have a model like:

        class Model(SimpleHitBase):
            name = models.CharField(max_length=255)
            url = models.CharField(max_length=255)
            rss = models.CharField(max_length=255)
            creation = AutoNowAddDateTimeField()
            update = AutoNowDateTimeField()

    So my problem is this: when I call Model.objects.all(), I would like one SQL request, not two - one for Model, to get its information, and one for the hits, to get the counter (hits_total). This is because I cannot call hits.hits_total directly (due to SITE_ID?). I have tried select_related, but it does not seem to work...

    Questions:
    - How can I automatically add a column to the queryset, like SELECT hits.hits_total, model.* FROM [...]?
    - Or how can I get a working select_related with my models?

    I want this to be pluggable on any existing model. Thank you, best regards.

    Read the article

  • FineUploader multiple instances and dynamic naming

    - by RichieMN
    I am using FineUploader 4.0.8 within an MVC4 project, using the jQuery wrapper. Here is an example of my js code that creates a single instance of fineUploader, and it works just fine. I now need more than one instance of fineUploader, but each individual control doesn't know anything about the others, and they're rendered as needed on a page (I've seen previous questions using a jQuery .each, which won't work here). Currently I can't seem to render any upload buttons correctly, probably due to a duplicated ID or something. See below for how I'm using MVC's Razor to create unique variables and html IDs for each individual instance. Here's my current implementation where I've added the dynamic values (places where you see _@Model.{PropertyName}):

        // Uploader control setup
        var [email protected] = $('#[email protected]').fineUploader({
            debug: true,
            template: '[email protected]',
            button: $("#[email protected]"),
            request: {
                endpoint: '@Url.Action("UploadFile", "Survey")',
                customHeaders: { Accept: 'application/json' },
                params: {
                    [email protected]: (function () { return instance; }),
                    [email protected]: (function () { return surveyItemResultId; }),
                    [email protected]: (function () { return itemId; }),
                    [email protected]: (function () { return loopingCounter++; })
                }
            },
            validation: {
                acceptFiles: ['image/*', 'application/xls', 'application/pdf', 'text/csv',
                    'application/vnd.openxmlformats-officedocument.spreadsheetml.template',
                    'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
                    'application/vnd.ms-excel'],
                allowedExtensions: ['jpeg', 'jpg', 'gif', 'png', 'bmp', 'csv', 'xls', 'xlsx', 'pdf', 'xlt', 'xltx', 'txt'],
                sizeLimit: 1024 * 1024 * 2.5, // 2.5MB
                stopOnFirstInvalidFile: false
            },
            failedUploadTextDisplay: { mode: 'custom' },
            multiple: true,
            text: { uploadButton: 'Select your upload file(s)' }
        }).on('submitted', function (event, id, filename) {
            $("#modal-overlay").fadeIn();
            $("#modal-box").fadeIn();
            [email protected]++;
            $(':input[type=button], :input[type=submit], :input[type=reset]').attr('disabled', 'disabled');
        }).on('complete', function (event, id, filename, responseJSON) {
            [email protected]++;
            if ([email protected] == [email protected]) {
                $(':input[type=button], :input[type=submit], :input[type=reset]').removeAttr('disabled');
                //$("#overlay").fadeOut();
                $("#modal-box").fadeOut();
                $("#modal-overlay").fadeOut();
            }
        }).on('error', function (event, id, name, errorReason, xhr) {
            //$("#overlay").fadeOut();
            alert('error: ' + errorReason);
            $("#modal-box").fadeOut();
            $("#modal-overlay").fadeOut();
        });

    Using the same principle as above, I've added this logic to the template too. Here's part of my template:

        <script type="text/template" id="[email protected]">
            <div class="qq-uploader-selector qq-uploader">
                <div class="qq-upload-drop-area-selector qq-upload-drop-area qq-hide-dropzone">
                    <span>Drop files here to upload</span>

    What I see when I use the above code is only the drag-and-drop section visible, and no button. There are no js errors either. I do have an example page with only one instance of this control on it, and the results are the same (visible drag-and-drop section and no button). Any thought as to what's going on? I'll gladly update the question if you find I'm missing something helpful. Please don't just -1 me if it's something I can easily improve or fix.

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.

    Part 2: SQL Server Memory Management

    As with any Windows process, sqlservr.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the working set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:

    1. Conventional Memory Model

    The conventional model is the default SQL Server memory model and has the following properties:
    - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions.
    - OS uses 4K pages - not to be confused with SQL Server "pages", which are 8K regions of committed memory.
    - Pageable - can be paged out to disk by the operating system.

    2. Locked Page Model

    The locked page memory model is set when SQL Server is started with the "Lock Pages in Memory" privilege*. It has the following characteristics:
    - Dynamic - can grow or shrink its working set in the same way as the conventional model.
    - OS uses 4K pages.
    - Non-pageable - when memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system.

    A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model. This is an important consideration when we look at Hyper-V Dynamic Memory - "locked" memory works perfectly well with "dynamic" memory.

    * Note: in "Denali" (Standard Edition and above) and in SQL 2008 R2 64-bit (Enterprise and above editions), the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-bit Standard Edition it also requires trace flag 845 to be set; in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.

    3. Large Page Model

    The large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by:
    - Static - memory is allocated at startup and does not change.
    - OS uses large (>2MB) pages.
    - Non-pageable.

    The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.

    When does SQL Server grow?

    For "dynamic" configurations (conventional and locked memory models), the sqlservr.exe process grows - allocates and commits memory from the OS - in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:

    - The SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. In SQL Server 2008 this value represents single page allocations; in "Denali" it represents any size page allocations and also managed CLR procedure allocations.
    - Memory signals from the OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory, a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free, the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.

    To summarize, for SQL Server to grow you need three conditions: a workload, a max server memory setting higher than the current allocation, and high memory signals from the OS.

    When does SQL Server shrink caches?

    SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into "internal" and "external".

    - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and will attempt to free memory until the signals stop. To free memory, SQL Server does the following:
      - Frees unused memory.
      - Notifies Memory Manager clients to release memory:
        - Caches - free unreferenced cache objects.
        - Buffer pool - based on oldest access times.
      The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.
    - Internal memory pressure occurs when the size of different caches and allocations increases but the SQL Server process needs to keep its total memory within a target value. For example, if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.

    Memory pressure handling has not changed much since SQL 2005; it was described in detail in a blog post by Slava Oks. Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system, but tries to be more "desktop" friendly. It will empty its working set after a period of inactivity.

    How does SQL Server respond to changing OS memory?

    SQL Server 2005 introduced support for hot-add memory. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory while the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine it looks like hot-added physical memory to the guest. This means that, thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more "physical" memory is granted to a guest VM by Hyper-V Dynamic Memory. SQL Server checks OS memory every second and dynamically adjusts its "target" (based on available OS memory and max server memory) accordingly. In "Denali", Standard Edition will also have sqlservr.exe support for hot-add memory when running virtualized (i.e. detecting and acting on Hyper-V Dynamic Memory allocations).

    How does a SQL Server workload in a guest VM impact Hyper-V Dynamic Memory scheduling?

    When a SQL workload causes the sqlservr.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.

    How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?

    If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above). This raises another question: can we make SQL Server release memory more readily, and hence behave more "dynamically", without compromising performance? In certain circumstances where the application workload is predictable, it may be possible to have a job that varies max server memory according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability, but it is something we're looking into.

    What memory management changes are there in SQL Server "Denali"?

    In SQL Server "Denali" (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it represents a closer approximation to total sqlservr.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings. Another important change is that there is no more AWE or hot-add support for the 32-bit edition. This means that if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).

    In part 3 we'll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/
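    The "job that varies max server memory" idea can be prototyped from any scheduled task; a minimal C# sketch (the connection string and target value are assumptions; 'max server memory' is an advanced option, hence the 'show advanced options' step):

        using System.Data.SqlClient;

        // Minimal sketch: raise or lower 'max server memory' ahead of a known activity window.
        // Lowering it generates internal memory pressure, which releases memory back to the OS.
        static void SetMaxServerMemory(int megabytes)
        {
            using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
            {
                conn.Open();
                var sql = "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;" +
                          "EXEC sp_configure 'max server memory', @mb; RECONFIGURE;";
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@mb", megabytes);
                    cmd.ExecuteNonQuery();
                }
            }
        }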

    Read the article

  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
    A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly - is the ability to map POST form variables to simple parameters of a Web API method. For example, imagine you have this form and you want to post this data to a Web API endpoint via AJAX:

        <form>
            Name: <input type="name" name="name" value="Rick" />
            Value: <input type="value" name="value" value="12" />
            Entered: <input type="entered" name="entered" value="12/01/2011" />
            <input type="button" id="btnSend" value="Send" />
        </form>
        <script type="text/javascript">
            $("#btnSend").click(function () {
                $.post("samples/PostMultipleSimpleValues?action=kazam",
                    $("form").serialize(),
                    function (result) {
                        alert(result);
                    });
            });
        </script>

    Or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:

        $.post("samples/PostMultipleSimpleValues?action=kazam",
            { name: "Rick", value: 1, entered: "12/01/2012" },
            function (result) {
                alert(result);
            });

    On the wire this generates a simple POST request with URL-encoded values in the content:

        POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1
        Host: localhost
        User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
        Accept: application/json
        Connection: keep-alive
        Content-Type: application/x-www-form-urlencoded; charset=UTF-8
        X-Requested-With: XMLHttpRequest
        Referer: http://localhost/AspNetWebApi/FormPostTest.html
        Content-Length: 41
        Pragma: no-cache
        Cache-Control: no-cache

        name=Rick&value=12&entered=12%2F10%2F2011

    Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately Web API can't handle this request out of the box. If I create a method like this:

        [HttpPost]
        public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null)
        {
            return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                                 name, value, entered, action);
        }

    you'll find that you get an HTTP 404 error and:

        { "Message": "No HTTP resource was found that matches the request URI…" }

    Yes, it's possible to pass multiple POST parameters of course, but Web API expects you to use model binding for this - mapping the POST parameters to a strongly typed .NET object, not to single parameters. Alternately, you can also accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values. If you're using JSON only, the dynamic JObject/JValue objects might also work. Model binding is fine in many use cases, but it can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks, the 'parameter mapping type' per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case and - maybe more so - a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters?" is already a frequent one, so this is something that I think is fairly important but unfortunately missing in the base Web API installation.
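    For completeness, the FormDataCollection alternative mentioned above looks roughly like this (a sketch; the method name is arbitrary):

        // Minimal sketch: accept all POSTed form variables as a name/value collection
        // instead of individual parameters (FormDataCollection lives in
        // System.Net.Http.Formatting).
        [HttpPost]
        public string PostViaFormData(FormDataCollection form)
        {
            string name = form.Get("name");
            int value = int.Parse(form.Get("value"));
            return string.Format("Name: {0}, Value: {1}", name, value);
        }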
    Creating a Custom Parameter Binder

    Luckily Web API is greatly extensible, and there's a way to create a custom parameter binding to provide this functionality! Although this solution took me a long while to find, and then only with the help of some folks at Microsoft (thanks Hong Mei!!!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:

        public class SimplePostVariableParameterBinding : HttpParameterBinding
        {
            private const string MultipleBodyParameters = "MultipleBodyParameters";

            public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor)
                : base(descriptor)
            {
            }

            /// <summary>
            /// Check for simple binding parameters in POST data. Bind POST
            /// data as well as query string data.
            /// </summary>
            public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
                                                     HttpActionContext actionContext,
                                                     CancellationToken cancellationToken)
            {
                // Body can only be read once, so read and cache it
                NameValueCollection col = TryReadBody(actionContext.Request);

                string stringValue = null;
                if (col != null)
                    stringValue = col[Descriptor.ParameterName];

                // try reading query string if we have no POST/PUT match
                if (stringValue == null)
                {
                    var query = actionContext.Request.GetQueryNameValuePairs();
                    if (query != null)
                    {
                        var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower());
                        if (matches.Count() > 0)
                            stringValue = matches.First().Value;
                    }
                }

                object value = StringToType(stringValue);

                // Set the binding result here
                SetValue(actionContext, value);

                // now, we can return a completed task with no result
                TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>();
                tcs.SetResult(default(AsyncVoid));
                return tcs.Task;
            }

            private object StringToType(string stringValue)
            {
                object value = null;

                if (stringValue == null)
                    value = null;
                else if (Descriptor.ParameterType == typeof(string))
                    value = stringValue;
                else if (Descriptor.ParameterType == typeof(int))
                    value = int.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(Int32))
                    value = Int32.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(Int64))
                    value = Int64.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(decimal))
                    value = decimal.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(double))
                    value = double.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(DateTime))
                    value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(bool))
                {
                    value = false;
                    if (stringValue == "true" || stringValue == "on" || stringValue == "1")
                        value = true;
                }
                else
                    value = stringValue;

                return value;
            }

            /// <summary>
            /// Read and cache the request body
            /// </summary>
            private NameValueCollection TryReadBody(HttpRequestMessage request)
            {
                object result = null;

                // try to read out of cache first
                if (!request.Properties.TryGetValue(MultipleBodyParameters, out result))
                {
                    // parsing the string like firstname=Hongmei&lastname=Ge
                    result = request.Content.ReadAsFormDataAsync().Result;
                    request.Properties.Add(MultipleBodyParameters, result);
                }

                return result as NameValueCollection;
            }

            private struct AsyncVoid
            {
            }
        }

    The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding fires only if the incoming parameter is a simple type (that gets defined later when I hook up the binding), so it never fires on complex types or if the first type is not a simple type. For the first parameter of a request, the binding reads the request body into a NameValueCollection and caches that in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and then caches it; subsequent parameters use the cached POST value collection. Once the form collection is available, the value of the parameter is read and translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped.

    Once you have the ParameterBinding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup in global.asax's Application_Start:

        GlobalConfiguration.Configuration.ParameterBindingRules
            .Insert(0, (HttpParameterDescriptor descriptor) =>
            {
                var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods;

                // Only apply this binder on POST and PUT operations
                if (supportedMethods.Contains(HttpMethod.Post) ||
                    supportedMethods.Contains(HttpMethod.Put))
                {
                    var supportedTypes = new Type[] { typeof(string), typeof(int),
                                                      typeof(decimal), typeof(double),
                                                      typeof(bool), typeof(DateTime) };

                    if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0)
                        return new SimplePostVariableParameterBinding(descriptor);
                }

                // let the default bindings do their work
                return null;
            });

    The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map, and the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hookup code are in place, you can pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings.

    Summary

    Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own, without much luck. It took Hong Mei at Microsoft to provide a base example as I asked around, so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible, making it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved - in the last two months I've had two customers with projects that decided not to use Web API in AJAX-heavy SPA applications because this POST variable mapping wasn't available. This might actually change their mind and get them to switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers, and while I could have worked around this (with the untyped JObject or the form collection, mostly), having proper POST-to-parameter mapping makes things much easier.
    I said this in my last post on POST data and I'll say it again here: I think POST-to-method-parameter mapping should have shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can simply be added - especially since this binding doesn't affect existing bindings.

    Resources
    - SimplePostVariableParameterBinding Source on GitHub
    - Global.asax hookup source
    - Mapping URL Encoded Post Values in ASP.NET Web API

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web API, AJAX

    Read the article

  • iPhone OpenGL ES using FBX - How do I import animations from FBX into iPhone?

    - by Dominic Tancredi
    I've been researching this extensively. We have a game that's 90% complete, using custom game logic on iPhone 4.0. We've been asked to import a 3D model and have it animate when various events happen in the game. I've put together an OpenGL view (based on EAGLView and several examples) and used Blender to import the model, as well as Jeff LaMarche's script to export the .h file. After much trial it worked, and I was able to show a rotating model (unskinned). However, the 3D artist hadn't UV-unwrapped the model, so he provided me a new model, this one as a Maya file, along with the animation in FBX format, a .obj file, and an unwrapped .tga texture. My question is: how can I use FBX with OpenGL ES on the iPhone to run through the animations? And what's the pipeline to get this Maya file into Blender so I can create a .h file? I've tried obj2opengl, however the model is missing normals (did it have them in the first place?) and the skin isn't applying at all (possibly a code issue, something I think I can fix). I'm trying to use Jeff LaMarche's animation tutorial but can't figure out how to get the model files into a proper .h file for use. Any advice?

    Read the article

  • SharePoint Apps and Windows Azure

    - by ScottGu
    Last Monday I had an opportunity to present part of the keynote at this year's SharePoint Conference. My segment of the keynote covered the new SharePoint Cloud App Model we are introducing as part of the upcoming SharePoint 2013 and Office 365 releases. This new app model for SharePoint is additive to the full-trust solutions developers write today, and is built around three core tenets:

    - Simplifying the development model and making it consistent between the on-premises version of SharePoint and SharePoint Online provided with Office 365.
    - Making the execution model loosely coupled - enabling developers to build apps and write code that can run outside of the core SharePoint service. This makes it easy to deploy SharePoint apps using Windows Azure, and avoids having to worry about breaking SharePoint and the apps within it when something is upgraded. This loosely coupled model also enables developers to write SharePoint applications that can leverage the full capabilities of the .NET Framework - including ASP.NET Web Forms 4.5, ASP.NET MVC 4, ASP.NET Web API, EF 5, Async, and more.
    - Implementing this loosely coupled model using standard web protocols - like OAuth, JSON, and REST APIs - that enable developers to re-use skills and tools, and easily integrate SharePoint with Web and Mobile application architectures.

    A video of my talk + demos is now available to watch online. In the talk I walked through building an app from scratch - it showed off how easy it is to build solutions using the new SharePoint app model, and highlighted a web + workflow + mobile scenario that integrates SharePoint with code hosted on Windows Azure (all built using Visual Studio 2012 and ASP.NET 4.5 - including MVC and Web API).

    The new SharePoint Cloud App Model is something that I think is pretty exciting, and it is going to make it a lot easier to build SharePoint apps using the full power of both Windows Azure and the .NET Framework. Using Windows Azure to easily extend SaaS-based solutions like Office 365 is also a really natural fit and one that is going to offer a bunch of great developer opportunities.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article
