Search Results

Search found 2704 results on 109 pages for 'strongly typed datasets'.

Page 24/109 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • Google App Engine - The most awaited feature

    - by systempuntoout
    This list is taken from the official Google App Engine roadmap:
      - SSL for third-party domains
      - Background servers capable of running for longer than 30s
      - Ability to reserve instances to reduce application loading overhead
      - Ability to select different availability vs. latency options for Datastore
      - Support for mapping operations across datasets
      - Datastore dump and restore facility
      - Raise request/response size limits for some APIs
      - Improved monitoring and alerting of application serving
      - Support for Browser Push (Comet) communication
      - Built-in support for OAuth & OpenID
    What is your most awaited feature, and why?

    Read the article

  • How big in bytes is a DataRow

    - by JeffreyABecker
    I have a one-time process (hashing all our user passwords) written using datasets. The performance needs improvement so we've profiled the application and found that increasing the 'batch size' of the update will improve performance. I also know that if I load the entire data set into memory the application will start hitting swap and slow down. The question is: how big is a System.Data.DataRow derived class? I'd like to calculate a batch size which I know won't force the application into swap.
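    One pragmatic way to size the batch, given that a DataRow's footprint depends entirely on the table's columns, is to measure rather than calculate. The sketch below is purely illustrative: the loader delegate stands in for however the rows are actually filled, and GC-based measurement is only approximate.

      using System;
      using System.Data;

      static class BatchSizeEstimator
      {
          // Roughly estimate the managed-memory cost of one DataRow by loading a
          // sample of rows and comparing heap size before and after.
          static long EstimateBytesPerRow(Func<DataTable> loadSample, int sampleRows)
          {
              long before = GC.GetTotalMemory(true);
              DataTable sample = loadSample();      // e.g. fill via a DataAdapter with a TOP 1000 query
              long after = GC.GetTotalMemory(true);
              GC.KeepAlive(sample);                 // keep the sample alive until measured
              return Math.Max(1, (after - before) / Math.Max(1, sampleRows));
          }

          // Pick a batch size that stays well inside a chosen memory budget,
          // leaving headroom so the process never touches swap.
          static int PickBatchSize(long bytesPerRow, long memoryBudgetBytes)
          {
              return (int)Math.Min(int.MaxValue, Math.Max(1, memoryBudgetBytes / bytesPerRow));
          }
      }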

    Read the article

  • MVC 2: Html.TextBoxFor, etc. in VB.NET 2010

    - by Brian
    Hello, I have this sample ASP.NET MVC 2.0 view in C#, bound to a strongly typed model that has a first name, last name, and email:

      <div>
        First: <%= Html.TextBoxFor(i => i.FirstName) %>
        <%= Html.ValidationMessageFor(i => i.FirstName, "*") %>
      </div>
      <div>
        Last: <%= Html.TextBoxFor(i => i.LastName) %>
        <%= Html.ValidationMessageFor(i => i.LastName, "*") %>
      </div>
      <div>
        Email: <%= Html.TextBoxFor(i => i.Email) %>
        <%= Html.ValidationMessageFor(i => i.Email, "*") %>
      </div>

    I converted it to VB.NET, using what I took to be the appropriate constructs in VB.NET 10, as:

      <div>
        First: <%= Html.TextBoxFor(Function(i) i.FirstName) %>
        <%= Html.ValidationMessageFor(Function(i) i.FirstName, "*") %>
      </div>
      <div>
        Last: <%= Html.TextBoxFor(Function(i) i.LastName) %>
        <%= Html.ValidationMessageFor(Function(i) i.LastName, "*") %>
      </div>
      <div>
        Email: <%= Html.TextBoxFor(Function(i) i.Email) %>
        <%= Html.ValidationMessageFor(Function(i) i.Email, "*") %>
      </div>

    No luck. Is this right, and if not, what syntax do I need to use? Again, I'm using ASP.NET MVC 2.0 and this is a view bound to a strongly typed model... does MVC 2 still not support the new language constructs in Visual Studio 2010? Thanks.

    Read the article

  • Mapping and metadata information could not be found for EntityType Exception

    - by dcompiled
    I am trying out ASP.NET MVC Framework 2 with the Microsoft Entity Framework, and when I try to save new records I get this error:

      Mapping and metadata information could not be found for EntityType 'WebUI.Controllers.PersonViewModel'

    My Entity Framework container stores records of type Person, and my view is strongly typed with the class PersonViewModel, which derives from Person. Records saved properly until I tried to use the derived view model class. Can anyone explain why the metadata class doesn't work when I derive my view model? I want to be able to use a strongly typed model and also use data annotations (metadata) without resorting to mixing my storage logic (EF classes) and presentation logic (views).

      // Rest of the Person class is autogenerated by the EF
      [MetadataType(typeof(Person.Metadata))]
      public partial class Person
      {
          public sealed class Metadata
          {
              [DisplayName("First Name")]
              [Required(ErrorMessage = "Field [First Name] is required")]
              public object FirstName { get; set; }

              [DisplayName("Middle Name")]
              public object MiddleName { get; set; }

              [DisplayName("Last Name")]
              [Required(ErrorMessage = "Field [Last Name] is required")]
              public object LastName { get; set; }
          }
      }

      // From the View (PersonCreate.aspx)
      <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
          Inherits="System.Web.Mvc.ViewPage<WebUI.Controllers.PersonViewModel>" %>

      // From PersonController.cs
      public class PersonViewModel : Person
      {
          public List<SelectListItem> TitleList { get; set; }
      } // end class PersonViewModel
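    The error suggests the ObjectContext is being handed the unmapped PersonViewModel type. One common workaround, shown here only as a sketch with assumed names (the MyEntities context class, the generated AddToPeople method and the BuildTitleList helper are illustrative), is to compose the view model around the entity instead of deriving from it, so EF only ever sees Person:

      using System.Collections.Generic;
      using System.Web.Mvc;

      // Hypothetical composition-based view model: EF only ever sees Person.
      public class PersonViewModel
      {
          public Person Person { get; set; }                     // the mapped EF entity
          public List<SelectListItem> TitleList { get; set; }    // presentation-only data
      }

      public class PersonController : Controller
      {
          private readonly MyEntities context = new MyEntities();   // assumed ObjectContext name

          [HttpPost]
          public ActionResult Create(PersonViewModel model)
          {
              if (ModelState.IsValid)
              {
                  context.AddToPeople(model.Person);   // generated AddTo* method name is assumed
                  context.SaveChanges();
                  return RedirectToAction("Index");
              }
              model.TitleList = BuildTitleList();      // hypothetical helper to repopulate the dropdown
              return View(model);
          }

          private List<SelectListItem> BuildTitleList()
          {
              return new List<SelectListItem>
              {
                  new SelectListItem { Text = "Mr.", Value = "Mr" },
                  new SelectListItem { Text = "Ms.", Value = "Ms" }
              };
          }
      }

    With this shape the data annotations stay on the EF partial class, while anything view-specific lives on the wrapper.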

    Read the article

  • How to handle payment types with varying properties in the most elegant way.

    - by Byron Sommardahl
    I'm using ASP.NET MVC 2. Keeping it simple, I have three payment types: credit card, e-check, or "bill me later". I want to:
      - choose one payment type
      - display some fields for that payment type in my view
      - run some logic using those fields (specific to the type)
      - display a confirmation view
      - run some more logic using those fields (specific to the type)
      - display a receipt view
    Each payment type has fields specific to the type... maybe 2 fields, maybe more. For now, I know how many and what fields, but more could be added. I believe the best thing for my views is to have a partial view per payment type to handle the different fields and let the controller decide which partial to render (if you have a better option, I'm open). My real problem comes from the logic that happens in the controller between views. Each payment type has a variable number of fields. I'd like to keep everything strongly typed, but it feels like some sort of dictionary is the only option. Add to that the specific logic that runs depending on the payment type. In an effort to keep things strongly typed, I've created a class for each payment type, with no interface or inherited type since the fields differ per payment type. I've then got a Submit() method for each payment type. While the controller is deciding which partial view to display, it also assigns the target of the submit action. This is not an elegant solution and it feels very wrong. I'm reaching out for a hand. How would you do this?
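    One possible shape for this, sketched below with hypothetical class and action names, is to give each payment type its own strongly typed model plus a small polymorphic surface (which partial to render, what to do between views), so MVC 2 model binding stays strongly typed per action while the shared steps work against the base class:

      using System.Web.Mvc;

      // Each payment type keeps its own strongly typed fields but shares a
      // minimal polymorphic surface the controller can rely on.
      public abstract class PaymentModel
      {
          public abstract string PartialViewName { get; }   // which partial renders the fields
          public abstract void Process();                    // type-specific logic between views
      }

      public class CreditCardPayment : PaymentModel
      {
          public string CardNumber { get; set; }
          public string Expiration { get; set; }
          public override string PartialViewName { get { return "CreditCardFields"; } }
          public override void Process() { /* charge the card */ }
      }

      public class ECheckPayment : PaymentModel
      {
          public string RoutingNumber { get; set; }
          public string AccountNumber { get; set; }
          public override string PartialViewName { get { return "ECheckFields"; } }
          public override void Process() { /* initiate the ACH transfer */ }
      }

      public class PaymentController : Controller
      {
          // One action per concrete type keeps binding strongly typed, while the
          // shared confirmation step works against the base class.
          [HttpPost]
          public ActionResult SubmitCreditCard(CreditCardPayment payment) { return Confirm(payment); }

          [HttpPost]
          public ActionResult SubmitECheck(ECheckPayment payment) { return Confirm(payment); }

          private ActionResult Confirm(PaymentModel payment)
          {
              payment.Process();
              return View("Confirmation", payment);
          }
      }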

    Read the article

  • ASP MVC2 model binding issue on POST

    - by Brandon Linton
    So I'm looking at moving from MVC 1.0 to MVC 2.0 RTM. One of the conventions I'd like to start following is using the strongly-typed HTML helpers for generating controls like text boxes. However, it looks like it won't be an easy jump. I tried migrating my first form, replacing lines like this:

      <%= Html.TextBox("FirstName", Model.Data.FirstName, new {maxlength = 30}) %>

    ...with lines like this:

      <%= Html.TextBoxFor(x => x.Data.FirstName, new {maxlength = 30}) %>

    Previously, this would map into its appropriate view model on a POST, using the following method signature:

      [AcceptVerbs(HttpVerbs.Post)]
      public ActionResult Registration(AccountViewInfo viewInfo)

    Instead, it currently gets an empty object back. I believe the disconnect is that we pass the view model into a larger aggregate object that carries some page metadata and other fun stuff along with it (hence x.Data.FirstName instead of x.FirstName). So my question is: what is the best way to use the strongly-typed helpers while still allowing the MVC framework to bind the form collection to my view model as it does in the original line? Is there any way to do it without changing the aggregate type we pass to the view? Thanks!
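    One approach that usually reconciles the two, offered as a sketch rather than a confirmed fix for this exact setup: keep the strongly typed helper, which now emits input names like "Data.FirstName", and tell the model binder to strip that prefix on the POST action:

      using System.Web.Mvc;

      public class RegistrationController : Controller
      {
          // Html.TextBoxFor(x => x.Data.FirstName) renders an input named "Data.FirstName".
          // Binding the posted values back onto AccountViewInfo then needs the prefix stripped:
          [HttpPost]
          public ActionResult Registration([Bind(Prefix = "Data")] AccountViewInfo viewInfo)
          {
              if (!ModelState.IsValid)
                  return View();          // assumes the GET view can simply be re-rendered

              // ... persist viewInfo ...
              return RedirectToAction("Success");
          }
      }

    This leaves the aggregate view type untouched; only the action's binding metadata changes.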

    Read the article

  • Looking for ideas for a simple pattern matching algorithm to run on a microcontroller

    - by pic_audio
    I'm working on a project to recognize simple audio patterns. I have two data sets, each made up of between 4 and 32 note/duration pairs. One set is predefined, the other comes from an incoming data stream. The length of the two strongly correlated data sets is often different, but they are roughly the same "shape". My goal is to come up with some sort of ranking of how well the two data sets correlate/match. I have converted the incoming frequencies to pitch and shifted the incoming data stream's pitch so that its average pitch matches that of the predefined data set. I also stretch/compress the incoming data set's durations to match the overall duration of the predefined set. Here are two graphical examples of data that should be ranked as strongly correlated: http://s2.postimage.org/FVeG0-ee3c23ecc094a55b15e538c3a0d83dd5.gif (Sorry, as a new user I couldn't directly post images.) I'm doing this on an 8-bit microcontroller, so resources are minimal. Speed is less of an issue; a second or two of processing isn't a deal breaker. It wouldn't surprise me if there is an obvious solution, I've just been staring at the problem too long. Any ideas? Thanks in advance...
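    As one cheap starting point (a sketch in C# for readability; on the 8-bit target this would be plain C with fixed-point arithmetic), resample both sequences onto a fixed time grid — the durations are already normalized — and score the summed pitch difference, lower meaning a closer match. Dynamic time warping would be the heavier-weight upgrade if timing jitter dominates.

      using System;

      static class PatternMatcher
      {
          // Crude similarity score: resample both note sequences onto a fixed number of
          // time slots (proportional to duration), then sum absolute pitch differences.
          static int MatchScore(int[] pitch, int[] duration, int[] refPitch, int[] refDuration)
          {
              const int slots = 32;
              int[] a = Resample(pitch, duration, slots);
              int[] b = Resample(refPitch, refDuration, slots);
              int score = 0;
              for (int i = 0; i < slots; i++)
                  score += Math.Abs(a[i] - b[i]);   // lower = better match
              return score;
          }

          // Spread each note across time slots in proportion to its duration.
          static int[] Resample(int[] pitch, int[] duration, int slots)
          {
              int total = 0;
              for (int i = 0; i < duration.Length; i++) total += duration[i];

              var result = new int[slots];
              int note = 0, elapsed = 0;
              for (int s = 0; s < slots; s++)
              {
                  int slotStart = (int)((long)s * total / slots);
                  while (note < duration.Length - 1 && elapsed + duration[note] <= slotStart)
                  {
                      elapsed += duration[note];
                      note++;
                  }
                  result[s] = pitch[note];
              }
              return result;
          }
      }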

    Read the article

  • Command-Line Parsing API from TestAPI library - Type-Safe Commands how to

    - by MicMit
    Library at http://testapi.codeplex.com/ Excerpt of usage from http://blogs.msdn.com/ivo_manolov/archive/2008/12/17/9230331.aspx

    A third common approach is forming strongly-typed commands from the command-line parameters. This is common for cases when the command-line looks as follows:

      some-exe COMMAND parameters-to-the-command

    The parsing in this case is a little bit more involved: Create one class for every supported command, which derives from the Command abstract base class and implements an expected Execute method. Pass an expected command along with the command-line arguments to CommandLineParser.ParseCommand – the method will return a strongly-typed Command instance that can be Execute()-d.

      // EXAMPLE #3:
      // Sample for parsing the following command-line:
      // Test.exe run /runId=10 /verbose
      // In this particular case we have an actual command on the command-line ("run"),
      // which we want to effectively de-serialize and execute.
      public class RunCommand : Command
      {
          bool? Verbose { get; set; }
          int? RunId { get; set; }

          public override void Execute()
          {
              // Implement your "run" execution logic here.
          }
      }

      Command c = new RunCommand();
      CommandLineParser.ParseArguments(c, args);
      c.Execute();

    ============================

    I don't get it: if we instantiate the specific class before parsing the arguments, what's the point of the command-line argument "run", which is the very first one? I thought the idea was to instantiate and execute a command/class based on a command-line parameter ("run" becomes an instance of the RunCommand class, "walk" becomes a WalkCommand class, and so on). Can it be done with the latest version?
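    One way to get the dispatch described above — shown only as a sketch where Command, RunCommand, WalkCommand and CommandLineParser come from the question and the quoted TestAPI example, and everything else is hypothetical glue — is to look the first token up in a small name-to-command map before handing the remaining switches to the parser:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      static class CommandDispatcher
      {
          // Hypothetical registry mapping the first command-line token to a command factory.
          static readonly Dictionary<string, Func<Command>> Commands =
              new Dictionary<string, Func<Command>>(StringComparer.OrdinalIgnoreCase)
              {
                  { "run",  () => new RunCommand()  },
                  { "walk", () => new WalkCommand() },
              };

          static void Main(string[] args)
          {
              Func<Command> factory;
              if (args.Length == 0 || !Commands.TryGetValue(args[0], out factory))
              {
                  Console.Error.WriteLine("Unknown command. Expected one of: " +
                                          string.Join(", ", Commands.Keys.ToArray()));
                  return;
              }

              Command c = factory();
              // Hand only the switches (everything after the command name) to the parser,
              // exactly as in the quoted TestAPI example.
              CommandLineParser.ParseArguments(c, args.Skip(1).ToArray());
              c.Execute();
          }
      }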

    Read the article

  • Looking for a .Net ORM

    - by SLaks
    I'm looking for a .Net 3.5 ORM framework with a rather unusual set of requirements:
      - I need to create and alter tables at runtime, with schemas defined by my end-users. (Obviously, that wouldn't be strongly-typed; I'm looking for something like a DataTable there.)
      - I also want regular strongly-typed partial classes for rows in non-dynamic tables, with custom validation and other logic. (Like normal ORMs.)
      - I want to load the entire database (or some entire tables) once, and keep it in memory throughout the life of the (WinForms) GUI. (I have a shared SQL Server with a relatively slow connection.)
      - I also want regular LINQ support (like LINQ-to-SQL) for ASP.Net on the shared server (which has a fast connection to SQL Server).
      - In addition to SQL Server, I also want to be able to use a single-file database that would support XCopy deployment, without installing SQL CE on the end-user's machine. (Probably Access or SQLite.)
      - Finally, it has to be free (unless it's OpenAccess).
    I'll probably have to write it myself, as I don't think there is an existing ORM that meets these requirements. However, I don't want to re-invent the wheel if there is one, hence this question. I'm using VS2010, but I don't know when my webhost (LFC) will upgrade to .Net 4.0.

    Read the article

  • Using T4MVC in real project

    - by artvolk
    T4MVC is cool, but I have a couple of issues integrating it in my project; any help is really appreciated. I get warnings like this for all my actions (I use SnippetsBaseController as the base class for all my controller classes):

      Warning 26 'Snippets.Controllers.ErrorController.Actions' hides inherited member 'Snippets.Controllers.Base.SnippetsBaseController.Actions'. Use the new keyword if hiding was intended. C:\projects_crisp-source_crisp\crisp-snippets\Snippets\T4MVC.cs 481 32 Snippets

    Is it possible to have strongly typed names for custom routes? For example, I have a route defined like this:

      routes.MapRoute(
          "Feed",
          "feed/",
          MVC.Snippets.Rss(),
          new { controller = "Snippets", action = "Rss" }
      );

    Is it possible to replace:

      <%= Url.RouteUrl("Feed") %>

    with something like:

      <%= Url.RouteUrl(MVC.Routes.Feed) %>

    Having strongly typed links to static files is really cool, but I use <base /> in my pages, so I don't need any URL processing. Can I redefine T4MVCHelpers.ProcessVirtualPath without tweaking the T4MVC.tt itself? Also, T4MVC always generates links with uppercased controller and action names, for example /Snippets/Add instead of /snippets/add. Is it possible to generate them lowercase?

    Read the article

  • Does Java 6 open a default port for JMX remote connections?

    - by Bob Cross
    My specific question has to do with JMX as used in JDK 1.6: if I am running a Java process using JRE 1.6 with com.sun.management.jmxremote on the command line, does Java pick a default port for remote JMX connections? Backstory: I am currently trying to develop a procedure to give to a customer that will enable them to connect to one of our processes via JMX from a remote machine. The goal is to facilitate their remote debugging of a situation occurring on a real-time display console. Because of their service level agreement, they are strongly motivated to capture as much data as possible and, if the situation looks too complicated to fix quickly, to restart the display console and allow it to reconnect to the server side. I am aware that I could run jconsole on JDK 1.6 processes and jvisualvm on post-JDK 1.6.7 processes given physical access to the console. However, because of the operational requirements and people problems involved, we are strongly motivated to grab the data that we need remotely and get them up and running again.

    EDIT: I am aware of the command-line port property:

      com.sun.management.jmxremote.port=portNum

    The question that I am trying to answer is: if you do not set that property at the command line, does Java pick another port for remote monitoring? If so, how could you determine what it might be?

    Read the article

  • Model binding & derived model classes

    - by Richard Ev
    Does ASP.NET MVC offer any simple way to get model binding to work when you have model classes that inherit from others? In my scenario I have a View that is strongly typed to List<Person>. I have a couple of classes that inherit from Person, namely PersonTypeOne and PersonTypeTwo. I have three strongly typed partial views with names that match these class names (and render form elements for the properties of their respective models). This means that in my main View I can have the following code:

      <% for(int i = 0; i < Model.Count; i++) {
             Html.RenderPartial(Model[i].GetType().Name, Model[i]);
         } %>

    This works well, apart from the fact that when the user submits the form, the relevant controller action method just gets a List<Person>, rather than a list of Person, PersonTypeOne and PersonTypeTwo instances. This is pretty much as expected, as the form submission doesn't contain enough information to tell the default model binder to create any instances of the PersonTypeOne and PersonTypeTwo classes. So, is there any way to get such functionality from the default model binder?
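    The default binder won't infer the concrete type on its own, but a common workaround is a custom binder keyed off a discriminator the form posts back — here an assumed hidden "TypeName" field rendered by each partial. A sketch against the MVC 2 API:

      using System;
      using System.Web.Mvc;

      // Reads a per-item type discriminator posted by the form (e.g. a hidden input
      // named "[0].TypeName") and creates that concrete Person subclass.
      public class PersonModelBinder : DefaultModelBinder
      {
          protected override object CreateModel(ControllerContext controllerContext,
                                                ModelBindingContext bindingContext,
                                                Type modelType)
          {
              ValueProviderResult discriminator = bindingContext.ValueProvider
                  .GetValue(bindingContext.ModelName + ".TypeName");

              if (modelType == typeof(Person) && discriminator != null)
              {
                  // Assumes the subclasses live in the same namespace and assembly as Person.
                  Type concrete = typeof(Person).Assembly
                      .GetType(typeof(Person).Namespace + "." + discriminator.AttemptedValue);
                  if (concrete != null && typeof(Person).IsAssignableFrom(concrete))
                  {
                      bindingContext.ModelMetadata =
                          ModelMetadataProviders.Current.GetMetadataForType(null, concrete);
                      return Activator.CreateInstance(concrete);
                  }
              }
              return base.CreateModel(controllerContext, bindingContext, modelType);
          }
      }

      // Registration, e.g. in Global.asax Application_Start:
      //   ModelBinders.Binders.Add(typeof(Person), new PersonModelBinder());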

    Read the article

  • Binding, Prefixes and generated HTML

    - by Vman
    MVC newbie question re binders. Suppose I have two strongly typed partial views whose models happen to have a property with the same name, and which are rendered in the same containing page, i.e.:

      class Friend { string Name { get; set; } DateTime DOB { get; set; } }
      class Foe    { string Name { get; set; } string ReasonForDislike { get; set; } }

    Both partials will have a line:

      <%= Html.TextBoxFor(model => model.Name) %>

    And associated controller actions:

      public ActionResult SaveFriend(Friend friend)
      public ActionResult SaveFoe(Foe foe)

    My problem is that both will render on my containing page with the same id (of course, bad for lots of reasons). I'm aware of the [Bind] attribute that allows me to add a prefix, resulting in code like:

      public ActionResult SaveFriend([Bind(Prefix = "friend")] Friend friend)

      <%= Html.TextBox("friend.Name", Model.Name) %> // Boo, no TextBoxFor :(

    But this still doesn't cut it. I can just about tolerate the loss of the strongly typed TextBoxFor helpers, but I've yet to get client-side validation to work with prefixes. I've tried:

      <%= Html.ValidationMessage("friend.Name") %>

    ...and every other variant I can think of. I seem to need the model to be aware of the prefix in both directions, but Bind only applies when mapping the inbound request. It seems (to me) a common scenario, but I'm struggling to find examples out there. What am I missing? Thanks in advance.
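    A common alternative that sidesteps hand-written prefixes altogether — sketched here, and note it replaces the two separate save actions with a single one — is to wrap both models in a composite view model so the strongly typed helpers and validation messages generate the "Friend." and "Foe." prefixes consistently in both directions:

      using System.Web.Mvc;

      // Composite view model for the containing page; the strongly typed page
      // would inherit ViewPage<FriendFoeViewModel>.
      public class FriendFoeViewModel
      {
          public Friend Friend { get; set; }
          public Foe Foe { get; set; }
      }

      public class RelationshipsController : Controller
      {
          // In the view, <%= Html.TextBoxFor(m => m.Friend.Name) %> and
          // <%= Html.ValidationMessageFor(m => m.Friend.Name) %> now emit the
          // "Friend." prefix automatically, so ids no longer collide with Foe's.
          [HttpPost]
          public ActionResult Save(FriendFoeViewModel model)
          {
              if (!ModelState.IsValid)
                  return View(model);

              // ... persist model.Friend and model.Foe ...
              return RedirectToAction("Index");
          }
      }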

    Read the article

  • Question about WeakHashMap

    - by michael
    Hi, the Javadoc at http://java.sun.com/j2se/1.4.2/docs/api/java/util/WeakHashMap.html says: "Each key object in a WeakHashMap is stored indirectly as the referent of a weak reference. Therefore a key will automatically be removed only after the weak references to it, both inside and outside of the map, have been cleared by the garbage collector." And then: "Note that a value object may refer indirectly to its key via the WeakHashMap itself; that is, a value object may strongly refer to some other key object whose associated value object, in turn, strongly refers to the key of the first value object." But shouldn't both the key and the value be held through weak references in a WeakHashMap? That is, if memory is low, the GC would free the memory held by the value object (since the value object most likely takes up more memory than the key object in most cases), and once the GC frees the value object, the key object could be freed as well. Basically, I am looking for a HashMap which will reduce memory usage when memory is low (the GC collects the value and key objects if necessary). Is that possible in Java? Thank you.

    Read the article

  • Use LINQ to Sort and Filter items in a List<ReturnItem> collection, based on the values within a List<object> property

    - by Daniel McPherson
    This is tricky to explain. We have a DataTable that contains a user-configurable selection of columns, which are not known at compile time. Every column in the DataTable is of type String. We need to convert this DataTable into a strongly typed collection of "ReturnItem" objects so that we can then sort and filter using LINQ for use in our application. We have made some progress as follows: We started with the basic DataTable. We then process the DataTable, creating a new "ReturnItem" object for each row. This "ReturnItem" object has just two properties: ID (string) and Columns (List(object)). The Columns collection contains one entry for each column, representing a single DataRow. Each entry is strongly typed (int, string, DateTime, etc.). For example, it would add a new DateTime object to the "ReturnItem" Columns list containing the value of the "Created" DataTable column. The result is a List(ReturnItem) that we would then like to be able to sort and filter using LINQ based on the value in one of the properties, for example, sorting on the "Created" date. We have been using the LINQ Dynamic Query Library, which gets us so far, but it doesn't look like the way forward because we are using it over a list collection of objects. Basically, my question boils down to: how can I use LINQ to sort and filter items in a List(ReturnItem) collection, based on the values within a List(object) property which is part of the ReturnItem class?
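    Assuming the column layout captured during the DataTable conversion gives you the index of the column you care about (the createdIndex parameter below is that assumption), plain LINQ with a cast at the point of comparison covers both the sort and the filter cases. A sketch:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      class ReturnItem
      {
          public string ID { get; set; }
          public List<object> Columns { get; set; }
      }

      static class ReturnItemQueries
      {
          // Sort and filter a List<ReturnItem> by one of its dynamically defined columns.
          // 'createdIndex' would come from the column layout known at conversion time.
          public static List<ReturnItem> CreatedAfter(List<ReturnItem> items,
                                                      int createdIndex, DateTime cutoff)
          {
              return items
                  .Where(r => (DateTime)r.Columns[createdIndex] > cutoff)     // filter
                  .OrderByDescending(r => (DateTime)r.Columns[createdIndex])  // sort
                  .ToList();
          }
      }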

    Read the article

  • New features of C# 4.0

    This article covers New features of C# 4.0. Article has been divided into below sections. Introduction. Dynamic Lookup. Named and Optional Arguments. Features for COM interop. Variance. Relationship with Visual Basic. Resources. Other interested readings… 22 New Features of Visual Studio 2008 for .NET Professionals 50 New Features of SQL Server 2008 IIS 7.0 New features Introduction It is now close to a year since Microsoft Visual C# 3.0 shipped as part of Visual Studio 2008. In the VS Managed Languages team we are hard at work on creating the next version of the language (with the unsurprising working title of C# 4.0), and this document is a first public description of the planned language features as we currently see them. Please be advised that all this is in early stages of production and is subject to change. Part of the reason for sharing our plans in public so early is precisely to get the kind of feedback that will cause us to improve the final product before it rolls out. Simultaneously with the publication of this whitepaper, a first public CTP (community technology preview) of Visual Studio 2010 is going out as a Virtual PC image for everyone to try. Please use it to play and experiment with the features, and let us know of any thoughts you have. We ask for your understanding and patience working with very early bits, where especially new or newly implemented features do not have the quality or stability of a final product. The aim of the CTP is not to give you a productive work environment but to give you the best possible impression of what we are working on for the next release. The CTP contains a number of walkthroughs, some of which highlight the new language features of C# 4.0. Those are excellent for getting a hands-on guided tour through the details of some common scenarios for the features. You may consider this whitepaper a companion document to these walkthroughs, complementing them with a focus on the overall language features and how they work, as opposed to the specifics of the concrete scenarios. C# 4.0 The major theme for C# 4.0 is dynamic programming. Increasingly, objects are “dynamic” in the sense that their structure and behavior is not captured by a static type, or at least not one that the compiler knows about when compiling your program. Some examples include a. objects from dynamic programming languages, such as Python or Ruby b. COM objects accessed through IDispatch c. ordinary .NET types accessed through reflection d. objects with changing structure, such as HTML DOM objects While C# remains a statically typed language, we aim to vastly improve the interaction with such objects. A secondary theme is co-evolution with Visual Basic. Going forward we will aim to maintain the individual character of each language, but at the same time important new features should be introduced in both languages at the same time. They should be differentiated more by style and feel than by feature set. The new features in C# 4.0 fall into four groups: Dynamic lookup Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations which bypass the C# static type checking and instead gets resolved at runtime. Named and optional parameters Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position. 
COM specific interop features Dynamic lookup as well as named and optional parameters both help making programming against COM less painful than today. On top of that, however, we are adding a number of other small features that further improve the interop experience. Variance It used to be that an IEnumerable<string> wasn’t an IEnumerable<object>. Now it is – C# embraces type safe “co-and contravariance” and common BCL types are updated to take advantage of that. Dynamic Lookup Dynamic lookup allows you a unified approach to invoking things dynamically. With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection; you just apply operations to it and leave it to the runtime to figure out what exactly those operations mean for that particular object. This affords you enormous flexibility, and can greatly simplify your code, but it does come with a significant drawback: Static typing is not maintained for these operations. A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn’t so. Oftentimes this will be no loss, because the object wouldn’t have a static type anyway, in other cases it is a tradeoff between brevity and safety. In order to facilitate this tradeoff, it is a design goal of C# to allow you to opt in or opt out of dynamic behavior on every single call. The dynamic type C# 4.0 introduces a new static type called dynamic. When you have an object of type dynamic you can “do things to it” that are resolved only at runtime: dynamic d = GetDynamicObject(…); d.M(7); The C# compiler allows you to call a method with any name and any arguments on d because it is of type dynamic. At runtime the actual object that d refers to will be examined to determine what it means to “call M with an int” on it. The type dynamic can be thought of as a special version of the type object, which signals that the object can be used dynamically. It is easy to opt in or out of dynamic behavior: any object can be implicitly converted to dynamic, “suspending belief” until runtime. Conversely, there is an “assignment conversion” from dynamic to any other type, which allows implicit conversion in assignment-like constructs: dynamic d = 7; // implicit conversion int i = d; // assignment conversion Dynamic operations Not only method calls, but also field and property accesses, indexer and operator calls and even delegate invocations can be dispatched dynamically: dynamic d = GetDynamicObject(…); d.M(7); // calling methods d.f = d.P; // getting and settings fields and properties d[“one”] = d[“two”]; // getting and setting thorugh indexers int i = d + 3; // calling operators string s = d(5,7); // invoking as a delegate The role of the C# compiler here is simply to package up the necessary information about “what is being done to d”, so that the runtime can pick it up and determine what the exact meaning of it is given an actual object d. Think of it as deferring part of the compiler’s job to runtime. The result of any dynamic operation is itself of type dynamic. Runtime lookup At runtime a dynamic operation is dispatched according to the nature of its target object d: COM objects If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling to COM types that don’t have a Primary Interop Assembly (PIA), and relying on COM features that don’t have a counterpart in C#, such as indexed properties and default properties. 
Dynamic objects If d implements the interface IDynamicObject d itself is asked to perform the operation. Thus by implementing IDynamicObject a type can completely redefine the meaning of dynamic operations. This is used intensively by dynamic languages such as IronPython and IronRuby to implement their own dynamic object models. It will also be used by APIs, e.g. by the HTML DOM to allow direct access to the object’s properties using property syntax. Plain objects Otherwise d is a standard .NET object, and the operation will be dispatched using reflection on its type and a C# “runtime binder” which implements C#’s lookup and overload resolution semantics at runtime. This is essentially a part of the C# compiler running as a runtime component to “finish the work” on dynamic operations that was deferred by the static compiler. Example Assume the following code: dynamic d1 = new Foo(); dynamic d2 = new Bar(); string s; d1.M(s, d2, 3, null); Because the receiver of the call to M is dynamic, the C# compiler does not try to resolve the meaning of the call. Instead it stashes away information for the runtime about the call. This information (often referred to as the “payload”) is essentially equivalent to: “Perform an instance method call of M with the following arguments: 1. a string 2. a dynamic 3. a literal int 3 4. a literal object null” At runtime, assume that the actual type Foo of d1 is not a COM type and does not implement IDynamicObject. In this case the C# runtime binder picks up to finish the overload resolution job based on runtime type information, proceeding as follows: 1. Reflection is used to obtain the actual runtime types of the two objects, d1 and d2, that did not have a static type (or rather had the static type dynamic). The result is Foo for d1 and Bar for d2. 2. Method lookup and overload resolution is performed on the type Foo with the call M(string,Bar,3,null) using ordinary C# semantics. 3. If the method is found it is invoked; otherwise a runtime exception is thrown. Overload resolution with dynamic arguments Even if the receiver of a method call is of a static type, overload resolution can still happen at runtime. This can happen if one or more of the arguments have the type dynamic: Foo foo = new Foo(); dynamic d = new Bar(); var result = foo.M(d); The C# runtime binder will choose between the statically known overloads of M on Foo, based on the runtime type of d, namely Bar. The result is again of type dynamic. The Dynamic Language Runtime An important component in the underlying implementation of dynamic lookup is the Dynamic Language Runtime (DLR), which is a new API in .NET 4.0. The DLR provides most of the infrastructure behind not only C# dynamic lookup but also the implementation of several dynamic programming languages on .NET, such as IronPython and IronRuby. Through this common infrastructure a high degree of interoperability is ensured, but just as importantly the DLR provides excellent caching mechanisms which serve to greatly enhance the efficiency of runtime dispatch. To the user of dynamic lookup in C#, the DLR is invisible except for the improved efficiency. However, if you want to implement your own dynamically dispatched objects, the IDynamicObject interface allows you to interoperate with the DLR and plug in your own behavior. This is a rather advanced task, which requires you to understand a good deal more about the inner workings of the DLR. 
For API writers, however, it can definitely be worth the trouble in order to vastly improve the usability of e.g. a library representing an inherently dynamic domain. Open issues There are a few limitations and things that might work differently than you would expect. · The DLR allows objects to be created from objects that represent classes. However, the current implementation of C# doesn’t have syntax to support this. · Dynamic lookup will not be able to find extension methods. Whether extension methods apply or not depends on the static context of the call (i.e. which using clauses occur), and this context information is not currently kept as part of the payload. · Anonymous functions (i.e. lambda expressions) cannot appear as arguments to a dynamic method call. The compiler cannot bind (i.e. “understand”) an anonymous function without knowing what type it is converted to. One consequence of these limitations is that you cannot easily use LINQ queries over dynamic objects: dynamic collection = …; var result = collection.Select(e => e + 5); If the Select method is an extension method, dynamic lookup will not find it. Even if it is an instance method, the above does not compile, because a lambda expression cannot be passed as an argument to a dynamic operation. There are no plans to address these limitations in C# 4.0. Named and Optional Arguments Named and optional parameters are really two distinct features, but are often useful together. Optional parameters allow you to omit arguments to member invocations, whereas named arguments is a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list. Some APIs, most notably COM interfaces such as the Office automation APIs, are written specifically with named and optional parameters in mind. Up until now it has been very painful to call into these APIs from C#, with sometimes as many as thirty arguments having to be explicitly passed, most of which have reasonable default values and could be omitted. Even in APIs for .NET however you sometimes find yourself compelled to write many overloads of a method with different combinations of parameters, in order to provide maximum usability to the callers. Optional parameters are a useful alternative for these situations. Optional parameters A parameter is declared optional simply by providing a default value for it: public void M(int x, int y = 5, int z = 7); Here y and z are optional parameters and can be omitted in calls: M(1, 2, 3); // ordinary call of M M(1, 2); // omitting z – equivalent to M(1, 2, 7) M(1); // omitting both y and z – equivalent to M(1, 5, 7) Named and optional arguments C# 4.0 does not permit you to omit arguments between commas as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead any argument can be passed by name. Thus if you want to omit only y from a call of M you can write: M(1, z: 3); // passing z by name or M(x: 1, z: 3); // passing both x and z by name or even M(z: 3, x: 1); // reversing the order of arguments All forms are equivalent, except that arguments are always evaluated in the order they appear, so in the last example the 3 is evaluated before the 1. Optional and named arguments can be used not only with methods but also with indexers and constructors. 
Overload resolution Named and optional arguments affect overload resolution, but the changes are relatively simple: A signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type. Betterness rules on conversions are only applied for arguments that are explicitly given – omitted optional arguments are ignored for betterness purposes. If two signatures are equally good, one that does not omit optional parameters is preferred. M(string s, int i = 1); M(object o); M(int i, string s = “Hello”); M(int i); M(5); Given these overloads, we can see the working of the rules above. M(string,int) is not applicable because 5 doesn’t convert to string. M(int,string) is applicable because its second parameter is optional, and so, obviously are M(object) and M(int). M(int,string) and M(int) are both better than M(object) because the conversion from 5 to int is better than the conversion from 5 to object. Finally M(int) is better than M(int,string) because no optional arguments are omitted. Thus the method that gets called is M(int). Features for COM interop Dynamic lookup as well as named and optional parameters greatly improve the experience of interoperating with COM APIs such as the Office Automation APIs. In order to remove even more of the speed bumps, a couple of small COM-specific features are also added to C# 4.0. Dynamic import Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from context, but explicitly has to perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance. In order to facilitate a smoother experience, you can now choose to import these COM APIs in such a way that variants are instead represented using the type dynamic. In other words, from your point of view, COM signatures now have occurrences of dynamic instead of object in them. This means that you can easily access members directly off a returned object, or you can assign it to a strongly typed local variable without having to cast. To illustrate, you can now say excel.Cells[1, 1].Value = "Hello"; instead of ((Excel.Range)excel.Cells[1, 1]).Value2 = "Hello"; and Excel.Range range = excel.Cells[1, 1]; instead of Excel.Range range = (Excel.Range)excel.Cells[1, 1]; Compiling without PIAs Primary Interop Assemblies are large .NET assemblies generated from COM interfaces to facilitate strongly typed interoperability. They provide great support at design time, where your experience of the interop is as good as if the types where really defined in .NET. However, at runtime these large assemblies can easily bloat your program, and also cause versioning issues because they are distributed independently of your application. The no-PIA feature allows you to continue to use PIAs at design time without having them around at runtime. Instead, the C# compiler will bake the small part of the PIA that a program actually uses directly into its assembly. At runtime the PIA does not have to be loaded. Omitting ref Because of a different programming model, many COM APIs contain a lot of reference parameters. Contrary to refs in C#, these are typically not meant to mutate a passed-in argument for the subsequent benefit of the caller, but are simply another way of passing value parameters. 
It therefore seems unreasonable that a C# programmer should have to create temporary variables for all such ref parameters and pass these by reference. Instead, specifically for COM methods, the C# compiler will allow you to pass arguments by value to such a method, and will automatically generate temporary variables to hold the passed-in values, subsequently discarding these when the call returns. In this way the caller sees value semantics, and will not experience any side effects, but the called method still gets a reference. Open issues A few COM interface features still are not surfaced in C#. Most notably these include indexed properties and default properties. As mentioned above these will be respected if you access COM dynamically, but statically typed C# code will still not recognize them. There are currently no plans to address these remaining speed bumps in C# 4.0. Variance An aspect of generics that often comes across as surprising is that the following is illegal: IList<string> strings = new List<string>(); IList<object> objects = strings; The second assignment is disallowed because strings does not have the same element type as objects. There is a perfectly good reason for this. If it were allowed you could write: objects[0] = 5; string s = strings[0]; Allowing an int to be inserted into a list of strings and subsequently extracted as a string. This would be a breach of type safety. However, there are certain interfaces where the above cannot occur, notably where there is no way to insert an object into the collection. Such an interface is IEnumerable<T>. If instead you say: IEnumerable<object> objects = strings; There is no way we can put the wrong kind of thing into strings through objects, because objects doesn’t have a method that takes an element in. Variance is about allowing assignments such as this in cases where it is safe. The result is that a lot of situations that were previously surprising now just work. Covariance In .NET 4.0 the IEnumerable<T> interface will be declared in the following way: public interface IEnumerable<out T> : IEnumerable { IEnumerator<T> GetEnumerator(); } public interface IEnumerator<out T> : IEnumerator { bool MoveNext(); T Current { get; } } The “out” in these declarations signifies that the T can only occur in output position in the interface – the compiler will complain otherwise. In return for this restriction, the interface becomes “covariant” in T, which means that an IEnumerable<A> is considered an IEnumerable<B> if A has a reference conversion to B. As a result, any sequence of strings is also e.g. a sequence of objects. This is useful e.g. in many LINQ methods. Using the declarations above: var result = strings.Union(objects); // succeeds with an IEnumerable<object> This would previously have been disallowed, and you would have had to to some cumbersome wrapping to get the two sequences to have the same element type. Contravariance Type parameters can also have an “in” modifier, restricting them to occur only in input positions. An example is IComparer<T>: public interface IComparer<in T> { public int Compare(T left, T right); } The somewhat baffling result is that an IComparer<object> can in fact be considered an IComparer<string>! It makes sense when you think about it: If a comparer can compare any two objects, it can certainly also compare two strings. This property is referred to as contravariance. 
A generic type can have both in and out modifiers on its type parameters, as is the case with the Func<…> delegate types: public delegate TResult Func<in TArg, out TResult>(TArg arg); Obviously the argument only ever comes in, and the result only ever comes out. Therefore a Func<object,string> can in fact be used as a Func<string,object>. Limitations Variant type parameters can only be declared on interfaces and delegate types, due to a restriction in the CLR. Variance only applies when there is a reference conversion between the type arguments. For instance, an IEnumerable<int> is not an IEnumerable<object> because the conversion from int to object is a boxing conversion, not a reference conversion. Also please note that the CTP does not contain the new versions of the .NET types mentioned above. In order to experiment with variance you have to declare your own variant interfaces and delegate types. COM Example Here is a larger Office automation example that shows many of the new C# features in action. using System; using System.Diagnostics; using System.Linq; using Excel = Microsoft.Office.Interop.Excel; using Word = Microsoft.Office.Interop.Word; class Program { static void Main(string[] args) { var excel = new Excel.Application(); excel.Visible = true; excel.Workbooks.Add(); // optional arguments omitted excel.Cells[1, 1].Value = "Process Name"; // no casts; Value dynamically excel.Cells[1, 2].Value = "Memory Usage"; // accessed var processes = Process.GetProcesses() .OrderByDescending(p =&gt; p.WorkingSet) .Take(10); int i = 2; foreach (var p in processes) { excel.Cells[i, 1].Value = p.ProcessName; // no casts excel.Cells[i, 2].Value = p.WorkingSet; // no casts i++; } Excel.Range range = excel.Cells[1, 1]; // no casts Excel.Chart chart = excel.ActiveWorkbook.Charts. Add(After: excel.ActiveSheet); // named and optional arguments chart.ChartWizard( Source: range.CurrentRegion, Title: "Memory Usage in " + Environment.MachineName); //named+optional chart.ChartStyle = 45; chart.CopyPicture(Excel.XlPictureAppearance.xlScreen, Excel.XlCopyPictureFormat.xlBitmap, Excel.XlPictureAppearance.xlScreen); var word = new Word.Application(); word.Visible = true; word.Documents.Add(); // optional arguments word.Selection.Paste(); } } The code is much more terse and readable than the C# 3.0 counterpart. Note especially how the Value property is accessed dynamically. This is actually an indexed property, i.e. a property that takes an argument; something which C# does not understand. However the argument is optional. Since the access is dynamic, it goes through the runtime COM binder which knows to substitute the default value and call the indexed property. Thus, dynamic COM allows you to avoid accesses to the puzzling Value2 property of Excel ranges. Relationship with Visual Basic A number of the features introduced to C# 4.0 already exist or will be introduced in some form or other in Visual Basic: · Late binding in VB is similar in many ways to dynamic lookup in C#, and can be expected to make more use of the DLR in the future, leading to further parity with C#. · Named and optional arguments have been part of Visual Basic for a long time, and the C# version of the feature is explicitly engineered with maximal VB interoperability in mind. · NoPIA and variance are both being introduced to VB and C# at the same time. VB in turn is adding a number of features that have hitherto been a mainstay of C#. 
As a result future versions of C# and VB will have much better feature parity, for the benefit of everyone. Resources All available resources concerning C# 4.0 can be accessed through the C# Dev Center. Specifically, this white paper and other resources can be found at the Code Gallery site. Enjoy!

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent ensuring that the bits that arrived matched was was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the affects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and is supported in the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size) For some of the moderately-sized source files, small blocks (256KB) are best As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant The 1MB block size gives the best average improvement (~16x) but the optimal approach would be to vary the block size based on the size of the source file.    (click chart for full size image) The above is another view of the same data as the prior chart just with the axis changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes. 
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

Related Resources:
  - Source code for upload test application
  - Source code for random file generator
  - OData feed of raw data from non-optimized transfer tests: Experiment Metadata, Experiment Datasets, 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data
  - OData feeds of raw data from blocked/parallelized transfer tests: Experiment Metadata, Experiment Datasets, Raw Data, 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks
  - Excel worksheet showing summarizations and comparisons
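For readers who want a rough feel for what the blocked/parallelized upload described above looks like in code, here is a compact sketch against the 1.1-era StorageClient API; error handling and the per-block MD5 validation used in the real test harness are omitted, and the block-ID scheme is just one possible choice.

  using System;
  using System.IO;
  using System.Threading.Tasks;
  using Microsoft.WindowsAzure.StorageClient;

  static class BlockedUploader
  {
      // Split a local file into fixed-size blocks, upload them in parallel with
      // PutBlock, then commit the blob with PutBlockList.
      public static void Upload(CloudBlockBlob blob, string path, int blockSize = 1024 * 1024)
      {
          long length = new FileInfo(path).Length;
          int blockCount = (int)Math.Ceiling((double)length / blockSize);
          var blockIds = new string[blockCount];

          Parallel.For(0, blockCount, i =>
          {
              // Block IDs must be base64 strings of equal length.
              blockIds[i] = Convert.ToBase64String(BitConverter.GetBytes(i));

              var buffer = new byte[blockSize];
              int read;
              using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
              {
                  fs.Seek((long)i * blockSize, SeekOrigin.Begin);
                  read = fs.Read(buffer, 0, blockSize);
              }
              using (var ms = new MemoryStream(buffer, 0, read))
              {
                  blob.PutBlock(blockIds[i], ms, null);   // contentMD5 skipped in this sketch
              }
          });

          blob.PutBlockList(blockIds);   // assemble/commit the blob from the uploaded blocks
      }
  }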

    Read the article

  • video card performance monitoring?

    - by Dru
    Is there a 'top'-like command for monitoring the GPU and memory usage of a video card? I am most interested in Linux commands, but any OS would be interesting. I strongly suspect that for a group of my systems the video cards are being under-utilized (though I have no idea by how much) and would like to re-allocate funds to other bottlenecks. We are using higher-end cards, so the price difference between cards is significant. Thank you.

    Read the article

  • Replacement for MySQL proxy

    - by tostinni
    We have a standard MySQL Master/Slave replication for our database. In order to use the slave server we have set up MySQL Proxy. However, we have been strongly discouraged from using it, as it is still alpha and not very well supported. Our application is built with Drupal 7, which doesn't use its slave database very effectively (see my related question on Drupal Answers). What tools could we use to act like MySQL Proxy and send SELECT queries to the slave server in order to distribute the load?

    Read the article
