Search Results

Search found 8875 results on 355 pages for 'mime types'.

  • Sort List C# in arbitrary order

    - by Jasper
    I have a C# list, e.g. List<Food> x = new List<Food>(); This list is populated with this class: public class Food { public string id { get; set; } public string idUser { get; set; } public string idType { get; set; } // idType could be Fruit, Meat, Vegetable, Candy public string location { get; set; } } Now I have an unsorted List<Food> list with, say, 15 elements: 8 Vegetable, 3 Fruit, 1 Meat and 1 Candy. I would like to sort it so that the list is ordered this way: 1: Fruit, 2: Vegetable, 3: Meat, 4: Candy, 5: Fruit, 6: Vegetable, 7: Fruit (because there is no more Meat I move to the next type, Candy, but that is empty too, so I start again from the beginning: Fruit), 8: Vegetable, 9: Vegetable (for the same reason as 7), 10: Vegetable, ... 15: Vegetable. I can't find a rule to do this. Is there a LINQ or List.Sort instruction that would help me order the list this way? Update: I changed idType to return an int instead of a string, so 1=Vegetable, 2=Fruit, 3=Candy, 4=Meat.
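
    A round-robin ordering like this can't be expressed as a single Sort or OrderBy key, but it falls out of a GroupBy followed by an index-based reordering. Below is a minimal sketch (not from the question) that assumes the Food class above and a caller-supplied type priority:

        using System.Collections.Generic;
        using System.Linq;

        public static class FoodSorter
        {
            // Interleaves items round-robin: the first item of each type (in priority
            // order), then the second item of each type, and so on. Types that run
            // out are simply skipped in later passes.
            public static List<Food> RoundRobin(IEnumerable<Food> items, IList<string> typePriority)
            {
                return items
                    .GroupBy(f => f.idType)
                    .SelectMany(g => g.Select((f, pass) => new { f, pass, rank = typePriority.IndexOf(g.Key) }))
                    .OrderBy(x => x.pass)   // pass 0 of every type, then pass 1, ...
                    .ThenBy(x => x.rank)    // within a pass: Fruit, Vegetable, Meat, Candy
                    .Select(x => x.f)
                    .ToList();
            }
        }

    Called as FoodSorter.RoundRobin(x, new[] { "Fruit", "Vegetable", "Meat", "Candy" }), this yields the order described above; with the int-based idType from the update, the same approach works with an int priority list.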

    Read the article

  • Sharepoint: Integrity of lookup fields after a list import

    - by driAn
    Hi there I got a question about the behavior of lookup fields when importing data. I wonder how the lookup fields behave when the list they point to is being replaced/imported. To explain the issue, I will provide a quick example below: As example, assume we have these two sharepoint lists: Product Types ------------- + Type Name + Code Nr + etc Products -------- + Product Name + Product Type (Lookup field to list "Product Types") + etc In my scenario, the Products List contains production data on the production Sharepoint platform. It is filled with data by the business users. However the Product Types list contains rather static data and is maintained by the developer. Now after a development cycle, the developer wants to deploy his new webparts and his new data (product types list). The developer performs the following procedure: On the dev machine: Export "product type" list using stsadm On the production machine: Delete all items in the "product type" list On the production machine: Import the "product type" list using stsadm This means we basically replace the "product type" list on the production server while keeping the "product" list as it is. Now the question: Is this safe? Will the lookup references break under certain circumstances? Any downside of this import/export procedure? What happens if someone accesses a "product" during the import? Will the (now invalid) reference clear its own content (become a null value). What happens if the schema of the "product type" list changes (new column)? Will this cause any troubles? Thanks for all feedback and suggestions!

    Read the article

  • How to transfer objects through the header in WCF

    - by Michael
    I'm trying to transfer some user information in the header of the message through message inspectors. I have created a behavior which adds the inspector to the service (both client and server). But when I try to communicate with the service I get the following error: XmlException: Name cannot begin with the '<' character, hexadecimal value 0x3C. I have also gotten an exception telling me that DataContracts were unexpected: Type 'System.DelegateSerializationHolder+DelegateEntry' with data contract name 'DelegateSerializationHolder.DelegateEntry:http://schemas.datacontract.org/2004/07/System' is not expected. Consider using a DataContractResolver or add any types not known statically to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer. The thing is that my object contains other objects which are marked as DataContract, and I'm not interested in adding the KnownType attribute for those types. Another problem might be that my object to serialize is very restricted, in the form of internal classes and internal properties, etc. Can anyone guide me in the right direction? What am I doing wrong? Some code: public virtual object BeforeSendRequest(ref Message request, IClientChannel channel) { var header = MessageHeader.CreateHeader("<name>", "<namespace>", object); request.Headers.Add(header); return Guid.NewGuid(); }
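
    Two details in that snippet are worth noting: "<name>" is not a valid XML element name (hence the XmlException), and handing the serializer an arbitrary object graph that drags in delegates or events is the sort of thing that produces the DelegateSerializationHolder error. A minimal sketch of the usual pattern, with a small [DataContract] payload and valid header names (all names here are illustrative, not from the question):

        using System;
        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;

        // Plain data contract for the header payload: no delegates, events or
        // framework objects, so the default DataContractSerializer can handle it.
        [DataContract]
        public class UserInfoHeader
        {
            [DataMember] public string UserName { get; set; }
            [DataMember] public Guid SessionId { get; set; }
        }

        public class UserInfoInspector : IClientMessageInspector
        {
            private const string HeaderName = "UserInfo";                  // must be a valid XML name
            private const string HeaderNamespace = "urn:example:headers";

            public object BeforeSendRequest(ref Message request, IClientChannel channel)
            {
                var payload = new UserInfoHeader { UserName = "jdoe", SessionId = Guid.NewGuid() };
                request.Headers.Add(MessageHeader.CreateHeader(HeaderName, HeaderNamespace, payload));
                return null;   // correlation state, not needed here
            }

            public void AfterReceiveReply(ref Message reply, object correlationState) { }
        }

    On the service side the value can then be read back with OperationContext.Current.IncomingMessageHeaders.GetHeader<UserInfoHeader>("UserInfo", "urn:example:headers").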

    Read the article

  • Adding a dynamic image to UITable View Cell

    - by Graeme
    Hi, I have a table in which I want a dynamic image to load at the left-hand side. The table needs to use an IF statement to select the appropriate image from the "Resources" folder, based upon [dog types]. The [dog types] value is extracted from an RSS feed, so the image in each table cell needs to match that cell's [dog types] tag. Update: See the code below for what I'm looking to do (only below it's for earthquakes; for me it's pictures of dogs). I suspect I need to use - (UIImage *)imageForTypes:(NSString *)types { to do such a thing. Thanks. Updated code: This is from Apple's Earthquake sample but does exactly what I need to do. It matches an image to the magnitude (2.0, 3.0, 4.0, 5.0, etc.) of each earthquake. - (UIImage *)imageForMagnitude:(CGFloat)magnitude { if (magnitude >= 5.0) { return [UIImage imageNamed:@"5.0.png"]; } if (magnitude >= 4.0) { return [UIImage imageNamed:@"4.0.png"]; } if (magnitude >= 3.0) { return [UIImage imageNamed:@"3.0.png"]; } if (magnitude >= 2.0) { return [UIImage imageNamed:@"2.0.png"]; } return nil; } and under - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath magnitudeImage.image = [self imageForMagnitude:earthquake.magnitude];

    Read the article

  • Is there a programming toolkit for converting "any file type" to a TIFF image?

    - by Ryan
    Hello, I've written several variations of a program. The purpose of the program is to convert "any file type" to a TIFF image representation of that file, as if it were being printed using a printer. I'm currently using a third-party printer driver that I send files to, and it outputs a TIFF image. This is nice, but it requires me to use Office Interop and interact with each individual processing application in order to print the files. I had previously tried a toolkit called Aspose .NET, which did not rely on Office Interop and did not require any printer driver. It did the conversion all on its own and would create a TIFF image. The problem with Aspose .NET was that it did not support a wide variety of input file types. Most notably, it can't do Visio files. My project calls for the ability to create a TIFF image for virtually "any file type" (excluding EXEs, music files, and the like). I know that finding something that handles literally any file type is probably not a very feasible task, so I figure if it can at least handle all the Office file types, Adobe types, and other major standard file types, then I can write a custom extension-parsing module that uses those processing applications to do the printing of any file type that can be viewed using those applications. So, does anyone know of a toolkit that can do this? Preferably one that does not rely on Office or a printer driver. It does not have to be free or open source. Or, if you know of an amazing printer driver that does this, I'm open to that too. Thanks in advance, Ryan

    Read the article

  • WPF Sizing of Panels

    - by Mystagogue
    I'm looking for an article or overview of the WPF Panel types that explains the sizing characteristics of each. For example, here are the panel types: http://msdn.microsoft.com/en-us/library/ms754152.aspx#Panels_derived_elements I've learned (by experiment) that UniformGrid can be given a fixed height, or it can be "auto", where it expands to fit available space. That is great, but what I wanted was for UniformGrid to shrink to fit its internal content (particularly content that is provided dynamically, at run-time). I don't think it has that ability. So I'd like to know either what other Panel I could use for that purpose, or what Panel I should nest the UniformGrid inside of. But I don't just want an answer to that specific question. I want the sizing dynamics and capabilities of all the Panel types, in summary form, so I can make all these choices as needed. Online I find articles that cover only half the Panel types and don't give as much information about sizing as I'm describing. Does anyone know of a link (or book) that has the info I'm seeking? P.S. Since I want the UniformGrid to shrink to the dynamic content I'm providing, I could just keep track of the total height of the controls placed within it and then set the height of the UniformGrid. But it would be nice if WPF took care of this for me.
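
    On the specific shrink-to-content question, two approaches that generally work in WPF are shown in the sketch below (illustrative, not from the original post): opt the panel out of Stretch alignment so layout uses its desired size, or host it in a panel such as StackPanel that measures children to content.

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Controls.Primitives;

        public static class PanelSizingSketch
        {
            public static UIElement BuildShrinkWrappedGrid()
            {
                // Default alignment is Stretch, which fills the available slot;
                // any other value makes the element use its desired (content) size.
                var grid = new UniformGrid
                {
                    Columns = 3,
                    HorizontalAlignment = HorizontalAlignment.Left,
                    VerticalAlignment = VerticalAlignment.Top
                };

                // Alternatively, a StackPanel gives its children unconstrained space
                // in the stacking direction, so the grid sizes to its content there.
                var host = new StackPanel();
                host.Children.Add(grid);
                return host;
            }
        }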

    Read the article

  • Invalid Cast Exception in ASP.NET but not in WinForms

    - by Shadow Scorpion
    I have a problem in this code: public static T[] GetExtras <T>(Type[] Types) { List<T> Res = new List<T>(); foreach (object Current in GetExtras(typeof(T), Types)) { Res.Add((T)Current);//this is the error } return Res.ToArray(); } public static object[] GetExtras(Type ExtraType, Type[] Types) { lock (ExtraType) { if (!ExtraType.IsInterface) return new object[] { }; List<object> Res = new List<object>(); bool found = false; found = (ExtraType == typeof(IExtra)); foreach (Type CurInterFace in ExtraType.GetInterfaces()) { if (found = (CurInterFace == typeof(IExtra))) break; } if (!found) return new object[] { }; foreach (Type CurType in Types) { found = false; if (!CurType.IsClass) continue; foreach (Type CurInterface in CurType.GetInterfaces()) { try { if (found = (CurInterface.FullName == ExtraType.FullName)) break; } catch { } } try { if (found) Res.Add(Activator.CreateInstance(CurType)); } catch { } } return Res.ToArray(); } } When I'm using this code in windows application it works! But I cant use it on ASP page. Why?
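
    One frequent cause of an InvalidCastException that shows up only under ASP.NET is the assembly containing T being loaded into more than one load context (for example once from the site's bin folder and once from another path), so the T the generic method was compiled against and the type produced by Activator.CreateInstance come from different copies of the assembly. Whether that is the case here is only an assumption, but it is cheap to check just before the failing cast:

        using System;

        public static class TypeIdentityCheck
        {
            // If the two assembly-qualified names or locations differ, the objects are
            // "the same" type by name only, and the cast will throw.
            public static void Dump(object created, Type expected)
            {
                Console.WriteLine(expected.AssemblyQualifiedName);
                Console.WriteLine(created.GetType().AssemblyQualifiedName);
                Console.WriteLine(expected.Assembly.Location);
                Console.WriteLine(created.GetType().Assembly.Location);
                Console.WriteLine(expected.IsAssignableFrom(created.GetType()));   // false => the cast fails
            }
        }

    Calling TypeIdentityCheck.Dump(Current, typeof(T)) inside the foreach loop would show whether the two sides really agree on the type's identity.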

    Read the article

  • Is it valid to use unsafe struct * as an opaque type instead of IntPtr in .NET Platform Invoke?

    - by David Jeske
    .NET Platform Invoke advocates declaring pointer types as IntPtr. For example, the following [DllImport("user32.dll")] static extern IntPtr SendMessage(IntPtr hWnd, UInt32 Msg, Int32 wParam, Int32 lParam); However, I find when interfacing with interesting native interfaces, that have many pointer types, flattening everything into IntPtr makes the code very hard to read and removes the typical typechecking that a compiler can do. I've been using a pattern where I declare an unsafe struct to be an opaque pointer type. I can store this pointer type in a managed object, and the compiler can typecheck it form me. For example: class Foo { unsafe struct FOO {}; // opaque type unsafe FOO *my_foo; class if { [DllImport("mydll")] extern static unsafe FOO* get_foo(); [DllImport("mydll")] extern static unsafe void do_something_foo(FOO *foo); } public unsafe Foo() { this.my_foo = if.get_foo(); } public unsafe do_something_foo() { if.do_something_foo(this.my_foo); } While this example may not seem different than using IntPtr, when there are several pointer types moving between managed and native code, using these opaque pointer types for typechecking is a godsend. I have not run into any trouble using this technique in practice. However, I also have not seen an examples of anyone using this technique, and I wonder why. Is there any reason that the above code is invalid in the eyes of the .NET runtime? My main question is about how the .NET GC system treats "unsafe FOO *my_foo". Is this pointer something the GC system is going to try to trace, or is it simply going to ignore it? My hope is that because the underlying type is a struct, and it's declared unsafe, that the GC would ignore it. However, I don't know for sure. Thoughts?
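
    For reference, here is a compilable version of the pattern being described (the question's inner class is named if, which won't compile as written; this sketch requires building with /unsafe). get_foo, do_something_foo and "mydll" come from the question, while free_foo is an assumed cleanup export added only to round out the sketch. On the GC point: pointer fields to unmanaged types are not object references, so the collector does not trace or relocate what they point at.

        using System;
        using System.Runtime.InteropServices;

        public unsafe class Foo : IDisposable
        {
            // Opaque native type: an empty struct used only through pointers, so the
            // compiler can tell FOO* apart from every other native pointer type.
            private struct FOO { }

            private FOO* my_foo;

            private static class NativeMethods
            {
                [DllImport("mydll")] public static extern FOO* get_foo();
                [DllImport("mydll")] public static extern void do_something_foo(FOO* foo);
                [DllImport("mydll")] public static extern void free_foo(FOO* foo);   // assumed export
            }

            public Foo()              { my_foo = NativeMethods.get_foo(); }
            public void DoSomething() { NativeMethods.do_something_foo(my_foo); }

            public void Dispose()
            {
                if (my_foo != null) { NativeMethods.free_foo(my_foo); my_foo = null; }
            }
        }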

    Read the article

  • Polymorphic Numerics on .Net and In C#

    - by Bent Rasmussen
    It's a real shame that in .Net there is no polymorphism for numbers, i.e. no INumeric interface that unifies the different kinds of numerical types such as bool, byte, uint, int, etc. In the extreme one would like a complete package of abstract algebra types. Joe Duffy has an article about the issue: http://www.bluebytesoftware.com/blog/CommentView,guid,14b37ade-3110-4596-9d6e-bacdcd75baa8.aspx How would you express this in C#, in order to retrofit it, without having influence over .Net or C#? I have one idea that involves first defining one or more abstract types (interfaces such as INumeric - or more abstract than that) and then defining structs that implement these and wrap types such as int while providing operations that return the new type (e.g. Integer32 : INumeric; where addition would be defined as public Integer32 Add(Integer32 other) { return Return(Value + other.Value); } I am somewhat afraid of the execution speed of this code but at least it is abstract. No operator overloading goodness... Any other ideas? .Net doesn't look like a viable long-term platform if it cannot have this kind of abstraction I think - and be efficient about it. Abstraction is reuse.
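
    As a rough sketch of the retrofit being described (illustrative only; the interface shape is one of several possible designs), a self-referencing generic constraint lets algorithms be written once against the wrapper types:

        using System.Collections.Generic;

        public interface INumeric<T> where T : INumeric<T>
        {
            T Add(T other);
            T Multiply(T other);
            T Zero { get; }
        }

        public struct Integer32 : INumeric<Integer32>
        {
            public int Value { get; }
            public Integer32(int value) { Value = value; }

            public Integer32 Add(Integer32 other)      => new Integer32(Value + other.Value);
            public Integer32 Multiply(Integer32 other) => new Integer32(Value * other.Value);
            public Integer32 Zero                      => new Integer32(0);
        }

        public static class Algorithms
        {
            // Works for any wrapper implementing INumeric<T>, at the cost of interface
            // calls in place of the built-in operators the post laments losing.
            public static T Sum<T>(IEnumerable<T> items, T zero) where T : INumeric<T>
            {
                T total = zero;
                foreach (var item in items) total = total.Add(item);
                return total;
            }
        }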

    Read the article

  • boost::lambda bind expressions can't get bind to string's empty() to work

    - by navigator
    Hi, I am trying to get the below code snippet to compile, but it fails with: error C2665: 'boost::lambda::function_adaptor::apply' : none of the 8 overloads could convert all the argument types. Specifying the return type when calling bind does not help. Any idea what I am doing wrong? Thanks. #include <boost/lambda/lambda.hpp> #include <boost/lambda/bind.hpp> #include <string> #include <map> int main() { namespace bl = boost::lambda; typedef std::map<int, std::string> types; types keys_and_values; keys_and_values[ 0 ] = "zero"; keys_and_values[ 1 ] = "one"; keys_and_values[ 2 ] = "Two"; std::for_each( keys_and_values.begin(), keys_and_values.end(), std::cout << bl::constant("Value empty?: ") << std::boolalpha << bl::bind(&std::string::empty, bl::bind(&types::value_type::second, _1)) << "\n"); return 0; }

    Read the article

  • database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products. There are hundreds of types of products available {Name, code}. Thousands of sales-persons are employed to sell those products{name, code}. They collect products from different depots{name, code}. They work in different Areas - Zones - Markets - Outlets, etc. {All have names and codes} Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}. And, sales-persons are free to choose from those combination to estimate the sales price. The problem is, daily sales requires huge amount of data-entry. Within couple of years there may be gigabytes of data (if not terabytes). If I need to show daily, weekly, monthly, quarterly and yearly sales reports there will be various types of sql queries I shall need. This is my initial design: Product {ID, Code, Name, IsActive} ProductXYZPriceHistory {ID, ProductID, Date, EffectDate, Price, IsCurrent} SalesPerson {ID, Code, Name, JoinDate, and so on..., IsActive} SalesPersonSalesAraeaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent} Depot {ID, Code, Name, IsActive} Outlet {ID, Code, Name, AreaID, IsActive} AreaHierarchy {ID, Code, Name, PrentID, AreaLevel, IsActive} DailySales {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...} Now, apart from indexing, how can I normalize my DailySales table to have a fine grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried) on the basis of above information. I don't need a detailed design advice. I just need an advice regarding only the DailySales table. Is there any way to break this particular table to achieve granularity?

    Read the article

  • Why linking doesn't work in my Xtext-based DSL?

    - by reprogrammer
    The following is the Xtext grammar for my DSL. Model: variableTypes=VariableTypes predicateTypes=PredicateTypes variableDeclarations= VariableDeclarations rules=Rules; VariableType: name=ID; VariableTypes: 'var types' (variableTypes+=VariableType)+; PredicateTypes: 'predicate types' (predicateTypes+=PredicateType)+; PredicateType: name=ID '(' (variableTypes+=[VariableType|ID])+ ')'; VariableDeclarations: 'vars' (variableDeclarations+=VariableDeclaration)+; VariableDeclaration: name=ID ':' type=[VariableType|ID]; Rules: 'rules' (rules+=Rule)+; Rule: head=Head ':-' body=Body; Head: predicate=Predicate; Body: (predicates+=Predicate)+; Predicate: predicateType=[PredicateType|ID] '(' (terms+=Term)+ ')'; Term: variable=Variable; Variable: variableDeclaration=[VariableDeclaration|ID]; terminal WS: (' ' | '\t' | '\r' | '\n' | ',')+; And, the following is a program in the above DSL. var types Node predicate types Edge(Node, Node) Path(Node, Node) vars x : Node y : Node z : Node rules Path(x, y) :- Edge(x, y) Path(x, y) :- Path(x, z) Path(z, y) When I used the generated Switch class to traverse the EMF object model corresponding to the above program, I realized that the nodes are not linked together properly. For example, the getPredicateType() method on a Predicate node returns null. Having read the Xtext user's guide, my impression is that the Xtext default linking semantics should work for my DSL. But, for some reason, the AST nodes of my DSL don't get linked together properly. Can anyone help me in diagnosing this problem?

    Read the article

  • How to find the first declaring method for a reference method

    - by Oliver Gierke
    Suppose you have a generic interface and an implementation: public interface MyInterface<T> { void foo(T param); } public class MyImplementation<T> implements MyInterface<T> { void foo(T param) { } } These two types are framework types. In the next step I want to allow users to extend that interface as well as redeclare foo(T param), to maybe equip it with further annotations. public interface MyExtendedInterface extends MyInterface<Bar> { @Override void foo(Bar param); // Further declared methods } I create an AOP proxy for the extended interface, especially to intercept the calls to the additionally declared methods. As foo(…) is now redeclared in MyExtendedInterface, I cannot execute it by simply invoking MethodInvocation.proceed(), as the instance of MyImplementation only implements MyInterface.foo(…) and not MyExtendedInterface.foo(…). So is there a way to get access to the method that declared a method initially? Regarding this example, is there a way to find out that foo(Bar param) was declared in MyInterface originally and get access to the corresponding Method instance? I already tried to scan base class methods to match by name and parameter types, but that doesn't work out as generics come into play and MyImplementation.getMethod("foo", Bar.class) obviously throws a NoSuchMethodException. I already know that MyExtendedInterface types MyInterface to Bar. So if I could create some kind of "typed view" on MyImplementation, my matching algorithm could actually work.

    Read the article

  • Unity Configuration and Same Assembly

    - by tyndall
    I'm currently getting an error trying to resolve my IDataAccess class. The value of the property 'type' cannot be parsed. The error is: Could not load file or assembly 'TestProject' or one of its dependencies. The system cannot find the file specified. (C:\Source\TestIoC\src\TestIoC\TestProject\bin\Debug\TestProject.vshost.exe.config line 14) This is inside a WPF Application project. What is the correct syntax to refer to the Assembly you are currently in? is there a way to do this? I know in a larger solution I would be pulling Types from seperate assemblies so this might not be an issue. But what is the right way to do this for a small self-contained test project. Note: I'm only interested in doing the XML config at this time, not the C# (in code) config. UPDATE: see all comments My XML config: <configuration> <configSections> <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" /> </configSections> <unity> <typeAliases> <!-- Lifetime manager types --> <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" /> <typeAlias alias="external" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity" /> <typeAlias alias="IDataAccess" type="TestProject.IDataAccess, TestProject" /> <typeAlias alias="DataAccess" type="TestProject.DataAccess, TestProject" /> </typeAliases> <containers> <container name="Services"> <types> <type type="IDataAccess" mapTo="DataAccess" /> </types> </container> </containers> </unity> </configuration>
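
    Independent of Unity, one quick way to check whether a type string such as "TestProject.IDataAccess, TestProject" can be resolved from the running application at all is to hand it to Type.GetType; if that throws, the problem lies with the assembly name or location rather than with the Unity configuration syntax. A purely diagnostic sketch, not part of the original question:

        using System;

        public static class ConfigTypeCheck
        {
            public static void Verify()
            {
                // Throws a descriptive exception if the assembly-qualified name cannot
                // be resolved from the current application base.
                var contract = Type.GetType("TestProject.IDataAccess, TestProject", throwOnError: true);
                var mapped   = Type.GetType("TestProject.DataAccess, TestProject", throwOnError: true);
                Console.WriteLine(contract.AssemblyQualifiedName);
                Console.WriteLine(mapped.AssemblyQualifiedName);
            }
        }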

    Read the article

  • Mysql select - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, each product has 3 different price tags - 3 different loan types. Everything went pretty well during testing time, query execution time was perfect, but today when transfered the changes to the production server, the site "collapsed" in about 2 minutes. The query that is used to select loan types sometimes hangs for ~10 seconds and it happens frequently and thus it cant keep up and its hella slow. The table that is used to store the data has approximately 2 milion records and each select looks like this: SELECT * FROM products_loans WHERE KOD IN("X17/Q30-10", "X17/12", "X17/5-24") AND 369.27 BETWEEN CENA_OD AND CENA_DO; 3 loan types and the price that needs to be in range between CENA_OD and CENA_DO, thus 3 rows are returned. But since I need to display 10 products per page, I need to run it trough a modified select using OR, since I didnt find any other solution to this. I have asked about it here, but got no answer. As mentioned in the referencing post, this has to be done separately since there is no column that could be used in a join (except of course price and code, but that ended very, very badly). Here is the show create table, kod and CENA_OD/CENA_DO very indexed via INDEX. CREATE TABLE `products_loans` ( `KOEF_ID` bigint(20) NOT NULL, `KOD` varchar(30) NOT NULL, `AKONTACIA` int(11) NOT NULL, `POCET_SPLATOK` int(11) NOT NULL, `koeficient` decimal(10,2) NOT NULL default '0.00', `CENA_OD` decimal(10,2) default NULL, `CENA_DO` decimal(10,2) default NULL, `PREDAJNA_CENA` decimal(10,2) default NULL, `AKONTACIA_SUMA` decimal(10,2) default NULL, `TYP_VYHODY` varchar(4) default NULL, `stage` smallint(6) NOT NULL default '1', PRIMARY KEY (`KOEF_ID`), KEY `CENA_OD` (`CENA_OD`), KEY `CENA_DO` (`CENA_DO`), KEY `KOD` (`KOD`), KEY `stage` (`stage`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 And also selecting all loan types and later filtering them trough php doesnt work good, since each type has over 50k records and the select takes too much time as well... Any ides about improving the speed are appreciated.

    Read the article

  • Why does Microsoft advise against readonly fields with mutable values?

    - by Weeble
    In the Design Guidelines for Developing Class Libraries, Microsoft say: Do not assign instances of mutable types to read-only fields. The objects created using a mutable type can be modified after they are created. For example, arrays and most collections are mutable types while Int32, Uri, and String are immutable types. For fields that hold a mutable reference type, the read-only modifier prevents the field value from being overwritten but does not protect the mutable type from modification. This simply restates the behaviour of readonly without explaining why it's bad to use readonly. The implication appears to be that many people do not understand what "readonly" does and will wrongly expect readonly fields to be deeply immutable. In effect it advises using "readonly" as code documentation indicating deep immutability - despite the fact that the compiler has no way to enforce this - and disallows its use for its normal function: to ensure that the value of the field doesn't change after the object has been constructed. I feel uneasy with this recommendation to use "readonly" to indicate something other than its normal meaning understood by the compiler. I feel that it encourages people to misunderstand the meaning of "readonly", and furthermore to expect it to mean something that the author of the code might not intend. I feel that it precludes using it in places it could be useful - e.g. to show that some relationship between two mutable objects remains unchanged for the lifetime of one of those objects. The notion of assuming that readers do not understand the meaning of "readonly" also appears to be in contradiction to other advice from Microsoft, such as FxCop's "Do not initialize unnecessarily" rule, which assumes readers of your code to be experts in the language and should know that (for example) bool fields are automatically initialised to false, and stops you from providing the redundancy that shows "yes, this has been consciously set to false; I didn't just forget to initialize it". So, first and foremost, why do Microsoft advise against use of readonly for references to mutable types? I'd also be interested to know: Do you follow this Design Guideline in all your code? What do you expect when you see "readonly" in a piece of code you didn't write?
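
    To make the behaviour the guideline describes concrete: readonly pins the reference, not the object behind it.

        using System.Collections.Generic;

        public class Schedule
        {
            // The field cannot be reassigned after construction...
            private readonly List<string> holidays = new List<string> { "Jan 1" };

            public void Tweak()
            {
                // holidays = new List<string>();   // compile error: readonly field
                holidays.Add("Dec 25");             // ...but the list itself is still mutable
                holidays.Clear();                   // a caller relying on "read-only" would be surprised
            }
        }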

    Read the article

  • Are Dynamic Prepared Statements Bad? (with php + mysqli)

    - by John
    I like the flexibility of Dynamic SQL and I like the security + improved performance of Prepared Statements. So what I really want is Dynamic Prepared Statements, which is troublesome to make because bind_param and bind_result accept "fixed" number of arguments. So I made use of an eval() statement to get around this problem. But I get the feeling this is a bad idea. Here's example code of what I mean // array of WHERE conditions $param = array('customer_id'=>1, 'qty'=>'2'); $stmt = $mysqli->stmt_init(); $types = ''; $bindParam = array(); $where = ''; $count = 0; // build the dynamic sql and param bind conditions foreach($param as $key=>$val) { $types .= 'i'; $bindParam[] = '$p'.$count.'=$param["'.$key.'"]'; $where .= "$key = ? AND "; $count++; } // prepare the query -- SELECT * FROM t1 WHERE customer_id = ? AND qty = ? $sql = "SELECT * FROM t1 WHERE ".substr($where, 0, strlen($where)-4); $stmt->prepare($sql); // assemble the bind_param command $command = '$stmt->bind_param($types, '.implode(', ', $bindParam).');'; // evaluate the command -- $stmt->bind_param($types,$p0=$param["customer_id"],$p1=$param["qty"]); eval($command); Is that last eval() statement a bad idea? I tried to avoid code injection by encapsulating values behind the variable name $param. Does anyone have an opinion or other suggestions? Are there issues I need to be aware of?

    Read the article

  • What's the recommended implementation for hashing OLE Variants?

    - by Barry Kelly
    OLE Variants, as used by older versions of Visual Basic and pervasively in COM Automation, can store lots of different types: basic types like integers and floats, more complicated types like strings and arrays, and all the way up to IDispatch implementations and pointers in the form of ByRef variants. Variants are also weakly typed: they convert the value to another type without warning depending on which operator you apply and what the current types are of the values passed to the operator. For example, comparing two variants, one containing the integer 1 and another containing the string "1", for equality will return True. So assuming that I'm working with variants at the underlying data level (e.g. VARIANT in C++ or TVarData in Delphi - i.e. the big union of different possible values), how should I hash variants consistently so that they obey the right rules? Rules: Variants that hash unequally should compare as unequal, both in sorting and direct equality Variants that compare as equal for both sorting and direct equality should hash as equal It's OK if I have to use different sorting and direct comparison rules in order to make the hashing fit. The way I'm currently working is I'm normalizing the variants to strings (if they fit), and treating them as strings, otherwise I'm working with the variant data as if it was an opaque blob, and hashing and comparing its raw bytes. That has some limitations, of course: numbers 1..10 sort as [1, 10, 2, ... 9] etc. This is mildly annoying, but it is consistent and it is very little work. However, I do wonder if there is an accepted practice for this problem.
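
    As a rough C# analogue of the normalize-to-strings approach described above (purely illustrative; the real code operates on VARIANT/TVarData), the essential point is that hashing and equality must go through the same canonical form, so that values which compare equal (the integer 1 and the string "1") also hash equal:

        using System;
        using System.Globalization;

        public static class VariantHash
        {
            // Canonical form used for BOTH equality and hashing.
            public static string Canonicalize(object value)
            {
                if (value == null) return "<null>";
                if (value is IFormattable f)
                    return f.ToString(null, CultureInfo.InvariantCulture);   // numbers, dates, ...
                return value.ToString();
            }

            public static int HashOf(object value) => Canonicalize(value).GetHashCode();

            public static bool AreEqual(object a, object b) => Canonicalize(a) == Canonicalize(b);
        }

    This inherits the limitation mentioned in the post: ordering by the canonical string sorts 1..10 as 1, 10, 2, ... 9, but it keeps the two rules consistent with each other.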

    Read the article

  • What corresponds to the Unity 'registration name' in the unity configuration section?

    - by unanswered
    When registering and resolving types in a Unity Container using code you can use 'Registration Names' to disambiguate your references that derive from an interface or base class hierarchy. The 'registration name' text would be provided as a parameter to the register and resolve methods: myContainer.RegisterType<IMyService, CustomerService>("Customers"); and MyServiceBase result = myContainer.Resolve<MyServiceBase>("Customers"); However when I register types in the configuration files I do not see where the 'registration name' can be assigned I register an Interface: <typeAlias alias="IEnlistmentNotification" type="System.Transactions.IEnlistmentNotification, System.Transactions, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" /> Then two types that I happen to know implement that interface: <typeAlias alias="PlaylistManager" type="Sample.Dailies.Grid.Workers.PlaylistManager, Sample.Dailies.Grid.Workers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> <typeAlias alias="FlexAleManager" type="Sample.Dailies.Grid.Workers.FlexAleManager, Sample.Dailies.Grid.Workers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> Then I provide mappings between the interface and the two types: <type type="IEnlistmentNotification" mapTo="FlexAleManager"><lifetime type="singleton"/></type> <type type="IEnlistmentNotification" mapTo="PlaylistManager"><lifetime type="singleton"/></type> That seems to correspond to this code: myContainer.RegisterType<IEnlistmentNotification, FlexAleManager>(); myContainer.RegisterType<IEnlistmentNotification, PlaylistManager>(); but clearly what I need is a disambiguating config entry that corresponds to this code: myContainer.RegisterType<IEnlistmentNotification, FlexAleManager>("Flex"); myContainer.RegisterType<IEnlistmentNotification, PlaylistManager>("Play"); Then when I get into my code I could do this: IEnlistmentNotification flex = myContainer.Resolve<IEnlistmentNotification>("Flex"); IEnlistmentNotification play = myContainer.Resolve<IEnlistmentNotification>("Play"); See what I mean? Thanks, Kimball

    Read the article

  • reshaping a data frame into long format in R

    - by user1773115
    I'm struggling with a reshape in R. I have 2 types of error (err and rel_err) that have been calculated for 3 different models. This gives me a total of 6 error variables (i.e. err_1, err_2, err_3, rel_err_1, rel_err_2, and rel_err_3). For each of these types of error I have 3 different types of predivtive validity tests (ie random holdouts, backcast, forecast). I would like to make my data set long so I keep the 4 types of test long while also making the two error measurements long. So in the end I will have one variable called err and one called rel_err as well as an id variable for what model the error corresponds to (1,2,or 3) Here is my data right now: iter err_1 rel_err_1 err_2 rel_err_2 err_3 rel_err_3 test_type 1 -0.09385732 -0.2235443 -0.1216982 -0.2898543 -0.1058366 -0.2520759 random 1 0.16141630 0.8575728 0.1418732 0.7537442 0.1584816 0.8419816 back 1 0.16376930 0.8700738 0.1431505 0.7605302 0.1596502 0.8481901 front 1 0.14345986 0.6765194 0.1213689 0.5723444 0.1374676 0.6482615 random 1 0.15890059 0.7435382 0.1589823 0.7439204 0.1608709 0.7527580 back 1 0.14412360 0.6743928 0.1442039 0.6747684 0.1463520 0.6848202 front and here is what I would like it to look like: iter model err rel_err test_type 1 1 -0.09385732 (#'s) random 1 2 -0.1216982 (#'s) random 1 3 -0.1216982 (#'s) random and on... I've tried playing around with the syntax but can't quite figure out what to put for the time.varying argument Thanks very much for any help you can offer.

    Read the article

  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
    I have been looking into this lately for a specific reason.  Some integration tests I want to write I want to control the types of instances which are used inside the service layer but I want that control from the test class instance.  One of the problems with just referencing the service is that a lot of the time this will by default be done inside a different process.  I am using StructureMap as my DI of choice and one of the tools which I am using inline with RhinoMocks is StructureMap.AutoMocking.  With StructureMap the main entry point is the ObjectFactory.  This will be process specific so if I decide that the I want a certain instance of a type to be used inside the ServiceLayer I cannot configure the ObjectFactory from my test class as that will only apply to the process which it belongs to. This is were I started thinking about two things: Running a WCF in process Being able to share mocked instances across processes A colleague in work pointed me to a project which is for the latter but I thought that it would be a better solution if I could run the WCF Service in process.  One of the projects which I use when I think about WCF Services is AGATHA, and the one which I have to used to try and get my head around doing this. Another asset I have is a book called Programming WCF Services by Juval Lowy and if you have not heard of it or read it I would definately recommend it.  One of the many topics that is inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view. <system.serviceModel> <services> <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor"> <endpoint address ="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/> </service> </services> <client> <endpoint name="MyEndpoint" address="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/> </client> </system.serviceModel>   You can see here that I am referencing the Agatha object and contract here, but also that my binding and the address is something called Named Pipes.  THis is sort of the “Magic” which makes it happen in the same process. Next I need to open the service prior to calling the methods on a proxy which I also need.  My initial attempt at the proxy did not use any Agatha specific coding and one of the pains I found was that you obviously need to give your proxy the known types which the serializer can be aware of.  So we need to add to the known types of the proxy programmatically.  I came across the following blog post which showed me how easy it was http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx. First Pass So with this in mind, and inside a console app this was my first pass at consuming a service in process.  First here is the proxy which I made making use of the Agatha IWcfRequestProcessor contract. 
public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>, Agatha.Common.WCF.IWcfRequestProcessor { public InProcProxy() { } public InProcProxy(string configurationName) : base(configurationName) { } public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests) { return Channel.Process(requests); } public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests) { Channel.ProcessOneWayRequests(requests); } } So with the proxy in place I could then use this after opening the service so here is the code which I use inside the console app make the request. static void Main(string[] args) { ComponentRegistration.Register(); ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor)); serviceHost.Open(); Console.WriteLine("Service is running...."); using (var proxy = new InProcProxy()) { foreach (var operation in proxy.Endpoint.Contract.Operations) { foreach (var t in KnownTypeProvider.GetKnownTypes(null)) { operation.KnownTypes.Add(t); } } var request = new GetProductsRequest(); var responses = proxy.Process(new[] { request }); var response = (GetProductsResponse)responses[0]; Console.WriteLine("{0} Products have been retrieved", response.Products.Count); } serviceHost.Close(); Console.WriteLine("Finished"); Console.ReadLine(); } So what I used here is the KnownTypeProvider of Agatha to easily get all the types I need for the service/proxy and add them to the proxy.  My Request handler for this was just a test one which always returned 2 products. public class GetProductsHandler : RequestHandler<GetProductsRequest,GetProductsResponse> { public override Agatha.Common.Response Handle(GetProductsRequest request) { return new GetProductsResponse { Products = new List<ProductDto> { new ProductDto{}, new ProductDto{} } }; } } Second Pass Now after I did this I started reading up some more on some resources including more by Davy Brion and others on Agatha.  Now it turns out that the work I did above to create a derived class of the ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary due to a nice class which is present inside the Agatha code base, RequestProcessorProxy which takes care of this for you! :-) So disregarding that class I made for the proxy and changing my code to use it I am now left with the following: static void Main(string[] args) { ComponentRegistration.Register(); ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor)); serviceHost.Open(); Console.WriteLine("Service is running...."); using (var proxy = new RequestProcessorProxy()) { var request = new GetProductsRequest(); var responses = proxy.Process(new[] { request }); var response = (GetProductsResponse)responses[0]; Console.WriteLine("{0} Products have been retrieved", response.Products.Count); } serviceHost.Close(); Console.WriteLine("Finished"); Console.ReadLine(); }   Cheers for now, Andy References Agatha WCF InProcess Without WCF StructureMap.AutoMocking Cross Process Mocking Agatha Programming WCF Services by Juval Lowy

    Read the article

  • Using Unity – Part 6

    - by nmarun
    This is the last of the ‘Unity’ series and I’ll be talking about generics here. If you’ve been following the previous articles, you must have noticed that I’m just adding more and more ‘Product’ classes to the project. I’ll change that trend in this blog where I’ll be adding an ICaller interface and a Caller class. 1: public interface ICaller<T> where T : IProduct 2: { 3: string CallMethod<T>(string typeName); 4: } 5:  6: public class Caller<T> : ICaller<T> where T:IProduct 7: { 8: public string CallMethod<T>(string typeName) 9: { 10: //... 11: } 12: } We’ll fill-in the implementation of the CallMethod in a few, but first, here’s what we’re going to do: create an instance of the Caller class pass it the IProduct as a generic parameter in the CallMethod method, we’ll use Unity to dynamically create an instance of IProduct implemented object I need to add the config information for ICaller and Caller types. 1: <typeAlias alias="ICaller`1" type="ProductModel.ICaller`1, ProductModel" /> 2: <typeAlias alias="Caller`1" type="ProductModel.Caller`1, ProductModel" /> The .NET Framework’s convention to express generic types is ICaller`1, where the digit following the "`" matches the number of types contained in the generic type. So a generic type that contains 4 types contained in the generic type would be declared as: 1: <typeAlias alias="Caller`4" type="ProductModel.Caller`4, ProductModel" /> On my .aspx page, I have the following UI design: 1: <asp:RadioButton ID="LegacyProduct" Text="Product" runat="server" GroupName="ProductWeb" 2: AutoPostBack="true" OnCheckedChanged="RadioButton_CheckedChanged" /> 3: <br /> 4: <asp:RadioButton ID="NewProduct" Text="Product 2" runat="server" GroupName="ProductWeb" 5: AutoPostBack="true" OnCheckedChanged="RadioButton_CheckedChanged" /> 6: <br /> 7: <asp:RadioButton ID="ComplexProduct" Text="Product 3" runat="server" GroupName="ProductWeb" 8: AutoPostBack="true" OnCheckedChanged="RadioButton_CheckedChanged" /> 9: <br /> 10: <asp:RadioButton ID="ArrayConstructor" Text="Product 4" runat="server" GroupName="ProductWeb" 11: AutoPostBack="true" OnCheckedChanged="RadioButton_CheckedChanged" /> Things to note here are that all these radio buttons belong to the same GroupName => only one of these four can be clicked. Next, all four controls postback to the same ‘OnCheckedChanged’ event and lastly the ID’s point to named types of IProduct (already added to the web.config file). 1: <type type="IProduct" mapTo="Product" name="LegacyProduct" /> 2:  3: <type type="IProduct" mapTo="Product2" name="NewProduct" /> 4:  5: <type type="IProduct" mapTo="Product3" name="ComplexProduct"> 6: ... 7: </type> 8:  9: <type type="IProduct" mapTo="Product4" name="ArrayConstructor"> 10: ... 11: </type> In my calling code, I see which radio button was clicked, pass that as an argument to the CallMethod method. 1: protected void RadioButton_CheckedChanged(object sender, EventArgs e) 2: { 3: string typeName = ((RadioButton)sender).ID; 4: ICaller<IProduct> caller = unityContainer.Resolve<ICaller<IProduct>>(); 5: productDetailsLabel.Text = caller.CallMethod<IProduct>(typeName); 6: } What’s basically happening here is that the ID of the control gets passed on to the typeName which will be one of “LegacyProduct”, “NewProduct”, “ComplexProduct” or “ArrayConstructor”. I then create an instance of an ICaller and pass the typeName to it. Now, we’ll fill in the blank for the CallMethod method (sorry for the naming guys). 
1: public string CallMethod<T>(string typeName) 2: { 3: IUnityContainer unityContainer = HttpContext.Current.Application["UnityContainer"] as IUnityContainer; 4: T productInstance = unityContainer.Resolve<T>(typeName); 5: return ((IProduct)productInstance).WriteProductDetails(); 6: } This is where I’ll resolve the IProduct by passing the type name and calling the WriteProductDetails() method. With all things in place, when I run the application and choose different radio buttons, the output should look something like below:          Basically this is how generics come to play in Unity. Please see the code I’ve used for this here. This marks the end of the ‘Unity’ series. I’ll definitely post any updates that I find, but for now I don’t have anything planned.

    Read the article

  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
    One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we’ve found.  After looking at discovery activities across a very wide range of industries, question types, business needs, and problem solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context.  We label these activities Modes: Explore, compare, and comprehend are three of the nine recognizable modes.  Modes describe *how* people go about realizing insights.  (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery) By analogy to languages, modes are the 'verbs' of discovery activity.  When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities, to which specific types of information visualizations better enable these scenarios for the types of data users will analyze. The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficult of communicating using only verbs.  So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics.  Continuing the language analogy, we've identified some of the ‘nouns’ in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into.  We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery as with the Modes. Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sense making needs and activity in more specific, consistent, and complete fashion.  In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people’s analytical needs and goals.  For example, a scenario such as ‘Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject] clearly captures the nature of the activity — exploration of trends vs. deep analysis of underlying factors — and the central focus — attrition rates for customers above a certain set of size criteria — from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods. We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides the perspective divide.  For example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis.  
The question posed to a Data Scientist or analyst may be something like “Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?” or “Where can innovate, by expanding our product portfolio to meet unmet needs?”.  Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis.  From the perspective of analysis this second question might become, “Which customer needs of type ‘A', identified and measured in terms of ‘B’, that are not directly or indirectly addressed by any of our current products, offer 'X' potential for ‘Y' positive return on the investment ‘Z' required to launch a new offering, in time frame ‘W’?  And how do these compare to each other?”.  Translation also happens from the perspective of analysis to the perspective of data; in terms of availability, quality, completeness, format, volume, etc. By implication, we are proposing that most working organizations — small and large, for profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects.  This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one.  (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity - but we’re not considering the recursive aspects of this exercise at the moment.) Compellingly, we can place the collection of subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain.  As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics. The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject.  Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (Data centric and blue), or people (User centric and orange).  This axis is shown explicitly above the Spectrum.  Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity.  This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity. Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets.  For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks.  
Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or take the form of Signals, such as ‘some Measure had some value at some time’ - what many enterprises understand as alerts.   The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible.  Accordingly, business analytics as a domain is structured around the fundamental assumption that sense making depends on analytical transformation of data.  Analytical activity incrementally synthesizes more complex and larger scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data, and recombined with other Subjects.  The end goal of  ‘laddering’ successive transformations is to enable sense making from the business perspective, rather than the analytical perspective.Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods. Beginning with some motivating question such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (Plan), Analysts will iteratively wrangle and transform source data Records, Values and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business.  More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders.  These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer.  The fulfillment process will involve other types of Entities, such as the products or services the business provides.  The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included.  All of this may be interpreted through an understanding of the operational domain of the businesses supply chain (a Domain).   We'll discuss the Spectrum in more depth in succeeding posts.

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types which are quite common these days. The JavaScript Serializer that came before it actually does support non-typed objects for serialization but it can't do anything with untyped data coming in from JavaScript and it's overall model of extensibility was pretty limited (JavaScript Serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, Xml to JSON conversion, LINQ to JSON and many, many more features than either of the built in serializers. ASP.NET Web API now uses JSON.NET as its default serializer and is now pulled in as a NuGet dependency into Web API projects, which is great. Dynamic JSON Parsing One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is without mapping the JSON captured directly into a .NET type as DataContractSerializer or the JavaScript Serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc), and other times you simply don't have the structures in place or don't want to create them to actually import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discus JToken, JObject and JArray which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types. Why Dynamic JSON? So, why Dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes lot of sense to pull just a small amount of data out of large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was few values out of the data. Dynamic JSON parsing makes it possible to map this data, without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store it on my business entities that needed to receive the data - no need to map the entire Maps API structure. 
Getting JSON.NET The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with: PM> Install-Package Newtonsoft.Json From the Package Manager Console or by using Manage NuGet Packages in your project References. As mentioned if you're using ASP.NET Web API or MVC 4 JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/ Creating JSON on the fly with JObject and JArray Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken derived JSON.NET objects. The most common JToken derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaProvider and so uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JObject for the base object and songs and JArray for the actual collection of songs:[TestMethod] public void JObjectOutputTest() { // strong typed instance var jsonObject = new JObject(); // you can explicitly add values here using class interface jsonObject.Add("Entered", DateTime.Now); // or cast to dynamic to dynamically add/read properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1976; album.Songs = new JArray() as dynamic; dynamic song = new JObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces a complete JSON structure: { "Entered": "2012-08-18T13:26:37.7137482-10:00", "AlbumName": "Dirty Deeds Done Dirt Cheap", "Artist": "AC/DC", "YearReleased": 1976, "Songs": [ { "SongName": "Dirty Deeds Done Dirt Cheap", "SongLength": "4:11" }, { "SongName": "Love at First Feel", "SongLength": "3:10" } ] } Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override with the JsonSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JObject to create a album 'object' and immediately cast it to dynamic. JObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key value pairs that are exposed as properties through the IDynamicMetaObject interface exposed in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create new JObject or JArray, cast them to dynamic and then add properties and items to them. 
    Always remember though: these values are dynamic, which means no IntelliSense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:

// strongly typed instance
var jsonObject = new JObject();

// you can explicitly add values here
jsonObject.Add("Entered", DateTime.Now);

// expando style instance - you can just 'use' properties
dynamic album = jsonObject;
album.AlbumName = "Dirty Deeds Done Dirt Cheap";

JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily:

foreach (var item in jsonObject)
{
    Console.WriteLine(item.Key + " " + item.Value.ToString());
}

The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you have used that before, you're already familiar with how the dynamic interface to the JSON objects works.

Importing JSON with JObject.Parse() and JArray.Parse()
The JValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JToken objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:

public void JValueParsingTest()
{
    var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",
                        ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

    dynamic json = JValue.Parse(jsonString);

    // values require casting
    string name = json.Name;
    string company = json.Company;
    DateTime entered = json.Entered;

    Assert.AreEqual(name, "Rick");
    Assert.AreEqual(company, "West Wind");
}

The JSON string represents an object with three properties, which is parsed into a JObject and cast to dynamic. Once cast to dynamic I can go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken, and I have to cast them to their appropriate types before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way dynamic typing works: the runtime binder can't determine the target type from the method signature of the Assert.AreEqual(object, object) method. I have to either assign the dynamic value to a variable, as I did above, or explicitly cast it ( (string) json.Name ) in the actual method call. The JSON structure can be much more complex than this simple example.
    Here's another example of an array of albums serialized to JSON and then parsed with JArray.Parse():

[TestMethod]
public void JsonArrayParsingTest()
{
    var jsonString = @"[
    {
      ""Id"": ""b3ec4e5c"",
      ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
      ""Artist"": ""AC/DC"",
      ""YearReleased"": 1976,
      ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
      ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
      ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"",
      ""Songs"": [
        {
          ""AlbumId"": ""b3ec4e5c"",
          ""SongName"": ""Dirty Deeds Done Dirt Cheap"",
          ""SongLength"": ""4:11""
        },
        {
          ""AlbumId"": ""b3ec4e5c"",
          ""SongName"": ""Love at First Feel"",
          ""SongLength"": ""3:10""
        },
        {
          ""AlbumId"": ""b3ec4e5c"",
          ""SongName"": ""Big Balls"",
          ""SongLength"": ""2:38""
        }
      ]
    },
    {
      ""Id"": ""7b919432"",
      ""AlbumName"": ""End of the Silence"",
      ""Artist"": ""Henry Rollins Band"",
      ""YearReleased"": 1992,
      ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
      ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
      ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
      ""Songs"": [
        {
          ""AlbumId"": ""7b919432"",
          ""SongName"": ""Low Self Opinion"",
          ""SongLength"": ""5:24""
        },
        {
          ""AlbumId"": ""7b919432"",
          ""SongName"": ""Grip"",
          ""SongLength"": ""4:51""
        }
      ]
    }
    ]";

    JArray jsonVal = JArray.Parse(jsonString) as JArray;
    dynamic albums = jsonVal;

    foreach (dynamic album in albums)
    {
        Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");
        foreach (dynamic song in album.Songs)
        {
            Console.WriteLine("\t" + song.SongName);
        }
    }

    Console.WriteLine(albums[0].AlbumName);
    Console.WriteLine(albums[0].Songs[1].SongName);
}

JObject and JArray in ASP.NET Web API
Of course these types also work in ASP.NET Web API controller methods. If you want, you can accept parameters using these objects or return them back to the server. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first:

[HttpPost]
public JObject PostAlbumJObject(JObject jAlbum)
{
    // dynamic input from inbound JSON
    dynamic album = jAlbum;

    // create a new JSON object to write out
    dynamic newAlbum = new JObject();

    // Create properties on the new instance
    // with values from the first
    newAlbum.AlbumName = album.AlbumName + " New";
    newAlbum.NewProperty = "something new";
    newAlbum.Songs = new JArray();

    foreach (dynamic song in album.Songs)
    {
        song.SongName = song.SongName + " New";
        newAlbum.Songs.Add(song);
    }

    return newAlbum;
}

The raw POST request to the server looks something like this:

POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1
User-Agent: Fiddler
Content-type: application/json
Host: localhost
Content-Length: 88

{AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]}

and the output that comes back looks like this:

{
  "AlbumName": "Dirty Deeds New",
  "NewProperty": "something new",
  "Songs": [
    {
      "SongName": "Problem Child New"
    },
    {
      "SongName": "Squealer New"
    }
  ]
}

The original values are echoed back with something extra appended to demonstrate that we're working with a new object.
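As an aside, here's a rough sketch - not part of the original article - of what calling this method from client code might look like, building the payload dynamically with JObject/JArray and parsing the dynamic response. The URL is the one from the example request above; everything else is illustrative:

using System;
using System.Net.Http;
using System.Text;
using Newtonsoft.Json.Linq;

public class PostAlbumClientSketch
{
    public static void Run()
    {
        // Build the request payload dynamically - no Album class required
        var payload = new JObject();
        dynamic album = payload;
        album.AlbumName = "Dirty Deeds";
        album.Songs = new JArray(
            new JObject(new JProperty("SongName", "Problem Child")),
            new JObject(new JProperty("SongName", "Squealer")));

        using (var client = new HttpClient())
        {
            var content = new StringContent(payload.ToString(), Encoding.UTF8, "application/json");

            // .Result keeps the sketch short; production code would await these calls
            HttpResponseMessage response = client.PostAsync(
                "http://localhost/aspnetwebapi/samples/PostAlbumJObject", content).Result;

            dynamic result = JObject.Parse(response.Content.ReadAsStringAsync().Result);
            Console.WriteLine((string)result.AlbumName);   // "Dirty Deeds New"
        }
    }
}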
    When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result types explicitly determine the input and output format, which is nice.

Dynamic to Strong Type Mapping
You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the two-album jsonString shown earlier, the code below takes the array of albums, picks out a single album and casts that album to a static Album instance.

[TestMethod]
public void JsonParseToStrongTypeTest()
{
    JArray albums = JArray.Parse(jsonString) as JArray;

    // pick out one album
    JObject jalbum = albums[0] as JObject;

    // Copy to a static Album instance
    Album album = jalbum.ToObject<Album>();

    Assert.IsNotNull(album);
    Assert.AreEqual(album.AlbumName, jalbum.Value<string>("AlbumName"));
    Assert.IsTrue(album.Songs.Count > 0);
}

This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static.

Strongly typed JSON Parsing
With all this talk of dynamic, let's not forget that JSON.NET of course also does strongly typed serialization, which is drop dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET:

[TestMethod]
public void StronglyTypedSerializationTest()
{
    // Demonstrate serialization and deserialization round-tripping
    var album = new Album()
    {
        AlbumName = "Dirty Deeds Done Dirt Cheap",
        Artist = "AC/DC",
        Entered = DateTime.Now,
        YearReleased = 1976,
        Songs = new List<Song>()
        {
            new Song()
            {
                SongName = "Dirty Deeds Done Dirt Cheap",
                SongLength = "4:11"
            },
            new Song()
            {
                SongName = "Love at First Feel",
                SongLength = "3:10"
            }
        }
    };

    // serialize to string
    string json2 = JsonConvert.SerializeObject(album, Formatting.Indented);
    Console.WriteLine(json2);

    // make sure we can deserialize it back
    var album2 = JsonConvert.DeserializeObject<Album>(json2);

    Assert.IsNotNull(album2);
    Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap");
    Assert.IsTrue(album2.Songs.Count == 2);
}

JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize and parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with IntelliSense, and that's good, because there's not a lot in the way of documentation that's actually useful.

Summary
JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into ASP.NET Web API, pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go!
    Resources
Sample Test Project
http://json.codeplex.com/
© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET, Web API, AJAX

    Read the article

  • Differences Between NHibernate and Entity Framework

    - by Ricardo Peres
    Introduction
NHibernate and Entity Framework are two of the most popular O/RM frameworks in the .NET world. Although they share some functionality, there are some aspects on which they are quite different. This post will describe these differences and will hopefully help you get started with the one you know less well. Mind you, this is a personal selection of features to compare; it is by no means an exhaustive list.

History
First, a bit of history. NHibernate is an open-source project that was first ported from Java's venerable Hibernate framework, one of the first O/RM frameworks, but nowadays it is not tied to it - for example, it has .NET-specific features and has evolved in different ways from its Java counterpart. The current version is 3.3, with 3.4 on the horizon. It currently targets .NET 3.5, but can be used in .NET 4 as well; it just makes no use of any .NET 4-specific functionality. You can find its home page at NHForge. Entity Framework 1 came out with .NET 3.5 and is now on its second major version, despite being numbered 4. Code First sits on top of it but came separately, and will continue to be released out of band with major .NET distributions. It is currently at version 4.3.1, and version 5 will be released together with .NET Framework 4.5. Each version targets the version of .NET that is current at the time of its release. Its home is located at MSDN.

Architecture
In NHibernate, there is a separation between the Unit of Work and the configuration and model instances. You start off by creating a Configuration object, where you specify all global NHibernate settings such as the database and dialect to use, the batch sizes, the mappings, etc., then you build an ISessionFactory from it. The ISessionFactory holds the model and metadata that is tied to a particular database and to the settings that came from the Configuration object, and there will typically be only one instance of each in a process. Finally, you create instances of ISession from the ISessionFactory, which is the NHibernate representation of the Unit of Work and Identity Map. This is a lightweight object; it basically opens and closes a database connection as required and keeps track of the entities associated with it. ISession objects are cheap to create and dispose, because all of the model complexity is stored in the ISessionFactory and Configuration objects. As for Entity Framework, the ObjectContext/DbContext holds the configuration and model and acts as the Unit of Work, holding references to all of the known entity instances. This class is therefore not as lightweight as its NHibernate counterpart, and it is not uncommon to see examples where an instance is cached in a field.

Mappings
Both NHibernate and Entity Framework (Code First) support the use of POCOs to represent entities; no base classes are required (or even possible, in the case of NHibernate). As for mapping to and from the database, NHibernate supports three types of mappings:
XML-based, which have the advantage of not tying the entity classes to a particular O/RM; the XML files can be deployed as files on the file system or as embedded resources in an assembly;
Attribute-based, for keeping both the entities and the database details in the same place, at the expense of polluting the entity classes with NHibernate-specific attributes;
Strongly-typed code-based, which allows dynamic creation of the model and strongly types it, so that if, for example, a property name changes, the mapping is also updated. A minimal sketch of this code-based style, together with the Configuration/ISessionFactory/ISession flow described above, follows below.
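The following is a hedged sketch, not taken from the original post: it assumes a hypothetical Product entity, an existing hibernate.cfg.xml with the connection and dialect settings, and shows NHibernate's mapping-by-code style together with the Configuration -> ISessionFactory -> ISession flow:

using System;
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Cfg.MappingSchema;
using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

public class Product
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

// Strongly-typed mapping: renaming Product.Name also updates this mapping at compile time
public class ProductMapping : ClassMapping<Product>
{
    public ProductMapping()
    {
        Id(x => x.Id, m => m.Generator(Generators.Identity));
        Property(x => x.Name);
    }
}

public static class NHibernateBootstrap
{
    public static ISessionFactory BuildSessionFactory()
    {
        // Reads database, dialect and other global settings from hibernate.cfg.xml
        var configuration = new Configuration().Configure();

        // Compile the code-based mappings and add them to the configuration
        var mapper = new ModelMapper();
        mapper.AddMapping<ProductMapping>();
        HbmMapping mapping = mapper.CompileMappingForAllExplicitlyAddedEntities();
        configuration.AddMapping(mapping);

        // Heavyweight object - typically built once per process
        return configuration.BuildSessionFactory();
    }

    public static void SaveOne(ISessionFactory factory)
    {
        // ISession is the cheap Unit of Work / Identity Map
        using (ISession session = factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            session.Save(new Product { Name = "Widget" });
            tx.Commit();
        }
    }
}

Compare this with Entity Framework Code First, where a DbContext subclass plays the roles of configuration, model and Unit of Work all at once.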
    Entity Framework can use:
Attribute-based mappings (although attributes cannot express all of the available possibilities - for example, cascading);
Strongly-typed code mappings.

Database Support
With NHibernate you can use mostly any database you want, including: SQL Server; SQL Server Compact; SQL Server Azure; Oracle; DB2; PostgreSQL; MySQL; Sybase Adaptive Server/SQL Anywhere; Firebird; SQLite; Informix; any database through OLE DB; any database through ODBC. Out of the box, Entity Framework only supports SQL Server, but a number of providers exist, both free and commercial, for some of the most used databases, such as Oracle and MySQL. See a list here.

Inheritance Strategies
Both NHibernate and Entity Framework support the three canonical inheritance strategies: Table Per Hierarchy (Single Table Inheritance), Table Per Type (Class Table Inheritance) and Table Per Concrete Type (Concrete Table Inheritance).

Associations
Regarding associations, both support one to one, one to many and many to many. However, NHibernate offers far more collection types:
Bags of entities or values: unordered, possibly with duplicates;
Lists of entities or values: ordered, indexed by a number column;
Maps of entities or values: indexed by either an entity or any value;
Sets of entities or values: unordered, no duplicates;
Arrays of entities or values: indexed, immutable.

Querying
NHibernate exposes several querying APIs (a short sketch comparing the main ones follows at the end of this section):
LINQ is probably the most used nowadays and really does not need to be introduced;
Hibernate Query Language (HQL) is a database-agnostic, object-oriented, SQL-like language that has existed since NHibernate's creation and still offers the most advanced querying possibilities; it is well suited for dynamic queries, even if built through string concatenation;
The Criteria API is an implementation of the Query Object pattern, where you create a semi-abstract, conceptual representation of the query you wish to execute by means of a class model; it is also a good choice for dynamic querying;
QueryOver offers a similar API to Criteria, but uses strongly-typed LINQ expressions instead of strings; for this reason, although more refactoring-friendly than Criteria, it is less suited for dynamic queries;
SQL, including stored procedures, can also be used;
Integration with the Lucene.NET indexer is available.
As for Entity Framework:
LINQ to Entities is fully supported, and its implementation is considered very complete; it is the API of choice for most developers;
Entity SQL, HQL's counterpart, is also an object-oriented, database-independent querying language that can be used for dynamic queries;
SQL, of course, is also supported.

Caching
Both NHibernate and Entity Framework, of course, feature a first-level cache. NHibernate also supports a second-level cache that can be shared among multiple ISessionFactory instances, even in different processes/machines: Hashtable (in-memory); SysCache (uses ASP.NET as the cache provider); SysCache2 (same as above but with support for SQL Server SQL Dependencies); Prevalence; SharedCache; Memcached; Redis; NCache; AppFabric Caching. Out of the box, Entity Framework does not have any second-level cache mechanism; however, there are some public samples that show how it can be added.
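Here is the promised sketch of the NHibernate querying styles listed above - not from the original post - showing the same simple query expressed with LINQ, HQL and QueryOver. It assumes the hypothetical Product entity from the earlier bootstrap sketch:

using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public static class NHibernateQueryingSketch
{
    public static void Run(ISession session)
    {
        // LINQ (NHibernate.Linq) - strongly typed, familiar to most developers
        List<Product> viaLinq = session.Query<Product>()
            .Where(p => p.Name.StartsWith("W"))
            .ToList();

        // HQL - database-agnostic, object-oriented, string-based; good for dynamic queries
        IList<Product> viaHql = session
            .CreateQuery("from Product p where p.Name like :name")
            .SetParameter("name", "W%")
            .List<Product>();

        // QueryOver - strongly typed expressions layered over the Criteria API
        IList<Product> viaQueryOver = session.QueryOver<Product>()
            .WhereRestrictionOn(p => p.Name).IsLike("W%")
            .List();
    }
}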
    ID Generators
NHibernate supports different ID generation strategies, coming from the database and otherwise:
Identity (for SQL Server, MySQL, and other databases that support identity columns);
Sequence (for Oracle, PostgreSQL, and others that support sequences);
Trigger-based;
HiLo;
Sequence HiLo (for databases that support sequences);
Several GUID flavors, both in GUID as well as in string format;
Increment (for single-user scenarios);
Assigned (you must know what you're doing);
Sequence-style (uses either an actual sequence or a single-column table);
Table of ids;
Pooled (similar to HiLo but stores the high values in a table);
Native (uses whatever mechanism the current database supports, identity or sequence).
Entity Framework only supports: Identity generation; GUIDs; Assigned values.

Properties
NHibernate supports properties of entity types (one to one or many to one), collections (one to many or many to many) as well as scalars and enumerations. It offers a mechanism for having complex property types generated from the database, which even includes support for querying. It also supports properties originated from SQL formulas. Entity Framework only supports scalars, entity types and collections. Support for enumerations will come in the next version.

Events and Interception
NHibernate has a very rich event model that exposes more than 20 events, either for synchronous pre-execution or asynchronous post-execution, including: Pre/Post-Load; Pre/Post-Delete; Pre/Post-Insert; Pre/Post-Update; Pre/Post-Flush. It also features interception of class instancing and SQL generation. As for Entity Framework, only two events exist: ObjectMaterialized (after loading an entity from the database); SavingChanges (before saving changes, which include deleting, inserting and updating).

Tracking Changes
For NHibernate as well as Entity Framework, all changes are tracked by their respective Unit of Work implementation. Entities can be attached to it and detached from it; Entity Framework does, however, also support self-tracking entities.

Optimistic Concurrency Control
NHibernate supports all of the imaginable scenarios: SQL Server's ROWVERSION; Oracle's ORA_ROWSCN; a column containing a date and time; a column containing a version number; all/dirty columns comparison. Entity Framework is more focused on SQL Server, so it only supports: SQL Server's ROWVERSION; comparing all/some columns.

Batching
NHibernate has full support for insertion batching, but only if the ID generator in use is not database-based (for example, it cannot be used with Identity), whereas Entity Framework has no batching at all.

Cascading
Both support cascading for collections and associations: when an entity is deleted, its conceptual children are also deleted. NHibernate also offers the possibility to set the foreign key column on the children to NULL instead of removing them.

Flushing Changes
NHibernate's ISession has a FlushMode property that can have the following values:
Auto: changes are sent to the database when necessary, for example, if there are dirty instances of an entity type and a query is performed against this entity type, or if the ISession is being disposed;
Commit: changes are sent when committing the current transaction;
Never: changes are only sent when explicitly calling Flush().
As for Entity Framework, changes have to be explicitly sent through a call to AcceptAllChanges()/SaveChanges(). A small sketch contrasting the two flushing styles follows below.
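The following is a hedged sketch, not from the original post, contrasting the flushing behavior just described. The NHibernate half reuses the hypothetical Product entity from the earlier sketches; the Entity Framework half assumes an illustrative ShopContext derived from DbContext:

using NHibernate;
using System.Data.Entity;

// Illustrative Code First context; Product is assumed to be the same simple
// entity sketched earlier (Id and Name properties).
public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

public static class FlushingSketch
{
    public static void NHibernateStyle(ISessionFactory factory)
    {
        using (ISession session = factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            // Only flush when the transaction commits
            session.FlushMode = FlushMode.Commit;

            session.Save(new Product { Name = "Gadget" });

            tx.Commit();   // pending changes are sent to the database here
        }
    }

    public static void EntityFrameworkStyle()
    {
        using (var context = new ShopContext())
        {
            context.Products.Add(new Product { Name = "Gadget" });

            context.SaveChanges();   // nothing is sent to the database before this call
        }
    }
}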
    Lazy Loading
NHibernate supports lazy loading for:
Associated entities (one to one, many to one);
Collections (one to many, many to many);
Scalar properties (think of BLOBs or CLOBs).
Entity Framework only supports lazy loading for:
Associated entities;
Collections.

Generating and Updating the Database
Both NHibernate and Entity Framework Code First (with the Migrations API) allow creating the database model from the mappings and updating it if the mappings change.

Extensibility
As you can guess, NHibernate is far more extensible than Entity Framework. Basically, everything can be extended: ID generation, the translation of LINQ to SQL, HQL and native SQL support, custom column types, custom association collections, SQL generation, supported databases, etc. With Entity Framework your options are more limited, not least because practically no information exists as to what can be extended or changed. It does feature a provider model that can be extended to support any database.

Integration With Other Microsoft APIs and Tools
When it comes to integration with Microsoft technologies, it will come as no surprise that Entity Framework offers the best support. For example, the following technologies are fully supported: ASP.NET (through the EntityDataSource); ASP.NET Dynamic Data; WCF Data Services; WCF RIA Services; Visual Studio (through the integrated designer).

Documentation
This is another point where Entity Framework is superior: NHibernate lacks, for starters, an up-to-date API reference synchronized with its current version. It does have a community mailing list, blogs and wikis, although these are not much used. Entity Framework has a number of resources on MSDN and, of course, several forums and discussion groups exist.

Conclusion
Like I said, this is a personal list. It may come as a surprise to some that Entity Framework is so far behind NHibernate in so many aspects, but it is true that NHibernate is much older and, due to its open-source nature, is not tied to product-specific timeframes and can thus evolve much more rapidly. I do like both, and I choose whichever is best for the job at hand. I am looking forward to the changes in EF5, which will add significant value to an already interesting product. So, what do you think? Did I forget anything important, or is there anything else worth talking about? Looking forward to your comments!

    Read the article

< Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >