Search Results

Search found 14532 results on 582 pages for 'dynamic types'.

Page 30 of 582

  • Enum types, FlagsAttribute & Zero value – Part 2

    - by nmgomes
    In my previous post I wrote about why you should pay attention when using enum value Zero. After reading that post you are probably thinking like Benjamin Roux: Why don’t you start the enum values at 0x1? Well I could, but doing that I would lose the ability to have Sync and Async mutually exclusive by design. Take a look at the following enum types: [Flags] public enum OperationMode1 { Async = 0x1, Sync = 0x2, Parent = 0x4 } [Flags] public enum OperationMode2 { Async = 0x0, Sync = 0x1, Parent = 0x2 } To achieve mutual exclusion between the Sync and Async values using OperationMode1 you would have to test both values: protected void CheckMainOperationMode(OperationMode1 mode) { switch (mode) { case (OperationMode1.Async | OperationMode1.Sync | OperationMode1.Parent): case (OperationMode1.Async | OperationMode1.Sync): throw new InvalidOperationException("Cannot be Sync and Async simultaneously"); case (OperationMode1.Async | OperationMode1.Parent): case (OperationMode1.Async): break; case (OperationMode1.Sync | OperationMode1.Parent): case (OperationMode1.Sync): break; default: throw new InvalidOperationException("No default mode specified"); } } but this is a by-design constraint in OperationMode2. Why? Simply because 0x0 is the neutral element for the bitwise OR operation. Knowing this singularity, replacing and simplifying the previous method, you get: protected void CheckMainOperationMode(OperationMode2 mode) { switch (mode) { case (OperationMode2.Sync | OperationMode2.Parent): case (OperationMode2.Sync): break; case (OperationMode2.Parent): default: break; } } This means that: if both Sync and Async values are specified, the Sync value always wins (Zero is the neutral element for the bitwise OR operation); if no Sync value is specified, the Async mode is used. Here is the final method implementation: protected void CheckMainOperationMode(OperationMode2 mode) { if ((mode & OperationMode2.Sync) == OperationMode2.Sync) { /* Sync */ } else { /* Async */ } } All of the above proves that the Async value (0x0) is useless from the arithmetic perspective, but without it we would lose readability. The following statements are logically equal, but the first is definitely more readable: if (OperationMode2.Async | OperationMode2.Parent) { } if (OperationMode2.Parent) { } Here’s another example where you can see the benefits of the 0x0 value: the default value can be used explicitly. <my:Control runat="server" Mode="Async,Parent"> <my:Control runat="server" Mode="Parent">
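
    A minimal, self-contained sketch of the pattern described above (the names OperationMode, Check and Demo are illustrative, not taken from the original post). Note that the bitwise test needs parentheses, because == binds tighter than & in C#:

        using System;

        [Flags]
        public enum OperationMode
        {
            Async  = 0x0,   // neutral element of bitwise OR: the implicit default
            Sync   = 0x1,
            Parent = 0x2
        }

        public static class Demo
        {
            // Sync and Async are mutually exclusive by construction: Async is simply "not Sync".
            public static void Check(OperationMode mode)
            {
                if ((mode & OperationMode.Sync) == OperationMode.Sync)
                    Console.WriteLine("Sync: " + mode);
                else
                    Console.WriteLine("Async (default): " + mode);
            }

            public static void Main()
            {
                Check(OperationMode.Async | OperationMode.Parent); // prints "Async (default): Parent"
                Check(OperationMode.Sync | OperationMode.Parent);  // prints "Sync: Sync, Parent"
            }
        }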

    Read the article

  • Copying Properties between 2 Different Types…

    - by Shawn Cicoria
    I’m not sure where I had seen some of this base code, but this comes up time and time again on projects. Here’s a little method that copies all the R/W properties (public) between 2 distinct class definitions. It’s called as follows: private static void Test1() { MyClass obj1 = new MyClass() { Prop1 = "one", Prop2 = "two", Prop3 = 100 }; MyOtherClass obj2 = null; obj2 = CopyClass<MyClass, MyOtherClass>(obj1); Console.WriteLine(obj1); Console.WriteLine(obj2); } namespace Space1 { public class MyClass { public string Prop1 { get; set; } public string Prop2 { get; set; } public int Prop3 { get; set; } public override string ToString() { var rv = string.Format("MyClass: {0} Prop2: {1} Prop3 {2}", Prop1, Prop2, Prop3); return rv; } } } namespace Space2 { public class MyOtherClass { public string Prop1 { get; set; } public string Prop2 { get; set; } public int Prop3 { get; set; } public override string ToString() { var rv = string.Format("MyOtherClass: {0} Prop2: {1} Prop3 {2}", Prop1, Prop2, Prop3); return rv; } } } Source of the method: /// <summary>Provides a copy of the public properties between 2 distinct classes.</summary> /// <typeparam name="S">Source class name</typeparam> /// <typeparam name="T">Target class name</typeparam> /// <param name="source">Instance of type Source</param> /// <returns>An instance of type Target, copying all public properties matching by name from the Source.</returns> public static T CopyClass<S, T>(S source) where T : new() { T target = default(T); BindingFlags flags = BindingFlags.Public | BindingFlags.Instance; if (source == null) { return target; } if (target == null) target = new T(); PropertyInfo[] objProperties = target.GetType().GetProperties(flags); foreach (PropertyInfo pi in objProperties) { string name = pi.Name; PropertyInfo sourceProp = source.GetType().GetProperty(name, flags); if (sourceProp == null) { throw new ApplicationException(string.Format("CopyClass - object types {0} & {1} mismatch in property: {2}", source.GetType(), target.GetType(), name)); } if (pi.CanWrite && sourceProp.CanRead) { object sourceValue = sourceProp.GetValue(source, null); pi.SetValue(target, sourceValue, null); } else { throw new ApplicationException(string.Format("CopyClass - can't read/write a property, object types {0} & {1}, property: {2}", source.GetType(), target.GetType(), name)); } } return target; }
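
    Because the target type cannot be inferred from the argument, the call needs explicit type arguments (CopyClass<MyClass, MyOtherClass>(obj1) above). A hedged variant of the same idea, not the author's code, is to skip non-matching properties instead of throwing; names here are illustrative and it assumes using System.Reflection:

        // Copy matching public read/write properties; silently skip anything that does not line up.
        public static T CopyLenient<S, T>(S source) where T : new()
        {
            var target = new T();
            const BindingFlags flags = BindingFlags.Public | BindingFlags.Instance;

            foreach (PropertyInfo targetProp in typeof(T).GetProperties(flags))
            {
                PropertyInfo sourceProp = typeof(S).GetProperty(targetProp.Name, flags);
                if (sourceProp == null || !sourceProp.CanRead || !targetProp.CanWrite)
                    continue; // no usable match on the source: skip rather than throw
                if (!targetProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType))
                    continue; // incompatible property types: skip

                targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
            }
            return target;
        }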

    Read the article

  • What are the most efficient MySQL column types for this data?

    - by AlabamaKush
    I have several tables with some pretty standard data in each. Can somebody help me optimize them by telling me the best column types for this data? What's beside each one is what I have currently:
        Number (max length 7) --> MEDIUMINT(8) Unsigned
        Text (max length 30) --> VARCHAR(30)
        Text (max length 200) --> VARCHAR(200)
        Number (max length 4) --> SMALLINT(5) Unsigned
        Number (either 0 or 1) --> TINYINT(1) Unsigned
        Text (max length 500) --> TEXT
    Any suggestions? I'm just guessing with this, so I know some of them are wrong...

    Read the article

  • MSSQL: Primary Key Schema Largely Guid but Sometimes Integer Types...

    - by Code Sherpa
    OK, this may be a silly question but... I have inherited a project and am tasked with going over the primary key relationships. The project largely uses Guids. I say "largely" because there are examples where tables use integral types to reflect enumerations. For example, dbo.MessageFolder has MessageFolderId of type int to reflect public enum MessageFolderTypes { inbox = 1, sent = 2, trash = 3, etc... } This happens a lot. There are tables with primary keys of type int, which is unavoidable because of their reliance on enumerations, and tables with primary keys of type Guid, which reflect the primary key choice of the previous programmer. Should I care that the PK schema is spotty like this? It doesn't feel right but does it really matter? If this could create a problem, how do I get around it (I really can't move all PKs to type int without serious legwork, and I have never heard of enumerations that have Guid values)? Thanks.
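
    For what it's worth, the int keys usually stay because application code bakes the enumeration into its queries. A hypothetical data-access snippet (the enum and column names come from the question; the table name and the connection are assumed, and it requires using System.Data.SqlClient):

        // The int key is meaningful to the application precisely because it mirrors the enum.
        using (SqlCommand cmd = connection.CreateCommand())
        {
            cmd.CommandText = "SELECT COUNT(*) FROM dbo.Message WHERE MessageFolderId = @folderId";
            cmd.Parameters.AddWithValue("@folderId", (int)MessageFolderTypes.inbox);
            int inboxCount = (int)cmd.ExecuteScalar();
        }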

    Read the article

  • Pass Types as arguments to a function in Haskell?

    - by Charles Peng
    The following two functions are extremely similar. They read n elements from a [String], as either [Int] or [Float]. How can I factor the common code out? I don't know of any mechanism in Haskell that supports passing types as arguments. readInts n stream = foldl next ([], stream) [1..n] where next (lst, x:xs) _ = (lst ++ [v], xs) where v = read x :: Int readFloats n stream = foldl next ([], stream) [1..n] where next (lst, x:xs) _ = (lst ++ [v], xs) where v = read x :: Float I am at a beginner level of Haskell, so any comments on my code are welcome.

    Read the article

  • .NET security mechanism to restrict access between two Types in the same Website project?

    - by jdk
    Question: Is there a mechanism in the .NET Framework to hide one custom Type from another without using separate projects/assemblies? I'm using C# with ASP.NET in a Website project (Note: Not a Web Application). Obviously there's no way to enforce this restriction using language-specific OO keywords, so I am looking for something else, for example: maybe a permission framework or code access mechanism, maybe something that uses metadata like Attributes. I'm unsure. I don't really care whether the solution actually hides classes from each other or just makes them inaccessible, etc. A runtime or design-time answer will suffice. Looking for something easy to implement, otherwise it's not worth the effort... Background: I'm working in an ASP.NET Website project and the team has decided not to use separate project assemblies for different software layers. Therefore I'm looking for a way to have, for example, a DataAccess/ folder whose classes are disallowed from accessing other Types in the ASP.NET Website project.

    Read the article

  • SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28

    - by pinaldave
    CXPACKET has to be the most popular of all wait stats. I have commonly seen this wait stat as one of the top 5 wait stats in most of the systems with more than one CPU. Books On-Line: Occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if contention on this wait type becomes a problem. CXPACKET Explanation: When a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat. There is an organizer/coordinator thread (thread 0), which waits for all the threads to complete and gathers the results together to present on the client’s side. The organizer thread has to wait for all the threads to finish before it can move ahead. The wait by this organizer thread for the slow threads to complete is called the CXPACKET wait. Note that not all CXPACKET wait types are bad. You might experience a case when it totally makes sense. There might also be cases when this is unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query. Reducing CXPACKET wait: We cannot discuss reducing the CXPACKET wait without talking about the server workload type. OLTP: On a pure OLTP system, where the transactions are smaller and queries are usually not long but very quick, set the “Maximum Degree of Parallelism” to 1 (one). This makes sure that a query never goes for parallelism and does not incur more engine overhead. EXEC sys.sp_configure N'max degree of parallelism', N'1' GO RECONFIGURE WITH OVERRIDE GO Data-warehousing / Reporting server: As queries will be running for a long time, it is advised to set the “Maximum Degree of Parallelism” to 0 (zero). This way most of the queries will utilize the parallel processors, and long-running queries get a boost in their performance due to multiple processors. EXEC sys.sp_configure N'max degree of parallelism', N'0' GO RECONFIGURE WITH OVERRIDE GO Mixed System (OLTP & OLAP): Here is the challenge. The right balance has to be found. I have taken a very simple approach. I set the “Maximum Degree of Parallelism” to 2, which means the query still uses parallelism but only on 2 CPUs. However, I keep the “Cost Threshold for Parallelism” very high. This way, not all the queries will qualify for parallelism but only the queries with a higher cost will go for parallelism. I have found this to work best for a system that has OLTP queries and also where the reporting server is set up. Here, I am setting the ‘Cost Threshold for Parallelism’ to a value of 25 (which is just for illustration); you can choose any value, and you can find it out only by experimenting with the system. In the following script, I am setting the ‘Max Degree of Parallelism’ to 2, which indicates that a query with a higher cost (here, more than 25) will qualify for parallelism and run on 2 CPUs. This implies that regardless of the number of CPUs, the query will select any two CPUs to execute itself. EXEC sys.sp_configure N'cost threshold for parallelism', N'25' GO EXEC sys.sp_configure N'max degree of parallelism', N'2' GO RECONFIGURE WITH OVERRIDE GO Read all the posts in the Wait Types and Queue series. Additionally, the comment from Jonathan Kehayias is a must-read.
Note: The information presented here is from my experience and I in no way claim it to be accurate. I suggest you all read Books Online for further clarification. All the discussion of Wait Stats here is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on the production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Power Dynamic Database-Driven Websites with MySQL & PHP

    - by Antoinette O'Sullivan
    Join major names among MySQL customers by learning to power dynamic database-driven websites with MySQL & PHP. With the MySQL and PHP: Developing Dynamic Web Applications course, in 4 days you learn how to develop applications in PHP and how to use MySQL efficiently for those applications! Through a hands-on approach, this instructor-led course helps you improve your PHP skills and combine them with time-proven database management techniques to create best-of-breed web applications that are efficient, solid and secure. You can currently take this course as a: Live Virtual Class (LVC): There are a number of events on the schedule to suit different timezones in January 2013 and March 2013. With an LVC, you get to follow this live instructor-led class from your own desk, so no travel expense or inconvenience. In-Class Event: Travel to an education center to attend this class. Here are some events already on the schedule: Lisbon, Portugal (15 April 2013, European Portuguese); Porto, Portugal (15 April 2013, European Portuguese); Barcelona, Spain (28 February 2013, Spanish); Madrid, Spain (4 March 2013, Spanish). If you do not see an event that suits you, register your interest in an additional date/location/delivery language. If you want more in-depth knowledge on developing with MySQL and PHP, consider the MySQL for Developers course. For full details on these and all courses on the authentic MySQL curriculum, go to http://oracle.com/education/mysql.

    Read the article

  • Coarse Collision Detection in highly dynamic environment

    - by Millianz
    I'm currently working on a 3D space game with A LOT of dynamic objects that are all moving (there is pretty much no static environment). I have the collision detection and resolution working just fine, but I am now trying to optimize the collision detection (which is currently O(N^2), a brute-force check of every pair). I thought about multiple options: a bounding volume hierarchy, a Binary Space Partitioning (BSP) tree, an Octree or a Grid. However, I need some help deciding what's best for my situation. A grid seems unfeasible simply due to the space requirements and cache coherence problems. Since everything is so dynamic, however, it seems that trees aren't ideal either, since they would have to be completely rebuilt every frame. I must admit I have never implemented a physics engine that required spatial partitioning: do I indeed need to rebuild the tree every frame (assuming that everything is constantly moving), or can I update the trees after integrating? Advice is much appreciated. To give some more background: you're flying a spaceship in an asteroid field, and there are lots and lots of asteroids and some enemy ships, all of which shoot bullets. EDIT: I came across the "Sweep and Prune" algorithm, which seems like the right thing for my purposes. It appears to be the right mixture of fast building of the data structures involved and detailed enough partitioning. This is the best resource I can find: http://www.codercorner.com/SAP.pdf If anyone has any suggestions on whether or not I'm going in the right direction, please let me know.
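
    For reference, a compact C# sketch of the single-axis sweep-and-prune broad phase mentioned in the edit above (types and names are illustrative, not from the post): sort the bounding boxes by minimum X each frame, sweep, and stop testing a box as soon as the next candidate starts beyond its maximum X.

        using System;
        using System.Collections.Generic;

        public struct Aabb
        {
            public int Id;
            public float MinX, MaxX, MinY, MaxY, MinZ, MaxZ;
        }

        public static class SweepAndPrune
        {
            // Returns candidate pairs whose boxes overlap on all three axes.
            public static List<(int, int)> BroadPhase(List<Aabb> boxes)
            {
                var pairs = new List<(int, int)>();

                // Mostly-sorted input between frames keeps this cheap in practice.
                boxes.Sort((a, b) => a.MinX.CompareTo(b.MinX));

                for (int i = 0; i < boxes.Count; i++)
                {
                    for (int j = i + 1; j < boxes.Count; j++)
                    {
                        // Everything after j starts even further right on X: prune the rest.
                        if (boxes[j].MinX > boxes[i].MaxX)
                            break;

                        bool overlapY = boxes[i].MinY <= boxes[j].MaxY && boxes[j].MinY <= boxes[i].MaxY;
                        bool overlapZ = boxes[i].MinZ <= boxes[j].MaxZ && boxes[j].MinZ <= boxes[i].MaxZ;
                        if (overlapY && overlapZ)
                            pairs.Add((boxes[i].Id, boxes[j].Id));
                    }
                }
                return pairs;
            }
        }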

    Read the article

  • How-to get the binding for a tab in the Dynamic Tab Shell Template

    - by Frank Nimphius
    The Dynamic Tab Shell template does expose a method on the Tab.java class that allows you to get access to the ADF binding container for a tab. At least, in theory it works; in practice this call always returns a null value (a bug has been filed for this). To work around the problem, you can use code similar to the following to get the ADF binding for a specific tab: DCBindingContainer currentBinding = (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry(); DCBindingContainer templateBinding = (DCBindingContainer) currentBinding.get("ptb1"); DCBindingContainer tabBinding = (DCBindingContainer) templateBinding.get("r" + 0); In the code above, the tabBinding variable will hold the binding reference to the first tab in the dynamic tab shell template. Note that the tab doesn't need to be visible for this (which has to do with how the template works). "ptb1" is the template reference name in the PageDef file (Executable section) of the template consumer view. Check this string in your page before using this code. If it differs, change it also in the code above. "r0" is the binding reference of the first tab in the template. The last tab is referenced by "r14".

    Read the article

  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it. The Dynamic Monitoring Service is a facility in FMw (JRF to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server you can use the following URL, http://host:port/dms/Spy, and log in with admin credentials. DMS is essentially always running and collecting this information at runtime, and to protect against loss of this data it also runs automatic backups, by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found in DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short and at the bottom you will find the following entry: <dumpConfiguration>     <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/> </dumpConfiguration> The interval of 10800 seconds corresponds to the 3 hours and the maximum size is 75MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collections or display at runtime. It will only eliminate these periodic backups.

    Read the article

  • How to specialize a type-parameterized argument to multiple different types in Scala?

    - by jmount
    I need a back-check (please). In an article I just wrote ( http://www.win-vector.com/blog/2010/06/automatic-differentiation-with-scala/ ) I stated that it is my belief that in Scala you cannot specify a function that takes an argument that is itself a function with an unbound type parameter. What I mean is you can write: def g(f:Array[Double]=>Double,Array[Double]):Double but you cannot write something like: def g(f[Y]:Array[Y]=>Double,Array[Double]):Double because Y is not known. The intended use is that inside g() I will specialize f to multiple different types at different times. You can write: def g[Y](f:Array[Y]=>Double,Array[Double]):Double but then f() is of a single type per call to g() (which is exactly what we do not want). However, you can get all of the equivalent functionality by using a trait extension instead of insisting on passing around a function. What I advocated in my article was: 1) Creating a trait that imitates the structure of Scala's Function1 trait. Something like: abstract trait VectorFN { def apply[Y](x:Array[Y]):Y } 2) Declaring def g(f:VectorFN,Double):Double (using the trait as the type). This works (people here on StackOverflow helped me find it, and I am happy with it), but am I misrepresenting Scala by missing an even better solution?

    Read the article

  • Generic Event Generator and Handler from User Supplied Types?

    - by JaredBroad
    I'm trying to allow the user to supply custom data and manage the data with custom types. The user's algorithm will get time synchronized events pushed into the event handlers they define. I'm not sure if this is possible but here's the "proof of concept" code I'd like to build. It doesn't detect T in the for loop: "The type or namespace name 'T' could not be found" class Program { static void Main(string[] args) { Algorithm algo = new Algorithm(); Dictionary<Type, string[]> userDataSources = new Dictionary<Type, string[]>(); // "User" adding custom type and data source for algorithm to consume userDataSources.Add(typeof(Weather), new string[] { "temperature data1", "temperature data2" }); for (int i = 0; i < 2; i++) { foreach (Type T in userDataSources.Keys) { string line = userDataSources[typeof(T)][i]; //Iterate over CSV data.. var userObj = new T(line); algo.OnData < typeof(T) > (userObj); } } } //User's algorithm pattern. interface IAlgorithm<TData> where TData : class { void OnData<TData>(TData data); } //User's algorithm. class Algorithm : IAlgorithm<Weather> { //Handle Custom User Data public void OnData<Weather>(Weather data) { Console.WriteLine(data.date.ToString()); Console.ReadKey(); } } //Example "user" custom type. public class Weather { public DateTime date = new DateTime(); public double temperature = 0; public Weather(string line) { Console.WriteLine("Initializing weather object with: " + line); date = DateTime.Now; temperature = -1; } } }
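
    One hedged way around the compile error above (a sketch, not a drop-in fix for the exact code in the question): a System.Type held in a variable cannot be used as a compile-time generic argument, so the user object has to be created and dispatched through reflection instead. The names below are illustrative.

        using System;
        using System.Collections.Generic;
        using System.Reflection;

        interface IHandle<TData> { void OnData(TData data); }

        class Weather
        {
            public DateTime Date = DateTime.Now;
            public Weather(string line) { Console.WriteLine("Initializing weather object with: " + line); }
        }

        class Algorithm : IHandle<Weather>
        {
            public void OnData(Weather data) { Console.WriteLine("Weather event at " + data.Date); }
        }

        static class Program
        {
            static void Main()
            {
                var algo = new Algorithm();
                var userDataSources = new Dictionary<Type, string[]>
                {
                    { typeof(Weather), new[] { "temperature data1", "temperature data2" } }
                };

                foreach (var source in userDataSources)
                {
                    foreach (string line in source.Value)
                    {
                        // Build the user object from its runtime Type...
                        object userObj = Activator.CreateInstance(source.Key, line);

                        // ...then find and invoke the matching OnData(T) handler via reflection.
                        MethodInfo handler = algo.GetType().GetMethod("OnData", new[] { source.Key });
                        if (handler != null)
                            handler.Invoke(algo, new[] { userObj });
                    }
                }
            }
        }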

    Read the article

  • ViewBag dynamic in ASP.NET MVC 3 - RC 2

    - by hajan
    Earlier today Scott Guthrie announced the ASP.NET MVC 3 - Release Candidate 2. I installed the new version right after the announcement since I was eager to see the new features. Among other cool features included in this release candidate, there is a new ViewBag dynamic which can be used to pass data from Controllers to Views, the same way you use the ViewData[] dictionary. What is great and nice about ViewBag (despite the name) is that it's a dynamic type, which means you can dynamically get/set values and add any number of additional fields without the need for strongly-typed classes. In order to see the difference, please take a look at the following examples. Example - Using ViewData Controller public ActionResult Index() {     List<string> colors = new List<string>();     colors.Add("red");     colors.Add("green");     colors.Add("blue");     ViewData["listColors"] = colors;     ViewData["dateNow"] = DateTime.Now;     ViewData["name"] = "Hajan";     ViewData["age"] = 25;     return View(); } View (ASPX View Engine) <p>     My name is     <b><%: ViewData["name"] %></b>,     <b><%: ViewData["age"] %></b> years old.     <br />         I like the following colors: </p> <ul id="colors"> <% foreach (var color in ViewData["listColors"] as List<string>){ %>     <li>        <font color="<%: color %>"><%: color %></font>    </li> <% } %> </ul> <p>     <%: ViewData["dateNow"] %> </p> (I know the code might look cleaner with Razor View engine, but it doesn’t matter right? ;) ) Example - Using ViewBag Controller public ActionResult Index() {     List<string> colors = new List<string>();     colors.Add("red");     colors.Add("green");     colors.Add("blue");     ViewBag.ListColors = colors; //colors is List     ViewBag.DateNow = DateTime.Now;     ViewBag.Name = "Hajan";     ViewBag.Age = 25;     return View(); } You see the difference? View (ASPX View Engine) <p>     My name is     <b><%: ViewBag.Name %></b>,     <b><%: ViewBag.Age %></b> years old.     <br />         I like the following colors: </p> <ul id="colors"> <% foreach (var color in ViewBag.ListColors) { %>     <li>         <font color="<%: color %>"><%: color %></font>     </li> <% } %> </ul> <p>     <%: ViewBag.DateNow %> </p> In my example now I don’t need to cast ViewBag.ListColors as List<string> since ViewBag is a dynamic type! On the other hand the ViewData[“key”] is object. I would like to note that if you use ViewData["ListColors"] = colors; in your Controller, you can retrieve it in the View by using ViewBag.ListColors. And the result in both cases is the same. Hope you like it! Regards, Hajan
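
    A tiny illustrative snippet (not from the post) of the equivalence noted above, since ViewBag and ViewData expose the same underlying data in MVC 3:

        ViewData["Greeting"] = "Hello";           // set through the dictionary...
        string greeting = ViewBag.Greeting;       // ...read through the dynamic property, no cast needed
        ViewBag.Count = 42;                       // set dynamically...
        int count = (int)ViewData["Count"];       // ...read through the dictionary (cast required: values are object)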

    Read the article

  • WCF – interchangeable data-contract types

    - by nmarun
    In a WSDL based environment, unlike a CLR-world, we pass around the ‘state’ of an object and not the reference of an object. Well firstly, what does ‘state’ mean and does this also mean that we can send a struct where a class is expected (or vice-versa) as long as their ‘state’ is one and the same? Let’s see. So I have an operation contract defined as below: 1: [ServiceContract] 2: public interface ILearnWcfServiceExtend : ILearnWcfService 3: { 4: [OperationContract] 5: Employee SaveEmployee(Employee employee); 6: } 7:  8: [ServiceBehavior] 9: public class LearnWcfService : ILearnWcfServiceExtend 10: { 11: public Employee SaveEmployee(Employee employee) 12: { 13: employee.EmployeeId = 123; 14: return employee; 15: } 16: } Quite simplistic operation there (which translates to ‘absolutely no business value’). Now, the data contract Employee mentioned above is a struct. 1: public struct Employee 2: { 3: public int EmployeeId { get; set; } 4:  5: public string FName { get; set; } 6: } After compilation and consumption of this service, my proxy (in the Reference.cs file) looks like below (I’ve ignored the rest of the details just to avoid unwanted confusion): 1: public partial struct Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged I call the service with the code below: 1: private static void CallWcfService() 2: { 3: Employee employee = new Employee { FName = "A" }; 4: Console.WriteLine("IsValueType: {0}", employee.GetType().IsValueType); 5: Console.WriteLine("IsClass: {0}", employee.GetType().IsClass); 6: Console.WriteLine("Before calling the service: {0} - {1}", employee.EmployeeId, employee.FName); 7: employee = LearnWcfServiceClient.SaveEmployee(employee); 8: Console.WriteLine("Return from the service: {0} - {1}", employee.EmployeeId, employee.FName); 9: } The output is: I now change my Employee type from a struct to a class in the proxy class and run the application: 1: public partial class Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged { The output this time is: The state of an object implies towards its composition, the properties and the values of these properties and not based on whether it is a reference type (class) or a value type (struct). And as shown above, we’re actually passing an object by its state and not by reference. Continuing on the same topic of ‘type-interchangeability’, WCF treats two data contracts as equivalent if they have the same ‘wire-representation’. We can do so using the DataContract and DataMember attributes’ Name property. 1: [DataContract] 2: public struct Person 3: { 4: [DataMember] 5: public int Id { get; set; } 6:  7: [DataMember] 8: public string FirstName { get; set; } 9: } 10:  11: [DataContract(Name="Person")] 12: public class Employee 13: { 14: [DataMember(Name = "Id")] 15: public int EmployeeId { get; set; } 16:  17: [DataMember(Name="FirstName")] 18: public string FName { get; set; } 19: } I’ve created two data contracts with the exact same wire-representation. Just remember that the names and the types of data members need to match to be considered equivalent. The question then arises as to what gets generated in the proxy class. Despite us declaring two data contracts (Person and Employee), only one gets emitted – Person. This is because we’re saying that the Employee type has the same wire-representation as the Person type. 
Also that the signature of the SaveEmployee operation gets changed on the proxy side: 1: [System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")] 2: [System.ServiceModel.ServiceContractAttribute(ConfigurationName="ServiceProxy.ILearnWcfServiceExtend")] 3: public interface ILearnWcfServiceExtend 4: { 5: [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployee", ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployeeResponse")] 6: ClientApplication.ServiceProxy.Person SaveEmployee(ClientApplication.ServiceProxy.Person employee); 7: } But, on the service side, the SaveEmployee still accepts and returns an Employee data contract. 1: [ServiceBehavior] 2: public class LearnWcfService : ILearnWcfServiceExtend 3: { 4: public Employee SaveEmployee(Employee employee) 5: { 6: employee.EmployeeId = 123; 7: return employee; 8: } 9: } Despite all these changes, our output remains the same as the last one: This is type-interchangeability at work! Here’s one more thing to ponder about. Our Person type is a struct and Employee type is a class. Then how is it that the Person type got emitted as a ‘class’ in the proxy? It’s worth mentioning that WSDL describes a type called Employee and does not say whether it is a class or a struct (see the SOAP message below): 1: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" 2: xmlns:tem="http://tempuri.org/" 3: xmlns:ser="http://schemas.datacontract.org/2004/07/ServiceApplication"> 4: <soapenv:Header/> 5: <soapenv:Body> 6: <tem:SaveEmployee> 7: <!--Optional:--> 8: <tem:employee> 9: <!--Optional:--> 10: <ser:EmployeeId>?</ser:EmployeeId> 11: <!--Optional:--> 12: <ser:FName>?</ser:FName> 13: </tem:employee> 14: </tem:SaveEmployee> 15: </soapenv:Body> 16: </soapenv:Envelope> There are some differences between how ‘Add Service Reference’ and the svcutil.exe generate the proxy class, but turns out both do some kind of reflection and determine the type of the data contract and emit the code accordingly. So since the Employee type is a class, the proxy ‘Person’ type gets generated as a class. In fact, reflecting on svcutil.exe application, you’ll see that there are a couple of places wherein a flag actually determines a type as a class or a struct. One example is in the ExportISerializableDataContract method in the System.Runtime.Serialization.CodeExporter class. Seems like these flags have a say in deciding whether the type gets emitted as a struct or a class. This behavior is different if you use the WSDL tool though. WSDL tool does not do any kind of reflection of the data contract / serialized type, it emits the type as a class by default. You can check this using the two command lines below:   Note to self: Remember ‘state’ and type-interchangeability when traversing through the WSDL planet!
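
    A hedged, standalone illustration of the "same wire representation" point above (the types here are mine, both pinned to one explicit Namespace so the XML is identical; this is not the article's service code):

        using System;
        using System.IO;
        using System.Runtime.Serialization;
        using System.Text;

        [DataContract(Name = "Person", Namespace = "http://example.org/demo")]
        public struct Person
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string FirstName { get; set; }
        }

        [DataContract(Name = "Person", Namespace = "http://example.org/demo")]
        public class Employee
        {
            [DataMember(Name = "Id")] public int EmployeeId { get; set; }
            [DataMember(Name = "FirstName")] public string FName { get; set; }
        }

        public static class WireDemo
        {
            static string Serialize<T>(T value)
            {
                var serializer = new DataContractSerializer(typeof(T));
                using (var stream = new MemoryStream())
                {
                    serializer.WriteObject(stream, value);
                    return Encoding.UTF8.GetString(stream.ToArray());
                }
            }

            public static void Main()
            {
                string structXml = Serialize(new Person { Id = 1, FirstName = "A" });
                string classXml = Serialize(new Employee { EmployeeId = 1, FName = "A" });
                // Same state, same wire representation, regardless of struct vs. class.
                Console.WriteLine(structXml == classXml); // True
            }
        }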

    Read the article

  • SOA Suite 11g Dynamic Payload Testing with soapUI Free Edition

    - by Greg Mally
    Overview Many web service developers use soapUI for various tests like: smoke test, unit test, and load testing because you can get a free edition that is fairly robust. However, if you need to venture into more complex testing that requires a dynamic payload, then the free edition doesn't necessarily make it easy. This feature does exist in soapUI, but for obvious reasons it is in the Pro version. In this blog I will show you how to use soapUI free edition for dynamic payloads in a simplified example. Hopefully this will open the doors for you to expand into more complex scenarios. The following assumes that you have a working knowledge of soapUI and will not go into concepts like setting up a project etc. For the basics, please review the documentation for soapUI: http://www.soapui.org/Getting-Started/. Additionally, we will be using asynchronous web services and you can review the setup for this in my blog: SOA Suite 11g Asynchronous Testing with soapUI. Features in soapUI Free Edition Relating to this Topic The soapUI test tool provides a very feature rich environment that can do many things provided you are willing to go beyond point and click. For this example, we will be leveraging just a couple features for our dynamic payload example: Test Case Properties Scripting with Groovy Basically, we will be using a property as a global variable and we will manipulate that property using a Groovy script. Setting Up Our Property Properties are available throughout soapUI and here is a snippet from the soapUI website defining the locations: Projects : for handling Project scope values, for example a subscription ID TestSuite : for handling TestSuite scoped values, can be seen as "arguments" to a TestSuite TestCases : for handling TestCase scoped values, can be seen as "arguments" to a TestCase Properties TestStep : for providing local values/state within a TestCase Local TestStep properties : several TestStep types maintain their own list of properties specific to their functionality : DataSource, DataSink, Run TestCase MockServices : for handling MockService scoped values/arguments MockResponses : for handling MockResponse scoped values Global Properties : for handling Global properties, optionally from an external source For our example, we will be defining a custom property in a TestCase called SimpleAsyncPayload. The property can be created in either the Custom Properties tab located at the bottom of the Navigator panel when the TestCase is selected in the Navigator or the Properties label in the TestCase editor: Navigator Panel TestCase Editor You will notice that I set a value of “0” for the custom property. For this simplified example, we will need to retrieve that value and manipulate it prior to making the web service request invocation. In order to accomplish this, we will need to get Groovy ;) Let's Get Groovy We will now add a new Groovy Script step to the TestCase called Manipulate Payload: TestCase Editor > Append Step > Groovy Script Once we have added the Groovy Script step to our TestCase, we can open the Groovy Script editor to add the code to: Get the current value of the property we created called SimpleAsyncPayload. Convert the value of the property to an integer. Increment the value. Store the incremented value back into the TestCase property called SimpleAsyncPayload. 
The script should look something like the following: Groovy Script Editor – Manipulate Payload At this point we can test the script to see if it is working by simply running the TestCase (left-click on the green triangle in the upper left-hand corner of the TestCase editor). To verify if it ran correctly, we can look at the value of the SimpleAsyncPayload property which should now be 1: TestCase Editor – Run Results All that is left to complete the TestCase is to append another step of type Test Request. The information required to append the request is a name and an operation to invoke. In this example we will use the default name and select the SimpleAsyncBPELProcessBingd -> process as the operation (any other information being requested, simply use the defaults unless you are calling an asynchronous operation then do not add any assertions). We are now in familiar ground with the Test Request editor. Depending upon the type of operation you are invoking (synchronous or asynchronous), please update the request with the necessary information (e.g., callback information for asynchronous operations). We will now tweak the Test Request payload to retrieve the value of the SimpleAsyncPayload property. The soapUI editor makes this very simple: right-click in the payload and navigate to the property (e.g., right-click > Get Data.. > TestCase: [Groovy TestCase] > Property [SimpleAsyncPayload]): Test Request Editor – Insert Property Value Your payload should now look something like the following: Test Request Editor – Inserted Property Value Just like before, we are now ready to run the TestCase. If everything goes as expected we should see a response like the following: Message Viewer – Results of TestCase Run We are now setup to be able to run a stress test where the payload will change for each request. This simple example can be expanded to include multiple payload values, complex calculations in the scripts, or whatever can be done via the soapUI scripting. Hopefully you have found this useful and happy testing to you :)

    Read the article

  • Entity Framework 4 POCO entities in separate assembly, Dynamic Data Website?

    - by steve.macdonald
    Basically I want to use a dynamic data website to maintain data in an EF4 model where the entities are in their own assembly. Model and context are in another assembly. I tried this http://stackoverflow.com/questions/2282916/entity-framework-4-self-tracking-entities-asp-net-dynamic-data-error but get an "ambiguous match" error from reflection: System.Reflection.AmbiguousMatchException was unhandled by user code Message=Ambiguous match found. Source=mscorlib StackTrace: at System.RuntimeType.GetPropertyImpl(String name, BindingFlags bindingAttr, Binder binder, Type returnType, Type[] types, ParameterModifier[] modifiers) at System.Type.GetProperty(String name) at System.Web.DynamicData.ModelProviders.EFTableProvider..ctor(EFDataModelProvider dataModel, EntitySet entitySet, EntityType entityType, Type entityClrType, Type parentEntityClrType, Type rootEntityClrType, String name) at System.Web.DynamicData.ModelProviders.EFDataModelProvider.CreateTableProvider(EntitySet entitySet, EntityType entityType) at System.Web.DynamicData.ModelProviders.EFDataModelProvider..ctor(Object contextInstance, Func`1 contextFactory) at System.Web.DynamicData.ModelProviders.SchemaCreator.CreateDataModel(Object contextInstance, Func`1 contextFactory) at System.Web.DynamicData.MetaModel.RegisterContext(Func`1 contextFactory, ContextConfiguration configuration) at WebApplication1.Global.RegisterRoutes(RouteCollection routes) in C:\dev\Puffin\Puffin.Prototype.Web\Global.asax.cs:line 42 at WebApplication1.Global.Application_Start(Object sender, EventArgs e) in C:\dev\Puffin\Puffin.Prototype.Web\Global.asax.cs:line 78 InnerException:

    Read the article

  • Dynamic query runs directly but not through variable, what could be the reason?

    - by waheed
    Here is my scenario: I'm creating a dynamic query using a select statement which uses functions to generate the query. I am storing it in a variable and running it using exec, i.e.: declare @dsql nvarchar(max) set @dsql = '' select @dsql = @dsql + dbo.getDynmicQuery(column1, column2) from Table1 exec(@dsql) Now it produces many errors in this scenario, like 'Incorrect syntax near ','' and 'Case expressions may only be nested to level 10.' But if I take the text from @dsql and assign it to the variable manually, like: declare @dsql nvarchar(max) set @dsql = '' set @dsql = N'<Dynamic query text>' exec(@dsql) it runs and generates the result. What could be the reason for that? Thanks.

    Read the article

  • Dynamic Linq help, different errors depending on object passed as parameter?

    - by sah302
    I have an entityDao that is inherited by every one of my objectDaos. I am using Dynamic Linq and trying to get some generic queries to work. I have the following code in my generic method in my EntityDao: public abstract class EntityDao<ImplementationType> where ImplementationType : Entity { public ImplementationType getOneByValueOfProperty(string getProperty, object getValue){ ImplementationType entity = null; if (getProperty != null && getValue != null) { LCFDataContext lcfdatacontext = new LCFDataContext(); //Generic LINQ Query Here entity = lcfdatacontext.GetTable<ImplementationType>().Where(getProperty + " =@0", getValue).FirstOrDefault(); //.Where(getProperty & "==" & CStr(getValue)) } //lcfdatacontext.SubmitChanges() //lcfdatacontext.Dispose() return entity; } } Then I do the following method call in a unit test (all my objectDaos inherit entityDao): [Test] public void getOneByValueOfProperty() { Accomplishment result = accomplishmentDao.getOneByValueOfProperty("AccomplishmentType.Name", "Publication"); Assert.IsNotNull(result); } The above passes (AccomplishmentType has a relationship to Accomplishment). Accomplishment result = accomplishmentDao.getOneByValueOfProperty("Description", "Can you hear me now?"); Accomplishment result = accomplishmentDao.getOneByValueOfProperty("LocalId", 4); Both of the above work. Accomplishment result = accomplishmentDao.getOneByValueOfProperty("Id", new Guid("95457751-97d9-44b5-8f80-59fc2d170a4c")) does not work and says the following: Operator '=' incompatible with operand types 'Guid' and 'Guid'. Why is this happening? Guids can't be compared? I tried == as well, but got the same error. What's even more confusing is that every example of Dynamic Linq I have seen simply uses strings, whether using the parameterized Where predicate or this one I have commented out: //.Where(getProperty & "==" & CStr(getValue)) With or without the CStr, many datatypes don't work with this format. I tried setting getValue to a string instead of an object as well, but then I just get different errors (such as a multi-word string stopping the comparison after the first word). What am I missing to make this work with GUIDs and/or any data type? Ideally I would like to be able to just pass in a string for getValue (as I have seen for every other Dynamic LINQ example) instead of the object and have it work regardless of the data type of the column.
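
    One hedged way around this (a sketch, not the poster's code, and limited to simple non-dotted property names) is to build the equality predicate with a LINQ expression tree instead of the Dynamic Linq string parser, so Guid, int and string all compare the same way:

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class PropertyFilter
        {
            // Builds row => row.<propertyName> == value and runs it against the queryable.
            public static T FirstOrDefaultByProperty<T>(IQueryable<T> table, string propertyName, object value)
                where T : class
            {
                ParameterExpression row = Expression.Parameter(typeof(T), "row");
                MemberExpression property = Expression.Property(row, propertyName);

                // Give the constant the property's exact type (Guid, int, string, ...).
                ConstantExpression constant = Expression.Constant(value, property.Type);

                var predicate = Expression.Lambda<Func<T, bool>>(
                    Expression.Equal(property, constant), row);

                return table.Where(predicate).FirstOrDefault();
            }
        }

    Inside getOneByValueOfProperty this could then be called as PropertyFilter.FirstOrDefaultByProperty(lcfdatacontext.GetTable<ImplementationType>(), getProperty, getValue).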

    Read the article

  • Static libraries, dynamic libraries, DLLs, entry points, headers ... how to get out of this alive?

    - by tunnuz
    Hello, I recently had to program C++ under Windows for a university project, and I'm pretty confused about the static and dynamic library system: what the compiler needs, what the linker needs, how to build a library... Is there any good document about this out there? I'm pretty confused about the *nix library system as well (dylibs, the ar tool, how to compile them...); can you point me to a review document about the current library techniques on the various architectures? Note: due to my poor knowledge this message could contain wrong concepts; feel free to edit it. Thank you. Feel free to add more references; I will add them to the summary. References: Since most of you posted *nix- or Windows-specific references, I will summarize here the best ones. I will mark the Wikipedia one as the accepted answer, because it is a good starting point (and has references inside too) to get introduced to this stuff. Program Library Howto (Unix) Dynamic-Link Libraries (from MSDN) (Windows) DLL Information (StackOverflow) (Windows) Programming in C (Unix) An Overview of Compiling and Linking (Windows)

    Read the article

  • Finding out if an IP address is static or dynamic?

    - by Joshua
    I run a large bulletin board and I get spammers every now and again. My moderation team does a good job filtering them out, but every time I IP-ban them they seem to come back (I'm pretty sure it's the same person on some occasions, as the post patterns are exactly the same, as are the usernames), and I'm afraid to ban them by IP address every time. If they are on a dynamic IP address, I could be banning innocent users later down the line when they try to get to my forum through SERPs, but if I ban only via static IPs I know that I'm only banning that one person. So, is there a way to properly determine if an IP address is static or dynamic? Thanks.

    Read the article

  • How do you do dynamic script evaluation in C#?

    - by Deane
    What is the state of dynamic code evaluation in C#? For a very advanced feature of an app I'm working on, I'd like the users to be able to enter a line of C# code that should evaluate to a boolean. Something like: DateTime.Now.Hour > 12 && DateTime.Now.Hour < 14 I want to dynamically eval this string and capture the result as a boolean. I tried Microsoft.JScript.Eval.JScriptEvaluate, and this worked, but it's technically deprecated and it only works with JavaScript (not ideal, but workable). Additionally, I'd like to be able to push objects into the script engine so that they can be used in the evaluation. Some resources I found mention dynamically compiling assemblies, but this is more overhead than I think I want to deal with. So, what is the state of dynamic script evaluation in C#? Is it possible, or am I out of luck?
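
    For what it's worth, a hedged sketch of the "dynamically compile an assembly" route mentioned above, using CodeDOM on the .NET Framework (class and method names here are arbitrary, error handling is minimal, and pushing your own objects in would still need extra plumbing such as method parameters and assembly references):

        using System;
        using System.CodeDom.Compiler;
        using Microsoft.CSharp;

        public static class ExpressionEvaluator
        {
            // Wraps the user expression in a static method, compiles it in memory and invokes it.
            public static bool EvaluateBool(string csharpExpression)
            {
                string source =
                    "using System;\n" +
                    "public static class __Eval {\n" +
                    "    public static bool Run() { return (" + csharpExpression + "); }\n" +
                    "}";

                using (var provider = new CSharpCodeProvider())
                {
                    var options = new CompilerParameters { GenerateInMemory = true };
                    options.ReferencedAssemblies.Add("System.dll");

                    CompilerResults results = provider.CompileAssemblyFromSource(options, source);
                    if (results.Errors.HasErrors)
                        throw new InvalidOperationException(results.Errors[0].ToString());

                    Type evalType = results.CompiledAssembly.GetType("__Eval");
                    return (bool)evalType.GetMethod("Run").Invoke(null, null);
                }
            }
        }

        // Usage: bool lunchtime = ExpressionEvaluator.EvaluateBool("DateTime.Now.Hour > 12 && DateTime.Now.Hour < 14");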

    Read the article
