Search Results

Search found 16783 results on 672 pages for 'static typing'.

Page 115 of 672

  • Delegates in C#

    - by Jalpesh P. Vadgama
    I have used delegates in my programming since C# 2.0, but I have seen a lot of confusion about them, so I have decided to blog about it. In this post I will explain delegate basics and the use of delegates in C#. What is a delegate? We can say a delegate is a type-safe function pointer that holds a reference to a method. As per MSDN it's a type that references a method, so you can assign any method with the same parameters and the same return type to a delegate. Following is the syntax for a delegate: public delegate int Calculate(int a, int b); Here you can see that we have defined a delegate with two int parameters and an int return type. Now any method that matches this signature can be assigned to the delegate. To understand the functionality of delegates let's take the following simple example. using System; namespace Delegates { class Program { public delegate int CalculateNumber(int a, int b); static void Main(string[] args) { int a = 5; int b = 5; CalculateNumber addNumber = new CalculateNumber(AddNumber); Console.WriteLine(addNumber(5, 6)); Console.ReadLine(); } public static int AddNumber(int a, int b) { return a + b; } } } In the code above you can see that I have created an object of the CalculateNumber delegate and assigned the AddNumber static method to it; the AddNumber static method simply returns the sum of two numbers. After that I call the method through the delegate and print the output to the console application. Now let's run the application; the output is as expected. That's it - you can see the output of the delegate after adding the numbers. Delegates can be used in a variety of scenarios: in a web application we can use them to update one control's properties from another control's action, and we can also invoke a delegate when some UI interaction occurs, such as a button click. Hope you liked it. Stay tuned for more. In the next post I am going to explain multicast delegates. Till then, happy programming.
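    As a quick preview of the multicast delegates mentioned at the end of the post: a delegate instance can hold references to more than one method at a time, and invoking it calls every target in order. Below is a minimal sketch (my own illustration, not code from the post), using a void return type so that each target's effect is visible:

    using System;

    namespace Delegates
    {
        class MulticastDemo
        {
            // void return type, so every subscribed method's output can be observed
            public delegate void Notify(string message);

            static void Main()
            {
                Notify notify = WriteToConsole;   // first target
                notify += WriteUpperCase;         // += adds a second target (multicast)

                notify("hello");                  // both methods run, in subscription order

                notify -= WriteUpperCase;         // -= removes a target again
                notify("goodbye");                // only WriteToConsole runs now
            }

            static void WriteToConsole(string message) { Console.WriteLine(message); }

            static void WriteUpperCase(string message) { Console.WriteLine(message.ToUpper()); }
        }
    }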

    Read the article

  • Beware of const members

    - by nmarun
    I happened to learn a new thing about const today and how one needs to be careful with its usage. Let’s say I have a third-party assembly ‘ConstVsReadonlyLib’ with a class named ConstSideEffect.cs: 1: public class ConstSideEffect 2: { 3: public static readonly int StartValue = 10; 4: public const int EndValue = 20; 5: } In my project, I reference the above assembly as follows: 1: static void Main(string[] args) 2: { 3: for (int i = ConstSideEffect.StartValue; i < ConstSideEffect.EndValue; i++) 4: { 5: Console.WriteLine(i); 6: } 7: Console.ReadLine(); 8: } You’ll see values 10 through 19 as expected. Now, let’s say I receive a new version of the ConstVsReadonlyLib. 1: public class ConstSideEffect 2: { 3: public static readonly int StartValue = 5; 4: public const int EndValue = 30; 5: } If I just drop this new assembly in the bin folder and run the application, without rebuilding my console application, my thinking was that the output would be from 5 to 29. Of course I was wrong… if not you’d not be reading this blog. The actual output is from 5 through 19. The reason is due to the behavior of const and readonly members. To begin with, const is the compile-time constant and readonly is a runtime constant. Next, when you compile the code, a compile-time constant member is replaced with the value of the constant in the code. But, the IL generated when you reference a read-only constant, references the readonly variable, not its value. So, the IL version of the Main method, after compilation actually looks something like: 1: static void Main(string[] args) 2: { 3: for (int i = ConstSideEffect.StartValue; i < 20; i++) 4: { 5: Console.WriteLine(i); 6: } 7: Console.ReadLine(); 8: } I’m no expert with this IL thingi, but when I look at the disassembled code of the exe file (using IL Disassembler), I see the following: I see our readonly member still being referenced by the variable name (ConstVsReadonlyLib.ConstSideEffect::StartValue) in line 0001. Then there’s the Console.WriteLine in line 000b and finally, see the value of 20 in line 0017. This, I’m pretty sure is our const member being replaced by its value which marks the upper bound of the ‘for’ loop. Now you know why the output was from 5 through 19. This definitely is a side-effect of having const members and one needs to be aware of it. While we’re here, I’d like to add a few other points about const and readonly members: const is slightly faster, but is less flexible readonly cannot be declared within a method scope const can be used only on primitive types (numbers and strings) Just wanted to share this before going to bed!
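    If the bound values are expected to change between releases of the library, the side effect described above can be avoided by exposing them as runtime constants rather than compile-time constants. The following is a small sketch of that alternative (my own illustration, not code from the original post):

    // ConstVsReadonlyLib - values that may change across versions are exposed as runtime constants
    public class ConstSideEffect
    {
        public static readonly int StartValue = 5;

        // was: public const int EndValue = 30; (the literal gets baked into every referencing assembly at compile time)
        public static readonly int EndValue = 30;  // resolved through the field at runtime, so a drop-in DLL update is picked up
    }

    With both members declared as static readonly, dropping a new version of the assembly into the bin folder changes the loop bounds without recompiling the console application.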

    Read the article

  • Pluggable Rules for Entity Framework Code First

    - by Ricardo Peres
    Suppose you want a system that lets you plug custom validation rules on your Entity Framework context. The rules would control whether an entity can be saved, updated or deleted, and would be implemented in plain .NET. Yes, I know I already talked about pluggable validation in Entity Framework Code First, but this is a different approach. An example API is in order. First, a ruleset, which will hold the collection of rules: 1: public interface IRuleset : IDisposable 2: { 3: void AddRule<T>(IRule<T> rule); 4: IEnumerable<IRule<T>> GetRules<T>(); 5: } Next, a rule: 1: public interface IRule<T> 2: { 3: Boolean CanSave(T entity, DbContext ctx); 4: Boolean CanUpdate(T entity, DbContext ctx); 5: Boolean CanDelete(T entity, DbContext ctx); 6: String Name 7: { 8: get; 9: } 10: } Let's analyze what we have, starting with the ruleset: it only has methods for adding a rule, specific to an entity type, and for listing all rules of that entity type; by implementing IDisposable, we allow it to be cancelled, by disposing of it when we no longer want its rules to be applied. A rule, on the other hand: has discrete methods for checking if a given entity can be saved, updated or deleted, which receive as parameters the entity itself and a pointer to the DbContext to which the ruleset was applied; and has a Name property for helping us identify what failed. A ruleset really doesn't need a public implementation; all we need is its interface. The private (internal) implementation might look like this: 1: sealed class Ruleset : IRuleset 2: { 3: private readonly IDictionary<Type, HashSet<Object>> rules = new Dictionary<Type, HashSet<Object>>(); 4: private ObjectContext octx = null; 5:  6: internal Ruleset(ObjectContext octx) 7: { 8: this.octx = octx; 9: } 10:  11: public void AddRule<T>(IRule<T> rule) 12: { 13: if (this.rules.ContainsKey(typeof(T)) == false) 14: { 15: this.rules[typeof(T)] = new HashSet<Object>(); 16: } 17:  18: this.rules[typeof(T)].Add(rule); 19: } 20:  21: public IEnumerable<IRule<T>> GetRules<T>() 22: { 23: if (this.rules.ContainsKey(typeof(T)) == true) 24: { 25: foreach (IRule<T> rule in this.rules[typeof(T)]) 26: { 27: yield return (rule); 28: } 29: } 30: } 31:  32: public void Dispose() 33: { 34: this.octx.SavingChanges -= RulesExtensions.OnSaving; 35: RulesExtensions.rulesets.Remove(this.octx); 36: this.octx = null; 37:  38: this.rules.Clear(); 39: } 40: } Basically, this implementation: stores the ObjectContext of the DbContext for which it was created, so that later we can remove the association; has a collection - a set, actually, which does not allow duplication - of rules indexed by the real Type of an entity (because of proxying, an entity may be of a type that inherits from the class that we declared); has generic methods for adding and enumerating rules of a given type; and has a Dispose method for cancelling the enforcement of the rules.
A (really dumb) rule applied to Product might look like this: 1: class ProductRule : IRule<Product> 2: { 3: #region IRule<Product> Members 4:  5: public String Name 6: { 7: get 8: { 9: return ("Rule 1"); 10: } 11: } 12:  13: public Boolean CanSave(Product entity, DbContext ctx) 14: { 15: return (entity.Price > 10000); 16: } 17:  18: public Boolean CanUpdate(Product entity, DbContext ctx) 19: { 20: return (true); 21: } 22:  23: public Boolean CanDelete(Product entity, DbContext ctx) 24: { 25: return (true); 26: } 27:  28: #endregion 29: } The DbContext is there because we may need to check something else in the database before deciding whether to allow an operation or not. And here’s how to apply this mechanism to any DbContext, without requiring the usage of a subclass, by means of an extension method: 1: public static class RulesExtensions 2: { 3: private static readonly MethodInfo getRulesMethod = typeof(IRuleset).GetMethod("GetRules"); 4: internal static readonly IDictionary<ObjectContext, Tuple<IRuleset, DbContext>> rulesets = new Dictionary<ObjectContext, Tuple<IRuleset, DbContext>>(); 5:  6: private static Type GetRealType(Object entity) 7: { 8: return (entity.GetType().Assembly.IsDynamic == true ? entity.GetType().BaseType : entity.GetType()); 9: } 10:  11: internal static void OnSaving(Object sender, EventArgs e) 12: { 13: ObjectContext octx = sender as ObjectContext; 14: IRuleset ruleset = rulesets[octx].Item1; 15: DbContext ctx = rulesets[octx].Item2; 16:  17: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Added)) 18: { 19: Object entity = entry.Entity; 20: Type realType = GetRealType(entity); 21:  22: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 23: { 24: if (rule.CanSave(entity, ctx) == false) 25: { 26: throw (new Exception(String.Format("Cannot save entity {0} due to rule {1}", entity, rule.Name))); 27: } 28: } 29: } 30:  31: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Deleted)) 32: { 33: Object entity = entry.Entity; 34: Type realType = GetRealType(entity); 35:  36: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 37: { 38: if (rule.CanDelete(entity, ctx) == false) 39: { 40: throw (new Exception(String.Format("Cannot delete entity {0} due to rule {1}", entity, rule.Name))); 41: } 42: } 43: } 44:  45: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Modified)) 46: { 47: Object entity = entry.Entity; 48: Type realType = GetRealType(entity); 49:  50: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 51: { 52: if (rule.CanUpdate(entity, ctx) == false) 53: { 54: throw (new Exception(String.Format("Cannot update entity {0} due to rule {1}", entity, rule.Name))); 55: } 56: } 57: } 58: } 59:  60: public static IRuleset CreateRuleset(this DbContext context) 61: { 62: Tuple<IRuleset, DbContext> ruleset = null; 63: ObjectContext octx = (context as IObjectContextAdapter).ObjectContext; 64:  65: if (rulesets.TryGetValue(octx, out ruleset) == false) 66: { 67: ruleset = rulesets[octx] = new Tuple<IRuleset, DbContext>(new Ruleset(octx), context); 68: 69: octx.SavingChanges += OnSaving; 70: } 71:  72: return (ruleset.Item1); 73: } 74: } It relies on the SavingChanges event of the ObjectContext to intercept the saving operations before they are actually issued. 
Yes, it uses a bit of dynamic magic! Very handy, by the way! So, let’s put it all together: 1: using (MyContext ctx = new MyContext()) 2: { 3: IRuleset rules = ctx.CreateRuleset(); 4: rules.AddRule(new ProductRule()); 5:  6: ctx.Products.Add(new Product() { Name = "xyz", Price = 50000 }); 7:  8: ctx.SaveChanges(); //an exception is fired here 9:  10: //when we no longer need to apply the rules 11: rules.Dispose(); 12: } Feel free to use it and extend it any way you like, and do give me your feedback! As a final note, this can be easily changed to support plain old Entity Framework (not Code First, that is), if that is what you are using.
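    Since rules are just classes implementing IRule<T>, extending the system means writing another small class and registering it with AddRule(). The following is a hypothetical sketch (the Customer and Order entities and their Id/CustomerId properties are assumptions for illustration, not part of the original post) showing how the DbContext parameter can be used to consult related data before allowing a delete:

    // requires: using System; using System.Linq; using System.Data.Entity;
    class CustomerDeleteRule : IRule<Customer>
    {
        public String Name
        {
            get { return ("Customer still has orders"); }
        }

        public Boolean CanSave(Customer entity, DbContext ctx)
        {
            return (true);
        }

        public Boolean CanUpdate(Customer entity, DbContext ctx)
        {
            return (true);
        }

        public Boolean CanDelete(Customer entity, DbContext ctx)
        {
            // use the context to check related data: only allow the delete
            // if no orders reference this customer
            return (ctx.Set<Order>().Any(o => o.CustomerId == entity.Id) == false);
        }
    }

    It would be registered exactly like the ProductRule above: rules.AddRule(new CustomerDeleteRule());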

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage. About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML by turning it into XML tags, in JSON by using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.
    Hard Link Issues in a Component Library
    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.
    Enter Dynamic Loading
    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides: no assembly reference means only dynamic access - no compiler type checking or Intellisense - and the host application is required to have a reference to JSON.NET or else you get runtime errors. The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work: dynamically create an instance and optionally attempt to load the assembly (if not loaded yet); load types into dynamic variables; and use Reflection for a few tasks like statics/enums. The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection. Specifically it doesn't deal with object activation, truly dynamic (string-based) member activation or access to non-instance members, so there's still a little bit of work left to do with Reflection.
    Dynamic Object Instantiation
    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two-step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet if it hasn't been accessed. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects assemblies are loaded just in time, only when they are accessed. Instantiating a type is a two-step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this: /// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). Here's what the factory function looks like in JsonSerializationUtils: /// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; } This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading. Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property. Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.
    Doing the actual JSON Conversion
    Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks - two each for serialization and deserialization, for string and file. Here's what the File Serialization looks like: /// <summary> /// Serializes an object instance to a JSON file. /// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; } You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable. For full circle operation here's the DeserializeFromFile() version: /// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to deserialize from</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; } This code is a little more compact since there are no prettifying options to set. Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.
    Using the JsonSerializationUtils Wrapper
    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, which is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component and there's very little code to make this work now. The whole provider looks like this: /// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to the JsonConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } } This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases. Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class. Already, there are several other places in some other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use. And for any other manual JSON operations (in a couple of apps I use JSON Serialization for 'blob' like document storage) this is also going to be handy.
    Performance?
    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change though depending on the size of the objects serialized - the larger the object, the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling' as I do here, the overhead is small enough to not matter. All that being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.
    Dynamic Loading - Worth it?
    Dynamic loading is not something you need most of the time, but on occasion it makes sense. There's a price to be paid in added code complexity and a performance hit which depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files, adding dependencies and loading down distributions. These days, when new projects in Visual Studio start out with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset… © Rick Strahl, West Wind Technologies, 2005-2013. Posted in .NET, C#
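    The ReflectionUtils.GetStaticProperty() helper referenced above - the one used to read Newtonsoft.Json.ReferenceLoopHandling.Ignore by name - is not shown in the excerpt. A minimal sketch of what such a helper might look like (an assumption for illustration, not the actual West Wind implementation) is:

    using System;
    using System.Reflection;

    public static class ReflectionHelpers
    {
        // Reads a static property or field (enum members are static fields) from a type
        // identified by its full name, e.g. ("Newtonsoft.Json.ReferenceLoopHandling", "Ignore").
        public static object GetStaticProperty(string typeName, string memberName)
        {
            // a fuller version would also fall back to scanning loaded assemblies, as GetTypeFromName() does
            Type type = Type.GetType(typeName, false);
            if (type == null)
                return null;

            try
            {
                return type.InvokeMember(memberName,
                    BindingFlags.Public | BindingFlags.Static |
                    BindingFlags.GetProperty | BindingFlags.GetField,
                    null, null, null);
            }
            catch
            {
                return null;
            }
        }
    }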

    Read the article

  • How to display Unicode Myanmar text in Java? [closed]

    - by Spacez Ly Wang
    I'm just a beginner in Java. I'm trying to display Unicode Myanmar text correctly in a Java GUI (Swing/AWT). I have four TrueType fonts which support Myanmar Unicode text: Myanmar3, Padauk, Tharlon and Myanmar Text (the Windows 8 built-in font). You may need the fonts before running the code; please Google for them. Each of the fonts displays differently, and incorrectly, in the Java GUI. Here is the code for a label displaying Myanmar text: ++++++++++++++++++++++++ package javaapplication1; import javax.swing.JFrame; import javax.swing.JLabel; public class CusFrom { private static void createAndShowGUI() { JFrame frame = new JFrame("Hello World Swing"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); String s = "\u1015\u102F \u103C\u1015\u102F"; JLabel label = new JLabel(s); label.setFont(new java.awt.Font("Myanmar3", 0, 20)); // font inserted here: Myanmar Text, Padauk, Myanmar3 or Tharlon frame.getContentPane().add(label); frame.pack(); frame.setVisible(true); } public static void main(String[] args) { javax.swing.SwingUtilities.invokeLater(new Runnable() { public void run() { createAndShowGUI(); } }); } } ++++++++++++++++++++++++ The outputs vary; see the screenshots for Myanmar3, Padauk, Tharlon and Myanmar Text, compared with the correct rendering (in Notepad). Next is the code for a text field for typing Myanmar text: ++++++++++++++++++++++++ package javaapplication1; import javax.swing.JFrame; import javax.swing.JTextField; public class XusForm { private static void createAndShowGUI() { JFrame frame = new JFrame("Frame Title"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); JTextField textfield = new JTextField(); textfield.setFont(new java.awt.Font("Myanmar3", 0, 20)); frame.getContentPane().add(textfield); frame.pack(); frame.setVisible(true); } public static void main(String[] args) { javax.swing.SwingUtilities.invokeLater(new Runnable() { public void run() { createAndShowGUI(); } }); } } ++++++++++++++++++++++++ The outputs also vary when I type Unicode text on the keyboard (screenshots for Myanmar Text, Padauk, Myanmar3 and Tharlon). These fonts work well on Linux when opening text files with the Text Editor application. My question is: how do I display Unicode Myanmar text in a Java GUI? Do I need additional code to display it properly, or does Java still have bugs here? The fonts display well in web applications (HTML, CSS), but I'm not sure about displaying in a Java web application.

    Read the article

  • Class Design -- Multiple Calls from One Method or One Call from Multiple Methods?

    - by Andrew
    I've been working on some code recently that interfaces with a CMS we use and it's presented me with a question on class design that I think is applicable in a number of situations. Essentially, what I am doing is extracting information from the CMS and transforming this information into objects that I can use programatically for other purposes. This consists of two steps: Retrieve the data from the CMS (we have a DAL that I use, so this is essentially just specifying what data from the CMS I want--no connection logic or anything like that) Map the parsed data to my own [C#] objects There are basically two ways I can approach this: One call from multiple methods public void MainMethodWhereIDoStuff() { IEnumerable<MyObject> myObjects = GetMyObjects(); // Do other stuff with myObjects } private static IEnumerable<MyObject> GetMyObjects() { IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems(); List<MyObject> mappedObjects = new List<MyObject>(); // do stuff to map the CmsDataItems to MyObjects return mappedObjects; } private static IEnumerable<CmsDataItem> GetCmsDataItems() { List<CmsDataItem> cmsDataItems = new List<CmsDataItem>(); // do stuff to get the CmsDataItems I want return cmsDataItems; } Multiple calls from one method public void MainMethodWhereIDoStuff() { IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems(); IEnumerable<MyObject> myObjects = GetMyObjects(cmsDataItems); // do stuff with myObjects } private static IEnumerable<MyObject> GetMyObjects(IEnumerable<CmsDataItem> itemsToMap) { // ... } private static IEnumerable<CmsDataItem> GetCmsDataItems() { // ... } I am tempted to say that the latter is better than the former, as GetMyObjects does not depend on GetCmsDataItems, and it is explicit in the calling method the steps that are executed to retrieve the objects (I'm concerned that the first approach is kind of an object-oriented version of spaghetti code). On the other hand, the two helper methods are never going to be used outside of the class, so I'm not sure if it really matters whether one depends on the other. Furthermore, I like the fact that in the first approach the objects can be retrieved from one line-- most likely anyone working with the main method doesn't care how the objects are retrieved, they just need to retrieve the objects, and the "daisy chained" helper methods hide the exact steps needed to retrieve them (in practice, I actually have a few more methods but am still able to retrieve the object collection I want in one line). Is one of these methods right and the other wrong? Or is it simply a matter of preference or context dependent?

    Read the article

  • web hosting locally

    - by Pradyut Bhattacharya
    I have made a website and hosted it on my local computer using a static IP. Where can I buy a domain name such as www.something.com and point it at my static IP, so that a page like http://localhost/index.jsp can be accessed as http://www.something.com/index.jsp? Does it matter whether I run the server locally or buy managed web hosting from a big company if the traffic on my site is low? Thanks.

    Read the article

  • Segmentation fault 11 in Mac OS X - C++ [migrated]

    - by Marcos Cesar Vargas Magana
    Hi all. I have a "segmentation fault 11" error when I run the following code. The code actually compiles but I get the error at run time. //** Terror.h ** #include <iostream> #include <string> #include <map> using std::map; using std::pair; using std::string; template<typename Tsize> class Terror { public: //Inserts a message in the map. static Tsize insertMessage(const string& message) { mErrorMessages.insert( pair<Tsize, string>(mErrorMessages.size()+1, message) ); return mErrorMessages.size(); } private: static map<Tsize, string> mErrorMessages; }; template<typename Tsize> map<Tsize,string> Terror<Tsize>::mErrorMessages; //** error.h ** #include <iostream> #include "Terror.h" typedef unsigned short errorType; typedef Terror<errorType> error; errorType memoryAllocationError=error::insertMessage("ERROR: out of memory."); //** main.cpp ** #include <iostream> #include "error.h" using namespace std; int main() { try { throw error(memoryAllocationError); } catch(error& err) { } } I have done some debugging and the error happens when the message is being inserted into the static map member. One observation is that if I put the line errorType memoryAllocationError=error::insertMessage("ERROR: out of memory."); inside the main() function instead of at global scope, then everything works fine. But I would like to be able to extend the error messages at global scope, not at local scope. The map is defined static so that all instances of "error" share the same error codes and messages. Do you know how I can achieve this or something similar? Thank you very much.

    Read the article

  • Subterranean IL: Volatile

    - by Simon Cooper
    This time, we'll be having a look at the volatile. prefix instruction, and one of the differences between volatile in IL and C#. The volatile. prefix volatile is a tricky one, as there are varying levels of documentation on it. From what I can see, it has two effects: It prevents caching of the load or store value; rather than reading or writing to a cached version of the memory location (say, the processor register or cache), it forces the value to be loaded or stored at the 'actual' memory location, so it is then immediately visible to other threads. It forces a memory barrier at the prefixed instruction. This ensures instructions don't get re-ordered around the volatile instruction. This is slightly more complicated than it first seems, and only seems to matter on certain architectures. For more details, Joe Duffy has a blog post going into the details. For this post, I'll be concentrating on the first aspect of volatile. Caching field accesses To demonstrate this, I created a simple multithreaded IL program. It boils down to the following code: .class public Holder { .field public static class Holder holder .field public bool stop .method public static specialname void .cctor() { newobj instance void Holder::.ctor() stsfld class Holder Holder::holder ret } } .method private static void Main() { .entrypoint // Thread t = new Thread(new ThreadStart(DoWork)) // t.Start() // Thread.Sleep(2000) // Console.WriteLine("Stopping thread...") ldsfld class Holder Holder::holder ldc.i4.1 stfld bool Holder::stop call instance void [mscorlib]System.Threading.Thread::Join() ret } .method private static void DoWork() { ldsfld class Holder Holder::holder // while (!Holder.holder.stop) {} DoWork: dup ldfld bool Holder::stop brfalse DoWork pop ret } If you compile and run this code, you'll find that the call to Thread.Join() never returns - the DoWork spinlock is reading a cached version of Holder.stop, which is never being updated with the new value set by the Main method. Adding volatile to the ldfld fixes this: dup volatile. ldfld bool Holder::stop brfalse DoWork The volatile ldfld forces the field access to read direct from heap memory, which is then updated by the main thread, rather than using a cached copy. volatile in C# This highlights one of the differences between IL and C#. In IL, volatile only applies to the prefixed instruction, whereas in C#, volatile is specified on a field to indicate that all accesses to that field should be volatile (interestingly, there's no mention of the 'no caching' aspect of volatile in the C# spec; it only focuses on the memory barrier aspect). Furthermore, this information needs to be stored within the assembly somehow, as such a field might be accessed directly from outside the assembly, but there's no concept of a 'volatile field' in IL! How this information is stored with the field will be the subject of my next post.
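    For comparison with the IL above, the C# version of the fix puts volatile on the field declaration itself, so the compiler emits the volatile. prefix for every access to that field. This is a minimal sketch (my own illustration, not code from the post) mirroring the structure of the IL program:

    using System.Threading;

    class Holder
    {
        public static Holder holder = new Holder();
        public volatile bool stop;   // every read and write of 'stop' is treated as volatile
    }

    class Program
    {
        static void Main()
        {
            var t = new Thread(() =>
            {
                // re-reads the field from memory on each iteration instead of using a cached copy
                while (!Holder.holder.stop) { }
            });
            t.Start();

            Thread.Sleep(2000);
            System.Console.WriteLine("Stopping thread...");
            Holder.holder.stop = true;   // the spinning thread observes this and exits
            t.Join();
        }
    }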

    Read the article

  • Creating collection with no code (almost)

    - by Sean Feldman
    When doing testing, I tend to create an object mother for the items generated multiple times for specifications. Quite often these objects need to be a part of a collection. A neat way to do so is to leverage the .NET params mechanism:

    public static IEnumerable<T> CreateCollection<T>(params T[] items)
    {
        return items;
    }

    And the usage is the following:

    private static IEnumerable<IPAddress> addresses =
        CreateCollection(new IPAddress(123456789), new IPAddress(987654321));
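
    A small variation, sketched here purely as an illustration (CreateList is my own name, not from the post; it assumes System.Collections.Generic is imported): the same params trick can hand back a concrete List<T> when the test needs to mutate the collection afterwards.

    public static List<T> CreateList<T>(params T[] items)
    {
        // The params array is copied into a list the caller can add to or remove from.
        return new List<T>(items);
    }

    // Usage: var ports = CreateList(80, 443, 8080); ports.Add(8443);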

    Read the article

  • Minimizing Dependencies For GUIs

    - by tuba09
    I've been working on a project, and have been charged with designing the project's GUI front-end. I'm coding in Java and using the Swing toolkit. Usability-wise, the GUI front-end follows all of Nielsen's heuristics. Users can easily get to where they want to go through the click of a button / JComboBox. Essentially, in Swing terms, what happens is their actions drive the creation/deletion of custom panels. The GUI is coming along fine for the most part. However, I have to admit to being utterly dismayed at the tight web of dependencies my code is being smothered in. The main problem that I've encountered, and that I haven't been able to fix as of yet, is how to keep a reference to the panels/buttons being changed. I'll give an example:

    - Say there's a button A
    - Say there's a panel B displaying picture C
    - Say there's another picture D (not currently being displayed by panel B)
    - When the user clicks A, panel B should remove picture C and display picture D

    My question is, what's the best way of keeping track of panel B? Since I need a global point of access to panel B, my solution has so far been to just shoehorn it into a static variable and access it through a series of static getters and setters. And this static variable is usually stored in the reference's original class, i.e. UserPanel has a static variable that stores a reference to itself. Is there an easy, tried-and-true way of dealing with these kinds of situations? My GUI works fine, but it is not modular and/or robust at all. To add to this, the dreaded 'cyclical dependencies' issue that's shunned by so many programmers is out here in full effect. I'm fairly new to development and just want to make sure that my code will be fairly extensible and won't cause much of a headache to the next person who takes a crack at it. I know there are loads of books out there that probably have a nice elegant solution to this, but unfortunately I just don't have the time to leisure-read right now. I need something that's quick and dirty. Thanks in advance

    Read the article

  • Parenting Opengl with Groups in LibGDX

    - by Rudy_TM
    I am trying to make an object a child of a Group, but this object has a draw method that calls OpenGL to draw to the screen. Its class is this:

    public class OpenGLSquare extends Actor {
        private static final ImmediateModeRenderer renderer = new ImmediateModeRenderer10();
        private static Matrix4 matrix = null;
        private static Vector2 temp = new Vector2();

        public static void setMatrix4(Matrix4 mat) {
            matrix = mat;
        }

        @Override
        public void draw(SpriteBatch batch, float arg1) {
            // TODO Auto-generated method stub
            renderer.begin(matrix, GL10.GL_TRIANGLES);
            renderer.color(color.r, color.g, color.b, color.a);
            renderer.vertex(x0, y0, 0f);
            renderer.color(color.r, color.g, color.b, color.a);
            renderer.vertex(x0, y1, 0f);
            renderer.color(color.r, color.g, color.b, color.a);
            renderer.vertex(x1, y1, 0f);
            renderer.color(color.r, color.g, color.b, color.a);
            renderer.vertex(x1, y1, 0f);
            renderer.color(color.r, color.g, color.b, color.a);
            renderer.vertex(x1, y0, 0f);
            renderer.color(color.r, color.g, color.b, color.a);
            renderer.vertex(x0, y0, 0f);
            renderer.end();
        }
    }

    In my screen class I have this (I call it in the constructor):

    MyGroupClass spriteLab = new MyGroupClass(spriteSheetLab);
    OpenGLSquare square = new OpenGLSquare();
    square.setX0(100);
    square.setY0(200);
    square.setX1(400);
    square.setY1(280);
    square.color.set(Color.BLUE);
    square.setSize();
    //spriteLab.addActorAt(0, clock);
    spriteLab.addActor(square);
    stage.addActor(spriteLab);

    And in the screen's render method I have:

    @Override
    public void render(float arg0) {
        this.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        stage.draw();
        stage.act(Gdx.graphics.getDeltaTime());
    }

    The problem is that when I use OpenGL with a parent, it resets all the other children to position 0,0, and the OpenGL renderer paints the square at its exact screen position rather than relative to the parent. I tried using batch.enableBlending() and batch.disableBlending(); that fixes the position problem of the other children, but not the relative position of the OpenGL drawing, and it also applies alpha to the GL drawing. What am I doing wrong?

    Read the article

  • Problem Loading a DLL (4 replies)

    I'm having some issues and I can't see what I'm doing wrong. I'm P/Invoking a non-Windows-API DLL (I mean, not a DLL provided with Windows). P/Invoking LoadLibrary, I can get an IntPtr to any Windows API DLL, but I can never get a pointer to my own DLL. The code I'm using is very simple, like this one: [DllImport("kernel32.dll", SetLastError = true)] public static extern IntPtr LoadLibrary(); static void ...
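
    For reference, a minimal sketch of a complete declaration and call (the path and error handling are illustrative, not from the thread): LoadLibrary takes the DLL's file name, and a non-system DLL usually needs a full path or must sit next to the executable, otherwise IntPtr.Zero comes back.

    using System;
    using System.Runtime.InteropServices;

    static class Native
    {
        // LoadLibrary takes the file name (or full path) of the DLL to load.
        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
        public static extern IntPtr LoadLibrary(string lpFileName);
    }

    class Demo
    {
        static void Main()
        {
            // Hypothetical path to the custom DLL; adjust for your own file.
            IntPtr handle = Native.LoadLibrary(@"C:\MyLibs\MyCustom.dll");
            if (handle == IntPtr.Zero)
            {
                // The Win32 error code explains why the load failed (2 = file not found, etc.).
                Console.WriteLine("LoadLibrary failed, error " + Marshal.GetLastWin32Error());
            }
        }
    }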

    Read the article

  • Multi subnet in ubuntu with dnsmasq

    - by Fox Mulder
    I have a multi-LAN-port box that runs Ubuntu Server 11.10. I set up the network in the /etc/network/interfaces file as follows:

    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet static
        address 192.168.128.254
        netmask 255.255.255.0
        network 192.168.128.0
        broadcast 192.168.128.255
        gateway 192.168.128.1
        dns-nameservers xxxxxx

    auto eth1
    iface eth1 inet static
        address 192.168.11.1
        netmask 255.255.255.0
        network 192.168.11.0
        broadcast 192.168.11.255

    auto eth2
    iface eth2 inet static
        address 192.168.21.1
        netmask 255.255.255.0
        network 192.168.21.0
        broadcast 192.168.21.255

    auto eth3
    iface eth3 inet static
        address 192.168.31.1
        netmask 255.255.255.0
        network 192.168.31.0
        broadcast 192.168.31.255

    I also enabled IP forwarding with echo 1 > /proc/sys/net/ipv4/ip_forward in rc.local. My dnsmasq config is as follows:

    except-interface=eth0
    dhcp-range=interface:eth1,set:wifi,192.168.11.101,192.168.11.200,255.255.255.0
    dhcp-range=interface:eth2,set:kids,192.168.21.101,192.168.21.200,255.255.255.0
    dhcp-range=interface:eth3,set:game,192.168.31.101,192.168.31.200,255.255.255.0

    DHCP is working fine on eth1, eth2 and eth3; any machine plugged into a subnet gets the correct subnet's IP. My problem is that machines on different subnets can't ping each other. For example: 192.168.11.101 can't ping 192.168.21.101 but can ping 192.168.128.1; 192.168.31.101 can't ping 192.168.21.101 but can ping 192.168.128.1. I also tried route add -net 192.168.11.0 netmask 255.255.255.0 gw 192.168.11.1 (and likewise for 192.168.21.0 and 192.168.31.0) on this multi-LAN-port machine, but it still doesn't work. Can anyone help? Thanks.

    Read the article

  • Helper methods StartOfMonth and StartOfNextMonth

    - by Michael Freidgeim
    There are a couple of methods recently added to my DateTimeHelper class:

    public static DateTime StartOfMonth(this DateTime dateValue)
    {
        return new DateTime(dateValue.Year, dateValue.Month, 1, 0, 0, 0);
    }

    public static DateTime StartOfNextMonth(this DateTime dateValue)
    {
        return StartOfMonth(dateValue).AddMonths(1);
    }
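
    A natural companion, sketched here purely as an illustration (EndOfMonth and IsInSameMonth are not part of the original helper class), built from the two methods above:

    public static DateTime EndOfMonth(this DateTime dateValue)
    {
        // One tick before the start of the next month.
        return StartOfNextMonth(dateValue).AddTicks(-1);
    }

    public static bool IsInSameMonth(this DateTime dateValue, DateTime other)
    {
        // True when dateValue falls anywhere inside other's calendar month.
        return dateValue >= StartOfMonth(other) && dateValue < StartOfNextMonth(other);
    }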

    Read the article

  • Where to buy a domain for my local server [closed]

    - by Pradyut Bhattacharya
    I have made a website and hosted it on my local computer using a static IP. Where can I buy a domain name such as www.something.com so that it can point to my static IP? That way, a page like http://localhost/index.jsp could be accessed at http://www.something.com/index.jsp. Does it matter if I run the server locally, or should I buy managed web hosting from a big company even if the traffic on my site is low?

    Read the article

  • Extension methods on a null object instance – something you did not know

    - by nmarun
    Extension methods gave developers a lot of bandwidth to do interesting (read 'cool') things. But there are a couple of things that we need to be aware of while using these extension methods. I have a StringUtil class that defines two extension methods:

    public static class StringUtils
    {
        public static string Left(this string arg, int leftCharCount)
        {
            if (arg == null)
            {
                throw new ArgumentNullException("arg");
            }
            return arg.Substring(0, leftCharCount);
        ...(read more)
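
    The point the title hints at, shown with a small sketch of my own (reusing the StringUtils class above): because an extension method is just a static method call, invoking it on a null reference does not throw a NullReferenceException at the call site — execution reaches the method body, which is exactly why the explicit null check there matters.

    string title = null;

    // Compiles and runs: this is really StringUtils.Left(title, 3) under the hood,
    // so the ArgumentNullException from inside Left() is thrown,
    // not a NullReferenceException at the call site.
    string left = title.Left(3);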

    Read the article

  • Entity Framework 5, separating business logic from model - Repository?

    - by bnice7
    I am working on my first public-facing web application and I’m using MVC 4 for the presentation layer and EF 5 for the DAL. The database structure is locked, and there are moderate differences between how the user inputs data and how the database itself gets populated. I have done a ton of reading on the repository pattern (which I have never used) but most of my research is pushing me away from using it since it supposedly creates an unnecessary level of abstraction for the latest versions of EF since repositories and unit-of-work are already built-in. My initial approach is to simply create a separate set of classes for my business objects in the BLL that can act as an intermediary between my Controllers and the DAL. Here’s an example class: public class MyBuilding { public int Id { get; private set; } public string Name { get; set; } public string Notes { get; set; } private readonly Entities _context = new Entities(); // Is this thread safe? private static readonly int UserId = WebSecurity.GetCurrentUser().UserId; public IEnumerable<MyBuilding> GetList() { IEnumerable<MyBuilding> buildingList = from p in _context.BuildingInfo where p.Building.UserProfile.UserId == UserId select new MyBuilding {Id = p.BuildingId, Name = p.BuildingName, Notes = p.Building.Notes}; return buildingList; } public void Create() { var b = new Building {UserId = UserId, Notes = this.Notes}; _context.Building.Add(b); _context.SaveChanges(); // Set the building ID this.Id = b.BuildingId; // Seed 1-to-1 tables with reference the new building _context.BuildingInfo.Add(new BuildingInfo {Building = b}); _context.GeneralInfo.Add(new GeneralInfo {Building = b}); _context.LocationInfo.Add(new LocationInfo {Building = b}); _context.SaveChanges(); } public static MyBuilding Find(int id) { using (var context = new Entities()) // Is this OK to do in a static method? { var b = context.Building.FirstOrDefault(p => p.BuildingId == id && p.UserId == UserId); if (b == null) throw new Exception("Error: Building not found or user does not have access."); return new MyBuilding {Id = b.BuildingId, Name = b.BuildingInfo.BuildingName, Notes = b.Notes}; } } } My primary concern: Is the way I am instantiating my DbContext as a private property thread-safe, and is it safe to have a static method that instantiates a separate DbContext? Or am I approaching this all wrong? I am not opposed to learning up on the repository pattern if I am taking the total wrong approach here.
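
    As a point of comparison only — a sketch, not a recommendation from the post: since DbContext is not thread-safe but is cheap to create, one common alternative to holding it in an instance field is to create and dispose a context per operation, which also sidesteps the static-method concern flagged in the code above.

    public IEnumerable<MyBuilding> GetList()
    {
        // A fresh context per call: nothing is shared across threads or requests.
        using (var context = new Entities())
        {
            return (from p in context.BuildingInfo
                    where p.Building.UserProfile.UserId == UserId
                    select new MyBuilding
                    {
                        Id = p.BuildingId,
                        Name = p.BuildingName,
                        Notes = p.Building.Notes
                    }).ToList(); // materialize before the context is disposed
        }
    }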

    Read the article

  • Checking timeouts made more readable

    - by Markus
    I have several situations where I need to control timeouts in a technical application, either in a loop or as a simple check. Of course, handling this is really easy, but none of it looks cute. To clarify, here is some C# (pseudo) code:

    private DateTime girlWentIntoBathroom;
    girlWentIntoBathroom = DateTime.Now;

    do
    {
        // do something
    } while (girlWentIntoBathroom.AddSeconds(10) > DateTime.Now);

    or

    if (girlWentIntoBathroom.AddSeconds(10) > DateTime.Now)
        MessageBox.Show("Wait a little longer");
    else
        MessageBox.Show("Knock louder");

    Now I was inspired by something I saw in Ruby on StackOverflow, and I'm wondering if this construct can be made more readable using extension methods. My goal is something that can be read like "If girlWentIntoBathroom is more than 10 seconds ago".

    1st attempt

    if (girlWentIntoBathroom > (10).Seconds().Ago())
        MessageBox.Show("Wait a little longer");
    else
        MessageBox.Show("Knock louder");

    So I wrote an extension for integer that converts the integer into a TimeSpan:

    public static TimeSpan Seconds(this int amount)
    {
        return new TimeSpan(0, 0, amount);
    }

    After that, I wrote an extension for TimeSpan like this:

    public static DateTime Ago(this TimeSpan diff)
    {
        return DateTime.Now.Add(-diff);
    }

    This works fine so far, but has a big disadvantage: the logic is inverted! Since girlWentIntoBathroom is a timestamp in the past, the right side of the equation would need to count backwards, which is impossible. Just inverting the equation is no solution, because it inverts the read sentence as well.

    2nd attempt

    So I tried something new:

    if (girlWentIntoBathroom.IsMoreThan(10).SecondsAgo())
        MessageBox.Show("Knock louder");
    else
        MessageBox.Show("Wait a little longer");

    IsMoreThan() needs to transport the past timestamp as well as the span to the extension SecondsAgo(). It could be:

    public static DateWithIntegerSpan IsMoreThan(this DateTime baseTime, int span)
    {
        return new DateWithIntegerSpan() { Date = baseTime, Span = span };
    }

    where DateWithIntegerSpan is simply:

    public class DateWithIntegerSpan
    {
        public DateTime Date { get; set; }
        public int Span { get; set; }
    }

    And SecondsAgo() is:

    public static bool SecondsAgo(this DateWithIntegerSpan dateAndSpan)
    {
        return dateAndSpan.Date.Add(new TimeSpan(0, 0, dateAndSpan.Span)) < DateTime.Now;
    }

    Using this approach, the English sentence matches the expected behavior. But the disadvantage is that I need a helper class (DateWithIntegerSpan). Does anyone have an idea to make checking timeouts look more cute and closer to a readable sentence? Am I a little too insane, thinking about something minor like this?
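
    One more variation, offered only as a sketch (not from the post): folding the span conversion and the comparison into a single extension keeps the sentence readable without the helper class, reusing the Seconds() extension defined above.

    public static bool IsOlderThan(this DateTime timestamp, TimeSpan age)
    {
        // True once the timestamp lies further in the past than the given age.
        return DateTime.Now - timestamp > age;
    }

    // Reads almost like the target sentence:
    // if (girlWentIntoBathroom.IsOlderThan(10.Seconds())) MessageBox.Show("Knock louder");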

    Read the article

  • C# Drawing On Separate Thread [migrated]

    - by Zaid
    I have a "public static" class called "DrawTest" and inside is a method: public static void DrawRandomRectangle(Bitmap g) { } I want to call that method, draw bunches of stuff, and then update the PictureBox that uses the image, all on a separate thread. To simplify: I'm not trying to make anything specific, I'm just trying to learn how to draw and update an image inside of a PictureBox on a separate thread.
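
    A minimal sketch of one way to do this (WinForms assumed; DrawTest and DrawRandomRectangle mirror the names in the question, the rest is illustrative): draw onto the Bitmap on a worker thread, then marshal the PictureBox update back to the UI thread with Invoke.

    using System;
    using System.Drawing;
    using System.Threading;
    using System.Windows.Forms;

    public static class DrawTest
    {
        private static readonly Random Rng = new Random();

        public static void DrawRandomRectangle(Bitmap bmp)
        {
            // Draw directly onto the bitmap; this does not touch any UI control.
            using (Graphics g = Graphics.FromImage(bmp))
            {
                g.FillRectangle(Brushes.CornflowerBlue,
                    Rng.Next(bmp.Width), Rng.Next(bmp.Height), 40, 30);
            }
        }

        public static void DrawAndShowAsync(PictureBox box, Bitmap bmp)
        {
            // Note: don't draw on a bitmap the PictureBox is already displaying;
            // GDI+ bitmaps are not safe for concurrent access.
            new Thread(() =>
            {
                DrawRandomRectangle(bmp);              // heavy drawing happens off the UI thread
                box.Invoke(new Action(() =>
                {
                    box.Image = bmp;                   // control updates must run on the UI thread
                }));
            }) { IsBackground = true }.Start();
        }
    }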

    Read the article

  • Using extension methods to decrease the surface area of a C# interface

    - by brian_ritchie
    An interface defines a contract to be implemented by one or more classes.  One of the keys to a well-designed interface is defining a very specific range of functionality. The profile of the interface should be limited to a single purpose & should have the minimum methods required to implement this functionality.  Keeping the interface tight will keep those implementing the interface from getting lazy & not implementing it properly.  I've seen too many overly broad interfaces that aren't fully implemented by developers.  Instead, they just throw a NotImplementedException for the methods they didn't implement. One way to help with this issue is by using extension methods to move overloaded method definitions outside of the interface. Consider the following example:

    public interface IFileTransfer
    {
        void SendFile(Stream stream, Uri destination);
    }

    public static class IFileTransferExtension
    {
        public static void SendFile(this IFileTransfer transfer,
            string Filename, Uri destination)
        {
            using (var fs = File.OpenRead(Filename))
            {
                transfer.SendFile(fs, destination);
            }
        }
    }

    public static class TestIFileTransfer
    {
        static void Main()
        {
            IFileTransfer transfer = new FTPFileTransfer("user", "pass");
            transfer.SendFile(filename, new Uri("ftp://ftp.test.com"));
        }
    }

    In this example, you may have a number of overloads that use different mechanisms for specifying the source file. The great part is, you don't need to implement these methods on each of your derived classes.  This gives you a better interface and better code reuse.

    Read the article

  • Can't ping devices by IP address for devices allocated IPs by DHCP

    - by GiddyUpHorsey
    I have a home network with a Trendnet wireless router and a Windows Domain. The Domain Controller/DNS server is a Windows 2000 Server and is configured to forward queries to DNS servers provided by the ISP. The router provides DHCP and is configured with the Windows 2000 Server as the DNS server. The network has been set up for a couple of years and usually works fine. When I connect iPhones to the network over WiFi, the router can ping the iPhones through its browser-based admin interface, but Windows machines that are part of the Windows Domain cannot. A laptop that wasn't joined to the domain was connected to the network over WiFi, and it could see the iPhones. The router UI shows that the laptop has a reserved IP allocated via DHCP. All machines have either a static or a DHCP-allocated IP on the 192.168.0.* subnet:

    Router - 192.168.0.1 - Static - Wired
    Windows Domain Controller - 192.168.0.8 - Static - Virtual
    Windows 7 Workstation - 192.168.0.200 - DHCP Auto - Wired
    VMWare ESXi Host - 192.168.0.201 - Static? - Wired
    iPhone 1 - 192.168.0.202 - DHCP Auto - WiFi
    iPhone 2 - 192.168.0.203 - DHCP Auto - WiFi
    Windows Vista Laptop - 192.168.0.204 - DHCP Reserved - WiFi

    Using the Windows 7 machine (200), I try to ping each machine and the only DHCP machine that responds is itself. The other DHCP machines fail with "Reply from 192.168.0.200: Destination host unreachable." Using nslookup fails with "*** domain.controller.name can't find 192.168.0.203: Non-existent domain". Using the Windows 2000 Domain Controller (8), I try to ping each machine and the only DHCP machine that responds is the Windows 7 machine (200). Pinging the other DHCP machines fails with "Request timed out." Using nslookup also fails with "*** domain.controller.name can't find 192.168.0.203: Non-existent domain". Using iPhone 2 (203), I try to ping (with Network Ping Lite) the machines with static IP addresses and that works fine. When I try to ping the Windows 7 machine (200) I am unable to get a response. How do I configure the DNS server/Windows Domain/Router properly so that the Windows Domain machines can see the IPs allocated via DHCP?

    Read the article

  • nginx rewrite for /blah/(.*) /$1

    - by skrewler
    I'm migrating from mod_php to nginx. I got everything working except for this rewrite. I'm just not familiar enough with nginx configuration to know the correct way to do this. I came up with this by looking at a sample on the nginx site.

    server {
        server_name test01.www.myhost.com;
        root /home/vhosts/my_home/blah;
        access_log /var/log/nginx/blah.access.log;
        error_log /var/log/nginx/blah.error.log;
        index index.php;

        location / {
            try_files $uri $uri/ @rewrites;
        }

        location @rewrites {
            rewrite ^ /index.php last;
            rewrite ^/ht/userGreeting.php /js/iFrame/index.php last;
            rewrite ^/ht/(.*)$ /$1 last;
            rewrite ^/userGreeting.php$ /js/iFrame/index.php last;
            rewrite ^/a$ /adminLogin.php last;
            rewrite ^/boom\/(.*)$ /boom/index.php?q=$1 last;
            rewrite ^favicon.ico$ favico_ry.ico last;
        }

        # This block will catch static file requests, such as images, css, js
        # The ?: prefix is a 'non-capturing' mark, meaning we do not require
        # the pattern to be captured into $1 which should help improve performance
        location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
            # Some basic cache-control for static files to be sent to the browser
            expires max;
            add_header Pragma public;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

        include php.conf;
    }

    The issue I'm having is with this rewrite:

        rewrite ^ht\/(.*)$ /$1 last;

    99% of requests that will hit this rewrite are static files. So I think maybe it's getting sent to the static files section and that's where things are being messed up? I tried adding this but it didn't work:

        location ~* ^ht\/.*\.(?:ico|css|js|gif|jpe?g|png)$ {
            # Some basic cache-control for static files to be sent to the browser
            expires max;
            add_header Pragma public;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

    Any help would be appreciated. I know the best thing to do would be to just change the references of /ht/whatever.jpg to /whatever.jpg in the code.. but that's not an option for now.

    Read the article

< Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >