Search Results

Search found 36013 results on 1441 pages for 'public fields'.


  • Passing text message to web page from web user control

    - by Narendra Tiwari
    Here is a brief summary of how we can send a text message to a web page from a web user control. Delegates are the solution. There are many good articles on .NET delegates; you can refer to some of them below. The scenario: we want to send a text message to the page on completion of some activity on the web control. 1/ Create a base class for the web control (refer to the code below), assuming we are passing some text messages to the page from the web user control  - Declare a delegate  - Declare an event of the delegate type using System; using System.Data; using System.Configuration; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; //Declaring delegate with message parameter public delegate void SendMessageToThePageHandler(string messageToThePage); public class ControlBase : System.Web.UI.UserControl { //Event of the delegate type, hooked by the hosting page public event SendMessageToThePageHandler sendMessageToThePage; public ControlBase() { // TODO: Add constructor logic here } protected override void OnInit(EventArgs e) { base.OnInit(e); } private string strMessageToPass; /// <summary> /// MessageToPass - Property to pass text message to page /// </summary> public string MessageToPass { get { return strMessageToPass; } set { strMessageToPass = value; } } /// <summary> /// SendMessageToPage - Called from control to invoke the event /// </summary> /// <param name="strMessage">Message to pass</param> public void SendMessageToPage(string strMessage) {   if (this.sendMessageToThePage != null)       this.sendMessageToThePage(strMessage); } } 2/ Register the events on the web page in its Page_Load event: this.AddControlEventHandler((ControlBase)WebUserControl1); this.AddControlEventHandler((ControlBase)WebUserControl2); /// <summary> /// AddControlEventHandler - Hooking web user control event /// </summary> /// <param name="ctrl"></param> private void AddControlEventHandler(ControlBase ctrl) { ctrl.sendMessageToThePage += delegate(string strMessage) {   //display message   lblMessage.Text = strMessage; }; } References: http://www.akadia.com/services/dotnet_delegates_and_events.html     3/
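
    To see the pattern in isolation, here is a minimal sketch of the same delegate/event wiring (my own consolidation, not code from the post; the StatusControl1 field and lblMessage label are assumed designer-generated members of the page):

```csharp
using System;

// Delegate describing the message callback
public delegate void SendMessageToThePageHandler(string messageToThePage);

// Base class for user controls that need to notify their hosting page
public class ControlBase : System.Web.UI.UserControl
{
    // Event the hosting page subscribes to
    public event SendMessageToThePageHandler sendMessageToThePage;

    // Derived controls call this to raise the event
    public void SendMessageToPage(string message)
    {
        if (sendMessageToThePage != null)
            sendMessageToThePage(message);
    }
}

// Page code-behind: subscribe once and display whatever the control sends
public partial class HostPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        StatusControl1.sendMessageToThePage += delegate(string msg)
        {
            lblMessage.Text = msg;
        };
    }
}
```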

    Read the article

  • MVC data binding

    - by user441521
    I'm using MVC but I've read that MVVM is sort of about data binding and having pure markup in your views that data bind back to the backend via the data-* attributes. I've looked at knockout but it looks pretty low level and I feel like I can make a library that does this and is much easier to use where basically you only need to call 1 javascript function that will data bind your entire page because of the data-* attributes you assign to html elements. The benefits of this (that I see) is that your view is 100% decoupled from your back-end so that a given view never has to be changed if your back-end changes (ie for asp.net people no more razor in your view that makes your view specific to MS). My question would be, I know there is knockout out there but are there any others that provide this data binding functionality for MVC type applications? I don't want to recreate something that may already exist but I want to make something "better" and easier to use than knockout. To give an example of what I mean here is all the code one would need to get data binding in my library. This isn't final but just showing the idea that all you have to do is call 1 javascript function and set some data-* attribute values and everything ties together. Is this worth seeing through? <script> $(function () { // this is all you have to call to make databinding for POST or GET to work DataBind(); }); </script> <form id="addCustomer" data-bind="Customer" data-controller="Home" data-action="CreateCustomer"> Name: <input type="text" data-bind="Name" data-bind-type="text" /> Birthday: <input type="text" data-bind="Birthday" data-bind-type="text" /> Address: <input type="text" data-bind="Address" data-bind-type="text" /> <input type="submit" value="Save" id="btnSave" /> </form> ================================================= // controller action [HttpPost] public string CreateCustomer(Customer customer) { if(customer.Name == "Rick") return "success"; return "failure"; } // model public class Customer { public string Name { get; set; } public DateTime Birthday { get; set; } public string Address { get; set; } }

    Read the article

  • Voice echo in UDP based voice transmission [closed]

    - by Meherzad
    I have coded a Java application for voice transmission between two IPs on a LAN. Here is the code. public static Boolean flag= true; public static Boolean recFlag=true; DatagramSocket UDPSocket=null; AudioFormat format = null; TargetDataLine microphone=null; byte[] buffer=null; DatagramPacket UDPPacket=null; public void startChat(String ipAddress){ try{ buffer = new byte[1000]; UDPSocket=new DatagramSocket(1987); Thread th=new Thread(new Listener()); th.start(); microphone = AudioSystem.getTargetDataLine(format); format= new AudioFormat(8000.0f, 16, 1, true, true); UDPPacket = new DatagramPacket(buffer, buffer.length, InetAddress.getByName(ipAddress), 1988); microphone.open(format); microphone.start(); while (flag) { microphone.read(buffer, 0, buffer.length); UDPSocket.send(UDPPacket); } } catch(Exception e){ System.out.println(" ssss "+e.getMessage()); } } public class Listener extends Thread{ byte[] buff=new byte[1000]; DatagramSocket UDPSocket1=null; DatagramPacket recPacket=null; DataLine.Info info = new DataLine.Info(SourceDataLine.class, format); SourceDataLine line=null; @Override public void run(){ try{ UDPSocket1=new DatagramSocket(1988); format= new AudioFormat(8000.0f, 16, 1, true, true); line = (SourceDataLine) AudioSystem.getLine(info); line.open(format); line.start(); } catch(Exception e){ System.out.println("list "+ e.getMessage()); } recPacket=new DatagramPacket(buff, buff.length); while(recFlag){ try{ UDPSocket1.receive(recPacket); buff = (byte[])recPacket.getData(); line.write(buff, 0, buff.length); } catch(Exception e){ System.out.println("errr "+e.getMessage()); } } line.drain(); line.close(); } } The main problem I am facing is that I am getting only an echo of my own voice. I am unable to hear the voice from the other end; all I am hearing is my own voice. Please suggest a solution.

    Read the article

  • Java Slick2d Animation not working

    - by user3558075
    Hello everyone, I am trying to make a simple 2D game using Java and the Slick2D library. This is my first time doing it and I need some help. Right now I am trying to make the animations for the character, so that when you go right he turns right and when you go left he turns left... but I keep getting an error when I'm drawing the character. I've tried re-downloading Slick but that didn't work. When I get rid of the player.draw(x,y); line of code it doesn't crash, but the character isn't there. Here's my code, can anyone help? package enteties; import input.Keyinput; import org.newdawn.slick.Animation; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Graphics; import org.newdawn.slick.Image; import org.newdawn.slick.Input; import org.newdawn.slick.SlickException; import org.newdawn.slick.state.BasicGameState; import org.newdawn.slick.state.StateBasedGame; import playerinfo.Playerinfo; public class Player extends BasicGameState{ Playerinfo pi = new Playerinfo(); Keyinput ki = new Keyinput(); Animation player,up,down,left,right; public void init(GameContainer gc, StateBasedGame sbg) throws SlickException { Image[] goingUp = {new Image("res/buckysBack.png") , new Image("res/charBack.png")}; Image[] goingDown = {new Image("res/buckysFront.png") , new Image("res/charFront.png")}; Image[] goingLeft = {new Image("res/buckysLeft.png") , new Image("res/charLeft.png")}; Image[] goingRight = {new Image("res/buckysRight.png") , new Image("res/charRight.png")}; int[] duration = {200,200}; Animation up = new Animation(goingUp,duration,false); Animation down = new Animation(goingDown,duration,false); Animation left = new Animation(goingLeft,duration,false); Animation right = new Animation(goingRight,duration,false); player = up; } public void render(GameContainer gc, StateBasedGame sbg, Graphics g) throws SlickException { //error happens here, when I remove this line it doesn't crash player.draw(720,450); } public void update(GameContainer gc, StateBasedGame sbg, int delta) throws SlickException { } public int getID() { return 0; } }

    Read the article

  • Tile-based 2D collision detection problems

    - by Vee
    I'm trying to follow this tutorial http://www.tonypa.pri.ee/tbw/tut05.html to implement real-time collisions in a tile-based world. I find the center coordinates of my entities thanks to these properties: public float CenterX { get { return X + Width / 2f; } set { X = value - Width / 2f; } } public float CenterY { get { return Y + Height / 2f; } set { Y = value - Height / 2f; } } Then in my update method, in the player class, which is called every frame, I have this code: public override void Update() { base.Update(); int downY = (int)Math.Floor((CenterY + Height / 2f - 1) / 16f); int upY = (int)Math.Floor((CenterY - Height / 2f) / 16f); int leftX = (int)Math.Floor((CenterX + Speed * NextX - Width / 2f) / 16f); int rightX = (int)Math.Floor((CenterX + Speed * NextX + Width / 2f - 1) / 16f); bool upleft = Game.CurrentMap[leftX, upY] != 1; bool downleft = Game.CurrentMap[leftX, downY] != 1; bool upright = Game.CurrentMap[rightX, upY] != 1; bool downright = Game.CurrentMap[rightX, downY] != 1; if(NextX == 1) { if (upright && downright) CenterX += Speed; else CenterX = (Game.GetCellX(CenterX) + 1)*16 - Width / 2f; } } downY, upY, leftX and rightX should respectively find the lowest Y position, the highest Y position, the leftmost X position and the rightmost X position. I add + Speed * NextX because in the tutorial the getMyCorners function is called with these parameters: getMyCorners (ob.x+ob.speed*dirx, ob.y, ob); The GetCellX and GetCellY methods: public int GetCellX(float mX) { return (int)Math.Floor(mX / SGame.Camera.TileSize); } public int GetCellY(float mY) { return (int)Math.Floor(mY / SGame.Camera.TileSize); } The problem is that the player "flickers" while hitting a wall, and the corner detection doesn't even work correctly since it can overlap walls that only hit one of the corners. I do not understand what is wrong. In the tutorial the ob.x and ob.y fields should be the same as my CenterX and CenterY properties, and the ob.width and ob.height should be the same as Width / 2f and Height / 2f. However it still doesn't work. Thanks for your help.
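
    For reference, here is a compact sketch (not the poster's code) of the same corner-to-cell calculation expressed through the GetCellX/GetCellY helpers quoted above; the upY/downY/leftX/rightX and upleft/downleft/upright/downright fields are assumed to exist as in the snippet:

```csharp
// Compute the four corner cells of the entity's bounding box and test them
// against the map; mirrors the tutorial's getMyCorners(x, y, ob) routine.
private void GetMyCorners(float x, float y)
{
    float halfW = Width / 2f;
    float halfH = Height / 2f;

    upY    = Game.GetCellY(y - halfH);        // top edge
    downY  = Game.GetCellY(y + halfH - 1);    // bottom edge (last pixel inside the box)
    leftX  = Game.GetCellX(x - halfW);        // left edge
    rightX = Game.GetCellX(x + halfW - 1);    // right edge (last pixel inside the box)

    upleft    = Game.CurrentMap[leftX,  upY]   != 1;   // true when the tile is walkable
    downleft  = Game.CurrentMap[leftX,  downY] != 1;
    upright   = Game.CurrentMap[rightX, upY]   != 1;
    downright = Game.CurrentMap[rightX, downY] != 1;
}
```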

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is store. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.Hard Link Issues in a Component LibraryAnother reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to require a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change.  So hard linking the DLL can be problematic for a number reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.Enter Dynamic LoadingSo rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future.But there are also a couple of downsides:No assembly reference means only dynamic access - no compiler type checking or IntellisenseRequirement for the host application to have reference to JSON.NET or else get runtime errorsThe former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.So there are a few things that are needed to make this work:Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)Load types into dynamic variablesUse Reflection for a few tasks like statics/enumsThe dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is it doesn't handle all aspects of Reflection. 
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
And for any other manual JSON operations (in a couple of apps I use JSON Serialization for 'blob' like document storage) this is also going to be handy.Performance?Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test I created a native class that uses an actual reference to JSON.NET and performance was consistently around 85-90% faster with the referenced code. This will change though depending on the size of objects serialized - the larger the object the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling' as I do here the overhead is small enough to not matter.All that being said though - I serialized 10,000 objects in 80ms vs. 45ms so this is hardly slouchy performance. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server one would be well served to create a hard dependency.Dynamic Loading - Worth it?Dynamic loading is not something you need to worry about but on occasion dynamic loading makes sense. But there's a price to be paid in added code  and a performance hit which depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances dynamic loading can be beneficial to avoid having to ship extra files adding dependencies and loading down distributions. These days when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset… © Rick Strahl, West Wind Technologies, 2005-2013 Posted in .NET  C#
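
To make the round trip concrete, here is a small usage sketch of the utility methods shown above. The SerializeToFile()/DeserializeFromFile() signatures match the article's code, but the ServerSettings class, values, and file name are purely illustrative:

```csharp
using System;

// Hypothetical settings type used only to demonstrate the round trip
public class ServerSettings
{
    public string HostName { get; set; }
    public int Port { get; set; }
}

public static class JsonConfigDemo
{
    public static void Run()
    {
        var settings = new ServerSettings { HostName = "localhost", Port = 8080 };

        // throwExceptions: false, formatJsonOutput: true (indented JSON)
        bool written = JsonSerializationUtils.SerializeToFile(
            settings, "ServerSettings.json", false, true);

        if (!written)
            return; // JSON.NET could not be loaded or the write failed

        // DeserializeFromFile returns object, so cast to the target type
        var restored = JsonSerializationUtils.DeserializeFromFile(
            "ServerSettings.json", typeof(ServerSettings)) as ServerSettings;

        Console.WriteLine(restored != null ? restored.HostName : "load failed");
    }
}
```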

    Read the article

  • Composite-like pattern and SRP violation

    - by jimmy_keen
    Recently I've noticed myself implementing a pattern similar to the one described below. Starting with the interface: public interface IUserProvider { User GetUser(UserData data); } The GetUser method's sole job is to somehow return a user (that would be an operation, speaking in composite terms). There might be many implementations of IUserProvider, which all do the same thing - return a user based on input data. It doesn't really matter, as they are only leaves in composite terms and that's fairly simple. Now, my leaves are used by one "own them all" composite class, which at the moment follows this implementation: public interface IUserProviderComposite : IUserProvider { void RegisterProvider(Predicate<UserData> predicate, IUserProvider provider); } public class UserProviderComposite : IUserProviderComposite { public User GetUser(SomeUserData data) ... public void RegisterProvider(Predicate<UserData> predicate, IUserProvider provider) ... } The idea behind UserProviderComposite is simple. You register providers, and this class acts as a reusable entry point. When calling GetUser, it will use whatever registered provider matches the predicate for the requested user data (if that helps, it stores a key-value map of predicates and providers internally). Now, what confuses me is whether the RegisterProvider method (which brings to mind the composite's add operation) should be a part of that class. It kind of expands its responsibilities from providing a user to also managing the providers collection. As far as my understanding goes, this violates the Single Responsibility Principle... or am I wrong here? I thought about extracting the registration part into a separate entity and injecting it into the composite. As much as it looks decent on paper (in terms of SRP), it feels a bit awkward because: I would essentially be injecting a Dictionary (or other key-value map), or a silly wrapper around it doing nothing more than adding entries; and this won't be following composite anymore (as add won't be part of the composite). What exactly is the presented pattern called? Composite felt natural to compare it with, but I realize it's not exactly the one; however, nothing else rings any bells. Which approach would you take - stick with SRP or stick with the "composite" pattern? Or is the design here flawed, and given the problem, can this be done in a better way?
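
    Below is a rough, purely illustrative sketch of the "extract the registration part into a separate entity" option described above; the registry interface, class names, and list-based storage are assumptions of mine, not code from the question:

```csharp
using System;
using System.Collections.Generic;

// Registration lives in its own abstraction; the composite only resolves and delegates.
public interface IUserProviderRegistry
{
    void Register(Predicate<UserData> predicate, IUserProvider provider);
    IUserProvider Resolve(UserData data);   // first matching provider, or null
}

public class UserProviderRegistry : IUserProviderRegistry
{
    private readonly List<KeyValuePair<Predicate<UserData>, IUserProvider>> _entries =
        new List<KeyValuePair<Predicate<UserData>, IUserProvider>>();

    public void Register(Predicate<UserData> predicate, IUserProvider provider)
    {
        _entries.Add(new KeyValuePair<Predicate<UserData>, IUserProvider>(predicate, provider));
    }

    public IUserProvider Resolve(UserData data)
    {
        foreach (var entry in _entries)
            if (entry.Key(data))
                return entry.Value;
        return null;
    }
}

// The composite keeps a single responsibility: pick a provider and forward the call.
public class UserProviderComposite : IUserProvider
{
    private readonly IUserProviderRegistry _registry;

    public UserProviderComposite(IUserProviderRegistry registry)
    {
        _registry = registry;
    }

    public User GetUser(UserData data)
    {
        var provider = _registry.Resolve(data);
        return provider != null ? provider.GetUser(data) : null;
    }
}
```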

    Read the article

  • Strange rendering in XNA/Monogame

    - by Gerhman
    I am trying to render G-Code generated for a 3D printer as the printed product by reading the file as line segments and then drawing cylinders with the diameter of the filament around each segment. I think I have managed to do this part right because the vertices I am sending to the graphics device appear to have been processed correctly. My problem, I think, lies somewhere in the rendering. What basically happens is that when I start rotating my model in the X or Y axis it renders perfectly for half of the rotation, but then for the other half it has this weird effect where you start seeing through the outer filament into some of the shapes inside. This effect is the strongest with X rotations though. Here is a picture of the part of the rotation that looks correct: And here is one that looks horrible: I am still quite new to XNA/MonoGame and 3D programming as a whole. I have no idea what could possibly be causing this and even less of an idea of what this type of behavior is called. I am guessing this has something to do with rendering, so I have added the code for that part: protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Black); basicEffect.World = world; basicEffect.View = view; basicEffect.Projection = projection; basicEffect.VertexColorEnabled = true; basicEffect.EnableDefaultLighting(); GraphicsDevice.SetVertexBuffer(vertexBuffer); RasterizerState rasterizerState = new RasterizerState(); rasterizerState.CullMode = CullMode.CullClockwiseFace; rasterizerState.ScissorTestEnable = true; GraphicsDevice.RasterizerState = rasterizerState; foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes) { pass.Apply(); GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, vertexBuffer.VertexCount); } base.Draw(gameTime); } I don't know if it could be because I am shading something that does not really have a texture. I am using this custom vertex declaration I found in some tutorial that allows me to store a vertex with a position, color and normal: public struct VertexPositionColorNormal { public Vector3 Position; public Color Color; public Vector3 Normal; public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration ( new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0), new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0), new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0) ); } If any of you have ever seen this type of thing, please help. Also, if you think that the problem might lie somewhere else in my code, then please just request whichever part you would like to see in the comments section.

    Read the article

  • Arrive steering behavior

    - by dbostream
    I bought a book called Programming game AI by example and I am trying to implement the arrive steering behavior. The problem I am having is that my objects oscillate around the target position; after oscillating less and less for awhile they finally come to a stop at the target position. Does anyone have any idea why this oscillating behavior occur? Since the examples accompanying the book are written in C++ I had to rewrite the code into C#. Below is the relevant parts of the steering behavior: private enum Deceleration { Fast = 1, Normal = 2, Slow = 3 } public MovingEntity Entity { get; private set; } public Vector2 SteeringForce { get; private set; } public Vector2 Target { get; set; } public Vector2 Calculate() { SteeringForce.Zero(); SteeringForce = SumForces(); SteeringForce.Truncate(Entity.MaxForce); return SteeringForce; } private Vector2 SumForces() { Vector2 force = new Vector2(); if (Activated(BehaviorTypes.Arrive)) { force += Arrive(Target, Deceleration.Slow); if (!AccumulateForce(force)) return SteeringForce; } return SteeringForce; } private Vector2 Arrive(Vector2 target, Deceleration deceleration) { Vector2 toTarget = target - Entity.Position; double distance = toTarget.Length(); if (distance > 0) { //because Deceleration is enumerated as an int, this value is required //to provide fine tweaking of the deceleration.. double decelerationTweaker = 0.3; double speed = distance / ((double)deceleration * decelerationTweaker); speed = Math.Min(speed, Entity.MaxSpeed); Vector2 desiredVelocity = toTarget * speed / distance; return desiredVelocity - Entity.Velocity; } return new Vector2(); } private bool AccumulateForce(Vector2 forceToAdd) { double magnitudeRemaining = Entity.MaxForce - SteeringForce.Length(); if (magnitudeRemaining <= 0) return false; double magnitudeToAdd = forceToAdd.Length(); if (magnitudeToAdd > magnitudeRemaining) magnitudeToAdd = magnitudeRemaining; SteeringForce += Vector2.NormalizeRet(forceToAdd) * magnitudeToAdd; return true; } This is the update method of my objects: public void Update(double deltaTime) { Vector2 steeringForce = Steering.Calculate(); Vector2 acceleration = steeringForce / Mass; Velocity = Velocity + acceleration * deltaTime; Velocity.Truncate(MaxSpeed); Position = Position + Velocity * deltaTime; } If you want to see the problem with your own eyes you can download a minimal example here. Thanks in advance.

    Read the article

  • Throwing exception from a property when my object state is invalid

    - by Rumi P.
    Microsoft guidelines say: "Avoid throwing exceptions from property getters", and I normally follow that. But my application uses Linq2SQL, and there is the case where my object can be in an invalid state because somebody or something wrote nonsense into the database. Consider this toy example: [Table(Name="Rectangle")] public class Rectangle { [Column(Name="ID", IsPrimaryKey = true, IsDbGenerated = true)] public int ID {get; set;} [Column(Name="firstSide")] public double firstSide {get; set;} [Column(Name="secondSide")] public double secondSide {get; set;} public double sideRatio { get { return firstSide/secondSide; } } } Here, I could write code which ensures that my application never writes a Rectangle with a zero-length side into the database. But no matter how bulletproof I make my own code, somebody could open the database with a different application and create an invalid Rectangle, especially one with a 0 for secondSide. (For this example, please forget that it is possible to design the database in a way such that writing a side length of zero into the rectangle table is impossible; my domain model is very complex and there are constraints on model state which cannot be expressed in a relational database). So, the solution I am gravitating to is to change the getter to: get { if(firstSide > 0 && secondSide > 0) return firstSide/secondSide; else throw new System.InvalidOperationException("All rectangle sides should have a positive length"); } The reasoning behind not throwing exceptions from properties is that programmers should be able to use them without having to take precautions about catching and handling them. But in this case, I think that it is OK to continue to use this property without such precautions: if the exception is thrown because my application wrote a zero-length rectangle side into the database, then this is a serious bug. It cannot and shouldn't be handled in the application, but there should be code which prevents it. It is good that the exception is visibly thrown, because that way the bug is caught. if the exception is thrown because a different application changed the data in the database, then handling it is outside of the scope of my application. So I can't do anything about it if I catch it. Is this a good enough reasoning to get over the "avoid" part of the guideline and throw the exception? Or should I turn it into a method after all? Note that in the real code, the properties which can have an invalid state feel less like the result of a calculation, so they are "natural" properties, not methods.
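
    For comparison, here is a small, purely illustrative sketch of the "turn it into a method" alternative mentioned at the end; the method names are my own and they would live inside the Rectangle class above:

```csharp
// Explicitly named operation: the exception no longer hides behind a property get.
public double GetSideRatio()
{
    if (firstSide > 0 && secondSide > 0)
        return firstSide / secondSide;

    throw new System.InvalidOperationException(
        "All rectangle sides should have a positive length");
}

// Try-pattern variant: callers that expect bad data can avoid exceptions entirely,
// while GetSideRatio() keeps fail-fast behaviour for genuine bugs.
public bool TryGetSideRatio(out double ratio)
{
    if (firstSide > 0 && secondSide > 0)
    {
        ratio = firstSide / secondSide;
        return true;
    }

    ratio = 0;
    return false;
}
```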

    Read the article

  • Calling functions from different classes

    - by A Ron Hubbard Clevenger
    I'm writing a program and I'm supposed to check and see if a certain object is in the list before I call it. I set up the contains() method which is supposed to use the equals() method of the Comparable interface I implemented on my Golfer class but it doesn't seem to call it (I put print statements in to check). I can't seem to figure out what's wrong with the code; the ArrayUnsortedList class I'm using to go through the list even uses the correct toString() method I defined in my Golfer class, but for some reason it won't use the equals() method I implemented. //From "GolfApp.java" public class GolfApp{ ListInterface <Golfer>golfers = new ArraySortedList<Golfer> (20); Golfer golfer; //..*snip*.. if(this.golfers.contains(new Golfer(name,score))) System.out.println("The list already contains this golfer"); else{ this.golfers.add(this.golfer = new Golfer(name,score)); System.out.println("This golfer is already on the list"); } //From "ArrayUnsortedList.java" protected void find(T target){ location = 0; found = false; while (location < numElements){ if (list[location].equals(target)) //Where I think the problem is { found = true; return; } else location++; } } public boolean contains(T element){ find(element); return found; } //From "Golfer.java" public class Golfer implements Comparable<Golfer>{ //..irrelevant code snipped..// public boolean equals(Golfer golfer) { String thisString = score + ":" + name; String otherString = golfer.getScore() + ":" + golfer.getName() ; System.out.println("Golfer.equals() has been called"); return thisString.equalsIgnoreCase(otherString); } public String toString() { return (score + ":" + name); } My main problem seems to be getting the find function of the ArrayUnsortedList to call my equals function in the find() part of the List, but I'm not exactly sure why; like I said, when I have it printed out it works with the toString() method I implemented perfectly. I'm almost positive the problem has to do with the find() function in the ArraySortedList not calling my equals() method. I tried using some other functions that relied on the find() method and got the same results.

    Read the article

  • Another question about handling game states

    - by Eva
    I'm making a game designed with the entity-component paradigm that uses systems to communicate between components as explained here. I've reached the point in my development that I need to add game states (such as paused, playing, level start, round start, game over, etc.), but I'm not sure how to do it with my framework. I've looked at this code example on game states which everyone seems to reference, but I don't think it fits with my framework. It seems to have each state handling its own drawing and updating. My framework has a SystemManager that handles all the updating using systems. For example, here's my RenderingSystem class: public class RenderingSystem extends GameSystem { private GameView gameView_; /** * Constructor * Creates a new RenderingSystem. * @param gameManager The game manager. Used to get the game components. */ public RenderingSystem(GameManager gameManager) { super(gameManager); } /** * Method: registerGameView * Registers gameView into the RenderingSystem. * @param gameView The game view registered. */ public void registerGameView(GameView gameView) { gameView_ = gameView; } /** * Method: triggerRender * Adds a repaint call to the event queue for the dirty rectangle. */ public void triggerRender() { Rectangle dirtyRect = new Rectangle(); for (GameObject object : getRenderableObjects()) { GraphicsComponent graphicsComponent = object.getComponent(GraphicsComponent.class); dirtyRect.add(graphicsComponent.getDirtyRect()); } gameView_.repaint(dirtyRect); } /** * Method: renderGameView * Renders the game objects onto the game view. * @param g The graphics object that draws the game objects. */ public void renderGameView(Graphics g) { for (GameObject object : getRenderableObjects()) { GraphicsComponent graphicsComponent = object.getComponent(GraphicsComponent.class); if (!graphicsComponent.isVisible()) continue; GraphicsComponent.Shape shape = graphicsComponent.getShape(); BoundsComponent boundsComponent = object.getComponent(BoundsComponent.class); Rectangle bounds = boundsComponent.getBounds(); g.setColor(graphicsComponent.getColor()); if (shape == GraphicsComponent.Shape.RECTANGULAR) { g.fill3DRect(bounds.x, bounds.y, bounds.width, bounds.height, true); } else if (shape == GraphicsComponent.Shape.CIRCULAR) { g.fillOval(bounds.x, bounds.y, bounds.width, bounds.height); } } } /** * Method: getRenderableObjects * @return The renderable game objects. */ private HashSet<GameObject> getRenderableObjects() { return gameManager.getGameObjectManager().getRelevantObjects( getClass()); } } Also all the updating in my game is event-driven. I don't have a loop like theirs that simply updates everything at the same time. I like my framework because it makes it easy to add new GameObjects, but doesn't have the problems some component-based designs encounter when communicating between components. I would hate to chuck it just to get pause to work. Is there a way I can add game states to my game without removing the entity-component design? Does the game state example actually fit my framework, and I'm just missing something?

    Read the article

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent, and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system. What I'm looking for is a clean, elegant way to define the function of each dependent variable in the system, and to trigger a re-calculation, whenever a variable changes, of the variables that depend on it. A naive way to do this would be to write a single class that implements INotifyPropertyChanged, and uses a massive case statement that lists out all the variable names x1, x2, ... xn on which others depend, and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependencies. I feel that this naive approach is flawed, and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) something like the following: public class Model : INotifyPropertyChanged { private CalculationManager<Model> _calculationManager = new CalculationManager<Model>(); // each setter triggers a "PropertyChanged" event public double? Height { get; set; } public double? Weight { get; set; } public double? BMI { get; set; } public Model() { _calculationManager.DefineDependency<double?>( forProperty: model => model.BMI, usingCalculation: (height, weight) => weight / Math.Pow(height, 2), withInputs: model => model.Height, model.Weight); } // INotifyPropertyChanged implementation here } I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here, and that this isn't the right approach: the (mis)use of INotifyPropertyChanged seems to me like a code smell; the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time; and the argument list (weight, height) is redundantly defined in both usingCalculation and withInputs. I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deal with what I'm grasping at here? Would this be a suitable application for a functional language like F#? Edit: More context: The model currently exists in an Excel spreadsheet, and is being migrated to a C# application. It is run on-demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets, and model parameters set by the business.
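    One established way to frame this is as a small dependency graph, which is essentially what a spreadsheet engine does: every calculated variable registers the inputs it reads, and setting an input walks its dependents and re-evaluates them. Below is a hedged, minimal sketch of that idea; the class and member names are made up for illustration, and it deliberately ignores cycles, ordering optimizations, and thread safety:

    using System;
    using System.Collections.Generic;

    public class CalcGraph
    {
        private readonly Dictionary<string, double> values = new Dictionary<string, double>();
        private readonly Dictionary<string, Func<CalcGraph, double>> formulas =
            new Dictionary<string, Func<CalcGraph, double>>();
        private readonly Dictionary<string, List<string>> dependents =
            new Dictionary<string, List<string>>();

        // Unset variables read as 0 so formulas can run before every input has a value.
        public double Get(string name) => values.TryGetValue(name, out var v) ? v : 0;

        // Independent input: setting it re-evaluates everything downstream of it.
        public void SetInput(string name, double value)
        {
            values[name] = value;
            Recalculate(name);
        }

        // Calculated variable: a formula plus the names of the inputs it reads.
        public void Define(string name, Func<CalcGraph, double> formula, params string[] inputs)
        {
            formulas[name] = formula;
            foreach (var input in inputs)
            {
                if (!dependents.ContainsKey(input)) dependents[input] = new List<string>();
                dependents[input].Add(name);
            }
        }

        private void Recalculate(string changed)
        {
            if (!dependents.TryGetValue(changed, out var children)) return;
            foreach (var child in children)
            {
                values[child] = formulas[child](this);
                Recalculate(child);   // naive walk; assumes the graph is acyclic
            }
        }
    }

    Mirroring the BMI example: define it once with g.Define("BMI", c => c.Get("Weight") / Math.Pow(c.Get("Height"), 2), "Height", "Weight"), and any later SetInput("Height", ...) or SetInput("Weight", ...) updates it automatically. The same shape is what reactive and incremental-computation libraries formalize, and it maps naturally onto functional languages as well.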

    Read the article

  • Is 2 lines of push/pop code for each pre-draw-state too many?

    - by Griffin
    I'm trying to simplify vector graphics management in XNA, currently by incorporating state preservation. 2X lines of push/pop code for X states feels like too many, and it just feels wrong to have 2 lines of code that look identical except for one being push() and the other being pop(). The goal is to eradicate this repetitiveness, and I hoped to do so by creating an interface in which a client can give class/struct refs that he wants restored after the rendering calls. Also note that many beginner-programmers will be using this, so forcing lambda expressions or other advanced C# features to be used in client code is not a good idea. I attempted to accomplish my goal by using Daniel Earwicker's Ptr class: public class Ptr<T> { Func<T> getter; Action<T> setter; public Ptr(Func<T> g, Action<T> s) { getter = g; setter = s; } public T Deref { get { return getter(); } set { setter(value); } } } an extension method: //doesn't work for structs since this is just syntactic sugar public static Ptr<T> GetPtr <T> (this T obj) { return new Ptr<T>( ()=> obj, v=> obj=v ); } and a Push Function: //returns a Pop Action for later calling public static Action Push <T> (ref T structure) where T: struct { T pushedValue = structure; //copies the struct data Ptr<T> p = structure.GetPtr(); return new Action( ()=> {p.Deref = pushedValue;} ); } However this doesn't work as stated in the code. How might I accomplish my goal? Example of code to be refactored: protected override void RenderLocally (GraphicsDevice device) { if (!(bool)isCompiled) {Compile();} //TODO: make sure state settings don't implicitly delete any buffers/resources RasterizerState oldRasterState = device.RasterizerState; DepthFormat oldFormat = device.PresentationParameters.DepthStencilFormat; DepthStencilState oldBufferState = device.DepthStencilState; { //Rendering code } device.RasterizerState = oldRasterState; device.DepthStencilState = oldBufferState; device.PresentationParameters.DepthStencilFormat = oldFormat; }
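    One common way to collapse each matching push/pop pair into a single line is a disposable scope guard: the constructor captures the state, Dispose restores it, and a using block calls Dispose even on early returns. This is a hedged sketch against the XNA 4 GraphicsDevice members used above; the class name is made up, and client code only ever sees the using line:

    // Captures device state on construction and restores it when disposed.
    public sealed class DeviceStateScope : IDisposable
    {
        private readonly GraphicsDevice device;
        private readonly RasterizerState oldRasterState;
        private readonly DepthStencilState oldDepthState;

        public DeviceStateScope(GraphicsDevice device)
        {
            this.device = device;
            oldRasterState = device.RasterizerState;
            oldDepthState = device.DepthStencilState;
        }

        public void Dispose()
        {
            device.RasterizerState = oldRasterState;
            device.DepthStencilState = oldDepthState;
        }
    }

    // Usage inside RenderLocally:
    // using (new DeviceStateScope(device))
    // {
    //     // rendering code that freely changes state
    // }   // both states restored here, with no explicit pop lines

    Any further state (such as the DepthStencilFormat above) can be folded into the same scope class, so adding a state costs lines in one place rather than two per call site.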

    Read the article

  • Using elapsed time for SlowMo in XNA

    - by Dave Voyles
    I'm trying to create a slow-mo effect in my pong game so that when a player presses a button the paddles and ball will suddenly move at a far slower speed. I believe my understanding of the concepts of adjusting the timing in XNA is sound, but I'm not sure how to incorporate it into my design exactly. The updates for my bats (paddles) are done in my Bat.cs class: /// Controls the bat moving up the screen /// </summary> public void MoveUp() { SetPosition(Position + new Vector2(0, -moveSpeed)); } /// <summary> /// Controls the bat moving down the screen /// </summary> public void MoveDown() { SetPosition(Position + new Vector2(0, moveSpeed)); } /// <summary> /// Updates the position of the AI bat, in order to track the ball /// </summary> /// <param name="ball"></param> public virtual void UpdatePosition(Ball ball) { size.X = (int)Position.X; size.Y = (int)Position.Y; } While the rest of my game updates are done in my GameplayScreen.cs class (I'm using the XNA game state management sample): Class GameplayScreen { ........... bool slow; .......... public override void Update(GameTime gameTime, bool otherScreenHasFocus, bool coveredByOtherScreen) base.Update(gameTime, otherScreenHasFocus, false); if (IsActive) { // SlowMo Stuff Elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds; if (Slowmo) Elapsed *= .8f; MoveTimer += Elapsed; double elapsedTime = gameTime.ElapsedGameTime.TotalMilliseconds; if (Keyboard.GetState().IsKeyDown(Keys.Up)) slow = true; else if (Keyboard.GetState().IsKeyDown(Keys.Down)) slow = false; if (slow == true) elapsedTime *= .1f; // Updating bat position leftBat.UpdatePosition(ball); rightBat.UpdatePosition(ball); // Updating the ball position ball.UpdatePosition(); and finally my fixed time step is declared in the constructor of my Game1.cs Class: /// <summary> /// The main game constructor. /// </summary> public Game1() { IsFixedTimeStep = slow = false; } So my question is: Where do I place the MoveTimer or elapsedTime, so that my bat will slow down accordingly?
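    The usual pattern is to compute one scaled elapsed value per frame in GameplayScreen.Update and pass it down to everything that moves, so movement becomes speed multiplied by elapsed time rather than a fixed amount per frame. The sketch below is a hedged rearrangement of the code in the question, not the sample's exact API; the UpdatePosition and MoveUp overloads that take an elapsed parameter are assumptions:

    // In GameplayScreen.Update: a single time scale drives the whole simulation.
    float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
    if (slow)
        elapsed *= 0.1f;                       // slow motion: time advances at 10% speed

    leftBat.UpdatePosition(ball, elapsed);     // hypothetical overloads taking elapsed seconds
    rightBat.UpdatePosition(ball, elapsed);
    ball.UpdatePosition(elapsed);

    // In Bat.cs: moveSpeed is now units per second, so slowing time slows the paddle.
    public void MoveUp(float elapsed)
    {
        SetPosition(Position + new Vector2(0, -moveSpeed * elapsed));
    }

    With this shape the MoveTimer bookkeeping is not needed for the slow-mo itself; the one scaled elapsed value is the single place the effect is applied.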

    Read the article

  • Marshalling C# Structs into DX11 cbuffers

    - by Craig
    I'm having some issues with the packing of my structure in C# and passing them through to cbuffers I have registered in HLSL. When I pack my struct in one manner the information seems to be able to pass to the shader: [StructLayout(LayoutKind.Explicit, Size = 16)] internal struct TestStruct { [FieldOffset(0)] public Vector3 mEyePosition; [FieldOffset(12)] public int type; } This works perfectly when used against this HLSL fragment: cbuffer PerFrame : register(b0) { Vector3 eyePos; int type; } float3 GetColour() { float3 returnColour = float(0.0f, 0.0f, 0.0f); switch(type) { case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break; case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break; case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break; } return returnColour; } However, when I use the following structure definitions... // Note this is 16 because HLSL packs in 4 float 'chunks'. // It is also simplified, but still demonstrates the problem. [StructLayout(Layout.Explicit, Size = 16)] internal struct InternalTestStruct { [FieldOffset(0)] public int type; } [StructLayout(LayoutKind.Explicit, Size = 32)] internal struct TestStruct { [FieldOffset(0)] public Vector3 mEyePosition; //Missing 4 bytes here for correct packing. [FieldOffset(16)] public InternalTestStruct mInternal; } ... the following HLSL fragment no longer works. struct InternalType { int type; } cbuffer PerFrame : register(b0) { Vector3 eyePos; InternalType internalStruct; } float3 GetColour() { float3 returnColour = float(0.0f, 0.0f, 0.0f); switch(internaltype.type) { case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break; case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break; case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break; } return returnColour; } Is there a problem with the way I am packing the struct, or is it another issue? To re-iterate: I can pass a struct in a cbuffer so long as it does not contain a nested struct.
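    For reference, HLSL packs cbuffer members into 16-byte (float4) registers: a member may not straddle a register boundary, a nested struct starts on a fresh register, and the constant buffer created on the CPU side must be a multiple of 16 bytes in size. A hedged managed layout that mirrors the second HLSL fragment could look like the sketch below; the explicit padding field and the names are illustrative, and Vector3 is assumed to be the 12-byte vector type already in use:

    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Explicit, Size = 16)]
    internal struct InternalType
    {
        [FieldOffset(0)] public int Type;
        // bytes 4..15 are padding so the nested struct fills a whole register
    }

    [StructLayout(LayoutKind.Explicit, Size = 32)]
    internal struct PerFrameBuffer
    {
        [FieldOffset(0)]  public Vector3 EyePosition;    // bytes 0..11 of register 0
        [FieldOffset(12)] private float _pad0;           // completes register 0
        [FieldOffset(16)] public InternalType Internal;  // starts on the next 16-byte register
    }

    It can also be worth asserting that Marshal.SizeOf of the struct matches the ByteWidth used when the constant buffer is created; a mismatch there can produce the kind of silent breakage described above.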

    Read the article

  • Mutable Records in F#

    - by MarkPearl
    I’m loving my expert F# book – today I thought I would give a post on using mutable records as covered in Chapter 4 of Expert F#. So as they explain the simplest mutable data structures in F# are mutable records. The whole concept of things by default being immutable is a new one for me from my C# background. Anyhow… lets look at some C# code first. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace MutableRecords { public class DiscreteEventCounter { public int Total { get; set; } public int Positive { get; set; } public string Name { get; private set; } public DiscreteEventCounter(string name) { Name = name; } } class Program { private static void recordEvent(DiscreteEventCounter s, bool isPositive) { s.Total += 1; if (isPositive) s.Positive += 1; } private static void reportStatus (DiscreteEventCounter s) { Console.WriteLine("We have {0} {1} out of {2}", s.Positive, s.Name, s.Total); } static void Main(string[] args) { var longCounter = new DiscreteEventCounter("My Discrete Counter"); recordEvent(longCounter, true); recordEvent(longCounter, true); reportStatus(longCounter); Console.ReadLine(); } } } Quite simple, we have a class that has a few values. We instantiate an instance of the class and perform increments etc on the instance. Now lets look at an equivalent F# sample. namespace EncapsulationNS module Module1 = open System type DiscreteEventCounter = { mutable Total : int mutable Positive : int Name : string } let recordEvent (s: DiscreteEventCounter) isPositive = s.Total <- s.Total+1 if isPositive then s.Positive <- s.Positive+1 let reportStatus (s: DiscreteEventCounter) = printfn "We have %d %s out of %d" s.Positive s.Name s.Total let newCounter nm = { Total = 0; Positive = 0; Name = nm } // // Using it... // let longCounter = newCounter "My Discrete Counter" recordEvent longCounter (true) recordEvent longCounter (true) reportStatus longCounter System.Console.ReadLine() Notice in the type declaration of the DiscreteEventCounter we had to explicitly declare that the total and positive value holders were mutable. And that’s it – a very simple example of mutable types.

    Read the article

  • Can't change color of sprites in unity

    - by Aceleeon
    I would like to create a script that targets a 2d sprite "enemy" and changes its color to red (slightly opaque red if possible) when you hit tab. I have this code from a 3d tutorial, hoping the transition would work, but it does not. I only get the script to cycle the enemy tags, but it never changes the color of the sprite. I have the code below. I'm very new to coding, and any help would be FANTASTIC! HELP! hahah. TL;DR: Can't get 3d color targeting to work for 2D. Check out the C# code below: using UnityEngine; using System.Collections; using System.Collections.Generic; public class Targetting : MonoBehaviour { public List<Transform> targets; public Transform selectedTarget; private Transform myTransform; // Use this for initialization void Start () { targets = new List<Transform>(); selectedTarget = null; myTransform = transform; AddAllEnemies(); } public void AddAllEnemies() { GameObject[] go = GameObject.FindGameObjectsWithTag("Enemy"); foreach(GameObject enemy in go) AddTarget(enemy.transform); } public void AddTarget(Transform enemy) { targets.Add(enemy); } private void SortTargetsByDistance() { targets.Sort(delegate(Transform t1,Transform t2) { return Vector3.Distance(t1.position, myTransform.position).CompareTo(Vector3.Distance(t2.position, myTransform.position)); }); } private void TargetEnemy() { if(selectedTarget == null) { SortTargetsByDistance(); selectedTarget = targets[0]; } else { int index = targets.IndexOf(selectedTarget); if(index < targets.Count -1) { index++; } else { index = 0; } selectedTarget = targets[index]; } } private void SelectTarget() { selectedTarget.GetComponent().color = Color.red; } private void DeselectTarget() { selectedTarget.GetComponent().color = Color.blue; selectedTarget = null; } // Update is called once per frame void Update() { if(Input.GetKeyDown(KeyCode.Tab)) { TargetEnemy(); } } }
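    Two things stand out, offered here as a hedged reading of the code above rather than a definitive fix. First, Update only ever calls TargetEnemy(), so SelectTarget(), the method that actually sets the color, never runs; the highlight has to be moved around the target switch. Second, for 2D sprites the tint lives on the SpriteRenderer component, not on the renderer.material used by 3D tutorials. A sketch with both points applied (the ClearHighlight helper is made up, and the 0.5f alpha is just one way to get a slightly opaque red):

    private void SelectTarget()
    {
        // Tint the targeted enemy's sprite; alpha below 1 makes the red slightly see-through.
        SpriteRenderer sprite = selectedTarget.GetComponent<SpriteRenderer>();
        if (sprite != null)
            sprite.color = new Color(1f, 0f, 0f, 0.5f);
    }

    private void ClearHighlight(Transform target)
    {
        SpriteRenderer sprite = target.GetComponent<SpriteRenderer>();
        if (sprite != null)
            sprite.color = Color.white;   // white is the untinted default for sprites
    }

    // Update: cycle the target as before, then move the highlight to the new one.
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Tab))
        {
            Transform previous = selectedTarget;
            TargetEnemy();
            if (previous != null && previous != selectedTarget)
                ClearHighlight(previous);
            SelectTarget();
        }
    }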

    Read the article

  • How to get around the Circular Reference issue with JSON and Entity

    - by DanScan
    I have been experimenting with creating a website that leverages MVC with JSON for my presentation layer and Entity framework for data model/database. My issue comes into play when serializing my Model objects into JSON. I am using the code first method to create my database. When doing the code first method, a one-to-many relationship (parent/child) requires the child to have a reference back to the parent. (Example code may have typos but you get the picture) class parent { public List<child> Children{get;set;} public int Id{get;set;} } class child { public int ParentId{get;set;} [ForeignKey("ParentId")] public parent MyParent{get;set;} public string name{get;set;} } When returning a "parent" object via a JsonResult a circular reference error is thrown because "child" has a property of class parent. I have tried the ScriptIgnore attribute but I lose the ability to look at the child objects. I will need to display information in a parent child view at some point. I have tried to make base classes for both parent and child that do not have a circular reference. Unfortunately when I attempt to send the baseParent and baseChild these are read by the JSON Parser as their derived classes (I am pretty sure this concept is escaping me). Base.baseParent basep = (Base.baseParent)parent; return Json(basep, JsonRequestBehavior.AllowGet); The one solution I have come up with is to create "View" Models. I create simple versions of the database models that do not include the reference to the parent class. These view models each have a method to return the Database Version and a constructor that takes the database model as a parameter (viewmodel.name = databasemodel.name). This method seems forced although it works. NOTE: I am posting here because I think this is more discussion worthy. I could leverage a different design pattern to overcome this issue or it could be as simple as using a different attribute on my model. In my searching I have not seen a good method to overcome this problem. My end goal would be to have a nice MVC application that heavily leverages JSON for communicating with the server and displaying data, while maintaining a consistent model across layers (or as best as I can come up with).
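    A lighter-weight alternative to hand-written view-model classes is to project the entity graph into plain shapes right in the action method, so the serializer never sees the back-reference at all. This is only a hedged sketch against the parent/child classes above; the LoadParent call is a placeholder for however the entity is actually fetched:

    // Requires System.Linq for the Select projection.
    public ActionResult GetParent(int id)
    {
        parent p = LoadParent(id);   // placeholder for the real data access

        // Anonymous projection: the children carry no MyParent property,
        // so there is nothing circular for the JSON serializer to walk.
        var dto = new
        {
            p.Id,
            Children = p.Children.Select(c => new { c.name }).ToList()
        };

        return Json(dto, JsonRequestBehavior.AllowGet);
    }

    The projection lives next to the action that needs it, which avoids maintaining a parallel class per entity, at the cost of repeating the shape wherever it is returned.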

    Read the article

  • Should library classes be wrapped before using them in unit testing?

    - by Songo
    I'm doing unit testing and in one of my classes I need to send a mail from one of the methods, so using constructor injection I inject an instance of Zend_Mail class which is in Zend framework. Example: class Logger{ private $mailer; function __construct(Zend_Mail $mail){ $this->mail=$mail; } function toBeTestedFunction(){ //Some code $this->mail->setTo('some value'); $this->mail->setSubject('some value'); $this->mail->setBody('some value'); $this->mail->send(); //Some } } However, Unit testing demands that I test one component at a time, so I need to mock the Zend_Mail class. In addition I'm violating the Dependency Inversion principle as my Logger class now depends on concretion not abstraction. Does that mean that I can never use a library class directly and must always wrap it in a class of my own? Example: interface Mailer{ public function setTo($to); public function setSubject($subject); public function setBody($body); public function send(); } class MyMailer implements Mailer{ private $mailer; function __construct(){ $this->mail=new Zend_Mail; //The class isn't injected this time } function setTo($to){ $this->mailer->setTo($to); } //implement the rest of the interface functions similarly } And now my Logger class can be happy :D class Logger{ private $mailer; function __construct(Mailer $mail){ $this->mail=$mail; } //rest of the code unchanged } Questions: Although I solved the mocking problem by introducing an interface, I have created a totally new class Mailer that now needs to be unit tested although it only wraps Zend_Mail which is already unit tested by the Zend team. Is there a better approach to all this? Zend_Mail's send() function could actually have a Zend_Transport object when called (i.e. public function send($transport = null)). Does this make the idea of a wrapper class more appealing? The code is in PHP, but answers doesn't have to be. This is more of a design issue than a language specific feature

    Read the article

  • Simple HTML5 Friendly Markup Sample

    - by Geertjan
    From a demo done by David Heffelfinger (who has a great Java EE 7 screencast series here), on HTML5 friendly markup. index.xhtml:  <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:jsf="http://xmlns.jcp.org/jsf"> <title>Data Entry Page</title> <body> <form method="POST" jsf:id='form'> <table> <tr> <td>Name:</td> <td><input jsf:id='name' type="text" jsf:value="${person.name}" /></td> </tr> <tr> <td>City</td> <th><input jsf:id='city' type="text" jsf:value="${person.city}"/></th> </tr> <tr> <td><input type="submit" value="Submit" jsf:action="confirmation" /></td> </tr> </table> </form> </body> </html> confirmation.xhtml: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Data Confirmation Page</title> </head> <body> <h1>#{person.name}</h1> from <h2>#{person.city}</h2> </body> </html> Person.java: package org.demo; import javax.enterprise.inject.Model; @Model public class Person { String name; String city; public String getName() { return name; } public void setName(String name) { this.name = name; } public String getCity() { return city; } public void setCity(String city) { this.city = city; } }

    Read the article

  • Best practice for organizing/storing character/monster data in an RPG?

    - by eclecto
    Synopsis: Attempting to build a cross-platform RPG app in Adobe Flash Builder and am trying to figure out the best class hierarchy and the best way to store the static data used to build each of the individual "hero" and "monster" types. My programming experience, particularly in AS3, is embarrassingly small. My ultra-alpha method is to include a "_class" object in the constructor for each instance. The _class, in turn, is a static Object pulled from a class created specifically for that purpose, so things look something like this: // Character.as package { public class Character extends Sprite { public var _strength:int; // etc. public function Character(_class:Object) { _strength = _class._strength; // etc. } } } // MonsterClasses.as package { public final class MonsterClasses extends Object { public static const Monster1:Object={ _strength:50, // etc. } // etc. } } // Some other class in which characters/monsters are created. // Create a new instance of Character var myMonster = new Character(MonsterClasses.Monster1); Another option I've toyed with is the idea of making each character class/monster type its own subclass of Character, but I'm not sure if it would be efficient or even make sense considering that these classes would only be used to store variables and would add no new methods. On the other hand, it would make creating instances as simple as var myMonster = new Monster1; and potentially cut down on the overhead of having to read a class containing the data for, at a conservative preliminary estimate, over 150 monsters just to fish out the one monster I want (assuming, and I really have no idea, that such a thing might cause any kind of slowdown in execution). But long story short, I want a system that's both efficient at compile time and easy to work with during coding. Should I stick with what I've got or try a different method? As a subquestion, I'm also assuming here that the best way to store data that will be bundled with the final game and not read externally is simply to declare everything in AS3. Seems to me that if I used, say, XML or JSON I'd have to use the associated AS3 classes and methods to pull in the data, parse it, and convert it to AS3 object(s) anyway, so it would be inefficient. Right?

    Read the article

  • Advantages to Multiple Methods over Switch

    - by tandu
    I received a code review from a senior developer today asking "By the way, what is your objection to dispatching functions by way of a switch statement?" I have read in many places about how pumping an argument through switch to call methods is bad OOP, not as extensible, etc. However, I can't really come up with a definitive answer for him. I would like to settle this for myself once and for all. Here are our competing code suggestions (php used as an example, but can apply more universally): class Switch { public function go($arg) { switch ($arg) { case "one": echo "one\n"; break; case "two": echo "two\n"; break; case "three": echo "three\n"; break; default: throw new Exception("Unknown call: $arg"); break; } } } class Oop { public function go_one() { echo "one\n"; } public function go_two() { echo "two\n"; } public function go_three() { echo "three\n"; } public function __call($_, $__) { throw new Exception("Unknown call $_ with arguments: " . print_r($__, true)); } } Part of his argument was "It (switch method) has a much cleaner way of handling default cases than what you have in the generic __call() magic method." I disagree about the cleanliness and in fact prefer call, but I would like to hear what others have to say. Arguments I can come up with in support of Oop scheme: A bit cleaner in terms of the code you have to write (less, easier to read, less keywords to consider) Not all actions delegated to a single method. Not much difference in execution here, but at least the text is more compartmentalized. In the same vein, another method can be added anywhere in the class instead of a specific spot. Methods are namespaced, which is nice. Does not apply here, but consider a case where Switch::go() operated on a member rather than a parameter. You would have to change the member first, then call the method. For Oop you can call the methods independently at any time. Arguments I can come up with in support of Switch scheme: For the sake of argument, cleaner method of dealing with a default (unknown) request Seems less magical, which might make unfamiliar developers feel more comfortable Anyone have anything to add for either side? I'd like to have a good answer for him.
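    Since the question notes the example applies beyond PHP, here is a hedged sketch of a third shape in C#: dispatch through a dictionary of delegates. It keeps the single entry point and the one obvious default of the switch, while letting new actions be registered without growing a case list (all names here are illustrative):

    using System;
    using System.Collections.Generic;

    public class Dispatcher
    {
        // Each action name maps to a handler; adding an action is one new entry.
        private readonly Dictionary<string, Action> handlers =
            new Dictionary<string, Action>(StringComparer.OrdinalIgnoreCase)
            {
                ["one"]   = () => Console.WriteLine("one"),
                ["two"]   = () => Console.WriteLine("two"),
                ["three"] = () => Console.WriteLine("three"),
            };

        public void Go(string arg)
        {
            // The unknown-argument case stays in one obvious place, as with the switch.
            if (!handlers.TryGetValue(arg, out Action handler))
                throw new ArgumentException("Unknown call: " + arg);
            handler();
        }
    }

    Whether this counts as cleaner than either original is exactly the judgement the question is about; it mainly trades the switch's compile-time shape for data-driven registration.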

    Read the article

  • Getting a SecurityToken from a RequestSecurityTokenResponse in WIF

    - by Shawn Cicoria
    When you’re working with WIF and WSTrustChannelFactory when you call the Issue operation, you can also request that a RequestSecurityTokenResponse as an out parameter. However, what can you do with that object?  Well, you could keep it around and use it for subsequent calls with the extension method CreateChannelWithIssuedToken – or can you? public static T CreateChannelWithIssuedToken<T>(this ChannelFactory<T> factory, SecurityToken issuedToken);   As you can see from the method signature it takes a SecurityToken – but that’s not present on the RequestSecurityTokenResponse class. However, you can through a little magic get a GenericXmlSecurityToken by means of the following set of extension methods below – just call rstr.GetSecurityTokenFromResponse() – and you’ll get a GenericXmlSecurityToken as a return. public static class TokenHelper { /// <summary> /// Takes a RequestSecurityTokenResponse, pulls out the GenericXmlSecurityToken usable for further WS-Trust calls /// </summary> /// <param name="rstr"></param> /// <returns></returns> public static GenericXmlSecurityToken GetSecurityTokenFromResponse(this RequestSecurityTokenResponse rstr) { var lifeTime = rstr.Lifetime; var appliesTo = rstr.AppliesTo.Uri; var tokenXml = rstr.GetSerializedTokenFromResponse(); var token = GetTokenFromSerializedToken(tokenXml, appliesTo, lifeTime); return token; } /// <summary> /// Provides a token as an XML string. /// </summary> /// <param name="rstr"></param> /// <returns></returns> public static string GetSerializedTokenFromResponse(this RequestSecurityTokenResponse rstr) { var serializedRst = new WSFederationSerializer().GetResponseAsString(rstr, new WSTrustSerializationContext()); return serializedRst; } /// <summary> /// Turns the XML representation of the token back into a GenericXmlSecurityToken. /// </summary> /// <param name="tokenAsXmlString"></param> /// <param name="appliesTo"></param> /// <param name="lifetime"></param> /// <returns></returns> public static GenericXmlSecurityToken GetTokenFromSerializedToken(this string tokenAsXmlString, Uri appliesTo, Lifetime lifetime) { RequestSecurityTokenResponse rstr2 = new WSFederationSerializer().CreateResponse( new SignInResponseMessage(appliesTo, tokenAsXmlString), new WSTrustSerializationContext()); return new GenericXmlSecurityToken( rstr2.RequestedSecurityToken.SecurityTokenXml, new BinarySecretSecurityToken( rstr2.RequestedProofToken.ProtectedKey.GetKeyBytes()), lifetime.Created.HasValue ? lifetime.Created.Value : DateTime.MinValue, lifetime.Expires.HasValue ? lifetime.Expires.Value : DateTime.MaxValue, rstr2.RequestedAttachedReference, rstr2.RequestedUnattachedReference, null); } }
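    Putting the pieces together, a typical call site would request the token, keep the response, and later rebuild a usable token from it for CreateChannelWithIssuedToken. The sketch below is hedged: it assumes the Microsoft.IdentityModel (WIF 3.5) WSTrustChannelFactory APIs, and the endpoint name and IMyService contract are placeholders:

    // Ask the STS for a token, keeping the raw RequestSecurityTokenResponse as well.
    // rst is an already-built RequestSecurityToken (AppliesTo, KeyType, and so on).
    var trustChannel = (WSTrustChannel)trustChannelFactory.CreateChannel();
    RequestSecurityTokenResponse rstr;
    SecurityToken issuedToken = trustChannel.Issue(rst, out rstr);

    // Later, possibly after round-tripping the serialized response, rebuild a token from it.
    GenericXmlSecurityToken token = rstr.GetSecurityTokenFromResponse();

    // Use the rebuilt token to open a channel to the relying-party service.
    var factory = new ChannelFactory<IMyService>("serviceEndpoint");
    factory.ConfigureChannelFactory();   // federation plumbing required before issued-token channels
    IMyService client = factory.CreateChannelWithIssuedToken(token);

    The main point is that nothing except the serialized response needs to be kept around; the helpers above turn it back into the GenericXmlSecurityToken that CreateChannelWithIssuedToken expects.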

    Read the article
