Search Results

Search found 1156 results on 47 pages for 'serialize'.


  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar: they taste so nice and sweet, but they'll rot your teeth if you eat them too much. I can't deny that they are useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But I also can't deny that they tend to be overused and abused to willy-nilly extend every living type.

    So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything, whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit. So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth: you just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules-of-thumb around them. A good extension method should:

      1. Apply to any possible instance of the type it extends.
      2. Simplify logic and improve readability/maintainability.
      3. Apply to the most specific type or interface applicable.
      4. Be isolated in a namespace so that it does not pollute IntelliSense.

    So let's look at a few examples in relation to these rules. The first rule, to me, is the most important of all. Once again, it bears repeating: a good extension method should apply to all possible instances of the type it extends. It should feel like the long lost relative that should have been included in the original class but somehow was missing from the family tree.

    Take this nifty little int extension. I saw this once in a blog and at first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:

        public static class IntExtensions
        {
            public static int Seconds(this int num)
            {
                return num * 1000;
            }

            public static int Minutes(this int num)
            {
                return num * 60000;
            }
        }

    This is so you could do things like:

        ...
        Thread.Sleep(5.Seconds());
        ...
        proxy.Timeout = 1.Minutes();
        ...

    Awww, you say, that's cute! Well, that's the problem: it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeSpan.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:

        total += numberOfTodaysOrders.Seconds();

    which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints; it applies to ints that are representative of time that you want to convert to milliseconds. Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense, and b) passing the result of Seconds() to something that does not take milliseconds will be off by a factor of 1000 or worse. Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.
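
    One way to keep the fluent feel while fixing problem (b) – this is my own sketch, not from the original post – is to return a TimeSpan instead of a raw int, so the unit travels with the type and can't silently be added to an order count. The logical-domain objection (a) still stands, but the factor-of-1000 class of bug goes away:

        public static class TimeSpanExtensions
        {
            // Returning TimeSpan keeps the unit in the type system;
            // 5.Seconds() can no longer be assigned to a plain int.
            public static TimeSpan Seconds(this int value)
            {
                return TimeSpan.FromSeconds(value);
            }

            public static TimeSpan Minutes(this int value)
            {
                return TimeSpan.FromMinutes(value);
            }
        }

        // Usage: Thread.Sleep has a TimeSpan overload.
        Thread.Sleep(5.Seconds());
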
    For example, this is one of my personal favorites:

        public static bool IsBetween<T>(this T value, T low, T high)
            where T : IComparable<T>
        {
            return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;
        }

    This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:

        if (response.Employee.Address.YearsAt >= 2
            && response.Employee.Address.YearsAt <= 10)
        {
        ...
        }

    Now, you can instead type:

        if (response.Employee.Address.YearsAt.IsBetween(2, 10))
        {
        ...
        }

    Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.

    Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying that if you have something that applies to enumerables, you should create an extension for List, Array, Dictionary, etc (though you may have reasons for doing so), but that you should beware of making things TOO general.

    For example, let's say we had an extension method like this:

        public static T ConvertTo<T>(this object value)
        {
            return (T)Convert.ChangeType(value, typeof(T));
        }

    This lets you do more fluent conversions like:

        double d = "5.0".ConvertTo<double>();

    However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:

        object value = new Employee();
        ...
        // class cast exception because typeof(IEmployee) != typeof(Employee)
        IEmployee emp = value.ConvertTo<IEmployee>();

    Yes, that's a downfall of working with IConvertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:

        public static T ConvertTo<T>(this IConvertible value)
        {
            return (T)Convert.ChangeType(value, typeof(T));
        }

    This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.

    Now my fourth rule is just my general rule-of-thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in their own sub-namespace, something akin to:

        namespace Shared.Core.Extensions { ... }

    This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense; you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.
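
    As a quick illustration of rule one in action (my example, not the author's), the same IsBetween reads naturally across very different IComparable<T> implementations:

        DateTime start = new DateTime(2010, 1, 1);

        bool inWindow = DateTime.Now.IsBetween(start, start.AddDays(30)); // dates
        bool isLetter = 'q'.IsBetween('a', 'z');                          // chars
        bool inRange  = "mango".IsBetween("apple", "peach");              // strings (string.CompareTo ordering)
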
    To really make this work, however, that namespace should only include extension methods and subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too.

    Also, just as a personal preference, when an extension method isn't simply a syntactical shortcut, I like to put the logic in a static utility class and then add the extension method as syntactical candy on top. For instance, I think it imaginable that any object could be converted to XML:

        namespace Shared.Core
        {
            // A collection of XML Utility classes
            public static class XmlUtility
            {
                ...
                // Serialize an object into an xml string
                public static string ToXml(object input)
                {
                    var xs = new XmlSerializer(input.GetType());

                    // use new UTF8Encoding here, not Encoding.UTF8. The latter includes
                    // the BOM which screws up subsequent reads, the former does not.
                    using (var memoryStream = new MemoryStream())
                    using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))
                    {
                        xs.Serialize(xmlTextWriter, input);
                        return Encoding.UTF8.GetString(memoryStream.ToArray());
                    }
                }
                ...
            }
        }

    I also wanted to be able to call this from an object like:

        value.ToXml();

    But here's the problem: if I made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects, which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense; then, in my extensions namespace, I add the syntactical candy:

        namespace Shared.Core.Extensions
        {
            public static class XmlExtensions
            {
                public static string ToXml(this object value)
                {
                    return XmlUtility.ToXml(value);
                }
            }
        }

    So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use it as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle: XmlUtility is responsible for converting objects to XML, and XmlExtensions is responsible for extending object's interface with ToXml().


  • Use a Fake Http Channel to Unit Test with HttpClient

    - by Steve Michelotti
    Applications get data from lots of different sources. The most common is to get data from a database or a web service. Typically, we encapsulate calls to a database in a Repository object and we create some sort of IRepository interface as an abstraction to decouple between layers and enable easier unit testing by leveraging faking and mocking. This works great for database interaction. However, when consuming a RESTful web service, this is not always the best approach.

    The WCF Web APIs that are available on CodePlex (current drop is Preview 3) provide a variety of features to make building HTTP REST services more robust. When you download the latest bits, you'll also find a new HttpClient which has been updated for .NET 4.0 as compared to the one that shipped for 3.5 in the original REST Starter Kit. The HttpClient currently provides the best API for consuming REST services on the .NET platform, and the WCF Web APIs provide a number of extension methods which extend HttpClient and make it even easier to use.

    Let's say you have a client application that is consuming an HTTP service – this could be Silverlight, WPF, or any UI technology, but for my example I'll use an MVC application:

        1: using System;
        2: using System.Net.Http;
        3: using System.Web.Mvc;
        4: using FakeChannelExample.Models;
        5: using Microsoft.Runtime.Serialization;
        6:
        7: namespace FakeChannelExample.Controllers
        8: {
        9:     public class HomeController : Controller
        10:     {
        11:         private readonly HttpClient httpClient;
        12:
        13:         public HomeController(HttpClient httpClient)
        14:         {
        15:             this.httpClient = httpClient;
        16:         }
        17:
        18:         public ActionResult Index()
        19:         {
        20:             var response = httpClient.Get("Person(1)");
        21:             var person = response.Content.ReadAsDataContract<Person>();
        22:
        23:             this.ViewBag.Message = person.FirstName + " " + person.LastName;
        24:
        25:             return View();
        26:         }
        27:     }
        28: }

    On line #20 of the code above you can see I'm performing an HTTP GET request to a Person resource exposed by an HTTP service. On line #21, I use the ReadAsDataContract() extension method provided by the WCF Web APIs to serialize to a Person object.

    In this example, the HttpClient is being passed into the constructor by MVC's dependency resolver – in this case, I'm using StructureMap as an IoC, and my StructureMap initialization code looks like this:

        using StructureMap;
        using System.Net.Http;

        namespace FakeChannelExample
        {
            public static class IoC
            {
                public static IContainer Initialize()
                {
                    ObjectFactory.Initialize(x =>
                    {
                        x.For<HttpClient>().Use(() => new HttpClient("http://localhost:31614/"));
                    });
                    return ObjectFactory.Container;
                }
            }
        }

    My controller code currently depends on a concrete instance of the HttpClient. Now I *could* create some sort of interface, wrap the HttpClient in this interface, and use that object inside my controller instead – however, there are a few reasons why that is not desirable: For one thing, the API provided by the HttpClient provides nice features for dealing with HTTP services. I don't really *want* these to look like C# RPC method calls – when HTTP services have REST features, I may want to inspect HTTP response headers and hypermedia contained within the message so that I can make intelligent decisions as to what to do next in my workflow (although I don't happen to be doing these things in my example above) – this type of workflow is common in hypermedia REST scenarios.
    If I just encapsulate HttpClient behind some IRepository interface and make it look like a C# RPC method call, it will become difficult to take advantage of these types of things. Second, it could get pretty mind-numbing to have to create interfaces all over the place just to wrap the HttpClient. Then you're probably going to have to hard-code HTTP knowledge into your code to formulate requests rather than just "following the links" that the hypermedia in a message might provide. Third, at first glance it might appear that we need to create an interface to facilitate unit testing, but actually it's unnecessary. Even though the code above is dependent on a concrete type, it's actually very easy to fake the data in a unit test. The HttpClient provides a Channel property (of type HttpMessageChannel) which allows you to create a fake message channel which can be leveraged in unit testing. In this case, what I want is to be able to write a unit test that just returns fake data. I also want this to be as re-usable as possible for my unit testing. I want to be able to write a unit test that looks like this:

        1: [TestClass]
        2: public class HomeControllerTest
        3: {
        4:     [TestMethod]
        5:     public void Index()
        6:     {
        7:         // Arrange
        8:         var httpClient = new HttpClient("http://foo.com");
        9:         httpClient.Channel = new FakeHttpChannel<Person>(new Person { FirstName = "Joe", LastName = "Blow" });
        10:
        11:         HomeController controller = new HomeController(httpClient);
        12:
        13:         // Act
        14:         ViewResult result = controller.Index() as ViewResult;
        15:
        16:         // Assert
        17:         Assert.AreEqual("Joe Blow", result.ViewBag.Message);
        18:     }
        19: }

    Notice on line #9, I'm setting the Channel property of the HttpClient to be a fake channel. I'm also specifying the fake object that I want to be in the response to my "fake" Http request. I don't need to rely on any mocking frameworks to do this. All I need is my FakeHttpChannel. The code to do this is not complex:

        using System;
        using System.IO;
        using System.Net.Http;
        using System.Runtime.Serialization;
        using System.Threading;
        using FakeChannelExample.Models;

        namespace FakeChannelExample.Tests
        {
            public class FakeHttpChannel<T> : HttpClientChannel
            {
                private T responseObject;

                public FakeHttpChannel(T responseObject)
                {
                    this.responseObject = responseObject;
                }

                protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
                {
                    return new HttpResponseMessage()
                    {
                        RequestMessage = request,
                        Content = new StreamContent(this.GetContentStream())
                    };
                }

                private Stream GetContentStream()
                {
                    var serializer = new DataContractSerializer(typeof(T));
                    Stream stream = new MemoryStream();
                    serializer.WriteObject(stream, this.responseObject);
                    stream.Position = 0;
                    return stream;
                }
            }
        }

    The HttpClientChannel provides a Send() method which you can override to return any HttpResponseMessage that you want. You can see I'm using the DataContractSerializer to serialize the object and write it to a stream. That's all you need to do. In the example above, the only thing I've chosen to do is to provide a way to return different response objects. But there are many more features you could add to your own re-usable FakeHttpChannel. For example, you might want to provide the ability to add HTTP headers to the message. You might want to use a serializer other than the DataContractSerializer.
    You might want to provide custom hypermedia in the response as well as just an object, or set HTTP response codes. The list goes on. This is just one example of the really cool features being added to the next version of WCF to enable various HTTP scenarios. The code sample for this post can be downloaded here.
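
    As a sketch of that kind of extension (my own illustration built on the article's FakeHttpChannel; the status-code parameter and header name are assumptions, not part of the sample):

        public class FakeHttpChannel<T> : HttpClientChannel
        {
            private readonly T responseObject;
            private readonly HttpStatusCode statusCode;

            public FakeHttpChannel(T responseObject, HttpStatusCode statusCode = HttpStatusCode.OK)
            {
                this.responseObject = responseObject;
                this.statusCode = statusCode; // lets tests exercise non-200 paths
            }

            protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
            {
                var response = new HttpResponseMessage()
                {
                    StatusCode = statusCode,
                    RequestMessage = request,
                    Content = new StreamContent(this.GetContentStream())
                };
                response.Headers.Add("X-Fake-Channel", "true"); // illustrative custom header
                return response;
            }

            // GetContentStream() as in the article's version
        }
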


  • c# - PubNub JSON serialization code works in example project but not in my project

    - by pilau
    I am making a Winamp plugin with the single function of sending the details of the song being played over HTTP to a webpage. It works like this: Winamp song event triggered - check for new song - publish to webpage using PubNub (C# API). So far I have got to the stage where everything works exactly as it is supposed to, except for the PubNub code, which doesn't serialize the object I'm passing for publishing into JSON. All I keep getting in the PubNub console is a mere {} - an empty JSON object.

    A little background on the project structure: I am using Sharpamp, which is a custom library that enables making Winamp plugins with C#. I am also using the PubNub C# API. The gen_notifier_cs project is the C++ plugin wrapper created by Sharpamp. notifier_cs is where all my code resides. The two other projects are self-explanatory, I assume. I have referenced the PubNub API in notifier_cs, and also have referenced Sharpamp in both notifier_cs and the PubNub API.

    So, the objects that need to get serialized are of a class Song as defined in Sharpamp:

        public class Song
        {
            public string Title { get; internal set; }
            public string Artist { get; internal set; }
            public string Album { get; internal set; }
            public string Year { get; internal set; }
            public bool HasMetadata { get; internal set; }
            public string Filename { get; internal set; }
        }

    So if I have a song object with song data in it, I would go pubnub.publish("winamp_pipe", song); to publish it, and PubNub will automatically serialize the data into JSON. But that's just not working in my solution.

    To test why it wasn't serializing, I copied that class to the example code file in the PubNub API. Visual Studio changed the class to this (notice the public Song() method):

        public class Song
        {
            public Song()
            {
                return;
            }

            public string Album { get; set; }
            public string Artist { get; set; }
            public string Filename { get; set; }
            public bool HasMetadata { get; set; }
            public string Title { get; set; }
            public string Year { get; set; }
        }

    In the same example file I initialized a default song object with some values:

        Song song = new Song();
        song.Album = "albumname";
        song.Artist = "artistname";
        song.HasMetadata = true;
        song.Title = "songtitle";
        song.Year = "2012";

    And published it: pubnub.publish("winamp_pipe", song); and it worked! I got the JSON object in the PubNub channel!

        {"Album":"albumname","Artist":"artistname","Filename":null,"HasMetadata":true,"Title":"songtitle","Year":"2012"}

    So, I tried replacing the "new" Song class with the original one defined in Sharpamp. I tried adding another class definition in the notifier_cs project, but that clashes with the one in Sharpamp, which I have to rely on. I have tried as many things as I could come up with; needless to say, none prevailed. Still, all I get is an empty JSON object. I have been pulling out my hair for the last day. I know this post is super long but I appreciate your input nonetheless. Thanks in advance.
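
    One thing worth checking (an educated guess, not a confirmed fix): the Sharpamp Song class has internal setters, while the hand-copied class that serialized fine has public setters and an explicit parameterless constructor. If the serializer can't see the setters, it may emit nothing. A sketch of a workaround is to map the song into a serializer-friendly DTO you own before publishing (the DTO name is mine):

        // Hypothetical DTO with public setters, mirroring Sharpamp's Song
        public class SongDto
        {
            public string Title { get; set; }
            public string Artist { get; set; }
            public string Album { get; set; }
            public string Year { get; set; }
            public bool HasMetadata { get; set; }
            public string Filename { get; set; }
        }

        // Copy the plugin's Song into the DTO and publish that instead
        var dto = new SongDto
        {
            Title = song.Title,
            Artist = song.Artist,
            Album = song.Album,
            Year = song.Year,
            HasMetadata = song.HasMetadata,
            Filename = song.Filename
        };
        pubnub.publish("winamp_pipe", dto);
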


  • Issue allowing custom Xml Serialization/Deserialization on certain types of field

    - by sw1sh
    I've been working with Xml Serialization/Deserialization in .NET and wanted a method where the serialization/deserialization process would only be applied to certain parts of an Xml fragment. This is so I can keep certain parts of the fragment in Xml after the deserialization process. To do this I thought it would be best to create a new class (XmlLiteral) that implemented IXmlSerializable and then wrote specific code for handling the IXmlSerializable.ReadXml and IXmlSerializable.WriteXml methods. In my example below this works for serializing; however, during the deserialization process it fails to run for multiple uses of my XmlLiteral class. In my example below sTest1 gets populated correctly, but sTest2 and sTest3 are empty. I'm guessing I must be going wrong with the following lines but can't figure out why. Any ideas at all?

        Private Sub ReadXml(ByVal reader As System.Xml.XmlReader) Implements IXmlSerializable.ReadXml
            Dim StringType As String = ""
            If reader.IsEmptyElement OrElse reader.Read() = False Then
                Exit Sub
            End If
            _src = reader.ReadOuterXml()
        End Sub

    Full listing:

        Imports System
        Imports System.Xml.Serialization
        Imports System.Xml
        Imports System.IO
        Imports System.Text

        Public Class XmlLiteralExample
            Inherits System.Web.UI.Page

            Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
                Dim MyObjectInstance As New MyObject
                MyObjectInstance.aProperty = "MyValue"
                MyObjectInstance.XmlLiteral1 = New XmlLiteral("<test1>Some Value</test1>")
                MyObjectInstance.XmlLiteral2 = New XmlLiteral("<test2>Some Value</test2>")
                MyObjectInstance.XmlLiteral3 = New XmlLiteral("<test3>Some Value</test3>")

                ' quickly serialize the object to Xml
                Dim sw As New StringWriter(New StringBuilder())
                Dim s As New XmlSerializer(MyObjectInstance.[GetType]()), xmlnsEmpty As New XmlSerializerNamespaces
                xmlnsEmpty.Add("", "")
                s.Serialize(sw, MyObjectInstance, xmlnsEmpty)
                Dim XElement As XElement = XElement.Parse(sw.ToString())

                ' XElement reads as the following, so serialization works OK
                '<MyObject>
                '  <aProperty>MyValue</aProperty>
                '  <XmlLiteral1>
                '    <test1>Some Value</test1>
                '  </XmlLiteral1>
                '  <XmlLiteral2>
                '    <test2>Some Value</test2>
                '  </XmlLiteral2>
                '  <XmlLiteral3>
                '    <test3>Some Value</test3>
                '  </XmlLiteral3>
                '</MyObject>

                ' quickly deserialize the object back to an instance of MyObjectInstance2
                Dim MyObjectInstance2 As New MyObject
                Dim xmlReader As XmlReader, x As XmlSerializer
                xmlReader = XElement.CreateReader
                x = New XmlSerializer(MyObjectInstance2.GetType())
                MyObjectInstance2 = x.Deserialize(xmlReader)

                Dim sProperty As String = MyObjectInstance2.aProperty     ' equal to "MyValue"
                Dim sTest1 As String = MyObjectInstance2.XmlLiteral1.Text ' contains <test1>Some Value</test1>
                Dim sTest2 As String = MyObjectInstance2.XmlLiteral2.Text ' is empty
                Dim sTest3 As String = MyObjectInstance2.XmlLiteral3.Text ' is empty
                ' sTest2 and sTest3 should be populated but are not?
                xmlReader = Nothing
            End Sub

            Public Class MyObject
                Private _aProperty As String
                Private _XmlLiteral1 As XmlLiteral
                Private _XmlLiteral2 As XmlLiteral
                Private _XmlLiteral3 As XmlLiteral

                Public Property aProperty As String
                    Get
                        Return _aProperty
                    End Get
                    Set(ByVal value As String)
                        _aProperty = value
                    End Set
                End Property

                Public Property XmlLiteral1 As XmlLiteral
                    Get
                        Return _XmlLiteral1
                    End Get
                    Set(ByVal value As XmlLiteral)
                        _XmlLiteral1 = value
                    End Set
                End Property

                Public Property XmlLiteral2 As XmlLiteral
                    Get
                        Return _XmlLiteral2
                    End Get
                    Set(ByVal value As XmlLiteral)
                        _XmlLiteral2 = value
                    End Set
                End Property

                Public Property XmlLiteral3 As XmlLiteral
                    Get
                        Return _XmlLiteral3
                    End Get
                    Set(ByVal value As XmlLiteral)
                        _XmlLiteral3 = value
                    End Set
                End Property

                Public Sub New()
                    _XmlLiteral1 = New XmlLiteral
                    _XmlLiteral2 = New XmlLiteral
                    _XmlLiteral3 = New XmlLiteral
                End Sub
            End Class

            <System.Xml.Serialization.XmlRootAttribute(Namespace:="", IsNullable:=False)> _
            Public Class XmlLiteral
                Implements IXmlSerializable

                Private _src As String

                Public Property Text() As String
                    Get
                        Return _src
                    End Get
                    Set(ByVal value As String)
                        _src = value
                    End Set
                End Property

                Public Sub New()
                    _src = ""
                End Sub

                Public Sub New(ByVal Text As String)
                    _src = Text
                End Sub

        #Region "IXmlSerializable Members"

                Private Function GetSchema() As System.Xml.Schema.XmlSchema Implements IXmlSerializable.GetSchema
                    Return Nothing
                End Function

                Private Sub ReadXml(ByVal reader As System.Xml.XmlReader) Implements IXmlSerializable.ReadXml
                    Dim StringType As String = ""
                    If reader.IsEmptyElement OrElse reader.Read() = False Then
                        Exit Sub
                    End If
                    _src = reader.ReadOuterXml()
                End Sub

                Private Sub WriteXml(ByVal writer As System.Xml.XmlWriter) Implements IXmlSerializable.WriteXml
                    writer.WriteRaw(_src)
                End Sub

        #End Region

            End Class
        End Class
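
    A likely culprit (worth verifying against your data): after ReadOuterXml() the reader is left positioned on the wrapper element's end tag, and ReadXml is expected to consume that end tag too. When it doesn't, the deserializer loses its place and skips the remaining XmlLiteral elements, which matches "first one works, the rest are empty". A sketch of a ReadXml that also consumes the wrapper:

        Private Sub ReadXml(ByVal reader As System.Xml.XmlReader) Implements IXmlSerializable.ReadXml
            If reader.IsEmptyElement Then
                reader.Read()             ' consume the empty wrapper element
                Exit Sub
            End If
            reader.Read()                 ' move off the wrapper's start tag
            _src = reader.ReadOuterXml()  ' capture the literal payload
            reader.ReadEndElement()       ' consume the wrapper's end tag
        End Sub
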


  • Deserializing Metafile

    - by Kildareflare
    I have an application that works with Enhanced Metafiles. I am able to create them, save them to disk as .emf, and load them again, no problem. I do this by using the gdi32.dll methods and the DllImport attribute. However, to enable Version Tolerant Serialization I want to save the metafile in an object along with other data. This essentially means that I need to serialize the metafile data as a byte array and then deserialize it again in order to reconstruct the metafile.

    The problem I have is that the deserialized data would appear to be corrupted in some way, since the method that I use to reconstruct the Metafile raises a "Parameter not valid" exception. At the very least the pixel format and resolutions have changed. The code used is below.

        [DllImport("gdi32.dll")]
        public static extern uint GetEnhMetaFileBits(IntPtr hemf, uint cbBuffer, byte[] lpbBuffer);

        [DllImport("gdi32.dll")]
        public static extern IntPtr SetEnhMetaFileBits(uint cbBuffer, byte[] lpBuffer);

        [DllImport("gdi32.dll")]
        public static extern bool DeleteEnhMetaFile(IntPtr hemf);

    The application creates a metafile image and passes it to the method below.

        private byte[] ConvertMetaFileToByteArray(Image image)
        {
            byte[] dataArray = null;
            Metafile mf = (Metafile)image;
            IntPtr enhMetafileHandle = mf.GetHenhmetafile();
            uint bufferSize = GetEnhMetaFileBits(enhMetafileHandle, 0, null);
            if (enhMetafileHandle != IntPtr.Zero)
            {
                dataArray = new byte[bufferSize];
                GetEnhMetaFileBits(enhMetafileHandle, bufferSize, dataArray);
            }
            DeleteEnhMetaFile(enhMetafileHandle);
            return dataArray;
        }

    At this point the dataArray is inserted into an object and serialized using a BinaryFormatter. The saved file is then deserialized again using a BinaryFormatter and the dataArray retrieved from the object. The dataArray is then used to reconstruct the original Metafile using the following method.

        public static Image ConvertByteArrayToMetafile(byte[] data)
        {
            Metafile mf = null;
            try
            {
                IntPtr hemf = SetEnhMetaFileBits((uint)data.Length, data);
                mf = new Metafile(hemf, true);
            }
            catch (Exception ex)
            {
                System.Windows.Forms.MessageBox.Show(ex.Message);
            }
            return (Image)mf;
        }

    The reconstructed metafile is then saved to disk as a .emf (Model), at which point it can be accessed by the Presenter for display.

        private static void SaveFile(Image image, String filepath)
        {
            try
            {
                byte[] buffer = ConvertMetaFileToByteArray(image);
                File.WriteAllBytes(filepath, buffer); // will overwrite file if it exists
            }
            catch (Exception ex)
            {
                System.Windows.Forms.MessageBox.Show(ex.Message);
            }
        }

    The problem is that the save to disk fails. If this same method is used to save the original Metafile before it is serialized, everything is OK. So something is happening to the data during serialization/deserialization. Indeed, if I check the Metafile properties in the debugger I can see that the ImageFlags, PropertyID, resolution and pixel formats change:

        Original Format32bppRgb changes to Format32bppArgb
        Original Resolution 81 changes to 96

    I've trawled through Google and SO and this has helped me get this far, but I'm now stuck. Does anyone have enough experience with Metafiles / serialization to help..?

    EDIT: If I serialize/deserialize the byte array directly (without embedding it in another object) I get the same problem.
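
    One way to narrow this down (a diagnostic sketch of mine, not a fix): round-trip only the byte array through the BinaryFormatter and compare it with the original. If the arrays come back identical, the corruption is happening in the GetHenhmetafile/SetEnhMetaFileBits handle round-trip rather than in serialization, and GDI+ re-rendering the metafile at the current display resolution (hence 81 dpi becoming 96) would be consistent with that.

        // Requires System.Linq, System.IO and
        // System.Runtime.Serialization.Formatters.Binary.
        byte[] original = ConvertMetaFileToByteArray(image);

        byte[] roundTripped;
        var formatter = new BinaryFormatter();
        using (var ms = new MemoryStream())
        {
            formatter.Serialize(ms, original);   // serialize just the byte[]
            ms.Position = 0;
            roundTripped = (byte[])formatter.Deserialize(ms);
        }

        // If this prints True, serialization is not the culprit.
        Console.WriteLine("Bytes identical after round-trip: " + original.SequenceEqual(roundTripped));
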


  • jQuery change function doesn’t work properly

    - by user1493448
    I have two HTML selects with the same id and name. If I change the first select it works properly, but if I change the second select it does not work. Can someone please point out what I may be doing wrong here? Many thanks. Here is my code:

        <html>
        <head>
        <title>the title</title>
        <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
        <script type="text/javascript" language="javascript">
        $(document).ready(function() {
            $("#name").change(function(){
                $.post(
                    "data.php",
                    $("#testform").serialize(),
                    function(data) {
                        $('#stage1').html(data);
                    }
                );
                var str = $("#testform").serialize();
                $("#stage2").text(str);
            });
        });
        </script>
        </head>
        <body>
        <div id="stage1" style="background-color:blue; color: white">
            STAGE - 1
        </div>
        <form id="testform">
            <table>
                <tr>
                    <td><p>Fruit:</p></td>
                    <td>
                        <select id="name" name="name[]">
                            <option>Apple</option>
                            <option>Mango</option>
                            <option>Orange</option>
                            <option>Banana</option>
                        </select>
                    </td>
                </tr>
                <tr>
                    <td><p>Fruit:</p></td>
                    <td>
                        <select id="name" name="name[]">
                            <option>Apple</option>
                            <option>Mango</option>
                            <option>Orange</option>
                            <option>Banana</option>
                        </select>
                    </td>
                </tr>
            </table>
        </form>
        </body>
        </html>

    PHP code:

        <?php
        $fruit = $_REQUEST["name"];
        $n = count($fruit);
        for ($i = 0; $i < $n; $i++) {
            echo $fruit[$i]."<br/>";
        }
        ?>
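
    The root cause is almost certainly the duplicate id: id values must be unique within a document, and jQuery's #name selector only ever matches the first element, so the second select never gets the handler. A sketch of the usual fix is to give the selects a shared class and bind to that (the class name here is my own):

        <select class="fruit" name="name[]"> ... </select>
        <select class="fruit" name="name[]"> ... </select>

        $(document).ready(function () {
            // Bind by class so every select gets the change handler
            $(".fruit").change(function () {
                $.post("data.php", $("#testform").serialize(), function (data) {
                    $('#stage1').html(data);
                });
                $("#stage2").text($("#testform").serialize());
            });
        });
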


  • boost::serialization of mutual pointers

    - by KneLL
    First, please take a look at this code:

        class Key;
        class Door;

        class Key
        {
        public:
            int id;
            Door *pDoor;

            Key() : id(0), pDoor(NULL) {}

        private:
            friend class boost::serialization::access;

            template <typename A>
            void serialize(A &ar, const unsigned int ver)
            {
                ar & BOOST_SERIALIZATION_NVP(id) & BOOST_SERIALIZATION_NVP(pDoor);
            }
        };

        class Door
        {
        public:
            int id;
            Key *pKey;

            Door() : id(0), pKey(NULL) {}

        private:
            friend class boost::serialization::access;

            template <typename A>
            void serialize(A &ar, const unsigned int ver)
            {
                ar & BOOST_SERIALIZATION_NVP(id) & BOOST_SERIALIZATION_NVP(pKey);
            }
        };

        BOOST_CLASS_TRACKING(Key, track_selectively);
        BOOST_CLASS_TRACKING(Door, track_selectively);

        int main()
        {
            Key k1, k_in;
            Door d1, d_in;

            k1.id = 1;
            d1.id = 2;
            k1.pDoor = &d1;
            d1.pKey = &k1;

            // Save data
            {
                wofstream f1("test.xml");
                boost::archive::xml_woarchive ar1(f1);
                // !!!!! (1)
                const Key *pK = &k1;
                const Door *pD = &d1;
                ar1 << BOOST_SERIALIZATION_NVP(pK) << BOOST_SERIALIZATION_NVP(pD);
            }

            // Load data
            {
                wifstream i1("test.xml");
                boost::archive::xml_wiarchive ar1(i1);
                // !!!!! (2)
                Key *pK = &k_in;
                Door *pD = &d_in;
                // (2.1)
                //ar1 >> BOOST_SERIALIZATION_NVP(k_in) >> BOOST_SERIALIZATION_NVP(d_in);
                // (2.2)
                ar1 >> BOOST_SERIALIZATION_NVP(pK) >> BOOST_SERIALIZATION_NVP(pD);
            }
        }

    The first (1) is a simple question: is it possible to pass objects to the archive without pointers? If I simply pass objects 'as is', boost throws an exception about duplicated pointers. But I'm confused by having to create pointers just to save objects.

    The second (2) is the real trouble. If I use the (commented) line after (2.1) instead of (2.2), then boost correctly loads the first Key object (and inits its internal Door pointer pDoor), but does not init the second Door (d_in) object. After this I have an initialized k_in object with a valid pointer to a Door, and an empty d_in object. If I use line (2.2), then boost creates two Key and Door objects somewhere in memory and saves their addresses in the pointers. But I want to have my two objects k_in and d_in. So, if I copy the values of the in-memory objects to the local variables, then I store only addresses; for example, I can write code after (2.2):

        d_in.id = pD->id;
        d_in.pKey = pD->pKey;

    But in this case I store only a pointer, and the in-memory object remains in memory and I cannot delete it, because d_in.pKey would then be invalid. And I cannot perform a deep copy with operator=(), because if I write code like this:

        Key &operator=(const Key &k)
        {
            if (this != &k)
            {
                id = k.id;
                // call to Door::operator=() that calls *pKey = *d.pKey and so on
                *pDoor = *k.pDoor;
            }
            return *this;
        }

    then I will get something like a recursion of the operator=()s of Key and Door. How do I implement proper serialization of such pointers?
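
    For (2.2), the usual pattern (a sketch of how I'd expect this to work, worth testing against your archive) is to let Boost allocate the object graph itself: start from null pointers, let the archive reconstruct both objects and wire up the cycle, and take ownership of the results instead of copying them into stack variables:

        // Load into fresh heap objects; Boost's pointer tracking rebuilds the cycle.
        Key *pK = NULL;
        Door *pD = NULL;
        {
            wifstream i1("test.xml");
            boost::archive::xml_wiarchive ar1(i1);
            ar1 >> BOOST_SERIALIZATION_NVP(pK) >> BOOST_SERIALIZATION_NVP(pD);
        }
        // pK->pDoor == pD and pD->pKey == pK at this point.
        // The caller owns the loaded objects:
        delete pK;
        delete pD;
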


  • Python dictionary key missing

    - by Greg K
    I thought I'd put together a quick script to consolidate the CSS rules I have distributed across multiple CSS files; then I can minify it. I'm new to Python but figured this would be a good exercise to try a new language. My main loop isn't parsing the CSS as I thought it would. I populate a list with selectors parsed from the CSS files, to return the CSS rules in order. Later in the script, the list contains an element that is not found in the dictionary.

        for line in self.file.readlines():
            if self.hasSelector(line):
                selector = self.getSelector(line)
                if selector not in self.order:
                    self.order.append(selector)
            elif selector and self.hasProperty(line):
                # rules.setdefault(selector,[]).append(self.getProperty(line))
                property = self.getProperty(line)
                properties = [] if selector not in rules else rules[selector]
                if property not in properties:
                    properties.append(property)
                rules[selector] = properties
                # print "%s :: %s" % (selector, "".join(rules[selector]))

        return rules

    Error encountered:

        $ css-combine combined.css test1.css test2.css
        Traceback (most recent call last):
          File "css-combine", line 108, in <module>
            c.run(outfile, stylesheets)
          File "css-combine", line 64, in run
            [(selector, rules[selector]) for selector in parser.order],
        KeyError: 'p'

    Swap the inputs:

        $ css-combine combined.css test2.css test1.css
        Traceback (most recent call last):
          File "css-combine", line 108, in <module>
            c.run(outfile, stylesheets)
          File "css-combine", line 64, in run
            [(selector, rules[selector]) for selector in parser.order],
        KeyError: '#header_.title'

    I've done some quirky things in the code, like subbing underscores for spaces in dictionary key names in case that was an issue - maybe this is a benign precaution? Depending on the order of the inputs, a different key cannot be found in the dictionary.
    The script:

        #!/usr/bin/env python

        import optparse
        import re

        class CssParser:
            def __init__(self):
                self.file = False
                self.order = [] # store rules assignment order

            def parse(self, rules = {}):
                if self.file == False:
                    raise IOError("No file to parse")

                selector = False
                for line in self.file.readlines():
                    if self.hasSelector(line):
                        selector = self.getSelector(line)
                        if selector not in self.order:
                            self.order.append(selector)
                    elif selector and self.hasProperty(line):
                        # rules.setdefault(selector,[]).append(self.getProperty(line))
                        property = self.getProperty(line)
                        properties = [] if selector not in rules else rules[selector]
                        if property not in properties:
                            properties.append(property)
                        rules[selector] = properties
                        # print "%s :: %s" % (selector, "".join(rules[selector]))

                return rules

            def hasSelector(self, line):
                return True if re.search("^([#a-z,\.:\s]+){", line) else False

            def getSelector(self, line):
                s = re.search("^([#a-z,:\.\s]+){", line).group(1)
                return "_".join(s.strip().split())

            def hasProperty(self, line):
                return True if re.search("^\s?[a-z-]+:[^;]+;", line) else False

            def getProperty(self, line):
                return re.search("([a-z-]+:[^;]+;)", line).group(1)

        class Consolidator:
            """Class to consolidate CSS rule attributes"""

            def run(self, outfile, files):
                parser = CssParser()
                rules = {}
                for file in files:
                    try:
                        parser.file = open(file)
                        rules = parser.parse(rules)
                    except IOError:
                        print "Cannot read file: " + file
                    finally:
                        parser.file.close()

                self.serialize(
                    [(selector, rules[selector]) for selector in parser.order],
                    outfile
                )

            def serialize(self, rules, outfile):
                try:
                    f = open(outfile, "w")
                    for rule in rules:
                        f.write(
                            "%s {\n\t%s\n}\n\n" % (
                                " ".join(rule[0].split("_")),
                                "\n\t".join(rule[1])
                            )
                        )
                except IOError:
                    print "Cannot write output to: " + outfile
                finally:
                    f.close()

        def init():
            op = optparse.OptionParser(
                usage="Usage: %prog [options] <output file> <stylesheet1> " +
                      "<stylesheet2> ... <stylesheetN>",
                description="Combine CSS rules spread across multiple " +
                            "stylesheets into a single file"
            )

            opts, args = op.parse_args()

            if len(args) < 3:
                if len(args) == 1:
                    print "Error: No input files specified.\n"
                elif len(args) == 2:
                    print "Error: One input file specified, nothing to combine.\n"
                op.print_help();
                exit(-1)

            return [opts, args]

        if __name__ == '__main__':
            opts, args = init()
            outfile, stylesheets = [args[0], args[1:]]
            c = Consolidator()
            c.run(outfile, stylesheets)

    Test CSS file 1:

        body {
            background-color: #e7e7e7;
        }

        p {
            margin: 1em 0em;
        }

    File 2:

        body {
            font-size: 16px;
        }

        #header .title {
            font-family: Tahoma, Geneva, sans-serif;
            font-size: 1.9em;
        }

        #header .title a, #header .title a:hover {
            color: #f5f5f5;
            border-bottom: none;
            text-shadow: 2px 2px 3px rgba(0, 0, 0, 1);
        }

    Thanks in advance.
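
    One plausible culprit (assuming the real stylesheets indent their properties, as CSS files usually do): hasProperty uses `^\s?`, which tolerates at most one leading whitespace character, so a property line indented with two or more spaces or a tab never matches. The selector still lands in self.order via hasSelector, but no entry is ever created in rules, which produces exactly this KeyError. A sketch of the fix:

        import re

        def hasProperty(self, line):
            # \s* instead of \s? so indented lines like "    margin: 1em 0em;" match
            return bool(re.search(r"^\s*[a-z-]+:[^;]+;", line))

    A second thing worth fixing while you're in there: the mutable default argument `rules = {}` in parse() is shared across calls, which can cause surprising carry-over between runs in the same process.
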


  • Recursive XSD Help

    - by Alon
    Hi, I'm trying to learn a little bit of XSD and I'm trying to create an XSD for this xml:

        <Document>
            <TextBox Name="Username" />
            <TextBox Name="Password" />
        </Document>

    ... so there's an Element, which is an abstract complex type. Every element can have child elements, and so on. Document and TextBox extend Element. I tried this:

        <?xml version="1.0" encoding="utf-8" ?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
            <xs:element name="Document">
                <xs:complexType>
                    <xs:complexContent>
                        <xs:extension base="Element">
                        </xs:extension>
                    </xs:complexContent>
                </xs:complexType>
            </xs:element>
            <xs:complexType name="Element" abstract="true">
                <xs:sequence minOccurs="0" maxOccurs="unbounded">
                    <xs:element name="Element" type="Element"></xs:element>
                </xs:sequence>
            </xs:complexType>
            <xs:complexType name="TextBox">
                <xs:complexContent>
                    <xs:extension base="Element">
                        <xs:attribute name="Name" type="xs:string" />
                    </xs:extension>
                </xs:complexContent>
            </xs:complexType>
        </xs:schema>

    I compiled it to C# with Xsd2Code, and now I try to deserialize it:

        var serializer = new XmlSerializer(typeof(Document));
        var document = (Document)serializer.Deserialize(new FileStream("Document1.xml", FileMode.Open));

        foreach (var element in document.Element1)
        {
            Console.WriteLine(((TextBox)element).Name);
        }
        Console.ReadLine();

    and it doesn't print anything. When I try to serialize it like so:

        var serializer = new XmlSerializer(typeof(Document));
        var document = new Document();
        document.Element1 = new List<Element>();
        document.Element1.Add(new TextBox() { Name = "abc" });
        serializer.Serialize(new FileStream("d.xml", FileMode.Create), document);

    ...the output is:

        <?xml version="1.0"?>
        <Document xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
            <Element1>
                <Element xsi:type="TextBox">
                    <Element1 />
                    <Name>abc</Name>
                </Element>
            </Element1>
        </Document>

    When it should be:

        <Document>
            <TextBox Name="abc" />
        </Document>

    Any ideas how to fix the XSD, or another code generator? Thanks.
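
    If hand-tuning the generated classes is an option, the shape you're after – a child element named after the concrete type, with Name as an attribute – is what XmlSerializer produces from [XmlElement] on the collection plus [XmlAttribute] on the property. A sketch (class and property names here are mine, not Xsd2Code's output):

        public class Document
        {
            // Each derived type maps to its own element name, so no
            // <Element xsi:type="..."> wrapper is emitted.
            [XmlElement("TextBox", typeof(TextBox))]
            public List<Element> Elements { get; set; }
        }

        public abstract class Element { }

        public class TextBox : Element
        {
            [XmlAttribute]   // serializes as Name="..." on <TextBox>
            public string Name { get; set; }
        }

    With these attributes, serializing a Document containing a TextBox named "abc" yields <Document><TextBox Name="abc" /></Document>, and deserializing the original file populates the list with TextBox instances.
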


  • ASP.NET MVC - Form Post always redirects when I just want to bind a json result to a div

    - by Saxman
    Hi all, I'm having a little problem with a JSON result. I'm submitting a contact form, and after the submission I just want to return some JSON data (indicating either success or failure, and displaying a message) back to the view, without causing a redirect. But it keeps redirecting me to the action. Here is the code:

    HTML

        <div id="contactForm2" class="grid_6">
            <div id="contactFormContainer">
                @using (Html.BeginForm(MVC.Home.ActionNames.ContactForm, MVC.Home.Name, FormMethod.Post, new { id = "contactForm" }))
                {
                    <p>
                        <input type="text" tabindex="1" size="22" value="" id="contactName" class="text_input required" name="contactName" />
                        <label for="contactName">
                            <strong class="leftSpace">Your Name (required)</strong></label></p>
                    <p>
                        <input type="text" tabindex="2" size="22" value="" id="contactEmail" class="text_input required" name="contactEmail" />
                        <label for="contactEmail">
                            <strong class="leftSpace">Email (required)</strong></label></p>
                    <p>
                        <input type="text" tabindex="2" size="22" value="" id="contactPhone" class="text_input" name="contactPhone" />
                        <label for="contactPhone">
                            <strong class="leftSpace">Phone</strong></label></p>
                    <p>
                        <label>
                            <strong class="leftSpace n">Message (required)</strong></label>
                        <textarea tabindex="4" rows="4" cols="56" id="contactMessage" class="text_area required" name="contactMessage"></textarea></p>
                    <p>
                        <input type="submit" value="Send" tabindex="5" id="contactSubmit" class="button submit" name="contactSubmit" /></p>
                }
            </div>
            <div id="contactFormStatus">
            </div>
        </div>

    Controller

        [HttpPost]
        public virtual JsonResult ContactForm(FormCollection formCollection)
        {
            var name = formCollection["contactName"];
            var email = formCollection["contactEmail"];
            var phone = formCollection["contactPhone"];
            var message = formCollection["contactMessage"];

            if (string.IsNullOrEmpty(name) || string.IsNullOrEmpty(email) || string.IsNullOrEmpty(message))
            {
                return Json(new
                {
                    success = false,
                    message = "Please complete all the required fields so that I can get back to you. Thanks."
                });
            }

            // Insert contact form data here...

            return Json(new
            {
                success = true,
                message = "Your inquery has been sent. Thank you."
            });
        }

    javascript

        $(document).ready(function () {
            $('#contactSubmit').live('click', function () {
                var form = $('#contactForm');
                var formData = form.serialize();

                $.post('/Home/ContactForm', formData,
                    function (result) {
                        var status = $('#contactFormStatus');
                        if (result.success) {
                            $('#contactForm')[0].reset;
                        }
                        status.append(result.message);
                    },
                    'json'
                );
                return false;
            });
        });

    I've also tried this javascript, but also got a redirect:

        $(document).ready(function () {
            $('#contactSubmit').live('click', function () {
                var form = $('#contactForm');
                var formData = form.serialize();

                $.ajax({
                    type: 'POST',
                    url: '/Home/ContactForm',
                    data: formData,
                    success: function (result) {
                        SubmitContactResult(result);
                    },
                    cache: false
                });
            });

            function SubmitContactResult(result) {
                var status = $('#contactFormStatus');
                if (result.success) {
                    $('#contactForm')[0].reset;
                }
                status.append(result.message);
            }
        });

    Any idea what's going on with my code? Thank you very much.
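
    A detail worth noting (my reading of the snippets above): the second click handler never returns false or calls preventDefault, so the browser's native submit always fires and the redirect is guaranteed; in the first version, any script error before return false has the same effect. A more robust sketch binds to the form's submit event and cancels it up front:

        $(document).ready(function () {
            $('#contactForm').live('submit', function (e) {
                e.preventDefault(); // stop the native post before doing anything else

                $.post(this.action, $(this).serialize(), function (result) {
                    var status = $('#contactFormStatus');
                    if (result.success) {
                        $('#contactForm')[0].reset(); // note: reset is a method call
                    }
                    status.append(result.message);
                }, 'json');
            });
        });
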


  • Session state provider and global.asax not interacting properly?

    - by yodaj007
    I'm experimenting with creating a crude, proof-of-concept session state store provider in ASP.NET. But I've got a problem and I'm not sure what to do about it. The website works properly when using the InProc provider: Session_Start in global.asax is called on session creation, as it should be. But not if I implement my own provider. The Session_Start method from global.asax isn't being called at all when a new session is being created (that is, when I delete the session state file). Am I missing something important here?

        public class TestSessionProvider : SessionStateStoreProviderBase
        {
            private const string ROOT = "c:\\projects\\sessions\\";
            private const int TIMEOUT_MINUTES = 30;

            public string ApplicationName
            {
                get { return HostingEnvironment.ApplicationVirtualPath; }
            }

            private string GetFilename(string id)
            {
                string filename = String.Format("{0}_{1}.session", ApplicationName, id);
                char[] invalids = Path.GetInvalidPathChars();
                for (int i = 0; i < invalids.Length; i++)
                {
                    filename = filename.Replace(invalids[i], '_');
                }
                return Path.Combine(ROOT, Path.GetFileName(filename));
            }

            public override void Initialize(string name, NameValueCollection config)
            {
                if (config == null)
                    throw new ArgumentNullException("config");
                if (String.IsNullOrEmpty(name))
                    name = "Sporkalicious";
                base.Initialize(name, config);
            }

            public override SessionStateStoreData CreateNewStoreData(HttpContext context, int timeout)
            {
                SessionStateItemCollection items = new SessionStateItemCollection();
                HttpStaticObjectsCollection objects = SessionStateUtility.GetSessionStaticObjects(context);
                return new SessionStateStoreData(items, objects, TIMEOUT_MINUTES);
            }

            /// <summary>
            /// The CreateUninitializedItem method is used with cookieless sessions when the regenerateExpiredSessionId
            /// attribute is set to true, which causes SessionStateModule to generate a new SessionID value when an
            /// expired session ID is encountered.
            /// </summary>
            /// <param name="context"></param>
            /// <param name="id"></param>
            /// <param name="timeout"></param>
            public override void CreateUninitializedItem(HttpContext context, string id, int timeout)
            {
                FileStream fs = File.Open(GetFilename(id), FileMode.CreateNew, FileAccess.Write, FileShare.None);
                BinaryWriter writer = new BinaryWriter(fs);
                SessionStateItemCollection coll = new SessionStateItemCollection();
                coll.Serialize(writer);
                fs.Flush();
                fs.Close();
            }

            public override SessionStateStoreData GetItem(HttpContext context, string id, out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions)
            {
                return GetItemExclusive(context, id, out locked, out lockAge, out lockId, out actions);
            }

            public override SessionStateStoreData GetItemExclusive(HttpContext context, string id, out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions)
            {
                locked = false;
                lockAge = TimeSpan.FromDays(1);
                lockId = 0;
                actions = SessionStateActions.None;

                if (!File.Exists(GetFilename(id)))
                {
                    return null;
                }

                FileStream fs = File.Open(GetFilename(id), FileMode.Open, FileAccess.ReadWrite, FileShare.Read);
                BinaryReader reader = new BinaryReader(fs);
                SessionStateItemCollection coll = SessionStateItemCollection.Deserialize(reader);
                fs.Close();
                return new SessionStateStoreData(coll, new HttpStaticObjectsCollection(), TIMEOUT_MINUTES);
            }

            public override void ReleaseItemExclusive(HttpContext context, string id, object lockId)
            {
            }

            public override void RemoveItem(HttpContext context, string id, object lockId, SessionStateStoreData item)
            {
                File.Delete(GetFilename(id));
            }

            public override void ResetItemTimeout(HttpContext context, string id)
            {
            }

            public override void SetAndReleaseItemExclusive(HttpContext context, string id, SessionStateStoreData item, object lockId, bool newItem)
            {
                if (!File.Exists(GetFilename(id)))
                {
                    CreateUninitializedItem(context, id, 10);
                }

                FileStream fs = File.Open(GetFilename(id), FileMode.Truncate, FileAccess.Write, FileShare.None);
                BinaryWriter writer = new BinaryWriter(fs);
                SessionStateItemCollection coll = (SessionStateItemCollection)item.Items;
                coll.Serialize(writer);
                fs.Flush();
                fs.Close();
            }

            public override bool SetItemExpireCallback(SessionStateItemExpireCallback expireCallback)
            {
                return false;
            }

            public override void InitializeRequest(HttpContext context) { }
            public override void EndRequest(HttpContext context) { }
            public override void Dispose() { }
        }
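
    For completeness, a custom provider only takes effect when it is registered in web.config; a minimal sketch of that wiring (the type and assembly names here are assumptions for illustration):

        <system.web>
          <sessionState mode="Custom" customProvider="TestSessionProvider">
            <providers>
              <add name="TestSessionProvider"
                   type="MyWebApp.TestSessionProvider, MyWebApp" />
            </providers>
          </sessionState>
        </system.web>
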


  • Generating cache file for Twitter rss feed

    - by Kerri
    I'm working on a site with a simple php-generated twitter box, with user timeline tweets pulled from the user_timeline rss feed and cached to a local file to cut down on loads, and as backup for when twitter goes down. I based the caching on this: http://snipplr.com/view/8156/twitter-cache/. It all seemed to be working well yesterday, but today I discovered the cache file was blank. Deleting it then loading again generated a fresh file.

    The code I'm using is below. I've edited it to try to get it to work with what I was already using to display the feed, and probably messed something crucial up. The changes I made are the following (and I strongly believe that any of these could be the cause):

    - Revised the time difference code (the linked example seemed to use a custom function that wasn't included in the code).
    - Removed the "serialize" function from the "fwrites". This is purely because I couldn't figure out how to unserialize once I loaded it in the display code. I truthfully don't understand the role that serialize plays or how it works, so I'm sure I should have kept it in. If that's the case I just need to understand where/how to deserialize in the second part of the code so that it can be parsed.
    - Removed the $rss variable in favor of just loading up the cache file in my original tweet display code.

    So, here are the relevant parts of the code I used:

        <?php
        $feedURL = "http://twitter.com/statuses/user_timeline/#######.rss";

        // START CACHING
        $cache_file = dirname(__FILE__).'/cache/twitter_cache.rss';

        // Start with the cache
        if (file_exists($cache_file)) {
            $mtime = (strtotime("now") - filemtime($cache_file));
            if ($mtime > 600) {
                $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss');
                $cache_static = fopen($cache_file, 'wb');
                fwrite($cache_static, $cache_rss);
                fclose($cache_static);
            }
            echo "<!-- twitter cache generated ".date('Y-m-d h:i:s', filemtime($cache_file))." -->";
        }
        else {
            $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/#######.rss');
            $cache_static = fopen($cache_file, 'wb');
            fwrite($cache_static, $cache_rss);
            fclose($cache_static);
        }
        // END CACHING

        // START DISPLAY
        $doc = new DOMDocument();
        $doc->load($cache_file);
        $arrFeeds = array();
        foreach ($doc->getElementsByTagName('item') as $node) {
            $itemRSS = array (
                'title' => $node->getElementsByTagName('title')->item(0)->nodeValue,
                'date' => $node->getElementsByTagName('pubDate')->item(0)->nodeValue
            );
            array_push($arrFeeds, $itemRSS);
        }
        // the rest of the formatting and display code.... }
        ?>

    ETA 6/17: Nobody can help…? I'm thinking it has something to do with writing a blank cache file over a good one when twitter is down, because otherwise I imagine that this should be happening every ten minutes when the cache file is overwritten again, but it doesn't happen that frequently. I made the following change to the part where it checks how old the file is to overwrite it:

        $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss');
        if ($mtime > 600 && $cache_rss != '') {
            $cache_static = fopen($cache_file, 'wb');
            fwrite($cache_static, $cache_rss);
            fclose($cache_static);
        }

    …so now, it will only write the file if it's over ten minutes old and there's actual content retrieved from the rss page. Do you think this will work?
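
    One thing that stands out in that 6/17 edit (my reading, worth double-checking): the file_get_contents() call now sits outside the age check, so the feed is fetched on every page load and the cache no longer saves any requests. The loose != '' comparison does happen to catch both false and an empty string, but a strict check is clearer. A sketch that only fetches when the cache is stale and refuses to clobber a good cache:

        if ($mtime > 600) {
            // only hit Twitter when the cache is stale
            $cache_rss = @file_get_contents($feedURL);
            // don't overwrite a good cache with a failed or empty response
            if ($cache_rss !== false && trim($cache_rss) !== '') {
                file_put_contents($cache_file, $cache_rss, LOCK_EX);
            }
        }
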


  • PHP Form - Empty input enter this text - Validation

    - by James Skelton
    No doubt a very simple question for someone with PHP knowledge. I have a form with a datepicker. All is fine: when a user has selected a date, the email is sent with:

        Date: 2012 04 10

    But if the user has skipped this and left it blank (as I have not made this required), I would like it to send as:

        Date: Not Entered (<-- or something)

    Instead, at the minute, of course it reads:

        Date:

    Form input:

        <input type="text" class="form-control" id="datepicker" name="datepicker" size="50" value="Date Of Wedding" />

    This is the validator:

        $(document).ready(function(){
            //validation contact form
            $('#submit').click(function(event){
                event.preventDefault();
                var fname = $('#name').val();
                var validInput = new RegExp(/^[a-zA-Z0-9\s]+$/);
                var email = $('#email').val();
                var validEmail = new RegExp(/^([a-zA-Z0-9_\.\-])+\@(([a-zA-Z0-9\-])+\.)+([a-zA-Z0-9]{2,4})+$/);
                var message = $('#message').val();

                if(fname==''){
                    showError('<div class="alert alert-danger">Please enter your name.</div>', $('#name'));
                    $('#name').addClass('required');
                    return;}
                if(!validInput.test(fname)){
                    showError('<div class="alert alert-danger">Please enter a valid name.</div>', $('#name'));
                    $('#name').addClass('required');
                    return;}
                if(email==''){
                    showError('<div class="alert alert-danger">Please enter an email address.</div>', $('#email'));
                    $('#email').addClass('required');
                    return;}
                if(!validEmail.test(email)){
                    showError('<div class="alert alert-danger">Please enter a valid email.</div>', $('#email'));
                    $('#email').addClass('required');
                    return;}
                if(message==''){
                    showError('<div class="alert alert-danger">Please enter a message.</div>', $('#message'));
                    $('#message').addClass('required');
                    return;}

                // setup some local variables
                var request;
                var form = $(this).closest('form');

                // serialize the data in the form
                var serializedData = form.serialize();

                // fire off the request to /contact.php
                request = $.ajax({
                    url: "contact.php",
                    type: "post",
                    data: serializedData
                });

                // callback handler that will be called on success
                request.done(function (response, textStatus, jqXHR){
                    $('.contactWrap').show( 'slow' ).fadeIn("slow").html(' <div class="alert alert-success centered"><h3>Thank you! Your message has been sent.</h3></div> ');
                });

                // callback handler that will be called on failure
                request.fail(function (jqXHR, textStatus, errorThrown){
                    // log the error to the console
                    console.error( "The following error occured: "+ textStatus, errorThrown );
                });
            });

            //remove 'required' class and hide error
            $('input, textarea').keyup( function(event){
                if($(this).hasClass('required')){
                    $(this).removeClass('required');
                    $('.error').hide("slow").fadeOut("slow");
                }
            });

            // show error
            showError = function (error, target){
                $('.error').removeClass('hidden').show("slow").fadeIn("slow").html(error);
                $('.error').data('target', target);
                $(target).focus();
                console.log(target);
                console.log(error);
                return;
            }
        });
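
    On the PHP side (a sketch; I'm assuming the mail script reads $_POST['datepicker'] – adjust the key to whatever yours uses), substitute a fallback before building the message body. Note that the field ships with the placeholder value "Date Of Wedding", so it's worth treating that the same as an empty field:

        $date = isset($_POST['datepicker']) ? trim($_POST['datepicker']) : '';

        // Treat the untouched placeholder the same as an empty field
        if ($date === '' || $date === 'Date Of Wedding') {
            $date = 'Not Entered';
        }

        $body = "Date: " . $date;
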


  • Ajax using Rails

    - by Steve
    Hi, I have favourite and un-favourite functionality in my application and I am using jQuery. This functionality works partially. The page gets loaded, and when I click the 'favourite' button (it is inside the add_favourite_div element), it sends an XHR request and the post is set as favourite. Then a new div called "remove_favourite_div" takes its place. Now when I click the remove favourite button (which is part of remove_favourite_div), it sends a normal HTTP request instead of an XHR.

    The structure when the page gets loaded the first time:

        <div id="favourite">
            <div id="add_favourite_div">
                <form method="post" id="add_favourite" action="/viewpost/add_favourite">
                    <div style="margin: 0pt; padding: 0pt; display: inline;">
                        <input type="hidden" value="w873BgYHLxQmadUalzMRUC+1ql4AtP3U7f78dT8x9ho=" name="authenticity_token">
                    </div>
                    <input type="hidden" value="3" name="Favourite[post_id]" id="Favourite_place_id">
                    <input type="hidden" value="2" name="Favourite[user_id]" id="Favourite_user_id">
                    <input type="submit" value="Favourite" name="commit"><br>
                </form>
            </div>
        </div>

    DOM after clicking on the favourite button:

        <div id="favourite">
            <div id="remove_favourite_div">
                <form method="post" id="remove_favourite" action="/viewpost/remove_favourite">
                    <div style="margin: 0pt; padding: 0pt; display: inline;">
                        <input type="hidden" value="w873BgYHLxQmadUalzMRUC+1ql4AtP3U7f78dT8x9ho=" name="authenticity_token">
                    </div>
                    <input type="hidden" value="3" name="Favourite[post_id]" id="Favourite_place_id">
                    <input type="hidden" value="2" name="Favourite[user_id]" id="Favourite_user_id">
                    <input type="submit" value="UnFavourite" name="commit"><br>
                </form>
            </div>
        </div>

    In my application.js, I have two functions to trigger the XHR requests:

        $("#add_favourite").submit(function(){
            alert("add favourite");
            action = $(this).attr("action")
            $.post(action,$(this).serialize(),null,"script");
            return false;
        });

        $("#remove_favourite").submit(function(){
            alert("remove favourite");
            action = $(this).attr("action");
            $.post(action,$(this).serialize(),null,"script");
            return false;
        });

    Here, when the post is initially not a favourite, the favourite button is displayed, and when I click on the button, $("#add_favourite").submit gets called and the unfavourite form is displayed correctly. But now, when I click on the un-favourite button, $("#remove_favourite").submit does not get called. The same happens in both directions, I mean favourite-unfavourite and unfavourite-favourite. Can someone please help me solve this? Thanks.
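
    The usual explanation for exactly this symptom: $("#remove_favourite").submit(...) binds only to elements that exist at document-ready, and the remove form arrives later via AJAX, so it has no handler and the browser performs a normal POST. A sketch of the fix using live() (from the jQuery era this code belongs to), so the binding survives the DOM replacement:

        // Delegate the submit so forms injected later are still intercepted
        $("#add_favourite, #remove_favourite").live("submit", function () {
            $.post($(this).attr("action"), $(this).serialize(), null, "script");
            return false; // suppress the normal HTTP post
        });
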


  • ASP.NET MVC ‘Extendable-hooks’ – ControllerActionInvoker class

    - by nmarun
    There's a class ControllerActionInvoker in ASP.NET MVC. This can be used as one of the hook-points to allow customization of your application. Watching Brad Wilson's Advanced MP3 from MVC Conf inspired me to write about this class.

    What MSDN says: "Represents a class that is responsible for invoking the action methods of a controller." Well, if MSDN says it, I think I can instill a fair amount of confidence into what the class does. But just to get to the details, I also looked into the source code for MVC. Seems like the base class Controller is where an IActionInvoker is initialized:

        protected virtual IActionInvoker CreateActionInvoker() {
            return new ControllerActionInvoker();
        }

    In the ControllerActionInvoker (the O-O-B behavior), there are different 'versions' of the InvokeActionMethod() method that actually call the action method in question and return an instance of type ActionResult:

        protected virtual ActionResult InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary<string, object> parameters) {
            object returnValue = actionDescriptor.Execute(controllerContext, parameters);
            ActionResult result = CreateActionResult(controllerContext, actionDescriptor, returnValue);
            return result;
        }

    I guess that's enough on the 'behind-the-scenes' of this class. Let's see how we can use this class to hook up extensions.

    Say I have a requirement that the user should be able to get different renderings of the same output, like html, xml, json, csv and so on. The user will type in the output format in the url and should then get the result accordingly. For example:

        http://site.com/RenderAs/ – renders the default way (the razor view)
        http://site.com/RenderAs/xml
        http://site.com/RenderAs/csv
        … and so on

    where RenderAs is my controller. There are many ways of doing this and I'm using a custom ControllerActionInvoker class (even though this might not be the best way to accomplish this). For this, my one and only route in Global.asax.cs is:

        routes.MapRoute("RenderAsRoute", "RenderAs/{outputType}",
            new { controller = "RenderAs", action = "Index", outputType = "" });

    Here the controller name is 'RenderAsController' and the action that'll get called (always) is the Index action. The outputType parameter will map to the type of output requested by the user (xml, csv…). I intend to display a list of food items for this example:

        public class Item
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public Cuisine Cuisine { get; set; }
        }

        public class Cuisine
        {
            public int CuisineId { get; set; }
            public string Name { get; set; }
        }

    Coming to my 'RenderAsController' class, I generate an IList<Item> to represent my model:

        private static IList<Item> GetItems()
        {
            Cuisine cuisine = new Cuisine { CuisineId = 1, Name = "Italian" };
            Item item = new Item { Id = 1, Name = "Lasagna", Cuisine = cuisine };
            IList<Item> items = new List<Item> { item };
            item = new Item { Id = 2, Name = "Pasta", Cuisine = cuisine };
            items.Add(item);
            //...
            return items;
        }

    My action method looks like:

        public IList<Item> Index(string outputType)
        {
            return GetItems();
        }

    There are two things that stand out in this action method. The first and the most obvious one being that the return type is not of type ActionResult (or one of its derivatives). Instead I'm passing the type of the model itself (IList<Item> in this case).
    We’ll convert this to some type of ActionResult in our custom controller action invoker class later. The second thing (a little subtle) is that I’m not doing anything with the outputType value that is passed to this action method. This value will be in the RouteData dictionary, and we’ll use it in our custom invoker class as well. It’s time to hook up our invoker class. First, I’ll override the Initialize() method of my RenderAsController class:

        protected override void Initialize(RequestContext requestContext)
        {
            base.Initialize(requestContext);
            string outputType = string.Empty;

            // read the outputType from the RouteData dictionary
            if (requestContext.RouteData.Values["outputType"] != null)
            {
                outputType = requestContext.RouteData.Values["outputType"].ToString();
            }

            // my custom invoker class
            ActionInvoker = new ContentRendererActionInvoker(outputType);
        }

    Coming to the main part of the discussion - the ContentRendererActionInvoker class:

        public class ContentRendererActionInvoker : ControllerActionInvoker
        {
            private readonly string _outputType;

            public ContentRendererActionInvoker(string outputType)
            {
                _outputType = outputType.ToLower();
            }
            //...
        }

    So the outputType value that was read from the RouteData, which was passed in from the url, is stored here in a private field. Moving to the crux of this article, I now override the CreateActionResult method:

        protected override ActionResult CreateActionResult(ControllerContext controllerContext, ActionDescriptor actionDescriptor, object actionReturnValue)
        {
            if (actionReturnValue == null)
                return new EmptyResult();

            ActionResult result = actionReturnValue as ActionResult;
            if (result != null)
                return result;

            // This is where the magic happens
            // Depending on the value in the _outputType field,
            // return an appropriate ActionResult
            switch (_outputType)
            {
                case "json":
                {
                    JavaScriptSerializer serializer = new JavaScriptSerializer();
                    string json = serializer.Serialize(actionReturnValue);
                    return new ContentResult { Content = json, ContentType = "application/json" };
                }
                case "xml":
                {
                    XmlSerializer serializer = new XmlSerializer(actionReturnValue.GetType());
                    using (StringWriter writer = new StringWriter())
                    {
                        serializer.Serialize(writer, actionReturnValue);
                        return new ContentResult { Content = writer.ToString(), ContentType = "text/xml" };
                    }
                }
                case "csv":
                    controllerContext.HttpContext.Response.AddHeader("Content-Disposition", "attachment; filename=items.csv");
                    return new ContentResult
                    {
                        Content = ToCsv(actionReturnValue as IList<Item>),
                        ContentType = "application/ms-excel"
                    };
                case "pdf":
                    string filePath = controllerContext.HttpContext.Server.MapPath("~/items.pdf");
                    controllerContext.HttpContext.Response.AddHeader("content-disposition", "attachment; filename=items.pdf");
                    ToPdf(actionReturnValue as IList<Item>, filePath);
                    return new FileContentResult(StreamFile(filePath), "application/pdf");

                default:
                    controllerContext.Controller.ViewData.Model = actionReturnValue;
                    return new ViewResult
                    {
                        TempData = controllerContext.Controller.TempData,
                        ViewData = controllerContext.Controller.ViewData
                    };
            }
        }

    A big method, that! The hook I was talking about above is actually here: this is where the different kinds / formats of output get returned based on the output type requested in the url.
    When the _outputType is not set (string.Empty, as set in the Global.asax.cs file), the razor view gets rendered - see the default case of the switch above. This is the default behavior in most MVC applications, where a view (webform/razor) gets rendered in the browser. As you see here, this gets returned as a ViewResult. But for an outputType of json/xml/csv, a ContentResult gets returned, while for pdf, a FileContentResult is returned. Here is how the different kinds of output look (screenshots omitted in this excerpt). This is how we can leverage this feature of ASP.NET MVC to deliver a better application. I’ve used the iTextSharp library to convert to the pdf format. Mike gives quite a bit of detail regarding this library here. You can download the sample code here. (You’ll get an option to download once you open the link.) Verdict: Hot chocolate: $3; Reebok shoes: $50; Your first car: $3000; Being able to extend a web application: Priceless.
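    The ToCsv() and ToPdf() helpers used in the switch above are not shown in the excerpt. A minimal ToCsv() sketch, assuming the Item/Cuisine model from earlier (and deliberately ignoring CSV quoting/escaping, which this demo data doesn't need), might look like this:

        // Hypothetical helper, not from the original post; requires System.Text.
        // Flattens the IList<Item> model into a small CSV document.
        private static string ToCsv(IList<Item> items)
        {
            StringBuilder csv = new StringBuilder();
            csv.AppendLine("Id,Name,Cuisine");

            foreach (Item item in items)
            {
                // naive formatting: fine for the demo data, but real CSV needs quoting/escaping
                csv.AppendLine(string.Format("{0},{1},{2}", item.Id, item.Name, item.Cuisine.Name));
            }

            return csv.ToString();
        }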

    Read the article

  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
    A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly - is the ability to map POST form variables to simple parameters of a Web API method. For example, imagine you have this form and you want to post the data to a Web API endpoint via AJAX:

        <form>
            Name: <input type="text" name="name" value="Rick" />
            Value: <input type="text" name="value" value="12" />
            Entered: <input type="text" name="entered" value="12/01/2011" />
            <input type="button" id="btnSend" value="Send" />
        </form>
        <script type="text/javascript">
            $("#btnSend").click(function () {
                $.post("samples/PostMultipleSimpleValues?action=kazam",
                    $("form").serialize(),
                    function (result) { alert(result); });
            });
        </script>

    or you might do this more explicitly, specifying the POST values directly by hand:

        $.post("samples/PostMultipleSimpleValues?action=kazam",
            { name: "Rick", value: 1, entered: "12/01/2012" },
            function (result) { alert(result); });

    On the wire this generates a simple POST request with url-encoded values in the content:

        POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1
        Host: localhost
        User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
        Accept: application/json
        Connection: keep-alive
        Content-Type: application/x-www-form-urlencoded; charset=UTF-8
        X-Requested-With: XMLHttpRequest
        Referer: http://localhost/AspNetWebApi/FormPostTest.html
        Content-Length: 41
        Pragma: no-cache
        Cache-Control: no-cache

        name=Rick&value=12&entered=12%2F10%2F2011

    Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately, Web API can't handle this request out of the box. If I create a method like this:

        [HttpPost]
        public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null)
        {
            return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}", name, value, entered, action);
        }

    you'll find that you get an HTTP 404 error and:

        { "Message": "No HTTP resource was found that matches the request URI…"}

    Yes, it's possible to pass multiple POST parameters, of course, but Web API expects you to use model binding for this - mapping the POST parameters to a strongly typed .NET object, not to single parameters. Alternately you can accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values. If you're using JSON only, the dynamic JObject/JValue objects might also work. Model binding is fine in many use cases, but it can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks, the one-'parameter mapping type'-per-method-signature approach can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case and - maybe more to the point - a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters?" is already a frequent one… So this is something I think is fairly important, but unfortunately missing in the base Web API installation.
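    For contrast, the model-binding route that Web API does support out of the box would look something like this - the PostValueModel wrapper class here is illustrative, not from the original post:

        // Out-of-the-box alternative: bind the POSTed form variables
        // to one strongly typed object instead of individual parameters.
        public class PostValueModel
        {
            public string Name { get; set; }
            public int Value { get; set; }
            public DateTime Entered { get; set; }
        }

        [HttpPost]
        public string PostMultipleSimpleValues(PostValueModel model, string action = null)
        {
            // the complex type binds from the request body;
            // 'action' still binds from the query string
            return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                                 model.Name, model.Value, model.Entered, action);
        }

    This works, but it is exactly the per-signature wrapper class the post argues adds up quickly across many AJAX endpoints.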
    Creating a Custom Parameter Binder

    Luckily Web API is greatly extensible, and there's a way to create a custom parameter binding to provide this functionality! Although this solution took me a long while to find, and then only with the help of some folks at Microsoft (thanks Hong Mei!!!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:

        public class SimplePostVariableParameterBinding : HttpParameterBinding
        {
            private const string MultipleBodyParameters = "MultipleBodyParameters";

            public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor)
                : base(descriptor)
            {
            }

            /// <summary>
            /// Check for simple binding parameters in POST data. Bind POST
            /// data as well as query string data
            /// </summary>
            public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
                                                     HttpActionContext actionContext,
                                                     CancellationToken cancellationToken)
            {
                // Body can only be read once, so read and cache it
                NameValueCollection col = TryReadBody(actionContext.Request);

                string stringValue = null;
                if (col != null)
                    stringValue = col[Descriptor.ParameterName];

                // try reading query string if we have no POST/PUT match
                if (stringValue == null)
                {
                    var query = actionContext.Request.GetQueryNameValuePairs();
                    if (query != null)
                    {
                        var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower());
                        if (matches.Count() > 0)
                            stringValue = matches.First().Value;
                    }
                }

                object value = StringToType(stringValue);

                // Set the binding result here
                SetValue(actionContext, value);

                // now, we can return a completed task with no result
                TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>();
                tcs.SetResult(default(AsyncVoid));
                return tcs.Task;
            }

            private object StringToType(string stringValue)
            {
                object value = null;

                if (stringValue == null)
                    value = null;
                else if (Descriptor.ParameterType == typeof(string))
                    value = stringValue;
                else if (Descriptor.ParameterType == typeof(int))
                    value = int.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(Int32))
                    value = Int32.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(Int64))
                    value = Int64.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(decimal))
                    value = decimal.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(double))
                    value = double.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(DateTime))
                    value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(bool))
                {
                    value = false;
                    if (stringValue == "true" || stringValue == "on" || stringValue == "1")
                        value = true;
                }
                else
                    value = stringValue;

                return value;
            }

            /// <summary>
            /// Read and cache the request body
            /// </summary>
            /// <param name="request"></param>
            /// <returns></returns>
            private NameValueCollection TryReadBody(HttpRequestMessage request)
            {
                object result = null;

                // try to read out of cache first
                if (!request.Properties.TryGetValue(MultipleBodyParameters, out result))
                {
                    // parsing the string like firstname=Hongmei&lastname=Ge
                    result = request.Content.ReadAsFormDataAsync().Result;
                    request.Properties.Add(MultipleBodyParameters, result);
                }

                return result as NameValueCollection;
            }

            private struct AsyncVoid
            {
            }
        }

    The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding fires only if the incoming parameter is a simple type (that gets defined later when I hook up the binding), so it never fires on complex types. For the first parameter of a request, the binding reads the request body into a NameValueCollection and caches that in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and then caches it; subsequent parameters then use the cached POST value collection. Once the form collection is available, the value of the parameter is read and translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped. Once you have the ParameterBinding in place, the binding has to be assigned. This is done, along with all other Web API configuration tasks, at application startup in global.asax's Application_Start:

        GlobalConfiguration.Configuration.ParameterBindingRules
            .Insert(0, (HttpParameterDescriptor descriptor) =>
            {
                var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods;

                // Only apply this binder on POST and PUT operations
                if (supportedMethods.Contains(HttpMethod.Post) ||
                    supportedMethods.Contains(HttpMethod.Put))
                {
                    var supportedTypes = new Type[]
                    {
                        typeof(string), typeof(int), typeof(decimal),
                        typeof(double), typeof(bool), typeof(DateTime)
                    };

                    if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0)
                        return new SimplePostVariableParameterBinding(descriptor);
                }

                // let the default bindings do their work
                return null;
            });

    The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map, and the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hookup code are in place, you can pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings.

    Summary

    Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own without much luck. It took Hong Mei at Microsoft to provide a base example as I asked around, so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible, making it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved - in the last two months I've had two customers with projects that decided not to use Web API in AJAX-heavy SPA applications because this POST variable mapping wasn't available. This might actually change their minds and get them to switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers, and while I could have worked around this (with an untyped JObject or the form collection, mostly), having proper POST-to-parameter mapping makes things much easier.
    I said this in my last post on POST data and I'll say it again here: I think POST-to-method-parameter mapping should have shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can simply be added - especially since this binding doesn't affect existing bindings.

    Resources:
    - SimplePostVariableParameterBinding Source on GitHub
    - Global.asax hookup source
    - Mapping URL Encoded Post Values in ASP.NET Web API

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api, AJAX

    Read the article

  • ASP.NET MVC Validation Complete

    - by Ricardo Peres
    OK, so let’s talk about validation. Most people are probably familiar with the out-of-the-box validation attributes that MVC knows about from the System.ComponentModel.DataAnnotations namespace, such as EnumDataTypeAttribute, RequiredAttribute, StringLengthAttribute, RangeAttribute and RegularExpressionAttribute, plus CompareAttribute from the System.Web.Mvc namespace. All of these validators inherit from ValidationAttribute and perform server-side as well as client-side validation. In order to use them, you must include the JavaScript files MicrosoftMvcValidation.js, or jquery.validate.js plus jquery.validate.unobtrusive.js, depending on whether you want to use Microsoft’s own library or jQuery. No significant difference exists, but jQuery is more extensible. You can also create your own attribute by inheriting from ValidationAttribute, but if you want client-side behavior you must also implement IClientValidatable (all of the out-of-the-box validation attributes implement it) and supply your own JavaScript validation function that mimics its server-side counterpart. Of course, you must reference the JavaScript file where the validation function is declared. Let’s see an example, validating even numbers. First, the validation attribute:

        [Serializable]
        [AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = true)]
        public class IsEvenAttribute : ValidationAttribute, IClientValidatable
        {
            protected override ValidationResult IsValid(Object value, ValidationContext validationContext)
            {
                Int32 v = Convert.ToInt32(value);

                if (v % 2 == 0)
                {
                    return (ValidationResult.Success);
                }
                else
                {
                    return (new ValidationResult("Value is not even"));
                }
            }

            #region IClientValidatable Members

            public IEnumerable<ModelClientValidationRule> GetClientValidationRules(ModelMetadata metadata, ControllerContext context)
            {
                yield return (new ModelClientValidationRule() { ValidationType = "iseven", ErrorMessage = "Value is not even" });
            }

            #endregion
        }

    The iseven validation function is declared like this in JavaScript, using jQuery validation:

        jQuery.validator.addMethod('iseven', function (value, element, params)
        {
            return ((parseInt(value) % 2) == 0);
        });

        jQuery.validator.unobtrusive.adapters.add('iseven', [], function (options)
        {
            options.rules['iseven'] = options.params;
            options.messages['iseven'] = options.message;
        });

    Do keep in mind that this is a simple example; for instance, we are not using parameters, which may be required for some more advanced scenarios. As a side note, if you implement a custom validator that also requires a JavaScript function, you’ll probably want to keep them together. One way to achieve this is by including the JavaScript file as an embedded resource in the same assembly where the custom attribute is declared.
    You do this by setting its Build Action to Embedded Resource inside Visual Studio. Then you have to declare an attribute at assembly level, perhaps in the AssemblyInfo.cs file:

        [assembly: WebResource("SomeNamespace.IsEven.js", "text/javascript")]

    In your views, if you want to include a JavaScript file from an embedded resource, you can use this code:

        public static class UrlExtensions
        {
            private static readonly MethodInfo getResourceUrlMethod = typeof(AssemblyResourceLoader).GetMethod("GetWebResourceUrlInternal", BindingFlags.NonPublic | BindingFlags.Static);

            public static IHtmlString Resource<TType>(this UrlHelper url, String resourceName)
            {
                return (Resource(url, typeof(TType).Assembly.FullName, resourceName));
            }

            public static IHtmlString Resource(this UrlHelper url, String assemblyName, String resourceName)
            {
                String resourceUrl = getResourceUrlMethod.Invoke(null, new Object[] { Assembly.Load(assemblyName), resourceName, false, false, null }).ToString();
                return (new HtmlString(resourceUrl));
            }
        }

    And on the view:

        <script src="<%: this.Url.Resource("SomeAssembly", "SomeNamespace.IsEven.js") %>" type="text/javascript"></script>

    Then there’s the CustomValidationAttribute. It allows you to externalize your validation logic to another class, so you have to tell it which type and method to use. The method can be static as well as instance; if it is instance, the class cannot be abstract and must have a public parameterless constructor. It can be applied to a property as well as a class. It does not, however, support client-side validation. Let’s see an example declaration:

        [CustomValidation(typeof(ProductValidator), "OnValidateName")]
        public String Name
        {
            get;
            set;
        }

    The validation method needs this signature:

        public static ValidationResult OnValidateName(String name)
        {
            if ((String.IsNullOrWhiteSpace(name) == false) && (name.Length <= 50))
            {
                return (ValidationResult.Success);
            }
            else
            {
                return (new ValidationResult(String.Format("The name has an invalid value: {0}", name), new String[] { "Name" }));
            }
        }

    Note that it can be either static or instance, and it must return a ValidationResult-derived class. ValidationResult.Success is null, so any non-null value is considered a validation error. The single method argument must match the property type to which the attribute is attached, or the class, in case it is applied to a class:

        [CustomValidation(typeof(ProductValidator), "OnValidateProduct")]
        public class Product
        {
        }

    The signature must thus be:

        public static ValidationResult OnValidateProduct(Product product)
        {
        }

    Continuing with attribute-based validation, another possibility is RemoteAttribute. This allows specifying a controller and an action method just for performing the validation of a property or set of properties. This works in a client-side AJAX way and can be very useful. Let’s see an example, starting with the attribute declaration and proceeding to the action method implementation:

        [Remote("Validate", "Validation")]
        public String Username
        {
            get;
            set;
        }

    The controller action method must contain an argument that can be bound to the property:

        public ActionResult Validate(String username)
        {
            return (this.Json(true, JsonRequestBehavior.AllowGet));
        }

    If in your result JSON object you include a string instead of the true value, it will be considered an error, and the validation will fail.
    This string will be displayed as the error message, if you have included it in your view. You can also use the remote validation approach for validating your entire entity, by including all of its properties as included fields in the attribute and having an action method that receives an entity instead of a single property:

        [Remote("Validate", "Validation", AdditionalFields = "Price")]
        public String Name
        {
            get;
            set;
        }

        public Decimal Price
        {
            get;
            set;
        }

    The action method will then be:

        public ActionResult Validate(Product product)
        {
            return (this.Json("Product is not valid", JsonRequestBehavior.AllowGet));
        }

    Only the property to which the attribute is applied and the additional properties referenced by AdditionalFields will be populated in the entity instance received by the validation method. The same rule stated previously applies: if you return anything other than true, it will be used as the validation error message for the entity. The remote validation is triggered automatically, but you can also call it explicitly. In the next example, I am causing the full entity validation - see the call to serialize():

        function validate()
        {
            var form = $('form');
            var data = form.serialize();
            var url = '<%: this.Url.Action("Validation", "Validate") %>';

            var result = $.ajax
            (
                {
                    type: 'POST',
                    url: url,
                    data: data,
                    async: false
                }
            ).responseText;

            if (result)
            {
                //error
            }
        }

    Finally, by implementing IValidatableObject, you can implement your validation logic on the object itself; that is, you make it self-validatable. This will only work server-side: the ModelState.IsValid property will be set to false on the controller’s action method if the validation is unsuccessful. Let’s see how to implement it:

        public class Product : IValidatableObject
        {
            public String Name
            {
                get;
                set;
            }

            public Decimal Price
            {
                get;
                set;
            }

            #region IValidatableObject Members

            public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
            {
                if ((String.IsNullOrWhiteSpace(this.Name) == true) || (this.Name.Length > 50))
                {
                    yield return (new ValidationResult(String.Format("The name has an invalid value: {0}", this.Name), new String[] { "Name" }));
                }

                if ((this.Price <= 0) || (this.Price > 100))
                {
                    yield return (new ValidationResult(String.Format("The price has an invalid value: {0}", this.Price), new String[] { "Price" }));
                }
            }

            #endregion
        }

    The errors returned will be matched against the model properties through the MemberNames property of the ValidationResult class and will be displayed in their proper labels, if present on the view. On the controller action method you can check for model validity by looking at ModelState.IsValid, and you can get the actual error messages and related properties by examining all of the entries in the ModelState dictionary:

        Dictionary<String, String> errors = new Dictionary<String, String>();

        foreach (KeyValuePair<String, ModelState> keyValue in this.ModelState)
        {
            String key = keyValue.Key;
            ModelState modelState = keyValue.Value;

            foreach (ModelError error in modelState.Errors)
            {
                errors[key] = error.ErrorMessage;
            }
        }

    And these are the ways to perform data validation in ASP.NET MVC. Don’t forget to use them!
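    To round things off, here is what applying the custom IsEvenAttribute from the first example to a model looks like - the Order class is hypothetical, just to show the attribute in use:

        public class Order
        {
            [IsEven] // server-side IsValid() plus the 'iseven' client rule registered above
            public Int32 Quantity { get; set; }
        }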

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure execution timing of specific methods in your application, you usually use one of these:

    Time measurement methods and their potential pitfalls:
    - Stopwatch: The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor-specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power-saving mode. But things do change, luckily - Intel's Software Developer's Manual vol. 3B, section 16.11.1 says: "Invariant TSC: The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C- and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."
    - DateTime.Now: Good, but it has only a resolution of 16ms, which may not be enough if you want more accuracy.

    Reporting methods and their potential pitfalls:
    - Console.WriteLine: OK if not called too often.
    - Debug.Print: Are you really measuring performance with debug builds? Shame on you.
    - Trace.WriteLine: Better, but you need to plug in a good output listener like a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use a tracing library that measures the timing for you, so you only need to decorate some methods with tracing and can later verify whether something has changed for the better or worse. In my previous article I compared measuring performance with quantum mechanics. This analogy works surprisingly well. When you measure a quantum system, there is a lower limit to how accurately you can measure something: the Heisenberg uncertainty relation tells us that you cannot measure a particle's momentum and position at the same time with infinite accuracy. For programmers, the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application, you will need to store them somewhere. The fastest storage space besides the CPU cache is memory - but if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application, you will also need to store that data somewhere, which costs you memory and execution time. These constraints are always there, and regardless of how good the marketing of performance and memory profiler vendors is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality that is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit to:

    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again, tracing only the specific methods, to check if the method is really worth optimizing.
    3. Optimize it.
    4. Measure again.
    5. Be surprised that your optimization has made things worse.
    6. Think harder.
    7. Implement something that really works.
    8. Measure again.
    9. Finished! Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. DataContractSerializer was used for serialization, and I was not sure whether XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffers format - a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

        using ProtoBuf;
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Reflection;
        using System.Runtime.Serialization;

        [DataContract, Serializable]
        class Data
        {
            [DataMember(Order = 1)]
            public int IntValue { get; set; }

            [DataMember(Order = 2)]
            public string StringValue { get; set; }

            [DataMember(Order = 3)]
            public bool IsActivated { get; set; }

            [DataMember(Order = 4)]
            public BindingFlags Flags { get; set; }
        }

        class Program
        {
            static MemoryStream _Stream = new MemoryStream();

            static MemoryStream Stream
            {
                get
                {
                    _Stream.Position = 0;
                    _Stream.SetLength(0);
                    return _Stream;
                }
            }

            static void Main(string[] args)
            {
                DataContractSerializer ser = new DataContractSerializer(typeof(Data));
                Data data = new Data
                {
                    IntValue = 100,
                    IsActivated = true,
                    StringValue = "Hi this is a small string value to check if serialization does work as expected"
                };

                var sw = Stopwatch.StartNew();
                int Runs = 1000 * 1000;
                for (int i = 0; i < Runs; i++)
                {
                    //ser.WriteObject(Stream, data);
                    Serializer.Serialize<Data>(Stream, data);
                }
                sw.Stop();
                Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
                Console.ReadLine();
            }
        }

    The results are indeed promising:

        Serializer      Time in ms    N objects
        protobuf-net           807    1,000,000
        DataContract         4,402    1,000,000

    Nearly a factor of 5 faster, and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good), but the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but at a cost: the first time a type is de/serialized, it uses some very smart code-gen, which does not come for free. Let's try to measure that by setting the Runs value in our performance test app not to one million but to 1:

        Serializer      Time in ms    N objects
        protobuf-net            85            1
        DataContract            24            1

    The code-gen overhead is significant and can take up to 200ms for more complex types. The break-even point, where the code-gen cost is amortized by the faster serialization performance, is (assuming small objects) somewhere between 20,000 and 40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative (a sketch follows after this article). It sounded good until I measured again and found that my optimizations so far did not help much. After looking more deeply at the profiling data, I found that one of the 1000 calls took 50% of the time. So how do I find out which call it was? Normal profilers fall short at this discipline. A (totally undeservedly) relatively unknown profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out whether that stack was something special. Unfortunately, the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK. When you profile a complex system with many interconnected processes, you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disk, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes, you can basically forget the first run, because the OS disk cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator, you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two more times to get some feeling for how stable the numbers are. Oh, and try the same thing the next day - it might be that the bottleneck you found yesterday is gone today. Thanks to GC and other random stuff, it can become pretty hard to find things worth optimizing if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing, I make the code changes and measure again to check whether something has changed. If it has got slower and I am certain that my change should have made it faster, I can blame the GC again. The thing is, if you optimize and allocate fewer objects, the GC times will shift to some other location. If you are unlucky, that will make your now-faster code slower, because you see GCs at times where none happened before. This is where it gets really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application, so you can change things quickly in a reliable manner; then the normal profilers start working again, too. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall-clock analysis you need to understand how your system works and which threads (you may have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: Commercial profiler shootout.
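    The BinaryWriter approach mentioned above is not shown in the article; a minimal sketch for the sample Data class might look like this (hand-rolled, so reader and writer must be kept in sync manually):

        // Hand-rolled serialization of the sample Data class: no code-gen cost,
        // no type metadata on the wire, just the raw field values in a fixed order.
        static void Write(BinaryWriter writer, Data data)
        {
            writer.Write(data.IntValue);
            writer.Write(data.StringValue ?? string.Empty);
            writer.Write(data.IsActivated);
            writer.Write((int)data.Flags);
        }

        static Data Read(BinaryReader reader)
        {
            return new Data
            {
                IntValue = reader.ReadInt32(),
                StringValue = reader.ReadString(),
                IsActivated = reader.ReadBoolean(),
                Flags = (BindingFlags)reader.ReadInt32()
            };
        }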

    Read the article

  • Backing up SQL Azure

    - by Herve Roggero
    That's it!!! After many days and nights... and an amazing set of challenges, I just released the Enzo Backup for SQL Azure BETA product (http://www.bluesyntax.net). Clearly, this was one of the most challenging projects I have done so far. Why??? Because creating a highly redundant system - one that expects failures at all times, for an operation that could take anywhere from a couple of minutes to a couple of hours - while still making sure that the operation completes at some point, was remarkably challenging. Some routines have more error trapping than actual code... Here are a few things I had to take into account:

    - Exponential backoff (explained in another post; see the sketch below)
    - Dual dynamic determination of the number of rows to back up
    - Dynamic reduction of the batch rows used to restore the data
    - Implementation of a flexible BULK insert API that the tool could use
    - Implementation of a custom storage REST API to handle automatic retries
    - Automatic data chunking based on blob sizes
    - Compression of data
    - Implementation of the Task Parallel Library at multiple levels, including deserialization of Azure Table rows and backup/restore operations
    - Full or partial restore operations
    - Implementation of a Ghost class to serialize/deserialize data tables

    And that's just a partial list... I will explain what some of these mean in future blog posts. A lot of the complexity had to do with implementing a form of retry logic, depending on the resource and the operation.
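    Exponential backoff is covered in the other post mentioned above; the core of the idea is a retry loop that doubles its wait after each transient failure. A minimal sketch - the retry count and initial delay are illustrative, not the product's actual values:

        // Retry a transient operation, doubling the delay after each failure.
        // Requires System and System.Threading.
        static T ExecuteWithBackoff<T>(Func<T> operation, int maxRetries)
        {
            TimeSpan delay = TimeSpan.FromMilliseconds(500);

            for (int attempt = 0; ; attempt++)
            {
                try
                {
                    return operation();
                }
                catch (Exception)
                {
                    if (attempt >= maxRetries)
                        throw; // give up and surface the error

                    Thread.Sleep(delay);
                    delay = TimeSpan.FromMilliseconds(delay.TotalMilliseconds * 2);
                }
            }
        }

    A real backup tool would only catch the throttling and timeout exceptions SQL Azure actually raises, rather than every exception.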

    Read the article

  • CodePlex Daily Summary for Wednesday, March 21, 2012

    CodePlex Daily Summary for Wednesday, March 21, 2012

    Popular Releases:
    - Metodología General Ajustada - MGA: 02.02.01: John's changes: all six identification forms are updated so that the grids refresh after saving, so that records are no longer duplicated on save. An installer is generated with the changes, and the database is updated with the latest changes to the Cash Flow stored procedure.
    - xyzzy+: xyzzy+ 0.2.2.235+0: SHA1: 4a0258736e7df52bb6e2304178b7fcf02414ae17. Prerequisites: Microsoft Visual C++ 2010 SP1 Redistributable Package (x86) (ja). Features: Unicode, Visual Style. Known problems: character encodings other than Shift_JIS and UTF-X may be broken; functions related to character encodings may not work (e.g. iso-code-char).
    - Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (March 2012) for .NET 4.0: The March release of Phalanger 3.0 significantly enhances performance, adds new features and fixes many issues. New features: Phalanger Tools installable for Visual Studio 2011 Beta; a "filter" extension with several of the most-used filters implemented; a DomDocument HTML parser with a loadHTML() method; a PHP-compatible mail() function; the PHP 5.4 T_CALLABLE token; the PHP 5.4 "callable" type hint; PCRE: UTF32 characters in range support; configuration supports <c...
    - Nearforums - ASP.NET MVC forum engine: Nearforums v8.0: Version 8.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: internationalization; a custom authentication provider; access control lists for forums and threads. Webdeploy package checksum: abc62990189cf0d488ef915d4a55e4b14169bc01. Visit Roadmap for more details.
    - BIDS Helper: BIDS Helper 1.6: This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new: Analysis Services Tabular Smart Diff; Tabular Actions Editor; Tabular HideMemberIf; Tabular Pre-Build ...
    - SQL Monitor - managing sql server performance: SQLMon 4.2 alpha 12: 1. Improved process visualizer, which now shows how many deadlocks there are and what the locked objects are. 2. Fixed some other problems.
    - Json.NET: Json.NET 4.5 Release 1: New features: Windows 8 Metro build; JsonTextReader automatically reads ISO strings as dates; added DateFormatHandling to control whether dates are written in the MS format or ISO format, with ISO as the default; added DateTimeZoneHandling to control reading and writing DateTime time zone details; added async serialize/deserialize methods to JsonConvert; added Path to JsonReader/JsonWriter/ErrorContext and exceptions w...
    - SCCM Client Actions Tool: SCCM Client Actions Tool v1.11: SCCM Client Actions Tool v1.11 is the latest version. It comes with the following changes since the last version: fixed a bug where ping and cmd.exe kept running in an endless loop after action progress was finished; fixed update checking from the Codeplex RSS feed. The tool is downloadable as a ZIP file that contains four files: ClientActionsTool.hta - the tool itself; Cmdkey.exe - a command-line tool for managing cached credentials, needed for the alternate credentials feature when running the HTA...
    - WebSocket4Net: WebSocket4Net 0.5: Changes in this release: fixed the wss default-port bug; improved JsonWebSocket; supported setting the client access policy protocol for Silverlight; fixed a handshake issue in Silverlight; fixed a bug where the "Host" field in the handshake didn't contain the port if the port is not the default; supported passing in an Origin parameter for handshaking; supported reacting to pings from the server side; fixed a bug in data sending; fixed the bug of sending a closing handshake with no message, which would cause an excepti...
    - SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.5: Changes included in this release: supported closing-handshake queue checking; improved the JSON subprotocol; supported sending pings from server to client; fixed a bug about sending a closing handshake with no message; refactored the code to improve protocol compatibility; fixed a bug about sub-protocol configuration loading in Mono; improved BasicSubProtocol; added JsonWebSocketSession.
    - Daun Management Studio: Daun Management Studio 0.1 (Alpha Version): These are the alpha application packages for Daun Management Studio, a tool to manage MongoDB Server. Please visit our official website http://www.daun-project.com
    - Survey™ - web survey & form engine: Survey™ 2.0: The new stable Survey™ Project 2.0.0.1 version contains many new features. Technical changes: use of jQuery, ASTreeview, tabs, tooltips and a new menu provider. Features & bugfixes: survey list and search function; folder structure for surveys; new menu structure; library list; new library fields; user list and search functions; layout options for a survey with CSS, page header and footer; new IP filter security feature; enhanced token management; new question fields such as ID, Alias...
    - SmartNet: V1.0.0.0: DY SmartNet V1.0
    - Project Vanquish: 0.0.3: Implemented SSAO and also added a new hemispheric light.
    - Relational data Transfer Application: Data Transfer Application: Helps move data scenarios from one relational database to another without moving entire tables.
    - Media.Net: 0.1: This is the first version.
    - FolderDrive: FolderDrive release 0.01 (alpha): This is the first alpha release of the FolderDrive utility. Known problems: displays a popup message at every startup (needs to be shown only at first start). Plans for the next release: more options; startup behavior fixes; adding permanent drive bindings that don't require the app to be constantly running; ClickOnce deployment (?)
    - Javascript .NET: Javascript .NET v0.6: Upgraded to the latest stable branch of v8 (/tags/3.9.18) and switched to using their scons build system. We no longer include v8 source code as part of this project's source code. Simultaneous multithreaded use of v8 is now supported (v8 Isolates), although different contexts may not share objects or call each other. A 64-bit .Net 4.0 DLL is now included. (The download now includes x86 and x64 for both .Net 3.5 and .Net 4.0.)
    - MyRouter (Virtual WiFi Router): MyRouter 1.0.6: This release should be more stable. There were a few bug fixes, including the x64 issue as well as an error popping up when MyRouter started, which was caused by a NULL value.
    - Finestra Virtual Desktops: 2.5.4501: This is a very minor update release. Please see the information about the 2.5 and 2.5.4500 releases for more information on recent changes. This update did not even have an automatic update triggered for it. Adds error checking and reporting to all threads, not only those with message loops.

    New Projects:
    - 320 TWIN System: twin system
    - Artificial Intelligence Optimization: Optimization of artificial intelligence
    - Author-it DITA Importer: Author-it plug-in that imports DITA XML content into a library.
    - Drag Animated Panel: DragAnimatedPanel is a WPF (Windows Presentation Foundation) panel that lets you drag and rearrange elements with animation. It's developed in C#.
    - Empires: Create an empire - in space!
    - EpLibrary: EpLibrary makes it easier for Visual C++ developers to develop an application. You will no longer have to create the basic functionality over and over again. It's developed in Visual C++ 2008.
    - estructuradedatos2012: Our final project; we need hosting. An action game with Wii Remote support.
    - ExcelTestRunner: With Excel Test Runner, numerical models (e.g. quantitative libraries) in Excel can be used to directly feed your .NET unit tests with input data and test outcomes. Developed in C# and does not use automation.
    - Fast Binary Serialization: fastbinaryserializer is based on local class methods serializing properties unboxed to streams. Besides that, it can serialize objects partially.
    - Iveely Search Engine: A search engine written in C#.
    - IveelyOS (Iveely Operating System): (description lost to encoding errors)
    - JoySys: MVC project - eCommerce
    - kinectlearningbswu09: Bachelor project about a motion-controlled learning game with Kinect.
    - LabTech: A program created as a laboratory for testing hardware, virtual solutions, network simulation and complex problems. It also serves for load-testing servers, networks and applications. Includes: a network simulator, an application tester, a virtual server and database servers.
    - LINQPad examples: LINQPad code examples
    - Projet LIF7: LIF7 project - an RPG, spring 2012.
    - ReCaptcha Validator Plugin for Kooboo CMS for adding content or sending feedback: ReCaptcha validator plugin for Kooboo CMS for adding content or sending feedback e-mail.
    - Security Foundation -- WCF based SSO: This project started as a WCF-based SSO solution that serves ASP.NET websites (through membership providers) and other WinForms / web services. Then we realized that we needed to bring in claims-based functionality and make it work as our own identity foundation.
    - SharePoint 2010 InfoPath Forms Hub: The SharePoint 2010 InfoPath Forms Hub enables SharePoint users to consume all browser-enabled InfoPath forms deployed across the entire SharePoint farm from one single web part.
    - trident_library: (description lost to encoding errors)
    - User Accounts Manager: An application that enables operations on users in a given Google Apps domain. It uses the Google Apps API to communicate with the servers.
    - USI Reporting: Project USI Reporting
    - WAYWO Enterprise Conversation System: Enterprise Conversation System
    - xgcBase: my helper
    - (unnamed Japanese project): name and description lost to encoding errors; see http://d.hatena.ne.jp/gsf_zero1

    Read the article

  • An XML file or Database?

    - by webnoob
    I am re-writing a section of my site and am trying to decide how much of a rewrite this will be. At the moment I have a web service feed that generates an XML file once per day. I then use this XML file on my website to generate the general structure. I am trying to decide whether this information should be located in the database or stay in the XML file. The file can range from 4mb to 12mb, and its depth can go on and on, so I have to recurse to find the data I want. I use the .NET serializer classes and store the deserialized object in a global variable to avoid re-deserializing it each time the page is loaded. My reasons for thinking a database would be better are:

    - I would know exactly where I am in the file by using an internal ID, so I wouldn't have to recurse the file to get information.
    - I wouldn't have to load / deserialize the XML and could just use my already open database connections.
    - Searching for the data would be quicker(?), as I would just perform an SQL query rather than recursing the file.

    Has anyone got any ideas on which is better, and which option uses more resources on the server or would be quicker? EDIT: The file is read every time the web page is loaded (although only deserialized once). It isn't written to by standard users (only by an admin task that runs in the middle of the night). This is my initial investigation before mocking up.
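    A sketch of the "deserialize once, cache globally" pattern the question describes, assuming a hypothetical SiteStructure root type for the feed:

        // Deserialize the daily XML once and hand out the cached object graph.
        // SiteStructure stands in for whatever root type the feed maps to.
        public static class StructureCache
        {
            private static readonly object _syncRoot = new object();
            private static SiteStructure _cached;

            public static SiteStructure Get(string path)
            {
                lock (_syncRoot)
                {
                    if (_cached == null)
                    {
                        XmlSerializer serializer = new XmlSerializer(typeof(SiteStructure));
                        using (FileStream stream = File.OpenRead(path))
                        {
                            _cached = (SiteStructure)serializer.Deserialize(stream);
                        }
                    }
                    return _cached;
                }
            }
        }

    Even with the cache, every lookup still walks the object graph - which is exactly the recursion cost the database option with indexed IDs would eliminate.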

    Read the article

  • LibGdx efficient data saving/loading?

    - by grimrader22
    Currently, my LibGDX game consists of a 512 x 512 map of tiles and entities such as players and monsters. I am wondering how to efficiently save and load the data of my levels. At the moment I am using JSON serialization for each class I want to save. I implement the Json.Serializable interface for all of these classes and write only the variables that are necessary. So my map consists of 512 x 512 tiles - that's roughly 260,000 tiles. Each tile on the map consists of a Tile object, which points to some final Tile object like a GRASS_TILE or a STONE_TILE. When I serialize each level tile, the final Tile that it points to is re-serialized over and over again, so if I have 100 tiles all pointing to GRASS_TILE, the data of GRASS_TILE is written 100 times over. When I go to load/deserialize my objects, 100 GrassTile objects are created, but they are each their own object; they no longer point to the final tile object. This makes reading/writing files very slow. If I were to abandon JSON serialization, to my knowledge my next best option would be saving the level data to a SQL database. Unless there is a way to speed up serializing/deserializing 260,000 tiles, I may have to do this. Is this a good idea? Could I really write that many tiles to the database efficiently? To sum all this up: I am trying to save my levels using JSON serialization, but it is VERY slow. What other options do I have for saving the data of so many tiles? I should also note that the JSON serialization is not slow on a PC; it is only VERY slow on a mobile device. Since file writing/reading is so slow on mobile devices, what can I do?
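    One common fix, independent of the engine, is to stop serializing tile objects altogether: store one small ID per tile and resolve it against the shared registry on load. Sketched here in C# to match the other examples on this page - the Tile type, map size and ID scheme are hypothetical, and libGDX's Json API allows the same approach with a custom read/write:

        // Persist only tile IDs; the shared GRASS_TILE/STONE_TILE instances
        // are looked up from a registry instead of being written 260,000 times.
        static void SaveTiles(BinaryWriter writer, short[,] tileIds)
        {
            for (int x = 0; x < 512; x++)
                for (int y = 0; y < 512; y++)
                    writer.Write(tileIds[x, y]); // 512 KB total, no per-tile object data
        }

        static Tile[,] LoadTiles(BinaryReader reader, IDictionary<short, Tile> registry)
        {
            Tile[,] tiles = new Tile[512, 512];
            for (int x = 0; x < 512; x++)
                for (int y = 0; y < 512; y++)
                    tiles[x, y] = registry[reader.ReadInt16()]; // same shared instance, not a copy
            return tiles;
        }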

    Read the article

  • ASP.NET Session Management

    - by geekrutherford
    Great article (a little old but still relevant) about the inner workings of session management in ASP.NET: Underpinnings of the Session State Management Implementation in ASP.NET. Using StateServer, and the BinaryFormatter serialization it entails, caused me quite the headache over the last few days. Curiously, the w3wp.exe process appears to consume more memory when utilizing StateServer and storing somewhat large and complex data types in session. Users began experiencing Out Of Memory exceptions in the production environment. The stack trace showed the failures related to serialization using the BinaryFormatter. Using remote debugging against our QA server, I noted that the code in the application functioned without issue. The exception occurred outside the context of the application itself, when the request had completed and the web server was trying to serialize session state into the StateServer. The short-term solution is switching back to the InProc method. Thus far this has proven to consume considerably less memory and has caused no issues. Long term, the complex object stored in session will be off-loaded to a web service used to access the information directly from the database, outside the context of the object used to encapsulate it.
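    For reference, the mode switch itself is a one-line web.config change; something along these lines (the connection string and timeout values are illustrative):

        <!-- InProc: session lives inside w3wp.exe; no serialization happens -->
        <sessionState mode="InProc" timeout="20" />

        <!-- StateServer: session is BinaryFormatter-serialized out to the
             ASP.NET State Service at the end of each request that uses session -->
        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=localhost:42424"
                      timeout="20" />

    Note that StateServer also requires every type placed in session to be marked [Serializable], which is what surfaced the BinaryFormatter failures described above.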

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >