Search Results

Search found 32343 results on 1294 pages for 'good practice'.

  • Looking for a book which teaches how to write applications (as opposed to writing code)

    - by rumtscho
    I am not a developer. I have coded for fun and for university projects in several languages, and during an internship, I have written code which is still in use by a department of Fortune Global 500 company. I also have extensive theoretical knowledge of software engineering - process models, architecture, project management, UI construction using Lauesen's virtual windows... you get the picture. But I am not involved with software development in my job. I recently decided to start coding for fun again, and now I have some free days to do it. But this time, I want to do it right. I want to write a real, useful application, install it on my devices and use it myself. Also I want to publish it for others to use, should they want to do so. I am vaguely aware that there is more to writing an application than to writing code. There is stuff like version control software, decision for the right IDE, having a suite of unit tests, producing an installation package - and probably lots of other things I never thought of but which must be taken care of in a proper application, as opposed to a bunch of classes I am running from my IDE. All this is stuff I should know before I start, but I have not learned it. Coding books touch on some of the subjects like IDE choice, but don't go into detail, and are not exhaustive. Theoretical software engineering textbooks are even less helpful. So is there a book which teaches exactly that? I know that I can find information on each of these topics on the Internet, but I'd rather have a systematic book exhaustively listing all the things I should take care of if I want to create a good application, and offering the currently accepted solutions for them. In the best case, it will be language- and platform independent, but if you know of a good book focusing on a specific platform, I would like to know about it too. I know I want a lot, but given how important such knowledge is and how many people need it, surely somebody must have written such a book?

  • How best to present a security vulnerability to a web development team in your own company?

    - by BigCoEmployee
    Imagine the following scenario: You work at Big Co. and your coworkers down the hall are on the web development team for Big Co's public blog system, which a lot of Big Co employees and some public people use. The blog system allows any HTML and JavaScript, and you've been told that it was a choice (not by accident) but you aren't sure if they realize the implications of this. So you want to convince them that this is a bad idea. You write some demonstration code and plant a XSS script in your own blog, and then write some blog posts. Soon after, the head blog admin (down the hall) visits your blog post and the XSS sends his cookies to you. You copy them into your browser and you are now logged in as him. Okay, now you're logged in as him... And you start realizing that it maybe wasn't such a good idea to go ahead and 'hack' the blog system. But you are a good guy! You don't touch his account after logging into it, and you definitely don't plan on publicizing this weakness; you just maybe want to show them that the public is able to do this, so that they can fix it before someone malicious realizes the same thing! What is the best course of action from here?

  • Shouldn't we ignore IE6 and IE7 users?

    - by Sebi
    Lately I created two web pages with a simple CMS for a small club and a private person (http://foto.roser.li and http://www.ovlu.li/cms). I don't really understand much of HTML/PHP/CSS and all this web stuff, but it was enough to adapt the stylesheets and add some JavaScript functions and so on to make the pages look more or less nice in Firefox and IE8. Nevertheless, if you open the pages with IE7 or IE6, particularly the second home page really looks terrible. So what should I do now? I don't have the know-how to adapt the stylesheets so that they look good in such browsers. However, if I check the statistics of the pages, I noticed that almost half of the visitors use these kinds of browsers. I could put a lot of effort into adapting the stylesheets to look good in all browsers, but while thinking about this I'm asking myself whether that is really the best way. If all developers put effort into providing solutions for really old browsers (like, e.g., IE6), then people who use these browsers don't have any incentive to update them. Naturally, as a provider of information I should present the content in a readable form to my users, but where should I draw the line? I also saw that some users are visiting the pages with even IE5 or IE4... but I really can't think of a way to adapt the content into a suitable form for these very old browsers. I would appreciate any hints on how to handle this balance between putting much effort into developing a version for very old browsers and the necessity of providing content to all users who want to visit my home pages. I'm also interested in your thoughts on the idea that putting effort into adapting content for old browsers prevents users from updating their browsers because they don't see any need.

  • C#: How to implement a smart cache

    - by Svish
    I have some places where implementing some sort of cache might be useful. For example, doing resource lookups based on custom strings, finding names of properties using reflection, or having only one PropertyChangedEventArgs per property name. A simple example of the last one:

        public static class Cache
        {
            private static Dictionary<string, PropertyChangedEventArgs> cache;

            static Cache()
            {
                cache = new Dictionary<string, PropertyChangedEventArgs>();
            }

            public static PropertyChangedEventArgs GetPropertyChangedEventArgs(string propertyName)
            {
                if (cache.ContainsKey(propertyName))
                    return cache[propertyName];
                return cache[propertyName] = new PropertyChangedEventArgs(propertyName);
            }
        }

    But will this work well? For example, if we had a whole load of different property names, we would end up with a huge cache sitting there, never being garbage collected or anything. I'm imagining that if the cached values are larger and the application is long-running, this might end up being a problem... or what do you think? How should a good cache be implemented? Is this one good enough for most purposes? Any examples of nice cache implementations that are not too hard to understand or way too complex to implement?
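
    One possible direction, shown below as a minimal sketch rather than a definitive answer: if the cache is shared across threads, thread safety matters as much as size, and ConcurrentDictionary.GetOrAdd handles the look-up-or-create step atomically. The EventArgsCache/MaxEntries names and the size cap are illustrative assumptions, not part of the question; a real implementation might prefer an LRU policy or weak references over the crude "clear everything" shown here.

        using System.Collections.Concurrent;
        using System.ComponentModel;

        public static class EventArgsCache
        {
            // Arbitrary illustrative limit, not a recommendation.
            private const int MaxEntries = 10000;

            private static readonly ConcurrentDictionary<string, PropertyChangedEventArgs> cache =
                new ConcurrentDictionary<string, PropertyChangedEventArgs>();

            public static PropertyChangedEventArgs Get(string propertyName)
            {
                // Crude size cap: throw everything away once the cache grows too large.
                // A production cache would more likely evict selectively (e.g. LRU).
                if (cache.Count > MaxEntries)
                    cache.Clear();

                // GetOrAdd looks the value up or creates it, atomically.
                return cache.GetOrAdd(propertyName, name => new PropertyChangedEventArgs(name));
            }
        }

    Usage is the same as the original (EventArgsCache.Get(propertyName)), and for the PropertyChangedEventArgs case specifically, unbounded growth is rarely a real problem because the number of distinct property names in an application is finite.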

  • Sockets and COBOL

    - by kati
    I have received a job at a hospital which still uses COBOL for all organizational work; the whole (now 20 terabyte) database (which was a homebrew in, guess what, COBOL) is filled with the data of every patient from the last 45 (or so) years. So that was my story. Now to my question: currently, all sockets were (from what I've seen) implemented by COBOL programs writing their data into files. These files were then read out by C++ programs (an additional module added in the late 1980s) and sent to the database using C++ sockets. Now this solution has stopped working, as they are moving the database from COBOL to COBOL. Yes - they didn't use MySQL or the like - they implemented a new database, again in COBOL. I asked the guy that worked there before me (he's around 70 now) why the hell someone would do that, and he told me that he is so good at COBOL that he doesn't want to write it in any other language. So far so good; now my question: how can I implement socket connections in COBOL? I need to create an interface to the external COBOL database located at, for example, 192.168.1.23:283.

  • C#, How to download file into string with progress callback?

    - by Kaminari
    I would like to use WebClient (or is there a better option?), but there is a problem. I understand that opening the stream takes some time and that this cannot be avoided. However, reading it in chunks strangely takes much more time than reading it all at once. It's probably not working well because I'm not very familiar with streams. Is there a best way to do this? I mean two ways: to a string and to a file. Progress is my own delegate, and it works fine. FIRST UPDATE: OK, now I have something like this and it seems to work, but it is still slow:

        System.Net.WebClient client = new System.Net.WebClient();
        System.IO.Stream streamRemote = client.OpenRead(new Uri(URL));
        if (savePath == null)
        {
            StreamReader reader = new StreamReader(streamRemote);
            int iByteSize = 0;
            byte[] byteBuffer = new byte[iSize];
            char[] charBuffer = new char[iSize];
            StringBuilder sb = new StringBuilder();
            while ((iByteSize = reader.Read(charBuffer, 0, iSize)) > 0)
            {
                sb.Append(charBuffer, 0, iByteSize);
                iRunningByteTotal += iByteSize;
                float dIndex = (float)(iRunningByteTotal);
                float dTotal = (float)byteBuffer.Length;
                float dProgressPercentage = (dIndex / dTotal);
                float iProgressPercentage = (dProgressPercentage * 100);
                if (Progress != null)
                    Progress(iProgressPercentage);
            }
            result = sb.ToString();
        }

    I'm wondering about the DownloadStringAsync method?
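
    As for that last question - a minimal sketch of the event-based alternative, under the assumption that the whole response fits comfortably in memory: WebClient already exposes asynchronous downloads with built-in progress reporting through DownloadProgressChanged and DownloadStringCompleted, so the manual stream loop (and the custom Progress delegate) can often be dropped. The handler bodies below are just illustrative.

        using System;
        using System.Net;

        class DownloadExample
        {
            static void Main()
            {
                var client = new WebClient();

                // Raised periodically while the download runs; supplies percentage and byte counts.
                client.DownloadProgressChanged += (sender, e) =>
                    Console.WriteLine("Progress: {0}% ({1}/{2} bytes)",
                        e.ProgressPercentage, e.BytesReceived, e.TotalBytesToReceive);

                // Raised once the whole response body has arrived as a string.
                client.DownloadStringCompleted += (sender, e) =>
                {
                    if (e.Error == null)
                        Console.WriteLine("Downloaded {0} characters", e.Result.Length);
                };

                client.DownloadStringAsync(new Uri("http://example.com/"));

                Console.ReadLine(); // keep the process alive while the async download runs
            }
        }

    Downloading to a file works the same way with DownloadFileAsync and the same progress event. One caveat: TotalBytesToReceive can be -1 when the server sends no Content-Length header, in which case the percentage is not meaningful.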

  • What is the best (Windows) program launcher?

    - by AR
    One of the biggest general productivity boosters I've used is a good program launcher. I was a long-time user of SlickRun, and I've tried a few others. My current favorite is Executor - by far the best I've used. Other options:

    - Executor: My current favorite
    - Vista Start Menu: Pretty good, actually, but Executor is similar (binds to Win+Z) and much more flexible.
    - Quicksilver: For Macs only, but it seems to be the gold standard against which most other launchers are measured.
    - Google Desktop: Press Ctrl+Ctrl and it's a quick launcher!
    - AutoHotKey: Much, much more than just a launcher - more than I need, really.
    - SlickRun: Simple and unobtrusive
    - Launchy: Seems to be the launcher of choice for many StackOverflow users :)
    - Colibri: "Type Ahead - Information at the tip of your wings". Quite a cool concept.
    - Many, many others. Scott Hanselman outlines some more here.

    I realize that everyone will have their own preferences, but the question is: is there anything that really stands out in terms of speed, features, and especially productivity increase?

  • How to properly manage a complex DB structure?

    - by errr
    Let's say you have several systems using the same DB - each uses several schemas (sometimes the same as the others). The structure of these schemas is very big and complicated. Now, how could you possibly manage such a schema structure? Obviously using some sort of "configuration" - the simplest would be SQL scripts, but a more reasonable solution would be XML that can easily be converted into SQL, or some other readable format (for example, JPA's XML or annotations). This solution, though, causes a problem: you can't really tell whether your configuration matches the structure of the DB schemas exactly. You can't say whether the two are synchronized. Why wouldn't they be? Well, in such a big structure there are going to be many changes, and you won't always remember to save/commit your configuration after you've altered the schemas; or maybe you did save/commit it, but in the end didn't alter anything in the schemas and forgot to undo the changes to the configuration. On top of that, another problem (not caused by the configuration, but not addressed by it either) is versioning. I don't see any good way of managing the DB schema versions (say our last alteration makes 3 systems crash - not good; how do we "roll back"?). Any thoughts? Thanks.
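
    One common answer, sketched here under assumed names rather than taken from the question: treat the configuration not as a description of the current schema but as an ordered list of versioned migration scripts, and let the database itself record which version it is at. Tools such as Flyway and Liquibase are built around this idea; a stripped-down illustration of the bookkeeping might look like this (the schema_version table, file naming like 001_create_tables.sql, and the CREATE TABLE IF NOT EXISTS syntax are all assumptions that vary by database):

        using System;
        using System.Data;
        using System.IO;
        using System.Linq;

        public static class MigrationRunner
        {
            // Applies, in order, every script whose numeric prefix is higher than the
            // version recorded in schema_version.
            public static void Upgrade(IDbConnection con, string scriptFolder)
            {
                Execute(con, "CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL, applied_at TIMESTAMP NOT NULL)");

                int current = GetCurrentVersion(con);

                var pending = Directory.GetFiles(scriptFolder, "*.sql")
                    .Select(path => new { Version = int.Parse(Path.GetFileName(path).Split('_')[0]), Path = path })
                    .Where(s => s.Version > current)
                    .OrderBy(s => s.Version);

                foreach (var script in pending)
                {
                    Execute(con, File.ReadAllText(script.Path));
                    Execute(con, "INSERT INTO schema_version (version, applied_at) VALUES (" + script.Version + ", CURRENT_TIMESTAMP)");
                }
            }

            private static int GetCurrentVersion(IDbConnection con)
            {
                using (var cmd = con.CreateCommand())
                {
                    cmd.CommandText = "SELECT MAX(version) FROM schema_version";
                    object result = cmd.ExecuteScalar();
                    return result == null || result == DBNull.Value ? 0 : Convert.ToInt32(result);
                }
            }

            private static void Execute(IDbConnection con, string sql)
            {
                using (var cmd = con.CreateCommand())
                {
                    cmd.CommandText = sql;
                    cmd.ExecuteNonQuery();
                }
            }
        }

    With this arrangement, the "are the configuration and the schemas synchronized?" question is answered by the schema_version table rather than by memory, and a rollback becomes either a paired "down" script per version or a restore from backup, since the scripts live in version control next to the code.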

  • Magento - save quote_address on cart addAction using observer

    - by Byron
    I am writing a custom Magento module that gives rate quotes (based on an external API) on the product page. After I get the response on the product page, it adds some of the info from the response into the product view's form. The goal is to save the address (as well as some other things, but those can be in the session for now). The address needs to be saved to the quote, so that the checkout (onestepcheckout, free version) will automatically fill those values in (city, state, zip, country, shipping method [of which there are 3]) and load the rate quote via ajax (which it's already doing). I have gone about this by using an observer watching for the cart events. I fill in the address and save it. When I end up on the cart page or checkout page, ~4/5 times the data is lost, and the SQL table shows that the quote_address is getting saved with no address info, even though there is an explicit save. The observer events I've used are checkout_cart_update_item_complete and checkout_cart_product_add_after. The saving code is (I've tried this with the address, quote and cart all not calling save(), with the same results, as well as not calling setQuote()):

        $quote->getShippingAddress()
            ->setCountryId($params['estimate_to_country'])
            ->setCity($params['estimate_to_city'])
            ->setPostcode($params['estimate_to_zip_code'])
            ->setRegionId($params['estimate_to_state_code'])
            ->setRegion($params['estimate_to_state'])
            ->setCollectShippingRates(true)
            ->setShippingMethod($params['carrier_method'])
            ->setQuote($quote)
            ->save();

        $quote->getBillingAddress()
            ->setCountryId($params['estimate_to_country'])
            ->setCity($params['estimate_to_city'])
            ->setPostcode($params['estimate_to_zip_code'])
            ->setRegionId($params['estimate_to_state_code'])
            ->setRegion($params['estimate_to_state'])
            ->setCollectShippingRates(true)
            ->setShippingMethod($params['carrier_method'])
            ->setQuote($quote)
            ->save();

        $quote->save();
        $cart->save();

    I imagine there's a good chance this is not the best place to save this info, but I am not quite sure. I was wondering if there is a good place to do this without overriding a whole controller to save the address on the add-to-cart action. Thanks so much; let me know if I need to clarify.

  • Generating all unique crossword puzzle grids

    - by heydenberk
    I want to generate all unique crossword puzzle grids of a certain grid size (4x4 is a good size). All possible puzzles, including non-unique puzzles, are represented by a binary string with the length of the grid area (16 in the case of 4x4), so all possible 4x4 puzzles are represented by the binary forms of all numbers in the range 0 to 2^16. Generating these is easy, but I'm curious if anyone has a good solution for how to programmatically eliminate invalid and duplicate cases. For example, all puzzles with a single column or single row are functionally identical, hence eliminating 7 of those 8 cases. Also, according to crossword puzzle conventions, all squares must be contiguous. I've had success removing all duplicate structures, but my solution took several minutes to execute and probably was not ideal. I'm at something of a loss for how to detect contiguity so if anyone has ideas on this it'd be much appreciated. I'd prefer solutions in python but write in whichever language you prefer. If anyone wants, I can post my python code for generating all grids and removing duplicates, slow as it may be.
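
    For the contiguity check specifically, a standard approach is a flood fill: start from any open square, visit neighbours breadth-first, and compare the number of squares reached with the total number of open squares. A rough sketch follows - in C#, since the question allows any language; it assumes the grid is a bool[,] where true marks an open (white) square.

        using System.Collections.Generic;

        public static class GridChecks
        {
            // True if all open (true) squares form a single connected region.
            public static bool IsContiguous(bool[,] grid)
            {
                int rows = grid.GetLength(0), cols = grid.GetLength(1);

                // Count open squares and remember one of them as a starting point.
                int total = 0;
                (int r, int c)? start = null;
                for (int r = 0; r < rows; r++)
                    for (int c = 0; c < cols; c++)
                        if (grid[r, c])
                        {
                            total++;
                            if (start == null) start = (r, c);
                        }

                if (total == 0) return false; // a grid with no open squares is treated as invalid here

                // Breadth-first flood fill from the starting square.
                var visited = new bool[rows, cols];
                var queue = new Queue<(int r, int c)>();
                queue.Enqueue(start.Value);
                visited[start.Value.r, start.Value.c] = true;
                int reached = 0;

                int[] dr = { -1, 1, 0, 0 };
                int[] dc = { 0, 0, -1, 1 };

                while (queue.Count > 0)
                {
                    var (r, c) = queue.Dequeue();
                    reached++;
                    for (int i = 0; i < 4; i++)
                    {
                        int nr = r + dr[i], nc = c + dc[i];
                        if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && grid[nr, nc] && !visited[nr, nc])
                        {
                            visited[nr, nc] = true;
                            queue.Enqueue((nr, nc));
                        }
                    }
                }

                return reached == total;
            }
        }

    For the duplicates, the usual trick is to map each grid to a canonical form - for example, the lexicographically smallest of its rotations and reflections - and keep only grids that equal their own canonical form; de-duplication then becomes a set lookup instead of a pairwise comparison.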

  • autoreload webpage on new release

    - by user3726562
    I have a website in PHP which is completely Ajax-based. There is an index.php, but apart from it, none of the other pages are ever rendered directly in the browser. Instead, all the POST and GET requests are done from JavaScript through Ajax. So basically, if you go to /contact.php you will not see anything. All the pages are rendered inside index.php. Now, a lot of the people that use this page are not very good at using the web. Asking them to "refresh" the page makes them wonder what we are talking about. Unfortunately. The biggest issue happens when we do a new release. The JavaScript code especially (but not only) can be the old one in a client's web page, as they maybe haven't refreshed the page for some weeks (lol). So I do an svn update and the new code is on the server. I just refresh my page and see the new features. However, the poor guys that don't really know how to do a refresh will not see anything. I have added a big button on the page with the text "refresh", which does a location.reload. Hopefully this can help a few of them. But my question is: how do I "force" the browser to reload itself when a new SVN version has been released? Hopefully in a simple way, without needing node.js, a JavaScript timer or stuff like that. It is also quite important not to refresh the page while the user is doing something with it (maybe writing a mail in the UI, and then suddenly the page gets refreshed: not so good).

  • How to increase my "advanced" knowledge of PHP further? (quickly)

    - by Kerry
    I have been working with PHP for years and gotten a very good grasp of the language, created many advanced and not-so-advanced systems that are working very well. The problem I'm running into is that I only learn when I find a need for something that I haven't learned before. This causes me to look up solutions and other code that handles the problem, and so I will learn about a new function or structure that I hadn't seen before. It is in this way that I have learned many of my better techniques (such as studying classes put out by Amazon, Google or other major companies). The main problem with this is the concept of not being able to learn something if you don't know it exists. For instance, it took me several months of programming to learn about the empty() function, and I simply would check the string length using strlen() to check for empty values. I'm now getting into building bigger and bigger systems, and I've started to read blogs like highscalability.com and been researching MySQL replication and server data for scaling. I know that structure of your code is very important to make full systems work. After reading a recent blog about reddit's structure, it made me question if there is some standard or "accepted systems" out there. I have looked into frameworks (I've used Kohana, which I regretted, but decided that PHP frameworks were not for me) and I prefer my own library of functions rather than having a framework. My current structure is a mix between WordPress, Kohana and my own knowledge. The ways I can see as being potentially beneficial are:

    - Read blogs
    - Read tutorials
    - Work with someone else
    - Read a book

    What would be the best way(s) to "get to the next level" - the level of being a very good system developer?

  • Abstract away a compound identity value for use in business logic?

    - by John K
    While separating business logic and data access logic into two different assemblies, I want to abstract away the concept of identity so that the business logic deals with one consistent identity without having to understand its actual representation in the data source. I've been calling this a compound identity abstraction. Data sources in this project are swappable and varied, and the business logic shouldn't care which data source is currently in use. The identity is the toughest part because its implementation can change per kind of data source, whereas other fields like name, address, etc. are consistently scalar values. What I'm searching for is a good way to abstract the concept of identity, whether that be an existing library, a software pattern, or just a solid idea of some kind. The proposed compound identity value would have to be comparable and usable in the business logic, and passed back to the data source to specify which records, entities and/or documents to affect, so the data source must be able to parse back out the details of its own compound ids. Data source examples (these serve to give an idea of what I mean by various data sources having different identity implementations): A relational data source might express a piece of content with an integer identifier plus a language-specific code. For example:

        content_id | language | other columns expressing details of content
        -----------+----------+--------------------------------------------
                 1 | en_us    | ...
                 1 | fr_ca    | ...

    The identity of the first record in the above example is: 1 + en_us. However, when a NoSQL data source is substituted, it might represent each piece of content with a GUID string (936DA01F-9ABD-4d9d-80C7-02AF85C822A8) plus a language code of a different standardization. And a third kind of data source might use just a simple scalar value. So on and so forth - you get the idea.
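
    One pattern that matches this description - a sketch under assumed names, not a reference to a known library: make the identity an opaque, immutable value object that the business layer can only compare and carry around, while each data-access implementation knows how to build it from, and parse it back into, its own parts.

        using System;
        using System.Linq;

        // Opaque identity: business logic can compare it and pass it along, nothing more.
        public sealed class EntityId : IEquatable<EntityId>
        {
            private readonly string[] parts;

            private EntityId(string[] parts) { this.parts = parts; }

            // Only data-access code creates ids or reads their parts back out.
            public static EntityId FromParts(params string[] parts) => new EntityId(parts.ToArray());

            public string[] ToParts() => parts.ToArray();

            public bool Equals(EntityId other) => other != null && parts.SequenceEqual(other.parts);

            public override bool Equals(object obj) => Equals(obj as EntityId);

            public override int GetHashCode() => parts.Aggregate(17, (hash, p) => hash * 31 + (p?.GetHashCode() ?? 0));

            public override string ToString() => string.Join("|", parts);
        }

    A relational provider would call EntityId.FromParts("1", "en_us") and split the value back apart with ToParts(); a NoSQL provider would wrap its GUID-plus-language pair the same way; the business layer only ever sees EntityId. Whether the parts are strings, typed objects, or a provider-specific token is a judgment call - the important property is that parsing them back out is the data source's job, exactly as the question asks.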

  • Data base design with Blob

    - by mmuthu
    Hi, I have a situation where I need to store binary data in the database as a BLOB column. There are three different tables in my database where I need to store BLOB data for some records. Not every record will have BLOB data all the time; it is time- and user-based.

    - Table one will have to store *.doc files for almost all records.
    - Table two will have to store *.xml files, optionally.
    - Table three will have to store images (not sure of the frequency, etc.).

    Now my question is whether it is a good idea to maintain a separate table to store the BLOB data, pointing it to the respective tables' PKs (yes, there will be no FKs; the program is assumed to maintain them). It would be something like this: BLOB|PK_ID|TABLE_NAME. Alternatively, is it a good idea to keep the BLOB column in the respective tables? As far as my application runtime is concerned, table two will be read very frequently, though the BLOB column will not be required, and table two records will get deleted frequently. Similarly, the other BLOB data in the respective tables will not be accessed frequently; all of the BLOB content will be read on demand. I'm thinking the first approach will work better for me. What do you guys think? By the way, I'm using Oracle.

  • Given a 2d array sorted in increasing order from left to right and top to bottom, what is the best way to search for a target number?

    - by Phukab
    I was recently given this interview question and I'm curious what a good solution to it would be. Say I'm given a 2d array where all the numbers in the array are in increasing order from left to right and top to bottom. What is the best way to search and determine if a target number is in the array? Now, my first inclination is to utilize a binary search since my data is sorted. I can determine if a number is in a single row in O(log N) time. However, it is the 2 directions that throw me off. Another solution I could use, if I could be sure the matrix is n x n, is to start at the middle. If the middle value is less than my target, then I can be sure it is in the left square portion of the matrix from the middle. I then move diagonally and check again, reducing the size of the square that the target could potentially be in until I have honed in on the target number. Does anyone have any good ideas on solving this problem? Example array (sorted left to right, top to bottom):

         1  2  4  5  6
         2  3  5  7  8
         4  6  8  9 10
         5  8  9 10 11
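
    One well-known answer, offered as a sketch rather than necessarily the interviewer's intended solution: start at the top-right corner. If the current value is larger than the target, the whole column below it can be discarded, so move left; if it is smaller, the whole row to its left can be discarded, so move down. Each step eliminates a row or a column, so the search takes O(rows + cols) comparisons and works for rectangular matrices too.

        public static class MatrixSearch
        {
            // True if target occurs in a matrix whose rows and columns are both sorted in increasing order.
            public static bool Contains(int[,] m, int target)
            {
                int row = 0;
                int col = m.GetLength(1) - 1; // start at the top-right corner

                while (row < m.GetLength(0) && col >= 0)
                {
                    if (m[row, col] == target)
                        return true;
                    if (m[row, col] > target)
                        col--;   // everything below in this column is even larger
                    else
                        row++;   // everything to the left in this row is even smaller
                }
                return false;
            }
        }

    On the example matrix above, searching for 7 visits 6, then 8, then 7 and stops.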

  • CSS - background-size: cover; not working in Firefox

    - by Jayant Bhawal
        body {
            background-image: url("./content/site_data/bg.jpg");
            background-size: cover;
            background-repeat: no-repeat;
            font-family: 'Lobster', cursive;
        }

    Check http://demo.jayantbhawal.in in Firefox, NOT in widescreen mode. The code works in Chrome (Android + PC) and even the stock Android browser, but NOT in Firefox (Android + PC). Is there any good alternative to it? Why is it not working anyway? I've googled this issue a lot, but no one else seems to have this problem. Is it just me? In any case, how do I fix it? There are quite a few questions on SO about it too, but none of them provide a legitimate solution, so can someone just tell me if they have background-size: cover; issues in Firefox too? So basically, tell me three things:

    1. Why is it happening?
    2. What is a good alternative to it?
    3. Is this happening to you too? On Firefox browsers, of course.

    Chrome version 35.0.1916.114 m, Firefox version 29.0.1. Note: I may already be trying to fix it, so at times you may see a totally weird page. Wait a bit and reload.

  • How to structure Javascript programs in complex web applications?

    - by mixedpickles
    Hi there. I have a problem which is not easily described. I'm writing a web application that makes heavy use of jQuery and Ajax calls. I don't have experience in designing the architecture of JavaScript programs, but I realize that my program does not have a good structure. I think I have too many identifiers referring to the same (at least more or less the same) thing. Let's take an exemplary look at an arbitrary UI widget: the event handlers use DOM elements as parameters, and the DOM element represents a widget in the browser. A lot of the time I use jQuery objects (I think they are basically a wrapper around DOM elements) to do something with the widget - sometimes transiently, sometimes stored in a variable for later use. The Ajax calls use string identifiers for these widgets, which are processed server-side. Besides that, I have a widget class whose instances represent a widget; it is instantiated through the new operator. So now I have four different identifiers for the same thing, which need to be kept in sync until the page is loaded anew. This does not seem like a good thing. Any advice?

  • Book Recommendations: WPF/Silverlight for a Business (smallish, internal) Environment

    - by Refracted Paladin
    Yes, I know there are an insane number of book posts here (SO), but none, I believe, for my specific need. If there is and I missed it, I apologize. I am the only developer at a non-profit organization (~200 employees) where we are a M$ shop, and 90% of the things I develop are specific to our company and internal only. I am given a lot of latitude in how I accomplish my goals, so using new technologies is in my best interest. So far I have developed only WinForms & ASP.NET applications, but I am an expert by NO MEANS. I would now like to focus on XAML-driven development (WPF & Silverlight), but I have no idea where to start. I am subscribed to numerous Silverlight blogs and I have gone through a few good tutorials; however, I would really appreciate a GOOD, SOLID book in my hands going forward. I prefer learning books over reference books, and I REALLY would like one from a business standpoint as well. Shameless self-promotion is welcome if you happen to be an author or reviewer of one that meets my criteria. I would, however, prefer that recommendations were based on first-hand experience (no "my friend has this awesome book he told me about", please). If more info is needed to provide accurate recommendations, please let me know. Thanks

  • Should I Teach My Son Programming? What approaches should I take? [closed]

    - by DaveDev
    I was wondering if it's a good idea to teach object-oriented programming to my son. I was never really good at math as a kid, but I think since I've started programming it has given me a greater ability to understand math, by being better able to visualise relationships between abstract models. I thought it might give him an advantage in learning and applying logical and mathematical concepts throughout his life if he was able to take advantage of the tools available to programmers. What would be the best programming fields, techniques and/or concepts? What approach should I take? What concepts should I avoid? In which fields of mathematics would he find this benefits him most? He's only 2 now, so it would be another few years before I do this (and even then, only from a very high-level point of view). I thought I'd put it to the programming community and see what you guys thought. Possible duplicate: What are some recommended programming resources for pre-teens?

  • Mercurial on shared network drive?

    - by user1164199
    Right now I have my repo on my local drive. In order to back it up, I have to copy .hg to a Windows network drive. At "Is it a good idea to put a Mercurial repository on a shared network drive?", Lasse Karlsen said the repo shouldn't be on a shared folder on a network server because "mercurial cannot reliably hold locks in all situations". Would this still be an issue when the repository is only updated by a single user? If so, can someone explain to me why the corruption happens? A while back our IT had problems setting up a Mercurial server. I am very fond of Mercurial (it has a great interface and is very easy to work with), but if it's going to be such a pain in the neck to set up for multiple users, I am willing to look for something else. Does anyone have any suggestions (with reasons)? I am looking for a revision control program that has the following attributes:

    1. Good interface (lets you easily see revisions and changes to the code over multiple revisions).
    2. Works as a local repo or a network repo.
    3. IT will feel comfortable installing it on their network.

    Thanks, Stephen

  • Java vs. C variant for desktop and tablet development

    - by MirroredFate
    I am going to write a desktop application, but I am conflicted concerning which language to use. It (the desktop application) will need to have a good GUI, and to be extendable (hopefully good with modules of some sort). It must be completely cross-platform, including executable in various tablet environments. I put this as a requirement while realizing that some modification will no doubt be necessary. The language should also have some form of networking tools available. I have read http://introcs.cs.princeton.edu/java/faq/c2java.html and understand the differences between Java and C very well. I am looking not necessarily at C, but more at a C variant. If it is a complete toss-up, I will use Java as I know Java much better. However, I do not want to use a language that will be inferior for the task I wish to accomplish. Thank you for all suggestions and explanations. NOTE: If this is not the correct stack for this question, I apologize. It seemed appropriate according to the rules.

  • Credit card validation with regexp using test()

    - by Matt
    I'm trying to complete some homework, and it appears the book might have gotten it wrong. I have a simple HTML page that allows the user to pick a credit card - in our case American Express. The user then enters a number, which is evaluated against a regular expression. My question ends up being: when test() evaluates the number, does it return a boolean or a string? Should I then compare that string or boolean? True == true should fire off the code in a nested if statement. Here's what the book gives me as valid code:

        if (document.forms[0].cardName.value == "American Express") {
            var cardProtocol = new RegExp("^3[47][0-9]{13}$"); // REGEX ENTRY HERE
            if (cardProtocol.test(document.forms[0].cardNumber.value))
                document.forms[0].ccResult.value = "Valid credit card number";
        }

    The above code doesn't work in Firefox. I've tried modifying it with 2 alerts to make sure the number is good and the boolean is good... and still no luck:

        if (document.forms[0].cardName.value == "American Express") {
            var cardProtocol = new RegExp("^3[47][0-9]{13}$"); // REGEX ENTRY HERE <------
            alert(document.forms[0].cardNumber.value)
            alert(cardProtocol.test(document.forms[0].cardNumber.value))
            if ((cardProtocol.test(document.forms[0].cardNumber.value)) == true) // <-- Problem
            {
                document.forms[0].ccResult.value = "Valid credit card number";
            } else {
                document.forms[0].ccResult.value = "Invalid credit card number";
            }
        }

    Any ideas? The if statement is the culprit, but I'm not figuring out why it is not working. Please throw up the code for the if statement! Thanks for the help!

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
    Recently I was working on updating a legacy application to MVC 4 that included free form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handles line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps and it works very well with very little development effort involved. Then the client sprung another note: Oh by the way we have a bunch of customers (real estate agents) who need to post complete HTML documents. Oh uh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway. There has been lots of discussions on this subject on StackOverFlow (and here and here) but to after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and Images need to be loaded, links need to work and so. While the 'legit' HTML posted by these agents is basic in nature it does span most of the full gamut of HTML (4). Most of the solutions XSS prevention/sanitizer solutions I found were way to aggressive and rendered the posted output unusable mostly because they tend to strip any externally loaded content. In short I needed a custom solution. I thought the best solution to this would be to use an HTML parser - in this case the Html Agility Pack - and then to run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists can make sense in simple scenarios where you might allow manual HTML input, but when you need to allow a larger array of HTML functionality a blacklist is probably easier to manage as the vast majority of elements and attributes could be allowed. Also white listing gets a bit more complex with HTML5 and the new proliferation of new HTML tags and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below) so even with a white list, custom logic is still required to handle many of those edge cases. The Microsoft Web Protection Library (AntiXSS) My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML Encoding and Sanitation library in the Microsoft Web Protection Library (formerly AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitation class and its static members would do the trick for me,but I found that this library is way too restrictive for my needs. Specifically the Sanitation class strips out images and links which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone if feeling this library is not really useful without some way to configure operation. 
To give you an example of what didn't work for me with the library here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " + "<a href='http://west-wind.com'>West Wind</a> site. " + "<img src='http://west-wind.com/images/new.gif' /> " ; and the code to sanitize it with the AntiXSS Sanitize class:@Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value)) This produced a not so useful sanitized string: Here we go. Visit the <a>West Wind</a> site. While it removed the <script> tag (good) it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the white list, nor is there code available in this 'open source' library on CodePlex. Using Html Agility Pack for HTML Parsing The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the HTML Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model to full HTML documents or HTML fragments that let you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before you'll feel right at home with the Agility Pack. Blacklist based HTML Parsing to strip XSS Code For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument of what's better: A whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration. In the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so a blacklist includes the obvious <script> tag plus any tag that allows loading of external content including <iframe>, <object>, <embed> and <link> etc. <form>  is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude some attributes or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to. 
Here's my HtmlSanitizer implementation:using System.Collections.Generic; using System.IO; using System.Xml; using HtmlAgilityPack; namespace Westwind.Web.Utilities { public class HtmlSanitizer { public HashSet<string> BlackList = new HashSet<string>() { { "script" }, { "iframe" }, { "form" }, { "object" }, { "embed" }, { "link" }, { "head" }, { "meta" } }; /// <summary> /// Cleans up an HTML string and removes HTML tags in blacklist /// </summary> /// <param name="html"></param> /// <returns></returns> public static string SanitizeHtml(string html, params string[] blackList) { var sanitizer = new HtmlSanitizer(); if (blackList != null && blackList.Length > 0) { sanitizer.BlackList.Clear(); foreach (string item in blackList) sanitizer.BlackList.Add(item); } return sanitizer.Sanitize(html); } /// <summary> /// Cleans up an HTML string by removing elements /// on the blacklist and all elements that start /// with onXXX . /// </summary> /// <param name="html"></param> /// <returns></returns> public string Sanitize(string html) { var doc = new HtmlDocument(); doc.LoadHtml(html); SanitizeHtmlNode(doc.DocumentNode); //return doc.DocumentNode.WriteTo(); string output = null; // Use an XmlTextWriter to create self-closing tags using (StringWriter sw = new StringWriter()) { XmlWriter writer = new XmlTextWriter(sw); doc.DocumentNode.WriteTo(writer); output = sw.ToString(); // strip off XML doc header if (!string.IsNullOrEmpty(output)) { int at = output.IndexOf("?>"); output = output.Substring(at + 2); } writer.Close(); } doc = null; return output; } private void SanitizeHtmlNode(HtmlNode node) { if (node.NodeType == HtmlNodeType.Element) { // check for blacklist items and remove if (BlackList.Contains(node.Name)) { node.Remove(); return; } // remove CSS Expressions and embedded script links if (node.Name == "style") { if (string.IsNullOrEmpty(node.InnerText)) { if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:")) node.ParentNode.RemoveChild(node); } } // remove script attributes if (node.HasAttributes) { for (int i = node.Attributes.Count - 1; i >= 0; i--) { HtmlAttribute currentAttribute = node.Attributes[i]; var attr = currentAttribute.Name.ToLower(); var val = currentAttribute.Value.ToLower(); span style="background: white; color: green">// remove event handlers if (attr.StartsWith("on")) node.Attributes.Remove(currentAttribute); // remove script links else if ( //(attr == "href" || attr== "src" || attr == "dynsrc" || attr == "lowsrc") && val != null && val.Contains("javascript:")) node.Attributes.Remove(currentAttribute); // Remove CSS Expressions else if (attr == "style" && val != null && val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:")) node.Attributes.Remove(currentAttribute); } } } // Look through child nodes recursively if (node.HasChildNodes) { for (int i = node.ChildNodes.Count - 1; i >= 0; i--) { SanitizeHtmlNode(node.ChildNodes[i]); } } } } } Please note: Use this as a starting point only for your own parsing and review the code for your specific use case! If your needs are less lenient than mine were you can you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options.  The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a WhiteList approach if you want to go that route. 
The code above is semi-generic for allowing full featured HTML fragments that only disallow script related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML style self-closing tags which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable either in the instance class as a property or in the static method via the string parameter list. Additionally the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF IE?) have been removed in modern browsers. What a Pain To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice and I'm sharing the code here merely as a base line to start with and potentially expand on for specific needs. It's sad that Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor or possibly a third party that provides security tools. Luckily for my application we are dealing with a authenticated and validated users so the user base is fairly well known, and relatively small - this is not a wide open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like MarkDown or even a good HTML Edit control that can provide some limits on what types of input are allowed. Alas in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs. 
So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-)

Resources:

- Code on GitHub
- Html Agility Pack
- XSS Cheat Sheet
- XSS Prevention Cheat Sheet
- Microsoft Web Protection Library (AntiXss)
- StackOverflow links:
  http://stackoverflow.com/questions/341872/html-sanitizer-for-net
  http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
  http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Security, HTML, ASP.NET, JavaScript.

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types which are quite common these days. The JavaScript Serializer that came before it actually does support non-typed objects for serialization but it can't do anything with untyped data coming in from JavaScript and it's overall model of extensibility was pretty limited (JavaScript Serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, Xml to JSON conversion, LINQ to JSON and many, many more features than either of the built in serializers. ASP.NET Web API now uses JSON.NET as its default serializer and is now pulled in as a NuGet dependency into Web API projects, which is great. Dynamic JSON Parsing One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is without mapping the JSON captured directly into a .NET type as DataContractSerializer or the JavaScript Serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc), and other times you simply don't have the structures in place or don't want to create them to actually import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discus JToken, JObject and JArray which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types. Why Dynamic JSON? So, why Dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes lot of sense to pull just a small amount of data out of large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was few values out of the data. Dynamic JSON parsing makes it possible to map this data, without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store it on my business entities that needed to receive the data - no need to map the entire Maps API structure. 
Getting JSON.NET The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with: PM> Install-Package Newtonsoft.Json From the Package Manager Console or by using Manage NuGet Packages in your project References. As mentioned if you're using ASP.NET Web API or MVC 4 JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/ Creating JSON on the fly with JObject and JArray Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken derived JSON.NET objects. The most common JToken derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaProvider and so uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JObject for the base object and songs and JArray for the actual collection of songs:[TestMethod] public void JObjectOutputTest() { // strong typed instance var jsonObject = new JObject(); // you can explicitly add values here using class interface jsonObject.Add("Entered", DateTime.Now); // or cast to dynamic to dynamically add/read properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1976; album.Songs = new JArray() as dynamic; dynamic song = new JObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces a complete JSON structure: { "Entered": "2012-08-18T13:26:37.7137482-10:00", "AlbumName": "Dirty Deeds Done Dirt Cheap", "Artist": "AC/DC", "YearReleased": 1976, "Songs": [ { "SongName": "Dirty Deeds Done Dirt Cheap", "SongLength": "4:11" }, { "SongName": "Love at First Feel", "SongLength": "3:10" } ] } Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override with the JsonSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JObject to create a album 'object' and immediately cast it to dynamic. JObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key value pairs that are exposed as properties through the IDynamicMetaObject interface exposed in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create new JObject or JArray, cast them to dynamic and then add properties and items to them. 
Always remember though these values are dynamic - which means no Intellisense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:// strong type instance var jsonObject = new JObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JContainer (the base class for JObject and JArray) is a collection so you can also iterate over the properties at runtime easily:foreach (var item in jsonObject) { Console.WriteLine(item.Key + " " + item.Value.ToString()); } The functionality of the JSON objects are very similar to .NET's ExpandObject and if you used it before, you're already familiar with how the dynamic interfaces to the JSON objects works. Importing JSON with JObject.Parse() and JArray.Parse() The JValue structure supports importing JSON via the Parse() and Load() methods which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can be then referenced using familiar dynamic object syntax. Here's a simple example:public void JValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"", ""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties which is parsed into a JObject class and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken and I have to cast them to their appropriate types first before I can do type comparisons as in the Asserts at the end of the test method. This is required because of the way that dynamic types work which can't determine the type based on the method signature of the Assert.AreEqual(object,object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ( (string) json.Name) in the actual method call. The JSON structure can be much more complex than this simple example. 
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():[TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1976, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; JArray jsonVal = JArray.Parse(jsonString) as JArray; dynamic albums = jsonVal; foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName); } JObject and JArray in ASP.NET Web API Of course these types also work in ASP.NET Web API controller methods. If you want you can accept parameters using these object or return them back to the server. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first:[HttpPost] public JObject PostAlbumJObject(JObject jAlbum) { // dynamic input from inbound JSON dynamic album = jAlbum; // create a new JSON object to write out dynamic newAlbum = new JObject(); // Create properties on the new instance // with values from the first newAlbum.AlbumName = album.AlbumName + " New"; newAlbum.NewProperty = "something new"; newAlbum.Songs = new JArray(); foreach (dynamic song in album.Songs) { song.SongName = song.SongName + " New"; newAlbum.Songs.Add(song); } return newAlbum; } The raw POST request to the server looks something like this: POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1User-Agent: FiddlerContent-type: application/jsonHost: localhostContent-Length: 88 {AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]} and the output that comes back looks like this: {  "AlbumName": "Dirty Deeds New",  "NewProperty": "something new",  "Songs": [    {      "SongName": "Problem Child New"    },    {      "SongName": "Squealer New"    }  ]} The original values are echoed back with something extra appended to demonstrate that we're working with a new object. 
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API bypasses normal content negotiation and assumes your content is going to be received and returned as JSON - so effectively the parameter and result types explicitly determine the input and output format, which is nice.

Dynamic to Strong Type Mapping

You can also map JObject and JArray instances to strongly typed objects, so you can mix dynamic and static typing in the same piece of code. Using the two-album jsonString shown earlier, the code below takes the array of albums, picks out a single album, and maps that album to a static Album instance:

[TestMethod]
public void JsonParseToStrongTypeTest()
{
    JArray albums = JArray.Parse(jsonString);

    // pick out one album
    JObject jalbum = albums[0] as JObject;

    // copy to a static Album instance
    Album album = jalbum.ToObject<Album>();

    Assert.IsNotNull(album);
    Assert.AreEqual(album.AlbumName, jalbum.Value<string>("AlbumName"));
    Assert.IsTrue(album.Songs.Count > 0);
}

This is pretty damn useful for the scenario I mentioned earlier: you can read a large chunk of JSON, dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static.

Strongly typed JSON Parsing

With all this talk of dynamic, let's not forget that JSON.NET of course also does strongly typed serialization, which is drop-dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET:

[TestMethod]
public void StronglyTypedSerializationTest()
{
    // demonstrate round tripping an object through a JSON string
    var album = new Album()
    {
        AlbumName = "Dirty Deeds Done Dirt Cheap",
        Artist = "AC/DC",
        Entered = DateTime.Now,
        YearReleased = 1976,
        Songs = new List<Song>()
        {
            new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" },
            new Song() { SongName = "Love at First Feel", SongLength = "3:10" }
        }
    };

    // serialize to string
    string json2 = JsonConvert.SerializeObject(album, Formatting.Indented);
    Console.WriteLine(json2);

    // make sure we can deserialize back
    var album2 = JsonConvert.DeserializeObject<Album>(json2);

    Assert.IsNotNull(album2);
    Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap");
    Assert.IsTrue(album2.Songs.Count == 2);
}

JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which lets you serialize and parse to and from streams. It's a little more work, but gives you a bit more control (a minimal stream based sketch appears after the summary below). The functionality available is easy to discover with Intellisense, and that's good because there's not a lot in the way of documentation that's actually useful.

Summary

JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go!
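As a footnote to the JsonSerializer point above, here's a minimal stream based sketch. The Album and Song classes below are small stand-ins that mirror the sample types used in the tests (assumptions for the sake of a self-contained example), and the file name is arbitrary:

using System;
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

public class Song
{
    public string SongName { get; set; }
    public string SongLength { get; set; }
}

public class Album
{
    public string AlbumName { get; set; }
    public string Artist { get; set; }
    public int YearReleased { get; set; }
    public DateTime Entered { get; set; }
    public List<Song> Songs { get; set; }
}

public class JsonSerializerStreamSketch
{
    public static void Run()
    {
        var serializer = new JsonSerializer { Formatting = Formatting.Indented };

        var album = new Album
        {
            AlbumName = "Dirty Deeds Done Dirt Cheap",
            Artist = "AC/DC",
            YearReleased = 1976,
            Entered = DateTime.Now,
            Songs = new List<Song> { new Song { SongName = "Big Balls", SongLength = "2:38" } }
        };

        // serialize directly to a stream - no intermediate string
        using (var fileStream = File.Create("album.json"))
        using (var streamWriter = new StreamWriter(fileStream))
        using (var jsonWriter = new JsonTextWriter(streamWriter))
        {
            serializer.Serialize(jsonWriter, album);
        }

        // ...and read it back from the stream
        using (var fileStream = File.OpenRead("album.json"))
        using (var streamReader = new StreamReader(fileStream))
        using (var jsonReader = new JsonTextReader(streamReader))
        {
            Album roundTripped = serializer.Deserialize<Album>(jsonReader);
            Console.WriteLine(roundTripped.AlbumName);
        }
    }
}

The same reader/writer pattern works over memory or network streams, which is exactly where the extra control over JsonConvert's string based helpers pays off.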
Resources

Sample Test Project
http://json.codeplex.com/

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET, Web Api, AJAX


  • SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience

    - by pinaldave
I recently attended the most prestigious SQL Server event, SQLPASS, between Nov 8-11, 2010 in Seattle. I have only one expression for the event - Best Summit Ever. This year the summit was at its best. Instead of writing about my usual routine or the event itself, I am going to write about the interesting things I did and how I felt about them!

Trip to Seattle!
This was my second trip to Seattle this year, and the journey is always long. Here are the travel stats on how long it takes to get to Seattle:
24 hours official air time
36 hours total travel time (connection waits and airport commute)
Every time I travel to the USA I gain a day, and when I travel back home, I lose a day. The total traveling time comes to around 3 days. The journey is long and very exhausting. However, it is all worth it when you're attending an event like SQLPASS. Here are a few things I carry when I travel on a long journey:
Dry snack packs – I like to have some good Indian dry snacks in my backpack so I can have my own snack when I want
Amazon Kindle – loaded with 80+ books
A physical book – usually a very easy read
I do not watch movies on the plane and usually spend my time reading something quick and easy. If I can go to sleep, I go for it. I prefer not to spend time in conversation with the person sitting next to me, because I usually end up listening to their biography, which I cannot blog about.

[Photo: Sheraton Seattle, SQLPASS]

In any case, I love going to Seattle, as the city is great and has everything a brilliant metropolis has to offer. The new Light Train is extremely convenient, and I can take it directly from the airport to the city center. My hotel, the Sheraton, was only a few meters (in the USA people count in blocks – 3 blocks) away from the train station. This time I saved USD 40 on each round trip thanks to the Light Train.

Sessions I attended!
Well, I really wanted to attend most of the sessions, but there was a great dilemma about which ones to choose. There were many, many sessions to attend, and at any given time there was more than one good session being presented. I decided to attend sessions in the area of performance tuning, and I attended quite a few sessions this year compared to what I was able to do last year. Here are a few names of the speakers whose sessions I attended (please note, the following great speakers are not listed in any particular order; I loved them and I enjoyed their sessions):
Conor Cunningham
Rushabh Mehta
Buck Woody
Brent Ozar
Jonathan Kehayias
Chris Leonard
Bob Ward
Grant Fritchey
I had great fun attending their sessions. The sessions were meaningful and enlightening. It is hard to rate any session, but I found that the insights learned in Conor Cunningham's sessions were the highlight of the PASS Summit.

[Photos: Rushabh Mehta at the SQLPASS keynote; Buck Woody and Brent Ozar]

I always like sessions where the speaker is close to the audience and has real world experience. I think speakers who have worked in the real world deliver the best content and the most useful information.

Sessions I did not like!
Indeed, there were a few sessions I did not like, and I am not going to name them here. However, there were strong reasons I did not like those sessions, and here is why:
Sessions were all theory and had no real world connection.
All technical questions ended with confusing answers (lots of "I will get back to you on it," "it depends," "let us take this offline" and many more…).
An "I am God" kind of attitude in the speakers.
For example, I attended a session by one very well known speaker who is a specialist in one particular area. I was a bit late for the session and was surprised to see that in a room that could hold 350 people there were only 30 attendees. After sitting there for 15 minutes, I realized why so many people had left. Very soon I found I preferred to stare out the window instead of listening to that particular speaker.

One on One Talk!
Many times people ask me what I really like about PASS. I always say the experience of meeting SQL legends, spending time with them one on one, and LEARNING! Here is a quick list of the people I met during this event and spent more than 30 minutes with, talking about various subjects:

[Photos: Pinal Dave and Brad Schulz; Pinal Dave and Rushabh Mehta; Michael Coles and Pinal Dave]

Rushabh Mehta – It is always a pleasure to meet with him. He is a man with lots of energy and a passion for community. He recently told me that he really wants to turn PASS into a learning resource for every SQL Server developer and administrator in the world. I had a great in-depth discussion with him about how a single person can contribute to a community.
Michael Coles – I consider him my best friend. It is always fun to meet him. He is funny and very knowledgeable. I think there are very few people who are as expert as he is in encryption and spatial databases. Worth meeting every single time.
Glenn Berry – A real friend to everybody. He is a very simple person and very true to his heart. I think there is not a single person in the whole community who does not like him. He is a friend to all, and everybody likes him very much. I once again had time to sit with him and learn so much from him. As he is known as Dr. DMV, I can be his nurse in the area of DMVs.
Brad Schulz – I always wanted to meet him but never got the chance until now. I had a great time meeting him in person, and we spent a considerable amount of time together discussing various T-SQL tricks and tips. I do not know where he comes up with all his different ideas, but I enjoy reading his blog and learning from his wisdom.
Jonathan Kehayias – He is a drill sergeant in the US Army. If you get the impression that he is a giant with a very strong personality - you are wrong. He is a very kind and soft spoken DBA with strong performance tuning skills. I asked him how he keeps his two jobs separate and got a very good answer - just work hard and have passion for what you do. I attended his sessions, and his presentation style is very unique. I feel like he speaks in a language I understand.
Louis Davidson – I had never had a chance to sit with him and talk about technology before. He has so much wisdom and he is very kind. During dinner, I talked with him for a long time, and without hesitation he started to draw a schema for me on the menu. It was a wonderful experience to learn from a master at the dinner table. He explained to me the real and practical differences between third normal form and fourth normal form. Honestly, I did not know the difference earlier, but now I do.
Erland Sommarskog – This man needs no introduction; he is very well known and very clear in conveying his ideas. I have learned a lot from him over the course of the year. Every time I meet him, I learn something new, and this time was no exception.
Joe Webb – Joey is all about community and people. We had an interesting conversation about community, the MVP program, and how one can be helpful to the community without losing passion over the long term. It is always pleasant to talk to him and, of course, I had a fun time.
Ross Mistry – I call him my brother many times because he indeed looks like my cousin. He provided me with lots of insight into how one can write a book and how he keeps his books simple to appeal to all readers. A wonderful person and a great friend.
Ola Hallgren – I did not know he was coming to the summit. I had a great time meeting him and had a wonderful conversation with him regarding his scripts and future community activities.
Blythe Morrow – She used to be an integral part of the SQL Server community and PASS HQ. It was wonderful to meet her again and re-connect. She is a wonderful person, and I had a great time talking to her.
Solid Quality Mentors – It is difficult to decide whom to mention here. Instead of writing all the names, I am going to include a photo of our meeting. I had great fun meeting various members of our global branches. This year I was sitting with my Spanish speaking friends and had great fun as Javier Loria from Solid Quality translated lots of things for me.

Party, Party and Parties
Every evening there were various parties. I attended almost all of them. Every party had a different theme, but the goal of all the parties was the same - networking. Here are a few parties where I had lots of fun:
Dell Reception Party
Exhibitor Party
Solid Quality Fun Party
Red Gate Friends Party
MVP Dinner
Microsoft Party
Quest Party
Gameworks PASS Party
Volunteer Party at Garage

[Photos: MVP Dinner; Solid Quality Mentors (10 members out of 120)]

They were all great networking opportunities and lots of fun. I really had a great time meeting people at the various parties. There were a few people everywhere - well, I will say I am among them - who hopped parties.

NDA – Not Decided Agenda
During the event there were a few meetings marked "NDA." Someone asked me "why are these things NDA?" My response was simple: because they are not sure themselves. NDA stands for Not Decided Agenda.

Toys, Giveaways and Luggage
I admit, I was like a child in Gameworks and was playing to win soft toys. I was doing it for my daughter. I must thank all of the people who gave me their cards to try my luck. I won 4 soft toys for my daughter, and it was fun. Also, thanks to Angel who did a final toy swap with me to get the desired toy for my daughter. I also collected ducks from Idera, as my daughter really loves them.

[Photo: Solid Quality booth]

Each of the exhibitors was giving away something, and I got so much stuff that my luggage got quite a bit bigger when I returned.

Best Exhibitor
Idera had SQLDoctor (a real magician and fun guy) to promote their new tool SQLDoctor. I really had a great time participating in the magic myself. At one point, the magician made my watch disappear. I have seen better magic before, but this time it caught me unexpectedly and I was taken by surprise. I won many ducks again.

The Common Questions
I heard the following common questions:
I have seen you somewhere - who are you? – I am Pinal Dave.
I did not know that Pinal is your first name and Dave is your last name; how do you pronounce your last name again? – Da-way.
How old are you? – I am as old as I can be.
Are you an Indian, because you look like one? – I did not answer this one.
Where are you from? – This question was usually asked after looking at my badge, which says India.
So did you really fly from India? – Yes, because I have seasickness, so I do not prefer the sea journey.
How long was the journey? – 24/36/12 (air travel time/total travel time/time zone difference).
Why do you write on SQLAuthority.com? – Because I want to.
I remember your daughter; she looks like you. – Is this even a question? Of course, she is daddy's little girl.
There were so many other questions, I will have to write another blog post about them.

SQLPASS
Again, Best Summit Ever!

Reference: Pinal Dave (http://blog.sqlauthority.com)
Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SQLPASS

