Search Results

Search found 88206 results on 3529 pages for 'code coverage'.


  • Problem with SLATEC routine usage with gfortran

    - by user39461
    I am trying to compute the Bessel function of the second kind (Bessel_y) using the SLATEC's Amos library available on Netlib. Here is the SLATEC code I use. Below I have pasted my test program that calls SLATEC routine CBESY. PROGRAM BESSELTEST IMPLICIT NONE REAL:: FNU INTEGER, PARAMETER :: N = 2, KODE = 1 COMPLEX,ALLOCATABLE :: CWRK (:), CY (:) COMPLEX:: Z, ci INTEGER :: NZ, IERR ALLOCATE(CWRK(N), CY(N)) ci = cmplx (0.0, 1.0) FNU = 0.0e0 Z = CMPLX(0.3e0, 0.4e0) CALL CBESY(Z, FNU, KODE, N, CY, NZ, CWRK, IERR) WRITE(*,*) 'CY: ', CY WRITE(*,*) 'IERR: ', IERR STOP END PROGRAM And here is the output of the above program: CY: ( 5.78591091E-39, 5.80327020E-39) ( 0.0000000 , 0.0000000 ) IERR: 4 Ierr = 4 meaning there is some problem with the input itself. To be precise, the IERR = 4 means the following as per the header info in CBESY.f file: ! IERR=4, CABS(Z) OR FNU+N-1 TOO LARGE - NO COMPUTA- ! TION BECAUSE OF COMPLETE LOSSES OF SIGNIFI- ! CANCE BY ARGUMENT REDUCTION Clearly, CABS(Z) (which is 0.50) or FNU + N - 1 (which is 1.0) are not too large but still the routine CBESY throws the error message number 4 as above. The CY array should have following values for the argument given in above code: CY(1) = -0.4983 + 0.6700i CY(2) = -1.0149 + 0.9485i These values are computed using Matlab. I can't figure out what's the problem when I call CBESY from SLATEC library. Any clues? Much thanks for the suggestions/help. PS: if it is of any help, I used gfortran to compile, link and then create the SLATEC library file ( the .a file ) which I keep in the same directory as my test program above. shell command to execute above code: gfortran -c BesselTest.f95 gfortran -o a *.o libslatec.a a GD.


  • #1 O’Reilly eBook for 2010

    - by Jan Goyvaerts
    The year-end issue of O’Reilly’s author newsletter discussed the trends O’Reilly has been seeing the past few years, and their predictions for 2011. The key trend is that digital is now more than ever poised to take over print: Our digitally distributed products have grown from 18.36% of our publishing mix in 2009 to 28.09% of our mix in 2010. What is more impressive is that our digitally distributed products have produced more than double the revenue that has been lost with the decline of print. I think this is important because some say that digital cannibalizes print products. Our data indicates the contrary, as print is declining much more slowly than digital is growing. I think we may be seeing developers purchasing a print book, and then purchasing the electronic editions to search and copying code from, as the incremental cost for digital is more than reasonable. My own book seems to be leading this trend. Thanks to everyone who purchased it! And the five bestselling O’Reilly ebook products for 2010: 1) Regular Expressions Cookbook, 2) jQuery Cookbook, 3) Learning Python, 4) HTML5: Up and Running, and 5) JavaScript Cookbook. I think it’s interesting that the top five ebooks are code-intensive books. They’re great products for search and code reuse. It’s also interesting that none of the top 5 ebooks made the top 5 of print books.


  • dojo.require() prevents Firefox from rendering the page

    - by Eduard Wirch
    I'm experiencing strange behavior with Firefox and Dojo. I have an html page with these lines in the <head> section: ... <script type="text/javascript" src="dojo.js" djconfig="parseOnLoad: true, locale: 'de'"></script> <script type="text/javascript"> dojo.require("dojo.number"); </script> ... Sometimes the page loads normally. But sometimes it won't. Firefox will fetch the whole html page but not render it. I see only a gray window. After some experiments I figured out that the rendering problem has something to do with the load time of the html. Firefox starts evaluating the html page while loading it. If the page takes too long to load, the above javascript will be executed BEFORE the html finishes loading. If this happens I'll get the gray window. Asking Firefox to show me the source code of the page will display the correct, complete html code. BUT: if I save the page to disk (File-Save Page As...) the html code will be truncated and the above part will look like this: ... <script type="text/javascript" src="dojo.js" djconfig="parseOnLoad: true, locale: 'de'"></script> <script type="text/javascript"> dojo.require("dojo.number"); </script></head><body></body></html> This explains why I get to see a gray area. But why does this code appear there? I assume the require() method of Dojo does something "evil", but I can't figure out what. There is no document.write("</head><body></body></html>"); in the Dojo code. I checked for it. The problem goes away if I place the dojo.require("dojo.number"); statement in the window load event: <script type="text/javascript"> window.onload=function() { dojo.require("dojo.number"); } </script> But I'm curious why this happens. Is there a JavaScript function which forces Firefox to stop evaluating the page? Does Dojo do something "bad"? Can anyone explain this behavior to me? EDIT: Dojo 1.3.1, no JS errors or warnings.


  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspxMicrosoft just released a bunch of new features for Azure on 22nd and one of them I was interested in most is DocumentDB, a document NoSQL database service on the cloud.   Quick Look at DocumentDB We can try DocumentDB from the new azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB, which will be the endpoint we are going to use to connect later. Select the capacity unit, resource group and subscription. In resource group section we can select which region our DocumentDB will be located. Same as other azure services select the same location with your consumers of the DocumentDB, for example the website, web services, etc.. After several minutes the DocumentDB will be ready. Click the KEYS button we can find the URI and primary key, which will be used when connecting. Now let's open Visual Studio and try to use the DocumentDB we had just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelase" in NuGet Package Manager window since this library was not yet released. Next we will create a new database and document collection under our DocumentDB account. The code below created an instance of DocumentClient with the URI and primary key we just copied from azure portal, and create a database and collection. And it also prints the document and collection link string which will be used later to insert and query documents. 1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7: Run(client).Wait(); 8:  9: Console.WriteLine("done"); 10: Console.ReadKey(); 11: } 12:  13: static async Task Run(DocumentClient client) 14: { 15:  16: var database = new Database() { Id = "testdb" }; 17: database = await client.CreateDatabaseAsync(database); 18: Console.WriteLine("database link = {0}", database.SelfLink); 19:  20: var collection = new DocumentCollection() { Id = "testcol" }; 21: collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection); 22: Console.WriteLine("collection link = {0}", collection.SelfLink); 23: } Below is the result from the console window. We need to copy the collection link string for future usage. Now if we back to the portal we will find a database was listed with the name we specified in the code. Next we will insert a document into the database and collection we had just created. In the code below we pasted the collection link which copied in previous step, create a dynamic object with several properties defined. As you can see we can add some normal properties contains string, integer, we can also add complex property for example an array, a dictionary and an object reference, unless they can be serialized to JSON. 
1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7:  8: // collection link pasted from the result in previous demo 9: var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/"; 10:  11: // document we are going to insert to database 12: dynamic doc = new ExpandoObject(); 13: doc.firstName = "Shaun"; 14: doc.lastName = "Xu"; 15: doc.roles = new string[] { "developer", "trainer", "presenter", "father" }; 16:  17: // insert the docuemnt 18: InsertADoc(client, collectionLink, doc).Wait(); 19:  20: Console.WriteLine("done"); 21: Console.ReadKey(); 22: } the insert code will be very simple as below, just provide the collection link and the object we are going to insert. 1: static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc) 2: { 3: var document = await client.CreateDocumentAsync(collectionLink, doc); 4: Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented)); 5: } Below is the result after the object had been inserted. Finally we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK will help us to retrieve all documents in it. 1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7:  8: var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/"; 9:  10: SelectDocs(client, collectionLink); 11:  12: Console.WriteLine("done"); 13: Console.ReadKey(); 14: } 15:  16: static void SelectDocs(DocumentClient client, string collectionLink) 17: { 18: var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList(); 19: foreach(var doc in docs) 20: { 21: Console.WriteLine(doc); 22: } 23: } Since there's only one document in my collection below is the result when I executed the code. As you can see all properties, includes the array was retrieve at the same time. DocumentDB also attached some properties we didn't specified such as "_rid", "_ts", "_self" etc., which is controlled by the service.   DocumentDB Benefit DocumentDB is a document NoSQL database service. Different from the traditional database, document database is truly schema-free. In a short nut, you can save anything in the same database and collection if it could be serialized to JSON. We you query the document database, all sub documents will be retrieved at the same time. This means you don't need to join other tables when using a traditional database. Document database is very useful when we build some high performance system with hierarchical data structure. For example, assuming we need to build a blog system, there will be many blog posts and each of them contains the content and comments. The comment can be commented as well. If we were using traditional database, let's say SQL Server, the database schema might be defined as below. When we need to display a post we need to load the post content from the Posts table, as well as the comments from the Comments table. We also need to build the comment tree based on the CommentID field. But if were using DocumentDB, what we need to do is to save the post as a document with a list contains all comments. 
    Under a comment, all sub-comments are nested as a list inside it. When we display this post we just need to query the post document; the content and all comments will be loaded in the proper structure. 1: { 2: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 3: "title": "xxxxx", 4: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 5: "postedOn": "08/25/2014 13:55", 6: "comments": 7: [ 8: { 9: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 10: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 11: "commentedOn": "08/25/2014 14:00", 12: "commentedBy": "xxx" 13: }, 14: { 15: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 16: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 17: "commentedOn": "08/25/2014 14:10", 18: "commentedBy": "xxx", 19: "comments": 20: [ 21: { 22: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 23: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 24: "commentedOn": "08/25/2014 14:18", 25: "commentedBy": "xxx", 26: "comments": 27: [ 28: { 29: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 30: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 31: "commentedOn": "08/25/2014 18:22", 32: "commentedBy": "xxx", 33: } 34: ] 35: }, 36: { 37: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 38: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 39: "commentedOn": "08/25/2014 15:02", 40: "commentedBy": "xxx", 41: } 42: ] 43: }, 44: { 45: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 46: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 47: "commentedOn": "08/25/2014 14:30", 48: "commentedBy": "xxx" 49: } 50: ] 51: }   DocumentDB vs. Table Storage DocumentDB and Table Storage are both NoSQL services in Microsoft Azure. One common question is "when should we use DocumentDB rather than Table Storage?". Here are some ideas from me and some MVPs. First of all, they are different kinds of NoSQL databases. DocumentDB is a document database while table storage is a key-value database. Second, table storage is cheaper. DocumentDB supports scaling out from one capacity unit to five during the preview period, and each capacity unit provides 10GB of local SSD storage. The price is $0.73/day, which includes a 50% discount. For the storage service the highest price is $0.061/GB, which is almost 10% of the DocumentDB price. Third, table storage provides local replication, geo-replication and read-access geo-replication, which DocumentDB doesn't. Fourth, there is a local emulator for table storage but none for DocumentDB. We have to connect to DocumentDB in the cloud even when developing locally. But DocumentDB supports some cool features that table storage doesn't have. It supports stored procedures, triggers and user-defined functions. It supports rich indexing, while table storage only supports indexing against partition key and row key. It supports transactions; table storage does as well, but restricted to the Entity Group Transaction scope. And lastly, table storage is GA while DocumentDB is still in preview.   Summary In this post I gave a quick demonstration and introduction of the new DocumentDB service in Azure. It's very easy to interact with through .NET, and it also supports a REST API, a Node.js SDK and a Python SDK. Then I explained the concept and benefits of using a document database and compared it with table storage.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
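    A side note on the query demo earlier in the post: the preview .NET SDK also accepts a SQL-like query string, so you don't have to enumerate every document in the collection. The sketch below is illustrative only; the collection link and query text are assumptions, and the exact CreateDocumentQuery overloads may vary between SDK versions.

        static void SelectByName(DocumentClient client, string collectionLink)
        {
            // Hypothetical filter: fetch only the documents whose firstName is "Shaun".
            // The SQL-like grammar shown here is the one used by the DocumentDB preview;
            // adjust it to whatever the installed SDK version expects.
            var docs = client.CreateDocumentQuery(collectionLink,
                "SELECT * FROM root r WHERE r.firstName = 'Shaun'").ToList();

            foreach (var doc in docs)
            {
                Console.WriteLine(doc);
            }
        }

    Called with the same collection link as the SelectDocs() demo above, this would print only the matching documents.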


  • My ashx stopped working (Works locally, but not online)

    - by Madi D.
    I have a simple ASHX file that returns pictures upon request (mainly a counter). Previously the code simply fetched pre-made pictures (already uploaded and available) and sent them to the requester. I just modified the code so it would take a base picture, do some modifications on it, save it to the server, then serve it to the user. I tested it locally and it worked like a charm. However, when I uploaded the code to my hosting service (GoDaddy) it didn't work. Can someone point the problem out to me? Note: ASHX handlers have worked for me before, so I know the web.config and IIS are handling them properly; however, I think I am missing something. Code: <%@ WebHandler Language="C#" Class="NWEmpPhotoHandler" %> using System; using System.Web; using System.IO; using System.Drawing; using System.Drawing.Imaging; public class NWEmpPhotoHandler : IHttpHandler { public bool IsReusable { get { return true; } } public void ProcessRequest(HttpContext ctx) { try { //Some code to fetch # of visitors from DB int x = 10000; // # of visitors, fetched from DB string numberOfVistors = (1000 + x).ToString(); string filePath = ctx.Server.MapPath("counter.jpg"); Bitmap myBitmap = new Bitmap(filePath); Graphics g = Graphics.FromImage(myBitmap); g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias; StringFormat strFormat = new StringFormat(); g.DrawString(numberOfVistors , new Font("Tahoma", 24), Brushes.Maroon, new RectangleF(55, 82, 500, 500),null); string PathToSave = ctx.Server.MapPath("counter-" + numberOfVistors + ".jpg"); saveJpeg(PathToSave, myBitmap, 100); if (File.Exists(PathToSave)) { ctx.Response.ContentType = "image/jpg"; ctx.Response.WriteFile(PathToSave); //ctx.Response.OutputStream.Write(picByteArray, 0, picByteArray.Length); } } catch (ArgumentException exe) { } } private ImageCodecInfo getEncoderInfo(string mimeType) { // Get image codecs for all image formats ImageCodecInfo[] codecs = ImageCodecInfo.GetImageEncoders(); // Find the correct image codec for (int i = 0; i < codecs.Length; i++) if (codecs[i].MimeType == mimeType) return codecs[i]; return null; } private void saveJpeg(string path, Bitmap img, long quality) { // Encoder parameter for image quality EncoderParameter qualityParam = new EncoderParameter(Encoder.Quality, quality); // Jpeg image codec ImageCodecInfo jpegCodec = this.getEncoderInfo("image/jpeg"); if (jpegCodec == null) return; EncoderParameters encoderParams = new EncoderParameters(1); encoderParams.Param[0] = qualityParam; img.Save(path, jpegCodec, encoderParams); } }
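    Not an answer as such, but one variable worth isolating (an assumption on my part, since the hosting configuration isn't shown) is the write to ctx.Server.MapPath(...): many shared hosts restrict write access to the application folder. A minimal sketch of the same handler logic that streams the rendered bitmap straight to the response, without saving a file on the server:

        // Hypothetical variation of ProcessRequest that avoids writing to disk
        // (needs System.Drawing.Imaging for ImageFormat).
        public void ProcessRequest(HttpContext ctx)
        {
            int x = 10000; // # of visitors, fetched from DB
            string numberOfVisitors = (1000 + x).ToString();

            using (Bitmap myBitmap = new Bitmap(ctx.Server.MapPath("counter.jpg")))
            using (Graphics g = Graphics.FromImage(myBitmap))
            using (MemoryStream ms = new MemoryStream())
            {
                g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias;
                g.DrawString(numberOfVisitors, new Font("Tahoma", 24), Brushes.Maroon,
                             new RectangleF(55, 82, 500, 500));

                myBitmap.Save(ms, ImageFormat.Jpeg); // render to memory instead of disk
                ctx.Response.ContentType = "image/jpeg";
                ctx.Response.BinaryWrite(ms.ToArray());
            }
        }

    If this version works on the host while the file-writing version doesn't, the problem is likely write permissions rather than the handler itself.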


  • Building Web Applications with ACT and jQuery

    - by dwahlin
    My second talk at TechEd is focused on integrating ASP.NET AJAX and jQuery features into websites (if you’re interested in Silverlight you can download code/slides for that talk here). The content starts out by discussing ScriptManager features available in ASP.NET 3.5 and ASP.NET 4 and provides details on why you should consider using a Content Delivery Network (CDN).  If you’re running an external facing site then checking out the CDN features offered by Microsoft or Google is definitely recommended. The talk also goes into the process of contributing to the Ajax Control Toolkit as well as the new Ajax Minifier tool that’s available to crunch JavaScript and CSS files. The extra fun starts in the next part of the talk which details some of the work Microsoft is doing with the jQuery team to donate template, globalization and data linking code to the project. I go into jQuery templates, data linking and a new globalization option that are all being worked on. I want to thank Stephen Walther, Dave Reed and James Senior for their thoughts and contributions since some of the topics covered are pretty bleeding edge right now.The slides and sample code for the talk can be downloaded below.     Download Slides and Samples


  • C# best means to store data locally when offline

    - by mickartz
    I am in the midst of writing a small program (more to experiment with VS 2010 than anything else). Despite being an experiment, it has some practical use for our local athletics club. My thought was to access the DB (currently online) to download the current members and store them locally on a laptop (this is an MS SQL table, used to power the club's website), take the laptop to the event (yes, there ARE places that don't have internet coverage), add members to that day's race (also a row from a SQL table, though no changes would be made to this), and record results (new records in a 3rd table). Once home, showered and within internet access again, upload/edit the tables as per the race results/member changes etc. So I was thinking I'd do something like write XML files locally with the data, including a field to indicate changes. If anyone can point me in a direction I would appreciate it... hell, if anyone could tell me whether this approach has a name, I'd appreciate that too. TIA Michael Artz
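    For what it's worth, here is a minimal sketch of the "XML file with a change flag" idea; the Member type, its properties and the file path are made up purely for illustration (uses System.Xml.Serialization, System.IO and System.Collections.Generic):

        public class Member
        {
            public int MemberId { get; set; }
            public string Name { get; set; }
            public bool IsDirty { get; set; } // set when edited offline, cleared after upload
        }

        public static class OfflineStore
        {
            // Save the current member list to a local XML file before leaving for the event.
            public static void Save(List<Member> members, string path)
            {
                var serializer = new XmlSerializer(typeof(List<Member>));
                using (var stream = File.Create(path))
                {
                    serializer.Serialize(stream, members);
                }
            }

            // Load it back at the event; anything flagged IsDirty gets uploaded once online again.
            public static List<Member> Load(string path)
            {
                var serializer = new XmlSerializer(typeof(List<Member>));
                using (var stream = File.OpenRead(path))
                {
                    return (List<Member>)serializer.Deserialize(stream);
                }
            }
        }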


  • Silverlight Cream for April 24, 2010 -- #846

    - by Dave Campbell
    In this Issue: Michael Washington, Timmy Kokke, Pete Brown, Paul Yanez, Emil Stoychev, Jeremy Likness, and Pavan Podila. Shoutouts: If you've got some time to spend, the User Experience Kit is packed with info: User Experience Kit, and just plain fun to navigate ... thanks Scott Barnes for reminding me about it! Jesse Liberty is looking for some help organizing and cataloging posts for a new project he's got going: Help Wanted Emil Stoychev posted Slides and demos from my talk on Silverlight 4 From SilverlightCream.com: Silverlight 4 Drag and Drop File Manager Michael Washington has a post up about a Silverlight Drag and Drop File Manager in MVVM, but a secondary important point about the post is that he and Alan Beasley followed strict Designer/Developer rules on this... you recognized Alan's ListBox didn't you? Changing CSS with jQuery syntax in Silverlight using jLight Timmy Kokke is using jLight as introduced in a prior post to interact with the DOM from Silverlight. Essential Silverlight and WPF Skills: The UI Thread, Dispatchers, Background Workers and Async Network Programming Pete Brown has a great backrounder up for WPF and Silverlight devs on threading and networking, good comments too so far. Fluid layout and Fullscreen in Silverlight Paul Yanez has a quick post and demo up on forcing full-screen with a fluid layout, all code included -- and it doesn't take much Data Binding in Silverlight Emil Stoychev has a great long tutorial up on DataBinding in Silverlight ... he hits all the major points with text, samples, and code... definitely one to read! Yet Another MVVM Locator Pattern Another not-necessarily Silverlight post from Jeremy Likness -- but definitely a good one on MVVM and locator patterns. The SpiderWebControl for Silverlight Pavan Podila has a 'SpiderWebControl' for Silverlight 4 up... this is a great network graph control with any sort of feature I can think of... check out the demo, then grab the code... or the other way around, your choice :) Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10


  • Git: Fixing a bug affecting two branches

    - by Aram Kocharyan
    I'm basing my Git repo on http://nvie.com/posts/a-successful-git-branching-model/ and was wondering what happens if you have this situation: Say I'm developing on two feature branches A and B, and B requires code from A. The X node introduces an error in feature A which affects branch B, but this is not detected at node Y where feature A and B were merged and testing was conducted before branching out again and working on the next iteration. As a result, the bug is found at node Z by the people working on feature B. At this stage it's decided that a bugfix is needed. This fix should be applied to both features, since the people working on feature A also need the bug fixed, since its part of their feature. Should a bugfix branch be created from the latest feature A node (the one branching from node Y) and then merged with feature A? After which both features are merged into develop again and tested before branching out? The problem with this is that it requires both branches to merge to fix the issue. Since feature B doesn't touch code in feature A, is there a way to change the history at node Y by implementing the fix and still allowing the feature B branch to remain unmerged yet have the fixed code from feature A? Mildly related: Git bug branching convention


  • How do you run PartCover with spaces in the path?

    - by nportelli
    I have an MSBuild file that I'm trying to run from Hudson CI. It outputs a command like this: "C:\Program Files\Gubka Bob\PartCover .NET 2\PartCover.exe" --target "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe" --target-args "/noisolation" "/testcontainer:C:\CI\Hudson\jobs\Video Raffle\workspace\Source\VideoRaffleCaller\Source\VideoRaffleCaller.Test.Unit\bin\Debug\VideoRaffleCaller.Test.Unit.dll" --include "[VideoRaffleCaller*]*" --output "Coverage\partcover.xml" I get this error: Invalid switch "raffle\workspace\source\videorafflecaller\source\videorafflecaller.test.unit\bin\debug\videorafflecaller.test.unit.dll". For switch syntax, type "MSTest /help" WTF? Looks like PartCover doesn't handle spaces in the --target-args well. Or am I missing some quotes somewhere? Has anyone gotten something like this to work?


  • VS 2010 IDE Features in a nutshell

    - by Rajesh Pillai
    Going through the VS 2010 IDE features.  We will explore each feature in subsequent posts.  The posts are documented as being reviewed by me.   Breakpoint Labeling, Breakpoint Searching, Breakpoint Import/Export, Dynamic Data Tooling, WPF Tree Visualizer, Call Hierarchy, Improved WPF Tooling, Historical Debugging, Mini-Dump Debugging, Quick Search, Better Multi-Monitor Support, Highlight References, Parallel Stacks Window, Parallel Tasks Window, Document Map Margin, Generate from Usage, Concurrency Profiler, Inline Call Tree, Extensible Test Runner, MVC Tooling, Web Deploy, jQuery IntelliSense, SharePoint Tooling, HTML Snippets, Web.config Transformation, ClickOnce Enhancements for MS Office.     VS is an editor as well as a platform for development, and this is only more true with VS 2010.  As an editor there is improved focus on writing code, understanding code, navigating and publishing code.   The VS shell has been completely rewritten using WPF, bringing huge benefits.  The start page has been rewritten using XAML, so it is easy to customize.   There is new support for Silverlight, MFC, F# and Azure, and extended support for Office 2010 and SharePoint.   It has a good Extension Manager as well.   Enjoy Coding !!!


  • C#/.NET Little Wonders: Fun With Enum Methods

    - by James Michael Hare
    Once again lets dive into the Little Wonders of .NET, those small things in the .NET languages and BCL classes that make development easier by increasing readability, maintainability, and/or performance. So probably every one of us has used an enumerated type at one time or another in a C# program.  The enumerated types we create are a great way to represent that a value can be one of a set of discrete values (or a combination of those values in the case of bit flags). But the power of enum types go far beyond simple assignment and comparison, there are many methods in the Enum class (that all enum types “inherit” from) that can give you even more power when dealing with them. IsDefined() – check if a given value exists in the enum Are you reading a value for an enum from a data source, but are unsure if it is actually a valid value or not?  Casting won’t tell you this, and Parse() isn’t guaranteed to balk either if you give it an int or a combination of flags.  So what can we do? Let’s assume we have a small enum like this for result codes we want to return back from our business logic layer: 1: public enum ResultCode 2: { 3: Success, 4: Warning, 5: Error 6: } In this enum, Success will be zero (unless given another value explicitly), Warning will be one, and Error will be two. So what happens if we have code like this where perhaps we’re getting the result code from another data source (could be database, could be web service, etc)? 1: public ResultCode PerformAction() 2: { 3: // set up and call some method that returns an int. 4: int result = ResultCodeFromDataSource(); 5:  6: // this will suceed even if result is < 0 or > 2. 7: return (ResultCode) result; 8: } So what happens if result is –1 or 4?  Well, the cast does not fail, so what we end up with would be an instance of a ResultCode that would have a value that’s outside of the bounds of the enum constants we defined. This means if you had a block of code like: 1: switch (result) 2: { 3: case ResultType.Success: 4: // do success stuff 5: break; 6:  7: case ResultType.Warning: 8: // do warning stuff 9: break; 10:  11: case ResultType.Error: 12: // do error stuff 13: break; 14: } That you would hit none of these blocks (which is a good argument for always having a default in a switch by the way). So what can you do?  Well, there is a handy static method called IsDefined() on the Enum class which will tell you if an enum value is defined.  1: public ResultCode PerformAction() 2: { 3: int result = ResultCodeFromDataSource(); 4:  5: if (!Enum.IsDefined(typeof(ResultCode), result)) 6: { 7: throw new InvalidOperationException("Enum out of range."); 8: } 9:  10: return (ResultCode) result; 11: } In fact, this is often recommended after you Parse() or cast a value to an enum as there are ways for values to get past these methods that may not be defined. If you don’t like the syntax of passing in the type of the enum, you could clean it up a bit by creating an extension method instead that would allow you to call IsDefined() off any isntance of the enum: 1: public static class EnumExtensions 2: { 3: // helper method that tells you if an enum value is defined for it's enumeration 4: public static bool IsDefined(this Enum value) 5: { 6: return Enum.IsDefined(value.GetType(), value); 7: } 8: }   HasFlag() – an easier way to see if a bit (or bits) are set Most of us who came from the land of C programming have had to deal extensively with bit flags many times in our lives.  
As such, using bit flags may be almost second nature (for a quick refresher on bit flags in enum types see one of my old posts here). However, in higher-level languages like C#, the need to manipulate individual bit flags is somewhat diminished, and the code to check for bit flag enum values may be obvious to an advanced developer but cryptic to a novice developer. For example, let’s say you have an enum for a messaging platform that contains bit flags: 1: // usually, we pluralize flags enum type names 2: [Flags] 3: public enum MessagingOptions 4: { 5: None = 0, 6: Buffered = 0x01, 7: Persistent = 0x02, 8: Durable = 0x04, 9: Broadcast = 0x08 10: } We can combine these bit flags using the bitwise OR operator (the ‘|’ pipe character): 1: // combine bit flags using 2: var myMessenger = new Messenger(MessagingOptions.Buffered | MessagingOptions.Broadcast); Now, if we wanted to check the flags, we’d have to test then using the bit-wise AND operator (the ‘&’ character): 1: if ((options & MessagingOptions.Buffered) == MessagingOptions.Buffered) 2: { 3: // do code to set up buffering... 4: // ... 5: } While the ‘|’ for combining flags is easy enough to read for advanced developers, the ‘&’ test tends to be easy for novice developers to get wrong.  First of all you have to AND the flag combination with the value, and then typically you should test against the flag combination itself (and not just for a non-zero)!  This is because the flag combination you are testing with may combine multiple bits, in which case if only one bit is set, the result will be non-zero but not necessarily all desired bits! Thanks goodness in .NET 4.0 they gave us the HasFlag() method.  This method can be called from an enum instance to test to see if a flag is set, and best of all you can avoid writing the bit wise logic yourself.  Not to mention it will be more readable to a novice developer as well: 1: if (options.HasFlag(MessagingOptions.Buffered)) 2: { 3: // do code to set up buffering... 4: // ... 5: } It is much more concise and unambiguous, thus increasing your maintainability and readability. It would be nice to have a corresponding SetFlag() method, but unfortunately generic types don’t allow you to specialize on Enum, which makes it a bit more difficult.  It can be done but you have to do some conversions to numeric and then back to the enum which makes it less of a payoff than having the HasFlag() method.  But if you want to create it for symmetry, it would look something like this: 1: public static T SetFlag<T>(this Enum value, T flags) 2: { 3: if (!value.GetType().IsEquivalentTo(typeof(T))) 4: { 5: throw new ArgumentException("Enum value and flags types don't match."); 6: } 7:  8: // yes this is ugly, but unfortunately we need to use an intermediate boxing cast 9: return (T)Enum.ToObject(typeof (T), Convert.ToUInt64(value) | Convert.ToUInt64(flags)); 10: } Note that since the enum types are value types, we need to assign the result to something (much like string.Trim()).  Also, you could chain several SetFlag() operations together or create one that takes a variable arg list if desired. Parse() and ToString() – transitioning from string to enum and back Sometimes, you may want to be able to parse an enum from a string or convert it to a string - Enum has methods built in to let you do this.  Now, many may already know this, but may not appreciate how much power are in these two methods. 
For example, if you want to parse a string as an enum, it’s easy and works just like you’d expect from the numeric types: 1: string optionsString = "Persistent"; 2:  3: // can use Enum.Parse, which throws if finds something it doesn't like... 4: var result = (MessagingOptions)Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result == MessagingOptions.Persistent) 7: { 8: Console.WriteLine("It worked!"); 9: } Note that Enum.Parse() will throw if it finds a value it doesn’t like.  But the values it likes are fairly flexible!  You can pass in a single value, or a comma separated list of values for flags and it will parse them all and set all bits: 1: // for string values, can have one, or comma separated. 2: string optionsString = "Persistent, Buffered"; 3:  4: var result = (MessagingOptions)Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 7: { 8: Console.WriteLine("It worked!"); 9: } Or you can parse in a string containing a number that represents a single value or combination of values to set: 1: // 3 is the combination of Buffered (0x01) and Persistent (0x02) 2: var optionsString = "3"; 3:  4: var result = (MessagingOptions) Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 7: { 8: Console.WriteLine("It worked again!"); 9: } And, if you really aren’t sure if the parse will work, and don’t want to handle an exception, you can use TryParse() instead: 1: string optionsString = "Persistent, Buffered"; 2: MessagingOptions result; 3:  4: // try parse returns true if successful, and takes an out parm for the result 5: if (Enum.TryParse(optionsString, out result)) 6: { 7: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 8: { 9: Console.WriteLine("It worked!"); 10: } 11: } So we covered parsing a string to an enum, what about reversing that and converting an enum to a string?  The ToString() method is the obvious and most basic choice for most of us, but did you know you can pass a format string for enum types that dictate how they are written as a string?: 1: MessagingOptions value = MessagingOptions.Buffered | MessagingOptions.Persistent; 2:  3: // general format, which is the default, 4: Console.WriteLine("Default : " + value); 5: Console.WriteLine("G (default): " + value.ToString("G")); 6:  7: // Flags format, even if type does not have Flags attribute. 8: Console.WriteLine("F (flags) : " + value.ToString("F")); 9:  10: // integer format, value as number. 11: Console.WriteLine("D (num) : " + value.ToString("D")); 12:  13: // hex format, value as hex 14: Console.WriteLine("X (hex) : " + value.ToString("X")); Which displays: 1: Default : Buffered, Persistent 2: G (default): Buffered, Persistent 3: F (flags) : Buffered, Persistent 4: D (num) : 3 5: X (hex) : 00000003 Now, you may not really see a difference here between G and F because I used a [Flags] enum, the difference is that the “F” option treats the enum as if it were flags even if the [Flags] attribute is not present.  Let’s take a non-flags enum like the ResultCode used earlier: 1: // yes, we can do this even if it is not [Flags] enum. 
2: ResultCode value = ResultCode.Warning | ResultCode.Error; And if we run that through the same formats again we get: 1: Default : 3 2: G (default): 3 3: F (flags) : Warning, Error 4: D (num) : 3 5: X (hex) : 00000003 Notice that since we had multiple values combined, but it was not a [Flags] marked enum, the G and default format gave us a number instead of a value name.  This is because the value was not a valid single-value constant of the enum.  However, using the F flags format string, it broke out the value into its component flags even though it wasn’t marked [Flags]. So, if you want to get an enum to display appropriately for whether or not it has the [Flags] attribute, use G which is the default.  If you always want it to attempt to break down the flags, use F.  For numeric output, obviously D or  X are the best choice depending on whether you want decimal or hex. Summary Hopefully, you learned a couple of new tricks with using the Enum class today!  I’ll add more little wonders as I think of them and thanks for all the invaluable input!   Technorati Tags: C#,.NET,Little Wonders,Enum,BlackRabbitCoder
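    As a footnote to the SetFlag() discussion above: the variable-argument variant the article mentions could be sketched roughly like this (illustrative only, not from the original post):

        public static T SetFlags<T>(this Enum value, params T[] flags)
        {
            if (!value.GetType().IsEquivalentTo(typeof(T)))
            {
                throw new ArgumentException("Enum value and flags types don't match.");
            }

            // Accumulate all requested flags, then box back to the enum type
            // using the same intermediate cast as the single-flag SetFlag() above.
            ulong result = Convert.ToUInt64(value);
            foreach (T flag in flags)
            {
                result |= Convert.ToUInt64(flag);
            }
            return (T)Enum.ToObject(typeof(T), result);
        }

    Usage would look like var options = MessagingOptions.None.SetFlags(MessagingOptions.Buffered, MessagingOptions.Persistent); and, as with SetFlag(), the result has to be assigned back since enums are value types.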


  • MSCC: Purpose and benefits of Version Control Systems (VCS)

    Unfortunately, there was no monthly meetup during May. Which means that it was even more important and interesting to go forward with a great topic for this month. Earlier this year I already spoke to Nayar Joolfoo about doing a presentation on version control systems (VCS), and he gladly agreed since then. It was just about finding the right date for the action. Furthermore, it was also a great coincidence that Avinash Meetoo announced on social media networks that Knowledge 7 is about to have a new training on "Effective git" - which correlates to a book title Avinash is currently working on - all the best with your approach on this and reach out to our MSCC craftsmen for recessions. Once again a big Thank you to Orange Ebene Accelerator on providing the venue for us, and the MSCC members involved on securing the time slot for our event. Unfortunately, it's kind of tough to get an early confirmation for our meetups these days. I'll keep you posted on that one as there are some interesting and exciting options coming up soon. Okay, let's talk about the meeting and version control systems again. As usual, I'm going to put my first impression of the meetup: "Absolutely great topic, questions and discussions on version control systems, like git or VSO. I was also highly pleased by the number of first timers and female IT geeks. Hopefully, we will be able to keep this trend for future get-togethers." And I really have to emphasise the amount of fresh blood coming to our gathering. Also, during the initial phase it was surprising to see that exactly those first-timers, most of them students at various campuses here on the island, had absolutely no idea about version control systems. More about further down... Reactions of other attendees If I counted correctly, we had a total of 17 attendees this month, and I'd like to give you feedback from some of them: "Inspiring. Helped me understand more about GIT." -- Sean on event comments "Joined the meetup today with literally no idea what is a version control system. I have several reasons why I should be starting to use VCS as from NOW in my projects. Thanks Nayar, Jochen and other participants :)" -- Yudish on event comments "Was present today and I'm very satisfied.I was not aware if there was a such tool like git available. Thanks to those who contributed for this meetup.It was great. Learned a lot from this meetup!!" -- Leonardo on event comments "Seriously, I can see how it’s going to ease my task and help me save time. Gone are the issues with files backups.  And since I’ll be doing my dissertation this year, using Git would help me a lot for my backups and I’m grateful to Nayar for the great explanation." -- Swan-Iyah on MSCC meetup : Version Controls Hopefully, I'll be able to get some other sources - personal blogs preferred - on our meeting. Geeks, thank you so much for those encouraging comments. It's really great to experience that we, all members of the MSCC, are doing the right thing to get more IT information out, and to help each other to improve and evolve in our professional careers. Our agenda of the day Honestly, we had a bumpy start... First, I was battling a little bit with the movable room divider in order to maximize the space. I mean, we had 24 RSVPs and usually there might additional people coming along. Then, for what ever reason, we were facing power outages - actually twice in short periods. Not too good for the projector after all, but hey it went smooth for the rest of the time being. And last but not least... 
our first speaker Nayar got stuck somewhere on the road. ;-) Anyway, not a real show-stopper and we used the time until Nayar's arrival to introduce ourselves a little bit. It is always important for me to get to know the "newbies" a little bit, and as a result we had lots of students of university - first year, second year and recent graduates - among them. Surprisingly, none of them was ever in contact with version control systems at all. I mean, this is a shocking discovery! Similar to the ability of touch-typing I'd say that being able to use (and master) any kind of version control system is compulsory in any job in the IT industry. Seriously, I'm wondering what is being taught during the classes on the campus. All of them have to work on semester assessments or final projects, even in small teams of 2-4 people. That's the perfect occasion to get started with VCS. Already in this phase, we had great input from more experienced VCS users, like Sean, Avinash and myself. git - a modern approach to VCS - Nayar What a tour! Nayar gave us the full round of git from start to finish, even touching some more advanced techniques. First, he started to explain about the importance of version control systems as an essential tool for software developers, even working alone on a project, and the ability to have a kind of "time machine" that allows you to inspect and revert to a previous version of source code at any time. Then he showed how easy it is to install git on an Ubuntu based system but also mentioned that git is literally available for any operating system, like Windows, Mac OS X and of course other Linux distributions. Next, he showed us how to set the initial configuration values of user name and email address which simplifies the daily usage of the git client while working with your repositories. Then he initialised and added a new repository for some local development of a blogging software. All commands were done using the command line interface (CLI) so that they can be repeated on any system as reference. The syntax and the procedure is always the same, and Nayar clearly mentioned this to the attendees. Now, having a git repository in place it was about time to work on some "important" changes on the blogging software - just for the sake of demonstrating the ease of use and power of git. One interesting question came very early: "How many commands do we have to learn? It looks quite difficult at the moment" - Well, rest assured that during daily development circles you will need less than 10 git commands on a regular base: git add, commit, push, pull, checkout, and merge And Nayar demo'd all of them. Much to the delight of everyone he also showed gitk which is the git repository browser. It's an UI tool to display changes in a repository or a selected set of commits. This includes visualizing the commit graph, showing information related to each commit, and the files in the trees of each revision. Using gitk to display and browse information of a local git repository And last but not least, we took advantage of the internet connectivity and reached out to various online portals offering git hosting for free. Nayar showed us how to push the local repository into a remote system on github. Showing the web-based git browser and history handling, and then also explained and demo'd on how to connect to existing online repositories in order to get access to either your own source code or other people's open source projects. 
Next to github, we also spoke about bitbucket and gitlab as potential online platforms for your projects. Have a look at the conditions and details about their free service packages and what you can get additionally as a paying customer. Usually, you already get a lot of services for up to five users for free but there might be other important aspects that might have an impact on your decision. Anyways, moving git-based repositories between systems is a piece of cake, and changing online platforms is possible at any stage of your development. Visual Studio Online (VSO) - Jochen Well, Nayar literally covered all elements of working with git during his session, including the use of external online platforms. So, what would be the advantage of talking about Visual Studio Online (VSO)? First of all, VSO is "just another" online platform for hosting and managing git repositories on remote systems, equivalent to github, bitbucket, or any other web site. At the moment (of writing), Microsoft also provides a free package of up to five users / developers on a git repository but there is more in that package. Of course, it is related to software development on the Windows systems and the bonds are tightened towards the use of Visual Studio but out of experience you are absolutely not restricted to that. Connecting a Linux or Mac OS X machine with a git client or an integrated development environment (IDE) like Eclipse or Xcode works as smooth as expected. So, why should one opt in for VSO? Well, one of the main aspects that I would like to mention here is that VSO integrates the Application Life Cycle Methodology (ALM) of Microsoft in their platform. Meaning that you get agile project management with Backlogs, Sprints, Burn-down charts as well as the ability to track tasks, bug reports and work items next to collaborative team chats. It's the whole package of agile development you'll get. And, something I mentioned briefly during the begin of our meeting, VSO gives you the possibility of an automated continuous integrated (CI) process which builds and can run tests of your source code after each commit of changes. Having a proper CI strategy is also part of the Clean Code Developer practices - on Level Green actually -, and not only simplifies your life as a software developer but also reduces the sources of potential errors. Seamless integration and automated deployment between Microsoft Azure Web Sites and git repository But my favourite feature is the seamless continuous deployment to Microsoft Azure. Especially, while working on web projects it's absolutely astounishing that as soon as you commit your chances it just takes a couple of seconds until your modifications are deployed and available on your Azure-hosted web sites. Upcoming Events and networking Due to the adjusted times, everybody was kind of hungry and we didn't follow up on networking or upcoming events - very unfortunate to my opinion and this will have an impact on future planning of our meetups. Because I rather would like to see more conversations during and at the end of our meetings than everyone just packing their laptops, bags and accessories and rush off to grab some food. I was hoping to get some information regarding this year's Code Challenge - supposedly to be organised during July? Maybe someone could leave a comment on that - but I couldn't get any updates. Well, I'll keep digging... 
In case that you would like to get more into git and how to use it effectively, please check out Knowledge 7's upcoming course on "Effective git". Thanks Avinash for your vital input into today's conversation and I'm looking forward to get a grip on your book title very soon. My resume of the day Do not work in IT without any kind of version control system! Seriously, without a VCS in place you're doing it wrong. It's like driving a car without seat belts attached or riding your bike without safety helmet. You don't do that! End of discussion. ;-) Nowadays, having access to free (as in cost) tools to install on your machine and numerous online platforms to host your source code for free for up to five users it's a no-brainer to get yourself familiar with VCS. Today's sessions gave a good overview on how to start using git and how to connect to various remote services like github or VSO.


  • Still getting duplicate token error after calling DuplicateTokenEx for impersonated token

    - by atconway
    I'm trying to return a Sytem.IntPtr from a service call so that the client can use impersonation to call some code. My imersonation code works properly if not passing the token back from a WCF service. I'm not sure why this is not working. I get the following error: "Invalid token for impersonation - it cannot be duplicated." Here is my code that does work except when I try to pass the token back from a service to a WinForm C# client to then impersonate. [DllImport("advapi32.dll", EntryPoint = "DuplicateTokenEx")] public extern static bool DuplicateTokenEx(IntPtr ExistingTokenHandle, uint dwDesiredAccess, ref SECURITY_ATTRIBUTES lpThreadAttributes, int TokenType, int ImpersonationLevel, ref IntPtr DuplicateTokenHandle); private IntPtr tokenHandle = new IntPtr(0); private IntPtr dupeTokenHandle = new IntPtr(0); [StructLayout(LayoutKind.Sequential)] public struct SECURITY_ATTRIBUTES { public int Length; public IntPtr lpSecurityDescriptor; public bool bInheritHandle; } public enum SecurityImpersonationLevel { SecurityAnonymous = 0, SecurityIdentification = 1, SecurityImpersonation = 2, SecurityDelegation = 3 } public enum TokenType { TokenPrimary = 1, TokenImpersonation = 2 } private const int MAXIMUM_ALLOWED = 0x2000000; [PermissionSetAttribute(SecurityAction.Demand, Name = "FullTrust")] public System.IntPtr GetWindowsUserToken(string UserName, string Password, string DomainName) { IntPtr tokenHandle = new IntPtr(0); IntPtr dupTokenHandle = new IntPtr(0); const int LOGON32_PROVIDER_DEFAULT = 0; //This parameter causes LogonUser to create a primary token. const int LOGON32_LOGON_INTERACTIVE = 2; //Initialize the token handle tokenHandle = IntPtr.Zero; //Call LogonUser to obtain a handle to an access token for credentials supplied. bool returnValue = LogonUser(UserName, DomainName, Password, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, ref tokenHandle); //Make sure a token was returned; if no populate the ResultCode and throw an exception: int ResultCode = 0; if (false == returnValue) { ResultCode = Marshal.GetLastWin32Error(); throw new System.ComponentModel.Win32Exception(ResultCode, "API call to LogonUser failed with error code : " + ResultCode); } SECURITY_ATTRIBUTES sa = new SECURITY_ATTRIBUTES(); sa.bInheritHandle = true; sa.Length = Marshal.SizeOf(sa); sa.lpSecurityDescriptor = (IntPtr)0; bool dupReturnValue = DuplicateTokenEx(tokenHandle, MAXIMUM_ALLOWED, ref sa, (int)SecurityImpersonationLevel.SecurityDelegation, (int)TokenType.TokenImpersonation, ref dupTokenHandle); int ResultCodeDup = 0; if (false == dupReturnValue) { ResultCodeDup = Marshal.GetLastWin32Error(); throw new System.ComponentModel.Win32Exception(ResultCode, "API call to DuplicateToken failed with error code : " + ResultCode); } //Return the user token return dupTokenHandle; } Any idea if I'm not using the call to DuplicateTokenEx correctly? According to the MSDN documentation I read here I should be able to create a token valid for delegation and use across the context on remote systems. When 'SecurityDelegation' is used, the server process can impersonate the client's security context on remote systems. Thanks!
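    For reference only, a minimal sketch of how a caller would typically consume a duplicated token in-process (hypothetical client code, not part of the question). Note the assumption that the impersonation happens in the same process that obtained the token, since a raw Win32 token handle is only meaningful inside the process that owns it:

        // Hypothetical consumer of GetWindowsUserToken(), running in the same process
        // (uses System.Security.Principal).
        IntPtr token = GetWindowsUserToken("someUser", "somePassword", "SOMEDOMAIN");

        using (WindowsImpersonationContext wic = WindowsIdentity.Impersonate(token))
        {
            // Code here runs under the impersonated identity.
            Console.WriteLine("Running as: " + WindowsIdentity.GetCurrent().Name);
        }
        // The underlying token handle should eventually be released (CloseHandle via P/Invoke).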


  • Java devs: why not use Groovy?

    - by FarmBoy
    OK, so there are quite a few people using Java these days. But as the language nears two decades of age, it isn't exactly the coolest option out there. Many of us are excited about dynamic languages with some functional features like Ruby or Python, even though we spend our days using Java. So why is it that the adoption of Groovy has been so slow? It seems that Groovy offers much of the benefits of Ruby and Python, but it is far easier to transition a Java shop to Groovy. Even if performance were the concern, it seems that many would want to use Groovy for testing the production Java code. Or use Groovy/Grails for internal apps in which performance concerns are minimal. Or for writing one-off scripts to generate code. Yet Groovy languishes outside of Tiobe's top 50 languages, for reasons that are unclear to me. I have been using Groovy and Grails professionally for about four months, and it has been an excellent experience, such that I hate to think about going back to the Java/Spring/Hibernate model. Does anyone have any sense on why we are not seeing more significant migration from Java to Groovy? Note that I'm not asking why Java developers are still using Java for new projects. My question is: Why is it that most Java Developers are still not using Groovy at all. Edit: I am assuming that all good developers see the utility of dynamic typing and higher order functions for some programming tasks. (Even if it is deemed inappropriate for production code.)


  • Windows Service Hosting WCF Objects over SSL (https) - Custom JSON Error Handling Doesn't Work

    - by bpatrick100
    I will first show the code that works in a non-ssl (http) environment. This code uses a custom json error handler, and all errors thrown, do get bubbled up to the client javascript (ajax). // Create webservice endpoint WebHttpBinding binding = new WebHttpBinding(); ServiceEndpoint serviceEndPoint = new ServiceEndpoint(ContractDescription.GetContract(Type.GetType(svcHost.serviceContract + ", " + svcHost.assemblyName)), binding, new EndpointAddress(svcHost.hostUrl)); // Add exception handler serviceEndPoint.Behaviors.Add(new FaultingWebHttpBehavior()); // Create host and add webservice endpoint WebServiceHost webServiceHost = new WebServiceHost(svcHost.obj, new Uri(svcHost.hostUrl)); webServiceHost.Description.Endpoints.Add(serviceEndPoint); webServiceHost.Open(); I'll also show you what the FaultingWebHttpBehavior class looks like: public class FaultingWebHttpBehavior : WebHttpBehavior { public FaultingWebHttpBehavior() { } protected override void AddServerErrorHandlers(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { endpointDispatcher.ChannelDispatcher.ErrorHandlers.Clear(); endpointDispatcher.ChannelDispatcher.ErrorHandlers.Add(new ErrorHandler()); } public class ErrorHandler : IErrorHandler { public bool HandleError(Exception error) { return true; } public void ProvideFault(Exception error, MessageVersion version, ref Message fault) { // Build an object to return a json serialized exception GeneralFault generalFault = new GeneralFault(); generalFault.BaseType = "Exception"; generalFault.Type = error.GetType().ToString(); generalFault.Message = error.Message; // Create the fault object to return to the client fault = Message.CreateMessage(version, "", generalFault, new DataContractJsonSerializer(typeof(GeneralFault))); WebBodyFormatMessageProperty wbf = new WebBodyFormatMessageProperty(WebContentFormat.Json); fault.Properties.Add(WebBodyFormatMessageProperty.Name, wbf); } } } [DataContract] public class GeneralFault { [DataMember] public string BaseType; [DataMember] public string Type; [DataMember] public string Message; } The AddServerErrorHandlers() method gets called automatically, once webServiceHost.Open() gets called. This sets up the custom json error handler, and life is good :-) The problem comes, when we switch to and SSL (https) environment. I'll now show you endpoint creation code for SSL: // Create webservice endpoint WebHttpBinding binding = new WebHttpBinding(); ServiceEndpoint serviceEndPoint = new ServiceEndpoint(ContractDescription.GetContract(Type.GetType(svcHost.serviceContract + ", " + svcHost.assemblyName)), binding, new EndpointAddress(svcHost.hostUrl)); // This exception handler code below (FaultingWebHttpBehavior) doesn't work with SSL communication for some reason, need to resarch... // Add exception handler serviceEndPoint.Behaviors.Add(new FaultingWebHttpBehavior()); //Add Https Endpoint WebServiceHost webServiceHost = new WebServiceHost(svcHost.obj, new Uri(svcHost.hostUrl)); binding.Security.Mode = WebHttpSecurityMode.Transport; binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None; webServiceHost.AddServiceEndpoint(svcHost.serviceContract, binding, string.Empty); Now, with this SSL endpoint code, the service starts up correctly, and wcf hosted objects can be communicated with just fine via client javascript. However, the custom error handler doesn't work. The reason is, the AddServerErrorHandlers() method never gets called when webServiceHost.Open() is run. 
    So, can anyone tell me what is wrong with this picture? And why is AddServerErrorHandlers() not getting called automatically, as it does when I'm using non-SSL endpoints? Thanks!
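    One detail worth noting: in the SSL version, the hand-built serviceEndPoint (the one carrying FaultingWebHttpBehavior) is never added to the host; the host dispatches on the endpoint created by AddServiceEndpoint(), which has no custom behavior attached. Below is a minimal, untested sketch of that idea - it assumes AddServiceEndpoint() returns the endpoint the host will actually use, and svcHost is the poster's own helper object:

        // Sketch only (untested). Namespaces involved: System.ServiceModel,
        // System.ServiceModel.Description, System.ServiceModel.Web.
        // svcHost is the poster's helper object and is assumed here.
        WebHttpBinding binding = new WebHttpBinding();
        binding.Security.Mode = WebHttpSecurityMode.Transport;
        binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;

        WebServiceHost webServiceHost = new WebServiceHost(svcHost.obj, new Uri(svcHost.hostUrl));

        // AddServiceEndpoint returns the ServiceEndpoint the host will dispatch on,
        // so the custom behavior is attached to that endpoint...
        ServiceEndpoint sslEndpoint =
            webServiceHost.AddServiceEndpoint(svcHost.serviceContract, binding, string.Empty);
        sslEndpoint.Behaviors.Add(new FaultingWebHttpBehavior());

        // ...which should cause AddServerErrorHandlers() to run when the host opens.
        webServiceHost.Open();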

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Installation

    - by Stuart Brierley
    I have previously detailed the installation of MySQL, the configuration of MySQL and the installation of the ODBC Data Connector for MySQL. The reason I needed to install and configure these was to provide a test environment for a BizTalk Server 2009 solution I am working on, where BizTalk will be querying and populating a MySQL database. To do this I then needed to install and add the Community ODBC adapter from Two Connect: "The Community BizTalk Adapter for ODBC is based on the code that was first made available on GotDotNet a few years ago. TwoConnect has refreshed this code, added an installer, and tested it against the latest BizTalk editions. We are releasing the updates back to the BizTalk developer, user and partner community as part of our ongoing community initiatives. This is the second adapter package that TwoConnect makes available to the community, after the very successful release of the BizTalk WSE 3 adapter a couple of years ago. This adapter is useful in all ODBC-based integration scenarios. The following are the new features added and fixes made to the old code base on GotDotNet." Detailed below are the installation instructions for this adapter. Downloading and running the installer will load up the splash screen. Next you need to select the installation location for the adapter. You then need to confirm the installation, following which you will be shown the installation progress. Assuming all has gone well, you should see the installation complete screen. Once the installation has completed successfully, you will then need to add the adapter to your BizTalk Server. To do this, open the BizTalk Administration console, expand Platform Settings, right-click on Adapters and then select New\Adapter. You should then be able to select the ODBC adapter and choose the display name for the adapter. This adapter will then be shown in the BizTalk Administration console. Next I will be looking at using the ODBC adapter when generating schemas, creating a receive port and creating a send port.

    Read the article

  • Google Analytics setting cookies on static content despite being on entirely separate domain

    - by Donald Jenkins
    I recently decided to comply with the YSlow recommendation that static content is hosted on a cookieless domain. As I already use the root of my domain (donaldjenkins.com) to host my website—on which Google Analytics sets a few cookies—that meant I had to move the CNAME URL for the CDN serving the static files from cdn.donaldjenkins.com to an entirely separate, dedicated domain. I purchased cdn.dj (yes, it's a real Djibouti domain name), hosted the files on the root (which contains nothing else, other than a robots.txt file) and set a CNAME of e.cdn.dj for the CDN. This setup works, but I was rather surprised to find that YSlow was still flagging the static files for not being cookie-free: here's a screenshot:

    The cdn.dj domain was new, and was never used for anything other than hosting these static files. Running HttpFox on the site shows the _utma and _utmz Google Analytics cookies are being set on the static files listed above—despite their being hosted on an entirely separate, dedicated domain. Here's my Google Analytics code:

        //Google Analytics tracking code
        var _gaq=[['_setAccount','UA-5245947-5'],['_trackPageview']];
        (function(d,t){var g=d.createElement(t),s=d.getElementsByTagName(t)[0];
        g.src=('https:'==location.protocol?'//ssl':'//www')+'.google-analytics.com/ga.js';
        s.parentNode.insertBefore(g,s)}(document,'script'));
        // [END] Google Analytics tracking code

    I'm not obsessing about this issue—I know it's not really affecting server performance—but I'd like to just understand what is causing it not to go away...

    Read the article

  • Developers are strange

    - by DavidWimbush
    Why do developers always use the GUI tools in SQL Server? I've always found this irritating and just vaguely assumed it's because they aren't familiar with SQL syntax. But when you think about it, it's a genuine puzzle. Developers type code all day - really heavy code too, like generics, lambda functions and extension methods. They (thankfully) scorn the Visual Studio stuff where you drag a table onto the class and it pastes in lots of code to query the table into a DataSet or something. But when they want to add a column to a table, without fail they dive into the graphical table designer. And half the time the script it generates does horrible things like copy the table to another one with the new column, delete the old table, and rename the new table. Which is fine if your users don't care about uptime. Is ALTER TABLE ADD <column definition> really that hard? I just don't get it.

    Read the article

  • Autoscaling in a modern world… Part 1

    - by Steve Loethen
    It has been a while since I have had time to sit down and blog. I need to make sure I take the time; it helps me to focus on technology and not let the administrivia keep me from doing the things I love. I have been focusing on the cloud for the last couple of years, specifically the PaaS platform from Microsoft called Azure. Time to dig in. I wanted to explore autoscaling. Autoscaling is not a native part of Azure. The platform has the needed connection points: you can write code that looks at the health and performance of your application components and reacts to needed scaling changes. But that means you have to write all the code. Luckily, an add-on to the Enterprise Library provides a lot of code that gets you a long way towards being able to autoscale without having to start from scratch. The tool set is primarily composed of an Autoscaler object that you need to host. This object, when hosted and configured, looks at the performance criteria you specify and adjusts your application based on your needs. Sounds perfect. I started with a set of HOLs that gave me a good basis to understand the mechanics. I worked through labs 1 and 2 just to get the feel, but let's start our saga at the end of lab 3. Lab 3 ends with a web application hosted in Azure and a console app running on premise. The web app has a few buttons on it. One set adds messages to a queue, another removes them. A second set of buttons drives processor utilization to 100%. If you want to guess, a safe bet is that the Autoscaler is configured to react to a queue that has filled up or high CPU usage. We will continue our saga in the next post…
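    For orientation, hosting the Autoscaler from the Enterprise Library Autoscaling Application Block (WASABi) comes down to resolving it from the container and calling Start(). A rough sketch, written from memory and assuming the rules store and service information store are already set up in configuration (the class name AutoscalerHost is invented for illustration):

        // Rough sketch of hosting the WASABi Autoscaler in a console app or worker role.
        // Assumes the Autoscaling Application Block is referenced and configured.
        using System;
        using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
        using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling;

        class AutoscalerHost
        {
            static void Main()
            {
                // Resolve the Autoscaler; it reads the rules store and
                // service information store named in the application config.
                Autoscaler autoscaler =
                    EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();

                autoscaler.Start();   // begin evaluating constraint and reactive rules
                Console.WriteLine("Autoscaler running; press Enter to stop.");
                Console.ReadLine();
                autoscaler.Stop();    // stop rule evaluation before shutting down
            }
        }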

    Read the article

  • Silverlight Cream for April 23, 2010 -- #845

    - by Dave Campbell
    In this Issue: Jason Allor, Bill Reiss, Mike Snow, Tim Heuer, John Papa, Jeremy Likness, and Dave Campbell.

    Shoutouts:
    You saw it at MIX10 and DevConnections... now you can give it a dance, John Papa announced eBay Simple Lister Beta Now Available.
    Mike Snow posted some info about and a link to his new Flickr/Bing/Google High End Image Viewer and he's looking for feedback.

    From SilverlightCream.com:
    Hierarchical Data Trees With A Custom DataSource - Jason Allor is rounding out a series here in his new blog (bookmark it), and he's created his own custom HierarchicalDataSource class for use with the TreeView.
    Space Rocks game step 11: Start level logic - Bill Reiss has Episode 11 up in his Space Rocks game ... working on NewGame and start level logic.
    Silverlight Tip of the Day #3 – Mouse Right Clicks - Mike Snow has Tip 3 up ... about handling right-mouse clicks in Silverlight 4 -- oh yeah, we got right mouse now ... grab Mike's project to check it out.
    Silverlight 4 enables Authorization header modification - Tim Heuer talks about the ability to modify the Authorization header in network calls with Silverlight 4. He gives not only the quick-and-dirty of how to use it, but has some good examples, code, and code results for show and tell.
    WCF RIA Services - Hands On Lab - John Papa built a bookstore app in roughly 10 minutes in the keynote at DevConnections. He now has a tutorial on doing just that plus all the code up.
    Transactions with MVVM - Not strictly Silverlight (or WPF), but Jeremy Likness has an interesting article up on MVVM and transaction processing. Read the post then grab his helper class.
    Your First Windows Phone 7 Application - As with the First Silverlight App a couple weeks ago, if you've got any WP7 experience at all, just keep going... this is for folks that have not looked at it yet, have not downloaded anything... oh, and it's by Dave Campbell.

    Stay in the 'Light!

    Read the article

  • Don't Use "Static" in C#?

    - by Joshiatto
    I submitted an application I wrote to some other architects for code review. One of them almost immediately wrote me back and said "Don't use "static". You can't write automated tests with static classes and methods. "Static" is to be avoided." I checked and fully 1/4 of my classes are marked "static". I use static when I am not going to create an instance of a class because the class is a single global class used throughout the code. He went on to mention mocking and IoC/DI techniques that can't be used with static code. He says it is unfortunate when 3rd-party libraries are static because of their untestability. Is this other architect correct? Update: here is an example. APIManager - this class keeps dictionaries of 3rd-party APIs I am calling along with the next allowed time. It enforces API usage limits that a lot of 3rd parties have in their terms of service. I use it anywhere I am calling a 3rd-party service by calling Thread.Sleep(APIManager.GetWait("ProviderXYZ")); before making the call. Everything in here is thread safe and it works great with the TPL in C#.
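    To make the testability concern concrete: the usual alternative to a static APIManager is to put the same behaviour behind an instance and an interface, so a test can substitute a stub. The names below are invented for illustration; this is a sketch of the general pattern, not the poster's actual code:

        using System;
        using System.Collections.Generic;
        using System.Threading;

        // Hypothetical instance-based stand-in for the static APIManager described above.
        public interface IApiThrottle
        {
            TimeSpan GetWait(string provider);
        }

        public class ApiThrottle : IApiThrottle
        {
            private readonly Dictionary<string, DateTime> nextAllowed = new Dictionary<string, DateTime>();
            private readonly object sync = new object();

            public TimeSpan GetWait(string provider)
            {
                lock (sync)
                {
                    DateTime next;
                    if (!nextAllowed.TryGetValue(provider, out next) || next <= DateTime.UtcNow)
                        return TimeSpan.Zero;
                    return next - DateTime.UtcNow;
                }
            }
        }

        // Callers take the interface, so a unit test can inject a stub that
        // returns TimeSpan.Zero instead of waiting for real.
        public class ProviderClient
        {
            private readonly IApiThrottle throttle;

            public ProviderClient(IApiThrottle throttle)
            {
                this.throttle = throttle;
            }

            public void CallProvider(string provider)
            {
                Thread.Sleep(throttle.GetWait(provider));
                // ... make the actual 3rd-party call here ...
            }
        }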

    Read the article

  • Has "Killer Game Programming in Java" by Andrew Davison been rendered useless? What new tutorial should I follow? [duplicate]

    - by BDillan
    This question already has an answer here: Killer Game Programming [Java 3D] outdated? (5 answers)

    It's pathetic that the book was written in 2005, not so long ago, when the Java 3D 1.31 API was released. After downloading the source code from the book's website and bringing it into my workspace, it can't run (the 3D shooter, Ch. 23). I downloaded the Java 3D 1.5 API, installed it, and went back to my source code - javax.media etc. - and EVERY imported class within the game simply couldn't be resolved... they are all outdated. Please do not say it's meant as a positive curve for your game development skills. No - I got the book to understand how to code 3D textures, particle systems and games. Without seeing results, how can I learn? If there is still hope for this book, please tell me. Otherwise, what other books match this one in game programming? This one was very well organized, documented and long. I am not looking for just any 3D game programming book on Java, rather one that has the same reputation as this one but is a bit newer.

    Read the article

  • What CI server and Configuration Management tools should I use?

    - by Bera
    Hi! What CI server and configuration management tools should I use together for truly sustainable development and deployment maintenance? There isn't a de facto sustainable Rails environment, is there? Some assumptions:
    • version control ok - git (de facto tool)
    • test framework ok - whatever (rspec is my choice)
    • code coverage and analysis ok - whatever (metric-fu, for example)
    • server stack ok - (Passenger, for example)
    • issue tracker (RedMine)
    • etc. ...
    I want to play with the Integrity and Moonshine projects; for me that's a good place to begin, isn't it? What do you think about this? Thanks, Bruno

    Read the article

  • Two Copies of Pete Brown's "Silverlight 5 In Action" to Give Away

    - by Dave Campbell
    Yes... you read that correctly... I have two copies of Pete Brown's excellent book "Silverlight 5 In Action" to give away... if you're not familiar with Pete's book, here is a short synopsis for a large book: Silverlight 5 in Action teaches you how to build desktop-quality applications you can deploy on the web. Beginners will appreciate the progression from simple examples to full applications that employ good design and coding practices. Seasoned .NET developers will love how the sample code embraces and extends what they already know. As with other give-aways I've done on my blog, rather than me trying to pick the two most worthy people of all submittals, I'm going to randomly select 2 entries from those that are submitted.

    Email address for submittals: I have a special email address for submittals: mailto:[email protected]?Subject=Giveaway.

    Deadline for submittals: I will take submittals dated from the time this post hits until midnight Sunday night, June 17, 2012 - Arizona time. That means sometime Monday morning, June 18th, I will announce the winners. Send in an email and good luck... it's a great book!

    But wait, there's more! If you don't want to wait until next Tuesday to get into Pete's book, or you don't figure you're lucky enough to get one of the two I'm giving away, I also have a 39% off discount code for "Silverlight 5 In Action" if used at Manning.com! Just order your book online and use the discount code 12s5sc, and you'll get the book on its way immediately. Either way you go... you won't be disappointed. I've been reading this as it goes and it is a treasure trove of information. Grab your copy, and Stay in the 'Light!

    Read the article
