Search Results

Search found 37426 results on 1498 pages for 'simple talk editorial team'.


  • .NET Reflector Pro T-shirt contest - and the winner is...

    - by Laila
    Three weeks ago, I kicked off a T-shirt design contest. We've been eagerly poring over the results and today, it's finally announcement time! Although many of you raced to design some great t-shirts for us, we ended up with a clear winner who came up with a nice design and an original slogan that accurately represents what .NET Reflector Pro lets you do: decompile and debug C# and VB.NET code. So, the winner is... Mandeep Sangha! Mandeep sent us the following awesome design via the Twitter account, mss_10: We liked the combination of detective and superhero elements through the magnifying glass and the slogan. Batman (possibly the most eminent of detective-superheroes?) would be proud to wear this under his suit. Mandeep will become the happy owner of a free copy of .NET Reflector Pro and an exciting box of Red Gate goodies... as well as a copy of their very own t-shirt once it's been brought to life by our printing shop! The t-shirts will bear the name of their designer, and will be made available at .NET developer events around the world, such as conferences, tradeshows and user group events. Congratulations, Mandeep! We'll be in touch to sort out the details of your prizes. But that wasn't the only great design we received. We chose three runners-up as well: Sam Beauvois: http://twitpic.com/1vvsi9 Sherwin Rice: http://www.greenwaytechno.com/img/tee-1.png Mathieu Grétry: http://blog.section9.be/public/tshirt_reflector_01.png Thanks to you all for taking part in the contest. You'll all receive a free license for .NET Reflector Pro! We'll get in touch with you individually through twitter, so that we can get you your prizes. Keep an eye out for this T-shirt - it'll soon be making its way to an event near you!

    Read the article

  • Deploying Reports using the ReportingServices2005 Class and the RS Utility

    Much of the routine administration of Reporting Services (SSRS), such as the deployment of RDL reports, can be automated by using the ReportingServices2005 class library and web services. To make things easier, Microsoft supply the RS utility to run Visual Basic code as a script. It is an intriguing system, with a lot of potential, as Greg Larsen explains.
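
    At the heart of such a deployment script is the web service's CreateReport call. Below is a minimal, illustrative C# sketch of the same idea, assuming a proxy class generated from the ReportService2005.asmx endpoint; the server URL, folder and file names are placeholders rather than anything taken from the article.

        using System;
        using System.IO;

        // Sketch only: assumes a ReportingService2005 proxy generated from the
        // ReportService2005.asmx WSDL (wsdl.exe or "Add Web Reference").
        class ReportDeployer
        {
            static void Main()
            {
                var rs = new ReportingService2005
                {
                    Url = "http://myserver/ReportServer/ReportService2005.asmx",
                    Credentials = System.Net.CredentialCache.DefaultCredentials
                };

                // Read the RDL definition from disk and publish it to a target folder.
                byte[] definition = File.ReadAllBytes(@"C:\Reports\SalesSummary.rdl");
                Warning[] warnings = rs.CreateReport("SalesSummary", "/Sales", true, definition, null);

                // CreateReport returns any warnings raised while validating the RDL.
                if (warnings != null)
                    foreach (Warning w in warnings)
                        Console.WriteLine(w.Message);
            }
        }

    A VB.NET script run through the RS utility ends up making essentially the same web service calls as this sketch.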

    Read the article

  • Good questions to ask a potential new boss?

    - by David Johnstone
    I first asked this question on Stack Overflow, but it turns out this is a better place for it. Imagine you were working as a software developer. Imagine that the manager of your team leaves and your company is looking for a replacement. Imagine that as part of the hiring process you had the opportunity to talk with him. You are not the only person doing an interview, and while it is not ultimately your decision whether or not to hire him, you do have an influence. What questions would you ask? What would you talk with him about?

    Read the article

  • Unit Testing TSQL

    - by Grant Fritchey
    I went through a period of time where I spent a lot of effort figuring out how to set up unit tests for TSQL. It wasn't easy. There are a few tools out there that help, but mostly it involves lots of programming. Well, not as much as before. Thanks to the latest Down Tools Week at Red Gate, a new utility has been built and released into the wild, SQL Test. Like a lot of the new tools coming out of Red Gate these days, this one is directly integrated into SSMS, which means you're working where you're comfortable and where you already have lots of tools at your disposal. After the install, when you launch SSMS and get connected, you're prompted to install the tSQLt example database. Go for it. It's a quick way to see how the tool works. I'd suggest using it. It gives you a quick leg up. The concepts are pretty straightforward. There are a series of CLR commands that you use to configure a test and the test assertions. In between you're calling TSQL, either calls to your structure, queries, or stored procedures. They already have the one thing that I always found wanting in database tests, a way to compare tables of results. I also like the ability to create a dummy copy of tables for the tests. It lets you control structures and behaviors so that the tests are more focused. One of the issues I always ran into with the other testing tools is that setting up the tests might require potentially destructive changes to the structure of the database (dropping FKs, etc.) which added lots of time and effort to setting up the tests, making testing more difficult, and therefore, less useful. Functionally, this is pretty similar to the Visual Studio tests and TSQLUnit tests that I used to use. The primary improvement over the Visual Studio tests is that I'm working in SSMS instead of Visual Studio. The primary improvement over TSQLUnit is the SQL Test interface itself. A lot of the functionality is the same, but having a sweet little tool to manage & run the tests from makes a huge difference. Oh, and don't worry. You can still run these tests directly from TSQL too, so automation has not gone away. I'm still thinking about how I'd use this in a dev environment where I also had source control to fret about. That might be another blog post right there. I'm just getting started with SQL Test, so this is the first of several blog posts & videos. Watch this space. Try the tool.
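
    Because the tests themselves are just stored procedures in the database, they are easy to drive from a build or CI process as well. Here is a rough C# sketch of that idea (mine, not from Grant's post); the connection string is a placeholder, and it assumes a database that already has the tSQLt framework and some tests installed. tSQLt.RunAll raises an error when any test fails, which is what makes it useful for automation.

        using System;
        using System.Data.SqlClient;

        // Sketch: run every tSQLt test in the database as part of a build step.
        class RunDatabaseTests
        {
            static int Main()
            {
                // Placeholder connection string - point it at the database under test.
                var connectionString = "Server=localhost;Database=MyDbWithTests;Integrated Security=true";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("EXEC tSQLt.RunAll;", conn))
                {
                    cmd.CommandTimeout = 600; // large suites can take a while
                    conn.Open();
                    try
                    {
                        cmd.ExecuteNonQuery();
                        Console.WriteLine("All tSQLt tests passed.");
                        return 0;
                    }
                    catch (SqlException ex)
                    {
                        // tSQLt reports failing tests by raising an error containing the failure summary.
                        Console.WriteLine("Test failures:" + Environment.NewLine + ex.Message);
                        return 1;
                    }
                }
            }
        }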

    Read the article

  • .NET Demon 1.0 Released

    - by theo.spears
    Today we're officially releasing version 1.0 of .NET Demon, the Visual Studio Extension Alex Davies and I have been working on for the last 6 months. There have been beta versions available for a while, but we have now released the first "official" version and made it available to purchase. If you haven't yet tried the tool, it's all about reducing the time between when you write a line of code and when you are able to try it out so you don't have to wait: Continuous compilation We use spare CPU cycles on your machine to compile your code in the background when you make changes, so assemblies are up to date whenever you want to run them. Some clever logic means we only recompile code which may have been affected by your changes. Continuous save .NET Demon can perform background saving, so you don't lose any work in case of crashes or power failures, and are less likely to forget to commit changed files. Continuous testing (Experimental) The testing tool in .NET Demon watches which code you change in your solution, and automatically reruns tests which are impacted, so you learn about any breaking changes as quickly as possible. It also gives you inline test coverage information inside Visual Studio. Continuous testing is still experimental - it will work fine in many cases, but we know it's not yet perfect. Releasing version 1.0 doesn't mean we're pausing development or pushing out improvements. We will still be regularly providing new versions with improved functionality and fixes for any bugs people come across. Visit the .NET Demon product page to download

    Read the article

  • Thank you to all entrants! Finalist announcement coming soon...

    - by Rebecca Amos
    We had a fantastic response to this year's Exceptional DBA Awards. A big thank you to everyone who took the time and effort to make a nomination - it's great to see so many DBAs being appreciated for the hard work that they do. We're now busy collating the answers to send off to the Exceptional DBA judges. They'll pick their five finalists, which we'll be announcing in a few weeks’ time. So watch this space for further details. In the meantime, don't forget you can still download your free resources from the Exceptional DBA Award website. You can use them for your own career and personal development; pass them on to a great DBA you know, or to start planning your entry for next year!

    Read the article

  • Exchange Server 2010 Compliance Capabilities

    Exchange 2010 now has more features to assist System Administrators with compliance, but are these features sufficient? Are they easy to use? Michael Smith gives a roundup of these new features, and points out where they are useful, and where they fall short of providing a complete solution to the compliance task.

    Read the article

  • The Fast Guide to Application Profiling

    In this sample chapter from his recently released book (co-authored with Paul Glavich), Chris Farrell gives us a fast overview of performance profiling, memory profiling, profiling tools, and in fact everything we need to know when it comes to profiling our applications. This is a great first step, and The Complete Guide to .NET Performance Testing and Optimization is crammed with even more indispensable knowledge.

    Read the article

  • Planning for Disaster

    There is a certain paradox in being advised to expect the unexpected, but the DBA must plan and prepare in advance to protect their organisation's data assets in the event of an unexpected crisis, and return them to normal operating conditions. Minimising downtime in such circumstances should be the aim of every effective DBA. To plan for recovery, it pays to have the mindset of a pessimist.

    Read the article

  • Modifying Contiguous Time Periods in a History Table

    Alex Kuznetsov is credited with a clever technique for creating a history table in SQL that is designed to store contiguous time periods and check that these time periods really are contiguous, using nothing but constraints. This is now increasingly useful with the DATE data type in SQL Server. The modification of data in this type of table isn't always entirely intuitive, so Alex is on hand to give a brief explanation of how to do it.

    Read the article

  • Virtual Machine Storage Provisioning and best practices

    If you're using Virtualization technology, then at some point you'll have run out of (or will run out of) virtual disk space and had to provision extra storage; are you confident that you know how to do that? Sean Duffy makes sure you're doing it right, sharing his recommendations and tips in this step-by-step guide to Virtual Machine Storage provisioning for VMware. Follow this advice, and you'll be a Virtualization Veteran in no time.

    Read the article

  • Finding Stuff in SQL Server Database DDL

    You'd have thought that nothing would be easier than using SQL Server Management Studio (SSMS) for searching through the DDL for both the names and definitions of the structural metadata of your databases, for the occurrence of a particular string of letters. Not so easy, it turns out, though Phil Factor is able to come up with various methods for various purposes.

    Read the article

  • Anatomy of a .NET Assembly - CLR metadata 2

    - by Simon Cooper
    Before we look any further at the CLR metadata, we need a quick diversion to understand how the metadata is actually stored. Encoding table information As an example, we'll have a look at a row in the TypeDef table. According to the spec, each TypeDef consists of the following: Flags specifying various properties of the class, including visibility. The name of the type. The namespace of the type. What type this type extends. The field list of this type. The method list of this type. How is all this data actually represented? Offset & RID encoding Most assemblies don't need to use a 4 byte value to specify heap offsets and RIDs everywhere, however we can't hard-code every offset and RID to be 2 bytes long as there could conceivably be more than 65535 items in a heap or more than 65535 fields or types defined in an assembly. So heap offsets and RIDs are only represented in the full 4 bytes if it is required; in the header information at the top of the #~ stream are 3 bits indicating if the #Strings, #GUID, or #Blob heaps use 2 or 4 bytes (the #US stream is not accessed from metadata), and the rowcount of each table. If the rowcount for a particular table is greater than 65535 then all RIDs referencing that table throughout the metadata use 4 bytes, else only 2 bytes are used. Coded tokens Not every field in a table row references a single predefined table. For example, in the TypeDef extends field, a type can extend another TypeDef (a type in the same assembly), a TypeRef (a type in a different assembly), or a TypeSpec (an instantiation of a generic type). A token would have to be used to let us specify the table along with the RID. Tokens are always 4 bytes long; again, this is rather wasteful of space. Cutting the RID down to 2 bytes would make each token 3 bytes long, which isn't really an optimum size for computers to read from memory or disk. However, every use of a token in the metadata tables can only point to a limited subset of the metadata tables. For the extends field, we only need to be able to specify one of 3 tables, which we can do using 2 bits: 0x0: TypeDef 0x1: TypeRef 0x2: TypeSpec We could therefore compress the 4-byte token that would otherwise be needed into a coded token of type TypeDefOrRef. For each type of coded token, the least significant bits encode the table the token points to, and the rest of the bits encode the RID within that table. We can work out whether each type of coded token needs 2 or 4 bytes to represent it by working out whether the maximum RID of every table that the coded token type can point to will fit in the space available. The space available for the RID depends on the type of coded token; a TypeOrMethodDef coded token only needs 1 bit to specify the table, leaving 15 bits available for the RID before a 4-byte representation is needed, whereas a HasCustomAttribute coded token can point to one of 18 different tables, and so needs 5 bits to specify the table, only leaving 11 bits for the RID before 4 bytes are needed to represent that coded token type. For example, a 2-byte TypeDefOrRef coded token with the value 0x0321 has the following bit pattern: 0 3 2 1 0000 0011 0010 0001 The first two bits specify the table - TypeRef; the other bits specify the RID. Because we've used the first two bits, we've got to shift everything along two bits: 000000 1100 1000 This gives us a RID of 0xc8. 
If any one of the TypeDef, TypeRef or TypeSpec tables had more than 16383 rows (2^14 - 1), then 4 bytes would need to be used to represent all TypeDefOrRef coded tokens throughout the metadata tables. Lists The third representation we need to consider is 1-to-many references; each TypeDef refers to a list of FieldDef and MethodDef belonging to that type. If we were to specify every FieldDef and MethodDef individually then each TypeDef would be very large and a variable size, which isn't ideal. There is a way of specifying a list of references without explicitly specifying every item; if we order the MethodDef and FieldDef tables by the owning type, then the field list and method list in a TypeDef only have to be a single RID pointing at the first FieldDef or MethodDef belonging to that type; the end of the list can be inferred by the field list and method list RIDs of the next row in the TypeDef table. Going back to the TypeDef If we have a look back at the definition of a TypeDef, we end up with the following representation for each row: Flags - always 4 bytes Name - a #Strings heap offset. Namespace - a #Strings heap offset. Extends - a TypeDefOrRef coded token. FieldList - a single RID to the FieldDef table. MethodList - a single RID to the MethodDef table. So, depending on the number of entries in the heaps and tables within the assembly, the rows in the TypeDef table can be as small as 14 bytes, or as large as 24 bytes. Now that we've had a look at how information is encoded within the metadata tables, in the next post we can see how they are arranged on disk.
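
    To make the bit-twiddling concrete, here is a small C# sketch (mine, not Simon's) that decodes a 2-byte TypeDefOrRef coded token exactly as in the 0x0321 example above: mask off the low two bits to find the table, then shift them away to recover the RID.

        using System;

        // Decode a 2-byte TypeDefOrRef coded token: the low 2 bits select the table,
        // the remaining 14 bits are the RID within that table.
        class CodedTokenDemo
        {
            static readonly string[] TypeDefOrRefTables = { "TypeDef", "TypeRef", "TypeSpec" };

            static void Decode(ushort codedToken)
            {
                int tag = codedToken & 0x3; // 0x0 TypeDef, 0x1 TypeRef, 0x2 TypeSpec
                int rid = codedToken >> 2;  // shift the tag bits away to leave the RID
                Console.WriteLine("0x{0:X4} -> {1}, RID 0x{2:X}", codedToken, TypeDefOrRefTables[tag], rid);
            }

            static void Main()
            {
                Decode(0x0321); // prints: 0x0321 -> TypeRef, RID 0xC8
            }
        }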

    Read the article

  • Databases and Beer

    - by Johnm
    It is a bit of a no-brainer: Include the word "beer" in a subject line of an e-mail or blog post title and you can be certain that it will be read. While there are times this practice might be a ploy to increase readership, it is not the case for this blog post. There is inspiration that can be drawn from other industries which we, as database professionals, can apply in our own. In this post I will highlight one of my favorite participants of the brewing industry. The Boston Beer Company started in the 1970s in Boston, Massachusetts. Others may be more familiar with this company through their Samuel Adams Boston Lager and other various seasonal beers. I am continually inspired by their commitment to mastery of the brewing process, which they evangelize frequently in their commercials. They also are continually in pursuit of pushing the boundaries of beer as we know it while working within traditional constraints. A recent example of this is their collaboration with Weihenstephan Brewery of Munich, Germany to produce the soon to be released Infinium beer. This beer, while brewed as an ale, is touted as something closer to Champagne - all while complying with the Reinheitsgebot. The Reinheitsgebot is also known as the "German Beer Purity Law", which originated in 1516. This law states that beer is to consist of water, barley, hops and yeast. That's it. Quite a limiting constraint indeed. And yet, The Boston Beer Company pushed forward. Much like the process of brewing, the discipline of database design and architecture is one that is continually in process and driven by the pursuit of mastery. While we do not have purity laws to constrain us, we have many other types: best practices, company policies, government regulations, security and budgets. Through our fellow comrades, we discuss the challenges and constraints in which we operate. We boil down the principles and theories that define our profession. We reassemble these into something that is complementary to the business needs that we must fulfill. As a result, it is not uncommon to see something amazingly innovative in a small business that is pushing the boundaries of their database well beyond its intended state. It is equally common to see innovation in the use of features available in the more advanced editions of databases that are found in large businesses. The tag line for The Boston Beer Company is "Take Pride In Your Beer." I would like to offer an alternative and say "Take Pride In Your Database." So, as you pour your next Boston Lager into a frosted glass, consider those who spend their lives mastering the craft of brewing and strive to inject their spirit into everything that you do as a database professional. Cheers!

    Read the article

  • ReSharper C# Live Template for Dependency Property and Property Change Routed Event Boilerplate Code

    - by Bart Read
    I don't know about you but it took me about 5 seconds to get royally fed up of typing the boilerplate code necessary for creating WPF (and Silverlight) dependency properties and, if you want them, their associated property change routed events. Being a ReSharper user, I wondered if there was any live template for doing this. It turns out there's nothing built in, but there are many examples of templates for creating dependency properties out there on the web, such as this excellent one from Roy...(read more)
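
    For reference, the boilerplate in question looks roughly like the following. This is a hand-written C# sketch rather than the output of Bart's actual template, and the class, property and event names are invented for illustration.

        using System.Windows;

        // Illustrative only: the dependency property plus property-change routed event
        // boilerplate that a live template of this kind would generate for you.
        public class TemperatureGauge : FrameworkElement
        {
            public static readonly DependencyProperty TemperatureProperty =
                DependencyProperty.Register(
                    "Temperature", typeof(double), typeof(TemperatureGauge),
                    new FrameworkPropertyMetadata(0.0, OnTemperatureChanged));

            public double Temperature
            {
                get { return (double)GetValue(TemperatureProperty); }
                set { SetValue(TemperatureProperty, value); }
            }

            // Routed event raised whenever the dependency property changes.
            public static readonly RoutedEvent TemperatureChangedEvent =
                EventManager.RegisterRoutedEvent(
                    "TemperatureChanged", RoutingStrategy.Bubble,
                    typeof(RoutedPropertyChangedEventHandler<double>), typeof(TemperatureGauge));

            public event RoutedPropertyChangedEventHandler<double> TemperatureChanged
            {
                add { AddHandler(TemperatureChangedEvent, value); }
                remove { RemoveHandler(TemperatureChangedEvent, value); }
            }

            private static void OnTemperatureChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                var gauge = (TemperatureGauge)d;
                gauge.RaiseEvent(new RoutedPropertyChangedEventArgs<double>(
                    (double)e.OldValue, (double)e.NewValue, TemperatureChangedEvent));
            }
        }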

    Read the article

  • Converting Encrypted Values

    - by Johnm
    Your database has been protecting sensitive data at rest using the cell-level encryption features of SQL Server for quite some time. The employees in the auditing department have been inviting you to their after-work gatherings and buying you drinks. Thousands of customers implicitly include you in their prayers of thanksgiving as their identities remain safe in your company's database. The cipher text resting snugly in a column of the varbinary data type is great for security; but it can create some interesting challenges when interacting with other data types such as the XML data type. The XML data type is one that is often used as a message type for the Service Broker feature of SQL Server. It also can be an interesting data type to capture for auditing or integrating with external systems. The challenge that cipher text presents is that the need for decryption remains even after it has experienced its XML metamorphosis. Quite an interesting challenge nonetheless; but fear not. There is a solution. To simulate this scenario, we first will want to create a plain text value for us to encrypt. We will do this by creating a variable to store our plain text value: -- set plain text value DECLARE @PlainText NVARCHAR(255); SET @PlainText = 'This is plain text to encrypt'; The next step will be to create a variable that will store the cipher text that is generated from the encryption process. We will populate this variable by using a pre-defined symmetric key and certificate combination: -- encrypt plain text value DECLARE @CipherText VARBINARY(MAX); OPEN SYMMETRIC KEY SymKey DECRYPTION BY CERTIFICATE SymCert WITH PASSWORD='mypassword2010'; SET @CipherText = EncryptByKey ( Key_GUID('SymKey'), @PlainText ); CLOSE ALL SYMMETRIC KEYS; The value of our newly generated cipher text is 0x006E12933CBFB0469F79ABCC79A583--. This will be important as we reference our cipher text later in this post. Our final step in preparing our scenario is to create a table variable to simulate the existence of a table that contains a column used to hold encrypted values. Once this table variable has been created, populate the table variable with the newly generated cipher text: -- capture value in table variable DECLARE @tbl TABLE (EncVal varbinary(MAX)); INSERT INTO @tbl (EncVal) VALUES (@CipherText); We are now ready to experience the challenge of capturing our encrypted column in an XML data type using the FOR XML clause: -- capture set in xml DECLARE @xml XML; SET @xml = (SELECT EncVal FROM @tbl AS MYTABLE FOR XML AUTO, BINARY BASE64, ROOT('root')); If you add the SELECT @XML statement at the end of this portion of the code you will see the contents of the XML data in its raw format: <root> <MYTABLE EncVal="AG4Skzy/sEafeavMeaWDBwEAAACE--" /> </root> Strangely, the value that is captured appears nothing like the value that was created through the encryption process. The result being that when this XML is converted into a readable data set the encrypted value will not be able to be decrypted, even with access to the symmetric key and certificate used to perform the decryption. An immediate thought might be to convert the varbinary data type to either a varchar or nvarchar before creating the XML data. This approach makes good sense.
The code for this might look something like the following: -- capture set in xml DECLARE @xml XML; SET @xml = (SELECT CONVERT(NVARCHAR(MAX),EncVal) AS EncVal FROM @tbl AS MYTABLE FOR XML AUTO, BINARY BASE64, ROOT('root')); However, this results in the following error: Msg 9420, Level 16, State 1, Line 26 XML parsing: line 1, character 37, illegal xml character A quick query that returns CONVERT(NVARCHAR(MAX),EncVal) reveals that the value that is causing the error looks like something off of a genuine Chinese menu. While this situation does present us with one of those spine-tingling, expletive-generating challenges, rest assured that this approach is on the right track. With the addition of the "style" argument to the CONVERT method, our solution is at hand. When dealing with converting varbinary data types we have three styles available to us: - The first is to not include the style parameter, or to use the value of "0". As we see, this style will not work for us. - The second option is to use the value of "1", which will keep our varbinary value including the "0x" prefix. In our case, the value will be 0x006E12933CBFB0469F79ABCC79A583-- - The third option is to use the value of "2", which will chop the "0x" prefix off of our varbinary value. In our case, the value will be 006E12933CBFB0469F79ABCC79A583-- Since we will want to convert this back to varbinary when reading this value from the XML data we will want the "0x" prefix, so we will want to change our code as follows: -- capture set in xml DECLARE @xml XML; SET @xml = (SELECT CONVERT(NVARCHAR(MAX),EncVal,1) AS EncVal FROM @tbl AS MYTABLE FOR XML AUTO, BINARY BASE64, ROOT('root')); Once again, with the inclusion of the SELECT @XML statement at the end of this portion of the code you will see the contents of the XML data in its raw format: <root> <MYTABLE EncVal="0x006E12933CBFB0469F79ABCC79A583--" /> </root> Nice! We are now cooking with gas. To continue our scenario, we will want to parse the XML data into a data set so that we can glean our freshly captured cipher text. Once we have our cipher text snagged we will capture it into a variable so that it can be used during decryption: -- read back xml DECLARE @hdoc INT; DECLARE @EncVal NVARCHAR(MAX); EXEC sp_xml_preparedocument @hDoc OUTPUT, @xml; SELECT @EncVal = EncVal FROM OPENXML (@hdoc, '/root/MYTABLE') WITH ([EncVal] VARBINARY(MAX) '@EncVal'); EXEC sp_xml_removedocument @hDoc; Finally, the decryption of our cipher text using the DECRYPTBYKEYAUTOCERT method and the certificate utilized to perform the encryption earlier in our exercise: SELECT CONVERT(NVARCHAR(MAX), DecryptByKeyAutoCert ( CERT_ID('SymCert'), N'mypassword2010', @EncVal ) ) EncVal; Ah yes, another hurdle presents itself! The decryption produced the value of NULL, which in cryptography means that either you don't have permissions to decrypt the cipher text or something went wrong during the decryption process (ok, sometimes the value is actually NULL; but not in this case). As we see, the @EncVal variable is an nvarchar data type. The third parameter of the DECRYPTBYKEYAUTOCERT method requires a varbinary value.
Therefore we will need to utilize our handy-dandy CONVERT method: SELECT CONVERT(NVARCHAR(MAX), DecryptByKeyAutoCert ( CERT_ID('SymCert'), N'mypassword2010', CONVERT(VARBINARY(MAX),@EncVal) ) ) EncVal; Oh, almost. The result remains NULL despite our conversion to the varbinary data type. This is due to the creation of a varbinary value that does not reflect the actual value of our @EncVal variable, but rather a varbinary conversion of the variable itself. In this case, something like 0x3000780030003000360045003--. Considering the "style" parameter got us past the XML challenge, we will want to consider its power for this challenge as well. Knowing that the value of "1" will provide us with the actual value including the "0x", we will opt to utilize that value in this case: SELECT CONVERT(NVARCHAR(MAX), DecryptByKeyAutoCert ( CERT_ID('SymCert'), N'mypassword2010', CONVERT(VARBINARY(MAX),@EncVal,1) ) ) EncVal; Bingo, we have success! We have discovered what happens with varbinary data when captured as XML data. We have figured out how to make this data useful post-XML-ification. Best of all, we now have a choice of after-work parties now that our very happy client who depends on our XML-based interface invites us for dinner in celebration. All thanks to the effective use of the style parameter.

    Read the article

  • SQL Azure Down - how I got labs.red-gate.com back up

    - by Richard Mitchell
    11:06am - Currently SQL Azure in Western Europe is down. How do I know this? Well, on labs.red-gate.com (my Azure website) I have elmah installed, which started sending me e-mails about connection failures from 10:40am when trying to get the dynamic content from the database (I was too busy playing with my new Eee Pad Transformer to notice immediately). Going to the website confirmed the failure and trying to connect to SQL Azure from SQL Server Management Studio and the Management confirmed bad...(read more)

    Read the article

  • Subterranean IL: Compiling C# exception handlers

    - by Simon Cooper
    An exception handler in C# combines the IL catch and finally exception handling clauses into a single try statement: try { Console.WriteLine("Try block") // ... } catch (IOException) { Console.WriteLine("IOException catch") // ... } catch (Exception e) { Console.WriteLine("Exception catch") // ... } finally { Console.WriteLine("Finally block") // ... } How does this get compiled into IL? Initial implementation If you remember from my earlier post, finally clauses must be specified with their own .try clause. So, for the initial implementation, we take the try/catch/finally, and simply split it up into two .try clauses (I have to use label syntax for this): StartTry: ldstr "Try block" call void [mscorlib]System.Console::WriteLine(string) // ... leave.s End EndTry: StartIOECatch: ldstr "IOException catch" call void [mscorlib]System.Console::WriteLine(string) // ... leave.s End EndIOECatch: StartECatch: ldstr "Exception catch" call void [mscorlib]System.Console::WriteLine(string) // ... leave.s End EndECatch: StartFinally: ldstr "Finally block" call void [mscorlib]System.Console::WriteLine(string) // ... endfinally EndFinally: End: // ... .try StartTry to EndTry catch [mscorlib]System.IO.IOException handler StartIOECatch to EndIOECatch catch [mscorlib]System.Exception handler StartECatch to EndECatch .try StartTry to EndTry finally handler StartFinally to EndFinally However, the resulting program isn't verifiable, and doesn't run: [IL]: Error: Shared try has finally or fault handler. Nested try blocks What's with the verification error? Well, it's a condition of IL verification that all exception handling regions (try, catch, filter, finally, fault) of a single .try clause have to be completely contained within any outer exception region, and they can't overlap with any other exception handling clause. In other words, IL exception handling clauses must to be representable in the scoped syntax, and in this example, we're overlapping catch and finally clauses. Not only is this example not verifiable, it isn't semantically correct. The finally handler is specified round the .try. What happens if you were able to run this code, and an exception was thrown? Program execution enters top of try block, and exception is thrown within it CLR searches for an exception handler, finds catch Because control flow is leaving .try, finally block is run The catch block is run leave.s End inside the catch handler branches to End label. We're actually running the finally before the catch! What we do about it What we actually need to do is put the catch clauses inside the finally clause, as this will ensure the finally gets executed at the correct time (this time using scoped syntax): .try { .try { ldstr "Try block" call void [mscorlib]System.Console::WriteLine(string) // ... leave.s End } catch [mscorlib]System.IO.IOException { ldstr "IOException catch" call void [mscorlib]System.Console::WriteLine(string) // ... leave.s End } catch [mscorlib]System.Exception { ldstr "Exception catch" call void [mscorlib]System.Console::WriteLine(string) // ... leave.s End } } finally { ldstr "Finally block" call void [mscorlib]System.Console::WriteLine(string) // ... endfinally } End: ret Returning from methods There is a further semantic mismatch that the C# compiler has to deal with; in C#, you are allowed to return from within an exception handling block: public int HandleMethod() { try { // ... return 0; } catch (Exception) { // ... return -1; } } However, you can't ret inside an exception handling block in IL. 
So the C# compiler does a leave.s to a ret outside the exception handling area, loading/storing any return value to a local variable along the way (as leave.s clears the stack): .method public instance int32 HandleMethod() { .locals init ( int32 retVal ) .try { // ... ldc.i4.0 stloc.0 leave.s End } catch [mscorlib]System.Exception { // ... ldc.i4.m1 stloc.0 leave.s End } End: ldloc.0 ret } Conclusion As you can see, the C# compiler has quite a few hoops to jump through to translate C# code into semantically-correct IL, and hides the numerous conditions on IL exception handling blocks from the C# programmer. Next up: catch-all blocks, and how the runtime deals with non-Exception exceptions.

    Read the article

  • Subterranean IL: Generics and array covariance

    - by Simon Cooper
    Arrays in .NET are curious beasts. They are the only built-in collection types in the CLR, and SZ-arrays (single dimension, zero-indexed) have their own commands and IL syntax. One of their stranger properties is they have a kind of built-in covariance long before generic variance was added in .NET 4. However, this causes a subtle but important problem with generics. First of all, we need to briefly recap on array covariance. SZ-array covariance To demonstrate, I'll tweak the classes I introduced in my previous posts: public class IncrementableClass { public int Value; public virtual void Increment(int incrementBy) { Value += incrementBy; } } public class IncrementableClassx2 : IncrementableClass { public override void Increment(int incrementBy) { base.Increment(incrementBy); base.Increment(incrementBy); } } In the CLR, SZ-arrays of reference types are implicitly convertible to arrays of the element's supertypes, all the way up to object (note that this does not apply to value types). That is, an instance of IncrementableClassx2[] can be used wherever a IncrementableClass[] or object[] is required. When an SZ-array could be used in this fashion, a run-time type check is performed when you try to insert an object into the array to make sure you're not trying to insert an instance of IncrementableClass into an IncrementableClassx2[]. This check means that the following code will compile fine but will fail at run-time: IncrementableClass[] array = new IncrementableClassx2[1]; array[0] = new IncrementableClass(); // throws ArrayTypeMismatchException These checks are enforced by the various stelem* and ldelem* il instructions in such a way as to ensure you can't insert a IncrementableClass into a IncrementableClassx2[]. For the rest of this post, however, I'm going to concentrate on the ldelema instruction. ldelema This instruction pops the array index (int32) and array reference (O) off the stack, and pushes a pointer (&) to the corresponding array element. However, unlike the ldelem instruction, the instruction's type argument must match the run-time array type exactly. This is because, once you've got a managed pointer, you can use that pointer to both load and store values in that array element using the ldind* and stind* (load/store indirect) instructions. As the same pointer can be used for both input and output to the array, the type argument to ldelema must be invariant. At the time, this was a perfectly reasonable restriction, and maintained array type-safety within managed code. However, along came generics, and with it the constrained callvirt instruction. So, what happens when we combine array covariance and constrained callvirt? .method public static void CallIncrementArrayValue() { // IncrementableClassx2[] arr = new IncrementableClassx2[1] ldc.i4.1 newarr IncrementableClassx2 // arr[0] = new IncrementableClassx2(); dup newobj instance void IncrementableClassx2::.ctor() ldc.i4.0 stelem.ref // IncrementArrayValue<IncrementableClass>(arr, 0) // here, we're treating an IncrementableClassx2[] as IncrementableClass[] dup ldc.i4.0 call void IncrementArrayValue<class IncrementableClass>(!!0[],int32) // ... ret } .method public static void IncrementArrayValue<(IncrementableClass) T>( !!T[] arr, int32 index) { // arr[index].Increment(1) ldarg.0 ldarg.1 ldelema !!T ldc.i4.1 constrained. 
!!T callvirt instance void IIncrementable::Increment(int32) ret } And the result: Unhandled Exception: System.ArrayTypeMismatchException: Attempted to access an element as a type incompatible with the array. at IncrementArrayValue[T](T[] arr, Int32 index) at CallIncrementArrayValue() Hmm. We're instantiating the generic method as IncrementArrayValue<IncrementableClass>, but passing in an IncrementableClassx2[], hence the ldelema instruction is failing as it's expecting an IncrementableClass[]. On features and feature conflicts What we've got here is a conflict between existing behaviour (ldelema ensuring type safety on covariant arrays) and new behaviour (managed pointers to object references used for every constrained callvirt on generic type instances). And, although this is an edge case, there is no general workaround. The generic method could be hidden behind several layers of assemblies, wrappers and interfaces that make it a requirement to use array covariance when calling the generic method. Furthermore, this will only fail at runtime, whereas compile-time safety is what generics were designed for! The solution is the readonly. prefix instruction. This modifies the ldelema instruction to ignore the exact type check for arrays of reference types, and so it lets us take the address of array elements using a covariant type to the actual run-time type of the array: .method public static void IncrementArrayValue<(IncrementableClass) T>( !!T[] arr, int32 index) { // arr[index].Increment(1) ldarg.0 ldarg.1 readonly. ldelema !!T ldc.i4.1 constrained. !!T callvirt instance void IIncrementable::Increment(int32) ret } But what about type safety? In return for ignoring the type check, the resulting controlled mutability pointer can only be used in the following situations: As the object parameter to ldfld, ldflda, stfld, call and constrained callvirt instructions As the pointer parameter to ldobj or ldind* As the source parameter to cpobj In other words, the only operations allowed are those that read from the pointer; stind* and similar that alter the pointer itself are banned. This ensures that the array element we're pointing to won't be changed to anything untoward, and so type safety within the array is maintained. This is a typical example of the maxim that whenever you add a feature to a program, you have to consider how that feature interacts with every single one of the existing features. Although an edge case, the readonly. prefix instruction ensures that generics and array covariance work together and that compile-time type safety is maintained. Tune in next time for a look at the .ctor generic type constraint, and what it means.

    Read the article

  • The Running Cost of Azure - MSDN Offer

    - by RobbieT
    Richard recently blogged about getting the Red Gate Labs website onto Azure; it's been running awhile now and, as Richard makes sure the cogs are all turning, I've been trying to track the cost. We decided to launch on Windows Azure as both an exercise in using Azure and also getting to grips with hosting stuff in the cloud. If you have an MSDN subscription then you're eligible for an offer which looks pretty great: What the offer amounted to was a small compute instance, a bunch of storage...(read more)

    Read the article

  • SQL Monitor and "The Cloud"

    - by Richard Mitchell
    So, how can we demo this thing? In the beginning there was a product, and it was a good product for the testers had decreed it so, and nobody argues with a tester. But then comes the inevitable question of how can somebody test it out without risk. Red Gate prides itself on the tools being easy for people to trial before they buy, and no cut down trial for you sir, oh no, for you sir only the best will do - a fully functional trial - suits you sir. The problem The problem comes when you get a...(read more)

    Read the article
