Search Results

Search found 44763 results on 1791 pages for 'first responder'.


  • Community Branching

    - by Dane Morgridge
    As some may have noticed, I have taken a liking to Ruby (and Rails in particular) quite a bit recently. This last weekend I spoke at the NYC Code Camp on a comparison of ASP.NET and Rails as well as an intro to Entity Framework talk. I am speaking at RubyNation in April and have submitted to other Ruby conferences around the area, and I am also doing a Rails and MongoDB talk at the Philly Code Camp in April. Before you start to think this is my "I'm leaving .NET" post, let me clarify: it isn't. I am not abandoning .NET, nor do I plan to any time in the near future. I am simply branching out into another community based on a development technology that I very much enjoy. If you look at my twitter bio, you will see that I am into Entity Framework, Ruby on Rails, C++ and ASP.NET MVC, and not necessarily in that order. I know you're probably thinking to yourself that I am crazy, which is probably true on several levels (especially the C++ part). I was actually crazy enough at the NYC Code Camp to show up wearing a Linux t-shirt, presenting with my MacBook Pro on Entity Framework, ASP.NET MVC and Rails. (I did get pelted in the head with candy by Rachel Appel for it though.) At all of the code camps I am submitting to this year, I will be submitting sessions on likely all four topics, and some sessions will be a combination of 2 or more. For example, my "ASP.NET MVC: A Gateway To Rails?" talk touches ASP.NET MVC, Entity Framework Code First and Rails. Simply put (and I talk about this in my MVC & Rails talk), learning and using Rails has made me a better ASP.NET MVC developer. Just one example of this is helper methods. When I started working with ASP.NET MVC, I didn't really want to use helpers and preferred to just use standard html tags, especially where links were concerned. It was just me being stubborn and not really seeing all of the benefit of the helpers. In my defense, coming from WebForms, I wanted to be as bare metal as possible and it seemed at first like a lot of the helpers were an unnecessary abstraction. I took my first look at Rails back in v1 and didn't spend very much time with it, so I dismissed it and went on my merry ASP.NET WebForms way. Then I picked up ASP.NET MVC and grasped the MVC pattern itself much better. After this, I took another look at Rails and everything made sense, so I decided then to learn Rails. (I think it is important for developers to learn new languages and platforms regularly, so it was a natural progression for me.) I wanted to learn it the right way, and when I dug into code, everyone used helpers everywhere for pretty much everything possible. I took some time to dig in and found out how helpful they were, and subsequently realized how awesome they were in ASP.NET MVC also and started using them. In short, I love Rails (and Ruby in general). I also love ASP.NET MVC and Entity Framework, and yes, I still love C++. I have varying degrees of love for them individually at any given moment, and it is likely to shift based on the current project I am working on. I know you're thinking it, so before you ask the question "Which do I use when?", I'm going to give the standard developer answer: it depends. There are a lot of factors that I am not going to even go into that would go into a decision. The most basic question I would ask, though, is: does this project depend on .NET? If it does, then I'd say that ASP.NET MVC is probably going to be the more logical choice, and I am going to leave it at that.
I am working on projects right now in both technologies and I don't see that changing anytime soon (one project even uses both). With all that being said, you'll find me at code camps, conferences and user groups presenting on .NET, Ruby or both, writing about .NET and Ruby and I will likely be blogging on both in the future.  I know of others that have successfully branched out to other communities and with any luck I'll be successful at it too. On a (sorta) side note, I read a post by Justin Etheredge the other day that pretty much sums up my feelings about Ruby as a language.  I highly recommend checking it out: What Is So Great About Ruby?

    Read the article

  • Google Analytics - Unable to get GA Tracking

    - by Pure.Krome
    We've been using GA for a few years with no probs. About 2-3 weeks ago we tried to clean up some of our tracking, and on one of our profiles it's not working anymore (since Oct 10). First some context, then some GA debugging code. 1. Context. We have the following setup: different root domains AND different sub-domains on one of the root domains. www.website.com www.website.com.au www.anotherWebsite.com foo.website.com baa.website.com So what we're doing is the following: each root domain and each sub-domain get their own tracking code. This way we can allow separate people (from outside our company) to access only their own data. E.g. a manager for foo.website.com can only see data related to that domain and not the data on the other domains. We also have a last account which is the SUM of all the domains; this is for us, so we can see total numbers. So to do this, we have two trackers that fire off on the page. The individual accounts all work fine - they seem to be tracking data OK. The 'global' account is not working, and this gives us the Tracking Not Installed error. This has been going on since Oct 10, so the wait 24/48/72 hours thing is waaaaay over. 2. GA Debug code. Installing the GA Debug chrome extension gives the following output. I've tried to hide anything that could be considered secret. UA-XXXXX34-1 == Global account (which isn't working any more). UA-XXXXX34-11 == Specific account for www.website.com _gaq.push processing "_setAccount" for args: "[UA-XXXXX34-1]": ga_debug.js:18 _gaq.push processing "_setDomainName" for args: "[website.com]": ga_debug.js:18 _gaq.push processing "_setAllowLinker" for args: "[true]": ga_debug.js:18 _gaq.push processing "_trackPageview" for args: "[]": ga_debug.js:18 Track Pageview ga_debug.js:18 Tracking beacon sent! utmwv=--snipped-- Account ID : UA-XXXX234-1 Page Title : Some page title Host Name : www.website.com Page : / Referring URL : - Hit ID : 1923583969 Visitor ID : 785310647 Session Count : 51 Session Time - First : Thu Aug 23 2012 15:20:17 GMT 1000 (AUS Eastern Standard Time) Session Time - Last : Mon Oct 29 2012 11:41:46 GMT 1100 (AUS Eastern Summer Time) Session Time - Current : Mon Oct 29 2012 12:19:23 GMT 1100 (AUS Eastern Summer Time) Campaign Time : Thu Aug 23 2012 15:20:17 GMT 1000 (AUS Eastern Standard Time) Campaign Session : 1 Campaign Count : 1 Campaign Source : (direct) Campaign Medium : (none); Campaign Name : (direct) Language : en-gb Encoding : UTF-8 Flash Version : 11.4 r31 Java Enabled : true Screen Resolution : 1050x1680 Browser Size : 1033x861 Color Depth : 32-bit Ga.js Version : 5.3.7d Cachebuster : 1846514973 ga_debug.js:18 _gaq.push processing "_setAccount" for args: "[UA-XXXX234-11]": ga_debug.js:18 _gaq.push processing "_setDomainName" for args: "[website.com]": ga_debug.js:18 _gaq.push processing "_setAllowLinker" for args: "[true]": ga_debug.js:18 _gaq.push processing "_trackPageview" for args: "[]": ga_debug.js:18 Track Pageview ga_debug.js:18 Tracking beacon sent!
utmwv=--snipped-- Account ID : UA-XXXX234-11 Page Title : SomePageTitle Host Name : www.website.com Page : / Referring URL : - Hit ID : 1923583969 Visitor ID : 785310647 Session Count : 51 Session Time - First : Thu Aug 23 2012 15:20:17 GMT 1000 (AUS Eastern Standard Time) Session Time - Last : Mon Oct 29 2012 11:41:46 GMT 1100 (AUS Eastern Summer Time) Session Time - Current : Mon Oct 29 2012 12:19:23 GMT 1100 (AUS Eastern Summer Time) Campaign Time : Thu Aug 23 2012 15:20:17 GMT 1000 (AUS Eastern Standard Time) Campaign Session : 1 Campaign Count : 1 Campaign Source : (direct) Campaign Medium : (none); Campaign Name : (direct) Language : en-gb Encoding : UTF-8 Flash Version : 11.4 r31 Java Enabled : true Screen Resolution : 1050x1680 Browser Size : 1033x861 Color Depth : 32-bit Ga.js Version : 5.3.7d Cachebuster : 1580443754 and this is the js code he have. BTW, it is inside a <head></head> <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push( ['_setAccount', 'UA-XXXX234-1'], ['_setDomainName', 'website.com'], ['_setAllowLinker', true], ['_trackPageview'] ,['b._setAccount','UA-XXXX234-11'], ['b._setDomainName','website.com'], ['b._setAllowLinker',true], ['b._trackPageview'] ); (function () { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script> Finally, I've triple checked that the UA is the correct text. and yes, the global account is -1 and the specific domain is -11. Anyone have any suggestions to help?

    Read the article

  • Anatomy of a .NET Assembly - CLR metadata 2

    - by Simon Cooper
    Before we look any further at the CLR metadata, we need a quick diversion to understand how the metadata is actually stored. Encoding table information As an example, we'll have a look at a row in the TypeDef table. According to the spec, each TypeDef consists of the following: Flags specifying various properties of the class, including visibility. The name of the type. The namespace of the type. What type this type extends. The field list of this type. The method list of this type. How is all this data actually represented? Offset & RID encoding Most assemblies don't need to use a 4 byte value to specify heap offsets and RIDs everywhere; however, we can't hard-code every offset and RID to be 2 bytes long, as there could conceivably be more than 65535 items in a heap or more than 65535 fields or types defined in an assembly. So heap offsets and RIDs are only represented in the full 4 bytes if it is required; in the header information at the top of the #~ stream are 3 bits indicating if the #Strings, #GUID, or #Blob heaps use 2 or 4 bytes (the #US stream is not accessed from metadata), and the rowcount of each table. If the rowcount for a particular table is greater than 65535 then all RIDs referencing that table throughout the metadata use 4 bytes, else only 2 bytes are used. Coded tokens Not every field in a table row references a single predefined table. For example, in the TypeDef extends field, a type can extend another TypeDef (a type in the same assembly), a TypeRef (a type in a different assembly), or a TypeSpec (an instantiation of a generic type). A token would have to be used to let us specify the table along with the RID. Tokens are always 4 bytes long; again, this is rather wasteful of space. Cutting the RID down to 2 bytes would make each token 3 bytes long, which isn't really an optimum size for computers to read from memory or disk. However, every use of a token in the metadata tables can only point to a limited subset of the metadata tables. For the extends field, we only need to be able to specify one of 3 tables, which we can do using 2 bits: 0x0: TypeDef 0x1: TypeRef 0x2: TypeSpec We could therefore compress the 4-byte token that would otherwise be needed into a coded token of type TypeDefOrRef. For each type of coded token, the least significant bits encode the table the token points to, and the rest of the bits encode the RID within that table. We can work out whether each type of coded token needs 2 or 4 bytes to represent it by working out whether the maximum RID of every table that the coded token type can point to will fit in the space available. The space available for the RID depends on the type of coded token; a TypeOrMethodDef coded token only needs 1 bit to specify the table, leaving 15 bits available for the RID before a 4-byte representation is needed, whereas a HasCustomAttribute coded token can point to one of 18 different tables, and so needs 5 bits to specify the table, only leaving 11 bits for the RID before 4 bytes are needed to represent that coded token type. For example, a 2-byte TypeDefOrRef coded token with the value 0x0321 has the bit pattern 0000 0011 0010 0001. The two least significant bits specify the table - TypeRef; the other bits specify the RID. Because we've used the bottom two bits, we've got to shift everything along two bits: 00 0000 1100 1000. This gives us a RID of 0xc8.
If any one of the TypeDef, TypeRef or TypeSpec tables had more than 16383 rows (2^14 - 1), then 4 bytes would need to be used to represent all TypeDefOrRef coded tokens throughout the metadata tables. Lists The third representation we need to consider is 1-to-many references; each TypeDef refers to a list of FieldDef and MethodDef belonging to that type. If we were to specify every FieldDef and MethodDef individually then each TypeDef would be very large and of variable size, which isn't ideal. There is a way of specifying a list of references without explicitly specifying every item; if we order the MethodDef and FieldDef tables by the owning type, then the field list and method list in a TypeDef only have to be a single RID pointing at the first FieldDef or MethodDef belonging to that type; the end of the list can be inferred by the field list and method list RIDs of the next row in the TypeDef table. Going back to the TypeDef If we have a look back at the definition of a TypeDef, we end up with the following representation for each row: Flags - always 4 bytes Name - a #Strings heap offset. Namespace - a #Strings heap offset. Extends - a TypeDefOrRef coded token. FieldList - a single RID to the FieldDef table. MethodList - a single RID to the MethodDef table. So, depending on the number of entries in the heaps and tables within the assembly, the rows in the TypeDef table can be as small as 14 bytes, or as large as 24 bytes. Now that we've had a look at how information is encoded within the metadata tables, in the next post we can see how they are arranged on disk.
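    As a quick illustration of the coded token scheme described above (a sketch of mine, not from the original article), decoding a 2-byte TypeDefOrRef coded token in C# looks something like this:

        using System;

        static class CodedTokens
        {
            // For a TypeDefOrRef coded token, the 2 least significant bits select the
            // table (tag 0x3 is unused) and the remaining 14 bits are the RID in that table.
            static readonly string[] TypeDefOrRefTables = { "TypeDef", "TypeRef", "TypeSpec" };

            public static void DecodeTypeDefOrRef(ushort codedToken)
            {
                int tag = codedToken & 0x3;   // table selector
                int rid = codedToken >> 2;    // row id
                Console.WriteLine("{0}, RID 0x{1:x}", TypeDefOrRefTables[tag], rid);
            }
        }

    Calling CodedTokens.DecodeTypeDefOrRef(0x0321) prints "TypeRef, RID 0xc8", matching the worked example above.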

    Read the article

  • MySQL Connector/Net 6.6.3 Beta 2 has been released

    - by fernando
    MySQL Connector/Net 6.6.3, a new version of the all-managed .NET driver for MySQL, has been released.  This is the second of two beta releases intended to introduce users to the new features in the release. This release is feature complete and should be stable enough for users to understand the new features and how we expect them to work.  As is the case with all non-GA releases, it should not be used in any production environment.  It is appropriate for use with MySQL server versions 5.0-5.6. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point - if you can't find this version on some mirror, please try again later or choose another download site.) The 6.6 version of MySQL Connector/Net brings the following new features:   * Stored routine debugging   * Entity Framework 4.3 Code First support   * Pluggable authentication (now third parties can plug new authentication mechanisms into the driver).   * Full Visual Studio 2012 support: everything from Server Explorer to Intellisense & the Stored Routine debugger. Stored Procedure Debugging ------------------------------------------- We are very excited to introduce stored procedure debugging into our Visual Studio integration.  It works in a very intuitive manner by simply clicking 'Debug Routine' from Server Explorer. You can debug stored routines, functions & triggers. This release contains fixes specific to the debugger as well as fixes in other areas of Connector/NET:   * Added feature to define initial values for InOut stored procedure arguments.   * Debugger: Fixed Visual Studio locked connection after debugging a routine.   * Fix for bug Cannot Create an Entity with a Key of Type String (MySQL bug #65289, Oracle bug #14540202).   * Fix for bug "CacheServerProperties can cause 'Packet too large' error" (MySQL bug #66578, Oracle bug #14593547).   * Fix for handling unnamed parameters in MySqlCommand. This fix allows MySqlCommand to handle parameters without requiring naming (e.g. INSERT INTO Test (id,name) VALUES (?, ?) ) (MySQL bug #66060, Oracle bug #14499549).   * Fixed end of line issue when debugging a routine.   * Added validation to avoid overwriting a routine backup file when it hasn't changed.   * Fixed inheritance on Entity Framework Code First scenarios (MySql bug #63920, Oracle bug #13582335).   * Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048).   * Fixed bug ASP.NET Membership database fails on MySql database UTF32 (MySQL bug #65144, Oracle bug #14495292).   * Fix for MySqlCommand.LastInsertedId holding only 32 bit values (MySql bug #65452, Oracle bug #14171960).   * Fixed "Decimal type should have digits at right of decimal point"; now the default is 2, and user's changes in the EDM designer are recognized (MySql bug #65127, Oracle bug #14474342).   * Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715).   * Fix for error when calling RoleProvider.RemoveUserFromRole(): causes an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338).   * Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand", too many MemoryStream instances created (MySql bug #65696, Oracle bug #14468204).   * Added ANTLR attribution notice (Oracle bug #14379162).   * Fix for debugger failing when having a routine with an if-elseif-else.
 * Also, the programming interface for authentication plugins has been redefined. Some limitations remain, due to the current debugger architecture:   * Some MySQL functions cannot be debugged currently (get_lock, release_lock, begin, commit, rollback, set transaction level).   * Only one debug session may be active on a given server. The Debugger is feature complete at this point. We look forward to your feedback. Documentation ------------------------------------- You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • WIF, ADFS 2 and WCF – Part 5: Service Client (more Flexibility with WSTrustChannelFactory)

    - by Your DisplayName here!
    See the previous posts first. WIF includes an API to manually request tokens from a token service. This gives you more control over the request and more flexibility since you can use your own token caching scheme instead of being bound to the channel object lifetime. The API is straightforward. You first request a token from the STS and then use that token to create a channel to the relying party service. I’d recommend using the WS-Trust bindings that ship with WIF to talk to ADFS 2 – they are pre-configured to match the binding configuration of the ADFS 2 endpoints. The following code requests a token for a WCF service from ADFS 2: private static SecurityToken GetToken() {     // Windows authentication over transport security     var factory = new WSTrustChannelFactory(         new WindowsWSTrustBinding(SecurityMode.Transport),         stsEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(svcEndpoint),         KeyType = KeyTypes.Symmetric     };       var channel = factory.CreateChannel();     return channel.Issue(rst); } Afterwards, the returned token can be used to create a channel to the service. Again WIF has some helper methods here that make this very easy: private static void CallService(SecurityToken token) {     // create binding and turn off sessions     var binding = new WS2007FederationHttpBinding(         WSFederationHttpSecurityMode.TransportWithMessageCredential);     binding.Security.Message.EstablishSecurityContext = false;       // create factory and enable WIF plumbing     var factory = new ChannelFactory<IService>(binding, new EndpointAddress(svcEndpoint));     factory.ConfigureChannelFactory<IService>();       // turn off CardSpace - we already have the token     factory.Credentials.SupportInteractive = false;       var channel = factory.CreateChannelWithIssuedToken<IService>(token);       channel.GetClaims().ForEach(c =>         Console.WriteLine("{0}\n {1}\n  {2} ({3})\n",             c.ClaimType,             c.Value,             c.Issuer,             c.OriginalIssuer)); } Why is this approach more flexible? Well – some don’t like the configuration voodoo. That’s a valid reason for using the manual approach. You also get more control over the token request itself since you have full control over the RST message that gets send to the STS. One common parameter that you may want to set yourself is the appliesTo value. When you use the automatic token support in the WCF federation binding, the appliesTo is always the physical service address. This means in turn that this address will be used as the audience URI value in the SAML token. Well – this in turn means that when you have an application that consists of multiple services, you always have to configure all physical endpoint URLs in ADFS 2 and in the WIF configuration of the service(s). Having control over the appliesTo allows you to use more symbolic realm names, e.g. the base address or a completely logical name. Since the URL is never de-referenced you have some degree of freedom here. In the next post we will look at the necessary code to request multiple tokens in a call chain. This is a common scenario when you first have to acquire a token from an identity provider and have to send that on to a federation gateway or Resource STS. Stay tuned.
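    To make the appliesTo point concrete, here is a rough variation of the GetToken method above that requests a token for a logical realm rather than the physical service address. The "urn:myapp" realm string is just a placeholder for this sketch; it has to match whatever identifier is configured in ADFS 2 and in the relying party's audience URIs.

        private static SecurityToken GetTokenForLogicalRealm()
        {
            var factory = new WSTrustChannelFactory(
                new WindowsWSTrustBinding(SecurityMode.Transport),
                stsEndpoint);
            factory.TrustVersion = TrustVersion.WSTrust13;

            var rst = new RequestSecurityToken
            {
                RequestType = RequestTypes.Issue,
                // symbolic realm instead of the physical endpoint address
                AppliesTo = new EndpointAddress("urn:myapp"),
                KeyType = KeyTypes.Symmetric
            };

            return factory.CreateChannel().Issue(rst);
        }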

    Read the article

  • Global Perspective: Oracle AppAdvantage Does its Stage Debut in the UK

    - by Tanu Sood
    Global Perspective is a monthly series that brings experiences, business needs and real-world use cases from regions across the globe. This month’s feature is a follow-up from last month’s Global Perspective note from a well-known ACE Director based in EMEA. My first contribution to this blog was before Oracle Open World and I was quite excited about where this initiative would take me in my understanding of the value of Oracle Fusion Middleware. Rimi Bewtra from the Oracle AppAdvantage team came as promised to the Oracle ACE Director briefings and explained what this initiative was all about, and I then asked the directors to take part in the new survey. The story was really well received, and then at the SOA advisory board that many of these ACE Directors already take part in there was a further discussion on how this initiative will help customers understand the benefits of adoption. A few days later Rick Beers launched the program at a lunch of invited customer executives which included one from Pella who talked about their projects (a quick recap on that here). I wasn’t able to stay for the whole event, but what really interested me was that these executives understood the technology but were looking for how they could use it to drive their businesses. Lots of ideas were bubbling up in my head about how we can use this in user groups to help our members, and the timing was fantastic as just three weeks later we had UKOUG_Apps13, our flagship Applications conference in the UK. We had independently been working with Oracle marketing in the UK on an initiative called Apps Transformation to help our members look beyond just the application they use today. We have had a Fusion community page but felt the options open are now much wider than Fusion Applications: there are acquired applications, social, mobility and of course the underlying technology, Oracle Fusion Middleware. I was really pleased to be allowed to give the Oracle AppAdvantage story as a session in our conference, and we are planning a special Apps Transformation event in March where I hope the Oracle AppAdvantage team will take part and we will have the results of the survey to discuss. But, life also came full circle for me. In my first post, I talked about Andrew Sutherland and his original theory that Oracle Fusion Middleware adoption had technical drivers. Well, Andrew was a speaker at our event and he gave a potted, tech-talk free update on Oracle Open World. Andrew talked about the Prevailing Technology Winds and what is driving them today: in the past it was the move from simply automating processes (ERP etc), through the altering of those processes (SOA), and on to consolidation.
The next drivers are around the need to predict, both faster and more accurately; how to better exploit the information that we have available. He went on to talk about The Nexus of Forces: Social, Mobile, Cloud and Information – harnessing these forces of change with Oracle technology. Gartner really likes this concept and if you want to know more you can get their paper here. All this has made me think, and I hope it will make you too. Technology can help us drive our businesses better and understanding your needs can be the first step on your journey, which was the theme of our event in the UK. I spoke to a number of the delegates and I hope to share some of their stories in later posts. If you have a story to share, the survey is at: https://www.surveymonkey.com/s/P335DD3 About the Author: Debra Lilley, Fujitsu Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Debra has 18 years experience with Oracle Applications, with E Business Suite since 9.4.1, moving to Business Intelligence Team Leader and then Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts Editor’s Note: Debra has kindly agreed to share her musings and experience in a monthly column on the Fusion Middleware blog so do stay tuned…

    Read the article

  • Summit reflections

    - by Rob Farley
    So far, my three PASS Summit experiences have been notably different to each other. My first, I wasn’t on the board and I gave two regular sessions and a Lightning Talk in which I told jokes. My second, I was a board advisor, and I delivered a precon, a spotlight and a Lightning Talk in which I sang. My third (last week), I was a full board director, and I didn’t present at all. Let’s not talk about next year. I’m not sure there are many options left. This year, I noticed that a lot more people recognised me and said hello. I guess that’s potentially because of the singing last year, but could also be because board elections can bring a fair bit of attention, and because of the effort I’ve put in through things like 24HOP... Yeah, ok. It’d be the singing. My approach was very different though. I was watching things through different eyes. I looked for the things that seemed to be working and the things that didn’t. I had staff there again, and was curious to know how their things were working out. I knew a lot more about what was going on behind the scenes to make various things happen, and although very little about the Summit was actually my responsibility (based on not having that portfolio), my perspective had moved considerably. Before the Summit started, Board Members had been given notebooks – an idea Tom (who heads up PASS’ marketing) had come up with after being inspired by seeing Bill walk around with a notebook. The plan was to take notes about feedback we got from people. It was a good thing, and the notebook forms a nice pair with the SQLBits one I got a couple of years ago when I last spoke there. I think one of the biggest impacts of this was that during the first keynote, Bill told everyone present about the notebooks. This set a tone of “we’re listening”, and a number of people were definitely keen to tell us things that would cause us to pull out our notebooks. PASSTV was a new thing this year. Justin, the host, featured on the couch and talked a lot of people about a lot of things, including me (he talked to me about a lot of things, I don’t think he talked to a lot people about me). Reaching people through online methods is something which interests me a lot – it has huge potential, and I love the idea of being able to broadcast to people who are unable to attend in person. I’m keen to see how this medium can be developed over time. People who know me will know that I’m a keen advocate of certification – I've been SQL certified since version 6.5, and have even been involved in creating exams. However, I don’t believe in studying for exams. I think training is worthwhile for learning new skills, but the goal should be on learning those skills, not on passing an exam. Exams should be for proving that the skills are there, not a goal in themselves. The PASS Summit is an excellent place to take exams though, and with an attitude of professional development throughout the event, why not? So I did. I wasn’t expecting to take one, but I was persuaded and took the MCM Knowledge Exam. I hadn’t even looked at the syllabus, but tried it anyway. I was very tired, and even fell asleep at one point during it. I’ll find out my result at some point in the future – the Prometric site just says “Tested” at the moment. As I said, it wasn’t something I was expecting to do, but it was good to have something unexpected during the week. Of course it was good to catch up with old friends and make new ones. 
I feel like every time I’m in the US I see things develop a bit more, with more and more people knowing who I am, who my staff are, and recognising the LobsterPot brand. I missed being a presenter, but I definitely enjoyed seeing many friends on the list of presenters. I won’t try to list them, because there are so many these days that people might feel sad if I don’t mention them. For those that I managed to see, I was pleased to see that the majority of them have lifted their presentation skills since I last saw them, and I happily told them as much. One person who I will mention was Paul White, who travelled from New Zealand to his first PASS Summit. He gave two sessions (a regular session and a half-day), packed large rooms of people, and had everyone buzzing with enthusiasm. I spoke to him after the event, and he told me that his expectations were blown away. Paul isn’t normally a fan of crowds, and the thought of 4000 people would have been scary. But he told me he had no idea that people would welcome him so well, be so friendly and so down to earth. He’s seen the significance of the SQL Server community, and says he’ll be back. It’ll be good to see him there. Will you be there too?

    Read the article

  • JavaOne Latin America Opening Keynotes

    - by Tori Wieldt
    Originally published on blogs.oracle.com/javaone It was a great first day at JavaOne Brazil, which included the Java Strategy and Java Technical keynotes. Henrik Stahl, Senior Director, Product Management for Java, opened the keynotes by saying that this is the third year for JavaOne Latin America. He explained, "You know what they say, the first time doesn't count, the second time is a habit and the third time it's a tradition!" He mentioned that he was thrilled that this is the largest JavaOne in Brazil to date, and he wants next year to be larger. He said that Oracle knows Latin America is an important hub for development. "We continually come back to Latin America because of the dedication the community has with driving the continued innovation for Java," he said. Stahl explained that Oracle and the Java community must continue to innovate and Make the Future Java together. The success of Java depends on three important factors: technological innovation, Oracle as a strong steward of Java, and community participation. "The Latin American Java Community (especially in Brazil) is a shining example of how to be a positive contributor to Java," Stahl said. Next, George Saab, VP software dev, Java Platform Group at Oracle, discussed some of the recent and upcoming changes to Java. "In addition to the incremental improvements to Java 7, we have also increased the set of platforms supported by Oracle from Linux, Windows, and Solaris to now also include Mac OS X and Linux/ARM for ARM-based PCs such as the Raspberry Pi and emerging ARM based microservers." Saab announced that EA builds for Linux ARM Hard Float ABI will be available by the end of the year. Staffan Friberg, Product Manager, Java Platform Group, provided an overview of some of the language features coming in Java 8, including Lambda, removal of PermGen, improved date and time APIs and improved security. Java 8 development is moving along. He reminded the audience that they can go to OpenJDK to see this development being done in real-time, and that there are weekly early access builds of OracleJDK 8 that developers can download and try today. Judson Althoff, Senior Vice President, Worldwide Alliances and Channels and Embedded Sales, was invited to the stage, and the audience was told that "even though he is wearing a suit, he is still pretty technical." Althoff started off with a bang: "The Internet of Things is on a collision course with big data and this is a huge opportunity for developers." For example, Althoff said, today cars are more a data device than a mechanical device. A car embedded with sensors for fuel efficiency, temperature, tire pressure, etc. can generate a petabyte of data A DAY. There are similar examples in healthcare (patient monitoring and privacy requirements create a complex data problem) and transportation management (sending a package around the world with sensors for humidity, temperature and light). Althoff then brought on stage representatives from three companies that are successful with Java today. First was Axel Hansmann, VP Strategy & Marketing Communications, Cinterion. Mr. Hansmann explained that Cinterion, a market leader in Latin America, enables M2M services with Java. At JavaOne San Francisco, Cinterion launched the EHS5, the smallest 3G solderable module, with Java installed on it.
This provides Original Equipment Manufacturers (OEMs) with a cost-effective, flexible platform for bringing advanced M2M technology to market. Next, Steve Nelson, Director of Marketing for the Americas at Freescale, explained that Freescale is #1 in Embedded Processors in Wired and Wireless Communications, and #1 in Automotive Semiconductors in the Americas. He said that Java provides a mature, proven platform that is uniquely suited to meet the requirements of almost any type of embedded device. He encouraged university students to get involved in the Freescale Cup, a global competition where student teams build, program, and race a model car around a track for speed. Roberto Franco, SBTVD Forum President, SBTVD, talked about Ginga, a Java-based standard for television in Brazil. He said there are 4 million Ginga TV sets in Brazil, and they expect over 20 million TV sets to be sold by the end of 2014. Ginga is also being adopted in 11 other countries in Latin America. Ginga brings interactive services not only to the TV set, but also to other devices such as tablets, PCs or smartphones, as the main or second screen. "Interactive services is already a reality," he said, "but in a near future, we foresee interactivity enhanced TV content, convergence with OTT services and a big participation from the audience, all integrated on TV, tablets, smartphones and second screen devices." Before he left the stage, Nandini Ramani thanked Judson for being part of the Java community and invited him to the next Geek Bike Ride in Brazil. She presented him with an official geek bike ride jersey. For the Technical Keynote, a "blue screen of death" appeared. With mock concern, Stephen Chin asked the rest of the presenters if they could go on without slides. What followed was an interesting collection of demos, including JavaFX on a tablet, a look at Project Easel in NetBeans, and even Simon Ritter controlling legos with his brainwaves! Stay tuned for more dispatches.

    Read the article

  • Physics System ignores collision in some rare cases

    - by Gajoo
    I've been developing a simple physics engine for my game. Since the game physics is very simple, I've decided to increase accuracy a little bit. Instead of formal integration methods like fourier or RK4, I'm directly computing the results after delta time "dt", based on the very first laws of physics: dx = 0.5 * a * dt^2 + v0 * dt dv = a * dt where a is acceleration and v0 is the object's previous velocity. Also, to handle collisions I've used a method which is somewhat different from those I've seen so far. I'm detecting all the collisions in the given time frame, stepping the world forward to the nearest collision, resolving it and again checking for possible collisions. As I said, the world consists of very simple objects, so I'm not losing any performance due to multiple collision checking. First I'm checking if the ball collides with any walls around it (which is working perfectly) and then I'm checking if it collides with the edges of the walls (yellow points in the picture). The algorithm seems to work without any problem except in some rare cases, in which the collision with points is ignored. I've tested everything and all the variables seem to be what they should, but after leaving the system to work for a minute or two the ball passes through one of those points. Here is the collision portion of my code, hopefully one of you guys can give me a hint where to look for a potential bug! void PhysicalWorld::checkForPointCollision(Vec2 acceleration, PhysicsComponent& ball, Vec2& collisionNormal, float& collisionTime, Vec2 target) { // this function checks if there will be any collision between a circle and a point // ball contains information about the circle (its current velocity, position and radius) // collisionNormal is an output variable // collisionTime is also an output variable // target is the point I want to check for collisions Vec2 V = ball.mVelocity; Vec2 A = acceleration; Vec2 P = ball.mPosition - target; float wallWidth = mMap->getWallWidth() / (mMap->getWallWidth() + mMap->getHallWidth()) / 2; float r = ball.mRadius / (mMap->getWallWidth() + mMap->getHallWidth()); // r is ball radius scaled to match actual rendered object. if (A.any()) // todo : I need to first correctly solve the collisions in case there is no acceleration return; if (V.any()) // if object is not moving there will be no collisions! { float D = P.x * V.y - P.y * V.x; float Delta = r*r*V.length2() - D*D; if(Delta < eps) return; Delta = sqrt(Delta); float sgnvy = V.y > 0 ? 1: (V.y < 0?-1:0); Vec2 c1(( D*V.y+sgnvy*V.x*Delta) / V.length2(), (-D*V.x+fabs(V.y)*Delta) / V.length2()); Vec2 c2(( D*V.y-sgnvy*V.x*Delta) / V.length2(), (-D*V.x-fabs(V.y)*Delta) / V.length2()); float t1 = (c1.x - P.x) / V.x; float t2 = (c2.x - P.x) / V.x; if(t1 > eps && t1 <= collisionTime) { collisionTime = t1; collisionNormal = c1; } if(t2 > eps && t2 <= collisionTime) { collisionTime = t2; collisionNormal = c2; } } } // this function should step the world forward by dt. it doesn't check for collision of any two balls (components) // it just checks if there is a collision between the current component and 4 points forming a rectangle around it.
void PhysicalWorld::step(float dt) { for (unsigned i=0;i<mObjects.size();i++) { PhysicsComponent &current = *mObjects[i]; Vec2 acceleration = current.mForces * current.mInvMass; float rt=dt; // stores how much more the world should advance while(rt > eps) { float collisionTime = rt; Vec2 collisionNormal = Vec2(0,0); float halfWallWidth = mMap->getWallWidth() / (mMap->getWallWidth() + mMap->getHallWidth()) / 2; // we check if there is any collision with any of those 4 points around the ball // if there is a collision both collisionNormal and collisionTime variables will change // after these functions collisionTime will be exactly the value of nearest collision (if any) // and if there was, collisionNormal will report in which direction the ball should return. checkForPointCollision(acceleration,current,collisionNormal,collisionTime,Vec2(floor(current.mPosition.x) + halfWallWidth,floor(current.mPosition.y) + halfWallWidth)); checkForPointCollision(acceleration,current,collisionNormal,collisionTime,Vec2(floor(current.mPosition.x) + halfWallWidth, ceil(current.mPosition.y) - halfWallWidth)); checkForPointCollision(acceleration,current,collisionNormal,collisionTime,Vec2( ceil(current.mPosition.x) - halfWallWidth,floor(current.mPosition.y) + halfWallWidth)); checkForPointCollision(acceleration,current,collisionNormal,collisionTime,Vec2( ceil(current.mPosition.x) - halfWallWidth, ceil(current.mPosition.y) - halfWallWidth)); // either if there is a collision or if there is not we step the forward since we are sure there will be no collision before collisionTime current.mPosition += collisionTime * (collisionTime * acceleration * 0.5 + current.mVelocity); current.mVelocity += collisionTime * acceleration; // if the ball collided with anything collisionNormal should be at least none zero in one of it's axis if (collisionNormal.any()) { collisionNormal *= Dot(collisionNormal, current.mVelocity) / collisionNormal.length2(); current.mVelocity -= 2 * collisionNormal; // simply reverse velocity along collision normal direction } rt -= collisionTime; } // reset all forces for current object so it'll be ready for later game event current.mForces.zero(); } }
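    For reference (my own working, not part of the original question), the quadratic that checkForPointCollision is solving can be written out explicitly. With P the ball centre relative to the target point, V its velocity and r its radius, the collision times t satisfy:

        |P + t*V|^2 = r^2
        =>  V.length2() * t^2 + 2 * Dot(P, V) * t + P.length2() - r^2 = 0
        =>  t = ( -Dot(P, V) ± sqrt( r^2 * V.length2() - D^2 ) ) / V.length2(),  where D = P.x*V.y - P.y*V.x

    The term under the square root is exactly the Delta in the code (via the 2D identity Dot(P, V)^2 + D^2 = P.length2() * V.length2()), so a negative Delta means the swept circle misses the point. Note that the derivation assumes no acceleration during the sub-step, which matches the early return when A.any() is true.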

    Read the article

  • Custom Configuration Section Handlers

    Most .NET developers who need to store something in configuration tend to use appSettings for this purpose, in my experience.  More recently, the framework itself has helped things by adding the <connectionStrings /> section so at least these are in their own section and not adding to the appSettings clutter that pollutes most apps.  I recommend avoiding appSettings for several reasons.  In addition to those listed there, I would add that strong typing and validation are additional reasons to go the custom configuration section route. For my ASP.NET Tips and Tricks talk, I use the following example, which is a simple DemoSettings class that includes two fields.  The first is an integer representing how many attendees are present for the talk, and the second is the title of the talk.  The setup in web.config is as follows: <configSections> <section name="DemoSettings" type="ASPNETTipsAndTricks.Code.DemoSettings" /> </configSections>   <DemoSettings sessionAttendees="100" title="ASP.NET Tips and Tricks DevConnections Spring 2010" /> Referencing the values in code is strongly typed and straightforward.  Here I have a page that exposes two properties which internally get their values from the configuration section handler: public partial class CustomConfig1 : System.Web.UI.Page { public string SessionTitle { get { return DemoSettings.Settings.Title; } } public int SessionAttendees { get { return DemoSettings.Settings.SessionAttendees; } } } Note that the settings are only read from the config file once; after that they are cached, so there is no need to be concerned about excessive file access. Now we've seen how to set it up in the config file and how to refer to the settings in code.  All that remains is to see the file itself: public class DemoSettings : ConfigurationSection { private static DemoSettings settings = ConfigurationManager.GetSection("DemoSettings") as DemoSettings; public static DemoSettings Settings{ get { return settings;} }   [ConfigurationProperty("sessionAttendees" , DefaultValue = 200 , IsRequired = false)] [IntegerValidator(MinValue = 1 , MaxValue = 10000)] public int SessionAttendees { get { return (int)this["sessionAttendees"]; } set { this["sessionAttendees"] = value; } }   [ConfigurationProperty("title" , IsRequired = true)] [StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;\"|\\")] public string Title { get { return (string)this["title"]; } set { this["title"] = value; }   } } The class is pretty straightforward, but there are some important components to note.  First, it must inherit from System.Configuration.ConfigurationSection.  Next, as a convention I like to have a static settings member that is responsible for pulling out the section when the class is first referenced, and further to expose this collection via a static readonly property, Settings.  Note that the types of both of these are the type of my class, DemoSettings. The properties of the class, SessionAttendees and Title, should map to the attributes of the config element in the XML file.  The [ConfigurationProperty] attribute allows you to map the attribute name to the property name (thus using both XML standard naming conventions and C# naming conventions).  In addition, you can specify a default value to use if nothing is specified in the config file, and whether or not the setting must be provided (IsRequired).  If it is required, then it doesn't make sense to include a default value.
Beyond defaults and required, you can specify more advanced validation rules for the configuration values using additional C# attributes, such as [IntegerValidator] and [StringValidator].  Using these, you can declaratively specify that your configuration values be in a given range, or omit certain forbidden characters, for instance.  Of course you can write your own custom validation attributes (a rough sketch of one is shown below), and there are others specified in System.Configuration. Individual sections can also be loaded from separate files, using syntax like this: <DemoSettings configSource="demosettings.config" /> Summary Using a custom configuration section handler is not hard.  If your application or component requires configuration, I recommend creating a custom configuration handler dedicated to your app or component.  Doing so will reduce the clutter in appSettings, will provide you with strong typing and validation, and will make it much easier for other developers or system administrators to locate and understand the various configuration values that are necessary for a given application.
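    As a rough sketch of the "write your own custom validation attributes" point above (the names here are invented for the example, not part of the original article), a validator that restricts a string setting to a fixed set of values could look like this:

        using System;
        using System.Configuration;

        // Validator that rejects any string value outside an allowed set.
        public class AllowedValuesValidator : ConfigurationValidatorBase
        {
            private readonly string[] allowed;

            public AllowedValuesValidator(string[] allowed)
            {
                this.allowed = allowed ?? new string[0];
            }

            public override bool CanValidate(Type type)
            {
                return type == typeof(string);
            }

            public override void Validate(object value)
            {
                if (Array.IndexOf(allowed, (string)value) < 0)
                    throw new ConfigurationErrorsException(
                        "Value '" + value + "' is not one of the allowed values.");
            }
        }

        // Attribute that wires the validator up to a configuration property.
        [AttributeUsage(AttributeTargets.Property)]
        public class AllowedValuesValidatorAttribute : ConfigurationValidatorAttribute
        {
            public string[] Allowed { get; set; }

            public override ConfigurationValidatorBase ValidatorInstance
            {
                get { return new AllowedValuesValidator(Allowed); }
            }
        }

    It would then be applied to a property in the same way as [IntegerValidator] above, e.g. [ConfigurationProperty("mode", DefaultValue = "Live")] [AllowedValuesValidator(Allowed = new[] { "Live", "Demo" })] public string Mode { ... }.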

    Read the article

  • In Social Relationship Management, the Spirit is Willing, but Execution is Weak

    - by Mike Stiles
    In our final talk in this series with Aberdeen’s Trip Kucera, we wanted to find out if enterprise organizations are actually doing anything about what they’re learning around the importance of communicating via social and using social listening for a deeper understanding of customers and prospects. We found out that if your brand is lagging behind, you’re not alone. Spotlight: How was Aberdeen able to find out if companies are putting their money where their mouth is when it comes to implementing social across the enterprise? Trip: One way to think about the relative challenges a business has in a given area is to look at the gap between “say” and “do.” The first of those words reveals the brand’s priorities, while the second reveals their ability to execute on those priorities. In Aberdeen’s research, we capture this by asking firms to rank the value of a set of activities from one on the low end to five on the high end. We then ask them to rank their ability to execute those same activities, again on a one to five, not effective to highly effective scale. Spotlight: And once you get their self-assessments, what is it you’re looking for? Trip: There are two things we’re looking for in this analysis. The first is we want to be able to identify the widest gaps between perception of value and execution. This suggests impediments to adoption or simply a high level of challenge, be it technical or otherwise. It may also suggest areas where we can expect future investment and innovation. Spotlight: So the biggest potential pain points surface, places where they know something is critical but also know they aren’t doing much about it. What’s the second thing you look for? Trip: The second thing we want to do is look at specific areas in which high-performing companies, the Leaders, are out-executing the Followers. This points to the business impact of these activities since Leaders are defined by a set of business performance metrics. Put another way, we’re correlating adoption of specific business competencies with performance, looking for what high-performers do differently. Spotlight: Ah ha, that tells us what steps the winners are taking that are making them winners. So what did you find out? Trip: Generally speaking, we see something of a glass curtain when it comes to the social relationship management execution gap. There isn’t a single social media activity in which more than 50% of respondents indicated effectiveness, which would be a 4 or 5 on that 1-5 scale. This despite the fact that 70% of firms indicate that generating positive social media mentions is valuable or very valuable, a 4 or 5 on our 1-5 scale. Spotlight: Well at least they get points for being honest. The verdict they’re giving themselves is that they just aren’t cutting it in these highly critical social development areas. Trip: And the widest gap is around directly engaging with customers and/or prospects on social networks, which 69% of firms rated as valuable but only 34% of companies say they are executing well. Perhaps even more interesting is that these two are interdependent since you’re most likely to generate goodwill on social through happy, engaged customers. This data also suggests that social is largely being used as a broadcast channel rather than for one-to-one engagement. As we’ve discussed previously, social is an inherently personal media. 
Spotlight: And if they’re still using it as a broadcast channel, that shows they still fail to understand the root of social and see it as just another outlet for their ads and push-messaging. That’s depressing. Trip: A second way to evaluate this data is by using Aberdeen’s performance benchmarking. The story is a bit different, but consistent in its own way. The first thing we notice is that Leaders are more effective in their execution of several key social relationship management capabilities, namely generating positive mentions and engaging with “influencers” and customers. Based on the fact that Aberdeen uses a broad set of performance metrics to rank the respondents as either “Leaders” (top 35% in weighted performance) or “Followers” (bottom 65% in weighted performance), from website conversion to annual revenue growth, we can then correlate high social effectiveness with company performance. We can also connect the specific social capabilities used by Leaders with effectiveness. We spoke about a few of those key capabilities last time and also discuss them in a new report: Social Powers Activate: Engineering Social Engagement to Win the Hidden Sales Cycle. Spotlight: What all that tells me is there are rewards for making the effort and getting it right. That’s how you become a Leader. Trip: But there’s another part of the story, which is that overall effectiveness, even among Leaders, is muted. There’s just one activity in which a majority of Leaders cite high effectiveness: generating positive buzz. While 80% of Leaders indicate “directly engaging with customers” through social media channels is valuable, the highest rated activity among Leaders, only 42% say they’re effective. This gap even among Leaders shows the challenges still involved in effective social relationship management. @mikestiles Photo: stock.xchng

    Read the article

  • WIF-less claim extraction from ACS: SWT

    - by Elton Stoneman
    WIF with SAML is solid and flexible, but unless you need the power, it can be overkill for simple claim assertion, and in the REST world WIF doesn’t have support for the latest token formats.  Simple Web Token (SWT) may not be around forever, but while it's here it's a nice easy format which you can manipulate in .NET without having to go down the WIF route. Assuming you have set up a Relying Party in ACS, specifying SWT as the token format: When ACS redirects to your login page, it will POST the SWT in the first form variable. It comes through in the BinarySecurityToken element of a RequestSecurityTokenResponse XML payload , the SWT type is specified with a TokenType of http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0 : <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">   <t:Lifetime>     <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:31:18.655Z</wsu:Created>     <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:11:18.655Z</wsu:Expires>   </t:Lifetime>   <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">     <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">       <Address>http://localhost/x.y.z</Address>     </EndpointReference>   </wsp:AppliesTo>   <t:RequestedSecurityToken>     <wsse:BinarySecurityToken wsu:Id="uuid:fc8d3332-d501-4bb0-84ba-d31aa95a1a6c" ValueType="http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> [ base64string ] </wsse:BinarySecurityToken>   </t:RequestedSecurityToken>   <t:TokenType>http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0</t:TokenType>   <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>   <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType> </t:RequestSecurityTokenResponse> Reading the SWT is as simple as base-64 decoding, then URL-decoding the element value:     var wrappedToken = XDocument.Parse(HttpContext.Current.Request.Form[1]);     var binaryToken = wrappedToken.Root.Descendants("{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}BinarySecurityToken").First();     var tokenBytes = Convert.FromBase64String(binaryToken.Value);     var token = Encoding.UTF8.GetString(tokenBytes);     var tokenType = wrappedToken.Root.Descendants("{http://schemas.xmlsoap.org/ws/2005/02/trust}TokenType").First().Value; The decoded token contains the claims as key/value pairs, along with the issuer, audience (ACS realm), expiry date and an HMAC hash, which are in query string format. 
Separate them on the ampersand, and you can write out the claim values in your logged-in page:     var decoded = HttpUtility.UrlDecode(token);     foreach (var part in decoded.Split('&'))     {         Response.Write("<pre>" + part + "</pre><br/>");     } - which will produce something like this: http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant=2012-08-31T06:57:01.855Z http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod=http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname=XYZ http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected] http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected] http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider=http://fs.svc.xyz.com/adfs/services/trust Audience=http://localhost/x.y.z ExpiresOn=1346402225 Issuer=https://x-y-z.accesscontrol.windows.net/ HMACSHA256=oDCeEDDAWEC8x+yBnTaCLnzp4L6jI0Z/xNK95PdZTts= The HMAC hash lets you validate the token to ensure it hasn’t been tampered with. You'll need the token signing key from ACS, then you can re-sign the token and compare hashes. There's a full implementation of an SWT parser and validator here: How To Request SWT Token From ACS And How To Validate It At The REST WCF Service Hosted In Windows Azure, and a cut-down claim inspector on my github code gallery: ACS Claim Inspector. Interestingly, ACS lets you have a value for your logged-in page which has no relation to the realm for authentication, so you can put this code into a generic claim inspector page, and set that to be your logged-in page for any relying party where you want to check what's being sent through. Particularly handy with ADFS, when you're modifying the claims provided, and want to quickly see the results.
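To round out the validation step mentioned above, here is a rough sketch of re-signing the token and comparing hashes (my illustration, not part of the original post). It assumes token is the raw, still URL-encoded SWT string from the earlier snippet, that tokenSigningKeyFromAcs is a placeholder holding the base64 symmetric token signing key configured for the relying party in ACS, and that System.Security.Cryptography, System.Text and System.Web are referenced:
    // Minimal SWT signature check (sketch only - no expiry or audience checks here).
    // The HMAC is computed over everything before the trailing "&HMACSHA256=" pair,
    // keyed with the ACS token signing key, and compared with the value in the token.
    var key = Convert.FromBase64String(tokenSigningKeyFromAcs); // placeholder variable
    const string marker = "&HMACSHA256=";
    var markerIndex = token.IndexOf(marker, StringComparison.Ordinal);
    var signedPart = token.Substring(0, markerIndex);
    var suppliedSignature = HttpUtility.UrlDecode(token.Substring(markerIndex + marker.Length));
    using (var hmac = new HMACSHA256(key))
    {
        var computedSignature = Convert.ToBase64String(
            hmac.ComputeHash(Encoding.UTF8.GetBytes(signedPart)));
        var isValid = computedSignature == suppliedSignature;
        Response.Write("<pre>Signature valid: " + isValid + "</pre>");
    }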

    Read the article

  • Session Report - Java on the Raspberry Pi

    - by Janice J. Heiss
    On mid-day Wednesday, the always colorful Oracle Evangelist Simon Ritter demonstrated Java on the Raspberry Pi at his session, “Do You Like Coffee with Your Dessert?”. The Raspberry Pi consists of a credit card-sized single-board computer developed in the UK with the intention of stimulating the teaching of basic computer science in schools. “I don't think there is a single feature that makes the Raspberry Pi significant,” observed Ritter, “but a combination of things really makes it stand out. First, it's $35 for what is effectively a completely usable computer. You do have to add a power supply, SD card for storage and maybe a screen, keyboard and mouse, but this is still way cheaper than a typical PC. The choice of an ARM (Advanced RISC Machine and Acorn RISC Machine) processor is noteworthy, because it avoids problems like cooling (no heat sink or fan) and can use a USB power brick. When you add in the enormous community support, it offers a great platform for teaching everyone about computing.”Some 200 enthusiastic attendees were present at the session which had the feel of Simon Ritter sharing a fun toy with friends. The main point of the session was to show what Oracle was doing to support Java on the Raspberry Pi in a way that is entertaining and fun. Ritter pointed out that, in addition to being great for teaching, it’s an excellent introduction to the ARM architecture, and runs well with Java and will get better once it has official hard float support. The possibilities are vast.Ritter explained that the Raspberry Pi Project started in 2006 with the goal of devising a computer to inspire children; it drew inspiration from the BBC Micro literacy project of 1981 that produced a series of microcomputers created by the Acorn Computer company. It was officially launched on February 29, 2012, with a first production of 10,000 boards. There were 100,000 pre-orders in one day; currently about 4,000 boards are produced a day. Ritter described the specification as follows:* CPU: ARM 11 core running at 700MHz Broadcom SoC package Can now be overclocked to 1GHz (without breaking the warranty!) 
* Memory: 256Mb* I/O: HDMI and composite video 2 x USB ports (Model B only) Ethernet (Model B only) Header pins for GPIO, UART, SPI and I2C He took attendees through a brief history of ARM Architecture:* Acorn BBC Micro (6502 based) Not powerful enough for Acorn’s plans for a business computer * Berkeley RISC Project UNIX kernel only used 30% of instruction set of Motorola 68000 More registers, less instructions (Register windows) One chip architecture to come from this was… SPARC * Acorn RISC Machine (ARM) 32-bit data, 26-bit address space, 27 registers First machine was Acorn Archimedes * Spin off from Acorn, Advanced RISC MachinesNext he presented its features:* 32-bit RISC Architecture–  ARM accounts for 75% of embedded 32-bit CPUs today– 6.1 Billion chips sold last year (zero manufactured by ARM)* Abstract architecture and microprocessor core designs– Raspberry Pi is ARM11 using ARMv6 instruction set* Low power consumption– Good for mobile devices– Raspberry Pi can be powered from 700mA 5V only PSU– Raspberry Pi does not require heatsink or fanHe described the current ARM Technology:* ARMv6– ARM 11, ARM Cortex-M* ARMv7– ARM Cortex-A, ARM Cortex-M, ARM Cortex-R* ARMv8 (Announced)– Will support 64-bit data and addressingHe next gave the Java Specifics for ARM: Floating point operations* Despite being an ARMv6 processor it does include an FPU– FPU only became standard as of ARMv7* FPU (Hard Float, or HF) is much faster than a software library* Linux distros and Oracle JVM for ARM assume no HF on ARMv6– Need special build of both– Raspbian distro build now available– Oracle JVM is in the works, release date TBDNot So RISCPerformance Improvements* DSP Enhancements* Jazelle* Thumb / Thumb2 / ThumbEE* Floating Point (VFP)* NEON* Security Enhancements (TrustZone)He spent a few minutes going over the challenges of using Java on the Raspberry Pi and covered:* Sound* Vision * Serial (TTL UART)* USB* GPIOTo implement sound with Java he pointed out:* Sound drivers are now included in new distros* Java Sound API– Remember to add audio to user’s groups– Some bits work, others not so much* Playing (the right format) WAV file works* Using MIDI hangs trying to open a synthesizer* FreeTTS text-to-speech– Should work once sound works properlyHe turned to JavaFX on the Raspberry Pi:* Currently internal builds only– Will be released as technology preview soon* Work involves optimal implementation of Prism graphics engine– X11?* Once the JavaFX implementation is completed there will be little of concern to developers-- It’s just Java (WORA). He explained the basis of the Serial Port:* UART provides TTL level signals (3.3V)* RS-232 uses 12V signals* Use MAX3232 chip to convert* Use this for access to serial consoleHe summarized his key points. The Raspberry Pi is a very cool (and cheap) computer that is great for teaching, a great introduction to ARM that works very well with Java and will work better in the future. The opportunities are limitless. For further info, check out, Raspberry Pi User Guide by Eben Upton and Gareth Halfacree. From there, Ritter tried out several fun demos, some of which worked better than others, but all of which were greeted with considerable enthusiasm and support and good humor (even when he ran into some glitches).  All in all, this was a fun and lively session.

    Read the article

  • SQL SERVER – ?Finding Out What Changed in a Deleted Database – Notes from the Field #041

    - by Pinal Dave
[Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we are reading theory or a book, we sometimes do not realize how the real world actually behaves, and that is why we have the Notes from the Field series, which is extremely popular with developers and DBAs. Let us talk about the interesting problem of how to figure out what has changed in a DELETED database. You might think I am exaggerating, but in reality this is exactly the kind of problem that keeps a DBA's life interesting, and in this blog post we have a great story from Brian Kelley on that very subject. In this episode of the Notes from the Field series, database expert Brian Kelley explains how to find out what has changed in a deleted database. Read about Brian's experience in his own words. Sometimes, one of the hardest questions to answer is, "What changed?" A similar question is, "Did anything change other than what we expected to change?" The First Place to Check – Schema Changes History Report: Pinal has recently written on the Schema Changes History report and its requirement for the Default Trace to be enabled. This is always the first place I look when I am trying to answer these questions. There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DELETE), it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to overwrite. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off. But the Database Has Been Deleted! Pinal cited another issue, and that's the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost. One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it's a trace, like any other database trace. And the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_. Therefore, you can read the trace files like any other. Tip: Copy the files to a working directory. Otherwise, you may occasionally receive a file in use error. With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as any other database. Testing with a Deleted Database: Here's a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can't do a standard Schema Changes History report. CREATE DATABASE DeleteMe; GO USE DeleteMe; GO CREATE SCHEMA Test AUTHORIZATION dbo; GO CREATE TABLE Test.Foo (FooID INT); GO USE MASTER; GO DROP DATABASE DeleteMe; GO This sets up the perfect situation where we can't retrieve the information using the Schema Changes History report but where it's still available. Finding the Information: I've sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise, I'm just looking at the trace files using SQL Profiler. As you can see, the information is definitely there. Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when. 
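If you prefer a script to clicking through Profiler, the default trace files can also be queried directly with T-SQL. This is an illustrative sketch rather than part of Brian's original write-up, and the trace file path is a placeholder for whichever log_*.trc file sits in your instance's LOG directory:
    -- Locate the default trace file currently being written
    SELECT t.[path]
    FROM   sys.traces AS t
    WHERE  t.is_default = 1;

    -- Read the trace files and filter down to the dropped database
    SELECT tg.StartTime,
           te.name AS EventName,
           tg.DatabaseName,
           tg.ObjectName,
           tg.LoginName
    FROM   sys.fn_trace_gettable(N'C:\...\MSSQL\Log\log_123.trc', DEFAULT) AS tg  -- placeholder path and file name
           JOIN sys.trace_events AS te
             ON te.trace_event_id = tg.EventClass
    WHERE  tg.DatabaseName = N'DeleteMe'
    ORDER BY tg.StartTime;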
You can even determine who dropped the database (loginame is captured). The key is to get the default trace files in a timely manner in order to extract the information. If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Stepping outside Visual Studio IDE [Part 1 of 2] with Eclipse

    - by mbcrump
    “If you're walking down the right path and you're willing to keep walking, eventually you'll make progress." – Barack Obama In my quest to become a better programmer, I’ve decided to start the process of learning Java. I will be primary using the Eclipse Language IDE. I will not bore you with the history just what is needed for a .NET developer to get up and running. I will provide links, screenshots and a few brief code tutorials. Links to documentation. The Official Eclipse FAQ’s Links to binaries. Eclipse IDE for Java EE Developers the Galileo Package (based on Eclipse 3.5 SR2)  Sun Developer Network – Java Eclipse officially recommends Java version 5 (also known as 1.5), although many Eclipse users use the newer version 6 (1.6). That's it, nothing more is required except to compile and run java. Installation Unzip the Eclipse IDE for Java EE Developers and double click the file named Eclipse.exe. You will probably want to create a link for it on your desktop. Once, it’s installed and launched you will have to select a workspace. Just accept the defaults and you will see the following: Lets go ahead and write a simple program. To write a "Hello World" program follow these steps: Start Eclipse. Create a new Java Project: File->New->Project. Select "Java" in the category list. Select "Java Project" in the project list. Click "Next". Enter a project name into the Project name field, for example, "HW Project". Click "Finish" Allow it to open the Java perspective Create a new Java class: Click the "Create a Java Class" button in the toolbar. (This is the icon below "Run" and "Window" with a tooltip that says "New Java Class.") Enter "HW" into the Name field. Click the checkbox indicating that you would like Eclipse to create a "public static void main(String[] args)" method. Click "Finish". A Java editor for HW.java will open. In the main method enter the following line.      System.out.println("This is my first java program and btw Hello World"); Save using ctrl-s. This automatically compiles HW.java. Click the "Run" button in the toolbar (looks like a VCR play button). You will be prompted to create a Launch configuration. Select "Java Application" and click "New". Click "Run" to run the Hello World program. The console will open and display "This is my first java program and btw Hello World". You now have your first java program, lets go ahead and make an applet. Since you already have the HW.java open, click inside the window and remove all code. Now copy/paste the following code snippet. Java Code Snippet for an applet. 1: import java.applet.Applet; 2: import java.awt.Graphics; 3: import java.awt.Color; 4:  5: @SuppressWarnings("serial") 6: public class HelloWorld extends Applet{ 7:  8: String text = "I'm a simple applet"; 9:  10: public void init() { 11: text = "I'm a simple applet"; 12: setBackground(Color.GREEN); 13: } 14:  15: public void start() { 16: System.out.println("starting..."); 17: } 18:  19: public void stop() { 20: System.out.println("stopping..."); 21: } 22:  23: public void destroy() { 24: System.out.println("preparing to unload..."); 25: } 26:  27: public void paint(Graphics g){ 28: System.out.println("Paint"); 29: g.setColor(Color.blue); 30: g.drawRect(0, 0, 31: getSize().width -1, 32: getSize().height -1); 33: g.setColor(Color.black); 34: g.drawString(text, 15, 25); 35: } 36: } The Eclipse IDE should look like Click "Run" to run the Hello World applet. Now, lets test our new java applet. 
So, navigate over to your workspace for example: “C:\Users\mbcrump\workspace\HW Project\bin” and you should see 2 files. HW.class java.policy.applet Create a HTML page with the following code: 1: <HTML> 2: <BODY> 3: <APPLET CODE=HW.class WIDTH=200 HEIGHT=100> 4: </APPLET> 5: </BODY> 6: </HTML> Open, the HTML page in Firefox or IE and you will see your applet running.  I hope this brief look at the Eclipse IDE helps someone get acquainted with Java Development. Even if your full time gig is with .NET, it will not hurt to have another language in your tool belt. As always, I welcome any suggestions or comments.

    Read the article

  • Microsoft BUILD 2013 Day 1&ndash;Keynote

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx This one is going to be a little long because the keynote was jam-packed so bare with me. The keynote for the first day of BUILD 2013 was kicked off by Steve Balmer.  He made it very clear that Microsoft’s focus is on accelerating its time to market with products and product updates.  His quote was that “Rapid release” is the new norm.  He continued by showing off several new Lumias that have been buzzing around the internet for a while and announce that Sprint will now be carrying the HTC 8XT and Samsung ATIV. Balmer is known for repeating words or phrase for affect.  This time it was “Rapid release, rapid release” and “Touch, touch, touch, touch, touch, …”.  This was fun, but even more fun was when he announce that all attendees would receive an Acer Iconia 8” tablet. SCORE! The next subject Balmer focused on is new apps.  The three new ones were Flipboard, Facebook and NFL Fantasy Football.  I liked the first two because these are ones that people coming from other platforms are missing.  The NFL app is great just because it targets a demographic that can be fanatical.  If these types of apps keep coming than the missing app argument goes away. While many Negative Nancy’s are describing Windows 8.1 as Windows 180 Steve Balmer chose to call it a “refined blend” as in a coffee that has been improved with a new mix.  This includes more multi-tasking options and leveraging Bing straight throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community. Steve Balmer was followed by Julie Larson-Green who spent her time on stage selling us on Windows 8 all over again from my point of view.  Something that I would not have thought was needed until I had listened to some other attendees who had a number of concerns and complaints.  She showed a number of new gestures that will come with Windows 8.1, and while they were cool I was left wondering if they really improved the experience.  I guess only time will tell. I did like the fact that it the UI implementation to bring up “All Apps” now mirrors that of Windows Phone.  The consistency is a big step forward that I hope to see continue.  The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the XBox One.  This seamless experience I believe is what is really needed for any future platform to be relevant. I was much more enthused by the presentation of Antoine Leblond who humbled us by letting us know that there are 5k new API.  How that can be or how anyone would ever use all of them is another question.  His announcement was that the Visual Studio 2013 preview would be available today along with the Windows 8.1 bits.  One of the features of VS2013 that he demonstrated is the power consumption profiler.  With battery life being a key factor with consumer consumption devices this is a welcome addition. He didn’t limit his presentation to VS2013 features though.  He showed how the Store has been redesigned to enable better search and discoverability of apps and how Win 8.1 can perform multiple screen scales depending on the resolution of the device automatically.  The last feature he demoed was the real time video streaming API which he made sure we understood by attaching a Surface to a little robot.  Oh, but there was one more thing.  
Antoine and Julie announce that all attendees would also be getting Surface Pros.  BONUS! How much more could there be?  Gurdeep Singh Pall was about to pile on.  He introduced us to Bing as a platform (BaaP?).  He said if they (Microsoft) could do something with and API that is good 3rd party developers can do something that is dynamite and showed us some of the tools they had produced.  These included natural user interface improvements such as voice commands that looked to put Siri to shame.  Add to that 3D, OCR and translation capabilities and the future looks to be full of opportunities. Balmer then came out to show us one last thing.  Project Spark is a game design environment that will be available for Windows 8.1, XBox 360 and XBox One.  All I can say is that if my kids get their hands on this they are going to be able to learn some of what dad does in a much more enjoyable way. At the end of it all I was both exhausted and energized by what I saw.  What could they have possibly left for the day 2 keynote?  I hear it will feature Scott Hanselman.  If that is right we are in for a treat.  See you there. del.icio.us Tags: BUILD 2013,Windows 8.1,Winodws Phone,XAML,Keynote,Bing,Visual Studio 2013,Project Spark

    Read the article

  • Code refactoring with Visual Studio 2010 Part-4

    - by Jalpesh P. Vadgama
    I have been writing few post with code refactoring features in Visual Studio 2010. This post also will be part of series and this post will be last of the series. In this post I am going explain two features 1) Encapsulate Field and 2) Extract Interface. Let’s explore both features in details. Encapsulate Field: This is a nice code refactoring feature provides by Visual Studio 2010. With help of this feature we can create properties from the existing private field of the class. Let’s take a simple example of Customer Class. In that I there are two private field called firstName and lastName. Below is the code for the class. public class Customer { private string firstName; private string lastName; public string Address { get; set; } public string City { get; set; } } Now lets encapsulate first field firstName with Encapsulate feature. So first select that field and goto refactor menu in Visual Studio 2010 and click on Encapsulate Field. Once you click that a dialog box will appear like following. Now once you click OK a preview dialog box will open as we have selected preview reference changes. I think its a good options to check that option to preview code that is being changed by IDE itself. Dialog will look like following. Once you click apply it create a new property called FirstName. Same way I have done for the lastName and now my customer class code look like following. public class Customer { private string firstName; public string FirstName { get { return firstName; } set { firstName = value; } } private string lastName; public string LastName { get { return lastName; } set { lastName = value; } } public string Address { get; set; } public string City { get; set; } } So you can see that its very easy to create properties with existing fields and you don’t have to change anything there in code it will change all the stuff itself. Extract Interface: When you are writing software prototype and You don’t know the future implementation of that then its a good practice to use interface there. I am going to explain here that How we can extract interface from the existing code without writing a single line of code with the help of code refactoring feature of Visual Studio 2010. For that I have create a Simple Repository class called CustomerRepository with three methods like following. public class CustomerRespository { public void Add() { // Some code to add customer } public void Update() { //some code to update customer } public void Delete() { //some code delete customer } } In above class there are three method Add,Update and Delete where we are going to implement some code for each one. Now I want to create a interface which I can use for my other entities in project. So let’s create a interface from the above class with the help of Visual Studio 2010. So first select class and goto refactor menu and click Extract Interface. It will open up dialog box like following. Here I have selected all the method for interface and Once I click OK then it will create a new file called ICustomerRespository where it has created a interface. Just like following. Here is a code for that interface. using System; namespace CodeRefractoring { interface ICustomerRespository { void Add(); void Delete(); void Update(); } } Now let's see the code for the our class. It will also changed like following to implement the interface. 
public class CustomerRespository : ICustomerRespository { public void Add() { // Some code to add customer } public void Update() { //some code to update customer } public void Delete() { //some code to delete customer } } Isn't that great? We have created an interface and implemented it without writing a single line of code. Hope you liked it. Stay tuned for more. Till then, Happy Programming.
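As a small footnote of my own to the Extract Interface example: the payoff comes when calling code depends on the extracted ICustomerRespository rather than the concrete class, which makes it easy to swap in another implementation or a fake for unit testing. A minimal sketch (the CustomerService class below is hypothetical, not from the post):
    public class CustomerService
    {
        private readonly ICustomerRespository repository;

        public CustomerService(ICustomerRespository repository)
        {
            // depend only on the extracted interface, not on CustomerRespository itself
            this.repository = repository;
        }

        public void RegisterCustomer()
        {
            // validation or business logic could go here
            repository.Add();
        }
    }

    // usage: the concrete repository (or a test double) is passed in from outside
    var service = new CustomerService(new CustomerRespository());
    service.RegisterCustomer();
Nothing in CustomerService needs to change if the repository implementation is replaced later, which is exactly why extracting the interface early is worthwhile.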

    Read the article

  • My error with upgrading 4.0 to 4.2- What NOT to do...

    - by Steve Tunstall
    Last week, I was helping a client upgrade from the 2011.1.4.0 code to the newest 2011.1.4.2 code. We downloaded the 4.2 update from MOS, upload and unpacked it on both controllers, and upgraded one of the controllers in the cluster with no issues at all. As this was a brand-new system with no networking or pools made on it yet, there were not any resources to fail back and forth between the controllers. Each controller had it's own, private, management interface (igb0 and igb1) and that's it. So we took controller 1 as the passive controller and upgraded it first. The first controller came back up with no issues and was now on the 4.2 code. Great. We then did a takeover on controller 1, making it the active head (although there were no resources for it to take), and then proceeded to upgrade controller 2. Upon upgrading the second controller, we ran the health check with no issues. We then ran the update and it ran and rebooted normally. However, something strange then happened. It took longer than normal to come back up, and when it did, we got the "cluster controllers on different code" error message that one gets when the two controllers of a cluster are running different code. But we just upgraded the second controller to 4.2, so they should have been the same, right??? Going into the Maintenance-->System screen of controller 2, we saw something very strange. The "current version" was still on 4.0, and the 4.2 code was there but was in the "previous" state with the rollback icon, as if it was the OLDER code and not the newer code. I have never seen this happen before. I would have thought it was a bad 4.2 code file, but it worked just fine with controller 1, so I don't think that was it. Other than the fact the code did not update, there was nothing else going on with this system. It had no yellow lights, no errors in the Problems section, and no errors in any of the logs. It was just out of the box a few hours ago, and didn't even have a storage pool yet. So.... We deleted the 4.2 code, uploaded it from scratch, ran the health check, and ran the upgrade again. once again, it seemed to go great, rebooted, and came back up to the same issue, where it came to 4.0 instead of 4.2. See the picture below.... HERE IS WHERE I MADE A BIG MISTAKE.... I SHOULD have instantly called support and opened a Sev 2 ticket. They could have done a shared shell and gotten the correct Fishwork engineer to look at the files and the code and determine what file was messed up and fixed it. The system was up and working just fine, it was just on an older code version, not really a huge problem at all. Instead, I went ahead and clicked the "Rollback" icon, thinking that the system would rollback to the 4.2 code.   Ouch... What happened was that the system said, "Fine, I will delete the 4.0 code and boot to your 4.2 code"... Which was stupid on my part because something was wrong with the 4.2 code file here and the 4.0 was just fine.  So now the system could not boot at all, and the 4.0 code was completely missing from the system, and even a high-level Fishworks engineer could not help us. I had messed it up good. We could only get to the ILOM, and I had to re-image the system from scratch using a hard-to-get-and-use FishStick USB drive. These are tightly controlled and difficult to get, almost always handcuffed to an engineer who will drive out to re-image a system. This took another day of my client's time.  So.... 
If you see a "previous version" of your system code which is actually a version higher than the current version... DO NOT ROLL IT BACK.... It did not upgrade for a very good reason. In my case, after the system was re-imaged to a code level just 3 back, we once again tried the same 4.2 code update and it worked perfectly the first time and is now great and stable.  Lesson learned.  By the way, our buddy Ryan Matthews wanted to point out the best practice and supported way of performing an upgrade of an active/active ZFSSA, where both controllers are doing some of the work. These steps would not have helped me for the above issue, but it's important to follow the correct procedure when doing an upgrade.
1) Upload software to both controllers and wait for it to unpack
2) On controller "A" navigate to configuration/cluster and click "takeover"
3) Wait for controller "B" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software.
4) Wait for controller "B" to apply the update and finish rebooting
5) Log in to controller "B", navigate to configuration/cluster and click "takeover"
6) Wait for controller "A" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software.
7) Wait for controller "A" to apply the update and finish rebooting
8) Log in to controller "B", navigate to configuration/cluster and click "failback"

    Read the article

  • NDepend Evaluation: Part 3

    - by Anthony Trudeau
    NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high code quality. NDepend uses a number of metrics and aggregates the data in pleasing static and active visual reports. My evaluation of NDepend will be broken up into several different parts. In the first part of the evaluation I looked at installing the add-in.  And in the last part I went over my first impressions including an overview of the features.  In this installment I provide a little more detail on a few of the features that I really like. Dependency Matrix The dependency matrix is one of the rich visual components provided with NDepend.  At a glance it lets you know where you have coupling problems including cycles.  It does this with number indicating the weight of the dependency and a color-coding that indicates the nature of the dependency. Green and blue cells are direct dependencies (with the difference being whether the relationship is from row-to-column or column-to-row).  Black cells are the ones that you really want to know about.  These indicate that you have a cycle.  That is, type A refers to type B and type B also refers to Type A. But, that’s not the end of the story.  A handy pop-up appears when you hover over the cell in question.  It explains the color, the dependency, and provides several interesting links that will teach you more than you want to know about the dependency. You can double-click the problem cells to explode the dependency.  That will show the dependencies on a method-by-method basis allowing you to more easily target and fix the problem.  When you’re done you can click the back button on the toolbar. Dependency Graph The dependency graph is another component provided.  It’s complementary to the dependency matrix, but it isn’t as easy to identify dependency issues using the window. On a positive note, it does provide more information than the matrix. My biggest issue with the dependency graph is determining what is shown.  This was not readily obvious.  I ended up using the navigation buttons to get an acceptable view.  I would have liked to choose what I see. Once you see the types you want you can get a decent idea of coupling strength based on the width of the dependency lines.  Double-arrowed lines are problematic and are shown in red.  The size of the boxes will be related to the metric being displayed.  This is controlled using the Box Size drop-down in the toolbar.  Personally, I don’t find the size of the box to be helpful, so I change it to Constant Font. One nice thing about the display is that you can see the entire path of dependencies when you hover over a type.  This is done by color-coding the dependencies and dependants.  It would be nice if selecting the box for the type would lock the highlighting in place. I did find a perhaps unintended work-around to the color-coding.  You can lock the color-coding in by hovering over the type, right-clicking, and then clicking on the canvas area to clear the pop-up menu.  You can then do whatever with it including saving it to an image file with the color-coding. CQL NDepend uses a code query language (CQL) to work with your code just like it was a database.  CQL cannot be confused with the robustness of T-SQL or even LINQ, but it represents an impressive attempt at providing an expressive way to enumerate and interrogate your code. There are two main windows you’ll use when working with CQL.  
The CQL Query Explorer allows you to define what queries (rules) are run as part of a report – I immediately unselected rules that I don’t want in my results.  The CQL Query Edit window is where you can view or author your own rules.  The explorer window is pretty self-explanatory, so I won’t mention it further other than to say that any queries you author will appear in the custom group. Authoring your own queries is really hard to screw-up.  The Intellisense-like pop-ups tell you what you can do while making composition easy.  I was able to create a query within two minutes of playing with the editor.  My query warns if any types that are interfaces don’t start with an “I”. WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike “I” The results from the CQL Query Edit window are immediate. That fact makes it useful for ad hoc querying.  It’s worth mentioning two things that could make the experience smoother.  First, out of habit from using Visual Studio I expect to be able to scroll and press Tab to select an item in the list (like Intellisense).  You have to press Enter when you scroll to the item you want.  Second, the commands are case-sensitive.  I don’t see a really good reason to enforce that. CQL has a lot of potential not just in enforcing code quality, but also enforcing architectural constraints that your enterprise has defined. Up Next My next update will be the final part of the evaluation.  I will summarize my experience and provide my conclusions on the NDepend add-in. ** View Part 1 of the Evaluation ** ** View Part 2 of the Evaluation ** Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Read the article

  • Upgrading to Code Based Migrations EF 4.3.1 with Connector/Net 6.6

    - by GABMARTINEZ
    Entity Framework 4.3.1 includes a new feature called code first migrations.  We are adding support for this feature in our upcoming 6.6 release of Connector/Net.  In this walk-through we'll see the workflow of code-based migrations when you have an existing application and you would like to upgrade to this EF 4.3.1 version and use this approach, so you can keep track of the changes that you do to your database.   The first thing we need to do is add the new Entity Framework 4.3.1 package to our application. This should via the NuGet package manager.  You can read more about why EF is not part of the .NET framework here. Adding EF 4.3.1 to our existing application  Inside VS 2010 go to Tools -> Library Package Manager -> Package Manager Console, this will open the Power Shell Host Window where we can work with all the EF commands. In order to install this library to your existing application you should type Install-Package EntityFramework This will make some changes to your application. So Let's check them. In your .config file you'll see a  <configSections> which contains the version you have from EntityFramework and also was added the <entityFramework> section as shown below. This section is by default configured to use SQL Express which won't be necesary for this case. So you can comment it out or leave it empty. Also please make sure you're using the Connector/Net 6.6.x version which is the one that has this support as is shown in the previous image. At this point we face one issue; in order to be able to work with Migrations we need the __MigrationHistory table that we don't have yet since our Database was created with an older version. This table is used to keep track of the changes in our model. So we need to get it in our existing Database. Getting a Migration-History table into an existing database First thing we need to do to enable migrations in our existing application is to create our configuration class which will set up the MySqlClient Provider as our SQL Generator. So we have to add it with the following code: using System.Data.Entity.Migrations;     //add this at the top of your cs file public class Configuration : DbMigrationsConfiguration<NameOfYourDbContext>  //Make sure to use the name of your existing DBContext { public Configuration() { this.AutomaticMigrationsEnabled = false; //Set Automatic migrations to false since we'll be applying the migrations manually for this case. SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());     }   }  This code will set up our configuration that we'll be using when executing all the migrations for our application. Once we have done this we can Build our application so we can check that everything is fine. Creating our Initial Migration Now let's add our Initial Migration. In Package Manager Console, execute "add-migration InitialCreate", you can use any other name but I like to set this as our initial create for future reference. After we run this command, some changes were done in our application: A new Migrations Folder was created. A new class migration call InitialCreate which in most of the cases should have empty Up and Down methods as long as your database is up to date with your Model. Since all your entities already exists, delete all duplicated code to create any entity which exists already in your Database if there is any. I found this easier when you don't have any pending updates to do to your database. 
Now we have our empty migration that will make no changes in our database and represents how everything stands at the beginning of our migrations.  Finally, let's create our __MigrationHistory table. Optionally you can add SQL code to delete the EdmMetadata table, which is not needed anymore. public override void Up() { // Just make sure that you used 4.1 or later version         Sql("DROP TABLE EdmMetadata"); } From our Package Manager Console let's type: Update-database; If you'd like to see the operations made on each Update-database command you can use the flag -verbose after the Update-database. This will make two important changes.  First, it will execute the Up method in the initial migration, which makes no changes in the database. And second, and very important, it will create the __MigrationHistory table necessary to keep track of your changes. The next time you make a change to your database it will compare the current model to the one stored in the Model column of this table. Conclusion The important point of this walk-through is that we must create our initial migration before we start making any changes to our model. This way we'll be adding the necessary __MigrationHistory table to our existing database, so we can keep our database up to date with all the changes we do in our context model using migrations. Hope you have found this information useful. Please let us know if you have any questions or comments, and also please check our forums here where we keep answering questions for the community.  Happy MySQL/Net Coding!
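As a postscript to the walk-through, here is roughly what a later, non-empty migration would look like once this setup is in place. The class, table and column names below are made up for illustration; this is the kind of scaffolding that "add-migration AddCustomerEmail" would generate after adding, say, an Email property to a Customer entity:
    public partial class AddCustomerEmail : DbMigration
    {
        public override void Up()
        {
            // adds the new column to the existing MySQL table
            AddColumn("Customers", "Email", c => c.String(maxLength: 255));
        }

        public override void Down()
        {
            // keeps the migration reversible, so you can move back with Update-Database -TargetMigration
            DropColumn("Customers", "Email");
        }
    }
Running Update-Database afterwards would apply the Up method against the MySQL database and record the new migration in the __MigrationHistory table, exactly as described above.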

    Read the article

  • Take Two: Comparing JVMs on ARM/Linux

    - by user12608080
    Although the intent of the previous article, entitled Comparing JVMs on ARM/Linux, was to introduce and highlight the availability of the HotSpot server compiler (referred to as c2) for Java SE-Embedded ARM v7,  it seems, based on feedback, that everyone was more interested in the OpenJDK comparisons to Java SE-E.  In fact there were two main concerns: The fact that the previous article compared Java SE-E 7 against OpenJDK 6 might be construed as an unlevel playing field because version 7 is newer and therefore potentially more optimized. That the generic compiler settings chosen to build the OpenJDK implementations did not put those versions in a particularly favorable light. With those considerations in mind, we'll institute the following changes to this version of the benchmarking: In order to help alleviate an additional concern that there is some sort of benchmark bias, we'll use a different suite, called DaCapo.  Funded and supported by many prestigious organizations, DaCapo's aim is to benchmark real world applications.  Further information about DaCapo can be found at http://dacapobench.org. At the suggestion of Xerxes Ranby, who has been a great help through this entire exercise, a newer Linux distribution will be used to assure that the OpenJDK implementations were built with more optimal compiler settings.  The Linux distribution in this instance is Ubuntu 11.10 Oneiric Ocelot. Having experienced difficulties getting Ubuntu 11.10 to run on the original D2Plug ARMv7 platform, for these benchmarks, we'll switch to an embedded system that has a supported Ubuntu 11.10 release.  That platform is the Freescale i.MX53 Quick Start Board.  It has an ARMv7 Coretex-A8 processor running at 1GHz with 1GB RAM. We'll limit comparisons to 4 JVM implementations: Java SE-E 7 Update 2 c1 compiler (default) Java SE-E 6 Update 30 (c1 compiler is the only option) OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 CACAO build 1.1.0pre2 OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 JamVM build-1.6.0-devel Certain OpenJDK implementations were eliminated from this round of testing for the simple reason that their performance was not competitive.  The Java SE 7u2 c2 compiler was also removed because although quite respectable, it did not perform as well as the c1 compilers.  Recall that c2 works optimally in long-lived situations.  Many of these benchmarks completed in a relatively short period of time.  To get a feel for where c2 shines, take a look at the first chart in this blog. The first chart that follows includes performance of all benchmark runs on all platforms.  Later on we'll look more at individual tests.  In all runs, smaller means faster.  The DaCapo aficionado may notice that only 10 of the 14 DaCapo tests for this version were executed.  The reason for this is that these 10 tests represent the only ones successfully completed by all 4 JVMs.  Only the Java SE-E 6u30 could successfully run all of the tests.  Both OpenJDK instances not only failed to complete certain tests, but also experienced VM aborts too. One of the first observations that can be made between Java SE-E 6 and 7 is that, for all intents and purposes, they are on par with regards to performance.  While it is a fact that successive Java SE releases add additional optimizations, it is also true that Java SE 7 introduces additional complexity to the Java platform thus balancing out any potential performance gains at this point.  We are still early into Java SE 7.  
We would expect further performance enhancements for Java SE-E 7 in future updates. In comparing Java SE-E to OpenJDK performance, among both OpenJDK VMs, Cacao results are respectable in 4 of the 10 tests.  The charts that follow show the individual results of those four tests.  Both Java SE-E versions do win every test and outperform Cacao in the range of 9% to 55%. For the remaining 6 tests, Java SE-E significantly outperforms Cacao in the range of 114% to 311% So it looks like OpenJDK results are mixed for this round of benchmarks.  In some cases, performance looks to have improved.  But in a majority of instances, OpenJDK still lags behind Java SE-Embedded considerably. Time to put on my asbestos suit.  Let the flames begin...

    Read the article

  • Inside BackgroundWorker

    - by João Angelo
    The BackgroundWorker is a reusable component that can be used in different contexts, but sometimes with unexpected results. If you are like me, you have mostly used background workers while doing Windows Forms development due to the flexibility they offer for running a background task. They support cancellation and give events that signal progress updates and task completion. When used in Windows Forms, these events (ProgressChanged and RunWorkerCompleted) get executed back on the UI thread where you can freely access your form controls. However, the logic of the progress changed and worker completed events being invoked in the thread that started the background worker is not something you get directly from the BackgroundWorker, but instead from the fact that you are running in the context of Windows Forms. Take the following example that illustrates the use of a worker in three different scenarios: – Console Application or Windows Service; – Windows Forms; – WPF. using System; using System.ComponentModel; using System.Threading; using System.Windows.Forms; using System.Windows.Threading; class Program { static AutoResetEvent Synch = new AutoResetEvent(false); static void Main() { var bw1 = new BackgroundWorker(); var bw2 = new BackgroundWorker(); var bw3 = new BackgroundWorker(); Console.WriteLine("DEFAULT"); var unspecializedThread = new Thread(() => { OutputCaller(1); SynchronizationContext.SetSynchronizationContext( new SynchronizationContext()); bw1.DoWork += (sender, e) => OutputWork(1); bw1.RunWorkerCompleted += (sender, e) => OutputCompleted(1); // Uses default SynchronizationContext bw1.RunWorkerAsync(); }); unspecializedThread.IsBackground = true; unspecializedThread.Start(); Synch.WaitOne(); Console.WriteLine(); Console.WriteLine("WINDOWS FORMS"); var windowsFormsThread = new Thread(() => { OutputCaller(2); SynchronizationContext.SetSynchronizationContext( new WindowsFormsSynchronizationContext()); bw2.DoWork += (sender, e) => OutputWork(2); bw2.RunWorkerCompleted += (sender, e) => OutputCompleted(2); // Uses WindowsFormsSynchronizationContext bw2.RunWorkerAsync(); Application.Run(); }); windowsFormsThread.IsBackground = true; windowsFormsThread.SetApartmentState(ApartmentState.STA); windowsFormsThread.Start(); Synch.WaitOne(); Console.WriteLine(); Console.WriteLine("WPF"); var wpfThread = new Thread(() => { OutputCaller(3); SynchronizationContext.SetSynchronizationContext( new DispatcherSynchronizationContext()); bw3.DoWork += (sender, e) => OutputWork(3); bw3.RunWorkerCompleted += (sender, e) => OutputCompleted(3); // Uses DispatcherSynchronizationContext bw3.RunWorkerAsync(); Dispatcher.Run(); }); wpfThread.IsBackground = true; wpfThread.SetApartmentState(ApartmentState.STA); wpfThread.Start(); Synch.WaitOne(); } static void OutputCaller(int workerId) { Console.WriteLine( "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}", workerId, "RunWorkerAsync".PadRight(18), Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.IsThreadPoolThread); } static void OutputWork(int workerId) { Console.WriteLine( "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}", workerId, "DoWork".PadRight(18), Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.IsThreadPoolThread); } static void OutputCompleted(int workerId) { Console.WriteLine( "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}", workerId, "RunWorkerCompleted".PadRight(18), Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.IsThreadPoolThread); Synch.Set(); } } Output: //DEFAULT //bw1.RunWorkerAsync | Thread: 3 | IsThreadPool: 
False //bw1.DoWork | Thread: 4 | IsThreadPool: True //bw1.RunWorkerCompleted | Thread: 5 | IsThreadPool: True //WINDOWS FORMS //bw2.RunWorkerAsync | Thread: 6 | IsThreadPool: False //bw2.DoWork | Thread: 5 | IsThreadPool: True //bw2.RunWorkerCompleted | Thread: 6 | IsThreadPool: False //WPF //bw3.RunWorkerAsync | Thread: 7 | IsThreadPool: False //bw3.DoWork | Thread: 5 | IsThreadPool: True //bw3.RunWorkerCompleted | Thread: 7 | IsThreadPool: False As you can see the output between the first and remaining scenarios is somewhat different. While in Windows Forms and WPF the worker completed event runs on the thread that called RunWorkerAsync, in the first scenario the same event runs on any thread available in the thread pool. Another scenario where you can get the first behavior, even when on Windows Forms or WPF, is if you chain the creation of background workers, that is, you create a second worker in the DoWork event handler of an already running worker. Since the DoWork executes in a thread from the pool the second worker will use the default synchronization context and the completed event will not run in the UI thread.
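To make that last point concrete, here is a small sketch of my own (assuming a Windows Forms context) showing the chained-worker pitfall and the simplest way around it: start the second worker from the first worker's RunWorkerCompleted handler, which already runs back on the UI thread, so the second worker captures the WindowsFormsSynchronizationContext again:
    var outer = new BackgroundWorker();
    var inner = new BackgroundWorker();

    outer.DoWork += (s, e) =>
    {
        // runs on a thread-pool thread; starting 'inner' from here would make it capture
        // the default SynchronizationContext, so its RunWorkerCompleted would also
        // end up on the thread pool instead of the UI thread
    };

    outer.RunWorkerCompleted += (s, e) =>
    {
        // back on the UI thread: starting the second worker from here means its
        // completed event is marshalled to the UI thread as expected
        inner.DoWork += (s2, e2) => { /* more background work */ };
        inner.RunWorkerCompleted += (s2, e2) => { /* safe to touch controls here */ };
        inner.RunWorkerAsync();
    };

    outer.RunWorkerAsync();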

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time. When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required. As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 108K| 939 || 1 | SORT ORDER BY | | 108K| 939 || 2 | NESTED LOOPS OUTER | | 108K| 938 ||* 3 | HASH JOIN RIGHT OUTER | | 103K| 762 || 4 | VIEW | ALL_EXTERNAL_LOCATIONS | 2058 | 3 ||* 20 | HASH JOIN RIGHT OUTER | | 73472 | 759 || 21 | VIEW | ALL_EXTERNAL_TABLES | 2097 | 3 ||* 34 | HASH JOIN RIGHT OUTER | | 39920 | 755 || 35 | VIEW | ALL_MVIEWS | 51 | 7 || 58 | NESTED LOOPS OUTER | | 39104 | 748 || 59 | VIEW | ALL_TABLES | 6704 | 668 || 89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS | 2025 | 5 || 106 | VIEW | ALL_PART_TABLES | 277 | 11 |------------------------------------------------------------------------------- And the same query on 9i: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 16P| 55G|| 1 | SORT ORDER BY | | 16P| 55G|| 2 | NESTED LOOPS OUTER | | 16P| 862M|| 3 | NESTED LOOPS OUTER | | 5251G| 992K|| 4 | NESTED LOOPS OUTER | | 4243M| 2578 || 5 | NESTED LOOPS OUTER | | 2669K| 1440 ||* 6 | HASH JOIN OUTER | | 398K| 302 || 7 | VIEW | ALL_TABLES | 342K| 276 || 29 | VIEW | ALL_MVIEWS | 51 | 20 ||* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS | 2043 | ||* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES | 1777K| ||* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K| ||* 96 | VIEW | ALL_PART_TABLES | 852K| |------------------------------------------------------------------------------- Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan. 
On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 7587K| 3704 || 1 | SORT ORDER BY | | 7587K| 3704 ||* 2 | HASH JOIN OUTER | | 7587K| 822 ||* 3 | HASH JOIN OUTER | | 5262K| 616 ||* 4 | HASH JOIN OUTER | | 2980K| 465 ||* 5 | HASH JOIN OUTER | | 710K| 432 ||* 6 | HASH JOIN OUTER | | 398K| 302 || 7 | VIEW | ALL_TABLES | 342K| 276 || 29 | VIEW | ALL_MVIEWS | 51 | 20 || 50 | VIEW | ALL_PART_TABLES | 852K| 104 || 78 | VIEW | ALL_TAB_COMMENTS | 2043 | 14 || 93 | VIEW | ALL_EXTERNAL_LOCATIONS | 1744K| 31 || 106 | VIEW | ALL_EXTERNAL_TABLES | 1777K| 28 |------------------------------------------------------------------------------- That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input. To try and get some ideas, we asked some oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure & a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed. All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need. 
    This approach allows us to achieve a database population time that is as fast as on 10g, and in some situations even slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for a schema with 10,000 tables, that means an extra 1.4MB of memory used during population). Next: fun with the 9i dictionary views.

    Read the article

  • WMemoryProfiler is Released

    - by Alois Kraus
    What is it? WMemoryProfiler is a managed profiling API to aid integration testing. This free library can get managed heap statistics and memory usage for your own process (remember: testing) and for other processes as well. The best thing is that it works from .NET 2.0 up to .NET 4.5, in x86 and x64. To make it more interesting, it can attach to any running .NET process. I mention this because commercial profilers support that functionality only in their professional editions, and normally only for .NET 4.0 and later, since only from .NET 4.0 onwards does the profiling API support attaching to a running process. WMemoryProfiler differs in many respects from "normal" profilers because, while profiling yourself, you can get all objects from all managed heaps back as an object array. If you ever wanted to change the state of an object which exists only as a method local in another thread, you can get your hands on it now … Enough theory. Show me some code:

        /// <summary>
        /// Show feature to not only get statistics out of a process but also the newly allocated
        /// instances since the last call to MarkCurrentObjects.
        /// GetNewObjects returns the newly allocated objects as an object array.
        /// </summary>
        static void InstanceTracking()
        {
            using (var dumper = new MemoryDumper()) // if you have problems, use new MemoryDumper(true, true) to see the debugger windows
            {
                dumper.MarkCurrentObjects();
                Allocate();
                ILookup<Type, object> newObjects = dumper.GetNewObjects()
                                                         .ToLookup(x => x.GetType());
                Console.WriteLine("New Strings:");
                foreach (var newStr in newObjects[typeof(string)])
                {
                    Console.WriteLine("Str: {0}", newStr);
                }
            }
        }

    …
    New Strings:
    Str: qqd
    Str: String data:
    Str: String data: 0
    Str: String data: 1
    …

    This is really hot stuff. Not only can you get heap statistics, but you can directly examine the new objects and run queries over them. When I find more time I will reconstruct the object root graph from it within my own process. Is this cool or what? You can also peek into the finalization queue to check whether you accidentally forgot to dispose a whole bunch of objects …

        /// <summary>
        /// .NET 4.0 or above only. Get all finalizable objects which are ready for finalization
        /// and have no other object roots anymore.
        /// </summary>
        static void NotYetFinalizedObjects()
        {
            using (var dumper = new MemoryDumper())
            {
                object[] finalizable = dumper.GetObjectsReadyForFinalization();
                Console.WriteLine("Currently {0} objects of types {1} are ready for finalization. Consider disposing them before.",
                                  finalizable.Length,
                                  String.Join(",", finalizable.ToLookup(x => x.GetType())
                                                              .Select(x => x.Key.Name)));
            }
        }

    How does it work? The W in WMemoryProfiler is a good hint. It employs Windbg and SOS.dll to do the heavy lifting, and concentrates on an easy-to-use API which hides Windbg completely. If you do not want to see Windbg, you will never see it. In my experience the most complex thing is actually downloading Windbg from the Windows 8 Standalone SDK. This is described in the Readme, and in much greater detail in the exception you are greeted with if the SDK is missing, so I will not go into it here.

    What Next? Depending on the feedback I get, I can imagine some features which might be useful as well:
    - Calculate first-order GC roots from the actual object graph
    - Identify global statics in types in the object graph
    - Support reading out the finalization queue on .NET 2.0 as well
    - Support memory dump analysis (again, a feature supported by commercial profilers only in their professional editions, if it is supported at all)
    - Deserialize objects from a memory dump back into a live process (this would need some more investigation, but it is doable)

    The last item needs some explanation. Why on earth would you want to do that? The basic idea is to store in your live process some logging/tracing data which can become quite big, but since it is never written out it is very fast to generate. When your process crashes with a memory dump, you could transfer this data structure back into a live viewer which can then nicely display your program state at the point it crashed. This is an advanced troubleshooting technique I have not seen anywhere yet, but it could be quite useful. You can have a look here at the current feature list of WMemoryProfiler with some examples.

    How To Get Started? First I would download the released source package (it is tiny) and compile the complete project. Then you can compile the Example project (it has exactly this name) and uncomment in the Main method the scenario you want to check out. If you are greeted with an exception, it is time to install the Windows 8 Standalone SDK, which is described in great detail in the exception text. That's it for the first round. I saw something more limited in the Java world some years ago (I cannot find the link anymore), but anyway: now we have something much better.
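    Since the stated aim is aiding integration testing, here is one way the calls shown above might be wrapped into a simple leak assertion. Only MemoryDumper, MarkCurrentObjects and GetNewObjects are taken from the samples in this post; the helper name, the GC calls and the exception are my own sketch, not part of the library.

        using System;
        using System.Linq;
        // plus a reference to the WMemoryProfiler assembly that exposes MemoryDumper (see the samples above)

        static class LeakCheck
        {
            // Sketch of a leak assertion for integration tests: fail if the action
            // leaves new instances of T behind on the managed heap.
            // A real test would also allow for expected, legitimate allocations.
            public static void AssertNoNewInstancesOf<T>(Action action)
            {
                using (var dumper = new MemoryDumper())
                {
                    dumper.MarkCurrentObjects();           // baseline: everything allocated so far
                    action();                              // run the code under test

                    GC.Collect();                          // give short-lived garbage a chance to go away
                    GC.WaitForPendingFinalizers();
                    GC.Collect();

                    int survivors = dumper.GetNewObjects() // objects allocated since the baseline
                                          .Count(o => o is T);
                    if (survivors > 0)
                        throw new InvalidOperationException(string.Format(
                            "{0} new instance(s) of {1} survived the scenario - possible leak.",
                            survivors, typeof(T).Name));
                }
            }
        }

    A test would then call something like LeakCheck.AssertNoNewInstancesOf<MyCacheEntry>(() => RunScenario()), where MyCacheEntry and RunScenario are of course placeholders for the type and code under test.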

    Read the article
