Search Results

Search found 3673 results on 147 pages for 'architectures and technologies'.


  • If You Could Cut Your Meeting Times in ½ Would You?

    - by Brian Dayton
                    I know it sounds like a big promise. And what I'm thinking about may not cut a :60 minute meeting into :30 minutes, but it could make meetings and interactions up to 2X more productive. How? Social Media for the Enterprise, Not Social Media In the Enterprise Bear with me. I'm not talking about whether or not workers should or shouldn't have access to Facebook on corporate networks. That topic has been discussed @ length. I'm also not talking about the direct benefits of Social Networking tools like Presence (the ability to see someone online and ask a question in real-time), blogs, RSS feeds or external tools like Twitter. The Un-Measurable Benefits Would you do something that you believe will have a positive effect--but can't be measured? It's impossible to quantify the effectiveness of a meeting. However, what I am talking about would be more of a byproduct of all of the social networking tools above. Here's the hypothesis: As I've gotten more and more busy with work, family, travel and kids--and the same has happened to my friends and family--I'm less and less connected. But by introducing Facebook to my life I've not only made connections with longtime friends whom I haven't spoken to in years--but I've increased the pace and quality of interactions, on and offline, with close friends who I see and speak to every week. In some cases it even enhances the connections and interactions with those I see or speak to every day. The same holds true in an organization. Especially a larger one with highly matrixed organizational structures. You work with people on a project, new people come in with each different project and a disproportionate amount of time is spent getting oriented and staying current. Going back to the initial value proposition--making meetings shorter/more effective--a large amount of time is spent: -          At Project Kick-off: Meeting and understanding team member's histories, goals & roles -          Ongoing: Summarizing events since the last meeting or update email In my personal, Facebook life today I know that: -          My best friend from college - has been stranded in India for 5 days because of the volcano in Iceland and is now only 250 miles from home -          One of my co-workers started conference calls at 6:30 this morning -          My wife wasn't terribly pleased with my painting skills in our new bathroom (disclosure: she told me this face to face too) Strengthening Weak Links A recent article in CIO Magazine, Three Dangerous Social Media Misconceptions (Kristen Burnham, March 12, 2010) calls out the #1 misconception as follows: 1. "Face-to-face relationships are far more valuable than virtual ones." While some level of physical interaction will always add value to relationships, Gartner says that come 2020, most relationships and teams will be based on "weak links"--that is, you may not have personally met a contact, but you'll know of or may have interacted with him via social sites like Facebook, LinkedIn and Twitter. The sooner your enterprise adopts these tools, the sooner your employees will learn them, and the sooner you'll begin to cultivate these relationships-of-the-future.   I personally believe that it's not an either/or choice between face-to-face and virtual interactions. In fact, I'll be as bold as saying it doesn't matter. 
I can point to two extremely valuable work relationships that I've had over the past 5 years: -          I shared an office with one of them -          I met the other person, face-to-face, only once Both relationships were very productive. The dynamics were similar. The communication tactics differed immensely. What does matter is the quality, frequency and relevance of interactions. Still sound like too much? An over-promise? Stay tuned for my next post The Gap Between Facebook and LinkedIn. I'll also connect some of the dots with where Oracle Applications and technologies are headed.        

    Read the article

  • Session Report: What’s New in JSF: A Complete Tour of JSF 2.2

    - by Janice J. Heiss
    On Wednesday, Ed Burns, Consulting Staff Member at Oracle, presented session CON3870 -- "What's New in JSF: A Complete Tour of JSF 2.2" -- in which he provided an update on recent developments in JavaServer Faces 2.2. He began by emphasizing that, "JavaServer Faces 2.2 continues the evolution of the Java EE standard user interface technology. Like previous releases, this iteration is very community-driven and transparent." He pointed out that since JSF was introduced at the 2001 JavaOne keynote, it has had a long and successful run and has found a home in applications where the UI logic resides entirely on the server, alongside the model. In such cases, the browser performs fairly simple functions. However, developers can take advantage of the power of browsers -- something Project Avatar focuses on -- by authoring their applications so the UI logic runs on the client and communicates with the back end via RESTful web services.

    "Most importantly," remarked Burns, "JSF 2.2 offers a really good migration path because even in the scope of one application you could have an app written with JSF that has its UI logic on the server and, on a gradual basis, you could migrate parts of the app over to use client-side technologies. This can be done at any level of granularity -- per page or per collection of pages. It all depends on what you want to do."

    His presentation, which focused on the basic new features of JSF 2.2, began by restating the scope of JSF, and he encouraged attendees to check out Roger Kitain's session, CON5133, "Techniques for Responsive Real-Time Web UIs." Burns explained that JSF has endured because, "We still need web apps that are maintainable, localizable, quick to build, accessible, secure, look great and are fun to use." It is used on every continent -- the curious can check out where its unofficial usage is tracked. He emphasized the significance of the UI logic being substantially on the server. This:
    * Separates component semantics from rendering,
    * Allows components to "own" their little patch of the UI (encode/decode),
    * Offers a well-defined lifecycle: inversion of control.

    Burns reminded attendees that JSR-344, the spec for JSF 2.2, is now on Java Community Process 2.8, a revised version of the JCP that allows for more openness and transparency. He then offered some tools for community access to JSF 2.2:
    * Public java.net projects: spec http://jsf-spec.java.net/, impl http://jsf.java.net/ (open source: GPL + Classpath Exception)
    * Mailing lists: [email protected] (publicly readable archive; JSPA-signed members can read/write) and [email protected] (publicly readable archive; any java.net member can read/write). All mail sent to jsr344-experts is also sent to users.
    * Issue trackers: spec http://jsf-spec.java.net/issues/, impl http://jsf.java.net/issues/

    JSF 2.2, which is JSR 344, has a Public Review Draft planned by December 2012 with no need for a Renewal Ballot. The Early Draft Review of JSR 344 was published on December 8, 2011. Interested developers are encouraged to offer their input.

    Six Big Ticket Features of JSF 2.2

    Burns summarized the six big ticket features of JSF 2.2:
    * HTML5 friendly markup support (pass-through attributes and elements)
    * Faces Flows
    * Cross-site request forgery protection
    * Loading Facelets via ResourceHandler
    * File upload component
    * Multi-templating

    He explained that he called it "HTML5 friendly" because there is really nothing HTML5-specific about it -- it could just as well be HTML 4. But it enables developers to use new elements that are present in HTML5 without needing a JSF component library written specifically to take advantage of them. It gives the page author the ability to write the page in plain HTML5 while still taking advantage of the server-side facilities available in JSF. He presented a demo showing JSF 2.2's ability to leverage the expressiveness of HTML5.

    Burns then explained the significance of Faces Flows, which offer function points and quantify how much work has taken place, something of great value to JSF users. He went on to talk about JSF 2.2's cross-site request forgery (CSRF) protection and offered details about how it protects applications against attack. Then he talked about JSF 2.2's file upload component and explained that the final specification will have Ajax and non-Ajax support; the current milestone has non-Ajax support implemented. He then went on to explain the ability to load Facelets through ResourceHandler: previously, JSF 2.0 added Facelets and ResourceHandler as disparate units, but in JSF 2.2 the two concepts are unified. Finally, he explained the concept of multi-templating in JSF 2.2 and went on to discuss more medium-level features of the release.

    For an easy, low-maintenance way of staying in touch with JSF developments, go to JSF's Twitter page, where important updates are posted every month or so.

    Read the article

  • Non-Dom Element Event Binding with jQuery

    - by Rick Strahl
    Yesterday I had a short discussion with Dave Reed on Twitter regarding setting up fake 'events' on objects that are hookable. jQuery makes it really easy to bind events on DOM elements, and with a little bit of extra work (that I didn't know about) you can also set up bindings for non-DOM element 'events'.

    Assume for a second that you have a simple JavaScript object like this:

        var item = {
            sku: "wwhelp",
            foo: function() { alert('original foo function'); }
        };

    and you want to be notified when the foo function is called. You can use jQuery to bind the handler like this:

        $(item).bind("foo", function () { alert('foo hook called'); });

    Binding alone won't actually cause the handler to be triggered, so when you call:

        item.foo();

    you only get the 'original' message. In order to fire both the original handler and the bound event hook you have to use the .trigger() function:

        $(item).trigger("foo");

    Now if you run the following complete sequence:

        var item = {
            sku: "wwhelp",
            foo: function() { alert('original foo function'); }
        };

        $(item).bind("foo", function () { alert('foo hook called'); });
        $(item).trigger("foo");

    you'll see the 'hook' message first, followed by the 'original' message, fired in succession. In other words, using this mechanism you can hook standard object functions and chain events to them in a way similar to what you can do with DOM elements. The main difference is that the 'event' has to be explicitly triggered for this to happen, rather than just calling the method directly.

    .trigger() relies on some internal logic that checks for event bindings on the object (attached via an expando property) which .trigger() searches for in its bound event list. Once the 'event' is found it's called prior to execution of the original function. This is pretty useful as it allows you to create standard JavaScript objects that can act as event handlers and are effectively hookable, without having to explicitly override event definitions with JavaScript function handlers. You get all the benefits of jQuery's event methods, including the ability to hook up multiple events to the same handler function and the ability to uniquely identify each specific event instance with postfixed string names (i.e. .bind("MyEvent.MyName") and .unbind("MyEvent.MyName") to bind MyEvent).

    Watch out for an .unbind() Bug

    Note that there appears to be a bug with .unbind() in jQuery that doesn't reliably unbind an event and results in an "elem.removeEventListener is not a function" error. The following code demonstrates:

        var item = {
            sku: "wwhelp",
            foo: function () { alert('original foo function'); }
        };

        $(item).bind("foo.first", function () { alert('foo hook called'); });
        $(item).bind("foo.second", function () { alert('foo hook2 called'); });

        $(item).trigger("foo");

        setTimeout(function () {
            $(item).unbind("foo");
            // $(item).unbind("foo.first");
            // $(item).unbind("foo.second");
            $(item).trigger("foo");
        }, 3000);

    The setTimeout call delays the unbinding and is supposed to remove the event binding on the foo function. It fails both when unbinding plain "foo" and when removing the two postfixed handlers ("foo.first" and "foo.second") explicitly. Oddly, the following, which removes only one of the two handlers, works:

        setTimeout(function () {
            //$(item).unbind("foo");
            $(item).unbind("foo.first");
            // $(item).unbind("foo.second");
            $(item).trigger("foo");
        }, 3000);

    This actually works, which is weird, as the code in unbind tries to unbind using a DOM method that doesn't exist.

    <shrug> A partial workaround for unbinding all 'foo' events is the following:

        setTimeout(function () {
            $.event.special.foo = {
                teardown: function () {
                    alert('teardown');
                    return true;
                }
            };
            $(item).unbind("foo");
            $(item).trigger("foo");
        }, 3000);

    which is a bit cryptic to say the least, but it seems to work more reliably. I can't take credit for any of this -- thanks to Dave Reed and Damien Edwards who pointed out some of these behaviors. I didn't find any good descriptions of the process, so I thought it'd be good to write it down here. Hope some of you find this helpful.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in jQuery

    Read the article

  • jQuery 1.4 Opacity and IE Filters

    - by Rick Strahl
    Ran into a small problem today with my client-side jQuery library after switching to jQuery 1.4: a shadow plugin that I use to provide drop shadows for absolutely positioned elements stopped working correctly. For Mozilla and WebKit browsers the -moz-box-shadow and -webkit-box-shadow CSS attributes are used, but for IE a separate element is created manually to provide the shadow that underlays the original element, along with a Blur filter to provide the fuzziness in the shadow. Some of the key pieces are:

        var vis = el.is(":visible");
        if (!vis)
            el.show();  // must be visible to get .position

        var pos = el.position();

        if (typeof shEl.style.filter == "string")
            sh.css("filter",
                   'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' +
                   opt.opacity.toString() + ')');

        sh.show()
          .css({ position: "absolute",
                 width: el.outerWidth(),
                 height: el.outerHeight(),
                 opacity: opt.opacity,
                 background: opt.color,
                 left: pos.left + opt.offset,
                 top: pos.top + opt.offset });

    This has always worked in previous versions of jQuery, but with 1.4 the original filter no longer works. It appears that applying the opacity after the original filter wipes out the original filter. In other words, the opacity filter is not applied incrementally but absolutely, which is a real bummer. Luckily the workaround is relatively easy: just switch the order in which the opacity and the blur filter are applied. If I apply the blur after the opacity I get the correct behavior back, with both opacity and blur intact:

        sh.show()
          .css({ position: "absolute",
                 width: el.outerWidth(),
                 height: el.outerHeight(),
                 opacity: opt.opacity,
                 background: opt.color,
                 left: pos.left + opt.offset,
                 top: pos.top + opt.offset });

        if (typeof shEl.style.filter == "string")
            sh.css("filter",
                   'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' +
                   opt.opacity.toString() + ')');

    While this works, it still causes problems in other areas where opacity is implicitly set in code, such as fade operations or, in the case of my shadow component, the style/property watcher that keeps the shadow and the main object linked. Both of these may set the opacity explicitly, and that is still broken, as it will effectively kill the blur filter.

    This seems like a really strange design decision by the jQuery team, since the jQuery .css() function clearly does the right thing when setting filters. Internally however, the opacity setting doesn't use .css(), instead hardcoding the filter, which given jQuery's usual flexibility and smart code seems really inappropriate. The following is from jQuery.js 1.4:

        var style = elem.style || elem, set = value !== undefined;

        // IE uses filters for opacity
        if ( !jQuery.support.opacity && name === "opacity" ) {
            if ( set ) {
                // IE has trouble with opacity if it does not have layout
                // Force it by setting the zoom level
                style.zoom = 1;

                // Set the alpha filter to set the opacity
                var opacity = parseInt( value, 10 ) + "" === "NaN" ?
                    "" :
                    "alpha(opacity=" + value * 100 + ")";
                var filter = style.filter || jQuery.curCSS( elem, "filter" ) || "";

                style.filter = ralpha.test(filter) ?
                    filter.replace(ralpha, opacity) :
                    opacity;
            }

            return style.filter && style.filter.indexOf("opacity=") >= 0 ?
                (parseFloat( ropacity.exec(style.filter)[1] ) / 100) + "" :
                "";
        }

    You can see here that the style is explicitly set in code rather than relying on $.css() to assign the value, resulting in the old filter getting wiped out.

    jQuery 1.3.2 looks a little different:

        // IE uses filters for opacity
        if ( !jQuery.support.opacity && name == "opacity" ) {
            if ( set ) {
                // IE has trouble with opacity if it does not have layout
                // Force it by setting the zoom level
                elem.zoom = 1;

                // Set the alpha filter to set the opacity
                elem.filter = (elem.filter || "").replace( /alpha\([^)]*\)/, "" ) +
                    (parseInt( value ) + '' == "NaN" ?
                        "" :
                        "alpha(opacity=" + value * 100 + ")");
            }

            return elem.filter && elem.filter.indexOf("opacity=") >= 0 ?
                (parseFloat( elem.filter.match(/opacity=([^)]*)/)[1] ) / 100) + '' :
                "";
        }

    Offhand I'm not sure why the latter works better, since it too is assigning the filter. However, when checking with the IE script debugger I can see that there are actually a couple of filter tags assigned when using jQuery 1.3.2, but only one when I use jQuery 1.4. Note also that the jQuery 1.3 compatibility plugin for jQuery 1.4 doesn't address this issue either.

    Resources
    ww.jquery.js (shadow plug-in $.fn.shadow)

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in jQuery

    Read the article

  • SmtpClient and Locked File Attachments

    - by Rick Strahl
    Got a note a couple of days ago from a client using one of my generic routines that wraps SmtpClient. Apparently whenever a file has been attached to a message and emailed with SmtpClient, the file remains locked after the message has been sent. Oddly this particular issue hasn't cropped up before for me, although these routines are in use in a number of applications I've built.

    The wrapper I use was built mainly to backfit an old pre-.NET 2.0 email client I built using sockets to avoid the CDO nightmares of the .NET 1.x mail client. The current class retained the same class interface but now internally uses SmtpClient, and it provides a flat property interface that makes it less verbose to send off email messages. File attachments in this interface are handled by providing a comma-delimited list of files in an Attachments string property, which is then collected along with the other flat property settings and eventually passed on to SmtpClient in the form of a MailMessage structure. The gist of the code is something like this:

        /// <summary>
        /// Fully self contained mail sending method. Sends an email message by connecting
        /// and disconnecting from the email server.
        /// </summary>
        /// <returns>true or false</returns>
        public bool SendMail()
        {
            if (!this.Connect())
                return false;

            try
            {
                // Create and configure the message
                MailMessage msg = this.GetMessage();
                smtp.Send(msg);

                this.OnSendComplete(this);
            }
            catch (Exception ex)
            {
                string msg = ex.Message;
                if (ex.InnerException != null)
                    msg = ex.InnerException.Message;

                this.SetError(msg);
                this.OnSendError(this);
                return false;
            }
            finally
            {
                // close connection and clear out headers
                // SmtpClient instance nulled out
                this.Close();
            }

            return true;
        }

        /// <summary>
        /// Configures the message interface
        /// </summary>
        protected virtual MailMessage GetMessage()
        {
            MailMessage msg = new MailMessage();

            msg.Body = this.Message;
            msg.Subject = this.Subject;
            msg.From = new MailAddress(this.SenderEmail, this.SenderName);

            if (!string.IsNullOrEmpty(this.ReplyTo))
                msg.ReplyTo = new MailAddress(this.ReplyTo);

            // Send all the different recipients
            this.AssignMailAddresses(msg.To, this.Recipient);
            this.AssignMailAddresses(msg.CC, this.CC);
            this.AssignMailAddresses(msg.Bcc, this.BCC);

            if (!string.IsNullOrEmpty(this.Attachments))
            {
                string[] files = this.Attachments.Split(new char[2] { ',', ';' },
                                                        StringSplitOptions.RemoveEmptyEntries);
                foreach (string file in files)
                {
                    msg.Attachments.Add(new Attachment(file));
                }
            }

            if (this.ContentType.StartsWith("text/html"))
                msg.IsBodyHtml = true;
            else
                msg.IsBodyHtml = false;

            msg.BodyEncoding = this.Encoding;

            // ... additional code omitted

            return msg;
        }

    Basically this code collects all the property settings of the wrapper object and applies them to the SmtpClient and, in GetMessage(), to an individual MailMessage's properties. Specifically, notice that attachment filenames are converted from a comma-delimited string to filenames from which new attachments are created.

    The code as it's written, however, will cause the problem with file attachments not being released properly. Internally .NET opens up stream handles and reads the files from disk to dump them into the email send stream. The attachments are always sent correctly, but the local files are not immediately closed.

    As you probably guessed, the issue is simply that some resources are not automatically disposed when sending is complete, and sure enough the following code change fixes the problem:

        // Create and configure the message
        using (MailMessage msg = this.GetMessage())
        {
            smtp.Send(msg);

            if (this.SendComplete != null)
                this.OnSendComplete(this);

            // or use an explicit msg.Dispose() here
        }

    The Message object requires an explicit call to Dispose() (or a using() block as I have here) to force the attachment files to get closed.

    I think this is rather odd behavior for this scenario, however. The code I use passes in filenames, and my expectation of an API that accepts file names is that it uses the files by opening and streaming them and then closing them when done. Why keep the streams open and require an explicit .Dispose() by the calling code, which is bound to lead to unexpected behavior just as my customer ran into? Any API-level code should clean up as much as possible, and this is clearly not happening here, resulting in unexpected behavior. Apparently lots of other folks have run into this before, as I found based on a few Twitter comments on this topic.

    Odd to me too is that SmtpClient() doesn't implement IDisposable -- it's only the MailMessage (and Attachments) that implement it and require it to clean up leftover resources like open file handles. This means that you couldn't even use a using() statement around the SmtpClient code to resolve this -- instead you'd have to wrap it around the message object, which again is rather unexpected.

    Well, chalk that one up to another small unexpected behavior that wasted half an hour of my time -- hopefully this post will help someone avoid that same half hour of hunting and searching.

    Resources:
    Full code to SmtpClientNative (West Wind Web Toolkit Repository)
    SmtpClient Documentation MSDN

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in .NET
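    For cases where a using() block around the whole message isn't convenient -- say the MailMessage is created by calling code you don't control -- the same cleanup can be done by disposing the message (or its attachments) explicitly once the send completes. This is just an illustrative sketch building on the smtp and msg variables from the examples above, not part of the wrapper class itself:

        try
        {
            smtp.Send(msg);
        }
        finally
        {
            // Disposing each Attachment closes its underlying file stream and
            // releases the file lock; disposing the message then cleans up
            // anything else it owns (Dispose is safe to call more than once).
            foreach (Attachment att in msg.Attachments)
                att.Dispose();
            msg.Dispose();
        }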

    Read the article

  • SQLAuthority News – Meeting with Allen Bailochan Tuladhar – An Unlimited Experience

    - by pinaldave
    Allen Tuladhar

    I recently came back from my 9-day trip to Nepal, and I must say that this was one of the best trips of my lifetime. Allen Bailochan Tuladhar is a wonderful person and an extreme enthusiast for Microsoft technology. Allen is the Chief Executive Officer of Unlimited Technologies Pvt Ltd., Country Manager of Microsoft MDP Nepal, the Member Secretary of Nepali Language in Information Technology, and a member of the Steering Committee of the Government of Nepal. He is the person who keeps Nepal's tech community constantly motivated and takes it to the next level.

    I have met Allen many times before, but this was the first time I was with him in Kathmandu, Nepal. I was very impressed with the amount of work he does in the community. During my 9 days of stay, every single day was a new lesson for me. I was amazed and overwhelmed with the many things he does every single day. Not only does he work closely with Government of Nepal ministries, but he is also one of the best-known people in the student community. His expertise is not limited to one technology; rather, I have seen him actively engaging in discussions of various tech topics.

    Allen presenting at TechMela, Kathmandu, Nepal

    Allen is currently active in working to localize Windows and Office into the Nepali language. I was able to witness and experience how the localization works, as well as the procedure for doing it. If you know the whole localization process, you must have realized how big and daunting a process it is. I was glad that I became a part of it.

    Prominent personalities of Nepal on a panel discussion

    Another great opportunity I had while at Allen's office was learning how a radio technology talk show works. A Nepali radio station has a weekly program in the local language, in which Microsoft technology is discussed and industry leaders are invited to talk about their experience with the technology. I found the program very interesting because it covers so much variety in terms of technology subjects. My understanding of the Nepali language is limited, but I did understand quite a bit.

    Ravi, Nutan, Pinal, Gandip

    I got the chance to meet lots of database professionals as well. People in Nepal are very polite, and they are very strong in their technology fundamentals. I had in-depth discussions regarding High Availability scenarios as well as query tuning. Database professionals from the leading financial sectors of Nepal wanted me to visit their data center and help them out with a few advanced scenarios. In no time, Allen organized a visit for me. He sent a Nepali-speaking expert from his own organization to accompany me and help me overcome any difficulties while I was helping this financial institution.

    Pinal (SQLAuthority) and Deependra (Unlimited)

    When I was going to Nepal, I was really not sure if I would be able to stay busy for 9 days straight with community-related activity. However, on the 9th day I realized that I could stay even longer, because every single day I felt enthusiastic enough to do something new.

    Allen Bailochan Tuladhar

    Even though I was working very hard every day, I hardly had the chance to work with him and talk to him one-on-one for the first few days. One evening, Allen invited me to his home and we discussed his future ideas. I was really surprised to see how much a man can do for his technical community and for his country.

    When I asked Allen's wife and daughter if they ever think it's getting to be too much, with Allen putting so much effort into the community, their answer was something I did not expect. I found out that Allen's wife manages all the back office and logistics of the community events, and his daughter manages the websites. I felt that they do not have any complaints; instead, the whole family is in this activity as deeply as it can get, which I thought is a very good thing.

    Pinal and Allen

    I want to end this post with an interesting story that happened during our lunch hour at one of the Nepali restaurants. While we were having our lunch and some chitchat, Allen suddenly stood up and called over several people walking along the pavement. He introduced them all to me as Microsoft Student Partners. He asked all of them to order their favorite dishes and called the waiter to tell him that he would pick up their tab. Figuring out the question written on my face, he just said one sentence: "They are all future technology professionals who are going to make all of us proud." I guess I have a lot of things to learn. Hats off to Allen!

    Pinal and Allen at Microsoft MDP Unlimited

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: MVP, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology

    Read the article

  • SSAS Compare: an intern’s journey

    - by Red Gate Software BI Tools Team
    About a month ago, David mentioned an intern working in the BI Tools Team. That intern happens to be me! In five weeks’ time, I’ll start my second year of Computer Science at the University of Cambridge and be a full-time student again, but for the past eight weeks, I’ve been living a completely different life. As Jon mentioned before, the teams here at Red Gate are small and everyone (including the interns!) is responsible for the product as a whole. I’ve attended planning sessions, UX tests, daily meetings, and everything else a full-time member of the team would; I had as much say in where we would go next with the product as anyone; I was able to see that what I was doing was an important part of the product from the feedback we got in the UX tests. All these things almost made me forget that this is just an internship and not my full-time job. First steps at Red Gate Being based in Cambridge, Red Gate has many Cambridge university graduates working for them. They also hire some Cambridge undergraduates for internships each summer. With its popularity with university graduates and its great working environment, Red Gate has managed to build up a great reputation. When I thought of doing an internship here in Cambridge, Red Gate just seemed to be the obvious choice for my first real work experience. On my first day at Red Gate, David, the lead developer for SSAS Compare, helped me settle in and explained what I’d be doing. My task was to improve the user experience of displaying differences between MDX scripts by syntax highlighting, script formatting, and improving the difference identification in the first place. David suggested how I should approach the problem, but left all the details and design decisions to me. That was when I realised how much independence and responsibility I’d have. What I’ve done If you launch the latest version of SSAS Compare and drill down to an MDX script difference, you can see the changes that have been made. In earlier versions, you could only see the scripts in plain text on both sides — either in black or grey, depending on whether they were the same or not. However, you couldn’t see exactly where the scripts were different, which was especially annoying when the two scripts were large – as they often are. Furthermore, if parts of the two scripts were formatted differently, they seemed to be different but were actually the same, which caused even more confusion and made it difficult to see where the differences were. All these issues have been fixed now. The two scripts are automatically formatted by the tool so that if two things are syntactically equivalent, they look the same – including case differences in keywords! The actual difference is highlighted in grey, which makes them easy to spot. The difference identification has been improved as well, so two scripts aren’t identified as different if there’s just a difference in meaningless whitespace characters, or when you have “select” on one side and “SELECT” on the other. We also have syntax highlighting, which makes it easier to read the scripts. How I did it In order to do the formatting properly, we decided to parse the MDX scripts. After some investigation into parser builders, I decided to go with the GOLD Parser builder and the bsn-goldparser .NET engine. GOLD Parser builder provides a fairly nice GUI to write, build, and test grammar in. We also liked the idea of separating the grammar building from parsing a text. 
The bsn-goldparser is one of many .NET engines for GOLD, and although it doesn’t support the newest features of GOLD Parser, it has “the ability to map semantic action classes to terminals or reduction rules, so that a completely functional semantic AST can be created directly without intermediate token AST representation, and without the need for glue code.” That makes it much easier for us to change the implementation in our program when we change the grammar. As bsn-goldparser is open source, and I wanted some more features in it, I contributed two new features which have now been merged to the project. Unfortunately, there wasn’t an MDX grammar written for GOLD already, so I had to write it myself. I was referencing MSDN to get the formal grammar specification, but the specification was all over the place, so it wasn’t that easy to implement and find. We’re aware that we don’t yet fully support all valid MDX, so sometimes you’ll just see the MDX script difference displayed the old way. In that case, there is some grammar construct we don’t yet recognise. If you come across something SSAS Compare doesn’t recognise, we’d love to hear about it so we can add it to our grammar. When some MDX script gets parsed, a tree is produced. That tree can then be processed into a list of inlines which deal with the correct formatting and can be outputted to the screen. Doing all this has led me to many new technologies and projects I haven’t worked with before. This was my first experience with C# and Visual Studio, although I have done things in Java before. I have learnt how to unit test with NUnit, how to do dependency injection with Ninject, how to source-control code with SVN and Mercurial, how to build with TeamCity, how to use GOLD, and many other things. What’s coming next Sadly, my internship comes to an end this week, so there will be less development on MDX difference view for a while. But the team is going to work on marking the differences better and making it consistent with difference indication in the top part of comparison window, and will keep adding support for more MDX grammar so you can see the differences easily in every comparison you make. So long! And maybe I’ll see you next summer!
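    The whitespace- and case-insensitive comparison described above boils down to comparing token streams rather than raw text. SSAS Compare does this off a full MDX parse tree, but the core idea can be illustrated with a much cruder token-level sketch in C# (an illustration of the approach only, not the product's code):

        using System;
        using System.Linq;
        using System.Text.RegularExpressions;

        static class MdxDiffSketch
        {
            // Split a script into tokens (bracketed names, words, numbers,
            // single symbols), dropping whitespace entirely.
            static string[] Tokenize(string script)
            {
                return Regex.Matches(script, @"\[[^\]]*\]|[A-Za-z_][A-Za-z0-9_]*|\d+(\.\d+)?|\S")
                            .Cast<Match>()
                            .Select(m => m.Value)
                            .ToArray();
            }

            // Two scripts count as equivalent if their token streams match,
            // comparing tokens case-insensitively so "select" == "SELECT".
            public static bool AreEquivalent(string left, string right)
            {
                string[] l = Tokenize(left);
                string[] r = Tokenize(right);
                return l.Length == r.Length &&
                       l.Zip(r, (a, b) => string.Equals(a, b, StringComparison.OrdinalIgnoreCase))
                        .All(same => same);
            }
        }

        // MdxDiffSketch.AreEquivalent(
        //     "select [Measures].[Amount] on 0 from [Cube]",
        //     "SELECT  [Measures].[Amount]  ON 0  FROM [Cube]")   // -> true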

    Read the article

  • Request Limit Length Limits for IIS's requestFiltering Module

    - by Rick Strahl
    Today I updated my CodePaste.net site to MVC 3 and pushed an update to the site. The MVC update went pretty smoothly, as did most of the update process to the live site. Short of missing a web.config change in the /views folder that caused blank pages on the server, the process was relatively painless.

    However, one issue that kicked my ass for about an hour -- and not for the first time -- was a problem with my OpenId authentication using DotNetOpenAuth. I tested the site operation fairly extensively locally and everything worked, no problem, but on the server the OpenId returns resulted in a 404 response from IIS for a nice friendly OpenId return URL like this:

        http://codepaste.net/Account/OpenIdLogon?dnoa.userSuppliedIdentifier=http%3A%2F%2Frstrahl.myopenid.com%2F&dnoa.return_to_sig_handle=%7B634239223364590000%7D%7BjbHzkg%3D%3D%7D&dnoa.return_to_sig=7%2BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%2F%2FbF%2FhhYscgWzjg%2BB%2Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%3D%3D&openid.assoc_handle=%7BHMAC-SHA256%7D%7B4cca49b2%7D%7BMVGByQ%3D%3D%7D&openid.claimed_id=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.identity=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.mode=id_res&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.ns.sreg=http%3A%2F%2Fopenid.net%2Fextensions%2Fsreg%2F1.1&openid.op_endpoint=http%3A%2F%2Fwww.myopenid.com%2Fserver&openid.response_nonce=2010-10-29T04%3A12%3A53Zn5F4r5&openid.return_to=http%3A%2F%2Fcodepaste.net%2FAccount%2FOpenIdLogon%3Fdnoa.userSuppliedIdentifier%3Dhttp%253A%252F%252Frstrahl.myopenid.com%252F%26dnoa.return_to_sig_handle%3D%257B634239223364590000%257D%257BjbHzkg%253D%253D%257D%26dnoa.return_to_sig%3D7%252BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%252F%252FbF%252FhhYscgWzjg%252BB%252Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%253D%253D&openid.sig=h1GCSBTDAn1on98sLA6cti%2Bj1M6RffNerdVEI80mnYE%3D&openid.signed=assoc_handle%2Cclaimed_id%2Cidentity%2Cmode%2Cns%2Cns.sreg%2Cop_endpoint%2Cresponse_nonce%2Creturn_to%2Csigned%2Csreg.email%2Csreg.fullname&openid.sreg.email=rstrahl%40host.com&openid.sreg.fullname=Rick+Strahl

    A 404 of course isn't terribly helpful -- normally a 404 is a "resource not found" error, but the resource is definitely there. So how the heck do you figure out what's wrong?

    If you're just interested in the solution, here's the short version: IIS by default allows only a 1024-byte query string, which is obviously exceeded by the URL above. The setting is controlled by the Request Filtering module in IIS 7 and later, which can be configured in ApplicationHost.config (in %windir%\system32\inetsrv\config). To set the value, configure the requestLimits key like so:

        <configuration>
          <system.webServer>
            <security>
              <requestFiltering>
                <requestLimits maxQueryString="2048">
                </requestLimits>
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>

    This fixed me right up and made the requests work.

    How do you find out about problems like this? Ah yes, the troubles of an administrator. Read on and I'll take you through a quick review of how I tracked this down.

    Finding the Problem

    The issue with the error returned is that IIS returns a 404 "resource not found" error and doesn't provide much information about it. If you're lucky enough to be able to run your site from localhost, IIS is actually very helpful and gives you the right information immediately in a nicely detailed error page; the bottom of that page describes exactly what needs to be fixed. One problem with this easy way to find the error: you HAVE TO run localhost.

    On my server, which has about 10 domains running, localhost doesn't point at the particular site I had problems with, so I didn't get the luxury of this nice error page.

    Using Failed Request Tracing to retrieve Error Info

    The first place I go with IIS errors is to turn on Failed Request Tracing in IIS to get more error information. If you have access to the server to make a configuration change, you can enable it like this: find the Failed Request Tracing Rules option in the IIS Service Manager, select it, and then use Edit Site Tracing to enable tracing. Then add a rule for * (all content) and specify status codes from 100-999 to capture all errors. If you know exactly what error you're looking for, it might help to specify it exactly to keep the number of captured errors down.

    Then run your request and let it fail. IIS will throw error log files into a folder like C:\inetpub\logs\FailedReqLogFiles\W3SVC5, where the trailing 5 is the instance ID of the site. These files are XML, but they include an XSL stylesheet that provides some decent formatting. In this case it pointed me straight at the offending module.

    OK, it's the RequestFilteringModule. Request Filtering is built into IIS 7 and later and configured in ApplicationHost.config. This module defines a few basic rules about what paths and extensions are allowed in requests and, among other things, how long a query string is allowed to be. Most of these settings are pretty sensible, but the query string value can easily become a problem, especially if you're dealing with OpenId, since those return URLs are quite extensive.

    Debugging failed requests is never fun, but IIS 7 and forward at least provides the tools that can help point us in the right direction. The error message in the Failed Request Tracing report isn't as nice as the IIS error page, but it at least points at the offending module, which gave me the clue I needed to look at request restrictions in ApplicationHost.config. This would still be a stretch if you're not intimately familiar with it, but I think with some Google searches it would be easy to track this down within a few tries.

    Hope this was useful to some of you. It's useful to me to put this out as a reminder -- I've run into this issue before myself and totally forgot. Next time I've got it, right?

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET  Security
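    Since this kind of change often has to happen as part of an automated install, it's worth noting that the setting can also be written programmatically instead of hand-editing ApplicationHost.config. Here's a rough sketch using the Microsoft.Web.Administration API (IIS 7 and later; it assumes the assembly is referenced and the code runs with administrative rights -- my illustration, not code from the original post):

        using Microsoft.Web.Administration;

        public static class RequestLimitConfig
        {
            public static void SetMaxQueryString(string siteName, int maxQueryStringBytes)
            {
                using (ServerManager serverManager = new ServerManager())
                {
                    // Edit the requestFiltering section of ApplicationHost.config
                    // for the given site location
                    Configuration config = serverManager.GetApplicationHostConfiguration();
                    ConfigurationSection requestFiltering =
                        config.GetSection("system.webServer/security/requestFiltering", siteName);

                    ConfigurationElement requestLimits =
                        requestFiltering.GetChildElement("requestLimits");
                    requestLimits["maxQueryString"] = maxQueryStringBytes;

                    serverManager.CommitChanges();
                }
            }
        }

        // e.g. RequestLimitConfig.SetMaxQueryString("Default Web Site", 2048);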

    Read the article

  • Surface development: it's just like software development

    - by Dennis Vroegop
    Surface is magic. Everyone using it seems to think that way. And I have to be honest: after working for almost 2 years with the platform I still get that special feeling the moment I turn on the unit to do some more work. The whole user experience, the rich environment of the SDK, the touch, even the look and feel of the Surface environment is so much different from the stuff I've been working on all my career that I am still bewildered by it.

    But... and this is a big but... in the end we're still talking about a computer, and that needs software to become useful. Deep down, beneath the magic of the Surface unit, there is a PC somewhere, running Windows Vista and the .NET Framework 3.5. When you write that magic software that makes the platform come alive you're still working with .NET, WPF/XNA, C#, VB.NET and all those other tools and technologies you know so well. Sure, the whole user experience is different from what you've known. And the way of thinking about users, their interaction and the positioning of screen elements requires a whole new paradigm. And that takes time. It took me about half a year before I had the feeling I had it nailed down. But when that moment came (about 18 months ago...) I realized that everything I had learned so far about software development is still true when it comes to Surface.

    For the last 6 months I have been working with some people from a different background to start a new company. The idea was that the new company would focus on Surface and Surface only. These people come from a marketing background and had some good ideas for applications. And I have to admit: their ideas were good. Very good. Where it all fell down, of course, is that these ideas need to be implemented in a piece of software. And creating great software takes skilled developers and a lot of time and money. That's where things went wrong: the marketing guys didn't realize, and didn't want to realize, that software development is a job that takes skill. You can't just hire a bunch of developers and expect them to deliver the best sort of software, especially not when it comes to Surface.

    I tried to explain that yes, their user interface in Photoshop looked great, but no: I couldn't develop an application like that in a week's time. Even worse: the whole back end of the software (WCF for communications, SQL Server for the database, etc.) would take a lot more time than the front end. They didn't understand. It took them a couple of days to draw the UI in Photoshop, so in Blend I should be able to build the software in about the same amount of time. Well, you and I know that it doesn't work that way. Software is hard to write, and even harder to write well, and it takes skill and dedication. It's not something you can do as fast as you can draw a mock-up for a Surface application in Photoshop.

    The same holds true for web applications, of course. A lot of designers there fail to appreciate the hard work that goes into writing the plumbing for a good web app that can handle thousands of users. Although the UI is very important, it's not all there is to it. And in Surface development this is the same. The UI should create the feeling of magic, but the software behind it is what makes it come alive. And that takes time. A lot of time.

    So brush off your skills and don't throw them away if you start developing for Surface. Because projects (and collaborations) can fail there just as hard as they can in any other area of software development.

    On a side note: we decided to stop the collaboration (something the other parties involved didn't appreciate and were very angry about) and decided to hire a designer for the Surface projects instead. The focus is back where it belongs: on the software development we know so well and have been doing very well for 13 years. The UI is just a part of the whole project, not the end product. So my company Detrio is still going strong when it comes to delivering Surface solutions, but once again from a technological background, not a marketing background.

    Read the article

  • Getting the innermost .NET Exception

    - by Rick Strahl
    Here's a trivial but quite useful function that I frequently need in dynamic execution of code: finding the innermost exception when an exception occurs, because for many operations (for example Reflection invocations or Web Service calls) the top-level errors returned can be rather generic. A good example -- common with errors in Reflection making a method invocation -- is this generic error:

        Exception has been thrown by the target of an invocation

    In this case this is an AJAX callback, which dynamically executes a method (ExecuteMethod code) which in turn calls into an Amazon Web Service using the old Amazon WSE101 Web service extensions for .NET. An error occurs in the Web Service call, and the innermost exception holds the useful error information, which in this case points at an invalid web.config key value related to the System.Net connection APIs.

    The "Exception has been thrown by the target of an invocation" error is the Reflection API's generic error message that gets fired when you execute a method dynamically and that method fails internally. The message basically says: "Your code blew up in my face when I tried to run it!" Which of course is not very useful for telling you what actually happened. If you drill down the InnerExceptions, eventually you'll get a more detailed exception that points at the original error and the code that caused it. In the example above the actually useful exception is two InnerExceptions down.

    In most (but not all) cases when inner exceptions are returned, it's the innermost exception that has the information that is really useful. It's of course a fairly trivial task to do this in code, but I do it so frequently that I use a small helper method for it:

        /// <summary>
        /// Returns the innermost Exception for an object
        /// </summary>
        /// <param name="ex"></param>
        /// <returns></returns>
        public static Exception GetInnerMostException(Exception ex)
        {
            Exception currentEx = ex;
            while (currentEx.InnerException != null)
            {
                currentEx = currentEx.InnerException;
            }

            return currentEx;
        }

    This code just loops through all the inner exceptions (if any) and assigns them to a temporary variable until there are no more inner exceptions. The end result is that you get the innermost exception returned from the original exception. It's then easy to use this code in a try/catch handler like this (from the example above) to retrieve the more important innermost exception:

        object result = null;
        string stringResult = null;
        try
        {
            if (parameterList != null)
                // use the supplied parameter list
                result = helper.ExecuteMethod(methodToCall, target,
                                              parameterList.ToArray(),
                                              CallbackMethodParameterType.Json, ref attr);
            else
                // grab the info out of QueryString Values or POST buffer during
                // parameter parsing for optimization
                result = helper.ExecuteMethod(methodToCall, target, null,
                                              CallbackMethodParameterType.Json, ref attr);
        }
        catch (Exception ex)
        {
            Exception activeException = DebugUtils.GetInnerMostException(ex);
            WriteErrorResponse(activeException.Message,
                               (HttpContext.Current.IsDebuggingEnabled ? ex.StackTrace : null));
            return;
        }

    Another function that is useful to me from time to time is one that returns all inner exceptions and the original exception as an array:

        /// <summary>
        /// Returns an array of the entire exception list in reverse order
        /// (innermost to outermost exception)
        /// </summary>
        /// <param name="ex">The original exception to work off</param>
        /// <returns>Array of Exceptions from innermost to outermost</returns>
        public static Exception[] GetInnerExceptions(Exception ex)
        {
            List<Exception> exceptions = new List<Exception>();
            exceptions.Add(ex);

            Exception currentEx = ex;
            while (currentEx.InnerException != null)
            {
                currentEx = currentEx.InnerException;
                exceptions.Add(currentEx);
            }

            // Reverse the order so the innermost is first
            exceptions.Reverse();

            return exceptions.ToArray();
        }

    This function loops through all the InnerExceptions and collects them, then reverses the order of the array so the innermost exception comes first. This can be useful in certain error scenarios where exceptions stack and you need to display information from more than one of the exceptions in order to create a useful error message. This is rare, but certain database exceptions bury their exception info in multiple inner exceptions and it's easier to parse through them in an array than to manually walk the exception stack. It's also useful if you need to log errors and want to see all of the error detail from all exceptions.

    None of this is rocket science, but it's useful to have some helpers that make retrieval of the critical exception info trivial.

    Resources
    DebugUtils.cs utility class in the West Wind Web Toolkit

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in CSharp  .NET
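    As a quick illustration of how the two helpers combine -- show the user the most specific message, but log the whole chain -- here's a sketch based on the callback example above (the Log() call is a stand-in for whatever logging mechanism you use, not part of the toolkit):

        catch (Exception ex)
        {
            // Most specific error for the response
            string message = DebugUtils.GetInnerMostException(ex).Message;
            WriteErrorResponse(message, null);

            // Full chain, innermost first, for the log
            foreach (Exception error in DebugUtils.GetInnerExceptions(ex))
                Log(error.GetType().Name + ": " + error.Message);
        }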

    Read the article

  • Top 4 Lame Tech Blogging Posts

    - by jkauffman
    From a consumption point of view, tech blogging is a great resource for one-off articles on niche subjects. If you spend any time reading tech blogs, you may find yourself running into several common, useless types of posts tech bloggers slip into. Some of these lame posts may just be natural due to common nerd psychology, and some others are probably due to lame, lemming-like laziness. I’m sure I’ll do my fair share of fitting the mold, but I quickly get bored when I happen upon posts that hit these patterns without any real purpose or personal touches. 1. The Content Regurgitation Posts This is a common pattern fueled by the starving pan-handlers in the web traffic economy. These are posts that are terse opinions or addendums to an existing post. I commonly see these involve huge block quotes from the linked article which almost always produces over 50% of the post itself. I’ve accidentally gone to these posts when I’m knowingly only interested in the source material. Web links can degrade as well, so if the source link is broken, then, well, I’m pretty steamed. I see this occur with simple opinions on technologies, Stack Overflow solutions, or various tech news like posts from Microsoft. It’s not uncommon to go to the linked article and see the author announce that he “added a blog post” as a response or summary of the topic. This is just rude, but those who do it are probably aware of this. It’s a matter of winning that sweet, juicy web traffic. I doubt this leeching is fooling anybody these days. I would like to rally human dignity and urge people to avoid these types of posts, and just leave a comment on the source material. 2. The “Sorry I Haven’t Posted In A While” Posts This one is far too common. You’ll most likely see this quote somewhere in the body of the offending post: I have been really busy. If the poster is especially guilt-ridden, you’ll see a few volleys of excuses. Here are some common reasons I’ve seen, which I’ll list from least to most painfully awkward. Out of town Vague allusions to personal health problems (these typically includes phrases like “sick”, “treatment'”, and “all better now!”) “Personal issues” (which I usually read as "divorce”) Graphic or specific personal health problems (maximum awkwardness potential is achieved if you see links to charity fund websites) I can’t help but to try over-analyzing why this occurs. Personally, I see this an an amalgamation of three plain factors: Life happens Us nerds are duty-driven, and driven to guilt at personal inefficiencies Tech blogs can become personal journals I don’t think we can do much about the first two, but on the third I think we could certainly contain our urges. I’m a pretty boring guy and, whether or I like it or not, I have an unspoken duty to protect the world from hearing about my unremarkable existence. Nobody cares what kind of sandwich I’m eating. Similarly, if I disappear for a while, it’s unlikely that anybody who happens upon my blog would care why. Rest assured, if I stop posting for a while due to a vasectomy, you will be the first to know. 3. The “At A Conference”, or “Conference Review” Posts I don’t know if I’m like everyone else on this one, but I have never been successfully interested in these posts. It even sounds like a good idea: if I can’t make it to a particular conference (like the KCDC this year), wouldn’t I be interested in a concentrated summary of events? Apparently, no! Within this realm, I’ve never read a post by a blogger that held my interest. 
What really baffles is is that, for whatever reason, I am genuinely engaged and interested when talking to someone in person regarding the same topic. I have noticed the same phenomenon when hearing about others’ vacations. If someone sends me an email about their vacation, I gloss over it and forget about it quickly. In contrast, if I’m speaking to that individual in person about their vacation, I’m actually interested. I’m unsure why the written medium eradicates the intrigue. I was raised by a roaming pack of friendly wild video games, so that may be a factor. 4. The “Top X Number of Y’s That Z” Posts I’ve seen this one crop up a lot more in the past few of years. Here are some fabricated examples: 5 Easy Ways to Improve Your Code Top 7 Good Habits Programmers Learn From Experience The 8 Things to Consider When Giving Estimates Top 4 Lame Tech Blogging Posts These are attention-grabbing headlines, and I’d assume they rack up hits. In fact, I enjoy a good number of these. But, I’ve been drawn to articles like this just to find an endless list of identically formatted posts on the blog’s archive sidebar. Often times these posts have overlapping topics, too. These types of posts give the impression that the author has given thought to prioritize and organize the points as a result of a comprehensive consideration of a particular topic. Did the author really weigh all the possibilities when identifying the “Top 4 Lame Tech Blogging Patterns”? Unfortunately, probably not. What a tool. To reiterate, I still enjoy the format, but I feel it is abused. Nowadays, I’m pretty skeptical when approaching posts in this format. If these trends continue, my brain will filter these blog posts out just as effectively as it ignores the encroaching “do xxx with this one trick” advertisements. Conclusion To active blog readers, I hope my guide has served you precious time in being able to identify lame blog posts at a glance. Save time and energy by skipping over the chaff of the internet! And if you author a blog, perhaps my insight will help you to avoid the occasional urge to produce these needless filler posts.

    Read the article

  • ODI 12c's Mapping Designer - Combining Flow Based and Expression Based Mapping

    - by Madhu Nair
    post by David Allan ODI is renowned for its declarative designer and minimal expression based paradigm. The new ODI 12c release has extended this even further to provide an extended declarative mapping designer. The ODI 12c mapper is a fusion of ODI's new declarative designer with the familiar flow based designer while retaining ODI’s key differentiators of: Minimal expression based definition, The ability to incrementally design an interface and to extract/load data from any combination of sources, and most importantly Backed by ODI’s extensible knowledge module framework. The declarative nature of the product has been extended to include an extensible library of common components that can be used to easily build simple to complex data integration solutions. Big usability improvements through consistent interactions of components and concepts all constructed around the familiar knowledge module framework provide the utmost flexibility. Here is a little taster: So what is a mapping? A mapping comprises of a logical design and at least one physical design, it may have many. A mapping can have many targets, of any technology and can be arbitrarily complex. You can build reusable mappings and use them in other mappings or other reusable mappings. In the example below all of the information from an Oracle bonus table and a bonus file are joined with an Oracle employees table before being written to a target. Some things that are cool include the one-click expression cross referencing so you can easily see what's used where within the design. The logical design in a mapping describes what you want to accomplish  (see the animated GIF here illustrating how the above mapping was designed) . The physical design lets you configure how it is to be accomplished. So you could have one logical design that is realized as an initial load in one physical design and as an incremental load in another. In the physical design below we can customize how the mapping is accomplished by picking Knowledge Modules, in ODI 12c you can pick multiple nodes (on logical or physical) and see common properties. This is useful as we can quickly compare property values across objects - below we can see knowledge modules settings on the access points between execution units side by side, in the example one table is retrieved via database links and the other is an external table. In the logical design I had selected an append mode for the integration type, so by default the IKM on the target will choose the most suitable/default IKM - which in this case is an in-built Oracle Insert IKM (see image below). This supports insert and select hints for the Oracle database (the ANSI SQL Insert IKM does not support these), so by default you will get direct path inserts with Oracle on this statement. In ODI 12c, the mapper is just that, a mapper. Design your mapping, write to multiple targets, the targets can be in the same data server, in different data servers or in totally different technologies - it does not matter. ODI 12c will derive and generate a plan that you can use or customize with knowledge modules. Some of the use cases which are greatly simplified include multiple heterogeneous targets, multi target inserts for Oracle and writing of XML. Let's switch it up now and look at a slightly different example to illustrate expression reuse. In ODI you can define reusable expressions using user functions. These can be reused across mappings and the implementations specialized per technology. 
So you can have common expressions across Oracle, SQL Server, Hive etc., shielding the design from the physical aspects of the generated language. Another way to reuse is within a mapping itself. In ODI 12c expressions can be defined and reused within a mapping. Rather than replicating the expression text in larger expressions you can decompose it into smaller snippets. Below you can see UNIT_TAX_AMOUNT has been defined and is used in two downstream target columns - it's used in the TOTAL_TAX_AMOUNT and it's used in the UNIT_TAX_AMOUNT (a recording of the calculation). You can see the columns that the expressions depend on (upstream) and the columns the expression is used in (downstream) highlighted within the mapper. Also, multi-selecting attributes is a convenient way to see what's being used where; below I have selected the TOTAL_TAX_AMOUNT in the target datastore and the UNIT_TAX_AMOUNT in UNIT_CALC. You can now see many expressions at once and understand much more at one time without needlessly clicking around and memorizing information. Our mantra during development was to keep it simple, make the tool more powerful and do even more for the user. The development team was a fusion of many teams from Oracle Warehouse Builder, Sunopsis and BEA Aqualogic, debating and perfecting the mapper in ODI 12c. This was quite a project, from supporting the capabilities of ODI in 11g to building the flow based mapping tool to support the future. I hope this was a useful insight; there is so much more to come on this topic, and this is just a preview of much more that you will see of the mapper in ODI 12c.

    Read the article

  • Figuring out the IIS Version for a given OS in .NET Code

    - by Rick Strahl
Here's an odd requirement: I need to figure out what version of IIS is available on a given machine in order to take specific configuration actions when installing an IIS based application. I build several configuration tools for application configuration and installation, and depending on which version of IIS is available different configuration paths are taken. For example, when dealing with an XP machine you can't set up an Application Pool for an application because XP (IIS 5.1) didn't support Application Pools. Configuring 32 and 64 bit settings is easy in IIS 7 but this didn't work in prior versions, and so on. Along the same lines I saw a question on the AspInsiders list today regarding a similar issue, where somebody needed to know the IIS version as part of an ASP.NET application prior to when the Request object is available. So it's useful to know which version of IIS you can possibly expect. This should be easy right? But it turns out there's no real easy way to detect IIS on a machine. There's no registry key that gives you the full version number - you can detect installation but not which version is installed. The easiest way: Request.ServerVariables["SERVER_SOFTWARE"] The easiest way to determine the IIS version number is if you are already running inside of ASP.NET and you are inside of an ASP.NET request. You can look at Request.ServerVariables["SERVER_SOFTWARE"] to get a string like Microsoft-IIS/7.5 returned to you. It's a cinch to parse this to retrieve the version number. This works in the limited scenario where you need to know the version number inside of a running ASP.NET application. Unfortunately this is not a likely use case, since most times you need to know a specific version of IIS when you are configuring or installing your application. The messy way: Match Windows OS Versions to IIS Versions Since version 5.x, IIS versions have always been tied very closely to the Operating System. Meaning the only way to get a specific version of IIS was through the OS - you couldn't install another version of IIS on the given OS. Microsoft has a page that describes the OS version to IIS version relationship here: http://support.microsoft.com/kb/224609 In .NET you can then sniff the OS version and based on that return the IIS version. The following is a small utility function that accomplishes the task of returning an IIS version number for a given OS: /// <summary> /// Returns the IIS version for the given Operating System. /// Note this routine doesn't check to see if IIS is installed, /// it just returns the version of IIS that should run on the OS. /// /// Returns the value from Request.ServerVariables["Server_Software"] /// if available. Otherwise uses OS sniffing to determine the OS version /// and returns the IIS version instead. 
/// </summary> /// <returns>version number or -1 </returns> public static decimal GetIisVersion() { // if running inside of IIS parse the SERVER_SOFTWARE key // This would be most reliable if (HttpContext.Current != null && HttpContext.Current.Request != null) { string os = HttpContext.Current.Request.ServerVariables["SERVER_SOFTWARE"]; if (!string.IsNullOrEmpty(os)) { //Microsoft-IIS/7.5 int dash = os.LastIndexOf("/"); if (dash > 0) { decimal iisVer = 0M; if (Decimal.TryParse(os.Substring(dash + 1), out iisVer)) return iisVer; } } } decimal osVer = (decimal) Environment.OSVersion.Version.Major + ((decimal) Environment.OSVersion.Version.Minor / 10); // Windows 7 and Win2008 R2 if (osVer == 6.1M) return 7.5M; // Windows Vista and Windows 2008 else if (osVer == 6.0M) return 7.0M; // Windows 2003 and XP 64 bit else if (osVer == 5.2M) return 6.0M; // Windows XP else if (osVer == 5.1M) return 5.1M; // Windows 2000 else if (osVer == 5.0M) return 5.0M; // error result return -1M; } } Talk about a brute force approach, but it works. This code goes only back to IIS 5 - anything before that is not something you possibly would want to have running. :-) Note that this is updated through Windows 7/Windows Server 2008 R2. Later versions will need to be added as needed. Anybody know what the Windows version number of Windows 8 is? © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS
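For completeness, here is a small, hypothetical usage sketch of the helper above from a setup or configuration tool. The WebUtils container class is an assumption for illustration - only the GetIisVersion() method itself comes from the post:

using System;

class SetupCheck
{
    static void Main()
    {
        // Hypothetical usage of the GetIisVersion() helper shown above.
        // WebUtils is an assumed container class for that method.
        decimal iisVersion = WebUtils.GetIisVersion();

        if (iisVersion < 0)
            Console.WriteLine("Could not determine the IIS version on this machine.");
        else if (iisVersion >= 6.0M)
            Console.WriteLine("IIS " + iisVersion + ": creating a dedicated Application Pool.");
        else
            Console.WriteLine("IIS " + iisVersion + ": skipping Application Pool setup (not supported on IIS 5.x).");
    }
}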

    Read the article

  • On Contract Employment

    - by kerry
I am going to post about something I don’t post about a lot, the business side of development. Scott at the antipimp does a good job of explaining how contracts work from a business perspective. I am going to give a view from the ground. First, a little background on myself. I have recently taken a 6 month contract after about 8 years of fulltime employment. I have 2 kids, and a stay at home wife. I took this contract opportunity because I wanted to try it on for size. I have always wondered whether I would like doing contracts over fulltime employment. So, in keeping with the theme of this blog I will write this down now so that I may reference it later. ALL jobs are temporary! Right now you may not realize it (most people simply ignore it), but EVERY job is temporary. Everyone should be planning for life after the money stops coming in. Sadly, most people do not. Contracting pushes this issue to the forefront, making you deal with it. After a month on a contract, I am happy to say that I am saving more than I ever saved in a fulltime position. Hopefully, I will be ready in case of an extended window of unemployment between contracts. Networking I find it extremely gratifying getting to know people. It is especially beneficial when moving to a new city. What better way to go out and meet people in your field than to work a few contracts? 6 months of working beside someone and you get to know them pretty well. This is one of my favorite aspects. Technical Agility Moving between IS shops takes (or molds you into) a flexible person. You have to be able to go in and hit the ground running. This means you need to be able to sit down and start work on a large codebase working in a language that you may or may not have that much experience in. It is also an excellent way to learn new languages and broaden your technical skill set. I took my current position to learn Ruby. A month ago, I had only used it in passing, but now I am using it every day. It’s a tragedy in this field when people who start coding for the joy and love of coding become so deeply entrenched in their company’s methods and technologies that it becomes just a job. Less Stress I am not talking about the kind of stress you get from a jackass boss. I am talking about the kind of stress I (or others) experience about planning and future proofing your code. Not saying I stay up at night worrying whether we have done it right, or if that code I wrote today is going to bite me later, but it still creeps around in the dark recesses of my mind. Careful though, I am not suggesting you write sloppy code; just defer any large architectural or design decisions to the ‘code owners’. Flexible Scheduling It makes me very happy to be able to cut out a few hours early on a Friday (provided the work is done) and start the weekend off early by going to the pool, or taking the kids to the park. Contracting provides you this opportunity (mileage may vary). Most of your fulltime brethren will not care; they will be jealous that their corporate policy prevents them from doing the same. However, you must be mindful of situations where this is not appropriate, and don’t overdo it. You are there to work after all. Affirmation of Need Have you ever been stuck in a job where you thought you were underpaid? Have you ever been in a position where you felt like there was not enough workload for you? This is not a problem for contractors. 
When you start a contract it is understood that you are needed, and the employer knows that you are happy with the terms. Contracting may not be for everyone. But if you develop a relationship with a good consulting firm and keep their clients happy, then they will keep you happy. They want you to work almost as much as you do. Just be sure to plan financially for any windows of unemployment.

    Read the article

  • Getting a Web Resource Url in non WebForms Applications

    - by Rick Strahl
WebResources in ASP.NET are a pretty useful feature. WebResources are resources that are embedded into a .NET assembly and can be loaded from the assembly via a special resource URL. WebForms includes a method on the ClientScriptManager (Page.ClientScript) and the ScriptManager object to retrieve URLs to these resources. For example you can do: ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); GetWebResourceUrl requires a type (which is used for the assembly lookup in which to find the resource) and the resource id to look up. GetWebResourceUrl() then returns a nasty old long URL like this: WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634533278261362212 While lately excessive resource usage has been frowned upon, especially by MVC developers who tend to opt for content distributed as files, I still think that Web Resources have their place even in non-WebForms applications. Also, if you have existing assemblies that include resources like scripts and common image links, it sure would be nice to access them from non-WebForms pages like MVC views or even in plain old Razor Web Pages. Where's my Page object Dude? Unfortunately ASP.NET doesn't natively have a mechanism for retrieving WebResource Urls outside of the WebForms engine. It's a feature that's specifically baked into WebForms and that relies specifically on the Page HttpHandler implementation. Both Page.ClientScript (obviously) and ScriptManager rely on a hosting Page object in order to work, and the various methods off these objects require control instances to be passed. The reason for this is that the script managers can inject scripts and links into Page content (think RegisterXXXX methods) and for that a Page instance is required. However, for many other methods - like GetWebResourceUrl() - that simply return resources or resource links, the Page reference is really irrelevant. While there's a separate ClientScriptManager class, it's marked as sealed and doesn't have any public constructors, so you can't create your own instance (without Reflection). And even if you could, the internal constructor it does have requires a Page reference. No good… So, can we get access to a WebResourceUrl generically without running in a WebForms Page instance? We just have to create a Page instance ourselves and use it internally. There's nothing intrinsic about the use of the Page class in ClientScript, at least for retrieving resources and resource Urls, so it's easy to create an instance of a Page, for example in a static method. For our needs of retrieving ResourceUrls or even actually retrieving script resources we can use a canned, non-configured Page instance we create on our own. The following works just fine: public static string GetWebResourceUrl(Type type, string resource) { Page page = new Page(); return page.ClientScript.GetWebResourceUrl(type, resource); } A slight optimization for this might be to cache the created Page instance. 
Page tends to be a pretty heavy object to create each time a URL is required, so you might want to cache the instance: public class WebUtils { private static Page CachedPage { get { if (_CachedPage == null) _CachedPage = new Page(); return _CachedPage; } } private static Page _CachedPage; public static string GetWebResourceUrl(Type type, string resource) { return CachedPage.ClientScript.GetWebResourceUrl(type, resource); } } You can now use GetWebResourceUrl in a Razor page like this: <!DOCTYPE html> <html> <head> <script src="@WebUtils.GetWebResourceUrl(typeof(ControlResources),ControlResources.JQUERY_SCRIPT_RESOURCE)"> </script> </head> <body> <div class="errordisplay"> <img src="@WebUtils.GetWebResourceUrl(typeof(ControlResources),ControlResources.WARNING_ICON_RESOURCE)" /> This is only a Test! </div> </body> </html> And voila - there you have WebResources served from a non-Page based application. WebResources may be on the way out, but legacy apps have them embedded, and for some situations, like fallback scripts and some common image resources, I still like to use them. Being able to use them from non-WebForms applications should have been built into the core ASP.NET platform IMHO, but seeing that it's not, this workaround is easy enough to implement. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, MVC
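One caveat worth noting: the lazy caching above is not thread-safe if two requests hit it at the same time. A minimal sketch of a thread-safe variant (an assumption in the spirit of the WebUtils wrapper above, not code from the original post) could use Lazy<T>:

using System;
using System.Web.UI;

public static class WebUtils
{
    // Lazy<T> guarantees the Page is created exactly once, even under concurrent requests.
    private static readonly Lazy<Page> _cachedPage = new Lazy<Page>(() => new Page());

    public static string GetWebResourceUrl(Type type, string resource)
    {
        // Only the type's assembly and the resource id are needed here,
        // so a bare, never-rendered Page instance is sufficient.
        return _cachedPage.Value.ClientScript.GetWebResourceUrl(type, resource);
    }
}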

    Read the article

  • .NET 3.5 Installation Problems in Windows 8

    - by Rick Strahl
Windows 8 installs with .NET 4.5. A default installation of Windows 8 doesn't seem to include .NET 3.0 or 3.5, although .NET 2.0 does seem to be available by default (presumably because Windows has app dependencies on that). I ran into some pretty nasty compatibility issues regarding .NET 3.5 which I'll describe in this post. I'll preface this by saying that depending on how you install Windows 8 you may not run into these issues. In fact, it's probably a special case, but one that might be common with developer folks reading my blog. Specifically it's the install order that screwed things up for me - in particular, installing Visual Studio before explicitly installing .NET 3.5 from Windows Features. If you install Visual Studio 2010 I highly recommend you install .NET 3.5 from Windows Features BEFORE you install Visual Studio 2010 and save yourself the trouble I went through. So when I installed Windows 8, and then looked at the Windows Features to install after the fact in the Windows Features dialog, I thought - .NET 3.5 - who needs it. I'd be happy to not have to install .NET 3.5, but unfortunately I found out quite a while after initial installation that one of my applications/tools (DevExpress's awesome CodeRush) depends on it and won't install without it. Enabling .NET 3.5 in Windows 8 If you want to run .NET 3.5 on Windows 8, don't download an installer - those installers don't work on Windows 8, and you don't need to do this because you can use the Windows Features dialog to enable .NET 3.5: And that *should* do the trick. If you do this before you install other apps that require .NET 3.5 and that install a non-SP1 version of it, you are going to have no problems. Unfortunately for me, even after I've installed the above, when I run the CodeRush installer I still get this lovely dialog: Now I double checked to see if .NET 3.5 is installed - it is, both for 32 bit and 64 bit. I went as far as creating a small .NET Console app and running it to verify that it actually runs. And it does… So naturally I thought the CodeRush installer is a little whacky. After some back and forth, Alex Skorkin on Twitter pointed me in the right direction: He asked me to look in the registry for exact info on which version of .NET 3.5 is installed here: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP where I found that .NET 3.5 SP1 was installed. This is the 64 bit key, which looks all correct. However, when I looked under the 32 bit node I found: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5 Notice that the service pack number is set to 0, rather than 1 (which it was for the 64 bit install), which is what the installer requires. So to summarize: the 64 bit version is installed with SP1, the 32 bit version is not. Uhm, Ok… thanks for that! Easy to fix, you say - just install SP1. Nope, not so easy, because the standalone installer doesn't work on Windows 8. I can't get either the .NET 3.5 installer or the SP1 installer to even launch. They simply start and hang (or exit immediately) without messages. I also tried to get Windows to update .NET 3.5 by checking for Windows Updates, which should pick up on the dated version of .NET 3.5 and pull down SP1, but that's also no go. Check for Updates doesn't bring down any updates for me yet. I'm sure at some random point in the future Windows will deem it necessary to update .NET 3.5 to SP1, but at this point it's not letting me coerce it to do it explicitly. 
How did this happen I'm not sure exactly whether this is cause and effect, but I suspect the story goes like this: Installed Windows 8 without support for .NET 3.5. Installed Visual Studio 2010, which installs .NET 3.5 (no SP). I now had .NET 3.5 installed but without SP1. I then: Tried to install CodeRush - Error: .NET 3.5 SP1 required. Enabled .NET 3.5 in Windows Features. I figured enabling the .NET 3.5 Windows Feature would do the trick. But still no go. Now I suspect Visual Studio installed the 32 bit version of .NET 3.5 on my machine, and Windows Features detected the previous install and didn't reinstall it. This left the 32 bit install at least with no SP1 installed. How to Fix it My final solution was to completely uninstall .NET 3.5 *and* to reboot: Go to Windows Features, uncheck the .NET Framework 3.5, restart Windows, go to Windows Features, check .NET Framework 3.5 - and voila, I now have a proper installation of .NET 3.5. I tried this before but without the reboot step in between, which did not work. Make sure you reboot between uninstalling and reinstalling .NET 3.5! More Problems The above fixed me right up, but in looking for a solution it seems that a lot of people are also having problems with .NET 3.5 installing properly from the Windows Features dialog. The problem there is that the feature wasn't properly loading from the installer disks or not downloading the proper components for updates. It turns out you can explicitly install Windows features using the DISM tool in Windows: dism.exe /online /enable-feature /featurename:NetFX3 /Source:f:\sources\sxs You can try this without the /Source flag first - which uses the hidden Windows installer files if you kept those. Otherwise insert the DVD or ISO and point at the \sources\sxs path where the installer lives. This also gives you a little more information if something does go wrong. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, .NET
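If you need to detect this kind of mismatch programmatically - say, from your own installer - a minimal sketch along the following lines could read the same NDP registry keys mentioned above for both registry views. The helper and its names are illustrative assumptions (and it requires .NET 4.0+ for RegistryView), not code from the post:

using System;
using Microsoft.Win32;

static class NetFx35Check
{
    // Returns the .NET 3.5 service pack level for the given registry view,
    // or -1 if .NET 3.5 is not marked as installed in that view.
    static int GetNetFx35SpLevel(RegistryView view)
    {
        using (var baseKey = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, view))
        using (var key = baseKey.OpenSubKey(@"SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5"))
        {
            if (key == null || (int)key.GetValue("Install", 0) != 1)
                return -1;
            return (int)key.GetValue("SP", 0);
        }
    }

    static void Main()
    {
        Console.WriteLine("64 bit SP level: " + GetNetFx35SpLevel(RegistryView.Registry64));
        Console.WriteLine("32 bit SP level: " + GetNetFx35SpLevel(RegistryView.Registry32));
    }
}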

    Read the article

  • Internet Explorer and Cookie Domains

    - by Rick Strahl
I've been bitten by some nasty issues today in regard to using a domain cookie as part of my FormsAuthentication operations. In the app I'm currently working on we need to have single sign-on that spans multiple sub-domains (www.domain.com, store.domain.com, mail.domain.com etc.). That's what a domain cookie is meant for - when you set the cookie with a Domain value of the base domain, the cookie stays valid for all sub-domains. I've been testing the app for quite a while and everything is working great. Finally I get around to checking the app with Internet Explorer and I start discovering some problems - specifically on my local machine using localhost. It appears that Internet Explorer (all versions) doesn't allow you to specify a domain of localhost, a local IP address or a machine name. When you do, Internet Explorer simply ignores the cookie. In my last post I talked about some generic code I created to basically parse out the base domain from the current URL so a domain cookie would automatically be used, using this code: private void IssueAuthTicket(UserState userState, bool rememberMe) { FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, userState.UserId, DateTime.Now, DateTime.Now.AddDays(10), rememberMe, userState.ToString()); string ticketString = FormsAuthentication.Encrypt(ticket); HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, ticketString); cookie.HttpOnly = true; if (rememberMe) cookie.Expires = DateTime.Now.AddDays(10); var domain = Request.Url.GetBaseDomain(); if (domain != Request.Url.DnsSafeHost) cookie.Domain = domain; HttpContext.Response.Cookies.Add(cookie); } This code works fine on all browsers but Internet Explorer, both locally and on full domains. And it also works fine for Internet Explorer with actual 'real' domains. However, this code fails silently for IE when the domain is localhost or any other local address. In that case Internet Explorer simply refuses to accept the cookie and fails to log in. Argh! The end result is that the solution above, trying to automatically parse the base domain, won't work, as local addresses end up failing. Configuration Setting Given this screwed up state of affairs, the best solution to handle this is a configuration setting. Forms Authentication actually has a domain key that can be set for FormsAuthentication, so that's a natural choice for storing the domain name: <authentication mode="Forms"> <forms loginUrl="~/Account/Login" name="gnc" domain="mydomain.com" slidingExpiration="true" timeout="30" xdt:Transform="Replace"/> </authentication> Although I'm not actually letting FormsAuth set my cookie directly, I can still access the domain name from the static FormsAuthentication.CookieDomain property, by changing the domain assignment code to: if (!string.IsNullOrEmpty(FormsAuthentication.CookieDomain)) cookie.Domain = FormsAuthentication.CookieDomain; The key is to only set the domain when actually running on a full authority, and leave the domain key blank on the local machine to avoid the local address debacle. Note if you want to see this fail with IE, set the domain to domain="localhost" and watch in Fiddler what happens. Logging Out When specifying a domain key for a login it's also vitally important that that same domain key is used when logging out. Forms Authentication will do this automatically for you when the domain is set and you use FormsAuthentication.SignOut(). 
If you use an explicit Cookie to manage your logins or other persistent values, make sure that when you log out you also specify the domain. IOW, the expiring cookie you set for a 'logout' should match the same settings - name, path, domain - as the cookie you used to set the value. HttpCookie cookie = new HttpCookie("gne", ""); cookie.Expires = DateTime.Now.AddDays(-5); // make sure we use the same logic to release cookie var domain = Request.Url.GetBaseDomain(); if (domain != Request.Url.DnsSafeHost) cookie.Domain = domain; HttpContext.Response.Cookies.Add(cookie); I managed to get my code to do what I needed it to, but man I'm getting so sick and tired of fixing IE-only bugs. I spent most of the day today fixing a number of small IE layout bugs along with this issue, which took a bit of time to trace down. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET
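The GetBaseDomain() extension method used in both snippets comes from the author's earlier post and isn't reproduced here. As a rough sketch of what such a helper might look like (an assumption, not the original implementation - note it deliberately ignores multi-part TLDs such as .co.uk):

using System;
using System.Net;

public static class UriExtensions
{
    // Minimal sketch: returns the last two host segments (store.domain.com -> domain.com).
    // IP addresses and single-label hosts such as "localhost" are returned unchanged.
    public static string GetBaseDomain(this Uri uri)
    {
        string host = uri.DnsSafeHost;

        IPAddress ip;
        if (IPAddress.TryParse(host, out ip) || !host.Contains("."))
            return host;

        string[] parts = host.Split('.');
        if (parts.Length <= 2)
            return host;

        return parts[parts.Length - 2] + "." + parts[parts.Length - 1];
    }
}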

    Read the article

  • Rendering ASP.NET MVC Razor Views outside of MVC revisited

    - by Rick Strahl
Last year I posted a detailed article on how to render Razor Views to string both inside of ASP.NET MVC and outside of it. In that article I showed several different approaches to capture the rendering output. The first and easiest is to use an existing MVC Controller Context to render a view by simply passing the controller context, which is fairly trivial, and I demonstrated a simple ViewRenderer class that simplified the process down to a couple lines of code. However, if no Controller Context is available the process is not quite as straightforward, and I referenced an old, much more complex example that uses my RazorHosting library, which is a custom self-contained implementation of the Razor templating engine that can be hosted completely outside of ASP.NET. While it works inside of ASP.NET, it’s an awkward solution when running inside of ASP.NET, because it requires a bit of setup to run efficiently. Well, it turns out that I missed something in the original article, namely that it is possible to create a ControllerContext, if you have a controller instance, even if MVC didn’t create that instance. Creating a Controller Instance outside of MVC The trick to make this work is to create an MVC Controller instance – any Controller instance – and then configure a ControllerContext through that instance. As long as an HttpContext.Current is available it’s possible to create a fully functional controller context, as Razor can get all the necessary context information from the HttpContextWrapper(). The key to make this work is the following method: /// <summary> /// Creates an instance of an MVC controller from scratch /// when no existing ControllerContext is present /// </summary> /// <typeparam name="T">Type of the controller to create</typeparam> /// <returns>Controller Context for T</returns> /// <exception cref="InvalidOperationException">thrown if HttpContext not available</exception> public static T CreateController<T>(RouteData routeData = null) where T : Controller, new() { // create a disconnected controller instance T controller = new T(); // get context wrapper from HttpContext if available HttpContextBase wrapper = null; if (HttpContext.Current != null) wrapper = new HttpContextWrapper(System.Web.HttpContext.Current); else throw new InvalidOperationException( "Can't create Controller Context if no active HttpContext instance is available."); if (routeData == null) routeData = new RouteData(); // add the controller routing if not existing if (!routeData.Values.ContainsKey("controller") && !routeData.Values.ContainsKey("Controller")) routeData.Values.Add("controller", controller.GetType().Name .ToLower() .Replace("controller", "")); controller.ControllerContext = new ControllerContext(wrapper, routeData, controller); return controller; } This method creates an instance of a Controller class from an existing HttpContext, which means this code should work from anywhere within ASP.NET to create a controller instance that’s ready to be rendered. This means you can use this from within an Application_Error handler, as I needed to, or even from within a WebAPI controller as long as it’s running inside of ASP.NET (ie. not self-hosted). Nice. So using the ViewRenderer class from the previous article I can now very easily render an MVC view outside of the context of MVC. 
Here’s what I ended up with in my Application’s custom error HttpModule: protected override void OnDisplayError(WebErrorHandler errorHandler, ErrorViewModel model) { var Response = HttpContext.Current.Response; Response.ContentType = "text/html"; Response.StatusCode = errorHandler.OriginalHttpStatusCode; var context = ViewRenderer.CreateController<ErrorController>().ControllerContext; var renderer = new ViewRenderer(context); string html = renderer.RenderView("~/Views/Shared/GenericError.cshtml", model); Response.Write(html); } That’s pretty sweet, because it’s now possible to use ViewRenderer just about anywhere in any ASP.NET application, not only inside of controller code. This also allows the constructor for the ViewRenderer from the last article to work without a controller context parameter, using a generic controller as the base for the controller context when none is passed: public ViewRenderer(ControllerContext controllerContext = null) { // Create a known controller from HttpContext if no context is passed if (controllerContext == null) { if (HttpContext.Current != null) controllerContext = CreateController<ErrorController>().ControllerContext; else throw new InvalidOperationException( "ViewRenderer must run in the context of an ASP.NET " + "Application and requires HttpContext.Current to be present."); } Context = controllerContext; } In this case I use the ErrorController class, which is a generic controller instance that exists in the same assembly as my ViewRenderer class, and that works just fine since ‘generically’ rendered views tend to not rely on anything from the controller other than the model, which is explicitly passed. While these days most of my apps use MVC, I do still have a number of generic pieces in most of these applications where Razor comes in handy. This includes modules like the above, which when they error often need to display error output. In other cases I need to generate string template output for emailing or logging data to disk. Being able to simply render an arbitrary View and pass in a model makes this super nice and easy, at least within the context of an ASP.NET application! You can check out the updated ViewRenderer class below to render your ‘generic views’ from anywhere within your ASP.NET applications. Hope some of you find this useful. Resources: ViewRenderer Class in Westwind.Web.Mvc Library (GitHub), Original ViewRenderer Article, Razor Hosting Library (GitHub), Original Razor Hosting Article. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in ASP.NET, MVC
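As a usage illustration (not from the original post), the same ViewRenderer can render a Razor view to a string for something like an e-mail body; the view path and model type below are made-up examples:

// Hypothetical usage of the ViewRenderer class described above.
// The view path and OrderConfirmationModel type are illustrative only.
public string RenderOrderConfirmationEmail(OrderConfirmationModel model)
{
    // No ControllerContext passed - the renderer builds one from HttpContext.Current
    var renderer = new ViewRenderer();
    return renderer.RenderView("~/Views/Emails/OrderConfirmation.cshtml", model);
}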

    Read the article

  • Building a Distributed Commerce Infrastructure in the Cloud using Azure and Commerce Server

    - by Lewis Benge
One of the biggest questions I routinely get asked is how scalable Commerce Server is. Of course the text book answer is that the product has been around for 10 years and powers some of the largest e-Commerce websites in the world, so it scales horizontally extremely well. One argument however is: what if you can't predict the growth of demand required of your Commerce Platform, or need the ability to scale up during busy seasons such as Christmas for a retail environment, but are hesitant about maintaining the infrastructure on a year-round basis? The obvious answer is to utilise the many elasticated cloud infrastructure providers that are establishing themselves in the ever-growing market; the problem however is that Commerce Server is still a product which has a legacy, tightly coupled dependency on Windows and IIS components. Commerce Server 2009 codename "R2" however introduced the concept of an n-tier deployment of Microsoft Commerce Server, meaning you are no longer tied to the core objects API but instead have serializable Commerce Entity objects and business logic, allowing Commerce Server to now be built into a WCF-based SOA architecture. Presentation layers no longer need to remain on the same physical machine as the application server, meaning you can now build the user experience in multiple technologies and host them in multiple places – leveraging the transport benefits that a WCF service may bring, such as message queuing, security, and multiple end-points. All of this logic will still need to remain in your internal infrastructure, for two reasons. Firstly, cloud based computing infrastructure does not support PCI security requirements, and secondly, even though many of the legacy Commerce Server dependencies have been abstracted away within this version of the application, it is still not fully supported to be deployed exclusively in the cloud. If you do wish to benefit from the scalability of the cloud however, you can still achieve a great Commerce Server and Azure setup by utilising both the Azure App Fabric (in terms of the service bus and authentication services) and Windows Azure to host any online presence you may require. The architecture would be something similar to this: This setup would allow you to construct your Commerce Services as part of your on-site infrastructure. These services would contain all of the channel's custom business logic, and provide the overall interface back into the underlying Commerce Server components. It would be recommended that services are constructed around the specific business domains of the application, which based on your business model would usually consist of separate services around Catalogue, Orders, Search, Profiles, and Marketing. The App Fabric service bus is then used to further abstract and aggregate the services, making them available to the cloud and subsequently secured by App Fabric's authentication services. These services are now available for consumption by any client, using any supported technology – not just .NET. This means you are now able to construct apps for iPhone, integrate with Java based POS devices, and serve many other potential uses. This aggregation is useful, and forms the basis of the further strategy around diversifying and enhancing the e-Commerce experience, but it also provides the foundation for the scalability we want to gain from utilising a cloud-based application platform. 
The Windows Azure application platform is Microsoft's solution for benefiting from the true economies of scale in terms of the elasticity of the cloud. Just before the launch of the Azure Platform, Domino's Pizza actually managed to run their whole SuperBowl operation on the scalability of Windows Azure, and simply switched back to their traditional operation the next day with no residual infrastructure costs. The platform can also natively subscribe to services and messages exposed within the AppFabric service bus, making it an ideal solution for building and deploying a presentation layer which needs the support of scalable infrastructure – such as a high demand, public facing e-Commerce portal, or a promotional element of a brand. Windows Azure has excellent support for ASP.NET, including its own caching providers, meaning expensive operations such as catalogue queries can persist in memory on the application server, reducing the demand on internal infrastructure and prioritising it for more business critical operations such as receiving orders and processing payments. Windows Azure also supports other languages, meaning that with this approach you can technically build a Commerce Server presentation layer in Java, PHP, or Ruby – or equally in ASP.NET or Silverlight – without having to change any of the underlying business or Commerce Server implementation. This SOA-style architecture is one of the primary differentiators for Commerce Server as a product in the e-Commerce market, and now with the introduction of a WCF capability in Commerce Server 2009/2009 R2 the opportunities for extensibility of both the user experience and integration with third parties are drastically increased, all with no effect on the underlying channel logic. So if you are looking at deployment options for your e-Commerce application to help support demand in a cost effective way, I would highly recommend you consider looking at Windows Azure, and if you have any questions in particular about this style of deployment, please feel free to get in touch!
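To make the WCF service layer described above a little more concrete, here is a minimal sketch of what one of the domain-aligned service contracts (for example a catalogue service) might look like. The interface, operation names and data contract are illustrative assumptions only, not part of Commerce Server's actual API:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Illustrative contract only - names and shapes are assumptions, not the Commerce Server API.
[ServiceContract(Namespace = "http://example.com/commerce/catalogue")]
public interface ICatalogueService
{
    [OperationContract]
    ProductDto GetProduct(string productId);

    [OperationContract]
    IList<ProductDto> Search(string keyword, int maxResults);
}

[DataContract]
public class ProductDto
{
    [DataMember] public string ProductId { get; set; }
    [DataMember] public string DisplayName { get; set; }
    [DataMember] public decimal ListPrice { get; set; }
}

A contract like this would be hosted inside the internal infrastructure, exposed through the App Fabric service bus, and consumed by the Windows Azure hosted presentation layer or any other client technology.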

    Read the article

  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today's market has moved from network centric to consumer centric and focuses primarily on the customer experience. This has resulted in several product management challenges such as increased complexity and volume of offerings, creating product variants, accelerating time-to-market, the ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence at the BSS layer, product co-creation and increasing audit and security concerns for service providers. The document discusses how enterprise product management enabled by PLM-based product catalogue solutions helps to launch next generation products rapidly in the context of the Telecommunication Industry. 1.0. Introduction Figure 1: Business Scenario Modern business demands the launch of complex products in a very short timeframe and effecting changes in the price plan faster without IT intervention. One of the key transformation initiatives companies are focusing on is in the area of product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based Product Management solutions developed on industry-wide standards. The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems. 2.0. An Overview of Product Management in Telecom Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, increased volume of offerings, bundled offering structures and ever increasing regulatory requirements. In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management. It is a roadblock in the journey towards becoming an agile organization. 
Figure 2: Complexity in Product Management 'Network Technology' is the new dimension in telecom product management where the same products are realized through different networks, i.e., from a siloed network to a converged network. Consequently, the product solution is different. Figure 3: Current Scenario - Pain Points in Product Management The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation. 3.0. Transformation of Next Generation Product Management Companies must focus on their Product Management Transformation Journey in the areas of: ·       Management of a single truth of product information across the organization/geographies, which is currently managed in heterogeneous systems ·       Management of the Intellectual Property (IP) on the product concept and partnership in the design of discrete components to integrate into the system ·       Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation ·       Management of effective operational separation to comply with regulatory bodies ·       Reuse of existing designs and adding relevant features such as value-added services to enable effective product bundling Figure 4: Next generation needs PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler towards product management transformation and rapid product launch. 4.0. PLM-based Enterprise Product Management Figure 5: PLM-based Enterprise Product Mastering Enterprise Product Management (EPM) enables the business to manage complex product attribute data in complex environments. Product Mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused. 4.1 The Business Case for Telco PLM-based Solutions for Enterprise Product Management ·       Telco PLM-based Product Mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules ·       PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective) ·       They maintain relationships/links between different elements of the entire product definition ·       Telco PLM packages are specialized in next generation lifecycle management requirements of products (such as revision and state management, test and release management, role management and impact analysis) ·       They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions where the product data managed is mostly order oriented and transactional ·       The new breed of Telco PLM packages are designed with 'open' standards such as SID and eTOM. They are interoperable and support integration frameworks such as subscription and notification. ·       Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain 4.2 Various Architectures/Approaches for Product Mastering using Telco PLM Systems 4.2.a Single Central Product Management (Mastering) Approach Figure 6: Single Central Product Management (Master) Approach This approach is implemented across verticals such as aerospace and automotive. 
It focuses on a physically centralized product master on which other sources are dependent. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment. In addition, the product definition/authoring environment is centralized. The existing legacy product definition data available in the CRM product catalogue, billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution. Architectural changes must be made in the existing business landscape of applications to create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data or by adapting the applications to use the rules to be managed and published. Complete product configuration validation will be done in the enterprise/central product catalogue and the final configuration will be sent to the B/OSS systems through the SOA compliant product distribution architecture. This approach/architecture enables greater control in terms of product data management and product data governance. 4.2.b Federated Product Management (Mastering) Architecture Figure 7: Federated Product Management (Mastering) Architecture In the federated product mastering approach, the basic unique product definition data (product id, description, product hierarchy, basic price plans and simple product design rules) will be centrally created and maintained, while the advanced product definition (product bundling, promotions, offers and discount plans) will be created in the respective downstream OSS systems. For example, basic product definitions such as attributes, product hierarchy and basic price plans will be created and maintained in the Enterprise/Central product reference catalogue and distributed to downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans over the basic product definition and master the advanced product definition. The central reference database accesses the other source product master data and assembles a point-in-time consolidated view of the product. This approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization's application landscape. However, this approach will not result in better product data management and data governance. 5.0 Customer Scenario – Before EPC deployment A leading global telecommunications service provider wanted to launch a quad play and triple play service offering in the shortest possible lead time. The service provider was offering Broadband and VoIP services to customers. The company wanted to reuse a majority of the Broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play. 
The challenges in launching the new service offerings were: Figure 8: Triple Play Plan ·       Broadband product data was stored in multiple product catalogues (CRM catalogue, Billing catalogue, spreadsheets) ·       Product managers spent a lot of time performing tasks involving duplication or re-keying of data. Manual effort caused errors, cost and time overruns. ·       No effective product and price data governance mechanism. Price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue. ·       Product data had reusability issues and was not in a structured format. This resulted in uncontrolled product portfolio creation and product management issues. ·       Lack of an enterprise product model resulted in product distribution challenges and thus delays in product launch. ·       Designers were constrained by existing legacy product management solutions when modelling product/service requirements and product configuration rules such as upgrading, downgrading and cross-selling. 5.1 Customer Scenario - After EPC deployment Figure 9: SOA-based end-to-end EPC Solution The company deployed a PLM-based Enterprise Product Catalogue solution to launch the quad play service after evaluating various product catalogues. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchy for new offerings was created using the entities defined in the Enterprise Product Model. Supplier product catalogue data such as routers and set-top boxes were loaded onto the new solution through a SOA-based web service. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in a SID format and were distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS systems were handled using the transformation layer as part of the solution. 6.0 How PLM enabled Product Management Transformation Figure 10: Product Management Transformation The PLM-based Product Catalogue solution helped the customer reduce the product launch cycle time by 30% and enabled the transformation of Product Management for next generation services. 7.0 Conclusion On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and increased complexity of products. On the other hand, ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, flexibility to cross-sell and up-sell, introducing new value-added services, and leveraging Web 2.0 concepts and network capabilities. Consequently, large scale IT transformation initiatives to improve ARPU, supporting network and business transformations, are a business imperative. Product Management has become a focus area. Companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero touch automation and rapid product launch. References: 1. Preston G. Smith and Donald G. Reinertsen, "Developing Products in Half the Time", Van Nostrand Reinhold. 2. John G. Innes, "Achieving Successful Product Change", Pitman Publishing. 3. 
D. T. Pham and R. M. Setchi (16 January 2001), "Authoring environment for documentation development", University of Wales Cardiff, U.K., Proceedings of the Institution of Mechanical Engineers, Vol. 215, Part B. 4. Oracle Product Hub for Communications: http://www.oracle.com/us/products/applications/master-data-management/product-hub-082059.html

    Read the article

  • SnagIt Live Writer Plug-in Updated

    - by Rick Strahl
Ah, I love SnagIt from TechSmith and I use the heck out of it almost every day. So it's no surprise that I decided some time ago to integrate SnagIt into a few applications that require screen shots extensively. It's been a while since I've posted an update to my small SnagIt Windows Live Writer plug-in. A few nagging issues have crept up with changes in the way recent versions of SnagIt handle captures, and they have been addressed in this update. Personally I love SnagIt and use it extensively, mostly for blogging, but also for writing documentation and articles etc. While there are many other (and also free) tools out there to do basic screen captures, SnagIt continues to be the most convenient tool for me with its nice built-in capture and effects editor that makes creating professional looking captures childishly simple. And maybe even more importantly: SnagIt has a COM interface that can be automated, which makes it super easy to embed into other applications. I've built plugins for SnagIt as well as for one of my company's own tools, Html Help Builder. If you use the Windows Live Writer offline WebLog Editor to write blog posts and have a copy of SnagIt, it's probably worth your while to check this out if you haven't already. In case you haven't, this plugin integrates SnagIt with Live Writer so you can easily capture and edit content and embed it into a post. Captures are shown in the SnagIt Preview editor where you can edit the image and apply image markup or effects, before selecting Finish (or Cancel). The final image can then be pasted directly into your Live Writer post. When installed, the SnagIt plug-in shows up on the PlugIn list or in the Plug-Ins toolbar shortcut: Once you select the plug-in you get the capture window that allows you to customize the capture process, which includes most of the useful SnagIt capture options: Once you're done capturing, the image shows up in the SnagIt Image Editor and you can crop, mark up and apply effects. When done you click the Finish button and the image is embedded right into your blog post. Easy - how do you think the images in this blog entry got in here? The beauty of SnagIt is that it's all easily integrated - capturing, editing and embedding only take a few seconds, especially if you save image effect presets in SnagIt. What's updated The main issue addressed in this update has to do with how the plug-in restores the Live Writer window. When a capture starts, Live Writer gets minimized to get out of the way to let you pick your capture source. When the capture is complete and the image has been embedded, Live Writer is activated once again. Recent versions of SnagIt however changed SnagIt's window positioning so that Live Writer ended up popping back up behind the SnagIt window, which was pretty annoying. This update pushes Live Writer back to the top of the window stack using some delaying tactics in the code. There have also been a few small changes to the way the code interacts with the COM object, which is more reliable if a capture fails, SnagIt blows up, or it is locked because it's already in a capture outside of the automation interface. Source Code SnagIt Automation is something I actually use a lot. As mentioned, I've integrated this automation into Live Writer as well as my documentation tool Html Help Builder, which I use just about daily. The SnagIt integration has a similar interface in that application and provides similar functionality. 
Because it's quite useful to embed SnagIt into other apps, there's source code that you can download and embed into your own applications. The code includes both the dialog class that is automated from Live Writer and the basic capture component that captures images to a disk file. Resources Download the SnagIt Capture Plug-in Installer An MSI installer that you can run that will install the plug-in into Live Writer's PlugIns directory. Source Code to the SnagIt Capture Plug-in Contains the plug-in assembly, as well as the source code to the plug-in and the setup project. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in Live Writer, WebLog

    Read the article

  • Top Tweets SOA Partner Community – November 2011

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

soacommunity SOA Community Dutch ACEs SOA Partner Community award celebration wp.me/p10C8u-i9
OracleBPM Gauging Maturity of your BPM Strategy – part 1/2, bit.ly/vJE9UZ
MagicChatzi Dutch ACE’s and ACE Directors had a small party: achatzia.blogspot.com/2011/11/celebr…
leonsmiers #Capgemini #Oracle #BPM Blog index bit.ly/tUYtvD #yam
lucasjellema Blog post by my colleague Emiel on the AMIS blog: Timeouts in Oracle SOA Suite 11g – tinyurl.com/73amo3r
biemond Solving __OAUX_GENXSD_.TOP.XSD with BPEL: When you use an external web service in combination with a BPEL servic… t.co/Gzzatzrr
OracleBlogs Jumpstart Fusion Middleware projects with Oracle User Productivity Kit ow.ly/1fJMev
cpurdy on Oracle Coherence data grid, its new RESTful APIs, and Oracle Service Bus (OSB): blogs.oracle.com/slc/entry/orac…
Accenture Learn how Service-Oriented Architecture can help public service agencies solve legacy system issues. bit.ly/sTteM4 #SOA
eelzinga Thanks for organising it Andreas! #soacommunity
eelzinga Had a nice drink with the fellow Dutch Oracle ACE members for a little celebration of the SOA Community Partner Award. #soacommunity
EmielP Wrote a blogpost about timeouts in the #Oracle #SOA Suite: bit.ly/uhUcrX
OracleBlogs Processing Binary Data in SOA Suite 11g t.co/Tzd1xBsY
OracleBlogs Finding the Value in SOA by Stephen Bennett t.co/9MMLJoLz
OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/5viNj8ib
OracleBlogs Demo: Business Transaction Management with SOA Management Pack ow.ly/1fFBv3
OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/Dnfzo0PN
oracletechnet Wikis.oracle.com lives
leonsmiers A new #capgemini #oracle #blog, Measuring the Human Task activity in Oracle BPM bit.ly/uPan08 #yam @CapgeminiOracle
OTNArchBeat 3 SOA business cases, explained in a 2-minute elevator speech | @JoeMcKendrick t.co/aYGNkZup
OTNArchBeat Gartner, Inc. places Oracle SOA Governance in Magic Quadrant for SOA Governance Technologies t.co/bSG5cuTr
Jphjulstad Red carpet to Oracle BPM – evita.no evita.no/ikbViewer/soa-…
Oracle #Oracle Named a Leader in #SOA Governance Magic Quadrant by Leading Analyst Firm t.co/prnyGu2U
soacommunity What presentations & topics do you like to see at the next SOA & BPM & Webcenter Community Forum early 2012? #soacommunity
soacommunity Oracle BPM Suite 11g Handbook Released wp.me/p10C8u-hU
OTNArchBeat SOA Development Virtual Developer Day (On Demand) | @soacommunity bit.ly/sqhQmX
OracleBlogs SOA Development Virtual Developer Day (On Demand) t.co/MDrdnx0h
OracleBlogs Specialized Partners Only! New Service to Promote Your Events t.co/qTgyEpY4
biemond @stevendavelaar this is for you t.co/hInKCcfY it explains your sso problem
soacommunity SOA Development Virtual Developer Day (on demand) t.co/flXPWk4R
soacommunity IPT Swiss SOA Experts – thanks for the nice ink wp.me/p10C8u-i3
soacommunity Enjoy #wjax specially the presentations from our #ACE @t_winterberg @myfear @AdamBien pic.twitter.com/m8VcBSG3
OTNArchBeat Discounts on books, more, for Oracle Technology Network members bit.ly/vRxMfB
OracleSOA Justify the ROI of SOA in 10 seconds…a pic is worth 1000 words bit.ly/roi_of_soa_img #oraclesoa #soa #oow11
orclateamsoa A-Team SOA Blog: Case Management in BPM 11g - Mark Foster: Oracle BPM 11g & Case Management I’ve seen… t.co/l5zb6pFr
t_winterberg The next SIG #SOA is coming up: December 7 in Hamburg. New tooling and experiences around Oracle FMW, SOA, BPM… (cont) deck.ly/~YC57v
OracleBlogs Continuous Integration for SOA/BPM ow.ly/1fsekI
OracleBlogs BPM Suite 11g Handbook Released ow.ly/1frlzv
lucasjellema Iterating over collection (array) in BPM (and dispatching jobs for entries in array): t.co/1SEhSvWv – subprocesses are the key.
lucasjellema Useful tip from Mark Nelson: BPM API documentation (as well as Human Workflow Service) available: redstack.wordpress.com/2011/09/28/api…
OTNArchBeat SOA, cloud: it’s the architecture that matters | Joe McKendrick zd.net/tNCiTF
orclateamsoa Building a job dispatcher in BPM -or- Iterating over collections in BPM ow.ly/1frbrz
orclateamsoa Using the Database as a Policy Store for SOA 11g ow.ly/1frbrA
OracleBPM Oracle launches Process Accelerators for BPM: t.co/XPEE61QL
Jphjulstad Human-Centric BPM Selection Checklist t.co/3TZXZHLH
OracleBlogs Fusion Middleware General Session at OOW 2011: Missed It? Read On… t.co/aU5JvM6K
gschmutz Great! The product page of the OSB 11g Development Cookbook is now online: t.co/5Jfbe6Ng Looking forward to get it, u too?
brhubart Oracle IT Architecture Essentials; Lightweight Composite Service Development with SCA and Spring; Cloud Migration ow.ly/7esNg
eelzinga New blogpost: Oracle Service Bus, Generic fault handling, bit.ly/sGr4UL #osb #oracleservicebus

For regular information on Oracle SOA Suite become a member of the SOA Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). Blog Twitter LinkedIn Mix Forum Technorati Tags: soacommunity, twitter, Oracle, SOA Community, Jürgen Kress, OPN

    Read the article

  • Download Microsoft MSDN Magazine PDF Issues For Offline Reading

    - by Kavitha
    MSDN Magazine is a must-read for every Microsoft developer. It provides in-depth analysis and excellent guides on all the latest Microsoft development tools and technologies. Every month you can grab the magazine on the stands or read it online for free. What if you want to read the magazine offline on your PC or mobile devices? Just grab a PDF version of the magazine and read it whenever you want. The PDF versions of MSDN Magazine are very handy for travellers who don't always have internet access. In this post we provide links to download the PDF version, the source code and the online version of every monthly MSDN Magazine issue starting from 2010. Bookmark this post and keep checking it monthly to get access to new MSDN Magazine links.

December 2010 Issue - Download PDF (not yet available) | Download Source Code | Read Magazine Online
November 2010 Issue - Download PDF (not yet available) | Read Magazine Online | Download Source Code
October 2010 Issue - Download PDF | Download Source Code | Read Magazine Online
September 2010 Issue - Download PDF | Download Source Code | Read Magazine Online
August 2010 Issue - Download PDF | Download Source Code | Read Magazine Online
July 2010 Issue - Download PDF | Download Source Code | Read Magazine Online
June 2010 Issue - Download PDF | Download Source Code | Read Magazine Online
May 2010 Issue - Download PDF | Download Source Code | Read Magazine Online
April 2010 Issue - Download PDF | Read Magazine Online | Download Source Code
March 2010 Issue - Download PDF | Download Source Code | Read Magazine Online
February 2010 Issue - Download PDF | Download Source Code | Read Magazine Online
January 2010 Issue - Download PDF | Download Source Code | Read Magazine Online

This article, titled Download Microsoft MSDN Magazine PDF Issues For Offline Reading, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Oracle Database 12c: Oracle Multitenant Option

    - by hamsun
    1. Why? 2. What is it? 3. How?

1. Why?

The main idea of the 'grid' is to share resources and to make better use of storage, CPU and memory. If a database administrator wishes to implement this idea, he or she must consolidate many databases into one database. One of the concerns of running many applications together in one database is: what will happen if one of the applications must be restored because of a human error? Tablespace point-in-time recovery can be used for this purpose, but there are a few prerequisites; most importantly, the tablespaces must be strictly separated for each application. Another reason for creating separate databases is security: each customer has his own database. The result is often a proliferation of smaller databases. Each of them must be maintained and upgraded, and each allocates virtual memory and runs background processes, thereby wasting resources. Oracle 12c offers another possibility for virtualization, providing isolation at the database level: the multitenant container database holding pluggable databases.

2. What is it?

Pluggable databases are logical units inside a multitenant container database, which consists of one container database and up to 252 pluggable databases. The SGA is shared, as are the background processes. The multitenant container database holds metadata common to the pluggable databases inside its System and Sysaux tablespaces, and there is just one Undo tablespace. The pluggable databases have smaller System and Sysaux tablespaces, containing just their 'personal' metadata. New data dictionary views make the information available either at the pdb level (dba_ views) or at the container level (cdb_ views). There are local users, which are known only in specific pluggable databases, and common users, which are known in all containers. Pluggable databases can easily be plugged into another multitenant container database or converted from a non-CDB, and they can undergo point-in-time recovery.

3. How?

Creating a multitenant container database can be done using the Database Configuration Assistant, where you find the new option: Create as Container Database. If you prefer hand-made databases you can execute the command from an instance in nomount state: CREATE DATABASE cdb1 ENABLE PLUGGABLE DATABASE …. And of course this can also be achieved through Enterprise Manager Cloud Control. A freshly created multitenant container database consists of two containers: the root container as the 'rack', and a seed container, a template for future pluggable databases.

There are 4 ways to create other pluggable databases: 1. Create an empty pdb from the seed 2. Plug in a non-CDB 3. Move a pdb from another CDB 4. Copy a pdb from another CDB

We will discuss option 2: how to plug a non-CDB into a multitenant container database. Three different methods are available: 1. Create an empty pdb and use Data Pump in traditional export/import mode or in Transportable Tablespace or Database mode; this method is suitable for pre-12c databases. 2. Create an empty pdb and use GoldenGate replication; when the pdb catches up with the non-CDB, you fail over to the pdb. 3. Databases of version 12c or higher can be plugged in with the help of the new dbms_pdb package.

This is a demonstration of method 3. Step 1: Connect to the non-CDB to be plugged in and create an XML file with a description of the database. The XML file is written to $ORACLE_HOME/dbs by default and contains mainly information about the datafiles.
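As a rough illustration of step 1, a minimal sketch could look like the following. The database name noncdb1 and the target path for the XML file are made-up example values, not taken from the article:

    -- Run in the non-CDB that is to be plugged in.
    -- Describing a non-CDB requires it to be open read only.
    SHUTDOWN IMMEDIATE
    STARTUP OPEN READ ONLY

    -- Generate the XML manifest that describes the non-CDB and its datafiles
    -- (by default the file is written to $ORACLE_HOME/dbs).
    BEGIN
      DBMS_PDB.DESCRIBE(pdb_descr_file => '/u01/app/oracle/noncdb1.xml');
    END;
    /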
Step 2: Check whether the non-CDB can be plugged into the multitenant container database.

Step 3: Create the pluggable database while connected to the multitenant container database. With the nocopy option the datafiles are reused, but the tempfile is created anew. A service is created and registered automatically with the listener.

Step 4: Delete unnecessary metadata from the PDB SYSTEM tablespace. To connect to the newly created pdb, edit tnsnames.ora and add an entry for the new pdb. Connect to the plugged-in non-CDB and clean up the data dictionary to remove entries that are now maintained in the multitenant container database. As all kept objects have to be recompiled, this takes a few minutes.

Step 5: The plugged-in database is automatically synchronised, by creating the common users and roles, when it is opened in read write mode for the first time.

Step 6: Verify tablespaces and users. There is only one local tablespace (users) and one local user (scott) in the plugged-in non-CDB pdb_orcl.

This method of plugging in a non-CDB is fast and easy for 12c databases; a consolidated SQL sketch of steps 2 to 5 is shown at the end of this article. To unplug a pluggable database from a CDB, create a new non-CDB, use the new full transportable feature of Data Pump, and then drop the pluggable database.

About the Author: Gerlinde has been working for Oracle University Germany as one of our Principal Instructors for over 14 years. She started with Oracle 7 and became an Oracle Certified Master for Oracle 10g and 11g. She is a specialist in Database Core Technologies, with profound knowledge of Backup & Recovery, Performance Tuning for DBAs and Application Developers, Data Warehouse Administration, Data Guard and Real Application Clusters.
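To round off the walkthrough, here is the consolidated SQL*Plus sketch of steps 2 to 5 referred to above. It reuses the illustrative manifest path from the step 1 sketch and the pdb_orcl name mentioned in step 6; treat it as an outline under those assumptions, since the exact clauses (for example TEMPFILE handling) depend on your file layout:

    -- Step 2: in the CDB root, check whether the described non-CDB can be plugged in.
    SET SERVEROUTPUT ON
    DECLARE
      compatible BOOLEAN;
    BEGIN
      compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                      pdb_descr_file => '/u01/app/oracle/noncdb1.xml');
      DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible THEN 'Compatible'
                                ELSE 'See PDB_PLUG_IN_VIOLATIONS' END);
    END;
    /

    -- Step 3: create the pluggable database; NOCOPY reuses the existing datafiles in place.
    CREATE PLUGGABLE DATABASE pdb_orcl
      USING '/u01/app/oracle/noncdb1.xml'
      NOCOPY;

    -- Step 4: switch into the new PDB and clean up the former non-CDB data dictionary.
    ALTER SESSION SET CONTAINER = pdb_orcl;
    @?/rdbms/admin/noncdb_to_pdb.sql

    -- Step 5: open the PDB read write so that common users and roles are synchronised.
    ALTER PLUGGABLE DATABASE pdb_orcl OPEN;

    -- Step 6: quick verification.
    SELECT name, open_mode FROM v$pdbs;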

    Read the article

  • SOA Community Newsletter: new edition!

    - by mseika
    SOA PARTNER COMMUNITY NEWSLETTER AUGUST 2012

Dear SOA partner community member,

Have you submitted your feedback on the SOA Partner Community Survey 2012? This is the last chance to participate in the survey. We recommend you complete the survey and help us improve our SOA Community. Thanks to all attendees and trainers for their participation in the excellent Fusion Middleware Summer Camps held in Lisbon and Munich. I would also like to thank you for the great feedback and the nice reports provided by the AMIS Technology Blog & Middleware by Link Consulting. Most of our courses were overbooked; if you did not get a chance to attend or missed one, we offer a wide range of online training and the course material. A key take-away from the advanced BPM course is to become an expert in ADF; Grant Ronald's Learn Advanced ADF course is available online. The Link Consulting team became experts in SOA Governance with EAMS and Oracle Enterprise Repository! We always encourage our community members to share their best practices and are very keen to publish them. Please let us know if you want to share your best practices through this medium. We encourage you to make use of the Specialization benefits - this month we are giving you an opportunity to Promote Your SOA & BPM Events. Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA

NEW CONTENT: Presentations & Training material: OFM Summer Camps, Promote Your SOA & BPM Events, Advanced ADF Online For Free By Grant, BPM 11g Customer Stories & Solution Catalog & Process Accelerators, Delivering SOA Governance with EAMS by Link Consulting Team, WebLogic Server Provisioning and Patching. News from our Partners & Community: Updated material by Oracle. Connect and Network: SOA Blogs, SOA on Facebook, SOA on LinkedIn, SOA on Twitter, Mix SOA Forum, SOA Workspace

PRESENTATIONS & TRAINING MATERIAL: OFM SUMMER CAMPS

Thanks to all attendees who invested their time and took the opportunity to attend the Summer Camps! Due to the high demand for most of the trainings, we had a long waiting list of partners keen to attend. We would like to give our special thanks to all trainers, who delivered excellent workshops! Most of the presentations and course material have been posted on our SOA Community Workspace and WebLogic Community Workspace. You can access the content only if you are a registered community member. To register for the SOA Community please click here; you can register for the WebLogic Community here. For first impressions of the event please visit our Facebook pages www.facebook.com/WebLogicCommunity & www.facebook.com/soacommunity or the Picasa album. Thanks for the excellent blog posts from the AMIS Technology Blog & Middleware by Link Consulting. Let us know if you published a tweet or blog post on @soacommunity & @wlscommunity; we will be pleased to publish it in our newsletters.

BPM Course Quotes
"It's always easy, if you know what you are doing" - Torsten Winterberg, Opitz
"The best ideas are the ideas from the best" - Filipe Sequeria, Primesoft
"Best invest in the education in the last 12 months" - Richard Schaller, IPT
"Practice best practice with the best instructor" - Graham Lamond, Capgemini
"If you have basic BPM knowledge, this is the course to really master it" - Diogo Henriques, Link Consulting
"Very good trainers, lot of work. Lot of fun as well" - Matthias Gris, Workflow Factory
"If you like to accelerate in Oracle come to the training to bring it all together" - Marcel van der Glind, Amis

ADF Course Quotes
"Excellent training, great opportunity to network!" - Frank Houweling, Amis
"Lots of fun and good ideas" - Ana Santiago, GFI
"Learn ADF, worth it. Fusion Apps is the future" - Miguel Delgadillo, STO Consulting
"The best way to learn Fusion Middleware from the #1" - Alexandro Montantes, STO Consulting
"Be advanced to be the first" - Dimitar Petrov, Fadata
"Great opportunity to suck all the knowledge out of some very experienced product managers" - Wilfred von der Deijl, The Future Group

WebLogic Course Quotes
"Oracle trainings are the best" - Pedro Neto, Novobas
"Excellent training, well organized" - Pedro Antunh, Capgemini
"This course dives you into Oracle WebLogic giving you a quick start on benefiting from Fusion Apps" - Leonardo Fernandes, Outsystems

Additional Quotes
"Thanks a lot again for organizing such a great and informative Summer Camp. Both training and networking were organized very professionally. I have gained tons of very useful info, which will definitely help to increase the quality of our future projects." - Daniel Fasko, fss-group.com
"I didn't get the chance yesterday to thank you for a most enjoyable and thoroughly educational time I had in Munich over the last few days." - Jeroen Bakker, Ordina
"Just to congratulate you on a great event, not only today but also in the previous days of training. As we know, a very good organization and, as a native Portuguese who knows Lisbon very well, a nice choice of places to visit. Looking forward to coming again next year." - Pedro Miguel Neto, Novobase

PROMOTE YOUR SOA & BPM EVENTS

The Partner Event Publisher has just been made available to all SOA & BPM specialized partners in EMEA. Partners now have the opportunity to publish their events to the Oracle.com/events site and spread the word on their upcoming live in-person and/or live webcast events. See the demo below and click here to read more information.

ADVANCED ADF ONLINE, FOR FREE BY GRANT

The second part of the advanced ADF online eCourse is live now! It covers the advanced topics of region and region interaction as well as getting down and dirty with some of the layout features of ADF Faces, skinning and DVT components. The aim of this course is to give you a self-paced learning aid which covers the more advanced topics of ADF development. The content is developed by Product Management and our curriculum development teams and is based on advanced training material we have been running internally for about 18 months. We will get started on the next chapter, but in the meantime, please have a look at chapters one and two.

BPM 11G CUSTOMER STORIES & SOLUTION CATALOG & PROCESS ACCELERATORS

Stories: Everyone loves a good story on planning or implementing a BPM strategy. Everyone wants to hear how it was done before, what worked, and what was achieved. If you have achieved success with BPM, we are very keen to hear your stories and examples of how your customers use it. We receive lots of requests from people who are thinking of using BPM to solve a specific problem, or in combination with a specific technology, and who want to talk to someone who has done it before. These stories are invaluable. Send us the details of anything you think is relevant and we will follow up on it.
As one good deed deserves another, we will do our best to give you stories if you need them, to show that where you are going, others have trodden before. Send your stories to us using this e-mail link and we will share them among other like-minded people.

Solution Catalogue: This summer, Oracle is launching a solution catalogue specifically intended for partners. If you have delivered a successful implementation in BPM and think it could be reused and applied again in a similar scenario in the same industry or in a similar environment, then we are keen to know about it and will add it to the solution catalogue. The solution catalogue will showcase successful BPM solutions both inside and outside Oracle. Get in touch with us on this e-mail link and we will make sure to add your solution.

Process Accelerators: Finally, if you have specific processes that you are an expert on, that you have implemented at a customer, and that you want to work with us on getting productised, then we would love to know about it. The process accelerator programme is explained in the most recent SOA/BPM Community Newsletter, but again feel free to contact us if you want to get involved. Good luck with BPM and let us know how we can help. Barry O'Reilly, Director BPM, [email protected]

DELIVERING SOA GOVERNANCE WITH EAMS BY LINK CONSULTING TEAM

Over the last 12 years Link Consulting has been building its presence in specific areas such as Governance and Architecture, in terms of practices and methodologies, products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience. It combines the architecture management solution with OER in order to deliver a product specialized for SOA Governance that gathers the best of both worlds into a solution that enables SOA Governance projects, initiatives and programs.

Enterprise Architecture Management System: The Enterprise Architecture Management System (EAMS) is an automation-based solution that enables the efficient management of enterprise architectures. The solution uses configured enterprise repositories and takes advantage of their features to provide automation capabilities to users. EAMS provides capabilities to create, customize and analyze repository data, architectural blueprints, reports and analytic charts.

Oracle Enterprise Repository: Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of OER include: Asset Management, Asset Lifecycle Management, Usage Tracking, Service Discovery, Version Management, Dependency Analysis and Portfolio Management.

EAMS - OER Edition: The solution takes the advantages and features of both products and combines them in a symbiotic tool that enhances the quality of SOA Governance initiatives and programs.
EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries.

The SOA blueprints: The solution comes with a set of predefined architectural representations that help the organization better perceive its SOA landscape. More blueprints can easily be created in order to accommodate the organization's needs in terms of detail, audience and metadata.

Charts & Dashboards: The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets.

Time Based Visualization: All representations are time bound, and with EAMS - OER you can truly govern SOA with a complete view of the past, present and future. The solution delivers gap analysis and a project-oriented approach while taking into consideration the As-Was, As-Is and To-Be. Time based visualization differentiating factors: extensive automation and maintenance of architectural representations; an organization-wide solution; easy access and navigation to and between all architectural artifacts and representations; a flexible meta-model with customization and extensibility capabilities; lifecycle management and enforcement of the time dimension over all the repository content; profile-based customization; comprehensive visibility; architectural alignment; friendly and striking user interfaces. For more information on EAMS visit us here. For more information on SOA visit us here.

WEBLOGIC SERVER PROVISIONING AND PATCHING

For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. SOA Suite and BPM Suite run on WebLogic! We are pleased to announce the availability of a WebLogic Server Management demo that showcases some of the key provisioning and patching capabilities of WebLogic Server Management Pack Enterprise Edition (EE). To learn more about these features, as well as other features of the pack, please visit the pack's saleskit page.

Demo Highlights: The demo showcases the following capabilities: patching Oracle WebLogic Servers, standardizing WebLogic Server patch rollouts, creating a WebLogic domain provisioning profile, cloning a WebLogic domain from a provisioning profile, deploying a Java EE application, and scaling out an Oracle WebLogic cluster.

Demo Instructions: Go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the "Software Lifecycle Automation" section, click on the link "EM Cloud Control 12c WLS Provisioning and Patching" (tagged as "NEW"). The specific demo launchpad page contains a link to the detailed demo script with instructions on how to show the demo.

    Read the article

< Previous Page | 126 127 128 129 130 131 132 133 134 135 136 137  | Next Page >