Search Results

Search found 2745 results on 110 pages for 'waiting'.


  • The Low Down Dirty Azure Blues

    - by SGWellens
    Remember the SETI screen savers that used to be on everyone's computer? As far as I know, it was the first bona-fide use of "Cloud" computing…albeit an ad hoc cloud. I still think it was a brilliant leveraging of computing power. My interest in clouds was re-piqued when I went to a technical seminar at the local .NET User Group. The speaker was Mike Benkovitch and he expounded magnificently on the virtues of the Azure platform. Mike always does a good job. One killer reason he gave for cloud computing is instant scalability. Not applicable for most applications, but it is there if needed. I have a bunch of files stored on Microsoft's SkyDrive platform, which is cloud storage. It is painfully slow. Accessing a file means going through layers and layers of software, redirections and security. Am I complaining? Hell no! It's free! So my opinions of Cloud Computing are both skeptical and appreciative. What intrigued me at the seminar, in addition to its other features, is that Azure can serve as a web hosting platform. I developed an ASP.NET web site for a client who is not happy with the performance of their current hosting service. I checked the cost of Azure and, since the site has low bandwidth/space requirements, the cost would be competitive with the existing host provider: Azure Pricing Calculator. And, Azure has a three-month free trial. Perfect! I could try moving the website and see how it works for free. I went through the signup process. Everything was proceeding fine until I went to the MS SQL database management screen. A popup window informed me that I needed to install Silverlight on my machine. Silverlight? No thanks. Buh-Bye. I half-heartedly found the Azure support button and logged a ticket telling them I didn't want Silverlight on my machine. Within 4 to 6 hours (and a myriad (5) of automated support emails) they sent me a link to a database management page that did not require Silverlight. Thanks! I was able to create a database immediately. One really nice feature was that after creating the database, I was given a list of connection strings. I went to the current host provider, made a backup of the database and saved it to my machine. I attached to the remote database using SQL Server Management Studio 2012 and looked for the Restore menu item. It was missing. So I tried using the SQL command: RESTORE DATABASE MyDatabase FROM DISK = 'C:\temp\MyBackup.bak' and got: Msg 40510, Level 16, State 1, Line 1 Statement 'RESTORE DATABASE' is not supported in this version of SQL Server. Are you kidding me? Why on earth…? This can't be happening! I opened both the source database and destination database in SQL Server Management Studio. I right-clicked the source database, selected "Tasks" and noticed a menu selection called "Deploy Database to SQL Azure". Are you kidding me? Could it be? Oh yes, it be! There was a small problem because the database already existed on the Azure server, so I deployed to a new name, deleted the existing database and renamed the deployed database to what I needed. It was ridiculously easy. Being able to attach SQL Server Management Studio to remote databases is an awesome but scary feature. You can limit the IP addresses that can access the database, which enhances security, but when you give people, any people, me included, that much power, one errant mouse click could bring a live system down. My Advice: Tread softly and carry a large backup thumb-drive. 
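For reference, connecting to the new SQL Azure database from application code works the same way as connecting to any other SQL Server: you paste in one of the connection strings the portal hands you. Here is a minimal sketch; the server, database, table and credential names below are placeholders, not values from this post:

    using System;
    using System.Data.SqlClient;

    class AzureConnectionCheck
    {
        static void Main()
        {
            // Hypothetical SQL Azure connection string -- in practice you copy the real one
            // from the list shown after the database is created. Note the User ID in
            // user@server form and Encrypt=True, which Azure requires.
            string connectionString =
                "Server=tcp:myserver.database.windows.net,1433;" +
                "Database=MyDatabase;User ID=MyUser@myserver;" +
                "Password=MyPassword;Encrypt=True;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT COUNT(*) FROM MyTable", connection))
            {
                connection.Open();                        // opens a session against the remote database
                int rows = (int)command.ExecuteScalar();  // quick sanity check that the data came across
                Console.WriteLine("Rows found: " + rows);
            }
        }
    }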
Then I created a web site; the URL it returned looked something like this: http://MyWebSite.azurewebsites.net/ Azure supports FTP, but I couldn't figure out the settings until I downloaded the publishing profile. It was an XML file that contained the needed information. I still couldn't connect with my FTP client (FileZilla). After about an hour of messing around, I deleted the port number from the FileZilla setup page…and voila, I was in like Flynn. There are other options of deploying directly from Visual Studio, TFS, etc., but I do not like integrated tools that do things without my asking: it's usually hard to figure out what they did and how to undo it. I uploaded the .aspx, .cs, web.config, etc. files. But it didn't run. The site I ported was in .NET 3.5. The Azure website configuration page gave me a choice between .NET 2.0 and 4.0. So, I switched to Visual Studio 2010, chose .NET 4.0 and upgraded the site. Of course I have the original version completely backed up and stored in a granite cave beneath the Nevada desert. And I have a backup CD under my pillow. The site uses ReportViewer to generate PDF documents. Of course it was the wrong version. I removed the old references to version 9 and added new references to version 10 (*see note below). Since the DLLs were not on the Azure server, I uploaded them to the bin directory, crossed my fingers, burned some incense and gave it a try. After some fiddling around it ran. I don't know if I did anything in particular to make it work or it just needed time to sort things out. However, one critical feature didn't work: ReportViewer could not programmatically generate PDF documents. I was getting this exception: "An error occurred during local report processing. Parameter is not valid." Rats. I did some searching and found other people were having the same problem, so I added a post saying I was having the same problem: http://social.msdn.microsoft.com/Forums/en-US/windowsazurewebsitespreview/thread/b4a6eb43-0013-435f-9d11-00ee26a8d017 Currently they are looking into this problem and I am waiting for the results. Hence I had the time to write this BLOG entry. How lucky you are. This was the last message I got from the Microsoft person: Hi Steve, Windows Azure Web Sites is a multi-tenant environment. For security issue, we limited some API calls. Unfortunately, some GDI APIS required by the PDF converting function are in this list. We have noticed this issue, and still investigation the best way to go. At this moment, there is no news to share. Sorry about this. Will keep you posted. If I had to guess, I would say they are concerned with people uploading images and doing intensive graphics programming which would hog CPU time. But that is just a guess. Another problem. While trying to resolve the ReportViewer problem, I tried to write a file to the PDF directory with some test code to see if there was a permissions problem: String MyPath = MapPath(@"~\PDFs\Test.txt"); File.WriteAllText(MyPath, "Hello Azure"); I got this message: Access to the path <my path> is denied. After some research, I understood that since Azure is a cloud-based platform, it can't allow web applications to save files to local directories. The application could be moved or replicated as scaling occurs, and trying to manage local files would be problematic to say the least. There are other options: Use the Azure APIs to get a path. That way the location of the storage is separated from the application. 
However, the web site is then tied to Azure and can't be moved to another hosting platform. Use the ApplicationData folder (not recommended). Write to BLOB storage. Or, I could try to stream the PDF output directly to the email and not save a file (a rough sketch of that idea appears at the end of this post). I'm not going to work on a final solution until the ReportViewer is fixed. I am just sharing some of the things you need to be aware of if you decide to use Azure. I got this information from here. (Note the author of the BLOG added a comment saying he has updated his entry). Is my memory faulty? While getting this BLOG ready, I tried to write the test file again. And it worked. Either my memory is incorrect or, much more likely, something changed on the server…perhaps while they are trying to get ReportViewer to work. (Anyway, that's my story and I'm sticking to it). *Note: Since Visual Studio 2010 Express doesn't include a report editor, I downloaded and installed SQL Server Report Builder 2.0. It is a standalone report editor that replaces the one missing from Visual Studio 2010 Express. I hope someone finds this useful. Steve Wellens CodeProject
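To illustrate the streaming option mentioned above, here is a rough sketch of rendering the ReportViewer output to memory and attaching it to an email so that nothing touches the local file system. This is my sketch, not code from the post; it assumes the ReportViewer control is already configured on the page, that the GDI limitation gets resolved, and the addresses and file name are placeholders:

    using System.IO;
    using System.Net.Mail;
    using Microsoft.Reporting.WebForms;

    public static class ReportMailer
    {
        public static void EmailReportAsPdf(ReportViewer viewer)
        {
            string mimeType, encoding, extension;
            string[] streamIds;
            Warning[] warnings;

            // Render the already-configured local report straight to a byte array in memory.
            byte[] pdfBytes = viewer.LocalReport.Render(
                "PDF", null, out mimeType, out encoding, out extension, out streamIds, out warnings);

            using (var pdfStream = new MemoryStream(pdfBytes))
            using (var message = new MailMessage("from@example.com", "to@example.com"))
            {
                message.Subject = "Your report";
                message.Attachments.Add(new Attachment(pdfStream, "Report.pdf", mimeType));
                new SmtpClient().Send(message);   // SMTP settings are picked up from web.config
            }
        }
    }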

    Read the article

  • "Translator by Moth"

    - by Daniel Moth
    This article serves as the manual for the free Windows Phone 7 app called "Translator by Moth". The app is available from the following link (browse the link on your Windows Phone 7 phone, or from your PC with the Zune software installed): http://social.zune.net/redirect?type=phoneApp&id=bcd09f8e-8211-e011-9264-00237de2db9e   Startup At startup the app makes a connection to the Bing Microsoft Translator service to retrieve the available languages, and also which languages offer playback support (two network calls total). It populates the two list pickers ("from" and "to") on the "current" page with the results. If for whatever reason the network call fails, you are informed via a message box, and the app keeps trying to make a connection every few seconds. When it eventually succeeds, the language pickers on the "current" page get updated. Until it succeeds, the language pickers remain blank and hence no new translations are possible. As you can guess, if the Microsoft Translator service adds more languages for textual translation (or enables more for playback) the app will automatically pick those up. "current" page The "current" page is the main page of the app with language pickers, translation boxes and the application bar. Language list pickers The "current" page allows you to pick the "from" and "to" languages, which are populated at start time. Until these languages get populated with the results of the network calls, they remain empty and disabled. When enabled, tapping on either of them brings up, in a full-screen popup, the list of languages to pick from, formatted as English Name followed by Native Name (when the latter is known). The "to" list, in addition to the language names, indicates which languages have playback support via a * in front of the language name. When making a selection for the "to" language, and if there is text entered for translation, a translation is performed (so there is no need to tap on the "translate" application bar button). Note that both language choices are remembered between different launches of the application.   text for translation The textbox where you enter the text for translation is always enabled. When there is nothing entered in it, it displays (centered and in italics) text prompting you to enter some text for translation. When you tap on it, the prompt text disappears and it becomes truly empty, waiting for input via the keyboard that automatically pops up. The text you type is left aligned and not in italic font. The keyboard shows suggestions of text as you type. The keyboard can be dismissed either by tapping somewhere else on the screen, or via tapping on the Windows Phone hardware "back" button, or via tapping on the "enter" key. In the latter case (tapping on the "enter" key), if there was text entered and if the "from" language is not blank, a translation is performed (so there is no need to tap on the "translate" application bar button). The last text entered is remembered between application launches. translated text The translated text appears below the "to" language (left aligned in normal font). Until a translation is performed, there is a message in that space informing you of what to expect (the translation appearing there). When the "current" page is cleared via the "clear" application bar button, the translated text reverts to the message. 
Note a subtle point: when a translation has been performed and subsequently you change the "from" language or the text for translation, the translated text remains in place but is now in italic font (attempting to indicate that it may be out of date). In any case, this text is not remembered between application launches. application bar buttons and menus There are 4 application bar buttons and 4 application bar menus. "translate" button takes the text for translation and translates it to the translated text, via a single network call to the bing Microsoft Translator service. If the network call fails, the user is informed via a message box. The button is disabled when there is no "from" language available or when there is not text for translation entered. "play" button takes the translated text and plays it out loud in a native speaker's voice (of the "to" language), via a single network call to the bing Microsoft Translator service. If the network call fails, the user is informed via a message box. The button is disabled when there is no "to" language available or when there is no translated text available. "clear" button clears any user text entered in the text for translation box and any translation present in the translated text box. If both of those are already empty, the button is disabled. It also stops any playback if there is one in flight. "save" button saves the entire translation ("from" language, "to" language, text for translation, and translated text) to the bottom of the "saved" page (described later), and simultaneously switches to the "saved" page. The button is disabled if there is no translation or the translation is not up to date (i.e. one of the elements have been changed). "swap to and from languages" menu swaps around the "from" and "to" languages. It also takes the translated text and inserts it in the text for translation area. The translated text area becomes blank. The menu is disabled when there is no "from" and "to" language info. "send translation via sms" menu takes the translated text and creates an SMS message containing it. The menu is disabled when there is no translation present. "send translation via email" menu takes the translated text and creates an email message containing it (after you choose which email account you want to use). The menu is disabled when there is no translation present. "about" menu shows the "about" page described later. "saved" page The "saved" page is initially empty. You can add translations to it by translating text on the "current" page and then tapping the application bar "save" button. Once a translation appears in the list, you can read it all offline (both the "from" and "to" text). Thus, you can create your own phrasebook list, which is remembered between application launches (it is stored on your device). To listen to the translation, simply tap on it – this is only available for languages that support playback, as indicated by the * in front of them. The sound is retrieved via a single network call to the bing Microsoft Translator service (if it fails an appropriate message is displayed in a message box). Tap and hold on a saved translation to bring up a context menu with 4 items: "move to top" menu moves the selected item to the top of the saved list (and scrolls there so it is still in view) "copy to current" menu takes the "from" and "to" information (language and text), and populates the "current" page with it (switching at the same time to the current page). 
This allows you to make tweaks to the translation (text or languages) and potentially save it back as a new item. Note that the action makes a copy of the translation, so you are not actually editing the existing saved translation (which remains intact). "delete" menu deletes the selected translation. "delete all" menu deletes all saved translations from the "saved" page – there is no way to get that info back other than re-entering it, so be cautious. Note: Once playback of a translation has been retrieved via a network call, Windows Phone 7 caches the results. What this means is that as long as you play a saved translation once, it is likely that it will be available to you for some time, even when there is no network connection.   "about" page The "about" page provides some textual information (that you can view in the screenshot) including a link to the creator's blog (that you can follow on your Windows Phone 7 device). Use that link to discover the email for any feedback. Other UI design info As you can see in the screenshots above, "Translator by Moth" has been designed from scratch for Windows Phone 7, using the nice pivot control and application bar. It also supports both portrait and landscape orientations, and looks equally good in both the light and the dark theme. Other than the default black and white colors, it uses the user's chosen accent color (which is blue in the screenshot examples above). Feedback and support Please report (via the email on the blog) any bugs you encounter or opportunities for performance improvements and they will be fixed in the next update. Suggestions for new features will be considered, but given that the app is FREE, no promises are made. If you like the app, don't forget to rate "Translator by Moth" on the marketplace. Comments about this post welcome at the original blog.

    Read the article

  • NRF Online Merchandising Workshop: Where Online Retailers Are Focusing for Holiday and Beyond

    - by Rose Spicer-Oracle
    Last month we attended the NRF Online Merchandising Workshop in LA, and it was a great opportunity to catch up with our customers, meet new retailers, and hear some great presentations from VF Corporation, Zazzle, Julep Beauty, Backcountry, eBags and more. The one-on-one conversations with Merchants and the keynote presentations carry the same themes across companies of all sizes and across verticals. With only 125 days left (and counting) until Black Friday, these conversations provided some great insight into what's top of mind for retailers during the most stressful time of their year, and a sneak peek into what they will deliver this holiday season.  Some of the most popular topics were: When to start promoting for holiday: seems like a funny conversation to have in July, but a number of retailers said they already had their holiday shopping gift guides live on their site, and it was attracting a significant portion of their onsite traffic. When it comes to timing, most retailers were questioning when to begin their holiday promotions -- carefully balancing when to release pricing and specials, and knowing that customers are holding out for last-minute deals and price drops. Many retailers noted the frustrations around transparent pricing by Amazon and a few other mega-retailers last year, publishing their "lowest prices of the season" as early as October – ensuring shoppers that those prices were the best they could get all season long. Many retailers felt forced to drop prices. Others kept their set pricing with negative customer reaction, causing some to miss their holiday goals. The pressure is on, and most retailers identified November 1 as their target start date for the holiday promotions blitz. Some are even waiting for the big guys to release their "lowest prices of the season" guides and will then follow suit.      Attribution is tough – and a huge focus: understanding the path to conversion is a tough nut to crack, especially in the new omnichannel world where consumers use multiple touchpoints to make a single purchase, and internal management wants hard data. This has led many retailers to invest in attribution; carefully tracking their online marketing efforts to determine what gets "credit" for the sale, instead of giving credit to the "last click." Retailers noted that it is very difficult to determine the numbers when online and offline worlds collide – like when a shopper uses digital channels for research and then makes a purchase in a store. As one of the presenters from The North Face mentioned in her keynote, a key to enabling better customer service and satisfaction when it comes to converged online and offline sales is training the in-store staff, and creating a culture where it eventually "doesn't matter what group gets the credit" if they all add to the sale. No doubt, the area of attribution will be a big area of retail investment in the coming years.      
How to plan for the converged world: planning to ensure inventory gets where it needs to be was another concern. In conversations with retailers, we advised them to analyze customer patterns: where shoppers purchase items, where the items were sourced from and even where items are returned. This analysis is very valuable in determining inventory plans. From there, retailers can more accurately plan and allocate inventory to support both the online and offline customer behavior. As we head into the holiday season, the need for accurate enterprise-wide inventory visibility, and providing that information to associates, is even more critical to the brand-wide customer experience.       Improving the search / navigation / usability of the site(s): Aside from some of the big ideas and standard holiday pricing pressure, most conversations we had centered around continuing to improve the basics of the site. Reinvesting in search and navigation came up time and time again (FitForCommerce blogged about what a big topic it was at the event as well). Obviously getting shoppers on their path quickly and allowing them to find what they need fast is critical, but it was definitely interesting to hear just how much effort is still going in to honing the search and navigation experience. Adding new elements to search and navigation like typeahed, inventive navigation refinements, and new navigation categories like gift guides, specialized boutiques and flash sales were top of mind, in addition to searchandising and making search-driven product recommendations. (Oracle can help!)       Reducing cart abandonment: always a hot topic that is top of mind for every online retailer. Getting shoppers to the cart is often less then half the battle; getting them to click “buy” and complete the transaction is much more difficult. While retailers carefully study the checkout process and where shoppers tend to bounce, they know that how they design their checkout page is critical. We’re all online shoppers in our personal lives and we know how frustrating it can be when total prices are not transparent (i.e. shipping, processing, taxes is not included until the very last possible screen before clicking that buy button). Online retailers are struggling with where in the checkout process to surface the total price to be charged to reduce cart abandonment, while not showing the total figure too early in the process that it keeps shoppers from getting to checkout altogether. Recent research shows that providing total pricing prior to the checkout process dramatically reduces cart abandonment – as it serves as a filter to those shopping within a specific price band. Much of the cart abandonment discussion leads us to…       The free shipping / free returns question: it’s no secret that because of Amazon and programs like Prime, consumers expect free shipping, much to the chagrin of the smaller retailer. The reality is that if you’re not a mega-retailer, shipping is an expensive part of doing business that doesn’t allow most retailers to keep their prices low and offer free shipping. This has many retailers venturing out on the “free returns” path, especially in apparel. A number of retailers we spoke with are testing a flat rate shipping fee with free returns to see if they can crack the price threshold where shoppers are willing to pay for shipping with an added service. But, free shipping remains king.      Social ads and retargeting: they are working, but do they turn off consumers? That’s the big question. 
Every retailer we spoke with during a roundtable on the topic said that social ads and retargeting (where that pair of boots you've been eyeing on a site magically follows you around the Internet) work and are meeting campaign goals. The larger question many retailers are asking is if this type of tactic is turning off a large number of shoppers, even if these campaigns are meeting their early goals. Retailers also mentioned that Facebook ads are working very well for them, especially when it comes to new customer acquisition, serving as a complementary channel to SEO when it comes to engaging new customers. While there are always new things to experiment with in retail, standard challenges are top of mind as retailers scramble to get ready for holiday. It will undoubtedly be another record-breaking online shopping season, but as retailers get more and more advanced with each Black Friday, expect some exciting things. This excitement needs to be backed by sound solutions and optimized operations. Then again, consumers are expecting more than ever, so I don't doubt that retailers are already thinking about the possibilities of holiday 2015… and beyond. Customers who read this article also found value in the following stories:
Personalization for Retail: http://blogs.oracle.com/retail/entry/personalization_for_retail
Shop Direct User Experience Focus Drives Sales: https://blogs.oracle.com/retail/entry/shop_direct_user_experience_focus
Making Waves: Australian Online Retailer SurfStitch: https://blogs.oracle.com/oracleretail/entry/surf_stitch
What's new in Oracle Commerce v11.1 for Retail
What the Content+Commerce Equation is Missing

    Read the article

  • CodePlex Daily Summary for Wednesday, April 28, 2010

    CodePlex Daily Summary for Wednesday, April 28, 2010New ProjectsArgument Handler: This project aims to help the handling of arguments in command line programs.Bing for BlackBerry: Bing for BlackBerry is a SDK that allows intergration and searching Bing in there applications.C4F - GeoWallpaper: This is an extension for my MEF Utility Runner to change desktop wallpaper based on Flickr images geotagged with your current location. Uses Windo...CRM 4.0 Contract Utilities: List of Contract Utilities (i.e. custom workflow actions) 1. Change contract status from Active to Draft 2. Copy Contract (with custom start/end da...ELIS: Multimedia player based on WPFEnterprise Administration with Powershell: The EnterpriseShell aims to produce a powershell code library that will enable Enterprise Administrators to quickly reconfigure their IT infrastruc...ExposedObject: ExposedObject uses dynamic typing in C# to provide convenient access to private fields and methods from code outside of the class – for testing, ex...F# Project Extender: Installing F# Project Extender provides tools to better organize files in F# projects by allowing project subdirectories and separating file manage...Hack Framework: Code bundle with the internets brains connected into one piece of .Net frameworkKrypton XNA: Krypton allows users of the XNA framework to easily add 2D lighting to their games. Krypton is fast, as it utilizes the GPU and uses a vertex shade...Net Darts: Provides an easy way to calculate the score left to throw. Users can easily click on the score.PlayerSharp: PlayerSharp is a library written entirely in C# that allows you to communicate your C# programs with the Player Server by Brian Gerkey et al (http:...Ratpoid: RatpoidRedeemer Tower Defense: 2d tower defense game. It is developed with XNA technology, using .Net Visual Studio 2008 or .Net Visual Studio 2010SelfService: Simple self service projectSharePoint Exchange Calendar: a jQuery based calendar web part for displaying Exchange calendars within SharePoint.SharpORM, easy use Object & Relation Database mapping Library: AIM on: Object easy storeage, Create,Retrieve,Update,Delete User .NET Attribute marking or Xml Standalone Mapping eg. public class Somethin...Silverlight Calculator: Silverlight Calculator makes it easier for people to do simple math calculations. It's developed in C# Silverlight 2. Silverlight Calendar: Silverlight Calendar makes it easier for people to view a calendar in silverlight way. It's developed in C# Silverlight 2.SPSocialUtil for SharePoint 2010: SPSocialUtil makes it easier for delevoper to use Social Tag, Tag Cloud, BookMark, Colleague in SharePoint 2010. It's developed in C# with Visual S...Sublight: Sublight open source project is a simple metadata utility for view models in ASP.NET MVC 2Veda Auto Dealer Report: Create work item tasks for the Veda Auto Dealer ReportWPF Meta-Effects: WPF Meta-Effects makes it easier for shader effect developpers to develop and maintain shader code. You'll no longer have to write any HLSL, instea...New ReleasesBing for BlackBerry: Bing SDK for BlackBerry: There are four downloadable components: The library, in source code format, in its latest stable release. A "getting started" doc that will gui...DotNetNuke® Store: 02.01.34: What's New in this release? Bugs corrected: - Fixed a bug related to encryption cookie when DNN is used in Medium Trust environment. 
New Features:...Encrypted Notes: Encrypted Notes 1.6.4: This is the latest version of Encrypted Notes, with general improvements and bug fixes for 'Batch Encryption'. It has an installer that will create...EPiServer CMS Page Type Builder: Page Type Builder 1.2 Beta 2: For more information about this release check out this blog post.ExposedObject: ExposedObject 0.1: This is an initial release of the ExposedObject library that lets you conveniently call private methods and access private fields of a class.Extended SSIS Package Execute: Ver 0.01: Version 0.01 - 2008 Compatible OnlyF# Project Extender: V0.9.0.0 (VS2008): F# project extender for Visual Studio 2008. Initial ReleaseFileExplorer.NET: FileExplorer.NET 1.0: This is the first release of this project. ---------------------------------------------------------------- Please report any bugs that you may en...Fluent ViewModel Configuration for WPF (MVVM): FluentViewModel Alpha3: Overhaul of the configuration system Separation of the configuratior and service locator Added support for general services - using Castle Wind...Home Access Plus+: v4.1: v4.1 Change Log: booking system fixes/additions Uploader fixes Added excluded extensions in my computer Updated Config Tool Fixed an issue ...ILMerge-GUI, merge .NET assemblies: 1.9.0 BETA: Compatible with .NET 4.0 This is the first version of ILMerge-GUI working with .NET 4.0. The final version will be 2.0.0, to be released by mid May...iTuner - The iTunes Companion: iTuner 1.2.3769 Beta 3c: Given the high download rate and awesome feedback for Beta 3b, I decided to release this interim build with the following enhancements: Added new h...Jet Login Tool (JetLoginTool): In then Out - 1.5.3770.18310: Fixed: Will only attempt to logout if currently logged in Fixed: UI no longer blocks while waiting for log outKooboo CMS: Kooboo CMS 2.1.1.0: New features Add new API RssUrl to generate RSS link, this is an extension to UrlHelper. Add possibility to index and search attachment content ...Krypton XNA: Krypton v1.0: First release of Krypton. I've tried to throw together a small testbed, but just getting tortoisehq to work the first time around was a huge pain. ...LinkedIn® for Windows Mobile: LinkedIn for Windows Mobile v0.5: Added missing files to installer packageNito.KitchenSink: Version 7: New features (since Version 5) Fixed null reference bug in ExceptionExtensions. Added DynamicStaticTypeMembers and RefOutArg for dynamically (lat...Numina Application/Security Framework: Numina.Framework Core 51341: Added multiple methods to API and classic ASP libraryOpenIdPortableArea: 0.1.0.3 OpenIdPortableArea: OpenIdPortableArea.Release: DotNetOpenAuth.dll DotNetOpenAuth.xml MvcContrib.dll MvcContrib.xml OpenIdPortableArea.dll OpenIdPortableAre...Play-kanaler (Windows Media Center Plug-in): Playkanaler 1.0.5 Alpha: Playkanaler version 1.0.5 Alpha Skärmsläckar-fix. helt otestad!PokeIn Comet Ajax Library: PokeIn v0.81 Library with ServerWatch Sample: Release Notes Functionality improved. Possible bugs fixed. Realtime server time sample addedSharpORM, easy use Object & Relation Database mapping Library: Vbyte.SharpOrm 1.0.2010.427: Vbyte.SharpOrm for Access & SQLServer.7z 1.0 alpha release for Access oledb provider and Sql Server 2000+.Silverlight 4.0 Popup Menu: Context Menu for Silverlight 4.0: - Markup items can now be added seperately using the AddItem method. 
Alternatingly all items can be placed inside a listbox which can then be added...Silverlight Calculator: SilverCalculator: SilverCalculator version 1.0 Live demoSilverlight Calendar: Silverlight Calendar: Silverlight Calendar version 1.0 Live demoSpeakup Desktop Frontend: Speakup Desktop Frontend v0.2a: This is new version of Speakup Desktop Frontend. It requires .net 4.0 to be installed before using it. In this release next changes were done: - ...TwitterVB - A .NET Twitter Library: Twitter-2.5: Adds xAuth support and increases TwitterLocation information (html help file is not up to date, will correct in a later version.)VCC: Latest build, v2.1.30427.5: Automatic drop of latest buildXP-More: 1.1: Added a parent VHD edit feature. Moved VM settings from double-click to a button, and rearranged the buttons' layout.Yasbg: It's Static 1.0: Many changes have been made from the previous release. Read the README! This release adds settings tab and fixes bugs. To run, first unzip, then...Most Popular ProjectsRawrWBFS Managerpatterns & practices – Enterprise LibraryAJAX Control ToolkitSilverlight ToolkitMicrosoft SQL Server Product Samples: DatabaseWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesPHPExcelMost Active ProjectsRawrpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationNB_Store - Free DotNetNuke Ecommerce Catalog ModuleIonics Isapi Rewrite FilterParticle Plot PivotFarseer Physics EngineBlogEngine.NETDotNetZip LibrarySqlDiffFramework-A Visual Differencing Engine for Dissimilar Data Sources

    Read the article

  • Imperative vs. LINQ Performance on WP7

    - by Bil Simser
    Jesse Liberty had a nice post presenting the concepts around imperative, LINQ and fluent programming to populate a listbox. Check out the post as it's a great example of some foundational things every .NET programmer should know. I was more interested in what IL code would be generated for the imperative vs. LINQ versions, what the performance numbers are, and how they differ. The code at the instruction level is interesting but not surprising. The imperative example, with its list creation and loops, weighs in at about 60 instructions.

    .method private hidebysig instance void ImperativeMethod() cil managed
    {
        .maxstack 3
        .locals init (
            [0] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> someData,
            [1] class [mscorlib]System.Collections.Generic.List`1<int32> inLoop,
            [2] int32 n,
            [3] class [mscorlib]System.Collections.Generic.IEnumerator`1<int32> CS$5$0000,
            [4] bool CS$4$0001)
        L_0000: nop
        L_0001: ldc.i4.1
        L_0002: ldc.i4.s 50
        L_0004: call class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> [System.Core]System.Linq.Enumerable::Range(int32, int32)
        L_0009: stloc.0
        L_000a: newobj instance void [mscorlib]System.Collections.Generic.List`1<int32>::.ctor()
        L_000f: stloc.1
        L_0010: nop
        L_0011: ldloc.0
        L_0012: callvirt instance class [mscorlib]System.Collections.Generic.IEnumerator`1<!0> [mscorlib]System.Collections.Generic.IEnumerable`1<int32>::GetEnumerator()
        L_0017: stloc.3
        L_0018: br.s L_003a
        L_001a: ldloc.3
        L_001b: callvirt instance !0 [mscorlib]System.Collections.Generic.IEnumerator`1<int32>::get_Current()
        L_0020: stloc.2
        L_0021: nop
        L_0022: ldloc.2
        L_0023: ldc.i4.5
        L_0024: cgt
        L_0026: ldc.i4.0
        L_0027: ceq
        L_0029: stloc.s CS$4$0001
        L_002b: ldloc.s CS$4$0001
        L_002d: brtrue.s L_0039
        L_002f: ldloc.1
        L_0030: ldloc.2
        L_0031: ldloc.2
        L_0032: mul
        L_0033: callvirt instance void [mscorlib]System.Collections.Generic.List`1<int32>::Add(!0)
        L_0038: nop
        L_0039: nop
        L_003a: ldloc.3
        L_003b: callvirt instance bool [mscorlib]System.Collections.IEnumerator::MoveNext()
        L_0040: stloc.s CS$4$0001
        L_0042: ldloc.s CS$4$0001
        L_0044: brtrue.s L_001a
        L_0046: leave.s L_005a
        L_0048: ldloc.3
        L_0049: ldnull
        L_004a: ceq
        L_004c: stloc.s CS$4$0001
        L_004e: ldloc.s CS$4$0001
        L_0050: brtrue.s L_0059
        L_0052: ldloc.3
        L_0053: callvirt instance void [mscorlib]System.IDisposable::Dispose()
        L_0058: nop
        L_0059: endfinally
        L_005a: nop
        L_005b: ldarg.0
        L_005c: ldfld class [System.Windows]System.Windows.Controls.ListBox PerfTest.MainPage::LB1
        L_0061: ldloc.1
        L_0062: callvirt instance void [System.Windows]System.Windows.Controls.ItemsControl::set_ItemsSource(class [mscorlib]System.Collections.IEnumerable)
        L_0067: nop
        L_0068: ret
        .try L_0018 to L_0048 finally handler L_0048 to L_005a
    }

Compare that to the IL generated for the LINQ version, which has about half of the instructions and just gets the job done, no fluff.

    .method private hidebysig instance void LINQMethod() cil managed
    {
        .maxstack 4
        .locals init (
            [0] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> someData,
            [1] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> queryResult)
        L_0000: nop
        L_0001: ldc.i4.1
        L_0002: ldc.i4.s 50
        L_0004: call class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> [System.Core]System.Linq.Enumerable::Range(int32, int32)
        L_0009: stloc.0
        L_000a: ldloc.0
        L_000b: ldsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
        L_0010: brtrue.s L_0025
        L_0012: ldnull
        L_0013: ldftn bool PerfTest.MainPage::<LINQProgramming>b__4(int32)
        L_0019: newobj instance void [System.Core]System.Func`2<int32, bool>::.ctor(object, native int)
        L_001e: stsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
        L_0023: br.s L_0025
        L_0025: ldsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
        L_002a: call class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0> [System.Core]System.Linq.Enumerable::Where<int32>(class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0>, class [System.Core]System.Func`2<!!0, bool>)
        L_002f: ldsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
        L_0034: brtrue.s L_0049
        L_0036: ldnull
        L_0037: ldftn int32 PerfTest.MainPage::<LINQProgramming>b__5(int32)
        L_003d: newobj instance void [System.Core]System.Func`2<int32, int32>::.ctor(object, native int)
        L_0042: stsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
        L_0047: br.s L_0049
        L_0049: ldsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
        L_004e: call class [mscorlib]System.Collections.Generic.IEnumerable`1<!!1> [System.Core]System.Linq.Enumerable::Select<int32, int32>(class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0>, class [System.Core]System.Func`2<!!0, !!1>)
        L_0053: stloc.1
        L_0054: ldarg.0
        L_0055: ldfld class [System.Windows]System.Windows.Controls.ListBox PerfTest.MainPage::LB2
        L_005a: ldloc.1
        L_005b: callvirt instance void [System.Windows]System.Windows.Controls.ItemsControl::set_ItemsSource(class [mscorlib]System.Collections.IEnumerable)
        L_0060: nop
        L_0061: ret
    }

Again, not surprising here, but a good indicator that you should consider using LINQ where possible. 
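For readers who want to see the C# behind those listings, here is a reconstruction from the IL; the post doesn't show the original source, so treat the names and exact shape as an approximation rather than the author's actual code (LB1 and LB2 are the two ListBox controls the IL references):

    using System.Collections.Generic;
    using System.Linq;
    using System.Windows.Controls;

    public partial class MainPage
    {
        // The two ListBox controls referenced by the IL (normally defined in the page's XAML).
        private ListBox LB1, LB2;

        // Imperative version: explicit list, loop and test -- roughly 60 IL instructions.
        private void ImperativeMethod()
        {
            IEnumerable<int> someData = Enumerable.Range(1, 50);
            var inLoop = new List<int>();
            foreach (int n in someData)
            {
                if (n > 5)
                    inLoop.Add(n * n);   // keep the squares of values greater than five
            }
            LB1.ItemsSource = inLoop;
        }

        // LINQ version: the same filter and projection as a query -- about half the IL.
        private void LINQMethod()
        {
            IEnumerable<int> someData = Enumerable.Range(1, 50);
            var queryResult = from n in someData
                              where n > 5
                              select n * n;
            LB2.ItemsSource = queryResult;
        }
    }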
In fact if you have ReSharper installed you'll see a squiggly (technical term) in the imperative code that says "Hey Dude, I can convert this to LINQ if you want to be c00L!" (or something like that, it's the 2010 geek version of Clippy). What about the fluent version? As Jon correctly pointed out in the comments, when you compare the IL for the LINQ code and the IL for the fluent code, it's the same. LINQ and the fluent interface are just syntactic sugar, so you decide what you're most comfortable with. At the end of the day they're both the same. Now onto the numbers. Again, I expected the imperative version to be better performing than the LINQ version (before I saw the IL that was generated). Call it womanly instinct. A gut feel. Whatever. Some of the numbers are interesting though. For Jesse's example of 50 items, the imperative sample clocked in at 7ms while the LINQ version completed in 4. As the number of items went up, the elapsed time didn't necessarily climb exponentially. At 500 items they were pretty much the same, and the results were similar up to about 50,000 items. After that I tried 500,000 items, where the gap widened but not by much (2.2 seconds for imperative, 2.3 for LINQ). It wasn't until I tried 5,000,000 items that things were noticeable. Imperative filled the list in 20 seconds while LINQ took 8 seconds longer (although personally I wouldn't suggest you put 5 million items in a list unless you want your users showing up at your door with torches and pitchforks). Here's the table with the full results.

    Method/Items    50     500    5,000    50,000    500,000    5,000,000
    Imperative      7ms    7ms    38ms     223ms     2230ms     20974ms
    LINQ/Fluent     4ms    6ms    41ms     240ms     2310ms     28731ms

Like I said, at the end of the day it's not a huge difference, and you really don't want your users waiting around for 30 seconds on a mobile device filling lists. In fact if Windows Phone 7 detects you're taking more than 10 seconds to do any one thing, it considers the app hung and shuts it down. The results here are for Windows Phone 7 but frankly they're the same for desktop and web apps, so feel free to apply it generally. From a programming perspective, choose what you like. Some LINQ statements can get pretty hairy so I usually fall back with my simple mind and write it imperatively. If you really want to impress your friends, write it old school then let ReSharper do the hard work for you! Happy programming!
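As an aside, if you want to reproduce numbers of this kind yourself, a small Stopwatch harness is all it takes. This is my own sketch, not code from the post: it times only the list construction (not the ListBox binding), and it forces the LINQ query with ToList() because a LINQ query is lazy and does no work until something enumerates it:

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Linq;

    static class PerfSketch
    {
        static void TimeIt(string label, Func<List<int>> buildList)
        {
            var stopwatch = Stopwatch.StartNew();
            List<int> result = buildList();   // building the list forces full enumeration
            stopwatch.Stop();
            Console.WriteLine("{0}: {1} items in {2}ms", label, result.Count, stopwatch.ElapsedMilliseconds);
        }

        static void Main()
        {
            const int count = 5000000;        // try 50, 500, ... 5,000,000 as in the table above

            TimeIt("Imperative", () =>
            {
                var inLoop = new List<int>();
                foreach (int n in Enumerable.Range(1, count))
                    if (n > 5) inLoop.Add(n * n);
                return inLoop;
            });

            TimeIt("LINQ", () => Enumerable.Range(1, count)
                                           .Where(n => n > 5)
                                           .Select(n => n * n)
                                           .ToList());
        }
    }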

    Read the article

  • Special thanks to everyone that helped me in 2010.

    - by mbcrump
    2010 has been a very good year for me and I wanted to create a list and thank everyone for what they have done for me.  I also wanted to thank everyone for reading and subscribing to my blog. It is hard to believe that people actually want to read what I write. I feel like I owe a huge thanks to everyone listed below. Looking back upon 2010, I feel that I’ve grown as a developer and you are part of that reason. Sometimes we get caught up in day to day work and forget to give thanks to those that helped us along the way. The list below is mine, it includes people and companies. This list is obviously not going to include everyone that has helped, just those that have stood out in my mind. When I think back upon 2010, their names keep popping up in my head. So here goes, in no particular order.  People Dave Campbell – For everything he has done for the Silverlight Community with his Silverlight Cream blog. I can’t think of a better person to get recognition at the Silverlight FireStarter event. I also wanted to thank him for spending several hours of his time helping me track down a bug in my feedburner account. Victor Gaudioso – For his large collection of video tutorials on his blog and the passion and enthusiasm he has for Silverlight. We have talked on the phone and I’ve never met anyone so fired up for Silverlight. Kunal Chowdhury – Kunal has always been available for me to bounce ideas off of. Kunal has also answered a lot of questions that stumped me. His blog and CodeProject article have green a great help to me and the Silverlight Community. Glen Gordon – I was looking frantically for a Windows Phone 7 several months before release and Glen found one for me. This allowed me to start a blog series on the Windows Phone 7 hardware and developing an application from start to finish that Scott Guthrie retweeted.  Jeff Blankenburg – For listening to my complaints in the early stages of Windows Phone 7. Jeff was always very polite and gave me his cell phone number to talk it over. He also walked me through several problems that I was having early on. Pete Brown – For writing Silverlight 4 in Action. This book is definitely a labor of love. I followed Pete on Twitter as he was writing it and he spent a lot of late nights and weekends working on it. I felt a lot smarter after reading it the first time. The second time was even better. John Papa – For all of his work on the Silverlight Firestarter and the Silverlight community in general. He has also helped me on a personal level with several things. Daniel Heisler – For putting up with me the past year while we worked on many .NET projects together in 2010. Alvin Ashcraft – For publishing a daily blog post on the best of .NET links. He has linked to my site many times and I really appreciate what he does for the community. Chris Alcock – For publishing the Morning Brew every weekday. I remember when I first appeared on his site, I started getting hundreds of hits on my site and wondered if I was getting a DOS attack or something. It was great to find out that Chris had linked to one of my articles. Joel Cochran – For spending a week teaching “Blend-O-Rama”. This was my one of my favorite sessions of this year. I learned a lot about Expression Blend from it and the best part was that it was free and during lunchtime. Jeremy Likness – Jeremy is smart – very smart. I have learned a lot from Jeremy over the past year. 
He is also involved in the Silverlight community in every way possible, from forums to blog post to screencast to open source. It goes on and on. The people that I met at VSLive Orlando 2010. I had a great time chatting with Walt Ritscher, Wallace McClure, Tim Huckabee and David Platt. Also a special thanks to all of my friends on Twitter like @wilhil, @DBVaughan, @DataArtist, @wbm, @DirkStrauss and @rsringeri and many many more. Software Companies / Events / May of gave me FREE stuff. =) Microsoft (3) – I was sent a free coupon code by Microsoft to take the Silverlight 4 Beta Exam. I jumped on the offer and took the exam. It was great being selected to try out the exam before it goes public even though Microsoft eventually published a universal coupon code for everyone. I am still waiting to find out if I passed the exam. My fingers are crossed. Microsoft reaching out to me with some questions regarding the .NET Community. I’ve never had a company contact me with such interest in the community. Having a contest where 75 people could win a $100 gift certificate and a T-Shirt for submitting a Windows Phone 7 app. I submitted my app and won. All of the free launch events this year (Windows Phone 7, Visual Studio 2010, ASP.NET MVC). Wintellect – For providing an awesome day of free technical training called T.E.N. Where else can you get free training from some of the best programmers in the world? I also won a contest from them that included a NETAdvantage Ultimate License from Infragistics. VSLive – I attended the Orlando 2010 Conference and it was the best developer’s conference that I have ever attended. I got to know a lot of people at this conference and hang out with many wonderful speakers. I live tweeted the event and while it may have annoyed some, the organizers of VSLive loved it. I won the contest on Twitter and they invited me back to the 2011 session of my choice. This is a very nice gift and I really appreciate the generosity. BarcodeLib.com – For providing free barcode generating tools for a Non-Profit ASP.NET project that I was working on. Their third party controls really made this a breeze compared to my existing solution. NDepend – It is absolutely the best tool to improve code quality. The product is extremely large and I would recommend heading over to their site to check it out. Silverlight Spy – I was writing a blog post on Silverlight Spy and Koen Zwikstra provided a FREE license to me. If you ever wanted to peek inside of a Silverlight Application then this is the tool for you. He is also working on a version that will support OOB and Windows Phone 7. I would recommend checking out his site. Birmingham .NET Users Group / Silverlight Nights User Group – It takes a lot of time to put together a user group meeting every month yet it always seems to happen. I don’t want to name names for fear of leaving someone out but both of these User Groups are excellent if you live in the Birmingham, Alabama area. Publishing Companies Manning Publishing – For giving me early access to Silverlight 4 in Action by Pete Brown. It was really nice to be able to read this awesome book while Pete was writing it. I was also one of the first people to publish a review of the book. Sams Publishing and DZone – For providing a copy of Silverlight 4 Unleashed by Laurent Bugnion for me to review for their site. The review is coming in January 2011. 
Special Shoutout to the following 3rd Party Silverlight Controls It has been a great pleasure to work with the following companies on 3rd Party Control Giveaways every month. It always amazes me how every 3rd Party Control company is so eager to help out the community. I’ve never been turned down by any of these companies! These giveaways have sparked a lot of interest in Silverlight and hopefully I can continue giving away a new set every month. If you are a 3rd Party Control company and are interested in participating in these giveaways then please email me at mbcrump29[at]gmail[d0t].com. The companies below have already participated in my giveaways: Infragistics (December 2010) - Win a set of Infragistics Silverlight Controls with Data Visualization!  Mindscape (November 2010) - Mindscape Silverlight Controls + Free Mega Pack Contest Telerik (October 2010) - Win Telerik RadControls for Silverlight! ($799 Value) Again, I just wanted to say Thanks to everyone for helping me grow as a developer.  Subscribe to my feed

    Read the article

  • Java JRE 7 Certified with Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    Java Runtime Environment 7u10 (a.k.a. JRE 1.7.0_10 build 18) and later updates on the JRE 7 codeline are now certified with Oracle E-Business Suite Release 11i and 12 Windows-based desktop clients. What's needed to enable EBS environments for JRE 7? EBS customers should ensure that they are running JRE 7u10, at minimum, on Windows desktop clients. Of the compatibility issues identified with JRE 7, the most critical is an issue that prevents E-Business Suite Forms-based products from launching on Windows desktops that are running JRE 7.  Customers can prevent this issue -- and all other JRE 7 compatibility issues -- by ensuring that they have applied the latest certified patches documented for JRE 7 configurations to their EBS application tier servers.  These are summarized here for convenience. If the requirements change over time, please check the Notes for the authoritative list of patches: Apply Forms patch 14615390 to EBS 11i environments (Note 125767.1) Apply Forms patch 14614795 to EBS 12.0 and 12.1 environments (Note 437878.1) These patches are compatible with JRE 6 and 7, production ready, and fully-tested with the E-Business Suite.  These patches may be applied immediately to all E-Business Suite environments. All other Forms prerequisites documented in the Notes above should also be applied.  Where are the official patch requirements documented? All patches required for ensuring full compatibility of the E-Business Suite with JRE 7 are documented in these Notes: For EBS 11i: Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 11i (Note 290807.1) Upgrading Developer 6i with Oracle E-Business Suite 11i (Note 125767.1) For EBS 12: Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 12 (Note 393931.1) Upgrading OracleAS 10g Forms and Reports in Oracle E-Business Suite Release 12 (Note 437878.1) Prerequisites for 32-bit and 64-bit JRE certifications JRE 1.7.0_10 32-bit + EBS 11.5.10.2 Windows XP SP3 Windows Vista SP1 and SP2 Windows 7 and Windows 7 SP1  Forms 6.0.8.28.x patch 14615390 (Note 125767.1) JRE 1.7.0_10 32-bit + EBS 12.0 & 12.1 Windows XP SP3 Windows Vista SP1 and SP2 Windows 7 and Windows 7 SP1 Forms 10g overlay patch 14614795 (Note 437878.1) SSL Users:  10.1.0.5 version of Patch 6370967 applied to AS 10.1.3 with OPatch. Note: This fix is already included in the April 2011 AS 10.1.3.5 CPU patch and later. JRE 1.7.0_10 64-bit + EBS 11.5.10.2 Windows 7 (64-bit) and Windows 7 SP1 (64-bit) Forms 6.0.8.28.x patch 14615390 (Note 125767.1) JRE 1.7.0_10 64-bit + EBS 12.0 & 12.1 Windows 7 (64-bit) and Windows 7 SP1 (64-bit) Forms 10g overlay patch 14614795 (Note 437878.1) SSL Users:  10.1.0.5 version of Patch 6370967 applied to AS 10.1.3 with OPatch. Note: This fix is already included in the April 2011 AS 10.1.3.5 CPU patch and later.  EBS + Discoverer 11g Users JRE 1.7.0_10 (7u10) is certified for Discoverer 11g in E-Business Suite environments with the following minimum requirements: Discoverer (11g) 11.1.1.6 plus Patch 13877486 and later  Reference: How To Find Oracle BI Discoverer 10g and 11g Certification Information (Document 233047.1) Worried about the 'mismanaged session cookie' issue? No need to worry -- it's fixed.  To recap: JRE releases 1.6.0_18 through 1.6.0_22 had issues with mismanaging session cookies that affected some users in some circumstances. The fix for those issues was first included in JRE 1.6.0_23. 
These fixes will carry forward and continue to be fixed in all future JRE releases on the JRE 6 and 7 codelines.  In other words, if you wish to avoid the mismanaged session cookie issue, you should apply any release after JRE 1.6.0_22 on the JRE 6 codeline, and JRE 7u10 and later JRE 7 codeline updates. All JRE 6 and 7 releases are certified with EBS upon release Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline, and from JRE 7u10 and later updates on the JRE 7 codeline.  We test all new JRE 1.6 and JRE 7 releases in parallel with the JRE development process, so all new JRE 1.6 and 7 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team.  You do not need to wait for a certification announcement before applying new JRE 1.6 or JRE 7 releases to your EBS users' desktops. Implications of Java 6 End of Public Updates for EBS Users The Support Roadmap for Oracle Java is published here: Oracle Java SE Support Roadmap The latest updates to that page (as of Sept. 19, 2012) state (emphasis added): Java SE 6 End of Public Updates Notice After February 2013, Oracle will no longer post updates of Java SE 6 to its public download sites. Existing Java SE 6 downloads already posted as of February 2013 will remain accessible in the Java Archive on Oracle Technology Network. Developers and end-users are encouraged to update to more recent Java SE versions that remain available for public download. For enterprise customers, who need continued access to critical bug fixes and security fixes as well as general maintenance for Java SE 6 or older versions, long term support is available through Oracle Java SE Support . What does this mean for Oracle E-Business Suite users? EBS users fall under the category of "enterprise users" above.  Java is an integral part of the Oracle E-Business Suite technology stack, so EBS users will continue to receive Java SE 6 updates after February 2013. In other words, nothing will change for EBS users after February 2013.  EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6. These Java SE 6 updates will be made available to EBS users for the Extended Support periods documented in the Oracle Lifetime Support policy document for Oracle Applications (PDF): EBS 11i Extended Support ends November 2013 EBS 12.0 Extended Support ends January 2015 EBS 12.1 Extended Support ends December 2018 Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients? No. This upgrade is highly recommended but currently remains optional. JRE 6 will be available to Windows users to run with EBS for the duration of your respective EBS Extended Support period.  Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 desktop clients.  Coexistence of JRE 6 and JRE 7 on Windows desktops The upgrade to JRE 7 is highly recommended for EBS users, but some users may need to run both JRE 6 and 7 on their Windows desktops for reasons unrelated to the E-Business Suite. Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 7 will be invoked instead of JRE 6 if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290801.1 and 393931.1. 
Applying Updates to JRE 6 and JRE 7 to Windows desktops Auto-update will keep JRE 7 up-to-date for Windows users with JRE 7 installed. For Windows users with both JRE 6 and 7 installed, auto-update will keep only JRE 7 up-to-date; JRE 6 users are strongly encouraged to apply the latest Critical Patch Updates as soon as possible after each release. The Java SE CPUs will be available via My Oracle Support. EBS users can find more information about JRE 6 and 7 updates here: Information Center: Installation & Configuration for Oracle Java SE (Note 1412103.2) The dates for future Java SE CPUs can be found on the Critical Patch Updates, Security Alerts and Third Party Bulletin page. An RSS feed is available on that site for those who would like to be kept up-to-date. What will Mac users need? Oracle will provide updates to JRE 7 for Mac OS X users. EBS users running Macs will need to upgrade to JRE 7 to receive JRE updates. The certification of Oracle E-Business Suite with JRE 7 for Mac-based desktop clients accessing EBS Forms-based content is underway. Mac users waiting for that certification may find this article useful: How to Reenable Apple Java 6 Plug-in for Mac EBS Users Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers? No. This upgrade will be highly recommended but will be optional for EBS application tier servers running on Windows, Linux, and Solaris. You can choose to remain on JDK 6 for the duration of your respective EBS Extended Support period. If you remain on JDK 6, you will continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6. The certification of Oracle E-Business Suite with JDK 7 for EBS application tier servers on Windows, Linux, and Solaris as well as other platforms such as IBM AIX and HP-UX is planned. Customers running platforms other than Windows, Linux, and Solaris should refer to their Java vendors' sites for more information about their support policies. References Recommended Browsers for Oracle Applications 11i (Metalink Note 285218.1) Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (Metalink Note 290807.1) Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1) Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1) Related Articles Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23 Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009

    Read the article

  • Commit in SQL

    - by PRajkumar
    SQL Transaction Control Language (TCL) Commands – COMMIT (Commit Transaction). Transaction control language commands are among the most frequently used statements in SQL. Committing a transaction means making permanent the changes performed by the SQL statements within the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. The COMMIT statement also erases all savepoints in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back.   Until you commit a transaction: ·         You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit ·         You can roll back (undo) any changes made during the transaction with the ROLLBACK statement   Note: Many people assume that COMMIT means the changed data has been written to the data files. That is not quite what happens. Issuing COMMIT declares that your unit of work is complete; Oracle then verifies that the transaction is consistent and, once the corresponding redo has been written from the log buffer to the online redo log, returns the commit acknowledgement to the user. The modified blocks in the data buffers are written to the data files later, independently of the commit.   Before a transaction that modifies data is committed, the following has occurred: ·         Oracle has generated undo information. The undo information contains the old data values changed by the SQL statements of the transaction ·         Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). The redo log record contains the change to the data block and the change to the rollback block. These changes may go to disk before a transaction is committed ·         The changes have been made to the database buffers of the SGA. These changes may go to disk before a transaction is committed   Note:   The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so. It can happen before the transaction commits or, alternatively, it can happen some time after the transaction commits.   When a transaction is committed, the following occurs: 1.      The internal transaction table for the associated undo tablespace records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table 2.      The log writer process (LGWR) writes redo log entries in the SGA's redo log buffers to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction 3.      Oracle releases locks held on rows and tables 4.      
Oracle marks the transaction complete   Note:   The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to go to disk before returning a commit to the user. However, for lower transaction commit latency application developers can specify that redo be written asynchronously and that transaction do not need to wait for the redo to be on disk.   The syntax of Commit Statement is   COMMIT [WORK] [COMMENT ‘your comment’]; ·         WORK is optional. The WORK keyword is supported for compliance with standard SQL. The statements COMMIT and COMMIT WORK are equivalent. Examples Committing an Insert INSERT INTO table_name VALUES (val1, val2); COMMIT WORK; ·         COMMENT Comment is also optional. This clause is supported for backward compatibility. Oracle recommends that you used named transactions instead of commit comments. Specify a comment to be associated with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING along with the transaction ID if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction. Examples The following statement commits the current transaction and associates a comment with it: COMMIT     COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637'; ·         WRITE Clause Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing latency, thus eliminating the wait for an I/O to the redo log. Use this clause to improve response time in environments with stringent response time requirements where the following conditions apply: The volume of update transactions is large, requiring that the redo log be written to disk frequently. The application can tolerate the loss of an asynchronously committed transaction. The latency contributed by waiting for the redo log write to occur contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH clauses in any order. Examples To commit the same insert operation and instruct the database to buffer the change to the redo log, without initiating disk I/O, use the following COMMIT statement: COMMIT WRITE BATCH; Note: If you omit this clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, then commit records are written to disk before control is returned to the user. WAIT | NOWAIT Use these clauses to specify when control returns to the user. The WAIT parameter ensures that the commit will return only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client. In this case the client cannot tell whether or not the transaction committed. The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput. 
With the WAIT parameter, if the commit message is received, then you can be sure that no data has been lost. Caution: With NOWAIT, a crash occurring after the commit message is received, but before the redo log record(s) are written, can falsely indicate to a transaction that its changes are persistent. If you omit this clause, then the transaction commits with the WAIT behavior. IMMEDIATE | BATCH Use these clauses to specify when the redo is written to the log. The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log. This option forces a disk I/O, so it can reduce transaction throughput. The BATCH parameter causes the redo to be buffered to the redo log, along with other concurrently executing transactions. When sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called "group commit", as redo for multiple transactions is written to the log in a single I/O operation. If you omit this clause, then the transaction commits with the IMMEDIATE behavior. ·         FORCE Clause Use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction. ·         In a distributed database system, the FORCE 'string' [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to specifically assign the transaction a system change number (SCN). If you omit integer, then the transaction is committed using the current SCN. ·         The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where string is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view the V$CORRUPT_XID_LIST and to specify this clause. ·         Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause. Example: Forcing an in-doubt transaction. The following statement manually commits a hypothetical in-doubt distributed transaction (its transaction ID would be found in DBA_2PC_PENDING): COMMIT FORCE '22.57.53';
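    Pulling the clauses above together, here is a small sketch of how the variants might be combined in practice. It is illustrative only: the orders table and its values are invented for the example, and the WRITE options behave as described above only on Oracle releases that support them.

    -- Illustrative sketch; "orders" and its columns are made-up names.
    INSERT INTO orders (order_id, status) VALUES (1001, 'NEW');
    COMMIT WORK;                              -- same effect as a plain COMMIT

    INSERT INTO orders (order_id, status) VALUES (1002, 'NEW');
    COMMIT COMMENT 'Nightly load, batch 42';  -- comment kept in DBA_2PC_PENDING if a
                                              -- distributed transaction becomes in doubt

    UPDATE orders SET status = 'SHIPPED' WHERE order_id = 1001;
    COMMIT WRITE IMMEDIATE WAIT;              -- default behaviour: redo flushed to disk and
                                              -- control returns only once it is durable

    UPDATE orders SET status = 'ARCHIVED' WHERE order_id = 1002;
    COMMIT WRITE BATCH NOWAIT;                -- group commit: redo buffered, control returns
                                              -- at once; a crash may lose this commit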

    Read the article

  • Adding an Admin user to an ASP.NET MVC 4 application using a single drop-in file

    - by Jon Galloway
    I'm working on an ASP.NET MVC 4 tutorial and wanted to set it up so just dropping a file in App_Start would create a user named "Owner" and assign them to the "Administrator" role (more explanation at the end if you're interested). There are reasons why this wouldn't fit into most application scenarios: It's not efficient, as it checks for (and creates, if necessary) the user every time the app starts up The username, password, and role name are hardcoded in the app (although they could be pulled from config) Automatically creating an administrative account in code (without user interaction) could lead to obvious security issues if the user isn't informed However, with some modifications it might be more broadly useful - e.g. creating a test user with limited privileges, ensuring a required account isn't accidentally deleted, or - as in my case - setting up an account for demonstration or tutorial purposes. Challenge #1: Running on startup without requiring the user to install or configure anything I wanted to see if this could be done just by having the user drop a file into the App_Start folder and go. No copying code into Global.asax.cs, no installing additional NuGet packages, etc. That may not be the best approach - perhaps a NuGet package with a dependency on WebActivator would be better - but I wanted to see if this was possible and see if it offered the best experience. Fortunately ASP.NET 4 and later provide a PreApplicationStartMethod attribute which allows you to register a method which will run when the application starts up. You drop this attribute in your application and give it two parameters: a method name and the type that contains it. I created a static class named PreApplicationTasks with a static method named Initializer, then dropped this attribute in it: [assembly: PreApplicationStartMethod(typeof(PreApplicationTasks), "Initializer")] That's it. One small gotcha: the namespace can be a problem with assembly attributes. I decided my class didn't need a namespace. Challenge #2: Only one PreApplicationStartMethod per assembly In .NET 4, the PreApplicationStartMethod attribute is marked as AllowMultiple=false, so you can only have one PreApplicationStartMethod per assembly. This was fixed in .NET 4.5, as noted by Jon Skeet, so you can have as many PreApplicationStartMethods as you want (allowing you to keep your users waiting for the application to start indefinitely!). The WebActivator NuGet package solves the multiple instance problem if you're in .NET 4 - it registers as a PreApplicationStartMethod, then calls any methods you've indicated using [assembly: WebActivator.PreApplicationStartMethod(type, method)]. David Ebbo blogged about that here: Light up your NuGets with startup code and WebActivator. In my scenario (bootstrapping a beginner-level tutorial) I decided not to worry about this and stick with PreApplicationStartMethod. Challenge #3: PreApplicationStartMethod kicks in before configuration has been read This is by design, as Phil explains. It allows you to make changes that need to happen very early in the pipeline, well before Application_Start. That's fine in some cases, but it caused me problems when trying to add users, since the Membership Provider configuration hadn't yet been read - I got an exception stating that "Default Membership Provider could not be found." The solution here is to run code that requires configuration in a PostApplicationStart method. But how to do that? 
Challenge #4: Getting PostApplicationStartMethod without requiring WebActivator The WebActivator NuGet package, among other things, provides a PostApplicationStartMethod attribute. That's generally how I'd recommend running code that needs to happen after Application_Start: [assembly: WebActivator.PostApplicationStartMethod(typeof(TestLibrary.MyStartupCode), "CallMeAfterAppStart")] This works well, but I wanted to see if this would be possible without WebActivator. Hmm. Well, wait a minute - WebActivator works in .NET 4, so clearly it's registering and calling PostApplicationStartup tasks somehow. Off to the source code! Sure enough, there's even a handy comment in ActivationManager.cs which shows where PostApplicationStartup tasks are being registered: public static void Run() { if (!_hasInited) { RunPreStartMethods(); // Register our module to handle any Post Start methods. But outside of ASP.NET, just run them now if (HostingEnvironment.IsHosted) { Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility.RegisterModule(typeof(StartMethodCallingModule)); } else { RunPostStartMethods(); } _hasInited = true; } } Excellent. Hey, that DynamicModuleUtility seems familiar... Sure enough, K. Scott Allen mentioned it on his blog last year. This is really slick - a PreApplicationStartMethod can register a new HttpModule in code. Modules are run right after application startup, so that's a perfect time to do any startup stuff that requires configuration to be read. As K. Scott says, it's this easy: using System; using System.Web; using Microsoft.Web.Infrastructure.DynamicModuleHelper; [assembly:PreApplicationStartMethod(typeof(MyAppStart), "Start")] public class CoolModule : IHttpModule { // implementation not important // imagine something cool here } public static class MyAppStart { public static void Start() { DynamicModuleUtility.RegisterModule(typeof(CoolModule)); } } Challenge #5: Cooperating with SimpleMembership The ASP.NET MVC Internet template includes SimpleMembership. SimpleMembership is a big improvement over traditional ASP.NET Membership. For one thing, rather than forcing a database schema, it can work with your database schema. In the MVC 4 Internet template case, it uses Entity Framework Code First to define the user model. SimpleMembership bootstrap includes a call to InitializeDatabaseConnection, and I want to play nice with that. There's a new [InitializeSimpleMembership] attribute on the AccountController, which calls \Filters\InitializeSimpleMembershipAttribute.cs::OnActionExecuting(). That comment in that method that says "Ensure ASP.NET Simple Membership is initialized only once per app start" which sounds like good advice. I figured the best thing would be to call that directly: new Mvc4SampleApplication.Filters.InitializeSimpleMembershipAttribute().OnActionExecuting(null); I'm not 100% happy with this - in fact, it's my least favorite part of this solution. There are two problems - first, directly calling a method on a filter, while legal, seems odd. Worse, though, the Filter lives in the application's namespace, which means that this code no longer works well as a generic drop-in. The simplest workaround would be to duplicate the relevant SimpleMembership initialization code into my startup code, but I'd rather not. I'm interested in your suggestions here. 
Challenge #6: Module Init methods are called more than once When debugging, I noticed (and remembered) that the Init method may be called more than once per page request - it's run once per instance in the app pool, and an individual page request can cause multiple resource requests to the server. While SimpleMembership does have internal checks to prevent duplicate user or role entries, I'd rather not cause or handle those exceptions. So here's the standard single-use lock in the Module's init method: void IHttpModule.Init(HttpApplication context) { lock (lockObject) { if (!initialized) { //Do stuff } initialized = true; } } Putting it all together With all of that out of the way, here's the code I came up with: using Mvc4SampleApplication.Filters; using System.Web; using System.Web.Security; using WebMatrix.WebData; [assembly: PreApplicationStartMethod(typeof(PreApplicationTasks), "Initializer")] public static class PreApplicationTasks { public static void Initializer() { Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility .RegisterModule(typeof(UserInitializationModule)); } } public class UserInitializationModule : IHttpModule { private static bool initialized; private static object lockObject = new object(); private const string _username = "Owner"; private const string _password = "p@ssword123"; private const string _role = "Administrator"; void IHttpModule.Init(HttpApplication context) { lock (lockObject) { if (!initialized) { new InitializeSimpleMembershipAttribute().OnActionExecuting(null); if (!WebSecurity.UserExists(_username)) WebSecurity.CreateUserAndAccount(_username, _password); if (!Roles.RoleExists(_role)) Roles.CreateRole(_role); if (!Roles.IsUserInRole(_username, _role)) Roles.AddUserToRole(_username, _role); } initialized = true; } } void IHttpModule.Dispose() { } } The Verdict: Is this a good thing? Maybe. I think you'll agree that the journey was undoubtedly worthwhile, as it took us through some of the finer points of hooking into application startup, integrating with membership, and understanding why the WebActivator NuGet package is so useful Will I use this in the tutorial? I'm leaning towards no - I think a NuGet package with a dependency on WebActivator might work better: It's a little more clear what's going on Installing a NuGet package might be a little less error prone than copying a file A novice user could uninstall the package when complete It's a good introduction to NuGet, which is a good thing for beginners to see This code either requires either duplicating a little code from that filter or modifying the file to use the namespace Honestly I'm undecided at this point, but I'm glad that I can weigh the options. If you're interested: Why are you doing this? I'm updating the MVC Music Store tutorial to ASP.NET MVC 4, taking advantage of a lot of new ASP.NET MVC 4 features and trying to simplify areas that are giving people trouble. One change that addresses both needs us using the new OAuth support for membership as much as possible - it's a great new feature from an application perspective, and we get a fair amount of beginners struggling with setting up membership on a variety of database and development setups, which is a distraction from the focus of the tutorial - learning ASP.NET MVC. Side note: Thanks to some great help from Rick Anderson, we had a draft of the tutorial that was looking pretty good earlier this summer, but there were enough changes in ASP.NET MVC 4 all the way up to RTM that there's still some work to be done. 
It's high priority and should be out very soon. The one issue I ran into with OAuth is that we still need an administrative user who can edit the store's inventory. I thought about a number of solutions for that - making the first user to register the admin, or assigning any user who registers with the username "Administrator" to the Administrator role - but both ended up requiring extra code; also, I worried that people would use that code without understanding it or thinking about whether it was a good fit.

    Read the article

  • SQL Cruise Alaska 2011

    - by Grant Fritchey
    I had the extreme good fortune to get sent on the last SQL Cruise to Alaska. I love my job. In case you don't what this is, SQL Cruise is a trip on a cruise ship during which you get to attend classes while on the boat, learning all about SQL Server and related topics as well as network with the instructors and the other Cruisers. Frankly, it's amazing. Classes ran from Monday, 5/30, to Saturday, 6/4. The networking was constant, between classes, at night on cruise ship, out on excursions in Alaskan rainforests and while snorkeling in ocean waters. Here's a run down of the experience from my point of view. Because I couldn't travel out 2 days early, I missed the BBQ that occurred the day before the cruise when many of the Cruisers received their swag bags. Some of that swag came from Red Gate. I researched what was useful on a cruise like this and purchased small flashlights and binoculars for all the Cruisers. The flashlights were because, depending on your cabin, ships can be very dark. The binoculars were so that the cruisers could watch all the beautiful landscape as it flowed by. I would have liked to have been there when the bags were opened, but I heard from several people that they appreciated the gifts. Cruisers "In" the hot tub. Pictured: Marjory Woody, Michele Grondin, Kyle Brandt, Grant Fritchey, John Halunen Sunday I went to board the ship with my wife. We had a bit of an adventure because I messed up our documents. It all worked out and we got on board to meet up at the back of the boat at one of the outdoor bars with the other Cruisers, thanks to tweets letting everyone know where to go. That was the end of electronic coordination on the trip (connectivity in Alaska was horrible for everyone except AT&T). The Cruisers were a great bunch of people and it was a real honor to meet them and get to spend time with them. After everyone settled into their cabins, our very first activity was a contest, sponsored by Red Gate. The Cruisers, in an effort to get to know each other and the ship, were required to go all over taking various photographs, some of them hilarious. The winning team of three would all win prizes. Some of the significant others helped out and I tagged along with a team that tied for first but lost the coin toss. The winning team consisted of Christina Leo (blog|twitter), Ryan Malcom (twitter), Neil Hambly (blog|twitter). They then had to do math and identify the cabin with the lowest prime number, oh, and get a picture of it and be the first to get back up to the bar where we were waiting. Christina came in first and very happily carried home an Ipad2. Ryan won a 1TB portable hard drive and Neil won a wireless mouse (picture below, note my special SQL Server Central Friday Shirt. Thanks Steve (blog|twitter)). Winners: Christina Leo, Neil Hambly, Ryan Malcolm. Just Lucky: Grant Fritchey Monday morning classes started. Buck Woody (blog|twitter) was a special guest speaker on this cruise. His theme was "Three C's on the High Seas: Career, Communication and Cloud." The first session was all on Career. I'm not going to type out all my notes from the session, but let's just say, if you get the chance to hear Buck talk about how to manage your career, I suggest you attend. I have a ton of blog posts that I'll be putting together over the next several months (yes, months) both here and over on ScaryDBA. I also have a bunch of work I'm going to be doing to get my career performance bumped up a notch or two (and let's face it, that won't be easy). 
Later on Monday, Tim Ford (blog|twitter) did a session on DMOs. Specifically the session was on Tim's Period Table of DMOs that he has put together, and how to use some of the more interesting DMOs in your day to day job. It was a great session, packed with good information. Next, Brent Ozar (blog|twitter) did a session on how to monitor and guide SAN configuration for the DBA that doesn't have access to the SAN. That was some seriously useful information. Tuesday morning we only had a single class. Kendra Little (blog|twitter) taught us all about "No Lock for Yes Fun".  It was all about the different transaction isolation levels and how they work. There is so often confusion in this area and Kendra does a great job in clarifying the information. Also, she tosses in her excellent drawings to liven up the presentation. Then it was excursion time in Juneau. My wife and I, along with several other Cruisers, took a hike up around the Mendenhall Glacier. It was absolutely beautiful weather and walking through the Alaskan rain forest was a treat. Our guide, Jason, was a great guy and it was a good day of hiking. Wednesday was an all day excursion in Skagway. My wife and I took the "Ghost and Good Time Girls" walking tour that ended up at a bar that used to be a brothel, the Red Onion. It was a great history of the town. We went back out and hit a few museums and exhibits. We also hiked up the side of the mountain to see the Dewey Lake and some great views of the town. Finally we hiked out to the far side of town to see the Gold Rush cemetery. Hiking done we went back to the boat and had a quiet dinner on our own. Thursday we cruised through Glacier Bay and saw at least four different glaciers including sitting next to the Marjory Glacier for  about an hour. It was amazing. Then it got better. We went into class with Buck again, this time to talk about Communication. Again, I've got pages of notes that I'm going to be referring back to for some time to come. This was an excellent opportunity to learn. Snorkelers: Nicole Bertrand, Aaron Bertrand, Grant Fritchey, Neil Hambly, Christina Leo, John Robel, Yanni Robel, Tim Ford Friday we pulled into Ketchikan. A bunch of us went snorkeling. Yes, snorkeling. Yes, in Alaska. Yes, snorkeling in the ocean in Alaska. It was fantastic. They had us put on 7mm thick wet suits (an adventure all by itself) so it was basically warm the entire time we were in the water (except for the occasional squirt of cold water down my back). Before we got in the water a bald eagle flew up and landed about 15 feet in front of us, which was just an incredible event. Then our guide pointed out about 14 other eagles in the area, hanging out in the trees. Wow! The water was pretty clear and there was a ton of things to see. That was absolutely a blast. Back on the boat I presented a session called Execution Plans: The Deep Dive (note the nautical theme). It seemed to go over well and I had several good questions come out of the session that will lead to new blog posts. After I presented, it was Aaron Bertrand's (blog|twitter) turn. He did a session on "What's New in Denali" that provided a lot of great information. He was able to incorporate new things straight out of Tech-Ed, so this was expanded beyond his usual presentation. The man really knows what he's talking about and communicates it well. Saturday we were travelling so there was time for a bunch of classes. 
Jeremiah Peschka (blog|twitter) did a great overview of some of the NoSQL databases and what they should be used for. The session was called "The Database is Dead" but it was really about how there are specific uses for these databases that SQL Server doesn't fill, but also that these databases can't replace SQL Server in other areas. Again, good material. Brent Ozar presented again with a session on Defensive Indexing. It was an overview of how indexes work and a deep dive into how to apply them appropriately in your databases to better support access. A good session, as you would expect. Then we pulled into Victoria, BC, in Canada and had a nice dinner with several of the Cruisers, including Denny Cherry (blog|twitter). After that it was back to Seattle on Sunday. By the way, the Science Fiction Museum in Seattle isn't a Science Fiction Museum any more. I was very disappointed to discover this. Overall, it was a great experience. I'm extremely appreciative of Red Gate for sending me and for Tim, Brent, Kendra and Jeremiah for having me. The other Cruisers were all amazing people and it was an honor & privilege to meet them and spend time with them. While this was a seriously fun time, it was also a very serious training opportunity with solid information coming from seasoned industry pros.

    Read the article

  • Microsoft&rsquo;s new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year.  Microsoft literally buys computers by the truckload.  From what I understand, it’s a typical practice amongst large software vendors.  You plug a few wires in, you test it, and you instantly have mega tera tera flops (don’t hold me to that number).  Microsoft has been trying to plug away at their cloud services (named Azure).  Which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn’t surprise me that I was recently sent an executive email concerning Microsoft’s new technical computing initiative.  I find it to be a great marketing idea with actual substance behind their real work.  From the programmer academic perspective, in college we dreamed about this type of processing power.  This has decades of computer science theory behind it. A copy of the email received.  (note that I almost deleted this email, thinking it was spam due to it’s length) We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But, they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And, to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves. 
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980's, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm, a.i. solutions, Inc., needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. 
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high- performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed-reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and trouble shoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends. 
Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to-date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob

    Read the article

  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let’s talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me! I’ve chosen to write about spools because they seem to get a bad rap (even in my song I used the line “There’s spooling from a CTE, they’ve got recursion needlessly”). I figured it was worth covering some of what spools are about, and hopefully explain why they are remarkably necessary, and generally very useful. If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word ‘spool’, you’ll notice it says there are 46 matches. 46! Yeah, that’s what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn’t used in SQL 2012, I won’t describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I’ll mention in a moment. In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you’re going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I’m sure you use spools every time you use your sewing machine. I know I do. I can’t think of a time when I’ve got out my sewing machine to do some sewing and haven’t used a spool. However, I often run SQL queries that don’t use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It’s like I can just sew with my cotton despite it not being on a spool! Many of my favourite features in T-SQL do like to use spools though. This looks like a very similar query to before, but includes an OVER clause to return a column telling me the number of rows in my data set. I’ll describe what’s going on in a few paragraphs’ time. So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap), or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it’s a larger spool for the top. A Table or Index Spool is either Eager or Lazy in nature. 
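    (The queries in the original post are screenshots and don't survive here. As a rough sketch only, the kind of query referred to above – a simple join plus an OVER clause returning the number of rows in the data set – might look something like the following; the table names assume the AdventureWorks sample database, and whether the optimizer actually introduces a Table Spool for the windowed count depends on your version and statistics.)

    SELECT p.Name,
           th.TransactionDate,
           COUNT(*) OVER () AS NumRowsInResult   -- windowed count over the whole result set
    FROM   Production.Product AS p
    JOIN   Production.TransactionHistory AS th
           ON th.ProductID = p.ProductID;

    Back to the Eager and Lazy part.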
Eager and Lazy are Logical operators, which talk more about the behaviour, rather than the physical operation. If I’m sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. “Lazy” might not the be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it’s needed (Lazy). Window Spools are both physical and logical. They’re eager on a per-window basis, but lazy between windows. And when is it needed? The way I see it, spools are needed for two reasons. 1 – When data is going to be needed AGAIN. 2 – When data needs to be kept away from the original source. If you’re someone that writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I’m updating my contact list, and some of my changes move data to later in the book. If I’m not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don’t, by using a copy of the data. This problem is known as the Halloween Effect (not because it’s spooky, but because it was discovered in late October one year). As I’m sure you can imagine, the kind of spool you’d need to protect against the Halloween Effect would be eager, because if you’re only handling one row at a time, then you’re not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. In the query below I’m forcing the Query Optimizer to use an index which would be upset if the Name column values got changed, and we see that before any data is fetched, a spool is created to load the data into. This doesn’t stop the index being maintained, but it does mean that the index is protected from the changes that are being done. There are plenty of times, though, when you need data repeatedly. Consider the query I put above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not), is to ask that a Table Spool be populated. That’s the Table Spool operator on the top row. That spool can produce the same set of rows repeatedly. This is the behaviour that we see in the bottom half of the plan. In the bottom half of the plan, we see that the a join is being done between the rows that are being sourced from the spool – one being aggregated and one not – producing the columns that we need for the query. Table v Index When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we’re creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page-splits as the b-tree is populated, so there had better be a reason to use that kind of structure. 
That all depends on how the data is going to be used in other parts of the plan. If you’ve ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it’s worthwhile. It’s worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. The details about whether or not a Spool used as a source shows as a Table Spool or an Index Spool is more about whether a Seek predicate is used, rather than on the underlying structure. Recursive CTE I’ve already shown you an example of spooling when the OVER clause is used. You might see them being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there’s no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE’s logic. Annoyingly, this doesn’t happen. Consider this query, which really looks like it’s using the same data twice. I’m creating a set of data (which is completely deterministic, by the way), and then joining it back to itself. There seems to be no reason why it shouldn’t use a spool for the set described by the CTE, but it doesn’t. On the other hand, if we don’t pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it’s nice to give the Query Optimizer hints about how to execute queries, it usually doesn’t do a bad job, even without spooling (and you can always use a temporary table). When recursion is used, though, spooling should be expected. Consider what we’re asking for in a recursive CTE. We’re telling the system to construct a set of data using an initial query, and then use set as a source for another query, piping this back into the same set and back around. It’s very much a spool. The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn’t quite fit, but that’s what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (This query is running on AdventureWorks, which has a ManagerID column in HumanResources.Employee, not AdventureWorks2012) The Index Spool operator is sucking rows into it – lazily. It has to be lazy, because at the start, there’s only one row to be had. However, as rows get populated onto the spool, the Table Spool operator on the right can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we’ve reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like). Spools are useful. Don’t lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you’re essentially doing the same – don’t get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query. I hope you’re enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators? 
At some point I’ll write a summary post – once I have you should find a comment below pointing at it. @rob_farley

    Read the article

  • Clean Code Developer & Certification in IT - MSCC 21.09.2013

    It was a very busy weekend this time, and quite some hectic to organise the second meetup on a Saturday for the Mauritius Software Craftsmanship Community (MSCC) but it was absolutely fun. Following, I'm writing a brief summary about the topics we spoke about and the new impulses I got. "What a meetup... I was positively impressed. At the beginning I thought that noone would actually show up but then by the time the room got filled. Lots of conversation, great dialogues and fantastic networking between fresh students, experienced students, experienced employees, and self-employed attendees. That's what community is all about!" Above quote was my first reaction shortly after the gathering. And despite being busy during the weekend and yesterday, I took my time to reflect a little bit on things happened and statements made before writing it here on my blog. Additionally, I was also very curious about possible reactions and blogs from other attendees. Reactions from other craftsmen Let me quickly give you some links and quotes from others first... "Like Jochen posted on facebook, that was indeed a 5+ hours marathon (maybe 4 hours for me but still) … Wohoo! We’re indeed a bunch of crazy geeks who did not realise how time flew as we dived into the myriad discussions that sprouted. Yet in the end everyone was happy (:" -- Ish on MSCC meetup - The marathon (: "And the 4hours spent @ Talking drums bore its fruit..I was doing something I never did before....reading the borrowed book while walking....and though I was not that familiar with things mentionned in the book...I was skimming,scanning & flipping...reading titles...short paragraphs...and I skipped pages till I reached home." -- Yannick on Mauritius Software Craftsmanship 1st Meet-up "Hi Developers, Just wanted to share with you the meetups i attended last Saturday - [...] - The second meetup is the one hosted by Jochen Kirstätter, the MSCC, where the attendees were Craftsman, no woman, this time - all sharing the same passion of being a developer - even though it is on different platforms(Windows - Windows Phone - Linux - Adobe(yes a designer) - .Net) - but we manage to sit at the same table - sharing developer views and experience in the corporate world - also talking about good practice when coding( where Jochen initiated a discussion on Clean Coding ) i could not stay till the end - but from what i have heard - the longer you stay the more fun you have till 1600. Developers in the Facebook grouping i invite you to stay tuned about the various developer communities popping up - where you can come to share and learn good practices, develop the entrepreneurial spirit, and learn and share your passion about technologies" -- Arnaud on Facebook More feedback has been posted on the event directly. So, should I really write more? Wouldn't that spoil the impressions? Starting the day with a surprise Indeed, I was very pleased to stumble over the existence of Mobile Monday Mauritius on LinkedIn, an association about any kind of mobile app development, mobile gadgets and latest smartphones on the market. Despite the Monday in their name they had scheduled their recent meeting on Saturday between 10:00 and 12:00hrs. Wow, what a coincidence! Let's grap the bull by its horns and pay them an introductory visit. As they chose the Ebene Accelerator at the Orange Tower in Ebene it was a no-brainer to leave home a bit earlier and stop by. It was quite an experience and fun to talk to the geeks over there. 
Really looking forward to organising something together.... Arriving at the venue As the children got a bit uneasy at the MoMo gathering and I didn't want to disturb them too much, we arrived early at Bagatelle. Well, no problem, as we went for a decent breakfast at Food Lover's Market. Shortly afterwards we went to our venue location, Talking Drums, and prepared the room for the meeting. We only had to take a repro-painting off the wall in order to have a decent area for the projector. All went very smoothly and my two little ones were of great help. Just in time, our first craftsman Avinash arrived on the spot. And then the waiting started... Luckily, not for too long. Bit by bit more and more IT people came to join our meeting. Meanwhile, I used the time to give a brief introduction about the MSCC in general, what we are (hm, maybe I am) trying to achieve, and that the current phase is completely focused on creating more awareness that a community like the MSCC is active here in Mauritius. As soon as we reached a 'critical mass' of about ten people I asked everyone for a short introduction and bio, just in case... Conversation between participants started to kick in and we were actually doing more networking than focusing on our topics of the day. Quick updates on latest news and development around the MSCC Finally, Clean Code Developer No matter what the position is actually called, whether it is Software Engineer, Software Developer, Programmer, Architect, or Craftsman, anyone working in IT is facing almost the same obstacles. As for the process of writing software applications there are recurring patterns and principles combined with common exercises and best practices on how to resolve them. Initiated by the must-read book 'Clean Code' by Robert C. Martin (aka Uncle Bob), the concept of the Clean Code Developer (CCD) was born some years ago. CCD is much like traditional martial arts, where you create awareness of certain principles and learn how to apply practices to improve your style. The CCD initiative recommends indicating your level of knowledge and experience with coloured wrist bands - equivalent to the belt colours - for various reasons. Frankly speaking, I think that the biggest advantage here is provided by the obvious recognition of conceptual understanding. For example, take the situation of a team meeting... A member with a higher grade in CCD, say Green grade, sees that there are mainly Red grades to talk to, and adjusts her way of communication to their level of understanding. The choice of words might change as certain elements of CCD are not yet familiar to all team members. So instead of talking in an abstract way which only Green grades could follow, the whole scenario comes down to Red grade level. Different story, better results... Similar to learning martial arts, we only covered two grades on this occasion - black and red. Most interestingly, there was quite some positive feedback and lots of questions about the principles and practices of the red grade. And we gathered real-world examples from various craftsmen and discussed them. Following is the Clean Code Developer Red Grade and some annotations from our meetup: CCD Red Grade - Principles Don't Repeat Yourself - DRY Keep It Simple, Stupid (and Short) - KISS Beware of Optimisations! Favour Composition over Inheritance - FCoI Interestingly, most of the attendees had already heard about those keywords but couldn't really classify or categorize them (a small sketch of the last principle follows below). 
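To make the last principle - Favour Composition over Inheritance - a little more tangible, here is a minimal C# sketch; the class names are purely hypothetical and were not part of the meetup material:

        // With composition the collaborator is injected, not inherited,
        // so it can be swapped (e.g. for a fake in a unit test) without
        // touching the class that uses it.
        public interface IMessageSender
        {
            void Send(string recipient, string body);
        }

        public class SmtpSender : IMessageSender
        {
            public void Send(string recipient, string body)
            {
                // talk to an SMTP server here...
            }
        }

        public class ReportEmailer
        {
            private readonly IMessageSender _sender;

            public ReportEmailer(IMessageSender sender)
            {
                _sender = sender; // composed in via the constructor
            }

            public void EmailReport(string recipient, string report)
            {
                _sender.Send(recipient, report);
            }
        }

The inheritance-based alternative (class ReportEmailer : SmtpSender) would weld the emailer to one fixed way of sending, which is exactly the coupling the principle warns about.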
It's very similar to a situation in which you do not know the particular name for a thing and have to describe it to others... until someone tells you the actual name and suddenly all is very simple. CCD Red Grade - Practices Follow the Boy Scout Rule Root Cause Analysis - RCA Use a Version Control System Apply Simple Refactoring Pattern Reflect Daily Introduction to the principles and practices of Clean Code Developer - here: Red Grade As for the various ToDo's we commonly agreed that the Boy Scout Rule clearly is not limited to software development or IT administration but applies to daily life in general. Same for the root cause analysis, btw. We really had good stories with surprising endings and conclusions. A quick check about who is using a version control system brought more drive into the conversation. Not only did we have people who aren't using any VCS at all, we also had the 'classic' approach of backup folders and naming conventions as well as the VCS 'junkie' who has to use multiple systems at a time. Just for the record: Git and GitHub seem to be the favourites of some of the attendees. Regarding the daily reflection at the end of the day we came up with an easy solution: wrap it up as a blog entry! Certifications in IT This is kind of a controversy in IT in general. Is it worthwhile to go for certifications or are they completely obsolete? What are the possibilities to get certified? What are the options we have in Mauritius? How do certificates stand compared to other educational tracks like Computer Science or Web Design? The ratio between craftsmen with certifications like MCP, MSTS, CCNA or LPI versus the ones without wasn't in favour of the first group, but there was a high interest in the topic itself and some were really surprised to hear that exam preparation materials are freely available online, including temporary voucher codes for either discounts or completely free exams. Furthermore, we discussed possible options for forming so-called study groups on specific certificates and organising more frequent meetups in order to learn together. Taking into consideration that we have sponsored access to the video course material of Pluralsight (and now PeepCode as well as TrainSignal), we might give it a try by the end of the year. Current favourites are LPIC Level 1 and one of the Microsoft exams 40-78x. Feedback and ideas for the MSCC The closing conversations and discussions about how the MSCC is currently doing, what the possibilities are and what's (hopefully) going to happen in the future were really fertile, and I made a couple of mental bullet points which I'm looking forward to tackling together with other craftsmen. It might also be a good option to elaborate on some issues during our weekly Code & Coffee sessions on a Wednesday morning. Active discussion on various IT topics like certifications (LPI, MCP, CCNA, etc) and sharing experience Finally, we made it till the end of the planned time. Well, actually the talk was still on and we continued even after 16:00hrs. Unfortunately, we (the children and I) had to leave for evening activities. My resume of the day... It was great to have 15 craftsmen in one room. There are hundreds of IT geeks out there in Mauritius, and as the Mauritius Software Craftsmanship Community we still have a lot of work to do to pass on the message to some more key players and companies. Currently, it seems that we are able to attract a good number of students in Computer Science... 
but we have a lot more to offer, even, or especially, for IT people on the job. I'm already looking forward to our next Saturday meetup in the near future. PS: Meetup pictures are courtesy of Nirvan Pagooah. Thanks for sharing...

    Read the article

  • Solaris: What comes next?

    - by alanc
    As you probably know by now, a few months ago, we released Solaris 11 after years of development. That of course means we now need to figure out what comes next - if Solaris 11 is “The First Cloud OS”, then what do we need to make future releases of Solaris be, to be modern and competitive when they're released? So we've been having planning and brainstorming meetings, and I've captured some notes here from just one of those we held a couple of weeks ago with a number of the Silicon Valley based engineers. Now before someone sees an idea here and calls their product rep wanting to know what's up, please be warned what follows are rough ideas, and as I'll discuss later, none of them have any commitment, schedule, working code, or even a plan for integration in any possible future product at this time. (Please don't make me force you to read the full Oracle future product disclaimer here, you should know it by heart already from the front of every Oracle product slide deck.) To start with, we did some background research, looking at ideas from other Oracle groups, and competitive OSes. We examined what was hot in the technology arena and where the interesting startups were heading. We then looked at Solaris to see where we could apply those ideas. Making Network Admins into Socially Networking Admins We all know an admin who has grumbled about being the only one stuck late at work to fix a problem on the server, or having to work the weekend alone to do scheduled maintenance. But admins are humans (at least most are), and crave companionship and community with their fellow humans. And even when they're alone in the server room, they're never far from a network connection, allowing access to the wide world of wonders on the Internet. Our solution here is not building a new social network - there are enough of those already, and Oracle even has its own Oracle Mix social network. What we proposed is integrating Solaris features to help engage our system admins with these social networks, building community and bringing them recognition in the workplace, using achievement recognition systems as found in many popular gaming platforms. For instance, if you had a Facebook account, and a group of admin friends there, you could register it with our Social Network Utility For Facebook, and then your friends might see: Alan earned the achievement Critically Patched (April 2012) for patching all his servers. Matt is only at 50% - encourage him to complete this achievement today! To avoid any undue risk of advertising who has unpatched servers that are easier targets for hackers to break into, this information would be tightly protected via Facebook's world-renowned privacy settings to avoid it falling into the wrong hands. A related form of gamification we considered was replacing simple certifications with role-playing-game-style Experience Levels. Instead of just knowing an admin passed a test establishing a given level of competency, these would give recruiters a more detailed picture of how much real-world experience an admin has. Achievements such as the one above would feed into it, but larger numbers of experience points would be gained by tougher or more critical tasks - such as recovering a down system, or migrating a service to a new platform. (As long as it was an Oracle platform of course - migrating to an HP or IBM platform would cause the admin to lose points with us.) Unfortunately, we couldn't figure out a good way to prevent (if you will) “gaming” the system. 
For instance, a disgruntled admin might decide to start ignoring warnings from FMA that a part is beginning to fail, or to skip preventative maintenance, in the hope that they'd cause a catastrophic failure to earn more points for bolstering their resume as they look for a job elsewhere, without worrying about the effect on your business of a mission critical server going down. More Z's for ZFS Our suggested new feature for ZFS was inspired by the world's most successful Z-startup of all time: Zynga. Using the Social Network Utility For Facebook described above, we'd tie it in with ZFS monitoring to help you out when you find yourself in a jam needing more disk space than you have, and can't wait a month to get a purchase order through channels to buy more. Instead, with the click of a button you could post to your group: Alan can't find any space in his server farm! Can you help? Friends could loan you some space on their connected servers for a few weeks, knowing that you'd return the favor when needed. ZFS would create a new filesystem for your use on their system, and securely share it with your system using Kerberized NFS. If none of your friends have space, then you could buy temporary space in small increments at affordable rates right there in Facebook, using your Facebook credits, and then file an expense report later, after the urgent need has passed. Universal Single Sign On One thing all the engineers agreed on was that we still had far too many "Single" sign ons to deal with in our daily work. On the web, every web site used to have its own password database, forcing us to hope we could remember what login name was still available on each site when we signed up, and which unique password we came up with to avoid having to disclose our other passwords to a new site. In recent years, the web services world has finally been reducing the number of logins we have to manage, with many services allowing you to log in using your identity from Google, Twitter or Facebook. So we proposed following their lead, introducing PAM modules for web services - no more would you have to type in whatever login name IT assigned and try to remember the password you chose the last time password aging forced you to change it - you'd simply choose which web service you wanted to authenticate against, and would be logged in to your Solaris account upon receipt of a cookie from their identity service. Pinning notes to the cloud We also noted that we all have our own pile of notes we keep in our daily work - in text files in our home directory, in notebooks we carry around, on white boards in offices and common areas, on sticky notes on our monitors, or on scraps of paper pinned to our bulletin boards. The contents of the notes vary, some are things just for us, some are useful for our groups, some we would share with the world. For instance, when our group moved to a new building a couple of years ago, we had a white board in the hallway listing all the NIS & DNS servers, subnets, and other network configuration information we needed to set up our Solaris machines after the move. Similarly, as Solaris 11 was finishing and we were all learning the new network configuration commands, we shared notes in wikis and e-mails with our fellow engineers. Users may also remember that one of the popular features of Sun's old BigAdmin site was a section for sharing scripts and tips such as these. Meanwhile, the online "pin board" at Pinterest is taking the web by storm. So we thought, why not mash those up to solve this problem? 
We proposed a new BigAddPin site where users could “pin” notes, command snippets, configuration information, and so on. For instance, once they had worked out the ideal Automated Installation manifest for their app server, they could pin it up to share with the rest of their group, or choose to make it public as an example for the world. Localized data, such as our group's notes on the servers for our subnet, could be shared only with users connecting from that subnet. And notes that they didn't want others to see at all could be marked private, such as the list of phone numbers to call for late night pizza delivery to the machine room, the birthdays and anniversaries they can never remember but would be sleeping on the couch if they forgot, or the list of automatically generated, completely random, impossible to remember root passwords to all their servers. For greater integration with Solaris, we'd put support right into the command shells — redirect output to a pinned note, set your path to include pinned notes as scripts you can run, or bring up your recent shell history and pin a set of commands to save for the next time you need to remember how to do that operation. Location service for Solaris servers A longer term plan would involve convincing the hardware design groups to put GPS locators with wireless transmitters in future server designs. This would help both admins and service personnel trying to find servers in today's massive data centers, and could feed into location presence apps to help show potential customers that while they may not see many Solaris machines on the desktop any more, they are all around. For instance, while walking down Wall Street it might show “There are over 2000 Solaris computers in this block.” [Note: this proposal was made before the recent media coverage of a location service aggregator app with less noble intentions, and in hindsight, we failed to consider what happens when such data similarly falls into the wrong hands. We certainly wouldn't want our app to be misinterpreted as “There are over $20 million of SPARC servers in this building, waiting for you to steal them.” so it's probably best it was rejected.] Harnessing the power of the GPU for Security Most modern OSes make use of the widespread availability of high powered GPU hardware in today's computers, with desktop environments requiring 3-D graphics acceleration, whether in Ubuntu Unity, GNOME Shell on Fedora, or Aero Glass on Windows, but we haven't yet made Solaris fully take advantage of this, beyond our basic offering of Compiz on the desktop. Meanwhile, more businesses are interested in increasing security by using biometric authentication, but must also comply with laws in many countries preventing discrimination against employees with physical limitations such as missing eyes or fingers, not to mention the lost productivity when employees can't log in due to tinted contacts throwing off a retina scan or a paper cut changing their fingerprint appearance until it heals. Fortunately, the two groups considering these problems put their heads together and found a common solution, using 3D technology to enable authentication using the one body part all users are guaranteed to have - pam_phrenology.so, a new PAM module that uses an array of USB-attached web cams (or just one if the user is willing to spin their chair during login) to take pictures of the user's head from all angles, create a 3D model and compare it to the one in the authentication database. 
While Mythbusters has shown how easy it can be to fool common fingerprint scanners, we have not yet seen any evidence that people can impersonate the shape of another user's cranium, no matter how long they spend beating their head against the wall to reshape it. This could possibly be extended to group users, using modern versions of some of the older phrenological studies, such as giving all users with long grey beards access to the System Architect role, or automatically placing users with pointy spikes in their hair into an easy use mode. Unfortunately, there are still some unsolved technical challenges we haven't figured out how to overcome. Currently, a visit to the hair salon causes your existing authentication to expire, and some users have found that shaving their heads is the only way to avoid bad hair days becoming bad login days. Reaction to these ideas After gathering all our notes on these ideas from the engineering brainstorming meeting, we took them in to present to our management. Unfortunately, most of their reaction cannot be printed here, and they chose not to accept any of these ideas as they were, but they did have some feedback for us to consider as they sent us back to the drawing board. They strongly suggested our ideas would be better presented if we weren't trying to decipher ink blotches that had been smeared by the condensation when we put our pint glasses on the napkins we were taking notes on, and to that end let us know they would not be approving any more engineering offsites in Irish themed pubs on the Friday of a Saint Patrick's Day weekend. (Hopefully they mean that situation specifically and aren't going to deny the funding for travel to this year's X.Org Developer's Conference just because it happens to be in Bavaria and ending on the Friday of the weekend Oktoberfest starts.) They recommended our research techniques could be improved over just sitting around reading blogs and checking our Facebook, Twitter, and Pinterest accounts, such as considering input from alternate viewpoints on topics such as gamification. They also mentioned that Oracle hadn't fully adopted some of Sun's common practices and we might have to try harder to get those to be accepted now that we are one unified company. So as I said at the beginning, don't pester your sales rep just yet for any of these, since they didn't get approved, but if you have better ideas, pass them on and maybe they'll get into our next batch of planning.

    Read the article

  • Day 6 - Game Menuing Woes and Future Screen Sneak Peeks

    - by dapostolov
    So, after my last post on Day 5 I dabbled with my game class design. I took the approach where each game object is tightly coupled with a graphic. The good news is I got the menu working, but not without some hard knocks and game growing pains. I'll explain later, but for now...here is a class diagram of my first stab at my class structure and some code... Ok, there are a few mistakes, however, I'm going to leave it as is for now... As you can see I created an initial abstract base class called GameSprite. This class, when inherited, will provide a simple virtual default draw method:

        public virtual void DrawSprite(SpriteBatch spriteBatch)
        {
            spriteBatch.Draw(Sprite, Position, Color.White);
        }

The benefit of coding it this way is that it allows me to inherit the class and utilise the method in the screen draw method... So regardless of what the graphic object type is, it will now have the ability to render a static image on the screen. Example: public class MyStaticTreasureChest : GameSprite {} If you remember the window draw method from Day 3's post, we could use the above code as follows...

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
            foreach (var gameSprite in ListOfGameObjects)
            {
                gameSprite.DrawSprite(spriteBatch);
            }
            spriteBatch.End();
            base.Draw(gameTime);
        }

I have to admit the GameSprite object is pretty plain, as is its DrawSprite method... But ... we now have the ability to render 3 static menu items on the screen ... BORING! I want those menu items to do something exciting, which of course involves animation... So, let's have a peek at AnimatedGameSprite in the above game diagram. The idea with the AnimatedGameSprite is that it has an image to animate...such as ... characters, fireballs, and... menus! So after inheriting from the GameSprite class, I added a few more options such as UpdateSprite...

        public virtual void UpdateSprite(float elapsed)
        {
            _totalElapsed += elapsed;
            if (_totalElapsed > _timePerFrame)
            {
                _frame++;
                _frame = _frame % _framecount;
                _totalElapsed -= _timePerFrame;
            }
        }

And an overridden DrawSprite...

        public override void DrawSprite(SpriteBatch spriteBatch)
        {
            int FrameWidth = Sprite.Width / _framecount;
            Rectangle sourcerect = new Rectangle(FrameWidth * _frame, 0, FrameWidth, Sprite.Height);
            spriteBatch.Draw(Sprite, Position, sourcerect, Color.White, _rotation, _origin, _scale, SpriteEffects.None, _depth);
        }

With these two methods...I can animate an image; all I had to do was add a few more lines to the screen's Update method (from Day 3), like such:

            float elapsed = (float) gameTime.ElapsedGameTime.TotalSeconds;
            foreach (var item in ListOfAnimatedGameObjects)
            {
                item.UpdateSprite(elapsed);
            }

And voila! My images begin to animate in one spot on the screen... Hmm, but how do I interact with the menu items using a mouse...well the mouse cursor was easy enough... this.IsMouseVisible = true; But, to have it "interact" with an image was a bit more tricky...I had to perform collision detection!
            mouseStateCurrent = Mouse.GetState();
            var uiEnabledSprites = (from s in menuItems
                                    where s.IsEnabled
                                    select s).ToList();
            foreach (var item in uiEnabledSprites)
            {
                var r = new Rectangle((int)item.Position.X, (int)item.Position.Y, item.Sprite.Width, item.Sprite.Height);
                item.MenuState = MenuState.Normal;
                if (r.Intersects(new Rectangle(mouseStateCurrent.X, mouseStateCurrent.Y, 0, 0)))
                {
                    item.MenuState = MenuState.Hover;
                    if (mouseStatePrevious.LeftButton == ButtonState.Pressed
                        && mouseStateCurrent.LeftButton == ButtonState.Released)
                    {
                        item.MenuState = MenuState.Pressed;
                    }
                }
            }
            mouseStatePrevious = mouseStateCurrent;

So, basically, what it is doing above is iterating through all my interactive objects, detecting a rectangle collision between the mouse cursor and each object, and then playing that object's state animation (or static image). Lessons Learned, Time Burned... So, I think I did well to start, but after I hammered out my prototype...well...things got sloppy and I began to realise some design flaws... At the time: I couldn't seem to figure out how to open another window, such as the character creation screen; input was not event based and it was bugging me; my menu design relied heavily on mouse input and I couldn't use the keyboard; mouse input is tightly bound with graphic rendering / positioning, so its logic will have to be in each scene; menu animations would stop mid frame, then continue when the action occurred again. This is bad, because...what if I had a sword sliding on the screen? Then it would slide a quarter of the way, then stop due to another action, then render again mid-slide... it just looked sloppy. Menu, Solved!? To solve the above problems I did a little research and I found some great code in the XNA forums. The one worth mentioning was the GameStateManagementSample. With this sample, you can create a basic "text based" menu system which allows you to swap screens, popup screens, play the game, and quit....basic game state management... In my next post I'm going to delve a bit more into this code and adapt it to my code from this prototype. Text based menus just won't cut it for me, for now...however, I'm still going to stick with my animated menu item idea. A sneak peek using the Game State Management Sample...with no changes made... Cool Things to Mention: At work ... I tend to break out in random conversations every-so-often and I get talking about some of my challenges with this game (or some stupid observation about something... stupid). During one conversation I was discussing how I should animate my images; I explained that I knew I had to use the Update method provided, but I didn't know how (at the time) to render an image at an appropriate "pace", how many frames to use, etc. I also got thinking that if a machine rendered my images faster / slower, that was surely going to f-up my animations. To which a friend, Sheldon, answered: surely the Draw method is like a camera taking a snapshot of a scene in time. Then it clicked...I understood the big picture of the game engine... After some research I discovered that the Draw method attempts to keep a framerate of 60 fps. 
From what I understand, the game engine will even leave out a few calls to the Draw method if it begins to slow down. This is why we want to put our sprite updates in the Update method. Then, using a game timer (provided by the engine), we want to render the scene based on real time passed, not framerate. So even if the engine renders at 20 fps, the animations will still animate at the same real time speed! Which brings up another point. Why 60 fps? I'm speculating that Microsoft capped it because LCDs don't refresh faster than 60 fps? On another note, if the game engine knows it's falling behind in rendering...then surely we can harness this to speed up our games. Maybe I can find some flag which tells me if the game is lagging, and what the current framerate is, etc...(instead of coding it like I did last time; see the small sketch at the end of this post). Sheldon suggested maybe I can render like WoW does, in prioritised layers...I think he's onto something, however I don't think I'll have that many graphics to worry about such a problem of graphic latency. We'll see. People to Mention: Well, as you are aware I hadn't posted in a couple of days and I was surprised to see a few emails and messenger queries about my game progress (and some concern as to why I stopped). I want to thank everyone for their kind words of support and put everyone at ease by stating that I do intend to complete this project. Granted I only have a few hours each night, but I'll do it. Thank you to Garth for mailing in my next screen! That was a nice surprise! The Sneak Peek you've been waiting for... Garth has also volunteered to render me some wizard images. He was a bit shocked when I asked for them in 2D animated strips. He said I was going backward (and that I have really bad Game Development Lingo). But I advised Garth that I will use 3D images later...for now...2D images. Garth also had some great game design ideas to add on. I advised him that I will save his ideas and include them in the future design document (for the 3d version?). Lastly, my best friend Alek is going to join me in developing this game. This was a project we started eons ago but never completed because of our careers. Now priorities change and we have some spare time on our hands. Let's see what trouble Alek and I can get into! Tonight I'll be uploading my prototypes and base game to source control for both of us to work off of. D.
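On the 'flag which tells me if the game is lagging' question above: XNA's GameTime does expose an IsRunningSlowly property that is set when the engine starts dropping Draw calls to keep Update on schedule. Here is a minimal sketch of how that could be combined with the elapsed-time animation from this post; ReduceDetailLevel is a purely hypothetical placeholder, not part of the framework:

        protected override void Update(GameTime gameTime)
        {
            // Real seconds since the last Update, independent of the framerate.
            float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;

            // True when the engine is skipping Draw calls to catch up.
            if (gameTime.IsRunningSlowly)
            {
                // Hypothetical reaction: drop non-essential eye candy
                // (particles, background animation) until we catch up.
                ReduceDetailLevel();
            }

            foreach (var item in ListOfAnimatedGameObjects)
            {
                item.UpdateSprite(elapsed); // time-based, so animation speed stays constant
            }

            base.Update(gameTime);
        }

Because the sprites accumulate real elapsed time, a machine rendering at 20 fps and one rendering at 60 fps will play the same animation at the same speed; only the smoothness differs.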

    Read the article

  • CodePlex Daily Summary for Sunday, November 06, 2011

    CodePlex Daily Summary for Sunday, November 06, 2011Popular ReleasesSelf-Tracking Entity Generator for WPF and Silverlight: Self-Tracking Entity Generator v 0.9.9 Update 2: Self-Tracking Entity Generator v 0.9.9 for Entity Framework 4.0. No change to the self-tracking entity generator v 0.9.9. WPF sample (SchoolSample) is updated with unit testing for both ViewModel and Model classes.SubExtractor: Release 1020: Feature: added "baseline double quotes" character to selector box Feature: added option to save SRT files as ANSI Feature: made "Save Sup files to Source directory" apply to both Sup and Idx source files. Fix: removed SDH text (...) or ... that is split over 2 lines Fix: better decision-making in when to prefix a line with a '-' because SDH was removedAcDown????? - Anime&Comic Downloader: AcDown????? v3.6.1: ?? ● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6.1?? ??.hlv...Track Folder Changes: Track Folder Changes 1.1: Fixed exception when right-clicking the root nodeKinect Toolbox: Kinect Toolbox v1.1.0.2: This version adds support for the Kinect for Windows SDK beta 2.MapWindow 4: MapWindow GIS v4.8.6 - Final release - 32Bit: This is the final release of MapWindow v4.8. It has 4.8.6 as version number. This version has been thoroughly tested. If you do get an exception send the exception to us. Don't forget to include your e-mail address. Use the forums at http://www.mapwindow.org/phorum/ for questions. Please consider donating a small portion of the money you have saved by having free GIS tools: http://www.mapwindow.org/pages/donate.php What’s New in 4.8.6 (Final release) · A few minor issues have been fixed Wha...Kinect Mouse Cursor: Kinect Mouse Cursor 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Coding4Fun Kinect Toolkit: Coding4Fun Kinect Toolkit 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Async Executor: 1.0: Source code of the AsyncExecutorMedia Companion: MC 3.421b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Fix to show the season-specials.tbn when selecting an episode from season 00. Before, MC would try & load season00.tbn Fix for issue #197 - new show added by 'Manually Add Path' not being picked up. Also made non-visible the same thing in Root Folders...Nearforums - ASP.NET MVC forum engine: Nearforums v7.0: Version 7.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: UI: Flexible layout to handle both list and table-like template layouts. Theming - Visual choice of themes: Deliver some templates on installation, export/import functionality, preview. Allow site owners to choose default list sort order for the forums. Forum latest activity. Visit the project Roadmap for more details. Webdeploy packages sha1 checksum: e6bb913e591543ab292a753d1a16cdb779488c10?????????? - ????????: All-In-One Code Framework ??? 
2011-11-02: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ??????,11??,?????20????Microsoft OneCode Sample,????6?Program Language Sample,2?Windows Base Sample,2?GDI+ Sample,4?Internet Explorer Sample?6?ASP.NET Sample。?????????????。 ????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Program Language CSImageFullScreenSlideShow VBImageFullScreenSlideShow CSDynamicallyBuildLambdaExpressionWithFie...Python Tools for Visual Studio: 1.1 Alpha: We’re pleased to announce the release of Python Tools for Visual Studio 1.1 Alpha. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python programming language. This release includes new core IDE features, a couple of new sample libraries for interacting with Kinect and Excel, and many bug fixes for issues reported since the release of 1.0. For the core IDE features we’ve added many new features which improve the basic edit...BExplorer (Better Explorer): Better Explorer 2.0.0.631 Alpha: Changelog: Added: Some new functions in ribbon Added: Possibility to choose displayed columns Added: Basic Search Fixed: Some bugs after navigation Fixed: Attempt to fix slow navigation and slow start Known issues: - BreadcrumbBar fails on some situations - Basic search not work quite well in some situations Please if anyone find bugs be kind and report them at the Issue Tracker! Thanks!DotNetNuke® Community Edition: 05.06.04: Major Highlights Fixed issue with upgrades on systems that had upgraded the Telerik library to 6.0.0 Fixed issue with Razor Host upgrade to 5.6.3 The logic for module administration checks contains incorrect logic in 1 place, opening the possibility of a user with edit permissions gaining access to functionality they should not have through a particularly crafted url Security FixesBrowsers support the ability to remember common strings such as usernames/addresses etc. Code was adde...Terminals: Version 2.0 - Beta 3 Release: Beta 3 Refresh Dont forget to backup your config files BEFORE upgrading! The team has finally put the nail into the official release date for version 2.0. As bugs are winding down on the 2.0 Roadmap we decided to push out another build - the first 2.0 Beta build. Please take time to use and abuse this release. We left logging in place, and this is a debug build so be sure to submit your logs on each bug reported, and please do report all bugs! Check the source code page on the site, th...iTuner - The iTunes Companion: iTuner 1.4.4322: Added German (unverified, apologies if incorrect) Properly source invariant resources with correct resIDs Replaced obsolete lyric providers with working providers Fix Pseudolater to correctly morph every third char Fix null reference in CatalogBaseTumbleDeck: TumbleDeck 1.0.1 Alpha: New version of TumbleDeck is out! Check it out, it's great, we will be testing it and releasing more stable versions all the time. If you spot any unwanted bugs or features you want added please, please, please email us at tumbledeck@mail.com or contact us on the Discussions tab! If you can see your old version of TumbleDeck, please uninstall it and install this version again. Thanks.VidCoder: 1.2.1: Fixed a couple regressions: video encoder was blank in queue and crashes with the High Profile preset when opening the Settings window. Fixed problem with auto-update introduced in 1.2.0. 
If you have 1.2.0 you will need to update manually to get this.AssaultCube Reloaded: Release 2.3: THE RELEASE YOU'VE ALL BEEN WAITING FOR! IT CAN NOW BE CONSIDERED STABLE Linux has Debian 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, download the Linux package. The server pack is ready for both Windows and Linux, but you might need to compile your own for linux (source included) If you are using Windows and require the source code, download the source package!New ProjectsApploft: Apploft is a new App Platform for windows allowing you to run apps based on powerful code which can pull content from Online.Bugshooting Output for Microsoft Dynamics CRM: Provides an output DLL to use with Bugshooting.Bulk Copy Test Cases Tool for Microsoft Test Manager & TFS: A while ago I had written a blog post Microsoft Test Manager Test Case Versioning on how to manage Test Cases over multiple releases which required you to manually copy test cases individually. I have created a tool to help with the bulk copying of Test Cases that updates the ItDiagnostics Tool for Microsoft Dynamics CRM 2011: Diagnostics Tool for Microsoft Dynamics CRM 2011 helps CRM developers and administrators to enable trace and devErrors on CRM server. It also generates an HTML report file with information about the CRM deployment.Ege University Renewable Energy Society: Ege University Renewable Energy Society Open Source Projectsfirst use of tfs: first project. connection to tfsFlagFtp: FlagFtp is a FTP library for .NET which supports various operations, such as retrieving file lists, write and read from/to files, retrieving file and directory infos, etc...LJCommon: LJCommonnwrole: .Net Worker RoleOrchard Mango Theme: Orchard Mango Theme is a simple inspired Microsoft Windows Phone OS. Original creator and designer Marco Siniscalco (http://www.marcosiniscalco.com)Project Rainbow: This is a school project from KAHO-SL in Belgium, ghent. although this is an open source site, we wish to ask not to copy or steal any of our code if you are related to our school and/or project.Rawler -The Web scraping Framework using XAML: This is the Web scraping Framework using XAML .This framework makes Web scraping possible by only XAML. TenneySoftware Graphing Calculator: A 2D and 3D graphing calculator inspired by Analog's ZPlotter, utilizing my very own libraries to manage the 2D and 3D plotting. The article from Analog, an 8-bit Atari computer magazine from the 80's, can be located here: http://www.cyberroach.com/analog/an30/ZpltAn30.htmThe GINA bot: under constructionTime To Go: A little handy app that shows you how long you've got left until stuff.Windows Azure WordPress Accelerator: Accelerator to deploy WordPress in Windows AzureWP7 App site template: The WP7 App Site Template is intended to make it easier for Windows Phone 7 developers to market their apps. It's currently a simple one page site template, but any contributions/suggestions welcome.ZViewTV.NET: ZViewTV.NET est un programme de visualisation de flux audio-vidéo. Il a été créé par l'equipe qui à fait ZGuideTV.NET

    Read the article

  • Agile Testing Days 2012 – Day 2 – Learn through disagreement

    - by Chris George
    I think I was in the right place! During Day 1 I kept on reading tweets about the Lean Coffee that had happened earlier that morning. It intrigued me and I figured in for a penny, in for a pound, and set my alarm for 6:45am. Following the award night the night before, it was _really_ hard getting up when it went off, but I did, and after a very early breakfast set off for the 10 min walk to the Dorint. With Lean Coffee due to start at 07:30, I arrived at the hotel and made my way to one of the hotel bars. I soon realised I was in the right place as although the bar was empty, there was a table with post-its and pens! This MUST be the place! The premise of Lean Coffee is to have several small timeboxed discussions. Everyone writes down what they would like to discuss on post-its that are then briefly explained and submitted to the pile. Once everyone is done, the group dot-votes on the topics. The topics are then sorted by the dot vote counts and the discussions begin. Each discussion had 8 mins to start with, which prevented the discussions getting off topic too much. After the time elapsed, the group had a vote on whether to extend the discussion by a further 4 mins or move on. Several discussions were had around training, soft skills etc. The conversations were really interesting and there were quite a few good ideas. Overall it was a very enjoyable experience, certainly worth the early start! Make Melly Happy Following Lean Coffee was real coffee, and much needed that was! The first keynote of the day was “Let’s help Melly (Changing Work into Life)” by Jurgen Appelo. Draw lines to track happiness This was a very interesting presentation, and set the day nicely. The theme of the keynote was that projects are about the people, more so than the actual tasks. So he started by showing a photo of an employee ‘Melly’ who looked happy enough. He then stated that she looked happy but actually hated her job. In fact 50% of Americans hate their jobs. He went on to say that, the world over, 50% of people hate their jobs. Jurgen talked about many ways to reduce the feedback cycle, not only of the project, but of the people management. Ideas included happiness doors, happiness tracking (drawing lines on a wall indicating your happiness for that day), and kudo boxes (to compliment a colleague for good work). All of these (and more) ideas stimulate conversation amongst the team, and lead to early detection of issues and investigation of solutions. I’ve massively simplified Jurgen’s keynote and have certainly not done it justice, so I will post a link to the video once it’s available. Following more coffee, the next talk was “How releasing faster changes testing” by Alexander Schwartz. This is a topic very close to our hearts at the moment, so I was eager to find out any juicy morsels that could help us achieve more frequent releases, and Alex did not disappoint. He started off by confirming something that I have been a firm believer in for a number of years now: adding more people can do more harm than good when trying to release. This is for a number of reasons, but just adding new people to a team at such a critical time can be more of a drain on resources than they add. The alternative is to have the whole team share responsibility for faster delivery. So the whole team is responsible for quality and testing. Obviously you will have the test engineers on the project who have the specialist skills, but there is no reason that the entire team cannot do exploratory testing on the product. 
This links nicely with the Developer Exploratory Testing presented by Sigge on Day 1, and it's certainly something that my team are really striving towards. Focus on cycle time: what can be done to reduce the time between dev cycles and release cycles? What stops a release, what delays a release? All good solid questions that can be answered. Alex suggested that perhaps the product doesn’t need to be fully tested. Doing less testing will reduce the cycle time and therefore get the release out faster. He suggested a risk-based approach to planning what testing needs to happen. Reducing testing could have an impact on revenue if it causes harm to customers, so test the ‘right stuff’! Determine a set of tests that are ‘face saving’ or ‘smoke’ tests. These tests cover the core functionality of the product and aim to prevent major embarrassment if these areas were to fail! Amongst many other very good points, Alex suggested that a good approach would be to release after every new feature is added. So do a bit of work -> release, do some more work -> release. By releasing small increments of work, the impact on the customer of bugs being introduced is reduced. Red Pill, Blue Pill The second keynote of the day was “Adaptation and improvisation – but your weakness is not your technique” by Markus Gartner and proved to be another very good presentation. It started off quoting lines from the Matrix which relate to adapting, improvising, realisation and mastery. It had a lot of nerds in the room smiling! Markus went on to explain how through deliberate practice (and a lot of it!) you can achieve mastery, but then you never stop learning. Through methods such as code retreats, testing dojos and workshops you can continually improve and learn. The code retreat idea was one that interested me. It involved pairing to write an automated test for, say, 45 mins, then deleting all the code, finding a different partner and writing the same test again! This is another keynote where the video will speak louder than anything I can write here! Markus did elaborate on something that Lisa and Janet had touched on yesterday whilst busting the myth that “Testers Must Code”. Whilst it is true that to be a tester you don’t need to code, it is becoming more common that there is this crossover happening where more testers are coding and more programmers are testing. Markus made a special distinction between programmers and developers, as testers develop test code, so this helped to make that clear. “Extending Continuous Integration and TDD with Continuous Testing” by Jason Ayers was my next talk after lunch. We already do CI and a bit of TDD on my project team so I was interested to see what this continuous testing thing was all about and whether it would actually work for us. At the start of the presentation I was of the opinion that it just would not work for us because our tests are too slow, and that would be the case for many people. Jason started off by setting the scene and saying that those doing TDD spend between 10-15% of their time waiting for tests to run. This can be reduced by testing less often or reducing the test time, but this then increases the risk of introduced bugs not being spotted quickly. Therefore, in comes Continuous Testing (CT). CT systems run your unit tests whenever you save some code and run them in the background so you can continue working. This is a really nice idea, but to do this, your tests must be fast, independent and reliable. 
The latter two should be the case anyway, and the first is ideal, but hard! Jason made several suggestions to make tests fast. Firstly, keep the scope of the test small; secondly, spin off any expensive tests into a suite which is run, perhaps, overnight or outside of the CT system at any rate (a small sketch of one way to tag such tests follows at the end of this post). So this started to change my mind: perhaps we could re-engineer our tests, and continuously run the quick ones to give an element of coverage. This talk was very interesting and I’ve already tried a couple of the tools mentioned on our product (Mighty Moose and NCrunch). Sadly, due to the way our solution is built, it currently doesn’t work, but we will look at whether we can make this work because this has the potential to be a mini-game-changer for us. Using the wrong data Gojko’s Hierarchy of Quality The final keynote of the day was “Reinventing software quality” by Gojko Adzic. He opened the talk with the statement “We’ve got quality wrong because we are using the wrong data”! Gojko then went on to explain that we should judge a bug by whether the customer cares about it, not by whether we think it’s important. Why spend time fixing issues that the customer just wouldn’t care about and releasing months later because of this? Surely it’s better to release now and get customer feedback? This was another reference to the idea that it’s better to build the right thing wrong than the wrong thing right. Get feedback early to make sure you’re making the right thing. Gojko then showed something which was very analogous to Maslow’s hierarchy of needs: Successful – does it contribute to the business? Useful – does it do what the user wants? Usable – does it do what it’s supposed to without breaking? Performant/Secure – is it secure / is the performance acceptable? Deployable/Functionally ok – can it be deployed without breaking? He then explained that User Stories should focus on change. In other words they should focus on the user's needs, not the user's process. Describe what the change will be, how that change will happen, then measure it! Networking and Beer Following the day’s closing keynote, there were drinks and nibbles for the ‘Networking’ evening. This was a great opportunity to talk to people. I find approaching strangers very uncomfortable, but once again, when in Rome! Pete Walen and I had a long conversation about only fixing issues that the customer cares about versus fixing issues that make you proud of your software! Without saying much, and asking the right questions, Pete made me re-evaluate my thoughts on the matter. Clever, very clever! Oh and he ‘bought’ me a beer! My Takeaway Triple from Day 2: release small and release often to minimize issues creeping in and get faster feedback from ‘the real world’; focus on issues that the customers care about, not what we think is important; it’s okay to disagree with someone, even if they are well respected agile testing gurus, that’s how discussion and learning happens!  
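As a small aside on the 'spin off expensive tests' suggestion above, here is one way it could look in C# with NUnit, which lets you tag tests with categories so the continuous run can exclude the slow ones; the Order class and the test names are made up purely for illustration:

        using NUnit.Framework;

        // Hypothetical system under test, only here to keep the sketch self-contained.
        public class Order
        {
            private readonly int[] _lines;
            public Order(params int[] lines) { _lines = lines; }
            public int Total { get { var total = 0; foreach (var line in _lines) total += line; return total; } }
        }

        [TestFixture]
        public class OrderTests
        {
            [Test]
            public void Total_is_sum_of_line_items()
            {
                // Fast, in-memory test: fine to run on every save in a CT tool.
                Assert.AreEqual(30, new Order(10, 20).Total);
            }

            [Test, Category("Slow")] // tagged so the continuous run can filter it out
            public void Order_survives_a_database_round_trip()
            {
                // Would talk to a real database, so it belongs in the overnight suite.
                Assert.Inconclusive("Placeholder for an expensive integration test.");
            }
        }

The continuous runner is then pointed at everything except the "Slow" category, while the nightly build runs the whole lot.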

    Read the article

  • CodePlex Daily Summary for Thursday, November 03, 2011

    CodePlex Daily Summary for Thursday, November 03, 2011Popular ReleasesFlagConsole: 1.0.1: BUGFIXES: - Fixed a bug, which caused the label not to draw a word, if it had the same length as the label's length.Nearforums - ASP.NET MVC forum engine: Nearforums v7.0: Version 7.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: UI: Flexible layout to handle both list and table-like template layouts. Theming - Visual choice of themes: Deliver some templates on installation, export/import functionality, preview. Allow site owners to choose default list sort order for the forums. Forum latest activity. Visit the project Roadmap for more details.?????????? - ????????: All-In-One Code Framework ??? 2011-11-02: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ??????,11??,?????20????Microsoft OneCode Sample,????6?Program Language Sample,2?Windows Base Sample,2?GDI+ Sample,4?Internet Explorer Sample?6?ASP.NET Sample。?????????????。 ????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Program Language CSImageFullScreenSlideShow VBImageFullScreenSlideShow CSDynamicallyBuildLambdaExpressionWithFie...Python Tools for Visual Studio: 1.1 Alpha: We’re pleased to announce the release of Python Tools for Visual Studio 1.1 Alpha. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python programming language. This release includes new core IDE features, a couple of new sample libraries for interacting with Kinect and Excel, and many bug fixes for issues reported since the release of 1.0. For the core IDE features we’ve added many new features which improve the basic edit...BExplorer (Better Explorer): Better Explorer 2.0.0.631 Alpha: Changelog: Added: Some new functions in ribbon Added: Possibility to choose displayed columns Added: Basic Search Fixed: Some bugs after navigation Fixed: Attempt to fix slow navigation and slow start Known issues: - BreadcrumbBar fails on some situations - Basic search not work quite well in some situations Please if anyone find bugs be kind and report them at the Issue Tracker! Thanks!DotNetNuke® Community Edition: 05.06.04: Major Highlights Fixed issue with upgrades on systems that had upgraded the Telerik library to 6.0.0 Fixed issue with Razor Host upgrade to 5.6.3 The logic for module administration checks contains incorrect logic in 1 place, opening the possibility of a user with edit permissions gaining access to functionality they should not have through a particularly crafted url Security FixesBrowsers support the ability to remember common strings such as usernames/addresses etc. Code was adde...Terminals: Version 2.0 - Beta 2 Release: The team has finally put the nail into the official release date for version 2.0. As bugs are winding down on the 2.0 Roadmap we decided to push out another build - the first 2.0 Beta build. Please take time to use and abuse this release. We left logging in place, and this is a debug build so be sure to submit your logs on each bug reported, and please do report all bugs! 
Check the source code page on the site, this beta includes all commits since (and including) the 90428 check-in back i...
iTuner - The iTunes Companion: iTuner 1.4.4322: Added German (unverified, apologies if incorrect). Properly source invariant resources with correct resIDs. Replaced obsolete lyric providers with working providers. Fix Pseudolater to correctly morph every third char. Fix null reference in CatalogBase.
Track Folder Changes: Track Folder Changes 1.0: Track Folder Changes 1.0 (binary).
Devpad: 4.6: What's new for Devpad 4.6: new Recent Files, new Run in Safari, minor bug fixes, improvements and speed-ups.
Windows Workflow Foundation on Codeplex: Microsoft.Activities v1.8.8: Microsoft.Activities overview, how do I install Microsoft.Activities?, updates in this release 9318.
Technical Analysis Engine for .NET: Technical Analysis Engine 1.25: What's new in the 1.25 release (2011-11-01): added Williams %R indicator; added Moving Average Envelopes indicator.
BoxWorld: BoxWorld_2011.10.30: BoxWorld - 8.0.1110.30. This is the initial release of BoxWorld. I'd recommend downloading the installer as it contains the compiled code and everything all nicely contained. By default, you end up with this directory structure: C:\Program Files\ViperWorks\BoxWorld, C:\Program Files\ViperWorks\BoxWorld\Data, C:\Program Files\ViperWorks\BoxWorld\Interface, C:\Program Files\ViperWorks\BoxWorld\Source. In the root you have the compiled EXE files, one for the main release, one for the LITE release ...
VidCoder: 1.2.1: Fixed a couple of regressions: video encoder was blank in queue, and crashes with the High Profile preset when opening the Settings window. Fixed problem with auto-update introduced in 1.2.0. If you have 1.2.0 you will need to update manually to get this.
AssaultCube Reloaded: Release 2.3: THE RELEASE YOU'VE ALL BEEN WAITING FOR! IT CAN NOW BE CONSIDERED STABLE. Linux has Debian 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, download the Linux package. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included). If you are using Windows and require the source code, download the source package!
A Microblog API (SINA weibo.com open API in C#, .NET): AMicroblogAPI v1.0: AMicroBlogAPI is a C# implementation, a strong-typed deep encapsulation, an easy-to-use .NET wrapper of SINA microblog API v1.0. App developers no longer need to parse various HTTP responses -- all responses are parsed into strong types; no longer need to construct the request query strings -- just simply give the values of parameters; no longer need to implement OAuth -- a single call logs the user on. In this release (AMicroblogAPI v1.0), all basic data APIs are implemented. For ...
patterns & practices: Enterprise Library Contrib: Enterprise Library Contrib - 5.0 (Oct 2011): This release of Enterprise Library Contrib is based on the Microsoft patterns & practices Enterprise Library 5.0 core and contains the following. Common extensions: TypeConfigurationElement<T> - a polymorphic configuration element without having to be part of a PolymorphicConfigurationElementCollection; AnonymousConfigurationElement - a configuration element that can be uniquely identified without having to define its name explicitly. Data Access Application Block extensions: MySql Provider - ...
Network Monitor Open Source Parsers: Network Monitor Parsers 3.4.2748: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC-based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, NetworkMonitor_Parsers.msi continues to improve quality and fix bugs. It has included the fo...
Duckworth Lewis Professional Edition Calculator: DLcalc 3.0: DLcalc 3.0 can perform Duckworth/Lewis Professional Edition calculations 100% accurately. It also produces over-by-over and ball-by-ball PAR score tables.
Media Companion: MC 3.420b Weekly: Ensure .NET 4.0 Full Framework is installed (available from http://www.microsoft.com/download/en/details.aspx?id=17718). Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b (details here). Movies - Fixed: fanart and poster scraping issues. TV Shows - (Re)Added: rebuild single show; Fixed: issue when shows are moved from original location; ability to handle " for actor nicknames; crash when episode name contains "<" (does not scrape yet); clears fanart when switch...
New Projects
ASP.NET MVC fluent validation framework: FluentMvc allows you to create validatable models without using any validation attributes.
BeanPermutator: The BeanPermutator library provides a robust permutations engine. It was designed with framework API, optimal performance, and memory usage in mind. For performance, the implementation is modeled after STL's next_permutation, except with added support for conditional result sets.
Cicero: Cicero is a library/application to programmatically analyze managed crash dumps. It provides a managed API to load dumps and look at managed objects, using SOS (http://msdn.microsoft.com/en-us/library/bb190764.aspx) under the covers.
CodePlex Windows Phone App: This project deals with how to browse efficiently through the CodePlex website using Windows Phone capabilities.
Collage: A photo collage screensaver.
Docs Professional Archiving System: This application provides a professional, audit-compliant archiving system with a storage service.
EBSCOLunch: This app simply shows EBSCO's lunch menu. It runs in the system tray and, when clicked, shows today's cafeteria specials.
Emesary: C# inter-object communication and application messaging: Simple, quick and efficient class-based inter-object communication to allow decoupled, disparate parts of a system to function together without knowing about each other, e.g. GUI, DB and business logic.
Enterprise Data Entry, Reporting and Analysis Tool: This project will have an app with different modules that show the interface tools' functions based on the designation of an employee as defined in Active Directory. Example: a team manager can view all the details regarding his immediate subordinates.
Financial date and time library: Date and time routines that know about holidays, rolling conventions, and day count fractions.
Frame Work Model by Patán: A framework modeling project.
Ittxa: Is in Alpha release.
Kooboo CMS data sync module: Kooboo CMS data export / import module.
MetroBlog: A Windows 8 Metro inspired blog.
Mosque: In the name of GOD this is a portal.
MPO tools: A .NET library for parsing .mpo stereo images (in Multi-Picture Format).
mytest5: my test.
Netflix MVP Example: Example .NET movie application using the MVP pattern with Windows Forms and WPF. Uses the Netflix OData service to get movie information.
OpenStreetMap2Oracle: OpenStreetMap2Oracle is a Windows application which exports OpenStreetMap data (*.osm files) into an Oracle database. The geometries are stored in Oracle's SDO_GEOMETRY datatype. It is developed in C# with a modern WPF UI.
PSRR - Remote Registry PowerShell 3.0 Module: Remote Registry PowerShell module to manage the registry with Windows PowerShell. This version supports the new improvement in .NET 4 to specify a 32-bit or 64-bit view of the registry with the Microsoft.Win32.RegistryView enumeration when you open base keys.
Raptor-Plus-Plus: CS240 class project.
Reset IPv6: Troubleshooting Pack to reset virtual IPv6 interfaces on Windows 7 (ISATAP, Teredo, IPHTTPS and 6to4).
resurection: Game for WP7.
SharePoint 2010 Google Maps Web Part for Sandbox and Farm Solutions: The Google Maps Web Part lets you quickly and easily add Google Maps to your Microsoft SharePoint 2010 pages. Compatible with SharePoint 2010, SharePoint 2010 SP1 and SharePoint Online, via Office 365.
SPLight UTL SharePoint 2010 BackUp / Restore UI: SharePoint 2010 backup / restore UI; site collection backup and restore through a graphical user interface.
Visual Image Processing: Visual image processing for 2D or 3D real-time video or still images from a camera, web cam or scanner ...
XNA4LED: XNA 4.0 LED Billboard Example.

    Read the article

  • Jagran Prakashan Increases Staff Productivity by 40%

    - by Michael Snow
    Jagran Prakashan Increases Staff Productivity by 40%, Launches New IT Projects up to 4x Faster, Enables Mobile Service, and Improves Business Agility Oracle Customer: JPL Location:  Uttar Pradesh, India Industry: Media and Entertainment Employees:  10,000 Annual Revenue:  $100 to $500 Million Jagran Prakashan Ltd. (JPL) is one of India's premier media and communications groups with interests spanning print, advertising, event management, and mobile services for weather, cricket scores, and educational activities. It is a major media enterprise, with 300 locations across 15 states. Its impressive stable of print publications includes Dainik Jagran, the world’s most widely read daily newspaper––with a readership of over 55 million––the country’s leading afternoon dailies, and a range of popular local, bilingual, and English language newspapers. JPL was using multiple systems to manage its business processes. Users were resistant to using multiple passwords for various applications, preferring to continue their less efficient, legacy work practices. In addition, there was no single repository for sharing documents across the organization, such as company announcements or project documents. The company relied on e-mail to disseminate up-to-date company information, often missing employees. It was also time-consuming and difficult for managers to track the status of ongoing assignments or projects because collaboration and document sharing was inefficient and ineffective.With diverse businesses and many geographic locations, JPL needed to implement a centralized and user-friendly enterprise portal to improve document sharing and collaboration and increase business agility. The company implemented Oracle WebCenter Portal to create a dynamic, secure, and intuitive self-service enterprise portal to improve the user experience and increase operating efficiency. It improved staff productivity by 40%, accelerated new IT projects by up to 4x, boosted staff morale, and increased business agility.   Increases Staff Productivity by 40%, Launches New Products up to 2x Faster A word from JPL "With Oracle WebCenter Portal, we gained a dynamic, secure, and intuitive self-service enterprise portal that provided an exceptional user experience and enabled us to engage employees in a collaborative environment. It increased IT staff productivity by 40%, delivered new projects up to 4x faster, and enabled mobile service to improve our business agility.” Sarbani Bhatia, Vice President IT, Jagran Prakashahn Ltd Before implementing Oracle WebCenter Portal, JPL stored project-critical information, such as page planning of daily newspaper editions and the launch of new editions or supplements on individual laptops or in the e-mail system. Collaboration between colleagues was limited to physical meetings, telephone discussions, and e-mail. It was difficult to trace and recover important project documents when a staff member resigned, which represented a significant risk to business continuity. Employees were also averse to multiple passwords and resisted using the systems, affecting staff productivity. With Oracle WebCenter Portal, JPL created a dynamic, secure, and intuitive self-service enterprise portal with business activity streams. The portal allowed users to navigate, discover, and access information, such as advertising rates, requisition approvals, ad-hoc queries, and employee surveys from a single entry point with a single password. 
Managers can also upload important documents, such as new pricing for advertisers or newspaper distributors, and share them through the information and instruction section in the portal. In addition, managers can now easily track and review timelines for projects online rather than gathering information from meetings and e-mails. The company gained the ability to centrally manage information, ensured business continuity, and improved staff productivity by 40%.“In the media industry, news has a very short shelf life, so speed is crucial. Information delayed is like information lost,” said Sarbani Bhatia, vice president IT, Jagran Prakashahn Ltd. “Thanks to Oracle WebCenter Portal’s contextual collaboration tools, we can provide and share feedback for new project launches, such as career or education supplements, up to 2x faster through discussion forums or knowledge groups. Tasks that previously required four months, we now complete in one month.”In addition, the company can broadcast announcements, flash employee birthdays, and promote important events through the message section on the webpage, instead of using the e-mail system. The company can also conduct opinion polls to gauge employee response to organizational issues and improve management decision-making.“With over 10,000 employees across 300 locations, it is critical for management to hear the voice of employees and develop a cohesive organizational culture. Oracle WebCenter Portal enables employees to engage with business processes and systems in a collaborative environment, providing users with an exceptional experience,” Bhatia said. Enables Mobility Access and Increases Business Agility Newspaper advertisements generate the majority of JPL’s revenue. With most sales staff on the move, the company needed to ensure timely approval of print advertisement discounts for specific clients and meet tight publication deadlines.  By integrating Oracle WebCenter Portal seamlessly with its enterprise resource planning (ERP) system and other applications, such as the organizational mass mailing system, business intelligence, and management information system, JPL embedded its approval workflow processes into the enterprise portal and provided users with an integrated and intuitive interface. About 30% of JPL’s sales staff members now have tablets and receive advertising discount approval from managers while in the field and no longer need to return to the office, which has significantly improved efficiency and increased business agility.“Application mobility was critical for sales representatives in the field to meet stringent auditing requirements for online accountability, particularly for our newspaper advertising business. Staff member satisfaction has improved significantly now that the sales team can use tablets to access the portal––a capability we will extend to smart phones in the second stage of the implementation,” Bhatia said. Accelerates Application Development by up to 4x and Cuts Costs by up to 60% With Oracle WebCenter Portal, users can easily create, modify, and upload information to their personalized webpages without IT assistance. By seamlessly integrating Oracle WebCenter Portal with the payroll database, managers can decide which members of their team can access the page and with whom they will share information, a decision based on role or geographical location. 
A sales representative selling advertising space for a local language daily newspaper, for example, can upload an updated advertising rate relevant only to that particular publication. Users can also easily adapt to the new platform, thanks to its intuitive design and look, reducing the need for training and lowering resistance to using the system.Using Oracle WebCenter Portal’s out-of-the-box reusable components, such as portal pages and templates, provided JPL’s developers with a comprehensive and flexible user experience platform and increased the speed of application development. In less than five months, JPL developed more than 55 workflows. The IT team accelerated deployment of new applications by up to 4x, as they do not need to install them on individual machines now that they have a web-based environment.   “Previously, we would have spent a whole day deploying a new application for each department or location. With a browser-based environment, we have cut costs by up to 60% by reducing deployment time to zero, because our IT team can roll out a new application from a single point, thanks to Oracle WebCenter Portal,” Bhatia said. Challenges Provide a dynamic, secure, and intuitive self-service enterprise portal to improve staff productivity and ensure business continuity Enable seamless integration with multiple enterprise applications to improve workflow efficiency—including approval of print advertisement discounts—and increase business agility Improve engagement with employees and enable collaboration to enhance management decision-making Accelerate time-to-market for new services, such as new advertising programs Solutions Oracle Product and ServicesOracle WebCenter Portal 11g Increased staff productivity by 40% and enhanced user satisfaction by enabling employees to easily navigate, discover, and access information from a single, self-service enterprise portal without IT assistance Launched new products, such as career or education supplements, up to 2x faster by enabling peer collaboration and incorporating feedback generated through discussion forums, thanks to Oracle WebCenter Portal’s out-of-the-box collaboration tools Accelerated application development up to 4x by enabling developers to optimize reusable components for managing and deploying new applications in a browser-based environment rather than spending one day to install applications for each department, cutting costs by up to 60% Ensured business continuity by enabling managers to easily track and review project timelines online rather than storing important documents on individual laptops or relying on the e-mail system Increased business agility and operational efficiency by seamlessly integrating with the in-house, ERP system and embedding business processes into a single portal Boosted company revenue by enabling sales team members to submit print-advertising discount requests through mobile devices instead of waiting to return to office, ensuring timely approval from managers to meet tight publication deadlines Improved management decision-making by enabling employees to easily share and access feedback through opinion polls or forums, boosting staff morale Introduced the single sign-on capability and enhanced security by enabling managers to decide access level for staff members based on role or geographical location Reduced the need for staff training and minimized user resistance to systems by providing a dynamic and intuitive user experience Why Oracle JPL did not consider other products because the 
company was already using Oracle Database, Enterprise Edition with Real Application Clusters and had a positive experience with Oracle. JPL chose Oracle WebCenter Portal to ensure no compatibility issues for integration with its existing Oracle products and to take advantage of the experience and support of a reputable vendor to ensure business continuity. “We chose Oracle because we knew we could rely on its support and experience. In addition, Oracle WebCenter Portal’s speed, agility, and mobile access features were a perfect fit for our business requirements,” Bhatia said. Implementation Process JPL launched the enterprise portal to 500 users in the first phase of the project, and plans to extend this to 2,000 users when the portal is fully launched. Oracle partner PricewaterhouseCoopers used Oracle Application Development Framework for the initial setup and user training, and to develop and design sample workflows. JPL’s internal IT staff then took charge of the implementation, bringing it to completion on budget. Partner Oracle Partner PricewaterhouseCoopers (India)

    Read the article

  • Why Are Minimized Programs Often Slow to Open Again?

    - by Jason Fitzpatrick
    It seems particularly counterintuitive: you minimize an application because you plan on returning to it later and wish to skip shutting the application down and restarting it later, but sometimes maximizing it takes even longer than launching it fresh. What gives? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Bart wants to know why he’s not saving any time with application minimization: I’m working in Photoshop CS6 and multiple browsers a lot. I’m not using them all at once, so sometimes some applications are minimized to taskbar for hours or days. The problem is, when I try to maximize them from the taskbar – it sometimes takes longer than starting them! Especially Photoshop feels really weird for many seconds after finally showing up, it’s slow, unresponsive and even sometimes totally freezes for minute or two. It’s not a hardware problem as it’s been like that since always on all on my PCs. Would I also notice it after upgrading my HDD to SDD and adding RAM (my main PC holds 4 GB currently)? Could guys with powerful pcs / macs tell me – does it also happen to you? I guess OSes somehow “focus” on active software and move all the resources away from the ones that run, but are not used. Is it possible to somehow set RAM / CPU / HDD priorities or something, for let’s say, Photoshop, so it won’t slow down after long period of inactivity? So what is the deal? Why does he find himself waiting to maximize a minimized app? The Answer SuperUser contributor Allquixotic explains why: Summary The immediate problem is that the programs that you have minimized are being paged out to the “page file” on your hard disk. This symptom can be improved by installing a Solid State Disk (SSD), adding more RAM to your system, reducing the number of programs you have open, or upgrading to a newer system architecture (for instance, Ivy Bridge or Haswell). Out of these options, adding more RAM is generally the most effective solution. Explanation The default behavior of Windows is to give active applications priority over inactive applications for having a spot in RAM. When there’s significant memory pressure (meaning the system doesn’t have a lot of free RAM if it were to let every program have all the RAM it wants), it starts putting minimized programs into the page file, which means it writes out their contents from RAM to disk, and then makes that area of RAM free. That free RAM helps programs you’re actively using — say, your web browser — run faster, because if they need to claim a new segment of RAM (like when you open a new tab), they can do so. This “free” RAM is also used as page cache, which means that when active programs attempt to read data on your hard disk, that data might be cached in RAM, which prevents your hard disk from being accessed to get that data. By using the majority of your RAM for page cache, and swapping out unused programs to disk, Windows is trying to improve responsiveness of the program(s) you are actively using, by making RAM available to them, and caching the files they access in RAM instead of the hard disk. The downside of this behavior is that minimized programs can take a while to have their contents copied from the page file, on disk, back into RAM. The time increases the larger the program’s footprint in memory. This is why you experience that delay when maximizing Photoshop. 
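
The paging behavior described above is easy to observe for yourself. The following is a small illustrative sketch, not part of the original answer, that uses Python with the third-party psutil package (pip install psutil) to report physical-memory pressure, page-file usage, and the processes with the largest working sets; the top-5 cut-off is an arbitrary choice for the example.

    # Sketch: report memory pressure and page-file usage with psutil.
    # High RAM usage plus heavy swap usage is exactly the situation in which
    # minimized programs get paged out and become slow to restore.
    import psutil

    ram = psutil.virtual_memory()
    swap = psutil.swap_memory()
    print(f"Physical RAM: {ram.total / 2**30:.1f} GiB total, {ram.percent}% in use")
    print(f"Page file / swap: {swap.total / 2**30:.1f} GiB total, {swap.percent}% in use")

    # The largest working sets are the most likely candidates to be paged out
    # when Windows needs RAM for the application you are actively using.
    procs = sorted(psutil.process_iter(["name", "memory_info"]),
                   key=lambda p: p.info["memory_info"].rss if p.info["memory_info"] else 0,
                   reverse=True)
    for p in procs[:5]:
        mem = p.info["memory_info"]
        if mem:
            print(f"{p.info['name']:<30} {mem.rss / 2**20:8.0f} MiB resident")
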
RAM is many times faster than a hard disk (depending on the specific hardware, it can be up to several orders of magnitude). An SSD is considerably faster than a hard disk, but it is still slower than RAM by orders of magnitude. Having your page file on an SSD will help, but it will also wear out the SSD more quickly than usual if your page file is heavily utilized due to RAM pressure. Remedies Here is an explanation of the available remedies, and their general effectiveness: Installing more RAM: This is the recommended path. If your system does not support more RAM than you already have installed, you will need to upgrade more of your system: possibly your motherboard, CPU, chassis, power supply, etc. depending on how old it is. If it’s a laptop, chances are you’ll have to buy an entire new laptop that supports more installed RAM. When you install more RAM, you reduce memory pressure, which reduces use of the page file, which is a good thing all around. You also make available more RAM for page cache, which will make all programs that access the hard disk run faster. As of Q4 2013, my personal recommendation is that you have at least 8 GB of RAM for a desktop or laptop whose purpose is anything more complex than web browsing and email. That means photo editing, video editing/viewing, playing computer games, audio editing or recording, programming / development, etc. all should have at least 8 GB of RAM, if not more. Run fewer programs at a time: This will only work if the programs you are running do not use a lot of memory on their own. Unfortunately, Adobe Creative Suite products such as Photoshop CS6 are known for using an enormous amount of memory. This also limits your multitasking ability. It’s a temporary, free remedy, but it can be an inconvenience to close down your web browser or Word every time you start Photoshop, for instance. This also wouldn’t stop Photoshop from being swapped when minimizing it, so it really isn’t a very effective solution. It only helps in some specific situations. Install an SSD: If your page file is on an SSD, the SSD’s improved speed compared to a hard disk will result in generally improved performance when the page file has to be read from or written to. Be aware that SSDs are not designed to withstand a very frequent and constant random stream of writes; they can only be written over a limited number of times before they start to break down. Heavy use of a page file is not a particularly good workload for an SSD. You should install an SSD in combination with a large amount of RAM if you want maximum performance while preserving the longevity of the SSD. Use a newer system architecture: Depending on the age of your system, you may be using an out of date system architecture. The “system architecture” is generally defined as the “generation” (think generations like children, parents, grandparents, etc.) of the motherboard and CPU. Newer generations generally support faster I/O (input/output), better memory bandwidth, lower latency, and less contention over shared resources, instead providing dedicated links between components. For example, starting with the “Nehalem” generation (around 2009), the Front-Side Bus (FSB) was eliminated, which removed a common bottleneck, because almost all system components had to share the same FSB for transmitting data. This was replaced with a “point to point” architecture, meaning that each component gets its own dedicated “lane” to the CPU, which continues to be improved every few years with new generations. 
You will generally see a more significant improvement in overall system performance depending on the “gap” between your computer’s architecture and the latest one available. For example, a Pentium 4 architecture from 2004 is going to see a much more significant improvement upgrading to “Haswell” (the latest as of Q4 2013) than a “Sandy Bridge” architecture from ~2010. Links Related questions: How to reduce disk thrashing (paging)? Windows Swap (Page File): Enable or Disable? Also, just in case you’re considering it, you really shouldn’t disable the page file, as this will only make matters worse; see here. And, in case you needed extra convincing to leave the Windows Page File alone, see here and here. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?” This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “Looking for a new role” pretty quickly after the failed release. There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least: Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect) Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly or other mechanisms that provide an audit trail of changes.
We’ve found, at Red Gate that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between of course. Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is a an attempt at trying to classify our customers in to some sort of model or “Customer Maturity Framework” as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician, George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful” We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits in to one of our categories. But we hope it’s useful and interesting. There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” for “database”. However, we hit a problem. From talking to our customers we know that users are far less further down the road of mature database change management than they are for application development. As a simple example, no application developer, who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology but they’ll certainly be making sure their code gets managed in a repo somewhere with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations. 
This difference in maturity is the same as you move in to areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But, what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team:  Download the Database Delivery Maturity Framework PDF here   Level D1 – Baseline Work directly on live databases Sometimes work directly in production Generate manual scripts for releases. Sometimes use a product like SQL Compare or similar to do this Any tests that we might have are run manually Level D2 – Beginner Have some ad-hoc DB version control such as manually adding upgrade scripts to a version control system Attempt is made to keep production in sync with development environments There is some documentation and planning of manual deployments Some basic automated DB testing in process Level D3 – Intermediate The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT Database environments are managed Production environment schema is reproducible from the source control system There are some automated tests Have looked at using migration scripts for difficult database refactoring cases Level D4 – Advanced Using continuous integration for database changes Build, testing and deployment of DB changes carried out through a proper database release process Fully automated tests Production system is monitored for fast feedback to developers   Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
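
If you are at level D1 or D2 and wondering what a first step toward D3 might look like in practice, here is a minimal, hypothetical sketch in Python, not one of the Red Gate tools mentioned above: a runner that applies numbered change scripts kept in source control, in order, and records each one in a schema-version table. The connection string, folder layout and table name are illustrative assumptions; it presumes the pyodbc package and scripts without GO batch separators.

    # Sketch: apply pending change scripts in order and record them.
    import os
    import pyodbc

    CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=MyApp;Trusted_Connection=yes"
    SCRIPT_DIR = "migrations"   # e.g. 0001_create_orders.sql, 0002_add_index.sql

    conn = pyodbc.connect(CONN_STR)
    cur = conn.cursor()
    cur.execute("""
        IF OBJECT_ID('dbo.SchemaVersion') IS NULL
            CREATE TABLE dbo.SchemaVersion (
                ScriptName NVARCHAR(260) NOT NULL PRIMARY KEY,
                AppliedAt  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
            );
    """)
    conn.commit()

    applied = {row.ScriptName for row in cur.execute("SELECT ScriptName FROM dbo.SchemaVersion")}

    for name in sorted(os.listdir(SCRIPT_DIR)):
        if not name.endswith(".sql") or name in applied:
            continue
        with open(os.path.join(SCRIPT_DIR, name), encoding="utf-8") as f:
            sql = f.read()
        cur.execute(sql)   # apply the change script
        cur.execute("INSERT INTO dbo.SchemaVersion (ScriptName) VALUES (?)", name)
        conn.commit()      # the script and its version record commit together
        print("applied", name)

    conn.close()

Even something this small gives you the repeatability and audit trail that the Beginner and Intermediate levels describe.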

    Read the article

  • How to resolve "dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb"?

    - by raz7588
    Update Manager will not update although I have over 100 updates to do I get a error message like this: installArchives() failed: Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... (Reading database ... (Reading database ... 5%% (Reading database ... 10%% (Reading database ... 15%% (Reading database ... 20%% (Reading database ... 25%% (Reading database ... 30%% (Reading database ... 35%% (Reading database ... 40%% (Reading database ... 45%% (Reading database ... 50%% (Reading database ... 55%% (Reading database ... 60%% (Reading database ... 65%% (Reading database ... 70%% (Reading database ... 75%% (Reading database ... 80%% (Reading database ... 85%% (Reading database ... 90%% (Reading database ... 95%% (Reading database ... 100%% (Reading database ... 189751 files and directories currently installed.) Preparing to replace python-problem-report 2.0.1-0ubuntu7 (using .../python-problem-report_2.0.1-0ubuntu9_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace python-apport 2.0.1-0ubuntu7 (using .../python-apport_2.0.1-0ubuntu9_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace apport 2.0.1-0ubuntu7 (using .../apport_2.0.1-0ubuntu9_all.deb) ... apport stop/waiting Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already apport start/running Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace gnome-orca 3.4.1-0ubuntu0.1 (using .../gnome-orca_3.4.2-0ubuntu0.1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace python-piston-mini-client 0.7.2-0ubuntu1 (using .../python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace oneconf 0.2.8 (using .../oneconf_0.2.8.1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/oneconf_0.2.8.1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace software-center 5.2.2 (using .../software-center_5.2.2.2_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/software-center_5.2.2.2_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace libglade2-0 1:2.6.4-1ubuntu1 (using .../libglade2-0_1%%3a2.6.4-1ubuntu1.1_amd64.deb) ... Unpacking replacement libglade2-0 ... Preparing to replace libv4l-0 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_amd64.deb) ... De-configuring libv4l-0:i386 ... Unpacking replacement libv4l-0 ... Preparing to replace libv4l-0:i386 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_i386.deb) ... Unpacking replacement libv4l-0:i386 ... Preparing to replace libv4lconvert0:i386 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_i386.deb) ... De-configuring libv4lconvert0 ... Unpacking replacement libv4lconvert0:i386 ... 
Preparing to replace libv4lconvert0 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_amd64.deb) ... Unpacking replacement libv4lconvert0 ... Errors were encountered while processing: /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb /var/cache/apt/archives/oneconf_0.2.8.1_all.deb /var/cache/apt/archives/software-center_5.2.2.2_all.deb Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) Setting up libglade2-0 (1:2.6.4-1ubuntu1.1) ... dpkg: error processing gnome-orca (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. dpkg: error processing python-problem-report (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Setting up libv4lconvert0 (0.8.6-1ubuntu2) ... Setting up libv4lconvert0:i386 (0.8.6-1ubuntu2) ... dpkg: error processing python-piston-mini-client (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Setting up libv4l-0 (0.8.6-1ubuntu2) ... Setting up libv4l-0:i386 (0.8.6-1ubuntu2) ... dpkg: dependency problems prevent configuration of python-apport: python-apport depends on python-problem-report (>= 0.94); however: Package python-problem-report is not configured yet. dpkg: error processing python-apport (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of software-center: software-center depends on python-piston-mini-client (>= 0.1+bzr29); however: Package python-piston-mini-client is not configured yet. dpkg: error processing software-center (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of oneconf: oneconf depends on python-piston-mini-client (>= 0.3+bzr32-0ubuntu1); however: Package python-piston-mini-client is not configured yet. dpkg: error processing oneconf (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of apport: apport depends on python-apport (>= 2.0.1-0ubuntu7); however: Package python-apport is not configured yet. dpkg: error processing apport (--configure): dependency problems - leaving unconfigured Processing triggers for libc-bin ... ldconfig deferred processing now taking place This has been going on for two weeks now and I cannot get any updates. Any help would be great.

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well! In the spirit of exploration let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any context of any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options: (1) Prevent any change to A's account while the transfer is taking place. That boils down to locking. (2) Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice. (3) Don't do either. Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out – don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that can happen using one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The implication of the original question – "how do you enforce the non-negative balance rule" – then boils down to: (1) insert an entry in the ledger, (2) run validation of recent entries, (3) insert a reverse entry to roll back the transaction if validation failed. What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly on too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil. Back to some nagging questions though: "But mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. So a competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
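
To make the ledger idea concrete, here is a hedged Python sketch using pymongo; the collection layout, field names and validation rule are illustrative assumptions, not a prescription. Each transfer is one immutable ledger document, validation sums the entries, and a failed validation is answered with a compensating entry rather than an update.

    # Sketch of the ledger approach: insert, validate, compensate.
    from datetime import datetime, timezone
    from pymongo import MongoClient

    client = MongoClient()        # assumes a local mongod and a 'bank' database
    ledger = client.bank.ledger

    def balance(account_id):
        # Sum every credit to and debit from this account across the ledger.
        pipeline = [
            {"$match": {"$or": [{"from": account_id}, {"to": account_id}]}},
            {"$group": {"_id": None, "total": {"$sum": {
                "$cond": [{"$eq": ["$to", account_id]}, "$amount",
                          {"$multiply": ["$amount", -1]}]}}}},
        ]
        docs = list(ledger.aggregate(pipeline))
        return docs[0]["total"] if docs else 0

    def transfer(from_id, to_id, amount):
        # 1. Write the ledger entry; a single-document insert is atomic in MongoDB.
        entry = {"ts": datetime.now(timezone.utc), "from": from_id, "to": to_id, "amount": amount}
        ledger.insert_one(entry)
        # 2. Validate after the fact. 3. If the source account went negative,
        #    write a compensating (reverse) entry instead of mutating history.
        if balance(from_id) < 0:
            ledger.insert_one({"ts": datetime.now(timezone.utc), "from": to_id, "to": from_id,
                               "amount": amount, "reverses": entry["_id"]})
            return False
        return True

    print(transfer("A", "B", 100))
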
In our case - in choosing a ledger approach - we're not even trying to "update" a document, we're simply adding a document to a collection. So there goes the "no transaction" issue. Now let's turn our attention to consistency. What you should know about mongo is that at any given moment, only on member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees "old" state of a collection or document. But in our ledger case, things fall nicely into place: Run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry got written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way. Next, we might worry about data loss. Here, mongo offers several write-concerns. Write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile, is to say you don't care. In that case, mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually no written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes on a pole regarding how cute a furry animal is, but not so good for business. There are several other write-concerns varying from flushing the write to the disk of the writable instance, flushing to disk on several members of the replica set, a majority of the replica set or all of the members of a replica set. The former choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is not different from a relational database at that point. Where does this leave us? Oh, yes. Eventual consistency. By now, we ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update and then gone to a replica and looked at the ledger there before the transaction replicated. Here are 2 options to deal with this. Similar to write concerns, mongo support read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every ready, because it just burdens the one instance, and doesn't make use of the other read-only servers. But this choice can be made on a query by query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately as "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to. 
So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that the all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that your placing of an order does not mean that your product is already outside your door waiting (yes, I know, large retailers are working on it... but were' not there yet). The second thing we can do, is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never nets a transaction for customers newer than say 15 minutes and who's validation flag is not set. This buys us time 2 ways: Replication can catch up to all instances by then, and validation rules can run and determine if this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change.  The 2 transactions (attempted/ reverted) would be visible , since we do actually account for the attempt. Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and 2 independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes. there is that chance. But the integrity of the business rule is not compromised, since the prime rule is don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks. So where  are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
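
For readers who want to see what those write-concern and read-preference choices look like from a driver, here is a small illustrative pymongo sketch; the replica-set address and collection names are placeholders rather than anything from the original post.

    # Sketch: pick durability for writes and staleness tolerance for reads.
    from pymongo import MongoClient
    from pymongo.read_preferences import ReadPreference
    from pymongo.write_concern import WriteConcern

    client = MongoClient("mongodb://node1,node2,node3/?replicaSet=rs0")

    # Ledger writes wait for a majority of the replica set and a journal
    # flush before the driver reports success: slower, but no silent loss.
    durable_ledger = client.bank.get_collection(
        "ledger", write_concern=WriteConcern(w="majority", j=True))

    # Person A's own app reads from the primary so it sees the entry it just
    # wrote; everyone else reads from secondaries and tolerates replication lag.
    my_view = client.bank.get_collection("ledger", read_preference=ReadPreference.PRIMARY)
    public_view = client.bank.get_collection("ledger",
                                             read_preference=ReadPreference.SECONDARY_PREFERRED)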

    Read the article

  • When was sys.dm_os_wait_stats last cleared?

    - by SQLOS Team
    The sys.dm_os_wait_stats DMV provides essential metrics for diagnosing SQL Server performance problems. Returning incrementally accumulating information about all the completed waits encountered by executing threads, it is a useful way to identify bottlenecks such as IO latency issues or waits on locks. The counters are reset each time SQL Server is restarted, or when the following command is run:

DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);

To make sense out of these wait values you need to know how they change over time. Suppose you are asked to troubleshoot a system and you don't know when the wait stats were last zeroed. Is there any way to find the elapsed time since this happened? If the wait stats were not cleared using the DBCC SQLPERF command then you can simply correlate the stats with the time SQL Server was started, using the sqlserver_start_time column introduced in SQL Server 2008 R2:

SELECT sqlserver_start_time from sys.dm_os_sys_info

However, how do you tell if someone has run DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR) since the server was started, and if they did, when? Without this information the initial, or historical, wait stats have less value until you can measure deltas over time. There is a way to at least estimate when the stats were last cleared, by using the wait stats themselves and choosing a thread that spends most of its time sleeping. A good candidate is the SQL Trace incremental flush task, which mostly sleeps (in 4-second intervals) and in between attempts to flush (if there are new events – which is rare when only the default trace is running) – so it pretty much sleeps all the time. Hence the time it has spent waiting is very close to the elapsed time since the counter was reset. Credit goes to Ivan Penkov in the SQLOS dev team for suggesting this. Here's an example.

144 seconds after the server was started:

select top 10 wait_type, wait_time_ms from sys.dm_os_wait_stats order by wait_time_ms desc

wait_type                            wait_time_ms
------------------------------------ ------------
XE_DISPATCHER_WAIT                         242273
LAZYWRITER_SLEEP                           146010
LOGMGR_QUEUE                               145412
DIRTY_PAGE_POLL                            145411
XE_TIMER_EVENT                             145216
REQUEST_FOR_DEADLOCK_SEARCH                145194
SQLTRACE_INCREMENTAL_FLUSH_SLEEP           144325
SLEEP_TASK                                  73359
BROKER_TO_FLUSH                             73113
PREEMPTIVE_OS_AUTHENTICATIONOPS               143

(10 rows affected)

Reset:

DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR)

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

After 8 seconds:

select top 10 wait_type, wait_time_ms from sys.dm_os_wait_stats order by wait_time_ms desc

wait_type                            wait_time_ms
------------------------------------ ------------
REQUEST_FOR_DEADLOCK_SEARCH                 10013
LAZYWRITER_SLEEP                             8124
SQLTRACE_INCREMENTAL_FLUSH_SLEEP             8017
LOGMGR_QUEUE                                 7579
DIRTY_PAGE_POLL                              7532
XE_TIMER_EVENT                               5007
BROKER_TO_FLUSH                              4118
SLEEP_TASK                                   3089
PREEMPTIVE_OS_AUTHENTICATIONOPS                28
SOS_SCHEDULER_YIELD                            27

(10 rows affected)

After 12 seconds:

select top 10 wait_type, wait_time_ms from sys.dm_os_wait_stats order by wait_time_ms desc

wait_type                            wait_time_ms
------------------------------------ ------------
REQUEST_FOR_DEADLOCK_SEARCH                 15020
LAZYWRITER_SLEEP                            14206
LOGMGR_QUEUE                                14036
DIRTY_PAGE_POLL                             13973
SQLTRACE_INCREMENTAL_FLUSH_SLEEP            12026
XE_TIMER_EVENT                              10014
SLEEP_TASK                                   7207
BROKER_TO_FLUSH                              7207
PREEMPTIVE_OS_AUTHENTICATIONOPS                57
SOS_SCHEDULER_YIELD                            28

(10 rows affected)

It may not be accurate to the millisecond, but it can provide a useful data point, and give an indication of whether the wait stats were manually cleared after startup, and if so approximately when. - Guy

Originally posted at http://blogs.msdn.com/b/sqlosteam/
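
To turn that observation into a repeatable check, here is a minimal Python sketch using pyodbc. The connection string is a placeholder to adapt to your environment, and SQLTRACE_INCREMENTAL_FLUSH_SLEEP is used as the reference wait exactly as suggested above:

    from datetime import timedelta
    import pyodbc

    # Placeholder connection string - adjust the server name and authentication as needed.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes"
    )
    cur = conn.cursor()

    # Read the server's current time and the sleeper task's accumulated wait in one round trip.
    cur.execute(
        "SELECT SYSDATETIME(), wait_time_ms FROM sys.dm_os_wait_stats "
        "WHERE wait_type = 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP'"
    )
    server_now, wait_ms = cur.fetchone()

    # Actual server start time, for comparison.
    cur.execute("SELECT sqlserver_start_time FROM sys.dm_os_sys_info")
    start_time = cur.fetchone()[0]

    # The sleeper task has been waiting for roughly as long as the stats have existed.
    estimated_clear_time = server_now - timedelta(milliseconds=int(wait_ms))

    print("SQL Server started:          ", start_time)
    print("Wait stats cleared (approx.):", estimated_clear_time)
    # If the two times differ by much more than a few seconds, someone has probably run
    # DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR) at roughly the estimated time.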

    Read the article

  • Too nervous to install

    - by The Prop
    Yesterday I (a professional rugby prop of somewhat limited intellect) landed in http://htmlagilitypack.codeplex.com/ and found myself stranded in a town with no signposts. The locals don't need signposts - they know their way around - so who gives a hoot about visitors? Well I'm a visitor and I'm lost. Here's my plea to the good burgesses of Codeplex-sans-signs: HELP!! Let me back-track and explain what landed me at the bottom of this tangled ruck. There's a "Download" button positioned near the top-right of the Codeplex web page, right? Like the Sword of Damocles, a down-arrow to the left of the button indicates, presumably, what a download would include:

CURRENT 1.4.0 Stable
DATE Fri May 7 2010 at 7:00 AM
STATUS Stable

With a simple-minded confidence that has since deserted me (the confidence - not the simple-mindedness), I clicked "Download". This introduced 3 new files to my computer: HtmlAgilityPack.dll, HtmlAgilityPack.pdb, and HtmlAgilityPack.XML. This is when the first stab of doubt penetrated that globe between my cauliflower ears that I call a head. Where's the dot cs? Somewhere in Codeplex, I'd read advice to another lost soul to "download and build the HTMLAgilityPack solution". As I've done so many times as an All Black prop, I glared at the opposition front row - ah, I mean the 3 new files. Shouldn't one of them have a ".cs" on the back of his jersey - er, on the end of its name? Or is this just how they play the game in Codeplex-sans-signs? Undaunted (props have more courage than sense) I packed into my first C# scrum. The half-back feeds the ball in, and the front rows collapse - er, the debugging stops at this line of my code:

"HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();"

Then the Referee blows his whistle and announces one of those verdicts that's utterly indecipherable to your average loose-head prop:

Locating source for 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'. Checksum: MD5 {62 bc f3 7e 9a 92 a6 32 7 d6 5b f8 76 59 7b 5b}
The file 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs' does not exist.
Looking in script documents for 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'...
Looking in the projects for 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'. The file was not found in a project.
Looking in directory 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\vc7\atlmfc'...
Looking in directory 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\vc7\crt'...
The debugger will ask the user to find the file: C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs.
The user pressed Cancel [a brain-stemmer from the prop] in the Find Source dialog.
The debug source files settings for the active solution have been modified so that the debugger will not ask the user to find the file: C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs.
The debugger could not locate the source file 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'.

Even if it had been the first 50 stanzas of "Eskimo Nell", I couldn't have been more shocked. I'm so shocked, my jaws clamp shut around the opposition hooker's ear. He thumbs me in the iris. With a cornea-torn eye I peer at the Codeplex site. My brain stem sparks and I punch the "View all downloads" link. It sparks four more times on each download link, and.. lo!
FOUR files this time: HAPExplorer.zip, HtmlAgilityPack.1.4.0.Source.zip, HtmlAgilityPack.1.4.0.zip, HtmlAgilityPack.Documentation.chm.

But... is this not the same place arrived at recently by my flat-mate Chaz, journalist extraordinaire? (Chaz, if you're reading this, I'm not plugging for nothing - just write kindly about me in your next report, okay?) Didn't these same four files flummox Chaz The Great? He told me about it. Chaz left a message with Codeplex and then solved the problem by just walking away. Typical journalist, huh. But I'm not like that. I don't walk away. I'm made of the sort of stubborn stuff that becomes an All Black prop. Hence this impassioned plea: GOOD TOWNSFOLK OF CODEPLEX-SANS-SIGNS, WHAT SHOULD I DO NEXT? Can somebody point me to Main Street? How does a simpleton install 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'? I'm willing to prostrate myself and grovel to the first kind face that passes in front of my rapidly clouding sight. So help me, I'd even tug my forelock if I had one! Should I hold forth my rod over the wilderness, and create a folder called 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\' or some such? If so, what files should I move into it? ANYTHING else a dum-ass should know about? - and I mean ANYTHING - you just don't know how witless a punch-drunk prop can be.. %( Whenever I've installed other programs they've given me an ".exe" or ".msi" that I can click on and it's all done for me like magic. HEY... there's nothing of that nature here, is there? Am I missing something? Something for dummies to click? (From the waiting rooms of Dr I. Sight Phixes) (signed) The Prop

    Read the article

< Previous Page | 102 103 104 105 106 107 108 109 110  | Next Page >