Search Results

Search found 13293 results on 532 pages for 'small ticket'.


  • Launching a WPF Window in a Separate Thread, Part 1

    - by Reed
    Typically, I strongly recommend keeping the user interface within an application’s main thread, and using multiple threads to move the actual “work” into background threads.  However, there are rare times when creating a separate, dedicated thread for a Window can be beneficial.  This is even acknowledged in the MSDN samples, such as the Multiple Windows, Multiple Threads sample.  However, doing this correctly is difficult.  Even the referenced MSDN sample has major flaws, and will fail horribly in certain scenarios.  To ease this, I wrote a small class that alleviates some of the difficulties involved.

    The MSDN Multiple Windows, Multiple Threads sample shows how to launch a new thread with a WPF Window, and will work in most cases.  The sample code (commented and slightly modified) works out to the following:

        // Create a thread
        Thread newWindowThread = new Thread(new ThreadStart(() =>
        {
            // Create and show the Window
            Window1 tempWindow = new Window1();
            tempWindow.Show();
            // Start the Dispatcher Processing
            System.Windows.Threading.Dispatcher.Run();
        }));
        // Set the apartment state
        newWindowThread.SetApartmentState(ApartmentState.STA);
        // Make the thread a background thread
        newWindowThread.IsBackground = true;
        // Start the thread
        newWindowThread.Start();

    This sample creates a thread, marks it as single-threaded apartment state, and starts the Dispatcher on that thread.  Those are the minimum requirements to get a Window displaying and handling messages correctly, but, unfortunately, this approach has some serious flaws.

    The first issue: given the code in the sample, the created thread will run continuously until the application shuts down.  The problem is that the ThreadStart delegate ends by running the Dispatcher, but nothing ever stops the Dispatcher processing.  The thread was created as a background thread, which prevents it from keeping the application alive, but the Dispatcher will continue to pump dispatcher frames until the application shuts down.  In order to fix this, we need to call Dispatcher.InvokeShutdown after the Window is closed.  This requires modifying the above sample to subscribe to the Window’s Closed event and, at that point, shut down the Dispatcher:

        // Create a thread
        Thread newWindowThread = new Thread(new ThreadStart(() =>
        {
            Window1 tempWindow = new Window1();
            // When the window closes, shut down the dispatcher
            tempWindow.Closed += (s, e) =>
                Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background);
            tempWindow.Show();
            // Start the Dispatcher Processing
            System.Windows.Threading.Dispatcher.Run();
        }));
        // Setup and start thread as before

    This eliminates the first issue.  Now, when the Window is closed, the new thread’s Dispatcher will shut itself down, which in turn will cause the thread to complete.  The above code will work correctly for most situations.
    However, there is still a potential problem which could arise depending on the content of the Window1 class.  This is particularly nasty, as the code could easily work for most windows, but fail on others.  The problem is that, at the point where the Window is constructed, there is no active SynchronizationContext.  This is unlikely to be a problem in most cases, but it is an absolute requirement if code within the constructor of Window1 relies on a context being in place.

    While this sounds like an edge case, it’s fairly common.  For example, if a BackgroundWorker is started within the constructor, or a TaskScheduler is built using TaskScheduler.FromCurrentSynchronizationContext() with the expectation of synchronizing work to the UI thread, an exception will be raised at some point.  Both of these classes rely on a proper context being installed to SynchronizationContext.Current, which happens automatically, but not until Dispatcher.Run is called.  In the above case, SynchronizationContext.Current will return null during the Window’s construction, which can cause exceptions or unexpected behavior.

    Luckily, this is fairly easy to correct.  We need to do three things, in order, prior to creating our Window:

        1. Create and initialize the Dispatcher for the new thread manually
        2. Create a synchronization context for the thread which uses the Dispatcher
        3. Install the synchronization context

    Creating the Dispatcher is quite simple – the Dispatcher.CurrentDispatcher property gets the current thread’s Dispatcher and “creates a new Dispatcher if one is not already associated with the thread.”  Once we have the correct Dispatcher, we can create a SynchronizationContext which uses it by creating a DispatcherSynchronizationContext.  Finally, this synchronization context can be installed as the current thread’s context via SynchronizationContext.SetSynchronizationContext.  These three steps can easily be added to the above via a single line of code:

        // Create a thread
        Thread newWindowThread = new Thread(new ThreadStart(() =>
        {
            // Create our context, and install it:
            SynchronizationContext.SetSynchronizationContext(
                new DispatcherSynchronizationContext(
                    Dispatcher.CurrentDispatcher));

            Window1 tempWindow = new Window1();
            // When the window closes, shut down the dispatcher
            tempWindow.Closed += (s, e) =>
                Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background);
            tempWindow.Show();
            // Start the Dispatcher Processing
            System.Windows.Threading.Dispatcher.Run();
        }));
        // Setup and start thread as before

    This now forces the synchronization context to be in place before the Window is created, and correctly shuts down the Dispatcher when the window closes.  However, there are quite a few steps.  In my next post, I’ll show how to make this operation more reusable by creating a class with a far simpler API…
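    As a preview of what such a simpler API could look like, here is a rough sketch of a reusable helper.  This is my own illustration, not the class from the follow-up post; it simply wraps the STA thread creation, SynchronizationContext installation, and Dispatcher shutdown described above, and the WindowThreadLauncher name and Launch signature are invented for the example.

        using System;
        using System.Threading;
        using System.Windows;
        using System.Windows.Threading;

        // Hypothetical helper, not the class from Part 2 of this series.
        public static class WindowThreadLauncher
        {
            // Creates an STA background thread, installs a DispatcherSynchronizationContext,
            // shows the window produced by the factory, and shuts the Dispatcher down on close.
            public static Thread Launch(Func<Window> windowFactory)
            {
                var thread = new Thread(() =>
                {
                    // Install a context tied to this thread's Dispatcher before any UI is created
                    SynchronizationContext.SetSynchronizationContext(
                        new DispatcherSynchronizationContext(Dispatcher.CurrentDispatcher));

                    Window window = windowFactory();
                    window.Closed += (s, e) =>
                        Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background);
                    window.Show();

                    Dispatcher.Run();
                });

                thread.SetApartmentState(ApartmentState.STA);
                thread.IsBackground = true;
                thread.Start();
                return thread;
            }
        }

        // Usage: WindowThreadLauncher.Launch(() => new Window1());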

    Read the article

  • Big Data – Is Big Data Relevant to me? – Big Data Questionnaires – Guest Post by Vinod Kumar

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions of SQL Server from 7.0 onward, Oracle 7.3, and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice.

    I think this series from Pinal is a good one for anyone planning to start their Big Data journey from the basics. In my daily customer interactions this buzz of “Big Data” always comes up, and I generally react by saying – “Sir, do you really have a ‘Big Data’ problem, or do you have a big data problem?” Generally, there is silence in the air when I ask this question. Data is everywhere in organizations – be it big data, small data, all data, and for a few it is bad data, which is the same as no data :). Don’t discount me as someone who opposes “Big Data”; I am as big a supporter as I am a critic of the abuse of this term. In this post, I wanted to let my mind flow so that you can also think in the direction I want you to see these concepts. In any case, this is not an exhaustive dump of what is in my mind – but you will surely get the drift of how I am going to question Big Data terms from customers!!!

    Is Big Data Relevant to me?

    Many of my customers talk to me like a blank whiteboard with no idea of “why Big Data”. They want to jump on the technology bandwagon and decipher insights from their unexplored data, a.k.a. unstructured data, together with their structured data. So what are the industry scenarios that come to mind? Here are some of them:

        Financials
        Fraud detection: Banks and credit card companies are monitoring your spending habits on a real-time basis.
        Customer Segmentation: applies in every industry from banking to retail to aviation to utilities and others that deal with end customers who consume their products and services.
        Customer Sentiment Analysis: responding to negative brand perception on social media, or amplifying the positive perception.
        Sales and Marketing Campaigns: understand the impact and get closer to customer delight.
        Call Center Analysis: attempt to take unstructured voice recordings and analyze them for content and sentiment.

        Medical
        Reduce Re-admissions: how to build proactive follow-up engagements with patients.
        Patient Monitoring: how to track inpatient, outpatient, emergency visits, intensive care units, etc.
        Preventive Care: disease identification and risk stratification is a very crucial business function for medicine.
        Claims fraud detection: there is no precise dollar figure one can put here, but this is a big thing for the medical field.

        Retail
        Customer Sentiment Analysis, Customer Care Centers, Campaign Management.
        Supply Chain Analysis: sensor and RFID data can be tracked for warehouse space optimization.
        Location based marketing: based on where a check-in happens, retail stores can optimize their marketing.

        Telecom
        Price optimization and plans, finding customer churn, customer loyalty programs.
        Call Detail Record (CDR) analysis, network optimization, user location analysis.
        Customer behavior analysis.

        Insurance
        Fraud detection and analysis, pricing based on customer sentiment analysis, loyalty management.
        Agent analysis, customer value management.

    This list can go on to other areas like utilities, manufacturing, travel, ITES, etc. As you can see, there are obviously interesting use cases for each of these industry verticals. This is just a representative list.

    Where to start?
    A lot of times I try to quiz customers on a number of dimensions before starting a Big Data conversation:

        Are you getting the data you need, the way you want it, and in a timely manner?
        Can you get in and analyze the data you need?
        How quickly does IT respond to your BI requests?
        How easily can you get at the data that you need to run your business/department/project?
        How are you currently measuring your business?
        Can you get the data you need to react WITHIN THE QUARTER to impact behaviors and meet your numbers, or is it always “rear-view mirror”?
        How are you measuring: the brand, customer sentiment, your competition, your pricing, your performance, supply chain efficiencies, predictive product/service positioning?
        What are your key challenges in driving collaboration across your global business? What are the challenges in innovation?
        What challenges are you facing in getting more information out of your data?

    Note: garbage in is garbage out. This holds good for all reporting/analytics requirements.

    Big Data POCs?

    A number of customers get into the realm of setting up a small team to work on Big Data – a great start from an understanding point of view, but I tend to ask such customers a number of other questions. Some of these common questions are:

        To what degree is your advanced analytics (natural language processing, sentiment analysis, predictive analytics and classification) paired with your Big Data efforts?
        Do you have dedicated resources exploring the possibilities of advanced analytics in Big Data for your business line?
        Do you plan to employ machine learning technology while doing advanced analytics?
        How is social media being monitored in your organization?
        What is your ability to scale in terms of storage and processing power?
        Do you have a system in place to sort incoming data in near real time by potential value, data quality, and use frequency?
        Do you use event-driven architecture to manage incoming data?
        Do you have specialized data services that can accommodate different formats, security, and the management requirements of multiple data sources?
        Is your organization currently using or considering in-memory analytics?
        To what degree are you able to correlate data from your Big Data infrastructure with that from your enterprise data warehouse?
        Have you extended the role of Data Stewards to include ownership of Big Data components?
        Do you prioritize data quality based on the source system (that is, Facebook/Twitter data has lower quality thresholds than radio frequency identification (RFID) data from a tracking system)?
        Do your retention policies consider the different legal responsibilities for storing Big Data for a specific amount of time?
        Do Data Scientists work in close collaboration with Data Stewards to ensure data quality?
        How is access to attributes of Big Data being given out in the organization?
        Are roles related to Big Data (Advanced Analyst, Data Scientist) clearly defined?
        How involved is risk management in the Big Data governance process?
        Is there a set of documented policies regarding Big Data governance?
        Is there an enforcement mechanism or approach to ensure that policies are followed?
        Who is the key sponsor for your Big Data governance program? (The CIO is best.)
        Do you have defined policies surrounding the use of social media data for potential employees and customers, as well as the use of customer geo-location data?
        How accessible are complex analytic routines to your user base?
        What is the level of involvement with outside vendors and third parties in regard to the planning and execution of Big Data projects?
        What programming technologies are utilized by your data warehouse/BI staff when working with Big Data?

    These are some of the important questions I ask each customer who is actively evaluating Big Data trends for their organization. These questions give you a sense of direction on where to start, what to use, how to secure, how to analyze, and more.

    Sign off

    Any Big Data analysis is incomplete without a compelling story. The best way to understand this is to watch the Hans Rosling – Gapminder videos (2:17 to 6:06) about third-world myths. Don’t get overwhelmed by the Big Data buzzword; the destination – what your data is telling you – is what matters. In this blog post, we did not look at any particular Big Data technologies. This is a set of questions one needs to keep in mind as they embark on their Big Data journey. I did write about some of the basics on my blog: Big Data – Big Hype yet Big Opportunity. Do let me know if these questions make sense.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • CodePlex Daily Summary for Saturday, May 15, 2010

    CodePlex Daily Summary for Saturday, May 15, 2010

    New Projects
        BizTalk EDI Guidance: BizTalk EDI Guidance is intended to simplify the delivery of EDI solutions by leveraging the ESB Toolkit. This project is currently Alpha and sh...
        Continues Integration Sample: I'm providing a series of blog posts to show a complete CI process using CruiseControl.Net and msbuild. The source code for this series is hosted here.
        DioM2D: My Dragons in our Midst RPG. Runs on my custom Starlight Engine.
        Ethical Hacking ASP.NET: Security tools and guidelines for white-hat hacking and protecting ASP.NET web applications.
        Farseer Engine with XNATouch: Farseer is a great engine for game physics. This implementation uses the XNATouch framework.
        Feature Builder Guidance Extensions: Feature Builder Guidance Extensions are Feature Extensions which extend the guidance for the Feature Building experience. Each FBGX will be suppli...
        Microsoft Office Document Security: MODS is a plugin for Office 2007 that includes Hash Encryption, Hex Conversion and more. Plugins: MODS for Word still working on (MODS for Excel ...
        Minimize Engine (XNA): The Minimize Engine is a basic 3D games engine created using XNA, with its primary focus around grid-based games.
        MSForge TownCrier: This project is meant to build a notification and calling system for MSForge.net User Groups.
        NatureProtector: Silverlight 4 project.
        OutSync: OutSync is a free Windows desktop application that syncs photos of your Facebook friends with matching contacts in Microsoft Outlook. It allows you...
        Quick Save Images, Clipboard save to file, Quick save, bmp, png, jpeg, Image: ClipSa is a very small tool for very quick picture saving. You put some picture into the clipboard (PrintScrn/Alt-PrintScrn/Ctrl-C), ClipSa saves ...
        ResHelper Manager: Resource strings management tool that creates localization files for any type of localization target (asp.net, wpf and so on...)
        SecureCookieHttpModule: Secure your session cookie (and other session-based cookies) against replay attacks using this easy to use ASP.NET HttpModule.
        simpleChMS: A Church Management System (ChMS) designed for churches or ministries like youth groups that want to facilitate better care of their membership. Fo...
        sMAPtool: -
        SPDomainObject: mapping strong type objects to sp lists
        SQL Trim: This project aims at developing a universal trim function for Microsoft SQL Server. It trims: 1) pre spaces 2) post spaces 3) double spaces 4) subs...
        TurretGunner: mt-experience

    New Releases
        BeanProxy: BeanProxy 3.0: BeanProxy is a C# (.NET 3.5) library housing classes that facilitate unit testing. Any non-static, public interface/class or abstract class can be...
        Blueset Studio Opensource Projects: 蓝色之风记事本 0.2 Alpha: An extremely buggy version……
        CSharp Intellisense: V2.1: - Bug fix (Pascal Casing)
        DioM2D: DioM2D0.01: http://www.dragonsinourmidst.com/forums/showthread.php?p=690058#post690058
        Ethical Hacking ASP.NET: Version 1.0.0.1: This is the initial release of the project. Read more about the available tests and features on the Documentation tab. You need the full .NET Frame...
        Event Scavenger: Collector service update - version 3.2.4: Added a check whether the database connection string is set up in the config file.
        Feature Builder Guidance Extensions: FBGX-Binaries: This release consists of a zip file containing all the VSIXs resulting from building each of the FBGX packages found here as source. This will mak...
        Floe IRC Client: Floe IRC Client 2010-05 R2: - Detaching windows (right click on the tabs to detach them) - Highlight lines with your nick or other patterns - Fixed several bugs - Tabs can now...
        Free language translator and file converter: Free Language Translator 1.96: Fixed some minor bugs and improved the UI a bit. If you can not install the msi file you might be missing some prerequisites. You can try running t...
        Geocache Downloader: release 1.0: This is the first release.
        kp.net: Alpha release is available: The goal of this alpha release is to try the code in some production scenarios and find out what features should be tuned.
        Live-Exchange Calendar Sync: Live-Exchange Calendar Sync: May 14, 2010 release of Live-Exchange Calendar Sync 1.0 BETA (Version 45334). Getting Started: info about installat...
        MAPILab Explorer for SharePoint: MAPILab Explorer for SharePoint ver 2.1.1: 1) Small bug fixed that appeared on first start (when earlier versions weren't installed). How to install: download the ZIP file and extract it on Sha...
        Microsoft Office Document Security: MODS 4 WORD (SOURCE INCLUDED): Includes source code
        MoonyDesk (windows desktop widgets): MoonyDesk Alpha: MoonyDesk Alpha (some memory improvements)
        OnTopReplica: Release 2.9.3: Some bugfixes and improvements. Czech translation added (thanks René Mihula).
        OutSync: OutSync v1.0.100.0: OutSync v1.0.100.0 is the final release by Mel before the move to CodePlex. I have tested it on Windows 7 32bit and 64bit with Office 2007 and it ...
        Quick Save Images, Clipboard save to file, Quick save, bmp, png, jpeg, Image: Clipsa v 0.1: Download and extract to any place the 2 files - clipSa.exe and clipSa.exe.config - then run clipSa.exe. That's all.
        ResHelper Manager: ResHelperManager: The list of changes applied to this version of ResHelper is included in the main download zip package. Example sources: in the Source Code tab are sources of De...
        Rx Contrib: V1.3: - Bug fix - BufferWithTimeOrCount with flexible time period setting whenever the time period elapsed...
        SharePoint DVK Integration: SharePoint 2007 DVK integration v1.0.3: Fixes: fixed default field bindings (I rebound too many fields on every page load); fixed extension replacing on creating target url (threw it out)...
        ShoutcastStast for DotNetNuke: DNN_ShoutcastStats alpha 05.00.495: First alpha release of the ShoutcastStats Module for DotNetNuke. This first alpha version of the ShoutcastStats Module for DotNetNuke is still in devel...
        SilverPart 2.1: SilverPart 2.1: This interim release fixes some major bugs related to Firefox and anonymous access. - Fix for Issue ID 4005 - SilverPart does not w...
        sMAPtool: sMAPedit v0.7c (Base Release with Maps): Fixed: force a garbage collection update to prevent the pictureBox's memory leak. Added: essential map pack with all basic maps in jpg format. Added:...
        SQL Trim: Trim: Initial release
        SSIS Multiple Hash: Multiple Hash V1.2.1: This is version 1.2.1 of the Multiple Hash SSIS Component. It supports SQL 2005 and SQL 2008, although you have to download the correct install pa...
        StreamInsight Yahoo Finance input adapter example: StockTicker_v1_0_RTM: Updated for StreamInsight RTM.
        Update Controls .NET: 2.1.0.0: Automatic dependency management for WPF and Silverlight data binding. This release combines both the WPF and Silverlight assemblies into one insta...
        VCC: Latest build, v2.1.30514.0: Automatic drop of latest build

    Most Popular Projects
        Rawr, WBFS Manager, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), patterns & practices – Enterprise Library, Microsoft SQL Server Community & Samples, PHPExcel, ASP.NET

    Most Active Projects
        patterns & practices – Enterprise Library, Mirror Testing System, Rawr, PHPExcel, BlogEngine.NET, Microsoft Biology Foundation, Customer Portal Accelerator for Microsoft Dynamics CRM, Windows Azure Command-line Tools for PHP Developers, Shake - C# Make, StyleCop

    Read the article

  • A Look Back at 2010 Predictions

    - by David Dorf
    Now is the time of year people make their predictions for next year, but before I start thinking about 2011 it's worth a look back to see how my predictions for 2010 fared.

    1. Borders and Blockbuster bite the dust. I would never have predicted a strong brand such as Circuit City could die, but now I know it can happen to anyone. Borders has lost the battle with Barnes & Noble and Blockbuster has lost to Netflix. And just to be sure, Amazon put an extra nail in each coffin. Borders received additional investment from Bennett LeBow to keep it afloat, but the stock is down around $1.25 with no profits in sight. Blockbuster filed for bankruptcy back in September.

    2. Every retailer finally has a page on Facebook... but very few figure out how to keep fans engaged. Retailer postings become noise, and fans start to unsubscribe. Twitter goes in the same direction. A few standout retailers will figure out how to use social media, and the rest will remain dumbfounded. Most retailers are on the Facebook bandwagon, and their fan bases seem to be increasing thanks to promotions like The Gap's logo redesign, Lowe's Black Friday sneak peek, and Walmart's Crowd Savers. There are several examples of f-commerce advancements, including some interesting integrations from Amazon.

    3. Smartphones consolidate and grow. More and more people will step up to smartphones, most of whom will choose iPhone, Blackberry, or Android phones. Other smartphones will vanish, and networks will start to strain. But retailers will finally embrace mobile as the next big channel. Retail marketing departments will build mobile apps without the help of their IT department, and eventually they will get into a bind. Android has been on a tear lately, stealing market share from Blackberry. Palm and Microsoft are trending down, and Apple is holding steady. Smartphone sales are up 15% and expected to continue. Retailers understand the importance of mobile, and some innovative applications have been produced this year.

    4. Google helps the little guys. Google will push its Favorite Places project to help give exposure to small retailers and restaurants. They will enable small retailers to act like big ones by providing storefronts, detailed product information, and coupons for consumers. Google will find a way to bring augmented reality to the masses. I can't say I've seen much new from Google regarding Favorite Places, but they've continued to push local product search. From the PC or smartphone, consumers can search for products and see which nearby stores have them in stock. Oracle Retail even productized an integration to Google to support this effort. I suppose if Google ever buys Groupon then it will bring them even closer to local shopping. Google talked about augmented humanity, but that has nothing to do with augmented reality.

    5. Steve Jobs is Bugs Bunny and Steve Ballmer is Elmer Fudd. (OK, I stole that headline from an InformationWeek article. I couldn't resist.) Both Apple and Microsoft will continue to open new stores, but only Apple will show real growth. POSReady 2009 (formerly WEPOS) will continue to share the POS market with Linux. The iPhone and iPod will continue to capture market share, but there won't be an Apple tablet. There won't be an Apple tablet? What was I thinking? While Apple has well over 300 stores, there are fewer than 10 Microsoft stores.
    Initial impressions show that even though Microsoft is locating its stores near Apple Stores, they are not converting customers, with shoppers citing a lack of assortment and high prices.

    6. Consolidation of e-commerce software providers. Software vendors in the areas of search, reviews, online call-centers, payments, and e-commerce will consolidate, partly driven by the success of m-commerce and SaaS. Amazon will find someone else to buy, and eBay will continue to lose momentum. Consolidation of e-commerce providers continued with IBM acquiring Sterling Commerce and CoreMetrics, and Oracle recently announcing the acquisition of ATG. Amazon grabbed Zappos, Woot, and Diapers.com to continue its dominance of online selling. While eBay's Marketplace growth may have slowed, its PayPal division is doing quite well, fueled in part by demand for mobile payments.

    7. Book publishers mirror music labels. Just as the iPod brought digital downloads to the masses, the Kindle and Nook will power the e-book revolution. Books will continue to use DRM for a few more years before following the path of music. Publishers will try to preserve the margins of hardbacks by associating e-book releases with paperbacks. Amazon has done a good job providing e-reader clients for smartphones, PCs, and tablets. Competition from Barnes & Noble has forced Amazon to support book loaning, and both companies are making it easier for people to publish e-books (with or without DRM). Progress is slow but steady.

    8. NFC makes inroads, RFID treads water. Near Field Communication starts to appear in mobile phones, and retailers beta test its use for payments and loyalty programs. RFID tag costs come down a bit, but not enough to spur accelerated adoption. Nokia announced plans to offer NFC-enabled phones in 2011, and rumors are swirling about NFC in the upcoming iPhone. I think NFC is heading in the right direction, and I've heard more interest from retailers about specialized uses for RFID.

    9. Digital signage goes the way of augmented reality. People use their camera phones to leave geo-tagged notes all over cities, rating stores and restaurants, and "painting" graffiti. But people get tired of holding their phones in front of their faces, so AR glasses are offered in much the same way Bluetooth headsets emerged. Retailers experiment with in-store advertising using AR. Several retailers like Pizza Hut, Benetton, and Target have experimented with AR, but it's still somewhat of a gimmick used by marketing. I think this prediction is a year or two too early.

    10. JDA flip-flops again. After announcing their embrace of the .NET architecture, then switching to J2EE after the Manugistics acquisition, JDA will finally decide to standardize on Apple's Objective-C. Everything will be ported to the iPhone and be available on the App Store. After all, there's not much left to try. This was, of course, a joke, but the sentiment is still valid. JDA seems more supply-chain focused than retail focused, which is an outgrowth of their i2 acquisition.

    Of the 10 predictions, I'm going to say I got 6 somewhat correct. (Don't you just love grading your own paper?) Soon I'll post my predictions for 2011, so be on the lookout. Until then, here's one more prediction: Va Tech beats Stanford in the Orange Bowl -- count on it!

    Read the article

  • Creating and maintaining Orchard translations

    - by Bertrand Le Roy
    Many volunteers have already stepped up to provide translations for Orchard. There are many challenges to overcome in translating such a project. Orchard is a very modular CMS, so the translation mechanism needs to account for the core as well as first and third party modules and themes. Another issue is that every new version of Orchard or of a module changes some localizable strings and adds new ones as others become obsolete. In order to address those problems, I've built a small Orchard module that automates some of the most complex tasks that maintaining a translation implies. In this post, I'll walk you through the operations I had to do to update the French translation for Orchard 1.0.

    In order to make sure you translate all the first party modules, I would recommend that you start from a full source code enlistment. The reason is that I'll show how you can extract the default en-US translation from any source code enlistment. That enables you to create a translation that is even more up-to-date than what is currently on the site. Alternatively, you could start by downloading the current en-US translation. If you decide to do so, just skip the relevant paragraphs.

    First, let's install the Orchard Translation Manager. I'm starting from a vanilla clone of the latest in the code repository. After you've set up the site, go into the dashboard and click on Gallery. Locate the Orchard Translation Manager in the list of modules and click "Install". Once the module is installed, you need to enable its one feature by going into Configuration/Features and clicking "Enable" next to Vandelay.TranslationManager. We're done with the setup that we need in order to start our translation work. We'll now switch to the command line and to our favorite text editor.

    Open a command line on the Orchard web site folder. I found the easiest way to do this is to SHIFT+right-click on the Orchard.Web folder in Windows Explorer and click "Open command window here". Type bin\orchard to enter the Orchard command-line environment. If you do a "help commands" you should see four commands in the list that came from the module we just installed: extract default translation, install translation, package translation and sync translation.

    First, we're going to generate the default translation. Note that it is possible to generate that default translation for a specific list of modules and themes by using the /Extensions: switch, which should facilitate the translation of third party extensions, but in this tutorial we're going to generate it for the whole of the Orchard source code.

        extract default translation /Output:\temp

    This should have created an Orchard.en-us.po.zip file in the temp directory. Extract that archive into an orchard.po folder under \temp. The next step depends on whether you have an existing translation that you want to update or not.
    If you do have an existing translation, just extract it into the same \temp\orchard.po directory. That should result in a file structure where you have the default en-US translation alongside your own. If you don't have an existing translation, just continue; the commands will be the same. We are now going to synchronize those translations (or generate the stub for a new one if you didn't start from an existing translation).

        sync translation /Input:\temp\orchard.po /Culture:fr-FR

    After this command (where you should of course substitute fr-FR with the culture you're working on), we now have updated files that contain a few useful flags. Open each of the .po files under the culture you are working on (there should be around 36) with your favorite text editor.

    For all the strings that are still valid in the latest version, nothing changes and you don't need to do anything. For all the strings that disappeared from the default culture, the old translation will still be there, but prefixed with the following comment:

        # Obsolete translation

    Conveniently, all the obsolete strings are grouped at the end of the file, so you can select all of them and delete them. For all the new strings, you will see the following comment:

        # Untranslated string

    This is where the hard work begins. You'll need to translate each of those new strings by entering the translation between the quotes in:

        msgstr ""

    Don't introduce hard carriage returns in the strings; just stay on one line (your text editor should do some reasonable wrapping, so this shouldn't be a big deal). Once you're done with a file, save it. Make sure, and this is very important, that your text editor saves using the UTF-8 encoding. In Notepad, that setting can be found in the file saving dialog by doing a "Save As" rather than a plain "Save".

    When all the po files have been edited, you are ready to package the translation for submission (a.k.a. sending e-mail to the localization mailing list).

        package translation /Culture:fr-FR /Input:\temp\orchard.po /Output:\temp

    You should now see an Orchard.fr-FR.po.zip file in temp that is ready to be submitted. That is, once you've tested it, which can be done by deploying it into the site:

        install translation \temp\orchard.fr-fr.po.zip

    Once this is done you can go into the dashboard under Configuration/Settings and click on "Add or remove supported cultures for the site". Choose your culture and click "Add". You can go back to settings and set the default culture. Save. You may now take a tour of the application and verify that everything works as expected.

    And that's it, really. Creating a translation for Orchard is a matter of a few hours. If you don't see a translation for your culture, please consider creating it.
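    To make the editing step a little more concrete, here is roughly what a single untranslated entry looks like before and after translation. The "Save" string and its French translation are just an illustrative example, not necessarily a string you will find in your files, and your .po files may contain additional context comment lines above each entry.

        # Untranslated string
        msgid "Save"
        msgstr ""

    After typing the French text between the quotes, the same entry becomes:

        msgid "Save"
        msgstr "Enregistrer"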

    Read the article

  • Visiting the Emtel Data Centre

    Back in February, at the first event of the Emtel Knowledge Series (EKS), I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a visit there for a selected group of our community members. Well, let's say it like this... my first approach wasn't that promising and far from successful, but during the following months there were more and more occasions to get in touch with the "right" contact persons at Emtel to make it happen.

    Setting up an appointment and pre-requisites

    The major breakthrough came during a Boot Camp for Windows Phone 8.1 app development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at Emtel World, Ebene. Apart from learning bits and pieces regarding Universal Apps I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to getting access to the Emtel data centre you are requested to provide the full name and National ID of anyone going to visit. Also, it should be noted that there was only a limited number of seats available. Anyway, armed with this information I posted through the usual social media channels. Responses came in very quickly and, on a first-come, first-served (FCFS) basis, I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport options to go to the data centre in Arsenal. The day before and on the day of our meeting, Arvin sent me a reminder to check whether everything was still confirmed and ready to go... Of course it was!

    Arriving at the Emtel Data Centre

    As I was coming from Flic En Flac towards the North, we agreed that I would pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean eventually might be living in another time zone compared to the rest of us. Anyway, after an extended stop we were complete and arrived just in time in Arsenal to meet and greet Ish and Veer. Again, Emtel takes access procedures to their data centre very seriously, and the gate stayed closed until all our IDs had been noted and compared to the list of registered attendees. Despite having a good laugh at the mixture of old and new ID cards, it was straightforward processing. The guard was very helpful and guided us to the waiting area at the entrance section of the building. Shortly after, we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us a brief introduction to the rules and regulations for our visit: no photography allowed, no touching the buttons, and following his instructions throughout the whole visit. Of course!

    Inside the data centre

    Next, he explained the multi-factor authentication system, which uses a combination of biometric data, like a fingerprint reader, and a "classic" PIN panel. The Emtel data centre provides multiple services, and next to co-location for your own hardware they also offer storage options for your backup and archive data in their massive, fire-resistant vault. It was very impressive to learn about the considerations that went into choosing the right location and how the whole premises were set up. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings.

    Strengths of the Emtel TIER 3 Data Centre, Mauritius

    Finally, we were guided into the first server room.
    And wow, the whole setup is cleverly planned and laid out in the architecture: from the false floors and ceilings that provide optimum air flow, to the separation of cold and hot aisles between the full-size server racks, and of course the monitored air conditions that analyse and track changes in temperature, smoke detection and other parameters. Not surprisingly, everything has been implemented in two independent circuits. There is a standardised classification for the construction and operation of data centres world-wide, and Emtel's has been designed as a TIER 4 building, but due to the lack of an alternative power supplier on the island it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there might be a second supplier of energy next to CEB... time will tell. The data centre is integrated into the National Fibre Optic Gigabit Ring, and Emtel already connects internationally through diverse undersea cable routes like SAFE & LION/LION2 out of Mauritius and through several other providers for onward connectivity, so internet connectivity is redundant.

    Meanwhile, Arvin managed to join our little group of geeks, and he supported Kamlesh in answering our technical questions regarding the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights. Seeing their wall of screens monitoring any kind of activity on the data lines, the managed servers and the activity in and around the building was great. Even though I've been using a multi-head setup for years, I cannot keep up with that setup... ;-) But I got a couple of ideas on how to improve my work spaces here at the office.

    Clear advantages of hosting your e-commerce and mobile backends locally

    After the completely isolated NOC area we continued our Q&A session with Kamlesh and Arvin in the second server room, which is dedicated to shared environments. At first sight it should be noted that there is lots of space for full-sized racks and therefore co-location of your own hardware. Actually, given the feedback that there will be upcoming changes in prices, the facilities at the Emtel data centre are getting more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I'm already considering moving one of my root servers from abroad to be co-located here on the island. This would provide an improved user experience in terms of site performance and latency, which would be a good improvement, especially for upcoming e-commerce solutions for two of my local clients.

    Later on, we actually started a conversation about additional services that could be a catalyst for the local market in order to attract more small and medium companies to take the data centre into their evaluations regarding online activities. Until today Emtel does not provide virtualised server environments, but there might be plans in the future to cover this field as well. Emtel is a mobile operator and internet connectivity provider in the first place; entering a market of managed and virtualised server infrastructures, including capacities in terms of cloud storage and computing, is rather new, and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out...
    And I appreciate Emtel's approach of building a solid foundation first and then adding new services on top of it: Emtel as a future one-stop-shop service provider for all your internet and telecommunications needs.

    Emtel's promotional video about their TIER 3 data centre in Arsenal, Mauritius

    More details are thoroughly described in Emtel's brochure about their data centre. Check out their PDF document here.

    Thanks for this opportunity

    Visiting and walking through the Emtel data centre for more than two hours was a great experience. As a representative of the Mauritius Software Craftsmanship Community (MSCC) I would like to thank everyone at Emtel involved in the process of making it happen, and especially Arvin Lockee and Kamlesh Bokhoree for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank You!

    Read the article

  • Fixing a SkyDrive Sync Disaster

    - by Rick Strahl
    For a few months I've been using SkyDrive to handle some basic syncing tasks for a number of folders of mine. Specifically, I've been dumping a few of my development folders into SkyDrive so I have a live running backup. It had been working just fine until about a week ago when something went awry. Badly!

    The idea is that SkyDrive should sync files, but somewhere in its sync relationship it appears that SkyDrive got confused and assumed it needed to sync older files from the SkyDrive server back to my local machine. So rather than syncing my newer files to the server, SkyDrive was pushing older files back to me. Because SkyDrive is so slow actually updating data, it's not unusual for SkyDrive to be far behind in syncing, and apparently some files were out of date by several months. Of course this is insidious because I didn't notice it for quite some time. I'd been happily working away on my files when a few days ago I noticed a bunch of files with -RasXps (my machine name) popping up in various folders. At first I thought my Git repository was giving me a fit, but eventually realized that SkyDrive was actually pushing old files into my monitored folders. To be fair, SkyDrive did make backups of the existing files, but by the time I caught it there were literally a few thousand files scattered on my machine that were now replaced with old files from online.

    Here's what some of this looks like: the directory list shows a bunch of files with a -RasXps postfix appended to them. Those are the files that SkyDrive replaced and backed up on my machine. As you can see, the backed-up files are actually newer than the ones it pulled from the online SkyDrive. Unless I modified the files after they were updated, they were all older than the existing local files. Not exactly how I imagined my syncing would work.

    At first I started cleaning up this mess manually. In most cases the obvious solution was to simply delete the original file and replace it with the -RasXps file, but not for all files. Some scrutiny was required, and besides being a pain in the ass to rename files, quite frequently I had to dig out Beyond Compare to compare a few files where it wasn't quite clear what was wrong. I quickly realized that doing this by hand would be too hard for the large number of files that got hosed.

    Hacking together a small .NET Utility

    So I figured the easiest way to tackle this was to write a small utility app that shows me all the mangled files that have backups, allows me to compare them, and then quickly select and update them, removing the -RasXps file after choosing one of the two files. What I ended up with was a quick and dirty WinForms app that allows me to pick a root folder and then shows all the -MachineName files. I start by picking a base folder and a template to search for - typically the -MachineName. Clicking Go brings up a list of all matching files in that folder and its subdirectories. The list also displays the dates for the saved (-MachineName) file and the current file on disk, along with highlighting for the newer of the two. I can right-click on any file and get a context menu to open the folder in Explorer, or open Beyond Compare and view the two files to compare differences, which I found very helpful for a number of files where I had modified the files after SkyDrive had updated to an old one. Typically these would be the green files (of which there were thankfully few).
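    To give an idea of the kind of scan such a utility performs, here is a minimal console sketch. This is not Rick's actual code (that is in the download below); it simply assumes the -MachineName suffix convention described above, and the root folder path and "-RasXps" value are made-up placeholders.

        using System;
        using System.IO;

        class ConflictScanner
        {
            // Lists each sync-conflict backup ("<name>-<machine><ext>") next to the file
            // currently on disk, and reports which copy has the newer timestamp.
            static void Main()
            {
                string root = @"C:\projects";   // hypothetical root folder to scan
                string machine = "-RasXps";     // machine-name suffix appended to the backups

                foreach (var saved in Directory.EnumerateFiles(root, "*" + machine + "*", SearchOption.AllDirectories))
                {
                    // Reconstruct the name of the "current" file by stripping the suffix
                    string current = saved.Replace(machine, "");
                    if (!File.Exists(current))
                        continue;

                    DateTime savedTime = File.GetLastWriteTime(saved);
                    DateTime currentTime = File.GetLastWriteTime(current);

                    Console.WriteLine("{0}\n  saved:   {1:g}\n  current: {2:g}  ({3} is newer)",
                        current, savedTime, currentTime,
                        savedTime > currentTime ? "saved" : "current");
                }
            }
        }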
    To 'fix' files I can select any number of files in the list, then use one of the three buttons on the right to apply an operation. I can use the saved files - that is, the backup files that SkyDrive created with the -MachineName extension (-RasXps above). Or I can use the current file, which is the file with the right name on disk right now, and delete the -MachineName file. Or, on some occasions, I can just opt to delete both of them. For some files, like binaries, it's often easier to just delete them and rebuild than to choose. For the most part the process involves accepting the pink files, and checking the few green files to see if any modifications were made since the file was updated incorrectly by SkyDrive. For me, luckily, those were few in number.

    Anyway, I thought I'd share this utility in case anybody else runs into this issue. I've included the VS2012 solution and all the source code so you can see how it works and tweak it as needed. The .NET 4.5 binaries are also included if you can't compile.

    Be warned though! This rough code is provided as-is and makes no guarantees or claims about file safety. All three of the action buttons on the form will delete data. It's a very rough utility and there are no safeguards that ask nicely before deleting files. I highly recommend you make a backup before you have at it. This tool is very narrow in focus, but it might also work with sync issues from other vendors. I seem to remember that I had similar issues with SugarSync at some point and it too created the -MachineName style files on sync conflicts. Hope this helps somebody out so you can avoid wasting the better part of a full work day on this…

    Resources

    Download the Source Code and Binaries for SkyDrive Rescue

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in Windows  .NET

    Read the article

  • VS 2010 Debugger Improvements (BreakPoints, DataTips, Import/Export)

    - by ScottGu
    This is the twenty-first in a series of blog posts I’m doing on the VS 2010 and .NET 4 release.  Today’s blog post covers a few of the nice usability improvements coming with the VS 2010 debugger.

    The VS 2010 debugger has a ton of great new capabilities.  Features like IntelliTrace (aka historical debugging), the new parallel/multithreaded debugging capabilities, and dump debugging support typically get a ton of (well deserved) buzz and attention when people talk about the debugging improvements in this release.  I’ll be doing blog posts in the future that demonstrate how to take advantage of them as well.  With today’s post, though, I thought I’d start off by covering a few small, but nice, debugger usability improvements that were also included with the VS 2010 release, and which I think you’ll find useful.

    Breakpoint Labels

    VS 2010 includes new support for better managing debugger breakpoints.  One particularly useful feature is called “Breakpoint Labels” – it enables much better grouping and filtering of breakpoints within a project or across a solution.  With previous releases of Visual Studio you had to manage each debugger breakpoint as a separate item.  Managing each breakpoint separately can be a pain with large projects and for cases when you want to maintain “logical groups” of breakpoints that you turn on/off depending on what you are debugging.  Using the new VS 2010 “breakpoint labeling” feature you can now name these “groups” of breakpoints and manage them as a unit.

    Grouping Multiple Breakpoints Together using a Label

    The breakpoints window within Visual Studio 2010 lists all of the breakpoints defined within my solution (which in this case is the ASP.NET MVC 2 code base).  The first and last breakpoints in the list break into the debugger when a Controller instance is created or released by the ASP.NET MVC Framework.  Using VS 2010, I can now select these two breakpoints, right-click, and then select the new “Edit labels…” menu command to give them a common label/name (making them easier to find and manage).  In the dialog that appears we can create a new string label for our breakpoints or select an existing one we have already defined.  In this case we’ll create a new label called “Lifetime Management” to describe what these two breakpoints cover.  When we press the OK button our two selected breakpoints will be grouped under the newly created “Lifetime Management” label.

    Filtering/Sorting Breakpoints by Label

    We can use the “Search” combobox to quickly filter/sort breakpoints by label – for example, showing only those breakpoints with the “Lifetime Management” label.

    Toggling Breakpoints On/Off by Label

    We can also toggle sets of breakpoints on/off by label group.  We can simply filter by the label group, do a Ctrl-A to select all the breakpoints, and then enable/disable all of them with a single click.

    Importing/Exporting Breakpoints

    VS 2010 now supports importing/exporting breakpoints to XML files – which you can then pass off to another developer, attach to a bug report, or simply re-load later.  To export only a subset of breakpoints, you can filter by a particular label and then click the “Export breakpoint” button in the Breakpoints window.  Above I’ve filtered my breakpoint list to only export two particular breakpoints (specific to a bug that I’m chasing down).
    I can export these breakpoints to an XML file and then attach it to a bug report or email – which will enable another developer to easily set up the debugger in the correct state to investigate it on a separate machine.

    Pinned DataTips

    Visual Studio 2010 also includes some nice new “DataTip pinning” features that enable you to better see and track variable and expression values while in the debugger.  Simply hover over a variable or expression within the debugger to expose its DataTip (which is a tooltip that displays its value) – and then click the new “pin” button on it to make the DataTip always visible.  You can “pin” any number of DataTips you want onto the screen.  In addition to pinning top-level variables, you can also drill into the sub-properties on variables and pin them as well.  For example, I’ve “pinned” three variables: “category”, “Request.RawUrl” and “Request.LogonUserIdentity.Name” – note that the last two variables are sub-properties of the “Request” object.

    Associating Comments with Pinned DataTips

    Hovering over a pinned DataTip exposes some additional UI within the debugger.  Clicking the comment button at the bottom of this UI expands the DataTip and allows you to optionally add a comment with it.  This makes it really easy to attach and track debugging notes.

    Pinned DataTips are usable across both Debug Sessions and Visual Studio Sessions

    Pinned DataTips can be used across multiple debugger sessions.  This means that if you stop the debugger, make a code change, and then recompile and start a new debug session, any pinned DataTips will still be there, along with any comments you associated with them.  Pinned DataTips can also be used across multiple Visual Studio sessions.  This means that if you close your project, shut down Visual Studio, and then later open the project up again, any pinned DataTips will still be there, along with any comments you associated with them.

    See the Value from the Last Debug Session (Great Code Editor Feature)

    How many times have you stopped the debugger only to go back to your code and say: $#@! – what was the value of that variable again???  One of the nice things about pinned DataTips is that they keep track of their “last value from debug session” – and you can look these values up within the VB/C# code editor even when the debugger is no longer running.  DataTips are by default hidden when you are in the code editor and the debugger isn’t running.  On the left-hand margin of the code editor, though, you’ll find a push-pin for each pinned DataTip that you’ve previously set up.  Hovering your mouse over a pinned DataTip will cause it to display on the screen – for example, hovering over the first pin in the editor displays our debug session’s last values for the “Request” object DataTip along with the comment we associated with them.  This makes it much easier to keep track of state and conditions as you toggle between code editing mode and debugging mode on your projects.

    Importing/Exporting Pinned DataTips

    As I mentioned earlier in this post, pinned DataTips are by default saved across Visual Studio sessions (you don’t need to do anything to enable this).  VS 2010 also now supports importing/exporting pinned DataTips to XML files – which you can then pass off to other developers, attach to a bug report, or simply re-load later.  Combined with the new support for importing/exporting breakpoints, this makes it much easier for multiple developers to share debugger configurations and collaborate across debug sessions.
    Summary

    Visual Studio 2010 includes a bunch of great new debugger features – both big and small.  Today’s post shared some of the nice debugger usability improvements.  All of the features above are supported with the Visual Studio 2010 Professional edition (the Pinned DataTip features are also supported in the free Visual Studio 2010 Express editions).  I’ll be covering some of the really big new debugging features like IntelliTrace, parallel/multithreaded debugging, and dump file analysis in future blog posts.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • The APEX of Business Value...or...the Business Value of APEX? Oracle Cloud Takes Oracle APEX to New Heights!

    - by Gene Eun
    The attraction of Oracle Application Express (APEX) has increased tremendously with the recent launch of the Oracle Cloud. APEX already supported departmental development and deployment of business applications with minimal involvement from the IT department. Positioned as the ideal replacement for MS Access, APEX probably has managed better to capture the eye of developers and was used for enterprise application development at least as much as for the kind of tactical applications that Oracle strategically positioned it for. With APEX as PaaS from the Oracle Cloud, a leap is made to a much higher level of business value. Now the IT department is not even needed to make infrastructure available with a database running on it. All the business needs is a credit card. And the business application that is developed, managed and used from the cloud through a standard browser can now just as easily be accessed by users from around the world as by users from the business department itself. As a bonus – the development of the APEX application is also done in the cloud – with no special demands on the location or the enterprise access privileges of the developers.

    To sum it up – APEX from the Oracle Cloud Database Service:
    - gets the development environment up and running in minutes
    - requires no involvement from the internal IT department (not for infrastructure, platform, or development)
    - offers the superior availability and scalability provided by Oracle
    - lets users from anywhere in the world be invited to access the application
    - lets developers from anywhere in the world participate in creating and maintaining the application

    In addition: because the Oracle Cloud platform is the same as the on-premise platform, you can still decide to move the APEX application between the cloud and the local environment – and back again. The REST-ful services that are available through APEX allow programmatic interaction with the database under the APEX application. That means that this database can be synchronized with on premise databases or data stores in (other) clouds. Through the Oracle Cloud Messaging Service, the APEX application can easily enter into asynchronous conversations with other APEX applications, Fusion Middleware applications (ADF, SOA, BPM) and any other type of REST-enabled application.

    In my opinion, now, for the first time perhaps, APEX offers the attraction to the business that has been suggested before: because of the cloud, all the business needs is a credit card (a budget of $175 per month), an internet connection and a browser. Not like before, with a PC hidden under a desk or a database running somewhere in the data center. No matter how unattended: equipment is needed, power is consumed, the database needs to be kept running and if Oracle Database XE does not suffice, software licenses are required as well. And this set up always has a security challenge associated with it. The cloud fee for the Oracle Cloud Database Service includes infrastructure, power, licenses, availability, platform upgrades, a collection of reusable application components and the development and runtime environments containing the APEX platform. Of course this not only means that business departments can move quickly without having to convince their IT colleagues to move along – it also means that small organizations that do not even have IT colleagues can do the same. 
Getting tailored applications up and running to get in touch with users and customers all over the world is now within easy reach for small outfits – without any investment.

My misunderstanding

For a long time, I was under the impression that the essence of APEX was that the business could create applications themselves – meaning that business ‘people’ would actually go into APEX to create the application. To me APEX was too much of a developers’ tool to see that happen – apart from the odd business analyst who missed his or her calling as an IT developer. Having looked at various other cloud based development offerings – including Force.com, Mendix, WaveMaker, WorkXpress, OrangeScape, Caspio and Cordys – I have come to realize my mistake. All these platforms are positioned for 'the business' but require a fair amount of coding and technical expertise. However, they make the business happy nevertheless, because they allow the business to completely circumvent the IT department. That is the essence. Not having to go through the red tape, not having to wait for IT staff who (justifiably) need weeks or months to provide an environment, not having to deal with administrators (again, justifiably) refusing to take on that 'strange environment'. Being able to think of an initiative and turn it into action right away. The business does not have to build the application - it can easily hire some external developers or even that nerdy boy next door. They can get started, get an application up and running and invite users in – especially external users such as customers. They will worry later about upgrades and life cycle management and integration. Getting applications up and running quickly and turning ideas into action and results right away – that is the key selling point for all these cloud offerings, including APEX from the Cloud. And it is a compelling story. For APEX probably even more so than for the others. While I consider APEX a somewhat proprietary framework compared with ‘regular’ Java/JEE web development (or even .NET and PHP development), it is still far more open than most cloud environments. APEX is SQL and PL/SQL based – nothing special about those languages – and can run just as easily on site as in the cloud. It has been around since 2004 (not including several predecessors that fed straight into APEX) so it can be considered pretty mature. Oracle as a company seems pretty stable – so investments in its technology are bound to last for some time to come. By the way: neither APEX nor the other cloud DevaaS offerings are targeted at creating applications with enormous life times. They fit into a trend of agile development and rapid life cycle management, with fairly lightweight user interfaces that quickly adapt to taste, technology trends and functional requirements and that are easily replaced.

APEX and ADF – a match made in heaven?! (or at least in the sky)

Note that using APEX only for a cloud based database with REST-ful services is also a perfectly viable scenario: any UI – mobile or browser based – capable of consuming REST-ful services can be created against such a business tier. Creating an ADF Mobile application, for example, that runs against REST-ful services is a best practice for mobile development. Such REST-ful services can be consumed from any service provider – including the cloud based, APEX powered REST-ful services running against the Oracle Cloud Database Service! 
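To make that REST angle a bit more concrete, here is a minimal C# sketch of what consuming such an APEX RESTful service from a .NET client could look like. The URL and the shape of the returned JSON are hypothetical placeholders chosen for illustration, not an actual Oracle Cloud endpoint:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ApexRestClientSketch
{
    // Hypothetical APEX RESTful service URL - substitute your own service definition.
    private const string ServiceUrl =
        "https://example-cloud-account.db.cloud.oracle.com/apex/demo/employees/";

    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // APEX RESTful GET handlers typically return JSON.
            string json = await client.GetStringAsync(ServiceUrl);
            Console.WriteLine(json);
        }
    }
}

The same service could just as well be called from JavaScript in a browser or mobile UI; the point is simply that the APEX-managed database becomes reachable from any REST-capable client.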
The ADF Mobile architecture overview can easily be morphed to fit the APEX services in – allowing for a cloud based mobile app: Want to learn more about Oracle Database Cloud Service or Oracle Cloud, just visit cloud.oracle.com  or oracle.com/cloud. Repost of a blog entry by Rick Greenwald, Director of Product Management, Oracle Database Cloud Service.

    Read the article

  • OS Analytics - Deep Dive Into Your OS

    - by Eran_Steiner
    Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes.

    We will have a call to discuss this blog - please join us!
    Date: Thursday, November 1, 2012
    Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00)
    1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833067&UID=1512092402&PW=NY2JhMmFjMmFh&RT=MiMxMQ%3D%3D
    2. If requested, enter your name and email address.
    3. If a password is required, enter the meeting password: oracle123
    4. Click "Join".
    To join the teleconference:
    Call-in toll-free number: 1-866-682-4770 (US/Canada)
    Other countries: https://oracle.intercallonline.com/portlets/scheduling/viewNumbers/viewNumber.do?ownerNumber=5931260&audioType=RP&viewGa=true&ga=ON
    Conference Code: 7629343#
    Security code: 7777#

    Here is a quick summary of what you can do with OS Analytics in Ops Center:
    - View historical charts and real time values of CPU, memory, network and disk utilization
    - Find the top CPU and memory processes in real time or at a certain historical day
    - Determine proper monitoring thresholds based on historical data
    - View Solaris services status details
    - Drill down into a process's details
    - View the busiest zones, if applicable

    Where to start

    To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, Memory utilization and Network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version). In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: one of the cool things is that you can see the process tree for this process along with some port bindings and open file descriptors.

    On Solaris machines with zones, you get an extra level of tabs, allowing you to get more information on the different zones. This is a good way to see the busiest zones. For example, one zone may not take a lot of CPU but it can consume a lot of memory, or perhaps network bandwidth. To see the detailed Analytics for each of the zones, simply click each of the zones in the tree and go to its Analytics tab.

    Next, click the "Processes" tab to see real time information on all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty.

    Next, if you view a Solaris machine, you will have a "Services" tab. By default, all services will be displayed, but you can choose to display only certain states, for example, those in maintenance or the degraded ones. You can highlight a service and choose to view the details, where you can see the Dependencies, Dependents and also the location of the service log file (not shown in the picture as you need to scroll down to see the log file). 
The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day.

Applying new monitoring values

When first applying new values to monitored attributes, a popup will come up asking if it's OK to get you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".

Visiting the past

Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: you can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time.

Under the hood

The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/:
- An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp at which they were taken.
- If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it.
- If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller.

The actual script collecting the data can be viewed for debugging purposes as well:
- On Linux, the location is /opt/sun/xvmoc/private/os_analytics/collect
- On Solaris, the location is /opt/SUNWxvmoc/private/os_analytics/collect

If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it:

# touch /tmp/.collect.stderr

The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/XVM.properties. 
Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner

    Read the article

  • Process.Start() and ShellExecute() fails with URLs on Windows 8

    - by Rick Strahl
    Since I installed Windows 8 I've noticed that a number of my applications appear to have problems opening URLs. That is, when I click on a link inside of a Windows application, either nothing happens or there's an error that occurs. It's happening both to my own applications and a host of Windows applications I'm running. At first I thought this was an issue with my default browser (Chrome) but after switching the default browser to a few others and experimenting a bit I noticed that the errors occur - oddly enough - only when I run an application as an Administrator. I also tried switching to FireFox and Opera as my default browser and saw exactly the same behavior.

    The scenario for this is a bit bizarre:
    - Running on Windows 8
    - Call Process.Start() (or ShellExecute() in the Win32 API) with a URL or an HTML file
    - Run 'As Administrator' (works fine under a non-elevated user account!) or with UAC off
    - A browser other than Internet Explorer is set as your default Web Browser

    Talk about a weird scenario: something that doesn't work when you run as an Administrator, which is supposed to have rights to everything on the system! Instead, running under an Admin account - either elevated with a User Account Control prompt or even when running as a full Administrator - fails. It appears that this problem does not occur for everyone, but when I looked for a solution to this, I saw quite a few posts in relation to this with no clear resolutions. I have three Windows 8 machines running here in the office and all three of them showed this behavior. Lest you think this is just a programmer's problem - this can affect any software running on your system that needs to run under administrative rights.

    Try it out

    Now, in order for this next example to fail, any browser but Internet Explorer has to be your default browser, and even then it may not fail depending on how you installed your browser. To see if this is a problem, create a small Console application and call Process.Start() with a URL in it:

    using System;
    using System.Diagnostics;
    using System.IO;

    namespace Win8ShellBugConsole
    {
        class Program
        {
            static void Main(string[] args)
            {
                // Launching a URL - this is the call that fails when elevated.
                Console.WriteLine("Launching Url...");
                Process.Start("http://microsoft.com");
                Console.Write("Press any key to continue...");
                Console.ReadKey();

                // Launching a local file - this works in any user mode.
                Console.WriteLine("\r\n\r\nLaunching image...");
                Process.Start(Path.GetFullPath(@"..\..\sailbig.jpg"));
                Console.Write("Press any key to continue...");
                Console.ReadKey();
            }
        }
    }

    Compile this code. Then execute the code from Explorer (not from Visual Studio because that may change the permissions). If you simply run the EXE and you're not running as an administrator, you'll see the Web page pop up in the browser as well as the image loading. Now run the same thing with Run As Administrator: this time you get a nice error when Process.Start() is fired. The same happens if you are running with User Account Control off altogether - i.e. you are running as a full admin account. Now if you comment out the URL in the code above and just fire the image display - that works just fine in any user mode. As does opening any other local file type or even starting a new EXE locally (i.e. Process.Start(@"c:\windows\notepad.exe")). All that works, EXCEPT for URLs. The code above uses Process.Start() in .NET but the same happens in Win32 applications that use the ShellExecute API. In some of my older Fox apps ShellExecute returns an error code of 31 - which is "No Shell Association found".

    What's the Deal?

    It turns out the problem has to do with the way browsers are registering themselves on Windows. 
Internet Explorer - being a built-in application in Windows 8 - apparently does this correctly, but other browsers possibly don't, or at least didn't at the time I installed them. Even though Chrome, which continually updates itself, apparently has a recent version with this registration issue fixed, I was unable to simply set IE as my default browser and then use Chrome's 'Set as Default Browser' option. It still didn't work. Neither did using the Set Program Associations dialog, which lets you assign what extensions are mapped to by a given application. Each application provides a set of extension/moniker mappings that it supports and this dialog lets you associate them on a system wide basis. This also did not work for Chrome or any of the other browsers at first. However, after repeated retries I eventually did manage to get FireFox to work, but not any of the others.

What Works? Reinstall the Browser

In the end I decided on the hard-core pull-the-plug solution: totally uninstall and re-install Chrome in this case. And lo and behold, after the reinstall everything was working fine. Now even removing the association for Chrome, switching to IE as the default browser and then back to Chrome works. Even though the version of Chrome I was running before uninstalling and reinstalling is the same as the one I'm running now, after the reinstall it works. Of course I had to find out the hard way, before Richard commented with a note regarding what the issue is with Chrome at least: http://code.google.com/p/chromium/issues/detail?id=156400

As expected the issue is a registration issue - with keys not being registered at the machine level. Reading this I'm still not sure why this should be a problem - an elevated account still runs under the same user account (i.e. I'm still rickstrahl even if I Run As Administrator), so why shouldn't an app be able to read my Current User registry hive? And it also doesn't quite explain why it still fails if I register the extensions by running Chrome as Administrator and using 'Set as Default Browser'. But in the end it works…

Not so fast

It's now a couple of days later and still there are some oddball problems, although this time they appear to be purely Chrome issues. After the reinstall Chrome seems to pop up properly with ShellExecute() calls both in regular user and Admin mode. However, it now looks like Chrome is actually running two completely separate user profiles for each. For example, when I run Visual Studio in Admin mode and go to View in Browser, Chrome complains that it was installed in Admin mode and can't launch (WTF?). Then you retry a few times later and it ends up working. When launched that way some of the installed plug-ins don't show up, with the effect that sometimes they're visible and sometimes they're not. Also Chrome seems to lose my configuration and Google sign-in between sessions now, presumably when switching user modes. Add-ins installed in admin mode don't show up in user mode and vice versa. Ah, this is lovely. Did I mention that I freaking hate UAC precisely because of this kind of bullshit? You can never tell exactly what account your app is running under, and apparently apps also have a hard time trying to put data into the right place that works for both scenarios. And as my recent post on using Windows Live accounts shows, it's yet another level of abstraction on top of the underlying system identity that can cause all sorts of small side-effect headaches like this. 
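As an aside - and this is my own defensive-coding sketch, not something from Rick's post - application code that runs into this kind of shell-association failure can fall back to launching the URL indirectly, for example via explorer.exe, so the user's click still does something useful:

using System;
using System.ComponentModel;
using System.Diagnostics;

static class UrlLauncher
{
    public static void OpenUrl(string url)
    {
        try
        {
            // Normal shell launch - uses the default browser association.
            Process.Start(url);
        }
        catch (Win32Exception)
        {
            // Fallback when no shell association is found (e.g. the elevated
            // scenario described above): let explorer.exe resolve the browser.
            Process.Start("explorer.exe", url);
        }
    }
}

This obviously works around the symptom rather than the broken registration itself, but it keeps links usable while the underlying browser install gets sorted out.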
Hopefully, most of you are skirting this issue altogether - having installed more recent versions of your favorite browsers. If not, hopefully this post will take you straight to reinstallation to fix this annoying issue.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Windows  .NET

    Read the article

  • C#/.NET Little Wonders: The Timeout static class

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. When I started the “Little Wonders” series, I really wanted to pay homage to parts of the .NET Framework that are often small but can help in big ways.  The item I have to discuss today really is a very small item in the .NET BCL, but once again I feel it can help make the intention of code much clearer and thus is worthy of note. The Problem - Magic numbers aren’t very readable or maintainable In my first Little Wonders Post (Five Little Wonders That Make Code Better) I mention the TimeSpan factory methods which, I feel, really help the readability of constructed TimeSpan instances. Just to quickly recap that discussion, ask yourself what the TimeSpan specified in each case below is 1: // Five minutes? Five Seconds? 2: var fiveWhat1 = new TimeSpan(0, 0, 5); 3: var fiveWhat2 = new TimeSpan(0, 0, 5, 0); 4: var fiveWhat3 = new TimeSpan(0, 0, 5, 0, 0); You’d think they’d all be the same unit of time, right?  After all, most overloads tend to tack additional arguments on the end.  But this is not the case with TimeSpan, where the constructor forms are:     TimeSpan(int hours, int minutes, int seconds);     TimeSpan(int days, int hours, int minutes, int seconds);     TimeSpan(int days, int hours, int minutes, int seconds, int milliseconds); Notice how in the 4 and 5 parameter version we suddenly have the parameter days slipping in front of hours?  This can make reading constructors like those above much harder.  Fortunately, there are TimeSpan factory methods to help make your intention crystal clear: 1: // Ah! Much clearer! 2: var fiveSeconds = TimeSpan.FromSeconds(5); These are great because they remove all ambiguity from the reader!  So in short, magic numbers in constructors and methods can be ambiguous, and anything we can do to clean up the intention of the developer will make the code much easier to read and maintain. Timeout – Readable identifiers for infinite timeout values In a similar way to TimeSpan, let’s consider specifying timeouts for some of .NET’s (or our own) many methods that allow you to specify timeout periods. For example, in the TPL Task class, there is a family of Wait() methods that can take TimeSpan or int for timeouts.  Typically, if you want to specify an infinite timeout, you’d just call the version that doesn’t take a timeout parameter at all: 1: myTask.Wait(); // infinite wait But there are versions that take the int or TimeSpan for timeout as well: 1: // Wait for 100 ms 2: myTask.Wait(100); 3:  4: // Wait for 5 seconds 5: myTask.Wait(TimeSpan.FromSeconds(5); Now, if we want to specify an infinite timeout to wait on the Task, we could pass –1 (or a TimeSpan set to –1 ms), which what the .NET BCL methods with timeouts use to represent an infinite timeout: 1: // Also infinite timeouts, but harder to read/maintain 2: myTask.Wait(-1); 3: myTask.Wait(TimeSpan.FromMilliseconds(-1)); However, these are not as readable or maintainable.  If you were writing this code, you might make the mistake of thinking 0 or int.MaxValue was an infinite timeout, and you’d be incorrect.  Also, reading the code above it isn’t as clear that –1 is infinite unless you happen to know that is the specified behavior. 
To make the code like this easier to read and maintain, there is a static class called Timeout in the System.Threading namespace which contains definition for infinite timeouts specified as both int and TimeSpan forms: Timeout.Infinite An integer constant with a value of –1 Timeout.InfiniteTimeSpan A static readonly TimeSpan which represents –1 ms (only available in .NET 4.5+) This makes our calls to Task.Wait() (or any other calls with timeouts) much more clear: 1: // intention to wait indefinitely is quite clear now 2: myTask.Wait(Timeout.Infinite); 3: myTask.Wait(Timeout.InfiniteTimeSpan); But wait, you may say, why would we care at all?  Why not use the version of Wait() that takes no arguments?  Good question!  When you’re directly calling the method with an infinite timeout that’s what you’d most likely do, but what if you are just passing along a timeout specified by a caller from higher up?  Or perhaps storing a timeout value from a configuration file, and want to default it to infinite? For example, perhaps you are designing a communications module and want to be able to shutdown gracefully, but if you can’t gracefully finish in a specified amount of time you want to force the connection closed.  You could create a Shutdown() method in your class, and take a TimeSpan or an int for the amount of time to wait for a clean shutdown – perhaps waiting for client to acknowledge – before terminating the connection.  So, assume we had a pub/sub system with a class to broadcast messages: 1: // Some class to broadcast messages to connected clients 2: public class Broadcaster 3: { 4: // ... 5:  6: // Shutdown connection to clients, wait for ack back from clients 7: // until all acks received or timeout, whichever happens first 8: public void Shutdown(int timeout) 9: { 10: // Kick off a task here to send shutdown request to clients and wait 11: // for the task to finish below for the specified time... 12:  13: if (!shutdownTask.Wait(timeout)) 14: { 15: // If Wait() returns false, we timed out and task 16: // did not join in time. 17: } 18: } 19: } We could even add an overload to allow us to use TimeSpan instead of int, to give our callers the flexibility to specify timeouts either way: 1: // overload to allow them to specify Timeout in TimeSpan, would 2: // just call the int version passing in the TotalMilliseconds... 3: public void Shutdown(TimeSpan timeout) 4: { 5: Shutdown(timeout.TotalMilliseconds); 6: } Notice in case of this class, we don’t assume the caller wants infinite timeouts, we choose to rely on them to tell us how long to wait.  So now, if they choose an infinite timeout, they could use the –1, which is more cryptic, or use Timeout class to make the intention clear: 1: // shutdown the broadcaster, waiting until all clients ack back 2: // without timing out. 3: myBroadcaster.Shutdown(Timeout.Infinite); We could even add a default argument using the int parameter version so that specifying no arguments to Shutdown() assumes an infinite timeout: 1: // Modified original Shutdown() method to add a default of 2: // Timeout.Infinite, works because Timeout.Infinite is a compile 3: // time constant. 4: public void Shutdown(int timeout = Timeout.Infinite) 5: { 6: // same code as before 7: } Note that you can’t default the ShutDown(TimeSpan) overload with Timeout.InfiniteTimeSpan since it is not a compile-time constant.  The only acceptable default for a TimeSpan parameter would be default(TimeSpan) which is zero milliseconds, which specified no wait, not infinite wait. 
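As a small aside of my own (a sketch, not something from the original article): if you still want an "infinite by default" TimeSpan overload, one option is a nullable parameter, since null is a valid compile-time default. Note that you would then not also give the int overload a default value, otherwise a bare Shutdown() call becomes ambiguous; the cast is needed because TotalMilliseconds is a double:

// Sketch only - assumes the Broadcaster.Shutdown(int) overload shown earlier,
// without a default value on its int parameter.
public void Shutdown(TimeSpan? timeout = null)
{
    int milliseconds = timeout.HasValue
        ? (int)timeout.Value.TotalMilliseconds   // whole milliseconds to wait
        : Timeout.Infinite;                      // null means wait indefinitely

    Shutdown(milliseconds);
}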
Summary While Timeout.Infinite and Timeout.InfiniteTimeSpan are not earth-shattering classes in terms of functionality, they do give you very handy and readable constant values that you can use in your programs to help increase readability and maintainability when specifying infinite timeouts for various timeouts in the BCL and your own applications. Technorati Tags: C#,CSharp,.NET,Little Wonders,Timeout,Task

    Read the article

  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though. This description is incorrect. On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the 1st paragraph. I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process, and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect didn't relate to thread creation time, but rather that placement was a function of process membership. At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10. 
So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior. There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. Lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have it spread your process's threads out more. To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication. The truth is in the code. Remarks: It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact and which have precedence under what circumstances could the topic of a future blog entry. The scheduler is work-conserving. The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest order bits identifying the socket, the next lower bits identifying the core within that socket, following by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows. [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly. Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however. 
That is, it's an intra-socket policy, not an inter-socket policy. Solaris 11 introduces the Power Aware Dispatcher (PAD) which packs threads instead of spreading them out in an attempt to be able to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching. If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy might result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)
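To make the x4800 CPUID layout described above a little more concrete, here is a tiny illustrative decoder - my own sketch in C#, nothing Solaris-specific - assuming the bit layout exactly as stated (bit [6] = hyperthread, bits [5:3] = socket/package, bits [2:0] = core):

using System;

static class CpuIdLayout
{
    static void Main()
    {
        int cpuId = 45;                          // example logical CPUID

        int hyperThread = (cpuId >> 6) & 0x1;    // bit  [6]
        int socket      = (cpuId >> 3) & 0x7;    // bits [5:3]
        int core        =  cpuId       & 0x7;    // bits [2:0]

        Console.WriteLine("socket={0} core={1} hyperthread={2}",
                          socket, core, hyperThread);
    }
}

As the post notes, such low-level decoding only matters if you are binding threads yourself or writing NUMA-aware code that inspects the ambient placement.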

    Read the article

  • CodePlex Daily Summary for Tuesday, March 23, 2010

    CodePlex Daily Summary for Tuesday, March 23, 2010New Projects.NET StarCraft II Replay Parser: A .NET 3.5 Library used to parse StarCraft II replays. Developed in C# 3.5.BackToBasics "B2B" Chat: With technology and software getting more and more complicated, why not get back to basics with BackToBasicsChat. B2B allows you to chat over a ser...Dark Neuron Game Engine: Dark Neuron allows you to easily create fun and interesting games with no need of developing basic game components. This engine is developed for C#...DeepZoom Pivot Constructor: Library to make building DeepZoom images and Pivot displays simpler.ePaper reader: The project is aimed at creating a tool which helps in reading electronic editions of news papers(pdf/flash)FSharpPageProvider for EPiServer CMS 6: This project starts as the port of EPiServer XmlPageProvider to F# programming language. Hammock for REST: Hammock is a REST library for .NET that greatly simplifies consuming and wrapping RESTful services.Kirill Osenkov: Various small projects, tools, utilities and samples by Kirill OsenkovliveDB: liveDB - web client for sql serverLucilla Framework: lucilla frameworkMVC Foolproof Validation: MVC Foolproof Validation aims to extend the Data Annotation validation provided in ASP.NET MVC. Initial efforts are focused on adding contingent va...MVC2Forums: MVC2Forums is simply a forum system based upon MVC2.Mvvm Foundation Silverlight: Mvvm Foundation Silverlight is a library of classes that are helpful when building Silverlight applications based in the MVVM pattern. This librar...MyPersonalWebsite: This is my personal web site developed using ASP.NET MVC 2Planner: Planner makes it easier for all peoples to plan your tasks. It's developed in Delphi.Prose: Prose is an playground for an experimental JavaScript like language compiler. Eventually it will implement 0-CFA, CFA2, and a Tracing JITQuestTracker: QuestTracker is a todo list presented in the format of a quest tracking list such as the one in World of Warcraft.SevenZipLib: SevenZipLib is a C# interface to the 7-zip library.SimpleGeo .Net: .Net Client library for the SimpleGeo.com serviceNew ReleasesAutenticar no OpenLDAP utilizando pGIna: DLL LDAPAuth Plus: New Group: No LoginBMap.NET: BMap.NET 2: This is the 2nd version of BMap.NET. It has included these tags: Bing Maps, and "About BMap.NET".Cronos: Version 2.04: This is primarily a bug-fix release. Several numerical issues have been resolved, and a resource leak (of MS Windows graphics objects) has been fi...EV Dashboard: v1.0: This release includes support for an App.config file and Auto Connect, which will connect to the specified BMS at startup. 
Note: You still have to ...GKO Libraries: GKO Libraries 0.1 Beta: 0.1 Beta Added More utilities and functions RefactoringsGLB Virtual Player Builder: 0.4.2 Beta: Beta build that includes a new player creator.HKGolden Express: HKGoldenExpress (Build 201003222215): New features: (None) Bug fix: Fixed bugs of unable to parse XML file stream returned from HKGolden API, as the encoding of XML file stream chang...jQuery Web Controls ASP.Net: jQueryWebControls 1.1.1.2: En esta versión se han corregido problemas existentes en la ejecución de los scripts de jquery cuando se utilizaban MasterPage y/o Ajax Control Too...LightKit: Version 0.2.2: Fixed: fixed bug when CollectionItemsEditor ditermines IsChanged property incorrectly fixed ObjectEditor`a thisstring propertyName method wrong l...LINQ to Twitter: LINQ to Twitter Beta v2.0.8: New items added since v1.1 include: Support for OAuth (via DotNetOpenAuth), secure communication via https, VB language support, serialization of ...MapWindow6: MapWindow 6.0 msi (March 22): This version fixes the icons for the desktop installer and changes the install directory to Program Files\MapWindow instead of Program Files\ISU.Math.NET Numerics: 2010.3.22.1334 Build: Latest alpha buildMiniCalendar Web Part: MiniCalendar Web Part 1.8: A small web part to display links to events stored in a list (or document library) in a mini calendar (in month view mode). It shows tooltips for t...OCInject: Release Two: This release brings some missing features such as Singleton support, Func<T> factories and child containers. It, also, has an updated constructor ...Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (March 2010): Installer of the latest binaries of Phalanger 2.0 (March 2010) and its integration into Visual Studio 2008. Easy installer with automatic IIS int...Planner: Planner: firstQuestTracker: QuestTracker 0.1: This is the preliminary release of QuestTracker. There's not much documentation or many features yet, but it is functional. Any feedback would be a...QuestTracker: QuestTracker 0.1.1: Bugfix for QuestTracker 0.1QuestTracker: QuestTracker 0.1.2: Fixes an issue with saving the quest list.Rawr: Rawr 2.3.13: We're pleased to announce that, after long last, Rawr3 has entered public beta. You're still welcome to continue using Rawr2 (that's what you're re...Single Web Session: Alpha Model Plugin: !How to use Single Web Session add following line into your web config <httpModules> <add name="SingleSession" type="SingleWebSession.Model.W...SMIL - SharePoint Map Integration Layer: SMIL 1.0: Custom data field Extracts Lat/Lon from EXIF from images being uploaded. Map Web Part Filter with SharePoint views Filter by connecting to...sTASKedit: sTASKedit 42532 (Developer Alpha): This release is only to verify the currently decoded task structure... 
Supported files: tasks.data (v1.3.6 client)VCC: Latest build, v2.1.30322.0: Automatic drop of latest buildVisual Studio DSite: Advanced Notepad (Visual C++ 2008): An notepad written in c that can save in a rich text file format.Wallpaper Rotator: Wallpaper Rotator 0.5: Wallpaper Rotator 0.5 This version includes the following improvements: Saving the choice of "Random Order (Shuffle Mode)" Updating the configu...Most Popular ProjectsMetaSharpRawrWBFS ManagerSilverlight ToolkitASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)ASP.NETMost Active ProjectsRawrjQuery Library for SharePoint Web ServicesBlogEngine.NETLINQ to TwitterPHPExcelFarseer Physics EngineFacebook Developer ToolkitNB_Store - Free DotNetNuke Ecommerce Catalog Modulepatterns & practices: Composite WPF and SilverlightN2 CMS

    Read the article

  • CodePlex Daily Summary for Thursday, June 17, 2010

    CodePlex Daily Summary for Thursday, June 17, 2010New ProjectsAstalanumerator: A JavaScript based recursive DOM/JS object inspector. Uses a simple tree menu to enumerate all properties of a object.BDD Log Converter: A simple .NET class and console application that will convert BDD logs (MDT) into XML format.CastleInvestProj: Castle Investigating project Easy Callback: This library facilitates the use of multiple asynchronous calls on the same page, and asynchronous calls from a user control also have a clean cod...Easy Wings: Small webApp to manage aircraft booking in flying club. French only for the moment.EPiServer Template Foundation: EPiServer Template Foundation builds on top of Page Type Builder to provide a framework for common site features such as basic page type properties...guidebook: a project to plan your road trip.Look into documents for e-discovery: Search, browse, tag, annotate documents such as MS Word, PDF, e-mail, etc. Good for legal professionals do e-discovery. One Bus Away for Windows Phone: A Windows Phone 7 application written in Silverlight for the OneBusAway (www.onebusaway.org) website. Allows mobile users to search for public tra...OneBusAway for Windows Phone 7: OneBusAway is a service with transit information for the Seattle, WA region. We are creating a mobile application for Windows Phone 7 utilizing th...PoFabLab - Poetry Generation Library and Editor in .NET: PoFabLab is an open source library and word processor designed for digital poets. The library can scan lines, perform Markov analysis, filter text...Project Axure: More details coming soon.Чат кутежа 2.0: ИРЦ чат специально для форума ЕНЕ简易代码生成器: 初次使用CodePlex,这只是一个测试项目。打算用WPF做一个简单的代码生成器,兼具SQL Server Client功能。使用.Net 4.0, C#开发。运营工作系统: TRAS(Team resource assist system) is a toolkit that help the studio to manage and distribute the daily work, like publish the news, GM broadcast a...New ReleasesAmuse - A New MU* Client For Windows: 2010 June: Important Notice to TestersPlease uninstall any previous versions of Amuse prior to this one before installing. Changes and InformationFirst relea...ASP.NET Generic Data Source Control: V1.0: GenericDataSource - Version 1.0Binary This is the first official binary release of the GenericDataSource for ASP.NET - stable and ready for product...Astalanumerator: Astalanumerator 0.7: I wanted to map all properties in javascript and inspect them regardless if they were objects or not. IE doesn’t support for(i in..) for native pro...BDD Log Converter: BDD Log Converter 0.1.0: First release (0.1.0).DVD Swarm: 0.8.10.616: Major update with improvements to encoding speed.Easy Callback: Easy Callback 1.0.0.0: Easy Callback library 1.0.0.0Facebook Connect Authentication for ASP.NET: Facebook Connect Authentication for ASP.NET - v1.0: Now supporting Facebook's new Open Graph API JavaScript SDK, this release of FBConnectAuth also adds support for running in partially trusted envir...FlickrNet API Library: 3.0 Beta 3: Another small Beta. Changed parsing code so exceptions aren't raised when new attributes are added by Flickr. 
This affects searches where you are ...Infragistics Analytics Framework: Infragistics Analytics Framework 10.2: An updated version of Infragistics Analytics Framework, which utilizes the newest version (v.1.4.4) of MSAF as well as the newest release (v.10.2) ...NUnit Add-in for Growl Notifications: NUnit Add-in for Growl Notifications 1.0 build 1: Version 1.0 build 1:[change] Test run failure notification now disappears automaticallyOpen Source PLM Activities: 3dxml player integration for Aras Innovator: This is just a simple html file you need to add to your Aras Innovator install directory. It loads the 3Dxml player for your 3dxml files. Tested o...patterns & practices - Windows Azure Guidance: WAAG - Part 2 - Drop 1: First code and docs drop for Part 2 of the Windows Azure Architecture Guide Part 1 of the Guide is released here. Highlights of this release are:...Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (June 2010): Installer of the latest binaries of Phalanger 2.0 (June 2010) and its integration into Visual Studio 2008 SP1. * Improved compatibility with P...RIA Services Essentials: Book Club Application (June 16, 2010): Added some XAML to hide/show link to BookShelf page based on whether the user is logged in or not. Updated IsBookOwner authorization rule implement...secs4net: Relase 1.01: version 1.01 releasesELedit: sELedit v1.1c: Added: Tool for exporting NPC/Mob database file that is used by sNPCeditSharePoint Ad Rotator: SPAdRotator 2.0 Beta 2: Added: Open tool pane link to default Web Part text Made all images except the first hidden by default, so the Web Part will degrade gracefully w...sMAPtool: sMAPtool v0.7f (without Maps): Added: 3rd party magnifier softwaresNPCedit: sNPCedit v0.9c: Added: npc/mob names and corresponding datbaseSolidWorks Addin Development: GenericAddinFrameworkR1-06.17.2010: .sTASKedit: sTASKedit v0.8: Important BugFix: there was an mistake in the structure, team-member block and get-items block was swapped internally. Tasks that contains both blo...stefvanhooijdonk.com: UnitTesting-SP2010-TFS2010: Files for my post on TFS2010 and NUnit testing with SP2010 projects. see the post here: http://wp.me/pMnlQ-88 The XSLT here is from http://nunit4t...Telerik CAB Enabling Kit for RadControls for WinForms: TCEK 2010.1.10.504: What's new in v2010.1.0610 (Beta): RadDocking component has been replaced with the latest RadDock control Requirements: Visual Studio 2005+ Tele...TFS Buddy: TFS Buddy 1.2: Fixes a problem with notificationsThales Simulator Library: Version 0.9: The Thales Simulator Library is an implementation of a software emulation of the Thales (formerly Zaxus & Racal) Hardware Security Module cryptogra...Triton Application Framework: Tools - Code Generator - Build 1.0: This is the first release of the Generator. This is buggy but works.VCC: Latest build, v2.1.30616.0: Automatic drop of latest buildXsltDb - DotNetNuke Module Builder: 01.01.27: Code completion for XsltDb, HTML and XSL stuff!! 
Full screen editing Some bugs are still in EditArea component and object lists in code completi...Чат кутежа 2.0: 0.9a build 2 версия: вторая сборка первой альфа-версии ирц-клиента.Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryPHPExcelMicrosoft SQL Server Community & SamplesASP.NETMost Active ProjectsdotSpatialpatterns & practices: Enterprise Library Contribpatterns & practices – Enterprise LibraryBlogEngine.NETLightweight Fluent WorkflowRhyduino - Arduino and Managed CodeSunlit World SchemeNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSolidWorks Addin DevelopmentN2 CMS

    Read the article

  • Learning content for MCSDs: Web Applications and Windows Store Apps using HTML5

    Recently, I started again to learn for various Microsoft certifications. First candidate on my way to MSCD: Web Applications is the Exam 70-480: Programming in HTML5 with JavaScript and CSS3. Motivation to go for a Microsoft exam I guess, this is quite personal but let me briefly describe my intentions to go that exam. First, I'm doing web development since the 1990's. Working with HTML, CSS and Javascript is happening almost daily in my workspace. And honestly, I do not only do 'pure' web development but already integrated several HTML/CSS/Javascript frontend UIs into an existing desktop application (written in Visual FoxPro) inclusive two-way communication and data exchange. Hm, might be an interesting topic for another blog article here... Second, this exam has a very interesting aspect which is listed at the bottom of the exam's details: Credit Toward Certification When you pass Exam 70-480: Programming in HTML5 with JavaScript and CSS3, you complete the requirements for the following certification(s): Programming in HTML5 with JavaScript and CSS3 Specialist Exam 70-480: Programming in HTML5 with JavaScript and CSS3: counts as credit toward the following certification(s): MCSD: Web Applications MCSD: Windows Store Apps using HTML5 So, passing one single exam will earn you specialist certification straight-forward, and opens the path to higher levels of certifications. Preparations and learning path Well, due to a newsletter from Microsoft Learning (MSL) I caught interest in picking up the circumstances and learning materials for this particular exam. As of writing this article there is a promotional / voucher code available which enables you to register for this exam for free! Simply register yourself with or log into your existing account at Prometric, choose the exam for a testing facility near to you and enter the voucher code HTMLJMP (available through 31.03.2013 or while supplies last). Hurry up, there are restrictions... As stated above, I'm already very familiar with web development and the programming flavours involved into this. But of course, it is always good to freshen up your knowledge and reflect on yourself. Microsoft is putting a lot of effort to attract any kind of developers into the 'App Development'. Whether it is for the Windows 8 Store or the Windows Phone 8 Store, doesn't really matter. They simply need more apps. This demand for skilled developers also comes with a nice side-effect: Lots and lots of material to study. During the first couple of hours, I could easily gather high quality preparation material - again for free! Following is just a small list of starting points. If you have more resources, please drop me a message in the comment section, and I'll be glad to update this article accordingly. Developing HTML5 Apps Jump Start This is an accelerated jump start video course on development of HTML5 Apps for Windows 8. There are six modules that are split into two video sessions per module. Very informative and intense course material. This is packed stuff taken from an official preparation course for exam 70-480. Developing Windows Store Apps with HTML5 Jump Start Again, an accelerated preparation video course on Windows 8 Apps. There are six modules with two video sessions each which will catapult you to your exam. This is also related to preps for exam 70-481. Programming Windows 8 Apps with HTML, CSS, and JavaScript Kraig Brockschmidt delves into the ups and downs of Windows 8 App development over 800+ pages. 
Great eBook to read, study, and to practice the samples - best of all, it's for free. codeSHOW() This is a Windows 8 HTML/JS project with the express goal of demonstrating simple development concepts for the Windows 8 platform. Code, code and more code... absolutely great stuff to study and practice. Microsoft Virtual Academy I already wrote about the MVA in a previous article. Well, if you haven't registered yourself yet, now is the time. The list is not complete for sure, but this might keep you busy for at least one or even two weeks to go through the material. Please don't hesitate to add more resources in the comment section. Right now, I'm already through all videos once, and digging my way through chapter 4 of Kraig's book. Additional material - Pluralsight Apart from those free online resources, I also following some courses from the excellent library of Pluralsight. They already have their own section for Windows 8 development, but of course, you get companion material about HTML5, CSS and Javascript in other sections, too. Introduction to Building Windows 8 Applications Building Windows 8 Applications with JavaScript and HTML Selling Windows 8 Apps HTML5 Fundamentals Using HTML5 and CSS3 HTML5 Advanced Topics CSS3 etc... Interesting to see that Michael Palermo provides his course material on multiple platforms. Fantastic! You might also pay a visit to his personal blog. Hm, it just came to my mind that Aaron Skonnard of Pluralsight publishes so-called '24 hours Learning Paths' based on courses available in the course library. Would be interested to see a combination for Windows 8 App development using HTML5, CSS3 and Javascript in the future. Recommended workspace environment Well, you might have guessed it but this requires Windows 8, Visual Studio 2012 Express or another flavour, and a valid Developers License. Due to an MSDN subscription I working on VS 2012 Premium with some additional tools by Telerik. Honestly, the fastest way to get you up and running for Windows 8 App development is the source code archive of codeSHOW(). It does not only give you all source code in general but contains a couple of SDKs like Bing Maps, Microsoft Advertising, Live ID, and Telerik Windows 8 controls... for free! Hint: Get the Windows Phone 8 SDK as well. Don't worry, while you are studying the material for Windows 8 you will be able to leverage from this knowledge to development for the phone platform, too. It takes roughly one to two hours to get your workspace and learning environment, at least this was my time frame due to slow internet connection and an aged spare machine. ;-) Oh, before I forget to mention it, as soon as you're done, go quickly to the Windows Store and search for ClassBrowserPlus. You might not need it ad hoc for your development using HTML5, CSS and Javascript but I think that it is a great developer's utility that enables you to view the properties, methods and events (along with help text) for all Windows 8 classes. It's always good to look behind the scenes and to explore how it is made. Idea: Start/join a learning group The way you learn new things or intensify your knowledge in a certain technology is completely up to your personal preference. Back in my days at the university, we used to meet once or twice a week in a small quiet room to exchange our progress, questions and problems we ran into. In general, I recommend to any software craftsman to lift your butt and get out to exchange with other developers. 
    Personally, I like this approach, as it gives you new points of view and an insight into others' experience with certain techniques and how they managed to solve tricky issues. Just keep it relaxed and not too formal, and you might have a good time away from your dull office desk. Give your machine a break, too.

    Read the article

  • More Fun With Math

    - by PointsToShare
    More Fun with Math   The runaway student – three different ways of solving one problem Here is a problem I read in a Russian site: A student is running away. He is moving at 1 mph. Pursuing him are a lion, a tiger and his math teacher. The lion is 40 miles behind and moving at 6 mph. The tiger is 28 miles behind and moving at 4 mph. His math teacher is 30 miles behind and moving at 5 mph. Who will catch him first? Analysis Obviously we have a set of three problems. They are all basically the same, but the details are different. The problems are of the same class. Here is a little excursion into computer science. One of the things we strive to do is to create solutions for classes of problems rather than individual problems. In your daily routine, you call it re-usability. Not all classes of problems have such solutions. If a class has a general (re-usable) solution, it is called computable. Otherwise it is unsolvable. Within unsolvable classes, we may still solve individual (some but not all) problems, albeit with different approaches to each. Luckily the vast majority of our daily problems are computable, and the 3 problems of our runaway student belong to a computable class. So, let's solve for the catch-up time by the math teacher; after all, she is the most frightening. She might even make the poor runaway solve this very problem – perish the thought! Method 1 – numerical analysis. At 30 miles and 5 mph, it'll take her 6 hours to come to where the student was to begin with. But by then the student has advanced by 6 miles. 6 miles require 6/5 hours, but by then the student advanced by another 6/5 of a mile as well. And so on and so forth. So what are we to do? One way is to write code and iterate it until we have solved it. But this is an infinite process so we'll end up with an infinite loop. So what to do? We'll use the principles of numerical analysis. Any calculator – your computer included – has a limited number of digits. A double floating point number is good for about 14 digits. Nothing can be computed at a greater accuracy than that. This means that we will not iterate ad infinitum, but rather to the point where 2 consecutive iterations yield the same result. When we do financial computations, we don't even have to go that far. We stop at the 10th of a penny.  It behooves us here to stop at a 10th of a second (100 milliseconds), and this is how we will avoid an infinite loop. Interestingly this alludes to the Zeno paradoxes of motion – in particular “Achilles and the Tortoise”. Zeno says exactly the same. To catch the tortoise, Achilles must always first come to where the tortoise was, but the tortoise keeps moving – hence Achilles will never catch the tortoise and our math teacher (or lion, or tiger) will never catch the student, or the policeman the thief. Here is my resolution to the paradox. The distance and time in each step are smaller and smaller, so the student will be caught. The only thing that is infinite is the iterative solution. The race is a convergent geometric process so the steps are diminishing, but each step in the solution takes the same amount of effort and time, so with an infinite number of steps we'll spend an eternity solving it.  This BTW is an original thought that I have never seen before. But I digress. Let's simply write the code to solve the problem. To make sure that it runs everywhere, I'll do it in JavaScript. 
    function LongCatchUpTime(D, PV, FV)
    // D is Distance; PV is the Pursuer's Velocity; FV is the Fugitive's Velocity
    {
        var t = 0;
        var T = 0;
        var d = parseFloat(D);
        var pv = parseFloat(PV);
        var fv = parseFloat(FV);
        t = d / pv;
        while (t > 0.000001) // a 10th of a second is 1/36,000 of an hour; I used 1/1,000,000
        {
            T = T + t;
            d = t * fv;
            t = d / pv;
        }
        return T;
    }
    By and large, the higher the pursuer's velocity relative to the fugitive's, the faster the calculation. Solving this with the 10th of a second limit yields: 7.499999232000001 Method 2 – Geometric Series. Each step in the iteration above is smaller than the one before it. As you saw, we stopped iterating when the last step was small enough, small enough not to really matter.  When we have a sequence of numbers in which the ratio of each number to its predecessor is fixed, we call the sequence geometric. When we are looking at the sums of a sequence, we call the sequence of sums a series.  Now let's look at our student and teacher. The teacher runs 5 times faster than the student, so with each iteration the distance between them shrinks to a fifth of what it was before. This is a fixed ratio so we deal with a geometric series.  We normally designate this ratio as q, and when 0 < q < 1 the sum 1 + q + q^2 + … + q^n is (q^(n+1) – 1) / (q – 1), which for q less than 1 is easier to write as (1 – q^(n+1)) / (1 – q). Now, the steps are 6 hours, then 6/5 hours, then 6/(5*5) hours and so on, so q = 1/5, and the whole series is multiplied by 6. Also, because q is less than 1, q^(n+1) diminishes to 0 as n grows. So the sum is just 1 / (1 – q), or 1 / (1 – 1/5) = 1 / (4/5) = 5/4. This times 6 yields 7.5 hours. We can now continue with some algebra and take it back to a simpler formula. This is arduous and I am not going to do it here. Instead let's do some simpler algebra. Method 3 – Simple Algebra. If the time to capture the fugitive is T and the fugitive travels at 1 mph, then by the time the pursuer catches him he has travelled an additional T miles. Time is distance divided by speed, so…. (D + T)/V = T  thus D + T = VT  and D = VT – T = (V – 1)T  and T = D/(V – 1) This “strangely” coincides with the solution we just got from the geometric series. This is simpler and faster. Here is the corresponding code.
    function ShortCatchUpTime(D, PV, FV)
    {
        var d = parseFloat(D);
        var pv = parseFloat(PV);
        var fv = parseFloat(FV);
        return d / (pv - fv);
    }
    The code above, for both the iterative solution and the algebraic solution, is actually for a larger class of problems.  In our original problem the student's velocity (speed) is 1 mph. In the code it may be anything, as long as it is less than the pursuer's velocity. As long as PV > FV, the pursuer will catch up. Here is the really general formula: T = D / (PV – FV) Finally, let's run the program for each of the pursuers.  It could not be worse. I know he'd rather be eaten alive than suffer through yet another math lesson. See the code run? Select  “Catch Up Time” in www.mgsltns.com/games.htm The host is running on Unix, so the link is case sensitive. That’s All Folks
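    To see the three pursuers compared, here is a small driver that is not part of the original article; it simply calls the two functions above with the numbers from the problem statement (the fugitive always moves at 1 mph):
    // Hypothetical driver: distances and velocities come from the problem statement.
    function RunAllPursuers()
    {
        var pursuers = [
            { name: "Lion",    distance: 40, velocity: 6 },
            { name: "Tiger",   distance: 28, velocity: 4 },
            { name: "Teacher", distance: 30, velocity: 5 }
        ];
        pursuers.forEach(function (p) {
            var iterative = LongCatchUpTime(p.distance, p.velocity, 1);  // numerical iteration
            var algebraic = ShortCatchUpTime(p.distance, p.velocity, 1); // D / (PV - FV)
            console.log(p.name + ": iterative ≈ " + iterative + " h, algebraic = " + algebraic + " h");
        });
    }
    RunAllPursuers();
    // Using the article's formula T = D / (PV - FV): lion 8 h, tiger ≈ 9.33 h, teacher 7.5 h,
    // so the math teacher catches the student first.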

    Read the article

  • Modern/Metro Internet Explorer: What were they thinking???

    - by Rick Strahl
    As I installed Windows 8.1 last week I decided that I really should take a closer look at Internet Explorer in the Modern/Metro environment again. Right away I ran into two issues that are real head scratchers to me. Modern Split Windows don't resize the Viewport but Zoom Out. This one falls in the "WTF, really?" department: It looks like Modern Internet Explorer doesn't resize the browser window as every other browser (including IE 11 on the desktop) does, but rather tries to adjust the zoom to the width of the browser. This means that if you use the Modern IE browser and you split the display between IE and another application, IE will be zoomed out, with text becoming much, much smaller, rather than resizing the browser viewport and adjusting the pixel width as you would when a browser window is typically resized. Here's what I'm talking about in a couple of pictures. First here's the full screen Internet Explorer version (this shot is resized down since it's full screen at 1080p, click to see the full image): This brings up the first issue which is: On the desktop who wants to browse a site full screen? Most sites aren't fully optimized for a 1080p widescreen experience and frankly most content that wide just looks weird. Even in typical 10" resolutions of 1280 width it's weird to look at things this way. At least this issue can be worked around with @media queries and either constraining the view, or adding additional content to make use of the extra space. Still, running a desktop browser full screen is not optimal on a desktop machine - ever. Regardless, this view, while oversized, is what I expect: Everything is rendered in the right ratios, with font-size and the responsive design styling properly respected. But now look what happens when you split the desktop windows and show half desktop and half Modern IE (this screen shot is not resized but cropped - this is actual size content as you can see in the cropped Twitter window on the right half of the screen): What's happening here is that IE is zooming out of the content to make it fit into the smaller width, shrinking the content rather than resizing the viewport's pixel width. In effect it looks like the pixel width stays at 1080px and the viewport expands out height-wise in response, resulting in some crazy long portrait view. There goes responsive design - out the window, literally. If you've built your site using @media queries and fixed viewport sizes, Internet Explorer completely screws you in this split view. On my 1080p monitor, the site shown at a little under half width becomes completely unreadable as the fonts are too small and break up. As you go into split view and resize the window handle, the content of the browser gets smaller and smaller (and effectively longer and longer on the bottom), throwing off any responsive layout to the point of unreadability even on a big display, let alone a small tablet screen. What could POSSIBLY be the benefit of this screwed up behavior? I checked around a bit trying different pages in this shrunk down view. Other than the Microsoft home page, every page I went to was nearly unreadable at a quarter width. The only page I found that worked 'normally' was the Microsoft home page, which undoubtedly is optimized just for Internet Explorer specifically. Bottom Address Bar opaquely overlays Content. Another problematic feature for me is the browser address bar on the bottom. 
    Modern IE shows the status bar opaquely on the bottom, overlaying the content area of the Web Page - until you click on the page. Until you do though, the address bar overlays the bottom content solidly. And not just a little bit but by a good sizable chunk. In the application from the screen shot above I have an application toolbar on the bottom and the IE Address bar completely hides that bottom toolbar when the page is first loaded, until the user clicks into the content, at which point the address bar shrinks down to a fat border style bar with a … on it. Toolbars on the bottom are pretty common these days, especially for mobile optimized applications, so I'd say this is a common use case. But even if you don't have toolbars on the bottom maybe there's other fixed content on the bottom of the page that is vital to display. While other browsers often also show address bars and then later hide them, these other browsers tend to resize the viewport when the address bar status changes, so the content can respond to the size change. Not so with Modern IE. The address bar overlays content and stays visible until content is clicked. No resize notification or viewport height change is sent to the browser. So basically Internet Explorer is telling me: "Our toolbar is more important than your content!" - AND gives me no chance to react to that behavior. The result on this page/application is that the user sees no actionable operations until he or she clicks into the content area, which is terrible from a UI perspective as the user has no idea what options are available on initial load. It's doubly confounding in that IE is running in full screen mode and has the entire height of the screen at its disposal - there's plenty of real estate available to not require this sort of hiding of content in the first place. Heck, even Windows Phone with its more constrained size doesn't hide content - in fact the address bar on Windows Phone 8 is always visible. What were they thinking? Every time I use anything in the Modern Metro interface in Windows 8/8.1 I get angry.  I can pretty much ignore Metro/Modern for my everyday usage, but unfortunately Internet Explorer in the modern shell I have to live with, because there will be users using it to access my sites. I think it's inexcusable by Microsoft to build such a crappy shell around the browser that impacts the actual usability of Web content. In both of the cases above I can only scratch my head at what could have possibly motivated anybody designing the UI for the browser to make these screwed up choices that manipulate the content in a totally unmaintainable way. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Windows, HTML5
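    For readers wondering why zooming instead of resizing breaks pages so badly: responsive layouts key off the CSS viewport width, typically through @media rules or a matchMedia check like the sketch below. This snippet is purely illustrative and not from the original post; the 600px breakpoint and the narrow-layout class are made-up names. If the browser zooms the content but never changes the reported viewport width, a check like this simply never fires.
    // Illustrative sketch (not from the original post): a typical breakpoint check
    // that responsive pages rely on. It only does its job if the browser actually
    // changes the viewport width when the window is resized or split.
    var narrowQuery = window.matchMedia("(max-width: 600px)"); // hypothetical breakpoint

    function applyLayout(mq) {
        // Toggle a class that the site's CSS would use for its narrow layout.
        document.body.classList.toggle("narrow-layout", mq.matches);
    }

    applyLayout(narrowQuery);             // run once on load
    narrowQuery.addListener(applyLayout); // re-run when the viewport width crosses the breakpoint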

    Read the article

  • Developing with Fluid UI – The Fluid Home Page

    - by Dave Bain
    The first place to start with Fluid UI is with the Fluid Home Page. Sometimes it’s referred to as the landing page, but it’s formally called the Fluid Home Page. It’s delivered with PeopleTools 8.54, and the nice thing about it is, it’s a component. That’s one thing you’ll discover with Fluid UI. Fluid UI is built into PeopleTools with Fluid UI. The Home Page is a component, the tiles or grouplets are group boxes, and the search and prompt pages are just pages. It makes it easy to find things, customize and brand the applications (and of course to see what’s going on) when you can open it in AppDesigner. To see what makes a component fluid, let’s start with the Fluid Home Page. It’s a component called PT_LANDINGPAGE. You can open it in AppDesigner and see what’s unique and different about Fluid UI. If you open the Component Properties dialog, you’ll see a new tab called Fluid. On the Component Properties Fluid tab you’ll see the most important checkbox of all, Fluid Mode. That is the one flag that will tell PeopleSoft if the component is Fluid (responsive, dynamic layout) or classic (pixel perfect). Now that you know it’s a single flag, you know that a component can’t be both Fluid UI and Classic at the same time; it’s one or the other. There are some other interesting fields on this page. The Small Form Factor Optimized field tells us whether or not to display this on a small device (think smartphone). Header Toolbar Actions offer standard options that are set at the component level so you have complete control of the component's header bar. You’ll notice that the PT_LANDINGPAGE has got some PostBuild PeopleCode. That’s to build the grouplets that are used to launch Fluid UI Pages (more about those later). Probably not a good idea to mess with that code! The next thing to look at is the Page Definition for the PT_LANDINGPAGE component. When you open the page PT_LANDINGPAGE it will look different than anything you’ve ever seen. You’re probably thinking “What’s up with all the group boxes”? That is where Fluid UI is so different. In classic PeopleSoft, you put a button, field, group, any control on a page and that’s where it shows up, no questions asked. With Fluid UI, everything is positioned relative to something else. That’s why there are so many containers (you know them as group boxes). They are UI objects that are used for dynamic positioning. The Fluid Home Page has some special behavior and special settings. The first is in the Web Profile Configuration settings (Main Menu->PeopleTools->Web Profile->Web Profile Configuration from the main menu). There are two checkboxes that control the behavior of Fluid UI: Disable Fluid Mode and Disable Fluid On Desktop. Disable Fluid Mode prevents any Fluid UI component from being run from this installation. This is a web profile setting for users that want to run later versions of PeopleTools but only want to run Classic PeopleSoft pages. The second setting, Disable Fluid On Desktop, allows the Fluid UI to be run on mobile devices such as smartphones and tablets, but prevents Fluid UI from running on a desktop computer. Fluid UI settings are also made in My Personalizations (Main Menu->My Personalizations from the Main Menu), in the General Options section. In that section, each user has the choice to determine the home page for their desktop and for tablets. 
    Now that you know the Fluid UI landing page is just a component, and the profile and personalization settings, you should be able to launch one. It’s pretty easy to add a menu using Structure and Content, just make sure the proper security is set up. You’ll have to run a Fluid UI supported browser in order to see it. Latest versions of Chrome, Firefox and IE will do. Check the certification page on MOS for all the details. When you open the first Fluid Landing Page, there’s not much there. Not to worry, we’ll get some content on it soon. Take a moment to navigate around and look at some of the header actions that were set up from the component properties. The home button takes you back to the classic system. You won’t see any notifications and the personalization doesn’t have any content to add. The NavBar icon on the top right has a lot of content, including a Navigator and Classic home. Spend some time looking through what’s available. Stay tuned for more. Next up is adding some content.

    Read the article

  • SQL University: What and why of database testing

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover: SQLU part 1 - What and why of database testing SQLU part 2 - What and why of database refactoring SQLU part 2 – Tools of the trade With that out of the way let us sharpen our pencils and get going. Why test a database The sad state of the industry today is that there is very little emphasis on testing in general. Test driven development is still a small niche of the programming world while refactoring is even smaller. The cause of this is the inability of developers to convince themselves and their managers that writing tests is beneficial. At the moment they are mostly viewed as waste of time. This is because the average person (let’s not fool ourselves, we’re all average) is unable to think about lower future costs in relation to little more current work. It’s orders of magnitude easier to know about the current costs in relation to current amount of work. That’s why programmers convince themselves testing is a waste of time. However we have to ask ourselves what tests are really about? Maybe finding bugs? No, not really. If we introduce bugs, we’re likely to write test around those bugs too. But yes we can find some bugs with tests. The main point of tests is to have reproducible repeatability in our systems. By having a code base largely covered by tests we can know with better certainty what a small code change can break in other parts of the system. By having repeatability we can make code changes with confidence, since we know we’ll see what breaks in other tests. And here comes the inability to estimate future costs. By spending just a few more hours writing those tests we’d know instantly what broke where. Imagine we fix a reported bug. We check-in the code, deploy it and the users are happy. Until we get a call 2 weeks later about a certain monthly process has stopped working. What we don’t know is that this process was developed by a long gone coworker and for some reason it relied on that same bug we’ve happily fixed. There’s no way we could’ve known that. We say OK and go in and fix the monthly process. But what we have no clue about is that there’s this ETL job that relied on data from that monthly process. Now that we’ve fixed the process it’s giving unexpected (yet correct since we fixed it) data to the ETL job. So we have to fix that too. But there’s this part of the app we coded that relies on data from that exact ETL job. And just like that we enter the “Loop of maintenance horror”. With the loop eventually comes blame. Here’s a nice tip for all developers and DBAs out there: If you make a mistake man up and admit to it. All of the above is valid for any kind of software development. Keeping this in mind the database is nothing other than just a part of the application. But a big part! One reason why testing a database is even more important than testing an application is that one database is usually accessed from multiple applications and processes. This makes it the central and vital part of the enterprise software infrastructure. Knowing all this can we really afford not to have tests? 
    What to test in a database Now that we’ve decided we’ll dive into this testing thing we have to ask ourselves what needs to be tested? The short answer is: everything. The long answer is: read on! There are 2 main ways of doing tests: Black box and White box testing. Black box testing means we have no idea how the system internals are built and we only have access to its inputs and outputs. With it we test that the internal changes to the system haven’t caused the input/output behavior of the system to change. The most important thing to test here are the edge conditions. It’s where most programs break. Having good edge condition tests we can be more confident that the system's changes won’t break. White box testing has the full knowledge of the system internals. With it we test the internal system changes, different states of the application, etc… White and Black box tests should be complementary to each other as they are very much interconnected. Testing database routines includes testing stored procedures, views, user defined functions and anything you use to access the data with. Database routines are your input/output interface to the database system. They count as black box testing. We test them for 2 things: Data and schema. When testing schema we only care about the columns and the data types they’re returning. After all the schema is the contract to the outside systems. If it changes we usually have to change the applications accessing it. One helpful T-SQL command when doing schema tests is SET FMTONLY ON. It tells the SQL Server to return only empty result sets. This speeds up tests because it doesn’t return any data to the client. After we’ve validated the schema we have to test the returned data. There is no other way to do this but to have expected data known before the test executes and comparing that data to the database routine output. Testing Authentication and Authorization helps us validate who has access to the SQL Server box (Authentication) and who has access to certain database objects (Authorization). For desktop applications and windows authentication this works well. But the biggest problem here are web apps. They usually connect to the database as a single user. Please ensure that that user is not SA or an account with admin privileges. That is just bad. Load testing ensures that our database can handle peak loads. One often overlooked tool for load testing is Microsoft’s OSTRESS tool. It’s part of RML utilities (x86, x64) for SQL Server and can help determine if our database server can handle loads like 100 simultaneous users each doing 10 requests per second. SQL Profiler can also help us here by looking at why certain queries are slow and what to do to fix them.   One particular problem to think about is how to begin testing existing databases. The first thing we have to do is to get to know those databases. We can’t test something when we don’t know how it works. To do this we have to talk to the users of the applications accessing the database, run SQL Profiler to see what queries are being run, use existing documentation to decipher all the object relationships, etc… The way to approach this is to choose one part of the database (say a logical grouping of tables that go together) and filter our traces accordingly. Once we’ve done that we move on to the next grouping and so on until we’ve covered the whole database. Then we move on to the next one. 
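    As a rough illustration of the expected-versus-actual data check described above, here is a minimal sketch, written in JavaScript rather than a SQL testing framework and not taken from the original post. A black-box data test boils down to comparing the rows a routine returns against rows fixed in advance; how you actually fetch actualRows from SQL Server is up to your test harness.
    // Minimal sketch of a black-box data test: compare the rows returned by a
    // database routine against expected rows prepared before the test runs.
    function rowsMatch(expectedRows, actualRows) {
        if (expectedRows.length !== actualRows.length) {
            return false; // different row counts - fail fast
        }
        return expectedRows.every(function (expected, i) {
            var actual = actualRows[i];
            // Compare every expected column value; extra or missing columns
            // would already have been caught by the schema test.
            return Object.keys(expected).every(function (column) {
                return expected[column] === actual[column];
            });
        });
    }

    // Example with a hypothetical GetCustomerById routine:
    var expected = [{ CustomerId: 42, Name: "Acme" }];
    var actual   = [{ CustomerId: 42, Name: "Acme" }]; // would come from the routine under test
    console.log(rowsMatch(expected, actual) ? "PASS" : "FAIL");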
    Database Testing is a topic that we could spend many hours discussing, but let this be a nice intro to the world of database testing. See you in the next post.

    Read the article

  • Big Visible Charts

    - by Robert May
    An important part of Agile is the concept of transparency and visibility. In proper functioning teams, stakeholders can look at any team at any time in the iteration or release and see how that team is doing by simply looking at what we call Big Visible Charts. If you’ve done Scrum, you’ve seen these charts. However, interpreting these charts can often be an art form. There are several different charts that can be useful. In this newsletter, I’ll focus on the Iteration Burndown and Cumulative Flow charts. I’ve included a copy of the spreadsheet that I used to create the charts, and if you don’t have a tool that creates them for you, you can use this spreadsheet to do so. Our preferred tool for managing Scrum projects is Rally. Rally creates all of these charts for you, saving you quite a bit of time. The Iteration Burndown and Cumulative Flow Charts This is the main chart that teams use. Although less useful to stakeholders, this chart is critical to the team and provides quite a bit of information to the team about how their iteration is going. Most charts are a combination of the charts below, so you may need to combine aspects of each section to understand what is happening in your iterations. Ideal Ah, isn’t that a pretty picture? Unfortunately, it’s also very unrealistic. I’ve seen iterations that come close to ideal, but never that match perfectly. If your iteration matches perfectly, chances are, someone is playing with the numbers. Reality is just too difficult to have a burndown chart that matches this exactly. Late Planning Iteration started, but the team didn’t. You can tell this by the fact that the real number of estimated hours didn’t appear until day two. In the cumulative flow, you can also see that nothing was defined in Day one and two. You want to avoid situations like this. You’ll note that the team had to burn faster than is ideal to meet the iteration because of the late planning. This often results in long weeks and days. Testing Starved Determining whether or not testing is starved is difficult without the cumulative flow. The pattern in the burndown could be nothing more that developers not completing stories early enough or could be caused by stories being too big. With the cumulative flow, however, you see that only small bites are in progress and stories were completed early, but testing didn’t start testing until the end of the iteration, and didn’t complete testing all stories in the iteration. When this happens, question whether or not your testing resources are sufficient for your team and whether or not acceptance is adequately defined. No Testing With this one, both graphs show the same thing; the team needs testers and testing! Without testing, what was completed cannot be verified to make sure that it is acceptable to the business. If you find yourself in this situation, review your testing practices and acceptance testing process and make changes today. Late Development With this situation, both graphs tell a story. In the top graph, you can see that the hours failed to burn down as quickly as the team expected. This could be caused by the team not correctly estimating their hours or the team could have had illness or some other issue that affected them. Often, when teams are tackling something that is more unknown, they’ll run into technical barriers that cause the burn down to happen slower than expected. In the cumulative flow graph, you can see that not much was completed in the first few days. 
This could be because of illness or technical barriers or simply poor estimation. Testing was able to keep up with everything that was completed, however. No Tool Updating When you see graphs that look like this, you can be assured that it’s because the team is not updating the tool that generates the graphs. Review your policy for when they are to update. On the teams that I run, I require that each team member updates the tool at least once daily. You should also check to see how well the team is breaking down stories into tasks. If they’re creating few large tasks, graphs can look similar to this. As a general rule, I never allow tasks, other than Unit Testing and Uncertainty, to be greater than eight hours in duration. Scope Increase I always encourage team members to enter in however much time they think they have left on a task, even if that means increasing the total amount of time left to do. You get a much better and more realistic picture this way. Increasing time remaining could explain the burndown graph, but by looking at the cumulative flow graph, we can see that stories were added to the iteration and scope was increased. Since planning should consume all of the hours in the iteration, this is almost always a bad thing. If the scope change happened late in the iteration and the hours remaining were well below the ideal burn, then increasing scope is probably o.k., but estimation needs to get better. However, with the charts above, that’s clearly not what happened and the team was required to do extra work to make the iteration. If you find this happening, your product owner and ScrumMasters need training. The team also needs to learn to say no. Scope Decrease Scope decreases are just as bad as scope increases. Usually, graphs above show that the team did a poor job of estimating their stories and part way through had to reduce scope to change the iteration. This will happen once in a while, but if you find it’s a pattern on your team, you need to re-evaluate planning. Some teams are hopelessly optimistic. In those cases, I’ll introduce a task I call “Uncertainty.” With Uncertainty, the team estimates how many hours they might need if things don’t go well with the tasks they’ve defined. They try to estimate things that could go poorly and increase the time appropriately. Having an Uncertainty task allows them to have a low and high estimate. Uncertainty should not just be an arbitrary buffer. It must correlate to real uncertainty in the tasks that have been defined. Stories are too Big Often, we see graphs like the ones above. Note that the burndown looks fairly good, other than the chunky acceptance of stories. However, when you look at cumulative flow, you can see that at one point, everything is in progress. This is a bad thing. When you see graphs like this, you’re in one of two states. You may just have a very small team and can only handle one or two stories in your iteration. If you have more than one or two people, then the most likely problem is that your stories are far too big. To combat this, break large high hour stories into smaller pieces that can be completed independently and accepted independently. If you don’t, you’ll likely be requiring your testers to do heroic things to complete testing on the last day of the iteration and you’re much more likely to have the entire iteration fail, because of the limited amount of things that can be completed. Summary There are other charts that can be useful when doing scrum. 
If you don’t have any big visible charts, you really need to evaluate your process and change. These charts can provide the team a wealth of information and help you write better software. If you have any questions about charts that you’re seeing on your team, contact me with a screen capture of the charts and I’ll tell you what I’m seeing in those charts. I always want this information to be useful, so please let me know if you have other questions. Technorati Tags: Agile
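    For readers who build these charts by hand from the spreadsheet rather than letting a tool like Rally draw them, the "ideal" burndown line discussed above is just a straight line from the hours committed at planning down to zero on the last day of the iteration. The small helper below is a hypothetical sketch, not part of the original article or spreadsheet:
    // Hypothetical helper: compute the ideal burndown line for an iteration.
    // totalHours  - hours committed at planning
    // workingDays - number of days in the iteration
    // Returns one "hours remaining" value per day, ending at zero.
    function idealBurndown(totalHours, workingDays) {
        var line = [];
        for (var day = 0; day <= workingDays; day++) {
            line.push(totalHours * (1 - day / workingDays));
        }
        return line;
    }

    // Example: 200 committed hours over a 10-day iteration.
    console.log(idealBurndown(200, 10)); // [200, 180, 160, ..., 20, 0]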

    Read the article

  • Setting useLegacyV2RuntimeActivationPolicy At Runtime

    - by Reed
    Version 4.0 of the .NET Framework included a new CLR which is almost entirely backwards compatible with the 2.0 version of the CLR.  However, by default, mixed-mode assemblies targeting .NET 3.5sp1 and earlier will fail to load in a .NET 4 application.  Fixing this requires setting useLegacyV2RuntimeActivationPolicy in your app.Config for the application.  While there are many good reasons for this decision, there are times when this is extremely frustrating, especially when writing a library.  As such, there are (rare) times when it would be beneficial to set this in code, at runtime, as well as verify that it’s running correctly prior to receiving a FileLoadException. Typically, loading a pre-.NET 4 mixed mode assembly is handled simply by changing your app.Config file, and including the relevant attribute in the startup element: <?xml version="1.0" encoding="utf-8" ?> <configuration> <startup useLegacyV2RuntimeActivationPolicy="true"> <supportedRuntime version="v4.0"/> </startup> </configuration> This causes your application to run correctly, and load the older, mixed-mode assembly without issues. For full details on what’s happening here and why, I recommend reading Mark Miller’s detailed explanation of this attribute and the reasoning behind it. Before I show any code, let me say: I strongly recommend using the official approach of using app.config to set this policy. That being said, there are (rare) times when, for one reason or another, changing the application configuration file is less than ideal. While this is the supported approach to handling this issue, the CLR Hosting API includes a means of setting this programmatically via the ICLRRuntimeInfo interface.  Normally, this is used if you’re hosting the CLR in a native application in order to set this, at runtime, prior to loading the assemblies.  However, the F# Samples include a nice trick showing how to load this API and bind this policy, at runtime.  This was required in order to host the Managed DirectX API, which is built against an older version of the CLR. This is fairly easy to port to C#.  
    Instead of a direct port, I also made a small addition – by trapping the COM exception received if unable to bind (which will occur if the 2.0 CLR is already bound), I also allow a runtime check of whether this property was set up properly:
    public static class RuntimePolicyHelper
    {
        public static bool LegacyV2RuntimeEnabledSuccessfully { get; private set; }

        static RuntimePolicyHelper()
        {
            ICLRRuntimeInfo clrRuntimeInfo =
                (ICLRRuntimeInfo)RuntimeEnvironment.GetRuntimeInterfaceAsObject(
                    Guid.Empty,
                    typeof(ICLRRuntimeInfo).GUID);
            try
            {
                clrRuntimeInfo.BindAsLegacyV2Runtime();
                LegacyV2RuntimeEnabledSuccessfully = true;
            }
            catch (COMException)
            {
                // This occurs with an HRESULT meaning
                // "A different runtime was already bound to the legacy CLR version 2 activation policy."
                LegacyV2RuntimeEnabledSuccessfully = false;
            }
        }

        [ComImport]
        [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        [Guid("BD39D1D2-BA2F-486A-89B0-B4B0CB466891")]
        private interface ICLRRuntimeInfo
        {
            void xGetVersionString();
            void xGetRuntimeDirectory();
            void xIsLoaded();
            void xIsLoadable();
            void xLoadErrorString();
            void xLoadLibrary();
            void xGetProcAddress();
            void xGetInterface();
            void xSetDefaultStartupFlags();
            void xGetDefaultStartupFlags();

            [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
            void BindAsLegacyV2Runtime();
        }
    }
    Using this, it’s possible to not only set this at runtime, but also verify, prior to loading your mixed mode assembly, whether this will succeed. In my case, this was quite useful – I am working on a library purely for internal use which uses a numerical package that is supplied with both a completely managed as well as a native solver.  The native solver uses a CLR 2 mixed-mode assembly, but is dramatically faster than the pure managed approach.  By checking RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully at runtime, I can decide whether to enable the native solver, and only do so if I successfully bound this policy. There are some tricks required here – to enable this sort of fallback behavior, you must make these checks in a type that doesn’t cause the mixed mode assembly to be loaded.  In my case, this forced me to encapsulate the library I was using entirely in a separate class, perform the check, then pass through the required calls to that class.  Otherwise, the library will load before the hosting process gets enabled, which in turn will fail. This code will also, of course, try to enable the runtime policy before the first time you use this class – which typically means just before the first time you check the boolean value.  As a result, checking this early on in the application is more likely to allow it to work. Finally, if you’re using a library, this has to be called prior to the 2.0 CLR loading.  This will cause it to fail if you try to use it to enable this policy in a plugin for most third party applications that don’t have their app.config set up properly, as they will likely have already loaded the 2.0 runtime. As an example, take a simple audio player.  
    The code below shows how this can be used to properly, at runtime, only use the “native” API if this will succeed, and fallback (or raise a nicer exception) if this will fail:
    public class AudioPlayer
    {
        private IAudioEngine audioEngine;

        public AudioPlayer()
        {
            if (RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully)
            {
                // This will load a CLR 2 mixed mode assembly
                this.audioEngine = new AudioEngineNative();
            }
            else
            {
                this.audioEngine = new AudioEngineManaged();
            }
        }

        public void Play(string filename)
        {
            this.audioEngine.Play(filename);
        }
    }
    Now – the warning: This approach works, but I would be very hesitant to use it in public facing production code, especially for anything other than initializing your own application.  While this should work in a library, using it has a very nasty side effect: you change the runtime policy of the executing application in a way that is very hidden and non-obvious.

    Read the article

  • HTG Explains: Should You Build Your Own PC?

    - by Chris Hoffman
    There was a time when every geek seemed to build their own PC. While the masses bought eMachines and Compaqs, geeks built their own more powerful and reliable desktop machines for cheaper. But does this still make sense? Building your own PC still offers as much flexibility in component choice as it ever did, but prebuilt computers are available at extremely competitive prices. Building your own PC will no longer save you money in most cases. The Rise of Laptops It’s impossible to look at the decline of geeks building their own PCs without considering the rise of laptops. There was a time when everyone seemed to use desktops — laptops were more expensive and significantly slower in day-to-day tasks. With the diminishing importance of computing power — nearly every modern computer has more than enough power to surf the web and use typical programs like Microsoft Office without any trouble — and the rise of laptop availability at nearly every price point, most people are buying laptops instead of desktops. And, if you’re buying a laptop, you can’t really build your own. You can’t just buy a laptop case and start plugging components into it — even if you could, you would end up with an extremely bulky device. Ultimately, to consider building your own desktop PC, you have to actually want a desktop PC. Most people are better served by laptops. Benefits to PC Building The two main reasons to build your own PC have been component choice and saving money. Building your own PC allows you to choose all the specific components you want rather than have them chosen for you. You get to choose everything, including the PC’s case and cooling system. Want a huge case with room for a fancy water-cooling system? You probably want to build your own PC. In the past, this often allowed you to save money — you could get better deals by buying the components yourself and combining them, avoiding the PC manufacturer markup. You’d often even end up with better components — you could pick up a more powerful CPU that was easier to overclock and choose more reliable components so you wouldn’t have to put up with an unstable eMachine that crashed every day. PCs you build yourself are also likely more upgradable — a prebuilt PC may have a sealed case and be constructed in such a way to discourage you from tampering with the insides, while swapping components in and out is generally easier with a computer you’ve built on your own. If you want to upgrade your CPU or replace your graphics card, it’s a definite benefit. Downsides to Building Your Own PC It’s important to remember there are downsides to building your own PC, too. For one thing, it’s just more work — sure, if you know what you’re doing, building your own PC isn’t that hard. Even for a geek, researching the best components, price-matching, waiting for them all to arrive, and building the PC just takes longer. Warranty is a more pernicious problem. If you buy a prebuilt PC and it starts malfunctioning, you can contact the computer’s manufacturer and have them deal with it. You don’t need to worry about what’s wrong. If you build your own PC and it starts malfunctioning, you have to diagnose the problem yourself. What’s malfunctioning, the motherboard, CPU, RAM, graphics card, or power supply? Each component has a separate warranty through its manufacturer, so you’ll have to determine which component is malfunctioning before you can send it off for replacement. Should You Still Build Your Own PC? 
Let’s say you do want a desktop and are willing to consider building your own PC. First, bear in mind that PC manufacturers are buying in bulk and getting a better deal on each component. They also have to pay much less for a Windows license than the $120 or so it would cost you to to buy your own Windows license. This is all going to wipe out the cost savings you’ll see — with everything all told, you’ll probably spend more money building your own average desktop PC than you would picking one up from Amazon or the local electronics store. If you’re an average PC user that uses your desktop for the typical things, there’s no money to be saved from building your own PC. But maybe you’re looking for something higher end. Perhaps you want a high-end gaming PC with the fastest graphics card and CPU available. Perhaps you want to pick out each individual component and choose the exact components for your gaming rig. In this case, building your own PC may be a good option. As you start to look at more expensive, high-end PCs, you may start to see a price gap — but you may not. Let’s say you wanted to blow thousands of dollars on a gaming PC. If you’re looking at spending this kind of money, it would be worth comparing the cost of individual components versus a prebuilt gaming system. Still, the actual prices may surprise you. For example, if you wanted to upgrade Dell’s $2293 Alienware Aurora to include a second NVIDIA GeForce GTX 780 graphics card, you’d pay an additional $600 on Alienware’s website. The same graphics card costs $650 on Amazon or Newegg, so you’d be spending more money building the system yourself. Why? Dell’s Alienware gets bulk discounts you can’t get — and this is Alienware, which was once regarded as selling ridiculously overpriced gaming PCs to people who wouldn’t build their own. Building your own PC still allows you to get the most freedom when choosing and combining components, but this is only valuable to a small niche of gamers and professional users — most people, even average gamers, would be fine going with a prebuilt system. If you’re an average person or even an average gamer, you’ll likely find that it’s cheaper to purchase a prebuilt PC rather than assemble your own. Even at the very high end, components may be more expensive separately than they are in a prebuilt PC. Enthusiasts who want to choose all the individual components for their dream gaming PC and want maximum flexibility may want to build their own PCs. Even then, building your own PC these days is more about flexibility and component choice than it is about saving money. In summary, you probably shouldn’t build your own PC. If you’re an enthusiast, you may want to — but only a small minority of people would actually benefit from building their own systems. Feel free to compare prices, but you may be surprised which is cheaper. Image Credit: Richard Jones on Flickr, elPadawan on Flickr, Richard Jones on Flickr     

    Read the article

  • Personal Technology – Laptop Screen Blank – No Post – No BIOS – No Boot

    - by Pinal Dave
    If your laptop Screen is Blank and there is no POST, BIOS or boot, you can follow the steps mentioned here and there are chances that it will work if there is no hardware failure inside. Step 1: Remove the power cord from the laptop Step 2: Remove the battery from the laptop Step 3: Hold power button (keep it pressed) for almost 60 seconds Step 4: Plug power back in laptop Step 5: Start computer and it should just start normally. Step 6: Now shut down Step 7: Insert the battery back in the laptop Step 8: Start laptop again and it should work Note 1: If your laptop does not work after inserting back the memory. Remove the memory and repeat above process. Do not insert the battery back as it is malfunctioning. Note 2: If your screen is faulty or have issues with your hardware (motherboard, screen or anything else) this method will not fix your computer. Those, who care about how I come up with this not SQL related blog post, here is the very funny true story. If you are a married man, you will know what I am going to describe next. May be you have faced the same situation or at least you feel and understand my situation. My wife’s computer suddenly stops working when she was searching for my daughter’s mathematics worksheets online. While the fatal accident happened with my wife’s computer (which was my loyal computer for over 4 years before she got it), I was working in my home office, fixing a high priority issue (live order’s database was corrupted) with one of the largest eCommerce websites.  While I was working on production server where I was fixing database corruption, my wife ran to my home office. Here is how the conversation went: Wife: This computer does not work. I: Restart it. Wife: It does not start. I: What did you do with it? Wife: Nothing, it just stopped working. I: Okey, I will look into it later, working on the very urgent issue. Wife: I was printing my daughter’s worksheet. I: Hm.. Okey. Wife: It was the mathematics worksheet, which you promised you will teach but you never get around to do it, so I am doing it myself. I: Thanks. I appreciate it. I am very busy with this issue as million dollar transaction are not happening as the database got corrupted and … Wife: So what … umm… You mean to say that you care about this customer more than your daughter. You know she got A+ in every other class but in mathematics she got only A. She missed that extra credit question. I: She is only 4, it is okay. Wife: She is 4.5 years old not 4. So you are not going to fix this computer which does not start at all. I think our daughter next time will even get lower grades as her dad is busy fixing something. I: Alright, I give up bring me that computer. Our daughter who was listening everything so far she finally decided to speak up. Daughter: Dad, it is a laptop not computer. I: Yes, sweety get that laptop here and your dad is going to fix the this small issue of million dollar issue later on. I decided to pay attention to my wife’s computer. She was right. No matter what I do, it will not boot up, it will not start, no BIOS, no POST screen. The computer starts for a second but nothing comes up on the screen. The light indicating hard drive comes up for a second and goes off. Nothing happens. I removed every single USB drive from the laptop but it still would not start. It was indeed no fun for me. Finally I remember my days when I was not married and used to study in University of Southern California, Los Angeles. 
I remembered that I used to have very old second (or maybe third or fourth) hand computer with me. In polite words, I had pre-owned computer and it used to face very similar issues again and again. I had small routine I used to follow to fix my old computer and I had decided to follow the same steps again with this computer. Step 1: Remove the power cord from the laptop Step 2: Remove the battery from the laptop Step 3: Hold power button (keep it pressed) for almost 60 seconds Step 4: Plug power back in laptop Step 5: Start computer and it should just start normally. Step 6: Now shut down Step 7: Insert the battery back in the laptop Step 8: Start laptop again and it should work Note 1: If your laptop does not work after inserting back the memory. Remove the memory and repeat above process. Do not insert the battery back as it is malfunctioning. Note 2: If your screen is faulty or have issues with your hardware (motherboard, screen or anything else) this method will not fix your computer. Once I followed above process, her computer worked. I was very delighted, that now I can go back to solving the problem where millions of transactions were waiting as I was fixing corrupted database and it the current state of the database was in emergency mode. Once I fixed the computer, I looked at my wife and asked. I: Well, now this laptop is back online, can I get guaranteed that she will get A+ in mathematics in this week’s quiz? Wife: Sure, I promise. I: Fantastic. After saying that I started to look at my database corruption and my wife interrupted me again. Wife: Btw, I forgot to tell you. Our daughter had got A in mathematics last week but she had another quiz today and she already have received A+ there. I kept my promise. I looked at her and she started to walk outside room, before I say anything my phone rang. DBA from eCommerce company had called me, as he was wondering why there is no activity from my side in last 10 minutes. DBA: Hey bud, are you still connected. I see um… no activity in last 10 minutes. I: Oh, well, I was just saving the world. I am back now. After two hours I had fixed the database corruption and everything was normal. I was outsmarted by my wife but honestly I still respect and love her the same as she is the one who spends countless hours with our daughter so she does not miss me and I can continue writing blogs and keep on doing technology evangelism. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Humor, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Are Chromebooks the New Netbooks, and What Does That Mean?

    - by Chris Hoffman
    Netbooks — small, cheap, slow laptops — were once very popular. They fell out of favor — people bought them because they seemed cheap and portable, but the actual experience was lackluster. Most netbooks now sit unused. Windows netbooks have vanished from stores today, but there’s a new super-cheap laptop — the Chromebook. Chromebook sales numbers are impressive, but their usage statistics tell a different story. Are Chromebooks just the new netbook? The Problem With Netbooks Netbooks seemed appealing, especially in an age before tablets and lightweight ultrabooks. You could buy a netbook for $200 or so and have a portable device that let you get on the Internet. The name “netbook” spelled that out — it was a portable device for getting on the ‘net. They weren’t really that great. The original netbook was a lightweight Asus Eee PC that ran Linux alone and had a small amount of fast flash storage. Netbooks eventually ran heavier Windows XP operating systems — Windows Vista was out, but it was just too bloated to run on netbooks. Manufacturers added slow magnetic hard drives, bloatware, and even DVD drives! They couldn’t run most Windows software very well. The build quality was poor and their keyboards were tiny and cramped. People liked the idea of a lightweight device that let them get on the Internet and loved the cheap price, but the actual experience wasn’t great. Chromebook Sales Chromebook sales numbers seem surprisingly high. NPD reported that Chromebooks were 21% of all notebooks sold in the US in 2013. If you combine laptop and tablet sales into a single statistic, Chromebooks were 9.6% of all those devices sold. That’s 2/3 as many Chromebooks sold as iPads in the US! Of Amazon’s best-selling laptop computers, two of the top three are Chromebooks. These definitely look like successful products. Unlike netbooks, Chromebooks are taking off in a big way in the education market. Many schools are buying Chromebooks for their students instead of more expensive Windows laptops. They’re easier to manage and lock down than Windows laptops, but — more importantly for cash-strapped schools — they’re very cheap. Netbooks never had this sort of momentum in schools. Chromebook Usage Statistics Here’s where the rosy picture of Chromebooks starts to become more realistic. StatCounter’s browser usage statistics show how widely used different operating systems are. For example, Windows 7 has the highest share with 35.71% of web activity in April, 2014. The chart doesn’t even show Chrome OS at all, although there is an “Other” number near the bottom. Click the Download Data link to download a CSV file and we can view more detailed information. Chrome OS only accounted for 0.38% of web usage in April, 2014. Desktop Linux, which people often shrug at, accounted for 1.52% in the same month. To its credit, Chrome OS usage has increased. Chromebooks were widely mocked back in November, 2013 when the sales numbers came out. After all, they only accounted for 0.11% of web usage globally in November, 2013! But Chrome OS numbers have been improving: Nov, 2013: 0.11% Dec, 2013: 0.22% Jan, 2014: 0.31% Feb, 2014: 0.35% Mar, 2014: 0.36% Apr, 2014: 0.38% Chrome OS is climbing, but it’s definitely still in the “Other” category. It isn’t as high as we’d expect to see it with those types of sales numbers. Chromebooks vs. Netbooks Chromebooks are more limited devices than traditional PCs. You can do quite a few things, but you have to do it all using Chrome or Chrome apps. 
Most people won’t be enabling developer mode and installing a Linux desktop. You don’t have access to the powerful desktop software available for Windows and even Mac OS X. On the other hand, these Chromebooks are less compromised than netbooks in many ways. They come with a lightweight operating system designed for portable, mobile devices. They don’t come packed with any bloatware, like the bloatware you’ll find on competing Windows PCs and the original netbooks. They’re cheaper because the manufacturer doesn’t have to pay for a Windows license. There’s no need for antivirus software weighing the operating system down. They’re larger than the original netbooks, with many of them being 11.6-inches instead of the original 8-inch bodies many older netbooks came with. They have larger, more comfortable keyboards and fast solid-state storage. Really, Chromebooks are what netbooks wanted to be. People didn’t buy netbooks to use typical Windows software — they just wanted a lightweight PC. Of course, for many people, the real successor to netbooks is tablets. If all you want is a portable device to throw in a bag so you can get online, maybe a tablet is better. Where Does This Leave Chromebooks? So, are Chromebooks the new netbooks? It’s a bit early to answer that question. Chromebooks are definitely not out of the competition — their sales look good and their usage share is increasing. On the other hand, Chrome OS is still pretty far behind. They’re not catching fire like tablets did. Maybe netbooks were just before their time and Chromebooks were what they were always meant to be. Just as Microsoft’s Windows XP tablets failed, Windows XP netbooks also failed. Tablets took off with a more refined operating system on better hardware years later. “Netbooks” — or Chromebooks — are now taking off with a more purpose-built operating system on better hardware, too. It’s hard to count Chromebooks out because they provide a much better experience than netbooks ever did. If you’re one of the people who wants to use old Windows desktop apps on your portable laptop, you may think netbooks were better — but most people don’t want that. But maybe people either want a full desktop PC experience or a full mobile tablet experience. Is there a place for a laptop with a keyboard that can only view websites? We’ll have to wait and see. Image Credit: Kevin Jarret on Flickr, Clive Darra on Flickr, Sean Freese on Flickr

    Read the article

< Previous Page | 215 216 217 218 219 220 221 222 223 224 225 226  | Next Page >