Search Results

Search found 592 results on 24 pages for 'rolling stones'.

Page 23 of 24

  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day. With the launch of Visual Studio 2010 already done in Asia and rolling around the world towards us, we are getting ready for the RTM (Release to Manufacturing) build. We have had TFS 2010 in production for nearly 6 months and have had only minimal problems.

    Update 12th April 2010 – Added Scott Hanselman’s tweet about the MSDN download release time.

    SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production, not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live, I think there will be a lot of competition.

    “Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*!” @shanselman - Scott Hanselman

    Same as before, we need to uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS 2010 you can follow our Rules To Better TFS 2010 Migration and read my post on our successes. We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of taking a snapshot as well as taking a DB backup.

    Done - Snapshot the Hyper-V server. Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade explaining why you should never snapshot a running server.
    Done - Uninstall Visual Studio Team Explorer 2010 RC. You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server.
    Done - Uninstall TFS 2010 RC
    Done - Install TFS 2010 RTM
    Done - Configure TFS 2010 RTM. Pick the Upgrade option and point it at your existing “tfs_Configuration” database to load all of the existing settings.
    Done - Upgrade the SharePoint Extensions
    Upgrade Build Servers (Pending)
    Test the server (Pending)

    The back-out plan, and you should always have one, is to restore the snapshot.

    Upgrading to Team Foundation Server 2010 – Done

    The first thing you need to do is turn off the TFS server, then log into the Hyper-V server and create a snapshot.

    Figure: Make sure you turn the server off and delete all old snapshots before you take a new one.

    I noticed that the snapshot that was taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server.

    Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM’ness that is coming next. The upgrade procedure for TFS 2010 is to uninstall the old version and install the new one.

    Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.

    Figure: Most of the heavy lifting is done by the uninstaller, but make sure you have removed any of the client bits first, specifically Visual Studio 2010 or Team Explorer 2010.

    Once the uninstall is complete (it took around 5 minutes for me) you can begin the install of the RTM. Running the 64-bit OS will allow the application to use more than 2GB RAM, which, while not common, may be of use in heavy-load situations.

    Figure: It is always recommended to install the 64-bit version of a server application where possible.
    With SharePoint 2010, Exchange 2010 and even Windows Server 2008 R2 being 64-bit only, I do not think there will be another release of a server app that is 32-bit.

    You then need to choose what it is you want to install. This depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build Service (controller only) on our TFS server, along with Analysis Services and Reporting Services, but our SharePoint server lives elsewhere.

    Figure: This always confuses people, but in reality it makes sense. Don’t install what you do not need. Every extra you install has an impact on performance. If you are integrating with SharePoint you will need to run this install on every front-end server in your farm, and don’t forget to upgrade your build servers and proxy servers later.

    Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS).

    It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server between your TFS server and your build servers.

    Figure: Installing Microsoft .NET Framework 4 takes the most time.

    Figure: Now run Windows Update and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, add-ons, check-in policies and other bits as well.

    Configure Team Foundation Server 2010 – Done

    Now you can configure the server. If you have no key you will need to pick “Install a Trial Licence”, but it is only £500, or free with an MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key.

    Figure: You can pick Trial and add your key later using the TFS Server Admin.

    Here is where the real choices happen. We are doing an upgrade from a previous version, so I will pick Upgrade, the same as all you folks that are using the RC or TFS 2008.

    Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgrades them to the release.

    Once you have entered your database server name you can click “List available databases” and it will show what it can upgrade.

    Figure: Select your database from the list and, at this point, make sure you have a valid backup (a minimal T-SQL backup sketch is in the postscript at the end of this post). At this point you have not made ANY changes to the databases.

    At this point the configuration wizard will load configuration from your existing database if you have one. If you are upgrading TFS 2008, refer to Rules To Better TFS 2010 Migration. For most of the wizard the default values will suffice, but depending on the configuration you want, you can pick different options.

    Figure: Set the application tier account and authentication method to use. We use NTLM to keep things simple, as we host our TFS server externally for our remote developers.

    Figure: Setting your TFS server URLs to be the remote URLs allows the reports to be accessed without using VPN. Very handy for those remote developers.

    Figure: Detected the existing warehouse no problem.

    Figure: Again, we love green ticks. It gives us a warm fuzzy feeling.

    Figure: The username for connecting to Reporting Services should be a domain account (if you are on a domain, that is).

    Figure: Set up the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.

    You then need to run all of your readiness checks. These checks can save your life!
    They check all of the settings that you have entered, as well as checking that all the external services are configured and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous versions were not. First, Microsoft changed the install into two steps: install and configuration. Second, they have pulled out all of the stops in making the install run all the checks necessary to ensure that once you start the install, it will complete. If you find any errors I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.

    Figure: Now we have everything set up, the configuration wizard can do its work.

    Figure: Took a while on the “Web site” stage at one point, but zipped through after that.

    Figure: The last wee bit. TFS needs to do a little tinkering with the data to complete the upgrade.

    Figure: All upgraded. I am not worried about the yellow triangle, as SharePoint was being a little silly:

        Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException)
        Exception Stack Trace: at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString) at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)

        [Info @16:10:16.307] Benign exception caught as part of verify:
        Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException)
        Exception Stack Trace: at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url) at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify()
        Inner Exception Details:
        Exception Message: TF249064: The following Web service returned a response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException)
        Exception Data Dictionary: ResponseStatusCode = InternalServerError

    I’ll look at SharePoint afterwards; probably the SharePoint box just needs a restart or a kick. If there is a problem with SharePoint it will come out in testing, but I will definitely be passing this on to Microsoft.

    Upgrading the SharePoint connector to TFS 2010

    You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front-end servers. To do this, uninstall the TFS 2010 RC from each one in the same way as the server, and then install just the RTM Extensions.
    Figure: Only install the SharePoint Extensions on your SharePoint front-end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.

    Figure: When you configure SharePoint it uploads all of the solutions and templates.

    Figure: Everything is uploaded successfully.

    Figure: TFS even remembered the settings from the previous installation. Fantastic.

    Upgrading the Team Foundation Build Servers to TFS 2010

    Just like on the SharePoint servers, you will need to upgrade the build servers to the RTM. Just uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint server, you will probably have some version of Visual Studio installed; you will need to remove this as well. (Coming Soon)

    Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS 2010

    If you have developers still on Visual Studio 2005 or 2008 you will need to download the respective compatibility pack:

    Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
    Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010

    If you are using Eclipse you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that you have the latest version of your applications with SSW Diagnostic, which will check for Service Packs and hotfixes to Visual Studio as well.

    Technorati Tags: TFS, TFS2010, TFS 2010, Upgrade
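    Postscript: the upgrade wizard warns you to have a valid backup before it touches anything, so it is worth scripting that step rather than doing it by hand. Below is a minimal T-SQL sketch; the database names are the TFS 2010 defaults and the backup path is made up, so adjust both to match your instance.

        -- Back up the TFS configuration and collection databases before running the upgrade wizard.
        -- NOTE: Tfs_Configuration, Tfs_DefaultCollection and Tfs_Warehouse are the default names;
        -- your collection databases may be named differently, and the path here is illustrative only.
        BACKUP DATABASE Tfs_Configuration
            TO DISK = N'D:\Backups\Tfs_Configuration.bak' WITH INIT, CHECKSUM;
        BACKUP DATABASE Tfs_DefaultCollection
            TO DISK = N'D:\Backups\Tfs_DefaultCollection.bak' WITH INIT, CHECKSUM;
        BACKUP DATABASE Tfs_Warehouse
            TO DISK = N'D:\Backups\Tfs_Warehouse.bak' WITH INIT, CHECKSUM;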

    Read the article

  • Oracle Fusion Applications: Changing the Game

    - by kellsey.ruppel(at)oracle.com
    Originally posted in the Oracle Profit Magazine, November 2010 Edition.

    When the order processing system red-flags a customer's credit status, the IT department doesn't get the customer's call. When a supplier misses a delivery date for a key automotive assembly, it's not the CIO who has to answer for the error. Knowledge workers (known in IT circles as "users") are on the front lines when an exception occurs in an established business process. They're also the ones who study sales trends to decide when to open a new store in an up-and-coming neighborhood, which products are most profitable, how employee skill sets are evolving, and which suppliers are most efficient. In short, knowledge workers are masters of business as unusual.

    Traditional enterprise resource planning (ERP) systems and other familiar enterprise applications excel at automating, managing, and executing standard business processes. These programs shine when everything goes as planned. Life gets even trickier when a traditional application needs to be extended with a new service or an extra step is added to a business process when new products are brought to market, divisions are merged, or companies are acquired. Monolithic applications often need the IT department to step in and make the necessary adjustments--incurring additional costs and delays. Until now.

    When Oracle unveiled the much-anticipated family of Oracle Fusion Applications at Oracle OpenWorld in September 2010, knowledge workers in particular had a lot to cheer about. Business users will soon have ready access to analytical information and collaboration tools in the context of what they are working on, so they can make better decisions when problems or opportunities arise. Additionally, the Oracle Fusion Applications platform will make it easy for business users to tweak processes, create new capabilities, and find information, often without the need for IT department assistance and while still following company guidelines. And IT leaders will be happy to hear about new deployment options, guided implementation and setup tools, and cost-saving management capabilities. Just as important, the underlying technologies in Oracle Fusion Applications will allow organizations to choose among their existing investments and next-generation enterprise applications so they can introduce innovations at a pace that makes the most business and financial sense.

    "Oracle Fusion Applications are architected so you don't have to do rip and replace," says Jim Hayes, managing director of the consulting firm Accenture. "That's very important for creating a business case that will get through the steering committee and be approved by the board. It shows you can drive value and make a difference in the near term."

    For these and other reasons, analysts and early adopters are calling Oracle Fusion Applications a game changer for enterprise customers. The differences become apparent in three key areas: the way we innovate, work, and adopt technology.

    Game Changer #1: New Standard for Innovation

    Change is a constant challenge for most businesses, whether the catalysts are market dynamics, new competition, or the ever-expanding regulatory environment. And, in an ongoing effort to differentiate, business leaders are constantly looking for new ways to do business, serve constituents, and bring new products and services to market. In addition, companies face significant costs to keep their applications up-to-date.
    For example, when a company adds new suppliers to a procurement system, the IT shop typically has to invest time, effort, and even consulting fees for custom integrations that allow various ERP systems to communicate with each other. Oracle Fusion Applications were built on Web services and a modular SOA foundation to ease customizations and integration activities among all applications--whether from Oracle or another vendor. Interfaces and updates written in ubiquitous Java, rather than a proprietary coding language, allow organizations to tap into existing in-house technical skills rather than seek expensive outside specialists. And with SOA, organizations can extend a feature set or integrate with other SOA environments by combining Web services such as "look up customer" into a new business process managed by the BPEL orchestration engine.

    Flexibility like this has long-term implications. "Because users capture these changes at a higher metadata layer, not in the application's code, changes and additions are protected even as new versions of Oracle Fusion Applications are released," says Steve Miranda, senior vice president of applications development at Oracle. "This is a much more sustainable approach because you don't incur costly customizations that prevent upgrades and other innovations." And changes are easier to make: if one change is made in the metadata, that change is automatically reflected throughout the application interface, business intelligence, business process, and business logic.

    Game Changer #2: New Standard for Work

    Boosting productivity comes down to doing the basics right: running business processes more efficiently and managing exceptions more effectively, so users can accomplish more in the course of a day or spend more quality time with the most profitable customers. The fastest way to improve process efficiency is to reduce the number of steps it takes to execute common tasks, such as ordering office equipment from an internal procurement system. Oracle Fusion Applications will deliver a complete role-based user experience with business intelligence and collaboration capabilities provided in the context of the work at hand.

    "We created every Oracle Fusion Applications screen by asking 'What does the user need to know?' 'What does he or she need to do?' and 'Who do they need to work with to get the job done?'" Miranda explains. So when the sales department heads need new laptops, the self-service procurement screen will not only display a list of approved vendors and configurations, but also a running list of reviews by coworkers who recently purchased the various models. Embedded intelligence may also display prevailing delivery lead times based on actual order histories, not the generic shipping dates vendors may quote.

    The pervasive business intelligence serves many other business activities across all areas of the enterprise. For example, a manager considering whether to promote a direct report can see the person's employee profile, with a salary history, appraisal summaries, and a rundown of skills and training. This approach to business intelligence also has implications for supply chain management. "One of the challenges at Ingersoll Rand is lack of visibility in our supply chain," says Mike Macrie, global director of enterprise applications for global industrial firm Ingersoll Rand.
"Oracle Fusion Applications are going to provide the embedded intelligence to give us that visibility and give us the ability to analyze those orders at any point in our supply chain." Oracle Fusion Applications will also create a "role-based user experience" that displays a work list of events that need attention, based on user job function. Role awareness guides users with daily lists of action items and exceptions. So a credit manager may see seven invoices with discounts that are about to expire or 12 suppliers that have been put on hold because credit memos are awaiting approval. Individualization extends to the search capabilities of Oracle Fusion Applications. The platform uses Web-style search screens powered by an Oracle enterprise search engine, with a security framework that filters search results so individuals will only see the internal information they're authorized to access. A further aid to productivity is Oracle Fusion Applications' integration with Web 2.0 collaboration and social networking resources for business environments. Hover-over text will reveal relevant contact information whenever the name of a person appears in an Oracle Fusion Application. Users can connect via an online chat, phone call, or instant message without leaving the main application, reducing the time required for an accounts payable staffer to resolve a mismatch between an invoiced charge and the service record, for example. Addresses of suppliers, customers, or partners will also initiate hover-over text to show contact details and Web-based maps. Finally, Oracle Fusion Applications will promote a new way of working with purpose-driven communities that can bring new efficiencies to everything from cultivating sales leads to managing new projects. As soon as a lead or project materializes, the applications will automatically gather relevant participants into an online community that shares member contact information, schedules, discussion forums, and Wiki pages. "Oracle Fusion Applications will allow us to take it to the next level with embedded Web 2.0 tools and the embedded analytics," says Steve Printz, CIO and vice president, supply chain management, at window-and-door manufacturer Pella. "[This] allows those employees today who are processing transactions to really contribute to the success of the company and become decision-makers." Game Changer #3: New Standard for Technology AdoptionAs IT becomes a dominant component of how businesses run and compete, organizations need to lower the cost of implementing applications and introducing new application features. In the past, rolling out new code often required creating a test bed system, moving beta code to a separate system for user feedback, and--once all the revisions were made--moving version one of the software onto production systems, where business users could finally get the needed new features. Oracle Fusion Applications will use a dedicated setup manager application to streamline this process. First, the setup manager will help scope out the project, querying users about their requirements. "From those questions and answers we determine the steps and the order of those steps that will enable that task," Miranda says. Next, system utilities will assign tasks to owners, track completion status, and monitor the overall status of a programming effort. Oracle Fusion Applications can then recommend Web services that allow users to migrate setup choices and steps across all the various deployments of the application. 
    Those setup capabilities automate the migration from test systems to production systems, as well as between different business units that may be using the same application. "The self-service ability of the setup manager helps business users change setups with very little intervention from the IT team," says Ravi Kumar, vice president at IT services company Infosys. "That to me is a big difference from how we've viewed enterprise applications before."

    For additional flexibility, organizations will be able to adopt Oracle Fusion Applications modules in either of two modes: a single-instance alternative uses one database for all Oracle Fusion Applications, while a "pillar mode" creates separate databases to underpin each application. This means IT departments running any one of Oracle's applications or even third-party applications can plug Oracle Fusion Applications modules into their environment and see additional business value created on top of their existing systems. And Oracle Fusion Applications offer a hybrid approach to deployment. The applications are all software-as-a-service-ready, so customers can choose on-premises, public or private cloud, or a combination of these to suit their business needs.

    It's that combination of flexibility and a roadmap for the future that may be the biggest game changer of all. "The Oracle Fusion Applications architecture allows us to migrate our company at a pace that's consistent with our business strategy, whereas before we might have had to do it with a massive upgrade," says Macrie of Ingersoll Rand. "We're looking forward to that architecture to really give us more flexibility in how we migrate over time."

    For More Information:
    User Input Key to the Success of Oracle Fusion Applications
    Transforming Coexistence into Strategic Value
    Under the Hood
    Oracle Fusion Applications
    Oracle Service-Oriented Architecture

    Read the article

  • CodePlex Daily Summary for Wednesday, May 12, 2010

    New Projects
    - AMP (Adamo Media Player): AMP is a custom media player that specializes in large media collections! Change the way you manage your media!
    - BoogiePartners: BoogiePartners provides a loose collection of utilities or extensions for Boogie and SpecSharp developers.
    - C# Developer Utility Library: Collection of helpful code functions including zero-config rolling file logging, parameter validation, reflection & object construction, I18N count...
    - Commerce Server 2009 Orders using Pipelines in a Console Application: Using Commerce Server 2009 outside of a Web application, this project shows how to create dummy Orders using Pipelines in a NON WEB CONTEXT, an examp...
    - CRM Queue Manager C#: CRM 4 Queue Manager written in C#. Based on the original VB.Net CRM Queue Manager with some changes, additions and feature changes. Converts emai...
    - DbBuilder: This is a tool intended for creation of an MS SQL database from scratch, incremental updates, management of redeployable objects, etc. using scripted ...
    - DbNetData: A collection of cross-vendor database interface classes for .NET written in C#, providing a consistent and simplified way of accessing SQL Server, Or...
    - Deploy Workflow Manager: Kick off a workflow associated with a SharePoint List on all list items at once. SharePoint Designer Workflows will also be included. This projec...
    - DigiLini - digital on screen ruler: Handy desktop tool for everyone who ever does something graphical on his computer. There is a sizable horizontal and vertical ruler. You can pos...
    - Dynamic Grid Data Type for Umbraco: The Dynamic Grid Data Type for Umbraco is a custom ASCX/C# control that was created to store tabular data as an Umbraco "Data Type". There's an abi...
    - FlashPlus: An extension for Google Chrome and a bookmarklet for Internet Explorer and Firefox, this project makes media content on browsers more usable. This...
    - Flat Database: Simple project making it really quick and easy to implement a simple flat-file database for an application.
    - Flood Watch Spam Control: Flood Watch is a C#-based application meant to help prevent site blacklistings. This application monitors outbound SMTP streams; if a stream excee...
    - Gamebox: gamebox
    - Kill Bill: Kill Bill covers the areas of customers, suppliers, products, sales and administration, divided into modules which together form the system for the ...
    - KND Decoder: The KND Decoder converts the annoying numeric values in the Java files of the Knuddels applet into readable strings. The new decoding method was...
    - Moonbeam: D3D11 Framework
    - Site Defender: Add-on for blogs or anywhere else there could be people spamming your website.
    - sMODfix: -
    - Stringzilla: String formatting classes designed for .NET 4 to enable named string formatting and conditional string formatting options based upon bound data c...
    - The CQRS Kitchen: The CQRS Kitchen is an example application built with Silverlight 4 that demonstrates how to implement a CQRS / Event Sourcing application with the ...
    - XELF Framework for XNA / .NET: A project containing C# source code developed by XELF as a library and framework for XNA Game Studio and the .NET Framework. Currently, the "XELF.Framework" for Windows XNA G...
    - xxxxxxxxx: xxxxxxx

    New Releases
    - ASP.NET MVC 2 - CommonLibrary.CMS: CommonLibrary CMS - Alpha: CommonLibrary CMS: a simple yet powerful CMS system in ASP.NET MVC 2 using C# 3.5. ActiveRecord-based support for Blogs, Widgets, Pages, Parts, Ev...
    - ASP.NET MVC Extensions: v1.0 RTM: v1.0 RTM
    - AWS SimpleDB Browser: Release 2: Miscellaneous fixes from the Alpha release. Built to work with the released version of the .Net Framework 4.0.
    - BIDS Helper: 1.4.3.0: This release addresses the following issues: For some people the BIDS Helper extensions to the Project Properties page for the SSIS Deploy plugin w...
    - Bojinx: Bojinx Core V4.5.13: Fixes / Enhancements: Sequencer now accepts Commands directly without using events. Command queue now accepts both events and commands. EventBu...
    - CF-Soft: TestCases_DROP1: TestCases_DROP1
    - CodePlex Runtime Intelligence Integration: PreEmptive.Attributes distributable: Contains a signed redistributable version of the PreEmptive.Attributes.dll library that non-.NET 4 applications can use to support in-code instrumenta...
    - Convection Game Engine (Basic Edition): Convection Basic (44772): Compiled version of Convection Basic change set 44772.
    - CRM Queue Manager C#: Initial release: Initial release of the CRM Queue Manager in C#. Release includes both the installer as well as the latest source code in zipped format. Running...
    - DbNetData: DbNetData.1.0: Initial release
    - DigiLini - digital on screen ruler: DigiLini 1.0: First stable version.
    - Facturator - Create invoices easy and fast: Facturator.zip: current stable version
    - Flat Database: Initial Release: This is the working initial release of FlatDB
    - KharaPOS: MSDN Magazine Sample: This is the release that supports the MSDN magazine article "Enterprise Patterns with WCF RIA Services". Some of the project was affected by the upg...
    - Let's Up: 1.1 (Build 100511): Added short and long break feature.
    - Mesopotamia Experiment: Mesopotamia 1.2.88: Primarily bug fixes. Bug fixes: fixed bug in synapse mutating whereby new ones were of one side only, e.g. source or target; fixed bug in screen ...
    - NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.123: The NodeXL class libraries can be used to display network graphs in .NET applications. To include a NodeXL network graph in a WPF desktop or Windo...
    - NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.123: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's New: This version makes...
    - Over Store: OverStore 1.18.0.0: Version number is increased. New callback events added: Persisting and Persisted, which are raised on actually writing object data into the stora...
    - Reusable Library: V1.0.7: A collection of reusable abstractions for enterprise application developers
    - Reusable Library Demo: V1.0.5: A demonstration of reusable abstractions for enterprise application developers
    - Runtime Intelligence Endpoint Starter Kit: Runtime Intelligence Endpoint Starter Kit 20100511: Added crossdomain policy files so Silverlight applications can send usage data from anywhere.
    - Scorm2CC: SCORM2CC Release 0.12.0.58711 net-2.0 Alpha: SCORM2CC Release 0.12.0.58711 net-2.0 Alpha
    - Shake - C# Make: Shake v0.1.14: Updated API to match the current documentation.
    - SharePoint LogViewer: SharePoint Log Viewer 2.6: This is a maintenance release. It has several bug fixes.
    - Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): v1.4: As 1.3 with the following changes: addition of a 'Site Data' webpart; Site Directory can now be a site collection root or sub-site; scan job now ...
    - sMODfix: sMODfix v1.0: Basic version
    - smtp4dev: smtp4dev 2.0: Smtp4dev 2.0 is powered by a completely re-written server component and now offers SSL/TLS and AUTH support.
    - SocialScapes: SocialScapes Sidebar 1.0: The SocialScapes Sidebar is the first release of the new SocialScapes suite. There will be more modules to come along with a complete data aggrega...
    - SSIS Multiple Hash: Multiple Hash V1.2: This is version 1.2 of the Multiple Hash SSIS Component. It supports SQL 2005 and SQL 2008, although you have to download the correct install pack...
    - Surfium: Beta build: Somehow tested
    - Terminals: Terminals 1.9a - RDP6 Support: The major change in this release is that Terminals has been upgraded to require RDP Client 6 to be installed for creating RDP connections. Backward...
    - TFS Compare: Release 3.0: This release provides a new feature - the ability to navigate back and forth between document differences. Also, this release provides support for...
    - VCC: Latest build, v2.1.30511.0: Automatic drop of latest build
    - youtubeFisher: youtubeFisher v2.0: What's new: new method of capturing youtube parameters; HD 720p video support; full-HD 1080p video support; added Cancel option to stop file downl...

    Most Popular Projects
    - WBFS Manager
    - Rawr
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - patterns & practices – Enterprise Library
    - Microsoft SQL Server Community & Samples
    - PHPExcel
    - ASP.NET

    Most Active Projects
    - patterns & practices – Enterprise Library
    - Mirror Testing System
    - Rawr
    - The Information Literacy Education Learning Environment (ILE)
    - Caliburn: An Application Framework for WPF and Silverlight
    - white
    - BlogEngine.NET
    - PHPExcel
    - jQuery Library for SharePoint Web Services
    - TweetSharp

    Read the article

  • CodePlex Daily Summary for Monday, April 26, 2010

    New Projects
    - .Net 4.0 WPF Twitter Client: A .Net 4.0 Twitter client application built in WPF using Prism
    - Aop 权限管理 license授权: Authorization management for AOP applications
    - asp.mvc example with ajax: asp.mvc example with ajax
    - Bojinx: Flex 4 framework and management utilities: IoC container; flexible and powerful event management and handling; create your own Processors and Annotat...
    - Browser Gallese: The Browser Gallese is for browsing the internet almost SECURELY and for free.
    - eaaCaletti: 2nd semester project @ Århus
    - Enki Char 2 BIN: A program that converts characters to binary and vice versa
    - FsGame: A wrapper over XNA to improve its usage from F#
    - jouba Images Converter: Convert piles of images to all key formats in one go! Make quick adjustments - resize, rotate. Get your pictures ready to be printed or uploaded to...
    - MVC2 RESTful Library: A RESTful library using ASP.NET MVC 2.0
    - My Schedule Tool: Schedule can be used to create simple or complex schedules for executing tens.
    - NAntDefineTasks: NAnt Define Tasks allows you to define NAnt tasks in terms of other NAnt tasks, instead of having to write any C# code.
    - PowerShell Admin Modules: PAM supplies a number of PowerShell modules satisfying the needs of Windows administrators. By pulling together functions for administering files a...
    - Search Youtube Direct: Another project by URPROB. This is a sample application that searches videos from youtube. The videos are shown with different attributes catagor...
    - Simple Site Template: A framework template to make website projects more convenient~
    - Slaff's demo project: this is my demo project
    - SSIS Credit Card Number Validator 08 (CCNV08): Credit Card Number Validator 08 (CCNV08) is a custom SSIS Data Flow Transformation Component for SQL Server 2008 that determines whether the given ...
    - Su-Lekhan: Su-Lekhan is a lightweight source code editor with refactoring support which is extensible to multiple languages
    - V-Data: V-Data is a data quality tool that considerably improves the quality of the information available in any syst...
    - zy26: zy26 was here...

    New Releases
    - Active Directory User Properties Change: Source Code v 1.0: Full source code + database creation SQL script. You need to change some values in web.config and DataAccess.vb. Hope you will get the most benefit!!
    - Bojinx: Bojinx Core V4.5: V4.5 stable build
    - Browser Gallese: Browser 1.0.0.9: The Browser Gallese browses the internet fast, free and secure from birth. On this site are the installation files and the source code, written with...
    - CSS 360 Planetary Calendar: LCO: Documents for the LCO Milestone
    - GArphics: Backgrounds: GArphics examples converted into PNG images with a resolution of 1680x1050.
    - GArphics: Beta v0.8: Beta v0.8. Includes built-in examples and a lot of new settings.
    - Helium Frog Animator: Helium Frog Flow Diagram: This file contains a flow diagram showing how the program functions. The image is in .bmp format.
    - Highlighterr for Visual C++ 2010: Highlighterr for Visual C++ 2010 Test Release: Seems I'm updating this on a daily basis now; I keep finding things to add/fix. So stay tuned for updates! Current version: 1.02. Now picks up chan...
    - HTML Ruby: 6.22.2: Slightly improved performance on inserted ruby. Cleaned up the code some.
    - iTuner - The iTunes Companion: iTuner 1.2.3767 Beta 3: Beta 3 requires iTunes 9.1.0.79 or later. A Librarian status panel showing active and queued Librarian scanners. This will be hidden behind the ...
    - Jet Login Tool (JetLoginTool): Logout - 1.5.3765.28026: Changed: will logout from Jet if the "Stop" button is hit. Fixed: bug where a notification is received that the engine has stopped, but it hasn't ...
    - jouba Images Converter: Release: The first beta release for Images Converter
    - Keep Focused - an enhanced tool for Time Management using Pomodoro Technique: Release 0.3 Alpha: Release Notes (Release 0.3 Alpha): The alpha preview provides the basic functionality used by the Pomodoro Technique. Files: 1. Keep Focused Exe...
    - Mercurial to Team Foundation Server Work Item Hook: Version 0.2: Changes include: eliminates the need for the Suds package by hand-rolling its own SOAP implementation; ability to specify columns and computed col...
    - Mews: Mews.Application V0.8: Installation instructions. New features: 16569, 16571. Fixed issues: 16668. Missing features: 16567. Known remaining issues: large text-only articles require la...
    - MiniTwitter: 1.12: MiniTwitter 1.12 changes. Note: this version is under development and may be updated several times before the official release. It may also be unstable, so 1.11 is recommended for anyone not wanting to try the new features. Added: provisional support for the List feature; as a timeline filtering condition...
    - Multiwfn: multiwfn1.3.1_binary: multiwfn1.3.1_binary
    - Multiwfn: multiwfn1.3.1_source: multiwfn1.3.1_source
    - MVC2 RESTful Library: Source Code: VS2010 source code
    - My Schedule Tool: MyScheduleTool 0.1: Initial version of MyScheduleTool. If you have any comments about the project, please let me know. My email is cmicmobile@gmail.com
    - Nestoria.NET: Nestoria.NET 0.8.1: Provides access to the Nestoria API. Documentation contains a basic getting-started guide. Please visit Darren Edge's blog for ongoing developmen...
    - OgmoXNA: OgmoXNA Binaries: Binary release libraries for OgmoXNA and OgmoXNA content pipeline extensions. Includes both Windows and Xbox360 binaries in separate directories.
    - OgmoXNA: OgmoXNA Source: NOTE: THE ASSETS PROVIDED IN THE DEMO GAME DISTRIBUTED WITH THIS SOURCE ARE PROPERTY OF MATT THORSON, AND ARE NOT TO BE USED FOR ANY OTHER PROJECT...
    - PokeIn Comet Ajax Library: PokeIn v07 x64: New features: instant client disconnected detection; option to disable server push technology
    - PokeIn Comet Ajax Library: PokeIn v07 x86: New features: instant client disconnected detection; option to disable server push technology
    - PowerShell Admin Modules: PAM 0.1: Version 0.1 includes the share module
    - Rehost Image: 1.3.8: Added option to remove resolution/size text from the thumbnails of uploaded images in ImageShack. Fixed issue with FTP servers that give back jun...
    - Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): v1.3: Same as v1.2 with the following changes: source code and WSP updated to SharePoint 2010 and Visual Studio 2010 RTM releases; fields separated into...
    - SSIS Credit Card Number Validator 08 (CCNV08): CCNV08-1004Apr-R1: Version released in Apr 2010
    - SSIS Twitter Suite: v2.0: Upgraded Twitter Task to SSIS 2008
    - sTASKedit: sTASKedit v0.7 Alpha: Features: import of v56 (CN 1.3.6 v101) and v79 (PWI 1.4.2 v312) tasks.data files; export to v56 (Client) and v55 (Server); clone & delete tasks ...
    - StyleCop for ReSharper: StyleCop for ReSharper 5.0.14724.1: StyleCop for ReSharper 5.0.14724.1: StyleCop 4.3.3.0; ReSharper 5.0.1659.36; Visual Studio 2008 / Visual Studio 2010. Fixes: G...
    - Windows Live ID SSO Luminis Integration: Version 1.2: This new version has been updated to accommodate the new release of the Windows Live SSO Toolkit v4.2.
    - WPF Application Framework (WAF): WPF Application Framework (WAF) 1.0.0.11: Version: 1.0.0.11 (Milestone 11): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requi...
    - XAML Code Snippets addin for Visual Studio 2010: Release for VS 2010 RTM: Release built for Visual Studio 2010 RTM
    - XP-More: 1.0: Release notes: improved VM folder recognition; added About window; added input validation and exception handling; if exist, vud and vpcbackupfile...

    Most Popular Projects
    - Rawr
    - WBFS Manager
    - AJAX Control Toolkit
    - Silverlight Toolkit
    - Microsoft SQL Server Product Samples: Database
    - patterns & practices – Enterprise Library
    - Windows Presentation Foundation (WPF)
    - ASP.NET
    - Microsoft SQL Server Community & Samples
    - PHPExcel

    Most Active Projects
    - patterns & practices – Enterprise Library
    - Rawr
    - GMap.NET - Great Maps for Windows Forms & Presentation
    - BlogEngine.NET
    - Particle Plot Pivot
    - NB_Store - Free DotNetNuke Ecommerce Catalog Module
    - DotNetZip Library
    - Farseer Physics Engine
    - N2 CMS
    - patterns & practices: Composite WPF and Silverlight

    Read the article

  • Part 2: Career development as a Software Developer without becoming a manager.

    - by albertpascual
    Seems like my previous post, inspired by the work of Michael “Doc” Norton, was a great success, judging by the amount of emails I have received. Yet I am amazed how many people didn’t want to discuss their questions in the comments section. I would encourage people to be more public; still, I would like to reply to all of you in this public medium, and I still welcome those emails. What I found out is that many people feel like me: they want to be developers and still be compensated for their experience without having to take a job as a manager. Their perfect day is a full day of coding and learning. Many believe their companies will never pay a manager’s salary to a developer, no matter what. Most of you ask how to get the ball rolling, and it is that group I’m addressing here; the other group will never try.

    Which companies understand a developer's value, and where can I find them?

    This is a very difficult question to answer, and I don’t have a list of those companies or departments. I have seen in my past signs of companies bending over backwards to compensate, in more ways than monetary ones, a developer who is a good resource to them. Allowing a person to move out of state and still work for the company from home is a sign that the company goes by individual cases. Allowing them to go to a conference that will not directly benefit the company is another big sign, as are simple things like flexible hours and letting some people work from home. To see those signs you need to have been working in that company for a while; look at the departments where the manager takes care of employees on an individual basis. Look for the department where people quietly get extra perks, where some people in the department work from home or remotely. In my experience, though it is not always true, medium-to-big companies are prompt to recognize good developers. Then again, some companies just don’t get it, and that is when you see many technical people managing developers. For all the people who email me stating that developers can also be very good managers: I do not disagree. I just think that a good developer loves writing code, and when you remove that part, the better salary isn’t enough to keep a developer happy. Burned-out developers appreciate being promoted to managers.

    How do I know I work in a bad company?

    In my experience as a consultant I have seen many companies. A few signs of companies that will not recognize good developers:
    - The turnover is pretty high and developers are moving out at a high rate; no rocket scientist needs to tap you on the shoulder.
    - The company is always looking to outsource its development resources.
    - The product is not that interesting, nor does the company care much about the final result and support.
    - Code sweatshops. You’ll know when you start working in one of those. Run for the hills!

    Where do I start?

    Disclaimer: I have based this post only on the work of Michael “Doc” Norton; this is just my interpretation and ideas.

    First, look at Michael “Doc” Norton’s presentation Take Control of Your Development Career (http://docondev.blogspot.com/). That should be the first thing any developer looks at, and they should follow it like a pattern. I would personally recommend finding some language or pattern you are interested in and learning it; learn something that will make you happy.

    Second, join a user group and get involved in the community. There are hundreds of user groups, and I’m sure you’ll find one in your city or near your town. Code Camps and Developer Meet Ups are also good resources.
    Third, I would join an open source project you are interested in or, better yet, create a new open source project with the new technology that you have learned, and get coding.

    Fourth, create a Twitter account and follow the people who talk about the technology you are interested in.

    If you follow the four steps above I think you’ll be on your way; once they are complete, when you release your open source project, you can say that you have accomplished the first steps. Now, do not expect anything to change in your career life: you are changing, and you should not expect anything in return besides borrowing some time from sleep and your family. Creating a good schedule may help you; I find wasted time in many places and put it to use. Flying for work is actually one of those: it allows me to do my best work on an airplane, with no need to borrow time from anywhere else. Making sure you always have a light, charged laptop is so important.

    Next steps, following the Michael “Doc” Norton pattern (or my interpretation of it)

    First, help run a user group or, better yet, start a new user group. I’ll add as well: go to one conference a year and to free development events around your city (Code Camps, Geek Dinners, etc.). There are many free events sponsored by different companies for developers to get to know their products; I highly recommend those as the way to get connected.

    Second, choose a mentor. This is a very hard thing to do, in my experience: find an expert in the technology you are learning who has the time for you. It is difficult; I wish you the best of luck.

    Third, learn another technology or pattern and open your horizons a little bit more. Why not? If you had fun previously, keep doing it.

    Fourth, get involved in forums to answer and ask questions; getting noticed in public forums is rewarding for your ego after such a long journey.

    Final steps, following the Michael “Doc” Norton pattern

    Teach what you know, become humble about your knowledge, find as many opportunities as you can to teach and to get involved with the community, and bring all that to your day job. Mr. Norton talks about getting naked: expose to others both what you know and what you do not know. You are never too important for small opportunities, yet don’t be afraid to take on anything big and learn from the experience. Any time you have the opportunity to talk to somebody who has reached the point where the community knows his or her name, you should learn from it. Take opportunities that won’t make you money yet will make you happy. Sometimes you need to spend money and time. Registering talks at Code Camps and Dev Meet Ups is free; also go to conferences, development summits and Geek Dinners, for example. One day, people will pay you to attend.

    When will all this pay off? I don’t know; I’m still on the path. During your journey you may get little acknowledgements that you are on the correct path, and in my case I think those are the little signs that tell you about your journey. I was awarded the Microsoft Most Valuable Professional for ASP.NET in 2007, 2008, 2009 and 2010. I was selected to speak at DevConnections in Las Vegas in 2010 and Orlando in 2011. I do believe that I have a long way to go, yet what I do makes me happy and I hope I can keep doing it for years to come. Every year I can see an improvement in my code, and more frameworks and languages are under my belt; I have learned to embrace them all. In my daily job, I have been able to work on a few projects beyond my department.
    I’m a learner and a believer in the Michael “Doc” Norton pattern, and I am looking forward to learning more about it so I can apply it better. In my short journey I now see my mistakes, and I did a few things right: I have been listening to intelligent people and have not been afraid to move along with technology changes. In my professional life, I have tried to avoid being tied to only one technology and product. I have always shared my code and never confused anybody who wanted to take over any of my projects; I didn’t think of anything I created as my own, nor did I care too much when politics didn’t see my vision. I stayed flexible, ready and visible, yet humble. I kept my head just below the clouds and avoided managers’ meetings. I credit my manager for my success, and I publicly faulted only myself for the failures. Hope this helps. Cheers, Al. Follow me on Twitter. Read my previous post.

    Read the article

  • Best Practices - updated: which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). This is an updated and enlarged version of the post on this topic originally published in October 2012.

    One frequent question is "what type of domain should I use to run applications?" There used to be a simple answer, "run applications in guest domains in almost all cases", but now there are more things to consider. Enhancements to Oracle VM Server for SPARC and the introduction of systems like the current SPARC servers, including the T4 and T5 systems, the Oracle SuperCluster T5-8 and the Oracle SuperCluster M6-32, provide scale and performance much higher than the original servers that ran domains. Single-CPU performance, I/O capacity and memory sizes are much larger now, and far more demanding applications are now being hosted in logical domains. The general advice continues to be "use guest domains in almost all cases", meaning "use virtual I/O rather than physical I/O", unless there is a specific reason to use the other domain types. The sections below discuss the criteria for choosing between domain types.

    Review: division of labor and types of domain

    Oracle VM Server for SPARC offloads management and I/O functionality from the hypervisor to domains (also called virtual machines), providing a modern alternative to older VM architectures that use a "thick", monolithic hypervisor. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, further improving reliability and security. Oracle VM Server for SPARC defines the following types of domain, each with their own roles:

    Control domain - the management control point for the server. It runs the logical domain daemon and constraints engine, and is used to configure domains and manage resources. The control domain is the first domain to boot on a power-up, is always an I/O domain, and is usually a service domain as well. It doesn't have to be, but there's no reason not to leverage it for virtual I/O services. There is one control domain per T-series system, and one per Physical Domain (PDom) on an M5-32 or M6-32 system. M5 and M6 systems can be physically domained, with logical domains within the physical ones.

    I/O domain - a domain that has been assigned physical I/O devices. The devices may be:
    - one or more PCIe root complexes (in which case the domain is also called a root complex domain). The domain has native access to all the devices on the assigned PCIe buses. The devices can be any device type supported by Solaris on the hardware platform.
    - an SR-IOV (Single Root I/O Virtualization) function. SR-IOV lets a physical device (also called a physical function, or PF) be subdivided into multiple virtual functions (VFs), which can be individually assigned directly to domains. SR-IOV devices currently can be Ethernet or InfiniBand devices.
    - direct I/O ownership of one or more PCI devices residing in a PCIe bus slot. The domain has direct access to the individual devices.
    An I/O domain has native performance and functionality for the devices it owns, unmediated by any virtualization layer. It may also have virtual devices.

    Service domain - a domain that provides virtual network and disk devices to guest domains. The services are defined by commands that are run in the control domain. It is usually an I/O domain as well, in order for it to have devices to virtualize and serve out.
    Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.

    Device considerations

    Consider the following when choosing between virtual devices and physical devices:
    - Virtual devices provide the best flexibility - they can be dynamically added to and removed from a running domain, and you can have a large number of them, up to a per-domain device limit.
    - Virtual devices are compatible with live migration - domains that exclusively have virtual devices can be live migrated between servers supporting domains.
    On the other hand:
    - Physical devices provide the best performance - in fact, native "bare metal" performance. Virtual devices approach physical device throughput and latency, especially with virtual network devices that can now saturate 10GbE links, but physical devices are still faster.
    - Physical I/O devices do not add load to service domains - all the I/O goes directly from the I/O domain to the device, while virtual I/O goes through service domains, which must be provided sufficient CPU and memory capacity.
    - Physical I/O devices can be other than network and disk - we virtualize network, disk and serial console, but physical devices can be the wide range of attachable certified devices, including things like tape and CDROM/DVD devices.
    In some cases the lines are now blurred. Virtual devices have better performance than previously: starting with Oracle VM Server for SPARC 3.1 there is near-native virtual network performance. There is more flexibility with physical devices than before: SR-IOV devices can now be dynamically reconfigured on domains. Tradeoffs one used to have to make are now relaxed: you can often have the flexibility of virtual I/O with performance that previously required physical I/O, and you can have the performance and isolation of SR-IOV with the ability to dynamically reconfigure it, just like with virtual devices.

    Typical deployment

    A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain, in order to leverage the available PCI buses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain, so it doesn't "waste" the I/O resources it uses.

    A simple configuration consists of a control domain that is also the one I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure, as described in Availability Best Practices - Avoiding Single Points of Failure. Guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or I/O path or device does not result in an application outage. This also permits "rolling upgrades", in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multi-path I/O.)

    In this type of deployment, control, I/O and service domains are used for virtualization infrastructure, while applications run in guest domains.
    Changing application deployment patterns

    The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O buses, so there is more I/O capacity that can be used for applications. Increased server capacity made it attractive to run more vertically-scaled applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the Oracle SuperCluster engineered systems mentioned previously. In those engineered systems, I/O domains are used for high-performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is Single Root I/O Virtualization (SR-IOV), which makes it possible to give domains direct connections and native I/O performance for selected I/O devices. Not all I/O domains own PCIe root complexes, and there are increasingly more I/O domains that are not service domains; they use their I/O connectivity for the performance of their own applications.

    However, there are some limitations and considerations. At this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command must be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. For reference, an excellent guide to secure deployment of domains by Stefan Hinker is at Secure Deployment of Oracle VM Server for SPARC.

    To recap:
    - Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration. They should be considered the default domain type to use unless there is a specific requirement that mandates an I/O domain.
    - I/O domains can be used for applications with the highest performance requirements. Single Root I/O Virtualization (SR-IOV) makes this more attractive by giving direct I/O access to more domains, and by permitting dynamic reconfiguration of SR-IOV devices. Today's larger systems provide multiple PCIe buses - for example, 16 buses on the T5-8 - making it possible to configure multiple I/O domains, each owning its own bus.
    - Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect the domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so interruption of service in one service domain does not cause an application outage.
    - The control domain should in general not be used to run applications, for the same reason. Oracle SuperCluster uses the control domain for applications, but it is an exception: it's not a general-purpose environment, it's an engineered system with specifically configured applications and optimizations for optimal performance.
    These are recommended "best practices" based on conversations with a number of Oracle architects.
Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.
Summary
Higher capacity servers that run Oracle VM Server for SPARC are attractive for applications with the most demanding resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be applied on T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide peak performance for critical applications. That said, the improved virtual device performance in Oracle VM Server for SPARC means that the default choice should still be guest domains with virtual I/O.

    Read the article

  • T-SQL Tuesday #53-Matt's Making Me Do This!

    - by Most Valuable Yak (Rob Volk)
Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t). This month we are hosted by Matt Velic (b|t) who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am! Nah, not really. They're absolutely right, this one is gonna be fun! I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers. I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it!
One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?)
My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin-level logins on that server.
So here's a standard template for a logon trigger to address those pesky sa users:

CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER
WITH ENCRYPTION, EXECUTE AS N'sa'
AFTER LOGON
AS
IF ORIGINAL_LOGIN() <> N'sa' OR APP_NAME() LIKE N'SQL Agent%'
    RETURN;
-- interesting stuff goes here
GO

What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired). That's a good use for logon triggers, but really not tricky enough for this blog. Some of my suggestions are below:

WAITFOR DELAY '23:59:59';

Or:

EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'

Or:

EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3;
EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME;
EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;';
EXEC msdb.dbo.sp_start_job @job_name=N'`';

Really, I don't want to spoil your own exploration, so try it yourself! The thing I really like about these is that they let me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!". Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (Had to redo all the following code when SSMS locked up.)
Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW!
Here's how RG can do that:

USE master;
GO
CREATE FUNCTION dbo.SA_LOGIN_PRIORITY()
RETURNS sysname
WITH SCHEMABINDING, ENCRYPTION
AS
BEGIN
RETURN CASE
    WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%'
    THEN N'SA_LOGIN_PRIORITY'
    ELSE N'default' END
END
GO
CREATE RESOURCE POOL SA_LOGIN_PRIORITY
WITH (
     MIN_CPU_PERCENT = 0
    ,MAX_CPU_PERCENT = 1
    ,CAP_CPU_PERCENT = 1
    ,AFFINITY SCHEDULER = (0)
    ,MIN_MEMORY_PERCENT = 0
    ,MAX_MEMORY_PERCENT = 1
--  ,MIN_IOPS_PER_VOLUME = 1
--  ,MAX_IOPS_PER_VOLUME = 1  -- uncomment for SQL Server 2014
);
CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY
WITH (
     IMPORTANCE = LOW
    ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1
    ,REQUEST_MAX_CPU_TIME_SEC = 1
    ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1
    ,MAX_DOP = 1
    ,GROUP_MAX_REQUESTS = 1
)
USING SA_LOGIN_PRIORITY;
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY);
ALTER RESOURCE GOVERNOR RECONFIGURE;

From top to bottom:
- Create a classifier function to determine which pool the session should go to. More info on classifier functions.
- Create the pool and provide a generous helping of resources for the sa login.
- Create the workload group and further prioritize those resources for the sa login.
- Apply the classifier function and reconfigure RG to use it.
I have to say this one is a bit sneakier than the logon trigger, not least because you don't get any error messages. I heartily recommend testing it in Management Studio, and clicking around the UI a lot; there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included! You'll notice I made allowances for SQL Agent jobs owned by sa; they'll go into the default workload group. You can add your own overrides to the classifier function if needed.
Some interesting ideas I didn't have time for but expect you to get to before me:
- Set up different pools/workgroups with different settings and randomize which one the classifier chooses
- Do the same but base it on time of day (the Books Online example covers this)...
- Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you.
And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection:

ALTER RESOURCE GOVERNOR DISABLE;

That will let you go in and either fix (or drop) the pools, workgroups and classifier function. So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit!
Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here. Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered. Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server. These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly. SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only. And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them. Fortunately there are Event Notifications!
Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized. They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things.
So here's the setup:

USE msdb;
GO
CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act
WITH ENCRYPTION
AS
DECLARE @x XML, @message nvarchar(max);
RECEIVE @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q;
IF @x.value('(//LoginName)[1]','sysname')=N'sa'
   AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%'
BEGIN
    -- interesting activation procedure stuff goes here
END
GO
CREATE QUEUE SA_LOGIN_PRIORITY_q
    WITH STATUS=ON, RETENTION=OFF,
    ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER);
CREATE SERVICE SA_LOGIN_PRIORITY_s
    ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en
    ON SERVER WITH FAN_IN
    FOR AUDIT_LOGIN
    TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database'
GO

From top to bottom:
- Create the activation procedure for the event notification queue.
- Create the queue to accept messages from the event notification, and activate the procedure to process those messages when received.
- Create the service to send messages to that queue.
- Create the event notification on AUDIT_LOGIN events that fires the service.
I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped.
So what to put in place for "interesting activation procedure code"? Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message:

SET @message = @x.value('(//HostName)[1]','sysname')
    + N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname')
    + N' as SA at ' + @x.value('(//StartTime)[1]','sysname')
    + N' using the ' + @x.value('(//ApplicationName)[1]','sysname')
    + N' program. That''s why you''re getting this message and the attached pornography which'
    + N' is bloating your inbox and violating company policy, among other things. If you know'
    + N' this person you can go to their desk and hit them, or use the following SQL to end their session: KILL '
    + @x.value('(//SPID)[1]','sysname')
    + N'; Hopefully they''re in the middle of a huge query that they need to finish right away.'
EXEC msdb.dbo.sp_send_dbmail
    @recipients=N'[email protected]',
    @subject=N'SA Login Alert',
    @query_result_width=32767,
    @body=@message,
    @query=N'EXEC sp_readerrorlog;',
    @attach_query_result_as_file=1,
    @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg'

I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it. The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned; I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything! Except KILL the SPID, or SHUTDOWN the server directly. I'm still working on those.
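If you're experimenting along at home, it's handy to be able to confirm what's registered and to tear it all down afterwards. Here's a minimal sketch, assuming the object names used above (run the Service Broker drops in whichever database you created them in - msdb here):

-- What's currently registered at the server level?
SELECT name FROM sys.server_triggers;             -- logon triggers
SELECT name FROM sys.server_event_notifications;  -- event notifications

-- Tear down the event notification plumbing
USE msdb;
DROP EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER;
DROP SERVICE SA_LOGIN_PRIORITY_s;
DROP QUEUE SA_LOGIN_PRIORITY_q;
DROP PROCEDURE dbo.SA_LOGIN_PRIORITY_act;

-- Tear down the Resource Governor pieces: detach the classifier first,
-- since the workload group and pool can't be dropped while it's in use
USE master;
ALTER RESOURCE GOVERNOR DISABLE;
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL);
ALTER RESOURCE GOVERNOR RECONFIGURE;
DROP WORKLOAD GROUP SA_LOGIN_PRIORITY;
DROP RESOURCE POOL SA_LOGIN_PRIORITY;
DROP FUNCTION dbo.SA_LOGIN_PRIORITY;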

    Read the article

  • Monitoring SQL Server Agent job run times

    - by okeofs
Introduction
A few months back, I was asked how long a particular nightly process took to run. It was a super question, and the one thing that struck me was that there was a plethora of factors affecting the processing time. This said, I developed a query to ascertain process run times and the average nightly run times, and applied some KPI's to the end query. The end goal was to enable me to quickly detect anomalies and processes that are running beyond their normal times. As many of you are aware, most of the necessary data for this type of query lies within the MSDB database. The core portion of the query is shown below.

select sj.name, sh.run_date, sh.run_duration,
       case when len(sh.run_duration) = 6 then convert(varchar(8),sh.run_duration)
            when len(sh.run_duration) = 5 then '0' + convert(varchar(8),sh.run_duration)
            when len(sh.run_duration) = 4 then '00' + convert(varchar(8),sh.run_duration)
            when len(sh.run_duration) = 3 then '000' + convert(varchar(8),sh.run_duration)
            when len(sh.run_duration) = 2 then '0000' + convert(varchar(8),sh.run_duration)
            when len(sh.run_duration) = 1 then '00000' + convert(varchar(8),sh.run_duration)
       end as tt
from dbo.sysjobs sj with (nolock)
inner join dbo.sysjobHistory sh with (nolock)
    on sj.job_id = sh.job_id
where sj.name = 'My Agent Job'
  and sh.[message] like '%The job%'

Run_date and run_duration are obvious fields. The field 'Name' is the name of the job that we wish to follow. The only major challenge was that the format of the run duration was not as 'user friendly' as I would have liked. As an example, the run duration 1 hour 10 minutes and 3 seconds would be displayed as 11003, whereas I wanted it displayed in a more user friendly manner as 01:10:03. In order to achieve this effect, we need to add leading zeros to the run_duration based upon the case logic shown above. At this point what we need to do is add colons: one between the hours and minutes and one between the minutes and seconds. To achieve this I nested the query shown above (in purple) within a 'super' query. Thus the run time ([Run Time]) is constructed by concatenating a series of substrings (see below in blue).

select run_date,
       substring(convert(varchar(20),tt),1,2) + ':' +
       substring(convert(varchar(20),tt),3,2) + ':' +
       substring(convert(varchar(20),tt),5,2) as [run_time]
from (
    select sj.name, sh.run_date, sh.run_duration,
           case when len(sh.run_duration) = 6 then convert(varchar(8),sh.run_duration)
                when len(sh.run_duration) = 5 then '0' + convert(varchar(8),sh.run_duration)
                when len(sh.run_duration) = 4 then '00' + convert(varchar(8),sh.run_duration)
                when len(sh.run_duration) = 3 then '000' + convert(varchar(8),sh.run_duration)
                when len(sh.run_duration) = 2 then '0000' + convert(varchar(8),sh.run_duration)
                when len(sh.run_duration) = 1 then '00000' + convert(varchar(8),sh.run_duration)
           end as tt
    from dbo.sysjobs sj with (nolock)
    inner join dbo.sysjobHistory sh with (nolock)
        on sj.job_id = sh.job_id
    where sj.name = 'My Agent Job'
      and sh.[message] like '%The job%'
) a

Now that I had each nightly run time in hours, minutes and seconds (01:10:03), I decided that it would be very productive to calculate a rolling run time average. To do this, I decided to do the calculations in base units of seconds. This said, I encapsulated the query shown above into a further 'super' query (see the code in red below). This encapsulation is shown below. The astute reader will note that I used implicit casting from string to integer, which is not the best method to use; however, it works.
This said, if I were constructing the query again I would definitely do an explicit convert. To recap: I now have a key field of '1', each and every applicable run date, and the total number of SECONDS that the process ran for each run date, all of this data within the #rawdata1 temporary table.

select 1 as keyy, run_date,
       (substring(b.run_time,1,2)*3600) +
       (substring(b.run_time,4,2)*60) +
       (substring(b.run_time,7,2)) as run_time_in_Seconds,
       run_time
into #rawdata1
from (
    select run_date,
           substring(convert(varchar(20),tt),1,2) + ':' +
           substring(convert(varchar(20),tt),3,2) + ':' +
           substring(convert(varchar(20),tt),5,2) as [run_time]
    from (
        select sj.name, sh.run_date, sh.run_duration,
               case when len(sh.run_duration) = 6 then convert(varchar(8),sh.run_duration)
                    when len(sh.run_duration) = 5 then '0' + convert(varchar(8),sh.run_duration)
                    when len(sh.run_duration) = 4 then '00' + convert(varchar(8),sh.run_duration)
                    when len(sh.run_duration) = 3 then '000' + convert(varchar(8),sh.run_duration)
                    when len(sh.run_duration) = 2 then '0000' + convert(varchar(8),sh.run_duration)
                    when len(sh.run_duration) = 1 then '00000' + convert(varchar(8),sh.run_duration)
               end as tt
        from dbo.sysjobs sj with (nolock)
        inner join dbo.sysjobHistory sh with (nolock)
            on sj.job_id = sh.job_id
        where sj.name = 'My Agent Job'
          and sh.[message] like '%The job%'
    ) a
) b

Calculating the average run time
We now select each run time in seconds from #rawdata1 and place the values into another temporary table called #rawdata2. Once again we create a 'key', a hardwired '1'.

select 1 as Keyy, run_time_in_Seconds into #rawdata2 from #rawdata1

The purpose of doing so is to make the average time AVG() available to the query immediately, without having to do awkward grouping.
Applying KPI Logic
At this point, we shall apply some logic to determine whether processing times are within the norms. We do this by assigning colour names. Obviously, this example is a super one for SSRS and traffic light icons.

select rd1.run_date, rd1.run_time, rd1.run_time_in_Seconds,
       Avg(rd2.run_time_in_Seconds) as Average_run_time_in_seconds,
       case when Convert(decimal(10,1),rd1.run_time_in_Seconds)/Avg(rd2.run_time_in_Seconds) <= 1.2 then 'Green'
            when Convert(decimal(10,1),rd1.run_time_in_Seconds)/Avg(rd2.run_time_in_Seconds) < 1.4 then 'Yellow'
            else 'Red'
       end as [color],

Calculating the Average Run Time in Hours, Minutes and Seconds - and the end of the query.
       case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))) = 1
            then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))
            else convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))
       end + ':' +
       case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)) = 1
            then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)
            else convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)
       end + ':' +
       case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)) = 1
            then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)
            else convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)
       end as [Average Run Time HH:MM:SS]
from #rawdata2 rd2
inner join #rawdata1 rd1
    on rd1.keyy = rd2.keyy
group by run_date, rd1.run_time, rd1.run_time_in_Seconds
order by run_date desc

The complete code example

use msdb
go
/*
drop table #rawdata1
drop table #rawdata2
go
*/
select 1 as keyy, run_date,
       (substring(b.run_time,1,2)*3600) +
       (substring(b.run_time,4,2)*60) +
       (substring(b.run_time,7,2)) as run_time_in_Seconds,
       run_time
into #rawdata1
from (
    select run_date,
           substring(convert(varchar(20),tt),1,2) + ':' +
           substring(convert(varchar(20),tt),3,2) + ':' +
           substring(convert(varchar(20),tt),5,2) as [run_time]
    from (
        select name, run_date, run_duration,
               case when len(run_duration) = 6 then convert(varchar(8),run_duration)
                    when len(run_duration) = 5 then '0' + convert(varchar(8),run_duration)
                    when len(run_duration) = 4 then '00' + convert(varchar(8),run_duration)
                    when len(run_duration) = 3 then '000' + convert(varchar(8),run_duration)
                    when len(run_duration) = 2 then '0000' + convert(varchar(8),run_duration)
                    when len(run_duration) = 1 then '00000' + convert(varchar(8),run_duration)
               end as tt
        from dbo.sysjobs sj with (nolock)
        inner join dbo.sysjobHistory sh with (nolock)
            on sj.job_id = sh.job_id
        where name = 'My Agent Job'
          and [Message] like '%The job%'
    ) a
) b

select 1 as Keyy, run_time_in_Seconds into #rawdata2 from #rawdata1

select rd1.run_date, rd1.run_time, rd1.run_time_in_Seconds,
       Avg(rd2.run_time_in_Seconds) as Average_run_time_in_seconds,
       case when Convert(decimal(10,1),rd1.run_time_in_Seconds)/Avg(rd2.run_time_in_Seconds) <= 1.2 then 'Green'
            when Convert(decimal(10,1),rd1.run_time_in_Seconds)/Avg(rd2.run_time_in_Seconds) < 1.4 then 'Yellow'
            else 'Red'
       end as [color],
       case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))) = 1
            then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))
            else convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))
       end + ':' +
       case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)) = 1
            then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)
            else convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)
       end + ':' +
       case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)) = 1
            then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)
            else convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)
       end as [Average Run Time HH:MM:SS]
from #rawdata2 rd2
inner join #rawdata1 rd1
    on rd1.keyy = rd2.keyy
group by run_date, rd1.run_time, rd1.run_time_in_Seconds
order by run_date desc
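As an aside, because run_duration is simply an integer in HHMMSS form, the same run time in seconds can also be had with pure integer arithmetic, skipping the zero-padding and substrings altogether. A minimal sketch along the lines of the queries above (the job name is again a placeholder):

select sj.name, sh.run_date,
       -- run_duration is HHMMSS as an integer, e.g. 11003 = 01:10:03
       (sh.run_duration / 10000) * 3600          -- hours   -> seconds
     + ((sh.run_duration / 100) % 100) * 60      -- minutes -> seconds
     + (sh.run_duration % 100)                   -- seconds
       as run_time_in_Seconds
from dbo.sysjobs sj with (nolock)
inner join dbo.sysjobHistory sh with (nolock)
    on sj.job_id = sh.job_id
where sj.name = 'My Agent Job'
  and sh.[message] like '%The job%'

(msdb also ships an undocumented helper, dbo.agent_datetime(run_date, run_time), that turns the integer date and time columns into a proper datetime; being undocumented, it may change between versions.)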

    Read the article

  • 8 Reasons Why Even Microsoft Agrees the Windows Desktop is a Nightmare

    - by Chris Hoffman
Let’s be honest: The Windows desktop is a mess. Sure, it’s extremely powerful and has a huge software library, but it’s not a good experience for average people. It’s not even a good experience for geeks, although we tolerate it. Even Microsoft agrees about this. Microsoft’s Surface tablets with Windows RT don’t support any third-party desktop apps. They consider this a feature — users can’t install malware and other desktop junk, so the system will always be speedy and secure.
Malware is Still Common
Malware may not affect geeks, but it certainly continues to affect average people. Securing Windows, keeping it secure, and avoiding unsafe programs is a complex process. There are over 50 different file extensions that can contain harmful code to keep track of. It’s easy to have theoretical discussions about how malware could infect Mac computers, Android devices, and other systems. But Mac malware is extremely rare, and has generally been caused by problems with the terrible Java plug-in. Macs are configured to only run executables from identified developers by default, whereas Windows will run everything. Android malware is talked about a lot, but it is rare in the real world and is generally confined to users who disable security protections and install pirated apps. Google has also taken action, rolling out built-in antivirus-like app checking to all Android devices, even old ones running Android 2.3, via Play Services. Whatever the reason, Windows malware is still common while malware for other systems isn’t. We all know it — anyone who does tech support for average users has dealt with infected Windows computers. Even users who can avoid malware are stuck dealing with complex and nagging antivirus programs, especially since it’s now so difficult to trust Microsoft’s antivirus products.
Manufacturer-Installed Bloatware is Terrible
Sit down with a new Mac, Chromebook, iPad, Android tablet, Linux laptop, or even a Surface running Windows RT and you can enjoy using your new device. The system is a clean slate for you to start exploring and installing your new software. Sit down with a new Windows PC and the system is a mess. Rather than be delighted, you’re stuck reinstalling Windows and then installing the necessary drivers, or you’re forced to start uninstalling useless bloatware programs one-by-one, trying to figure out which ones are actually useful. After uninstalling the useless programs, you may end up with a system tray full of icons for ten different hardware utilities anyway. The first experience of using a new Windows PC is frustration, not delight. Yes, bloatware is still a problem on Windows 8 PCs. Manufacturers can customize the Refresh image, preventing bloatware from easily being removed.
Finding a Desktop Program is Dangerous
Want to install a Windows desktop program? Well, you’ll have to head to your web browser and start searching. It’s up to you, the user, to know which programs are safe and which are dangerous. Even if you find a website for a reputable program, the advertisements on that page will often try to trick you into downloading fake installers full of adware. While it’s great to have the ability to leave the app store and get software that the platform’s owner hasn’t approved — as on Android — this is no excuse for not providing a good, secure software installation experience for typical users installing typical programs.
Even Reputable Desktop Programs Try to Install Junk
Even if you do find an entirely reputable program, you’ll have to keep your eyes open while installing it. It will likely try to install adware, add browser toolbars, change your default search engine, or change your web browser’s home page. Even Microsoft’s own programs do this — when you install Skype for Windows desktop, it will attempt to modify your browser settings to use Bing, even if you’ve specifically chosen another search engine and home page. With Microsoft setting such an example, it’s no surprise so many other software developers have followed suit. Geeks know how to avoid this stuff, but there’s a reason program installers continue to do this. It works and tricks many users, who end up with junk installed and settings changed.
The Update Process is Confusing
On iOS, Android, and Windows RT, software updates come from a single place — the app store. On Linux, software updates come from the package manager. On Mac OS X, typical users’ software updates likely come from the Mac App Store. On the Windows desktop, software updates come from… well, every program has to create its own update mechanism. Users have to keep track of all these updaters and make sure their software is up-to-date. Most programs now have their act together and automatically update by default, but users who have old versions of Flash and Adobe Reader installed are vulnerable until they realize their software isn’t automatically updating. Even if every program updates properly, the sheer mess of updaters is clunky, slow, and confusing in comparison to a centralized update process.
Browser Plugins Open Security Holes
It’s no surprise that other modern platforms like iOS, Android, Chrome OS, Windows RT, and Windows Phone don’t allow traditional browser plugins, or only allow Flash and build it into the system. Browser plugins provide a wealth of different ways for malicious web pages to exploit the browser and open the system to attack. Browser plugins are one of the most popular attack vectors because of how many users have out-of-date plugins and how many plugins, especially Java, seem to be designed without taking security seriously. Oracle’s Java plugin even tries to install the terrible Ask toolbar when installing security updates. That’s right — the security update process is also used to cram additional adware onto users’ machines so unscrupulous companies like Oracle can make a quick buck. It’s no wonder that most Windows PCs have an out-of-date, vulnerable version of Java installed.
Battery Life is Terrible
Windows PCs have bad battery life compared to Macs, iOS devices, and Android tablets, all of which Windows now competes with. Even Microsoft’s own Surface Pro 2 has bad battery life. Apple’s 11-inch MacBook Air, which has very similar hardware to the Surface Pro 2, offers double its battery life when web browsing. Microsoft has been fond of blaming third-party hardware manufacturers for their poorly optimized drivers in the past, but there’s no longer any room to hide. The problem is clearly Windows. Why is this? No one really knows for sure. Perhaps Microsoft has kept on piling Windows component on top of Windows component, and many older Windows components were never properly optimized.
Windows Users Become Stuck on Old Windows Versions
Apple’s new OS X 10.9 Mavericks upgrade is completely free to all Mac users and supports Macs going back to 2007. Apple has also announced their intention that all new releases of Mac OS X will be free.
In 2007, Microsoft had just shipped Windows Vista. Macs from the Windows Vista era are being upgraded to the latest version of the Mac operating system for free, while Windows PCs from the same era are probably still using Windows Vista. There’s no easy upgrade path for these people. They’re stuck using Windows Vista and maybe even the outdated Internet Explorer 9 if they haven’t installed a third-party web browser. Microsoft’s upgrade path is for these people to pay $120 for a full copy of Windows 8.1 and go through a complicated process that’s actually a clean install. Even users of Windows 8 devices will probably have to pay money to upgrade to Windows 9, while updates for other operating systems are completely free.
If you’re a PC geek, a PC gamer, or someone who just requires specialized software that only runs on Windows, you probably use the Windows desktop and don’t want to switch. That’s fine, but it doesn’t mean the Windows desktop is actually a good experience. Much of the burden falls on average users, who have to struggle with malware, bloatware, adware bundled in installers, complex software installation processes, and out-of-date software. In return, all they get is the ability to use a web browser and some basic Office apps that they could use on almost any other platform without all the hassle. Microsoft would agree with this, touting Windows RT and their new “Windows 8-style” app platform as the solution. Why else would Microsoft, a “devices and services” company, position the Surface — a device without traditional Windows desktop programs — as their mass-market device recommended for average people?
This isn’t necessarily an endorsement of Windows RT. If you’re tech support for your family members and it comes time for them to upgrade, you may want to get them off the Windows desktop and tell them to get a Mac or something else that’s simple. Better yet, if they get a Mac, you can tell them to visit the Apple Store for help instead of calling you. That’s another thing Windows PCs don’t offer — good manufacturer support.
Image Credit: Blanca Stella Mejia on Flickr, Collin Anderson on Flickr, Luca Conti on Flickr

    Read the article

  • CodePlex Daily Summary for Tuesday, June 05, 2012

    CodePlex Daily Summary for Tuesday, June 05, 2012Popular ReleasesApplication Architecture Guidelines: Application Architecture Guidelines 3.0.7: 3.0.7Jolt Environment: Jolt v2 Stable: Many new features. Follow development here for more information: http://www.rune-server.org/runescape-development/rs-503-client-server/projects/298763-jolt-environment-v2.html Setup instructions in downloadtedplay: tedplay 1.0: First public release of the Commodore 264 family (C16, plus/4) music player based on the SDL version of YAPE http://yape.homeserver.hu.SharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.5): New fetures:Multilingual Support Max users property in Standings Web Part Games time zone change (UTC +1) bug fix - Version 1.4 locking problem http://euro2012.codeplex.com/discussions/358262 bug fix - Field Title not found (v.1.3) German SP http://euro2012.codeplex.com/discussions/358189#post844228 Bug fix - Access is denied.for users with contribute rights Bug fix - Installing on non-English version of SharePoint Bug fix - Title Rules Installing SharePoint Euro 2012 PredictorSharePoint E...xNet: xNet 2.1.1: Release xNet 2.1.1Command Line Parser Library: 1.9.2.4 stable: This is the first stable of 1.9.* branch. Added tests for HelpText::AutoBuild. Fixed minor formatting error in HelpText::DefaultParsingErrorsHandler.myManga: myManga v1.0.0.4: ChangeLogUpdating from Previous Version: Extract contents of Release - myManga v1.0.0.4.zip to previous version's folder. Replaces: myManga.exe BakaBox.dll CoreMangaClasses.dll Manga.dll Plugins/MangaReader.manga.dll Plugins/MangaFox.manga.dll Plugins/MangaHere.manga.dll Plugins/MangaPanda.manga.dllMVVM Light Toolkit: V4RC (binaries only) including Windows 8 RP: This package contains all the latest DLLs for MVVM Light V4 RC. It includes the DLLs for Windows 8 Release Preview. An updated Nuget package is also available at http://nuget.org/packages/MvvmLightLibsPreviewExtAspNet: ExtAspNet v3.1.7: +2012-06-03 v3.1.7 -?????????BUG,??????RadioButtonList?,AJAX????????BUG(swtseaman、????)。 +?Grid?BoundField、HyperLinkField、LinkButtonField、WindowField??HtmlEncode?HtmlEncodeFormatString(TiDi)。 -HtmlEncode?HtmlEncodeFormatString??????true,??????HTML????????。 -??????Asp.Net??GridView?BoundField?????????。 -http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.boundfield.htmlencode -?Grid?HyperLinkField、WindowField??UrlEncode??,????URL??(???true)。 -?????????????,?????????????...Ela, functional language: Ela Platform 2012.3: 2012.3 is a stabilization release. It contains several improvements to Ela, std lib and Elide. Ela changes Fix:Op code 'Show' didn't work correctly when a format string was a thunk. Fix:Flipping a function wrapped in a thunk caused VM to crush. Fix:A bug fixed in concatenation of a thunk and a lazy list. Fix:Concatenation of a lazy list and a strict list could cause stack overflow in a case of recursive thunks. Fix:A bug fixed in applying 'show' to a result of a lazy and strict list...Cross Site Treeview - For MOSS 2007: Cross Site Treeview-Beta - WSP Solution: Cross Site Treeview-Beta - WSP SolutionLiveChat Starter Kit: LCSK v1.5.2: New features: Visitor location (City - Country) from geo-location Pass configuration via javascript for the chat box New visitor identification (no more using the IP address as visitor identification) To update from 1.5.1 Run the /src/1.5.2-sql-updates.txt SQL script to update your database tables. 
If you have it installed via NuGet, simply update your package and the file will be included so you can run the update script. New installation The easiest way to add LCSK to your app is by...Kendo UI ASP.NET Sample Applications: Sample Applications (2012-06-01): Sample application(s) demonstrating the use of Kendo UI in ASP.NET applications.Better Explorer: Better Explorer Beta 1: Finally, the first Beta is here! There were a lot of changes, including: Translations into 10 different languages (the translations are not complete and will be updated soon) Conditional Select new tools for managing archives Folder Tools tab new search bar and Search Tab new image editing tools update function many bug fixes, stability fixes, and memory leak fixes other new features as well! Please check it out and if there are any problems, let us know. :) Also, do not forge...Player Framework by Microsoft: Player Framework for Windows 8 Metro (Preview 3): Player Framework for HTML/JavaScript and XAML/C# Metro Style Applications. Additional DownloadsIIS Smooth Streaming Client SDK for Windows 8 Microsoft PlayReady Client SDK for Metro Style Apps Release notes:Support for Windows 8 Release Preview (released 5/31/12) Advertising support (VAST, MAST, VPAID, & clips) Miscellaneous improvements and bug fixesMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.54: Fix for issue #18161: pretty-printing CSS @media rule throws an exception due to mismatched Indent/Unindent pair.Silverlight Toolkit: Silverlight 5 Toolkit Source - May 2012: Source code for December 2011 Silverlight 5 Toolkit release.Windows 8 Metro RSS Reader: RSS Reader release 6: Changed background and foreground colors Used VariableSizeGrid layout to wrap blog posts with images Sort items with Images first, text-only last Enabled Caching to improve navigation between framesJson.NET: Json.NET 4.5 Release 6: New feature - Added IgnoreDataMemberAttribute support New feature - Added GetResolvedPropertyName to DefaultContractResolver New feature - Added CheckAdditionalContent to JsonSerializer Change - Metro build now always uses late bound reflection Change - JsonTextReader no longer returns no content after consecutive underlying content read failures Fix - Fixed bad JSON in an array with error handling creating an infinite loop Fix - Fixed deserializing objects with a non-default cons...DotNetNuke® Community Edition CMS: 06.02.00: Major Highlights Fixed issue in the Site Settings when single quotes were being treated as escape characters Fixed issue loading the Mobile Premium Data after upgrading from CE to PE Fixed errors logged when updating folder provider settings Fixed the order of the mobile device capabilities in the Site Redirection Management UI The User Profile page was completely rebuilt. We needed User Profiles to have multiple child pages. This would allow for the most flexibility by still f...New ProjectsDish: ????? ???????? ??? 
??.DRESSCOLLECTION: THIS IS A MVC APPLICAITON WHICH HELPS IN CREATING RANGE OF CLOTH AND DRESS COLLECTION TO CHOOSE AND BUY.FlowTasks: FlowTasks is a framework to develop human and business workflowGroup3.ERP.Roleadministration: Software zum generieren von RollendefinitionenHierarchical Pages Orchard Module: Creates a Hierarchical Page content type that does everything the built-in Page type does, but items can also contain other pages (and thus are containable by other pages too).hot24.vn: is first my project, it very cu chuoi.Household Manager: fcsdvsdvsdcdLabView interface for windows HPC: This project is a prototype library that attempts to integrate in the same enviroment the Data Acquisition and the Data Analysis systems. The library is a collection of LabVIEW Virtual Instruments that permts the job submission to a Windows HPC cluster. Jobs results can be easaly collected in order to perform actions via the DAQ control. Full documentation and examples are included.markgroves.us: Hello this is the blog template for http://markgroves.us. MostrarDireccion: Muestra la Direccion donde Reside cada personaMSPTH2012: This project is for MSPTH2012. Pattern Rolling File Appender for log4net: Appender for log4net that is combination of PatternFileAppender and RollingFileAppenderPowershell By Example: The vision of Powershell By Example is to give those interested in Powershell the chance to learn Powershell scripting by example with the goal of empowering them to use Powershell to maintain their own Windows environments. Practica Cuarto A: proyecto bakanProvidence: Providence is an application meant to capture audio, video, and screen-shots for various purposes.Proyecto T2: Tarea 2 - aprendiendo a utilizar la herramientaServer Config Tools: ??? ?????? IIS ?????SharePoint Scheduling Assistant: Scheduling assistant built for schools and universities. Works with out-of-the-box list and involves no custom workflows.Steam Group Players: This tool is meant to allow access to Steam community groups with the ability to sort group members according to hours spent playing specific games (as well as overall play time). The reason for starting this project was to allow a "player of the week"-selection that is slightly more complex than picking one by random or just by total hours played (recently).t2belenlm4c: Ana Belen LandaTaskLogger: Handy little pop-up that allows you to track time spent on project tasks. Can be used to generate timesheets.TechnicBlog: ????TPL DataFlow Debugger Visualizer: Graphic debugger visualizer to TPL DataFlow networks enable to see live state of blocks and relations Compatible with VS 2012 RC VsDoc2JsDoc: The goal of this project is to convert VSDoc JavaScript comments of JayData classes to JSDoc comment format. Using this tool you can generate the API reference of your JayData components and keep your JavaScript documentation up-to-date.WorkOutTimer: WorkOutTimer This little tool provides an easy to use timer for your workouts, trainings, and more... It has two time period, one for working, and one to be cool! By default work time is set to 2 minutes, cool time is set to 1minute (Kick boxing settings), but you can set up each period time as you want. You can hang up, restart, or reset time. It offers a high visibility timer. It also counts rounds. So have great workout! If you use it and you think it’s cool for y...XLIFF Editor: This is an attempt at making an open source XLIFF editor for Windows. 
I am using the Open Language Tools Editor as the primary source for the ideas of the XLIFF Editor. I am new to C# so this is is for me an ambitious learning and hopefull succesfull project. Any help in positive criticism will be looked at gratefully. xNet: xNet - a class library for .NET Framework, which includes: - Classes for work with proxy servers: HTTP, Socks4 (and), Socks5. - Classes for work with HTTP 1.0/1.1 protocol: keep-alive, gzip, deflate, chunked, SSL, proxies and more. - Classes for work with multithreading: _a multithreaded bypassing the collection, asynchronous events and more_. - Classes helpers that extend standard classes .NET Framework: FileHelper, DirectoryHelper, StringHelper, XmlHelper, BitHelper and others.?????? «??????????? ??????»: ??????????? ??????????: description

    Read the article

  • CodePlex Daily Summary for Friday, November 16, 2012

    CodePlex Daily Summary for Friday, November 16, 2012Popular ReleasesCSLA .NET Contrib: CslaContrib 4.3.13: This is the first release of CslaContrib extension library to CSLA .NET. This release is version synchronized to CSLA .NET 4.3.13 and is also available on NuGet CslaContribObjectCaching data portal implementation with in-memory cache provider for simple cache (courtesy of Magenic) Additional generic rules: LessThan, LessThanOrEqual, GreaterThan, GreaterThanOrEqual, Range, StopIfNotCanWrite, StopIfNotIsNew, StopIfNotIsExisting, StopIfAnyAdditionalHasValue, CalcSum, ToUpperCase ToLowerCase...Paint.NET PSD Plugin: 2.2.0: Changes: Layer group visibility is now applied to all layers within the group. This greatly improves the visual fidelity of complex PSD files that have hidden layer groups. Layer group names are prefixed so that users can get an indication of the layer group hierarchy. (Paint.NET has a flat list of layers, so the hierarchy is flattened out on load.) The progress bar now reports status when saving PSD files, instead of showing an indeterminate rolling bar. Performance improvement of 1...CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1116.7): [IMPROVED] Detailed error message descriptions for FaultException [FIX] Fixed bug in rule CrmOfflineAccessStateRule which had incorrect State attribute name [FIX] Fixed bug in rule EntityPropertyRule which was missing PropertyValue attribute [FIX] Current connection information was not displayed in status bar while refreshing list of entitiesSuper Metroid Randomizer: Super Metroid Randomizer v5: v5 -Added command line functionality for automation purposes. -Implented Krankdud's change to randomize the Etecoon's item. NOTE: this version will not accept seeds from a previous version. The seed format has changed by necessity. v4 -Started putting version numbers at the top of the form. -Added a warning when suitless Maridia is required in a parsed seed. v3 -Changed seed to only generate filename-legal characters. Using old seeds will still work exactly the same. -Files can now be saved...Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.4: Changes This version includes many bug fixes across all platforms, improvements to nuget support and...the biggest news of all...full support for both WinRT and WP8. Download Contents Debug and Release Assemblies Samples Readme.txt License.txt Packages Available on Nuget Caliburn.Micro – The full framework compiled into an assembly. Caliburn.Micro.Start - Includes Caliburn.Micro plus a starting bootstrapper, view model and view. Caliburn.Micro.Container – The Caliburn.Micro invers...DotNetNuke® Community Edition CMS: 06.02.05: Major Highlights Updated the system so that it supports nested folders in the App_Code folder Updated the Global Error Handling so that when errors within the global.asax handler happen, they are caught and shown in a page displaying the original HTTP error code Fixed issue that stopped users from specifying Link URLs that open on a new window Security FixesFixed issue in the Member Directory module that could show members to non authenticated users Fixed issue in the Lists modul...MVC Bootstrap: MVC Boostrap 0.5.6: A small demo site, based on the default ASP.NET MVC 3 project template, showing off some of the features of MVC Bootstrap. Added features to the Membership provider, a "lock out" feature to help fight brute force hacking of accounts. After a set number of log in attempts, the account is locked for a set time. 
If you download and use this project, please give some feedback, good or bad!OnTopReplica: Release 3.4: Update to the 3 version with major fixes and improvements. Compatible with Windows 8. Now runs (and requires) .NET Framework v.4.0. Added relative mode for region selection (allows the user to select regions as margins from the borders of the thumbnail, useful for windows which have a variable size but fixed size controls, like video players). Improved window seeking when restoring cloned thumbnail or cloning a window by title or by class. Improved settings persistence. Improved co...DotSpatial: DotSpatial 1.4: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Tutorials are available. Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...AcDown?????: AcDown????? v4.3: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2...????: ???? 1.0: ????Unicode IVS Add-in for Microsoft Office: Unicode IVS Add-in for Microsoft Office: Unicode IVS Add-in for Microsoft Office ??? ?????、Unicode IVS?????????????????Unicode IVS???????????????。??、??????????????、?????????????????????????????。Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.74: fix for issue #18836 - sometimes throws null-reference errors in ActivationObject.AnalyzeScope method. add back the Context object's 8-parameter constructor, since someone has code that's using it. throw a low-pri warning if an expression statement is == or ===; warn that the developer may have meant an assignment (=). if window.XXXX or window"XXXX" is encountered, add XXXX (as long as it's a valid JavaScript identifier) to the known globals so subsequent references to XXXX won't throw ...???????: Monitor 2012-11-11: This is the first releasehttpclient?????????: httpclient??????? 1.0: httpclient??????? (1)?????????? (2)????????? (3)??2012-11-06??,???????。VidCoder: 1.4.5 Beta: Removed the old Advanced user interface and moved x264 preset/profile/tune there instead. The functionality is still available through editing the options string. Added ability to specify the H.264 level. Added ability to choose VidCoder's interface language. If you are interested in translating, we can get VidCoder in your language! Updated WPF text rendering to use the better Display mode. Updated HandBrake core to SVN 5045. Removed logic that forced the .m4v extension in certain ...ImageGlass: Version 1.5: http://i1214.photobucket.com/albums/cc483/phapsuxeko/ImageGlass/1.png v1.5.4401.3015 Thumbnail bar: Increase loading speed Thumbnail image with ratio Support personal customization: mouse up, mouse down, mouse hover, selected item... 
Scroll to show all items Image viewer Zoom by scroll, or selected rectangle Speed up loading Zoom to cursor point New background design and customization and others... v1.5.4430.483 Thumbnail bar: Auto move scroll bar to selected image Show / Hi...Building Windows 8 Apps with C# and XAML: Full Source Chapters 1 - 10 for Windows 8 Fix 002: This is the full source from all chapters of the book, compiled and tested on Windows 8 RTM. Includes: A fix for the Netflix example from Chapter 6 that was missing a service reference A fix for the ImageHelper issue (images were not being saved) - this was due to the buffer being inadequate and required streaming the writeable bitmap to a buffer first before encoding and savingmyCollections: Version 2.3.2.0: New in this version : Added TheGamesDB.net API for Games and NDS Added Support for Windows Media Center Added Support for myMovies Added Support for XBMC Added Support for Dune HD Added Support for Mede8er Added Support for WD HDTV Added Fast search options Added order by Artist/Album for music You can now create covers and background for games You can now update your ID3 tag with the info of myCollections Fixed several provider Performance improvement New Splash ...Draw: Draw 1.0: Drawing PadNew ProjectsAelda: a try to implement a multiplayer game. Just for fun and discover principles.Alphalabs - Node Garden: "Source" code for the 2012 Alphalabs "Node Garden" project.Ants silly C experiments.: A series of experiments I have done while learning C.BANANA Platform: BANANA Platform is a set of utilities for ASP.NET and C#. It provides with 4 languages(English, Korean, Chinese and Japanese).Command Line Parser for C++ projects: Register command-line constructs such as parameters and arguments, describe each of them, give command-line examples and bind functors to trigger your logic!CraftLib: CraftLib is a simple, easy to use Minecraft library written in C# (C-Sharp). It is used to perform functions for Minecraft Launchers.Dialector: Using this program, you can convert pure Turkish texts into different dialects; such as: Emmi, Kufurbaz, Kusdili, Laz, Peltek, Tiki, and many more. Just paste any text, select your dialect, and hit "Convert"!Friend's Contact Web System: This is a simple web - based system for sharing and organizing news about incoming events. Not only for friends, but also for companies' members. Hospital Provider: that is a new project for hospital providerMicrosoft DNS Data Collector: A while back I found myself needing a way to analyze and report on the DNS queries that are coming into our company's DNS. Unfortunately short of verbose logging, there is no out of the box method of doing this with Microsoft DNS... Enter the DNS Catcher service.MTG Match Counter: MTG Match Counter is a simple life\match counter, designed for Magic: The Gathering players.NetMemory: let us remember ancestor better!PvrSharp: PvrSharp is a powerfull texture loading and conversion library, focusing on Pvr container textures, written in pure C#.SimpleGallery Orchard Module: SimpleGallery is an Orchard Module that exposes a gallery of images easy to configure and use. 
Paging and opening effects are available.Smart Device Updater: Provides mechanism for updating assemblies on smart device under WinMobile 5.0 and higher.stylecopmaker: stylecopmakertesttom11152012git01: dsatesttom11152012git02: fdstesttom11152012hg01: hfgtesttom11152012hg02: fdstesttom11152012tfs01: fdstesttom11152012tfs02: fdsThe Hunter - CAU Project: The Hunter is a student project made for the Game Design course for CAU. In this Codeplex Project, your can find the source code, the documentation.TKOctopus: ok laUHDPBS: Computer Science/Software Engineering Class at UH Downtown. Project is a patient billing system.VersionStar - A tool for applying database patches to a database: Apply database patches in an organized way by structuring the way your patches are stored and rigidly determine the order in which they must be applied.Virtual Pro Card: VirtualProCard is a solution to easily and efficiently manage your traditional business cards WordNet C#: The purpose of this project is demonstrate how to query the WordNet database using Microsoft Entity Framework 5 & C#.

    Read the article

  • CodePlex Daily Summary for Thursday, November 03, 2011

    CodePlex Daily Summary for Thursday, November 03, 2011

    Popular Releases

    · FlagConsole 1.0.1: BUGFIXES: Fixed a bug which caused the label not to draw a word if it had the same length as the label's length.
    · Nearforums - ASP.NET MVC forum engine: Nearforums v7.0: Version 7.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: UI: Flexible layout to handle both list and table-like template layouts. Theming - Visual choice of themes: Deliver some templates on installation, export/import functionality, preview. Allow site owners to choose default list sort order for the forums. Forum latest activity. Visit the project Roadmap for more details.
    · All-In-One Code Framework (Chinese edition) 2011-11-02: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 - the November update adds 20 new Microsoft OneCode samples: 6 Program Language samples, 2 Windows Base samples, 2 GDI+ samples, 4 Internet Explorer samples and 6 ASP.NET samples (the remainder of the Chinese release notes did not survive the page encoding). http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Program Language: CSImageFullScreenSlideShow, VBImageFullScreenSlideShow, CSDynamicallyBuildLambdaExpressionWithFie...
    · Python Tools for Visual Studio 1.1 Alpha: We're pleased to announce the release of Python Tools for Visual Studio 1.1 Alpha. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python programming language. This release includes new core IDE features, a couple of new sample libraries for interacting with Kinect and Excel, and many bug fixes for issues reported since the release of 1.0. For the core IDE features we've added many new features which improve the basic edit...
    · BExplorer (Better Explorer) 2.0.0.631 Alpha: Changelog: Added: Some new functions in ribbon. Added: Possibility to choose displayed columns. Added: Basic search. Fixed: Some bugs after navigation. Fixed: Attempt to fix slow navigation and slow start. Known issues: BreadcrumbBar fails in some situations; basic search does not work quite well in some situations. Please, if anyone finds bugs, be kind and report them at the Issue Tracker! Thanks!
    · DotNetNuke® Community Edition 05.06.04: Major Highlights: Fixed issue with upgrades on systems that had upgraded the Telerik library to 6.0.0. Fixed issue with Razor Host upgrade to 5.6.3. The module administration checks contain incorrect logic in one place, opening the possibility of a user with edit permissions gaining access to functionality they should not have through a particularly crafted URL. Security Fixes: Browsers support the ability to remember common strings such as usernames/addresses etc. Code was adde...
    · Terminals Version 2.0 - Beta 2 Release: The team has finally put the nail into the official release date for version 2.0. As bugs are winding down on the 2.0 Roadmap we decided to push out another build - the first 2.0 Beta build. Please take time to use and abuse this release. We left logging in place, and this is a debug build, so be sure to submit your logs on each bug reported, and please do report all bugs! Check the source code page on the site; this beta includes all commits since (and including) the 90428 check-in back i...
    · iTuner - The iTunes Companion: iTuner 1.4.4322: Added German (unverified, apologies if incorrect). Properly source invariant resources with correct resIDs. Replaced obsolete lyric providers with working providers. Fix Pseudolater to correctly morph every third char. Fix null reference in CatalogBase.
    · Track Folder Changes: Track Folder Changes 1.0: Track Folder Changes 1.0 (binary)
    · Devpad 4.6: What's new for Devpad 4.6: New Recent Files. New Run in Safari. Minor bug fixes, improvements and speed-ups.
    · Windows Workflow Foundation on Codeplex: Microsoft.Activities v1.8.8: Microsoft.Activities Overview. How do I install Microsoft.Activities? Updates in this release: 9318.
    · Technical Analysis Engine for .NET 1.25: What's new in the 1.25 release (2011-11-01): Added Williams %R indicator. Added Moving Average Envelopes indicator.
    · BoxWorld: BoxWorld_2011.10.30: BoxWorld - 8.0.1110.30. This is the initial release of BoxWorld. I'd recommend downloading the installer as it contains the compiled code and everything all nicely contained. By default, you end up with this directory structure: C:\Program Files\ViperWorks\BoxWorld, C:\Program Files\ViperWorks\BoxWorld\Data, C:\Program Files\ViperWorks\BoxWorld\Interface, C:\Program Files\ViperWorks\BoxWorld\Source. In the root you have the compiled EXE files, one for the main release, one for the LITE release ...
    · VidCoder 1.2.1: Fixed a couple of regressions: video encoder was blank in queue, and crashes with the High Profile preset when opening the Settings window. Fixed problem with auto-update introduced in 1.2.0. If you have 1.2.0 you will need to update manually to get this.
    · AssaultCube Reloaded: Release 2.3: THE RELEASE YOU'VE ALL BEEN WAITING FOR! IT CAN NOW BE CONSIDERED STABLE. Linux has Debian 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, download the Linux package. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included). If you are using Windows and require the source code, download the source package!
    · A Microblog API (SINA weibo.com open API in C#/.NET): AMicroblogAPI v1.0: AMicroBlogAPI is a C# implementation, a strongly-typed deep encapsulation, an easy-to-use .NET wrapper of the SINA microblog API v1.0. App developers no longer need to parse various HTTP responses -- all responses are parsed into strong types; no longer need to construct the request query strings -- simply give the values of parameters; no longer need to implement OAuth -- a single call logs the user on. In this release (AMicroblogAPI v1.0), all basic data APIs are implemented. For ...
    · patterns & practices: Enterprise Library Contrib - 5.0 (Oct 2011): This release of Enterprise Library Contrib is based on the Microsoft patterns & practices Enterprise Library 5.0 core and contains the following: Common extensions: TypeConfigurationElement<T> - a polymorphic configuration element without having to be part of a PolymorphicConfigurationElementCollection; AnonymousConfigurationElement - a configuration element that can be uniquely identified without having to define its name explicitly. Data Access Application Block extensions: MySql Provider - ...
    · Network Monitor Open Source Parsers: Network Monitor Parsers 3.4.2748: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC-based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, NetworkMonitor_Parsers.msi continues to improve quality and fix bugs. It has included the fo...
    · Duckworth Lewis Professional Edition Calculator: DLcalc 3.0: DLcalc 3.0 can perform Duckworth/Lewis Professional Edition calculations 100% accurately. It also produces over-by-over and ball-by-ball PAR score tables.
    · Media Companion: MC 3.420b Weekly: Ensure .NET 4.0 Full Framework is installed (available from http://www.microsoft.com/download/en/details.aspx?id=17718). Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b (details here). Movies - Fixed: fanart and poster scraping issues. TV Shows - (Re)Added: rebuild single show; Fixed: issue when shows are moved from original location; ability to handle " for actor nicknames; crash when episode name contains "<" (does not scrape yet); clears fanart when switch...

    New Projects

    · ASP.NET MVC fluent validation framework: FluentMvc allows you to create validatable models without using any validation attributes.
    · BeanPermutator: The BeanPermutator library provides a robust permutations engine. It was designed with framework API, optimal performance, and memory usage in mind. For performance, the implementation is modeled after STL's next_permutation, except with added support for conditional result sets.
    · Cicero: Cicero is a library/application to programmatically analyze managed crash dumps. It provides a managed API to load dumps and look at managed objects, using SOS (http://msdn.microsoft.com/en-us/library/bb190764.aspx) under the covers.
    · CodePlex Windows Phone App: This project deals with how to browse efficiently through the CodePlex website using Windows Phone capabilities.
    · Collage: A photo collage screensaver.
    · Docs Professional Archiving System: This application basically provides a professional, audit-compliant archiving system with a storage service.
    · EBSCOLunch: This app simply shows EBSCO's lunch menu. It runs in the system tray and, when clicked on, shows today's cafeteria specials.
    · Emesary - C# interobject communication and application messaging: Simple, quick and efficient class-based interobject communication to allow decoupled, disparate parts of a system to function together without knowing about each other, e.g. GUI, DB and business logic.
    · Enterprise Data Entry, Reporting and Analysis Tool: This project will have an app with different modules that will show the interface tool functions based on the designation of an employee as defined in Active Directory. Example: a team manager can view all the details regarding his immediate subordinates.
    · Financial date and time library: Date and time routines that know about holidays, rolling conventions, and day count fractions.
    · Frame Work Model by Patán: A framework modeling project.
    · Ittxa: Is in alpha release.
    · Kooboo CMS data sync module: Kooboo CMS data export/import module.
    · MetroBlog: A Windows 8 Metro-inspired blog.
    · Mosque: In the name of GOD, this is a portal.
    · MPO tools: A .NET library for parsing .mpo stereo images (in Multi-Picture Format).
    · mytest5: my testttttttttt
    · Netflix MVP Example: Example .NET movie application using the MVP pattern with Windows Forms and WPF. Uses the Netflix OData service to get movie information.
    · OpenStreetMap2Oracle: OpenStreetMap2Oracle is a Windows application which exports OpenStreetMap data (*.osm files) into an Oracle database. The geometries are stored in Oracle's SDO_GEOMETRY datatype. It is developed in C# with a modern WPF UI.
    · PSRR - Remote Registry PowerShell 3.0 Module: Remote Registry PowerShell module to manage the registry with Windows PowerShell. This version supports the new improvement in .NET 4 to specify a 32-bit or 64-bit view of the registry with the Microsoft.Win32.RegistryView enumeration when you open base keys.
    · Raptor-Plus-Plus: CS240 class project.
    · Reset IPv6: Troubleshooting pack to reset virtual IPv6 interfaces on Windows 7 (ISATAP, Teredo, IPHTTPS and 6to4).
    · resurection: Game for WP7.
    · SharePoint 2010 Google Maps Web Part for Sandbox and Farm Solutions: The Google Maps Web Part lets you quickly and easily add Google Maps to your Microsoft SharePoint 2010 pages. Compatible with SharePoint 2010, SharePoint 2010 SP1 and SharePoint Online, via Office 365.
    · SPLight UTL SharePoint 2010 BackUp/Restore UI: SharePoint 2010 backup/restore user interface: site collection backup and restore via a graphical user interface.
    · Visual Image Processing: Visual image processing for 2D or 3D real-time video or still images from a camera, web cam or scanner ...
    · XNA4LED: XNA 4.0 LED Billboard Example.

    Read the article

  • CodePlex Daily Summary for Sunday, November 18, 2012

    CodePlex Daily Summary for Sunday, November 18, 2012

    Popular Releases

    · STeaL : stealed functionarities from STL: STeaL 0.3 (prerelease): set_adaptor<T> added.
    · ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.0 (2012-11-18): the changelog was published in Chinese and was garbled in this copy; the legible fragments mention SelectedValueArray handling, a RecoverPropertiesFromJObject method, Alert.Show, Icon/IconUrl, a new TimePicker control, resource references of the form /res.axd?css=blue.css&v=1, a new MenuCheckBox, AutoPostBack for RadioButton, and FCKEditor updates, with credits to contributors sam.chang, swtseaman and Vian_Pan.
    · BugNET Issue Tracker: BugNET 1.2: Please read our release notes for BugNET 1.2: http://blog.bugnetproject.com/bugnet-1-2-has-been-released. Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you will pollute the rating, and you won't get an answer.
    · Paint.NET PSD Plugin 2.2.0: Changes: Layer group visibility is now applied to all layers within the group. This greatly improves the visual fidelity of complex PSD files that have hidden layer groups. Layer group names are prefixed so that users can get an indication of the layer group hierarchy. (Paint.NET has a flat list of layers, so the hierarchy is flattened out on load.) The progress bar now reports status when saving PSD files, instead of showing an indeterminate rolling bar. Performance improvement of 1...
    · replaceSID v0.1.145.12321: Changelog: Added Backup ACL. Added Restore ACL. Some bugfixes. Settings. Tested in development environment; still needs to be tested in production.
    · AppBarUtils 2.2: Starting from this release, AppBarUtils supports both Windows Phone SDK 7.1 and 8.0. You can download the dll accordingly. If you're upgrading an existing app to Windows Phone 8.0, you can just replace the dll without any changes to the existing code. Of course, you need to make sure that you have the correct Blend SDK dll referenced. The source code contains two testing projects, one for WP SDK 7.1, the other for WP SDK 8.0, which share the same code base. You can refer to these two projec...
    · YALV! - Yet Another Log4Net Viewer: YALV! v1.2.0.0: New release for the YALV project, version 1.2.0.0. New feature: Russian localization. Improvements: minor GUI changes.
    · fastBinaryJSON v1.3.5: Added support for root-level DataSet and DataTable deserialize (you have to do ToObject<DataSet>(...)). Added DataSet tests. Added MonoDroid project.
    · Water Entity for SunBurn: Sunburn Water Entity For 2.0.1.8 (Deffered Only): SunBurn water entity for SunBurn 2.0.1.8, for deferred rendering only; forward water is not working yet. You need to download water normal maps from the SunBurn Reflection/Refraction example, from here.
    · CRM 2011 Visual Ribbon Editor (1.3.1116.7): [IMPROVED] Detailed error message descriptions for FaultException. [FIX] Fixed bug in rule CrmOfflineAccessStateRule, which had an incorrect State attribute name. [FIX] Fixed bug in rule EntityPropertyRule, which was missing the PropertyValue attribute. [FIX] Current connection information was not displayed in the status bar while refreshing the list of entities.
    · Super Metroid Randomizer v5: v5 - Added command-line functionality for automation purposes. Implemented Krankdud's change to randomize the Etecoon's item. NOTE: this version will not accept seeds from a previous version; the seed format has changed by necessity. v4 - Started putting version numbers at the top of the form. Added a warning when suitless Maridia is required in a parsed seed. v3 - Changed seed to only generate filename-legal characters. Using old seeds will still work exactly the same. Files can now be saved...
    · Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy: Caliburn.Micro v1.4: Changes: This version includes many bug fixes across all platforms, improvements to NuGet support and... the biggest news of all... full support for both WinRT and WP8. Download contents: debug and release assemblies, samples, Readme.txt, License.txt. Packages available on NuGet: Caliburn.Micro - the full framework compiled into an assembly; Caliburn.Micro.Start - includes Caliburn.Micro plus a starting bootstrapper, view model and view; Caliburn.Micro.Container - the Caliburn.Micro invers...
    · DirectX Tool Kit: November 15, 2012: Added support for WIC2 when available on Windows 8 and Windows 7 with KB 2670838. Cleaned up warning level 4 warnings.
    · DotNetNuke® Community Edition CMS 06.02.05: Major highlights: Updated the system so that it supports nested folders in the App_Code folder. Updated the global error handling so that when errors within the global.asax handler happen, they are caught and shown in a page displaying the original HTTP error code. Fixed issue that stopped users from specifying link URLs that open in a new window. Security fixes: Fixed issue in the Member Directory module that could show members to non-authenticated users. Fixed issue in the Lists modul...
    · xUnit.net Contrib: xunitcontrib-resharper 0.7 (RS 7.1, 6.1.1): xunitcontrib release 0.7 (ReSharper runner). This release provides a test runner plugin for ReSharper 7.1 RTM and 6.1.1, targeting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release drops 7.0 support and targets the latest revisions of the last two major versions of ReSharper (namely 7.1 and 6.1.1). Copies of the plugin that support previous versions of ReSharper can be downloaded from this release. Also note that all builds work against ALL ...
    · OnTopReplica Release 3.4: Update to the version 3 line with major fixes and improvements. Compatible with Windows 8. Now runs on (and requires) .NET Framework 4.0. Added relative mode for region selection (allows the user to select regions as margins from the borders of the thumbnail, useful for windows which have a variable size but fixed-size controls, like video players). Improved window seeking when restoring a cloned thumbnail or cloning a window by title or by class. Improved settings persistence. Improved co...
    · DotSpatial 1.4: This is a minor release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions. Extended -- includes debugging symbols and additional extensions. Tutorials are available. Just want to run the software? An end-user (non-programmer) version is available, branded as MapWindow. Want to add your own feature? Develop a plugin, using the template, and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...
    · WinRT XAML Toolkit 1.3.5: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: attachable behaviors, AwaitableUI extensions, controls, converters, debugging helpers, extension methods, imaging helpers, IO helpers, VisualTree helpers, samples. Recent changes: docum...
    · AcDown: AcDown v4.3: the release notes were published in Chinese and were garbled in this copy; the legible fragments describe a downloader supporting Acfun, Bilibili, YouTube and other sites, an AcPlay companion player, a C# codebase requiring .NET Framework 2.0, support for 32- and 64-bit Windows XP/Vista/7/8, and Linux via Mono.

    New Projects

    · BigStringComparer: Project for comparing big strings.
    · CarShow: Proj
    · DocManager: Proj
    · FinDesktop: Proj
    · Folder-File Compare: Windows app that compares the files in two given folders/subfolders. Results display in five tabs: files in A, in B, in A not in B, in B not in A, and different.
    · Foq: Foq is a lightweight mocking library for F#. Use Foq to mock abstract classes and interfaces.
    · GtFramework: A 2D game framework.
    · GucSharep: Proj
    · iDrive: The iDrive solution is a website that enables you to upload and share files.
    · InitialPrototype: This is the initial prototype of my Web 2.0 project.
    · IvanProjects: This is Ivan's projects.
    · JAudio Player: JAudio Player is a player for BMS music sequences that are used in several games for Nintendo GameCube and Wii.
    · JGoldDirector: Proj
    · Kontinum: Demo site for working flow.
    · Netkill: Simple kill button for network connectivity, with some additional features. Disable network connectivity with a single click, or when a certain program starts.
    · only one test project: A project that basically does nothing!! :P
    · P2StillLife: Making a still life with shaders. Warps an image.
    · PetaPX: PetaPX is a photo community powered by creative people worldwide that lets you discover, share, buy and sell inspiring photographs.
    · Qbicon Editor: 2D polygonal map editor for games and other uses: graphical editor for polygonal objects and points, layers, textures.
    · Quick-Chat-Application: Chat application made using multi-layered architectures (ASP.NET MVC and a WCF service).
    · Regular Expression Editor: Regular Expression Editor is the tool for editing and testing regular expressions.
    · Scout - Web Patrol: A Microsoft Internet Explorer plug-in that notifies the user when selected Favorites have been updated since those pages were last visited.
    · silowniafitness: nothingness
    · Sports Tracker: Sports league management system.
    · Stellissimo Wordpress Text Box plugin: A WordPress plugin that adds a text box.
    · vba: (description garbled in the source encoding)
    · WCFNH11: nh class.
    · WebCalendar: This will be the summary...
    · Classifier Tool For OpenCV: Program to calculate the vector for the DetectMultiScale function in OpenCV.

    Read the article

  • What's New in Database Lifecycle Management in Enterprise Manager 12c Release 3

    - by HariSrinivasan
    Enterprise Manager 12c Release 3 includes improvements and enhancements across every area of the product. This blog provides an overview of the new and enhanced features in the Database Lifecycle Management area. I will deep dive into specific features in more depth in subsequent posts.

    "What's New?" In this release, we focused on four things:
    1. Lifecycle management support for the new Database 12c Pluggable Databases
    2. Management of long-running processes, such as a security patch cycle (Change Activity Planner)
    3. Management of large numbers of systems by
    · leveraging new framework capabilities for lifecycle operations, such as the new advanced 'emcli' script option
    · refining features such as configuration search and compliance
    4. Minor improvements and quality fixes to existing features
    · rollback support for single-instance databases
    · improved "OFFLINE" patching experience
    · faster collection of ORACLE_HOME configurations

    Lifecycle Management Support for the new Database 12c - Pluggable Databases
    Database 12c introduces Pluggable Databases (PDBs), the brand-new addition to help you achieve your consolidation goals. Pluggable databases offer unprecedented consolidation at the database level and native lifecycle verbs for creating, plugging and unplugging the databases on a container database (CDB). Enterprise Manager can supplement the capabilities of pluggable databases by offering workflows for migrating, provisioning and cloning them using the software library and the deployment procedures. For example, Enterprise Manager can migrate an existing database to a PDB or clone a PDB by storing a versioned copy in the software library. One can also manage the planned downtime related to patching by migrating the PDBs to a new CDB. While pluggable databases offer these exciting features, they can also pose configuration management and compliance challenges if not managed properly. Enterprise Manager features like inventory management, topology associations and configuration search can mitigate the sprawl of PDBs and also lock them to predefined golden standards using configuration comparison and compliance rules. Learn More ...

    Management of Long-Running Datacenter Processes - Change Activity Planner (CAP)
    Currently, customers resort to cumbersome methods to create, execute, track and monitor change activities within their data center. Some customers use traditional tools such as spreadsheets, project planners and in-house custom-built solutions. Customers often have weekly sync-up meetings across stakeholders to collect status and updates. Some of the change activities, for example the quarterly patch set update (PSU) patch rollouts, are not single tasks but processes with multiple tasks. Some of those tasks are performed within Enterprise Manager Cloud Control (for example, Patch) and some are performed outside of Enterprise Manager Cloud Control. These tasks often run for a long period of time and involve multiple people or teams. Enterprise Manager Cloud Control supports core data center operations such as configuration management, compliance management, and automation. Enterprise Manager Cloud Control release 12.1.0.3 leverages these capabilities and introduces the Change Activity Planner (CAP). CAP provides the ability to plan, execute, and track change activities in real time. It covers the typical datacenter activities that are spread over a long period of time, across multiple people and multiple targets (even target types).
    Here are some examples of change activity processes in a datacenter:
    · Patching large environments (PSU/CPU patching cycles)
    · Upgrading large numbers of database environments
    · Rolling out compliance rules
    · Database consolidation to Exadata environments
    CAP provides user flows for compliance officers/managers (incl. lead administrators) and operators (DBAs and admins). Managers can create change activity plans for various projects and allocate resources, targets, and the groups affected. Upon activation of the plan, tasks are created and automatically assigned to individual administrators based on target ownership. Administrators (DBAs) can identify their tasks and understand the context, schedules, and priorities. They can complete tasks using Enterprise Manager Cloud Control automation features such as patch plans (or in some cases outside Enterprise Manager). Upon completion, compliance is evaluated for validation, and the status of the tasks and the plans is updated. Learn More about CAP ...

    Improved Configuration & Compliance Management of a Large Number of Systems
    Improved configuration comparison: get to the configuration comparison results faster for simple ad-hoc comparisons. When performing a 1-to-1 comparison, Enterprise Manager will perform the comparison immediately and take the user directly to the results, without having to wait for a job to be submitted and executed. Flattened system comparisons reduce comparison setup time and reduce complexity. In addition to the previously existing topological comparison, users now have an option to compare using a "flattened" methodology. Flattening means removing duplicate target instances within the systems and removing the hierarchy of member targets. The result is that differences are much easier to spot, particularly for specific use cases like comparing patch levels between complex systems like RAC and Fusion Apps.

    Improved Configuration Search & Advanced EMCLI Script Option for Mass Automation
    Enterprise Manager 12c introduces a new framework-level capability to script and stitch together multiple tasks using EMCLI. This powerful capability can be leveraged for lifecycle operations, especially when executing a task over a large number of targets. Specific usages include retrieving a qualified list of targets using configuration search and then using the result set for automation. Another example would be executing a patching operation and then re-executing it on targets where it may have failed. This is complemented by other enhancements, such as better usability for designing reusable configuration searches. In EM 12c Release 3, a simplified UI makes building ad-hoc searches even easier. Searching for missing patches is a common use of configuration search. This required the use of the advanced options, which are now clearly defined and easy to use. Perform "configuration search" using the EMCLI: users can find and execute configuration searches from the EMCLI, which can be extremely useful for building sophisticated automation scripts. For example, run the search named "Databases on Exadata", which finds all database targets running on top of Exadata, and further filter the results by refining by options like name, host, etc.:
    emcli get_targets -config_search="Databases on Exadata" -target_name="exa%"
    Use this in powerful mass-automation operations using the new EMCLI script option. For example, to solve the use case of finding all DBs running on Exadata and housing E-Biz, and patching them: create a Python script with emcli functions and invoke it in the new EMCLI script option shell. Invoke the script in the new EMCLI with the script option directly:
    $<path to emcli>/emcli @myPSU_Patch.py
    Richer compliance content: now over 50 Oracle-provided compliance standards, including new standards for Pluggable Database, Fusion Applications, Oracle Identity Manager, Oracle VM and Internet Directory; 9 Oracle-provided real-time monitoring standards containing over 900 compliance rules across 500 facets. These new real-time compliance standards cover both Exadata compute nodes and Linux servers. The result is increased Oracle software coverage and faster time to compliance monitoring on Exadata.

    Enhancements to Patch Management
    Overhauled "OFFLINE" patching experience: simplified patch upload UI to improve the offline experience of patching. There is now a single-step process to get the patches into the software library. Customers often maintain local repositories of patches, sometimes called software depots, where they host the patches downloaded from My Oracle Support. In the past, you had to move these patches to your desktop and then upload them to Enterprise Manager's software library through the Enterprise Manager Cloud Control user interface. You can now use the following EMCLI command to upload multiple patches directly from a remote location within the data center:
    $emcli upload_patches -location <Path to Patch directory> -from_host <HOSTNAME>
    The upload process filters all of the new patches, automatically selects the relevant metadata files from the location, and uploads the patches to the software library.
    Other improvements: patch rollback for single-instance databases, a new option in the patch plan to roll back the patches added to the patch plan. Upon execution, the procedure rolls back the patch and the SQL applied to the single-instance databases. Improved and faster configuration collection of Oracle Home targets can enable more reliable automation of higher-level functions like provisioning, patching or Database as a Service.

    Just to recap, here is a list of database lifecycle management features (* red highlights in the original mark what is new or enhanced in Release 3):
    • Discovery, inventory tracking and reporting
    • Database provisioning, including
    o Migration to pluggable databases
    o Plugging and unplugging of pluggable databases
    o Gold-image-based cloning
    o Scaling of RAC nodes
    • Schema and data change management
    • End-to-end patch management in online and offline modes, including
    o Patch advisories in online (connected with My Oracle Support) and offline modes
    o Patch pre-deployment analysis, deployment and rollback (currently only for single-instance databases)
    o Reporting
    • Upgrade planning and execution of the upgrade process
    • Configuration management
    • Compliance management with out-of-the-box content
    • Change Activity Planner for planning, designing and tracking long-running processes
    For more information on Enterprise Manager's database lifecycle management capabilities, visit http://www.oracle.com/technetwork/oem/lifecycle-mgmt/index.html
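    The post invokes myPSU_Patch.py without showing its contents. As a rough sketch only: the get_targets verb and its arguments are taken from the example above, while the login call, the response-handling pattern, and the commented-out patch step are assumptions that should be checked against your EM version's EMCLI documentation.

        # myPSU_Patch.py - illustrative sketch, not Oracle's shipped example.
        # Run via the EMCLI script option: $<path to emcli>/emcli @myPSU_Patch.py
        # In EMCLI script mode, verbs are exposed as Python functions that
        # return a response object; the .out()['data'] access assumes JSON output.

        login(username="sysman")  # assumes an interactive password prompt

        # Reuse the saved configuration search shown above, filtered by name.
        resp = get_targets(config_search="Databases on Exadata",
                           target_name="exa%")

        for row in resp.out()['data']:
            target = row['Target Name']
            print("Would patch: " + target)
            # Submitting the target to a patch plan would go here; the exact
            # patch-plan verbs vary by EM release, so this is a placeholder:
            # add_target_to_patch_plan(plan="myPSU_plan", target=target)  # hypothetical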

    Read the article

  • Skanska Builds Global Workforce Insight with Cloud-Based HCM System

    - by HCM-Oracle
    By David Baum - Originally posted on Profit
    Peter Bjork grew up building things. He started his work life learning all sorts of trades at his father’s construction company in the northern part of Sweden. So in college, it was natural for him to pursue a bachelor’s degree in construction engineering—but he broke new ground when he added a master’s degree in finance to his curriculum vitae. Written on a traditional résumé, Bjork’s current title (vice president of information systems strategies) doesn’t reveal the diversity of his experience—that he’s adept with hammer and nails as well as rows and columns. But a big part of his current job is to work with his counterparts in human resources (HR) designing, building, and deploying the systems needed to get a complete view of the skills and potential of Skanska’s 22,000-strong white-collar workforce. And Bjork believes that complete view is essential to Skanska’s success. “Our business is really all about people,” says Bjork, who has worked with Skanska for 16 years. “You can have equipment and financial resources, but to truly succeed in a business like ours you need to have the right people in the right places. That’s what this system is helping us accomplish.” In a global HR environment that suffers from a paradox of high unemployment and a scarcity of skilled labor, managers need to have a complete understanding of workforce capabilities to develop management skills, recruit for open positions, ensure that staff is getting the training they need, and reduce attrition. Skanska’s human capital management (HCM) systems, based on Oracle Talent Management Cloud, play a critical role delivering that understanding. “Skanska’s philosophy of having great people, encouraging their development, and giving them the chance to move across business units has nurtured a culture of collaboration, but managing a diverse workforce spread across the globe is a monumental challenge,” says Annika Lindholm, global human resources system owner in the HR department at Skanska’s headquarters just outside of Stockholm, Sweden. “We depend heavily on Oracle’s cloud technology to support our HCM function.”
    Construction, Workers
    For Skanska’s more than 60,000 employees and contractors, managing huge construction projects is an everyday job. Beyond erecting signature buildings, management’s goal is to build a corporate culture where valuable talent can be sought out and developed, bringing in the right mix of people to support and grow the business. “Of all the companies in our space, Skanska is probably one of the strongest ones, with a laser focus on people and people development,” notes Tom Crane, chief HR and communications officer for Skanska in the United States. “Our business looks like equipment and material, but all we really have at the end of the day are people and their intellectual capital. Without them, second only to clients, of course, you really can’t achieve great things in the high-profile environment in which we work.” During the 1990s, Skanska entered an expansive growth phase. A string of successful acquisitions paved the way for the company’s transformation into a global enterprise. “Today the company’s focus is on profitable growth,” continues Crane.
    “But you can’t really achieve growth unless you are doing a very good job of developing your people and having the right people in the right places and driving a culture of growth.” In the United States alone, Skanska has more than 8,000 employees in four distinct business units:
    · Skanska USA Building, also known as the Construction Manager, builds everything at ground level and above—hospitals, educational facilities, stadiums, airport terminals, and other massive projects.
    · Skanska USA Civil does everything at ground level and below, such as light rail, water treatment facilities, power plants or power industry facilities, highways, and bridges.
    · Skanska Infrastructure Development develops public-private partnerships—projects in which Skanska adds equity and also arranges for outside financing.
    · Skanska Commercial Development acts like a commercial real estate developer, acquiring land and building offices on spec or build-to-suit for its clients.
    Skanska’s international portfolio includes construction of the new Meadowlands Stadium.
    Getting the various units to operate collaboratively helps Skanska deliver high value to clients and shareholders. “When we have this collaboration among units, it allows us to enrich each of the business units and, at the same time, develop our future leaders to be more facile in operating across business units—more accepting of a ‘one Skanska’ approach,” explains Crane.
    Workforce Worldwide
    But HR needs processes and tools to support managers who face such business dynamics. Oracle Talent Management Cloud is helping Skanska implement world-class recruiting strategies and generate the insights needed to drive quality hiring practices, internal mobility, and a proactive approach to building talent pipelines. With their new cloud system in place, Skanska HR leaders can manage everything from recruiting, compensation, and goal and performance management to employee learning and talent review—all as part of a single, cohesive software-as-a-service (SaaS) environment. Skanska has successfully implemented two modules from Oracle Talent Management Cloud—the recruiting and performance management modules—and is in the process of implementing the learn module. Internally, they call the systems Skanska Recruit, Skanska Talent, and Skanska Learn. The timing is apropos. With high rates of unemployment in recent years, there have been many job candidates on the market. However, talent scarcity continues to frustrate recruiters. Oracle Taleo Recruiting Cloud Service, one of the applications in the Oracle Talent Management cloud portfolio, enables Skanska managers to create more-intelligent recruiting strategies, pulling high-performer profile statistics to create new candidate profiles and using multitiered screening and assessments to ensure that only the best-suited candidate applications make it to the recruiter’s desk. Tools such as applicant tracking, interview management, and requisition management help recruiters and hiring managers streamline the hiring process. Oracle’s cloud-based software system automates and streamlines many other HR processes for Skanska’s multinational organization and delivers insight into the success of recruiting and talent-management efforts. “The Oracle system is definitely helping us to construct global HR processes,” adds Bjork. “It is really important that we have a business model that is decentralized, so we can effectively serve our local markets, and interact with our global ERP [enterprise resource planning] systems as well.
    We would not be able to do this without a really good, well-integrated HCM system that could support these efforts.” A key piece of this effort is something Skanska has developed internally called the Skanska Leadership Profile. Core competencies, on which all employees are measured, are used in performance reviews to determine weak areas but also to discover talent, such as those who will be promoted or need succession plans. This global profiling system brings consistency to the way HR professionals evaluate and review talent across the company, with a consistent set of ratings and a consistent definition of competencies. All salaried employees in Skanska are tied to a talent management process that gives opportunity for midyear and year-end reviews. Using the performance management module, managers can align individual goals with corporate goals; provide clear visibility into how each employee contributes to the success of the organization; and drive a strategic, end-to-end talent management strategy with a single, integrated system for all talent-related activities. This is critical to a company that is highly focused on ensuring that every employee has a development plan linked to his or her succession potential. “Our approach all along has been to deploy software applications that are seamless to end users,” says Crane. “The beauty of a cloud-based system is that much of the functionality takes place behind the scenes so we can focus on making sure users can access the data when they need it. This model greatly improves their efficiency.” The employee profile not only sets a competency baseline for new employees but is also integrated with Skanska’s other back-office Oracle systems to ensure consistency in the way information is used to support other business functions. “Since we have about a dozen different HR systems that are providing us with information, we built a master database that collects all the information,” explains Lindholm. “That data is sent not only to Oracle Talent Management Cloud, but also to other systems that are dependent on this information.”
    Collaboration to Scale
    Skanska is poised to launch a new Oracle module to link employee learning plans to the review process and recruitment assessments. According to Crane, connecting these processes allows Skanska managers to see employees’ progress and produce an updated learning program. For example, as employees take classes, supervisors can consult the Oracle Talent Management Cloud portal to monitor progress and align it to each individual’s training and development plan. “That’s a pretty compelling solution for an organization that wants to manage its talent on a real-time basis and see how the training is working,” Crane says. Rolling out Oracle Talent Management Cloud was a joint effort among HR, IT, and a global group that oversaw the worldwide implementation. Skanska deployed the solution quickly across all markets at once. In the United States, for example, more than 35 offices quickly got up to speed on the new system via webinars for employees and face-to-face training for the HR group. “With any migration, there are moments when you hold your breath, but in this case, we had very few problems getting the system up and running,” says Crane. Lindholm adds, “There has been very little resistance to the system as users recognize its potential. Customizations are easy, and a lasting partnership has developed between Skanska and Oracle when help is needed.
They listen to us.” Bjork elaborates on the implementation process from an IT perspective. “Deploying a SaaS system removes a lot of the complexity,” he says. “You can downsize the IT part and focus on the business part, which increases the probability of a successful implementation. If you want to scale the system, you make a quick phone call. That’s all it took recently when we added 4,000 users. We didn’t have to think about resizing the servers or hiring more IT people. Oracle does that for us, and they have provided very good support.” As a result, Skanska has been able to implement a single, cost-effective talent management solution across the organization to support its strategy to recruit and develop a world-class staff. Stakeholders are confident that they are providing the most efficient recruitment system possible for competent personnel at all levels within the company—from skilled workers at construction sites to top management at headquarters. And Skanska can retain skilled employees and ensure that they receive the development opportunities they need to grow and advance.

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx
    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well! In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without the context of any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:
    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.
    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that can happen as one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The implication of the original question - "how do you enforce the non-negative balance rule" - then boils down to:
    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.
    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on transactions with a pseudo-entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.
    Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent, and the writes will be serialized. They will not scribble on the same document at the same time.
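    To make the ledger idea concrete, here is a minimal sketch of the three steps above in Python with PyMongo. The document shape follows the post's {Timestamp, FromAccountId, ToAccountId, Amount}; everything else (the collection name, field names, and the "compensates" marker) is my own invention for illustration, not the author's code:

        # Minimal ledger sketch (PyMongo). Each step is a single-document,
        # atomic insert; nothing is ever updated in place.
        from datetime import datetime, timezone
        from pymongo import MongoClient

        ledger = MongoClient().bank.ledger

        def transfer(from_acct, to_acct, amount):
            # 1. Insert the ledger entry.
            entry_id = ledger.insert_one({
                "ts": datetime.now(timezone.utc),
                "from": from_acct, "to": to_acct, "amount": amount,
            }).inserted_id

            # 2. Validate: deposits minus withdrawals must not go negative.
            credits = sum(e["amount"] for e in ledger.find({"to": from_acct}))
            debits = sum(e["amount"] for e in ledger.find({"from": from_acct}))
            if credits - debits >= 0:
                return True

            # 3. Insert a compensating (reverse) entry to roll the transfer back.
            ledger.insert_one({
                "ts": datetime.now(timezone.utc),
                "from": to_acct, "to": from_acct, "amount": amount,
                "compensates": entry_id,
            })
            return False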
    In our case - in choosing a ledger approach - we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue. Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees the "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have the ledger either with (after) or without (before) the ledger entry written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.
    Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks", with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes on a poll regarding how cute a furry animal is, but not so good for business. There are several other write concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The former choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is not different from a relational database at that point.
    Where does this leave us? Oh, yes. Eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to.
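    In PyMongo terms, the write-concern and read-preference choices just described look roughly like this (a sketch; the hostnames and replica-set name are placeholders):

        from pymongo import MongoClient, ReadPreference, WriteConcern

        db = MongoClient("mongodb://host1,host2,host3/?replicaSet=rs0").bank

        # w=0: fire-and-forget (fine for "votes on a poll"); w=1: the primary's
        # ack only; w="majority": a majority of the replica set has the write.
        durable_ledger = db.get_collection(
            "ledger", write_concern=WriteConcern(w="majority"))

        # Person A's "are we there yet?" query goes to the writable instance...
        ledger_primary = db.get_collection(
            "ledger", read_preference=ReadPreference.PRIMARY)

        # ...while B and everyone else read from a replica and tolerate the lag.
        ledger_replica = db.get_collection(
            "ledger", read_preference=ReadPreference.SECONDARY_PREFERRED)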
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They have been trained by many online businesses already that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet). The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never nets a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time as, or 1 ms after, the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would be visible, since we do actually account for the attempt.
    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds, even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account-number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.
    So where are we now with the nagging questions? "No joins": huh? What are those for? "No transactions": you mean no cross-collection and no cross-document transactions? Granted - but we don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
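    Two of the closing ideas reduce to a few lines each. A sketch, again with invented field names and reusing the ledger collection from the earlier snippet:

        from datetime import datetime, timedelta, timezone

        # Artificial visibility delay: hide unvalidated entries younger than
        # 15 minutes, so replication and validation have time to catch up.
        cutoff = datetime.now(timezone.utc) - timedelta(minutes=15)
        visible = ledger.find(
            {"$or": [{"validated": True}, {"ts": {"$lte": cutoff}}]})

        # Validator sharding: each of 11 validators owns a slice of the account
        # space, so any given origin account is validated by exactly one process.
        NUM_VALIDATORS = 11

        def validator_for(account_id):
            # Assumes numeric account ids; use a stable hash for string ids.
            return account_id % NUM_VALIDATORS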

    Read the article

  • JavaMail not sending Subject or From under jetty:run-war

    - by Jason Thrasher
    Has anyone seen JavaMail not sending proper MimeMessages to an SMTP server, depending on how the JVM is started? At the end of the day, I can't send JavaMail SMTP messages with Subject: or From: fields, and it appears other headers are missing too, but only when running the app as a WAR. The web project is built with Maven, and I'm testing sending JavaMail using a browser and a simple mail.jsp to debug and see different behavior when launching the app with:
    1) mvn jetty:run (mail sends fine, with proper Subject and From fields)
    2) mvn jetty:run-war (mail sends fine, but missing Subject, From, and other fields)
    I've meticulously run diff on the (verbose) Maven debug output (-X), and there are zero differences in the runtime dependencies between the two. I've also compared System properties, and they are identical. Something else is happening in the jetty:run-war case that changes the way JavaMail behaves. What other stones need turning? Curiously, I've tried a debugger in both situations and found that the javax.mail.internet.MimeMessage instance is getting created differently. The webapp is using Spring to send email picked off of an Apache ActiveMQ queue. When running the app with mvn jetty:run, the MimeMessage.contentStream variable is used for message content. When running with mvn jetty:run-war, the MimeMessage.content variable is used for the message contents, and the content = ASCIIUtility.getBytes(is); call removes all of the header data from the parsed content. Since this seemed very odd, and debugging Spring/ActiveMQ is a deep dive, I created a simplified test without any of that infrastructure: just a JSP using mail-1.4.2.jar, yet the same headers are missing. Also of note, these headers are missing when running the WAR file under Tomcat 5.5.27. Tomcat behaves just like Jetty when running the WAR, with the same missing headers. With JavaMail debugging turned on, I clearly see different output.
GOOD CASE: In the jetty:run (non-WAR) the log output is: DEBUG: JavaMail version 1.4.2 DEBUG: successfully loaded resource: /META-INF/javamail.default.providers DEBUG: Tables of loaded providers DEBUG: Providers Listed By Class Name: {com.sun.mail.smtp.SMTPSSLTransport=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Sun Microsystems, Inc], com.sun.mail.smtp.SMTPTransport=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc], com.sun.mail.imap.IMAPSSLStore=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3SSLStore=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Sun Microsystems, Inc], com.sun.mail.imap.IMAPStore=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3Store=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc]} DEBUG: Providers Listed By Protocol: {imaps=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Sun Microsystems, Inc], imap=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], smtps=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Sun Microsystems, Inc], pop3=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc], pop3s=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Sun Microsystems, Inc], smtp=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc]} DEBUG: successfully loaded resource: /META-INF/javamail.default.address.map DEBUG: getProvider() returning javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc] DEBUG SMTP: useEhlo true, useAuth true DEBUG SMTP: trying to connect to host "mail.authsmtp.com", port 465, isSSL false 220 mail.authsmtp.com ESMTP Sendmail 8.14.2/8.14.2/Kp; Thu, 18 Jun 2009 01:35:24 +0100 (BST) DEBUG SMTP: connected to host "mail.authsmtp.com", port: 465 EHLO jmac.local 250-mail.authsmtp.com Hello sul-pubs-3a.Stanford.EDU [171.66.201.2], pleased to meet you 250-ENHANCEDSTATUSCODES 250-PIPELINING 250-8BITMIME 250-SIZE 52428800 250-AUTH CRAM-MD5 DIGEST-MD5 LOGIN PLAIN 250-DELIVERBY 250 HELP DEBUG SMTP: Found extension "ENHANCEDSTATUSCODES", arg "" DEBUG SMTP: Found extension "PIPELINING", arg "" DEBUG SMTP: Found extension "8BITMIME", arg "" DEBUG SMTP: Found extension "SIZE", arg "52428800" DEBUG SMTP: Found extension "AUTH", arg "CRAM-MD5 DIGEST-MD5 LOGIN PLAIN" DEBUG SMTP: Found extension "DELIVERBY", arg "" DEBUG SMTP: Found extension "HELP", arg "" DEBUG SMTP: Attempt to authenticate DEBUG SMTP: check mechanisms: LOGIN PLAIN DIGEST-MD5 AUTH LOGIN 334 VXNlcm5hjbt7 YWM0MDkwhi== 334 UGFzc3dvjbt7 YXV0aHNtdHAydog3 235 2.0.0 OK Authenticated DEBUG SMTP: use8bit false MAIL FROM:<[email protected]> 250 2.1.0 <[email protected]>... Sender ok RCPT TO:<[email protected]> 250 2.1.5 <[email protected]>... Recipient ok DEBUG SMTP: Verified Addresses DEBUG SMTP: Jason Thrasher <[email protected]> DATA 354 Enter mail, end with "." on a line by itself From: Webmaster <[email protected]> To: Jason Thrasher <[email protected]> Message-ID: <[email protected]> Subject: non-Spring: Hello World MIME-Version: 1.0 Content-Type: text/plain;charset=UTF-8 Content-Transfer-Encoding: 7bit Hello World: message body here . 
250 2.0.0 n5I0ZOkD085654 Message accepted for delivery QUIT 221 2.0.0 mail.authsmtp.com closing connection BAD CASE: The log output when running as a WAR, with missing headers, is quite different: Loading javamail.default.providers from jar:file:/Users/jason/.m2/repository/javax/mail/mail/1.4.2/mail-1.4.2.jar!/META-INF/javamail.default.providers DEBUG: loading new provider protocol=imap, className=com.sun.mail.imap.IMAPStore, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=imaps, className=com.sun.mail.imap.IMAPSSLStore, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=smtp, className=com.sun.mail.smtp.SMTPTransport, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=smtps, className=com.sun.mail.smtp.SMTPSSLTransport, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=pop3, className=com.sun.mail.pop3.POP3Store, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=pop3s, className=com.sun.mail.pop3.POP3SSLStore, vendor=Sun Microsystems, Inc, version=null Loading javamail.default.providers from jar:file:/Users/jason/Documents/dev/subscribeatron/software/trunk/web/struts/target/work/webapp/WEB-INF/lib/mail-1.4.2.jar!/META-INF/javamail.default.providers DEBUG: loading new provider protocol=imap, className=com.sun.mail.imap.IMAPStore, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=imaps, className=com.sun.mail.imap.IMAPSSLStore, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=smtp, className=com.sun.mail.smtp.SMTPTransport, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=smtps, className=com.sun.mail.smtp.SMTPSSLTransport, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=pop3, className=com.sun.mail.pop3.POP3Store, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=pop3s, className=com.sun.mail.pop3.POP3SSLStore, vendor=Sun Microsystems, Inc, version=null DEBUG: getProvider() returning provider protocol=smtp; type=javax.mail.Provider$Type@98203f; class=com.sun.mail.smtp.SMTPTransport; vendor=Sun Microsystems, Inc DEBUG SMTP: useEhlo true, useAuth false DEBUG SMTP: trying to connect to host "mail.authsmtp.com", port 465, isSSL false 220 mail.authsmtp.com ESMTP Sendmail 8.14.2/8.14.2/Kp; Thu, 18 Jun 2009 01:51:46 +0100 (BST) DEBUG SMTP: connected to host "mail.authsmtp.com", port: 465 EHLO jmac.local 250-mail.authsmtp.com Hello sul-pubs-3a.Stanford.EDU [171.66.201.2], pleased to meet you 250-ENHANCEDSTATUSCODES 250-PIPELINING 250-8BITMIME 250-SIZE 52428800 250-AUTH CRAM-MD5 DIGEST-MD5 LOGIN PLAIN 250-DELIVERBY 250 HELP DEBUG SMTP: Found extension "ENHANCEDSTATUSCODES", arg "" DEBUG SMTP: Found extension "PIPELINING", arg "" DEBUG SMTP: Found extension "8BITMIME", arg "" DEBUG SMTP: Found extension "SIZE", arg "52428800" DEBUG SMTP: Found extension "AUTH", arg "CRAM-MD5 DIGEST-MD5 LOGIN PLAIN" DEBUG SMTP: Found extension "DELIVERBY", arg "" DEBUG SMTP: Found extension "HELP", arg "" DEBUG SMTP: Attempt to authenticate DEBUG SMTP: check mechanisms: LOGIN PLAIN DIGEST-MD5 AUTH LOGIN 334 VXNlcm5hjbt7 YWM0MDkwhi== 334 UGFzc3dvjbt7 YXV0aHNtdHAydog3 235 2.0.0 OK Authenticated DEBUG SMTP: use8bit false MAIL FROM:<[email protected]> 250 2.1.0 <[email protected]>... Sender ok RCPT TO:<[email protected]> 250 2.1.5 <[email protected]>... 
    Recipient ok
    DEBUG SMTP: Verified Addresses
    DEBUG SMTP: Jason Thrasher <[email protected]>
    DATA
    354 Enter mail, end with "." on a line by itself
    Hello World: message body here
    .
    250 2.0.0 n5I0pkSc090137 Message accepted for delivery
    QUIT
    221 2.0.0 mail.authsmtp.com closing connection

    Here's the actual mail.jsp that I'm testing war/non-war with:

    <%@page import="java.util.*"%>
    <%@page import="javax.mail.internet.*"%>
    <%@page import="javax.mail.*"%>
    <%
    InternetAddress from = new InternetAddress("[email protected]", "Webmaster");
    InternetAddress to = new InternetAddress("[email protected]", "Jason Thrasher");
    String subject = "non-Spring: Hello World";
    String content = "Hello World: message body here";

    final Properties props = new Properties();
    props.setProperty("mail.transport.protocol", "smtp");
    props.setProperty("mail.host", "mail.authsmtp.com");
    props.setProperty("mail.port", "465");
    props.setProperty("mail.username", "myusername");
    props.setProperty("mail.password", "secret");
    props.setProperty("mail.debug", "true");
    props.setProperty("mail.smtp.auth", "true");
    props.setProperty("mail.smtp.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
    props.setProperty("mail.smtp.socketFactory.fallback", "false");

    Session mailSession = Session.getDefaultInstance(props);
    Message message = new MimeMessage(mailSession);
    message.setFrom(from);
    message.setRecipient(Message.RecipientType.TO, to);
    message.setSubject(subject);
    message.setContent(content, "text/plain;charset=UTF-8");

    Transport trans = mailSession.getTransport();
    trans.connect(props.getProperty("mail.host"),
            Integer.parseInt(props.getProperty("mail.port")),
            props.getProperty("mail.username"),
            props.getProperty("mail.password"));
    trans.sendMessage(message, message.getRecipients(Message.RecipientType.TO));
    trans.close();
    %>
    email was sent

    SOLUTION: Yes, the problem was transitive dependencies of Apache CXF 2. I had to exclude geronimo-javamail_1.4_spec from the build and just rely on javax's mail-1.4.jar.

    <dependency>
        <groupId>org.apache.cxf</groupId>
        <artifactId>cxf-rt-frontend-jaxws</artifactId>
        <version>2.2.6</version>
        <exclusions>
            <exclusion>
                <groupId>org.apache.geronimo.specs</groupId>
                <artifactId>geronimo-javamail_1.4_spec</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    Thanks for all of the answers.
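    A note on spotting this class of problem up front (my suggestion, not part of the original thread): when two JavaMail implementations end up on the classpath, Maven can show which transitive dependency drags the extra one in. Running something like mvn dependency:tree -Dincludes=:geronimo-javamail_1.4_spec should print the dependency path (here, via cxf-rt-frontend-jaxws), which points straight at the exclusion shown above.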

    Read the article

  • InstallShield Development Error during the Installation process

    - by Alopekos
Hey guys, I've been stuck on a problem for the past couple of days that makes no sense to me. My installer builds fine in the InstallShield IDE, but when it is about to finish the installation, it gets two errors and then rolls back, failing the installation. Right when the install bar is at about 100%, an error box pops up that states this: "Error 1001.Exception occurred while initializing the installation: System.IO.FileLoadException: Attempt to load an unverifiable executable with fixups (IAT with more than 2 sections or a TLS section.) (Exception from HRESULT: 0x80131019)." The box pops up once, then the installer flashes its status to "rollback", then pops up another error box, and once OK'ed it proceeds to roll back as usual. I don't understand that error message, so I looked in the MSI logs and found this: "InstallShield 13:20:08: Initializing Property Bag... InstallShield 13:20:08: Getting file count from property bag InstallShield 13:20:08: File Count : 7 InstallShield 13:20:08: Sorting Based On Order... InstallShield 13:20:08: This setup is running on a 32-bit Windows...No need to load ISBEW64.exe InstallShield 13:20:08: Registering file C:\Program Files\Cadwell\Easy III\QMWSChartDataServer.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\DataDelivery.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\QMGlobalData.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\QMAdoDB.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\QMPatientData.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\MedShareGlobalData.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\MedDirectory.dll (32-bit) InstallShield 13:20:09: Begin Comitting Property Bag InstallShield 13:20:09: Write KeyList count InstallShield 13:20:09: Finished Comitting Property Bag Action 13:20:09: _EBDE7916DF6AF3B644016C54F66930DC.commit. Action 13:20:09: _EBDE7916DF6AF3B644016C54F66930DC.rollback. Action 13:20:09: _EBDE7916DF6AF3B644016C54F66930DC.install. Error 1001.Exception occurred while initializing the installation: System.IO.FileLoadException: Attempt to load an unverifiable executable with fixups (IAT with more than 2 sections or a TLS section.) (Exception from HRESULT: 0x80131019). MSI (s) (34!84) [13:20:26:455]: Info 2769.Custom Action _EBDE7916DF6AF3B644016C54F66930DC.install did not close 1 MSIHANDLEs. Action ended 13:20:26: InstallFinalize. Return value 3. Action 13:20:26: Rollback. Rolling back action: Rollback: _EBDE7916DF6AF3B644016C54F66930DC.install Rollback: _EBDE7916DF6AF3B644016C54F66930DC.rollback Error 1001.Exception occurred while initializing the installation: System.IO.FileLoadException: Attempt to load an unverifiable executable with fixups (IAT with more than 2 sections or a TLS section.) (Exception from HRESULT: 0x80131019). MSI (s) (34!E8) [13:20:27:036]: Info 2769.Custom Action _EBDE7916DF6AF3B644016C54F66930DC.rollback did not close 1 MSIHANDLEs. Rollback: _EBDE7916DF6AF3B644016C54F66930DC.commit Rollback: ISSelfRegisterFiles Rollback: Registering modules Rollback: Registering type libraries Rollback: Writing system registry values Rollback: Registering program identifiers" Everything rolls back from that point on.
For some reason it looks to me like InstallShield is trying to launch my program before it finishes the installation, even though I told it to prompt the user about launching. Is it the self-registration step that triggers this attempt, or something else? I've been scouring the web all day and I've found some ideas, but I haven't seen any solutions yet. The installers I've tried (and that failed) have always needed to be Setup.exes; when I try to build an .msi-only setup I get this error message, which may help someone who knows more about this system than I do: "Your project contains InstallShield prerequisites. A Setup.exe setup launcher is required if you are building a release that includes InstallShield prerequisites. Change your release settings to build Setup.exe, or remove the prerequisites from your project. -7076" There's nothing on the InstallShield site that matches that error code, so I'm at a loss. Thanks for any help possible, I'm an intern and I'm fresh out of ideas... Josh. System: XP SP3, InstallShield 2010 Pro, install being tested on a Virtual PC.
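One diagnostic worth trying (my suggestion, not from the original post): that "unverifiable executable with fixups" message usually points at a mixed-mode (C++/CLI) assembly being loaded by the .NET Installer-class custom action. Assuming the .NET SDK tools are on the path, corflags will show which of the DLLs being registered is not pure IL:

    corflags "C:\Program Files\Cadwell\Easy III\QMWSChartDataServer.dll"

A mixed-mode assembly reports ILONLY : 0, while a pure managed DLL reports ILONLY : 1; checking each file from the registration log narrows down which one trips the loader.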

    Read the article

  • Problem receiving in RXTX

    - by drhorrible
I've been using RXTX for about a year now, without too many problems. I just started a new program to interact with a new piece of hardware, so I reused the connect() method I've used on my other projects, but I have a weird problem I've never seen before. The Problem The device works fine, because when I connect with HyperTerminal, I send things and receive what I expect, and Serial Port Monitor (SPM) reflects this. However, when I run the simple HyperTerminal clone I wrote to diagnose the problem I'm having with my main app, bytes are sent, according to SPM, but nothing is received, and my SerialPortEventListener never fires. Even when I check for available data in the main loop, reader.ready() returns false. If I ignore this check, then I get an exception, details below. Relevant section of connect() method: /* Configure and open port */ port = (SerialPort) CommPortIdentifier.getPortIdentifier(name) .open(owner, 1000); port.setSerialPortParams(baud, databits, stopbits, parity); port.setFlowControlMode(fc_mode); final BufferedReader br = new BufferedReader( new InputStreamReader( port.getInputStream(), "US-ASCII")); /* Add listener to print received characters to screen */ port.addEventListener(new SerialPortEventListener(){ public void serialEvent(SerialPortEvent ev) { try { System.out.println("Received: "+br.readLine()); } catch (IOException e) { e.printStackTrace(); } } }); port.notifyOnDataAvailable(true); Exception java.io.IOException: Underlying input stream returned zero bytes at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:268) at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306) at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158) at java.io.InputStreamReader.read(InputStreamReader.java:167) at java.io.BufferedReader.fill(BufferedReader.java:136) at java.io.BufferedReader.read(BufferedReader.java:157) at <my code> The big question (again) I think I've eliminated all possible hardware problems, so what could be wrong with my code, or the RXTX library? Edit: something interesting When I open HyperTerminal after sending a bunch of commands from Java that should have gotten responses, all of the responses appear immediately, as if they had been put in a buffer somewhere but were unavailable. Edit 2: Tried something new, same results I ran the code example found here, with the same results. No data came in, but when I switched to a new program, it came all at once. Edit 3 The hardware is fine, and even a different computer has the same problem. I am not using any sort of USB adapter. I've started using PortMon, too, and it's giving me some interesting results. HyperTerminal and RXTX are not using the same settings, and RXTX always polls the port, unlike HyperTerminal, but I still can't see what settings would affect this. As soon as I can isolate the configuration from the constant polling, I'll post my PortMon logs. Edit 4 Is it possible that some sort of Windows update in the last 3 months could have caused this? It has screwed up one of my MATLAB mex-based programs once. Edit 5 I've also noticed some things that are different between HyperTerminal, RXTX, and a separate program I found that communicates with the device (but doesn't do what I want, which is why I'm rolling my own program): HyperTerminal - set to no flow control, but Serial Port Monitor's RTS and DTR indicators are green. Other program - not sure what settings it thinks it's using, but only SPM's RTS indicator is green. RXTX - no matter what flow control I set, only SPM's CTS and DTR indicators are on.
From Serial Port Monitor's help files (paraphrased): the indicators display the state of the serial control lines: RTS (Request To Send), CTS (Clear To Send), and DTR (Data Terminal Ready).
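Given that the working HyperTerminal session shows both RTS and DTR asserted while RXTX leaves RTS down, one thing worth trying (an educated guess on my part, not a confirmed fix) is to raise the control lines explicitly after opening the port, since some devices hold their output until the lines come up:

    // Assert the modem control lines RXTX is leaving down;
    // setRTS/setDTR are standard gnu.io.SerialPort methods.
    port.setRTS(true);  // Request To Send - the line HyperTerminal has up but RXTX doesn't
    port.setDTR(true);  // Data Terminal Ready

If the device is gating its responses on RTS, this would also explain why buffered data appears the moment another program opens the port with different line states.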

    Read the article

  • Core Data Model Design Question - Changing "Live" Objects also Changes Saved Objects

    - by mwt
I'm working on my first Core Data project (on iPhone) and am really liking it. Core Data is cool stuff. I am, however, running into a design difficulty that I'm not sure how to solve, although I imagine it's a fairly common situation. It concerns the data model. For the sake of clarity, I'll use an imaginary football game app as an example to illustrate my question. Say that there are NSMO's called Downs and Plays. Plays function like templates to be used by Downs. The user creates Plays (for example, Bootleg, Button Hook, Slant Route, Sweep, etc.) and fills in the various properties. Plays have a to-many relationship with Downs. For each Down, the user decides which Play to use. When the Down is executed, it uses the Play as its template. After each down is run, it is stored in history. The program remembers all the Downs ever played. So far, so good. This is all working fine. The question I have concerns what happens when the user wants to change the details of a Play. Let's say it originally involved a pass to the left, but the user now wants it to be a pass to the right. Making that change, however, not only affects all the future executions of that Play, but also changes the details of the Plays stored in history. The record of Downs gets "polluted," in effect, because the Play template has been changed. I have been rolling around several possible fixes to this situation, but I imagine the geniuses of SO know much more about how to handle this than I do. Still, the potential fixes I've come up with are: 1) "Versioning" of Plays. Each change to a Play template actually creates a new, separate Play object with the same name (as far as the user can tell). Underneath the hood, however, it is actually a different Play. This would work, AFAICT, but seems like it could potentially lead to a wild proliferation of Play objects, esp. if the user keeps switching back and forth between several versions of the same Play (creating object after object each time the user switches). Yes, the app could check for pre-existing, identical Plays, but... it just seems like a mess. 2) Have Downs, upon saving, record the details of the Play they used, but not as a Play object. This just seems ridiculous, given that the Play object is there to hold just those details. 3) Recognize that Play objects are actually fulfilling 2 functions: one to be a template for a Down, and the other to record what template was used. These 2 functions have a different relationship with a Down. The first (template) has a to-many relationship. But the second (record) has a one-to-one relationship. This would mean creating a second object, something like "Play-Template", which would retain the to-many relationship with Downs. Play objects would get reconfigured to have a one-to-one relationship with Downs. A Down would use a Play-Template object for execution, but use the new kind of Play object to store what template was used. It is this change from a to-many relationship to a one-to-one relationship that represents the crux of the problem. Even writing this question out has helped me get clearer. I think something like solution 3 is the answer. However, if anyone has a better idea or even just a confirmation that I'm on the right track, that would be helpful. (Remember, I'm not really making a football game, it's just faster/easier to use a metaphor everyone understands.) Thanks.
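A sketch of what that third option might look like as a Core Data model (entity and attribute names are mine, purely illustrative): the template entity holds the editable definition and keeps the to-many relationship, while a snapshot entity records what was actually run:

    PlayTemplate: name, passDirection, ...   downs (to-many -> Down)
    PlayRecord:   name, passDirection, ...   down (to-one -> Down)
    Down:         date, result, ...          template (to-one -> PlayTemplate), record (to-one -> PlayRecord)

Executing a Down reads from its PlayTemplate and copies the values into a fresh PlayRecord, so later edits to the template leave history untouched.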

    Read the article

  • Atomikos rollback doesn't clear JPA persistence context?

    - by HDave
I have a Spring/JPA/Hibernate application and am trying to get it to pass my JUnit integration tests against H2 and MySQL. Currently I am using Atomikos for transactions and C3P0 for connection pooling. Despite my best efforts, one of my DAO integration tests is failing with org.hibernate.NonUniqueObjectException. In the failing test I create an object with the "new" operator, set the ID and call persist on it. @Test @Transactional public void save_UserTestDataNewObject_RecordSetOneLarger() { int expectedNumberRecords = 4; User newUser = createNewUser(); dao.persist(newUser); List<User> allUsers = dao.findAll(0, 1000); assertEquals(expectedNumberRecords, allUsers.size()); } In the previous test method I do the same thing (createNewUser() is a helper method that creates an object with the same ID every time). I am sure that creating and persisting a second object with the same ID is the cause, but each test method is in its own transaction and the object I created is bound to a private test method variable. I can even see in the logs that Spring Test and Atomikos are rolling back the transaction associated with each test method. I would have thought the rollback would have also cleared the persistence context. On a hunch, I added a call to dao.clear() at the beginning of the faulty test method and the problem went away!! So rollback doesn't clear the persistence context??? If not, then who does?? My EntityManagerFactory config is as follows: <bean id="myappTestLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="myapp-core" /> <property name="persistenceUnitPostProcessors"> <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor"> <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" /> </bean> </property> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true" /> <property name="database" value="$DS{hibernate.database}" /> <property name="databasePlatform" value="$DS{hibernate.dialect}" /> </bean> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.factory_class">com.atomikos.icatch.jta.hibernate3.AtomikosJTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> <prop key="hibernate.connection.autocommit">false</prop> <prop key="hibernate.format_sql">true</prop> <prop key="hibernate.use_sql_comments">true</prop> </props> </property> </bean>
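For what it's worth, rolling back a transaction undoes the database work, but Hibernate's first-level cache is not guaranteed to be emptied by it; the usual advice is to clear or discard the Session/EntityManager yourself after a rollback. A minimal sketch of the workaround you stumbled on, generalised so it runs after every test (assuming a Spring-injected shared EntityManager and JUnit 4):

    @PersistenceContext
    private EntityManager entityManager;

    @After  // runs after each @Test method
    public void clearPersistenceContext() {
        entityManager.clear();  // detach anything left over from this test
    }

That keeps the dao.clear() call out of the individual test bodies.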

    Read the article

  • Help with 2-part question on ASP.NET MVC and Custom Security Design

    - by JustAProgrammer
I'm using ASP.NET MVC and I am trying to separate a lot of my logic. Eventually, this application will be pretty big. It's basically a SaaS app that I need to allow different kinds of clients to access. I have a two-part question; the first deals with my general design and the second with how to implement it in ASP.NET MVC. Primarily, there will initially be an ASP.NET MVC "client" front-end and there will be a set of web services for third parties to interact with (perhaps mobile, etc). I realize I could have the ASP.NET MVC app interact just through the Web Service but I think that is unnecessary overhead. So, I am creating an API that will essentially be a DLL that the Web App and the Web Services will utilize. The API consists of the main set of business logic and Data Transfer Objects, etc. (so, this includes methods like CreateCustomer, EditProduct, etc., for example). Also, my permissions requirements are a little complicated. I can't really use a straight Roles system as I need to have some fine-grained permissions (but all permissions are positive rights). So, I don't think I can really use the ASP.NET Roles/Membership system, or if I can, it seems like I'd be doing more work than rolling my own. I've used Membership before and for this one I think I'd rather roll my own. Both the Web App and Web Services will need to keep security as a concern. So, my design is kind of like this: each method in the API will need to verify the security of the caller; in the Web App, each "page" ("action" in MVC speak) will also check the user's permissions (so, don't present the user with the "Add Customer" button if the user does not have that right, but also, whenever the API receives AddCustomer(), check the security too). I think the Web Service really needs the checking in the DLL because it may not always be used in some kind of pre-authenticated context (like using Session/Cookies in a Web App); also, having the security checks in the API means I don't really HAVE TO check it in other places if I'm on a mobile (say iPhone) and don't want to do all kinds of checking on the client. However, in the Web App I think there will be some duplication of work, since the Web App checks the user's security before presenting the user with options, which is OK, but I was thinking of a way to avoid this duplication by allowing the Web App to tell the API not to check the security, while the Web Service would always want security to be verified. Is this a good method? If not, what's better? If so, what's a good way of implementing it? I was thinking of doing this: in the API, I would have two functions for each action: /* Here, "Credential" objects are just something I made up */ public void AddCustomer(string customerName, Credential credential, bool checkSecurity) { if (checkSecurity) { if (Has_Rights_To_Add_Customer(credential)) /* made up for clarity */ { AddCustomer(customerName); } else { /* throw an exception or somehow present an error */ } } else AddCustomer(customerName); } public void AddCustomer(string customerName) { /* actual logic to add the customer into the DB or whatever. Would it be good for this method to verify that the caller is the Web App through some method? */ } So, is this a good design or should I do something differently? My next question is that clearly it doesn't seem like I can really use [Authorize ...] for determining if a user has the permissions to do something.
In fact, one action might depend on a variety of permissions, and the View might hide or show certain options depending on the permission. What's the best way to do this? Should I have some kind of PermissionSet object that the user carries around throughout the Web App (in Session or wherever), so that the MVC action method checks whether the user can invoke that action, and the View checks the various permissions (via ViewData or similar) to decide what to hide or show?
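For the fine-grained checks, a minimal sketch of that Session-carried idea (PermissionSet here is your hypothetical type, and "Permissions" is an assumed session key) is a small AuthorizeAttribute subclass:

    // Sketch only: guards an action with a single named permission.
    public class RequirePermissionAttribute : AuthorizeAttribute
    {
        public string Permission { get; set; }

        protected override bool AuthorizeCore(HttpContextBase httpContext)
        {
            var permissions = httpContext.Session["Permissions"] as PermissionSet;
            return permissions != null && permissions.Contains(Permission);
        }
    }

    // Usage:
    // [RequirePermission(Permission = "AddCustomer")]
    // public ActionResult AddCustomer(...) { ... }

The View side can read the same PermissionSet (exposed via ViewData or a base view model) to decide what to hide or show, so both layers consult one object.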

    Read the article

  • ScriptManager emits ScriptReferences after ClientScriptIncludes. Why?

    - by Chris F
I have an application that uses a lot of javascript in a lot of different .js files. Each page can have any number and combination of different controls, and each control can use a different set of js files. Instead of including all possible js files as <script src="xxx.js"> in the head of my master page, I thought I would reduce the number of included files by getting each control to call ScriptManager.Scripts.Add(new ScriptReference("xxx.js")) - thus only the scripts that are actually needed by the page will be included. Well, that bit works fine. But... the controls also use ScriptManager.RegisterClientScriptBlock fairly extensively too (there are scenarios where the controls are inside update panels). I was disappointed to see that the ScriptManager emits the client script includes before the ScriptReferences. For example - see the test file and output at the end (this results in a JS error because $ is not defined). Am I being dumb or is this to be expected? I would have thought that a sensible thing for the ScriptManager to do would be to emit the ScriptReferences first and then the other stuff. Short of rolling my own ScriptManager-like object to manage static JS file references, does anyone have any suggestions on how to get the behaviour I want from ScriptManager? Thanks in advance. Example File <%@ Page Language="C#" AutoEventWireup="true" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <script runat="server"> protected void Page_Load(object sender, EventArgs e) { ScriptManager.RegisterClientScriptBlock(Page, Page.GetType(), "Test", "$(function() {alert('hello');});", true); } </script> </head> <body> <form id="form1" runat="server"> <asp:ScriptManager runat="server" > <Scripts> <asp:ScriptReference Path="~/jquery/jquery.js" /> </Scripts> </asp:ScriptManager> </form> </body> </html> Output <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > ... boring stuff removed .... <script src="/js/WebResource.axd?d=-nOHbla bla " type="text/javascript"></script> <script type="text/javascript"> //<![CDATA[ $(function() {alert('hello');});//]]> </script> ... boring stuff removed .... <script src="/js/ScriptResource.axd?d=qgCbla bla bla" type="text/javascript"></script> <script src="jquery/jquery.js" type="text/javascript"></script> ... boring stuff removed .... </form> </body> </html>
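One workaround worth trying (my suggestion, not from the original question, and it changes where the block renders rather than the ScriptManager's ordering logic): RegisterStartupScript emits its block near the end of the form, after the script references have been rendered, so jQuery should be defined by the time the block executes.

    // Rendered near the closing </form> tag, after the ScriptReferences,
    // unlike RegisterClientScriptBlock which renders near the form opening.
    ScriptManager.RegisterStartupScript(Page, Page.GetType(), "Test",
        "$(function() { alert('hello'); });", true);

The $(function() {...}) wrapper also defers the work until the DOM is ready, which is a reasonable extra guard either way.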

    Read the article

  • Creating a dynamic proxy generator – Part 1 – Creating the Assembly builder, Module builder and caching mechanism

    - by SeanMcAlinden
    I’ve recently started a project with a few mates to learn the ins and outs of Dependency Injection, AOP and a number of other pretty crucial patterns of development as we’ve all been using these patterns for a while but have relied totally on third part solutions to do the magic. We thought it would be interesting to really get into the details by rolling our own IoC container and hopefully learn a lot on the way, and you never know, we might even create an excellent framework. The open source project is called Rapid IoC and is hosted at http://rapidioc.codeplex.com/ One of the most interesting tasks for me is creating the dynamic proxy generator for enabling Aspect Orientated Programming (AOP). In this series of articles, I’m going to track each step I take for creating the dynamic proxy generator and I’ll try my best to explain what everything means - mainly as I’ll be using Reflection.Emit to emit a fair amount of intermediate language code (IL) to create the proxy types at runtime which can be a little taxing to read. It’s worth noting that building the proxy is without a doubt going to be slightly painful so I imagine there will be plenty of areas I’ll need to change along the way. Anyway lets get started…   Part 1 - Creating the Assembly builder, Module builder and caching mechanism Part 1 is going to be a really nice simple start, I’m just going to start by creating the assembly, module and type caches. The reason we need to create caches for the assembly, module and types is simply to save the overhead of recreating proxy types that have already been generated, this will be one of the important steps to ensure that the framework is fast… kind of important as we’re calling the IoC container ‘Rapid’ – will be a little bit embarrassing if we manage to create the slowest framework. The Assembly builder The assembly builder is what is used to create an assembly at runtime, we’re going to have two overloads, one will be for the actual use of the proxy generator, the other will be mainly for testing purposes as it will also save the assembly so we can use Reflector to examine the code that has been created. Here’s the code: DynamicAssemblyBuilder using System; using System.Reflection; using System.Reflection.Emit; namespace Rapid.DynamicProxy.Assembly {     /// <summary>     /// Class for creating an assembly builder.     /// </summary>     internal static class DynamicAssemblyBuilder     {         #region Create           /// <summary>         /// Creates an assembly builder.         /// </summary>         /// <param name="assemblyName">Name of the assembly.</param>         public static AssemblyBuilder Create(string assemblyName)         {             AssemblyName name = new AssemblyName(assemblyName);               AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(                     name, AssemblyBuilderAccess.Run);               DynamicAssemblyCache.Add(assembly);               return assembly;         }           /// <summary>         /// Creates an assembly builder and saves the assembly to the passed in location.         
/// </summary>         /// <param name="assemblyName">Name of the assembly.</param>         /// <param name="filePath">The file path.</param>         public static AssemblyBuilder Create(string assemblyName, string filePath)         {             AssemblyName name = new AssemblyName(assemblyName);               AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(                     name, AssemblyBuilderAccess.RunAndSave, filePath);               DynamicAssemblyCache.Add(assembly);               return assembly;         }           #endregion     } }   So hopefully the above class is fairly self-explanatory: an AssemblyName is created using the passed in string for the actual name of the assembly. An AssemblyBuilder is then constructed with the current AppDomain and, depending on the overload used, it is either just run in the current context or it is set up ready for saving. It is then added to the cache.   DynamicAssemblyCache using System.Reflection.Emit; using Rapid.DynamicProxy.Exceptions; using Rapid.DynamicProxy.Resources.Exceptions;   namespace Rapid.DynamicProxy.Assembly {     /// <summary>     /// Cache for storing the dynamic assembly builder.     /// </summary>     internal static class DynamicAssemblyCache     {         #region Declarations           private static object syncRoot = new object();         internal static AssemblyBuilder Cache = null;           #endregion           #region Adds a dynamic assembly to the cache.           /// <summary>         /// Adds a dynamic assembly builder to the cache.         /// </summary>         /// <param name="assemblyBuilder">The assembly builder.</param>         public static void Add(AssemblyBuilder assemblyBuilder)         {             lock (syncRoot)             {                 Cache = assemblyBuilder;             }         }           #endregion           #region Gets the cached assembly                  /// <summary>         /// Gets the cached assembly builder.         /// </summary>         /// <returns></returns>         public static AssemblyBuilder Get         {             get             {                 lock (syncRoot)                 {                     if (Cache != null)                     {                         return Cache;                     }                 }                   throw new RapidDynamicProxyAssertionException(AssertionResources.NoAssemblyInCache);             }         }           #endregion     } } The cache is simply a static property that will store the AssemblyBuilder (I know it's a little weird that I've made it public, this is for testing purposes, I know that's a bad excuse but hey…). There are two methods for using the cache – Add and Get – which just provide thread-safe access to the cache.   The Module Builder The module builder is required as the created proxy classes will need to live inside a module within the assembly. Here's the code: DynamicModuleBuilder using System.Reflection.Emit; using Rapid.DynamicProxy.Assembly; namespace Rapid.DynamicProxy.Module {     /// <summary>     /// Class for creating a module builder.     /// </summary>     internal static class DynamicModuleBuilder     {         /// <summary>         /// Creates a module builder using the cached assembly.         
/// </summary>         public static ModuleBuilder Create()         {             string assemblyName = DynamicAssemblyCache.Get.GetName().Name;               ModuleBuilder moduleBuilder = DynamicAssemblyCache.Get.DefineDynamicModule                 (assemblyName, string.Format("{0}.dll", assemblyName));               DynamicModuleCache.Add(moduleBuilder);               return moduleBuilder;         }     } } As you can see, the module builder is created on the assembly that lives in the DynamicAssemblyCache; the module is given the assembly name and also a string representing the filename if the assembly is to be saved. It is then added to the DynamicModuleCache. DynamicModuleCache using System.Reflection.Emit; using Rapid.DynamicProxy.Exceptions; using Rapid.DynamicProxy.Resources.Exceptions; namespace Rapid.DynamicProxy.Module {     /// <summary>     /// Class for storing the module builder.     /// </summary>     internal static class DynamicModuleCache     {         #region Declarations           private static object syncRoot = new object();         internal static ModuleBuilder Cache = null;           #endregion           #region Add           /// <summary>         /// Adds a dynamic module builder to the cache.         /// </summary>         /// <param name="moduleBuilder">The module builder.</param>         public static void Add(ModuleBuilder moduleBuilder)         {             lock (syncRoot)             {                 Cache = moduleBuilder;             }         }           #endregion           #region Get           /// <summary>         /// Gets the cached module builder.         /// </summary>         /// <returns></returns>         public static ModuleBuilder Get         {             get             {                 lock (syncRoot)                 {                     if (Cache != null)                     {                         return Cache;                     }                 }                   throw new RapidDynamicProxyAssertionException(AssertionResources.NoModuleInCache);             }         }           #endregion     } }   The DynamicModuleCache is very similar to the assembly cache; it is simply a statically stored module with thread-safe Add and Get methods.   The DynamicTypeCache To end off this post, I'm going to create the cache for storing the generated proxy classes. I've spent a fair amount of time thinking about the type of collection I should use to store the types and have finally decided that for the time being I'm going to use a generic dictionary. This may change when I can actually performance test the proxy generator, but for the time being I think it makes good sense in theory, mainly as it pretty much maintains its performance with varying numbers of items – almost constant, O(1). Plus I won't ever need to loop through the items, which is not the dictionary's strong point. Here's the code as it currently stands: DynamicTypeCache using System; using System.Collections.Generic; using System.Security.Cryptography; using System.Text; namespace Rapid.DynamicProxy.Types {     /// <summary>     /// Cache for storing proxy types.     /// </summary>     internal static class DynamicTypeCache     {         #region Declarations           static object syncRoot = new object();         public static Dictionary<string, Type> Cache = new Dictionary<string, Type>();           #endregion           /// <summary>         /// Adds a proxy to the type cache.         
/// </summary>         /// <param name="type">The type.</param>         /// <param name="proxy">The proxy.</param>         public static void AddProxyForType(Type type, Type proxy)         {             lock (syncRoot)             {                 Cache.Add(GetHashCode(type.AssemblyQualifiedName), proxy);             }         }           /// <summary>         /// Tries to get the proxy for the given type.         /// </summary>         /// <param name="type">The type.</param>         /// <returns></returns>         public static Type TryGetProxyForType(Type type)         {             lock (syncRoot)             {                 Type proxyType;                 Cache.TryGetValue(GetHashCode(type.AssemblyQualifiedName), out proxyType);                 return proxyType;             }         }           #region Private Methods           private static string GetHashCode(string fullName)         {             SHA1CryptoServiceProvider provider = new SHA1CryptoServiceProvider();             Byte[] buffer = Encoding.UTF8.GetBytes(fullName);             Byte[] hash = provider.ComputeHash(buffer, 0, buffer.Length);             return Convert.ToBase64String(hash);         }           #endregion     } } As you can see, there are two public methods, one for adding to the cache and one for getting from the cache. Hopefully they should be clear enough; the Get is a TryGet as I do not want the dictionary to throw an exception if a proxy doesn't exist within the cache. Other than that, I've decided to create a key using the SHA1CryptoServiceProvider; this may change, but my initial thought is that the SHA1 algorithm is pretty fast to put together using the provider and it is also very unlikely to have any hashing collisions. (There are some maths behind how unlikely this is – here's the wiki if you're interested: http://en.wikipedia.org/wiki/SHA_hash_functions)   Anyway, that's the end of part 1 – although I haven't started any of the fun stuff (by fun I mean hair-pulling, teeth-grating Reflection.Emit style fun), I've got the basis of the DynamicProxy in place, so all we have to worry about now is creating the types, interceptor classes, method invocation information classes and finally a really nice fluent interface that will abstract all of the hard-core craziness away and leave us with a lightning-fast, easy-to-use AOP framework. Hope you find the series interesting. All of the source code can be viewed and/or downloaded at our codeplex site - http://rapidioc.codeplex.com/ Kind Regards, Sean.
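To make the intended flow concrete, here's a hypothetical caller of the cache above (my sketch, using only names defined in this post; ICustomerService stands in for any type you'd proxy, and the actual type emission arrives in later parts of the series):

    // Check the type cache first; only emit a new proxy type on a miss.
    Type proxyType = DynamicTypeCache.TryGetProxyForType(typeof(ICustomerService));
    if (proxyType == null)
    {
        // ... emit the proxy type with Reflection.Emit (later parts) ...
        // DynamicTypeCache.AddProxyForType(typeof(ICustomerService), generatedType);
    }

Because AddProxyForType keys on a SHA1 of the AssemblyQualifiedName, repeated requests for the same type cost a single dictionary lookup.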

    Read the article

  • Ops Center 12c - Provisioning Solaris Using a Card-Based NIC

    - by scottdickson
It's been a long time since I last added something here, but having had some conversations this last week, I got inspired to update things. I've been spending a lot of time with Ops Center for managing and installing systems these days, so I suspect a number of my upcoming posts will be in that area. Today, I want to look at how to provision Solaris using Ops Center when your network is not connected to one of the built-in NICs. We'll talk about how this can work for both Solaris 10 and Solaris 11, since they are pretty similar. In both cases, WANBoot is a key piece of the story. Here's what I want to do: I have a Sun Fire T2000 server with a Quad-GbE nxge card installed. The only network is connected to port 2 on that card rather than the built-in network interfaces. I want to install Solaris on it across the network, either Solaris 10 or Solaris 11. I have met with a lot of customers lately who have a similar architecture. Usually, they have T4-4 servers with the network connected via 10GbE connections. Add to this mix the fact that I use Ops Center to manage the systems in my lab, so I really would like to add this to Ops Center. If possible, I would like this to be completely hands free. I can't quite do that yet. Close, but not quite. WANBoot or Old-Style NetBoot? When a system is installed from the network, it needs some help getting the process rolling. It has to figure out what its network configuration (IP address, gateway, etc.) ought to be. It needs to figure out what server is going to help it boot and install, and it needs the instructions for the installation. There are two different ways to bootstrap an installation of Solaris on SPARC across the network. The old way uses a broadcast of RARP or, more recently, DHCP to obtain the IP configuration and the rest of the information needed. The second is to explicitly configure this information in the OBP and use WANBoot for installation. WANBoot has a number of benefits over broadcast-based installation: it is not restricted to a single subnet; it does not require special DHCP configuration or DHCP helpers; it uses standard HTTP and HTTPS protocols, which traverse firewalls much more easily than NFS-based package installation. But WANBoot is not available on really old hardware, and WANBoot requires the use of Flash Archives in Solaris 10. Still, for many people, this is a great approach. As it turns out, WANBoot is necessary if you plan to install using a NIC on a card rather than a built-in NIC. Identifying Which Network Interface to Use One of the trickiest aspects of this process, and the one that actually requires manual intervention to set up, is identifying how the OBP and Solaris refer to the NIC that we want to use to boot. The OBP already has device aliases configured for the built-in NICs called net, net0, net1, net2, net3. The device alias net typically points to net0, so that when you issue the command "boot net -v install", it uses net0 for the boot. Our task is to figure out the network instance for the NIC we want to use. We will need to get to the OBP console of the system we want to install in order to figure out what the network should be called. I will presume you know how to get to the ok prompt. Once there, we have to see what networks the OBP sees and identify which one is associated with our NIC, using the OBP command show-nets. SunOS Release 5.11 Version 11.0 64-bit Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved. 
{4} ok banner Sun Fire T200, No Keyboard Copyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved. OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548. Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c. {4} ok show-nets a) /pci@7c0/pci@0/pci@2/network@0,1 b) /pci@7c0/pci@0/pci@2/network@0 c) /pci@780/pci@0/pci@8/network@0,3 d) /pci@780/pci@0/pci@8/network@0,2 e) /pci@780/pci@0/pci@8/network@0,1 f) /pci@780/pci@0/pci@8/network@0 g) /pci@780/pci@0/pci@1/network@0,1 h) /pci@780/pci@0/pci@1/network@0 q) NO SELECTION Enter Selection, q to quit: d /pci@780/pci@0/pci@8/network@0,2 has been selected. Type ^Y ( Control-Y ) to insert it in the command line. e.g. ok nvalias mydev ^Y for creating devalias mydev for /pci@780/pci@0/pci@8/network@0,2 {4} ok devalias ... net3 /pci@7c0/pci@0/pci@2/network@0,1 net2 /pci@7c0/pci@0/pci@2/network@0 net1 /pci@780/pci@0/pci@1/network@0,1 net0 /pci@780/pci@0/pci@1/network@0 net /pci@780/pci@0/pci@1/network@0 ... name aliases By looking at the devalias and the show-nets output, we can see that our Quad-GbE card must be the device nodes starting with /pci@780/pci@0/pci@8/network@0. The cable for our network is plugged into the 3rd slot, so the device address for our network must be /pci@780/pci@0/pci@8/network@0,2. With that, we can create a device alias for our network interface. Naming the device alias may take a little bit of trial and error, especially in Solaris 11 where the device alias seems to matter more with the new virtualized network stack. So far in my testing, since this is the "next" network interface to be used, I have found success in naming it net4, even though it's a NIC in the middle of a card that might, by rights, be called net6 (assuming the 0th interface on the card is the next interface identified by Solaris and this is the 3rd interface on the card). So, we will call it net4. We need to assign a device alias to it: {4} ok nvalias net4 /pci@780/pci@0/pci@8/network@0,2 {4} ok devalias net4 /pci@780/pci@0/pci@8/network@0,2 ... We also may need the MAC for this particular interface, so let's get it, too. To do this, we go to the device and interrogate its properties. {4} ok cd /pci@780/pci@0/pci@8/network@0,2 {4} ok .properties assigned-addresses 82060210 00000000 03000000 00000000 01000000 82060218 00000000 00320000 00000000 00008000 82060220 00000000 00328000 00000000 00008000 82060230 00000000 00600000 00000000 00100000 local-mac-address 00 21 28 20 42 92 phy-type mif ... From this, we can see that the MAC for this interface is 00:21:28:20:42:92. We will need this later. This is all we need to do at the OBP. Now, we can configure Ops Center to use this interface. Network Boot in Solaris 10 Solaris 10 turns out to be a little simpler than Solaris 11 for this sort of network boot, since WANBoot in Solaris 10 fetches a specified Flash Archive. In order to install the system using Ops Center, it is necessary to create an OS Provisioning profile and its corresponding plan. I am going to presume that you already know how to do this within Ops Center 12c, and I will just cover the differences between a regular profile and a profile that can use an alternate interface. Create an OS Provisioning profile for Solaris 10 as usual. However, when you specify the network resources for the primary network, click on the name of the NIC, probably GB_0, and rename it to GB_N/netN, where N is the instance number you used previously in creating the device alias. This is where the trial and error may come into play. 
You may need to try a few instance numbers before you, the OBP, and Solaris all agree on the instance number. Mark this as the boot network. For Solaris 10, you ought to be able to then apply the OS Provisioning profile to the server and it should install using that interface. And if you put your cards in the same slots and plug the networks into the same NICs, this profile is reusable across multiple servers. Why This Works If you watch the console as Solaris boots during the OSP process, Ops Center is going to look for the device alias netN. Since WANBoot requires a device alias called just net, Ops Center uses the value of your netN device alias and assigns that device to the net alias. That means that boot net will automatically use this device. Very cool! Here's a trace from the console as Ops Center provisions a server:
Sun Fire T200, No Keyboard
Copyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548.
Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c.
auto-boot? = false
{0} ok
{0} ok printenv network-boot-arguments
network-boot-arguments = host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi
{0} ok
{0} ok devalias net
net /pci@780/pci@0/pci@1/network@0
{0} ok devalias net4
net4 /pci@780/pci@0/pci@8/network@0,2
{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2
{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cgi
network-boot-arguments = host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cgi
{0} ok
{0} ok boot net - install
Boot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install
/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up
<time unavailable> wanboot info: WAN boot messages->console
<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2
See what happened? Ops Center looked for the network device alias called net4 that we specified in the profile, took the value from it, and made it the net device alias for the boot. Pretty cool! WANBoot and Solaris 11 Solaris 11 requires an additional step, since the Automated Installer in Solaris 11 uses the MAC address of the network to figure out which manifest to use for system installation. In order to make sure this is available, we have to take an extra step to associate the MAC of the NIC on the card with the host. So, in addition to creating the device alias like we did above, we also have to declare to Ops Center that the host has this new MAC. Declaring the NIC Start out by discovering the hardware as usual. Once you have discovered it, take a look under the Connectivity tab to see what networks it has discovered. In the case of this system, it shows the 4 built-in networks, but not the networks on the additional cards. These are not directly visible to the system controller. In order to add the additional network interface to the hardware asset, it is necessary to Declare it. We will declare that we have a server with this additional NIC, but we will also specify the existing GB_0 network so that Ops Center can associate the right resources together. 
The GB_0 acts as sort of a key to tie our new declaration to the old system already discovered. Go to the Assets tab, select All Assets, and then in the Actions tab, select Add Asset. Rather than going through a discovery this time, we will manually declare a new asset. When we declare it, we will give the hostname, IP address, and system model that match those that have already been discovered. Then, we will declare both GB_0 with its existing MAC and the new GB_4 with its MAC. Remember that we collected the MAC for GB_4 when we created its device alias. After you declare the asset, you will see the new NIC in the connectivity tab for the asset. You will notice that only the NICs you listed when you declared it are seen now. If you want Ops Center to see all of the existing NICs as well as the additional one, declare them as well. Add the other GB_1, GB_2, GB_3 links and their MACs just as you did GB_0 and GB_4. Installing the OS Once you have declared the asset, you can create an OS Provisioning profile for Solaris 11 in the same way that you did for Solaris 10. The only difference from any other provisioning profile you might have created already is the network to use for installation. Again, use GB_N/netN, where N is the interface number you used for your device alias and in your declaration. And away you go. When the system boots from the network, the Automated Installer (AI) is able to see which system manifest to use, based on the new MAC that was associated, and the system gets installed.
{0} ok
{0} ok printenv network-boot-arguments
network-boot-arguments = host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi
{0} ok
{0} ok devalias net
net /pci@780/pci@0/pci@1/network@0
{0} ok devalias net4
net4 /pci@780/pci@0/pci@8/network@0,2
{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2
{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi
network-boot-arguments = host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi
{0} ok
{0} ok boot net - install
Boot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install
/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up
<time unavailable> wanboot info: WAN boot messages->console
<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2
...
SunOS Release 5.11 Version 11.0 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Remounting root read/write
Probing for device nodes ...
Preparing network image for use
Downloading solaris.zlib
--2012-02-17 15:10:17--  http://10.140.204.22:5555/var/js/AI/sparc//solaris.zlib
Connecting to 10.140.204.22:5555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 126752256 (121M) [text/plain]
Saving to: `/tmp/solaris.zlib'
100%[======================================>] 126,752,256 28.6M/s   in 4.4s
2012-02-17 15:10:21 (27.3 MB/s) - `/tmp/solaris.zlib' saved [126752256/126752256]
Conclusion So, why go to all of this trouble? More and more, I find that customers are wiring their data centers to only use higher-speed networks - 10GbE only to the hosts. 
Some customers are moving aggressively toward consolidated networks, combining storage and network on CNA NICs. All of this means that network-based provisioning cannot rely exclusively on the built-in network interfaces, so it's important to be able to provision a system using something other than the built-in networks. It turns out that this is pretty straightforward for both Solaris 10 and Solaris 11, and it fits into the Ops Center deployment process quite nicely. Hopefully, you will be able to use this as you build out your own private cloud solutions with Ops Center.

    Read the article
