Search Results

Search found 3973 results on 159 pages for 'biztalk deployment'.


  • CPU and Scheduler Performance Monitoring using SQL Server and Excel

    This article will demonstrate a method of creating an Excel-based CPU/scheduler performance dashboard for SQL Server 2005+.

    Read the article

  • Parsing Parameters in a Stored Procedure

    This article shows a clean non-looping method to parse comma separated values from a parameter passed to a stored procedure.

    Read the article

  • TSQL Challenge 83 - Compare rows in the same table and group the data

    The challenge is to compare the data of the rows and group the input data. The data needs to be grouped based on the Product ID, Date, TotalLines, LinesOutOfService.

    Read the article

  • Free eBook: SQL Server Backup and Restore

    You can download a free eBook from SQLServerCentral and Red Gate software on the most important task a SQL Server DBA or developer needs to understand.

    Read the article

  • CodePlex Daily Summary for Tuesday, February 22, 2011

    CodePlex Daily Summary for Tuesday, February 22, 2011Popular ReleasesSearchable Property Updater for Microsoft Dynamics CRM 2011: Searchable Property Updater (1.0.121.59): Initial releaseJHINFORM7: JHINFORM 7 VR. 0.0.2: Versión 0.0.1 En estado de desarrolloSilverlight????[???]: silverlight????[???]2.0: ???????,?????,????????silverlight????。DBSourceTools: DBSourceTools_1.3.0.0: Release 1.3.0.0 Changed editors from FireEdit to ICSharpCode.TextEditor. Complete re-vamp of Intellisense ( further testing needed). Hightlight Field and Table Names in sql scripts. Added field dropdown on all tables and views in DBExplorer. Added data option for viewing data in Tables. Fixed comment / uncomment bug as reported by tareq. Included Synonyms in scripting engine ( nickt_ch ).IronPython: 2.7 Release Candidate 1: We are pleased to announce the first Release Candidate for IronPython 2.7. This release contains over two dozen bugs fixed in preparation for 2.7 Final. See the release notes for 60193 for details and what has already been fixed in the earlier 2.7 prereleases. - IronPython TeamCaliburn Micro: A Micro-Framework for WPF, Silverlight and WP7: Caliburn.Micro 1.0 RC: This is the official Release Candicate for Caliburn.Micro 1.0. The download contains the binaries, samples and VS templates. VS Templates The templates included are designed for situations where the Caliburn.Micro source needs to be embedded within a single project solution. This was targeted at government and other organizations that expressed specific requirements around using an open source project like this. NuGet This release does not have a corresponding NuGet package. The NuGet pack...Caliburn: A Client Framework for WPF and Silverlight: Caliburn 2.0 RC: This is the official Release Candidate for Caliburn 2.0. It contains all binaries, samples and generated code docs.A2Command: 2011-02-21 - Version 1.0: IntroductionThis is the full release version of A2Command 1.0, dated February 21, 2011. These notes supersede any prior version's notes. All prior releases may be found on the project's website at http://a2command.codeplex.com/releases/ where you can read the release notes for older versions as well as download them. This version of A2Command is intended to replace any previous version you may have downloaded in the past. There were several bug fixes made after Release Candidate 2 and all...Chiave File Encryption: Chiave 0.9: Application for file encryption and decryption using 512 Bit rijndael encyrption algorithm with simple to use UI. Its written in C# and compiled in .Net version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Feedbacks are Welcome!....Rawr: Rawr 4.0.20 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...Azure Storage Samples: Version 1.0 (February 2011): These downloads contain source code. Each is a complete sample that fully exercises Windows Azure Storage across blobs, queues, and tables. The difference between the downloads is implementation approach. 
Storage DotNet CS.zip is a .NET StorageClient library implementation in the C# language. This library come with the Windows Azure SDK. Contains helper classes for accessing blobs, queues, and tables. Storage REST CS.zip is a REST implementation in the C# language. The code to implement R...MiniTwitter: 1.66: MiniTwitter 1.66 ???? ?? ?????????? 2 ??????????????????? User Streams ?????????Windows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features:WPF desktop explorer client Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above) Supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File Explorer supports operations running on multiple remote applications at the same time Explorer detects application disconnect (1-2 second delay) Explorer confirms operation completed status Explorer d...Document.Editor: 2011.6: Whats new for Document.Editor 2011.6: New Left to Right and Left to Right support New Indent more/less support Improved Home tab Improved Tooltips/shortcut keys Minor Bug Fix's, improvements and speed upsCatel - WPF and Silverlight MVVM library: 1.2: Catel history ============= (+) Added (*) Changed (-) Removed (x) Error / bug (fix) For more information about issues or new feature requests, please visit: http://catel.codeplex.com =========== Version 1.2 =========== Release date: ============= 2011/02/17 Added/fixed: ============ (+) DataObjectBase now supports Isolated Storage out of the box: Person.Save(myStream) stores a whole object graph in Silverlight (+) DataObjectBase can now be converted to Json via Person.ToJson(); (+)...??????????: All-In-One Code Framework ??? 2011-02-18: ?????All-In-One Code Framework?2011??????????!!http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ?????,?????AzureBingMaps??????,??Azure,WCF, Silverlight, Window Phone????????,????????????????????????。 ???: Windows Azure SQL Azure Windows Azure AppFabric Windows Live Messenger Connect Bing Maps ?????: ??????HTML??? ??Windows PC?Mac?Silverlight??? ??Windows Phone?Silverlight??? ?????:http://blog.csdn.net/sjb5201/archive/2011...Image.Viewer: 2011: First version of 2011Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit OverviewSilverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features Added a new option that allows properties on data contract types to be marked as virtual. Bug Fixes Fixed a bug caused by certain project properties not being available on Web Service Software Factory projects. Fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used. The menu item code now caters for CommandBar instances that are not available. For example the Web Item CommandBar does not exist ...Terminals: Version 2 - RC1: The Third build includes the fix for NLA support. A merged in patch dropped the UI support. Its back now. 
All patch's except 1 are left. Cheers, -Rob The Second build is up. It takes most patch's sent in from the community. One such patch was around security & how the application handles Passwords. You may find that all of your passwords are now invalidated. You may need to reenter all of your credentials. This would be a good time to use the Credential Manager for each connecti...New ProjectsAllTalk: This is a chat client for Windows Phone 7.AssertFramework: AssertFramework is an implementation of Visual Studio/MSTest assert classes. The Asset and StringAssert classes have been implemented so far. CollectionAssert will be implemented next.AsyncInRuby: Async Web Development in RubyAuto Numbering for CRM 4.0: Reuse and standardize Auto Numbering for CRM 4.0BitRaise: Raise money bit by bit!BoxGame: BoxGame is a small project to develop a RPG in XNA.CCI Explorer (An alternative of .NET Reflector): CCI Explorer is an alternative to RedGate Reflector. It use the Microsoft Common Compiler Infrastructure to decompil and view source executable code. The application is writing in WPF and use the MVVM pattern.Configuration Manager Client Health Check Tool: There are many pitfalls with maintaining ConfigMgr managed systems so they install the client software and can continuously report to the hierarchy. This project provides a scripted solution that detects many issues and automates their repair.cppERF: Class ERF function. Test on VC++ 2008 express, and cygwin.CUITe (Coded UI Test enhanced) Framework: CUITe (Coded UI Test enhanced) Framework is a thin layer developed on top of Microsoft Visual Studio Team Test's Coded UI Test engine which helps reduce code, increases readability and maintainability, while also providing a bunch of cool features for the automation engineer.DocMetaMan : Bulk document Upload and MetaData (Taxonomy) Setter: DocMetaMap lets user select a root folder and upload the documents to selected document library in SharePoint 2010. The tool presents a nice GUI prompting the user to select the metadata / taxonomy to be associated with the documents before uploading them to SharePoint. DotNetNuke Azure Accelerator: DNN Azure Accelerator is a project based on the Azure Accelerators Project to publish the famous DotNetNuke Community CMS in the Windows Azure Platform.GK PlatyPi Robotics - Team 2927: Graham-Kapowsin HS Robotics Club's code repository.HgReport: This is a Mercurial reporting engine written in .NET 3.5. The program will allow you to write your own report templates and execute them against a local Mercurial repository to produce text reports, including HTML, with statistics and other items from the repository history.Image Steganography: 'Image Steganography' allows you to embed text and files into images, with optional encryption.im-me-messenger: A simple instant messenger application for the IM-ME messenging gadgetISEFun: PowerShell module(s) to simplify work in it. It contains PowerShell scripts, compiled libs and some formating files. Several modules will come in one batch as optional features.Kailua - The forgotten methods in the ADO.NET API.: Provide standard calls for vendor specific functionality through ADO.NET. Additional functionality includes: enumerate databases, tables, views, columns, stored procedures, parameters; get an autogenerated primary key; return top N rows; and more. Also some non-ADO classes.Linkual: Linkual makes it easier for blog authors to publish their articles in multiple languages. 
They will no longer have to set up a separate blog for each language. It is developed in C# and ASP.NET MVC.Lumen - Index discovery and querying: Index discovery and querying framework based on Lucene.netMars Rover Exercise: A squad of robotic rovers are to be landed by NASA on a plateau on Mars. This plateau, which is curiously rectangular, must be navigated by the rovers so that their on board cameras can get a complete view of the surrounding terrain to send back to Earth. Message splitting envelope in Biztalk 2009: Message splitting envelope in Biztalk 2009. The project contains: Source code, Examples. Article describing how to develop it: http://www.biztalkbrasil.com.br/2011/02/envelope-sample-using-flat-file.html.Microsoft Dynamics CRM 2011 Development Framework: Framework for developing Microsoft Dynamics CRM 2011 Applications.Potluck Central: Event Manger is a simple place were you can manage your potlucks.PowerSqueakTasks: For now PowerSqueakTasks primary goal is to integrate MsBuild with Powershell. It provides one simple task, that executes Powershell script in a batch manner - creates PS variables using MSBuild item metadata and then runs specified script over them.PSS Airbus Sound Extender: This application offers users of PSS Airbus the sound extension (like electricity, air-conditioning, apu) for standard PSS Airbus 32x planes. Tested with FS2004 and PSS A319. No sound files are distribute with the package, but explaining manual, how to achieve them, is included.SCCM Client Center Automation Library: SCCM Client Automation library (previously smsclictr.automation.dll) is a .NET Library (C#) to automate and access SCCM (Microsoft System Center Configuration Manager) 2007 Agent functions over WMI.Seng 401 Awesome TSS: Telephone switching system for SENG401 course project. Developed in Visual C#.Silverlight????[???]: flyer???????????,????????。????????silverlight??????????。Simple Notify: SimpleNotify is a lightweight client-server implementation that allows you to notify many users in your network with custom messages in a very simple way. There are a couple of ways how you want to push these messages to your clients. SimpleNotify is developed in C#.Slingshot: SlingshotSmartTTS: A smart text to speech app!SystemSoupRMS: SaaS RMStest project101: test source controlUse BizTalk Logging Events in BizUnit Tests: This project will demonstrate how to use the instrumentation from the Microsoft BizTalk CAT Team logging framework to help you test the internals of your BizTalk solutionWalkme HealthVault Application: Walking application for HealthVault.WikiChanges: WikiChanges is a "Recent Changes" monitor for MediaWiki installations that uses non-intrusive, non-annoying yet useful notifications on the corner with link shortcuts to pages, diff, hist, undo and various other links.Win4 Movie Project: This application is being developed for a class group projectWPF UI Authorization infrastructure (MVVM controlled): This infrastructure provide Attribute base authorization for UI elements within WPF applications

    Read the article

  • Failed to re-publish a page - Tridion 2011 SP1

    - by Wilson Yu
    We are getting a strange error when re-publishing the same page. The page was published successfully the first time and we can see the page on the presentation server. It failed with the following error (see below) when we tried to publish it again (no change to the page). The page ran OK within template builder and we got the correct HTML output, but it failed in the last committing deployment step (Prepare Transport, Transporting, Preparing Deployment and Deploying are all successful). Once it fails to publish the second time, it always fails to publish, and we can't un-publish it either. Also, when we make a copy of the failed page and create a new page, we can publish the new page the first time; the new page then fails to publish the second time with the same error. Does anyone know what would cause this error? Any help would be greatly appreciated. Here is the error message: Committing Deployment Failed Phase: Deployment Prepare Commit Phase failed, Unable to prepare transaction: tcm:0-4210-66560, For input string: "", For input string: "", Unable to prepare transaction: tcm:0-4210-66560, For input string: "", For input string: ""

    Read the article

  • BRE (Business Rules Engine) Data Services is out...!!!

    - by Vishal
    A few months ago we at Tellago open sourced BizTalk Data Services. Since then we have been working on the other artifacts that come along with BizTalk Server, such as the Business Rules Engine, and we are happy to announce the first version of BRE Data Services.

    BRE Data Services applies the same concept we covered with BTS Data Services: it provides a RESTful, OData-based API for interacting with the Business Rules Engine over HTTP, using the Atom Publishing Protocol or JSON as the encoding mechanism. In this first release we focused mainly on browsing, querying and searching BRE artifacts through a RESTful interface. We also provide the ability to execute business rules by inserting the facts for policies via the IUpdatable implementation of WCF Data Services.

    The BRE Data Services API provides a lightweight interface for managing Business Rules Engine artifacts such as Policies, Rules, Vocabularies, Conditions, Actions, Facts, etc. The following examples illustrate some of the features available in the current version of the API.

    Basic querying:

        Querying BRE Policies: http://localhost/BREDataServices/BREMananagementService.svc/Policies
        Querying BRE Rules: http://localhost/BREDataServices/BREMananagementService.svc/Rules
        Querying BRE Vocabularies: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies

    Navigation: the BRE Data Services API also leverages WCF Data Services to enable navigation across related BRE objects.

        Querying a specific Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')
        Querying a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')
        Querying all Rules under a Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Rules
        Querying all Facts under a Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Facts
        Querying all Actions for a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Actions
        Querying all Conditions for a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Conditions
        Querying a specific Vocabulary: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies('VocabName')

    Implementation: BRE Data Services also provides the ability to execute a particular policy via HTTP. There are a couple of ways to do that through the API.

    The first is through the Service Operations feature of WCF Data Services, in which you execute the facts by passing them in the URL itself. This is a very simple way of executing policies, owing to the current limitations of WCF Data Services Service Operations (only primitive types can be passed as input parameters). Below is a code sample, followed by a traced request/response message.

    The second is through the IUpdatable interface of WCF Data Services. In this method, you first query the rule you want to execute, then insert facts for that particular rule, and finally, when you call SaveChanges() on the IUpdatable API, it executes the policy with the facts you inserted at runtime. Below is a sample of the client-side code.

    The current version of WCF Data Services has a limitation here: there is no way to return the updates that happen on the service side back to the client via the SaveChanges() method. We are executing the rule by passing a serialized XML document as the facts, and no data is changed that we could query back to fetch the results. This is overcome by the first approach, executing the policy as a Service Operation call.

    This generates an AtomPub message such as the following:

        POST /Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/$batch HTTP/1.1
        User-Agent: Microsoft ADO.NET Data Services
        DataServiceVersion: 1.0;NetFx
        MaxDataServiceVersion: 2.0;NetFx
        Accept: application/atom+xml,application/xml
        Accept-Charset: UTF-8
        Content-Type: multipart/mixed; boundary=batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
        Host: localhost:8080
        Content-Length: 1481
        Expect: 100-continue

        --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
        Content-Type: multipart/mixed; boundary=changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf

        --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf
        Content-Type: application/http
        Content-Transfer-Encoding: binary

        MERGE http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy') HTTP/1.1
        Content-ID: 4
        Content-Type: application/atom+xml;type=entry
        Content-Length: 927

        <?xml version="1.0" encoding="utf-8" standalone="yes"?>
        <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
               xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
               xmlns="http://www.w3.org/2005/Atom">
          <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="Tellago.BRE.REST.Resources.Fact" />
          <title />
          <author>
            <name />
          </author>
          <updated>2011-01-31T20:09:15.0023982Z</updated>
          <id>http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy')</id>
          <content type="application/xml">
            <m:properties>
              <d:FactInstance>&lt;ns0:LoanStatus xmlns:ns0="http://tellago.com"&gt;&lt;Age&gt;10&lt;/Age&gt;&lt;Status&gt;true&lt;/Status&gt;&lt;/ns0:LoanStatus&gt;</d:FactInstance>
              <d:FactType>TestSchema</d:FactType>
              <d:ID>TestPolicy</d:ID>
            </m:properties>
          </content>
        </entry>
        --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf--
        --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7--

    Installation: installing BRE Data Services is straightforward.

        - Create a new IIS website, e.g. BREDataServices.
        - Download the source code from Tellago's Codeplex workspace and copy the content of Tellago.BRE.REST.ServiceHost to the physical location of the website created above.
        - The application pool account running the website should have admin access to the BizTalkRuleEngineDb database.
        - Right-click the BREManagementService.svc in the IIS Content View for the website, browse to it, and voila.

    Conclusion: the BRE Data Services API is an experiment intended to bring the capabilities of RESTful/OData-based services to traditional BTS/BRE solutions. Future releases will target technologies such as BAM and the ESB Toolkit. This version has been tested with various versions of BizTalk Server, and we have uploaded the source code to Tellago's DevLabs workspace at Codeplex. I hope you enjoy this release. Keep an eye on our new releases at Tellago Codeplex. We are working on various other BizTalk artifacts like BAM and the ESB Toolkit.

    Till then, happy BizzRuling…!!!

    Thanks,
    Vishal Mody
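    Since the API is plain HTTP/OData, the query endpoints listed above can be exercised without writing any client code at all. The sketch below assumes the same localhost hosting used in the examples; 'PolicyName' is a placeholder.

        # Browse all policies (AtomPub is returned by default).
        curl -H "Accept: application/atom+xml" "http://localhost/BREDataServices/BREMananagementService.svc/Policies"

        # Request the same feed as JSON instead of AtomPub.
        curl -H "Accept: application/json" "http://localhost/BREDataServices/BREMananagementService.svc/Policies"

        # Navigate from a specific policy to its rules.
        curl -H "Accept: application/atom+xml" "http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Rules"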

    Read the article

  • Easy Made Easier

    - by dragonfly
    How easy is it to deploy a 2-node, fully redundant Oracle RAC cluster? Not very. Unless you use an Oracle Database Appliance. The focus of this member of Oracle's Engineered Systems family is to simplify the configuration, management and maintenance throughout the life of the system, while offering pay-as-you-grow scaling. Getting a 2-node RAC cluster up and running in under 2 hours has been made possible by the Oracle Database Appliance. Don't take my word for it, just check out these blog posts from partners and end users.

        The Oracle Database Appliance Experience - Zip Zoom Zoom: http://www.fuadarshad.com/2012/02/oracle-database-appliance-experience.html
        Off-the-shelf Oracle database servers: http://normanweaver.wordpress.com/2011/10/10/off-the-shelf-oracle-database-servers/
        Oracle Database Appliance – Deployment Steps: http://marcel.vandewaters.nl/oracle/database-appliance/oracle-database-appliance-deployment-steps

    See how easy it is to deploy an Oracle Database Appliance for high availability with RAC? Now for the meat of this post, which is the first in a series of posts describing tips for making the deployment of an ODA even easier. The key to the easy deployment of an Oracle Database Appliance is the Appliance Manager software, which does the actual software deployment and configuration based on best practices. But in order for it to do that, it needs some basic information first, including the system name, IP addresses, etc. That's where the Appliance Manager GUI comes into play, taking a wizard approach to specifying the information needed.

    Using the Appliance Manager GUI is straightforward, stepping through several screens of information to enter data in typical wizard style. Like most configuration tasks, it helps to gather the required information beforehand. But before you rush out to a committee meeting on what to use for host names, and rely on whatever IP addresses might be hanging around, make sure you are familiar with some of the auto-fill defaults for the Appliance Manager. I'll step through the key screens below to highlight the results of the auto-fill capability of the Appliance Manager GUI.

    Depending on which of the 2 Configuration Types (Config Type screen) you choose, you will get a slightly different set of screens. The Typical configuration assumes certain default configuration choices and has the fewest screens, whereas the Custom configuration gives you the most flexibility in what you configure from the start. In the examples below, I have used the Custom config type.

    One of the first items you are asked for is the System Name (System Info screen). This is used to identify the system, but also as the base for the default host names on the following screens. In this screenshot, the System Name is "oda".

    When you get to the next screen (Generic Network screen), you enter your domain name, DNS IP address(es), and NTP IP address(es). Next up is the Public Network screen, seen below, where you will see the host name fields are automatically filled in with default host names based on the System Name, in this case "oda". The System Name is also the basis for default host names for the extra Ethernet ports available for configuration as part of a Custom configuration, as seen in the 2nd screenshot below (Other Network). There is no requirement to use these host names, as you can easily edit any of the host names.

    This does make filling in the configuration details easier and less prone to "fat fingers" if you are OK with these host names. Here is a full list of the automatically filled-in host names, using the example System Name "oda": oda1, oda2, oda1-vip, oda2-vip, oda-scan, oda1-ilom, oda2-ilom, oda1-net1, oda2-net1, oda1-net2, oda2-net2, oda1-net3, oda2-net3.

    Another auto-fill feature of the Appliance Manager GUI follows a common practice of deploying IP addresses for a RAC cluster in sequential order. In the screenshot below, I entered the first IP address (Node1-IP), then hit Tab to move to the next field. As a result, the next 5 IP address fields were automatically filled in with the next 5 IP addresses, sequentially from the first one I entered. As with the host names, these are not required, and can be changed to whatever your IP address values are. One note of caution though: if the first IP Address field (Node1-IP) is filled out and you click in that field and back out, the following 5 IP addresses will be set to the sequential default. If you don't use the sequential IP addresses, pay attention to where you click that mouse. :-)

    In the screenshot below, by entering the netmask value in the Netmask field, in this case 255.255.255.0, the gateway value was auto-filled into the Gateway field, based on the IP addresses and netmask previously entered. As always, you can change this value.

    My last 2 screenshots illustrate that the same sequential IP address auto-fill and netmask-to-gateway auto-fill work when entering the IP configuration details for the Integrated Lights Out Manager (ILOM) for both nodes. The time these auto-fill capabilities save in entering data is nice, but from my perspective not as important as the opportunity to avoid data entry errors. In my next post in this series, I will touch on the benefit of using the network validation capability of the Appliance Manager GUI prior to deploying an Oracle Database Appliance.

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I've been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It's important to realize that "Agile" is not a black/white, yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the definition of done as a technique to continuously improve their agile maturity over time.

    We're probably all familiar with the concept of "Done Done" that represents what *actually* being done with a feature means. Not just when a developer says he's done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it's been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of "Done Done", they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done.

    The Done Done technique is typically applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I'll usually sit down with the team and go through an exercise of creating DoD's for each of these concepts (Stories/Sprints/Releases). We'll usually start by just brainstorming a bunch of activities that could end up in these various DoD's. Here are some examples:

        Code Reviews
        StyleCop
        FxCop
        User Manuals Updated
        Architecture Docs Updated
        Tested by QA
        Tested by UAT
        Installers Created
        Support Knowledge Base Updated
        Deployment Instructions (for Ops) Written
        Automated Unit Tests Run
        Automated Integration Tests Run

    Then we start by arranging these activities into the place they occur today (e.g. do you do UAT testing only once per release? every sprint? every feature?). If the team was previously Waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don't have to move them all down to User Story immediately, but as a team we figure out what we think we're capable of moving down to the Sprint cycle and Story cycle immediately, and that becomes our starting DoD's. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature.

    I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating User Manuals, creating installers, doing UAT testing (typical Release cycle activities) on every single User Story. They may never actually reach that point, but they should envision that, and strive to keep driving the activities down closer to the User Story cycle as they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time.

    Sure, there are other aspects to maturity outside of this, but it's a great technique, that's easy to visualize, to drive agility into the team. Just keep moving those activities (aka "gates") down the board from Release->Sprint->Story. I'll try to give an example of what a recent client of mine had for their DoD's (this is from memory, so probably not 100% accurate):

    Release
        Create/Update deployment instructions for Ops
        Instructional videos updated
        Run manual regression test suite
        UAT testing - in this case that meant deploying to an environment shared across the enterprise that mirrored production and asking other business groups to test their own apps to ensure we didn't break anything outside our system

    Sprint
        Deploy to UAT environment - but not necessarily actually request UAT testing occur
        User guides updated
        Sprint features video created - in this case we decided to create a video each sprint showing off the progress (a video version of the Sprint Demo)

    User Story
        Manual test scripts developed and run
        Tested by BA
        Deployed in shared QA environment - using automated deployment process
        Peer code review

    Code Check-In
        Compiled (warning-free)
        Passes StyleCop
        Passes FxCop
        Create installer packages
        Run automated tests
        Run automated integration tests

    PS – One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end-date rolls around (time-boxed), isn't a DoD on a sprint meaningless – it's done on the end-date regardless of whether those other activities are complete or not? My answer is that while that statement is true – the sprint is done regardless when the end date rolls around – if the DoD activities haven't been completed I would consider the Sprint a failure (similar to not completing what was committed/planned – failure may be too strong a word, but you get the idea). In the Retrospective that will become an agenda item to discuss and understand why we weren't able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • Creating typed WSDL’s for generic WCF services of the ESB Toolkit

    - by charlie.mott
    source: http://geekswithblogs.net/charliemott

    Question: How do you make it easy for client systems to consume the generic WCF services exposed by the ESB Toolkit using messages that conform to agreed schemas\contracts? Usually the developer of a system consuming a web service adds a service reference using a WSDL. However, the WSDLs for the generic services exposed by the ESB Toolkit do not make it easy to develop clients that conform to agreed schemas\contracts.

    Recommendation: Take a copy of the generic WSDL and modify it to use the proper contracts. This is very easy. It will work with the generic on-ramps so long as the <part> wrapping is removed from the WCF adapter configuration in the BizTalk receive locations. Attempting to create a WSDL where the input and output messages are sent/returned with a <part> wrapper is a nightmare. I have not managed it.

    Consequences: I can only see the following consequences of removing the <part> wrapper:

        ESB Test Client - I needed to modify the out-of-the-box ESB Test Client source code to make it send non-wrapped messages.
        Flat file formatted messages - the endpoint will no longer support flat file message formats. However, even if you needed to support this integration pattern through WCF, you would most likely want to create a separate receive location anyway, with its own independently configured XML disassembler pipeline component.

    Instructions: These steps show how to implement a request-response version of this.

    WCF receive locations: In BizTalk, for the WCF receive location for the ESB on-ramp, set the adapter Message settings\bindings to "UseBodyPath":

        Inbound BizTalk message body = Body
        Outbound WCF message body = Body

    Create a WSDL for each supported integration use case: Save a copy of the WSDL for the WCF generic receive location above that you intend the client system to use. Give it a name that mirrors the interface agreement (e.g. Esb_SuppliersSearchCommand_wsHttpBinding.wsdl). Add any XSD schema files imported below to this same folder.

    Edit the WSDL to import schemas. For example, this:

        <xsd:schema targetNamespace="http://microsoft.practices.esb/Imports" />

    … would become something like:

        <xsd:schema targetNamespace="http://microsoft.practices.esb/Imports">
            <xsd:import schemaLocation="SupplierSearchCommand_V1.xsd"
                        namespace="http://schemas.acme.co.uk/suppliersearchcommand/1.0"/>
            <xsd:import schemaLocation="SuppliersDocument_V1.xsd"
                        namespace="http://schemas.acme.co.uk/suppliersdocument/1.0"/>
            <xsd:import schemaLocation="Types\Supplier_V1.xsd"
                        namespace="http://schemas.acme.co.uk/types/supplier/1.0"/>
            <xsd:import schemaLocation="GovTalk\bs7666-v2-0.xsd"
                        namespace="http://www.govtalk.gov.uk/people/bs7666"/>
            <xsd:import schemaLocation="GovTalk\CommonSimpleTypes-v1-3.xsd"
                        namespace="http://www.govtalk.gov.uk/core"/>
            <xsd:import schemaLocation="GovTalk\AddressTypes-v2-0.xsd"
                        namespace="http://www.govtalk.gov.uk/people/AddressAndPersonalDetails"/>
        </xsd:schema>

    Modify the input and output messages. For example, this:

        <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">
            <wsdl:part name="part" type="xsd:anyType"/>
        </wsdl:message>
        <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">
            <wsdl:part name="part" type="xsd:anyType"/>
        </wsdl:message>

    … would become something like:

        <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">
            <wsdl:part name="part"
                       element="ssc:SupplierSearchEvent"
                       xmlns:ssc="http://schemas.acme.co.uk/suppliersearchcommand/1.0" />
        </wsdl:message>
        <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">
            <wsdl:part name="part"
                       element="sd:SuppliersDocument"
                       xmlns:sd="http://schemas.acme.co.uk/suppliersdocument/1.0"/>
        </wsdl:message>

    This WSDL can now be added as a service reference in client solutions.
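    If you prefer generating the client proxy from the command line rather than through Visual Studio's Add Service Reference, the typed WSDL and its schemas can be fed to svcutil.exe directly. A sketch, using the example file name above and assuming the XSDs sit in the same folder; the output file names are placeholders:

        svcutil.exe Esb_SuppliersSearchCommand_wsHttpBinding.wsdl *.xsd /language:C# /out:SuppliersSearchClient.cs /config:app.config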

    Read the article

  • Error 0x8007007e When trying to mount WinPE 5 wim in Windows 7 using Powershell

    - by BigHomie
    Using the ADK for Windows 8.1, and the DISM cmdlets that come with it. I have WMF 4.0 installed. My machine is Windows 7 x64 SP1, and I'm trying to mount the wim using:

        PS C:\Users\BigHomie> Mount-WindowsImage -ImagePath 'C:\Program Files (x86)\Windows Kits\8.1\Assessment and Deployment Kit\Windows Preinstallation Environment\x86\en-us\winpe.wim' -Path C:\WinPE_x86 -index 1

    And receive the following error:

        Mount-WindowsImage : DismInitialize failed. Error code = 0x8007007e
        At line:1 char:1
        + Mount-WindowsImage -ImagePath 'C:\Program Files (x86)\Windows Kits\8.1\Assessmen ...
        + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            + CategoryInfo          : NotSpecified: (:) [Mount-WindowsImage], COMException
            + FullyQualifiedErrorId : Microsoft.Dism.Commands.MountWindowsImageCommand

    Using dism.exe works fine.

    Update: Forgetting I had this problem, I went to mount a wim using the PowerShell ISE and actually got a visual error message about "C:\Program Files (x86)\Windows Kits\8.1\Assessment and Deployment Kit\Deployment Tools\x86\DISM\api-ms-win-downlevel-advapi32-l4-1-0.dll" not being installed. After checking that the DLL did in fact exist in the folder, I called regsvr32 and received another error message. I will try reinstalling as recommended.
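    Since the poster notes that dism.exe works where the cmdlet fails, the equivalent command-line mount is a reasonable stopgap while the DISM PowerShell module issue is sorted out. A sketch (the paths mirror the ones in the question; run from an elevated Deployment and Imaging Tools command prompt):

        dism.exe /Mount-Image /ImageFile:"C:\Program Files (x86)\Windows Kits\8.1\Assessment and Deployment Kit\Windows Preinstallation Environment\x86\en-us\winpe.wim" /Index:1 /MountDir:C:\WinPE_x86

        rem When finished, unmount and discard changes (or use /Commit to keep them):
        dism.exe /Unmount-Image /MountDir:C:\WinPE_x86 /Discard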

    Read the article

  • Bash: Quotes getting stripped when a command is passed as argument to a function

    - by Shoaibi
    I am trying to implement a dry-run kind of mechanism for my script, and am facing the issue of quotes getting stripped off when a command is passed as an argument to a function, resulting in unexpected behavior.

        dry_run () {
            echo "$@"
            #printf '%q ' "$@"
            if [ "$DRY_RUN" ]; then
                return 0
            fi
            "$@"
        }

        email_admin() {
            echo " Emailing admin"
            dry_run su - $target_username -c "cd $GIT_WORK_TREE && git log -1 -p|mail -s '$mail_subject' $admin_email"
            echo " Emailed"
        }

    Output is:

        su - webuser1 -c cd /home/webuser1/public_html && git log -1 -p|mail -s 'Git deployment on webuser1' [email protected]

    Expected:

        su - webuser1 -c "cd /home/webuser1/public_html && git log -1 -p|mail -s 'Git deployment on webuser1' [email protected]"

    With printf enabled instead of echo:

        su - webuser1 -c cd\ /home/webuser1/public_html\ \&\&\ git\ log\ -1\ -p\|mail\ -s\ \'Git\ deployment\ on\ webuser1\'\ [email protected]

    Result:

        su: invalid option -- 1

    That shouldn't be the case if quotes remained where they were inserted. I have also tried using "eval", with not much difference. If I remove the dry_run call in email_admin and then run the script, it works great.
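    One common pattern, sketched below under the assumption that the goal is simply to display a copy-pasteable command in dry-run mode while still executing the untouched arguments otherwise: print the command with printf '%q ' (which preserves quoting information, albeit as backslash escapes) and run "$@" as-is. The variable names are reused from the question as placeholders.

        #!/bin/bash
        # Sketch of a dry_run helper: show a shell-quoted rendering of the command,
        # and execute the original arguments unchanged unless DRY_RUN is set.
        dry_run() {
            printf '%q ' "$@"   # each argument is re-quoted so the printed line is reusable
            printf '\n'
            [ -n "$DRY_RUN" ] && return 0
            "$@"                # the quoted -c string is still passed to su as one argument
        }

        email_admin() {
            echo " Emailing admin"
            dry_run su - "$target_username" -c "cd $GIT_WORK_TREE && git log -1 -p | mail -s '$mail_subject' $admin_email"
            echo " Emailed"
        }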

    Read the article

  • Free Oracle Special Edition eBook - Server Virtualization for Dummies

    - by Thanos
    Oracle has released a quick and easy-to-read guide on Oracle Virtualization: now available is "Server Virtualization for Dummies," an Oracle Special Edition eBook. Need to virtualize, but not sure where to start? Virtualization should make things simpler, not more complex. To learn more about how Oracle's server virtualization solutions can help you eliminate complexity, reduce costs, and respond rapidly to changing needs, download Server Virtualization for Dummies, an Oracle Special Edition eBook. Discover how virtualization can make things simpler, from server consolidation to application deployment. This eBook guides you through a range of server virtualization topics, including:

        Why virtualization is critical to transforming today's IT into tomorrow's cloud computing environment
        How different types of virtualization are suited to different business needs
        How application-driven virtualization dramatically accelerates application deployment

    Oracle Virtualization delivers the most complete and integrated solution for building flexible IT infrastructures, beyond just server virtualization and consolidation. Learn how Oracle Virtualization's unique application-driven approach and integrated management offering helps to accelerate enterprise application deployment and simplify management of the data center, from disk to apps. All our customers, prospects, and partners are welcome to follow this link to download an exclusive copy of Server Virtualization for Dummies, Oracle Special Edition, today.

    Read the article

  • Best Practices - updated: which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). This is an updated and enlarged version of the post on this topic originally published in October 2012.

    One frequent question is "what type of domain should I use to run applications?" There used to be a simple answer, "run applications in guest domains in almost all cases", but now there are more things to consider. Enhancements to Oracle VM Server for SPARC, and the introduction of systems like the current SPARC servers, including the T4 and T5 systems, the Oracle SuperCluster T5-8 and the Oracle SuperCluster M6-32, provide scale and performance much higher than the original servers that ran domains. Single-CPU performance, I/O capacity, and memory sizes are much larger now, and far more demanding applications are now being hosted in logical domains. The general advice continues to be "use guest domains in almost all cases", meaning "use virtual I/O rather than physical I/O", unless there is a specific reason to use the other domain types. The sections below discuss the criteria for choosing between domain types.

    Review: division of labor and types of domain

    Oracle VM Server for SPARC offloads management and I/O functionality from the hypervisor to domains (also called virtual machines), providing a modern alternative to older VM architectures that use a "thick", monolithic hypervisor. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, further improving reliability and security. Oracle VM Server for SPARC defines the following types of domain, each with their own roles:

    Control domain - the management control point for the server; it runs the logical domain daemon and constraints engine, and is used to configure domains and manage resources. The control domain is the first domain to boot on a power-up, is always an I/O domain, and is usually a service domain as well. It doesn't have to be, but there's no reason not to leverage it for virtual I/O services. There is one control domain per T-series system, and one per Physical Domain (PDom) on an M5-32 or M6-32 system. M5 and M6 systems can be physically domained, with logical domains within the physical ones.

    I/O domain - a domain that has been assigned physical I/O devices. The devices may be:

        one or more PCIe root complexes (in which case the domain is also called a root complex domain). The domain has native access to all the devices on the assigned PCIe buses. The devices can be any device type supported by Solaris on the hardware platform.
        an SR-IOV (Single-Root I/O Virtualization) function. SR-IOV lets a physical device (also called a physical function, or PF) be subdivided into multiple virtual functions (VFs), which can be individually assigned directly to domains. SR-IOV devices currently can be Ethernet or InfiniBand devices.
        direct I/O ownership of one or more PCI devices residing in a PCIe bus slot. The domain has direct access to the individual devices.

    An I/O domain has native performance and functionality for the devices it owns, unmediated by any virtualization layer. It may also have virtual devices.

    Service domain - a domain that provides virtual network and disk devices to guest domains. The services are defined by commands that are run in the control domain. It usually is an I/O domain as well, in order for it to have devices to virtualize and serve out.
Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run. Device considerations Consider the following when choosing between virtual devices and physical devices: Virtual devices provide the best flexibility - they can be dynamically added to and removed from a running domain, and you can have a large number of them up to a per-domain device limit. Virtual devices are compatible with live migration - domains that exclusively have virtual devices can be live migrated between servers supporting domains. On the other hand: Physical devices provide the best performance - in fact, native "bare metal" performance. Virtual devices approach physical device throughput and latency, especially with virtual network devices that can now saturate 10GbE links, but physical devices are still faster. Physical I/O devices do not add load to service domains - all the I/O goes directly from the I/O domain to the device, while virtual I/O goes through service domains, which must be provided sufficient CPU and memory capacity. Physical I/O devices can be other than network and disk - we virtualize network, disk, and serial console, but physical devices can be the wide range of attachable certified devices, including things like tape and CDROM/DVD devices. In some cases the lines are now blurred: virtual devices have better performance than previously: starting with Oracle VM Server for SPARC 3.1 there is near-native virtual network performance. There is more flexibility with physical devices than before: SR-IOV devices can now be dynamically reconfigured on domains. Tradeoffs one used to have to make are now relaxed: you can often have the flexibility of virtual I/O with performance that previously required physical I/O. You can have the performance and isolation of SR-IOV with the ability to dynamically reconfigure it, just like with virtual devices. Typical deployment A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain in order to leverage the available PCI buses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain too so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain that is also the one I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure, as described in Availability Best Practices - Avoiding Single Points of Failure . Guest domains have virtual disk and virtual devices provisioned from more than one service domain, so failure of a service domain or I/O path or device does not result in an application outage. This also permits "rolling upgrades" in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multi-path I/O) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains. 
Changing application deployment patterns The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000 class machines with 2 I/O buses, so there is more I/O capacity that can be used for applications. Increased server capacity made it attractive to run more vertically-scaled applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the Oracle SuperCluster engineered systems mentioned previously. In those engineered systems, I/O domains are used for high performance applications with native I/O performance for disk and network and optimized access to the Infiniband fabric. Another technical enhancement is Single Root I/O Virtualization (SR-IOV), which make it possible to give domains direct connections and native I/O performance for selected I/O devices. Not all I/O domains own PCI complexes, and there are increasingly more I/O domains that are not service domains. They use their I/O connectivity for performance for their own applications. However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain. where the ldm command must be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. For reference, an excellent guide to secure deployment of domains by Stefan Hinker is at Secure Deployment of Oracle VM Server for SPARC. To recap: Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration. They should be considered the default domain type to use unless there is a specific requirement that mandates an I/O domain. I/O domains can be used for applications with the highest performance requirements. Single Root I/O Virtualization (SR-IOV) makes this more attractive by giving direct I/O access to more domains, and by permitting dynamic reconfiguration of SR-IOV devices. Today's larger systems provide multiple PCIe buses - for example, 16 buses on the T5-8 - making it possible to configure multiple I/O domains each owning their own bus. Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect domains that depend on it. This concern can be mitigated by providing guests' their virtual I/O from more than one service domain, so interruption of service in one service domain does not cause an application outage. The control domain should in general not be used to run applications, for the same reason. Oracle SuperCluster uses the control domain for applications, but it is an exception. It's not a general purpose environment; it's an engineered system with specifically configured applications and optimization for optimal performance. These are recommended "best practices" based on conversations with a number of Oracle architects. 
Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements. Summary Higher capacity servers that run Oracle VM Server for SPARC are attractive for applications with the most demanding resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide peak performance for critical applications. That said, the improved virtual device performance in Oracle VM Server means that the default choice should still be guest domains with virtual I/O.
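    To make the typical guest-domain-with-virtual-I/O deployment described above concrete, here is a minimal sketch of the ldm commands involved. The domain, virtual switch, and volume names (ldg1, primary-vsw0, primary-vds0, vol1), the disk backend path, and the resource sizes are placeholders, and the sketch assumes a virtual switch and virtual disk server have already been created in the control/service domain:

        # Create a guest domain and assign CPU and memory (example sizes only).
        ldm add-domain ldg1
        ldm add-vcpu 8 ldg1
        ldm add-memory 16G ldg1

        # Virtual network: attach a vnet backed by the service domain's virtual switch.
        ldm add-vnet vnet0 primary-vsw0 ldg1

        # Virtual disk: export a backend through the virtual disk server, then attach it.
        ldm add-vdsdev /dev/zvol/dsk/rpool/ldg1-disk0 vol1@primary-vds0
        ldm add-vdisk vdisk0 vol1@primary-vds0 ldg1

        # Bind resources and start the guest; with only virtual I/O it remains eligible for live migration.
        ldm bind-domain ldg1
        ldm start-domain ldg1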

    Read the article

  • Mapping between 4+1 architectural view model & UML

    - by Sadeq Dousti
    I'm a bit confused about how the 4+1 architectural view model maps to UML. Wikipedia gives the following mapping:

        Logical view: Class diagram, Communication diagram, Sequence diagram
        Development view: Component diagram, Package diagram
        Process view: Activity diagram
        Physical view: Deployment diagram
        Scenarios: Use-case diagram

    The paper Role of UML Sequence Diagram Constructs in Object Lifecycle Concept gives the following mapping:

        Logical view (class diagram (CD), object diagram (OD), sequence diagram (SD), collaboration diagram (COD), state chart diagram (SCD), activity diagram (AD))
        Development view (package diagram, component diagram)
        Process view (use case diagram, CD, OD, SD, COD, SCD, AD)
        Physical view (deployment diagram)
        Use case view (use case diagram, OD, SD, COD, SCD, AD), which combines the four mentioned above

    The web page UML 4+1 View Materials presents the following mapping:

    Finally, the white paper Applying 4+1 View Architecture with UML 2 gives yet another mapping:

        Logical view: class diagrams, object diagrams, state charts, and composite structures
        Process view: sequence diagrams, communication diagrams, activity diagrams, timing diagrams, interaction overview diagrams
        Development view: component diagrams
        Physical view: deployment diagram
        Use case view: use case diagram, activity diagrams

    I'm sure further searching will reveal other mappings as well. While various people usually have different perspectives, I don't see why that is the case here. Specifically, each UML diagram describes the system from a particular aspect. So, for instance, why is the "sequence diagram" considered to describe the "logical view" of the system by one author, while another author considers it to describe the "process view"? Could you please help me clarify the confusion?

    Read the article

  • Java Champion Jim Weaver on JavaFX

    - by Janice J. Heiss
    Hardly anyone knows more about JavaFX than Java Champion and Oracle’s JavaFX Evangelist, Jim Weaver, who will be leading two Hands on Labs on aspects of JavaFX at this year’s JavaOne: HOL11265 – “Playing to the Strengths of JavaFX and HTML5” (With Jeff Klamer - App Designer, Jeff Klamer Design) Wednesday, Oct 3, 3:00 PM - 5:00 PM - Hilton San Francisco - Franciscan A/B/C/D HOL3058 – “Custom JavaFX Controls” (With Gerrit Grunwald, Senior Software Engineer, Canoo Engineering AG; Bob Larsen, Consultant, Larsen Consulting; and Peter Vašenda, Software Engineer, Oracle) Tuesday, Oct 2, 12:30 PM - 2:30 PM - Hilton San Francisco - Franciscan A/B/C/D I caught up with Jim at JavaOne to ask him for a current snapshot of JavaFX. “In my opinion,” observed Weaver, “the most important thing happening with JavaFX is the ongoing improvement to rich-client Java application deployment. For example, JavaFX packaging tools now provide built-in support for self-contained application packages. A package may optionally contain the Java Runtime, and be distributed with a native installer (e.g., a DMG or EXE). This makes it easy for users to install JavaFX apps on their client machines, perhaps obtaining the apps from the Mac App Store, for example. Igor Nekrestyanov and Nancy Hildebrandt have written a comprehensive guide to JavaFX application deployment, the following section of which covers Self-Contained Application Packaging: http://docs.oracle.com/javafx/2/deployment/self-contained-packaging.htm#BCGIBBCI. “Igor also wrote a blog post titled, "7u10: JavaFX Packaging Tools Update," that covers improvements introduced so far in Java SE 7 update 10. Here's the URL to the blog post: https://blogs.oracle.com/talkingjavadeployment/entry/packaging_improvements_in_jdk_7” I asked about how the strengths of JavaFX and HTML5 interact and reinforce each other. “They interact and reinforce each other very well. I was about to be amazed at your insight in asking that question, but then recalled that one of my JavaOne sessions is a Hands-on Lab titled ‘Playing to the Strengths of JavaFX and HTML5.’ In that session, we'll cover the JavaFX and HTML5 WebView control, the strengths of each technology, and the various ways that Java and contents of the WebView can interact.” And what is he looking forward to at JavaOne? “I'm personally looking forward to some excellent sessions, and connecting with colleagues and friends that I haven't seen in a while!” Jim Weaver is another good reason to feel good about JavaOne.

    Read the article

  • Windows Azure Learning Plan - Compute

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with the "compute" function of Windows Azure, which includes Configuration Files, the Web Role, the Worker Role, and the VM Role. There is a general programming guide for Windows Azure that you can find here to help with the overall process.   Configuration Files Configuration Files define the environment for a Windows Azure application, similar to an ASP.NET application. This section explains how to work with these. General Introduction and Overview http://blogs.itmentors.com/bill/2009/11/04/configuration-files-and-windows-azure/ Service Definition File Schema http://msdn.microsoft.com/en-us/library/ee758711.aspx Service Configuration File Schema http://msdn.microsoft.com/en-us/library/ee758710.aspx  Windows Azure Web Role The Web Role runs code (such as ASP pages) that require a User Interface. Web Role "Boot Camp" Video  https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032470854&CountryCode=US Web Role Deployment Checklist http://blogs.infragistics.com/blogs/anton_staykov/archive/2010/06/30/windows-azure-web-role-deployment-checklist.aspx  Using a Web Role as a Worker Role for Small Applications http://www.31a2ba2a-b718-11dc-8314-0800200c9a66.com/2010/12/how-to-combine-worker-and-web-role-in.html Windows Azure Worker Role  The Worker Role is used for code that does not require a direct User Interface. Worker Role "Boot Camp" Video https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032470871&CountryCode=US Worker Role versus Web Roles http://msdn.microsoft.com/en-us/library/gg433012.aspx Deploying other applications (like Java) in a Windows Azure Worker Role http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx Windows Azure VM Role The Windows Azure VM Role is an Operating System-level mechanism for code deployment. VM Role Overview and Details  http://msdn.microsoft.com/en-us/library/gg465398.aspx  The proper use of the VM Role http://blogs.msdn.com/b/buckwoody/archive/2010/12/28/the-proper-use-of-the-vm-role-in-windows-azure.aspx

    Read the article

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company a2hosting has a specific configuration (symfony,mysql,git) that I don't want to spend time duplicating when I can just ssh and develop remotely or through netbeans remote editing features. My question is: how can I use git to separate my site into three areas (live, staging and dev)? Here's my initial thought: public_html (live site and git repo) testing: a mirror of the site used for visual tests (full git repo) dev/ticket# : git branches of public_html used for features and bug fixes (full git repo) Version Control with git: Initial setup: cd public_html git init git add * git commit -m 'initial commit of the site' cd .. git clone public_html testing mkdir dev Development: cd /dev git clone ../testing ticket# all work is done in ./dev/ticket#, then visit www.domain.com/dev/ticket# to visually test make granular commits as necessary until dev is done git push origin master:ticket# if the above fails: merge latest testing state into current dev work: git merge origin/master then try the push again mark ticket# as ready for integration integration and deployment process: cd ../../testing git merge ticket# -m "integration test for ticket#" --no-ff (check for conflicts) run hudson tests visit www.domain.com/testing for visual test if all tests pass: if this ticket marks the end of a big dev sprint: make a snapshot with git tag git push --tags origin else git push origin cd ../public_html git checkout -f (live site should have the latest dev from ticket#) else: revert the merge: git checkout master~1; git commit -m "reverting ticket#" update ticket# that testing failed with the failure details Snapshots: Each major deployment sprint should have a standard name and should be tracked. Method: git tag Naming convention: TBD Reverting site to previous state If something goes wrong, then revert to previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again. My questions: Does this workflow make sense? If not, any recommendations? Is my approach for reverting correct, or is there a better way to say 'revert to before x commit'?

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it – previous to SSDT you would need to define a DEFAULT constraint however it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment – that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file: The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”, in other words checking that option will cause SSDT to insert an arbitrary default value into your newly created NON NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT CONSTRAINT at the same time as the column is created and then immediately removes that constraint: ALTER TABLE [dbo].[T1]    ADD [C1] INT NOT NULL,         CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b]; You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column – I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight, in other words smart defaults could be seen to be “papering over the cracks”. If you think that should be improved go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet
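    For illustration, here is a minimal sketch of the kind of Post-Deployment script that could follow on from the example above; the replacement value and the WHERE clause are assumptions for illustration only, not part of the original post:

        -- Hypothetical Post-Deployment step: overwrite the arbitrary 0 that Smart Defaults
        -- pushed into the new NOT NULL column with a value that makes sense for the data.
        UPDATE [dbo].[T1]
        SET    [C1] = 1
        WHERE  [C1] = 0;

    In practice the SET clause would more likely derive the value from existing columns or a lookup rather than a constant.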

    Read the article

  • How to communicate within a company what is being Continually Deployed

    - by Francis Spor
    I work for a small development company, 20 people total in the entire company, 3 in actual development, and we've adopted CD for our commits to trunk, and it works great, from a code management and up-time side. However - we're getting flak from our support staff and marketing department that they don't feel that they're getting enough lead time on new features and notifications on bug fixes that could change behavior. Part of why we love the CD system is for us in development, it's fast, we fix the bug, add the quick feature, close the Bugz and move on with our day to the next item. All members of our company are now on HipChat at all times, and when a deployment occurs, a message is sent to a room that all company members are in, letting them know what was just deployed (it just shows the commit messages from tip back to the last recorded deployment). We in development are also attempting to make sure that when we're making a change that modifies the UI or a public facing behavior, we post a screenshot to the All Company room and explain what the behavior change is, seeking pushback or concerns. Often, the response is silence. Sometimes, it's a few minor questions, but nothing that need stop the deployment from happening. What I'm wondering is how do other users of the CD method deal with notifications of new features and changes to areas of the company that are not development - and eventually on to customers in the world? Thanks, Francis

    Read the article

  • Management Reporter Installation – Lessons Learned Part II - Dynamics GP

    - by Ryan McBee
    After feeling pretty good about my deployment skills with Management Reporter for Dynamics GP a few weeks ago, I ran into two additional lessons learned that I wanted to share. First, on another new deployment, I got the error shown below which says “An error occurred while creating the database. View the installation log for additional information.” This problem initially pointed me to KB 2406948, which did not provide a resolution. After several hours of troubleshooting, I found there is an issue if the default database locations in SQL Server are set to the root of a drive. You will want to set the default to something like the following to get it installed: C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA. My default database locations for the data and log files were indeed sitting on the H:\ and I:\ drives. To change this property in your SQL Server instance you need to open SQL Server Management Studio, right-click on the server, and choose Properties and then Database Settings. When I initially got the error, I briefly considered creating the ManagementReporter database by hand, but experience tells me that would have created more headaches down the road. The second problem I ran into with this particular deployment of Management Reporter happened when I started the FRx conversion utility. The error reads “The ‘Microsoft.ACE.OLEDB.12.0’ provider is not registered on the local machine.” I had a suspicion that this error was related to the fact that FRx uses outdated technology and I happened to be on a new install of Server 2008 R2. A knowledge base search quickly pointed me to KB 2102486. The resolution for this Management Reporter issue was to install the Microsoft Access Database Engine Redistributable, by following the link below. http://www.microsoft.com/downloads/details.aspx?familyid=C06B8369-60DD-4B64-A44B-84B371EDE16D&displaylang=en
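    As a hedged aside (not part of the original post): the same default-location change can be scripted instead of using the SSMS dialog. The registry value names below are the commonly used ones, but treat them as an assumption and verify them against your own instance; new databases only pick up the change after the SQL Server service restarts.

        -- Sketch: point the instance's default data and log locations at a folder rather than a drive root.
        EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
             N'Software\Microsoft\MSSQLServer\MSSQLServer',
             N'DefaultData', REG_SZ,
             N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA';
        EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
             N'Software\Microsoft\MSSQLServer\MSSQLServer',
             N'DefaultLog', REG_SZ,
             N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA';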

    Read the article

  • Should we be able to deploy a single package to the SSIS Catalog?

    - by jamiet
    My buddy Sutha Thiru sent me an email recently asking about my opinion on a particular nuance of the project deployment model in SQL Server Integration Services (SSIS) 2012 and I’d like to share my response as I think it warrants a wider discussion. Sutha asked: Jamie What is your take on this? http://www.mattmasson.com/index.php/2012/07/can-i-deploy-a-single-ssis-package-from-my-project-to-the-ssis-catalog/ Overnight I was talking to Matt who confirmed that they have no plans to change the deployment model. For example, if we have the following scenario, how do we deploy? Sprint 1 Pkg1, 2 & 3 have been developed and deployed to UAT. Once signed off it's been deployed to Live. Sprint 2 Pkg 4 & 5 have been developed. During this time users raised a bug on Pkg2. We want to make the change to Pkg2 and deploy that to UAT and eventually to LIVE without releasing Pkg 4 & 5. How do we do it? Matt pointed me to his blog entry which I have seen before. http://www.mattmasson.com/index.php/2012/02/thoughts-on-branching-strategies-for-ssis-projects/ Thanks Sutha My response: Personally, even though I've experienced the exact problem you just outlined, I agree with the current approach. I steadfastly believe that there should not be a way for an unscrupulous developer to slide in a new version of a package under the covers. Deploying .ispac files brings a degree of rigour to your operational processes. Yes, that means that we as SSIS developers are going to have to get better at using source control and branching properly but that is no bad thing in my opinion. Claiming to be proper "developers" is a bit of a cheap claim if we don't even do the fundamentals correctly. I would be interested in the thoughts of others who have used the project deployment model. Do you agree with my point of view? @Jamiet
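    To make the point concrete, here is a minimal sketch of what deployment to the catalog looks like in T-SQL: the unit handed to catalog.deploy_project is the whole .ispac, not an individual package. The folder name, project name, and file path below are hypothetical.

        -- Load the compiled project (.ispac) and deploy it to the SSIS catalog as one unit.
        DECLARE @project_binary varbinary(max) =
                (SELECT BulkColumn FROM OPENROWSET(BULK N'C:\drops\MyEtlProject.ispac', SINGLE_BLOB) AS ispac);
        DECLARE @operation_id bigint;

        EXEC [SSISDB].[catalog].[deploy_project]
             @folder_name    = N'Finance',
             @project_name   = N'MyEtlProject',
             @project_stream = @project_binary,
             @operation_id   = @operation_id OUTPUT;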

    Read the article

  • RoundhousE now supports Oracle, SQL2000

    - by Robz / Fervent Coder
    RoundhousE, the database migration software that is based on SQL scripts, has added support for Oracle and SQL 2000. There have also been numerous other little things, including better logging and a script run errors table. The script errors table captures what went wrong when/if your scripts are not quite up to par or there is some other issue. A special thanks goes out to http://twitter.com/PascalMestdach and http://twitter.com/jochenjonc. They worked hard on this and all I did was provide guidance and help bring it back to the trunk. This is what an entry in the database looks like: This is a preview of the new log: ================================================== Versioning ================================================== Attempting to resolve version from C:\code\roundhouse\code_drop\sample\deployment\_BuildInfo.xml using //buildInfo/version. Found version 0.5.0.188 from C:\code\roundhouse\code_drop\sample\deployment\_BuildInfo.xml. Migrating TestRoundhousE from version 0 to 0.5.0.188. Versioning TestRoundhousE database with version 0.5.0.188 based on http://roundhouse.googlecode.com/svn. ================================================== Migration Scripts ================================================== Looking for Update scripts in "C:\code\roundhouse\code_drop\sample\deployment\..\db\TestRoundhousE\up". These should be one time only scripts. -------------------------------------------------- Running 0001_CreateTables.sql on (local) - TestRoundhousE. Running 0002_ChangeTable.sql on (local) - TestRoundhousE. Running 0003_TestBatchSplitter.sql on (local) - TestRoundhousE. -------------------------------------------------- But what are you waiting for? Head out and grab the latest release today!
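    For context, a hypothetical example of what one of those one-time "up" scripts might contain (the table and column names are invented for illustration); RoundhousE runs each script in the up folder once, records it, and logs any failure to the script run errors table mentioned above:

        -- 0004_AddCustomerEmail.sql: a one-time migration script picked up by RoundhousE.
        ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;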

    Read the article

  • How do I structure code and builds for continuous delivery of multiple applications in a small team?

    - by kingdango
    Background: 3-5 developers supporting (and building new) internal applications for a non-software company. We use TFS although I don't think that matters much for my question. I want to be able to develop a deployment pipeline and adopt continuous integration / deployment techniques. Here's what our source tree looks like right now. We use a single TFS Team Project. $/MAIN/src/ $/MAIN/src/ApplicationA/VSSOlution.sln $/MAIN/src/ApplicationA/ApplicationAProject1.csproj $/MAIN/src/ApplicationA/ApplicationAProject2.csproj $/MAIN/src/ApplicationB/... $/MAIN/src/ApplicationC $/MAIN/src/SharedInfrastructureA $/MAIN/src/SharedInfrastructureB My Goal (a pretty typical promotion pipeline) When a code change is made to a given application I want to be able to build that application and auto-deploy that change to a DEV server. I may also need to build dependencies on Shared Infrastructure Components. I often also have some database scripts or changes as well. If developer testing passes I want to have a manually triggered but automated deploy of that build on a STAGING server where end-users will review new functionality. Once it's approved by end users I want a manually triggered auto-deploy to production. Question: How can I best adopt continuous deployment techniques in a multi-application environment? A lot of the advice I see is more single-application-specific; how is that best applied to multiple applications? For step 1, do I simply set up a separate Team Build for each application? What's the best approach to accomplishing steps 2 and 3 of promoting the latest build to new environments? I've seen this work well with web apps but what about database changes?

    Read the article

< Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >