Search Results

Search found 17924 results on 717 pages for 'order by'.


  • Copy TFS Build Definitions between Projects and Collections

    - by Jakob Ehn
    Originally posted on: http://geekswithblogs.net/jakob/archive/2014/06/05/copy-tfs-build-definitions-between-projects-and-collections.aspx
    Over the last couple of years it has become apparent that using multiple team projects in TFS is generally a bad idea. There are of course exceptions to this, but a lot of things become much easier when you put all of your projects and teams in the same team project. Fellow ALM MVP Martin Hinshelwood has blogged about this several times, as have other people in the community. In particular, using the backlog and portfolio management tools makes much more sense when everything is located in the same team project. Unfortunately, consolidating multiple team projects into one is not that easy; it involves migrating source code, work items, reports etc. Another thing that also needs to be migrated is build definitions. It is possible to clone build definitions within the same team project using the TFS Power Tools, and the Community TFS Build Manager also lets you clone build definitions to other team projects. But there is no tool that allows you to clone/copy a build definition to another collection. So, I whipped up a simple console application that lets you do this. The tool can be downloaded from https://onedrive.live.com/redir?resid=EE034C9F620CD58D!8162&authkey=!ACTr56v1QVowzuE&ithint=file%2c.zip
    Using CopyTFSBuildDefinitions
    You use the tool like this:
        CopyTFSBuildDefinitions SourceCollectionUrl SourceTeamProject BuildDefinitionName DestinationCollectionUrl DestinationTeamProject [NewDefinitionName]
    Arguments:
    - SourceCollectionUrl: The URL of the TFS collection that contains the team project with the build definition that you want to copy
    - SourceTeamProject: The name of the team project that contains the build definition
    - BuildDefinitionName: The name of the build definition
    - DestinationCollectionUrl: The URL of the TFS collection that contains the team project that you want to copy your build definition to
    - DestinationTeamProject: The name of the team project in the destination collection
    - NewDefinitionName (optional): Use this to override the name of the new build definition. If you don't specify this, the name will be the same as the original one
    Example:
        CopyTFSBuildDefinitions https://jakob.visualstudio.com DemoProject WebApplication.CI https://anotheraccount.visualstudio.com
    Notes
    Since we are (potentially) creating a build definition in a new collection, there is no guarantee that the various paths defined in the build definition exist in the new collection. For example, a build definition refers to server paths in TFVC or to repos and branches in TFGit. It also refers to build controllers that definitely don't exist in the new collection. So there will be some cleanup to do after you copy your build definitions. You can fix some of these using the Community TFS Build Manager; for example, it is very easy to apply the correct build controller to a set of build definitions.
    The problem stated above also applies to build process templates. However, the tool tries to find a build process template in the new team project with the same file name as the one that existed in the old team project. If it finds one, it will be used for the new build definition. Otherwise it will use the default build template.
    If you want to run the tool for many build definitions, you can use this SQL script, compliments of Scrum/ALM MVP Richard Hundhausen, to generate the necessary commands:
        USE Tfs_Collection
        GO
        SELECT 'CopyTFSBuildDefinitions.exe http://SERVER:8080/tfs/collection "' + P.ProjectName + '" "' + REPLACE(BD.DefinitionName,'\','') + '" http://NEWSERVER:8080/tfs/COLLECTION TEAMPROJECT'
        FROM tbl_Project P
             INNER JOIN tbl_BuildGroup BG ON BG.TeamProject = P.ProjectUri
             INNER JOIN tbl_BuildDefinition BD ON BD.GroupId = BG.GroupId
        ORDER BY P.ProjectName, BD.DefinitionName
    Hope that helps. Let me know if you have any problems with the tool or if you find it useful.
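    A small footnote for anyone who only wants the commands for a single source team project, or who wants the copies to get a distinct name: the variation below is only an illustrative sketch built on Richard's script. The 'DemoProject' filter and the '-Copy' suffix are made-up values to adjust to your own environment; it simply filters on the project name and fills in the optional NewDefinitionName argument.
        USE Tfs_Collection
        GO
        SELECT 'CopyTFSBuildDefinitions.exe http://SERVER:8080/tfs/collection "' + P.ProjectName + '" "'
               + REPLACE(BD.DefinitionName, '\', '') + '" http://NEWSERVER:8080/tfs/COLLECTION TEAMPROJECT "'
               + REPLACE(BD.DefinitionName, '\', '') + '-Copy"'      -- optional NewDefinitionName argument
        FROM tbl_Project P
             INNER JOIN tbl_BuildGroup BG ON BG.TeamProject = P.ProjectUri
             INNER JOIN tbl_BuildDefinition BD ON BD.GroupId = BG.GroupId
        WHERE P.ProjectName = 'DemoProject'                          -- illustrative filter: one source team project
        ORDER BY P.ProjectName, BD.DefinitionName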

    Read the article

  • Technical workshop with the gurus: Architecting Oracle Database-As-A-Service (DBaaS)

    - by Javier Puerta
    OCTOBER 2013 Invitation: Architecting Oracle Database-As-A-Service (DBaaS)
    Dear partner,
    We are pleased to invite you to a 2-day workshop dedicated to EMEA partners on "Architecting Oracle Private Database Cloud & Delivering Database-As-A-Service (DBaaS)". This exclusive workshop will be delivered by Product Management and Product Development from Oracle HQ and focuses on the main theme CIOs have been tackling over the last decade: consolidation to the private cloud. For many customers the journey to consolidation has led to DBaaS cloud deployments that significantly reduce costs and offer agile IT services. With the recent launch of Oracle Database 12c, the game really has changed in terms of what Oracle offers and how database clouds can be deployed. REGISTER NOW
    Who should attend: Enterprise Architects, Infrastructure Architects, and DB Architects from System Integrators and large Independent Software Vendors. Take this opportunity to learn from the gurus how you can help your customers maximize their cloud consolidation strategies. The workshop's main focus is service delivery, which includes standardization and consolidation, and how you would help your customers transform their current IT infrastructure to a service delivery model. It will discuss best practices and review customer examples that have successfully implemented a database cloud.
    The agenda is split into two days of sessions:
    Day 1: Overview & Planning; Database Cloud - Demos; Customer Case Studies; Database 12c
    Day 2: Database Cloud - Design; Database Cloud - Implementation; EM Cloud Control; DBaaS on Engineered Systems; Question and Answers
    Attendance is free of charge for qualified Oracle partners - register now for one of the sessions below:
    - 5 & 6 November 2013 - United Kingdom - Manchester
    - 7 & 8 November 2013 - Germany - Munich
    - 11 & 12 November 2013 - Netherlands - Amsterdam
    - 14 & 15 November 2013 - Turkey - Istanbul
    - 18 & 19 November 2013 - Austria - Vienna
    Looking forward to seeing you!
    Javier Puerta, Director, Core Technology Partner Programs EMEA
    Prashant Barot, Director, Core Technology

    Read the article

  • Software Architecture versus Software Design

    Recently, I was asked what the differences between software architecture and software design are. At a very superficial level both architecture and design seem to mean relatively the same thing. However, if we examine both of these terms further we will find that they are in fact very different due to the level of detail they encompass.
    Software architecture can be defined as the essence of an application because it deals with high level concepts that do not include any details as to how they will be implemented. To me this gives stakeholders a view of a system or application as if someone was viewing the earth from outer space. At this distance only very basic elements of the earth can be detected, like land, weather and water. As the viewer comes closer to earth the details in this view start to become more defined. Details about the earth's surface start to take form, and man-made structures become detectable. The process of transitioning a view from outer space to inside our earth's atmosphere is similar to how an architectural concept is transformed into an architectural design. From this vantage point stakeholders can start to see buildings and other structures as if they were looking out of a small plane window. This distance is still high enough to see a large area of the earth's surface while still being able to see some details about the surface. This viewing point is very similar to the actual design process of an application in that it takes the very high level architectural concept or concepts and applies concrete design details to form a software design that encompasses the actual implementation details in the form of responsibilities and functions. Examples of these details include interfaces, components, data, and connections.
    In review, software architecture deals with high level concepts without regard to any implementation details. Software design, on the other hand, takes high level concepts and applies concrete details so that software can be implemented. As part of the transition from software architecture to software design, an evaluation of the architecture is recommended. There are several benefits to including this step as part of the transition process. It allows projects to ensure that they are on the correct path to meeting the stakeholders' requirement goals, identifies possible cost savings, and can be used to find missing or nonspecific requirements that cause ambiguity in a design. In the book "Evaluating Software Architectures: Methods and Case Studies", the authors define key benefits of adding an architectural review process to ensure that an architecture is ready to move on to the design phase.
    Benefits of evaluating software architecture:
    - Gathers all stakeholders to communicate about the project
    - Goals are clearly defined in regards to the creation or validation of specific requirements
    - Goals are prioritized so that when conflicts occur decisions will be made based on goal priority
    - Defines a clear expectation of the architecture so that all stakeholders have a keen understanding of the project
    - Ensures high quality documentation of the architecture
    - Enables discoveries of architectural reuse
    - Increases the quality of architecture practices
    I can remember a few projects that I worked on that could have really used an architectural review prior to being passed on to developers. One such project was to create some new advertising space on the company's website in order to sell space based on location and some other criteria. I was one of the developers selected to lead this project, and I was given a high level design concept and a long list of ever-changing requirements because the sales department had no clear direction as to what exactly the project was going to do or how they were going to bill the clients once they actually agreed to purchase the ad space. In my personal opinion IT should have pushed back to have the requirements further articulated instead of forcing programmers to code blindly, attempting to build such an ambiguous project. Unfortunately, we had to suffer with this project for about 4 months when it should have only taken 1.5 months to complete, due to the constantly changing and unclear requirements.
    References
    Clements, P., Kazman, R., & Klein, M. (2002). Evaluating Software Architectures: Methods and Case Studies. Westford, Massachusetts: Courier Westford.

    Read the article

  • Next Generation Mobile Clients for Oracle Applications & the role of Oracle Fusion Middleware

    - by Manish Palaparthy
    Oracle Enterprise Applications have been available with modern web browser based interfaces for a while now. The web browsers available in smart phones no longer require a special markup language such as WML, since the processing power of these handsets is quite near that of a typical personal computer. Modern mobile devices such as the iPhone, Android phones, BlackBerry and Windows 8 devices can now render XHTML and HTML quite well. This means you could potentially use your mobile browser to access your favorite enterprise application. While the mobile browser would render the UI, you might find it difficult to use due to the formatting and presentation of the native UI. Smart phones offer a lot more than just a powerful web browser; they offer capabilities such as maps, GPS, multi touch, pinch zoom, accelerometers, vivid colors, camera with video, support for 3G/4G networks, cloud storage, NFC, streaming media, tethering, voice based features, multi tasking, messaging, social networking, web browsers with support for HTML 5 and many more features. While the full potential of enterprise mobile apps is yet to be realized, Oracle has published a few of its applications that take advantage of the above capabilities and are available natively for the iPhone. Here are some of them.
    iPhone Apps
    - Oracle Business Approvals for Managers: Offers a highly intuitive user interface built as a native mobile application to conveniently access pending actions related to expenses, purchase requisitions, HR vacancies and job offers. You can even view BI reports related to the worklist actions. Works with Oracle E-Business Suite.
    - Oracle Business Indicators: Real-time secure access to OBI reports.
    - Oracle Business Approvals for Sales Managers: Enables sales executives to review key targeted tasks and access relevant business intelligence reports. Works with Siebel CRM, Siebel Quote & Order Capture.
    - Oracle Mobile Sales Assistant: CRM application that provides real-time, secure access to the information your sales organization needs, lets you complete frequent tasks, and lets you collaborate with colleagues and customers. Works with Oracle CRM.
    - Oracle Mobile Sales Forecast: Designed specifically for the mobile business user to view key opportunities. Works with Oracle CRM On Demand.
    - Oracle iReceipts: Part of Oracle PeopleSoft Expenses, which allows users to create and submit expense lines for cash transactions in real-time. Works with Oracle PeopleSoft Expenses.
    Now that we have seen some mobile apps that Oracle has published, I am sure you are intrigued as to how to develop your own clients for the use-cases that you deem most fit. For that, Oracle has ADF Mobile.
    ADF Mobile
    You could develop mobile applications with the SDKs available for the smart phone platforms, but you'd really have to be a mobile ninja developer to build apps with a rich user experience like the ones above. The challenges really multiply when you have to support multiple mobile devices. The ADF Mobile framework is really handy to meet this challenge. ADF Mobile can be used to:
    - Develop apps for the mobile browser: An application built with the ADF Mobile framework installs on a smart device, renders the user interface via HTML5, and has access to device services. This means the programming model is primarily web-based, which offers consistency with other enterprise applications as well as easier migration to new platforms.
    - Develop apps for the mobile client (native apps): These applications have access to device services, enabling a richer experience for users than a browser alone can offer. ADF Mobile enables rapid and declarative development of rich, on-device mobile applications. Developers only need to write an application once and then they can deploy the same application across multiple leading smart phone platforms.
    Oracle SOA Suite
    Although the mobile users are using the smart phone apps, and the actual transactions are being executed in the underlying app, there is a lot of technical wizardry going on under the surface. The key technical components are all implemented at the SOA infrastructure layer: 1. web service calls, 2. authentication, 3. intercepting web service calls and adding security credentials to the request, 4. invoking the services of the enterprise application, and 5. integrating with the enterprise application via the adapter.
    As you can see from the above diagram, the key pre-requisites to mobile-enable an enterprise application are: the core enterprise application, Oracle SOA Suite, and ADF Mobile.

    Read the article

  • A quick look at: sys.dm_os_buffer_descriptors

    - by Jonathan Allen
    SQL Server places data into cache as it reads it from disk so as to speed up future queries. This DMV lets you see how much data is cached at any given time, and knowing how this changes over time can help you ensure your servers run smoothly and are adequately resourced to run your systems. This DMV gives the number of cached pages in the buffer pool along with the database id that they relate to:
        USE [tempdb]
        GO
        SELECT COUNT(*) AS cached_pages_count ,
               CASE database_id
                 WHEN 32767 THEN 'ResourceDb'
                 ELSE DB_NAME(database_id)
               END AS Database_name
        FROM sys.dm_os_buffer_descriptors
        GROUP BY DB_NAME(database_id) , database_id
        ORDER BY cached_pages_count DESC;
    This gives you results which are quite useful, but if you add a new column to convert the pages value into MB (the ( COUNT(*) * 8.0 ) / 1024 AS MB expression used in the logging script further down), then they become more relevant and meaningful.
    To see how your server reacts to queries, start up SSMS and connect to a test server and database - mine is called AdventureWorks2008. Make sure you start from a known position by running:
        -- Only run this on a test server, otherwise your production server's
        -- performance may drop off a cliff and your phone will start ringing.
        DBCC DROPCLEANBUFFERS
        GO
    Now we can run a query that would normally turn a DBA's hair white:
        USE [AdventureWorks2008]
        GO
        SELECT *
        FROM [Sales].[SalesOrderDetail] AS sod
             INNER JOIN [Sales].[SalesOrderHeader] AS soh ON [sod].[SalesOrderID] = [soh].[SalesOrderID]
    …and then check our cache situation: a nice low figure – not! Almost 2000 pages of data in cache, equating to approximately 15MB. Luckily these tables are quite narrow; if this had been on a table with more columns then this could be even more dramatic.
    So, let's make our query more efficient. After resetting the cache with the DROPCLEANBUFFERS and FREEPROCCACHE code above, we'll only select the columns we want and implement a WHERE predicate to limit the rows to a specific customer.
        SELECT [sod].[OrderQty] ,
               [sod].[ProductID] ,
               [soh].[OrderDate] ,
               [soh].[CustomerID]
        FROM [Sales].[SalesOrderDetail] AS sod
             INNER JOIN [Sales].[SalesOrderHeader] AS soh ON [sod].[SalesOrderID] = [soh].[SalesOrderID]
        WHERE [soh].[CustomerID] = 29722
    …and check the effect on our cache: now that is more sympathetic to our server and the other systems sharing its resources.
    I can hear you asking: "What has this got to do with logging, Jonathan?" Well, a smart DBA will keep an eye on this metric on their servers so they know how their hardware is coping, and be ready to investigate anomalies so that no 'disruptive' code starts to unsettle things. Capturing this information over a period of time can lead you to build a picture of how a database relies on the cache and how it interacts with other databases. This might allow you to decide on appropriate schedules for overnight jobs or otherwise balance the work of your server.
    You could schedule this collection to run with a SQL Agent job and store the data in your DBA's database by creating a table with:
        IF OBJECT_ID('CachedPages') IS NOT NULL
            DROP TABLE CachedPages
        CREATE TABLE CachedPages
            (
              cached_pages_count INT ,
              MB INT ,
              Database_Name VARCHAR(256) ,
              CollectedOn DATETIME DEFAULT GETDATE()
            )
    …and then filling it with:
        INSERT INTO [dbo].[CachedPages]
                ( [cached_pages_count] ,
                  [MB] ,
                  [Database_Name]
                )
        SELECT COUNT(*) AS cached_pages_count ,
               ( COUNT(*) * 8.0 ) / 1024 AS MB ,
               CASE database_id
                 WHEN 32767 THEN 'ResourceDb'
                 ELSE DB_NAME(database_id)
               END AS Database_name
        FROM sys.dm_os_buffer_descriptors
        GROUP BY database_id
    After this has been left logging your system metrics for a while, you can easily see how your databases use the cache over time and may see some spikes that warrant your attention. This sort of logging can be applied to all sorts of server statistics so that you can gather information that will give you baseline data on how your servers are performing. This means that when you get a problem you can see which statistics are out of their normal range and target your efforts to resolve the issue more rapidly.
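    As a sketch of the kind of follow-up analysis this logging enables, the query below reads the CachedPages table created above and flags readings that are well above a database's own average. The 20% threshold is an arbitrary assumption chosen to illustrate spotting spikes, so adjust it to whatever "out of the normal range" means for your systems.
        -- Assumes dbo.CachedPages has been populated by the scheduled collection job for a while.
        SELECT cp.Database_Name ,
               cp.CollectedOn ,
               cp.MB ,
               avgs.AvgMB
        FROM dbo.CachedPages AS cp
             INNER JOIN ( SELECT Database_Name ,
                                 AVG(MB) AS AvgMB
                          FROM dbo.CachedPages
                          GROUP BY Database_Name
                        ) AS avgs ON avgs.Database_Name = cp.Database_Name
        WHERE cp.MB > avgs.AvgMB * 1.2   -- arbitrary 20% threshold for a "spike"
        ORDER BY cp.Database_Name , cp.CollectedOn;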

    Read the article

  • How to create managed properties at site collection level in SharePoint2013

    - by ybbest
    In SharePoint 2013, you can create managed properties at the site collection level. Today, I'd like to show you how to do so through PowerShell.
    1. Define your managed properties, crawled properties and managed property type in an external csv file. The PowerShell script will read this file and create the managed properties and the mappings.
    2. As you can see, I also defined the variant type; this is because you need the variant type to create the crawled property. In order to have the crawled properties, you need to do a full crawl and also make sure you have data populated for your custom column. However, if you do not want to do a full crawl to create those crawled properties, you can create them yourself using PowerShell; you just need to make sure the crawled properties you create have the same names as the ones a full crawl would create.
    Managed property types:
    - Text = 1
    - Integer = 2
    - Decimal = 3
    - DateTime = 4
    - YesNo = 5
    - Binary = 6
    Variant types:
    - Text = 31
    - Integer = 20
    - Decimal = 5
    - DateTime = 64
    - YesNo = 11
    3. You can use the following script to create your managed properties at the site collection level; the difference when creating a managed property at site collection level is that you pass in the site collection id.
        param(
            [string] $siteUrl="http://SP2013/",
            [string] $searchAppName = "Search Service Application",
            $ManagedPropertiesList=(IMPORT-CSV ".\ManagedProperties.csv")
        )
        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
        $searchapp = $null
        function AppendLog
        {
            param ([string] $msg, [string] $msgColor)
            $currentDateTime = Get-Date
            $msg = $msg + " --- " + $currentDateTime
            if (!($logOnly -eq $True))
            {
                # write to console
                Write-Host -f $msgColor $msg
            }
            # write to log file
            Add-Content $logFilePath $msg
        }
        $scriptPath = Split-Path $myInvocation.MyCommand.Path
        $logFilePath = $scriptPath + "\CreateManagedProperties_Log.txt"
        function CreateRefiner
        {
            param ([string] $crawledName, [string] $managedPropertyName, [Int32] $variantType, [Int32] $managedPropertyType, [System.GUID] $siteID)
            $cat = Get-SPEnterpriseSearchMetadataCategory -Identity SharePoint -SearchApplication $searchapp
            $crawledproperty = Get-SPEnterpriseSearchMetadataCrawledProperty -Name $crawledName -SearchApplication $searchapp -SiteCollection $siteID
            if($crawledproperty -eq $null)
            {
                Write-Host AppendLog "Creating Crawled Property for $managedPropertyName" Yellow
                $crawledproperty = New-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $searchapp -VariantType $variantType -SiteCollection $siteID -Category $cat -PropSet "00130329-0000-0130-c000-000000131346" -Name $crawledName -IsNameEnum $false
            }
            $managedproperty = Get-SPEnterpriseSearchMetadataManagedProperty -Identity $managedPropertyName -SearchApplication $searchapp -SiteCollection $siteID -ErrorAction SilentlyContinue
            if($managedproperty -eq $null)
            {
                Write-Host AppendLog "Creating Managed Property for $managedPropertyName" Yellow
                $managedproperty = New-SPEnterpriseSearchMetadataManagedProperty -Name $managedPropertyName -Type $managedPropertyType -SiteCollection $siteID -SearchApplication $searchapp -Queryable:$true -Retrievable:$true -FullTextQueriable:$true -RemoveDuplicates:$false -RespectPriority:$true -IncludeInMd5:$true
            }
            $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name }
            if($mappedProperty -eq $null)
            {
                Write-Host AppendLog "Creating Crawled -> Managed Property mapping for $managedPropertyName" Yellow
                New-SPEnterpriseSearchMetadataMapping -CrawledProperty $crawledproperty -ManagedProperty $managedproperty -SearchApplication $searchapp -SiteCollection $siteID
            }
            $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name }
            #Get-FASTSearchMetadataCrawledPropertyMapping -ManagedProperty $managedproperty
        }
        $searchapp = Get-SPEnterpriseSearchServiceApplication $searchAppName
        $site = Get-SPSite $siteUrl
        $siteId = $site.id
        Write-Host "Start creating Managed properties"
        $i = 1
        FOREACH ($property in $ManagedPropertiesList)
        {
            $propertyName = $property.managedPropertyName
            $crawledName = $property.crawledName
            $managedPropertyType = $property.managedPropertyType
            $variantType = $property.variantType
            Write-Host $managedPropertyType
            Write-Host "Processing managed property $propertyName $($i)..."
            $i++
            CreateRefiner $crawledName $propertyName $variantType $managedPropertyType $siteId
            Write-Host "Managed property created " $propertyName
        }
    Key Concepts
    - Crawled Properties: Crawled properties are discovered by the search index service component when crawling content.
    - Managed Properties: Properties that are part of the Search user experience, which means they are available for search results, advanced search, and so on, are managed properties.
    - Mapping Crawled Properties to Managed Properties: To make a crawled property available for the Search experience (to make it available for Search queries and display it in Advanced Search and search results) you must map it to a managed property.
    References
    - Administer search in SharePoint 2013 Preview
    - Managing Metadata

    Read the article

  • Page output caching for dynamic web applications

    - by Mike Ellis
    I am currently working on a web application where the user steps (forward or back) through a series of pages with "Next" and "Previous" buttons, entering data until they reach a page with the "Finish" button. Until finished, all data is stored in Session state, then sent to the mainframe database via web services at the end of the process. Some of the pages display data from previous pages in order to collect additional information. These pages can never be cached because they are different for every user. Pages that don't display this dynamic data can be cached, but only the first time they load. After that, the data that was previously entered needs to be displayed. This requires Page_Load to fire, which means the page can't be cached at that point. A couple of weeks ago, I knew almost nothing about implementing page caching. Now I still don't know much, but I know a little bit, and here is the solution that I developed with the help of others on my team and a lot of reading and trial-and-error.
    We have a base page class defined from which all pages inherit. In this class I have defined a method that sets the caching settings programmatically. Pages that can be cached call this base page method in their Page_Load event within an if(!IsPostBack) block, which ensures that only the page itself gets cached, not the data on the page.
        if (!IsPostBack)
        {
            ...
            SetCacheSettings();
            ...
        }

        protected void SetCacheSettings()
        {
            Response.Cache.AddValidationCallback(new HttpCacheValidateHandler(Validate), null);
            Response.Cache.SetExpires(DateTime.Now.AddHours(1));
            Response.Cache.SetSlidingExpiration(true);
            Response.Cache.SetValidUntilExpires(true);
            Response.Cache.SetCacheability(HttpCacheability.ServerAndNoCache);
        }
    The AddValidationCallback sets up an HttpCacheValidateHandler method called Validate which runs logic when a cached page is requested. The Validate method signature is standard for this method type.
        public static void Validate(HttpContext context, Object data, ref HttpValidationStatus status)
        {
            string visited = context.Request.QueryString["v"];
            if (visited != null && "1".Equals(visited))
            {
                status = HttpValidationStatus.IgnoreThisRequest; //force a page load
            }
            else
            {
                status = HttpValidationStatus.Valid; //load from cache
            }
        }
    I am using the HttpValidationStatus values IgnoreThisRequest or Valid, which force the Page_Load event method to run or allow the page to load from cache, respectively. Which one is set depends on the value in the querystring. The value in the querystring is set up on each page in the "Next" and "Previous" button click event methods, based on whether the page that the button click is taking the user to has any data on it or not.
        bool hasData = HasPageBeenVisited(url);
        if (hasData)
        {
            url += VISITED;
        }
        Response.Redirect(url);
    The HasPageBeenVisited method determines whether the destination page has any data on it by checking one of its required data fields. (I won't include it here because it is very system-dependent.) VISITED is a string constant containing "?v=1" and gets appended to the url if the destination page has been visited.
    The reason this logic is within the "Next" and "Previous" button click event methods is that 1) the Validate method is static, which doesn't allow it to access non-static data such as the data fields for a particular page, and 2) at the time at which the Validate method runs, either the data has not yet been deserialized from Session state or is not available (different AppDomain?) because anytime I accessed the Session state information from the Validate method, it was always empty.

    Read the article

  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort using C (GCC 4.4.3 on Ubuntu 10.04 running on a 4 GB RAM laptop with an Intel DUO CPU at 2GHz) and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are:
        void merge_sort(const char **lines, int start, int end);
        void quick_sort(const char **lines, int start, int end);
    i.e. both take an array of pointers to strings and sort the elements with index i: start <= i <= end. I have produced some files containing random strings with length on average 4.5 characters. The test files range from 100 lines to 10000000 lines.
    I was a bit surprised by the results because, even though I know that merge sort has complexity O(n log(n)) while quick sort is O(n^2), I have often read that on average quick sort should be as fast as merge sort. However, my results are the following:
    - Up to 10000 strings, both algorithms perform equally well. For 10000 strings, both require about 0.007 seconds.
    - For 100000 strings, merge sort is slightly faster with 0.095 s against 0.121 s.
    - For 1000000 strings merge sort takes 1.287 s against 5.233 s of quick sort.
    - For 5000000 strings merge sort takes 7.582 s against 118.240 s of quick sort.
    - For 10000000 strings merge sort takes 16.305 s against 1202.918 s of quick sort.
    So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, the fact that its complexity is quadratic will become evident?
    Here is a sketch of what I did. In the merge sort implementation, the partitioning consists in calling merge sort recursively, i.e.
        merge_sort(lines, start, (start + end) / 2);
        merge_sort(lines, 1 + (start + end) / 2, end);
    Merging of the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge the pointers are copied back to the original array. So the strings are stored once but I need twice as much memory for the pointers.
    For quick sort, the partition function chooses the last element of the array to sort as the pivot and scans the previous elements in one loop. After it has produced a partition of the type
        start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end
    it calls itself recursively:
        quick_sort(lines, start, pivotIndex - 1);
        quick_sort(lines, pivotIndex + 1, end);
    Note that this quick sort implementation sorts the array in-place and does not require additional memory, therefore it is more memory efficient than the merge sort implementation.
    So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets), can I expect a better performance of quick sort with respect to merge sort?
    EDIT: Thank you for your answers. My implementation is in-place and is based on the pseudo-code I have found on wikipedia in Section In-place version: function partition(array, 'left', 'right', 'pivotIndex'), where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again and it seems correct to me. In order to rule out the case that I am using the wrong implementation I have uploaded the source code on github (in case you would like to take a look at it).
    Your answers seem to suggest that I am using the wrong test data. I will look into it and try out different test data sets. I will report as soon as I have some results.

    Read the article

  • Apache virtual host does not work properly

    - by Jori
    I have read a lot of information all over the Internet regarding this subject, and can not figure out what I'm doing wrong. I'm trying to host two websites under different names locally under Windows 7 with Apache's Virtual Hosting functionality. This is what I have done already:
    In the httpd.conf file I uncommented the following line, so that the virtual host configuration file will be included in the main configuration sequence.
        # Virtual hosts
        Include conf/extra/httpd-vhosts.conf
    This is how I edited my httpd-vhosts.conf:
        #
        # Virtual Hosts
        #
        # If you want to maintain multiple domains/hostnames on your
        # machine you can setup VirtualHost containers for them. Most configurations
        # use only name-based virtual hosts so the server doesn't need to worry about
        # IP addresses. This is indicated by the asterisks in the directives below.
        #
        # Please see the documentation at
        # <URL:http://httpd.apache.org/docs/2.2/vhosts/>
        # for further details before you try to setup virtual hosts.
        #
        # You may use the command line option '-S' to verify your virtual host
        # configuration.

        #
        # Use name-based virtual hosting.
        #
        NameVirtualHost *:80

        #
        # VirtualHost example:
        # Almost any Apache directive may go into a VirtualHost container.
        # The first VirtualHost section is used for all requests that do not
        # match a ServerName or ServerAlias in any <VirtualHost> block.
        #
        #<VirtualHost *:80>
        #    ServerAdmin [email protected]
        #    DocumentRoot "C:/apache/docs/dummy-host.localhost"
        #    ServerName dummy-host.localhost
        #    ServerAlias www.dummy-host.localhost
        #    ErrorLog "logs/dummy-host.localhost-error.log"
        #    CustomLog "logs/dummy-host.localhost-access.log" common
        #</VirtualHost>
        #
        #<VirtualHost *:80>
        #    ServerAdmin [email protected]
        #    DocumentRoot "C:/apache/docs/dummy-host2.localhost"
        #    ServerName dummy-host2.localhost
        #    ErrorLog "logs/dummy-host2.localhost-error.log"
        #    CustomLog "logs/dummy-host2.localhost-access.log" common
        #</VirtualHost>

        <VirtualHost *:80>
            ServerName arterieur
            DocumentRoot "J:/webcontent/www20"
            <Directory "J:/webcontent/www20">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
    As you can see I commented the Virtual Host examples out and added my own one (I did one for this example). Also I am sure that J:\webcontent\www20 exists.
    At last I edited the Windows hosts file located in C:\Windows\System32\drivers\etc\hosts, and now it looks like this:
        # Copyright (c) 1993-2009 Microsoft Corp.
        #
        # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
        #
        # This file contains the mappings of IP addresses to host names. Each
        # entry should be kept on an individual line. The IP address should
        # be placed in the first column followed by the corresponding host name.
        # The IP address and the host name should be separated by at least one
        # space.
        #
        # Additionally, comments (such as these) may be inserted on individual
        # lines or following the machine name denoted by a '#' symbol.
        #
        # For example:
        #
        #      102.54.94.97     rhino.acme.com          # source server
        #       38.25.63.10     x.acme.com              # x client host
        # localhost name resolution is handled within DNS itself.
        #    127.0.0.1       localhost
        #    ::1             localhost
        127.0.0.1       arterieur
    Then I restarted Apache with the Apache Service Monitor, and it gave me the following fatal error: The requested operation has failed! I tried to look at the apache/logs/error.log file, but it did not log anything; I guess it only logs errors after startup. Does anyone know what I'm doing wrong?

    Read the article

  • PASS Summit 2011 - Part IV

    - by Tara Kizer
    This is the final blog for my PASS Summit 2011 series.  Well okay, a mini-series, I guess.
    On the last day of the conference, I attended Keith Elmore's and Boris Baryshnikov's (both from Microsoft) "Introducing the Microsoft SQL Server Code Named "Denali" Performance Dashboard Reports", Jeremiah Peschka's (blog|twitter) "Rewrite your T-SQL for Great Good!", and Kimberly Tripp's (blog|twitter) "Isolated Disasters in VLDBs".
    Keith and Boris talked about the lifecycle of a session, figuring out the running time and the waiting time.  They pointed out the transient nature of the reports.  You could be drilling into one to uncover a problem, but the session may have ended by the time you've drilled all of the way down.  Also, the reports are for troubleshooting live problems and not historical ones.  You can use Management Data Warehouse for historical troubleshooting.  The reports provide similar benefits to the Activity Monitor, however Activity Monitor doesn't provide context sensitive drill through.
    One thing I learned in Keith's and Boris' session was that the buffer cache hit ratio should really never be below 87% due to the read-ahead mechanism in SQL Server.  When a page is read, it will read the entire extent.  So for every page read, you get 7 more reads.  If you need any of those 7 extra pages, well, they are already in cache.
    I had a lot of fun in Jeremiah's session about refactoring code, plus I learned a lot.  His slides were visually presented in a fun way, which just made for a more upbeat presentation.  Jeremiah says that before you start refactoring, you should look at your system.  Investigate missing or too many indexes, out-of-date statistics, and other areas that could be leading to your code running slow.  He talked about code standards.  He suggested using common abbreviations for aliases instead of one-letter aliases.  I'm a big offender of one-letter aliases, but he makes a good point.  He said that join order does not matter to the optimizer, but it does matter to those who have to read your code.  Now let's get into refactoring!
    - Eliminate useless things: useless/unneeded joins and columns. If you don't need it, get rid of it!
    - Instead of using DISTINCT/JOIN, replace with EXISTS
    - Simplify your conditions; use UNION or better yet UNION ALL instead of OR to avoid a scan and use indexes for each union query (a small sketch follows at the end of this post)
    - Branching logic: instead of IF this, IF that, and on and on, use dynamic SQL (sp_executesql, please!) or use a parameterized query in the application
    - Correlated subqueries: YUCK! Replace with a join
    - Eliminate repeated patterns
    Last, but certainly not least, was Kimberly's session.  Kimberly is my favorite speaker.  I attended her two-day pre-conference seminar at PASS Summit 2005 as well as a SQL Immersion Event last December.  Did I mention she's my favorite speaker?  Okay, enough of that.
    Kimberly's session was packed with demos.  I had seen some of it in the SQL Immersion Event, but it was very nice to get a refresher on these, especially since I've got a VLDB with some growing pains.  One key takeaway from her session is the idea to use a log shipping solution with a load delay, such as 6, 8, or 24 hours behind the primary.  In the case of, say, an accidentally dropped table in a VLDB, we could retrieve it from the secondary database rather than waiting an eternity for a restore to complete.  Kimberly let us know that in SQL Server 2012 (it finally has a name!), online rebuilds are supported even if there are LOB columns in your table.  This will simplify custom code that intelligently figures out if an online rebuild is possible.
    There was actually one last time slot for sessions that day, but I had an airplane to catch and my kids to see!
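    To make the "UNION ALL instead of OR" tip a little more concrete, here is a small sketch; the dbo.Orders table and its columns are hypothetical (they are not from Jeremiah's slides), so it only shows the shape of the rewrite.
        -- Before: an OR across two different columns often prevents index seeks.
        SELECT OrderID, CustomerID, SalesPersonID
        FROM dbo.Orders
        WHERE CustomerID = 29722
              OR SalesPersonID = 42;

        -- After: each branch can seek on its own index. UNION removes any row that
        -- matches both predicates; when the branches can never overlap, UNION ALL
        -- is cheaper still because it skips the de-duplication step.
        SELECT OrderID, CustomerID, SalesPersonID
        FROM dbo.Orders
        WHERE CustomerID = 29722
        UNION
        SELECT OrderID, CustomerID, SalesPersonID
        FROM dbo.Orders
        WHERE SalesPersonID = 42;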

    Read the article

  • Where is the value of OEA

    - by [email protected]
    In a room full of architects, if you were to ask for the definition of enterprise architecture, or the importance thereof, you are likely to get a number of varying viewpoints, ranging from a complete analysis of the digital assets of an organization to a strategic alignment of business goals/objectives to IT initiatives. Similarly, in a room full of senior business executives, if you asked them how they see their IT groups and their effectiveness in aligning to business strategy, you would get a myriad of responses, ranging from "a huge drain on our bottom line", "always more expensive than budgeted", "lack of agility; by the time IT is ready, my business strategy has changed", and, on the rare occurrence, "a leader of innovation that is in lock step with my business strategy".
    However, does this necessarily demonstrate the overall value of enterprise architecture? Having a framework and process is of critical importance to help produce a number of the artefacts that ultimately align technology goals and initiatives to business strategy; however, is that really where the value is? I believe that first we need to understand the concept of value. Value typically is a measure of sorts: when we purchase a product, its value is equivalent to the maximum amount that someone is willing to pay for the product. However, is the same equation valid in terms of the business value of enterprise architecture? Is the library of artefacts generated through a process/framework, inclusive of a strategic roadmap to realize the enterprise architecture, where the value is?
    If we agree that enterprise architecture is the alignment of IT and IT assets to support business strategy, and that by achieving our business strategy we have increased the business value of the enterprise, then it seems that, in order to really identify the true value of an enterprise architecture, we need to understand how we measure business value. A number of formal measurement methodologies exist for this purpose: business models, balanced scorecards, etc. After we have an understanding of how to measure the business value of each of the organizational units within an enterprise, we understand how the enterprise architecture contributes to the success of the business strategy, and we can EXECUTE on the roadmap to implement and deliver the IT initiatives that provide MEASURABLE returns.
    As we analyse the value chain of each of the individual organizational units within the enterprise, we may identify how each unit has performed by quantitatively measuring its proximity to achieving the goals defined by the business for that unit. However, it would appear that true business value (the aggregate of all of the business units in the value chain) is to some degree subjectively measured: for public companies this lies in shareholder value, the true value being the maximum amount that someone would pay for shares of the organization.

    Read the article

  • Speaker at the German Visual FoxPro Developer Conference 2003

    The following is an excerpt from the UniversalThread conference coverage of the German Visual FoxPro Developer Conference 2003, written by Hans-Otto Lochmann and Armin Neudert.
    Track: Visual FoxPro and Linux
    This track consists of 4 sessions presented on one day in one sequence. Originally the Linux portion of this track was to be presented by Whil Hentzen, the well-known publisher, book author and conference speaker. Unfortunately some illness prevented him from joining this DevCon. Rainer got the bad news only early on Friday morning. It was definitely too late to find a replacement among the already invited speakers on such short notice. So Rainer decided to take over these "three sessions in a row" by himself with "a little help from his friends". He hired a coach for the weekend and prepared slides and sessions by himself - the originally planned slides and session material were still in the USA. Rainer barely survived an endless disaster of C0000005's due to various wrong configuration settings... At the presentation Jochen Kirstätter helped massively with technical details regarding Linux whereas Rainer did the slides and the presentation. Gerold Lübben then presented the MySQL part - as originally planned.
    This track concentrated on how to run Visual FoxPro applications on Linux machines with the help of a Windows emulator like Wine. As more and more people use Linux machines in production (and not just for running servers), more and more invitations to bid for a development job include the requirement to run the application in a Linux environment. If you would like to participate in such submissions, then you should get familiar with the open source operating system Linux and the open source database system MySQL. [...]
    These sessions provided a broad, complete overview of where Linux fits into the current computing landscape from the perspective of a VFP developer, where VFP can be used with Linux, and a conceptual plan for how to approach the incorporation of Linux into your day-to-day work. In order for you to be able to work with a Linux back end, you're going to need to know something about how Linux works. The best way involves a two-step process: First, plunk down a Linux workstation on your desk next to your Windows machine and develop some experience with the new OS. Second, once you have a basic level of comfort with Linux, gained through your experience on a workstation, leverage that knowledge and learn to connect to a Linux server from your Windows machine.
    This track showed both of these processes: what you can expect when you set up your Linux workstation, how to set it up, how to connect to your Windows network, how to fit VFP into the mix, and even how you could use it to replace your Windows workstation in some cases. Also this track demonstrated how to connect to an existing Linux server, running MySQL or another back end, and how to get your VFP apps talking to that back end data.
    This track also showed both of the positions you can take. Rainer disliked it wholeheartedly (the bad guy position in these talks) and Jochen loved it (the good guy and "typical Linux techie" position we all love). These opposite positions lasted for three sessions and both sides were shown with their pros and cons in live and lively discussions between the speakers (club banging was forbidden). Gerold Lübben showed how Visual FoxPro and MySQL can work together. MySQL is one of the most well known open source databases, available for nearly all platforms.
    Particularly in eBusiness MySQL is well positioned and well known for its performance and its stability. Still we like Visual FoxPro more - for sure. [...]

    Read the article

  • HPCM 11.1.2.x - Outline Optimisation for Calculation Performance

    - by Jane Story
    When an HPCM application is first created, it is likely that you will want to carry out some optimisation on the HPCM application’s Essbase outline in order to improve calculation execution times. There are several things that you may wish to consider. Because at least one dense dimension for an application is required to deploy from HPCM to Essbase, “Measures” and “AllocationType”, as the only required dimensions in an HPCM application, are created dense by default. However, for optimisation reasons, you may wish to consider changing this default dense/sparse configuration. In general, calculation scripts in HPCM execute best when they are targeting destinations with one or more dense dimensions. Therefore, consider your largest target stage i.e. the stage with the most assignment destinations and choose that as a dense dimension. When optimising an outline in this way, it is not possible to have a dense dimension in every target stage and so testing with the dense/sparse settings in every stage is the key to finding the best configuration for each individual application. It is not possible to change the dense/sparse setting of individual cloned dimensions from EPMA. When a dimension that is to be repeated in multiple stages, and therefore cloned, is defined in EPMA, every instance of that dimension has the same storage setting. However, such manual changes may not be preserved in all cases. Please see below for full explanation. However, once the application has been deployed from EPMA to HPCM and from HPCM to Essbase, it is possible to make the dense/sparse changes to a cloned dimension directly in Essbase. This can be done by editing the properties of the outline in Essbase Administration Services (EAS) and manually changing the dense/sparse settings of individual dimensions. There are two methods of deployment from HPCM to Essbase from 11.1.2.1. There is a “replace” deploy method and an “update” deploy method: “Replace” will delete the Essbase application and replace it. If this method is chosen, then any changes made directly on the Essbase outline will be lost. If you use the update deploy method (with or without archiving and reloading data), then the Essbase outline, including any manual changes you have made (i.e. changes to dense/sparse settings of the cloned dimensions), will be preserved. Notes If you are using the calculation optimisation technique mentioned in a previous blog to calculate multiple POVs (https://blogs.oracle.com/pa/entry/hpcm_11_1_2_optimising) and you are calculating all members of that POV dimension (e.g. all months in the Period dimension) then you could consider making that dimension dense. Always review Block sizes after all changes! The maximum block size recommended in the Essbase Database Administrator’s Guide is 100k for 32 bit Essbase and 200k for 64 bit Essbase. However, calculations may perform better with a larger than recommended block size provided that sufficient memory is available on the Essbase server. Test different configurations to determine the most optimal solution for your HPCM application. Please note that this blog article covers HPCM outline optimisation only. Additional performance tuning can be achieved by methodically testing database settings i.e data cache, index cache and/or commit block settings. For more information on Essbase tuning best practices, please review these items in the Essbase Database Administrators Guide. 
For additional information on the commit block setting, please see the previous PA blog article https://blogs.oracle.com/pa/entry/essbase_11_1_2_commit

    Read the article

  • Why Your ERP System Isn't Ready for the Next Evolution of the Enterprise

    - by ken.pulverman
    ERP has been the backbone of enterprise software.  The data held in your ERP system is the core of most companies.  Efficiencies gained through the accounting and resource allocation capabilities of ERP software have literally saved companies trillions of dollars. Not only does everything seem to be fine with your ERP system, you haven't had to touch it in years.  Why aren't you ready for what comes next? Well, judging by the growth rates in the space (Oracle posted only a 3% growth rate, while SAP showed a 12% decline), there hasn't been much modernization going on, just a little replacement activity.
    If you are like most companies, your ERP system is connected to a proprietary middleware solution that only effectively talks with a handful of other systems you might have acquired from the same vendor.  Connecting your legacy system through proprietary middleware is expensive and brittle, and if you are like most companies, you were only willing to pay an SI so much before you said "enough."  So your ERP is working.  It's humming along.  You might not be able to get Order to Promise information when you take orders in your call center, but there are workarounds that work just fine.
    So what's the problem? The problem is that you built your business around your ERP core, and now there is such pressure to innovate your business processes to keep up that you need a whole new slew of modern apps, and you need ERP data to be accessible from everywhere.  Every time you change a sales territory or a comp plan or change a benefits provider, your ERP system, literally the economic brain of your business, needs to know what's going on.  And this giant need to access and provide information to your ERP is only growing. What makes matters even more challenging is that apps today come in every flavor under the Sun™: SaaS, cloud, managed, hybrid, outsourced, composite... and they all have different integration protocols.
    The only easy way to get ahead of all this is to modernize the way you connect and run your applications.  Unlike the middleware solutions of yesteryear, modern middleware is effectively the operating system of the enterprise.  In the same way that you rely on Apple, Microsoft, and Google to find a video driver for your 23" monitor or to ensure that Word or Keynote runs, modern middleware takes care of intra-application connectivity and process execution.  It effectively allows you to take ERP out of the middle while ensuring connectivity to your vital data for anything you want to do.  The diagram below reflects that change.
    In this model, the hegemony of ERP is over.  It too has to become a stealthy modern app to help you quickly adapt to business changes while managing vital information.  And through modern middleware it will connect to everything.  So yes, ERP as we've known it is dead, but long live ERP as a connected application member of the modern enterprise.
    I want to thank Andrew Zoldan, Group Vice President, Oracle Manufacturing Industries Business Unit, for introducing me to how some of his biggest customers have benefited by modernizing their applications infrastructure and making ERP a connected application.
    by John Burke, Group Vice President, Applications Business Unit

    Read the article

  • Using the SOA-BPM VirtualBox Appliance

    - by antony.reynolds
    Quickstart Guide to Using the Oracle Appliance for SOA/BPM. Recently I have been setting up some machines for fellow engineers. My base setup consists of Oracle Enterprise Linux with Oracle VirtualBox. Note that after installing VirtualBox I needed to add the VirtualBox Extension Pack to enable RDP access, amongst other features. In order to get them started quickly with some images, I downloaded the pre-built appliance for SOA/BPM from OTN. Out of the box this provides a VirtualBox image that is pre-installed with everything you will need to develop SOA/BPM applications. Specifically, by using the virtual appliance I got the following pre-installed and configured: Oracle Enterprise Linux 5 (user oracle, password oracle; user root, password oracle); Oracle Database XE, pre-configured with the SOA/BPM repository and set to auto-start on OS startup; Oracle SOA Suite 11g PS2, configured with a "collapsed domain" with all services (SOA/BAM/EM) running in the AdminServer and listening on port 7001; Oracle BPM Suite 11g, configured in the same domain as SOA Suite; and Oracle JDeveloper 11g with the SOA/BPM extensions.
Networking: The VM by default uses NAT (Network Address Translation) for network access. Make sure that the advanced settings for port forwarding allow access through the host to guest ports. It should be pre-configured to forward requests on the following ports: SSH - host 2222 to guest 22; HTTP - host 7001 to guest 7001; Database - host 1521 to guest 1521. Note that only one VirtualBox image can use a given host port, so make sure you are not clashing if it seems not to work.
What's Left to Do? There is still some customization of the environment that may be required. If you need to configure a proxy server as I did, then set up an HTTP proxy for the oracle and root users: add "export http_proxy=http://proxy-host:proxy-port" to ~oracle/.bash_profile and ~root/.bash_profile; add "export http_proxy=http://proxy-host:proxy-port" to /etc/.bashrc; edit System->Preferences to set the Network Proxy; in Firefox set Preferences->Network->Connection Settings to "Use system proxy settings"; and in JDeveloper set Edit->Preferences->Web Browser and Proxy to the required proxy settings. You may need to configure yum to point to a public OEL yum repository - such as http://public-yum.oracle.com. If you are going to be accessing the SOA server from outside the VirtualBox image then you may want to set the soa-infra Server URLs to be the hostname of the host OS.
Snap! Once I had the machine configured how I wanted to use it, I took a snapshot so that I can always get back to the pristine install I have now. Snapshots are one of the big benefits of putting a development environment into a virtualized environment. I can make changes to my installation and if I mess it up I can restore the image to a last known good snapshot.
Hey Presto! Ready to Go. This is the quickest way to get up and running with SOA/BPM Suite. Out of the box the download will work; I only did extra customization so I could use services outside the firewall and browse outside the firewall from within my SOA VirtualBox image. I also used yum to update the OS to the latest binaries. So have fun.
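If you want to confirm that the forwarded host ports listed above actually reach the guest once the appliance is running, a quick check from the host OS is to open a TCP connection to each one. The C# sketch below is just such a check, under the assumption that you run it on the host machine; it is not part of the appliance, and the port list simply mirrors the forwarding table above.

using System;
using System.Net.Sockets;

class ForwardedPortCheck
{
    static void Main()
    {
        // Host-side ports that the appliance forwards to the guest (see the table above).
        var ports = new (string Name, int Port)[]
        {
            ("SSH", 2222),
            ("HTTP/WebLogic", 7001),
            ("Database", 1521)
        };

        foreach (var (name, port) in ports)
        {
            try
            {
                using var client = new TcpClient();
                // Use a short timeout so an unreachable port does not hang the check.
                bool completed = client.ConnectAsync("localhost", port).Wait(TimeSpan.FromSeconds(2));
                Console.WriteLine(completed && client.Connected
                    ? $"{name} ({port}): reachable"
                    : $"{name} ({port}): no response - is the VM running with port forwarding configured?");
            }
            catch (Exception)
            {
                // Connection refused or another socket error.
                Console.WriteLine($"{name} ({port}): connection refused");
            }
        }
    }
}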

    Read the article

  • Increasing touch surface (#wp7dev)

    - by Laurent Bugnion
    When you design for Windows Phone 7 (or for any touch device, for that matter, and most especially small screens), you need to be very careful to give enough surface to your users' fingers. It is easy to miss a touch on such small screens, and that can be horrifyingly frustrating. This is especially true when people are on the move, trying to hit a control while walking and holding their device in one hand, or when the device is mounted in a car and vibrating with the engine. In my experience, a touch surface should ideally be at least 60x60 pixels to be easy to activate on the Windows Phone 7 screen (which is, as we know, 800 pixels x 480 pixels). I try to make my touch surfaces 80x80 pixels minimum. This causes a few design challenges of course.
Using transparent backgrounds: One thing helps us tremendously here: some surfaces can be made transparent and yet still react to touch. The secret is the following: if a panel has a null background (i.e. the Background is not set at all), then the empty surface does not react to touch. If, however, the Background is set to the Transparent color (or any color where the alpha channel is set to 0), then it will react to touch. Setting a transparent background is easy. For example: <Grid Background="#00000000"> </Grid> or <Grid Background="Transparent"> </Grid> In C#: var grid = new Grid { Background = new SolidColorBrush(Colors.Transparent) };
Using negative margins: Having a transparent background reactive to touch is a good start, but in addition you must make sure that the surface is big enough for my clumsy fingers. One way to achieve that is to increase the transparent, touch-reactive surface and reposition the element using negative margins. For example, consider the following UI. I changed the transparent background of the HyperlinkButton to Red in order to visualize the touch surface. In this figure, the Settings HyperlinkButton is 105 pixels x 31 pixels. This is wide enough, but really small in height and easy to miss. To improve this, we can use negative margins, for instance: <HyperlinkButton Content="Settings" HorizontalAlignment="Right" VerticalAlignment="Bottom" Height="60" Margin="0,0,0,-15" /> Notice the usage of a negative bottom margin to bring the HyperlinkButton back to the bottom of the main Grid's first row, where it belongs. And the result is: notice how the touch surface is much bigger than before. This makes the HyperlinkButton easier to reach and improves the user experience. With the background set back to normal, the UI looks exactly the same, as it should.
In summary: Remember to maximize the touch surface for your controls. Plan your design accordingly, reserving enough room around each control so that its hit surface can be expanded as shown in this article. Do not cram too many controls into one page. If REALLY needed, use an additional page (or even better: use a Pivot control with multiple pivot items) for the controls that don't fit on the first one. This should ensure a smoother user experience and improved touch behavior. Happy coding! Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Save the Date - Oracle Partner Community Forum: Exadata, Exalogic and Manageability, Vienna, 23-24 April 2013

    - by Javier Puerta
    Hardware and Software Engineered to Work Together. SAVE THE DATE: ORACLE PARTNER COMMUNITY FORUM: EXADATA, EXALOGIC AND MANAGEABILITY, 23-24 APRIL 2013, VIENNA, AUSTRIA. The 2013 event expands its scope to cover all the building blocks of the Cloud infrastructure: Exadata, Exalogic and Manageability! Dear partner, I am delighted to announce the 2013 edition of the Exadata, Exalogic and Manageability Partner Community Forum for EMEA partners, which will take place in Vienna, Austria, on April 23-24, 2013. After the experience of last year, where we ran a joint Exadata and Manageability event, we received requests from many of you to also add Exalogic to the scope of the forum, and in this way to cover the complete infrastructure architecture on the Exa platform. The continued market adoption of Exadata and Exalogic is being paralleled by growth in the rate of projects sold and implemented by partners. Sharing customer cases and best practices presented by other partners constitutes the core of this event. If you want to present an experience of your company around Exadata, Exalogic or Manageability that can be a learning experience for other partners, we still have some slots in the agenda. (Please contact Javier Puerta if you want to present.) Attending the Community Forum you will also have the opportunity to get Oracle's insight on new products and market trends. And, of course, interact with the Oracle executives responsible for the Exadata, Exalogic and Manageability business. The atmosphere of beautiful Vienna will be the setting for the event. Detailed venue and hotel booking information will be sent to you in January. Don't miss out on attending this key event! Save the date now - 23 & 24 April 2013 - and watch out for your formal invitation coming soon.
Kind regards, Javier Puerta Core Technology Partner Programs, Oracle EMEA E-Mail: [email protected] Jürgen Kress SOA Partner Adoption Oracle EMEA E-Mail: [email protected]

    Read the article

  • Oracle WebCenter: Common User Experience Architecture

    - by kellsey.ruppel(at)oracle.com
    You may remember that the key goals of the new release of WebCenter are providing a Modern User Experience, unparalleled Application Integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. In previous weeks we've provided an overview of Oracle WebCenter and discussed some of the other key goals, and this week we'll focus on how the new release of Oracle WebCenter delivers a Common User Experience Architecture. When Oracle talks about a Common User Experience Architecture, it really focuses on a core set of areas. First, the way that information is accessed needs to be consistent and extensible so that as requirements change, the applications don't need to be rewritten for every change. Second, this information access layer needs to be securely accessible to any application, site, or any other channel that needs to leverage this information. Third, there needs to be a consistent presentation layout, Oracle calls it a UI shell, so that all resources can fit together in a useable, productive way. Fourth, there needs to be a common set of design patterns for how different menus, features, and services fit into this UI shell for broad and productive usability. Fifth, there needs to be a set of design patterns for the individual services that plug into this UI shell so that end users can move from one module of the application to another without new learning. Finally, all of these layers need to be customizable in an easy way that insulates IT from patching and upgrading problems and allows the business owners the agility to quickly change with the market conditions. As Oracle has already announced, we will release our next generation of enterprise applications called Oracle Fusion Applications. We have thousands of developers building these applications, who all had different programming tool experience and UI design experience. We've educated over 6,000 developers building Oracle Fusion Applications to leverage these Common User Experience Architecture patterns to speed their learning curve of the new Java standards as well as SOA principles to deliver a revolutionary new set of applications. You could imagine the big challenge of getting all these developers, with different backgrounds and different UI design skills, to deliver a completely integrated application user experience. This is why Oracle invested heavily in designing this Common User Experience Architecture, based on Oracle WebCenter and the Oracle Application Development Framework (ADF). It pulls together the best practices and design patterns that Oracle development required in order to bring Fusion Applications to market, and Oracle WebCenter is the user experience layer that all of this is surfaced through. In this way, customers can quickly brand a deployment for new partnerships without having to redevelop a new site. Or they can quickly add new options to the UI shell to enable their line-of-business managers to quickly adapt to a new competitive product. And with the core integration of activities to produce a Business Activity Stream, customers are able to stay on top of all their key business actions as they happen, and more importantly, the system can recommend actions or resources to help act on these activities. And we've authored this whole set of design patterns for Oracle development to take advantage of in delivering Fusion Applications.
We're also applying these design patterns to our existing E-Business Suite, PeopleSoft, Siebel, and JD Edwards applications so that they can tie in exactly the same way that Fusion Applications has been brought together. This will provide customers with a complete Common User Experience Architecture for their entire ecosystem of applications within their enterprise, whether they are from Oracle, another vendor, or custom-built applications. And this is all provided in the new release of Oracle WebCenter. These design patterns cover elements around delivering a complete, aggregated menu of all the capabilities that a user's role allows, independent of which application they are trying to access. It means that as they move from one application to another, they will have a consistent user experience. And if they are using an Oracle application, any customizations that are made to the application are preserved and managed through upgrades and patches. Be sure to check back this week as we share more information and resources on Oracle's Common User Experience Architecture.

    Read the article

  • Java EE 6 Pocket Guide from O'Reilly - Now Available in Paperback and Kindle Edition

    - by arungupta
    Hot off the press ... Java EE 6 Pocket Guide from O'Reilly Media is now available in Paperback and Kindle Edition. Here are the book details: Release Date: Sep 21, 2012; Language: English; Pages: 208; Print ISBN: 978-1-4493-3668-4 | ISBN 10: 1-4493-3668-X; Ebook ISBN: 978-1-4493-3667-7 | ISBN 10: 1-4493-3667-1. The book provides a comprehensive summary of the Java EE 6 platform. Main features of the different technologies from the platform are explained and accompanied by tons of samples. A chapter is dedicated to Managed Beans, Servlets, Java Persistence API, Enterprise JavaBeans, Contexts and Dependency Injection, JavaServer Faces, SOAP-Based Web Services, RESTful Web Services, Java Message Service, and Bean Validation, in that order. Many thanks to Markus Eisele, John Yeary, and Bert Ertman for reviewing and providing valuable comments. This book would not have been possible without their extensive feedback! This book was mostly written by compiling my blogs, material from 2-day workshops, and several hands-on workshops around the world. The interactions with users of different technologies and whiteboard discussions with different specification leads helped me understand the technology better. Many thanks to them for helping me be a better user! The long international flights during my travel around the world proved extremely useful for authoring the content. No phone, no email, no IM, food served on the table, power outlet = a perfect recipe for authoring ;-) Markus wrote a detailed review of the book. He was one of the manuscript reviewers of the book as well and provided valuable guidance. Some excerpts from his blog: "It covers the basics you need to know of Java EE 6 and gives good examples of all relevant parts. ... This is a pocket guide which is comprehensively written. I could follow all examples and it was a good read overall. No complicated constructs and clear writing. ... GO GET IT! It is the only book you probably will need about Java EE 6! It is comprehensive, wonderfully written and covers everything you need in your daily work. It is not a complete reference but provides a great shortcut to the things you need to know. To me it is a good beginners guide and also works as a companion for advanced users." Here is the first tweet feedback ... Jeff West was super prompt to place the first pre-order of my book, pretty much the hour it was announced. Thank you Jeff! @mike_neck posted the very first tweet about the book, thanks for that! The book is now available in Paperback and Kindle Edition from the following websites: O'Reilly Media (Ebook, Print & Ebook, Print), Amazon.com (Kindle Edition and Paperback), Barnes and Noble, Overstock (1% off Amazon), Buy.com, Booktopia.com, Tower Books, Angus & Robertson, Shopping.com. Here is how I can use your help: Help spread the word about the book. If you bought a Paperback or downloaded the Kindle Edition, then post your review here. If you have not bought it, then you can buy it at amazon.com and the multiple other websites mentioned above. If you are coming to JavaOne, you'll have an opportunity to get a free copy at O'Reilly's booth on Monday (October 1) from 2-3pm. And you can always buy it from the JavaOne Bookstore. I hope you enjoy reading it and learn something new from it or hone your existing skills. As always, looking forward to your feedback!

    Read the article

  • Why Your ERP System Isn't Ready for the Next Evolution of the Enterprise

    - by [email protected]
    By ken.pulverman on March 24, 2010 8:51 AM. ERP has been the backbone of enterprise software. The data held in your ERP system is the core of most companies. Efficiencies gained through accounting and resource allocation via ERP software have literally saved companies trillions of dollars. Not only does everything seem to be fine with your ERP system, you haven't had to touch it in years. Why aren't you ready for what comes next? Well, judging by the growth rates in the space (Oracle posted only a 3% growth rate, while SAP showed a 12% decline) there hasn't been much modernization going on, just a little replacement activity. If you are like most companies, your ERP system is connected to a proprietary middleware solution that only effectively talks with a handful of other systems you might have acquired from the same vendor. Connecting your legacy system through proprietary middleware is expensive and brittle, and if you are like most companies, you were only willing to pay an SI so much before you said "enough." So your ERP is working. It's humming along. You might not be able to get Order to Promise information when you take orders in your call center, but there are workarounds that work just fine. So what's the problem? The problem is that you built your business around your ERP core, and now there is such pressure to innovate your business processes to keep up that you need a whole new slew of modern apps and you need ERP data to be accessible from everywhere. Every time you change a sales territory or a comp plan or change a benefits provider, your ERP system, literally the economic brain of your business, needs to know what's going on. And this giant need to access and provide information to your ERP is only growing. What makes matters even more challenging is that apps today come in every flavor under the Sun™. SaaS, cloud, managed, hybrid, outsourced, composite... and they all have different integration protocols. The only easy way to get ahead of all this is to modernize the way you connect and run your applications. Unlike the middleware solutions of yesteryear, modern middleware is effectively the operating system of the enterprise. In the same way that you rely on Apple, Microsoft, and Google to find a video driver for your 23" monitor or to ensure that Word or Keynote runs, modern middleware takes care of intra-application connectivity and process execution. It effectively allows you to take ERP out of the middle while ensuring connectivity to your vital data for anything you want to do. The diagram below reflects that change. In this model, the hegemony of ERP is over. It too has to become a stealthy modern app to help you quickly adapt to business changes while managing vital information. And through modern middleware it will connect to everything. So yes, ERP as we've known it is dead, but long live ERP as a connected application member of the modern enterprise. I want to thank Andrew Zoldan, Group Vice President, Oracle Manufacturing Industries Business Unit, for introducing me to how some of his biggest customers have benefited by modernizing their applications infrastructure and making ERP a connected application. by John Burke, Group Vice President, Applications Business Unit

    Read the article

  • SQL SERVER – Shard No More – An Innovative Look at Distributed Peer-to-peer SQL Database

    - by pinaldave
    There is no doubt that SQL databases play an important role in modern applications. In an ideal world, a single database can handle hundreds of incoming connections from multiple clients and scale to accommodate the related transactions. However, the world is not ideal, and databases are often a cause of major headaches when applications need to scale to accommodate more connections, transactions, or both. In order to overcome scaling issues, application developers often resort to administrative acrobatics, also known as database sharding. Sharding helps to improve application performance and throughput by splitting the database into two or more shards. Unfortunately, this practice also requires application developers to code transactional consistency into their applications. Getting transactional consistency across multiple SQL database shards can prove to be very difficult. Sharding requires developers to think about things like rollbacks, constraints, and referential integrity across tables within their applications, when these types of concerns are best handled by the database. It also makes other common operations such as joins, searches, and memory management very difficult. In short, the very solution implemented to overcome throughput issues becomes a bottleneck in and of itself. What if database sharding was no longer required to scale your application? Let me explain. For the past several months I have been following and writing about NuoDB, a hot new SQL database technology out of Cambridge, MA. NuoDB is officially out of beta and they have recently released their first release candidate, so I decided to dig into the database in a little more detail. Their architecture is very interesting and exciting because it completely eliminates the need to shard a database to achieve higher throughput. Each NuoDB database consists of three or more processes that enable a single database to run across multiple hosts. These processes include a Broker, a Transaction Engine and a Storage Manager. Brokers are responsible for connecting client applications to Transaction Engines and maintain a global view of the network to keep track of the multiple Transaction Engines available at any time. Transaction Engines are in-memory processes that client applications connect to for processing SQL transactions. Storage Managers are responsible for persisting data to disk and serving up records to the Transaction Engines if they don't exist in memory. The secret to NuoDB's approach to solving the sharding problem is that it is a truly distributed, peer-to-peer SQL database. Each of its processes can be deployed across multiple hosts. When client applications need to connect to a Transaction Engine, the Broker will automatically route the request to the most available process. Since multiple Transaction Engines and Storage Managers running across multiple host machines represent a single logical database, you never have to resort to sharding to get the throughput your application requires. NuoDB is a new pioneer in the SQL database world. They are making database scalability simple by eliminating the need for acrobatics such as sharding, and they are also making general administration of the database simpler as well. Their distributed database appears to you, as a user, like a single SQL Server database. With their RC1 release they have also provided a web-based administrative console that they call NuoConsole.
This tool makes it extremely easy to deploy and manage NuoDB processes across one or multiple hosts with the click of a mouse button. See for yourself by downloading NuoDB here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: NuoDB
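To make the sharding pain described above concrete, here is a minimal sketch (plain C#, unrelated to NuoDB or any particular driver) of the kind of hash-based routing logic an application ends up owning once a database is split into shards; the connection strings and the customer-id key are hypothetical placeholders. Every query, join and transaction that spans customers on different shards then has to be coordinated by this application code rather than by the database.

using System;
using System.Collections.Generic;

class ShardRouter
{
    private readonly List<string> _shardConnectionStrings;

    public ShardRouter(IEnumerable<string> shardConnectionStrings)
    {
        _shardConnectionStrings = new List<string>(shardConnectionStrings);
    }

    // Route a customer id to one shard. Every data access that touches this
    // customer must now go through the same routing logic.
    public string ConnectionStringFor(int customerId)
    {
        int shard = Math.Abs(customerId.GetHashCode()) % _shardConnectionStrings.Count;
        return _shardConnectionStrings[shard];
    }
}

class Demo
{
    static void Main()
    {
        // Hypothetical connection strings - placeholders only.
        var router = new ShardRouter(new[]
        {
            "Server=shard0;Database=app",
            "Server=shard1;Database=app"
        });

        Console.WriteLine(router.ConnectionStringFor(42));
        // A report that joins customers living on different shards can no longer be
        // a single SQL statement - the application has to merge the results itself.
    }
}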

    Read the article

  • Creating shapes on the fly

    - by Bertrand Le Roy
    Most Orchard shapes get created from part drivers, but they are a lot more versatile than that. They can actually be created from pretty much anywhere, including from templates. One example can be found in the Layout.cshtml file of the TheThemeMachine theme: WorkContext.Layout.Footer.Add(New.BadgeOfHonor(), "5"); What this is really doing is creating a new shape called BadgeOfHonor and injecting it into the Footer global zone (that has not yet been defined, which in itself is quite awesome) with an ordering rank of "5". We can actually come up with something simpler, if we want to render the shape inline instead of sending it into a zone: @Display(New.BadgeOfHonor()) Now let's try something a little more elaborate and create a new shape for displaying a date and time: @Display(New.DateTime(date: DateTime.Now, format: "d/M/yyyy")) For the moment, this throws a "Shape type DateTime not found" exception because the system has no clue how to render a shape called "DateTime" yet. The BadgeOfHonor shape above was rendering something because there is a template for it in the theme: Themes/TheThemeMachine/Views/BadgeOfHonor.cshtml. We need to provide a template for our new shape to get rendered. Let's add a DateTime.cshtml file into our theme's Views folder in order to make the exception go away: Hi, I'm a date time shape. Now we're just missing one thing. Instead of displaying some static text, which is not very interesting, we can display the actual time that got passed into the shape's dynamic constructor. Those parameters will get added to the template's Model, so they are easy to retrieve: @(((DateTime)Model.date).ToString(Model.format)) Now that may remind you a little of WebForms user controls. That's a fair comparison, except that these shapes are much more flexible (you can add properties on the fly as necessary), and that the actual rendering is decoupled from the "control". For example, any theme can override the template for a shape, you can use alternates, wrappers, etc. Most importantly, there is no lifecycle and protocol abstraction like there was in WebForms. I think this is a real improvement over previous attempts at similar things.

    Read the article

  • Combining template method with strategy

    - by Mekswoll
    An assignment in my software engineering class is to design an application which can play different forms of a particular game. The game in question is Mancala; some of these games are called Wari or Kalah. These games differ in some aspects, but for my question it's only important to know that the games could differ in the following: the way in which the result of a move is handled; the way in which the end of the game is determined; and the way in which the winner is determined. The first thing that came to my mind to design this was to use the strategy pattern, since I have a variation in algorithms (the actual rules of the game). The design could look like this: I then thought to myself that in the game of Mancala and Wari the way the winner is determined is exactly the same, and the code would be duplicated. I don't think this is by definition a violation of the 'one rule, one place' or DRY principle, seeing as a change in rules for Mancala wouldn't automatically mean that rule should be changed in Wari as well. Nevertheless, from the feedback I got from my professor I got the impression that I should find a different design. I then came up with this: each game (Mancala, Wari, Kalah, ...) would just have an attribute of the type of each rule's interface, i.e. WinnerDeterminer, and if there's a Mancala 2.0 version which is the same as Mancala 1.0 except for how the winner is determined, it can just use the Mancala versions. I think the implementation of these rules as a strategy pattern is certainly valid. But the real problem comes when I want to design it further. In reading about the template method pattern I immediately thought it could be applied to this problem. The actions that are done when a user makes a move are always the same, and in the same order, namely: 1) deposit stones in holes (this is the same for all games, so it would be implemented in the template method itself); 2) determine the result of the move; 3) determine if the game has finished because of the previous move; 4) if the game has finished, determine who has won. Those last three steps are all in my strategy pattern described above. I'm having a lot of trouble combining these two. One possible solution I found would be to abandon the strategy pattern and do the following: I don't really see the design difference between the strategy pattern and this. But I am certain I need to use a template method (although I was just as sure about having to use a strategy pattern). I also can't determine who would be responsible for creating the TurnTemplate object, whereas with the strategy pattern I feel I have families of objects (the three rules) which I could easily create using an abstract factory pattern. I would then have a MancalaRuleFactory, WariRuleFactory, etc., and they would create the correct instances of the rules and hand me back a RuleSet object. Let's say that I use the strategy + abstract factory pattern and I have a RuleSet object which has algorithms for the three rules in it. The only way I feel I can still use the template method pattern with this is to pass this RuleSet object to my TurnTemplate. The 'problem' that then surfaces is that I would never need my concrete implementations of the TurnTemplate; these classes would become obsolete. In my protected methods in the TurnTemplate I could just call ruleSet.determineWinner(). As a consequence, the TurnTemplate class would no longer be abstract but would have to become concrete; is it then still a template method pattern? To summarize, am I thinking in the right way or am I missing something easy?
If I'm on the right track, how do I combine a strategy pattern and a template method pattern? This is part of a homework assignment, but I'm not looking to be gifted the answer; I have deliberately been very verbose in my question to show that I have thought about it before coming here to ask.
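For illustration only (not a finished answer to the assignment), here is a minimal C# sketch of the combination being asked about, following the RuleSet idea above: the turn skeleton is a template method whose variable steps delegate to the three strategies carried by an injected RuleSet, which an abstract factory per game variant (MancalaRuleFactory, WariRuleFactory, ...) could construct. All type and member names are invented for the sketch; and as the question itself suggests, once the strategies are injected the template class can indeed become concrete, because nothing is left for subclasses to override.

using System;

interface IMoveResultRule { void ApplyResult(Board board, Move move); }
interface IGameEndRule    { bool IsGameOver(Board board); }
interface IWinnerRule     { Player DetermineWinner(Board board); }

// The three strategies travel together; a per-game abstract factory could build this object.
sealed class RuleSet
{
    public IMoveResultRule MoveResult { get; }
    public IGameEndRule GameEnd { get; }
    public IWinnerRule Winner { get; }

    public RuleSet(IMoveResultRule moveResult, IGameEndRule gameEnd, IWinnerRule winner)
    {
        MoveResult = moveResult;
        GameEnd = gameEnd;
        Winner = winner;
    }
}

class Turn
{
    private readonly RuleSet _rules;
    public Turn(RuleSet rules) { _rules = rules; }

    // Template method: the skeleton of a turn is fixed, while the variable steps
    // are delegated to the strategies held by the injected RuleSet.
    public void Play(Board board, Move move)
    {
        SowStones(board, move);                       // step 1: identical in every game variant
        _rules.MoveResult.ApplyResult(board, move);   // step 2: varies per game
        if (_rules.GameEnd.IsGameOver(board))         // step 3: varies per game
        {
            Player winner = _rules.Winner.DetermineWinner(board);  // step 4: varies per game
            Console.WriteLine("Winner: " + winner);
        }
    }

    private void SowStones(Board board, Move move)
    {
        // Deposit stones in holes - the same for all games, so it lives in the template itself.
    }
}

// Minimal placeholder types so the sketch is self-contained.
class Board { }
class Move { }
class Player { }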

    Read the article

< Previous Page | 622 623 624 625 626 627 628 629 630 631 632 633  | Next Page >