Search Results

Search found 5298 results on 212 pages for 'automated deploy'.


  • Getting a Database into Source Control

    - by Grant Fritchey
    For any number of reasons, from simple auditing, to change tracking, to automated deployment, to integration with application development processes, you’re going to want to place your database into source control. Using Red Gate SQL Source Control, this process is extremely simple. SQL Source Control works within your SQL Server Management Studio (SSMS) interface. This means you can work with your databases in any way that you’re used to working with them. If you prefer scripts to using the GUI, not a problem. If you prefer using the GUI to having to learn T-SQL, again, that’s fine. After installing SQL Source Control, this is what you’ll see when you open SSMS: SQL Source Control is now a direct piece of the SSMS environment. The key point initially is that I currently don’t have a database selected. You can even see that in the SQL Source Control window where it shows, in red, “No database selected – select a database in Object Explorer.” If I expand my Databases list in the Object Explorer, you’ll be able to immediately see which databases have been integrated with source control and which have not. There are visible differences between the databases, as you can see here: To add a database to source control, I first have to select it. For this example, I’m going to add the AdventureWorks2012 database to an instance of the SVN source control software (I’m using uberSVN). When I click on the AdventureWorks2012 database, the SQL Source Control screen changes: I’m going to need to click on the “Link database to source control” text, which will open up a window for connecting this database to the source control system of my choice. You can pick from the default source control systems on the left, or define one of your own. I also have to provide the connection string for the location within the source control system where I’ll be storing my database code. I set these up in advance. You’ll need two: one for the main set of scripts, and one for special scripts called Migrations that deal with different kinds of changes between versions of the code. Migrations help you solve problems like having to create or modify data in columns as part of a structural change. I’ll talk more about them another day. Finally, I have to determine if this is an isolated environment that I’m going to be the only one using (a dedicated database), or if I’m sharing the database in a shared environment with other developers (a shared database). The main difference is that, under a dedicated database, I will need to regularly get any changes that other developers have made from source control and integrate them into my database, while under a shared database, all changes for all developers are made at the same time, which means you could commit other people’s work without proper testing. It all depends on the type of environment you work within. But, when it’s all set, it will look like this: SQL Source Control will compare the results between the empty folders in source control and the database, AdventureWorks2012. You’ll get a report showing exactly the list of differences, and you can choose which ones will get checked into source control. Each of the database objects is scripted individually. You’ll be able to modify them later in the same way. Here’s the list of differences for my new database: You can select/deselect all the objects or each object individually. You also get a report showing the differences between what’s in the database and what’s in source control.
If there was already a database in source control, you’d only see changes to database objects rather than every single object. You can see that the database objects can be sorted by name, by type, or other choices. I’m going to add a comment such as “Initial creation of database in source control.” And then click on the Commit button which will put all the objects in my database into the source control system. That’s all it takes to get the objects into source control initially. Now is when things can get fun with breaking changes to code, automated deployments, unit testing and all the rest.

    Read the article

  • The Next Wave of PeopleSoft Capabilities for the Staffing Industry Is Here

    - by Mark Rosenberg
    With the release of PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 in January this year, we introduced substantial new capabilities for our Staffing Industry customers. Through a co-development project with Infosys Limited, we have enriched Oracle's PeopleSoft Staffing Solution with new tools aimed at accelerating and improving the quality of job order fulfillment, increasing branch recruiter productivity, and driving profitable growth. Staffing industry firms succeed based on their ability to rapidly, cost-effectively, and continually fill their pipelines with new clients and job orders, recruit the best talent, and match orders with talent. Pressure to execute in each of these functional areas is even more acute on staffing firms as contingent labor becomes a more substantial and permanent part of the workforce mix. In an industry that creates value through speedy execution, there is little room for manual, inefficient processes and brittle, custom integrations, which throttle profitability and growth. The latest wave of investment in the PeopleSoft Staffing Solution focuses on generating efficiency and flexibility for our customers. Simplicity To operate profitably and continue growing, a Staffing enterprise needs its client management, recruiting, order fulfillment, and other processes to function in harmony. Most importantly, they need to be simple for recruiters, branch managers, and applicants to access and understand. The latest PeopleSoft Staffing Solution set of enhancements includes numerous automated defaulting mechanisms and information-rich dashboard pagelets that even a new employee can learn quickly. Pending Applicant, Agenda management, Search, and other pagelets are just a few of the newest, easy-to-use tools that not only aggregate and summarize information, but also provide instant access to applicants, tasks, and key reports for branch staff. Productivity The leading firms in the Staffing industry are those that can more efficiently orchestrate large numbers of candidates, clients, and orders than their competitors can. PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 delivers productivity boosters that Staffing firms can leverage to streamline tasks and processes for competitive advantage. For example, we enhanced the Recruiting Funnel, which manages the candidate on-boarding process, with a highly interactive user interface. It integrates disparate Staffing business processes and exploits new PeopleTools technologies to offer a superior on-boarding user experience. Automated creation of agenda items and assignment tasks for each candidate minimizes setup and organizes assignment steps for the on-boarding process. Mass updates of tasks and instant access to the candidate overview page (which we also expanded), candidate event status, event counts, and other key data enable recruiters to better serve clients and candidates. Lower TCO Constructing and maintaining an efficient yet flexible labor supply chain can be complicated, let alone expensive. Traditionally, Staffing firms have been challenged in controlling their technology cost of ownership because connecting candidate and client-facing tools involved building and integrating custom applications and technologies and managing staff turnover, placing heavy demands on IT and support staff. With PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2, there are two major enhancements that aggressively tackle these challenges. 
First, we added another integration framework to enable cost-effective linking of the Staffing firm’s PeopleSoft applications and its job board distributors. (The first PeopleSoft 9.1 Feature Pack released in March 2011 delivered an integration framework to connect to resume parsing providers.) Second, we introduced the teaming concept to enable work to be partitioned to groups, as well as individuals. These two capabilities, combined with a host of others, position Staffing firms to configure and grow their businesses without growing their IT and overhead expenditures. For our Staffing Industry customers, PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 is loaded with high-value tools aimed at enabling and sustaining a flexible labor supply chain. For more information, contact [email protected] or [email protected].

    Read the article

  • Silverlight Cream for February 10, 2011 -- #1045

    - by Dave Campbell
    In this Issue: Mark Monster, Jaime Rodriguez, Mark Hopkins, WindowsPhoneGeek, David Anson, Jesse Liberty, Jeremy Likness, Martin Krüger(-2-), Beth Massi, Joost van Schaik, Laurent Bugnion, and Arik Poznanski. Above the Fold: Silverlight: "Parsing the Visual Tree with LINQ" Jeremy Likness WP7: "Silverlight-ready PNG encoder implementation shows one way to use .NET IEnumerables effectively" David Anson Lightswitch: "How to Send Automated Appointments from a LightSwitch Application" Beth Massi Shoutouts: Be sure to visit SilverlightShow... check out their top hits last week: SilverlightShow for Jan 31- Feb 06, 2011 Jaime Rodriguez has a post up that all the WP7 folks will be interested in: FAQ about copy paste functionality in upcoming release From SilverlightCream.com: Make use of WCF FaultContracts in Silverlight clients Mark Monster takes a shot at answering “The remote server returned an error: NotFound” while connecting to a WCF Service problem we all see. Communication between HTML in WebBrowser and Silverlight app Jaime Rodriguez responds to questions he received about communication between HTML and SIlverlight with this post about the bi-directional communication between the control and HTML. WP7 - Real Apps, Real Code Mark Hopkins has a post up about some WP7 starter kits that you can get all the source for and actually download the app from the Marketplace first to see if it interests you! WP7 AboutPrompt in depth WindowsPhoneGeek has this cool post up about the AboutPrompt from the Coding4Fun toolkit in detail... great diagrams showing where all the elements are and code examples with images. Silverlight-ready PNG encoder implementation shows one way to use .NET IEnumerables effectively David Anson describes why he took it upon himself to write his own png encoder for Silverlight... and we all thank him for doing so and providing us with the code! Navigation 101–Cancelling Navigation Jesse Liberty's latest WP7 From Scratch episode is up (number 32), and he's talking about Navigation and how to cancel it if you need to. Parsing the Visual Tree with LINQ Jeremy Likness demonstrates using LINQ to rat out information in the visual tree of your XAML. To Quote Jeremy: "you can easily check for intersections between elements and find any type of element no matter how deep within the tree it is". SpriteAnimationBehavior Martin Krüger has a couple more fun things in the Expression Gallery that I haven't discussed. First up is a behavior that animates up to 999 images and lets you control the FramesPerSecond... great demo on the ExpressionGallery to play with. Second alternative: Storyboard should not start before the Silverlight application is loaded Martin Krüger's latest is a way to programmatically wait for the Loaded event so that you know you can let your animations fly. How to Send Automated Appointments from a LightSwitch Application Beth Massi's latest Lightswitch post follows up her Outlook automation one with sending appointments using the standard iCalendar format... all the code included of course. The case for the Bindable Application Bar for Windows Phone 7 Joost van Schaik posts about a bindable Application Bar for your WP7 apps... grab the code and don't leave home without it :) MVVM Light V4 preview (BL0014) release notes Laurent Bugnion posted an update to MVVMLight to Codeplex a couple days ago. This is an early preview of what he plans on having in version 4, so check out the post for what's new and fun. 
Search Digg on Windows Phone 7 Arik Poznanski followed up his RSS post from last week with this one on searching Digg on WP7... and he's discussing and providing a utility class for doing it. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • top Tweets SOA Partner Community – June 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity Simone Geib Contact me directly for ideas how to improve http://bit.ly/advancedsoasuite and additional posts, presentations, white papers, #soasuite SOA CommunitySOA Community Newsletter May 2012 https://soacommunity.wordpress.com /2012/05/28/soa-community-newsletter-may-2012/ #soacommunity Simone Geib #soasuite advanced OTN page has become too cluttered. Broke it into separate pages to start with. http://bit.ly/advancedsoasuite SOA CommunitySOA Management with Enterprise Manager Cloud Control 12c and Business Transaction Management 12c Demo https://soacommunity.wordpress.com /2012/05/21/soa-management-with-enterprise-manager-cloud-control-12c-and-business-transaction-management-12c-demo/ #soacommunity OracleBlogs June Webcast: SOA Gateway Implementation and Troubleshooting (2 sessions) http://ow.ly/1kbRFA OTNArchBeatEvery cloud needs an SOA lining: analyst | @JoeMcKendrick http://zd.net/KTgMHk ServiceTechSymposium New session just posted to calendar: "NoSQL for Data Services, Data Virtualization & Big Data" by Guido Schmutz, Trivadis AG ://ow.ly/bjjOe OTNArchBeat?Every cloud needs an SOA lining: analyst | @JoeMcKendrick http://zd.net/KTgMHk Debra Lilley looks good - real proof people are using the apps ! RT @fteter:Very cool Fusion Applications Help site: http://bit.ly/L3nvOR #FusionApps OTNArchBeat How to Set JVM Parameters in Oracle SOA 11G | Francis Ip http://bit.ly/JBDYPj demed"rapid proliferation of cloud computing will drive convergence of SOA and cloud paradigms" http://ovum.com/2012/05/18/soa-paves-the-way-for-cloud/ SOA Community Sending out invitations to our advanced Fusion Middleware Summer Camps! Want to learn more register for the community http://www.oracle.com/goto/emea/soa SOA Community Middleware Oracle Excellence Awards 2012 - HAPPY NEW YEAR! https://soacommunity.wordpress.com/ 2012/05/31/middleware-oracle-excellence-awards-2012 happy-new-year/ #soacommunity #opn #opnaward #specialization #oracle Simone Geib #oraclesoa performance tuning resources. All in one: docs, blogs, WPs, ppts: http://bit.ly/soa_resources OracleBlogs Middleware Oracle Excellence Awards 2012 - HAPPY NEW YEAR! http://ow.ly/1k9ri0 ServiceTechSymposiumNew session just posted to Symposium calendar: "Service Modeling & BPM Business Value Patterns" by Jürgen Kress, Oracle http://www.servicetechsymposium.com/ agenda2012.php #service_modeling_and_bpm _business_value_patterns SOA Community Happy New Year #soacommunity thanks for the business! Time for a drink ;-) http://pic.twitter.com/zkK08KWB Jan van ZoggelUsing execute-sql() function for Name-Value pair lookups in Oracle Service Bus http://wp.me/p1H430-jZ SOA Community Middleware Oracle Excellence Awards 2012&ndash;HAPPY NEW YEAR! http://wp.me/p10C8u-q4 orclateamsoa A-Team Blog #ateam: BPM 11g Deployment & Instance Migration - I have seen a number of request lately asking how to http://ow.ly/1jZ0h8 OTNArchBeat Who should ‘own’ the Enterprise Architecture? | Michael Glas http://bit.ly/K0ge0Q Oracle UPK & Tutor TOMORROW! (June 23rd) - UPK Professional Webinar at Noon ET: Discover why user adoption is a key factor for the http://bit.ly/LjZjdx Sabine Leitner Finance Event im Design-Hotel beim Barbeque: 21. 
Juni FRA mit Kunden SV Informatik, Schufa, LBBW http://bit.ly/JtwE3v #Oracle @itevent OracleEnterpriseMgr SOA Management with Enterprise Manager Cloud Control 12c and Business Transaction Management 12c Demo http://ow.ly/b3WP1 #em12c ServiceTechSymposium New session just posted to Symposium calendar: "Elastic SOA in the Cloud" by Steve Millidge, C2B2 Consulting http://www.servicetechsymposium.com /agenda2012.php #elastic_soa_in_the_cloud OTNArchBeat Securing Heterogeneous Systems Using Oracle Web Services Manager by @rluttikhuizen & Jens Peters http://bit.ly/KjShFi Oracleteamsoa A-Team Blog #ateam: How to Set JVM Parameters in Oracle SOA 11G http://ow.ly/1k2cnl SOA Community Oracle Service Registry in an automated (Maven) SOA/BPM build http://redstack.wordpress.com /2012/05/22/using-oracle-service-registry-in-an-automated-maven-soabpm-build/ #soacommunity #redstack #soa #osr #opn SOA CommunityHigh demand for advanced Fusion Middleware Summer Camps! Want to learn more register for the #soacommunity http://www.oracle.com/goto/emea/soa OracleBlogs? How to Set JVM Parameters in Oracle SOA 11G http://ow.ly/1k1UTv SOA Community top Tweets SOA Partner Community &ndash; May 2012 http://wp.me/p10C8u-pP ServiceTechSymposium New session just posted to Symposium calendar: "SOA Governance at EDP: A Global Energy Company" by Manuel Rosa, Link http://www.servicetechsymposium.com/ agenda2012.php #soa_governance_at_edp For regular information on Oracle SOA Suite become a member in the SOA Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Technorati Tags: soacommunity,twitter,Oracle,SOA Community,Jürgen Kress,OPN,SOA,BPM

    Read the article

  • Upgrading from MVC 1.0 to MVC2 in Visual Studio 2010 and VS2008.

    - by Sam Abraham
    With MVC2 officially released, I was involved in a few conversations regarding the feasibility of upgrading existing MVC 1.0 projects to quickly leverage the newly introduced MVC features. Luckily, Microsoft has proactively addressed this question for both Visual Studio 2008 and 2010 and many online resources discussing the upgrade process are a "Bing/Google Search" away. As I will happen to be speaking about MVC2 and Visual Studio 2010 at the Ft Lauderdale ArcSig .Net User Group Meeting on April 20th 2010 (Check http://www.fladotnet.com for more info.), I decided to include a quick demo on upgrading the NerdDinner project (which I consider the "Hello MVC World" project) from MVC 1.0 to MVC2 using Visual studio 2010 to demonstrate how simple the upgrade process is. In the next few lines, I will be briefly touching on upgrading to MVC2 for Visual Studio 2008 then discussing, in more detail, the upgrade process using Visual Studio 2010 while highlighting the advantage of its multi-targeting support. Using Visual Studio 2008 SP1 For upgrading to MVC2 Using VS2008 SP1, a Microsoft White Paper [1] presents two approaches:  1- Using a provided automated upgrade tool, 2-Manually upgrading the project. I personally prefer using the automated tool although it comes with an "AS IS" disclaimer. For those brave souls, or those who end up with no luck using the tool, detailed manual upgrade steps are also provided as a second option. Backing up the project in question is a must regardless of which route one would take to upgrade. Using Visual Studio 2010 Life is much easier for developers who already adopted Visual Studio 2010. Simply opening the MVC 1.0 solution file brings up the upgrade wizard as shown in figures 1, 2, 3 and 4. As we proceed with the upgrade process, the wizard requests confirmation on whether we choose to upgrade our target framework version to .Net 4.0 or keep the existing .Net 3.5 (Figure 5). VS2010 does a good job with multi-targeting where we can still develop .Net 3.5 applications while leveraging all the new bells and whistles that VS2010 brings to the table (Multi-targeting enables us to develop with as early as .Net 2.0 in VS2010) Figure 1 - Open Solution File Using VS2010   Figure 2 - VS2010 Conversion Wizard Figure 3- Ready To Convert To VS2010 Confirmation Screen Figure 4 - VS2010 Solution Conversion Progress Figure 5 - Confirm Target Framework Upgrade In an attempt to make my demonstration realistic, I decided to opt to keep the project targeted to the .Net 3.5 Framework.  After the successful completion of the conversion process,  a quick sanity check revealed that the NerdDinner project is still targeted to the .Net 3.5 framework as shown in figure 6. Inspecting the Web.Config revealed that the MVC DLL version our code compiles against has been successfully upgraded to 2.0 (Figure 7) and hence we should now be able to leverage the newly introduced features in MVC2 and VS2010 with no effort or time invested on modifying existing code. Figure 6- Confirm Target Framework Remained .Net 3.5  Figure 7 - Confirm MVC DLL Version Has Been Upgraded In Conclusion, Microsoft has empowered developers with the tools necessary to quickly and seamlessly upgrade their MVC solutions to the newly released MVC2. The multi-targeting feature in Visual Studio 2010 enables us to adopt this latest and greatest development tool while supporting development in as early as .Net 2.0. References 1. 
"Upgrading an ASP.NET MVC 1.0 Application to ASP.NET MVC 2" http://www.asp.net/learn/whitepapers/aspnet-mvc2-upgrade-notes

    Read the article

  • First Day of Data Integration Track at Oracle OpenWorld 2012

    - by Irem Radzik
    OpenWorld started full speed for us today with a great set of sessions in the Data Integration track. After the exciting keynote session on Oracle Database 12c in the morning; Brad Adelberg, VP of Development for Data Integration products, presented Oracle’s data integration product strategy. His session highlighted the new requirements for data integration to achieve pervasive and continuous access to trusted data. The new requirements and product focus areas presented in this session are: Provide access to any data at any source On premise or on cloud Enable zero downtime operations and maximum performance Leverage real-time data for accurate business insights And ensure high quality data is used across the enterprise During the session Brad walked over how Oracle’s data integration products, Oracle Data Integrator, Oracle GoldenGate, Oracle Enterprise Data Quality, and Oracle Data Service Integrator, deliver on these requirements and how recent product releases build on this strategy. Soon after Brad’s session we heard from a panel of Oracle GoldenGate customers, St. Jude Medical, Equifax, and Bank of America, how they achieved zero downtime operations using Oracle GoldenGate. The panel presented different use cases of GoldenGate, from Active-Active replication to offloading reporting. Especially St. Jude Medical’s implementation, which involves the alert management system for patients that use their pacemakers, reminded me in some cases downtime of mission-critical systems can be a matter of life or death. It is very comforting to hear that GoldenGate delivers highly-reliable continuous availability for life-saving medical systems. In the afternoon, Nick Wagner from the Product Management team and I followed the customer panel with the review of Oracle GoldenGate 11gR2’s New Features.  Many questions we received from audience were about GoldenGate’s new Integrated Capture for Oracle Database and the enhanced Conflict Management features, as well as how GoldenGate compares to Oracle Streams. In addition to giving details on GoldenGate’s unique capability to capture changed data with a direct integration to the Oracle DBMS engine, we reminded the audience that enhancements to Oracle GoldenGate will continue, while Streams will be primarily maintained. Last but not least, Tim Garrod and Ryan Fonnett from Raymond James presented a unified real-time data integration solution using Oracle Data Integrator and GoldenGate for their operational data store (ODS). The ODS supports application services across the enterprise and providing timely data is a critical requirement. In this solution, Oracle GoldenGate does the log-based change data capture for Oracle Data Integrator’s near real-time data integration between heterogeneous systems. As Raymond James’ ODS supports mission-critical services for their advisors, the project team had to set up this integration environment to be highly available. During the session, Ryan and Tim explained how they use ODI to enable automated process execution and “always-on” integration processes. Their presentation included 2 demonstrations that focused on CDC patterns deployed with ODI and the automated multi-instance execution and monitoring. We are very grateful to Tim and Ryan for their very-well prepared presentation at OpenWorld this year. Day 2 (Tuesday) will be also a busy day in our track. 
In addition to the Fusion Middleware Innovation Awards ceremony at 11:45am at Moscone West 3001, we have the following DI sessions Real-World Operational Reporting Customer Panel 11:45am Moscone West- 3005 Oracle Data Integrator Product Update and Future Strategy 1:15pm Moscone West- 3005 High-volume OLTP with Oracle GoldenGate: Best Practices from Comcast 1:15pm Moscone West- 3005 Everything You need to Know about Monitoring Oracle GoldenGate 5pm Moscone West-3005 If you are at OpenWorld please join us in these sessions. For a full review of data integration track at OpenWorld please see our Focus-On document.

    Read the article

  • Is Oracle Policy Automation a Fit for My Agency? I'll bet it is.

    - by jeffrey.waterman
    Recently, I stumbled upon a new(-ish) whitepaper now posted on the Oracle Technology Network around Oracle Policy Automation (OPA). This paper is certain to become a must-read for any customer interested in rules automation. What is OPA? If you are not sitting in your favorite Greek restaurant waiting for that order of Saganaki to appear, OPA is Oracle’s solution for the automated streamlining, standardization, and maintenance of policy. It is a specialized rules platform that simplifies the automation of rules and policies, putting the analysis in the hands of the analysts, not the IT organization. In other words, OPA allows the organization to be more efficient by eliminating (or at a minimum, reducing the engagement of) the middle man from the process. The whitepaper I mention above is titled, “Is Oracle Policy Automation a Good Fit for My Business?”. This short document walks the reader through use cases and advice to consider when deciding if OPA is right for their agency. The paper outlines many different scenarios, different uses of OPA in production today, and where OPA may not be a good fit. Many of the use case examples revolve around end user questionnaires or analyst research. What is often overlooked is OPA’s ability to act as a rules engine behind the scenes. That is, take inputs from one source (e.g., personnel data), process that data in OPA, and send the output (e.g., pay data with benefits deductions) to a second source. The rules have been automated, with no human intervention necessary to perform the analysis. A few of my customers have used the embedded OPA solution to improve transaction processing and reduce the time spent analyzing exceptions. I suggest any reader whose organization is reliant on or deals with high complexity, volume or volatility in rules that are based on documentation – or which need to be documented – take a look at Oracle Policy Automation. You can find the white paper in the Oracle Policy Automation section of the Oracle Technology Network (OTN). You can find more information around OPA on oracle.com. Finally, you can send me a question any time at [email protected] Thank you for reading. If you have any topics around Oracle Applications in the Federal or Public Sector industries you would like to see addressed in this blog, please leave suggestions in the comments section and I will do my best to address them in a future post.

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control. But there is a current use of the word “Cloud” that does not necessarily mean that the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that can better be termed “Utility Computing”. This has to do with the provisioning of a computing resource. That means the setup, configuration, management, balancing and so on that is needed so that a user – which might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility. The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud”, but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”. A good example here is your E-Mail system. your users – pretty much your whole company – just logs into e-mail and expects it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider. Lots of companies help you do this in your own data centers, from VMWare to IBM and many others. Microsoft's offering in this is based around System Center – they have a “cloud in a box” provisioning system that’s actually pretty slick. The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases such as top-secret or private systems, you probably should. But there are times where you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you know how to do already. So congrats on your new role as a “Cloud Provider”. If you have an E-mail system or a database platform, you can just put that right on your resume.

    Read the article

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick turnaround chunks of time such as "Sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are done first. My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from because if I was trying to write software that targeted some basically mysterious interface I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements when it comes to automation seems to be too unique to a feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things end up turning up later as having been missing from the specification. One thing we considered doing was having the testers write test "scripts" that are more like a set of steps that must be performed, as described from a use-case perspective, so that they can be "automated" by a human being. This can then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity they automate the "script" for regression purposes mainly. This didn't end up catching on in the team though. The testing part of the team is actually falling behind us by quite a margin. This is one reason why the apparently extra time of developing a "script" for a human being to perform just did not happen....they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not their fault really, they're a bottle neck but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten to checking. It's an ugly truth that I'd like to do something about. So what do other teams do to solve this fail cascade? How can we get testers ahead of us and how can we make it so that there's actually time for them to write tests for the features we do in a sprint without making us sit and twiddle our thumbs in the meantime? As it's currently going, in order to get a feature "done", using agile definitions, would be to have developers work for 1 week, then testers work the second week, and developers hopefully being able to fix all the bugs they come up with in the last couple days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...

    Read the article

  • Dependency injection: How to sell it

    - by Mel
    Let it be known that I am a big fan of dependency injection (DI) and automated testing. I could talk all day about it. Background Recently, our team got this big project that is to be built from scratch. It is a strategic application with complex business requirements. Of course, I wanted it to be nice and clean, which for me meant: maintainable and testable. So I wanted to use DI. Resistance The problem was that in our team, DI is taboo. It has been brought up a few times, but the gods do not approve. But that did not discourage me. My Move This may sound weird, but third-party libraries are usually not approved by our architect team (think: "thou shalt not speak of Unity, Ninject, NHibernate, Moq or NUnit, lest I cut your finger"). So instead of using an established DI container, I wrote an extremely simple container. It basically wired up all your dependencies on startup, injected any dependencies (constructor/property), and disposed of any disposable objects at the end of the web request. It was extremely lightweight and just did what we needed. And then I asked them to review it. The Response Well, to make it short: I was met with heavy resistance. The main argument was, "We don't need to add this layer of complexity to an already complex project". Also, "It's not like we will be plugging in different implementations of components". And "We want to keep it simple, if possible just stuff everything into one assembly. DI is an unneeded complexity with no benefit". Finally, My Question How would you handle my situation? I am not good at presenting my ideas, and I would like to know how people would present their argument. Of course, I am assuming that, like me, you prefer to use DI. If you don't agree, please do say why so I can see the other side of the coin. It would be really interesting to see the point of view of someone who disagrees. Update Thank you for everyone's answers. It really puts things into perspective. It's nice to have another set of eyes giving you feedback, and fifteen is really awesome! These are really great answers and helped me see the issue from different sides, but I can only choose one answer, so I will just pick the top voted one. Thanks everyone for taking the time to answer. I have decided that it is probably not the best time to implement DI, and that we are not ready for it. Instead, I will concentrate my efforts on making the design testable and attempt to introduce automated unit testing. I am aware that writing tests is additional overhead, and if it is ever decided that the additional overhead is not worth it, personally I would still see it as a win since the design is still testable. And if testing or DI ever becomes an option in the future, the design can easily handle it.

    Read the article

  • When should I use a Process Model versus a Use Case?

    - by Dave Burke
    This Blog entry is a follow-on to https://blogs.oracle.com/oum/entry/oum_is_business_process_and and addresses a question I sometimes get asked, i.e. “when I am gathering requirements on a Project, should I use a Process Modeling approach, or should I use a Use Case approach?” Not surprisingly, the short answer is “it depends”! Let’s take a scenario where you are working on a Sales Force Automation project. We’ll call the process that is being implemented “Lead-to-Order”. I would typically think of this type of project as being “Process Centric”. In other words, the focus will be on orchestrating a series of human and system related tasks that ultimately deliver value to the business in a cost effective way. Put in even simpler terms: implement an automated pre-sales system. For this type of (Process Centric) project, requirements would typically be gathered through a series of Workshops where the focal point will be on creating, or confirming, the Future-State (To-Be) business process. If pre-defined “best-practice” business process models exist, then of course they could and should be used during the Workshops, but even in their absence, the focus of the Workshops will be to define the optimum series of Tasks, their connections, sequence, and dependencies that will ultimately reflect a business process that meets the needs of the business. Now let’s take another scenario. Assume you are working on a Content Management project that involves automating the creation and management of content for User Manuals, Web Sites, Social Media publications etc. Would you call this type of project “Process Centric”? Well, you could, but it might also fall into the category of complex configuration, plus some custom extensions to a standard software application (COTS). For this type of project it would certainly be worth considering using a Use Case approach in order to 1) understand the requirements, and 2) capture the functional requirements of the custom extensions. At this point you might be asking “why couldn’t I use a Process Modeling approach for my Content Management project?” Well, of course you could, but you just need to think about which approach is the most effective. Start by analyzing the types of Tasks that will eventually be automated by the system, and ask which approach each Task is best suited to – a Process Model or a Use Case. For example:
    - Manage outbound calls: a series of linked human and system tasks for calling and following up with prospects
    - Manage content revision: updating the content on a website
    - Update User Preferences: updating a user’s display preferences
    - Assign Lead: reviewing a lead, then assigning it to a sales person
    - Convert Lead to Quote: updating the status of a lead, and then converting it to a sales order
    As you can see, it’s not an exact science, and either approach is viable for the Tasks listed above. However, where you have a series of interconnected Tasks or Activities that, when combined, deliver value to the business, that would be a good indicator to lead with a Process Modeling approach. On the other hand, when the Tasks or Activities in question are more isolated and/or do not cross traditional departmental boundaries, then a Use Case approach might be worth considering. Now let’s take one final scenario. As you captured the To-Be Process flows for the Sales Force Automation project, you discover a “Gap” in terms of what the client requires, and what the standard COTS application can provide.
Let’s assume that the only way forward is to develop a Custom Extension. This would now be a perfect opportunity to document the functional requirements (behind the Gap) using a Use Case approach. After all, we will be developing some new software, and one of the most effective ways to begin the Software Development Lifecycle is to follow a Use Case approach. As always, your comments are most welcome.

    Read the article

  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite which runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests for which we will store the following information:
    - test ID (a GUID)
    - test name
    - test description
    - status (running, done, waiting to be run)
    - progress (%)
    - start time of test
    - end time of test
    - test result
    - latest screenshot of the running test (updated every 30 seconds)
    The number of tests isn't huge (say a few thousand), and each machine (say, 50 of them) has a service which checks the database and figures out if it's time to start a new automated test on that machine. How should I organize my SQL table to store all the information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and create a new table with more columns), how should I proceed? Should the new attributes just go in a different table? I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify the replication? Thanks!
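    One possible starting point, shown only as a sketch with assumed table and column names and standard PostgreSQL types, is a single row per test run plus a separate table for the frequently overwritten screenshot so it can be left out of replication:

        -- Sketch: one row per test run; screenshots kept apart so they can be excluded from replication.
        CREATE TABLE test_run (
            test_id     uuid PRIMARY KEY,
            name        text NOT NULL,
            description text,
            status      text NOT NULL,        -- 'waiting', 'running', 'done'
            progress    integer DEFAULT 0,    -- percent complete
            started_at  timestamptz,
            finished_at timestamptz,
            result      text
        );

        CREATE TABLE test_run_screenshot (
            test_id     uuid PRIMARY KEY REFERENCES test_run (test_id),
            captured_at timestamptz NOT NULL,
            image       bytea
        );

    New attributes that arrive later could then live in a separate extension table keyed on test_id, which keeps the original table layout untouched.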

    Read the article

  • Using Rails, problem testing has_many relationship

    - by east
    The summary is that I have code that works when testing manually, but isn't doing what I would think it should when trying to build an automated test. Here are the details: I have two models, Payment and PaymentTransaction.

        class Payment
          ...
          has_many :transactions, :class_name => 'PaymentTransaction'
        end

        class PaymentTransaction
          ...
          belongs_to :payment
        end

    The PaymentTransaction is only created in a Payment model method, like so:

        def pay_up
          ...
          transactions.create!(params...)
          ...
        end

    I've manually tested this code, inspected the database, and everything works well. The failing automated test looks like this:

        def test_pay_up
          purchase = Payment.new(...)
          assert purchase.save
          assert_equal purchase.state, :initialized.to_s
          assert purchase.pay_up # this should create a new PaymentTransaction...
          assert_equal purchase.state, :succeeded.to_s
          assert_equal purchase.transactions.count, 1 # FAILS HERE; transactions is an empty array
        end

    If I step through the code, it's clear that the PaymentTransaction is getting created correctly (though I can't see it in the database because everything is in a testing transaction). What I can't figure out is why transactions is returning an empty array in the test when I know a valid PaymentTransaction is getting created. Anybody have some suggestions? Thanks in advance, east

    Read the article

  • What guidelines should be followed when using an unstable/testing/stable branching scheme?

    - by Elliot
    My team is currently using feature branches while doing development. For each user story in our sprint, we create a branch and work on it in isolation. Hence, according to Martin Fowler, we practice Continuous Building, not Continuous Integration. I am interested in promoting an unstable/testing/stable scheme, similar to that of Debian, so that code is promoted from unstable -> testing -> stable. Our definition of done, I'd recommend, is when unit tests pass (TDD always), minimal documentation is complete, automated functional tests pass, and the feature has been demo'd and accepted by the PO. Once accepted by the PO, the story will be merged into the testing branch. Our test developers spend most of their time in this branch banging on the software and continuously running our automated tests. This scares me, however, because commits from another incomplete story may now make it into the testing branch. Perhaps I'm missing something, because this seems like an undesired consequence. So, if we move to a code promotion strategy to solve our problems with feature branches, what strategy/guidelines do you recommend? Thanks.

    Read the article

  • Looking for all-in-one drm/installer/CD creation kit.

    - by user30997
    The company I work for has a download manager in place that handles distribution, DRM, and installation of our products when a user gets them off our website. However, we're using a clunky system for packaging and protecting our products when we do press releases or make retail CDs. Part of the antiquation problem is the fact that the automated system that works with the installer- and DRM-creation software we have is a disaster that needs to be put out of my misery. The list of products that we currently produce, and that a new system MUST be capable of producing, is: 1) retail CDs, with a certain level of obfuscation to make copying difficult; 2) downloadable installers that time out after a few hours of use of the product (after the time has expired, removing and reinstalling the product will leave you still blocked from use); 3) installers that will fail to work after a certain date. I'd love to be able to just feed a tool the directory where a complete product resides and have the installer generated with a couple of command-line operations. (The command-line issue is non-negotiable; this will be called by an automated tool.) A single-solution package would be far preferable. Software with royalty-based or per-unit licensing is not an option.

    Read the article

  • CRONTAB doesn't finish svndump

    - by Andrew
    I just discovered that the automated dumps I've been creating of my SVN repository have been getting cut off early and basically only half the dump is there. It's not an emergency, but I hate being in this situation. It defeats the purpose of making automated backups in the first place. The command I'm using is below. If I execute it manually in the terminal, it completes fine; the output.txt file is 16 megs in size with all 335 revisions. But if I leave it to crontab, it bails at the halfway mark, at around 8.1 megs and only the first 169 revisions. # m h dom mon dow command 18 00 * * * svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt I actually save to a dated gzipped file, and there's no shortage of space on the server, so this is not a disk space issue. It seems to bail after two seconds, so this could be a time issue, but the file size is the same every single time for the past month, so I don't think it's that either. Does crontab execute within a limited memory space?
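    One low-risk way to narrow this down, shown only as a sketch with made-up log file names, is to capture stderr and the exit status of the cron run; a dump that works interactively but dies under cron often fails with an error message that is otherwise discarded:

        # m h dom mon dow command
        18 00 * * * svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt 2> /home/andrew/svndump.err; echo "exit=$?" >> /home/andrew/svndump.err

    Comparing the cron environment (PATH, locale, ulimits) against the interactive shell is usually the next step once the error text is visible.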

    Read the article

  • MSBuild: automate collecting of db migration scripts?

    - by P Dub
    Summary of environment: an ASP.NET web application (source stored in svn); a SQL Server database (database schema (tables/sprocs) stored in svn); the db version is synced with the web application assembly version (stored in table 'CurrentVersion'); a CI Hudson server that checks out the web app from the repo and runs a custom msbuild file to publish/package the app. My msbuild script updates the assembly version of the web app (Major.Minor.Revision.Build) on each build. The 'Revision' is set to the currently checked out svn revision and the 'Build' to the Hudson build number (incremented on each automated build). This way I can match the app to a specific trunk revision and also get other build stats from the Hudson build number. I'd like to automate the collecting of migration scripts (updated sprocs etc.) to add to the zip package. I guess that by comparing the svn revision of the db that has yet to be deployed to with the revision being deployed, I can find which db files have changed in the trunk since the last deployment to that database/environment. This could easily be achieved by manually calling the svn diff -r REVNO:REVNO command to list changed .sql files. These files would then have to be added to the package manually. It would be great if this could be automated. First, I'd imagine I'll have to write a custom task to check the version of the db that has yet to be deployed to. After that I'm quite unsure. Does anyone have any suggestions on how this could be achieved through an msbuild task, either existing or custom? Finally, I'll have to autogen a script to add to the package that updates the database version table so as to be in sync with the application.
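    As a rough sketch of the automation step (the property names are invented, and the custom task that reads 'CurrentVersion' is assumed to have set $(DeployedDbRevision) earlier in the build), the standard MSBuild Exec task could ask Subversion for the changed .sql files and drop the list next to the package:

        <!-- Sketch: list DB scripts changed between the deployed revision and the revision being built -->
        <Target Name="CollectMigrationScripts">
          <Exec Command="svn diff --summarize -r $(DeployedDbRevision):$(BuildRevision) $(SvnDbUrl) > $(PackageDir)\changed-db-scripts.txt" />
        </Target>

    A follow-up task (or script) would then copy the files named in that list into the zip package.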

    Read the article

  • Makefile - Dependency generation

    - by Profetylen
    I am trying to create a makefile that automatically compiles and links my .cpp files into an executable via .o files. What I can't get working is automated (or even manual) dependency generation. When I uncomment the commented code below, nothing is recompiled when I run make build. All I get is "make: Nothing to be done for 'build'.", even if x.h (or any .h file) has changed. I've been trying to learn from this question: Makefile, header dependencies, dmckee's answer especially. Why isn't this makefile working? Clarification: I can compile everything, but when I modify any header file, the .cpp files that depend on it aren't recompiled. So, if I for instance compile my entire source, then change a #define in a header file and run make build, I get "Nothing to be done for 'build'." (when I have uncommented either of the commented chunks of the code below).

        CC=gcc
        CFLAGS=-O2 -Wall
        LDFLAGS=-lSDL -lstdc++
        SOURCES=$(wildcard *.cpp)
        OBJECTS=$(patsubst %.cpp, obj/%.o,$(SOURCES))
        TARGET=bin/test.bin

        # Nothing happens when I uncomment the following. (automated attempt)
        #depend: .depend
        #
        #.depend: $(SOURCES)
        #	rm -f ./.depend
        #	$(CC) $(CFLAGS) -MM $^ >> ./.depend;
        #
        #include .depend

        # And nothing happens when I uncomment the following. x.cpp and x.h are files in my project. (manual attempt)
        #x.o: x.cpp x.h

        clean:
        	rm -f $(TARGET)
        	rm -f $(OBJECTS)

        run: build
        	./$(TARGET)

        debug: build
        	nm $(TARGET)
        	gdb $(TARGET)

        build: $(TARGET)

        $(TARGET): $(OBJECTS)
        	@mkdir -p $(@D)
        	$(CC) $(LDFLAGS) $(OBJECTS) -o $@

        obj/%.o: %.cpp
        	@mkdir -p $(@D)
        	$(CC) -c $(CFLAGS) $< -o $@

        include $(DEPENDENCIES)
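    A common alternative, shown here only as a sketch (it assumes gcc/clang support the -MMD/-MP flags, the DEPS name is made up, and recipe lines must be tab-indented), is to have the compiler emit a .d dependency fragment next to each object and pull those fragments in with -include:

        # Sketch: let the compiler write header dependencies as a side effect of compilation
        DEPS = $(OBJECTS:.o=.d)

        obj/%.o: %.cpp
        	@mkdir -p $(@D)
        	$(CC) -c $(CFLAGS) -MMD -MP $< -o $@    # also writes obj/<name>.d listing the headers used

        -include $(DEPS)    # leading '-' keeps the first build from failing while no .d files exist yet

    With this in place, touching x.h marks every object whose .d file mentions x.h as out of date, so make build rebuilds them.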

    Read the article

  • Suggestions for opening the Rails toolbox to design a challenge game?

    - by keruilin
    How would you suggest designing a challenge system as part of a food-eating game so that it's as automated as possible? All RoR tools, design patterns and logic are at your disposal (e.g., admin consoles, crontab, arch, etc.). Prize goes to whoever can suggest the simplest and most-automated design! Here are the requirements: User has many challenges. Badge has many challenges. (A unique badge is awarded for each challenge won.) Only one challenge can run at a time. Each challenge has a limited number of days that it runs. For example, one challenge can run 3 days, while another runs 7 days. Challenges can be seasonal. For example, "Eat 13 Pumpkins" only runs during the Fall. New challenges are added to the game on an ongoing basis. For example, a new challenge every week. Each challenge has a certain probability of being selected to run. For example, the "Eat 10 Pies" challenge has a 10% chance of being selected to run. As each new challenge is added to the database, I want the probabilities of running to change dynamically. I want to avoid the scenario where I'm manually updating a database field just to change the probability from 10% to 5%, for example. Challenges act like Easter eggs. Challenge icons pop up at different places on the webpage. User is awarded a badge for successfully completing a challenge, but only when it's active. There is some wait time between each challenge, between 1 and 7 days. The wait time is random, but the probability of a short wait is high and the probability of a long wait is low.
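    For the "probability of being selected" requirement, one approach that avoids hand-editing percentages as challenges are added is to store a relative weight per challenge and normalize at selection time; a minimal sketch in plain Ruby, where the Challenge weight column is an assumption:

        # Sketch: pick the next challenge by relative weight; probabilities rescale automatically
        # as rows are added, because the weights are normalized at pick time.
        def pick_next_challenge(challenges)
          total = challenges.inject(0) { |sum, c| sum + c.weight }   # e.g. weight 10 vs. 5 => twice as likely
          roll  = rand * total
          challenges.each do |c|
            return c if roll < c.weight
            roll -= c.weight
          end
          challenges.last   # guard against floating-point edge cases
        end

    A scheduled job (cron or similar) could call this against the currently in-season, not-yet-run challenges once the random wait period has elapsed.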

    Read the article

  • PHP arrays. There must be a simpler method to do this

    - by RisingSun
    I have this array in PHP returned from the db:

        Array
        (
            [inv_templates] => Array
                (
                    [0] => Array
                        (
                            [inven_subgroup_template_id] => 1
                            [inven_group] => Wires
                            [inven_subgroup] => CopperWires
                            [inven_template_id] => 1
                            [inven_template_name] => CopperWires6G
                            [constrained] => 0
                            [value_constraints] =>
                            [accept_range] => 2 - 16
                            [information] => Measured Manual
                        )
                    [1] => Array
                        (
                            [inven_subgroup_template_id] => 1
                            [inven_group] => Wires
                            [inven_subgroup] => CopperWires
                            [inven_template_id] => 2
                            [inven_template_name] => CopperWires2G
                            [constrained] => 0
                            [value_constraints] =>
                            [accept_range] => 1 - 7
                            [information] => Measured by Automated Calipers
                        )
                )
        )

    I need to output this kind of multidimensional stuff:

        Array
        (
            [Wires] => Array
                (
                    [inv_group_name] => Wires
                    [inv_subgroups] => Array
                        (
                            [CopperWires] => Array
                                (
                                    [inv_subgroup_id] => 1
                                    [inv_subgroup_name] => CopperWires
                                    [inv_templates] => Array
                                        (
                                            [CopperWires6G] => Array
                                                (
                                                    [inv_name] => CopperWires6G
                                                    [inv_id] => 1
                                                )
                                            [CopperWires2G] => Array
                                                (
                                                    [inv_name] => CopperWires2G
                                                    [inv_id] => 2
                                                )
                                        )
                                )
                        )
                )
        )

    I currently do this stuff:

        foreach ($data['inv_templates'] as $key => $value) {
            $processeddata[$value['inven_group']]['inv_group_name'] = $value['inven_group'];
            $processeddata[$value['inven_group']]['inv_subgroups'][$value['inven_subgroup']]['inv_subgroup_id'] = $value['inven_subgroup_template_id'];
            $processeddata[$value['inven_group']]['inv_subgroups'][$value['inven_subgroup']]['inv_subgroup_name'] = $value['inven_subgroup'];
            $processeddata[$value['inven_group']]['inv_subgroups'][$value['inven_subgroup']]['inv_templates'][$value['inven_template_name']]['inv_name'] = $value['inven_template_name'];
            $processeddata[$value['inven_group']]['inv_subgroups'][$value['inven_subgroup']]['inv_templates'][$value['inven_template_name']]['inv_id'] = $value['inven_template_id'];
        }
        return $processeddata;

    EDIT: A var_export:

        array (
          'inv_templates' => array (
            0 => array (
              'inven_subgroup_template_id' => '1',
              'inven_group' => 'Wires',
              'inven_subgroup' => 'CopperWires',
              'inven_template_id' => '1',
              'inven_template_name' => 'CopperWires6G',
              'constrained' => '0',
              'value_constraints' => '',
              'accept_range' => '2 - 16',
              'information' => 'Measured Manual',
            ),
            1 => array (
              'inven_subgroup_template_id' => '1',
              'inven_group' => 'Wires',
              'inven_subgroup' => 'CopperWires',
              'inven_template_id' => '2',
              'inven_template_name' => 'CopperWires6G',
              'constrained' => '0',
              'value_constraints' => '',
              'accept_range' => '1 - 7',
              'information' => 'Measured by Automated Calipers',
            ),
          ),
        )

    The foreach is almost unreadable. There must be a simpler way.
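    One way to make the loop easier on the eyes, shown only as a sketch against the same field names (it should build the same structure), is to take references to the group and subgroup branches once per row instead of repeating the full path on every line:

        $processeddata = array();
        foreach ($data['inv_templates'] as $value) {
            // Build (or reuse) the group and subgroup branches once, by reference
            $group =& $processeddata[$value['inven_group']];
            $group['inv_group_name'] = $value['inven_group'];

            $sub =& $group['inv_subgroups'][$value['inven_subgroup']];
            $sub['inv_subgroup_id']   = $value['inven_subgroup_template_id'];
            $sub['inv_subgroup_name'] = $value['inven_subgroup'];

            $sub['inv_templates'][$value['inven_template_name']] = array(
                'inv_name' => $value['inven_template_name'],
                'inv_id'   => $value['inven_template_id'],
            );
            unset($group, $sub); // break the references before the next iteration
        }
        return $processeddata;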

    Read the article

  • Creating a timesheet for work using PHP MySQL

    - by Justin
    I am trying to create a time-sheet for my work. I don't know if I'm getting myself into a lot of work by doing this, as I am quite new to PHP and MySQL, but I do have a good understanding/knowledge of the two. I want the fields below in my database:
    Job
    weekPeriod ------ A list of weeks, Monday to Sunday
    dateWorked ------ List of dates in the form coming from a database, e.g. 1/1/2011
    startTime ------ List of times from 12:00am to 11:00pm in 30 min intervals, e.g. 11:30-12:30
    endTime ------ List of times from 12:00am to 11:00pm in 30 min intervals, e.g. 11:30-12:30
    totalHours ------ Automated
    amount ------ Automated based on dateWorked
    comments ------ Any messages here
    I want to be able to fill in some drop-down boxes through a form that will then submit all the information to my database. I want the script to know that if the date worked is a weekday (Mon-Fri) my rate of pay is 30.00ph, on a Saturday it is 35.00ph, and on a Sunday it is 40.00ph. I then want to create a page where I select a particular week and see how many hours I worked, how much I earned, and so on. Please let me know if there is such a program already established, or if this is something that requires a bit of time and whether I could do it being new to PHP and MySQL.
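    For the pay-rate rule (30.00ph weekdays, 35.00ph Saturday, 40.00ph Sunday), the amount column can be computed rather than typed in; a small PHP sketch, assuming dateWorked is stored in a format strtotime() understands and the rates above:

        // Sketch: hourly rate from the day of week of the shift (date('N'): 1 = Monday ... 7 = Sunday)
        function hourly_rate($dateWorked) {
            $dow = (int) date('N', strtotime($dateWorked));
            if ($dow == 7) return 40.00;   // Sunday
            if ($dow == 6) return 35.00;   // Saturday
            return 30.00;                  // Monday to Friday
        }

        $amount = $totalHours * hourly_rate($dateWorked);

    Storing the rate table in its own MySQL table (day_of_week, rate) instead of hard-coding it would make later rate changes a data update rather than a code change.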

    Read the article

  • Integration testing - can it be done right?

    - by Max
    I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program? What I am currently doing is writing a test case per class (this is my rule of thumb: a "unit" is a class, and each class has one or more test cases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want? An example: think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids, and then iterates over the records and writes them as a string to an outfile. To keep it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation. What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then check the filesystem to see whether the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? A book I recently read about unit testing seemed to treat integration testing as more of an anti-pattern.
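
    One common middle ground, shown as a sketch below: wire the real collaborators together just beneath the HTTP layer and assert on the observable result, leaving the full HTTP-to-filesystem round trip to a handful of end-to-end smoke tests. PHPUnit is assumed, and the Controller/OutfileWriter constructors plus the InMemoryRepository test double are hypothetical stand-ins for the question's classes, so this will not run without them:

    <?php
    use PHPUnit\Framework\TestCase;

    // Sketch of an integration test: real Controller and OutfileWriter,
    // a cheap in-memory repository instead of the database, and an
    // assertion on the file the collaboration is supposed to produce.
    final class ExportIntegrationTest extends TestCase
    {
        public function testControllerWritesFetchedRecordsToFile(): void
        {
            $outfile = tempnam(sys_get_temp_dir(), 'export_');

            // Hypothetical constructors: adjust to the real signatures.
            $repository = new InMemoryRepository([
                1 => 'first record',
                2 => 'second record',
            ]);
            $controller = new Controller($repository, new OutfileWriter($outfile));

            $controller->export([1, 2]);

            $this->assertSame(
                "first record\nsecond record\n",
                file_get_contents($outfile)
            );

            unlink($outfile);
        }
    }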

    Read the article

  • How do you install/configure JBoss on Linux/Unix?

    - by mafro
    I'm currently working out how to install and configure multiple (30+) JBoss EAP 5 configurations (both standalone and clustered) for development, test and production at a client's site (running SuSE). I'm not too keen on the JBoss way of storing application/configuration together with system files, so I have tried to split things up (i.e. moving server config out of the JBoss installation directory). I would also like to minimize the amount of configuration needed when upgrading/patching JBoss, but I'm not done thinking about that... It would be great to hear how you've done it and what you think about my approach. This is what my installations look like at the moment:

    Standard JBoss EAP install (minus server configs):
    /opt/jboss/jboss-eap-5.0/jboss-as
    /opt/jboss/jboss-eap-5.0/jboss-as/bin/
    /opt/jboss/jboss-eap-5.0/jboss-as/lib/
    /opt/jboss/jboss-eap-5.0/jboss-as/server/ [server configs removed to avoid starting them by mistake]
    /opt/jboss/jboss-eap-5.0/jboss-as/.../

    Application layout (some JBoss folders have been omitted - you'll get the point anyway):
    /app/<project>/ [$app.dir - application specific base folder]
    /app/<project>/jboss/ [$jboss.home]
    /app/<project>/jboss/bin/ -> /opt/jboss/jboss-eap-5.0/jboss-as/bin
    /app/<project>/jboss/lib/ -> /opt/jboss/jboss-eap-5.0/jboss-as/lib
    /app/<project>/jboss/server/<cfg>/ [project specific config based on 'production']
    /app/<project>/jboss/server/<cfg>/log/ -> /log/<project>/<cfg>
    /app/<project>/jboss/server/<cfg>/...
    /app/<project>/jboss/.../ -> /opt/jboss/jboss-eap-5.0/jboss-as/.../
    /app/<project>/bin/ [application specific scripts for start/stop etc - wraps JBoss supplied scripts]
    /app/<project>/deploy/ [application deploy folder]
    /app/<project>/etc/ [application specific config]

    Questions: How do you install JBoss on Linux/Unix systems? Where do you put JBoss and what modifications do you make? Where do you put your applications and application-specific files? Do you share JBoss instances between applications or run one instance/cluster per application? How do you manage configuration changes (i.e. your modifications of the standard JBoss config)?

    Read the article

  • Moving from WDS to MDT + WDS - Prestaged Computer Name

    - by MSCF
    We previously used just WDS to deploy our images. WDS was set up to request approval for new machines, and we used the "Name and Approve" option to name the machines as we added them. If a machine was pre-existing, it would just use the existing computer name from AD. Then, in our unattend.xml file, we had Computername=%MACHINENAME%. This picked up the name we gave it during approval and set the computer name accordingly. We are now implementing MDT to manage our images and drivers, but upon testing we noticed it would assign random computer names. I went into the Unattend.xml for the deploy task sequence and added that value under Specialize amd64_Microsoft-Windows-Shell-Setup_neutral: Computername=%MACHINENAME%. But when we try applying the image, it errors out at that point of the install. How can an MDT deployment be configured to leverage the pre-staged name? Some additional info follows.

    Error message during the imaging process: "Windows could not parse or process the unattend answer file for pass [specialize]. The settings specified in the answer file cannot be applied. The error was detected while processing settings for component [Microsoft-Windows-Shell-Setup]."

    setuperr.log:
    2014-07-22 14:02:13, Error [setup.exe] [Action Queue] : Unattend action failed with exit code 4
    2014-07-22 14:02:13, Error [setup.exe] Execution of unattend GCs failed; hr = 0x0; pResults->hrResult = 0x8030000b

    Read the article

  • Migrated SCOM 2007 R2 Reporting Services but reports are gone

    - by Gabriel Guimarães
    I've migrated Reporting Services on a SCOM 2007 R2 install and noticed that the reports have not been copied. I can create a new report, but the ones I had from the management packs are gone. I've tried re-applying the Management Packs, but that doesn't re-deploy them, and when I try to access, for example, Monitoring - Microsoft Windows Print Server - Microsoft Windows Server 2000 and 2003 Print Services - State View, select any item, and click Alerts on the right menu, I get the following error:

    Date: 12/24/2010 12:40:35 PM
    Application: System Center Operations Manager 2007 R2
    Application Version: 6.1.7221.0
    Severity: Error
    Message: Cannot initialize report. Microsoft.Reporting.WinForms.ReportServerException: The item '/Microsoft.SystemCenter.DataWarehouse.Report.Library/Microsoft.SystemCenter.DataWarehouse.Report.Alert' cannot be found. (rsItemNotFound)
    at Microsoft.Reporting.WinForms.ServerReport.GetExecutionInfo()
    at Microsoft.Reporting.WinForms.ServerReport.GetParameters()
    at Microsoft.EnterpriseManagement.Mom.Internal.UI.Reporting.Parameters.ReportParameterBlock.Initialize(ServerReport serverReport)
    at Microsoft.EnterpriseManagement.Mom.Internal.UI.Console.ReportForm.SetReportJob(Object sender, ConsoleJobEventArgs args)

    The report doesn't exist on the Reporting Services side. How do I re-deploy these reports? Any help is appreciated. Thanks in advance.

    Read the article
