Search Results

Search found 21666 results on 867 pages for 'business objects'.

  • Task ownership with WordPress - CSS - Designer or Developer?

    - by Syed Absar
    We have a dispute about who owns which tasks when it comes to the CSS on our live site. Our designer argues that he is not responsible for logging in to WordPress to modify the CSS, or for using FTP to push changes, because that is not in his job description. The developer argues that since it is CSS, it belongs to the designer, who should update the changes on the server and then compare and correct the output. I'd like experienced people working in professional development environments to shed some light on this scenario. I'm also not sure if this is the right place to ask this question, or whether there is a separate forum for business development or project management specific questions.

    Read the article

  • The Oracle Retail Week Awards - Store Manager of the year

    - by user801960
    Below is a video featuring interviews with the nominees for the Oracle Retail Week Awards 2012 Store Manager of the Year Award, in which the nominees talk about the value of being nominated for an Oracle Retail Week Award and what it means to them to be recognised. The video also includes an interview with ASDA CEO Andy Clarke, who talks about how important store managers are to the functioning of a retail business. The nominees interviewed were:
    Ian Allcock from Homebase in Aylesford
    David Bickell from Argos in Milton Keynes
    Karl Lynsdale from Co-operative Food in Heathfield, Sussex
    Paul Norcross from B&Q in Bristol
    Darren Parfitt from Boots in Melton Mowbray
    Helen Smith from H Samuel in Manchester
    Oracle Retail would like to congratulate the winner, Ian Allcock from Homebase in Aylesford. Well done, Ian!

    Read the article

  • Donald Ferguson says end-user programming is the next big thing. Is it?

    - by Joris Meys
    You can guess how I came to ask this question... Anyway: http://www.bbc.co.uk/news/business-11944966 - Donald Ferguson claiming that his WebSphere was his biggest disaster and proclaiming that end-user programming will be the way forward. This genuinely spurs the question: what about current programming languages? Honestly, I don't think end-user programming will go much beyond a rather rigid template that you can build some apps around. If you see how few people actually manage to understand the basic functionality of functions in Excel... Plus, I fail to see how complex, performant systems could be built in such an end-user programming language (Visual Basic, anyone?). Nice to play around with, but for many applications they're just not the thing. So no worries for the old languages, if you ask me. What are your ideas?

    Read the article

  • Game Maker Studio Gravity Problems

    - by Dusty
    I've started messing around with Game Maker Studio. The problem I'm having is getting a gravity code right for orbiting. Here's how I did it in XNA:

        foreach (GravItem Item in StarSystem.ActiveItems.OfType<GravItem>())
        {
            if (this != Item)
            {
                Velocity += 10 * Vector2.Normalize(Item.Position - this.Position)
                    * (this.Mass * Item.Mass)
                    / Vector2.DistanceSquared(this.Position, Item.Position)
                    / this.Mass;
            }
        }

    Simple and works well; things orbit and everything is nice. But in Game Maker I don't have the luxury of Vector2s, or a foreach loop to loop through all the objects that have a mass. I've tried a few different things, but nothing seems to work:

        distance = distance_to_object(obj_moon);
        //--Gravity
        hspeed += (0.5 * (distance) * (Mass * obj_moon.Mass) / (sqr(distance)) / Mass)
        vspeed += (0.5 * (distance) * (Mass * obj_moon.Mass) / (sqr(distance)) / Mass)

    Thanks for the help.
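    For reference, the vector update in the XNA snippet decomposes into per-axis terms roughly like this (a minimal sketch in Python rather than GML; the function name and layout are illustrative, not from the post). The key difference from the GML attempt above is that each axis gets its own dx/d or dy/d direction factor, instead of the scalar distance:

        import math

        G = 10.0  # same constant the XNA snippet uses

        def gravity_step(x, y, moon_x, moon_y, moon_mass, hspeed, vspeed):
            """One inverse-square attraction update, split into axis components.

            Equivalent to Velocity += G * normalize(moonPos - pos) * (m1 * m2) / d^2 / m1;
            note that the attracted body's own mass m1 cancels out.
            """
            dx = moon_x - x
            dy = moon_y - y
            d = math.hypot(dx, dy)            # distance between the two bodies
            accel = G * moon_mass / (d * d)   # acceleration magnitude: G * m2 / d^2
            hspeed += accel * dx / d          # x component of the unit direction
            vspeed += accel * dy / d          # y component of the unit direction
            return hspeed, vspeed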

    Read the article

  • What is wrong with my Dot Product?

    - by Clay Ellis Murray
    I am trying to make a pong game, but I wanted to use dot products to do the collisions with the paddles. However, whenever I compute a dot product between objects, it never changes much from 0.9. This is my code to make vectors:

        vector = {
            make: function(object) {
                return [object.x + object.width / 2, object.y + object.height / 2]
            },
            normalize: function(v) {
                var length = Math.sqrt(v[0] * v[0] + v[1] * v[1])
                v[0] = v[0] / length
                v[1] = v[1] / length
                return v
            },
            dot: function(v1, v2) {
                return v1[0] * v2[0] + v1[1] * v2[1]
            }
        }

    and this is where I am calculating the dot product in my code:

        vector1 = vector.normalize(vector.make(ball))
        vector2 = vector.normalize(vector.make(object))
        dot = vector.dot(vector1, vector2)

    Here is a JsFiddle of my code; currently the paddles don't move. Any help would be greatly appreciated.
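    A note on why the value barely moves (a sketch in Python for neutrality; the coordinates are made up): normalizing absolute centre positions produces unit vectors that all point from the screen origin into the playfield, so their dot product stays large wherever the objects are. Dot products become informative when taken between directions, for example the ball's velocity against the direction from ball to paddle:

        import math

        def normalize(v):
            length = math.hypot(v[0], v[1])
            return (v[0] / length, v[1] / length)

        def dot(a, b):
            return a[0] * b[0] + a[1] * b[1]

        paddle = (760, 300)  # centre of a right-hand paddle
        print(dot(normalize((400, 300)), normalize(paddle)))  # ~0.96
        print(dot(normalize((600, 200)), normalize(paddle)))  # ~1.0: high either way

        # Direction-based version: near +1 when heading at the paddle, -1 moving away.
        ball, velocity = (400, 300), (5, -1)
        to_paddle = (paddle[0] - ball[0], paddle[1] - ball[1])
        print(dot(normalize(velocity), normalize(to_paddle)))  # ~0.98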

    Read the article

  • How do I set a touch listener in a child scene in AndEngine?

    - by Siddharth
    In my game, I want to implement a touch listener for the objects in my child scene. I have tried all the approaches that usually work for my normal scenes, but none of them work here. Could somebody provide some guidance on setting a touch area listener in a child scene? Here is my code:

        menuScene.setTouchAreaBindingEnabled(true);
        menuScene.registerTouchArea(resumeButtonSprite);
        menuScene.registerTouchArea(retryButtonSprite);
        menuScene.registerTouchArea(exitButtonSprite);
        menuScene.setOnAreaTouchListener(new IOnAreaTouchListener() {
            @Override
            public boolean onAreaTouched(TouchEvent pSceneTouchEvent, ITouchArea pTouchArea,
                    float pTouchAreaLocalX, float pTouchAreaLocalY) {
                System.out.println("Touch");
                return true;
            }
        });

    In this code, menuScene is a child scene. After some research I also found that my engine is stopped while the child scene is active, so the touch event is not detected. I want to implement a pause menu in my game, so any solution suitable for a pause menu implementation would help.

    Read the article

  • Oracle Announces General Availability of Oracle Database 12c, the First Database Designed for the Cloud

    - by Javier Puerta
    Oracle Announces General Availability of Oracle Database 12c, the First Database Designed for the Cloud
    REDWOOD SHORES, Calif. – July 1, 2013
    News Summary
    As organizations embrace the cloud, they seek technologies that will transform business and improve their overall operational agility and effectiveness. Oracle Database 12c is a next-generation database designed to meet these needs, providing a new multitenant architecture on top of a fast, scalable, reliable, and secure database platform. By plugging into the cloud with Oracle Database 12c, customers can improve the quality and performance of applications, save time with maximum availability architecture and storage management, and simplify database consolidation by managing hundreds of databases as one.
    Read the full press release

    Read the article

  • It happened 12 and a half years ago: Data Mart Suite and Discoverer/2000

    - by Fekete Zoltán
    Within a few months, Oracle Hungary Kft. will move to a new office building. So it is worth decluttering "incrementally", just as data gets loaded into a good data warehouse step by step, with the pile of covered areas growing from one subject area to the next, along smaller milestones. :) A registration form from 1997 on the topic of Oracle decision support (DSS) just landed in my hands: "New decision support tools in the hands of developers", Oracle Partner Conference, 7 November 1997. :) An oldtimer... And all of it timed, by coincidence, exactly to the date of the NOSZF. Does anyone still remember what NOSZF stands for? :) Since then, Oracle Warehouse Builder, Oracle Business Intelligence Enterprise Edition and BI Standard Edition One have become the flagships in the ETL-ELT and analysis-reporting areas.

    Read the article

  • Using Hadoop (HDInsight) with Microsoft - Two (OK, Three) Options

    - by BuckWoody
    Microsoft has many tools for “Big Data”. In fact, you need many tools – there’s no product called “Big Data Solution” in a shrink-wrapped box – if you find one, you probably shouldn’t buy it. It’s tempting to want a single tool that handles everything in a problem domain, but with large, complex data, that isn’t a reality. You’ll mix and match several systems, open and closed source, to solve a given problem. But there are tools that help with handling data at large, complex scales. Normally the best way to do this is to break up the data into parts, and then put the calculation engines for that chunk of data right on the node where the data is stored. These systems are in a family called “Distributed File and Compute”. Microsoft has a couple of these, including the High Performance Computing edition of Windows Server. Recently we partnered with Hortonworks to bring the Apache Foundation’s release of Hadoop to Windows. And as it turns out, there are actually two (technically three) ways you can use it. (There’s a more detailed set of information here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx, I’ll cover the options at a general level below)  First Option: Windows Azure HDInsight Service  Your first option is that you can simply log on to a Hadoop control node and begin to run Pig or Hive statements against data that you have stored in Windows Azure. There’s nothing to set up (although you can configure things where needed), and you can send the commands, get the output of the job(s), and stop using the service when you are done – and repeat the process later if you wish. (There are also connectors to run jobs from Microsoft Excel, but that’s another post)   This option is useful when you have a periodic burst of work for a Hadoop workload, or the data collection has been happening into Windows Azure storage anyway. That might be from a web application, the logs from a web application, telemetrics (remote sensor input), and other modes of constant collection.   You can read more about this option here:  http://blogs.msdn.com/b/windowsazure/archive/2012/10/24/getting-started-with-windows-azure-hdinsight-service.aspx Second Option: Microsoft HDInsight Server Your second option is to use the Hadoop Distribution for on-premises Windows called Microsoft HDInsight Server. You set up the Name Node(s), Job Tracker(s), and Data Node(s), among other components, and you have control over the entire ecostructure.   This option is useful if you want to  have complete control over the system, leave it running all the time, or you have a huge quantity of data that you have to bulk-load constantly – something that isn’t going to be practical with a network transfer or disk-mailing scheme. You can read more about this option here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx Third Option (unsupported): Installation on Windows Azure Virtual Machines  Although unsupported, you could simply use a Windows Azure Virtual Machine (we support both Windows and Linux servers) and install Hadoop yourself – it’s open-source, so there’s nothing preventing you from doing that.   Aside from being unsupported, there are other issues you’ll run into with this approach – primarily involving performance and the amount of configuration you’ll need to do to access the data nodes properly. 
But for a single-node installation (where all components run on one system) such as learning, demos, training and the like, this isn’t a bad option. Did I mention that’s unsupported? :) You can learn more about Windows Azure Virtual Machines here: http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/ And more about Hadoop and the installation/configuration (on Linux) here: http://en.wikipedia.org/wiki/Apache_Hadoop And more about the HDInsight installation here: http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT-PREVIEW Choosing the right option Since you have two or three routes you can go, the best thing to do is evaluate the need you have, and place the workload where it makes the most sense.  My suggestion is to install the HDInsight Server locally on a test system, and play around with it. Read up on the best ways to use Hadoop for a given workload, understand the parts, write a little Pig and Hive, and get your feet wet. Then sign up for a test account on HDInsight Service, and see how that leverages what you know. If you're a true tinkerer, go ahead and try the VM route as well. Oh - there’s another great reference on the Windows Azure HDInsight that just came out, here: http://blogs.msdn.com/b/brunoterkaly/archive/2012/11/16/hadoop-on-azure-introduction.aspx  
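    Whichever of the options you pick, the jobs themselves look much the same. As a flavor of what "running a Hadoop workload" means in practice, here is the classic word count written for Hadoop Streaming (an illustrative sketch, not from the article; the streaming jar path and arguments vary by distribution and version):

        #!/usr/bin/env python
        # wordcount.py - serves as both sides of a Hadoop Streaming job, e.g.:
        #   hadoop jar .../hadoop-streaming-*.jar -files wordcount.py \
        #       -mapper "python wordcount.py map" -reducer "python wordcount.py reduce" \
        #       -input /input -output /output
        import sys

        def map_phase():
            # Emit "word<TAB>1" for every word; Hadoop sorts these by key.
            for line in sys.stdin:
                for word in line.split():
                    print("%s\t1" % word)

        def reduce_phase():
            # Input arrives grouped by key, so a running total per word suffices.
            current, count = None, 0
            for line in sys.stdin:
                word, value = line.rstrip("\n").rsplit("\t", 1)
                if word != current:
                    if current is not None:
                        print("%s\t%d" % (current, count))
                    current, count = word, 0
                count += int(value)
            if current is not None:
                print("%s\t%d" % (current, count))

        if __name__ == "__main__":
            map_phase() if sys.argv[1] == "map" else reduce_phase()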

    Read the article

  • Cocos2d-x In-App Purchase for iOS

    - by Ahmad dar
    I am trying to integrate In-App Purchases into my app, which is built with cocos2d-x (C++). I am using the easyNdk helper for In-App Purchases. My In-App Purchases work perfectly in my Objective-C apps, but for cocos2d-x the following line throws an error: if ([[RageIAPHelper sharedInstance] productPurchased:productP.productIdentifier]) The values arrive from the CPP file perfectly in the form of arguments and show up properly in NSLog, but the objects are always nil, even though they print their stored values in NSLog. A @try/@catch block does not catch it either; it finally throws the error above. Please help me figure out what I have to do. Thanks

    Read the article

  • SOA Suite HealthCare Integration Architecture

    - by Nitesh Jain
    Oracle SOA Suite for healthcare integration is an integrated, best-of-breed suite that helps healthcare organizations rapidly design and assemble, deploy and manage highly agile and adaptable business applications. It helps the healthcare industry reduce operating costs and speeds time-to-market by delivering a consistent user interface, management console and monitoring environment, as well as healthcare libraries and templates for healthcare customer projects. Oracle SOA Suite for healthcare integration is fully configurable and extensible, providing a highly flexible platform for collaboration across all healthcare domains.
    Healthcare message standards supported:
    Messaging standards - HL7, HIPAA, X12N, custom
    Exchange standards - MLLP (v1.0, v2.0), TCP/IP, File, FTP, SFTP, JMS
    Simplified dashboards and customized reports give users advanced monitoring capabilities that support end-to-end healthcare message tracking. A toolkit for rapid HIPAA 5010 upgrade and compliance provides pre-defined healthcare integration mappings for HIPAA standards, fully customizable and extensible. MLLP-HA provides easy failover and disaster recovery, keeping the system running for long periods without issue. Auditing keeps track of all system changes, and alerts and notifications (SMS, email, etc.) help users act quickly, with tracking in real time.

    Read the article

  • Best Practices For Database Consolidation On Exadata - New Whitepapers

    - by Javier Puerta
    Best Practices For Database Consolidation On Exadata Database Machine (Nov. 2011)
    Consolidation can minimize idle resources, maximize efficiency, and lower costs when you host multiple schemas, applications or databases on a target system. Consolidation is a core enabler for deploying Oracle database on public and private clouds. This paper provides the Exadata Database Machine (Exadata) consolidation best practices to set up and manage systems and applications for maximum stability and availability: Download here
    Oracle Exadata Database Machine Consolidation: Segregating Databases and Roles (Sep. 2011)
    This paper is focused on the aspects of segregating databases from each other in a platform consolidation environment on an Oracle Exadata Database Machine. Platform consolidation is the consolidation of multiple databases on to a single Oracle Exadata Database Machine. When multiple databases are consolidated on a single Database Machine, it may be necessary to isolate certain database components or functions in order to meet business requirements and provide best practices for a secure consolidation. In this paper we outline the use of Oracle Exadata database-scoped security to securely separate database management and provide a detailed case study that illustrates the best practices. Download here

    Read the article

  • Roll Your Own Hologram with DIY Holography Kit

    - by Jason Fitzpatrick
    If you’re looking for a DIY project with a 1980s theme, this create-your-own-hologram kit is your ticket to 3D greatness. Over at Make magazine they’ve put together a tutorial for creating your own holograms using the DIY holography kit featured in the Maker Shed, Make’s storefront for DIYers. The kit is $99; certainly not pocket change, but on par with other holography kits on the market, and even a bit generous with the inclusion of 20 sheets of holographic film. Check out the video above to see how easy it is to capture small objects on the film and create your own holograms. How-To: Holography [Make]

    Read the article

  • Synopsis: Configure WebCenter PS5 with WebCenter Content - Good Example

    - by Vikram Kurma
    In a typical business scenario we often need to display assets such as pages and images from WebCenter Content in our portal applications. WebCenter Portal applications provide a way to integrate content through JDeveloper, where you can browse and consume assets from WebCenter Content. In the latest PS5 version, there is a small change needed to enable this feature. If it is not done properly, you will see that the connection is successful but it does not allow you to browse the assets:

        SEVERE: Could not list contents of folder with ID = dCollectionID:-1
        oracle.stellent.ridc.protocol.ServiceException: No service defined for COLLECTION_DISPLAY.

    Don't worry, we are here to help you out on this. Read on for the solution here.

    Read the article

  • Estimating time for planning and technical design using Evidence Based Scheduling

    - by Turgs
    I'm at the beginning of a development project in a large organization. The Functional Requirements are currently being worked out and documented with our business stakeholders by our Enterprise Design department. I'm required to produce the Technical Design Documents and manage the team that actually builds the solution. I want to try Evidence Based Scheduling, but as I understand it, part of that is breaking the job down into small tasks of less than 14 hours' duration, which requires the Technical Design to already be done. Can Evidence Based Scheduling therefore only be used after the Technical Design has been done? And how do you then plan and estimate the time it may take to come up with the Technical Design?
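    For background, the mechanism at the heart of Evidence Based Scheduling (as Joel Spolsky describes it) is a Monte Carlo simulation over each estimator's historical estimate-to-actual ratios, which is exactly why it presupposes task-level estimates. A minimal sketch, with made-up numbers:

        import random

        # Historical velocities: estimated hours / actual hours for past tasks.
        velocities = [1.0, 0.8, 0.5, 1.2, 0.6, 0.9]

        estimates = [10, 6, 14, 8]  # hour estimates for the new tasks

        def simulate(estimates, velocities, rounds=1000):
            """Each round divides every estimate by a randomly drawn past velocity,
            yielding a distribution of plausible total durations."""
            return sorted(
                sum(e / random.choice(velocities) for e in estimates)
                for _ in range(rounds)
            )

        totals = simulate(estimates, velocities)
        print("50%% confidence: %.0f hours" % totals[len(totals) // 2])
        print("95%% confidence: %.0f hours" % totals[int(len(totals) * 0.95)])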

    Read the article

  • Price comparison sites and their effect on Google ranking

    - by Jivago
    I am the webmaster of a website that contains roughly 10,000 products. I would potentially be interested in indexing those products in a price comparison site like PriceGrabber, Nextag, Shopbot, etc. The principle of price comparison sites is great for an actual user who wants to compare prices, but my main concern is the effect it could have on my ranking on Google... Since a site like Shopbot uses a CPC (cost-per-click) model, all the links on the website are built to track clicks (e.g. http://www.shopbot.ca/r.html?i=3&catc=2&refshop=5706&refshopcodeid=42587349); it uses redirection, not direct links (so no direct backlinks). In your opinion and/or experience, is this a smart move, business-wise and SEO-wise, or not? THANKS!

    Read the article

  • Rendering of 2d water

    - by luke
    Suppose you have a nice way to move your 2D particles in order to simulate a fluid (like water). Any ideas on how to render it? Consider that the game is a 2D game, with a perspective like this (the first image I found): an example of 2D water. The water will be contained in boxes that can be broken in order to let it fall down and interact with other objects. The simplest approach that comes to my mind is to use a small image for each particle. I am interested in hearing more ways of rendering water. Thank you.

    Read the article

  • In which cases is Robolectric a relevant solution?

    - by Francis Toth
    As you may know, Robolectric is a framework that provides stubs for Android objects in order to make tests runnable outside the Dalvik environment. My concern is that, by doing this, one can fake a third-party library, which is, I believe, not a good practice (it should be encapsulated instead). If you make assumptions about an interface you don't own, and it is changed once your test has been written, you won't always be notified of the modifications. This can lead to a mismatch between your implementations and the interface they depend on. In addition, Android mostly uses inheritance over interfaces, which limits contract testing. So here's my question: are there situations when Robolectric is the way to go? Here are some links you can check for further information: test-doubles-with-mockito in-brief-contract-tests

    Read the article

  • Drawing beam effect in UDK?

    - by sgrif
    I'm having trouble drawing a particle effect between two actors in UDK. Neither the source nor the target is a static object, so as far as I can tell I need to do it in code, not in Kismet. Here's what I've got at the moment, and it seems to do nothing at all. Ideas?

        BeamEmitter[0] = new(self) class'UTParticleSystemComponent';
        BeamEmitter[0].SetAbsolute(false, false, false);
        BeamEmitter[0].SetTemplate(BeamTemplate[0]);
        BeamEmitter[0].SetTickGroup(TG_PostUpdateWork);
        BeamEmitter[0].bUpdateComponentInTick = true;
        self.AttachComponent(BeamEmitter[0]);
        BeamEmitter[0].SetBeamEndPoint(2, tarPos);
        BeamEmitter[0].ActivateSystem();

    Read the article

  • Using Cloud OER to Find Fusion Applications On-Premise Service Concrete WSDL URL by Rajesh Raheja

    - by JuergenKress
    In my previous post on Fusion Applications integration, the Fusion Applications OER white paper explained Oracle Enterprise Repository (OER) usage in the applications context, assuming a dedicated OER for your Fusion Applications instance (whether cloud/SaaS or on-premise). Having a dedicated OER instance is recommended, as it can provide customized service metadata and can be used for overall SOA governance in addition to simple service discovery. One of the common queries I get is how on-premise customers without a dedicated OER can find a concrete service WSDL URL for their specific environment using the cloud-hosted OER instance. Read the full article here.
    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Best Practices for SOA 11g Multi Data Center Active-Active Deployment - White Paper

    - by JuergenKress
    Best practice for high availability: this paper describes the recommended active-active solutions that can be used to protect an Oracle Fusion Middleware 11g SOA system against downtime across multiple locations (referred to as the SOA Active-Active Disaster Recovery Solution, or SOA Multi Data Center Active-Active Deployment). It provides the required configuration steps for setting up the recommended topologies and guidance about the performance and failover implications of such a configuration. Get the white paper here.
    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Your free invitation to new CodeAsIs developer community portal

    Hello Friends, here is your free invitation to the new CodeAsIs developer community portal! A one-stop free portal for articles, code, blogs, feeds, discussion forums, news, links and downloads. Register now! Be a community star author and submit articles: register as a member and follow the guidelines, or use the article submission wizard link on the home page. The benefits of submitting articles to www.codeasis.com: 1. Free advertising 2. Boost your personal and business credibility 3. Become part of a great community 4. Massive...

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices - STAGE 4: AUTOMATED DEPLOYMENT

    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration – making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable.

    Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users”. There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

    So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we’ll look at the following areas for your checklist:
    You and Your Team
    Environments
    The Deployment Process
    Rollback and Recovery
    Development Practices

    You and Your Team

    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:
    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

    I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out.

    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans,
    - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:

    What we want: Each developer with their own dedicated database environment. What we’ve got: A single shared “development” environment, used by everyone at once.
    What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we’ve got: In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
    What we want: A separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
    What we want: A proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
    What we want: A production environment reproducible from source control. What we’ve got: A production box which has drifted significantly from anything in source control.

    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:
    - Dedicated development databases,
    - An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    - Pre-production – the environment you use to test the production release process,
    - Production.

    * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:
    - Get your environments set up and ready,
    - Set access permissions appropriately,
    - Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”

    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod,
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script,
    3. User (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
    5. If all working, run the script on production.*

    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions.
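    As an aside, the drift check in step 1 (and the footnote's concern) is straightforward to approximate yourself. A minimal sketch of the idea, using SQLite purely for illustration; a real check would run against your production RDBMS and compare far more than table definitions:

        import sqlite3

        def schema_of(db_path):
            """Return {object_name: creation_sql} for each object in the database."""
            conn = sqlite3.connect(db_path)
            rows = conn.execute(
                "SELECT name, sql FROM sqlite_master WHERE sql IS NOT NULL"
            ).fetchall()
            conn.close()
            return dict(rows)

        def drift(prod_path, preprod_path):
            """List objects whose definitions differ between the two environments."""
            prod, preprod = schema_of(prod_path), schema_of(preprod_path)
            return [n for n in set(prod) | set(preprod) if prod.get(n) != preprod.get(n)]

        if __name__ == "__main__":
            differences = drift("production.db", "preproduction.db")
            if differences:
                raise SystemExit("Drift detected, abort deployment: %s" % differences)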
    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend.

    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”

    NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”
    - Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like:
    - Immediately restore from backup,
    - Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!),
    - Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:
    - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application. So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

    As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily).
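    To make the shape of such an increment concrete, here is a minimal sketch of one backward-compatible step: adding a new column alongside an old one and backfilling it, so existing code keeps working while new code switches over (SQLite and the table and column names are purely illustrative):

        import sqlite3

        conn = sqlite3.connect(":memory:")  # stand-in for the real database
        conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
        conn.execute("INSERT INTO customers (name) VALUES ('Ada')")

        # Increment 1: add the new column. Old code that only knows 'name' still works.
        conn.execute("ALTER TABLE customers ADD COLUMN full_name TEXT")

        # Increment 2: backfill, so new code can begin reading full_name.
        conn.execute("UPDATE customers SET full_name = name WHERE full_name IS NULL")
        conn.commit()

        # Only after nothing reads 'name' any more would a later increment retire it;
        # every step up to that point can be abandoned without losing data.
        print(conn.execute("SELECT id, name, full_name FROM customers").fetchall())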
    There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

    But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.

    Actions:
    - For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes),
    - Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development” – though there will be more detail, depending on where you’re coming from, and where you want to get to.

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles on version control, automated testing, continuous integration and deployment.

    Read the article

  • Kill a tree, save your website? Content strategy in action, part III

    - by Roger Hart
    A lot has been written about how driving content strategy from within an organisation is hard. And that's true. Red Gate is pretty receptive to new ideas, so although I've not had a total walk in the park, it's been a hike with charming scenery. But I'm one of the lucky ones. Lots of people are involved in content, and depending on your organisation some of those people might be the kind who'll gleefully call themselves "stakeholders". People holding a stake generally want to stick it through something's heart and bury it at a crossroads. Winning them over is not always easy. (Richard Ingram has made a nice visual summary of how this can feel - Content strategy Snakes & ladders - pdf)

    So yes, a lot of content strategy advocates are having a hard time. And sure, we've got a nice opportunity to get together and have a hug and a cry, but in the interim we could use a hand. What to do? My preferred approach is, I'll confess, brutal. I'd like nothing so much as to take a scorched-earth approach to our website. Burn it, salt the ground, and build the new one right: focusing on clearly delineated business and user content goals, and instrumented so we can tell if we're doing it right. I'm never getting buy-in for that, but a boy can dream. So how about just getting buy-in for some small, tenable improvements? Easier, but still non-trivial. I sat down for a chat with our marketing and design guys. It seemed like a good place to start, even if they weren't up for my "Ctrl-A + Delete" solution. We talked through some of this stuff, and we pretty much agreed that our content is a bit more broken than we'd ideally like. But to get everybody on board, the problems needed visibility.

    Doing a visual content inventory

    Print out the internet. Make a Wall Of Content. Seriously. If you've already done a content inventory, you know your architecture, and you know the scale of the problem. But it's quite likely that very few other people do. So make it big and visual. I'm going to carbon hell, but it seems to be working. This morning, I printed out a tiny, tiny part of our website: the non-support content pertaining to SQL Compare. I made big, visual, A3 blowups of each page, and covered a wall with them. A page per web page, spread over something like 6m x 2m, with metrics, right in front of people. Even if nobody reads it (and they are doing) the sheer scale is shocking. 53 pages, all told. Some are redundant, some outdated, some trivial, a few fantastic, and frighteningly many that are great ideas delivered not-quite-right. You have to stand quite far away to get it all in your field of vision.

    For a lot of today, a whole bunch of folks have been gawping in amazement, talking each other through it, peering at the details, and generally getting excited about content. Developers, sales guys, our CEO, the marketing folks - they're engaged. Will it last? I make no promises. But this sort of wave of interest is vital to getting a content strategy project kicked off. While the content strategist is a saucer-eyed orphan in the cupboard under the stairs, they're not getting a whole lot done. Of course, just printing the site won't necessarily cut it. You have to know your content, and be able to talk about it. Ideally, you'll also have page view and time-on-page metrics. One of the most powerful things you can do is, when people are staring at your wall of content, ask them what they think half of it is for. Pretty soon, you've made a case for content strategy.

    We're also going to get folks to mark it up - cover it with notes and post-its, let us know how they feel about our content. I'll be blogging about how that goes, but it's exciting. Different business functions have different needs from content, so the more exposure the content gets, and the more feedback, the more you know about those needs. Fingers crossed for awesome.

    Read the article
