Search Results

Search found 10644 results on 426 pages for 'flash integration'.

  • Ubuntu Karmic 9.10 live image on USB - not working

    - by Vivek Sharma
    This is my configuration: a 4GB HP pendrive, the ubuntu-9.10-desktop-i386 image file for a live USB install, pendrivelinux (u910p) and UNetbootin (unetbootin.sourceforge.net), on a T61 machine. Earlier I have installed an Ubuntu live image using the two utilities mentioned above numerous times, but on a 2GB Kingston flash drive. Today, I am trying to install the live image on the 4GB HP flash drive. Both utilities install; I can see the files in the drive, and even the Wubi installer is working - it says press "reboot" to boot into live Ubuntu. But when I press "reboot" it does not reboot my Win7. Now, when I reboot and select boot-usb in the BIOS, it says "no boot record". I am making my USB bootable using the utility, and even then nothing is working out. Did this a few times. Is a 4GB USB drive a problem? Does anyone know how to partition my USB into two 2GB partitions, install it on one partition, and then use the live image? Is it possible?

    Read the article

  • Changing size of window when testing Adobe AIR mobile applications

    - by Peter
    I'm making an Android mobile phone application in Flash CS5.5. I set the width/height of the stage to 480/800 px. When I hit Ctrl+Enter to test run the application, I get a window that is 480/800 px. It cannot be resized. I want to change the size of that window WITHOUT changing the stage width/height. For example, if I run the APK on a mobile phone with a 1000x1000 display, the Flash content will scale automatically to fit the 480/800 stage to the 1000x1000 screen. So it should be possible to change the window size to 1000x1000 without having to change the stage width/height. But how?
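
    A minimal ActionScript 3 sketch of the scaling half of this (the ScaleDemo class name is illustrative, not from the question): setting the stage's scaleMode explicitly makes the "fit the 480x800 stage to the real screen" behaviour deliberate rather than a default you happen to get.

    package {
        import flash.display.Sprite;
        import flash.display.StageAlign;
        import flash.display.StageScaleMode;

        public class ScaleDemo extends Sprite {
            public function ScaleDemo() {
                // SHOW_ALL scales the authored 480x800 stage to fit the
                // actual screen, preserving aspect ratio (letterboxing
                // if the ratios differ), so the stage size never changes.
                stage.scaleMode = StageScaleMode.SHOW_ALL;
                stage.align = StageAlign.TOP_LEFT;
            }
        }
    }

    (With StageScaleMode.NO_SCALE instead, stage.stageWidth/stageHeight track the real window size and the stage dispatches Event.RESIZE, at the cost of doing the layout yourself.) As for the test window itself, when launching outside the IDE with AIR's debug launcher (adl), its -screensize argument should let you simulate a different screen size without touching the stage dimensions.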

    Read the article

  • Problem with running a program from flashdrive

    - by rajivpradeep
    I have a USB drive with two partitions in it, one hidden and one normal. I have an application which swaps the partitions and runs the flash application in the hidden zone. The problem is that the application works fine on Windows 7, but when run on Windows XP it swaps the partitions yet doesn't run the flash application - it just keeps running in the background. I can see it in Task Manager. But when I copy the application to the desktop and run it, it runs with no glitch. I was facing the same problem on Windows 7 too, but it ran as required when I used "Run in XP mode", and then I applied a shim and it has been running as required since. The application is built using VC++ 2008. Does anyone know the solution?

    Read the article

  • Flash VS HTML 5 - A Web Design Agency's Dilemma

    The iPad was released on the Australian market last week to the usual Apple hype. People lined up outside the iconic Apple store to be the first to get to play with the new toy. Regarded as a revolution in the way we browse the web, it has brought with it a new headache for all designers and developers of websites.

    Read the article

  • WordPress and EverNote integration, any existing solutions?

    - by JXITC
    Actually, the tag and category systems in WordPress and EverNote are very similar, almost exactly the same. We can create a notebook/notebook group in EverNote; its counterpart in WordPress is a category/sub-category. We can apply multiple tags to any note in EverNote; similarly, we can tag any post in WordPress as well! Since the two are so similar in how they categorize articles, I am wondering whether there is an existing solution to automatically convert between the two. Like one click to post my EverNote note to WordPress under the same category and same tags, or to pull down my WordPress article to my EverNote account under the same category and same tags. Preferably it would handle image/video uploading issues as well. :) I saw a similar question asked years ago: Posting from evernote to wordpress. So I was wondering, is there any update on this idea now?

    Read the article

  • The Truth About Flash - Apple Vs Adobe

    Every emerging technology generation seems to result in a battle of platforms and ideologies - a war between companies for the hearts, minds, dollars and loyalty of consumers for their system of choice. Memories of Microsoft's Internet Explorer finally landing the fatal blow to Netscape, or Google's meteoric rise to power over Yahoo (and the world), are now but footnotes in the history of humanity's technological revolution. But no sooner are they forgotten than we are plunged into the middle of another war - perhaps the most vicious yet, and the one that may just have the most impact on our...

    Read the article

  • Windows doesn't recognise my USB key anymore (it used to work)

    - by dominicbri7
    I use my friend's USB flash drive (Corsair Flash Voyager 16GB) to transfer files from my laptop to my desktop computer. However, for the past couple of days my laptop has stopped recognizing the USB key, while there are still no problems with any other computers. I use Windows 7 64-bit, if that helps. I tried uninstalling the driver, rebooting and all those kinds of tricks, but it won't work. When I connect it and open the "My computer" window, I see "Removable Disk (G:)" for a moment, then it disappears... then it reappears again, and it keeps doing that periodically. I can't even right-click then hit "Properties" because it disappears. As I recall, it DOES work on every other computer. I think it has to do with the driver, but what can I do?

    Read the article

  • Page Flip Flash Technology to Save the Environment

    With many more people becoming aware of the global climate change that is taking place around us, an increasing number of them are starting to understand their negative impact on the environment. Thankfully, a lot of them are taking steps to mitigate that negative impact.

    Read the article

  • ID Badge Access System for Building with Active Directory Integration [closed]

    - by Alex
    I hope this is the right place for this question. We're looking into setting up a building access system that uses badges or cards of some kind. I wanted to ask the users on here if they've had to do such setups and whether they have any recommendations. Is there maybe a system that integrates with Active Directory? I know one of the things our managers want is to be able to run reports on when people enter the buildings. I'd appreciate any suggestions, and thanks in advance!

    Read the article

  • Autofac WCF integration + sessions

    - by Michael Sagalovich
    I have an ASP.NET MVC 3 application that collaborates with a WCF service, which is hosted using an Autofac host factory. Here are some code samples:

    .svc file:

        <%@ ServiceHost Language="C#" Debug="true"
            Service="MyNamespace.IMyContract, MyAssembly"
            Factory="Autofac.Integration.Wcf.AutofacServiceHostFactory, Autofac.Integration.Wcf" %>

    Global.asax of the WCF service project:

        protected void Application_Start(object sender, EventArgs e)
        {
            ContainerBuilder builder = new ContainerBuilder();
            // Here I perform all registrations, including the implementation of IMyContract
            AutofacServiceHostFactory.Container = builder.Build();
        }

    Client proxy class constructor (MVC side):

        ContainerBuilder builder = new ContainerBuilder();
        builder.Register(c => new ChannelFactory<IMyContract>(
                new BasicHttpBinding(),
                new EndpointAddress(Settings.Default.Url_MyService)))
            .SingleInstance();
        builder.Register(c => c.Resolve<ChannelFactory<IMyContract>>().CreateChannel())
            .UseWcfSafeRelease();
        _container = builder.Build();

    This works fine until I want the WCF service to allow or require sessions ([ServiceContract(SessionMode = SessionMode.Allowed)] or [ServiceContract(SessionMode = SessionMode.Required)]) and to share one session with the MVC side. I changed the binding to WSHttpBinding on the MVC side, but I am getting different exceptions depending on how I tune it. I also tried changing AutofacServiceHostFactory to AutofacWebServiceHostFactory, with no result. I am not using a config file, as I am mainly experimenting rather than developing a real-life application, but I need to study the case. But if you think I can achieve what I need only with config files, then OK, I'll use them. I will provide exception details for each combination of settings if required; I'm omitting them so as not to make the post too large. Any ideas on what I can do?

    Read the article

  • Integration test failing through NUnit Gui/Console, but passes through TestDriven in IDE

    - by Cliff
    I am using NHibernate against an Oracle database with the NHibernate.Driver.OracleDataClientDriver driver class. I have an integration test that pulls back expected data properly when executed through the IDE using TestDriven.net. However, when I run the unit test through the NUnit GUI or console, NHibernate throws an exception saying it cannot find the Oracle.DataAccess assembly. Obviously, this prevents me from running my integration tests as part of my CI process.

    NHibernate.HibernateException: The IDbCommand and IDbConnection implementation in the assembly Oracle.DataAccess could not be found. Ensure that the assembly Oracle.DataAccess is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use the element in the application configuration file to specify the full name of the assembly.

    I have tried making the assembly available in two ways: by copying it into the bin\debug folder, and by adding the element in the config file. Again, both methods work when executing through TestDriven in the IDE. Neither works when executing through the NUnit GUI/console. The NUnit GUI log displays the following message:

    21:42:26,377 ERROR [TestRunnerThread] ReflectHelper [(null)] - Could not load type Oracle.DataAccess.Client.OracleConnection, Oracle.DataAccess.
    System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess, Version=2.111.7.20, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format.
    File name: 'Oracle.DataAccess, Version=2.111.7.20, Culture=neutral, PublicKeyToken=89b483f429c47342'
    --- System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess' or one of its dependencies. An attempt was made to load a program with an incorrect format.
    File name: 'Oracle.DataAccess'

    I am running NUnit 2.4.8, TestDriven.net 2.24 and VS2008 SP1 on Windows 7 64-bit; Oracle Data Provider v2.111.7.20, NHibernate v2.1.0.4. Has anyone run into this issue or, better yet, fixed it?

    Read the article

  • Enterprise integration of disparate systems

    - by Chris Latta
    We're about to embark on a fairly large integration effort to kill off a bunch of Access and SQL Server databases and get everything into one coherent enterprise system. There are also a number of other systems (accounting, CRM, payroll, MS Exchange) that hold critical data that we need to integrate (use for data validation in other systems), report on and otherwise expose. It is likely that some of these systems will change in the next few years, so we need to isolate our systems to be ready for change.

    Ideally we would be able to expose our forms in a consistent manner across as many of our systems as possible without having to re-develop them for each system. We are currently targeting SharePoint (2007 and soon 2010), Office (2007 and soon 2010 - Word, Excel, PowerPoint and Outlook), Reporting Services, .Net console applications, .Net Windows applications, shell extensions, and with the possibility of exposing some functionality on mobile devices (BlackBerries currently, maybe iPhones later) and via our website. We're moving development to Visual Studio 2010 (from 2005) ahead of migrating to SharePoint 2010 and Office 2010. Given that most of our development is presently targeted to the .Net framework (mostly in C#), it seems logical to stick with this unless there is some compelling reason to switch frameworks/platforms for some aspects.

    We're thinking of your standard Database - Data Integration layer - Business Objects layer - Web Services (or REST) layer - Client Application stack, plus doing our own client application with WPF (or something else?) forms that can also be exposed in the MS systems (SharePoint, Office, Windows). So, we don't want much, just everything :) Basically we need to isolate ourselves from database and systems changes, create an API that can be used throughout our systems and then make this functionality available in our client applications.

    I'm very keen to get pointers from anyone who has tips on how to pull this off. Should we look at the Enterprise Library as a place to start? Is REST with ASP.Net MVC2 a better solution than Web Services for a system like this? Will WPF deliver forms re-use or is there something better?

    Read the article

  • Add child to scene from within a class.

    - by Fecal Brunch
    Hi, I'm new to Flash in general and have been writing a program with two classes that extend MovieClip (Stems and Star). I need to create a new Stems object as a child of the scene when the user stops dragging a Star object, but do not know how to reference the scene from within the Star class's code. I've tried passing the scene into the constructor of the Star and doing something like: this.scene.addChild(new Stems()); But apparently that's not how to do it... Below is the code for Stems and Star; any advice would be appreciated greatly.

    package {
        import flash.display.MovieClip;
        import flash.events.*;
        import flash.utils.Timer;

        public class Stems extends MovieClip {
            public const centreX=1026/2;
            public const centreY=600/2;
            public var isFlowing:Boolean;
            public var flowerType:Number;
            public const outerLimit=210;
            public const innerLimit=100;

            public function Stems(fType:Number) {
                this.isFlowing=false;
                this.scaleX=this.scaleY= .0007* distanceFromCentre(this.x, this.y);
                this.setXY();
                trace(distanceFromCentre(this.x, this.y));
                if (fType==2) {
                    gotoAndStop("Aplant");
                }
            }

            public function distanceFromCentre(X:Number, Y:Number):int {
                return (Math.sqrt((X-centreX)*(X-centreX)+(Y-centreY)*(Y-centreY)));
            }

            public function rotateAwayFromCentre():void {
                var theX:int=centreX-this.x;
                var theY:int = (centreY - this.y) * -1;
                var angle = Math.atan(theY/theX)/(Math.PI/180);
                if (theX<0) {
                    angle+=180;
                }
                if (theX>=0&&theY<0) {
                    angle+=360;
                }
                this.rotation = ((angle*-1) + 90)+180;
            }

            public function setXY() {
                do {
                    var tempX=Math.random()*centreX*2;
                    var tempY=Math.random()*centreY*2;
                } while (distanceFromCentre (tempX, tempY)>this.outerLimit || distanceFromCentre (tempX, tempY)<this.innerLimit);
                this.x=tempX;
                this.y=tempY;
                rotateAwayFromCentre();
            }

            public function getFlowerType():Number {
                return this.flowerType;
            }
        }
    }

    package {
        import flash.display.MovieClip;
        import flash.events.*;
        import flash.utils.Timer;

        public class Star extends MovieClip {
            public const sWide=1026;
            public const sTall=600;
            public var startingX:Number;
            public var startingY:Number;
            public var starColor:Number;
            public var flicker:Timer;
            public var canUpdatePos:Boolean=true;
            public const innerLimit=280;

            public function Star(color:Number, basefl:Number, factorial:Number) {
                this.setXY();
                this.starColor=color;
                this.flicker = new Timer (basefl + factorial * (Math.ceil(100* Math.random ())));
                this.flicker.addEventListener(TimerEvent.TIMER, this.tick);
                this.addEventListener(MouseEvent.MOUSE_OVER, this.hover);
                this.addEventListener(MouseEvent.MOUSE_UP, this.drop);
                this.addEventListener(MouseEvent.MOUSE_DOWN, this.drag);
                this.addChild (new Stems (2));
                this.flicker.start();
                this.updateAnimation(0, false);
            }

            public function distanceOK(X:Number, Y:Number):Boolean {
                if (Math.sqrt((X-(sWide/2))*(X-(sWide/2))+(Y-(sTall/2))*(Y-(sTall/2)))>innerLimit) {
                    return true;
                } else {
                    return false;
                }
            }

            public function setXY() {
                do {
                    var tempX=this.x=Math.random()*sWide;
                    var tempY=this.y=Math.random()*sTall;
                } while (distanceOK (tempX, tempY)==false);
                this.startingX=tempX;
                this.startingY=tempY;
            }

            public function tick(event:TimerEvent) {
                if (this.canUpdatePos) {
                    this.setXY();
                }
                this.updateAnimation(0, false);
                this.updateAnimation(this.starColor, false);
            }

            public function updateAnimation(color:Number, bright:Boolean) {
                var brightStr:String;
                if (bright) {
                    brightStr="bright";
                } else {
                    brightStr="low";
                }
                switch (color) {
                    case 0 :
                        this.gotoAndStop("none");
                        break;
                    case 1 :
                        this.gotoAndStop("N" + brightStr);
                        break;
                    case 2 :
                        this.gotoAndStop("A" + brightStr);
                        break;
                    case 3 :
                        this.gotoAndStop("F" + brightStr);
                        break;
                    case 4 :
                        this.gotoAndStop("E" + brightStr);
                        break;
                    case 5 :
                        this.gotoAndStop("S" + brightStr);
                        break;
                }
            }

            public function hover(event:MouseEvent):void {
                this.updateAnimation(this.starColor, true);
                this.canUpdatePos=false;
            }

            public function drop(event:MouseEvent):void {
                this.stopDrag();
                this.x=this.startingX;
                this.y=this.startingY;
                this.updateAnimation(0, false);
                this.canUpdatePos=true;
            }

            public function drag(event:MouseEvent):void {
                this.startDrag(false);
                this.canUpdatePos=false;
            }
        }
    }

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices - STAGE 4: AUTOMATED DEPLOYMENT

    If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then setting up automated deployment (the middle stage is split in two – basic and advanced continuous integration – making four stages in total). If you've managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live?

    Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users"

    There's another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and hey presto, I'm ready? If only it were that simple. Below I list some of the areas where it's worth spending a little time, where a little planning and prep could go a long way.
    It's also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you've got a CI mechanism in place, you're certainly a long way down that path. Nevertheless, we'd recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we'll look at the following areas for your checklist: You and Your Team; Environments; The Deployment Process; Rollback and Recovery; Development Practices.

    You and Your Team

    It's a cliché in the DevOps community that "It's not all about processes and tools, really it's all about a culture". As stated in this DevOps report from Puppet Labs: "DevOps processes and tooling contribute to high performance, but these practices alone aren't enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn't understood outside of a specific group". Like most clichés, there's truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it's an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you're ever struggling to convince someone of the value, I'd strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

    I've spoken to many customers who have implemented database CI who describe their deployment process as "The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that's finished we revert to manual." This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA "We're changing everything you do and your toolset next week, to automate most of your role – that's okay, isn't it?" isn't likely to go down well.
    There's some work here to bring him/her onside – to explain what you're doing, why there will still be control of the deployment process and so on. Or of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out.

    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do or, if you're the DBA yourself, get the dev team up-to-speed with your plans.
    - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "What we want, to do continuous delivery properly" and "What we're currently stuck with". Some of these differences are:

    What we want: Each developer with their own dedicated database environment.
    What we've got: A single shared "development" environment, used by everyone at once.

    What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine.
    What we've got: In fact, if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!). Whether you have a full suite of unit tests running is a different question…

    What we want: A separate QA environment used explicitly for manual testing prior to release.
    What we've got: "We just test on the dev environments, or maybe pre-production."

    What we want: A proper pre-production (or "staging") box that matches production as closely as possible.
    What we've got: Hopefully a pre-production box of some sort. But does it match production closely!?

    What we want: A production environment reproducible from source control.
    What we've got: A production box which has drifted significantly from anything in source control.

    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application.
    There's a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

    - Dedicated development databases.
    - An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action.]
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing.
    - Pre-production – the environment you use to test the production release process.
    - Production.

    * A note on the use of the word "automatic" – when carrying out automated deployments, this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it's not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:
    - Get your environments set up and ready.
    - Set access permissions appropriately.
    - Make sure everyone understands what the environments will be used for (it's not a "free-for-all" with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers "How do your database changes get live? How does your manual process work?"

    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it in to pre-prod.
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script.
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
    5. If all is working, run the script on production.*

    * This assumes there's no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you're interested in testing early versions.

    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can't automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don't match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks "Okay, I'm happy for that to go live", the latter stages automatically take the script through to live. And anything in between, of course – and other variations. But we'd strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?"

    NB: Most of what we're discussing here is about production deployments. It's important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?"
    - Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don't – and when things go wrong, you need a rollback or recovery plan for what you're going to do in that situation. Once you move in to a more automated database deployment process, you're far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we'll explore in subsequent articles, things like:

    - Immediately restore from backup.
    - Have a pre-tested rollback script (remembering that really this is a "roll-forward" script – there's not really such a thing as a rollback script for a database!).
    - Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:
    - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and your requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and the linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern "Branch by abstraction". Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

    As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

    But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there's a difference here between migrating old projects and starting afresh – with the latter it's much easier to instigate best practice from the start.

    Actions:
    - For your business, work out how far down the path you want to go, amending your database development patterns to "best practice". It's a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes).
    - Socialise these changes with your development group. No-one likes having "best practice" changes imposed on them, so it's good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We've covered some of the checklist of areas to consider – mainly in the areas of "Getting the team ready for the changes that are coming" and "Planning out your pipeline, environments, patterns and practices for development" – though there will be more detail, depending on where you're coming from, and where you want to get to.

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • What is the best way to test using grails using IDEA?

    - by egervari
    I am seriously having a very unpleasant time testing with Grails. I will describe my experience, and I'd like to know if there's a better way.

    The first problem I have with testing is that Grails doesn't give immediate feedback to the developer when .save() fails inside of an integration test. So let's say you have a domain class with 12 fields, and 1 of them is violating a constraint and you don't know it when you create the instance... it just doesn't save. Naturally, the test code afterward is going to fail. This is most troublesome because the thingy under test is probably fine... and the real risk and pain is the setup code for the test itself. So, I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced by everyone working on the project... and it's kind of bloaty. It'd be nice to turn this on automatically for code that is running as part of a unit test.

    Integration tests run slow. I cannot understand how 1 integration test that saves 1 object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests talking to an actual database and doing dbunit dumps after every test to happen in about the same time! This is dumb.

    It is hard to run all the unit tests and not the integration tests in IDEA. Integration tests are a massive pain. IDEA actually shows a GREEN BAR when integration tests fail. The output given by Grails indicates that something failed, but it doesn't say what it was. It says to look in the test reports... which forces the developer to launch up their file system to hunt the stupid html file down. What a pain. Then once you've got the html file and clicked through to the failing test, it'll tell you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you've got to go back and find it yourself. ARGGH!@!@! Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it.

    Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful. After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.

    Read the article

  • tfs integration with delphi 2010

    - by Robert McCabe
    We are currently upgrading from Delphi 7 to Delphi 2010. With Delphi 7 we use Source Connection to integrate with TFS, but it does not look like there will be a Delphi 2010 version in time. Is there any other integration option out there?

    Read the article

  • JDesktop Integration Components binary has stopped working

    - by rot
    I googled it but was not able to find a solution. I am working with the Swing APIs in org.jdesktop.jdic.browser.*. When I click on a link displayed in the Java browser created using the above APIs, Windows gives an error pop-up message: "JDesktop Integration Components binary has stopped working". The JDK version I am using is 1.6.0_31-b05. Please let me know a workaround or how to solve the issue. Thanks in advance.

    Read the article
