Search Results

Search found 18940 results on 758 pages for 'existing installation'.

Page 312/758

  • Adding Async=true to the page - no side effects noticed.

    - by Michael Freidgeim
    Recently I needed to implement PageAsyncTask in a .NET 4 Web Forms application. According to http://msdn.microsoft.com/en-us/library/system.web.ui.pageasynctask.aspx: "A PageAsyncTask object must be registered to the page through the RegisterAsyncTask method. The page itself does not have to be processed asynchronously to execute asynchronous tasks. You can set the Async attribute to either true (as shown in the following code example) or false on the page directive and the asynchronous tasks will still be processed asynchronously: <%@ Page Async="true" %> When the Async attribute is set to false, the thread that executes the page will be blocked until all asynchronous tasks are complete." I was worried about possible side effects if I set Async=true on an existing page. The only documented restrictions that I found are that Async is not compatible with the AspCompat and Transaction attributes (from the @ Page directive MSDN article). In other words, asynchronous pages do not work when the AspCompat attribute is set to true or the Transaction attribute is set to a value other than Disabled in the @ Page directive. From our tests we conclude that adding Async=true to the page is quite safe, even if you don't always call async tasks from the page.
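    For reference, here is a minimal sketch (not from the original post) of what registering a PageAsyncTask on a page marked Async="true" might look like in a Web Forms code-behind; the class name, URL and handler names are made up for illustration:

        using System;
        using System.Net;
        using System.Web.UI;

        public partial class ReportPage : Page
        {
            private WebRequest _request;

            protected void Page_Load(object sender, EventArgs e)
            {
                // The task runs asynchronously whether the directive says Async="true" or "false";
                // with "false" the page thread is blocked until the task completes.
                RegisterAsyncTask(new PageAsyncTask(BeginWork, EndWork, TimeoutWork, null));
            }

            private IAsyncResult BeginWork(object sender, EventArgs e, AsyncCallback cb, object state)
            {
                _request = WebRequest.Create("http://example.com/slow-service");
                return _request.BeginGetResponse(cb, state);
            }

            private void EndWork(IAsyncResult ar)
            {
                using (var response = _request.EndGetResponse(ar))
                {
                    // consume the response and bind results to page controls here
                }
            }

            private void TimeoutWork(IAsyncResult ar)
            {
                // the task did not finish within AsyncTimeout; degrade gracefully
            }
        }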

    Read the article

  • How to install Sweetcron on XAMPP

    - by Sushaantu
    This tutorial will take you through the installation steps required to install Sweetcron on XAMPP. I am taking the liberty to assume that you have already installed XAMPP. First of all, download Sweetcron and copy the extracted "sweetcron" folder inside the htdocs folder in the XAMPP directory. You have to get a few things in place before installing Sweetcron:
    1. Create a sweetcron database using MySQL. You can just use the name "sweetcron" and leave all the other settings, such as the MySQL connection collation, as they are. Now that you have the database ready you can configure a few settings before making Sweetcron work on XAMPP.
    2. Open the config-sample PHP file with your text editor (something like Notepad++ or Komodo Edit is recommended), which is located in Sweetcron/system/application/config. You have to make a few changes in it.
       a. In the first setting, edit the value of $config['base_url']. The default value is "http://www.your-site.com"; and you have to change that into "http://localhost/sweetcron/";
       b. You also have to change the default setting of $config['uri_protocol']. The value that you will see is "REQUEST_URI"; but you need to change that into "AUTO";
       c. Now that we have made all the changes, you can rename the file from config-sample to just config.
    3. Open database-sample.php in the text editor. You need to make the edits regarding the database in here.
       a. The values of the database at the moment are like this:
          $db['default']['hostname'] = "localhost";
          $db['default']['username'] = "";
          $db['default']['password'] = "";
          $db['default']['database'] = "";
          $db['default']['dbdriver'] = "mysql";
          $db['default']['dbprefix'] = "";
          $db['default']['pconnect'] = TRUE;
          $db['default']['db_debug'] = TRUE;
          $db['default']['cache_on'] = FALSE;
          $db['default']['cachedir'] = "";
          $db['default']['char_set'] = "utf8";
          $db['default']['dbcollat'] = "utf8_general_ci";
          You have to change that into:
          $db['default']['hostname'] = "localhost";
          $db['default']['username'] = "root";
          $db['default']['password'] = "";
          $db['default']['database'] = "sweetcron";
          $db['default']['dbdriver'] = "mysql";
          $db['default']['dbprefix'] = "";
          $db['default']['pconnect'] = TRUE;
          $db['default']['db_debug'] = TRUE;
          $db['default']['cache_on'] = FALSE;
          $db['default']['cachedir'] = "";
          $db['default']['char_set'] = "utf8";
          $db['default']['dbcollat'] = "utf8_general_ci";
          We have written the username as root along with the name of the database (sweetcron in my case). Since I was not using any password in XAMPP for the sweetcron database, I have left the password option empty. You can make suitable changes according to your system. Write down your password in the third line if you are using one in XAMPP.
       b. Now that we have made all the edits, we can change the file name from database-sample to just database.
    4. That leaves us with only one setting, and that is editing values in the .htaccess file with our text editor. The default values you will have in the .htaccess file are:
          Options +FollowSymLinks
          RewriteEngine On
          RewriteBase /
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteRule ^(.*)$ index.php?/$1 [L]
       You just have to add "sweetcron" after RewriteBase in the third line:
          Options +FollowSymLinks
          RewriteEngine On
          RewriteBase /sweetcron
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteRule ^(.*)$ index.php?/$1 [L]
    5. Now you are all set and done. You can access Sweetcron on localhost by going to http://localhost/sweetcron/. You will see some text at the top which prompts you to click on one script. Click that script and behold, you have your Sweetcron installation on XAMPP ready. Further, it will ask you to add details such as Lifestream name, username and email address. Fill in those details and you will reach the admin panel.

    Read the article

  • 8 Deadly Commands You Should Never Run on Linux

    - by Chris Hoffman
    Linux’s terminal commands are powerful, and Linux won’t ask you for confirmation if you run a command that will break your system. It’s not uncommon to see trolls online recommending new Linux users run these commands as a joke. Learning the commands you shouldn’t run can help protect you from trolls while increasing your understanding of how Linux works. This isn’t an exhaustive guide, and the commands here can be remixed in a variety of ways. Note that many of these commands will only be dangerous if they’re prefixed with sudo on Ubuntu – they won’t work otherwise. On other Linux distributions, most commands must be run as root. Image Credit: Skull and Crossbones remixed from Jason Ford on Twitter

    Read the article

  • Using Visual Studio as a Task-Focused IDE

    - by Jay Stevens
    Are there patterns or libraries or any official Microsoft SDK for using Visual Studio as a specifically Task-Focused UI? For example, both Revolution R (IDE for the R language) and SQL 2012 (and I think SQL 2008 and possibly 2005) use Visual Studio as the underlying IDE framework. Is there an officially supported SDK and/or examples/samples for doing this type of thing? I am building a language Parser for an existing language - whose only available IDE is INSANELY expensive - using Irony (and eventually will generate a Language Service as well). Any direct or indirect suggestions/answers are appreciated.

    Read the article

  • Scripting a sophisticated RTS AI with Lua

    - by T. Webster
    I'm planning to develop a somewhat sophisticated RTS AI (e.g. see BWAPI). I have experience programming, but none in game development, so it seems easiest to start by scripting the AI of an existing game I've played, Warhammer 40k: Dawn of War (2004). As far as I can tell, the game AI is scripted with some variant of Lua (by the file extension .ai or .scar). The online documentation is sparse and the community isn't active anymore. I'd like to get some idea of the difficulty of this undertaking. Is it practical with a scripting language like Lua to develop an RTS AI that includes FSMs, decision trees, case-based reasoning, and transposition tables? If someone has any experience scripting Dawn of War, that would also help.

    Read the article

  • Internal WordPress pages all 404 when using WAMP

    - by rlesko
    I have a problem when using WAMP while designing and coding my site. I have a local version of WordPress installed in WAMP and use it as a tool to see my changes when coding. I also have the same files uploaded to some free hosting. The problem is when I want to access, for example, http://localhost/contact - the browser gives me a 404 error. As I said, the same files (an exact copy from my PC) are on the web here, and when I go to http://thefalljourneyindia.iblogger.org/contact it opens the page fine (without the style of course, because I haven't made one yet). Why is that, and how do I get to see all of my pages locally like I can when the WP installation is online?

    Read the article

  • LiveCD does not work on my desktop

    - by Boris
    I've installed Oneiric on my laptop without any issue using the LiveCD downloaded here (from the French Ubuntu community server). But on my desktop, weird things happen: During the 1st try booting with the LiveCD on my desktop, my 2-year-old child just hit the keyboard, and after several error messages the desktop loaded and I was able to test Oneiric. But I wanted to redo a boot before installing Oneiric to avoid mistakes. So on the 2nd try to boot with the LiveCD, I couldn't get to the point where I can choose to test or install. Before trying a 3rd time, I "cleaned the system" from System Parameter System. But after that I'm still not able to get to the point where I can choose to test or install. Now it stops all the time on a black screen. I do not understand why several boot attempts with the same CD have different results. So I wonder if the state of my current 11.04 installation can affect re-booting with my 11.10 CD?

    Read the article

  • Name a Byobu session?

    - by Ashimema
    Is there a way to create identifiable Byobu sessions so that when I've got multiple sessions running, the byobu-select-session menu gives me a list of sessions I can recognize, as opposed to non-descript tmux port numbers? In an ideal world, it would be great to be able both to start a session giving it a name and to modify such a session to change its name if it's already running. Is this possible, and how? Edit 1: Some further details: I'm using tmux as the backend and don't especially want to switch back to screen. I've now tried starting a session with byobu -S "Name" to no avail :-( Edit 2: Some discoveries: I've now discovered a partial answer in using tmux native commands: tmux rename-session <current-name> <new-name> renames an existing session, and tmux new -s session_name creates a new named session. I'm surprised byobu -S "name" isn't linked to tmux new -s session_name for byobu with a tmux backend.

    Read the article

  • APress Deal of the Day 23/May/2014 - Pro WPF 4.5 in C#

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/05/23/apress-deal-of-the-day-23may2014---pro-wpf-4.5.aspx
    Today’s $10 Deal of the Day from APress at http://www.apress.com/9781430243656 is Pro WPF 4.5 in C#. “This book shows you how Windows Presentation Foundation really works. It provides you with the no-nonsense, practical advice that you need in order to build high-quality WPF applications quickly and easily. Pro WPF 4.5 in C# provides a thorough, authoritative guide to how WPF really works. Packed with no-nonsense examples and practical advice you'll learn everything you need to know in order to use WPF in a professional setting. The book begins by building a firm foundation of elementary concepts, using your existing C# skills as a frame of reference, before moving on to discuss advanced concepts and demonstrate them in a hands-on way that emphasizes the time and effort savings that can be gained.”

    Read the article

  • SCCM2012 R2 – How to integrate MDT with SCCM

    - by Waclaw Chrabaszcz
    Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2013/11/12/sccm2012-r2--how-to-integrate-mdt-with-sccm.aspx
    There are two maybe not competitive but parallel products: Microsoft Deployment Toolkit and System Center Configuration Manager. A few years ago I was wondering why they are separate, and why I cannot share Task Sequences between them. And as usually happens in life, while I was focused on other technologies, MDT and SCCM became best friends. Let's integrate MDT with SCCM:
    As a first step you need to download MDT: http://www.microsoft.com/en-us/download/details.aspx?id=25175
    Install MDT on your SCCM server box, accept the EULA, and join the CEIP if you like.
    Once you have completed the installation I would recommend you complete the MDT configuration before integration with SCCM. Start the Deployment Workbench, install updates (you will need to download and install WAIK), create a new deployment share, and leave the default values.
    Create the MDT database. Make sure you create a separate database; DO NOT use the existing SCCM DB.
    Now we are ready to integrate MDT with SCCM; the integration tool should discover your server automatically. After reopening the SCCM console, you should have new cool features in task sequences. How to use them? That's another story …

    Read the article

  • Keyring Password, Unity in 11.10

    - by Collin Owens
    Logging in to 11.10, I input my password and shortly afterward I am asked for a keyring password. I realize that I was asked for this during installation (second time lucky!) and I did enter a password (what a mistaka to maka). This now entails my having to input the keyring password on every boot up! Looking at previous answers it would seem that Applications - Accessories - Passwords and Encryption Keys was the suggested route. However I assume that was in Gnome (at this stage I look back in fondness!!!). Certainly, I don't get the same route in Unity! I saw a reference to seahorse in a terminal - but this results in several error reports and a sub-window which does not seem to open. The objective in this exercise is to log in using the login password and not also the keyring password! Any help would be appreciated - thank you.

    Read the article

  • Gnome panel not found

    - by emilbochnik
    Hi, I installed Ubuntu 10.10 on my laptop. First-time Ubuntu user ever. After a successful installation there is only a panel on top, with a small Ubuntu logo on the left and system/connections, time, keyboard and volume icons on the right. There is no menu and I am not able to create a menu. Right-clicking on the panel gives no options. I tried everything, but it could be the most basic thing, as I have no experience with Ubuntu. Please can you help me to resolve this issue. Thank you, bochnik

    Read the article

  • I attempted to install cuda drivers, and now almost everything is broken

    - by Nathan
    I tried to install the CUDA toolkit 5.0 on Ubuntu 12.04 and the installation proceeded OK. After rebooting, however, all I get is a terminal login. I've attempted to start the display manager with sudo start lightdm and it says all OK, but the prompt just hangs when it finishes loading without ever starting any display manager. sudo start gdm just flickers and goes back to the prompt. There is no network connection on the machine either. It usually connects to a wireless network with remembered credentials. What can I do to recover the machine?

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices, Stage 4: Automated Deployment
    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users.” There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist:
    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices
    You and Your Team
    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:
    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)
    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.
    Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans.
    - Get your boss involved too and make sure he/she is bought in to the investment.
    Environments
    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:
    - What we want: Each developer with their own dedicated database environment. What we’ve got: A single shared “development” environment, used by everyone at once.
    - What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine. What we’ve got: In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: Separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
    - What we want: A proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: A production environment reproducible from source control. What we’ve got: A production box which has drifted significantly from anything in source control.
    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:
    - Dedicated development databases,
    - An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    - Pre-production. The environment you use to test the production release process,
    - Production.
    * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.
    Actions:
    - Get your environments set up and ready,
    - Set access permissions appropriately,
    - Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).
    The Deployment Process
    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”
    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod,
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script,
    3. User (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
    5. If all working, run the script on production.*
    * this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.
    Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”
    - Repeat for earlier environments (QA and so on).
    Rollback and Recovery
    If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like:
    - Immediately restore from backup,
    - Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!),
    - Have fallback environments – for example, using a blue-green deployment pattern.
    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.
    Actions:
    - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.
    Development Practices
    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.
    Actions:
    - For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes).
    - Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.
    Summary
    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development” – though there will be more detail, depending on where you’re coming from, and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles on version control, automated testing, continuous integration & deployment.

    Read the article

  • Announcement Oracle Solaris Cluster 4.1 Availability!

    - by uwes
    On 26th of October Oracle announced the availability of Oracle Solaris Cluster 4.1. Highlights include:
    - New Oracle Solaris 10 Zone Clusters: customers can now consolidate mission critical Oracle Solaris 10 applications on Oracle Solaris 11 virtualized systems in a virtual cluster
    - Expanded disaster recovery operations: Oracle Solaris Cluster now offers managed switchover and disaster-recovery takeover of applications and data using ZFS Storage Appliance replication services in a multi-site, multi-cluster configuration
    - Faster application recovery with improved storage failure detection and resource dependencies management
    - New labeled security environment for mission-critical deployments in Oracle Solaris Zone Clusters with Oracle Solaris 11 Trusted Extensions
    Learn more about Oracle Solaris Cluster 4.1:
    - What's New in Oracle Solaris Cluster 4.1
    - Oracle Solaris Cluster 4.1 FAQ
    - Oracle.com Oracle Solaris Cluster page
    - Oracle Technology Network Oracle Solaris Cluster page
    Resources for downloading:
    - Oracle Solaris Cluster 4.1 download or order a media kit
    - Existing Oracle Solaris Cluster 4.0 customers can quickly and simply update by using the network based repository.
    Note: This repository requires keys and certificates which can be obtained here.

    Read the article

  • Monitoring your WCF Web Apis with AppFabric

    - by cibrax
    The other day, Ron Jacobs made public a template in the Visual Studio Gallery for enabling monitoring capabilities for any existing WCF Http service hosted in Windows AppFabric. I thought it would be a cool idea to reuse some of that for doing the same thing on the new WCF Web Http stack. Windows AppFabric provides a dashboard that you can use to dig into some metrics about the services usage, such as number of calls, errors or information about different events during a service call. Those events not only include information about the WCF pipeline, but also custom events that any developer can inject and that make sense for troubleshooting issues. These monitoring capabilities can be enabled on any specific IIS virtual directory by using the AppFabric configuration tool or by adding the following configuration sections to your existing web app:

    <system.serviceModel>
      <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
      <diagnostics etwProviderId="3e99c707-3503-4f33-a62d-2289dfa40d41">
        <endToEndTracing propagateActivity="true" messageFlowTracing="true" />
      </diagnostics>
      <behaviors>
        <serviceBehaviors>
          <behavior name="">
            <etwTracking profileName="EndToEndMonitoring Tracking Profile" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>

    <microsoft.applicationServer>
      <monitoring>
        <default enabled="true" connectionStringName="ApplicationServerMonitoringConnectionString" monitoringLevel="EndToEndMonitoring" />
      </monitoring>
    </microsoft.applicationServer>

    The bad news is that none of the configuration above can easily be set in code by using the new configuration model for the WCF Web stack. A good thing is that you can easily disable it in the configuration when you no longer need it, and it also uses ETW, a general-purpose and high-speed tracing facility provided by the operating system (it’s part of the Windows kernel). By adding that configuration section, AppFabric will start monitoring your service automatically and providing some basic event information about the service calls. You need some custom code for injecting custom events in the monitoring data. What I did here is to copy and refactor the “WCFUserEventProvider” class provided as a sample in Ron’s template to make it more TDD friendly when using IoC. I created a simple interface “ILogger” that any service (or resource) can use to inject custom events or monitoring information in the AppFabric database.

    public interface ILogger
    {
        bool WriteError(string name, string format, params object[] args);
        bool WriteWarning(string name, string format, params object[] args);
        bool WriteInformation(string name, string format, params object[] args);
    }

    The “WCFUserEventProvider” class implements this interface by making it possible to send the events to the AppFabric monitoring database. The service or resource implementation can receive an “ILogger” as part of the constructor.
    [ServiceContract]
    [Export]
    public class OrderResource
    {
        IOrderRepository repository;
        ILogger logger;

        [ImportingConstructor]
        public OrderResource(IOrderRepository repository, ILogger logger)
        {
            this.repository = repository;
            this.logger = logger;
        }

        [WebGet(UriTemplate = "{id}")]
        public Order Get(string id, HttpResponseMessage response)
        {
            var order = this.repository.All.FirstOrDefault(o => o.OrderId == int.Parse(id, CultureInfo.InvariantCulture));
            if (order == null)
            {
                response.StatusCode = HttpStatusCode.NotFound;
                response.Content = new StringContent("Order not found");
            }

            this.logger.WriteInformation("Order Requested", "Order Id {0}", id);

            return order;
        }
    }

    The example above uses MEF as the IoC for injecting a repository and the logger implementation into the service. You can also see how the logger is used to write an information event in the monitoring database. The following image illustrates how the custom event is injected and the information becomes available for any user in the dashboard. An issue that you might run into, and that I hope the WCF and AppFabric teams fix soon, is that any WCF service that uses friendly URLs with ASP.NET routing does not get listed as an available service in the WCF services tab in the AppFabric console. The complete example is available to download from here.
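    Since ILogger is just an interface, it is also easy to swap the AppFabric-backed implementation for a fake when unit testing the resources. A minimal sketch of such a test double (ConsoleLogger is a made-up name, not part of Ron's template) might look like this:

        using System;
        using System.Globalization;

        // Hypothetical test double: writes to the console instead of the AppFabric
        // monitoring database, so resources can be exercised in isolation.
        public class ConsoleLogger : ILogger
        {
            public bool WriteError(string name, string format, params object[] args)
            {
                return Write("Error", name, format, args);
            }

            public bool WriteWarning(string name, string format, params object[] args)
            {
                return Write("Warning", name, format, args);
            }

            public bool WriteInformation(string name, string format, params object[] args)
            {
                return Write("Information", name, format, args);
            }

            private static bool Write(string level, string name, string format, object[] args)
            {
                Console.WriteLine("{0} [{1}]: {2}", level, name,
                    string.Format(CultureInfo.InvariantCulture, format, args));
                return true;
            }
        }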

    Read the article

  • OAGi Architecture Council OAGIS Ten Work Group Completes first round review of Concepts for OAGIS Te

    - by michael.rowell
    Today the OAGi Architecture Council OAGIS Ten Work Group completed the first-level review of concepts for existing content for OAGIS Ten. This is one of the first milestones for OAGIS Ten. In doing this, the concepts of key objects (the Nouns) have been identified along with the key context for their use. While OAGIS Ten remains a work in progress, the work group shows progress. Going forward the other councils will provide additional input to these and their own concepts and the contexts for each. Additionally, sub-groups will focus on concepts for given domains. Stay tuned for future progress. If anyone is interested in joining the effort, OAGi membership is open to anyone; please see the OAGi Web site.

    Read the article

  • How to achieve a Gaussian Blur effect for shadows in LWJGL/Slick2D?

    - by user46883
    I am currently trying to implement shadows in my game, and after a lot of searching on the interwebs I came to the conclusion that drawing hard-edged shadows to a low resolution pass combined with a Gaussian blur effect would fit best and make a good compromise between performance and looks - even though they're not 100% physically accurate. Now my problem is that I don't really know how to implement the Gaussian blur part. It's not difficult to draw shadows to a low resolution buffer image and then stretch it, which makes it smoother, but I need to add the Gaussian blur effect. I have searched a lot on that and found some approaches for GLSL, some even here, but none of them really helped. My game is written in Java using Slick2D/LWJGL and I would appreciate any help or approaches for an algorithm or maybe even an existing library to achieve that effect. Thanks for any help in advance.
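    For what it's worth, the core of a Gaussian blur is just a normalized kernel applied in two separable passes (horizontal, then vertical). The rough sketch below illustrates the idea on a plain grayscale float buffer; it is written in C# only for brevity, the radius/sigma values and buffer layout are made-up assumptions, and the same structure ports directly to Java/Slick2D or a two-pass GLSL shader:

        using System;

        static class GaussianBlurSketch
        {
            // Build a normalized 1D Gaussian kernel of size 2 * radius + 1.
            static float[] Kernel(int radius, float sigma)
            {
                var kernel = new float[radius * 2 + 1];
                float sum = 0f;
                for (int i = -radius; i <= radius; i++)
                {
                    kernel[i + radius] = (float)Math.Exp(-(i * i) / (2f * sigma * sigma));
                    sum += kernel[i + radius];
                }
                for (int i = 0; i < kernel.Length; i++) kernel[i] /= sum; // normalize so weights sum to 1
                return kernel;
            }

            // Two-pass separable blur over a low-resolution shadow intensity buffer.
            public static float[,] Blur(float[,] src, int radius, float sigma)
            {
                var kernel = Kernel(radius, sigma);
                int w = src.GetLength(0), h = src.GetLength(1);
                var tmp = new float[w, h];
                var dst = new float[w, h];
                for (int y = 0; y < h; y++)            // horizontal pass
                    for (int x = 0; x < w; x++)
                    {
                        float acc = 0f;
                        for (int k = -radius; k <= radius; k++)
                        {
                            int sx = Math.Min(w - 1, Math.Max(0, x + k)); // clamp at edges
                            acc += src[sx, y] * kernel[k + radius];
                        }
                        tmp[x, y] = acc;
                    }
                for (int y = 0; y < h; y++)            // vertical pass
                    for (int x = 0; x < w; x++)
                    {
                        float acc = 0f;
                        for (int k = -radius; k <= radius; k++)
                        {
                            int sy = Math.Min(h - 1, Math.Max(0, y + k));
                            acc += tmp[x, sy] * kernel[k + radius];
                        }
                        dst[x, y] = acc;
                    }
                return dst;
            }
        }

    Blurring the low-resolution shadow buffer with something like Blur(shadowBuffer, 3, 1.5f) before stretching it up is usually enough to soften the edges.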

    Read the article

  • How do I clean builds and installs, i.e. un-build?

    - by Kaustubh P
    I have installed, and also downloaded and built, mongodb, and just one of them works.
    $ mongo
    mongo: error while loading shared libraries: libmozjs.so: cannot open shared object file: No such file or directory
    $ /opt/mongo/bin/mongo
    /opt/mongo/bin/mongo: error while loading shared libraries: libboost_system-mt.so.1.38.0: cannot open shared object file: No such file or directory
    $ /usr/bin/mongo
    MongoDB shell version: 1.6.5
    connecting to: test
    >
    I can remove the installation via apt-get. But how do I remove all things mongo that were built with make, and get a clean system? I followed this guide to build and install mongodb. Thanks.

    Read the article

  • I’m IN!

    - by Aaron Kowall
    Got an email this morning (yes, I checked and it wasn’t an April Fools joke) congratulating me on becoming a Microsoft MVP. I’ve been working with and among MVPs for quite a while and quite frankly felt left out. Well, I’m finally part of the crowd. I received an MVP for Visual Studio ALM. This makes me VERY proud as I know the high caliber of the existing Visual Studio ALM MVPs and am honored to now count myself among them. Now my challenge is to make sure I continue to do those things that got me nominated so that I can retain this honor. Technorati Tags: MVP,VS2010,ALM

    Read the article

  • Implementing Oracle Exadata for Oracle Utilities Customer Care And Billing

    - by ACShorten
    In association with our performance team, a new whitepaper has been released for Oracle Utilities Customer Care And Billing that outlines the best practices for using Oracle Exadata with that product. The advice in the whitepaper is based upon certification and performance testing performed by our internal performance teams to assist sites implementing the database component of Oracle Utilities Customer Care And Billing on an Oracle Exadata platform. It is recommended that the contents of this whitepaper be used alongside existing best practices for the Oracle Exadata platform. The whitepaper is available from My Oracle Support under Implementing Oracle Exadata with Oracle Utilities Customer Care and Billing (Doc Id: 1486886.1).

    Read the article

  • OWB 11gR2 for Windows Standalone Installer Now Available!

    - by antonio romero
    The 11gR2 Windows 32-bit standalone is out: http://www.oracle.com/technology/software/products/warehouse/index.html Tips: You may have to clear your browser cache to get the version of the page with the download link. Windows 7 is not specifically supported at this time. If you are on Windows 7, we have anecdotal accounts of Design Center running quite well in XP Mode.  On other 64-bit Windows platforms, we recommend a virtual machine installation of a certified Windows platform. Come and get it! Join our OWB linkedin group: http://www.linkedin.com/groups?gid=140609

    Read the article

  • The Oracle Database Appliance: How to Sell a Unique Product Webcast

    - by Cinzia Mascanzoni
    Due to the great success of our webcast on "The Oracle Database Appliance - How to sell a unique product!" held on April 12th, we are going to conduct a recast live on April 19th at 10:00 CET, a learning opportunity for those who missed it. Join us to learn about:
    - ODA Benefits: Fast, Easy, Cost Efficient, Highly Reliable
    - Feedback from early Customer Wins: What can we Learn?
    - Objection Handling: Overcoming the most common customer questions
    - Going beyond the Database: The ODA ECO System for applications, backup & more
    When combined with your high-value services (e.g., migration, consolidation), the end result is a database system that you can use to grow the business in your existing accounts, or capture new business. Click here to register for this webcast.

    Read the article

  • Palm Centro not even appearing on desktop

    - by DaimyoKirby
    Background: I'm trying to set up my dad's new installation of Xubuntu 12.10 (I finally got him to switch from Windows :-D) so he can sync his Palm Centro with his computer. I installed J-Pilot, but the problem is that his Palm isn't showing up anywhere on the computer. When plugged in, it lit up and began to charge, but when I told it to try and sync with the computer the sync failed, and Xubuntu still doesn't recognize it. Question: Does anyone know how I can get his Palm to be recognized by Xubuntu?

    Read the article

  • Part 1 - 12c Database and WLS - Overview

    - by Steve Felts
    The download of Oracle 12c database became available on June 25, 2013. There are some big new features in 12c database and WebLogic Server will take advantage of them. Immediately, we will support using 12c database and drivers with WLS 10.3.6 and 12.1.1. When the next version of WLS ships, additional functionality will be supported (those rows in the table below with all "No" values will get a "Yes"). The following table maps the Oracle 12c Database features supported with various combinations of currently available WLS releases, 11g and 12c Drivers, and 11g and 12c Databases. The four columns are: (1) WebLogic Server 10.3.6/12.1.1 with 11g drivers and 11gR2 DB, (2) WebLogic Server 10.3.6/12.1.1 with 11g drivers and 12c DB, (3) WebLogic Server 10.3.6/12.1.1 with 12c drivers and 11gR2 DB, (4) WebLogic Server 10.3.6/12.1.1 with 12c drivers and 12c DB.
    - JDBC replay: No / No / No / Yes (Active GridLink only in 10.3.6, add generic in 12.1.1)
    - Multi Tenant Database: No / Yes (except set container) / No / Yes (except set container)
    - Dynamic switching between Tenants: No / No / No / No
    - Database Resident Connection Pooling (DRCP): No / No / No / No
    - Oracle Notification Service (ONS) auto configuration: No / No / No / No
    - Global Database Services (GDS): No / Yes (Active GridLink only) / No / Yes (Active GridLink only)
    - JDBC 4.1 (using ojdbc7.jar files & JDK 7): No / No / Yes / Yes
    The My Oracle Support (MOS) document covering this is "WebLogic Server 12.1.1 and 10.3.6 Support for Oracle 12c Database [ID 1564509.1]" at the link https://support.oracle.com/epmos/faces/DocumentDisplay?id=1564509.1. The following documents are also key references: the 12c Oracle Database Developer Guide http://docs.oracle.com/cd/E16655_01/appdev.121/e17620/toc.htm and the 12c Oracle Database Administrator's Guide http://docs.oracle.com/cd/E16655_01/server.121/e17636/toc.htm. I plan to write some related blog articles not to duplicate existing product documentation but to introduce the features, provide some examples, and tie together some information to make it easier to understand. How do you get started with 12c? The easiest way is to point your data source at a 12c database. The only change on the WLS side is to update the URL in your data source (assuming that you are not just upgrading your database). You can continue to use the 11.2.0.3 driver jar files that shipped with WLS 10.3.6 or 12.1.1. You shouldn't see any changes in your application. You can take advantage of enhancements on the database side that don't affect the mid-tier. On the WLS side, you can take advantage of using Global Data Service or connecting to a tenant in a multi-tenant database transparently. If you want to use the 12c client jar files, it's a bit of work because they aren't shipped with WLS and you can't just drop in ojdbc6.jar as in the old days. You need to use a matched set of jar files and they need to come before existing jar files in the CLASSPATH. The MOS article is written from the standpoint that you need to get the jar files directly - download almost 1G and install over 600M footprint to get 15 jar files. Assuming that you have the database installed and you can get access to the installation (or ask the DBA), you need to copy the 15 jar files to each machine with a WLS installation and get them in your CLASSPATH. You can play with setting the PRE_CLASSPATH but the more practical approach may be to just update WL_HOME/common/bin/commEnv.sh directly.
There's a change in the transaction completion behavior (read the MOS) so if you think you might run into that, you will want to set -Doracle.jdbc.autoCommitSpecCompliant=false.  Also if you are running with Active GridLink, you must set -Doracle.ucp.PreWLS1212Compatible=true (how's that for telling you that this is fixed in WLS 12.1.2).  Once you get the configuration out of the way, you can start using the new ojdbc7.jar in place of the ojdbc6.jar to get the new JDBC 4.1 API's.  You can also start using Application Continuity.  This feature is also known as JDBC Replay because when a connection fails you get a new one with all JDBC operations up to the failure point automatically replayed.  As you might expect, there are some limitations but it's an interesting feature.  Obviously I'm going to focus on the 12c database features that we can leverage in WLS data source.  You will need to read other sources or the product documentation to get all of the new features.

    Read the article
