Search Results

Search found 3416 results on 137 pages for 'dba alex'.

Page 13/137 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Helpful Line-Mode Scripts

    - by Ulrike Schwinn (DBA Community)
    The scripts supplied in the $ORACLE_HOME/rdbms/admin directory have always offered DBAs and developers extra support in their work. After every installation they are automatically available in the rdbms/admin directory for further use. But how do you find exactly the script that offers the right kind of help? There is no documentation covering all of the scripts. You can go by the script names, since they are listed with meaningful names, and read the short introduction inside each script - but that is tedious and costs time. The new tip therefore gives an overview of important scripts, including a short description of each. More on this here
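    For example (a hedged sketch, not part of the original tip): utlrp.sql is one of the supplied scripts, and in SQL*Plus the "?" shorthand expands to $ORACLE_HOME, so a script can be run straight from that directory:

        -- recompile invalid objects after an upgrade or patch; the short
        -- description sits in the header comments of the script itself
        CONNECT / AS SYSDBA
        @?/rdbms/admin/utlrp.sql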

    Read the article

  • How to configure emacs by using this file?

    - by Andy Leman
    From http://public.halogen-dg.com/browser/alex-emacs-settings/.emacs?rev=1346 I got:

    (setq load-path (cons "/home/alex/.emacs.d/" load-path))
    (setq load-path (cons "/home/alex/.emacs.d/configs/" load-path))

    (defconst emacs-config-dir "~/.emacs.d/configs/" "")

    (defun load-cfg-files (filelist)
      (dolist (file filelist)
        (load (expand-file-name (concat emacs-config-dir file)))
        (message "Loaded config file:%s" file)))

    (load-cfg-files '("cfg_initsplit" "cfg_variables_and_faces" "cfg_keybindings"
                      "cfg_site_gentoo" "cfg_conf-mode" "cfg_mail-mode"
                      "cfg_region_hooks" "cfg_apache-mode" "cfg_crontab-mode"
                      "cfg_gnuserv" "cfg_subversion" "cfg_css-mode" "cfg_php-mode"
                      "cfg_tramp" "cfg_killbuffer" "cfg_color-theme" "cfg_uniquify"
                      "cfg_tabbar" "cfg_python" "cfg_ack" "cfg_scpaste"
                      "cfg_ido-mode" "cfg_javascript" "cfg_ange_ftp" "cfg_font-lock"
                      "cfg_default_face" "cfg_ecb" "cfg_browser" "cfg_orgmode"
                      ; "cfg_gnus"
                      ; "cfg_cyrillic"
                      ))

    ; enable disabled advanced features
    (put 'downcase-region 'disabled nil)
    (put 'scroll-left 'disabled nil)
    (put 'upcase-region 'disabled nil)

    ; narrow cursor
    ;(setq-default cursor-type 'hbar)
    (cua-mode)

    ; highlight current line
    (global-hl-line-mode 1)

    ; AV: non-aggressive scrolling
    (setq scroll-conservatively 100)
    (setq scroll-preserve-screen-position 't)
    (setq scroll-margin 0)

    (custom-set-variables
     ;; custom-set-variables was added by Custom.
     ;; If you edit it by hand, you could mess it up, so be careful.
     ;; Your init file should contain only one such instance.
     ;; If there is more than one, they won't work right.
     '(ange-ftp-passive-host-alist (quote (("redbus2.chalkface.com" . "on") ("zope.halogen-dg.com" . "on") ("85.119.217.50" . "on"))))
     '(blink-cursor-mode nil)
     '(browse-url-browser-function (quote browse-url-firefox))
     '(browse-url-new-window-flag t)
     '(buffers-menu-max-size 30)
     '(buffers-menu-show-directories t)
     '(buffers-menu-show-status nil)
     '(case-fold-search t)
     '(column-number-mode t)
     '(cua-enable-cua-keys nil)
     '(user-mail-address "[email protected]")
     '(cua-mode t nil (cua-base))
     '(current-language-environment "UTF-8")
     '(file-name-shadow-mode t)
     '(fill-column 79)
     '(grep-command "grep --color=never -nHr -e * | grep -v .svn --color=never")
     '(grep-use-null-device nil)
     '(inhibit-startup-screen t)
     '(initial-frame-alist (quote ((width . 80) (height . 40))))
     '(initsplit-customizations-alist (quote (("tabbar" "configs/cfg_tabbar.el" t) ("ecb" "configs/cfg_ecb.el" t) ("ange\\-ftp" "configs/cfg_ange_ftp.el" t) ("planner" "configs/cfg_planner.el" t) ("dired" "configs/cfg_dired.el" t) ("font\\-lock" "configs/cfg_font-lock.el" t) ("speedbar" "configs/cfg_ecb.el" t) ("muse" "configs/cfg_muse.el" t) ("tramp" "configs/cfg_tramp.el" t) ("uniquify" "configs/cfg_uniquify.el" t) ("default" "configs/cfg_font-lock.el" t) ("ido" "configs/cfg_ido-mode.el" t) ("org" "configs/cfg_orgmode.el" t) ("gnus" "configs/cfg_gnus.el" t) ("nnmail" "configs/cfg_gnus.el" t))))
     '(ispell-program-name "aspell")
     '(jabber-account-list (quote (("[email protected]"))))
     '(jabber-nickname "AVK")
     '(jabber-password nil)
     '(jabber-server "halogen-dg.com")
     '(jabber-username "alex")
     '(remember-data-file "~/Plans/remember.org")
     '(safe-local-variable-values (quote ((dtml-top-element . "body"))))
     '(save-place t nil (saveplace))
     '(scroll-bar-mode (quote right))
     '(semantic-idle-scheduler-idle-time 432000)
     '(show-paren-mode t)
     '(svn-status-hide-unmodified t)
     '(tool-bar-mode nil nil (tool-bar))
     '(transient-mark-mode t)
     '(truncate-lines f)
     '(woman-use-own-frame nil))

    ; answer with y or n instead of yes or no
    (fset 'yes-or-no-p 'y-or-n-p)

    (custom-set-faces
     ;; custom-set-faces was added by Custom.
     ;; If you edit it by hand, you could mess it up, so be careful.
     ;; Your init file should contain only one such instance.
     ;; If there is more than one, they won't work right.
     '(compilation-error ((t (:foreground "tomato" :weight bold))))
     '(cursor ((t (:background "red1"))))
     '(custom-variable-tag ((((class color) (background dark)) (:inherit variable-pitch :foreground "DarkOrange" :weight bold))))
     '(hl-line ((t (:background "grey24"))))
     '(isearch ((t (:background "orange" :foreground "black"))))
     '(message-cited-text ((((class color) (background dark)) (:foreground "SandyBrown"))))
     '(message-header-name ((((class color) (background dark)) (:foreground "DarkGrey"))))
     '(message-header-other ((((class color) (background dark)) (:foreground "LightPink2"))))
     '(message-header-subject ((((class color) (background dark)) (:foreground "yellow2"))))
     '(message-separator ((((class color) (background dark)) (:foreground "thistle"))))
     '(region ((t (:background "brown"))))
     '(tooltip ((((class color)) (:inherit variable-pitch :background "IndianRed1" :foreground "black")))))

    The above is an Emacs Lisp configuration file. Where should I put it to use it? And are there any other changes I need to make?

    Read the article

  • Oracle Database 11g Underground Advice for Database Administrators, by April C. Sims

    - by alejandro.vargas
    Recently I received a request to review the book "Oracle Database 11g Underground Advice for Database Administrators" by April C. Sims. I was happy to have the opportunity to learn some details about the author; she is an active contributor to the Oracle DBA community through her blog "Oracle High Availability". The book is a serious and interesting work. I think it provides a good study and reference guide for DBAs who want to understand and implement highly available environments. She starts by walking over the more general aspects and skills required of a DBA, and then goes on to explain the steps required to implement Data Guard, use RMAN, upgrade to 11g, etc.

    Read the article

  • Database vs Networking

    - by user16258
    I have completed my diploma in IT and am now pursuing a degree; I am in the last semester of my B.E. (I.T.). I want to specialize either in databases (Oracle) or in networking (Cisco). Which of the two will be in more demand in the near future? I know it's all about interest, but still I would like to know your opinion. Most people say that a network engineer is never paid as well as a programmer or a DBA, and a few say they do get paid well. What would be the scope if I clear my CCNA and CCNP exams, or the OCA and OCP exams - which would be more rewarding? Also, I have read somewhere that most of the tasks of a DBA will be automated, so in the future the demand for DBAs will reduce. I would also like to hear from network engineers what the scenario is out there in India. Thanks

    Read the article

  • SQLCMD Mode: give it one more chance

    - by Maria Zakourdaev
    - Click on me. Choose me. - asked one forgotten feature, when some bored DBA was purposelessly wandering through the Management Studio menu at the end of her long and busy working day.
    - Why would I use you? I have heard of no one who does. What are you for? - perplexedly wondered the aged and wise DBA. At least that DBA thought she was aged and wise, though each day tried to prove to her that she wasn't.
    - I know you. You are quite lazy. Why would you do additional clicks to move from window to window? From tool to tool? This is irritating, isn't it? I can run Windows system commands, SQL statements and much more from the same script, from the same query window!
    - I have all the tools that I'm used to; I have Management Studio, Cmd, PowerShell. They can do anything for me. I don't need additional tools.
    - I promise you, you will like me. - the thing continued to whine.
    - All right, show me. - she gave up. It's always this way, she thought sadly - easier to agree than to explain why you don't want to.
    - Enable me, and then think about anything that you always couldn't do through Management Studio and had to use other tools for.
    - Ok. Google for me the list of greatest features of SQL SERVER 2012.
    - Well... I'm not sure... Think about something else.
    - Ok, here is something easy for you. I want to check if a file folder exists, or if a file is there. Though I can easily do this using xp_cmdshell…
    - This is easy for me. - rejoiced the feature.
    By the way, having items of the menu talking to you usually means you should stop working and go home. Or drink coffee. Or both. Well, the aged and wise DBA wasn't thinking about the weirdness of the situation at that moment.
    - After enabling me, - said the unfairly forgotten feature (it was thinking of itself in such a manner) - after enabling me, you can use all command-line commands in the same Management Studio query window by adding two exclamation marks (!!) at the beginning of the script line, to denote that you want to run a cmd command. Just keep in mind that when using this feature, you are actually running the commands ON YOUR computer and not on the SQL Server that the query window is connected to. This is the main difference from using xp_cmdshell, which executes commands on the SQL Server itself. Bottom line: use UNC paths instead of local paths.
    - Look, there is much more than that. - The SQLCMD feature was getting excited. - You can get the IP of your servers; create, rename and drop folders. You can see the contents of any file anywhere, and even start different tools from the same query window.
    The not-so-aged-and-wise DBA was getting interested:
    - I also want to run different scripts on different servers without changing the connection of the query window.
    - Sure, sure! Another great feature that SQLCMD mode provides, giving more power to querying. Use ":" for additional features, like :connect, which allows you to change the connection. Now imagine: you have one script with all your changes, like creating a staging table on the DWH staging server, adding a fact table to the DWH itself, and updating stored procedures on the server where the reporting database is located.
    - Now, give me more challenges!
    - Script out a list of stored procedures into text files.
    - You can do that easily by using the :out command, which writes the query results into the specified text file. The output can be the code of a stored procedure or any data. Actually, this is the same as changing the query output to a file instead of the grid.
    - Now, take all of those scripts and run them, one by one, on a different server.
    - Easily.
    - Come on... I'm sure that you cannot...
    - Why not? Naturally, I can do it using the :r command, which opens a script and executes it. Look, I can also use the :setvar command to define an environment variable in SQLCMD mode. Just note that you have to leave an empty line between :r commands, otherwise it doesn't work, although I have no idea why.
    - Wow. - She was really impressed. - Ok, I'll go try all those…
    - Wait, wait! I know how to google the SQL SERVER features for you! This example will open the Chrome browser with search results for "SQL Server 2012 top features" (change the path to suit your PC).
    "Well, this can probably be useful stuff; maybe this feature really is unfairly forgotten", thought the DBA while walking through the dark, empty parking lot to her lonely car. "As someone really wise once said: It is what we think we know that keeps us from learning. Learn, unlearn and relearn".
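    (A hedged sketch of the commands discussed above - the server names, file paths and object names here are invented. Run from an SSMS query window with SQLCMD Mode enabled:)

        -- lines starting with !! run on YOUR machine, not on the server:
        !!dir \\fileserver\sql\deploy
        !!start chrome "http://www.google.com/search?q=SQL+server+2012+top+features"

        -- :setvar and :connect switch servers without touching the window's connection:
        :setvar StagingServer DWHSTG01
        :connect $(StagingServer)
        SELECT @@SERVERNAME AS connected_to;
        GO

        -- :out writes query results to a file instead of the grid:
        :out C:\Scripts\usp_LoadFact.sql
        SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.usp_LoadFact'));
        GO
        :out stdout

        -- :r runs another script file (leave an empty line after it):
        :r C:\Scripts\create_staging_table.sql

        GO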

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices
    STAGE 4: AUTOMATED DEPLOYMENT

    If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment - constant feedback on changes your devs are making, advance warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages - really three: source controlling a database, running continuous integration processes, then setting up automated deployment (the middle stage is split in two - basic and advanced continuous integration - making four stages in total). If you've managed to work through the first three of these stages - source control, then basic and advanced CI - you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), the change gets fully tested automatically by your CI server. But this is only part of the story. Great - we know that our updates work, that the upgrade process works, and that the upgrade isn't going to wipe our 4TB of production data with a single DROP TABLE. But how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested and it being easily releasable.

    Just a quick note on terminology - there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. That piece also gives a nice description of the benefits of continuous delivery, which have been summed up by Jez Humble at Thoughtworks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users". There's another really useful piece here on Simple-Talk, written by Phil Factor, about the need for continuous delivery and how it applies to the database - specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

    So, hopefully you're convinced of moving on to the next stage! The next step after CI is to set up some sort of automated deployment (or "release management") process. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and, hey presto, I'm ready? If only it were that simple. Below I list some of the areas where it's worth spending a little time, and where a little planning and prep can go a long way.
    It's also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you've got a CI mechanism in place, you're certainly a long way down that path. Nevertheless, we'd recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we'll look at the following areas for your checklist: You and Your Team; Environments; The Deployment Process; Rollback and Recovery; Development Practices.

    You and Your Team

    It's a cliché in the DevOps community that "it's not all about processes and tools, really it's all about culture". As stated in this DevOps report from Puppet Labs: "DevOps processes and tooling contribute to high performance, but these practices alone aren't enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn't understood outside of a specific group". Like most clichés, there's truth in there - if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it's an investment with the benefits coming way down the line. But the benefits are huge - for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing - that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you're ever struggling to convince someone of the value, I'd strongly recommend just buying them a copy of this book - a great read, and a very practical guide to how it can really work at a large org.

    I've spoken to many customers who have implemented database CI who describe their deployment process as "the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that's finished we revert to manual." This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA "We're changing everything you do and your toolset next week, to automate most of your role - that's okay, isn't it?" isn't likely to go down well.
    There's some work here to bring him/her onside - to explain what you're doing, why there will still be control of the deployment process, and so on. Or, of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need to have support from someone senior in your organisation - someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do - or, if you're the DBA yourself, get the dev team up to speed with your plans. Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? Really this question is: what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "what we want, to do continuous delivery properly" and "what we're currently stuck with". Some of these differences are:

    - What we want: Each developer with their own dedicated database environment. What we've got: A single shared "development" environment, used by everyone at once.
    - What we want: An integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we've got: In fact, if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: A separate QA environment used explicitly for manual testing prior to release. What we've got: "We just test on the dev environments, or maybe pre-production."
    - What we want: A proper pre-production (or "staging") box that matches production as closely as possible. What we've got: Hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: A production environment reproducible from source control. What we've got: A production box which has drifted significantly from anything in source control.

    The big question is: how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted - VMs? Cloud-based? What about size/data issues - what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application.
    There's a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

    - Dedicated development databases,
    - An integration server used for testing continuous integration and running unit tests [NB: this is the point at which deployments are automatic, without human intervention; each deployment after this point is a one-click (but human) action],
    - QA - QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    - Pre-production - the environment you use to test the production release process,
    - Production.

    * A note on the use of the word "automatic": when carrying out automated deployments, this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic, in that it's not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions: Get your environments set up and ready. Set access permissions appropriately. Make sure everyone understands what the environments will be used for (it's not a "free-for-all", with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers "How do your database changes get live? How does your manual process work?"

    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it in to pre-prod.
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and pre-production. This generates a script.
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
    5. If all is working, run the script on production.*

    * This assumes there's no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem - sign up here at www.sqllighthouse.com if you're interested in testing early versions.

    There are several variations on this process - some better, some much worse! How do you automate this? In particular, step 3 - surely you can't automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment - whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!) before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted - and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process - the pre-production/production matching process is automated (with an error thrown if these environments don't match), followed by a manual intervention allowing for script approval by the DBA. Once he/she clicks "Okay, I'm happy for that to go live", the latter stages automatically take the script through to live. And anything in between, of course - and other variations. But we'd strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?"

    NB: Most of what we're discussing here is about production deployments. It's important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments - "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?" Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don't - and when things go wrong, you need a rollback or recovery plan for what you're going to do in that situation. Once you move to a more automated database deployment process, you're far more likely to be deploying more frequently than before - no longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back - no problem (see the sketch after this section's Actions). There are various options, which we'll explore in subsequent articles, things like:

    - Immediately restore from backup,
    - Have a pre-tested rollback script (remembering that really this is a "roll-forward" script - there's not really such a thing as a rollback script for a database!),
    - Have fallback environments - for example, using a blue-green deployment pattern.

    Different options have pros and cons - some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is the loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and your requirements for a completely failsafe process.
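    (A minimal, hedged sketch of that in-transaction failure case - the object names are invented, and THROW assumes SQL Server 2012 or later. Anything already committed still needs one of the roll-forward options listed above:)

        -- deployment wrapper: if any statement fails, nothing is committed
        BEGIN TRY
            BEGIN TRANSACTION;
            ALTER TABLE dbo.Orders ADD CustomerRegion nvarchar(64) NULL;
            EXEC dbo.usp_RebuildOrderSummaries;
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
            THROW;  -- surface the original error to the deployment tool
        END CATCH;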
    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you deploy database updates is intrinsically linked with the patterns and practices used to develop that database and the linked application. So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern "branch by abstraction". Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss, by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

    As these slides show, by making a significant schema change in multiple steps - where each step can be rolled back without any loss of new data - this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily); a sketch of this kind of step-wise change follows at the end of this article. There are plenty more great patterns that can be implemented - the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

    But the question is - how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there's a difference here between migrating old projects and starting afresh - with the latter it's much easier to instigate best practice from the start.

    Actions: For your business, work out how far down the path you want to go, amending your database development patterns to "best practice". It's a trade-off between implementing quality processes and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group - no-one likes having "best practice" changes imposed on them, so it's good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work and for the implementation to go smoothly. We've covered some of the checklist of areas to consider - mainly "getting the team ready for the changes that are coming" and "planning out your pipeline, environments, patterns and practices for development" - though there will be more detail, depending on where you're coming from and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
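    (Picking up the forward reference above: a hedged sketch of a step-wise, backward-compatible change in the spirit of branch by abstraction - splitting an address column out of a table. All object names are invented:)

        -- Step 1 (release N): create the new structure alongside the old
        CREATE TABLE dbo.CustomerAddress (
            CustomerID int NOT NULL PRIMARY KEY
                REFERENCES dbo.Customer (CustomerID),
            Address nvarchar(400) NOT NULL
        );

        -- Step 2 (release N): backfill it from the old column
        INSERT INTO dbo.CustomerAddress (CustomerID, Address)
        SELECT CustomerID, Address
        FROM dbo.Customer
        WHERE Address IS NOT NULL;

        -- Step 3 (releases N..N+1): keep old and new in sync (dual writes
        -- or a trigger), so either release can be rolled back without
        -- losing newly written data

        -- Step 4 (a later release, once nothing reads the old column):
        ALTER TABLE dbo.Customer DROP COLUMN Address;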

    Read the article

  • Planning for Disaster

    There is a certain paradox in being advised to expect the unexpected, but the DBA must plan and prepare in advance to protect their organisation's data assets in the event of an unexpected crisis, and return them to normal operating conditions. Minimising downtime in such circumstances should be the aim of every effective DBA. To plan for recovery, it pays to have the mindset of a pessimist.

    Read the article

  • Linux Debian Security Breach - what now? [closed]

    - by user897075
    Possible Duplicate: My server's been hacked EMERGENCY I installed Debian (Squeeze) a while back in my home network to host some personal sites (thank god). During the installation it prompted me to enter a user other than root - so in a rush I used my name as user and pass (alex/alex, for what it's worth). I know it's horrible practice, but during the setup of this server I'm always logged in as root to perform configurations, etc. A few days or a week pass and I forget to change the password. Then I finally get my web site finished and I open the port forwarding on my router and DynDNS to point to my server in my home. I've done this many times in the past and never had issues, but I use a cryptic root password and I guess disabled regular accounts. Today I reformat my Windows 7 and after spending all day tweaking and updating SP1 I look for cloning apps and find Clonezilla and see it supports SSH cloning, so I go through the process only to discover I need a user, so I log into my web server and see I have the user 'alex' already in and realize I don't know the password. So I change the password to something cryptic and visit the directory 'home' only to realize there are contents such as passfile, bengos, etc. My heart sinks: I've been hacked!!! Sure as hell there are all sorts of scripts and password files. I run a 'last' command and it seems they last logged in April 3rd. Question: What can I do to see if they did anything destructive? Should I reformat and reinstall? How restrictive is Debian/Squeeze in terms of user permissions out of the box - all my personal website stuff was created using 'root', so changing files does not seem to have occurred. How did they determine there was a user 'alex' on the machine? Can you query any machine and figure this out - what the users are? Looks like they tried to run an IP scan... other nodes on the network are running Windows 7, one of which seems a little wonky as of late - is it possible they buggered up that system? What corrective action can I take to avoid this from happening again? And how do I figure out what might have changed or been hacked? I'm hoping Debian out of the box is fairly secure and at best he managed to read some of my source code. :p Regards, Alex

    Read the article

  • Good practices - database programming, unit testing

    - by Piotr Rodak
    Jason Brimhall wrote today on his blog that a new book, Defensive Database Programming, written by Alex Kuznetsov (blog), is coming to bookstores. Alex writes about various techniques that make your code safer to run. SQL injection is not the only vulnerability the code may be exposed to. Others include inconsistent search patterns, unsupported character sets, locale settings, issues that may occur during high-concurrency conditions, and logic that breaks when certain conditions are not met. The...(read more)

    Read the article

  • Modifying Contiguous Time Periods in a History Table

    Alex Kuznetsov is credited with a clever technique for creating a history table in SQL that is designed to store contiguous time periods and to check, using nothing but constraints, that these time periods really are contiguous. This is now increasingly useful with the DATE data type in SQL Server. The modification of data in this type of table isn't always entirely intuitive, so Alex is on hand to give a brief explanation of how to do it.
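    (A hedged sketch of the constraint-only pattern the article describes - the table and column names here are illustrative, not Alex's originals:)

        -- each row's period must start exactly where the previous one
        -- finished; the CHECK plus the self-referencing FK enforce this
        CREATE TABLE dbo.TaskHistory (
            TaskID             int  NOT NULL,
            StartedAt          date NOT NULL,
            FinishedAt         date NOT NULL,
            PreviousFinishedAt date NULL,  -- NULL only for the first period
            CONSTRAINT PK_TaskHistory PRIMARY KEY (TaskID, StartedAt),
            CONSTRAINT UQ_TaskHistory_Finish UNIQUE (TaskID, FinishedAt),
            CONSTRAINT CK_TaskHistory_Valid CHECK (StartedAt < FinishedAt),
            CONSTRAINT CK_TaskHistory_Contiguous
                CHECK (PreviousFinishedAt IS NULL OR PreviousFinishedAt = StartedAt),
            CONSTRAINT FK_TaskHistory_Chain
                FOREIGN KEY (TaskID, PreviousFinishedAt)
                REFERENCES dbo.TaskHistory (TaskID, FinishedAt)
        );

    Because SQL Server has no deferred constraints, inserts and updates have to touch the chain in a careful order - which is why, as the article says, modifying this kind of table isn't entirely intuitive.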

    Read the article

  • Oracle for PCI-DSS Security Webcast

    - by Alex Blyth
    Thanks to everyone who attended the Oracle for PCI-DSS security webcast today. It was good to see how the products we talked about last week can be used to address the PCI standard requirements. A big thanks to Chris Pickett for presenting a great session and running us through a very cool demo showing how the data is protected throughout its life. The replay of the session can be downloaded here. Slides can be downloaded here. Oracle for PCI-DSS Security Compliance View more presentations from Oracle Australia. Next week we resume our regular schedule with Andrew Clarke taking us through Oracle Application Express (APEX) - one of the best kept secrets in the Oracle Database. Enroll for this session here (and now :) ) Till next week Cheers Alex

    Read the article

  • Slides from Upgrade webcast

    - by Alex Blyth
    Thanks everyone for attending the webcast on "Upgrading to Oracle 11g". I hope there were some useful tips for everyone. My apologies for the issue with the audio streaming - I'll re-record the session later this week and hopefully have it available soon thereafter. As I mentioned, the next session - on Oracle VM and Oracle Enterprise Linux - is on April 28, 2010. Please click here to enroll. As for the slides... here they are: You can download the slides here. Upgrade to Oracle 11g View more presentations from Oracle Australia. Thanks again Cheers Alex

    Read the article

  • Greetings!!!!

    - by [email protected]
    Greetings everyone! If you're reading this, hopefully it's because you have been following our series of webcasts on Oracle 11gR2 that we've been hosting on Wordpress. If you found us some other way, well that's even better - the more the merrier, as they say. In either case, welcome to our new blog!!! Over the next few days, I'll move the old posts from Wordpress to here so it's all in the one location. Right! Who are we? The authors of this blog are the ANZ Inside Consulting Team. Currently, this is made up of: Tom Jurcic, Yasin Mohammed, Andrew Clarke, Rene Poels and me - Alex Blyth. Basically, our role in Oracle is to help users of our technologies get the most out of their existing investments as well as what's new, old, blue, what have you... Ideally, this is all going to be technical in nature and not of a marketing nature (we'll leave the marketing up to others). For now, there's obviously not much here. But that won't last too long. In the meantime, those who are interested can find replays and slides of our previous webcasts on the "Oracle 11g Webcasts" page. Till next time Alex

    Read the article

  • Oracle VM Slides and Replay

    - by Alex Blyth
    Thanks everyone for attending the webcast on "Oracle VM and Virtualisation" last week. I know I got some useful info out of the session, and on behalf of all those who attended I'll say thank you to Dean Samuels for spending some time talking to us. Slides are available here Oracle VM - 28/04/2010 View more presentations from Oracle Australia. You can download the replay here. Next week's session is on Oracle Database Security and will briefly cover all the big guns like Transparent Data Encryption, Database Vault, Audit Vault and Flashback Data Archive, as well as touching on some of the features that are so often skipped over. You can enroll for this session here. Thanks again Cheers Alex

    Read the article

  • Oracle Security Webcast Slides and Replay now available

    - by Alex Blyth
    Hi Everyone, Thanks for attending the "Oracle Database Security" webcast last week. Slides are available here Oracle Database Security Overview. View more presentations from Oracle Australia. You can download the replay here. Next week's session is on Oracle Application Express. APEX is one of the best kept secrets in the Oracle database and can be used to make anything from very simple apps, such as phone directories, all the way to complex knowledge-base style apps that are driven heavily by data. You can enroll for this session here. Thanks again Cheers Alex

    Read the article

  • Using PHP Redirect Script together with Custom Fields (WordPress)? [on hold]

    - by Alex Scherer
    I am currently trying to make Yoast's link cloaking script (Yoast.com script manual // Github script files) work together with the WordPress plugin Advanced Custom Fields. The script fetches 2 values (redirect id, redirect url) via GET and then redirects to the particular URL, which is defined in a .txt file called redirects.txt. I would like to change the script so that I can define both the id and the redirection URL via custom fields on each post in my WP dashboard. I would be really happy if someone could help me code something that does the same as the script above, but gets those values from custom fields instead of saving them in a redirects.txt file. Best regards! Alex

    Read the article

  • What to do with random pages after a 301 redirect?

    - by Alex
    Hello, I did a standard 301 redirect for a domain, but the original domain has about 300 pages that have some strength. It doesn't make sense to make them all point back to the new home page, because the individual pages are about specific topics. Also, the new domain doesn't have equivalent pages, so where should those original pages redirect to? I would like to have them rank for the same topics they used to, but without the original domain giving them strength, they will just stop ranking and die off. What should I do? Thanks, Alex

    Read the article

  • Pokemon Yellow warp transitions

    - by Alex Koukoulas
    So I've been trying to make a pretty accurate clone of the good old Pokemon Yellow for quite some time now, and one subtle mechanic has puzzled me. As you can see in the uploaded image, there is a certain colour manipulation done in two stages after entering a warp to another game location (such as stairs or entering a building). One easy (and sloppy) way of achieving this, and the one I have been using so far, is to make three copies of each image rendered on the screen, all of them with their colours adjusted accordingly to match each stage of the transition. Of course, after a while this becomes tremendously time-consuming. So my question is: does anyone know a better way of achieving this colour manipulation effect using Java? Thanks in advance, Alex

    Read the article

  • Programmatically inviting contacts to Google Chat

    - by DBa
    Hello folks, I'm writing a sync application for Lotus Notes and Google (I know, there are some out there, but they are either not free, or sync only the calendar (or only contacts), and most of them cannot deal with local mail files). This works so far, but I have a problem when syncing contacts: under certain circumstances, the contacts have to be deleted and recreated in Google. This causes them to disappear from the chat list in Gmail, and the people have to be re-invited manually. Is there any way to send these invites through the API? Thanks in advance DBa

    Read the article

  • Mac OS X 10.6 assign mapped IP to Windows 7 VM in Parallels

    - by Alex
    I'm trying to assign a mapped IP address to a Windows 7 VM. I have the setup running in Parallels 5 in wireless bridged networking mode. The problem I am having is that it looks like the VM is actually broadcasting the MAC address of the host machine and thus causing a clash of IP addresses on the network. This is my current setup:

    Macbook Pro: ifconfig -a

        lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
            inet6 ::1 prefixlen 128
            inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
            inet 127.0.0.1 netmask 0xff000000
        gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
        stf0: flags=0<> mtu 1280
        en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            ether 00:26:b0:df:31:b4
            media: autoselect
            status: inactive
            supported media: none autoselect 10baseT/UTP <half-duplex> 10baseT/UTP <full-duplex> 10baseT/UTP <full-duplex,flow-control> 10baseT/UTP <full-duplex,hw-loopback> 100baseTX <half-duplex> 100baseTX <full-duplex> 100baseTX <full-duplex,flow-control> 100baseTX <full-duplex,hw-loopback> 1000baseT <full-duplex> 1000baseT <full-duplex,flow-control> 1000baseT <full-duplex,hw-loopback>
        fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078
            lladdr 00:26:b0:ff:fe:df:31:b4
            media: autoselect <full-duplex>
            status: inactive
            supported media: autoselect <full-duplex>
        en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            inet6 fe80::226:bbff:fe0a:59a1%en1 prefixlen 64 scopeid 0x6
            inet 192.168.1.97 netmask 0xffffff00 broadcast 192.168.1.255
            ether 00:26:bb:0a:59:a1
            media: autoselect
            status: active
            supported media: autoselect
        vnic0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            inet 192.168.1.81 netmask 0xffffff00 broadcast 192.168.1.255
            ether 00:1c:42:00:00:08
            media: autoselect
            status: active
            supported media: autoselect
        vnic1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            inet 10.37.129.2 netmask 0xffffff00 broadcast 10.37.129.255
            ether 00:1c:42:00:00:09
            media: autoselect
            status: active
            supported media: autoselect

    Windows 7: ipconfig -all

        Windows IP Configuration
            Host Name . . . . . . . . . . . . : Alex-PC
            Primary Dns Suffix  . . . . . . . :
            Node Type . . . . . . . . . . . . : Hybrid
            IP Routing Enabled. . . . . . . . : No
            WINS Proxy Enabled. . . . . . . . : No

        Ethernet adapter Local Area Connection:
            Media State . . . . . . . . . . . : Media disconnected
            Connection-specific DNS Suffix  . :
            Description . . . . . . . . . . . : Parallels Ethernet Adapter
            Physical Address. . . . . . . . . : 00-1C-42-B8-E7-B4
            DHCP Enabled. . . . . . . . . . . : Yes
            Autoconfiguration Enabled . . . . : Yes

        Tunnel adapter Teredo Tunneling Pseudo-Interface:
            Media State . . . . . . . . . . . : Media disconnected
            Connection-specific DNS Suffix  . :
            Description . . . . . . . . . . . : Microsoft Teredo Tunneling Adapter
            Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
            DHCP Enabled. . . . . . . . . . . : No
            Autoconfiguration Enabled . . . . : Yes

        Tunnel adapter isatap.{ACAC7EBB-5E5F-4F53-AFD9-E6EAEEA0FEE2}:
            Media State . . . . . . . . . . . : Media disconnected
            Connection-specific DNS Suffix  . :
            Description . . . . . . . . . . . : Microsoft ISATAP Adapter #3
            Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
            DHCP Enabled. . . . . . . . . . . : No
            Autoconfiguration Enabled . . . . : Yes

    Billion Bipac 7200 modem router: the DHCP server settings have the following two mapping entries:

        alex-macbook-win7   00:1c:42:00:00:08   192.168.1.98
        alex-macbook        00:26:bb:0a:59:a1   192.168.1.97

    The problem I have is that when the VM starts up it gets assigned the 192.168.1.97 address instead of the .98 address, and thus networking on the host stops working as it says there is a clash. I have tried removing the mapping for "alex-macbook", which results in the guest machine being assigned a normal DHCP address and NOT the one that is in the mapping of the router.

    Read the article

  • ssh connection slow when using @hostname.com but not when using @ipaddress

    - by Alex Recarey
    When connecting to a Debian server using ssh, if I use [email protected] (the IP address of the server) the connection is instant. If however I use [email protected] (a DNS record pointing to the IP address of the server) the ssh connection hangs for 20 seconds before connecting successfully. The ssh logs show the following:

        [alex@alex home]$ ssh -v -v [email protected]
        OpenSSH_5.5p1, OpenSSL 1.0.0c-fips 2 Dec 2010
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0

    and here it hangs for 20 seconds before continuing. I think it might have something to do with reverse DNS or similar (the server does not really "know" its name is hostname.com; it just has that DNS record pointing to its IP address). I have added the following options to /etc/ssh/sshd_config:

        UseDNS no
        GSSAPIAuthentication no

    to no effect. The server's DNS resolution via /etc/resolv.conf is configured correctly:

        ping hostname.com
        PING sub.domain.com (X.X.X.X) 56(84) bytes of data.
        64 bytes from replicant (X.X.X.X): icmp_seq=1 ttl=64 time=0.029 ms
        64 bytes from replicant (X.X.X.X): icmp_seq=2 ttl=64 time=0.050 ms

    Thanks for the help.

    Solution: It seems the DSL router my ISP saddled me with was causing the trouble. Changing my DNS server from 192.168.1.1 (the router's IP) to Google's (8.8.8.8, always good to know when you are in a hurry) instantly solved the connection delay problem. I am guessing that the 50€ router provided does not cache DNS entries, although I don't understand why pinging the DNS address had no delay, and 20 seconds is too long a wait, even for uncached DNS. Thanks again for the help!

    Read the article

  • SVN Authentication with LDAP and Active Directory

    - by Alex Holsgrove
    I am having a few problems getting SVN authentication to work with LDAP / Active Directory. My SVN installation works fine, but after enabling LDAP in my Apache vhost, I just can't get my users to authenticate. I can use a selection of LDAP browsers to successfully connect to Active Directory, but just can't seem to get this to work. SVN is set up in /var/local/svn. The server is svn.domain.local. For testing, my repository is /var/local/svn/test. My vhost file is as follows:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerAlias svn.domain.local
            ServerName svn.domain.local
            DocumentRoot /var/www/svn/
            <Location /test>
                DAV svn
                #SVNListParentPath On
                SVNPath /var/local/svn/test
                AuthzSVNAccessFile /var/local/svn/svnaccess
                AuthzLDAPAuthoritative off
                AuthType Basic
                AuthName "SVN Server"
                AuthBasicProvider ldap
                AuthLDAPBindDN "CN=adminuser,OU=SBSAdmin Users,OU=Users,OU=MyBusiness,DC=domain,DC=local"
                AuthLDAPBindPassword "admin password"
                AuthLDAPURL "ldap://192.168.1.6:389/OU=SBSUsers,OU=Users,OU=MyBusiness,DC=domain,DC=local?sAMAccountName?sub?(objectClass=*)"
                Require valid-user
            </Location>
            CustomLog /var/log/apache2/svn/access.log combined
            ErrorLog /var/log/apache2/svn/error.log
        </VirtualHost>

    In my error.log, I don't seem to get any bind errors (should I be looking elsewhere?), but just the following:

        [Thu Jun 21 09:51:38 2012] [error] [client 192.168.1.142] user alex: authentication failure for "/test/": Password Mismatch, referer: http://svn.domain.local/test/

    At the end of "AuthLDAPURL", I have seen people using TLS and NONE, but neither seems to help in my case. I have the LDAP modules loaded and have checked as much as I know, so any help would be most welcome. Thanks

    Read the article

  • SQL Sentry Truth-Telling and Disk Configuration

    - by AjarnMark
    Recently, SQL Sentry told me something about my SQL Server disk configurations that I just didn't want to believe, but alas, it was true. Several days ago I posted my First Impressions of the SQL Sentry Power Suite. Today's post could fall into the category of, "Hey, as long as you have that fancy tool…" Unfortunately, it also falls into the category of an overloaded worker taking someone else's word for the truth, not verifying it with independent fact-checking, and then making decisions based on that. Here's my story…

    I'm not exactly an Accidental DBA (or Involuntary DBA as Paul Randal calls it). I came to this company five years ago as a lead application developer with extensive experience in database design and development. I worked my way into management, and along the way, took over the DBA responsibilities. Fortunately, our systems run pretty smoothly most of the time, but I'm always looking for ways to make them better and to fit into my understanding of best practices. When I took over as DBA, I inherited a SQL 2000 server with about 30 databases on it supporting our main systems, and a SQL 2005 server with multiple instances. Both of these servers were configured with the Operating System and Application files on the C drive, data files on a different drive letter, and log files on a third drive letter. Even before I took over as DBA, I verified that this was true with a previous server administrator, and that these represented actual separate disks. He stated that they did, and I thought that all was well.

    Then one day, I'm poking around inside the SQL Sentry Performance Advisor, checking out features as I am evaluating whether to purchase the product, and I come across a Disk Configuration section. The first thing I notice is that the drives do not have the proper partition offset, which was not at all surprising to me given the age of the installation and the relative newness of that topic. But what threw me for a loop was that the graphic display appeared to be telling me that I did not in fact have three separate drives (or arrays) but rather had two, and that the log files were merely on a separate volume on the same physical array as the OS. I figured that I must be reading it wrong, so I scanned the Help file, but that just seemed to confirm my interpretation. Then I thought, "there must be something wrong with the demo version of the software! This can't be right!" But just to double-check, I went to our current server admin to talk it over with him, and sure enough, SQL Sentry was telling the truth!

    I was stunned! I quickly went through the grieving process… denial… anger… reconciliation. Here was something that I thought was such a basic truth that was turned upside down. OK, granted, this wasn't disastrous. Our databases didn't suddenly grind to a halt. I didn't get calls late at night inquiring about the sudden downturn in performance. But it was a bit of a shock to the system, in a good way, to jolt me out of taking what I had believed as the truth for granted, and instead to Trust, but Verify!

    Yes, before someone else points it out, I know that there are "free" disk management tools built in to Windows that would have told me the same thing if I had only looked at them; I did not have to buy a fancy tool to tell me that. But the fact is, until I was evaluating the tool, I had just gone with what I was told, and never bothered to check what was actually there. So, what things do you believe to be true but have actually never verified?

    Read the article
