Search Results

Search found 4704 results on 189 pages for 'refactoring databases'.


  • Entity Association Mapping with Code First Part 1 : Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First was released by the data team at Microsoft. Entity Framework Code First provides a pretty powerful code-centric way to work with databases. When it comes to associations, it brings ultimate flexibility. I’m a big fan of the EF Code First approach and am planning to explain association mapping with Code First in a series of blog posts, and this one is dedicated to Complex Types. If you are new to the Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around relationship mapping.

    What is Mapping? Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases.

    What is Relationship Mapping? A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects.

    Types of Relationships: There are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types. One-to-one relationships: a relationship where the maximum of each of its multiplicities is one. One-to-many relationships: also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one. Many-to-many relationships: a relationship where the maximum of both multiplicities is greater than one. The second category is based on directionality and it contains two types. Uni-directional relationships: when an object knows about the object(s) it is related to but the other object(s) do not know of the original object. To put this in EF terminology, a navigation property exists only on one of the association ends and not on both. Bi-directional relationships: when the objects on both ends of the relationship know of each other (i.e. a navigation property is defined on both ends).

    How Are Object Relationships Implemented in POCO Domain Models? When the multiplicity is one (e.g. 0..1 or 1) the relationship is implemented by defining a navigation property that references the other object (e.g. an Address property on the User class). When the multiplicity is many (e.g. 0..*, 1..*) the relationship is implemented via an ICollection of the type of the other object.

    How Are Relational Database Relationships Implemented? Relationships in relational databases are maintained through the use of foreign keys. A foreign key is a data attribute (or attributes) that appears in one table and must be the primary key or another candidate key in another table. With a one-to-one relationship the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship we implement a foreign key from the “one table” to the “many table”. We could also choose to implement a one-to-many relationship via an associative table (aka join table), effectively making it a many-to-many relationship.

    Introducing the Model: Now, let's review the model that we are going to use in order to implement a Complex Type with Code First. It's a simple object model which consists of two classes: User and Address. Each user could have one billing address. The Address information of a User is modeled as a separate class, as you can see in the UML model below. In object-modeling terms, this association is a kind of aggregation—a part-of relationship.
    Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole.

    Fine-grained domain models: The motivation behind this design was to achieve a fine-grained domain model. In crude terms, fine-grained means “more classes than tables”. For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we can use the same approach, representing the two addresses as six string-valued properties of the User class. But it’s much better to model this using an Address class, where User has the BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse and is more understandable.

    Complex Types: Splitting a Table Across Multiple Types: Back to our model, there is no difference between this composition and other weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: a composed class is often a candidate Complex Type. C# has no concept of composition—a class or property can’t be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on the Address class), which makes sense because when it comes to the database everything is going to be saved into one single table.

    How to Implement a Complex Type with Code First: Code First has a concept of Complex Type Discovery that works based on a set of conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation:

        public class User
        {
            public int UserId { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Username { get; set; }
            public Address Address { get; set; }
        }

        public class Address
        {
            public string Street { get; set; }
            public string City { get; set; }
            public string PostalCode { get; set; }
        }

        public class EntityMappingContext : DbContext
        {
            public DbSet<User> Users { get; set; }
        }

    With Code First, this is all of the code we need to write to create a complex type; we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API.

    Database Schema: The mapping result for this object model is a single Users table (a SQL sketch appears at the end of this entry).

    Limitations of this mapping: There are two important limitations to classes mapped as Complex Types.

    Shared references are not possible: the Address Complex Type doesn’t have its own database identity (primary key) and so can’t be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address).
    No elegant way to represent a null reference: there is no elegant way to represent a null reference to an Address. When reading from the database, EF Code First always initializes the Address object, even if the values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns an initialized complex type when the owning entity object is retrieved from the database.

    Summary: In this post we learned about fine-grained domain models, of which complex types are just one example. Fine-grained modeling is fully supported by EF Code First and is known as the most important requirement for a rich domain model. A complex type is usually the simplest way to represent a one-to-one relationship, and because the lifecycle is almost always dependent in such a case, it’s either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and will learn about other ways to map a one-to-one association that do not have the limitations of complex types.

    References: ADO.NET team blog; Mapping Objects to Relational Databases; Java Persistence with Hibernate
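    The schema figure from the original post is not reproduced in this excerpt. As a rough sketch only, the single table that the model above maps to would look something like the following T-SQL; the identity setting and the Address_Street style column names are assumptions based on Code First's default conventions rather than output taken from the post.

        -- Sketch of the generated Users table (column names assume the default
        -- ComplexProperty_Property naming convention; details may vary by EF release)
        CREATE TABLE [dbo].[Users] (
            [UserId]             INT IDENTITY (1, 1) NOT NULL PRIMARY KEY,
            [FirstName]          NVARCHAR (MAX) NULL,
            [LastName]           NVARCHAR (MAX) NULL,
            [Username]           NVARCHAR (MAX) NULL,
            [Address_Street]     NVARCHAR (MAX) NULL,
            [Address_City]       NVARCHAR (MAX) NULL,
            [Address_PostalCode] NVARCHAR (MAX) NULL
        );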

    Read the article

  • Globe Trotters: Asian Healthcare CIOs need ‘Security Inside Out’ Approach

    - by Tanu Sood
    In our second edition of Globe Trotters, we wanted to share a feature article that was recently published in Enterprise Innovation. EnterpriseInnovation.net, part of Questex Media Group, is Asia's premier business and technology publication. The article featured MOH Holdings (a holding company of Singapore’s Public Healthcare Institutions) and highlighted the project around the National Electronic Health Record (NEHR) system currently being deployed within Singapore. According to the feature, the NEHR system was built to facilitate seamless exchanges of medical information as patients move across different healthcare settings and to give healthcare providers more timely access to patients’ healthcare records in Singapore. The NEHR consolidates all clinically relevant information from patients’ visits across the healthcare system throughout their lives and pulls it together as a single record. It allows for data sharing, making it accessible to authorized healthcare providers across the continuum of care throughout the country. In healthcare, patient data privacy is critical, as is the need to avoid unauthorized access to electronic medical records. As Alan Dawson, director for infrastructure and operations at MOH Holdings, is quoted in the feature, “Protecting the perimeter is no longer enough. Healthcare CIOs today need to adopt a ‘security inside out’ approach that protects information assets all the way from databases to end points.” Oracle has long advocated the ‘Security Inside Out’ approach. From operating systems and infrastructure to databases, middleware and all the way to applications, organizations need to build in security at every layer and between these layers. This comprehensive approach to security has never been as important as it is today in the social, mobile, cloud (SoMoClo) world. To learn more about Oracle’s Security Inside Out approach, visit our Security page. And for more information on how to prevent unauthorized access, streamline user administration, bolster security and enforce compliance in healthcare, learn more about Oracle Identity Management.

    Read the article

  • Partner Webcast – More out of Database Appliance with DB Options - 13 September 2012

    - by Thanos
    The Oracle Database Appliance is a new way to take advantage of the world's most popular database—Oracle Database 11g—in a single, easy-to-deploy and manage system. It's a complete package of software, server, storage, and networking that's engineered for simplicity, saving time and money by simplifying deployment, maintenance, and support of database workloads. But that is not all: with support for all Oracle Database Options, Oracle Database Appliance can be the ideal solution for many use cases.

    Features and benefits:
    - Feature: Simplifies deployment, maintenance, and support of high-availability database workloads. Benefit: Saves significant time and effort throughout the database administration lifecycle.
    - Feature: An engineered system of software, server, storage, and networking. Benefit: High availability for a wide range of custom and packaged OLTP and data warehousing application databases.
    - Feature: Simple one-button installation, full-stack integrated patching and diagnostics. Benefit: Reduces planned and unplanned downtime by automatically monitoring and logging service requests with Oracle Support.
    - Feature: Built using the world’s #1 database. Benefit: Protects databases from server and storage failures with Oracle Real Application Clusters and Automatic Storage Management.
    - Feature: Unique pay-as-you-grow software licensing. Benefit: Reduces cost with the flexibility to adjust your software spend as your business grows, without the need for any hardware upgrades.

    Discover the Oracle Database Appliance value proposition and learn how to position and combine it with database options to capture new business and easily roll out solutions safely and with maximum cost efficiency. This webcast is repeated once again for your benefit.

    Agenda:
    - Oracle Database & Engineered Systems Innovation
    - What’s the Oracle Database Appliance?
    - Oracle Database Appliance Value Proposition
    - Oracle Database Appliance with Database Options
    - Oracle Database Appliance Partners Business

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now! Oracle Database Appliance is available for purchase at the Oracle Store under Engineered Systems. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com. Visit our ISV Migration Center blog regularly or follow us @oracleimc to learn more about Oracle technologies as well as upcoming partner webcasts and events.

    Read the article

  • Upgrade to 2008 R2

    - by DavidWimbush
    I don't like it, Carruthers. It's just too quiet. Well, I've done the pre-production server, the main live server and the Reporting/BI server with remarkably little trouble. Pre-production and live were rebuilds. I failed live over to our log shipping standby for the duration, which has a gotcha I blogged about before. When I failed back to the primary live server again, it was very quick to bring the databases online. I understand the databases don't actually get upgraded until you recover them, but there was no noticeable delay. It's gone from 2005 Workgroup - limited to 4GB of memory - to 2008 R2 Standard, so it can now use nearly all of the 30GB in the server. It's soo much faster. The reporting/BI server I upgraded in situ. This took a while but, again, went smoothly. Just watch out, because the master database was left at compatibility level 90. Also, the upgrade decided to use the reporting service's credentials for database access when running reports. It didn't preserve the existing credentials and I had to go into the Reporting Configuration Manager to put them back in. Make sure you know what credentials your server is using before you upgrade. All things considered, a fairly painless experience. Now I just have to upgrade and reset our log shipping standby server again!
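    For anyone checking their own servers after a similar upgrade, a quick way to spot databases left behind on an old compatibility level is sketched below. The database name is a placeholder, and system databases such as master may not accept the change, so treat this as a starting point rather than a script from the post.

        -- List every database and its compatibility level (100 = SQL Server 2008 / 2008 R2)
        SELECT name, compatibility_level
        FROM sys.databases
        ORDER BY name;

        -- Raise the level on a database that was left behind (placeholder name; test first)
        ALTER DATABASE [ReportServer] SET COMPATIBILITY_LEVEL = 100;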

    Read the article

  • Log shipping and shrinking transaction logs

    - by DavidWimbush
    I just solved a problem that had me worried for a bit. I'm log shipping from three primary servers to a single secondary server, and the transaction log disk on the secondary server was getting very full. I established that several primary databases had unused log space that resulted from big, one-off updates, so I could shrink their logs. But would this action be log shipped and applied to the secondary database too? I thought probably not. And, more importantly, would it break log shipping? My secondary databases are in a Standby / Read Only state, so I didn't think I could shrink their logs. I RTFMd, Googled, and asked on a Q&A site (not the evil one) but was none the wiser. So I was facing a monumental round of shrink, full backup, full secondary restore and re-start log shipping (which would leave us without a disaster recovery facility for the duration). Then I thought it might be worthwhile to take a non-essential database and find out for certain whether a log shrink on the primary would ship over and occur on the secondary as well, and whether it would break anything. So I did a DBCC SHRINKFILE and kept an eye on the secondary. Bingo! Log shipping didn't blink and the log on the secondary shrank too. I just love it when something turns out even better than I dared to hope. (And I guess this highlights something I need to learn about what activities are logged.)
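    For reference, the kind of shrink described above looks roughly like the sketch below when run on the primary. The database name, logical log file name and target size are placeholders, not values from the post.

        USE [PrimaryDb];   -- placeholder primary database
        GO
        -- See how full each transaction log currently is
        DBCC SQLPERF (LOGSPACE);
        GO
        -- Shrink the log file back down (logical file name and target size in MB are placeholders)
        DBCC SHRINKFILE (N'PrimaryDb_log', 1024);
        GO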

    Read the article

  • Ameristar Wins with Oracle GoldenGate’s Heterogeneous Real-Time Data Integration

    - by Irem Radzik
    Today we announced a press release about another successful project with Oracle GoldenGate, this time at Ameristar. Ameristar is a casino gaming company that needed a single data integration solution to connect multiple heterogeneous systems to its Teradata data warehouse. The project involves integration of Ameristar’s promotional and gaming data from 14 data sources across its 7 casino hotel properties in real time into a central Teradata data warehouse. The source systems include the Aristocrat gaming and MGT promotional management platforms running on Microsoft SQL Server 2000 databases. As you may notice, there was no Oracle Database involved in this project, but Ameristar’s IT leadership knew that GoldenGate’s strong heterogeneous and real-time data integration capabilities made it the right technology for their data warehousing project. With GoldenGate, Ameristar was able to reduce data latency to the enterprise data warehouse and give its marketing teams real-time customer information to improve the overall customer experience. Ameristar customers receive more targeted and timely campaign offers, and the company has more up-to-date visibility into its financial metrics. One other key benefit the company experienced with GoldenGate is in operational costs. The previous data capture solution Ameristar used was trigger based and required a lot of effort to manage; they needed dedicated IT staff to maintain it. With GoldenGate, the solution runs seamlessly without needing fully dedicated staff, giving the IT team at Ameristar more resources for their other IT projects. If you want to learn more about GoldenGate and the latest features for Oracle Database and non-Oracle databases, please watch our on-demand webcast about Oracle GoldenGate 11g Release 2.

    Read the article

  • New Recommended Bundle Patch (APR 2010) - 9405592 for Patch Automation on EM 10.2.0.5

    - by Hari Prasanna Srinivasan
    New Recommended Bundle Patch 9405592 is available for download from My Oracle Support now. This patch primarily enhances the patching functionality offered by Oracle Enterprise Manager Grid Control. This patch is cumulative and is a superset of the previously released bundles #9132461, #8992470, and #8653501, and therefore includes all the features that were introduced as part of those Recommended Bundle Patches. For more information, refer to Comprehensive Overview of Recommended Bundle Patch 9405592 under support note - OMS and Agent Patches required for setting up Provisioning, Patching and Cloning in 10.2.0.3 to 10.2.0.5 GC [ID 427577.1]

    FAQ:
    #1 If I had applied the previous recommended patches, do I need to roll back before applying this? Yes, if you had applied any of the patches (#9132461, #8992470 and #8653501) you would need to roll back that patch and apply this one. For rollback instructions, refer to the patch README from support note 427577.1.
    #2 I recently applied the patch 9132461, do I still need the new patch? The new patch contains additional bug fixes (for more information see Comprehensive Overview of Recommended Bundle Patch 9405592): augmented verification and support for Oracle Database 9i Release 2 (9.2.0.8) and Oracle databases on the Microsoft Windows platform; bug fixes resolving issues with patching CPUs on databases running on Windows platforms; and key bug fixes identified at various customers. Oracle strongly recommends that you apply the latest patch to make sure you do not encounter these issues and you are at the latest patch level for faster issue resolution through support.
    #3 Can I apply this patch on top of PSU3 (9282397) for Enterprise Manager? Yes, this patch does NOT conflict with PSU3 and can be applied over it.
    #4 Are there any known conflicts? If you had applied the patch 8573971, it would conflict with this patch (9405592). You would need to roll back patch 8573971 and apply this bundle, then apply the overlay patch 9583322 to get the fixes of the rolled-back patch 8573971. Note: the overlay patch is currently unavailable; it will be made available in a few days.

    Read the article

  • SQL SERVER – Migration Assistant Upgraded to Support SQL Server 2014

    - by Pinal Dave
    We all start somewhere when it comes to databases. There are different reasons why we choose one database over another; usually the reasons are cost and convenience. After a period of time, when the business is successful and traffic is growing, those same two reasons of cost and convenience start to become secondary goals. I have seen quite a lot of companies starting with free databases and after a while switching to another database because they want stability and service from the product company. Microsoft has an excellent product which lets you migrate your database from an alternate database to SQL Server. It is called SQL Server Migration Assistant (SSMA) and earlier this week it was upgraded to support SQL Server 2014. Now you can migrate from your database to all editions of SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012 and SQL Server 2014. SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft. Here is where you can download SSMA v5.3 for various databases:
    - Microsoft SQL Server Migration Assistant v5.3 for Access: a tool to automate migration from Microsoft Access database(s) to SQL Server.
    - Microsoft SQL Server Migration Assistant v5.3 for Oracle: a tool to automate migration from Oracle database to SQL Server.
    - Microsoft SQL Server Migration Assistant v5.3 for Sybase: a tool to automate migration from Sybase ASE database to SQL Server.
    - Microsoft SQL Server Migration Assistant v5.3 for MySQL: a tool to automate migration from MySQL database to SQL Server.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Document-oriented vs Column-oriented database fit

    - by user1007922
    I have a data-intensive application that desperately needs a database make-over.

    The general data model: There are records with RIDs, grouped together by group IDs (GID). The records have arbitrary data fields (maybe 5-15), with a few of them mandatory and the rest optional, and thus sparse.

    The general use model: There are LOTS and LOTS of writes. Millions to billions of records are stored. Very often they are associated with new GIDs, but sometimes they are associated with existing GIDs. There aren't as many reads, but when they happen, they need to be pretty fast, or at least constant speed regardless of the database size. And when the reads happen, they will need to retrieve all the records/RIDs with a certain GID. I don't have a need to search by the record field values. Primarily, I will need to query by the GID and maybe the RID.

    What database implementation should I use? I did some initial research between document-oriented and column-oriented databases and it seems the document-oriented ones are a good fit, model-wise. I could store all the records together under the same document key using the GID. But I don't really have any use for their ability to search the document contents itself. I like the simplicity and scalability of column-oriented databases like Cassandra, but how should I model my data in this paradigm for optimal performance? Should my key be the GID, and should I create a column for each record/RID (there may be thousands or hundreds of thousands of records in a group/GID)? Or should my key be the RID and ensure each row has a column for the GID value? What results in faster writes and reads under this model?
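    Purely as an illustration of the column-oriented option, here is one possible CQL sketch for Cassandra, with the GID as the partition key and the RID as a clustering column so that a whole group is stored together and can be read back in a single query. The table and column names are invented for this sketch, and very large groups (hundreds of thousands of records under one GID) would produce wide partitions that are worth load-testing.

        -- Hypothetical schema: one partition per GID, one row per RID within it
        CREATE TABLE records_by_group (
            gid    bigint,            -- group id: partition key, co-locates the group
            rid    bigint,            -- record id: clustering column, ordered within the partition
            field1 text,              -- a couple of mandatory fields
            field2 text,
            extras map<text, text>,   -- sparse optional fields
            PRIMARY KEY ((gid), rid)
        );

        -- Write path: each record is a single insert
        INSERT INTO records_by_group (gid, rid, field1, extras)
        VALUES (42, 1001, 'value', {'optional_field': 'x'});

        -- Read path: fetch all records/RIDs for a given GID in one query
        SELECT * FROM records_by_group WHERE gid = 42;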

    Read the article

  • Game Changer Appliance for SMBs Powered by Oracle Linux

    - by Zeynep Koch
    In the November 28th CRN article "Review: Thumbs-Up On Oracle Database Appliance", Edward F. Moltzen mentions that "The Test Center likes this appliance (Oracle Database Appliance), for the performance and for the strong security offered by the underlying Oracle Linux in the box. It’s more than a solid offering for the SMB space; it’s potentially a game-changer as data and security needs race to keep up with the oncoming generations of technology." The Oracle Database Appliance is a new way to take advantage of the world's most popular database—Oracle Database 11g—in a single, easy-to-deploy and manage system. It's a complete package of software, server, storage, and network that's engineered for simplicity, saving time and money by simplifying deployment, maintenance, and support of database workloads. All hardware and software components are supported by a single vendor—Oracle—and offer customers unique pay-as-you-grow software licensing to quickly scale from 2 processor cores to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades. It is:
    - Simple: complete plug-and-go hardware and software
    - Reliable: advanced management features and single-vendor support
    - Affordable: pay-as-you-grow platform for small database consolidation
    The Oracle Database Appliance is a 4U rack-mountable system pre-installed with Oracle Linux and Oracle appliance manager software. Redundancy is built into all components, and the Oracle appliance manager software reduces the risk and complexity of deploying highly available databases. It's perfect for consolidating OLTP and data warehousing databases up to 4 terabytes in size, making it ideal for midsize companies or departmental systems. Read more about Oracle's Database Appliance. Read more about Oracle Linux.

    Read the article

  • Tools of the Trade

    - by Ajarn Mark Caldwell
    I got pretty excited a couple of days ago when my new laptop arrived. “The new phone books are here! The new phone books are here! I’m a somebody!” - Steve Martin in The Jerk. It is a Dell Precision M4500 with an Intel Core i7 at 2.8 GHz running 64-bit Windows 7, with a 15.6” widescreen, 8 GB RAM, and a 256 GB SSD. For some of you high fliers, this may be nothing to write home about, but compared to the 32-bit Windows XP laptop with 2 GB of RAM and a regular hard disk that I’m coming from, it’s a really nice step forward. I won’t even bore you with the details of the desktop PC I was first given when I started here 5 1/2 years ago. Let’s just say that things have improved. One really nice thing is that while we are definitely running a lean and mean department in terms of staffing, my boss believes in supporting that lean staff with good tools in order to stay lean instead of having to spend even more money on additional employees. Of course, that only goes so far, and at some point you have to add more people in order to get more work done, which is why we are bringing on board a new employee and a new contract developer next week. But that’s a different story for a different time.

    But the main topic for this post is to highlight the variety of tools that I use in my job and that you might find useful, too. This is easy to do right now because the process of building up my new laptop from scratch has forced me to assemble a list of software that had to be installed and configured. Keep in mind as you look through this list that I play many roles in our company. My official title is Software Engineering Manager, but in addition to managing the team, I am also an active ASP.NET and SQL developer, the Database Administrator, and 50% of the SAN Administrator team. So, without further ado, here are the tools (tool and purpose) and some comments about why I use them:

    - Virtual Clone Drive: Easily mount an ISO image as a DVD drive. This is particularly handy when you are downloading disk images from Microsoft for your tools.
    - SQL Server 2008 R2 Developer Edition: We are migrating all of our active systems to SQL 2008 R2. Developer Edition has all the features of Enterprise Edition, but is intended for development use.
    - SQL Server 2005 Developer Edition (BIDS only): The migration to SSRS 2008 R2 is just getting started, and in the meantime, maintenance work still has to be done on the reports on our SQL 2005 server. For some reason, you can’t use BIDS from 2008 to write reports for a 2005 server. There is some different format, and when you open 2005 reports in 2008 BIDS, it forces you to upgrade, and they can no longer be uploaded to a 2005 server. Hopefully Microsoft will fix this soon in some manner similar to how Visual Studio now allows you to pick which version of the .NET Framework you are coding against.
    - Visual Studio 2010 Premium: All of our application development is in ASP.NET, and we might as well use the tool designed for it. I’ve used a version of Visual Studio going all the way back to VB 6.0 and Visual Interdev.
    - Vault Professional Client: Several years ago we replaced Visual Source Safe with SourceGear Vault (then Fortress, and now Vault Pro), and I love it. It is very reliable with low overhead - perfect for a small to medium size development team. And being a small ISV, their support is exceptional.
    - Red Gate Developer Bundle with the SQL Source Control update for Vault: I first used, and fell in love with, SQL Prompt shortly before Red Gate bought it, and then Red Gate’s first release made me love it even more. SQL Refactor (which has since been rolled into the latest version of SQL Prompt) has saved me many hours and migraines trying to understand somebody else’s code when their indenting was nonexistent, or worse, irrational. SQL Compare has been awesome for troubleshooting potential schema issues between different instances of system databases. SQL Data Compare helped us identify the cause behind a bug which appeared in PROD but could not be reproduced in a nearly (but not quite exactly) identical copy in UAT. And the newest tool we are embracing: SQL Source Control. I blogged about it here (and here, and here) last December. This is really going to help us keep each developer’s copy of the database in sync with one another.
    - Fiddler: Helps you watch the whole traffic stream on web visits. Haven’t used it a lot, but it did help me track down some odd 404 errors we were finding in our own application logs. Has some other JavaScript troubleshooting capabilities, but some of its usefulness has been supplanted by the Developer Tools option in IE8.
    - Funduc Search & Replace: Find any string anywhere in a mound of source code really, really fast. Does RegEx searches, if you understand that foreign language. Has really helped with some refactoring work to pinpoint, for example, everywhere a particular stored procedure is referenced, whether in .NET code or other SQL procedures (which we have in script files). Provides in-context preview of the search results. Fantastic tool, and a bargain price.
    - SciTE: SciTE is a Scintilla-based text editor and it is a fantastic, light-weight tool for quickly reviewing (or writing) program code, SQL scripts, and extract files. It has language-specific syntax highlighting. I used it to write several batch and CMD programs a year ago, and to examine data extract files for exchanging information with other systems. Extremely handy are the options to View End of Line and View Whitespace. Ever receive a file that is supposed to use CRLF as an end-of-line marker, but really only has CRs? SciTE will quickly make that visible.
    - Infragistics Controls: We do a lot of ASP.NET development, and frequently use the WebGrid, WebTab, and date picker controls. We will likely be implementing the Hierarchical Data Grid soon. Infragistics has control suites for WebForms, WinForms, Silverlight, and coming soon MVC/JQuery.
    - WinZip (with the command-line add-in): The classic compression program with a great command-line interface that allows me to build those CMD (and soon PowerShell) programs for automated compression jobs. Our versioned build packages are zip files.
    - XML Notepad: Haven’t used this a lot myself, but one of my team really likes it for examining large XML files.
    - LINQPad: Again, haven’t used this one a lot, but it was recommended to me for learning and practicing my LINQ skills, which will come in handy as we implement Entity Framework.
    - SQL Sentry Plan Explorer: SQL Server Show Plan on steroids. Great for helping you focus on the parts of a large query that are of most importance. Also great for just compressing the graphical plan into a more readable layout.
    - Araxis Merge: A great DIFF and merge tool. SourceGear provides a great tool called DiffMerge that we use all the time, but occasionally I like the cross-edit capabilities of Araxis Merge. For a while, we also produced DIFF reports in HTML that showed all the changes that occurred between two releases. This was most important when we were putting out very small, but very important, hot fixes on a very politically hot system. The reports produced by Araxis Merge gave the Director of IS assurance that we were not accidentally introducing ripples throughout the system with our releases.
    - Idera SQL Admin Toolset: A great collection of tools, including a password checker to help analyze your SQL Server for weak user passwords, and a Backup Status tool to quickly scan a large list of servers and databases to identify any that are overdue for backups. Particularly helpful for highlighting new databases that have been deployed without getting included in your backup processing. I also like Space Analyzer to keep an eye on disk space consumed by database files.
    - Idera SQL Job Manager: This free tool provides a nice calendar view of SQL Server job schedules, but to a degree, you also get what you pay for. We will be purchasing SQL Sentry Event Manager later this year as an even better job schedule reviewer/manager. But in the meantime, this at least gives me a good view of potential resource conflicts across multiple instances of SQL Server.
    - DBFViewer 2000: I inherited a couple of FoxPro databases that I have to keep an eye on occasionally and have not yet been able to migrate to SQL Server.
    - Balsamiq Mockups: We are still in evaluation mode on this tool, but I really like it as a quick UI mockup tool that does not require Visual Studio, so someone other than a programmer can do UI design. The interface looks hand-drawn, which definitely has some psychological benefits when communicating to users, too.
    - FeedDemon: I have to stay on top of my WAY TOO MANY blog subscriptions somehow. I may read blogs on a couple of different computers, and FeedDemon’s integration with Google Reader allows me to keep them all in sync. I don’t particularly like the Google Reader interface, or the fact that it always wanted to mark articles as read just because I scrolled past them. FeedDemon solves this problem for me, and provides a multi-tabbed interface, which is good because fairly frequently one blog will link to something else I want to read, and I can end up with a half-dozen open tabs all from one article.
    - Synergy+: In my office, I run four monitors across two computers, all with one mouse and keyboard. Synergy is the magic software that makes this work.
    - TweetDeck: I’m not the most active Tweeter in the world, but when I want to check in with the Twitterverse, this really helps. I have found the #sqlhelp and #PoshHelp hash tags particularly useful, and I also have columns set up to make it easy to monitor #sqlpass, #PASSProfDev, and short-term events like #sqlsat68.

    Whew! That’s a lot. No wonder it took me a couple of days to get everything set up the way I wanted it. Oh, that and actually getting some work accomplished at the same time. Anyway, I know that is a huge dump of info, and most people never make it here to the end, so for those who did, let me say, CONGRATULATIONS, you made it! I hope you’ll find a new tool or two to make your work life a little easier.

    Read the article

  • 11gr2 DataGuard: Restarting DUPLICATE After a Failure

    - by rene.kundersma
    One of the great new features that comes in very handy when databases get larger and larger these days is RMAN's capability to duplicate from an active database and even restart a duplicate when it fails. Imagine the problem I had lately: I used the duplicate from active database feature and had to wait for 6 hours or so before all datafiles were transferred. At the end of the process an error occurred because of a syntax mistake. While this error was easy to solve, I was afraid I had to redo the complete procedure and transfer the 2.5 TB again. Well, 11gR2 RMAN surprised me when I re-ran my command with the following output:

    Using previous duplicated file +DATA/fin2prod/datafile/users.2968.719237649 for datafile 12 with checkpoint SCN of 183289288148
    Using previous duplicated file +DATA/fin2prod/datafile/users.2703.719237975 for datafile 13 with checkpoint SCN of 183289295823

    Above I only show a small snippet, but what happened is that RMAN smartly skipped all files that were already transferred! The documentation says this: RMAN automatically optimizes a DUPLICATE command that is a repeat of a previously failed DUPLICATE command. The repeat DUPLICATE command notices which datafiles were successfully copied earlier and does not copy them again. This applies to all forms of duplication, whether they are backup-based (with and without a target connection) or active database duplication. The automatic optimization of the DUPLICATE command can be especially useful when a failure occurs during the duplication of very large databases. If a DUPLICATE operation fails, you need only run the DUPLICATE again, using the same parameters contained in the original DUPLICATE command. Please see chapter 23 of the 11g Release 2 Database Backup and Recovery User's Guide for more details. By the way, be very careful with the duplicate command. A small mistake in one of the 'convert' parameters can potentially overwrite your target's controlfile without prompting! Rene Kundersma, Technical Architect, Oracle Technology Services
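    For context, an active database duplication command looks roughly like the RMAN sketch below. The connection strings and options shown here are illustrative assumptions, since the author's actual command is not included in the post; the point is that re-running the same command after a failure triggers the optimization described above.

        # RMAN sketch: duplicate a running source database to the auxiliary instance
        # (service names and options are placeholders)
        CONNECT TARGET sys@prod_db;
        CONNECT AUXILIARY sys@fin2prod;
        DUPLICATE TARGET DATABASE TO fin2prod
          FROM ACTIVE DATABASE
          SPFILE
          NOFILENAMECHECK;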

    Read the article

  • SQL Cruise Alaska 2011

    - by Grant Fritchey
    I had the extreme good fortune to get sent on the last SQL Cruise to Alaska. I love my job. In case you don't know what this is, SQL Cruise is a trip on a cruise ship during which you get to attend classes while on the boat, learning all about SQL Server and related topics, as well as network with the instructors and the other Cruisers. Frankly, it's amazing. Classes ran from Monday, 5/30, to Saturday, 6/4. The networking was constant, between classes, at night on the cruise ship, out on excursions in Alaskan rainforests and while snorkeling in ocean waters. Here's a rundown of the experience from my point of view. Because I couldn't travel out 2 days early, I missed the BBQ that occurred the day before the cruise when many of the Cruisers received their swag bags. Some of that swag came from Red Gate. I researched what was useful on a cruise like this and purchased small flashlights and binoculars for all the Cruisers. The flashlights were because, depending on your cabin, ships can be very dark. The binoculars were so that the cruisers could watch all the beautiful landscape as it flowed by. I would have liked to have been there when the bags were opened, but I heard from several people that they appreciated the gifts. Cruisers "In" the hot tub. Pictured: Marjory Woody, Michele Grondin, Kyle Brandt, Grant Fritchey, John Halunen Sunday I went to board the ship with my wife. We had a bit of an adventure because I messed up our documents. It all worked out and we got on board to meet up at the back of the boat at one of the outdoor bars with the other Cruisers, thanks to tweets letting everyone know where to go. That was the end of electronic coordination on the trip (connectivity in Alaska was horrible for everyone except AT&T). The Cruisers were a great bunch of people and it was a real honor to meet them and get to spend time with them. After everyone settled into their cabins, our very first activity was a contest, sponsored by Red Gate. The Cruisers, in an effort to get to know each other and the ship, were required to go all over taking various photographs, some of them hilarious. The winning team of three would all win prizes. Some of the significant others helped out and I tagged along with a team that tied for first but lost the coin toss. The winning team consisted of Christina Leo (blog|twitter), Ryan Malcom (twitter), Neil Hambly (blog|twitter). They then had to do math and identify the cabin with the lowest prime number, oh, and get a picture of it and be the first to get back up to the bar where we were waiting. Christina came in first and very happily carried home an Ipad2. Ryan won a 1TB portable hard drive and Neil won a wireless mouse (picture below, note my special SQL Server Central Friday Shirt. Thanks Steve (blog|twitter)). Winners: Christina Leo, Neil Hambly, Ryan Malcolm. Just Lucky: Grant Fritchey Monday morning classes started. Buck Woody (blog|twitter) was a special guest speaker on this cruise. His theme was "Three C's on the High Seas: Career, Communication and Cloud." The first session was all on Career. I'm not going to type out all my notes from the session, but let's just say, if you get the chance to hear Buck talk about how to manage your career, I suggest you attend. I have a ton of blog posts that I'll be putting together over the next several months (yes, months) both here and over on ScaryDBA. I also have a bunch of work I'm going to be doing to get my career performance bumped up a notch or two (and let's face it, that won't be easy).
Later on Monday, Tim Ford (blog|twitter) did a session on DMOs. Specifically the session was on Tim's Periodic Table of DMOs that he has put together, and how to use some of the more interesting DMOs in your day to day job. It was a great session, packed with good information. Next, Brent Ozar (blog|twitter) did a session on how to monitor and guide SAN configuration for the DBA that doesn't have access to the SAN. That was some seriously useful information. Tuesday morning we only had a single class. Kendra Little (blog|twitter) taught us all about "No Lock for Yes Fun".  It was all about the different transaction isolation levels and how they work. There is so often confusion in this area and Kendra does a great job in clarifying the information. Also, she tosses in her excellent drawings to liven up the presentation. Then it was excursion time in Juneau. My wife and I, along with several other Cruisers, took a hike up around the Mendenhall Glacier. It was absolutely beautiful weather and walking through the Alaskan rain forest was a treat. Our guide, Jason, was a great guy and it was a good day of hiking. Wednesday was an all day excursion in Skagway. My wife and I took the "Ghost and Good Time Girls" walking tour that ended up at a bar that used to be a brothel, the Red Onion. It was a great history of the town. We went back out and hit a few museums and exhibits. We also hiked up the side of the mountain to see the Dewey Lake and some great views of the town. Finally we hiked out to the far side of town to see the Gold Rush cemetery. Hiking done we went back to the boat and had a quiet dinner on our own. Thursday we cruised through Glacier Bay and saw at least four different glaciers including sitting next to the Marjory Glacier for  about an hour. It was amazing. Then it got better. We went into class with Buck again, this time to talk about Communication. Again, I've got pages of notes that I'm going to be referring back to for some time to come. This was an excellent opportunity to learn. Snorkelers: Nicole Bertrand, Aaron Bertrand, Grant Fritchey, Neil Hambly, Christina Leo, John Robel, Yanni Robel, Tim Ford Friday we pulled into Ketchikan. A bunch of us went snorkeling. Yes, snorkeling. Yes, in Alaska. Yes, snorkeling in the ocean in Alaska. It was fantastic. They had us put on 7mm thick wet suits (an adventure all by itself) so it was basically warm the entire time we were in the water (except for the occasional squirt of cold water down my back). Before we got in the water a bald eagle flew up and landed about 15 feet in front of us, which was just an incredible event. Then our guide pointed out about 14 other eagles in the area, hanging out in the trees. Wow! The water was pretty clear and there was a ton of things to see. That was absolutely a blast. Back on the boat I presented a session called Execution Plans: The Deep Dive (note the nautical theme). It seemed to go over well and I had several good questions come out of the session that will lead to new blog posts. After I presented, it was Aaron Bertrand's (blog|twitter) turn. He did a session on "What's New in Denali" that provided a lot of great information. He was able to incorporate new things straight out of Tech-Ed, so this was expanded beyond his usual presentation. The man really knows what he's talking about and communicates it well. Saturday we were travelling so there was time for a bunch of classes. 
Jeremiah Peschka (blog|twitter) did a great overview of some of the NoSQL databases and what they should be used for. The session was called "The Database is Dead" but it was really about how there are specific uses for these databases that SQL Server doesn't fill, but also that these databases can't replace SQL Server in other areas. Again, good material. Brent Ozar presented again with a session on Defensive Indexing. It was an overview of how indexes work and a deep dive into how to apply them appropriately in your databases to better support access. A good session, as you would expect. Then we pulled into Victoria, BC, in Canada and had a nice dinner with several of the Cruisers, including Denny Cherry (blog|twitter). After that it was back to Seattle on Sunday. By the way, the Science Fiction Museum in Seattle isn't a Science Fiction Museum any more. I was very disappointed to discover this. Overall, it was a great experience. I'm extremely appreciative of Red Gate for sending me and for Tim, Brent, Kendra and Jeremiah for having me. The other Cruisers were all amazing people and it was an honor & privilege to meet them and spend time with them. While this was a seriously fun time, it was also a very serious training opportunity with solid information coming from seasoned industry pros.

    Read the article

  • Progress 4GL and DB to Oracle and cloud

    - by llaszews
    Getting from client/server-based 4GLs and databases, where the 4GL is tightly linked to the database, to Oracle and the cloud is not easy. The least risky and expensive option (in the short term) is to use the Progress OpenEdge DataServer for Oracle: Progress OpenEdge DataServer. This eliminates the need to migrate the Progress 4GL to Java/J2EE. The database can be migrated using Ispirer SQLWays: Ispirer SQLWays Progress DB migration tool. The Progress 4GL can remain as is. In order to get the application on the cloud there are a few approaches:
    1. VDI - Virtual Desktop is a way to put all of the users' desktops in a centralized environment off the desktop. This is great in cases where it is not just one client/server application that the user needs access to. In many cases, users will utilize MS Access, MS Excel, Crystal Reports and other tools to get at the Progress DB and other centralized databases. VMware's acquisition of Wanova shows how VDI is growing in usage. Citrix is the 800-pound gorilla in the VDI space with Citrix WinFrame (now called XenDesktop). Oracle offers a VDI solution that Oracle picked up when it acquired Sun.
    2. Hypervisor Server Virtualization - Of course you can place applications written in client/server languages like Progress 4GL by using server virtualization from Oracle, VMware, Microsoft, Citrix and others.
    3. Microsoft Remote Desktop Services (aka Terminal Services Client) - The entire idea is to eliminate all the client/server desktop devices and connections which require desktop software and database drivers. A solution to removing database drivers from the desktop is to use DataDirect SQLLink.

    Read the article

  • SQLAuthority News – Learning, Community and Book Signing at #SQLPASS 2012

    - by pinaldave
    The SQLPASS event is going excellently and we are having great fun! We are having book signing events and the response is overwhelmingly positive. I am glad that all of you love our books and I totally appreciate your support. Rick and I both are feeling very motivated to write more books in the future. Here is our schedule for book signing.

    SQL Queries 2012 Joes 2 Pros Volume 1: Finally a book for the true SQL Server beginner! Whether you are brand new to databases and are thinking of getting your 70-461 certification, or already a semi-pro working in the field who needs some fingertip support, this is the book for you. Joes 2 Pros does not assume you already know anything about databases or SQL Server. This book builds on the success of the previous series and will help anyone transform themselves from a beginner "Joe" into a SQL 2012 "Pro". Wednesday, November 7, 2012, 12pm-1pm - Book signing at Exhibit Hall Joes 2 Pros booth #117 (FREE BOOK). The rest of the time I will be at Exhibit Hall Joes 2 Pros Booth #117. Stop by for the goodies! This book is also available on Amazon.

    SQL 2012 Functions Joes 2 Pros: Functions have been around for many years to make our lives easier. Because of them, thousands of lines of valuable programming can be done with one statement. When we know what functions are offered in SQL Server we can get powerful projects done very quickly. Often the functions you wished you had are released in the next version. Wednesday, November 7, 2012, 7pm-8pm - Embarcadero Booth book signing (FREE BOOK). Thursday, November 8, 2012, 12pm-1pm - Embarcadero Booth book signing (FREE BOOK). This book is also available on Amazon.

    If you are at SQLPASS stop by Booth #117 - I will be there and maybe you can get one of my signed books! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Oracle Database 12c is available for download now!

    - by Mike Dietrich
    Good things come to those who wait... finally... Oracle Database 12c (Oracle 12.1.0.1) is available for download from the Oracle Software Cloud (formerly known as eDelivery) and OTN (Oracle Tech Network) for Linux 64-bit (Solaris will follow within the next few hours):
    eDelivery: Oracle Database 12c (12.1.0.1) for Linux 64-bit, Oracle Database 12c (12.1.0.1) for Solaris SPARC64, Oracle Database 12c (12.1.0.1) for Solaris x86.
    OTN: Oracle Database 12c (12.1.0.1) for Linux 64-bit, Oracle Database 12c (12.1.0.1) for Solaris SPARC64, Oracle Database 12c (12.1.0.1) for Solaris x86.
    And yes, it will be supported on Oracle Exadata and SuperCluster as well. And with the release of Oracle Database 12c we are offering you our NEW Upgrade, Migrate and Consolidate to Oracle Database 12c slide deck with (sorry, we did it again!) over 500 slides covering:
    - The brand new Parallel Upgrade, including new Pre/Post-Upgrade Fix-Ups
    - The new Full Transportable Export/Import feature
    - Obviously Oracle Multitenant, which got talked about a lot as Pluggable Databases or Container Databases before
    - Plenty of new parameters, cool and very helpful features and much more...
    Download the slides: Upgrade, Migrate and Consolidate to Oracle Database 12c. And of course, the slide deck will see some updates in the near future. -Mike

    Read the article

  • Designing a large database with multiple sources

    - by CatchingMonkey
    I have been tasked with redesigning, or at worst optimising, the structure of a database for a data warehouse. Currently, the database has 4 other source databases (which is due to expand to X number of others), all of which have their own data structures, naming conventions etc. At the moment an overnight SSIS package pulls the data from the various sources and then, for each source, converts the data into a standardised, usable format. These tables are then appended to each other, creating a 60m row, 40 column beast! This table is then used in a variety of ways, from an OLAP cube to a web front end. The structure has been in place for a very long time, and through the work I have done I have been able to prove the advantages of normalisation, and this is the way I would like to go. The problem for me is that the overnight process takes so long that I don't then want to spend additional time normalising the last table into something usable. Can anyone offer any insight or ideas into the best way to restructure or optimise the database efficiently? Edit: All the databases are MS SQL Server 2008 R2. Thanks in advance, CM

    Read the article

  • SQLAuthority News – Deployment guide for Microsoft SharePoint Foundation 2010

    - by pinaldave
    SharePoint and SQL Server go together hand in hand. SharePoint installation is very interesting; at various organizations, the installation is very different and has various needs. SQL Server installation with SharePoint is equally important, and I have often seen it being neglected. Microsoft has published the Deployment Guide for SharePoint Foundation, which talks about various database aspects as well. For an optimal SharePoint installation, the required version of SQL Server, including service packs and cumulative updates, must be installed on the database server. The installation must include any additional features, such as SQL Analysis Services, and the appropriate SharePoint Foundation logins have to be added and configured. The database server must be hardened and, if it is required, databases must be created by the DBA. For more information, see: Hardware and software requirements (SharePoint Foundation 2010), Harden SQL Server for SharePoint environments (SharePoint Foundation 2010), Deploy by using DBA-created databases (SharePoint Foundation 2010), Deployment guide for Microsoft SharePoint Foundation 2010. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SharePoint

    Read the article

  • Security in a private web service

    - by Oni
    I am developing a web site and a web service for a small on-line game. Technically, I'll be using Express (node.js) and MongoDB+Redis for the databases. This is the structure I came up with: one Express server that will serve as the Web Service and connect to the databases; one Express server that will provide the web site and connect to the Web Service to retrieve and push the information; and iOS and Android applications that will be able to interact with the Web Service. Taking into account that it is a small game, that the information transferred is not critical, and that there will NOT be third-party applications (at least for the moment), my concern is about which level of security I should use in each of the scenarios: security of the user playing through the web browser, and security of the applications and the web server connecting to the Web Service. I have taken a look at the different options and: OAuth and/or HTTPS is too much for this scenario, isn't it? Would it be a good option to hash the username and password with MD5 (or similar) and some salt? I would like to get some directions and investigate on my own rather than getting a response like "you should use this node.js module...". Thanks in advance,

    Read the article

  • Shared Database Servers

    - by shivanshu.upadhyay
    As more enterprises consolidate their database environments to support private cloud initiatives, ISVs will have to deal with scenarios where they need to run on a shared, powerful database server like Exadata. Some ISVs are concerned about meeting SLAs for performance in a shared environment. Outside the virtualization world, there are capabilities of Oracle Database which can be used to prevent resource contention and guarantee SLAs. These capabilities are:
    1) Instance Caging - This guarantees the CPU allocated, or limits the maximum number of CPUs (and so the number of Oracle processes), that a database instance can use simultaneously. With this feature, ISVs can be assured that their application is allocated adequate CPUs even if the database server is shared with other applications.
    2) CPU Resource Allocation with Database Resource Manager - This allocates percentages of CPU time to different users and applications within a database. ISVs can use this feature to ensure that priority users or workloads within their application get CPU resources over other requirements.
    3) Exadata I/O Resource Manager - The Database Resource Manager feature in Oracle Database 11g has been enhanced for use with Exadata. This allows the sharing of storage between databases without fear of one database monopolizing the I/O bandwidth and impacting the performance of the other databases sharing the storage. This can be used to ensure that I/O does not become a performance bottleneck due to poor design of other applications sharing the same server.
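    As a rough illustration of the first capability, enabling instance caging typically comes down to two settings, sketched below. The CPU count and plan name are placeholder values, not recommendations.

        -- Cap this instance at 4 CPUs (value is a placeholder)
        ALTER SYSTEM SET cpu_count = 4 SCOPE = BOTH;

        -- Instance caging is only enforced while a resource manager plan is active
        ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE = BOTH;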

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2. This process is sometimes also referred to as transportable tablespaces (TTS). What is the Cross-Platform Transportable Tablespace Feature? The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform. Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires the copying of datafiles from source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.
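    For orientation, here is a hedged sketch of the core database-level TTS flow between platforms of different endianness. The tablespace name, directory, platform name and file paths are example values only; the E-Business Suite certification has its own documented procedure layered on top of these basic steps.

        -- 1. Check the endian formats of the source and target platforms.
        SELECT platform_name, endian_format FROM v$transportable_platform;

        -- 2. Make the tablespace read only on the source.
        ALTER TABLESPACE example_ts READ ONLY;

        -- 3. Export the tablespace metadata with Data Pump (run from the OS shell):
        --    expdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=example_ts

        -- 4. If the endian formats differ, convert the datafiles with RMAN, e.g.:
        --    RMAN> CONVERT TABLESPACE example_ts
        --            TO PLATFORM 'Linux x86 64-bit'
        --            FORMAT '/stage/%U';

        -- 5. Copy the converted datafiles and the dump file to the target, then plug in:
        --    impdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp
        --          TRANSPORT_DATAFILES='/u01/oradata/TARGET/example_ts01.dbf'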

    Read the article

  • New SQL Azure Development Accelerator Core promotional offer announced

    - by Eric Nelson
    This is (almost) a straight copy and paste but represents an important announcement worthy of a little more “exposure” :-) Starting August 1, 2010, we will release a new SQL Azure Development Accelerator Core promotional offer.  This new offer will give you the flexibility to purchase commitment quantities of SQL Azure Business Edition databases independent of other Windows Azure platform services at a deeply discounted monthly price.  The offer is valid only for a six month term.  You may purchase in 10 GB increments the amount of our Business Edition relational database that you require (each Business Edition database is capable of storing up to 50 GB).  The offer price will be $74.95 per 10 GB per month.  This promotional offer represents 25% off of our normal consumption rates.  Monthly Business Edition relational database usage exceeding the purchased commitment amount and usage for other Windows Azure platform services for this offer will be charged at our normal consumption rates.  Please click here for full details of our new SQL Azure Development Accelerator Core offer.
    Related Links:
    Details of 5GB and 50GB databases have been released
    http://ukazure.ning.com (UK community site)
    Getting started with the Windows Azure Platform
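    To put the offer price in concrete terms (this arithmetic is mine, not part of the announcement), a full 50 GB Business Edition database purchased under the offer works out as:

        5 increments x $74.95 per 10 GB per month = $374.75 per month
        $374.75 per month x 6 months = $2,248.50 over the full term of the offer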

    Read the article

  • Today I talk about you

    - by BuckWoody
    Some time back I posted a blog entry (mirrored here and here) asking you how you design databases. Out of those responses, my own experience, studies I read, and interviews I conducted, I collected a wealth of data. Thanks for your responses. So what am I going to do with that information? Well, all along I had planned for it to be used today. I am giving a presentation called “How Your Customers Design Databases” at an event called “TechReady”. This is a Microsoft-internal event, where technical professionals like myself, salespeople, and the product team get together to talk about what has been working, what doesn’t, what is coming and hopefully (fingers crossed here) what the product team can do to help us help the SQL Server community. I’ve mentioned before that I teach database design as part of a course I run at the University of Washington. I’m also planning to give a mini-lecture from that series at TechEd 2010, so if you’re coming, stop by. I’d love to meet you. So today I talk about you – thanks for the input. I hope you and I can make a difference in the product. It might take a while, but it’s nice to know your voice is being heard.

    Read the article

  • How do you decide what kind of database to use?

    - by Jason Baker
    I really dislike the name "NoSQL", because it isn't very descriptive. It tells me what these databases aren't, whereas I'm more interested in what they are. I also think the label actually encompasses several distinct categories of database. I'm just trying to get a general idea of which job each particular database is the best tool for. A few assumptions I'd like to make (and would ask you to make): Assume that you have the capability to hire any number of brilliant engineers who are equally experienced with every database technology that has ever existed. Assume you have the technical infrastructure to support any given database (including available servers and sysadmins who can support said database). Assume that each database has the best support possible for free. Assume you have 100% buy-in from management. Assume you have an infinite amount of money to throw at the problem. Now, I realize that the above assumptions eliminate a lot of valid considerations involved in choosing a database, but my focus is on figuring out which database is best for the job on a purely technical level. So, given the above assumptions, the question is: which jobs is each database (including both SQL and NoSQL) the best tool for, and why?

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?” This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “Looking for a new role” pretty quickly after the failed release. There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least: Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect) Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly or other mechanisms that provide an audit trail of changes. 
We’ve found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between of course. Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model or “Customer Maturity Framework” as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician, George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful” We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it’s useful and interesting. There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” with “database”. However, we hit a problem. From talking to our customers we know that users are nowhere near as far down the road of mature database change management as they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology but they’ll certainly be making sure their code gets managed in a repo somewhere with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to has no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations. 
This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But, what are these stages? And what level are you? The levels below describe our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here.

Level D1 – Baseline: work directly on live databases; sometimes work directly in production; generate manual scripts for releases, sometimes using a product like SQL Compare or similar; any tests that exist are run manually.
Level D2 – Beginner: some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system; an attempt is made to keep production in sync with development environments; there is some documentation and planning of manual deployments; some basic automated DB testing is in place.
Level D3 – Intermediate: the database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT; database environments are managed; the production environment schema is reproducible from the source control system; there are some automated tests; migration scripts have been considered for difficult database refactoring cases.
Level D4 – Advanced: continuous integration is used for database changes; build, testing and deployment of DB changes are carried out through a proper database release process; tests are fully automated; the production system is monitored for fast feedback to developers.

Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
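To make the "upgrade scripts in source control" idea at level D2 concrete, here is a minimal sketch of a manually versioned change script of the kind a team might check in. The SchemaVersion and Customers tables, the file name and the version number are hypothetical, chosen only for illustration and not part of the article above.

    -- 003_add_customer_email.sql: apply schema change 3 once, and record that it ran.
    IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = 3)
    BEGIN
        ALTER TABLE dbo.Customers ADD Email NVARCHAR(256) NULL;
        INSERT INTO dbo.SchemaVersion (Version, AppliedOn) VALUES (3, SYSUTCDATETIME());
    END

Keeping scripts like this in the same repository as the application code gives you the history and audit trail that the compliance frameworks mentioned above expect, and it is a natural stepping stone towards the automated, tool-driven approaches at levels D3 and D4.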

    Read the article

< Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >