Search Results

Search found 3710 results on 149 pages for 'all databases'.

Page 2 of 149

  • Rails database relationships

    - by Danny McClelland
    Hi everyone, I have three models that I want to interact with each other: Kase, Person and Company. I have (I think) set up the relationships correctly:

        class Kase < ActiveRecord::Base
          has_one :company   # has one company
          has_many :persons  # has many persons
        end

        class Person < ActiveRecord::Base
          belongs_to :company
        end

        class Company < ActiveRecord::Base
          has_many :persons
          def to_s; companyname; end
        end

    I have put the select field on the create-new-Kase view and the create-new-Person view as follows:

        <li>Company<span><%= f.select :company_id, Company.all %></span></li>

    All of the above successfully shows a drop-down menu dynamically populated with the company names within Companies. What I am trying to do is display the contact details of the Company record within the Kase and Person show.html.erb. For example, if I have a company called "Acme, Inc." and create a new Kase called "Random Case", choosing "Acme, Inc." from the companies drop-down on the create-new-Kase page, I would then want to display "Acme, Inc." along with "Acme, Inc. Mobile" etc. on the "Random Case" show.html.erb. I hope this makes sense to somebody! Thanks, Danny

    Read the article

  • Is it possible to upgrade from Postgres 8.3.3 with existing databases to 8.4.2 (installed on Windows Vista)?

    - by WildWezyr
    I'm considering an upgrade from Postgres 8.3.3 to 8.4.2 on my machine (it has Windows Vista). The Windows installer (one-click installer) for Postgres 8.4.2 that can be downloaded from enterprisedb.com offers only a fresh install (it does not recognize my current installation of v8.3.3). Is it possible to upgrade with all existing databases converted and visible (automatically migrated?) in the new version just after the upgrade? Or do I have to do something more, i.e. backup/restore all my databases manually?
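
    For context, 8.3 to 8.4 is a major-version upgrade, so the on-disk data format changes and the installer cannot upgrade in place; the usual route at the time was a manual dump and reload. Below is a minimal sketch of that path, assuming the default postgres superuser and that 8.4 is installed side by side on its own port (the port number is illustrative, not from the question):

        rem 1) with the 8.3 server still running, dump all databases and roles
        pg_dumpall -U postgres > all_databases.sql
        rem 2) install 8.4.2 side by side (own data directory, e.g. listening on port 5433)
        rem 3) load the dump into the new cluster
        psql -U postgres -p 5433 -d postgres -f all_databases.sql
        rem 4) once verified, repoint applications at the new port and retire the 8.3 cluster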

    Read the article

  • Advantages of relational databases over VSAM, ISAM and hierarchical data stores

    - by llaszews
    When migrating companies from legacy environments to the cloud, you invariably run into older hierarchical, flat-file, VSAM, ISAM and other legacy data stores. There are many advantages to moving these databases into a relational database structure, the most important being that most cloud providers run on relational database models. AWS, for example, supports Oracle, SQL Server, and MySQL. The top three 'other reasons' for moving to a relational database are:
    1. Data access – thousands of database access tools, from query creation to business intelligence.
    2. Management and monitoring – hundreds of tools for management and monitoring of the database.
    3. Leverage all the free tools from relational database vendors. Free Oracle database tools include:
       - Application Express – WYSIWYG browser-based application development and deployment.
       - SQL Developer – SQL and PL/SQL development; database object maintenance.
    What is interesting is that Big Data NoSQL databases and XML databases are taking us back to the days of VSAM (key-value stores, via NoSQL) and IMS (hierarchical, via XML databases).

    Read the article

  • Migrating Databases Checklist Part1

    SQL Server databases move around as an organisation’s data grows, applications are enhanced or new versions of the database software are released. If nothing else, servers become old and unreliable and databases eventually need to find a new home. Here's what to do when migrating your databases.

    Read the article

  • SQL Server Transaction Marks: Restoring multiple databases to a common relative point

    - by Mladen Prajdic
    We’re all familiar with the ability to restore a database to a point in time using the RESTORE WITH STOPAT statement. But what if we have multiple databases that are accessed from one application or are modifying each other? And over multiple instances? And all databases have different workloads? And we want to restore all of the databases to some known common relative point? The catch here is that this common relative point isn’t the same point in time for all databases. This common relative point in time might be now in DB1, now minus 1 hour in DB2 and yesterday in DB3. And we don’t know the exact times. Let me introduce you to transaction marks. When we run a marked transaction using the WITH MARK option, a flag is set in the transaction log and a row is added to the msdb..logmarkhistory table. When restoring a transaction log backup we can restore to either before or after that marked transaction. The best thing is that we don’t even need to have one database modifying another database. All we have to do is use a marked transaction with the same name in the different databases. Let’s see how this works with an example. The code comments say what’s going on.

        USE master
        GO
        CREATE DATABASE TestTxMark1
        GO
        USE TestTxMark1
        GO
        CREATE TABLE TestTable1 (ID INT, VALUE UNIQUEIDENTIFIER)
        -- insert some data into the table so we can have a starting point
        INSERT INTO TestTable1
        SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NULL
        FROM master..spt_values
        ORDER BY RN
        SELECT * FROM TestTable1
        GO
        -- take a full backup of the database
        BACKUP DATABASE TestTxMark1 TO DISK = 'c:\TestTxMark1.bak'
        GO

        USE master
        GO
        CREATE DATABASE TestTxMark2
        GO
        USE TestTxMark2
        GO
        CREATE TABLE TestTable2 (ID INT, VALUE UNIQUEIDENTIFIER)
        -- insert some data into the table so we can have a starting point
        INSERT INTO TestTable2
        SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NEWID()
        FROM master..spt_values
        ORDER BY RN
        SELECT * FROM TestTable2
        GO
        -- take a full backup of the database
        BACKUP DATABASE TestTxMark2 TO DISK = 'c:\TestTxMark2.bak'
        GO

        -- start a marked transaction that modifies both databases
        BEGIN TRAN TxDb WITH MARK
            -- update values from NULL to a random value
            UPDATE TestTable1 SET VALUE = NEWID();
            -- update the first 100 values from a random value to NULL in the other DB
            UPDATE TestTxMark2.dbo.TestTable2 SET VALUE = NULL WHERE ID <= 100;
        COMMIT
        GO

        -- some time goes by here, with various database activity...

        -- we see two entries for marks (one for each database); this is just
        -- informational and has no bearing on the restore itself
        SELECT * FROM msdb..logmarkhistory

        USE master
        GO
        -- create a log backup to restore to the mark point
        BACKUP LOG TestTxMark1 TO DISK = 'c:\TestTxMark1.trn'
        GO
        -- drop the database so we can restore it back
        DROP DATABASE TestTxMark1
        GO

        USE master
        GO
        -- create a log backup to restore to the mark point
        BACKUP LOG TestTxMark2 TO DISK = 'c:\TestTxMark2.trn'
        GO
        -- drop the database so we can restore it back
        DROP DATABASE TestTxMark2
        GO

        -- restore the first database back to before our transaction:
        -- restore the full backup
        RESTORE DATABASE TestTxMark1 FROM DISK = 'c:\TestTxMark1.bak' WITH NORECOVERY;
        -- restore the log backup to the transaction mark
        RESTORE LOG TestTxMark1 FROM DISK = 'c:\TestTxMark1.trn'
            WITH RECOVERY,
            -- recover to the state before the transaction
            STOPBEFOREMARK = 'TxDb';
            -- to recover to the state after the transaction use
            -- STOPATMARK = 'TxDb';
        GO

        -- restore the second database back to before our transaction:
        -- restore the full backup
        RESTORE DATABASE TestTxMark2 FROM DISK = 'c:\TestTxMark2.bak' WITH NORECOVERY;
        -- restore the log backup to the transaction mark
        RESTORE LOG TestTxMark2 FROM DISK = 'c:\TestTxMark2.trn'
            WITH RECOVERY,
            -- recover to the state before the transaction
            STOPBEFOREMARK = 'TxDb';
            -- to recover to the state after the transaction use
            -- STOPATMARK = 'TxDb';
        GO

        USE TestTxMark1
        -- we restored to a time before the transaction,
        -- so we have NULL values in our table
        SELECT * FROM TestTable1

        USE TestTxMark2
        -- we restored to a time before the transaction,
        -- so we DON'T have NULL values in our table
        SELECT * FROM TestTable2

    Transaction marks can be used as a crude sync mechanism for cross-database operations. With them we can mark our databases with a common “restore to” point, so we know we have a valid state between all databases to restore to.

    Read the article

  • NoSQL is not about object databases

    - by Bertrand Le Roy
    NoSQL as a movement is an interesting beast. I kinda like that it’s negatively defined (I happen to belong myself to at least one other such a-community). It’s not, at its roots, about proposing one specific new silver bullet to kill an old problem. It’s about challenging the consensus. Actually, blindly and systematically replacing relational databases with object databases would just replace one set of issues with another. No, the point is to recognize that relational databases are not a universal answer (although they have been used as one for so long) and recognize instead that there’s a whole spectrum of data storage solutions out there. Why is it so hard to recognize, by the way? You are already using some of those other data storage solutions every day. Let me cite a few:
    - The file system
    - Active Directory
    - XML / JSON documents
    - The Web
    - E-mail
    - Logs
    - Excel files
    - EXIF blobs in your photos
    - Relational databases
    - And yes, object databases
    It’s just a fact of modern life. Notice, by the way, that most of the data you use every day is unstructured and thus mostly unsuitable for relational storage. It really is more a matter of recognizing it: you are already doing NoSQL. So what happens when for any reason you need to simultaneously query two or more of these heterogeneous data stores? Well, you build an index of sorts combining them, and that’s what you query instead. Of course, there’s not much distance to travel from that to realizing that querying is better done when completely separated from storage. So why am I writing about this today? Well, that’s something I’ve been giving lots of thought, on and off, over the last ten years. When I built my first CMS all that time ago, one of the main problems my customers were facing was to manage and make sense of the mountain of unstructured data that constituted most of their business. The central entity of that system was the file system, because we were dealing with lots of Word documents, PDFs, OCR’d articles, photos and static web pages. We could have stored all that in SQL Server. It would have worked. Ew. I’m so glad we didn’t. Today, I’m working on Orchard (another CMS ;). It’s a pretty young project, but already one of the questions we get the most is how to integrate existing data. One of the ideas I’ll be trying hard to sell to the rest of the team in the next few months is to completely split the querying from the storage. Not only does this provide great opportunities for performance optimizations, it gives you homogeneous access to heterogeneous and existing data sources. For free.

    Read the article

  • SQL SERVER – Backup SQL databases to Box or SkyDrive

    - by Pinal Dave
    To ensure your SQL Server or Azure databases remain safe, you should back up your databases periodically, and it is important to store the backups in a reliable location. Microsoft SkyDrive currently offers 7GB free and Box offers 5GB free; both are reliable, and it is simple to send your backups there. SQLBackupAndFTP, in its latest version 9, added the option to back up to SkyDrive and Box (in addition to local/network folder, NAS drive, FTP, Dropbox, Google Drive and Amazon S3). Just select the databases that you’d like to back up and choose to store the backups in SkyDrive or Box. Below I will show you how to do it in detail.
    Select databases to backup
    First connect to your SQL Server or Azure SQL Database. Then select the databases you’d like to back up.
    Connect to the SkyDrive or Box cloud
    If you have a free version of SQLBackupAndFTP, the Box destination is included, but the SkyDrive destination will be disabled, as it is available in the Standard version or above. Click “Try now” to get a 30-day trial of all options. On the “SkyDrive Settings” form you’ll need to authorize SQLBackupAndFTP to access your SkyDrive. Click “Authorize…” to open the SkyDrive authorization page in your browser, sign in to your SkyDrive account and click “Allow”. On the next page you will see the field with the authorization code. Copy it to the clipboard. The Box operation is just the same. After that, return to SQLBackupAndFTP, paste the authorization code and click “OK”. After you are authorized, you can enter the path to a backup folder. SQLBackupAndFTP will create the folder if it does not exist. That’s all that has to be done to back up to the SkyDrive or Box cloud. You can now click on the “Run Now” button to test this job.
    Conclusion
    Whatever your preference for storing SQL backups, it is easy with SQLBackupAndFTP. Note that at the time of this writing they are running a very rare promotion on volume licenses:
    - 5–9 licenses: 20% off
    - 10–19 licenses: 35% off
    - more than 20 licenses: 50% off
    Please let me know your favorite options for storing the backups. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
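
    For reference, the first step the tool automates (a full backup to a local folder, before upload) is a single T-SQL statement; the database name and path below are placeholders, not from the post:

        BACKUP DATABASE AdventureWorks2012
            TO DISK = 'C:\Backups\AdventureWorks2012.bak'
            WITH INIT, NAME = 'AdventureWorks2012 full backup';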

    Read the article

  • Development environment to manage multiple Oracle databases

    - by jkohlhepp
    I am in an enterprise environment where we have applications that need to run against multiple Oracle databases. Developers may need to manage multiple vintages of these databases to support different test data or diagnose bugs against different versions of the code. Right now, we have a limited set of test environments set up on "real" Oracle servers within the data center. We juggle these among development and QA groups, and a lot of conflicts and inefficiencies arise because of it. I am taking a look at Oracle Express Edition, which would allow me to spin up a local Oracle database. This is similar to the workflow I most often see with SQL Server: devs work on their local machine until they are ready to integrate, and then they push their DB changes to integration/QA environments. However, from what I read it seems that Oracle XE only supports one database instance at a time. So if I have an application that utilizes two different databases, I can't have both of them running on my local machine. Is that correct? Do the Oracle Standard or Personal editions get around this limitation? If I had one of those installed locally, how difficult would it be to get multiple databases working on the same development machine? How do dev shops handle developing against Oracle when they need to use several different Oracle instances for their applications?

    Read the article

  • Agile Database Techniques: Effective Strategies for the Agile Software Developer – book review

    - by DigiMortal
    Agile development expects a mind shift, and developers are not the only ones who must be agile. Every chain is only as strong as its weakest link, and the same goes for development teams. Agile Database Techniques: Effective Strategies for the Agile Software Developer by Scott W. Ambler is a book that calls on data professionals, too, to be part of agile development. DBAs are often in a situation where they are not part of application development, and later they have to support a large set of applications that all use databases in different ways. Of course, only some of these applications are unproblematic when you look at what the database server has to do to serve them. I have seen many applications that rape database servers because developers have no clue what is going on in the database (~3K queries to the database per web application request – have you seen something like this? I have…). Agile Database Techniques covers some object and database design technologies and gives suggestions to development teams about topics where they need help or assistance from DBAs. The book is also good reading for DBAs, who usually are not very strong in object technologies. You can take this book as a bridge between these two worlds. I think teams that build object applications that use databases should buy this book and try out Ambler’s suggestions on at least one or two projects.
    Table of contents
    Foreword by Jon Kern. Foreword by Douglas K. Barry. Acknowledgments. Introduction. About the Author.
    Part One: Setting the Foundation.
    Chapter 1: The Agile Data Method.
    Chapter 2: From Use Cases to Databases — Real-World UML.
    Chapter 3: Data Modeling 101.
    Chapter 4: Data Normalization.
    Chapter 5: Class Normalization.
    Chapter 6: Relational Database Technology, Like It or Not.
    Chapter 7: The Object-Relational Impedance Mismatch.
    Chapter 8: Legacy Databases — Everything You Need to Know But Are Afraid to Deal With.
    Part Two: Evolutionary Database Development.
    Chapter 9: Vive L’Évolution.
    Chapter 10: Agile Model-Driven Development (AMDD).
    Chapter 11: Test-Driven Development (TDD).
    Chapter 12: Database Refactoring.
    Chapter 13: Database Encapsulation Strategies.
    Chapter 14: Mapping Objects to Relational Databases.
    Chapter 15: Performance Tuning.
    Chapter 16: Tools for Evolutionary Database Development.
    Part Three: Practical Data-Oriented Development Techniques.
    Chapter 17: Implementing Concurrency Control.
    Chapter 18: Finding Objects in Relational Databases.
    Chapter 19: Implementing Referential Integrity and Shared Business Logic.
    Chapter 20: Implementing Security Access Control.
    Chapter 21: Implementing Reports.
    Chapter 22: Realistic XML.
    Part Four: Adopting Agile Database Techniques.
    Chapter 23: How You Can Become Agile.
    Chapter 24: Bringing Agility into Your Organization.
    Appendix: Database Refactoring Catalog. References and Suggested Reading. Index.

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 1)

    - by Sadequl Hussain
    It is a fact of life: SQL Server databases change homes. They move from one instance to another, from one version to the next, from old servers to new ones. They move around as an organisation’s data grows, applications are enhanced or new versions of the database software are released. If nothing else, servers become old and unreliable and databases eventually need to find a new home. Consider the following scenarios:
    1. A new database application is rolled out in a production server from the development or test environment
    2. A copy of the production database needs to be installed in a test server for troubleshooting purposes
    3. A copy of the development database is regularly refreshed in a test server during the system development life cycle
    4. A SQL Server is upgraded to a newer version. This can be an in-place upgrade or a side-by-side migration
    5. One or more databases need to be moved between different instances as part of a consolidation strategy. The instances can be running the same or different versions of SQL Server
    6. A database has to be restored from a backup file provided by a third-party application vendor
    7. A backup of the database is restored in the same or a different instance for disaster recovery
    8. A database needs to be migrated within the same instance:
       a. Files are moved from direct attached storage to a storage area network
       b. The same database is copied under a different name for another application
    Migrating SQL Server database applications is a complex topic in itself. There are a number of components that can be involved: jobs, DTS or SSIS packages, logins or linked servers are only a few pieces of the puzzle. However, in this article we will focus only on the central part of migration: the installation of the database itself. Unless it is an in-place upgrade, typically the database is taken from a source server and installed in a destination instance. Most of the time, a full backup file is used for the rollout. The backup file is either provided to the DBA, or the DBA takes the backup and restores it in the target server. Sometimes the database is detached from the source and the files are copied to, and attached in, the destination. Regardless of the method of copying, moving, refreshing, restoring or upgrading the physical database, there are a number of steps the DBA should follow before and after it has been installed in the destination. It is these post-installation steps we are going to discuss below. Some of these steps apply in almost every scenario described above, while some will depend on the type of objects contained within the database. Also, the principles hold regardless of the number of databases involved.
    Step 1: Make a copy of data and log files when attaching and detaching
    When detaching and attaching databases, ensure you have made copies of the data and log files if the destination is running a newer version of SQL Server. This is because once attached to a newer version, the database cannot be detached and attached back to an older version. Trying to do so will give you a message like the following:

        Server: Msg 602, Level 21, State 50, Line 1
        Could not find row in sysindexes for database ID 6, object ID 1, index ID 1.
        Run DBCC CHECKTABLE on sysindexes.
        Connection Broken

    If you try to back up the attached database and restore it in the source, it will still fail. Similarly, if you restore the database in a newer version, it cannot be backed up or detached and put back in an older version of SQL Server. Unlike the detach-and-attach method, though, you do not lose the backup file or the original database here. When detaching and attaching a database, it is important that you keep all the log files available along with the data files. It is possible to attach a database without a log file, and SQL Server can be instructed to create a new log file; however, this does not work if the database was detached while the primary filegroup was read-only. You will need all the log files in such cases.
    Step 2: Change the database compatibility level
    Once the database has been restored or attached to a newer version of SQL Server, change the database compatibility level to reflect the newer version unless there is a compelling reason not to do so. When attaching or restoring from a previous version of SQL Server, the database retains the older version’s compatibility level. The only time you would want to keep a database at an older compatibility level is when the code within your database is no longer supported by SQL Server. For example, outer joins with the *= or =* operators were still possible in SQL 2000 (with a warning message), but not in SQL 2005 anymore. If your stored procedures or triggers use this form of join, you will want to keep the database at the older compatibility level. For a list of compatibility issues between older and newer versions of SQL Server databases, refer to Books Online under the sp_dbcmptlevel topic. Application developers and architects can help you decide whether you should change the compatibility level or not. You can always change the compatibility mode from the newest back to an older version if necessary. To change the compatibility level, you can either use the database’s properties in SQL Server Management Studio or use the sp_dbcmptlevel stored procedure (a short sketch follows at the end of this excerpt). Bear in mind that you cannot run the built-in reports for databases from SQL Server Management Studio if you keep the database at an older compatibility level. The following figure shows the error message I received when trying to run the “Disk Usage by Top Tables” report against a database. This database was hosted in a SQL Server 2005 system and still had compatibility mode 80 (SQL 2000). Continues…
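
    As a concrete sketch of Step 2 (the database name below is a placeholder, not from the article):

        -- check the current compatibility level (SQL Server 2005 and later)
        SELECT name, compatibility_level FROM sys.databases WHERE name = 'MyMigratedDB';
        -- the stored procedure route mentioned above
        EXEC sp_dbcmptlevel 'MyMigratedDB', 90;   -- 90 = SQL Server 2005
        -- on SQL Server 2008 and later, ALTER DATABASE replaces sp_dbcmptlevel
        ALTER DATABASE MyMigratedDB SET COMPATIBILITY_LEVEL = 100;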

    Read the article

  • Migrating Databases from SQL Server 2008 to SqlAzure

    - by kaleidoscope
    1. Connect to SQL Azure through SSMS. (It will connect only if you have port 1433 open.)
    2. Create the required databases on SQL Azure.
    3. Create and execute schema scripts for the databases. (Make sure you have written scripts that are 100% compatible with SQL Azure; a tool called SQL Azure Migration Tool, on CodePlex, helps with doing this.)
    4. Create and execute insert scripts for the databases. (One can use SSMS for generating insert scripts.)
    For further details refer to the Windows Azure Platform Kit Nov 2009.
    Anish

    Read the article

  • my wiki site using mediawiki - databases not found error

    - by Jayapal Chandran
    I had been using the MediaWiki open-source software on my website to display programming articles. Today I tried to access my site, but it showed a "database not found" error. MediaWiki uses many databases. When I logged in to my control panel and checked, I could see that most of the databases created by MediaWiki are missing, which is why I am getting this error. I have used MediaWiki for two different purposes, like two modules: for one, the databases are missing, and for the other, I think the data is corrupt. Does anybody know of any issue with MediaWiki security, or could this be a problem with the web hosting? We had faced several problems with the hosting company initially, about three years ago, but recently they have been good. Yet this happened. I have asked the hosting company to look into it, and meanwhile I am hoping for help from Stack Exchange users. How do I check the logs for table deletion?

    Read the article

  • Powershell – script all objects on all databases to files

    - by Nigel Rivett
    <#
    This simple PowerShell routine scripts out all the user-defined functions, stored procedures,
    tables and views in all the databases on the server that you specify, to the path that you specify.
    SMO must be installed on the machine (it is there if SSMS is installed).
    To run: set the server name and path below, open a command window, run powershell,
    paste the script into the window and press Enter.
    It will create the subfolders for the databases and objects if necessary.
    #>
    $path = "C:\Test\Script\"
    $ServerName = "MyServerNameOrIpAddress"
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
    $serverInstance = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $ServerName
    $IncludeTypes = @("Tables","StoredProcedures","Views","UserDefinedFunctions")
    $ExcludeSchemas = @("sys","Information_Schema")
    $so = New-Object ('Microsoft.SqlServer.Management.Smo.ScriptingOptions')
    $so.IncludeIfNotExists = 0
    $so.SchemaQualify = 1
    $so.AllowSystemObjects = 0
    $so.ScriptDrops = 0          # do not script DROP statements
    $dbs = $serverInstance.Databases
    foreach ($db in $dbs)
    {
        # strip the brackets from the database name for use in file paths
        $dbname = "$db".replace("[","").replace("]","")
        $dbpath = "$path" + "$dbname" + "\"
        if ( !(Test-Path $dbpath)) { $null = New-Item -type directory -name "$dbname" -path "$path" }
        foreach ($Type in $IncludeTypes)
        {
            $objpath = "$dbpath" + "$Type" + "\"
            if ( !(Test-Path $objpath)) { $null = New-Item -type directory -name "$Type" -path "$dbpath" }
            foreach ($objs in $db.$Type)
            {
                if ($ExcludeSchemas -notcontains $objs.Schema)
                {
                    $ObjName = "$objs".replace("[","").replace("]","")
                    $OutFile = "$objpath" + "$ObjName" + ".sql"
                    $objs.Script($so) + "GO" | Out-File $OutFile  # add -Append to accumulate
                }
            }
        }
    }

    Read the article

  • Where can I find accessible bug/issue databases with complete revision history

    - by namenlos
    I'm performing some research and analysis on bug/issue tracking databases and, more specifically, on how programmers and teams of programmers actually interact with them. What I'm looking for involves understanding how those databases change over time. So what I don't need, for example, is a database of all the bugs of some open-source project as the bugs exist today. What I do need is a complete set of revision history for every issue/bug in the database. This would enable me to pick a specific datetime and say: here is the list of all the issues/bugs that existed at that moment in time. Anyone know of some publicly accessible issue/bug databases that expose this revision data? Ideally, the revisions would look something like this (shown for a single bug, with two revisions):

        ISSUEID  PRI  SEV  ASSIGNEDTO  MODIFIEDON      VALIDUNTIL
        1        2    2    mel         apr-1-2010:5pm  apr-1-2010:6pm
        1        2    3    steve       apr-1-2010:6pm  NULL

    Read the article

  • What makes Java so suitable for writing NoSQL Databases

    - by good_computer
    Looking at this page that aggregates the current NoSQL landscape, one can see that the majority of these projects are written in Java. Databases are complex systems software dealing with the file system, so C/C++ would seem a better choice than Java for this (that's my thinking, which might be flawed). Secondly, databases deal with transferring large amounts of data from disk to RAM -- which they call a working set. The JVM takes a non-trivial amount of RAM for its own purposes, so it would be more efficient to use a platform that leaves lots of memory for data instead of hogging it for its own operations. The major relational databases are ALL written in C/C++:
    - MySQL: C, C++
    - Oracle: Assembler, C, C++
    - SQL Server: C++
    - PostgreSQL: C
    - SQLite: C
    So what makes Java so popular in the NoSQL world?

    Read the article

  • SQL SERVER – How to Compare the Schema of Two Databases with Schema Compare

    - by Pinal Dave
    Earlier I wrote about An Efficiency Tool to Compare and Synchronize SQL Server Databases, and it was very well received. Since that blog post I have received quite a few questions asking how, just like data, we can also compare schema and synchronize it. Comparing schemas manually is almost impossible. Table schema is just one concept; if you really want all the schema of the database (triggers, views, stored procedures and everything else), it is just an impossible task. If you are a developer or database administrator who works in a production environment, then you know that there are many different occasions when we have to compare the schema of databases. Before deploying any changes to the production server, I personally like to make a note of every single schema change and document it, so in case of any issue I can always go back and refer to my documentation. As discussed earlier, it is absolutely impossible to do this task without the help of third-party tools. I personally use Devart Schema Compare for this task. It is an extremely easy tool. Let us see how it works. First, I have two different databases: a) AdventureWorks2012 and b) AdventureWorks2012-V1. There are in total three changes between these databases. Here is the list:
    - One of the tables has an additional column
    - One of the tables has a new index
    - One of the stored procedures is changed
    Now let us see how dbForge Schema Compare works in this scenario. First open dbForge Schema Compare Studio. Click on New Schema Comparison. It will bring you to the following screen, where we have to select the databases to compare. I have selected the AdventureWorks2012 and AdventureWorks2012-V1 databases. On the next screen we can adjust various options, but for this demonstration we will keep everything as it is. We will not change anything on the schema mapping screen, as in our case it is not required; generally, though, if you are comparing across schemas you may need it. The next screen is the most important one, as on it we select which kinds of objects we want to compare. You can see the options which are available to select; the screen lets you select objects from SQL Server 2000 through SQL Server 2012. Once you click on Compare in the previous screen, it will bring you to a screen that displays the differences between the two databases we selected earlier. As mentioned above, there are three different changes between the databases, and all three are listed here: two of the changes belong to tables and one belongs to a procedure. Let us click each of them one by one to see the differences. In the very first case we can see that there is an additional column in one database which did not exist earlier. In the second example we can see that the AdventureWorks2012 database has an additional index. The following example is very interesting, as in this case we changed the definition of the stored procedure, and the result pane shows exactly that difference. dbForge Schema Compare very effectively identifies the changes in schema and lists them neatly for developers. Here is one more screen: this software not only compares the schema but also provides options to update or drop objects as per your choice. I think this is a brilliant option. Well, I have been using Schema Compare for quite a while and have found it very useful. Here are a few of the things which dbForge Schema Compare can do for developers and DBAs:
    - Compare and synchronize SQL Server database schemas
    - Compare schemas of a live database and a SQL Server backup
    - Generate comparison reports in Excel and HTML formats
    - Eliminate mistakes in schema change propagation across environments
    - Track production database changes and customizations
    - Automate migration of schema changes using a command line interface
    I suggest that you try out dbForge Schema Compare and let me know what you think of this product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL
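
    If you want to stage the same three kinds of differences on your own copies before running a compare, they amount to three small DDL statements. This is just an illustrative sketch; the post does not name the objects it modified, so the table, index and procedure below are hypothetical:

        -- 1. add a column to one table
        ALTER TABLE dbo.Customer ADD Notes VARCHAR(100) NULL;
        -- 2. add an index to one table
        CREATE NONCLUSTERED INDEX IX_Customer_City ON dbo.Customer (City);
        GO
        -- 3. change a stored procedure's definition (must be alone in its batch)
        ALTER PROCEDURE dbo.uspGetCustomers AS SELECT CustomerID, City FROM dbo.Customer;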

    Read the article

  • SQL SERVER – Iridium I/O – SQL Server Deduplication that Shrinks Databases and Improves Performance

    - by Pinal Dave
    Database performance is a common problem for SQL Server DBAs. It seems like we spend more time on performance than just about anything else. In many cases, we use scripts or tools that point out performance bottlenecks, but we don’t have any way to fix them. For example, what do you do when you need to speed up a query that is already tuned as well as possible? Or what do you do when you aren’t allowed to make changes for a database supporting a purchased application? Iridium I/O for SQL Server was originally built at Confio Software (makers of Ignite) because DBAs kept asking for a way to actually fix performance instead of just pointing out performance problems. The technology is certified by Microsoft and was so promising that it was spun out into a separate company that is now run by the Confio founder/CEO and technology management team. Iridium uses deduplication technology to both shrink the databases and boost I/O performance. It is intriguing to see it work. It will deduplicate a live database as it is running transactions; you can watch the database get smaller while user queries are running. Iridium is a simple tool to use. After installing the software, you click an “Analyze” button, which will spend a minute or two on each database and estimate both your storage and performance savings. Next, you click an “Activate” button to turn on Iridium I/O for your selected databases. You don’t need to reboot the operating system or restart the database during any part of the process. As part of my test, I also wanted to see if there would be an impact on my databases when Iridium was removed. The ‘revert’ process (bringing the files back to their SQL Server native format) was executed by a simple click of a button, and completed while the databases were available for normal processing. I was impressed, enjoyed playing with the software, and encourage all of you to try it out. Here is the link to the website to download Iridium for free. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • AdventureWorks 2014 Sample Databases Are Now Available

    - by aspiringgeek
    Where in the World is AdventureWorks?
    Recently, SQL Community feedback from twitter prompted me to look in vain for SQL Server 2014 versions of the AdventureWorks sample databases we’ve all grown to know & love. I searched Codeplex, then used the bing & even the google in an effort to locate them, yet all I could find were samples on different sites highlighting specific technologies, an incomplete collection inconsistent with the experience we users had learned to expect. I began pinging internally & learned that an update to AdventureWorks wasn’t even on the road map. Fortunately, SQL Marketing manager Luis Daniel Soto Maldonado (t) lent a sympathetic ear & got the update ball rolling; his direct report Darmodi Komo recently announced the release of the shiny new sample databases for OLTP, DW, Tabular, and Multidimensional models to supplement the extant In-Memory OLTP sample DB.
    What Success Looks Like
    In my correspondence with the team, here’s how I defined success:
    1. Sample AdventureWorks DBs hosted on Codeplex showcasing SQL Server 2014’s latest-&-greatest features, including:
       - In-Memory OLTP (aka Hekaton)
       - Clustered Columnstore
       - Online Operations
       - Resource Governor IO
    2. Where it makes sense to do so, consolidate the DBs (e.g., showcasing Columnstore likely involves a separate DW DB)
    3. Documentation to support experimenting with these features
    As Microsoft Senior SDE Bonnie Feinberg (b) stated, “I think it would be great to see an AdventureWorks for SQL 2014. It would be super helpful for third-party book authors and trainers. It also provides a common way to share examples in blog posts and forum discussions, for example.” Exactly. We’ve established a rich & robust tradition of sample databases on Codeplex. This is what our community & our customers expect. The prompt response achieves what we all aim to do, i.e., manifests the Service Design Engineering mantra of “delighting the customer”. Kudos to Luis’s team in SQL Server Marketing & Kevin Liu’s team in SQL Server Engineering for doing so.
    Download AdventureWorks 2014
    Download your copies of the SQL Server 2014 AdventureWorks sample databases here.

    Read the article

  • Evaluating and Investigating Drug Safety Signals with Public Databases Webinar

    - by Roxana Babiciu
    In this one-hour webinar, BioPharm Systems' Dr. Rodney Lemery, vice president of safety and pharmacovigilance, will review a number of public databases available to use during the evaluation and investigation of identified safety signals. The discussion will focus on the use of free and paid longitudinal healthcare databases available online. After attending this presentation, you will better understand how these data sources can be used in your daily PV work. Read more here

    Read the article

  • Find Column in All Databases

    - by Derek Dieter
    Occasionally, there comes a requirement to search all databases on a particular server for either columns with a specific name, or columns relating to a specific subject. In the most recent case, I had to find all similar columns in all databases because the company plans to change the datatype of these columns. [...]
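
    The article's own script is truncated in this excerpt, but one common way to do this kind of sweep is the undocumented (though widely used) sp_MSforeachdb system procedure. This is a generic sketch rather than the author's code, and 'CustomerID' is just an example column name:

        EXEC sp_MSforeachdb '
        USE [?];
        SELECT ''?'' AS DatabaseName, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE COLUMN_NAME LIKE ''%CustomerID%'';'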

    Read the article

  • SQL SERVER – Get All the Information of Database using sys.databases

    - by pinaldave
    Earlier I wrote the blog article SQL SERVER – Finding Last Backup Time for All Database. In response to that article I received a very interesting script from SQL Server expert Matteo as a comment on the blog. He has written a script using sys.databases which provides plenty of information about each database. I suggest you run this on your server and get to know the unknowns of your databases as well.

        SELECT database_id,
            CONVERT(VARCHAR(25), DB.name) AS dbName,
            CONVERT(VARCHAR(10), DATABASEPROPERTYEX(name, 'status')) AS [Status],
            state_desc,
            (SELECT COUNT(1) FROM sys.master_files
             WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS DataFiles,
            (SELECT SUM((size*8)/1024) FROM sys.master_files
             WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS [Data MB],
            (SELECT COUNT(1) FROM sys.master_files
             WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS LogFiles,
            (SELECT SUM((size*8)/1024) FROM sys.master_files
             WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS [Log MB],
            user_access_desc AS [User access],
            recovery_model_desc AS [Recovery model],
            CASE compatibility_level
                WHEN 60 THEN '60 (SQL Server 6.0)'
                WHEN 65 THEN '65 (SQL Server 6.5)'
                WHEN 70 THEN '70 (SQL Server 7.0)'
                WHEN 80 THEN '80 (SQL Server 2000)'
                WHEN 90 THEN '90 (SQL Server 2005)'
                WHEN 100 THEN '100 (SQL Server 2008)'
            END AS [compatibility level],
            CONVERT(VARCHAR(20), create_date, 103) + ' ' + CONVERT(VARCHAR(20), create_date, 108) AS [Creation date],
            -- last backup
            ISNULL((SELECT TOP 1
                CASE TYPE WHEN 'D' THEN 'Full' WHEN 'I' THEN 'Differential' WHEN 'L' THEN 'Transaction log' END
                + ' – ' + LTRIM(ISNULL(STR(ABS(DATEDIFF(DAY, GETDATE(), Backup_finish_date))) + ' days ago', 'NEVER'))
                + ' – ' + CONVERT(VARCHAR(20), backup_start_date, 103) + ' ' + CONVERT(VARCHAR(20), backup_start_date, 108)
                + ' – ' + CONVERT(VARCHAR(20), backup_finish_date, 103) + ' ' + CONVERT(VARCHAR(20), backup_finish_date, 108)
                + ' (' + CAST(DATEDIFF(second, BK.backup_start_date, BK.backup_finish_date) AS VARCHAR(4)) + ' ' + 'seconds)'
                FROM msdb..backupset BK
                WHERE BK.database_name = DB.name
                ORDER BY backup_set_id DESC), '-') AS [Last backup],
            CASE WHEN is_fulltext_enabled = 1 THEN 'Fulltext enabled' ELSE '' END AS [fulltext],
            CASE WHEN is_auto_close_on = 1 THEN 'autoclose' ELSE '' END AS [autoclose],
            page_verify_option_desc AS [page verify option],
            CASE WHEN is_read_only = 1 THEN 'read only' ELSE '' END AS [read only],
            CASE WHEN is_auto_shrink_on = 1 THEN 'autoshrink' ELSE '' END AS [autoshrink],
            CASE WHEN is_auto_create_stats_on = 1 THEN 'auto create statistics' ELSE '' END AS [auto create statistics],
            CASE WHEN is_auto_update_stats_on = 1 THEN 'auto update statistics' ELSE '' END AS [auto update statistics],
            CASE WHEN is_in_standby = 1 THEN 'standby' ELSE '' END AS [standby],
            CASE WHEN is_cleanly_shutdown = 1 THEN 'cleanly shutdown' ELSE '' END AS [cleanly shutdown]
        FROM sys.databases DB
        ORDER BY dbName, [Last backup] DESC, NAME

    Please let me know if you find this information useful. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Oracle Introduces a New Line of Defense for Databases

    - by roxana.bradescu
    Today at the 2011 RSA Conference, we announced the immediate availability of our new Oracle Database Firewall, the latest addition to a comprehensive portfolio of database security solutions. Oracle Database Firewall is a network-based software solution that monitors database traffic and can detect and block SQL injection and other attacks before they reach Oracle and non-Oracle databases. According to the 2010 Verizon Data Breach Investigations Report, SQL injection attacks against databases are responsible for 89% of all breached data. SQL injection is a technique for controlling responses from the database server through the application; the attack exploits the inherent trust between the application layer and the back-end database. Previously, the only way organizations could safeguard against SQL injection attacks was a complete overhaul of their application code: obviously a very costly, complex, and often impossible undertaking for most organizations. Enter the new Oracle Database Firewall. It can help prevent SQL injection attacks by establishing a defensive perimeter around your databases. The Oracle Database Firewall uses innovative SQL grammar analysis to inspect database traffic against pre-defined policies. Normal, expected traffic is allowed to pass (and can optionally be logged to demonstrate regulatory compliance), ensuring no false positives or disruption to your business. SQL statements that are explicitly forbidden, and unknown SQL statements, can be allowed to pass, logged, alerted on, blocked, or substituted with pre-defined SQL statements. Being able to substitute an unknown, potentially harmful SQL statement with a harmless statement is especially powerful, since it foils an attack while allowing the application to operate normally, and it also prevents DoS attacks. So, if you're at RSA, stop by our booth or attend the session with Steve Moyle, Oracle Database Firewall CTO. Or if you want to learn more immediately, please watch our on-demand webcast and download the new Oracle Database Firewall Resource Kit with everything you need to get started today.
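
    To make the attack class concrete, here is the textbook shape of a SQL injection that grammar-based inspection is designed to catch. This example is illustrative and not from the original post; the table and columns are hypothetical:

        -- the application naively concatenates user input into:
        --   SELECT * FROM app_users WHERE name = '<input>' AND password = '<input>'
        -- supplying the input   ' OR '1'='1' --   for the name yields:
        SELECT * FROM app_users WHERE name = '' OR '1'='1' --' AND password = ''
        -- the tautology returns every row, bypassing authentication; a grammar-aware
        -- firewall sees a statement shape the application never legitimately issues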

    Read the article
