Search Results

Search found 30412 results on 1217 pages for 'spatial database'.

Page 97/1217 | < Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >

  • Fetching database query results through a function

    - by Shubham Maurya
    I am sick of connecting to the database in each script; I need a more OOP approach to fetching database results, the way WordPress uses its wpdb class. This is what WordPress does to get data:

        <?php
        $posts = $wpdb->get_results("SELECT ID, post_title FROM $wpdb->posts
            WHERE post_status = 'publish' AND post_type = 'post'
            ORDER BY comment_count DESC LIMIT 0,4");
        ?>

    How can I create the same feature using a class or function and use it in my scripts? Thank you
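
    Below is a minimal sketch of such a wrapper: a singleton class around PDO, so every script shares one connection and a wpdb-style get_results method. The class name Db, the method names, and the connection details are illustrative assumptions, not WordPress code.

        <?php
        class Db
        {
            private static $instance;
            private $pdo;

            private function __construct()
            {
                // Connection details are placeholders; adjust for your setup.
                $this->pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
                $this->pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            }

            // One shared connection for every script that uses the class.
            public static function instance()
            {
                if (self::$instance === null) {
                    self::$instance = new self();
                }
                return self::$instance;
            }

            // Fetch all rows for a query, wpdb-style, with optional bound parameters.
            public function get_results($sql, array $params = array())
            {
                $stmt = $this->pdo->prepare($sql);
                $stmt->execute($params);
                return $stmt->fetchAll(PDO::FETCH_OBJ);
            }
        }

        // Usage, mirroring the wpdb example above (table name assumed):
        $posts = Db::instance()->get_results(
            "SELECT ID, post_title FROM posts
             WHERE post_status = 'publish' AND post_type = 'post'
             ORDER BY comment_count DESC LIMIT 0,4"
        );
        ?>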

    Read the article

  • An Unstoppable Force!

    - by TammyBednar
    Building a high-availability database platform presents unique challenges. Combining servers, storage, networking, OS, firmware, and database is complicated and raises important concerns: Will coordination between multiple SMEs delay deployment? Will it be reliable? Will it scale? Will routine maintenance consume precious IT-staff time? Ultimately, will it work? Enter the Oracle Database Appliance, a complete package of software, server, storage, and networking that's engineered for simplicity. It saves time and money by simplifying deployment, maintenance, and support of database workloads. Plus, it's based on Intel Xeon processors to ensure a high level of performance and scalability. Take a look at this video to compare Heather and Ted's approaches to building a server for their Oracle database! http://www.youtube.com/watch?v=os4RDVclWS8 If you missed the "Compare Database Platforms: Build vs. Buy" webcast, or want to listen again, check out how Jeff Schulte, Vice President at Yodlee, uses the Oracle Database Appliance.

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
    Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find most of the questions I got were from folks that wanted to know more about StreamInsight and Master Data Services. In past conferences I've been asked a lot of "free consulting" questions, about problems folks have had from older products. I don't mind that a bit - in fact, I'm always happy to help in any way I can. But this time people are really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production. My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won. During the presentation, I explained the tools you can use to design databases. I also explained that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer - it uses non-standard notation, doesn't have a meta-data dictionary, and worst of all, it works at the physical level. In other words, whatever you do in SSMS will automatically change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but the tool is not a good design tool nonetheless. To be fair, no one I know of at Microsoft recommends that it is - but I was shocked to hear so many developers in the room defending it as a good tool. I think the main issue for someone who doesn't have to work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards", so it's just easier to grab a field and place it on the table you want to point to. There are options. You can download a couple of free tools (CA has a community edition of ERwin, Quest has one, and Embarcadero also has one), and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but it was changed so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.

    Read the article

  • Database Delivery Patterns and Practices

    The articles collected here will help you understand the theories and methodologies behind every stage of the database delivery pipeline, starting when database changes are checked in, and ending when they're deployed to production.

    Read the article

  • How do I rescue a small portion of data from a SQL Server database backup?

    - by Greg
    I have a live database that had some data deleted from it, and I need that data back. I have a very recent copy of that database that has already been restored on another machine. Unrelated changes have been made to the live database since the backup, so I do not want to wipe out the live database with a full restore. The data I need is small - just a dozen rows - but each of those dozen rows has a couple of rows in other tables with foreign keys to it, and those rows have god knows how many rows with foreign keys pointing to them, so it would be complicated to restore by hand. Ideally I'd be able to tell the backup copy of the database to select the dozen rows I need, plus the transitive closure of everything they depend on and everything that depends on them, and export just that data, which I can then import into the live database without touching anything else. What's the best approach to take here? Thanks. Update: Everyone has mentioned sp_generate_inserts. When using this, how do you prevent identity columns from messing everything up? Do you just turn IDENTITY_INSERT on?

    Read the article

  • Application Code Redesign to Reduce the Number of Database Hits from a Performance Perspective

    - by Rachel
    Scenario: I want to parse a large CSV file and insert the data into the database; the CSV file has approximately 100K rows of data. Currently I am using fgetcsv to parse through the file row by row and insert the data into the database, so right now I am hitting the database once for each line of data in the CSV file. The database hit count is therefore 100K, which is not good from a performance point of view. Current code:

        public function initiateInserts()
        {
            // Open large CSV file (min 100K rows) for parsing.
            $this->fin = fopen($file, 'r') or die('Cannot open file');

            // Parse the large CSV file to get data and initiate insertion into the schema.
            while (($data = fgetcsv($this->fin, 5000, ";")) !== FALSE) {
                $query = "INSERT INTO dt_table (id, code, connectid, connectcode)
                          VALUES (:id, :code, :connectid, :connectcode)";
                $stmt = $this->prepare($query);

                // Then, for each line: bind the parameters.
                $stmt->bindValue(':id', $data[0], PDO::PARAM_INT);
                $stmt->bindValue(':code', $data[1], PDO::PARAM_INT);
                $stmt->bindValue(':connectid', $data[2], PDO::PARAM_INT);
                $stmt->bindValue(':connectcode', $data[3], PDO::PARAM_INT);

                // Execute the statement.
                $stmt->execute();
                $this->checkForErrors($stmt);
            }
        }

    I am looking for a way wherein, instead of hitting the database for every row of data, I can prepare the query, hit the database once, and populate it with the inserts. Any suggestions? Note: this is the exact sample code that I am using, but the CSV file has more fields, not only id, code, connectid and connectcode; I wanted to make sure that I am able to explain the logic, and so have used this sample code here. Thanks!
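
    One common answer, sketched below, is to batch rows into multi-row INSERT statements so each round trip writes many rows at once, wrapped in a single transaction. This is a minimal sketch assuming a PDO connection, MySQL-style multi-row VALUES syntax, and rows with at least the four columns shown; the function names, the $pdo variable, and the batch size of 500 are illustrative, not from the post above.

        <?php
        // Read the CSV and flush rows to the database in batches,
        // so 100K rows become ~200 round trips instead of 100K.
        function insertCsvInBatches(PDO $pdo, $file, $batchSize = 500)
        {
            $fin = fopen($file, 'r') or die('Cannot open file');
            $rows = array();

            $pdo->beginTransaction();
            while (($data = fgetcsv($fin, 5000, ";")) !== FALSE) {
                $rows[] = $data;
                if (count($rows) >= $batchSize) {
                    flushBatch($pdo, $rows);   // one round trip per $batchSize rows
                    $rows = array();
                }
            }
            if ($rows) {
                flushBatch($pdo, $rows);       // remaining partial batch
            }
            $pdo->commit();
            fclose($fin);
        }

        function flushBatch(PDO $pdo, array $rows)
        {
            // Build "(?,?,?,?),(?,?,?,?),..." - one placeholder group per row.
            $placeholders = implode(',', array_fill(0, count($rows), '(?,?,?,?)'));
            $sql = "INSERT INTO dt_table (id, code, connectid, connectcode) VALUES $placeholders";
            $stmt = $pdo->prepare($sql);

            // Flatten the row arrays into one flat parameter list
            // (assumes each CSV row supplies at least four columns).
            $params = array();
            foreach ($rows as $row) {
                $params = array_merge($params, array_slice($row, 0, 4));
            }
            $stmt->execute($params);
        }
        ?>

    The transaction matters as much as the batching: committing once per batch, rather than once per row, avoids a log flush on every insert.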

    Read the article

  • The Database Recovery Advisor in SQL Server 2012

    The Database Recovery Advisor in SQL Server 2012 will help DBAs recover their databases to a point in time in a crisis. Read about this new feature and how it can speed the process of recovery.

    Read the article

  • Steps to Rename a Subscriber Database for SQL Server Transactional Replication

    I have transactional replication configured in production. The business team has a requirement to rename the subscription database. Is it possible to rename the subscription database and ensure that transactional replication will continue to function as before? If so, how could we achieve this?

    Read the article

  • Database Insider - April 2012 issue

    - by Javier Puerta
    The April issue of the Database Insider newsletter is now available. It includes, among many others: Oracle Advanced Analytics for Big Data; Best Practices for Workload Management of a Data Warehouse on Oracle Exadata; Best Practices for Implementing a Data Warehouse on Oracle Exadata.

    Read the article

  • Database Deployment: The Bits - Getting Data In

    Quite often, the database developer or tester is faced with having to load data into a newly created database. What could be simpler? Quite a lot of things, it seems.

    Read the article

  • Database Mirroring Performance Monitoring

    Many people deploy performance monitoring solutions in a "one-size-fits-all" manner. That is, they tend to build a solution that can be easily deployed to multiple servers and capture basic information from each server. The trouble is that not every server is identical, not even within the same shop. For example, not every server may have database mirroring deployed, which means your performance monitoring solution may be missing some critical pieces of information with regard to monitoring database mirroring.

    Read the article

  • How might system security be improved using database procedures?

    - by Centurion
    The usage of Oracle PL/SQL procedures for controlling access to data is often emphasized in PL/SQL books and other sources as the more secure approach. I've seen several systems where all business logic related to data is performed through packages, procedures and functions, so the application code becomes quite "dumb" and is only responsible for the visualization part. I've even heard some devs call such approaches, and the architects driving them, "database nazis" :) because all the logic code resides in the database. I do know about the performance benefits of DB procedures, but now I'm interested in the "better security" claim when using the thick-client model. I assume such a design is mostly used when Oracle (and maybe MS SQL Server) databases are involved. I do agree such an approach improves security, but only if there are not many users and every system user has a database account, so we can control and monitor data access through standard database user security. However, how could such an approach increase security for an average web system, where one database user has DML grants on all tables and other users are handled using "users" and "user_rights" tables? We could use DB procedures and save usernames into a context to use for filtering, but the vulnerability remains at the root - if the main database account is compromised, then nothing will help. Of course, in a real system we might at least consider several main users (for example frontend_db_user, backend_db_user).

    Read the article

  • New SQL Monitor Custom Metric: Database Autogrowth

    This metric for Red Gate SQL Monitor measures the number of database autogrowth events (data file or log file) in the last hour. Too many autogrowth events cause disk fragmentation, which calls for a change in the database's autogrowth settings.

    Read the article

  • Recovering SQL Server Database From Error: 5171

    MS SQL Server is the relational database management system most preferred by database users all over the world. It provides several benefits such as enhanced productivity, scalability, efficiency, av... [Author: Mark Willium - Computers and Internet - May 14, 2010]

    Read the article

  • Database Activity Monitoring Part 1 - An Introduction

    We are inundated with new technologies and products designed to help make our organisations safe from hackers and other malcontents. One technology that has gained ground over the past few years is database activity monitoring. It makes sense to protect valuable databases, and by adding an intelligent monitor capable of sniffing out threats, an additional level of protection can be gained. But what is database activity monitoring, and why should you care?

    Read the article

  • Free eBook: Defensive Database Programming

    Resilient T-SQL code is code that is designed to last, and to be safely reused by others. The goal of defensive database programming, the goal of this book, is to help you to produce resilient T-SQL code that robustly and gracefully handles cases of unintended use, and is resilient to common changes to the database environment.

    Read the article

  • Oracle Database Core Tech Seminar: Oracle Data Guard, Oracle Recovery Manager (RMAN), Flashback

    - by user788995
    Date: 2012/05/14. An Oracle Database Core Tech Seminar session covering Oracle's high-availability and recovery features: Oracle Data Guard, Oracle Recovery Manager (RMAN), and Oracle Flashback Technology, including Active Data Guard.
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/D3-22.wmv
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/mp4/D3-22.mp4
    http://www.oracle.com/technetwork/jp/ondemand/database/db-new/d3-22-dl-1626591-ja.pdf

    Read the article

  • Accidentally overwrote system.dbf - What now?

    - by Filip Ekberg
    I accidentally overwrote system.dbf in /usr/lib/oracle/xe/oradata/XE/system.dbf. Well, I did not actually do it accidentally; I overwrote it because of other failures in the database. When I try running the following:

        SQL> shutdown
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.
        SQL> startup
        ORACLE instance started.
        Total System Global Area  289406976 bytes
        Fixed Size                  1258488 bytes
        Variable Size              92277768 bytes
        Database Buffers          192937984 bytes
        Redo Buffers                2932736 bytes
        Database mounted.
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

    Now I want to try to recover the database, because starting it mounted or in standard mode surely doesn't work:

        SQL> recover database using backup controlfile;
        ORA-00283: recovery session canceled due to errors
        ORA-01110: data file 1: '/usr/lib/oracle/xe/oradata/XE/system.dbf'
        ORA-01122: database file 1 failed verification check
        ORA-01110: data file 1: '/usr/lib/oracle/xe/oradata/XE/system.dbf'
        ORA-01206: file is not part of this database - wrong database id

    How do I solve this? Is it even possible? My "real" problem was that I ran /etc/init.d/oracle-xe configure and it overwrote my old configuration, and probably removed passwords and such, so my tables were gone. However, I found mytablespace.dbf, so I hope that recovery is possible? Please shed some light on this.

    Read the article

  • With Its MySQL Database-as-a-Service, CERN Empowers Scientists

    - by Bertrand Matthelié
    The European Organization for Nuclear Research (CERN) is one of the world's largest and most respected centers for scientific research. Founded in 1954 and located near Geneva on the Franco-Swiss border, CERN was one of Europe's first joint ventures. Today, it has 20 member states. The organization uses the world's largest and most complex scientific instruments to study fundamental particles and the origin of the universe.

    Challenges: Better support the scientists associated with a CERN research program who selected MySQL as their database. Empower users, enabling them to be as self-reliant as possible. Minimize complexity and costs for the CERN IT department to support the growing number of MySQL deployments.

    Solution: Delivered a MySQL Database-as-a-Service offering to the CERN employees and the scientists associated with the organization. Allowed researchers selecting MySQL for their project to get access to a database instance hosted by the CERN IT department, either from the start or once their application has become critical. Implemented the service using Oracle's server virtualization software, Oracle VM, for increased flexibility and reduced costs. Empowered users with a self-service approach, providing them with tools to manage MySQL themselves while handling backups and other basic database administration tasks for them. Enabled scientists to rely on MySQL with increased reliability, security and manageability while reducing complexity and minimizing costs.

    "The Cloud model has allowed us to deliver a self-service platform to our MySQL users, empowering them while minimizing costs for CERN." - Tony Cass, Database Services Group Leader, IT department, CERN.

    Read the article

  • How to overcome shortcomings in reporting from an EAV database?

    - by David Archer
    The major shortcomings of Entity-Attribute-Value database designs in SQL all seem to be related to being able to query and report on the data efficiently and quickly. Most of the information I read on the subject warns against implementing EAV due to these problems and the commonality of querying/reporting in almost all applications. I am currently designing a system where almost all of the fields necessary for data storage are not known at design/compile time and are defined by the end-user of the system. EAV seems like a good fit for this requirement, but due to the problems I've read about, I am hesitant to implement it, as there are also some pretty heavy reporting requirements for this system. I think I've come up with a way around this but would like to pose the question to the SO community. Given that a typical normalized database (OLTP) still isn't always the best option for running reports, a good practice seems to be having a "reporting" database (OLAP) where the data from the normalized database is copied to, indexed extensively, and possibly denormalized for easier querying. Could the same idea be used to work around the shortcomings of an EAV design? The main downside I see is the increased complexity of transferring the data from the EAV database to reporting, as you may end up having to alter the tables in the reporting database as new fields are defined in the EAV database. But that is hardly impossible and seems to be an acceptable tradeoff for the increased flexibility given by the EAV design. This downside also exists if I use a non-SQL data store (i.e. CouchDB or similar) for the main data storage, since all the standard reporting tools expect a SQL backend to query against. Do the issues with EAV systems mostly go away if you have a separate reporting database for querying? EDIT: Thanks for the comments so far. One of the important things about the system I'm working on is that I'm really only talking about using EAV for one of the entities, not everything in the system. The whole gist of the system is to be able to pull data from multiple disparate sources that are not known ahead of time and crunch the data to come up with some "best known" data about a particular entity. So every "field" I'm dealing with is multi-valued and I'm also required to track history for each. The normalized design for this ends up being one table per field, which makes querying it kind of painful anyway.
    Here are the table schemas and sample data I'm looking at (obviously changed from what I'm working on, but I think it illustrates the point well):

    EAV Tables

    Person
    -------------------
    - Id  - Name      -
    -------------------
    - 123 - Joe Smith -
    -------------------

    Person_Value
    -------------------------------------------------------------------
    - PersonId - Source - Field       - Value         - EffectiveDate -
    -------------------------------------------------------------------
    - 123      - CIA    - HomeAddress - 123 Cherry Ln - 2010-03-26    -
    - 123      - DMV    - HomeAddress - 561 Stoney Rd - 2010-02-15    -
    - 123      - FBI    - HomeAddress - 676 Lancas Dr - 2010-03-01    -
    -------------------------------------------------------------------

    Reporting Table

    Person_Denormalized
    ----------------------------------------------------------------------------------------
    - Id  - Name      - HomeAddress   - HomeAddress_Confidence - HomeAddress_EffectiveDate -
    ----------------------------------------------------------------------------------------
    - 123 - Joe Smith - 123 Cherry Ln - 0.713                  - 2010-03-26                -
    ----------------------------------------------------------------------------------------

    Normalized Design

    Person
    -------------------
    - Id  - Name      -
    -------------------
    - 123 - Joe Smith -
    -------------------

    Person_HomeAddress
    ------------------------------------------------------
    - PersonId - Source - Value         - Effective Date -
    ------------------------------------------------------
    - 123      - CIA    - 123 Cherry Ln - 2010-03-26     -
    - 123      - DMV    - 561 Stoney Rd - 2010-02-15     -
    - 123      - FBI    - 676 Lancas Dr - 2010-03-01     -
    ------------------------------------------------------

    The "Confidence" field here is generated using logic that cannot be expressed easily (if at all) in SQL, so my most common operation, besides inserting new values, will be pulling ALL data about a person for all fields so I can generate the record for the reporting table. This is actually easier in the EAV model, as I can do a single query. In the normalized design, I end up having to do one query per field to avoid a massive cartesian product from joining them all together.

    Read the article

  • SQL SERVER – Get All the Information of Database using sys.databases

    - by pinaldave
    Earlier I wrote the blog article SQL SERVER – Finding Last Backup Time for All Database. In response to that article, I received a very interesting script from SQL Server expert Matteo as a comment on the blog. He has written the script using sys.databases, which provides plenty of information about your databases. I suggest you run this on your server and learn the unknowns of your databases as well.

        SELECT database_id,
            CONVERT(VARCHAR(25), DB.name) AS dbName,
            CONVERT(VARCHAR(10), DATABASEPROPERTYEX(name, 'status')) AS [Status],
            state_desc,
            (SELECT COUNT(1) FROM sys.master_files
             WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS DataFiles,
            (SELECT SUM((size*8)/1024) FROM sys.master_files
             WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS [Data MB],
            (SELECT COUNT(1) FROM sys.master_files
             WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS LogFiles,
            (SELECT SUM((size*8)/1024) FROM sys.master_files
             WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS [Log MB],
            user_access_desc AS [User access],
            recovery_model_desc AS [Recovery model],
            CASE compatibility_level
                WHEN 60 THEN '60 (SQL Server 6.0)'
                WHEN 65 THEN '65 (SQL Server 6.5)'
                WHEN 70 THEN '70 (SQL Server 7.0)'
                WHEN 80 THEN '80 (SQL Server 2000)'
                WHEN 90 THEN '90 (SQL Server 2005)'
                WHEN 100 THEN '100 (SQL Server 2008)'
            END AS [compatibility level],
            CONVERT(VARCHAR(20), create_date, 103) + ' '
                + CONVERT(VARCHAR(20), create_date, 108) AS [Creation date],
            -- last backup
            ISNULL((SELECT TOP 1
                    CASE TYPE WHEN 'D' THEN 'Full'
                              WHEN 'I' THEN 'Differential'
                              WHEN 'L' THEN 'Transaction log' END
                    + ' – ' + LTRIM(ISNULL(STR(ABS(DATEDIFF(DAY, GETDATE(), Backup_finish_date)))
                        + ' days ago', 'NEVER'))
                    + ' – ' + CONVERT(VARCHAR(20), backup_start_date, 103) + ' '
                    + CONVERT(VARCHAR(20), backup_start_date, 108)
                    + ' – ' + CONVERT(VARCHAR(20), backup_finish_date, 103) + ' '
                    + CONVERT(VARCHAR(20), backup_finish_date, 108)
                    + ' (' + CAST(DATEDIFF(second, BK.backup_start_date, BK.backup_finish_date)
                        AS VARCHAR(4)) + ' ' + 'seconds)'
                FROM msdb..backupset BK
                WHERE BK.database_name = DB.name
                ORDER BY backup_set_id DESC), '-') AS [Last backup],
            CASE WHEN is_fulltext_enabled = 1 THEN 'Fulltext enabled' ELSE '' END AS [fulltext],
            CASE WHEN is_auto_close_on = 1 THEN 'autoclose' ELSE '' END AS [autoclose],
            page_verify_option_desc AS [page verify option],
            CASE WHEN is_read_only = 1 THEN 'read only' ELSE '' END AS [read only],
            CASE WHEN is_auto_shrink_on = 1 THEN 'autoshrink' ELSE '' END AS [autoshrink],
            CASE WHEN is_auto_create_stats_on = 1 THEN 'auto create statistics' ELSE '' END
                AS [auto create statistics],
            CASE WHEN is_auto_update_stats_on = 1 THEN 'auto update statistics' ELSE '' END
                AS [auto update statistics],
            CASE WHEN is_in_standby = 1 THEN 'standby' ELSE '' END AS [standby],
            CASE WHEN is_cleanly_shutdown = 1 THEN 'cleanly shutdown' ELSE '' END
                AS [cleanly shutdown]
        FROM sys.databases DB
        ORDER BY dbName, [Last backup] DESC, NAME

    Please let me know if you find this information useful. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • SQLAuthority News – Bookmark – Deprecated Database Engine Features in SQL Server 2008

    - by pinaldave
    Whenever anybody asks me whether a specific feature is available in SQL Server 2008, or whether a feature will be disabled in future versions of SQL Server, I always point them to the following lists, where all the deprecated database engine features are documented: Deprecated Database Engine Features in SQL Server 2008 R2; Deprecated Database Engine Features in SQL Server 2008. These lists are quite helpful and everybody should refer to them once, as they contain many important details. For example, they note that "80 compatibility level and upgrade from version 80" will not be supported in the next version of SQL Server. If you are still using SQL Server 2000 today (by any chance), you will not be able to upgrade it directly to the next version of SQL Server. It is very important to note that if you are using any SQL Server feature in compatibility mode and you find it in the lists above, you need to start working on the replacement suggested in the article. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article
