Search Results

Search found 4704 results on 189 pages for 'refactoring databases'.

Page 42/189

  • Is it possible to integrate user databases between Drupal and an ASP&SQL Server platform?

    - by hecatomber
    We have a game project built on ASP&SQL Server, and we need to integrate its user database with Drupal. This would be easier going from the project to Drupal (since the user_save and user_delete functions are available globally via a Drupal bootstrap), but I'm not sure we can execute PHP functions from an ASP platform. Is there any documentation for this kind of problem? What do you suggest?

    Read the article

  • How to merge two databases into one in MS SQL Server 2008?

    - by SzamDev
    Hi, I have 2 PCs, each with MS SQL Server 2008 installed and a database with data in it. I need a way to move the data in my DB from one MS SQL Server to the other (another PC which has the same DB). There is one problem: the ID column. Because the DB on each of my 2 PCs has data in it, this column counts up from 1, 2, 3, ... so the incoming rows would conflict with the data already in my DB. Is there any way to solve this problem and move the data successfully?
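
    One common approach is to let the destination server assign fresh IDENTITY values and capture an old-to-new ID mapping so child rows can be remapped. A minimal T-SQL sketch, assuming a linked server [SourcePC] pointing at the other PC and a hypothetical dbo.Customers table with an IDENTITY id column (the MERGE ... OUTPUT trick works on SQL Server 2008):

        DECLARE @map TABLE (old_id INT, new_id INT);

        MERGE dbo.Customers AS t
        USING (SELECT id, name, email
               FROM [SourcePC].MyDb.dbo.Customers) AS s
           ON 1 = 0                                  -- never matches: insert every row
        WHEN NOT MATCHED THEN
            INSERT (name, email) VALUES (s.name, s.email)
        OUTPUT s.id, inserted.id INTO @map (old_id, new_id);

        -- Remap a hypothetical child table copied the same way:
        -- UPDATE o SET o.customer_id = m.new_id
        -- FROM dbo.Orders AS o JOIN @map AS m ON o.customer_id = m.old_id;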

    Read the article

  • How to guarantee atomicity across two databases (the filesystem and your RDBMS)?

    - by Lock up
    I am working on an online file management project in which we store references in the database (SQL Server) and file data on the file system. We are facing a coordination problem between the file system and the database when uploading (and also when deleting) a file: should we create the reference in the database first, or store the file on the file system first? The problem is that if we create the reference in the database first and then store the file on the file system, any error that occurs while storing the file leaves a reference in the database with no file data behind it. Please give me a solution for how to deal with this situation; I am badly in need of one.
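
    One workable pattern is to make the database the source of truth and treat the file write as a second phase that can be reconciled: insert the reference in a PENDING state, write the file, then mark the row COMMITTED, and let a periodic job sweep up references whose file never arrived. A minimal T-SQL sketch, assuming a hypothetical FileRefs table:

        -- Hypothetical reference table: a row only counts once it is COMMITTED.
        CREATE TABLE dbo.FileRefs (
            FileId    UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
            FilePath  NVARCHAR(400)    NOT NULL,
            Status    VARCHAR(10)      NOT NULL DEFAULT 'PENDING',  -- PENDING | COMMITTED
            CreatedAt DATETIME         NOT NULL DEFAULT GETUTCDATE()
        );

        -- Step 1: insert the reference as PENDING, then write the file from the app.
        -- Step 2: only after the file write succeeds, flip the row to COMMITTED:
        DECLARE @FileId UNIQUEIDENTIFIER;  -- set by the application for this upload
        UPDATE dbo.FileRefs SET Status = 'COMMITTED' WHERE FileId = @FileId;

        -- Step 3: a scheduled cleanup removes references whose file never arrived
        -- (and the application can likewise delete files that have no COMMITTED row).
        DELETE FROM dbo.FileRefs
        WHERE Status = 'PENDING'
          AND CreatedAt < DATEADD(HOUR, -1, GETUTCDATE());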

    Read the article

  • Is it better to use a relational database or document-based database for an app like Wufoo?

    - by mboyle
    I'm working on an application that's similar to Wufoo in that it allows our users to create their own databases and collect/present records with auto-generated forms and views. Since every user is creating a different schema (one user might have a database of their baseball card collection, another might have a database of their recipes), our current approach is using MySQL to create a separate database for every user, each with its own tables. So in other words, the databases our MySQL server contains look like: main-web-app-db (our web app, containing tables for users' account info, billing, etc.), user_1_db (baseball_cards_table), user_2_db (recipes_table), and so on. If a user wants to set up a new database to keep track of their DVD collection, we'd do a "create database ..." with "create table ...". If they enter some data and then decide they want to change a column, we'd do an "alter table ...". Now, the further along I get with building this out, the more it seems like MySQL is poorly suited to handling this. 1) My first concern is that switching databases on every request - first to our main app's database for authentication etc., and then to the user's personal database - is going to be inefficient. 2) The second concern I have is that there's going to be a limit to the number of databases a single MySQL server can host. Pretending for a moment this application had 500,000 user databases, is MySQL designed to operate this way? What if it were a million, or more? 3) Lastly, is this method going to be a nightmare to support and scale? I've never heard of MySQL being used in this way, so I do worry about how this affects things like replication and other methods of scaling. To me, it seems like MySQL wasn't built to be used in this way, but what do I know. I've been looking at document-based databases like MongoDB, CouchDB, and Redis as alternatives because it seems like a schema-less approach to this particular problem makes a lot of sense. Can anyone offer some advice on this?
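
    One alternative that avoids per-user databases and ALTER TABLE entirely is to model the user-defined tables as rows inside a single fixed schema - the classic EAV layout the document databases are sidestepping. A rough MySQL sketch with hypothetical names (adding a user "column" becomes an INSERT into user_fields instead of a DDL change, at the cost of extra joins when rendering records):

        -- All users share one schema; their "tables", "columns" and "rows" become data.
        CREATE TABLE user_tables (
            table_id INT AUTO_INCREMENT PRIMARY KEY,
            user_id  INT         NOT NULL,
            name     VARCHAR(64) NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE user_fields (
            field_id INT AUTO_INCREMENT PRIMARY KEY,
            table_id INT         NOT NULL,
            name     VARCHAR(64) NOT NULL,
            datatype VARCHAR(16) NOT NULL,   -- 'text', 'number', 'date', ...
            FOREIGN KEY (table_id) REFERENCES user_tables (table_id)
        ) ENGINE=InnoDB;

        CREATE TABLE user_records (
            record_id INT AUTO_INCREMENT PRIMARY KEY,
            table_id  INT NOT NULL,
            FOREIGN KEY (table_id) REFERENCES user_tables (table_id)
        ) ENGINE=InnoDB;

        CREATE TABLE user_values (
            record_id INT NOT NULL,
            field_id  INT NOT NULL,
            value     TEXT,
            PRIMARY KEY (record_id, field_id),
            FOREIGN KEY (record_id) REFERENCES user_records (record_id),
            FOREIGN KEY (field_id)  REFERENCES user_fields (field_id)
        ) ENGINE=InnoDB;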

    Read the article

  • MySQL Tables Missing/Corrupt After Recreation

    - by Synetech inc.
    Hi, Yesterday I dumped my MySQL databases to an SQL file and renamed the ibdata1 file. I then recreated it and imported the SQL file, and moved the new ibdata1 file to my MySQL data directory, deleting the old one. I've done this before without issue; however, this time something is not right. When I examine the (personal, not MySQL config) databases, they are all there, but they are empty... sort of. The data directory still has the .ibd files with the correct content in them, and I can view the table list in the databases, but not the tables themselves. (I have file-per-table enabled, and am using InnoDB as the default for everything.) For example, with the urls database and its urls table, I can successfully open mysql.exe or phpMyAdmin and run use urls;. I can even run show tables; to see the expected table, but then when I try describe urls; or select * from urls;, it complains that the table does not exist (even though it just listed it). (The MySQL Administrator lists the databases, but does not even list the tables; it indicates that the dbs are completely empty.) The problem now is that I have already deleted the SQL file (and cannot recover it, even after scouring my hard drive). So I am trying to figure out a way to repair these databases/tables. I can't use the table repair function since it complains that the table does not exist, and I can't dump them because, again, it complains that the tables don't exist. Like I've said, the data itself is still present in the .ibd files and the table names are present. I just need a way to get MySQL to recognize that the tables exist in the databases (I can find the column names of the tables in question in the ibdata1 file using a hex editor). Any idea how I can repair this type of corruption? I don't mind rolling up my sleeves, digging in, and taking a bunch of steps to fix it. Thanks a lot.
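
    With file-per-table enabled, one recovery avenue worth trying is the discard/import tablespace dance: recreate each table with a definition identical to the original, discard the freshly created tablespace, copy the preserved .ibd into place, and re-import it. A sketch (the urls definition here is a placeholder and must match the original exactly; the import can still fail with a tablespace-ID mismatch against the new ibdata1, in which case recovery tooling such as Percona's InnoDB recovery tools is the next stop):

        USE urls;

        -- Recreate the table; the definition must match the original exactly.
        CREATE TABLE urls (
            id  INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            url VARCHAR(2000) NOT NULL
        ) ENGINE=InnoDB;

        ALTER TABLE urls DISCARD TABLESPACE;  -- removes the new, empty urls.ibd
        -- Now copy the preserved urls.ibd back into the data directory, then:
        ALTER TABLE urls IMPORT TABLESPACE;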

    Read the article

  • The Red Gate Guide to SQL Server Team-based Development Free e-book

    - by Mladen Prajdic
    After about 6 months of work, the new book I've coauthored with Grant Fritchey (Blog|Twitter), Phil Factor (Blog|Twitter) and Alex Kuznetsov (Blog|Twitter) is out. They're all smart folks I talk to online, and this book is packed with good ideas backed by years of experience. The book contains a good deal of information about the things you need to think of when doing any kind of multi-person database development. Although it's meant for SQL Server, the principles can be applied to any database platform out there. In the book you will find information on: writing readable code, documenting code, source control and change management, deploying code between environments, unit testing, reusing code, and searching and refactoring your code base. I've written chapter 5, about database testing, and chapter 11, about SQL refactoring. In the database testing chapter (chapter 5) I cover why you should test your database, why it is a good idea to have a database access interface composed of stored procedures, views and user-defined functions, and what and how to test. I talk about the many testing methods, like black and white box testing, unit and integration testing, and error and stress testing, and why and how you should do all of those. Sometimes you have to convince management to include testing in the development lifecycle, so I give some pointers and tips on how to do that. Testing databases differs from testing object-oriented code in that, to have independent unit tests, you need to roll back your changes after each test. The chapter shows you ways to do this and also how to avoid it. At the end I show how to test various database objects and how to test access to them. In the SQL refactoring chapter (chapter 11) I cover why you should refactor and where to even begin refactoring. I also show you a way to achieve the set-based mindset for solving SQL problems, which is crucial to good set-based SQL programming, and a few commonly seen problems to refactor. These problems include: using functions on columns in the WHERE clause, SELECT * problems, long stored procedures with many input parameters, one subquery per condition in the SELECT statement, the "cursors are good for anything" problem, using too-large data types everywhere, and the "using your data in code for business logic" anti-pattern. You can read more about it and download it here: The Red Gate Guide to SQL Server Team-based Development. Hope you like it, and send me feedback if you wish.
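
    As a taste of the SQL refactoring chapter, the "functions on columns in the WHERE clause" problem looks roughly like this (hypothetical Orders table); the rewritten range predicate is equivalent but lets the optimizer seek an index on OrderDate:

        -- Before: the function wraps the column, so an index on OrderDate can't be seeked.
        SELECT OrderId, Total
        FROM dbo.Orders
        WHERE YEAR(OrderDate) = 2010;

        -- After: an equivalent, sargable range predicate.
        SELECT OrderId, Total
        FROM dbo.Orders
        WHERE OrderDate >= '20100101'
          AND OrderDate <  '20110101';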

    Read the article

  • Database mirroring of SQL Server

    - by jbp117
    I have two databases that are mirrored to another server using database mirroring. The mirror server has to be down for some reason for a few days. Now the principal databases on the production server are in the PRINCIPAL/DISCONNECTED state, and clients can still access them. So what happens as they keep adding data to these databases? Will the data get committed, or does it wait until the mirror comes up?
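
    In the PRINCIPAL/DISCONNECTED state the principal keeps accepting and committing transactions; the unsent log records simply accumulate in the principal's transaction log (which cannot truncate past them) until the mirror reconnects and catches up, so watch log file growth while the mirror is down. The current state can be checked with a query like:

        -- Mirroring state of every mirrored database on this instance.
        SELECT DB_NAME(database_id)  AS database_name,
               mirroring_role_desc,
               mirroring_state_desc,
               mirroring_safety_level_desc
        FROM sys.database_mirroring
        WHERE mirroring_guid IS NOT NULL;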

    Read the article

  • Is MySQL Replication Appropriate in this case?

    - by MJB
    I have a series of databases, each of which is basically standalone. It initially seemed like I needed a replication solution, but the more I researched it, the more it felt like replication was overkill and not useful anyway. I have not done MySQL replication before, so I have been reading up on the online docs, googling, and searching SO for relevant questions, but I can't find a scenario quite like mine. Here is a brief description of my issue: (1) The various databases almost never have a live connection to each other. They need to be able to "sync" by copying files to a thumb drive and then moving them to the proper destination. (2) It is OK for the data to not match exactly, but they should have the same parent-child relationships. That is, if a generated key differs between databases, no big deal, but the visible data must match. (3) Timing is not critical. Updates can be done a week later, or even a month later, as long as they are done eventually. (4) Updates cannot be guaranteed to be in the proper order, or in any order for that matter. They will be in order from each database, just not between databases. (5) Rather than a set of master-slave relationships, it is more like a central database (R/W) and multiple remote databases (also R/W). (6) I won't know how many remote databases I have until they are created, and the central DB won't know that a database exists until data arrives from it. (To me, this implies I cannot use the method of giving each its own unique identity range to guarantee uniqueness in the central database.) (7) It appears to me that the bottom line is that I don't want "replication" so much as I want "awareness". I want the central database to know what happened in the remote databases, but there is no time requirement. I want the remote databases to be aware of the central database, but they don't need to know about each other. WTH is my question? It is this: does this scenario sound like any of the typical replication scenarios, or do I have to roll my own? Perhaps (7) above is the only one that matters, and given that requirement, out-of-the-box replication is impossible. EDIT: I realize that this question might be more suited to ServerFault. I also searched there and found no answers to my questions. And based on the replication questions I did find, both on SO and SF, the decision seemed 50-50 over where to put my question. Sorry if I guessed wrong.
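
    On point (6): if generated keys are allowed to differ between databases anyway, one way to make remote rows mergeable into the central database without pre-assigned identity ranges is to key them on UUIDs plus an origin tag. A minimal MySQL sketch with hypothetical names:

        CREATE TABLE orders (
            id          CHAR(36)    NOT NULL,  -- filled with UUID() at insert time
            origin_site VARCHAR(32) NOT NULL,  -- which remote database created the row
            placed_at   DATETIME    NOT NULL,
            PRIMARY KEY (id)
        );

        INSERT INTO orders (id, origin_site, placed_at)
        VALUES (UUID(), 'remote-07', NOW());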

    Read the article

  • Should we use Visual Studio 2010 for all SQL Server Database Development?

    - by Luke
    Our company currently has seven dedicated SQL Server 2008 servers, each running an average of 10 databases. All databases have many stored procedures and UDFs that commonly reference other databases, both on the same server and across linked servers. We currently use SSMS for all database-related administration and development, but we have recently purchased Visual Studio 2010, primarily for ongoing C# WinForms and ASP.NET development. I have used VS2010 to perform schema comparisons when rolling out changes from a development server into production, and I'm finding it great for this task. We would like to consider using VS2010 for all database development going forward, but as far as I understand, we would have to set up ALL databases as projects because of the dependencies on linked servers etc. My question is, do you have any experience using VS2010 for database development in a similar environment? Is it easy to use in tandem with SSMS, or is it a one-way street once VS2010 projects have been set up for all databases? Can you make any recommendations/impart any experience with a similar scenario? Thanks, Luke

    Read the article

  • Code maintenance: keep a bad pattern for consistency when extending existing code, or not?

    - by Guillaume
    I have to extend an existing module of a project. I don't like the way it has been done (lots of anti-patterns involved, like copy/pasted code). I don't want to perform a complete refactor. Should I: create new methods using the existing convention, even if it feels wrong, to avoid confusing the next maintainer and to stay consistent with the code base? Or try to do what I feel is better, even if it introduces another pattern into the code? Precision added after the first answers: The existing code is not a mess. It is easy to follow and understand. BUT it introduces lots of boilerplate code that could be avoided with good design (though the resulting code might become harder to follow). In my current case it's a good old JDBC DAO module (with Spring templates on board), but I have already encountered this dilemma before and I'm seeking other developers' feedback. I don't want to refactor because I don't have time, and even with time it would be hard to justify that a whole, perfectly working module needs refactoring: the cost would outweigh the benefits. Remember: the code is not messy or over-complex. I cannot just extract a few methods there and introduce an abstract class here. It is more a flaw in the design (the result of extreme "Keep It Simple, Stupid", I think). So the question can also be asked like this: as a developer, do you prefer to maintain easy, stupid, boring code, OR to have some helpers that will do the stupid boring work in your place? The downside of the latter being that you'll have to learn some stuff, and maybe you will have to maintain the easy stupid boring code too until a full refactoring is done.

    Read the article

  • Emacs/Vim/Vi - do they have a place in the modern software development ecosystem? [closed]

    - by Anton Gogolev
    Watching all those screencasts (and listening all those podcasts) with more-or-less famous hackers/programmers I hear that many of those use emacs/vi(m) for their daily work. Now, I myself tried using both emacs and vim, and I honestly cannot understand why would anybody use these for any kind of serious development. The most advertised feature is something along the lines of "you'll be able to work with text (meaning cutting, pasting, duplicating, moving, etc) up to ten times faster than with conventional IDEs", but I don't buy that. When has the success of a software project been defined by how fast a programmer can juggle lines in a text editor or by saving a couple of keystrokes here and there? Plugins and extensions? I bet nothing comes close to R# or IDEA in terms of refactoring support ("Rename" refactoring implemented by means of "Search and Replace" is not a refactoring IMO); others are trivial. Ubiquitous and available everywhere? So what? How often do you find yourself editing files over a 300 baud connection on an esoteric *nix installation without a VCS? So here goes: do said editors have a justified place in a modern software development ecosystem?

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Essbase BSO Data Fragmentation Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end-user data updates, incremental data loads, many lock-and-send operations, and/or many calculations executed. If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation. See Chapter 54, Improving Essbase Performance, in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html Fragmentation is likely to occur in the following situations: read/write databases in which users are constantly updating data; databases that execute calculations around the clock; databases that frequently update and recalculate dense members; data loads that are poorly designed; databases that contain a significant number of Dynamic Calc and Store members; and databases that use an isolation level of uncommitted access with commit block set to zero. There are two types of data block fragmentation: free space tracking, which is measured using the Average Fragmentation Quotient statistic, and block order on disk, which is measured using the Average Cluster Ratio statistic. Average Fragmentation Quotient The Average Fragmentation Quotient ratio measures free space in a given database. As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and will either be appended at the end of the file or fit into another empty space that is large enough. These empty spaces take up space in the .PAG files. The higher the number, the more empty spaces you have; therefore, the bigger the .PAG file and the longer it takes to traverse the .PAG file to get to a particular record. An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space. Average Cluster Ratio Average Cluster Ratio describes the order in which the blocks actually exist in the database. An Average Cluster Ratio of 1 means all the blocks are ordered in the correct sequence, in the order of the outline. As you load and calculate data blocks, the sequence can start to fall out of order. This is because when you write to a block, it may not be possible to place it back in the exact same spot in the database where it existed before. The lower this number, the more out of order the blocks become and the more it affects performance. An Average Cluster Ratio value of 1 means no fragmentation; any value lower than 1 (e.g., 0.01032828) means the data blocks are getting further out of order relative to the outline order. Eliminating Data Block Fragmentation Both types of data block fragmentation can be removed by doing a dense restructure or an export/clear/import of the data. There are two types of dense restructure: 1. Implicit Restructures Implicit dense restructures happen when outline changes are made using the EAS Outline Editor or Dimension Build. Essbase restructures create new .PAG files, restructuring the data blocks in the .PAG files. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed by implicit restructures. 2. Explicit Restructures Explicit dense restructures happen when a database restructure is initiated manually. An explicit dense restructure is a full restructure, which comprises a dense restructure as outlined above plus the removal of empty blocks. Empty Blocks vs. Fragmentation The existence of empty blocks is not considered fragmentation. Empty blocks can be created through calc scripts or formulas. An empty block adds to the existing database block count and is included in the block counts of the database properties. There are no statistics for empty blocks. The only way to determine whether empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database, then import the exported data. If the block count decreased, the difference is the number of empty blocks that had existed in the database.

    Read the article

  • Do there exist programming languages where a variable can truly know its own name?

    - by Job
    In PHP and Python one can iterate over the local variables and, if there is only one choice whose value matches, you could say that you know the variable's name, but this does not always work. Machine code does not have variable names. C compiles to assembly and does not have any native reflection capabilities, so a variable would not know its name. (Edit: per Anton's answer, the pre-processor can know a variable's name.) Do there exist programming languages where a variable would know its own name? It gets tricky if you do something like b = a and b does not become a copy of a but a reference to the same place. EDIT: Why in the world would you want this? I can think of one example: error checking that can survive automatic refactoring. Consider this C# snippet: private void CheckEnumStr(string paramName, string paramValue) { if (paramValue != "pony" && paramValue != "horse") { string exceptionMessage = String.Format( "Unexpected value '{0}' of the parameter named '{1}'.", paramValue, paramName); throw new ArgumentException(exceptionMessage); } } ... CheckEnumStr("a", a); // Var 'a' does not know its name - this will not survive naive auto-refactoring There are other libraries provided by Microsoft and others that allow checking for errors like this (sorry, the names have escaped me). I have seen one library which, with the help of closures/lambdas, can accomplish error checking that survives refactoring, but it does not feel idiomatic. This would be one reason why I might want a language where a variable knows its name.

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to SQL Server Security

    - by pinaldave
    This blog post is inspired by Beginning SQL Joes 2 Pros: The SQL Hands-On Guide for Beginners – SQL Exam Prep Series 70-433 – Volume 1. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject - SQL SERVER – Introduction to SQL Server Security – A Primer. In that article we discussed the basic terminology of security. The article further covers the following important security concepts: Granting Permissions, Denying Permissions, Revoking Permissions. These three are the most important concepts related to security and SQL Server. There are many more things one has to learn, but without the beginner fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right. Quiz 1) If you granted Phil control of the server, but denied his ability to create databases, what would his effective permissions be? 1. Phil can do everything. 2. Phil can do nothing. 3. Phil can do everything except create databases. 2) If you granted Phil control of the server and revoked his ability to create databases, what would his effective permissions be? 1. Phil can do everything. 2. Phil can do nothing. 3. Phil can do everything except create databases. 3) You have a login named James who has Control Server permission. You want to eliminate his ability to create databases without affecting any other permissions. What SQL statement would you use? 1. ALTER LOGIN James DISABLE 2. DROP LOGIN James 3. DENY CREATE DATABASE To James 4. REVOKE CREATE DATABASE To James 5. GRANT CREATE DATABASE To James Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change your answers, you still have a chance. Solution 1) 3 2) 1 3) 3 Now let us check: compare your answers to the ones above. I am very confident you will get them correct. Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5 Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
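
    The distinction the quiz is probing can be demonstrated directly in T-SQL. A sketch, using the server-scoped CREATE ANY DATABASE permission and the hypothetical login Phil (run in master):

        GRANT CONTROL SERVER TO Phil;

        -- A DENY always wins over anything a broader grant implies:
        DENY CREATE ANY DATABASE TO Phil;    -- everything except creating databases

        -- A REVOKE only removes an explicit GRANT or DENY entry; permissions
        -- still implied by CONTROL SERVER remain, so Phil can do everything again:
        REVOKE CREATE ANY DATABASE TO Phil;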

    Read the article

  • Hadoop, NOSQL, and the Relational Model

    - by Phil Factor
    (Guest Editorial for the IT Pro/SysAdmin Newsletter) Whereas relational databases fit the world of commerce like a glove, it is useless to pretend that they are a perfect fit for all human endeavours. Although, with SQL Server, we've made great strides in indexing text, processing spatial data and processing markup, there is still a problem in dealing efficiently with large volumes of ephemeral semi-structured data. Key-value stores such as Cassandra, Project Voldemort, and Riak are of great value for ephemeral data, and seem of equal value as a data feed that provides aggregations to an RDBMS. However, the document databases such as MongoDB and CouchDB are ideal for semi-structured data for which no fixed schema exists; analytics and logging are obvious examples. NoSQL products, such as MongoDB, tackle the semi-structured data problem with panache. MongoDB is designed with a simple document-oriented data model that scales horizontally across multiple servers. It doesn't impose a schema, and relies on the application to enforce the data structure. This is another take on the old 'EAV' problem (where you don't know in advance all the attributes of a particular entity). It uses a clever replica set design that allows automatic failover, and uses journaling for data durability. It allows indexing and ad-hoc querying. However, for SQL Server users, the obvious choice for handling semi-structured data is Apache Hadoop. There will soon be an ODBC driver for Apache Hive and an add-in for Excel. Additionally, there are now two Hadoop-based connectors for SQL Server: the Apache Hadoop connector for SQL Server 2008 R2, and the SQL Server Parallel Data Warehouse (PDW) connector. We can connect to Hadoop, process the semi-structured data, and then store it in SQL Server. For one steeped in the culture of relational SQL databases, I might be expected to throw up my hands in a gesture of contempt for a technology that was, judging by the overblown journalism on the subject, about to make my own profession as archaic as the saggar maker's bottom knocker (a potter's assistant who helped the saggar maker to make the bottom of the saggar by placing clay in a metal hoop and bashing it). However, on the contrary, I find that I'm delighted with the advances made by the NoSQL databases in the past few years. The flow of ideas from the NoSQL providers will knock any trace of complacency out of the providers of relational databases and inspire them to back-fit features such as horizontal scaling, with sharding and automatic failover, into SQL-based RDBMSs. It will do the breed a power of good to benefit from all this lateral thinking.

    Read the article

  • Do MSDTC and disaster recovery go together?

    - by DevDelivery
    Our application writes to multiple SQL Server databases within a distributed transaction. The Ops guys are saying that this messes up their disaster recovery plan because, while the transactions on the live tables may commit at the same time, the log shipping on the separate databases happens at slightly different times. So in a disaster recovery situation, there will be a few partial transactions. Is there a method for maintaining separate but synced databases in DR? Or do we have to redesign around relatively independent databases (or a single database)?

    Read the article

  • How to diagnose repeated "Starting up database '<dbname>'"

    - by Richard Slater
    I have a SQL 2008 server which is predominantly used as a development server. In the last two weeks it has been having occasional "fits"; I have isolated the cause of these fits as CHECKDB being run almost continuously, with the following log information logged to the Windows Event Log (Source: MSSQLSERVER, Category: Server): Event: 1073758961, Message: Starting up database 'DBName1'. Event: 1073758961, Message: Starting up database 'DBName2'. Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required. Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required. This is repeated every 1-2 seconds until SQL Server is restarted or the offending databases are detached. I initially thought that it was a problem with the databases, so I took a backup and restored them to a SQL Express instance; all of the data is intact, and CHECKDB runs without problems. The two databases that were causing a problem last week were not being used, so I took full backups of them and detached the databases, which resolved the problem. However at 0100 GMT this morning two other, totally unrelated databases started showing the same problem. There is nothing in the event log to suggest that something happened to the server, such as a restart, and there are no messages about processes crashing or issues being detected with the storage controller. Speaking to the owner of the company, this computer has suffered from "gremlins" in the past; however, advice was taken and the motherboard was replaced and the computer rebuilt; memory and processor are the same. Stats: O/S: Windows 2008 Standard Build 6002 CPU: 2x Pentium Dual-Core E5200 @ 2.5GHz RAM: 2GB SQL: 2008 Standard 10.0.2531 Edit: someone posted then deleted a comment about AutoClose; it was turned on on the affected databases. It seems that best practice is to disable it, so I have done that with the following: EXECUTE sp_MSforeachdb 'IF (''?'' NOT IN (''master'', ''tempdb'', ''msdb'', ''model'')) EXECUTE (''ALTER DATABASE [?] SET AUTO_CLOSE OFF WITH NO_WAIT'')' I won't know if the problem recurs for some time, so I am still open to further answers.
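
    The repeated "Starting up database" messages are exactly what AUTO_CLOSE produces: the database shuts down when the last connection closes and starts up again on the next touch. A quick check for any remaining offenders (a minimal T-SQL query against the standard catalog view):

        -- Databases on this instance that still have AUTO_CLOSE enabled.
        SELECT name
        FROM sys.databases
        WHERE is_auto_close_on = 1;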

    Read the article

  • PHP vs batch file for MySQL cronjob?

    - by mysqllearner
    Hi, my server details: OS: Windows Server 2003, IIS6, Plesk 8.x installed (currently using Plesk to set the cronjob). I need your advice. I have 2 methods: Method 1: Using PHP + mysqldump, create database backup files as gzip, and then send an email with the attachment (each database is around 25 MB). Method 2: Using a batch file + mysqldump, create database backup files as gzip, and then send an email with the attachment (same; each database is around 25 MB). My questions: What's the difference between using a PHP file and a batch file for the cronjob? Which method is better in terms of backup speed and sending email, and (maybe) safety (e.g., less chance of file corruption)? If I set the cronjob to hourly, will it affect my web performance? I mean, let's say my website has 100+ users online now, and each user is making transactions against MySQL; when I perform a backup at my peak hour, will it decrease performance, like the loading speed, or make errors more likely, etc.? (sorry for my bad English) P.S: If you need my PHP and batch file code, please ask me to post it here. I didn't post it now because it's very simple, standard code.

    Read the article

  • Domino Document data compression and design compression

    - by pipalia
    I was thinking of turning this on for some large databases, not just mail files - we have databases around 8-10 GB in size as well as small databases of a couple of hundred MB. But after reading this post I am not so sure: http://www-10.lotus.com/ldd/nd85forum.nsf/4b9931b774db788c85256bf0006b5e6d/1f4e67b569720e54852576c0003cb8ac?OpenDocument Can anyone confirm whether this is true? Are there any ill effects on performance from turning this feature on, and if so, what's the difference in performance? Thanks.

    Read the article

  • PowerShell (SQLPS) LastBackupDate not changing despite having run a SQL Server backup

    - by user1666376
    I'm using PowerShell to check last backup times across all our SQL Server databases. This seems to work really well, but I've got a question. If I run this (a cut-down version of the actual script): dir SQLSERVER:\SQL\Server1\default\databases | select parent, name, lastbackupdate I get: Parent Name LastBackupDate ------ ---- -------------- [Server1] ADBA 10/09/2012 21:15:37 [Server1] ReportServer 10/09/2012 21:00:17 [Server1] ReportServerTempDB 10/09/2012 21:00:18 [Server1] db1 10/09/2012 21:15:35 If I then run a SQL backup of the Server1 default instance and run the same query, the last backup date doesn't change: PS C:\temp> dir SQLSERVER:\SQL\Server1\default\databases | select parent, name, lastbackupdate Parent Name LastBackupDate ------ ---- -------------- [Server1] ADBA 10/09/2012 21:15:37 [Server1] ReportServer 10/09/2012 21:00:17 [Server1] ReportServerTempDB 10/09/2012 21:00:18 [Server1] db1 10/09/2012 21:15:35 ...but if I open a new PowerShell window, it shows the backup I just took: PS SQLSERVER:\> dir SQLSERVER:\SQL\Server1\default\databases | select parent, name, lastbackupdate Parent Name LastBackupDate ------ ---- -------------- [server1] ADBA 12/09/2012 09:03:23 [server1] ReportServer 12/09/2012 08:48:03 [server1] ReportServerTempDB 12/09/2012 08:48:04 [server1] db1 12/09/2012 09:03:21 My guess is that this is expected behaviour, but could anybody show me where it's documented/explained - I just want to understand what's going on. This is running the SQLPS that came with SQL Server 2008, against a 2008 instance. Thanks, Matt
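
    This looks like SMO property caching: the SQLSERVER: provider reads LastBackupDate once per session and keeps the cached value until the objects are refreshed (calling Refresh() on each database object in the same session should also pick up the new date). A way to sidestep the cache entirely is to ask msdb directly; a minimal T-SQL sketch:

        -- Last full backup per database, straight from the backup history in msdb.
        SELECT d.name,
               MAX(b.backup_finish_date) AS last_full_backup
        FROM sys.databases AS d
        LEFT JOIN msdb.dbo.backupset AS b
               ON b.database_name = d.name
              AND b.type = 'D'              -- 'D' = full database backup
        GROUP BY d.name
        ORDER BY d.name;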

    Read the article
