Search Results

Search found 31483 results on 1260 pages for 'database migration'.


  • Compiz setting migration from 12.04 to 12.10

    - by Maksim
    I just migrated my Ubuntu 12.04 home folder to a freshly installed Ubuntu 12.10, and once again I have a problem with Compiz: none of my custom settings survived. How is it possible that a full copy of the home directory does not transfer the settings? The trouble seems to start with Compiz juggling numbered config folders, like .compiz-1 versus .compiz; I have no idea when it uses the -1 suffix and when it does not. I have renamed my Compiz folders in .gconf and .compiz, which did not help. How do I get my settings back, and why is this mess happening?

    Read the article

  • Using git (or any version control) regarding migration from one language to another [on hold]

    - by Max Benin
    I'm polishing an old project that I released some years ago; the main goals are to rearrange some folder structures and port the entire codebase from ActionScript to Haxe. All the game features, assets and design will remain the same. I have some doubts about versioning the project in this circumstance. Assuming that the only drastic change is the code migration, is it correct to keep the new project's changes in the same repository? I was thinking of tagging it as something like V1.1, or branching the entire project, but I'm afraid I would deviate from common versioning patterns. What is the best-practice way to handle this in version control? Thanks.

    Read the article

  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that holds real estate MLS (Multiple Listing Service) data. Currently, I have a single table that holds all the listing attributes (price, address, sqft, etc.). There are several different property types (residential, commercial, rental, income, land, etc.), and the property types share a majority of the attributes, but a few are unique to each property type. My concern is that the shared attributes run to more than 250 fields, which seems like too many to have in a single table. My thought was to break them out into an EAV (Entity-Attribute-Value) format, but I've read many bad things about that, and it would make running queries a real pain, as any of the 250 fields could be searched on. If I went that route, I'd literally have to pull all the data out of the EAV table grouped by listing id, merge it on the application side, then run my query against the in-memory object collection. That does not seem very efficient either. I am looking for ideas or recommendations on how to proceed. Perhaps the 250+ field table is the only way. Just as a note, I'm using SQL Server 2012, .NET 4.5 with Entity Framework 5 and C#, and data is passed to an asp.net web application via a WCF service. Thanks in advance.
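
    A third option sits between the wide table and EAV: keep the shared attributes in one supertype table and give each property type its own subtype table for its unique fields (class table inheritance, which Entity Framework 5 can map as table-per-type). A minimal hedged sketch in T-SQL; all names here are illustrative, not from the question:

        -- shared attributes (supertype)
        CREATE TABLE Listing (
            ListingId    int IDENTITY PRIMARY KEY,
            PropertyType varchar(20) NOT NULL,     -- discriminator
            Price        decimal(12,2),
            Address      varchar(200),
            SqFt         int
            -- ... remaining shared fields
        );

        -- residential-only attributes (subtype, 1:1 with Listing)
        CREATE TABLE ResidentialListing (
            ListingId int PRIMARY KEY REFERENCES Listing(ListingId),
            Bedrooms  int,
            Bathrooms decimal(3,1)
        );

    Searches on shared fields hit only Listing; type-specific filters add one predictable join, which avoids the per-attribute self-joins that make EAV queries painful.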

    Read the article

  • Using an ORM with a database that has no defined relationships?

    - by Ahmad
    Consider a database (MSSQL 2005) that consists of 100+ tables with primary keys defined to a certain degree. There are 'relationships' between tables; however, these are not enforced with foreign key constraints. Consider the following simplified example of the typical tables I am dealing with. There are clear relations between the User, City and Province tables, but the key issues are the inconsistent data types and naming conventions:

        User:     UserRowId [int] PK, Name [varchar(50)], CityId [smallint], ProvinceRowId [bigint]
        City:     CityRowId [bigint] PK, CityDescription [varchar(100)]
        Province: ProvinceId [int] PK, ProvinceDesc [varchar(50)]

    I am considering a rewrite of the application (in ASP.NET MVC) that uses this data source, similar in design to the MVC Storefront. However, I am going through a proof-of-concept phase, and this is one of the stumbling blocks I have come across. What are my options in terms of ORM choice that can be easily used, and why? Should I even be considering an ORM? (I ask because most explanations and tutorials work with existing databases that are relatively cleanly designed, or newly created ones, compared to mine. I am thus having a very hard time finding a way forward with this problem.) There is a huge amount of existing SQL queries; would a data mapper (e.g. IBatis.net) be more suitable, since we could easily modify them and reuse the investment already made? I have found a question on SO which indicates to me that an ORM can be used; however, I get the impression that this is a question of mapping? Note: at the moment the object model is not clearly defined, as it was non-existent. The existing system did almost everything in SQL or consisted of overly complicated and numerous queries to complete functionality. I am pretty much a noob with zero experience of ORMs and MVC, so this is an awesome learning curve I am on.
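
    Whichever mapper wins, most ORMs cope far better once the key datatypes agree and the implicit relationships are declared in the schema. A hedged sketch of the kind of one-off cleanup involved, using the columns from the question (T-SQL; assumes the existing data already fits the new types):

        -- align the datatypes of the implicit keys
        ALTER TABLE [User] ALTER COLUMN CityId bigint;        -- match City.CityRowId
        ALTER TABLE [User] ALTER COLUMN ProvinceRowId int;    -- match Province.ProvinceId

        -- then declare the relationships so an ORM can discover them
        ALTER TABLE [User] ADD CONSTRAINT FK_User_City
            FOREIGN KEY (CityId) REFERENCES City (CityRowId);
        ALTER TABLE [User] ADD CONSTRAINT FK_User_Province
            FOREIGN KEY (ProvinceRowId) REFERENCES Province (ProvinceId);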

    Read the article

  • Is it a good idea to use an integer column for storing US ZIP codes in a database?

    - by Yadyn
    At first glance, it would appear I have two basic choices for storing ZIP codes in a database table: text (probably most common), i.e. char(5), or varchar(9) to support the +4 extension; or numeric, i.e. a 32-bit integer. Both would satisfy the requirements of the data, if we assume there are no international concerns. In the past we've generally just gone the text route, but I was wondering if anyone does the opposite. Just from a brief comparison, it looks like the integer method has two clear advantages: (1) It is, by its nature, automatically limited to numerics only (whereas, without validation, the text style could store letters and such, which to my knowledge are never valid in a ZIP code). This doesn't mean we could/would/should forgo validating user input as normal, though! (2) It takes less space, being 4 bytes (which should be plenty even for 9-digit ZIP codes) instead of 5 or 9 bytes. Also, it seems like it wouldn't hurt display output much. It is trivial to slap a ToString() on a numeric value, use simple string manipulation to insert a hyphen or space for the +4 extension, and use string formatting to restore leading zeroes. Is there anything that would discourage using int as a datatype for US-only ZIP codes?
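
    The round trip the poster describes does work; a hedged T-SQL sketch of both halves of the trade-off (names are illustrative):

        -- integer storage: leading zeroes live only in the formatting layer
        DECLARE @zip int = 2134;                               -- Boston-area ZIP 02134
        SELECT RIGHT('00000' + CAST(@zip AS varchar(5)), 5);   -- restores '02134'

        -- text storage, constrained so letters can never sneak in
        CREATE TABLE Address (
            Zip char(5) NOT NULL
                CHECK (Zip NOT LIKE '%[^0-9]%')   -- digits only
        );

    The usual counterargument to int is exactly that formatting layer: every consumer of the column has to remember to re-pad, or 02134 quietly becomes 2134.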

    Read the article

  • Why do we need Audit Columns in Database Tables?

    - by Software Enthusiastic
    Hi, I have seen many database designs that have the following audit columns on all tables: Created By, Created DateTime, Updated By, Updated DateTime. From one perspective, I see tables in the following way:

        Entity tables: good candidates for audit columns.
        Reference tables: audit columns may or may not be required; in some cases last-update information is not needed at all, because the records are never modified.
        Reference data tables (country names, entity states, etc.): audit columns may not be required, because this information is created only at system installation time and never changes.

    I have seen many designers blindly put all the audit columns on all tables. Is this practice good, and if so, what could be the reason? To me it seems illogical, and it is difficult for me to figure out why they design their DBs this way. I am not saying they are wrong or right; I just want to know WHY. You can also suggest an alternative auditing pattern or solution if one is available. Thanks and regards.
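
    For reference, the pattern being questioned usually looks something like this minimal T-SQL sketch (table and column names are illustrative); the defaults mean the create-side columns cost callers nothing:

        CREATE TABLE Customer (
            CustomerId int IDENTITY PRIMARY KEY,
            Name       varchar(100) NOT NULL,
            -- audit columns
            CreatedBy  varchar(128) NOT NULL DEFAULT SUSER_SNAME(),
            CreatedAt  datetime     NOT NULL DEFAULT GETDATE(),
            UpdatedBy  varchar(128) NULL,   -- set by the app or an UPDATE trigger
            UpdatedAt  datetime     NULL
        );

    The common alternative, when per-row columns feel wasteful, is a separate audit/history table populated by triggers, which keeps the base tables clean at the cost of an extra write.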

    Read the article

  • Suggested Web Application Framework and Database for Enterprise, “Big-Data” App?

    - by willOEM
    I have a web application that I have been developing for a small group within my company over the past few years, using Pipeline Pilot (plus jQuery and Python scripting) for web development and back-end computation, and Oracle 10g for my RDBMS. Users upload experimental genomic data, which is parsed into a database and made available for querying, transformation, and reporting. Experimental data sets are large and have many layers of metadata. A given experimental data record might have a foreign key relationship with a table that describes that data point's assay. Assays can cover multiple genes, which can have multiple transcripts, which can have multiple mutations, which can affect multiple signaling pathways, etc. Users need to approach this data from any point in those layers of metadata. Since all data sets for a given data type can run over a billion rows, this results in some large, dynamic queries that are hard to predict. New data sets are added on a weekly basis (~1GB per set). Experimental data is never updated, but the associated metadata can be updated weekly for a few records and yearly for most others. For every data set insert the system sees, there will be between 10 and 100 selects run against it and its associated data. It is okay for updates and inserts to run slow, so long as queries run quick and are as up to date as possible. The application continues to grow in size and scope and is already starting to run slower than I like. I am worried that we have just about outgrown Pipeline Pilot, and perhaps Oracle (as the sole database). Would a NoSQL database or an OLAP system be appropriate here? What web application frameworks work well with systems like this? I'd like the solution to be something scalable, portable and supportable X years down the road. Here is the current state of the application:

        Web server / data processing: Pipeline Pilot on Windows Server + IIS
        Database: Oracle 10g, ~1TB of data, ~180 tables, several with billion-plus rows
        Network storage: Isilon, ~50TB of low-priority raw data

    Read the article

  • How to filter a MySQL database with user input on a website and then spit the filtered table back to the website? [migrated]

    - by Luke
    I've been researching this on Google for literally three weeks, racking my brain and still not quite finding anything. I can't believe this is so elusive. (I'm a complete beginner, so if my terminology sounds stupid, that's why.) I have a database in MySQL/phpMyAdmin on my web host. I'm trying to create a front end that will allow a user to specify criteria for querying the database in a way that doesn't require them to know SQL: basically just combo boxes and checkboxes on a form. This form should then submit a query to the database and show the filtered tables. This is how the SQL looks in Microsoft Access:

        PARAMETERS TEXTINPUT1 Text ( 255 ), NUMBERINPUT1 IEEEDouble;
            // pops up a list of parameters for the user to input
        SELECT DISTINCT Table1.Column1, Table1.Column2, Table1.Column3
            // selects only the unique rows in these three columns
        FROM Table1
            // the table where this query is happening
        WHERE (((Table1.Column1) Like [TEXTINPUT1])
          AND ((Table1.Column2) <= [NUMBERINPUT1])
          AND ((Table1.Column3) >= [NUMBERINPUT1]));
            // the criteria for the filter: it compares the user-input parameters
            // to the data in the rows and only selects matches according to the
            // LIKE, less-than-or-equal and greater-than-or-equal operators

    What I don't get: WHAT IN THE WORLD AM I SUPPOSED TO USE (that isn't totally hard)!? I've tried Google Fusion Tables: they don't filter numerical data or rows with empty cells correctly, and can't relate tables. I've tried DataTables.net: it can't filter numerical data correctly and can't use SQL without a bunch of in-depth knowledge, and I'm not even sure it can with that. I've looked into using jQuery with Google Spreadsheets, which doesn't work at all either. I have no idea how I'm supposed to build a front end for my database. Every place that looks promising (like Zoho Creator) is asking for money and is far too simplified to handle the LIKE criteria or the SELECT DISTINCT stuff.
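
    Whatever renders the form, the server-side piece is ordinarily a parameterized query, so the user's combo-box values never get pasted into the SQL string. A hedged sketch of the same filter in plain MySQL prepared-statement syntax (table and column names carried over from the Access version; the sample values are invented):

        PREPARE filter_stmt FROM
            'SELECT DISTINCT Column1, Column2, Column3
             FROM Table1
             WHERE Column1 LIKE ?
               AND Column2 <= ?
               AND Column3 >= ?';

        SET @text_input = '%widget%', @number_input = 42;
        EXECUTE filter_stmt USING @text_input, @number_input, @number_input;
        DEALLOCATE PREPARE filter_stmt;

    In practice the form handler (PHP, for instance) would bind the posted values the same way through its database API rather than issuing PREPARE by hand.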

    Read the article

  • How to handle real-time data from a database perspective?

    - by balexandre
    I have an idea in mind, but the database side of it still confuses me. Imagine that I want to show real-time data: using one of the latest browser technologies (WebSockets, with fallbacks for older browsers), it is very easy to show every observer (user browser) what everyone else is doing. Remy Sharp has an example of how simple this is. But I still don't get the database part. Imagine (using Remy's Tron game) that I want to save the path of each connected user in a database, and a client wants to watch what is going on with a 5-second delay; he should see not only the single moment 5 seconds ago but the stream continuing from that point onward. How can I query a DB like that?

        SELECT x, y FROM run WHERE time >= DATEADD(second, -5, rundate);

    is not the recommended path, right? And pulling this every x seconds is not a real data feed, correct? If someone can help me understand the database point of view, I would greatly appreciate it.
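
    One common answer is to treat the table as an append-only log and replay it by key instead of by wall-clock arithmetic: each poll fetches everything after the last row the client saw, capped at the delayed "now". A hedged T-SQL sketch (the schema is invented for illustration):

        -- append-only movement log
        CREATE TABLE run (
            id      bigint IDENTITY PRIMARY KEY,
            userid  int      NOT NULL,
            x       int      NOT NULL,
            y       int      NOT NULL,
            rundate datetime NOT NULL DEFAULT GETDATE()
        );

        -- per poll: replay rows after the client's last-seen id,
        -- up to the 5-second-delayed present
        SELECT id, userid, x, y
        FROM run
        WHERE id > @last_seen_id
          AND rundate <= DATEADD(second, -5, GETDATE())
        ORDER BY id;

    The client keeps the highest id it has received and passes it back on the next poll, so no row is missed or repeated regardless of how uneven the polling interval is.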

    Read the article

  • What is the disadvantage of using an abstract class for database connectivity in Zend Framework 2 instead of the service locator?

    - by arslaan ejaz
    Suppose I use the database by creating an adapter with drivers, initializing it in an abstract class, and extending that abstract class in whichever model needs it; I can then issue simple query statements. Like this:

        namespace My-Model\Model\DB;

        abstract class MysqliDB
        {
            protected $adapter;

            public function __construct()
            {
                $this->adapter = new \Zend\Db\Adapter\Adapter(array(
                    'driver'   => 'Mysqli',
                    'database' => 'my-database',
                    'username' => 'root',
                    'password' => ''
                ));
            }
        }

    And I use the abstract database class in my models like this:

        class States extends DB\MysqliDB
        {
            protected $states = array();

            public function __construct()
            {
                parent::__construct();
            }

            public function select_all_states()
            {
                $data = $this->adapter->query('select * from states');
                foreach ($data->execute() as $row) {
                    $this->states[] = $row;
                }
                return $this->states;
            }
        }

    I am new to Zend Framework; before this I had experience with Yii and CodeIgniter. I like the object orientation in Zend, so I want to use it like this, and I don't want to go through the service locator, with something like this in the Module class:

        public function getServiceConfig()
        {
            return array(
                'factories' => array(
                    'addserver-mysqli' => new Model\MyAdapterFactory('addserver-mysqli'),
                    'loginDB' => function ($sm) {
                        $adapter = $sm->get('addserver-mysqli');
                        return new LoginDB($adapter);
                    }
                )
            );
        }

    Am I OK with this approach?

    Read the article

  • Android SQLite database locale, locking, and version

    - by gdoten
    In some books and online I see these method calls being made after a database is created:

        db.setLocale(Locale.getDefault());
        db.setLockingEnabled(true);
        db.setVersion(DB_VERSION);

    Why is this done? As far as I can tell, after creating a new database, the system adds a table named android_metadata with one field named locale, and that table has one row with the locale field set to "en_US". Now I assume the column has that value because I am using a U.S. phone, and if I were using a phone from a different region then the locale field would be set appropriately. Can anyone confirm this? I'm guessing that the setLocale method would only be useful in the case that you install a pre-built database onto a phone and then want to change the locale to match the phone's locale. Sound right? The documentation for setLockingEnabled says it defaults to true, so there's no need to make that call, right? Lastly, what's with the call to setVersion? I can't find a table that contains this information, so I've been assuming that the database file itself stores the version number somewhere. So when I create a database, which requires you to have already specified the version number in the call to the SQLiteOpenHelper constructor, there's no point in calling setVersion. Again, perhaps this method exists for the case of installing a pre-built database to a device and then wishing to change the database's version (though I can't think of when doing this would make sense). Thanks for any insight!
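
    For what it's worth, SQLite stores the version number in the database file header rather than in any table; it is exposed through a pragma, which is presumably what setVersion() and getVersion() wrap. A short illustration in SQLite SQL:

        PRAGMA user_version;        -- read the schema version stored in the file header
        PRAGMA user_version = 2;    -- set it, which is what setVersion(2) amounts to

    That would explain why no version table is visible alongside android_metadata.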

    Read the article

  • using a database and deploying the application

    - by evan
    I have a WPF application that stores a large amount of information in XML files, and as the user works with the application they add more information to those files; it's basically using the XML files as a database. Since the XML files have gotten quite large over the life of the program, and since I've been thinking about putting the data on a website, I've been looking into how to move all the information into a SQL database. I've used SQL databases with web applications (PHP, Ruby, and ASP.NET) but never with a desktop application. Ideally I'd like to keep all the information in one database file and distribute it along with the application, without requiring the user to connect to a remote database (so they don't need an internet connection, though eventually it would be nice if it could compare the local file's version with one online somewhere and update if necessary) and without making them install a local database server on their computer. Is this possible? I'd also like to use LINQ with any new database solution, so that switching to a database doesn't force too many changes (I read the XML with LINQ). I'm sure this question has been asked before and that there are already some good tutorials on the subject, but I just can't find them.

    Read the article

  • Managing My Database in Source Control

    - by Jason
    As I am working on a new database project (within VS2008), and as I have never developed a database from scratch, I immediately began looking into how to manage a database within source control (in this case, Subversion). I found some information on SO, including this post: Keeping development databases in multiple environments in sync. One of the answers in particular pointed to a number of links, all of which had good, useful information. I was reading a series of posts by K. Scott Allen which describe how he manages database change. From my reading (and please pardon the noobishness of my question), it seems as though the database itself is never checked into the repository. Rather, scripts that can build the database, along with test data (which is also populated from scripts), are checked into the repository. Ultimately this means that when a developer tests his or her app, these scripts, which are part of the build process, are run. This ensures that the database is up to date, and the build is also run locally on every developer's machine. This makes sense to me (if I am indeed reading that correctly); however, if I am missing something, I would appreciate correction or additional guidance. In addition, another question I wanted to ask: does this also mean that I should NOT check in the mdf or ldf files that are created from Visual Studio? Thanks for any help and additional insight. Always appreciated.
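
    To make the pattern concrete, the checked-in artifacts are typically rerunnable scripts of roughly this shape rather than the binary database files; a minimal hedged sketch (object names invented):

        -- 001_create_schema.sql: checked into Subversion, safe to run on every build
        IF NOT EXISTS (SELECT 1 FROM sys.tables WHERE name = 'Customer')
            CREATE TABLE Customer (
                CustomerId int IDENTITY PRIMARY KEY,
                Name       varchar(100) NOT NULL
            );

        -- 002_seed_test_data.sql: test data is generated from scripts too
        IF NOT EXISTS (SELECT 1 FROM Customer)
            INSERT INTO Customer (Name) VALUES ('Test Customer A');

    Since the mdf/ldf files can always be rebuilt by running the scripts, they are exactly the kind of derived binary that stays out of the repository.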

    Read the article

  • Convert SQLite SQL dump file to PostgreSQL

    - by DevX
    I've been doing development using a SQLite database, with PostgreSQL in production. I just updated my local database with a huge amount of data and need to transfer a specific table to the production database. SQLite outputs a table dump in the following format:

        BEGIN TRANSACTION;
        CREATE TABLE "courses_school" (
            "id" integer PRIMARY KEY,
            "department_count" integer NOT NULL DEFAULT 0,
            "the_id" integer UNIQUE,
            "school_name" varchar(150),
            "slug" varchar(50)
        );
        INSERT INTO "courses_school" VALUES(1,168,213,'TEST Name A',NULL);
        INSERT INTO "courses_school" VALUES(2,0,656,'TEST Name B',NULL);
        ....
        COMMIT;

    How do I convert the above into a PostgreSQL-compatible dump file that I can import into my production server?
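
    For a dump of this shape, the edits are largely mechanical; a hedged sketch of what the PostgreSQL-compatible version might look like, assuming id should keep auto-increment behavior on the production side:

        BEGIN;
        CREATE TABLE courses_school (
            id               serial PRIMARY KEY,
            department_count integer NOT NULL DEFAULT 0,
            the_id           integer UNIQUE,
            school_name      varchar(150),
            slug             varchar(50)
        );
        INSERT INTO courses_school VALUES (1, 168, 213, 'TEST Name A', NULL);
        INSERT INTO courses_school VALUES (2, 0, 656, 'TEST Name B', NULL);
        -- after inserting explicit ids, realign the sequence behind serial
        SELECT setval(pg_get_serial_sequence('courses_school', 'id'),
                      (SELECT max(id) FROM courses_school));
        COMMIT;

    Since the table already exists in production, in practice only the INSERT lines (plus the setval) would be kept.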

    Read the article

  • Firebird 2.5 Database Corrupt

    - by BrendanH
    We have an issue where a database hangs the server when a backup is performed (it hangs on a specific table), when selecting * or count(1) from that table, or when viewing data related to the table (FKs, etc.). We can browse the table up to a certain point (using IBExpert), but after about 2,900 records the machine just spikes and hangs. Performing a gfix -m does not work, and validation reports back "Record level errors = 4" no matter how many times we run gfix -m, -v, etc. The firebird.log file reports these types of messages:

        Relation has 91631 orphan backversions (9214273 in use) in table BINS (137)
            (which is apparently just a warning)
        Unable to complete network request to host "MHPLZA1". Error reading data from the connection.
        INET/inet_error: read errno = 10054
        SERVER/process_packet: broken port, server exiting
        Shutting down the server with 1 active connection(s) to 1 database(s), 0 active service(s)
            (if we leave the backup running while the server hangs, it eventually logs this)

    The setup: the table in question has about 7,000 records; Firebird 2.5 Classic Server x64; Windows Server 2008; a virtual machine (VMware) running on a massive server. (Does anyone have issues with VMs and Firebird?) We have the same setup running fine on other servers (however, they are not virtual machines). Is there any way to pinpoint the issue and/or the cause?

    Read the article

  • Computer specs for a large database

    - by SpeksETC
    What sort of computer specs (CPU, RAM, disk speed) should I use for running queries on a database of 200+ million records? The queries are for a research project, so there is only one "user" and only one query will run at a time. I tried it on my own laptop running SQL Server, with an i3 processor, 2GB RAM and a 5400 RPM disk, and a simple query didn't finish even after 8+ hours. I have the option of connecting an SSD via eSATA and upgrading to 4GB RAM, but I'm not sure if this will be enough... Thanks! Edit: The database is about 25GB and the indexes are not set up properly. When I tried to add an index, I let it run for about 8 hours and it still hadn't finished, so I gave up. Should I have more patience :)? In general, the queries will run once in a while, and it's OK even if one takes a couple of hours to complete... Also, the queries will produce roughly 10 million records, which I need to process using Stata/MATLAB; I'm concerned that my current laptop is not strong enough, but I'm unsure where the bottleneck is...
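
    For a single-user workload like this, the first lever is usually an index rather than hardware: without one, every query scans all 200+ million rows from the slow disk. A hedged T-SQL sketch (table and column names invented for illustration):

        -- build once (slow), then every query that filters on EventDate
        -- becomes a seek instead of a full scan
        CREATE INDEX IX_Records_EventDate
            ON dbo.Records (EventDate)
            INCLUDE (CustomerId, Amount);   -- covers the selected columns too

    The 8-hour index build points at the same bottleneck; more RAM and the eSATA SSD would mostly help by making that one-time build and any residual scans tolerable.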

    Read the article

  • Error code 1005 (errno: 121) upon create table while restoring MySQL database from a dump

    - by Jonathan
    I have a Linux prod machine and a Win7 64-bit dev machine. My workflow includes dumping the production MySQL database on the Linux machine and restoring it in my local MySQL database on the Windows machine (using SQLyog). This worked fine for a long time. Following some trouble, I formatted and reinstalled my Windows dev machine. Since then I'm unable to restore the db on it. I keep receiving the following error:

        Query: CREATE TABLE `auth_group` (
            `id` int(11) NOT NULL auto_increment,
            `name` varchar(80) collate utf8_unicode_ci NOT NULL,
            PRIMARY KEY (`id`),
            UNIQUE KEY `name` (`name`)
        ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
        Error occured at: 2010-06-26 17:16:14
        Line no.: 30
        Error Code: 1005 - Can't create table 'ap_site.auth_group' (errno: 121)

    Notice that this is the first CREATE TABLE statement in the SQL dump file. The error occurs both on MySQL Community Server 5.1.41 and 5.1.48, and with SQLyog Community 8.0.4 and 8.5.1. I really don't know what's different in my configuration between before the reinstall and now, or why it has this effect. Restoring from an SQL dump is something I need to keep doing, so I need a permanent fix and not a tailored workaround.
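
    For context, errno 121 is InnoDB's "Duplicate key on write or update" (what perror 121 prints), and on CREATE TABLE it usually means an object with a clashing name already exists in the server's InnoDB data dictionary, often a leftover from an earlier import attempt. A hedged diagnostic sketch:

        -- look for leftovers with the same table or constraint name
        SELECT CONSTRAINT_SCHEMA, CONSTRAINT_NAME, TABLE_NAME
        FROM information_schema.TABLE_CONSTRAINTS
        WHERE TABLE_NAME = 'auth_group';

        -- if the dump recreates the whole schema anyway, dropping and
        -- recreating it first clears any stale dictionary entries
        DROP DATABASE IF EXISTS ap_site;
        CREATE DATABASE ap_site DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;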

    Read the article

  • Database mirroring login failure attempts on mirror server

    - by Chandan
    I have configured database mirroring between two servers about 40 miles apart. Server specification (the same for principal, mirror and witness): SQL Server 2008, Standard Edition, 64-bit. The configuration is high safety with automatic failover. Initially we tested our .NET web application on both the principal and the mirror and made sure that the login is not orphaned. Things generally run fine, but sometimes on the mirror server I see failed login attempts:

        Login failed for user 'd0main\user'. Reason: Failed to open the explicitly specified database. [CLIENT: xx.xx.x.x]
        Error: 18456, Severity: 14, State: 38.

    This error appears 3-4 times a day, but not more than that. My question to the experts: if the principal is alive, why does the application try to connect to the mirror? The default timeout for a .NET web page is 30 seconds, so is it possible that the application tries to connect to the principal and, after 30 seconds, assumes it is dead even though it is alive, and thus tries to open a connection to the mirror, where it fails? Please help me with this problem.
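
    One way to rule roles in or out during those windows is to log what each partner believes its role is; a hedged sketch using the standard mirroring DMV:

        -- run periodically on both partners
        SELECT DB_NAME(database_id) AS database_name,
               mirroring_role_desc,
               mirroring_state_desc
        FROM sys.database_mirroring
        WHERE mirroring_guid IS NOT NULL;

    If the mirror records the failed logins while its role is still MIRROR and its state SYNCHRONIZED, the attempts are more likely client-side failover retries (the connection string's Failover Partner) than an actual role change.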

    Read the article

  • Sorting DB tables from least dependent to most dependent

    - by Sergey Mikhanov
    Hi community, I'm performing a data migration, and the database I'm using only allows exporting and importing each table separately. In such a setup, importing becomes a problem, since the order in which tables are imported matters (you have to import referenced tables before referencing ones). Is there any external tool that can list the database tables sorted from least dependent to most dependent? Thanks in advance.
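
    Short of an external tool, the order can often be computed from the catalog itself: rank each table by the longest chain of tables it references, then import in ascending rank. A hedged T-SQL sketch (assumes the foreign-key graph has no cycles or self-references):

        WITH deps AS (
            -- level 0: tables that reference nothing
            SELECT t.name AS table_name, 0 AS lvl
            FROM sys.tables t
            WHERE NOT EXISTS (SELECT 1 FROM sys.foreign_keys fk
                              WHERE fk.parent_object_id = t.object_id)
            UNION ALL
            -- a referencing table sits one level above what it references
            SELECT OBJECT_NAME(fk.parent_object_id), d.lvl + 1
            FROM sys.foreign_keys fk
            JOIN deps d ON OBJECT_NAME(fk.referenced_object_id) = d.table_name
        )
        SELECT table_name, MAX(lvl) AS import_order
        FROM deps
        GROUP BY table_name
        ORDER BY import_order;   -- import tables in this order

    The same idea works against information_schema on most other engines.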

    Read the article

  • Smartest way to import massive datasets into a Rails application?

    - by williamjones
    I've got multiple massive (multi-gigabyte) datasets I need to import into a Rails app. The datasets are currently each in their own database on my development machine, and I need to read from them and create rows in tables in my Rails database based on the information they contain. The tables in my Rails database will not be exactly the same as the tables in the source databases. What's the smartest way to go about this? I was thinking migrations, but I'm not exactly sure how to connect a migration to the other databases, and even if that is possible, is it going to be ridiculously slow?
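
    If the source databases and the Rails database live on the same server, the copy can stay inside the database engine instead of instantiating millions of ActiveRecord objects. A hedged sketch of a cross-schema INSERT ... SELECT in MySQL syntax (all schema, table and column names invented), which a migration could issue via execute:

        -- reshape while copying: the target table need not mirror the source
        INSERT INTO railsapp_db.measurements (sample_code, value, imported_at)
        SELECT s.code, s.reading, NOW()
        FROM legacy_db.raw_samples AS s
        WHERE s.reading IS NOT NULL;

    Run this way, a multi-gigabyte copy is one set-based statement per table rather than one round trip per row.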

    Read the article

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the desired changes. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again (assuming I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (As far as I can tell, BULK INSERT can only pull data from a file.) Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kind of slow. SSIS/BULK INSERT dumps the data into the table without regard to indexes, logging and whatnot, so it's faster. (Of course, creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO ... SELECT I tried in my first attempt.) Edit 2: The schema changes include (but are not limited to) the following:

        1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for both new tables. Once the migration is over, one of the tables will have a one-to-many relationship to the other.
        2. Moving columns from one table to another.
        3. Deleting some cross-reference tables that only cross-referenced 1-to-1. Instead, the reference will be a foreign key in one of the two tables.
        4. Some new columns will be created with default values.
        5. Some tables aren't changing at all, but I have to copy them over due to the "put it all in a new DB" request.
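
    The usual server-side equivalent is IDENTITY_INSERT wrapped around an INSERT ... SELECT, optionally with a TABLOCK hint, which on newer versions of SQL Server (under the simple or bulk-logged recovery model, into an empty heap) can qualify for minimal logging. A hedged sketch with invented table names:

        -- stays on the server: no round trip through a file or another machine
        SET IDENTITY_INSERT dbo.NewTable ON;

        INSERT INTO dbo.NewTable WITH (TABLOCK)   -- hint toward minimal logging
            (Id, Name, CreatedAt)
        SELECT Id, Name, CreatedAt
        FROM dbo.OldTable;

        SET IDENTITY_INSERT dbo.NewTable OFF;

    Building the clustered index after the load, as the question already plans, fits this pattern.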

    Read the article

  • Migrate/update Core Data app without erasing user data!

    - by treasure
    Hello, I have a very complicated problem that I would like to share, and maybe someone can answer it for me. Before I start, I have to say that I am very new to this. I have a Core Data iPhone app (much like the Recipes sample app) that uses a pre-populated SQL database. The user can add and edit his own data, but the default data cannot be deleted; the user's data are ALL saved in the same SQL database. QUESTION: what do I have to do to update some (not all) of the default data stored in the SQL database without "touching" the user's data? (The model will stay the same: no new entities, etc.) If the user uninstalls the app and then reinstalls the new version, everything will be OK, but I don't want to do that, obviously. Can someone PLEASE help, at the coding level?

    Read the article

  • SQL SERVER – Difference Between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE

    - by pinaldave
    Today we are going to discuss something very simple but quite commonly confused: two options of ALTER DATABASE. The first one is ALTER DATABASE ... ROLLBACK IMMEDIATE and the second one is WITH NO_WAIT. Many people think they are the same, or are not sure of the difference between the two. Before we continue our explanation, let us go through the explanation given by Books Online:

        ROLLBACK AFTER integer [SECONDS] | ROLLBACK IMMEDIATE
            Specifies whether to roll back after a specified number of seconds or immediately.
        NO_WAIT
            Specifies that if the requested database state or option change cannot complete immediately, without waiting for transactions to commit or roll back on their own, the request will fail.

    If you have understood the difference by now, there is no need to proceed further. If you are still confused, continue with the rest of the post. There is one big difference between ROLLBACK and NO_WAIT: when there is an incomplete transaction, ALTER DATABASE ... ROLLBACK rolls back that incomplete transaction immediately, whereas ALTER DATABASE ... NO_WAIT terminates and rolls back the ALTER DATABASE ... NO_WAIT statement itself. I think it can be clearly explained with the help of the following examples.

    Option 1: ALTER DATABASE ... ROLLBACK

        -- Connection 1: simulate a long-running operation using WAITFOR DELAY
        WAITFOR DELAY '1:00:00'

        -- Connection 2
        ALTER DATABASE TestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    Option 2: ALTER DATABASE ... NO_WAIT

        -- Connection 1: simulate a long-running operation using WAITFOR DELAY
        WAITFOR DELAY '1:00:00'

        -- Connection 2
        ALTER DATABASE TestDb SET SINGLE_USER WITH NO_WAIT;

    Let me know if this example was simple enough.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Agile Database Techniques: Effective Strategies for the Agile Software Developer – book review

    - by DigiMortal
    Agile development expects a mind shift, and developers are not the only ones who must be agile. Every chain is as strong as its weakest link, and the same goes for development teams. Agile Database Techniques: Effective Strategies for the Agile Software Developer by Scott W. Ambler is a book that calls on data professionals, too, to be part of agile development. Often DBAs are in a situation where they are not part of application development, and later they have to live with a large set of applications that all use databases in different ways. Only some of those applications are unproblematic when you look at what the database server has to do to serve them; I have seen many applications that abuse database servers because the developers have no clue what is going on in the database (~3K queries to the database per web-application request; have you seen something like this? I have...). Agile Database Techniques covers some object and database design technologies and gives development teams suggestions about the topics where they need help or assistance from DBAs. The book is also good reading for DBAs, who are usually not very strong in object technologies; you can take it as a bridge between these two worlds. I think teams that build object applications on top of databases should buy this book and try out Ambler's suggestions on at least one or two projects.

    Table of contents:
        Foreword by Jon Kern
        Foreword by Douglas K. Barry
        Acknowledgments
        Introduction
        About the Author
        Part One: Setting the Foundation
            Chapter 1: The Agile Data Method
            Chapter 2: From Use Cases to Databases — Real-World UML
            Chapter 3: Data Modeling 101
            Chapter 4: Data Normalization
            Chapter 5: Class Normalization
            Chapter 6: Relational Database Technology, Like It or Not
            Chapter 7: The Object-Relational Impedance Mismatch
            Chapter 8: Legacy Databases — Everything You Need to Know But Are Afraid to Deal With
        Part Two: Evolutionary Database Development
            Chapter 9: Vive L'Évolution
            Chapter 10: Agile Model-Driven Development (AMDD)
            Chapter 11: Test-Driven Development (TDD)
            Chapter 12: Database Refactoring
            Chapter 13: Database Encapsulation Strategies
            Chapter 14: Mapping Objects to Relational Databases
            Chapter 15: Performance Tuning
            Chapter 16: Tools for Evolutionary Database Development
        Part Three: Practical Data-Oriented Development Techniques
            Chapter 17: Implementing Concurrency Control
            Chapter 18: Finding Objects in Relational Databases
            Chapter 19: Implementing Referential Integrity and Shared Business Logic
            Chapter 20: Implementing Security Access Control
            Chapter 21: Implementing Reports
            Chapter 22: Realistic XML
        Part Four: Adopting Agile Database Techniques
            Chapter 23: How You Can Become Agile
            Chapter 24: Bringing Agility into Your Organization
        Appendix: Database Refactoring Catalog
        References and Suggested Reading
        Index

    Read the article
