Search Results

Search found 31022 results on 1241 pages for 'compact database'.

Page 24/1241 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • Database design - table relationship question

    - by iama
    I am designing the schema for a simple quiz application. It has two tables: "Question" and "Answer Choices". The Question table has 'question ID', 'question text' and 'answer ID' columns. The "Answer Choices" table has 'question ID', 'answer ID' and 'answer text' columns. With this simple schema it is obvious that a question can have multiple answer choices, hence the need for the Answer Choices table. However, a question can have only one correct answer, hence the 'answer ID' column in the Question table. Unfortunately, that column creates the illusion that there can be multiple questions for a single answer, which is not correct. The alternative that eliminates this illusion is a separate table just for the correct answer, with only two columns (question ID and answer ID) and a 1-to-1 relationship to the Question table, but I think that is redundant. Any recommendation on how best to design this while enforcing the rule that a question can have multiple answer choices but only one correct answer? Many thanks.
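    A minimal sketch of one common way to enforce both rules (table and column names are illustrative): give the choices table a composite key of (question ID, answer ID), then point a composite foreign key at it from the question table, so the correct answer must be one of that question's own choices.

        CREATE TABLE question (
            question_id   INT PRIMARY KEY,
            question_text VARCHAR(500) NOT NULL,
            answer_id     INT NULL  -- the correct choice; NULL until one is picked
        );

        CREATE TABLE answer_choice (
            question_id INT NOT NULL REFERENCES question (question_id),
            answer_id   INT NOT NULL,
            answer_text VARCHAR(500) NOT NULL,
            PRIMARY KEY (question_id, answer_id)
        );

        -- The composite FK is what rules out "many questions per answer":
        -- (question_id, answer_id) in question must match one of that same
        -- question's rows in answer_choice.
        ALTER TABLE question
            ADD FOREIGN KEY (question_id, answer_id)
            REFERENCES answer_choice (question_id, answer_id);

    Because answer_id is nullable, a question can be inserted first, its choices added, and the correct answer set afterwards without tripping the circular reference.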

    Read the article

  • Generic version control strategy for select table data within a heavily normalized database

    - by leppie
    Hi. Sorry for the long-winded title, but the requirement/problem is rather specific. With reference to the following sample (but very simplified) structure in pseudo SQL, I hope to explain it a bit better:

        TABLE StructureName { Id GUID PK, Name varchar(50) NOT NULL }
        TABLE Structure { Id GUID PK, ParentId GUID (FK to Structure), NameId GUID (FK to StructureName) NOT NULL }
        TABLE Something { Id GUID PK, RootStructureId GUID (FK to Structure) NOT NULL }

    As one can see, Structure is a simple tree structure (I am not worried about the ordering of children for this problem). StructureName is a simplification of a translation system. Finally, 'Something' simply references the tree's root structure. This is just one of many tables that need to be versioned, but it serves as a good example for most cases. There is a requirement to version any changes to the name and/or the tree 'layout' of the Structure table. Previous versions should always be available. There seem to be a few possible ways to tackle this, like copying the entire structure, but most approaches cause one to lose referential integrity. For example, with that approach one would have to duplicate the 'Something' record, given that the root structure would be a new record with a new ID. Other possible avenues are looking into how wikis handle this, or going a lot further and looking at how proper version control systems work. Currently I feel a bit clueless about how to proceed in a generic way. Any ideas will be greatly appreciated. Thanks, leppie
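    One hedged sketch of a revision-based approach (SQL Server flavored; all names are illustrative, not from the question): structure rows become immutable, every edit lands under a new revision, and 'Something' keeps pointing at a stable structure ID, so referential integrity survives tree changes.

        CREATE TABLE Revision (
            RevisionId UNIQUEIDENTIFIER PRIMARY KEY,
            CreatedAt  DATETIME NOT NULL,
            Comment    VARCHAR(200) NULL
        );

        CREATE TABLE StructureVersion (
            StructureId UNIQUEIDENTIFIER NOT NULL,  -- stable identity across versions
            RevisionId  UNIQUEIDENTIFIER NOT NULL REFERENCES Revision (RevisionId),
            ParentId    UNIQUEIDENTIFIER NULL,      -- parent within the same revision
            NameId      UNIQUEIDENTIFIER NOT NULL,
            PRIMARY KEY (StructureId, RevisionId)
        );

        -- Resolving the tree 'as of' a revision is then a plain filter:
        -- SELECT * FROM StructureVersion WHERE RevisionId = @rev;

    The trade-off is write amplification on each edit versus never having to duplicate referencing rows such as 'Something'.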

    Read the article

  • Visual Studio 2008 (C#) with SQL Compact Edition database error: 26

    - by Tommy
    A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) I've created a SQL Compact database and included it in my application, and I can connect to the database fine from other database editors, but within my application I'm trying:

        using (SqlConnection con = new SqlConnection(Properties.Settings.Default.DatabaseConnection))
        {
            con.Open();
        }

    The connection string is Data Source=|DataDirectory|\Database.sdf. I'm stumped. Any insight?

    Read the article

  • Naming of boolean column in database table

    - by Space Cracker
    I have a 'Service' table, and the columns are described as follows:

    Is user verification required for the service?
    Is the user's email activation required for the service?
    Is the user's mobile activation required for the service?

    I hesitate between naming these columns

    IsVerificationRequired, IsEmailActivationRequired, IsMobileActivationRequired

    or

    RequireVerification, RequireEmailActivation, RequireMobileActivation

    I can't decide which way is best. Is one of the suggested names best, or are there better ones?
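    For what it's worth, here is the first convention rendered as a small illustrative DDL sketch (types and names are assumptions); the "Is..." prefix reads naturally as a yes/no predicate in queries:

        CREATE TABLE Service (
            ServiceId                  INT PRIMARY KEY,
            IsVerificationRequired     BIT NOT NULL DEFAULT 0,
            IsEmailActivationRequired  BIT NOT NULL DEFAULT 0,
            IsMobileActivationRequired BIT NOT NULL DEFAULT 0
        );

        -- Reads as a predicate: "services where verification is required"
        SELECT ServiceId FROM Service WHERE IsVerificationRequired = 1;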

    Read the article

  • Upgrading from SQL2000 database to SQL Express 2008 R2

    - by itwb
    Hi, We have a web application which uses an MSSQL 2000 backend database. We are currently paying a ridiculous amount for shared hosting; the database costs alone are $150 per month (an extra 100 MB of MSSQL space is $40 per month). Our database size is 896.38 MB. I am looking at getting a virtual private server and upgrading the database to an MSSQL 2008 Express database. I am aware that the Express version is limited to a 10 GB database (with R2) and is constrained to a single CPU. I have also been offered SQL Server 2008 Web Edition for $19 per month, but I cannot find many details on the difference between Express and Web. Any suggestions here? What I would also like to know is: if we upgrade to an MSSQL 2008 database, are there any issues with possible data transformations in the future? I.e., is it possible to download and mount it with SQL Server 2008 Standard Edition? I'm more concerned about how to get data in and out of the database through SQL management tools. Are there any other issues that I might face? Thanks, Mike
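    For the data-in-and-out part, the usual route is backup on the old instance and restore on the new one, which upgrades the database files in place; a hedged sketch (database, logical file names and paths are illustrative):

        -- On the SQL Server 2000 instance:
        BACKUP DATABASE MyAppDb TO DISK = 'C:\backups\MyAppDb.bak';

        -- On the SQL Server 2008 instance (the restore performs the upgrade):
        RESTORE DATABASE MyAppDb
        FROM DISK = 'C:\backups\MyAppDb.bak'
        WITH MOVE 'MyAppDb_Data' TO 'C:\data\MyAppDb.mdf',
             MOVE 'MyAppDb_Log'  TO 'C:\data\MyAppDb.ldf';

        -- Optionally raise the compatibility level to use 2008 features:
        ALTER DATABASE MyAppDb SET COMPATIBILITY_LEVEL = 100;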

    Read the article

  • MySQL root user can't access database

    - by Ed Schofield
    Hi all, We have a MySQL database ('myhours') on a production database server that is accessible to one user ('edsf') only, but not to the root user. The command SHOW DATABASES as the root user does not list the 'myhours' database. The same command as the 'edsf' user lists the database:

        mysql> SHOW DATABASES;
        +--------------------+
        | Database           |
        +--------------------+
        | information_schema |
        | myhours            |
        +--------------------+
        2 rows in set (0.01 sec)

    Only the 'edsf' user can access the 'myhours' database with USE myhours. Neither user seems to have permission to grant further permissions for this database. My questions are:

    Q1. How is it that the root user does not have permission to use the database? How could this have come about? The output of SHOW GRANTS FOR 'root'@'localhost'; looks fine to me:

        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY PASSWORD '*xxx' WITH GRANT OPTION

    Q2. How can I recover this situation to make this database visible to the MySQL root user and grant further permissions on it?

    Thanks in advance for any help! -- Ed
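    A speculative diagnosis, not from the original thread: a root account with ALL PRIVILEGES ON *.* should see every schema, so the two sessions may in fact be talking to different server instances (different port or socket). Failing that, the grant tables are worth inspecting and reloading:

        -- As root, check for database-level grants and for unexpected accounts:
        SELECT host, db, user FROM mysql.db WHERE db = 'myhours';
        SELECT host, user FROM mysql.user;

        -- If the grant tables were ever edited directly, reload them:
        FLUSH PRIVILEGES;

        -- Explicitly re-granting is then possible:
        GRANT ALL PRIVILEGES ON myhours.* TO 'root'@'localhost';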

    Read the article

  • PandoraBar Packs Pandora Radio Client into a Compact Case

    - by Jason Fitzpatrick
    This stylish and compact build makes it easy to enjoy streaming radio without the bulk and overhead of running your entire computer to do so. Check out the video to see the compact streaming radio box in action. Courtesy of tinker blog Engscope, we find this clean Pandora-client-in-a-box build. Currently the project blog has a cursory overview of the project with the demo video, but promises future updates detailing the software and hardware components of the build. If you can't wait that long, make sure to check out some of the previous Wi-Fi radio builds we've shared: DIY Wi-Fi Radio Brings Wireless Tunes Anywhere in Your House and Wi-Fi Speakers Stream Music Anywhere. PandoraBar [via Hacked Gadgets]

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views), the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why, for analytical queries, it's faster to access the data in the new column format than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.

    1. Access only the column data needed. The IM column store only has to scan two columns, lo_shipmode and lo_ordtotalprice, to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.

    2. Scan and filter data in its compressed format. When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.

    3. Prune out any unnecessary data within each column. The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.

    4. Use SIMD to apply filter predicates. For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core that can be achieved in the buffer cache.

    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session-level statistics. You can monitor the session-level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with IM. You can see the full list of these statistics by typing:

        column display_name format a30
        SELECT display_name
        FROM   v$statname
        WHERE  display_name LIKE 'IM%';

    If we check the session statistics after we execute our query, the results would be as follows:

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

        SELECT display_name
        FROM   v$statname
        WHERE  display_name IN ('IM scan CUs columns accessed',
                                'IM scan segments minmax eligible',
                                'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
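    The post lists the statistic names from v$statname; to see the per-session values it describes, one presumably joins v$mystat on statistic#, along these lines (a sketch, not from the original post):

        SELECT n.display_name, s.value
        FROM   v$mystat s
               JOIN v$statname n ON n.statistic# = s.statistic#
        WHERE  n.display_name IN ('IM scan CUs columns accessed',
                                  'IM scan segments minmax eligible',
                                  'IM scan CUs pruned');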

    Read the article

  • Windows Embedded Compact 7 in Padua

    - by Valter Minute
    Yesterday I did a presentation about Windows CE at the University of Padua. Even if the picture seems to suggest that I just showed an empty slide, I illustrated the new features of the OS and did a quick demo of Silverlight for Windows Embedded on Windows Embedded Compact 7 (I have to get used to this new name), showing the new tools that provide better integration between Expression Blend and Visual Studio for the development of Silverlight applications (I hope to be able to write more on this topic soon!). The operating system was running on some real hardware (a TI OMAP3530 evaluation board) and many people had a chance to interact with the new customizable shell. Most of the 60 people attending were still awake at the end of the one-and-a-half-hour session, and some of them even asked questions! I would like to thank all the people attending and all the people of Arrow, Fortech Embedded Labs and the University of Padua who made this event possible and provided me the tools and the time to do this presentation.

    Read the article

  • Problems locating Redmine database

    - by zordor
    I have an active Redmine installation, but I cannot find the database it is running against right now. It should be on PostgreSQL, but the database where it should be running is empty. Does anybody have any idea how to check the current database used by Redmine? Please let me know if you need any extra information. Thank you. EDIT: I now know which database it is using. In database.yml I have project_redmine, but it is using the database project, and I don't know why. That database is used by the developers for the actual project, so of course that is causing me problems. I am unable to run it against the right DB (project_redmine). Any ideas?
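    A guess worth checking, not from the original thread: Redmine reads the database.yml section matching its RAILS_ENV, so a production instance will ignore a development entry that points at project_redmine. From a psql session you can also confirm where connections actually landed:

        -- Which database is this session in?
        SELECT current_database();

        -- Which databases are connected clients (e.g. Redmine) sitting in?
        SELECT datname, usename FROM pg_stat_activity;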

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server all the websites connect to as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter. CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs multiple databases. This question has been answered numerous times. The question is regarding the pros and cons for a deployment like this having the ability to manage all the websites centrally (one server) vs trying to keep them all in sync if they each have their own db (multiple servers). REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at a central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?
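    A hedged sketch of the keep-in-sync option (MySQL flavored, all names illustrative): stamp every master row with a last-modified time and let each site pull only what changed since its previous successful sync.

        CREATE TABLE product (
            product_id INT PRIMARY KEY,
            name       VARCHAR(100) NOT NULL,
            image_url  VARCHAR(255),
            updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                                 ON UPDATE CURRENT_TIMESTAMP
        );

        -- Each site stores the timestamp of its last successful pull and asks
        -- the master only for rows touched since then:
        SELECT product_id, name, image_url, updated_at
        FROM   product
        WHERE  updated_at > '2012-06-01 00:00:00';

    Under this scheme the t-shirt image change propagates on each site's next pull, at the cost of a sync delay and extra bookkeeping around deletes.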

    Read the article

  • Relational vs. Dimensional Databases, what's the difference?

    - by grautur
    I'm trying to learn about OLAP and data warehousing, and I'm confused about the difference between relational and dimensional modeling. Is dimensional modeling basically relational modeling, but allowing for redundant/un-normalized data? For example, let's say I have historical sales data on (product, city, # sales). I understand that the following would be a relational point of view:

        Product | City          | # Sales
        Apples  | San Francisco | 400
        Apples  | Boston        | 700
        Apples  | Seattle       | 600
        Oranges | San Francisco | 550
        Oranges | Boston        | 500
        Oranges | Seattle       | 600

    While the following is a more dimensional point of view:

        Product | San Francisco | Boston | Seattle
        Apples  | 400           | 700    | 600
        Oranges | 550           | 500    | 600

    But it seems like both points of view would nonetheless be implemented in an identical star schema:

        Fact table: Product ID, Region ID, # Sales
        Product dimension: Product ID, Product Name
        City dimension: City ID, City Name

    And it's not until you start adding some additional details to each dimension that the differences start popping up. For instance, if you wanted to track regions as well, a relational database would tend to have a separate region table, in order to keep everything normalized:

        City dimension: City ID, City Name, Region ID
        Region dimension: Region ID, Region Name, Region Manager, # Regional Stores

    While a dimensional database would allow for denormalization to keep the region data inside the city dimension, in order to make it easier to slice the data:

        City dimension: City ID, City Name, Region Name, Region Manager, # Regional Stores

    Is this correct?
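    The star schema from the question, written out as illustrative DDL (names are hypothetical, including the denormalized region columns):

        CREATE TABLE product_dim (
            product_id   INT PRIMARY KEY,
            product_name VARCHAR(50) NOT NULL
        );

        CREATE TABLE city_dim (
            city_id        INT PRIMARY KEY,
            city_name      VARCHAR(50) NOT NULL,
            region_name    VARCHAR(50),   -- denormalized: the dimensional choice
            region_manager VARCHAR(50)
        );

        CREATE TABLE sales_fact (
            product_id INT NOT NULL REFERENCES product_dim (product_id),
            city_id    INT NOT NULL REFERENCES city_dim (city_id),
            sales      INT NOT NULL,
            PRIMARY KEY (product_id, city_id)
        );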

    Read the article

  • Still Confused About Identifying vs. Non-Identifying Relationships

    - by Jason
    So, I've been reading up on identifying vs. non-identifying relationships in my database design, and a number of the answers on SO seem contradictory to me. Here are the two questions I am looking at: "What's the Difference Between Identifying and Non-Identifying Relationships" and "Trouble Deciding on Identifying or Non-Identifying Relationship". Looking at the top answers from each question, I appear to get two different ideas of what an identifying relationship is. The first question's response says that an identifying relationship "describes a situation in which the existence of a row in the child table depends on a row in the parent table." An example given is, "An author can write many books (1-to-n relationship), but a book cannot exist without an author." That makes sense to me. However, when I read the response to question two, I get confused, as it says, "if a child identifies its parent, it is an identifying relationship." The answer then goes on to give examples such as an SSN (which is identifying of a person), whereas an address is not (because many people can live at one address). To me, this sounds more like the decision between a primary key and a non-primary key. My own gut feeling (and additional research on other sites) points to the first question's response being correct. However, I wanted to verify before continuing forward, as I don't want to learn something wrong while working to understand database design. Thanks in advance.
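    In DDL terms the usual distinction looks like this (an illustrative sketch, not from the linked answers): a relationship is identifying when the parent's key is part of the child's primary key, and non-identifying when the child carries its own key and merely references the parent.

        CREATE TABLE author (
            author_id INT PRIMARY KEY,
            name      VARCHAR(100) NOT NULL
        );

        -- Non-identifying: a book has its own identity and just references
        -- its author.
        CREATE TABLE book (
            book_id   INT PRIMARY KEY,
            author_id INT NOT NULL REFERENCES author (author_id),
            title     VARCHAR(200) NOT NULL
        );

        -- Identifying: an edition has no identity apart from its book;
        -- the parent key is part of the child's primary key.
        CREATE TABLE book_edition (
            book_id    INT NOT NULL REFERENCES book (book_id),
            edition_no INT NOT NULL,
            pub_year   INT,
            PRIMARY KEY (book_id, edition_no)
        );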

    Read the article

  • How do you decide what kind of database to use?

    - by Jason Baker
    I really dislike the name "NoSQL" because it isn't very descriptive. It tells me what the databases aren't, where I'm more interested in what the databases are. I think this label encompasses several categories of database. I'm just trying to get a general idea of what job each particular database is the best tool for. A few assumptions I'd like to make (and would ask you to make): Assume that you have the capability to hire any number of brilliant engineers who are equally experienced with every database technology that has ever existed. Assume you have the technical infrastructure to support any given database (including available servers and sysadmins who can support said database). Assume that each database has the best support possible for free. Assume you have 100% buy-in from management. Assume you have an infinite amount of money to throw at the problem. Now, I realize that the above assumptions eliminate a lot of valid considerations that are involved in choosing a database, but my focus is on figuring out what database is best for the job on a purely technical level. So, given the above assumptions, the question is: what jobs is each database (including both SQL and NoSQL) the best tool for, and why?

    Read the article

  • Any practices, samples for ERD?

    - by just_name
    Q: I want websites or books just for training on ERD and normalization. I want a lot of samples, practice exercises, and case studies with recommended answers, to strengthen myself in database design and avoid the poor database designs I have made. Note: I don't need books that explain the concepts; what I need is practice exercises, examples, and case studies with recommended answers. Thanks in advance.

    Read the article

  • Android threading and database locking

    - by Sena Gbeckor-Kove
    Hi, We are using AsyncTasks to access database tables and cursors. Unfortunately we are seeing occasional exceptions regarding the database being locked:

        E/SQLiteOpenHelper(15963): Couldn't open iviewnews.db for writing (will try read-only):
        E/SQLiteOpenHelper(15963): android.database.sqlite.SQLiteException: database is locked
        E/SQLiteOpenHelper(15963):   at android.database.sqlite.SQLiteDatabase.native_setLocale(Native Method)
        E/SQLiteOpenHelper(15963):   at android.database.sqlite.SQLiteDatabase.setLocale(SQLiteDatabase.java:1637)
        E/SQLiteOpenHelper(15963):   at android.database.sqlite.SQLiteDatabase.<init>(SQLiteDatabase.java:1587)
        E/SQLiteOpenHelper(15963):   at android.database.sqlite.SQLiteDatabase.openDatabase(SQLiteDatabase.java:638)
        E/SQLiteOpenHelper(15963):   at android.database.sqlite.SQLiteDatabase.openOrCreateDatabase(SQLiteDatabase.java:659)
        E/SQLiteOpenHelper(15963):   at android.database.sqlite.SQLiteDatabase.openOrCreateDatabase(SQLiteDatabase.java:652)
        E/SQLiteOpenHelper(15963):   at android.app.ApplicationContext.openOrCreateDatabase(ApplicationContext.java:482)
        E/SQLiteOpenHelper(15963):   at android.content.ContextWrapper.openOrCreateDatabase(ContextWrapper.java:193)
        E/SQLiteOpenHelper(15963):   at android.database.sqlite.SQLiteOpenHelper.getWritableDatabase(SQLiteOpenHelper.java:98)
        E/SQLiteOpenHelper(15963):   at android.database.sqlite.SQLiteOpenHelper.getReadableDatabase(SQLiteOpenHelper.java:158)
        E/SQLiteOpenHelper(15963):   at com.iview.android.widget.IViewNewsTopStoryWidget.initData(IViewNewsTopStoryWidget.java:73)
        E/SQLiteOpenHelper(15963):   at com.iview.android.widget.IViewNewsTopStoryWidget.updateNewsWidgets(IViewNewsTopStoryWidget.java:121)
        E/SQLiteOpenHelper(15963):   at com.iview.android.async.GetNewsTask.doInBackground(GetNewsTask.java:338)
        E/SQLiteOpenHelper(15963):   at com.iview.android.async.GetNewsTask.doInBackground(GetNewsTask.java:1)
        E/SQLiteOpenHelper(15963):   at android.os.AsyncTask$2.call(AsyncTask.java:185)
        E/SQLiteOpenHelper(15963):   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:256)
        E/SQLiteOpenHelper(15963):   at java.util.concurrent.FutureTask.run(FutureTask.java:122)
        E/SQLiteOpenHelper(15963):   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:648)
        E/SQLiteOpenHelper(15963):   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:673)
        E/SQLiteOpenHelper(15963):   at java.lang.Thread.run(Thread.java:1060)

    Does anybody have a general example of code which writes to a database from a different thread than the one reading, and how can we ensure thread safety? One suggestion I've had is to use a ContentProvider, as this would handle access to the database from multiple threads. I am going to look at this, but is this the recommended method of handling such a problem? It seems rather heavyweight for what we're doing. Thanks in advance.

    Read the article

  • Enable Automatic Code First Migrations On SQL Database in Azure Web Sites

    - by Steve Michelotti
    Now that Azure supports .NET Framework 4.5, you can use all the latest and greatest available features. A common scenario is to be able to use Entity Framework Code First Migrations with a SQL Database in Azure. Prior to Code First Migrations, Entity Framework provided database initializers. While convenient for demos and prototypes, database initializers weren't useful for much beyond that because, if you delete and re-create your entire database when the schema changes, you lose all of your operational data. This is the void that Migrations are meant to fill. For example, if you add a column to your model, Migrations will alter the database to add the column rather than blowing away the entire database and re-creating it from scratch. Azure is becoming increasingly easier to use, especially with features like Azure Web Sites. Being able to use Entity Framework Migrations in Azure makes deployment easier than ever. In this blog post, I'll walk through enabling Automatic Code First Migrations on Azure. I'll use the Simple Membership provider for my example.

    First, we'll create a new Azure web site called "migrationstest", including creating a new SQL Database along with it. Next we'll go to the web site and download the publish profile. In the meantime, we've created a new MVC 4 website in Visual Studio 2012 using the "Internet Application" template. This template is automatically configured to use the Simple Membership provider. We'll do our initial publish to Azure by right-clicking our project and selecting "Publish...". From the "Publish Web" dialog, we'll import the publish profile that we downloaded in the previous step. Once the site is published, we'll just click the "Register" link from the default site. Since the AccountController is decorated with the [InitializeSimpleMembership] attribute, the initializer will be called and the initial database is created. We can verify this by connecting to our SQL Database on Azure with SQL Management Studio (after making sure that our local IP address is added to the list of allowed IP addresses in Azure).

    One interesting note is that these tables got created with the default Entity Framework initializer, which is to create the database if it doesn't already exist. However, our database did already exist! This is because there is a new feature of Entity Framework 5 where Code First will add tables to an existing database as long as the target database doesn't contain any of the tables from the model.

    At this point, it's time to enable Migrations. We'll open the Package Manager Console and execute the command:

        PM> Enable-Migrations -EnableAutomaticMigrations

    This will enable automatic migrations for our project. Because we used the "-EnableAutomaticMigrations" switch, it will create our Configuration class with a constructor that sets the AutomaticMigrationsEnabled property to true:

        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
        }

    We'll now add our initial migration:

        PM> Add-Migration Initial

    This will create a migration class called "Initial" that contains the entire model. But we need to remove all of this code because our database already exists, so we are just left with empty Up() and Down() methods:

        public partial class Initial : DbMigration
        {
            public override void Up()
            {
            }

            public override void Down()
            {
            }
        }

    If we don't remove this code, we'll get an exception the first time we attempt to run migrations that tells us: "There is already an object named 'UserProfile' in the database". This blog post by Julie Lerman fully describes this scenario (i.e., enabling migrations on an existing database). Our next step is to add the Entity Framework initializer that will automatically use Migrations to update the database to the latest version. We will add these 2 lines of code to the Application_Start of the Global.asax:

        Database.SetInitializer(new MigrateDatabaseToLatestVersion<UsersContext, Configuration>());
        new UsersContext().Database.Initialize(false);

    Note the Initialize() call will force the initializer to run if it has not been run before. At this point, we can publish again to make sure everything is still working as we are expecting. This time we're going to specify in our publish profile that Code First Migrations should be executed. Once we have re-published we can once again navigate to the Register page. At this point the database has not been changed, but Migrations is now enabled on our SQL Database in Azure. We can now customize our model. Let's add 2 new properties to the UserProfile class, Email and DateOfBirth:

        [Table("UserProfile")]
        public class UserProfile
        {
            [Key]
            [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
            public int UserId { get; set; }
            public string UserName { get; set; }
            public string Email { get; set; }
            public DateTime DateOfBirth { get; set; }
        }

    At this point all we need to do is simply re-publish. We'll once again navigate to the Registration page and, because we had Automatic Migrations enabled, the database has been altered (*not* recreated) to add our 2 new columns. We can verify this by once again looking at SQL Management Studio. Automatic Migrations provide a quick and easy way to keep your database in sync with your model without the worry of having to re-create your entire database and lose data. With Azure Web Sites you can set up automatic deployment with Git or TFS and automate the entire process to make it dead simple.

    Read the article

  • Database version control resources

    - by Wes McClure
    In the process of creating my own DB VCS tool tsqlmigrations.codeplex.com I ran into several good resources to help guide me along the way, in reviewing existing offerings and in concepts that would be needed in a good DB VCS. This is my list of helpful links that others can use to understand some of the concepts and some of the tools in existence. In the next few posts I will try to explain how I used these to create TSqlMigrations.

    Blog entries:
    Three rules for database work - K. Scott Allen: http://odetocode.com/blogs/scott/archive/2008/01/30/three-rules-for-database-work.aspx
    Versioning databases - the baseline: http://odetocode.com/blogs/scott/archive/2008/01/31/versioning-databases-the-baseline.aspx
    Versioning databases - change scripts: http://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-change-scripts.aspx
    Versioning databases - views, stored procedures and the like: http://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-views-stored-procedures-and-the-like.aspx
    Versioning databases - branching and merging: http://odetocode.com/blogs/scott/archive/2008/02/03/versioning-databases-branching-and-merging.aspx
    Evolutionary Database Design - Martin Fowler: http://martinfowler.com/articles/evodb.html
    Are database migration frameworks worth the effort? (good challenges): http://www.ridgway.co.za/archive/2009/01/03/are-database-migration-frameworks-worth-the-effort.aspx
    Continuous Integration (in general): http://martinfowler.com/articles/continuousIntegration.html and http://martinfowler.com/articles/originalContinuousIntegration.html
    Is Your Database Under Version Control?: http://www.codinghorror.com/blog/archives/000743.html
    11 Tools for Database Versioning: http://secretgeek.net/dbcontrol.asp
    How to do database source control and builds: http://mikehadlow.blogspot.com/2006/09/how-to-do-database-source-control-and.html
    .Net Database Migration Tool Roundup: http://flux88.com/blog/net-database-migration-tool-roundup/

    Books:
    Refactoring Databases: Evolutionary Database Design - Martin Fowler signature series on refactoring databases. Book site: http://databaserefactoring.com/
    Recipes for Continuous Database Integration: Evolutionary Database Development (Digital Short Cut) - a good question/answer layout of common problems and solutions with database version control. http://www.informit.com/store/product.aspx?isbn=032150206X

    Read the article

  • Oracle Database 11gR2 backup also works with NetBackup in an Exadata V2 environment

    - by Fekete Zoltán
    Oracle 11gR2 databases can also be backed up with the Veritas NetBackup software on Oracle Enterprise Linux (using RMAN), in 64-bit environments. In the documentation we need to look for the information that applies to Red Hat, since according to http://seer.entsupport.symantec.com/docs/337048.htm, "Oracle Enterprise Linux (OEL)" is "Supported based on NetBackup Red Hat Enterprise Linux 4.x/5.x Client, Server, and Oracle Agent support. BMR is not supported." NetBackup compatibility lists: http://seer.entsupport.symantec.com/docs/303344.htm - NetBackup 7 is compatible with Oracle Exadata V2: http://seer.entsupport.symantec.com/docs/340295.htm - For NetBackup 6.x versions, the following patch must be installed: NB_6.5.5_ET1940073_1_347227.zip is a NetBackup 6.5.5 EEB (Emergency Engineering Binary) for Oracle Clients. http://seer.entsupport.symantec.com/docs/347227.htm and http://support.veritas.com/docs/279048.

    Read the article
