Search Results

Search found 5611 results on 225 pages for 'contained databases'.

  • Any tools or techniques for validating constraints programmatically between databases?

    - by Brandon
    If you had two databases with two tables that would normally be linked by a one-to-one (or many-to-many) constraint, but cannot be because they live in separate databases, how would you validate this relationship in an application or test? Is there a simple way to do this? For example, a tool or technique that, given a constraint type, tables, and fields, performs the validation. I imagine this isn't the first time this has come up, so I'm hoping people can share their solutions. Thanks.
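
    For illustration, when both databases happen to live on the same SQL Server instance, a cross-database anti-join is one common way to surface violations; the database, table, and column names below are assumptions, not from the question:

        -- Rows in DB1.dbo.TableA with no matching row in DB2.dbo.TableB (names are illustrative)
        SELECT a.Id
        FROM DB1.dbo.TableA AS a
        LEFT JOIN DB2.dbo.TableB AS b ON b.TableAId = a.Id
        WHERE b.TableAId IS NULL;

    Running the same check with the two tables swapped completes a one-to-one validation; for a many-to-many relationship, each foreign-key column of the join table is validated the same way. When the databases are on different servers, the same queries can run over a linked server or from a small test harness that compares the two key sets.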

    Read the article

  • Is it possible to merge two MySQL databases into one?

    - by Mike L.
    Let's say I have two MySQL databases with some complex table structures, and no table name appears in both. Let's say these tables contain no rows (they do, but I could truncate them; the data is not important right now, just testing stuff). Let's say I need these 2 databases merged into one. For instance: DB1 has cities and states; DB2 has index, subindex, and posts. I want to end up with a single DB that contains cities, states, index, subindex, and posts. Is this possible?
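
    For illustration, when both databases live on the same MySQL server and no table names collide, RENAME TABLE can move each table across databases in place; the database names below are assumptions:

        -- Move DB2's tables into DB1 (`index` needs backticks because it is a reserved word)
        RENAME TABLE db2.`index`  TO db1.`index`,
                     db2.subindex TO db1.subindex,
                     db2.posts    TO db1.posts;

    If the two databases sit on different servers, dumping one with mysqldump and importing it into the other achieves the same merge.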

    Read the article

  • How do I identify and fix the cause of transaction log growth on SIMPLE recovery model databases?

    - by Stuart B
    I recently upgraded our SQL Server 2008 installations to Service Pack 2. One of our databases is on the simple recovery model, but its transaction log is growing extremely fast. The path I'm currently investigating is that we have a transaction somewhere out there stuck in the active state. Here is why:

        select name, recovery_model_desc, log_reuse_wait_desc
        from sys.databases
        where name in ('SimpleDB')

        name      recovery_model_desc  log_reuse_wait_desc
        SimpleDB  SIMPLE               ACTIVE_TRANSACTION

    When I check my active transactions, I get the following. Note that I installed SP2 and restarted our server on 12/25 at around noonish.

        select transaction_id, name, transaction_begin_time, transaction_type
        from sys.dm_tran_active_transactions

        transaction_id  name                          transaction_begin_time   transaction_type
        233             worktable                     2010-12-25 12:44:29.283  2
        236             worktable                     2010-12-25 12:44:29.283  2
        238             worktable                     2010-12-25 12:44:29.283  2
        240             worktable                     2010-12-25 12:44:29.283  2
        243             worktable                     2010-12-25 12:44:29.283  2
        245             worktable                     2010-12-25 12:44:29.283  2
        62210           tran_sp_MScreate_peer_tables  2010-12-25 12:45:00.880  1
        55422856        user_transaction              2010-12-28 16:41:56.703  1
        55422889        SELECT                        2010-12-28 16:41:57.303  2
        470             LobStorageProviderSession     2010-12-25 12:44:30.510  2

    Note that according to the documentation a transaction_type of 1 means read/write, and 2 means read-only. So, my line of thinking is that the tran_sp_MScreate_peer_tables transaction is stuck for some reason and holding up transaction log truncation. Is this a plausible scenario? Correct me if my line of thinking is off, as I'm not a SQL Server expert. If this is correct, how do I get rid of that transaction so that the transaction log is truncated as usual?
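
    For illustration, one common way to chase this down is to ask SQL Server for the oldest active transaction in the database, map it to a session, and kill that session if it is safe to do so; the session id below is hypothetical:

        USE SimpleDB;
        DBCC OPENTRAN;  -- reports the oldest active transaction holding up log truncation

        -- Map the suspect transaction to the session that owns it
        SELECT st.session_id, at.transaction_id, at.name
        FROM sys.dm_tran_session_transactions AS st
        JOIN sys.dm_tran_active_transactions AS at ON at.transaction_id = st.transaction_id
        WHERE at.name = 'tran_sp_MScreate_peer_tables';

        KILL 53;  -- hypothetical session_id returned by the query above

    Once the transaction has rolled back, checkpoints should truncate the log normally again under the simple recovery model.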

    Read the article

  • How do I keep a table in sync across multiple SQL Databases?

    - by Refracted Paladin
    I have a WinForms data-entry application that uses 4 separate databases. This is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync. This is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (Guid), a WorkerName (varchar), and a ClientID (int). The problem is I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-db queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in 1 database and then use triggers to keep it updated in the other 3 databases (a sketch of such a trigger follows below). In order to be part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

        SELECT <published_columns> FROM [dbo].[tblPlan]
        WHERE [ClientID] IN
            (select ClientID from [dbo].[tblWorkerOwnership]
             where WorkerID = SUSER_SNAME())

    Replication also lets you chain filters together; this next one sits below the first one, so it only pulls from the first's filtered set:

        SELECT <published_columns> FROM [dbo].[tblPlan]
        INNER JOIN [dbo].[tblHealthAssessmentReview]
            ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

    P.S. - I know how illogical the DB structure sounds. I didn't make it. I inherited it and was then told to make it a "disconnected app." Go figure!
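
    For illustration, a minimal sketch of the trigger approach, assuming the master copy of the mapping table lives in one database and the other copies are reachable on the same instance (the DB2 database name is a placeholder, and matching UPDATE and DELETE triggers would be needed as well):

        CREATE TRIGGER trg_tblWorkerOwnership_Insert
        ON [dbo].[tblWorkerOwnership]
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Mirror new mapping rows into the copy kept in the second database
            INSERT INTO DB2.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
            SELECT PrimaryID, WorkerName, ClientID FROM inserted;
        END;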

    Read the article

  • Disable a form and all contained elements until an ajax query completes (or another solution to prev…

    - by Max Williams
    I have a search form with inputs and selects, and when any input/select is changed I run some JS and then make an ajax query with jQuery. I want to stop the user from making further changes to the form while the request is in progress, as at the moment they can initiate several remote searches at once, effectively causing a race between the different searches. It seems like the best solution is to prevent the user from interacting with the form while waiting for the request to come back. At the moment I'm doing this in the dumbest way possible: hiding the form before making the ajax query and then showing it again on success/error. This solves the problem but looks horrible and isn't really acceptable. Is there another, better way to prevent interaction with the form? To make things more complicated, to allow nice-looking selects, the user actually interacts with spans which have JS hooked up to them to tie them to the actual, hidden, selects. So, even though the spans aren't inputs, they are contained in the form and represent the actual interactive elements of the form. Grateful for any advice - max. Here's what I'm doing now:

        function submitQuestionSearchForm(){
          //bunch of irrelevant stuff
          var questionSearchForm = jQuery("#searchForm");
          questionSearchForm.addClass("searching");
          jQuery.ajax({
            async: true,
            data: jQuery.param(questionSearchForm.serializeArray()),
            dataType: 'script',
            type: 'get',
            url: "/questions",
            success: function(msg){
              //more irrelevant stuff
              questionSearchForm.removeClass("searching");
            },
            error: function(msg){
              questionSearchForm.removeClass("searching");
            }
          });
          return true;
        }

    Read the article

  • I want to combine the databases from two different sites under one URL. How is this possible?

    - by Punct Ulica
    I have a small site that I want to merge with a bigger one. How can I merge the second one with the first? I know that one solution would be to make the smaller one a subdomain of the bigger one, but I would like the following to happen: when I click on a category or a tag, posts from both sites/databases would appear. Something like Smashing Magazine did when it assimilated designinformer.com. The other solution, and the one I would prefer, would be to merge the two databases, but I don't know if this is possible.
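
    For illustration, when both sites' databases live on the same MySQL server, the merge itself is a cross-database INSERT ... SELECT per table, done after remapping any primary keys that collide; the database, table, and column names below are assumptions:

        -- Fold the smaller site's posts into the bigger site's database (names are illustrative)
        INSERT INTO big_site.posts (title, content, category_id, created_at)
        SELECT title, content, category_id, created_at
        FROM small_site.posts;

    The fiddly part is usually not the copy but reconciling overlapping IDs, duplicate categories/tags, and foreign keys that reference the remapped rows.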

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    These days data integration is often a key aspect of business success. For business analysts it's very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc., in order to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them - Skyvia. Skyvia is a cloud data integration service which allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except for a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server, other databases, and cloud CRMs. You can use Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports such cloud CRMs as Salesforce and Microsoft Dynamics CRM, and such databases as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers.

    When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages; in SQL Server databases (and other relational databases) it creates additional tracking tables and triggers. This allows synchronizing only the changed data. Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure: it can still match the corresponding records without adding any additional columns or changing the data structure. The only requirement for synchronization is that primary keys must be autogenerated.

    With Skyvia it's not necessary for data to have the same structure in the integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc. You may use data splitting - loading data from a single CSV file or source table to multiple related target tables - or you may load data from several source CSV files or tables to several related target tables. In each case Skyvia preserves data relations; it builds the corresponding relations between the target data automatically.

    When you often work with cloud CRM data, native CRM data reporting and analysis tools may not be enough for you, and there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to a SQL Server database and apply the corresponding SQL Server tools to the data. For this you can use Skyvia's data replication tools, which let you quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You just need to specify the columns to copy data from; target database tables are created automatically. Skyvia offers powerful filtering settings to replicate only the records you need.

    Skyvia also provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. These files can be either downloaded manually or loaded to cloud file storages or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution, so you can automate your SQL Azure data backup or data synchronization - just configure it once, then schedule it, and benefit from automatic data integration with Skyvia. Currently registration and use of Skyvia is completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Details of 5GB and 50GB SQL Azure databases have now been released, along with new price points

    - by Eric Nelson
    Like many others signed up to the Windows Azure Platform, I received an email overnight detailing the upcoming database size changes for SQL Azure. I know from our work with early adopters over the last 12 months that the 1GB and 10GB limits were sometimes seen as blockers, especially when migrating existing applications to SQL Azure. On June 28th 2010, we will be increasing the size limits:

        SQL Azure Web Edition database: from 1 GB to 5 GB
        SQL Azure Business Edition database: from 10 GB to 50 GB

    Along with these changes come new price points, including the option to increase in increments of 10GB:

        Web Edition:
            Up to 1 GB relational database = $9.99 / month
            Up to 5 GB relational database = $49.95 / month
        Business Edition:
            Up to 10 GB relational database = $99.99 / month
            Up to 20 GB relational database = $199.98 / month
            Up to 30 GB relational database = $299.97 / month
            Up to 40 GB relational database = $399.96 / month
            Up to 50 GB relational database = $499.95 / month

    Check out the full SQL Azure pricing. Related links: http://ukazure.ning.com - UK community site; Getting started with the Windows Azure Platform.

    Read the article

  • What is the best way to work with large databases in Java depending on context?

    - by Singletony
    Hi guys. We are trying to figure out the best practice for working with very large DBs in Java. What we do is a kind of BI (business intelligence), i.e. analyzing very large DBs and using them to create intermediate DBs that represent intelligent knowledge of the DBs. We are currently using JDBC and just performing queries using a ResultSet. As more and more data is created, we are wondering whether more appropriate ways exist for parsing and manipulating these large DBs:

        We need to support 'chunk' manipulation and not an entire DB at once (e.g. LIMIT in JDBC has very poor performance).
        We do not need to be constantly connected, since we are just pulling results and creating new tables of our own.
        We want to understand the JDBC alternatives, with respect to their advantages and disadvantages.

    Whether you think JDBC is the way to go or not, what are the best practices to go by depending on context (e.g. for large DBs queried in chunks)? If my question is not clear, I will gladly elaborate! THANK YOU SO MUCH!
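
    For illustration, the usual fix for slow LIMIT-based chunking is keyset pagination: each chunk resumes from the last key seen rather than re-scanning past an ever-growing offset. A sketch against an assumed table with an indexed id column (MySQL/PostgreSQL-style syntax):

        -- Fetch the next chunk, starting after the last id seen in the previous chunk
        SELECT id, payload
        FROM big_table
        WHERE id > 123456   -- last id from the previous chunk
        ORDER BY id
        LIMIT 10000;

    From JDBC this pairs naturally with a PreparedStatement whose parameter is the last id of the previous batch, plus a fetch-size hint so the driver streams rows instead of buffering the entire result.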

    Read the article

  • Does the deprecation of mysql_* functions in PHP carry over to other Databases(MSSQL)?

    - by MobyD
    I'm not talking about MySQL; I'm talking about Microsoft SQL Server. I've been aware of PDO for quite some time now; the standard mysql_* functions are dangerous and should be avoided. http://php.net/manual/en/function.mysql-connect.php But what about the mssql_* functions in PHP? They are, for most purposes, an identical set of functions, but the PHP page describing mssql_* carries no warning of deprecation. http://us.php.net/manual/en/function.mssql-connect.php There are PDO drivers available for MSSQL, but they aren't quite as readily available or as widely used as the MySQL drivers. Ideally, it looks to me like I should get them working and move from mssql_* to PDO like I have with MySQL, but is it as big of a priority? Is there some hidden safety to MSSQL that means it's exempt from all of the mysql_* hatred as of late? Or is its obscurity as a backend the only reason there hasn't been more PDO encouragement?

    Read the article

  • How should I handle using two databases with a legacy PHP application?

    - by Toby Allen
    I have a legacy PHP application that was written in 2004 and uses MSSQL as a database backend. At this stage MSSQL is still supported by PHP, but only just, via a Microsoft driver. I have looked at converting to MySQL via automated tools, which work quite well, but I have quite complex views which need a lot of individual work to convert, and I don't have a great deal of time to do this. Many tools I wish to use, and frameworks I would like to move the application to, don't support MSSQL, so I was considering adding new features using a new MySQL database, and wondered if anyone had opinions on the pros and cons of using two separate database backends in a single application?

    Read the article

  • Should I keep separate client codebases and databases for a software-as-a-service application?

    - by John
    My question is about the architecture of my application. I have a Rails application where companies can administrate all things related to their clients. Companies would buy a subscription, and their users can access the application online. Hopefully I will get multiple companies subscribing to my application/service. What should I do with my code and database?

        1. Separate app code base and database per company
        2. One app code base but a separate database per company
        3. One app code base and one database

    The decision involves security (e.g. a user from company X should not see any data from company Y), performance (let's suppose it becomes successful; it should perform well), and scalability (again, if successful, it should perform well but also be easy for me to handle all the companies, code changes, etc.). For the sake of maintainability, I tend to opt for the one code base, but for the database I really don't know. What do you think is the best option?
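
    For illustration, the "one database" option usually comes down to a tenant key on every company-owned table, with every query filtered on it; the schema below is an assumption:

        -- Every tenant-owned row carries the owning company's id
        CREATE TABLE clients (
            id         INT PRIMARY KEY,
            company_id INT NOT NULL,  -- the tenant key
            name       VARCHAR(255) NOT NULL
        );

        -- Every query scopes to the current tenant, so company X never sees company Y's data
        SELECT id, name FROM clients WHERE company_id = 42;  -- hypothetical current tenant

    In a Rails app that filter is typically enforced in one place (e.g. scoped associations or a default scope) rather than repeated by hand in every query.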

    Read the article

  • Can I create two databases, each for different directories?

    - by Tim
    I have run Recoll, which created a database for my data partition on my internal hard drive. The database is stored under my home partition on the same internal hard drive. I now want to run Recoll to create a database for a directory on my external hard drive, and store this new database on that external hard drive, because my internal hard drive doesn't have enough space to hold the new database. I was wondering how to do that in Recoll? Note: my current Recoll was installed from the software center of Ubuntu 12.04. Thanks!

    Read the article

  • Database Snapshots of Mirrored databases affect performance of Principal database?

    - by yrushka
    I have 2 servers set up in high-safety mirroring: one is the principal and the other is the mirror. Currently I have 2 snapshots of a production database (100 GB in size) created on the principal server (so that massive SELECT processes can read without taking locks) and 2 snapshots of the same database on the mirror server for reporting purposes. I know snapshots reduce the performance of their source databases, but I am not sure whether snapshots on the mirror server have any impact on the principal server's performance. thanks,
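
    For illustration, a reporting snapshot on the mirror is created like any other database snapshot; the names and file path below are assumptions, and NAME must match the source database's logical file name:

        CREATE DATABASE ProdDB_Snap_Reporting
        ON ( NAME = ProdDB_Data,  -- logical file name of the source database
             FILENAME = 'D:\Snapshots\ProdDB_Snap_Reporting.ss' )
        AS SNAPSHOT OF ProdDB;

    The snapshot's copy-on-write work and reporting reads are served by the mirror's own I/O, so they do not touch the principal directly; one caveat sometimes raised is that under high-safety (synchronous) mirroring, heavy extra I/O on the mirror can delay log acknowledgements and so be felt on the principal indirectly.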

    Read the article

  • How to handle failure to release a resource which is contained in a smart pointer?

    - by cj
    How should an error during resource deallocation be handled when the object representing the resource is contained in a shared pointer? Smart pointers are a useful tool to manage resources safely. Examples of such resources are memory, disk files, database connections, or network connections.

        // open a connection to the local HTTP port
        boost::shared_ptr<Socket> socket = Socket::connect("localhost:80");

    In a typical scenario, the class encapsulating the resource should be noncopyable and polymorphic. A good way to support this is to provide a factory method returning a shared pointer, and declare all constructors non-public. The shared pointers can now be copied from and assigned to freely. The object is automatically destroyed when no reference to it remains, and the destructor then releases the resource.

        /** A TCP/IP connection. */
        class Socket {
        public:
            static boost::shared_ptr<Socket> connect(const std::string& address);
            virtual ~Socket();
        protected:
            Socket(const std::string& address);
        private:
            // not implemented
            Socket(const Socket&);
            Socket& operator=(const Socket&);
        };

    But there is a problem with this approach. The destructor must not throw, so a failure to release the resource will remain undetected. A common way out of this problem is to add a public method to release the resource.

        class Socket {
        public:
            virtual void close(); // may throw
            // ...
        };

    Unfortunately, this approach introduces another problem: our objects may now contain resources which have already been released. This complicates the implementation of the resource class. Even worse, it makes it possible for clients of the class to use it incorrectly. The following example may seem far-fetched, but it is a common pitfall in multi-threaded code.

        socket->close();
        // ...
        size_t nread = socket->read(&buffer[0], buffer.size()); // wrong use!

    Either we ensure that the resource is not released before the object is destroyed, thereby losing any way to deal with a failed resource deallocation; or we provide a way to release the resource explicitly during the object's lifetime, thereby making it possible to use the resource class incorrectly. There is a way out of this dilemma, but the solution involves using a modified shared pointer class, and these modifications are likely to be controversial. Typical shared pointer implementations, such as boost::shared_ptr, require that no exception be thrown when their object's destructor is called. Generally, no destructor should ever throw, so this is a reasonable requirement. These implementations also allow a custom deleter function to be specified, which is called in lieu of the destructor when no reference to the object remains. The no-throw requirement is extended to this custom deleter function. The rationale for this requirement is clear: the shared pointer's destructor must not throw, and if the deleter function does not throw, nor will the shared pointer's destructor. However, the same holds for other member functions of the shared pointer which lead to resource deallocation, e.g. reset(): if resource deallocation fails, no exception can be thrown. The solution proposed here is to allow custom deleter functions to throw. This means that the modified shared pointer's destructor must catch exceptions thrown by the deleter function. On the other hand, member functions other than the destructor, e.g. reset(), shall not catch exceptions of the deleter function (and their implementation becomes somewhat more complicated).

    Here is the original example, using a throwing deleter function:

        /** A TCP/IP connection. */
        class Socket {
        public:
            static SharedPtr<Socket> connect(const std::string& address);
        protected:
            Socket(const std::string& address);
            virtual ~Socket() { }
        private:
            struct Deleter;
            // not implemented
            Socket(const Socket&);
            Socket& operator=(const Socket&);
        };

        struct Socket::Deleter {
            void operator()(Socket* socket) {
                // Close the connection. If an error occurs, delete the socket
                // and throw an exception.
                delete socket;
            }
        };

        SharedPtr<Socket> Socket::connect(const std::string& address) {
            return SharedPtr<Socket>(new Socket(address), Deleter());
        }

    We can now use reset() to free the resource explicitly. If there is still a reference to the resource in another thread or another part of the program, calling reset() will only decrement the reference count. If this is the last reference to the resource, the resource is released. If resource deallocation fails, an exception is thrown.

        SharedPtr<Socket> socket = Socket::connect("localhost:80");
        // ...
        socket.reset();

    Read the article

  • FAQ and best tips regarding learning databases?

    - by AdityaGameProgrammer
    For a programmer with no prior exposure to databases, what would be a good database to learn: Oracle vs SQL Server vs MySQL vs PostgreSQL? I have come across a lot of discussion of MySQL and PostgreSQL, and frankly I am confused about which to start with.

        Are these very different? In the sense that, if one had to switch, would the exposure to one be counter-productive to learning the other?
        Is working with databases heavily platform dependent?
        What exactly do people mean by database programming vs. administration?
        Do people choose databases based on the programming language used for the application being developed?
        In general, when working with databases, is it implicit that we work with some server?
        Does the choice of databases differ when it comes to game development? If so, what factors does it differ by?
        What are the best tips that you have found to be useful when learning databases?

    Edit: Some FAQs I had, and found the same on SO:

        What should every developer know about databases?
        Which database if learning from scratch in 2010?
        For a beginner, is there much difference between MySQL and PostgreSQL?
        What RDBMS should I learn/use? (MySql/SQL Server/Oracle etc.)
        To what extent should a developer learn database?
        How are database programmers different from other programmers?
        What kind of database are used in games?

    Read the article

  • Announcement: Oracle Database Appliance 2.4 patch update now available

    - by uwes
    The Oracle Database Appliance 2.4 patch is now available from My Oracle Support (MOS). If you search for the Oracle Database Appliance 2.4.0.0.0 Kit under Patches, it will display the newly uploaded bundles. The patch highlights include:

        Normal redundancy (double-mirroring) option providing 6TB of usable storage
        Enhanced diagnostics - Trace File Analyzer and ODACHK

    Also, if you review the README, you may see content that says:

        "The grid infrastructure and database patching, both are rolling upgradable. During our patching, we patch the node 1 first and when completed, we patch the node 2."

    I would like to clarify that the 'infrastructure' updates (OS, firmware, ILOM, etc.) will require a short downtime of the ODA while they are applied. When you update the grid infrastructure (--gi), the appliance manager verifies that the infrastructure was updated, so you cannot patch the GI without first updating the infrastructure. The high-level update patch steps include (but are not limited to):

        1. Download the patch update to your ODA
        2. Update the infrastructure (--infra): the ODA databases are down and the ODA is/may be rebooted
        3. The ODA and GI/databases are restarted
        4. Issue the command to update the grid infrastructure/databases (the steps below run in a fixed order, and you cannot control when the nodes are brought up and down during the patching):
           Node 1 -- shut down databases and GI
           Node 1 -- patch GI/database
           Node 1 -- bring up databases and GI
           Node 2 -- shut down databases and GI
           Node 2 -- patch GI/database
           Node 2 -- bring up databases and GI

    A replay of Friday's session with Sohan on the 2.4 release can be found here. The PDF of the presentation is here. The Data Sheet, WP, and 2.4 Configurator are available on the ODA OTN site.

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug.

    Problem 1: Debugging users' bug reports

    When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to debug customers' issues and sort out what strange schema data Oracle was returning.

    Problem 2: Test execution time

    Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless.

    The solution

    To solve these, we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database. This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.

    Query snapshots also allow us to easily debug a customer's problem: using a simple snapshot generation program, users can generate a query snapshot that can be sent along with a bug report, which we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data.

    Query snapshots implementation

    However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just a population of a single database. Furthermore, although the code population queries (e.g. querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly. Query snapshots are a significant feature in Schema Compare that really helps us debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team to help us fix bugs in the product much faster than we otherwise would be able to.

    Read the article

  • Ubiquitous Language and Custom types

    - by EdvRusj
    Note that my question refers to those attributes that even on their own already represent a concept (i.e. on their own provide a cohesive meaning). Such an attribute needs no additional functional support and as such is self-contained. I'm also well aware that even with self-contained attributes, custom types may prove beneficial (for example, they give the ability to add new behavior later, when business requirements change). Thus, my question focuses only on whether custom types for self-contained attributes really enrich the Ubiquitous Language (UL).

    a) I've read that in most cases, even simple, self-contained attributes should have custom, more descriptive types rather than basic value types (double, string, ...), because among other things descriptive types add to the UL, while the use of basic types instead weakens the language. I understand the importance of UL, but how does having a basic type for a self-contained attribute weaken the language, when with self-contained attributes the name of the attribute already adequately describes the concept and thus contributes to the UL vocabulary? For example, the term person_age already adequately explains the concept of quantifying the number of years a person has:

        class Person {
            string person_age;
        }

    so what could we possibly gain by also introducing the term ThingAge to the UL?

        class Person {
            ThingAge person_age;
        }

    thanks

    Read the article

  • How to track changes in many MSSQL databases from .NET application?

    - by Yauheni Sivukha
    Problem: There are a lot of different databases which are populated by many different applications directly (without any common application layer), and data can be accessed only through SPs (by policy).

    Task: The application needs to track changes in these databases and react in minimal time.

    Possible solutions:

        1. Create a trigger for each table in each database, which populates one table with events; the application watches that table through SqlDependency (see the sketch below).
        2. Watch each table in each database through SqlDependency.
        3. Create a trigger for each table in each database, which notifies the application using a managed extension.

    Which is the best way?
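
    For illustration, a minimal sketch of option 1: an AFTER trigger per table that records changes into one central events table the application watches (table and column names are assumptions):

        -- Central events table the application observes
        CREATE TABLE dbo.ChangeEvents (
            EventId    int IDENTITY(1,1) PRIMARY KEY,
            TableName  sysname     NOT NULL,
            ChangeType varchar(10) NOT NULL,
            ChangedAt  datetime    NOT NULL DEFAULT GETUTCDATE()
        );
        GO
        CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO dbo.ChangeEvents (TableName, ChangeType)
            SELECT 'dbo.Orders',
                   CASE WHEN EXISTS (SELECT 1 FROM inserted) AND EXISTS (SELECT 1 FROM deleted) THEN 'UPDATE'
                        WHEN EXISTS (SELECT 1 FROM inserted) THEN 'INSERT'
                        ELSE 'DELETE' END;
        END;

    The application then needs to watch (or poll) only dbo.ChangeEvents per database instead of every table.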

    Read the article
