Search Results

Search found 36420 results on 1457 pages for 'database developers'.

Page 15/1457 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Creating a test database with copied data *and* its own data

    - by Jordan Reiter
    I'd like to create a test database that is refreshed each day with data from the production database. BUT, I'd like to be able to create records in the test database and retain them rather than having them be overwritten. I'm wondering if there is a simple, straightforward way to do this. Both databases run on the same server, so apparently that rules out replication? For clarification, here is what I would like to happen:
    - The test database is created with production data.
    - I create some test records that I want to keep on the test server (basically so I can have example records that I can play with).
    - The next day, the database is completely refreshed, but the records I created that day are retained. Records that were untouched that day are replaced with records from the production database.
    The complication is that if a record in the production database is deleted, I want it to be deleted in the test database too, so I do want to get rid of records in the test database that no longer exist in the production database, unless those records were created within the test database. It seems like the only way to do this would be to have some sort of table storing metadata about the records being created. So, for example, something like this:
        CREATE TABLE MetaDataRecords (
            id        INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
            tablename VARCHAR(100),
            action    CHAR(1),
            pk        VARCHAR(100)
        );

        DELETE FROM testdb.users
        WHERE NOT EXISTS (SELECT * FROM proddb.users
                          WHERE proddb.users.id = testdb.users.id)
          AND NOT EXISTS (SELECT * FROM testdb.MetaDataRecords
                          WHERE testdb.MetaDataRecords.pk = testdb.users.pk
                            AND testdb.MetaDataRecords.action = 'C'
                            AND testdb.MetaDataRecords.tablename = 'users');
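    A minimal sketch of the companion refresh step, assuming the MetaDataRecords idea above and that testdb.users and proddb.users share the same primary key; the name and email columns are purely illustrative:

        -- Re-copy production rows after the delete above. Rows that exist only in
        -- the test database (logged with action = 'C') have ids unknown to
        -- production, so this statement never touches them.
        INSERT INTO testdb.users (id, name, email)
        SELECT p.id, p.name, p.email
        FROM proddb.users AS p
        ON DUPLICATE KEY UPDATE
            name  = p.name,
            email = p.email;

    Run as a nightly job (cron or a MySQL event), the delete plus this insert gives the "refresh, but keep my test records" behaviour described above.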

    Read the article

  • Database Table Schema and Aggregate Roots

    - by bretddog
    Hi, the application is single-user, 1-tier (one PC); the database is SQL CE. The DataService layer will be (I think): a Repository returning domain objects and querying the database with LINQ to SQL (dbml). There are obviously a lot more columns; this is a simplified view. http://img573.imageshack.us/img573/3612/ss20110115171817w.png This is my first attempt at creating a 2-table database. I think the table schema makes sense, but I need some reassurance or critique, because the table relations look quite scary to be honest. I'm hoping you could look at the table schema and respond if there are clear signs of trouble or errors that you spot right away, and, if you have time, look at the Program Summary/Questions and see if the table layout makes sense for those points. Please be brutal, I will try to defend :)
    Program summary:
    a) A set of categories, each having a set of strategies (1:m).
    b) Each day a number of items will be produced, and each strategy MAY reference them. (So there can be 50 items, and a strategy may reference 23 of them.)
    c) An item can be referenced by more than one strategy. So I think it's an m:m relation.
    d) Status values will be logged at fixed time-fractions through the day, for: each Strategy, each StrategyItem, each Item.
    e) An action on an item may be executed by a strategy that references it. This is logged as ItemAction (could have called it StrategyItemAction).
    User Requests: b) to e) describe the main activity mode of the program: to work with only today's DayLog, for each category. The 2nd-priority activity is retrieval of history, which typically will be: from all categories, from day x to day y, get all StrategyDailyLog.
    Questions:
    First, does the overall layout look sound? I'm worried to see that there are so many relationships in all directions, connecting everything. Is this normal, or does it look like trouble?
    StrategyItem is made to represent an m:m relationship. Is it correct as I noted 1:m / 1:1 (marked red)?
    StrategyItemTimeLog and ItemTimeLog log values that both need to be retrieved together when retrieving a StrategyItem. The reason I separated them is that the first one is strategy-specific, and several strategies can reference the same item. So I thought not to duplicate those values that are not dependent on the strategy, but only on the item. Hence I also dragged out the LogTime, as it seems to be the only parameter to unite the logs. But this all looks quite disturbing with those 3 tables. Does it make sense at all? Or do you have a suggestion?
    The pink circles show my vague attempt at Aggregate Root paths. I've been thinking in terms of "what entity is responsible for delete", though I'm unsure about the actual root. I think it's Category. Does it make sense related to the User Requests described above?
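    A rough sketch of how the m:m between Strategy and Item resolves into the StrategyItem junction table described above; column names beyond the keys are assumptions, not taken from the linked schema image:

        CREATE TABLE Strategy (
            StrategyId  INT NOT NULL PRIMARY KEY,
            CategoryId  INT NOT NULL              -- 1:m  Category -> Strategy
        );

        CREATE TABLE Item (
            ItemId      INT NOT NULL PRIMARY KEY
        );

        -- StrategyItem is the junction: 1:m from Strategy and 1:m from Item,
        -- which together give the m:m relation described in point c).
        CREATE TABLE StrategyItem (
            StrategyId  INT NOT NULL,
            ItemId      INT NOT NULL,
            PRIMARY KEY (StrategyId, ItemId),
            CONSTRAINT FK_StrategyItem_Strategy FOREIGN KEY (StrategyId) REFERENCES Strategy(StrategyId),
            CONSTRAINT FK_StrategyItem_Item     FOREIGN KEY (ItemId)     REFERENCES Item(ItemId)
        );

    Seen this way, the red-marked arrows in the question are expected: a junction table always shows up as two 1:m relationships pointing into it.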

    Read the article

  • Keeping video viewing statistics breakdown by video time in a database

    - by Septagram
    I need to keep a number of statistics about the videos being watched, and one of them is which parts of the video are watched most. The design I came up with is to split the video into 256 intervals and keep a floating-point number of views for each of them. I receive the data as a number of intervals the user watched continuously. The problem is how to store them. There are two solutions I see.
    1. Row per video segment. Let's have a database table like this:
        CREATE TABLE `video_heatmap` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `video_id` int(11) NOT NULL,
          `position` tinyint(3) unsigned NOT NULL,
          `views` float NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `idx_lookup` (`video_id`,`position`)
        ) ENGINE=MyISAM
    Then, whenever we have to process a number of views, make sure the respective database rows exist and add appropriate values to the views column. I found out it's a lot faster if the existence of the rows is taken care of first (SELECT COUNT(*) of rows for a given video and INSERT IGNORE if they are lacking), and then a number of update queries is used like this:
        UPDATE video_heatmap
           SET views = views + ?
         WHERE video_id = ? AND position >= ? AND position < ?
    This seems, however, a little bloated. The other solution I came up with is:
    2. Row per video, updated in transactions. The table will look (sort of) like this:
        CREATE TABLE video (
          id INT NOT NULL AUTO_INCREMENT,
          heatmap BINARY (4 * 256) NOT NULL,
          ...
        ) ENGINE=InnoDB
    Then, every time a view needs to be stored, it will be done in a transaction with a consistent snapshot, in a sequence like this: If the video doesn't exist in the database, it is created. A row is retrieved; heatmap, an array of floats stored in binary form, is converted into a form more friendly for processing (in PHP). Values in the array are increased appropriately and the array is converted back. The row is changed via an UPDATE query.
    So far the advantages can be summed up like this:
    First approach:
    - Stores data as floats, not as some magical binary array.
    - Doesn't require transaction support, so doesn't require InnoDB, and we're using MyISAM for everything at the moment, so there won't be any need to mix storage engines. (only applies in my specific situation)
    - Doesn't require a transaction WITH CONSISTENT SNAPSHOT. I don't know what the performance penalties of those are.
    - I already implemented it and it works. (only applies in my specific situation)
    Second approach:
    - Uses a lot less storage space (the first approach stores the video ID 256 times and stores the position for every segment of the video, not to mention the primary key).
    - Should scale better, because of InnoDB's per-row locking as opposed to MyISAM's table locking.
    - Might generally work faster because there are a lot fewer requests being made.
    - Easier to implement in code (although the other one is already implemented).
    So, what should I do? If it wasn't for the rest of our system using MyISAM consistently, I'd go with the second approach, but currently I'm leaning towards the first one. But maybe there are some reasons to favour one approach or the other?
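    For reference, the second approach's read-modify-write cycle would have roughly this shape (a sketch only; the id value is illustrative, and depending on concurrency a locking read such as SELECT ... FOR UPDATE may be preferable to a plain consistent snapshot):

        START TRANSACTION WITH CONSISTENT SNAPSHOT;

        -- fetch the 1 KB blob; the application decodes it into 256 floats
        SELECT heatmap FROM video WHERE id = 42;

        -- ...increment the watched positions in application code, re-encode...

        UPDATE video SET heatmap = ? WHERE id = 42;   -- re-encoded binary blob

        COMMIT;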

    Read the article

  • What's the "best" database for embedded?

    - by mawg
    I'm an embedded guy, not a database guy. I've been asked to redesign an existing system which has bottlenecks in several places. The embedded device is based around an ARM9 processor running at 220 MHz. There should be a database of 50k entries (may increase to 250k), each with 1k of data (max 8 fields). That's approximate - I can try to get more precise figures if necessary. They are currently using SQLite 2 and planning to move to SQLite 3. Without starting a flame war - I am a complete d/b newbie just seeking advice - is that the "best" decision? I realize that this might be a "how long is a piece of string?" question, but any pointers would be greatly welcomed. I don't mind doing a lot of reading & research, but just hoped that you could get me off to a flying start. Thanks. p.s. Again, it's a total rewrite, we might not even stick with embedded Linux but switch to eCos, and don't worry too much about one-time conversion between d/b formats. Oh, and accesses should be infrequent, at most one every few seconds. edit: OK, it seems they have 30k entries (may reach 100k or more) of only 5 or 6 fields each, but at least 3 of them can be a search key for a record. They are toying with "having no d/b at all, since the data are so simple", but it seems to me that with multiple keys, we couldn't use fancy stuff like a quicksort()-type search (recursive, binary search). Any thoughts on "no d/b", just data structures? Btw, one key is 800k - not sure how well SQLite handles that (maybe with "no d/b" I have to hash that 800k to something smaller?)
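    On the "no d/b, just data structures" worry about multiple keys: with SQLite 3 (which they already plan to move to), one secondary index per search key gives an indexed B-tree lookup on each of the three fields without hand-rolling anything. A sketch, with table and column names invented for illustration:

        CREATE TABLE entries (
            id     INTEGER PRIMARY KEY,
            key_a  TEXT,                 -- the three searchable fields
            key_b  TEXT,
            key_c  INTEGER,
            data   BLOB                  -- remaining fields, up to ~1 KB
        );
        CREATE INDEX idx_key_a ON entries(key_a);
        CREATE INDEX idx_key_b ON entries(key_b);
        CREATE INDEX idx_key_c ON entries(key_c);

        -- each of these then becomes an indexed lookup rather than a scan:
        SELECT * FROM entries WHERE key_b = 'some value';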

    Read the article

  • Database Design Question regarding duplicate information.

    - by galford13x
    I have a database that contains a history of product sales, for example the following table:
        CREATE TABLE SalesHistoryTable (
            OrderID,    -- order number, unique across all orders
            ProductID,  -- product ID, can be used as a key to look up product info in another table
            Price,      -- price of the product per unit at the time of the order
            Quantity,   -- quantity of the product for the order
            Total,      -- total cost of the order for the product (Price * Quantity)
            Date,       -- date of the order
            StoreID,    -- the store that created the order
            PRIMARY KEY(OrderID));
    The table will eventually have millions of transactions. From this, profiles can be created for products in different geographical regions (based on the StoreID). Creating these profiles can be very time-consuming as a database query. For example:
        SELECT ProductID, StoreID,
               SUM(Total) AS Total,
               SUM(Quantity) AS QTY,
               SUM(Total)/SUM(Quantity) AS AvgPrice
        FROM SalesHistoryTable
        GROUP BY ProductID, StoreID;
    The above query could be used to get information on products for any particular store. You could then determine which store has sold the most, has made the most money, and on average sells for the most/least. This would be very costly to use as a normal query run at any time. What are some design decisions that would allow these types of queries to run faster, assuming storage size isn't an issue? For example, I could create another table with duplicate information: StoreID (key), ProductID, TotalCost, QTY, AvgPrice, and provide a trigger so that when a new order is received, the entry for that store is updated in the new table. The cost of the update is almost nothing. What should be considered when given the above scenario?
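    A sketch of the duplicate-information idea described above, with the summary table keyed on (StoreID, ProductID) and a MySQL-style trigger keeping it current (the trigger syntax and exact column types are assumptions, and AvgPrice is left derived rather than stored):

        CREATE TABLE SalesSummaryTable (
            StoreID    INT NOT NULL,
            ProductID  INT NOT NULL,
            TotalCost  DECIMAL(18,2) NOT NULL DEFAULT 0,
            QTY        INT NOT NULL DEFAULT 0,
            PRIMARY KEY (StoreID, ProductID)
        );

        CREATE TRIGGER trg_sales_summary
        AFTER INSERT ON SalesHistoryTable
        FOR EACH ROW
            INSERT INTO SalesSummaryTable (StoreID, ProductID, TotalCost, QTY)
            VALUES (NEW.StoreID, NEW.ProductID, NEW.Total, NEW.Quantity)
            ON DUPLICATE KEY UPDATE
                TotalCost = TotalCost + NEW.Total,
                QTY       = QTY + NEW.Quantity;

    Per-store, per-product profiles then become a primary-key lookup instead of a full scan, and AvgPrice is simply TotalCost / QTY at query time.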

    Read the article

  • Saving Abstract and Sub classes to database

    - by bretddog
    Hi, I have an abstract class "StrategyBase", and a set of sub classes, StrategyA/B/C etc. The sub classes use some of the properties of the base class, and have some individual properties. My question is how to save this to a database. I'm currently using SQL CE, and Linq-To-Sql, creating entity classes automatically with SqlMetal.exe. I've seen there are three solutions shown in this question, but I'm not able to see how these solutions will work or not with SqlMetal/entity classes, though it seems to me the "concrete table inheritance" would probably work without any manual modification. What about the other two, would they be problematic? For "Single Table Inheritance", wouldn't all classes get all variables, even though they don't need them? And for the "Class Table Inheritance" solution I can't really see at all how that will map into the entity classes for a useful purpose. I may note that I extend these partial entity classes to make the classes of my business objects. I may also consider moving to Entity Framework instead of SqlMetal/Linq2Sql, so it would be nice to know if that makes any difference to which schema is easy to implement. One likely important thing to note is that I will constantly be developing new strategies, which means I have to modify the program code, and probably the database schema, when adding a new strategy. Sorry the question is a bit "all over the place", but hopefully there are some clear advantages/disadvantages here that you may be able to advise on. Cheers!
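    For the "Single Table Inheritance" option, the answer to "wouldn't all classes get all variables?" is yes at the table level: every subtype column exists in the one table and is simply NULL for rows of other subtypes, plus a discriminator column that the O/RM's inheritance mapping (LINQ to SQL's InheritanceMapping, or EF's table-per-hierarchy) uses to decide which subclass to materialize. A sketch with invented column names:

        CREATE TABLE Strategy (
            StrategyId     INT IDENTITY PRIMARY KEY,
            StrategyType   NVARCHAR(50) NOT NULL,   -- discriminator: 'StrategyA', 'StrategyB', ...
            SharedSetting  NVARCHAR(100) NULL,      -- used by StrategyBase
            AOnlySetting   INT NULL,                -- only meaningful for StrategyA
            BOnlySetting   FLOAT NULL               -- only meaningful for StrategyB
        );

    Adding a new strategy then usually means adding nullable columns (plus code), whereas concrete table inheritance means adding a whole table per new strategy.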

    Read the article

  • Database design: Calculating the Account Balance

    - by 001
    How do I design the database to calculate the account balance?
    1) Currently I calculate the account balance from the transaction table. In my transaction table I have "description" and "amount", etc. I then add up all the "amount" values and that works out the user's account balance. I showed this to my friend and he said that is not a good solution; when my database grows it's going to slow down???? He said I should create a separate table to store the calculated account balance. If I did this, I would have to maintain two tables, and it's risky: the account balance table could go out of sync. Any suggestions?
    EDIT: OPTION 2: should I add an extra column to my transaction table, "Balance"? Then I would not need to go through many rows of data to perform my calculation. Example: John buys $100 credit, he is debited $60, he then adds $200 credit.
        Amount $100, Balance $100.
        Amount -$60, Balance $40.
        Amount $200, Balance $240.
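    Option 1 stays a single query for as long as an index keeps it cheap; a sketch with assumed table and column names:

        -- the derived balance: nothing to keep in sync
        SELECT COALESCE(SUM(amount), 0) AS balance
        FROM transactions
        WHERE account_id = 42;

        -- an index on (account_id, amount) lets the sum be answered from the index alone
        CREATE INDEX idx_tx_account ON transactions (account_id, amount);

    A cached balance (the friend's separate table, or Option 2's running balance column) only becomes worth the out-of-sync risk once accounts accumulate a very large number of rows.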

    Read the article

  • Build a database from MS Word list information...

    - by Jayron Soares
    Can someone please advise me on how to approach a given problem: I have a sequential list of metadata in a document in MS Word. The basic idea is to create a Python algorithm to iterate over the information, retrieving just the name of each PROCESS when a query is made from a database. For example:
        Process: Process Walker (1965)
        Exact reference: Walker Process Equipment, Inc. v. Food Machinery Corp.
        Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol=
        Type of procedure: Certiorari to the United States Court of Appeals for the Seventh Circuit.
        Parties: Walker Process Equipment, Inc.
        Sector: Systems is …
        Start Date: Argued October 12-13, 1965
        Summary: Food Machinery Company has initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment.
        Report of the evolution process: petitioner, in answer to respond ..
        Importance: a) First case which established an analysis for the diagnosis of dispute…
    There are about 200 pages containing the information above. I have in mind the idea of creating an algorithm in Python to be able to break up this sequenced information and try to store it in a web database [an open source application that I'm looking for] in order to allow free consultations ...
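    A possible target schema for the parsed records, using one column per labelled field from the sample above (table name, column names and types are assumptions; the Python parser would simply INSERT one row per "Process:" block):

        CREATE TABLE process_case (
            id               INT AUTO_INCREMENT PRIMARY KEY,
            process_name     VARCHAR(255) NOT NULL,   -- e.g. 'Process Walker (1965)'
            exact_reference  TEXT,
            link             TEXT,
            procedure_type   TEXT,
            parties          TEXT,
            sector           TEXT,
            start_date       VARCHAR(100),
            summary          TEXT,
            evolution_report TEXT,
            importance       TEXT
        );

    Retrieving just the process names (the stated goal) then reduces to SELECT process_name FROM process_case.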

    Read the article

  • Announcing: Oracle Enterprise Manager 12c Delivers Advanced Self-Service Automation for Oracle Database 12c Multitenant

    - by Scott McNeil
    New Self-Service Driven Provisioning of Pluggable Databases
    Today Oracle announced new capabilities that support managing the full lifecycle of pluggable database as a service in Oracle Enterprise Manager 12c Release 3 (12.1.0.3). This latest release builds on the existing capabilities to provide advanced automation for deploying database as a service using the Oracle Database 12c Multitenant option. It takes it one step further by offering pluggable database as a service through the Oracle Enterprise Manager 12c self-service portal, providing customers with fast provisioning of database cloud services with minimal time and effort. This is a significant addition to Oracle Enterprise Manager 12c's existing portfolio of cloud services, which includes infrastructure as a service, database as a service, testing as a service, and Java platform as a service. The solution provides a self-service mechanism to provision pluggable databases, allowing users to request and access database(s) on demand. The self-service operations are also enabled through REST APIs, allowing customers to integrate with third-party automation systems or their custom enterprise portals.
    Benefits:
    - Self-service provisioning allows rapid access to pluggable database as a service for hosting or certifying applications on Oracle Database 12c
    - Self-service driven migration to pluggable database as a service, in order to migrate a pre-Oracle Database 12c database to a pluggable database as a service model and test the consolidation strategy
    - Single service catalog for all approved pluggable database as a service configurations, which helps customers achieve standardization while catering to all applications and users in the enterprise
    - Resource guarantee via database resource manager (and IORM on Oracle Exadata) that enables deployment of mixed workloads in a shared environment
    - Quota, role-based access, and policy-based management that enforces governance and reduces administrative overhead
    - Chargeback or showback, which improves metering and accountability for services consumed by each pluggable database
    - Comprehensive REST APIs that support integration with ticketing or change management systems, and/or with other self-service portals
    - Minimal administrative and maintenance overhead through self-managing automation that allows for intelligent placement of pluggable databases
    To understand how pluggable database as a service works, watch this quick demo.
    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter
    Download the Oracle Enterprise Manager Cloud Control 12c Mobile app

    Read the article

  • Handling changes to data types and entries in a database migration

    - by jandjorgensen
    I'm fully redesigning a site that indexes a number of articles with basic search functionality. The previous site was written about a decade ago, and I'm salvaging about 30,000 entries with data stored in less-than-ideal formats. While I'm moving from MSSQL to MySQL, I don't need to make any "live" changes, so this is not a production-level migration issue so much as a redesign. For instance, dates are stored the same as tags/subjects about the articles, but in strings as "YYYYMMDDd" (the lowercase d stands for "date" in the string). Essentially, before or after I move from the previous database format to a new one, I'm going to need to do a lot of replacement of individual entries. While I understand how to do operations with regular expressions in non-database issues, my database experience isn't robust enough to know the best way to handle this. What is the best (or standard) way to handle major changes like this? Is there an SQL operation I should be looking into? Please let me know if the problem isn't clear--I'm not entirely sure what kind of answer I'm looking for.
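    For the "YYYYMMDDd" date strings specifically, the cleanup can usually stay inside SQL rather than regex scripting; a sketch against the new MySQL schema, with table and column names invented for illustration:

        ALTER TABLE articles ADD COLUMN published_on DATE NULL;

        UPDATE articles
           SET published_on = STR_TO_DATE(LEFT(date_tag, 8), '%Y%m%d')
         WHERE date_tag REGEXP '^[0-9]{8}d$';

    The same pattern, one ALTER plus one set-based UPDATE per cleanup rule, covers most of the other string-to-typed-column conversions during the migration.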

    Read the article

  • Staying OO and Testable while working with a database

    - by Adam Backstrom
    What are some OOP strategies for working with a database while keeping things testable? Say I have a User class and my production environment works against MySQL. I see a couple of possible approaches, shown here using PHP:
    1. Pass in a $data_source with interfaces for load() and save(), to abstract the backend source of data. When testing, pass in a different data store.
        $user = new User( $mysql_data_source );
        $user->load( 'bob' );
        $user->setNickname( 'Robby' );
        $user->save();
    2. Use a factory that accesses the database and passes the result row to User's constructor. When testing, manually generate the $row parameter, or mock the object in UserFactory::$data_source. (How might I save changes to the record?)
        class UserFactory {
            static $data_source;
            public static function fetch( $username ) {
                $row = self::$data_source->get( [params] );
                $user = new User( $row );
                return $user;
            }
        }
    I have Design Patterns and Clean Code here next to me, but I'm struggling to find applicable concepts.

    Read the article

  • Android Card Game Database for Deck Building

    - by Singularity222
    I am making a card game for Android where a player can choose from a selection of cards to build a deck that would contain around 60 cards. Currently, I have the entire database of cards created that the user can browse. The next step is allowing the user to select cards and create a deck with whatever cards they would like. I have a form where the user can search for specific cards based on a few different attributes. The search results are displayed in a ListActivity. My thought about deck creation is to add the primary key of each card the user selects to a SQLite database table, along with the quantity they would like in the deck. This way, as the user performs searches for cards, they can see the state of the deck. Once the user decides to save the deck, I'll export the card list to XML and wipe the contents of the table. If the user wanted to make changes to the deck, they would load it, and it would be parsed back into the table so they could make the changes. A similar situation would occur when they eventually load the deck to play a game. I'm just curious what the rest of you may think of this method. Currently, this is a personal project and I am the only one working on it. If I can figure out the best implementation before I even begin coding, I'm hoping to save myself some time and trouble.
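    The working-deck table described above only needs two columns; a SQLite sketch (table and column names are assumed, with card(id) standing in for the existing card database's primary key):

        CREATE TABLE deck_card (
            card_id   INTEGER NOT NULL PRIMARY KEY REFERENCES card(id),
            quantity  INTEGER NOT NULL DEFAULT 1
        );

        -- add a selected card, or bump its count if it is already in the working deck
        INSERT OR IGNORE INTO deck_card (card_id, quantity) VALUES (123, 0);
        UPDATE deck_card SET quantity = quantity + 1 WHERE card_id = 123;

    The two-statement add works on any SQLite version shipped with Android, and clearing the table after the XML export is a single DELETE FROM deck_card.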

    Read the article

  • How to quickly search through a very large list of strings / records on a database

    - by Giorgio
    I have the following problem: I have a database containing more than 2 million records. Each record has a string field X and I want to display a list of records for which field X contains a certain string. Each record is about 500 bytes in size. To make it more concrete: in the GUI of my application I have a text field where I can enter a string. Above the text field I have a table displaying the (first N, e.g. 100) records that match the string in the text field. When I type or delete one character in the text field, the table content must be updated on the fly. I wonder if there is an efficient way of doing this using appropriate index structures and / or caching. As explained above, I only want to display the first N items that match the query. Therefore, for N small enough, it should not be a big issue loading the matching items from the database. Besides, caching items in main memory can make retrieval faster. I think the main problem is how to find the matching items quickly, given the pattern string. Can I rely on some DBMS facilities, or do I have to build some in-memory index myself? Any ideas? EDIT I have run a first experiment. I have split the records into different text files (at most 200 records per file) and put the files in different directories (I used the content of one data field to determine the directory tree). I end up with about 50000 files in about 40000 directories. I have then run Lucene to index the files. Searching for a string with the Lucene demo program is pretty fast. Splitting and indexing took a few minutes: this is totally acceptable for me because it is a static data set that I want to query. The next step is to integrate Lucene in the main program and use the hits returned by Lucene to load the relevant records into main memory.
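    Before committing to an external index, it may be worth comparing the Lucene experiment against what the DBMS can do natively; in MySQL a FULLTEXT index on field X handles word and word-prefix matching server-side (a sketch with assumed names; note it will not find arbitrary substrings in the middle of a word):

        ALTER TABLE records ADD FULLTEXT INDEX ft_x (x);

        SELECT *
        FROM records
        WHERE MATCH(x) AGAINST ('searchterm*' IN BOOLEAN MODE)
        LIMIT 100;   -- only the first N matches are displayed anyway

    If true substring matching (the LIKE '%abc%' case) is required, an external full-text engine such as Lucene, possibly with an n-gram analyzer, remains the usual answer.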

    Read the article

  • Database migrations for SQL Server

    - by Art
    I need a database migration framework for SQL Server, capable of managing both schema changes and data migrations. I guess I am looking for something similar to django's South framework here. Given the fact that South is tightly coupled with django's ORM, and the fact that there are so many ORMs for SQL Server, I guess having just a generic migration framework, enabling you to write and execute SQL data/schema change scripts in a controlled and sequential manner, should be sufficient.
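    Whatever tool is chosen, the core of such a framework is usually just a version table plus numbered scripts applied in order; a minimal T-SQL sketch of that bookkeeping (names are illustrative):

        CREATE TABLE SchemaVersion (
            VersionId   INT           NOT NULL PRIMARY KEY,  -- matches the numbered script files
            ScriptName  NVARCHAR(255) NOT NULL,
            AppliedOn   DATETIME      NOT NULL DEFAULT GETDATE()
        );

        -- a runner executes every script whose number is greater than this:
        SELECT COALESCE(MAX(VersionId), 0) AS CurrentVersion FROM SchemaVersion;

    Migration tools for SQL Server essentially wrap this pattern with transaction handling and command-line tooling.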

    Read the article

  • Database migrations for MS SQL Server

    - by Art
    I need a database migration framework for MS SQL Server, capable of managing both schema changes and data migrations. I guess I am looking for something similar to django's South framework here. Given the fact that South is tightly coupled with django's ORM, and the fact that there are so many ORMs for MS SQL, I guess having just a generic migration framework, enabling you to write and execute SQL data/schema change scripts in a controlled and sequential manner, should be sufficient. Thanks!

    Read the article

  • Storing Preferences/One-to-One Relationships in Database

    - by LnDCobra
    What is the best way to store settings for certain objects in my database?
    Method one: using a single table
        Table: Company {CompanyID, CompanyName, AutoEmail, AutoEmailAddress, AutoPrint, AutoPrintPrinter}
    Method two: using two tables
        Table 1: Company {CompanyID, CompanyName}
        Table 2: CompanySettings {CompanyID, AutoEmail, AutoEmailAddress, AutoPrint, AutoPrintPrinter}

    Read the article

  • What is a columnar database?

    - by Raj More
    I have been working with warehousing for a while now. I am intrigued by columnar databases and the speed that they have to offer for data retrieval. I have a multi-part question:
    1. How do columnar databases work?
    2. How do they differ from relational databases?
    3. Is there a trial version of a columnar database I can install to play around with? (I am on Windows 7)

    Read the article

  • Database Table of Boolean Values

    - by guazz
    What's the best method of storing a large number of booleans in a database table? Should I create a column for each boolean value, or is there a more optimal method?
        Employee table: IsHardWorking, IsEfficient, IsCrazy, IsOverworked, IsUnderpaid, ...etc.
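    One alternative to a column per flag, if the set of flags keeps growing, is a narrow attribute table where a row's presence means TRUE (a sketch, names assumed):

        CREATE TABLE employee_flag (
            employee_id INT         NOT NULL,
            flag_name   VARCHAR(50) NOT NULL,   -- 'IsHardWorking', 'IsEfficient', ...
            PRIMARY KEY (employee_id, flag_name)
        );

        -- all overworked employees:
        SELECT employee_id FROM employee_flag WHERE flag_name = 'IsOverworked';

    A column per flag stays simpler and faster to query when the list of booleans is fixed and small; the attribute table trades that for not needing an ALTER TABLE every time a new flag appears.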

    Read the article

  • Database design for heavy timed data logging

    - by user293995
    Hi, I have an application where I receive 40,000 rows of data at a time, and I have 5 million rows to handle (a 500 MB MySQL 5.0 database). Currently, those rows are all stored in the same table, which makes it slow to update, hard to back up, and so on. What kind of schema is used in such applications to allow long-term accessibility of the data without the problems of overly large tables, with easy backups and fast reads/writes? Is PostgreSQL better than MySQL for this purpose? Thanks in advance. Best regards
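    One common answer to "one huge log table" is time-based range partitioning, which keeps inserts fast and lets old data be backed up or dropped per partition; a MySQL sketch (this needs MySQL 5.1 or later, so slightly newer than the 5.0 instance mentioned, and the table and column names are assumptions):

        CREATE TABLE sensor_log (
            logged_at  DATETIME NOT NULL,
            device_id  INT      NOT NULL,
            value      DOUBLE   NOT NULL
        )
        PARTITION BY RANGE (TO_DAYS(logged_at)) (
            PARTITION p2010_03 VALUES LESS THAN (TO_DAYS('2010-04-01')),
            PARTITION p2010_04 VALUES LESS THAN (TO_DAYS('2010-05-01')),
            PARTITION pmax     VALUES LESS THAN MAXVALUE
        );

    PostgreSQL offers the same idea via table inheritance based partitioning, so the schema question matters more than the choice of engine here.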

    Read the article

  • Good tool to visualise database schema?

    - by Mat
    Are there any good tools for visualising a pre-existing database schema? I'm using MySQL if it matters. I'm currently using MySQL Workbench to process an SQL create script dump, but it's clunky, slow and a manual process to drag all the tables about (which would be okay if it wasn't so slow).

    Read the article

  • Internal implementation of database queries

    - by harigm
    In my experience I have used many queries - SELECT, ORDER BY, WHERE clauses, etc. - in MySQL, SQL Server, Oracle, and so on. For a moment I have wondered:
    1) How are these queries implemented internally?
    2) Which language do they use?
    3) Is it a programming language? If yes, which language?
    4) What kind of environment is required to implement this kind of complex database?

    Read the article
