Search Results

Search found 30474 results on 1219 pages for 'relational database'.


  • How In-Memory Database Objects Affect Database Design: The Conceptual Model

    - by drsql
    After a rather long break in the action to get through some heavy tech editing work (paid work before blogging, I always say!), it is time to start working on this presentation about In-Memory Databases. I have been trying to decide on the scope of the demo code in the back of my head, and I have added and taken away bits and pieces over time, trying to find the balance of "enough" complexity to show data integrity issues and joins, but not so much that we get lost in the process of trying to...(read more)

    Read the article

  • Export FoxBase database tables to CSV

    - by RKS
    I have no experience with FoxBase whatsoever; I'm used to working with MySQL via phpMyAdmin or similar interfaces. My company has a third-party database we're trying to move away from, but we have no support from the vendor. The database is on our servers, but in a FoxBase format. What kind of tools do I need to convert these tables into other formats, or is there any type of admin UI I can tell people to use to export to CSV or anything like that? Basically, I'm asking how to export the FoxBase tables to CSV. Sorry if this question isn't clear or you need more information; I will edit with anything else you need.

    Read the article

  • Books or guides regarding secure key storage and database encryption

    - by Matty
    I have an idea for a SaaS product I want to create; however, this product will store extremely sensitive data that needs to be encrypted at rest. The trouble is not so much the encryption, but the problem of securely storing the keys, so that in the event the server was somehow compromised, the keys couldn't just be recovered and used to decrypt the database. Are there any decent books or guides regarding database encryption, and in particular secure key storage? This seems to be a less-than-straightforward topic and something that is difficult to get right. I'm seeing multiple ways to attack such a system, but I am unable to come up with one that is secure enough to store highly confidential information.
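    For one concrete point of reference, SQL Server's built-in key hierarchy lets the data key live in the database encrypted by a certificate (a minimal sketch; the Secrets table and all names here are invented for illustration):

        -- Key hierarchy: master key -> certificate -> symmetric data key.
        CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'use-a-strong-passphrase';
        CREATE CERTIFICATE DataCert WITH SUBJECT = 'Data encryption certificate';
        CREATE SYMMETRIC KEY DataKey
            WITH ALGORITHM = AES_256
            ENCRYPTION BY CERTIFICATE DataCert;

        -- Encrypt a value on the way in (Secrets.Payload assumed varbinary(max)).
        OPEN SYMMETRIC KEY DataKey DECRYPTION BY CERTIFICATE DataCert;
        INSERT INTO Secrets (Payload)
        VALUES (ENCRYPTBYKEY(KEY_GUID('DataKey'), N'highly confidential'));
        CLOSE SYMMETRIC KEY DataKey;

    Note that this alone doesn't answer the question being asked: the whole key chain still sits on the same server, which is exactly why external key storage (HSMs, key-management services) is the part worth reading up on.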

    Read the article

  • Why is database developer pay so high? [closed]

    - by user433500
    Just wondering why someone would get $10k+ in some areas of the US for just writing queries and creating tables, while the average salary for someone who does scripting, object-oriented programming, J2EE, and databases all together is only ~$12k in New York City. Are there similar opportunities in cities like New York where doing only database work gets one $10k+? What is the rationale for companies paying such a high salary to consultants for just writing simple queries? I am sure a college grad can do that with ease and would be quite satisfied with $60k+ pay for a couple of years. Does location really matter so much?

    Read the article

  • Database for survey

    - by zfm
    One of my jobs now is to design a database for a survey. Let's say we have a series of web-based questions, with one question per page. Not every person will be given the same questions; which questions appear is based on their previous answers and also on randomness. I would like to know whether it is better to have a database like this:

        user  question  answer
        userX question1 answer1A
        userX question2 answer2C
        userX question5 answer5F
        userY question1 answer1B
        userY question3 answer3B
        userY question6 answer6D
        ...

    or like this:

        user  q1 q2   q3   q4   q5   q6
        userX 1A 2C   null null 5F   null
        userY 1B null 3B   null null 6D
        ...

    My idea here is that the second approach seems better; however, I would like to know whether updating a row is (much) slower than inserting a new one. Also, with the first approach I can omit the null answers. The total set of questions given is fixed; the client won't add any more questions later on. So my question is: what would you do if you were me?
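    Expressed as DDL, the first (row-per-answer) layout would look something like this (a sketch; the types and names are assumptions):

        CREATE TABLE survey_answer (
            user_id     INTEGER NOT NULL,
            question_id INTEGER NOT NULL,
            answer      VARCHAR(10) NOT NULL,
            PRIMARY KEY (user_id, question_id) -- one row per question a user actually saw
        );

        -- Recording an answer is a plain INSERT, and unanswered questions
        -- simply have no row, so no NULL columns are needed:
        INSERT INTO survey_answer (user_id, question_id, answer)
        VALUES (42, 5, '5F');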

    Read the article

  • Breaking into database administration

    - by user603794
    Hello to all, I am brand new on this site and look forward to interacting with each of you. I am graduating in June with my Bachelor's degree in Computer Information Systems, with hopes of becoming a DBA in the future. I am currently taking a database class and studying SQL Server/T-SQL on the side. My experience in IT is limited to managing an Access database at my last employer for two years. What are the chances of landing a junior DBA position after I graduate?

    Read the article

  • How to copy a SQL Server database on the same server

    - by Sam
    I've got SQL Server 2008 and want to make a copy of a database so I have a second version of the database for testing on the same server. The Copy Database Wizard is not able to copy the database; it always produces odd error messages about missing objects (using the SMO copy method). When I try to make a backup and restore it under a different database name, it keeps the file names of the original database and overwrites them, crashing the original database. So how do I copy a SQL database? Shut down SQL Server, copy the physical files, and attach them? Maybe a command-line tool for database copies? Shouldn't there be an easy way to make a copy?
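    One route that avoids the file-name collision is backup/restore with WITH MOVE, which renames the physical files during the restore (a sketch; database, logical file, and path names are placeholders - check yours with RESTORE FILELISTONLY):

        BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb.bak' WITH INIT;

        RESTORE DATABASE MyDb_Test
        FROM DISK = N'C:\Backups\MyDb.bak'
        WITH MOVE N'MyDb'     TO N'C:\Data\MyDb_Test.mdf',     -- new data file
             MOVE N'MyDb_log' TO N'C:\Data\MyDb_Test_log.ldf'; -- new log file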

    Read the article

  • How In-Memory Database Objects Affect Database Design: Hybrid Code

    - by drsql
    In my first attempts at building my code, I strictly went with either native or on-disk code. I specifically wrote the on-disk code to use only features that worked in-memory. This led to one majorly silly bit of code, used to create system-assigned key values. How would I create a customer number that was unique? We can't use the MAX(value) + 1 approach because it gets very hideous with MVCC isolation levels, since 100 connections might see the same value, leading to lots of duplication. You...(read more)
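    One common alternative (a sketch, not necessarily the approach the author settled on) is a SEQUENCE object, which hands out unique values without reading committed rows, so it behaves the same under MVCC:

        CREATE SEQUENCE dbo.CustomerNumber START WITH 1 INCREMENT BY 1;

        -- Each call returns a distinct value even with 100 concurrent
        -- connections, unlike MAX(value) + 1, which many sessions can
        -- compute identically.
        DECLARE @CustomerNumber int = NEXT VALUE FOR dbo.CustomerNumber;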

    Read the article

  • Best way to auto-restore a database every hour

    - by aron
    I have a demo site where anyone can log in and test a management interface. Every hour I would like to flush all the data in the SQL Server 2008 database and restore it from the original. Red Gate Software has some awesome tools for this; however, they are beyond my budget right now. Could I simply make a backup copy of the database's data file, then have a C# console app that deletes it and copies over the original? Then I could have a Windows scheduled task run the .exe every hour. It's simple and free... would this work? I'm using SQL Server 2008 R2 Web edition. I understand that Red Gate's tooling is technically better because I can set it to analyze the database and only update the records that were altered, whereas the approach above is more of a sledgehammer.
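    A plain T-SQL variant of the same idea (a sketch; the database and path names are placeholders) is to take one backup of the pristine database and restore over the demo copy on a schedule, which avoids juggling live data files:

        -- Run from the master database, e.g. via sqlcmd in the scheduled task.
        ALTER DATABASE DemoDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE; -- drop demo sessions
        RESTORE DATABASE DemoDb
        FROM DISK = N'C:\Backups\DemoDb_original.bak'
        WITH REPLACE;
        ALTER DATABASE DemoDb SET MULTI_USER;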

    Read the article

  • Database structure for various items

    - by XGouchet
    I'm building an SQLite database for an Android app which will hold a list of items, each of which has different characteristics. Some of the characteristics are available for all objects; some are only relevant for a subset of objects. For example, all my items have a name, a description, and an image. Some items will also have an expiration date; others won't. Some will have a size; some won't. Etc... How should I build my database, given that I don't know how many characteristics may be added in the future, and knowing I should be able to filter the list by any characteristic?
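    One common pattern for this (a sketch only; the table and column names are invented) is an entity-attribute-value layout: the universal characteristics stay as columns, and the open-ended ones become rows:

        CREATE TABLE item (
            id          INTEGER PRIMARY KEY,
            name        TEXT NOT NULL,
            description TEXT,
            image       BLOB
        );

        CREATE TABLE item_attribute (
            item_id INTEGER NOT NULL REFERENCES item(id),
            name    TEXT NOT NULL,  -- e.g. 'expiration_date', 'size'
            value   TEXT,           -- stored as text; cast as needed
            PRIMARY KEY (item_id, name)
        );

        -- Filtering by any characteristic becomes a join:
        SELECT i.*
        FROM item i
        JOIN item_attribute a ON a.item_id = i.id
        WHERE a.name = 'size' AND a.value = 'XL';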

    Read the article

  • Data Warehouse: One Database or many?

    - by drrollins
    At my new company, they keep all data associated with the data warehouse, including import, staging, audit, dimension, and fact tables, together in the same physical database. I've been a database developer for a number of years now, and this consolidation of function and form seems counter to everything I know. It seems to make security, backup/restore, and performance management more manually intensive. Is this something that is done in the industry? Are there substantial reasons for doing or not doing it? The platform is Netezza. The size is in terabytes, hundreds of millions of rows. What I'm looking to get from answers to this question is a solid understanding of how right or wrong this path is. From your experience, what are the issues I should focus on arguing if this is a path that will cause trouble for us down the road? If it is no big deal, then I'd like to know that as well.
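    A middle ground some shops use (a sketch in PostgreSQL-flavored syntax for illustration; Netezza's equivalents differ) is one physical database with the layers separated into schemas, so security can be granted per layer rather than per table:

        CREATE SCHEMA staging;
        CREATE SCHEMA warehouse;

        CREATE TABLE staging.orders_raw (
            order_id  bigint,
            loaded_at timestamp
        );

        CREATE TABLE warehouse.fact_orders (
            order_key  bigint PRIMARY KEY,
            order_date date
        );

        -- report_users is an assumed role.
        GRANT USAGE ON SCHEMA warehouse TO report_users;
        GRANT SELECT ON ALL TABLES IN SCHEMA warehouse TO report_users;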

    Read the article

  • Handle all authentication logic in database or code?

    - by Snuffleupagus
    We're starting a new(ish) project at work that has been handed off to me. A lot of the database-side work has been fleshed out, including some stored procedures. One of the stored procedures, for example, handles creation of a new user. All of the data is validated in the stored procedure (for example, the password must be at least 8 characters long, must contain numbers, etc.), and other things, such as hashing the password, are done in the database as well. Is it normal/right for everything to be handled in the stored procedure instead of the application itself? It's nice that any application can use the stored procedure and get the same validation, but the application should have a standard framework/API function that solves the same problem. I also feel like it takes the logic away from the application and is going to be harder to maintain and add new features to.
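    For context, the kind of procedure being described looks roughly like this (a hypothetical sketch, not the project's actual code; the Users table and its columns are assumed):

        CREATE PROCEDURE dbo.CreateUser
            @UserName nvarchar(50),
            @Password nvarchar(128)
        AS
        BEGIN
            -- Validation rules live in the database...
            IF LEN(@Password) < 8 OR @Password NOT LIKE '%[0-9]%'
            BEGIN
                RAISERROR('Password must be at least 8 characters and contain a number.', 16, 1);
                RETURN;
            END;

            -- ...and so does the hashing (salting omitted for brevity).
            INSERT INTO dbo.Users (UserName, PasswordHash)
            VALUES (@UserName, HASHBYTES('SHA2_256', @Password));
        END;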

    Read the article

  • Calling a database driver in a Java app [basically a Swing app]

    - by user993250
    I have made a Java application that allows a user to choose from certain standards and then allows him to customize those standards according to his needs. Now, the customization [via a Swing application] needs to be persistent. For this we use a database [MySQL/Access] and hook it to the application, so that with each customization made, a table [if non-existent] is created [this happens at runtime, so we cannot pre-determine the table names or the keys of the tables, etc.] and an appropriate entry in the table is made. I have written the driver code for this connection. How do I call it in the Java application, and what approach should I take? I would much appreciate it if somebody could refer me to an example that shows not only a sample connection being made via the driver but also the appropriate calls to the database, so that I can use it as a guide.

    Read the article

  • Eloquera Database 2.7.0 is released (native .NET object database)

    Eloquera (www.eloquera.com) was originally designed and developed for use in the Web environment, and it is written as a native .NET application in C#; Eloquera wasn't ported from Java like many other databases. As part of its architecture, Eloquera natively supports saving data with a single line of code:

        // Create the object we would like to work with.
        Movie movie = new Movie() { Location = "Sydney", Year = 2010, OpenDates = new DateTime[] { new DateTime(2003, 12, 10), ...

    Read the article

  • How to create a nested form in Backbone-relational?

    - by jebek
    I would like to be able to create nested models at the same time in Backbone. I know how to use Backbone-relational to create the parent model; then, once it is saved, I can create child models through Backbone-relational. However, I want to be able to create both the parent and child models at the same time, which might not be possible, because I can only create the child model once the parent model has already been created. For example, let's say I was creating a forum like the one from the awesome Backbone-relational tutorial - http://antoviaque.org/docs/tutorials/backbone-relational-tutorial/. I would want to create a thread and a message at the same time (through the click of a single button) rather than create a thread and then a message. Is this possible? Is there a better way of doing this that I'm not thinking of?

    Read the article

  • Synthetic database records

    - by michipili
    Assume we are getting some statistics from a customer, which we analyse before sending our comments back to the customer. Now, the customer tells us that the statistics they computed between January and March are based on a wrong methodology and sends us corrected series. We want to perform our analysis with both the wrong and the correct set of data, which are huge and differ only from January to March. Therefore, we need something like synthetic database records implementing the following logic:

        synthetic[1] = wrong_data
        synthetic[2] = correct_data between January and March, wrong_data otherwise

    With this, we can easily perform our analyses on synthetic records. Should such synthetic records be implemented in the application logic or on the database side? What are common pitfalls of such an implementation?
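    On the database side, this can be expressed as a view that splices the corrected window into the original series (a sketch with made-up table and column names):

        CREATE VIEW synthetic_series AS
        SELECT w.obs_date,
               CASE
                   WHEN w.obs_date >= DATE '2013-01-01'
                    AND w.obs_date <  DATE '2013-04-01'
                       THEN c.value  -- corrected data for January..March
                   ELSE w.value      -- wrong data everywhere else
               END AS value
        FROM wrong_series w
        LEFT JOIN correct_series c ON c.obs_date = w.obs_date;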

    Read the article

  • Basic database table design in Rails

    - by runcode
    I am confused about a concept. I am doing this in Rails. Is an entity set equal to a table in the database? Is a relationship set equal to a table in the database? Let's say we have the entity set "USER", the entity set "POST", and the entity set "COMMENT".

        User    - can post as many posts and comments as they want
        Post    - belongs to a user
        Comment - belongs to a post and a user, so COMMENT is a weak entity

    SCHEMA
    ======

        USER
        -id
        -name

        POST
        -id
        -user_id (FK)
        -comment_id (FK)

        COMMENT
        -id
        -user_id (FK)
        -post_id (FK)

    So USER, POST, and COMMENT are tables, I think. What else is a table? And do I need a table for the relationship?
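    For 1:m relationships like these, the foreign key lives on the "many" side and no extra table is needed; a junction table only appears for m:m relationships. A sketch of the DDL (note that POST does not need a comment_id column - COMMENT's post_id already captures that link):

        CREATE TABLE users (
            id   INTEGER PRIMARY KEY,
            name VARCHAR(100) NOT NULL
        );

        CREATE TABLE posts (
            id      INTEGER PRIMARY KEY,
            user_id INTEGER NOT NULL REFERENCES users(id)
        );

        CREATE TABLE comments (
            id      INTEGER PRIMARY KEY,
            user_id INTEGER NOT NULL REFERENCES users(id),
            post_id INTEGER NOT NULL REFERENCES posts(id)
        );

    In Rails terms, these are the tables behind has_many/belongs_to; only has_and_belongs_to_many (or has_many :through) would add a table for the relationship itself.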

    Read the article

  • ORACLE MASTER Platinum holders present: an Oracle Database seminar

    - by Yusuke.Yamamoto

    An evening seminar in which ORACLE MASTER Platinum holders cover Oracle Database topics, including SQL. Date: 2011-03-11 (Fri), 18:30-20:30.

    Read the article

  • Creating a test database with copied data *and* its own data

    - by Jordan Reiter
    I'd like to create a test database that is refreshed each day with data from the production database. BUT, I'd like to be able to create records in the test database and retain them rather than having them be overwritten. I'm wondering if there is a simple, straightforward way to do this. Both databases run on the same server, so apparently that rules out replication? For clarification, here is what I would like to happen: the test database is created with production data; I create some test records that I want to keep on the test server (basically so I can have example records that I can play with); the next day, the database is completely refreshed, but the records I created are retained. Records that were untouched that day are replaced with records from the production database. The complication is that if a record in the production database is deleted, I want it to be deleted in the test database too, so I do want to get rid of records in the test database that no longer exist in the production database, unless those records were created within the test database. It seems like the only way to do this would be to have some sort of table storing metadata about the records being created? So, for example, something like this:

        CREATE TABLE MetaDataRecords (
            id integer not null primary key auto_increment,
            tablename varchar(100),
            action char(1),
            pk varchar(100)
        );

        DELETE FROM testdb.users
        WHERE NOT EXISTS (SELECT * FROM proddb.users
                          WHERE proddb.users.id = testdb.users.id)
          AND NOT EXISTS (SELECT * FROM testdb.MetaDataRecords
                          WHERE testdb.MetaDataRecords.pk = testdb.users.pk
                            AND testdb.MetaDataRecords.action = 'C'
                            AND testdb.MetaDataRecords.tablename = 'users');
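    To populate that metadata automatically, a trigger on each test table is one option (a MySQL-flavored sketch; it assumes the MetaDataRecords table above and a users table keyed by id):

        DELIMITER //
        CREATE TRIGGER users_track_created
        AFTER INSERT ON testdb.users
        FOR EACH ROW
        BEGIN
            -- Record that this row was born in the test database,
            -- so the daily refresh knows to keep it.
            INSERT INTO testdb.MetaDataRecords (tablename, action, pk)
            VALUES ('users', 'C', NEW.id);
        END //
        DELIMITER ;

    The refresh job would need to disable or bypass this trigger while copying production rows, or the copies would be marked as test-created too.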

    Read the article

  • Database Table Schema and Aggregate Roots

    - by bretddog
    Hi, the application is single-user, 1-tier (1 PC); the database is SQL CE. The data-service layer will be (I think): a repository returning domain objects and querying the database with LINQ to SQL (dbml). There are obviously a lot more columns; this is a simplified view: http://img573.imageshack.us/img573/3612/ss20110115171817w.png

    This is my first attempt at creating a database of more than two tables. I think the table schema makes sense, but I need some reassurance or critique, because the table relations look quite scary, to be honest. I'm hoping you could look at the table schema and respond if there are clear signs of trouble or errors that you spot right away, and, if you have time, look at the program summary/questions and see if the table layout makes sense for those points. Please be brutal; I will try to defend :)

    Program summary:

    a) A set of categories, each having a set of strategies (1:m).
    b) Each day a number of items will be produced, and each strategy MAY reference them (so there can be 50 items, and a strategy may reference 23 of them).
    c) An item can be referenced by more than one strategy, so I think it's an m:m relation.
    d) Status values will be logged at fixed time-fractions through the day, for each Strategy, each StrategyItem, and each Item.
    e) An action on an item may be executed by a strategy that references it. This is logged as ItemAction (could have called it StrategyItemAction).

    User requests: b) - e) describe the main activity mode of the program: to work with only today's DayLog, for each category. The 2nd-priority activity is retrieval of history, which typically will be: from all categories, from day x to day y, get all StrategyDailyLog.

    Questions: First, does the overall layout look sound? I'm worried to see that there are so many relationships in all directions, connecting everything. Is this normal, or does it look like trouble? StrategyItem is made to represent an m:m relationship - is it correct as I noted 1:m / 1:1 (marked red)? StrategyItemTimeLog and ItemTimeLog log values that both need to be retrieved together when retrieving a StrategyItem. The reason I separated them is that the first one is strategy-specific, and several strategies can reference the same item, so I thought not to duplicate those values that depend not on the strategy but only on the item. Hence I also dragged out the LogTime, as it seems to be the only parameter that unites the logs. But this all looks quite disturbing with those 3 tables - does it make sense at all, or do you have a suggestion? The pink circles show my vague attempt at aggregate-root paths. I've been thinking in terms of "what entity is responsible for delete", though I'm unsure about the actual root; I think it's Category. Does it make sense in relation to the user requests described above?
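    As a reference point for the m:m part, a junction table between Strategy and Item typically looks like this (a sketch with assumed names, since the real schema is only visible in the linked image):

        CREATE TABLE StrategyItem (
            StrategyId int NOT NULL REFERENCES Strategy(Id),
            ItemId     int NOT NULL REFERENCES Item(Id),
            PRIMARY KEY (StrategyId, ItemId) -- one row per strategy/item pair
        );

    Each side of the pair is 1:m from its parent table into StrategyItem, which is why a diagram shows 1:m on both edges even though Strategy-to-Item is m:m overall.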

    Read the article

  • Keeping video viewing statistics breakdown by video time in a database

    - by Septagram
    I need to keep a number of statistics about the videos being watched, and one of them is which parts of the video are watched most. The design I came up with is to split the video into 256 intervals and keep a floating-point number of views for each of them. I receive the data as a number of intervals the user watched continuously. The problem is how to store them. There are two solutions I see.

    Row per video segment

    Let's have a database table like this:

        CREATE TABLE `video_heatmap` (
            `id` int(11) NOT NULL AUTO_INCREMENT,
            `video_id` int(11) NOT NULL,
            `position` tinyint(3) unsigned NOT NULL,
            `views` float NOT NULL,
            PRIMARY KEY (`id`),
            UNIQUE KEY `idx_lookup` (`video_id`,`position`)
        ) ENGINE=MyISAM

    Then, whenever we have to process a number of views, make sure the respective database rows exist and add the appropriate values to the views column. I found out it's a lot faster if the existence of the rows is taken care of first (SELECT COUNT(*) of rows for a given video, and INSERT IGNORE if they are lacking), and then a number of update queries is used like this:

        UPDATE video_heatmap
        SET views = views + ?
        WHERE video_id = ? AND position >= ? AND position < ?

    This seems, however, a little bloated.

    Row per video, update in transactions

    The other solution I came up with. A table will look (sort of) like this:

        CREATE TABLE video (
            id INT NOT NULL AUTO_INCREMENT,
            heatmap BINARY (4 * 256) NOT NULL,
            ...
        ) ENGINE=InnoDB

    Then, whenever a view needs to be stored, it will be done in a transaction with a consistent snapshot, in a sequence like this: if the video doesn't exist in the database, it is created; a row is retrieved, and heatmap, an array of floats stored in binary form, is converted into a form more friendly for processing (in PHP); values in the array are increased appropriately and the array is converted back; the row is changed via an UPDATE query.

    So far the advantages can be summed up like this.

    First approach: stores data as floats, not as some magical binary array. Doesn't require transaction support, so doesn't require InnoDB, and we're using MyISAM for everything at the moment, so there won't be any need to mix storage engines (only applies in my specific situation). Doesn't require a transaction WITH CONSISTENT SNAPSHOT; I don't know what the performance penalties of those are. I already implemented it and it works (only applies in my specific situation).

    Second approach: uses a lot less storage space (the first approach stores the video ID 256 times and stores the position for every segment of the video, not to mention the primary key). Should scale better because of InnoDB's per-row locking, as opposed to MyISAM's table locking. Might generally work faster because a lot fewer requests are being made. Easier to implement in code (although the other one is already implemented).

    So, what should I do? If it weren't for the rest of our system using MyISAM consistently, I'd go with the second approach, but currently I'm leaning to the first one. But maybe there are some reasons to favour one approach over the other?

    Read the article

  • What's the "best" database for embedded?

    - by mawg
    I'm an embedded guy, not a database guy. I've been asked to redesign an existing system which has bottlenecks in several places. The embedded device is based around an ARM9 processor running at 220 MHz. There should be a database of 50k entries (which may increase to 250k), each with 1k of data (max 8 fields). That's approximate - I can try to get more precise figures if necessary. They are currently using SQLite 2 and planning to move to SQLite 3. Without starting a flame war - I am a complete d/b newbie just seeking advice - is that the "best" decision? I realize that this might be a "how long is a piece of string?" question, but any pointers would be greatly welcomed. I don't mind doing a lot of reading & research, but I just hoped that you could get me off to a flying start. Thanks. p.s. Again, it's a total rewrite, and we might not even stick with embedded Linux but switch to eCos, so don't worry too much about one-time conversion between d/b formats. Oh, and accesses should be infrequent - at most one every few seconds. Edit: OK, it seems they have 30k entries (which may reach 100k or more) of only 5 or 6 fields each, but at least 3 of them can be a search key for a record. They are toying with "having no d/b at all, since the data are so simple", but it seems to me that with multiple keys we couldn't use fancy stuff like a quicksort()-type search (recursive, binary search). Any thoughts on "no d/b", just data structures? Btw, one key is 800k - not sure how well SQLite handles that (maybe with "no d/b" I have to hash that 800k to something smaller?)
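    For scale, the "no d/b at all" option would mean hand-rolling three sorted indexes and keeping them consistent; in SQLite the same thing is just secondary indexes (a sketch - the field names are invented, since the real schema wasn't given):

        CREATE TABLE entries (
            id      INTEGER PRIMARY KEY,
            key_a   TEXT,  -- the three searchable fields
            key_b   TEXT,
            key_c   TEXT,
            payload BLOB
        );

        -- One B-tree per search key; lookups are O(log n) with no custom code.
        CREATE INDEX idx_entries_a ON entries(key_a);
        CREATE INDEX idx_entries_b ON entries(key_b);
        CREATE INDEX idx_entries_c ON entries(key_c);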

    Read the article

  • Database Design Question regarding duplicate information

    - by galford13x
    I have a database that contains a history of product sales. For example, the following table:

        CREATE TABLE SalesHistoryTable (
            OrderID,   -- order number, unique across all orders
            ProductID, -- product ID; can be used as a key to look up product info in another table
            Price,     -- price of the product per unit at the time of the order
            Quantity,  -- quantity of the product for the order
            Total,     -- total cost of the order for the product (Price * Quantity)
            Date,      -- date of the order
            StoreID,   -- the store that created the order
            PRIMARY KEY (OrderID));

    The table will eventually have millions of transactions. From this, profiles can be created for products in different geographical regions (based on the StoreID). Creating these profiles can be very time-consuming as a database query. For example:

        SELECT ProductID, StoreID,
               SUM(Total) AS Total,
               SUM(Quantity) AS QTY,
               SUM(Total)/SUM(Quantity) AS AvgPrice
        FROM SalesHistoryTable
        GROUP BY ProductID, StoreID;

    The above query could be used to get the information based on products for any particular store. You could then determine which store has sold the most, which has made the most money, and which on average sells for the most/least. This would be very costly to use as a normal query run at any time. What are some design decisions that would allow these types of queries to run faster, assuming storage size isn't an issue? For example, I could create another table with duplicate information:

        StoreID (Key), ProductID, TotalCost, QTY, AvgPrice

    and provide a trigger so that when a new order is received, the entry for that store is updated in the new table. The cost of the update is almost nothing. What should be considered, given the above scenario?
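    The trigger being sketched might look like this in SQL Server syntax (a hypothetical sketch; StoreProductSummary is the duplicate table proposed above):

        CREATE TRIGGER trg_SalesHistory_Summary
        ON SalesHistoryTable
        AFTER INSERT
        AS
        BEGIN
            -- Fold each new order into the per-store, per-product summary row.
            MERGE StoreProductSummary AS s
            USING (SELECT StoreID, ProductID,
                          SUM(Total) AS Total, SUM(Quantity) AS Qty
                   FROM inserted
                   GROUP BY StoreID, ProductID) AS i
            ON s.StoreID = i.StoreID AND s.ProductID = i.ProductID
            WHEN MATCHED THEN
                UPDATE SET s.TotalCost = s.TotalCost + i.Total,
                           s.QTY      = s.QTY + i.Qty,
                           -- right-hand sides use pre-update values, so this stays consistent
                           s.AvgPrice = (s.TotalCost + i.Total) / (s.QTY + i.Qty)
            WHEN NOT MATCHED THEN
                INSERT (StoreID, ProductID, TotalCost, QTY, AvgPrice)
                VALUES (i.StoreID, i.ProductID, i.Total, i.Qty, i.Total / i.Qty);
        END;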

    Read the article
