Search Results

Search found 32538 results on 1302 pages for 'database restore'.

Page 26/1302 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • How to map to tables in database PHPMyAdmin

    - by thegrede
    I'm working on a project where users can save their own coupon codes on the website, and I want to know the best way to model that. Let's say I have a users table like this: userId | firstName | lastName | codeId, and a coupon codes table like this: codeId | codeNumber. I can connect codeId to userId, so when someone saves a coupon the codeId from the coupon table goes into the codeId column of the users table. But what do I do when a user has multiple coupons? How should they be connected to the user? I see two options. Option 1: save the codeIds from the coupons table into the codeId column of the users table as a comma-separated list like 1,2,3,4,5. Option 2: add a userId field to the coupons table and insert a new row for each coupon a user saves, storing that user's userId from the users table. Which of the two options is better? Thanks, you guys.
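
    A minimal sketch of option 2, assuming MySQL (phpMyAdmin suggests it); table and column names are illustrative, not the poster's actual schema:

        -- One row per saved coupon; the comma-separated list of option 1 is avoided.
        CREATE TABLE users (
            user_id    INT PRIMARY KEY AUTO_INCREMENT,
            first_name VARCHAR(50),
            last_name  VARCHAR(50)
        );

        CREATE TABLE coupons (
            code_id     INT PRIMARY KEY AUTO_INCREMENT,
            code_number VARCHAR(50) NOT NULL,
            user_id     INT NOT NULL,
            FOREIGN KEY (user_id) REFERENCES users(user_id)
        );

        -- All coupons saved by user 7:
        SELECT c.code_number FROM coupons c WHERE c.user_id = 7;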

    Read the article

  • friendship database schema

    - by Daniel Hertz
    I'm creating a db schema that involves users who can be friends with each other, and I was wondering what the best way is to model these friendships. Should it be its own table that simply has two columns, each representing a user? Thanks!
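
    A minimal sketch of that approach; the names, the store-each-pair-once rule, and an engine that enforces CHECK constraints are all assumptions:

        CREATE TABLE friendships (
            user_a INT NOT NULL,
            user_b INT NOT NULL,
            PRIMARY KEY (user_a, user_b),
            FOREIGN KEY (user_a) REFERENCES users(id),
            FOREIGN KEY (user_b) REFERENCES users(id),
            CHECK (user_a < user_b)   -- store each pair once, smaller id first
        );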

    Read the article

  • correct approach to store in database

    - by John
    I'm developing an online website (using Django and MySQL). I have a Tests table and a User table. There are 50 tests, and each user completes them at their own pace. How do I store the status of the tests in my DB? One idea that came to mind is to add an extra column to the User table containing the completed test IDs separated by commas (or any other delimiter):

        userid | username | testscompleted
        1      | john     | 1, 5, 34
        2      | tom      | 1, 10, 23, 25

    Another idea was to create a separate table to store userid and testid. I'd have only 2 columns, but thousands of rows (number of tests * number of users), and they would always continue to increase:

        userid | testid
        1      | 1
        1      | 5
        2      | 1
        1      | 34
        2      | 10
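
    The second idea is the conventional junction-table design; a sketch of how it might be used, with table and column names (e.g. user_tests) as assumptions:

        -- Tests completed by user 1:
        SELECT t.id, t.name
        FROM tests t
        JOIN user_tests ut ON ut.test_id = t.id
        WHERE ut.user_id = 1;

        -- Mark test 34 as completed for user 2:
        INSERT INTO user_tests (user_id, test_id) VALUES (2, 34);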

    Read the article

  • Database table relationships: Always also relate to specified value (Linq to SQL in .NET Framework)

    - by sinni800
    I really cannot describe my question better in the title - if anyone has suggestions, please tell! I use the LINQ to SQL framework in .NET. I ran into something that could be easily solved if the framework supported it; otherwise it will mean a lot of extra coding. I have an n-to-n relation with a helper table in between. Those tables are: Items, Places, and the connection table which relates items to places and the other way around. One item can be found in many places, and one place can have many items. Now of course there will be many items which should be in ALL places. And here is the problem: places can always be added, so I need a place ID which encompasses ALL places, always - maybe a place ID of "0". If the helper table has a row with a place ID of zero, the item should be visible in all places. In SQL this would be a simple "WHERE [...] OR place-id = 0", but how do I do this in LINQ relations? Also, for a little side question: how could I manage "all but this place" kinds of exclusions?

    Read the article

  • Database Modelling - Conceptually different entities with near identical fields

    - by Andrew Shepherd
    Suppose you have two sets of conceptual entities: MarketPriceDataSet, which has multiple ForwardPriceEntry rows, and PoolPriceForecastDataSet, which has multiple PoolPriceForecastEntry rows. The two child objects have near-identical fields.

    ForwardPriceEntry has:

        StartDate
        EndDate
        SimulationItemId
        ForwardPrice
        MarketPriceDataSetId (foreign key to parent table)

    PoolPriceForecastEntry has:

        StartDate
        EndDate
        SimulationItemId
        ForecastPoolPrice
        PoolPriceForecastDataSetId (foreign key to parent table)

    If I modelled them as separate tables, the only differences would be the foreign key and the name of the price field. There has been a debate as to whether the two near-identical tables should be merged into one. The options I've thought of are: (1) just keep them as two independent, separate tables; (2) put both sets in one table with an additional "type" field and a parent_id equalling a foreign key to either parent table, which sacrifices referential integrity checks; (3) put both sets in one table with an additional "type" field and create a complicated sequence of joining tables to maintain referential integrity. What do you think I should do, and why?
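
    One further variant worth weighing (a sketch with illustrative names, not from the question): a single entry table with two nullable foreign keys and a constraint that exactly one of them is set, which keeps referential integrity without the joining tables of option 3:

        CREATE TABLE price_entries (
            id                          INT PRIMARY KEY,
            start_date                  DATE NOT NULL,
            end_date                    DATE NOT NULL,
            simulation_item_id          INT NOT NULL,
            price                       DECIMAL(18,4) NOT NULL,
            market_price_data_set_id    INT NULL REFERENCES MarketPriceDataSet(id),
            pool_price_forecast_set_id  INT NULL REFERENCES PoolPriceForecastDataSet(id),
            -- exactly one parent must be set
            CHECK ( (market_price_data_set_id IS NULL     AND pool_price_forecast_set_id IS NOT NULL)
                 OR (market_price_data_set_id IS NOT NULL AND pool_price_forecast_set_id IS NULL) )
        );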

    Read the article

  • Database design - table relationship question

    - by iama
    I am designing the schema for a simple quiz application. It has 2 tables: "Question" and "Answer Choices". The Question table has 'question ID', 'question text' and 'answer ID' columns. The "Answer Choices" table has 'question ID', 'answer ID' and 'answer text' columns. With this simple schema it is obvious that a question can have multiple answer choices, hence the need for the answer choices table. However, a question can have only one correct answer, hence the need for the 'answer ID' column in the Question table. That 'answer ID' column, though, creates the illusion that there can be multiple questions for a single answer, which is not correct. The alternative that eliminates this illusion is to have another table just for the correct answer, with just 2 columns, the question ID and the answer ID, and a 1-1 relationship between the two tables; however, I think this is redundant. Any recommendation on how best to design this while enforcing the rule that a question can have multiple answer choices but only one correct answer? Many thanks.
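
    One commonly used way to enforce "only one correct answer, and it must be one of that question's own choices" without a third table is a composite foreign key from the question back into the choices table; a sketch with assumed names:

        CREATE TABLE answer_choices (
            question_id INT NOT NULL,
            answer_id   INT NOT NULL,
            answer_text VARCHAR(500) NOT NULL,
            PRIMARY KEY (question_id, answer_id)
        );

        CREATE TABLE questions (
            question_id       INT PRIMARY KEY,
            question_text     VARCHAR(500) NOT NULL,
            correct_answer_id INT NOT NULL,
            -- the correct answer must be a choice belonging to this question
            FOREIGN KEY (question_id, correct_answer_id)
                REFERENCES answer_choices (question_id, answer_id)
        );

    In practice answer_choices.question_id would also reference questions, which with a NOT NULL correct_answer_id creates a chicken-and-egg insertion order; making the column nullable or using deferrable constraints is the usual compromise.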

    Read the article

  • Is my TFS2010 backup/restore hosed?

    - by bwerks
    Hi all, I recently set up a sandbox TFS to test TFS-specific features without interfering with the production TFS. I was happy I did this sooner than I thought -- I hadn't been backing up the encryption key from SSRS, and upon restoring the reporting databases they remained inactive, requiring initialization that could only come from applying the encryption key. Said encryption key was lost when I nuked the partition after backing up the TFS databases. The only option I seem to have is to delete the encrypted data. I'm fine with this, since there wasn't much in there to begin with; however, once it's deleted I'm not quite sure how to configure TFS to recognize a new installation of these services while using the restored versions of everything else. Unfortunately, the TFS help file doesn't seem to account for this state. Is there a way to essentially rebuild the reporting and analysis databases? Or are they gone forever?

    Read the article

  • Generic version control strategy for select table data within a heavily normalized database

    - by leppie
    Hi. Sorry for the long-winded title, but the requirement/problem is rather specific. With reference to the following sample (but very simplified) structure, in pseudo SQL, I hope to explain it a bit better:

        TABLE StructureName {
            Id GUID PK,
            Name varchar(50) NOT NULL
        }

        TABLE Structure {
            Id GUID PK,
            ParentId GUID (FK to Structure),
            NameId GUID (FK to StructureName) NOT NULL
        }

        TABLE Something {
            Id GUID PK,
            RootStructureId GUID (FK to Structure) NOT NULL
        }

    As one can see, Structure is a simple tree structure (not worried about ordering of children for this problem). StructureName is a simplification of a translation system. Finally, Something is simply something referencing the tree's root structure. This is just one of many tables that need to be versioned, but it serves as a good example for most cases. There is a requirement to version any changes to the name and/or the tree layout of the Structure table, and previous versions should always be available. There seem to be a few possible ways to tackle this, like copying the entire structure, but most approaches cause one to lose referential integrity. For example, if one followed that approach, one would have to make a duplicate of the Something record, given that the root structure will be a new record and have a new ID. Other avenues are looking into how wikis handle this, or going a lot further and looking at how proper version control systems work. Currently, I feel a bit clueless about how to proceed on this in a generic way. Any ideas will be greatly appreciated. Thanks, leppie
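
    One possible direction, sketched in T-SQL-style DDL with assumed names (not leppie's actual schema): keep the stable identity in Structure and move the versioned name and tree layout into per-revision rows, so Something keeps pointing at the same root Id across versions:

        CREATE TABLE Revision (
            Id        UNIQUEIDENTIFIER PRIMARY KEY,
            CreatedAt DATETIME NOT NULL
        );

        CREATE TABLE StructureRevision (
            StructureId UNIQUEIDENTIFIER NOT NULL REFERENCES Structure(Id),
            RevisionId  UNIQUEIDENTIFIER NOT NULL REFERENCES Revision(Id),
            ParentId    UNIQUEIDENTIFIER NULL     REFERENCES Structure(Id),      -- tree layout as of this revision
            NameId      UNIQUEIDENTIFIER NOT NULL REFERENCES StructureName(Id),  -- name as of this revision
            PRIMARY KEY (StructureId, RevisionId)
        );

    Reading the tree "as of" a revision then means filtering StructureRevision by RevisionId, and Something's referential integrity is untouched because it still references the unversioned Structure row.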

    Read the article

  • choose append to existing backup instead of overwrite

    - by aron
    Hello, I have a database and I made its first backup 2 days ago. Then yesterday I spent an entire day adding new records. This morning I ran a backup, but I selected "append to existing backup set". I just ran a restore and found that it wiped out all my data from yesterday and restored the version from the backup of 2 days ago, not the version from this morning's backup. I zipped this backup file to be safe. I then changed some data in the DB and ran the backup again, but this time I selected "overwrite all existing backup sets". Now when I restore the db it seems to restore the data correctly. I think I learned a lesson here (correct me if I'm wrong). My questions are: did I lose an entire day of work? I still have this morning's backup .bak file safe in a zip; is there any way I can restore it with the right data?
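
    When a .bak file contains an appended backup set, a specific set can be chosen at restore time; a hedged T-SQL sketch, with the path, database name and file number as assumptions:

        -- List the backup sets contained in the file; note the Position column.
        RESTORE HEADERONLY FROM DISK = N'C:\backups\mydb.bak';

        -- Restore the second set in the file (e.g. this morning's appended backup).
        RESTORE DATABASE MyDb
        FROM DISK = N'C:\backups\mydb.bak'
        WITH FILE = 2, REPLACE;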

    Read the article

  • Naming of boolean column in database table

    - by Space Cracker
    I have a 'Service' table and need boolean columns for the following descriptions:

        Is user verification required for the service?
        Is the user's email activation required for the service?
        Is the user's mobile activation required for the service?

    I am hesitating between naming these columns

        IsVerificationRequired
        IsEmailActivationRequired
        IsMobileActivationRequired

    or

        RequireVerification
        RequireEmailActivation
        RequireMobileActivation

    I can't decide which way is best. Is one of the suggested sets of names better, or is there a better alternative?

    Read the article

  • Upgrading from SQL2000 database to SQL Express 2008 R2

    - by itwb
    Hi, we have a web application which uses an MSSQL 2000 backend database. We are currently paying a ridiculous amount for shared hosting, with the database alone costing us $150 per month (100 MB of extra MSSQL space is $40 per month). Our database size is 896.38 MB. I am looking at getting a Virtual Private Server and upgrading the database to an MSSQL 2008 Express database. I am aware that the Express version is limited to a 10 GB database (with R2) and is constrained to a single CPU. I have also been offered SQL Server 2008 Web Edition for $19 per month, but I cannot find many details on the difference between Express and Web. Any suggestions here? What I would also like to know is: if we upgrade to an MSSQL 2008 database, are there any issues with possible data transformations in the future? E.g., is it possible to download the database and mount it with SQL Server 2008 Standard Edition? I'm more concerned about how to get data in and out of the database through SQL management tools. Are there any other issues that I might face? Thanks, Mike
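
    For reference, a sketch of the usual backup/restore upgrade path (database name, logical file names and paths are assumptions); note that a database restored onto 2008 cannot later be restored or attached back to a 2000 instance:

        -- On the SQL Server 2000 instance:
        BACKUP DATABASE MyWebDb TO DISK = N'D:\backups\MyWebDb.bak';

        -- On the SQL Server 2008 (Express or Web) instance:
        RESTORE DATABASE MyWebDb
        FROM DISK = N'D:\backups\MyWebDb.bak'
        WITH MOVE 'MyWebDb_Data' TO N'C:\Data\MyWebDb.mdf',
             MOVE 'MyWebDb_Log'  TO N'C:\Data\MyWebDb_log.ldf';

        -- Optionally raise the compatibility level after restoring.
        ALTER DATABASE MyWebDb SET COMPATIBILITY_LEVEL = 100;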

    Read the article

  • MySQL root user can't access database

    - by Ed Schofield
    Hi all, we have a MySQL database ('myhours') on a production database server that is accessible to one user ('edsf') only, but not to the root user. The command SHOW DATABASES as the root user does not list the 'myhours' database. The same command as the 'edsf' user lists the database:

        mysql> SHOW DATABASES;
        +--------------------+
        | Database           |
        +--------------------+
        | information_schema |
        | myhours            |
        +--------------------+
        2 rows in set (0.01 sec)

    Only the 'edsf' user can access the 'myhours' database with USE myhours. Neither user seems to have permission to grant further permissions for this database. My questions are: Q1. How is it that the root user does not have permission to use the database? How could this have come about? The output of SHOW GRANTS FOR 'root'@'localhost'; looks fine to me:

        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY PASSWORD '*xxx' WITH GRANT OPTION

    Q2. How can I recover this situation to make this database visible to the MySQL root user and grant further permissions on it? Thanks in advance for any help! -- Ed
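
    A few hedged diagnostic queries, assuming access to the mysql system schema; a root session resolving to an unexpected account, or odd rows in the grant tables, are common explanations worth ruling out:

        -- Which account is the server actually treating this session as?
        SELECT CURRENT_USER();

        -- Database-level grants that mention the database:
        SELECT * FROM mysql.db WHERE Db = 'myhours';

        -- All accounts defined on the server:
        SELECT User, Host FROM mysql.user;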

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views) the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used on Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why it's faster to access the data in the new column format, for analytical queries, rather than the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.

    1. Access only the column data needed. The IM column store only has to scan two columns, lo_shipmode and lo_ordtotalprice, to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.

    2. Scan and filter data in its compressed format. When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.

    3. Prune out any unnecessary data within each column. The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine whether our specified column value 5 exists in any IMCU, by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.

    4. Use SIMD to apply filter predicates. For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core that can be achieved in the buffer cache.

    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session-level statistics. You can monitor the session-level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with IM. You can see the full list of these statistics by typing:

        column display_name format a30
        SELECT display_name
        FROM   v$statname
        WHERE  display_name LIKE 'IM%';

    If we check the session statistics after we execute our query, the results would be as follows:

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

        SELECT display_name
        FROM   v$statname
        WHERE  display_name IN ('IM scan CUs columns accessed',
                                'IM scan segments minmax eligible',
                                'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
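
    As a sketch of how the statistic values themselves can be read for the current session (joining v$mystat to v$statname on statistic#; SELECT privileges on the V$ views are assumed):

        SELECT n.display_name, s.value
        FROM   v$mystat s
               JOIN v$statname n ON n.statistic# = s.statistic#
        WHERE  n.display_name IN ('IM scan CUs columns accessed',
                                  'IM scan segments minmax eligible',
                                  'IM scan CUs pruned');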

    Read the article

  • Problems locating Redmine database

    - by zordor
    I have an active Redmine installation, but I cannot find the database it is running against right now. It should be on PostgreSQL, but the database where it should be running is empty. Does anybody have any idea how to check the current database used by Redmine? Please let me know if you need any extra information. Thank you. EDIT: OK, I now know which database it is using. In database.yml I have project_redmine, but it is actually using the database project, and I don't know why. That database is used by the developers for the actual project, so of course this is causing me problems. I am unable to run Redmine against the right DB (project_redmine). Any ideas? :S

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server all the websites connect to as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter. CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs multiple databases. This question has been answered numerous times. The question is regarding the pros and cons for a deployment like this having the ability to manage all the websites centrally (one server) vs trying to keep them all in sync if they each have their own db (multiple servers). REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at a central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?

    Read the article

  • Relational vs. Dimensional Databases, what's the difference?

    - by grautur
    I'm trying to learn about OLAP and data warehousing, and I'm confused about the difference between relational and dimensional modeling. Is dimensional modeling basically relational modeling, but allowing for redundant/un-normalized data? For example, let's say I have historical sales data on (product, city, # sales). I understand that the following would be a relational point-of-view:

        Product | City          | # Sales
        Apples  | San Francisco | 400
        Apples  | Boston        | 700
        Apples  | Seattle       | 600
        Oranges | San Francisco | 550
        Oranges | Boston        | 500
        Oranges | Seattle       | 600

    While the following is a more dimensional point-of-view:

        Product | San Francisco | Boston | Seattle
        Apples  | 400           | 700    | 600
        Oranges | 550           | 500    | 600

    But it seems like both points of view would nonetheless be implemented in an identical star schema:

        Fact table: Product ID, Region ID, # Sales
        Product dimension: Product ID, Product Name
        City dimension: City ID, City Name

    And it's not until you start adding some additional details to each dimension that the differences start popping up. For instance, if you wanted to track regions as well, a relational database would tend to have a separate region table, in order to keep everything normalized:

        City dimension: City ID, City Name, Region ID
        Region dimension: Region ID, Region Name, Region Manager, # Regional Stores

    While a dimensional database would allow for denormalization to keep the region data inside the city dimension, in order to make it easier to slice the data:

        City dimension: City ID, City Name, Region Name, Region Manager, # Regional Stores

    Is this correct?
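
    A sketch of that star schema in SQL, with the denormalized (dimensional-style) city dimension; names and types are illustrative assumptions:

        CREATE TABLE product_dim (
            product_id   INT PRIMARY KEY,
            product_name VARCHAR(100)
        );

        CREATE TABLE city_dim (
            city_id         INT PRIMARY KEY,
            city_name       VARCHAR(100),
            region_name     VARCHAR(100),   -- denormalized: repeated for every city in the region
            region_manager  VARCHAR(100),
            regional_stores INT
        );

        CREATE TABLE sales_fact (
            product_id INT REFERENCES product_dim(product_id),
            city_id    INT REFERENCES city_dim(city_id),
            num_sales  INT
        );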

    Read the article

  • Still Confused About Identifying vs. Non-Identifying Relationships

    - by Jason
    So, I've been reading up on identifying vs. non-identifying relationships in my database design, and a number of the answers on SO seem contradictory to me. Here are the two questions I am looking at: "What's the Difference Between Identifying and Non-Identifying Relationships" and "Trouble Deciding on Identifying or Non-Identifying Relationship". Looking at the top answers from each question, I appear to get two different ideas of what an identifying relationship is. The first question's response says that an identifying relationship "describes a situation in which the existence of a row in the child table depends on a row in the parent table." An example given is, "An author can write many books (1-to-n relationship), but a book cannot exist without an author." That makes sense to me. However, when I read the response to question two, I get confused, as it says, "if a child identifies its parent, it is an identifying relationship." The answer then goes on to give examples such as an SSN (which is identifying of a Person), whereas an address is not (because many people can live at one address). To me, this sounds more like the decision between a primary key and a non-primary key. My own gut feeling (and additional research on other sites) points to the first question's response being correct. However, I wanted to verify before I continued, as I don't want to learn something wrong while I am working to understand database design. Thanks in advance.
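
    A sketch of how the structural difference usually shows up in DDL (illustrative tables, not from the linked questions): in an identifying relationship the parent's key becomes part of the child's primary key, while in a non-identifying relationship it is just an ordinary foreign key column:

        -- Identifying: an order line cannot exist, or even be identified, without its order.
        CREATE TABLE order_lines (
            order_id INT NOT NULL REFERENCES orders(order_id),
            line_no  INT NOT NULL,
            product  VARCHAR(100),
            PRIMARY KEY (order_id, line_no)
        );

        -- Non-identifying: a person merely references an address; the person's identity
        -- does not depend on it.
        CREATE TABLE persons (
            person_id  INT PRIMARY KEY,
            name       VARCHAR(100),
            address_id INT NULL REFERENCES addresses(address_id)
        );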

    Read the article

  • SQL Server 2008 Restore from Backup fails with error 3241 'cannot process this media family'

    - by pearcewg
    I am attempting to back up a database from a SQL Server instance on one machine and restore it to another, and I am encountering the frequently discussed 'SQL Server cannot process this media family' error. Each of my instances is SQL Server 2008, but with different patch levels: Restore: 10.0.2531.0; Backup: 10.0.1600.22 ((SQL_PreRelease).080709-1414). The restore instance is Express. Not sure about the backup edition. The backup instance is on a virtual private server; the restore is on my development box. When I restore to a different database on the source (backup) server, it restores fine. There is lots of material on Google about this issue, and some on Stack Overflow, but nothing covering this exact situation. Any thoughts? It should be straightforward to back up and restore from one machine to another (having done this thousands of times with SQL 6.5, 7, 2000, and 2005). Any ideas how to restore a database in this situation, which gives this error when attempting to restore? PARTIAL RESOLUTION: When I restored to a different box, running SQL 2008 Express on Windows Server 2003, all worked well. It just wouldn't work on the Windows 7 box. Not sure why. If anyone else has a similar experience, please let me know (there are many similar issues in different forums out there).
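
    A couple of hedged diagnostic steps (the path is an assumption): error 3241 is often either a backup file damaged in transfer or a restore attempted onto an older engine build, and these commands help rule both out:

        -- Exact build and edition of the restoring instance (the target must be the same or newer).
        SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('Edition');

        -- Check that the media header and file contents are readable at all.
        RESTORE FILELISTONLY FROM DISK = N'C:\temp\mydb.bak';
        RESTORE VERIFYONLY   FROM DISK = N'C:\temp\mydb.bak';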

    Read the article

  • How do you decide what kind of database to use?

    - by Jason Baker
    I really dislike the name "NoSQL", because it isn't very descriptive. It tells me what these databases aren't, whereas I'm more interested in what they are. I think this category really encompasses several categories of database. I'm just trying to get a general idea of what job each particular kind of database is the best tool for. A few assumptions I'd like to make (and would ask you to make): assume that you can hire any number of brilliant engineers who are equally experienced with every database technology that has ever existed; assume you have the technical infrastructure to support any given database (including available servers and sysadmins who can support said database); assume that each database has the best support possible, for free; assume you have 100% buy-in from management; and assume you have an infinite amount of money to throw at the problem. Now, I realize that the above assumptions eliminate a lot of valid considerations involved in choosing a database, but my focus is on figuring out what database is best for the job on a purely technical level. So, given the above assumptions, the question is: what jobs is each database (including both SQL and NoSQL) the best tool for, and why?

    Read the article

  • Restore Oracle XE data from *.DBF

    - by asero
    Is it possible to restore an Oracle database from *.DBF files? If yes, then how? I really find it hard to deal with backup and restore in Oracle compared to SQL Server. I have a backup of the whole oraclexe folder, including these files.
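
    If the copy of the whole oraclexe folder was taken while the database was shut down (i.e. a consistent cold backup), one possible approach, sketched here with the details assumed, is to put the files back in place and start the instance:

        -- 1. Stop the Oracle XE service (or shut the instance down cleanly).
        -- 2. Copy the backed-up oraclexe folder (datafiles, control files, redo logs)
        --    back to its original location.
        -- 3. Start the service again, then from SQL*Plus:
        CONNECT / AS SYSDBA
        STARTUP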

    Read the article

  • any practices, samples for ERD?

    - by just_name
    Q: I'm looking for websites or books for practicing ERD design and normalization. I want lots of samples, exercises, and case studies with recommended answers, to strengthen my database design skills and avoid the poor database designs I've made in the past. Note: I don't need books that explain the concepts; what I need is practice exercises, examples, and case studies with recommended answers. Thanks in advance.

    Read the article

< Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >