Search Results

Search found 33029 results on 1322 pages for 'database queries'.


  • Items cannot be installed or removed

    - by Gyanendra Kumar Gyan
    installArchives() failed:
    (Reading database ... 136187 files and directories currently installed.)
    Removing pidgin-ppa ...
    gpg: key "67265EB522BDD6B1C69E66ED7FB8BEE0A1F196A8" not found: eof
    gpg: 67265EB522BDD6B1C69E66ED7FB8BEE0A1F196A8: delete key failed: eof
    dpkg: error processing pidgin-ppa (--remove):
     subprocess installed post-removal script returned error exit status 2
    No apport report written because MaxReports is reached already
    Processing triggers for ureadahead ...
    ureadahead will be reprofiled on next reboot
    Errors were encountered while processing:
     pidgin-ppa
    Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Build vs Buy Webcast: November 8, 2012

    - by TammyBednar
    Date: Thursday, November 8, 2012, 1:00 PM EST

    You have a choice. Do you build your own database platform or buy a pre-engineered database appliance? Building a high-availability database platform presents unique challenges. Combining servers, storage, networking, OS, firmware, and database is complicated and raises important concerns: Will coordination between multiple SMEs delay deployment? Will it be reliable? Will it scale? Will routine maintenance consume precious IT-staff time? Ultimately, will it work?

    Enter the Oracle Database Appliance, a complete package of software, server, storage, and networking that's engineered for simplicity. It saves time and money by simplifying deployment, maintenance, and support of database workloads. Plus, it's based on Intel Xeon processors to ensure a high level of performance and scalability.

    Attend this Webcast to hear customer stories and discover how the Oracle Database Appliance:
    - Increases ROI by reducing capital and operational expenses
    - Frees IT staff by reducing deployment and management time from weeks to hours
    - Takes the worry out of supporting mission-critical application workloads

    Register for this Webcast today!

    Read the article

  • Oracle DB 11gR2: SCAN, ACFS, and Resource Manager highlights [Japanese title garbled by encoding]

    - by Yuichi.Hayashi
    [The original Japanese text of this post was lost to character encoding; only the following points are recoverable.] The post introduces three Oracle Database 11g R2 features aimed at simplifying operations and reducing TCO on grid infrastructure (Oracle Real Application Clusters for servers, Oracle Automatic Storage Management for storage):
    - SCAN (Single Client Access Name), new in Oracle Real Application Clusters (RAC) 11g R2: a single cluster-wide access name for clients, replacing per-node VIP connection lists.
    - ACFS (ASM Cluster File System), new in Automatic Storage Management (ASM) 11g R2, building on ASM's S.A.M.E. (Stripe And Mirror Everything) design introduced in 10g.
    - Oracle Database Resource Manager 11g R2 instance caging: capping the CPU a database instance may use via the CPU_COUNT parameter on consolidated servers.
    References cited: the Oracle Database 11gR2 Real Application Clusters whitepaper, the Oracle Real Application Clusters 11g Release 2 SCAN paper, and the Oracle Database 11gR2 Automatic Storage Management whitepapers.

    Read the article

  • BizTalk host throttling – Singleton pattern and High database size

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/06/30/biztalk-host-throttling-ndash-singleton-pattern-and-high-database-size.aspx

    I have worked for some days around the singleton pattern (for those unfamiliar with it, read this post by Victor Fehlberg) and have come across a few very interesting posts, among which one dealt with performance issues (here, also by Victor Fehlberg). Simply put: if you have an orchestration which implements the singleton pattern, then performance will continuously decrease as the orchestration receives and consumes messages, and that behavior is more obvious when the orchestration never ends (i.e. it keeps looping and never terminates or completes). As I experienced the same kind of problem (actually I was alerted by SCOM, which told me that the host was being throttled because of High database size), I thought it would be a good idea to dig a little bit and see what happens deep inside BizTalk, and thus understand the reasons for this behavior.

    NOTE: in this article, I will focus on the High database size throttling condition. I will try and work on the other conditions in some not too distant future...

    Test conditions

    The singleton orchestration: for the purpose of this study, I have created a very basic implementation of a singleton that piles up incoming messages, then does something else when a certain timeout has been reached without receiving another message.

    Throttling settings: I have two distinct hosts:
    - one that hosts the receive port (basic FILE port): Ports_ReceiveHost
    - one that hosts the orchestration: ProcessingHost
    In order to emphasize the throttling mechanism, I have modified the throttling settings for each of these hosts as follows (all other parameters are set to the default value):
    [Throttling thresholds] Message count in database: 500 (default value: 50000)

    Evolution of performance counters when submitting messages

    Since we are investigating the High database size throttling condition, here are the performance counters that we should take a look at (all of them are in the BizTalk:Message Agent performance object):
    - Database size
    - High database size
    - Message delivery throttling state
    - Message publishing throttling state
    - Message delivery delay (ms)
    - Message publishing delay (ms)
    - Message delivery throttling state duration
    - Message publishing throttling state duration
    (If you are not used to Perfmon, I strongly recommend that you start using it right now: it is a wonderful tool that allows you to open the hood and see what is going on inside BizTalk, and other systems.)

    Database size

    It is quite obvious that we will start by watching the database size and high database size counters, just to see when the first reaches the configured threshold (500) and when the second rings the alarm. NOTE: during this test I submitted 600 messages, one message every 10ms, to see the evolution of the counters we have previously selected. Here is what happened:
    - From 15:46:50 to 15:47:50, the database size for the Ports_ReceiveHost host kept growing until it reached a maximum of 504.
    - At 15:47:50, the high database size alert fired.
    At first I was surprised by this result: why is it the database size of the receiving host that keeps growing, since it is the processing host that piles up messages? Actually, it makes total sense: this counter measures the size of the database queue that is being filled by the host, not consumed. Therefore, the high database size alert is raised on the host that fills the queue: Ports_ReceiveHost. More information is available on the Public MPWiki page.

    Now, looking at the Message publishing throttling state for the receiving host, we can see that a throttling condition was reached at 15:47:50, and that the Message publishing delay (ms) began growing slowly from that point. All of this explains why performance keeps decreasing when a singleton keeps processing new messages: the database size grows, and once it has exceeded the Message count in database threshold, the host is throttled and the publishing delay keeps increasing.

    Digging further

    So, what happens to the database queue then? Is it flushed some day, or does it keep growing indefinitely? The real question being: will the host be throttled forever because of this singleton? To answer this question, I set the Message count in database threshold to 20 (this value is very low in order not to wait for too long, otherwise I certainly would have fallen asleep in front of my screen) and I submitted 30 messages. The test was started at 18:26. At 18:56 (i.e. exactly 30 min later) the throttling was stopped and the database size was divided by 2. 30 min later again, the database size had dropped to almost zero. I guess I'll have to find some documentation and do some more testing before I sort this out! My guess is that some maintenance job is at work here, though I cannot tell which one.

    Digging even further

    If we take a look at the Message delivery throttling state counter for the processing host, we can see that this host was also throttled during the submission of the 600 documents. The value for the counter was 1, meaning that the Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent) value. We will see this another day... :)

    A last word

    Let's end this article with a warning: DO NOT CHANGE THE THROTTLING SETTINGS LIGHTLY! The temptation can be great to just bypass throttling by setting very high values for each parameter (or zero in some cases, which simply disables throttling). Nevertheless, always keep in mind that this mechanism is here for a very good reason: to prevent your BizTalk infrastructure from exploding!! So whatever you do with those settings, do a lot of testing and benchmarking!

    Read the article

  • How to do inclusive range queries when only half-open range is supported (ala SortedMap.subMap)

    - by polygenelubricants
    On SortedMap.subMap

    This is the API for SortedMap<K,V>.subMap:

    SortedMap<K,V> subMap(K fromKey, K toKey): Returns a view of the portion of this map whose keys range from fromKey, inclusive, to toKey, exclusive.

    This inclusive lower bound, exclusive upper bound combo ("half-open range") is something that is prevalent in Java, and while it does have its benefits, it also has its quirks, as we shall soon see. The following snippet illustrates a simple usage of subMap:

        static <K,V> SortedMap<K,V> someSortOfSortedMap() {
            return Collections.synchronizedSortedMap(new TreeMap<K,V>());
        }
        //...
        SortedMap<Integer,String> map = someSortOfSortedMap();
        map.put(1, "One");
        map.put(3, "Three");
        map.put(5, "Five");
        map.put(7, "Seven");
        map.put(9, "Nine");
        System.out.println(map.subMap(0, 4)); // prints "{1=One, 3=Three}"
        System.out.println(map.subMap(3, 7)); // prints "{3=Three, 5=Five}"

    The last line is important: 7=Seven is excluded, due to the exclusive upper bound nature of subMap. Now suppose that we actually need an inclusive upper bound; then we could try to write a utility method like this:

        static <V> SortedMap<Integer,V> subMapInclusive(SortedMap<Integer,V> map, int from, int to) {
            return (to == Integer.MAX_VALUE)
                ? map.tailMap(from)
                : map.subMap(from, to + 1);
        }

    Then, continuing on with the above snippet, we get:

        System.out.println(subMapInclusive(map, 3, 7)); // prints "{3=Three, 5=Five, 7=Seven}"
        map.put(Integer.MAX_VALUE, "Infinity");
        System.out.println(subMapInclusive(map, 5, Integer.MAX_VALUE));
        // {5=Five, 7=Seven, 9=Nine, 2147483647=Infinity}

    A couple of key observations need to be made:
    - The good news is that we don't care about the type of the values, but...
    - subMapInclusive assumes Integer keys for to + 1 to work. A generic version that also takes e.g. Long keys is not possible (see related questions). Not to mention that for Long, we need to compare against Long.MAX_VALUE instead.
    - Overloads for the numeric primitive boxed types Byte, Character, etc., as keys, must all be written individually.
    - A special check needs to be made for toInclusive == Integer.MAX_VALUE, because +1 would overflow, and subMap would throw IllegalArgumentException: fromKey > toKey.
    - This, generally speaking, is an overly ugly and overly specific solution. What about String keys? Or some unknown type that may not even be Comparable<?>?

    So the question is: is it possible to write a general subMapInclusive method that takes a SortedMap<K,V>, and K fromKey, K toKey, and performs an inclusive-range subMap query?

    Related questions
    - Are upper bounds of indexed ranges always assumed to be exclusive?
    - Is it possible to write a generic +1 method for numeric box types in Java?

    On NavigableMap

    It should be mentioned that there's a NavigableMap.subMap overload that takes two additional boolean variables to signify whether the bounds are inclusive or exclusive. Had this been made available in SortedMap, then none of the above would've even been asked. So working with a NavigableMap<K,V> for inclusive range queries would've been ideal, but while Collections provides utility methods for SortedMap (among other things), we aren't afforded the same luxury with NavigableMap.

    Related questions
    - Writing a synchronized thread-safety wrapper for NavigableMap

    On API providing only exclusive upper bound range queries
    - Does this highlight a problem with exclusive upper bound range queries?
    - How were inclusive range queries done in the past when exclusive upper bound is the only available functionality?

    Read the article

  • Wondering how Facebook does the "Mutual friends" feature

    - by Pierre
    Hello, I'm currently developing an application to allow students to manage their courses, and I don't really know how to design the database for a specific feature. The client wants that, much like Facebook's mutual friends, when a student displays the list of people currently in a specific course, the people with the most mutual courses are displayed first. As an additional feature, I would like to add a search feature to allow students to search for one another, displaying first in the search results the people with the most mutual courses. I currently use MySQL, I plan to use Cassandra for some other features, and I also use Memcached for result caching. Thanks.
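    A sketch of one way to do the ranking in MySQL, assuming a hypothetical enrollments(user_id, course_id) join table with a unique (user_id, course_id) pair; the table and column names are illustrative, not from the post:

        SET @viewer_id = 1;    -- the student viewing the list
        SET @course_id = 42;   -- the course whose member list is displayed

        -- List the other members of the course, ordered by how many
        -- courses each shares with the viewer (zero-mutual members included).
        SELECT m.user_id,
               COUNT(shared.course_id) AS mutual_courses
        FROM enrollments AS m                        -- members of the viewed course
        LEFT JOIN enrollments AS mine
               ON mine.user_id = @viewer_id          -- every course the viewer takes
        LEFT JOIN enrollments AS shared
               ON shared.user_id   = m.user_id
              AND shared.course_id = mine.course_id  -- courses both are enrolled in
        WHERE m.course_id = @course_id
          AND m.user_id  <> @viewer_id
        GROUP BY m.user_id
        ORDER BY mutual_courses DESC;

    For the search feature, the same COUNT/GROUP BY ordering can be applied to a name-filtered user list instead of a course member list, and the per-viewer results are natural candidates for the Memcached layer mentioned above.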

    Read the article

  • Backup & recovery of multiple MySQL databases (InnoDB & MyISAM)

    - by Cymon
    I am working on nightly and hourly backups of MySQL databases. There are multiple MySQL databases which are either InnoDB or MyISAM (note: each database is either InnoDB or MyISAM for a reason). With the two different types I want to make sure I am grabbing everything that is needed for backup and recovery. Here is my current plan:
    - Nightly: mysqldump of each DB, stored locally and remotely.
    - Hourly: flush binary logs and store them locally and remotely.
    - Weekly: expire binary logs older than a week.
    I feel like I am grabbing everything that is needed for the MyISAM databases, but I am concerned about the InnoDB databases and the log files (ib_logfile0, ib_logfile1, ibdata1) they create. Should I back up these files? Nightly? Hourly? Both? Do I really need them if I am already doing the above nightly and hourly backups?
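    For the hourly and weekly steps, MySQL exposes log rotation and expiry directly as SQL statements; a minimal sketch (the 7-day retention matches the plan above):

        FLUSH LOGS;   -- close the current binary log and open a new one (run hourly)

        -- Drop binary logs older than one week (run weekly):
        PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;

    For the nightly dumps, mysqldump's --single-transaction option gives a consistent snapshot of InnoDB tables without locking, while MyISAM tables need locking (e.g. --lock-tables) to be consistent.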

    Read the article

  • Is it a good idea to cache data from web services into a database?

    - by Thierry Lam
    Let's assume that Stackoverflow offers web services where you can retrieve all the questions asked by a specific user. A request to get all questions from user A can result in the following JSON output:

        [
          {
            "question": "What is rest?",
            "date_created": "20/02/2010",
            "votes": 1
          },
          {
            "question": "Which database to use for ...",
            "date_created": "20/07/2009",
            "votes": 5
          }
        ]

    If I want to manipulate and present the data in any way that I want, would it be wise to dump it in a local database? At some point, I will also want to retrieve all answers for each question and store them in a local database. The workflow that I'm thinking of is:
    1. User logs in.
    2. Web services retrieve all questions asked by the logged-in user and dump them in a local database.
    3. When the user wants all answers for a specific question, another web service does the retrieval and dumps them in a local database.
    4. After the user logs out, delete from the local database all questions and answers from that user.
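    A minimal sketch of what the local cache tables could look like in MySQL, with hypothetical names; the cascade makes step 4 a single DELETE:

        CREATE TABLE cached_question (
          id            INT UNSIGNED NOT NULL AUTO_INCREMENT,
          local_user_id INT UNSIGNED NOT NULL,   -- the logged-in user this cache belongs to
          question      TEXT NOT NULL,
          date_created  DATE NOT NULL,
          votes         INT  NOT NULL,
          PRIMARY KEY (id),
          KEY idx_user (local_user_id)
        ) ENGINE=InnoDB;

        CREATE TABLE cached_answer (
          id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
          question_id INT UNSIGNED NOT NULL,
          body        TEXT NOT NULL,
          PRIMARY KEY (id),
          FOREIGN KEY (question_id) REFERENCES cached_question (id) ON DELETE CASCADE
        ) ENGINE=InnoDB;

        -- Step 4: purge everything cached for a user at logout; answers cascade away.
        DELETE FROM cached_question WHERE local_user_id = @user_id;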

    Read the article

  • How do I correctly model data in SQL-based databases that have some columns in common, but also have

    - by Brandon Weiss
    For instance, let's say I have a User model. Users have things like logins, passwords, e-mail addresses, avatars, etc. But there are two types of Users that will be using this site, let's say Parents and Businesses. I need to store some different information for the Parents (e.g. children's names, domestic partner, salaries, etc.) than for the Businesses (e.g. industry, number of employees, etc.), but some of it is also the same, like logins and passwords. How do I correctly structure this in a SQL-based database? Thanks!
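    One common answer is class-table inheritance: the shared columns live in a users table, and each user type gets a child table keyed by the same id. A sketch with assumed names and types:

        CREATE TABLE users (
          id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
          login   VARCHAR(64)  NOT NULL UNIQUE,
          pw_hash CHAR(60)     NOT NULL,          -- store a hash, never the password
          email   VARCHAR(255) NOT NULL,
          avatar  VARCHAR(255),
          PRIMARY KEY (id)
        );

        CREATE TABLE parent_profiles (
          user_id          INT UNSIGNED NOT NULL,  -- same value as users.id
          domestic_partner VARCHAR(128),
          salary           DECIMAL(12,2),
          PRIMARY KEY (user_id),
          FOREIGN KEY (user_id) REFERENCES users (id)
        );

        CREATE TABLE business_profiles (
          user_id        INT UNSIGNED NOT NULL,
          industry       VARCHAR(128),
          employee_count INT UNSIGNED,
          PRIMARY KEY (user_id),
          FOREIGN KEY (user_id) REFERENCES users (id)
        );

    Repeating attributes such as children's names would go in their own one-to-many table (e.g. children(parent_user_id, name)) rather than in columns.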

    Read the article

  • ER Diagram flaws

    - by spacker_lechuck
    I have the following ER diagram for a bank database: customers may have several accounts, accounts may be held jointly by several customers, each customer is associated with an account set, and accounts are members of one or more account sets. What design rules are violated? What modifications should be made and why? So far, a few flaws I'm not sure about are: 1) the redundant owner-address attribute in the AcctSets entity, and 2) this ER does not accommodate accounts with multiple owners who have different addresses. My question is: how would I go about fixing these flaws and/or other flaws that I may be missing from my analysis? Thanks!
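    As a sketch of the usual fix, with assumed attribute names since the diagram itself is not shown: move the address onto the customer and model joint ownership with an associative table, so each joint owner keeps their own address:

        CREATE TABLE customers (
          id      INT NOT NULL PRIMARY KEY,
          name    VARCHAR(128) NOT NULL,
          address VARCHAR(255) NOT NULL   -- address belongs to the customer, not the account set
        );

        CREATE TABLE accounts (
          id      INT NOT NULL PRIMARY KEY,
          balance DECIMAL(12,2) NOT NULL
        );

        -- Joint ownership: many customers per account, many accounts per customer.
        CREATE TABLE account_holders (
          customer_id INT NOT NULL,
          account_id  INT NOT NULL,
          PRIMARY KEY (customer_id, account_id),
          FOREIGN KEY (customer_id) REFERENCES customers (id),
          FOREIGN KEY (account_id)  REFERENCES accounts (id)
        );

    This removes the redundant owner-address from AcctSets and allows joint owners to differ in address; account sets, if still needed, become their own entity keyed to the owning customer.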

    Read the article

  • Table with a lot of attributes

    - by Robert
    Hi, I'm planning to build a database project. One of the tables has a lot of attributes. My question is: what is better, to divide the class into two separate tables or to put all attributes into one table? Below is an example:

        create table User (
          id, name, surname, ...
          show_name, show_photos, ...
        )

    or

        create table User (
          id, name, surname, ...
        )
        create table UserPrivacy (
          usr_id, show_name, show_photos, ...
        )

    The performance, I suppose, is similar in both cases, since I can use an index.

    Read the article

  • How to organize asp.net mvc project (using entity framework) and a corresponding database project?

    - by Bernie
    I recently switched to vs2010 and am experimenting with asp.net MVC2. I am building a simple website and use the entity framework to design the data model. From the .edmx file, I generate the database tables. After a few iterations, I decided that it would be nice to have version control of the database schema as well, and therefore I added a database project into which I imported the script that is generated from the datamodel. As a result, I have to generate the sql every time that I change the model and redo the import. The database project automatically updates the database. Although the manual steps to generate the sql and the import are annoying, this works pretty well, until I wanted to add the standard tables for user accounts/authorization etc. I can use the framework tools to add the necessary tables, views etc. to the database, but as I do not want to have them in the .edmx model I end up with a third manual step. Is anybody facing similar issues?

    Read the article

  • Two Tables Serving as one Model in Rails

    - by matsko
    Is it possible in Rails to set up one model which is dependent on a join from two tables? This would mean that for the model record to be found/updated/destroyed, there would need to be records in both database tables, linked together in a join. The model would just be all the columns of both tables wrapped together, which may then be used for forms and so on. This way, when the model gets created/updated, it is just one form variable hash that gets applied to the model. Is this possible in Rails 2 or 3?

    Read the article

  • How do I save user specific data in an asp.net site?

    - by Greg McNulty
    I just set up user profiles using asp.net 3.5 with wvd. For each user I would like to store data that they will be updating every day. For example, every time they go for a run they will update time and distance. I intend to allow them to also look up their history of distance and time from any past date. My question is: what does the database schema usually look like for such a setup? Currently asp.net set up a DB for me when I made user profiles. Do I just add an extra table for every user? Should there be one big table with all users' data? How do I relate a user ID to their specific data? Etc. I have never done this before, so any ideas on how this is usually done would be very helpful. Thank you.
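    The usual shape is one shared table holding every user's entries, keyed by the profile's user ID, rather than a table per user. A hedged T-SQL sketch, assuming the standard ASP.NET membership tables (aspnet_Users.UserId is a uniqueidentifier) and hypothetical run columns:

        CREATE TABLE RunLog (
          RunId       INT IDENTITY(1,1) PRIMARY KEY,
          UserId      UNIQUEIDENTIFIER NOT NULL,   -- ties the row to aspnet_Users.UserId
          RunDate     DATETIME NOT NULL DEFAULT GETDATE(),
          DistanceKm  DECIMAL(6,2) NOT NULL,
          DurationSec INT NOT NULL
        );

        -- History lookup for one user over any past date range:
        SELECT RunDate, DistanceKm, DurationSec
        FROM RunLog
        WHERE UserId = @UserId
          AND RunDate BETWEEN @FromDate AND @ToDate
        ORDER BY RunDate;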

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted an online forum that shared a comprehensive path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions; below are highlights of the most informative:

    Q: DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out?
    A: You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    Q: How does this benefit companies having their own data center?
    A: This allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility and reduced risk. The benefits to the (internal) consumers of services are much faster provisioning and response to changes in business requirements.

    Q: From a deployment perspective, is DBaaS solely the DBA's job?
    A: The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    Q: The purpose of self service seems to be that the end user does not rely on the DBA. I just need to give him a template. He decides how much AMM he needs. Why shall I set it one by one? That doesn't seem to be the purpose of self service.
    A: Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

    Q: Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS?
    A: There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter, and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

    Q: How does Database as a Service impact or help with "Click to Compute" or deploying a database in cloud infrastructure?
    A: DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    Q: How exactly is DBaaS different from a traditional database, with storage, OS, and DB all together 'transparently' providing service to applications? Will there be across-database access by applications or users?
    A: Some key differences are: 1) the services run on a shared platform; 2) the services can be rapidly provisioned (under 15 minutes); 3) the services are dynamic and can be relocated, grown, or shrunk as needed to meet business needs, rapidly and without disruption; 4) the user is able to provision the services directly from a standardized service catalog.

    Q: With 24x7x365 databases it is difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How do pluggable databases handle this, and the differing needs and patching downtime of the applications the databases might be serving?
    A: You can gather stats in Oracle Multitenant the same way you did in regular databases. Regarding patching and upgrading, Oracle Multitenant makes both very efficient: you can pre-provision a new, patched version of the multitenant database in a different ORACLE_HOME, then unplug a PDB from its CDB and plug it into the newer, patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.
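    For reference, the unplug/plug step described in the last answer, written out as a hedged Oracle 12c SQL sketch (the PDB name and manifest path are hypothetical):

        -- On the old CDB: close the PDB and unplug it, writing its manifest.
        ALTER PLUGGABLE DATABASE sales CLOSE IMMEDIATE;
        ALTER PLUGGABLE DATABASE sales UNPLUG INTO '/tmp/sales_manifest.xml';

        -- On the new (patched) CDB: plug it in, reusing the data files in place.
        CREATE PLUGGABLE DATABASE sales USING '/tmp/sales_manifest.xml'
          NOCOPY TEMPFILE REUSE;
        ALTER PLUGGABLE DATABASE sales OPEN;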

    Read the article

  • Bulletproof way to DROP and CREATE a database under Continuous Integration.

    - by H. Abraham Chavez
    I am attempting to drop and recreate a database from my CI setup. But I'm finding it difficult to automate the dropping and creation of the database, which is to be expected given the complexities of the DB being in use. Sometimes the process hangs, errors out with "db is currently in use", or just takes too long. I don't care if the DB is in use; I want to kill it and create it again. Does someone have a straight-shot method to do this? Alternatively, does anyone have experience dropping all objects in the DB instead of dropping the DB itself?

        USE master
        -- Drop the database if it exists, kicking out any current users first.
        IF EXISTS (SELECT name FROM sys.databases WHERE name = 'mydb')
        BEGIN
            ALTER DATABASE mydb SET SINGLE_USER WITH ROLLBACK IMMEDIATE  -- or RESTRICTED_USER
            DROP DATABASE mydb
        END
        CREATE DATABASE mydb;

    Read the article

  • I've created a database table using Visual Studio for my C# program. Now what?

    - by Kevin
    Hi! I'm very new to C#, so please forgive me if I've overlooked something here. I've created a database using Visual Studio (Add New Item > Service-based Database) called LoadForecast.mdf. I then created a table called ForecastsDB and added some fields. My main question is this: I've created a console application with the intention of writing some data to the newly created database. I've added LoadForecast.mdf as a data source for my program, but is there anything else I should do? I saw an example where the next step was adding a data diagram, but this was for a visual application, not a console application. Do I still need to diagram the database for my console app? I just want to be able to write new records out to my database table and wasn't sure if there were any other things I needed to do for the VS environment to be "aware" of my database. Thanks for any advice!
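    Beyond adding the data source, the console app only needs a connection and ordinary T-SQL against the .mdf; a diagram is a designer convenience, not a requirement. A minimal sketch with hypothetical column names (substitute the fields actually defined in ForecastsDB):

        -- Hypothetical columns; adjust to the fields you created.
        INSERT INTO ForecastsDB (ForecastDate, Region, LoadMw)
        VALUES ('2010-06-01', 'North', 1234.5);

        SELECT ForecastDate, Region, LoadMw
        FROM ForecastsDB
        WHERE Region = 'North';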

    Read the article

  • How to deploy a Java Swing application with an embedded JavaDB database?

    - by Jonas
    I have implemented a Java Swing application that uses an embedded JavaDB database. The database needs to be stored somewhere, and the database tables need to be created on the first run. What is the preferred way to do these procedures? Should I always create the database in the local directory, and first check if the database file exists, and if it doesn't exist let the user create the tables (or at least show a message that the tables will be created)? Or should I let the user choose a path? But then I have to save the path somewhere. Should I save the path with Preferences.systemRoot(), and check if that variable is set on startup? If the user chooses a path and saves it in the Preferences, can I get problems with user permissions, or should it be safe wherever the user stores the database? Or how do I handle this? Any other suggestions for this procedure?
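    On the first-run check: with embedded JavaDB/Derby, the existence test can be done in SQL against the system catalog instead of checking for files on disk. A sketch, with an assumed table name:

        -- Derby stores unquoted identifiers in upper case.
        -- No row back means this is a first run and the DDL should be executed:
        SELECT 1 FROM SYS.SYSTABLES WHERE TABLENAME = 'APP_SETTINGS';

        -- First-run DDL:
        CREATE TABLE app_settings (
          k VARCHAR(64)  NOT NULL PRIMARY KEY,
          v VARCHAR(255)
        );

    (The JDBC URL jdbc:derby:<path>;create=true creates the database directory itself if it is missing, so only the tables need this check.)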

    Read the article

  • Working with foreign keys - cannot insert

    - by Industrial
    Hi everyone! I'm doing my first tryouts with foreign keys in a MySQL database and am trying to do an insert that fails for this reason: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails. Does this mean that foreign keys restrict INSERTs as well as DELETEs and/or UPDATEs on each table that is enforced with foreign key relations? Thanks!

    Updated description:

        Products
        ----------------------------
        id        | type
        ----------------------------
        0         | 0
        1         | 3

        ProductsToCategories
        ----------------------------
        productid | categoryid
        ----------------------------
        0         | 0
        1         | 1

    The products table has the following structure:

        CREATE TABLE IF NOT EXISTS `alpha`.`products` (
          `id` MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
          `type` TINYINT(2) UNSIGNED NOT NULL DEFAULT 0,
          PRIMARY KEY (`id`),
          CONSTRAINT `prodsku`
            FOREIGN KEY (`id`) REFERENCES `alpha`.`productsToSku` (`product`)
            ON DELETE CASCADE
            ON UPDATE CASCADE
        ) ENGINE = InnoDB;
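    To the first question: yes. A foreign key is checked on INSERT and UPDATE of the child table as well as on DELETE/UPDATE of the parent, so the parent row must exist before the child row. A small sketch, assuming ProductsToCategories carries a foreign key on productid referencing products.id:

        -- Fails with error 1452: parent row 7 does not exist yet.
        INSERT INTO ProductsToCategories (productid, categoryid) VALUES (7, 0);

        -- Works: insert the parent row first, then the child.
        INSERT INTO products (id, type) VALUES (7, 0);
        INSERT INTO ProductsToCategories (productid, categoryid) VALUES (7, 0);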

    Read the article

  • SQL Server many-to-many design recommendation

    - by Jean-Philippe Brabant
    I have a SQL Server database with two tables: Users and Achievements. My users can have multiple achievements, and each achievement can be unlocked by many users, so it is a many-to-many relationship. At school we learned to create an associative table for that sort of relationship, meaning a table with a UserID and an AchievementID. But if I have 500 users and 50 achievements, that could lead to 25,000 rows. As an alternative, I could add a binary field to my Users table. For example, if that field contained 10010, that would mean that this user unlocked the first and the fourth achievements. Is there another way? And which one should I use?
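    For scale: 25,000 junction rows is tiny for SQL Server, and the associative table keeps achievements indexable and queryable in a way a packed bit field cannot. A sketch, assuming Users(UserID) and Achievements(AchievementID) as the existing keys:

        CREATE TABLE UserAchievements (
          UserID        INT NOT NULL,
          AchievementID INT NOT NULL,
          UnlockedAt    DATETIME NOT NULL DEFAULT GETDATE(),
          PRIMARY KEY (UserID, AchievementID),   -- one row per unlock, no duplicates
          FOREIGN KEY (UserID)        REFERENCES Users (UserID),
          FOREIGN KEY (AchievementID) REFERENCES Achievements (AchievementID)
        );

    With the bit-field alternative, questions like "how many users unlocked achievement 4" stop being simple indexed queries.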

    Read the article

  • choose append to existing backup instead of overwrite

    - by aron
    Hello, I have a database and I made its first backup 2 days ago. Then yesterday I spent an entire day adding new records. This morning I ran a backup, but I selected "append to existing backup set". I just ran a restore and I found that it wiped out all my data from yesterday and restored the version from the backup of 2 days ago, not the version from this morning's backup. I zipped this backup file to be safe. I then changed some data in the DB and ran the backup again, but this time I selected "overwrite all existing backup sets". Now when I restore the DB, it seems to restore the data correctly. I think I learned a lesson here; correct me if I'm wrong. My question is: did I lose an entire day of work? I still have this morning's backup .bak file safe in a zip. Is there any way I can restore it with the right data?

    Read the article

  • How to represent a Many-To-Many relationship in XML or other simple file format?

    - by CSharperWithJava
    I have a list management application that stores its data in a many-to-many relationship database. I.e., a note can be in any number of lists, and a list can have any number of notes. I can also export this data to an XML file and import it in another instance of my app for sharing lists between users. However, this is based on a legacy system where the list-to-note relationship was one-to-many (ideal for XML). Now a note that is in multiple lists is essentially split into two identical rows in the DB, and all relation between them is lost. Question: how can I represent this many-to-many relationship in a simple, standard file format? (Preferably XML, to maintain backwards compatibility.)

    Read the article

  • Best data structure for this relationship...

    - by Travis
    I have a question about database 'style'. I need a method of storing user accounts. Some users "own" other user accounts (sub-accounts). However, not all user accounts are owned, just some. Is it best to represent this using a table structure like so...

        TABLE accounts (
          ID
          ownerID -> ID
          name
        )

    ...even though there will be some NULL values in the ownerID column for accounts that do not have an owner? Or would it be stylistically preferable to have two tables, like so:

        TABLE accounts (
          ID
          name
        )
        TABLE ownedAccounts (
          accountID -> accounts(ID)
          ownerID -> accounts(ID)
        )

    Thanks for the advice.
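    The single-table form is a standard self-referencing foreign key with a nullable ownerID; a sketch of how it could look as DDL:

        CREATE TABLE accounts (
          ID      INT NOT NULL PRIMARY KEY,
          ownerID INT NULL,                -- NULL means the account has no owner
          name    VARCHAR(128) NOT NULL,
          FOREIGN KEY (ownerID) REFERENCES accounts (ID)
        );

        -- All sub-accounts of account 1:
        SELECT ID, name FROM accounts WHERE ownerID = 1;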

    Read the article

  • Unique constraint on more than 10 columns

    - by tk
    I have a time-series simulation model which has more than 10 input variables. The number of distinct simulation instances would be more than 1 million, and each simulation instance generates a few output rows every day. To save the simulation results in a relational database, I designed tables like this:

        TABLE SimulationModel (
          simul_id : integer (primary key),
          input0 : string or numeric,
          input1 : string or numeric,
          ...
        )

        TABLE SimulationOutput (
          dt : DateTime (primary key),
          simul_id : integer (primary key),
          output0 : numeric,
          ...
        )

    My question is: is it fine to put a unique constraint on all of the input columns of the SimulationModel table? If it is not a good idea, then what other options do I have to make sure each model is unique?
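    If a composite unique key over all the inputs gets unwieldy (many engines also cap index key length well below 10+ wide columns), one common workaround is a hash column with a unique index over it; a MySQL-flavored sketch using the post's table name:

        ALTER TABLE SimulationModel
          ADD COLUMN input_hash CHAR(40) NOT NULL,
          ADD UNIQUE KEY uq_inputs (input_hash);

        -- Maintain the hash from a canonical concatenation of the inputs.
        -- COALESCE any nullable inputs first: CONCAT_WS silently skips NULLs.
        UPDATE SimulationModel
        SET input_hash = SHA1(CONCAT_WS('|',
              input0, input1 /* , ... the remaining inputs */));

    A plain UNIQUE(input0, input1, ...) over all columns is also legal if the total key size stays within the engine's limit.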

    Read the article

  • Should I Split Tables Relevant to X Module Into Different DB? Mysql

    - by Michael Robinson
    I've inherited a rather large and somewhat messy codebase, and have been tasked with making it faster, less noodly and generally better. Currently we use one big database to hold all data for all aspects of the site. As we need to plan for significant growth in the future, I'm considering splitting tables relevant to specific sections of the site into different databases, so if/when one gets too large for one server I can more easily migrate some user data to different mysql servers while retaining overall integrity. I would still need to use joins on some tables across the new databases. Is this a normal thing to do? Would I incur a performance hit because of this?
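    One point worth noting: as long as the databases stay on one MySQL server, they are just separate namespaces, and joins keep working with schema-qualified names; a sketch with hypothetical names:

        -- Same server, different databases: qualify each table with its schema.
        SELECT u.name, o.total
        FROM core_db.users  AS u
        JOIN shop_db.orders AS o ON o.user_id = u.id;

    These joins stop working once a database is migrated to a different physical server, which is where the real redesign cost of the split shows up.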

    Read the article
