Search Results

Search found 32391 results on 1296 pages for 'graph database'.


  • NFS (with Kerberos) mount failing due to "Server not found in Kerberos database" error

    - by Kendall Hopkins
    When running `sudo mount -t nfs4 -o sec=krb5 sol.domain.com:/ /mnt`, I get this error on the client:

        mount.nfs4: access denied by server while mounting sol.domain.com:/

    And the server syslog repeatedly logs entries of these two forms (each appears several times):

        UNKNOWN_SERVER: authtime 0, nfs/[email protected] for nfs/ip-#-#-#-#[email protected], Server not found in Kerberos database
        UNKNOWN_SERVER: authtime 0, nfs/[email protected] for krbtgt/[email protected], Server not found in Kerberos database

    Server keytab file:

        ubuntu@sol:~$ sudo klist -e -k /etc/krb5.keytab
        Keytab name: WRFILE:/etc/krb5.keytab
        KVNO Principal
        ---- --------------------------------------------------------------------------
           7 host/[email protected] (aes256-cts-hmac-sha1-96)
           7 host/[email protected] (arcfour-hmac)
           7 host/[email protected] (des3-cbc-sha1)
           7 host/[email protected] (des-cbc-crc)
           9 nfs/[email protected] (aes256-cts-hmac-sha1-96)
           9 nfs/[email protected] (arcfour-hmac)
           9 nfs/[email protected] (des3-cbc-sha1)
           9 nfs/[email protected] (des-cbc-crc)

    Client keytab file:

        ubuntu@mercury:~$ sudo klist -e -k /etc/krb5.keytab
        Keytab name: WRFILE:/etc/krb5.keytab
        KVNO Principal
        ---- --------------------------------------------------------------------------
           3 host/[email protected] (aes256-cts-hmac-sha1-96)
           3 host/[email protected] (arcfour-hmac)
           3 host/[email protected] (des3-cbc-sha1)
           3 host/[email protected] (des-cbc-crc)
           3 nfs/[email protected] (aes256-cts-hmac-sha1-96)
           3 nfs/[email protected] (arcfour-hmac)
           3 nfs/[email protected] (des3-cbc-sha1)
           3 nfs/[email protected] (des-cbc-crc)
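
    The repeated UNKNOWN_SERVER entries show the KDC being asked for a principal derived from reverse DNS (nfs/ip-#-#-#-#...) that does not exist in its database. A sketch of the usual fix, assuming MIT Kerberos and kadmin access (the hostname and realm below are placeholders, not values from the post): either repair reverse DNS so the server's IP resolves back to its real name, or register the principal the KDC is actually being asked for and add it to the server keytab.

        # Placeholders: substitute the FQDN that reverse DNS returns for the
        # server's IP, and your real realm and admin principal.
        sudo kadmin -p admin/admin -q "addprinc -randkey nfs/ip-10-0-0-1.example.internal@EXAMPLE.COM"
        sudo kadmin -p admin/admin -q "ktadd -k /etc/krb5.keytab nfs/ip-10-0-0-1.example.internal@EXAMPLE.COM"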

    Read the article

  • Import GraphViz graph to Microsoft Word 14

    - by rmetzger
    I have created a GraphViz dot file to visualize a data flow. I have to write documentation using Microsoft Word and I'd like to include the graph in the document. For some weird reason, MS Word is not able to import SVG files. I then generated a .eps file using `dot -Teps plan.dot -o plan.eps`, but once imported into Word, the picture looks horrible. I also tried to convert the SVG to WMF using Inkscape; it also looked horrible. Is there a clean way to generate a file using GraphViz that Word can read?
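
    One workaround, as a sketch (not verified against Word 14 specifically): render a high-resolution bitmap, which Word imports reliably; `dpi` is a standard GraphViz graph attribute. Alternatively, Inkscape can convert the SVG to EMF, Word's native vector format (the `--export-emf` flag is an assumption to check against your Inkscape version).

        # Render at 600 dpi so the PNG stays sharp when scaled inside Word
        dot -Tpng -Gdpi=600 plan.dot -o plan.png

        # Or convert the SVG to EMF, which Word handles as vector artwork
        dot -Tsvg plan.dot -o plan.svg
        inkscape plan.svg --export-emf=plan.emf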

    Read the article

  • Database which only holds indexes and last X records in memory?

    - by Xeoncross
    I'm looking for a data store that is very memory efficient while still allowing many object changes per second, disregarding ACID compliance for the last X records. I need this database for a server with little memory, and I can make a key-value store, document store, or SQL database work. The idea is that indexes/keys are the only thing I need in memory, and all the actual values/objects/rows can be kept on disk due to the low read rate (I just want index/key lookup to be fast). I also don't want records constantly being flushed to disk, so I would like the last X records to be held in memory so that 100 or so of them can be written at once. I don't care if I lose the last 10 seconds' worth of objects/values; I do care if the database as a whole is in danger of becoming corrupt. Is there a data store like this?
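
    As one concrete illustration of the "batch writes, tolerate losing the last few seconds" trade-off (not necessarily the right store for this workload), SQLite exposes exactly these knobs; a sketch:

        PRAGMA journal_mode = WAL;         -- append commits to a log instead of rewriting pages
        PRAGMA synchronous = NORMAL;       -- fsync only at checkpoints: a crash may lose recent
                                           -- commits but leaves the database file consistent
        PRAGMA wal_autocheckpoint = 1000;  -- merge the log back into the main file every ~1000 pages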

    Read the article

  • Migrating master-slave MySQL database servers to 2 new servers, any tips or suggestions?

    - by mmattax
    I'm setting up 2 new database servers that will be replacing a current master-slave setup. All boxes are running / will be running MySQL on RHEL. Our current naming conventions: db1 - master database db2 - slave (using MySQL replication) db01 - new master db02 - new slave We need to get db01 to be the new master with db02 as the new slave. What is the best way to migrate db1 and db2 to db01 and db02? db1 and db2 are running in a production setting and we need to minimize all downtime; db1 has roughly 30GB of data in the database. Any suggestions or tips on how to migrate to our new servers would be much appreciated.
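
    One common low-downtime path, sketched under the assumption that binary logging is already enabled on db1: seed db01 from a consistent dump, attach it as a second slave of db1, let it catch up, then cut over during a short write freeze.

        # On db1: a consistent dump (InnoDB) that records the binlog position
        mysqldump --all-databases --single-transaction --master-data=2 --routines > dump.sql

        # On db01: load the dump, then replicate from db1 (file/position come
        # from the CHANGE MASTER line embedded in dump.sql by --master-data)
        mysql < dump.sql
        mysql -e "CHANGE MASTER TO MASTER_HOST='db1', MASTER_USER='repl', \
                  MASTER_PASSWORD='<secret>', MASTER_LOG_FILE='<from dump>', \
                  MASTER_LOG_POS=<from dump>; START SLAVE;"

        # Once db01 has caught up: briefly stop writes on db1, repoint the
        # application at db01, and attach db02 as db01's slave the same way.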

    Read the article

  • Windows authenticated users have lost access to master (default) database

    - by Rob Nicholson
    Something very strange has occurred on our production SQL database. Users connecting via Windows authentication appear to have lost all access to the master database. By default, all logins have the default database set to master, so when they connect using SQL Server Management Studio, they get the error: "Cannot open user default database. Login failed. Error 4064". What's also worrying is that we have a group called "COMPANY - SQL Administrator" which has sysadmin rights, and users in this group also get the same error. Worse, they don't appear to be system administrators anymore... If they change their default database to something else, they can connect and then work on the databases; it's just the master database that is problematic. I'm not even sure by what mechanism Windows-authenticated users get access to the master database. Is it something hard-coded, or some property that's been changed? Any ideas? Cheers, Rob.
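
    A sketch of the usual recovery, run from a connection that names an explicit database (e.g. `sqlcmd -E -d tempdb`), since the default-database lookup is exactly what fails: repoint the default database, then check what actually changed for the group (login and group names below are placeholders).

        -- Let an affected login connect again
        ALTER LOGIN [COMPANY\SomeUser] WITH DEFAULT_DATABASE = tempdb;

        -- Is the admin group still sysadmin? (1 = yes, 0 = no, NULL = not found)
        SELECT IS_SRVROLEMEMBER('sysadmin', 'COMPANY\COMPANY - SQL Administrator');

        -- Was a Windows user/group mapping in master dropped or disabled?
        USE master;
        SELECT name, type_desc FROM sys.database_principals WHERE type IN ('U','G');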

    Read the article

  • In Process Explorer, is it possible to change the scaling of the activity graph to be able to analyse it further?

    - by therobyouknow
    Is there any way to zoom in on the Sysinternals Process Explorer graph? Background: I'm trying to work out why my PC freezes/locks up for about a second (the pointer does not move) every so often. This has only been happening for the last 2 days. There is a very narrow spike associated with the freeze, but it's hard to hover over it and analyse what is causing it. My PC spec: ThinkPad X201S 1440x900, i7 2.0GHz, 8GB RAM, 256GB Samsung 840 Pro SSD, Windows 8.1 Pro 64-bit, CalDigit USB 3.0 ExpressCard 34, Ultrabase X200 with DisplayPort to HDMI.

    Read the article

  • Can I convert my database/script to UTF-8?

    - by Mohannad Otaibi
    How can I convert a database to support UTF-8, and convert its old data from whatever encoding it is in to UTF-8? Extra info: I'm running a server which has many websites on it, and one of them runs WHMCS (a PHP script to manage hosting clients). WHMCS has an iPhone application through which I can browse it; the problem is that this application will only run if everything in my website is in UTF-8 encoding. I was using windows-1256 as the encoding in my script's settings, and at some point I changed that to UTF-8 for a while, then changed it back to windows-1256. So some of the data in the database was inserted using UTF-8 and most of it is windows-1256. If someone could clear the picture for me: do I need to convert every database on the server or just one DB? What should I change? If I have to do it manually, I'll do it, but I need some expert advice.
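
    Assuming MySQL (a guess from the PHP stack), the usual per-database pattern is sketched below; test it on a copy first, because rows that already contain UTF-8 bytes labeled as windows-1256 (cp1256 in MySQL) must be reinterpreted via BLOB rather than transcoded, or they will be double-converted. Database and column names are placeholders.

        -- New defaults for the database the application uses
        ALTER DATABASE whmcs CHARACTER SET utf8 COLLATE utf8_general_ci;

        -- Transcode a table whose contents really are cp1256
        ALTER TABLE clients CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

        -- Reinterpret a column whose bytes are already UTF-8 but declared cp1256
        ALTER TABLE clients MODIFY notes BLOB;
        ALTER TABLE clients MODIFY notes TEXT CHARACTER SET utf8;

    Only the databases the application reads over a UTF-8 connection need converting, not every database on the server.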

    Read the article

  • DBA Command line options

    - by Anthony Shorten
    There are a number of database utilities supplied with the installation of the Oracle Utilities Application Framework based products. These are typically run in interactive mode, where the utility prompts you for values and then executes the required functionality. Did you know that the utilities also have command line options that allow you to run them in silent mode as well? You can access the command line options by specifying the -h option on the command line. Here is an example of the oragensec command line options:

        oragensec -d <Owner,OwnerPswd,DbName> -u <Database Users> -r <ReadRole,UserRole> -l <logfile> -h

        where:
        -d <Owner,OwnerPswd,DbName>  Database connect information for the target database.
                                     e.g. spladm,spladm,DB200ODB.
        -u <Database Users>          A comma-separated list of database users where synonyms
                                     need to be created. e.g. spluser,splread
        -r <ReadRole,UserRole>       Optional. Names of database roles with read and read-write
                                     privileges. Default roles are SPL_READ, SPL_USER.
                                     e.g. spl_read,spl_user
        -l <logfile>                 Optional. Name of the log file.
        -h                           Help

    The command line options allow the DBA to automate the execution, either via a script or via a scheduling utility that can then run the utilities. This option applies to the majority of DBA utilities supplied with the product. Take a look at the others.
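
    Putting the documented flags together, a silent invocation built purely from the sample values in the help text above might look like this:

        oragensec -d spladm,spladm,DB200ODB -u spluser,splread -r spl_read,spl_user -l oragensec.log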

    Read the article

  • SQL SERVER – Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video

    - by pinaldave
    There are a few questions I often get asked, and it is interesting how often all of us need the same kind of information at the same time. Here are examples of those questions:

    - How many user-created tables are there in the database?
    - How many non-clustered indexes does each table in the database have?
    - Is a table a heap, or does it have a clustered index?
    - How many rows does each table in the database contain?

    I finally wrote down a very quick script (in less than sixty seconds when I originally wrote it) which can answer the above questions. I also created a very quick video to explain the results and how to execute the script. Here is the complete script which I have used in the SQL in Sixty Seconds video:

        SELECT [schema_name] = s.name,
               table_name = o.name,
               MAX(i1.type_desc) ClusteredIndexorHeap,
               COUNT(i.TYPE) NoOfNonClusteredIndex,
               p.rows
        FROM sys.indexes i
        INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
        INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
        LEFT JOIN sys.partitions p ON p.OBJECT_ID = o.OBJECT_ID AND p.index_id IN (0,1)
        LEFT JOIN sys.indexes i1 ON i.OBJECT_ID = i1.OBJECT_ID AND i1.TYPE IN (0,1)
        WHERE o.TYPE IN ('U') AND i.TYPE = 2
        GROUP BY s.name, o.name, p.rows
        ORDER BY schema_name, table_name

    Related tips in SQL in Sixty Seconds:

    - Find Row Count in Table – Find Largest Table in Database
    - Find Row Count in Table – Find Largest Table in Database – T-SQL
    - Identify Numbers of Non Clustered Index on Tables for Entire Database
    - Index Levels, Page Count, Record Count and DMV – sys.dm_db_index_physical_stats
    - Index Levels and Delete Operations – Page Level Observation

    What would you like to see in the next SQL in Sixty Seconds video?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video. Tagged: Excel

    Read the article

  • How to use Nginx to export the mongoDB connection?

    - by Totty
    I have two things on my server: a node.js server and a MongoDB database. The node.js server is reachable at myip/server, and now I would like to expose the MongoDB database at myip/database, for example. I point my MongoDB viewer (MongoVUE) at "http://myip/database:9000" (port 9000 is the one set in nginx, and it is also the port I start mongod on). If I go to "http://myip/database:9000" or "http://myip/database" in a browser, it shows: "You are trying to access MongoDB on the native driver port. For http diagnostic access, add 1000 to the port number". But in MongoVUE it says:

        Unable to connect to server 192.168.1.16/database:9000: No such host is known.
        Type: MongoDB.Driver.MongoConnectionException
        Stack:
           at MongoDB.Driver.Internal.DirectConnector.Connect(TimeSpan timeout)
           at MongoDB.Driver.MongoServer.Connect(TimeSpan timeout, ConnectWaitFor waitFor)
           at MongoDB.Driver.MongoServer.Connect(TimeSpan timeout)
           at MongoDB.Driver.MongoServer.Connect()
           at MangoUI.MMongo.FQlxNlJKqO74gYmXgZR4(Object )
           at MangoUI.MMongo.Open(Boolean useSamus)
           at MangoUI.MMongo.Open()
           at MangoUI.ComNavTree.wJQdUqApCpjoC39P59n(Object )
           at MangoUI.ComNavTree.ExpandMe(MTreeNode expand)
           at MangoUI.ComNavTree.tree_BeforeExpand(Object sender, TreeViewCancelEventArgs e)

        No such host is known
        Type: System.Net.Sockets.SocketException
        Stack:
           at System.Net.Dns.GetAddrInfo(String name)
           at System.Net.Dns.InternalGetHostByName(String hostName, Boolean includeIPv6)
           at System.Net.Dns.GetHostAddresses(String hostNameOrAddress)
           at MongoDB.Driver.MongoServerAddress.ToIPEndPoint(AddressFamily addressFamily)
           at MongoDB.Driver.MongoServerInstance.Connect(Boolean slaveOk)
           at MongoDB.Driver.Internal.DirectConnector.Connect(TimeSpan timeout)
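
    The browser message is the clue: MongoDB speaks its own binary wire protocol, not HTTP, so an nginx `http` location cannot front it, and clients cannot address it by URL path ("/database"). A raw TCP pass-through needs nginx's stream module instead; a sketch (assumption: `stream {}` requires nginx 1.9.0+ built with `--with-stream`, which postdates this question):

        # nginx.conf -- top level, beside the http {} block, not inside it
        stream {
            server {
                listen 9000;                  # port MongoVUE connects to: myip:9000
                proxy_pass 127.0.0.1:27017;   # mongod's default port on this host
            }
        }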

    Read the article

  • Items cannot be installed or removed

    - by Gyanendra Kumar Gyan
    installArchives() failed:

        (Reading database ... 5% ... 100%
        (Reading database ... 136187 files and directories currently installed.)
        Removing pidgin-ppa ...
        gpg: key "67265EB522BDD6B1C69E66ED7FB8BEE0A1F196A8" not found: eof
        gpg: 67265EB522BDD6B1C69E66ED7FB8BEE0A1F196A8: delete key failed: eof
        dpkg: error processing pidgin-ppa (--remove):
         subprocess installed post-removal script returned error exit status 2
        No apport report written because MaxReports is reached already
        Processing triggers for ureadahead ...
        ureadahead will be reprofiled on next reboot
        Errors were encountered while processing:
         pidgin-ppa
        Error in function:
        SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
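
    The removal fails because pidgin-ppa's post-removal script calls gpg to delete a key that is no longer there. A common escape hatch, sketched below (the path is dpkg's standard maintainer-script location): neutralize the failing script, then retry the removal.

        # Make the post-removal script a no-op so dpkg can finish
        sudo sh -c 'printf "#!/bin/sh\nexit 0\n" > /var/lib/dpkg/info/pidgin-ppa.postrm'
        sudo chmod +x /var/lib/dpkg/info/pidgin-ppa.postrm

        # Retry
        sudo dpkg --remove pidgin-ppa
        sudo apt-get -f install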

    Read the article

  • Build vs Buy Webcast: November 8, 2012

    - by TammyBednar
    Date: Thursday, November 8, 2012, 1:00 PM EST
    You have a choice. Do you build your own database platform or buy a pre-engineered database appliance? Building a high-availability database platform presents unique challenges. Combining servers, storage, networking, OS, firmware, and database is complicated and raises important concerns: Will coordination between multiple SMEs delay deployment? Will it be reliable? Will it scale? Will routine maintenance consume precious IT-staff time? Ultimately, will it work? Enter the Oracle Database Appliance, a complete package of software, server, storage, and networking that's engineered for simplicity. It saves time and money by simplifying deployment, maintenance, and support of database workloads. Plus, it's based on Intel Xeon processors to ensure a high level of performance and scalability. Attend this Webcast to hear customer stories and discover how the Oracle Database Appliance:

    - Increases ROI by reducing capital and operational expenses
    - Frees IT staff by reducing deployment and management time from weeks to hours
    - Takes the worry out of supporting mission-critical application workloads

    Register for this Webcast today!

    Read the article

  • Oracle DB 11gR2: features that make database consolidation easier to build and operate (SCAN / ACFS / Resource Manager)

    - by Yuichi.Hayashi
    It is often said that typical server utilization runs at only around 40-60%, and consolidating such underused systems is a proven way to reduce TCO (Total Cost of Ownership). Managing servers and storage one by one drives up operating costs; consolidating them onto a shared platform cuts the cost of hardware, operations, and maintenance. Oracle's approach to this is the Grid: the server infrastructure is pooled with Oracle Real Application Clusters (RAC) and the storage infrastructure with Oracle Automatic Storage Management (ASM). Oracle Database 11g R2 adds several features that make this consolidation platform easier to build and operate.

    SCAN. Single Client Access Name (SCAN) is a new feature of Oracle RAC 11g R2. It gives clients a single logical access name for the whole cluster instead of per-node VIPs, so client connection settings do not have to change as nodes are added or removed. This lowers the administration cost of a growing grid and helps reduce TCO.
    References: Oracle Database 11gR2 Real Application Clusters (manual), Oracle Real Application Clusters 11g Release 2 SCAN (white paper).

    ACFS. ASM Cluster File System (ACFS) is a new feature of ASM 11g R2. ASM implements the S.A.M.E. (Stripe And Mirror Everything) methodology; since 10g it has managed database files, simplifying storage administration. With 11g R2, ACFS extends ASM to general-purpose files such as binaries, logs, and application data, bringing the same striping, mirroring, and online management to a clustered file system.
    References: Oracle Database 11gR2 Automatic Storage Management (manual), Oracle Database 11g Release 2 Automatic Storage Management (white paper).

    Resource Manager. Oracle Database Resource Manager 11g R2 controls how consolidated databases share machine resources, so that one database cannot starve the others. In 11g R2, setting the CPU_COUNT initialization parameter per instance ("instance caging") limits the number of CPUs each instance may use, a simple way to guarantee CPU allocations when several databases share one server.
    Reference: Oracle Database 11gR2 Resource Manager documentation.

    Read the article

  • BizTalk host throttling – Singleton pattern and High database size

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/06/30/biztalk-host-throttling-ndash-singleton-pattern-and-high-database-size.aspx

    I have worked for some days around the singleton pattern (for those unfamiliar with it, read this post by Victor Fehlberg) and have come across a few very interesting posts, among which one dealt with performance issues (here, also by Victor Fehlberg). Simply put: if you have an orchestration which implements the singleton pattern, then performance will continuously decrease as the orchestration receives and consumes messages, and that behavior is more obvious when the orchestration never ends (i.e. it keeps looping and never terminates or completes).

    As I experienced the same kind of problem (actually I was alerted by SCOM, which told me that the host was being throttled because of High database size), I thought it would be a good idea to dig a little bit and see what happens deep inside BizTalk, and thus understand the reasons for this behavior.

    NOTE: in this article, I will focus on the High database size throttling condition. I will try and work on the other conditions in some not too distant future...

    Test conditions

    The singleton orchestration: for the purpose of this study, I created a very basic implementation of a singleton that piles up incoming messages, then does something else when a certain timeout has been reached without receiving another message.

    Throttling settings: I have two distinct hosts:

    - one that hosts the receive port (a basic FILE port): Ports_ReceiveHost
    - one that hosts the orchestration: ProcessingHost

    In order to emphasize the throttling mechanism, I have modified the throttling settings for each of these hosts as follows (all other parameters are set to the default value):

        [Throttling thresholds]
        Message count in database: 500 (default value: 50000)

    Evolution of performance counters when submitting messages

    Since we are investigating the High database size throttling condition, here are the performance counters that we should take a look at (all of them are in the BizTalk:Message Agent performance object):

    - Database size
    - High database size
    - Message delivery throttling state
    - Message publishing throttling state
    - Message delivery delay (ms)
    - Message publishing delay (ms)
    - Message delivery throttling state duration
    - Message publishing throttling state duration

    (If you are not used to Perfmon, I strongly recommend that you start using it right now: it is a wonderful tool that allows you to open the hood and see what is going on inside BizTalk – and other systems.)

    Database size

    It is quite obvious that we will start by watching the database size and high database size counters, just to see when the first reaches the configured threshold (500) and when the second rings the alarm. NOTE: during this test I submitted 600 messages, one message every 10 ms, to see the evolution of the counters we have previously selected.

    Here is what happened: from 15:46:50 to 15:47:50, the database size for the Ports_ReceiveHost host kept growing until it reached a maximum of 504. At 15:47:50, the high database size alert fired.

    At first I was surprised by this result: why is it the database size of the receiving host that keeps growing, since it is the processing host that piles up messages? Actually, it makes total sense. This counter measures the size of the database queue that is being filled by the host, not consumed. Therefore, the high database size alert is raised on the host that fills the queue: Ports_ReceiveHost. More information is available on the Public MPWiki page.

    Now, looking at the Message publishing throttling state for the receiving host, we can see that a throttling condition was reached at 15:47:50, and that the Message publishing delay (ms) began growing slowly from that point. All of this explains why performance keeps decreasing when a singleton keeps processing new messages: the database size grows, and once it has exceeded the Message count in database threshold, the host is throttled and the publishing delay keeps increasing.

    Digging further

    So, what happens to the database queue then? Is it flushed some day, or does it keep growing indefinitely? The real question being: will the host be throttled forever because of this singleton? To answer this question, I set the Message count in database threshold to 20 (this value is very low in order not to wait for too long, otherwise I certainly would have fallen asleep in front of my screen) and I submitted 30 messages. The test was started at 18:26. At 18:56 (i.e. exactly 30 min later) the throttling stopped and the database size was divided by 2. Another 30 min later, the database size had dropped to almost zero. I guess I'll have to find some documentation and do some more testing before I sort this out! My guess is that some maintenance job is at work here, though I cannot tell which one.

    Digging even further

    If we take a look at the Message delivery throttling state counter for the processing host, we can see that this host was also throttled during the submission of the 600 documents. The value for the counter was 1, meaning that the Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent) value. We will see this another day... :)

    A last word

    Let's end this article with a warning: DO NOT CHANGE THE THROTTLING SETTINGS LIGHTLY! The temptation can be great to just bypass throttling by setting very high values for each parameter (or zero in some cases, which simply disables throttling). Nevertheless, always keep in mind that this mechanism is here for a very good reason: to prevent your BizTalk infrastructure from exploding! So whatever you do with those settings, do a lot of testing and benchmarking!

    Read the article

  • Wondering how Facebook does the "Mutual friends" feature

    - by Pierre
    Hello, I'm currently developing an application to allow students to manage their courses, and I don't really know how to design the database for a specific feature. The client wants something much like Facebook: when a student displays the list of people currently in a specific course, the people with the most mutual courses should be displayed first. As an additional feature, I would like to add a search feature to allow students to search for one another, displaying first in the search results the people with the most mutual courses. I currently use MySQL, I plan to use Cassandra for some other features, and I also use Memcached for result caching. Thanks.
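
    In plain SQL, the mutual-course count is a self-join on the enrollment table; a sketch, assuming a schema like `enrollments(student_id, course_id)` (not given in the post):

        -- Classmates of student 42, those sharing the most courses first
        SELECT e2.student_id, COUNT(*) AS mutual_courses
        FROM enrollments e1
        JOIN enrollments e2
          ON e2.course_id  = e1.course_id
         AND e2.student_id <> e1.student_id
        WHERE e1.student_id = 42
        GROUP BY e2.student_id
        ORDER BY mutual_courses DESC;

    At Facebook's scale this kind of count is typically precomputed or cached (the post already mentions Memcached) rather than computed on every request.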

    Read the article

  • Backup & recovery of multiple MySQL databases (InnoDB & MyISAM)

    - by Cymon
    I am working on nightly and hourly backups of MySQL databases. There are multiple MySQL databases, each of which is either InnoDB or MyISAM (note: each database is all one engine for a reason). With the two different engine types, I want to make sure I am grabbing everything that is needed for backup and recovery. Here is my current plan:

    - Nightly: mysqldump of each DB, stored locally and remotely.
    - Hourly: flush binary logs and store them locally and remotely.
    - Weekly: expire binary logs older than a week.

    I feel like I am grabbing everything that is needed for the MyISAM databases, but I am concerned about the InnoDB databases and the log files they create (ib_logfile0, ib_logfile1, ibdata1). Should I back up these files? Nightly? Hourly? Both? Do I really need them if I am already doing the above nightly and hourly backups?
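
    For the InnoDB side, a logical dump taken with `--single-transaction` is already consistent without copying ib_logfile0/ib_logfile1/ibdata1; those files only matter for physical (raw datadir) backups. A sketch of a nightly dump covering both engines:

        # InnoDB: consistent snapshot without locking; the binlog position is
        # recorded in the dump for point-in-time recovery with the hourly binlogs
        mysqldump --all-databases --single-transaction --flush-logs \
                  --master-data=2 --routines > nightly.sql

        # Mostly-MyISAM databases need table locks instead for consistency
        mysqldump --lock-all-tables some_myisam_db > myisam_nightly.sql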

    Read the article

  • Is it a good idea to cache data from web services into a database?

    - by Thierry Lam
    Let's assume that Stackoverflow offers web services where you can retrieve all the questions asked by a specific user. A request to get all questions from user A could produce the following JSON output:

        [
          {
            "question": "What is rest?",
            "date_created": "20/02/2010",
            "votes": 1
          },
          {
            "question": "Which database to use for ...",
            "date_created": "20/07/2009",
            "votes": 5
          }
        ]

    If I want to manipulate and present the data in any way that I want, is it wise to dump it in a local database? At some point, I will also want to retrieve all answers for each question and store them in a local database. The workflow I'm thinking of is:

    1. User logs in.
    2. Web services retrieve all questions asked by the logged-in user and dump them in a local database.
    3. When the user wants all answers for a specific question, another web service does the retrieval and dumps them in the local database.
    4. After the user logs out, delete from the local database all questions and answers from that user.

    Read the article

  • How do I correctly model data in SQL-based databases that have some columns in common, but also have

    - by Brandon Weiss
    For instance, let's say I have a User model. Users have things like logins, passwords, e-mail addresses, avatars, etc. But there are two types of Users that will be using this site, let's say Parents and Businesses. I need to store some different information for the Parents (e.g. children's names, domestic partner, salaries, etc.) than for the Businesses (e.g. industry, number of employees, etc.), but also some of it is the same, like logins and passwords. How do I correctly structure this in a SQL-based database? Thanks!
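
    The standard shape for this is class-table inheritance: one table for the shared columns and one satellite table per user type, joined on the user's primary key. A sketch in generic SQL (all names invented for illustration; children get their own table because they are one-to-many):

        CREATE TABLE users (
            id            INTEGER PRIMARY KEY,
            login         VARCHAR(64)  NOT NULL UNIQUE,
            password_hash VARCHAR(128) NOT NULL,
            email         VARCHAR(128) NOT NULL
        );

        CREATE TABLE parent_profiles (
            user_id          INTEGER PRIMARY KEY REFERENCES users(id),
            domestic_partner VARCHAR(128),
            salary           DECIMAL(12,2)
        );

        CREATE TABLE parent_children (
            parent_user_id INTEGER NOT NULL REFERENCES parent_profiles(user_id),
            child_name     VARCHAR(128) NOT NULL
        );

        CREATE TABLE business_profiles (
            user_id       INTEGER PRIMARY KEY REFERENCES users(id),
            industry      VARCHAR(64),
            num_employees INTEGER
        );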

    Read the article

  • ER Diagram flaws

    - by spacker_lechuck
    I have the following ER Diagram for a bank database - customers may have several accounts, accounts may be held jointly by several customers, and each customer is associated with an account set and accounts are members of one or more account sets. What design rules are violated? What modifications should be made and why? So far, a few flaws I'm not sure about are: 1) Redundant owner-address attribute in AcctSets Entity. 2) This ER does not include accounts with multiple owners with different addresses. My Question is: How would I go about fixing these flaws and/or other flaws that I may be missing from my analysis? Thanks!

    Read the article

  • Table with a lot of attributes

    - by Robert
    Hi, I'm planning to build a database project. One of the tables has a lot of attributes. My question is: what is better, to divide the attributes into two separate tables or to put all of them into one table? Below is an example:

        CREATE TABLE user (id INT PRIMARY KEY, name VARCHAR(64), surname VARCHAR(64), /* ... */
                           show_name BOOLEAN, show_photos BOOLEAN /* ... */);

    or

        CREATE TABLE user (id INT PRIMARY KEY, name VARCHAR(64), surname VARCHAR(64) /* ... */);
        CREATE TABLE user_privacy (usr_id INT PRIMARY KEY REFERENCES user(id),
                                   show_name BOOLEAN, show_photos BOOLEAN /* ... */);

    The performance, I suppose, is similar in both cases, since I can use an index.

    Read the article

  • SQL Complicated Group / Join by Category

    - by Mike Silvis
    I currently have a database structure with two important tables:

    1) Food types (Fruit, Vegetables, Meat)
    2) Specific foods (Apple, Oranges, Carrots, Lettuce, Steak, Pork)

    I am trying to build a SQL statement such that I can have the following:

        Fruit < Apple, Orange
        Vegetables < Carrots, Lettuce
        Meat < Steak, Pork

    I have tried using a statement like the following:

        SELECT * FROM Food_Type
        JOIN (SELECT * FROM Foods) AS Foods ON Food_Type.Type_ID = Foods.Type_ID

    but this returns every specific food, while I only want the first 2 per category. So I basically need my subquery to have a LIMIT statement in it, so that it finds only the first 2 per category. However, if I simply do the following:

        SELECT * FROM Food_Type
        JOIN (SELECT * FROM Foods LIMIT 2) AS Foods ON Food_Type.Type_ID = Foods.Type_ID

    my statement only returns 2 results total.
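
    A top-N-per-group query can be written with a correlated subquery, which works on MySQL versions without window functions; a sketch, assuming Foods has a numeric primary key Food_ID that defines "first":

        SELECT ft.*, f.*
        FROM Food_Type ft
        JOIN Foods f ON f.Type_ID = ft.Type_ID
        WHERE (SELECT COUNT(*)
               FROM Foods f2
               WHERE f2.Type_ID = f.Type_ID
                 AND f2.Food_ID < f.Food_ID) < 2
        ORDER BY ft.Type_ID, f.Food_ID;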

    Read the article

  • How to organize asp.net mvc project (using entity framework) and a corresponding database project?

    - by Bernie
    I recently switched to VS2010 and am experimenting with ASP.NET MVC 2. I am building a simple website and use the Entity Framework to design the data model. From the .edmx file, I generate the database tables. After a few iterations, I decided that it would be nice to have version control of the database schema as well, and therefore I added a database project into which I imported the script that is generated from the data model. As a result, I have to generate the SQL and redo the import every time I change the model. The database project automatically updates the database. Although the manual steps to generate the SQL and do the import are annoying, this works pretty well, until I wanted to add the standard tables for user accounts/authorization etc. I can use the framework tools to add the necessary tables, views, etc. to the database, but as I do not want to have them in the .edmx model, I end up with a third manual step. Is anybody facing similar issues?

    Read the article

  • Two Tables Serving as one Model in Rails

    - by matsko
    Is it possible in Rails to set up one model which depends on a join from two tables? This would mean that for the model record to be found/updated/destroyed, there would need to be records in both database tables, linked together in a join. The model would just be all the columns of both tables wrapped together, which may then be used for forms and so on. This way, when the model gets created/updated, it is just one form-variable hash that gets applied to the model. Is this possible in Rails 2 or 3?

    Read the article

  • How do I save user specific data in an asp.net site?

    - by Greg McNulty
    I just set up user profiles using ASP.NET 3.5 using wvd. For each user, I would like to store data that they will be updating every day. For example, every time they go for a run they will record time and distance. I intend to allow them to also look up their history of distance and time from any past date. My question is: what does the database schema usually look like for such a setup? Currently, ASP.NET set up a DB for me when I made user profiles. Do I just add an extra table for every user? Should there be one big table with all users' data? How do I relate a user ID to their specific data? Etc. I have never done this before, so any ideas on how this is usually done would be very helpful. Thank you.
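
    The usual answer is one table holding everyone's entries, keyed by the membership user ID, rather than a table per user. A sketch in T-SQL, assuming the standard ASP.NET membership schema (where aspnet_Users.UserId is a uniqueidentifier):

        CREATE TABLE dbo.RunLog (
            RunId       INT IDENTITY(1,1) PRIMARY KEY,
            UserId      UNIQUEIDENTIFIER NOT NULL
                        REFERENCES dbo.aspnet_Users (UserId),
            RunDate     DATETIME     NOT NULL,
            DistanceKm  DECIMAL(6,2) NOT NULL,
            DurationSec INT          NOT NULL
        );

        -- One user's history for any past date range
        SELECT RunDate, DistanceKm, DurationSec
        FROM dbo.RunLog
        WHERE UserId = @UserId
          AND RunDate BETWEEN @From AND @To
        ORDER BY RunDate;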

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a comprehensive path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions; below are highlights of the most informative:

    Q: DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out?
    A: You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    Q: How does this benefit companies having their own data center?
    A: This allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility and reduced risk. The benefits to the (internal) consumers of services are much faster provisioning and faster response to changes in business requirements.

    Q: From a deployment perspective, is DBaaS solely the DBA's job?
    A: The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    Q: The purpose of self service seems to be that the end user does not rely on the DBA. I just need to give him a template. He decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self service.
    A: Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

    Q: Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS?
    A: There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

    Q: How does Database as a Service impact or help with "Click to Compute" or deploying a "database in cloud infrastructure"?
    A: DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    Q: How exactly is DBaaS different from a traditional database? Storage/OS/DB all together to 'transparently' provide service to applications? Will there be across-database access by applications/users?
    A: Some key differences are: 1) the services run on a shared platform; 2) the services can be rapidly provisioned (< 15 minutes); 3) the services are dynamic and can be relocated, grown, or shrunk as needed to meet business needs without disruption, and rapidly; 4) the user is able to provision the services directly from a standardized service catalog.

    Q: With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How do pluggable databases handle this, and the different needs and patching downtime of the applications those databases serve?
    A: You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patches and upgrades very efficient in that you can pre-provision a new version/patched multitenant DB in a different ORACLE_HOME and then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.

    Read the article
