Search Results

Search found 33029 results on 1322 pages for 'database queries'.

  • Visual Studio 2010 database project, is there a visual way?

    - by b0x0rz
    I started a Visual Studio 2010 database project, but I am only able to write SQL in text mode; there is no visual table designer like the one you get when you add a new database to the App_Data folder and work on it there. Is text mode the only way, or am I missing some obvious way of designing tables visually in a Visual Studio 2010 database project? Thank you. Also, if there is a tutorial anywhere (video, maybe?), please link it; I have only found a video on importing a database from an existing script using a wizard, and I would like to build a new database from scratch, without the wizard.

  • Structured vs. unstructured data in the DB

    - by Igor
    This is a design question. I'm gathering a big chunk of performance data with lots of key-value pairs -- pretty much everything in /proc/cpuinfo, /proc/meminfo, /proc/loadavg, plus a bunch of other stuff -- from several hundred hosts. Right now I just need to display the latest chunk of data in my UI. I will probably end up doing some analysis of the gathered data to track down performance problems, but since this is a new application I'm not sure exactly what I'm looking for performance-wise just yet. I could structure the data in the db: one column for each key I'm gathering. The table would end up on the order of 100 columns wide, it would be a pain to load into the db, and I would have to add a new column whenever I start gathering a new stat; but it would be easy to sort and analyze the data with plain SQL. Or I could just dump my unstructured data blob into the table: maybe three columns -- host id, timestamp, and a serialized version of my array, probably as JSON in a TEXT field. Which should I do? Am I going to be sorry if I go with the unstructured approach? When doing analysis, should I just convert the fields I'm interested in and create a new, more structured table? What are the trade-offs I'm missing here?
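
    For illustration, a minimal sketch of the two candidate schemas in SQL; every column name and type here is an assumption, not taken from the question:

      -- Option 1: structured -- one column per stat; wide and rigid
      CREATE TABLE host_stats (
          host_id      INTEGER   NOT NULL,
          captured_at  TIMESTAMP NOT NULL,
          loadavg_1m   REAL,               -- from /proc/loadavg
          mem_total_kb BIGINT,             -- from /proc/meminfo
          -- ...on the order of 100 more columns, one per key gathered
          PRIMARY KEY (host_id, captured_at)
      );

      -- Option 2: unstructured -- three columns; stats serialized as JSON
      CREATE TABLE host_stats_blob (
          host_id      INTEGER   NOT NULL,
          captured_at  TIMESTAMP NOT NULL,
          payload      TEXT      NOT NULL,  -- e.g. '{"loadavg_1m": 0.42, ...}'
          PRIMARY KEY (host_id, captured_at)
      );

    Displaying the latest chunk works equally well with either; only option 1 lets plain SQL sort and aggregate on individual stats.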

  • What database is easy to maintain and manage in a cluster?

    - by Sanoj
    I'm looking for a database (DBMS) that is easy to scale out. I would like high availability, so I need a multi-master cluster where the data is replicated to two or more physical computers. I would also like to be able to start with one node (no replication) and then scale out to more nodes as needed, without reinstallation or downtime. The DBMS should be easy to maintain and manage: it should be easy to add nodes, remove nodes, take live backups and monitor resource use. It doesn't have to be a relational database system, so NoSQL is okay. And I would like a free version, so I can test it at small scale and compare it with alternatives. What alternatives do I have?

  • Synonym for "Many-to-Many" relationship (relational databases)

    - by Byron
    What's a synonym for a "many-to-many" relationship? I've finished writing an object-relational mapper, but I'm still stumped as to what to name the function that adds that relation. addParent() and addChild() seemed quite logical for many-to-one/one-to-many, and addSuperclass() for one-to-one inheritance, but addManyToMany() would sound quite unintuitive to an object-oriented programmer. addSibling() or addCousin() doesn't really make sense either. Any suggestions? And before you dismiss this as a non-programming question, please remember that consistent naming schemes and encapsulation are pretty integral to programming :)

  • Should this even be a has_many :through association?

    - by GoodGets
    A Post belongs_to a User, and a User has_many Posts. A Post also belongs_to a Topic, and a Topic has_many Posts.

      class User < ActiveRecord::Base
        has_many :posts
      end

      class Topic < ActiveRecord::Base
        has_many :posts
      end

      class Post < ActiveRecord::Base
        belongs_to :user
        belongs_to :topic
      end

    Well, that's pretty simple and very easy to set up, but when I display a Topic, I want not only all of the Posts for that Topic but also the user_name and user_photo of the User that made each Post. However, those attributes are stored in the User model and not tied to the Topic, so how would I go about setting that up? Maybe it can already be done, since the Post model has two foreign keys, one for the User and one for the Topic? Or maybe this is some sort of "one-way" has_many :through association: Post would be the join model, and a Topic would has_many :users, :through => :posts. But the reverse is not true -- a User does NOT has_many :topics -- so would this even need to be a has_many :through association? I guess I'm just a little confused about what the controller would look like to fetch both the Posts and the User of each Post for a given Topic. Edit: Seriously, thank you to all who weighed in. I chose tal's answer because I used his code for my controller; however, I could have just as easily chosen either j.'s or tim's instead. Thank you both as well. This was so simple to implement, and I think today marks the day that I'm beginning to fall in love with Rails.
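
    Whatever the association ends up being called, the display query itself boils down to a single join. A sketch in SQL, assuming conventional Rails column names (id, user_id, topic_id) plus the user_name and user_photo columns mentioned above:

      SELECT posts.*, users.user_name, users.user_photo
      FROM posts
      INNER JOIN users ON users.id = posts.user_id
      WHERE posts.topic_id = 42;  -- the topic being displayed

    In ActiveRecord terms that is just the topic's posts with the :user association eager-loaded; no has_many :through is strictly required merely to display posts with their authors.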

  • Undo table updates in SQL Server 2008

    - by sikas
    I updated a table in my MS SQL Server 2008 database by accident: I was filling in one table from another by copying cell by cell, and I have overwritten the original table's contents. Is there a way to restore my table's contents as they were?
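
    There is no built-in undo for a committed UPDATE, so the usual route is to restore a backup of the database to a point in time just before the mistake and copy the rows back. A sketch in T-SQL, assuming the database uses the FULL recovery model and that a full backup plus transaction log backups exist; every database, file and table name below is a placeholder:

      -- restore a copy of the database alongside the damaged one
      RESTORE DATABASE MyDb_Recover
          FROM DISK = 'C:\Backups\MyDb_full.bak'
          WITH MOVE 'MyDb'     TO 'C:\Data\MyDb_Recover.mdf',
               MOVE 'MyDb_log' TO 'C:\Data\MyDb_Recover.ldf',
               NORECOVERY;

      -- roll the log forward to just before the accidental update
      RESTORE LOG MyDb_Recover
          FROM DISK = 'C:\Backups\MyDb_log.trn'
          WITH STOPAT = '2010-05-01T09:55:00', RECOVERY;

      -- copy the original rows back (or UPDATE the damaged rows from the copy)
      INSERT INTO MyDb.dbo.MyTable
      SELECT * FROM MyDb_Recover.dbo.MyTable;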

  • GORM ID generation and belongsTo association?

    - by fabien-barbier
    I have two domain classes:

      class CodeSetDetail {
          String id
          String codeSummaryId
          static hasMany = [codes: CodeSummary]
          static constraints = {
              id(unique: true, blank: false)
          }
          static mapping = {
              version false
              id column: 'code_set_detail_id', generator: 'assigned'
          }
      }

    and:

      class CodeSummary {
          String id
          String codeClass
          String name
          String accession
          static belongsTo = [codeSetDetail: CodeSetDetail]
          static constraints = {
              id(unique: true, blank: false)
          }
          static mapping = {
              version false
              id column: 'code_summary_id', generator: 'assigned'
          }
      }

    I get two tables with these columns:

      code_set_detail: code_set_detail_id, code_summary_id

      code_summary: code_summary_id, code_set_detail_id (should not exist), code_class, name, accession

    I would like to link the code_set_detail and code_summary tables by code_summary_id (and not by code_set_detail_id). Note: code_summary_id is defined as a column in the code_set_detail table and as the primary key of the code_summary table. To sum up, I would like to define code_summary_id as the primary key in the code_summary table and map code_summary_id in the code_set_detail table as a reference to it. How do I define a primary key in one table and also map that key from another table?
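
    For reference, a sketch of the target schema described above, expressed directly in SQL; the column types are assumptions (GORM's defaults may differ), and how to make GORM generate exactly this is the open question:

      CREATE TABLE code_summary (
          code_summary_id VARCHAR(36) NOT NULL PRIMARY KEY,
          code_class      VARCHAR(255),
          name            VARCHAR(255),
          accession       VARCHAR(255)
      );

      CREATE TABLE code_set_detail (
          code_set_detail_id VARCHAR(36) NOT NULL PRIMARY KEY,
          code_summary_id    VARCHAR(36) NOT NULL
              REFERENCES code_summary (code_summary_id)  -- the desired link
      );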

  • Program to find canonical cover or minimum number of functional dependencies

    - by Sev
    I would like to know if there is a program or algorithm to find the canonical cover (minimal set of functional dependencies)? For example, if you have R = (A, B, C), where A, B, C are the attributes of the relation, and the dependencies:

      A → BC
      B → C
      A → B
      AB → C

    the canonical cover (minimal set of dependencies) is:

      A → B
      B → C

    Is there a program that can accomplish this? If not, any code/pseudocode to help me write one would be appreciated, preferably in Python or Java.

  • Best way to fetch data from a single database table with multiple threads?

    - by Ravi Bhatt
    Hi, we have a system where we collect data every second on user activity across multiple web sites. We dump that data into a database X (say, MS SQL Server). We now need to fetch data from this single table in database X and insert it into database Y (say, MySQL). We want to fetch time-based data from database X through multiple threads so that we fetch as fast as we can. Once the data is fetched and stored in database Y, we will delete it from database X. Are there any best practices for this sort of design? Anything specific to take care of in table design, such as sharding? Are there any other things we need to take care of to make sure we fetch the data as fast as possible from threads running on multiple machines? Thanks in advance! Ravi
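
    On the SQL Server side, one common pattern is to let each worker claim and delete a batch in a single atomic statement, so threads on different machines never pick up the same rows. A sketch in T-SQL; the table and column names are invented for illustration:

      -- each worker runs this in a loop; READPAST skips rows already
      -- locked by another worker, so batches never overlap
      DECLARE @cutoff DATETIME = DATEADD(SECOND, -5, GETUTCDATE());

      DELETE TOP (1000)
      FROM dbo.user_activity WITH (ROWLOCK, READPAST)
      OUTPUT deleted.*   -- the claimed rows, ready to ship to database Y
      WHERE collected_at < @cutoff;

    The OUTPUT clause hands back exactly the rows that were removed, which collapses the fetch-then-delete pair into one statement.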

  • Building a many-to-many db schema using only an unpredictable number of foreign keys

    - by user1449855
    Good afternoon (at least around here). I have a many-to-many relationship schema that I'm having trouble building. The main problem is that I'm only working with primary and foreign keys (no varchars or enums, to simplify things), and the number of many-to-many relationships is not predictable and can increase at any time. I looked around at various questions and couldn't find something that directly addressed this issue. I split the problem in half, so I now have two one-to-many schemas; one is solved, but the other is giving me fits. Let's assume table FOO is a standard, boring table with a simple primary key; it's the "one" side of the one-to-many relationship. Table BAR can relate to multiple keys of FOO, and the number of related keys is not known beforehand. An example: a query on FOO returns ids 3, 4, 5, and BAR needs a unique key that relates to 3, 4, 5 (though there could be any number of ids returned). The usual join table does not seem to work:

      Table FOO_BAR
      primary_key | foo_id | bar_id

    since FOO returns 3 unique keys, and here bar_id would have a one-to-one relationship with foo_id. Having two join tables does not seem to work either, as it still can't map foo_ids 3, 4, 5 to a single bar_id:

      Table FOO_TO_BAR
      primary_key | foo_id | bar_to_foo_id

      Table BAR_TO_FOO
      primary_key | foo_to_bar_id | bar_id

    What am I doing wrong? Am I making things more complicated than they are? How should I approach the problem? Thanks a lot for the help.
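
    For what it's worth, a plain join table does cover this case once bar_id is allowed to repeat across rows, one row per related foo_id. A sketch using the table names from the question (column types assumed):

      CREATE TABLE bar (
          bar_id INTEGER PRIMARY KEY   -- one row per "set" of FOO ids
      );

      CREATE TABLE bar_foo (
          bar_id INTEGER NOT NULL REFERENCES bar (bar_id),
          foo_id INTEGER NOT NULL REFERENCES foo (foo_id),
          PRIMARY KEY (bar_id, foo_id)
      );

      -- the set {3, 4, 5} under the single key bar_id = 1:
      INSERT INTO bar VALUES (1);
      INSERT INTO bar_foo VALUES (1, 3), (1, 4), (1, 5);

    The composite primary key (bar_id, foo_id) is what removes the one-to-one restriction described above.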

  • Any simple approaches for managing customer data change requests for global reference files?

    - by Kelly Duke
    For the first time, I am developing in an environment in which there is a central repository for a number of different industry-standard reference data tables, and many different customers who need to select records from these tables to fill in foreign key information for their customer-specific records.

    Because these industry-standard reference files are utilized by all customers, I want to reserve Create/Update/Delete access to these records for global product administrators. However, I would like to implement a (semi-)automated interface by which specific customers could request record additions, deletions or modifications to any of the industry-standard reference files that are shared among all customers.

    I know I need something like a "data change request" table specifying: user id, user request datetime, request type (insert, modify, delete), a user-entered text explanation of the change request, the request's current status (pending, declined, completed), admin resolution datetime, admin id, an admin-entered text description of the resolution, etc. What I can't figure out is how to elegantly handle the fact that these data change requests could apply to dozens of different tables with differing column definitions.

    I would like to give the customer users making these requests a convenient way to enter their proposed record additions/modifications directly into CRUD screens that look very much like the reference table CRUD screens they don't have write/delete permissions for (with an additional text explanation and perhaps a request priority field). I would also like to give the global admins a tool that lists all outstanding data change requests for the users they oversee, sorted by date requested or by user and date requested. Upon selecting a data change request from the list, the admin would be directed to another CRUD screen populated with the fields the customer requested for the new/modified reference table record, along with the customer's text explanation, the request status and the resolution explanation field. At that point the admin could accept/edit/reject the requested change; if accepted, the affected reference file would be updated automatically, and the change request record's status, resolution explanation and resolution datetime would all be updated as well.

    However, I want to keep the actual production reference tables as simple as possible and free from these extraneous, typically NULL customer-change-request fields. I'd also like the data change request file to aggregate all requests across all the reference tables, yet somehow "point to" the specific reference table and primary key in question for modification and deletion requests, or the specific reference table and the customer-entered field values in question for record creation requests. Does anybody have any ideas of how to design something like this effectively? Is there a cleaner, simpler way I am missing? Thank you so much for reading.
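
    One shape that matches the "point to" requirement is a single request table that names the target table, carries the target row's key for modify/delete requests, and serializes the proposed column/value pairs so tables with differing columns can share it (the CRUD screens pack and unpack the payload). A sketch; every name and type below is an assumption:

      CREATE TABLE data_change_request (
          request_id       INTEGER      NOT NULL PRIMARY KEY,
          requested_by     INTEGER      NOT NULL,   -- requesting user id
          requested_at     TIMESTAMP    NOT NULL,
          request_type     CHAR(1)      NOT NULL,   -- 'I'nsert / 'U'pdate / 'D'elete
          target_table     VARCHAR(128) NOT NULL,   -- which reference table
          target_pk        VARCHAR(64),             -- key of the existing row (update/delete)
          proposed_values  TEXT         NOT NULL,   -- serialized column/value pairs (XML or JSON)
          explanation      TEXT,                    -- user-entered rationale
          status           VARCHAR(16)  NOT NULL DEFAULT 'pending',
          resolved_by      INTEGER,                 -- admin id
          resolved_at      TIMESTAMP,
          resolution_note  TEXT
      );

    This keeps the production reference tables untouched; the cost is that the serialized payload cannot be validated by the database itself, so the admin-approval screen has to re-validate before applying the change.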

  • Add schema to search_path in PostgreSQL

    - by veilig
    I'm in the process of moving applications over from all living in the public schema to each having its own schema. For each application, I have a small script that creates the schema and then creates the tables, functions, etc. in that schema. Is there any way to automatically add a newly created schema to the search_path? Currently, the only way I see is to look up the user's current path with SHOW search_path; and then add the new schema to it with SET search_path TO xxx, yyy, zzz;. I would like some way to just say "append schema zzz to the user's search_path". Is this possible?
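
    Two ways this is commonly handled in PostgreSQL, sketched below; app_user and the schema names are placeholders:

      -- persist a new default path for a role (takes effect in new sessions):
      ALTER ROLE app_user SET search_path = xxx, yyy, zzz;

      -- or append to the current session's path without restating it:
      SELECT set_config('search_path',
                        current_setting('search_path') || ', zzz',
                        false);

    The set_config() form is the closest thing to "append schema zzz", since it reads the current value rather than requiring the full list.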

  • Experimenting with data to determine value - migration/methods?

    - by TCK
    Hey guys, I have a LOT of data available to me and want to experiment with data that isn't currently being used in production. The obvious solution seems to be to make a copy of the production data and integrate it with what I want to play around with, but I was wondering if there is a better (less expensive?) way to do this. Both isolation and integration are important: I'd like to keep lightweight, experimental data assets apart from the high-volume production data, but also be able to integrate them (RELATIVELY) painlessly if the experimental assets are deemed useful. Thanks.

  • Modify database for the SharePoint 2010 Enterprise Search administration web site

    - by Mark Hall
    Does anyone know how to modify the database settings for the Enterprise Search administration web site? When you configure the service application via Central Administration, SharePoint just decides to use the default database server and assigns a name like Enterprise_Search_DB_Identifier. I want to modify this to at least give the database a name that makes sense, like SharePoint_Search_AdministrationWebContent, and it might be nice to move it to the database server that is hosting the crawl and property databases. I figured out how to move the Central Administration web content database, but this database is not listed as a content database; it is listed as a Microsoft.Office.Server.Search.Administration.SearchAdminDatabase. I have not tested whether the same process would work, but because the content-database move relies on RemoveContentDatabase and NewContentDatabase, I would assume not. Any help would be appreciated.

  • Entity Framework (1): implement one foreign key to multiple tables

    - by Michel
    Hi, I've modeled this: I have an import table and an importsteps table, where one import has 1..N importsteps. Now I have a table importparams, which holds key/value pairs registering all kinds of info about the import or the import steps. So in SQL Server I have modeled a FK in importparams that points to the PK of the import table and to the PK of the importsteps table (the IDs of both tables are GUIDs, so I can query importparams with either an import id or an importsteps id and get the right rows). Makes sense, a bit? But how can I model this in the EF? I can see it's a bit hard for the EF to model this, because one relation can point to multiple classes, but is there a way? The workaround would normally be to just fetch all importparams whose FK equals the ID, but as you know, foreign keys are not exposed in EF version 1. I hope you can help me out. Michel
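
    On the database side, one workable shape is two nullable foreign keys plus a CHECK that exactly one is set; EF can then expose two ordinary optional associations instead of one relation that points at multiple classes. A sketch in T-SQL, with column types assumed and the import/importsteps tables presumed to exist with the PKs named below:

      CREATE TABLE importparams (
          param_id       INT NOT NULL PRIMARY KEY,
          import_id      UNIQUEIDENTIFIER NULL REFERENCES import (import_id),
          import_step_id UNIQUEIDENTIFIER NULL REFERENCES importsteps (import_step_id),
          param_key      NVARCHAR(100) NOT NULL,
          param_value    NVARCHAR(400) NULL,
          -- exactly one of the two parents must be set
          CHECK (
              (import_id IS NOT NULL AND import_step_id IS NULL) OR
              (import_id IS NULL AND import_step_id IS NOT NULL)
          )
      );

    The trade-off versus the single shared-GUID column is an extra nullable column, in exchange for real FKs the mapper understands.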

  • Best solution for a comment table for multiple content types

    - by KRTac
    I'm currently designing a comments table for a site I'm building. Users will be able to upload images, link videos and add audio files to their profile, and each of these types of content must be commentable. Now I'm wondering what the best approach is. My current options are: 1. have one big comments table plus a link table for every content type (comments_videos, ...), each link table holding comment_id and the content's own id (video_id, ...), as sketched below; or 2. have comments separated by the type of content they are for, so each content type would have its own comments table holding the comments for that type.
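
    A sketch of option 1 in SQL; the column types, the videos table, and everything else not named in the question are assumptions:

      CREATE TABLE comments (
          comment_id  INTEGER   NOT NULL PRIMARY KEY,
          user_id     INTEGER   NOT NULL,
          body        TEXT      NOT NULL,
          created_at  TIMESTAMP NOT NULL
      );

      -- one narrow link table per content type
      CREATE TABLE comments_videos (
          comment_id INTEGER NOT NULL PRIMARY KEY  -- a comment belongs to one video
              REFERENCES comments (comment_id),
          video_id   INTEGER NOT NULL REFERENCES videos (video_id)
      );
      -- comments_images and comments_audios would have the same shape

    The appeal of this variant is that all comments live in one table (easy "latest comments" queries), while referential integrity per content type is still enforced by the link tables.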

  • When is referential integrity not appropriate?

    - by Curtis Inderwiesche
    I understand the need for referential integrity to limit the values that can be entered, or possibly to prevent removal upon a deletion request. However, I am unclear on a valid use case that would justify not always using this mechanism. I guess this falls into several sub-questions: 1. When is referential integrity not appropriate? 2. Is it appropriate to have fields containing multiple and/or possibly incomplete subsets of a foreign key's list? 3. Typically, should this be a schema-structure design decision or an interface design decision? (Or possibly neither, or both.) Thoughts?

  • DNS queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (Name Service Cache Daemon) to cache DNS locally so I can stop using BIND to do it. I've gotten it started, and ntpd seems to attempt to use it, but everything else seems to ignore it for host lookups. E.g., if I dig apache.org three times, none of the queries hit the cache. I'm viewing the cache stats with nscd -g to determine whether it's been used, and I've also turned the debug log level up to see if I can watch the hits; the queries don't even reach nscd.

    nsswitch.conf:

      # Begin /etc/nsswitch.conf
      passwd: files
      group: files
      shadow: files
      publickey: files
      hosts: cache files dns
      networks: files
      protocols: files
      services: files
      ethers: files
      rpc: files
      netgroup: files
      # End /etc/nsswitch.conf

    nscd.conf:

      #
      # /etc/nscd.conf
      #
      # An example Name Service Cache config file. This file is needed by nscd.
      #
      # Currently supported cache names (services): passwd, group, hosts, services
      #
      logfile                 /var/log/nscd.log
      threads                 4
      max-threads             32
      server-user             nobody
      # stat-user             somebody
      debug-level             9
      # reload-count          5
      paranoia                no
      # restart-interval      3600

      enable-cache            passwd          yes
      positive-time-to-live   passwd          600
      negative-time-to-live   passwd          20
      suggested-size          passwd          211
      check-files             passwd          yes
      persistent              passwd          yes
      shared                  passwd          yes
      max-db-size             passwd          33554432
      auto-propagate          passwd          yes

      enable-cache            group           yes
      positive-time-to-live   group           3600
      negative-time-to-live   group           60
      suggested-size          group           211
      check-files             group           yes
      persistent              group           yes
      shared                  group           yes
      max-db-size             group           33554432
      auto-propagate          group           yes

      enable-cache            hosts           yes
      positive-time-to-live   hosts           3600
      negative-time-to-live   hosts           20
      suggested-size          hosts           211
      check-files             hosts           yes
      persistent              hosts           yes
      shared                  hosts           yes
      max-db-size             hosts           33554432

      enable-cache            services        yes
      positive-time-to-live   services        28800
      negative-time-to-live   services        20
      suggested-size          services        211
      check-files             services        yes
      persistent              services        yes
      shared                  services        yes
      max-db-size             services        33554432

    resolv.conf:

      # Generated by dhcpcd from eth0
      nameserver 127.0.0.1
      domain westell.com
      nameserver 192.168.1.1
      nameserver 208.67.222.222
      nameserver 208.67.220.220

    As kind of a side note, I'm using Arch Linux.

  • Two different tables or just one with a bool column?

    - by Aidas
    We have two tables: OriginalDocument and ProcessedDocument. In the first one we put an original, not yet processed document. After it's validated and processed (converted to our XML format and parsed), it's put into the ProcessedDocument table. A processed document can be valid or invalid. Which makes more sense: having two different tables for valid and invalid documents, or just one with a 'Valid' column? Some of the columns (~5-7) are irrelevant for invalid documents, and storing both valid and invalid documents together would fill the table with NULL columns (if a document is invalid, information like the document number or receiver can be unknown). What else should we consider and weigh when making this decision?
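
    If the single-table route wins, a view can hide both the flag and the always-NULL columns from code that only cares about valid documents. A sketch; the types and the view are assumptions on my part:

      CREATE TABLE processed_document (
          document_id     INTEGER NOT NULL PRIMARY KEY,
          is_valid        BOOLEAN NOT NULL,   -- BIT in SQL Server
          document_number VARCHAR(50),        -- NULL when invalid/unknown
          receiver        VARCHAR(100)        -- NULL when invalid/unknown
      );

      -- most code reads through a view that hides the invalid rows
      CREATE VIEW valid_document AS
          SELECT document_id, document_number, receiver
          FROM processed_document
          WHERE is_valid = TRUE;              -- is_valid = 1 in SQL Server

    This keeps one table to load and index while giving consumers the two-table illusion.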

  • Data Modeling of Entity with Attributes

    - by StackOverflowNewbie
    I'm storing some very basic information about the "data sources" coming into my application. A data source can be a document (e.g. PDF), audio (e.g. MP3) or video (e.g. AVI). Say, for example, I am only interested in the filename of the data source. Thus, I have the following table:

      DataSource
        Id (PK)
        Filename

    For each data source, I also need to store some of its attributes. An example for a PDF would be "number of pages"; for audio, "bit rate"; for video, "duration". Each DataSource will have different requirements for the attributes that need to be stored, so I have modeled "data source attribute" this way:

      DataSourceAttribute
        Id (PK)
        DataSourceId (FK)
        Name
        Value

    Thus, I would have records like these:

      DataSource->Id = 1
      DataSource->Filename = 'mydoc.pdf'
      DataSource->Id = 2
      DataSource->Filename = 'mysong.mp3'
      DataSource->Id = 3
      DataSource->Filename = 'myvideo.avi'

      DataSourceAttribute->Id = 1
      DataSourceAttribute->DataSourceId = 1
      DataSourceAttribute->Name = 'TotalPages'
      DataSourceAttribute->Value = '10'

      DataSourceAttribute->Id = 2
      DataSourceAttribute->DataSourceId = 2
      DataSourceAttribute->Name = 'BitRate'
      DataSourceAttribute->Value = '16'

      DataSourceAttribute->Id = 3
      DataSourceAttribute->DataSourceId = 3
      DataSourceAttribute->Name = 'Duration'
      DataSourceAttribute->Value = '1:32'

    My problem is that this doesn't seem to scale. For example, say I need to query for all the PDF documents along with their total number of pages:

      Filename, TotalPages
      'mydoc.pdf', '10'
      'myotherdoc.pdf', '23'
      ...

    The JOINs needed to produce this result are just too costly. How should I address this problem?
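
    For scale, this is the per-attribute join the design implies, written out in SQL, plus a composite index (my assumption, not from the question) that turns each join into an index seek:

      -- one join per attribute wanted in the result
      SELECT ds.Filename, attr.Value AS TotalPages
      FROM DataSource ds
      INNER JOIN DataSourceAttribute attr
          ON attr.DataSourceId = ds.Id
         AND attr.Name = 'TotalPages';

      -- a composite index makes each such join a seek rather than a scan
      CREATE INDEX ix_dsa_name_source
          ON DataSourceAttribute (Name, DataSourceId, Value);

    If a handful of attributes are queried constantly, another common move is promoting just those to real typed columns (or per-type subtype tables) and leaving the long tail in the attribute table.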

  • Tables as relations in ER diagrams

    - by Richard Mar.
    Assume I have the following tables (**bold** = primary key, *italics* = foreign key): patient(**patient_id**, name), disease(**disease_id**, name), patient_disease(**p_d_id**, *patient_id*, *disease_id*). I want to draw the ER diagram for this. My idea is to make two entities, one for patient and one for disease, and then a many-to-many relationship between them with p_d_id as its attribute. Is that how it's supposed to be?
