Search Results

Search found 5380 results on 216 pages for 'primary'.


  • Duplicate partitioning key performance impact

    - by Anshul
    I've read in some posts that having a duplicate partitioning key can have a performance impact. I have two tables like:

        CREATE TABLE "Test1" (
          key text,
          column1 text,
          value text,
          PRIMARY KEY (key, column1)
        )

        CREATE TABLE "Test2" (
          key text,
          name text,
          age text,
          ...
          PRIMARY KEY (key, name, age)
        )

    In Test1, column1 will contain a column name and value will contain its corresponding value. The main advantage of Test1 is that I can add any number of column/value pairs to it without altering the table, by just providing the same partitioning key each time. Now my question is: how will each of these table schemas impact the read/write performance if I have millions of rows and the number of columns can be up to 50 in each row? How will it impact the compaction/repair time if I'm writing duplicate entries frequently?

    Read the article

  • [Django] How to find out whether a model's column is a foreign key?

    - by codethief
    I'm dynamically storing information in the database depending on the request:

        # table, id and column are provided by the request
        table_obj = getattr(models, table)
        record = table_obj.objects.get(pk=id)
        setattr(record, column, request.POST['value'])

    The problem is that request.POST['value'] sometimes contains a foreign record's primary key (i.e. an integer), whereas Django expects the column's value to be an instance of the foreign model: Cannot assign "u'122'": "ModelA.b" must be a "ModelB" instance. Now, is there an elegant way to dynamically check whether b is a column containing foreign keys, and which model those keys are linked to? (So that I can load the foreign record by its primary key and assign it to ModelA.) Or doesn't Django provide this kind of information to the programmer, so I really have to get my hands dirty and use isinstance() on the foreign-key column?
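    A minimal sketch of one way to do this with Django's field metadata; the attribute names used here (get_field, remote_field) are from the Django versions I'm aware of, and very old releases expose the related model as field.rel.to instead:

        from django.db import models

        def resolve_value(record, column, raw_value):
            # Look up the field object for the column being assigned
            field = record._meta.get_field(column)
            if isinstance(field, models.ForeignKey):
                # For FK columns, turn the posted pk into the related instance
                related_model = field.remote_field.model
                return related_model.objects.get(pk=raw_value)
            return raw_value

        setattr(record, column, resolve_value(record, column, request.POST['value']))

    Another dodge, where it fits, is to assign the raw pk to the field's attname (e.g. record.b_id = 122), which Django accepts without fetching the related row.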

    Read the article

  • MVC Entity Model not showing my table

    - by Jessica
    I have a database with multiple tables and some basic relationships. Here is an example of the problem I am having. My database:

        Org:         ID, Name, etc.
        Detail1:     ID, D1Name
        Org_Detail1: Org_ID, Detail1_ID
        Detail2:     ID, D2Name
        Org_Detail2: Org_ID, Detail2_ID, BooleanField

    My problem is that the Org_Detail1 table is not showing up in the entity model, but the Org_Detail2 table does. I thought it may have been because the Org_Detail1 table only contains two ID fields that are both primary keys, while the Org_Detail2 table contains two primary key ID fields as well as a boolean field. If I add a dummy field to Org_Detail1 and update the model, it still won't show up and won't allow me to add a new entity relating to the Org_Detail1 table. The table won't even show up in the list, but it is listed under the tables. Is there any solution to get this table to appear in my model?

    Read the article

  • Mongodb using db.help() on a particular db command

    - by user1325696
    When I type db.help() it returns:

        DB methods:
        db.addUser(username, password[, readOnly=false])
        db.auth(username, password)
        ...
        db.printShardingStatus()
        ...
        db.fsyncLock() flush data to disk and lock server for backups
        db.fsyncUnlock() unlocks server following a db.fsyncLock()

    I'd like to find out how to get more detailed help for a particular command. The problem was with printShardingStatus, as it returned "too many chunks to print, use verbose if you want to force print":

        mongos> db.printShardingStatus()
        --- Sharding Status ---
          sharding version: { "_id" : 1, "version" : 3 }
          shards:
            { "_id" : "shard0000", "host" : "localhost:10001" }
            { "_id" : "shard0001", "host" : "localhost:10002" }
          databases:
            { "_id" : "admin", "partitioned" : false, "primary" : "config" }
            { "_id" : "dbTest", "partitioned" : true, "primary" : "shard0000" }
              dbTest.things chunks:
                shard0001  12
                shard0000  19
              too many chunks to print, use verbose if you want to force print

    I found that for that particular command I can specify a boolean parameter, db.printShardingStatus(true), which wasn't shown by db.help().
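    For what it's worth, one way to dig out such parameters without leaving the shell: evaluating a helper without parentheses makes the mongo shell print the function's JavaScript source, which shows arguments that db.help() omits (behaviour of the standard mongo shell; the exact source printed varies by version):

        mongos> db.printShardingStatus        // no parentheses: prints the helper's source,
                                              // which reveals its optional verbose argument
        mongos> db.printShardingStatus(true)  // verbose output, as noted above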

    Read the article

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem is with respect to the writing speed of the computers (10 * 32-bit machines) and the PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (along with appropriate database indexes in place). I am trying to read it from a PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write with them in order to speed up my data handling.

    As far as the PostgreSQL table is concerned, the overall record count is 140 million and I have 5 primary/foreign key referring tables. I am not using joins as they are not scalable, so for a single lookup I do 6 lookups without joins and write them into HDF5 format. For each lookup I do 6 inserts into each of the tables and their corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1   (primary key & indexed)
        select q_t from x.qt where q_id=2     (non-primary key but indexed)
        (similarly five queries)

    Each computer writes two HDF5 files and hence the total count comes to around 20 files.

    Some calculations and statistics:

        Total number of records: 14,37,00,000
        Total number of records per file: 143700000 / 20 = 71,85,000
        Total number of records in each file: 71,85,000 * 5 = 3,59,25,000

    Current PostgreSQL database config: my current machine has 8 GB RAM with an i7 2nd-generation processor. I made the following changes to the PostgreSQL configuration file: shared_buffers = 2 GB, effective_cache_size = 4 GB.

    Note on current performance: I have run it for about ten hours and the performance is as follows. The total number of records written for each file is about 6,21,000 * 5 = 31,05,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed it will take about 11 days, which is too long for my experiments. Please suggest how I can improve.

    Questions:
    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it improve the process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks,
    Sree Aurovindh V

    Read the article

  • Unable to relate two MySQL tables (foreign keys)

    - by KPL
    Hello people, here's my users table:

        CREATE TABLE IF NOT EXISTS `users` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `username` varchar(100) NOT NULL,
          `expiry` varchar(6) NOT NULL,
          `contact_id` int(11) NOT NULL,
          `email` varchar(255) NOT NULL,
          `password` varchar(100) NOT NULL,
          `level` int(3) NOT NULL,
          `active` tinyint(4) NOT NULL DEFAULT '1',
          PRIMARY KEY (`id`,`email`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

    And here's my contact_info table:

        CREATE TABLE IF NOT EXISTS `contact_info` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `name` varchar(255) NOT NULL,
          `email_address` varchar(255) NOT NULL,
          `company_name` varchar(255) NOT NULL,
          `license_number` varchar(255) NOT NULL,
          `phone` varchar(30) NOT NULL,
          `fax` varchar(30) NOT NULL,
          `mobile` varchar(30) NOT NULL,
          `category` varchar(100) NOT NULL,
          `country` varchar(20) NOT NULL,
          `state` varchar(20) NOT NULL,
          `city` varchar(100) NOT NULL,
          `postcode` varchar(50) NOT NULL,
          PRIMARY KEY (`id`,`email_address`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

    The system uses username to log users in. I want to modify it in such a way that it uses email for login, but there's no email_address in the users table. I have added a foreign key, email, in the users table (which is email_address in contact_info). How should I query the database?
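    A sketch of one way the login query could look, reusing the users.contact_id link that already exists in the schema above (the literal values are placeholders, and the password comparison is simplified):

        SELECT u.*
        FROM users u
        JOIN contact_info c ON c.id = u.contact_id
        WHERE c.email_address = 'someone@example.com'
          AND u.password = '<hashed password>'
        LIMIT 1;

    Alternatively, since users already has its own email column, the simplest change may be to keep that column populated and match on u.email directly, avoiding the join altogether.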

    Read the article

  • How to catch specific exception without error number?

    - by CJ7
    I need to catch the following specific exception:

        System.Data.OleDb.OleDbException was caught
          ErrorCode=-2147467259
          Message="The changes you requested to the table were not successful because they would
                   create duplicate values in the index, primary key, or relationship. Change the
                   data in the field or fields that contain duplicate data, remove the index, or
                   redefine the index to permit duplicate entries and try again."
          Source="Microsoft JET Database Engine"

    I'm not sure what ErrorCode is, but it looks unreliable. Can I rely on Message being identical across platforms? Is the only solution to do a text search of Message for words like "duplicate" and "primary key"? Note: see my question here for why I need to catch this exception.
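    A sketch of a less brittle check than matching the (localized) Message text: inspect the exception's Errors collection instead. The Jet error number 3022 below is an assumption taken from Access's documented duplicate-key error; log the values from a real failure before relying on it:

        Imports System.Data.OleDb

        Module DuplicateKeyHelper
            ' True if the OleDbException looks like a duplicate-key / index violation.
            ' 3022 is assumed to be Jet's native error number for this case -- verify
            ' it against your own database first.
            Public Function IsDuplicateKey(ByVal ex As OleDbException) As Boolean
                For Each oleErr As OleDbError In ex.Errors
                    If oleErr.NativeError = 3022 Then Return True
                Next
                Return False
            End Function
        End Module

    With a helper like this the handler can stay narrow, e.g. Catch ex As OleDbException When IsDuplicateKey(ex), which catches only the duplicate-key case and lets everything else propagate.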

    Read the article

  • @SequenceGenerator - allocationSize, reverse engineering with Eclipse Hibernate Tools

    - by Spooky
    I use the Eclipse Hibernate Tools to create domain classes with JPA annotations from my Oracle database. To control sequence generation I have added the following entry to the hibernate.reveng.xml:

        ...
        <primary-key>
          <generator class="sequence">
            <param name="sequence">SEQ_FOO_ID</param>
          </generator>
        </primary-key>
        ...

    This results in the following annotation:

        @SequenceGenerator(name = "generator", sequenceName = "SEQ_FOO_ID")

    However, I need to set the allocationSize like this:

        @SequenceGenerator(name = "generator", sequenceName = "SEQ_FOO_ID", allocationSize = 1)

    Is it possible to set this somehow in the hibernate.reveng.xml?

    Read the article

  • What is the meaning of 'idx_categories_desc_categories_name' in osCommerce

    - by Sumant
    While working on osCommerce 3 I got the following table structure for categories & categories_description:

        CREATE TABLE IF NOT EXISTS `osc_categories` (
          `categories_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `categories_image` varchar(255) DEFAULT NULL,
          `parent_id` int(10) unsigned DEFAULT NULL,
          `sort_order` int(11) DEFAULT NULL,
          `date_added` datetime DEFAULT NULL,
          `last_modified` datetime DEFAULT NULL,
          PRIMARY KEY (`categories_id`),
          KEY `idx_categories_parent_id` (`parent_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=5 ;

        CREATE TABLE IF NOT EXISTS `osc_categories_description` (
          `categories_id` int(10) unsigned NOT NULL,
          `language_id` int(10) unsigned NOT NULL,
          `categories_name` varchar(255) NOT NULL,
          PRIMARY KEY (`categories_id`,`language_id`),
          KEY `idx_categories_desc_categories_id` (`categories_id`),
          KEY `idx_categories_desc_language_id` (`language_id`),
          KEY `idx_categories_desc_categories_name` (`categories_name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    Here I am not getting the meaning of the indexes "idx_categories_desc_categories_id", "idx_categories_desc_language_id", and "idx_categories_desc_categories_name". What is the use of this indexing? What does it mean?
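    For reference, KEY inside a MySQL CREATE TABLE is a synonym for INDEX; each of those lines defines a secondary index whose name is the idx_... identifier. A sketch of the equivalent standalone statement and the kind of query it speeds up ('Hardware' is a made-up value):

        -- same index, declared separately
        CREATE INDEX idx_categories_desc_categories_name
            ON osc_categories_description (categories_name);

        -- lookups by category name can now use the index instead of scanning the table
        SELECT categories_id, language_id
        FROM osc_categories_description
        WHERE categories_name = 'Hardware';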

    Read the article

  • Data model for timesheet to task and/or timesheet to project?

    - by John
    Let's say I want to make a simple project tracking system. A manager can create a project. Then he can create tasks for that project. Team members can record the hours they work for each task or for the project as a whole. Is the following design for the t_timesheet table a good idea?

        timesheet_id - primary key, autoincrement
        project_id   - not null, foreign key constraint to t_project
        task_id      - nullable, foreign key constraint to t_task
        user_id      - not null, foreign key constraint to t_user
        hours        - decimal

    Or should I do something like this:

        timesheet_id - primary key, autoincrement
        task_id      - not null, foreign key constraint to t_task
        user_id      - not null, foreign key constraint to t_user
        hours        - decimal

    In the second option, I intend to always have a record in t_task labelled "miscellaneous items" with a foreign key to the relevant t_project record. Then I'll be able to track all hours for a project that aren't for any particular task. Are any of the ideas above good? What would be better?

    Read the article

  • DB design abbreviations

    - by CChriss
    I know PK means primary key and FK means foreign key, but what do "rK" (in section 3) and "PF" (in sections 3, 4, 6, 7, and 8) mean on this page? http://www.databaseanswers.org/tutorial4_data_modelling/index.htm And what does "FF" mean (in the Customer_Addresses table) on this page? (I'm new, so it would only let me put in one hyperlink; copy/paste this to go to the page I'm asking about: databaseanswers.org/tutorial4_db_schema/tutorial_slide_7.htm) Thanks. Edit: also, I understand the concepts of primary keys and foreign keys, but what are these other ones used for?

    Read the article

  • HBase as web app backend

    - by NathanD
    Can anyone advise whether it is a good idea to have HBase as the primary data source for a web-based application? My primary concern is HBase's response time to queries. Is it possible to have sub-second responses?

    Edit: more details about the app itself:

        Amount of data: ~500 GB of text data, expected to reach 1 TB soon
        Number of concurrent users: up to 50

    The app will be used to present reports about the data stored in HBase, like how many times keyword "X" occurred in the last 24h. For ~80% of requests from the app I will know the exact key; 20% will be scans (I'm looking into HBase schema design related topics to make them run fast).

    Read the article

  • The conceptual process of populating related tables in a database (MySql) from a CSV file

    - by user322772
    I'm new to relational databases, and all the material I've read covered primary and foreign keys, normal forms, and joins, but left out how to populate the database once it's created. How do you import a CSV file so the fields match their related tables? Say you were trying to build a beer database and had a CSV file with each line as a record:

        Header:   brewer, beer_name, country, city, state, beer_category, beer_type, alcohol_content
        Record 1: Anheuser-Busch, Budweiser, United States, St. Louis, MO, Pale lager, Regular, 5.0%
        Record 2: Anheuser-Busch, Bud Light, United States, St. Louis, MO, Pale lager, Light, 4.2%
        Record 3: Miller Brewing Company, Miller Lite, United States, Milwaukee, WI, Pale lager, Light, 4.2%

    You can create a "Brewer" table and a "Beer" table. When importing, how do you connect the primary keys between the tables?
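    A rough sketch of one common way to do this in MySQL, assuming hypothetical brewer and beer tables where brewer has an AUTO_INCREMENT brewer_id: stage the raw CSV first, insert the distinct brewers, then let the beer rows pick up their brewer_id by joining back on the brewer name:

        -- staging table mirrors the CSV columns exactly
        CREATE TABLE staging (
          brewer VARCHAR(100), beer_name VARCHAR(100), country VARCHAR(50), city VARCHAR(50),
          state VARCHAR(10), beer_category VARCHAR(50), beer_type VARCHAR(50), alcohol_content VARCHAR(10)
        );

        LOAD DATA LOCAL INFILE 'beers.csv' INTO TABLE staging
          FIELDS TERMINATED BY ',' IGNORE 1 LINES;

        -- one row per distinct brewer; brewer_id is generated here
        INSERT INTO brewer (name, country, city, state)
        SELECT DISTINCT brewer, country, city, state FROM staging;

        -- each beer row finds its parent's generated key by joining on the natural key (the name)
        INSERT INTO beer (brewer_id, name, category, type, alcohol_content)
        SELECT b.brewer_id, s.beer_name, s.beer_category, s.beer_type, s.alcohol_content
        FROM staging s
        JOIN brewer b ON b.name = s.brewer;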

    Read the article

  • Get the BindingSource position based on DataTable row

    - by Ronald
    I have a DataTable that contains the rows of a database table. This table has a primary key formed by 2 columns. The components are connected this way: DataTable - BindingSource - DataGridView. What I want is to search for a specific row (based on the primary key) and select it on the grid. I can't use the BindingSource.Find method because it only works with one column. I have access to the DataTable, so I can manually search the DataTable, but how can I get the BindingSource row position based on the DataTable row? Or is there another way to solve this? I'm using Visual Studio 2005, VB.NET.
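    One possible approach, sketched in VB.NET: locate the DataRow on the table yourself, then walk the BindingSource comparing each DataRowView's underlying row (myDataTable, keyValue1, keyValue2 and the column names are placeholders for your own data; Rows.Find needs DataTable.PrimaryKey set to the two key columns):

        ' row is the DataRow found by searching the DataTable directly
        Dim row As DataRow = myDataTable.Rows.Find(New Object() {keyValue1, keyValue2})

        If row IsNot Nothing Then
            For i As Integer = 0 To bindingSource.Count - 1
                If CType(bindingSource(i), DataRowView).Row Is row Then
                    bindingSource.Position = i   ' the grid's current row follows Position
                    Exit For
                End If
            Next
        End If

    Another option, since the BindingSource wraps the DataTable in a DataView, is to cast bindingSource.List to DataView, set Sort to both key columns and call DataView.Find with an array of key values; note that setting Sort also re-orders the grid.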

    Read the article

  • Files and filegroups sql server 2005

    - by Dhivagar
    Can we move a file from the default filegroup to another filegroup? Sample code is given below:

        CREATE DATABASE EMPLOYEE
        ON PRIMARY
        ( NAME = 'PRIMARY_01',
          FILENAME = 'C:\METADATA\PRIM01.MDF',
          SIZE = 5 MB, MAXSIZE = 50 MB, FILEGROWTH = 2 MB),
        ( NAME = 'SECONDARY_02',
          FILENAME = 'C:\METADATA\SEC02.NDF' ),
        FILEGROUP EMPLOYEE_DETAILS
        ( NAME = 'EMPDETILS_01',
          FILENAME = 'C:\METADATA\EMPDET01.NDF',
          SIZE = 5 MB, MAXSIZE = 50 MB, FILEGROWTH = 2 MB),
        ( NAME = 'EMPDETILS_02',
          FILENAME = 'C:\METADATA\EMPDET02.NDF',
          SIZE = 5 MB, MAXSIZE = 50 MB, FILEGROWTH = 2 MB)
        LOG ON
        ( NAME = 'TRANSACLOG',
          FILENAME = 'c:\METADATA\TRAS01.LDF',
          SIZE = 5 MB, MAXSIZE = 50 MB, FILEGROWTH = 2 MB )

    Now I want to move the file FILENAME = 'C:\METADATA\SEC02.NDF' from the default PRIMARY filegroup to the filegroup EMPLOYEE_DETAILS. Can anyone assist?

    Read the article

  • "SQLSTATE[23000]: Integrity constraint violation" in Doctrine

    - by rags
    Hi, I get an integrity constraint violation from Doctrine, though I really can't see why.

    Schema.yml:

        User:
          columns:
            id:
              type: integer
              primary: true
              autoincrement: true
            username:
              type: varchar(64)
              notnull: true
            email:
              type: varchar(128)
              notnull: true
            password:
              type: varchar(128)
              notnull: true
          relations:
            Websites:
              class: Website
              local: id
              foreign: owner
              type: many
              foreignType: one
              onDelete: CASCADE

        Website:
          columns:
            id:
              type: integer
              primary: true
              autoincrement: true
            active:
              type: bool
            owner:
              type: integer
              notnull: true
            plz:
              type: integer
              notnull: true
            longitude:
              type: double(10,6)
              notnull: true
            latitude:
              type: double(10,6)
              notnull: true
          relations:
            Owner:
              type: one
              foreignType: many
              class: User
              local: owner
              foreign: id

    And here are my data fixtures (data.yml):

        Model_User:
          User_1:
            username: as
            email: as****.com
            password: *****

        Model_Website:
          Website_1:
            active: true
            plz: 34222
            latitude: 13.12
            longitude: 3.56
            Owner: User_1

    Read the article

  • insert into several inheritance tables with OUTPUT - SQL Server 2005

    - by csetzkorn
    Hi, I have a bunch of items; for simplicity, a flat table with unique names seeded via bulk insert:

        create table #items ( ItemName NVARCHAR(255) )

    The database has this structure:

        create table Statements (
          Id INT IDENTITY NOT NULL,
          Version INT not null,
          FurtherDetails varchar(max) null,
          ProposalDateTime DATETIME null,
          UpdateDateTime DATETIME null,
          ProposerFk INT null,
          UpdaterFk INT null,
          primary key (Id)
        )

        create table Item (
          StatementFk INT not null,
          ItemName NVARCHAR(255) null,
          primary key (StatementFk)
        )

    Here Item is a child of Statement (inheritance). I would like to insert the items in #items using a set-based approach (avoiding triggers and loops). Can this be achieved with OUTPUT in my scenario? A loop-based approach, where I use something like the following, is just too slow:

        insert into Statements (Version, FurtherDetails, ProposalDateTime, UpdateDateTime, ProposerFk, UpdaterFk)
        VALUES (1, null, getdate(), getdate(), @user_id, @user_id)

    etc. This is a start for the OUTPUT-based approach, but I am not sure whether this would work in my case, as ItemName is only inserted into Item:

        insert into Statements ( Version, FurtherDetails, ProposalDateTime, UpdateDateTime, ProposerFk, UpdaterFk )
        output inserted.Id
        ... ???

    Thanks. Best wishes, Christian

    Read the article

  • How to structure (normalize?) a database of physical parameters?

    - by Arrieta
    Hello: I have a collection of physical parameters associated with different items. For example:

        Item, p1, p2, p3
        a,    1,  2,  3
        b,    4,  5,  6
        [...]

    where px stands for parameter x. I could go ahead and store the database exactly as presented; the schema would be:

        CREATE TABLE t1 (item TEXT PRIMARY KEY, p1 FLOAT, p2 FLOAT, p3 FLOAT);

    I could retrieve the parameter p1 for all the items with the statement:

        SELECT p1 FROM t1;

    A second alternative is to have a schema like:

        CREATE TABLE t1 (id INT PRIMARY KEY, item TEXT, par TEXT, val FLOAT);

    This seems much simpler if you have many parameters (as I do). However, the parameter retrieval seems very awkward:

        SELECT val FROM t1 WHERE par = 'p1';

    What do you advise? Should I go for the "pivoted" (first) version or the id, par, val (second) version? Many thanks.
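    For a sense of the trade-off, a sketch of how the tall (id, item, par, val) layout can be pivoted back into the wide shape with conditional aggregation (standard SQL, using the three parameter names shown above):

        SELECT item,
               MAX(CASE WHEN par = 'p1' THEN val END) AS p1,
               MAX(CASE WHEN par = 'p2' THEN val END) AS p2,
               MAX(CASE WHEN par = 'p3' THEN val END) AS p3
        FROM t1
        GROUP BY item;

    Each new parameter needs another CASE branch here, whereas the wide design needs an ALTER TABLE instead.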

    Read the article

  • Can Atom be used for things besides syndication feeds?

    - by greim
    Purely in terms of its conceptual model, is the purpose of Atom (and RSS) only to provide a time-sequential series of frequently-updated items, such as "most recent blog posts" or "last twenty SVN commits," or can Atom be legitimately used to represent static and/or non-time-sequential listings/indices? As an example, "index of files under this directory", "dog breeds" or "music genres". Even if there's a date associated with the items, like a file's last modified date, what if you don't necessarily want time to be the primary consideration when you represent that model to your users? The context for this is passing around (generating and consuming) lists of things in a REST-ful environment, hopefully using a well-understood format, where "date something was created/updated" is a pertinent detail, but not the primary consideration. I realize there's probably no right answer, but wanted to get some perspectives. Thanks.

    Read the article

  • One to One relationship in MySQL

    - by Botto
    I'm trying to make a one-to-one relationship in a MySQL DB. I'm using the InnoDB engine and the basic tables look like this:

        CREATE TABLE `foo` (
          `fooID` INT(11) NOT NULL PRIMARY KEY AUTO_INCREMENT,
          `name` TEXT NOT NULL
        )

        CREATE TABLE `bar` (
          `barName` VARCHAR(100) NOT NULL,
          `fooID` INT(11) NOT NULL PRIMARY KEY,
          CONSTRAINT `contact` FOREIGN KEY (`fooID`) REFERENCES `foo`(`fooID`)
        )

    Now once I have set these up, I alter the foo table so that fooID also becomes a foreign key to fooID in bar. The only issue I am facing with this is that there will be an integrity issue when I try to insert into either. I would like some help, thanks.
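    For what it's worth, a sketch of the insert order that satisfies the bar-to-foo foreign key defined above; since bar.fooID is both the primary key and the foreign key, at most one bar row can exist per foo row even without the reverse foo-to-bar key (the inserted values are made up):

        -- parent first, then the child row that reuses its generated key
        INSERT INTO foo (name) VALUES ('example');
        INSERT INTO bar (barName, fooID) VALUES ('example bar', LAST_INSERT_ID());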

    Read the article

  • Copying data between tables without an identity column

    - by user668479
    I have two tables and I need to copy the data across from SRCServiceUsers to Clients. Every time I run it I get the following:

        Violation of PRIMARY KEY constraint 'PK_Clients'. Cannot insert duplicate key in object 'dbo.Clients'.
        The statement has been terminated.

    The primary key ClientID field is not an identity column and therefore requires filling. To date I have the following:

        INSERT INTO Clients (ClientID, Title, Forenames, FamilyName, [Address],
                             Town, County, PostCode, PhoneNumber, StartDate)
        SELECT
            (SELECT MAX(ClientID) + 1 FROM Clients),
            SRCServiceUsers.Title,
            SRCServiceUsers.[First Names],
            SRCServiceUsers.Surname,
            -- build up multiple columns
            SRCServiceUsers.[Property Name] + ', ' + SRCServiceUsers.Street + ', '
                + SRCServiceUsers.Suburb AS [Address],
            SRCServiceUsers.Town,
            SRCServiceUsers.County,
            SRCServiceUsers.Postcode,
            SRCServiceUsers.Telephone,
            SRCServiceUsers.[Start Date]
        FROM SRCServiceUsers

    How can I auto-increment the PK field (ClientID) when inserting the data? Many thanks, Andrew
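    A sketch of one common fix, assuming SQL Server 2005 or later: MAX(ClientID) + 1 is computed once, so every incoming row gets the same key and the second row violates PK_Clients; adding ROW_NUMBER() gives each row its own offset (the column list is shortened here for brevity):

        INSERT INTO Clients (ClientID, Title, Forenames, FamilyName, StartDate)
        SELECT
            ISNULL((SELECT MAX(ClientID) FROM Clients), 0)
                + ROW_NUMBER() OVER (ORDER BY s.Surname),    -- unique, consecutive ids
            s.Title,
            s.[First Names],
            s.Surname,
            s.[Start Date]
        FROM SRCServiceUsers AS s;

    Longer term, making ClientID an IDENTITY column avoids having to compute it by hand on every insert.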

    Read the article

  • Database indexes and their Big-O notation

    - by miket2e
    I'm trying to understand the performance of database indexes in terms of Big-O notation. Without knowing much about it, I would guess that:

    - Querying on a primary key or unique index will give you an O(1) lookup time.
    - Querying on a non-unique index will also give O(1) time, albeit maybe the '1' is slower than for the unique index(?).
    - Querying on a column without an index will give an O(N) lookup time (full table scan).

    Is this generally correct? Will querying on a primary key ever give worse performance than O(1)? My specific concern is SQLite, but I'd be interested in knowing to what extent this varies between different databases too.
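    For what it's worth, SQLite will tell you which of these cases a query falls into via EXPLAIN QUERY PLAN; a small sketch with made-up table and column names:

        CREATE TABLE t (id INTEGER PRIMARY KEY, tag TEXT, note TEXT);
        CREATE INDEX idx_tag ON t (tag);

        EXPLAIN QUERY PLAN SELECT * FROM t WHERE id = 5;      -- primary key (rowid) lookup
        EXPLAIN QUERY PLAN SELECT * FROM t WHERE tag = 'x';   -- uses the non-unique index idx_tag
        EXPLAIN QUERY PLAN SELECT * FROM t WHERE note = 'x';  -- no index: full table scan

    (B-tree based engines, SQLite included, are usually described as O(log N) per indexed lookup rather than O(1), with the unindexed scan being O(N).)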

    Read the article
