Search Results

Search found 22078 results on 884 pages for 'composite primary key'.


  • Files and filegroups in SQL Server 2005

    - by Dhivagar
    Can we move the default file to another filegroup? Sample code is given below:

        CREATE DATABASE EMPLOYEE
        ON PRIMARY
            ( NAME = 'PRIMARY_01',
              FILENAME = 'C:\METADATA\PRIM01.MDF',
              SIZE = 5 MB, MAXSIZE = 50 MB, FILEGROWTH = 2 MB ),
            ( NAME = 'SECONDARY_02',
              FILENAME = 'C:\METADATA\SEC02.NDF' ),
        FILEGROUP EMPLOYEE_dETAILS
            ( NAME = 'EMPDETILS_01',
              FILENAME = 'C:\METADATA\EMPDET01.NDF',
              SIZE = 5 MB, MAXSIZE = 50 MB, FILEGROWTH = 2 MB ),
            ( NAME = 'EMPDETILS_02',
              FILENAME = 'C:\METADATA\EMPDET02.NDF',
              SIZE = 5 MB, MAXSIZE = 50 MB, FILEGROWTH = 2 MB )
        LOG ON
            ( NAME = 'TRANSACLOG',
              FILENAME = 'c:\METADATA\TRAS01.LDF',
              SIZE = 5 MB, MAXSIZE = 50 MB, FILEGROWTH = 2 MB );

    Now I want to move the file with FILENAME = 'C:\METADATA\SEC02.NDF' from the default PRIMARY filegroup to the filegroup EMPLOYEE_dETAILS. How can I do this?
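
    A data file cannot simply be reassigned to a different filegroup, so one hedged approach (a sketch only, based on the EMPLOYEE database above) is to empty the file, drop it, and add a replacement file to the target filegroup. Note that the emptied data lands in the remaining PRIMARY file; moving the data itself onto EMPLOYEE_dETAILS means rebuilding the relevant tables or indexes on that filegroup.

        USE EMPLOYEE;

        -- Push the contents of SECONDARY_02 onto the other file(s) in its filegroup.
        DBCC SHRINKFILE ('SECONDARY_02', EMPTYFILE);

        -- Remove the now-empty file from the PRIMARY filegroup.
        ALTER DATABASE EMPLOYEE REMOVE FILE SECONDARY_02;

        -- Add a replacement file to the EMPLOYEE_dETAILS filegroup.
        ALTER DATABASE EMPLOYEE
            ADD FILE ( NAME = 'SECONDARY_02',
                       FILENAME = 'C:\METADATA\SEC02.NDF',
                       SIZE = 5 MB, MAXSIZE = 50 MB, FILEGROWTH = 2 MB )
            TO FILEGROUP EMPLOYEE_dETAILS;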

    Read the article

  • HBase as web app backend

    - by NathanD
    Can anyone advise if it is a good idea to have HBase as the primary data source for a web-based application? My primary concern is HBase's response time to queries. Is it possible to have sub-second response?

    Edit: more details about the app itself:

    - Amount of data: ~500 GB of text data, expected to reach 1 TB soon
    - Number of concurrent users of the app: up to 50
    - The app will be used to present reports about data stored in HBase, like how many times keyword "X" occurred in the last 24h
    - For ~80% of requests from the app I will know the exact key; 20% will be scans (I'm looking into HBase schema design topics to make them run fast)

    Read the article

  • Get the BindingSource position based on DataTable row

    - by Ronald
    I have a DataTable that contains the rows of a database table. This table has a primary key formed by 2 columns. The components are assigned this way: DataTable to BindingSource to DataGridView. What I want is to search for a specific row (based on the primary key) to select it in the grid. I can't use the BindingSource.Find method because it only accepts one column. I have access to the DataTable, so I can manually search the DataTable, but how can I get the BindingSource row position based on the DataTable row? Or is there another way to solve this? I'm using Visual Studio 2005, VB.NET.

    Read the article

  • Can Atom be used for things besides syndication feeds?

    - by greim
    Purely in terms of its conceptual model, is the purpose of Atom (and RSS) only to provide a time-sequential series of frequently-updated items, such as "most recent blog posts" or "last twenty SVN commits," or can Atom be legitimately used to represent static and/or non-time-sequential listings/indices? As an example, "index of files under this directory", "dog breeds" or "music genres". Even if there's a date associated with the items, like a file's last modified date, what if you don't necessarily want time to be the primary consideration when you represent that model to your users? The context for this is passing around (generating and consuming) lists of things in a REST-ful environment, hopefully using a well-understood format, where "date something was created/updated" is a pertinent detail, but not the primary consideration. I realize there's probably no right answer, but wanted to get some perspectives. Thanks.

    Read the article

  • How to structure (normalize?) a database of physical parameters?

    - by Arrieta
    Hello: I have a collection of physical parameters associated with different items. For example:

        Item, p1, p2, p3
        a,    1,  2,  3
        b,    4,  5,  6
        [...]

    where px stands for parameter x. I could go ahead and store the database exactly as presented; the schema would be:

        CREATE TABLE t1 (item TEXT PRIMARY KEY, p1 FLOAT, p2 FLOAT, p3 FLOAT);

    I could retrieve the parameter p1 for all the items with the statement:

        SELECT p1 FROM t1;

    A second alternative is to have a schema like:

        CREATE TABLE t1 (id INT PRIMARY KEY, item TEXT, par TEXT, val FLOAT);

    This seems much simpler if you have many parameters (as I do). However, the parameter retrieval seems very awkward:

        SELECT val FROM t1 WHERE par = 'p1';

    What do you advise? Should I go for the "pivoted" (first) version or the id, par, val (second) version? Many thanks.
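
    For what it's worth, if the second (id, par, val) layout is chosen, the original wide shape can still be recovered with conditional aggregation; a minimal sketch using the names from the question:

        -- Sketch: rebuild the "pivoted" view from the (item, par, val) layout.
        SELECT item,
               MAX(CASE WHEN par = 'p1' THEN val END) AS p1,
               MAX(CASE WHEN par = 'p2' THEN val END) AS p2,
               MAX(CASE WHEN par = 'p3' THEN val END) AS p3
        FROM t1
        GROUP BY item;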

    Read the article

  • Insert into several inheritance tables with OUTPUT - SQL Server 2005

    - by csetzkorn
    Hi, I have a bunch of items; for simplicity reasons, a flat table with unique names seeded via bulk insert:

        create table #items ( ItemName NVARCHAR(255) )

    The database has this structure:

        create table Statements (
            Id INT IDENTITY NOT NULL,
            Version INT not null,
            FurtherDetails varchar(max) null,
            ProposalDateTime DATETIME null,
            UpdateDateTime DATETIME null,
            ProposerFk INT null,
            UpdaterFk INT null,
            primary key (Id)
        )

        create table Item (
            StatementFk INT not null,
            ItemName NVARCHAR(255) null,
            primary key (StatementFk)
        )

    Here Item is a child of Statement (inheritance). I would like to insert the items in #items using a set-based approach (avoiding triggers and loops). Can this be achieved with OUTPUT in my scenario? A 'loop-based' approach is just too slow, where I use something like this:

        insert into Statements (Version, FurtherDetails, ProposalDateTime, UpdateDateTime, ProposerFk, UpdaterFk)
        VALUES (1, null, getdate(), getdate(), @user_id, @user_id)

    etc. This is a start for the OUTPUT-based approach, but I am not sure whether this would work in my case, as ItemName is only inserted into Item:

        insert into Statements ( Version, FurtherDetails, ProposalDateTime, UpdateDateTime, ProposerFk, UpdaterFk )
        output inserted.Id
        ... ???

    Thanks. Best wishes, Christian
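
    On SQL Server 2005 the OUTPUT clause of an INSERT can only reference the inserted pseudo-table, not the source rows, so there is no direct way to capture ItemName next to the new Statements.Id. From SQL Server 2008 onwards the usual set-based pattern is MERGE, whose OUTPUT clause may reference source columns; a hedged sketch (assumes @user_id is already declared):

        DECLARE @map TABLE (StatementId INT, ItemName NVARCHAR(255));

        MERGE Statements AS tgt
        USING #items AS src
           ON 1 = 0                      -- never matches, so every source row is inserted
        WHEN NOT MATCHED THEN
            INSERT (Version, FurtherDetails, ProposalDateTime, UpdateDateTime, ProposerFk, UpdaterFk)
            VALUES (1, NULL, GETDATE(), GETDATE(), @user_id, @user_id)
        OUTPUT inserted.Id, src.ItemName INTO @map (StatementId, ItemName);

        INSERT INTO Item (StatementFk, ItemName)
        SELECT StatementId, ItemName FROM @map;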

    Read the article

  • Database indexes and their Big-O notation

    - by miket2e
    I'm trying to understand the performance of database indexes in terms of Big-O notation. Without knowing much about it, I would guess that:

    - Querying on a primary key or unique index will give you an O(1) lookup time.
    - Querying on a non-unique index will also give O(1) time, albeit maybe the '1' is slower than for the unique index (?)
    - Querying on a column without an index will give an O(N) lookup time (full table scan).

    Is this generally correct? Will querying on a primary key ever give worse performance than O(1)? My specific concern is SQLite, but I'd be interested in knowing to what extent this varies between different databases too.
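
    One practical way to check which case applies is to ask the engine for its query plan. A small SQLite sketch with a hypothetical table (not from the question):

        -- Hypothetical table, for illustration only.
        CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE, city TEXT);

        -- Primary-key lookup: a B-tree search on the rowid (logarithmic rather than strictly O(1)).
        EXPLAIN QUERY PLAN SELECT * FROM users WHERE id = 42;

        -- Unindexed column: the plan reports a full table scan, i.e. the O(N) case.
        EXPLAIN QUERY PLAN SELECT * FROM users WHERE city = 'Oslo';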

    Read the article

  • "SQLSTATE[23000]: Integrity constraint violation" in Doctrine

    - by rags
    Hi, I do get an integrity constraint violation from Doctrine, though I really can't see why.

    Schema.yml:

        User:
          columns:
            id:
              type: integer
              primary: true
              autoincrement: true
            username:
              type: varchar(64)
              notnull: true
            email:
              type: varchar(128)
              notnull: true
            password:
              type: varchar(128)
              notnull: true
          relations:
            Websites:
              class: Website
              local: id
              foreign: owner
              type: many
              foreignType: one
              onDelete: CASCADE

        Website:
          columns:
            id:
              type: integer
              primary: true
              autoincrement: true
            active:
              type: bool
            owner:
              type: integer
              notnull: true
            plz:
              type: integer
              notnull: true
            longitude:
              type: double(10,6)
              notnull: true
            latitude:
              type: double(10,6)
              notnull: true
          relations:
            Owner:
              type: one
              foreignType: many
              class: User
              local: owner
              foreign: id

    And here are my data fixtures (data.yml):

        Model_User:
          User_1:
            username: as
            email: as****.com
            password: *****

        Model_Website:
          Website_1:
            active: true
            plz: 34222
            latitude: 13.12
            longitude: 3.56
            Owner: User_1

    Read the article

  • Using SQL to get the Last Reply on a Post

    - by Anraiki
    I am trying to replicate a forum feature by getting the last reply of a post. For clarity, see phpBB: there are four columns, and the last column is what I would like to replicate. My table is created with these columns:

        discussion_id (primary key), user_id, parent_id, comment, status, pubdate

    I was thinking of creating a link table that would be updated each time a post is replied to. The link table would be as follows:

        discussion_id (primary key), last_user_id, last_user_update

    However, I am hoping that there is a more advanced query to achieve this, that is, grabbing each parent discussion and finding the last reply in each of those parent discussions. Am I right that there is such a query?
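
    Assuming replies live in the same table and the table is called discussions (a name not given in the question), the last reply per parent can be fetched without a link table; a sketch:

        -- Latest reply per parent discussion (greatest-per-group pattern).
        SELECT d.*
        FROM discussions AS d
        JOIN (
            SELECT parent_id, MAX(pubdate) AS last_pubdate
            FROM discussions
            WHERE parent_id IS NOT NULL       -- replies only
            GROUP BY parent_id
        ) AS latest
          ON latest.parent_id = d.parent_id
         AND latest.last_pubdate = d.pubdate;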

    Read the article

  • How to expose an entity via alternate keys with spring data rest

    - by dan carter
    Spring-data-rest does a great job exposing entities via their primary key for GET, PUT and DELETE etc. operations. /myentityies/123 It also exposes search operations. /myentities/search/byMyOtherKey?myOtherKey=123 In my case the entities have a number of alternate keys. The systems calling us, will know the objects by these IDs, rather than our internal primary key. Is it possible to expose the objects via another URL and have the GET, PUT and DELETE handled by the built-in spring-data-rest controllers? /myentities/myotherkey/456 We'd like to avoid forcing the calling systems to have to make two requests for each update. I've tried playing with @RestResource path value, but there doesn't seem to be a way to add additional paths.

    Read the article

  • SSRS Column Grouping with specific order

    - by AmiT
    Hi Experts, is it possible to change the order of records/groups in a result set from a query using GROUP BY? I have a query:

        SELECT Category, Subcategory, ProductName, CreatedDate, Sales
        FROM TableCategory tc
        INNER JOIN TableSubCategory ts ON tc.col1 = ts.col2
        INNER JOIN TableProductName tp ON ts.col2 = tp.col3
        GROUP BY Category, SubCategory, ProductName, CreatedDate, Sales

    Now, I am creating an SSRS report where Category is the primary row group and SubCategory is its child row group. Then ProductName is a primary column group. It works perfectly, but it shows the ProductNames in alphabetic order. I want it to show the ProductNames in a custom order (defined by me). Like, ProductNo5 in the 3rd column, ProductNo8 in the 4th column, ProductNo1 in the 5th column ... and so on!
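
    If the custom order is to come from the query itself (rather than from the column group's sort expression in SSRS), one hedged option is to expose an explicit sort key; a sketch reusing the question's tables and hypothetical product names:

        -- Sort the SSRS column group on SortOrder instead of ProductName.
        SELECT Category, Subcategory, ProductName, CreatedDate, Sales,
               CASE ProductName
                    WHEN 'ProductNo5' THEN 1
                    WHEN 'ProductNo8' THEN 2
                    WHEN 'ProductNo1' THEN 3
                    ELSE 99
               END AS SortOrder
        FROM TableCategory tc
        INNER JOIN TableSubCategory ts ON tc.col1 = ts.col2
        INNER JOIN TableProductName tp ON ts.col2 = tp.col3;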

    Read the article

  • Link a sequence to an identity in HSQLDB

    - by Candy Chiu
    In PostgreSQL, one can define a sequence and use it as the primary key of a table. In HSQLDB, one can still create an auto-increment identity column, but it doesn't link to any user-defined sequence. Is it possible to use a user-defined sequence as the generator of an auto-increment identity column in HSQLDB? Sample SQL in PostgreSQL:

        CREATE SEQUENCE seq_company_id START WITH 1;

        CREATE TABLE company (
            id bigint PRIMARY KEY DEFAULT nextval('seq_company_id'),
            name varchar(128) NOT NULL CHECK (name <> '')
        );

    What's the equivalent in HSQLDB? Thanks.
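
    In HSQLDB 2.x a column can draw its default values from a user-defined sequence directly; a sketch, hedged because the exact syntax depends on the HSQLDB version in use:

        CREATE SEQUENCE seq_company_id START WITH 1;

        -- The identity-like column is generated from the named sequence.
        CREATE TABLE company (
            id BIGINT GENERATED BY DEFAULT AS SEQUENCE seq_company_id PRIMARY KEY,
            name VARCHAR(128) NOT NULL CHECK (name <> '')
        );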

    Read the article

  • Apache setting mod_auth_ldap require settings per sub-directory

    - by Anthony
    I would like to set up a primary directory that has one set of LDAP-based restrictions and then have various sub-directories use other restrictions, but only have the actual LDAP search done in the base directory. For example (.htaccess per directory):

        /Primary_Directory
            AuthLDAPURL "ldap://ldap1.airius.com:389/ou=People, o=Airius?uid?sub?(objectClass=*)"
            Require group cn=admins

        ../Open2All
            Require valid-user

        ../No_Admins_Allowed
            Require group cn!=admins

    So basically, the primary directory (in this example) can only be accessed by users who are in the admins group, while the first sub-directory can be accessed by anyone in the directory, and the second sub-folder can be reached by anyone who is NOT in the admins group. But I only want to set the Require line for the sub-directories, and not re-set up the LDAP query on each sub-directory. Is this possible, even though there are clear permission conflicts from level to level? Does the deepest .htaccess file know that the Require line refers to the LDAP search in the parent folder?

    Read the article

  • Postgresql count+sort performance

    - by invictus
    I have built a small inventory system using PostgreSQL and psycopg2. Everything works great, except when I want to create aggregated summaries/reports of the content, I get really bad performance due to count()'ing and sorting. The DB schema is as follows:

        CREATE TABLE hosts (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255)
        );

        CREATE TABLE items (
            id SERIAL PRIMARY KEY,
            description TEXT
        );

        CREATE TABLE host_item (
            id SERIAL PRIMARY KEY,
            host INTEGER REFERENCES hosts(id) ON DELETE CASCADE ON UPDATE CASCADE,
            item INTEGER REFERENCES items(id) ON DELETE CASCADE ON UPDATE CASCADE
        );

    There are some other fields as well, but those are not relevant. I want to extract 2 different reports:

    - List of all hosts with the number of items per host, ordered from highest to lowest count
    - List of all items with the number of hosts per item, ordered from highest to lowest count

    I have used 2 queries for the purpose. Items with host count:

        SELECT i.id, i.description, COUNT(hi.id) AS count
        FROM items AS i
        LEFT JOIN host_item AS hi ON (i.id = hi.item)
        GROUP BY i.id
        ORDER BY count DESC
        LIMIT 10;

    Hosts with item count:

        SELECT h.id, h.name, COUNT(hi.id) AS count
        FROM hosts AS h
        LEFT JOIN host_item AS hi ON (h.id = hi.host)
        GROUP BY h.id
        ORDER BY count DESC
        LIMIT 10;

    Problem is: the queries run for 5-6 seconds before returning any data. As this is a web-based application, 6 seconds is just not acceptable. The database is heavily populated, with approximately 50k hosts, 1000 items and 400,000 host/item relations, and this will likely increase significantly when (or perhaps if) the application is used. After playing around, I found that by removing the "ORDER BY count DESC" part, both queries would execute instantly without any delay whatsoever (less than 20 ms to finish). Is there any way I can optimize these queries so that I can get the result sorted without the delay? I was trying different indexes, but seeing as the count is computed it doesn't seem possible to utilize an index for it. I have read that count()'ing in PostgreSQL is slow, but it's the sorting that is causing me problems... My current workaround is to run the queries above as an hourly job, putting the result into a new table with an index on the count column for quick lookup. I use PostgreSQL 9.2.
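
    An alternative to the hourly job, often suggested for this kind of top-N report, is a counter-cache column maintained by a trigger so the report becomes an indexed sort; a rough sketch for the items side (the hosts side is symmetric), not tuned for the schema above:

        ALTER TABLE items ADD COLUMN host_count integer NOT NULL DEFAULT 0;
        CREATE INDEX items_host_count_idx ON items (host_count DESC);

        -- Keep host_count in step with host_item rows.
        CREATE OR REPLACE FUNCTION bump_item_host_count() RETURNS trigger AS $$
        BEGIN
            IF TG_OP = 'INSERT' THEN
                UPDATE items SET host_count = host_count + 1 WHERE id = NEW.item;
            ELSIF TG_OP = 'DELETE' THEN
                UPDATE items SET host_count = host_count - 1 WHERE id = OLD.item;
            END IF;
            RETURN NULL;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER host_item_count_trg
        AFTER INSERT OR DELETE ON host_item
        FOR EACH ROW EXECUTE PROCEDURE bump_item_host_count();

        -- The report is then a plain indexed sort:
        SELECT id, description, host_count FROM items ORDER BY host_count DESC LIMIT 10;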

    Read the article

  • As a team should we develop locally and merge into the dev server, or develop on the dev server?

    - by CogitoErgoSum
    Recently I was tasked with writing up formal procedures for a team-based development environment. We have several projects with multiple modules each. Right now there are only two programmers; however, there are plans to expand to 4-6 programmers. Each programmer will be working on the same project, and possibly the same pages, which may cause overwriting or error issues. So far the ideal solution I have thought up is:

    - Local development (WAMP/VM or some virtual server instance on their own machine).
    - Once developers have finished their work, they check it into the CVS repository and merge it with other fixes etc.
    - The CVS version is then deployed to the primary dev server for testing by the devs.
    - The MySQL databases are kept on the primary dev server and users may connect to it remotely. Any schema or data alterations are run through a DB admin, who will notify all devs of any DB changes (which should be rare).

    Does anyone see an issue with this or have a better solution?

    Read the article

  • How to update many tables when we update any table

    - by Lalit Kandpal
    I am creating a C# Windows application based on a medical inventory. In this application I have mainly three forms: PurchaseDetail, SalesDetail, and StockDetail. Now I want functionality in which, if I insert or modify records in PurchaseDetail or SalesDetail, the data in StockDetail is also modified (for example, if I insert some quantity of medicines in PurchaseDetail, then the quantity in StockDetail should also be modified, and the same for SalesDetail).

    Columns in PurchaseDetail: Id (primary key, auto-increment int), BatchNumber, MedicineName, ManufacturingDate, ExpiryDate, Rate, MRP, Tax, Discount, Quantity

    Columns in SalesDetail: Id (primary key, auto-increment int), BillNumber, CustomerName, BatchNumber, Quantity, Rate, SalesDate

    Columns in StockDetail: Id (primary key, auto-increment int), ProductId, ProductName, OpeningStock, ClosingStock, PurchaseQty, DispenseQty, PurchaseReturn, DispenseReturn

    Please help me.
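
    One way to keep StockDetail in step is a database trigger on PurchaseDetail (with a similar one on SalesDetail that subtracts from stock); a simplified sketch, assuming SQL Server and that StockDetail.ProductName matches PurchaseDetail.MedicineName:

        CREATE TRIGGER trg_PurchaseDetail_Insert
        ON PurchaseDetail
        AFTER INSERT
        AS
        BEGIN
            -- Add the purchased quantity to the matching stock row.
            UPDATE s
            SET s.PurchaseQty  = s.PurchaseQty + i.Quantity,
                s.ClosingStock = s.ClosingStock + i.Quantity
            FROM StockDetail AS s
            JOIN inserted    AS i ON i.MedicineName = s.ProductName;
        END;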

    Read the article

  • Maintaining the position of columns in Grails/GORM

    - by firnnauriel
    Is there a way to fix the position of the columns in a domain? I have this domain:

        class SnbrActVector {
            int nid
            String term
            double weight

            static mapping = {
                version false
                id(generator: 'assigned')
            }

            static constraints = {
                nid(blank:false)
                term(blank:false)
                weight(blank:false)
            }
        }

    This is the schema of the table generated:

        CREATE TABLE `fractor_grailsDEV`.`snbr_act_vector` (
            `id` bigint(20) NOT NULL,
            `weight` double NOT NULL,
            `term` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
            `nid` int(11) NOT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

    It seems that the order of the columns was reversed. Is there a way to make it like this (the order being nid, term, weight)?

        CREATE TABLE `fractor_grailsDEV`.`snbr_act_vector` (
            `id` bigint(20) NOT NULL,
            `nid` int(11) NOT NULL,
            `term` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
            `weight` double NOT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

    Read the article

  • mysql select query optimization

    - by Saharsh Shah
    I have two tables, testa and testb:

        CREATE TABLE `testa` (
            `id` INT(10) NOT NULL AUTO_INCREMENT,
            `name` VARCHAR(50) DEFAULT NULL,
            PRIMARY KEY (`id`)
        );

        CREATE TABLE `testb` (
            `id` INT(10) NOT NULL AUTO_INCREMENT,
            `name` VARCHAR(50) DEFAULT NULL,
            `aid1` INT(10) DEFAULT NULL,
            `aid2` INT(10) DEFAULT NULL,
            `aid3` INT(10) DEFAULT NULL,
            PRIMARY KEY (`id`)
        );

    Currently I am running the query below to retrieve all rows where id in the testa table matches any of the columns aid1, aid2, aid3 in testb. The query returns accurate results, but it takes at least 30 seconds to execute, which is too long. I have also tried to optimize the query using UNION but failed to do so.

        SELECT a.id, a.name, b.name, b.id
        FROM testb b
        INNER JOIN testa a
            ON b.aid1 = a.id OR b.aid2 = a.id OR b.aid3 = a.id;

    How do I optimize my query so its total execution time is within 2-3 seconds? Thanks in advance...
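
    The OR conditions in the join predicate generally stop MySQL from using an index on the aidN columns; the rewrite usually suggested is one indexed join per column combined with UNION. A sketch, assuming individual indexes exist on aid1, aid2 and aid3:

        -- UNION removes duplicate rows, matching the semantics of the OR join.
        SELECT a.id, a.name, b.name, b.id FROM testb b JOIN testa a ON b.aid1 = a.id
        UNION
        SELECT a.id, a.name, b.name, b.id FROM testb b JOIN testa a ON b.aid2 = a.id
        UNION
        SELECT a.id, a.name, b.name, b.id FROM testb b JOIN testa a ON b.aid3 = a.id;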

    Read the article

  • how to write T-SQL to compare and copy data?

    - by George2
    Hello everyone, I have two SQL Server 2008 Enterprise databases (on two machines); one is the master database and the other is the slave database. I want to transfer updates from a table in the source database to a table in the destination database (the two tables have the same schema, and both use a single column as the unique primary key). The transfer rule is, in short, to keep the destination table the same as the source table as the source is updated: if a row is new in the source database but not in the destination database, insert the row into the destination database; if a row does not exist in the source database but exists in the destination database, delete the row from the destination database; if a row's content (i.e. columns other than the primary key columns) changes in the source database, update the new content in the destination database. Thanks in advance, George
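
    On SQL Server 2008 this one-way synchronisation maps naturally onto a single MERGE statement run against the destination (with the source reachable, e.g. via a linked server); a sketch with hypothetical table and column names, since the question does not give a schema:

        -- Hypothetical: SourceTable/DestTable keyed on Id with one payload column Col1.
        MERGE DestTable AS d
        USING SourceTable AS s
           ON d.Id = s.Id
        WHEN MATCHED AND d.Col1 <> s.Col1 THEN
            UPDATE SET d.Col1 = s.Col1
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (Id, Col1) VALUES (s.Id, s.Col1)
        WHEN NOT MATCHED BY SOURCE THEN
            DELETE;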

    Read the article

  • Setting a preferred item of a many-to-one in Django

    - by Mike DeSimone
    I'm trying to create a Django model that handles the following: an Item can have several Names, and one of the Names for an Item is its primary Name, i.e. the Name displayed for that Item. (The model names were changed to protect the innocent.) The models.py I've got looks like:

        class Item(models.Model):
            primaryName = models.OneToOneField("Name", verbose_name="Primary Name", related_name="_unused")

            def __unicode__(self):
                return self.primaryName.name

        class Name(models.Model):
            item = models.ForeignKey(Item)
            name = models.CharField(max_length=32, unique=True)

            def __unicode__(self):
                return self.name

            class Meta:
                ordering = [ 'name' ]

    The admin.py looks like:

        class NameInline(admin.TabularInline):
            model = Name

        class ItemAdmin(admin.ModelAdmin):
            inlines = [ NameInline ]

        admin.site.register(Item, ItemAdmin)

    It looks like the database schema is working fine, but I'm having trouble with the admin, so I'm not sure of anything at this point. My main questions are:

    - How do I explain to the admin that primaryName needs to be one of the Names of the item being edited?
    - Is there a way to automatically set primaryName to the first Name found, if primaryName is not set, since I'm using an inline admin for the names?

    Read the article

  • How to update multiple tables in an Oracle DB?

    - by murali
    Hi, I am using two tables in my Oracle 10g database. The first table has keyword, count, and id (primary key); the second table has id and timestamp. Whenever I make any changes in the first table (keyword, count), I want them reflected in the second table's timestamp. I am using the id as the reference between the two tables.

    Table 1:

        CREATE TABLE Searchable_Keywords (
            KEYWORD_ID NUMBER(18) PRIMARY KEY,
            KEYWORD VARCHAR2(255) NOT NULL,
            COUNT NUMBER(18) NOT NULL,
            CONSTRAINT Searchable_Keywords_unique UNIQUE(KEYWORD)
        );

    Table 2:

        CREATE TABLE Keywords_Tracking_Report (
            KEYWORD_ID NUMBER(18),
            PROCESS_TIMESTAMP TIMESTAMP(8)
        );

    How can I update one table with reference to another table? Please help.
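
    A common way to do this in Oracle is a row-level trigger on the keywords table that stamps the tracking table; a sketch, assuming each keyword should have one row in Keywords_Tracking_Report:

        CREATE OR REPLACE TRIGGER trg_keywords_track
        AFTER INSERT OR UPDATE ON Searchable_Keywords
        FOR EACH ROW
        BEGIN
            -- Refresh the timestamp for this keyword, inserting a row if none exists yet.
            UPDATE Keywords_Tracking_Report
               SET PROCESS_TIMESTAMP = SYSTIMESTAMP
             WHERE KEYWORD_ID = :NEW.KEYWORD_ID;

            IF SQL%ROWCOUNT = 0 THEN
                INSERT INTO Keywords_Tracking_Report (KEYWORD_ID, PROCESS_TIMESTAMP)
                VALUES (:NEW.KEYWORD_ID, SYSTIMESTAMP);
            END IF;
        END;
        /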

    Read the article

  • Log Shipped but Won't Update

    - by MooCow
    I'm currently taking the MS SQL 2K5 Admin course at a local college and ran into a problem with the Log Shipping part. My setup is the following:

    - Windows 7 x64
    - SQL 2005 SP3
    - 2 SQL Server instances on the same machine

    Log Shipping settings:

    - Performed a full backup, then a log backup, of the Primary
    - Manually restored on the Secondary in STANDBY MODE
    - Inserted a new record into the table
    - Set up Log Shipping on the Primary using a SQL Authentication login to connect to the Secondary
    - Set up timers and the copy destination on the Secondary
    - Monitoring instance not being used

    I set up a shared folder for WORKGROUP so both instances on the machine can read & write to it. I can see transaction logs generated and copied as defined by the Log Shipping wizard. However, the specified table on the Secondary instance is not updating.

    Read the article

  • CakePHP: How can I disable auto-increment on Model.id?

    - by tomws
    CakePHP 1.3.0, mysqli I have a model, Manifest, whose ID should be the unique number from a printed form. However, with Manifest.id set as the primary key, CakePHP is helping me by setting up auto-increment on the field. Is there a way to flag the field via schema.php and/or elsewhere to disable auto-increment? I need just a plain, old primary key without it. The only other solution I can imagine is adding on a separate manifest number field and changing foreign keys in a half dozen other tables. A bit wasteful and not as intuitive.

    Read the article

  • Voting Script, Possibility of Simplifying Database Queries

    - by Sev
    I have a voting script which stores the post_id and the user_id in a table, to determine whether a particular user has already voted on a post and disallow them in the future. To do that, I am running the following 3 queries.

        SELECT user_id, post_id FROM votes_table WHERE post_id = ? AND user_id = ?

    If that returns no rows, then:

        UPDATE post_table SET votecount = votecount - 1 WHERE post_id = ?

    Then, to display the new votecount on the web page:

        SELECT votecount FROM post WHERE post_id = ?

    Any better way to do this? 3 queries are seriously slowing down the user's voting experience.

    Edit: In the votes table, vote_id is a primary key. In the post table, post_id is a primary key. Any other suggestions to speed things up?
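
    Assuming MySQL and a unique index on (post_id, user_id) in votes_table (an assumption, not stated in the question), the duplicate check can be folded into the insert itself, and the application can skip the count update when no row was inserted; a sketch:

        -- 1) Record the vote; the unique key silently rejects a repeat vote.
        INSERT IGNORE INTO votes_table (post_id, user_id) VALUES (?, ?);

        -- 2) Run only if the insert reported 1 affected row:
        UPDATE post_table SET votecount = votecount - 1 WHERE post_id = ?;

    The new count can then be read once, or tracked in the application from the previous value, avoiding the third query on most requests.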

    Read the article
