Search Results

Search found 1110 results on 45 pages for 'constraint'.


  • Grails - Need to restrict fetched rows based on condition on join table

    - by sector7
    Hi guys, I have two domains, Car and Driver, which have a many-to-many relationship. The association is defined in the table tblCarsDrivers which has, not surprisingly, the primary keys of both tables but additionally has a boolean field deleted. Herein lies the problem: when I run a find/get query on the Car domain, I am given all related drivers irrespective of their deleted status in tblCarsDrivers, which is expected. I need to put a clause/constraint in place to exclude the deleted drivers from the list of fetched records. PS: I tried using the association domain CarDriver in the joinTable name, but that does not seem to work; apparently it expects only table names, not maps. PPS: I know it's unnatural to have any fields besides the mapping keys in a mapping table, but this is how I got it and it can't be changed. The Car domain is defined as such - class Car { Integer id String name static hasMany = [drivers:Driver] static mapping = { table 'tblCars' version false drivers joinTable:[name: 'tblCarsDrivers',column:'driverid',key:'carid'] } } Thanks!

    Read the article

  • MySQL - Find entries that refer to a specified index.

    - by Conor H
    Hi, so I have a booking system with a 'lesson_type' table that has 'lesson_type_id' as its PK. I have a constraint in place so I can't delete a lesson_type if there are bookings made for that lesson_type. I would like to be able to determine whether a given lesson_type_id is being referred to by any entries in the bookings table (or any other table, for that matter) so I can notify the user gracefully, i.e. not have a MySQL error thrown when they try to delete a record. What kind of query would I use for this? Thanks.
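
    A minimal sketch of one possible approach (the lesson_type/bookings names follow the question; the information_schema lookup assumes MySQL 5.x with InnoDB foreign keys, and the literal 42 is just a placeholder id):

        -- Does anything still refer to this lesson type?
        SELECT COUNT(*) AS booking_count
        FROM bookings
        WHERE lesson_type_id = 42;

        -- Which tables/columns declare a foreign key pointing at lesson_type?
        SELECT table_name, column_name
        FROM information_schema.KEY_COLUMN_USAGE
        WHERE referenced_table_name = 'lesson_type'
          AND referenced_column_name = 'lesson_type_id';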

    Read the article

  • Doubt in clustered and non Clustered index

    - by Mahesh
    I have a doubt: if my table doesn't have any constraint like a primary key, foreign key, unique key etc., can I still create a clustered index on the table, and can the clustered index contain duplicate records? My 2nd question is: where exactly should we use a non-clustered index, and when is it useful and beneficial to create one on a table? My 3rd question is: how can we create the 249 non-clustered indexes in a table - does that mean creating a non-clustered index on 249 columns? Can anyone help me clear up my confusion about this?
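
    A minimal sketch illustrating the first question (table, column and index names are made up for the example): a clustered index does not require a primary key or uniqueness, so duplicate key values are allowed.

        -- A heap with no primary key, foreign key or unique constraint
        CREATE TABLE dbo.Orders_NoKeys
        (
            OrderDate datetime       NOT NULL,
            Amount    decimal(10, 2) NOT NULL
        );

        -- A non-unique clustered index; duplicate OrderDate values are fine
        CREATE CLUSTERED INDEX IX_Orders_NoKeys_OrderDate
            ON dbo.Orders_NoKeys (OrderDate);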

    Read the article

  • how to update multiple tables in oracle DB?

    - by murali
    Hi, I am using two tables in my Oracle 10g database. The first table has keyword, count and id (primary key), and my second table has id and timestamp. Whenever I make any changes in the first table (keyword, count), I want them reflected in the second table's timestamp. I am using id as the reference for both tables. table1: CREATE TABLE Searchable_Keywords (KEYWORD_ID NUMBER(18) PRIMARY KEY, KEYWORD VARCHAR2(255) NOT NULL, COUNT NUMBER(18) NOT NULL, CONSTRAINT Searchable_Keywords_unique UNIQUE(KEYWORD) ); table2: CREATE TABLE Keywords_Tracking_Report (KEYWORD_ID NUMBER(18), PROCESS_TIMESTAMP TIMESTAMP(8) ); How can I update one table with reference to another table? Please help.
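
    One possible approach is a trigger on the keywords table that stamps the tracking table whenever a row changes; a rough sketch under that assumption (the trigger name is made up, and it uses the KEYWORD_ID and DDL from the question):

        CREATE OR REPLACE TRIGGER trg_keywords_track
        AFTER INSERT OR UPDATE ON Searchable_Keywords
        FOR EACH ROW
        BEGIN
            -- Insert or refresh the processing timestamp for this keyword
            MERGE INTO Keywords_Tracking_Report t
            USING (SELECT :NEW.KEYWORD_ID AS keyword_id FROM dual) s
            ON (t.KEYWORD_ID = s.keyword_id)
            WHEN MATCHED THEN
                UPDATE SET t.PROCESS_TIMESTAMP = SYSTIMESTAMP
            WHEN NOT MATCHED THEN
                INSERT (KEYWORD_ID, PROCESS_TIMESTAMP)
                VALUES (s.keyword_id, SYSTIMESTAMP);
        END;
        /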

    Read the article

  • Will creating index help in this case

    - by The King
    I'm still a learning user of SQL Server 2005. Here is my table structure: CREATE TABLE [dbo].[Trn_PostingGroups]( [ControlGroup] [char](5) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL, [PracticeCode] [char](5) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL, [ScanDate] [smalldatetime] NULL, [DepositDate] [smalldatetime] NULL, [NameOfFile] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, [DepositValue] [decimal](11, 2) NULL, [RecordStatus] [char](1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, CONSTRAINT [PK_Trn_PostingGroups_1] PRIMARY KEY CLUSTERED ( [ControlGroup] ASC, [PracticeCode] ASC )WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY] ) ON [PRIMARY] Scenario 1: Suppose I have a query like this... Select * from Trn_PostingGroups where PracticeCode = 'ABC' Will indexing on PracticeCode separately help me in making my query faster? Scenario 2: Select * from Trn_PostingGroups where ControlGroup = 12701 and PracticeCode = 'ABC' and NameOfFile = 'FileName1' Will indexing on NameOfFile separately help me in making my query faster?
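
    A rough sketch of the kind of secondary indexes being asked about (index names are invented; whether they actually help depends on row counts and selectivity, and scenario 2 can already seek on the clustered key ControlGroup/PracticeCode):

        -- Scenario 1: a nonclustered index keyed on PracticeCode alone
        CREATE NONCLUSTERED INDEX IX_Trn_PostingGroups_PracticeCode
            ON dbo.Trn_PostingGroups (PracticeCode);

        -- Scenario 2: a composite index that also covers the file-name filter
        CREATE NONCLUSTERED INDEX IX_Trn_PostingGroups_Practice_File
            ON dbo.Trn_PostingGroups (PracticeCode, NameOfFile);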

    Read the article

  • nUnit Assert.That(method,Throws.Exception) not catching exceptions

    - by JasonM
    Hi everyone, can someone tell me why this unit test that checks for exceptions fails? Obviously my real test is checking other code, but I'm using Int32.Parse to show the issue. [Test] public void MyTest() { Assert.That(Int32.Parse("abc"), Throws.Exception.TypeOf<FormatException>()); } The test fails, giving this error. Obviously I'm trying to test for this exception, and I think I'm missing something in my syntax. Error 1 TestCase '.MyTest' failed: System.FormatException : Input string was not in a correct format. at System.Number.StringToNumber(String str, NumberStyles options, NumberBuffer& number, NumberFormatInfo info, Boolean parseDecimal) at System.Number.ParseInt32(String s, NumberStyles style, NumberFormatInfo info) at System.Int32.Parse(String s) I based this syntax on the documentation for the Throws constraint (NUnit 2.5).

    Read the article

  • Auto Generated - CREATE Table Script in SQL 2008 throws error

    - by jack
    I scripted the tables in my dev database using the SQL 2008 generate scripts option (Database - right click - Tasks - Generate Scripts) and ran it on the staging database, but the script throws the error below for each table: Msg 102, Level 15, State 1, Line 1 Incorrect syntax near '('. Msg 319, Level 15, State 1, Line 15 Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon. Below is the script for one of my tables: ALTER TABLE [dbo].[Customer]( [CustomerID] [int] IDENTITY(1,1) NOT NULL, [FirstName] [nvarchar](500) NULL, [LastName] [nvarchar](500) NULL, [DateOfBirth] [datetime] NULL, [EmailID] [nvarchar](200) NULL, [ContactForOffers] [bit] NULL, CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED ( [CustomerID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO Any help will be much appreciated.
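
    For reference, a sketch of the same definition written as a plain CREATE TABLE statement (assuming the intent was to create the table on staging; note that the pasted script begins with ALTER TABLE followed directly by a column list, which by itself is not valid T-SQL):

        CREATE TABLE [dbo].[Customer](
            [CustomerID] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [nvarchar](500) NULL,
            [LastName] [nvarchar](500) NULL,
            [DateOfBirth] [datetime] NULL,
            [EmailID] [nvarchar](200) NULL,
            [ContactForOffers] [bit] NULL,
            CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED ([CustomerID] ASC)
                WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                      ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO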

    Read the article

  • Different behavior for REF CURSOR between Oracle 10g and 11g when unique index present?

    - by wweicker
    Description I have an Oracle stored procedure that has been running for 7 or so years both locally on development instances and on multiple client test and production instances running Oracle 8, then 9, then 10, and recently 11. It has worked consistently until the upgrade to Oracle 11g. Basically, the procedure opens a reference cursor, updates a table then completes. In 10g the cursor will contain the expected results but in 11g the cursor will be empty. No DML or DDL changed after the upgrade to 11g. This behavior is consistent on every 10g or 11g instance I've tried (10.2.0.3, 10.2.0.4, 11.1.0.7, 11.2.0.1 - all running on Windows). The specific code is much more complicated but to explain the issue in somewhat realistic overview: I have some data in a header table and a bunch of child tables that will be output to PDF. The header table has a boolean (NUMBER(1) where 0 is false and 1 is true) column indicating whether that data has been processed yet. The view is limited to only show rows in that have not been processed (the view also joins on some other tables, makes some inline queries and function calls, etc). So at the time when the cursor is opened, the view shows one or more rows, then after the cursor is opened an update statement runs to flip the flag in the header table, a commit is issued, then the procedure completes. On 10g, the cursor opens, it contains the row, then the update statement flips the flag and running the procedure a second time would yield no data. On 11g, the cursor never contains the row, it's as if the cursor does not open until after the update statement runs. I'm concerned that something may have changed in 11g (hopefully a setting that can be configured) that might affect other procedures and other applications. What I'd like to know is whether anyone knows why the behavior is different between the two database versions and whether the issue can be resolved without code changes. Update 1: I managed to track the issue down to a unique constraint. It seems that when the unique constraint is present in 11g the issue is reproducible 100% of the time regardless of whether I'm running the real world code against the actual objects or the following simple example. Update 2: I was able to completely eliminate the view from the equation. I have updated the simple example to show the problem exists even when querying directly against the table. 
    Simple Example

        CREATE TABLE tbl1 (
          col1 VARCHAR2(10),
          col2 NUMBER(1)
        );

        INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0);

        /* View is no longer required to demonstrate the problem
        CREATE OR REPLACE VIEW vw1 (col1, col2) AS
          SELECT col1, col2 FROM tbl1 WHERE col2 = 0;
        */

        CREATE OR REPLACE PACKAGE pkg1 AS
          TYPE refWEB_CURSOR IS REF CURSOR;
          PROCEDURE proc1 (crs OUT refWEB_CURSOR);
        END pkg1;

        CREATE OR REPLACE PACKAGE BODY pkg1 IS
          PROCEDURE proc1 (crs OUT refWEB_CURSOR) IS
          BEGIN
            OPEN crs FOR
              SELECT col1 FROM tbl1 WHERE col1 = 'TEST1' AND col2 = 0;
            UPDATE tbl1 SET col2 = 1 WHERE col1 = 'TEST1';
            COMMIT;
          END proc1;
        END pkg1;

    Anonymous Block Demo

        DECLARE
          crs1 pkg1.refWEB_CURSOR;
          TYPE rectype1 IS RECORD ( col1 vw1.col1%TYPE );
          rec1 rectype1;
        BEGIN
          pkg1.proc1 ( crs1 );
          DBMS_OUTPUT.PUT_LINE('begin first test');
          LOOP
            FETCH crs1 INTO rec1;
            EXIT WHEN crs1%NOTFOUND;
            DBMS_OUTPUT.PUT_LINE(rec1.col1);
          END LOOP;
          DBMS_OUTPUT.PUT_LINE('end first test');
        END;

        /* After creating this index, the problem is seen */
        CREATE UNIQUE INDEX unique_col1 ON tbl1 (col1);

        /* Reset data to initial values */
        TRUNCATE TABLE tbl1;
        INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0);

        DECLARE
          crs1 pkg1.refWEB_CURSOR;
          TYPE rectype1 IS RECORD ( col1 vw1.col1%TYPE );
          rec1 rectype1;
        BEGIN
          pkg1.proc1 ( crs1 );
          DBMS_OUTPUT.PUT_LINE('begin second test');
          LOOP
            FETCH crs1 INTO rec1;
            EXIT WHEN crs1%NOTFOUND;
            DBMS_OUTPUT.PUT_LINE(rec1.col1);
          END LOOP;
          DBMS_OUTPUT.PUT_LINE('end second test');
        END;

    Example of what the output on 10g would be:

        begin first test
        TEST1
        end first test
        begin second test
        TEST1
        end second test

    Example of what the output on 11g would be:

        begin first test
        TEST1
        end first test
        begin second test
        end second test

    Clarification: I can't remove the COMMIT because in the real world scenario the procedure is called from a web application. When the data provider on the front end calls the procedure it will issue an implicit COMMIT when disconnecting from the database anyways. So if I remove the COMMIT in the procedure then yes, the anonymous block demo would work, but the real world scenario would not because the COMMIT would still happen.

    Question: Why is 11g behaving differently? Is there anything I can do other than re-write the code?

    Read the article

  • Use of bit-torrent for large file download as an alternative to FTP

    - by questzen
    The company I work for procures large volumes of data by subscribing to FTP locations. I was wondering if it is possible to download the same data using a tracker; the major challenge is authentication of the users, IMO. Most FTP servers we subscribe to have a restriction on the number of FTP connection attempts. Does anyone here have any experience with this? Any advice is welcome. Edit: To clarify, we subscribe to third-party vendors and access their FTP location using credentials provided by them. The service is not exclusive to us; they sell their data to several others. If we could be part of a swarm, the download rates would be pretty high without added penalty. The question is about the possibility of achieving this, so that we can put forth a proposal along those lines. The vendors obviously wouldn't share data with non-subscribers, so that is a constraint.

    Read the article

  • ORACLE -1401 error

    - by Sachin Chourasiya
    I have a stored procedure in Oracle 9i which inserts records into a table. The table has a primary key built to ensure duplicate rows do not exist. I am trying to insert a record by calling this stored procedure, and it works properly the first time. I then try to insert a duplicate record, expecting a unique constraint violation error, but instead I get ORA-01401: inserted value too large for column. I know what the error means, but my question is: if the inserted value is really too large, how did it succeed on the first attempt?
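
    A minimal sketch of one way to compare the declared column sizes against the values being passed in (the table name is a placeholder; assumes access to the USER_TAB_COLUMNS dictionary view):

        -- Declared sizes of each column in the target table
        SELECT column_name, data_type, data_length
          FROM user_tab_columns
         WHERE table_name = 'MY_TABLE'   -- placeholder: the table the procedure inserts into
         ORDER BY column_id;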

    Read the article

  • What are the repercussions of not checking existing data when adding a foreign key?

    - by scottm
    I've inherited a database that doesn't exactly strive for data integrity. I am trying to add some foreign keys to change that, but there is data in some tables that doesn't fit the constraints. Most likely, the data won't be used again, so I want to know what problems I might face by leaving it there. The other option I see is to move it into some kind of table without referential constraints, just for historical purposes. So, what are the repercussions of not checking existing data? If I create a foreign key constraint on a table and don't check existing data, will the constraint still be enforced for all new data inserted into the table?
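
    A minimal sketch of the pattern being described (table and column names are invented for illustration): in SQL Server, adding the key WITH NOCHECK skips validation of existing rows but still enforces the constraint for new inserts and updates; the constraint is simply marked as not trusted until it is re-checked.

        -- Add the FK without validating existing rows
        ALTER TABLE dbo.Orders WITH NOCHECK
            ADD CONSTRAINT FK_Orders_Customers
            FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID);

        -- Later, once the bad rows are cleaned up, the constraint can be re-validated:
        -- ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers;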

    Read the article

  • How to implement a book preview (2 page spread) without using Flash?

    - by littlejim84
    I'm looking into a solution for work where you have a two-page spread of the book to preview. On either side of this, you can hover in the corner to create a pseudo-flip and then click the mouse button to actually turn the page. I know there are many Flash solutions out there, but in this case we cannot use Flash... So we are looking for a possible solution that works across all major browsers (yes, including IE6)... I looked at a few canvas solutions, but with Google's canvas extension for IE these would be terribly slow. So I was thinking about an SVG/VML solution, like the Raphael JavaScript library. This could be good, but trying to work out how to code this without examples could be a challenge given the time constraint. Is there a solution out there that fits (or almost fits) this problem?

    Read the article

  • Stuck on solving the Minimal Spanning Tree problem.

    - by kunjaan
    I have reduced my problem to finding the minimal spanning tree in a graph, but I want one more constraint: the total degree of each vertex shouldn't exceed a certain constant factor. How do I model my problem? Is MST the wrong path? Do you know any algorithms that will help me? One more problem: my graph has duplicate edge weights, so is there a way to count the number of unique MSTs? Are there algorithms that do this? Thank you. Edit: By degree, I mean the total number of edges connecting the vertex. By duplicate edge weight I mean that two edges have the same weight.

    Read the article

  • Oracle: TABLE ACCESS FULL with Primary key?

    - by tim
    There is a table:

        CREATE TABLE temp (
          IDR decimal(9) NOT NULL,
          IDS decimal(9) NOT NULL,
          DT date NOT NULL,
          VAL decimal(10) NOT NULL,
          AFFID decimal(9),
          CONSTRAINT PKtemp PRIMARY KEY (IDR,IDS,DT)
        );

        SQL> explain plan for select * from temp;
        Explained.

        SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));

        PLAN_TABLE_OUTPUT
        ---------------------------------------------------------------
        | Id | Operation         | Name | Rows | Bytes | Cost (%CPU)|
        ---------------------------------------------------------------
        |  0 | SELECT STATEMENT  |      |    1 |    61 |     2   (0)|
        |  1 | TABLE ACCESS FULL | TEMP |    1 |    61 |     2   (0)|
        ---------------------------------------------------------------

        Note
        -----
           - 'PLAN_TABLE' is old version

        11 rows selected.

    SQL Server 2008 shows a clustered index scan in the same situation. What is the reason?
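
    For comparison, a sketch of a query shape that could use the primary key index (the literal values are placeholders): with no WHERE clause at all, reading every row via a full table scan is usually the cheapest plan, so the optimizer has little reason to touch the index.

        EXPLAIN PLAN FOR
        SELECT *
          FROM temp
         WHERE IDR = 1
           AND IDS = 2
           AND DT  = DATE '2010-01-01';

        SELECT plan_table_output
          FROM TABLE(dbms_xplan.display('plan_table', NULL, 'serial'));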

    Read the article

  • deleting a large number of rows from a table

    - by Azeem
    We have a requirement to delete rows in the order of millions from multiple tables as a batch job (note that we are not deleting all the rows; we are deleting based on a timestamp stored in an indexed column). Obviously a normal DELETE takes forever (because of logging, referential constraint checking etc.). I know that in the LUW world we have ALTER TABLE ... NOT LOGGED INITIALLY, but I can't seem to find an equivalent SQL statement for DB2 v8 on z/OS. Does anyone have any ideas on how to do this really fast? Also, any ideas on how to avoid the referential checks when deleting the rows? Please let me know.
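
    A common workaround is to delete in smaller batches, committing between batches so the log and lock footprint stays bounded; a rough sketch of the idea (table and column names are placeholders, and the exact looping/commit mechanism depends on how the batch job is driven on DB2 for z/OS v8):

        -- Repeat for each time slice until all qualifying rows are gone
        DELETE FROM MY_SCHEMA.MY_TABLE
         WHERE EVENT_TS >= TIMESTAMP('2009-12-31-00.00.00')
           AND EVENT_TS <  TIMESTAMP('2010-01-01-00.00.00');
        COMMIT;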

    Read the article

  • SQL Server 2000, how to automate import data from excel

    - by Stan
    Say the source data comes in Excel format; below is how I import the data.

    1. Convert to CSV format via MS Excel
    2. Roughly find bad rows/columns by inspecting
    3. Back up the table that needs to be updated in SQL Query Analyzer
    4. Truncate the table (may need to drop the foreign key constraint as well)
    5. Import data from the revised CSV file in SQL Server Enterprise Manager
    6. If there's an error like duplicate columns, check the original CSV and remove them

    I was wondering how to make this procedure more efficient in every step. I have some ideas, but they are not complete. For steps 2 and 6: using scripts that can check automatically and print out all error row/column data, so it's easier to remove all errors at once (see the sketch below). For steps 3 and 5: is there any way to automatically update the table without manually going through the importing steps? Could the community advise, please? Thanks.
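
    For steps 2 and 6, a rough sketch of the kind of automated check that could replace manual inspection (table and column names are placeholders; it assumes the CSV has first been loaded into a staging table, e.g. via DTS or BULK INSERT):

        -- Staging rows that would violate the foreign key on the target table
        SELECT s.*
          FROM StagingOrders s
          LEFT JOIN Customers c
            ON c.CustomerID = s.CustomerID
         WHERE c.CustomerID IS NULL;

        -- Duplicate keys within the staging data itself
        SELECT s.OrderID, COUNT(*) AS cnt
          FROM StagingOrders s
         GROUP BY s.OrderID
        HAVING COUNT(*) > 1;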

    Read the article

  • Behaviour to simulate an enum implementing an interface

    - by fearofawhackplanet
    Say I have an enum something like: enum OrderStatus { AwaitingAuthorization, InProduction, AwaitingDespatch } I've also created an extension method on my enum to tidy up the displayed values in the UI, so I have something like: public static string ToDisplayString(this OrderStatus status) { switch (status) { case Status.AwaitingAuthorization: return "Awaiting Authorization"; case Status.InProduction: return "Item in Production"; ... etc } } Inspired by the excellent post here, I want to bind my enums to a SelectList with an extension method: public static SelectList ToSelectList<TEnum>(this TEnum enumObj) however, to use the DisplayString values in the UI drop down I'd need to add a constraint along the lines of : where TEnum has extension ToDisplayString Obviously none of this is going to work at all with the current approach, unless there's some clever trick I don't know about. Does anyone have any ideas about how I might be able to implement something like this?

    Read the article

  • Truncating a table referenced by a foreign key

    - by born to hula
    Hey, we have two tables in a SQL Server 2005 database, A and B. There is a service which truncates table A every day. Recently, a foreign key constraint was added to table B, referencing table A. As a result, it isn't possible to truncate table A anymore, even if table B is empty. Is there any workaround to get the same result as truncating table A? I've already tried the approach below, but the identity wasn't reset. DBCC CHECKIDENT (TABLENAME, RESEED, 0) PS: before anyone flags this as a duplicate, the difference here is that I'm not allowed to drop constraints, nor create any.
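
    A minimal sketch of the usual workaround when TRUNCATE is blocked by an incoming foreign key and dropping constraints is off the table (the table name 'A' follows the question; it assumes A has an identity column and that the reseed is run after the delete):

        -- Logged delete instead of TRUNCATE (slower, but allowed by the FK)
        DELETE FROM dbo.A;

        -- Reset the identity so the next inserted row gets 1
        DBCC CHECKIDENT ('dbo.A', RESEED, 0);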

    Read the article

  • Save or update for FK relationship Sqlalchemy

    - by Alex
    I've googled, but haven't been able to find the answer to this seemingly simple question. I have two relations, a customer and an order. Each order is associated with a single customer, and therefore has a FK relationship to the customer table. The customer relation only stores customer names, and I have set a unique constraint on the customer table barring duplicate names. Let's say I create a new order instance and set a customer for the order. Something like: order_instance.customer = Customer("customer name") When I save the order instance, SQLAlchemy will complain if a customer with this name already exists in the customer table. How do I tell SQLAlchemy to insert into the customer table only if a customer with this name doesn't already exist, and otherwise just ignore (or even update) the existing customer row? I don't really want to have to check each time whether a customer with some name already exists...

    Read the article

  • SQL Server Composite Primary Keys

    - by Colin
    I am attempting to replace all records for a given day in a certain table. The table has a composite primary key made up of 7 fields, one of which is a date. I have deleted all records which have a date value of 2/8/2010. When I then try to insert records into the table for 2/8/2010, I get a primary key violation. The records I am attempting to insert are only for 2/8/2010. Since the date is a component of the PK, shouldn't there be no way to violate the constraint as long as the date I'm inserting is not already in the table? Thanks in advance.
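
    A rough sketch of a query that could show which incoming rows still collide on the full composite key (the target and staging table names and the non-date key columns are placeholders, since the question does not list them):

        SELECT t.*
          FROM TargetTable t
          JOIN StagingRows n
            ON  n.KeyCol1 = t.KeyCol1
            AND n.KeyCol2 = t.KeyCol2
            -- ...and so on for the remaining key columns...
            AND n.KeyDate = t.KeyDate
         WHERE t.KeyDate = '2010-02-08';

    If this still returns rows after the delete, it narrows things down to what actually remains in the table for that date (a time component on the date column is one common culprit).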

    Read the article

  • Interpreting type codes in sys.objects in SQL Server

    - by fatcat1111
    On SQL Server, the sys.objects table includes "Type" and "Type_Desc" attributes. For example:

        SELECT DISTINCT [Type], Type_Desc FROM Sys.Objects ORDER BY [Type]

    Returns:

        C  CHECK_CONSTRAINT
        D  DEFAULT_CONSTRAINT
        F  FOREIGN_KEY_CONSTRAINT
        FN SQL_SCALAR_FUNCTION
        FS CLR_SCALAR_FUNCTION
        IT INTERNAL_TABLE
        P  SQL_STORED_PROCEDURE
        PK PRIMARY_KEY_CONSTRAINT
        S  SYSTEM_TABLE
        SQ SERVICE_QUEUE
        TR SQL_TRIGGER
        U  USER_TABLE
        UQ UNIQUE_CONSTRAINT
        V  VIEW

    Is there a comprehensive list of these types somewhere? There isn't a constraint on sys.objects that points me to a table of these, and sys.types contains data types. I've searched SQL BOL but haven't found it. Any help would be appreciated. EDIT: Some DBs use only a subset of these types. For example, if I have a database with no views, when I query Sys.Objects as above, there are no "V" rows in the results. I am looking for a list of all possible types and descriptions used by SQL Server.

    Read the article

  • Nesting a SharePoint Webpart inside of a User Control

    - by jlech
    I know it's usually the other way around, but I have some extenuating requirements that must be met (read as "No one bothered to do the research and now I have to bail them out") I have a standard user control (ascx) that is to be imported into a SharePoint 2007 website. Due to a design constraint, a sharepoint web part that is also needed has to be nested inside of this user control. So in other words, the user control would have to look something like this: <%@ Control Language="C#" AutoEventWireup="true" CodeFile="foo.ascx.cs" Inherits="foo" %> <div id="container"> ...snipped... <!-- SharePoint web part goes here --> ...snipped... </div> Any help would be appreciated. Thanks!

    Read the article

  • OpenERP model tracking external resource, hooking into parent's copy() or copy_data()

    - by CB.
    I have added a new model (call it crm_lead_external) that is linked via a new one2many on crm_lead. Thus, my module has two models defined: an updated crm_lead (with _name=crm_lead) and a new crm_lead_external. This external model tracks a file and as such has a 'filename' field. I also created a unique SQL index on this filename field. This is part of my module: def copy(self, cr, uid, id, default=None, context=None): if not default: default = {} default.update({ 'state': 'new', 'filename': '', }) ret = super(crm_lead_external, self).copy(cr, uid, id, default, context=context) #do file copy return ret The intent here is to allow an external entity to be duplicated, but to retarget the file path. Now, if I click duplicate on the Lead, I get an IntegrityError on my unique constraint. Is there a particular reason why copy() isn't being called? Should I add this logic to copy_data()? Must I really override copy() for the lead? Thanks in advance.

    Read the article

  • Retrieve a cross domain RSS(xml) through Javascript

    - by Ajay
    I have seen server-side proxy workarounds for retrieving RSS (XML) across domains. In fact, this very question addresses the same problem but offers a different solution. I have a constraint that I cannot use a proxy to retrieve RSS feeds, and hence the Google AJAX Feed API solution is also out of the picture. Is there a client-only workaround for this problem? JSONP is the solution for requests that respond with JSON output, but here I have RSS feeds which respond with pure XML. How do I solve the problem?

    Read the article

  • T-SQL Self Join in combination with aggregate function

    - by Nick
    Hi, I have the following table. CREATE TABLE [dbo].[Tree]( [AutoID] [int] IDENTITY(1,1) NOT NULL, [Category] [varchar](10) NULL, [Condition] [varchar](10) NULL, [Description] [varchar](50) NULL, CONSTRAINT [PK_Tree] PRIMARY KEY CLUSTERED ( [AutoID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO The data looks like this: INSERT INTO [Test].[dbo].[Tree] ([Category] ,[Condition] ,[Description]) VALUES ('1','Alpha','Type 1') INSERT INTO [Test].[dbo].[Tree] ([Category] ,[Condition] ,[Description]) VALUES ('1','Alpha','Type 1') INSERT INTO [Test].[dbo].[Tree] ([Category] ,[Condition] ,[Description]) VALUES ('2','Alpha','Type 2') INSERT INTO [Test].[dbo].[Tree] ([Category] ,[Condition] ,[Description]) VALUES ('2','Alpha','Type 2') GO I am now trying to do the following: SELECT Category,COUNT(*) as CategoryCount FROM Tree where Condition = 'Alpha' group by Category but I also wish to get the Description for each element. I have tried several subqueries, self joins etc., and I always run into the problem that the subquery cannot return more than one record. The problem is caused by a poor database design which I cannot change, and I have run out of ideas on how to get this done in a single query ;-(
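
    A rough sketch of one way to carry the Description along with the count (works on SQL Server 2005; note that if a category ever has more than one distinct Description, this returns one row per distinct Description):

        SELECT DISTINCT
               Category,
               Description,
               COUNT(*) OVER (PARTITION BY Category) AS CategoryCount
          FROM Tree
         WHERE [Condition] = 'Alpha';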

    Read the article
