Search Results

Search found 1110 results on 45 pages for 'constraint'.

Page 38 of 45

  • How to enforce foreign keys using Xerial SQLite JDBC?

    - by Space_C0wb0y
    According to their release notes, the Xerial SQLite JDBC driver supports foreign keys since version 3.6.20.1. I have tried for some time now to get a foreign key constraint to be enforced, but to no avail. Here is what I came up with:

        public static void main(String[] args) throws ClassNotFoundException, SQLException {
            Class.forName("org.sqlite.JDBC");
            SQLiteConfig config = new SQLiteConfig();
            config.enforceForeignKeys(true);
            Connection connection = DriverManager.getConnection(
                    "jdbc:sqlite::memory:", config.toProperties());
            connection.createStatement().executeUpdate(
                    "CREATE TABLE artist(" +
                    "artistid INTEGER PRIMARY KEY, " +
                    "artistname TEXT);");
            connection.createStatement().executeUpdate(
                    "CREATE TABLE track(" +
                    "trackid INTEGER," +
                    "trackname TEXT," +
                    "trackartist INTEGER," +
                    "FOREIGN KEY(trackartist) REFERENCES artist(artistid)" +
                    ");");
            connection.createStatement().executeUpdate(
                    "INSERT INTO track VALUES(14, 'Mr. Bojangles', 3)");
        }

    The table definitions are taken directly from the sample in the SQLite documentation. The final insert is supposed to fail, but it doesn't. I also checked, and it really inserts the tuple (no ignore or anything like that). Does anyone have any experience with this, or know how to make it work?
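    Worth checking independently of the driver: SQLite only enforces foreign keys when the per-connection pragma is switched on, so a sanity test at the SQL level (a sketch to run on the same connection, e.g. in the sqlite3 shell) is:

        PRAGMA foreign_keys = ON;   -- must be set on every new connection
        PRAGMA foreign_keys;        -- should return 1 if the build supports FKs

        CREATE TABLE artist(artistid INTEGER PRIMARY KEY, artistname TEXT);
        CREATE TABLE track(trackid INTEGER, trackname TEXT, trackartist INTEGER,
                           FOREIGN KEY(trackartist) REFERENCES artist(artistid));

        INSERT INTO track VALUES(14, 'Mr. Bojangles', 3);
        -- with the pragma in effect this fails: FOREIGN KEY constraint failed

    If the second PRAGMA returns no row at all, the underlying SQLite library was compiled without foreign-key support, which would also explain silent non-enforcement.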

  • Fluent Nhibernate Mapping Single class on two database tables

    - by nabeelfarid
    Hi guys, I am having problems with mapping. I have two tables in my database, as follows:

        Employee
            EmployeeId  int
            Name        nvarchar

        EmployeeManagers
            EmployeeIdFk  int
            ManagerIdFk   int

    So an employee can have zero or more managers, and a manager is itself also an employee. I have the following class to represent both:

        public class Employee
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<Employee> Managers { get; protected set; }

            public Employee()
            {
                Managers = new List<Employee>();
            }
        }

    I don't have any class to represent a manager, because I think there is no need for one: a manager is just an Employee. I am using automapping, and I just can't figure out how to map this class onto these two tables. I am implementing IAutoMappingOverride to override the automapping for Employee, but I am not sure what to do in it:

        public class NodeMap : IAutoMappingOverride<Node>
        {
            public void Override(AutoMapping<Node> mapping)
            {
                //mapping.HasMany(x => x.ValidParents).Cascade.All().Table("EmployeeManager");
                //mapping.HasManyToMany(x => x.ValidParents).Cascade.All().Table("EmployeeManager");
            }
        }

    I also want to make sure that an employee cannot be assigned the same manager twice. This is something I can verify in my application, but I would like to put a constraint on the EmployeeManagers table (e.g. a composite key) so the same manager cannot be assigned to an employee more than once. Could anyone out there help me with this please? Awaiting, Nabeel
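    For the uniqueness requirement by itself, a database-side constraint works regardless of the mapping. A sketch in SQL, using the table and column names above (syntax may need adjusting for your RDBMS):

        ALTER TABLE EmployeeManagers
            ADD CONSTRAINT PK_EmployeeManagers
            PRIMARY KEY (EmployeeIdFk, ManagerIdFk);

    With the pair as the primary key, assigning the same manager to an employee twice fails at insert time.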

  • SCD2 + Merge Statement + SQL Server

    - by Nev_Rahd
    I am trying to work out a MERGE statement to insert/update a Type 2 SCD dimension table. My source is a table variable to merge with the dimension table. My MERGE statement is throwing this error:

        The target table 'DM.DATA_ERROR.ERROR_DIMENSION' of the INSERT statement cannot
        be on either side of a (primary key, foreign key) relationship when the FROM
        clause contains a nested INSERT, UPDATE, DELETE, or MERGE statement. Found
        reference constraint 'FK_ERROR_DIMENSION_to_AUDIT_CreatedBy'.

    My MERGE statement:

        DECLARE @DATAERROROBJECT AS [ERROR_DIMENSION]

        INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION
        SELECT ERROR_CODE, DATA_STREAM_ID, [ERROR_SEVERITY], DATA_QUALITY_RATING,
               ERROR_LONG_DESCRIPTION, ERROR_DESCRIPTION, VALIDATION_RULE, ERROR_TYPE,
               ERROR_CLASS, VALID_FROM, VALID_TO, CURR_FLAG, CREATED_BY_AUDIT_SK,
               UPDATED_BY_AUDIT_SK
        FROM (
            MERGE DM.DATA_ERROR.ERROR_DIMENSION ED
            USING @DATAERROROBJECT OBJ
               ON (ED.ERROR_CODE = OBJ.ERROR_CODE
                   AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID)
            WHEN NOT MATCHED THEN
                INSERT VALUES(
                     OBJ.ERROR_CODE
                    ,OBJ.DATA_STREAM_ID
                    ,OBJ.[ERROR_SEVERITY]
                    ,OBJ.DATA_QUALITY_RATING
                    ,OBJ.ERROR_LONG_DESCRIPTION
                    ,OBJ.ERROR_DESCRIPTION
                    ,OBJ.VALIDATION_RULE
                    ,OBJ.ERROR_TYPE
                    ,OBJ.ERROR_CLASS
                    ,GETDATE()
                    ,'9999-12-13'
                    ,'Y'
                    ,1
                    ,1
                )
            WHEN MATCHED AND ED.CURR_FLAG = 'Y' AND (
                    ED.[ERROR_SEVERITY] <> OBJ.[ERROR_SEVERITY]
                 OR ED.[DATA_QUALITY_RATING] <> OBJ.[DATA_QUALITY_RATING]
                 OR ED.[ERROR_LONG_DESCRIPTION] <> OBJ.[ERROR_LONG_DESCRIPTION]
                 OR ED.[ERROR_DESCRIPTION] <> OBJ.[ERROR_DESCRIPTION]
                 OR ED.[VALIDATION_RULE] <> OBJ.[VALIDATION_RULE]
                 OR ED.[ERROR_TYPE] <> OBJ.[ERROR_TYPE]
                 OR ED.[ERROR_CLASS] <> OBJ.[ERROR_CLASS]
                ) THEN
                UPDATE SET ED.CURR_FLAG = 'N', ED.VALID_TO = GETDATE()
            OUTPUT $ACTION ACTION_OUT,
                   OBJ.ERROR_CODE ERROR_CODE,
                   OBJ.DATA_STREAM_ID DATA_STREAM_ID,
                   OBJ.[ERROR_SEVERITY] [ERROR_SEVERITY],
                   OBJ.DATA_QUALITY_RATING DATA_QUALITY_RATING,
                   OBJ.ERROR_LONG_DESCRIPTION ERROR_LONG_DESCRIPTION,
                   OBJ.ERROR_DESCRIPTION ERROR_DESCRIPTION,
                   OBJ.VALIDATION_RULE VALIDATION_RULE,
                   OBJ.ERROR_TYPE ERROR_TYPE,
                   OBJ.ERROR_CLASS ERROR_CLASS,
                   GETDATE() VALID_FROM,
                   '9999-12-31' VALID_TO,
                   'Y' CURR_FLAG,
                   555 CREATED_BY_AUDIT_SK,
                   555 UPDATED_BY_AUDIT_SK
        ) AS MERGE_OUT
        WHERE MERGE_OUT.ACTION_OUT = 'UPDATE';

    What am I doing wrong?
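    The error is SQL Server's restriction on "composable DML": a MERGE nested inside a FROM clause may not target a table that participates in a PK/FK relationship. One common workaround, sketched below with the column lists abbreviated (fill them in as in the statement above), is to land the OUTPUT rows in an unconstrained table variable and perform the second INSERT as a separate statement:

        -- sketch: OUTPUT INTO a staging table variable instead of composing the MERGE
        DECLARE @MERGE_OUT TABLE (ACTION_OUT nvarchar(10), ERROR_CODE int /* , ... */);

        MERGE DM.DATA_ERROR.ERROR_DIMENSION ED
        USING @DATAERROROBJECT OBJ
           ON (ED.ERROR_CODE = OBJ.ERROR_CODE AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID)
        WHEN NOT MATCHED THEN INSERT VALUES (OBJ.ERROR_CODE /* , ... */)
        WHEN MATCHED AND ED.CURR_FLAG = 'Y' THEN
            UPDATE SET ED.CURR_FLAG = 'N', ED.VALID_TO = GETDATE()
        OUTPUT $ACTION, OBJ.ERROR_CODE /* , ... */
          INTO @MERGE_OUT (ACTION_OUT, ERROR_CODE /* , ... */);

        INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION (ERROR_CODE /* , ... */)
        SELECT ERROR_CODE /* , ... */
        FROM @MERGE_OUT
        WHERE ACTION_OUT = 'UPDATE';

    Because the MERGE is no longer nested inside an INSERT ... FROM, the foreign key on the dimension no longer trips the restriction.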

  • alignment and granularity of mmap

    - by OwnWaterloo
    I am confused by the specification of mmap. Let pa be the return value of mmap (the same name as in the specification):

        pa = mmap(addr, len, prot, flags, fildes, off);

    In my opinion, after the call succeeds the following range is valid:

        [ pa, pa+len )

    My question is whether the following range is also valid:

        [ round_down(pa, pagesize), round_up(pa+len, pagesize) )    -- [ base, base+size ) for short

    That is to say: is base always aligned on a page boundary, and is size always a multiple of pagesize (is the granularity pagesize, in other words)? Thanks for your help. I think it is implied in this paragraph:

        The off argument is constrained to be aligned and sized according to the value
        returned by sysconf() when passed _SC_PAGESIZE or _SC_PAGE_SIZE. When MAP_FIXED
        is specified, the application shall ensure that the argument addr also meets
        these constraints. The implementation performs mapping operations over whole
        pages. Thus, while the argument len need not meet a size or alignment
        constraint, the implementation shall include, in any mapping operation, any
        partial page specified by the range [pa,pa+len).

    But I'm not sure, and I do not have much experience with POSIX. Please show me some more explicit and definitive evidence, or show me at least one system that supports POSIX and has different behavior. Thanks again.

  • Windows Mobile : How to bind dropdown's selectedvalue to a column in table A and the list data to another table

    - by Rob
    Hi, I am trying to learn the basics of Windows Mobile development against SQL CE and have come across a basic problem. I have two tables. One, called Customers, stores customer info and has an identity column called ID as the primary key. The other table, called Orders, has a column called CustomerID (the FK constraint is present).

    I have added a DataSet to the project that contains both tables and have autogenerated the edit/view forms. This produced a text control for the CustomerID column of the Orders table on the new/edit form; I deleted it and replaced it with a dropdown list. Then, using the 'Advanced' databinding options (in Properties), I set the datasource of the list to the Customers table, setting the value to the ID field and the text to the CustomerName field. I then set the SelectedValue of the list box to the CustomerID field of the Orders dataset. So far so good.

    When I run the app in the emulator and view the 'New' form for Orders, the Customer dropdown is indeed populated with a list of customer names, and I can select one and happily create a new order. This is confirmed when I see the order appear in the Orders grid form. However, when I then click on the order in the grid and select 'Edit', the order loads but the dropdown always shows the first customer in the list and doesn't seem to bind the SelectedValue to the Orders dataset's CustomerID field.

    Now, I am an ASP.NET guy and normally hand-craft the DAL and its binding to the UI, so I'm not entirely sure where to look to investigate what is going wrong here, as this is all generated code. I am sure it is something very trivial, but any pointers would be appreciated. My gut feeling is that the SelectedValue and the Customers.CustomerID values do not match for some reason? Many thanks, Rob.

  • Curve fitting: Find a CDF (or any function) that satisfies a list of constraints.

    - by dreeves
    I have some constraints on a CDF in the form of a list of x-values and, for each x-value, a pair of y-values that the CDF must lie between. We can represent that as a list of {x, y1, y2} triples, such as:

        constraints = {{0, 0, 0}, {1, 0.00311936, 0.00416369}, {2, 0.0847077, 0.109064},
                       {3, 0.272142, 0.354692}, {4, 0.53198, 0.646113},
                       {5, 0.623413, 0.743102}, {6, 0.744714, 0.905966}}

    [plot of the constraint intervals]

    And since this is a CDF there's an additional implicit constraint of {Infinity, 1, 1}; i.e., the function must never exceed 1. Also, it must be monotone. Now, without making any assumptions about its functional form, we want to find a curve that respects those constraints. For example:

    [plot of a smooth CDF passing through the constraint intervals]

    (I cheated to get that one: I actually started with a nice log-normal distribution and then generated fake constraints based on it.) One possibility is a straight interpolation through the midpoints of the constraints:

        mids = ({#1, Mean[{#2, #3}]}&) @@@ constraints
        f = Interpolation[mids, InterpolationOrder -> 0]

    Plotted, f looks like this:

    [step-function plot of the zeroth-order interpolation]

    That sort of technically satisfies the constraints, but it needs smoothing. We can increase the interpolation order, but then it violates the implicit constraints (always less than one, and monotone):

    [plot of a higher-order interpolation that overshoots 1 and is non-monotone]

    How can I get a curve that looks as much like the first one above as possible? Note that NonlinearModelFit with a LogNormalDistribution will do the trick in this example, but it is insufficiently general, as sometimes there will not exist a log-normal distribution satisfying the constraints.
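    Without committing to a functional form, the task can be posed as a constrained smoothing fit — a sketch of the optimization, independent of any particular solver:

        \min_{F} \int F''(x)^2 \, dx
        \quad \text{subject to} \quad
        y_{1,i} \le F(x_i) \le y_{2,i} \;\; \forall i, \qquad
        F'(x) \ge 0, \qquad 0 \le F(x) \le 1 .

    Expanding F in a monotone spline basis (e.g. I-splines with nonnegative coefficients) turns this into a quadratic program over the coefficients.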

  • NULL-keys for key/value table

    - by user72185
    (Using Oracle.) I have a table with key/value pairs like this:

        create table MESSAGE_INDEX (
            KEY        VARCHAR2(256)  not null,
            VALUE      VARCHAR2(4000) not null,
            MESSAGE_ID NUMBER         not null
        )

    I now want to find all the messages where key = 'someKey' and the value is 'val1', 'val2' or 'val3' — OR the value is null, in which case there is no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works:

        SELECT message_id
        FROM message_index idx
        WHERE ((key = 'someKey' AND value IN ('val1', 'val2', 'val3'))
            OR NOT EXISTS (SELECT 1 FROM message_index
                           WHERE key = 'someKey' AND idx.message_id = message_id))

    But it is extremely slow: it takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria when moving outside of my test environment. The primary key is (key, value, message_id):

        add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID)

    And I added another index on message_id, to speed up searching for missing keys:

        create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID)

    I will be doing several of these key/value lookups in every search, not just the one shown above. So far I am nesting them, where the output ids of one level are the input to the next, e.g.:

        SELECT message_id FROM message_index
        WHERE (key/value compare)
          AND message_id IN (SELECT ... and so on)

    What can I do to speed this up?
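    One thing to check (an assumption about the access path, not a guaranteed fix): the NOT EXISTS probe asks "is there any row for this message_id with key = 'someKey'", and neither the (KEY, VALUE, MESSAGE_ID) primary key nor the MESSAGE_ID-only index answers that with a single seek. A composite index leading with MESSAGE_ID and KEY would:

        -- sketch: lets each NOT EXISTS probe resolve as one index lookup
        create index IDX_MSGID_KEY on MESSAGE_INDEX (MESSAGE_ID, KEY);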

  • How to represent and insert into an ordered list in SQL?

    - by Travis
    I want to represent the list "hi", "hello", "goodbye", "good day", "howdy" (in that order) in a SQL table:

        pk | i | val
        ---+---+---------
         1 | 0 | hi
         0 | 2 | hello
         2 | 3 | goodbye
         3 | 4 | good day
         5 | 6 | howdy

    'pk' is the primary key column; disregard its values. 'i' is the "index" that defines the order of the values in the 'val' column. It is only used to establish the order, and its values are otherwise unimportant.

    The problem I'm having is with inserting values into the list while maintaining the order. For example, if I want to insert "hey" and I want it to appear between "hello" and "goodbye", then I have to shift the 'i' values of "goodbye" and "good day" (but preferably not "howdy") to make room for the new entry.

    So, is there a standard SQL pattern to do the shift operation, but only shift the elements that are necessary? (Note that a simple "UPDATE table SET i=i+1 WHERE i>=3" doesn't work, because it violates the uniqueness constraint on 'i', and also updates the "howdy" row unnecessarily.)

    Or is there a better way to represent the ordered list? I suppose you could make 'i' a floating point value and choose values in between, but then you have to have a separate rebalancing operation for when no such value exists. Or is there some standard algorithm for generating string values between arbitrary other strings, if I were to make 'i' a varchar? Or should I just represent it as a linked list? I was avoiding that because I'd also like to be able to do a SELECT .. ORDER BY to get all the elements in order.
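    For the shift itself, one pattern (a sketch; it assumes 'i' is a signed integer and negative values are never used for ordering, and the table name mylist is a stand-in) is to park the affected rows at unused negative values and then land them shifted, so no intermediate state collides with the unique constraint:

        -- shift 3 <= i < 6 up by one in two collision-free steps, then insert
        UPDATE mylist SET i = -(i + 1) WHERE i >= 3 AND i < 6;  -- 3 -> -4, 4 -> -5
        UPDATE mylist SET i = -i       WHERE i < 0;             -- -4 -> 4, -5 -> 5
        INSERT INTO mylist (pk, i, val) VALUES (6, 3, 'hey');

    The other common dodge is to leave gaps (space 'i' by, say, 100) so that most inserts need no shift at all and rebalancing is rare.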

  • Creating self-referential tables with polymorphism in SQLALchemy

    - by Jace
    I'm trying to create a db structure in which I have many types of content entities, of which one, a Comment, can be attached to any other. Consider the following:

        from datetime import datetime
        from sqlalchemy import create_engine
        from sqlalchemy import Column, ForeignKey
        from sqlalchemy import Unicode, Integer, DateTime
        from sqlalchemy.orm import relation, backref
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()

        class Entity(Base):
            __tablename__ = 'entities'
            id = Column(Integer, primary_key=True)
            created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
            edited_at = Column(DateTime, default=datetime.utcnow,
                               onupdate=datetime.utcnow, nullable=False)
            type = Column(Unicode(20), nullable=False)
            __mapper_args__ = {'polymorphic_on': type}

        # <...insert some models based on Entity...>

        class Comment(Entity):
            __tablename__ = 'comments'
            __mapper_args__ = {'polymorphic_identity': u'comment'}
            id = Column(None, ForeignKey('entities.id'), primary_key=True)
            _idref = relation(Entity, foreign_keys=id, primaryjoin=id == Entity.id)
            attached_to_id = Column(Integer, ForeignKey('entities.id'), nullable=False)
            #attached_to = relation(Entity, remote_side=[Entity.id])
            attached_to = relation(Entity, foreign_keys=attached_to_id,
                                   primaryjoin=attached_to_id == Entity.id,
                                   backref=backref('comments',
                                                   cascade="all, delete-orphan"))
            text = Column(Unicode(255), nullable=False)

        engine = create_engine('sqlite://', echo=True)
        Base.metadata.bind = engine
        Base.metadata.create_all(engine)

    This seems about right, except SQLAlchemy doesn't like having two foreign keys pointing to the same parent. It says:

        ArgumentError: Can't determine join between 'entities' and 'comments'; tables
        have more than one foreign key constraint relationship between them. Please
        specify the 'onclause' of this join explicitly.

    How do I specify onclause?

  • Mapping composite foreign keys in a many-many relationship, with overlapping components.

    - by Kirk Broadhurst
    I have a Page table and a View table. There is a many-many relationship between these two via a PageView table. Unfortunately all of these tables need to have composite keys (for business reasons). Page has a primary key of (PageCode, Version); View has a primary key of (ViewCode, Version). PageView, obviously enough, has PageCode, ViewCode, and Version: the FK to Page is (PageCode, Version) and the FK to View is (ViewCode, Version).

    This makes sense and works, but when I try to map it in Entity Framework I get:

        Error 3021: Problem in mapping fragments...: Each of the following columns in
        table PageView is mapped to multiple conceptual side properties:
        PageView.Version is mapped to (PageView_Association.View.Version,
        PageView_Association.Page.Version)

    So clearly enough, EF is complaining about the Version column being a common component of the two foreign keys. Obviously I could create separate PageVersion and ViewVersion columns in the join table, but that rather defeats the point of the constraint, i.e. that the Page and View must have the same Version value. Has anyone encountered this, and is there anything I can do to get around it? Thanks!

  • Should I use concrete Inheritance or not?

    - by Mez
    I have a project using Propel where I have three objects (potentially more in the future):

        Occasion
        Event extends Occasion
        Gig extends Occasion

    Occasion is an item that holds the shared things that will always be needed (venue, start, end, etc.). On top of this I want to be able to add extra functionality: say, for example, adding "Band" objects to the Gig object, or "Flyers" to an "Event" object. For these I plan to create objects.

    However, without concrete inheritance, I have to have the foreign key point to the Occasion object, which gives the (Propel-generated) functions for all of these extra bits to anything inherited from Occasion. I could, in theory, do this without a foreign constraint and add functions that use the Peer or Query classes to fetch the things related to a "Gig" or similar. Whereas with concrete inheritance, I would only have these functions on the things they belong to.

    I think the decision here is whether I should duck-type the objects (after all, they are occasions) or whether I should just use the "Occasion" object as a "template" (only being used to search for things, like all occasions at a venue). Thoughts? Comments?

  • LINQ to SQL - Why can't you use a WHERE after an ORDER BY?

    - by MCS
    The following code:

        // select all orders
        var orders = from o in FoodOrders
                     where o.STATUS == 1
                     orderby o.ORDER_DATE descending
                     select o;

        // if customer id is specified, only select orders from specific customer
        if (customerID != null)
        {
            orders = orders.Where(o => customerID.Equals(o.CUSTOMER_ID));
        }

    gives me the following error:

        Cannot implicitly convert type 'System.Linq.IQueryable' to
        'System.Linq.IOrderedQueryable'. An explicit conversion exists
        (are you missing a cast?)

    I fixed the error by doing the sorting at the end:

        // select all orders
        var orders = from o in FoodOrders
                     where o.STATUS == 1
                     select o;

        // if customer id is specified, only select orders from specific customer
        if (customerID != null)
        {
            orders = orders.Where(o => customerID.Equals(o.CUSTOMER_ID));
        }

        // I'm forced to do the ordering here
        orders = orders.OrderBy(o => o.ORDER_DATE).Reverse();

    But I'm wondering why this limitation is in place. What's the reason the API was designed in such a way that you can't add a where constraint after using an order by operator?

  • How to have a where clause on an insert or an update in Linq to Sql?

    - by Kelsey
    I am trying to convert the following stored proc to a LINQ to SQL call (this is a simplified version of the SQL):

        INSERT INTO [MyTable] ([Name], [Value])
        SELECT @name, @value
        WHERE NOT EXISTS (SELECT [Value] FROM [MyTable] WHERE [Value] = @value)

    The DB does not have a constraint on the field that is getting checked, so in this specific case the check needs to be made manually. Also, there are many items constantly being inserted, so I need to make sure that when this specific insert happens there is no dupe of the value field. My first hunch is to do the following (with the parameter renamed to input, since "in" is a reserved word in C#):

        using (TransactionScope scope = new TransactionScope())
        {
            if (Context.MyTables.SingleOrDefault(t => t.Value == input.Value) != null)
            {
                MyLinqModels.MyTable t = new MyLinqModels.MyTable()
                {
                    Name = input.Name,
                    Value = input.Value
                };
                // Do some stuff in the transaction
                scope.Complete();
            }
        }

    This is the first time I have really run into this scenario, so I want to make sure I am going about it the right way. Does this seem correct, or can anyone suggest a better way of going about it without having two separate calls?

    Edit: I am running into a similar issue with an update:

        UPDATE [AnotherTable]
        SET [Code] = @code
        WHERE [ID] = @id AND [Code] IS NULL

    How would I do the same check with LINQ to SQL? I assume I need to do a get, then set the values and submit, but what if someone updates [Code] to something other than null between the time I do the get and when the update executes? Same problem as the insert...
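    For the update half, the WHERE [Code] IS NULL guard shown is itself the standard optimistic pattern; the race is usually detected on the database side by checking how many rows the statement touched (a sketch, assuming SQL Server, where @@ROWCOUNT reports that):

        UPDATE [AnotherTable]
           SET [Code] = @code
         WHERE [ID] = @id AND [Code] IS NULL;

        IF @@ROWCOUNT = 0
            -- someone set [Code] between the read and this update:
            -- re-read the row or surface the conflict to the caller
            PRINT 'concurrent update detected';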

  • Using OUTPUT/INTO within instead of insert trigger invalidates 'inserted' table

    - by Dan
    I have a problem using a table with an INSTEAD OF INSERT trigger. The table I created contains an identity column. I need an INSTEAD OF INSERT trigger on this table, and I also need to see the value of the newly inserted identity from within my trigger, which requires the use of OUTPUT/INTO inside the trigger. The problem is that clients performing INSERTs then cannot see the inserted values. For example, I create a simple table:

        CREATE TABLE [MyTable](
            [MyID] [int] IDENTITY(1,1) NOT NULL,
            [MyBit] [bit] NOT NULL,
            CONSTRAINT [PK_MyTable_MyID] PRIMARY KEY NONCLUSTERED
            (
                [MyID] ASC
            ))

    Next I create a simple INSTEAD OF trigger:

        create trigger [trMyTableInsert] on [MyTable]
        instead of insert
        as
        BEGIN
            DECLARE @InsertedRows table( MyID int, MyBit bit);

            INSERT INTO [MyTable] ([MyBit])
            OUTPUT inserted.MyID, inserted.MyBit INTO @InsertedRows
            SELECT inserted.MyBit FROM inserted;

            -- LOGIC NOT SHOWN HERE THAT USES @InsertedRows
        END;

    Lastly, I attempt to perform an insert and retrieve the inserted values:

        DECLARE @tbl TABLE (myID INT)

        insert into MyTable (MyBit)
        OUTPUT inserted.MyID INTO @tbl
        VALUES (1)

        SELECT * from @tbl

    The issue is that all I ever get back is zero. I can see the row was correctly inserted into the table. I also know that if I remove the OUTPUT/INTO from within the trigger, this problem goes away. Any thoughts as to what I'm doing wrong? Or is what I want to do not feasible? Thanks.
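    One session-level fallback (a sketch, with real caveats): @@IDENTITY, unlike SCOPE_IDENTITY(), is not limited to the current scope, so it does see the identity generated by the trigger's internal INSERT — but only reliably for single-row inserts, and only if the trigger inserts into no other identity table:

        insert into MyTable (MyBit) VALUES (1);

        -- SCOPE_IDENTITY() would be NULL here (the trigger ran in its own scope);
        -- @@IDENTITY reflects the last identity generated anywhere in the session
        SELECT CAST(@@IDENTITY AS int) AS MyID;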

  • DataTable throwing exception on RejectChanges

    - by Vale
    I found this bug while working with a DataTable. I added a primary key column to a DataTable, added one row to the table, removed that row, and then added a row with the same key. This works. But when I tried to call RejectChanges() on the table, I got a ConstraintException saying that the value is already present. Here is the example:

        var dataTable = new DataTable();
        var column = new DataColumn("ID", typeof(decimal));
        dataTable.Columns.Add(column);
        dataTable.PrimaryKey = new[] { column };

        decimal id = 1;

        var oldRow = dataTable.NewRow();
        oldRow[column] = id;
        dataTable.Rows.Add(oldRow);
        dataTable.AcceptChanges();

        oldRow.Delete();

        var newRow = dataTable.NewRow();
        newRow[column] = id;
        dataTable.Rows.Add(newRow);

        dataTable.RejectChanges(); // This is where it crashes

    I think that since the row is deleted, the exception should not be thrown (the constraint is not violated because the row is in the Deleted state). Is there something I can do about this? Any help is appreciated.

  • Hibernate: deletes not cascading for self-referencing entities

    - by jwaddell
    I have the following (simplified) Hibernate entities:

        @Entity
        @Table(name = "package")
        public abstract class Package {
            protected Content content;

            @ManyToOne(cascade = {javax.persistence.CascadeType.ALL})
            @JoinColumn(name = "content_id")
            @Fetch(value = FetchMode.JOIN)
            public Content getContent() {
                return content;
            }

            public void setContent(Content content) {
                this.content = content;
            }
        }

        @Entity
        @Table(name = "content")
        public class Content {
            private Set<Content> subContents = new HashSet<Content>();

            @ManyToMany(fetch = FetchType.EAGER)
            @JoinTable(name = "subcontents",
                       joinColumns = {@JoinColumn(name = "content_id")},
                       inverseJoinColumns = {@JoinColumn(name = "elt")})
            @Cascade(value = {org.hibernate.annotations.CascadeType.DELETE,
                              org.hibernate.annotations.CascadeType.REPLICATE})
            @Fetch(value = FetchMode.SUBSELECT)
            public Set<Content> getSubContents() {
                return subContents;
            }

            public void setSubContents(Set<Content> subContents) {
                this.subContents = subContents;
            }
        }

    So a Package has a Content, and a Content is self-referencing in that it has many sub-Contents (which may contain sub-Contents of their own, etc.). The relationships are required to be ManyToOne (Package to Content) and ManyToMany (Content to sub-Contents), but for the case I am currently testing each sub-Content relates to only one Package or Content.

    The problem is that when I delete a Package and flush the session, I get a Hibernate error stating that I'm violating a foreign key constraint on table subcontents, with a particular content_id still referenced from table subcontents. I've tried specifically (recursively) deleting the Contents before deleting the Package, but I get the same error. Is there a reason why this entity tree is not being deleted properly?

  • Server side form validation and POST data

    - by tomcritchlow
    Hi, I have a user input form here: http://www.7bks.com/create (Google login required). When you first create a list you are asked to create a public username. Unfortunately there is currently no constraint to make this unique. I'm working on the code to enforce unique usernames at the moment and would like to know the best way to do it.

    Tech details: App Engine, Python, webapp framework.

    What I'm planning is something like this:

        1. First the /create form POSTs the data to /inputlist/ (this is the same as
           currently happens).
        2. /inputlist/ queries the datastore for the given username. If it already
           exists, redirect back to /create.
        3. Display the /create page with all the previously entered info, plus an
           additional error message of "this username is already taken".

    My questions are: Is this the best way of handling server-side validation? And what's the best way of storing the list details while I verify and modify the username?

    As I see it, I have three options to store the list details, but I'm not sure which is "best":

        1. Store the list details in the session cookie (I am using GAEsessions for
           cookies).
        2. Define a separate POST class for /create and post the list data back from
           /inputlist/ to the /create page (currently /create only has a GET class).
        3. Store the list in the datastore, even though the username is non-unique.

    Thank you very much for your help :) I'm pretty new to Python and coding in general, so if I've missed something obvious my apologies. Tom

    PS - I'm sure I can eventually figure it out, but I can't find any documentation on POSTing data using the webapp App Engine framework, which I'd need in order to do solution 2 above :s Maybe you could point me in the right direction for that too? Thanks!

    PPS - It's a little out of date now, but you can see roughly how the /create and /inputlist/ code works at the moment here: 7bks.com Gist

  • How can I force a ListView with a custom panel to re-measure when the ListView width goes below the measured size?

    - by Scott Whitlock
    Sorry for the long-winded question (I'm including background here); if you just want the question, skip to the end.

    I have a ListView with a custom Panel implementation that I'm using to implement something similar to a WrapPanel, but not quite. I'm overriding the MeasureOverride and ArrangeOverride methods in the custom panel. If I do the naive implementation of a WrapPanel in the MeasureOverride method, it doesn't work when the ListView is resized. Let's say the custom panel does a measure with a width constraint of 100, and let's say I have 3 items that are 40 wide each. The naive approach is to return a size of 80,80, but when I resize the window containing the ListView down to, say, 75, it just turns on the horizontal scrollbar and never calls measure or arrange again (it does keep measuring and arranging if the width is greater than 80).

    To get around this, I hard-coded the measurement to return only the width of the widest item. Then in the arrange it gives me more space than I asked for, and I use as much horizontal space as I can before wrapping. If I resize the window smaller than the smallest item in the ListView, it turns on the scrollbar, which is great. Unfortunately this is causing a big problem when one of these ListViews with a custom panel is nested inside another one. The outside one works ok, but I can't get the inside one to "take as much as it needs": it always sizes to the smallest item, and the only way around that is to set MinWidth to something greater than zero.

    Stepping back for a second, I think the real way to fix this is to go back to the naive implementation of the WrapPanel but force a re-measure when the ListView width goes below the size I previously returned as a measurement. That should solve my problem with the nested case. So, that's my question:

        - I have a ListView with a custom panel.
        - If I return a measurement width on the panel and the ListView is resized to
          less than that width, it stops calling MeasureOverride.
        - How can I get it to continue calling MeasureOverride?

  • SQL Server 2008 - Keyword search using table Join

    - by Aaron Wagner
    Ok, I created a stored procedure that, among other things, searches 5 columns for a particular keyword. To accomplish this, I have the keywords parameter split out by a function and returned as a table. Then I do a LEFT JOIN on that table, using a LIKE constraint. I had this working beautifully, and then all of a sudden it stopped working: now it returns every row, instead of just the rows it should. The other caveat is that if the keyword parameter is empty, it should be ignored. Given what's below, is there (a) a glaring mistake, or (b) a more efficient way to approach this? Here is what I have currently:

        ALTER PROCEDURE [dbo].[usp_getOppsPaged]
            @startRowIndex int,
            @maximumRows int,
            @city varchar(100) = NULL,
            @state char(2) = NULL,
            @zip varchar(10) = NULL,
            @classification varchar(15) = NULL,
            @startDateMin date = NULL,
            @startDateMax date = NULL,
            @endDateMin date = NULL,
            @endDateMax date = NULL,
            @keywords varchar(400) = NULL
        AS
        BEGIN
            SET NOCOUNT ON;

            ;WITH Results_CTE AS
            (
                SELECT
                    opportunities.*,
                    organizations.*,
                    departments.dept_name,
                    departments.dept_address,
                    departments.dept_building_name,
                    departments.dept_suite_num,
                    departments.dept_city,
                    departments.dept_state,
                    departments.dept_zip,
                    departments.dept_international_address,
                    departments.dept_phone,
                    departments.dept_website,
                    departments.dept_gen_list,
                    ROW_NUMBER() OVER (ORDER BY opp_id) AS RowNum
                FROM opportunities
                JOIN departments ON opportunities.dept_id = departments.dept_id
                JOIN organizations ON departments.org_id = organizations.org_id
                LEFT JOIN Split(',', @keywords) AS kw
                    ON (title LIKE '%'+kw.s+'%'
                     OR [description] LIKE '%'+kw.s+'%'
                     OR tasks LIKE '%'+kw.s+'%'
                     OR requirements LIKE '%'+kw.s+'%'
                     OR comments LIKE '%'+kw.s+'%')
                WHERE
                (
                    (@city IS NOT NULL AND (city LIKE '%'+@city+'%'
                        OR dept_city LIKE '%'+@city+'%' OR org_city LIKE '%'+@city+'%'))
                    OR (@state IS NOT NULL AND ([state] = @state
                        OR dept_state = @state OR org_state = @state))
                    OR (@zip IS NOT NULL AND (zip = @zip OR dept_zip = @zip OR org_zip = @zip))
                    OR (@classification IS NOT NULL
                        AND (classification LIKE '%'+@classification+'%'))
                    OR ((@startDateMin IS NOT NULL AND @startDateMax IS NOT NULL)
                        AND ([start_date] BETWEEN @startDateMin AND @startDateMax))
                    OR ((@endDateMin IS NOT NULL AND @endDateMax IS NOT NULL)
                        AND ([end_date] BETWEEN @endDateMin AND @endDateMax))
                    OR (
                        (@city IS NULL AND @state IS NULL AND @zip IS NULL
                         AND @classification IS NULL AND @startDateMin IS NULL
                         AND @startDateMax IS NULL AND @endDateMin IS NULL
                         AND @endDateMin IS NULL)
                    )
                )
            )
            SELECT *
            FROM Results_CTE
            WHERE RowNum >= @startRowIndex
              AND RowNum < @startRowIndex + @maximumRows;
        END
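    One structural thing to check (an observation about the query shape, not a tested fix): a LEFT JOIN can never filter rows on its own — when no keyword matches, the row still survives with kw.s as NULL — so the keyword match has to be re-asserted in the WHERE clause, but only when keywords were supplied. Roughly:

        WHERE (@keywords IS NULL OR kw.s IS NOT NULL)
          AND (
                -- ... the existing @city/@state/@zip/@classification/date branches ...
              )

    Alternatively, switching to an INNER JOIN against Split() whenever @keywords is non-null has the same effect without the extra predicate.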

  • need help fixing unique key in rails. rails is adding id causing duplicate key

    - by railsnew
    I need some help fixing the issue below. I had transaction blocks in my Rails code like this:

        @sqlcontact = "INSERT INTO contacts (id,\"cid\", \"hphone\", mphone, provider,
            cemail, email, sms, mail, phone) VALUES ('"+@id1+"','" + @id1 + "', '"+
            params[:hphone] + "', '"+params[:mphone]+ "', '" + params[:provider] + "',
            '" + params[:cemail]+ "', '" + @varemail+ "', '"+@varsms+ "', '"+ @varmail+
            "', '"+@varphone+"')"

    My app is deployed to Heroku, so I was advised by them to remove the transaction blocks. I changed the above to:

        @cont = Contact.new(:id => @id1, :cid => @id1,
                            :hphone => params[:hphone], :mphone => params[:mphone],
                            :provider => params[:provider], :cemail => params[:cemail],
                            :email => @varemail, :sms => @varsms,
                            :mail => @varmail, :phone => @varphone)
        @cont.save

    My app also already had data stored. Now the problem is that when I try to save a record, I keep getting the error:

        duplicate key value violates unique constraint "contacts_pkey"

    The error also shows the SQL query trying to insert the data; however, in that SQL query I do not see the id value. As you can see from my code, I am passing the id — so why is Rails not accepting it? Does it always include its own sequential id? Can I not overwrite the default Rails magic? And if it does that, does it not look at the data that is already in the DB?? I am really stuck here. What should I do? Should I just go back to my transaction block?
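    A common cause of exactly this error on Heroku (an assumption, since the schema isn't shown: the database is PostgreSQL and the table uses the default contacts_id_seq sequence): inserting rows with explicit ids leaves the id sequence behind the table's MAX(id), so the next auto-generated id collides with an existing row. The sequence can be resynced once with:

        -- bring the primary-key sequence back in step with the existing data
        SELECT setval('contacts_id_seq', (SELECT MAX(id) FROM contacts));

    After that, letting Rails omit :id entirely and take ids from the sequence avoids re-introducing the drift.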

  • A "smart" (forgiving) date parser?

    - by jdmuys
    I have to migrate a very large dataset from one system to another. One of the "source" columns contains a date but is really a string with no constraint, while the destination system mandates a date in the format yyyy-mm-dd. Many, but not all, of the source dates are formatted as yyyymmdd. So to coerce them to the expected format, I do (in Perl):

        return "$1-$2-$3" if ($val =~ /(\d{4})[-\/]*(\d{2})[-\/]*(\d{2})/);

    The problem arises when the source dates move away from the "generic" yyyymmdd. The goal is to salvage as many dates as possible before giving up. Example source strings include:

        21/3/1998, March 2004, 2001, 3/4/97

    I can try to match as many of the examples as I can find with a succession of regular expressions such as the one above. But is there something smarter to do? Am I reinventing the wheel? Is there a library somewhere doing something similar? I couldn't find anything relevant googling "forgiving date parser". (Any language is OK.)

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS on the local network), sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, that these are slow over NFS, and that this will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database gets even bigger.

    While this is not a critical application, I would like to improve performance, and I have some resources available, including application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

        1. Improve NFS performance by tuning parameters. My instinct is that we won't
           get much out of this, but I have been wrong before, and I don't really know
           very much about NFS tuning.
        2. Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
        3. Replace NFS with some other protocol (iSCSI has been mentioned, but I am
           not familiar with it).

    How should I approach this problem?

  • SQL Server Long Query

    - by thormj
    Ok... I don't understand why this query is taking so long (MSSQL Server 2005). Typical output is 3K rows, with a 5.5-minute execution time:

        SELECT dbo.Point.PointDriverID,
               dbo.Point.AssetID,
               dbo.Point.PointID,
               dbo.Point.PointTypeID,
               dbo.Point.PointName,
               dbo.Point.ForeignID,
               dbo.PointType.TrendInterval,
               coalesce(dbo.Point.trendpts, 5) AS TrendPts,
               LastTimeStamp = PointDTTM,
               LastValue = PointValue,
               Timezone
        FROM dbo.Point
        LEFT JOIN dbo.PointType
               ON dbo.PointType.PointTypeID = dbo.Point.PointTypeID
        LEFT JOIN dbo.PointData
               ON dbo.Point.PointID = dbo.PointData.PointID
              AND PointDTTM = (SELECT Max(PointDTTM) FROM dbo.PointData
                               WHERE PointData.PointID = Point.PointID)
        LEFT JOIN dbo.SiteAsset
               ON dbo.SiteAsset.AssetID = dbo.Point.AssetID
        LEFT JOIN dbo.Site
               ON dbo.Site.SiteID = dbo.SiteAsset.SiteID
        WHERE onlinetrended = 1 and WantTrend = 1

    PointData is the big one, but I thought its definition should allow me to pick up what I want easily enough:

        CREATE TABLE [dbo].[PointData](
            [PointID] [int] NOT NULL,
            [PointDTTM] [datetime] NOT NULL,
            [PointValue] [real] NULL,
            [DataQuality] [tinyint] NULL,
            CONSTRAINT [PK_PointData_1] PRIMARY KEY CLUSTERED
            (
                [PointID] ASC,
                [PointDTTM] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
                    IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
                    ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

        CREATE NONCLUSTERED INDEX [IX_PointDataDesc] ON [dbo].[PointData]
        (
            [PointID] ASC,
            [PointDTTM] DESC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    PointData is 550M rows, and Point (the source of PointID) is only 28K rows. I tried making an indexed view, but I can't figure out how to get the last timestamp/value out of it in a compatible way (no MAX, no subquery, no CTE). This runs twice an hour, and after it runs I put more data into those 3K PointIDs that I selected. I thought about adding LastTime/LastValue columns directly to Point, but that seems like the wrong approach. Am I missing something, or should I rebuild something? (I'm also the DBA, but I know very little about A'ing a DB!)
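    One rewrite worth testing (a sketch, not a tuned solution; the remaining joins and columns carry over unchanged): on SQL Server 2005, OUTER APPLY with TOP 1 ... ORDER BY ... DESC fetches the latest reading per point as a single backward seek on IX_PointDataDesc, replacing the correlated MAX subquery plus join:

        SELECT p.PointID, p.PointName,            -- plus the other Point columns
               lastpd.PointDTTM  AS LastTimeStamp,
               lastpd.PointValue AS LastValue
        FROM dbo.Point p
        OUTER APPLY (SELECT TOP 1 pd.PointDTTM, pd.PointValue
                     FROM dbo.PointData pd
                     WHERE pd.PointID = p.PointID
                     ORDER BY pd.PointDTTM DESC) AS lastpd
        WHERE p.onlinetrended = 1 AND p.WantTrend = 1;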

  • Linq In Clause & Predicate building

    - by Michael G
    I have two tables, Report and ReportData. ReportData has a constraint, ReportID. How can I write my LINQ query to return all Report objects where the predicate conditions are met for ReportData? Something like this in SQL:

        SELECT * FROM Report as r
        WHERE r.ServiceID = 3
          AND r.ReportID IN (SELECT ReportID FROM ReportData WHERE JobID LIKE 'Something%')

    This is how I'm building my predicate:

        Expression<Func<ReportData, bool>> predicate = PredicateBuilder.True<ReportData>();
        predicate = predicate.And(x => x.JobID.StartsWith(QueryConfig.Instance.DataStreamName));
        var q = engine.GetReports(predicate, reportsDataContext);
        reports = q.ToList();

    And this is my query construction at the moment:

        public override IQueryable<Report> GetReports(
            Expression<Func<ReportData, bool>> predicate, LLReportsDataContext reportDC)
        {
            if (reportDC == null)
                throw new ArgumentNullException("reportDC");

            var q = reportDC.ReportDatas.Where(predicate)
                            .Where(r => r.ServiceID.Equals(1))
                            .Select(r => r.Report);
            return q;
        }
