Search Results

Search found 1851 results on 75 pages for 'constraint layouts'.

Page 62 of 75

  • Entity Framework - Foreign key constraints not added for inherited entity

    - by Tri Q
    Hello. It appears to me that a strange phenomenon is occurring with inherited entities (TPT) in EF4. I have three entities: 1. Asset 2. Property 3. Activity. Property is a derived type of Asset, and Property has many activities (many-to-many). When modeling this in my EDMX, everything seems fine until I try to insert a new Property into the database. If the property does not contain any Activity, it works, but all hell breaks loose when I add some new activities to the new Property. As it turns out, after 2 days of crawling the web and fiddling around, I noticed that in the EF store (SSDL) some of the constraints between entities were not picked up during the update process. The Property_Activity table, which links properties and activities, shows only one constraint, FK_Property_Activity_Activity; FK_Property_Activity_Property is missing. I knew this was an Entity Framework anomaly, because when I switched the relationship in the database to: Asset <-- Asset_Activity <-- Activity then after an update, all foreign key constraints are picked up and the save is successful, with or without activities in the new property. Is this intended, or a bug in EF? How do I get around this problem? Should I abandon inheritance altogether?
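
    For reference, a hedged sketch of the join table with both constraints present, i.e. the shape the poster expected the SSDL to pick up (key and column names assumed):

        -- Assumed column names; the point is that the join table carries a
        -- foreign key to each side, not just to Activity.
        CREATE TABLE Property_Activity (
            PropertyId INT NOT NULL,
            ActivityId INT NOT NULL,
            CONSTRAINT PK_Property_Activity PRIMARY KEY (PropertyId, ActivityId),
            CONSTRAINT FK_Property_Activity_Property
                FOREIGN KEY (PropertyId) REFERENCES Property (Id),
            CONSTRAINT FK_Property_Activity_Activity
                FOREIGN KEY (ActivityId) REFERENCES Activity (Id)
        );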

    Read the article

  • "Ambigous type variable" error when defining custom "read" function

    - by Tener
    While trying to compile the following code, which is an enhanced version of read built on readMay from the Safe package:

        readI :: (Typeable a, Read a) => String -> a
        readI str = case readMay str of
                      Just x  -> x
                      Nothing -> error ("Prelude.read failed, expected type: "
                                        ++ show (typeOf (undefined :: a))
                                        ++ " String was: " ++ str)

    I get an error from GHC:

        WavefrontSimple.hs:54:81:
            Ambiguous type variable `a' in the constraint:
              `Typeable a' arising from a use of `typeOf'
              at src/WavefrontSimple.hs:54:81-103
            Probable fix: add a type signature that fixes these type variable(s)

    I don't understand why. What should be fixed to get what I meant? EDIT: OK, so the solution of using ScopedTypeVariables and forall a in the type signature works. But why does the following produce a very similar error to the one above? The compiler should infer the right type, since asTypeOf :: a -> a -> a is used.

        readI :: (Typeable a, Read a) => String -> a
        readI str = let xx = undefined
                    in case readMay str of
                         Just x  -> x `asTypeOf` xx
                         Nothing -> error ("Prelude.read failed, expected type: "
                                           ++ show (typeOf xx)
                                           ++ " String was: " ++ str)
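
    For what it's worth, a minimal sketch of the fix the poster says works, ScopedTypeVariables plus an explicit forall, so that the a in the body refers to the a of the signature:

        {-# LANGUAGE ScopedTypeVariables #-}
        import Data.Typeable (Typeable, typeOf)
        import Safe (readMay)

        readI :: forall a. (Typeable a, Read a) => String -> a
        readI str = case readMay str of
          Just x  -> x
          Nothing -> error ("Prelude.read failed, expected type: "
                            -- without the forall, the 'a' below is a fresh
                            -- variable, which is exactly the ambiguity GHC reports
                            ++ show (typeOf (undefined :: a))
                            ++ " String was: " ++ str)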

    Read the article

  • Impact of ordering of correlated subqueries within a projection

    - by Michael Petito
    I'm noticing something a bit unexpected in how SQL Server (SQL Server 2008 in this case) treats correlated subqueries within a select statement. My assumption was that a query plan should not be affected by the mere order in which subqueries (or columns, for that matter) are written within the projection clause of the select statement. However, this does not appear to be the case. Consider the following two queries, which are identical except for the ordering of the subqueries within the CTE:

        --query 1: subquery for Color is second
        WITH vw AS (
            SELECT p.[ID],
                (SELECT TOP(1) [FirstName] FROM [Preference]
                 WHERE p.ID = ID AND [FirstName] IS NOT NULL
                 ORDER BY [LastModified] DESC) [FirstName],
                (SELECT TOP(1) [Color] FROM [Preference]
                 WHERE p.ID = ID AND [Color] IS NOT NULL
                 ORDER BY [LastModified] DESC) [Color]
            FROM Person p
        )
        SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray';

        --query 2: subquery for Color is first
        WITH vw AS (
            SELECT p.[ID],
                (SELECT TOP(1) [Color] FROM [Preference]
                 WHERE p.ID = ID AND [Color] IS NOT NULL
                 ORDER BY [LastModified] DESC) [Color],
                (SELECT TOP(1) [FirstName] FROM [Preference]
                 WHERE p.ID = ID AND [FirstName] IS NOT NULL
                 ORDER BY [LastModified] DESC) [FirstName]
            FROM Person p
        )
        SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray';

    If you look at the two query plans, you'll see that an outer join is used for each subquery and that the order of the joins matches the order in which the subqueries are written. A filter is applied to the result of the outer join for color, to remove rows where the color is not 'Gray'. (It's odd to me that SQL Server would use an outer join for the color subquery, since I have a non-null constraint on its result, but OK.) Most of the rows are removed by the color filter. The result is that query 2 is significantly cheaper than query 1, because fewer rows are involved in the second join. All reasons for constructing such a statement aside, is this expected behavior? Shouldn't SQL Server opt to move the filter as early as possible in the query plan, regardless of the order in which the subqueries are written?

    Read the article

  • Changing the admin edit window display values

    - by Henri
    I have a database table with, e.g., a weight value:

        CREATE TABLE product (
            id SERIAL NOT NULL,
            product_name item_name NOT NULL,
            ...
            weight NUMERIC(7,3), -- the weight in kg
            ...
            CONSTRAINT PK_product PRIMARY KEY (id)
        );

    This results in the model:

        class Product(models.Model):
            ...
            weight = models.DecimalField(max_digits=7, decimal_places=3, blank=True, null=True)
            ...

    I store the weight in kg, i.e. 1 kg is stored as 1, and 0.1 kg (100 g) is stored as 0.1. To make it easier for the user, I display the weight in the admin list display in grams by specifying:

        def show_weight(self):
            if self.weight:
                weight_in_g = self.weight * 1000
                return '%.0f' % weight_in_g

    So if a product weighs e.g. 0.5 kg and is stored in the database as such, the admin list display shows 500. Is there also a way to alter the number shown in the 'Change product' window? This window now shows the value extracted from the database, i.e. 0.5. This will confuse a user when the help_text tells him to enter the number in grams while he is seeing the number of kilograms. Before saving the product I override save as follows, which converts the number entered in grams to kg:

        def save(self, *args, **kwargs):
            if self.weight:
                self.weight = self.weight / 1000
            super(Product, self).save(*args, **kwargs)
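
    A hedged sketch of one way to do the same conversion in the change form, via a custom ModelForm on the ModelAdmin; this would replace the save() override (names assumed):

        from django import forms
        from django.contrib import admin

        class ProductAdminForm(forms.ModelForm):
            class Meta:
                model = Product

            def __init__(self, *args, **kwargs):
                super(ProductAdminForm, self).__init__(*args, **kwargs)
                # Show grams in the form even though the DB stores kilograms.
                if self.instance and self.instance.weight is not None:
                    self.initial['weight'] = self.instance.weight * 1000

            def clean_weight(self):
                # Convert the entered grams back to kilograms for storage.
                weight = self.cleaned_data.get('weight')
                return weight / 1000 if weight else weight

        class ProductAdmin(admin.ModelAdmin):
            form = ProductAdminForm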

    Read the article

  • Asynchronous callback - gwt

    - by sprasad12
    Hi, I am using GWT and Postgres for my project. On the front end I have a few widgets whose data I am trying to save to tables at the back end when I click the "save project" button (this also takes the name for the created project). In the asynchronous callback part I am setting more than one table, but it is not sending the data properly. I am getting the following error:

        org.postgresql.util.PSQLException: ERROR: insert or update on table "entitytype"
        violates foreign key constraint "entitytype_pname_fkey"
        Detail: Key (pname)=(Project Name) is not present in table "project".

    But when I do a select on the project table, I can see that the project name is present. Here is what the callback part looks like:

        oksave.addClickHandler(new ClickHandler(){
            @Override
            public void onClick(ClickEvent event) {
                if(erasync == null)
                    erasync = GWT.create(EntityRelationService.class);
                AsyncCallback<Void> callback = new AsyncCallback<Void>(){
                    @Override
                    public void onFailure(Throwable caught) { }
                    @Override
                    public void onSuccess(Void result){ }
                };
                erasync.setProjects(projectname, callback);
                for(int i = 0; i < boundaryPanel.getWidgetCount(); i++){
                    top = new Integer(boundaryPanel.getWidget(i).getAbsoluteTop()).toString();
                    left = new Integer(boundaryPanel.getWidget(i).getAbsoluteLeft()).toString();
                    if(widgetTitle.startsWith("ATTR")){
                        type = "regular";
                        erasync.setEntityAttribute(name1, name, type, top, left, projectname, callback);
                    } else {
                        erasync.setEntityType(name, top, left, projectname, callback);
                    }
                }
            }
        });

    Question: Is it wrong to issue more than one call this way when all the other tables depend on a particular table? When I call setProjects in the above code, isn't it completed first before moving on to the next one? Any input will be greatly appreciated. Thank you.
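
    For what it's worth, RPC calls are asynchronous: setProjects returns immediately, so the dependent inserts can reach the server before the project row exists. A hedged sketch of one fix, running the dependent calls from the first call's onSuccess (names taken from the post):

        final AsyncCallback<Void> callback = new AsyncCallback<Void>() {
            @Override
            public void onFailure(Throwable caught) { }
            @Override
            public void onSuccess(Void result) { }
        };
        erasync.setProjects(projectname, new AsyncCallback<Void>() {
            @Override
            public void onFailure(Throwable caught) {
                Window.alert("Could not save project: " + caught.getMessage());
            }
            @Override
            public void onSuccess(Void result) {
                // The project row now exists, so rows referencing it can be saved.
                for (int i = 0; i < boundaryPanel.getWidgetCount(); i++) {
                    top = Integer.toString(boundaryPanel.getWidget(i).getAbsoluteTop());
                    left = Integer.toString(boundaryPanel.getWidget(i).getAbsoluteLeft());
                    if (widgetTitle.startsWith("ATTR")) {
                        type = "regular";
                        erasync.setEntityAttribute(name1, name, type, top, left, projectname, callback);
                    } else {
                        erasync.setEntityType(name, top, left, projectname, callback);
                    }
                }
            }
        });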

    Read the article

  • SPPersistedObject and List<T>

    - by Sam
    Hi, I want SharePoint to "persist" a list of objects. I wrote a class which inherits from SPPersistedObject:

        public class SMSAlert : SPPersistedObject
        {
            [Persisted]
            private DateTime _scheduledTime;
            [Persisted]
            private Guid _listId;
            [Persisted]
            private Guid _siteID;
        }

    Then I wrote a class which inherits from SPJobDefinition and adds a List of my previous object:

        public sealed class MyCustomJob : SPJobDefinition
        {
            [Persisted]
            private List<SMSAlert> _SMSAlerts;
        }

    The problem is: when I call the Update method of my MyCustomJob:

        myCustomJob.Update();

    it throws an exception. Message:

        An object in the SharePoint administrative framework depends on other objects
        which do not exist. Ensure that all of the objects dependencies are created
        and retry this operation.

    Stack:

        at Microsoft.SharePoint.Administration.SPConfigurationDatabase.StoreObject(SPPersistedObject obj, Boolean storeClassIfNecessary, Boolean ensure)
        at Microsoft.SharePoint.Administration.SPConfigurationDatabase.PutObject(SPPersistedObject obj, Boolean ensure)
        at Microsoft.SharePoint.Administration.SPPersistedObject.Update()
        at Microsoft.SharePoint.Administration.SPJobDefinition.Update()
        at Sigi.Common.AlertBySMS.SmsAlertHandler.ScheduleJob(SPWeb web, SPAlertHandlerParams ahp)

    Inner exception:

        An object in the SharePoint administrative framework depends on other objects
        which do not exist. The INSERT statement conflicted with the FOREIGN KEY
        constraint "FK_Dependencies1_Objects". The conflict occurred in database
        "SharePoint_Config", table "dbo.Objects", column 'Id'. The statement has
        been terminated.

    Can anyone help me with that?
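
    A hedged guess at the cause: objects stored in the SharePoint configuration database must exist before anything that references them is saved, so each SMSAlert may need to be persisted before MyCustomJob.Update() runs. A sketch under that assumption (the constructor arguments and the AddAlert helper are assumed, not part of the original code):

        // Sketch only: give each child a name and a parent so it can live in the
        // config DB, and persist it before persisting the job that references it.
        SMSAlert alert = new SMSAlert("SMSAlert-" + Guid.NewGuid(), myCustomJob);
        alert.Update();              // the FK target now exists in dbo.Objects
        myCustomJob.AddAlert(alert); // hypothetical helper adding to the [Persisted] list
        myCustomJob.Update();        // should no longer trip FK_Dependencies1_Objects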

    Read the article

  • nhibernate error recovery

    - by Berryl
    I downloaded Rhino Security today and started going through some of the tests. Several that run perfectly in isolation start getting errors after one that purposely raises an exception runs, though. Here is that test:

        [Test]
        public void EntitesGroup_CanCreate()
        {
            var group = _authorizationRepository.CreateEntitiesGroup("Accounts");
            _session.Flush();
            _session.Evict(group);
            var fromDb = _session.Get<EntitiesGroup>(group.Id);
            Assert.NotNull(fromDb);
            Assert.That(fromDb.Name, Is.EqualTo(group.Name));
        }

    And here are the tests and error messages that fail:

        [Test]
        public void User_CanSave()
        {
            var ayende = new User {Name = "ayende"};
            _session.Save(ayende);
            _session.Flush();
            _session.Evict(ayende);
            var fromDb = _session.Get<User>(ayende.Id);
            Assert.That(fromDb, Is.Not.Null);
            Assert.That(ayende.Name, Is.EqualTo(fromDb.Name));
        }

        ----> System.Data.SQLite.SQLiteException : Abort due to constraint violation
        column Name is not unique

        [Test]
        public void UsersGroup_CanCreate()
        {
            var group = _authorizationRepository.CreateUsersGroup("Admininstrators");
            _session.Flush();
            _session.Evict(group);
            var fromDb = _session.Get<UsersGroup>(group.Id);
            Assert.NotNull(fromDb);
            Assert.That(fromDb.Name, Is.EqualTo(group.Name));
        }

        failed: NHibernate.AssertionFailure : null id in Rhino.Security.Tests.User
        entry (don't flush the Session after an exception occurs)

    Does anyone see how I can reset the state of the in-memory SQLite db after the first test? I changed the code to use NUnit instead of xUnit, so maybe that is part of the problem here as well. Cheers, Berryl
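
    A hedged sketch of one way to reset an in-memory SQLite database between tests: rebuild the schema in the NUnit setup so each test starts from a clean slate. This assumes the NHibernate Configuration is available as _cfg; the exact SchemaExport.Execute overload varies across NHibernate versions.

        [SetUp]
        public void ResetDatabase()
        {
            // An in-memory SQLite DB lives and dies with its connection, so the
            // schema must be (re)created on the session's own open connection.
            new SchemaExport(_cfg).Execute(false, true, false, _session.Connection, null);
        }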

    Read the article

  • PostgreSQL insert on primary key failing with contention, even at serializable level

    - by Steven Schlansker
    I'm trying to insert or update data in a PostgreSQL db. The simplest case is a key-value pairing (the actual data is more complicated, but this is the smallest clear example). When you set a value, I'd like it to insert if the key is not there, and otherwise update. Sadly Postgres does not have an "insert or update" statement, so I have to emulate it myself. I've been working with the idea of basically SELECTing whether the key exists, and then running the appropriate INSERT or UPDATE. Clearly this needs to be in a transaction or all manner of bad things could happen. However, this is not working exactly how I'd like it to; I understand that there are limitations to serializable transactions, but I'm not sure how to work around this one. Here's the situation:

        a, b: => set transaction isolation level serializable;
        a:    => select count(1) from table where id=1;  --> 0
        b:    => select count(1) from table where id=1;  --> 0
        a:    => insert into table values(1);            --> 1
        b:    => insert into table values(1);
              --> ERROR: duplicate key value violates unique constraint "serial_test_pkey"

    I would expect it to throw the usual "couldn't commit due to concurrent update", but I'm guessing that since the inserts are different "rows" this does not happen. Is there an easy way to work around this?
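
    For what it's worth, the standard workaround (adapted from the merge example in the PostgreSQL documentation; table and column names assumed) is to retry in a loop and treat the duplicate-key error itself as the signal to update instead. On PostgreSQL 9.5 and later, INSERT ... ON CONFLICT DO UPDATE does this natively.

        CREATE OR REPLACE FUNCTION merge_kv(k integer, v text) RETURNS void AS $$
        BEGIN
            LOOP
                UPDATE kv SET value = v WHERE id = k;
                IF found THEN
                    RETURN;
                END IF;
                BEGIN
                    INSERT INTO kv (id, value) VALUES (k, v);
                    RETURN;
                EXCEPTION WHEN unique_violation THEN
                    -- a concurrent insert beat us to it; loop and UPDATE instead
                END;
            END LOOP;
        END;
        $$ LANGUAGE plpgsql;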

    Read the article

  • Mapping composite foreign keys in a many-many relationship in Entity Framework

    - by Kirk Broadhurst
    I have a Page table and a View table. There is a many-to-many relationship between these two via a PageView table. Unfortunately all of these tables need to have composite keys (for business reasons). Page has a primary key of (PageCode, Version), and View has a primary key of (ViewCode, Version). PageView, naturally enough, has PageCode, ViewCode, and Version. The FK to Page is (PageCode, Version) and the FK to View is (ViewCode, Version). This makes sense and works, but when I try to map it in Entity Framework I get:

        Error 3021: Problem in mapping fragments...: Each of the following columns
        in table PageView is mapped to multiple conceptual side properties:
        PageView.Version is mapped to (PageView_Association.View.Version,
        PageView_Association.Page.Version)

    So clearly enough, EF is complaining about the Version column being a common component of the two foreign keys. Obviously I could create separate PageVersion and ViewVersion columns in the join table, but that kind of defeats the point of the constraint, i.e. that the Page and View must have the same Version value. Has anyone encountered this, and is there anything I can do to get around it? Thanks!

    Read the article

  • Which non-clustered index should I use?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key:

        CREATE TABLE [dbo].[Customers](
            [CustomerId] [int] IDENTITY(1,1) NOT NULL,
            [CustomerName] [varchar](100) NOT NULL,
            [Deleted] [bit] NOT NULL,
            [Active] [bit] NOT NULL,
            CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED
            (
                [CustomerId] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    This is the query I'll be using to see what the execution plan shows:

        SELECT CustomerName FROM Customers

    Executing this command with no additional nonclustered index, the execution plan shows me: I/O cost = 3.45646, Operator cost = 4.57715. Now I'm trying to see if it's possible to improve performance, so I've created a nonclustered index for this table.

    1) First nonclustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    Executing the select against the Customers table again, the execution plan shows me: I/O cost = 2.79942, Operator cost = 3.92001. That seems better. Now I've deleted the just-created nonclustered index in order to create a new one.

    2) Second nonclustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerIDIncludeCustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC
        )
        INCLUDE ( [CustomerName]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
                SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF,
                ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this new nonclustered index, I've executed the select statement again and the execution plan shows the same result: I/O cost = 2.79942, Operator cost = 3.92001. So, which nonclustered index should I use? Why are the I/O and Operator costs the same in the execution plan? Am I doing something wrong, or is this expected? Thank you.
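
    A hedged aside: both indexes carry the same data at the leaf level, because a nonclustered index always contains the clustered key (CustomerId) implicitly, so identical plan costs are unsurprising. By the same logic, an index on CustomerName alone should also cover this query:

        -- Sketch: CustomerId rides along implicitly as the row locator, so this
        -- narrower index still covers SELECT CustomerName FROM Customers.
        CREATE NONCLUSTERED INDEX [IX_CustomerName]
            ON [dbo].[Customers] ([CustomerName] ASC);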

    Read the article

  • MonoRail ActiveRecord - The UPDATE statement conflicted with the FOREIGN KEY SAME TABLE

    - by Justin
    Hey all, I'm new to MonoRail and ActiveRecord and have inherited an application that I need to modify. I have a Category table (Id, Name), and I'm adding a ParentId FK which references the Id in the same table, so that you can have parent/child category relationships. I tried adding this to my Category model class:

        [Property]
        public int ParentId { get; set; }

    I also tried doing it this way:

        [BelongsTo("ParentId")]
        public Category Parent { get; set; }

    When I call a method to get all parent categories (they have a null ParentId), it works fine:

        public static Category[] GetParentCategories()
        {
            var criteria = DetachedCriteria.For<Core.Models.Category>();
            return (FindAllByProperty("ParentId", null));
        }

    However, when I call a method to get all child categories within a specific category, it errors out:

        public static Category[] GetChildCategories(int parentId)
        {
            var criteria = DetachedCriteria.For<Core.Models.Category>();
            return (FindAllByProperty("ParentId", parentId));
        }

    The error is:

        "The UPDATE statement conflicted with the FOREIGN KEY SAME TABLE constraint
        \"FK_Category_ParentId\". The conflict occurred in database \"UCampus\",
        table \"dbo.Category\", column 'Id'.\r\nThe statement has been terminated."

    I'm hard-coding the parentId parameter as 1 and I'm 100% sure it exists as an Id in the Category table, so I don't know why it'd give this error. Also, I'm doing a select, not an update, so what is going on here? Thanks for any input on this, Justin
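
    A hedged guess: ActiveRecord sits on NHibernate, which can flush pending changes before running a query, so the UPDATE likely comes from a dirty Category already in the session (perhaps with an invalid ParentId), not from the select itself. One way to test that theory is to query inside a scope that never auto-flushes (Castle ActiveRecord's SessionScope assumed):

        // If the query succeeds inside a non-flushing scope, the problem is a
        // dirty in-session object being flushed, not the select.
        using (new SessionScope(FlushAction.Never))
        {
            var children = Category.GetChildCategories(1);
        }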

    Read the article

  • Context Menu on QGraphicsWidget

    - by onurozcelik
    Hi, in my application I have two object types. One is a field item, the other a composite item. Composite items may contain two or more field items. Here is my composite item implementation:

        #include "compositeitem.h"

        CompositeItem::CompositeItem(QString id, QList<FieldItem *> _children)
        {
            children = _children;
        }

        CompositeItem::~CompositeItem()
        {
        }

        QRectF CompositeItem::boundingRect() const
        {
            // Not carefully thought out yet
            return QRectF(QPointF(-50,-150), QSizeF(250,250));
        }

        void CompositeItem::paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget)
        {
            FieldItem *child;
            foreach(child, children) {
                child->paint(painter, option, widget);
            }
        }

        QSizeF CompositeItem::sizeHint(Qt::SizeHint which, const QSizeF &constraint) const
        {
            QSizeF itsSize(0,0);
            FieldItem *child;
            foreach(child, children) {
                // if itsSize is empty, use the first child's size as itsSize
                if(itsSize.isEmpty())
                    itsSize = child->sizeHint(Qt::PreferredSize);
                else {
                    QSizeF childSize = child->sizeHint(Qt::PreferredSize);
                    if(itsSize.width() < childSize.width())
                        itsSize.setWidth(childSize.width());
                    itsSize.setHeight(itsSize.height() + childSize.height());
                }
            }
            return itsSize;
        }

        void CompositeItem::contextMenuEvent(QGraphicsSceneContextMenuEvent *event)
        {
            qDebug() << "Test";
        }

    My first question is how I can propagate a context menu event to a specific child. The picture above demonstrates one of my possible composite items. If you look at the code above, you will see that I print "Test" when a context menu event occurs. When I right-click on the line symbol, I see that the "Test" message is printed. But when I right-click on the signal symbol, "Test" is not printed, and I want it to be printed. My second question is what causes this behaviour, and how do I overcome it?
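
    A hedged sketch of one likely fix: the field items are painted by hand but never made children of the composite in the scene graph, so Qt has no child item to route events to, and the composite's own boundingRect may not cover the signal symbol. Making them real child items lets Qt deliver the event to whichever child is under the cursor:

        // Sketch: parent each field item so Qt handles hit-testing and event
        // routing; each FieldItem can then implement its own contextMenuEvent.
        CompositeItem::CompositeItem(QString id, QList<FieldItem *> _children)
        {
            children = _children;
            foreach (FieldItem *child, children)
                child->setParentItem(this);
        }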

    Read the article

  • Inheritance concept in jpa

    - by megala
    I created one table using the inheritance concept to store data in the Google App Engine datastore. It contains the following code, but it shows an error. How do I use the inheritance concept, and what is the error in my program?

    Program 1:

        @Entity
        @Inheritance(strategy=InheritanceType.JOINED)
        public class calender {
            @Id
            private String EmailId;
            @Basic
            private String CalName;

            public void setEmailId(String emailId) {
                EmailId = emailId;
            }
            public String getEmailId() {
                return EmailId;
            }
            public void setCalName(String calName) {
                CalName = calName;
            }
            public String getCalName() {
                return CalName;
            }
            public calender(String EmailId, String CalName) {
                this.EmailId = EmailId;
                this.CalName = CalName;
            }
        }

    Program 2:

        @Entity
        public class method {
            @Id
            private String method;

            public void setMethod(String method) {
                this.method = method;
            }
            public String getMethod() {
                return method;
            }
            public method(String method) {
                this.method = method;
            }
        }

    My constraint is that I want output like this: the calender table contains EmailId and CalName, and the method table contains EmailId and method. How do I achieve this? Thanks in advance.
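
    A hedged sketch of one arrangement that yields exactly that output: put the shared key in a mapped superclass and make both tables concrete entities. (App Engine's JPA layer historically supported @MappedSuperclass, while the JOINED strategy was not supported there, which may be the source of the error; the names below are illustrative.)

        @MappedSuperclass
        public abstract class BaseRecord {
            @Id
            private String emailId;   // shared key: appears in both tables
        }

        @Entity
        public class Calender extends BaseRecord {
            private String calName;   // calender table: emailId + calName
        }

        @Entity
        public class Method extends BaseRecord {
            private String method;    // method table: emailId + method
        }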

    Read the article

  • Setting Ringtone notification from SD card file

    - by sgarman
    My goal is to set the user's notification sound from a file that is stored on the SD card from within the application. I am using this code:

        if(path != null){
            File k = new File(path, "moment.mp3");
            ContentValues values = new ContentValues();
            values.put(MediaStore.MediaColumns.DATA, k.getAbsolutePath());
            values.put(MediaStore.MediaColumns.TITLE, "My Song title");
            values.put(MediaStore.MediaColumns.SIZE, 215454);
            values.put(MediaStore.MediaColumns.MIME_TYPE, "audio/mp3");
            values.put(MediaStore.Audio.Media.ARTIST, "Some Artist");
            values.put(MediaStore.Audio.Media.DURATION, 230);
            values.put(MediaStore.Audio.Media.IS_RINGTONE, false);
            values.put(MediaStore.Audio.Media.IS_NOTIFICATION, true);
            values.put(MediaStore.Audio.Media.IS_ALARM, false);
            values.put(MediaStore.Audio.Media.IS_MUSIC, false);
            values.put(MediaStore.MediaColumns.DISPLAY_NAME, "Some Name");

            //Insert it into the database
            Uri uri = MediaStore.Audio.Media.getContentUriForPath(k.getAbsolutePath());
            Uri newUri = MainActivity.this.getContentResolver().insert(uri, values);

            RingtoneManager.setActualDefaultRingtoneUri(
                MainActivity.this,
                RingtoneManager.TYPE_NOTIFICATION,
                newUri
            );
            //RingtoneManager.setActualDefaultRingtoneUri(this, RingtoneManager.TYPE_NOTIFICATION, newUri);
            Toast.makeText(this, "Notification Ringtone Set", Toast.LENGTH_SHORT).show();
        }

    When I run this on the device I keep getting the error:

        06-12 15:19:36.741: ERROR/Database(2847): Error inserting is_alarm=false
        is_ringtone=false artist_id=35 is_music=false album_id=-1 title=My Song title
        duration=230 is_notification=true title_key=%D%\%%P%H%F%8%%R%<%R%B%4%
        mime_type=audio/mp3 date_added=1276370376 _display_name=moment.mp3
        _size=215454 _data=/mnt/sdcard/Android/data/_MY APP PATH_/files/moment.mp3
        06-12 15:19:36.741: ERROR/Database(2847):
        android.database.sqlite.SQLiteConstraintException: error code 19: constraint failed

    I have seen others using this technique, and I can't find any documentation on which values actually need to be passed in to successfully add the file to the Android system so that it can be set as a notification.
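
    For what it's worth, "error code 19: constraint failed" on this insert is commonly caused by a row for the same file path already existing in the MediaStore (the _data column is unique). A hedged sketch of clearing any old row first, reusing the post's k and values:

        // Sketch: remove any existing MediaStore row for this file before
        // inserting, so the unique constraint on _data is not violated.
        Uri uri = MediaStore.Audio.Media.getContentUriForPath(k.getAbsolutePath());
        getContentResolver().delete(uri,
                MediaStore.MediaColumns.DATA + "=?",
                new String[]{ k.getAbsolutePath() });
        Uri newUri = getContentResolver().insert(uri, values);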

    Read the article

  • db:migrate creates sequences but doesn't alter table?

    - by RewbieNewbie
    Hello, I have a migration that creates a postgres sequence for auto-incrementing a primary identifier, and then executes a statement altering the column to set the default value:

        execute 'CREATE SEQUENCE "ServiceAvailability_ID_seq";'
        execute <<-SQL
          ALTER TABLE "ServiceAvailability"
          ALTER COLUMN "ID" set DEFAULT NEXTVAL('ServiceAvailability_ID_seq');
        SQL

    If I run db:migrate everything seems to work, in that no errors are returned. However, if I run the rails application I get:

        null value in column "ID" violates not-null constraint

    I have discovered, by executing the SQL statements from the migration manually, that this error occurs because the alter statement isn't working, or isn't being executed. If I manually execute the following statement:

        CREATE SEQUENCE "ServiceAvailability_ID_seq";

    I get:

        ERROR: relation "serviceavailability_id_seq" already exists

    which means the migration successfully created the sequence! However, if I manually run:

        ALTER TABLE "ServiceProvider"
        ALTER COLUMN "ID" set DEFAULT NEXTVAL('ServiceProvider_ID_seq');

    it runs successfully and creates the default NEXTVAL. So the question is: why is the migration file creating the sequence with the first execute statement, but not altering the table in the second execute? (Remember, no errors are output on running db:migrate.) Thank you, and apologies for the tl;dr.
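
    One hedged thing to check: even with the default in place, ActiveRecord lists every attribute in its INSERT, so a nil attribute arrives as an explicit NULL that overrides the column default. A sketch of a workaround that pulls the sequence value in Ruby before create (model and sequence names assumed from the migration):

        class ServiceAvailability < ActiveRecord::Base
          before_create :assign_id_from_sequence

          private

          # Fetch the next sequence value ourselves so the INSERT never carries
          # an explicit NULL for "ID".
          def assign_id_from_sequence
            self[:ID] ||= self.class.connection
                              .select_value(%q{SELECT NEXTVAL('"ServiceAvailability_ID_seq"')}).to_i
          end
        end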

    Read the article

  • rake test not copying development postgres db with sequences

    - by Robert Crida
    I am trying to develop a Rails application on PostgreSQL, using a sequence to increment a field instead of a default Ruby approach based on validates_uniqueness_of. This has proved challenging for a number of reasons:

    1. This is a migration of an existing table, not a new table or column.
    2. Using the parameter :default => "nextval('seq')" didn't work, because it sets the default inside parentheses.
    3. Eventually I got the migration working in 2 steps:

        change_column :work_commencement_orders, :wco_number_suffix, :integer, :null => false#, :options => "set default nextval('wco_number_suffix_seq')"
        execute %{
          ALTER TABLE work_commencement_orders
          ALTER COLUMN wco_number_suffix SET DEFAULT nextval('wco_number_suffix_seq');
        }

    This appears to have done the correct thing in the development database, and the schema looks like:

        wco_number_suffix | integer | not null default nextval('wco_number_suffix_seq'::regclass)

    However, the tests are failing with:

        PGError: ERROR: null value in column "wco_number_suffix" violates not-null constraint
        : INSERT INTO "work_commencement_orders" ("expense_account_id", "created_at", "process_id", "vo2_issued_on", "wco_template", "updated_at", "notes", "process_type", "vo_number", "vo_issued_on", "vo2_number", "wco_type_id", "created_by", "contractor_id", "old_wco_type", "master_wco_number", "deadline", "updated_by", "detail", "elective_id", "authorization_batch_id", "delivery_lat", "delivery_long", "operational", "state", "issued_on", "delivery_detail") VALUES(226, '2010-05-31 07:02:16.764215', 728, NULL, E'Default', '2010-05-31 07:02:16.764215', NULL, E'Procurement::Process', NULL, NULL, NULL, 226, NULL, 276, NULL, E'MWCO-213', '2010-06-14 07:02:16.756952', NULL, E'Name 4597', 220, NULL, NULL, NULL, 'f', E'pending', NULL, E'728 Test Road; Test Town; 1234; Test Land') RETURNING "id"

    The explanation can be found by inspecting the schema of the test database:

        wco_number_suffix | integer | not null

    So what happened to the default? I tried adding

        test:
          template: smmt_ops_development

    to the database.yml file, which has the effect of issuing:

        create database smmt_ops_test template = "smmt_ops_development" encoding = 'utf8'

    I have verified that if I issue this then it does in fact copy the default nextval. So clearly rails is doing something after that to suppress it again. Any suggestions as to how to fix this? Thanks, Robert
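
    For what it's worth, a likely cause: the test database is rebuilt from schema.rb, and the Ruby schema dumper cannot represent DDL added via execute, so the sequence default is silently dropped. The usual fix is to dump the schema as SQL instead:

        # config/environment.rb -- dump structure SQL instead of schema.rb so raw
        # DDL such as the sequence default survives into the test database.
        config.active_record.schema_format = :sql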

    Read the article

  • Why put a DAO layer over a persistence layer (like JDO or Hibernate)

    - by Todd Owen
    Data Access Objects (DAOs) are a common design pattern, and recommended by Sun. But the earliest examples of Java DAOs interacted directly with relational databases -- they were, in essence, doing object-relational mapping (ORM). Nowadays, I see DAOs on top of mature ORM frameworks like JDO and Hibernate, and I wonder if that is really a good idea. I am developing a web service using JDO as the persistence layer, and am considering whether or not to introduce DAOs. I foresee a problem when dealing with a particular class which contains a map of other objects:

        public class Book {
            // Book description in various languages, indexed by ISO language codes
            private Map<String,BookDescription> descriptions;
        }

    JDO is clever enough to map this to a foreign key constraint between the "BOOKS" and "BOOKDESCRIPTIONS" tables. It transparently loads the BookDescription objects (using lazy loading, I believe), and persists them when the Book object is persisted. If I were to introduce a "data access layer" and write a class like BookDao, encapsulating all the JDO code within it, then wouldn't JDO's transparent loading of the child objects circumvent the data access layer? For consistency, shouldn't all the BookDescription objects be loaded and persisted via some BookDescriptionDao object (or a BookDao.loadDescription method)? Yet refactoring in that way would make manipulating the model needlessly complicated. So my question is: what's wrong with calling JDO (or Hibernate, or whatever ORM you fancy) directly in the business layer? Its syntax is already quite concise, and it is datastore-agnostic. What is the advantage, if any, of encapsulating it in Data Access Objects?

    Read the article

  • Grails XOM linkageerror - SAXParserException

    - by Stefan Kendall
    Possibly related: http://stackoverflow.com/questions/2762439/grails-attempting-to-include-htppbuilder-linkage-error

    I'm trying to include XOM in my grails project. How do I know which dependency library I need to exclude? I'm lost here.

        dependencies {
            build('xom:xom:1.1') {
                excludes "xml-apis"
            }
        }

    Error:

        java.lang.LinkageError: loader constraint violation: loader (instance of <bootloader>) previously initiated loading for a different type with name "org/xml/sax/SAXParseException"
            at java.lang.Class.getDeclaredMethods0(Native Method)
            at java.lang.Class.privateGetDeclaredMethods(Class.java:2427)
            at java.lang.Class.getDeclaredMethods(Class.java:1791)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.codehaus.groovy.util.LazyReference.getLocked(LazyReference.java:33)
            at org.codehaus.groovy.util.LazyReference.get(LazyReference.java:20)
            at grails.util.PluginBuildSettings.getPluginInfos(PluginBuildSettings.groovy:124)
            at grails.util.PluginBuildSettings.getPluginInfos(PluginBuildSettings.groovy)
            at grails.util.PluginBuildSettings$getPluginInfos.callCurrent(Unknown Source)
            at grails.util.PluginBuildSettings.getPluginInfo(PluginBuildSettings.groovy:160)
            at grails.util.PluginBuildSettings$getPluginInfo.callCurrent(Unknown Source)
            at grails.util.PluginBuildSettings.getPluginInfoForSource(PluginBuildSettings.groovy:195)
            at org.codehaus.groovy.transform.ASTTransformationVisitor$3.call(ASTTransformationVisitor.java:303)
            at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:820)
            at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:513)
            at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:489)
            at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:466)
            at _GrailsEvents_groovy.run(_GrailsEvents_groovy:54)
            at _GrailsEvents_groovy$run.call(Unknown Source)
            at _GrailsArgParsing_groovy$run.call(Unknown Source)
            at _GrailsArgParsing_groovy.run(_GrailsArgParsing_groovy:29)
            at _GrailsArgParsing_groovy$run.call(Unknown Source)
            at _GrailsInit_groovy$run.call(Unknown Source)
            at _GrailsInit_groovy.run(_GrailsInit_groovy:38)
            at _GrailsInit_groovy$run.call(Unknown Source)
            at Help_.run(Help_.groovy:27)
            at Help_$run.call(Unknown Source)
            at gant.Gant.processTargets(Gant.groovy:494)
            at gant.Gant.processTargets(Gant.groovy:480)

    Read the article

  • TSQL to insert an ascending value

    - by David Neale
    I am running some SQL that identifies records which need to be marked for deletion and inserts a value into those records. This value must render the record useless, and each record must be changed to a unique value because of a database constraint.

        UPDATE Users
        SET Username = 'Deleted' + (ISNULL(
                Cast(SELECT RIGHT(MAX(Username),1) FROM Users
                     WHERE Username LIKE 'Deleted%') AS INT) ,0) + 1
        FROM Users a
        LEFT OUTER JOIN #ADUSERS b ON a.Username = 'AVSOMPOL\' + b.sAMAccountName
        WHERE (b.sAMAccountName is NULL AND a.Username LIKE 'AVSOMPOL%')
           OR b.userAccountControl = 514

    This is the important bit:

        SET Username = 'Deleted' + (ISNULL(
                Cast(SELECT RIGHT(MAX(Username),1) FROM Users
                     WHERE Username LIKE 'Deleted%') AS INT) ,0) + 1

    What I've tried to do is have deleted records' Username field set to 'Deletedxxx'. The ISNULL is needed because there may be no records matching the SELECT RIGHT(MAX(Username),1) FROM Users WHERE Username LIKE 'Deleted%' statement, and this will return NULL. I get a syntax error when trying to parse this:

        Msg 156, Level 15, State 1, Line 2
        Incorrect syntax near the keyword 'SELECT'.
        Msg 102, Level 15, State 1, Line 2
        Incorrect syntax near ')'.

    I'm sure there must be a better way to go about this; any ideas?
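
    A hedged sketch of the corrected expression: the scalar subquery needs its own parentheses, and since Username is character data, the incremented number has to be cast back to varchar before concatenating (sizes assumed):

        SET Username = 'Deleted' + CAST(
                ISNULL((SELECT CAST(RIGHT(MAX(Username), 1) AS INT)
                        FROM Users
                        WHERE Username LIKE 'Deleted%'), 0) + 1
            AS varchar(10))

    Note that RIGHT(..., 1) only ever yields a single digit, so this scheme stops producing unique values after 'Deleted9'; taking everything after the 'Deleted' prefix (e.g. with SUBSTRING) would avoid that.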

    Read the article

  • How to record different authentication types (username / password vs token based) in audit log

    - by RM
    I have two types of users for my system: normal human users with a username / password, and delegation-authorized accounts through OAuth (i.e. using a token identifier). The information that is stored for each is quite different, and they are managed by different subsystems. They do however interact with the same tables / data within the system, so I need to maintain the audit trail regardless of whether a human user or a token-based user modified the data. My solution at the moment is to have a table called something like AuditableIdentity, and then have the two types inherit from that table (either in a single table, or as two separate tables whose PKs are 1-to-1 with AuditableIdentity). All operations would use the common AuditableIdentity PK for the CreatedBy, ModifiedBy etc. columns. There isn't any FK constraint on the audit columns, so any text can go in there, but I want an easy way to determine whether it was a human or system that made the change, and joining to the one AuditableIdentity table seems like a clean way to do that. Is there a best practice for this scenario? Is this an appropriate way of approaching the problem, or would you not bother with the common table and just rely on joins (to the two separate, unrelated user / token tables) later to determine which user type matches which audit records?
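
    A sketch of the described design in DDL, with assumed names and columns, just to make the shape concrete:

        CREATE TABLE AuditableIdentity (
            Id INT IDENTITY PRIMARY KEY,
            IdentityType CHAR(1) NOT NULL  -- 'H' = human, 'T' = token-based
        );

        CREATE TABLE HumanUser (
            Id INT PRIMARY KEY REFERENCES AuditableIdentity(Id),
            Username NVARCHAR(100) NOT NULL
            -- password hash, etc.
        );

        CREATE TABLE TokenAccount (
            Id INT PRIMARY KEY REFERENCES AuditableIdentity(Id),
            Token NVARCHAR(200) NOT NULL
            -- OAuth metadata, etc.
        );

    Audit columns such as CreatedBy then hold an AuditableIdentity.Id, and a single join to AuditableIdentity answers the human-versus-system question.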

    Read the article

  • miglayout: can't get right-alignment to work

    - by Jason S
    Something's not right here. I'm trying to get the rightmost button (labeled "help" in the example below) to be right-aligned to the JFrame, and the huge buttons to have their width tied to the JFrame but be at least 180px each. I got the huge button constraint to work, but not the right alignment. I thought the right alignment was accomplished by gapbefore push (as in this other SO question), but I can't figure it out. Can anyone help me?

        import javax.swing.JButton;
        import javax.swing.JFrame;
        import javax.swing.JPanel;
        import net.miginfocom.swing.MigLayout;

        public class RightAlignQuestion {
            public static void main(String[] args) {
                JFrame frame = new JFrame("right align question");
                JPanel mainPanel = new JPanel();
                frame.setContentPane(mainPanel);
                mainPanel.setLayout(new MigLayout("insets 0", "[grow]", ""));

                JPanel topPanel = new JPanel();
                topPanel.setLayout(new MigLayout("", "[][][][]", ""));
                for (int i = 0; i < 3; ++i)
                    topPanel.add(new JButton("button"+i), "");
                topPanel.add(new JButton("help"), "gapbefore push, wrap");
                topPanel.add(new JButton("big button"), "span 3, grow");

                JPanel bottomPanel = new JPanel();
                bottomPanel.setLayout(new MigLayout("", "[180:180:,grow][180:180:,grow]","100:"));
                bottomPanel.add(new JButton("tweedledee"), "grow");
                bottomPanel.add(new JButton("tweedledum"), "grow");

                mainPanel.add(topPanel, "grow, wrap");
                mainPanel.add(bottomPanel, "grow");
                frame.pack();
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            }
        }
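
    One hedged thing to try: put the push on the gap in the column constraints themselves, rather than on the component, so the last column is forced to the panel's right edge regardless of per-component gap handling:

        // Sketch: "push" on the gap before the last column absorbs the spare
        // horizontal space, pushing the help button to the right edge.
        topPanel.setLayout(new MigLayout("", "[][][]push[]", ""));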

    Read the article

  • SQL Concurrent test update question

    - by ptoinson
    Howdy folks, I have a SQL Server 2008 database with a table for tags. A tag is just an id and a name. The definition of the tags table looks like:

        CREATE TABLE [dbo].[Tag](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [Name] [varchar](255) NOT NULL
            CONSTRAINT [PK_Tag] PRIMARY KEY CLUSTERED
            (
                [ID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
        ) ON [PRIMARY]

    Name also has a unique index. Further, I have several processes adding data to this table at a pretty rapid rate. These processes use a stored proc that looks like:

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
            DECLARE @ID int
            SET @ID = (select ID from Tag where Name=@Name)
            if @ID is null
            begin
                INSERT Tag(Name) VALUES (@Name)
                RETURN SCOPE_IDENTITY()
            end
            else
            begin
                return @ID
            end

    My issue is that, beyond my being a novice at concurrent database design, there seems to be a race condition that is causing me to occasionally get an error that I'm trying to enter duplicate keys (Name) into the DB:

        Cannot insert duplicate key row in object 'dbo.Tag' with unique index 'IX_Tag_Name'.

    This makes sense; I'm just not sure how to fix it. If it were code I would know how to lock the right areas. SQL Server is quite a different beast. First question: what is the proper way to code this "check, then update" pattern? It seems I need an exclusive lock on the row during the check, rather than a shared lock, but it's not clear to me what the best way to do that is. Any help in the right direction will be greatly appreciated. Thanks in advance.
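
    A hedged sketch of the classic fix: hold a key-range lock across the check and the insert with UPDLOCK and HOLDLOCK hints inside a transaction, so two concurrent callers serialize on the same Name:

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
        BEGIN
            SET XACT_ABORT ON;
            BEGIN TRAN;

            DECLARE @ID int;
            -- UPDLOCK takes an update lock; HOLDLOCK keeps a range lock on the
            -- key until commit, closing the window between check and insert.
            SELECT @ID = ID FROM Tag WITH (UPDLOCK, HOLDLOCK) WHERE Name = @Name;

            IF @ID IS NULL
            BEGIN
                INSERT Tag(Name) VALUES (@Name);
                SET @ID = SCOPE_IDENTITY();
            END

            COMMIT;
            RETURN @ID;
        END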

    Read the article

  • How do I exclude data from local table schema_migrations from being pushed to Heroku DB?

    - by Thierry Lam
    I was able to push my Ruby on Rails app with MySQL (local dev) to the Heroku server, along with migrating my model with the command heroku rake db:migrate. I have also read the documentation on Database Import/Export. Is that doc referring to pushing actual data from my local dev DB to Heroku's DB? Do I need to modify anything in the file database.yml to make it happen? I ran the following command:

        heroku db:push

    and I am getting the error:

        Sending data
        2 tables, 3 records
        !!! Caught Server Exception | ETA: --:--:--
        Taps Server Error: PGError ERROR: duplicate key value violates unique
        constraint "unique_schema_migrations"

    I have 2 tables: one I created for my app, and the other is schema_migrations. The total number of entries across the 2 tables is 3. I'm also printing the number of entries in the table I created, and it's showing 0. Any ideas what I might be missing or what I am doing wrong?

    EDIT: I figured out the above; Heroku's DB already has schema_migrations the moment I run migrate.

    New question: Does anyone know how I can exclude data in a specific table from being pushed to the Heroku DB? The table to exclude in this case would be schema_migrations.

    Not-so-good solution: I googled around and someone else was having the same issue. He suggested renaming the schema_migrations table to zschema_migrations. In this way data from the other tables will be pushed properly until it fails on the last table. It's a pretty bad solution, but it will do for the time being. A better solution would be to use an existing Rails command which can reset a specific table in a database. I don't think Rake can do that.

    Read the article

  • Model login constraints based on time

    - by DaDaDom
    Good morning, for an existing web application I need to implement "time-based login constraints". This means that for each user, and later maybe each group, I can define timeslots when they are (or are not) allowed to log in to the system. As all data for the application is stored in database tables, I need to model this idea that way. My first approach, which I will try to explain here:

    1. Create a tree of login constraints (called "timeslots") with the main "categories", like "workday", "weekend", "public holiday", etc., on the top level, in a "sorted" order (meaning "public holiday" has a higher priority than "weekday").
    2. For each top-level node, create subnodes which have a finer timespan, like "monday", "tuesday", ...
    3. Below that, create an "hour" level: 0, 1, 2, ..., 23. No further detail is necessary.
    4. Set every member to "allowed" by default.
    5. For every member of the system, create a 1:n relationship member:timeslots which defines constraints, e.g. member A may have A:monday->forbidden and A:tuesday->forbidden.
    6. Do a depth-first search at every login and check whether the member has a constraint.

    Why a depth-first search? Well, I thought that a member may have the rules A:monday->forbidden, A:monday-10->allowed, A:monday-11->allowed, so a login on monday at 12:30 would fail, but one at 10:30 would succeed. For performance reasons I could break the relational database paradigm and set a flag on every entry in the member-to-timeslots table which is true if the member has information set for "finer" timeslots, but that's a second step. Is this model in principle a good idea? Are there existing models? Thanks.
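
    A sketch of the proposed tables in DDL, with assumed names and types, just to make the model concrete:

        CREATE TABLE timeslot (
            id        INTEGER PRIMARY KEY,
            parent_id INTEGER REFERENCES timeslot(id),  -- NULL for top-level nodes
            label     VARCHAR(50) NOT NULL,             -- 'workday', 'monday', '10', ...
            priority  INTEGER NOT NULL                  -- 'public holiday' beats 'weekday'
        );

        CREATE TABLE member_timeslot (
            member_id   INTEGER NOT NULL REFERENCES member(id),
            timeslot_id INTEGER NOT NULL REFERENCES timeslot(id),
            allowed     BOOLEAN NOT NULL,               -- overrides the default 'allowed'
            PRIMARY KEY (member_id, timeslot_id)
        );

    The depth-first check then walks from the matching top-level node down to the hour node, taking the most specific member_timeslot row it finds.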

    Read the article

  • Automatically grow document view of NSScrollView using auto layout?

    - by Monolo
    Is there a simple way to get an NSScrollView to adapt to its document view changing size when using auto layout (the Lion feature)? I have tried calling both setNeedsUpdateConstraints: and setNeedsLayout: on the document view, the clip view, and the scroll view, without any results. The fittingSize of the document view reports the correct size. An NSPopover in conjunction with an NSViewController handles this nicely, with the popover growing and shrinking as needed, and I was hoping to get similarly simple and robust behaviour with the scroll view. I have checked the documentation for scroll views, but it doesn't seem to have been updated for auto layout.

    Edited to clarify: The problem I experience is that the document view, which holds subviews, is not resized when the subviews change their size, even if they call invalidateIntrinsicContentSize. The contents of the document view are hence clipped to the original size of the document view as they grow. The document view is created in a nib and set as the scroll view's document view in an awakeFromNib method. What I hoped for was that the document view frame would automatically be adjusted when its fittingSize changes, and the scrollbars updated accordingly. NSPopover does something similar, provided that the subviews of the content controller's view have their constraints set right and various content hugging values are high enough (higher than the hidden popover window's height constraint priority, for one).
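
    A hedged sketch of one approach: opt the document view out of autoresizing-mask translation and pin its top and leading edges to the clip view, so auto layout derives its frame (and thus the scrollable area) from its content:

        // Sketch only (AppKit): constraints from the document view to the clip
        // view let the document view's fittingSize drive its frame.
        NSView *doc = scrollView.documentView;
        doc.translatesAutoresizingMaskIntoConstraints = NO;
        NSClipView *clip = (NSClipView *)scrollView.contentView;
        NSDictionary *views = NSDictionaryOfVariableBindings(doc);
        [clip addConstraints:[NSLayoutConstraint constraintsWithVisualFormat:@"H:|[doc]"
                                                                     options:0 metrics:nil views:views]];
        [clip addConstraints:[NSLayoutConstraint constraintsWithVisualFormat:@"V:|[doc]"
                                                                     options:0 metrics:nil views:views]];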

    Read the article
