Search Results

Search found 9730 results on 390 pages for 'dynamic schema'.


  • How to check if two records have a self-referencing relation?

    - by Machine
    Consider the following schema with users and their collegues (friends):

    Users:

        User:
          columns:
            user_id:
              name: user_id as userId
              type: integer(8)
              unsigned: 1
              primary: true
              autoincrement: true
            first_name:
              name: first_name as firstName
              type: string(45)
              notnull: true
            last_name:
              name: last_name as lastName
              type: string(45)
              notnull: true
            email:
              type: string(45)
              notnull: true
              unique: true
          relations:
            Collegues:
              class: User
              local: invitor
              foreign: invitee
              refClass: CollegueStatus
              equal: true
              onDelete: CASCADE
              onUpdate: CASCADE

    Join table:

        CollegueStatus:
          columns:
            invitor:
              type: integer(8)
              unsigned: 1
              primary: true
            invitee:
              type: integer(8)
              unsigned: 1
              primary: true
            status:
              type: enum(8)
              values: [pending, accepted, denied]
              default: pending
              notnull: true

    Now, let's say I have two records: one for the user making the HTTP request (the logged-in user), and one for a user he wants to send a message to. I want to check if these users are collegues. Questions: Does Doctrine have any pre-built functionality to check if two records with self-relations are related? If not, how would you write a method to check this? And where would you put that method (the User class, the UserTable class, etc.)? I could probably do something like this:

        public function areCollegues(User $user1, User $user2)
        {
            // Ensure we load collegues if $user1 was fetched with DQL that
            // doesn't load this relation
            $collegues = $user1->get('Collegues');
            $areCollegues = false;
            foreach ($collegues as $collegue) {
                if ($collegue['userId'] === $user2['userId']) {
                    $areCollegues = true;
                    break;
                }
            }
            return $areCollegues;
        }

    But this looks neither efficient nor pretty. I just feel this should already be solved for self-referencing relations to be nice to use.
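    A possible shortcut (a sketch, untested; it assumes Doctrine 1.x and the CollegueStatus join table above) is to query the join table directly instead of hydrating the whole Collegues collection, for example as a method on the UserTable class:

        // Counts matching rows in the join table; both directions are
        // checked because the relation is marked equal: true.
        public function areCollegues(User $user1, User $user2)
        {
            $count = Doctrine_Query::create()
                ->from('CollegueStatus cs')
                ->where('(cs.invitor = ? AND cs.invitee = ?) OR (cs.invitor = ? AND cs.invitee = ?)',
                        array($user1['userId'], $user2['userId'], $user2['userId'], $user1['userId']))
                ->count();
            return $count > 0;
        }

    This issues a single COUNT query instead of loading every collegue row.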

    Read the article

  • ManyToOne annotation fails with Hibernate 4.1: MappingException

    - by barelas
    Using Hibernate 4.1.1.Final. When I try to add @ManyToOne, schema creation fails with:

        org.hibernate.MappingException: Could not instantiate persister org.hibernate.persister.entity.SingleTableEntityPersister

    User.java:

        @Entity
        public class User {
            @Id
            private int id;
            public int getId() { return id; }
            public void setId(int id) { this.id = id; }

            @ManyToOne
            Department department;
            public Department getDepartment() { return department; }
            public void setDepartment(Department department) { this.department = department; }
        }

    Department.java:

        @Entity
        public class Department {
            @Id
            private int departmentNumber;
            public int getDepartmentNumber() { return departmentNumber; }
            public void setDepartmentNumber(int departmentNumber) { this.departmentNumber = departmentNumber; }
        }

    hibernate.properties:

        hibernate.connection.driver_class=com.mysql.jdbc.Driver
        hibernate.connection.url=jdbc:mysql://localhost:3306/dbname
        hibernate.connection.username=user
        hibernate.connection.password=pass
        hibernate.connection.pool_size=5
        hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect
        hibernate.hbm2ddl.auto=create

    Init code (throwing the exception):

        ServiceRegistry serviceRegistry = new ServiceRegistryBuilder().buildServiceRegistry();
        sessionFactory = new MetadataSources(serviceRegistry)
                .addAnnotatedClass(Department.class)
                .addAnnotatedClass(User.class)
                .buildMetadata()
                .buildSessionFactory();

    Exception thrown at init:

        org.hibernate.MappingException: Could not instantiate persister org.hibernate.persister.entity.SingleTableEntityPersister
            at org.hibernate.persister.internal.PersisterFactoryImpl.create(PersisterFactoryImpl.java:174)
            at org.hibernate.persister.internal.PersisterFactoryImpl.createEntityPersister(PersisterFactoryImpl.java:148)
            at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:820)
            at org.hibernate.metamodel.source.internal.SessionFactoryBuilderImpl.buildSessionFactory(SessionFactoryBuilderImpl.java:65)
            at org.hibernate.metamodel.source.internal.MetadataImpl.buildSessionFactory(MetadataImpl.java:340)

    I have tried adding some other annotations, but shouldn't the defaults work and create the tables and foreign key? If I remove the department field from User, the tables get generated fine. Thanks in advance!
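    One thing worth comparing against (a sketch, untested): in Hibernate 4.1 the MetadataSources/metamodel path was still experimental, while the classic Configuration bootstrap was the stable code path:

        // Sketch: classic Configuration-based bootstrap for Hibernate 4.x.
        Configuration cfg = new Configuration()
                .addAnnotatedClass(User.class)
                .addAnnotatedClass(Department.class);
        ServiceRegistry registry = new ServiceRegistryBuilder()
                .applySettings(cfg.getProperties())
                .buildServiceRegistry();
        SessionFactory sessionFactory = cfg.buildSessionFactory(registry);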

    Read the article

  • Alter dilemma: how to use ALTER to set the primary key and other attributes

    - by Rachel
    I have the following table in my database, and I need to alter it to the schema shown below. Initially I was dropping the current table and re-creating it with CREATE, but I'm not supposed to do that; I must use ALTER, and I'm not sure how to use ALTER to add a primary key and other constraints. Any suggestions?

    Current:

        CREATE TABLE `details` (
          `KEY` varchar(255) NOT NULL,
          `ID` bigint(20) NOT NULL,
          `CODE` varchar(255) NOT NULL,
          `C_ID` bigint(20) NOT NULL,
          `C_CODE` varchar(64) NOT NULL,
          `CCODE` varchar(255) NOT NULL,
          `TCODE` varchar(255) NOT NULL,
          `LCODE` varchar(255) NOT NULL,
          `CAMCODE` varchar(255) NOT NULL,
          `OFCODE` varchar(255) NOT NULL,
          `OFNAME` varchar(255) NOT NULL,
          `PRIORITY` bigint(20) NOT NULL,
          `STDATE` datetime NOT NULL,
          `ENDATE` datetime NOT NULL,
          `INT` varchar(255) NOT NULL,
          `PHONE` varchar(255) NOT NULL,
          `TV` varchar(255) NOT NULL,
          `MTV` varchar(255) NOT NULL,
          `TYPE` varchar(255) NOT NULL,
          `CREATED` datetime NOT NULL,
          `MAIN` varchar(255) NOT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    Desired:

        CREATE TABLE `details` (
          `id` bigint(20) NOT NULL,
          `code` varchar(255) NOT NULL,
          `cid` bigint(20) NOT NULL,
          `ccode` varchar(64) NOT NULL,
          `c_code` varchar(255) NOT NULL,
          `tcode` varchar(255) NOT NULL,
          `lcode` varchar(255) NOT NULL,
          `camcode` varchar(255) NOT NULL,
          `ofcode` varchar(255) NOT NULL,
          `ofname` varchar(255) NOT NULL,
          `priority` bigint(20) NOT NULL,
          `stdate` datetime NOT NULL,
          `enddate` datetime NOT NULL,
          `list` varchar(255) NOT NULL,
          `name` varchar(255) NOT NULL,
          `created` datetime NOT NULL,
          `date` datetime NOT NULL,
          `ofshn` int(20) NOT NULL,
          `ofcl` int(20) NOT NULL,
          `ofr` int(20) NOT NULL,
          PRIMARY KEY (`code`,`ccode`,`list`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    Thanks!
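    A sketch of the general shape (example only, untested; which old column maps to which new one, e.g. `INT` to `list`, is a guess, so adjust the renames): MySQL lets a single ALTER TABLE carry several clauses, including DROP COLUMN, CHANGE for renames (which need the full new column definition), ADD COLUMN, and ADD PRIMARY KEY:

        ALTER TABLE `details`
          DROP COLUMN `KEY`,
          CHANGE `C_ID` `cid` bigint(20) NOT NULL,
          CHANGE `INT` `list` varchar(255) NOT NULL,   -- assumed rename
          CHANGE `ENDATE` `enddate` datetime NOT NULL,
          ADD COLUMN `ofshn` int(20) NOT NULL,
          ADD COLUMN `ofcl` int(20) NOT NULL,
          ADD COLUMN `ofr` int(20) NOT NULL,
          ADD PRIMARY KEY (`code`, `ccode`, `list`);

    Repeat DROP/CHANGE/ADD clauses for the remaining columns; MySQL column names are case-insensitive, so the uppercase ones match their lowercase counterparts.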

    Read the article

  • Fluent NHibernate - Delete a related object when no explicit relationship exists in the model

    - by John Price
    I've got an application that keeps track of (for the sake of an example) what drinks are available at a given restaurant. My domain model looks like:

        class Restaurant
        {
            public IEnumerable<RestaurantDrink> GetRestaurantDrinks() { ... }
            // other various properties
        }

        class RestaurantDrink
        {
            public Restaurant Restaurant { get; set; }
            public Drink Drink { get; set; }
            public string DrinkVariety { get; set; }  // "fountain drink", "bottled", etc.
            // other various properties
        }

        class Drink
        {
            public string Name { get; set; }
            public string Manufacturer { get; set; }
            // other various properties
        }

    My db schema is (I hope) about what you'd expect: "RestaurantDrinks" is essentially a mapping table between Restaurants and Drinks with some extra properties (like "DrinkVariety") tacked on. Using Fluent NHibernate to set up mappings, I've set up a "HasMany" relationship from Restaurants to RestaurantDrinks that causes the latter to be deleted when its parent Restaurant is deleted. My question is: given that "Drink" has no property on it that explicitly references RestaurantDrinks (the relationship only exists in the underlying database), can I set up a mapping that will cause RestaurantDrinks to be deleted when their associated Drink is deleted?
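    One possible route (a sketch, untested; the FK column name is an assumption): keep Drink's public API unchanged but give it a private backing collection, which Fluent NHibernate can map via Reveal.Member, so the cascade has something to hang off:

        class Drink
        {
            // private backing collection; never exposed publicly
            private IList<RestaurantDrink> restaurantDrinks = new List<RestaurantDrink>();
            // ... existing properties unchanged
        }

        public class DrinkMap : ClassMap<Drink>
        {
            public DrinkMap()
            {
                // Reveal.Member lets the fluent API see the private member.
                HasMany<RestaurantDrink>(Reveal.Member<Drink, IEnumerable<RestaurantDrink>>("restaurantDrinks"))
                    .KeyColumn("DrinkId")            // assumed FK column name
                    .Access.Field()
                    .Cascade.AllDeleteOrphan();
            }
        }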

    Read the article

  • How to generate unique numbers less than 8 characters long

    - by loudiyimo
    Hi, I want to generate unique IDs every time I call the method generateCustumerId(). The generated ID must be 8 characters long or less. This requirement is necessary because I need to store the ID in a data file whose schema fixes it at 8 characters. Option 1 works fine. Instead of option 1, I want to use UUID. The problem is that UUID generates an ID which has too many characters. Does someone know how to generate a unique ID which is less than 99999999?

    Option 1:

        import java.util.HashSet;
        import java.util.Random;
        import java.util.Set;

        public class CustomerIdGenerator {
            private static Set<String> customerIds = new HashSet<String>();
            private static Random random = new Random();

            // XXX: replace with java.util.UUID
            public static String generateCustumerId() {
                String customerId = null;
                while (customerId == null || customerIds.contains(customerId)) {
                    customerId = String.valueOf(random.nextInt(89999999) + 10000000);
                }
                customerIds.add(customerId);
                return customerId;
            }
        }

    Option 2 generates a unique ID which is too long:

        public static String generateCustumerId() {
            String ownerId = UUID.randomUUID().toString();
            System.out.println("ownerId " + ownerId);
            return ownerId;
        }
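    If the UUID class has to be involved, one compromise (a sketch; note that truncating a UUID forfeits its uniqueness guarantee, so the duplicate check from option 1 is still required) is to fold the UUID's bits into the 8-digit range:

        // Sketch: map a UUID onto an 8-digit number. Collisions become
        // possible, so keep checking against already-issued IDs.
        public static String generateCustumerId() {
            long bits = UUID.randomUUID().getMostSignificantBits() & Long.MAX_VALUE; // force non-negative
            long id = 10000000L + (bits % 90000000L);  // always exactly 8 digits
            return String.valueOf(id);
        }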

    Read the article

  • Problem using SQLDataReader with Sybase ASE

    - by John K.
    We're developing a reporting application that uses asp.net-mvc (.NET 4). We connect through DDTEK.Sybase middleware to a Sybase ASE 12.5 database. We're having a problem pulling data into a DataReader (from a stored procedure). The stored procedure computes values (approximately 50 columns) by doing sums, counts, and calling other stored procedures. The problem we're experiencing is that certain columns (maybe 5%) come back with NULL or 0. If we debug, copy the SQL statement being used for the DataReader and run it inside another SQL tool, we get valid values for all columns.

        conn = new SybaseConnection
        {
            ConnectionString = ConfigurationManager.ConnectionStrings[ConnectStringName].ToString()
        };
        conn.Open();
        cmd = new SybaseCommand
        {
            CommandTimeout = cmdTimeout,
            Connection = conn,
            CommandText = mainSql
        };
        reader = cmd.ExecuteReader();
        // AT THIS POINT, IMMEDIATELY AFTER THE EXECUTEREADER COMMAND,
        // THE READER CONTAINS THE BAD (NULL OR 0) DATA FOR THESE COLUMNS.
        DataTable schemaTable = reader.GetSchemaTable();
        // AT THIS POINT WE CAN VIEW THE DATATABLE FOR THE SCHEMA AND IT APPEARS CORRECT.
        // THE COLUMNS THAT DON'T WORK HAVE SPECIFICATIONS IDENTICAL TO THE COLUMNS THAT DO WORK.

    Has anyone had problems like this using Sybase and ADO? Thanks, John K.

    Read the article

  • advanced winform framework

    - by alfredo dobrekk
    Hi, I'm starting a new project that would basically take input from users across about 30 screens and save it to a database, and I would like to find a framework that provides the maximum number of these features out of the box:

    - .NET, C#
    - Windows Forms
    - unit testing
    - continuous integration
    - screens with lists, combo boxes, text boxes, and add/delete/save/cancel actions that are easy to update when you add a property to your classes or a field to your database
    - auto-completion on controls to help the user find his way
    - use of an ORM like NHibernate
    - easy multithreading and display of wait screens for the user
    - easy undo/redo
    - tabbed child windows
    - search forms
    - ability to grant access to some functionality according to user profiles
    - MVP/MVVM or whatever design patterns
    - either some code generation from the database to C# classes, or generation of the database schema from C# classes
    - some kind of database versioning/upgrade to easily update the database when I release patches to the application once in production
    - code metrics analysis
    - some code generator I can use against my entities that would generate a rough form I can rearrange afterwards
    - code documentation generator
    - ...

    Any ideas? I know it's a lot, but I really would like to build upon existing code so I can focus on business rules. Do you have any suggestions to add to the list before starting? What open-source tools would you use to achieve these?

    Read the article

  • How to determine the one-to-many column name from the entity type

    - by snicker
    I need a way to determine the name of the column used by NHibernate to join one-to-many collections, starting from the collected entity's type. I need to be able to determine this at runtime. Here is an example. I have some entities:

        namespace Entities
        {
            public class Stable
            {
                public virtual int Id { get; set; }
                public virtual string StableName { get; set; }
                public virtual IList<Pony> Ponies { get; set; }
            }

            public class Dude
            {
                public virtual int Id { get; set; }
                public virtual string DudesName { get; set; }
                public virtual IList<Pony> PoniesThatBelongToDude { get; set; }
            }

            public class Pony
            {
                public virtual int Id { get; set; }
                public virtual string Name { get; set; }
                public virtual string Color { get; set; }
            }
        }

    I am using NHibernate to generate the database schema, which comes out looking like this:

        create table "Stable" (Id integer, StableName TEXT, primary key (Id))
        create table "Dude" (Id integer, DudesName TEXT, primary key (Id))
        create table "Pony" (Id integer, Name TEXT, Color TEXT, Stable_id INTEGER, Dude_id INTEGER, primary key (Id))

    Given that I have a Pony entity in my code, I need to be able to find out: (a) does Pony even belong to a collection in the mapping, and (b) if it does, what are the column names in the database table that pertain to those collections? In the above instance, I would like to see that Pony has two collection columns, Stable_id and Dude_id.
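    One runtime angle (a sketch against NHibernate's mapping model, untested; the exact property names may differ between NHibernate versions) is to walk the collection mappings on the Configuration, looking for collections whose element type is Pony, and read the key columns:

        // Sketch: cfg is the NHibernate.Cfg.Configuration the session factory
        // was built from. Each collection mapping knows its role (the owning
        // property) and the key columns used for the join.
        foreach (var collection in cfg.CollectionMappings)
        {
            if (collection.Element.Type.ReturnedClass == typeof(Pony))
            {
                foreach (var selectable in collection.Key.ColumnIterator)
                {
                    var column = selectable as NHibernate.Mapping.Column;
                    if (column != null)
                        Console.WriteLine("{0} -> {1}", collection.Role, column.Name);
                }
            }
        }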

    Read the article

  • Alternatives to the Entity Framework for Serving/Consuming an OData Interface

    - by Egahn
    I'm researching how to set up an OData interface to our database. I would like to be able to pull/query data from our DB into Excel, as a start. Eventually I would like to have Excel run queries and pull data over HTTP from a remote client, including authentication, etc. I've set up a working (rickety) prototype so far, using the ADO.NET Entity Data Model wizard in Visual Studio, and VSTO to create a test Excel worksheet with a button to pull from that ADO.NET interface. This works OK so far, and I can query the DB using Linq through the entities/objects that are created by the ADO.NET EDM wizard. However, I have started to run into some problems with this approach. I've been finding the Entity Framework difficult to work with (and in fact, also difficult to research solutions to, as there's a lot of chaff out there regarding it and older versions of it). An example of this is my being unable to figure out how to set the SQL command timeout (as opposed to the HTTP request timeout) on the DataServiceContext object that the wizard generates for my schema, but that's not the point of my question. The real question I have is, if I want to use OData as my interface standard, am I stuck with the Entity Framework? Are there any other solutions out there (preferably open source) which can set up, serve and consume an OData interface, and are easier to work with and less bloated than the Entity Framework? I have seen mention of NHibernate as an alternative, but most of the comparison threads I've seen are a few years old. Are there any other alternatives out there now? Thanks very much!

    Read the article

  • myXmlDataDoc.DataSet.ReadXml problem

    - by alex
    Hi: I am using

        myXmlDataDoc.DataSet.ReadXml(xml_file_name, XmlReadMode.InferSchema);

    to populate the tables within a dataset created by reading in an xml schema:

        myStreamReader = new StreamReader(xsd_file_name);
        myXmlDataDoc.DataSet.ReadXmlSchema(myStreamReader);

    The problem I am facing is with the repeating "parameters" element: ReadXml puts it in a separate table, and everything in that table is 0s. Here is the printout of the DataTables:

        TableName: test_data
        100 2 1

        TableName: parameters
        1 0
        2 0
        3 0
        4 0
        5 0
        6 0
        7 0
        8 0
        9 0
        10 0
        11 0
        12 0

    In my xsd file, I represent an array using this:

        <xs:element name="test_data">
          <xs:complexType>
            <xs:complexContent>
              <xs:extension base="test_base">
                <xs:sequence>
                  <xs:element name="a" type="xs:unsignedShort"/>
                  <xs:element name="b" type="xs:unsignedShort"/>
                  <xs:element name="c" type="xs:unsignedInt"/>
                  <xs:element name="parameters" minOccurs="0" maxOccurs="12" type="xs:unsignedInt"/>
                </xs:sequence>
              </xs:extension>
            </xs:complexContent>
          </xs:complexType>
        </xs:element>

    And here is my xml file:

        <test_data>
          <a>100</a>
          <b>2</b>
          <c>1</c>
          <parameters>1</parameters>
          <parameters>2</parameters>
          <parameters>3</parameters>
          <parameters>4</parameters>
          <parameters>5</parameters>
          <parameters>6</parameters>
          <parameters>7</parameters>
          <parameters>8</parameters>
          <parameters>9</parameters>
          <parameters>10</parameters>
          <parameters>11</parameters>
          <parameters>12</parameters>
        </test_data>

    I am expecting that after the ReadXml call the "parameters" values are part of the test_data table:

        TableName: test_data
        100 2 1 1 2 3 4 5 6 7 8 9 10 11 12

    Does anyone know what's wrong? Thanks

    Read the article

  • Using twig variable to dynamically call an imported macro sub-function

    - by Chausser
    I am attempting to use a variable to call a specific macro name. I have a macros file that is being imported:

        {% import 'form-elements.html.twig' as forms %}

    In that file there are all the form element macros: text, textarea, select, radio, etc. I have an array variable that gets passed in that has elements in it:

        $elements = array(
            array(
                'type'  => 'text',
                'value' => 'some value',
                'atts'  => null,
            ),
            array(
                'type'  => 'text',
                'value' => 'some other value',
                'atts'  => null,
            ),
        );

        {{ elements }}

    What I'm trying to do is generate those elements from the macros. They work just fine when called by name:

        {{ forms.text(element.0.name, element.0.value, element.0.atts) }}

    However, what I want to do is something like this:

        {% for element in elements %}
            {{ forms[element.type](element.name, element.value, element.atts) }}
        {% endfor %}

    I have tried the following, all resulting in the same error:

        {{ forms["'"..element.type.."'"](element.name, element.value, element.atts) }}
        {{ forms.(element.type)(element.name, element.value, element.atts) }}
        {{ forms.{element.type}(element.name, element.value, element.atts) }}

    This unfortunately throws the following error:

        Fatal error: Uncaught exception 'LogicException' with message
        'Attribute "value" does not exist for Node "Twig_Node_Expression_GetAttr".'
        in Twig\Environment.php on line 541

    Any help or advice on a solution or a better schema to use would be very helpful.

    Read the article

  • HELP ME my dataset xsd content HAS GONE

    - by Mustafa Magdy
    I'm working on an ERP project using Visual Studio 2008 SP1. I have a typed dataset; it contained a lot of DataTables and TableAdapters, and the .Designer.cs file was 8 MB. Suddenly, when I tried to open it in the Visual Studio designer, the following is all that appeared:

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema id="erpDataSet" targetNamespace="http://tempuri.org/erpDataSet1.xsd"
                   xmlns:mstns="http://tempuri.org/erpDataSet1.xsd" xmlns="http://tempuri.org/erpDataSet1.xsd"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
                   xmlns:msprop="urn:schemas-microsoft-com:xml-msprop"
                   attributeFormDefault="qualified" elementFormDefault="qualified">
          <xs:annotation>
            <xs:appinfo source="urn:schemas-microsoft-com:xml-msdatasource">
              <DataSource DefaultConnectionIndex="0" FunctionsComponentName="QueriesTableAdapter"
                          Modifier="AutoLayout, AnsiClass, Class, Public" SchemaSerializationMode="IncludeSchema"
                          xmlns="urn:schemas-microsoft-com:xml-msdatasource">
                <Connections>
                  <Connection AppSettingsObjectName="Settings" AppSettingsPropertyName="erpConnectionString"
                              IsAppSettingsProperty="true" Modifier="Assembly" Name="erpConnectionString (Settings)"
                              ParameterPrefix="@" PropertyReference="ApplicationSettings.Sbic.Pro

    My XSD file content has gone. :( I don't understand why, and how can I recover it? I did make one change, but I don't know whether it is the reason: my connection string was in the settings (app.config); I removed it and added it to the resources of another project that is referenced by the main project. What can I do? Please help me.

    Read the article

  • How to use Nhibernate Validator + NHib component + ddl

    - by mynkow
    I just configured my NHibernate Validator, and NHibernate creates the DB schema. When I set <length max="20"/> on some property of a class, the length appears in the database column. I am doing this in the NHibernate Validator xml file. The problem is that I have components and cannot figure out how to achieve this behaviour for them. The component is configured correctly in the Customer.hbm.xml file.

    EDIT: Well, I found that Hibernate Validator users had the same problem two years ago: http://opensource.atlassian.com/projects/hibernate/browse/HV-25. Is this still an issue for NHibernate Validator, or has it been fixed? If it works, please tell me how.

        public class Customer
        {
            public virtual string Name { get; set; }
            public virtual Contact Contacts { get; }
        }

        public class Contact
        {
            public virtual string Address { get; set; }
        }

        <?xml version="1.0" encoding="utf-8" ?>
        <nhv-mapping xmlns="urn:nhibernate-validator-1.0" namespace="MyNamespace" assembly="MyAssembly">
          <class name="Customer">
            <property name="Name">
              <length max="20"/>
            </property>
            <property name="Contacts">
              <notNull/>
              <valid/>
            </property>
          </class>
        </nhv-mapping>

        <?xml version="1.0" encoding="utf-8" ?>
        <nhv-mapping xmlns="urn:nhibernate-validator-1.0" namespace="MyNamespace" assembly="MyAssembly">
          <class name="Contact">
            <property name="Address">
              <length max="50"/>
              <valid/>
            </property>
          </class>
        </nhv-mapping>

    Read the article

  • Visual Studio + Database Edition + CDC = Deploy Fail

    - by Ben
    Hi All, I've got a database using change data capture (CDC) that is created from a Visual Studio database project (GDR2). My problem is that I have a stored procedure that analyzes the CDC information and then returns data. How is that a problem, you ask? Well, the order of operations is as follows:

    1. Pre-deployment script
    2. Tables
    3. Indexes, keys, etc.
    4. Procedures
    5. Post-deployment script

    Inside the post-deployment script is where I enable CDC. Herein lies the problem: the procedure that acts on the CDC tables bombs because they don't exist yet! I've tried to put the call to sys.sp_cdc_enable_table in the script that creates the table, but it doesn't like that:

        Error 102 TSD03070: This statement is not recognized in this context.
        C:...\Schema Objects\Schemas\dbo\Tables\Foo.table.sql 20 1 Foo

    Is there a better/built-in way to enable CDC such that its objects are available when the stored procedures are created? Is there a way to run a script after tables are created but before other objects are created? How about a way to create the procedure, dependencies be damned? Or maybe I'm just doing things that shouldn't be done?!

    Now, I have a workaround:

    1. Comment out the sproc body
    2. Deploy (CDC is created)
    3. Uncomment the sproc
    4. Deploy

    Everything is great until the next time I update a CDC-tracked table; then I need to comment out the 'offending' procedure again. Thanks for reading my question and thanks for your help!

    Read the article

  • Populate an unmapped property of domain object from result of join with Nhibernate

    - by Adam Pope
    I have a situation where I have 3 tables: StockItem, Office and StockItemPrice. The price of each StockItem can differ per Office.

        StockItem (ID, Name)
        Office (ID, Name)
        StockItemPrice (ID, StockItemID, OfficeID, Price)

    I've set up a schema with 2 many-to-one relations to link StockItem and Office. So in my StockItem domain object I have a property:

        IList<StockItemPrice> Prices;

    which gets loaded with the price of the item for each office. That works fine. Now I'm trying to get the price of an item for a single office. I have the following Criteria query:

        NHibernateSession.CreateCriteria(persistentType)
            .Add(Restrictions.Eq("ID", id))
            .CreateAlias("Prices", "StockItemPrice")
            .Add(Restrictions.Eq("StockItemPrice.Office", office))
            .UniqueResult<StockItem>();

    This appears to work fine, as the SQL it generates is what I would expect. However, I don't know whether it populates StockItem.Prices with a single object correctly, because as soon as I reference that property NHibernate performs a lazy load of all the office's prices. Also, even if it does work, it feels really crufty having to access the price using:

        mystockitem.Prices[0].Price

    What I would really like is to have a Price field on the StockItem object and have the price of the item put into that field by NHibernate. I've tried adding

        .CreateCriteria("Price", "StockItemPrice.Price")

    and the same with CreateAlias, but I get the error

        NHibernate.QueryException : could not resolve property: Price of: StockItem

    which makes sense, I guess, as Price isn't a mapped property. How would I adjust the query to make this possible?
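    One way around the unmapped-property limitation (a sketch, untested; it assumes StockItemPrice is mapped with many-to-one properties StockItem and Office, and StockItemPriceView is a made-up DTO with Price and StockItemId setters) is to query the price table with a projection and a result transformer instead of trying to stuff the value into StockItem:

        // Fetch only the single office price, projected onto a flat DTO.
        var priceRow = NHibernateSession.CreateCriteria<StockItemPrice>()
            .CreateAlias("StockItem", "si")
            .Add(Restrictions.Eq("si.ID", id))
            .Add(Restrictions.Eq("Office", office))
            .SetProjection(Projections.ProjectionList()
                .Add(Projections.Property("Price"), "Price")
                .Add(Projections.Property("si.ID"), "StockItemId"))
            .SetResultTransformer(Transformers.AliasToBean<StockItemPriceView>())
            .UniqueResult<StockItemPriceView>();

    The unmapped Price field on StockItem could then be set from priceRow in the calling code.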

    Read the article

  • Database abstraction/adapters for ruby

    - by Stiivi
    What database abstractions/adapters are you using in Ruby? I am mainly interested in data-oriented features, not in object mapping (like ActiveRecord or DataMapper). I am currently using Sequel. Are there any other options? I am mostly interested in:

    - a simple, clean and non-ambiguous API
    - data selection (obviously), filtering and aggregation
    - raw value selection without field mapping: SELECT col1, col2, col3 => [val1, val2, val3], not a hash of { :col1 => val1, ... }
    - an API that takes table schemas ('some_schema.some_table') into account in a consistent (and working) way, plus reflection for this (get the schema from a table)
    - database reflection: get the list of table columns, their database storage types and perhaps the adapter's abstracted types
    - table creation and deletion
    - the ability to work with other tables (insert, update) inside a loop enumerating a selection from another table, without having to fetch all records of the table being enumerated first

    The purpose is to manipulate data whose structure is unknown at the time of writing the code, which is the opposite of object mapping, where the structure, or most of it, is usually well known. I do not need the object-mapping overhead. What are the options, including back-ends for object-mapping libraries?
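    For reference, a sketch of how far Sequel alone already goes on these points (the calls below are Sequel's public API; the connection string and table names are made up):

        require 'sequel'
        DB = Sequel.connect('postgres://user:pass@localhost/mydb')

        # raw value selection: arrays of values, not hashes
        rows = DB[:some_schema__some_table].select(:col1, :col2, :col3).map(&:values)

        # reflection: column names, database storage types, abstracted types
        DB.schema(:some_table).each do |column, info|
          puts "#{column}: #{info[:db_type]} (#{info[:type]})"
        end

        # enumerate one table while writing to another; paged_each avoids
        # loading the whole source table into memory
        DB[:source].paged_each do |row|
          DB[:target].insert(row)
        end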

    Read the article

  • Multi-Column Join in Hibernate/JPA Annotations

    - by bowsie
    I have two entities which I would like to join through multiple columns. These columns are shared via an @Embeddable object common to both entities. In the example below, Foo can have only one Bar, but Bar can have multiple Foos (AnEmbeddableObject is a unique key for Bar). Here is an example:

        @Entity
        @Table(name = "foo")
        public class Foo {
            @Id
            @Column(name = "id")
            @GeneratedValue(generator = "seqGen")
            @SequenceGenerator(name = "seqGen", sequenceName = "FOO_ID_SEQ", allocationSize = 1)
            private Long id;

            @Embedded
            private AnEmbeddableObject anEmbeddableObject;

            @ManyToOne(targetEntity = Bar.class, fetch = FetchType.LAZY)
            @JoinColumns({
                @JoinColumn(name = "column_1", referencedColumnName = "column_1"),
                @JoinColumn(name = "column_2", referencedColumnName = "column_2"),
                @JoinColumn(name = "column_3", referencedColumnName = "column_3"),
                @JoinColumn(name = "column_4", referencedColumnName = "column_4")
            })
            private Bar bar;

            // ... rest of class
        }

    And the Bar class:

        @Entity
        @Table(name = "bar")
        public class Bar {
            @Id
            @Column(name = "id")
            @GeneratedValue(generator = "seqGen")
            @SequenceGenerator(name = "seqGen", sequenceName = "BAR_ID_SEQ", allocationSize = 1)
            private Long id;

            @Embedded
            private AnEmbeddableObject anEmbeddableObject;

            // ... rest of class
        }

    Finally the AnEmbeddableObject class:

        @Embeddable
        public class AnEmbeddableObject {
            @Column(name = "column_1")
            private Long column1;

            @Column(name = "column_2")
            private Long column2;

            @Column(name = "column_3")
            private Long column3;

            @Column(name = "column_4")
            private Long column4;

            // ... rest of class
        }

    Obviously the schema is poorly normalised; it is a restriction that AnEmbeddableObject's fields are repeated in each table. The problem I have is that I receive this error when I try to start up Hibernate:

        org.hibernate.AnnotationException: referencedColumnNames(column_1, column_2, column_3, column_4)
        of Foo.bar referencing Bar not mapped to a single property

    I have tried marking the JoinColumns as not insertable and not updatable, but with no luck. Is there a way to express this with Hibernate/JPA annotations? Thank you.

    Read the article

  • Delete only reference to child object

    - by Al
    I'm running into a bit of a problem where any attempt to delete just the reference to a child also deletes the child record. My schema looks like this:

        Person
        Organisation
        OrganisationContacts : Person
            OrgId
            PersonId
            Role

    When removing an Organisation, I want to delete only the record in OrganisationContacts, but not touch the Person record. My mapping looks like this:

        public OrganisationMap()
        {
            Table("Organsations");
            // ....
            HasMany<OrganisationContact>(x => x.Contacts)
                .Table("OrganisationContacts")
                .KeyColumn("OrgId")
                .Not.Inverse()
                .Cascade.AllDeleteOrphan();
        }

        public class OrganisationContactMap : SubclassMap<OrganisationContact>
        {
            public OrganisationContactMap()
            {
                Table("OrganisationContacts");
                Map(x => x.Role, "PersonRole");
                Map(x => x.IsPrimary);
            }
        }

    At the moment, removing a contact from the IList either doesn't reflect in the database at all, or issues two DELETE statements (DELETE FROM OrganisationContact and DELETE FROM Person), or tries to set PersonId to null in the OrganisationContacts table, all of which are undesirable. Any help would be greatly appreciated.

    Read the article

  • How to solve the "Digg" problem in MongoDB

    - by user193116
    A while back, a Digg developer posted this blog entry, http://about.digg.com/blog/looking-future-cassandra, where he described one of the issues that were not optimally solved in MySQL. This was cited as one of the reasons for their move to Cassandra. I have been playing with MongoDB and I would like to understand how to design MongoDB collections for this problem. From the article, the schema for this information in MySQL is:

        CREATE TABLE Diggs (
          id      INT(11),
          itemid  INT(11),
          userid  INT(11),
          digdate DATETIME,
          PRIMARY KEY (id),
          KEY user (userid),
          KEY item (itemid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        CREATE TABLE Friends (
          id           INT(10) AUTO_INCREMENT,
          userid       INT(10),
          username     VARCHAR(15),
          friendid     INT(10),
          friendname   VARCHAR(15),
          mutual       TINYINT(1),
          date_created DATETIME,
          PRIMARY KEY (id),
          UNIQUE KEY Friend_unique (userid,friendid),
          KEY Friend_friend (friendid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    This problem is ubiquitous in social networking implementations: people befriend a lot of people, and those friends in turn digg a lot of things. Quickly showing a user what his/her friends are up to is critical. I understand that several blogs have since provided a pure RDBMS solution with indexes for this issue; however, I am curious how this could be solved in MongoDB.
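    A starting point (a sketch in the mongo shell; the document layout is an assumption, not anything Digg published) is to mirror the two tables as collections, index diggs on (userid, digdate), and fan the friends query out with $in:

        // one document per digg, one per friendship edge
        db.diggs.ensureIndex({ userid: 1, digdate: -1 });
        db.friends.ensureIndex({ userid: 1 });

        // "what have my friends dugg lately?"
        var friendIds = db.friends.find({ userid: 123 }, { friendid: 1 })
                                  .map(function (f) { return f.friendid; });
        db.diggs.find({ userid: { $in: friendIds } })
                .sort({ digdate: -1 })
                .limit(20);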

    Read the article

  • Is XSLT worth investing time in and are there any actual alternatives?

    - by Keeno
    I realize there have been a few other questions on this topic, and people say "use your language of choice to manipulate the XML", etc., but none quite fit my question exactly.

    First, the scope of the project: we want to develop platform-independent e-learning. Currently it's a bunch of HTML pages, but as they grow and develop they become hard to maintain.

    The idea: generate an XML file plus schema, then produce some XSLT files that process the XML into the e-learning modules; XML to HTML via XSLT.

    Why:

    - We would like the flexibility to easily reformat the content (I realize CSS is a viable alternative here).
    - If we decide to alter the pages' layout or functionality in any way, I'm guessing altering the "shared" XSLT files would be easier than updating the HTML files. So far we have about 30 modules, with 10-30 pages each.
    - Depending on some "parameters" we could output drastically different page layouts/structures, above and beyond what CSS can do.

    All this has to be platform independent and able to run "offline", i.e. without a server powering the HTML.

    Negatives I've read so far about XSLT:

    - Overhead? (Not exactly sure why; is it the compute power needed to convert to HTML?)
    - Difficult to learn
    - Better alternatives exist

    What I would like to know exactly is: are there actually any viable alternatives for this "offline" scenario? Am I going about it in the correct manner? Do you have any advice or alternatives? Thanks!

    Read the article

  • Restoring dev db from production: Running a set of SQL scripts based on a list stored in a table?

    - by mattley
    I need to restore a backup from a production database and then automatically reapply SQL scripts (e.g. ALTER TABLE, INSERT, etc.) to bring that db schema back to what was under development. There will be lots of scripts, from a handful of different developers, and they won't all be in the same directory. My current plan is to list the scripts, with their full filesystem paths, in a table in a pseudo-system database, then create a stored procedure in this database which first runs RESTORE DATABASE and then runs a cursor over the list of scripts, building a SQLCMD command string for each script and executing each string via xp_cmdshell. The cursor/sqlstring/xp_cmdshell/sqlcmd sequence feels clumsy to me; also, it requires turning on xp_cmdshell. I can't be the only one who has done something like this. Is there a cleaner way to run a set of scripts that are scattered around the filesystem on the server? Especially, a way that doesn't require xp_cmdshell?
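    One xp_cmdshell-free sketch (untested; server, database and table names are placeholders) is to drive the same loop from PowerShell with Invoke-Sqlcmd, which can both run the restore and execute each script file directly:

        # restore, then replay every registered script against the dev copy
        Invoke-Sqlcmd -ServerInstance $server `
            -Query "RESTORE DATABASE DevDb FROM DISK = N'$backupFile' WITH REPLACE"

        $scripts = Invoke-Sqlcmd -ServerInstance $server -Database AdminDb `
            -Query "SELECT ScriptPath FROM dbo.MigrationScripts ORDER BY Id"

        foreach ($s in $scripts) {
            Invoke-Sqlcmd -ServerInstance $server -Database DevDb -InputFile $s.ScriptPath
        }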

    Read the article

  • How can I build a generic dataset-handling Perl library?

    - by Pep.
    Hello, I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, one which can most certainly be used on any kind of dataset containing a mixture of categorical (A, B, C, ...), continuous (1.2, 3, 881, ...) and identifier (XXX1, XXX2, ...) variables. The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should go, and the structure of the data. By structure of the data I mean which variable is in which place and its name/type. And this is where I need some enlightenment: I am baffled as to how to do this in a clean way. Obviously, having people create a simple schema file, be it XML or some other format, would be the cleanest, but maybe not all people enjoy doing something like this. The solutions I can think of are:

    1. Create a configuration file in XML or similar, with a prespecified format.
    2. Pass the information during initialization of the module.
    3. Use the first row of the data as headers and try to guess the types (ouch).

    Surely there must be a "canonical" way of doing this that is also usable and efficient. Thanks, p.

    Read the article

  • Ignoring element order in XML validation against an XSD

    - by segolas
    Hi, I am processing an email and saving some headers inside an xml document. I also need to validate the document against an xml schema. As the subject suggests, I need to validate ignoring the order of the elements, but as far as I have read this seems to be impossible. Am I correct? If I put the headers in an <xsd:sequence>, the order obviously matters. If I use <xsd:all>, the order is ignored, but then each element can occur at most once, which rules out my repeated recipients. My xml is something like this:

        <headers>
          <subject>bla bla bla</subject>
          <recipient>rcp01@domain.com</recipient>
          <recipient>rcp02@domain.com</recipient>
          <recipient>rcp03@domain.com</recipient>
        </headers>

    but I think the final document should be valid even if the subject and recipient elements are swapped. Is there really nothing to be done?

    Read the article

  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008, and I'm getting the same results in both my dev environment (Win7 x64 + SQL x64 Developer) and the production environment (Server 2008 x64 + SQL Std x64). The symptom is that initial data loading screams along at between 50K and 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.).

    The problem has the following qualities and resiliences:

    - The problem persists no matter what the target table is.
    - RAM usage is lowish (45%); there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
    - Perfmon shows buffers are not spooling, disk response times are normal, and disk availability is high.
    - CPU usage is low (hovers around 25%, shared between sqlserver.exe and DtsDebugHost.exe).
    - Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 KB/s).
    - The OLE DB destination and the SQL Server destination both exhibit this problem.

    To sum it up, I'd expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers/suggestions.

    Read the article

  • MSBuild: automate collecting of db migration scripts?

    - by P Dub
    Summary of the environment:

    - ASP.NET web application (source stored in svn)
    - SQL Server database (the database schema, i.e. tables/sprocs, is stored in svn)
    - the db version is synced with the web application assembly version (stored in a table 'CurrentVersion')
    - a CI Hudson server that checks out the web app from the repo and runs a custom MSBuild file to publish/package the app

    My MSBuild script updates the assembly version of the web app (Major.Minor.Revision.Build) on each build: the Revision is set to the currently checked-out svn revision and the Build to the Hudson build number (incremented on each automated build). This way I can match the app to a specific trunk revision and also get other build stats from the Hudson build number.

    I'd like to automate collecting the migration scripts (updated sprocs, etc.) to add to the zip package. I figure that by comparing the svn revision of the database that has yet to be deployed to with the revision being deployed, I can find which db files have changed in the trunk since the last deployment to that database/environment. This could easily be achieved by manually calling svn diff -r REVNO:REVNO to list the changed .sql files, which would then have to be added to the package by hand. It would be great if this could be automated.

    I imagine I'll first have to write a custom task to check the version of the database that has yet to be deployed to. After that, I'm quite unsure. Does anyone have a suggestion on how this could be achieved through an MSBuild task, either existing or custom? Finally, I'll have to auto-generate a script, to be added to the package, that updates the database version table so as to be in sync with the application.
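    As a starting sketch (untested; the property names and repo layout are invented), the svn diff --summarize call can at least be wired into MSBuild without a custom task:

        <Target Name="CollectMigrationScripts">
          <!-- one "status  path" line per file changed between the two revisions -->
          <Exec Command="svn diff --summarize -r $(DeployedRevision):$(BuildRevision) $(RepoUrl)/db &gt; ChangedDbFiles.txt" />
          <ReadLinesFromFile File="ChangedDbFiles.txt">
            <Output TaskParameter="Lines" ItemName="ChangedDbLine" />
          </ReadLinesFromFile>
          <!-- a small custom or inline task would strip the status column,
               keep only *.sql paths, and hand them to the packaging step -->
          <Message Text="@(ChangedDbLine)" />
        </Target>

    Reading the deployed revision out of the CurrentVersion table could then feed $(DeployedRevision), e.g. via the sqlcmd command line in another Exec task.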

    Read the article
