Search Results

Search found 3444 results on 138 pages for 'mapping'.


  • Daylight saving time and Timezone best practices

    - by Oded
    I am hoping to make this question and the answers to it the definitive guide to dealing with daylight saving time, in particular for dealing with the actual change-overs. If you have anything to add, please do.

    Many systems are dependent on keeping accurate time; the problem is with changes to time due to daylight saving - moving the clock forward or backwards. For instance, one has business rules in an order-taking system that depend on the time of the order - if the clock changes, the rules might not be as clear. How should the time of the order be persisted? There is of course an endless number of scenarios - this one is simply an illustrative one.

    How have you dealt with the daylight saving issue? What assumptions are part of your solution? (Looking for context here.)

    As important, if not more so: what did you try that did not work? Why did it not work?

    I would be interested in programming, OS, data persistence and other pertinent aspects of the issue. General answers are great, but I would also like to see details, especially if they are only available on one platform.

    Summary of answers and other data (please add yours):

    Do:
    - Always persist time according to a unified standard that is not affected by daylight saving. GMT and UTC have been mentioned by different people.
    - Include the local time offset (including the DST offset) in stored timestamps.
    - Remember that DST offsets are not always an integer number of hours (e.g. Indian Standard Time is UTC+05:30).
    - If using Java, use JodaTime - http://joda-time.sourceforge.net/
    - Create a table TZOffsets with three columns: RegionClassId, StartDateTime, and OffsetMinutes (int, in minutes). See answer.
    - Check if your DBMS needs to be shut down during the transition.
    - Business rules should always work on civil time.
    - Internally, keep timestamps in something like civil-time-seconds-from-epoch. See answer.
    - Only convert to local times at the last possible moment.

    Don't:
    - Do not use JavaScript date and time calculations in web apps unless you ABSOLUTELY have to.

    Testing:
    - When testing, make sure you test countries in the Western and Eastern hemispheres, with both DST in progress and not, and a country that does not use DST (6 in total).

    Reference:
    - Olson database, aka Tz_database - ftp://elsie.nci.nih.gov/pub
    - Sources for Time Zone and DST - http://www.twinsun.com/tz/tz-link.htm
    - ISO format (ISO 8601) - http://en.wikipedia.org/wiki/ISO_8601
    - Mapping between the Olson database and Windows TimeZone Ids, from the Unicode Consortium - http://unicode.org/repos/cldr-tmp/trunk/diff/supplemental/windows_tzid.html
    - TimeZone page on Wikipedia - http://en.wikipedia.org/wiki/Tz_database
    - StackOverflow questions tagged dst - http://stackoverflow.com/questions/tagged/dst
    - StackOverflow questions tagged timezone - http://stackoverflow.com/questions/tagged/timezone

    Other:
    - Lobby your representative to end the abomination that is DST. We can always hope...
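    A minimal sketch of the "persist a DST-proof standard, keep the local offset, convert at the last moment" advice above, using Python's standard-library zoneinfo module (my illustration, not from the question; the function and field names are invented):

        from datetime import datetime, timezone
        from zoneinfo import ZoneInfo  # Python 3.9+, backed by the Olson/tz database

        def capture_order_time(zone_name: str) -> dict:
            """Record an order timestamp as UTC plus the local UTC offset in minutes."""
            local_now = datetime.now(ZoneInfo(zone_name))
            return {
                "utc_time": local_now.astimezone(timezone.utc),          # unambiguous across DST changes
                "offset_minutes": local_now.utcoffset().total_seconds() / 60,
                "zone": zone_name,                                       # keep the region, not just the offset
            }

        def display_time(record: dict) -> str:
            """Convert back to local civil time only at the last possible moment."""
            return record["utc_time"].astimezone(ZoneInfo(record["zone"])).isoformat()

        order = capture_order_time("Asia/Kolkata")  # note the non-integer UTC+05:30 offset
        print(order["offset_minutes"])              # 330.0
        print(display_time(order))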


  • Problem with eager load polymorphic associations using Linq and NHibernate

    - by Voislav
    Is it possible to eagerly load a polymorphic association using Linq and NH? For example: Company is the base class, Department is inherited from Company, and Company has an association Employees to the User (one-to-many) and also an association to the Country (many-to-one). Here is the mapping part related to the inherited class (without the User and Country classes):

        <class name="Company" discriminator-value="Company">
          <id name="Id" type="int" unsaved-value="0" access="nosetter.camelcase-underscore">
            <generator class="native"></generator>
          </id>
          <discriminator column="OrganizationUnit" type="string" length="10" not-null="true"/>
          <property name="Name" type="string" length="50" not-null="true"/>
          <many-to-one name="Country" class="Country" column="CountryId" not-null="false" foreign-key="FK_Company_CountryId" access="field.camelcase-underscore" />
          <set name="Departments" inverse="true" lazy="true" access="field.camelcase-underscore">
            <key column="DepartmentParentId" not-null="false" foreign-key="FK_Department_DepartmentParentId"></key>
            <one-to-many class="Department"></one-to-many>
          </set>
          <set name="Employees" inverse="true" lazy="true" access="field.camelcase-underscore">
            <key column="CompanyId" not-null="false" foreign-key="FK_User_CompanyId"></key>
            <one-to-many class="User"></one-to-many>
          </set>
          <subclass name="Department" extends="Company" discriminator-value="Department">
            <many-to-one name="DepartmentParent" class="Company" column="DepartmentParentId" not-null="false" foreign-key="FK_Department_DepartmentParentId" access="field.camelcase-underscore" />
          </subclass>
        </class>

    I do not have a problem eagerly loading any of the associations on the Company:

        Session.Query<Company>().Where(c => c.Name == "Main Company").Fetch(c => c.Country).Single();
        Session.Query<Company>().Where(c => c.Name == "Main Company").FetchMany(c => c.Employees).Single();

    Also, I can eagerly load a non-polymorphic association on the Department:

        Session.Query<Department>().Where(d => d.Name == "Department 1").Fetch(d => d.DepartmentParent).Single();

    But I get a NullReferenceException when I try to eagerly load any of the polymorphic associations (from the Department):

        Assert.Throws<NullReferenceException>(() => Session.Query<Department>().Where(d => d.Name == "Department 1").Fetch(d => d.Country).Single());
        Assert.Throws<NullReferenceException>(() => Session.Query<Department>().Where(d => d.Name == "Department 1").FetchMany(d => d.Employees).Single());

    Am I doing something wrong, or is this not supported yet?


  • Spring MVC return ajax response using Jackson

    - by anshumn
    I have a scenario where I am filling a dropdown box in a JSP through an AJAX response from the server. In the controller, I am returning a Collection of Product objects and have annotated the return type with @ResponseBody. Controller:

        @RequestMapping(value="/getServicesForMarket", method = RequestMethod.GET)
        public @ResponseBody Collection<Product> getServices(@RequestParam(value="marketId", required=true) int marketId) {
            Collection<Product> products = marketService.getProducts(marketId);
            return products;
        }

    And Product is:

        @Entity
        @Table(name = "PRODUCT")
        public class Product implements Serializable {
            private static final long serialVersionUID = 1L;
            private int id;
            private Market market;
            private Service service;
            private int price;

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            public int getId() { return id; }
            public void setId(int id) { this.id = id; }

            @ManyToOne(fetch=FetchType.LAZY)
            @JoinColumn(name = "MARKET_ID")
            public Market getMarket() { return market; }
            public void setMarket(Market market) { this.market = market; }

            @ManyToOne(fetch=FetchType.LAZY)
            @JoinColumn(name = "SERVICE_ID")
            public Service getService() { return service; }
            public void setService(Service service) { this.service = service; }

            @Column(name = "PRICE")
            public int getPrice() { return price; }
            public void setPrice(int price) { this.price = price; }
        }

    Service is:

        @Entity
        @Table(name="SERVICE")
        public class Service implements Serializable {
            private static final long serialVersionUID = 1L;
            private int id;
            private String name;
            private String description;

            @Id
            @GeneratedValue
            @Column(name="ID")
            public int getId() { return id; }
            public void setId(int id) { this.id = id; }

            @Column(name="NAME")
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }

            @Column(name="DESCRIPTION")
            public String getDescription() { return description; }
            public void setDescription(String description) { this.description = description; }
        }

    In the JSP, I need to get the data from the service field of Product as well, so in my jQuery callback function I have written product.service.description to get the data. It seems that by default Jackson is not mapping the associated service object (or any other custom object). I am not getting any exception either; in the JSP, I simply do not get the data. It works fine when I return a Collection of some object which does not contain any other custom objects as its fields. Am I missing any settings for this to work? Thanks!


  • Linq-to-sql Compiled Query returning object NOT belonging to submitted DataContext

    - by Vladimir Kojic
    Compiled query: public static class Machines { public static readonly Func<OperationalDataContext, short, Machine> QueryMachineById = CompiledQuery.Compile((OperationalDataContext db, short machineID) => db.Machines.Where(m => m.MachineID == machineID).SingleOrDefault() ); public static Machine GetMachineById(IUnitOfWork unitOfWork, short id) { Machine machine; // Old code (working) //var machineRepository = unitOfWork.GetRepository<Machine>(); //machine = machineRepository.Find(m => m.MachineID == id).SingleOrDefault(); // New code (making problems) machine = QueryMachineById(unitOfWork.DataContext, id); return machine; } It looks like compiled query is caching Machine object and returning the same object even if query is called from new DataContext (I’m disposing DataContext in the service but I’m getting Machine from previous DataContext). I use POCOs and XML mapping. Revised: It looks like compiled query is returning result from new data context and it is not using the one that I passed in compiled-query. Therefore I can not reuse returned object and link it to another object obtained from datacontext thru non compiled queries. [TestMethod] public void GetMachinesTest() { // Test Preparation (not important) using (var unitOfWork = IoC.Get<IUnitOfWork>()) { var machineRepository = unitOfWork.GetRepository<Machine>(); // GET ALL List<Machine> list = machineRepository.FindAll().ToList<Machine>(); VerifyIntegratedMachine(list[2], 3, "Machine 3", "333333", "G300PET", "MachineIconC.xaml", false, true, LicenseType.Licensed, "10.0.97.3", "10.0.97.3", 0); var machine = Machines.GetMachineById(unitOfWork, 3); Assert.AreSame(list[2], machine); // PASS !!!! } using (var unitOfWork = IoC.Get<IUnitOfWork>()) { var machineRepository = unitOfWork.GetRepository<Machine>(); // GET ALL List<Machine> list = machineRepository.FindAll().ToList<Machine>(); VerifyIntegratedMachine(list[2], 3, "Machine 3", "333333", "G300PET", "MachineIconC.xaml", false, true, LicenseType.Licensed, "10.0.97.3", "10.0.97.3", 0); var machine = Machines.GetMachineById(unitOfWork, 3); Assert.AreSame(list[2], machine); // FAIL !!!! } } If I run other (complex) unit tests I'm getting as expected: An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext.


  • How to find minimum of nonlinear, multivariate function using Newton's method (code not linear algebra)

    - by Norman Ramsey
    I'm trying to do some parameter estimation and want to choose parameter estimates that minimize the square error in a predicted equation over about 30 variables. If the equation were linear, I would just compute the 30 partial derivatives, set them all to zero, and use a linear-equation solver. But unfortunately the equation is nonlinear and so are its derivatives. If the equation were over a single variable, I would just use Newton's method (also known as Newton-Raphson). The Web is rich in examples and code to implement Newton's method for functions of a single variable. Given that I have about 30 variables, how can I program a numeric solution to this problem using Newton's method? I have the equation in closed form and can compute the first and second derivatives, but I don't know quite how to proceed from there. I have found a large number of treatments on the web, but they quickly get into heavy matrix notation. I've found something moderately helpful on Wikipedia, but I'm having trouble translating it into code. Where I'm worried about breaking down is in the matrix algebra and matrix inversions. I can invert a matrix with a linear-equation solver but I'm worried about getting the right rows and columns, avoiding transposition errors, and so on. To be quite concrete: I want to work with tables mapping variables to their values. I can write a function of such a table that returns the square error given such a table as argument. I can also create functions that return a partial derivative with respect to any given variable. I have a reasonable starting estimate for the values in the table, so I'm not worried about convergence. I'm not sure how to write the loop that uses an estimate (table of value for each variable), the function, and a table of partial-derivative functions to produce a new estimate. That last is what I'd like help with. Any direct help or pointers to good sources will be warmly appreciated. Edit: Since I have the first and second derivatives in closed form, I would like to take advantage of them and avoid more slowly converging methods like simplex searches.
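    A rough sketch of the multivariate Newton loop being asked about, written in Python with NumPy (my illustration, not code from the post): keep the variables in a fixed order so the table of values maps to a vector, build the gradient vector and Hessian matrix from the closed-form derivatives, and solve a linear system each iteration instead of explicitly inverting the Hessian.

        import numpy as np

        def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
            """grad(x) -> gradient vector, hess(x) -> Hessian matrix, x0 -> starting estimate."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                g = grad(x)
                if np.linalg.norm(g) < tol:
                    break
                # Newton step: solve H * step = -g (avoids forming the inverse of H).
                step = np.linalg.solve(hess(x), -g)
                x = x + step
            return x

        # Example: minimize f(x, y) = (x - 2)**2 + 10 * (y + 1)**2
        grad = lambda v: np.array([2 * (v[0] - 2), 20 * (v[1] + 1)])
        hess = lambda v: np.array([[2.0, 0.0], [0.0, 20.0]])
        print(newton_minimize(grad, hess, [0.0, 0.0]))  # ~ [2, -1]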


  • Maven nexus with jetty 7 and apache2 reverse proxy

    - by user613154
    Hello stackoverflow! Here is my problem: I am trying to run a Maven Nexus behind an Apache reverse proxy. As I have multiple wars in my Jetty, I want the Nexus to run here: http://localhost:8080/nexus. I made a Jetty context file as follows ({jetty.home}/contexts/nexus.xml):

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
        <Configure class="org.eclipse.jetty.webapp.WebAppContext">
          <Set name="contextPath">/nexus</Set>
          <Set name="war"><SystemProperty name="jetty.home" default="."/>/webapps/nexus.war</Set>
        </Configure>

    My Jetty connector in jetty.xml is as follows:

        <Call name="addConnector">
          <Arg>
            <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
              <Set name="host"><Property name="jetty.host" /></Set>
              <Set name="port"><Property name="jetty.port" default="8080"/></Set>
              <Set name="maxIdleTime">300000</Set>
              <Set name="Acceptors">2</Set>
              <Set name="forwarded">true</Set>
              <Set name="statsOn">false</Set>
              <Set name="confidentialPort">8443</Set>
              <Set name="lowResourcesConnections">20000</Set>
              <Set name="lowResourcesMaxIdleTime">5000</Set>
            </New>
          </Arg>
        </Call>

    I want http://maven.foo.com/ as an end point for the Nexus, so I made this apache2 configuration file:

        ProxyRequests Off
        ProxyVia Off
        ProxyPreserveHost On
        <Proxy *>
          AddDefaultCharset off
          Order deny,allow
          Allow from all
        </Proxy>
        <VirtualHost *:80>
          ServerName maven.foo.com
          ProxyPass / http://localhost:8080/nexus/
          ProxyPassReverse / http://localhost:8080/nexus/
          ErrorLog ${APACHE_LOG_DIR}/error_nexus.log
        </VirtualHost>

    But I can't manage to make it work. The error message displayed in the browser is "The server has not found anything matching the request URI". I tried to read the docs on the Jetty and Apache web sites, but didn't find information for mapping a subdomain "sub.foo.com" to a context "localhost:8080/sub" ... Any help welcome! Thanks


  • VB6 app not executing as scheduled task unless user is logged on

    - by Tedd Hansen
    Hi. Would greatly appreciate some help on this one! It may be a tricky one. :)

    Problem: I have a VB6 application which is set up as a scheduled task. It starts every time, but when executing CreateObject it fails if the user is not logged on to the computer. I am looking for information on what could cause this. My primary suspicion is that some Windows API call fails.

    Key points:
    - Behaviour confirmed on Windows 2000, 2003, 2008 and Vista.
    - The application executes as user X at the scheduled time, executed by Windows Task Scheduler. It executes every time. The application does start!
    - If user X is logged on via RDP it runs perfectly. (Note that the user doesn't need to be connected, only logged on.)
    - If user X is not logged on to the computer, the application fails.

    Failure point: The application fails when using CreateObject() to instantiate a DCOM object which is also part of the application. The DCOM objects declare .dll references at startup (globally/on top of the .bas file) and run a small startup function. The failure must be during startup, possibly in one of the .dll declarations.

    Thoughts: After some Googling my initial suspicion was directed at MAPI. From what I could see, MAPI requires the user to be logged on. The application has MAPI references, but even with all MAPI references removed it still does not work. What is the difference if a user is logged on? Registry mapping? Environment? explorer.exe is running. Isn't the user logged on when the application executes as that user?

    What info would help? A definitive answer would be truly great. Any information regarding any VB6 feature/Windows API that could act differently depending on whether the user is logged on or not would definitely help. Similar experiences may lead me in the right direction. Tips on debugging this. Thanks! :)


  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has it's own instance of the database / datastore, but the .Net app is single instance. The documents are pretty much read only (i.e. an image archive of tiffs or PDFs) I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB). The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. e.g. One tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date. So far I've used the old method which Sharepoint (used?) to use, and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then, I've got a "mapping" table which stores the customer specific index name, and the database field that will map to. I've avoided the key-value pair model in the DB due to volume of documents. This way, we can support multiple document types in the one table, and get reasonably high performance out of it, and allow for custom document type searches (i.e. user selects a document type, then they're presented with a list of search fields). However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. However, I've just got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it in the document store (e.g. assign unique IDs). Users will not be adding in their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static. So, my questions I guess : Is a NoSQL DB a good fit Is MongoDB the best for Asp.Net (I saw Raven and Velocity, but they're still kinda beta) Can I store a key for each document, and then store the action history in a MSSQL DB with this key? I don't need to do joins, it would be if a person clicks "View History" against a document. How would performance compare between the two (NoSQL DB vs denormalized "document" table) Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
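    For what it's worth, here is a rough sketch (my own illustration, in Python with pymongo rather than .NET, and with made-up field names) of the schema-per-tenant flexibility being weighed above: each tenant's document type simply carries its own index fields, with no int_field_1-style columns or mapping table.

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")  # assumes a local MongoDB
        docs = client["tenant_acme"]["documents"]           # one database per tenant

        # Two document types with different index fields, stored side by side.
        docs.insert_one({"doc_type": "invoice", "customer_id": "C-100",
                         "invoice_number": "INV-42", "invoice_date": "2010-06-01",
                         "file_ref": "tiff/0001.tif"})
        docs.insert_one({"doc_type": "application_form", "member_number": "M-7",
                         "application_number": "APP-9", "member_name": "J. Smith"})

        # Custom document-type search: filter on the type, then on its own fields.
        for doc in docs.find({"doc_type": "invoice", "customer_id": "C-100"}):
            print(doc["invoice_number"])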


  • DataRelation Insert and ForeignKey

    - by Steve
    Guys, I have a winforms application with two DataGridViews displaying a master-detail relationship from my Person and Address tables. The Person table has a PersonID field that is an auto-incrementing primary key. Address has a PersonID field that is the FK. I fill my DataTables with a DataAdapter and set the Person.PersonID column's AutoIncrement=true and AutoIncrementStep=-1. I can insert records into the Person DataTable from the DataGridView. The PersonID column displays unique negative values for PersonID. I update the database by calling DataAdapter.Update(PersonTable) and the negative PersonIDs are converted to positive unique values automatically by SQL Server.

    Here's the rub. The Address DataGridView shows the Address table, which has a DataRelation to Person by PersonID. Inserted Person records have the temporary negative PersonID. I can now insert records into Address via the DataGridView and Address.PersonID is set to the negative value from the DataRelation mapping. I call Adapter.Update(AddressTable) and the negative PersonIDs go into the Address table, breaking the relationship. How do you guys handle primary/foreign keys using DataTables and master-detail DataGridViews? Thanks! Steve

    EDIT: After more googling, I found that the SqlDataAdapter.RowUpdated event gives me what I need. I create a new command to query the last id inserted by using @@IDENTITY. It works pretty well. The DataRelation updates the Address.PersonID field for me, so it is required to update the Person table first and then update the Address table. All the new records insert properly with correct ids in place!

        Adapter = new SqlDataAdapter(cmd);
        Adapter.RowUpdated += (s, e) =>
        {
            if (e.StatementType != StatementType.Insert) return;
            // set the id for the inserted record
            SqlCommand c = e.Command.Connection.CreateCommand();
            c.CommandText = "select @@IDENTITY id";
            e.Row[0] = Convert.ToInt32( c.ExecuteScalar() );
        };
        Adapter.Fill(this);
        SqlCommandBuilder sb = new SqlCommandBuilder(Adapter);
        sb.GetDeleteCommand();
        sb.GetUpdateCommand();
        sb.GetInsertCommand();
        this.Columns[0].AutoIncrement = true;
        this.Columns[0].AutoIncrementSeed = -1;
        this.Columns[0].AutoIncrementStep = -1;


  • EntityManagerFactory error on Websphere

    - by neverland
    I have a very weird problem. I have an application using Hibernate and Spring. I have an entity manager factory defined which uses a JNDI lookup. It looks something like this:

        <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
          <property name="dataSource" ref="dataSource" />
          <property name="persistenceUnitName" value="ConfigAPPPersist" />
          <property name="jpaVendorAdapter">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
              <property name="showSql" value="true" />
              <property name="generateDdl" value="false" />
              <property name="databasePlatform" value="org.hibernate.dialect.Oracle9Dialect" />
            </bean>
          </property>
        </bean>

        <bean id="dataSource" class="org.springframework.jdbc.datasource.WebSphereDataSourceAdapter">
          <property name="targetDataSource">
            <bean class="org.springframework.jndi.JndiObjectFactoryBean">
              <property name="jndiName" value="jdbc/pmp" />
            </bean>
          </property>
        </bean>

    This application runs fine in DEV. But when we move to higher environments, the team that deploys this application does it successfully initially, but after a few restarts of the application the entity manager starts giving this problem:

        Caused by: javax.persistence.PersistenceException: [PersistenceUnit: ConfigAPPPersist] Unable to build EntityManagerFactory
            at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:677)
            at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:132)
            at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:224)
            at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:291)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1368)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1334)
            ... 32 more
        Caused by: org.hibernate.MappingException: property mapping has wrong number of columns: com.***.***.jpa.marketing.entity.MarketBrands.$performasure_j2eeInfo type: object

    Now you would say this is pretty obvious: the entity MarketBrands is incorrect. But it is not - it maps to the table just fine, and the same code works on DEV. Also, the JNDI lookup cannot be incorrect, since the application deploys and works fine initially but throws this up after a restart. This is weird and not very logical, but if someone has faced this or has any idea on what might be causing it, please help. The persistence.xml for the persistence unit has very little in it:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" version="1.0">
          <persistence-unit name="ConfigAPPPersist">
            <!-- commented code -->
          </persistence-unit>
        </persistence>


  • SQLSTATE[HY093]: Invalid parameter number: mixed named and positional parameters

    - by Gremo
    Weird error, and this has been driving me crazy all day. To me it seems like a bug, because there are no positional parameters in my query. Here is the method:

        public function getAll(User $user, DateTime $start = null, DateTime $end = null)
        {
            $params = array('user_id' => $user->getId());

            $rsm = new \Doctrine\ORM\Query\ResultSetMapping(); // Result set mapping
            $rsm->addScalarResult('subtype', 'subtype');
            $rsm->addScalarResult('count', 'count');

            $sms_sql = "SELECT CONCAT('sms_', IF(is_auto = 0, 'user' , 'auto')) AS subtype, "
                . "SUM(messages_count * (customers_count + recipients_count)) AS count "
                . "FROM outgoing_message AS m INNER JOIN small_text_message AS s ON "
                . "m.id = s.id WHERE status <> 'pending' AND user_id = :user_id";

            $news_sql = "SELECT CONCAT('news_', IF(is_auto = 0, 'user' , 'auto')) AS subtype, "
                . "SUM(customers_count + recipients_count) AS count "
                . "FROM outgoing_message AS m JOIN newsletter AS n ON m.id = n.id "
                . "WHERE status <> 'pending' AND user_id = :user_id";

            if($start) :
                $sms_sql .= " AND sent_at >= :start";
                $news_sql .= " AND sent_at >= :start";
                $params['start'] = $start->format('Y-m-d');
            endif;

            $sms_sql .= ' GROUP BY type, is_auto';
            $news_sql .= ' GROUP BY type, is_auto';

            return $this->_em->createNativeQuery("$sms_sql UNION ALL $news_sql", $rsm)
                ->setParameters($params)->getResult();
        }

    And this throws the exception:

        SQLSTATE[HY093]: Invalid parameter number: mixed named and positional parameters

    The $params array is OK, and so is the generated SQL:

        var_dump($params);
        array (size=2)
          'user_id' => int 1
          'start' => string '2012-01-01' (length=10)

    The strangest thing is that it works with "$sms_sql" only! Any help would make my day, thanks.

    Update: I found another strange thing. If I change only the name (to start_date instead of start):

        if($start) :
            $sms_sql .= " AND sent_at >= :start_date";
            $news_sql .= " AND sent_at >= :start_date";
            $params['start_date'] = $start->format('Y-m-d');
        endif;

    What happens is that Doctrine/PDO says:

        SQLSTATE[42S22]: Column not found: 1054 Unknown column 'sent1rt_date' in 'where clause'

    ... as if the string "1rt" was added in the middle of the column name! Crazy...


  • Storing simulation results in a persistent manner for Python?

    - by Az
    Background: I'm running multiple simulations on a set of data. For each session, I'm allocating projects to students. The difference between each session is that I'm randomising the order of the students such that all the students get a shot at being assigned a project they want. I was writing out some of the allocations in a spreadsheet (i.e. Excel) and it basically looked like this (tiny snapshot; the actual table extends to a few thousand sessions, roughly 100 students):

        |          | Session 1 | Session 2 | Session 3 |
        |----------|-----------|-----------|-----------|
        | Stu1     | Proj_AA   | Proj_AB   | Proj_AB   |
        | Stu2     | Proj_AB   | Proj_AA   | Proj_AC   |
        | Stu3     | Proj_AC   | Proj_AC   | Proj_AA   |

    Now, the code that deals with the allocation currently stores a session in an object. The next time the allocation is run, the object is overwritten. Thus what I'd really like to do is to store all the allocation results. This is important since I later need to derive information from the data, such as which project Stu1 got assigned to the most, or perhaps how popular Proj_AC was (how many times it was assigned / number of sessions).

    Question(s): What methods can I possibly use to store such session information persistently? Basically, each session output needs to add itself to the repository after ending and before beginning the next allocation cycle.

    One solution that was suggested by a friend was mapping these results to a relational database using SQLAlchemy. I kind of like the idea since this gives me an opportunity to delve into databases. The database structure I was recommended was:

        | Session  | Student   | Project   |
        |----------|-----------|-----------|
        | 1        | Stu1      | Proj_AA   |
        | 1        | Stu2      | Proj_AB   |
        | 1        | Stu3      | Proj_AC   |
        | 2        | Stu1      | Proj_AB   |
        | 2        | Stu2      | Proj_AA   |
        | 2        | Stu3      | Proj_AC   |
        | 3        | Stu1      | Proj_AB   |
        | 3        | Stu2      | Proj_AC   |
        | 3        | Stu3      | Proj_AA   |

    Here, it was suggested that I make the Session and Student columns a composite key. That way I can access a specific record for a particular student in a particular session, or I can merely get the entire allocation run for a particular session.

    Questions:
    - Is the idea a good one?
    - How does one implement and query a composite key using SQLAlchemy?
    - What happens to the database if a particular student is not assigned a project (which happens if all the projects that he wants are taken)? In the code, if a student is not assigned a project, instead of a proj_id he simply gets None for that field/object.

    I apologise for asking multiple questions, but since these are closely related, I thought I'd ask them in the same space.
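    As a rough illustration of the composite-key question above (my sketch, not from the post; assumes SQLAlchemy 1.4 or newer, and the table and attribute names are invented), note that the project column is nullable to cover the unassigned-student case:

        from sqlalchemy import Column, Integer, String, create_engine
        from sqlalchemy.orm import declarative_base, Session

        Base = declarative_base()

        class Allocation(Base):
            __tablename__ = "allocation"
            # Two primary_key columns together form a composite primary key.
            session_id = Column(Integer, primary_key=True)
            student = Column(String(20), primary_key=True)
            project = Column(String(20), nullable=True)  # None when no project was assigned

        engine = create_engine("sqlite:///:memory:")
        Base.metadata.create_all(engine)

        with Session(engine) as db:
            db.add_all([
                Allocation(session_id=1, student="Stu1", project="Proj_AA"),
                Allocation(session_id=1, student="Stu2", project="Proj_AB"),
                Allocation(session_id=2, student="Stu1", project=None),
            ])
            db.commit()

            # Look up one record by the full composite key...
            print(db.get(Allocation, {"session_id": 1, "student": "Stu1"}).project)
            # ...or pull the whole allocation run for a session.
            rows = db.query(Allocation).filter_by(session_id=1).all()
            print([(r.student, r.project) for r in rows])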


  • HQL : LEFT OUTER JOIN

    - by Parama
    Hi all, I have two tables and respective classes in Java. The mapping in the HBM.xml is as follows. The query in the HBM.xml is as follows:

        from Reports as rep left join rep.parts as parts

    I am getting the following exception while executing the code:

        May 19, 2010 10:47:04 AM org.hibernate.util.JDBCExceptionReporter logExceptions
        WARNING: SQL Error: 904, SQLState: 42000
        May 19, 2010 10:47:04 AM org.hibernate.util.JDBCExceptionReporter logExceptions
        SEVERE: ORA-00904: "REPORTS0_"."PARTNO": invalid identifier
        org.hibernate.exception.SQLGrammarException: could not execute query
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:67)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
            at org.hibernate.loader.Loader.doList(Loader.java:2223)
            at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2104)
            at org.hibernate.loader.Loader.list(Loader.java:2099)
            at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:378)
            at org.hibernate.hql.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:338)
            at org.hibernate.engine.query.HQLQueryPlan.performList(HQLQueryPlan.java:172)
            at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1121)
            at org.hibernate.impl.QueryImpl.list(QueryImpl.java:79)
            at com.hcl.spring.db.sample.dao.ItemDAOImpl.loadItems(ItemDAOImpl.java:43)
            at com.hcl.spring.db.sample.service.ItemServiceImpl.loadItems(ItemServiceImpl.java:20)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:304)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
            at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
            at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
            at $Proxy3.loadItems(Unknown Source)
            at com.hcl.spring.db.sample.Main.loadItems(Main.java:40)
            at com.hcl.spring.db.sample.Main.main(Main.java:19)
        Caused by: java.sql.SQLException: ORA-00904: "REPORTS0_"."PARTNO": invalid identifier
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
            at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
            at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
            at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
            at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
            at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:799)
            at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1038)
            at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:839)
            at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1133)
            at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3285)
            at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3329)
            at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:186)
            at org.hibernate.loader.Loader.getResultSet(Loader.java:1787)
            at org.hibernate.loader.Loader.doQuery(Loader.java:674)
            at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:236)
            at org.hibernate.loader.Loader.doList(Loader.java:2220)
            ... 22 more

    Do request your help on the same.


  • Any tool(s) for knowing the layout (segments) of running process in Windows?

    - by claws
    I've always been curious about how exactly a process looks in memory. What are the different segments (parts) in it? How exactly are the program (on the disk) and the process (in memory) related? My previous question: http://stackoverflow.com/questions/1966920/more-info-on-memory-layout-of-an-executable-program-process

    In my quest, I finally found an answer. I found this excellent article that cleared up most of my queries: http://www.linuxforums.org/articles/understanding-elf-using-readelf-and-objdump_125.html

    In the above article, the author shows how to get the different segments of the process (Linux) and compares them with the corresponding ELF file. I'm quoting this section here:

        Curious to see the real layout of process segments? We can use the /proc/<pid>/maps file to reveal it. <pid> is the PID of the process we want to observe. Before we move on, we have a small problem here. Our test program runs so fast that it ends before we can even dump the related /proc entry. I use gdb to solve this. You can use another trick such as inserting sleep() before it calls return(). In a console (or a terminal emulator such as xterm) do:

        $ gdb test
        (gdb) b main
        Breakpoint 1 at 0x8048376
        (gdb) r
        Breakpoint 1, 0x08048376 in main ()

        Hold right here, open another console and find out the PID of program "test". If you want the quick way, type:

        $ cat /proc/`pgrep test`/maps

        You will see an output like below (you might get different output):

        [1] 0039d000-003b2000 r-xp 00000000 16:41 1080084 /lib/ld-2.3.3.so
        [2] 003b2000-003b3000 r--p 00014000 16:41 1080084 /lib/ld-2.3.3.so
        [3] 003b3000-003b4000 rw-p 00015000 16:41 1080084 /lib/ld-2.3.3.so
        [4] 003b6000-004cb000 r-xp 00000000 16:41 1080085 /lib/tls/libc-2.3.3.so
        [5] 004cb000-004cd000 r--p 00115000 16:41 1080085 /lib/tls/libc-2.3.3.so
        [6] 004cd000-004cf000 rw-p 00117000 16:41 1080085 /lib/tls/libc-2.3.3.so
        [7] 004cf000-004d1000 rw-p 004cf000 00:00 0
        [8] 08048000-08049000 r-xp 00000000 16:06 66970 /tmp/test
        [9] 08049000-0804a000 rw-p 00000000 16:06 66970 /tmp/test
        [10] b7fec000-b7fed000 rw-p b7fec000 00:00 0
        [11] bffeb000-c0000000 rw-p bffeb000 00:00 0
        [12] ffffe000-fffff000 ---p 00000000 00:00 0

        Note: I added numbers on each line as a reference. Back to gdb, type:

        (gdb) q

    So, in total, we see 12 segments (also known as Virtual Memory Areas--VMAs). But I want to know about the Windows process & PE file format. Any tool(s) for getting the layout (segments) of a running process in Windows? Any other good resources for learning more on this subject?

    EDIT: Are there any good articles which show the mapping between PE file sections & VA segments?
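    As one programmatic option for the Windows side of this question (my suggestion, not from the original post), the cross-platform psutil package can enumerate the memory regions and mapped files of a running process; a rough sketch:

        import psutil

        # Inspect the current process; pass a PID to psutil.Process(pid) for another one.
        proc = psutil.Process()

        # grouped=False lists each mapped region separately. The available fields vary by
        # platform: on Windows you mainly get the mapped file path and resident size,
        # rather than the ELF-style address ranges and permissions shown above.
        for region in proc.memory_maps(grouped=False):
            print(region.path, region.rss)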


  • How to properly test Hibernate length restriction?

    - by Cesar
    I have a POJO mapped with Hibernate for persistence. In my mapping I specify the following: <class name="ExpertiseArea"> <id name="id" type="string"> <generator class="assigned" /> </id> <version name="version" column="VERSION" unsaved-value="null" /> <property name="name" type="string" unique="true" not-null="true" length="100" /> ... </class> And I want to test that if I set a name longer than 100 characters, the change won't be persisted. I have a DAO where I save the entity with the following code: public T makePersistent(T entity){ transaction = getSession().beginTransaction(); transaction.begin(); try{ getSession().saveOrUpdate(entity); transaction.commit(); }catch(HibernateException e){ logger.debug(e.getMessage()); transaction.rollback(); } return entity; } Actually the code above is from a GenericDAO which all my DAOs inherit from. Then I created the following test: public void testNameLengthMustBe100orLess(){ ExpertiseArea ea = new ExpertiseArea( "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890"); assertTrue("Name should be 100 characters long", ea.getName().length() == 100); ead.makePersistent(ea); List<ExpertiseArea> result = ead.findAll(); assertEquals("Size must be 1", result.size(),1); ea.setName(ea.getName()+"1234567890"); ead.makePersistent(ea); ExpertiseArea retrieved = ead.findById(ea.getId(), false); assertTrue("Both objects should be equal", retrieved.equals(ea)); assertTrue("Name should be 100 characters long", (retrieved.getName().length() == 100)); } The object is persisted ok. Then I set a name longer than 100 characters and try to save the changes, which fails: 14:12:14,608 INFO StringType:162 - could not bind value '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890' to parameter: 2; data exception: string data, right truncation 14:12:14,611 WARN JDBCExceptionReporter:100 - SQL Error: -3401, SQLState: 22001 14:12:14,611 ERROR JDBCExceptionReporter:101 - data exception: string data, right truncation 14:12:14,614 ERROR AbstractFlushingEventListener:324 - Could not synchronize database state with session org.hibernate.exception.DataException: could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF] //Stack trace 14:12:14,615 DEBUG GenericDAO:87 - could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF] 14:12:14,616 DEBUG JDBCTransaction:186 - rollback 14:12:14,616 DEBUG JDBCTransaction:197 - rolled back JDBC Connection That's expected behavior. However when I retrieve the persisted object to check if its name is still 100 characters long, the test fails. The way I see it, the retrieved object should have a name that is 100 characters long, given that the attempted update failed. The last assertion fails because the name is 110 characters long now, as if the ea instance was indeed updated. What am I doing wrong here?


  • What are the benefits of using ORM over XML Serialization/Deserialization?

    - by Tequila Jinx
    I've been reading about NHibernate and Microsoft's Entity Framework to perform Object Relational Mapping against my data access layer. I'm interested in the benefits of having an established framework to perform ORM, but I'm curious as to the performance costs of using it against standard XML Serialization and Deserialization. Right now, I develop stored procedures in Oracle and SQL Server that use XML Types for either input or output parameters and return or shred XML depending on need. I use a custom database command object that uses generics to deserialize the XML results into a specified serializable class. By using a combination of generics, xml (de)serialization and Microsoft's DAAB, I've got a process that's fairly simple to develop against regardless of the data source. Moreover, since I exclusively use Stored Procedures to perform database operations, I'm mostly protected from changes in the data structure. Here's an over-simplified example of what I've been doing. static void main() { testXmlClass test = new test(1); test.Name = "Foo"; test.Save(); } // Example Serializable Class ------------------------------------------------ [XmlRootAttribute("test")] class testXmlClass() { [XmlElement(Name="id")] public int ID {set; get;} [XmlElement(Name="name")] public string Name {set; get;} //create an instance of the class loaded with data. public testXmlClass(int id) { GenericDBProvider db = new GenericDBProvider(); this = db.ExecuteSerializable("myGetByIDProcedure"); } //save the class to the database... public Save() { GenericDBProvider db = new GenericDBProvider(); db.AddInParameter("myInputParameter", DbType.XML, this); db.ExecuteSerializableNonQuery("mySaveProcedure"); } } // Database Handler ---------------------------------------------------------- class GenericDBProvider { public T ExecuteSerializable<T>(string commandText) where T : class { XmlSerializer xml = new XmlSerializer(typeof(T)); // connection and command code is assumed for the purposes of this example. // the final results basically just come down to... return xml.Deserialize(commandResults) as T; } public void ExecuteSerializableNonQuery(string commandText) { // once again, connection and command code is assumed... // basically, just execute the command along with the specified // parameters which have been serialized. } public void AddInParameter(string name, DbType type, object value) { StringWriter w = new StringWriter(); XmlSerializer x = new XmlSerializer(value.GetType()); //handle serialization for serializable classes. if (type == DbType.Xml && (value.GetType() != typeof(System.String))) { x.Serialize(w, value); w.Close(); // store serialized object in a DbParameterCollection accessible // to my other methods. } else { //handle all other parameter types } } } I'm starting a new project which will rely heavily on database operations. I'm very curious to know whether my current practices will be sustainable in a high-traffic situation and whether or not I should consider switching to NHibernate or Microsoft's Entity Framework to perform what essentially seems to boil down to the same thing I'm currently doing. I appreciate any advice you may have.


  • NHibernate GenericADO Exception

    - by Ris90
    Hi, I'm trying to make simple many-to-one association, using NHibernate.. I have class Recruit with this mapping: <class name="Recruit" table="Recruits"> <id name="ID"> <generator class="native"/> </id> <property name="Lastname" column="lastname"/> <property name="Name" column="name"/> <property name="MedicalReport" column="medicalReport"/> <property name="DateOfBirth" column ="dateOfBirth" type="Date"/> <many-to-one name="AssignedOnRecruitmentOffice" column="assignedOnRecruitmentOffice" class="RecruitmentOffice"/> which is many-to-one connected to RecruitmentOffices: <class name="RecruitmentOffice" table="RecruitmentOffices"> <id name="ID" column="ID"> <generator class="native"/> </id> <property name="Chief" column="chief"/> <property name="Name" column="name"/> <property name ="Address" column="address"/> <set name="Recruits" cascade="save-update" inverse="true" lazy="true"> <key> <column name="AssignedOnRecruitmentOffice"/> </key> <one-to-many class="Recruit"/> </set> And create Repository class with method Insert: public void Insert(Recruit recruit) { using (ITransaction transaction = session.BeginTransaction()) { session.Save(recruit); transaction.Commit(); } } then I try to save new recrui to base: Recruit test = new Recruit(); RecruitmentOffice office = new RecruitmentOffice(); ofice.Name = "test"; office.Chief = "test"; test.AssignedOnRecruitmentOffice = office; test.Name = "test"; test.DateOfBirth = DateTime.Now; RecruitRepository testing = new RecruitRepository(); testing.Insert(test); And have this error GenericADOException could not insert: [OSiUBD.Models.DAO.Recruit][SQL: INSERT INTO Recruits (lastname, name, medicalReport, dateOfBirth, assignedOnRecruitmentOffice) VALUES (?, ?, ?, ?, ?); select SCOPE_IDENTITY()] on session.Save


  • Fixing up an entity framework query

    - by vdh_ant
    My table structure is as follows: Person 1-M PesonAddress Person 1-M PesonPhone Person 1-M PesonEmail Person 1-M Contract Contract M-M Program Contract M-1 Organization At the end of this query I need a populated object graph where each person has their: PesonAddress's PesonPhone's PesonEmail's PesonPhone's Contract's - and this has its respective Program's Now I had the following query and I thought that it was working great, but it has a couple of problems: from people in ctx.People.Include("PersonAddress") .Include("PersonLandline") .Include("PersonMobile") .Include("PersonEmail") .Include("Contract") .Include("Contract.Program") where people.Contract.Any( contract => (param.OrganizationId == contract.OrganizationId) && contract.Program.Any( contractProgram => (param.ProgramId == contractProgram.ProgramId))) select people; The problem is that it filters the person to the criteria but not the Contracts or the Contract's Programs. It brings back all Contracts that each person has not just the ones that have an OrganizationId of x and the same goes for each of those Contract's Programs respectively. What I want is only the people that have at least one contract with an OrgId of x with and where that contract has a Program with the Id of y... and for the object graph that is returned to have only the contracts that match and programs within that contract that match. I kinda understand why its not working, but I don't know how to change it so it is working... This is my attempt thus far: from people in ctx.People.Include("PersonAddress") .Include("PersonLandline") .Include("PersonMobile") .Include("PersonEmail") .Include("Contract") .Include("Contract.Program") let currentContracts = from contract in people.Contract where (param.OrganizationId == contract.OrganizationId) select contract let currentContractPrograms = from contractProgram in currentContracts let temp = from x in contractProgram.Program where (param.ProgramId == contractProgram.ProgramId) select x where temp.Any() select temp where currentContracts.Any() && currentContractPrograms.Any() select new Person { PersonId = people.PersonId, FirstName = people.FirstName, ..., ...., MiddleName = people.MiddleName, Surname = people.Surname, ..., ...., Gender = people.Gender, DateOfBirth = people.DateOfBirth, ..., ...., Contract = currentContracts, ... }; //This doesn't work But this has several problems (where the Person type is an EF object): I am left to do the mapping by myself, which in this case there is quite a lot to map When ever I try to map a list to a property (i.e. Scholarship = currentScholarships) it says I can't because IEnumerable is trying to be cast to EntityCollection Include doesn't work Hence how do I get this to work. Keeping in mind that I am trying to do this as a compiled query so I think that means anonymous types are out.


  • Complex Entity Framework linked-graphs issue: how to limit change set / break the graph?

    - by Hightechrider
    I have an EDMX containing Sentences, and Words, say and a Sentence contains three Words, say. Appropriate FK relationships exist between the tables. I create some words: Word word1 = new Word(); Word word2 = ... I build a Sentence: Sentence x = new Sentence (word1, word2, word3); I build another Sentence: Sentence y = new Sentence (word1, word4, word5); I try to save x to the database, but EF builds a change set that includes everything, including y, word4 and word5 that aren't ready to save to the database. When SaveChanges() happens it throws an exception: Unable to determine the principal end of the ... relationship. Multiple added entities may have the same primary key. I think it does this because Word has an EntityCollection<Sentence> on it from the FK relationship between the two tables, and thus Sentence y is inextricably linked to Sentence x through word1. So I remove the Navigation Property Sentences from Word and try again. It still tries to put the entire graph into the change set. What suggestions do the Entity Framework experts have for ways to break this connection. Essentially what I want is a one-way mapping from Sentence to Word; I don't want an EntityCollection<Sentence> on Word and I don't want the object graph to get intertwined like this. Code sample: This puts two sentences into the database because Verb1 links them and EF explores the entire graph of existing objects and added objects when you do Add/SaveChanges. Word subject1 = new Word(){ Text = "Subject1"}; Word subject2 = new Word(){ Text = "Subject2"}; Word verb1 = new Word(){ Text = "Verb11"}; Word object1 = new Word(){ Text = "Object1"}; Word object2 = new Word(){ Text = "Object2"}; Sentence s1 = new Sentence(){Subject = subject1, Verb=verb1, Object=object1}; Sentence s2 = new Sentence(){Subject=subject2, Verb=verb1, Object=object2}; context.AddToSentences(s1); context.SaveChanges(); foreach (var s in context.Sentences) { Console.WriteLine(s.Subject + " " + s.Verb + " " + s.Object); }


  • hibernate3-maven-plugin: entiries in different maven projects, hbm2ddl fails

    - by Mike
    I'm trying to put an entity in a different Maven project. In the current project I have:

        @Entity
        public class User {
            ...
            private FacebookUser facebookUser;
            ...
            public FacebookUser getFacebookUser() {
                return facebookUser;
            }
            ...
            public void setFacebookUser(FacebookUser facebookUser) {
                this.facebookUser = facebookUser;
            }

    Then FacebookUser (in a different Maven project, that's a dependency of the current project) is defined as:

        @Entity
        public class FacebookUser {
            ...
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            public Long getId() {
                return id;
            }

    Here is my Maven hibernate3-maven-plugin configuration:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>hibernate3-maven-plugin</artifactId>
          <version>2.2</version>
          <executions>
            <execution>
              <phase>process-classes</phase>
              <goals>
                <goal>hbm2ddl</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <components>
              <component>
                <name>hbm2ddl</name>
                <implementation>jpaconfiguration</implementation>
              </component>
            </components>
            <componentProperties>
              <ejb3>false</ejb3>
              <persistenceunit>Default</persistenceunit>
              <outputfilename>schema.ddl</outputfilename>
              <drop>false</drop>
              <create>true</create>
              <export>false</export>
              <format>true</format>
            </componentProperties>
          </configuration>
        </plugin>

    Here is the error I'm getting:

        org.hibernate.MappingException: Could not determine type for: com.xxx.facebook.model.FacebookUser, at table: user, for columns: [org.hibernate.mapping.Column(facebook_user)]

    I know that FacebookUser is on the classpath, because if I make facebookUser transient, the project compiles fine:

        @Transient
        public FacebookUser getFacebookUser() {
            return facebookUser;
        }


  • Dependency Injection and Unit of Work pattern

    - by sunwukung
    I have a dilemma. I've used DI (read: factory) to provide core components for a homebrew ORM. The container provides database connections, DAO's,Mappers and their resultant Domain Objects on request. Here's a basic outline of the Mappers and Domain Object classes class Mapper{ public function __constructor($DAO){ $this->DAO = $DAO; } public function load($id){ if(isset(Monitor::members[$id]){ return Monitor::members[$id]; $values = $this->DAO->selectStmt($id); //field mapping process omitted for brevity $Object = new Object($values); return $Object; } } class User(){ public function setName($string){ $this->name = $string; //mark modified by means fair or foul } } The ORM also contains a class (Monitor) based on the Unit of Work pattern i.e. class Monitor(){ private static array modified; private static array dirty; public function markClean($class); public function markModified($class); } The ORM class itself simply co-ordinates resources extracted from the DI container. So, to instantiate a new User object: $Container = new DI_Container; $ORM = new ORM($Container); $User = $ORM->load('user',1); //at this point the container instantiates a mapper class //and passes a database connection to it via the constructor //the mapper then takes the second argument and loads the user with that id $User->setName('Rumpelstiltskin');//at this point, User must mark itself as "modified" My question is this. At the point when a user sets values on a Domain Object class, I need to mark the class as "dirty" in the Monitor class. I have one of three options as I can see it 1: Pass an instance of the Monitor class to the Domain Object. I noticed this gets marked as recursive in FirePHP - i.e. $this-Monitor-markModified($this) 2: Instantiate the Monitor directly in the Domain Object - does this break DI? 3: Make the Monitor methods static, and call them from inside the Domain Object - this breaks DI too doesn't it? What would be your recommended course of action (other than use an existing ORM, I'm doing this for fun...)
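    One way out of the three options listed above is to let the unit of work detect dirtiness itself by snapshotting objects when they are registered, so the domain object never needs a reference to the Monitor at all. This is a different technique from the ones listed in the question; a rough sketch of the idea (in Python rather than PHP, with invented names):

        class UnitOfWork:
            """Tracks registered objects and works out which ones changed at commit time."""
            def __init__(self):
                self._snapshots = {}   # id(obj) -> copy of its state when registered

            def register_clean(self, obj):
                self._snapshots[id(obj)] = dict(vars(obj))

            def dirty_objects(self, objects):
                return [o for o in objects
                        if vars(o) != self._snapshots.get(id(o), {})]

        class User:
            def __init__(self, name):
                self.name = name

        uow = UnitOfWork()
        user = User("Rumpelstiltskin")
        uow.register_clean(user)

        user.name = "Tom Tit Tot"            # no markModified() call needed
        print(uow.dirty_objects([user]))     # [<__main__.User object ...>]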


  • FluentNHibernate Many-To-One References where Foreign Key is not to Primary Key and column names are different

    - by Todd Langdon
    I've been sitting here for an hour trying to figure this out... I've got 2 tables (abbreviated): CREATE TABLE TRUST ( TRUSTID NUMBER NOT NULL, ACCTNBR VARCHAR(25) NOT NULL ) CONSTRAINT TRUST_PK PRIMARY KEY (TRUSTID) CREATE TABLE ACCOUNTHISTORY ( ID NUMBER NOT NULL, ACCOUNTNUMBER VARCHAR(25) NOT NULL, TRANSAMT NUMBER(38,2) NOT NULL POSTINGDATE DATE NOT NULL ) CONSTRAINT ACCOUNTHISTORY_PK PRIMARY KEY (ID) I have 2 classes that essentially mirror these: public class Trust { public virtual int Id {get; set;} public virtual string AccountNumber { get; set; } } public class AccountHistory { public virtual int Id { get; set; } public virtual Trust Trust {get; set;} public virtual DateTime PostingDate { get; set; } public virtual decimal IncomeAmount { get; set; } } How do I do the many-to-one mapping in FluentNHibernate to get the AccountHistory to have a Trust? Specifically, since it is related on a different column than the Trust primary key of TRUSTID and the column it is referencing is also named differently (ACCTNBR vs. ACCOUNTNUMBER)???? Here's what I have so far - how do I do the References on the AccountHistoryMap to Trust??? public class TrustMap : ClassMap<Trust> { public TrustMap() { Table("TRUST"); Id(x => x.Id).Column("TRUSTID"); Map(x => x.AccountNumber).Column("ACCTNBR"); } } public class AccountHistoryMap : ClassMap<AccountHistory> { public AccountHistoryMap() { Table("TRUSTACCTGHISTORY"); Id (x=>x.Id).Column("ID"); References<Trust>(x => x.Trust).Column("ACCOUNTNUMBER").ForeignKey("ACCTNBR").Fetch.Join(); Map(x => x.PostingDate).Column("POSTINGDATE"); ); I've tried a few different variations of the above line but can't get anything to work - it pulls back AccountHistory data and a proxy for the Trust; however it says no Trust row with given identifier. This has to be something simple. Anyone? Thanks in advance.


  • How do I destruct data associated with an object after the object no longer exists?

    - by Phineas
    I'm creating a class (say, C) that associates data (say, D) with an object (say, O). When O is destructed, O will notify C that it soon will no longer exist :( ... Later, when C feels it is the right time, C will let go of what belonged to O, namely D. If D can be any type of object, what's the best way for C to be able to execute "delete D;"? And what if D is an array of objects? My solution is to have D derive from a base class that C has knowledge of. When the time comes, C calls delete on a pointer to the base class. I've also considered storing void pointers and calling delete, but I found out that's undefined behavior and doesn't call D's destructor. I considered that templates could be a novel solution, but I couldn't work that idea out. Here's what I have so far for C, minus some details: // This class is C in the above description. There may be many instances of C. class Context { public: // D will inherit from this class class Data { public: virtual ~Data() {} }; Context(); ~Context(); // Associates an owner (O) with its data (D) void add(const void* owner, Data* data); // O calls this when he knows its the end (O's destructor). // All instances of C are now aware that O is gone and its time to get rid // of all associated instances of D. static void purge (const void* owner); // This is called periodically in the application. It checks whether // O has called purge, and calls "delete D;" void refresh(); // Side note: sometimes O needs access to D Data *get (const void *owner); private: // Used for mapping owners (O) to data (D) std::map _data; }; // Here's an example of O class Mesh { public: ~Mesh() { Context::purge(this); } void init(Context& c) const { Data* data = new Data; // GL initialization here c.add(this, new Data); } void render(Context& c) const { Data* data = c.get(this); } private: // And here's an example of D struct Data : public Context::Data { ~Data() { glDeleteBuffers(1, &vbo); glDeleteTextures(1, &texture); } GLint vbo; GLint texture; }; }; P.S. If you're familiar with computer graphics and VR, I'm creating a class that separates an object's per-context data (e.g. OpenGL VBO IDs) from its per-application data (e.g. an array of vertices) and frees the per-context data at the appropriate time (when the matching rendering context is current).


  • Closure Tables - Is this enough data to display a tree view?

    - by James Pitt
    Here is the table I have created by testing the closure table method:

        | id  | parentId | childId | hops |
        |-----|----------|---------|------|
        | 270 | 6        | 6       | 0    |
        | 271 | 7        | 7       | 0    |
        | 272 | 8        | 8       | 0    |
        | 273 | 9        | 9       | 0    |
        | 276 | 10       | 10      | 0    |
        | 281 | 9        | 10      | 1    |
        | 282 | 7        | 9       | 1    |
        | 283 | 7        | 10      | 2    |
        | 285 | 7        | 8       | 1    |
        | 286 | 6        | 7       | 1    |
        | 287 | 6        | 9       | 2    |
        | 288 | 6        | 10      | 3    |
        | 289 | 6        | 8       | 2    |
        | 293 | 6        | 9       | 1    |
        | 294 | 6        | 10      | 2    |

    I am trying to create a simple tree of this using PHP. There does not seem to be enough data to create the tree. For example, when I look purely at parentId = 6:

        - Part 6
          - Part 7
            - ?
            - ?
          - Part 9
            - ?
            - ?

    We know that parts 8 and 10 exist below Part 7 or 9, but not which. We know that part 10 exists at both 3 and 4 nodes deep, but where? If I look at other data in the table it is possible to tell it should be:

        - Part 6
          - Part 7
            - Part 9
              - Part 10
          - Part 9
            - Part 10

    I thought one of the benefits of closure tables was that there was no need for recursive queries? Could you help explain what I am doing wrong?

    EDIT: For clarification, this is a mapping table. There is another table called "parts" which has a column called part_id that correlates to both the parentId and childId columns in the "closure" table. The "id" column in the table above (closure) is just for the purposes of maintaining a primary key; it is not really necessary. The method I used to create this closure table is described in the following article: http://dirtsimple.org/2010/11/simplest-way-to-do-tree-based-queries.html

    EDIT2: It can have two and three hops. I will explain more easily by assigning names to the items:

        Part 6  = Bicycle
        Part 7  = Gears
        Part 8  = Chain
        Part 9  = Bolt
        Part 10 = Nut

    Nut is part of Bolt. The Bolt and Nut combo exists directly within Bicycle and within Gears, which is part of Bicycle. In relation to what method to use, I have looked at Adjacency, Edges, Enum Paths, Closures, DAGs (networks) and the Nested Set Model. I am still trying to work out what is what, but this is an extremely complex component database where there are multiple parents and any modification to a sub-tree must propagate through the other trees. More importantly, there will be insertions, deletions and tree views, and I wish to avoid recursion during general use, even at the cost of database space and query time during entry.
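    For reference, here is a sketch of the usual way a tree view is rendered from a closure table: the hops = 1 rows are taken as the direct parent/child edges and the hops = 0 rows as the nodes. It is written in Python rather than PHP purely for brevity, with a subset of the rows above hard-coded; note that because a closure table only stores ancestor/descendant pairs, a part mounted in two places (the Bolt/Nut pair here) collapses into one set of rows, which is exactly why the data above feels ambiguous.

        rows = [  # (parentId, childId, hops) -- subset of the table above
            (6, 6, 0), (7, 7, 0), (8, 8, 0), (9, 9, 0), (10, 10, 0),
            (6, 7, 1), (7, 8, 1), (7, 9, 1), (9, 10, 1), (6, 9, 1),
        ]

        # Build an adjacency list from the direct (hops == 1) edges only.
        children = {}
        for parent, child, hops in rows:
            if hops == 1:
                children.setdefault(parent, []).append(child)

        def print_tree(node, depth=0):
            print("  " * depth + f"Part {node}")
            for child in children.get(node, []):
                print_tree(child, depth + 1)

        print_tree(6)
        # To tell the two Bolt/Nut placements apart, an extra "placement" or path table
        # (or a nested-set/path-enumeration hybrid) is needed alongside the closure table.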


  • ASP.NET MVC 2 AJAX dilemma: Lose Models concept or create unmanageable JavaScript

    - by Slightly Frustrated
    Hi. OK, let's assume we are working with ASP.NET MVC 2 (the latest and greatest preview) and we want to create an AJAX user interface with jQuery. So what are our real options here?

    Option 1 - Pass Json from the Controller to the View, and then the View submits Json back to the Controller. This means (in the order given):
    - The user opens some View (let's say /Invoices/January) which has to visualize a list of data (e.g. IEnumerable<X.Y.Z.Models.Invoice>).
    - The Controller retrieves the Model from the repository (assuming we are using the repository pattern).
    - The Controller creates a new instance of a class which we will serialize to Json. The reason we do this is because the Model may not be serializable (circular reference ftl).
    - The Controller populates the soon-to-be-serialized class with data.
    - The Controller serializes the class to Json and passes it to the View.
    - The user does some change and submits the 'form'.
    - The View submits Json back to the Controller.
    - The Controller now must 'manually' validate the input, because the Json passed does not bind to a Model.

    See, if our View is communicating with the Controller via Json, we lose the Model validation, which IMHO is an incredible disadvantage. In this case, forget about data annotations and stuff.

    Option 2 - OK, the alternative to the first approach is to pass the Models to the Views, which is the default behavior in the template when you start a new project.
    - We pass a strongly typed Model to the View.
    - The View renders the appropriate html and javascript, sticking to the Model property names. This is important!
    - The user submits the form. If we stick to the Model names, when we .serialize() the form and submit it to the Controller it will map to a Model. There is no Json mapping. The submitted form directly binds to a strongly typed Model, hence we can use the Model validation. E.g. we keep the business logic where it should be.

    The problem with this approach is that if we refactor some of the Models (change property names, types, etc), the javascript we wrote would become invalid. We would have to manually refactor the scripting and hope we don't miss something. There is no way you can test it either.

    OK, the question is: how do you write an AJAX front end which keeps the business logic validation in the Model (e.g. the Controller passes and receives a Model type), but at the same time doesn't screw up the javascript and html when we refactor the Model?

