Search Results

Search found 32116 results on 1285 pages for 'object object mapping'.


  • Saving child collections with NHibernate

    - by Ben
    Hi, I am in the process of learning NHibernate, so bear with me. I have an Order class and a Transaction class. Order has a one-to-many association with Transaction, and the transaction table in my database has a not-null constraint on the OrderId foreign key.

    Order class:

        public class Order
        {
            public virtual Guid Id { get; set; }
            public virtual DateTime CreatedOn { get; set; }
            public virtual decimal Total { get; set; }
            public virtual ICollection<Transaction> Transactions { get; set; }

            public Order()
            {
                Transactions = new HashSet<Transaction>();
            }
        }

    Order mapping:

        <class name="Order" table="Orders">
          <cache usage="read-write"/>
          <id name="Id">
            <generator class="guid"/>
          </id>
          <property name="CreatedOn" type="datetime"/>
          <property name="Total" type="decimal"/>
          <set name="Transactions" table="Transactions" lazy="false" inverse="true">
            <key column="OrderId"/>
            <one-to-many class="Transaction"/>
          </set>
        </class>

    Transaction class:

        public class Transaction
        {
            public virtual Guid Id { get; set; }
            public virtual DateTime ExecutedOn { get; set; }
            public virtual bool Success { get; set; }
            public virtual Order Order { get; set; }
        }

    Transaction mapping:

        <class name="Transaction" table="Transactions">
          <cache usage="read-write"/>
          <id name="Id" column="Id" type="Guid">
            <generator class="guid"/>
          </id>
          <property name="ExecutedOn" type="datetime"/>
          <property name="Success" type="bool"/>
          <many-to-one name="Order" class="Order" column="OrderId" not-null="true"/>
        </class>

    Really, I don't want a bidirectional association. There is no need for my Transaction objects to reference their Order object directly (I just need to access the transactions of an order). However, I had to add the reference so that Order.Transactions is persisted to the database.

    Repository:

        public void Update(Order entity)
        {
            using (ISession session = NHibernateHelper.OpenSession())
            using (ITransaction transaction = session.BeginTransaction())
            {
                session.Update(entity);
                foreach (var tx in entity.Transactions)
                {
                    tx.Order = entity;
                    session.SaveOrUpdate(tx);
                }
                transaction.Commit();
            }
        }

    My problem is that this issues an update for every transaction in the order's collection, regardless of whether it has changed or not. What I was trying to get around was having to explicitly save each transaction before saving the order; instead I just want to add the transactions to the order and then save the order:

        public void Can_add_transaction_to_existing_order()
        {
            var orderRepo = new OrderRepository();
            var order = orderRepo.GetById(new Guid("aa3b5d04-c5c8-4ad9-9b3e-9ce73e488a9f"));

            Transaction tx = new Transaction();
            tx.ExecutedOn = DateTime.Now;
            tx.Success = true;

            order.Transactions.Add(tx);
            orderRepo.Update(order);
        }

    Although I have found quite a few articles covering the setup of a one-to-many association, most of them discuss retrieving data rather than persisting it back. Many thanks, Ben
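
    One common fix, sketched below (my suggestion, not from the original post): let NHibernate cascade saves from Order to its transactions by adding cascade="all-delete-orphan" to the <set> mapping, and keep both ends of the association in sync with a small helper on Order. With that in place, Update() only needs session.Update(entity) followed by transaction.Commit(); the loop that touches every transaction goes away, and only new or dirty transactions are written. The Transaction type is the one from the post.

        using System;
        using System.Collections.Generic;

        public class Order
        {
            public virtual Guid Id { get; set; }
            public virtual DateTime CreatedOn { get; set; }
            public virtual decimal Total { get; set; }
            public virtual ICollection<Transaction> Transactions { get; protected set; }

            public Order()
            {
                Transactions = new HashSet<Transaction>();
            }

            // Setting the back-reference here satisfies the not-null OrderId
            // constraint without the repository ever touching tx.Order.
            public virtual void AddTransaction(Transaction tx)
            {
                tx.Order = this;
                Transactions.Add(tx);
            }
        }

    The test would then call order.AddTransaction(tx) instead of order.Transactions.Add(tx).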


  • Hibernate overriding database modifications with detached object state

    - by EugeneP
    I'm gonna go with this design: create an object and keep it alive for the whole web-app session, and I need to synchronize its state with the database state. What I want to achieve is this: if, between my database operations (modifications that I persist to the db), someone intentionally spoils table rows, then on the next save to the database all those changes WOULD BE OVERWRITTEN with the object's state, which always contains valid data.

    What Hibernate methods do you recommend for persisting the modifications to the database? saveOrUpdate() is a possible solution, but maybe there's something better?

    Again, I repeat how it looks. First I create an object without collections and persist it (save()). Then the user provides us with additional data. In the service layer, again, we modify our object in memory (say, populate it with collections) and then persist it again. So every service-layer operation of the next step must simply guarantee that the database contains the exact persistent copy of the object that we have in memory. If the data in the database differs, it MUST BE OVERRIDDEN with the object's (in-memory) state. What Session operations do you recommend?


  • binding nested json object value to a form field

    - by Jack
    I am building a dynamic form to edit data in a JSON object. First, if something like this already exists, let me know; I would rather not build it, but I have searched many times for a tool and have found only tree-like structures that require entering quotes. I would be happy to treat all values as strings. This edit functionality is for end users, so it needs to be easy and not intimidating.

    So far I have code that generates nested tables to represent a JSON object. For each value I display a form field, and I would like to bind the form field to the associated nested JSON value. If I could store a reference to the JSON value, I would build an array of references to each value in the JSON object tree; I have not found a way to do that with JavaScript. My last-resort approach will be to traverse the table after edits are made. I would rather have dynamic updates, but a single submit would be better than nothing. Any ideas?

    The JSON in the files nests only a few levels. Here is the format of a simple case:

        {
          "researcherid_id": {
            "id_key": "researcherid_id",
            "description": "Use to retrieve bibliometric data",
            "url_template": [
              {
                "name": "Author Detail",
                "url": "http://www.researcherid.com/rid/${key}"
              }
            ]
          }
        }

        $.get('file.json', make_json_form);

        function make_json_form(response) {
            dataset = $.secureEvalJSON(response);
            // iterate through the object and generate a form field for string values
        }

        // After the form is edited I want to display the raw updated JSON
        // (then I want to save it, but that is for another thread).
        // For now I iterate through the form and construct the JSON object;
        // I would rather have the dataset object updated on focus-out after each edit.
        function show_json(form_id) {
            var r = {};
            var el = document.getElementById(form_id);
            table_to_json(r, el, null);
            $('body').html(formattedJSON(r));
        }


  • sharing message object between web applications

    - by jezhilvalan
    I need to share Java mail Message objects between two web applications (A and B). Web application A obtains the message and writes it to an output stream:

        for (int i = 0; i < messagesArr.length; i++) {
            uid = pop3FolderObj.getUID(messagesArr[i]);
            // store messages under their UID names to maintain uniqueness
            File f = new File("F:/PersistedMessagesFolder" + uid);
            FileOutputStream fos = new FileOutputStream(f);
            messagesArr[i].writeTo(fos);
            fos.flush();
            fos.close();
        }

    Is FileOutputStream the best output stream for persisting message objects? Is it possible to use ObjectOutputStream for message object persistence?

    Web application B reads the message object via an input stream:

        FileInputStream fis = new FileInputStream("F:/MessagesPersistedFolder" + uid);
        MimeMessage mm = new MimeMessage(sessionObj, fis);

    What if the mail message object already written by web application A is not a MimeMessage? How can I read non-MIME messages using an input stream? The MimeMessage constructor mandates sessionObj as the first parameter; how can I obtain this sessionObj in web application B? Do I have to establish a store connection again with the same email id, email password, POP server and port (already used in web application A) in order to obtain this session object? Even if I obtain one, will this session object remain the same as the session object obtained earlier in web application A?

    Since I am using UIDs to name the Message objects (in order to keep file names unique), how can I share these UIDs between web application A and web application B? Web application B needs the UID in order to access the specific file present in "F:/MessagesPersistedFolder". Please help me resolve the aforementioned issues.


  • Hibernate won't save into database

    - by Blitzkr1eg
    I mapped some classes to some tables with Hibernate in Java. I set Hibernate to show SQL; it opens the session, it shows that it executes the SQL, it closes the session, but there are no modifications to the database.

    Entities:

        public class Profesor implements Comparable<Profesor> {
            private int id;
            private String nume;
            private String prenume;
            private int departament_id;
            private Set<Disciplina> listaDiscipline; // the teacher gives some courses
        }

        public class Disciplina implements Comparable<Disciplina> { // the course class
            private int id;
            private String denumire;
            private String syllabus;
            private String schNotare;
            private Set<Lectie> lectii;
            private Set<Tema> listaTeme;
            private Set<Grup> listaGrupuri; // the course is taught/assigned to some groups of students
            private Set<Assignment> listAssignments;
        }

    Mapping:

        <hibernate-mapping default-cascade="all">
          <class name="model.Profesor" table="devgar_scoala.profesori">
            <id name="id" column="id">
              <generator class="increment"/>
            </id>
            <set name="listaDiscipline" table="devgar_scoala.`profesori-discipline`">
              <key column="Profesori_id" />
              <many-to-many class="model.Disciplina" column="Discipline_id" />
            </set>
            <property name="nume" column="Nume" type="string" />
            <property name="prenume" column="Prenume" type="string" />
            <property name="departament_id" column="Departamente_id" type="integer" />
          </class>

          <class name="model.Grup" table="devgar_scoala.grupe">
            <id name="id" unsaved-value="0">
              <generator class="increment"/>
            </id>
            <set name="listaStudenti" table="devgar_scoala.`studenti-grupe`">
              <key column="Grupe_id" />
              <many-to-many column="Studenti_nrMatricol" class="model.Student" />
            </set>
            <property name="nume" column="Grupa" type="string"/>
            <property name="programStudiu" column="progStudii_id" type="integer" />
          </class>

          <class name="model.Disciplina" table="devgar_scoala.discipline">
            <id name="id">
              <generator class="increment"/>
            </id>
            <property name="denumire" column="Denumire" type="string"/>
            <property name="syllabus" type="string" column="Syllabus"/>
            <property name="schNotare" type="string" column="SchemaNotare"/>
            <set name="listaGrupuri" table="devgar_scoala.`Discipline-Grupe`">
              <key column="Discipline_id" />
              <many-to-many column="Grupe_id" class="model.Grup" />
            </set>
            <set name="lectii" table="devgar_scoala.lectii">
              <key column="Discipline_id" not-null="true"/>
              <one-to-many class="model.Lectie" />
            </set>
          </class>
        </hibernate-mapping>

    The only 'funny' thing is that the Profesor object gets loaded not with Hibernate but with manual classic SQL in Java. That's why I save the Profesor object like this (p is the manually loaded Profesor object):

        Profesor p2 = (Profesor) session.merge(p);
        session.saveOrUpdate(p2);
        // flush session of course

    After this I get in the Java console:

        Hibernate: insert into devgar_scoala.grupe (Grupa, progStudii_id, id) values (?, ?, ?)

    but when I look into the database there are no new rows in the table Grupe (the Groups table).


  • filter by value in object of array

    - by zahir hussain
    Hi, I want to know how to filter by a value in an object of an array. Below is one data dump from my object array:

        Object
        (
            [_fields:private] => Array
            (
                [a] => c7b920f57e553df2bb68272f61570210
                [index_date] => 2010/05/11 12:00:58
                [b] => i am zahir
                [c] => google.com
                [d] => 23c4a1f90fb577a006bdef4c718f5cc2
            )
        )
        Object
        (
            [_fields:private] => Array
            (
                [a] => c7b920f57e553df2bb68272f61570210
                [index_date] => 2010/05/11 12:00:58
                [b] => i am zahir
                [c] => yahoo.com
                [d] => 23c4a1f90fb577a006bdef4c718f5cc2
            )
        )
        Object
        (
            [_fields:private] => Array
            (
                [a] => c7b920f57e553df2bb68272f61570210
                [index_date] => 2010/05/11 12:00:58
                [b] => i am beni
                [c] => google.com
                [d] => 23c4a1f90fb577a006bdef4c718f5cc2
            )
        )
        ...
        Object
        (
            [_fields:private] => Array
            (
                [a] => c7b920f57e553df2bb68272f61570210
                [index_date] => 2010/05/11 12:00:58
                [b] => i am sani
                [c] => yahoo.com
                [d] => 23c4a1f90fb577a006bdef4c718f5cc2
            )
        )

    I have to filter on the [c] value. Thanks in advance.


  • [Ruby] Object assignment and pointers

    - by Jergason
    I am a little confused about object assignment and pointers in Ruby, and coded up this snippet to test my assumptions:

        class Foo
          attr_accessor :one, :two

          def initialize(one, two)
            @one = one
            @two = two
          end
        end

        bar = Foo.new(1, 2)
        beans = bar
        puts bar
        puts beans
        beans.one = 2
        puts bar
        puts beans
        puts beans.one
        puts bar.one

    I had assumed that when I assigned bar to beans, it would create a copy of the object, and modifying one would not affect the other. Alas, the output shows otherwise:

        ^_^[jergason:~]$ ruby test.rb
        #<Foo:0x100155c60>
        #<Foo:0x100155c60>
        #<Foo:0x100155c60>
        #<Foo:0x100155c60>
        2
        2

    I believe that the numbers have something to do with the address of the object, and they are the same for both beans and bar. When I modify beans, bar gets changed as well, which is not what I had expected. It appears that I am only creating a pointer to the object, not a copy of it. What do I need to do to copy the object on assignment, instead of creating a pointer?

    Tests with the Array class show some strange behavior as well:

        foo = [0, 1, 2, 3, 4, 5]
        baz = foo
        puts "foo is #{foo}"
        puts "baz is #{baz}"
        foo.pop
        puts "foo is #{foo}"
        puts "baz is #{baz}"
        foo += ["a hill of beans is a wonderful thing"]
        puts "foo is #{foo}"
        puts "baz is #{baz}"

    This produces the following wonky output:

        foo is 012345
        baz is 012345
        foo is 01234
        baz is 01234
        foo is 01234a hill of beans is a wonderful thing
        baz is 01234

    This blows my mind. Calling pop on foo affects baz as well, so it isn't a copy, but concatenating something onto foo only affects foo, and not baz. So when am I dealing with the original object, and when am I dealing with a copy? In my own classes, how can I make sure that assignment copies, and doesn't make pointers? Help this confused guy out.


  • NSDrawer delegate pointing to deallocated object?

    - by Isaac
    A user has sent in a crash report with the stack trace listed below. (I have not been able to reproduce the crash myself, but every other crash this user has reported has been a valid bug, even when I couldn't reproduce the effect.) The application is a reference-counted Objective-C/Cocoa app.

    If I am interpreting it correctly, the crash is caused by attempting to send a drawerDidOpen: message to a deallocated object. The only object that should be receiving drawerDidOpen: is the drawer's delegate object (nowhere does any object register to receive drawer notifications), and the drawer's delegate object is instantiated via the XIB/NIB file, wired to the delegate outlet of the drawer, and not referenced anywhere else.

    Given that, how can I protect against the delegate getting dealloc'd before the drawer notification? Or, alternately, what have I misinterpreted that might be causing the crash?

    Crash log/stack trace:

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000010
        Crashed Thread:  0  Dispatch queue: com.apple.main-thread

        Application Specific Information:
        objc_msgSend() selector name: drawerDidOpen:

        Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
        0   libobjc.A.dylib           0x00007fff8272011c objc_msgSend + 40
        1   com.apple.Foundation      0x00007fff87d0786e _nsnote_callback + 167
        2   com.apple.CoreFoundation  0x00007fff831bcaea __CFXNotificationPost + 954
        3   com.apple.CoreFoundation  0x00007fff831a9098 _CFXNotificationPostNotification + 200
        4   com.apple.Foundation      0x00007fff87cfe7d8 -[NSNotificationCenter postNotificationName:object:userInfo:] + 101
        5   com.apple.AppKit          0x00007fff8512e944 _NSDrawerObserverCallBack + 840
        6   com.apple.CoreFoundation  0x00007fff831d40d7 __CFRunLoopDoObservers + 519
        7   com.apple.CoreFoundation  0x00007fff831af8c4 CFRunLoopRunSpecific + 548
        8   com.apple.HIToolbox       0x00007fff839b8ada RunCurrentEventLoopInMode + 333
        9   com.apple.HIToolbox       0x00007fff839b883d ReceiveNextEventCommon + 148
        10  com.apple.HIToolbox       0x00007fff839b8798 BlockUntilNextEventMatchingListInMode + 59
        11  com.apple.AppKit          0x00007fff84de8a2a _DPSNextEvent + 708
        12  com.apple.AppKit          0x00007fff84de8379 -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 155
        13  com.apple.AppKit          0x00007fff84dae05b -[NSApplication run] + 395
        14  com.apple.AppKit          0x00007fff84da6d7c NSApplicationMain + 364
        15  (my app's identifier)     0x0000000100001188 start + 52


  • Object equality in context of hibernate / webapp

    - by bert
    How do you handle object equality for Java objects managed by Hibernate? In the 'Hibernate in Action' book they say that one should favor business keys over surrogate keys. Most of the time, I do not have a business key. Think of addresses mapped to a person: the addresses are kept in a Set and displayed in a Wicket RefreshingView (with a ReuseIfEquals strategy).

    I could either use the surrogate id or use all fields in the equals() and hashCode() functions. The problem is that those fields change during the lifetime of the object, either because the user entered some data or because the id changes due to JPA merge() being called inside the OSIV (Open Session in View) filter. My understanding of the equals() and hashCode() contract is that their results should not change during the lifetime of an object.

    What I have tried so far:

    - equals() based on hashCode(), which uses the database id (or super.hashCode() if the id is null). Problem: new addresses start with a null id but get an id when attached to a person and that person gets merged (re-attached) in the OSIV filter.

    - Lazily compute the hashcode when hashCode() is first called and make that hashcode @Transient. Does not work, as merge() returns a new object and the hashcode does not get copied over.

    What I would need is an id that gets assigned during object creation, I think. What would be my options here? I don't want to introduce an additional persistent property. Is there a way to explicitly tell JPA to assign an id to an object? Regards
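
    For illustration only (the post is about Java/Hibernate, but the pattern is language-neutral; this sketch uses C#): assign the identifier yourself when the object is constructed, so equals() and hashCode() are stable for the object's entire lifetime, including before the first flush and across merge().

        using System;

        public class Address
        {
            // Client-assigned id: set once at construction, never changed,
            // so equality is stable before and after persistence.
            public virtual Guid Id { get; protected set; }

            public Address()
            {
                Id = Guid.NewGuid();
            }

            public override bool Equals(object obj)
            {
                return obj is Address other && other.Id == Id;
            }

            public override int GetHashCode()
            {
                return Id.GetHashCode();
            }
        }

    In JPA terms this corresponds to a client-assigned UUID key rather than a database-generated one, which is the "id assigned during object creation" the post asks for.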


  • C# struct with object as data member

    - by source-energy
    As we know, in C# structs are passed by value, not by reference. Suppose I have a struct with the following data members:

        private struct MessageBox
        {
            // data members
            private DateTime dm_DateTimeStamp;    // a struct type
            private TimeSpan dm_TimeSpanInterval; // also a struct
            private ulong dm_MessageID;           // System.UInt64, a struct
            private String dm_strMessage;         // an object (hence a reference is stored here)
            // more methods, properties, etc ...
        }

    So when a MessageBox is passed as a parameter, a COPY is made on the stack, right? What does that mean in terms of how the data members are copied? The first two are struct types, so copies are made of the DateTime and TimeSpan. The third is also a value type, so it is copied too. But what about dm_strMessage, which is a reference to an object? When it's copied, another reference to the same String is created, right? The object itself resides on the heap and is NOT copied (there is only one instance of it on the heap).

    So now we have two references to the same object of type String. If the two references are accessed from different threads, it's conceivable that the String object could be corrupted by being modified from two different directions simultaneously. The MSDN documentation says that System.String is thread safe. Does that mean that the String class has a built-in mechanism to prevent an object being corrupted in exactly the type of situation described here? I'm trying to figure out whether my MessageBox struct has any potential flaws / pitfalls as a structure vs. a class. Thanks for any input. Source.Energy.
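
    A small self-contained sketch (mine, not the poster's) showing what the copy actually does: the struct's fields are copied, so both copies hold a reference to the same string; but because System.String is immutable, "changing" one copy's field just points it at a different string object. The shared string itself can never be modified, from any thread.

        using System;

        struct MessageBox
        {
            public DateTime Stamp;  // value type: copied field by field
            public string Message;  // reference type: only the reference is copied
        }

        class Demo
        {
            static void Main()
            {
                var a = new MessageBox { Stamp = DateTime.Now, Message = "hello" };
                var b = a; // struct copy: b gets its own fields

                // Both copies reference the SAME string object on the heap.
                Console.WriteLine(ReferenceEquals(a.Message, b.Message)); // True

                // Strings are immutable, so assigning to b.Message points b
                // at a new string; a's copy is untouched.
                b.Message = "goodbye";
                Console.WriteLine(a.Message); // hello
                Console.WriteLine(b.Message); // goodbye
            }
        }

    This is what the thread-safety note amounts to: an immutable string cannot be corrupted by concurrent readers, though the struct's own fields still need synchronization if shared mutable state is involved.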


  • ZF2 - How to use the Hydrator/exchangeArray() to populate a nested object

    - by Dominic Watson
    I've got an object whose values are stored in my database. The object also contains another object, which is stored in the database using just its ID (a foreign key).

    http://framework.zend.com/manual/2.0/en/modules/zend.stdlib.hydrator.html

    Before the Hydrator/exchangeArray() functionality in ZF2, you would use a mapper to grab everything you need to create the object. Now I'm trying to eliminate that extra layer by just using hydration/exchangeArray() to populate my objects, but I am a bit stuck on creating the nested object. Should my entity have the inner object's table injected into it, so I can create the inner object if its ID is passed to my exchangeArray()?

    Here are example entities:

        // Village
        id, name, position, square_id

        // Map Square
        id, name, type

    Upon sending square_id to my Village's exchangeArray() function, it would get the map table and use the hydrator to pull in the square using the ID I have. It doesn't seem right to have mapper instances inside my entity, as I thought entities should be disconnected from anything but their own entity-specific parameters and functionality.


  • Need an advice for unit testing using mock object

    - by Andree
    Hi there, I just recently read about "mocking objects" for unit testing, and currently I'm having difficulty implementing this approach in my application. Please let me explain my problem.

    I have a User model class which depends on two data sources (a database and the Facebook web service). The controller class simply uses this User model as an interface to access data, and it doesn't care where the data came from. Currently I have never unit tested this User model because it depends on an external web service. But a while ago I read about object mocking, and now I know that it is a common approach to unit testing a class that depends on external resources (like in my case).

    Now I want to create a unit test for the User model, but I have encountered a design issue: in order for the User model to use a mocked Facebook SDK, I have to inject the mocked Facebook SDK into the User object (probably using a setter). Therefore I can't construct the Facebook SDK inside the User object; I have to construct it outside the User object and inject the SDK into the User object. The real client of my User model is the application's controller, so I would have to construct the Facebook SDK inside the controller and inject it into the User object. This is a problem because I want my controller to be as clean as possible; I want my controller to be ignorant of the application's data sources.

    I'm not good at explaining things systematically, so you'll probably be sleeping before you reach this last paragraph. But anyway, I want to ask if anyone here has encountered the same problem as mine. How do you solve this problem? Regards, Andree
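
    A hedged sketch of the usual way out (shown in C# since the post names no language; IFacebookClient and UserModel are illustrative names, not from the post): have the model depend on an interface, and let a single composition root or factory decide whether the real SDK or a fake is injected, so the controller never constructs the SDK itself.

        using System;

        public interface IFacebookClient
        {
            string FetchProfile(string userId);
        }

        public class UserModel
        {
            private readonly IFacebookClient facebook;

            // The dependency is injected; UserModel never constructs the SDK.
            public UserModel(IFacebookClient facebook)
            {
                this.facebook = facebook;
            }

            public string Profile(string id)
            {
                return facebook.FetchProfile(id);
            }
        }

        // In unit tests, inject a hand-rolled fake instead of the real SDK.
        public class FakeFacebookClient : IFacebookClient
        {
            public string FetchProfile(string userId)
            {
                return "stub-profile";
            }
        }

    In production code, a factory (or DI container) wired up at application start builds UserModel with the real client, keeping the controller ignorant of the data source.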


  • Core Data object into an NSDictionary with possible nil objects

    - by Chuck
    I have a Core Data object that has a bunch of optional values. I'm pushing a table view controller and passing it a reference to the object so I can display its contents in a table view. Because I want the table view displayed a specific way, I am storing the values from the Core Data object in an array of dictionaries, then using the array to populate the table view. This works great, and I got editing and saving working properly. (I'm not using a fetched results controller because I don't have anything to sort on.)

    The issue with my current code is that if one of the items in the object is missing, I end up trying to put nil into the dictionary, which won't work. I'm looking for a clean way to handle this. I could do the following, but I can't help feeling like there's a better way. Here, passedEntry is the Core Data object handed to the view controller when it is pushed; let's say it contains firstName, lastName, and age, all optional:

        if ([passedEntry firstName] != nil) {
            [dictionary setObject:[passedEntry firstName] forKey:@"firstName"];
        } else {
            [dictionary setObject:@"" forKey:@"firstName"];
        }

    And so on. This works, but it feels kludgy, especially if I end up adding more items to the Core Data object down the road.


  • C# object (2 numbers) performing 2 calculations

    - by Chris
    I have a couple of questions about creating an object that holds two numbers, and how to "call" it. Initializing the object:

        Tweetal t1, t2, t3, t4, t5, t6;
        t1 = new Tweetal();        // a: 0, b: 0
        t2 = new Tweetal(-2);      // a: -2, b: -2
        t3 = new Tweetal(5, 17);   // a: 5, b: 17
        t4 = new Tweetal(t3);      // a: 5, b: 17

        Console.Write("t1 = " + t1);
        Console.Write("\tt2 = " + t2);
        Console.Write("\tt3 = " + t3);
        Console.Write("\tt4 = " + t4);
        Console.WriteLine("\n");

        t1 = t1.Som(t2);
        t4 = t2.Som(t2);
        //......

    The two things I want to do with this object are taking the sum and the sum-with-a-number:

    Sum: t4 = t2.Som(t3); (this would result in t4: a: 3 (-2 + 5), b: 15 (-2 + 17))

    SumNumber: t1 = t3.Som(8); (this would result in t1: a: 13, b: 25)

    Next is my code for the object (in a separate class), but how exactly do I perform the simple sum calculation when I call it on, for example, t2?

        public class Tweetal : Object
        {
            private int a;
            private int b;

            public Tweetal()
            {
                // ???
                // Sum(..., ...)
            }

            public Tweetal(int a)
            {
                // ???
                // Sum(..., ...)
            }

            public Tweetal(int a, int b)
            {
                // ???
            }

            public Tweetal(Tweetal t) // to call upon the object if I pass t1, t2, t3, ... instead of a direct number value
            {
                // ????
            }

            public void Sum(int aValue, int bValue)
            {
                // a = ???
                // b = ???
            }

            public void SumNumber(int aValue, int bValue)
            {
            }

            public override string ToString()
            {
                return string.Format("({0}, {1})", a, b);
            } /*ToString*/
        }
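
    One way the skeleton could be completed (a sketch under my own assumptions: the usage code calls Som(), while the skeleton declares Sum()/SumNumber(), so I follow the usage and return a new Tweetal from each operation):

        using System;

        public class Tweetal
        {
            private int a;
            private int b;

            public Tweetal() : this(0, 0) { }       // a: 0, b: 0
            public Tweetal(int x) : this(x, x) { }  // a: x, b: x

            public Tweetal(int a, int b)
            {
                this.a = a;
                this.b = b;
            }

            // Copy constructor: lets you write new Tweetal(t3).
            public Tweetal(Tweetal t) : this(t.a, t.b) { }

            // Pairwise sum with another Tweetal: (a1 + a2, b1 + b2).
            public Tweetal Som(Tweetal t)
            {
                return new Tweetal(a + t.a, b + t.b);
            }

            // Sum with a plain number, added to both components.
            public Tweetal Som(int n)
            {
                return new Tweetal(a + n, b + n);
            }

            public override string ToString()
            {
                return string.Format("({0}, {1})", a, b);
            }
        }

    With this, t4 = t2.Som(t3) yields (3, 15) and t1 = t3.Som(8) yields (13, 25), matching the expected results above.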


  • WCF - Return object without serializing?

    - by Mayo
    One of my WCF functions returns an object that has a member variable of a type from another library that is beyond my control. I cannot decorate that library's classes. In fact, I cannot even use a DataContractSurrogate, because the library's classes have private member variables that are essential to operation (i.e. if I return the object without those private member variables, the public properties throw exceptions).

    If I say that interoperability for this particular method is not needed (at least until the owners of this library can revise it to make their objects serializable), is it possible for me to use WCF to return this object such that it can at least be consumed by a .NET client? How do I go about doing that?

    Update: I am adding pseudo code below...

        // My code, I have control
        [DataContract]
        public class MyObject
        {
            private TheirObject theirObject;

            [DataMember]
            public int SomeNumber
            {
                get { return theirObject.SomeNumber; } // public property exposed
                private set { }
            }
        }

        // Their code, I have no control
        public class TheirObject
        {
            private TheirOtherObject theirOtherObject;

            public int SomeNumber
            {
                get { return theirOtherObject.SomeOtherProperty; }
                set { /* ... */ }
            }
        }

    I've tried adding DataMember to my instance of their object, making it public, using a DataContractSurrogate, and even manually streaming the object. In all cases, I get some error that eventually leads back to their object not being explicitly serializable.
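
    One workaround, sketched under assumptions (TheirObject is the post's type; the DTO and mapper names are mine): don't try to serialize their object at all. Copy the values you need into a plain data contract at the service boundary, so only the DTO crosses the wire and TheirObject stays server-side.

        using System.Runtime.Serialization;

        // A serializable snapshot of just the values the client needs.
        [DataContract]
        public class MyObjectDto
        {
            [DataMember]
            public int SomeNumber { get; set; }
        }

        public static class MyObjectMapper
        {
            // theirObject is the live, non-serializable library instance.
            public static MyObjectDto ToDto(TheirObject theirObject)
            {
                return new MyObjectDto { SomeNumber = theirObject.SomeNumber };
            }
        }

    The service method then returns MyObjectDto instead of MyObject. This gives up nothing WCF could have delivered anyway, since the library type could never be faithfully reconstructed on the client.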


  • SSH from mac to linux -> start gnome-session -> X11 keyboard mapping all messed up.

    - by Justin
    I have 2 computers: echo.local is running Ubuntu 9.04, and justin.local is running Mac OS X 10.6.1. The X11 version on the Mac is 2.3.4.

    I open X11 on the Mac and open a new xterm window (Applications Menu - Terminal): everything is fine, and the keyboard works as expected. I do ssh -X echo.local from the Mac (connecting to the Linux box), and from the Linux command prompt start xterm: everything is fine, and the keyboard works as expected.

    I run gnome-session from the Linux command prompt (through SSH): GNOME launches, but the keyboard mapping is all types of screwed up. If I kill gnome-session and open an xterm via SSH, the keyboard mapping is still screwed up. If I then kill the SSH session entirely and do X11 - Applications Menu - Terminal, opening a brand-new xterm window on the Mac with no SSH session running at all... the keyboard mapping is still screwed up. Only after I quit and relaunch X11 is the keyboard mapping back to normal.

    The keyboard layout under GNOME is Apple-MacBook/MacBook Pro.


  • Generic Aggregation of C++ Objects by Attribute When Attribute Name is Unknown at Runtime

    - by stretch
    I'm currently implementing a system with a number of classes representing objects such as client, business, product, etc. -- standard business logic. As one might expect, each class has a number of standard attributes. I have a long list of essentially identical requirements such as:

    - the ability to retrieve all businesses whose industry is manufacturing

    - the ability to retrieve all clients based in London

    Class business has attribute sector, and client has attribute location. Clearly this is a relational problem, and in pseudo-SQL it would look something like:

        SELECT ALL business in business' WHERE sector == manufacturing

    Unfortunately plugging into a DB is not an option. What I want to do is have a single generic aggregation function whose signature would take the form:

        vector<generic> genericAggregation(class, attribute, value);

    where class is the class of object I want to aggregate, and attribute and value are the class attribute and value of interest. In my example I've put vector as the return type, but this wouldn't work; it would probably be better to declare a vector of the relevant class type and pass it as an argument. But this isn't the main problem. How can I accept arguments in string form for class, attribute and value, and then map them in a generic object aggregation function?

    Since it's rude not to post code, below is a dummy program which creates a bunch of objects of imaginatively named classes. Included is a specific aggregation function which returns a vector of B objects whose A object is equal to an id specified at the command line, e.g. ./aggregations 5, which returns all B's whose A object's i attribute is equal to 5. See below:

        #include <iostream>
        #include <cstdlib>   // for atoi
        #include <sstream>
        #include <string>
        #include <vector>

        using namespace std;

        // First imaginatively named dummy class
        class A
        {
        private:
            int i;
            double d;
            string s;

        public:
            A() {}
            A(int i, double d, string s)
            {
                this->i = i;
                this->d = d;
                this->s = s;
            }
            ~A() {}

            int getInt() { return i; }
            double getDouble() { return d; }
            string getString() { return s; }
        };

        // Second imaginatively named dummy class
        class B
        {
        private:
            int i;
            double d;
            string s;
            A *a;

        public:
            B(int i, double d, string s, A *a)
            {
                this->i = i;
                this->d = d;
                this->s = s;
                this->a = a;
            }
            ~B() {}

            int getInt() { return i; }
            double getDouble() { return d; }
            string getString() { return s; }
            A* getA() { return a; }
        };

        // Containers for dummy class objects
        vector<A> a_vec(10);
        vector<B> b_vec; // 100

        // Util function, not important...
        string int2string(int number)
        {
            stringstream ss;
            ss << number;
            return ss.str();
        }

        // Example function that returns a new vector containing only B objects
        // whose A object's i attribute is equal to 'id'
        vector<B> getBbyA(int id)
        {
            vector<B> result;
            for (int i = 0; i < b_vec.size(); i++) {
                if (b_vec.at(i).getA()->getInt() == id) {
                    result.push_back(b_vec.at(i));
                }
            }
            return result;
        }

        int main(int argc, char** argv)
        {
            // Create some A's and B's, each B has an A...
            // Each of the 10 A's is associated with 10 B's.
            for (int i = 0; i < 10; ++i) {
                A a(i, (double)i, int2string(i));
                a_vec.at(i) = a;
                for (int j = 0; j < 10; j++) {
                    B b((i * 10) + j, (double)j, int2string(i), &a_vec.at(i));
                    b_vec.push_back(b);
                }
            }

            // Got some objects, so let's do some aggregation.
            // Call the example aggregation function to return all B objects
            // whose A object has i attribute equal to argv[1].
            vector<B> result = getBbyA(atoi(argv[1]));

            // If some B's were found, print them; else don't...
            if (result.size() != 0) {
                for (int i = 0; i < result.size(); i++) {
                    cout << result.at(i).getInt() << " "
                         << result.at(i).getA()->getInt() << endl;
                }
            } else {
                cout << "No B's had A's with attribute i equal to " << argv[1] << endl;
            }
            return 0;
        }

    Compile with g++ -o aggregations aggregations.cpp if you wish :)

    Instead of implementing a separate aggregation function (i.e. getBbyA() in the example), I'd like to have a single generic aggregation function which accounts for all possible class/attribute pairs, such that all aggregation requirements are met; and in the event that additional attributes or aggregation requirements are added later, these will automatically be accounted for. So there are a few issues here, but the main one I'm seeking insight into is how to map a runtime argument to a class attribute. I hope I've provided enough detail to adequately describe what I'm trying to do...


  • C# Select clause returns system exception instead of relevant object

    - by Kashif
    I am trying to use the select clause to pick out an object whose name field matches a specified name from a database query, as follows:

        objectQuery = from obj in objectList
                      where obj.Equals(objectName)
                      select obj;

    In the results view of my query, I get:

        base {System.SystemException} = {"Boolean Equals(System.Object)"}

    where I should be expecting something like a Car, Make, or Model. Would someone please explain what I am doing wrong here? The method in question can be seen here:

        // This function searches the database's table for a single object
        // that matches the 'Name' property with 'objectName'.
        public static T Read<T>(string objectName) where T : IEquatable<T>
        {
            using (ISession session = NHibernateHelper.OpenSession())
            {
                // pull (query) all the objects from the table in the database
                IQueryable<T> objectList = session.Query<T>();

                // return the number of objects in the table
                // alternative: int count = objectList.Count<T>();
                int count = objectList.Count();

                IQueryable<T> objectQuery = null; // our queryable list of objects
                T foundObject = default(T);       // reference for the found object

                if (count > 0)
                {
                    // give me all objects whose name matches 'objectName'
                    objectQuery = from obj in objectList
                                  where obj.Equals(objectName)
                                  select obj;

                    // make sure that 'objectQuery' has only one object in it
                    try
                    {
                        foundObject = (T)objectQuery.Single();
                    }
                    catch
                    {
                        return default(T);
                    }

                    // output some information to the console (output screen)
                    Console.WriteLine("Read Make: " + foundObject.ToString());
                }

                // pass the reference of the found object on to whoever asked for it
                return foundObject;
            }
        }

    Note that I am using the interface IEquatable<T> in my method descriptor. An example of the classes I am trying to pull from the database is:

        public class Make : IEquatable<Make>
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<Model> Models { get; set; }

            public Make()
            {
                // this public no-argument constructor is required for NHibernate
            }

            public Make(string makeName)
            {
                this.Name = makeName;
            }

            public override string ToString()
            {
                return Name;
            }

            // Implementation of the IEquatable<T> interface
            public virtual bool Equals(Make make)
            {
                return this.Id == make.Id;
            }

            // Overload for comparing against a name (not part of the interface)
            public virtual bool Equals(String name)
            {
                return this.Name.Equals(name);
            }
        }

    And the interface is described simply as:

        public interface IEquatable<T>
        {
            bool Equals(T obj);
        }
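
    A possible explanation, with a sketch of one fix (the INamed interface is my invention, not from the post): inside Read<T>, the compiler only knows the members the generic constraint guarantees. The custom IEquatable<T> only supplies bool Equals(T), and objectName is a string, not a T, so obj.Equals(objectName) binds to Object.Equals(object) -- the very method named in the exception -- which the LINQ provider cannot translate to SQL. Constraining on the property actually being filtered avoids this:

        using System.Linq;

        public interface INamed
        {
            string Name { get; }
        }

        public static class Repository
        {
            // Filters on Name directly, which translates cleanly to
            // SQL: ... WHERE Name = @objectName
            public static T ReadByName<T>(IQueryable<T> objectList, string objectName)
                where T : class, INamed
            {
                return objectList.SingleOrDefault(obj => obj.Name == objectName);
            }
        }

    Make (and the other entities) would implement INamed, which they effectively already do via their Name property.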


  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.

    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.

    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: it's really hard to get right; it's really easy to get wrong and never know it because things build anyway; and even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other.

    From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    - To analyze an object, you have to build it first. This is a classic chicken-and-egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.

    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.

    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon-to-be-inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.

    - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.

    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?

    - It ought to be very fast to build stub objects, as there are no input objects to process.

    - Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.

    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization.

    - We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    - Present the same set of global symbols, with the same ELF versioning, as the real object.

    - Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.

    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.

    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.

    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.

    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.

    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    - A Perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.

    - Another Perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

    - The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.

    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.

    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.

    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive one, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stubs in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language
        that will be applicable to ON, and which will require all sharable
        object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
             -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so, I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built:

    - An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.

    - A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.

    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.

    - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    - A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.

    - A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule.

    - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.

    - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld.

    With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form.

    My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.

    - It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.

    - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with:

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with:

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • MySQL for Excel 1.1.0 GA has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.0 GA, one of our newest products contained in the MySQL Installer suite. You can download it from our official Downloads page at http://dev.mysql.com/downloads/installer/.

    The 1.1.0 release of MySQL for Excel introduces the following feature:

    Edit MySQL Data. This may be the coolest feature so far; users will be able to edit the data in a MySQL table using MS Excel in a very friendly and intuitive way. Edit Data supports inserting new rows, deleting existing rows and updating existing data as easily as playing with data in an Excel spreadsheet and pushing the changes back to the server.

    This version also contains the following bug fixes:

    - Enabled the following checkboxes in the Append Data dialog's Advanced Options, and added code to the Append Data dialog to use them as follows:
      - "Automatically store the column mapping for the given table": if checked, the current mapping is stored automatically after clicking the Append button, provided the append operation succeeds and no mapping already exists for the current connection.schema.table; the new mapping is stored with a proposed name of Mapping.
      - "Reload stored column mapping for the selected table automatically": if checked, the first stored mapping in which all column names in the source grid match all column names in the target grid is automatically selected and applied when the Append Data dialog is loaded.
    - Fixed code in Append Data that applies a stored column mapping so it skips target columns whose associated mapping is empty (saved as a -1).
    - Enclosed the Add-In's startup code in a try-catch block in order to log any error thrown during startup, and added information messages to the log at the beginning of the Add-In's startup code and at the end of the shutdown code. Also changed the wrapper method that calls the MySQLUtility to write messages to the log, to make logging easier, and updated the log calls throughout all the code that contains a try-catch block.
    - Added code to the main WiX configuration file to check whether a newer version is already installed and, if so, abort the installation.
    - Fixed code to refresh the Import Procedure form's preview grid data source so its contents are repainted every time the Call button is pressed.
    - Added code to re-pull connections after connections are migrated from Excel to Workbench.
    - Fixed code so that after Append Data's Automatic Mapping is performed, any subsequent change to a mapping resets it to a Manual Mapping.
    - Added code to the InfoDialog class to set the button text to "Show Details" or "Hide Details" depending on the status of the Details text container.
    - Fixed a GUID in the main WiX configuration file so previous versions are now uninstalled during a new installation.
    - Added an option to the Export Data dialog's Advanced Options to remove columns with no data; by default the Export dialog will only flag those columns as Excluded.
    - Added code to display a warning and paint a column red if its column name in the Export Data dialog is not set, and to display a warning if the table name is not set. Warnings for Excluded columns are stacked but not displayed; they are displayed normally once a column is no longer Excluded.
    - Added code to prevent the Append and Export of data if more than one selection is made (selecting more than one area by holding the Ctrl key while selecting Excel cells).
    - Fixed a problem that prevented MySQL for Excel from loading when the Windows 7 display settings are set to Adjust for best performance (Oracle bug 14521405 - UNHANDLED EXCEPTION IS THROWN WHEN LOADING MYSQL FOR EXCEL).
    - Fixed code that renames the auto-generated primary key column when the table name changes, since it was not detecting whether a column with the same name already existed in the table. The column duplication was not actually happening; it only looked that way because the automatically generated PK column was not detecting that a column already had that name.
    - Fixed code in the Export Data dialog to always set an empty string instead of null in the MySQLDataColumn properties that store MySQL data types (MySQLDataType, RowsFrom1stDataType and RowsFrom2ndDataType).
    - Added code to display a warning and color red any column whose data type has not been set by the user or has been manually cleared.
    - Added code to output exception messages to the application log consistently in all places where exceptions are caught.

    A series of blog posts explaining the new Edit MySQL Data feature and the other existing features is coming on this blog. You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.5/en/mysql-for-excel.html. You can also post questions on our MySQL for Excel forum found at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • Using mocks to set up object even if you will not be mocking any behavior or verifying any interaction with it?

    - by smp7d
    When building a unit test, is it appropriate to use a mocking tool to assist you in setting up an object even if you will not be mocking any behavior or verifying any interaction with that object? Here is a simple example in pseudo-code:

    //an object we actually want to mock
    Object someMockedObject = Mock(Object.class);
    EqualityChecker equalityChecker = new EqualityChecker(someMockedObject);

    //an object we are mocking only to avoid figuring out how to instantiate or
    //tying ourselves to some constructor that may be removed in the future
    ComplicatedObject someObjectThatIsHardToInstantiate = Mock(ComplicatedObject.class);

    //set the expectation on the mock
    When(someMockedObject).equals(someObjectThatIsHardToInstantiate).return(false);

    Assert(equalityChecker.check(someObjectThatIsHardToInstantiate)).isFalse();

    //verify that the mock was interacted with properly
    Verify(someMockedObject).equals(someObjectThatIsHardToInstantiate).oneTime();

    Is it appropriate to mock ComplicatedObject in this scenario?
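
    For concreteness, here is a minimal sketch of the same test in C# with Moq and NUnit. Every type in it is hypothetical and only mirrors the pseudo-code above; in particular, an IChecker interface stands in for Object.equals, since the Equals method itself is awkward to set up in most mocking frameworks:

    using Moq;
    using NUnit.Framework;

    // Hypothetical types mirroring the pseudo-code above.
    public class ComplicatedObject { /* assume this is hard to construct directly */ }

    public interface IChecker
    {
        bool Matches(ComplicatedObject other);
    }

    public class EqualityChecker
    {
        private readonly IChecker _checker;
        public EqualityChecker(IChecker checker) { _checker = checker; }
        public bool Check(ComplicatedObject candidate) { return _checker.Matches(candidate); }
    }

    [TestFixture]
    public class EqualityCheckerTests
    {
        [Test]
        public void Check_ReturnsFalse_WhenCheckerReportsNoMatch()
        {
            // The object we actually want to mock.
            var checker = new Mock<IChecker>();

            // Mocked only to obtain an instance without touching its constructors.
            var hardToInstantiate = new Mock<ComplicatedObject>().Object;

            checker.Setup(c => c.Matches(hardToInstantiate)).Returns(false);

            var equalityChecker = new EqualityChecker(checker.Object);
            Assert.IsFalse(equalityChecker.Check(hardToInstantiate));

            // Verify the mock was interacted with exactly once.
            checker.Verify(c => c.Matches(hardToInstantiate), Times.Once());
        }
    }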

    Read the article

  • Object validator - is this good design?

    - by neo2862
    I'm working on a project where the API methods I write have to return different "views" of domain objects, like this:

    namespace View.Product
    {
        public class SearchResult : View
        {
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public class Profile : View
        {
            public string Name { get; set; }
            public decimal Price { get; set; }

            [UseValidationRuleset("FreeText")]
            public string Description { get; set; }

            [SuppressValidation]
            public string Comment { get; set; }
        }
    }

    These are also the arguments of setter methods in the API, which have to be validated before storing them in the DB. I wrote an object validator that lets the user define validation rulesets in an XML file and checks whether an object conforms to those rules:

    [Validatable]
    public class View
    {
        [SuppressValidation]
        public ValidationError[] ValidationErrors
        {
            get { return Validator.Validate(this); }
        }
    }

    public static class Validator
    {
        private static Dictionary<string, Ruleset> Rulesets;

        static Validator()
        {
            // read rulesets from xml
        }

        public static ValidationError[] Validate(object obj)
        {
            // check if obj is decorated with ValidatableAttribute
            // if not, return an empty array (successful validation)
            // iterate over the properties of obj
            // - if the property is decorated with SuppressValidationAttribute,
            //   continue
            // - if it is decorated with UseValidationRulesetAttribute,
            //   use the ruleset specified to call
            //   Validate(object obj, string fieldName, string rulesetName)
            // - otherwise, get the name of the property using reflection and
            //   use that as the ruleset name
        }

        private static List<ValidationError> Validate(object obj, string fieldName, string rulesetName)
        {
            // check if the ruleset exists, if not, throw exception
            // call the ruleset's Validate method and return the results
        }
    }

    public class Ruleset
    {
        public Type Type { get; set; }
        public Rule[] Rules { get; set; }

        public List<ValidationError> Validate(object property, string propertyName)
        {
            // check if property is of type Type
            // if not, throw exception
            // iterate over the Rules and call their Validate methods
            // return a list of their return values
        }
    }

    public abstract class Rule
    {
        public Type Type { get; protected set; }
        public abstract ValidationError Validate(object value, string propertyName);
    }

    public class StringRegexRule : Rule
    {
        public string Regex { get; set; }

        public StringRegexRule()
        {
            Type = typeof(string);
        }

        public override ValidationError Validate(object value, string propertyName)
        {
            // see if Regex matches value and return
            // null or a ValidationError
        }
    }

    Phew... Thanks for reading all of this. I've already implemented it and it works nicely, and I'm planning to extend it to validate the contents of IEnumerable fields and other fields that are Validatable. What I'm particularly concerned about is that if no ruleset is specified, the validator tries to use the name of the property as the ruleset name. (If you don't want that behavior, you can use [SuppressValidation].) This makes the code much less cluttered (no need to use [UseValidationRuleset("something")] on every single property) but it somehow doesn't feel right. I can't decide if it's awful or awesome. What do you think? Any suggestions on the other parts of this design are welcome too. I'm not very experienced and I'm grateful for any help. Also, is "Validatable" a good name? To me, it sounds pretty weird but I'm not a native English speaker.
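
    To make the naming convention concrete, here is a hypothetical usage sketch; the ruleset XML shown in the comment is invented for illustration, and only FreeText, the attributes, and the classes above come from the post:

    // Hypothetical rulesets the Validator's static constructor might load from XML:
    //   <ruleset name="Name"     type="System.String"><rule kind="StringRegex" regex="^.{1,100}$" /></ruleset>
    //   <ruleset name="FreeText" type="System.String"><rule kind="StringRegex" regex="^[^<>]*$" /></ruleset>

    var profile = new Profile
    {
        Name = "Widget",              // validated against a ruleset named "Name", by convention
        Price = 9.99m,                // also looked up by name: needs a "Price" ruleset
                                      // (or [SuppressValidation]), otherwise Validate throws
        Description = "has <markup>", // validated against "FreeText" via the explicit attribute
        Comment = "anything goes"     // skipped entirely: [SuppressValidation]
    };

    ValidationError[] errors = profile.ValidationErrors; // Description fails the FreeText regex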

    Read the article

  • How do I map a composite primary key in Entity Framework 4 code first?

    - by jamesfm
    I'm getting to grips with EF4 code first, and liking it so far. But I'm having trouble mapping an entity to a table with a composite primary key. The configuration I've tried looks like this:

    public SubscriptionUserConfiguration()
    {
        Property(u => u.SubscriptionID).IsIdentity();
        Property(u => u.UserName).IsIdentity();
    }

    Which throws this exception: Unable to infer a key for entity type 'SubscriptionUser'. What am I missing?
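
    For reference, composite keys in EF code first are normally declared with HasKey and an anonymous type rather than per-property identity calls; IsIdentity only controls value generation, not key membership. A sketch, assuming the EntityTypeConfiguration<T> fluent API (the CTP-era API in the question may name things slightly differently) and hypothetical property types:

    using System.Data.Entity.ModelConfiguration;

    public class SubscriptionUser
    {
        public int SubscriptionID { get; set; }
        public string UserName { get; set; }
    }

    public class SubscriptionUserConfiguration : EntityTypeConfiguration<SubscriptionUser>
    {
        public SubscriptionUserConfiguration()
        {
            // Declare both properties together as one composite primary key.
            HasKey(u => new { u.SubscriptionID, u.UserName });
        }
    }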

    Read the article

  • Map external directory to web.xml

    - by prometheus
    Is there an easy way to map a directory in the web.xml or other deployment descriptor (jetty.xml, etc) files? For example, if I have a directory /opt/files/ is there a way that I can access its files and sub-directories by visiting http://localhost/some-mapping/? It strikes me that there should be some simple way of doing this, but I haven't been able to find out how (via google, stackoverflow, etc). All I've found are servlets that mimic fileservers, which is not what I would like. For reference I am using jetty on an AIX box.
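
    One approach that avoids fileserver servlets is a Jetty context deployment descriptor pairing a ContextHandler with a ResourceHandler. A sketch, assuming Jetty 7+ class names (Jetty 6 used the org.mortbay.jetty.handler package instead) and the default contexts/ hot-deploy directory:

    <?xml version="1.0"?>
    <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN"
              "http://www.eclipse.org/jetty/configure.dtd">
    <!-- e.g. saved as $JETTY_HOME/contexts/files.xml -->
    <Configure class="org.eclipse.jetty.server.handler.ContextHandler">
      <Set name="contextPath">/some-mapping</Set>
      <Set name="handler">
        <New class="org.eclipse.jetty.server.handler.ResourceHandler">
          <Set name="resourceBase">/opt/files</Set>
          <Set name="directoriesListed">true</Set>
        </New>
      </Set>
    </Configure>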

    Read the article

  • Best approach to convert XML to RDF/XML using an ontology

    - by krisvandenbergh
    I have an XML file which uses the XPDL standard (which has an XML schema). What I'm trying to do now is convert its content to RDF (serialized as RDF/XML), in terms of a certain ontology. Clearly, there needs to be some sort of mapping here. I would like to do this using PHP. The thing is, I have no idea how best to do this. I know how to read an XML file, but how would the mappings occur? What would be a good approach?

    Read the article
