Search Results

Search found 23661 results on 947 pages for 'worse is better'.

Page 289/947

  • UITabBarController with viewControllers utilizing different orientations?

    - by RickiG
    Hi, I can see that this is something that has been troubling a lot of people. I have a UITabBarController with 4 view controllers, all of type UINavigationController. One of the navigation controllers gets a view controller pushed onto its stack that should be presented in landscape orientation. That view controller is a graph; it is absolutely the only place in the app where landscape makes sense. (I hide the UITabBar when it is presented so as not to lead the user to believe rotation will work everywhere.) To make a UITabBarController respond correctly to changes in orientation, all of its view controllers need to return the same value from the delegate method:

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation

    So to accommodate this behaviour I have implemented the method in all the view controllers belonging to the UITabBarController:

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
            BOOL canRotate = [defaults boolForKey:@"can_rotate"];
            return canRotate;
        }

    The "trick" is that when my can-be-landscape view controller is pushed I do this:

        - (void)viewWillAppear:(BOOL)animated {
            NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
            [defaults setBool:YES forKey:@"can_rotate"];
            [defaults synchronize];
        }

    and when it is popped, I do this:

        - (void)viewWillDisappear:(BOOL)animated {
            NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
            [defaults setBool:NO forKey:@"can_rotate"];
            [defaults synchronize];
        }

    This works really well: while the view controller is on the stack I can rotate the device and the view follows. The problem, however, is that if the user taps the "back" button on the navigation bar while in landscape mode, thus popping back to the previous view controller, that "old" view controller is of course also in landscape mode. To make things worse, because I set the BOOL to NO, the "old" view controller cannot rotate back when I turn the device to portrait. Is there a way to update everything so that none of my other view controllers are left in landscape mode when I pop the can-be-landscape view controller? I am a bit worried that if this could be forced from landscape to portrait, it should also be possible from portrait to landscape, making my "hack" unnecessary... but if it cannot, then I am back to square one. Hope I am close and that someone can help me get there, thanks.

    Read the article

  • Windows.Forms RichTextBox Control - Avoid inserting large data.

    - by SchlaWiener
    I have a Windows Form with a RichTextBox on it. The content of the RichTextBox is written to a database field that is limited to 64k of data. For my purposes that is way more than enough text to store. I have set the MaxLength property to avoid inserting more data than allowed:

        rtcControl.MaxLength = 65536

    However, that only restricts the number of characters the user is allowed to type into the control. With the formatting overhead from the Rtf I can type more text than I should be allowed to. It gets even worse if I insert a large image, which doesn't increase the TextLength at all, while the Rtf length grows quite a lot. At the moment I check the length of the RichTextBox's Rtf property in the FormClosing event and display a message to the user if it's too large. However, that is just a workaround, because I want to disallow putting more data than allowed into the control in the first place (like in a TextBox, where nothing is inserted once you exceed the MaxLength property and you hear the default beep). Any ideas how to achieve this? I already tried using a custom control which extends the RichTextBox and shadows the Rtf property to intercept the insertion, but it seems it isn't executed if I add text. Even the TextChanged event does not fire if I type something in the control.

    Read the article

  • ObjectDisposedException when .Show()'ing a form that shouldn't be disposed.

    - by user320781
    I've checked out some of the other questions, and obviously the best solution is to prevent the behaviour that causes this issue in the first place, but the problem is very intermittent and very hard to reproduce. I basically have a main form with sub forms. The sub forms are shown from menus and/or buttons on the main form like so:

        private void myToolStripMenuItem_Click(object sender, EventArgs e)
        {
            try
            {
                xDataForm.Show();
                xDataForm.Activate();
            }
            catch (ObjectDisposedException)
            {
                MessageBox.Show("ERROR 10103");
                ErrorLogging newLogger = new ErrorLogging("10103");
                Thread errorThread = new Thread(ErrorLogging.writeErrorToLog);
                errorThread.Start();
            }
        }

    and the sub forms are actually members of the main form (for better or worse; I would actually like to change this, but it would take a considerable amount of time to do so):

        public partial class FormMainScreen : Form
        {
            Form xDataForm = new xData();
            // ...(lots more here)

            public FormMainScreen(int pCount, string pName)
            {
                InitializeComponent();
                // ...
            }
            // ...
        }

    The Dispose function for the sub form is modified so that the 'close' and 'X' buttons actually hide the form, so we don't have to re-create it every time. When the main screen closes, it sets a flag to 2, so the other forms know that it is actually OK to close:

        protected override void Dispose(bool disposing)
        {
            if (FormMainScreen.isExiting == 2)
            {
                if (disposing && (components != null))
                {
                    components.Dispose();
                }
                base.Dispose(disposing);
            }
            else
            {
                if (xData.ActiveForm != null)
                {
                    xData.ActiveForm.Hide();
                }
            }
        }

    So, the question is: why would this work over and over again flawlessly, but, literally, about once in a thousand times, cause an exception? Or rather, why is my form being disposed? I had a suspicion that the garbage collector was getting confused, because it occurs slightly more frequently after the application has been running for many hours.

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document db, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products, and the product model would be a pretty bad fit for an RDBMS because there are many different kinds of products with unique attributes. For example, there are books with isbn, authors, title, pages etc., as well as DVDs with play time, directors, artists etc., and quite a few more types. In the end I'd have about 9 different product types with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used approaches in an RDBMS would be:

    EAV model, which has pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products etc.).
    Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row wise), I don't have any experience with tables with more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
    Create a table for each product type: I personally don't like this approach as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed in list views etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product DTO but only expose the relevant properties.

    My questions now: Are my performance concerns with the many-columns approach valid? Did I overlook something, and is there a much better way to do it in an RDBMS? Is MongoDB on Windows stable enough? Does my approach with different behaviour wrappers make sense?
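
    To make the schema question concrete, here is a minimal sketch of how two product types with mostly disjoint attributes can live side by side in one collection. It uses Python and the pymongo driver purely for illustration (the post itself is about the NoRM C# driver); the field values, database/collection names and the assumption of a local MongoDB instance are all illustrative.

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")  # assumes a local MongoDB instance
        products = client["shop"]["products"]              # database/collection names are illustrative

        # Two product types with mostly disjoint attributes stored in the same collection.
        products.insert_one({
            "type": "book",
            "title": "Example Book",
            "isbn": "978-0-00-000000-0",
            "authors": ["A. Author"],
            "pages": 320,
        })
        products.insert_one({
            "type": "dvd",
            "title": "Example DVD",
            "directors": ["D. Director"],
            "artists": ["A. Artist"],
            "play_time_minutes": 120,
        })

        # A mixed list view (start page, recommendations) only needs the common fields.
        for p in products.find({}, {"title": 1, "type": 1}):
            print(p["type"], p["title"])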

    Read the article

  • Why is my code slower using #import "progid:typelib" than using "MFC Class From TypeLib"?

    - by Pakman
    I am writing an automation client in Visual C++ with MFC. If I right-click on my solution » Add » Class, I have the option to select "MFC Class From TypeLib". Selecting this option generates source/header files for all interfaces, which allows me to write code such as:

        #include "CApplication.h"
        #include "CDocument.h"

        // ... connect to automation server ...
        CApplication *myApp = new CApplication(pDisp);
        CDocument myDoc = myApp->get_ActiveDocument();

    Using this method, my benchmarking function that makes about 12000 automation calls takes 1 second. Meanwhile, the following code:

        #import "progid:Library.Application"

        Library::IApplicationPtr myApp;
        // ... connect to automation server ...
        Library::IDocumentPtr myDoc = myApp->GetActiveDocument();

    takes about 2.4 seconds for the same benchmark. I assume the smart-pointer implementation is slowing me down, but I don't know why. Even worse, I'm not sure how to use the #import construct to achieve the speeds the first method yields. Is this possible? How, or why not? Thanks for your time!

    Read the article

  • Truly declarative language?

    - by gjvdkamp
    Hi all, does anyone know of a truly declarative language? The behaviour I'm looking for is kind of what Excel does, where I can define variables and formulas, and have a formula's result change when its inputs change (without having to set the answer again myself). The behaviour I'm looking for is best shown with this pseudo code:

        X = 10     // define and assign two variables
        Y = 20;
        Z = X + Y  // declare a formula that uses these two variables
        X = 50     // change one of the input variables
        ?Z         // asking for Z should now give 70 (50 + 20)

    I've tried this in a lot of languages like F#, Python, Matlab etc., but every time I try it they come up with 30 instead of 70. Which is correct from an imperative point of view, but I'm looking for a more declarative behaviour, if you know what I mean. And this is just a very simple calculation; when things get more difficult it should handle stuff like recursion and memoization automagically. The code below would obviously work in C#, but it's just so much code for the job; I'm looking for something a bit more to the point, without all that 'technical noise':

        class BlaBla {
            public int X { get; set; } // this used to be even worse before 3.0
            public int Y { get; set; }
            public int Z { get { return X + Y; } }
        }

        static void main() {
            BlaBla bla = new BlaBla();
            bla.X = 10;
            bla.Y = 20;
            // can't define anything here
            bla.X = 50; // bit pointless here but I'll do it anyway.
            Console.WriteLine(bla.Z); // 70, hurray!
        }

    This just seems like so much code, curly braces and semicolons that add nothing. Is there a language/application (apart from Excel) that does this? Maybe I'm not doing it right in the mentioned languages, or I've completely missed an app that does just this. I prototyped a language/application that does this (along with some other stuff) and am thinking of productizing it; I just can't believe it's not out there yet, and I don't want to waste my time. Thanks in advance, Gert-Jan
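
    For comparison, a minimal sketch of the same recompute-on-read behaviour as the C# Z property above, written in Python; class and attribute names are illustrative, and like the C# version it recomputes on every read rather than tracking dependencies the way a spreadsheet does.

        class Cell:
            def __init__(self, x, y):
                self.x = x  # plain input values
                self.y = y

            @property
            def z(self):
                # recomputed every time it is read, so it always reflects the latest inputs
                return self.x + self.y

        cell = Cell(10, 20)
        cell.x = 50
        print(cell.z)  # 70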

    Read the article

  • Switching to WPF, the best use of time at Visual Studio Launch 2010

    - by Stewbob
    Yes, this is a programming-related question, if a little indirectly (that's why I marked it Community Wiki right away). For better or worse, I am switching from Winforms to WPF in April. I am also going to be in attendance at the Visual Studio Launch in Las Vegas. I have a real need to get up to speed quickly in WPF, so my question is: What sessions are going to be the best use of my time? I've got some picked out already, but I'm looking for some more advice on how to wade through all the marketing fluff and get some real educational value out of these few days. I have not been to one of these events before, so I don't really know how much is marketing hype, and how much is solid content. A couple of the workshops look interesting (VPR02 and VPS02), but I don't know enough about the actual content of these to justify the extra expense right now. Any thoughts there would be appreciated. And yes, I do have WPF learning planned other than just these few days in Vegas, but since I'm going to be there anyway, I want to learn as much as I can in the time available.

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything, I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that they both produce identical results (basically just feeding a couple of different streams through the decoder and crc32-ing the outputs). When using the "-server" option with Sun JDK 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times then become constant for each pass starting at the 3rd iteration of the test, yet the optimized version stays 12% slower than in client mode. I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of some bit manipulation operations (stream decoding) and lots of basic floating-point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (feeds the stream to the test and excludes disk IO from the measurements) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only that it is 15% instead of 12% there). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation in the measured execution times (using System.nanoTime, btw), the variation between different test runs with the same settings never exceeded 2%, usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine. Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?

    Read the article

  • Python, Ruby, and C#: Use cases?

    - by thaorius
    Hi everyone. For as long as I can remember, I've always had a "favorite" language which I use for most projects, until, for some particular reason, there is no way/point in using it for project XYZ. At that point I find myself rusty (and sometimes outdated) on the other languages + libraries + toolchains. So I decided I would just use some languages/libs/tools for some things and some for others, effectively keeping them all fresh (there would obviously be exceptions; I'm not looking for an arbitrary rule set, but some guidelines). I wanted an opinion on what would be your standard use cases (new projects) for Python, Ruby, and C# (Mono). At the moment, my list looks like this:

    C#: mid-large sized projects (mainly server-side daemons); high performance (I hardly ever need C's performance, but Python just doesn't cut it); relatively low footprint (vs the JVM, for example)
    Ruby: web applications
    Python: general-use scripts (automation, system config, etc); small-mid sized projects; prototyping; web applications

    About Ruby, I have no idea what to use it for that I can't use Python for (especially considering Python is more easily found installed by default), and I like both languages (though I'm really new to Ruby), which makes things even worse. As for C#, I have not used a Windows-powered computer in a few years, I don't make things for Windows computers, and I don't mind waiting for Mono to implement some new features. That being said, I haven't found many people on the internet using it for server-side *nix programming (not web related). I would appreciate some insight on this too. Thanks for your time.

    Read the article

  • ASP.NET inline code in a server control

    - by John
    OK, we had a problem come up today at work. It is a strange one that I never would have even thought to try:

        <form id="form1" runat="server" method="post" action="Default.aspx?id=<%= ID %>" >

    It is very ugly and I wouldn't ever have tried it myself. It came up in some code that was written years ago but had been working up until this weekend, after a bunch of updates were installed on the client's web server where the code is hosted. The actual result of this is the following HTML:

        <form name="form1" method="post" action="Default.aspx?id=&lt;%= ID %>" id="form1">

    The URL ends up like this:

        http://localhost:6735/Default.aspx?id=<%= ID %>

    Which, as you can see, demonstrates that the "<" symbol is being encoded before ASP.NET actually processes the page. It seems strange to me: even though it is not pretty by any means, I thought it should work. I'm confused. To make matters worse, the client insists that it is a bug in IE since it appears to work in Firefox. In fact, it is broken in Firefox as well, except that for some reason Firefox treats the id as a 0. Any ideas on why this happens and how to fix it easily? Everything I try to render within the server control ends up getting escaped.

    Edit: OK, I found a "fix":

        <form id="form1" runat="server" method="post" action='<%# String.Format("Default.aspx?id={0}", 5) %>' >

    But that requires me to call DataBind, which adds more of a hack to the original hack. Guess if nobody thinks of anything else I'll have to go with that.

    Read the article

  • What should I do if I have a factory method which requires different parameters for different implementations?

    - by Sam Holder
    I have an interface, IMessage, and a class which has several methods for creating different types of message, like so:

        class MessageService
        {
            IMessage TypeAMessage(param 1, param 2)
            IMessage TypeBMessage(param 1, param 2, param 3, param 4)
            IMessage TypeCMessage(param 1, param 2, param 3)
            IMessage TypeDMessage(param 1)
        }

    I don't want this class to do all the work of creating these messages, so it simply delegates to a MessageCreatorFactory which produces an IMessageCreator depending on the type given (an enumeration based on the type of the message: TypeA, TypeB, TypeC etc.):

        interface IMessageCreator
        {
            IMessage Create(MessageParams params);
        }

    So I have 4 implementations of IMessageCreator: TypeAMessageCreator, TypeBMessageCreator, TypeCMessageCreator, TypeDMessageCreator. I'm OK with this except for the fact that, because each type requires different parameters, I have had to create a MessageParams object which contains 4 properties for the 4 different params, but only some of them are used by each IMessageCreator. Is there an alternative to this? One other thought I had was to have a param array as the parameter of the Create method, but this seems even worse as you don't have any idea what the params are. Or to create several overloads of Create in the interface and have some of them throw an exception if they are not suitable for that particular implementation (i.e. you called a method which needs more params, so you should have called one of the other overloads). Does this seem OK? Is there a better solution?

    Read the article

  • Running mysql query using node blocks the whole process and then times out

    - by lobengula3rd
    I have a Node.js script that uses the mysql npm package (Felix's). I have a procedure stored in my DB which I call when the user selects an option to create his own instance of the program. The user chooses for how long he wants that data to be initialized, which is supposed to be between 1 and 2 years. So if he chooses 1 year, this query will insert around 20,000 rows into one table. If I run this query on a local DB it takes around 30 seconds (I suppose that is reasonable because it's a big query which should be run only once every 1 or 2 years, so that's OK). For some reason my Node script freezes, as if it can't handle any more calls from other users. The even worse problem is that after about 2 minutes my client UI gets an error from the server. At this point not all the data that was supposed to enter the DB has been inserted. After waiting about another minute, all the data finally gets to the DB, and only then will it accept new requests. This is my connection:

        this.connection = mysql.createConnection({
            host     : '********rds.amazonaws.com',
            user     : 'admin',
            password : '******',
            database : '*****'
        });

    and this is my query function:

        this.createCourts = function (req, res, next){
            connection.query('CALL filldates("' + req.body['startDate'] + '","' + req.body['endDate'] + '","'
                + req.body['numOfCourts'] + '","' + req.body['duration'] + '","' + req.body['sundayOpen'] + '","'
                + req.body['mondayOpen'] + '","' + req.body['tuesdayOpen'] + '","' + req.body['wednesdayOpen'] + '","'
                + req.body['thursdayOpen'] + '","' + req.body['fridayOpen'] + '","' + req.body['saturdayOpen'] + '","'
                + req.body['sundayClose'] + '","' + req.body['mondayClose'] + '","' + req.body['tuesdayClose'] + '","'
                + req.body['wednesdayClose'] + '","' + req.body['thursdayClose'] + '","' + req.body['fridayClose'] + '","'
                + req.body['saturdayClose'] + '");',
                function(err){
                    if (err){
                        console.log(err);
                    }
                    else return res.send(200);
                });
        };

    What am I missing here? As I understand it, connection.query should be async, so why is it actually blocking my Node script? Thanks.

    Read the article

  • Can I get rid of this read lock?

    - by Pieter
    I have the following helper class (simplified):

        public static class Cache
        {
            private static readonly object _syncRoot = new object();
            private static Dictionary<Type, string> _lookup = new Dictionary<Type, string>();

            public static void Add(Type type, string value)
            {
                lock (_syncRoot)
                {
                    _lookup.Add(type, value);
                }
            }

            public static string Lookup(Type type)
            {
                string result;
                lock (_syncRoot)
                {
                    _lookup.TryGetValue(type, out result);
                }
                return result;
            }
        }

    Add will be called roughly 10/100 times in the application, and Lookup will be called by many threads, many thousands of times. What I would like is to get rid of the read lock. How do you normally get rid of the read lock in this situation? I have the following ideas:

    Require that _lookup is stable before the application starts operation. It could be built up from an attribute; this happens automatically through the static constructor of the class the attribute is assigned to. Requiring this would mean going through all types that could have the attribute and calling RuntimeHelpers.RunClassConstructor, which is an expensive operation.

    Move to copy-on-write (COW) semantics:

        public static void Add(Type type, string value)
        {
            lock (_syncRoot)
            {
                var lookup = new Dictionary<Type, string>(_lookup);
                lookup.Add(type, value);
                _lookup = lookup;
            }
        }

    (with the lock (_syncRoot) removed in the Lookup method). The problem with this is that it uses an unnecessary amount of memory (which might not be a problem), and I would probably make _lookup volatile, but I'm not sure how this should be applied. (Jon Skeet's comment here gives me pause.)

    Use a ReaderWriterLock. I believe this would make things worse, since the region being locked is small.

    Suggestions are very welcome.
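
    For illustration, here is a minimal sketch of the copy-on-write idea from the second option, written in Python rather than C#: writers take a lock and publish a freshly copied dictionary in a single reference swap, while readers only read the current reference. Names are illustrative, and a real C# version still has to consider memory-model details such as marking the field volatile, as noted above.

        import threading

        class CowCache:
            def __init__(self):
                self._lock = threading.Lock()
                self._lookup = {}              # treated as immutable once published

            def add(self, key, value):
                with self._lock:               # writers are serialized
                    copy = dict(self._lookup)
                    copy[key] = value
                    self._lookup = copy        # publish the new snapshot in one reference swap

            def lookup(self, key):
                return self._lookup.get(key)   # readers never take the lock

        cache = CowCache()
        cache.add(int, "integer")
        print(cache.lookup(int))  # integer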

    Read the article

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS; this will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database gets even bigger. While this is not a critical application, I would like to improve performance, and I have some resources available, including application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

    Improve NFS performance by tuning parameters. My instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
    Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
    Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).

    How should I approach this problem?

    Read the article

  • Toggle two divs and classes

    - by kuswantin
    I have two links with classes (login-form and register-form) matching the IDs of the target forms they should toggle. I also have a predefined 'slideToggle' function to toggle nicely. This is what I have tried so far:

        $('#userbar a').click(function() {
            var c = $(this).attr('class');
            $('#userbar a').removeClass('active');
            $(this).toggleClass('active');
            $('#register-form,#login-form').hide(); // bad, causes a flash
            $('#' + c).slideToggle('slow');
            return false;
        });

    With this I have trouble with the flashing window, and with correctly toggling the active classes when another link is clicked: the other link should not have the active class anymore. An additional problem is that the link goes dead after repeated clicks. I made another, longer attempt:

        $('#userbar a').click(function() {
            var c = $(this).attr('class');
            switch (c) {
                case 'login-form':
                    $('#' + c).slideToggle('slow');
                    $(this).toggleClass('active');
                    $('#register-form').hide();
                    break;
                case 'register-form':
                    $('#' + c).slideToggle('slow');
                    $(this).toggleClass('active');
                    $('#login-form').hide();
                    break;
            }
            return false;
        });

    This one is worse than the first :( Any suggestion to correct the behaviour? What I want is: when a link with class login-form is clicked, toggle the form with ID login-form, and hide the register-form if it is open. Any help would be very much appreciated. Thanks.

    Read the article

  • How to (unit-)test a data-intensive PL/SQL application

    - by doom2.wad
    Our team wants to unit-test new code written as part of a running project extending an existing, huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is no exception: most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it) or transform one complicated data structure into another (complicated in a different way). What is the best approach to unit-test such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" stuff and necessary data transformations) are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a testing data set seems to be impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing data structures (add a column here, remove one there). I'd be glad for any suggestion or reference to help us. Some team members tend to be tired of figuring out how to even start, for our experience with unit testing does not cover PL/SQL data-intensive legacy systems (only those "from-the-book" greenfield Java projects).

    Read the article

  • FileReference.save() duplicates ByteArray

    - by bartekb
    Hi, I've encountered a memory problem using FileReference.save(). My Flash application generates a lot of data in real time and needs to save this data to a local file. As I understand it, Flash 10 (as opposed to AIR) does not support streaming to a file. But what's even worse is that FileReference.save() duplicates all the data before saving it. I was looking for a workaround to this doubled memory usage and thought about the following approach: what if I pass a custom subclass of ByteArray as an argument to FileReference.save(), where this ByteArray subclass overrides all read*() methods? The overridden read*() methods would wait for a piece of data to be generated by my application, return this piece of data and immediately remove it from memory. I know how much data will be generated, so I could also override the length/bytesAvailable methods. Would this be possible? Could you give me some hint how to do it? I've created a subclass of ByteArray, registered an alias for it, and passed an instance of this subclass to FileReference.save(), but somehow FileReference.save() seems to treat it just as if it were a plain ByteArray instance and doesn't call any of my overridden methods... Thanks a lot for any help!

    Read the article

  • Transaction to find an entity - locks all entities of that type?

    - by user246114
    Hi, reading the docs for transactions (http://code.google.com/appengine/docs/java/datastore/transactions.html), an example provided shows one way to make an instance of an object:

        try {
            tx.begin();
            Key k = KeyFactory.createKey("SalesAccount", id);
            try {
                account = pm.getObjectById(Employee.class, k);
            } catch (JDOObjectNotFoundException e) {
                account = new SalesAccount();
                account.setId(id);
            }
            ...

    When the above transaction gets executed, will it block all other write attempts on Account objects? I'm wondering because I'd like to have a user signup which checks whether a username or email is already in use:

        tx.begin();
        "select from User where mUsername == str1 LIMIT 1";
        if (count > 0) {
            throw new Exception("username already in use!");
        }
        "select from User where mEmail == str1 LIMIT 1";
        if (count > 0) {
            throw new Exception("email already in use!");
        }
        pm.makePersistent(user(username, email)); // ok.
        tx.commit();

    but the above would be even more time consuming, I think, making for an even worse bottleneck? Am I understanding what will happen correctly? Thanks

    Read the article

  • Passing information safely between Wicket and Hibernate in long running conversations

    - by Peter Tillemans
    We are using Wicket with Hibernate in the background. As part of our UI we have quite long-running conversations spanning multiple requests before the updated information is written back to the database. To avoid getting Hibernate errors with detached objects, we are now using value objects to transfer info from the service layer to Wicket. However, we now end up with an explosion of almost identical objects, e.g.:

    Answer (mapped entity saved in Hibernate)
    AnswerVO (immutable value object)
    AnswerModel (a mutable bean in the session domain)
    IModel-wrapped Wicket model, which usually gets wrapped in a CompoundPropertyModel

    This plumbing becomes exponentially worse when collections of other objects are involved. There has to be a better way to organize this. Can anyone share tips to make this less onerous? Maybe make the value objects mutable so we can remove the need for a separate backing bean in Wicket? Use the entity beans but absolutely make dead-certain they are detached from Hibernate (easier said than done)? Some other tricks or patterns?

    Read the article

  • How to make Solution Explorer behave after clearing search?

    - by stijn
    I currently have a VS installation with no extensions, to see how that works out. For navigation that means making heavy use of Ctrl+; aka Search Solution Explorer. While the search itself is OK, it has one major drawback that makes it a pain for me to use (both with keyboard and mouse). Take a solution with two projects, one collapsed, one opened: use Ctrl+; and start typing until a match is found in the collapsed project. What I want now is to simply clear the search and return to the previous view. Seems like a pretty standard requirement, no? But there appears to be no such functionality built in. The problem with the current commands that come close (pressing Esc, clicking the Back or Home buttons in the Solution Explorer toolbar) is always the same: they insist on suddenly expanding the previously collapsed project and tracking the match that was found! (Btw, the Track Active Item in Solution Explorer option is turned off in the options.) This makes no sense from a UX point of view. You select some kind of 'undo' command, the search box clears, which is expected, but then suddenly there's an item visible from a previous search. So if the collapsed project has, say, 50 items in it, Solution Explorer is now useless visually since it litters the screen with stuff you don't want to see, and worse, you have to manually collapse the project again to return to the previous view. Is there a way around this? I thought maybe the keyboard shortcuts for Back/Home would behave differently, but the commands do not seem to be registered. I looked into EnvDTE80.DTE2.ToolWindows.SolutionExplorer, but it has no properties/methods that have anything to do with this issue. And somewhere in the tree there is a Microsoft.VisualStudio.PlatformUI.SolutionPivotNavigator, which is probably the class responsible for this behaviour, but I have no idea how to access it.

    Read the article

  • What is the IoC / "Springy" way to handle MVP in GWT? (Hint, probably not the Spring Roo 1.1 way)

    - by Ehrann Mehdan
    This is the Spring Roo 1.1 way of doing a factory that returns a GWT Activity (yes, Spring Framework):

        public Activity getActivity(ProxyPlace place) {
            switch (place.getOperation()) {
                case DETAILS:
                    return new EmployeeDetailsActivity((EntityProxyId<EmployeeProxy>) place.getProxyId(),
                        requests, placeController,
                        ScaffoldApp.isMobile() ? EmployeeMobileDetailsView.instance() : EmployeeDetailsView.instance());
                case EDIT:
                    return makeEditActivity(place);
                case CREATE:
                    return makeCreateActivity();
            }
            throw new IllegalArgumentException("Unknown operation " + place.getOperation());
        }

    It seems to me that we just went back hundreds of years if we use a switch case with constants to make a factory. Now, this is official, auto-generated Spring Roo 1.1 with GWT / GAE integration, I kid you not. I can only assume this is one of those empty executive announcements, because this is definitely not Spring. It seems VMware and Google were too quick to get something out and didn't quite finish it, didn't they? Am I missing something, or is this half baked and by far not the way Spring + GWT MVP should work? Do you have a better example of how Spring, GWT (2.1 MVP approach) and GAE should connect? I would hate to do all the plumbing of managing history and activities like this. (No annotations? IoC?) I also would hate to reinvent the wheel and write my own Spring enhancement just to find someone else did the same, or worse, find out that SpringSource and Google will release Roo 1.2 soon and make it right.

    Read the article

  • How to benchmark on multi-core processors

    - by Pascal Cuoq
    I am looking for ways to perform micro-benchmarks on multi-core processors.

    Context: At about the same time desktop processors introduced out-of-order execution, which made performance hard to predict, they, perhaps not coincidentally, also introduced special instructions to get very precise timings. Examples of these instructions are rdtsc on x86 and rftb on PowerPC. These instructions gave timings that were more precise than could ever be allowed by a system call, and allowed programmers to micro-benchmark their hearts out, for better or for worse. On a yet more modern processor with several cores, some of which sleep some of the time, the counters are not synchronized between cores. We are told that rdtsc is no longer safe to use for benchmarking, but I must have been dozing off when the alternative solutions were explained.

    Question: Some systems may save and restore the performance counter and provide an API call to read the proper sum. If you know what this call is for any operating system, please let us know in an answer. Some systems may allow turning off cores, leaving only one running. I know Mac OS X Leopard does, when the right preference pane is installed from the Developer Tools. Do you think that this makes rdtsc safe to use again?

    More context: Please assume I know what I am doing when trying to do a micro-benchmark. If you are of the opinion that an optimization whose gains cannot be measured by timing the whole application is not worth making, I agree with you, but I cannot time the whole application until the alternative data structure is finished, which will take a long time. In fact, if the micro-benchmark were not promising, I could decide to give up on the implementation now; I need figures to provide in a publication whose deadline I have no control over.

    Read the article

  • Data Structures for Junior Java Developer

    - by user1639637
    OK, still learning arrays. I wrote this code which fills the array named "rand" with random numbers between 0 and 1 (exclusive). I want to start learning complexity. The for loop executes n times (100 times), and each iteration takes O(1) time, so the worst-case scenario is O(n) — am I right? Also, I used an ArrayList to store the 100 elements, imported "Collections", and used the Collections.sort() method to sort the elements.

        import java.util.Arrays;

        public class random {
            public static void main(String args[]) {
                double[] rand = new double[10];
                for (int i = 0; i < rand.length; i++) {
                    rand[i] = (double) Math.random();
                    System.out.println(rand[i]);
                }
                Arrays.sort(rand);
                System.out.println(Arrays.toString(rand));
            }
        }

    ArrayList version:

        import java.util.ArrayList;
        import java.util.Collections;

        public class random {
            public static void main(String args[]) {
                ArrayList<Double> MyArrayList = new ArrayList<Double>();
                for (int i = 0; i < 100; i++) {
                    MyArrayList.add(Math.random());
                }
                Collections.sort(MyArrayList);
                for (int j = 0; j < MyArrayList.size(); j++) {
                    System.out.println(MyArrayList.get(j));
                }
            }
        }
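
    For reference, a minimal sketch of the same two steps in Python, with the cost of each step noted in comments: the fill loop is O(n) as stated above, while the library sort (like Collections.sort above) is an O(n log n) comparison sort, so the sort dominates the overall cost. The variable names are illustrative.

        import random

        n = 100
        values = [random.random() for _ in range(n)]  # n iterations of O(1) work -> O(n)
        values.sort()                                  # Timsort: O(n log n), dominates the loop
        print(values)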

    Read the article

  • VS2010 Web Deploy: how to remove absolute paths and automate setAcl?

    - by Julien Lebosquain
    The integrated Web Deployment in Visual Studio 2010 is pretty nice. It can create a package ready to be deployed using MSDeploy on a target IIS machine. The problem is, this package will be redistributed to a client who will install it himself using "Import Application" from IIS when MSDeploy is installed. The default package created always includes the full path from the development machine, "D:\Dev\XXX\obj\Debug\Package\PackageTmp", in the source manifest file. It doesn't prevent installation, of course, since it was designed this way, but it looks ugly in the import dialog and has no meaning to the client. Worse, he will wonder what those paths are, and it looks quite confusing. By customizing the .csproj file (by adding MSBuild properties used by the package creation task), I managed to add additional parameters to the package. However, I spent most of the afternoon in the 2600-line Web.Publishing.targets trying to understand which parameter influences the "development path" behaviour, in vain. I also tried to use setAcl to customize security on a given folder after deployment, but I only managed to do this with MSBuild by using a relative path... it shouldn't matter if I resolve the first problem, though. I could modify the generated archive after its creation, but I would prefer everything to be automated using MSBuild. Does anyone know how to do that?

    Read the article

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name":

        public class Foo : BusinessObjectBase
        {
            ...
            public virtual string Name { get; set; }
        }

    For some reason, when I fetch "Foo" objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): the following code generates n+1 SQL statements, of which the first only fetches the ids and the remaining n fetch the Name for each record:

        ISession session = ...
        IQuery query = session.CreateQuery(queryString);
        ITransaction tx = session.BeginTransaction();
        List<Foo> result = new List<Foo>();
        foreach (Foo foo in query.Enumerable())
        {
            result.Add(foo);
        }
        tx.Commit();
        session.Close();

    produces:

        NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473

    Similarly, the following code leads to a LazyLoadingException after the session is closed:

        ISession session = ...
        ITransaction tx = session.BeginTransaction();
        Foo result = session.Load<Foo>(id);
        tx.Commit();
        session.Close();
        Console.WriteLine(result.Name);

    Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the LazyLoadingException by doing a NHibernateUtil.Initialize(foo), but the even worse part is the n+1 SQL statements, which bring my application to its knees. This is how the mapping looks:

        <class name="Foo" table="V1_FOO">
            ...
            <property name="Name" column="NAME"/>
        </class>

    BTW: The abstract BusinessObjectBase base class encapsulates the ID property, which serves as the internal identifier.

    Read the article
