Search Results

Search found 5915 results on 237 pages for 'practices'.


  • Comparing two Objects which implement the same interface for equality / equivalence - Design help

    - by gav
    Hi All, I have an interface and two objects implementing that interface, massively simplified:

        public interface MyInterface {
            public int getId();
            public int getName();
            ...
        }

        public class A implements MyInterface { ... }

        public class B implements MyInterface { ... }

    We are migrating from one implementation to the other, but I need to check that the objects of type B that are generated are equivalent to those of type A. Specifically, I mean that for all of the interface methods, an object of type A and an object of type B will return the same value (I'm just checking that my code for generating these objects is correct). How would you go about this?

        Map<String, MyInterface> oldGeneratedObjects = getOldGeneratedObjects();
        Map<String, MyInterface> newGeneratedObjects = getNewGeneratedObjects();
        // TODO: Establish that for each key the values in the two maps return equivalent values.

    I'm looking for good coding practices and style here. I appreciate that I could just iterate through one key set, pull out both objects (which should be equivalent), call all the methods and compare the results; I'm just thinking there may be a cleaner, more extensible way, and I'm interested to learn what options there might be. Would it be appropriate / possible / advised to override equals or implement Comparable? Thanks in advance, Gavin
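    One possible starting point (not from the original question, and assuming the relevant interface methods are all zero-argument getters): use reflection to walk MyInterface's methods and compare the return values pairwise, so getters added to the interface later are picked up automatically.

        import java.lang.reflect.Method;
        import java.util.Map;

        public final class EquivalenceChecker {

            // Returns false at the first key whose two objects disagree on any
            // zero-argument interface method, or whose counterpart is missing.
            public static boolean equivalent(Map<String, MyInterface> oldObjects,
                                             Map<String, MyInterface> newObjects) throws Exception {
                for (Map.Entry<String, MyInterface> entry : oldObjects.entrySet()) {
                    MyInterface newItem = newObjects.get(entry.getKey());
                    if (newItem == null) {
                        return false;
                    }
                    for (Method method : MyInterface.class.getMethods()) {
                        if (method.getParameterTypes().length == 0) {
                            Object oldValue = method.invoke(entry.getValue());
                            Object newValue = method.invoke(newItem);
                            if (oldValue == null ? newValue != null : !oldValue.equals(newValue)) {
                                return false;
                            }
                        }
                    }
                }
                return true;
            }
        }

    Overriding equals for a one-off migration check is arguably too invasive; a standalone checker like this keeps the equivalence rule out of the production classes.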

  • Client-server synchronization pattern / algorithm?

    - by tm_lv
    I have a feeling that there must be client-server synchronization patterns out there, but I totally failed to Google one up. The situation is quite simple: the server is the central node that multiple clients connect to, manipulating the same data. The data can be split into atoms; in case of conflict, whatever is on the server has priority (to avoid dragging the user into conflict resolution). Partial synchronization is preferred due to the potentially large amounts of data.

    Are there any patterns / good practices for such a situation? If you don't know of any, what would be your approach? Below is how I currently plan to solve it: parallel to the data, a modification journal will be kept, with all transactions timestamped. When a client connects, it receives all changes since its last check, in consolidated form (the server goes through the lists and removes additions that are followed by deletions, merges updates for each atom, etc.). Et voilà, we are up to date.

    An alternative would be keeping a modification date for each record and, instead of performing data deletes, just marking records as deleted. Any thoughts?
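    A minimal sketch of the journal-consolidation idea described above (the names and structure are hypothetical, not from the question): keep one timestamped entry per operation, and when a client asks for changes since its last sync, collapse the entries so only the latest operation per atom survives.

        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        class JournalEntry {
            enum Op { ADD, UPDATE, DELETE }

            final String atomId;
            final Op op;
            final long timestamp;

            JournalEntry(String atomId, Op op, long timestamp) {
                this.atomId = atomId;
                this.op = op;
                this.timestamp = timestamp;
            }
        }

        class Journal {
            private final List<JournalEntry> entries = new ArrayList<JournalEntry>();

            void record(JournalEntry entry) {
                entries.add(entry);
            }

            // Consolidation: later entries for an atom override earlier ones, so an
            // ADD followed by a DELETE collapses to a single DELETE (a client that
            // never saw the ADD simply ignores a DELETE for an unknown atom).
            List<JournalEntry> changesSince(long lastSyncTimestamp) {
                Map<String, JournalEntry> latest = new LinkedHashMap<String, JournalEntry>();
                for (JournalEntry entry : entries) {
                    if (entry.timestamp > lastSyncTimestamp) {
                        latest.put(entry.atomId, entry);
                    }
                }
                return new ArrayList<JournalEntry>(latest.values());
            }
        }

    Collapsing an ADD followed by UPDATEs into a single ADD carrying the latest state would need the entry to reference the atom's current data; that detail is elided here.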

  • Java Constructor Style (Check parameters aren't null)

    - by Peter
    What are the best practices if you have a class which accepts some parameters but none of them are allowed to be null? The following is obvious, but the exception is a little unspecific:

        public class SomeClass {
            public SomeClass(Object one, Object two) {
                if (one == null || two == null) {
                    throw new IllegalArgumentException("Parameters can't be null");
                }
                //...
            }
        }

    Here the exceptions let you know which parameter is null, but the constructor is now pretty ugly:

        public class SomeClass {
            public SomeClass(Object one, Object two) {
                if (one == null) {
                    throw new IllegalArgumentException("one can't be null");
                }
                if (two == null) {
                    throw new IllegalArgumentException("two can't be null");
                }
                //...
            }
        }

    Here the constructor is neater, but now the constructor code isn't really in the constructor:

        public class SomeClass {
            public SomeClass(Object one, Object two) {
                setOne(one);
                setTwo(two);
            }

            public void setOne(Object one) {
                if (one == null) {
                    throw new IllegalArgumentException("one can't be null");
                }
                //...
            }

            public void setTwo(Object two) {
                if (two == null) {
                    throw new IllegalArgumentException("two can't be null");
                }
                //...
            }
        }

    Which of these styles is best? Or is there an alternative which is more widely accepted? Cheers, Pete
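    One note the question predates: on Java 7 and later, the idiomatic answer is java.util.Objects.requireNonNull, which keeps the per-parameter message of the second style at one line per parameter. It throws NullPointerException rather than IllegalArgumentException, which matches the convention Effective Java recommends for null arguments.

        import java.util.Objects;

        public class SomeClass {
            private final Object one;
            private final Object two;

            public SomeClass(Object one, Object two) {
                // requireNonNull returns its argument, so the check and the
                // assignment collapse into a single line per parameter.
                this.one = Objects.requireNonNull(one, "one can't be null");
                this.two = Objects.requireNonNull(two, "two can't be null");
            }
        }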

  • How to get the most out of a 3-month intern?

    - by firoso
    We've got a software engineering intern coming in who's fairly competent and shows promise. There's one catch: we have him for 3 months full time and can't count on anything past that. He still has a year of school left, which is why we can't say for sure that we have him past 3 months. We have a specific project we're putting him on. How can we maximize his productivity while still giving him a positive learning experience? He wants to learn about development cycles and real-world software engineering. Anything that you think would be critical that you wish you had learned earlier?

    Nearly six months later: he's performed admirably, and even I have learned a lot from him. Thank you all for the input. Now I want to provide feedback to YOU! He has benefited most from sitting down and writing code. However, he has had a nasty history of bad software engineering practices, which I'm trying to replace with good habits (properly finishing a method before moving on, not hacking code together, proper error channeling, etc.). He has also really gained a lot by feeling involved in design decisions, even if most of the time they're related to my own design plans.

  • How to return ArrayList results from an IntentService

    - by gcl1
    I have an IntentService that loads up an ArrayList with data from a network source (AWS SDB tables). The ArrayList is in a global space, accessible to both the calling Activity and the IntentService (obtained like this: appState = ((App) getApplicationContext())). When the IntentService is done, it notifies the Activity through a ResultReceiver, and the Activity calls adapter.notifyDataSetChanged() to update the ListView.

    This solution works most of the time, but it violates the rule that only the UI thread should make changes to the data underlying a ListView. So as it is, I sometimes get an error: "The content of the adapter has changed but ListView did not receive a notification."

    I think this must be a common situation. Please let me know if you have any suggestions or best practices for this problem. Here are three options I'm aware of:

      1. Keep the IntentService, and have it store the results in another "working" ArrayList, also in the global space. When the result is ready, the IntentService calls the ResultReceiver (on the UI thread), which can then: a) copy the result to the ArrayList associated with the ListView, and b) call adapter.notifyDataSetChanged(). CONS: I don't like the idea of putting temp/working data in a global space, and copying the result list seems inefficient.

      2. Keep the IntentService, and have it pass the results back through a bundle loaded with a ParcelableArrayList (see the sketch below). CONS: I'm not sure if this approach would scale for very large result sets. It also requires copying the result list.

      3. Switch to a Service which builds a local copy of the result list. Have the Activity directly access the address space of the Service in order to read the result list. CON: Still requires copying results to the ArrayList associated with the ListView.

    Thank you.
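    A rough sketch of option 2, not the asker's code: KEY_ITEMS, RESULT_OK, and fetchFromNetwork are hypothetical names, DataItem is assumed to implement Parcelable, and listItems/adapter are assumed to be the Activity's existing fields. The key point is that a ResultReceiver built on a main-thread Handler delivers its result on the UI thread, so the swap-and-notify is safe.

        // In the IntentService: package the result into the receiver's Bundle.
        @Override
        protected void onHandleIntent(Intent intent) {
            ResultReceiver receiver = intent.getParcelableExtra("receiver");
            ArrayList<DataItem> items = fetchFromNetwork(); // hypothetical network call

            Bundle bundle = new Bundle();
            bundle.putParcelableArrayList(KEY_ITEMS, items);
            receiver.send(RESULT_OK, bundle);
        }

        // In the Activity: a ResultReceiver bound to the main thread's Handler,
        // so onReceiveResult runs on the UI thread.
        private final ResultReceiver receiver = new ResultReceiver(new Handler()) {
            @Override
            protected void onReceiveResult(int resultCode, Bundle resultData) {
                if (resultCode == RESULT_OK) {
                    ArrayList<DataItem> items = resultData.getParcelableArrayList(KEY_ITEMS);
                    listItems.clear();          // the list backing the adapter
                    listItems.addAll(items);
                    adapter.notifyDataSetChanged();
                }
            }
        };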

  • Threading calls to a web service in a web service (.NET 2.0)

    - by Ryan Ternier
    Got a question regarding best practices for doing parallel web service calls, in a web service. Our portal will get a message, split that message into 2 messages, and then do 2 calls to our broker. These need to be on separate threads to lower the timeout. One solution is to do something similar to this (pseudo code):

        XmlNode DNode = GetDemoNodeSomehow();
        XmlNode ENode = GetElNodeSomehow();
        XmlNode elResponse;
        XmlNode demResponse;

        Thread dThread = new Thread(delegate {
            // Web service call
            GetDemographics d = new GetDemographics();
            demResponse = d.HIALRequest(DNode);
        });

        Thread eThread = new Thread(delegate {
            // Web service call
            GetEligibility ge = new GetEligibility();
            elResponse = ge.HIALRequest(ENode);
        });

        dThread.Start();
        eThread.Start();
        dThread.Join();
        eThread.Join();

        // Combine the resulting XML and return it.
        // Maybe throw a bit of logging in to make architecture happy.

    Another option we thought of is to create a worker class, pass it the service information, and have it execute. This would allow us to have a bit more control over what is going on, but could add additional overhead. Another option brought up would be 2 asynchronous calls, managing the returns through a loop; when the calls are completed (success or error), the loop picks it up and ends.

    The portal service will be called about 50,000 times a day. I don't want to gold-plate this sucker; I'm looking for something lightweight. The services that are being called on the broker do have timeout limits set, and are already heavily logged and audited, so I'm not worried on that part. This is .NET 2.0, and as much as I would love to upgrade, I can't right now, so please leave all the goodies beyond 2.0 out.

  • Modelling in Agile Development

    - by bertzzie
    I'm writing a bachelor's dissertation report for which I'm developing a system with an Agile methodology. Given that the development is a one-man show, the "Agile" I did was of course not really Agile at all (by my understanding, at least). So I want some perspective from the SO crowd: professional, real-world developers with tons of experience. I think real-world experience is better than the theory and experiments that I did.

    My questions are: Do we model during development time when using Agile? UML? DFD? Or is a functional specification enough?1 If modelling is not really necessary, what do we use to communicate with the user, since the user almost always won't understand UML or DFD? For my system, I use UI & UX design with heavy prototyping, but then I don't have time to draw UML any more. Which one is better?

    1 http://www.joelonsoftware.com/articles/fog0000000036.html

    I hope the question's not "subjective and argumentative", as I know this question exists because of my lack of understanding of Agile development. If it is, could someone just give me a pointer or reference about that? Possible duplicate: Do you use UML in Agile development practices?

  • Diff/Merge functionality for objects (not files!)

    - by gehho
    I have a collection of objects of the same type, let's call it DataItem. The user can view and edit these items in an editor. It should also be possible to compare and merge different items, i.e. some sort of diff/merge for DataItem instances. The DIFF functionality should compare all (relevant) properties/fields of the items and detect possible differences. The MERGE functionality should then be able to merge two instances by applying selected differences to one of the items. For example (pseudo objects):

        DataItem1 {                  DataItem2 {
            Prop1 = 10                   Prop1 = 10
            Prop2 = 25                   Prop2 = 13
            Prop3 = 0                    Prop3 = 5
            Coll = { 7, 4, 8 }           Coll = { 7, 4, 8, 12 }
        }                            }

    Now, the user should be presented with a list of differences (i.e. Prop2, Prop3, and Coll) and should be able to select which differences he wants to eliminate by assigning the value from one item to the other. He should also be able to choose whether to assign the value from DataItem1 to DataItem2 or vice versa.

    Are there common practices which should be used to implement this functionality? Since the same editor should also provide undo/redo functionality (using the Command pattern), I was thinking about reusing the ICommand implementations, because both scenarios basically deal with property assignments, collection changes, and so on. My idea was to create Difference objects with ICommand properties which can be used to perform a merge operation for that specific Difference.

    Btw: the programming language will be C# with .NET 3.5 SP1/4.0. However, I think this is more of a language-independent question. Any design pattern/idea/whatsoever is welcome!
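    Taking up the language-independent angle (the sketch below is Java, though the asker targets C#): the DIFF half can be done generically by reflecting over getters and recording each property whose values disagree; each recorded name could then be wrapped in a Difference object carrying the apply-left/apply-right commands.

        import java.lang.reflect.Method;
        import java.util.ArrayList;
        import java.util.List;

        public final class ObjectDiff {

            // Returns the names of all getter-backed properties on which the
            // two instances disagree; collections are compared via equals().
            public static <T> List<String> diff(T left, T right) throws Exception {
                List<String> differences = new ArrayList<String>();
                for (Method method : left.getClass().getMethods()) {
                    boolean isGetter = method.getName().startsWith("get")
                            && method.getParameterTypes().length == 0
                            && !method.getName().equals("getClass");
                    if (isGetter) {
                        Object a = method.invoke(left);
                        Object b = method.invoke(right);
                        boolean equal = (a == null) ? b == null : a.equals(b);
                        if (!equal) {
                            differences.add(method.getName().substring(3));
                        }
                    }
                }
                return differences;
            }
        }

    In C# the same shape falls out of Type.GetProperties(); the chosen merge direction then just decides which instance is the source and which receives the setter call.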

  • How can I limit asp.net control actions based on user role?

    - by Duke
    I have several pages or views in my application which are essentially the same for both authenticated users and anonymous users. I'd like to limit the insert/update/delete actions in formviews and gridviews to authenticated users only, and allow read access for both authed and anon users. I'm using the asp.net configuration system for handling authentication and roles. This system limits access based on path, so I've been creating duplicate pages for authed and anon paths.

    The solution that comes to mind immediately is to check roles in the appropriate event handlers, limiting what possible actions are displayed (insert/update/delete buttons) and also limiting what actions are performed (for users that may know how to perform an action in the absence of a button). However, this solution doesn't eliminate duplication - I'd be duplicating security code on a series of pages rather than duplicating pages and limiting access based on path; the latter would be significantly less complicated. I could always build some controls that offered role-based configuration, but I don't think I have time for that kind of commitment right now.

    Is there a relatively easy way to do this (do such controls exist?), or should I just stick to path-based access and duplicate pages? Does it even make sense to use two methods of authorization? There are still some pages which are strictly for either role, so I'll be making use of path-based authorization anyway. Finally, would using something other than path-based authorization be contrary to typical asp.net design practices, at least in the context of using the asp.net configuration system?

  • ASP.NET MVC Unit Testing Controllers - Repositories

    - by Brian McCord
    This is more of an opinion-seeking question, so there may not be a "right" answer, but I would welcome arguments as to why your answer is the "right" one. Given an MVC application that is using Entity Framework for the persistence engine, a repository layer, a service layer that basically defers to the repository, and a delete method on a controller that looks like this:

        public ActionResult Delete(State model)
        {
            try
            {
                if( model == null )
                {
                    return View( model );
                }
                _stateService.Delete( model );
                return RedirectToAction("Index");
            }
            catch
            {
                return View( model );
            }
        }

    I am looking for the proper way to unit test this. Currently, I have a fake repository that gets used in the service, and my unit test looks like this:

        [TestMethod]
        public void Delete_Post_Passes_With_State_4()
        {
            //Arrange
            var stateService = GetService();
            var stateController = new StateController( stateService );
            ViewResult result = stateController.Delete( 4 ) as ViewResult;
            var model = (State)result.ViewData.Model;

            //Act
            RedirectToRouteResult redirectResult = stateController.Delete( model ) as RedirectToRouteResult;
            stateController = new StateController( stateService );
            var newresult = stateController.Delete( 4 ) as ViewResult;
            var newmodel = (State)newresult.ViewData.Model;

            //Assert
            Assert.AreEqual( redirectResult.RouteValues["action"], "Index" );
            Assert.IsNull( newmodel );
        }

    Is this overkill? Do I need to check whether the record actually got deleted (as I already have service and repository tests that verify this)? Should I even use a fake repository here, or would it make more sense just to mock the whole thing? The examples I'm looking at used this model of doing things, and I just copied it, but I'm really open to doing things in a "best practices" way. Thanks.

  • What is a good approach to preloading data?

    - by Bob Horn
    Are there best practices out there for loading data into a database, to be used with a new installation of an application? For example, for application foo to run, it needs some basic data before it can even be started. I've used a couple options in the past.

    TSQL for every row that needs to be preloaded:

        IF NOT EXISTS (SELECT * FROM Master.Site WHERE Name = @SiteName)
            INSERT INTO [Master].[Site] ([EnterpriseID], [Name], [LastModifiedTime], [LastModifiedUser])
            VALUES (@EnterpriseId, @SiteName, GETDATE(), @LastModifiedUser)

    Another option is a spreadsheet. Each tab represents a table, and data is entered into the spreadsheet as we realize we need it. Then, a program can read this spreadsheet and populate the DB.

    There are complicating factors, including the relationships between tables. So, it's not as simple as loading tables by themselves. For example, if we create Security.Member rows, then we want to add those members to Security.Role, so we need a way of maintaining that relationship.

    Another factor is that not all databases will be missing this data. Some locations will already have most of the data, and others (that may be new locations around the world) will start from scratch. Any ideas are appreciated.

  • How to make categories and current page highlighting in PHP?

    - by Markus Ossi
    I am trying to find some example code or best practices for building CMS-type categories with PHP. This is a problem that has surely been solved a gazillion times, but for some reason I am unable to find any example PHP code showing how to implement it.

    As far as I can tell, there are two parts to the problem. The first has to do with the styling side of things: outputting the navigation links so that the current page has a special style (class="active"), and not printing out a link for the current page. The second part is handling categories, subcategories, and the dynamic pages under the categories.

    The second part seems pretty straightforward. I am thinking of making it so that the name of the category in the navigation is a link to categories.php?id=x, and on this page I just print out the pages with that category id. Then, if the user clicks on a page, he will be taken to pages.php?id=y. However, I am not quite sure how to make the navigation check whether we are now on the current page. Should I just use some case switch or what? Any ideas or links to some good example code are much appreciated.

  • In an Android application, should I have one content provider per table or only one for the entire application?

    - by Andrew Dyer
    I have years of experience with Microsoft .NET development (primarily C#) and have been working to come up to speed on Android and Java. So far, I've built a small application with a couple screens and a working content provider. All of the examples I've seen for developing content providers typically work with a single table, so I got the impression that this was the convention.

    I built a couple more content providers for other tables and ran into the "Unknown URI" IllegalArgumentException when I tried to test them. The exception is being thrown by one of my content providers, but not the one I was intending to call. It appears that my application is using the first content provider in the AndroidManifest.xml file, which now has me wondering if I should only have a single content provider for the entire application.

    Are there any best practices and/or examples for working with multiple tables in an Android application? Should I have one content provider per table or only one for the entire application? If the former, how do I resolve URIs to the proper provider? If the latter, how do I keep my content provider code from being polluted with switch statements?
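    For the single-provider route, the usual Android tool is UriMatcher: one provider registered for one authority, dispatching on the URI path. A sketch follows (the authority and table names are hypothetical, queryTable is an assumed helper, and the other required ContentProvider overrides are elided):

        import android.content.ContentProvider;
        import android.content.UriMatcher;
        import android.database.Cursor;
        import android.net.Uri;

        public class AppProvider extends ContentProvider {
            private static final String AUTHORITY = "com.example.app.provider";
            private static final int CUSTOMERS = 1;
            private static final int ORDERS = 2;

            private static final UriMatcher MATCHER = new UriMatcher(UriMatcher.NO_MATCH);
            static {
                MATCHER.addURI(AUTHORITY, "customers", CUSTOMERS);
                MATCHER.addURI(AUTHORITY, "orders", ORDERS);
            }

            @Override
            public Cursor query(Uri uri, String[] projection, String selection,
                                String[] selectionArgs, String sortOrder) {
                // One switch per operation, rather than one provider per table.
                switch (MATCHER.match(uri)) {
                    case CUSTOMERS:
                        return queryTable("customers", projection, selection, selectionArgs, sortOrder);
                    case ORDERS:
                        return queryTable("orders", projection, selection, selectionArgs, sortOrder);
                    default:
                        throw new IllegalArgumentException("Unknown URI " + uri);
                }
            }

            // queryTable, onCreate, insert, update, delete, and getType elided.
        }

    The "Unknown URI" exception described above is typically this default branch firing; since Android routes by authority rather than by provider class, two providers declared with overlapping authorities can also produce exactly the cross-provider confusion the asker saw.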

  • Best way to migrate export/import from SQL Server to Oracle

    - by matao
    Hi guys! I'm faced with needing access, for reporting, to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle, and I'd like some advice on the best way to go about it.

    The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (Oracle) side, as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting.

    What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data from SQL Server into a flat file, using the SQL Server equivalent of sqlplus as a scheduled task, and dump it into a Well Known Location. Then, on the Oracle side, a cron job copies down the file, loads it with SQL*Loader, rebuilds indexes, etc. This is all doable, but very manual.

    Is there one, or a combination, of FOSS or standard Oracle/SQL Server tools that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love to not have to write the CSV-dumping detail or the SQL*Loader script: just say "dump this view out to CSV" on one side, and "truncate and insert into this table from CSV" on the other, and not worry about mapping column names and all the other arcane sqlldr voodoo. Best practices? Thoughts? Comments?

    Edit: I have about 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer not to have to write out how to generate and map each single column.

  • Creating a C++ client app for an abstract Windows server - how to manage TCP connection speed to the server?

    - by Kabumbus
    We have a server with a given address, port, and IP, and we are developing that server ourselves, so we can implement whatever we need on it to help. What are standard/best practices for managing data transfer speed between a C++ Windows client app and the (C++) server?

    My main point is how to find out how much data can be uploaded/downloaded from/to a client over his low-speed network to my relatively super-fast server. (I need it to set the bitrate for his live audio/video stream.)

    My attempt at explaining point 3: we do not care how fast our server is; it is always faster than needed. We care about the client trying to stream his media out to our server. He streams encoded (via ffmpeg) live video data to our server, but he has, say, ADSL with 500 kb/s of outgoing traffic. He also uses ICQ or whatever, so he has less than 500 kb/s available. And he wants to stream live video! So we need to set up our ffmpeg to encode video with respect to the bitrate the user can actually provide. We develop both the server side and the client side. We need a way of finding out how much the user can upload per second at a given moment (so the value can change dynamically over time).

  • Dealing With Java Default Level Access Specifiers

    - by Tom Tresansky
    I've seen some code in a project recently where some fields in a couple of classes have been given the default access modifier without good reason. It almost looks like a case of "oops, forgot to make these private". Since the classes are used almost exclusively outside of the package they are defined in, the fields are not visible from the calling code and are treated as private, so the mistake/oversight would not be very noticeable. However, encapsulation is broken: if I wanted to add a new class to the existing package, I could then mess with the internal data of objects through those default-access fields.

    So, my questions: Are there any best practices concerning default access specifiers that I should be aware of? Anything that would help prevent this type of accident from recurring? Are there any annotations which might say something to the effect of "I really meant for these to be default access"? Using CheckStyle, or any other Eclipse plugins, is there any way to flag instances of default-access fields, or disallow any not accompanied by, say, a "//default access" comment trailing them?
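    A two-class illustration of the hole being described (the names are made up): because both classes sit in the same package, nothing stops the second one from rewriting the first one's package-private state.

        // File: Account.java - same package as Meddler
        class Account {
            int balance; // default access: was this meant to be private?
        }

        // File: Meddler.java - any class dropped into the package can do this
        class Meddler {
            void tamper(Account account) {
                account.balance = -1000000; // compiles without complaint
            }
        }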

  • WSDL first for existing service layer

    - by Jurgen H
    I am working on an existing Java project with a typical services-DAO setup, for which only a web application was available. My job is to add web services on top of the services layer, but the web services have their own functional analysis and data model. The functional analysis of course focuses on what is possible in the different service methods.

    As good practice demands, we used the WSDL-first strategy and generated JAXB-bound Java classes and an SEI for the web services. After having implemented the web services partially, we noticed a 70% match between the data models. This resulted in writing converters which take the web service JAXB classes and map them to the service layer classes:

        Customer customer = new Customer();
        customer.setName(wsCustomer.getName());
        customer.setFirstName(wsCustomer.getFirstName());
        ...

    This is a very obvious example; some other mappings were a little more complicated. Can anyone give his best practices, experiences, or solutions for this kind of situation? Are any of these frameworks useful?

    http://transmorph.sourceforge.net/wiki/index.php/Main_Page
    http://ezmorph.sourceforge.net/

    Please don't start a discussion about WSDL first vs. code first.

  • Are there good reasons not to use an ORM?

    - by hangy
    During my apprenticeship, I have used NHibernate for some smaller projects which I mostly coded and designed on my own. Now, before starting some bigger project, the discussion arose how to design data access and whether or not to use an ORM layer. As I am still in my apprenticeship and still consider myself a beginner in enterprise programming, I did not really try to push in my opinion, which is that using an object relational mapper to the database can ease development quite a lot. The other coders in the development team are much more experienced than me, so I think I will just do what they say. :-)

    However, I do not completely understand two of the main reasons for not using NHibernate or a similar project:

      1. One can just build one's own data access objects with SQL queries and copy those queries out of Microsoft SQL Server Management Studio.
      2. Debugging an ORM can be hard.

    So, of course I could just build my data access layer with a lot of SELECTs etc., but here I miss the advantage of automatic joins, lazy-loading proxy classes and a lower maintenance effort if a table gets a new column or a column gets renamed. (Updating numerous SELECT, INSERT and UPDATE queries vs. updating the mapping config and possibly refactoring the business classes and DTOs.)

    Also, using NHibernate you can run into unforeseen problems if you do not know the framework very well. That could be, for example, trusting the Table.hbm.xml where you set a string's length to be automatically validated. However, I can also imagine similar bugs in a "simple" SqlConnection query based data access layer.

    Finally, are those arguments mentioned above really a good reason not to utilise an ORM for a non-trivial database based enterprise application? Are there probably other arguments they/I might have missed?

    (I should probably add that I think this is like the first "big" .NET/C# based application which will require teamwork. Good practices, which are seen as pretty normal on Stack Overflow, such as unit testing or continuous integration, are non-existing here up to now.)

  • What are some commonly used source code check-in policies?

    - by rwmnau
    I'm curious what code review policies other development shops apply to their source code when it's checked into the source control repository. I'm setting up a TFS (Team Foundation) server, and I'd like to apply some check-in policies to start to stamp out bad practices. For example, I was thinking of starting with the following, so this is the kind of stuff I'm looking for:

      • Prohibit empty "Catch" blocks. This would prevent applications from swallowing any exceptions without at least requiring a comment explaining why it's not necessary to do anything with the exception.
      • Prohibit "Catch ex as Exception" generic exception handling. Instead, require code to catch specific types of exceptions and deal with them appropriately, instead of just building catch-all handling.
      • Require a check-in comment. This one should be self-explanatory, though it seems that TFS (and most other source-control systems) don't require a comment by default.

    While these are just examples, they're where I'm thinking of starting, and while I'd like some additional examples of what's popular, I'm open to feedback on these. Also, though we're a mostly .NET shop, I imagine the popular policies are universal across languages and IDEs (we have some Java development and a few people who will use the repository develop with Eclipse).

  • When should I implement globalization and localization in C#?

    - by Geo Ego
    I am cleaning up some code in a C# app that I wrote and really trying to focus on best practices and coding style. As such, I am running my assembly through FXCop and trying to research each message it gives me to decide what should and shouldn't be changed. What I am currently focusing on are locale settings. For instance, the two errors that I have currently are that I should be specifying the IFormatProvider parameter for Convert.ToString(int), and setting the Dataset and Datatable locale. This is something that I've never done, and never put much thought into; I've always just left that overload out.

    The current app that I am working on is an internal app for a small company that will very likely never need to run in another country. As such, it is my opinion that I do not need to set these at all. On the other hand, doing so would not be such a big deal, but it seems like it is unnecessary and could hinder readability to a degree.

    I understand that Microsoft's contention is to use it if it's there, period. Well, I'm technically supposed to call Dispose() on every object that implements IDisposable, but I don't bother doing that with Datasets and Datatables, so I wonder what the practice is "in the wild."

  • Questions on Juval Lowy's IDesign C# Coding Standard

    - by Jan
    We are trying to use the IDesign C# Coding Standard. Unfortunately, I found no comprehensive document to explain all the rules that it gives, and his book does not always help either. Here are the open questions that remain for me (from chapter 2, Coding Practices):

      • No. 26: Avoid providing explicit values for enums unless they are integer powers of 2
      • No. 34: Always explicitly initialize an array of reference types using a for loop
      • No. 50: Avoid events as interface members
      • No. 52: Expose interfaces on class hierarchies
      • No. 73: Do not define method-specific constraints in interfaces
      • No. 74: Do not define constraints in delegates

    Here's what I think about those, in order:

      • No. 26: I thought that providing explicit values would be especially useful when adding new enum members at a later point in time. If these members are added between other already existing members, I would provide explicit values to make sure the integer representation of existing members does not change.
      • No. 34: No idea why I would want to do this. I'd say this totally depends on the logic of my program.
      • No. 50: I see that there is the alternative option of providing "sink interfaces" (simply providing already all "OnXxxHappened" methods), but what is the reason to prefer one over the other?
      • No. 52: Unsure what he means here. Could this mean "When implementing an interface explicitly in a non-sealed class, consider providing the implementation in a protected virtual method that can be overridden"? (See Programming .NET Components, 2nd Edition, end of chapter "Interfaces and Class Hierarchies".)
      • No. 73: I suppose this is about providing a "where" clause when using generics, but why is this bad on an interface?
      • No. 74: I suppose this is about providing a "where" clause when using generics, but why is this bad on a delegate?

  • How to understand existing projects

    - by John
    Hi. I am a trainee developer and have been writing .NET applications for about a year now. Most of the work I have done has involved building new applications (mainly web apps) from scratch, and I have been given more or less full control over the software design. This has been a great experience; however, as a trainee developer, my confidence that the approaches I have taken are the best is minimal. Ideally I would love to collaborate with more experienced developers (I find this the best way to learn), but in the company I work for, developers tend to work in isolation (a great shame for me).

    Recently I decided that a good way to learn more about how experienced developers approach their designs might be to explore some open source projects. I found myself a little overwhelmed by the projects I looked at. With my level of experience, it was hard to understand the body of code I faced.

    My question is a slightly fuzzy one: how do developers approach the task of understanding a new medium-to-large-scale project? I found myself poring over lots of code and struggling to see the wood for the trees. At any one time I felt that I could understand a small portion of the system but not see how it all fits together. Do others get this same feeling? If so, what approaches do you take to understanding the project? Do you have any other advice about how to learn design best practices? Any advice will be very much appreciated. Thank you.

  • As an Agile Java developer, what should I be looking for when hiring a C++ developer?

    - by agoudzwaard
    I come from an effective team of Agile Java developers. We've had a lot of success in hiring more people like ourselves - people passionate about technology with experience primarily in the Agile Java/J2EE space. We're looking to hire our first C++ developer to serve as an on-shore resource for maintaining and adding to the C++ portion of our code base. Up until now, the entirety of our C++ development has been done out of an off-shore location. We consider our interview process to be fairly thorough:

      • A phone screen centered on Object-Oriented Programming and Java
      • A non-trivial at-home code project using Java
      • An in-person interview covering technical and behavioral competency

    We look for a demonstration of Agile best practices (expressive code, test-driven development, continuous integration) throughout the entire process; however, there is a common conception that Agility is primarily practiced by Java developers. If we retrofit our interview process for C++, should we still expect Agile qualities when interviewing for a C++ role? I'm asking on behalf of a team that has worked with Java too long to know what a good C++ developer looks like. Specifically, we're looking to answer the following questions:

      • Can we expect a demonstrated understanding of OO design and Separation of Concerns?
      • In the code project we want the candidate to write unit tests. Would a good C++ developer be surprised by this expectation?
      • Are there any "extra" competencies we can look for? For example, with Java developers we always look for a familiarity with Dependency Injection.

  • `enable_shared_from_this` has a non-virtual destructor

    - by Shtééf
    I have a pet project with which I experiment with new features of the upcoming C++0x standard. While I have experience with C, I'm fairly new to C++. To train myself into best practices (besides reading a lot), I have enabled some strict compiler parameters (using GCC 4.4.1):

        -std=c++0x -Werror -Wall -Winline -Weffc++ -pedantic-errors

    This has worked fine for me; until now, I have been able to resolve all obstacles. However, I have a need for enable_shared_from_this, and this is causing me problems. I get the following warning (an error, in my case) when compiling my code (probably triggered by -Weffc++):

        base class 'class std::enable_shared_from_this<Package>' has a non-virtual destructor

    So basically, I'm a bit bugged by this implementation of enable_shared_from_this, because:

      1. A destructor of a class that is intended for subclassing should always be virtual, IMHO.
      2. The destructor is empty; why have it at all? I can't imagine anyone would want to delete their instance by reference to enable_shared_from_this.

    But I'm looking for ways to deal with this, so my question really is: is there a proper way to deal with this? And am I correct in thinking that this destructor is bogus, or is there a real purpose to it?

  • Drools - Doing Complex Stuff inside a Rule Condition or Consequence

    - by mfcabrera
    Hello. In my company we are planning to use Drools (a BRE) for a couple of projects, and we are now trying to define some best practices. My question is: what should and shouldn't be done inside a rule condition/consequence, given that we can write Java directly or call methods (for example, on a global object in the working memory)?

    Example: given a rule that evaluates whether a generic object (e.g. Person) has a property set to true, where that specific property can only be determined by going to the database and fetching the information, we have two ways of implementing it.

    Alternative A:

      1. Go to the database and fetch the object property (true/false, a code)
      2. Insert the object into the working memory
      3. Evaluate the rule

    Alternative B:

      1. Insert a global object that has a method which connects to the database and checks the property for a given object
      2. Insert the object to evaluate into the working memory
      3. In the rule, call the global object and perform the access to the database

    Which of these is considered better? I really like A, but sometimes B is more straightforward; however, what would happen if something like a database exception is raised? I have seen alternative B implemented in the Drools 5.0 book from Packt Publishing; however, they use mocking and don't talk about the actual implications of going to the database at all. Thank you.
