Search Results

Search found 13692 results on 548 pages for 'bad practices'.


  • Are ASCII diagrams worth my time?

    - by Jesse Stimpson
    Are ASCII diagrams within source code worth the time they take to create? I could create a bitmap diagram much faster, but images are much more difficult to inline in a source file (until VS2010). For the record, I'm not talking about decorative ASCII art. Here's an example of a diagram I recently created for my code that I probably could have constructed in half the time in MS Paint.

        Scenario A:
                             v
        (U)_________________(N)_______<--(P)    Legend:
          '                / |                  J = ...
           '             /   |                  P = ...
            '          /d    |                  U = ...
             '       /       |                  v = ...
              '    /         |                  d = ...
               '/            |                  N = ...
               (J)           |
                |            |
                |____________|

  • Creating an interface and swappable implementations in python

    - by Blankman
    Hi, would it be possible to create a class interface in Python and various implementations of that interface? Example: I want to create a class for POP3 access (and all its methods etc.). If I go with a commercial component, I want to wrap it so it adheres to a contract. In the future, if I want to use another component or code my own, I want to be able to swap things out and not have things very tightly coupled. Is that possible? I'm new to Python.
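
    A minimal sketch of one common way to do this in Python, using the standard abc module to define the contract; the POP3 class and method names below are illustrative assumptions, not from the question, though poplib itself is a real standard-library module:

        import abc
        import poplib

        class Pop3Client(abc.ABC):
            """Contract every POP3 implementation must honour."""

            @abc.abstractmethod
            def connect(self, host, user, password):
                ...

            @abc.abstractmethod
            def fetch_message_count(self):
                ...

        class PoplibPop3Client(Pop3Client):
            """Implementation backed by the standard library's poplib module."""

            def connect(self, host, user, password):
                self._conn = poplib.POP3_SSL(host)
                self._conn.user(user)
                self._conn.pass_(password)

            def fetch_message_count(self):
                count, _size = self._conn.stat()
                return count

        def report_inbox_size(client):
            # Depends only on the Pop3Client contract, so a commercial or
            # home-grown implementation can be swapped in without changing this.
            return "You have %d messages" % client.fetch_message_count()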

  • Best practice on a for loop's condition

    - by guest
    What is considered best practice in this case?

        for (i = 0; i < array.length(); ++i)

    or

        for (i = array.length(); i > 0; --i)

    Assuming I don't care about the direction of iteration, but only about covering the full length of the array, and that I don't plan to alter the array's size in the loop body: will array.length() be treated as a constant by the compiler? If not, then the second approach should be the one to go for.
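
    The question is language-agnostic, so here is a rough Python sketch of the underlying idea only: hoisting the length out of the loop condition so it is evaluated once. In Python len() is cheap anyway, and handle() is a hypothetical stand-in for the loop body:

        def process_all(items):
            # Variant 1: the length test is re-evaluated on every iteration.
            i = 0
            while i < len(items):
                handle(items[i])
                i += 1

        def process_all_hoisted(items):
            # Variant 2: hoist the length once, relying on the size not
            # changing inside the loop body; same result, one fewer call
            # per pass through the loop.
            n = len(items)
            for i in range(n):
                handle(items[i])

        def handle(item):
            print(item)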

  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of web applications that are centered around similar data. The architecture I decided on at the time was that each application would have its own database and web root. Each application maintains a connection pool to its own database and to a central database for shared data (logins, etc.).

    A co-worker has been positing that this strategy will not scale, because having so many different connection pools is not scalable, and that we should refactor the database so that all of the different applications use a single central database; any modifications that may be unique to a system would be reflected in that one database, and everything would use a single pool managed by Tomcat. He has posited that there is a lot of "meta data" that goes back and forth across the network to maintain a connection pool.

    My understanding is that, with proper tuning so that we use only as many connections as necessary across the different pools (low-volume apps getting fewer connections, high-volume apps getting more, etc.), the number of pools doesn't matter compared to the number of connections; more formally, the difference in overhead required to maintain 3 pools of 10 connections is negligible compared to 1 pool of 30 connections.

    The reasoning behind initially breaking the systems into a one-app-one-database design was that there are likely to be differences between the apps and that each system could make modifications to its schema as needed. Similarly, it eliminated the possibility of one system's data bleeding through into other apps.

    Unfortunately there is no strong leadership in the company to make a hard decision. Although my co-worker is backing up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connection pools versus one large database/connection pool.
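
    As a back-of-the-envelope illustration of the sizing argument (not of Tomcat's actual pool implementation), here is a small Python sketch in which the total connection count, not the number of pools, is what the database ends up seeing; the numbers and the Pool class are hypothetical:

        from dataclasses import dataclass

        @dataclass
        class Pool:
            app: str
            max_connections: int

        # Option A: one pool per application, tuned to each app's volume.
        per_app_pools = [
            Pool("reports", 5),    # low-volume app
            Pool("orders", 15),    # high-volume app
            Pool("admin", 10),
        ]

        # Option B: one shared pool fronting a single central database.
        shared_pool = [Pool("central", 30)]

        def total_connections(pools):
            # The database mostly cares about this number, not how many
            # pools it happens to be split across.
            return sum(p.max_connections for p in pools)

        assert total_connections(per_app_pools) == total_connections(shared_pool)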

  • Is it weird or strange to make multiple WCF Calls to build a ViewModel before presenting it?

    - by Nate Bross
    Am I doing something wrong if I need code like this in a controller? Should I be doing something differently?

        public ActionResult Details(int id)
        {
            var svc = new ServiceClient();
            var model = new MyViewModel();
            model.ObjectA = svc.GetObjectA(id);
            model.ObjectB = svc.GetObjectB(id);
            model.ObjectC = svc.GetObjectC(id);
            return View(model);
        }

    The reason I ask is that I've got Linq-To-Sql on the back end and a WCF service which exposes functionality through a set of DTOs which are NOT the Linq-To-Sql generated classes and thus do not have the parent/child properties; but in the detail view, I would like to see some of the parent/child data.
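
    One alternative sometimes suggested for this situation is to aggregate the fine-grained calls behind a single facade operation so the view model is built in one round trip. A language-neutral Python sketch of that shape, with hypothetical service and DTO names (this is not the WCF API):

        from dataclasses import dataclass

        @dataclass
        class DetailsViewModel:
            object_a: dict
            object_b: dict
            object_c: dict

        class DetailsFacade:
            """Bundles the three fine-grained service calls into one operation."""

            def __init__(self, service):
                self._svc = service

            def get_details(self, item_id):
                # One entry point for the view; the three calls (and any
                # parent/child lookups) stay behind the service boundary.
                return DetailsViewModel(
                    object_a=self._svc.get_object_a(item_id),
                    object_b=self._svc.get_object_b(item_id),
                    object_c=self._svc.get_object_c(item_id),
                )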

  • How to work with a common utils project

    - by ais
    For example, I have a project Common.Utils.csproj and use it in all other projects. I can either store its (Utils) sources in one repository, modify it only there, register the DLL in the GAC and consume it as a DLL from the other projects; or I can clone the source wherever I need it, include the project in the solution, use it as source and push modifications back. So, what is the best practice?

  • Pattern for a database wrapper in Java

    - by Space_C0wb0y
    I am currently writing a Java class that wraps an SQLite database. This class has two ways to be instantiated: open an existing database, or create a new database. This is what I came up with:

        public class SQLiteDatabaseWrapper {
            public static SQLiteDatabaseWrapper openExisting(File PathToDB) {
                return new SQLiteDatabaseWrapper(PathToDB);
            }

            public static SQLiteDatabaseWrapper createNew(File PathToDB) {
                CreateAndInitializeNewDatabase(PathToDB);
                return new SQLiteDatabaseWrapper(PathToDB);
            }

            private SQLiteDatabaseWrapper(File PathToDB) {
                // Open connection and setup wrapper
            }
        }

    Is this the way to go in Java, or is there any other best practice for this situation?
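
    For comparison, the same pattern of a non-public constructor plus named factory methods carries over to other languages too; a minimal Python sketch using classmethods, where the schema-initialisation helper is a hypothetical stand-in:

        import sqlite3

        class SQLiteDatabaseWrapper:
            def __init__(self, path):
                # Kept "private" by convention: callers use the factories below.
                self._conn = sqlite3.connect(path)

            @classmethod
            def open_existing(cls, path):
                return cls(path)

            @classmethod
            def create_new(cls, path):
                wrapper = cls(path)
                wrapper._initialize_schema()  # hypothetical setup step
                return wrapper

            def _initialize_schema(self):
                self._conn.execute("CREATE TABLE IF NOT EXISTS example (id INTEGER)")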

  • In which controller do you put the CRUD for the child part of a relationship?

    - by uriDium
    I am using ASP.Net MVC, but this probably applies to all MVC patterns in general. My problem: for example, I have companies, and each company has a list of contacts. When I have selected a company I can see its details and a list of the contacts for that company. When I want to add a new contact for that company, should the implementation of that action go into the company controller as an "AddContact" action, or should it go into the contact controller as a "New" action with the company ID passed in the URL? What is the usual way of dealing with this sort of thing in ASP.Net MVC? Is there a better strategy?
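
    For comparison, many web frameworks express this as a nested resource route owned by the child's controller, with the parent id carried in the URL. A small Flask-style Python sketch of that shape (the routes and helpers are illustrative assumptions, not ASP.NET MVC):

        from flask import Flask, request, jsonify

        app = Flask(__name__)

        # The contact "controller" owns contact creation, but the route nests
        # the parent so the company id still arrives in the URL.
        @app.route("/companies/<int:company_id>/contacts", methods=["POST"])
        def create_contact(company_id):
            data = request.get_json()
            contact_id = save_contact(company_id, data)  # hypothetical persistence call
            return jsonify(company_id=company_id, contact_id=contact_id), 201

        def save_contact(company_id, data):
            # Stand-in for real persistence.
            return 42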

  • Nested Array with one foreach Loop?

    - by streetparade
    I need to have access to an array which looks like this:

        Array
        (
            [0] => Array
                (
                    [54] => Array
                        (
                            [test] => 54
                            [tester] => result
                        )
                )
        )

        foreach ($array as $key => $value) {
            echo $key;   // prints 0
            echo $value; // prints Array
            /* Now I can iterate through $value, but I don't want to solve it that way.
               Example:
               foreach ($value as $k => $v) {
                   echo $k; // prints test
                   echo $v; // prints 54
               }
            */
        }

    How can I iterate just once to get the values of test and tester? I hope I could explain my problem clearly.
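
    A language-neutral Python sketch of the single-pass idea, flattening the nesting inside one loop with a generator expression; the dictionary mirrors the array shown in the question:

        data = {0: {54: {"test": 54, "tester": "result"}}}

        # One visible loop: the generator walks both outer levels, so the
        # loop body only ever sees the innermost key/value pairs.
        for key, value in ((k, v)
                           for outer in data.values()
                           for inner in outer.values()
                           for k, v in inner.items()):
            print(key, value)   # prints: test 54, then: tester result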

  • Haskell function composition (.) and function application ($) idioms: correct use.

    - by Robert Massaioli
    I have been reading Real World Haskell and I am nearing the end, but a matter of style has been niggling at me to do with the (.) and ($) operators. When you write a function that is a composition of other functions you write it like:

        f = g . h

    But when you apply something to the end of those functions I write it like this:

        k = a $ b $ c $ value

    But the book would write it like this:

        k = a . b . c $ value

    Now to me they look functionally equivalent; they do the exact same thing in my eyes. However, the more I look, the more I see people writing their functions in the manner that the book does: compose with (.) first and then only at the end use ($) to append a value to evaluate the lot (nobody does it with many dollar compositions). Is there a reason for using the book's way that is much better than using all ($) symbols? Or is there some best practice here that I am not getting? Or is it superfluous and I shouldn't be worrying about it at all? Thanks.

  • Documentation style: how do you differentiate variable names from the rest of the text within a comment?

    - by Alix
    Hi, this is a quite superfluous and uninteresting question, I'm afraid, but I always wonder about this. When you're commenting code with inline comments (as opposed to comments that will appear in the generated documentation) and the name of a variable appears in the comment, how do you differentiate it from normal text? E.g.:

        // Try to parse type.
        parsedType = tryParse(type);

    In the comment, "type" is the name of the variable. Do you mark it in any way to signify that it's a symbol and not just part of the comment's text? I've seen things like this:

        // Try to parse "type".
        // Try to parse 'type'.
        // Try to parse *type*.
        // Try to parse <type>.
        // Try to parse [type].

    And also:

        // Try to parse variable type.

    (I don't think the last one is very helpful; it's a bit confusing; you could think "variable" is an adjective there.) Do you have any preference? I find that I need to use some kind of marker; otherwise the comments are sometimes ambiguous, or at least force you to reread them when you realise a particular word in the comment was actually the name of a variable. (In comments that will appear in the documentation I use the appropriate tags for the generator, of course: @code, <code></code>, etc.) Thanks!

  • Why copying and pasting code is dangerous

    - by Yigang Wu
    Sometimes my boss complains and asks why we need so long to implement a feature: the feature has already been implemented in another application before, so you just need to copy and paste the code from there, and the cost should be low. It's a really hard question to answer, because copying and pasting code is not an easy thing from my point of view. Do you have any good arguments I can use to explain this to a boss who doesn't know technology?

  • Why do I need to give my options a value attribute in my dropdown? (jQuery)

    - by Alex
    So far in my web development experience, I've noticed that almost all web developers/designers choose to give the options in a select a value, like so:

        <select name="foo">
            <option value="bar">BarCheese</option>
            <!-- etc. -->
            <!-- etc. -->
        </select>

    Is this because it is best practice to do so? I ask because I have done a lot of work with jQuery and dropdowns lately, and sometimes I get really annoyed when I have to check something like:

        $('select[name=foo]').val() == "bar"

    To me, that often seems less clear than just being able to check the val() against BarCheese. So why is it that most web developers/designers specify a value attribute instead of just letting the option's visible text be its value?

  • Continuous integration with multiple branch development

    - by ryanprayogo
    In the project that I'm working on, we are using SVN with a 'Stable Trunk' strategy. What that means is that for each bug that is found, QA opens a bug ticket and assigns it to a developer. The developer fixes that bug and checks it into a branch off trunk (let's call this the bug branch), and that branch will only contain fixes for that particular bug ticket. When we decide to do a release, for each bug fix that we want to release to the customer, a developer merges the fixes from the relevant bug branches into trunk and we proceed with the normal QA cycle. The problem is that we use trunk as the codebase for our CI job (Hudson, specifically); therefore, commits to the bug branches miss the daily build until they get merged to trunk when we decide to release the new version of the software. Obviously, that defeats the purpose of having CI. What is the proper way to fix this issue?

  • How to avoid writing a lot of repetitive code when mapping?

    - by JPCF
    I have a data access layer (DAL) using Entity Framework, and I want to use AutoMapper to communicate with the upper layers. I will have to map data transfer objects (DTOs) to entities as the first operation in every method, process my inputs, then map back from entities to DTOs. What would you do to avoid writing this code every time? As an example, see this:

        // This is a common method in my DAL
        public CarDTO getCarByOwnerAndCreditStatus(OwnerDTO ownerDto, CreditDTO creditDto)
        {
            // I want to automate this code on all methods similar to this
            Mapper.CreateMap<OwnerDTO, Owner>();
            Mapper.CreateMap<CreditDTO, Credit>();
            Owner owner = Mapper.Map<Owner>(ownerDto);
            Credit credit = Mapper.Map<Credit>(creditDto);

            // ... Some code processing the mapped DTOs

            // I want to automate this code on all methods similar to this
            Mapper.CreateMap<Car, CarDTO>();
            CarDTO car = Mapper.Map<CarDTO>(ownedCar);
            return car;
        }
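
    One common way to cut this repetition is to register all the maps once at startup and wrap the conversion in a small generic helper, so each DAL method only states the target type. A language-neutral Python sketch of that shape, using a hypothetical registry rather than the AutoMapper API:

        # A tiny, hypothetical mapping registry: configure once, reuse everywhere.
        _MAPPERS = {}

        def register(source_type, target_type, convert):
            _MAPPERS[(source_type, target_type)] = convert

        def map_to(target_type, source):
            return _MAPPERS[(type(source), target_type)](source)

        class OwnerDTO:  # stand-in for the real DTO class
            def __init__(self, name): self.name = name

        class Owner:     # stand-in for the real entity class
            def __init__(self, name): self.name = name

        # One-time configuration, e.g. at application startup.
        register(OwnerDTO, Owner, lambda dto: Owner(dto.name))

        # Each data-access method now just converts, with no per-method setup.
        def get_owner_entity(owner_dto):
            return map_to(Owner, owner_dto)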

  • Preprocessor #define vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (RGB value to BGR value conversion program) in C, and I realised that a function that converts from RGB to BGR can not only perform the conversion but also the inversion. Obviously that means I don't really need two functions rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example:

        int rgb2bgr (const int rgb);

        /*
         * Should I do this because it allows the compiler to issue
         * appropriate error messages using the proper function name,
         * not to mention possible debugging benefits?
         */
        int (*bgr2rgb) (const int bgr) = rgb2bgr;

        /*
         * Or should I do this since it is merely a convenience
         * and they're really the same function anyway?
         */
        #define bgr2rgb(bgr) (rgb2bgr (bgr))

    I'm not necessarily looking for a change in execution efficiency, as it's more of a subjective question out of curiosity. I am well aware of the fact that type safety is neither lost nor gained using either method. Would the function pointer merely be a convenience, or are there more practical benefits to be gained of which I am unaware?

  • Best practice for extending a legacy site with another server-side programming platform

    - by Andrew Florko
    The company I work for has a site developed 6-8 years ago by a team that was enthusiastic enough to use their own private PHP-based CMS. I have to put dynamic data from one intranet company database on this site in one week: 2-3 pages. I contacted the company site administrator and she showed me the administrative part: the CMS only allows inserting HTML blocks and managing the site map (the site is deployed on a machine that is inside the company and fully accessible and upgradeable). I'm not a PHP guy and I don't want to dive into a legacy CMS engine that hardly anyone has ever heard of. I also don't want to contact the developer team, because I'm not sure they are still around and capable enough to extend this old site, and it would take too much time anyway. I am about to deploy a helper ASP.NET site on IIS with the 2-3 required pages and reference the helper site via an iframe from the present site. The new pages will also allow downloading some dynamic content from the present site. Is this OK, and what are the pitfalls of the iframe approach?

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that I was having problems with my project as a result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder (yes, silly me). After a while, I managed to get all the .jar files where they belong, and they come to a whopping 12.5MB, and there are more than 30 of them! That concerns me, though it probably and hopefully shouldn't.

    How does Java operate in terms of these JAR files? They do take up quite a bit of hard drive space, taking into account that it's compressed and compiled code, so they could fill up a lot of RAM very quickly. My questions are:

    - Does Java load an entire .jar file into memory when, say, a class in that .jar is instantiated? What about the stuff in the .jar that never gets used?
    - Do .jars get cached somehow, for optimized application performance?
    - When a single .jar is loaded, I understand that the thing sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance), unlike PHP where objects are created on the fly with each request. Is this assumption correct?
    - When using Spring, given that I had to include all those fiddly .jars, wouldn't I be better off just using native Java, with, say, at least an ORM solution like Hibernate? So far, Spring has taken extra time to configure, extra hard drive space, extra memory and CPU consumption, so I'm concerned that the framework is going to cost too much application performance just to get, for example, IoC implemented with my BlazeDS server.

    There is still an ORM, a unit testing framework and bits and pieces here and there to come. It's just so easy to bloat up a project quickly and irresponsibly. Where do I draw the line?

  • package private static member class vs. package private class

    - by Helper Method
    I was writing two implementations of a linked list for an assignment: a doubly linked list and a circular doubly linked list. As the class representing a link within the list is the same in both implementations, I want to use it in both. I wonder which approach would be better: implement the Link class as a package-private static member class in the first implementation and then use that class in the second implementation, or make the Link class a package-private class of its own.

  • Getting up to speed on modern architecture

    - by Matt Thrower
    Hi, I don't have any formal qualifications in computer science; rather, I taught myself classic ASP back in the days of the dotcom boom, managed to get myself a job, and my career developed from there. I was a confident and, I think, pretty good programmer in ASP 3, but as others have observed, one of the problems with classic ASP was that it did a very good job of hiding the nitty-gritty of HTTP, so you could become quite competent as a programmer on the basis of a relatively poor understanding of the technology you were working with.

    When I moved on to .NET, at first I treated it like classic ASP, developing stand-alone applications as individual websites simply because I didn't know any better at the time. I moved jobs at this point and spent the next several years working on a single site whose architecture relied heavily on custom objects: in other words, I gained a lot of experience working with .NET as a middle-tier development tool using a quite old-fashioned approach to OO design, along the lines of the classic "car" class example that's so often used to teach OO: breaking down programs into blocks of functionality and basing your classes and methods around that. Although we worked under an Agile approach to manage the work, the whole setup was classic client/server stuff. That suited me, and I gradually got to grips with .NET and started using it far more in the manner that it should be used; I began to see the power inherent in the technology and precisely why it was so much better than good old ASP 3.

    In my latest job I have found myself suddenly dropped in at the deep end with two quite young, skilled and very cutting-edge programmers. They've built a site architecture which is modelled along the lines of a lot of stuff that is new to me and which, in truth, I'm having a lot of trouble understanding. The application is built on a cloud computing model with multi-tenancy, and the architecture is all loosely coupled, using a lot of interfaces, factories and the like. They use NHibernate a lot too. Shortly after I joined, both these guys left, and I'm now supposedly the senior developer on a system whose technology and architecture I don't really understand, and I have no one to ask questions of. Except you, the internet.

    Frankly, I feel like I've been pitched in at the deep end and I'm sinking. I'm not sure if this is because I lack the educational background to understand this stuff, if I'm simply not mathematically minded enough for modern computing (my maths was never great; my approach to design is often to simply debug until it works, then refactor until it looks neat), or whether I've simply been presented with too much of too radical a nature at once. But the only way to find out which it is is to try and learn it.

    So can anyone suggest some good places to start? Good books, tutorials or blogs? I've found a lot of internet material simply presupposes a level of understanding that I just don't have. Your advice is much appreciated. Help a middle-aged, stuck-in-the-mud developer get enthusiastic again! Please!

  • How to differentiate between exceptions I can show the user, and ones I can't?

    - by Ian Boyd
    I have some business logic that traps some logically invalid situations, e.g. trying to reverse a transaction that was already reversed. In this case the correct action is to inform the user:

        Transaction already reversed
        Cannot reverse a reversing transaction
        You do not have permission to reverse transactions
        This transaction is on a session that has already been closed
        This transaction is too old to be reversed

    The question is, how do I communicate these exceptional cases back to the calling code, so it can show the user? Do I create a separate exception for each case?

        catch (ETransactionAlreadyReversedException)
            MessageBox.Show('Transaction already reversed')
        catch (EReversingAReversingTransactionException)
            MessageBox.Show('Cannot reverse a reversing transaction')
        catch (ENoPermissionToReverseTranasctionException)
            MessageBox.Show('You do not have permission to reverse transactions')
        catch (ECannotReverseTransactionOnAlredyClosedSessionException)
            MessageBox.Show('This transaction is on a session that has already been closed')
        catch (ECannotReverseTooOldTransactionException)
            MessageBox.Show('This transaction is too old to be reversed')

    The downside of this is that when there's a new logical case to show the user, e.g.:

        Transactions created by NSL cannot be reversed

    I don't simply show the user a message; instead it leaks out as an unhandled exception, when really it should be handled with another MessageBox. The alternative is to create a single exception class, EReverseTransactionException, with the understanding that any exception of this type is a logical check that should be handled with a message box:

        catch (EReverseTransactionException)

    But it's still understood that any other exceptions, ones that involve, for example, a memory ECC parity error, continue unhandled. In other words, I don't convert all errors that can be thrown by the ReverseTransaction() method into EReverseTransactionException, only the ones that are logically invalid because of what the user did.
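
    The single-base-class option described above translates directly to most languages: derive every user-facing business-rule failure from one marker type, show its message, and let everything else propagate. A minimal Python sketch of that idea, with hypothetical class and function names:

        class UserFacingError(Exception):
            """Base class for failures whose message is safe to show the user."""

        class TransactionAlreadyReversedError(UserFacingError):
            pass

        class NoPermissionToReverseError(UserFacingError):
            pass

        def on_reverse_clicked(transaction):
            try:
                reverse_transaction(transaction)   # hypothetical business call
            except UserFacingError as err:
                show_message_box(str(err))         # one handler covers new cases too
            # Anything else (I/O errors, bugs, hardware faults) keeps propagating.

        def reverse_transaction(transaction):
            raise TransactionAlreadyReversedError("Transaction already reversed")

        def show_message_box(text):
            print(text)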
